# Directory Structure
```
├── .gitignore
├── .python-version
├── addon.py
├── LICENSE
├── main.py
├── pyproject.toml
├── README.md
├── src
│   └── blender_open_mcp
│       ├── __init__.py
│       └── server.py
├── tests
│   ├── test_addon.py
│   └── test_server.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
1 | 3.12.11
2 |
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
1 | # Python-generated files
2 | __pycache__/
3 | *.py[oc]
4 | build/
5 | dist/
6 | wheels/
7 | *.egg-info
8 |
9 | # Virtual environments
10 | .venv
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # blender-open-mcp
2 |
3 | `blender-open-mcp` is an open source project that integrates Blender with local AI models (via [Ollama](https://ollama.com/)) using the Model Context Protocol (MCP). This allows you to control Blender using natural language prompts, leveraging the power of AI to assist with 3D modeling tasks.
4 |
5 | ## Features
6 |
7 | - **Control Blender with Natural Language:** Send prompts to a locally running Ollama model to perform actions in Blender.
8 | - **MCP Integration:** Uses the Model Context Protocol for structured communication between the AI model and Blender.
9 | - **Ollama Support:** Designed to work with Ollama for easy local model management.
10 | - **Blender Add-on:** Includes a Blender add-on to provide a user interface and handle communication with the server.
11 | - **PolyHaven Integration (Optional):** Download and use assets (HDRIs, textures, models) from [PolyHaven](https://polyhaven.com/) directly within Blender via AI prompts.
12 | - **Basic 3D Operations:**
13 |   - Get scene and object info
14 |   - Create primitives
15 |   - Modify and delete objects
16 |   - Apply materials
17 | - **Render Support:** Render images with the `render_image` tool and return the rendered output to the model.
18 |
19 | ## Installation
20 |
21 | ### Prerequisites
22 |
23 | 1. **Blender:** Blender 3.0 or later. Download from [blender.org](https://www.blender.org/download/).
24 | 2. **Ollama:** Install from [ollama.com](https://ollama.com/), following OS-specific instructions.
25 | 3. **Python:** Python 3.10 or later.
26 | 4. **uv:** Install using `pip install uv`.
27 | 5. **Git:** Required for cloning the repository.
28 |
29 | ### Installation Steps
30 |
31 | 1. **Clone the Repository:**
32 |
33 | ```bash
34 | git clone https://github.com/dhakalnirajan/blender-open-mcp.git
35 | cd blender-open-mcp
36 | ```
37 |
38 | 2. **Create and Activate a Virtual Environment (Recommended):**
39 |
40 | ```bash
41 | uv venv
42 | source .venv/bin/activate # On Linux/macOS
43 | .venv\Scripts\activate # On Windows
44 | ```
45 |
46 | 3. **Install Dependencies:**
47 |
48 | ```bash
49 | uv pip install -e .
50 | ```
51 |
52 | 4. **Install the Blender Add-on:**
53 |
54 | - Open Blender.
55 | - Go to `Edit -> Preferences -> Add-ons`.
56 | - Click `Install...`.
57 | - Select the `addon.py` file from the `blender-open-mcp` directory.
58 | - Enable the "Blender MCP" add-on.
59 |
60 | 5. **Download an Ollama Model (if not already installed):**
61 |
62 | ```bash
63 | ollama run llama3.2
64 | ```
65 |
66 | *(Other models, such as `Gemma3`, can also be used.)*
67 |
68 | ## Setup
69 |
70 | 1. **Start the Ollama Server:** Ensure Ollama is running in the background.
71 |
72 | 2. **Start the MCP Server:**
73 |
74 | ```bash
75 | blender-open-mcp
76 | ```
77 |
78 | Or,
79 |
80 | ```bash
81 | python src/blender_open_mcp/server.py
82 | ```
83 |
84 | By default, it listens on `http://0.0.0.0:8000`, but you can modify settings:
85 |
86 | ```bash
87 | blender-open-mcp --host 127.0.0.1 --port 8001 --ollama-url http://localhost:11434 --ollama-model llama3.2
88 | ```
89 |
90 | 3. **Start the Blender Add-on Server:**
91 |
92 | - Open Blender and the 3D Viewport.
93 | - Press `N` to open the sidebar.
94 | - Find the "Blender MCP" panel.
95 | - Click "Start MCP Server".
96 |
97 | ## Usage
98 |
99 | Interact with `blender-open-mcp` using the `mcp` command-line tool:
100 |
101 | ### Example Commands
102 |
103 | - **Basic Prompt:**
104 |
105 | ```bash
106 | mcp prompt "Hello BlenderMCP!" --host http://localhost:8000
107 | ```
108 |
109 | - **Get Scene Information:**
110 |
111 | ```bash
112 | mcp tool get_scene_info --host http://localhost:8000
113 | ```
114 |
115 | - **Create a Cube:**
116 |
117 | ```bash
118 | mcp prompt "Create a cube named 'my_cube'." --host http://localhost:8000
119 | ```
120 |
121 | - **Render an Image:**
122 |
123 | ```bash
124 | mcp prompt "Render the image." --host http://localhost:8000
125 | ```
126 |
127 | - **Using PolyHaven (if enabled):**
128 |
129 | ```bash
130 | mcp prompt "Download a texture from PolyHaven." --host http://localhost:8000
131 | ```
132 |
133 | ## Available Tools
134 |
135 | | Tool Name | Description | Parameters |
136 | | -------------------------- | -------------------------------------- | ----------------------------------------------------- |
137 | | `get_scene_info` | Retrieves scene details. | None |
138 | | `get_object_info` | Retrieves information about an object. | `object_name` (str) |
139 | | `create_object` | Creates a 3D object. | `type`, `name`, `location`, `rotation`, `scale` |
140 | | `modify_object` | Modifies an object’s properties. | `name`, `location`, `rotation`, `scale`, `visible` |
141 | | `delete_object` | Deletes an object. | `name` (str) |
142 | | `set_material` | Assigns a material to an object. | `object_name`, `material_name`, `color` |
143 | | `render_image` | Renders an image. | `file_path` (str) |
144 | | `execute_blender_code` | Executes Python code in Blender. | `code` (str) |
145 | | `get_polyhaven_categories` | Lists PolyHaven asset categories. | `asset_type` (str) |
146 | | `search_polyhaven_assets` | Searches PolyHaven assets. | `asset_type`, `categories` |
147 | | `download_polyhaven_asset` | Downloads a PolyHaven asset. | `asset_id`, `asset_type`, `resolution`, `file_format` |
148 | | `set_texture` | Applies a downloaded texture. | `object_name`, `texture_id` |
149 | | `set_ollama_model` | Sets the Ollama model. | `model_name` (str) |
150 | | `set_ollama_url` | Sets the Ollama server URL. | `url` (str) |
151 | | `get_ollama_models` | Lists available Ollama models. | None |
152 |
153 | ## Troubleshooting
154 |
155 | If you encounter issues:
156 |
157 | - Ensure Ollama and the `blender-open-mcp` server are running.
158 | - Check Blender’s add-on settings.
159 | - Verify command-line arguments.
160 | - Refer to logs for error details.
161 |
162 | For further assistance, visit the [GitHub Issues](https://github.com/dhakalnirajan/blender-open-mcp/issues) page.
163 |
164 | ---
165 |
166 | Happy Blending with AI! 🚀
167 |
```
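The setup above relies on the add-on's JSON-over-TCP command channel: `addon.py` binds `localhost:9876` by default, and `server.py` connects to it, sending `{"type": ..., "params": ...}` objects and reading back `{"status": ..., "result": ...}` replies. As a quick connectivity check outside the MCP stack, here is a minimal sketch of that protocol; it assumes the add-on's "Start MCP Server" button has been clicked, and the helper name `send_blender_command` is illustrative, not part of the project.

```python
# Minimal sketch of the add-on's JSON-over-TCP protocol, for connectivity checks.
# Assumes the Blender add-on is listening on localhost:9876 (its default).
import json
import socket

def send_blender_command(command_type, params=None, host="localhost", port=9876):
    """Send one {"type": ..., "params": ...} command and return the parsed JSON reply."""
    with socket.create_connection((host, port), timeout=15.0) as sock:
        sock.sendall(json.dumps({"type": command_type, "params": params or {}}).encode("utf-8"))
        chunks = []
        while True:
            chunk = sock.recv(8192)
            if not chunk:
                break
            chunks.append(chunk)
            try:
                # The add-on replies with a single JSON object,
                # e.g. {"status": "success", "result": {...}}.
                return json.loads(b"".join(chunks).decode("utf-8"))
            except json.JSONDecodeError:
                continue  # reply incomplete, keep reading
    raise ConnectionError("connection closed before a full JSON reply arrived")

if __name__ == "__main__":
    print(send_blender_command("get_scene_info"))
```

This mirrors what `BlenderConnection.send_command` in `server.py` does, minus its timeout and reconnection handling.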
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
```python
1 | from blender_open_mcp.server import main as server_main
2 |
3 | if __name__ == "__main__":
4 |     server_main()
```
--------------------------------------------------------------------------------
/src/blender_open_mcp/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """Blender integration through the Model Context Protocol."""
2 |
3 | __version__ = "0.2.0" # Updated version
4 |
5 | # Expose key classes and functions for easier imports
6 | from .server import BlenderConnection, get_blender_connection
```
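Because `__init__.py` re-exports `BlenderConnection` and `get_blender_connection`, the connection class can also be driven directly from Python. A minimal sketch, assuming the package is installed (`uv pip install -e .`) and the Blender add-on is listening on its default `localhost:9876`:

```python
# Minimal sketch: drive the add-on through the re-exported BlenderConnection.
# Assumes the package is installed and the add-on is listening on localhost:9876.
from blender_open_mcp import BlenderConnection

conn = BlenderConnection(host="localhost", port=9876)
if conn.connect():
    try:
        # send_command returns the "result" payload of the add-on's response.
        print(conn.send_command("get_scene_info"))
        print(conn.send_command("create_object", {"type": "CUBE", "name": "demo_cube"}))
    finally:
        conn.disconnect()
```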
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
1 | [project]
2 | name = "blender-open-mcp"
3 | version = "0.2.0"
4 | description = "Blender integration with local AI models via MCP and Ollama"
5 | readme = "README.md"
6 | requires-python = ">=3.10"
7 | authors = [
8 | {name = "Nirajan Dhakal"}
9 | ]
10 | license = {text = "MIT"}
11 | classifiers = [
12 | "Programming Language :: Python :: 3",
13 | "License :: OSI Approved :: MIT License",
14 | "Operating System :: OS Independent",
15 | ]
16 | dependencies = [
17 | "mcp[cli]>=1.3.0",
18 | "httpx>=0.24.0",
19 | "ollama>=0.4.7",
20 | "requests",
21 | ]
22 |
23 | [project.scripts]
24 | blender-open-mcp = "blender_open_mcp.server:main"
25 |
26 | [build-system]
27 | requires = ["setuptools>=61.0", "wheel"]
28 | build-backend = "setuptools.build_meta"
29 |
30 | [tool.setuptools]
31 | package-dir = {"" = "src"}
32 |
33 | [project.urls]
34 | "Homepage" = "https://github.com/dhakalnirajan/blender-open-mcp"
35 | "Bug Tracker" = "https://github.com/dhakalnirajan/blender-open-mcp/issues"
```
--------------------------------------------------------------------------------
/tests/test_addon.py:
--------------------------------------------------------------------------------
```python
1 | import sys
2 | import os
3 | import unittest
4 | from unittest.mock import patch, MagicMock
5 | import tempfile
6 |
7 | # Mock bpy and its submodules before importing the addon module
8 | bpy_mock = MagicMock()
9 | bpy_mock.props = MagicMock()
10 | sys.modules['bpy'] = bpy_mock
11 | sys.modules['bpy.props'] = bpy_mock.props
12 |
13 | # Add the root directory to the path to allow imports
14 | sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
15 |
16 | # Now we can import the addon
17 | import addon
18 |
19 | class TestAddonBugs(unittest.TestCase):
20 |
21 | @patch('addon.requests.get')
22 | def test_hdri_temp_file_deleted(self, mock_requests_get):
23 | """
24 | Test that the temporary file for a downloaded HDRI is deleted.
25 | """
26 | # 1. Setup mocks
27 | # Mock requests.get to return a successful response
28 | mock_response = MagicMock()
29 | mock_response.status_code = 200
30 | mock_response.content = b"fake hdri data"
31 | mock_requests_get.return_value = mock_response
32 |
33 | # Mock the JSON response for the asset files
34 | mock_files_response = MagicMock()
35 | mock_files_response.status_code = 200
36 | mock_files_response.json.return_value = {
37 | "hdri": {
38 | "1k": {
39 | "hdr": {
40 | "url": "http://fake.url/hdri.hdr"
41 | }
42 | }
43 | }
44 | }
45 | # The first call to requests.get is for the files, the second is for the download
46 | mock_requests_get.side_effect = [mock_files_response, mock_response]
47 |
48 | # A very basic mock for bpy
49 | addon.bpy.data.images.load.return_value = MagicMock()
50 | addon.bpy.path.abspath.side_effect = lambda x: x # Return the path as is
51 |
52 | # 2. Instantiate the server from the addon
53 | server = addon.BlenderMCPServer()
54 |
55 | # 3. Call the function
56 | result = server.download_polyhaven_asset(
57 | asset_id="test_hdri",
58 | asset_type="hdris",
59 | resolution="1k",
60 | file_format="hdr"
61 | )
62 |
63 | # 4. Assertions
64 | self.assertTrue(result.get("success"))
65 |
66 | # Get the path of the temporary file that was created
67 | # The path is passed to bpy.data.images.load
68 | self.assertTrue(addon.bpy.data.images.load.called)
69 | temp_file_path = addon.bpy.data.images.load.call_args[0][0]
70 |
71 | # Assert that the temporary file no longer exists
72 | self.assertFalse(os.path.exists(temp_file_path), f"Temporary file was not deleted: {temp_file_path}")
73 |
74 | if __name__ == '__main__':
75 | unittest.main()
76 |
```
--------------------------------------------------------------------------------
/tests/test_server.py:
--------------------------------------------------------------------------------
```python
1 | import sys
2 | import os
3 | # Add src to the path to allow imports
4 | sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../src')))
5 |
6 | import unittest
7 | from unittest.mock import patch, MagicMock, AsyncMock
8 | import asyncio
9 | import tempfile
10 | import base64
11 | from mcp.server.fastmcp import Context, Image
12 |
13 | # Now import the server module
14 | from blender_open_mcp import server as server_module
15 |
16 | class TestServerTools(unittest.TestCase):
17 |
18 | def test_set_ollama_url(self):
19 | """Test the set_ollama_url function."""
20 | ctx = Context()
21 | new_url = "http://localhost:12345"
22 |
23 | # Run the async function
24 | result = asyncio.run(server_module.set_ollama_url(ctx, new_url))
25 |
26 | self.assertEqual(result, f"Ollama URL set to: {new_url}")
27 | self.assertEqual(server_module._ollama_url, new_url)
28 |
29 | def test_render_image_bug(self):
30 | """
31 | Test the render_image tool to demonstrate the bug.
32 | This test will fail before the fix and pass after.
33 | """
34 | # Create a mock context object with an add_image method
35 | ctx = MagicMock()
36 | ctx.add_image = MagicMock()
37 | # also mock get_image to return the added image
38 | def get_image():
39 | if ctx.add_image.call_args:
40 | return ctx.add_image.call_args[0][0]
41 | return None
42 | ctx.get_image = get_image
43 |
44 |
45 | # 1. Create a dummy image file to represent the rendered output
46 | with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp_file:
47 | tmp_file.write(b"fake_image_data")
48 | correct_image_path = tmp_file.name
49 |
50 | # 2. Mock the Blender connection
51 | mock_blender_conn = MagicMock()
52 |
53 | # 3. Configure the mock send_command to return the correct path
54 | # This simulates the behavior of the addon
55 | mock_blender_conn.send_command.return_value = {
56 | "rendered": True,
57 | "output_path": correct_image_path,
58 | "resolution": [1920, 1080]
59 | }
60 |
61 | # 4. Patch get_blender_connection to return our mock
62 | with patch('blender_open_mcp.server.get_blender_connection', return_value=mock_blender_conn):
63 |
64 | # 5. Call the render_image tool
65 | # The bug is that it uses "render.png" instead of `correct_image_path`
66 | result = asyncio.run(server_module.render_image(ctx, file_path="render.png"))
67 |
68 | # 6. Assertions
69 | self.assertEqual(result, "Image Rendered Successfully.")
70 |
71 | # Check if the context now has an image
72 | ctx.add_image.assert_called_once()
73 | img = ctx.add_image.call_args[0][0]
74 | self.assertIsInstance(img, Image)
75 |
76 | # Verify the image data is correct
77 | with open(correct_image_path, "rb") as f:
78 | expected_data = base64.b64encode(f.read()).decode('utf-8')
79 | self.assertEqual(img.data, expected_data)
80 |
81 | # 7. Clean up the dummy file
82 | os.remove(correct_image_path)
83 |
84 | if __name__ == '__main__':
85 | unittest.main()
86 |
```
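Both test modules are plain `unittest` cases and adjust `sys.path` themselves, so they can be collected with standard discovery from the repository root. A small runner sketch, equivalent to `python -m unittest discover -s tests`:

```python
# Sketch of a test runner, equivalent to `python -m unittest discover -s tests`
# when executed from the repository root.
import unittest

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover("tests")
    unittest.TextTestRunner(verbosity=2).run(suite)
```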
--------------------------------------------------------------------------------
/src/blender_open_mcp/server.py:
--------------------------------------------------------------------------------
```python
1 | # server.py
2 | from mcp.server.fastmcp import FastMCP, Context, Image
3 | import socket
4 | import json
5 | import asyncio
6 | import logging
7 | from dataclasses import dataclass, field
8 | from contextlib import asynccontextmanager
9 | from typing import AsyncIterator, Dict, Any, List, Optional
10 | import httpx
11 | from io import BytesIO
12 | import base64
13 | import argparse
14 | import os
15 | from urllib.parse import urlparse
16 |
17 | # Configure logging
18 | logging.basicConfig(level=logging.INFO,
19 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
20 | logger = logging.getLogger("BlenderMCPServer")
21 |
22 | @dataclass
23 | class BlenderConnection:
24 | host: str
25 | port: int
26 | sock: Optional[socket.socket] = None
27 | timeout: float = 15.0 # Added timeout as a property
28 |
29 | def __post_init__(self):
30 | if not isinstance(self.host, str):
31 | raise ValueError("Host must be a string")
32 | if not isinstance(self.port, int):
33 | raise ValueError("Port must be an int")
34 |
35 | def connect(self) -> bool:
36 | if self.sock:
37 | return True
38 | try:
39 | self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
40 | self.sock.connect((self.host, self.port))
41 | logger.info(f"Connected to Blender at {self.host}:{self.port}")
42 | self.sock.settimeout(self.timeout) # Set timeout on socket
43 | return True
44 | except Exception as e:
45 | logger.error(f"Failed to connect to Blender: {e!s}")
46 | self.sock = None
47 | return False
48 |
49 | def disconnect(self) -> None:
50 | if self.sock:
51 | try:
52 | self.sock.close()
53 | except Exception as e:
54 | logger.error(f"Error disconnecting: {e!s}")
55 | finally:
56 | self.sock = None
57 |
58 | def _receive_full_response(self, buffer_size: int = 8192) -> bytes:
59 | """Receive data with timeout using a loop."""
60 | chunks: List[bytes] = []
61 | timed_out = False
62 | try:
63 | while True:
64 | try:
65 | chunk = self.sock.recv(buffer_size)
66 | if not chunk:
67 | if not chunks:
68 | # Requirement 1b
69 | raise Exception("Connection closed by Blender before any data was sent in this response")
70 | else:
71 | # Requirement 1a
72 | raise Exception("Connection closed by Blender mid-stream with incomplete JSON data")
73 | chunks.append(chunk)
74 | try:
75 | data = b''.join(chunks)
76 | json.loads(data.decode('utf-8')) # Check if it is valid json
77 | logger.debug(f"Received response ({len(data)} bytes)")
78 | return data # Complete JSON received
79 | except json.JSONDecodeError:
80 | # Incomplete JSON, continue receiving
81 | continue
82 | except socket.timeout:
83 | logger.warning("Socket timeout during receive")
84 | timed_out = True # Set flag
85 | break # Stop listening to socket
86 | except (ConnectionError, BrokenPipeError, ConnectionResetError) as e:
87 | logger.error(f"Socket connection error: {e!s}")
88 | self.sock = None
89 | raise # re-raise to outer error handler
90 |
91 | # This part is reached if loop is broken by 'break' (only timeout case now)
92 | if timed_out:
93 | if chunks:
94 | data = b''.join(chunks)
95 | # Check if the partial data is valid JSON (it shouldn't be if timeout happened mid-stream)
96 | try:
97 | json.loads(data.decode('utf-8'))
98 | # This case should ideally not be hit if JSON was incomplete,
99 | # but if it's somehow valid, return it.
100 | logger.warning("Timeout occurred, but received data forms valid JSON.")
101 | return data
102 | except json.JSONDecodeError:
103 | # Requirement 2a
104 | raise Exception(f"Incomplete JSON data received before timeout. Received: {data[:200]}")
105 | else:
106 | # Requirement 2b
107 | raise Exception("Timeout waiting for response, no data received.")
108 |
109 | # Fallback if loop exited for a reason not covered by explicit raises inside or by timeout logic
110 | # This should ideally not be reached with the current logic.
111 | if chunks: # Should have been handled by "Connection closed by Blender mid-stream..."
112 | data = b''.join(chunks)
113 | logger.warning(f"Exited receive loop unexpectedly with data: {data[:200]}")
114 | raise Exception("Receive loop ended unexpectedly with partial data.")
115 | else: # Should have been handled by "Connection closed by Blender before any data..." or timeout
116 | logger.warning("Exited receive loop unexpectedly with no data.")
117 | raise Exception("Receive loop ended unexpectedly with no data.")
118 |
119 | except (ConnectionError, BrokenPipeError, ConnectionResetError) as e:
120 | # This handles connection errors raised from within the loop or if self.sock.recv fails
121 | logger.error(f"Connection error during receive: {e!s}")
122 | self.sock = None # Ensure socket is reset
123 | # Re-raise with a more specific message if needed, or just re-raise
124 | raise Exception(f"Connection to Blender lost during receive: {e!s}")
125 | except Exception as e:
126 | # Catch other exceptions, including our custom ones, and log them
127 | logger.error(f"Error during _receive_full_response: {e!s}")
128 | # If it's not one of the specific connection errors, it might be one of our custom messages
129 | # or another unexpected issue. Re-raise to be handled by send_command.
130 | raise
131 |
132 |
133 | def send_command(self, command_type: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
134 | if not self.sock and not self.connect():
135 | raise ConnectionError("Not connected")
136 | command = {"type": command_type, "params": params or {}}
137 | try:
138 | logger.info(f"Sending command: {command_type} with params: {params}")
139 | self.sock.sendall(json.dumps(command).encode('utf-8'))
140 | logger.info(f"Command sent, waiting for response...")
141 | response_data = self._receive_full_response()
142 | logger.debug(f"Received response ({len(response_data)} bytes)")
143 | response = json.loads(response_data.decode('utf-8'))
144 | logger.info(f"Response status: {response.get('status', 'unknown')}")
145 | if response.get("status") == "error":
146 | logger.error(f"Blender error: {response.get('message')}")
147 | raise Exception(response.get("message", "Unknown Blender error"))
148 | return response.get("result", {})
149 |
150 | except socket.timeout:
151 | logger.error("Socket timeout from Blender")
152 | self.sock = None # reset socket connection
153 | raise Exception("Timeout waiting for Blender - simplify request")
154 | except (ConnectionError, BrokenPipeError, ConnectionResetError) as e:
155 | logger.error(f"Socket connection error: {e!s}")
156 | self.sock = None # reset socket connection
157 | raise Exception(f"Connection to Blender lost: {e!s}")
158 | except json.JSONDecodeError as e:
159 | logger.error(f"Invalid JSON response: {e!s}")
160 | if 'response_data' in locals() and response_data:
161 | logger.error(f"Raw (first 200): {response_data[:200]}")
162 | raise Exception(f"Invalid response from Blender: {e!s}")
163 | except Exception as e:
164 | logger.error(f"Error communicating with Blender: {e!s}")
165 | self.sock = None # reset socket connection
166 | raise Exception(f"Communication error: {e!s}")
167 |
168 |
169 | @asynccontextmanager
170 | async def server_lifespan(server: FastMCP) -> AsyncIterator[Dict[str, Any]]:
171 | logger.info("BlenderMCP server starting up")
172 | try:
173 | blender = get_blender_connection()
174 | logger.info("Connected to Blender on startup")
175 | except Exception as e:
176 | logger.warning(f"Could not connect to Blender on startup: {e!s}")
177 | logger.warning("Ensure Blender addon is running before using resources")
178 | yield {}
179 | global _blender_connection
180 | if _blender_connection:
181 | logger.info("Disconnecting from Blender on shutdown")
182 | _blender_connection.disconnect()
183 | _blender_connection = None
184 | logger.info("BlenderMCP server shut down")
185 |
186 | # Initialize MCP server instance globally
187 | mcp = FastMCP(
188 | "BlenderOpenMCP",
189 | lifespan=server_lifespan
190 | )
191 |
192 | _blender_connection = None
193 | _polyhaven_enabled = False
194 | # Default values (will be overridden by command-line arguments)
195 | _ollama_model = ""
196 | _ollama_url = "http://localhost:11434"
197 |
198 | def get_blender_connection() -> BlenderConnection:
199 | global _blender_connection, _polyhaven_enabled
200 | if _blender_connection:
201 | try:
202 | result = _blender_connection.send_command("get_polyhaven_status")
203 | _polyhaven_enabled = result.get("enabled", False)
204 | return _blender_connection
205 | except Exception as e:
206 | logger.warning(f"Existing connection invalid: {e!s}")
207 | try:
208 | _blender_connection.disconnect()
209 | except:
210 | pass
211 | _blender_connection = None
212 | if _blender_connection is None:
213 | _blender_connection = BlenderConnection(host="localhost", port=9876)
214 | if not _blender_connection.connect():
215 | logger.error("Failed to connect to Blender")
216 | _blender_connection = None
217 | raise Exception("Could not connect to Blender. Addon running?")
218 | logger.info("Created new persistent connection to Blender")
219 | return _blender_connection
220 |
221 |
222 | async def query_ollama(prompt: str, context: Optional[List[Dict]] = None, image: Optional[Image] = None) -> str:
223 | global _ollama_model, _ollama_url
224 |
225 | payload = {"prompt": prompt, "model": _ollama_model, "format": "json", "stream": False}
226 | if context:
227 | payload["context"] = context
228 | if image:
229 | if image.data:
230 | payload["images"] = [image.data]
231 | elif image.path:
232 | try:
233 | with open(image.path, "rb") as image_file:
234 | encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
235 | payload["images"] = [encoded_string]
236 | except FileNotFoundError:
237 | logger.error(f"Image file not found: {image.path}")
238 | return "Error: Image file not found."
239 | else:
240 | logger.warning("Image without data or path. Ignoring.")
241 |
242 | try:
243 | async with httpx.AsyncClient() as client:
244 | response = await client.post(f"{_ollama_url}/api/generate", json=payload, timeout=60.0)
245 | response.raise_for_status() # Raise HTTPStatusError for bad status
246 | response_data = response.json()
247 | logger.debug(f"Raw Ollama response: {response_data}")
248 | if "response" in response_data:
249 | return response_data["response"]
250 | else:
251 | logger.error(f"Unexpected response format: {response_data}")
252 | return "Error: Unexpected response format from Ollama."
253 |
254 | except httpx.HTTPStatusError as e:
255 | logger.error(f"Ollama API error: {e.response.status_code} - {e.response.text}")
256 | return f"Error: Ollama API returned: {e.response.status_code}"
257 | except httpx.RequestError as e:
258 | logger.error(f"Ollama API request failed: {e}")
259 | return "Error: Failed to connect to Ollama API."
260 | except Exception as e:
261 | logger.error(f"An unexpected error occurred: {e!s}")
262 | return f"Error: An unexpected error occurred: {e!s}"
263 |
264 | @mcp.prompt()
265 | async def base_prompt(context: Context, user_message: str) -> str:
266 | system_message = f"""You are a helpful assistant that controls Blender.
267 | You can use the following tools. Respond in well-formatted, valid JSON:
268 | {mcp.tools_schema()}"""
269 | full_prompt = f"{system_message}\n\n{user_message}"
270 | response = await query_ollama(full_prompt, context.history(), context.get_image())
271 | return response
272 |
273 | @mcp.tool()
274 | def get_scene_info(ctx: Context) -> str:
275 | try:
276 | blender = get_blender_connection()
277 | result = blender.send_command("get_scene_info")
278 | return json.dumps(result, indent=2) # Return as a formatted string
279 | except Exception as e:
280 | return f"Error: {e!s}"
281 |
282 | @mcp.tool()
283 | def get_object_info(ctx: Context, object_name: str) -> str:
284 | try:
285 | blender = get_blender_connection()
286 | result = blender.send_command("get_object_info", {"name": object_name})
287 | return json.dumps(result, indent=2) # Return as a formatted string
288 | except Exception as e:
289 | return f"Error: {e!s}"
290 |
291 | @mcp.tool()
292 | def create_object(
293 | ctx: Context,
294 | type: str = "CUBE",
295 | name: Optional[str] = None,
296 | location: Optional[List[float]] = None,
297 | rotation: Optional[List[float]] = None,
298 | scale: Optional[List[float]] = None
299 | ) -> str:
300 | try:
301 | blender = get_blender_connection()
302 | loc, rot, sc = location or [0, 0, 0], rotation or [0, 0, 0], scale or [1, 1, 1]
303 | params = {"type": type, "location": loc, "rotation": rot, "scale": sc}
304 | if name: params["name"] = name
305 | result = blender.send_command("create_object", params)
306 | return f"Created {type} object: {result['name']}"
307 | except Exception as e:
308 | return f"Error: {e!s}"
309 |
310 | @mcp.tool()
311 | def modify_object(
312 | ctx: Context,
313 | name: str,
314 | location: Optional[List[float]] = None,
315 | rotation: Optional[List[float]] = None,
316 | scale: Optional[List[float]] = None,
317 | visible: Optional[bool] = None
318 | ) -> str:
319 | try:
320 | blender = get_blender_connection()
321 | params = {"name": name}
322 | if location is not None: params["location"] = location
323 | if rotation is not None: params["rotation"] = rotation
324 | if scale is not None: params["scale"] = scale
325 | if visible is not None: params["visible"] = visible
326 | result = blender.send_command("modify_object", params)
327 | return f"Modified object: {result['name']}"
328 | except Exception as e:
329 | return f"Error: {e!s}"
330 |
331 | @mcp.tool()
332 | def delete_object(ctx: Context, name: str) -> str:
333 | try:
334 | blender = get_blender_connection()
335 | blender.send_command("delete_object", {"name": name})
336 | return f"Deleted object: {name}"
337 | except Exception as e:
338 | return f"Error: {e!s}"
339 |
340 | @mcp.tool()
341 | def set_material(
342 | ctx: Context,
343 | object_name: str,
344 | material_name: Optional[str] = None,
345 | color: Optional[List[float]] = None
346 | ) -> str:
347 | try:
348 | blender = get_blender_connection()
349 | params = {"object_name": object_name}
350 | if material_name: params["material_name"] = material_name
351 | if color: params["color"] = color
352 | result = blender.send_command("set_material", params)
353 | return f"Applied material to {object_name}: {result.get('material_name', 'unknown')}"
354 | except Exception as e:
355 | return f"Error: {e!s}"
356 |
357 | @mcp.tool()
358 | def execute_blender_code(ctx: Context, code: str) -> str:
359 | try:
360 | blender = get_blender_connection()
361 | result = blender.send_command("execute_code", {"code": code})
362 | return f"Code executed: {result.get('result', '')}"
363 | except Exception as e:
364 | return f"Error: {e!s}"
365 |
366 | @mcp.tool()
367 | def get_polyhaven_categories(ctx: Context, asset_type: str = "hdris") -> str:
368 | try:
369 | blender = get_blender_connection()
370 | if not _polyhaven_enabled: return "PolyHaven disabled."
371 | result = blender.send_command("get_polyhaven_categories", {"asset_type": asset_type})
372 | if "error" in result: return f"Error: {result['error']}"
373 | categories = result["categories"]
374 | formatted = f"Categories for {asset_type}:\n" + \
375 | "\n".join(f"- {cat}: {count}" for cat, count in
376 | sorted(categories.items(), key=lambda x: x[1], reverse=True))
377 | return formatted
378 | except Exception as e:
379 | return f"Error: {e!s}"
380 |
381 | @mcp.tool()
382 | def search_polyhaven_assets(ctx: Context, asset_type: str = "all", categories: Optional[str] = None) -> str:
383 | try:
384 | blender = get_blender_connection()
385 | result = blender.send_command("search_polyhaven_assets",
386 | {"asset_type": asset_type, "categories": categories})
387 | if "error" in result: return f"Error: {result['error']}"
388 | assets, total, returned = result["assets"], result["total_count"], result["returned_count"]
389 | formatted = f"Found {total} assets" + (f" in: {categories}" if categories else "") + \
390 | f"\nShowing {returned}:\n" + "".join(
391 | f"- {data.get('name', asset_id)} (ID: {asset_id})\n"
392 | f" Type: {['HDRI', 'Texture', 'Model'][data.get('type', 0)]}\n"
393 | f" Categories: {', '.join(data.get('categories', []))}\n"
394 | f" Downloads: {data.get('download_count', 'Unknown')}\n"
395 | for asset_id, data in sorted(assets.items(),
396 | key=lambda x: x[1].get("download_count", 0),
397 | reverse=True))
398 | return formatted
399 | except Exception as e:
400 | return f"Error: {e!s}"
401 |
402 | @mcp.tool()
403 | def download_polyhaven_asset(ctx: Context, asset_id: str, asset_type: str,
404 | resolution: str = "1k", file_format: Optional[str] = None) -> str:
405 | try:
406 | blender = get_blender_connection()
407 | result = blender.send_command("download_polyhaven_asset", {
408 | "asset_id": asset_id, "asset_type": asset_type,
409 | "resolution": resolution, "file_format": file_format})
410 | if "error" in result: return f"Error: {result['error']}"
411 | if result.get("success"):
412 | message = result.get("message", "Success")
413 | if asset_type == "hdris": return f"{message}. HDRI set as world."
414 | elif asset_type == "textures":
415 | mat_name, maps = result.get("material", ""), ", ".join(result.get("maps", []))
416 | return f"{message}. Material '{mat_name}' with: {maps}."
417 | elif asset_type == "models": return f"{message}. Model imported."
418 | return message
419 | return f"Failed: {result.get('message', 'Unknown')}"
420 | except Exception as e:
421 | return f"Error: {e!s}"
422 |
423 | @mcp.tool()
424 | def set_texture(ctx: Context, object_name: str, texture_id: str) -> str:
425 | try:
426 | blender = get_blender_connection()
427 | result = blender.send_command("set_texture",
428 | {"object_name": object_name, "texture_id": texture_id})
429 | if "error" in result: return f"Error: {result['error']}"
430 | if result.get("success"):
431 | mat_name, maps = result.get("material", ""), ", ".join(result.get("maps", []))
432 | info, nodes = result.get("material_info", {}), result.get("material_info", {}).get("texture_nodes", [])
433 | output = (f"Applied '{texture_id}' to {object_name}.\nMaterial '{mat_name}': {maps}.\n"
434 | f"Nodes: {info.get('has_nodes', False)}\nCount: {info.get('node_count', 0)}\n")
435 | if nodes:
436 | output += "Texture nodes:\n" + "".join(
437 | f"- {node['name']} ({node['image']})\n" +
438 | (" Connections:\n" + "".join(f" {conn}\n" for conn in node['connections'])
439 | if node['connections'] else "")
440 | for node in nodes)
441 | return output
442 | return f"Failed: {result.get('message', 'Unknown')}"
443 | except Exception as e:
444 | return f"Error: {e!s}"
445 |
446 | @mcp.tool()
447 | def get_polyhaven_status(ctx: Context) -> str:
448 | try:
449 | blender = get_blender_connection()
450 | result = blender.send_command("get_polyhaven_status")
451 | return result.get("message", "") # Return the message directly
452 | except Exception as e:
453 | return f"Error: {e!s}"
454 |
455 | @mcp.tool()
456 | async def set_ollama_model(ctx: Context, model_name: str) -> str:
457 | global _ollama_model
458 | try:
459 | async with httpx.AsyncClient() as client:
460 | response = await client.post(f"{_ollama_url}/api/show",
461 | json={"name": model_name}, timeout=10.0)
462 | if response.status_code == 200:
463 | _ollama_model = model_name
464 | return f"Ollama model set to: {_ollama_model}"
465 | else: return f"Error: Could not find model '{model_name}'."
466 | except Exception as e:
467 | return f"Error: Failed to communicate: {e!s}"
468 |
469 | @mcp.tool()
470 | async def set_ollama_url(ctx: Context, url: str) -> str:
471 | global _ollama_url
472 | if not (url.startswith("http://") or url.startswith("https://")):
473 | return "Error: Invalid URL format. Must start with http:// or https://."
474 | _ollama_url = url
475 | return f"Ollama URL set to: {_ollama_url}"
476 |
477 | @mcp.tool()
478 | async def get_ollama_models(ctx: Context) -> str:
479 | try:
480 | async with httpx.AsyncClient() as client:
481 | response = await client.get(f"{_ollama_url}/api/tags", timeout=10.0)
482 | response.raise_for_status()
483 | models_data = response.json()
484 | if "models" in models_data:
485 | model_list = [model["name"] for model in models_data["models"]]
486 | return "Available Ollama models:\n" + "\n".join(model_list)
487 | else: return "Error: Unexpected response from Ollama /api/tags."
488 | except httpx.HTTPStatusError as e:
489 | return f"Error: Ollama API error: {e.response.status_code}"
490 | except httpx.RequestError as e:
491 | return "Error: Failed to connect to Ollama API."
492 | except Exception as e:
493 | return f"Error: An unexpected error: {e!s}"
494 |
495 | @mcp.tool()
496 | async def render_image(ctx: Context, file_path: str = "render.png") -> str:
497 | try:
498 | blender = get_blender_connection()
499 | result = blender.send_command("render_scene", {"output_path": file_path})
500 | if result and result.get("rendered"):
501 | # Use the actual output path returned from Blender
502 | actual_file_path = result.get("output_path")
503 | if not actual_file_path:
504 | return "Error: Blender rendered but did not return an output path."
505 | try:
506 | with open(actual_file_path, "rb") as image_file:
507 | encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
508 | ctx.add_image(Image(data=encoded_string)) # Add image to the context
509 | return "Image Rendered Successfully."
510 | except FileNotFoundError:
511 | return f"Error: Blender rendered to '{actual_file_path}', but the file was not found by the server."
512 | except Exception as exception:
513 | return f"Blender rendered, but the image could not be read: {exception!s}"
514 | else:
515 | return f"Error: Rendering failed with result: {result}"
516 | except Exception as e:
517 | return f"Error: {e!s}"
518 |
519 | def main():
520 | """Run the MCP server."""
521 | parser = argparse.ArgumentParser(description="BlenderMCP Server")
522 | # Set global variables from command-line arguments
523 | global _ollama_url, _ollama_model
524 |
525 | parser.add_argument("--ollama-url", type=str, default=_ollama_url,
526 | help="URL of the Ollama server")
527 | parser.add_argument("--ollama-model", type=str, default=_ollama_model,
528 | help="Default Ollama model to use")
529 | parser.add_argument("--port", type=int, default=8000,
530 | help="Port for the MCP server to listen on")
531 | parser.add_argument("--host", type=str, default="0.0.0.0",
532 | help="Host for the MCP server to listen on")
533 |
534 | args = parser.parse_args()
535 |
536 | _ollama_url = args.ollama_url
537 | _ollama_model = args.ollama_model
538 |
539 | # MCP instance is already created globally
540 | mcp.run(host=args.host, port=args.port)
541 |
542 |
543 | if __name__ == "__main__":
544 | main()
```
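`query_ollama` above sends a single non-streaming request to Ollama's `/api/generate` endpoint and reads the `response` field of the reply. The standalone sketch below reproduces that call so the Ollama side can be checked independently of Blender; it assumes Ollama is running at its default `http://localhost:11434` with a `llama3.2` model pulled, and the helper name `ping_ollama` is illustrative.

```python
# Standalone sketch of the /api/generate call made by query_ollama() in server.py.
# Assumes Ollama at http://localhost:11434 with the llama3.2 model pulled.
import asyncio
import httpx

async def ping_ollama(prompt: str, model: str = "llama3.2",
                      url: str = "http://localhost:11434") -> str:
    payload = {"prompt": prompt, "model": model, "format": "json", "stream": False}
    async with httpx.AsyncClient() as client:
        response = await client.post(f"{url}/api/generate", json=payload, timeout=60.0)
        response.raise_for_status()
        return response.json().get("response", "")

if __name__ == "__main__":
    print(asyncio.run(ping_ollama("Reply with an empty JSON object.")))
```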
--------------------------------------------------------------------------------
/addon.py:
--------------------------------------------------------------------------------
```python
1 | import bpy
2 | import json
3 | import threading
4 | import socket
5 | import time
6 | import requests
7 | import tempfile
8 | from bpy.props import StringProperty, IntProperty
9 | import traceback
10 | import os
11 | import shutil
12 |
13 | bl_info = {
14 | "name": "Blender MCP",
15 | "author": "BlenderMCP",
16 | "version": (0, 2), # Updated version
17 | "blender": (3, 0, 0),
18 | "location": "View3D > Sidebar > BlenderMCP",
19 | "description": "Connect Blender to local AI models via MCP", # Updated description
20 | "category": "Interface",
21 | }
22 |
23 | class BlenderMCPServer:
24 | def __init__(self, host='localhost', port=9876):
25 | self.host = host
26 | self.port = port
27 | self.running = False
28 | self.socket = None
29 | self.client = None
30 | self.command_queue = []
31 | self.buffer = b''
32 |
33 | def start(self):
34 | self.running = True
35 | self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
36 | self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
37 |
38 | try:
39 | self.socket.bind((self.host, self.port))
40 | self.socket.listen(1)
41 | self.socket.setblocking(False)
42 | bpy.app.timers.register(self._process_server, persistent=True)
43 | print(f"BlenderMCP server started on {self.host}:{self.port}")
44 | except Exception as e:
45 | print(f"Failed to start server: {str(e)}")
46 | self.stop()
47 |
48 | def stop(self):
49 | self.running = False
50 | if hasattr(bpy.app.timers, "unregister"):
51 | if bpy.app.timers.is_registered(self._process_server):
52 | bpy.app.timers.unregister(self._process_server)
53 | if self.socket:
54 | self.socket.close()
55 | if self.client:
56 | self.client.close()
57 | self.socket = None
58 | self.client = None
59 | print("BlenderMCP server stopped")
60 |
61 | def _process_server(self):
62 | if not self.running:
63 | return None
64 |
65 | try:
66 | if not self.client and self.socket:
67 | try:
68 | self.client, address = self.socket.accept()
69 | self.client.setblocking(False)
70 | print(f"Connected to client: {address}")
71 | except BlockingIOError:
72 | pass
73 | except Exception as e:
74 | print(f"Error accepting connection: {str(e)}")
75 |
76 | if self.client:
77 | try:
78 | try:
79 | data = self.client.recv(8192)
80 | if data:
81 | self.buffer += data
82 | try:
83 | command = json.loads(self.buffer.decode('utf-8'))
84 | self.buffer = b''
85 | response = self.execute_command(command)
86 | response_json = json.dumps(response)
87 | self.client.sendall(response_json.encode('utf-8'))
88 | except json.JSONDecodeError:
89 | pass
90 | else:
91 | print("Client disconnected")
92 | self.client.close()
93 | self.client = None
94 | self.buffer = b''
95 | except BlockingIOError:
96 | pass
97 | except Exception as e:
98 | print(f"Error receiving data: {str(e)}")
99 | self.client.close()
100 | self.client = None
101 | self.buffer = b''
102 |
103 | except Exception as e:
104 | print(f"Error with client: {str(e)}")
105 | if self.client:
106 | self.client.close()
107 | self.client = None
108 | self.buffer = b''
109 |
110 | except Exception as e:
111 | print(f"Server error: {str(e)}")
112 |
113 | return 0.1
114 |
115 | def execute_command(self, command):
116 | try:
117 | cmd_type = command.get("type")
118 | params = command.get("params", {})
119 | if cmd_type in ["create_object", "modify_object", "delete_object"]:
120 | if not bpy.context.screen or not bpy.context.screen.areas:
121 | return {"status": "error", "message": "Suitable 'VIEW_3D' context not found for command execution."}
122 |
123 | view_3d_areas = [area for area in bpy.context.screen.areas if area.type == 'VIEW_3D']
124 | if not view_3d_areas:
125 | return {"status": "error", "message": "Suitable 'VIEW_3D' context not found for command execution."}
126 |
127 | override = bpy.context.copy()
128 | override['area'] = view_3d_areas[0]
129 | with bpy.context.temp_override(**override):
130 | return self._execute_command_internal(command)
131 | else:
132 | return self._execute_command_internal(command)
133 | except Exception as e:
134 | print(f"Error executing command: {str(e)}")
135 | traceback.print_exc()
136 | return {"status": "error", "message": str(e)}
137 |
138 | def _execute_command_internal(self, command):
139 | cmd_type = command.get("type")
140 | params = command.get("params", {})
141 |
142 | if cmd_type == "get_polyhaven_status":
143 | return {"status": "success", "result": self.get_polyhaven_status()}
144 |
145 | handlers = {
146 | "get_scene_info": self.get_scene_info,
147 | "create_object": self.create_object,
148 | "modify_object": self.modify_object,
149 | "delete_object": self.delete_object,
150 | "get_object_info": self.get_object_info,
151 | "execute_code": self.execute_code,
152 | "set_material": self.set_material,
153 | "get_polyhaven_status": self.get_polyhaven_status,
154 | "render_scene": self.render_scene
155 | }
156 |
157 | if bpy.context.scene.blendermcp_use_polyhaven:
158 | polyhaven_handlers = {
159 | "get_polyhaven_categories": self.get_polyhaven_categories,
160 | "search_polyhaven_assets": self.search_polyhaven_assets,
161 | "download_polyhaven_asset": self.download_polyhaven_asset,
162 | "set_texture": self.set_texture,
163 | }
164 | handlers.update(polyhaven_handlers)
165 |
166 | handler = handlers.get(cmd_type)
167 | if handler:
168 | try:
169 | print(f"Executing handler for {cmd_type}")
170 | result = handler(**params)
171 | print(f"Handler execution complete")
172 | return {"status": "success", "result": result}
173 | except Exception as e:
174 | print(f"Error in handler: {str(e)}")
175 | traceback.print_exc()
176 | return {"status": "error", "message": str(e)}
177 | else:
178 | return {"status": "error", "message": f"Unknown command type: {cmd_type}"}
179 |
180 |
181 | def get_simple_info(self):
182 | return {
183 | "blender_version": ".".join(str(v) for v in bpy.app.version),
184 | "scene_name": bpy.context.scene.name,
185 | "object_count": len(bpy.context.scene.objects)
186 | }
187 |
188 | def get_scene_info(self):
189 | try:
190 | print("Getting scene info...")
191 | scene_info = {
192 | "name": bpy.context.scene.name,
193 | "object_count": len(bpy.context.scene.objects),
194 | "objects": [],
195 | "materials_count": len(bpy.data.materials),
196 | }
197 |
198 | for i, obj in enumerate(bpy.context.scene.objects):
199 | if i >= 10:
200 | break
201 |
202 | obj_info = {
203 | "name": obj.name,
204 | "type": obj.type,
205 | "location": [round(float(obj.location.x), 2),
206 | round(float(obj.location.y), 2),
207 | round(float(obj.location.z), 2)],
208 | }
209 | scene_info["objects"].append(obj_info)
210 |
211 | print(f"Scene info collected: {len(scene_info['objects'])} objects")
212 | return scene_info
213 | except Exception as e:
214 | print(f"Error in get_scene_info: {str(e)}")
215 | traceback.print_exc()
216 | return {"error": str(e)}
217 |
218 | def render_scene(self, output_path=None, resolution_x=None, resolution_y=None):
219 | """Render the current scene"""
220 | try:
221 | if resolution_x is not None:
222 | bpy.context.scene.render.resolution_x = int(resolution_x)
223 |
224 | if resolution_y is not None:
225 | bpy.context.scene.render.resolution_y = int(resolution_y)
226 |
227 | if output_path:
228 | # Use absolute path and ensure directory exists.
229 | output_path = bpy.path.abspath(output_path)
230 | output_dir = os.path.dirname(output_path)
231 |                 if output_dir and not os.path.exists(output_dir):
232 | os.makedirs(output_dir)
233 | bpy.context.scene.render.filepath = output_path
234 | else: # If path not given save to a temp dir
235 | output_path = os.path.join(tempfile.gettempdir(),"render.png")
236 | bpy.context.scene.render.filepath = output_path
237 |
238 |
239 | # Render the scene
240 | bpy.ops.render.render(write_still=True) #Always write still even if no path given
241 |
242 | return {
243 | "rendered": True,
244 | "output_path": output_path ,
245 | "resolution": [bpy.context.scene.render.resolution_x, bpy.context.scene.render.resolution_y],
246 | }
247 | except Exception as e:
248 | print(f"Error in render_scene: {str(e)}")
249 | traceback.print_exc()
250 | return {"error": str(e)}
251 |
252 | def create_object(self, type="CUBE", name=None, location=(0, 0, 0), rotation=(0, 0, 0), scale=(1, 1, 1)):
253 | bpy.ops.object.select_all(action='DESELECT')
254 | if type == "CUBE":
255 | bpy.ops.mesh.primitive_cube_add(location=location, rotation=rotation, scale=scale)
256 | elif type == "SPHERE":
257 | bpy.ops.mesh.primitive_uv_sphere_add(location=location, rotation=rotation, scale=scale)
258 | elif type == "CYLINDER":
259 | bpy.ops.mesh.primitive_cylinder_add(location=location, rotation=rotation, scale=scale)
260 | elif type == "PLANE":
261 | bpy.ops.mesh.primitive_plane_add(location=location, rotation=rotation, scale=scale)
262 | elif type == "CONE":
263 | bpy.ops.mesh.primitive_cone_add(location=location, rotation=rotation, scale=scale)
264 | elif type == "TORUS":
265 | bpy.ops.mesh.primitive_torus_add(location=location, rotation=rotation, scale=scale)
266 | elif type == "EMPTY":
267 | bpy.ops.object.empty_add(location=location, rotation=rotation)
268 | elif type == "CAMERA":
269 | bpy.ops.object.camera_add(location=location, rotation=rotation)
270 | elif type == "LIGHT":
271 | bpy.ops.object.light_add(type='POINT', location=location, rotation=rotation)
272 | else:
273 | raise ValueError(f"Unsupported object type: {type}")
274 |
275 | obj = bpy.context.active_object
276 | if name:
277 | obj.name = name
278 |
279 | return {
280 | "name": obj.name,
281 | "type": obj.type,
282 | "location": [obj.location.x, obj.location.y, obj.location.z],
283 | "rotation": [obj.rotation_euler.x, obj.rotation_euler.y, obj.rotation_euler.z],
284 | "scale": [obj.scale.x, obj.scale.y, obj.scale.z],
285 | }
286 |
287 | def modify_object(self, name, location=None, rotation=None, scale=None, visible=None):
288 | obj = bpy.data.objects.get(name)
289 | if not obj:
290 | raise ValueError(f"Object not found: {name}")
291 |
292 | if location is not None:
293 | obj.location = location
294 | if rotation is not None:
295 | obj.rotation_euler = rotation
296 | if scale is not None:
297 | obj.scale = scale
298 | if visible is not None:
299 | obj.hide_viewport = not visible
300 | obj.hide_render = not visible
301 |
302 | return {
303 | "name": obj.name,
304 | "type": obj.type,
305 | "location": [obj.location.x, obj.location.y, obj.location.z],
306 | "rotation": [obj.rotation_euler.x, obj.rotation_euler.y, obj.rotation_euler.z],
307 | "scale": [obj.scale.x, obj.scale.y, obj.scale.z],
308 | "visible": obj.visible_get(),
309 | }
310 |
311 | def delete_object(self, name):
312 | obj = bpy.data.objects.get(name)
313 | if not obj:
314 | raise ValueError(f"Object not found: {name}")
315 |
316 | obj_name = obj.name
317 | bpy.ops.object.select_all(action='DESELECT')
318 | obj.select_set(True)
319 | bpy.ops.object.delete()
320 |
321 | return {"deleted": obj_name}
322 |
323 | def get_object_info(self, name):
324 | obj = bpy.data.objects.get(name)
325 | if not obj:
326 | raise ValueError(f"Object not found: {name}")
327 |
328 | obj_info = {
329 | "name": obj.name,
330 | "type": obj.type,
331 | "location": [obj.location.x, obj.location.y, obj.location.z],
332 | "rotation": [obj.rotation_euler.x, obj.rotation_euler.y, obj.rotation_euler.z],
333 | "scale": [obj.scale.x, obj.scale.y, obj.scale.z],
334 | "visible": obj.visible_get(),
335 | "materials": [],
336 | }
337 |
338 | for slot in obj.material_slots:
339 | if slot.material:
340 | obj_info["materials"].append(slot.material.name)
341 |
342 | if obj.type == 'MESH' and obj.data:
343 | mesh = obj.data
344 | obj_info["mesh"] = {
345 | "vertices": len(mesh.vertices),
346 | "edges": len(mesh.edges),
347 | "polygons": len(mesh.polygons),
348 | }
349 |
350 | return obj_info
351 |
352 | def execute_code(self, code):
353 | try:
354 | namespace = {"bpy": bpy}
355 | exec(code, namespace)
356 | return {"executed": True}
357 | except Exception as e:
358 | raise Exception(f"Code execution error: {str(e)}")
359 |
360 | def set_material(self, object_name, material_name=None, create_if_missing=True, color=None):
361 | """Set or create a material for an object."""
362 | try:
363 | obj = bpy.data.objects.get(object_name)
364 | if not obj:
365 | raise ValueError(f"Object not found: {object_name}")
366 |
367 | if not hasattr(obj, 'data') or not hasattr(obj.data, 'materials'):
368 | raise ValueError(f"Object {object_name} cannot accept materials")
369 | if material_name:
370 | mat = bpy.data.materials.get(material_name)
371 | if not mat and create_if_missing:
372 | mat = bpy.data.materials.new(name=material_name)
373 | print(f"Created new material: {material_name}")
374 | else:
375 | mat_name = f"{object_name}_material"
376 | mat = bpy.data.materials.get(mat_name)
377 | if not mat:
378 | mat = bpy.data.materials.new(name=mat_name)
379 | material_name = mat_name
380 | print(f"Using material: {mat_name}")
381 |
382 | if mat:
383 | if not mat.use_nodes:
384 | mat.use_nodes = True
385 | principled = mat.node_tree.nodes.get('Principled BSDF')
386 | if not principled:
387 | principled = mat.node_tree.nodes.new('ShaderNodeBsdfPrincipled')
388 | output = mat.node_tree.nodes.get('Material Output')
389 | if not output:
390 | output = mat.node_tree.nodes.new('ShaderNodeOutputMaterial')
391 | if not principled.outputs[0].links:
392 | mat.node_tree.links.new(principled.outputs[0], output.inputs[0])
393 |
394 | if color and len(color) >= 3:
395 | principled.inputs['Base Color'].default_value = (
396 | color[0],
397 | color[1],
398 | color[2],
399 | 1.0 if len(color) < 4 else color[3]
400 | )
401 | print(f"Set material color to {color}")
402 |
403 | if mat:
404 | if not obj.data.materials:
405 | obj.data.materials.append(mat)
406 | else:
407 | obj.data.materials[0] = mat
408 | print(f"Assigned material {mat.name} to object {object_name}")
409 | return {
410 | "status": "success",
411 | "object": object_name,
412 | "material": mat.name,
413 | "color": color if color else None
414 | }
415 | else:
416 | raise ValueError(f"Failed to create or find material: {material_name}")
417 | except Exception as e:
418 | print(f"Error in set_material: {str(e)}")
419 | traceback.print_exc()
420 | return {
421 | "status": "error",
422 | "message": str(e),
423 | "object": object_name,
424 | "material": material_name if 'material_name' in locals() else None
425 | }
426 | def get_polyhaven_categories(self, asset_type):
427 | """Get categories for a specific asset type from Polyhaven"""
428 | try:
429 | if asset_type not in ["hdris", "textures", "models", "all"]:
430 | return {"error": f"Invalid asset type: {asset_type}. Must be one of: hdris, textures, models, all"}
431 |
432 | response = requests.get(f"https://api.polyhaven.com/categories/{asset_type}")
433 | if response.status_code == 200:
434 | return {"categories": response.json()}
435 | else:
436 | return {"error": f"API request failed with status code {response.status_code}"}
437 | except Exception as e:
438 | return {"error": str(e)}
439 |
440 | def search_polyhaven_assets(self, asset_type=None, categories=None):
441 | """Search for assets from Polyhaven with optional filtering"""
442 | try:
443 | url = "https://api.polyhaven.com/assets"
444 | params = {}
445 |
446 | if asset_type and asset_type != "all":
447 | if asset_type not in ["hdris", "textures", "models"]:
448 | return {"error": f"Invalid asset type: {asset_type}. Must be one of: hdris, textures, models, all"}
449 | params["type"] = asset_type
450 |
451 | if categories:
452 | params["categories"] = categories
453 |
454 | response = requests.get(url, params=params)
455 | if response.status_code == 200:
456 | assets = response.json()
457 | limited_assets = {}
458 | for i, (key, value) in enumerate(assets.items()):
459 | if i >= 20:
460 | break
461 | limited_assets[key] = value
462 |
463 | return {"assets": limited_assets, "total_count": len(assets), "returned_count": len(limited_assets)}
464 | else:
465 | return {"error": f"API request failed with status code {response.status_code}"}
466 | except Exception as e:
467 | return {"error": str(e)}
468 |
469 | def download_polyhaven_asset(self, asset_id, asset_type, resolution="1k", file_format=None):
470 | """Downloads and imports a PolyHaven asset."""
471 | try:
472 | files_response = requests.get(f"https://api.polyhaven.com/files/{asset_id}")
473 | if files_response.status_code != 200:
474 | return {"error": f"Failed to get asset files: {files_response.status_code}"}
475 |
476 | files_data = files_response.json()
477 |
478 | if asset_type == "hdris":
479 | if not file_format:
480 | file_format = "hdr"
481 | if "hdri" in files_data and resolution in files_data["hdri"] and file_format in files_data["hdri"][resolution]:
482 | file_info = files_data["hdri"][resolution][file_format]
483 | file_url = file_info["url"]
484 |
485 | tmp_path = None
486 | try:
487 | with tempfile.NamedTemporaryFile(suffix=f".{file_format}", delete=False) as tmp_file:
488 | response = requests.get(file_url)
489 | if response.status_code != 200:
490 | return {"error": f"Failed to download HDRI: {response.status_code}"}
491 | tmp_file.write(response.content)
492 | tmp_path = tmp_file.name
493 |
494 | if not bpy.data.worlds:
495 | bpy.data.worlds.new("World")
496 | world = bpy.data.worlds[0]
497 | world.use_nodes = True
498 | node_tree = world.node_tree
499 | for node in node_tree.nodes:
500 | node_tree.nodes.remove(node)
501 | tex_coord = node_tree.nodes.new(type='ShaderNodeTexCoord')
502 | tex_coord.location = (-800, 0)
503 | mapping = node_tree.nodes.new(type='ShaderNodeMapping')
504 | mapping.location = (-600, 0)
505 | env_tex = node_tree.nodes.new(type='ShaderNodeTexEnvironment')
506 | env_tex.location = (-400, 0)
507 | env_tex.image = bpy.data.images.load(tmp_path)
508 | if file_format.lower() == 'exr':
509 | try:
510 | env_tex.image.colorspace_settings.name = 'Linear'
511 | except:
512 | env_tex.image.colorspace_settings.name = 'Non-Color'
513 | else:
514 | for color_space in ['Linear', 'Linear Rec.709', 'Non-Color']:
515 | try:
516 | env_tex.image.colorspace_settings.name = color_space
517 | break
518 | except:
519 | continue
520 | background = node_tree.nodes.new(type='ShaderNodeBackground')
521 | background.location = (-200, 0)
522 | output = node_tree.nodes.new(type='ShaderNodeOutputWorld')
523 | output.location = (0, 0)
524 | node_tree.links.new(tex_coord.outputs['Generated'], mapping.inputs['Vector'])
525 | node_tree.links.new(mapping.outputs['Vector'], env_tex.inputs['Vector'])
526 | node_tree.links.new(env_tex.outputs['Color'], background.inputs['Color'])
527 | node_tree.links.new(background.outputs['Background'], output.inputs['Surface'])
528 |
529 | bpy.context.scene.world = world
530 |
531 | return {
532 | "success": True,
533 | "message": f"HDRI {asset_id} imported successfully",
534 | "image_name": env_tex.image.name
535 | }
536 | except Exception as e:
537 | return {"error": f"Failed to set up HDRI: {str(e)}"}
538 | finally:
539 | if tmp_path and os.path.exists(tmp_path):
540 | os.remove(tmp_path)
541 | else:
542 |                     return {"error": f"Requested resolution '{resolution}' or format '{file_format}' is not available for this HDRI."}
543 |
544 | elif asset_type == "textures":
545 | if not file_format:
546 | file_format = "jpg"
547 |
548 | downloaded_maps = {}
549 | try:
550 | for map_type in files_data:
551 | if map_type not in ["blend", "gltf"]:
552 | if resolution in files_data[map_type] and file_format in files_data[map_type][resolution]:
553 | file_info = files_data[map_type][resolution][file_format]
554 | file_url = file_info["url"]
555 |
556 | with tempfile.NamedTemporaryFile(suffix=f".{file_format}", delete=False) as tmp_file:
557 | response = requests.get(file_url)
558 | if response.status_code == 200:
559 | tmp_file.write(response.content)
560 | tmp_path = tmp_file.name
561 | image = bpy.data.images.load(tmp_path)
562 | image.name = f"{asset_id}_{map_type}.{file_format}"
563 | image.pack()
564 | if map_type in ['color', 'diffuse', 'albedo']:
565 | try:
566 | image.colorspace_settings.name = 'sRGB'
567 | except:
568 | pass
569 | else:
570 | try:
571 | image.colorspace_settings.name = 'Non-Color'
572 | except:
573 | pass
574 | downloaded_maps[map_type] = image
575 | try:
576 | os.unlink(tmp_path)
577 | except:
578 | pass
579 |
580 | if not downloaded_maps:
581 |                         return {"error": f"No texture maps found for '{asset_id}' at resolution '{resolution}' in format '{file_format}'"}
582 |
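                    # Build a Principled BSDF material and wire each downloaded map:
                    # color/diffuse/albedo -> Base Color, roughness -> Roughness, metallic -> Metallic,
                    # normal -> Normal Map node, displacement/height -> the material's Displacement output.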
583 | mat = bpy.data.materials.new(name=asset_id)
584 | mat.use_nodes = True
585 | nodes = mat.node_tree.nodes
586 | links = mat.node_tree.links
587 | for node in nodes:
588 | nodes.remove(node)
589 | output = nodes.new(type='ShaderNodeOutputMaterial')
590 | output.location = (300, 0)
591 | principled = nodes.new(type='ShaderNodeBsdfPrincipled')
592 | principled.location = (0, 0)
593 | links.new(principled.outputs[0], output.inputs[0])
594 | tex_coord = nodes.new(type='ShaderNodeTexCoord')
595 | tex_coord.location = (-800, 0)
596 | mapping = nodes.new(type='ShaderNodeMapping')
597 | mapping.location = (-600, 0)
598 | mapping.vector_type = 'TEXTURE'
599 | links.new(tex_coord.outputs['UV'], mapping.inputs['Vector'])
600 | x_pos = -400
601 | y_pos = 300
602 |
603 | for map_type, image in downloaded_maps.items():
604 | tex_node = nodes.new(type='ShaderNodeTexImage')
605 | tex_node.location = (x_pos, y_pos)
606 | tex_node.image = image
607 | if map_type.lower() in ['color', 'diffuse', 'albedo']:
608 | try:
609 | tex_node.image.colorspace_settings.name = 'sRGB'
610 | except:
611 | pass
612 | else:
613 | try:
614 | tex_node.image.colorspace_settings.name = 'Non-Color'
615 | except:
616 | pass
617 | links.new(mapping.outputs['Vector'], tex_node.inputs['Vector'])
618 |
619 | if map_type.lower() in ['color', 'diffuse', 'albedo']:
620 | links.new(tex_node.outputs['Color'], principled.inputs['Base Color'])
621 | elif map_type.lower() in ['roughness', 'rough']:
622 | links.new(tex_node.outputs['Color'], principled.inputs['Roughness'])
623 | elif map_type.lower() in ['metallic', 'metalness', 'metal']:
624 | links.new(tex_node.outputs['Color'], principled.inputs['Metallic'])
625 | elif map_type.lower() in ['normal', 'nor']:
626 | normal_map = nodes.new(type='ShaderNodeNormalMap')
627 | normal_map.location = (x_pos + 200, y_pos)
628 | links.new(tex_node.outputs['Color'], normal_map.inputs['Color'])
629 | links.new(normal_map.outputs['Normal'], principled.inputs['Normal'])
630 | elif map_type in ['displacement', 'disp', 'height']:
631 | disp_node = nodes.new(type='ShaderNodeDisplacement')
632 | disp_node.location = (x_pos + 200, y_pos - 200)
633 | links.new(tex_node.outputs['Color'], disp_node.inputs['Height'])
634 | links.new(disp_node.outputs['Displacement'], output.inputs['Displacement'])
635 | y_pos -= 250
636 | return {
637 | "success": True,
638 | "message": f"Texture {asset_id} imported as material",
639 | "material": mat.name,
640 | "maps": list(downloaded_maps.keys())
641 | }
642 | except Exception as e:
643 | return {"error": f"Failed to process textures: {str(e)}"}
644 |
645 | elif asset_type == "models":
646 | if not file_format:
647 | file_format = "gltf"
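                # Model entries nest the format twice (files_data[fmt][res][fmt]); any files listed
                # under "include" (e.g. textures or .bin buffers) are downloaded alongside the main
                # file before importing with the matching bpy.ops importer.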
648 | if file_format in files_data and resolution in files_data[file_format]:
649 | file_info = files_data[file_format][resolution][file_format]
650 | file_url = file_info["url"]
651 | temp_dir = tempfile.mkdtemp()
652 | main_file_path = ""
653 | try:
654 | main_file_name = file_url.split("/")[-1]
655 | main_file_path = os.path.join(temp_dir, main_file_name)
656 | response = requests.get(file_url)
657 | if response.status_code != 200:
658 | return {"error": f"Failed to download model: {response.status_code}"}
659 | with open(main_file_path, "wb") as f:
660 | f.write(response.content)
661 | if "include" in file_info and file_info["include"]:
662 | for include_path, include_info in file_info["include"].items():
663 | include_url = include_info["url"]
664 | include_file_path = os.path.join(temp_dir, include_path)
665 | os.makedirs(os.path.dirname(include_file_path), exist_ok=True)
666 | include_response = requests.get(include_url)
667 | if include_response.status_code == 200:
668 | with open(include_file_path, "wb") as f:
669 | f.write(include_response.content)
670 | else:
671 | print(f"Failed to download included file: {include_path}")
672 |                         if file_format in ("gltf", "glb"):
673 |                             bpy.ops.import_scene.gltf(filepath=main_file_path)
674 |                         elif file_format == "fbx":
675 |                             bpy.ops.import_scene.fbx(filepath=main_file_path)
676 |                         elif file_format == "obj":
677 |                             bpy.ops.import_scene.obj(filepath=main_file_path)  # in Blender 4.x this is bpy.ops.wm.obj_import
678 | elif file_format == "blend":
679 | with bpy.data.libraries.load(main_file_path, link=False) as (data_from, data_to):
680 | data_to.objects = data_from.objects
681 | for obj in data_to.objects:
682 | if obj is not None:
683 | bpy.context.collection.objects.link(obj)
684 | else:
685 | return {"error": f"Unsupported model format: {file_format}"}
686 | imported_objects = [obj.name for obj in bpy.context.selected_objects]
687 |
688 | return {
689 | "success": True,
690 | "message": f"Model {asset_id} imported successfully",
691 | "imported_objects": imported_objects
692 | }
693 | except Exception as e:
694 | return {"error": f"Failed to import model: {str(e)}"}
695 |                     finally:
696 |                         try:
697 |                             shutil.rmtree(temp_dir)
698 |                         except Exception:
699 |                             print(f"Failed to clean up temporary directory: {temp_dir}")
700 | else:
701 |                     return {"error": f"Model '{asset_id}': format '{file_format}' or resolution '{resolution}' is not available"}
702 | else:
703 | return {"error": f"Unsupported asset type: {asset_type}"}
704 | except Exception as e:
705 | return {"error": f"Failed to download asset: {str(e)}"}
706 |
707 | def set_texture(self, object_name, texture_id):
708 | """Apply a previously downloaded Polyhaven texture."""
709 | try:
710 | obj = bpy.data.objects.get(object_name)
711 | if not obj:
712 | return {"error": f"Object not found: {object_name}"}
713 | if not hasattr(obj, 'data') or not hasattr(obj.data, 'materials'):
714 | return {"error": f"Object {object_name} cannot accept materials"}
715 |
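            # Collect images previously created by download_polyhaven_asset; they are named
            # "<texture_id>_<map_type>.<ext>", and the map type is recovered from the last
            # underscore-separated chunk (so a key like "nor_gl" reduces to "gl", which is why
            # 'gl'/'dx' appear in the normal-map checks below).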
716 | texture_images = {}
717 | for img in bpy.data.images:
718 | if img.name.startswith(texture_id + "_"):
719 | map_type = img.name.split('_')[-1].split('.')[0]
720 | img.reload()
721 | if map_type.lower() in ['color', 'diffuse', 'albedo']:
722 | try:
723 | img.colorspace_settings.name = 'sRGB'
724 | except:
725 | pass
726 | else:
727 | try:
728 | img.colorspace_settings.name = 'Non-Color'
729 | except:
730 | pass
731 | if not img.packed_file:
732 | img.pack()
733 | texture_images[map_type] = img
734 | print(f"Loaded: {map_type} - {img.name}")
735 | print(f"Size: {img.size[0]}x{img.size[1]}")
736 | print(f"Colorspace: {img.colorspace_settings.name}")
737 | print(f"Format: {img.file_format}")
738 | print(f"Packed: {bool(img.packed_file)}")
739 |
740 | if not texture_images:
741 | return {"error": f"No images found for: {texture_id}."}
742 |
743 | new_mat_name = f"{texture_id}_material_{object_name}"
744 | existing_mat = bpy.data.materials.get(new_mat_name)
745 | if existing_mat:
746 | bpy.data.materials.remove(existing_mat)
747 |
748 | new_mat = bpy.data.materials.new(name=new_mat_name)
749 | new_mat.use_nodes = True
750 | nodes = new_mat.node_tree.nodes
751 | links = new_mat.node_tree.links
752 | nodes.clear()
753 | output = nodes.new(type='ShaderNodeOutputMaterial')
754 | output.location = (600, 0)
755 | principled = nodes.new(type='ShaderNodeBsdfPrincipled')
756 | principled.location = (300, 0)
757 | links.new(principled.outputs[0], output.inputs[0])
758 | tex_coord = nodes.new(type='ShaderNodeTexCoord')
759 | tex_coord.location = (-800, 0)
760 | mapping = nodes.new(type='ShaderNodeMapping')
761 | mapping.location = (-600, 0)
762 | mapping.vector_type = 'TEXTURE'
763 | links.new(tex_coord.outputs['UV'], mapping.inputs['Vector'])
764 | x_pos = -400
765 | y_pos = 300
766 |
767 | for map_type, image in texture_images.items():
768 | tex_node = nodes.new(type='ShaderNodeTexImage')
769 | tex_node.location = (x_pos, y_pos)
770 | tex_node.image = image
771 |
772 | if map_type.lower() in ['color', 'diffuse', 'albedo']:
773 | try:
774 | tex_node.image.colorspace_settings.name = 'sRGB'
775 | except:
776 | pass
777 | else:
778 | try:
779 | tex_node.image.colorspace_settings.name = 'Non-Color'
780 | except:
781 | pass
782 | links.new(mapping.outputs['Vector'], tex_node.inputs['Vector'])
783 | if map_type.lower() in ['color', 'diffuse', 'albedo']:
784 | links.new(tex_node.outputs['Color'], principled.inputs['Base Color'])
785 | elif map_type.lower() in ['roughness', 'rough']:
786 | links.new(tex_node.outputs['Color'], principled.inputs['Roughness'])
787 | elif map_type.lower() in ['metallic', 'metalness', 'metal']:
788 | links.new(tex_node.outputs['Color'], principled.inputs['Metallic'])
789 | elif map_type.lower() in ['normal', 'nor', 'dx', 'gl']:
790 | normal_map = nodes.new(type='ShaderNodeNormalMap')
791 | normal_map.location = (x_pos + 200, y_pos)
792 | links.new(tex_node.outputs['Color'], normal_map.inputs['Color'])
793 | links.new(normal_map.outputs['Normal'], principled.inputs['Normal'])
794 | elif map_type.lower() in ['displacement', 'disp', 'height']:
795 | disp_node = nodes.new(type='ShaderNodeDisplacement')
796 | disp_node.location = (x_pos + 200, y_pos - 200)
797 | disp_node.inputs['Scale'].default_value = 0.1
798 | links.new(tex_node.outputs['Color'], disp_node.inputs['Height'])
799 | links.new(disp_node.outputs['Displacement'], output.inputs['Displacement'])
800 |
801 | y_pos -= 250
802 |
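            # Second pass: index the image texture nodes by map type and (re)wire the priority
            # connections. An 'arm' map packs AO/Roughness/Metallic into its R/G/B channels, so it
            # is split with a Separate RGB node and only used for channels without a dedicated map.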
803 | texture_nodes = {}
804 | for node in nodes:
805 | if node.type == 'TEX_IMAGE' and node.image:
806 | for map_type, image in texture_images.items():
807 | if node.image == image:
808 | texture_nodes[map_type] = node
809 | break
810 | for map_name in ['color', 'diffuse', 'albedo']:
811 | if map_name in texture_nodes:
812 | links.new(texture_nodes[map_name].outputs['Color'], principled.inputs['Base Color'])
813 | print(f"Connected {map_name} to Base Color")
814 | break
815 | for map_name in ['roughness', 'rough']:
816 | if map_name in texture_nodes:
817 | links.new(texture_nodes[map_name].outputs['Color'], principled.inputs['Roughness'])
818 | print(f"Connected {map_name} to Roughness")
819 | break
820 |
821 | for map_name in ['metallic', 'metalness', 'metal']:
822 | if map_name in texture_nodes:
823 | links.new(texture_nodes[map_name].outputs['Color'], principled.inputs['Metallic'])
824 | print(f"Connected {map_name} to Metallic")
825 | break
826 | for map_name in ['gl', 'dx', 'nor']:
827 | if map_name in texture_nodes:
828 | normal_map_node = nodes.new(type='ShaderNodeNormalMap')
829 | normal_map_node.location = (100, 100)
830 | links.new(texture_nodes[map_name].outputs['Color'], normal_map_node.inputs['Color'])
831 | links.new(normal_map_node.outputs['Normal'], principled.inputs['Normal'])
832 | print(f"Connected {map_name} to Normal")
833 | break
834 | for map_name in ['displacement', 'disp', 'height']:
835 | if map_name in texture_nodes:
836 | disp_node = nodes.new(type='ShaderNodeDisplacement')
837 | disp_node.location = (300, -200)
838 | disp_node.inputs['Scale'].default_value = 0.1
839 | links.new(texture_nodes[map_name].outputs['Color'], disp_node.inputs['Height'])
840 | links.new(disp_node.outputs['Displacement'], output.inputs['Displacement'])
841 | print(f"Connected {map_name} to Displacement")
842 | break
843 | if 'arm' in texture_nodes:
844 |                 separate_rgb = nodes.new(type='ShaderNodeSeparateRGB')  # newer Blender versions may require 'ShaderNodeSeparateColor'
845 | separate_rgb.location = (-200, -100)
846 | links.new(texture_nodes['arm'].outputs['Color'], separate_rgb.inputs['Image'])
847 | if not any(map_name in texture_nodes for map_name in ['roughness', 'rough']):
848 | links.new(separate_rgb.outputs['G'], principled.inputs['Roughness'])
849 | print("Connected ARM.G to Roughness")
850 | if not any(map_name in texture_nodes for map_name in ['metallic', 'metalness', 'metal']):
851 | links.new(separate_rgb.outputs['B'], principled.inputs['Metallic'])
852 | print("Connected ARM.B to Metallic")
853 | base_color_node = None
854 | for map_name in ['color', 'diffuse', 'albedo']:
855 | if map_name in texture_nodes:
856 | base_color_node = texture_nodes[map_name]
857 | break
858 | if base_color_node:
859 | mix_node = nodes.new(type='ShaderNodeMixRGB')
860 | mix_node.location = (100, 200)
861 | mix_node.blend_type = 'MULTIPLY'
862 | mix_node.inputs['Fac'].default_value = 0.8
863 | for link in base_color_node.outputs['Color'].links:
864 | if link.to_socket == principled.inputs['Base Color']:
865 | links.remove(link)
866 | links.new(base_color_node.outputs['Color'], mix_node.inputs[1])
867 | links.new(separate_rgb.outputs['R'], mix_node.inputs[2])
868 | links.new(mix_node.outputs['Color'], principled.inputs['Base Color'])
869 | print("Connected ARM.R to AO mix with Base Color")
870 |
871 | if 'ao' in texture_nodes:
872 | base_color_node = None
873 | for map_name in ['color', 'diffuse', 'albedo']:
874 | if map_name in texture_nodes:
875 | base_color_node = texture_nodes[map_name]
876 | break
877 |
878 | if base_color_node:
879 | mix_node = nodes.new(type='ShaderNodeMixRGB')
880 | mix_node.location = (100, 200)
881 | mix_node.blend_type = 'MULTIPLY'
882 | mix_node.inputs['Fac'].default_value = 0.8
883 |
884 | for link in base_color_node.outputs['Color'].links:
885 | if link.to_socket == principled.inputs['Base Color']:
886 | links.remove(link)
887 |
888 | links.new(base_color_node.outputs['Color'], mix_node.inputs[1])
889 | links.new(texture_nodes['ao'].outputs['Color'], mix_node.inputs[2])
890 | links.new(mix_node.outputs['Color'], principled.inputs['Base Color'])
891 | print("Connected AO to mix with Base Color")
892 |
893 |             # Clear existing material slots so the new material is the only one assigned
894 |             obj.data.materials.clear()
895 |
896 | obj.data.materials.append(new_mat)
897 | bpy.context.view_layer.objects.active = obj
898 | obj.select_set(True)
899 | bpy.context.view_layer.update()
900 | texture_maps = list(texture_images.keys())
901 |
902 | material_info = {
903 | "name": new_mat.name,
904 | "has_nodes": new_mat.use_nodes,
905 | "node_count": len(new_mat.node_tree.nodes),
906 | "texture_nodes": []
907 | }
908 |
909 | for node in new_mat.node_tree.nodes:
910 | if node.type == 'TEX_IMAGE' and node.image:
911 | connections = []
912 | for output in node.outputs:
913 | for link in output.links:
914 | connections.append(f"{output.name} → {link.to_node.name}.{link.to_socket.name}")
915 |
916 | material_info["texture_nodes"].append({
917 | "name": node.name,
918 | "image": node.image.name,
919 | "colorspace": node.image.colorspace_settings.name,
920 | "connections": connections
921 | })
922 |
923 | return {
924 | "success": True,
925 | "message": f"Created new material and applied texture {texture_id} to {object_name}",
926 | "material": new_mat.name,
927 | "maps": texture_maps,
928 | "material_info": material_info
929 | }
930 |
931 | except Exception as e:
932 | print(f"Error in set_texture: {str(e)}")
933 | traceback.print_exc()
934 | return {"error": f"Failed to apply texture: {str(e)}"}
935 |
936 | def get_polyhaven_status(self):
937 | enabled = bpy.context.scene.blendermcp_use_polyhaven
938 | if enabled:
939 | return {"enabled": True, "message": "PolyHaven integration is enabled and ready to use."}
940 | else:
941 | return {
942 | "enabled": False,
943 | "message": """PolyHaven integration is currently disabled. To enable it:
944 | 1. In the 3D Viewport, find the BlenderMCP panel in the sidebar (press N if hidden)
945 | 2. Check the 'Use assets from Poly Haven' checkbox
946 | 3. Restart the connection"""
947 | }
948 |
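# UI panel in the 3D Viewport sidebar (N-panel), under the "BlenderMCP" tab: exposes the port,
# the Poly Haven toggle, and the Start/Stop buttons for the server.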
949 | class BLENDERMCP_PT_Panel(bpy.types.Panel):
950 | bl_label = "Blender MCP"
951 | bl_idname = "BLENDERMCP_PT_Panel"
952 | bl_space_type = 'VIEW_3D'
953 | bl_region_type = 'UI'
954 | bl_category = 'BlenderMCP'
955 |
956 | def draw(self, context):
957 | layout = self.layout
958 | scene = context.scene
959 |
960 | layout.prop(scene, "blendermcp_port")
961 | layout.prop(scene, "blendermcp_use_polyhaven", text="Use assets from Poly Haven")
962 |
963 | if not scene.blendermcp_server_running:
964 | layout.operator("blendermcp.start_server", text="Start MCP Server")
965 | else:
966 | layout.operator("blendermcp.stop_server", text="Stop MCP Server")
967 | layout.label(text=f"Running on port {scene.blendermcp_port}")
968 |
969 | class BLENDERMCP_OT_StartServer(bpy.types.Operator):
970 | bl_idname = "blendermcp.start_server"
971 |     bl_label = "Connect to Local AI"
972 |     bl_description = "Start the BlenderMCP server to connect with a local AI model"
973 |
974 | def execute(self, context):
975 | scene = context.scene
976 | if not hasattr(bpy.types, "blendermcp_server") or not bpy.types.blendermcp_server:
977 | bpy.types.blendermcp_server = BlenderMCPServer(port=scene.blendermcp_port)
978 | bpy.types.blendermcp_server.start()
979 | scene.blendermcp_server_running = True
980 | return {'FINISHED'}
981 |
982 | class BLENDERMCP_OT_StopServer(bpy.types.Operator):
983 | bl_idname = "blendermcp.stop_server"
984 |     bl_label = "Stop the connection"
985 |     bl_description = "Stop the BlenderMCP server"
986 |
987 | def execute(self, context):
988 | scene = context.scene
989 | if hasattr(bpy.types, "blendermcp_server") and bpy.types.blendermcp_server:
990 | bpy.types.blendermcp_server.stop()
991 | del bpy.types.blendermcp_server
992 | scene.blendermcp_server_running = False
993 | return {'FINISHED'}
994 |
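# Register the scene-level settings (port, server-running flag, Poly Haven toggle) and the UI classes.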
995 | def register():
996 | bpy.types.Scene.blendermcp_port = IntProperty(
997 | name="Port",
998 | description="Port for the BlenderMCP server",
999 | default=9876,
1000 | min=1024,
1001 | max=65535
1002 | )
1003 | bpy.types.Scene.blendermcp_server_running = bpy.props.BoolProperty(
1004 | name="Server Running",
1005 | default=False
1006 | )
1007 | bpy.types.Scene.blendermcp_use_polyhaven = bpy.props.BoolProperty(
1008 | name="Use Poly Haven",
1009 | description="Enable Poly Haven asset integration",
1010 | default=False
1011 | )
1012 | bpy.utils.register_class(BLENDERMCP_PT_Panel)
1013 | bpy.utils.register_class(BLENDERMCP_OT_StartServer)
1014 | bpy.utils.register_class(BLENDERMCP_OT_StopServer)
1015 | print("BlenderMCP addon registered")
1016 |
1017 | def unregister():
1018 | if hasattr(bpy.types, "blendermcp_server") and bpy.types.blendermcp_server:
1019 | bpy.types.blendermcp_server.stop()
1020 | del bpy.types.blendermcp_server
1021 |
1022 | bpy.utils.unregister_class(BLENDERMCP_PT_Panel)
1023 | bpy.utils.unregister_class(BLENDERMCP_OT_StartServer)
1024 | bpy.utils.unregister_class(BLENDERMCP_OT_StopServer)
1025 | del bpy.types.Scene.blendermcp_port
1026 | del bpy.types.Scene.blendermcp_server_running
1027 | del bpy.types.Scene.blendermcp_use_polyhaven
1028 | print("BlenderMCP addon unregistered")
1029 |
1030 | if __name__ == "__main__":
1031 | register()
```