# Directory Structure
```
├── .editorconfig
├── .flake8
├── .github
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── SETUP_CICD.md
│   └── workflows
│       ├── ci.yml
│       ├── code-quality.yml
│       ├── publish.yml
│       └── version-bump.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .vscode
│   └── launch.json
├── CHANGELOG.md
├── CLAUDE_INTEGRATION.md
├── CLAUDE.md
├── CONTRIBUTING.md
├── env.example
├── fix-lint-deps.sh
├── images
│   └── penpot-mcp.png
├── lint.py
├── LINTING.md
├── Makefile
├── penpot_mcp
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   └── penpot_api.py
│   ├── resources
│   │   ├── penpot-schema.json
│   │   └── penpot-tree-schema.json
│   ├── server
│   │   ├── __init__.py
│   │   ├── client.py
│   │   └── mcp_server.py
│   ├── tools
│   │   ├── __init__.py
│   │   ├── cli
│   │   │   ├── __init__.py
│   │   │   ├── tree_cmd.py
│   │   │   └── validate_cmd.py
│   │   └── penpot_tree.py
│   └── utils
│       ├── __init__.py
│       ├── cache.py
│       ├── config.py
│       └── http_server.py
├── pyproject.toml
├── README.md
├── SECURITY.md
├── test_credentials.py
├── tests
│   ├── __init__.py
│   ├── conftest.py
│   ├── test_cache.py
│   ├── test_config.py
│   ├── test_mcp_server.py
│   └── test_penpot_tree.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.editorconfig:
--------------------------------------------------------------------------------
```
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
indent_size = 4
[*.{json,yml,yaml}]
indent_size = 2
[*.md]
trim_trailing_whitespace = false
[Makefile]
indent_style = tab
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# Virtual Environment
venv/
ENV/
env/
.venv/
# uv
.python-version
# Environment variables
.env
# IDE files
.idea/
.vscode/
*.swp
*.swo
# OS specific
.DS_Store
Thumbs.db
# Logs
logs/
*.log
*.json
!penpot-schema.json
!penpot-tree-schema.json
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
pytestdebug.log
```
--------------------------------------------------------------------------------
/.flake8:
--------------------------------------------------------------------------------
```
[flake8]
max-line-length = 88
exclude =
    .venv,
    venv,
    __pycache__,
    .git,
    build,
    dist,
    *.egg-info,
    node_modules,
    .tox,
    .pytest_cache
ignore =
    # Line too long (handled by max-line-length)
    E501,
    # Missing docstrings (can be addressed later)
    D100, D101, D102, D103, D105, D107,
    # Docstring formatting (can be addressed later)
    D200, D205, D401,
    # Whitespace issues (auto-fixable)
    W293, W291, W292,
    # Unused imports (will be cleaned up)
    F401,
    # Unused variables (will be cleaned up)
    F841,
    # Bare except (will be improved)
    E722,
    # f-string without placeholders
    F541,
    # Comparison to True (minor issue)
    E712,
    # Continuation line formatting
    E128,
    # Blank line formatting
    E302, E306
per-file-ignores =
    # Tests can be more lenient
    tests/*:D,E,F,W
    # CLI tools can be more lenient
    */cli/*:D401
    # Allow unused imports in __init__.py files
    */__init__.py:F401
    # Allow long lines in configuration files
    */config.py:E501
select = E,W,F
```
--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
  - repo: https://github.com/pycqa/flake8
    rev: 6.1.0
    hooks:
      - id: flake8
        additional_dependencies: [flake8-docstrings]
        types: [python]
        files: ^(penpot_mcp|tests)/.*\.py$
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
        args: ["--profile", "black", "--filter-files"]
        types: [python]
        files: ^(penpot_mcp|tests)/.*\.py$
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.13.0
    hooks:
      - id: pyupgrade
        args: [--py312-plus]
        types: [python]
        files: ^(penpot_mcp|tests)/.*\.py$
  - repo: https://github.com/pre-commit/mirrors-autopep8
    rev: v2.0.4
    hooks:
      - id: autopep8
        args: [--aggressive, --aggressive, --select=E,W]
        types: [python]
        files: ^(penpot_mcp|tests)/.*\.py$
        additional_dependencies: [setuptools>=65.5.0]
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Penpot MCP Server 🎨🤖
<p align="center">
<img src="images/penpot-mcp.png" alt="Penpot MCP Logo" width="400"/>
</p>
<p align="center">
<strong>AI-Powered Design Workflow Automation</strong><br>
Connect Claude AI and other LLMs to Penpot designs via Model Context Protocol
</p>
<p align="center">
<a href="https://github.com/montevive/penpot-mcp/blob/main/LICENSE">
<img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License: MIT">
</a>
<a href="https://www.python.org/downloads/">
<img src="https://img.shields.io/badge/python-3.12%2B-blue" alt="Python Version">
</a>
<a href="https://pypi.org/project/penpot-mcp/">
<img src="https://img.shields.io/pypi/v/penpot-mcp" alt="PyPI version">
</a>
<a href="https://github.com/montevive/penpot-mcp/actions">
<img src="https://img.shields.io/github/actions/workflow/status/montevive/penpot-mcp/ci.yml?branch=main" alt="Build Status">
</a>
</p>
---
## 🚀 What is Penpot MCP?
**Penpot MCP** is a revolutionary Model Context Protocol (MCP) server that bridges the gap between AI language models and [Penpot](https://penpot.app/), the open-source design and prototyping platform. This integration enables AI assistants like Claude (in both Claude Desktop and Cursor IDE) to understand, analyze, and interact with your design files programmatically.
### 🎯 Key Benefits
- **🤖 AI-Native Design Analysis**: Let Claude AI analyze your UI/UX designs, provide feedback, and suggest improvements
- **⚡ Automated Design Workflows**: Streamline repetitive design tasks with AI-powered automation
- **🔍 Intelligent Design Search**: Find design components and patterns across your projects using natural language
- **📊 Design System Management**: Automatically document and maintain design systems with AI assistance
- **🎨 Cross-Platform Integration**: Works with any MCP-compatible AI assistant (Claude Desktop, Cursor IDE, etc.)
## 🎥 Demo Video
Check out our demo video to see Penpot MCP in action:
[![Penpot MCP Demo](https://img.youtube.com/vi/vOMEh-ONN1k/0.jpg)](https://www.youtube.com/watch?v=vOMEh-ONN1k)
## ✨ Features
### 🔌 Core Capabilities
- **MCP Protocol Implementation**: Full compliance with Model Context Protocol standards
- **Real-time Design Access**: Direct integration with Penpot's API for live design data
- **Component Analysis**: AI-powered analysis of design components and layouts
- **Export Automation**: Programmatic export of design assets in multiple formats
- **Design Validation**: Automated design system compliance checking
### 🛠️ Developer Tools
- **Command-line Utilities**: Powerful CLI tools for design file analysis and validation
- **Python SDK**: Comprehensive Python library for custom integrations
- **REST API**: HTTP endpoints for web application integration
- **Extensible Architecture**: Plugin system for custom AI workflows
### 🎨 AI Integration Features
- **Claude Desktop & Cursor Integration**: Native support for Claude AI assistant in both Claude Desktop and Cursor IDE
- **Design Context Sharing**: Provide design context to AI models for better responses
- **Visual Component Recognition**: AI can "see" and understand design components
- **Natural Language Queries**: Ask questions about your designs in plain English
- **IDE Integration**: Seamless integration with modern development environments
## 💡 Use Cases
### For Designers
- **Design Review Automation**: Get instant AI feedback on accessibility, usability, and design principles
- **Component Documentation**: Automatically generate documentation for design systems
- **Design Consistency Checks**: Ensure brand guidelines compliance across projects
- **Asset Organization**: AI-powered tagging and categorization of design components
### For Developers
- **Design-to-Code Workflows**: Bridge the gap between design and development with AI assistance
- **API Integration**: Programmatic access to design data for custom tools and workflows
- **Automated Testing**: Generate visual regression tests from design specifications
- **Design System Sync**: Keep design tokens and code components in sync
### For Product Teams
- **Design Analytics**: Track design system adoption and component usage
- **Collaboration Enhancement**: AI-powered design reviews and feedback collection
- **Workflow Optimization**: Automate repetitive design operations and approvals
- **Cross-tool Integration**: Connect Penpot with other tools in your design workflow
## 🚀 Quick Start
### Prerequisites
- **Python 3.12+** (Latest Python recommended for optimal performance)
- **Penpot Account** ([Sign up free](https://penpot.app/))
- **Claude Desktop or Cursor IDE** (Optional, for AI integration)
### Installation
#### Option 1: Install from PyPI
```bash
pip install penpot-mcp
```
#### Option 2: Using uv (recommended for modern Python development)
```bash
# Install directly with uvx (when published to PyPI)
uvx penpot-mcp
# For local development, use uvx with local path
uvx --from . penpot-mcp
# Or install in a project with uv
uv add penpot-mcp
```
#### Option 3: Install from source
```bash
# Clone the repository
git clone https://github.com/montevive/penpot-mcp.git
cd penpot-mcp
# Using uv (recommended)
uv sync
uv run penpot-mcp
# Or using traditional pip
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -e .
```
### Configuration
Create a `.env` file based on `env.example` with your Penpot credentials:
```
PENPOT_API_URL=https://design.penpot.app/api
PENPOT_USERNAME=your_penpot_username
PENPOT_PASSWORD=your_penpot_password
PORT=5000
DEBUG=true
```
> **⚠️ CloudFlare Protection Notice**: The Penpot cloud site (penpot.app) uses CloudFlare protection that may occasionally block API requests. If you encounter authentication errors or blocked requests:
> 1. Open your web browser and navigate to [https://design.penpot.app](https://design.penpot.app)
> 2. Log in to your Penpot account
> 3. Complete any CloudFlare human verification challenges if prompted
> 4. Once verified, the API requests should work normally for a period of time
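The variables above can be read in Python along these lines. This is a minimal illustrative sketch, not the project's actual loader (which lives in `penpot_mcp/utils/config.py` and may behave differently):

```python
import os

def load_settings() -> dict:
    """Read Penpot MCP settings from the environment with safe defaults."""
    return {
        "api_url": os.environ.get("PENPOT_API_URL", "https://design.penpot.app/api"),
        "username": os.environ.get("PENPOT_USERNAME"),
        "password": os.environ.get("PENPOT_PASSWORD"),
        "port": int(os.environ.get("PORT", "5000")),
        # Treat anything other than a case-insensitive "true" as False
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }
```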
## Usage
### Running the MCP Server
```bash
# Using uvx (when published to PyPI)
uvx penpot-mcp
# Using uvx for local development
uvx --from . penpot-mcp
# Using uv in a project (recommended for local development)
uv run penpot-mcp
# Using the entry point (if installed)
penpot-mcp
# Or using the module directly
python -m penpot_mcp.server.mcp_server
```
### Debugging the MCP Server
To debug the MCP server, you can:
1. Enable debug mode in your `.env` file by setting `DEBUG=true`
2. Use the Penpot API CLI for testing API operations:
```bash
# Test API connection with debug output
python -m penpot_mcp.api.penpot_api --debug list-projects
# Get details for a specific project
python -m penpot_mcp.api.penpot_api --debug get-project --id YOUR_PROJECT_ID
# List files in a project
python -m penpot_mcp.api.penpot_api --debug list-files --project-id YOUR_PROJECT_ID
# Get file details
python -m penpot_mcp.api.penpot_api --debug get-file --file-id YOUR_FILE_ID
```
### Command-line Tools
The package includes utility command-line tools:
```bash
# Generate a tree visualization of a Penpot file
penpot-tree path/to/penpot_file.json
# Validate a Penpot file against the schema
penpot-validate path/to/penpot_file.json
```
### MCP Monitoring & Testing
#### MCP CLI Monitor
```bash
# Start your MCP server in one terminal
python -m penpot_mcp.server.mcp_server
# In another terminal, use mcp-cli to monitor and interact with your server
python -m mcp.cli monitor python -m penpot_mcp.server.mcp_server
# Or connect to an already running server on a specific port
python -m mcp.cli monitor --port 5000
```
#### MCP Inspector
```bash
# Start your MCP server in one terminal
python -m penpot_mcp.server.mcp_server
# In another terminal, run the MCP Inspector (requires Node.js)
npx @modelcontextprotocol/inspector
```
### Using the Client
```bash
# Run the example client
penpot-client
```
## MCP Resources & Tools
### Resources
- `server://info` - Server status and information
- `penpot://schema` - Penpot API schema as JSON
- `penpot://tree-schema` - Penpot object tree schema as JSON
- `rendered-component://{component_id}` - Rendered component images
- `penpot://cached-files` - List of cached Penpot files
### Tools
- `list_projects` - List all Penpot projects
- `get_project_files` - Get files for a specific project
- `get_file` - Retrieve a Penpot file by its ID and cache it
- `export_object` - Export a Penpot object as an image
- `get_object_tree` - Get the object tree structure for a Penpot object
- `search_object` - Search for objects within a Penpot file by name
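The caching behavior behind `get_file` can be pictured as a simple in-memory TTL cache. The sketch below is illustrative only (the real implementation lives in `penpot_mcp/utils/cache.py`); the injectable `clock` parameter is an assumption added here to make the expiry logic easy to test:

```python
import time

class MemoryCache:
    """Illustrative in-memory cache whose entries expire after a TTL."""

    def __init__(self, ttl_seconds=600, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for deterministic tests
        self._store = {}

    def set(self, key, value):
        self._store[key] = (self.clock(), value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[key]    # evict expired entry
            return None
        return value
```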
## AI Integration
The Penpot MCP server can be integrated with AI assistants using the Model Context Protocol. It supports both Claude Desktop and Cursor IDE for seamless design workflow automation.
### Claude Desktop Integration
For detailed Claude Desktop setup instructions, see [CLAUDE_INTEGRATION.md](CLAUDE_INTEGRATION.md).
Add the following configuration to your Claude Desktop config file (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS or `%APPDATA%\Claude\claude_desktop_config.json` on Windows):
```json
{
  "mcpServers": {
    "penpot": {
      "command": "uvx",
      "args": ["penpot-mcp"],
      "env": {
        "PENPOT_API_URL": "https://design.penpot.app/api",
        "PENPOT_USERNAME": "your_penpot_username",
        "PENPOT_PASSWORD": "your_penpot_password"
      }
    }
  }
}
```
### Cursor IDE Integration
Cursor IDE supports MCP servers through its AI integration features. To configure Penpot MCP with Cursor:
1. **Install the MCP server** (if not already installed):
```bash
pip install penpot-mcp
```
2. **Configure Cursor settings** by adding the MCP server to your Cursor configuration. Open Cursor settings and add:
```json
{
  "mcpServers": {
    "penpot": {
      "command": "uvx",
      "args": ["penpot-mcp"],
      "env": {
        "PENPOT_API_URL": "https://design.penpot.app/api",
        "PENPOT_USERNAME": "your_penpot_username",
        "PENPOT_PASSWORD": "your_penpot_password"
      }
    }
  }
}
```
3. **Alternative: Use environment variables** by creating a `.env` file in your project root:
```bash
PENPOT_API_URL=https://design.penpot.app/api
PENPOT_USERNAME=your_penpot_username
PENPOT_PASSWORD=your_penpot_password
```
4. **Start the MCP server** in your project:
```bash
# In your project directory
penpot-mcp
```
5. **Use in Cursor**: Once configured, you can interact with your Penpot designs directly in Cursor by asking questions like:
- "Show me all projects in my Penpot account"
- "Analyze the design components in project X"
- "Export the main button component as an image"
- "What design patterns are used in this file?"
### Key Integration Features
Both Claude Desktop and Cursor integration provide:
- **Direct access** to Penpot projects and files
- **Visual component analysis** with AI-powered insights
- **Design export capabilities** for assets and components
- **Natural language queries** about your design files
- **Real-time design feedback** and suggestions
- **Design system documentation** generation
## Package Structure
```
penpot_mcp/
├── api/                  # Penpot API client
├── server/               # MCP server implementation
│   ├── mcp_server.py     # Main MCP server
│   └── client.py         # Client implementation
├── tools/                # Utility tools
│   ├── cli/              # Command-line interfaces
│   └── penpot_tree.py    # Penpot object tree visualization
├── resources/            # Resource files and schemas
└── utils/                # Helper utilities
```
## Development
### Testing
The project uses pytest for testing:
```bash
# Using uv (recommended)
uv sync --extra dev
uv run pytest
# Run with coverage
uv run pytest --cov=penpot_mcp tests/
# Using traditional pip
pip install -e ".[dev]"
pytest
pytest --cov=penpot_mcp tests/
```
### Linting
```bash
# Using uv (recommended)
uv sync --extra dev
# Set up pre-commit hooks
uv run pre-commit install
# Run linting
uv run python lint.py
# Auto-fix linting issues
uv run python lint.py --autofix
# Using traditional pip
pip install -e ".[dev]"
pre-commit install
./lint.py
./lint.py --autofix
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Please make sure your code follows the project's coding standards and includes appropriate tests.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- [Penpot](https://penpot.app/) - The open-source design and prototyping platform
- [Model Context Protocol](https://modelcontextprotocol.io) - The standardized protocol for AI model context
```
--------------------------------------------------------------------------------
/CLAUDE.md:
--------------------------------------------------------------------------------
```markdown
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Penpot MCP Server is a Python-based Model Context Protocol (MCP) server that bridges AI language models with Penpot, an open-source design platform. It enables programmatic interaction with design files through a well-structured API.
## Key Commands
### Development Setup
```bash
# Install dependencies (recommended)
uv sync --extra dev
# Run the MCP server
uv run penpot-mcp
# Run tests
uv run pytest
uv run pytest --cov=penpot_mcp tests/ # with coverage
# Lint and fix code
uv run python lint.py # check issues
uv run python lint.py --autofix # auto-fix issues
```
### Running the Server
```bash
# Default stdio mode (for Claude Desktop/Cursor)
make mcp-server
# SSE mode (for debugging with inspector)
make mcp-server-sse
# Launch MCP inspector (requires SSE mode)
make mcp-inspector
```
### CLI Tools
```bash
# Generate tree visualization
penpot-tree path/to/penpot_file.json
# Validate Penpot file
penpot-validate path/to/penpot_file.json
```
## Architecture Overview
### Core Components
1. **MCP Server** (`penpot_mcp/server/mcp_server.py`)
   - Built on the FastMCP framework
   - Implements resources and tools for Penpot interaction
   - Memory cache with 10-minute TTL
   - Supports stdio (default) and SSE modes
2. **API Client** (`penpot_mcp/api/penpot_api.py`)
   - REST client for the Penpot platform
   - Transit+JSON format handling
   - Cookie-based authentication with auto-refresh
   - Lazy authentication pattern
3. **Key Design Patterns**
   - **Authentication**: Cookie-based with automatic re-authentication on 401/403
   - **Caching**: In-memory file cache to reduce API calls
   - **Resource/Tool Duality**: Resources can be exposed as tools via the RESOURCES_AS_TOOLS config
   - **Transit Format**: Special handling for UUIDs (`~u` prefix) and keywords (`~:` prefix)
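The Transit scalar handling described above can be sketched as a small recursive decoder. This is a hypothetical helper for orientation only, not the project's actual conversion code:

```python
def from_transit(value):
    """Recursively strip Transit scalar prefixes from decoded JSON data."""
    if isinstance(value, str):
        if value.startswith("~u"):   # UUID, e.g. "~uabc-123" -> "abc-123"
            return value[2:]
        if value.startswith("~:"):   # keyword, e.g. "~:name" -> "name"
            return value[2:]
        return value
    if isinstance(value, list):
        return [from_transit(v) for v in value]
    if isinstance(value, dict):
        # Keys are often keywords ("~:id"), so convert them as well
        return {from_transit(k): from_transit(v) for k, v in value.items()}
    return value
```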
### Available Tools/Functions
- `list_projects`: Get all Penpot projects
- `get_project_files`: List files in a project
- `get_file`: Retrieve and cache file data
- `search_object`: Search design objects by name (regex)
- `get_object_tree`: Get filtered object tree with screenshot
- `export_object`: Export design objects as images
- `penpot_tree_schema`: Get schema for object tree fields
### Environment Configuration
Create a `.env` file with:
```env
PENPOT_API_URL=https://design.penpot.app/api
PENPOT_USERNAME=your_username
PENPOT_PASSWORD=your_password
ENABLE_HTTP_SERVER=true # for image serving
RESOURCES_AS_TOOLS=false # MCP resource mode
DEBUG=true # debug logging
```
### Working with the Codebase
1. **Adding New Tools**: Decorate functions with `@self.mcp.tool()` in mcp_server.py
2. **API Extensions**: Add methods to PenpotAPI class following existing patterns
3. **Error Handling**: Always check for `"error"` keys in API responses
4. **Testing**: Use `test_mode=True` when creating server instances in tests
5. **Transit Format**: Remember to handle Transit+JSON when working with raw API
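The error-handling convention in point 3 can be captured in a small guard. The exception class and function name below are illustrative, not part of the codebase:

```python
class PenpotAPIError(Exception):
    """Raised when a Penpot API response carries an error payload."""

def check_response(data: dict) -> dict:
    """Return the response unchanged, or raise if it reports an error."""
    if "error" in data:
        raise PenpotAPIError(data["error"])
    return data
```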
### Common Workflow for Code Generation
1. List projects → Find target project
2. Get project files → Locate design file
3. Search for component → Find specific element
4. Get tree schema → Understand available fields
5. Get object tree → Retrieve structure with screenshot
6. Export if needed → Get rendered component image
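The steps above can be sketched against a stubbed client. All class names and data below are invented for illustration; the real tools are exposed over MCP, not as a Python class:

```python
import re

class StubPenpot:
    """Hypothetical stand-in for the MCP tools, backed by canned data."""
    _objects = [{"id": "o1", "name": "Primary Button"}, {"id": "o2", "name": "Card"}]

    def list_projects(self):
        return [{"id": "p1", "name": "Website"}]

    def get_project_files(self, project_id):
        return [{"id": "f1", "name": "Homepage"}]

    def search_object(self, file_id, pattern):
        # search_object matches object names by regex
        return [o for o in self._objects if re.search(pattern, o["name"])]

    def get_object_tree(self, file_id, object_id):
        return {"id": object_id, "type": "frame", "children": []}

client = StubPenpot()
project = client.list_projects()[0]                       # 1. find target project
design = client.get_project_files(project["id"])[0]       # 2. locate design file
match = client.search_object(design["id"], r"Button")[0]  # 3. find the element
tree = client.get_object_tree(design["id"], match["id"])  # 5. retrieve structure
```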
### Testing Patterns
- Mock fixtures in `tests/conftest.py`
- Test both stdio and SSE modes
- Verify Transit format conversions
- Check cache behavior and expiration
## Memories
- Keep the current transport format for the current API requests
```
--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------
```markdown
# Security Policy
## Supported Versions
We actively support the following versions of Penpot MCP with security updates:
| Version | Supported |
| ------- | ------------------ |
| 0.1.x | :white_check_mark: |
| < 0.1 | :x: |
## Reporting a Vulnerability
The Penpot MCP team takes security seriously. If you discover a security vulnerability, please follow these steps:
### 🔒 Private Disclosure
**DO NOT** create a public GitHub issue for security vulnerabilities.
Instead, please email us at: **[email protected]**
### 📧 What to Include
Please include the following information in your report:
- **Description**: A clear description of the vulnerability
- **Impact**: What could an attacker accomplish?
- **Reproduction**: Step-by-step instructions to reproduce the issue
- **Environment**: Affected versions, operating systems, configurations
- **Proof of Concept**: Code, screenshots, or other evidence (if applicable)
- **Suggested Fix**: If you have ideas for how to fix the issue
### 🕐 Response Timeline
- **Initial Response**: Within 48 hours
- **Triage**: Within 1 week
- **Fix Development**: Depends on severity and complexity
- **Public Disclosure**: After fix is released and users have time to update
### 🏆 Recognition
We believe in recognizing security researchers who help keep our users safe:
- **Security Hall of Fame**: Public recognition (with your permission)
- **CVE Assignment**: For qualifying vulnerabilities
- **Coordinated Disclosure**: We'll work with you on timing and attribution
## Security Considerations
### 🔐 Authentication & Credentials
- **Penpot Credentials**: Store securely using environment variables or secure credential management
- **API Keys**: Never commit API keys or passwords to version control
- **Environment Files**: Add `.env` files to `.gitignore`
### 🌐 Network Security
- **HTTPS Only**: Always use HTTPS for Penpot API connections
- **Certificate Validation**: Don't disable SSL certificate verification
- **Rate Limiting**: Respect API rate limits to avoid service disruption
### 🛡️ Input Validation
- **User Input**: All user inputs are validated and sanitized
- **File Uploads**: Penpot file parsing includes safety checks
- **API Responses**: External API responses are validated before processing
### 🔍 Data Privacy
- **Minimal Data**: We only access necessary Penpot data
- **No Storage**: Design data is not permanently stored by default
- **User Control**: Users control what data is shared with AI assistants
### 🚀 Deployment Security
- **Dependencies**: Regularly update dependencies for security patches
- **Permissions**: Run with minimal required permissions
- **Isolation**: Use virtual environments or containers
## Security Best Practices for Users
### 🔧 Configuration
```bash
# Use environment variables for sensitive data
export PENPOT_USERNAME="your_username"
export PENPOT_PASSWORD="your_secure_password"
export PENPOT_API_URL="https://design.penpot.app/api"
# Or use a .env file (never commit this!)
echo "PENPOT_USERNAME=your_username" > .env
echo "PENPOT_PASSWORD=your_secure_password" >> .env
echo "PENPOT_API_URL=https://design.penpot.app/api" >> .env
```
### 🔒 Access Control
- **Principle of Least Privilege**: Only grant necessary Penpot permissions
- **Regular Audits**: Review and rotate credentials regularly
- **Team Access**: Use team accounts rather than personal credentials for shared projects
### 🖥️ Local Development
```bash
# Keep your development environment secure
chmod 600 .env # Restrict file permissions
git add .env # This should fail if .gitignore is properly configured
```
### 🤖 AI Integration
- **Data Sensitivity**: Be mindful of what design data you share with AI assistants
- **Public vs Private**: Consider using private AI instances for sensitive designs
- **Audit Logs**: Monitor what data is being accessed and shared
## Vulnerability Disclosure Policy
### 🎯 Scope
This security policy applies to:
- **Penpot MCP Server**: Core MCP protocol implementation
- **API Client**: Penpot API integration code
- **CLI Tools**: Command-line utilities
- **Documentation**: Security-related documentation
### ⚠️ Out of Scope
The following are outside our direct control but we'll help coordinate:
- **Penpot Platform**: Report to Penpot team directly
- **Third-party Dependencies**: We'll help coordinate with upstream maintainers
- **AI Assistant Platforms**: Report to respective platform security teams
### 🚫 Testing Guidelines
When testing for vulnerabilities:
- **DO NOT** test against production Penpot instances without permission
- **DO NOT** access data you don't own
- **DO NOT** perform destructive actions
- **DO** use test accounts and data
- **DO** respect rate limits and terms of service
## Security Updates
### 📢 Notifications
Security updates will be announced through:
- **GitHub Security Advisories**: Primary notification method
- **Release Notes**: Detailed in version release notes
- **Email**: For critical vulnerabilities (if you've subscribed)
### 🔄 Update Process
```bash
# Always update to the latest version for security fixes
pip install --upgrade penpot-mcp
# Or with uv
uv add penpot-mcp@latest
```
## Contact
- **Security Issues**: [email protected]
- **General Questions**: Use [GitHub Discussions](https://github.com/montevive/penpot-mcp/discussions)
- **Bug Reports**: [GitHub Issues](https://github.com/montevive/penpot-mcp/issues)
---
Thank you for helping keep Penpot MCP and our community safe! 🛡️
```
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
```markdown
# Contributing to Penpot MCP 🤝
Thank you for your interest in contributing to Penpot MCP! This project aims to bridge AI assistants with Penpot design tools, and we welcome contributions from developers, designers, and AI enthusiasts.
## 🌟 Ways to Contribute
### For Developers
- **Bug Fixes**: Help us squash bugs and improve stability
- **New Features**: Add new MCP tools, resources, or AI integrations
- **Performance**: Optimize API calls, caching, and response times
- **Documentation**: Improve code documentation and examples
- **Testing**: Add unit tests, integration tests, and edge case coverage
### For Designers
- **Use Case Documentation**: Share how you use Penpot MCP in your workflow
- **Feature Requests**: Suggest new AI-powered design features
- **UI/UX Feedback**: Help improve the developer and user experience
- **Design Examples**: Contribute example Penpot files for testing
### For AI Enthusiasts
- **Prompt Engineering**: Improve AI interaction patterns
- **Model Integration**: Add support for new AI models and assistants
- **Workflow Automation**: Create AI-powered design automation scripts
- **Research**: Explore new applications of AI in design workflows
## 🚀 Getting Started
### 1. Fork and Clone
```bash
# Fork the repository on GitHub, then clone your fork
git clone https://github.com/YOUR_USERNAME/penpot-mcp.git
cd penpot-mcp
```
### 2. Set Up Development Environment
```bash
# Install uv (recommended Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install dependencies and set up development environment
uv sync --extra dev
# Install pre-commit hooks
uv run pre-commit install
```
### 3. Configure Environment
```bash
# Copy environment template
cp env.example .env
# Edit .env with your Penpot credentials
# PENPOT_API_URL=https://design.penpot.app/api
# PENPOT_USERNAME=your_username
# PENPOT_PASSWORD=your_password
```
### 4. Run Tests
```bash
# Run the full test suite
uv run pytest
# Run with coverage
uv run pytest --cov=penpot_mcp
# Run specific test categories
uv run pytest -m "not slow" # Skip slow tests
uv run pytest tests/test_api/ # Test specific module
```
## 🔧 Development Workflow
### Code Style
We use automated code formatting and linting:
```bash
# Run all linting and formatting
uv run python lint.py
# Auto-fix issues where possible
uv run python lint.py --autofix
# Check specific files
uv run flake8 penpot_mcp/
uv run isort penpot_mcp/
```
### Testing Guidelines
- **Unit Tests**: Test individual functions and classes
- **Integration Tests**: Test MCP protocol interactions
- **API Tests**: Test Penpot API integration (use mocks for CI)
- **End-to-End Tests**: Test complete workflows with real data
```bash
# Test structure
tests/
├── unit/ # Fast, isolated tests
├── integration/ # MCP protocol tests
├── api/ # Penpot API tests
└── e2e/ # End-to-end workflow tests
```
### Adding New Features
1. **Create an Issue**: Discuss your idea before implementing
2. **Branch Naming**: Use descriptive names like `feature/ai-design-analysis`
3. **Small PRs**: Keep changes focused and reviewable
4. **Documentation**: Update README, docstrings, and examples
5. **Tests**: Add comprehensive tests for new functionality
### MCP Protocol Guidelines
When adding new MCP tools or resources:
```python
# Follow this pattern for new tools (registered inside the server class,
# matching the `@self.mcp.tool()` convention described in CLAUDE.md)
@self.mcp.tool()
async def new_tool(param1: str, param2: int = 10) -> dict:
    """Brief description of what this tool does.

    Args:
        param1: Description of parameter
        param2: Optional parameter with default

    Returns:
        Dictionary with tool results
    """
    ...  # Implementation here
```
## 📝 Commit Guidelines
We follow [Conventional Commits](https://www.conventionalcommits.org/):
```bash
# Format: type(scope): description
git commit -m "feat(api): add design component analysis tool"
git commit -m "fix(mcp): handle connection timeout errors"
git commit -m "docs(readme): add Claude Desktop setup guide"
git commit -m "test(api): add unit tests for file export"
```
### Commit Types
- `feat`: New features
- `fix`: Bug fixes
- `docs`: Documentation changes
- `test`: Adding or updating tests
- `refactor`: Code refactoring
- `perf`: Performance improvements
- `chore`: Maintenance tasks
## 🐛 Reporting Issues
### Bug Reports
Use our [bug report template](.github/ISSUE_TEMPLATE/bug_report.md) and include:
- Clear reproduction steps
- Environment details (OS, Python version, etc.)
- Error messages and logs
- Expected vs actual behavior
### Feature Requests
Use our [feature request template](.github/ISSUE_TEMPLATE/feature_request.md) and include:
- Use case description
- Proposed solution
- Implementation ideas
- Priority level
## 🔍 Code Review Process
1. **Automated Checks**: All PRs must pass CI/CD checks
2. **Peer Review**: At least one maintainer review required
3. **Testing**: New features must include tests
4. **Documentation**: Update relevant documentation
5. **Backwards Compatibility**: Avoid breaking changes when possible
## 🏆 Recognition
Contributors are recognized in:
- GitHub contributors list
- Release notes for significant contributions
- Special mentions for innovative features
- Community showcase for creative use cases
## 📚 Resources
### Documentation
- [MCP Protocol Specification](https://modelcontextprotocol.io)
- [Penpot API Documentation](https://help.penpot.app/technical-guide/developer-resources/)
- [Claude AI Integration Guide](CLAUDE_INTEGRATION.md)
### Community
- [GitHub Discussions](https://github.com/montevive/penpot-mcp/discussions)
- [Issues](https://github.com/montevive/penpot-mcp/issues)
- [Penpot Community](https://community.penpot.app/)
## 📄 License
By contributing to Penpot MCP, you agree that your contributions will be licensed under the [MIT License](LICENSE).
## ❓ Questions?
- **General Questions**: Use [GitHub Discussions](https://github.com/montevive/penpot-mcp/discussions)
- **Bug Reports**: Create an [issue](https://github.com/montevive/penpot-mcp/issues)
- **Feature Ideas**: Use our [feature request template](.github/ISSUE_TEMPLATE/feature_request.md)
- **Security Issues**: Email us at [email protected]
---
Thank you for helping make Penpot MCP better! 🎨🤖
```
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
```python
"""Package tests."""
```
--------------------------------------------------------------------------------
/penpot_mcp/tools/cli/__init__.py:
--------------------------------------------------------------------------------
```python
"""Command-line interface tools for Penpot MCP."""
```
--------------------------------------------------------------------------------
/penpot_mcp/tools/__init__.py:
--------------------------------------------------------------------------------
```python
"""Tool implementations for the Penpot MCP server."""
```
--------------------------------------------------------------------------------
/penpot_mcp/server/__init__.py:
--------------------------------------------------------------------------------
```python
"""Server implementation for the Penpot MCP server."""
```
--------------------------------------------------------------------------------
/penpot_mcp/utils/__init__.py:
--------------------------------------------------------------------------------
```python
"""Utility functions and helper modules for the Penpot MCP server."""
```
--------------------------------------------------------------------------------
/penpot_mcp/api/__init__.py:
--------------------------------------------------------------------------------
```python
"""PenpotAPI module for interacting with the Penpot design platform."""
```
--------------------------------------------------------------------------------
/penpot_mcp/__init__.py:
--------------------------------------------------------------------------------
```python
"""Penpot MCP Server - Model Context Protocol server for Penpot."""
__version__ = "0.1.2"
__author__ = "Montevive AI Team"
__email__ = "[email protected]"
```
--------------------------------------------------------------------------------
/.vscode/launch.json:
--------------------------------------------------------------------------------
```json
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Debug Penpot MCP Server",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/penpot_mcp/server/mcp_server.py",
"justMyCode": false,
"console": "integratedTerminal",
"args": [
"--mode",
"sse"
]
}
]
}
```
--------------------------------------------------------------------------------
/penpot_mcp/utils/config.py:
--------------------------------------------------------------------------------
```python
"""Configuration module for the Penpot MCP server."""
import os
from dotenv import find_dotenv, load_dotenv
# Load environment variables
load_dotenv(find_dotenv())
# Server configuration
PORT = int(os.environ.get('PORT', 5000))
DEBUG = os.environ.get('DEBUG', 'true').lower() == 'true'
RESOURCES_AS_TOOLS = os.environ.get('RESOURCES_AS_TOOLS', 'true').lower() == 'true'
# HTTP server for exported images
ENABLE_HTTP_SERVER = os.environ.get('ENABLE_HTTP_SERVER', 'true').lower() == 'true'
HTTP_SERVER_HOST = os.environ.get('HTTP_SERVER_HOST', 'localhost')
HTTP_SERVER_PORT = int(os.environ.get('HTTP_SERVER_PORT', 0))
# Penpot API configuration
PENPOT_API_URL = os.environ.get('PENPOT_API_URL', 'https://design.penpot.app/api')
PENPOT_USERNAME = os.environ.get('PENPOT_USERNAME')
PENPOT_PASSWORD = os.environ.get('PENPOT_PASSWORD')
RESOURCES_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'resources')
```
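One subtlety of the boolean flags above: only the literal string `true` (any casing) enables them; values such as `1` or `yes` evaluate to `False`. A stdlib sketch of the same parsing (the `env_flag` helper is illustrative, not part of the module):

```python
import os

def env_flag(name, default="true"):
    # Mirrors the parsing used in config.py: only "true" (case-insensitive) is truthy
    return os.environ.get(name, default).lower() == "true"

os.environ["DEBUG"] = "TRUE"
assert env_flag("DEBUG") is True
os.environ["DEBUG"] = "yes"   # not recognized -> False
assert env_flag("DEBUG") is False
del os.environ["DEBUG"]
assert env_flag("DEBUG") is True  # falls back to the default
```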
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
```markdown
---
name: Bug report
about: Create a report to help us improve Penpot MCP
title: '[BUG] '
labels: 'bug'
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Ubuntu 22.04, macOS 14.0, Windows 11]
- Python version: [e.g. 3.12.0]
- Penpot MCP version: [e.g. 0.1.0]
- Penpot version: [e.g. 2.0.0]
- AI Assistant: [e.g. Claude Desktop, Custom MCP client]
**Configuration**
- Are you using environment variables or .env file?
- What's your PENPOT_API_URL?
- Any custom configuration?
**Logs**
If applicable, add relevant log output:
```
Paste logs here
```
**Additional context**
Add any other context about the problem here.
```
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
```markdown
---
name: Feature request
about: Suggest an idea for Penpot MCP
title: '[FEATURE] '
labels: 'enhancement'
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Use case**
Describe how this feature would be used:
- Who would benefit from this feature?
- In what scenarios would it be useful?
- How would it improve the Penpot MCP workflow?
**Implementation ideas**
If you have ideas about how this could be implemented, please share them:
- API changes needed
- New MCP tools or resources
- Integration points with Penpot or AI assistants
**Additional context**
Add any other context, screenshots, mockups, or examples about the feature request here.
**Priority**
How important is this feature to you?
- [ ] Nice to have
- [ ] Important for my workflow
- [ ] Critical for adoption
```
--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
```yaml
version: 2
updates:
# Python dependencies
- package-ecosystem: "pip"
directory: "/"
schedule:
interval: "weekly"
day: "monday"
time: "09:00"
timezone: "UTC"
open-pull-requests-limit: 5
reviewers:
- "montevive"
assignees:
- "montevive"
commit-message:
prefix: "deps"
include: "scope"
labels:
- "dependencies"
- "python"
groups:
dev-dependencies:
patterns:
- "pytest*"
- "flake8*"
- "coverage*"
- "pre-commit*"
- "isort*"
- "autopep8*"
- "pyupgrade*"
- "setuptools*"
production-dependencies:
patterns:
- "mcp*"
- "requests*"
- "python-dotenv*"
- "gunicorn*"
- "anytree*"
- "jsonschema*"
- "PyYAML*"
# GitHub Actions
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"
day: "monday"
time: "09:00"
timezone: "UTC"
open-pull-requests-limit: 3
reviewers:
- "montevive"
commit-message:
prefix: "ci"
include: "scope"
labels:
- "dependencies"
- "github-actions"
```
--------------------------------------------------------------------------------
/tests/test_config.py:
--------------------------------------------------------------------------------
```python
"""Tests for config module."""
from penpot_mcp.utils import config
def test_config_values():
"""Test that config has the expected values and types."""
assert isinstance(config.PORT, int)
assert isinstance(config.DEBUG, bool)
assert isinstance(config.PENPOT_API_URL, str)
assert config.RESOURCES_PATH is not None
def test_environment_variable_override(monkeypatch):
"""Test that environment variables override default config values."""
# Save original values
original_port = config.PORT
original_debug = config.DEBUG
original_api_url = config.PENPOT_API_URL
# Override with environment variables
monkeypatch.setenv("PORT", "8080")
monkeypatch.setenv("DEBUG", "false")
monkeypatch.setenv("PENPOT_API_URL", "https://test.example.com/api")
# Reload the config module to apply the environment variables
import importlib
importlib.reload(config)
# Check the new values
assert config.PORT == 8080
assert config.DEBUG is False
assert config.PENPOT_API_URL == "https://test.example.com/api"
# Restore original values
monkeypatch.setattr(config, "PORT", original_port)
monkeypatch.setattr(config, "DEBUG", original_debug)
monkeypatch.setattr(config, "PENPOT_API_URL", original_api_url)
```
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
```markdown
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Comprehensive CI/CD pipeline with GitHub Actions
- Automated PyPI publishing on version bumps
- CloudFlare error detection and user-friendly error handling
- Version bump automation workflow
### Changed
- Enhanced error handling in API client and MCP server
- Improved documentation for setup and usage
### Fixed
- CloudFlare protection blocking issues with helpful resolution instructions
## [0.1.1] - 2024-06-29
### Added
- Initial MCP server implementation
- Penpot API client with authentication
- Object tree visualization and analysis tools
- Export functionality for design objects
- Cache system for improved performance
- Comprehensive test suite
### Features
- List and access Penpot projects and files
- Search design objects by name with regex support
- Get object tree structure with field filtering
- Export design objects as images
- Claude Desktop and Cursor IDE integration
- HTTP server for image serving
## [0.1.0] - 2024-06-28
### Added
- Initial project structure
- Basic Penpot API integration
- MCP protocol implementation
- Core tool definitions
```
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
```markdown
## Description
Brief description of the changes in this PR.
## Type of Change
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
- [ ] Performance improvement
- [ ] Code refactoring
## Related Issues
Fixes #(issue number)
## Changes Made
- [ ] Added/modified MCP tools or resources
- [ ] Updated Penpot API integration
- [ ] Enhanced AI assistant compatibility
- [ ] Improved error handling
- [ ] Added tests
- [ ] Updated documentation
## Testing
- [ ] Tests pass locally
- [ ] Added tests for new functionality
- [ ] Tested with Claude Desktop integration
- [ ] Tested with Penpot API
- [ ] Manual testing completed
## Checklist
- [ ] My code follows the project's style guidelines
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
## Screenshots (if applicable)
Add screenshots to help explain your changes.
## Additional Notes
Any additional information that reviewers should know.
```
--------------------------------------------------------------------------------
/tests/conftest.py:
--------------------------------------------------------------------------------
```python
"""Test configuration for Penpot MCP tests."""
import os
import sys
from unittest.mock import MagicMock
import pytest
from penpot_mcp.api.penpot_api import PenpotAPI
from penpot_mcp.server.mcp_server import PenpotMCPServer
# Add the project root directory to the Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
@pytest.fixture
def mock_penpot_api(monkeypatch):
"""Create a mock PenpotAPI object."""
mock_api = MagicMock(spec=PenpotAPI)
# Add default behavior to the mock
mock_api.list_projects.return_value = [
{"id": "project1", "name": "Test Project 1"},
{"id": "project2", "name": "Test Project 2"}
]
mock_api.get_project_files.return_value = [
{"id": "file1", "name": "Test File 1"},
{"id": "file2", "name": "Test File 2"}
]
mock_api.get_file.return_value = {
"id": "file1",
"name": "Test File",
"data": {
"pages": [
{
"id": "page1",
"name": "Page 1",
"objects": {
"obj1": {"id": "obj1", "name": "Object 1", "type": "frame"},
"obj2": {"id": "obj2", "name": "Object 2", "type": "text"}
}
}
]
}
}
return mock_api
@pytest.fixture
def mock_server(mock_penpot_api):
"""Create a mock PenpotMCPServer with a mock API."""
server = PenpotMCPServer(name="Test Server")
server.api = mock_penpot_api
return server
```
--------------------------------------------------------------------------------
/penpot_mcp/tools/cli/tree_cmd.py:
--------------------------------------------------------------------------------
```python
"""Command-line interface for the Penpot tree visualization tool."""
import argparse
import json
import sys
from typing import Any, Dict
from penpot_mcp.tools.penpot_tree import build_tree, export_tree_to_dot, print_tree
def parse_args() -> argparse.Namespace:
"""Parse command line arguments."""
parser = argparse.ArgumentParser(description='Generate a tree from a Penpot JSON file')
parser.add_argument('input_file', help='Path to the Penpot JSON file')
parser.add_argument('--filter', '-f', help='Filter nodes by regex pattern')
parser.add_argument('--export', '-e', help='Export tree to a file (supports PNG, SVG, etc.)')
return parser.parse_args()
def load_penpot_file(file_path: str) -> Dict[str, Any]:
"""
Load a Penpot JSON file.
Args:
file_path: Path to the JSON file
Returns:
The loaded JSON data
Raises:
FileNotFoundError: If the file doesn't exist
json.JSONDecodeError: If the file isn't valid JSON
"""
try:
with open(file_path, 'r') as f:
return json.load(f)
except FileNotFoundError:
sys.exit(f"Error: File not found: {file_path}")
except json.JSONDecodeError:
sys.exit(f"Error: Invalid JSON file: {file_path}")
def main() -> None:
"""Main entry point for the command."""
args = parse_args()
# Load the Penpot file
data = load_penpot_file(args.input_file)
# Build the tree
root = build_tree(data)
# Export the tree if requested
if args.export:
export_tree_to_dot(root, args.export, args.filter)
# Print the tree
print_tree(root, args.filter)
if __name__ == '__main__':
main()
```
--------------------------------------------------------------------------------
/tests/test_cache.py:
--------------------------------------------------------------------------------
```python
"""
Tests for the memory caching functionality.
"""
import time
import pytest
from penpot_mcp.utils.cache import MemoryCache
@pytest.fixture
def memory_cache():
"""Create a MemoryCache instance with a short TTL for testing."""
return MemoryCache(ttl_seconds=2)
def test_cache_set_get(memory_cache):
"""Test setting and getting a file from cache."""
test_data = {"test": "data"}
file_id = "test123"
# Set data in cache
memory_cache.set(file_id, test_data)
# Get data from cache
cached_data = memory_cache.get(file_id)
assert cached_data == test_data
def test_cache_expiration(memory_cache):
"""Test that cached files expire after TTL."""
test_data = {"test": "data"}
file_id = "test123"
# Set data in cache
memory_cache.set(file_id, test_data)
# Data should be available immediately
assert memory_cache.get(file_id) == test_data
# Wait for cache to expire
time.sleep(3)
# Data should be expired
assert memory_cache.get(file_id) is None
def test_cache_clear(memory_cache):
"""Test clearing the cache."""
test_data = {"test": "data"}
file_id = "test123"
# Set data in cache
memory_cache.set(file_id, test_data)
# Verify data is cached
assert memory_cache.get(file_id) == test_data
# Clear cache
memory_cache.clear()
# Verify data is gone
assert memory_cache.get(file_id) is None
def test_get_all_cached_files(memory_cache):
"""Test getting all cached files."""
test_data1 = {"test": "data1"}
test_data2 = {"test": "data2"}
# Set multiple files in cache
memory_cache.set("file1", test_data1)
memory_cache.set("file2", test_data2)
# Get all cached files
all_files = memory_cache.get_all_cached_files()
# Verify all files are present
assert len(all_files) == 2
assert all_files["file1"] == test_data1
assert all_files["file2"] == test_data2
# Wait for cache to expire
time.sleep(3)
# Verify expired files are removed
all_files = memory_cache.get_all_cached_files()
assert len(all_files) == 0
def test_cache_nonexistent_file(memory_cache):
"""Test getting a nonexistent file from cache."""
assert memory_cache.get("nonexistent") is None
```
--------------------------------------------------------------------------------
/penpot_mcp/utils/cache.py:
--------------------------------------------------------------------------------
```python
"""
Cache utilities for Penpot MCP server.
"""
import time
from typing import Any, Dict, Optional
class MemoryCache:
"""In-memory cache implementation with TTL support."""
def __init__(self, ttl_seconds: int = 600):
"""
Initialize the memory cache.
Args:
ttl_seconds: Time to live in seconds (default 10 minutes)
"""
self.ttl_seconds = ttl_seconds
self._cache: Dict[str, Dict[str, Any]] = {}
def get(self, file_id: str) -> Optional[Dict[str, Any]]:
"""
Get a file from cache if it exists and is not expired.
Args:
file_id: The ID of the file to retrieve
Returns:
The cached file data or None if not found/expired
"""
if file_id not in self._cache:
return None
cache_data = self._cache[file_id]
# Check if cache is expired
if time.time() - cache_data['timestamp'] > self.ttl_seconds:
del self._cache[file_id] # Remove expired cache
return None
return cache_data['data']
def set(self, file_id: str, data: Dict[str, Any]) -> None:
"""
Store a file in cache.
Args:
file_id: The ID of the file to cache
data: The file data to cache
"""
self._cache[file_id] = {
'timestamp': time.time(),
'data': data
}
def clear(self) -> None:
"""Clear all cached files."""
self._cache.clear()
def get_all_cached_files(self) -> Dict[str, Dict[str, Any]]:
"""
Get all valid cached files.
Returns:
Dictionary mapping file IDs to their cached data
"""
result = {}
current_time = time.time()
# Create a list of expired keys to remove
expired_keys = []
for file_id, cache_data in self._cache.items():
if current_time - cache_data['timestamp'] <= self.ttl_seconds:
result[file_id] = cache_data['data']
else:
expired_keys.append(file_id)
# Remove expired entries
for key in expired_keys:
del self._cache[key]
return result
```
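The TTL behavior above (lazy expiry on read, timestamp stored alongside the value) can be condensed into a minimal self-contained sketch (the class name `TinyTTLCache` is illustrative):

```python
import time

class TinyTTLCache:
    """Condensed illustration of the MemoryCache TTL logic."""

    def __init__(self, ttl_seconds=1):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        # Store the insertion time next to the value
        self._store[key] = (time.time(), value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if time.time() - ts > self.ttl:
            del self._store[key]  # lazy expiry on read
            return None
        return value

cache = TinyTTLCache(ttl_seconds=1)
cache.set("file1", {"name": "Demo"})
assert cache.get("file1") == {"name": "Demo"}
time.sleep(1.1)
assert cache.get("file1") is None  # expired and evicted
```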
--------------------------------------------------------------------------------
/CLAUDE_INTEGRATION.md:
--------------------------------------------------------------------------------
```markdown
# Using Penpot MCP with Claude
This guide explains how to integrate the Penpot MCP server with Claude AI using the Model Context Protocol (MCP).
## Prerequisites
1. Claude Desktop application installed
2. Penpot MCP server set up and configured
## Installing the Penpot MCP Server in Claude Desktop
The easiest way to use the Penpot MCP server with Claude is to install it directly in Claude Desktop:
1. Make sure you have installed the project and its dependencies:
```bash
pip install -e .
```
2. Install the MCP server in Claude Desktop:
```bash
mcp install penpot_mcp/server/mcp_server.py
```
3. Claude will ask for your permission to install the server. Click "Allow".
4. The Penpot MCP server will now appear in Claude's tool menu.
## Using Penpot in Claude
Once installed, you can interact with Penpot through Claude by:
1. Open Claude Desktop
2. Click on the "+" button in the message input area
3. Select "Penpot MCP Server" from the list
4. Claude now has access to your Penpot projects and can:
- List your projects
- Get project details
- Access file information
- View components
## Example Prompts for Claude
Here are some example prompts you can use with Claude to interact with your Penpot data:
### Listing Projects
```
Can you show me a list of my Penpot projects?
```
### Getting Project Details
```
Please show me the details of my most recent Penpot project.
```
### Working with Files
```
Can you list the files in my "Website Redesign" project?
```
### Exploring Components
```
Please show me the available UI components in Penpot.
```
## Troubleshooting
If you encounter issues:
1. Check that your Penpot credentials (`PENPOT_USERNAME` and `PENPOT_PASSWORD`) are correctly set in the environment variables
2. Verify that the Penpot API URL is correct
3. Try reinstalling the MCP server in Claude Desktop:
```bash
mcp uninstall "Penpot MCP Server"
mcp install penpot_mcp/server/mcp_server.py
```
## Advanced: Using with Other MCP-compatible Tools
The Penpot MCP server can be used with any MCP-compatible client, not just Claude Desktop. Other integrations include:
- OpenAI Agents SDK
- PydanticAI
- Python MCP clients (see `penpot_mcp/server/client.py`)
Refer to the specific documentation for these tools for integration instructions.
## Resources
- [Model Context Protocol Documentation](https://modelcontextprotocol.io)
- [Claude Developer Documentation](https://docs.anthropic.com)
- [MCP Python SDK Documentation](https://github.com/modelcontextprotocol/python-sdk)
```
--------------------------------------------------------------------------------
/penpot_mcp/tools/cli/validate_cmd.py:
--------------------------------------------------------------------------------
```python
"""Command-line interface for validating Penpot files against a schema."""
import argparse
import json
import os
import sys
from typing import Any, Dict, Optional, Tuple
from jsonschema import SchemaError, ValidationError, validate
from penpot_mcp.utils import config
def parse_args() -> argparse.Namespace:
"""Parse command line arguments."""
parser = argparse.ArgumentParser(description='Validate a Penpot JSON file against a schema')
parser.add_argument('input_file', help='Path to the Penpot JSON file to validate')
parser.add_argument(
'--schema',
'-s',
default=os.path.join(
config.RESOURCES_PATH,
'penpot-schema.json'),
help='Path to the JSON schema file (default: resources/penpot-schema.json)')
parser.add_argument('--verbose', '-v', action='store_true',
help='Enable verbose output with detailed validation errors')
return parser.parse_args()
def load_json_file(file_path: str) -> Dict[str, Any]:
"""
Load a JSON file.
Args:
file_path: Path to the JSON file
Returns:
The loaded JSON data
Raises:
FileNotFoundError: If the file doesn't exist
json.JSONDecodeError: If the file isn't valid JSON
"""
try:
with open(file_path, 'r') as f:
return json.load(f)
except FileNotFoundError:
sys.exit(f"Error: File not found: {file_path}")
except json.JSONDecodeError:
sys.exit(f"Error: Invalid JSON file: {file_path}")
def validate_penpot_file(
        data: Dict[str, Any],
        schema: Dict[str, Any]) -> Tuple[bool, Optional[str]]:
"""
Validate a Penpot file against a schema.
Args:
data: The Penpot file data
schema: The JSON schema
Returns:
Tuple of (is_valid, error_message)
"""
try:
validate(instance=data, schema=schema)
return True, None
except ValidationError as e:
return False, str(e)
except SchemaError as e:
return False, f"Schema error: {str(e)}"
def main() -> None:
"""Main entry point for the command."""
args = parse_args()
# Load the files
print(f"Loading Penpot file: {args.input_file}")
data = load_json_file(args.input_file)
print(f"Loading schema file: {args.schema}")
schema = load_json_file(args.schema)
# Validate the file
print("Validating file...")
is_valid, error = validate_penpot_file(data, schema)
if is_valid:
print("✅ Validation successful! The file conforms to the schema.")
else:
print("❌ Validation failed!")
if args.verbose and error:
print("\nError details:")
print(error)
sys.exit(1)
if __name__ == '__main__':
main()
```
--------------------------------------------------------------------------------
/test_credentials.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Test script to verify Penpot API credentials and list projects.
"""
import os
from dotenv import load_dotenv
from penpot_mcp.api.penpot_api import PenpotAPI
def test_credentials():
"""Test Penpot API credentials and list projects."""
load_dotenv()
api_url = os.getenv("PENPOT_API_URL")
username = os.getenv("PENPOT_USERNAME")
password = os.getenv("PENPOT_PASSWORD")
if not all([api_url, username, password]):
print("❌ Missing credentials in .env file")
print("Required: PENPOT_API_URL, PENPOT_USERNAME, PENPOT_PASSWORD")
return False
print(f"🔗 Testing connection to: {api_url}")
print(f"👤 Username: {username}")
try:
api = PenpotAPI(api_url, debug=False, email=username, password=password)
print("🔐 Authenticating...")
token = api.login_with_password()
print("✅ Authentication successful!")
print("📁 Fetching projects...")
projects = api.list_projects()
if isinstance(projects, dict) and "error" in projects:
print(f"❌ Failed to list projects: {projects['error']}")
return False
print(f"✅ Found {len(projects)} projects:")
for i, project in enumerate(projects, 1):
if isinstance(project, dict):
name = project.get('name', 'Unnamed')
project_id = project.get('id', 'N/A')
team_name = project.get('team-name', 'Unknown Team')
print(f" {i}. {name} (ID: {project_id}) - Team: {team_name}")
else:
print(f" {i}. {project}")
# Test getting project files if we have a project
if projects and isinstance(projects[0], dict):
project_id = projects[0].get('id')
if project_id:
print(f"\n📄 Testing project files for project: {project_id}")
try:
files = api.get_project_files(project_id)
print(f"✅ Found {len(files)} files:")
for j, file in enumerate(files[:3], 1): # Show first 3 files
if isinstance(file, dict):
print(f" {j}. {file.get('name', 'Unnamed')} (ID: {file.get('id', 'N/A')})")
else:
print(f" {j}. {file}")
if len(files) > 3:
print(f" ... and {len(files) - 3} more files")
except Exception as file_error:
print(f"❌ Error getting files: {file_error}")
return True
except Exception as e:
print(f"❌ Error: {e}")
return False
if __name__ == "__main__":
success = test_credentials()
exit(0 if success else 1)
```
--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
```yaml
name: CI
on:
pull_request:
branches: [ main, develop ]
push:
branches: [ main, develop ]
workflow_call:
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11", "3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "latest"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Run linting
run: |
uv run python lint.py || echo "Linting found issues but continuing..."
continue-on-error: true
- name: Run tests with coverage
run: |
uv run pytest --cov=penpot_mcp tests/ --cov-report=xml --cov-report=term-missing
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
if: matrix.python-version == '3.12'
with:
file: ./coverage.xml
flags: unittests
name: codecov-umbrella
fail_ci_if_error: false
security-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v6
- name: Install dependencies
run: |
uv sync --extra dev
- name: Run security checks with bandit
run: |
uv add bandit[toml]
uv run bandit -r penpot_mcp/ -f json -o bandit-report.json || true
- name: Upload security scan results
uses: github/codeql-action/upload-sarif@v3
if: always()
with:
sarif_file: bandit-report.json
continue-on-error: true
build-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v6
- name: Install dependencies
run: |
uv sync --extra dev
- name: Build package
run: |
uv build
- name: Test package installation
run: |
python -m pip install dist/*.whl
penpot-mcp --help || echo "CLI help command failed"
python -c "import penpot_mcp; print(f'Version: {penpot_mcp.__version__}')"
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: dist-files
path: dist/
retention-days: 7
test-docker:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Create test Dockerfile
run: |
cat > Dockerfile.test << 'EOF'
FROM python:3.12-slim
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
# Set working directory
WORKDIR /app
# Copy project files
COPY . .
# Install dependencies and run tests
RUN uv sync --extra dev
RUN uv run pytest
# Test CLI commands
RUN uv run penpot-mcp --help || echo "CLI help test completed"
EOF
- name: Build and test Docker image
run: |
docker build -f Dockerfile.test -t penpot-mcp-test .
```
--------------------------------------------------------------------------------
/.github/workflows/code-quality.yml:
--------------------------------------------------------------------------------
```yaml
name: Code Quality
on:
workflow_dispatch:
schedule:
# Run weekly on Sundays at 2 AM UTC
- cron: '0 2 * * 0'
jobs:
code-quality:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v6
- name: Install dependencies
run: |
uv sync --extra dev
- name: Run comprehensive linting
run: |
echo "Running full linting analysis..."
uv run python lint.py --autofix || true
- name: Check for auto-fixes
run: |
if [[ -n $(git status --porcelain) ]]; then
echo "Auto-fixes were applied"
git diff
else
echo "No auto-fixes needed"
fi
- name: Create Pull Request for fixes
if: success()
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
commit-message: "🔧 Auto-fix code quality issues"
title: "🔧 Automated Code Quality Improvements"
body: |
## Automated Code Quality Fixes
This PR contains automated fixes for code quality issues:
### Changes Applied
- Line length adjustments
- Import sorting
- Whitespace cleanup
- Unused import removal
### Review Notes
- All changes are automatically applied by linting tools
- Tests should still pass after these changes
- Manual review recommended for any significant changes
🤖 This PR was automatically created by the Code Quality workflow.
branch: automated-code-quality-fixes
delete-branch: true
reviewers: montevive
labels: |
code-quality
automated
enhancement
- name: Security Analysis
run: |
echo "Running security analysis..."
uv add bandit[toml]
uv run bandit -r penpot_mcp/ -f json -o bandit-report.json || true
if [ -f bandit-report.json ]; then
echo "Security report generated"
cat bandit-report.json | head -20
fi
- name: Code Coverage Analysis
run: |
echo "Running code coverage analysis..."
uv run pytest --cov=penpot_mcp tests/ --cov-report=html --cov-report=term
echo "Coverage report generated in htmlcov/"
- name: Upload Coverage Report
uses: actions/upload-artifact@v4
with:
name: coverage-report
path: htmlcov/
retention-days: 30
- name: Upload Security Report
uses: actions/upload-artifact@v4
if: always()
with:
name: security-report
path: bandit-report.json
retention-days: 30
- name: Summary
run: |
echo "## Code Quality Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Linting" >> $GITHUB_STEP_SUMMARY
echo "- Auto-fixes applied (if any)" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Security Analysis" >> $GITHUB_STEP_SUMMARY
echo "- Bandit security scan completed" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Coverage" >> $GITHUB_STEP_SUMMARY
echo "- Code coverage report generated" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "### Artifacts" >> $GITHUB_STEP_SUMMARY
echo "- Coverage report: htmlcov/" >> $GITHUB_STEP_SUMMARY
echo "- Security report: bandit-report.json" >> $GITHUB_STEP_SUMMARY
```
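The workflow above uploads the raw `bandit-report.json` artifact but only `head`s it in the log. A small stdlib-only sketch of how such a report could be summarized by severity — assuming bandit's `-f json` layout, where findings live in a top-level `results` list with an `issue_severity` field (the sample data here is illustrative, not real scan output):

```python
import json
from collections import Counter

def summarize_bandit(report_text: str) -> Counter:
    """Count findings by severity in a `bandit -f json` report."""
    report = json.loads(report_text)
    return Counter(item["issue_severity"] for item in report.get("results", []))

# Minimal report shaped like bandit's JSON output (illustrative data only):
sample = json.dumps({"results": [
    {"issue_severity": "LOW"},
    {"issue_severity": "LOW"},
    {"issue_severity": "HIGH"},
]})
print(summarize_bandit(sample))
```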
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "penpot-mcp"
dynamic = ["version"]
description = "Model Context Protocol server for Penpot - AI-powered design workflow automation"
readme = "README.md"
license = "MIT"
authors = [
{name = "Montevive AI Team", email = "[email protected]"}
]
keywords = ["penpot", "mcp", "llm", "ai", "design", "prototyping", "claude", "cursor", "model-context-protocol"]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: End Users/Desktop",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Multimedia :: Graphics :: Graphics Conversion",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development :: User Interfaces",
"Environment :: Console",
"Operating System :: OS Independent",
]
requires-python = ">=3.10"
dependencies = [
"mcp>=1.7.0",
"python-dotenv>=1.0.0",
"requests>=2.26.0",
"gunicorn>=20.1.0",
"anytree>=2.8.0",
"jsonschema>=4.0.0",
"PyYAML>=6.0.0",
"twine>=6.1.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.4.0",
"pytest-mock>=3.11.1",
"pytest-cov>=4.1.0",
"flake8>=6.1.0",
"flake8-docstrings>=1.7.0",
"pre-commit>=3.5.0",
"isort>=5.12.0",
"autopep8>=2.0.4",
"pyupgrade>=3.13.0",
"setuptools>=65.5.0",
]
cli = [
"mcp[cli]>=1.7.0",
]
[project.urls]
Homepage = "https://github.com/montevive/penpot-mcp"
Repository = "https://github.com/montevive/penpot-mcp.git"
Issues = "https://github.com/montevive/penpot-mcp/issues"
Documentation = "https://github.com/montevive/penpot-mcp#readme"
Changelog = "https://github.com/montevive/penpot-mcp/releases"
[project.scripts]
penpot-mcp = "penpot_mcp.server.mcp_server:main"
penpot-client = "penpot_mcp.server.client:main"
penpot-tree = "penpot_mcp.tools.cli.tree_cmd:main"
penpot-validate = "penpot_mcp.tools.cli.validate_cmd:main"
[tool.setuptools.dynamic]
version = {attr = "penpot_mcp.__version__"}
[tool.setuptools.packages.find]
where = ["."]
include = ["penpot_mcp*"]
[tool.setuptools.package-data]
penpot_mcp = ["resources/*.json"]
# pytest configuration
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
"--strict-markers",
"--strict-config",
"--verbose",
]
markers = [
"slow: marks tests as slow (deselect with '-m \"not slow\"')",
"integration: marks tests as integration tests",
]
# Coverage configuration
[tool.coverage.run]
source = ["penpot_mcp"]
omit = [
"*/tests/*",
"*/test_*",
"*/__pycache__/*",
"*/venv/*",
"*/.venv/*",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"if self.debug:",
"if settings.DEBUG",
"raise AssertionError",
"raise NotImplementedError",
"if 0:",
"if __name__ == .__main__.:",
"class .*\\bProtocol\\):",
"@(abc\\.)?abstractmethod",
]
# isort configuration
[tool.isort]
profile = "black"
multi_line_output = 3
line_length = 88
known_first_party = ["penpot_mcp"]
skip = [".venv", "venv", "__pycache__"]
# Black configuration (if you decide to use it)
[tool.black]
line-length = 88
target-version = ['py312']
include = '\.pyi?$'
extend-exclude = '''
/(
# directories
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \.venv
| build
| dist
)/
'''
```
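The `[tool.setuptools.dynamic]` table above makes setuptools read the package version from `penpot_mcp.__version__`, and the version-bump workflow rewrites that same `__version__ = "..."` line with `sed`. A stdlib sketch of the pattern both rely on (the source string here is illustrative):

```python
import re

# Illustrative __init__.py content containing the version assignment that
# both setuptools' `attr` lookup and the version-bump sed command target.
init_source = '__version__ = "0.1.1"\n'

match = re.search(r'__version__\s*=\s*"([^"]+)"', init_source)
version = match.group(1)
print(version)  # -> 0.1.1
```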
--------------------------------------------------------------------------------
/fix-lint-deps.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Helper script to install missing linting dependencies
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
NC='\033[0m' # No Color
# Function to create and activate a virtual environment
create_venv() {
echo -e "${YELLOW}Creating virtual environment in '$1'...${NC}"
python3 -m venv "$1"
if [ $? -ne 0 ]; then
echo -e "${RED}Failed to create virtual environment.${NC}"
echo "Make sure python3-venv is installed."
echo "On Ubuntu/Debian: sudo apt install python3-venv"
exit 1
fi
echo -e "${GREEN}Virtual environment created successfully.${NC}"
# Activate the virtual environment
if [[ "$OSTYPE" == "msys" || "$OSTYPE" == "win32" ]]; then
# Windows
source "$1/Scripts/activate"
else
# Unix/Linux/MacOS
source "$1/bin/activate"
fi
if [ $? -ne 0 ]; then
echo -e "${RED}Failed to activate virtual environment.${NC}"
exit 1
fi
echo -e "${GREEN}Virtual environment activated.${NC}"
# Upgrade pip to avoid issues
pip install --upgrade pip
if [ $? -ne 0 ]; then
echo -e "${YELLOW}Warning: Could not upgrade pip, but continuing anyway.${NC}"
fi
}
# Check if we're in a virtual environment
if [[ -z "$VIRTUAL_ENV" ]]; then
echo -e "${YELLOW}You are not in a virtual environment.${NC}"
# Check if a virtual environment already exists
if [ -d ".venv" ]; then
echo "Found existing virtual environment in .venv directory."
read -p "Would you like to use it? (y/n): " use_existing
if [[ $use_existing == "y" || $use_existing == "Y" ]]; then
# Activate the existing environment instead of recreating it
if [[ "$OSTYPE" == "msys" || "$OSTYPE" == "win32" ]]; then
    source ".venv/Scripts/activate"
else
    source ".venv/bin/activate"
fi
echo -e "${GREEN}Virtual environment activated.${NC}"
else
read -p "Create a new virtual environment? (y/n): " create_new
if [[ $create_new == "y" || $create_new == "Y" ]]; then
read -p "Enter path for new virtual environment [.venv]: " venv_path
venv_path=${venv_path:-.venv}
create_venv "$venv_path"
else
echo -e "${RED}Cannot continue without a virtual environment.${NC}"
echo "Using system Python is not recommended and may cause permission issues."
echo "Please run this script again and choose to create a virtual environment."
exit 1
fi
fi
else
read -p "Would you like to create a virtual environment? (y/n): " create_new
if [[ $create_new == "y" || $create_new == "Y" ]]; then
read -p "Enter path for new virtual environment [.venv]: " venv_path
venv_path=${venv_path:-.venv}
create_venv "$venv_path"
else
echo -e "${RED}Cannot continue without a virtual environment.${NC}"
echo "Using system Python is not recommended and may cause permission issues."
echo "Please run this script again and choose to create a virtual environment."
exit 1
fi
fi
else
echo -e "${GREEN}Using existing virtual environment: $VIRTUAL_ENV${NC}"
fi
# Install development dependencies
echo -e "${YELLOW}Installing linting dependencies...${NC}"
pip install -e ".[dev]"
if [ $? -ne 0 ]; then
echo -e "${RED}Failed to install dependencies.${NC}"
exit 1
fi
echo -e "${GREEN}Dependencies installed successfully.${NC}"
# Install pre-commit hooks
echo -e "${YELLOW}Setting up pre-commit hooks...${NC}"
pre-commit install
if [ $? -ne 0 ]; then
echo -e "${RED}Failed to install pre-commit hooks.${NC}"
exit 1
fi
echo -e "${GREEN}Pre-commit hooks installed successfully.${NC}"
echo -e "\n${GREEN}Setup completed!${NC}"
echo "You can now run the linting script with:"
echo " ./lint.py"
echo "Or with auto-fix:"
echo " ./lint.py --autofix"
echo ""
echo "Remember to activate your virtual environment whenever you open a new terminal:"
echo " source .venv/bin/activate # On Linux/macOS"
echo " .venv\\Scripts\\activate # On Windows"
```
--------------------------------------------------------------------------------
/penpot_mcp/utils/http_server.py:
--------------------------------------------------------------------------------
```python
"""HTTP server module for serving exported images from memory."""
import json
import socketserver
import threading
from http.server import BaseHTTPRequestHandler
class InMemoryImageHandler(BaseHTTPRequestHandler):
"""HTTP request handler for serving images stored in memory."""
# Class variable to store images
images = {}
def do_GET(self):
"""Handle GET requests."""
# Remove query parameters if any
path = self.path.split('?', 1)[0]
path = path.split('#', 1)[0]
# Extract image ID from path
# Expected path format: /images/{image_id}.{format}
parts = path.split('/')
if len(parts) == 3 and parts[1] == 'images':
# Extract image_id by removing the file extension if present
image_id_with_ext = parts[2]
image_id = image_id_with_ext.split('.')[0]
if image_id in self.images:
img_data = self.images[image_id]['data']
img_format = self.images[image_id]['format']
# Set content type based on format
content_type = f"image/{img_format}"
if img_format == 'svg':
content_type = 'image/svg+xml'
self.send_response(200)
self.send_header('Content-type', content_type)
self.send_header('Content-length', len(img_data))
self.end_headers()
self.wfile.write(img_data)
return
# Return 404 if image not found
self.send_response(404)
self.send_header('Content-type', 'application/json')
self.end_headers()
response = {'error': 'Image not found'}
self.wfile.write(json.dumps(response).encode())
class ImageServer:
"""Server for in-memory images."""
def __init__(self, host='localhost', port=0):
"""Initialize the HTTP server.
Args:
host: Host address to listen on
port: Port to listen on (0 means use a random available port)
"""
self.host = host
self.port = port
self.server = None
self.server_thread = None
self.is_running = False
self.base_url = None
def start(self):
"""Start the HTTP server in a background thread.
Returns:
Base URL of the server with actual port used
"""
if self.is_running:
return self.base_url
# Create TCP server with address reuse enabled
class ReuseAddressTCPServer(socketserver.TCPServer):
allow_reuse_address = True
self.server = ReuseAddressTCPServer((self.host, self.port), InMemoryImageHandler)
# Get the actual port that was assigned
self.port = self.server.socket.getsockname()[1]
self.base_url = f"http://{self.host}:{self.port}"
# Start server in a separate thread
self.server_thread = threading.Thread(target=self.server.serve_forever)
self.server_thread.daemon = True # Don't keep process running if main thread exits
self.server_thread.start()
self.is_running = True
print(f"Image server started at {self.base_url}")
return self.base_url
def stop(self):
"""Stop the HTTP server."""
if not self.is_running:
return
self.server.shutdown()
self.server.server_close()
self.is_running = False
print("Image server stopped")
def add_image(self, image_id, image_data, image_format='png'):
"""Add image to in-memory storage.
Args:
image_id: Unique identifier for the image
image_data: Binary image data
image_format: Image format (png, jpg, etc.)
Returns:
URL to access the image
"""
InMemoryImageHandler.images[image_id] = {
'data': image_data,
'format': image_format
}
return f"{self.base_url}/images/{image_id}.{image_format}"
def remove_image(self, image_id):
"""Remove image from in-memory storage."""
if image_id in InMemoryImageHandler.images:
del InMemoryImageHandler.images[image_id]
```
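The module above combines three pieces: an in-memory dict served by a request handler, an ephemeral port (`port=0` lets the OS choose), and a daemon thread running `serve_forever`. A condensed, runnable sketch of the same pattern — names here are illustrative, not the module's public API:

```python
import threading
import urllib.request
import socketserver
from http.server import BaseHTTPRequestHandler

# Bytes held in memory, keyed by image id (mirrors InMemoryImageHandler.images).
images = {"demo": (b"<svg/>", "svg")}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "/images/demo.svg" -> "demo"
        image_id = self.path.split("/")[-1].split(".")[0]
        if image_id in images:
            data, fmt = images[image_id]
            content_type = "image/svg+xml" if fmt == "svg" else f"image/{fmt}"
            self.send_response(200)
            self.send_header("Content-Type", content_type)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

class QuietServer(socketserver.TCPServer):
    allow_reuse_address = True

server = QuietServer(("localhost", 0), Handler)
port = server.socket.getsockname()[1]  # port 0 -> OS assigned a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://localhost:{port}/images/demo.svg").read()
server.shutdown()
server.server_close()
print(body)
```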
--------------------------------------------------------------------------------
/LINTING.md:
--------------------------------------------------------------------------------
```markdown
# Linting Guide
This document provides guidelines on how to work with the linting tools configured in this project.
## Overview
The project uses the following linting tools:
- **flake8**: Code style and quality checker
- **isort**: Import sorting
- **autopep8**: PEP 8 code formatting with auto-fix capability
- **pyupgrade**: Upgrades Python syntax for newer versions
- **pre-commit**: Framework for managing pre-commit hooks
## Quick Start
1. Use the setup script to install all dependencies and set up pre-commit hooks:
```bash
./fix-lint-deps.sh
```
Or install dependencies manually:
```bash
pip install -e ".[dev]"
pre-commit install
```
2. Run the linting script:
```bash
# Check for issues
./lint.py
# Fix issues automatically where possible
./lint.py --autofix
```
## Dependencies
The linting tools require specific dependencies:
- **flake8** and **flake8-docstrings**: For code style and documentation checking
- **isort**: For import sorting
- **autopep8**: For automatic PEP 8 compliance
- **pyupgrade**: For Python syntax upgrading
- **setuptools**: Required for lib2to3 which is used by autopep8
If you encounter a `ModuleNotFoundError: No module named 'lib2to3'` error, make sure you have setuptools installed:
```bash
pip install setuptools>=65.5.0
```
Or simply run the fix script:
```bash
./fix-lint-deps.sh
```
## Configuration
The linting tools are configured in the following files:
- **.flake8**: Settings for flake8
- **pyproject.toml**: Settings for isort, coverage, and pytest
- **.pre-commit-config.yaml**: Configuration for pre-commit hooks
- **.editorconfig**: Editor settings for consistent code formatting
## Linting Rules
### Code Style Rules
We follow PEP 8 with some exceptions:
- **Line Length**: Max line length is 100 characters
- **Ignored Rules**:
- E203: Whitespace before ':' (conflicts with Black)
- W503: Line break before binary operator (conflicts with Black)
### Documentation Rules
All public modules, functions, classes, and methods should have docstrings. We use the Google style for docstrings.
Example:
```python
def function(param1, param2):
"""Summary of function purpose.
More detailed explanation if needed.
Args:
param1: Description of param1.
param2: Description of param2.
Returns:
Description of return value.
Raises:
ExceptionType: When and why this exception is raised.
"""
# function implementation
```
### Import Sorting
Imports should be sorted using isort with the black profile. Imports are grouped in the following order:
1. Standard library imports
2. Related third-party imports
3. Local application/library specific imports
With each group sorted alphabetically.
## Auto-Fixing Issues
Many issues can be fixed automatically:
- **Import Sorting**: `isort` can sort imports automatically
- **PEP 8 Formatting**: `autopep8` can fix many style issues
- **Python Syntax**: `pyupgrade` can update syntax to newer Python versions
Run the auto-fix command:
```bash
./lint.py --autofix
```
## Troubleshooting
If you encounter issues with the linting tools:
1. **Missing dependencies**: Run `./fix-lint-deps.sh` to install all required dependencies
2. **Autopep8 errors**: Make sure setuptools is installed for lib2to3 support
3. **Pre-commit hook failures**: Run `pre-commit run --all-files` to see which files are causing issues
## Pre-commit Hooks
Pre-commit hooks run automatically when you commit changes. They ensure that linting issues are caught before code is committed.
If hooks fail during a commit:
1. The commit will be aborted
2. Review the error messages
3. Fix the issues manually or using auto-fix
4. Stage the fixed files
5. Retry your commit
## Common Issues and Solutions
### Disabling Linting for Specific Lines
Sometimes it's necessary to disable linting for specific lines:
```python
# For flake8
some_code = "example" # noqa: E501
# For multiple rules
some_code = "example" # noqa: E501, F401
```
### Handling Third-Party Code
For third-party code that doesn't follow our style, consider isolating it in a separate file or directory and excluding it from linting.
## IDE Integration
### VSCode
Install the Python, Flake8, and EditorConfig extensions. Add to settings.json:
```json
{
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
"editor.formatOnSave": true,
"python.formatting.provider": "autopep8",
"python.sortImports.args": ["--profile", "black"]
}
```
### PyCharm
Enable Flake8 in:
Settings → Editor → Inspections → Python → Flake8
Configure isort:
Settings → Editor → Code Style → Python → Imports
## Customizing Linting Rules
To modify linting rules:
1. Edit `.flake8` and `pyproject.toml` for flake8 and isort settings
2. Edit `.pre-commit-config.yaml` for pre-commit hook settings
3. Run `pre-commit autoupdate` to update hook versions
## Continuous Integration
Linting checks are part of the CI pipeline. Pull requests that fail linting will not be merged until issues are fixed.
```
--------------------------------------------------------------------------------
/.github/workflows/version-bump.yml:
--------------------------------------------------------------------------------
```yaml
name: Version Bump
on:
workflow_dispatch:
inputs:
version-type:
description: 'Version bump type'
required: true
default: 'patch'
type: choice
options:
- patch
- minor
- major
custom-version:
description: 'Custom version (optional, overrides version-type)'
required: false
type: string
jobs:
bump-version:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
token: ${{ secrets.GITHUB_TOKEN }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install packaging
- name: Get current version
id: current-version
run: |
CURRENT_VERSION=$(python -c "import penpot_mcp; print(penpot_mcp.__version__)")
echo "current=$CURRENT_VERSION" >> $GITHUB_OUTPUT
echo "Current version: $CURRENT_VERSION"
- name: Calculate new version
id: new-version
run: |
python << 'EOF'
import os
from packaging import version
current = "${{ steps.current-version.outputs.current }}"
custom = "${{ github.event.inputs.custom-version }}"
bump_type = "${{ github.event.inputs.version-type }}"
if custom:
new_version = custom
else:
v = version.parse(current)
if bump_type == "major":
new_version = f"{v.major + 1}.0.0"
elif bump_type == "minor":
new_version = f"{v.major}.{v.minor + 1}.0"
else: # patch
new_version = f"{v.major}.{v.minor}.{v.micro + 1}"
print(f"New version: {new_version}")
with open(os.environ['GITHUB_OUTPUT'], 'a') as f:
f.write(f"version={new_version}\n")
EOF
- name: Update version in files
run: |
NEW_VERSION="${{ steps.new-version.outputs.version }}"
# Update __init__.py
sed -i "s/__version__ = \".*\"/__version__ = \"$NEW_VERSION\"/" penpot_mcp/__init__.py
# Verify the change
echo "Updated version in penpot_mcp/__init__.py:"
grep "__version__" penpot_mcp/__init__.py
- name: Create changelog entry
run: |
NEW_VERSION="${{ steps.new-version.outputs.version }}"
DATE=$(date +"%Y-%m-%d")
# Create CHANGELOG.md if it doesn't exist
if [ ! -f CHANGELOG.md ]; then
cat > CHANGELOG.md << 'EOF'
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
EOF
fi
# Add new version entry
sed -i "3i\\\\n## [$NEW_VERSION] - $DATE\\\\n\\\\n### Added\\\\n- Version bump to $NEW_VERSION\\\\n\\\\n### Changed\\\\n- Update dependencies and improve stability\\\\n\\\\n### Fixed\\\\n- Bug fixes and performance improvements\\\\n" CHANGELOG.md
echo "Updated CHANGELOG.md with version $NEW_VERSION"
- name: Commit and push changes
run: |
NEW_VERSION="${{ steps.new-version.outputs.version }}"
git config --local user.email "[email protected]"
git config --local user.name "GitHub Action"
git add penpot_mcp/__init__.py CHANGELOG.md
git commit -m "Bump version to $NEW_VERSION
- Update version in __init__.py to $NEW_VERSION
- Add changelog entry for version $NEW_VERSION
🤖 Generated with GitHub Actions"
git push
echo "✅ Version bumped to $NEW_VERSION and pushed to repository"
- name: Create pull request (if on branch)
if: github.ref != 'refs/heads/main'
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.GITHUB_TOKEN }}
commit-message: "Bump version to ${{ steps.new-version.outputs.version }}"
title: "🔖 Bump version to ${{ steps.new-version.outputs.version }}"
body: |
## Version Bump to ${{ steps.new-version.outputs.version }}
This PR was automatically created to bump the version.
### Changes
- Updated `__version__` in `penpot_mcp/__init__.py`
- Added changelog entry for version ${{ steps.new-version.outputs.version }}
### Type of Change
- [${{ github.event.inputs.version-type == 'major' && 'x' || ' ' }}] Major version (breaking changes)
- [${{ github.event.inputs.version-type == 'minor' && 'x' || ' ' }}] Minor version (new features)
- [${{ github.event.inputs.version-type == 'patch' && 'x' || ' ' }}] Patch version (bug fixes)
### Checklist
- [x] Version updated in `__init__.py`
- [x] Changelog updated
- [ ] Tests pass (will be verified by CI)
- [ ] Ready for merge and auto-publish
**Note**: Merging this PR to `main` will trigger automatic publishing to PyPI.
branch: version-bump-${{ steps.new-version.outputs.version }}
delete-branch: true
```
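The "Calculate new version" step above does semver arithmetic with the `packaging` library inside a heredoc. The same bump logic can be sketched with the stdlib alone (a simplification: it assumes plain `X.Y.Z` versions with no pre-release or local segments):

```python
def bump(current: str, bump_type: str) -> str:
    """Stdlib-only sketch of the workflow's bump logic (the workflow uses `packaging`)."""
    major, minor, patch = (int(part) for part in current.split("."))
    if bump_type == "major":
        return f"{major + 1}.0.0"
    if bump_type == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # default: patch

print(bump("0.1.1", "patch"))  # -> 0.1.2
```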
--------------------------------------------------------------------------------
/.github/SETUP_CICD.md:
--------------------------------------------------------------------------------
```markdown
# CI/CD Setup Guide
This guide explains how to set up the CI/CD pipeline for automatic testing and PyPI publishing.
## 🚀 Quick Setup
### 1. PyPI API Tokens
You need to create API tokens for both PyPI and Test PyPI:
#### Create PyPI API Token
1. Go to [PyPI Account Settings](https://pypi.org/manage/account/)
2. Scroll to "API tokens" section
3. Click "Add API token"
4. Set name: `penpot-mcp-github-actions`
5. Scope: "Entire account" (or specific to `penpot-mcp` project if it exists)
6. Copy the token (starts with `pypi-`)
#### Create Test PyPI API Token
1. Go to [Test PyPI Account Settings](https://test.pypi.org/manage/account/)
2. Follow same steps as above
3. Copy the token
### 2. GitHub Secrets Configuration
Add the following secrets to your GitHub repository:
1. Go to your GitHub repository
2. Navigate to **Settings** → **Secrets and variables** → **Actions**
3. Click **New repository secret** and add:
| Secret Name | Value | Description |
|-------------|-------|-------------|
| `PYPI_API_TOKEN` | `pypi-AgEIcHl...` | Your PyPI API token |
| `TEST_PYPI_API_TOKEN` | `pypi-AgEIcHl...` | Your Test PyPI API token |
### 3. Enable GitHub Actions
1. Go to **Settings** → **Actions** → **General**
2. Ensure "Allow all actions and reusable workflows" is selected
3. Under "Workflow permissions":
- Select "Read and write permissions"
- Check "Allow GitHub Actions to create and approve pull requests"
## 📋 Workflow Overview
### CI Workflow (`.github/workflows/ci.yml`)
**Triggers:**
- Pull requests to `main` or `develop` branches
- Pushes to `main` or `develop` branches
**Jobs:**
- **Test Matrix**: Tests across Python 3.10, 3.11, 3.12, 3.13
- **Security Check**: Runs `bandit` security analysis
- **Build Test**: Tests package building and installation
- **Docker Test**: Tests Docker containerization
**Features:**
- ✅ Cross-platform testing (Linux, macOS, Windows can be added)
- ✅ Multiple Python version support
- ✅ Code coverage reporting (uploads to Codecov)
- ✅ Security vulnerability scanning
- ✅ Package build verification
- ✅ Docker compatibility testing
### CD Workflow (`.github/workflows/publish.yml`)
**Triggers:**
- Pushes to `main` branch (automatic)
- GitHub releases (manual)
**Auto-Publish Process:**
1. ✅ Runs full CI test suite first
2. ✅ Checks if version was bumped in `__init__.py`
3. ✅ Skips publishing if version already exists on PyPI
4. ✅ Builds and validates package
5. ✅ Tests package installation
6. ✅ Publishes to Test PyPI first (optional)
7. ✅ Publishes to PyPI
8. ✅ Creates GitHub release automatically
9. ✅ Uploads release assets
## 🔄 Version Management
### Automatic Publishing
The pipeline automatically publishes when:
1. You push to `main` branch
2. The version in `penpot_mcp/__init__.py` is different from the latest PyPI version
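The publish workflow makes this decision by probing PyPI's JSON API with curl (HTTP 200 means the release already exists). A sketch of the same check in Python — the network call is hedged behind a helper so the URL construction can be verified without hitting PyPI:

```python
import urllib.request
import urllib.error

def pypi_version_url(package: str, version: str) -> str:
    # Same JSON API endpoint the publish workflow probes with curl
    return f"https://pypi.org/pypi/{package}/{version}/json"

def version_on_pypi(package: str, version: str) -> bool:
    """Return True if the release exists (HTTP 200); False on 404."""
    try:
        with urllib.request.urlopen(pypi_version_url(package, version)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

print(pypi_version_url("penpot-mcp", "0.1.2"))
```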
### Manual Version Bump
To trigger a new release:
```bash
# 1. Update version in penpot_mcp/__init__.py
sed -i 's/__version__ = ".*"/__version__ = "0.1.2"/' penpot_mcp/__init__.py
# 2. Commit and push to main
git add penpot_mcp/__init__.py
git commit -m "Bump version to 0.1.2"
git push origin main
# 3. Pipeline will automatically:
# - Run tests
# - Build package
# - Publish to PyPI
# - Create GitHub release
```
### Manual Release (Alternative)
You can also create releases manually:
```bash
# 1. Create and push a tag
git tag v0.1.2
git push origin v0.1.2
# 2. Create release on GitHub UI
# 3. Pipeline will automatically publish to PyPI
```
## 🛠 Advanced Configuration
### Environment Variables
You can customize the pipeline behavior using environment variables:
```yaml
env:
SKIP_TESTS: false # Skip tests (not recommended)
SKIP_TESTPYPI: false # Skip Test PyPI upload
CREATE_RELEASE: true # Create GitHub releases
PYTHON_VERSION: "3.12" # Default Python version
```
### Dependency Caching
The workflows use `uv` for fast dependency management:
```yaml
- name: Install dependencies
run: |
uv sync --extra dev # Install with dev dependencies
uv sync --frozen # Use locked dependencies (production)
```
### Security Scanning
The pipeline includes multiple security checks:
- **Bandit**: Python security linter
- **Safety**: Dependency vulnerability scanner (can be added)
- **CodeQL**: GitHub's semantic code analysis (can be enabled)
### Adding Security Scanning
To add more security tools:
```yaml
- name: Run safety check
run: |
uv add safety
uv run safety check --json --output safety-report.json
```
## 🐛 Troubleshooting
### Common Issues
#### 1. "Version already exists" error
- Check that you bumped the version in `__init__.py`
- Verify the version doesn't exist on PyPI already
#### 2. PyPI upload fails
- Verify your API tokens are correct
- Check that token has proper scope permissions
- Ensure package name doesn't conflict
#### 3. Tests fail in CI but pass locally
- Check Python version compatibility
- Verify all dependencies are specified in `pyproject.toml`
- Check for environment-specific issues
#### 4. GitHub Actions permissions error
- Ensure "Read and write permissions" are enabled
- Check that secrets are properly configured
### Debug Commands
```bash
# Test build locally
uv build
uv run twine check dist/*
# Test package installation
python -m pip install dist/*.whl
penpot-mcp --help
# Check version
python -c "import penpot_mcp; print(penpot_mcp.__version__)"
# Verify PyPI package
pip index versions penpot-mcp
```
## 📊 Monitoring
### GitHub Actions Dashboard
- View workflow runs: `https://github.com/YOUR_ORG/penpot-mcp/actions`
- Monitor success/failure rates
- Check deployment status
### PyPI Package Page
- Package stats: `https://pypi.org/project/penpot-mcp/`
- Download statistics
- Version history
### Codecov (Optional)
- Code coverage reports
- Coverage trends over time
- Pull request coverage analysis
## 🔐 Security Best Practices
1. **API Tokens**:
- Use scoped tokens (project-specific when possible)
- Rotate tokens regularly
- Never commit tokens to code
2. **Repository Settings**:
- Enable branch protection on `main`
- Require status checks to pass
- Require up-to-date branches
3. **Secrets Management**:
- Use GitHub Secrets for sensitive data
- Consider using environment-specific secrets
- Audit secret access regularly
## 🎯 Next Steps
After setup:
1. **Test the Pipeline**:
- Create a test PR to verify CI
- Push a version bump to test CD
2. **Configure Notifications**:
- Set up Slack/Discord webhooks
- Configure email notifications
3. **Add Integrations**:
- CodeQL for security analysis
- Dependabot for dependency updates
- Pre-commit hooks for code quality
4. **Documentation**:
- Update README with CI/CD badges
- Document release process
- Create contribution guidelines
```
--------------------------------------------------------------------------------
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
```yaml
name: Publish to PyPI
on:
push:
branches: [ main ]
paths-ignore:
- 'README.md'
- 'CHANGELOG.md'
- 'docs/**'
- '.gitignore'
release:
types: [published]
jobs:
# Only run if tests pass first
check-tests:
uses: ./.github/workflows/ci.yml
publish:
needs: check-tests
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
permissions:
contents: write # Required for creating releases
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch full history for version bump detection
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
version: "latest"
- name: Install dependencies
run: |
uv sync --extra dev
- name: Check if version was bumped
id: version-check
run: |
# Get current version from __init__.py
CURRENT_VERSION=$(python -c "import penpot_mcp; print(penpot_mcp.__version__)")
echo "current_version=$CURRENT_VERSION" >> $GITHUB_OUTPUT
# Check if this version already exists on PyPI using the JSON API
HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://pypi.org/pypi/penpot-mcp/$CURRENT_VERSION/json")
if [ "$HTTP_STATUS" = "200" ]; then
echo "version_exists=true" >> $GITHUB_OUTPUT
echo "Version $CURRENT_VERSION already exists on PyPI"
else
echo "version_exists=false" >> $GITHUB_OUTPUT
echo "Version $CURRENT_VERSION is new, will publish"
fi
- name: Build package
if: steps.version-check.outputs.version_exists == 'false'
run: |
uv build
- name: Check package quality
if: steps.version-check.outputs.version_exists == 'false'
run: |
# Install twine for checking
uv add twine
# Check the built package
uv run twine check dist/*
# Verify package contents
python -m tarfile -l dist/*.tar.gz
python -m zipfile -l dist/*.whl
- name: Test package installation
if: steps.version-check.outputs.version_exists == 'false'
run: |
# Test installation in a clean environment
python -m pip install dist/*.whl
# Test basic imports and CLI
python -c "import penpot_mcp; print(f'Successfully imported penpot_mcp v{penpot_mcp.__version__}')"
penpot-mcp --help
# Uninstall to avoid conflicts
python -m pip uninstall -y penpot-mcp
- name: Publish to Test PyPI
if: steps.version-check.outputs.version_exists == 'false'
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.TEST_PYPI_API_TOKEN }}
run: |
uv run twine upload --repository testpypi dist/* --verbose
continue-on-error: true # Test PyPI upload can fail, but don't stop main PyPI upload
- name: Wait for Test PyPI propagation
if: steps.version-check.outputs.version_exists == 'false'
run: |
echo "Waiting 60 seconds for Test PyPI propagation..."
sleep 60
- name: Test installation from Test PyPI
if: steps.version-check.outputs.version_exists == 'false'
run: |
# Try to install from Test PyPI (may fail due to dependencies)
python -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ penpot-mcp==${{ steps.version-check.outputs.current_version }} || echo "Test PyPI installation failed (expected due to dependencies)"
continue-on-error: true
- name: Publish to PyPI
if: steps.version-check.outputs.version_exists == 'false'
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: |
uv run twine upload dist/* --verbose
- name: Create GitHub Release
if: steps.version-check.outputs.version_exists == 'false'
uses: softprops/action-gh-release@v2
with:
tag_name: v${{ steps.version-check.outputs.current_version }}
name: Release v${{ steps.version-check.outputs.current_version }}
body: |
## Changes in v${{ steps.version-check.outputs.current_version }}
Auto-generated release for version ${{ steps.version-check.outputs.current_version }}.
### Installation
```bash
pip install penpot-mcp==${{ steps.version-check.outputs.current_version }}
# or
uvx penpot-mcp
```
### What's Changed
See commit history for detailed changes.
**Full Changelog**: https://github.com/montevive/penpot-mcp/compare/v${{ steps.version-check.outputs.current_version }}...HEAD
files: dist/*
draft: false
prerelease: false
- name: Notify on success
if: steps.version-check.outputs.version_exists == 'false'
run: |
echo "✅ Successfully published penpot-mcp v${{ steps.version-check.outputs.current_version }} to PyPI!"
echo "📦 Package: https://pypi.org/project/penpot-mcp/${{ steps.version-check.outputs.current_version }}/"
echo "🏷️ Release: https://github.com/montevive/penpot-mcp/releases/tag/v${{ steps.version-check.outputs.current_version }}"
- name: Skip publishing
if: steps.version-check.outputs.version_exists == 'true'
run: |
echo "⏭️ Skipping publish - version ${{ steps.version-check.outputs.current_version }} already exists on PyPI"
# Manual release workflow (triggered by GitHub releases)
publish-release:
runs-on: ubuntu-latest
if: github.event_name == 'release' && github.event.action == 'published'
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
uses: astral-sh/setup-uv@v6
- name: Install dependencies
run: |
uv sync --extra dev
- name: Update version to match release tag
run: |
RELEASE_VERSION="${{ github.event.release.tag_name }}"
# Remove 'v' prefix if present
VERSION="${RELEASE_VERSION#v}"
# Update version in __init__.py
sed -i "s/__version__ = \".*\"/__version__ = \"$VERSION\"/" penpot_mcp/__init__.py
echo "Updated version to: $VERSION"
- name: Build and publish
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: |
uv build
uv run twine check dist/*
uv run twine upload dist/* --verbose
```
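The `version-check` step above probes PyPI's JSON API with `curl` and branches on the HTTP status. The same check can be sketched in Python; the helper names here are ours for illustration, not part of the workflow:

```python
from urllib.error import HTTPError
from urllib.request import urlopen


def pypi_release_url(package: str, version: str) -> str:
    """Build the PyPI JSON API URL for one specific release."""
    return f"https://pypi.org/pypi/{package}/{version}/json"


def version_exists_on_pypi(package: str, version: str) -> bool:
    """Return True if the given release is already published on PyPI."""
    try:
        with urlopen(pypi_release_url(package, version)) as resp:
            # 200 means the release page exists, i.e. the version is taken
            return resp.status == 200
    except HTTPError as err:
        if err.code == 404:  # release not found: safe to publish
            return False
        raise  # any other error should fail loudly, as in the workflow
```

As in the workflow, only a 404 is treated as "version is free"; other failures propagate so the job does not silently publish over an undetected error.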
--------------------------------------------------------------------------------
/penpot_mcp/server/client.py:
--------------------------------------------------------------------------------
```python
"""Client for connecting to the Penpot MCP server."""
import asyncio
from typing import Any, Dict, List, Optional
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
class PenpotMCPClient:
"""Client for interacting with the Penpot MCP server."""
def __init__(self, server_command="python", server_args=None, env=None):
"""
Initialize the Penpot MCP client.
Args:
server_command: The command to run the server
server_args: Arguments to pass to the server command
env: Environment variables for the server process
"""
self.server_command = server_command
self.server_args = server_args or ["-m", "penpot_mcp.server.mcp_server"]
self.env = env
        self.session = None
        # Context managers backing the stdio transport and client session,
        # populated by connect()
        self._stdio_ctx = None
        self._session_ctx = None
async def connect(self):
"""
Connect to the MCP server.
Returns:
The client session
"""
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command=self.server_command,
args=self.server_args,
env=self.env,
)
        # Connect to the server, keeping references to the context managers
        # so both the session and the stdio transport can be closed cleanly
        self._stdio_ctx = stdio_client(server_params)
        read, write = await self._stdio_ctx.__aenter__()
        self._session_ctx = ClientSession(read, write)
        self.session = await self._session_ctx.__aenter__()
        # Initialize the connection
        await self.session.initialize()
        return self.session
    async def disconnect(self):
        """Disconnect from the server and close the stdio transport."""
        if self.session:
            await self._session_ctx.__aexit__(None, None, None)
            await self._stdio_ctx.__aexit__(None, None, None)
            self.session = None
async def list_resources(self) -> List[Dict[str, Any]]:
"""
List available resources from the server.
Returns:
List of resource information
"""
if not self.session:
raise RuntimeError("Not connected to server")
return await self.session.list_resources()
async def list_tools(self) -> List[Dict[str, Any]]:
"""
List available tools from the server.
Returns:
List of tool information
"""
if not self.session:
raise RuntimeError("Not connected to server")
return await self.session.list_tools()
async def get_server_info(self) -> Dict[str, Any]:
"""
Get server information.
Returns:
Server information
"""
if not self.session:
raise RuntimeError("Not connected to server")
info, _ = await self.session.read_resource("server://info")
return info
async def list_projects(self) -> Dict[str, Any]:
"""
List Penpot projects.
Returns:
Project information
"""
if not self.session:
raise RuntimeError("Not connected to server")
return await self.session.call_tool("list_projects")
async def get_project(self, project_id: str) -> Dict[str, Any]:
"""
Get details for a specific project.
Args:
project_id: The project ID
Returns:
Project information
"""
if not self.session:
raise RuntimeError("Not connected to server")
return await self.session.call_tool("get_project", {"project_id": project_id})
async def get_project_files(self, project_id: str) -> Dict[str, Any]:
"""
Get files for a specific project.
Args:
project_id: The project ID
Returns:
File information
"""
if not self.session:
raise RuntimeError("Not connected to server")
return await self.session.call_tool("get_project_files", {"project_id": project_id})
async def get_file(self, file_id: str, features: Optional[List[str]] = None,
project_id: Optional[str] = None) -> Dict[str, Any]:
"""
Get details for a specific file.
Args:
file_id: The file ID
features: List of features to include
project_id: Optional project ID
Returns:
File information
"""
if not self.session:
raise RuntimeError("Not connected to server")
params = {"file_id": file_id}
if features:
params["features"] = features
if project_id:
params["project_id"] = project_id
return await self.session.call_tool("get_file", params)
async def get_components(self) -> Dict[str, Any]:
"""
Get components from the server.
Returns:
Component information
"""
if not self.session:
raise RuntimeError("Not connected to server")
components, _ = await self.session.read_resource("content://components")
return components
async def export_object(self, file_id: str, page_id: str, object_id: str,
export_type: str = "png", scale: int = 1,
save_to_file: Optional[str] = None) -> Dict[str, Any]:
"""
Export an object from a Penpot file.
Args:
file_id: The ID of the file containing the object
page_id: The ID of the page containing the object
object_id: The ID of the object to export
export_type: Export format (png, svg, pdf)
scale: Scale factor for the export
save_to_file: Optional path to save the exported file
Returns:
If save_to_file is None: Dictionary with the exported image data
If save_to_file is provided: Dictionary with the saved file path
"""
if not self.session:
raise RuntimeError("Not connected to server")
params = {
"file_id": file_id,
"page_id": page_id,
"object_id": object_id,
"export_type": export_type,
"scale": scale
}
result = await self.session.call_tool("export_object", params)
        # The result is a mapping with 'data' (raw bytes) and 'format' fields
        # If the client wants to save the exported image to a file
if save_to_file:
import os
# Create directory if it doesn't exist
os.makedirs(os.path.dirname(os.path.abspath(save_to_file)), exist_ok=True)
# Save to file
with open(save_to_file, "wb") as f:
f.write(result["data"])
return {"file_path": save_to_file, "format": result.get("format")}
# Otherwise return the result as is
return result
async def run_client_example():
"""Run a simple example using the client."""
# Create and connect the client
client = PenpotMCPClient()
await client.connect()
try:
# Get server info
print("Getting server info...")
server_info = await client.get_server_info()
print(f"Server info: {server_info}")
# List projects
print("\nListing projects...")
projects_result = await client.list_projects()
if "error" in projects_result:
print(f"Error: {projects_result['error']}")
else:
projects = projects_result.get("projects", [])
print(f"Found {len(projects)} projects:")
for project in projects[:5]: # Show first 5 projects
print(f"- {project.get('name', 'Unknown')} (ID: {project.get('id', 'N/A')})")
# Example of exporting an object (uncomment and update with actual IDs to test)
"""
print("\nExporting object...")
# Replace with actual IDs from your Penpot account
export_result = await client.export_object(
file_id="your-file-id",
page_id="your-page-id",
object_id="your-object-id",
export_type="png",
scale=2,
save_to_file="exported_object.png"
)
print(f"Export saved to: {export_result.get('file_path')}")
# Or get the image data directly without saving
image_data = await client.export_object(
file_id="your-file-id",
page_id="your-page-id",
object_id="your-object-id"
)
print(f"Received image in format: {image_data.get('format')}")
print(f"Image size: {len(image_data.get('data'))} bytes")
"""
finally:
# Disconnect from the server
await client.disconnect()
def main():
"""Run the client example."""
asyncio.run(run_client_example())
if __name__ == "__main__":
main()
```
--------------------------------------------------------------------------------
/lint.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""Script to run linters with auto-fix capabilities.
Run with: python lint.py [--autofix]
"""
import argparse
import importlib.util
import subprocess
import sys
from pathlib import Path
def is_venv():
"""Check if running in a virtual environment."""
return (hasattr(sys, 'real_prefix') or
(hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix))
def check_dependencies():
"""Check if all required dependencies are installed."""
missing_deps = []
# Check for required modules
required_modules = ["flake8", "isort", "autopep8", "pyflakes"]
# In Python 3.12+, also check for pycodestyle as a fallback
if sys.version_info >= (3, 12):
required_modules.append("pycodestyle")
for module in required_modules:
if importlib.util.find_spec(module) is None:
missing_deps.append(module)
# Special check for autopep8 compatibility with Python 3.12+
if sys.version_info >= (3, 12) and importlib.util.find_spec("autopep8") is not None:
try:
import autopep8
# Try to access a function that would use lib2to3
# Will throw an error if lib2to3 is missing and not handled properly
autopep8_version = autopep8.__version__
print(f"Using autopep8 version: {autopep8_version}")
except ImportError as e:
if "lib2to3" in str(e):
print("WARNING: You're using Python 3.12+ where lib2to3 is no longer included.")
print("Your installed version of autopep8 may not work correctly.")
print("Consider using a version of autopep8 compatible with Python 3.12+")
print("or run this script with Python 3.11 or earlier.")
if missing_deps:
print("ERROR: Missing required dependencies:")
for dep in missing_deps:
print(f" - {dep}")
if not is_venv():
print("\nYou are using the system Python environment.")
print("It's recommended to use a virtual environment:")
print("\n1. Create a virtual environment:")
print(" python3 -m venv .venv")
print("\n2. Activate the virtual environment:")
print(" source .venv/bin/activate # On Linux/macOS")
print(" .venv\\Scripts\\activate # On Windows")
print("\n3. Install dependencies:")
print(" pip install -r requirements-dev.txt")
else:
print("\nPlease install these dependencies with:")
print(" pip install -r requirements-dev.txt")
return False
return True
def run_command(cmd, cwd=None):
"""Run a shell command and return the exit code."""
try:
process = subprocess.run(cmd, shell=True, cwd=cwd)
return process.returncode
except Exception as e:
print(f"Error executing command '{cmd}': {e}")
return 1
def fix_unused_imports(root_dir):
"""Fix unused imports using pyflakes and autoflake."""
try:
if importlib.util.find_spec("autoflake") is not None:
print("Running autoflake to remove unused imports...")
cmd = "autoflake --remove-all-unused-imports --recursive --in-place penpot_mcp/ tests/"
return run_command(cmd, cwd=root_dir)
else:
print("autoflake not found. To automatically remove unused imports, install:")
print(" pip install autoflake")
return 0
except Exception as e:
print(f"Error with autoflake: {e}")
return 0
def fix_whitespace_and_docstring_issues(root_dir):
"""Attempt to fix whitespace and simple docstring issues."""
# Find Python files that need fixing
try:
filelist_cmd = "find penpot_mcp tests setup.py -name '*.py' -type f"
process = subprocess.run(
filelist_cmd, shell=True, cwd=root_dir,
capture_output=True, text=True
)
if process.returncode != 0:
print("Error finding Python files")
return 1
files = process.stdout.strip().split('\n')
fixed_count = 0
for file_path in files:
if not file_path:
continue
full_path = Path(root_dir) / file_path
try:
with open(full_path, 'r', encoding='utf-8') as f:
content = f.read()
# Fix trailing whitespace
fixed_content = '\n'.join(line.rstrip() for line in content.split('\n'))
# Ensure final newline
if not fixed_content.endswith('\n'):
fixed_content += '\n'
# Add basic docstrings to empty modules, classes, functions
if '__init__.py' in file_path and '"""' not in fixed_content:
package_name = file_path.split('/')[-2]
fixed_content = f'"""Package {package_name}."""\n' + fixed_content
# Write back if changes were made
if fixed_content != content:
with open(full_path, 'w', encoding='utf-8') as f:
f.write(fixed_content)
fixed_count += 1
except Exception as e:
print(f"Error processing {file_path}: {e}")
if fixed_count > 0:
print(f"Fixed whitespace and newlines in {fixed_count} files")
return 0
except Exception as e:
print(f"Error in whitespace fixing: {e}")
return 0
def main():
"""Main entry point for the linter script."""
parser = argparse.ArgumentParser(description="Run linters with optional auto-fix")
parser.add_argument(
"--autofix", "-a", action="store_true", help="Automatically fix linting issues"
)
args = parser.parse_args()
# Verify dependencies before proceeding
if not check_dependencies():
return 1
root_dir = Path(__file__).parent.absolute()
print("Running linters...")
# Run isort
isort_cmd = "isort --profile black ."
if args.autofix:
print("Running isort with auto-fix...")
exit_code = run_command(isort_cmd, cwd=root_dir)
else:
print("Checking imports with isort...")
exit_code = run_command(f"{isort_cmd} --check", cwd=root_dir)
if exit_code != 0 and not args.autofix:
print("isort found issues. Run with --autofix to fix automatically.")
# Run additional fixers when in autofix mode
if args.autofix:
# Fix unused imports
fix_unused_imports(root_dir)
# Fix whitespace and newline issues
fix_whitespace_and_docstring_issues(root_dir)
# Run autopep8
print("Running autopep8 with auto-fix...")
if sys.version_info >= (3, 12):
print("Detected Python 3.12+. Using compatible code formatting approach...")
# Use a more compatible approach for Python 3.12+
# First try autopep8 (newer versions may have fixed lib2to3 dependency)
autopep8_cmd = "autopep8 --recursive --aggressive --aggressive --in-place --select E,W penpot_mcp/ tests/ setup.py"
try:
exit_code = run_command(autopep8_cmd, cwd=root_dir)
if exit_code != 0:
print("Warning: autopep8 encountered issues. Some files may not have been fixed.")
except Exception as e:
if "lib2to3" in str(e):
print("Error with autopep8 due to missing lib2to3 module in Python 3.12+")
print("Using pycodestyle for checking only (no auto-fix is possible)")
exit_code = run_command("pycodestyle penpot_mcp/ tests/", cwd=root_dir)
else:
raise
else:
# Normal execution for Python < 3.12
autopep8_cmd = "autopep8 --recursive --aggressive --aggressive --in-place --select E,W penpot_mcp/ tests/ setup.py"
exit_code = run_command(autopep8_cmd, cwd=root_dir)
if exit_code != 0:
print("Warning: autopep8 encountered issues. Some files may not have been fixed.")
# Run flake8 (check only, no auto-fix)
print("Running flake8...")
flake8_cmd = "flake8 --exclude=.venv,venv,__pycache__,.git,build,dist,*.egg-info,node_modules"
flake8_result = run_command(flake8_cmd, cwd=root_dir)
if flake8_result != 0:
print("flake8 found issues that need to be fixed manually.")
print("Common issues and how to fix them:")
print("- F401 (unused import): Remove the import or use it")
print("- D1XX (missing docstring): Add a docstring to the module/class/function")
print("- E501 (line too long): Break the line or use line continuation")
print("- F841 (unused variable): Remove or use the variable")
if args.autofix:
print("Auto-fix completed! Run flake8 again to see if there are any remaining issues.")
elif exit_code != 0 or flake8_result != 0:
print("Linting issues found. Run with --autofix to fix automatically where possible.")
return 1
else:
print("All linting checks passed!")
return 0
if __name__ == "__main__":
sys.exit(main())
```
--------------------------------------------------------------------------------
/penpot_mcp/resources/penpot-tree-schema.json:
--------------------------------------------------------------------------------
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"required": ["colors", "typographies", "pages", "components", "id", "tokensLib", "pagesIndex"],
"properties": {
"colors": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["path", "color", "name", "modifiedAt", "opacity", "id"],
"properties": {
"path": {"type": "string"},
"color": {"type": "string", "pattern": "^#[0-9A-Fa-f]{6}$"},
"name": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"opacity": {"type": "number", "minimum": 0, "maximum": 1},
"id": {"type": "string", "format": "uuid"}
}
}
}
},
"typographies": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["lineHeight", "path", "fontStyle", "textTransform", "fontId", "fontSize", "fontWeight", "name", "modifiedAt", "fontVariantId", "id", "letterSpacing", "fontFamily"],
"properties": {
"lineHeight": {"type": "string"},
"path": {"type": "string"},
"fontStyle": {"type": "string", "enum": ["normal"]},
"textTransform": {"type": "string", "enum": ["uppercase", "none"]},
"fontId": {"type": "string"},
"fontSize": {"type": "string"},
"fontWeight": {"type": "string"},
"name": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"fontVariantId": {"type": "string"},
"id": {"type": "string", "format": "uuid"},
"letterSpacing": {"type": "string"},
"fontFamily": {"type": "string"}
}
}
}
},
"components": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["id", "name", "path", "modifiedAt", "mainInstanceId", "mainInstancePage"],
"properties": {
"id": {"type": "string", "format": "uuid"},
"name": {"type": "string"},
"path": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"mainInstanceId": {"type": "string", "format": "uuid"},
"mainInstancePage": {"type": "string", "format": "uuid"},
"annotation": {"type": "string"}
}
}
}
},
"id": {"type": "string", "format": "uuid"},
"tokensLib": {
"type": "object",
"required": ["sets", "themes", "activeThemes"],
"properties": {
"sets": {
"type": "object",
"patternProperties": {
"^S-[a-z]+$": {
"type": "object",
"required": ["name", "description", "modifiedAt", "tokens"],
"properties": {
"name": {"type": "string"},
"description": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"tokens": {
"type": "object",
"patternProperties": {
"^[a-z][a-z0-9.-]*$": {
"type": "object",
"required": ["name", "type", "value", "description", "modifiedAt"],
"properties": {
"name": {"type": "string"},
"type": {"type": "string", "enum": ["dimensions", "sizing", "color", "border-radius", "spacing", "stroke-width", "rotation", "opacity"]},
"value": {"type": "string"},
"description": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"}
}
}
}
}
}
}
}
},
"themes": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"required": ["name", "group", "description", "isSource", "id", "modifiedAt", "sets"],
"properties": {
"name": {"type": "string"},
"group": {"type": "string"},
"description": {"type": "string"},
"isSource": {"type": "boolean"},
"id": {"type": "string", "format": "uuid"},
"modifiedAt": {"type": "string", "format": "date-time"},
"sets": {"type": "array", "items": {"type": "string"}}
}
}
}
}
}
},
"activeThemes": {
"type": "array",
"items": {"type": "string"}
}
}
},
"options": {
"type": "object",
"properties": {
"componentsV2": {"type": "boolean"}
}
},
"objects": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["options", "objects", "id", "name"],
"properties": {
"options": {"type": "object"},
"objects": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["id", "name", "type"],
"properties": {
"id": {"type": "string", "format": "uuid"},
"name": {"type": "string"},
"type": {"type": "string", "enum": ["frame", "rect", "text"]},
"x": {"type": "number"},
"y": {"type": "number"},
"width": {"type": "number"},
"height": {"type": "number"},
"rotation": {"type": "number"},
"selrect": {
"type": "object",
"properties": {
"x": {"type": "number"},
"y": {"type": "number"},
"width": {"type": "number"},
"height": {"type": "number"},
"x1": {"type": "number"},
"y1": {"type": "number"},
"x2": {"type": "number"},
"y2": {"type": "number"}
}
},
"points": {
"type": "array",
"items": {
"type": "object",
"properties": {
"x": {"type": "number"},
"y": {"type": "number"}
}
}
},
"transform": {
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"},
"c": {"type": "number"},
"d": {"type": "number"},
"e": {"type": "number"},
"f": {"type": "number"}
}
},
"transformInverse": {
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"},
"c": {"type": "number"},
"d": {"type": "number"},
"e": {"type": "number"},
"f": {"type": "number"}
}
},
"parentId": {"type": "string", "format": "uuid"},
"frameId": {"type": "string", "format": "uuid"},
"flipX": {"type": ["null", "boolean"]},
"flipY": {"type": ["null", "boolean"]},
"hideFillOnExport": {"type": "boolean"},
"growType": {"type": "string", "enum": ["fixed", "auto-height"]},
"hideInViewer": {"type": "boolean"},
"r1": {"type": "number"},
"r2": {"type": "number"},
"r3": {"type": "number"},
"r4": {"type": "number"},
"proportion": {"type": "number"},
"proportionLock": {"type": "boolean"},
"componentRoot": {"type": "boolean"},
"componentId": {"type": "string", "format": "uuid"},
"mainInstance": {"type": "boolean"},
"componentFile": {"type": "string", "format": "uuid"},
"strokes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"strokeStyle": {"type": "string"},
"strokeAlignment": {"type": "string"},
"strokeWidth": {"type": "number"},
"strokeColor": {"type": "string"},
"strokeOpacity": {"type": "number"}
}
}
},
"fills": {
"type": "array",
"items": {
"type": "object",
"properties": {
"fillColor": {"type": "string"},
"fillOpacity": {"type": "number"},
"fillImage": {
"type": "object",
"properties": {
"name": {"type": "string"},
"width": {"type": "number"},
"height": {"type": "number"},
"mtype": {"type": "string"},
"id": {"type": "string", "format": "uuid"},
"keepAspectRatio": {"type": "boolean"}
}
}
}
}
},
"shapes": {
"type": "array",
"items": {"type": "string", "format": "uuid"}
},
"content": {
"type": "object",
"properties": {
"type": {"type": "string"},
"children": {"type": "array"}
}
},
"appliedTokens": {"type": "object"},
"positionData": {"type": "array"},
"layoutItemMarginType": {"type": "string"},
"constraintsV": {"type": "string"},
"constraintsH": {"type": "string"},
"layoutItemMargin": {"type": "object"},
"layoutGapType": {"type": "string"},
"layoutPadding": {"type": "object"},
"layoutWrapType": {"type": "string"},
"layout": {"type": "string"},
"layoutAlignItems": {"type": "string"},
"layoutPaddingType": {"type": "string"},
"layoutItemHSizing": {"type": "string"},
"layoutGap": {"type": "object"},
"layoutItemVSizing": {"type": "string"},
"layoutJustifyContent": {"type": "string"},
"layoutFlexDir": {"type": "string"},
"layoutAlignContent": {"type": "string"},
"shapeRef": {"type": "string", "format": "uuid"}
}
}
}
},
"id": {"type": "string", "format": "uuid"},
"name": {"type": "string"}
}
}
}
}
}
}
```
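The schema above keys its `colors`, `typographies`, `components`, and object maps by lowercase UUID strings via `patternProperties`. A minimal Python sketch of that same key check, with the regex copied verbatim from the schema:

```python
import re

# Pattern used by the schema's patternProperties for UUID-keyed maps
UUID_KEY = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)


def is_uuid_key(key: str) -> bool:
    """Check whether a dict key matches the schema's lowercase-UUID pattern."""
    return UUID_KEY.match(key) is not None
```

Note the pattern only accepts lowercase hex digits, so uppercase UUIDs (valid per RFC 4122 but not per this schema) are rejected.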
--------------------------------------------------------------------------------
/penpot_mcp/resources/penpot-schema.json:
--------------------------------------------------------------------------------
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"required": ["colors", "typographies", "pages", "components", "id", "tokensLib", "pagesIndex"],
"properties": {
"colors": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["path", "color", "name", "modifiedAt", "opacity", "id"],
"properties": {
"path": {"type": "string"},
"color": {"type": "string", "pattern": "^#[0-9A-Fa-f]{6}$"},
"name": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"opacity": {"type": "number", "minimum": 0, "maximum": 1},
"id": {"type": "string", "format": "uuid"}
}
}
}
},
"typographies": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["lineHeight", "path", "fontStyle", "textTransform", "fontId", "fontSize", "fontWeight", "name", "modifiedAt", "fontVariantId", "id", "letterSpacing", "fontFamily"],
"properties": {
"lineHeight": {"type": "string"},
"path": {"type": "string"},
"fontStyle": {"type": "string", "enum": ["normal"]},
"textTransform": {"type": "string", "enum": ["uppercase", "none"]},
"fontId": {"type": "string"},
"fontSize": {"type": "string"},
"fontWeight": {"type": "string"},
"name": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"fontVariantId": {"type": "string"},
"id": {"type": "string", "format": "uuid"},
"letterSpacing": {"type": "string"},
"fontFamily": {"type": "string"}
}
}
}
},
"pages": {
"type": "array",
"items": {"type": "string", "format": "uuid"}
},
"components": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["id", "name", "path", "modifiedAt", "mainInstanceId", "mainInstancePage"],
"properties": {
"id": {"type": "string", "format": "uuid"},
"name": {"type": "string"},
"path": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"mainInstanceId": {"type": "string", "format": "uuid"},
"mainInstancePage": {"type": "string", "format": "uuid"},
"annotation": {"type": "string"}
}
}
}
},
"id": {"type": "string", "format": "uuid"},
"tokensLib": {
"type": "object",
"required": ["sets", "themes", "activeThemes"],
"properties": {
"sets": {
"type": "object",
"patternProperties": {
"^S-[a-z]+$": {
"type": "object",
"required": ["name", "description", "modifiedAt", "tokens"],
"properties": {
"name": {"type": "string"},
"description": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"},
"tokens": {
"type": "object",
"patternProperties": {
"^[a-z][a-z0-9.-]*$": {
"type": "object",
"required": ["name", "type", "value", "description", "modifiedAt"],
"properties": {
"name": {"type": "string"},
"type": {"type": "string", "enum": ["dimensions", "sizing", "color", "border-radius", "spacing", "stroke-width", "rotation", "opacity"]},
"value": {"type": "string"},
"description": {"type": "string"},
"modifiedAt": {"type": "string", "format": "date-time"}
}
}
}
}
}
}
}
},
"themes": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"patternProperties": {
".*": {
"type": "object",
"required": ["name", "group", "description", "isSource", "id", "modifiedAt", "sets"],
"properties": {
"name": {"type": "string"},
"group": {"type": "string"},
"description": {"type": "string"},
"isSource": {"type": "boolean"},
"id": {"type": "string", "format": "uuid"},
"modifiedAt": {"type": "string", "format": "date-time"},
"sets": {"type": "array", "items": {"type": "string"}}
}
}
}
}
}
},
"activeThemes": {
"type": "array",
"items": {"type": "string"}
}
}
},
"options": {
"type": "object",
"properties": {
"componentsV2": {"type": "boolean"}
}
},
"pagesIndex": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["options", "objects", "id", "name"],
"properties": {
"options": {"type": "object"},
"objects": {
"type": "object",
"patternProperties": {
"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$": {
"type": "object",
"required": ["id", "name", "type"],
"properties": {
"id": {"type": "string", "format": "uuid"},
"name": {"type": "string"},
"type": {"type": "string", "enum": ["frame", "rect", "text"]},
"x": {"type": "number"},
"y": {"type": "number"},
"width": {"type": "number"},
"height": {"type": "number"},
"rotation": {"type": "number"},
"selrect": {
"type": "object",
"properties": {
"x": {"type": "number"},
"y": {"type": "number"},
"width": {"type": "number"},
"height": {"type": "number"},
"x1": {"type": "number"},
"y1": {"type": "number"},
"x2": {"type": "number"},
"y2": {"type": "number"}
}
},
"points": {
"type": "array",
"items": {
"type": "object",
"properties": {
"x": {"type": "number"},
"y": {"type": "number"}
}
}
},
"transform": {
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"},
"c": {"type": "number"},
"d": {"type": "number"},
"e": {"type": "number"},
"f": {"type": "number"}
}
},
"transformInverse": {
"type": "object",
"properties": {
"a": {"type": "number"},
"b": {"type": "number"},
"c": {"type": "number"},
"d": {"type": "number"},
"e": {"type": "number"},
"f": {"type": "number"}
}
},
"parentId": {"type": "string", "format": "uuid"},
"frameId": {"type": "string", "format": "uuid"},
"flipX": {"type": ["null", "boolean"]},
"flipY": {"type": ["null", "boolean"]},
"hideFillOnExport": {"type": "boolean"},
"growType": {"type": "string", "enum": ["fixed", "auto-height"]},
"hideInViewer": {"type": "boolean"},
"r1": {"type": "number"},
"r2": {"type": "number"},
"r3": {"type": "number"},
"r4": {"type": "number"},
"proportion": {"type": "number"},
"proportionLock": {"type": "boolean"},
"componentRoot": {"type": "boolean"},
"componentId": {"type": "string", "format": "uuid"},
"mainInstance": {"type": "boolean"},
"componentFile": {"type": "string", "format": "uuid"},
"strokes": {
"type": "array",
"items": {
"type": "object",
"properties": {
"strokeStyle": {"type": "string"},
"strokeAlignment": {"type": "string"},
"strokeWidth": {"type": "number"},
"strokeColor": {"type": "string"},
"strokeOpacity": {"type": "number"}
}
}
},
"fills": {
"type": "array",
"items": {
"type": "object",
"properties": {
"fillColor": {"type": "string"},
"fillOpacity": {"type": "number"},
"fillImage": {
"type": "object",
"properties": {
"name": {"type": "string"},
"width": {"type": "number"},
"height": {"type": "number"},
"mtype": {"type": "string"},
"id": {"type": "string", "format": "uuid"},
"keepAspectRatio": {"type": "boolean"}
}
}
}
}
},
"shapes": {
"type": "array",
"items": {"type": "string", "format": "uuid"}
},
"content": {
"type": "object",
"properties": {
"type": {"type": "string"},
"children": {"type": "array"}
}
},
"appliedTokens": {"type": "object"},
"positionData": {"type": "array"},
"layoutItemMarginType": {"type": "string"},
"constraintsV": {"type": "string"},
"constraintsH": {"type": "string"},
"layoutItemMargin": {"type": "object"},
"layoutGapType": {"type": "string"},
"layoutPadding": {"type": "object"},
"layoutWrapType": {"type": "string"},
"layout": {"type": "string"},
"layoutAlignItems": {"type": "string"},
"layoutPaddingType": {"type": "string"},
"layoutItemHSizing": {"type": "string"},
"layoutGap": {"type": "object"},
"layoutItemVSizing": {"type": "string"},
"layoutJustifyContent": {"type": "string"},
"layoutFlexDir": {"type": "string"},
"layoutAlignContent": {"type": "string"},
"shapeRef": {"type": "string", "format": "uuid"}
}
}
}
},
"id": {"type": "string", "format": "uuid"},
"name": {"type": "string"}
}
}
}
}
}
}
```
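The `transform` and `transformInverse` objects in the schema above carry six coefficients, `a`–`f`. Interpreting them as an SVG-style affine matrix is an assumption inferred from the field names; under that assumption, applying a transform to a point looks like this stdlib-only sketch (not part of the repository):

```python
# Sketch: reading the schema's "transform" fields {a, b, c, d, e, f}
# as an SVG-style affine matrix (assumption based on the field names):
#   x' = a*x + c*y + e
#   y' = b*x + d*y + f

def apply_transform(t, x, y):
    """Apply an affine transform dict to a point (x, y)."""
    return (t["a"] * x + t["c"] * y + t["e"],
            t["b"] * x + t["d"] * y + t["f"])

identity = {"a": 1, "b": 0, "c": 0, "d": 1, "e": 0, "f": 0}
shift = {"a": 1, "b": 0, "c": 0, "d": 1, "e": 10, "f": 5}

print(apply_transform(identity, 3, 4))  # (3, 4)
print(apply_transform(shift, 3, 4))     # (13, 9)
```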
--------------------------------------------------------------------------------
/penpot_mcp/tools/penpot_tree.py:
--------------------------------------------------------------------------------
```python
"""
Tool for building and visualizing the structure of Penpot files as a tree.
This module provides functionality to parse Penpot file data and generate
a tree representation, which can be displayed or exported.
"""
import re
from typing import Any, Dict, List, Optional, Union
from anytree import Node, RenderTree
from anytree.exporter import DotExporter
def build_tree(data: Dict[str, Any]) -> Node:
"""
Build a tree representation of Penpot file data.
Args:
data: The Penpot file data
Returns:
The root node of the tree
"""
# Create nodes dictionary with ID as key
nodes = {}
# Create a synthetic root node with a special ID that won't conflict
synthetic_root_id = "SYNTHETIC-ROOT"
root = Node(f"{synthetic_root_id} (root) - Root")
nodes[synthetic_root_id] = root
# Add components section
    components_node = Node("components (section) - Components", parent=root)
# Store component annotations for later reference
component_annotations = {}
# Process components
for comp_id, comp_data in data.get('components', {}).items():
comp_name = comp_data.get('name', 'Unnamed')
comp_node = Node(f"{comp_id} (component) - {comp_name}", parent=components_node)
nodes[comp_id] = comp_node
# Store annotation if present
if 'annotation' in comp_data and comp_data['annotation']:
component_annotations[comp_id] = comp_data['annotation']
# First pass: create all page nodes
for page_id, page_data in data.get('pagesIndex', {}).items():
# Create page node
page_name = page_data.get('name', 'Unnamed')
page_node = Node(f"{page_id} (page) - {page_name}", parent=root)
nodes[page_id] = page_node
# Second pass: process each page and its objects
for page_id, page_data in data.get('pagesIndex', {}).items():
page_name = page_data.get('name', 'Unnamed')
# Create a page-specific dictionary for objects to avoid ID collisions
page_nodes = {}
# First, create all object nodes for this page
for obj_id, obj_data in page_data.get('objects', {}).items():
obj_type = obj_data.get('type', 'unknown')
obj_name = obj_data.get('name', 'Unnamed')
node = Node(f"{obj_id} ({obj_type}) - {obj_name}")
page_nodes[obj_id] = node # Store with original ID for this page's lookup
# Store additional properties for filtering
node.obj_type = obj_type
node.obj_name = obj_name
node.obj_id = obj_id
# Add component reference if this is a component instance
if 'componentId' in obj_data and obj_data['componentId'] in nodes:
comp_ref = obj_data['componentId']
node.componentRef = comp_ref
# If this component has an annotation, store it
if comp_ref in component_annotations:
node.componentAnnotation = component_annotations[comp_ref]
# Identify the all-zeros root frame for this page
all_zeros_id = "00000000-0000-0000-0000-000000000000"
page_root_frame = None
# First, find and connect the all-zeros root frame if it exists
if all_zeros_id in page_data.get('objects', {}):
page_root_frame = page_nodes[all_zeros_id]
page_root_frame.parent = nodes[page_id]
# Then build parent-child relationships for this page
for obj_id, obj_data in page_data.get('objects', {}).items():
# Skip the all-zeros root frame as we already processed it
if obj_id == all_zeros_id:
continue
parent_id = obj_data.get('parentId')
# Skip if parent ID is the same as object ID (circular reference)
if parent_id and parent_id == obj_id:
print(
f"Warning: Object {obj_id} references itself as parent. Attaching to page instead.")
page_nodes[obj_id].parent = nodes[page_id]
elif parent_id and parent_id in page_nodes:
# Check for circular references in the node hierarchy
is_circular = False
check_node = page_nodes[parent_id]
while check_node.parent is not None:
if hasattr(check_node.parent, 'obj_id') and check_node.parent.obj_id == obj_id:
is_circular = True
break
check_node = check_node.parent
if is_circular:
print(
f"Warning: Circular reference detected for {obj_id}. Attaching to page instead.")
page_nodes[obj_id].parent = nodes[page_id]
else:
page_nodes[obj_id].parent = page_nodes[parent_id]
else:
# If no parent or parent not found, connect to the all-zeros root frame if it exists,
# otherwise connect to the page
if page_root_frame:
page_nodes[obj_id].parent = page_root_frame
else:
page_nodes[obj_id].parent = nodes[page_id]
return root
def print_tree(root: Node, filter_pattern: Optional[str] = None) -> None:
"""
Print a tree representation to the console, with optional filtering.
Args:
root: The root node of the tree
filter_pattern: Optional regex pattern to filter nodes
"""
matched_nodes = []
# Apply filtering
if filter_pattern:
# Find all nodes that match the filter
pattern = re.compile(filter_pattern, re.IGNORECASE)
# Helper function to check if a node matches the filter
def matches_filter(node):
if not hasattr(node, 'obj_type') and not hasattr(node, 'obj_name'):
return False # Root node or section nodes
            return bool(
                pattern.search(node.obj_type)
                or pattern.search(node.obj_name)
                or pattern.search(node.obj_id)
            )
# Find all matching nodes and their paths to root
        for _, _, node in RenderTree(root):
if matches_filter(node):
matched_nodes.append(node)
# If we found matches, only print these nodes and their ancestors
if matched_nodes:
print(f"Filtered results matching '{filter_pattern}':")
# Build a set of all nodes to show (matching nodes and their ancestors)
nodes_to_show = set()
for node in matched_nodes:
# Add the node and all its ancestors
current = node
while current is not None:
nodes_to_show.add(current)
current = current.parent
# Print the filtered tree
for pre, _, node in RenderTree(root):
if node in nodes_to_show:
node_name = node.name
if hasattr(node, 'componentRef'):
comp_ref_str = f" (refs component: {node.componentRef}"
if hasattr(node, 'componentAnnotation'):
comp_ref_str += f" - Note: {node.componentAnnotation}"
comp_ref_str += ")"
node_name += comp_ref_str
# Highlight matched nodes
if node in matched_nodes:
print(f"{pre}{node_name} <-- MATCH")
else:
print(f"{pre}{node_name}")
print(f"\nFound {len(matched_nodes)} matching objects.")
return
# If no filter or no matches, print the entire tree
for pre, _, node in RenderTree(root):
node_name = node.name
if hasattr(node, 'componentRef'):
comp_ref_str = f" (refs component: {node.componentRef}"
if hasattr(node, 'componentAnnotation'):
comp_ref_str += f" - Note: {node.componentAnnotation}"
comp_ref_str += ")"
node_name += comp_ref_str
print(f"{pre}{node_name}")
def export_tree_to_dot(root: Node, output_file: str, filter_pattern: Optional[str] = None) -> bool:
"""
Export the tree to a DOT file (Graphviz format).
Args:
root: The root node of the tree
output_file: Path to save the exported file
filter_pattern: Optional regex pattern to filter nodes
Returns:
True if successful, False otherwise
"""
try:
# If filtering, we may want to only export the filtered tree
if filter_pattern:
# TODO: Implement filtered export
pass
DotExporter(root).to_picture(output_file)
print(f"Tree exported to {output_file}")
return True
except Exception as e:
print(f"Warning: Could not export to {output_file}: {e}")
print("Make sure Graphviz is installed: https://graphviz.org/download/")
return False
def find_page_containing_object(content: Dict[str, Any], object_id: str) -> Optional[str]:
"""
Find which page contains the specified object.
Args:
content: The Penpot file content
object_id: The ID of the object to find
Returns:
The page ID containing the object, or None if not found
"""
# Helper function to recursively search for an object in the hierarchy
def find_object_in_hierarchy(objects_dict, target_id):
# Check if the object is directly in the dictionary
if target_id in objects_dict:
return True
# Check if the object is a child of any object in the dictionary
for obj_id, obj_data in objects_dict.items():
# Look for objects that have shapes (children)
if "shapes" in obj_data and target_id in obj_data["shapes"]:
return True
# Check in children elements if any
if "children" in obj_data:
child_objects = {child["id"]: child for child in obj_data["children"]}
if find_object_in_hierarchy(child_objects, target_id):
return True
return False
# Check each page
for page_id, page_data in content.get('pagesIndex', {}).items():
objects_dict = page_data.get('objects', {})
if find_object_in_hierarchy(objects_dict, object_id):
return page_id
return None
def find_object_in_tree(tree: Node, target_id: str) -> Optional[Dict[str, Any]]:
"""
Find an object in the tree by its ID and return its subtree as a dictionary.
Args:
tree: The root node of the tree
target_id: The ID of the object to find
Returns:
Dictionary representation of the object's subtree, or None if not found
"""
# Helper function to search in a node's children
def find_object_in_children(node, target_id):
for child in node.children:
if hasattr(child, 'obj_id') and child.obj_id == target_id:
return convert_node_to_dict(child)
result = find_object_in_children(child, target_id)
if result:
return result
return None
# Iterate through the tree's children
for child in tree.children:
# Check if this is a page node (contains "(page)" in its name)
if "(page)" in child.name:
# Check all objects under this page
for obj in child.children:
if hasattr(obj, 'obj_id') and obj.obj_id == target_id:
return convert_node_to_dict(obj)
# Check children recursively
result = find_object_in_children(obj, target_id)
if result:
return result
return None
def convert_node_to_dict(node: Node) -> Dict[str, Any]:
"""
Convert an anytree.Node to a dictionary format for API response.
Args:
node: The node to convert
Returns:
Dictionary representation of the node and its subtree
"""
result = {
'id': node.obj_id if hasattr(node, 'obj_id') else None,
'type': node.obj_type if hasattr(node, 'obj_type') else None,
'name': node.obj_name if hasattr(node, 'obj_name') else None,
'children': []
}
# Add component reference if available
if hasattr(node, 'componentRef'):
result['componentRef'] = node.componentRef
# Add component annotation if available
if hasattr(node, 'componentAnnotation'):
result['componentAnnotation'] = node.componentAnnotation
# Recursively add children
for child in node.children:
result['children'].append(convert_node_to_dict(child))
return result
def get_object_subtree(file_data: Dict[str, Any], object_id: str) -> Dict[str, Union[Dict, str]]:
"""
Get a simplified tree representation of an object and its children.
Args:
file_data: The Penpot file data
object_id: The ID of the object to get the tree for
Returns:
Dictionary containing the simplified tree or an error message
"""
try:
        # Get the content from file data (falling back to the top-level dict,
        # as get_object_subtree_with_fields does)
        content = file_data.get('data', file_data)
# Find which page contains the object
page_id = find_page_containing_object(content, object_id)
if not page_id:
return {"error": f"Object {object_id} not found in file"}
# Build the full tree
full_tree = build_tree(content)
# Find the object in the full tree and extract its subtree
simplified_tree = find_object_in_tree(full_tree, object_id)
if not simplified_tree:
return {"error": f"Object {object_id} not found in tree structure"}
return {
"tree": simplified_tree,
"page_id": page_id
}
except Exception as e:
return {"error": str(e)}
def get_object_subtree_with_fields(file_data: Dict[str, Any], object_id: str,
include_fields: Optional[List[str]] = None,
depth: int = -1) -> Dict[str, Any]:
"""
Get a filtered tree representation of an object with only specified fields.
This function finds an object in the Penpot file data and returns a subtree
with the object as the root, including only the specified fields and limiting
the depth of the tree if requested.
Args:
file_data: The Penpot file data
object_id: The ID of the object to get the tree for
include_fields: List of field names to include in the output (None means include all)
depth: Maximum depth of the tree (-1 means no limit)
Returns:
Dictionary containing the filtered tree or an error message
"""
try:
# Get the content from file data
content = file_data.get('data', file_data)
# Find which page contains the object
page_id = find_page_containing_object(content, object_id)
if not page_id:
return {"error": f"Object {object_id} not found in file"}
# Get the page data
page_data = content.get('pagesIndex', {}).get(page_id, {})
objects_dict = page_data.get('objects', {})
# Check if the object exists in this page
if object_id not in objects_dict:
return {"error": f"Object {object_id} not found in page {page_id}"}
# Track visited nodes to prevent infinite loops
visited = set()
# Function to recursively build the filtered object tree
def build_filtered_object_tree(obj_id: str, current_depth: int = 0):
if obj_id not in objects_dict:
return None
# Check for circular reference
if obj_id in visited:
# Return a placeholder to indicate circular reference
return {
'id': obj_id,
'name': objects_dict[obj_id].get('name', 'Unnamed'),
'type': objects_dict[obj_id].get('type', 'unknown'),
'_circular_reference': True
}
# Mark this object as visited
visited.add(obj_id)
obj_data = objects_dict[obj_id]
# Create a new dict with only the requested fields or all fields if None
if include_fields is None:
filtered_obj = obj_data.copy()
else:
filtered_obj = {field: obj_data[field] for field in include_fields if field in obj_data}
# Always include the id field
filtered_obj['id'] = obj_id
# If depth limit reached, don't process children
if depth != -1 and current_depth >= depth:
# Remove from visited before returning
visited.remove(obj_id)
return filtered_obj
# Find all children of this object
children = []
for child_id, child_data in objects_dict.items():
if child_data.get('parentId') == obj_id:
child_tree = build_filtered_object_tree(child_id, current_depth + 1)
if child_tree:
children.append(child_tree)
# Add children field only if we have children
if children:
filtered_obj['children'] = children
# Remove from visited after processing
visited.remove(obj_id)
return filtered_obj
# Build the filtered tree starting from the requested object
object_tree = build_filtered_object_tree(object_id)
if not object_tree:
return {"error": f"Failed to build object tree for {object_id}"}
return {
"tree": object_tree,
"page_id": page_id
}
except Exception as e:
return {"error": str(e)}
```
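For reference, here is a minimal, hypothetical payload shaped like the `pagesIndex`/`objects` structure that `build_tree` and `get_object_subtree_with_fields` consume, together with a stdlib-only helper that mirrors the `parentId` wiring above without requiring `anytree`. All ids and names are made up for illustration:

```python
# Hypothetical Penpot-style payload: pagesIndex -> objects keyed by id,
# each object carrying type/name and a parentId pointing at its container.
ROOT_FRAME = "00000000-0000-0000-0000-000000000000"

data = {
    "components": {},
    "pagesIndex": {
        "page-1": {
            "name": "Home",
            "objects": {
                ROOT_FRAME: {"type": "frame", "name": "Root"},
                "obj-1": {"type": "rect", "name": "Card",
                          "parentId": ROOT_FRAME},
                "obj-2": {"type": "text", "name": "Title",
                          "parentId": "obj-1"},
            },
        }
    },
}

def children_of(objects, parent_id):
    """Collect the ids of objects whose parentId points at parent_id."""
    return [oid for oid, o in objects.items()
            if o.get("parentId") == parent_id]

objs = data["pagesIndex"]["page-1"]["objects"]
print(children_of(objs, ROOT_FRAME))  # ['obj-1']
print(children_of(objs, "obj-1"))     # ['obj-2']
```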
--------------------------------------------------------------------------------
/penpot_mcp/server/mcp_server.py:
--------------------------------------------------------------------------------
```python
"""
Main MCP server implementation for Penpot.
This module defines the MCP server with resources and tools for interacting with
the Penpot design platform.
"""
import argparse
import hashlib
import json
import os
import re
import sys
from typing import Dict, List, Optional
from mcp.server.fastmcp import FastMCP, Image
from penpot_mcp.api.penpot_api import CloudFlareError, PenpotAPI, PenpotAPIError
from penpot_mcp.tools.penpot_tree import get_object_subtree_with_fields
from penpot_mcp.utils import config
from penpot_mcp.utils.cache import MemoryCache
from penpot_mcp.utils.http_server import ImageServer
class PenpotMCPServer:
"""Penpot MCP Server implementation."""
def __init__(self, name="Penpot MCP Server", test_mode=False):
"""
Initialize the Penpot MCP Server.
Args:
name: Server name
test_mode: If True, certain features like HTTP server will be disabled for testing
"""
# Initialize the MCP server
self.mcp = FastMCP(name, instructions="""
I can help you generate code from your Penpot UI designs. My primary aim is to convert Penpot design components into functional code.
The typical workflow for code generation from Penpot designs is:
1. List your projects using 'list_projects' to find the project containing your designs
2. List files within the project using 'get_project_files' to locate the specific design file
3. Search for the target component within the file using 'search_object' to find the component you want to convert
4. Retrieve the Penpot tree schema using 'penpot_tree_schema' to understand which fields are available in the object tree
5. Get a cropped version of the object tree with a screenshot using 'get_object_tree' to see the component structure and visual representation
6. Get the full screenshot of the object using 'get_rendered_component' for detailed visual reference
For complex designs, you may need multiple iterations of 'get_object_tree' and 'get_rendered_component' due to LLM context limits.
Use the resources to access schemas, cached files, and rendered objects (screenshots) as needed.
Let me know which Penpot design you'd like to convert to code, and I'll guide you through the process!
""")
# Initialize the Penpot API
self.api = PenpotAPI(
base_url=config.PENPOT_API_URL,
debug=config.DEBUG
)
# Initialize memory cache
self.file_cache = MemoryCache(ttl_seconds=600) # 10 minutes
# Storage for rendered component images
self.rendered_components: Dict[str, Image] = {}
# Initialize HTTP server for images if enabled and not in test mode
self.image_server = None
self.image_server_url = None
# Detect if running in a test environment
is_test_env = test_mode or 'pytest' in sys.modules
if config.ENABLE_HTTP_SERVER and not is_test_env:
try:
self.image_server = ImageServer(
host=config.HTTP_SERVER_HOST,
port=config.HTTP_SERVER_PORT
)
# Start the server and get the URL with actual port assigned
self.image_server_url = self.image_server.start()
print(f"Image server started at {self.image_server_url}")
except Exception as e:
print(f"Warning: Failed to start image server: {str(e)}")
# Register resources and tools
if config.RESOURCES_AS_TOOLS:
self._register_resources(resources_only=True)
self._register_tools(include_resource_tools=True)
else:
self._register_resources(resources_only=False)
self._register_tools(include_resource_tools=False)
def _handle_api_error(self, e: Exception) -> dict:
"""Handle API errors and return user-friendly error messages."""
if isinstance(e, CloudFlareError):
return {
"error": "CloudFlare Protection",
"message": str(e),
"error_type": "cloudflare_protection",
"instructions": [
"Open your web browser and navigate to https://design.penpot.app",
"Log in to your Penpot account",
"Complete any CloudFlare human verification challenges if prompted",
"Once verified, try your request again"
]
}
elif isinstance(e, PenpotAPIError):
return {
"error": "Penpot API Error",
"message": str(e),
"error_type": "api_error",
"status_code": getattr(e, 'status_code', None)
}
else:
return {"error": str(e)}
def _register_resources(self, resources_only=False):
"""Register all MCP resources. If resources_only is True, only register server://info as a resource."""
@self.mcp.resource("server://info")
def server_info() -> dict:
"""Provide information about the server."""
info = {
"status": "online",
"name": "Penpot MCP Server",
"description": "Model Context Provider for Penpot",
"api_url": config.PENPOT_API_URL
}
if self.image_server and self.image_server.is_running:
info["image_server"] = self.image_server_url
return info
if resources_only:
return
@self.mcp.resource("penpot://schema", mime_type="application/schema+json")
def penpot_schema() -> dict:
"""Provide the Penpot API schema as JSON."""
schema_path = os.path.join(config.RESOURCES_PATH, 'penpot-schema.json')
try:
with open(schema_path, 'r') as f:
return json.load(f)
except Exception as e:
return {"error": f"Failed to load schema: {str(e)}"}
@self.mcp.resource("penpot://tree-schema", mime_type="application/schema+json")
def penpot_tree_schema() -> dict:
"""Provide the Penpot object tree schema as JSON."""
schema_path = os.path.join(config.RESOURCES_PATH, 'penpot-tree-schema.json')
try:
with open(schema_path, 'r') as f:
return json.load(f)
except Exception as e:
return {"error": f"Failed to load tree schema: {str(e)}"}
@self.mcp.resource("rendered-component://{component_id}", mime_type="image/png")
def get_rendered_component(component_id: str) -> Image:
"""Return a rendered component image by its ID."""
if component_id in self.rendered_components:
return self.rendered_components[component_id]
raise Exception(f"Component with ID {component_id} not found")
@self.mcp.resource("penpot://cached-files")
def get_cached_files() -> dict:
"""List all files currently stored in the cache."""
return self.file_cache.get_all_cached_files()
def _register_tools(self, include_resource_tools=False):
"""Register all MCP tools. If include_resource_tools is True, also register resource logic as tools."""
@self.mcp.tool()
def list_projects() -> dict:
"""Retrieve a list of all available Penpot projects."""
try:
projects = self.api.list_projects()
return {"projects": projects}
except Exception as e:
return self._handle_api_error(e)
@self.mcp.tool()
def get_project_files(project_id: str) -> dict:
"""Get all files contained within a specific Penpot project.
Args:
project_id: The ID of the Penpot project
"""
try:
files = self.api.get_project_files(project_id)
return {"files": files}
except Exception as e:
return self._handle_api_error(e)
def get_cached_file(file_id: str) -> dict:
"""Internal helper to retrieve a file, using cache if available.
Args:
file_id: The ID of the Penpot file
"""
cached_data = self.file_cache.get(file_id)
if cached_data is not None:
return cached_data
try:
file_data = self.api.get_file(file_id=file_id)
self.file_cache.set(file_id, file_data)
return file_data
except Exception as e:
return self._handle_api_error(e)
@self.mcp.tool()
def get_file(file_id: str) -> dict:
"""Retrieve a Penpot file by its ID and cache it. Don't use this tool for code generation, use 'get_object_tree' instead.
Args:
file_id: The ID of the Penpot file
"""
try:
file_data = self.api.get_file(file_id=file_id)
self.file_cache.set(file_id, file_data)
return file_data
except Exception as e:
return self._handle_api_error(e)
@self.mcp.tool()
def export_object(
file_id: str,
page_id: str,
object_id: str,
export_type: str = "png",
scale: int = 1) -> Image:
"""Export a Penpot design object as an image.
Args:
file_id: The ID of the Penpot file
page_id: The ID of the page containing the object
object_id: The ID of the object to export
export_type: Image format (png, svg, etc.)
scale: Scale factor for the exported image
"""
temp_filename = None
try:
import tempfile
temp_dir = tempfile.gettempdir()
temp_filename = os.path.join(temp_dir, f"{object_id}.{export_type}")
output_path = self.api.export_and_download(
file_id=file_id,
page_id=page_id,
object_id=object_id,
export_type=export_type,
scale=scale,
save_to_file=temp_filename
)
with open(output_path, "rb") as f:
file_content = f.read()
image = Image(data=file_content, format=export_type)
# If HTTP server is enabled, add the image to the server
if self.image_server and self.image_server.is_running:
image_id = hashlib.md5(f"{file_id}:{page_id}:{object_id}".encode()).hexdigest()
# Use the current image_server_url to ensure the correct port
image_url = self.image_server.add_image(image_id, file_content, export_type)
# Add HTTP URL to the image metadata
image.http_url = image_url
return image
except Exception as e:
if isinstance(e, CloudFlareError):
raise Exception(f"CloudFlare Protection: {str(e)}")
else:
raise Exception(f"Export failed: {str(e)}")
finally:
if temp_filename and os.path.exists(temp_filename):
try:
os.remove(temp_filename)
except Exception as e:
print(f"Warning: Failed to delete temporary file {temp_filename}: {str(e)}")
@self.mcp.tool()
def get_object_tree(
file_id: str,
object_id: str,
fields: List[str],
depth: int = -1,
format: str = "json"
) -> dict:
            """Get the object tree structure for a Penpot object ("tree" field), along with a rendered screenshot of the object ("image.mcp_uri" field).
Args:
file_id: The ID of the Penpot file
object_id: The ID of the object to retrieve
fields: Specific fields to include in the tree (call "penpot_tree_schema" resource/tool for available fields)
depth: How deep to traverse the object tree (-1 for full depth)
format: Output format ('json' or 'yaml')
"""
try:
file_data = get_cached_file(file_id)
if "error" in file_data:
return file_data
result = get_object_subtree_with_fields(
file_data,
object_id,
include_fields=fields,
depth=depth
)
if "error" in result:
return result
simplified_tree = result["tree"]
page_id = result["page_id"]
final_result = {"tree": simplified_tree}
try:
image = export_object(
file_id=file_id,
page_id=page_id,
object_id=object_id
)
image_id = hashlib.md5(f"{file_id}:{object_id}".encode()).hexdigest()
self.rendered_components[image_id] = image
# Image URI preferences:
# 1. HTTP server URL if available
# 2. Fallback to MCP resource URI
                    image_uri = f"rendered-component://{image_id}"
if hasattr(image, 'http_url'):
final_result["image"] = {
"uri": image.http_url,
"mcp_uri": image_uri,
"format": image.format if hasattr(image, 'format') else "png"
}
else:
final_result["image"] = {
"uri": image_uri,
"format": image.format if hasattr(image, 'format') else "png"
}
except Exception as e:
final_result["image_error"] = str(e)
if format.lower() == "yaml":
try:
import yaml
yaml_result = yaml.dump(final_result, default_flow_style=False, sort_keys=False)
return {"yaml_result": yaml_result}
except ImportError:
return {"format_error": "YAML format requested but PyYAML package is not installed"}
except Exception as e:
return {"format_error": f"Error formatting as YAML: {str(e)}"}
return final_result
except Exception as e:
return self._handle_api_error(e)
@self.mcp.tool()
def search_object(file_id: str, query: str) -> dict:
"""Search for objects within a Penpot file by name.
Args:
file_id: The ID of the Penpot file to search in
query: Search string (supports regex patterns)
"""
try:
file_data = get_cached_file(file_id)
if "error" in file_data:
return file_data
pattern = re.compile(query, re.IGNORECASE)
matches = []
data = file_data.get('data', {})
for page_id, page_data in data.get('pagesIndex', {}).items():
page_name = page_data.get('name', 'Unnamed')
for obj_id, obj_data in page_data.get('objects', {}).items():
obj_name = obj_data.get('name', '')
if pattern.search(obj_name):
matches.append({
'id': obj_id,
'name': obj_name,
'page_id': page_id,
'page_name': page_name,
'object_type': obj_data.get('type', 'unknown')
})
return {'objects': matches}
except Exception as e:
return self._handle_api_error(e)
if include_resource_tools:
@self.mcp.tool()
def penpot_schema() -> dict:
"""Provide the Penpot API schema as JSON."""
schema_path = os.path.join(config.RESOURCES_PATH, 'penpot-schema.json')
try:
with open(schema_path, 'r') as f:
return json.load(f)
except Exception as e:
return {"error": f"Failed to load schema: {str(e)}"}
@self.mcp.tool()
def penpot_tree_schema() -> dict:
"""Provide the Penpot object tree schema as JSON."""
schema_path = os.path.join(config.RESOURCES_PATH, 'penpot-tree-schema.json')
try:
with open(schema_path, 'r') as f:
return json.load(f)
except Exception as e:
return {"error": f"Failed to load tree schema: {str(e)}"}
@self.mcp.tool()
def get_rendered_component(component_id: str) -> Image:
"""Return a rendered component image by its ID."""
if component_id in self.rendered_components:
return self.rendered_components[component_id]
raise Exception(f"Component with ID {component_id} not found")
@self.mcp.tool()
def get_cached_files() -> dict:
"""List all files currently stored in the cache."""
return self.file_cache.get_all_cached_files()
def run(self, port=None, debug=None, mode=None):
"""
Run the MCP server.
Args:
port: Port to run on (overrides config) - only used in 'sse' mode
debug: Debug mode (overrides config)
mode: MCP mode ('stdio' or 'sse', overrides config)
"""
# Use provided values or fall back to config
debug = debug if debug is not None else config.DEBUG
# Get mode from parameter, environment variable, or default to stdio
mode = mode or os.environ.get('MODE', 'stdio')
# Validate mode
if mode not in ['stdio', 'sse']:
print(f"Invalid mode: {mode}. Using stdio mode.")
mode = 'stdio'
if mode == 'sse':
print(f"Starting Penpot MCP Server on port {port} (debug={debug}, mode={mode})")
else:
print(f"Starting Penpot MCP Server (debug={debug}, mode={mode})")
# Start HTTP server if enabled and not already running
if config.ENABLE_HTTP_SERVER and self.image_server and not self.image_server.is_running:
try:
self.image_server_url = self.image_server.start()
except Exception as e:
print(f"Warning: Failed to start image server: {str(e)}")
self.mcp.run(mode)
def create_server():
"""Create and configure a new server instance."""
# Detect if running in a test environment
is_test_env = 'pytest' in sys.modules
return PenpotMCPServer(test_mode=is_test_env)
# Create a global server instance with a standard name for the MCP tool
server = create_server()
def main():
"""Entry point for the console script."""
parser = argparse.ArgumentParser(description='Run the Penpot MCP Server')
parser.add_argument('--port', type=int, help='Port to run on')
parser.add_argument('--debug', action='store_true', help='Enable debug mode')
parser.add_argument('--mode', choices=['stdio', 'sse'], default=os.environ.get('MODE', 'stdio'),
help='MCP mode (stdio or sse)')
args = parser.parse_args()
server.run(port=args.port, debug=args.debug, mode=args.mode)
if __name__ == "__main__":
main()
```
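As a usage note: `get_object_tree` stores each rendered screenshot under an id derived from the file and object ids, so a client that wants to re-fetch an image through the `rendered-component://` resource can reproduce that id with the same hash. This is a sketch mirroring the `hashlib.md5` call in `mcp_server.py` above; the example ids are made up:

```python
import hashlib

def rendered_image_id(file_id: str, object_id: str) -> str:
    """Mirror mcp_server.py: md5 hex digest of 'file_id:object_id'."""
    return hashlib.md5(f"{file_id}:{object_id}".encode()).hexdigest()

# Hypothetical ids; the result is a stable 32-character hex string.
image_id = rendered_image_id("file-a", "obj-1")
print(f"rendered-component://{image_id}")
```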