# Directory Structure

```
├── .gitignore
├── .python-version
├── assets
│   └── create-edit.png
├── pyproject.toml
├── README.md
├── server.json
├── src
│   └── video_editor_mcp
│       ├── __init__.py
│       ├── edit.json
│       ├── generate_charts.py
│       ├── generate_opentimeline.py
│       ├── search_local_videos.py
│       └── server.py
├── tools
│   ├── pyproject.toml
│   ├── README.md
│   └── src
│       └── manim
│           ├── manim_loop.py
│           └── run_manim.py
├── uv.lock
└── video-player
    ├── build-downloader.sh
    ├── build.sh
    ├── downloader.swift
    ├── Icon.svg
    ├── Info.plist
    ├── open-window
    ├── open-window.swift
    ├── player.swift
    └── vj-player
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.11

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv
app.log
# Generated / Downloaded Videos
*.mp4
config.json

```

--------------------------------------------------------------------------------
/tools/README.md:
--------------------------------------------------------------------------------

```markdown
# Asset Viewer

Allows you to view Video Assets
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Video Editor MCP server

[![Video Jungle MCP Server](./assets/create-edit.png)](https://www.video-jungle.com)

See a demo here: [https://www.youtube.com/watch?v=KG6TMLD8GmA](https://www.youtube.com/watch?v=KG6TMLD8GmA)

Upload, edit, search, and generate videos from everyone's favorite LLM and [Video Jungle](https://www.video-jungle.com/).

You'll need to sign up for an account at [Video Jungle](https://app.video-jungle.com/register) in order to use this tool, and add your API key.

[![PyPI version](https://badge.fury.io/py/video-editor-mcp.svg)](https://badge.fury.io/py/video-editor-mcp)

## Components

### Resources

The server implements an interface to upload, generate, and edit videos with:
- Custom vj:// URI scheme for accessing individual videos and projects
- Each project resource has a name and description
- Search results are returned with metadata about what is in the video, and when, allowing edits to be generated directly
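
For example, a video resource might be addressed with a URI like the following (the exact path layout here is illustrative, not a documented format):

```
vj://video/86f37f08-98fa-4bb2-bb75-e164276c1933
```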

### Prompts

Coming soon.

### Tools

The server implements a few tools:
- add-video
  - Add a video file for analysis from a URL. Returns a vj:// URI to reference the video file
- create-videojungle-project
  - Creates a Video Jungle project to contain generative scripts, analyzed videos, and images for video edit generation
- edit-locally
  - Creates an OpenTimelineIO project and downloads it to your machine to open in a DaVinci Resolve Studio instance (Resolve Studio _must_ already be running before calling this tool)
- generate-edit-from-videos
  - Generates a rendered video edit from a set of video files
- generate-edit-from-single-video
  - Generate an edit from a single input video file
- get-project-assets
  - Get assets within a project for video edit generation.
- search-videos
  - Returns video matches based upon embeddings and keywords
- update-video-edit
  - Live update a video edit's information. If Video Jungle is open, the edit will be updated in real time.

### Using Tools in Practice

In order to use the tools, you'll need to sign up for Video Jungle and add your API key.

**add-video**

Here's an example prompt to invoke the `add-video` tool:

```
can you download the video at https://www.youtube.com/shorts/RumgYaH5XYw and name it fly traps?
```

This will download a video from a URL, add it to your library, and analyze it for retrieval later. Analysis is multi-modal, so you can query against both the audio and visual components.

**search-videos**

Once you've got a video downloaded and analyzed, you can then do queries on it using the `search-videos` tool:

```
can you search my videos for fly traps?
```

Search results contain relevant metadata for generating a video edit according to details discovered in the initial analysis.

**search-local-videos**

You must set the environment variable `LOAD_PHOTOS_DB=1` to use this tool, because it makes Claude prompt for access to the files on your local machine.

Once that's done, you can search through your Photos app for videos that exist on your phone, using Apple's tags.

In my case, when I search for "Skateboard", I get 1903 video files.

```
can you search my local video files for Skateboard?
```
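
Under the hood, this search queries your Photos library with [osxphotos](https://github.com/RhetTbull/osxphotos). Here's a minimal sketch of the equivalent standalone query, mirroring `src/video_editor_mcp/search_local_videos.py`:

```python
import osxphotos

# Query the local Photos library for movies labeled "Skateboard"
photosdb = osxphotos.PhotosDB()
videos = photosdb.query(
    osxphotos.QueryOptions(
        label=["Skateboard"],
        photos=False,
        movies=True,
        incloud=True,
        ignore_case=True,
    )
)
print(f"Found {len(videos)} videos")
```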

**generate-edit-from-videos**

Finally, you can use these search results to generate an edit:

```
can you create an edit of all the times the video says "fly trap"?
```

Currently, the video edit tool relies on the context within the current chat.
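
The edit itself is specified as a JSON list of cuts, each with a video ID and start/end times, as in the example `src/video_editor_mcp/edit.json`:

```json
{
    "edit": [
        {
            "video_id": "86f37f08-98fa-4bb2-bb75-e164276c1933",
            "video_start_time": "00:00:10",
            "video_end_time": "00:00:20"
        }
    ],
    "project_id": "612afec9-bcb2-47ca-807b-756d6e83b4b7",
    "resolution": "1080p"
}
```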

**generate-edit-from-single-video**

You can also cut down an edit from a single, existing video:

```
can you create an edit of all the times this video says the word "fly trap"?
```

## Configuration

You must log in to [Video Jungle settings](https://app.video-jungle.com/profile/settings) and get your [API key](https://app.video-jungle.com/profile/settings). Then use it to start the Video Jungle MCP server:

```bash
$ uv run video-editor-mcp YOURAPIKEY
```

To allow this MCP server to search your Photos app on macOS:

```bash
$ LOAD_PHOTOS_DB=1 uv run video-editor-mcp YOURAPIKEY
```

## Quickstart

### Install

#### Installing via Smithery

To install Video Editor for Claude Desktop automatically via [Smithery](https://smithery.ai/server/video-editor-mcp):

```bash
npx -y @smithery/cli install video-editor-mcp --client claude
```

#### Claude Desktop

You'll need to adjust your `claude_desktop_config.json` manually:

On macOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
On Windows: `%APPDATA%/Claude/claude_desktop_config.json`

<details>
  <summary>Published Server Configuration</summary>
  
 ```json
  "mcpServers": {
    "video-editor-mcp": {
      "command": "uvx",
      "args": [
        "video-editor-mcp",
        "YOURAPIKEY"
      ]
    }
  }
  ```
</details>

<details>
  <summary>Development/Unpublished Servers Configuration</summary>
  
 ```json
  "mcpServers": {
    "video-editor-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "/Users/YOURDIRECTORY/video-editor-mcp",
        "run",
        "video-editor-mcp",
        "YOURAPIKEY"
      ]
    }
  }
  ```

  With local Photos app access enabled (search your Photos app):

  ```json
    "video-jungle-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "/Users/<PATH_TO>/video-jungle-mcp",
        "run",
        "video-editor-mcp",
        "<YOURAPIKEY>"
      ],
     "env": {
	      "LOAD_PHOTOS_DB": "1"
      }
    },
  ```

</details>

Be sure to replace the directory paths with the location of the repository on **your** computer.

## Development

### Building and Publishing

To prepare the package for distribution:

1. Sync dependencies and update lockfile:
```bash
uv sync
```

2. Build package distributions:
```bash
uv build
```

This will create source and wheel distributions in the `dist/` directory.

3. Publish to PyPI:
```bash
uv publish
```

Note: You'll need to set PyPI credentials via environment variables or command flags:
- Token: `--token` or `UV_PUBLISH_TOKEN`
- Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`
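
For example, publishing with a token (a sketch; substitute your own PyPI token):

```bash
UV_PUBLISH_TOKEN=pypi-XXXXXXXX uv publish
```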

### MCP Server Registry

```
mcp-name: io.github.burningion/video-editing-mcp
```

### Debugging

Since MCP servers run over stdio, debugging can be challenging. For the best debugging
experience, we strongly recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector).


You can launch the MCP Inspector via [`npm`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) with this command:

(Be sure to replace `YOURDIRECTORY` and `YOURAPIKEY` with the directory this repo is in, and your Video Jungle API key, found in the settings page.)

```bash
npx @modelcontextprotocol/inspector uv run --directory /Users/YOURDIRECTORY/video-editor-mcp video-editor-mcp YOURAPIKEY
```


Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.

Additionally, I've added logging to `app.log` in the project directory. You can add log statements to diagnose API calls, for example:

```
logging.info("this is a test log")
```
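
That call assumes logging is configured to write to `app.log`, as it is in `generate_opentimeline.py`; a standalone script would set it up the same way:

```python
import logging

# Write timestamped log lines to app.log in the working directory
logging.basicConfig(
    filename="app.log",
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
logging.info("this is a test log")
```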

A reasonable way to follow along as you're working on the project is to open a terminal session and run:

```bash
$ tail -n 90 -f app.log
```

```

--------------------------------------------------------------------------------
/video-player/build.sh:
--------------------------------------------------------------------------------

```bash
swiftc player.swift -o vj-player
```

--------------------------------------------------------------------------------
/tools/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "viewer"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = []

[[project.authors]]
name = "Kirk Kaiser"
email = "[email protected]"

[build-system]
requires = [ "hatchling",]
build-backend = "hatchling.build"


```

--------------------------------------------------------------------------------
/tools/src/manim/run_manim.py:
--------------------------------------------------------------------------------

```python
import subprocess

# Launch manim with pipes for stdin/stdout/stderr
process = subprocess.Popen(
    ["uv", "run", "manim", "manim_loop.py", "-p", "--renderer=opengl"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,  # This makes it use text mode instead of bytes
)

# Forward typed commands to the manim process until "exit" is entered
while True:
    b = input(": ")
    if b == "exit":
        break
    process.stdin.write(b + "\n")
    process.stdin.flush()

# Close stdin and wait for the manim process to shut down cleanly
process.stdin.close()
process.wait()

```

--------------------------------------------------------------------------------
/video-player/Icon.svg:
--------------------------------------------------------------------------------

```
<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 18 18">
  <g transform="scale(0.051)" fill="black">
    <path d="m287.41,219.03c0,11.16-4.35,21.69-12.21,29.63-7.98,7.9-18.51,12.26-29.67,12.26h-21.91v59h21.91c59.19,0,107.34-48.15,107.34-107.33V.02h-65.45v219.01Z"/>
    <path d="M292.26 0 262.38 65.47 174.94 257.17 116.68 257.17 0 0 65.7 0 146.1 175.17 196.11 65.47 225.99 0Z"/>
    <path d="M313.31 0 313.31 65.47 196.11 65.47 225.99 0Z"/>
  </g>
</svg>
```

--------------------------------------------------------------------------------
/tools/src/manim/manim_loop.py:
--------------------------------------------------------------------------------

```python
from manim import *
from manim.opengl import *

from pyglet.window import key


class CameraScene(Scene):
    def construct(self):
        self.camera_states = []

        self.interactive_embed()

    def on_key_press(self, symbol, modifiers):
        # + adds a new camera position to interpolate
        if symbol == key.PLUS:
            print("New position added!")
            self.camera_states.append(self.camera.copy())

        # P plays the animations, one by one
        elif symbol == key.P:
            print("Replaying!")
            for cam in self.camera_states:
                self.play(self.camera.animate.become(cam))

        super().on_key_press(symbol, modifiers)

```

--------------------------------------------------------------------------------
/server.json:
--------------------------------------------------------------------------------

```json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-07-09/server.schema.json",
  "name": "io.github.burningion/video-editing-mcp",
  "description": "MCP Server for Video Jungle - Analyze, Search, Generate, and Edit Videos",
  "status": "active",
  "repository": {
    "url": "https://github.com/burningion/video-editing-mcp",
    "source": "github"
  },
  "version": "1.0.1",
  "packages": [
    {
      "registry_type": "pypi",
      "registry_base_url": "https://pypi.org",
      "identifier": "video-editor-mcp",
      "version": "1.0.1",
      "transport": {
        "type": "stdio"
      },
      "environment_variables": [
        {
          "description": "Video Jungle API Key (found at https://www.video-jungle.com/user/settings)",
          "is_required": true,
          "format": "string",
          "is_secret": true,
          "name": "VJ_API_KEY"
        }
      ]
    }
  ]
}
```

--------------------------------------------------------------------------------
/src/video_editor_mcp/__init__.py:
--------------------------------------------------------------------------------

```python
import asyncio
import sys

from . import server


def main():
    """Main entry point for the package."""
    # Check for --help flag
    if len(sys.argv) > 1 and sys.argv[1] == "--help":
        print("""Video Jungle MCP Server

Usage: video-editor-mcp [OPTIONS] API_KEY

A Model Context Protocol server for video editing operations using Video Jungle API.

Arguments:
  API_KEY    Your Video Jungle API key (can also be set via VJ_API_KEY environment variable)

Options:
  --help     Show this help message and exit

Environment Variables:
  VJ_API_KEY        Video Jungle API key (alternative to command line argument)
  LOAD_PHOTOS_DB    Set to 1 to enable Photos database integration

Examples:
  # Run with API key as argument
  video-editor-mcp your-api-key-here
  
  # Run with API key from environment
  export VJ_API_KEY=your-api-key-here
  video-editor-mcp
  
  # Run with Photos database access
  LOAD_PHOTOS_DB=1 video-editor-mcp your-api-key-here

For more information, visit: https://github.com/burningion/video-editing-mcp""")
        sys.exit(0)

    asyncio.run(server.main())


# Optionally expose other important items at package level
__all__ = ["main", "server"]

```

--------------------------------------------------------------------------------
/video-player/build-downloader.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e

# App name and bundle structure
APP_NAME="VJ Uploader"
BUNDLE_NAME="$APP_NAME.app"
CONTENTS_DIR="$BUNDLE_NAME/Contents"
MACOS_DIR="$CONTENTS_DIR/MacOS"
RESOURCES_DIR="$CONTENTS_DIR/Resources"
ICONSET_NAME="AppIcon.iconset"

# Create bundle structure
mkdir -p "$MACOS_DIR" "$RESOURCES_DIR"
cp Icon.svg "$RESOURCES_DIR/"
cp config.json "$RESOURCES_DIR/"

# Compile the application
swiftc downloader.swift -o "$MACOS_DIR/$APP_NAME"

# Copy Info.plist
cp Info.plist "$CONTENTS_DIR/"

# Create iconset directory
mkdir -p "$ICONSET_NAME"

# Generate different icon sizes from SVG
for size in 16 32 64 128 256 512; do
    # Generate regular size
    magick convert -background none -resize ${size}x${size} Icon.svg "$ICONSET_NAME/icon_${size}x${size}.png"
    
    # Generate @2x size
    if [ $size -lt 512 ]; then
        magick convert -background none -resize $((size*2))x$((size*2)) Icon.svg "$ICONSET_NAME/icon_${size}x${size}@2x.png"
    fi
done

# Convert iconset to icns
iconutil -c icns -o "$RESOURCES_DIR/AppIcon.icns" "$ICONSET_NAME"

# Clean up iconset directory
rm -rf "$ICONSET_NAME"

# Set executable permissions
chmod +x "$MACOS_DIR/$APP_NAME"

echo "Build complete: $BUNDLE_NAME"
```

--------------------------------------------------------------------------------
/video-player/open-window.swift:
--------------------------------------------------------------------------------

```swift
import Cocoa

let workspace = NSWorkspace.shared
let bundleId = "com.skate85.videojungle"
let url = "videojungle://upload"

if let appURL = NSWorkspace.shared.urlForApplication(withBundleIdentifier: bundleId) {
    let configuration = NSWorkspace.OpenConfiguration()
    configuration.arguments = [url]
    
    // Check if app is already running
    let isRunning = NSWorkspace.shared.runningApplications.contains { 
        $0.bundleIdentifier == bundleId 
    }
    
    if isRunning {
        // If running, just open the URL
        if let urlObj = URL(string: url) {
            workspace.open(urlObj, configuration: configuration) { (app, error) in
                if let error = error {
                    print("Error opening URL: \(error)")
                    exit(1)
                }
                exit(0)
            }
        }
    } else {
        // If not running, launch with URL as parameter
        workspace.openApplication(at: appURL, 
                                configuration: configuration) { (app, error) in
            if let error = error {
                print("Error launching app: \(error)")
                exit(1)
            }
            exit(0)
        }
    }
    
    RunLoop.main.run(until: Date(timeIntervalSinceNow: 5))
} else {
    print("Could not find application")
    exit(1)
}
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "video-editor-mcp"
version = "0.1.59"
description = "Video Jungle MCP Server for Adding, Analysing, Searching, and Editing Videos"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
 "einops>=0.8.0",
 "manim>=0.18.1",
 "mcp>=1.6.0",
 "numpy>=2.2.2",
 "opentimelineio>=0.17.0",
 "osxphotos>=0.69.2",
 "pillow>=11.0.0",
 "requests>=2.32.3",
 "thefuzz>=0.22.1",
 "timm>=1.0.12",
 "torch==2.5.1",
 "torchvision==0.20.1",
 "transformers[torch]==4.47.1",
 "videojungle>=0.1.80",
]

[[project.authors]]
name = "Kirk Kaiser"
email = "[email protected]"

[build-system]
requires = [ "hatchling",]
build-backend = "hatchling.build"

[tool.uv.workspace]
members = ["tools"]

[dependency-groups]
dev = [
    "ipykernel>=6.29.5",
    "mcp[cli]>=1.6.0",
    "pre-commit>=4.0.1",
    "ruff>=0.8.4",
]

[project.scripts]
video-editor-mcp = "video_editor_mcp:main"

[project.urls]
Homepage = "https://github.com/burningion/video-editing-mcp"
Issues = "https://github.com/burningion/video-editing-mcp/issues"

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu" },
]
torchvision = [
    { index = "pytorch-cpu" },
]
torchaudio = [
    { index = "pytorch-cpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[tool.uv]
constraint-dependencies = [
    "pyglet==2.1.6",
]

```

--------------------------------------------------------------------------------
/src/video_editor_mcp/edit.json:
--------------------------------------------------------------------------------

```json
{
    "edit": [
        {
            "video_id": "86f37f08-98fa-4bb2-bb75-e164276c1933",
            "video_end_time": "00:00:20",
            "video_start_time": "00:00:10"
        },
        {
            "video_id": "814233f0-7a3a-4e40-90e6-f093be4c44b7",
            "video_end_time": "00:00:25",
            "video_start_time": "00:00:15"
        },
        {
            "video_id": "04c78873-5dba-4bbc-8ba0-2f43b57fee4c",
            "video_end_time": "00:00:30",
            "video_start_time": "00:00:20"
        },
        {
            "video_id": "b292d015-230a-42a8-bce7-8fdd0838623d",
            "video_end_time": "00:00:25",
            "video_start_time": "00:00:15"
        },
        {
            "video_id": "928f5627-d20b-49cd-bace-377a528b5486",
            "video_end_time": "00:00:15",
            "video_start_time": "00:00:05"
        },
        {
            "video_id": "52ac445f-cc60-40b3-b228-7d426150fb7e",
            "video_end_time": "00:00:33",
            "video_start_time": "00:00:23"
        },
        {
            "video_id": "424289e9-e2f2-4c2c-9f84-8bef7827470d",
            "video_end_time": "00:00:20",
            "video_start_time": "00:00:10"
        },
        {
            "video_id": "1ab8002b-2a94-4ffe-88f0-2cacb56e3ffe",
            "video_end_time": "00:00:15",
            "video_start_time": "00:00:05"
        }
    ],
    "project_id": "612afec9-bcb2-47ca-807b-756d6e83b4b7",
    "resolution": "1080p"
}
```

--------------------------------------------------------------------------------
/src/video_editor_mcp/search_local_videos.py:
--------------------------------------------------------------------------------

```python
import json
import sys
from collections import defaultdict
from datetime import datetime

import osxphotos
from thefuzz import fuzz


def load_keywords(keyword_dict):
    # Convert string dict to actual dict if needed
    if isinstance(keyword_dict, str):
        keyword_dict = json.loads(keyword_dict)
    return {k.lower(): v for k, v in keyword_dict.items()}


def videos_to_json(video_list):
    simplified_videos = []
    for video in video_list:
        simplified = {
            "filename": video.filename,
            "date": video.date.isoformat() if video.date else None,
            "duration": video.exif_info.duration,
            "labels": video.labels,
            "latitude": video.latitude,
            "longitude": video.longitude,
            "place_name": video.place.name
            if video.place and hasattr(video.place, "name")
            else None,
            "width": video.width,
            "height": video.height,
            "fps": video.exif_info.fps,
            "codec": video.exif_info.codec,
            "camera_make": video.exif_info.camera_make,
            "camera_model": video.exif_info.camera_model,
        }
        simplified_videos.append(simplified)

    return simplified_videos


def match_description(description, keyword_dict, threshold=60):
    keywords = load_keywords(keyword_dict)

    matches = defaultdict(int)
    words = description.lower().split()

    for word in words:
        for keyword in keywords:
            ratio = fuzz.ratio(word, keyword)
            if ratio > threshold:
                matches[keyword] = max(matches[keyword], ratio)

    # Return keywords sorted by match ratio
    return sorted(matches.items(), key=lambda x: x[1], reverse=True)


def get_videos_by_keyword(photosdb, keyword, start_date=None, end_date=None):
    # Query movies only (photos=False, movies=True), optionally within a date range
    if start_date and end_date:
        videos = photosdb.query(
            osxphotos.QueryOptions(
                label=[keyword],
                photos=False,
                movies=True,
                incloud=True,
                ignore_case=True,
                from_date=datetime.fromisoformat(start_date.replace("Z", "+00:00")),
                to_date=datetime.fromisoformat(end_date.replace("Z", "+00:00")),
            )
        )
    else:
        videos = photosdb.query(
            osxphotos.QueryOptions(
                label=[keyword],
                photos=False,
                movies=True,
                incloud=True,
                ignore_case=True,
            )
        )

    # Convert to list of dictionaries if needed
    video_data = videos_to_json(videos)

    return video_data


def find_and_export_videos(photosdb, keyword, export_path):
    videos = photosdb.query(
        osxphotos.QueryOptions(
            label=[keyword], photos=False, movies=True, incloud=True, ignore_case=True
        )
    )

    exported_files = []
    for video in videos:
        try:
            exported = video.export(export_path)
            exported_files.extend(exported)
            print(f"Exported {video.filename} to {exported}")
        except Exception as e:
            print(f"Error exporting {video.filename}: {e}")

    return exported_files


# Example usage
if __name__ == "__main__":
    """
    Usage: python search_local_videos.py <keyword>
    """
    if len(sys.argv) < 2:
        print("Usage: python search_local_videos.py <keyword>")
        sys.exit(1)
    photosdb = osxphotos.PhotosDB()
    video_dict = photosdb.labels_as_dict
    videos = get_videos_by_keyword(photosdb, sys.argv[1])
    for video in videos:
        print(
            f"Found video: {video.get('filename', 'Unknown')}, {video.get('labels', '')}"
        )
    print(f"Number of items returned: {len(videos)}")
    # Example
    keywords = video_dict
    matches = match_description("me skateboarding", keywords)
    print(matches)
    import IPython

    IPython.embed()

```

--------------------------------------------------------------------------------
/src/video_editor_mcp/generate_opentimeline.py:
--------------------------------------------------------------------------------

```python
import opentimelineio as otio
from videojungle import ApiClient
import os
import sys
import json
import argparse
import logging
import requests

logging.basicConfig(
    filename="app.log",  # Name of the log file
    level=logging.INFO,  # Log level (e.g., DEBUG, INFO, WARNING, ERROR, CRITICAL)
    format="%(asctime)s - %(levelname)s - %(message)s",  # Log format
)

vj = ApiClient(os.environ.get("VJ_API_KEY"))


def timecode_to_frames(timecode, fps=24.0):
    """
    Convert HH:MM:SS.xxx format to frames, handling variable decimal places
    """
    try:
        parts = timecode.split(":")
        hours = float(parts[0])
        minutes = float(parts[1])
        seconds = float(parts[2])

        total_seconds = hours * 3600 + minutes * 60 + seconds
        return int(total_seconds * fps)
    except (ValueError, IndexError) as e:
        raise ValueError(f"Invalid timecode format: {timecode}") from e


def create_rational_time(timecode, fps=24.0):
    """Create RationalTime object from HH:MM:SS.xxx format"""
    frames = timecode_to_frames(timecode, fps)
    return otio.opentime.RationalTime(frames, fps)


def download_asset(asset_id, asset_type, download_dir="downloads"):
    """Download an asset using either the assets API or video files API based on type"""
    try:
        # Determine which API to use based on asset type
        if asset_type in ["user", "audio", "mp3", "wav", "aac", "m4a"]:
            # Use assets API for user uploads and audio files
            asset = vj.assets.get(asset_id)
            if not asset.download_url:
                logging.error(f"No download URL for asset {asset_id}")
                return None
            download_url = asset.download_url
            filename = (
                asset.name if hasattr(asset, "name") and asset.name else str(asset_id)
            )
        else:
            # Use video files API for video files
            video = vj.video_files.get(asset_id)
            if not video.download_url:
                logging.error(f"No download URL for video {asset_id}")
                return None
            download_url = video.download_url
            filename = (
                video.name if hasattr(video, "name") and video.name else str(asset_id)
            )

        # Determine file extension based on asset type
        ext_map = {
            "mp3": ".mp3",
            "wav": ".wav",
            "aac": ".aac",
            "m4a": ".m4a",
            "user": ".mp4",  # Default for user videos
            "video": ".mp4",
            "audio": ".mp3",  # Default for generic audio
        }
        ext = ext_map.get(asset_type, ".mp4")

        # Remove any existing extension and add the correct one
        if "." in filename:
            filename = filename.rsplit(".", 1)[0]
        local_file = os.path.join(download_dir, f"{filename}{ext}")

        # Check if file already exists
        if os.path.exists(local_file):
            logging.info(f"Asset already exists at {local_file}, skipping download")
            return local_file

        # Download the file
        if asset_type in ["user", "audio", "mp3", "wav", "aac", "m4a"]:
            # Use requests for assets API downloads
            response = requests.get(download_url, stream=True)
            response.raise_for_status()

            with open(local_file, "wb") as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)
        else:
            # Use video files download method
            lf = vj.video_files.download(asset_id, local_file)
            logging.info(f"Downloaded video to {lf}")
            return lf

        logging.info(f"Downloaded asset {asset_id} to {local_file}")
        return local_file

    except Exception as e:
        logging.error(f"Error downloading asset {asset_id}: {e}")
        return None


def create_otio_timeline(
    edit_spec, filename, download_dir="downloads"
) -> otio.schema.Timeline:
    if not os.path.exists(download_dir):
        os.makedirs(download_dir)

    timeline = otio.schema.Timeline(name=edit_spec.get("name", "Timeline"))

    # Create video track
    video_track = otio.schema.Track(name="V1", kind=otio.schema.TrackKind.Video)
    timeline.tracks.append(video_track)

    # Create audio track if there are audio overlays
    audio_track = None
    if "audio_overlay" in edit_spec and edit_spec["audio_overlay"]:
        audio_track = otio.schema.Track(name="A1", kind=otio.schema.TrackKind.Audio)
        timeline.tracks.append(audio_track)

    # Process video clips
    for cut in edit_spec["video_series_sequential"]:
        asset_type = cut.get("type", "video")
        local_file = download_asset(cut["video_id"], asset_type, download_dir)

        if not local_file:
            continue

        fps = edit_spec.get("video_output_fps", 24.0)
        start_time = create_rational_time(cut["video_start_time"], fps)
        end_time = create_rational_time(cut["video_end_time"], fps)

        clip = otio.schema.Clip(
            name=f"clip_{cut['video_id']}",
            media_reference=otio.schema.ExternalReference(
                target_url=os.path.abspath(local_file)
            ),
            source_range=otio.opentime.TimeRange(start_time, (end_time - start_time)),
        )

        # TODO: Add audio level metadata if needed
        if "audio_levels" in cut and cut["audio_levels"]:
            # OpenTimelineIO doesn't have direct audio level support
            # This would need to be handled in the editing software
            clip.metadata["audio_levels"] = cut["audio_levels"]

        # Add crop metadata if present
        if "crop" in cut and cut["crop"]:
            # Store crop settings in metadata for the editing software to interpret
            clip.metadata["crop"] = {
                "zoom": cut["crop"].get("zoom", 1.0),
                "position_x": cut["crop"].get("position_x", 0.0),
                "position_y": cut["crop"].get("position_y", 0.0),
            }

        video_track.append(clip)

    # Process audio overlays
    if audio_track and "audio_overlay" in edit_spec:
        for audio_item in edit_spec["audio_overlay"]:
            audio_type = audio_item.get("type", "mp3")
            local_audio_file = download_asset(
                audio_item["audio_id"], audio_type, download_dir
            )

            if not local_audio_file:
                continue

            fps = edit_spec.get("video_output_fps", 24.0)
            audio_start = create_rational_time(audio_item["audio_start_time"], fps)
            audio_end = create_rational_time(audio_item["audio_end_time"], fps)

            audio_clip = otio.schema.Clip(
                name=f"audio_{audio_item['audio_id']}",
                media_reference=otio.schema.ExternalReference(
                    target_url=os.path.abspath(local_audio_file)
                ),
                source_range=otio.opentime.TimeRange(
                    audio_start, (audio_end - audio_start)
                ),
            )

            # Add audio level metadata if present
            if "audio_levels" in audio_item and audio_item["audio_levels"]:
                audio_clip.metadata["audio_levels"] = audio_item["audio_levels"]

            audio_track.append(audio_clip)

    otio.adapters.write_to_file(timeline, filename)
    logging.info(f"OTIO timeline saved to {filename}")
    return timeline


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--file", help="JSON file path")
    parser.add_argument("--output", help="Output file path")
    parser.add_argument("--json", type=json.loads, help="JSON string")

    args = parser.parse_args()
    spec = None
    # Set DaVinci Resolve environment variables
    os.environ["RESOLVE_SCRIPT_API"] = (
        "/Library/Application Support/Blackmagic Design/DaVinci Resolve/Developer/Scripting"
    )
    os.environ["RESOLVE_SCRIPT_LIB"] = (
        "/Applications/DaVinci Resolve/DaVinci Resolve.app/Contents/Libraries/Fusion/fusionscript.so"
    )

    # Add Resolve's Python modules to the path
    script_module_path = os.path.join(os.environ["RESOLVE_SCRIPT_API"], "Modules")
    if script_module_path not in sys.path:
        sys.path.append(script_module_path)

    # Now import DaVinciResolveScript
    try:
        import DaVinciResolveScript as dvr_script
    except ImportError:
        # DaVinci Resolve's scripting module isn't available; fall back to
        # writing the OTIO file without importing it into Resolve.
        dvr_script = None

    if args.json:
        spec = args.json
    elif args.file:
        with open(args.file) as f:
            spec = json.load(f)
    elif not sys.stdin.isatty():  # Check if data is being piped
        spec = json.load(sys.stdin)
    else:
        parser.print_help()
        sys.exit(1)
    if args.output:
        output_file = args.output
    else:
        output_file = "output.otio"
    create_otio_timeline(spec, output_file)
    output_file_absolute = os.path.abspath(output_file)
    if dvr_script:
        resolve = dvr_script.scriptapp("Resolve")
        if resolve:
            project_manager = resolve.GetProjectManager()
            project = project_manager.GetCurrentProject()
            media_pool = project.GetMediaPool()
            media_pool.ImportTimelineFromFile(
                output_file_absolute, {"timelineName": spec.get("name", "Timeline")}
            )
            logging.info(f"Imported {output_file} into DaVinci Resolve")
        else:
            logging.error("Could not connect to DaVinci Resolve.")

```

--------------------------------------------------------------------------------
/src/video_editor_mcp/generate_charts.py:
--------------------------------------------------------------------------------

```python
from manim import *

import sys
import json


class LineGraphAnimation(Scene):
    def __init__(
        self,
        x_values=None,
        y_values=None,
        x_label="X-Axis",
        y_label="Y-Axis",
        title="Graph",
    ):
        super().__init__()
        # Default values if none provided
        self.x_values = x_values if x_values is not None else list(range(6))
        self.y_values = y_values if y_values is not None else [1, 4, 2, 8, 5, 7]
        self.x_label = x_label
        self.y_label = y_label
        self.title = title

        # Validate input
        if len(self.x_values) != len(self.y_values):
            raise ValueError("X and Y value lists must be the same length")

    def create_axes(self):
        # Calculate appropriate ranges for y-axis
        y_min, y_max = min(self.y_values), max(self.y_values)
        y_padding = (y_max - y_min) * 0.1  # 10% padding

        # Store original x values and create numerical mapping if needed
        self.original_x_values = self.x_values
        if not all(isinstance(x, (int, float)) for x in self.x_values):
            self.x_values = list(range(len(self.x_values)))

        x_min, x_max = min(self.x_values), max(self.x_values)
        x_padding = (x_max - x_min) * 0.1 if isinstance(x_min, (int, float)) else 0.5

        # Adjust ranges to start at 0 for x-axis
        x_range = [0, x_max + x_padding, (x_max - x_min) / 10 if x_max != x_min else 1]
        y_range = [0, y_max + y_padding, (y_max - y_min) / 10 if y_max != y_min else 1]

        axes = Axes(
            x_range=x_range,
            y_range=y_range,
            axis_config={
                "include_tip": True,
                "tick_size": 0.1,
                "color": self.black,
            },
            x_axis_config={
                "include_numbers": isinstance(x_min, (int, float)),
                "decimal_number_config": {"num_decimal_places": 1, "color": self.black},
            },
            y_axis_config={
                "include_numbers": True,
                "decimal_number_config": {"num_decimal_places": 1, "color": self.black},
            },
            tips=True,
            x_length=10,
            y_length=6,
        )

        # Ensure axes start at origin
        axes.move_to(ORIGIN)
        axes.to_corner(DOWN + LEFT)

        return axes

    def create_labels(self, axes):
        # Create axis labels
        x_label = Text(self.x_label, font_size=24).next_to(axes.x_axis, DOWN)
        y_label = Text(self.y_label, font_size=24).next_to(axes.y_axis, LEFT)

        # Create title
        title = Text(self.title, font_size=36).to_edge(UP, buff=-0.25)

        return x_label, y_label, title

    def create_data_points(self, axes):
        # Convert data points to coordinates
        points = [
            axes.coords_to_point(x, y) for x, y in zip(self.x_values, self.y_values)
        ]

        # Create the graph line
        graph = VMobject()
        graph.set_points_smoothly([*points])
        graph.set_color("#87c2a5")

        # Create dots for each point
        dots = VGroup(*[Dot(point, color="#525893") for point in points])

        # Create labels for data points
        value_labels = VGroup(
            *[
                Text(f"({x}, {y})", font_size=16, color=self.black).next_to(dot, UP)
                for x, y, dot in zip(self.x_values, self.y_values, dots)
            ]
        )

        return graph, dots, value_labels

    def construct(self):
        # Create the elements
        self.camera.frame_height = 10
        self.camera.frame_width = 17
        self.camera.background_color = "#ece6e2"
        self.camera.frame_center = [-1.3, 0, 0]
        self.black = "#343434"
        Text.set_default(font="Helvetica", color=self.black)
        axes = self.create_axes()
        x_label, y_label, title = self.create_labels(axes)
        graph, dots, value_labels = self.create_data_points(axes)

        # Initial animations
        self.play(Write(title))
        self.play(Create(axes), Write(x_label), Write(y_label))

        # Graph creation animation
        self.play(Create(graph), run_time=2)

        # Animate dots appearing
        self.play(Create(dots))

        # Animate value labels
        self.play(Write(value_labels))

        # Sequential point highlighting
        for dot, label in zip(dots, value_labels):
            self.play(
                dot.animate.scale(1.5).set_color("#e07a5f"),
                label.animate.scale(1.2),
                rate_func=there_and_back,
                run_time=0.5,
            )

        # Final pause
        self.wait(2)


class BarChartAnimation(Scene):
    def __init__(
        self,
        x_values=None,
        y_values=None,
        x_label="Categories",
        y_label="Values",
        title="Bar Chart",
    ):
        super().__init__()
        self.x_values = x_values if x_values is not None else ["A", "B", "C", "D", "E"]
        self.y_values = y_values if y_values is not None else [4, 8, 2, 6, 5]
        self.x_label = x_label
        self.y_label = y_label
        self.title = title
        self.bar_color = BLUE
        self.bar_width = 0.5

    def construct(self):
        # Calculate ranges
        self.camera.frame_height = 10
        self.camera.frame_width = 17
        self.camera.background_color = "#ece6e2"
        self.camera.frame_center = [-1.3, 0, 0]
        self.black = "#343434"

        y_max = max(self.y_values)
        y_padding = y_max * 0.2
        Text.set_default(font="Helvetica", color=self.black)
        # Create axes with adjusted ranges and position
        axes = Axes(
            x_range=[0, len(self.x_values), 1],  # Start from 0
            y_range=[0, y_max + y_padding, y_max / 5],
            axis_config={
                "include_tip": True,
                "tip_width": 0.2,
                "tip_height": 0.2,
                "color": BLACK,
            },
            x_length=8,
            y_length=6,
        ).to_corner(DL, buff=1)  # Align to bottom left with padding

        # Shift the entire axes right to create space after y-axis
        axes.shift(RIGHT * 1)

        # Create bars using axes coordinates
        bars = VGroup()
        labels = VGroup()

        for i, value in enumerate(self.y_values):
            # Calculate bar position and height
            bar_bottom = axes.c2p(i + 0.5, 0)  # Add 0.5 to center on tick marks
            bar_top = axes.c2p(i + 0.5, value)
            bar_height = bar_top[1] - bar_bottom[1]

            bar = Rectangle(
                width=self.bar_width,
                height=bar_height,
                color=self.bar_color,
                fill_opacity=0.8,
            ).move_to(bar_bottom, aligned_edge=DOWN)

            # Create value label
            label = Text(f"{value}", font_size=24)
            label.next_to(bar, UP, buff=0.1)

            bars.add(bar)
            labels.add(label)

        # Create axis labels
        x_labels = VGroup()
        for i, label_text in enumerate(self.x_values):
            label = Text(label_text, font_size=24)
            label.next_to(
                axes.c2p(i + 0.5, 0), DOWN, buff=0.5
            )  # Add 0.5 to align with bars
            x_labels.add(label)

        y_label = Text(self.y_label, font_size=24).next_to(axes.y_axis, LEFT, buff=0.5)
        x_axis_label = Text(self.x_label, font_size=24).next_to(
            axes.x_axis, DOWN, buff=1.5
        )
        title = Text(self.title, font_size=36).to_edge(UP, buff=0.5)

        # Animations
        self.play(Create(axes))
        self.play(Write(title))
        self.play(Write(VGroup(y_label, x_axis_label)))
        self.play(Write(x_labels))

        # Animate each bar appearing
        for bar, label in zip(bars, labels):
            self.play(GrowFromEdge(bar, DOWN), Write(label), run_time=0.5)

        # Highlight bars
        for bar, label in zip(bars, labels):
            self.play(
                bar.animate.set_color(RED),
                label.animate.scale(1.2),
                rate_func=there_and_back,
                run_time=0.3,
            )

        self.wait()


def render_bar_chart(
    x_values, y_values, x_label, y_label, title, filename="bar_chart.mp4"
):
    config.verbosity = "ERROR"
    config.pixel_height = 720
    config.pixel_width = 1280
    config.frame_height = 8
    config.frame_width = 14
    config.output_file = filename  # Optional: specify output filename
    config.preview = True  # Opens the video after rendering
    config.quality = "medium_quality"  # or "high_quality", "production_quality"

    scene = BarChartAnimation(
        x_values=x_values,
        y_values=y_values,
        x_label=x_label,
        y_label=y_label,
        title=title,
    )
    scene.render()
    return


# Example usage
if __name__ == "__main__":
    try:
        # Check command line arguments
        if len(sys.argv) < 3:
            print(
                "Usage: python generate_charts.py <input_json_file> <chart_type>",
                file=sys.stderr,
            )
            sys.exit(1)

        # Sample data
        input_json_file = sys.argv[1]
        chart_type = sys.argv[2]

        # Validate chart type
        if chart_type not in ["bar", "line"]:
            print(
                f"Invalid chart type: {chart_type}. Use 'bar' or 'line'.",
                file=sys.stderr,
            )
            sys.exit(1)

        # Read and validate JSON data
        try:
            with open(input_json_file, "r", encoding="utf-8") as f:
                data = json.load(f)
        except FileNotFoundError:
            print(f"Input file not found: {input_json_file}", file=sys.stderr)
            sys.exit(1)
        except json.JSONDecodeError as e:
            print(f"Invalid JSON in input file: {e}", file=sys.stderr)
            sys.exit(1)

        if "x_values" not in data or "y_values" not in data:
            print(
                "Invalid JSON data format: missing x_values or y_values",
                file=sys.stderr,
            )
            sys.exit(1)

        x_values = data["x_values"]
        y_values = data["y_values"]
        x_label = data.get("x_label", "Categories")
        y_label = data.get("y_label", "Values")
        title = data.get("title", "Chart")
        filename = data.get("filename", f"{chart_type}_chart.mp4")

        # Validate data lengths match
        if len(x_values) != len(y_values):
            print(
                f"Error: x_values length ({len(x_values)}) does not match y_values length ({len(y_values)})",
                file=sys.stderr,
            )
            sys.exit(1)

        # Configure manim settings
        config.verbosity = "ERROR"
        config.pixel_height = 720
        config.pixel_width = 1280
        config.frame_height = 8
        config.frame_width = 14
        config.output_file = filename
        config.preview = False  # Don't auto-open video to prevent hanging
        config.quality = "medium_quality"

        # Create and render the appropriate scene
        if chart_type == "bar":
            scene = BarChartAnimation(
                x_values=x_values,
                y_values=y_values,
                x_label=x_label,
                y_label=y_label,
                title=title,
            )
        else:  # chart_type == "line"
            scene = LineGraphAnimation(
                x_values=x_values,
                y_values=y_values,
                x_label=x_label,
                y_label=y_label,
                title=title,
            )

        scene.render()
        print(f"Successfully generated {chart_type} chart: {filename}")

    except Exception as e:
        print(f"Error generating chart: {str(e)}", file=sys.stderr)
        sys.exit(1)

```

--------------------------------------------------------------------------------
/video-player/downloader.swift:
--------------------------------------------------------------------------------

```swift
import AppKit
import Cocoa
import Foundation

class Config {
    static func getAPIKey() -> String? {
        if let configPath = Bundle.main.path(forResource: "config", ofType: "json"),
           let configData = try? Data(contentsOf: URL(fileURLWithPath: configPath)),
           let json = try? JSONSerialization.jsonObject(with: configData) as? [String: String] {
            return json["api_key"]
        }
        return nil
    }
}

struct VideoUpload: Codable {
    let name: String
    let filename: String
    let upload_method: String
}

enum UploadError: Error {
    case invalidURL
    case networkError(Error)
    case invalidResponse
    case serverError(Int)
    case noData
}



class CustomTextField: NSTextField {
    
    override func performKeyEquivalent(with event: NSEvent) -> Bool {
        if event.modifierFlags.contains(.command) {
            switch event.charactersIgnoringModifiers {
            case "v":
                if NSApp.sendAction(#selector(NSText.paste(_:)), to: nil, from: self) {
                    return true
                }
            case "c":
                if NSApp.sendAction(#selector(NSText.copy(_:)), to: nil, from: self) {
                    return true
                }
            case "x":
                if NSApp.sendAction(#selector(NSText.cut(_:)), to: nil, from: self) {
                    return true
                }
            case "a":
                if NSApp.sendAction(#selector(NSText.selectAll(_:)), to: nil, from: self) {
                    return true
                }
            default:
                break
            }
        }
        return super.performKeyEquivalent(with: event)
    }
}

class AppDelegate: NSObject, NSApplicationDelegate, NSWindowDelegate {
    var statusItem: NSStatusItem!
    var uploadWindow: NSWindow?
    // Use lazy property initialization for uploadManager
    lazy var uploadManager: UploadManager = {
        guard let apiKey = Config.getAPIKey() else {
            fatalError("API key not found in config.json")
        }
        return UploadManager(apiKey: apiKey)
    }()

    func applicationDidFinishLaunching(_ notification: Notification) {
        setupStatusItem()
        // Get command line arguments
        let arguments = ProcessInfo.processInfo.arguments
        
        // Skip the first argument (which is the app path)
        if arguments.count > 1 {
            for argument in arguments.dropFirst() {
                if let url = URL(string: argument), url.scheme == "videojungle" {
                    handleURL(url)
                }
            }
        }
        // Register for Apple Events
        NSAppleEventManager.shared().setEventHandler(
            self,
            andSelector: #selector(handleURLEvent(_:withReplyEvent:)),
            forEventClass: AEEventClass(kInternetEventClass),
            andEventID: AEEventID(kAEGetURL)
        )
    }
    
    func handleURL(_ url: URL) {
        // Parse the URL and handle different actions
        switch url.host {
        case "upload":
            // Open your upload window/view
            self.showUploadWindow()
        default:
            print("Unknown URL command: \(url)")
        }
    }
    
    @objc func handleURLEvent(_ event: NSAppleEventDescriptor, withReplyEvent: NSAppleEventDescriptor) {
        guard let urlString = event.paramDescriptor(forKeyword: keyDirectObject)?.stringValue,
            let url = URL(string: urlString) else {
            return
        }
        
        // Handle different commands
        if url.host == "upload" {
            DispatchQueue.main.async {
                self.showUploadWindow()
            }
        }
    }

    func setupStatusItem() {
        statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.squareLength)
        
        if let button = statusItem.button {
            // Try bundle path first
            let bundlePath = Bundle.main.path(forResource: "Icon", ofType: "svg")
            // Fall back to local path if bundle path fails
            let localPath = "./Icon.svg"
            
            let finalPath = bundlePath ?? localPath
            
            if let svgData = try? Data(contentsOf: URL(fileURLWithPath: finalPath)),
            let svgImage = NSImage(data: svgData) {
                button.image = svgImage
                button.image?.isTemplate = true
            }
        }
        
        let menu = NSMenu()
        menu.addItem(NSMenuItem(title: "Upload File or URL...", action: #selector(showUploadWindow), keyEquivalent: "u"))
        menu.addItem(NSMenuItem.separator())
        menu.addItem(NSMenuItem(title: "Quit", action: #selector(NSApplication.terminate(_:)), keyEquivalent: "q"))
        
        statusItem.menu = menu
    }
    
    @objc func showUploadWindow() {
        if let window = uploadWindow {
            window.level = .floating
            window.orderFrontRegardless()
            NSApp.activate(ignoringOtherApps: true)
            return
        }
        
        let window = NSWindow(
            contentRect: NSRect(x: 0, y: 0, width: 400, height: 300),
            styleMask: [.titled, .closable, .miniaturizable],
            backing: .buffered,
            defer: false
        )
        
        window.title = "Upload File or URL"
        window.level = .floating
        window.center()
        window.delegate = self
        window.isReleasedWhenClosed = false
        
        let viewController = UploaderViewController()
        window.contentViewController = viewController
        
        window.orderFrontRegardless()
        NSApp.activate(ignoringOtherApps: true)
        
        uploadWindow = window
    }
    
    // Window delegate method to handle window closing
    func windowShouldClose(_ sender: NSWindow) -> Bool {
        sender.orderOut(nil)  // Hide the window
        return false  // Prevent the window from being destroyed
    }
}

class UploaderViewController: NSViewController, NSTextFieldDelegate {
    private var nameField: NSTextField!
    private var urlField: NSTextField!
    private var statusLabel: NSTextField!
    private var uploadButton: NSButton!

    private let uploadManager: UploadManager = {
        guard let apiKey = Config.getAPIKey() else {
            fatalError("API key not found in config.json")
        }
        return UploadManager(apiKey: apiKey)
    }()

    override func loadView() {
        let container = NSView(frame: NSRect(x: 0, y: 0, width: 400, height: 300))

        // Name Input Field - Positioned at top
        nameField = CustomTextField(frame: NSRect(x: 20, y: 250, width: 360, height: 24))
        nameField.placeholderString = "Name"
        nameField.target = self
        nameField.action = #selector(handleNameInput(_:))
        nameField.isEditable = true
        nameField.isSelectable = true
        nameField.usesSingleLineMode = true
        container.addSubview(nameField)

        // URL Input Field - Positioned below name field
        urlField = CustomTextField(frame: NSRect(x: 20, y: 215, width: 270, height: 24))
        urlField.placeholderString = "Enter URL to upload"
        urlField.target = self
        urlField.action = #selector(handleURLInput(_:))
        urlField.isEditable = true
        urlField.isSelectable = true
        urlField.usesSingleLineMode = true
        container.addSubview(urlField)

        // Upload Button - Aligned with URL field
        uploadButton = NSButton(frame: NSRect(x: 300, y: 215, width: 80, height: 24))
        uploadButton.title = "Upload"
        uploadButton.bezelStyle = .rounded
        uploadButton.target = self
        uploadButton.action = #selector(handleUpload(_:))
        container.addSubview(uploadButton)

        // Drop Zone - Below URL field and button
        let dropZone = DropZoneView(frame: NSRect(x: 20, y: 50, width: 360, height: 150))
        dropZone.uploadManager = uploadManager
        container.addSubview(dropZone)
        
        // Status Label - At bottom
        statusLabel = NSTextField(frame: NSRect(x: 20, y: 20, width: 360, height: 24))
        statusLabel.isEditable = false
        statusLabel.isBezeled = false
        statusLabel.drawsBackground = false
        statusLabel.stringValue = "Drag and drop files here or enter a URL above"
        container.addSubview(statusLabel)
        
        self.view = container
    }
    
    @objc private func handleNameInput(_ sender: NSTextField) {
        // Update status or validate name as needed
        statusLabel.stringValue = "Name entered: \(sender.stringValue)"
    }

    @objc func handleURLInput(_ sender: NSTextField) {
        Task {
            await handleUploadAttempt()
        }
    }

    @objc private func handleUpload(_ sender: NSButton) {
        Task {
            await handleUploadAttempt()
        }
    }
    
    private func handleUploadAttempt() async {
        // Validate inputs
        guard let name = nameField.stringValue.nilIfEmpty else {
            showAlert(message: "Please enter a name")
            return
        }
        
        guard let urlString = urlField.stringValue.nilIfEmpty,
              let url = URL(string: urlString) else {
            showAlert(message: "Please enter a valid URL")
            return
        }
        
        await MainActor.run {
            statusLabel.stringValue = "Uploading..."
            uploadButton.isEnabled = false
        }
        
        do {
            let _ = try await uploadManager.uploadURL(name: name, urlString: urlString)
            
            await MainActor.run {
                statusLabel.stringValue = "Upload completed successfully!"
                showAlert(message: "Video uploaded successfully")
                // Clear the fields after a successful upload
                nameField.stringValue = ""
                urlField.stringValue = ""
                uploadButton.isEnabled = true
            }
        } catch {
            await MainActor.run {
                statusLabel.stringValue = "Upload failed"
                showAlert(message: "Upload failed: \(error.localizedDescription)")
                uploadButton.isEnabled = true
            }
        }
    }

    func showAlert(message: String) {
        let alert = NSAlert()
        alert.messageText = message
        alert.alertStyle = .informational
        alert.addButton(withTitle: "OK")
        alert.runModal()
    }
    
    // MARK: - NSTextDelegate Methods
    func textDidChange(_ notification: Notification) {
        // Handle text changes if needed
    }
    
    func textDidEndEditing(_ notification: Notification) {
        // Handle when editing ends if needed
    }
}

class DropZoneView: NSView {
    var uploadManager: UploadManager?
    private let dropLabel = NSTextField()
    
    var isReceivingDrag: Bool = false {
        didSet {
            needsDisplay = true
            updateDropLabel()
        }
    }
    
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        setupView()
    }
    
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupView()
    }
    
    private func setupView() {
        registerForDraggedTypes([.fileURL])
        wantsLayer = true
        layer?.cornerRadius = 10
        
        // Setup drop label
        dropLabel.isEditable = false
        dropLabel.isBezeled = false
        dropLabel.drawsBackground = false
        dropLabel.alignment = .center
        dropLabel.font = NSFont.systemFont(ofSize: 16, weight: .medium)
        dropLabel.stringValue = "Drop files here to upload"
        dropLabel.textColor = .secondaryLabelColor
        addSubview(dropLabel)
    }
    
    override func layout() {
        super.layout()
        dropLabel.frame = bounds
    }
    
    private func updateDropLabel() {
        dropLabel.textColor = isReceivingDrag ? .systemBlue : .secondaryLabelColor
        
        NSAnimationContext.runAnimationGroup { context in
            context.duration = 0.2
            dropLabel.animator().alphaValue = isReceivingDrag ? 1.0 : 0.6
        }
    }
    
    override func draw(_ dirtyRect: NSRect) {
        let bounds = self.bounds
        
        // Background
        if isReceivingDrag {
            NSColor(calibratedWhite: 0.95, alpha: 1.0).setFill()
        } else {
            NSColor(calibratedWhite: 0.98, alpha: 1.0).setFill()
        }
        
        let path = NSBezierPath(roundedRect: bounds, xRadius: 10, yRadius: 10)
        path.fill()
        
        // Border
        if isReceivingDrag {
            NSColor.systemBlue.setStroke()
        } else {
            NSColor.separatorColor.setStroke()
        }
        
        path.lineWidth = 2
        path.stroke()
    }
    
    override func draggingEntered(_ sender: NSDraggingInfo) -> NSDragOperation {
        isReceivingDrag = true
        return .copy
    }
    
    override func draggingExited(_ sender: NSDraggingInfo?) {
        isReceivingDrag = false
    }
    
    override func prepareForDragOperation(_ sender: NSDraggingInfo) -> Bool {
        return true
    }
    
    override func performDragOperation(_ sender: NSDraggingInfo) -> Bool {
        isReceivingDrag = false
        
        guard let draggedFileURL = sender.draggingPasteboard.readObjects(forClasses: [NSURL.self], options: nil)?.first as? URL else {
            return false
        }
        
        uploadManager?.uploadFile(draggedFileURL) { result in
            DispatchQueue.main.async {
                let alert = NSAlert()
                switch result {
                case .success:
                    alert.messageText = "File uploaded successfully"
                case .failure(let error):
                    alert.messageText = "Upload failed: \(error.localizedDescription)"
                }
                alert.alertStyle = .informational
                alert.addButton(withTitle: "OK")
                alert.runModal()
            }
        }
        
        return true
    }
}

class UploadManager {
    private let apiKey: String
    private let baseURL = "https://api.video-jungle.com/video-file"
    
    init(apiKey: String) {
        self.apiKey = apiKey
    }
    
    func uploadURL(name: String, urlString: String) async throws -> Data {
        guard let url = URL(string: baseURL) else {
            throw UploadError.invalidURL
        }
        
        // Prepare the upload data
        let uploadData = VideoUpload(
            name: name,
            filename: urlString,
            upload_method: "url"
        )
        
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.setValue(apiKey, forHTTPHeaderField: "X-API-Key")
        
        // Encode the JSON data
        request.httpBody = try JSONEncoder().encode(uploadData)
        
        do {
            let (data, response) = try await URLSession.shared.data(for: request)
            
            guard let httpResponse = response as? HTTPURLResponse else {
                throw UploadError.invalidResponse
            }
            
            switch httpResponse.statusCode {
            case 200...299:
                return data
            default:
                throw UploadError.serverError(httpResponse.statusCode)
            }
        } catch let error as UploadError {
            throw error
        } catch {
            throw UploadError.networkError(error)
        }
    }
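    
    // Example request produced by uploadURL above (sketch of the wire format):
    //   POST https://api.video-jungle.com/video-file
    //   Content-Type: application/json
    //   X-API-Key: <your key>
    //   {"name": "My Video", "filename": "https://example.com/clip.mp4", "upload_method": "url"}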
    func uploadFile(_ fileURL: URL, completion: @escaping (Result<Void, Error>) -> Void) {
        // Implement your file upload logic here
        // This is a placeholder implementation
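        // A real implementation would presumably POST the file's bytes to the
        // same /video-file endpoint (a multipart body or a "file" upload_method
        // is an assumption here), reusing the X-API-Key header from uploadURL.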
        DispatchQueue.global().async {
            // Simulating network delay
            Thread.sleep(forTimeInterval: 1.0)
            completion(.success(()))
        }
    }
    
}

extension String {
    var nilIfEmpty: String? {
        self.isEmpty ? nil : self
    }
}

// Main entry point
let app = NSApplication.shared
let delegate = AppDelegate()
app.delegate = delegate
app.run()
```

--------------------------------------------------------------------------------
/video-player/player.swift:
--------------------------------------------------------------------------------

```swift
import AVFoundation
import AVKit
import AppKit
import Foundation
import PythonKit  // You'll need to add this dependency to your project

NSApplication.shared.activate(ignoringOtherApps: true)

class VideoPlayerDelegate: NSObject, NSApplicationDelegate {
    var window: NSWindow!
    var playerView: AVPlayerView!
    var player: AVPlayer!
    var queuePlayer: AVQueuePlayer!
    var playerLooper: AVPlayerLooper?
    
    // Frame processing
    var pythonProcessor: PythonFrameProcessor!
    var isProcessingEnabled: Bool = false
    var processingOutput: AVSampleBufferDisplayLayer!
    var displayLink: CVDisplayLink?
    
    // Python script editor
    var editorWindow: NSWindow?
    var editorTextView: NSTextView?
    var currentScript: String = ""
    
    // Playlist management
    struct VideoEntry {
        let path: String
        let videoName: String
    }
    var videos: [VideoEntry] = []
    var currentVideoIndex: Int = 0

    // UI Elements
    var controlsView: NSView!
    var previousButton: NSButton!
    var nextButton: NSButton!
    var videoLabel: NSTextField!
    var processButton: NSButton!
    var editScriptButton: NSButton!
    
    func applicationDidFinishLaunching(_ notification: Notification) {
        // Initialize Python processor
        pythonProcessor = PythonFrameProcessor()
        
        // Need at least a video name and video path pair
        guard CommandLine.arguments.count > 2 else {
            print("Usage: vj-player \"Video Name 1\" video1.mp4 \"Video Name 2\" video2.mp4 ...")
            NSApplication.shared.terminate(nil)
            return
        }
        
        // Parse arguments into video entries
        let args = Array(CommandLine.arguments.dropFirst())
        if args.count % 2 != 0 {
            print("Error: arguments must come in \"Video Name\" video-path pairs")
            NSApplication.shared.terminate(nil)
            return
        }
        
        // Create video entries from pairs of arguments
        for i in stride(from: 0, to: args.count, by: 2) {
            videos.append(VideoEntry(path: args[i + 1], videoName: args[i]))
        }
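        
        // Example invocation:
        //   vj-player "Beach Day" beach.mp4 "Morning Hike" hike.mp4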
        
        // Create the window
        let windowRect = NSRect(x: 0, y: 0, width: 800, height: 650)
        window = NSWindow(
            contentRect: windowRect,
            styleMask: [.titled, .closable, .miniaturizable, .resizable],
            backing: .buffered,
            defer: false
        )
        window.level = .floating 
        
        // Create the main container view
        let containerView = NSView(frame: windowRect)
        window.contentView = containerView
        
        // Create the player view
        let playerRect = NSRect(x: 0, y: 50, width: windowRect.width, height: windowRect.height - 50)
        playerView = AVPlayerView(frame: playerRect)
        playerView.autoresizingMask = [.width, .height]
        playerView.controlsStyle = .floating
        playerView.showsFullScreenToggleButton = true
        containerView.addSubview(playerView)
        
        // Create controls view with more space for additional buttons
        let controlsRect = NSRect(x: 0, y: 0, width: windowRect.width, height: 50)
        controlsView = NSView(frame: controlsRect)
        controlsView.autoresizingMask = [.width]
        containerView.addSubview(controlsView)
        
        // Create navigation buttons
        previousButton = NSButton(frame: NSRect(x: 10, y: 10, width: 80, height: 30))
        previousButton.title = "Previous"
        previousButton.bezelStyle = .rounded
        previousButton.target = self
        previousButton.action = #selector(previousVideo)
        controlsView.addSubview(previousButton)
        
        nextButton = NSButton(frame: NSRect(x: 100, y: 10, width: 80, height: 30))
        nextButton.title = "Next"
        nextButton.bezelStyle = .rounded
        nextButton.target = self
        nextButton.action = #selector(nextVideo)
        controlsView.addSubview(nextButton)
        
        // Create process button
        processButton = NSButton(frame: NSRect(x: 190, y: 10, width: 120, height: 30))
        processButton.title = "Enable Processing"
        processButton.bezelStyle = .rounded
        processButton.target = self
        processButton.action = #selector(toggleProcessing)
        controlsView.addSubview(processButton)
        
        // Create edit script button
        editScriptButton = NSButton(frame: NSRect(x: 320, y: 10, width: 100, height: 30))
        editScriptButton.title = "Edit Script"
        editScriptButton.bezelStyle = .rounded
        editScriptButton.target = self
        editScriptButton.action = #selector(openScriptEditor)
        controlsView.addSubview(editScriptButton)
        
        // Create video label
        videoLabel = NSTextField(frame: NSRect(x: 430, y: 15, width: 300, height: 20))
        videoLabel.isEditable = false
        videoLabel.isBordered = false
        videoLabel.backgroundColor = .clear
        videoLabel.font = NSFont.systemFont(ofSize: 14, weight: .bold)
        controlsView.addSubview(videoLabel)
        
        // Setup window
        window.title = "Video Player with Python Processing"
        window.center()
        window.makeKeyAndOrderFront(nil)
        
        // Start playing first video
        playCurrentVideo()
        
        // Create default Python script if none exists
        if !FileManager.default.fileExists(atPath: pythonProcessor.scriptPath.path) {
            createDefaultPythonScript()
        }
        
        // Load the current script
        do {
            currentScript = try String(contentsOf: pythonProcessor.scriptPath, encoding: .utf8)
        } catch {
            print("Error loading script: \(error)")
            currentScript = defaultPythonScript()
        }
        
        // Set up keyboard event monitoring
        NSEvent.addLocalMonitorForEvents(matching: .keyDown) { event in
            self.handleKeyEvent(event)
            return event
        }
    }
    
    @objc func toggleProcessing() {
        isProcessingEnabled.toggle()
        
        if isProcessingEnabled {
            processButton.title = "Disable Processing"
            setupFrameProcessing()
        } else {
            processButton.title = "Enable Processing"
            tearDownFrameProcessing()
        }
    }
    
    func setupFrameProcessing() {
        guard let playerItem = queuePlayer.currentItem else { return }
        
        // Add video output to player item
        let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        playerItem.add(videoOutput)
        
        // Setup display link for synchronized frame capture
        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)
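        // Note: the display-link callback below fires on a background thread on
        // every screen refresh. passUnretained hands `self` across the C callback
        // boundary without retaining it; tearDownFrameProcessing stops the link
        // before the delegate goes away, so the unretained reference stays valid.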
        
        if let displayLink = displayLink {
            CVDisplayLinkSetOutputCallback(displayLink, { (displayLink, inNow, inOutputTime, flagsIn, flagsOut, displayLinkContext) -> CVReturn in
                let videoPlayerDelegate = Unmanaged<VideoPlayerDelegate>.fromOpaque(displayLinkContext!).takeUnretainedValue()
                videoPlayerDelegate.processCurrentFrame()
                return kCVReturnSuccess
            }, Unmanaged.passUnretained(self).toOpaque())
            
            CVDisplayLinkStart(displayLink)
        }
    }
    
    func tearDownFrameProcessing() {
        if let displayLink = displayLink {
            CVDisplayLinkStop(displayLink)
            self.displayLink = nil
        }
        
        // Remove video output
        queuePlayer.currentItem?.outputs.forEach { output in
            queuePlayer.currentItem?.remove(output)
        }
    }
    
    func processCurrentFrame() {
        guard isProcessingEnabled,
              let playerItem = queuePlayer.currentItem,
              let videoOutput = playerItem.outputs.first as? AVPlayerItemVideoOutput else {
            return
        }
        
        let itemTime = queuePlayer.currentTime()
        
        guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime) else {
            return
        }
        
        guard let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) else {
            return
        }
        
        // Process frame with Python
        if let processedBuffer = pythonProcessor.processFrame(pixelBuffer) {
            // Display the processed frame
            DispatchQueue.main.async {
                // Here you would replace or overlay the frame in your player view
                // This is complex and depends on how you want to display the processed frames
                // For simplicity, we're just logging that we processed a frame
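                // A possible next step (sketch): wrap `processedBuffer` in a
                // CMSampleBuffer and enqueue it on `processingOutput` (the
                // AVSampleBufferDisplayLayer declared above) to show the
                // processed frames instead of the raw player output.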
                print("Processed frame at time: \(CMTimeGetSeconds(itemTime))")
            }
        }
    }
    
    // Script Editor
    @objc func openScriptEditor() {
        if editorWindow == nil {
            createScriptEditorWindow()
        }
        
        editorWindow?.makeKeyAndOrderFront(nil)
    }
    
    func createScriptEditorWindow() {
        let windowRect = NSRect(x: 0, y: 0, width: 600, height: 400)
        editorWindow = NSWindow(
            contentRect: windowRect,
            styleMask: [.titled, .closable, .miniaturizable, .resizable],
            backing: .buffered,
            defer: false
        )
        
        editorWindow?.title = "Python Script Editor"
        
        // Create scroll view for text editor
        let scrollView = NSScrollView(frame: NSRect(x: 0, y: 50, width: windowRect.width, height: windowRect.height - 50))
        scrollView.autoresizingMask = [.width, .height]
        scrollView.hasVerticalScroller = true
        scrollView.hasHorizontalScroller = true
        scrollView.borderType = .noBorder
        
        // Create text view
        let contentSize = scrollView.contentSize
        let textStorage = NSTextStorage()
        let layoutManager = NSLayoutManager()
        textStorage.addLayoutManager(layoutManager)
        let textContainer = NSTextContainer(containerSize: NSSize(width: contentSize.width, height: CGFloat.greatestFiniteMagnitude))
        textContainer.widthTracksTextView = true
        layoutManager.addTextContainer(textContainer)
        
        editorTextView = NSTextView(frame: NSRect(x: 0, y: 0, width: contentSize.width, height: contentSize.height), textContainer: textContainer)
        if let editorTextView = editorTextView {
            editorTextView.autoresizingMask = [.width]
            editorTextView.font = NSFont(name: "Menlo", size: 12)
            editorTextView.isRichText = false
            editorTextView.isEditable = true
            editorTextView.backgroundColor = NSColor(white: 0.95, alpha: 1.0)
            editorTextView.string = currentScript
            
            scrollView.documentView = editorTextView
        }
        
        // Create buttons
        let saveButton = NSButton(frame: NSRect(x: windowRect.width - 180, y: 10, width: 80, height: 30))
        saveButton.title = "Save"
        saveButton.bezelStyle = .rounded
        saveButton.target = self
        saveButton.action = #selector(saveScript)
        
        let cancelButton = NSButton(frame: NSRect(x: windowRect.width - 90, y: 10, width: 80, height: 30))
        cancelButton.title = "Cancel"
        cancelButton.bezelStyle = .rounded
        cancelButton.target = self
        cancelButton.action = #selector(closeScriptEditor)
        
        // Add controls to window
        if let contentView = editorWindow?.contentView {
            contentView.addSubview(scrollView)
            contentView.addSubview(saveButton)
            contentView.addSubview(cancelButton)
        }
        
        editorWindow?.center()
    }
    
    @objc func saveScript() {
        guard let scriptText = editorTextView?.string else { return }
        
        do {
            try scriptText.write(to: pythonProcessor.scriptPath, atomically: true, encoding: .utf8)
            currentScript = scriptText
            
            // Reload the Python script
            pythonProcessor.reloadScript()
            
            closeScriptEditor()
        } catch {
            let alert = NSAlert()
            alert.messageText = "Error Saving Script"
            alert.informativeText = error.localizedDescription
            alert.alertStyle = .warning
            alert.addButton(withTitle: "OK")
            alert.runModal()
        }
    }
    
    @objc func closeScriptEditor() {
        editorWindow?.close()
    }
    
    func createDefaultPythonScript() {
        do {
            try defaultPythonScript().write(to: pythonProcessor.scriptPath, atomically: true, encoding: .utf8)
        } catch {
            print("Error creating default script: \(error)")
        }
    }
    
    func defaultPythonScript() -> String {
        return """
        import numpy as np
        import cv2
        
        def process_frame(frame):
            '''
            Process a video frame.
            
            Args:
                frame: NumPy array representing the frame (BGR format)
                
            Returns:
                Processed frame as NumPy array (BGR format)
            '''
            # Example: Convert to grayscale and then back to color
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
        """
    }
    
    func playCurrentVideo() {
        guard currentVideoIndex >= 0 && currentVideoIndex < videos.count else {
            return
        }
        
        // If processing was enabled, disable and re-enable to reset for new video
        let wasProcessingEnabled = isProcessingEnabled
        if wasProcessingEnabled {
            tearDownFrameProcessing()
        }
        
        let videoEntry = videos[currentVideoIndex]
        let videoURL = URL(fileURLWithPath: videoEntry.path)
        
        // Create a new player item
        let playerItem = AVPlayerItem(url: videoURL)
        
        // Create or reuse queue player
        if queuePlayer == nil {
            queuePlayer = AVQueuePlayer()
            playerView.player = queuePlayer
        }
        
        // Remove existing looper if any
        playerLooper?.disableLooping()
        playerLooper = nil
        
        // Create new looper
        playerLooper = AVPlayerLooper(player: queuePlayer, templateItem: playerItem)
        
        // Update window title and video label
        window.title = "Video Player with Python Processing - \(videoURL.lastPathComponent) [\(currentVideoIndex + 1)/\(videos.count)]"
        videoLabel.stringValue = videoEntry.videoName
        
        // Update button states
        previousButton.isEnabled = currentVideoIndex > 0
        nextButton.isEnabled = currentVideoIndex < videos.count - 1
        
        // Re-enable processing if it was enabled
        if wasProcessingEnabled {
            setupFrameProcessing()
        }
        
        queuePlayer.play()
    }
    
    @objc func previousVideo() {
        if currentVideoIndex > 0 {
            currentVideoIndex -= 1
            playCurrentVideo()
        }
    }
    
    @objc func nextVideo() {
        if currentVideoIndex < videos.count - 1 {
            currentVideoIndex += 1
            playCurrentVideo()
        }
    }
    
    func handleKeyEvent(_ event: NSEvent) {
        guard let characters = event.characters else { return }
        
        switch characters {
        case " ":
            // Toggle play/pause
            if queuePlayer.rate == 0 {
                queuePlayer.play()
            } else {
                queuePlayer.pause()
            }
            
        case String(Character(UnicodeScalar(NSLeftArrowFunctionKey)!)):
            // Seek backward 10 seconds
            let currentTime = queuePlayer.currentTime()
            let newTime = CMTimeAdd(currentTime, CMTime(seconds: -10, preferredTimescale: 1))
            queuePlayer.seek(to: newTime)
            
        case String(Character(UnicodeScalar(NSRightArrowFunctionKey)!)):
            // Seek forward 10 seconds
            let currentTime = queuePlayer.currentTime()
            let newTime = CMTimeAdd(currentTime, CMTime(seconds: 10, preferredTimescale: 1))
            queuePlayer.seek(to: newTime)
            
        case "n", "N":
            nextVideo()
            
        case "p", "P":
            previousVideo()
            
        case "e", "E":
            openScriptEditor()
            
        case "f", "F":
            toggleProcessing()
            
        case "q", "Q":
            NSApplication.shared.terminate(nil)
            
        default:
            break
        }
    }
    
    func applicationShouldTerminateAfterLastWindowClosed(_ sender: NSApplication) -> Bool {
        return true
    }
}

// Python Frame Processor Class
class PythonFrameProcessor {
    private let python: PythonInterface
    private let sys: PythonObject
    private let np: PythonObject
    private let cv2: PythonObject
    private var userModule: PythonObject?
    
    let scriptPath: URL
    
    init() {
        // Use the shared Python interpreter instance exposed by PythonKit
        python = Python
        sys = python.import("sys")
        
        // Add necessary paths for Python libraries
        let resourcePath = Bundle.main.resourcePath ?? ""
        sys.path.append(resourcePath)
        sys.path.append("\(resourcePath)/python-stdlib")
        sys.path.append("\(resourcePath)/python-packages")
        
        // Import required modules
        np = python.import("numpy")
        cv2 = python.import("cv2")
        
        // Set up script directory
        let fileManager = FileManager.default
        
        // Use the Application Support directory
        let appSupportDir = try! fileManager.url(for: .applicationSupportDirectory,
                                               in: .userDomainMask,
                                               appropriateFor: nil,
                                               create: true)
            .appendingPathComponent("VideoPlayerPython", isDirectory: true)
        
        // Create directory if it doesn't exist
        if !fileManager.fileExists(atPath: appSupportDir.path) {
            try! fileManager.createDirectory(at: appSupportDir, withIntermediateDirectories: true)
        }
        
        // Set script path
        scriptPath = appSupportDir.appendingPathComponent("video_processor.py")
        
        // Import user module (or create if it doesn't exist)
        reloadScript()
    }
    
    func reloadScript() {
        // Make sure the script exists
        if !FileManager.default.fileExists(atPath: scriptPath.path) {
            // Create a basic script if it doesn't exist
            let basicScript = """
            import numpy as np
            import cv2
            
            def process_frame(frame):
                # Default: return the frame unchanged
                return frame
            """
            
            try? basicScript.write(to: scriptPath, atomically: true, encoding: .utf8)
        }
        
        // Add script directory to Python path
        sys.path.append(scriptPath.deletingLastPathComponent().path)
        
        // Try to import the user script
        do {
            if let loadedModule = userModule {
                // Already imported once: reload to pick up edits
                let importlib = python.import("importlib")
                userModule = importlib.reload(loadedModule)
            } else {
                // First-time import; attemptImport throws instead of trapping
                userModule = try python.attemptImport("video_processor")
            }
        } catch {
            print("Error loading Python script: \(error)")
            // Leave userModule nil; processFrame skips processing until the
            // script loads successfully
            userModule = nil
        }
    }
    
    func processFrame(_ pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
        // Lock the pixel buffer for reading
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        
        // Get dimensions
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)!
        
        // Copy the (possibly row-padded) BGRA pixels into a tightly packed
        // Swift byte array, then hand it to numpy
        var packed = [UInt8](repeating: 0, count: width * height * 4)
        let srcPixels = baseAddress.assumingMemoryBound(to: UInt8.self)
        for y in 0..<height {
            memcpy(&packed[y * width * 4], srcPixels + y * bytesPerRow, width * 4)
        }
        let frame = np.array(packed, dtype: np.uint8).reshape(height, width, 4)  // BGRA format
        
        // Convert to BGR for OpenCV
        let bgrFrame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
        
        // Process the frame with the user's Python function (skip if the
        // user script failed to load)
        guard let userModule = userModule else {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
            return nil
        }
        let processedFrame: PythonObject
        do {
            processedFrame = try userModule.process_frame.throwing.dynamicallyCall(withArguments: [bgrFrame])
        } catch {
            print("Error in Python processing: \(error)")
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
            return nil
        }
        
        // Convert back to BGRA
        let processedBGRA = cv2.cvtColor(processedFrame, cv2.COLOR_BGR2BGRA)
        
        // Create a new pixel buffer
        var newPixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                       width, height,
                                       kCVPixelFormatType_32BGRA,
                                       [kCVPixelBufferCGImageCompatibilityKey: true,
                                        kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary,
                                       &newPixelBuffer)
        
        guard status == kCVReturnSuccess, let newPixelBuffer = newPixelBuffer else {
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
            return nil
        }
        
        // Copy processed data to the new pixel buffer
        CVPixelBufferLockBaseAddress(newPixelBuffer, [])
        let newBaseAddress = CVPixelBufferGetBaseAddress(newPixelBuffer)!
        let newBytesPerRow = CVPixelBufferGetBytesPerRow(newPixelBuffer)
        let newBuffer = UnsafeMutableRawPointer(newBaseAddress)
        
        // Pull the processed pixels back into Swift as a flat byte array
        let flat = np.ascontiguousarray(processedBGRA, dtype: np.uint8).reshape(-1)
        guard let processedBytes = [UInt8](numpy: flat) else {
            CVPixelBufferUnlockBaseAddress(newPixelBuffer, [])
            CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
            return nil
        }
        
        // Copy data row by row, accounting for possible stride differences
        processedBytes.withUnsafeBytes { rawSrc in
            let srcBase = rawSrc.baseAddress!
            for y in 0..<height {
                memcpy(newBuffer.advanced(by: y * newBytesPerRow),
                       srcBase.advanced(by: y * width * 4),
                       width * 4)
            }
        }
        
        // Unlock buffers
        CVPixelBufferUnlockBaseAddress(newPixelBuffer, [])
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        
        return newPixelBuffer
    }
}

// Create and start the application
let delegate = VideoPlayerDelegate()
let app = NSApplication.shared
app.delegate = delegate
app.run()
```

--------------------------------------------------------------------------------
/src/video_editor_mcp/server.py:
--------------------------------------------------------------------------------

```python
import logging
import os
import subprocess
import sys
import threading
import time
from typing import List, Optional, Union, Any, Dict
import json
import webbrowser
import uuid

import mcp.server.stdio
import mcp.types as types
import osxphotos
import requests
from mcp.server import NotificationOptions, Server
from mcp.server.models import InitializationOptions
from pydantic import AnyUrl

from transformers import AutoModel
from videojungle import ApiClient

from .search_local_videos import get_videos_by_keyword

import numpy as np


if os.environ.get("VJ_API_KEY"):
    VJ_API_KEY = os.environ.get("VJ_API_KEY")
else:
    try:
        VJ_API_KEY = sys.argv[1]
    except Exception:
        VJ_API_KEY = None

BROWSER_OPEN = False
# Configure the logging
logging.basicConfig(
    filename="app.log",  # Name of the log file
    level=logging.INFO,  # Log level (e.g., DEBUG, INFO, WARNING, ERROR, CRITICAL)
    format="%(asctime)s - %(levelname)s - %(message)s",  # Log format
)

if not VJ_API_KEY:
    try:
        with open(".env", "r") as f:
            for line in f:
                if "VJ_API_KEY" in line:
                    VJ_API_KEY = line.split("=")[1].strip()
    except Exception:
        raise Exception(
            "VJ_API_KEY environment variable is required or a .env file with the key is required"
        )

    if not VJ_API_KEY:
        raise Exception("VJ_API_KEY environment variable is required")

vj = ApiClient(VJ_API_KEY)


class PhotosDBLoader:
    def __init__(self):
        self._db: Optional[osxphotos.PhotosDB] = None
        self.start_loading()

    def start_loading(self):
        def load():
            self._db = osxphotos.PhotosDB()
            logging.info("PhotosDB loaded")

        thread = threading.Thread(target=load)
        thread.daemon = True  # Make thread exit when main program exits
        thread.start()

    @property
    def db(self) -> osxphotos.PhotosDB:
        if self._db is None:
            raise Exception("PhotosDB still loading")
        return self._db


class EmbeddingModelLoader:
    def __init__(self, model_name: str = "jinaai/jina-clip-v1"):
        self._model: Optional[AutoModel] = None
        self.model_name = model_name
        self.start_loading()

    def start_loading(self):
        def load():
            self._model = AutoModel.from_pretrained(
                self.model_name, trust_remote_code=True
            )
            logging.info(f"Model {self.model_name} loaded")

        thread = threading.Thread(target=load)
        thread.daemon = True
        thread.start()

    @property
    def model(self) -> AutoModel:
        if self._model is None:
            raise Exception(f"Model {self.model_name} still loading")
        return self._model

    def encode_text(
        self,
        texts: Union[str, List[str]],
        truncate_dim: Optional[int] = None,
        task: Optional[str] = None,
    ) -> dict:
        """
        Encode text and format the embeddings in the expected JSON structure
        """
        embeddings = self.model.encode_text(texts, truncate_dim=truncate_dim, task=task)

        # Format the response in the expected structure
        return {"embeddings": embeddings.tolist(), "embedding_type": "text_embeddings"}

    def encode_image(
        self, images: Union[str, List[str]], truncate_dim: Optional[int] = None
    ) -> dict:
        """
        Encode images and format the embeddings in the expected JSON structure
        """
        embeddings = self.model.encode_image(images, truncate_dim=truncate_dim)

        return {"embeddings": embeddings.tolist(), "embedding_type": "image_embeddings"}

    def post_embeddings(
        self, embeddings: dict, endpoint_url: str, headers: Optional[dict] = None
    ) -> requests.Response:
        """
        Post embeddings to the specified endpoint
        """
        if headers is None:
            headers = {"Content-Type": "application/json"}

        response = requests.post(endpoint_url, json=embeddings, headers=headers)
        response.raise_for_status()
        return response
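
    # Example round trip (sketch; the embeddings endpoint path is hypothetical):
    #   payload = model_loader.encode_text("sunset over the water")
    #   model_loader.post_embeddings(payload, "https://api.video-jungle.com/embeddings")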


# Create the global Photos loader instance (requires access to the host computer!)
if sys.platform == "darwin" and os.environ.get("LOAD_PHOTOS_DB"):
    photos_loader = PhotosDBLoader()

model_loader = EmbeddingModelLoader()

server = Server("video-jungle-mcp")

try:
    # videos_at_start = vj.video_files.list()
    projects_at_start = vj.projects.list()
except Exception as e:
    logging.error(f"Error getting projects at start: {e}")
    projects_at_start = []

counter = 10

# Cache for pagination with timestamps for cleanup
_search_result_cache: Dict[str, Dict] = {}
_project_assets_cache: Dict[str, Dict] = {}
_CACHE_TTL = 60 * 4  # 4 minute cache TTL
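# Each cache entry maps a generated id to a dict that includes a "timestamp"
# key (checked by cleanup_cache below) alongside the cached page data.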


# Function to clean old cache entries
def cleanup_cache():
    """Remove cache entries older than TTL."""
    current_time = time.time()
    search_keys_to_remove = []
    project_keys_to_remove = []

    # Clean search cache
    for key, cache_entry in _search_result_cache.items():
        if current_time - cache_entry["timestamp"] > _CACHE_TTL:
            search_keys_to_remove.append(key)

    for key in search_keys_to_remove:
        del _search_result_cache[key]

    # Clean project assets cache
    for key, cache_entry in _project_assets_cache.items():
        if current_time - cache_entry["timestamp"] > _CACHE_TTL:
            project_keys_to_remove.append(key)

    for key in project_keys_to_remove:
        del _project_assets_cache[key]

    total_removed = len(search_keys_to_remove) + len(project_keys_to_remove)
    if total_removed > 0:
        logging.info(
            f"Cleaned up {len(search_keys_to_remove)} expired search caches and {len(project_keys_to_remove)} project asset caches"
        )


tools = [
    "add-video",
    "search-local-videos",
    "search-remote-videos",
    "generate-edit-from-videos",
    "get-project-assets",
    "create-videojungle-project",
    "create-video-bar-chart-from-two-axis-data",
    "create-video-line-chart-from-two-axis-data",
    "edit-locally",
    "generate-edit-from-single-video",
    "update-video-edit",
]


def validate_y_values(y_values: Any) -> bool:
    """
    Validates that y_values is a single-dimensional array/list of numbers.

    Args:
        y_values: The input to validate

    Returns:
        bool: True if validation passes

    Raises:
        ValueError: If validation fails with a descriptive message
    """
    # Check if input is a list or numpy array
    if not isinstance(y_values, (list, np.ndarray)):
        raise ValueError("y_values must be a list")

    # Convert to numpy array for easier handling
    y_array = np.array(y_values)

    # Check if it's multi-dimensional
    if len(y_array.shape) > 1:
        raise ValueError("y_values must be a single-dimensional array")

    # Check if all elements are numeric
    if not np.issubdtype(y_array.dtype, np.number):
        raise ValueError("all elements in y_values must be numbers")

    # Check for NaN or infinite values
    if np.any(np.isnan(y_array)) or np.any(np.isinf(y_array)):
        raise ValueError("y_values cannot contain NaN or infinite values")

    return True
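
# Example: validate_y_values([1, 2.5, 3]) returns True, while
# validate_y_values([[1], [2]]) raises ValueError (multi-dimensional input).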


@server.list_resources()
async def handle_list_resources() -> list[types.Resource]:
    """
    List available video files.
    Each video files is available at a specific url
    """
    global counter, projects_at_start
    counter += 1
    # check to see if DaVinci Resolve is open

    # We do this counter because otherwise Claude is very aggressive
    # about requests
    if counter % 100 == 0:
        projects = vj.projects.list()
        projects_at_start = projects
        counter = 0
    """
    videos = [
        types.Resource(
            uri=AnyUrl(f"vj://video-file/{video.id}"),
            name=f"Video Jungle Video: {video.name}",
            description=f"User provided description: {video.description}",
            mimeType="video/mp4",
        )
        for video in videos_at_start
    ]"""

    projects = [
        types.Resource(
            uri=AnyUrl(f"vj://projects/{project.id}"),
            name=f"Video Jungle Project: {project.name}",
            description=f"Project description: {project.description}",
            mimeType="application/json",
        )
        for project in projects_at_start
    ]

    return projects  # videos  # + projects


@server.read_resource()
async def handle_read_resource(uri: AnyUrl) -> str:
    """
    Read a video's content by its URI.
    The video id is extracted from the URI host component.
    """
    if uri.scheme != "vj":
        raise ValueError(f"Unsupported URI scheme: {uri.scheme}")

    id = uri.path
    if id is not None:
        id = id.lstrip("/projects/")
        proj = vj.projects.get(id)
        logging.info(f"project is: {proj}")
        return proj.model_dump_json()
    raise ValueError(f"Project not found: {id}")


@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
    """
    List available prompts.
    Each prompt can have optional arguments to customize its behavior.
    """
    return [
        types.Prompt(
            name="generate-local-search",
            description="Generate a local search for videos using appropriate label names from the Photos app.",
            arguments=[
                types.PromptArgument(
                    name="search_query",
                    description="Natural language query to be translated into Photos app label names.",
                    required=False,
                )
            ],
        )
    ]


@server.get_prompt()
async def handle_get_prompt(
    name: str, arguments: dict[str, str] | None
) -> types.GetPromptResult:
    """
    Generate a prompt by combining arguments with server state.
    The prompt includes all current notes and can be customized via arguments.
    """
    if name != "generate-local-search":
        raise ValueError(f"Unknown prompt: {name}")

    if not arguments:
        raise ValueError("Missing arguments")

    search_query = arguments.get("search_query")
    if not search_query:
        raise ValueError("Missing search_query")

    return types.GetPromptResult(
        description="Generate a local search for videos using appropriate label names from the Photos app.",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(
                    type="text",
                    text=f"Here are the exact label names you need to match in your query:\n\n For the specific query: {search_query}, you should use the following labels: {photos_loader.db.labels_as_dict} for the search-local-videos tool",
                ),
            )
        ],
    )


@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """
    List available tools.
    Each tool specifies its arguments using JSON Schema validation.
    """
    if os.environ.get("LOAD_PHOTOS_DB"):
        return [
            types.Tool(
                name="create-videojungle-project",
                description="Create a new Video Jungle project to create video edits, add videos, assets, and more.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "name": {
                            "type": "string",
                            "description": "Name of the project",
                        },
                        "description": {
                            "type": "string",
                            "description": "Description of the project",
                        },
                    },
                },
            ),
            types.Tool(
                name="edit-locally",
                description="Create an OpenTimelineIO file for local editing with the user's desktop video editing suite.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "edit_id": {
                            "type": "string",
                            "description": "UUID of the edit to download",
                        },
                        "project_id": {
                            "type": "string",
                            "description": "UUID of the project the video edit lives within",
                        },
                    },
                    "required": ["edit_id", "project_id"],
                },
            ),
            types.Tool(
                name="add-video",
                description="Upload video from URL. Begins analysis of video to allow for later information retrieval for automatic video editing an search.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "url": {"type": "string"},
                    },
                    "required": ["name", "url"],
                },
            ),
            types.Tool(
                name="search-remote-videos",
                description="Default method to search videos. Will return videos including video_ids, which allow for information retrieval and building video edits. For large result sets, you can paginate through chunks using search_id and page parameters.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Text search query"},
                        "limit": {
                            "type": "integer",
                            "default": 10,
                            "minimum": 1,
                            "description": "Maximum number of results to return per page",
                        },
                        "project_id": {
                            "type": "string",
                            "format": "uuid",
                            "description": "Project ID to scope the search",
                        },
                        "duration_min": {
                            "type": "number",
                            "minimum": 0,
                            "description": "Minimum video duration in seconds",
                        },
                        "duration_max": {
                            "type": "number",
                            "minimum": 0,
                            "description": "Maximum video duration in seconds",
                        },
                        "search_id": {
                            "type": "string",
                            "description": "ID of a previous search to continue pagination. If provided, returns the next chunk of results",
                        },
                        "page": {
                            "type": "integer",
                            "default": 1,
                            "minimum": 1,
                            "description": "Page number to retrieve when paginating through results",
                        },
                        "items_per_page": {
                            "type": "integer",
                            "default": 5,
                            "minimum": 1,
                            "maximum": 20,
                            "description": "Number of items to show per page when paginating",
                        },
                        "created_after": {
                            "type": "string",
                            "format": "date-time",
                            "description": "Filter videos created after this datetime",
                        },
                        "created_before": {
                            "type": "string",
                            "format": "date-time",
                            "description": "Filter videos created before this datetime",
                        },
                        "tags": {
                            "type": "array",
                            "items": {"type": "string"},
                            "description": "Set of tags to filter by",
                        },
                        "include_segments": {
                            "type": "boolean",
                            "default": True,
                            "description": "Whether to include video segments in results",
                        },
                        "include_related": {
                            "type": "boolean",
                            "default": False,
                            "description": "Whether to include related videos",
                        },
                        "query_audio": {
                            "type": "string",
                            "description": "Audio search query",
                        },
                        "query_img": {
                            "type": "string",
                            "description": "Image search query",
                        },
                    },
                },
            ),
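            # Example search-remote-videos arguments (sketch):
            #   {"query": "sunset over water", "limit": 10, "tags": ["beach"]}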
            types.Tool(
                name="search-local-videos",
                description="Search user's local videos in Photos app by keyword",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "keyword": {"type": "string"},
                        "start_date": {
                            "type": "string",
                            "description": "ISO 8601 formatted datetime string (e.g. 2024-01-21T15:30:00Z)",
                        },
                        "end_date": {
                            "type": "string",
                            "description": "ISO 8601 formatted datetime string (e.g. 2024-01-21T15:30:00Z)",
                        },
                    },
                    "required": ["keyword"],
                },
            ),
            types.Tool(
                name="generate-edit-from-videos",
                description="Generate an edit from videos, from within a specific project. Creates a new project to work within no existing project ID (UUID) is passed ",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "project_id": {
                            "type": "string",
                            "description": "Either an existing Project UUID or String. A UUID puts the edit in an existing project, and a string creates a new project with that name.",
                        },
                        "name": {"type": "string", "description": "Video Edit name"},
                        "open_editor": {
                            "type": "boolean",
                            "description": "Open a live editor with the project's edit",
                        },
                        "resolution": {
                            "type": "string",
                            "description": "Video resolution. Examples include '1920x1080', '1280x720'",
                        },
                        "subtitles": {
                            "type": "boolean",
                            "description": "Whether to render subtitles in the video edit",
                            "default": True,
                        },
                        "vertical_crop": {
                            "type": "string",
                            "description": "ML-powered automatic vertical crop mode. Pass 'standard' to enable automatic vertical video cropping",
                        },
                        "edit": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "video_id": {
                                        "type": "string",
                                        "description": "Video UUID",
                                    },
                                    "video_start_time": {
                                        "type": "string",
                                        "description": "Clip start time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                    },
                                    "video_end_time": {
                                        "type": "string",
                                        "description": "Clip end time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                    },
                                    "type": {
                                        "type": "string",
                                        "description": "Type of asset ('videofile' for video files, or 'user' for project specific assets)",
                                    },
                                    "audio_levels": {
                                        "type": "array",
                                        "description": "Optional audio level adjustments for this clip",
                                        "items": {
                                            "type": "object",
                                            "properties": {
                                                "audio_level": {
                                                    "type": "string",
                                                    "description": "Audio level (0.0 to 1.0)",
                                                }
                                            },
                                        },
                                    },
                                    "crop": {
                                        "type": "object",
                                        "description": "Optional crop/zoom settings for this video segment",
                                        "properties": {
                                            "zoom": {
                                                "type": "number",
                                                "minimum": 0.1,
                                                "maximum": 10.0,
                                                "default": 1.0,
                                                "description": "Zoom factor (1.0 = 100%, 1.5 = 150%, etc.)",
                                            },
                                            "position_x": {
                                                "type": "number",
                                                "minimum": -1.0,
                                                "maximum": 1.0,
                                                "default": 0.0,
                                                "description": "Horizontal offset from center (-1.0 to 1.0)",
                                            },
                                            "position_y": {
                                                "type": "number",
                                                "minimum": -1.0,
                                                "maximum": 1.0,
                                                "default": 0.0,
                                                "description": "Vertical offset from center (-1.0 to 1.0)",
                                            },
                                        },
                                    },
                                },
                            },
                            "description": "Array of video clips to include in the edit",
                        },
                        "audio_asset": {
                            "type": "object",
                            "properties": {
                                "audio_id": {
                                    "type": "string",
                                    "description": "Audio asset UUID",
                                },
                                "type": {
                                    "type": "string",
                                    "description": "Audio file type (e.g., 'mp3', 'wav')",
                                },
                                "filename": {
                                    "type": "string",
                                    "description": "Audio file name",
                                },
                                "audio_start_time": {
                                    "type": "string",
                                    "description": "Audio start time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                },
                                "audio_end_time": {
                                    "type": "string",
                                    "description": "Audio end time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                },
                                "url": {
                                    "type": "string",
                                    "description": "Optional URL for the audio file",
                                },
                                "audio_levels": {
                                    "type": "array",
                                    "description": "Optional audio level adjustments",
                                    "items": {"type": "object"},
                                },
                            },
                            "description": "Optional audio overlay for the video edit",
                        },
                    },
                    "required": ["edit", "name", "project_id"],
                },
            ),
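            # Example entry for the "edit" array above (sketch):
            #   {"video_id": "<uuid>", "video_start_time": "00:00:01.000",
            #    "video_end_time": "00:00:04.250", "type": "videofile"}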
            types.Tool(
                name="generate-edit-from-single-video",
                description="Generate a compressed video edit from a single video.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "project_id": {"type": "string"},
                        "resolution": {"type": "string"},
                        "video_id": {"type": "string"},
                        "subtitles": {
                            "type": "boolean",
                            "description": "Whether to render subtitles in the video edit",
                            "default": True,
                        },
                        "vertical_crop": {
                            "type": "string",
                            "description": "ML-powered automatic vertical crop mode. Pass 'standard' to enable automatic vertical video cropping",
                        },
                        "edit": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "video_start_time": {
                                        "type": "string",
                                        "description": "Clip start time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                    },
                                    "video_end_time": {
                                        "type": "string",
                                        "description": "Clip end time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                    },
                                },
                            },
                            "description": "Array of time segments to extract from the video",
                        },
                    },
                    "required": ["edit", "project_id", "video_id"],
                },
            ),
            types.Tool(
                name="update-video-edit",
                description="Update an existing video edit within a specific project.",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "project_id": {
                            "type": "string",
                            "description": "UUID of the project containing the edit",
                        },
                        "edit_id": {
                            "type": "string",
                            "description": "UUID of the video edit to update",
                        },
                        "name": {"type": "string", "description": "Video Edit name"},
                        "description": {
                            "type": "string",
                            "description": "Description of the video edit",
                        },
                        "video_output_format": {
                            "type": "string",
                            "description": "Output format for the video (e.g., 'mp4', 'webm')",
                        },
                        "video_output_resolution": {
                            "type": "string",
                            "description": "Video resolution. Examples include '1920x1080', '1280x720'",
                        },
                        "video_output_fps": {
                            "type": "number",
                            "description": "Frames per second for the output video",
                        },
                        "subtitles": {
                            "type": "boolean",
                            "description": "Whether to render subtitles in the video edit",
                        },
                        "video_series_sequential": {
                            "type": "array",
                            "description": "Array of video clips in sequential order",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "video_id": {
                                        "type": "string",
                                        "description": "Video UUID",
                                    },
                                    "video_start_time": {
                                        "type": "string",
                                        "description": "Clip start time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                    },
                                    "video_end_time": {
                                        "type": "string",
                                        "description": "Clip end time in HH:MM:SS.mmm format (e.g., '00:01:30.500' or '01:05:22.123'). Hours, minutes, seconds, and 3-digit milliseconds are required.",
                                    },
                                    "audio_levels": {
                                        "type": "array",
                                        "description": "Optional audio level adjustments for this clip",
                                        "items": {
                                            "type": "object",
                                            "properties": {
                                                "audio_level": {
                                                    "type": "string",
                                                    "description": "Audio level (0.0 to 1.0)",
                                                }
                                            },
                                        },
                                    },
                                    "crop": {
                                        "type": "object",
                                        "description": "Optional crop/zoom settings for this video segment",
                                        "properties": {
                                            "zoom": {
                                                "type": "number",
                                                "minimum": 0.1,
                                                "maximum": 10.0,
                                                "default": 1.0,
                                                "description": "Zoom factor (1.0 = 100%, 1.5 = 150%, etc.)",
                                            },
                                            "position_x": {
                                                "type": "number",
                                                "minimum": -1.0,
                                                "maximum": 1.0,
                                                "default": 0.0,
                                                "description": "Horizontal offset from center (-1.0 to 1.0)",
                                            },
                                            "position_y": {
                                                "type": "number",
                                                "minimum": -1.0,
                                                "maximum": 1.0,
                                                "default": 0.0,
                                                "description": "Vertical offset from center (-1.0 to 1.0)",
                                            },
                                        },
                                    },
                                },
                            },
                        },
                        "audio_overlay": {
                            "type": "object",
                            "description": "Audio overlay settings and assets",
                        },
                        "rendered": {
                            "type": "boolean",
                            "description": "Whether the edit has been rendered",
                        },
                        "vertical_crop": {
                            "type": "string",
                            "description": "ML-powered automatic vertical crop mode. Pass 'standard' to enable automatic vertical video cropping",
                        },
                    },
                    "required": ["project_id", "edit_id"],
                },
            ),
            types.Tool(
                name="create-video-bar-chart-from-two-axis-data",
                description="Create a video bar chart from two-axis data",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "x_values": {"type": "array", "items": {"type": "string"}},
                        "y_values": {"type": "array", "items": {"type": "number"}},
                        "x_label": {"type": "string"},
                        "y_label": {"type": "string"},
                        "title": {"type": "string"},
                        "filename": {"type": "string"},
                    },
                    "required": ["x_values", "y_values", "x_label", "y_label", "title"],
                },
            ),
            types.Tool(
                name="create-video-line-chart-from-two-axis-data",
                description="Create a video line chart from two-axis data",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "x_values": {"type": "array", "items": {"type": "string"}},
                        "y_values": {"type": "array", "items": {"type": "number"}},
                        "x_label": {"type": "string"},
                        "y_label": {"type": "string"},
                        "title": {"type": "string"},
                        "filename": {"type": "string"},
                    },
                    "required": ["x_values", "y_values", "x_label", "y_label", "title"],
                },
            ),
            types.Tool(
                name="get-project-assets",
                description="Get all assets and details for a specific project, with pagination support for large projects",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "project_id": {
                            "type": "string",
                            "description": "UUID of the project to retrieve assets for",
                        },
                        "asset_types": {
                            "type": "array",
                            "items": {"type": "string"},
                            "description": "List of asset types to filter by (e.g. 'user', 'video', 'image', 'audio', 'generated_video', 'video_edit'). Video assets in a project are labeled 'user' for user uploaded, so prefer 'user' when building a video edit from project assets.",
                            "default": ["user", "video", "image", "audio"],
                        },
                        "page": {
                            "type": "integer",
                            "default": 1,
                            "minimum": 1,
                            "description": "Page number to retrieve when paginating through assets",
                        },
                        "items_per_page": {
                            "type": "integer",
                            "default": 10,
                            "minimum": 1,
                            "maximum": 50,
                            "description": "Number of items to show per page when paginating",
                        },
                        "asset_cache_id": {
                            "type": "string",
                            "description": "ID of a previous asset cache to continue pagination. If provided, returns the next chunk of results",
                        },
                    },
                    "required": ["project_id"],
                },
            ),
        ]
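    # Fallback: the default tool list, returned when the conditional branch
    # above does not apply.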
    return [
        types.Tool(
            name="create-videojungle-project",
            description="Create a new Video Jungle project to create video edits, add videos, assets, and more.",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string", "description": "Name of the project"},
                    "description": {
                        "type": "string",
                        "description": "Description of the project",
                    },
                },
            },
        ),
        types.Tool(
            name="edit-locally",
            description="Create an OpenTimelineIO file for local editing with the user's desktop video editing suite.",
            inputSchema={
                "type": "object",
                "properties": {
                    "edit_id": {
                        "type": "string",
                        "description": "UUID of the edit to download",
                    },
                    "project_id": {
                        "type": "string",
                        "description": "UUID of the project the video edit lives within",
                    },
                },
                "required": ["edit_id", "project_id"],
            },
        ),
        types.Tool(
            name="add-video",
            description="Upload video from URL. Begins analysis of video to allow for later information retrieval for automatic video editing an search.",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "url": {"type": "string"},
                },
                "required": ["name", "url"],
            },
        ),
        types.Tool(
            name="search-remote-videos",
            description="Default method to search videos. Will return videos including video_ids, which allow for information retrieval and building video edits. For large result sets, you can paginate through chunks using search_id and page parameters.",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Text search query"},
                    "limit": {
                        "type": "integer",
                        "default": 50,
                        "minimum": 1,
                        "maximum": 100,
                        "description": "Maximum number of results to return per page",
                    },
                    "project_id": {
                        "type": "string",
                        "format": "uuid",
                        "description": "Project ID to scope the search",
                    },
                    "duration_min": {
                        "type": "number",
                        "minimum": 0,
                        "description": "Minimum video duration in seconds",
                    },
                    "duration_max": {
                        "type": "number",
                        "minimum": 0,
                        "description": "Maximum video duration in seconds",
                    },
                    "search_id": {
                        "type": "string",
                        "description": "ID of a previous search to continue pagination. If provided, returns the next chunk of results",
                    },
                    "page": {
                        "type": "integer",
                        "default": 1,
                        "minimum": 1,
                        "description": "Page number to retrieve when paginating through results",
                    },
                    "items_per_page": {
                        "type": "integer",
                        "default": 20,
                        "minimum": 1,
                        "maximum": 50,
                        "description": "Number of items to show per page when paginating",
                    },
                    "created_after": {
                        "type": "string",
                        "format": "date-time",
                        "description": "Filter videos created after this datetime",
                    },
                    "created_before": {
                        "type": "string",
                        "format": "date-time",
                        "description": "Filter videos created before this datetime",
                    },
                    "tags": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Set of tags to filter by",
                    },
                    "include_segments": {
                        "type": "boolean",
                        "default": True,
                        "description": "Whether to include video segments in results",
                    },
                    "include_related": {
                        "type": "boolean",
                        "default": False,
                        "description": "Whether to include related videos",
                    },
                    "query_audio": {
                        "type": "string",
                        "description": "Audio search query",
                    },
                    "query_img": {
                        "type": "string",
                        "description": "Image search query",
                    },
                },
            },
        ),
        types.Tool(
            name="generate-edit-from-videos",
            description="Generate an edit from videos, from within a specific project. Creates a new project to work within no existing project ID (UUID) is passed ",
            inputSchema={
                "type": "object",
                "properties": {
                    "project_id": {
                        "type": "string",
                        "description": "Either an existing Project UUID or String. A UUID puts the edit in an existing project, and a string creates a new project with that name.",
                    },
                    "name": {"type": "string", "description": "Video Edit name"},
                    "open_editor": {
                        "type": "boolean",
                        "description": "Open a live editor with the project's edit",
                    },
                    "resolution": {
                        "type": "string",
                        "description": "Video resolution. Examples include '1920x1080', '1280x720'",
                    },
                    "subtitles": {
                        "type": "boolean",
                        "description": "Whether to render subtitles in the video edit",
                        "default": True,
                    },
                    "vertical_crop": {
                        "type": "string",
                        "description": "ML-powered automatic vertical crop mode. Pass 'standard' to enable automatic vertical video cropping",
                    },
                    "edit": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "video_id": {
                                    "type": "string",
                                    "description": "Video UUID",
                                },
                                "video_start_time": {
                                    "type": "string",
                                    "description": "Clip start time in 00:00:00.000 format",
                                },
                                "video_end_time": {
                                    "type": "string",
                                    "description": "Clip end time in 00:00:00.000 format",
                                },
                                "type": {
                                    "type": "string",
                                    "description": "Type of asset ('videofile' for video files, or 'user' for project specific assets)",
                                },
                                "audio_levels": {
                                    "type": "array",
                                    "description": "Optional audio level adjustments for this clip",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "audio_level": {
                                                "type": "string",
                                                "description": "Audio level (0.0 to 1.0)",
                                            }
                                        },
                                    },
                                },
                                "crop": {
                                    "type": "object",
                                    "description": "Optional crop/zoom settings for this video segment",
                                    "properties": {
                                        "zoom": {
                                            "type": "number",
                                            "minimum": 0.1,
                                            "maximum": 10.0,
                                            "default": 1.0,
                                            "description": "Zoom factor (1.0 = 100%, 1.5 = 150%, etc.)",
                                        },
                                        "position_x": {
                                            "type": "number",
                                            "minimum": -1.0,
                                            "maximum": 1.0,
                                            "default": 0.0,
                                            "description": "Horizontal offset from center (-1.0 to 1.0)",
                                        },
                                        "position_y": {
                                            "type": "number",
                                            "minimum": -1.0,
                                            "maximum": 1.0,
                                            "default": 0.0,
                                            "description": "Vertical offset from center (-1.0 to 1.0)",
                                        },
                                    },
                                },
                            },
                        },
                        "description": "Array of video clips to include in the edit",
                    },
                    "audio_asset": {
                        "type": "object",
                        "properties": {
                            "audio_id": {
                                "type": "string",
                                "description": "Audio asset UUID",
                            },
                            "type": {
                                "type": "string",
                                "description": "Audio file type (e.g., 'mp3', 'wav')",
                            },
                            "filename": {
                                "type": "string",
                                "description": "Audio file name",
                            },
                            "audio_start_time": {
                                "type": "string",
                                "description": "Audio start time in 00:00:00.000 format",
                            },
                            "audio_end_time": {
                                "type": "string",
                                "description": "Audio end time in 00:00:00.000 format",
                            },
                            "url": {
                                "type": "string",
                                "description": "Optional URL for the audio file",
                            },
                            "audio_levels": {
                                "type": "array",
                                "description": "Optional audio level adjustments",
                                "items": {"type": "object"},
                            },
                        },
                        "description": "Optional audio overlay for the video edit",
                    },
                },
                "required": ["edit", "name", "project_id"],
            },
        ),
        types.Tool(
            name="generate-edit-from-single-video",
            description="Generate a compressed video edit from a single video.",
            inputSchema={
                "type": "object",
                "properties": {
                    "project_id": {"type": "string"},
                    "resolution": {"type": "string"},
                    "video_id": {"type": "string"},
                    "subtitles": {
                        "type": "boolean",
                        "description": "Whether to render subtitles in the video edit",
                        "default": True,
                    },
                    "vertical_crop": {
                        "type": "string",
                        "description": "ML-powered automatic vertical crop mode. Pass 'standard' to enable automatic vertical video cropping",
                    },
                    "edit": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "video_start_time": {
                                    "type": "string",
                                    "description": "Clip start time in 00:00:00.000 format",
                                },
                                "video_end_time": {
                                    "type": "string",
                                    "description": "Clip end time in 00:00:00.000 format",
                                },
                            },
                        },
                        "description": "Array of time segments to extract from the video",
                    },
                },
                "required": ["edit", "project_id", "video_id"],
            },
        ),
        types.Tool(
            name="update-video-edit",
            description="Update an existing video edit within a specific project.",
            inputSchema={
                "type": "object",
                "properties": {
                    "project_id": {
                        "type": "string",
                        "description": "UUID of the project containing the edit",
                    },
                    "edit_id": {
                        "type": "string",
                        "description": "UUID of the video edit to update",
                    },
                    "name": {"type": "string", "description": "Video Edit name"},
                    "description": {
                        "type": "string",
                        "description": "Description of the video edit",
                    },
                    "video_output_format": {
                        "type": "string",
                        "description": "Output format for the video (e.g., 'mp4', 'webm')",
                    },
                    "video_output_resolution": {
                        "type": "string",
                        "description": "Video resolution. Examples include '1920x1080', '1280x720'",
                    },
                    "video_output_fps": {
                        "type": "number",
                        "description": "Frames per second for the output video",
                    },
                    "subtitles": {
                        "type": "boolean",
                        "description": "Whether to render subtitles in the video edit",
                    },
                    "video_series_sequential": {
                        "type": "array",
                        "description": "Array of video clips in sequential order",
                        "items": {
                            "type": "object",
                            "properties": {
                                "video_id": {
                                    "type": "string",
                                    "description": "Video UUID",
                                },
                                "video_start_time": {
                                    "type": "string",
                                    "description": "Clip start time in 00:00:00.000 format",
                                },
                                "video_end_time": {
                                    "type": "string",
                                    "description": "Clip end time in 00:00:00.000 format",
                                },
                                "type": {
                                    "type": "string",
                                    "description": "Type of asset ('videofile' for video files, or 'user' for project specific assets)",
                                },
                                "audio_levels": {
                                    "type": "array",
                                    "description": "Optional audio level adjustments for this clip",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "audio_level": {
                                                "type": "string",
                                                "description": "Audio level (0.0 to 1.0)",
                                            }
                                        },
                                    },
                                },
                                "crop": {
                                    "type": "object",
                                    "description": "Optional crop/zoom settings for this video segment",
                                    "properties": {
                                        "zoom": {
                                            "type": "number",
                                            "minimum": 0.1,
                                            "maximum": 10.0,
                                            "default": 1.0,
                                            "description": "Zoom factor (1.0 = 100%, 1.5 = 150%, etc.)",
                                        },
                                        "position_x": {
                                            "type": "number",
                                            "minimum": -1.0,
                                            "maximum": 1.0,
                                            "default": 0.0,
                                            "description": "Horizontal offset from center (-1.0 to 1.0)",
                                        },
                                        "position_y": {
                                            "type": "number",
                                            "minimum": -1.0,
                                            "maximum": 1.0,
                                            "default": 0.0,
                                            "description": "Vertical offset from center (-1.0 to 1.0)",
                                        },
                                    },
                                },
                            },
                        },
                    },
                    "audio_overlay": {
                        "type": "object",
                        "description": "Audio overlay settings and assets",
                    },
                    "rendered": {
                        "type": "boolean",
                        "description": "Whether the edit has been rendered",
                    },
                    "vertical_crop": {
                        "type": "string",
                        "description": "ML-powered automatic vertical crop mode. Pass 'standard' to enable automatic vertical video cropping",
                    },
                },
                "required": ["project_id", "edit_id"],
            },
        ),
        types.Tool(
            name="create-video-bar-chart-from-two-axis-data",
            description="Create a video bar chart from two-axis data",
            inputSchema={
                "type": "object",
                "properties": {
                    "x_values": {"type": "array", "items": {"type": "string"}},
                    "y_values": {"type": "array", "items": {"type": "number"}},
                    "x_label": {"type": "string"},
                    "y_label": {"type": "string"},
                    "title": {"type": "string"},
                    "filename": {"type": "string"},
                },
                "required": ["x_values", "y_values", "x_label", "y_label", "title"],
            },
        ),
        types.Tool(
            name="create-video-line-chart-from-two-axis-data",
            description="Create a video line chart from two-axis data",
            inputSchema={
                "type": "object",
                "properties": {
                    "x_values": {"type": "array", "items": {"type": "string"}},
                    "y_values": {"type": "array", "items": {"type": "number"}},
                    "x_label": {"type": "string"},
                    "y_label": {"type": "string"},
                    "title": {"type": "string"},
                    "filename": {"type": "string"},
                },
                "required": ["x_values", "y_values", "x_label", "y_label", "title"],
            },
        ),
        types.Tool(
            name="get-project-assets",
            description="Get all assets and details for a specific project, with pagination support for large projects",
            inputSchema={
                "type": "object",
                "properties": {
                    "project_id": {
                        "type": "string",
                        "description": "UUID of the project to retrieve assets for",
                    },
                    "asset_types": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "List of asset types to filter by (e.g. 'user', 'video', 'image', 'audio', 'generated_video', 'generated_audio', 'video_edit')",
                        "default": [
                            "user",
                            "video",
                            "image",
                            "audio",
                            "generated_audio",
                        ],
                    },
                    "page": {
                        "type": "integer",
                        "default": 1,
                        "minimum": 1,
                        "description": "Page number to retrieve when paginating through assets",
                    },
                    "items_per_page": {
                        "type": "integer",
                        "default": 50,
                        "minimum": 1,
                        "maximum": 100,
                        "description": "Number of items to show per page when paginating",
                    },
                    "asset_cache_id": {
                        "type": "string",
                        "description": "ID of a previous asset cache to continue pagination. If provided, returns the next chunk of results",
                    },
                },
                "required": ["project_id"],
            },
        ),
    ]
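

# Illustrative sketch only (not used by the server at runtime): a minimal,
# hypothetical arguments payload for the "generate-edit-from-videos" tool
# above. The UUIDs are placeholders; field names follow the inputSchema
# declared in this file.
_EXAMPLE_GENERATE_EDIT_ARGS = {
    "project_id": "00000000-0000-0000-0000-000000000000",
    "name": "Highlight reel",
    "resolution": "1920x1080",
    "edit": [
        {
            "video_id": "11111111-1111-1111-1111-111111111111",
            "video_start_time": "00:00:05.000",
            "video_end_time": "00:00:12.500",
            "type": "videofile",
        }
    ],
}


def _seconds_to_timestamp(seconds: float) -> str:
    """Hypothetical helper (illustration only, unused elsewhere): format a
    float number of seconds as the HH:MM:SS.mmm strings the schemas expect.

    _seconds_to_timestamp(90.5) -> '00:01:30.500'
    """
    ms = int(round(seconds * 1000))
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{ms:03d}"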


def format_single_video(video):
    """
    Format a single video metadata tuple (metadata_dict, confidence_score)
    Returns a formatted string and a Python code string representation
    """
    try:
        # Create a human-readable, left-aligned summary
        readable_format = (
            "Video Embedding Result:\n"
            "-------------\n"
            f"Video ID: {video['video_id']}\n"
            f"Description: {video['description']}\n"
            f"Timestamp: {video['timepoint']}\n"
            f"Detected Items: {', '.join(video['detected_items']) if video['detected_items'] else 'None'}"
        )
    except Exception as e:
        raise ValueError(f"Error formatting video: {str(e)}")

    return readable_format


def filter_unique_videos_keep_first(json_results):
    """Deduplicate results by video_id, keeping the first occurrence of each."""
    seen = set()
    return [
        item
        for item in json_results
        # set.add() returns None, so "not seen.add(...)" is always True and
        # records the id as a side effect after the membership check.
        if item["video_id"] not in seen and not seen.add(item["video_id"])
    ]
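
# Example (illustrative):
#   filter_unique_videos_keep_first(
#       [{"video_id": "a"}, {"video_id": "b"}, {"video_id": "a"}]
#   )
#   -> [{"video_id": "a"}, {"video_id": "b"}]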


def format_video_info(video):
    try:
        if video.get("script") is not None:
            if len(video.get("script")) > 200:
                script = video.get("script")[:200] + "..."
            else:
                script = video.get("script")
        else:
            script = "N/A"
        segments = []
        for segment in video.get("matching_segments", []):
            segments.append(
                f"- Time: {segment.get('start_seconds', 'N/A')} to {segment.get('end_seconds', 'N/A')}"
            )
        joined_segments = "\n".join(segments)
        return (
            f"- Video Id: {video.get('video_id', 'N/A')}\n"
            f"  Video name: {video.get('video', {}).get('name', 'N/A')}\n"
            f"  URL to view video: {video.get('video', {}).get('url', 'N/A')}\n"
            f"  Video manuscript: {script}\n"
            f"  Matching scenes: {joined_segments}\n"
            f"  Generated description: {video.get('video', {}).get('generated_description', 'N/A')}"
        )
    except Exception as e:
        return f"Error formatting video: {str(e)}"


def format_video_info_long(video):
    try:
        if video.get("script") is not None:
            if len(video.get("script")) > 200:
                script = video.get("script")[:200] + "..."
            else:
                script = video.get("script")
        else:
            script = "N/A"
        return (
            f"- Video Id: {video.get('video_id', 'N/A')}\n"
            f"  Video name: {video.get('video', {}).get('name', 'N/A')}\n"
            f"  URL to view video: {video.get('video', {}).get('url', 'N/A')}\n"
            f"  Generated description: {video.get('video', {}).get('generated_description', 'N/A')}\n"
            f"  Video manuscript: {script}\n"
            f"  Matching times: {video.get('scene_changes', 'N/A')}"
        )
    except Exception as e:
        return f"Error formatting video: {str(e)}"


def format_asset_info(asset):
    """Format asset information for display based on the example structure you showed"""
    try:
        # Support both type and asset_type fields
        asset_type = asset.get("type", asset.get("asset_type", "unknown"))
        asset_id = asset.get("id", "N/A")
        # Support both name and keyname fields
        asset_name = asset.get("name", asset.get("keyname", "N/A"))

        # Common fields first
        formatted = [f"- Asset ID: {asset_id}"]
        formatted.append(f"  Type: {asset_type}")
        formatted.append(f"  Name: {asset_name}")

        # Get URL (try different possible fields)
        url = asset.get("url", "N/A")
        download_url = asset.get("download_url", "N/A")

        if url and url != "N/A":
            # Truncate very long URLs
            if len(url) > 80:
                formatted.append(f"  URL: {url[:77]}...")
            else:
                formatted.append(f"  URL: {url}")

        if download_url and download_url != "N/A" and download_url != url:
            if len(download_url) > 80:
                formatted.append(f"  Download URL: {download_url[:77]}...")
            else:
                formatted.append(f"  Download URL: {download_url}")

        # Description
        description = asset.get("description", "N/A")
        if description and description != "N/A":
            if len(description) > 120:
                formatted.append(f"  Description: {description[:117]}...")
            else:
                formatted.append(f"  Description: {description}")

        # Creation time
        created_at = asset.get("created_at", "N/A")
        if created_at and created_at != "N/A":
            formatted.append(f"  Created: {created_at}")

        # Handle video assets and user-uploaded content
        if asset_type in ["user", "video"]:
            # Look for a generated description from the analysis pipeline
            gen_desc = asset.get("generated_description", "N/A")
            if gen_desc and gen_desc != "N/A":
                formatted.append(f"  Generated description: {gen_desc}")

            # Check for analysis details under create_parameters.analysis
            create_params = asset.get("create_parameters", {})
            if create_params and isinstance(create_params, dict):
                analysis = create_params.get("analysis", {})
                if analysis and isinstance(analysis, dict):
                    formatted.append(f"  Analysis: {analysis}")

            # Status field (if available)
            status = asset.get("status", "N/A")
            if status and status != "N/A":
                formatted.append(f"  Status: {status}")

            # Asset path field (if available)
            asset_path = asset.get("asset_path", "N/A")
            if asset_path and asset_path != "N/A":
                formatted.append(f"  Asset path: {asset_path}")

        # Handle video_edit assets
        elif asset_type == "video_edit":
            description = asset.get("description", "N/A")
            if description and description != "N/A":
                formatted.append(f"  Description: {description}")

            # Add edit-specific details
            resolution = asset.get("video_output_resolution", "N/A")
            fps = asset.get("video_output_fps", "N/A")
            output_format = asset.get("video_output_format", "N/A")

            if resolution != "N/A":
                formatted.append(f"  Resolution: {resolution}")
            if fps != "N/A":
                formatted.append(f"  FPS: {fps}")
            if output_format != "N/A":
                formatted.append(f"  Format: {output_format}")

            # Show clips in the edit
            clips = asset.get("video_series_sequential", [])
            if clips and len(clips) > 0:
                formatted.append(f"  Clips: {len(clips)} total")
                # Show first 3 clips as examples
                for i, clip in enumerate(clips[:3]):
                    clip_id = clip.get("video_id", "N/A")
                    start = clip.get("video_start_time", "N/A")
                    end = clip.get("video_end_time", "N/A")
                    clip_type = clip.get("type", "N/A")
                    formatted.append(
                        f"    Clip {i+1}: {clip_id} of type {clip_type} from {start} to {end}"
                    )
                if len(clips) > 3:
                    formatted.append(f"    ... and {len(clips)-3} more clips")

        # Add any other important fields we might have missed
        important_fields = ["filetype", "duration", "width", "height", "uploaded"]
        for field in important_fields:
            if field in asset and asset[field] is not None and asset[field] != "N/A":
                formatted.append(f"  {field}: {asset[field]}")

        return "\n".join(formatted)
    except Exception as e:
        return f"Error formatting asset {asset.get('id', 'unknown')}: {str(e)}"


@server.call_tool()
async def handle_call_tool(
    name: str, arguments: dict | None
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    """
    Handle tool execution requests.
    Tools can modify server state and notify clients of changes.
    """
    if name not in tools:
        raise ValueError(f"Unknown tool: {name}")

    if not arguments:
        raise ValueError("Missing arguments")

    # Store some tool results in server state for pagination
    global _search_result_cache

    if name == "create-videojungle-project" and arguments:
        namez = arguments.get("name")
        description = arguments.get("description")

        if not namez or not description:
            raise ValueError("Missing project name")

        # Create a new project
        project = vj.projects.create(name=namez, description=description)

        # Notify clients that resources have changed
        await server.request_context.session.send_resource_list_changed()

        return [
            types.TextContent(
                type="text",
                text=f"Created new project '{project.name}' with id '{project.id}'",
            )
        ]

    if name == "edit-locally" and arguments:
        project_id = arguments.get("project_id")
        edit_id = arguments.get("edit_id")

        if not project_id or not edit_id:
            raise ValueError("Missing edit and / or  project id")
        env_vars = {"VJ_API_KEY": VJ_API_KEY, "PATH": os.environ["PATH"]}
        edit_data = vj.projects.get_edit(project_id, edit_id)
        formatted_name = edit_data["name"].replace(" ", "-")
        with open(f"{formatted_name}.json", "w") as f:
            json.dump(edit_data, f, indent=4)
        logging.info(f"edit data is: {edit_data}")
        logging.info(f"current directory is: {os.getcwd()}")
        subprocess.Popen(
            [
                "uv",
                "run",
                "python",
                "./src/video_editor_mcp/generate_opentimeline.py",
                "--file",
                f"{formatted_name}.json",
                "--output",
                f"{formatted_name}.otio",
            ],
            env=env_vars,
        )

        return [
            types.TextContent(
                type="text",
                text=f"Edit {edit_data['name']} is being downloaded and converted to OpenTimelineIO format. You can find it in the current directory.",
            )
        ]

    if name == "add-video" and arguments:
        name = arguments.get("name")  # type: ignore
        url = arguments.get("url")

        if not name or not url:
            raise ValueError("Missing name or content")

        # Update server state
        vj.video_files.create(name=name, filename=str(url), upload_method="url")

        # Notify clients that resources have changed
        await server.request_context.session.send_resource_list_changed()
        return [
            types.TextContent(
                type="text",
                text=f"Added video '{name}' with url: {url}",
            )
        ]
    if name == "search-remote-videos" and arguments:
        # Check if this is a pagination request
        search_id = arguments.get("search_id")
        page = arguments.get("page", 1)
        items_per_page = arguments.get("items_per_page", 20)

        # Run cache cleanup
        cleanup_cache()

        # If we have a search_id, we're doing pagination
        if search_id and search_id in _search_result_cache:
            cache_entry = _search_result_cache[search_id]
            cached_results = cache_entry["results"]
            total_items = len(cached_results)
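            # Ceiling division: e.g. 11 items at 5 per page -> 3 pages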
            total_pages = (total_items + items_per_page - 1) // items_per_page

            # Update timestamp on access
            _search_result_cache[search_id]["timestamp"] = time.time()

            start_idx = (page - 1) * items_per_page
            end_idx = min(start_idx + items_per_page, total_items)

            # Get current page items
            current_page_items = cached_results[start_idx:end_idx]

            # Format the paginated results
            query_info = cache_entry.get("query", "unknown")
            response_text = []
            response_text.append(
                f"Search Results for '{query_info}' (Page {page}/{total_pages}, showing items {start_idx+1}-{end_idx} of {total_items})"
            )

            # Add embedding note if it exists in the cache
            embedding_note = cache_entry.get("embedding_note")
            if embedding_note:
                response_text.append(embedding_note)

            # Format each item based on whether it's a regular result or an embedding result
            if len(current_page_items) > 0:
                if (
                    isinstance(current_page_items[0], dict)
                    and "video_id" in current_page_items[0]
                ):
                    response_text.extend(
                        format_video_info(video) for video in current_page_items
                    )
                else:
                    response_text.extend(current_page_items)
            else:
                response_text.append("No items to display on this page.")

            # Add pagination info with navigation options
            pagination_info = []
            if page > 1:
                pagination_info.append(
                    f"Previous page: call search-remote-videos with search_id='{search_id}' and page={page-1}"
                )

            has_more = page < total_pages
            if has_more:
                pagination_info.append(
                    f"Next page: call search-remote-videos with search_id='{search_id}' and page={page+1}"
                )

            if pagination_info:
                response_text.append("\nNavigation options:")
                response_text.extend(pagination_info)

            if not has_more:
                response_text.append("\nEnd of results.")

            return [
                types.TextContent(
                    type="text",
                    text="\n".join(response_text),
                )
            ]

        # This is a new search request
        logging.info(f"search-remote-videos received arguments: {arguments}")
        query = arguments.get("query")
        limit = arguments.get("limit", 10)
        project_id = arguments.get("project_id")
        tags = arguments.get("tags", None)
        duration_min = arguments.get("duration_min", None)
        duration_max = arguments.get("duration_max", None)
        created_after = arguments.get("created_after", None)
        created_before = arguments.get("created_before", None)
        include_segments = arguments.get("include_segments", True)
        include_related = arguments.get("include_related", False)

        # Validate that at least one query type is provided
        if not query and not tags:
            raise ValueError("At least one query or tag must be provided")

        # Perform the main search with all parameters
        if tags:
            # Tags may arrive as a JSON-encoded string or already as a list
            parsed_tags = json.loads(tags) if isinstance(tags, str) else tags
            search_params = {
                "limit": limit,
                "include_segments": include_segments,
                "include_related": include_related,
                "tags": parsed_tags,
                "duration_min": duration_min,
                "duration_max": duration_max,
                "created_after": created_after,
                "created_before": created_before,
            }
        else:
            search_params = {
                "limit": limit,
                "include_segments": include_segments,
                "include_related": include_related,
                "duration_min": duration_min,
                "duration_max": duration_max,
                "created_after": created_after,
                "created_before": created_before,
            }

        # Add optional parameters
        if query:
            search_params["query"] = query
        if project_id:
            # Convert UUID to string if it's not already a string
            search_params["project_id"] = str(project_id)

        embedding_results = []
        embedding_search_formatted = []
        embedding_note = None

        # If we have a text query, try embedding search but fallback to regular search if model is still loading
        if query:
            try:
                embeddings = model_loader.encode_text(query)
                response = model_loader.post_embeddings(
                    embeddings,
                    "https://api.video-jungle.com/video-file/embedding-search",
                    headers={
                        "Content-Type": "application/json",
                        "X-API-KEY": VJ_API_KEY,
                    },
                )

                logging.info(f"Response is: {response.json()}")
                if response.status_code != 200:
                    raise RuntimeError(f"Error searching for videos: {response.text}")

                embedding_results = response.json()
                embedding_search_formatted = [
                    format_single_video(video) for video in embedding_results
                ]
            except Exception as e:
                if "still loading" in str(e):
                    logging.warning(
                        "Embedding model still loading, falling back to text-only search"
                    )
                    embedding_results = []
                    embedding_search_formatted = []
                    # Add note that will be displayed to the user
                    embedding_note = "Note: Embedding-based semantic search is still initializing. Only text-based search results are shown. Please try again later for more accurate semantic search results."
                else:
                    # For other errors, log and continue with regular search
                    logging.error(f"Error in embedding search: {e}")
                    embedding_results = []
                    embedding_search_formatted = []

        # Get regular search results
        logging.info(
            f"Search params being passed to vj.video_files.search: {search_params}"
        )
        logging.info(f"VJ client: {vj}, API key present: {bool(VJ_API_KEY)}")
        try:
            videos = vj.video_files.search(**search_params)
            logging.info(f"Search returned {len(videos)} videos")
            if videos:
                logging.info(f"First video: {videos[0]}")
        except Exception as e:
            logging.error(f"Error in vj.video_files.search: {e}")
            videos = []
        logging.info(f"num videos are: {len(videos)}")

        # If no results found, return a helpful message
        if len(videos) == 0 and not embedding_results:
            return [
                types.TextContent(
                    type="text",
                    text=f"No videos found matching query '{query}' with the specified filters. Try broadening your search criteria.",
                )
            ]

        # If only a few results, return them directly without pagination
        if 1 <= len(videos) <= 3 and not embedding_results:
            return [
                types.TextContent(
                    type="text",
                    text=format_video_info_long(video),
                )
                for video in videos
            ]

        # For larger result sets, set up pagination
        formatted_videos = [format_video_info(video) for video in videos]

        # Store the results in the cache for pagination
        new_search_id = str(uuid.uuid4())

        all_results = []
        if query and embedding_results:
            # Store both types of results
            all_results = formatted_videos + embedding_search_formatted
        else:
            all_results = formatted_videos

        # Store results with timestamp and embedding note if present
        _search_result_cache[new_search_id] = {
            "results": all_results,
            "timestamp": time.time(),
            "query": query or "tag-search",
            "embedding_note": embedding_note,
        }

        # Calculate pagination info
        total_items = len(all_results)
        total_pages = (total_items + items_per_page - 1) // items_per_page

        # Format the first page results
        response_text = []
        query_display = query or "tag search"
        response_text.append(
            f"Search Results for '{query_display}' (Page 1/{total_pages}, showing items 1-{min(items_per_page, total_items)} of {total_items})"
        )

        # Add note about embedding search if it was skipped due to model loading
        if embedding_note:
            response_text.append(embedding_note)

        # Show first page items
        first_page_items = all_results[:items_per_page]
        if first_page_items:
            response_text.extend(first_page_items)
        else:
            response_text.append("No results found matching your query.")

        # Add pagination info
        has_more = total_pages > 1
        if has_more:
            response_text.append("\nNavigation options:")
            response_text.append(
                f"Next page: call search-remote-videos with search_id='{new_search_id}' and page=2"
            )
            response_text.append(
                "\nTip: You can control items per page with the items_per_page parameter (default: 5, max: 20)"
            )
        else:
            response_text.append("\nEnd of results.")

        return [
            types.TextContent(
                type="text",
                text="\n".join(response_text),
            )
        ]

    if name == "search-local-videos" and arguments:
        if not os.environ.get("LOAD_PHOTOS_DB"):
            raise ValueError(
                "You must set the LOAD_PHOTOS_DB environment variable to True to use this tool"
            )

        keyword = arguments.get("keyword")
        if not keyword:
            raise ValueError("Missing keyword")
        start_date = None
        end_date = None

        if arguments.get("start_date") and arguments.get("end_date"):
            start_date = arguments.get("start_date")
            end_date = arguments.get("end_date")

        try:
            db = photos_loader.db
            videos = get_videos_by_keyword(db, keyword, start_date, end_date)
            return [
                types.TextContent(
                    type="text",
                    text=(
                        f"Number of Videos Returned: {len(videos)}. Here are the first 100 results: \n{videos[:100]}"
                    ),
                )
            ]
        except Exception as e:
            raise RuntimeError(f"Local Photos database not yet initialized: {e}") from e

    if name == "generate-edit-from-videos" and arguments:
        edit = arguments.get("edit")
        project = arguments.get("project_id")
        name = arguments.get("name")  # type: ignore
        open_editor = arguments.get("open_editor")
        resolution = arguments.get("resolution")
        audio_asset = arguments.get("audio_asset")
        # Accept only vertical_crop from agents; map to API field later
        vertical_crop = arguments.get("vertical_crop")
        if isinstance(vertical_crop, bool):
            vertical_crop = "standard" if vertical_crop else None
        subtitles = arguments.get("subtitles", True)
        created = False

        logging.info(f"edit is: {edit} and the type is: {type(edit)}")
        if open_editor is None:
            open_editor = True

        if not edit:
            raise ValueError("Missing edit")
        if not project:
            raise ValueError("Missing project")
        if not resolution:
            resolution = "1080x1920"
        if not edit_name:
            raise ValueError("Missing name for edit")
        if resolution == "1080p":
            resolution = "1920x1080"
        elif resolution == "720p":
            resolution = "1280x720"

        try:
            w, h = resolution.split("x")
            _ = f"{int(w)}x{int(h)}"
        except Exception as e:
            raise ValueError(
                f"Resolution must be in the format 'widthxheight' where width and height are integers: {e}"
            )

        updated_edit = []
        for cut in edit:
            # Get the audio level for this clip (default to 0.5)
            audio_level_value = "0.5"
            if "audio_levels" in cut and len(cut["audio_levels"]) > 0:
                audio_level_value = cut["audio_levels"][0].get("audio_level", "0.5")

            clip_data = {
                "video_id": cut["video_id"],
                "video_start_time": cut["video_start_time"],
                "video_end_time": cut["video_end_time"],
                "type": cut["type"],
                "audio_levels": [
                    {
                        "audio_level": audio_level_value,
                        "start_time": cut["video_start_time"],
                        "end_time": cut["video_end_time"],
                    }
                ],
            }

            # Add crop settings if provided
            if "crop" in cut and cut["crop"]:
                clip_data["crop"] = cut["crop"]

            updated_edit.append(clip_data)

        logging.info(f"updated edit is: {updated_edit}")

        # Process audio asset if provided
        audio_overlay = []
        if audio_asset:
            audio_overlay_item = {
                "audio_id": audio_asset.get("audio_id", ""),
                "type": audio_asset.get("type", "mp3"),
                "filename": audio_asset.get("filename", ""),
                "audio_start_time": audio_asset.get("audio_start_time", "00:00:00.000"),
                "audio_end_time": audio_asset.get("audio_end_time", "00:00:00.000"),
                "url": audio_asset.get("url", ""),
                "audio_levels": audio_asset.get("audio_levels", []),
            }
            audio_overlay.append(audio_overlay_item)
            logging.info(f"Audio overlay configured: {audio_overlay_item}")
        # Do not force subtitles off; backend can use default audio if no overlay
        json_edit = {
            "video_edit_version": "1.0",
            "video_output_format": "mp4",
            "video_output_resolution": resolution,
            "video_output_fps": 60.0,
            "name": name,
            "video_output_filename": "output_video.mp4",
            "audio_overlay": audio_overlay,
            "video_series_sequential": updated_edit,
            "skip_rendering": True,
            "subtitle_from_audio_overlay": subtitles,
        }

        # Forward as API field
        if vertical_crop:
            json_edit["auto_vertical_crop"] = vertical_crop

        try:
            proj = vj.projects.get(project)
        except Exception as e:
            logging.info(f"project not found, creating new project because {e}")
            proj = vj.projects.create(
                name=project, description="Claude generated project"
            )
            project = proj.id
            created = True

        logging.info(f"video edit is: {json_edit}")

        edit = vj.projects.render_edit(project, json_edit)

        webbrowser.open(
            f"https://app.video-jungle.com/projects/{proj.id}/edits/{edit['edit_id']}"
        )
        global BROWSER_OPEN
        BROWSER_OPEN = True
        if created:
            # we created a new project so let the user / LLM know
            return [
                types.TextContent(
                    type="text",
                    text=f"Created new project {proj.name} with id '{proj.id}' with the new edit id: {edit['edit_id']} viewable at this url: https://app.video-jungle.com/projects/{proj.id}/edits/{edit['edit_id']}",
                )
            ]

        return [
            types.TextContent(
                type="text",
                text=f"Generated edit in existing project {proj.name} with id '{proj.id}' with the new edit id: {edit['edit_id']} viewable at this url: https://app.video-jungle.com/projects/{proj.id}/edits/{edit['edit_id']}",
            )
        ]

    if name == "generate-edit-from-single-video" and arguments:
        edit = arguments.get("edit")
        project = arguments.get("project_id")
        video_id = arguments.get("video_id")

        resolution = arguments.get("resolution")
        # Accept only vertical_crop from agents; map to API field later
        vertical_crop = arguments.get("vertical_crop")
        if isinstance(vertical_crop, bool):
            vertical_crop = "standard" if vertical_crop else None
        # Subtitles flag (backend will use default audio if overlay absent)
        subtitles = arguments.get("subtitles", True)
        created = False

        logging.info(f"edit is: {edit} and the type is: {type(edit)}")

        if not edit:
            raise ValueError("Missing edit")
        if not project:
            raise ValueError("Missing project")
        if not video_id:
            raise ValueError("Missing video_id")
        if not resolution:
            resolution = "1080x1920"
        if resolution == "1080p":
            resolution = "1920x1080"
        elif resolution == "720p":
            resolution = "1280x720"

        try:
            w, h = resolution.split("x")
            _ = f"{int(w)}x{int(h)}"
        except Exception as e:
            raise ValueError(
                f"Resolution must be in the format 'widthxheight' where width and height are integers: {e}"
            )
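        # Here the edit argument is a list of cut dicts; only the start/end
        # times are read. A minimal illustrative example (timestamps made up):
        # edit = [
        #     {"video_start_time": "00:00:00.000", "video_end_time": "00:00:04.500"},
        #     {"video_start_time": "00:00:10.000", "video_end_time": "00:00:12.000"},
        # ]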

        try:
            updated_edit = [
                {
                    "video_id": video_id,
                    "video_start_time": cut["video_start_time"],
                    "video_end_time": cut["video_end_time"],
                    "type": "video-file",
                    "audio_levels": [],
                }
                for cut in edit
            ]
        except Exception as e:
            raise ValueError(f"Error updating edit: {e}")

        logging.info(f"updated edit is: {updated_edit}")

        json_edit = {
            "video_edit_version": "1.0",
            "video_output_format": "mp4",
            "video_output_resolution": resolution,
            "video_output_fps": 60.0,
            "video_output_filename": "output_video.mp4",
            "audio_overlay": [],  # TODO: add this back in
            "video_series_sequential": updated_edit,
            "subtitle_from_audio_overlay": subtitles,
        }

        # Forward as API field
        if vertical_crop:
            json_edit["auto_vertical_crop"] = vertical_crop

        try:
            proj = vj.projects.get(project)
        except Exception:
            proj = vj.projects.create(
                name=project, description="Claude generated project"
            )
            project = proj.id
            created = True

        logging.info(f"video edit is: {json_edit}")
        try:
            edit = vj.projects.render_edit(project, json_edit)
        except Exception as e:
            logging.error(f"Error rendering edit: {e}")
            # Re-raise: otherwise `edit` would still be the raw cut list and the
            # edit["edit_id"] lookups below would fail with a confusing TypeError.
            raise RuntimeError(f"Failed to render edit: {e}") from e
        logging.info(f"edit is: {edit}")
        if created:
            # we created a new project so let the user / LLM know
            logging.info(f"created new project {proj.name} and created edit {edit}")
            return [
                types.TextContent(
                    type="text",
                    text=f"Created new project {proj.name} with project id '{proj.id}' viewable at this url: https://app.video-jungle.com/projects/{proj.id}/edits/{edit['edit_id']}",
                )
            ]

        return [
            types.TextContent(
                type="text",
                text=f"Generated edit with id '{edit['edit_id']}' in project {proj.name} with project id '{proj.id}' viewable at this url: https://app.video-jungle.com/projects/{proj.id}/edits/{edit['edit_id']}",
            )
        ]

    if name == "update-video-edit" and arguments:
        project_id = arguments.get("project_id")
        edit_id = arguments.get("edit_id")
        edit_name = arguments.get("name")
        description = arguments.get("description")
        video_output_format = arguments.get("video_output_format")
        video_output_resolution = arguments.get("video_output_resolution")
        video_output_fps = arguments.get("video_output_fps")
        video_series_sequential = arguments.get("video_series_sequential")
        audio_overlay = arguments.get("audio_overlay")
        rendered = arguments.get("rendered")
        subtitles = arguments.get("subtitles")
        # Accept only vertical_crop from agents; map to API field later
        vertical_crop = arguments.get("vertical_crop")
        if isinstance(vertical_crop, bool):
            vertical_crop = "standard" if vertical_crop else None

        # Validate required parameters
        if not project_id:
            raise ValueError("Missing project_id")
        if not edit_id:
            raise ValueError("Missing edit_id")

        # Process resolution format like in create function
        if video_output_resolution:
            if video_output_resolution == "1080p":
                video_output_resolution = "1920x1080"
            elif video_output_resolution == "720p":
                video_output_resolution = "1280x720"

            # Validate resolution format
            try:
                w, h = video_output_resolution.split("x")
                _ = f"{int(w)}x{int(h)}"
            except Exception as e:
                raise ValueError(
                    f"Resolution must be in the format 'widthxheight' where width and height are integers: {e}"
                )

        # Try to get the existing project
        try:
            proj = vj.projects.get(project_id)
        except Exception as e:
            raise ValueError(f"Project with ID {project_id} not found: {e}")

        # Try to get the existing edit
        try:
            existing_edit = vj.projects.get_edit(project_id, edit_id)
        except Exception as e:
            raise ValueError(
                f"Edit with ID {edit_id} not found in project {project_id}: {e}"
            )

        # Process video clips if provided
        updated_video_series = None
        if video_series_sequential:
            updated_video_series = []
            for clip in video_series_sequential:
                # Get the audio level for this clip (default to 0.5)
                audio_level_value = "0.5"
                if "audio_levels" in clip and len(clip["audio_levels"]) > 0:
                    audio_level_value = clip["audio_levels"][0].get(
                        "audio_level", "0.5"
                    )

                clip_data = {
                    "video_id": clip["video_id"],
                    "video_start_time": clip["video_start_time"],
                    "video_end_time": clip["video_end_time"],
                    "type": clip["type"],
                    "audio_levels": [
                        {
                            "audio_level": audio_level_value,
                            "start_time": clip["video_start_time"],
                            "end_time": clip["video_end_time"],
                        }
                    ],
                }

                # Add crop settings if provided
                if "crop" in clip and clip["crop"]:
                    clip_data["crop"] = clip["crop"]

                updated_video_series.append(clip_data)

        # Build the update payload incrementally, including only the fields provided
        update_json = {}

        update_json["video_edit_version"] = "1.0"

        if edit_name:
            update_json["name"] = edit_name
        if description:
            update_json["description"] = description
        if video_output_format:
            update_json["video_output_format"] = video_output_format
        if video_output_resolution:
            update_json["video_output_resolution"] = video_output_resolution
        if video_output_fps is not None:
            update_json["video_output_fps"] = float(video_output_fps)
        if updated_video_series is not None:
            # Cast to a list to ensure proper typing
            update_json["video_series_sequential"] = list(updated_video_series)
        if audio_overlay is not None:
            # Cast to a list to ensure proper typing
            update_json["audio_overlay"] = list(audio_overlay) if audio_overlay else []
        if subtitles is not None:
            update_json["subtitle_from_audio_overlay"] = bool(subtitles)

        # Skip rendering by default (matching the create flow); render only
        # when explicitly requested
        update_json["skip_rendering"] = rendered is not True

        # Forward as API field
        if vertical_crop:
            update_json["auto_vertical_crop"] = vertical_crop

        logging.info(f"Updating edit {edit_id} with: {update_json}")

        # Call the API to update the edit
        updated_edit = vj.projects.update_edit(project_id, edit_id, update_json)

        # Optionally open the browser to the updated edit
        if not BROWSER_OPEN:
            webbrowser.open(
                f"https://app.video-jungle.com/projects/{project_id}/edits/{edit_id}"
            )
            BROWSER_OPEN = True

        return [
            types.TextContent(
                type="text",
                text=f"Updated edit {edit_id} in project {proj.name} at url https://app.video-jungle.com/projects/{project_id}/edits/{edit_id} with changes: {update_json}",
            )
        ]

    if name == "get-project-assets" and arguments:
        # Extract arguments
        project_id = arguments.get("project_id")
        page = arguments.get("page", 1)
        items_per_page = arguments.get("items_per_page", 10)
        asset_cache_id = arguments.get("asset_cache_id")
        asset_types = arguments.get("asset_types", ["user", "video", "image", "audio"])
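        # e.g. asset_types=["video"] restricts the listing to video assets only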

        # Validate required arguments
        if not project_id:
            raise ValueError("Missing project_id parameter")

        # Run cache cleanup
        cleanup_cache()

        # Check if this is a pagination request using an existing cache
        if asset_cache_id and asset_cache_id in _project_assets_cache:
            cache_entry = _project_assets_cache[asset_cache_id]
            cached_assets = cache_entry["assets"]
            project_info = cache_entry.get("project_info", {})

            # Update timestamp on access
            _project_assets_cache[asset_cache_id]["timestamp"] = time.time()

            # Calculate pagination
            total_items = len(cached_assets)
            total_pages = (total_items + items_per_page - 1) // items_per_page

            # Calculate current page items
            start_idx = (page - 1) * items_per_page
            end_idx = min(start_idx + items_per_page, total_items)
            current_page_items = cached_assets[start_idx:end_idx]
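            # e.g. page=2 with items_per_page=10 -> start_idx=10, end_idx=min(20, total_items)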

            # Format the response
            response_text = []

            # Add project info header
            project_name = project_info.get("name", "Project")
            project_description = project_info.get("description", "")

            response_text.append(f"Project: {project_name}")
            if project_description:
                response_text.append(f"Description: {project_description}")

            response_text.append(
                f"\nAssets (Page {page}/{total_pages}, showing items {start_idx+1}-{end_idx} of {total_items}):"
            )

            # Format each asset
            if current_page_items:
                formatted_assets = [
                    format_asset_info(asset) for asset in current_page_items
                ]
                response_text.extend(formatted_assets)
            else:
                response_text.append("No assets to display on this page.")

            # Add pagination info
            navigation_options = []
            if page > 1:
                navigation_options.append(
                    f"Previous page: call get-project-assets with asset_cache_id='{asset_cache_id}' and page={page-1}"
                )

            has_more = page < total_pages
            if has_more:
                navigation_options.append(
                    f"Next page: call get-project-assets with asset_cache_id='{asset_cache_id}' and page={page+1}"
                )

            if navigation_options:
                response_text.append("\nNavigation options:")
                response_text.extend(navigation_options)

            if not has_more:
                response_text.append("\nEnd of results.")

            return [types.TextContent(type="text", text="\n".join(response_text))]

        # This is a new request - get the project and its assets
        try:
            # Fetch project data
            project = vj.projects.get(project_id)
            logging.info(f"Retrieved project: {project.name} (ID: {project_id})")

            # Get project data as a dictionary so we can extract assets
            project_data = project.model_dump()
            logging.info(f"Project data: {project_data}")

            # Assets are exposed directly under the "assets" key of the project payload
            all_assets = project_data.get("assets", [])
            logging.info(f"Found {len(all_assets)} assets in project")

            # Filter assets by asset_type if specified
            project_assets = []
            for asset in all_assets:
                if not asset_types or asset.get("asset_type") in asset_types:
                    project_assets.append(asset)

            logging.info(
                f"After filtering by types {asset_types}: {len(project_assets)} assets remaining"
            )
            # If no assets found, provide a helpful message
            if not project_assets:
                return [
                    types.TextContent(
                        type="text",
                        text=f"Project {project.name} (ID: {project_id}) contains no assets of types: {', '.join(asset_types)}.",
                    )
                ]

            # Store results in cache for pagination
            new_cache_id = str(uuid.uuid4())
            _project_assets_cache[new_cache_id] = {
                "assets": project_assets,
                "project_info": {
                    "id": project_id,
                    "name": project.name,
                    "description": project.description,
                },
                "timestamp": time.time(),
            }

            # Calculate pagination
            total_items = len(project_assets)
            total_pages = (total_items + items_per_page - 1) // items_per_page

            # Get first page
            first_page_items = project_assets[:items_per_page]

            # Format the response
            response_text = []

            # Add project info header
            response_text.append(f"Project: {project.name}")
            if project.description:
                response_text.append(f"Description: {project.description}")

            response_text.append(
                f"\nAssets (Page 1/{total_pages}, showing items 1-{min(items_per_page, total_items)} of {total_items}):"
            )

            # Format assets
            if first_page_items:
                formatted_assets = [
                    format_asset_info(asset) for asset in first_page_items
                ]
                response_text.extend(formatted_assets)
            else:
                response_text.append("No assets to display.")

            # Add pagination info
            has_more = total_pages > 1
            if has_more:
                response_text.append("\nNavigation options:")
                response_text.append(
                    f"Next page: call get-project-assets with asset_cache_id='{new_cache_id}' and page=2"
                )
                response_text.append(
                    "\nTip: You can control items per page with the items_per_page parameter (default: 10, max: 50)"
                )
            else:
                response_text.append("\nEnd of results.")

            return [types.TextContent(type="text", text="\n".join(response_text))]

        except Exception as e:
            logging.error(f"Error fetching project assets: {e}")
            raise ValueError(f"Error retrieving project assets: {str(e)}")

    if (
        name
        in [
            "create-video-bar-chart-from-two-axis-data",
            "create-video-line-chart-from-two-axis-data",
        ]
        and arguments
    ):
        x_values = arguments.get("x_values")
        y_values = arguments.get("y_values")
        x_label = arguments.get("x_label")
        y_label = arguments.get("y_label")
        title = arguments.get("title")
        filename = arguments.get("filename")

        if not x_values or not y_values or not x_label or not y_label or not title:
            raise ValueError("Missing required arguments")
        if not filename:
            if name == "create-video-bar-chart-from-two-axis-data":
                filename = "bar_chart.mp4"
            elif name == "create-video-line-chart-from-two-axis-data":
                filename = "line_chart.mp4"
            else:
                raise ValueError("Invalid tool name")

        y_axis_safe = validate_y_values(y_values)
        if not y_axis_safe:
            raise ValueError("Y values are not valid")

        # Validate data and prepare for chart generation
        try:
            # Ensure output directory exists
            output_dir = os.path.join(os.getcwd(), "media", "videos", "720p30")
            os.makedirs(output_dir, exist_ok=True)

            # Prepare data for chart generation
            data = {
                "x_values": x_values,
                "y_values": y_values,
                "x_label": x_label,
                "y_label": y_label,
                "title": title,
                "filename": filename,
            }

            # Write data to temporary file
            chart_data_path = os.path.join(os.getcwd(), "chart_data.json")
            with open(chart_data_path, "w") as f:
                json.dump(data, f, indent=4)
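            # chart_data.json ends up looking like this (illustrative values):
            # {"x_values": ["Q1", "Q2"], "y_values": [10, 20],
            #  "x_label": "Quarter", "y_label": "Revenue",
            #  "title": "Revenue by Quarter", "filename": "bar_chart.mp4"}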

            file_path = os.path.join(output_dir, filename)

            # Determine chart type
            chart_type = (
                "bar" if name == "create-video-bar-chart-from-two-axis-data" else "line"
            )

            # Get the script path
            script_path = os.path.join(os.path.dirname(__file__), "generate_charts.py")

            # Run the chart generation script
            env = os.environ.copy()
            env["PYTHONPATH"] = os.getcwd()

            # Use subprocess.run with proper error handling instead of Popen
            result = subprocess.run(
                ["uv", "run", "python", script_path, chart_data_path, chart_type],
                capture_output=True,
                text=True,
                env=env,
                timeout=60,  # 60 second timeout
            )

            if result.returncode != 0:
                error_msg = f"Chart generation failed: {result.stderr}"
                logging.error(error_msg)
                raise RuntimeError(error_msg)

            # Clean up temporary file
            try:
                os.remove(chart_data_path)
            except OSError:
                pass

            chart_type_display = "Bar chart" if chart_type == "bar" else "Line chart"
            return [
                types.TextContent(
                    type="text",
                    text=f"{chart_type_display} video generation started.\nOutput will be saved to {file_path}",
                )
            ]

        except subprocess.TimeoutExpired:
            logging.error("Chart generation timed out")
            raise RuntimeError("Chart generation timed out after 60 seconds")
        except Exception as e:
            logging.error(f"Error generating chart: {str(e)}")
            raise RuntimeError(f"Failed to generate chart: {str(e)}")


async def main():
    # Run the server using stdin/stdout streams
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="video-jungle-mcp",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )
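
# Note: this coroutine is expected to be driven from the package entry point
# (for example, asyncio.run(main()) in __init__.py); that hookup is an
# assumption based on the usual MCP server layout and is not shown in this file.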

```