# Directory Structure ``` ├── .gitignore ├── .npmrc ├── eslint.config.js ├── images │ ├── 2024-12-05-flux-shuttle.png │ ├── 2024-12-08-mcp-omni-artifact.png │ ├── 2024-12-08-mcp-parler.png │ ├── 2024-12-09-bowie.png │ ├── 2024-12-09-flower.png │ ├── 2024-12-09-qwen-reason.png │ └── 2024-12-09-transcribe.png ├── LICENSE ├── package-lock.json ├── package.json ├── README.md ├── scripts │ └── generate-version.js ├── src │ ├── config.ts │ ├── content_converter.ts │ ├── endpoint_wrapper.ts │ ├── gradio_api.ts │ ├── gradio_convert.ts │ ├── index.ts │ ├── mime_types.ts │ ├── progress_notifier.ts │ ├── types.ts │ └── working_directory.ts ├── test │ ├── endpoint_wrapper.test.ts │ ├── parameter_test.json │ └── utils.test.ts └── tsconfig.json ``` # Files -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- ``` node_modules/ build/ *.log .env* src/version.ts ``` -------------------------------------------------------------------------------- /.npmrc: -------------------------------------------------------------------------------- ``` save-exact=true package-lock=true engine-strict=true ``` -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- ```markdown # mcp-hfspace MCP Server 🤗 Read the introduction here [llmindset.co.uk/resources/mcp-hfspace/](https://llmindset.co.uk/resources/mcp-hfspace/) Connect to [Hugging Face Spaces](https://huggingface.co/spaces) with minimal setup needed - simply add your spaces and go! By default, it connects to `evalstate/FLUX.1-schnell` providing Image Generation capabilities to Claude Desktop.  ## Installation NPM Package is `@llmindset/mcp-hfspace`.
Install a recent version of [NodeJS](https://nodejs.org/en/download) for your platform, then add the following to the `mcpServers` section of your `claude_desktop_config.json` file: ```json "mcp-hfspace": { "command": "npx", "args": [ "-y", "@llmindset/mcp-hfspace" ] } ``` Please make sure you are using Claude Desktop 0.78 or greater. This will get you started with an Image Generator. ### Basic setup Supply a list of HuggingFace spaces in the arguments. mcp-hfspace will find the most appropriate endpoint and automatically configure it for usage. An example `claude_desktop_config.json` is supplied [below](#installation). By default the current working directory is used for file upload/download. On Windows this is a read/write folder at `\users\<username>\AppData\Roaming\Claude\<version.number>\`, and on MacOS it is the read-only root: `/`. It is recommended to override this and set a Working Directory for handling the upload and download of images and other file-based content. Specify either the `--work-dir=/your_directory` argument or the `MCP_HF_WORK_DIR` environment variable. An example configuration for using a modern image generator, vision model and text-to-speech, with a working directory set, is below: ```json "mcp-hfspace": { "command": "npx", "args": [ "-y", "@llmindset/mcp-hfspace", "--work-dir=/Users/evalstate/mcp-store", "shuttleai/shuttle-jaguar", "styletts2/styletts2", "Qwen/QVQ-72B-preview" ] } ``` To use private spaces, supply your Hugging Face Token with either the `--hf-token=hf_...` argument or the `HF_TOKEN` environment variable. It's possible to run multiple server instances to use different working directories and tokens if needed. ## File Handling and Claude Desktop Mode By default, the Server operates in _Claude Desktop Mode_. In this mode, Images are returned in the tool responses, while other files are saved in the working folder, and their file path is returned as a message.
This will usually give the best experience if using Claude Desktop as the client. URLs can also be supplied as inputs: the content gets passed to the Space. There is an "Available Resources" prompt that gives Claude the available files and mime types from your working directory. This is currently the best way to manage files. ### Example 1 - Image Generation (Download Image / Claude Vision) We'll use Claude to compare images created by `shuttleai/shuttle-3.1-aesthetic` and `FLUX.1-schnell`. The images get saved to the Work Directory, as well as included in Claude's context window - so Claude can use its vision capabilities.  ### Example 2 - Vision Model (Upload Image) We'll use `merve/paligemma2-vqav2` [space link](https://huggingface.co/spaces/merve/paligemma2-vqav2) to query an image. In this case, we specify the filename which is available in the Working Directory: we don't want to upload the Image directly to Claude's context window. So, we can prompt Claude: `use paligemma to find out who is in "test_gemma.jpg"` -> `Text Output: david bowie`  _If you are uploading something to Claude's context use the Paperclip Attachment button, otherwise specify the filename for the Server to send directly._ We can also supply a URL. For example: `use paligemma to detect humans in https://e3.365dm.com/24/12/1600x900/skynews-taylor-swift-eras-tour_6771083.jpg?20241209000914` -> `One person is detected in the image - Taylor Swift on stage.` ### Example 3 - Text-to-Speech (Download Audio) In _Claude Desktop Mode_, the audio file is saved in the WORK_DIR, and Claude is notified of the creation. If not in desktop mode, the file is returned as a base64 encoded resource to the Client (useful if it supports embedded Audio attachments).  ### Example 4 - Speech-to-Text (Upload Audio) Here, we use `hf-audio/whisper-large-v3-turbo` to transcribe some audio, and make it available to Claude. 
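The vision and transcription Spaces from Examples 2 and 4 can be added to the configuration in the same way as the earlier examples (a sketch - the working directory path is illustrative, substitute your own):

```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "--work-dir=/Users/evalstate/mcp-store",
    "merve/paligemma2-vqav2",
    "hf-audio/whisper-large-v3-turbo"
  ]
}
```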
### Example 5 - Image-to-Image In this example, we specify the filename for `microsoft/OmniParser` to use, and get back an annotated Image and 2 separate pieces of text: descriptions and coordinates. The prompt used was `use omniparser to analyse ./screenshot.png` and `use the analysis to produce an artifact that reproduces that screen`. `DawnC/PawMatchAI` is also good at this.  ### Example 6 - Chat In this example, Claude sets a number of reasoning puzzles for Qwen, and asks follow-up questions for clarification.  ### Specifying API Endpoint If needed, you can specify a particular API Endpoint by appending it to the space name. So rather than passing in `Qwen/Qwen2.5-72B-Instruct` you would use `Qwen/Qwen2.5-72B-Instruct/model_chat`. ### Claude Desktop Mode This can be disabled with the option `--desktop-mode=false` or the environment variable `CLAUDE_DESKTOP_MODE=false`. In this case, content is returned as an embedded Base64 encoded Resource. ## Recommended Spaces Some recommended spaces to try: ### Image Generation - shuttleai/shuttle-3.1-aesthetic - black-forest-labs/FLUX.1-schnell - yanze/PuLID-FLUX - Inspyrenet-Rembg (Background Removal) - diyism/Datou1111-shou_xin - [Beautiful Pencil Drawings](https://x.com/ClementDelangue/status/1867318931502895358) ### Chat - Qwen/Qwen2.5-72B-Instruct - prithivMLmods/Mistral-7B-Instruct-v0.3 ### Text-to-speech / Audio Generation - fantaxy/Sound-AI-SFX - parler-tts/parler_tts ### Speech-to-text - hf-audio/whisper-large-v3-turbo - (the openai models use unnamed parameters so will not work) ### Text-to-music - haoheliu/audioldm2-text2audio-text2music ### Vision Tasks - microsoft/OmniParser - merve/paligemma2-vqav2 - merve/paligemma-doc - DawnC/PawMatchAI - DawnC/PawMatchAI/on_find_match_click - for interactive dog recommendations ## Other Features ### Prompts Prompts for each Space are generated, and provide an opportunity to input the endpoint's fields. Bear in mind that often Spaces aren't configured with particularly helpful labels etc.
Claude is actually very good at figuring this out, and the Tool description is quite rich (but not visible in Claude Desktop). ### Resources A list of files in the WORK_DIR is returned, and as a convenience each name is returned as "Use the file..." text. If you want to add something to Claude's context, use the paperclip - otherwise specify the filename for the MCP Server. Claude does not support transmitting resources from within Context. ### Private Spaces Private Spaces are supported with a HuggingFace token. The Token is used to download and save generated content. ### Using Claude Desktop To use with Claude Desktop, add the server config: On MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json` On Windows: `%APPDATA%/Claude/claude_desktop_config.json` ```json { "mcpServers": { "mcp-hfspace": { "command": "npx", "args": [ "-y", "@llmindset/mcp-hfspace", "--work-dir=~/mcp-files/ or x:/temp/mcp-files/", "--hf-token=hf_{optional token}", "Qwen/Qwen2-72B-Instruct", "black-forest-labs/FLUX.1-schnell", "space/example/specific-endpoint" (... and so on) ] } } } ``` ## Known Issues and Limitations ### mcp-hfspace - Endpoints with unnamed parameters are unsupported for the moment. - Full translation from some complex Python types to suitable MCP formats is not yet supported. ### Claude Desktop - Claude Desktop 0.75 doesn't seem to respond to errors from the MCP Server, timing out instead. For persistent issues, use the MCP Inspector to get a better look at diagnosing what's going wrong. If something suddenly stops working, it's probably due to exhausting your HuggingFace ZeroGPU quota - try again after a short period, or set up your own Space for hosting. - Claude Desktop seems to use a hard timeout value of 60s, and doesn't appear to use Progress Notifications to manage UX or keep-alive. If you are using ZeroGPU spaces, large/heavy jobs may timeout. Check the WORK_DIR for results though; the MCP Server will still capture and save the result if it was produced.
- Claude Desktop's reporting of Server Status, logging etc. isn't great - use [@modelcontextprotocol/inspector](https://github.com/modelcontextprotocol/inspector) to help diagnose issues. ### HuggingFace Spaces - If ZeroGPU quotas are exhausted or queues are too long, try duplicating the space. If your job takes less than sixty seconds, you can usually change the function decorator `@spaces.GPU(duration=20)` in `app.py` to request less quota when running the job. - If you have a HuggingFace Pro account, please note that the Gradio API does not use your additional quota for ZeroGPU jobs - you will need to set an `X-IP-Token` header to achieve that. - If you have a private space and dedicated hardware, your HF_TOKEN will give you direct access to it - no quotas apply. I recommend this if you are using it for any kind of Production task. ## Third Party MCP Services <a href="https://glama.ai/mcp/servers/s57c80wvgq"><img width="380" height="200" src="https://glama.ai/mcp/servers/s57c80wvgq/badge" alt="mcp-hfspace MCP server" /></a> ``` -------------------------------------------------------------------------------- /src/types.ts: -------------------------------------------------------------------------------- ```typescript export type GradioOutput = { label: string; type: string; python_type: { type: string; description: string; }; component: string; description?: string; }; ``` -------------------------------------------------------------------------------- /tsconfig.json: -------------------------------------------------------------------------------- ```json { "compilerOptions": { "target": "ES2022", "module": "Node16", "moduleResolution": "Node16", "outDir": "./build", "rootDir": "./src", "strict": true, "esModuleInterop": true, "skipLibCheck": true, "forceConsistentCasingInFileNames": true, "types": ["node"] }, "include": ["src/**/*"], "exclude": ["node_modules"] } ``` -------------------------------------------------------------------------------- /eslint.config.js:
-------------------------------------------------------------------------------- ```javascript import globals from "globals"; import pluginJs from "@eslint/js"; import tseslint from "typescript-eslint"; /** @type {import('eslint').Linter.Config[]} */ export default [ {files: ["**/*.{js,mjs,cjs,ts}"]}, {files: ["**/*.js"], languageOptions: {sourceType: "commonjs"}}, {languageOptions: { globals: globals.browser }}, pluginJs.configs.recommended, ...tseslint.configs.recommended, ]; ``` -------------------------------------------------------------------------------- /scripts/generate-version.js: -------------------------------------------------------------------------------- ```javascript // scripts/generate-version.js import { readFileSync, writeFileSync } from 'fs'; import { fileURLToPath } from 'url'; import { dirname, join } from 'path'; const __filename = fileURLToPath(import.meta.url); const __dirname = dirname(__filename); const packageJson = JSON.parse( readFileSync(join(__dirname, '../package.json'), 'utf8') ); const content = `// Generated file - do not edit export const VERSION = '${packageJson.version}'; `; writeFileSync(join(__dirname, '../src/version.ts'), content); console.log(`Generated version.ts with version ${packageJson.version}`); ``` -------------------------------------------------------------------------------- /src/gradio_api.ts: -------------------------------------------------------------------------------- ```typescript // Just the types we need for the API structure - copied from Gradio client library export interface ApiParameter { label: string; parameter_name?: string; // Now optional parameter_has_default?: boolean; parameter_default?: unknown; type: string; python_type: { type: string; description?: string; }; component: string; example_input?: string; description?: string; } export interface ApiEndpoint { parameters: ApiParameter[]; returns: { label: string; type: string; python_type: { type: string; description: string; }; component: 
string; }[]; type: { generator: boolean; cancel: boolean; }; } export interface ApiStructure { named_endpoints: Record<string, ApiEndpoint>; unnamed_endpoints: Record<string, ApiEndpoint>; } export type ApiReturn = { label: string; type: string; python_type: { type: string; description: string; }; component: string; }; ``` -------------------------------------------------------------------------------- /src/config.ts: -------------------------------------------------------------------------------- ```typescript import minimist from "minimist"; import path from "path"; export interface Config { claudeDesktopMode: boolean; workDir: string; spacePaths: string[]; hfToken?: string; debug: boolean; } export const config = parseConfig(); export function parseConfig(): Config { const argv = minimist(process.argv.slice(2), { string: ["work-dir", "hf-token"], boolean: ["desktop-mode", "debug"], default: { "desktop-mode": process.env.CLAUDE_DESKTOP_MODE !== "false", "work-dir": process.env.MCP_HF_WORK_DIR || process.cwd(), "hf-token": process.env.HF_TOKEN, debug: false, }, "--": true, }); return { claudeDesktopMode: argv["desktop-mode"], workDir: path.resolve(argv["work-dir"]), hfToken: argv["hf-token"], debug: argv["debug"], spacePaths: (() => { const filtered = argv._.filter((arg) => arg.toString().trim().length > 0); return filtered.length > 0 ? 
filtered : ["evalstate/FLUX.1-schnell"]; })(), }; } ``` -------------------------------------------------------------------------------- /src/mime_types.ts: -------------------------------------------------------------------------------- ```typescript /** * Supported MIME types and related utilities * @packageDocumentation */ /** Known MIME types that should be handled as text */ export const textBasedMimeTypes = [ // Standard text formats "text/*", // Data interchange "application/json", "application/xml", "application/yaml", "application/javascript", "application/typescript", ] as readonly string[]; /** Supported document types */ export const documentMimeTypes = ["application/pdf"] as const; export const imageMimeTypes = [ "image/jpeg", "image/webp", "image/gif", "image/png", ]; /** All supported MIME types */ export const claudeSupportedMimeTypes = [ ...textBasedMimeTypes, ...documentMimeTypes, ...imageMimeTypes, ] as const; export const FALLBACK_MIME_TYPE = "application/octet-stream"; export function treatAsText(mimetype: string) { if (mimetype.startsWith("text/")) return true; if (textBasedMimeTypes.includes(mimetype)) return true; if (mimetype.indexOf("vnd.openxmlformats") > 0) return true; return false; } ``` -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- ```json { "name": "@llmindset/mcp-hfspace", "version": "0.5.0", "description": "MCP Server to connect to Hugging Face spaces. 
Simple configuration, Claude Desktop friendly.", "type": "module", "publishConfig": { "access": "public" }, "bin": { "mcp-hfspace": "./build/index.js" }, "files": [ "build" ], "repository": { "type": "git", "url": "git+https://github.com/evalstate/mcp-hfspace" }, "bugs": { "url": "https://github.com/evalstate/mcp-hfspace/issues" }, "engines": { "node": ">=18", "npm": ">=9" }, "scripts": { "clean": "rimraf build", "prebuild": "node scripts/generate-version.js", "build": "npm run lint:fix && npm run format:fix && npm run clean && npm run prebuild && tsc", "prepack": "npm run build", "lint": "eslint src/**/*.ts --max-warnings 0", "lint:fix": "eslint src/**/*.ts --fix", "format": "prettier --write \"src/**/*.ts\"", "format:fix": "prettier --write \"src/**/*.ts\"", "validate": "eslint src/**/*.ts && prettier --check \"src/**/*.ts\"", "watch": "tsc --watch", "inspector": "npx @modelcontextprotocol/inspector build/index.js", "test": "vitest", "test:watch": "vitest watch", "coverage": "vitest run --coverage" }, "dependencies": { "@gradio/client": "^1.8.0", "@modelcontextprotocol/sdk": "0.6.0", "mime": "^4.0.6", "minimist": "^1.2.8" }, "devDependencies": { "@eslint/js": "9.19.0", "@types/minimist": "^1.2.5", "@types/node": "^20.11.24", "@typescript-eslint/eslint-plugin": "latest", "@typescript-eslint/parser": "latest", "eslint": "9.19.0", "globals": "15.14.0", "prettier": "latest", "rimraf": "^5.0.1", "typescript": "^5.3.3", "typescript-eslint": "8.21.0", "vitest": "^2.1.8" } } ``` -------------------------------------------------------------------------------- /src/progress_notifier.ts: -------------------------------------------------------------------------------- ```typescript import { Status } from "@gradio/client"; import { Server } from "@modelcontextprotocol/sdk/server/index.js"; import type { ProgressNotification } from "@modelcontextprotocol/sdk/types.js"; export interface ProgressNotifier { notify(status: Status, progressToken: string | number): Promise<void>; } 
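// Illustrative usage (a sketch, not called from this file): the notifier is
// created once per server, then fed each status message from a Gradio job.
// `job` and `progressToken` are assumed to come from the Gradio submit call
// and the incoming MCP request respectively:
//
//   const notifier = createProgressNotifier(server);
//   for await (const message of job) {
//     if (message.type === "status") {
//       await notifier.notify(message, progressToken);
//     }
//   }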
export function createProgressNotifier(server: Server): ProgressNotifier { let lastProgress = 0; function createNotification( status: Status, progressToken: string | number, ): ProgressNotification { let progress = lastProgress; const total = 100; if (status.progress_data?.length) { const item = status.progress_data[0]; if ( item && typeof item.index === "number" && typeof item.length === "number" ) { const stepProgress = (item.index / (item.length - 1)) * 80; progress = Math.round(10 + stepProgress); } } else { switch (status.stage) { case "pending": progress = status.queue ? (status.position === 0 ? 10 : 5) : 15; break; case "generating": progress = 50; break; case "complete": progress = 100; break; case "error": progress = lastProgress; break; } } progress = Math.max(progress, lastProgress); if (status.stage === "complete") { progress = 100; } else if (progress === lastProgress && lastProgress >= 75) { progress = Math.min(99, lastProgress + 1); } lastProgress = progress; let message = status.message; if (!message) { if (status.queue && status.position !== undefined) { message = `Queued at position ${status.position}`; } else if (status.progress_data?.length) { const item = status.progress_data[0]; message = item.desc || `Step ${item.index + 1} of ${item.length}`; } else { message = status.stage.charAt(0).toUpperCase() + status.stage.slice(1); } } return { method: "notifications/progress", params: { progressToken, progress, total, message, _meta: status, }, }; } return { async notify(status: Status, progressToken: string | number) { if (!progressToken) return; const notification = createNotification(status, progressToken); await server.notification(notification); }, }; } ``` -------------------------------------------------------------------------------- /src/gradio_convert.ts: -------------------------------------------------------------------------------- ```typescript import type { Tool } from "@modelcontextprotocol/sdk/types.js"; import type { ApiEndpoint, 
ApiParameter } from "./gradio_api.js"; // Type for a parameter schema in MCP Tool export type ParameterSchema = Tool["inputSchema"]["properties"]; function parseNumberConstraints(description: string = "") { const constraints: { minimum?: number; maximum?: number } = {}; // Check for "between X and Y" format const betweenMatch = description.match( /between\s+(-?\d+\.?\d*)\s+and\s+(-?\d+\.?\d*)/i, ); if (betweenMatch) { constraints.minimum = Number(betweenMatch[1]); constraints.maximum = Number(betweenMatch[2]); return constraints; } // Fall back to existing min/max parsing const minMatch = description.match(/min(?:imum)?\s*[:=]\s*(-?\d+\.?\d*)/i); const maxMatch = description.match(/max(?:imum)?\s*[:=]\s*(-?\d+\.?\d*)/i); if (minMatch) constraints.minimum = Number(minMatch[1]); if (maxMatch) constraints.maximum = Number(maxMatch[1]); return constraints; } export function isFileParameter(param: ApiParameter): boolean { return ( param.python_type?.type === "filepath" || param.type === "Blob | File | Buffer" || param.component === "Image" || param.component === "Audio" ); } export function convertParameter(param: ApiParameter): ParameterSchema { // Start with determining the base type and description let baseType = param.type || "string"; let baseDescription = param.python_type?.description || param.label || undefined; // Special case for chat history - override type and description if (param.parameter_name === "history" && param.component === "Chatbot") { baseType = "array"; baseDescription = "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. 
Advanced: messages can also be file references or UI components."; } // Handle file types with specific descriptions if (isFileParameter(param)) { baseType = "string"; // Always string for file inputs if (param.component === "Audio") { baseDescription = "Accepts: Audio file URL, file path, file name, or resource identifier"; } else if (param.component === "Image") { baseDescription = "Accepts: Image file URL, file path, file name, or resource identifier"; } else { baseDescription = "Accepts: URL, file path, file name, or resource identifier"; } } const baseSchema = { type: baseType, description: baseDescription, ...(param.parameter_has_default && { default: param.parameter_default, }), ...(param.example_input && { examples: [param.example_input], }), }; // Add number constraints if it's a number type if (param.type === "number" && param.python_type?.description) { const constraints = parseNumberConstraints(param.python_type.description); return { ...baseSchema, ...constraints }; } // Handle Literal type to extract enum values if (param.python_type?.type?.startsWith("Literal[")) { const enumValues = param.python_type.type .slice(8, -1) // Remove "Literal[" and "]" .split(",") .map((value) => value.trim().replace(/['"]/g, "")); // Remove quotes and trim spaces return { ...baseSchema, description: param.python_type?.description || param.label || undefined, enum: enumValues, }; } return baseSchema; } export function convertApiToSchema(endpoint: ApiEndpoint) { const properties: { [key: string]: ParameterSchema } = {}; const required: string[] = []; let propertyCounter = 1; const unnamedParameters: Record<string, number> = {}; endpoint.parameters.forEach((param: ApiParameter, index: number) => { // Get property name from parameter_name, label, or generate one const propertyName = param.parameter_name || param.label || `Unnamed Parameter ${propertyCounter++}`; if (!param.parameter_name) { unnamedParameters[propertyName] = index; } // Convert parameter using existing 
function properties[propertyName] = convertParameter(param); // Add to required if no default value if (!param.parameter_has_default) { required.push(propertyName); } }); return { type: "object", properties, required, }; } ``` -------------------------------------------------------------------------------- /test/endpoint_wrapper.test.ts: -------------------------------------------------------------------------------- ```typescript import { describe, it, expect, vi } from "vitest"; import { EndpointWrapper, endpointSpecified, parsePath } from "../src/endpoint_wrapper"; import type { ApiEndpoint } from "../src/gradio_api"; import { Server } from "@modelcontextprotocol/sdk/server/index.js"; // Mock the Client class const mockSubmit = vi.fn(); const MockClient = { submit: mockSubmit, connect: vi.fn().mockResolvedValue({ submit: mockSubmit, view_api: vi.fn(), }), }; // Helper to create test endpoint function createTestEndpoint(parameters: any[]): ApiEndpoint { return { parameters, returns: [{ label: "Output", type: "string", python_type: { type: "str", description: "Output text" }, component: "Text" }], type: { generator: false, cancel: false } }; } describe("EndpointWrapper parameter mapping", () => { it("maps named parameters correctly", async () => { const endpoint = createTestEndpoint([ { label: "Text Input", parameter_name: "text_input", type: "string", python_type: { type: "str", description: "" }, component: "Textbox" } ]); const wrapper = new EndpointWrapper( parsePath("test/space/predict"), endpoint, MockClient as any, ); // Mock successful response mockSubmit.mockImplementation(async function* () { yield { type: "data", data: ["response"] }; }); await wrapper.call({ method: "tools/call", params: { name: "test", arguments: { text_input: "hello" } } }, {} as Server); // Verify the parameters were mapped correctly expect(mockSubmit).toHaveBeenCalledWith("/predict", { text_input: "hello" }); }); it("maps unnamed parameters to their index", async () => { const 
endpoint = createTestEndpoint([ { label: "parameter_0", type: "string", python_type: { type: "str", description: "" }, component: "Textbox" }, { label: "parameter_1", type: "number", python_type: { type: "float", description: "" }, component: "Number" } ]); const wrapper = new EndpointWrapper( parsePath("/test/space/predict"), endpoint, MockClient as any, ); mockSubmit.mockImplementation(async function* () { yield { type: "data", data: ["response"] }; }); await wrapper.call({ params: { name: "test", arguments: { "parameter_0": "hello", "parameter_1": 42 } }, method: "tools/call" }, {} as Server); // Verify parameters were mapped by position expect(mockSubmit).toHaveBeenCalledWith("/predict", { "parameter_0": "hello", "parameter_1": 42 }); }); it("handles mix of named and unnamed parameters", async () => { const endpoint = createTestEndpoint([ { label: "Text Input", parameter_name: "text_input", type: "string", python_type: { type: "str", description: "" }, component: "Textbox" }, { label: "parameter_1", type: "number", python_type: { type: "float", description: "" }, component: "Number" } ]); const wrapper = new EndpointWrapper( parsePath("test/space/predict"), endpoint, MockClient as any, ); mockSubmit.mockImplementation(async function* () { yield { type: "data", data: ["response"] }; }); await wrapper.call({ params: { name: "test", arguments: { text_input: "hello", "parameter_1": 42 } }, method: "tools/call" }, {} as Server); // Verify mixed parameter mapping expect(mockSubmit).toHaveBeenCalledWith("/predict", { text_input: "hello", "parameter_1": 42 }); }); }); describe("specific endpoint detection works", () => { it("detects no endpoint specified", () => { expect(endpointSpecified("/owner/space")).toBe(false); }); it("detects endpoints specified", () => { expect(endpointSpecified("/owner/space/foo")).toBe(true); expect(endpointSpecified("/owner/space/3")).toBe(true); expect(endpointSpecified("owner/space/3")).toBe(true); }); }); describe("endpoint and tool naming 
works", () => { it("handles named endpoints", () => { const endpoint = parsePath("/prithivMLmods/Mistral-7B-Instruct-v0.3/model_chat"); if(null==endpoint) throw new Error("endpoint is null"); expect(endpoint.owner).toBe("prithivMLmods"); expect(endpoint.space).toBe("Mistral-7B-Instruct-v0.3"); expect(endpoint.endpoint).toBe("/model_chat"); expect(endpoint.mcpToolName).toBe("Mistral-7B-Instruct-v0_3-model_chat"); expect(endpoint.mcpDisplayName).toBe("Mistral-7B-Instruct-v0.3 endpoint /model_chat"); }); it("handles numbered endpoint", () => { const endpoint = parsePath("/suno/bark/3"); if(null==endpoint) throw new Error("endpoint is null"); expect(endpoint.owner).toBe("suno"); expect(endpoint.space).toBe("bark"); expect(endpoint.endpoint).toBe(3); expect(endpoint.mcpToolName).toBe("bark-3"); expect(endpoint.mcpDisplayName).toBe("bark endpoint /3"); }); }); ``` -------------------------------------------------------------------------------- /src/index.ts: -------------------------------------------------------------------------------- ```typescript #!/usr/bin/env node const AVAILABLE_RESOURCES = "Available Resources"; const AVAILABLE_FILES = "available-files"; import { Server } from "@modelcontextprotocol/sdk/server/index.js"; import { VERSION } from "./version.js"; import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; // Remove mime import and treatAsText import as they're now handled in WorkingDirectory import { CallToolRequestSchema, ListToolsRequestSchema, ListPromptsRequestSchema, GetPromptRequestSchema, ListResourcesRequestSchema, ReadResourceRequestSchema, } from "@modelcontextprotocol/sdk/types.js"; import { EndpointWrapper } from "./endpoint_wrapper.js"; import { parseConfig } from "./config.js"; import { WorkingDirectory } from "./working_directory.js"; // Create MCP server const server = new Server( { name: "mcp-hfspace", version: VERSION, }, { capabilities: { tools: {}, prompts: {}, resources: { list: true, }, }, }, ); // Parse 
configuration const config = parseConfig(); // Change to configured working directory process.chdir(config.workDir); const workingDir = new WorkingDirectory( config.workDir, config.claudeDesktopMode, ); // Create a map to store endpoints by their tool names const endpoints = new Map<string, EndpointWrapper>(); // Create endpoints with working directory for (const spacePath of config.spacePaths) { try { const endpoint = await EndpointWrapper.createEndpoint( spacePath, workingDir, ); endpoints.set(endpoint.toolDefinition().name, endpoint); } catch (e) { if (e instanceof Error) { console.error(`Error loading ${spacePath}: ${e.message}`); } else { throw e; } continue; } } if (endpoints.size === 0) { throw new Error("No valid endpoints found in any of the provided spaces"); } server.setRequestHandler(ListToolsRequestSchema, async () => { return { tools: [ { name: AVAILABLE_FILES, description: "A list of available files and resources. " + "If the User requests things like 'most recent image' or 'the audio' use " + "this tool to identify the intended resource."
 + "This tool returns 'resource uri', 'name', 'size', 'last modified' and 'mime type' in a markdown table", inputSchema: { type: "object", properties: {}, }, }, ...Array.from(endpoints.values()).map((endpoint) => endpoint.toolDefinition(), ), ], }; }); server.setRequestHandler(CallToolRequestSchema, async (request) => { if (AVAILABLE_FILES === request.params.name) { return { content: [ { type: `resource`, resource: { uri: `/available-files`, mimeType: `text/markdown`, text: await workingDir.generateResourceTable(), }, }, ], }; } const endpoint = endpoints.get(request.params.name); if (!endpoint) { throw new Error(`Unknown tool: ${request.params.name}`); } try { return await endpoint.call(request, server); } catch (error) { if (error instanceof Error) { return { content: [ { type: `text`, text: `mcp-hfspace error: ${error.message}`, }, ], isError: true, }; } throw error; } }); server.setRequestHandler(ListPromptsRequestSchema, async () => { return { prompts: [ { name: AVAILABLE_RESOURCES, description: "List of available resources.", arguments: [], }, ...Array.from(endpoints.values()).map((endpoint) => endpoint.promptDefinition(), ), ], }; }); server.setRequestHandler(GetPromptRequestSchema, async (request) => { const promptName = request.params.name; if (AVAILABLE_RESOURCES === promptName) { return availableResourcesPrompt(); } const endpoint = endpoints.get(promptName); if (!endpoint) { throw new Error(`Unknown prompt: ${promptName}`); } return await endpoint.getPromptTemplate(request.params.arguments); }); async function availableResourcesPrompt() { const tableText = await workingDir.generateResourceTable(); return { messages: [ { role: "user", content: { type: "text", text: tableText, }, }, ], }; } server.setRequestHandler(ListResourcesRequestSchema, async () => { try { const resources = await workingDir.getSupportedResources(); return { resources: resources.map((resource) => ({ uri: resource.uri, name: resource.name, mimeType: resource.mimeType, })), }; } catch 
(error) { if (error instanceof Error) { throw new Error(`Failed to list resources: ${error.message}`); } throw error; } }); server.setRequestHandler(ReadResourceRequestSchema, async (request) => { try { const contents = await workingDir.readResource(request.params.uri); return { contents: [contents], }; } catch (error) { if (error instanceof Error) { throw new Error(`Failed to read resource: ${error.message}`); } throw error; } }); /** * Start the server using stdio transport. * This allows the server to communicate via standard input/output streams. */ async function main() { const transport = new StdioServerTransport(); await server.connect(transport); } main().catch((error) => { console.error("Server error:", error); process.exit(1); }); ``` -------------------------------------------------------------------------------- /src/working_directory.ts: -------------------------------------------------------------------------------- ```typescript import { Dirent, promises as fs } from "fs"; import path from "path"; import mime from "mime"; import { pathToFileURL } from "url"; import { FALLBACK_MIME_TYPE, treatAsText } from "./mime_types.js"; import { claudeSupportedMimeTypes } from "./mime_types.js"; export interface ResourceFile { uri: string; name: string; mimeType: string; size: number; lastModified: Date; formattedSize?: string; // Add optional formatted size } export interface ResourceContents { uri: string; mimeType: string; text?: string; blob?: string; } export class WorkingDirectory { private readonly MAX_RESOURCE_SIZE = 1024 * 1024 * 2; constructor( private readonly directory: string, private readonly claudeDesktopMode: boolean = false, ) {} async listFiles(recursive = true): Promise<Dirent[]> { return await fs.readdir(this.directory, { withFileTypes: true, recursive, }); } async getResourceFile(file: Dirent): Promise<ResourceFile> { const fullPath = path.join(file.parentPath || "", file.name); const relativePath = path .relative(this.directory, fullPath) 
.replace(/\\/g, "/"); const stats = await fs.stat(fullPath); return { uri: `file:./${relativePath}`, name: file.name, mimeType: mime.getType(file.name) || FALLBACK_MIME_TYPE, size: stats.size, lastModified: stats.mtime, }; } async generateFilename( prefix: string, extension: string, mcpToolName: string, ): Promise<string> { const date = new Date().toISOString().split("T")[0]; const randomId = crypto.randomUUID().slice(0, 5); return path.join( this.directory, `${date}_${mcpToolName}_${prefix}_${randomId}.${extension}`, ); } async saveFile(arrayBuffer: ArrayBuffer, filename: string): Promise<void> { await fs.writeFile(filename, Buffer.from(arrayBuffer), { encoding: "binary", }); } getFileUrl(filename: string): string { return pathToFileURL(path.resolve(this.directory, filename)).href; } async isSupportedFile(filename: string): Promise<boolean> { if (!this.claudeDesktopMode) return true; try { const stats = await fs.stat(filename); if (stats.size > this.MAX_RESOURCE_SIZE) return false; const mimetype = mime.getType(filename); if (!mimetype) return false; if (treatAsText(mimetype)) return true; return claudeSupportedMimeTypes.some((supported) => { if (!supported.includes("/*")) return supported === mimetype; const supportedMainType = supported.split("/")[0]; const mainType = mimetype.split("/")[0]; return supportedMainType === mainType; }); } catch { return false; } } async validatePath(filePath: string): Promise<string> { if (filePath.startsWith("http://") || filePath.startsWith("https://")) { return filePath; } if (filePath.startsWith("file:")) { filePath = filePath.replace(/^file:(?:\/\/|\.\/)/, ""); } const normalizedFilePath = path.normalize(path.resolve(filePath)); const normalizedCwd = path.normalize(this.directory); if (!normalizedFilePath.startsWith(normalizedCwd)) { throw new Error(`Path ${filePath} is outside of working directory`); } await fs.access(normalizedFilePath); return normalizedFilePath; } formatFileSize(bytes: number): string { const units = ["B", 
"KB", "MB", "GB"]; let size = bytes; let unitIndex = 0; while (size >= 1024 && unitIndex < units.length - 1) { size /= 1024; unitIndex++; } return `${size.toFixed(1)} ${units[unitIndex]}`; } async generateResourceTable(): Promise<string> { const files = await this.listFiles(); const resources = await Promise.all( files .filter((entry) => entry.isFile()) .map(async (entry) => await this.getResourceFile(entry)), ); if (resources.length === 0) { return "No resources available."; } return ` The following resources are available for tool calls: | Resource URI | Name | MIME Type | Size | Last Modified | |--------------|------|-----------|------|---------------| ${resources .map( (f) => `| ${f.uri} | ${f.name} | ${f.mimeType} | ${this.formatFileSize(f.size)} | ${f.lastModified.toISOString()} |`, ) .join("\n")} Prefer using the Resource URI for tool parameters which require a file input. URLs are also accepted.`.trim(); } isFileSizeSupported(size: number): boolean { return size <= this.MAX_RESOURCE_SIZE; } async getSupportedResources(): Promise<ResourceFile[]> { const files = await this.listFiles(); const supportedFiles = await Promise.all( files .filter((entry) => entry.isFile()) .map(async (entry) => { const isSupported = await this.isSupportedFile(entry.name); if (!isSupported) return null; return await this.getResourceFile(entry); }), ); return supportedFiles.filter((file): file is ResourceFile => file !== null); } async readResource(resourceUri: string): Promise<ResourceContents> { const validatedPath = await this.validatePath(resourceUri); const file = path.basename(validatedPath); const mimeType = mime.getType(file) || FALLBACK_MIME_TYPE; const content = this.isMimeTypeText(mimeType) ? 
{ text: await fs.readFile(validatedPath, "utf-8") } : { blob: (await fs.readFile(validatedPath)).toString("base64") }; return { uri: resourceUri, mimeType, ...content, }; } private isMimeTypeText(mimeType: string): boolean { return ( mimeType.startsWith("text/") || mimeType === "application/json" || mimeType === "application/javascript" || mimeType === "application/xml" ); } } ``` -------------------------------------------------------------------------------- /src/content_converter.ts: -------------------------------------------------------------------------------- ```typescript import { EmbeddedResource, ImageContent, TextContent, } from "@modelcontextprotocol/sdk/types.js"; import { ApiReturn } from "./gradio_api.js"; import * as fs from "fs/promises"; import { pathToFileURL } from "url"; import path from "path"; import { config } from "./config.js"; import { EndpointPath } from "./endpoint_wrapper.js"; import { WorkingDirectory } from "./working_directory.js"; // Add types for Gradio component values export interface GradioResourceValue { url?: string; mime_type?: string; orig_name?: string; } // Component types enum enum GradioComponentType { Image = "Image", Audio = "Audio", Chatbot = "Chatbot", } // Resource response interface interface ResourceResponse { mimeType: string; base64Data: string; arrayBuffer: ArrayBuffer; originalExtension: string | null; } // Simple converter registry type ContentConverter = ( component: ApiReturn, value: GradioResourceValue, endpointPath: EndpointPath, ) => Promise<TextContent | ImageContent | EmbeddedResource>; // Type for converter functions that may not succeed type ConverterFn = ( component: ApiReturn, value: GradioResourceValue, endpointPath: EndpointPath, ) => Promise<TextContent | ImageContent | EmbeddedResource | null>; // Default converter implementation const defaultConverter: ConverterFn = async () => null; export class GradioConverter { private converters: Map<string, ContentConverter> = new Map(); constructor(private readonly
workingDir: WorkingDirectory) { // Register converters with fallback behavior this.register( GradioComponentType.Image, withFallback(this.imageConverter.bind(this)), ); this.register( GradioComponentType.Audio, withFallback(this.audioConverter.bind(this)), ); this.register( GradioComponentType.Chatbot, withFallback(async () => null), ); } register(component: string, converter: ContentConverter) { this.converters.set(component, converter); } async convert( component: ApiReturn, value: GradioResourceValue, endpointPath: EndpointPath, ): Promise<TextContent | ImageContent | EmbeddedResource> { if (config.debug) { await fs.writeFile( generateFilename("debug", "json", endpointPath.mcpToolName), JSON.stringify(value, null, 2), ); } const converter = this.converters.get(component.component) || withFallback(defaultConverter); return converter(component, value, endpointPath); } private async saveFile( arrayBuffer: ArrayBuffer, mimeType: string, prefix: string, mcpToolName: string, originalExtension?: string | null, ): Promise<string> { const extension = originalExtension || mimeType.split("/")[1] || "bin"; const filename = await this.workingDir.generateFilename( prefix, extension, mcpToolName, ); await this.workingDir.saveFile(arrayBuffer, filename); return filename; } private readonly imageConverter: ConverterFn = async ( _component, value, endpointPath, ) => { if (!value?.url) return null; try { const response = await convertUrlToBase64(value.url, value); try { await this.saveFile( response.arrayBuffer, response.mimeType, GradioComponentType.Image, endpointPath.mcpToolName, response.originalExtension, ); } catch (saveError) { if (config.claudeDesktopMode) { console.error( `Failed to save image file: ${saveError instanceof Error ? 
saveError.message : String(saveError)}`, ); } else { throw saveError; } } return { type: "image", data: response.base64Data, mimeType: response.mimeType, }; } catch (error) { console.error("Image conversion failed:", error); return createTextContent( _component, `Failed to load image: ${error instanceof Error ? error.message : String(error)}`, ); } }; private readonly audioConverter: ConverterFn = async ( _component, value, endpointPath, ) => { if (!value?.url) return null; try { const { mimeType, base64Data, arrayBuffer, originalExtension } = await convertUrlToBase64(value.url, value); const filename = await this.saveFile( arrayBuffer, mimeType, "audio", endpointPath.mcpToolName, originalExtension, ); if (config.claudeDesktopMode) { return { type: "resource", resource: { uri: `${pathToFileURL(path.resolve(filename)).href}`, mimeType: `text/plain`, text: `Your audio was successfully created and is available for playback at ${path.resolve(filename)}. Claude Desktop does not currently support audio content`, }, }; } else { return { type: "resource", resource: { uri: `${pathToFileURL(path.resolve(filename)).href}`, mimeType, blob: base64Data, }, }; } } catch (error) { console.error("Audio conversion failed:", error); return { type: "text", text: `Failed to load audio: ${(error as Error).message}`, }; } }; } // Shared text content creator const createTextContent = ( component: ApiReturn, value: unknown, ): TextContent => { const label = component.label ? `${component.label}: ` : ""; const text = typeof value === "string" ? value : JSON.stringify(value); return { type: "text", text: `${label}${text}`, }; }; // Wrapper that adds fallback behavior const withFallback = (converter: ConverterFn): ContentConverter => { return async ( component: ApiReturn, value: GradioResourceValue, endpointPath: EndpointPath, ) => { const result = await converter(component, value, endpointPath); return result ??
createTextContent(component, value); }; }; // Debug filename generator (incorporates the MCP tool name) const generateFilename = ( prefix: string, extension: string, mcpToolName: string, ): string => { const date = new Date().toISOString().split("T")[0]; // YYYY-MM-DD const randomId = crypto.randomUUID().slice(0, 5); // First 5 chars return `${date}_${mcpToolName}_${prefix}_${randomId}.${extension}`; }; const getExtensionFromFilename = (url: string): string | null => { const match = url.match(/\/([^/?#]+)[^/]*$/); if (match && match[1].includes(".")) { return match[1].split(".").pop() || null; } return null; }; const getMimeTypeFromOriginalName = (origName: string): string | null => { const extension = origName.split(".").pop()?.toLowerCase(); if (!extension) return null; // Common image formats (normalise to registered MIME subtypes) if (["jpg", "jpeg", "png", "gif", "webp", "bmp", "svg"].includes(extension)) { if (extension === "jpg") return "image/jpeg"; if (extension === "svg") return "image/svg+xml"; return `image/${extension}`; } // Common audio formats (normalise to registered MIME subtypes) if (["mp3", "wav", "ogg", "aac", "m4a"].includes(extension)) { if (extension === "mp3") return "audio/mpeg"; if (extension === "m4a") return "audio/mp4"; return `audio/${extension}`; } // For unknown types, fall back to application/* return `application/${extension}`; }; const determineMimeType = ( value: GradioResourceValue, responseHeaders: Headers, ): string => { // First priority: mime_type from the value object if (value?.mime_type) { return value.mime_type; } // Second priority: derived from orig_name if (value?.orig_name) { const mimeFromName = getMimeTypeFromOriginalName(value.orig_name); if (mimeFromName) { return mimeFromName; } } // Third priority: response headers const headerMimeType = responseHeaders.get("content-type"); if (headerMimeType && headerMimeType !== "text/plain") { return headerMimeType; } // Final fallback return "text/plain"; }; const convertUrlToBase64 = async ( url: string, value: GradioResourceValue, ): Promise<ResourceResponse> => { const headers: HeadersInit = {}; if (config.hfToken) { headers.Authorization = `Bearer ${config.hfToken}`; } const response = await fetch(url, { headers }); if (!response.ok) { throw new
Error( `Failed to fetch resource: ${response.status} ${response.statusText}`, ); } const mimeType = determineMimeType(value, response.headers); const originalExtension = getExtensionFromFilename(url); const arrayBuffer = await response.arrayBuffer(); const base64Data = Buffer.from(arrayBuffer).toString("base64"); return { mimeType, base64Data, arrayBuffer, originalExtension }; }; ``` -------------------------------------------------------------------------------- /src/endpoint_wrapper.ts: -------------------------------------------------------------------------------- ```typescript import { Client, handle_file } from "@gradio/client"; import { ApiStructure, ApiEndpoint, ApiReturn } from "./gradio_api.js"; import { convertApiToSchema, isFileParameter, ParameterSchema, } from "./gradio_convert.js"; import { Server } from "@modelcontextprotocol/sdk/server/index.js"; import * as fs from "fs/promises"; import type { CallToolResult, GetPromptResult, CallToolRequest, } from "@modelcontextprotocol/sdk/types.d.ts"; import type { TextContent, ImageContent, EmbeddedResource, } from "@modelcontextprotocol/sdk/types.d.ts"; import { createProgressNotifier } from "./progress_notifier.js"; import { GradioConverter, GradioResourceValue } from "./content_converter.js"; import { config } from "./config.js"; import type { StatusMessage, Payload } from "@gradio/client"; import { WorkingDirectory } from "./working_directory.js"; type GradioEvent = StatusMessage | Payload; export interface EndpointPath { owner: string; space: string; endpoint: string | number; mcpToolName: string; mcpDisplayName: string; } export function endpointSpecified(path: string) { const parts = path.replace(/^\//, "").split("/"); return parts.length === 3; } export function parsePath(path: string): EndpointPath { const parts = path.replace(/^\//, "").split("/"); if (parts.length != 3) { throw new Error( `Invalid Endpoint path format [${path}]. 
Use vendor/space/endpoint`, ); } const [owner, space, rawEndpoint] = parts; return { owner, space, endpoint: isNaN(Number(rawEndpoint)) ? `/${rawEndpoint}` : parseInt(rawEndpoint), mcpToolName: formatMcpToolName(space, rawEndpoint), mcpDisplayName: formatMcpDisplayName(space, rawEndpoint), }; function formatMcpToolName(space: string, endpoint: string | number) { return `${space}-${endpoint}`.replace(/[^a-zA-Z0-9_-]/g, "_").slice(0, 64); } function formatMcpDisplayName(space: string, endpoint: string | number) { return `${space} endpoint /${endpoint}`; } } export class EndpointWrapper { private converter: GradioConverter; constructor( private endpointPath: EndpointPath, private endpoint: ApiEndpoint, private client: Client, private workingDir: WorkingDirectory, ) { this.converter = new GradioConverter(workingDir); } static async createEndpoint( configuredPath: string, workingDir: WorkingDirectory, ): Promise<EndpointWrapper> { const pathParts = configuredPath.split("/"); if (pathParts.length < 2 || pathParts.length > 3) { throw new Error( `Invalid space path format [${configuredPath}]. Use: vendor/space or vendor/space/endpoint`, ); } const spaceName = `${pathParts[0]}/${pathParts[1]}`; const endpointTarget = pathParts[2] ?
`/${pathParts[2]}` : undefined; const preferredApis = [ "/predict", "/infer", "/generate", "/complete", "/model_chat", "/lambda", "/generate_image", "/process_prompt", "/on_submit", "/add_text", ]; const gradio: Client = await Client.connect(spaceName, { events: ["data", "status"], hf_token: config.hfToken, }); const api = (await gradio.view_api()) as ApiStructure; if (config.debug) { await fs.writeFile( `${pathParts[0]}_${pathParts[1]}_debug_api.json`, JSON.stringify(api, null, 2), ); } // Try chosen API if specified if (endpointTarget && api.named_endpoints[endpointTarget]) { return new EndpointWrapper( parsePath(configuredPath), api.named_endpoints[endpointTarget], gradio, workingDir, ); } // Try preferred APIs const preferredApi = preferredApis.find( (name) => api.named_endpoints[name], ); if (preferredApi) { return new EndpointWrapper( parsePath(`${configuredPath}${preferredApi}`), api.named_endpoints[preferredApi], gradio, workingDir, ); } // Try first named endpoint const firstNamed = Object.entries(api.named_endpoints)[0]; if (firstNamed) { return new EndpointWrapper( parsePath(`${configuredPath}${firstNamed[0]}`), firstNamed[1], gradio, workingDir, ); } // Try unnamed endpoints const validUnnamed = Object.entries(api.unnamed_endpoints).find( ([, endpoint]) => endpoint.parameters.length > 0 && endpoint.returns.length > 0, ); if (validUnnamed) { return new EndpointWrapper( parsePath(`${configuredPath}/${validUnnamed[0]}`), validUnnamed[1], gradio, workingDir, ); } throw new Error(`No valid endpoints found for ${configuredPath}`); } async validatePath(filePath: string): Promise<string> { return this.workingDir.validatePath(filePath); } /* Endpoint Wrapper */ private mcpDescriptionName(): string { return this.endpointPath.mcpDisplayName; } get mcpToolName() { return this.endpointPath.mcpToolName; } toolDefinition() { return { name: this.mcpToolName, description: `Call the ${this.mcpDescriptionName()}`, inputSchema: convertApiToSchema(this.endpoint), }; } async 
call( request: CallToolRequest, server: Server, ): Promise<CallToolResult> { const progressToken = request.params._meta?.progressToken as | string | number | undefined; const parameters = request.params.arguments as Record<string, unknown>; // Get the endpoint parameters to check against const endpointParams = this.endpoint.parameters; // Process each parameter, applying handle_file for file inputs for (const [key, value] of Object.entries(parameters)) { const param = endpointParams.find( (p) => p.parameter_name === key || p.label === key, ); if (param && isFileParameter(param) && typeof value === "string") { const file = await this.validatePath(value); parameters[key] = handle_file(file); } } const normalizedToken = typeof progressToken === "number" ? progressToken.toString() : progressToken; return this.handleToolCall(parameters, normalizedToken, server); } async handleToolCall( parameters: Record<string, unknown>, progressToken: string | undefined, server: Server, ): Promise<CallToolResult> { const events = []; try { let result = null; const submission: AsyncIterable<GradioEvent> = this.client.submit( this.endpointPath.endpoint, parameters, ) as AsyncIterable<GradioEvent>; const progressNotifier = createProgressNotifier(server); for await (const msg of submission) { if (config.debug) events.push(msg); if (msg.type === "data") { if (Array.isArray(msg.data)) { const hasContent = msg.data.some( (item: unknown) => typeof item !== "object", ); if (hasContent) result = msg.data; if (null === result) result = msg.data; } } else if (msg.type === "status") { if (msg.stage === "error") { throw new Error(`Gradio error: ${msg.message || "Unknown error"}`); } if (progressToken) await progressNotifier.notify(msg, progressToken); } } if (!result) { throw new Error("No data received from endpoint"); } return await this.convertPredictResults( this.endpoint.returns, result, this.endpointPath, ); } catch (error) { const errorMessage = error instanceof Error ? 
error.message : String(error); throw new Error(`Error calling endpoint: ${errorMessage}`); } finally { if (config.debug && events.length > 0) { await fs.writeFile( `${this.mcpToolName}_status_${crypto .randomUUID() .substring(0, 5)}.json`, JSON.stringify(events, null, 2), ); } } } private async convertPredictResults( returns: ApiReturn[], predictResults: unknown[], endpointPath: EndpointPath, ): Promise<CallToolResult> { const content: (TextContent | ImageContent | EmbeddedResource)[] = []; for (const [index, output] of returns.entries()) { const value = predictResults[index]; const converted = await this.converter.convert( output, value as GradioResourceValue, endpointPath, ); content.push(converted); } return { content, isError: false, }; } promptName() { return this.mcpToolName; } promptDefinition() { const schema = convertApiToSchema(this.endpoint); return { name: this.promptName(), description: `Use the ${this.mcpDescriptionName()}.`, arguments: Object.entries(schema.properties).map( ([name, prop]: [string, ParameterSchema]) => ({ name, description: prop?.description || name, required: schema.required?.includes(name) || false, }), ), }; } async getPromptTemplate( args?: Record<string, string>, ): Promise<GetPromptResult> { const schema = convertApiToSchema(this.endpoint); let promptText = `Using the ${this.mcpDescriptionName()}:\n\n`; promptText += Object.entries(schema.properties) .map(([name, prop]: [string, ParameterSchema]) => { const defaultHint = prop?.default !== undefined ? 
` - default: ${prop.default}` : ""; const value = args?.[name] || `[Provide ${prop?.description || name}${defaultHint}]`; return `${name}: ${value}`; }) .join("\n"); return { description: this.promptDefinition().description, messages: [ { role: "user", content: { type: "text", text: promptText, }, }, ], }; } } ``` -------------------------------------------------------------------------------- /test/utils.test.ts: -------------------------------------------------------------------------------- ```typescript import { describe, it, expect } from "vitest"; import type { ApiEndpoint, ApiParameter } from "../src/gradio_api"; import { convertParameter } from "../src/gradio_convert"; function createParameter( override: Partial<ApiParameter> & { // Require the essential bits we always need to specify python_type: { type: string; description?: string; }; } ): ApiParameter { return { label: "Test Parameter", // parameter_name is now optional type: "string", component: "Textbox", // Spread the override at the end to allow overriding defaults ...override, }; } // Test just the parameter conversion describe("basic conversions", () => { it("converts a single basic string parameter", () => { const param = createParameter({ python_type: { type: "str", description: "A text parameter", }, }); const result = convertParameter(param); // TypeScript ensures result matches ParameterSchema expect(result).toEqual({ type: "string", description: "A text parameter", }); }); it("uses python_type description when available", () => { const param = createParameter({ label: "Prompt", python_type: { type: "str", description: "The input prompt text", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "string", description: "The input prompt text", }); }); it("falls back to label when python_type description is empty", () => { const param = createParameter({ label: "Prompt", python_type: { type: "str", description: "", }, }); const result = convertParameter(param); 
expect(result).toEqual({ type: "string", description: "Prompt", }); }); it("includes default value when specified", () => { const param = createParameter({ parameter_has_default: true, parameter_default: "default text", python_type: { type: "str", description: "", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "string", description: "Test Parameter", default: "default text", }); }); it("includes example when specified", () => { const param = createParameter({ example_input: "example text", python_type: { type: "str", description: "" }, }); const result = convertParameter(param); expect(result).toEqual({ type: "string", description: "Test Parameter", examples: ["example text"], }); }); it("includes both default and example when specified", () => { const param = createParameter({ parameter_has_default: true, parameter_default: "default text", example_input: "example text", python_type: { type: "str", description:"", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "string", description: "Test Parameter", default: "default text", examples: ["example text"], }); }); }); describe("convertParameter", () => { it("converts a single parameter correctly", () => { const param: ApiParameter = { label: "Input Text", parameter_name: "text", parameter_has_default: false, parameter_default: null, type: "string", python_type: { type: "str", description: "A text input", }, component: "Textbox", }; const result = convertParameter(param); // TypeScript ensures result matches ParameterSchema expect(result).toEqual({ type: "string", description: "A text input", }); }); }); describe("number type conversions", () => { // ...existing tests... 
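// Sketch of an additional case, assuming numeric defaults flow through the
// same generic default handling exercised by the string and boolean tests in
// this file (an assumption, not verified against every converter branch):
it("includes default value for number parameters", () => {
  const param = createParameter({
    type: "number",
    parameter_has_default: true,
    parameter_default: 0.7,
    python_type: { type: "float", description: "" },
  });
  const result = convertParameter(param);
  expect(result).toEqual({
    type: "number",
    description: "Test Parameter",
    default: 0.7,
  });
});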
it("handles basic number without constraints", () => { const param = createParameter({ type: "number", python_type: { type: "float", description: "A number parameter", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "number", description: "A number parameter", }); }); it("parses minimum constraint", () => { const param = createParameter({ type: "number", python_type: { type: "float", description: "A number parameter (min: 0)", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "number", description: "A number parameter (min: 0)", minimum: 0, }); }); it("parses maximum constraint", () => { const param = createParameter({ type: "number", python_type: { type: "float", description: "A number parameter (maximum=100)", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "number", description: "A number parameter (maximum=100)", maximum: 100, }); }); it("parses both min and max constraints", () => { const param = createParameter({ type: "number", python_type: { type: "float", description: "A number parameter (min: 0, max: 1.0)", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "number", description: "A number parameter (min: 0, max: 1.0)", minimum: 0, maximum: 1.0, }); }); it("parses 'between X and Y' format", () => { const param = createParameter({ type: "number", python_type: { type: "float", description: "numeric value between 256 and 2048", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "number", description: "numeric value between 256 and 2048", minimum: 256, maximum: 2048, }); }); it("parses large number ranges", () => { const param = createParameter({ type: "number", python_type: { type: "float", description: "numeric value between 0 and 2147483647", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "number", description: "numeric value between 0 and 2147483647", minimum: 0, maximum: 2147483647, 
}); }); }); describe("boolean type conversions", () => { it("handles basic boolean parameter", () => { const param = createParameter({ type: "boolean", python_type: { type: "bool", description: "A boolean flag", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "boolean", description: "A boolean flag", }); }); it("handles boolean with default value", () => { const param = createParameter({ type: "boolean", parameter_has_default: true, parameter_default: true, python_type: { type: "bool", description: "", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "boolean", description: "Test Parameter", default: true, }); }); it("handles boolean with example", () => { const param = createParameter({ type: "boolean", example_input: true, python_type: { type: "bool", description: "", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "boolean", description: "Test Parameter", examples: [true], }); }); it("matches the Randomize seed example exactly", () => { const param = createParameter({ label: "Randomize seed", parameter_name: "randomize_seed", parameter_has_default: true, parameter_default: true, type: "boolean", example_input: true, python_type: { type: "bool", description: "", }, }); const result = convertParameter(param); expect(result).toEqual({ type: "boolean", description: "Randomize seed", default: true, examples: [true], }); }); }); describe("literal type conversions", () => { it("handles Literal type with enum values", () => { const param = createParameter({ label: "Aspect Ratio", parameter_name: "aspect_ratio", parameter_has_default: true, parameter_default: "1:1", type: "string", python_type: { type: "Literal['1:1', '16:9', '9:16', '4:3']", description: "", }, example_input: "1:1", }); const result = convertParameter(param); expect(result).toEqual({ type: "string", description: "Aspect Ratio", default: "1:1", examples: ["1:1"], enum: ["1:1", "16:9", "9:16", "4:3"] }); }); 
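// Sketch of a further case, assuming a Literal with no default or example
// yields only the enum values plus the label-derived description, mirroring
// the plain string behaviour above (an assumption, not verified):
it("handles Literal type without default or example", () => {
  const param = createParameter({
    label: "Output Format",
    type: "string",
    python_type: {
      type: "Literal['png', 'jpeg', 'webp']",
      description: "",
    },
  });
  const result = convertParameter(param);
  expect(result).toEqual({
    type: "string",
    description: "Output Format",
    enum: ["png", "jpeg", "webp"],
  });
});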
  it("handles boolean-like Literal type with True/False strings", () => {
    const param = createParameter({
      label: "Is Example Image",
      parameter_name: "is_example_image",
      parameter_has_default: true,
      parameter_default: "False",
      type: "string",
      python_type: {
        type: "Literal['True', 'False']",
        description: "",
      },
      example_input: "True",
    });

    const result = convertParameter(param);
    expect(result).toEqual({
      type: "string",
      description: "Is Example Image",
      default: "False",
      examples: ["True"],
      enum: ["True", "False"],
    });
  });
});

describe("file and blob type conversions", () => {
  it("handles simple filepath type", () => {
    const exampleUrl = "https://example.com/image.png";
    const param = createParameter({
      type: "Blob | File | Buffer",
      python_type: {
        type: "filepath",
        description: "",
      },
      example_input: {
        path: exampleUrl,
        meta: { _type: "gradio.FileData" },
        orig_name: "image.png",
        url: exampleUrl,
      },
    });

    const result = convertParameter(param);
    expect(result).toMatchObject({
      type: "string",
      description: "Accepts: URL, file path, file name, or resource identifier",
    });
  });

  it("handles complex Dict type for image input", () => {
    const exampleUrl = "https://example.com/image.png";
    const param = createParameter({
      type: "Blob | File | Buffer",
      python_type: {
        type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url), ...)",
        description: "For input, either path or url must be provided.",
      },
      example_input: {
        path: exampleUrl,
        meta: { _type: "gradio.FileData" },
        orig_name: "image.png",
        url: exampleUrl,
      },
    });

    const result = convertParameter(param);
    expect(result).toMatchObject({
      type: "string",
      description: "Accepts: URL, file path, file name, or resource identifier",
    });
  });

  it("handles audio file input type", () => {
    const exampleUrl =
      "https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav";
    const param = createParameter({
      type: "",
      python_type: {
        type: "filepath",
        description: "",
      },
      component: "Audio",
      example_input: {
        path: exampleUrl,
        meta: { _type: "gradio.FileData" },
        orig_name: "audio_sample.wav",
        url: exampleUrl,
      },
    });

    const result = convertParameter(param);
    expect(result).toMatchObject({
      type: "string",
      description: "Accepts: Audio file URL, file path, file name, or resource identifier",
    });
  });

  it("handles empty type string for audio input", () => {
    const param = createParameter({
      label: "parameter_1",
      parameter_name: "inputs",
      parameter_has_default: false,
      parameter_default: null,
      python_type: {
        type: "filepath",
        description: "",
      },
      component: "Audio",
      example_input: {
        path: "https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav",
        meta: { _type: "gradio.FileData" },
        orig_name: "audio_sample.wav",
        url: "https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav",
      },
    });

    const result = convertParameter(param);
    expect(result).toMatchObject({
      type: "string", // Should always be "string" for file inputs
      description: "Accepts: Audio file URL, file path, file name, or resource identifier",
    });
  });

  it("handles image file input type", () => {
    const param = createParameter({
      type: "",
      python_type: {
        type: "filepath",
        description: "",
      },
      component: "Image",
      example_input: {
        path: "https://example.com/image.png",
        meta: { _type: "gradio.FileData" },
        orig_name: "image.png",
        url: "https://example.com/image.png",
      },
    });

    const result = convertParameter(param);
    expect(result).toMatchObject({
      type: "string",
      description: "Accepts: Image file URL, file path, file name, or resource identifier",
    });
  });
});

describe("unnamed parameters", () => {
  it("handles parameters without explicit names", () => {
    const param = createParameter({
      label: "Input Text",
      type: "string",
      python_type: {
        type: "str",
        description: "A text input",
      },
      component: "Textbox",
    });

    const result = convertParameter(param);
    expect(result).toEqual({
      type: "string",
      description: "A text input",
    });
  });

  it("handles array of unnamed parameters", () => {
    const params = [
      createParameter({
        label: "Image Input",
        type: "Blob | File | Buffer",
        python_type: {
          type: "filepath",
          description: "",
        },
        component: "Image",
        example_input: "https://example.com/image.png",
      }),
      createParameter({
        label: "Text Input",
        type: "string",
        python_type: {
          type: "str",
          description: "",
        },
        component: "Textbox",
        example_input: "Hello!",
      }),
    ];

    // Test each parameter conversion
    params.forEach((param) => {
      const result = convertParameter(param);
      expect(result).toBeTruthy();
    });
  });
});

describe("special cases", () => {
  // Chatbot histories often have incorrect type/example pairings;
  // Python dicts/tuples will be handled more elegantly later.
  it("handles chat history parameter", () => {
    const param = createParameter({
      label: "Qwen2.5-72B-Instruct",
      parameter_name: "history",
      component: "Chatbot",
      python_type: {
        type: "list",
        description: "Some other description that should be ignored",
      },
    });

    const result = convertParameter(param);
    expect(result).toEqual({
      type: "array",
      description:
        "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. Advanced: messages can also be file references or UI components.",
    });
  });

  it("handles chat history parameter with examples", () => {
    const param = createParameter({
      label: "Qwen2.5-72B-Instruct",
      parameter_name: "history",
      component: "Chatbot",
      python_type: {
        type: "list",
        description: "Some other description that should be ignored",
      },
      example_input: [["Hello", "Hi there!"]], // Example chat history
    });

    const result = convertParameter(param);
    expect(result).toEqual({
      type: "array",
      description:
        "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. Advanced: messages can also be file references or UI components.",
      examples: [[["Hello", "Hi there!"]]],
    });
  });

  it("handles chat history parameter with default value", () => {
    const param = createParameter({
      label: "Qwen2.5-72B-Instruct",
      parameter_name: "history",
      component: "Chatbot",
      parameter_has_default: true,
      parameter_default: [],
      python_type: {
        type: "list",
        description: "Some other description that should be ignored",
      },
    });

    const result = convertParameter(param);
    expect(result).toEqual({
      type: "array",
      description:
        "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. Advanced: messages can also be file references or UI components.",
      default: [],
    });
  });
});
```

--------------------------------------------------------------------------------
/test/parameter_test.json:
--------------------------------------------------------------------------------

```json
[
  {
    label: "Input Text",
    type: "string",
    python_type: { type: "str", description: "" },
    component: "Textbox",
    example_input: "Howdy!",
    serializer: "StringSerializable",
    description: undefined,
  },
  {
    label: "Acoustic Prompt",
    type: "",
    python_type: { type: "Any", description: "any valid value" },
    component: "Dropdown",
    example_input: null,
    serializer: "SimpleSerializable",
    description: "any valid value",
  },
]

{
  parameters: [
    {
      label: "Upload Image",
      parameter_name: "img",
      parameter_has_default: false,
      parameter_default: null,
      type: "Blob | File | Buffer",
      python_type: {
        type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())",
        description: "For input, either path or url must be provided. For output, path is always provided.",
      },
      component: "Image",
      example_input: {
        path: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
        meta: { _type: "gradio.FileData" },
        orig_name: "bus.png",
        url: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
      },
      description: undefined,
    },
    {
      label: "Object to Extract",
      parameter_name: "prompt",
      parameter_has_default: false,
      parameter_default: null,
      type: "string",
      python_type: { type: "str", description: "" },
      component: "Textbox",
      example_input: "Hello!!",
      description: undefined,
    },
    {
      label: "Background Prompt (optional)",
      parameter_name: "bg_prompt",
      parameter_has_default: true,
      parameter_default: null,
      type: "string",
      python_type: { type: "str", description: "" },
      component: "Textbox",
      example_input: "Hello!!",
      description: undefined,
    },
    {
      label: "Aspect Ratio",
      parameter_name: "aspect_ratio",
      parameter_has_default: true,
      parameter_default: "1:1",
      type: "string",
      python_type: { type: "Literal['1:1', '16:9', '9:16', '4:3']", description: "" },
      component: "Dropdown",
      example_input: "1:1",
      description: undefined,
    },
    {
      component: "state",
      example: null,
      parameter_default: null,
      parameter_has_default: true,
      parameter_name: null,
      hidden: true,
      description: undefined,
      type: "",
    },
    {
      label: "Object Size (%)",
      parameter_name: "scale_percent",
      parameter_has_default: true,
      parameter_default: 50,
      type: "number",
      python_type: { type: "float", description: "" },
      component: "Slider",
      example_input: 10,
      description: "numeric value between 10 and 200",
    },
  ],
  returns: [
    {
      label: "Combined Result",
      type: "string",
      python_type: {
        type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())",
        description: "",
      },
      component: "Image",
      description: undefined,
    },
    {
      label: "Extracted Object",
      type: "string",
      python_type: {
        type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())",
        description: "",
      },
      component: "Image",
      description: undefined,
    },
  ],
  type: { generator: false, cancel: false },
}

[
  {
    label: "Input Screenshot",
    parameter_name: "image",
    parameter_has_default: false,
    parameter_default: null,
    type: "Blob | File | Buffer",
    python_type: {
      type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())",
      description: "For input, either path or url must be provided. For output, path is always provided.",
    },
    component: "Image",
    example_input: {
      path: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
      meta: { _type: "gradio.FileData" },
      orig_name: "bus.png",
      url: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
    },
    description: undefined,
  },
  {
    label: "Query",
    parameter_name: "query",
    parameter_has_default: false,
    parameter_default: null,
    type: "string",
    python_type: { type: "str", description: "" },
    component: "Textbox",
    example_input: "Hello!!",
    description: undefined,
  },
  {
    label: "Refinement Steps",
    parameter_name: "iterations",
    parameter_has_default: true,
    parameter_default: 1,
    type: "number",
    python_type: { type: "float", description: "" },
    component: "Slider",
    example_input: 1,
    description: "numeric value between 1 and 3",
  },
  {
    label: "Is Example Image",
    parameter_name: "is_example_image",
    parameter_has_default: true,
    parameter_default: "False",
    type: "string",
    python_type: { type: "Literal['True', 'False']", description: "" },
    component: "Dropdown",
    example_input: "True",
    description: undefined,
  },
]

[
  {
    label: "Input",
    parameter_name: "query",
    parameter_has_default: false,
    parameter_default: null,
    type: "string",
    python_type: { type: "str", description: "" },
    component: "Textbox",
    example_input: "Hello!!",
    description: undefined,
  },
  {
    label: "Qwen2.5-72B-Instruct",
    parameter_name: "history",
    parameter_has_default: true,
    parameter_default: [],
    type: "",
    python_type: {
      type: "Tuple[str | Dict(file: filepath, alt_text: str | None) | Dict(component: str, value: Any, constructor_args: Dict(), props: Dict()) | None, str | Dict(file: filepath, alt_text: str | None) | Dict(component: str, value: Any, constructor_args: Dict(), props: Dict()) | None]",
      description: "",
    },
    component: "Chatbot",
    example_input: [["Hello!", null]],
    description: undefined,
  },
  {
    label: "parameter_8",
    parameter_name: "system",
    parameter_has_default: true,
    parameter_default: "You are Qwen, created by Alibaba Cloud. You are a helpful assistant.",
    type: "string",
    python_type: { type: "str", description: "" },
    component: "Textbox",
    example_input: "Hello!!",
    description: undefined,
  },
]

[
  {
    label: "Prompt",
    parameter_name: "prompt",
    parameter_has_default: false,
    parameter_default: null,
    type: "string",
    python_type: { type: "str", description: "" },
    component: "Textbox",
    example_input: "Hello!!",
    description: undefined,
  },
  {
    label: "Seed",
    parameter_name: "seed",
    parameter_has_default: true,
    parameter_default: 0,
    type: "number",
    python_type: { type: "float", description: "numeric value between 0 and 2147483647" },
    component: "Slider",
    example_input: 0,
    description: "numeric value between 0 and 2147483647",
  },
  {
    label: "Randomize seed",
    parameter_name: "randomize_seed",
    parameter_has_default: true,
    parameter_default: true,
    type: "boolean",
    python_type: { type: "bool", description: "" },
    component: "Checkbox",
    example_input: true,
    description: undefined,
  },
  {
    label: "Width",
    parameter_name: "width",
    parameter_has_default: true,
    parameter_default: 1024,
    type: "number",
    python_type: { type: "float", description: "numeric value between 256 and 2048" },
    component: "Slider",
    example_input: 256,
    description: "numeric value between 256 and 2048",
  },
  {
    label: "Height",
    parameter_name: "height",
    parameter_has_default: true,
    parameter_default: 1024,
    type: "number",
    python_type: { type: "float", description: "numeric value between 256 and 2048" },
    component: "Slider",
    example_input: 256,
    description: "numeric value between 256 and 2048",
  },
  {
    label: "Number of inference steps",
    parameter_name: "num_inference_steps",
    parameter_has_default: true,
    parameter_default: 4,
    type: "number",
    python_type: { type: "float", description: "numeric value between 1 and 50" },
    component: "Slider",
    example_input: 1,
    description: "numeric value between 1 and 50",
  },
]

TRELLIS: {
  "/preprocess_image": {
    parameters: [
      {
        label: "Image Prompt",
        parameter_name: "image",
        parameter_has_default: false,
        parameter_default: null,
        type: "Blob | File | Buffer",
        python_type: { type: "filepath", description: "" },
        component: "Image",
        example_input: {
          path: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
          meta: { _type: "gradio.FileData" },
          orig_name: "bus.png",
          url: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
        },
        description: undefined,
      },
    ],
    returns: [
      {
        label: "value_29",
        type: "string",
        python_type: { type: "str", description: "" },
        component: "Textbox",
        description: undefined,
      },
      {
        label: "Image Prompt",
        type: "string",
        python_type: { type: "filepath", description: "" },
        component: "Image",
        description: undefined,
      },
    ],
    type: { generator: false, cancel: false },
  },
  "/lambda": {
    parameters: [],
    returns: [
      {
        label: "value_29",
        type: "string",
        python_type: { type: "str", description: "" },
        component: "Textbox",
        description: undefined,
      },
    ],
    type: { generator: false, cancel: false },
  },
  "/image_to_3d": {
    parameters: [
      {
        label: "parameter_29",
        parameter_name: "trial_id",
        parameter_has_default: false,
        parameter_default: null,
        type: "string",
        python_type: { type: "str", description: "" },
        component: "Textbox",
        example_input: "Hello!!",
        description: undefined,
      },
      {
        label: "Seed",
        parameter_name: "seed",
        parameter_has_default: true,
        parameter_default: 0,
        type: "number",
        python_type: { type: "float", description: "numeric value between 0 and 2147483647" },
        component: "Slider",
        example_input: 0,
        description: "numeric value between 0 and 2147483647",
      },
      {
        label: "Randomize Seed",
        parameter_name: "randomize_seed",
        parameter_has_default: true,
        parameter_default: true,
        type: "boolean",
        python_type: { type: "bool", description: "" },
        component: "Checkbox",
        example_input: true,
        description: undefined,
      },
      {
        label: "Guidance Strength",
        parameter_name: "ss_guidance_strength",
        parameter_has_default: true,
        parameter_default: 7.5,
        type: "number",
        python_type: { type: "float", description: "numeric value between 0.0 and 10.0" },
        component: "Slider",
        example_input: 0,
        description: "numeric value between 0.0 and 10.0",
      },
      {
        label: "Sampling Steps",
        parameter_name: "ss_sampling_steps",
        parameter_has_default: true,
        parameter_default: 12,
        type: "number",
        python_type: { type: "float", description: "numeric value between 1 and 50" },
        component: "Slider",
        example_input: 1,
        description: "numeric value between 1 and 50",
      },
      {
        label: "Guidance Strength",
        parameter_name: "slat_guidance_strength",
        parameter_has_default: true,
        parameter_default: 3,
        type: "number",
        python_type: { type: "float", description: "numeric value between 0.0 and 10.0" },
        component: "Slider",
        example_input: 0,
        description: "numeric value between 0.0 and 10.0",
      },
      {
        label: "Sampling Steps",
        parameter_name: "slat_sampling_steps",
        parameter_has_default: true,
        parameter_default: 12,
        type: "number",
        python_type: { type: "float", description: "numeric value between 1 and 50" },
        component: "Slider",
        example_input: 1,
        description: "numeric value between 1 and 50",
      },
    ],
    returns: [
      {
        label: "Generated 3D Asset",
        type: "",
        python_type: { type: "Dict(video: filepath, subtitles: filepath | None)", description: "" },
        component: "Video",
        description: undefined,
      },
    ],
    type: { generator: false, cancel: false },
  },
  "/activate_button": {
    parameters: [],
    returns: [],
    type: { generator: false, cancel: false },
  },
  "/deactivate_button": {
    parameters: [],
    returns: [],
    type: { generator: false, cancel: false },
  },
  "/extract_glb": {
    parameters: [
      {
        component: "state",
        example: null,
        parameter_default: null,
        parameter_has_default: true,
        parameter_name: null,
        hidden: true,
        description: undefined,
        type: "",
      },
      {
        label: "Simplify",
        parameter_name: "mesh_simplify",
        parameter_has_default: true,
        parameter_default: 0.95,
        type: "number",
        python_type: { type: "float", description: "numeric value between 0.9 and 0.98" },
        component: "Slider",
        example_input: 0.9,
        description: "numeric value between 0.9 and 0.98",
      },
      {
        label: "Texture Size",
        parameter_name: "texture_size",
        parameter_has_default: true,
        parameter_default: 1024,
        type: "number",
        python_type: { type: "float", description: "numeric value between 512 and 2048" },
        component: "Slider",
        example_input: 512,
        description: "numeric value between 512 and 2048",
      },
    ],
    returns: [
      {
        label: "Extracted GLB",
        type: "",
        python_type: { type: "filepath", description: "" },
        component: "Litmodel3d",
        description: undefined,
      },
      {
        label: "Download GLB",
        type: "",
        python_type: { type: "filepath", description: "" },
        component: "Downloadbutton",
        description: undefined,
      },
    ],
    type: { generator: false, cancel: false },
  },
  "/activate_button_1": {
    parameters: [],
    returns: [
      {
        label: "Download GLB",
        type: "",
        python_type: { type: "filepath", description: "" },
        component: "Downloadbutton",
        description: undefined,
      },
    ],
    type: { generator: false, cancel: false },
  },
  "/deactivate_button_1": {
    parameters: [],
    returns: [
      {
        label: "Download GLB",
        type: "",
        python_type: { type: "filepath", description: "" },
        component: "Downloadbutton",
        description: undefined,
      },
    ],
    type: { generator: false, cancel: false },
  },
}
```