# Directory Structure

```
├── .gitignore
├── .npmrc
├── eslint.config.js
├── images
│   ├── 2024-12-05-flux-shuttle.png
│   ├── 2024-12-08-mcp-omni-artifact.png
│   ├── 2024-12-08-mcp-parler.png
│   ├── 2024-12-09-bowie.png
│   ├── 2024-12-09-flower.png
│   ├── 2024-12-09-qwen-reason.png
│   └── 2024-12-09-transcribe.png
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── scripts
│   └── generate-version.js
├── src
│   ├── config.ts
│   ├── content_converter.ts
│   ├── endpoint_wrapper.ts
│   ├── gradio_api.ts
│   ├── gradio_convert.ts
│   ├── index.ts
│   ├── mime_types.ts
│   ├── progress_notifier.ts
│   ├── types.ts
│   └── working_directory.ts
├── test
│   ├── endpoint_wrapper.test.ts
│   ├── parameter_test.json
│   └── utils.test.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | node_modules/
2 | build/
3 | *.log
4 | .env*
5 | src/version.ts
```

--------------------------------------------------------------------------------
/.npmrc:
--------------------------------------------------------------------------------

```
1 | 
2 | save-exact=true
3 | package-lock=true
4 | engine-strict=true
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
1 | # mcp-hfspace MCP Server 🤗
2 | 
3 | Read the introduction here [llmindset.co.uk/resources/mcp-hfspace/](https://llmindset.co.uk/resources/mcp-hfspace/)
4 | 
5 | Connect to [Hugging Face Spaces](https://huggingface.co/spaces) with minimal setup needed - simply add your spaces and go!
6 | 
7 | By default, it connects to `evalstate/FLUX.1-schnell` providing Image Generation capabilities to Claude Desktop.
8 | 
9 | 
10 | 
11 | ## Installation
12 | 
13 | The NPM package is `@llmindset/mcp-hfspace`.
14 | 
15 | Install a recent version of [NodeJS](https://nodejs.org/en/download) for your platform, then add the following to the `mcpServers` section of your `claude_desktop_config.json` file:
16 | 
17 | ```json
18 | "mcp-hfspace": {
19 |   "command": "npx",
20 |   "args": [
21 |     "-y",
22 |     "@llmindset/mcp-hfspace"
23 |   ]
24 | }
25 | ```
26 | 
27 | Please make sure you are using Claude Desktop 0.78 or greater.
28 | 
29 | This will get you started with an Image Generator.
30 | 
31 | ### Basic setup
32 | 
33 | Supply a list of HuggingFace spaces in the arguments. mcp-hfspace will find the most appropriate endpoint and automatically configure it for usage. An example `claude_desktop_config.json` is supplied [below](#installation).
34 | 
35 | By default, the current working directory is used for file upload/download. On Windows this is a read/write folder at `\users\<username>\AppData\Roaming\Claude\<version.number>\`, and on MacOS it is the read-only root: `/`.
36 | 
37 | It is recommended to override this and set a Working Directory for handling the upload and download of images and other file-based content. Specify either the `--work-dir=/your_directory` argument or the `MCP_HF_WORK_DIR` environment variable.
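As a sketch (the path will vary by machine), the working directory can also be supplied through the `env` block of the server entry, which Claude Desktop passes to the server process:

```json
"mcp-hfspace": {
  "command": "npx",
  "args": ["-y", "@llmindset/mcp-hfspace"],
  "env": {
    "MCP_HF_WORK_DIR": "/Users/evalstate/mcp-store"
  }
}
```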
38 | 
39 | An example configuration for using a modern image generator, vision model and text-to-speech, with a working directory set, is below:
40 | 
41 | ```json
42 | "mcp-hfspace": {
43 |   "command": "npx",
44 |   "args": [
45 |     "-y",
46 |     "@llmindset/mcp-hfspace",
47 |     "--work-dir=/Users/evalstate/mcp-store",
48 |     "shuttleai/shuttle-jaguar",
49 |     "styletts2/styletts2",
50 |     "Qwen/QVQ-72B-preview"
51 |   ]
52 | }
53 | ```
54 | 
55 | 
56 | To use private spaces, supply your Hugging Face Token with either the `--hf-token=hf_...` argument or the `HF_TOKEN` environment variable.
57 | 
58 | It's possible to run multiple server instances to use different working directories and tokens if needed.
59 | 
60 | ## File Handling and Claude Desktop Mode
61 | 
62 | By default, the Server operates in _Claude Desktop Mode_. In this mode, images are returned in the tool responses, while other files are saved in the working folder and their file path is returned as a message. This will usually give the best experience if using Claude Desktop as the client.
63 | 
64 | URLs can also be supplied as inputs: the content gets passed to the Space.
65 | 
66 | There is an "Available Resources" prompt that gives Claude the available files and mime types from your working directory. This is currently the best way to manage files.
67 | 
68 | ### Example 1 - Image Generation (Download Image / Claude Vision)
69 | 
70 | We'll use Claude to compare images created by `shuttleai/shuttle-3.1-aesthetic` and `FLUX.1-schnell`. The images get saved to the Work Directory, as well as included in Claude's context window - so Claude can use its vision capabilities.
71 | 
72 | 
73 | 
74 | ### Example 2 - Vision Model (Upload Image)
75 | 
76 | We'll use `merve/paligemma2-vqav2` [space link](https://huggingface.co/spaces/merve/paligemma2-vqav2) to query an image. In this case, we specify a filename which is available in the Working Directory: we don't want to upload the image directly to Claude's context window. So, we can prompt Claude:
77 | 
78 | `use paligemma to find out who is in "test_gemma.jpg"` -> `Text Output: david bowie`
79 | 
80 | 
81 | _If you are uploading something to Claude's context use the Paperclip Attachment button, otherwise specify the filename for the Server to send directly._
82 | 
83 | We can also supply a URL. For example: `use paligemma to detect humans in https://e3.365dm.com/24/12/1600x900/skynews-taylor-swift-eras-tour_6771083.jpg?20241209000914` -> `One person is detected in the image - Taylor Swift on stage.`
84 | 
85 | ### Example 3 - Text-to-Speech (Download Audio)
86 | 
87 | In _Claude Desktop Mode_, the audio file is saved in the WORK_DIR, and Claude is notified of the creation. If not in desktop mode, the file is returned as a base64-encoded resource to the Client (useful if it supports embedded audio attachments).
88 | 
89 | 
90 | 
91 | ### Example 4 - Speech-to-Text (Upload Audio)
92 | 
93 | Here, we use `hf-audio/whisper-large-v3-turbo` to transcribe some audio, and make it available to Claude.
94 | 
95 | 
96 | 
97 | ### Example 5 - Image-to-Image
98 | 
99 | In this example, we specify the filename for `microsoft/OmniParser` to use, and get back an annotated image and two separate pieces of text: descriptions and coordinates. The prompts used were `use omniparser to analyse ./screenshot.png` and `use the analysis to produce an artifact that reproduces that screen`. `DawnC/PawMatchAI` is also good at this.
100 | 
101 | 
102 | 
103 | ### Example 6 - Chat
104 | 
105 | In this example, Claude sets a number of reasoning puzzles for Qwen, and asks follow-up questions for clarification.
106 | 
107 | 
108 | 
109 | ### Specifying API Endpoint
110 | 
111 | If needed, you can specify a particular API Endpoint by appending it to the space name. So rather than passing in `Qwen/Qwen2.5-72B-Instruct` you would use `Qwen/Qwen2.5-72B-Instruct/model_chat`.
112 | 
113 | ### Claude Desktop Mode
114 | 
115 | This can be disabled with the option `--desktop-mode=false` or the environment variable `CLAUDE_DESKTOP_MODE=false`. In this case, content is returned as an embedded Base64-encoded Resource.
116 | 
117 | ## Recommended Spaces
118 | 
119 | Some recommended spaces to try:
120 | 
121 | ### Image Generation
122 | 
123 | - shuttleai/shuttle-3.1-aesthetic
124 | - black-forest-labs/FLUX.1-schnell
125 | - yanze/PuLID-FLUX
126 | - Inspyrenet-Rembg (Background Removal)
127 | - diyism/Datou1111-shou_xin - [Beautiful Pencil Drawings](https://x.com/ClementDelangue/status/1867318931502895358)
128 | 
129 | ### Chat
130 | 
131 | - Qwen/Qwen2.5-72B-Instruct
132 | - prithivMLmods/Mistral-7B-Instruct-v0.3
133 | 
134 | ### Text-to-speech / Audio Generation
135 | 
136 | - fantaxy/Sound-AI-SFX
137 | - parler-tts/parler_tts
138 | 
139 | ### Speech-to-text
140 | 
141 | - hf-audio/whisper-large-v3-turbo
142 | - (the OpenAI models use unnamed parameters, so they will not work)
143 | 
144 | ### Text-to-music
145 | 
146 | - haoheliu/audioldm2-text2audio-text2music
147 | 
148 | ### Vision Tasks
149 | 
150 | - microsoft/OmniParser
151 | - merve/paligemma2-vqav2
152 | - merve/paligemma-doc
153 | - DawnC/PawMatchAI
154 | - DawnC/PawMatchAI/on_find_match_click - for interactive dog recommendations
155 | 
156 | ## Other Features
157 | 
158 | ### Prompts
159 | 
160 | Prompts are generated for each Space, and provide an opportunity to input its parameters. Bear in mind that Spaces often aren't configured with particularly helpful labels etc. Claude is actually very good at figuring this out, and the Tool description is quite rich (but not visible in Claude Desktop).
161 | 
162 | ### Resources
163 | 
164 | A list of files in the WORK_DIR is returned, and as a convenience each name is returned as "Use the file..." text. If you want to add something to Claude's context, use the paperclip - otherwise specify the filename for the MCP Server. Claude does not support transmitting resources from within Context.
165 | 
166 | ### Private Spaces
167 | 
168 | Private Spaces are supported with a HuggingFace token. The token is used to download and save generated content.
169 | 
170 | ### Using Claude Desktop
171 | 
172 | To use with Claude Desktop, add the server config:
173 | 
174 | On MacOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
175 | On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
176 | 
177 | ```json
178 | {
179 |   "mcpServers": {
180 |     "mcp-hfspace": {
181 |       "command": "npx",
182 |       "args": [
183 |         "-y",
184 |         "@llmindset/mcp-hfspace",
185 |         "--work-dir=~/mcp-files/ or x:/temp/mcp-files/",
186 |         "--hf-token=hf_{optional token}",
187 |         "Qwen/Qwen2-72B-Instruct",
188 |         "black-forest-labs/FLUX.1-schnell",
189 |         "space/example/specific-endpoint"
190 |         (... and so on)
191 |       ]
192 |     }
193 |   }
194 | }
195 | ```
196 | 
197 | ## Known Issues and Limitations
198 | 
199 | ### mcp-hfspace
200 | 
201 | - Endpoints with unnamed parameters are unsupported for the moment.
202 | - Translation from some complex Python types to suitable MCP formats is incomplete.
203 | 
204 | ### Claude Desktop
205 | 
206 | - Claude Desktop 0.75 doesn't seem to respond to errors from the MCP Server, timing out instead. For persistent issues, use the MCP Inspector to get a better look at what's going wrong. If something suddenly stops working, it's probably because you have exhausted your HuggingFace ZeroGPU quota - try again after a short period, or set up your own Space for hosting.
207 | - Claude Desktop seems to use a hard timeout value of 60s, and doesn't appear to use Progress Notifications to manage UX or keep-alive. If you are using ZeroGPU spaces, large/heavy jobs may time out. Check the WORK_DIR for results though; the MCP Server will still capture and save the result if it was produced.
208 | - Claude Desktop's reporting of Server Status, logging etc. isn't great - use [@modelcontextprotocol/inspector](https://github.com/modelcontextprotocol/inspector) to help diagnose issues.
209 | 
210 | ### HuggingFace Spaces
211 | 
212 | - If ZeroGPU quotas or queues are too long, try duplicating the space. If your job takes less than sixty seconds, you can usually change the function decorator `@spaces.GPU(duration=20)` in `app.py` to request less quota when running the job.
213 | - If you have a HuggingFace Pro account, please note that the Gradio API does not use your additional quota for ZeroGPU jobs - you will need to set an `X-IP-Token` header to achieve that.
214 | - If you have a private space and dedicated hardware, your HF_TOKEN will give you direct access to that - no quotas apply. I recommend this if you are using it for any kind of production task.
215 | 
216 | ## Third Party MCP Services
217 | 
218 | <a href="https://glama.ai/mcp/servers/s57c80wvgq"><img width="380" height="200" src="https://glama.ai/mcp/servers/s57c80wvgq/badge" alt="mcp-hfspace MCP server" /></a>
219 | ```

--------------------------------------------------------------------------------
/src/types.ts:
--------------------------------------------------------------------------------

```typescript
1 | export type GradioOutput = {
2 |   label: string;
3 |   type: string;
4 |   python_type: {
5 |     type: string;
6 |     description: string;
7 |   };
8 |   component: string;
9 |   description?: string;
10 | };
11 | 
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "compilerOptions": {
3 |     "target": "ES2022",
4 |     "module": "Node16",
5 |     "moduleResolution": "Node16",
6 |     "outDir": "./build",
7 |     "rootDir": "./src",
8 |     "strict": true,
9 |     "esModuleInterop": true,
10 |     "skipLibCheck": true,
11 |     "forceConsistentCasingInFileNames": true,
12 |     "types": ["node"]
13 |   },
14 |   "include": ["src/**/*"],
15 |   "exclude": ["node_modules"]
16 | }
17 | 
```

--------------------------------------------------------------------------------
/eslint.config.js:
--------------------------------------------------------------------------------

```javascript
1 | import globals from "globals";
2 | import pluginJs from "@eslint/js";
3 | import tseslint from "typescript-eslint";
4 | 
5 | 
6 | /** @type {import('eslint').Linter.Config[]} */
7 | export default [
8 |   {files: ["**/*.{js,mjs,cjs,ts}"]},
9 |   {files: ["**/*.js"], languageOptions: {sourceType: "commonjs"}},
10 |   {languageOptions: { globals: globals.browser }},
11 |   pluginJs.configs.recommended,
12 |   ...tseslint.configs.recommended,
13 | ];
```

--------------------------------------------------------------------------------
/scripts/generate-version.js: -------------------------------------------------------------------------------- ```javascript 1 | // scripts/generate-version.js 2 | import { readFileSync, writeFileSync } from 'fs'; 3 | import { fileURLToPath } from 'url'; 4 | import { dirname, join } from 'path'; 5 | 6 | const __filename = fileURLToPath(import.meta.url); 7 | const __dirname = dirname(__filename); 8 | 9 | const packageJson = JSON.parse( 10 | readFileSync(join(__dirname, '../package.json'), 'utf8') 11 | ); 12 | 13 | const content = `// Generated file - do not edit 14 | export const VERSION = '${packageJson.version}'; 15 | `; 16 | 17 | writeFileSync(join(__dirname, '../src/version.ts'), content); 18 | console.log(`Generated version.ts with version ${packageJson.version}`); ``` -------------------------------------------------------------------------------- /src/gradio_api.ts: -------------------------------------------------------------------------------- ```typescript 1 | // Just the types we need for the API structure - copied from Gradio client library 2 | export interface ApiParameter { 3 | label: string; 4 | parameter_name?: string; // Now optional 5 | parameter_has_default?: boolean; 6 | parameter_default?: unknown; 7 | type: string; 8 | python_type: { 9 | type: string; 10 | description?: string; 11 | }; 12 | component: string; 13 | example_input?: string; 14 | description?: string; 15 | } 16 | export interface ApiEndpoint { 17 | parameters: ApiParameter[]; 18 | returns: { 19 | label: string; 20 | type: string; 21 | python_type: { 22 | type: string; 23 | description: string; 24 | }; 25 | component: string; 26 | }[]; 27 | type: { 28 | generator: boolean; 29 | cancel: boolean; 30 | }; 31 | } 32 | export interface ApiStructure { 33 | named_endpoints: Record<string, ApiEndpoint>; 34 | unnamed_endpoints: Record<string, ApiEndpoint>; 35 | } 36 | 37 | export type ApiReturn = { 38 | label: string; 39 | type: string; 40 | python_type: { 41 | type: string; 42 | description: string; 43 | }; 44 | component: string; 45 | }; 46 | ``` -------------------------------------------------------------------------------- /src/config.ts: -------------------------------------------------------------------------------- ```typescript 1 | import minimist from "minimist"; 2 | import path from "path"; 3 | 4 | export interface Config { 5 | claudeDesktopMode: boolean; 6 | workDir: string; 7 | spacePaths: string[]; 8 | hfToken?: string; 9 | debug: boolean; 10 | } 11 | 12 | export const config = parseConfig(); 13 | 14 | export function parseConfig(): Config { 15 | const argv = minimist(process.argv.slice(2), { 16 | string: ["work-dir", "hf-token"], 17 | boolean: ["desktop-mode", "debug"], 18 | default: { 19 | "desktop-mode": process.env.CLAUDE_DESKTOP_MODE !== "false", 20 | "work-dir": process.env.MCP_HF_WORK_DIR || process.cwd(), 21 | "hf-token": process.env.HF_TOKEN, 22 | debug: false, 23 | }, 24 | "--": true, 25 | }); 26 | 27 | return { 28 | claudeDesktopMode: argv["desktop-mode"], 29 | workDir: path.resolve(argv["work-dir"]), 30 | hfToken: argv["hf-token"], 31 | debug: argv["debug"], 32 | spacePaths: (() => { 33 | const filtered = argv._.filter((arg) => arg.toString().trim().length > 0); 34 | return filtered.length > 0 ? 
filtered : ["evalstate/FLUX.1-schnell"]; 35 | })(), 36 | }; 37 | } 38 | ``` -------------------------------------------------------------------------------- /src/mime_types.ts: -------------------------------------------------------------------------------- ```typescript 1 | /** 2 | * Supported MIME types and related utilities 3 | * @packageDocumentation 4 | */ 5 | 6 | /** Known MIME types that should be handled as text */ 7 | export const textBasedMimeTypes = [ 8 | // Standard text formats 9 | "text/*", 10 | 11 | // Data interchange 12 | "application/json", 13 | "application/xml", 14 | "application/yaml", 15 | "application/javascript", 16 | "application/typescript", 17 | ] as readonly string[]; 18 | 19 | /** Supported document types */ 20 | export const documentMimeTypes = ["application/pdf"] as const; 21 | 22 | export const imageMimeTypes = [ 23 | "image/jpeg", 24 | "image/webp", 25 | "image/gif", 26 | "image/png", 27 | ]; 28 | 29 | /** All supported MIME types */ 30 | export const claudeSupportedMimeTypes = [ 31 | ...textBasedMimeTypes, 32 | ...documentMimeTypes, 33 | ...imageMimeTypes, 34 | ] as const; 35 | 36 | export const FALLBACK_MIME_TYPE = "application/octet-stream"; 37 | 38 | export function treatAsText(mimetype: string) { 39 | if (mimetype.startsWith("text/")) return true; 40 | if (textBasedMimeTypes.includes(mimetype)) return true; 41 | if (mimetype.indexOf("vnd.openxmlformats") > 0) return true; 42 | return false; 43 | } 44 | ``` -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- ```json 1 | { 2 | "name": "@llmindset/mcp-hfspace", 3 | "version": "0.5.0", 4 | "description": "MCP Server to connect to Hugging Face spaces. 
Simple configuration, Claude Desktop friendly.", 5 | "type": "module", 6 | "publishConfig": { 7 | "access": "public" 8 | }, 9 | "bin": { 10 | "mcp-hfspace": "./build/index.js" 11 | }, 12 | "files": [ 13 | "build" 14 | ], 15 | "repository": { 16 | "type": "git", 17 | "url": "git+https://github.com/evalstate/mcp-hfspace" 18 | }, 19 | "bugs": { 20 | "url": "https://github.com/evalstate/mcp-hfspace/issues" 21 | }, 22 | "engines": { 23 | "node": ">=18", 24 | "npm": ">=9" 25 | }, 26 | "scripts": { 27 | "clean": "rimraf build", 28 | "prebuild": "node scripts/generate-version.js", 29 | "build": "npm run lint:fix && npm run format:fix && npm run clean && npm run prebuild && tsc", 30 | "prepack": "npm run build", 31 | "lint": "eslint src/**/*.ts --max-warnings 0", 32 | "lint:fix": "eslint src/**/*.ts --fix", 33 | "format": "prettier --write \"src/**/*.ts\"", 34 | "format:fix": "prettier --write \"src/**/*.ts\"", 35 | "validate": "eslint src/**/*.ts && prettier --check \"src/**/*.ts\"", 36 | "watch": "tsc --watch", 37 | "inspector": "npx @modelcontextprotocol/inspector build/index.js", 38 | "test": "vitest", 39 | "test:watch": "vitest watch", 40 | "coverage": "vitest run --coverage" 41 | }, 42 | "dependencies": { 43 | "@gradio/client": "^1.8.0", 44 | "@modelcontextprotocol/sdk": "0.6.0", 45 | "mime": "^4.0.6", 46 | "minimist": "^1.2.8" 47 | }, 48 | "devDependencies": { 49 | "@eslint/js": "9.19.0", 50 | "@types/minimist": "^1.2.5", 51 | "@types/node": "^20.11.24", 52 | "@typescript-eslint/eslint-plugin": "latest", 53 | "@typescript-eslint/parser": "latest", 54 | "eslint": "9.19.0", 55 | "globals": "15.14.0", 56 | "prettier": "latest", 57 | "rimraf": "^5.0.1", 58 | "typescript": "^5.3.3", 59 | "typescript-eslint": "8.21.0", 60 | "vitest": "^2.1.8" 61 | } 62 | } 63 | ``` -------------------------------------------------------------------------------- /src/progress_notifier.ts: -------------------------------------------------------------------------------- ```typescript 1 | import { Status } from "@gradio/client"; 2 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 3 | import type { ProgressNotification } from "@modelcontextprotocol/sdk/types.js"; 4 | 5 | export interface ProgressNotifier { 6 | notify(status: Status, progressToken: string | number): Promise<void>; 7 | } 8 | 9 | export function createProgressNotifier(server: Server): ProgressNotifier { 10 | let lastProgress = 0; 11 | 12 | function createNotification( 13 | status: Status, 14 | progressToken: string | number, 15 | ): ProgressNotification { 16 | let progress = lastProgress; 17 | const total = 100; 18 | 19 | if (status.progress_data?.length) { 20 | const item = status.progress_data[0]; 21 | if ( 22 | item && 23 | typeof item.index === "number" && 24 | typeof item.length === "number" 25 | ) { 26 | const stepProgress = (item.index / (item.length - 1)) * 80; 27 | progress = Math.round(10 + stepProgress); 28 | } 29 | } else { 30 | switch (status.stage) { 31 | case "pending": 32 | progress = status.queue ? (status.position === 0 ? 
10 : 5) : 15; 33 | break; 34 | case "generating": 35 | progress = 50; 36 | break; 37 | case "complete": 38 | progress = 100; 39 | break; 40 | case "error": 41 | progress = lastProgress; 42 | break; 43 | } 44 | } 45 | 46 | progress = Math.max(progress, lastProgress); 47 | if (status.stage === "complete") { 48 | progress = 100; 49 | } else if (progress === lastProgress && lastProgress >= 75) { 50 | progress = Math.min(99, lastProgress + 1); 51 | } 52 | 53 | lastProgress = progress; 54 | 55 | let message = status.message; 56 | if (!message) { 57 | if (status.queue && status.position !== undefined) { 58 | message = `Queued at position ${status.position}`; 59 | } else if (status.progress_data?.length) { 60 | const item = status.progress_data[0]; 61 | message = item.desc || `Step ${item.index + 1} of ${item.length}`; 62 | } else { 63 | message = status.stage.charAt(0).toUpperCase() + status.stage.slice(1); 64 | } 65 | } 66 | 67 | return { 68 | method: "notifications/progress", 69 | params: { 70 | progressToken, 71 | progress, 72 | total, 73 | message, 74 | _meta: status, 75 | }, 76 | }; 77 | } 78 | 79 | return { 80 | async notify(status: Status, progressToken: string | number) { 81 | if (!progressToken) return; 82 | const notification = createNotification(status, progressToken); 83 | await server.notification(notification); 84 | }, 85 | }; 86 | } 87 | ``` -------------------------------------------------------------------------------- /src/gradio_convert.ts: -------------------------------------------------------------------------------- ```typescript 1 | import type { Tool } from "@modelcontextprotocol/sdk/types.js"; 2 | import type { ApiEndpoint, ApiParameter } from "./gradio_api.js"; 3 | 4 | // Type for a parameter schema in MCP Tool 5 | export type ParameterSchema = Tool["inputSchema"]["properties"]; 6 | 7 | function parseNumberConstraints(description: string = "") { 8 | const constraints: { minimum?: number; maximum?: number } = {}; 9 | 10 | // Check for "between X and Y" format 11 | const betweenMatch = description.match( 12 | /between\s+(-?\d+\.?\d*)\s+and\s+(-?\d+\.?\d*)/i, 13 | ); 14 | if (betweenMatch) { 15 | constraints.minimum = Number(betweenMatch[1]); 16 | constraints.maximum = Number(betweenMatch[2]); 17 | return constraints; 18 | } 19 | 20 | // Fall back to existing min/max parsing 21 | const minMatch = description.match(/min(?:imum)?\s*[:=]\s*(-?\d+\.?\d*)/i); 22 | const maxMatch = description.match(/max(?:imum)?\s*[:=]\s*(-?\d+\.?\d*)/i); 23 | 24 | if (minMatch) constraints.minimum = Number(minMatch[1]); 25 | if (maxMatch) constraints.maximum = Number(maxMatch[1]); 26 | return constraints; 27 | } 28 | 29 | export function isFileParameter(param: ApiParameter): boolean { 30 | return ( 31 | param.python_type?.type === "filepath" || 32 | param.type === "Blob | File | Buffer" || 33 | param.component === "Image" || 34 | param.component === "Audio" 35 | ); 36 | } 37 | 38 | export function convertParameter(param: ApiParameter): ParameterSchema { 39 | // Start with determining the base type and description 40 | let baseType = param.type || "string"; 41 | let baseDescription = 42 | param.python_type?.description || param.label || undefined; 43 | 44 | // Special case for chat history - override type and description 45 | if (param.parameter_name === "history" && param.component === "Chatbot") { 46 | baseType = "array"; 47 | baseDescription = 48 | "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. 
Advanced: messages can also be file references or UI components."; 49 | } 50 | 51 | // Handle file types with specific descriptions 52 | if (isFileParameter(param)) { 53 | baseType = "string"; // Always string for file inputs 54 | if (param.component === "Audio") { 55 | baseDescription = 56 | "Accepts: Audio file URL, file path, file name, or resource identifier"; 57 | } else if (param.component === "Image") { 58 | baseDescription = 59 | "Accepts: Image file URL, file path, file name, or resource identifier"; 60 | } else { 61 | baseDescription = 62 | "Accepts: URL, file path, file name, or resource identifier"; 63 | } 64 | } 65 | 66 | const baseSchema = { 67 | type: baseType, 68 | description: baseDescription, 69 | ...(param.parameter_has_default && { 70 | default: param.parameter_default, 71 | }), 72 | ...(param.example_input && { 73 | examples: [param.example_input], 74 | }), 75 | }; 76 | // Add number constraints if it's a number type 77 | if (param.type === "number" && param.python_type?.description) { 78 | const constraints = parseNumberConstraints(param.python_type.description); 79 | return { ...baseSchema, ...constraints }; 80 | } 81 | 82 | // Handle Literal type to extract enum values 83 | if (param.python_type?.type?.startsWith("Literal[")) { 84 | const enumValues = param.python_type.type 85 | .slice(8, -1) // Remove "Literal[" and "]" 86 | .split(",") 87 | .map((value) => value.trim().replace(/['"]/g, "")); // Remove quotes and trim spaces 88 | return { 89 | ...baseSchema, 90 | description: param.python_type?.description || param.label || undefined, 91 | enum: enumValues, 92 | }; 93 | } 94 | 95 | return baseSchema; 96 | } 97 | 98 | export function convertApiToSchema(endpoint: ApiEndpoint) { 99 | const properties: { [key: string]: ParameterSchema } = {}; 100 | const required: string[] = []; 101 | let propertyCounter = 1; 102 | const unnamedParameters: Record<string, number> = {}; 103 | 104 | endpoint.parameters.forEach((param: ApiParameter, index: number) => { 105 | // Get property name from parameter_name, label, or generate one 106 | const propertyName = 107 | param.parameter_name || 108 | param.label || 109 | `Unnamed Parameter ${propertyCounter++}`; 110 | if (!param.parameter_name) { 111 | unnamedParameters[propertyName] = index; 112 | } 113 | // Convert parameter using existing function 114 | properties[propertyName] = convertParameter(param); 115 | 116 | // Add to required if no default value 117 | if (!param.parameter_has_default) { 118 | required.push(propertyName); 119 | } 120 | }); 121 | 122 | return { 123 | type: "object", 124 | properties, 125 | required, 126 | }; 127 | } 128 | ``` -------------------------------------------------------------------------------- /test/endpoint_wrapper.test.ts: -------------------------------------------------------------------------------- ```typescript 1 | 2 | import { describe, it, expect, vi } from "vitest"; 3 | import { EndpointWrapper, endpointSpecified, parsePath } from "../src/endpoint_wrapper"; 4 | import type { ApiEndpoint } from "../src/gradio_api"; 5 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 6 | 7 | // Mock the Client class 8 | const mockSubmit = vi.fn(); 9 | const MockClient = { 10 | submit: mockSubmit, 11 | connect: vi.fn().mockResolvedValue({ 12 | submit: mockSubmit, 13 | view_api: vi.fn(), 14 | }), 15 | }; 16 | 17 | // Helper to create test endpoint 18 | function createTestEndpoint(parameters: any[]): ApiEndpoint { 19 | return { 20 | parameters, 21 | returns: [{ 22 | label: "Output", 23 | type: 
"string", 24 | python_type: { type: "str", description: "Output text" }, 25 | component: "Text" 26 | }], 27 | type: { generator: false, cancel: false } 28 | }; 29 | } 30 | 31 | describe("EndpointWrapper parameter mapping", () => { 32 | it("maps named parameters correctly", async () => { 33 | const endpoint = createTestEndpoint([ 34 | { 35 | label: "Text Input", 36 | parameter_name: "text_input", 37 | type: "string", 38 | python_type: { type: "str", description: "" }, 39 | component: "Textbox" 40 | } 41 | ]); 42 | 43 | const wrapper = new EndpointWrapper( 44 | parsePath("test/space/predict"), 45 | endpoint, 46 | MockClient as any, 47 | ); 48 | 49 | // Mock successful response 50 | mockSubmit.mockImplementation(async function* () { 51 | yield { type: "data", data: ["response"] }; 52 | }); 53 | 54 | await wrapper.call({ 55 | method: "tools/call", 56 | params: { 57 | name: "test", 58 | arguments: { 59 | text_input: "hello" 60 | } 61 | } 62 | }, {} as Server); 63 | 64 | // Verify the parameters were mapped correctly 65 | expect(mockSubmit).toHaveBeenCalledWith("/predict", { 66 | text_input: "hello" 67 | }); 68 | }); 69 | 70 | it("maps unnamed parameters to their index", async () => { 71 | const endpoint = createTestEndpoint([ 72 | { 73 | label: "parameter_0", 74 | type: "string", 75 | python_type: { type: "str", description: "" }, 76 | component: "Textbox" 77 | }, 78 | { 79 | label: "parameter_1", 80 | type: "number", 81 | python_type: { type: "float", description: "" }, 82 | component: "Number" 83 | } 84 | ]); 85 | 86 | const wrapper = new EndpointWrapper( 87 | parsePath("/test/space/predict"), 88 | endpoint, 89 | MockClient as any, 90 | ); 91 | 92 | mockSubmit.mockImplementation(async function* () { 93 | yield { type: "data", data: ["response"] }; 94 | }); 95 | 96 | await wrapper.call({ 97 | params: { 98 | name: "test", 99 | arguments: { 100 | "parameter_0": "hello", 101 | "parameter_1": 42 102 | } 103 | }, 104 | method: "tools/call" 105 | }, {} as Server); 106 | 107 | // Verify parameters were mapped by position 108 | expect(mockSubmit).toHaveBeenCalledWith("/predict", { 109 | "parameter_0": "hello", 110 | "parameter_1": 42 111 | }); 112 | }); 113 | 114 | it("handles mix of named and unnamed parameters", async () => { 115 | const endpoint = createTestEndpoint([ 116 | { 117 | label: "Text Input", 118 | parameter_name: "text_input", 119 | type: "string", 120 | python_type: { type: "str", description: "" }, 121 | component: "Textbox" 122 | }, 123 | { 124 | label: "parameter_1", 125 | type: "number", 126 | python_type: { type: "float", description: "" }, 127 | component: "Number" 128 | } 129 | ]); 130 | 131 | const wrapper = new EndpointWrapper( 132 | parsePath("test/space/predict"), 133 | endpoint, 134 | MockClient as any, 135 | ); 136 | 137 | mockSubmit.mockImplementation(async function* () { 138 | yield { type: "data", data: ["response"] }; 139 | }); 140 | 141 | await wrapper.call({ 142 | params: { 143 | name: "test", 144 | arguments: { 145 | text_input: "hello", 146 | "parameter_1": 42 147 | } 148 | }, 149 | method: "tools/call" 150 | }, {} as Server); 151 | 152 | // Verify mixed parameter mapping 153 | expect(mockSubmit).toHaveBeenCalledWith("/predict", { 154 | text_input: "hello", 155 | "parameter_1": 42 156 | }); 157 | }); 158 | }); 159 | 160 | describe("specific endpoint detection works",()=>{ 161 | it("detects no endpoint specified"),()=>{ 162 | expect(endpointSpecified("/owner/space")).toBe(false); 163 | } 164 | it("detects endpoints specified"),()=>{ 165 | 
expect(endpointSpecified("/owner/space/foo")).toBe(true); 166 | expect(endpointSpecified("/owner/space/3")).toBe(true);; 167 | expect(endpointSpecified("owner/space/3")).toBe(true);; 168 | } 169 | }) 170 | 171 | 172 | describe("endpoint and tool naming works",() => { 173 | it("handles named endpoints", () => { 174 | const endpoint = parsePath("/prithivMLmods/Mistral-7B-Instruct-v0.3/model_chat"); 175 | if(null==endpoint) throw new Error("endpoint is null"); 176 | expect(endpoint.owner).toBe("prithivMLmods"); 177 | expect(endpoint.space).toBe("Mistral-7B-Instruct-v0.3"); 178 | expect(endpoint.endpoint).toBe("/model_chat"); 179 | expect(endpoint.mcpToolName).toBe("Mistral-7B-Instruct-v0_3-model_chat"); 180 | expect(endpoint.mcpDisplayName).toBe("Mistral-7B-Instruct-v0.3 endpoint /model_chat"); 181 | }); 182 | it("handles numbered endpoint"),() => { 183 | const endpoint = parsePath("/suno/bark/3"); 184 | if(null==endpoint) throw new Error("endpoint is null"); 185 | expect(endpoint.owner).toBe("suno"); 186 | expect(endpoint.space).toBe("bark"); 187 | expect(endpoint.endpoint).toBe(3); 188 | expect(endpoint.mcpToolName).toBe("bark-3"); 189 | expect(endpoint.mcpDisplayName).toBe("bark endpoint /3"); 190 | } 191 | }) ``` -------------------------------------------------------------------------------- /src/index.ts: -------------------------------------------------------------------------------- ```typescript 1 | #!/usr/bin/env node 2 | 3 | const AVAILABLE_RESOURCES = "Available Resources"; 4 | const AVAILABLE_FILES = "available-files"; 5 | 6 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 7 | import { VERSION } from "./version.js"; 8 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; 9 | // Remove mime import and treatAsText import as they're now handled in WorkingDirectory 10 | import { 11 | CallToolRequestSchema, 12 | ListToolsRequestSchema, 13 | ListPromptsRequestSchema, 14 | GetPromptRequestSchema, 15 | ListResourcesRequestSchema, 16 | ReadResourceRequestSchema, 17 | } from "@modelcontextprotocol/sdk/types.js"; 18 | 19 | import { EndpointWrapper } from "./endpoint_wrapper.js"; 20 | import { parseConfig } from "./config.js"; 21 | import { WorkingDirectory } from "./working_directory.js"; 22 | 23 | // Create MCP server 24 | const server = new Server( 25 | { 26 | name: "mcp-hfspace", 27 | version: VERSION, 28 | }, 29 | { 30 | capabilities: { 31 | tools: {}, 32 | prompts: {}, 33 | resources: { 34 | list: true, 35 | }, 36 | }, 37 | }, 38 | ); 39 | // Parse configuration 40 | const config = parseConfig(); 41 | 42 | // Change to configured working directory 43 | process.chdir(config.workDir); 44 | 45 | const workingDir = new WorkingDirectory( 46 | config.workDir, 47 | config.claudeDesktopMode, 48 | ); 49 | 50 | // Create a map to store endpoints by their tool names 51 | const endpoints = new Map<string, EndpointWrapper>(); 52 | 53 | // Create endpoints with working directory 54 | for (const spacePath of config.spacePaths) { 55 | try { 56 | const endpoint = await EndpointWrapper.createEndpoint( 57 | spacePath, 58 | workingDir, 59 | ); 60 | endpoints.set(endpoint.toolDefinition().name, endpoint); 61 | } catch (e) { 62 | if (e instanceof Error) { 63 | console.error(`Error loading ${spacePath}: ${e.message}`); 64 | } else { 65 | throw e; 66 | } 67 | continue; 68 | } 69 | } 70 | 71 | if (endpoints.size === 0) { 72 | throw new Error("No valid endpoints found in any of the provided spaces"); 73 | } 74 | 75 | 
75 | server.setRequestHandler(ListToolsRequestSchema, async () => {
76 |   return {
77 |     tools: [
78 |       {
79 |         name: AVAILABLE_FILES,
80 |         description:
81 |           "A list of available files and resources. " +
82 |           "If the User requests things like 'most recent image' or 'the audio' use " +
83 |           "this tool to identify the intended resource. " +
84 |           "This tool returns 'resource uri', 'name', 'size', 'last modified' and 'mime type' in a markdown table",
85 |         inputSchema: {
86 |           type: "object",
87 |           properties: {},
88 |         },
89 |       },
90 |       ...Array.from(endpoints.values()).map((endpoint) =>
91 |         endpoint.toolDefinition(),
92 |       ),
93 |     ],
94 |   };
95 | });
96 | 
97 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
98 |   if (AVAILABLE_FILES === request.params.name) {
99 |     return {
100 |       content: [
101 |         {
102 |           type: `resource`,
103 |           resource: {
104 |             uri: `/available-files`,
105 |             mimeType: `text/markdown`,
106 |             text: await workingDir.generateResourceTable(),
107 |           },
108 |         },
109 |       ],
110 |     };
111 |   }
112 | 
113 |   const endpoint = endpoints.get(request.params.name);
114 | 
115 |   if (!endpoint) {
116 |     throw new Error(`Unknown tool: ${request.params.name}`);
117 |   }
118 |   try {
119 |     return await endpoint.call(request, server);
120 |   } catch (error) {
121 |     if (error instanceof Error) {
122 |       return {
123 |         content: [
124 |           {
125 |             type: `text`,
126 |             text: `mcp-hfspace error: ${error.message}`,
127 |           },
128 |         ],
129 |         isError: true,
130 |       };
131 |     }
132 |     throw error;
133 |   }
134 | });
135 | 
136 | server.setRequestHandler(ListPromptsRequestSchema, async () => {
137 |   return {
138 |     prompts: [
139 |       {
140 |         name: AVAILABLE_RESOURCES,
141 |         description: "List of available resources.",
142 |         arguments: [],
143 |       },
144 |       ...Array.from(endpoints.values()).map((endpoint) =>
145 |         endpoint.promptDefinition(),
146 |       ),
147 |     ],
148 |   };
149 | });
150 | 
151 | server.setRequestHandler(GetPromptRequestSchema, async (request) => {
152 |   const promptName = request.params.name;
153 | 
154 |   if (AVAILABLE_RESOURCES === promptName) {
155 |     return availableResourcesPrompt();
156 |   }
157 | 
158 |   const endpoint = endpoints.get(promptName);
159 | 
160 |   if (!endpoint) {
161 |     throw new Error(`Unknown prompt: ${promptName}`);
162 |   }
163 | 
164 |   return await endpoint.getPromptTemplate(request.params.arguments);
165 | });
166 | 
167 | async function availableResourcesPrompt() {
168 |   const tableText = await workingDir.generateResourceTable();
169 | 
170 |   return {
171 |     messages: [
172 |       {
173 |         role: "user",
174 |         content: {
175 |           type: "text",
176 |           text: tableText,
177 |         },
178 |       },
179 |     ],
180 |   };
181 | }
182 | 
183 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
184 |   try {
185 |     const resources = await workingDir.getSupportedResources();
186 |     return {
187 |       resources: resources.map((resource) => ({
188 |         uri: resource.uri,
189 |         name: resource.name,
190 |         mimeType: resource.mimeType,
191 |       })),
192 |     };
193 |   } catch (error) {
194 |     if (error instanceof Error) {
195 |       throw new Error(`Failed to list resources: ${error.message}`);
196 |     }
197 |     throw error;
198 |   }
199 | });
200 | 
201 | server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
202 |   try {
203 |     const contents = await workingDir.readResource(request.params.uri);
204 |     return {
205 |       contents: [contents],
206 |     };
207 |   } catch (error) {
208 |     if (error instanceof Error) {
209 |       throw new Error(`Failed to read resource: ${error.message}`);
210 |     }
211 |     throw error;
212 |   }
213 | });
214 | 
215 | /**
216 |  * 
Start the server using stdio transport. 217 | * This allows the server to communicate via standard input/output streams. 218 | */ 219 | async function main() { 220 | const transport = new StdioServerTransport(); 221 | await server.connect(transport); 222 | } 223 | 224 | main().catch((error) => { 225 | console.error("Server error:", error); 226 | process.exit(1); 227 | }); 228 | ``` -------------------------------------------------------------------------------- /src/working_directory.ts: -------------------------------------------------------------------------------- ```typescript 1 | import { Dirent, promises as fs } from "fs"; 2 | import path from "path"; 3 | import mime from "mime"; 4 | import { pathToFileURL } from "url"; 5 | import { FALLBACK_MIME_TYPE, treatAsText } from "./mime_types.js"; 6 | import { claudeSupportedMimeTypes } from "./mime_types.js"; 7 | 8 | export interface ResourceFile { 9 | uri: string; 10 | name: string; 11 | mimeType: string; 12 | size: number; 13 | lastModified: Date; 14 | formattedSize?: string; // Add optional formatted size 15 | } 16 | 17 | export interface ResourceContents { 18 | uri: string; 19 | mimeType: string; 20 | text?: string; 21 | blob?: string; 22 | } 23 | 24 | export class WorkingDirectory { 25 | private readonly MAX_RESOURCE_SIZE = 1024 * 1024 * 2; 26 | 27 | constructor( 28 | private readonly directory: string, 29 | private readonly claudeDesktopMode: boolean = false, 30 | ) {} 31 | 32 | async listFiles(recursive = true): Promise<Dirent[]> { 33 | return await fs.readdir(this.directory, { 34 | withFileTypes: true, 35 | recursive, 36 | }); 37 | } 38 | 39 | async getResourceFile(file: Dirent): Promise<ResourceFile> { 40 | const fullPath = path.join(file.parentPath || "", file.name); 41 | const relativePath = path 42 | .relative(this.directory, fullPath) 43 | .replace(/\\/g, "/"); 44 | const stats = await fs.stat(fullPath); 45 | 46 | return { 47 | uri: `file:./${relativePath}`, 48 | name: file.name, 49 | mimeType: mime.getType(file.name) || FALLBACK_MIME_TYPE, 50 | size: stats.size, 51 | lastModified: stats.mtime, 52 | }; 53 | } 54 | 55 | async generateFilename( 56 | prefix: string, 57 | extension: string, 58 | mcpToolName: string, 59 | ): Promise<string> { 60 | const date = new Date().toISOString().split("T")[0]; 61 | const randomId = crypto.randomUUID().slice(0, 5); 62 | return path.join( 63 | this.directory, 64 | `${date}_${mcpToolName}_${prefix}_${randomId}.${extension}`, 65 | ); 66 | } 67 | 68 | async saveFile(arrayBuffer: ArrayBuffer, filename: string): Promise<void> { 69 | await fs.writeFile(filename, Buffer.from(arrayBuffer), { 70 | encoding: "binary", 71 | }); 72 | } 73 | 74 | getFileUrl(filename: string): string { 75 | return pathToFileURL(path.resolve(this.directory, filename)).href; 76 | } 77 | 78 | async isSupportedFile(filename: string): Promise<boolean> { 79 | if (!this.claudeDesktopMode) return true; 80 | 81 | try { 82 | const stats = await fs.stat(filename); 83 | if (stats.size > this.MAX_RESOURCE_SIZE) return false; 84 | 85 | const mimetype = mime.getType(filename); 86 | if (!mimetype) return false; 87 | if (treatAsText(mimetype)) return true; 88 | return claudeSupportedMimeTypes.some((supported) => { 89 | if (!supported.includes("/*")) return supported === mimetype; 90 | const supportedMainType = supported.split("/")[0]; 91 | const mainType = mimetype.split("/")[0]; 92 | return supportedMainType === mainType; 93 | }); 94 | } catch { 95 | return false; 96 | } 97 | } 98 | 99 | async validatePath(filePath: string): Promise<string> 
{ 100 | if (filePath.startsWith("http://") || filePath.startsWith("https://")) { 101 | return filePath; 102 | } 103 | 104 | if (filePath.startsWith("file:")) { 105 | filePath = filePath.replace(/^file:(?:\/\/|\.\/)/, ""); 106 | } 107 | 108 | const normalizedFilePath = path.normalize(path.resolve(filePath)); 109 | const normalizedCwd = path.normalize(this.directory); 110 | 111 | if (!normalizedFilePath.startsWith(normalizedCwd)) { 112 | throw new Error(`Path ${filePath} is outside of working directory`); 113 | } 114 | 115 | await fs.access(normalizedFilePath); 116 | return normalizedFilePath; 117 | } 118 | 119 | formatFileSize(bytes: number): string { 120 | const units = ["B", "KB", "MB", "GB"]; 121 | let size = bytes; 122 | let unitIndex = 0; 123 | 124 | while (size >= 1024 && unitIndex < units.length - 1) { 125 | size /= 1024; 126 | unitIndex++; 127 | } 128 | 129 | return `${size.toFixed(1)} ${units[unitIndex]}`; 130 | } 131 | 132 | async generateResourceTable(): Promise<string> { 133 | const files = await this.listFiles(); 134 | const resources = await Promise.all( 135 | files 136 | .filter((entry) => entry.isFile()) 137 | .map(async (entry) => await this.getResourceFile(entry)), 138 | ); 139 | 140 | if (resources.length === 0) { 141 | return "No resources available."; 142 | } 143 | 144 | return ` 145 | The following resources are available for tool calls: 146 | | Resource URI | Name | MIME Type | Size | Last Modified | 147 | |--------------|------|-----------|------|---------------| 148 | ${resources 149 | .map( 150 | (f) => 151 | `| ${f.uri} | ${f.name} | ${f.mimeType} | ${this.formatFileSize(f.size)} | ${f.lastModified.toISOString()} |`, 152 | ) 153 | .join("\n")} 154 | 155 | Prefer using the Resource URI for tool parameters which require a file input. URLs are also accepted.`.trim(); 156 | } 157 | 158 | isFileSizeSupported(size: number): boolean { 159 | return size <= this.MAX_RESOURCE_SIZE; 160 | } 161 | 162 | async getSupportedResources(): Promise<ResourceFile[]> { 163 | const files = await this.listFiles(); 164 | 165 | const supportedFiles = await Promise.all( 166 | files 167 | .filter((entry) => entry.isFile()) 168 | .map(async (entry) => { 169 | const isSupported = await this.isSupportedFile(entry.name); 170 | if (!isSupported) return null; 171 | return await this.getResourceFile(entry); 172 | }), 173 | ); 174 | 175 | return supportedFiles.filter((file): file is ResourceFile => file !== null); 176 | } 177 | 178 | async readResource(resourceUri: string): Promise<ResourceContents> { 179 | const validatedPath = await this.validatePath(resourceUri); 180 | const file = path.basename(validatedPath); 181 | const mimeType = mime.getType(file) || FALLBACK_MIME_TYPE; 182 | 183 | const content = this.isMimeTypeText(mimeType) 184 | ? 
{ text: await fs.readFile(validatedPath, "utf-8") } // read via the validated full path (files may live in subdirectories)
185 |       : { blob: (await fs.readFile(validatedPath)).toString("base64") };
186 | 
187 |     return {
188 |       uri: resourceUri,
189 |       mimeType,
190 |       ...content,
191 |     };
192 |   }
193 | 
194 |   private isMimeTypeText(mimeType: string): boolean {
195 |     return (
196 |       mimeType.startsWith("text/") ||
197 |       mimeType === "application/json" ||
198 |       mimeType === "application/javascript" ||
199 |       mimeType === "application/xml"
200 |     );
201 |   }
202 | }
```

--------------------------------------------------------------------------------
/src/content_converter.ts:
--------------------------------------------------------------------------------

```typescript
1 | import {
2 |   EmbeddedResource,
3 |   ImageContent,
4 |   TextContent,
5 | } from "@modelcontextprotocol/sdk/types.js";
6 | import { ApiReturn } from "./gradio_api.js";
7 | import * as fs from "fs/promises";
8 | import { pathToFileURL } from "url";
9 | import path from "path";
10 | import { config } from "./config.js";
11 | import { EndpointPath } from "./endpoint_wrapper.js";
12 | import { WorkingDirectory } from "./working_directory.js";
13 | 
14 | // Add types for Gradio component values
15 | export interface GradioResourceValue {
16 |   url?: string;
17 |   mime_type?: string;
18 |   orig_name?: string;
19 | }
20 | 
21 | // Component types enum
22 | enum GradioComponentType {
23 |   Image = "Image",
24 |   Audio = "Audio",
25 |   Chatbot = "Chatbot",
26 | }
27 | 
28 | // Resource response interface
29 | interface ResourceResponse {
30 |   mimeType: string;
31 |   base64Data: string;
32 |   arrayBuffer: ArrayBuffer;
33 |   originalExtension: string | null;
34 | }
35 | 
36 | // Simple converter registry
37 | type ContentConverter = (
38 |   component: ApiReturn,
39 |   value: GradioResourceValue,
40 |   endpointPath: EndpointPath,
41 | ) => Promise<TextContent | ImageContent | EmbeddedResource>;
42 | 
43 | // Type for converter functions that may not succeed
44 | type ConverterFn = (
45 |   component: ApiReturn,
46 |   value: GradioResourceValue,
47 |   endpointPath: EndpointPath,
48 | ) => Promise<TextContent | ImageContent | EmbeddedResource | null>;
49 | // Default converter implementation
50 | const defaultConverter: ConverterFn = async () => null;
51 | 
52 | export class GradioConverter {
53 |   private converters: Map<string, ContentConverter> = new Map();
54 | 
55 |   constructor(private readonly workingDir: WorkingDirectory) {
56 |     // Register converters with fallback behavior
57 |     this.register(
58 |       GradioComponentType.Image,
59 |       withFallback(this.imageConverter.bind(this)),
60 |     );
61 |     this.register(
62 |       GradioComponentType.Audio,
63 |       withFallback(this.audioConverter.bind(this)),
64 |     );
65 |     this.register(
66 |       GradioComponentType.Chatbot,
67 |       withFallback(async () => null),
68 |     );
69 |   }
70 | 
71 |   register(component: string, converter: ContentConverter) {
72 |     this.converters.set(component, converter);
73 |   }
74 | 
75 |   async convert(
76 |     component: ApiReturn,
77 |     value: GradioResourceValue,
78 |     endpointPath: EndpointPath,
79 |   ): Promise<TextContent | ImageContent | EmbeddedResource> {
80 |     if (config.debug) {
81 |       await fs.writeFile(
82 |         generateFilename("debug", "json", endpointPath.mcpToolName),
83 |         JSON.stringify(value, null, 2),
84 |       );
85 |     }
86 |     const converter =
87 |       this.converters.get(component.component) ||
88 |       withFallback(defaultConverter);
89 |     return converter(component, value, endpointPath);
90 |   }
91 | 
92 |   private async saveFile(
93 |     arrayBuffer: ArrayBuffer,
94 |     mimeType: string,
95 |     prefix: string,
96 |     mcpToolName: string,
97 |     originalExtension?: string | null,
98 |   ): Promise<string> {
99 |     const extension = originalExtension || mimeType.split("/")[1] || "bin";
100 |     const filename = await this.workingDir.generateFilename(
101 |       prefix,
102 |       extension,
103 |       mcpToolName,
104 |     );
105 |     await this.workingDir.saveFile(arrayBuffer, filename);
106 |     return filename;
107 |   }
108 | 
109 |   private readonly imageConverter: ConverterFn = async (
110 |     _component,
111 |     value,
112 |     endpointPath,
113 |   ) => {
114 |     if (!value?.url) return null;
115 |     try {
116 |       const response = await convertUrlToBase64(value.url, value);
117 | 
118 |       try {
119 |         await this.saveFile(
120 |           response.arrayBuffer,
121 |           response.mimeType,
122 |           GradioComponentType.Image,
123 |           endpointPath.mcpToolName,
124 |           response.originalExtension,
125 |         );
126 |       } catch (saveError) {
127 |         if (config.claudeDesktopMode) {
128 |           console.error(
129 |             `Failed to save image file: ${saveError instanceof Error ? saveError.message : String(saveError)}`,
130 |           );
131 |         } else {
132 |           throw saveError;
133 |         }
134 |       }
135 | 
136 |       return {
137 |         type: "image",
138 |         data: response.base64Data,
139 |         mimeType: response.mimeType,
140 |       };
141 |     } catch (error) {
142 |       console.error("Image conversion failed:", error);
143 |       return createTextContent(
144 |         _component,
145 |         `Failed to load image: ${error instanceof Error ? error.message : String(error)}`,
146 |       );
147 |     }
148 |   };
149 | 
150 |   private readonly audioConverter: ConverterFn = async (
151 |     _component,
152 |     value,
153 |     endpointPath,
154 |   ) => {
155 |     if (!value?.url) return null;
156 |     try {
157 |       const { mimeType, base64Data, arrayBuffer, originalExtension } =
158 |         await convertUrlToBase64(value.url, value);
159 |       const filename = await this.saveFile(
160 |         arrayBuffer,
161 |         mimeType,
162 |         "audio",
163 |         endpointPath.mcpToolName,
164 |         originalExtension,
165 |       );
166 | 
167 |       if (config.claudeDesktopMode) {
168 |         return {
169 |           type: "resource",
170 |           resource: {
171 |             uri: `${pathToFileURL(path.resolve(filename)).href}`,
172 |             mimeType: `text/plain`,
173 |             text: `Your audio was successfully created and is available for playback at ${path.resolve(filename)}. Claude Desktop does not currently support audio content`,
174 |           },
175 |         };
176 |       } else {
177 |         return {
178 |           type: "resource",
179 |           resource: {
180 |             uri: `${pathToFileURL(path.resolve(filename)).href}`,
181 |             mimeType,
182 |             blob: base64Data,
183 |           },
184 |         };
185 |       }
186 |     } catch (error) {
187 |       console.error("Audio conversion failed:", error);
188 |       return {
189 |         type: "text",
190 |         text: `Failed to load audio: ${(error as Error).message}`,
191 |       };
192 |     }
193 |   };
194 | }
195 | 
196 | // Shared text content creator
197 | const createTextContent = (
198 |   component: ApiReturn,
199 |   value: unknown,
200 | ): TextContent => {
201 |   const label = component.label ? `${component.label}: ` : "";
202 |   const text = typeof value === "string" ? value : JSON.stringify(value);
203 |   return {
204 |     type: "text",
205 |     text: `${label}${text}`,
206 |   };
207 | };
208 | 
209 | // Wrapper that adds fallback behavior
210 | const withFallback = (converter: ConverterFn): ContentConverter => {
211 |   return async (
212 |     component: ApiReturn,
213 |     value: GradioResourceValue,
214 |     endpointPath: EndpointPath,
215 |   ) => {
216 |     const result = await converter(component, value, endpointPath);
217 |     return result ?? 
createTextContent(component, value); 218 | }; 219 | }; 220 | 221 | // Update generateFilename to use space name 222 | const generateFilename = ( 223 | prefix: string, 224 | extension: string, 225 | mcpToolName: string, 226 | ): string => { 227 | const date = new Date().toISOString().split("T")[0]; // YYYY-MM-DD 228 | const randomId = crypto.randomUUID().slice(0, 5); // First 5 chars 229 | return `${date}_${mcpToolName}_${prefix}_${randomId}.${extension}`; 230 | }; 231 | 232 | const getExtensionFromFilename = (url: string): string | null => { 233 | const match = url.match(/\/([^/?#]+)[^/]*$/); 234 | if (match && match[1].includes(".")) { 235 | return match[1].split(".").pop() || null; 236 | } 237 | return null; 238 | }; 239 | 240 | const getMimeTypeFromOriginalName = (origName: string): string | null => { 241 | const extension = origName.split(".").pop()?.toLowerCase(); 242 | if (!extension) return null; 243 | 244 | // Common image formats 245 | if (["jpg", "jpeg", "png", "gif", "webp", "bmp", "svg"].includes(extension)) { 246 | return `image/${extension}`; 247 | } 248 | 249 | // Common audio formats 250 | if (["mp3", "wav", "ogg", "aac", "m4a"].includes(extension)) { 251 | return `audio/${extension}`; 252 | } 253 | 254 | // For unknown types, fall back to application/* 255 | return `application/${extension}`; 256 | }; 257 | 258 | const determineMimeType = ( 259 | value: GradioResourceValue, 260 | responseHeaders: Headers, 261 | ): string => { 262 | // First priority: mime_type from the value object 263 | if (value?.mime_type) { 264 | return value.mime_type; 265 | } 266 | 267 | // Second priority: derived from orig_name 268 | if (value?.orig_name) { 269 | const mimeFromName = getMimeTypeFromOriginalName(value.orig_name); 270 | if (mimeFromName) { 271 | return mimeFromName; 272 | } 273 | } 274 | 275 | // Third priority: response headers 276 | const headerMimeType = responseHeaders.get("content-type"); 277 | if (headerMimeType && headerMimeType !== "text/plain") { 278 | return headerMimeType; 279 | } 280 | 281 | // Final fallback 282 | return "text/plain"; 283 | }; 284 | 285 | const convertUrlToBase64 = async ( 286 | url: string, 287 | value: GradioResourceValue, 288 | ): Promise<ResourceResponse> => { 289 | const headers: HeadersInit = {}; 290 | if (config.hfToken) { 291 | headers.Authorization = `Bearer ${config.hfToken}`; 292 | } 293 | 294 | const response = await fetch(url, { headers }); 295 | 296 | if (!response.ok) { 297 | throw new Error( 298 | `Failed to fetch resource: ${response.status} ${response.statusText}`, 299 | ); 300 | } 301 | 302 | const mimeType = determineMimeType(value, response.headers); 303 | const originalExtension = getExtensionFromFilename(url); 304 | const arrayBuffer = await response.arrayBuffer(); 305 | const base64Data = Buffer.from(arrayBuffer).toString("base64"); 306 | 307 | return { mimeType, base64Data, arrayBuffer, originalExtension }; 308 | }; 309 | ``` -------------------------------------------------------------------------------- /src/endpoint_wrapper.ts: -------------------------------------------------------------------------------- ```typescript 1 | import { Client, handle_file } from "@gradio/client"; 2 | import { ApiStructure, ApiEndpoint, ApiReturn } from "./gradio_api.js"; 3 | import { 4 | convertApiToSchema, 5 | isFileParameter, 6 | ParameterSchema, 7 | } from "./gradio_convert.js"; 8 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 9 | import * as fs from "fs/promises"; 10 | import type { 11 | CallToolResult, 12 | 
GetPromptResult,
13 |   CallToolRequest,
14 | } from "@modelcontextprotocol/sdk/types.d.ts";
15 | import type {
16 |   TextContent,
17 |   ImageContent,
18 |   EmbeddedResource,
19 | } from "@modelcontextprotocol/sdk/types.d.ts";
20 | import { createProgressNotifier } from "./progress_notifier.js";
21 | import { GradioConverter, GradioResourceValue } from "./content_converter.js";
22 | import { config } from "./config.js";
23 | import type { StatusMessage, Payload } from "@gradio/client";
24 | import { WorkingDirectory } from "./working_directory.js";
25 | 
26 | type GradioEvent = StatusMessage | Payload;
27 | 
28 | export interface EndpointPath {
29 |   owner: string;
30 |   space: string;
31 |   endpoint: string | number;
32 |   mcpToolName: string;
33 |   mcpDisplayName: string;
34 | }
35 | 
36 | export function endpointSpecified(path: string) {
37 |   const parts = path.replace(/^\//, "").split("/");
38 |   return parts.length === 3;
39 | }
40 | 
41 | export function parsePath(path: string): EndpointPath {
42 |   const parts = path.replace(/^\//, "").split("/");
43 | 
44 |   if (parts.length !== 3) {
45 |     throw new Error(
46 |       `Invalid Endpoint path format [${path}]. Use vendor/space/endpoint`,
47 |     );
48 |   }
49 | 
50 |   const [owner, space, rawEndpoint] = parts;
51 |   return {
52 |     owner,
53 |     space,
54 |     endpoint: isNaN(Number(rawEndpoint))
55 |       ? `/${rawEndpoint}`
56 |       : parseInt(rawEndpoint),
57 |     mcpToolName: formatMcpToolName(space, rawEndpoint),
58 |     mcpDisplayName: formatMcpDisplayName(space, rawEndpoint),
59 |   };
60 | 
61 |   function formatMcpToolName(space: string, endpoint: string | number) {
62 |     return `${space}-${endpoint}`.replace(/[^a-zA-Z0-9_-]/g, "_").slice(0, 64);
63 |   }
64 | 
65 |   function formatMcpDisplayName(space: string, endpoint: string | number) {
66 |     return `${space} endpoint /${endpoint}`;
67 |   }
68 | }
69 | 
70 | export class EndpointWrapper {
71 |   private converter: GradioConverter;
72 | 
73 |   constructor(
74 |     private endpointPath: EndpointPath,
75 |     private endpoint: ApiEndpoint,
76 |     private client: Client,
77 |     private workingDir: WorkingDirectory,
78 |   ) {
79 |     this.converter = new GradioConverter(workingDir);
80 |   }
81 | 
82 |   static async createEndpoint(
83 |     configuredPath: string,
84 |     workingDir: WorkingDirectory,
85 |   ): Promise<EndpointWrapper> {
86 |     const pathParts = configuredPath.split("/");
87 |     if (pathParts.length < 2 || pathParts.length > 3) {
88 |       throw new Error(
89 |         `Invalid space path format [${configuredPath}]. Use: vendor/space or vendor/space/endpoint`,
90 |       );
91 |     }
92 | 
93 |     const spaceName = `${pathParts[0]}/${pathParts[1]}`;
94 |     const endpointTarget = pathParts[2] ? 
`/${pathParts[2]}` : undefined; 95 | 96 | const preferredApis = [ 97 | "/predict", 98 | "/infer", 99 | "/generate", 100 | "/complete", 101 | "/model_chat", 102 | "/lambda", 103 | "/generate_image", 104 | "/process_prompt", 105 | "/on_submit", 106 | "/add_text", 107 | ]; 108 | 109 | const gradio: Client = await Client.connect(spaceName, { 110 | events: ["data", "status"], 111 | hf_token: config.hfToken, 112 | }); 113 | const api = (await gradio.view_api()) as ApiStructure; 114 | if (config.debug) { 115 | await fs.writeFile( 116 | `${pathParts[0]}_${pathParts[1]}_debug_api.json`, 117 | JSON.stringify(api, null, 2), 118 | ); 119 | } 120 | // Try chosen API if specified 121 | if (endpointTarget && api.named_endpoints[endpointTarget]) { 122 | return new EndpointWrapper( 123 | parsePath(configuredPath), 124 | api.named_endpoints[endpointTarget], 125 | gradio, 126 | workingDir, 127 | ); 128 | } 129 | 130 | // Try preferred APIs 131 | const preferredApi = preferredApis.find( 132 | (name) => api.named_endpoints[name], 133 | ); 134 | if (preferredApi) { 135 | return new EndpointWrapper( 136 | parsePath(`${configuredPath}${preferredApi}`), 137 | api.named_endpoints[preferredApi], 138 | gradio, 139 | workingDir, 140 | ); 141 | } 142 | 143 | // Try first named endpoint 144 | const firstNamed = Object.entries(api.named_endpoints)[0]; 145 | if (firstNamed) { 146 | return new EndpointWrapper( 147 | parsePath(`${configuredPath}${firstNamed[0]}`), 148 | firstNamed[1], 149 | gradio, 150 | workingDir, 151 | ); 152 | } 153 | 154 | // Try unnamed endpoints 155 | const validUnnamed = Object.entries(api.unnamed_endpoints).find( 156 | ([, endpoint]) => 157 | endpoint.parameters.length > 0 && endpoint.returns.length > 0, 158 | ); 159 | 160 | if (validUnnamed) { 161 | return new EndpointWrapper( 162 | parsePath(`${configuredPath}/${validUnnamed[0]}`), 163 | validUnnamed[1], 164 | gradio, 165 | workingDir, 166 | ); 167 | } 168 | 169 | throw new Error(`No valid endpoints found for ${configuredPath}`); 170 | } 171 | 172 | async validatePath(filePath: string): Promise<string> { 173 | return this.workingDir.validatePath(filePath); 174 | } 175 | 176 | /* Endpoint Wrapper */ 177 | private mcpDescriptionName(): string { 178 | return this.endpointPath.mcpDisplayName; 179 | } 180 | 181 | get mcpToolName() { 182 | return this.endpointPath.mcpToolName; 183 | } 184 | 185 | toolDefinition() { 186 | return { 187 | name: this.mcpToolName, 188 | description: `Call the ${this.mcpDescriptionName()}`, 189 | inputSchema: convertApiToSchema(this.endpoint), 190 | }; 191 | } 192 | 193 | async call( 194 | request: CallToolRequest, 195 | server: Server, 196 | ): Promise<CallToolResult> { 197 | const progressToken = request.params._meta?.progressToken as 198 | | string 199 | | number 200 | | undefined; 201 | 202 | const parameters = request.params.arguments as Record<string, unknown>; 203 | 204 | // Get the endpoint parameters to check against 205 | const endpointParams = this.endpoint.parameters; 206 | 207 | // Process each parameter, applying handle_file for file inputs 208 | for (const [key, value] of Object.entries(parameters)) { 209 | const param = endpointParams.find( 210 | (p) => p.parameter_name === key || p.label === key, 211 | ); 212 | if (param && isFileParameter(param) && typeof value === "string") { 213 | const file = await this.validatePath(value); 214 | parameters[key] = handle_file(file); 215 | } 216 | } 217 | 218 | const normalizedToken = 219 | typeof progressToken === "number" 220 | ? 
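/* Editor's note: MCP clients may send the progress token as either a string
   or a number; it is normalised to a string here so that handleToolCall and
   the progress notifier can treat it uniformly. */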
progressToken.toString() 221 | : progressToken; 222 | 223 | return this.handleToolCall(parameters, normalizedToken, server); 224 | } 225 | 226 | async handleToolCall( 227 | parameters: Record<string, unknown>, 228 | progressToken: string | undefined, 229 | server: Server, 230 | ): Promise<CallToolResult> { 231 | const events = []; 232 | try { 233 | let result = null; 234 | const submission: AsyncIterable<GradioEvent> = this.client.submit( 235 | this.endpointPath.endpoint, 236 | parameters, 237 | ) as AsyncIterable<GradioEvent>; 238 | const progressNotifier = createProgressNotifier(server); 239 | for await (const msg of submission) { 240 | if (config.debug) events.push(msg); 241 | if (msg.type === "data") { 242 | if (Array.isArray(msg.data)) { 243 | const hasContent = msg.data.some( 244 | (item: unknown) => typeof item !== "object", 245 | ); 246 | 247 | if (hasContent) result = msg.data; 248 | if (null === result) result = msg.data; 249 | } 250 | } else if (msg.type === "status") { 251 | if (msg.stage === "error") { 252 | throw new Error(`Gradio error: ${msg.message || "Unknown error"}`); 253 | } 254 | if (progressToken) await progressNotifier.notify(msg, progressToken); 255 | } 256 | } 257 | 258 | if (!result) { 259 | throw new Error("No data received from endpoint"); 260 | } 261 | 262 | return await this.convertPredictResults( 263 | this.endpoint.returns, 264 | result, 265 | this.endpointPath, 266 | ); 267 | } catch (error) { 268 | const errorMessage = 269 | error instanceof Error ? error.message : String(error); 270 | throw new Error(`Error calling endpoint: ${errorMessage}`); 271 | } finally { 272 | if (config.debug && events.length > 0) { 273 | await fs.writeFile( 274 | `${this.mcpToolName}_status_${crypto 275 | .randomUUID() 276 | .substring(0, 5)}.json`, 277 | JSON.stringify(events, null, 2), 278 | ); 279 | } 280 | } 281 | } 282 | 283 | private async convertPredictResults( 284 | returns: ApiReturn[], 285 | predictResults: unknown[], 286 | endpointPath: EndpointPath, 287 | ): Promise<CallToolResult> { 288 | const content: (TextContent | ImageContent | EmbeddedResource)[] = []; 289 | 290 | for (const [index, output] of returns.entries()) { 291 | const value = predictResults[index]; 292 | const converted = await this.converter.convert( 293 | output, 294 | value as GradioResourceValue, 295 | endpointPath, 296 | ); 297 | content.push(converted); 298 | } 299 | 300 | return { 301 | content, 302 | isError: false, 303 | }; 304 | } 305 | 306 | promptName() { 307 | return this.mcpToolName; 308 | } 309 | 310 | promptDefinition() { 311 | const schema = convertApiToSchema(this.endpoint); 312 | return { 313 | name: this.promptName(), 314 | description: `Use the ${this.mcpDescriptionName()}.`, 315 | arguments: Object.entries(schema.properties).map( 316 | ([name, prop]: [string, ParameterSchema]) => ({ 317 | name, 318 | description: prop?.description || name, 319 | required: schema.required?.includes(name) || false, 320 | }), 321 | ), 322 | }; 323 | } 324 | 325 | async getPromptTemplate( 326 | args?: Record<string, string>, 327 | ): Promise<GetPromptResult> { 328 | const schema = convertApiToSchema(this.endpoint); 329 | let promptText = `Using the ${this.mcpDescriptionName()}:\n\n`; 330 | 331 | promptText += Object.entries(schema.properties) 332 | .map(([name, prop]: [string, ParameterSchema]) => { 333 | const defaultHint = 334 | prop?.default !== undefined ? 
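/* Editor's note (illustrative): with no argument supplied for a "seed"
   property whose schema default is 0 and which carries no description, the
   rendered line below would read: seed: [Provide seed - default: 0].
   Note that the `args?.[name] ||` fallback also applies when an argument is
   an empty string. */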
` - default: ${prop.default}` : ""; 335 | const value = 336 | args?.[name] || 337 | `[Provide ${prop?.description || name}${defaultHint}]`; 338 | return `${name}: ${value}`; 339 | }) 340 | .join("\n"); 341 | 342 | return { 343 | description: this.promptDefinition().description, 344 | messages: [ 345 | { 346 | role: "user", 347 | content: { 348 | type: "text", 349 | text: promptText, 350 | }, 351 | }, 352 | ], 353 | }; 354 | } 355 | } 356 | ``` -------------------------------------------------------------------------------- /test/utils.test.ts: -------------------------------------------------------------------------------- ```typescript 1 | import { describe, it, expect } from "vitest"; 2 | import type { ApiEndpoint, ApiParameter } from "../src/gradio_api"; 3 | import { convertParameter } from "../src/gradio_convert"; 4 | 5 | function createParameter( 6 | override: Partial<ApiParameter> & { 7 | // Require the essential bits we always need to specify 8 | python_type: { 9 | type: string; 10 | description?: string; 11 | }; 12 | } 13 | ): ApiParameter { 14 | return { 15 | label: "Test Parameter", 16 | // parameter_name is now optional 17 | type: "string", 18 | component: "Textbox", 19 | // Spread the override at the end to allow overriding defaults 20 | ...override, 21 | }; 22 | } 23 | 24 | // Test just the parameter conversion 25 | describe("basic conversions", () => { 26 | it("converts a single basic string parameter", () => { 27 | const param = createParameter({ 28 | python_type: { 29 | type: "str", 30 | description: "A text parameter", 31 | }, 32 | }); 33 | const result = convertParameter(param); 34 | 35 | // TypeScript ensures result matches ParameterSchema 36 | expect(result).toEqual({ 37 | type: "string", 38 | description: "A text parameter", 39 | }); 40 | }); 41 | 42 | it("uses python_type description when available", () => { 43 | const param = createParameter({ 44 | label: "Prompt", 45 | python_type: { 46 | type: "str", 47 | description: "The input prompt text", 48 | }, 49 | }); 50 | const result = convertParameter(param); 51 | 52 | expect(result).toEqual({ 53 | type: "string", 54 | description: "The input prompt text", 55 | }); 56 | }); 57 | 58 | it("falls back to label when python_type description is empty", () => { 59 | const param = createParameter({ 60 | label: "Prompt", 61 | python_type: { 62 | type: "str", 63 | description: "", 64 | }, 65 | }); 66 | const result = convertParameter(param); 67 | 68 | expect(result).toEqual({ 69 | type: "string", 70 | description: "Prompt", 71 | }); 72 | }); 73 | 74 | it("includes default value when specified", () => { 75 | const param = createParameter({ 76 | parameter_has_default: true, 77 | parameter_default: "default text", 78 | python_type: { 79 | type: "str", 80 | description: "", 81 | }, 82 | }); 83 | const result = convertParameter(param); 84 | 85 | expect(result).toEqual({ 86 | type: "string", 87 | description: "Test Parameter", 88 | default: "default text", 89 | }); 90 | }); 91 | 92 | it("includes example when specified", () => { 93 | const param = createParameter({ 94 | example_input: "example text", 95 | python_type: { 96 | type: "str", 97 | description: "" 98 | }, 99 | }); 100 | const result = convertParameter(param); 101 | 102 | expect(result).toEqual({ 103 | type: "string", 104 | description: "Test Parameter", 105 | examples: ["example text"], 106 | }); 107 | }); 108 | 109 | it("includes both default and example when specified", () => { 110 | const param = createParameter({ 111 | parameter_has_default: true, 112 | parameter_default: 
"default text", 113 | example_input: "example text", 114 | python_type: { 115 | type: "str", 116 | description:"", 117 | }, 118 | }); 119 | const result = convertParameter(param); 120 | 121 | expect(result).toEqual({ 122 | type: "string", 123 | description: "Test Parameter", 124 | default: "default text", 125 | examples: ["example text"], 126 | }); 127 | }); 128 | }); 129 | 130 | describe("convertParameter", () => { 131 | it("converts a single parameter correctly", () => { 132 | const param: ApiParameter = { 133 | label: "Input Text", 134 | parameter_name: "text", 135 | parameter_has_default: false, 136 | parameter_default: null, 137 | type: "string", 138 | python_type: { 139 | type: "str", 140 | description: "A text input", 141 | }, 142 | component: "Textbox", 143 | }; 144 | 145 | const result = convertParameter(param); 146 | 147 | // TypeScript ensures result matches ParameterSchema 148 | expect(result).toEqual({ 149 | type: "string", 150 | description: "A text input", 151 | }); 152 | }); 153 | }); 154 | 155 | describe("number type conversions", () => { 156 | // ...existing tests... 157 | 158 | it("handles basic number without constraints", () => { 159 | const param = createParameter({ 160 | type: "number", 161 | python_type: { 162 | type: "float", 163 | description: "A number parameter", 164 | }, 165 | }); 166 | const result = convertParameter(param); 167 | 168 | expect(result).toEqual({ 169 | type: "number", 170 | description: "A number parameter", 171 | }); 172 | }); 173 | 174 | it("parses minimum constraint", () => { 175 | const param = createParameter({ 176 | type: "number", 177 | python_type: { 178 | type: "float", 179 | description: "A number parameter (min: 0)", 180 | }, 181 | }); 182 | const result = convertParameter(param); 183 | 184 | expect(result).toEqual({ 185 | type: "number", 186 | description: "A number parameter (min: 0)", 187 | minimum: 0, 188 | }); 189 | }); 190 | 191 | it("parses maximum constraint", () => { 192 | const param = createParameter({ 193 | type: "number", 194 | python_type: { 195 | type: "float", 196 | description: "A number parameter (maximum=100)", 197 | }, 198 | }); 199 | const result = convertParameter(param); 200 | 201 | expect(result).toEqual({ 202 | type: "number", 203 | description: "A number parameter (maximum=100)", 204 | maximum: 100, 205 | }); 206 | }); 207 | 208 | it("parses both min and max constraints", () => { 209 | const param = createParameter({ 210 | type: "number", 211 | python_type: { 212 | type: "float", 213 | description: "A number parameter (min: 0, max: 1.0)", 214 | }, 215 | }); 216 | const result = convertParameter(param); 217 | 218 | expect(result).toEqual({ 219 | type: "number", 220 | description: "A number parameter (min: 0, max: 1.0)", 221 | minimum: 0, 222 | maximum: 1.0, 223 | }); 224 | }); 225 | 226 | it("parses 'between X and Y' format", () => { 227 | const param = createParameter({ 228 | type: "number", 229 | python_type: { 230 | type: "float", 231 | description: "numeric value between 256 and 2048", 232 | }, 233 | }); 234 | const result = convertParameter(param); 235 | 236 | expect(result).toEqual({ 237 | type: "number", 238 | description: "numeric value between 256 and 2048", 239 | minimum: 256, 240 | maximum: 2048, 241 | }); 242 | }); 243 | 244 | it("parses large number ranges", () => { 245 | const param = createParameter({ 246 | type: "number", 247 | python_type: { 248 | type: "float", 249 | description: "numeric value between 0 and 2147483647", 250 | }, 251 | }); 252 | const result = convertParameter(param); 253 | 
254 | expect(result).toEqual({ 255 | type: "number", 256 | description: "numeric value between 0 and 2147483647", 257 | minimum: 0, 258 | maximum: 2147483647, 259 | }); 260 | }); 261 | }); 262 | 263 | describe("boolean type conversions", () => { 264 | it("handles basic boolean parameter", () => { 265 | const param = createParameter({ 266 | type: "boolean", 267 | python_type: { 268 | type: "bool", 269 | description: "A boolean flag", 270 | }, 271 | }); 272 | const result = convertParameter(param); 273 | 274 | expect(result).toEqual({ 275 | type: "boolean", 276 | description: "A boolean flag", 277 | }); 278 | }); 279 | 280 | it("handles boolean with default value", () => { 281 | const param = createParameter({ 282 | type: "boolean", 283 | parameter_has_default: true, 284 | parameter_default: true, 285 | python_type: { 286 | type: "bool", 287 | description: "", 288 | }, 289 | }); 290 | const result = convertParameter(param); 291 | 292 | expect(result).toEqual({ 293 | type: "boolean", 294 | description: "Test Parameter", 295 | default: true, 296 | }); 297 | }); 298 | 299 | it("handles boolean with example", () => { 300 | const param = createParameter({ 301 | type: "boolean", 302 | example_input: true, 303 | python_type: { 304 | type: "bool", 305 | description: "", 306 | }, 307 | }); 308 | const result = convertParameter(param); 309 | 310 | expect(result).toEqual({ 311 | type: "boolean", 312 | description: "Test Parameter", 313 | examples: [true], 314 | }); 315 | }); 316 | 317 | it("matches the Randomize seed example exactly", () => { 318 | const param = createParameter({ 319 | label: "Randomize seed", 320 | parameter_name: "randomize_seed", 321 | parameter_has_default: true, 322 | parameter_default: true, 323 | type: "boolean", 324 | example_input: true, 325 | python_type: { 326 | type: "bool", 327 | description: "", 328 | }, 329 | }); 330 | const result = convertParameter(param); 331 | 332 | expect(result).toEqual({ 333 | type: "boolean", 334 | description: "Randomize seed", 335 | default: true, 336 | examples: [true], 337 | }); 338 | }); 339 | }); 340 | 341 | describe("literal type conversions", () => { 342 | it("handles Literal type with enum values", () => { 343 | const param = createParameter({ 344 | label: "Aspect Ratio", 345 | parameter_name: "aspect_ratio", 346 | parameter_has_default: true, 347 | parameter_default: "1:1", 348 | type: "string", 349 | python_type: { 350 | type: "Literal['1:1', '16:9', '9:16', '4:3']", 351 | description: "", 352 | }, 353 | example_input: "1:1", 354 | }); 355 | const result = convertParameter(param); 356 | 357 | expect(result).toEqual({ 358 | type: "string", 359 | description: "Aspect Ratio", 360 | default: "1:1", 361 | examples: ["1:1"], 362 | enum: ["1:1", "16:9", "9:16", "4:3"] 363 | }); 364 | }); 365 | 366 | it("handles boolean-like Literal type with True/False strings", () => { 367 | const param = createParameter({ 368 | label: "Is Example Image", 369 | parameter_name: "is_example_image", 370 | parameter_has_default: true, 371 | parameter_default: "False", 372 | type: "string", 373 | python_type: { 374 | type: "Literal['True', 'False']", 375 | description: "", 376 | }, 377 | example_input: "True", 378 | }); 379 | const result = convertParameter(param); 380 | 381 | expect(result).toEqual({ 382 | type: "string", 383 | description: "Is Example Image", 384 | default: "False", 385 | examples: ["True"], 386 | enum: ["True", "False"] 387 | }); 388 | }); 389 | }); 390 | 391 | describe("file and blob type conversions", () => { 392 | it("handles simple 
filepath type", () => { 393 | const exampleUrl = "https://example.com/image.png"; 394 | const param = createParameter({ 395 | type: "Blob | File | Buffer", 396 | python_type: { 397 | type: "filepath", 398 | description: "", 399 | }, 400 | example_input: { 401 | path: exampleUrl, 402 | meta: { _type: "gradio.FileData" }, 403 | orig_name: "image.png", 404 | url: exampleUrl, 405 | }, 406 | }); 407 | const result = convertParameter(param); 408 | 409 | expect(result).toMatchObject({ 410 | type: "string", 411 | description: "Accepts: URL, file path, file name, or resource identifier", 412 | }); 413 | }); 414 | 415 | it("handles complex Dict type for image input", () => { 416 | const exampleUrl = "https://example.com/image.png"; 417 | const param = createParameter({ 418 | type: "Blob | File | Buffer", 419 | python_type: { 420 | type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url), ...)", 421 | description: "For input, either path or url must be provided.", 422 | }, 423 | example_input: { 424 | path: exampleUrl, 425 | meta: { _type: "gradio.FileData" }, 426 | orig_name: "image.png", 427 | url: exampleUrl, 428 | }, 429 | }); 430 | const result = convertParameter(param); 431 | 432 | expect(result).toMatchObject({ 433 | type: "string", 434 | description: "Accepts: URL, file path, file name, or resource identifier", 435 | }); 436 | }); 437 | 438 | it("handles audio file input type", () => { 439 | const exampleUrl = "https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav"; 440 | const param = createParameter({ 441 | type: "", 442 | python_type: { 443 | type: "filepath", 444 | description: "", 445 | }, 446 | component: "Audio", 447 | example_input: { 448 | path: exampleUrl, 449 | meta: { _type: "gradio.FileData" }, 450 | orig_name: "audio_sample.wav", 451 | url: exampleUrl, 452 | }, 453 | }); 454 | const result = convertParameter(param); 455 | 456 | expect(result).toMatchObject({ 457 | type: "string", 458 | description: "Accepts: Audio file URL, file path, file name, or resource identifier", 459 | }); 460 | }); 461 | 462 | it("handles empty type string for audio input", () => { 463 | const param = createParameter({ 464 | label: "parameter_1", 465 | parameter_name: "inputs", 466 | parameter_has_default: false, 467 | parameter_default: null, 468 | python_type: { 469 | type: "filepath", 470 | description: "", 471 | }, 472 | component: "Audio", 473 | example_input: { 474 | path: "https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav", 475 | meta: { _type: "gradio.FileData" }, 476 | orig_name: "audio_sample.wav", 477 | url: "https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav", 478 | }, 479 | }); 480 | const result = convertParameter(param); 481 | 482 | expect(result).toMatchObject({ 483 | type: "string", // Should always be "string" for file inputs 484 | description: "Accepts: Audio file URL, file path, file name, or resource identifier", 485 | }); 486 | }); 487 | 488 | it("handles image file input type", () => { 489 | const param = createParameter({ 490 | type: "", 491 | python_type: { 492 | type: "filepath", 493 | description: "", 494 | }, 495 | component: "Image", 496 | example_input: { 497 | path: "https://example.com/image.png", 498 | meta: { _type: "gradio.FileData" }, 499 | orig_name: "image.png", 500 | url: "https://example.com/image.png", 501 | }, 502 | }); 503 | const result = convertParameter(param); 504 | 505 | expect(result).toMatchObject({ 506 | type: "string", 507 | 
description: "Accepts: Image file URL, file path, file name, or resource identifier", 508 | }); 509 | }); 510 | }); 511 | 512 | describe("unnamed parameters", () => { 513 | it("handles parameters without explicit names", () => { 514 | const param = createParameter({ 515 | label: "Input Text", 516 | type: "string", 517 | python_type: { 518 | type: "str", 519 | description: "A text input", 520 | }, 521 | component: "Textbox", 522 | }); 523 | 524 | const result = convertParameter(param); 525 | 526 | expect(result).toEqual({ 527 | type: "string", 528 | description: "A text input", 529 | }); 530 | }); 531 | 532 | it("handles array of unnamed parameters", () => { 533 | const params = [ 534 | createParameter({ 535 | label: "Image Input", 536 | type: "Blob | File | Buffer", 537 | python_type: { 538 | type: "filepath", 539 | description: "", 540 | }, 541 | component: "Image", 542 | example_input: "https://example.com/image.png", 543 | }), 544 | createParameter({ 545 | label: "Text Input", 546 | type: "string", 547 | python_type: { 548 | type: "str", 549 | description: "", 550 | }, 551 | component: "Textbox", 552 | example_input: "Hello!", 553 | }), 554 | ]; 555 | 556 | // Test each parameter conversion 557 | params.forEach((param, index) => { 558 | const result = convertParameter(param); 559 | expect(result).toBeTruthy(); 560 | }); 561 | }); 562 | }); 563 | 564 | describe("special cases", () => { 565 | // chatbox historys often have incorrect types/example pairings. 566 | // will later handle python dicts/tuples more elegantly. 567 | it("handles chat history parameter", () => { 568 | const param = createParameter({ 569 | label: "Qwen2.5-72B-Instruct", 570 | parameter_name: "history", 571 | component: "Chatbot", 572 | python_type: { 573 | type: "list", 574 | description: "Some other description that should be ignored", 575 | }, 576 | }); 577 | const result = convertParameter(param); 578 | 579 | expect(result).toEqual({ 580 | type: "array", 581 | description: "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. Advanced: messages can also be file references or UI components." 582 | }); 583 | }); 584 | 585 | it("handles chat history parameter with examples", () => { 586 | const param = createParameter({ 587 | label: "Qwen2.5-72B-Instruct", 588 | parameter_name: "history", 589 | component: "Chatbot", 590 | python_type: { 591 | type: "list", 592 | description: "Some other description that should be ignored", 593 | }, 594 | example_input: [["Hello", "Hi there!"]], // Example chat history 595 | }); 596 | const result = convertParameter(param); 597 | 598 | expect(result).toEqual({ 599 | type: "array", 600 | description: "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. 
Advanced: messages can also be file references or UI components.", 601 | examples: [[["Hello", "Hi there!"]]] 602 | }); 603 | }); 604 | 605 | it("handles chat history parameter with default value", () => { 606 | const param = createParameter({ 607 | label: "Qwen2.5-72B-Instruct", 608 | parameter_name: "history", 609 | component: "Chatbot", 610 | parameter_has_default: true, 611 | parameter_default: [], 612 | python_type: { 613 | type: "list", 614 | description: "Some other description that should be ignored", 615 | } 616 | }); 617 | const result = convertParameter(param); 618 | 619 | expect(result).toEqual({ 620 | type: "array", 621 | description: "Chat history as an array of message pairs. Each pair is [user_message, assistant_message] where messages can be text strings or null. Advanced: messages can also be file references or UI components.", 622 | default: [] 623 | }); 624 | }); 625 | }); 626 | ``` -------------------------------------------------------------------------------- /test/parameter_test.json: -------------------------------------------------------------------------------- ```json 1 | [ 2 | { 3 | label: "Input Text", 4 | type: "string", 5 | python_type: { 6 | type: "str", 7 | description: "", 8 | }, 9 | component: "Textbox", 10 | example_input: "Howdy!", 11 | serializer: "StringSerializable", 12 | description: undefined, 13 | }, 14 | { 15 | label: "Acoustic Prompt", 16 | type: "", 17 | python_type: { 18 | type: "Any", 19 | description: "any valid value", 20 | }, 21 | component: "Dropdown", 22 | example_input: null, 23 | serializer: "SimpleSerializable", 24 | description: "any valid value", 25 | }, 26 | ] 27 | 28 | 29 | 30 | [ 31 | { 32 | label: "Input Text", 33 | type: "string", 34 | python_type: { 35 | type: "str", 36 | description: "", 37 | }, 38 | component: "Textbox", 39 | example_input: "Howdy!", 40 | serializer: "StringSerializable", 41 | description: undefined, 42 | }, 43 | { 44 | label: "Acoustic Prompt", 45 | type: "", 46 | python_type: { 47 | type: "Any", 48 | description: "any valid value", 49 | }, 50 | component: "Dropdown", 51 | example_input: null, 52 | serializer: "SimpleSerializable", 53 | description: "any valid value", 54 | }, 55 | ] 56 | 57 | { 58 | parameters: [ 59 | { 60 | label: "Upload Image", 61 | parameter_name: "img", 62 | parameter_has_default: false, 63 | parameter_default: null, 64 | type: "Blob | File | Buffer", 65 | python_type: { 66 | type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())", 67 | description: "For input, either path or url must be provided. 
For output, path is always provided.", 68 | }, 69 | component: "Image", 70 | example_input: { 71 | path: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", 72 | meta: { 73 | _type: "gradio.FileData", 74 | }, 75 | orig_name: "bus.png", 76 | url: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", 77 | }, 78 | description: undefined, 79 | }, 80 | { 81 | label: "Object to Extract", 82 | parameter_name: "prompt", 83 | parameter_has_default: false, 84 | parameter_default: null, 85 | type: "string", 86 | python_type: { 87 | type: "str", 88 | description: "", 89 | }, 90 | component: "Textbox", 91 | example_input: "Hello!!", 92 | description: undefined, 93 | }, 94 | { 95 | label: "Background Prompt (optional)", 96 | parameter_name: "bg_prompt", 97 | parameter_has_default: true, 98 | parameter_default: null, 99 | type: "string", 100 | python_type: { 101 | type: "str", 102 | description: "", 103 | }, 104 | component: "Textbox", 105 | example_input: "Hello!!", 106 | description: undefined, 107 | }, 108 | { 109 | label: "Aspect Ratio", 110 | parameter_name: "aspect_ratio", 111 | parameter_has_default: true, 112 | parameter_default: "1:1", 113 | type: "string", 114 | python_type: { 115 | type: "Literal['1:1', '16:9', '9:16', '4:3']", 116 | description: "", 117 | }, 118 | component: "Dropdown", 119 | example_input: "1:1", 120 | description: undefined, 121 | }, 122 | { 123 | component: "state", 124 | example: null, 125 | parameter_default: null, 126 | parameter_has_default: true, 127 | parameter_name: null, 128 | hidden: true, 129 | description: undefined, 130 | type: "", 131 | }, 132 | { 133 | label: "Object Size (%)", 134 | parameter_name: "scale_percent", 135 | parameter_has_default: true, 136 | parameter_default: 50, 137 | type: "number", 138 | python_type: { 139 | type: "float", 140 | description: "", 141 | }, 142 | component: "Slider", 143 | example_input: 10, 144 | description: "numeric value between 10 and 200", 145 | }, 146 | ], 147 | returns: [ 148 | { 149 | label: "Combined Result", 150 | type: "string", 151 | python_type: { 152 | type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())", 153 | description: "", 154 | }, 155 | component: "Image", 156 | description: undefined, 157 | }, 158 | { 159 | label: "Extracted Object", 160 | type: "string", 161 | python_type: { 162 | type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())", 163 | description: "", 164 | }, 165 | component: "Image", 166 | description: undefined, 167 | }, 168 | ], 169 | type: { 170 | generator: false, 171 | cancel: false, 172 | }, 173 | } 174 | 175 | 176 | 177 | [ 178 | { 179 | label: "Input Screenshot", 180 | parameter_name: "image", 181 | parameter_has_default: false, 182 | parameter_default: null, 183 | type: "Blob | File | Buffer", 184 | python_type: { 185 | type: "Dict(path: str | None (Path to a local file), url: str | None (Publicly available url or base64 encoded image), size: int | None (Size of image in bytes), orig_name: str | None (Original filename), 
mime_type: str | None (mime type of image), is_stream: bool (Can always be set to False), meta: Dict())", 186 | description: "For input, either path or url must be provided. For output, path is always provided.", 187 | }, 188 | component: "Image", 189 | example_input: { 190 | path: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", 191 | meta: { 192 | _type: "gradio.FileData", 193 | }, 194 | orig_name: "bus.png", 195 | url: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", 196 | }, 197 | description: undefined, 198 | }, 199 | { 200 | label: "Query", 201 | parameter_name: "query", 202 | parameter_has_default: false, 203 | parameter_default: null, 204 | type: "string", 205 | python_type: { 206 | type: "str", 207 | description: "", 208 | }, 209 | component: "Textbox", 210 | example_input: "Hello!!", 211 | description: undefined, 212 | }, 213 | { 214 | label: "Refinement Steps", 215 | parameter_name: "iterations", 216 | parameter_has_default: true, 217 | parameter_default: 1, 218 | type: "number", 219 | python_type: { 220 | type: "float", 221 | description: "", 222 | }, 223 | component: "Slider", 224 | example_input: 1, 225 | description: "numeric value between 1 and 3", 226 | }, 227 | { 228 | label: "Is Example Image", 229 | parameter_name: "is_example_image", 230 | parameter_has_default: true, 231 | parameter_default: "False", 232 | type: "string", 233 | python_type: { 234 | type: "Literal['True', 'False']", 235 | description: "", 236 | }, 237 | component: "Dropdown", 238 | example_input: "True", 239 | description: undefined, 240 | }, 241 | ] 242 | 243 | 244 | [ 245 | { 246 | label: "Input", 247 | parameter_name: "query", 248 | parameter_has_default: false, 249 | parameter_default: null, 250 | type: "string", 251 | python_type: { 252 | type: "str", 253 | description: "", 254 | }, 255 | component: "Textbox", 256 | example_input: "Hello!!", 257 | description: undefined, 258 | }, 259 | { 260 | label: "Qwen2.5-72B-Instruct", 261 | parameter_name: "history", 262 | parameter_has_default: true, 263 | parameter_default: [ 264 | ], 265 | type: "", 266 | python_type: { 267 | type: "Tuple[str | Dict(file: filepath, alt_text: str | None) | Dict(component: str, value: Any, constructor_args: Dict(), props: Dict()) | None, str | Dict(file: filepath, alt_text: str | None) | Dict(component: str, value: Any, constructor_args: Dict(), props: Dict()) | None]", 268 | description: "", 269 | }, 270 | component: "Chatbot", 271 | example_input: [ 272 | [ 273 | "Hello!", 274 | null, 275 | ], 276 | ], 277 | description: undefined, 278 | }, 279 | { 280 | label: "parameter_8", 281 | parameter_name: "system", 282 | parameter_has_default: true, 283 | parameter_default: "You are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.", 284 | type: "string", 285 | python_type: { 286 | type: "str", 287 | description: "", 288 | }, 289 | component: "Textbox", 290 | example_input: "Hello!!", 291 | description: undefined, 292 | }, 293 | ] 294 | 295 | 296 | 297 | 298 | [ 299 | { 300 | label: "Prompt", 301 | parameter_name: "prompt", 302 | parameter_has_default: false, 303 | parameter_default: null, 304 | type: "string", 305 | python_type: { 306 | type: "str", 307 | description: "", 308 | }, 309 | component: "Textbox", 310 | example_input: "Hello!!", 311 | description: undefined, 312 | }, 313 | { 314 | label: "Seed", 315 | parameter_name: "seed", 316 | parameter_has_default: true, 317 | parameter_default: 0, 318 | type: "number", 319 | python_type: { 320 | type: "float", 321 | description: "numeric value between 0 and 2147483647", 322 | }, 323 | component: "Slider", 324 | example_input: 0, 325 | description: "numeric value between 0 and 2147483647", 326 | }, 327 | { 328 | label: "Randomize seed", 329 | parameter_name: "randomize_seed", 330 | parameter_has_default: true, 331 | parameter_default: true, 332 | type: "boolean", 333 | python_type: { 334 | type: "bool", 335 | description: "", 336 | }, 337 | component: "Checkbox", 338 | example_input: true, 339 | description: undefined, 340 | }, 341 | { 342 | label: "Width", 343 | parameter_name: "width", 344 | parameter_has_default: true, 345 | parameter_default: 1024, 346 | type: "number", 347 | python_type: { 348 | type: "float", 349 | description: "numeric value between 256 and 2048", 350 | }, 351 | component: "Slider", 352 | example_input: 256, 353 | description: "numeric value between 256 and 2048", 354 | }, 355 | { 356 | label: "Height", 357 | parameter_name: "height", 358 | parameter_has_default: true, 359 | parameter_default: 1024, 360 | type: "number", 361 | python_type: { 362 | type: "float", 363 | description: "numeric value between 256 and 2048", 364 | }, 365 | component: "Slider", 366 | example_input: 256, 367 | description: "numeric value between 256 and 2048", 368 | }, 369 | { 370 | label: "Number of inference steps", 371 | parameter_name: "num_inference_steps", 372 | parameter_has_default: true, 373 | parameter_default: 4, 374 | type: "number", 375 | python_type: { 376 | type: "float", 377 | description: "numeric value between 1 and 50", 378 | }, 379 | component: "Slider", 380 | example_input: 1, 381 | description: "numeric value between 1 and 50", 382 | }, 383 | ] 384 | 385 | 386 | TRELLIS: 387 | { 388 | "/preprocess_image": { 389 | parameters: [ 390 | { 391 | label: "Image Prompt", 392 | parameter_name: "image", 393 | parameter_has_default: false, 394 | parameter_default: null, 395 | type: "Blob | File | Buffer", 396 | python_type: { 397 | type: "filepath", 398 | description: "", 399 | }, 400 | component: "Image", 401 | example_input: { 402 | path: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", 403 | meta: { 404 | _type: "gradio.FileData", 405 | }, 406 | orig_name: "bus.png", 407 | url: "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png", 408 | }, 409 | description: undefined, 410 | }, 411 | ], 412 | returns: [ 413 | { 414 | label: "value_29", 415 | type: "string", 416 | python_type: { 417 | type: "str", 418 | description: "", 419 | }, 420 | component: "Textbox", 421 | description: undefined, 422 | }, 423 | { 424 | label: "Image Prompt", 425 | type: "string", 426 | python_type: { 427 | type: "filepath", 428 | description: "", 429 | }, 430 | component: "Image", 431 | 
description: undefined, 432 | }, 433 | ], 434 | type: { 435 | generator: false, 436 | cancel: false, 437 | }, 438 | }, 439 | "/lambda": { 440 | parameters: [ 441 | ], 442 | returns: [ 443 | { 444 | label: "value_29", 445 | type: "string", 446 | python_type: { 447 | type: "str", 448 | description: "", 449 | }, 450 | component: "Textbox", 451 | description: undefined, 452 | }, 453 | ], 454 | type: { 455 | generator: false, 456 | cancel: false, 457 | }, 458 | }, 459 | "/image_to_3d": { 460 | parameters: [ 461 | { 462 | label: "parameter_29", 463 | parameter_name: "trial_id", 464 | parameter_has_default: false, 465 | parameter_default: null, 466 | type: "string", 467 | python_type: { 468 | type: "str", 469 | description: "", 470 | }, 471 | component: "Textbox", 472 | example_input: "Hello!!", 473 | description: undefined, 474 | }, 475 | { 476 | label: "Seed", 477 | parameter_name: "seed", 478 | parameter_has_default: true, 479 | parameter_default: 0, 480 | type: "number", 481 | python_type: { 482 | type: "float", 483 | description: "numeric value between 0 and 2147483647", 484 | }, 485 | component: "Slider", 486 | example_input: 0, 487 | description: "numeric value between 0 and 2147483647", 488 | }, 489 | { 490 | label: "Randomize Seed", 491 | parameter_name: "randomize_seed", 492 | parameter_has_default: true, 493 | parameter_default: true, 494 | type: "boolean", 495 | python_type: { 496 | type: "bool", 497 | description: "", 498 | }, 499 | component: "Checkbox", 500 | example_input: true, 501 | description: undefined, 502 | }, 503 | { 504 | label: "Guidance Strength", 505 | parameter_name: "ss_guidance_strength", 506 | parameter_has_default: true, 507 | parameter_default: 7.5, 508 | type: "number", 509 | python_type: { 510 | type: "float", 511 | description: "numeric value between 0.0 and 10.0", 512 | }, 513 | component: "Slider", 514 | example_input: 0, 515 | description: "numeric value between 0.0 and 10.0", 516 | }, 517 | { 518 | label: "Sampling Steps", 519 | parameter_name: "ss_sampling_steps", 520 | parameter_has_default: true, 521 | parameter_default: 12, 522 | type: "number", 523 | python_type: { 524 | type: "float", 525 | description: "numeric value between 1 and 50", 526 | }, 527 | component: "Slider", 528 | example_input: 1, 529 | description: "numeric value between 1 and 50", 530 | }, 531 | { 532 | label: "Guidance Strength", 533 | parameter_name: "slat_guidance_strength", 534 | parameter_has_default: true, 535 | parameter_default: 3, 536 | type: "number", 537 | python_type: { 538 | type: "float", 539 | description: "numeric value between 0.0 and 10.0", 540 | }, 541 | component: "Slider", 542 | example_input: 0, 543 | description: "numeric value between 0.0 and 10.0", 544 | }, 545 | { 546 | label: "Sampling Steps", 547 | parameter_name: "slat_sampling_steps", 548 | parameter_has_default: true, 549 | parameter_default: 12, 550 | type: "number", 551 | python_type: { 552 | type: "float", 553 | description: "numeric value between 1 and 50", 554 | }, 555 | component: "Slider", 556 | example_input: 1, 557 | description: "numeric value between 1 and 50", 558 | }, 559 | ], 560 | returns: [ 561 | { 562 | label: "Generated 3D Asset", 563 | type: "", 564 | python_type: { 565 | type: "Dict(video: filepath, subtitles: filepath | None)", 566 | description: "", 567 | }, 568 | component: "Video", 569 | description: undefined, 570 | }, 571 | ], 572 | type: { 573 | generator: false, 574 | cancel: false, 575 | }, 576 | }, 577 | "/activate_button": { 578 | parameters: [ 579 | ], 580 | returns: [ 
581 | ], 582 | type: { 583 | generator: false, 584 | cancel: false, 585 | }, 586 | }, 587 | "/deactivate_button": { 588 | parameters: [ 589 | ], 590 | returns: [ 591 | ], 592 | type: { 593 | generator: false, 594 | cancel: false, 595 | }, 596 | }, 597 | "/extract_glb": { 598 | parameters: [ 599 | { 600 | component: "state", 601 | example: null, 602 | parameter_default: null, 603 | parameter_has_default: true, 604 | parameter_name: null, 605 | hidden: true, 606 | description: undefined, 607 | type: "", 608 | }, 609 | { 610 | label: "Simplify", 611 | parameter_name: "mesh_simplify", 612 | parameter_has_default: true, 613 | parameter_default: 0.95, 614 | type: "number", 615 | python_type: { 616 | type: "float", 617 | description: "numeric value between 0.9 and 0.98", 618 | }, 619 | component: "Slider", 620 | example_input: 0.9, 621 | description: "numeric value between 0.9 and 0.98", 622 | }, 623 | { 624 | label: "Texture Size", 625 | parameter_name: "texture_size", 626 | parameter_has_default: true, 627 | parameter_default: 1024, 628 | type: "number", 629 | python_type: { 630 | type: "float", 631 | description: "numeric value between 512 and 2048", 632 | }, 633 | component: "Slider", 634 | example_input: 512, 635 | description: "numeric value between 512 and 2048", 636 | }, 637 | ], 638 | returns: [ 639 | { 640 | label: "Extracted GLB", 641 | type: "", 642 | python_type: { 643 | type: "filepath", 644 | description: "", 645 | }, 646 | component: "Litmodel3d", 647 | description: undefined, 648 | }, 649 | { 650 | label: "Download GLB", 651 | type: "", 652 | python_type: { 653 | type: "filepath", 654 | description: "", 655 | }, 656 | component: "Downloadbutton", 657 | description: undefined, 658 | }, 659 | ], 660 | type: { 661 | generator: false, 662 | cancel: false, 663 | }, 664 | }, 665 | "/activate_button_1": { 666 | parameters: [ 667 | ], 668 | returns: [ 669 | { 670 | label: "Download GLB", 671 | type: "", 672 | python_type: { 673 | type: "filepath", 674 | description: "", 675 | }, 676 | component: "Downloadbutton", 677 | description: undefined, 678 | }, 679 | ], 680 | type: { 681 | generator: false, 682 | cancel: false, 683 | }, 684 | }, 685 | "/deactivate_button_1": { 686 | parameters: [ 687 | ], 688 | returns: [ 689 | { 690 | label: "Download GLB", 691 | type: "", 692 | python_type: { 693 | type: "filepath", 694 | description: "", 695 | }, 696 | component: "Downloadbutton", 697 | description: undefined, 698 | }, 699 | ], 700 | type: { 701 | generator: false, 702 | cancel: false, 703 | }, 704 | }, 705 | } 706 | ```