# Directory Structure

```
├── .gitignore
├── Dockerfile
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── config
│   │   └── index.ts
│   ├── index.ts
│   ├── resources
│   │   ├── imageList.ts
│   │   ├── index.ts
│   │   ├── predictionList.ts
│   │   └── svgList.ts
│   ├── server
│   │   └── index.ts
│   ├── services
│   │   └── replicate.ts
│   ├── tools
│   │   ├── createPrediction.ts
│   │   ├── generateImage.ts
│   │   ├── generateImageVariants.ts
│   │   ├── generateMultipleImages.ts
│   │   ├── generateSVG.ts
│   │   ├── getPrediction.ts
│   │   ├── index.ts
│   │   └── predictionList.ts
│   ├── types
│   │   └── index.ts
│   └── utils
│       ├── error.ts
│       └── image.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
/node_modules
/build
/dist
/coverage
/logs
/tmp
.env
.cursor
.cursorignore
.cursorrules
.cursorconfig
.cursorignorerules
.cursorignoreconfig
.npmrc
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
[![MseeP.ai Security Assessment Badge](https://mseep.net/pr/awkoy-replicate-flux-mcp-badge.png)](https://mseep.ai/app/awkoy-replicate-flux-mcp)

# Replicate Flux MCP

![MCP Compatible](https://img.shields.io/badge/MCP-Compatible-blue)
![License](https://img.shields.io/badge/license-MIT-green)
![TypeScript](https://img.shields.io/badge/TypeScript-4.9+-blue)
![Model Context Protocol](https://img.shields.io/badge/MCP-Enabled-purple)
[![smithery badge](https://smithery.ai/badge/@awkoy/replicate-flux-mcp)](https://smithery.ai/server/@awkoy/replicate-flux-mcp)
![NPM Downloads](https://img.shields.io/npm/dw/replicate-flux-mcp)
![Stars](https://img.shields.io/github/stars/awkoy/replicate-flux-mcp)

<a href="https://glama.ai/mcp/servers/ss8n1knen8">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/ss8n1knen8/badge" />
</a>

**Replicate Flux MCP** is an advanced Model Context Protocol (MCP) server that empowers AI assistants to generate high-quality images and vector graphics. It leverages [Black Forest Labs' Flux Schnell model](https://replicate.com/black-forest-labs/flux-schnell) for raster images and [Recraft's V3 SVG model](https://replicate.com/recraft-ai/recraft-v3-svg) for vector graphics via the Replicate API.

## 📑 Table of Contents

- [Getting Started & Integration](#-getting-started--integration)
  - [Setup Process](#setup-process)
  - [Cursor Integration](#cursor-integration)
  - [Claude Desktop Integration](#claude-desktop-integration)
  - [Smithery Integration](#smithery-integration)
  - [Glama.ai Integration](#glamaai-integration)
- [Features](#-features)
- [Documentation](#-documentation)
  - [Available Tools](#available-tools)
  - [Available Resources](#available-resources)
- [Development](#-development)
- [Technical Details](#-technical-details)
- [Troubleshooting](#-troubleshooting)
- [Contributing](#-contributing)
- [License](#-license)
- [Resources](#-resources)
- [Examples](#-examples)

## 🚀 Getting Started & Integration

### Setup Process

1. **Obtain a Replicate API Token**
   - Sign up at [Replicate](https://replicate.com/)
   - Create an API token in your account settings

2. **Choose Your Integration Method**
   - Follow one of the integration options below based on your preferred MCP client

3. **Ask Your AI Assistant to Generate an Image**
   - Simply ask naturally: "Can you generate an image of a serene mountain landscape at sunset?"
   - Or be more specific: "Please create an image showing a peaceful mountain scene with a lake reflecting the sunset colors in the foreground"

4. **Explore Advanced Features**
   - Try different parameter settings for customized results
   - Experiment with SVG generation using `generate_svg`
   - Use batch image generation or variant generation features

### Cursor Integration

#### Method 1: Using mcp.json

1. Create or edit the `.cursor/mcp.json` file in your project directory:

```json
{
  "mcpServers": {
    "replicate-flux-mcp": {
      "command": "env REPLICATE_API_TOKEN=YOUR_TOKEN npx",
      "args": ["-y", "replicate-flux-mcp"]
    }
  }
}
```

2. Replace `YOUR_TOKEN` with your actual Replicate API token
3. Restart Cursor to apply the changes

#### Method 2: Manual Mode

1. Open Cursor and go to Settings
2. Navigate to the "MCP" or "Model Context Protocol" section
3. Click "Add Server" or equivalent
4. Enter the following command in the appropriate field:

```
env REPLICATE_API_TOKEN=YOUR_TOKEN npx -y replicate-flux-mcp
```

5. Replace `YOUR_TOKEN` with your actual Replicate API token
6. Save the settings and restart Cursor if necessary

### Claude Desktop Integration

1. Create or edit the `mcp.json` file in your configuration directory:

```json
{
  "mcpServers": {
    "replicate-flux-mcp": {
      "command": "npx",
      "args": ["-y", "replicate-flux-mcp"],
      "env": {
        "REPLICATE_API_TOKEN": "YOUR_TOKEN"
      }
    }
  }
}
```

2. Replace `YOUR_TOKEN` with your actual Replicate API token
3. Restart Claude Desktop to apply the changes

### Smithery Integration

This MCP server is available as a hosted service on Smithery, allowing you to use it without setting up your own server.

1. Visit [Smithery](https://smithery.ai/) and create an account if you don't have one
2. Navigate to the [Replicate Flux MCP server page](https://smithery.ai/server/@awkoy/replicate-flux-mcp)
3. Click "Add to Workspace" to add the server to your Smithery workspace
4. Configure your MCP client (Cursor, Claude Desktop, etc.) to use your Smithery workspace URL

For more information on using Smithery with your MCP clients, visit the [Smithery documentation](https://smithery.ai/docs).

### Glama.ai Integration

This MCP server is also available as a hosted service on Glama.ai, providing another option to use it without local setup.

1. Visit [Glama.ai](https://glama.ai/) and create an account if you don't have one
2. Go to the [Replicate Flux MCP server page](https://glama.ai/mcp/servers/ss8n1knen8)
3. Click "Install Server" to add the server to your workspace
4. Configure your MCP client to use your Glama.ai workspace

For more information, visit the [Glama.ai MCP servers documentation](https://glama.ai/mcp/servers).

## 🌟 Features

- **🖼️ High-Quality Image Generation** - Create stunning images using Flux Schnell, a state-of-the-art AI model
- **🎨 Vector Graphics Support** - Generate professional SVG vector graphics with Recraft V3 SVG model
- **🤖 AI Assistant Integration** - Seamlessly enable AI assistants like Claude to generate visual content
- **🎛️ Advanced Customization** - Fine-tune generation with controls for aspect ratio, quality, resolution, and more
- **🔌 Universal MCP Compatibility** - Works with all MCP clients including Cursor, Claude Desktop, Cline, and Zed
- **🔒 Secure Local Processing** - The server runs locally; prompts and results travel only between your machine and the Replicate API
- **🔍 Comprehensive History Management** - Track, view, and retrieve your complete generation history
- **📊 Batch Processing** - Generate multiple images from different prompts in a single request
- **🔄 Variant Exploration** - Create and compare multiple interpretations of the same concept
- **✏️ Prompt Engineering** - Fine-tune image variations with specialized prompt modifications

## 📚 Documentation

### Available Tools

#### `generate_image`

Generates an image based on a text prompt using the Flux Schnell model.

```typescript
{
  prompt: string;                // Required: Text description of the image to generate
  seed?: number;                 // Optional: Random seed for reproducible generation
  go_fast?: boolean;             // Optional: Run faster predictions with optimized model (default: true)
  megapixels?: "1" | "0.25";     // Optional: Image resolution (default: "1")
  num_outputs?: number;          // Optional: Number of images to generate (1-4) (default: 1)
  aspect_ratio?: string;         // Optional: Aspect ratio (e.g., "16:9", "4:3") (default: "1:1")
  output_format?: string;        // Optional: Output format ("webp", "jpg", "png") (default: "webp")
  output_quality?: number;       // Optional: Image quality (0-100) (default: 80)
  num_inference_steps?: number;  // Optional: Number of denoising steps (1-4) (default: 4)
  disable_safety_checker?: boolean; // Optional: Disable safety filter (default: false)
}
```
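
A minimal call needs only `prompt`; every other field falls back to the defaults listed above. For example:

```json
{
  "prompt": "A serene mountain landscape at sunset",
  "aspect_ratio": "16:9",
  "output_format": "png"
}
```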

#### `generate_multiple_images`

Generates multiple images based on an array of prompts using the Flux Schnell model.

```typescript
{
  prompts: string[];             // Required: Array of text descriptions for images to generate (1-10 prompts)
  seed?: number;                 // Optional: Random seed for reproducible generation
  go_fast?: boolean;             // Optional: Run faster predictions with optimized model (default: true)
  megapixels?: "1" | "0.25";     // Optional: Image resolution (default: "1")
  aspect_ratio?: string;         // Optional: Aspect ratio (e.g., "16:9", "4:3") (default: "1:1")
  output_format?: string;        // Optional: Output format ("webp", "jpg", "png") (default: "webp")
  output_quality?: number;       // Optional: Image quality (0-100) (default: 80)
  num_inference_steps?: number;  // Optional: Number of denoising steps (1-4) (default: 4)
  disable_safety_checker?: boolean; // Optional: Disable safety filter (default: false)
}
```

#### `generate_image_variants`

Generates multiple variants of the same image from a single prompt.

```typescript
{
  prompt: string;                // Required: Text description for the image to generate variants of
  num_variants?: number;         // Optional: Number of image variants to generate (2-10) (default: 4)
  prompt_variations?: string[];  // Optional: List of prompt modifiers to apply to variants (e.g., ["in watercolor style", "in oil painting style"])
  variation_mode?: "append" | "replace"; // Optional: How to apply variations - 'append' adds to base prompt, 'replace' uses variations directly (default: "append")
  seed?: number;                 // Optional: Base random seed. Each variant will use seed+variant_index
  go_fast?: boolean;             // Optional: Run faster predictions with optimized model (default: true)
  megapixels?: "1" | "0.25";     // Optional: Image resolution (default: "1")
  aspect_ratio?: string;         // Optional: Aspect ratio (e.g., "16:9", "4:3") (default: "1:1")
  output_format?: string;        // Optional: Output format ("webp", "jpg", "png") (default: "webp")
  output_quality?: number;       // Optional: Image quality (0-100) (default: 80)
  num_inference_steps?: number;  // Optional: Number of denoising steps (1-4) (default: 4)
  disable_safety_checker?: boolean; // Optional: Disable safety filter (default: false)
}
```

#### `generate_svg`

Generates an SVG vector image based on a text prompt using the Recraft V3 SVG model.

```typescript
{
  prompt: string;                // Required: Text description of the SVG to generate
  size?: string;                 // Optional: Size of the generated SVG (default: "1024x1024")
  style?: string;                // Optional: Style of the generated image (default: "any")
                                // Options: "any", "engraving", "line_art", "line_circuit", "linocut"
}
```

#### `prediction_list`

Retrieves a list of your recent predictions from Replicate.

```typescript
{
  limit?: number;  // Optional: Maximum number of predictions to return (1-100) (default: 50)
}
```

#### `get_prediction`

Gets detailed information about a specific prediction.

```typescript
{
  predictionId: string;  // Required: ID of the prediction to retrieve
}
```

### Available Resources

#### `imagelist`

Browse your history of generated images created with the Flux Schnell model.

#### `svglist`

Browse your history of generated SVG images created with the Recraft V3 SVG model.

#### `predictionlist`

Browse all your Replicate predictions history.

## 💻 Development

1. Clone the repository:

```bash
git clone https://github.com/awkoy/replicate-flux-mcp.git
cd replicate-flux-mcp
```

2. Install dependencies:

```bash
npm install
```

3. Start development mode:

```bash
npm run dev
```

4. Build the project:

```bash
npm run build
```

5. Connect to Client:

```json
{
  "mcpServers": {
    "image-generation-mcp": {
      "command": "node",
      "args": [
        "/Users/{USERNAME}/{PATH_TO}/replicate-flux-mcp/build/index.js"
      ],
      "env": {
        "REPLICATE_API_TOKEN": "YOUR REPLICATE API TOKEN"
      }
    }
  }
}
```

## ⚙️ Technical Details

### Stack

- **Model Context Protocol SDK** - Core MCP functionality for tool and resource management
- **Replicate API** - Provides access to state-of-the-art AI image generation models
- **TypeScript** - Ensures type safety and leverages modern JavaScript features
- **Zod** - Implements runtime type validation for robust API interactions

### Configuration

The server can be configured by modifying the `CONFIG` object in `src/config/index.ts`:

```javascript
const CONFIG = {
  serverName: "replicate-flux-mcp",
  serverVersion: "0.1.2",
  imageModelId: "black-forest-labs/flux-schnell",
  svgModelId: "recraft-ai/recraft-v3-svg",
  pollingAttempts: 25,
  pollingInterval: 2000, // ms
};
```

## 🔍 Troubleshooting

### Common Issues

#### Authentication Error
- Ensure your `REPLICATE_API_TOKEN` is correctly set in the environment
- Verify your token is valid by testing it with the Replicate API directly

#### Safety Filter Triggered
- The model has a built-in safety filter that may block certain prompts
- Try modifying your prompt to avoid potentially problematic content

#### Timeout Error
- For larger images or busy servers, you might need to increase `pollingAttempts` or `pollingInterval` in the configuration
- Default settings should work for most use cases

## 🤝 Contributing

Contributions are welcome! Please follow these steps to contribute:

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

For feature requests or bug reports, please create a GitHub issue. If you like this project, consider starring the repository!

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🔗 Resources

- [Model Context Protocol Documentation](https://modelcontextprotocol.io)
- [Replicate API Documentation](https://replicate.com/docs)
- [Flux Schnell Model](https://replicate.com/black-forest-labs/flux-schnell)
- [Recraft V3 SVG Model](https://replicate.com/recraft-ai/recraft-v3-svg)
- [MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk)
- [Smithery Documentation](https://smithery.ai/docs)
- [Glama.ai MCP Servers](https://glama.ai/mcp/servers)

## 🎨 Examples

![Demo](https://github.com/user-attachments/assets/ad6db606-ae3a-48db-a1cc-e1f88847769e)

| Multiple Prompts | Prompt Variants |
|-----------------|-----------------|
| ![Multiple prompts example: "A serene mountain lake at sunset", "A bustling city street at night", "A peaceful garden in spring"](https://github.com/user-attachments/assets/e5ac56d2-bfbb-4f33-938c-a3d7bffeee60) | ![Variants example: Base prompt "A majestic castle" with modifiers "in watercolor style", "as an oil painting", "with gothic architecture"](https://github.com/user-attachments/assets/8ebe5992-4803-4bf3-a82a-251135b0698a) |

Here are some examples of how to use the tools:

### Batch Image Generation with `generate_multiple_images`

Create multiple distinct images at once with different prompts:

```json
{
  "prompts": [
    "A red sports car on a mountain road", 
    "A blue sports car on a beach", 
    "A vintage sports car in a city street"
  ]
}
```

### Image Variants with `generate_image_variants`

Create different interpretations of the same concept using seeds:

```json
{
  "prompt": "A futuristic city skyline at night",
  "num_variants": 4,
  "seed": 42
}
```

Or explore style variations with prompt modifiers:

```json
{
  "prompt": "A character portrait",
  "prompt_variations": [
    "in anime style", 
    "in watercolor style", 
    "in oil painting style", 
    "as a 3D render"
  ]
}
```
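
### SVG Generation with `generate_svg`

Create a vector graphic (the parameters follow the `generate_svg` schema documented above):

```json
{
  "prompt": "A minimalist mountain logo with clean geometric lines",
  "style": "line_art"
}
```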

---

Made with ❤️ by Yaroslav Boiko


```

--------------------------------------------------------------------------------
/src/utils/error.ts:
--------------------------------------------------------------------------------

```typescript
import { ErrorCode, McpError } from "@modelcontextprotocol/sdk/types.js";

export function handleError(error: unknown): never {
  if (error instanceof Error) {
    throw new McpError(ErrorCode.InternalError, error.message);
  }
  throw new McpError(ErrorCode.InternalError, String(error));
}

```
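
`handleError` funnels any thrown value into an `McpError`. The normalization step it relies on — turning an `unknown` into a readable message — can be sketched without the SDK dependency:

```typescript
// Dependency-free sketch of the normalization handleError performs:
// Error instances contribute their message, anything else is stringified.
function toErrorMessage(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}
```

The same pattern appears in `src/index.ts` and `src/server/index.ts` when logging startup failures.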

--------------------------------------------------------------------------------
/src/config/index.ts:
--------------------------------------------------------------------------------

```typescript
// Configuration
export const CONFIG = {
  serverName: "replicate-flux-mcp",
  serverVersion: "0.1.2",
  imageModelId: "black-forest-labs/flux-schnell" as `${string}/${string}`,
  svgModelId: "recraft-ai/recraft-v3-svg" as `${string}/${string}`,
  pollingAttempts: 25,
  pollingInterval: 2000, // ms
};

```

--------------------------------------------------------------------------------
/src/resources/index.ts:
--------------------------------------------------------------------------------

```typescript
import { registerImageListResource } from "./imageList.js";
import { registerPreditionListResource } from "./predictionList.js";
import { registerSvgListResource } from "./svgList.js";

export const registerAllResources = () => {
  registerImageListResource();
  registerPreditionListResource();
  registerSvgListResource();
};

```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
FROM node:22.12-alpine AS builder

WORKDIR /app

COPY package*.json ./
COPY tsconfig.json ./
COPY src/ ./src/

RUN --mount=type=cache,target=/root/.npm npm install
RUN npm run build

FROM node:22.12-alpine AS release

WORKDIR /app

COPY --from=builder /app/build /app/build
COPY --from=builder /app/package.json ./
COPY --from=builder /app/package-lock.json ./

ENV NODE_ENV=production

RUN npm ci --ignore-scripts --omit=dev

CMD ["node", "build/index.js"]

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/deployments

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - replicateApiToken
    properties:
      replicateApiToken:
        type: string
        description: The API key for the Replicate API.
  # A function that produces the CLI command to start the MCP on stdio.
  commandFunction: |-
    config=>({command:'node',args:['build/index.js'],env:{REPLICATE_API_TOKEN:config.replicateApiToken}})
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
#!/usr/bin/env node
import { registerAllResources } from "./resources/index.js";
import { startServer } from "./server/index.js";
import { registerAllTools } from "./tools/index.js";

registerAllTools();
registerAllResources();

async function main() {
  try {
    await startServer();
  } catch (error) {
    console.error(
      "Unhandled server error:",
      error instanceof Error ? error.message : String(error)
    );
    process.exit(1);
  }
}

main().catch((error: unknown) => {
  console.error(
    "Unhandled server error:",
    error instanceof Error ? error.message : String(error)
  );
  process.exit(1);
});

```

--------------------------------------------------------------------------------
/src/tools/getPrediction.ts:
--------------------------------------------------------------------------------

```typescript
import { replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import { GetPredictionParams } from "../types/index.js";
import { CallToolResult } from "@modelcontextprotocol/sdk/types.js";

export const registerGetPredictionTool = async ({
  predictionId,
}: GetPredictionParams): Promise<CallToolResult> => {
  try {
    const prediction = await replicate.predictions.get(predictionId);

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(prediction, null, 2),
        },
      ],
    };
  } catch (error) {
    handleError(error);
  }
};

```

--------------------------------------------------------------------------------
/src/services/replicate.ts:
--------------------------------------------------------------------------------

```typescript
import Replicate from "replicate";
import { CONFIG } from "../config/index.js";

export function getReplicateApiToken(): string {
  const token = process.env.REPLICATE_API_TOKEN;
  if (!token) {
    console.error(
      "Error: REPLICATE_API_TOKEN environment variable is required"
    );
    process.exit(1);
  }
  return token;
}

export const replicate = new Replicate({
  auth: getReplicateApiToken(),
});

export async function pollForCompletion(predictionId: string) {
  for (let i = 0; i < CONFIG.pollingAttempts; i++) {
    const latest = await replicate.predictions.get(predictionId);
    if (latest.status !== "starting" && latest.status !== "processing") {
      return latest;
    }
    await new Promise((resolve) => setTimeout(resolve, CONFIG.pollingInterval));
  }
  return null;
}

```
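
`pollForCompletion` checks the prediction every `pollingInterval` ms, up to `pollingAttempts` times, and returns `null` on timeout. The same loop, generalized over a hypothetical `fetchStatus` callback so it can be exercised without the Replicate client:

```typescript
type Status = "starting" | "processing" | "succeeded" | "failed" | "canceled";

// Generic retry loop: resolve once the status leaves the in-flight states,
// or give up after `attempts` checks.
async function poll<T extends { status: Status }>(
  fetchStatus: () => Promise<T>,
  attempts: number,
  intervalMs: number
): Promise<T | null> {
  for (let i = 0; i < attempts; i++) {
    const latest = await fetchStatus();
    if (latest.status !== "starting" && latest.status !== "processing") {
      return latest;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return null; // caller reports "Processing timed out"
}
```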

--------------------------------------------------------------------------------
/src/tools/createPrediction.ts:
--------------------------------------------------------------------------------

```typescript
import { CreatePredictionParams } from "../types/index.js";
import { pollForCompletion, replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import { CallToolResult } from "@modelcontextprotocol/sdk/types.js";
import { CONFIG } from "../config/index.js";

export const registerCreatePredictionTool = async (
  input: CreatePredictionParams
): Promise<CallToolResult> => {
  try {
    const prediction = await replicate.predictions.create({
      model: CONFIG.imageModelId,
      input,
    });

    const completed = await pollForCompletion(prediction.id);

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(completed || "Processing timed out", null, 2),
        },
      ],
    };
  } catch (error) {
    handleError(error);
  }
};

```

--------------------------------------------------------------------------------
/src/server/index.ts:
--------------------------------------------------------------------------------

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CONFIG } from "../config/index.js";

export const server = new McpServer(
  {
    name: CONFIG.serverName,
    version: CONFIG.serverVersion,
  },
  {
    capabilities: {
      resources: {},
      tools: {},
    },
    instructions: `
    MCP server for the Replicate models.
    It is used to generate images and SVGs from text prompts.
    `,
  }
);

export async function startServer() {
  try {
    const transport = new StdioServerTransport();
    await server.connect(transport);
    console.error(
      `${CONFIG.serverName} v${CONFIG.serverVersion} running on stdio`
    );
  } catch (error) {
    console.error(
      "Server initialization error:",
      error instanceof Error ? error.message : String(error)
    );
    process.exit(1);
  }
}

```

--------------------------------------------------------------------------------
/src/utils/image.ts:
--------------------------------------------------------------------------------

```typescript
import { FileOutput } from "replicate";

export async function outputToBase64(output: FileOutput) {
  const blob = await output.blob();
  const buffer = Buffer.from(await blob.arrayBuffer());
  return buffer.toString("base64");
}

export async function urlToSvg(url: string) {
  try {
    const data = await fetch(url, {
      headers: {
        Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      },
    });

    const text = await data.text();

    return text;
  } catch (error) {
    throw new Error("Error fetching svg");
  }
}

export async function urlToBase64(url: string) {
  try {
    const data = await fetch(url, {
      headers: {
        Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      },
    });

    const blob = await data.blob();

    const buffer = Buffer.from(await blob.arrayBuffer());
    return buffer.toString("base64");
  } catch (error) {
    throw new Error("Error fetching image");
  }
}

```
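
All three helpers reduce to the same core step: read the response bytes and re-encode them as base64. That step can be exercised directly, without any network call:

```typescript
// Core conversion behind outputToBase64/urlToBase64:
// raw bytes re-encoded as a base64 string via Buffer.
function bytesToBase64(bytes: Uint8Array): string {
  return Buffer.from(bytes).toString("base64");
}

// "svg" encodes to "c3Zn"
const encoded = bytesToBase64(new TextEncoder().encode("svg"));
```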

--------------------------------------------------------------------------------
/src/tools/predictionList.ts:
--------------------------------------------------------------------------------

```typescript
import { PredictionListParams } from "../types/index.js";
import { replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import { CallToolResult } from "@modelcontextprotocol/sdk/types.js";

export const registerPredictionListTool = async ({
  limit,
}: PredictionListParams): Promise<CallToolResult> => {
  try {
    const predictions = [];
    for await (const page of replicate.paginate(replicate.predictions.list)) {
      predictions.push(...page);
      if (predictions.length >= limit) {
        break;
      }
    }

    const limitedPredictions = predictions.slice(0, limit);
    const totalPages = Math.ceil(predictions.length / limit);

    return {
      content: [
        {
          type: "text",
          text: `Found ${limitedPredictions.length} predictions (showing ${limitedPredictions.length} of ${predictions.length} total, page 1 of ${totalPages})`,
        },
        {
          type: "text",
          text: JSON.stringify(limitedPredictions, null, 2),
        },
      ],
    };
  } catch (error) {
    handleError(error);
  }
};

```
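
The tool stops paginating as soon as `limit` items have accumulated, then trims the overshoot with `slice`. A self-contained sketch of that accumulate-then-trim pattern, with an async generator standing in for `replicate.paginate`:

```typescript
// Stand-in for replicate.paginate: yields fixed-size pages of a list.
async function* paginate<T>(items: T[], pageSize: number): AsyncGenerator<T[]> {
  for (let i = 0; i < items.length; i += pageSize) {
    yield items.slice(i, i + pageSize);
  }
}

// Accumulate pages until `limit` is reached, then trim the overshoot.
async function collectUpTo<T>(pages: AsyncIterable<T[]>, limit: number): Promise<T[]> {
  const out: T[] = [];
  for await (const page of pages) {
    out.push(...page);
    if (out.length >= limit) break; // stop fetching once we have enough
  }
  return out.slice(0, limit);
}
```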

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
  "name": "replicate-flux-mcp",
  "version": "0.1.2",
  "type": "module",
  "bin": {
    "replicate-flux-mcp": "build/index.js"
  },
  "scripts": {
    "build": "tsc && shx chmod +x build/*.js",
    "prepare": "npm run build",
    "watch": "tsc --watch",
    "inspector": "npx @modelcontextprotocol/inspector build/index.js -e REPLICATE_API_TOKEN=YOUR_REPLICATE_API_TOKEN"
  },
  "homepage": "https://github.com/awkoy/replicate-flux-mcp",
  "keywords": [
    "replicate",
    "flux",
    "mcp",
    "flux-schnell",
    "flux-schnell-mcp",
    "modelcontextprotocol",
    "image-generation",
    "ai"
  ],
  "author": "Yaroslav Boiko <[email protected]>",
  "license": "MIT",
  "description": "MCP for Replicate Flux Model",
  "files": [
    "build"
  ],
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.7.0",
    "replicate": "^1.0.1",
    "zod": "^3.24.2"
  },
  "devDependencies": {
    "@types/node": "^22.13.10",
    "shx": "^0.3.4",
    "typescript": "^5.8.2"
  },
  "engines": {
    "node": ">=18"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/awkoy/replicate-flux-mcp.git"
  }
}

```

--------------------------------------------------------------------------------
/src/tools/generateSVG.ts:
--------------------------------------------------------------------------------

```typescript
import { SvgGenerationParams } from "../types/index.js";
import { replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import { CallToolResult } from "@modelcontextprotocol/sdk/types.js";
import { urlToSvg } from "../utils/image.js";
import { CONFIG } from "../config/index.js";
import { FileOutput } from "replicate";

export const registerGenerateSvgTool = async (
  input: SvgGenerationParams
): Promise<CallToolResult> => {
  try {
    const output = (await replicate.run(CONFIG.svgModelId, {
      input,
    })) as FileOutput;

    const svgUrl = output.url() as unknown as string;
    if (!svgUrl) {
      throw new Error("Failed to generate SVG URL");
    }

    try {
      const svg = await urlToSvg(svgUrl);

      return {
        content: [
          {
            type: "text",
            text: `This is a generated SVG url: ${svgUrl}`,
          },
          {
            type: "text",
            text: svg,
          },
        ],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `This is a generated SVG url: ${svgUrl}`,
          },
        ],
      };
    }
  } catch (error) {
    return handleError(error);
  }
};

```

--------------------------------------------------------------------------------
/src/resources/predictionList.ts:
--------------------------------------------------------------------------------

```typescript
import { server } from "../server/index.js";
import {
  ListResourcesCallback,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { replicate } from "../services/replicate.js";
import { Prediction } from "replicate";

export const registerPreditionListResource = () => {
  const list: ListResourcesCallback = async () => {
    try {
      const predictions: Prediction[] = [];
      for await (const page of replicate.paginate(replicate.predictions.list)) {
        predictions.push(...page);
      }

      return {
        resources: predictions.map((prediction) => ({
          uri: `predictions://${prediction.id}`,
          name: `Prediction ${prediction.id}`,
          mimeType: "application/json",
        })),
        nextCursor: undefined,
      };
    } catch (error) {
      console.error("Error listing predictions:", error);
      return {
        resources: [],
        nextCursor: undefined,
      };
    }
  };

  server.resource(
    "predictions",
    new ResourceTemplate("predictions://{id}", {
      list,
    }),
    async (uri, { id }) => {
      const prediction = await replicate.predictions.get(id as string);

      return {
        contents: [
          {
            name: "prediction",
            uri: uri.href,
            text: JSON.stringify(prediction),
            mimeType: "application/json",
          },
        ],
      };
    }
  );
};

```
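
The resource template addresses each prediction as `predictions://{id}`. The id extraction the template performs can be illustrated with a tiny parser (a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical parser mirroring the predictions://{id} template above.
function parsePredictionUri(uri: string): string | null {
  const match = /^predictions:\/\/(.+)$/.exec(uri);
  return match ? match[1] : null;
}
```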

--------------------------------------------------------------------------------
/src/tools/generateImage.ts:
--------------------------------------------------------------------------------

```typescript
import { ImageGenerationParams } from "../types/index.js";
import { replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import { CallToolResult } from "@modelcontextprotocol/sdk/types.js";
import { FileOutput } from "replicate";
import { outputToBase64 } from "../utils/image.js";
import { CONFIG } from "../config/index.js";

export const registerGenerateImageTool = async (
  input: ImageGenerationParams
): Promise<CallToolResult> => {
  const { support_image_mcp_response_type, ...predictionInput } = input;
  try {
    const [output] = (await replicate.run(CONFIG.imageModelId, {
      input: predictionInput,
    })) as [FileOutput];
    const imageUrl = output.url() as unknown as string;

    if (support_image_mcp_response_type) {
      const imageBase64 = await outputToBase64(output);
      return {
        content: [
          {
            type: "text",
            text: `This is a generated image link: ${imageUrl}`,
          },
          {
            type: "image",
            data: imageBase64,
            mimeType: `image/${
              input.output_format === "jpg" ? "jpeg" : input.output_format
            }`,
          },
          {
            type: "text",
            text: `The image above was generated by the Flux model from the prompt: ${input.prompt}`,
          },
        ],
      };
    }

    return {
      content: [
        {
          type: "text",
          text: `This is a generated image link: ${imageUrl}`,
        },
        {
          type: "text",
          text: `The image at the link above was generated by the Flux model from the prompt: ${input.prompt}`,
        },
      ],
    };
  } catch (error) {
    return handleError(error);
  }
};

```
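
The tool above branches only on whether a base64 image block is inserted between the two text blocks. A minimal, self-contained sketch of that branching (the `buildImageContent` helper and its sample strings are illustrative, not part of the repo):

```typescript
// Illustrative sketch of the response-shape branching in
// registerGenerateImageTool: clients that cannot render image content
// (support_image_mcp_response_type = false) receive text-only blocks.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image"; data: string; mimeType: string };

function buildImageContent(
  imageUrl: string,
  prompt: string,
  imageBase64?: string
): ContentBlock[] {
  const blocks: ContentBlock[] = [
    { type: "text", text: `This is a generated image link: ${imageUrl}` },
  ];
  if (imageBase64 !== undefined) {
    // The image block sits between the two text blocks, as in the tool above.
    blocks.push({ type: "image", data: imageBase64, mimeType: "image/png" });
  }
  blocks.push({ type: "text", text: `Prompt used: ${prompt}` });
  return blocks;
}

const textOnly = buildImageContent("https://example.com/out.png", "a red fox");
const withImage = buildImageContent(
  "https://example.com/out.png",
  "a red fox",
  "aGVsbG8="
);
```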

--------------------------------------------------------------------------------
/src/resources/svgList.ts:
--------------------------------------------------------------------------------

```typescript
import { server } from "../server/index.js";
import {
  ListResourcesCallback,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { replicate } from "../services/replicate.js";
import { urlToSvg } from "../utils/image.js";
import { Prediction } from "replicate";
import { CONFIG } from "../config/index.js";

export const registerSvgListResource = () => {
  const list: ListResourcesCallback = async () => {
    try {
      const predictions: Prediction[] = [];
      for await (const page of replicate.paginate(replicate.predictions.list)) {
        predictions.push(...page);
      }

      return {
        resources: predictions
          .filter((prediction) => prediction.model === CONFIG.svgModelId)
          .map((prediction) => ({
            uri: `svglist://${prediction.id}`,
            name: `SVG ${prediction.id}`,
            mimeType: "application/json",
            description: `Generated SVG by ${prediction.model} with id ${prediction.id}`,
          })),
        nextCursor: undefined,
      };
    } catch (error) {
      console.error("Error listing predictions:", error);
      return {
        resources: [],
        nextCursor: undefined,
      };
    }
  };

  server.resource(
    "svglist",
    new ResourceTemplate("svglist://{id}", {
      list,
    }),
    async (uri, { id }) => {
      const prediction = await replicate.predictions.get(id as string);

      if (!prediction.output) {
        return {
          contents: [
            {
              name: "Not Found!",
              uri: uri.href,
              text: `Output data is removed automatically by Replicate after an hour by default. Save your own copies before they are removed.`,
              mimeType: "text/plain",
            },
          ],
        };
      }

      const svg = await urlToSvg(prediction.output);

      return {
        contents: [
          {
            name: "svglist",
            uri: uri.href,
            text: svg,
            mimeType: "image/svg+xml",
          },
        ],
      };
    }
  );
};

```

--------------------------------------------------------------------------------
/src/tools/index.ts:
--------------------------------------------------------------------------------

```typescript
import {
  getPredictionSchema,
  svgGenerationSchema,
  multiImageGenerationSchema,
  imageVariantsGenerationSchema,
} from "../types/index.js";
import { predictionListSchema } from "../types/index.js";

import { server } from "../server/index.js";
import { imageGenerationSchema } from "../types/index.js";
import { registerGetPredictionTool } from "./getPrediction.js";
import { registerPredictionListTool } from "./predictionList.js";
import { registerGenerateImageTool } from "./generateImage.js";
import { createPredictionSchema } from "../types/index.js";
import { registerCreatePredictionTool } from "./createPrediction.js";
import { registerGenerateSvgTool } from "./generateSVG.js";
import { registerGenerateMultipleImagesTool } from "./generateMultipleImages.js";
import { registerGenerateImageVariantsTool } from "./generateImageVariants.js";

export const registerAllTools = () => {
  server.tool(
    "generate_image",
    "Generate an image from a text prompt using Flux Schnell model",
    imageGenerationSchema,
    registerGenerateImageTool
  );
  server.tool(
    "generate_multiple_images",
    "Generate multiple images from an array of prompts using Flux Schnell model",
    multiImageGenerationSchema,
    registerGenerateMultipleImagesTool
  );
  server.tool(
    "generate_image_variants",
    "Generate multiple variants of the same image from a single prompt",
    imageVariantsGenerationSchema,
    registerGenerateImageVariantsTool
  );
  server.tool(
    "generate_svg",
    "Generate an SVG from a text prompt using Recraft model",
    svgGenerationSchema,
    registerGenerateSvgTool
  );
  server.tool(
    "get_prediction",
    "Get details of a specific prediction by ID",
    getPredictionSchema,
    registerGetPredictionTool
  );
  server.tool(
    "create_prediction",
    "Create a prediction from a text prompt using the Flux Schnell model",
    createPredictionSchema,
    registerCreatePredictionTool
  );
  server.tool(
    "prediction_list",
    "Get a list of recent predictions from Replicate",
    predictionListSchema,
    registerPredictionListTool
  );
};

```

--------------------------------------------------------------------------------
/src/resources/imageList.ts:
--------------------------------------------------------------------------------

```typescript
import { server } from "../server/index.js";
import {
  ListResourcesCallback,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { replicate } from "../services/replicate.js";
import { urlToBase64 } from "../utils/image.js";
import { Prediction } from "replicate";
import { CONFIG } from "../config/index.js";

export const registerImageListResource = () => {
  const list: ListResourcesCallback = async () => {
    try {
      const predictions: Prediction[] = [];
      for await (const page of replicate.paginate(replicate.predictions.list)) {
        predictions.push(...page);
      }

      return {
        resources: predictions
          .filter(
            (prediction) =>
              prediction.output?.length &&
              prediction.model === CONFIG.imageModelId
          )
          .map((prediction) => ({
            uri: `images://${prediction.id}`,
            name: `Image ${prediction.id}`,
            mimeType: "application/json",
            description: `Generated image by ${prediction.model} with id ${prediction.id}`,
          })),
        nextCursor: undefined,
      };
    } catch (error) {
      console.error("Error listing predictions:", error);
      return {
        resources: [],
        nextCursor: undefined,
      };
    }
  };

  server.resource(
    "images",
    new ResourceTemplate("images://{id}", {
      list,
    }),
    async (uri, { id }) => {
      const prediction = await replicate.predictions.get(id as string);

      if (!prediction.output?.length) {
        return {
          contents: [
            {
              name: "Not Found!",
              uri: uri.href,
              text: `Output data is removed automatically by Replicate after an hour by default. Save your own copies before they are removed.`,
              mimeType: "text/plain",
            },
          ],
        };
      }

      const imageBase64 = await urlToBase64(prediction.output[0]);

      return {
        contents: [
          {
            name: "image",
            uri: uri.href,
            blob: imageBase64,
            mimeType: "image/png",
          },
        ],
      };
    }
  );
};

```
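
The list callback above keeps only predictions that have output and belong to the configured image model. That filter, isolated with plain objects (a sketch; the `PredictionLike` type and model-id strings are assumed examples, not values from `CONFIG`):

```typescript
// Sketch of the filter applied by the images:// list callback.
type PredictionLike = { id: string; model: string; output?: string[] };

function filterImagePredictions(
  predictions: PredictionLike[],
  imageModelId: string
): PredictionLike[] {
  return predictions.filter(
    (p) => (p.output?.length ?? 0) > 0 && p.model === imageModelId
  );
}

const sample: PredictionLike[] = [
  { id: "a", model: "example/flux-model", output: ["https://x/1.png"] },
  { id: "b", model: "example/flux-model" }, // still running: no output yet
  { id: "c", model: "example/svg-model", output: ["https://x/2.svg"] },
];

const images = filterImagePredictions(sample, "example/flux-model");
```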

--------------------------------------------------------------------------------
/src/tools/generateMultipleImages.ts:
--------------------------------------------------------------------------------

```typescript
import { MultiImageGenerationParams } from "../types/index.js";
import { replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import {
  CallToolResult,
  TextContent,
  ImageContent,
} from "@modelcontextprotocol/sdk/types.js";
import { FileOutput } from "replicate";
import { outputToBase64 } from "../utils/image.js";
import { CONFIG } from "../config/index.js";

export const registerGenerateMultipleImagesTool = async (
  input: MultiImageGenerationParams
): Promise<CallToolResult> => {
  const { prompts, support_image_mcp_response_type, ...commonParams } = input;
  try {
    // Process all prompts in parallel
    const generationPromises = prompts.map(async (prompt) => {
      const [output] = (await replicate.run(CONFIG.imageModelId, {
        input: {
          prompt,
          ...commonParams,
        },
      })) as [FileOutput];

      const imageUrl = output.url() as unknown as string;

      if (support_image_mcp_response_type) {
        const imageBase64 = await outputToBase64(output);
        return {
          prompt,
          imageUrl,
          imageBase64,
        };
      }

      return {
        prompt,
        imageUrl,
      };
    });

    // Wait for all image generation to complete
    const results = await Promise.all(generationPromises);

    // Build response content
    const responseContent: (TextContent | ImageContent)[] = [];

    // Add intro text
    responseContent.push({
      type: "text",
      text: `Generated ${results.length} images based on your prompts:`,
    } as TextContent);

    // Add each image with its prompt
    for (const result of results) {
      responseContent.push({
        type: "text",
        text: `\n\nPrompt: "${result.prompt}"\nImage URL: ${result.imageUrl}`,
      } as TextContent);

      if (support_image_mcp_response_type && result.imageBase64) {
        responseContent.push({
          type: "image",
          data: result.imageBase64,
          mimeType: `image/${
            input.output_format === "jpg" ? "jpeg" : input.output_format
          }`,
        } as ImageContent);
      }
    }

    return {
      content: responseContent,
    };
  } catch (error) {
    return handleError(error);
  }
};

```
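
Both this tool and `generate_image_variants` map the schema's `output_format` to a MIME type inline, special-casing `jpg` (whose registered MIME subtype is `jpeg`). Extracted as a standalone helper (a sketch; the repo keeps this as an inline expression):

```typescript
// Sketch of the inline MIME-type expression used when attaching image blocks.
type OutputFormat = "webp" | "jpg" | "png";

function toMimeType(format: OutputFormat): string {
  // "image/jpg" is not a registered MIME type; "image/jpeg" is.
  return `image/${format === "jpg" ? "jpeg" : format}`;
}
```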

--------------------------------------------------------------------------------
/src/tools/generateImageVariants.ts:
--------------------------------------------------------------------------------

```typescript
import { ImageVariantsGenerationParams } from "../types/index.js";
import { replicate } from "../services/replicate.js";
import { handleError } from "../utils/error.js";
import {
  CallToolResult,
  TextContent,
  ImageContent,
} from "@modelcontextprotocol/sdk/types.js";
import { FileOutput } from "replicate";
import { outputToBase64 } from "../utils/image.js";
import { CONFIG } from "../config/index.js";

type ImageVariantResult = {
  variantIndex: number;
  imageUrl: string;
  imageBase64?: string;
  usedPrompt: string;
};

export const registerGenerateImageVariantsTool = async (
  input: ImageVariantsGenerationParams
): Promise<CallToolResult> => {
  const {
    prompt,
    num_variants,
    seed,
    support_image_mcp_response_type,
    prompt_variations,
    variation_mode,
    ...commonParams
  } = input;

  try {
    let effectiveVariants = num_variants;
    let usingPromptVariations = false;

    // Decide if we're using prompt variations
    if (prompt_variations && prompt_variations.length > 0) {
      usingPromptVariations = true;
      // If using prompt variations, number of variants is limited by available variations
      effectiveVariants = Math.min(num_variants, prompt_variations.length);
    }

    // Process all variants in parallel
    const generationPromises = Array.from(
      { length: effectiveVariants },
      (_, index) => {
        // If seed is provided, create deterministic variants by adding the index
        const variantSeed = seed !== undefined ? seed + index : undefined;

        // Determine which prompt to use for this variant
        let variantPrompt = prompt;
        if (usingPromptVariations) {
          const variation = prompt_variations![index];
          if (variation_mode === "append") {
            variantPrompt = `${prompt} ${variation}`;
          } else {
            // 'replace' mode
            variantPrompt = variation;
          }
        }

        return replicate
          .run(CONFIG.imageModelId, {
            input: {
              prompt: variantPrompt,
              seed: variantSeed,
              ...commonParams,
            },
          })
          .then((outputs) => {
            const [output] = outputs as [FileOutput];
            const imageUrl = output.url() as unknown as string;

            if (support_image_mcp_response_type) {
              return outputToBase64(output).then((imageBase64) => ({
                variantIndex: index + 1,
                imageUrl,
                imageBase64,
                usedPrompt: variantPrompt,
              }));
            }

            return {
              variantIndex: index + 1,
              imageUrl,
              usedPrompt: variantPrompt,
            };
          });
      }
    );

    // Wait for all variant generation to complete
    const results = (await Promise.all(
      generationPromises
    )) as ImageVariantResult[];

    // Build response content
    const responseContent: (TextContent | ImageContent)[] = [];

    // Add intro text - different based on whether we're using prompt variations
    if (usingPromptVariations) {
      responseContent.push({
        type: "text",
        text: `Generated ${results.length} variants of "${prompt}" using custom prompt variations (${variation_mode} mode)`,
      } as TextContent);
    } else {
      responseContent.push({
        type: "text",
        text: `Generated ${results.length} variants of: "${prompt}" using seed variations`,
      } as TextContent);
    }

    // Add each variant with its index and prompt info
    for (const result of results) {
      // Build an appropriate description based on variant type
      let variantDescription = `Variant #${result.variantIndex}`;

      if (usingPromptVariations) {
        variantDescription += `\nPrompt: "${result.usedPrompt}"`;
      } else if (seed !== undefined) {
        variantDescription += ` (seed: ${seed + (result.variantIndex - 1)})`;
      }

      variantDescription += `\nImage URL: ${result.imageUrl}`;

      responseContent.push({
        type: "text",
        text: `\n\n${variantDescription}`,
      } as TextContent);

      if (support_image_mcp_response_type && result.imageBase64) {
        responseContent.push({
          type: "image",
          data: result.imageBase64,
          mimeType: `image/${
            input.output_format === "jpg" ? "jpeg" : input.output_format
          }`,
        } as ImageContent);
      }
    }

    return {
      content: responseContent,
    };
  } catch (error) {
    return handleError(error);
  }
};

```
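
The per-variant prompt and seed derivation above is a pure computation and can be isolated for testing (a sketch; `deriveVariant` is not part of the repo):

```typescript
// Sketch of the per-variant derivation in registerGenerateImageVariantsTool:
// with prompt_variations, "append" concatenates and "replace" substitutes;
// with a base seed, variant i uses seed + i for reproducibility.
type VariantOptions = {
  prompt: string;
  seed?: number;
  promptVariations?: string[];
  variationMode: "append" | "replace";
};

function deriveVariant(opts: VariantOptions, index: number) {
  const { prompt, seed, promptVariations, variationMode } = opts;
  const variantSeed = seed !== undefined ? seed + index : undefined;
  let variantPrompt = prompt;
  if (promptVariations && promptVariations.length > 0) {
    const variation = promptVariations[index];
    variantPrompt =
      variationMode === "append" ? `${prompt} ${variation}` : variation;
  }
  return { variantPrompt, variantSeed };
}

const appended = deriveVariant(
  {
    prompt: "a fox",
    seed: 10,
    promptVariations: ["in watercolor style"],
    variationMode: "append",
  },
  0
);
const replaced = deriveVariant(
  { prompt: "a fox", promptVariations: ["an owl"], variationMode: "replace" },
  0
);
```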

--------------------------------------------------------------------------------
/src/types/index.ts:
--------------------------------------------------------------------------------

```typescript
import { z } from "zod";

export const createPredictionSchema = {
  prompt: z.string().min(1).describe("Prompt for generated image"),
  seed: z
    .number()
    .int()
    .optional()
    .describe("Random seed. Set for reproducible generation"),
  go_fast: z
    .boolean()
    .default(true)
    .describe(
      "Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16"
    ),
  megapixels: z
    .enum(["1", "0.25"])
    .default("1")
    .describe("Approximate number of megapixels for generated image"),
  num_outputs: z
    .number()
    .int()
    .min(1)
    .max(4)
    .default(1)
    .describe("Number of outputs to generate"),
  aspect_ratio: z
    .enum([
      "1:1",
      "16:9",
      "21:9",
      "3:2",
      "2:3",
      "4:5",
      "5:4",
      "3:4",
      "4:3",
      "9:16",
      "9:21",
    ])
    .default("1:1")
    .describe("Aspect ratio for the generated image"),
  output_format: z
    .enum(["webp", "jpg", "png"])
    .default("webp")
    .describe("Format of the output images"),
  output_quality: z
    .number()
    .int()
    .min(0)
    .max(100)
    .default(80)
    .describe(
      "Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs"
    ),
  num_inference_steps: z
    .number()
    .int()
    .min(1)
    .max(4)
    .default(4)
    .describe(
      "Number of denoising steps. 4 is recommended; fewer steps produce lower-quality outputs, faster."
    ),
  disable_safety_checker: z
    .boolean()
    .default(false)
    .describe("Disable safety checker for generated images."),
};
const createPredictionObjectSchema = z.object(createPredictionSchema);
export type CreatePredictionParams = z.infer<
  typeof createPredictionObjectSchema
>;

export const imageGenerationSchema = {
  ...createPredictionSchema,
  support_image_mcp_response_type: z
    .boolean()
    .default(true)
    .describe(
      "Disable if the client does not support image content in responses (for example, the Cursor app)"
    ),
};
const imageGenerationObjectSchema = z.object(imageGenerationSchema);
export type ImageGenerationParams = z.infer<typeof imageGenerationObjectSchema>;

export const svgGenerationSchema = {
  prompt: z.string().min(1).describe("Prompt for generated SVG"),
  size: z
    .enum([
      "1024x1024",
      "1365x1024",
      "1024x1365",
      "1536x1024",
      "1024x1536",
      "1820x1024",
      "1024x1820",
      "1024x2048",
      "2048x1024",
      "1434x1024",
      "1024x1434",
      "1024x1280",
      "1280x1024",
      "1024x1707",
      "1707x1024",
    ])
    .default("1024x1024")
    .describe("Size of the generated SVG"),
  style: z
    .enum(["any", "engraving", "line_art", "line_circuit", "linocut"])
    .default("any")
    .describe("Style of the generated image."),
};
const svgGenerationObjectSchema = z.object(svgGenerationSchema);
export type SvgGenerationParams = z.infer<typeof svgGenerationObjectSchema>;

export const predictionListSchema = {
  limit: z
    .number()
    .int()
    .min(1)
    .max(100)
    .default(50)
    .describe("Maximum number of predictions to return"),
};
const predictionListObjectSchema = z.object(predictionListSchema);
export type PredictionListParams = z.infer<typeof predictionListObjectSchema>;

export const getPredictionSchema = {
  predictionId: z.string().min(1).describe("ID of the prediction to retrieve"),
};
const getPredictionObjectSchema = z.object(getPredictionSchema);
export type GetPredictionParams = z.infer<typeof getPredictionObjectSchema>;

export const multiImageGenerationSchema = {
  prompts: z
    .array(z.string().min(1))
    .min(1)
    .max(10)
    .describe("Array of text descriptions for the images to generate"),
  seed: z
    .number()
    .int()
    .optional()
    .describe("Random seed. Set for reproducible generation"),
  go_fast: z
    .boolean()
    .default(true)
    .describe(
      "Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16"
    ),
  megapixels: z
    .enum(["1", "0.25"])
    .default("1")
    .describe("Approximate number of megapixels for generated image"),
  aspect_ratio: z
    .enum([
      "1:1",
      "16:9",
      "21:9",
      "3:2",
      "2:3",
      "4:5",
      "5:4",
      "3:4",
      "4:3",
      "9:16",
      "9:21",
    ])
    .default("1:1")
    .describe("Aspect ratio for the generated image"),
  output_format: z
    .enum(["webp", "jpg", "png"])
    .default("webp")
    .describe("Format of the output images"),
  output_quality: z
    .number()
    .int()
    .min(0)
    .max(100)
    .default(80)
    .describe(
      "Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs"
    ),
  num_inference_steps: z
    .number()
    .int()
    .min(1)
    .max(4)
    .default(4)
    .describe(
      "Number of denoising steps. 4 is recommended; fewer steps produce lower-quality outputs, faster."
    ),
  disable_safety_checker: z
    .boolean()
    .default(false)
    .describe("Disable safety checker for generated images."),
  support_image_mcp_response_type: z
    .boolean()
    .default(true)
    .describe(
      "Disable if the client does not support image content in responses (for example, the Cursor app)"
    ),
};
const multiImageGenerationObjectSchema = z.object(multiImageGenerationSchema);
export type MultiImageGenerationParams = z.infer<
  typeof multiImageGenerationObjectSchema
>;

export const imageVariantsGenerationSchema = {
  prompt: z
    .string()
    .min(1)
    .describe("Text description for the image to generate variants of"),
  num_variants: z
    .number()
    .int()
    .min(2)
    .max(10)
    .default(4)
    .describe("Number of image variants to generate (2-10)"),
  prompt_variations: z
    .array(z.string())
    .optional()
    .describe(
      "Optional list of prompt modifiers to apply to variants (e.g., ['in watercolor style', 'in oil painting style']). If provided, these will be used instead of random seeds."
    ),
  variation_mode: z
    .enum(["append", "replace"])
    .default("append")
    .describe(
      "How to apply prompt variations: 'append' adds to the base prompt, 'replace' uses variations as standalone prompts"
    ),
  seed: z
    .number()
    .int()
    .optional()
    .describe(
      "Base random seed. Each variant will use seed+variant_index for reproducibility"
    ),
  go_fast: z
    .boolean()
    .default(true)
    .describe(
      "Run faster predictions with model optimized for speed (currently fp8 quantized); disable to run in original bf16"
    ),
  megapixels: z
    .enum(["1", "0.25"])
    .default("1")
    .describe("Approximate number of megapixels for generated image"),
  aspect_ratio: z
    .enum([
      "1:1",
      "16:9",
      "21:9",
      "3:2",
      "2:3",
      "4:5",
      "5:4",
      "3:4",
      "4:3",
      "9:16",
      "9:21",
    ])
    .default("1:1")
    .describe("Aspect ratio for the generated image"),
  output_format: z
    .enum(["webp", "jpg", "png"])
    .default("webp")
    .describe("Format of the output images"),
  output_quality: z
    .number()
    .int()
    .min(0)
    .max(100)
    .default(80)
    .describe(
      "Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs"
    ),
  num_inference_steps: z
    .number()
    .int()
    .min(1)
    .max(4)
    .default(4)
    .describe(
      "Number of denoising steps. 4 is recommended; fewer steps produce lower-quality outputs, faster."
    ),
  disable_safety_checker: z
    .boolean()
    .default(false)
    .describe("Disable safety checker for generated images."),
  support_image_mcp_response_type: z
    .boolean()
    .default(true)
    .describe("Support image MCP response type on client side"),
};
const imageVariantsGenerationObjectSchema = z.object(
  imageVariantsGenerationSchema
);
export type ImageVariantsGenerationParams = z.infer<
  typeof imageVariantsGenerationObjectSchema
>;

```