# Directory Structure
```
├── .gitignore
├── Dockerfile
├── LICENSE.txt
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   └── index.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Dependencies
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Build
build/
dist/
*.tsbuildinfo
# Environment
.env
.env.local
.env.*.local
# IDE
.idea/
.vscode/
*.swp
*.swo
# OS
.DS_Store
Thumbs.db
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
[MseeP.ai Security Assessment](https://mseep.ai/app/66julienmartin-mcp-server-qwen-max)
# Qwen Max MCP Server
A Model Context Protocol (MCP) server implementation for the Qwen Max language model.
[View on Smithery](https://smithery.ai/server/@66julienmartin/mcp-server-qwen_max)
<a href="https://glama.ai/mcp/servers/1v7po9oa9w"><img width="380" height="200" src="https://glama.ai/mcp/servers/1v7po9oa9w/badge" alt="Qwen Max Server MCP server" /></a>
## Why Node.js?
This implementation uses Node.js/TypeScript because it currently provides the most stable and reliable integration with MCP servers compared to other languages such as Python. The Node.js SDK for MCP offers better type safety, error handling, and compatibility with Claude Desktop.
## Prerequisites
- Node.js (v18 or higher)
- npm
- Claude Desktop
- Dashscope API key
## Installation
### Installing via Smithery
To install Qwen Max MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@66julienmartin/mcp-server-qwen_max):
```bash
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
```
### Manual Installation
```bash
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd mcp-server-qwen-max
npm install
```
## Model Selection
By default, this server uses the Qwen-Max model. The Qwen series offers several commercial models with different capabilities:
### Qwen-Max
Provides the best inference performance, especially for complex and multi-step tasks.
- Context window: 32,768 tokens
- Max input: 30,720 tokens
- Max output: 8,192 tokens
- Pricing: $0.0016/1K tokens (input), $0.0064/1K tokens (output)
- Free quota: 1 million tokens
Available versions:
- qwen-max (Stable)
- qwen-max-latest (Latest)
- qwen-max-2025-01-25 (Snapshot, also known as qwen-max-0125 or Qwen2.5-Max)
### Qwen-Plus
Balanced combination of performance, speed, and cost, ideal for moderately complex tasks.
- Context window: 131,072 tokens
- Max input: 129,024 tokens
- Max output: 8,192 tokens
- Pricing: $0.0004/1K tokens (input), $0.0012/1K tokens (output)
- Free quota: 1 million tokens
Available versions:
- qwen-plus (Stable)
- qwen-plus-latest (Latest)
- qwen-plus-2025-01-25 (Snapshot, also known as qwen-plus-0125)
### Qwen-Turbo
Fast speed and low cost, suitable for simple tasks.
- Context window: 1,000,000 tokens
- Max input: 1,000,000 tokens
- Max output: 8,192 tokens
- Pricing: $0.00005/1K tokens (input), $0.0002/1K tokens (output)
- Free quota: 1 million tokens
Available versions:
- qwen-turbo (Stable)
- qwen-turbo-latest (Latest)
- qwen-turbo-2024-11-01 (Snapshot, also known as qwen-turbo-1101)
To modify the model, update the model name in src/index.ts:
```typescript
// For Qwen-Max (default)
model: "qwen-max"
// For Qwen-Plus
model: "qwen-plus"
// For Qwen-Turbo
model: "qwen-turbo"
```
For more detailed information about the available models, see the [Alibaba Cloud Model Studio documentation](https://www.alibabacloud.com/help/en/model-studio/getting-started/models).
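For reference, the model name lives in the chat-completion call in `src/index.ts`; switching models only changes that one field. A minimal excerpt (the surrounding handler code is unchanged; the shipped code uses `"qwen-max-latest"`):
```typescript
// src/index.ts: the model is selected on the OpenAI-compatible completion call
const completion = await this.openai.chat.completions.create({
    model: "qwen-plus",  // swap "qwen-max-latest" for any model listed above
    messages: [{ role: "user", content: prompt }],
    max_tokens,
    temperature
});
```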
## Project Structure
```
qwen-max-mcp/
├── src/
│   └── index.ts             # Main server implementation
├── build/                   # Compiled output
│   └── index.js
├── Dockerfile
├── LICENSE.txt
├── README.md
├── package.json
├── package-lock.json
├── smithery.yaml
└── tsconfig.json
```
## Configuration
1. Create a `.env` file in the project root:
```
DASHSCOPE_API_KEY=your-api-key-here
```
2. Update the Claude Desktop configuration (`claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "qwen_max": {
      "command": "node",
      "args": ["/path/to/Qwen_Max/build/index.js"],
      "env": {
        "DASHSCOPE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
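At startup the server loads this key with `dotenv` and exits immediately if it is missing (from `src/index.ts`):
```typescript
dotenv.config();
const API_KEY = process.env.DASHSCOPE_API_KEY;
if (!API_KEY) {
    throw new Error("DASHSCOPE_API_KEY environment variable is required");
}
```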
## Development
```bash
npm run dev     # Watch mode
npm run build   # Build
npm run start   # Start server
```
## Features
- Text generation with Qwen models
- Configurable parameters (max_tokens, temperature)
- Error handling
- MCP protocol support
- Claude Desktop integration
- Support for all Qwen commercial models (Max, Plus, Turbo)
- Extensive token context windows
## API Usage
Example tool call:
```json
{
  "name": "qwen_max",
  "arguments": {
    "prompt": "Your prompt here",
    "max_tokens": 8192,
    "temperature": 0.7
  }
}
```
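On the server side, these arguments are destructured with the documented defaults before being forwarded to the Dashscope endpoint (excerpt from `src/index.ts`):
```typescript
// Defaults mirror the tool schema: max_tokens 8192, temperature 0.7
const { prompt, max_tokens = 8192, temperature = 0.7 } =
    request.params.arguments as QwenMaxArgs;
```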
## The Temperature Parameter
The temperature parameter controls the randomness of the model's output:
- Lower values (0.0-0.7): more focused, deterministic output
- Higher values (0.7-1.0): more creative, varied output

Recommended temperature settings by task:
- Code generation: 0.0-0.3
- Technical writing: 0.3-0.5
- General tasks: 0.7 (default)
- Creative writing: 0.8-1.0
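For example, a code-generation call would pin the temperature low; a hypothetical request:
```json
{
  "name": "qwen_max",
  "arguments": {
    "prompt": "Write a TypeScript function that reverses a string",
    "temperature": 0.2
  }
}
```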
## Error Handling
The server provides detailed error messages for common issues:
- API authentication errors
- Invalid parameters
- Rate limiting
- Network issues
- Token limit exceeded
- Model availability issues
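API failures are caught and re-thrown as structured MCP errors rather than crashing the server (excerpt from `src/index.ts`):
```typescript
try {
    // ...forward the request to the Dashscope endpoint
} catch (error: any) {
    console.error("Qwen API Error:", error);
    throw new McpError(
        ErrorCode.InternalError,
        `Qwen API error: ${error.message}`
    );
}
```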
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
MIT
```
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
{
    "compilerOptions": {
      "target": "ES2022",
      "module": "Node16",
      "moduleResolution": "Node16",
      "outDir": "./build",
      "rootDir": "./src",
      "strict": true,
      "esModuleInterop": true,
      "skipLibCheck": true,
      "forceConsistentCasingInFileNames": true
    },
    "files": ["./src/index.ts"],
    "include": ["src/**/*.ts"],
    "exclude": ["node_modules"]
  }
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
{
    "name": "qwen_max",
    "version": "1.0.0",
    "type": "module",
    "bin": {
      "qwen_max": "./build/index.js"
    },
    "scripts": {
      "build": "tsc",
      "start": "node build/index.js",
      "dev": "tsc --watch"
    },
    "dependencies": {
      "@modelcontextprotocol/sdk": "0.6.0",
      "dotenv": "^16.4.7",
      "openai": "^4.80.1"
    },
    "devDependencies": {
      "@types/node": "^20.11.24",
      "typescript": "^5.3.3"
    }
  }
```
--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------
```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - dashscopeApiKey
    properties:
      dashscopeApiKey:
        type: string
        description: The API key for the Dashscope server.
  commandFunction:
    # A function that produces the CLI command to start the MCP on stdio.
    |-
    config => ({ command: 'node', args: ['build/index.js'], env: { DASHSCOPE_API_KEY: config.dashscopeApiKey } })
```
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Use Node.js 18 as the base image
FROM node:18-alpine AS builder
# Set working directory
WORKDIR /app
# Copy package files
COPY package.json package-lock.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY src ./src
COPY tsconfig.json ./
# Build the application
RUN npx tsc
# Create the runtime image
FROM node:18-alpine
WORKDIR /app
# Copy built application from the builder stage
COPY --from=builder /app/build ./build
COPY package.json package-lock.json ./
# Install production dependencies
RUN npm install --production
# Do not bake the gitignored .env file into the image; supply the API key at
# runtime instead (e.g. `docker run -e DASHSCOPE_API_KEY=...` or via the Smithery config).
# No EXPOSE is needed: the server communicates over stdio, not a network port.
# Define the default command
CMD ["node", "build/index.js"]
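# Example usage (hypothetical image tag):
#   docker build -t qwen-max-mcp .
#   docker run -i -e DASHSCOPE_API_KEY=your-api-key-here qwen-max-mcp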
```
--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
```
MIT License
Copyright (c) 2025 Kamel IRZOUNI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------
```typescript
#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { 
    ListToolsRequestSchema, 
    CallToolRequestSchema, 
    ErrorCode, 
    McpError 
} from "@modelcontextprotocol/sdk/types.js";
import OpenAI from "openai";
import dotenv from "dotenv";
dotenv.config();
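// Dashscope exposes an OpenAI-compatible endpoint, so the official OpenAI SDK
// can be reused by pointing it at a custom base URL.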
const QWEN_BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1";
const API_KEY = process.env.DASHSCOPE_API_KEY;
if (!API_KEY) {
    throw new Error("DASHSCOPE_API_KEY environment variable is required");
}
interface QwenMaxArgs {
    prompt: string;
    max_tokens?: number;
    temperature?: number;
}
class QwenMaxServer {
    private server: Server;
    private openai: OpenAI;
    constructor() {
        this.server = new Server(
            { name: "qwen_max", version: "1.0.0" },
            { capabilities: { tools: {} } }
        );
        this.openai = new OpenAI({
            apiKey: API_KEY,
            baseURL: QWEN_BASE_URL
        });
        this.setupHandlers();
        this.setupErrorHandling();
    }
    private setupErrorHandling(): void {
        this.server.onerror = (error: Error): void => {
            console.error("[MCP Error]", error);
        };
        process.on("SIGINT", async (): Promise<void> => {
            await this.server.close();
            process.exit(0);
        });
    }
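    // Register MCP handlers: tool discovery (tools/list) and invocation (tools/call).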
    private setupHandlers(): void {
        this.server.setRequestHandler(
            ListToolsRequestSchema,
            async () => ({
                tools: [{
                    name: "qwen_max",
                    description: "Generate text using Qwen Max model",
                    inputSchema: {
                        type: "object",
                        properties: {
                            prompt: {
                                type: "string",
                                description: "The text prompt to generate content from"
                            },
                            max_tokens: {
                                type: "number",
                                description: "Maximum number of tokens to generate",
                                default: 8192
                            },
                            temperature: {
                                type: "number",
                                description: "Sampling temperature (0-2)",
                                default: 0.7,
                                minimum: 0,
                                maximum: 2
                            }
                        },
                        required: ["prompt"]
                    }
                }]
            })
        );
        this.server.setRequestHandler(
            CallToolRequestSchema,
            async (request) => {
                if (request.params.name !== "qwen_max") {
                    throw new McpError(
                        ErrorCode.MethodNotFound,
                        `Unknown tool: ${request.params.name}`
                    );
                }
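                // Apply the schema defaults when optional arguments are omitted.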
                const { prompt, max_tokens = 8192, temperature = 0.7 } = 
                    request.params.arguments as QwenMaxArgs;
                try {
                    const completion = await this.openai.chat.completions.create({
                        model: "qwen-max-latest",
                        messages: [{ role: "user", content: prompt }],
                        max_tokens,
                        temperature
                    });
                    return {
                        content: [{
                            type: "text",
                            text: completion.choices[0].message.content || ""
                        }]
                    };
                } catch (error: any) {
                    console.error("Qwen API Error:", error);
                    throw new McpError(
                        ErrorCode.InternalError,
                        `Qwen API error: ${error.message}`
                    );
                }
            }
        );
    }
    async run(): Promise<void> {
        const transport = new StdioServerTransport();
        await this.server.connect(transport);
        console.error("Qwen Max MCP server running on stdio");
    }
}
const server = new QwenMaxServer();
server.run().catch(console.error);
```