# Directory Structure
```
├── .gitignore
├── Dockerfile
├── LICENSE.txt
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   └── index.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Dependencies
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Build
build/
dist/
*.tsbuildinfo

# Environment
.env
.env.local
.env.*.local

# IDE
.idea/
.vscode/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Qwen Max MCP Server

A Model Context Protocol (MCP) server implementation for the Qwen Max language model.

<a href="https://glama.ai/mcp/servers/1v7po9oa9w"><img width="380" height="200" src="https://glama.ai/mcp/servers/1v7po9oa9w/badge" alt="Qwen Max Server MCP server" /></a>

## Why Node.js?

This implementation uses Node.js/TypeScript because it currently provides the most stable and reliable integration with MCP servers compared to other languages such as Python. The Node.js SDK for MCP offers better type safety, error handling, and compatibility with Claude Desktop.

## Prerequisites

- Node.js (v18 or higher)
- npm
- Claude Desktop
- A Dashscope API key

## Installation

### Installing via Smithery

To install the Qwen Max MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@66julienmartin/mcp-server-qwen_max):

```bash
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
```

### Manual Installation
```bash
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd mcp-server-qwen-max
npm install
npm run build
```

## Model Selection

By default, this server uses the Qwen-Max model (the code requests `qwen-max-latest`).
The Qwen series offers several commercial models with different capabilities:

### Qwen-Max

Provides the best inference performance, especially for complex and multi-step tasks.

- Context window: 32,768 tokens
- Max input: 30,720 tokens
- Max output: 8,192 tokens
- Pricing: $0.0016/1K tokens (input), $0.0064/1K tokens (output)
- Free quota: 1 million tokens

Available versions:

- qwen-max (Stable)
- qwen-max-latest (Latest)
- qwen-max-2025-01-25 (Snapshot, also known as qwen-max-0125 or Qwen2.5-Max)

### Qwen-Plus

A balanced combination of performance, speed, and cost, ideal for moderately complex tasks.

- Context window: 131,072 tokens
- Max input: 129,024 tokens
- Max output: 8,192 tokens
- Pricing: $0.0004/1K tokens (input), $0.0012/1K tokens (output)
- Free quota: 1 million tokens

Available versions:

- qwen-plus (Stable)
- qwen-plus-latest (Latest)
- qwen-plus-2025-01-25 (Snapshot, also known as qwen-plus-0125)

### Qwen-Turbo

Fast and low cost, suitable for simple tasks.

- Context window: 1,000,000 tokens
- Max input: 1,000,000 tokens
- Max output: 8,192 tokens
- Pricing: $0.00005/1K tokens (input), $0.0002/1K tokens (output)
- Free quota: 1 million tokens

Available versions:

- qwen-turbo (Stable)
- qwen-turbo-latest (Latest)
- qwen-turbo-2024-11-01 (Snapshot, also known as qwen-turbo-1101)

To switch models, update the model name in src/index.ts:

```typescript
// For Qwen-Max (default)
model: "qwen-max-latest"

// For Qwen-Plus
model: "qwen-plus"

// For Qwen-Turbo
model: "qwen-turbo"
```

For more detailed information about the available models, see the [Alibaba Cloud Model Documentation](https://www.alibabacloud.com/help/en/model-studio/getting-started/models).
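
If you prefer not to edit the source, one option is to read the model name from an environment variable. This is a minimal sketch, not part of the current code; the `QWEN_MODEL` variable name is hypothetical:

```typescript
// Hypothetical: fall back to qwen-max-latest when QWEN_MODEL is unset.
const MODEL = process.env.QWEN_MODEL ?? "qwen-max-latest";

// Then pass it through in the completion call:
// const completion = await this.openai.chat.completions.create({ model: MODEL, ... });
```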

## Project Structure

```
qwen-max-mcp/
├── src/
│   └── index.ts          # Main server implementation
├── build/                # Compiled output
│   └── index.js
├── Dockerfile
├── LICENSE.txt
├── README.md
├── package.json
├── package-lock.json
├── smithery.yaml
└── tsconfig.json
```

## Configuration

1. Create a `.env` file in the project root:
```
DASHSCOPE_API_KEY=your-api-key-here
```

2. Update the Claude Desktop configuration:
```json
{
  "mcpServers": {
    "qwen_max": {
      "command": "node",
      "args": ["/path/to/Qwen_Max/build/index.js"],
      "env": {
        "DASHSCOPE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
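
The Claude Desktop configuration file is `claude_desktop_config.json`, typically found under `~/Library/Application Support/Claude/` on macOS or `%APPDATA%\Claude\` on Windows.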

## Development

```bash
npm run dev     # Watch mode
npm run build   # Build
npm run start   # Start server
```

## Features

- Text generation with Qwen models
- Configurable parameters (max_tokens, temperature)
- Error handling
- MCP protocol support
- Claude Desktop integration
- Support for all Qwen commercial models (Max, Plus, Turbo)
- Extensive token context windows

## API Usage

```typescript
// Example tool call
{
  "name": "qwen_max",
  "arguments": {
    "prompt": "Your prompt here",
    "max_tokens": 8192,
    "temperature": 0.7
  }
}
```
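
On success, the server returns the generated text as MCP text content; this mirrors the return shape in `src/index.ts`:

```typescript
// Shape of a successful tool result
{
  "content": [
    { "type": "text", "text": "...generated text..." }
  ]
}
```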

## The Temperature Parameter

The temperature parameter controls the randomness of the model's output (the tool accepts values from 0 to 2; the default is 0.7):

- Lower values (0.0-0.7): more focused and deterministic outputs
- Higher values (0.7-1.0): more creative and varied outputs

Recommended temperature settings by task:

- Code generation: 0.0-0.3
- Technical writing: 0.3-0.5
- General tasks: 0.7 (default)
- Creative writing: 0.8-1.0
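
For example, a code-generation request might pin the temperature low; this mirrors the API Usage example above with adjusted parameters:

```typescript
// Example tool call tuned for code generation
{
  "name": "qwen_max",
  "arguments": {
    "prompt": "Write a TypeScript function that deduplicates an array.",
    "max_tokens": 1024,
    "temperature": 0.2
  }
}
```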

## Error Handling

The server provides detailed error messages for common issues:

- API authentication errors
- Invalid parameters
- Rate limiting
- Network issues
- Token limit exceeded
- Model availability issues
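
Upstream failures are wrapped in an MCP error before being returned to the client; roughly, per `src/index.ts`:

```typescript
// How the server surfaces Qwen API failures (see src/index.ts)
throw new McpError(
  ErrorCode.InternalError,
  `Qwen API error: ${error.message}` // e.g. a 401 from the Dashscope endpoint
);
```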

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

MIT
```
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "files": ["./src/index.ts"],
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
{
  "name": "qwen_max",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "qwen_max": "./build/index.js"
  },
  "scripts": {
    "build": "tsc",
    "start": "node build/index.js",
    "dev": "tsc --watch"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "0.6.0",
    "dotenv": "^16.4.7",
    "openai": "^4.80.1"
  },
  "devDependencies": {
    "@types/node": "^20.11.24",
    "typescript": "^5.3.3"
  }
}
```
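
The `bin` entry points npm at the compiled `build/index.js`; the shebang line in `src/index.ts` carries through the build, so the file runs directly under node.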
--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------
```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - dashscopeApiKey
    properties:
      dashscopeApiKey:
        type: string
        description: The API key for the Dashscope service.
  commandFunction:
    # A function that produces the CLI command to start the MCP on stdio.
    |-
    config => ({ command: 'node', args: ['build/index.js'], env: { DASHSCOPE_API_KEY: config.dashscopeApiKey } })
```
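
For a config of `{ "dashscopeApiKey": "sk-..." }`, the commandFunction above yields the equivalent of running:

```bash
DASHSCOPE_API_KEY=sk-... node build/index.js
```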
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Use Node.js 18 as the base image
FROM node:18-alpine AS builder

# Set working directory
WORKDIR /app

# Copy package files
COPY package.json package-lock.json ./

# Install dependencies
RUN npm ci

# Copy the rest of the application code
COPY src ./src
COPY tsconfig.json ./

# Build the application
RUN npx tsc

# Create the runtime image
FROM node:18-alpine

WORKDIR /app

# Copy built application from the builder stage
COPY --from=builder /app/build ./build
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# The server communicates over stdio, so no ports are exposed.
# Provide DASHSCOPE_API_KEY at runtime rather than baking a .env file
# into the image (COPY .env would fail when the file is absent, and it
# is gitignored by design).

# Define the default command
CMD ["node", "build/index.js"]
```
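
A typical build-and-run sequence, assuming the image is tagged `qwen-max-mcp` (the tag is arbitrary):

```bash
docker build -t qwen-max-mcp .
# Pass the API key at runtime; -i keeps stdin open, which a stdio MCP server needs.
docker run --rm -i -e DASHSCOPE_API_KEY=your-api-key-here qwen-max-mcp
```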
--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
```
MIT License

Copyright (c) 2025 Kamel IRZOUNI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------
```typescript
#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
    ListToolsRequestSchema,
    CallToolRequestSchema,
    ErrorCode,
    McpError
} from "@modelcontextprotocol/sdk/types.js";
import OpenAI from "openai";
import dotenv from "dotenv";

dotenv.config();

// Dashscope exposes an OpenAI-compatible endpoint, so the OpenAI SDK is reused here.
const QWEN_BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1";
const API_KEY = process.env.DASHSCOPE_API_KEY;

if (!API_KEY) {
    throw new Error("DASHSCOPE_API_KEY environment variable is required");
}

interface QwenMaxArgs {
    prompt: string;
    max_tokens?: number;
    temperature?: number;
}

class QwenMaxServer {
    private server: Server;
    private openai: OpenAI;

    constructor() {
        this.server = new Server(
            { name: "qwen_max", version: "1.0.0" },
            { capabilities: { tools: {} } }
        );

        this.openai = new OpenAI({
            apiKey: API_KEY,
            baseURL: QWEN_BASE_URL
        });

        this.setupHandlers();
        this.setupErrorHandling();
    }

    private setupErrorHandling(): void {
        this.server.onerror = (error: Error): void => {
            console.error("[MCP Error]", error);
        };

        // Shut down cleanly when the process is interrupted.
        process.on("SIGINT", async (): Promise<void> => {
            await this.server.close();
            process.exit(0);
        });
    }

    private setupHandlers(): void {
        // Advertise a single tool, qwen_max, with its input schema.
        this.server.setRequestHandler(
            ListToolsRequestSchema,
            async () => ({
                tools: [{
                    name: "qwen_max",
                    description: "Generate text using Qwen Max model",
                    inputSchema: {
                        type: "object",
                        properties: {
                            prompt: {
                                type: "string",
                                description: "The text prompt to generate content from"
                            },
                            max_tokens: {
                                type: "number",
                                description: "Maximum number of tokens to generate",
                                default: 8192
                            },
                            temperature: {
                                type: "number",
                                description: "Sampling temperature (0-2)",
                                default: 0.7,
                                minimum: 0,
                                maximum: 2
                            }
                        },
                        required: ["prompt"]
                    }
                }]
            })
        );

        this.server.setRequestHandler(
            CallToolRequestSchema,
            async (request) => {
                if (request.params.name !== "qwen_max") {
                    throw new McpError(
                        ErrorCode.MethodNotFound,
                        `Unknown tool: ${request.params.name}`
                    );
                }

                const { prompt, max_tokens = 8192, temperature = 0.7 } =
                    request.params.arguments as QwenMaxArgs;

                try {
                    const completion = await this.openai.chat.completions.create({
                        model: "qwen-max-latest",
                        messages: [{ role: "user", content: prompt }],
                        max_tokens,
                        temperature
                    });

                    return {
                        content: [{
                            type: "text",
                            text: completion.choices[0].message.content || ""
                        }]
                    };
                } catch (error: any) {
                    // Wrap upstream API failures in an MCP error so clients see a useful message.
                    console.error("Qwen API Error:", error);
                    throw new McpError(
                        ErrorCode.InternalError,
                        `Qwen API error: ${error.message}`
                    );
                }
            }
        );
    }

    async run(): Promise<void> {
        const transport = new StdioServerTransport();
        await this.server.connect(transport);
        // Log to stderr; stdout is reserved for the MCP protocol.
        console.error("Qwen Max MCP server running on stdio");
    }
}

const server = new QwenMaxServer();
server.run().catch(console.error);
```
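
For a quick local check after building, one option is the MCP Inspector:

```bash
DASHSCOPE_API_KEY=your-api-key-here npx @modelcontextprotocol/inspector node build/index.js
```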