# Directory Structure

```
├── .env.example
├── .gitignore
├── Dockerfile
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── index.ts
│   └── old-index.ts-working.exemple
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Dependencies
 2 | node_modules/
 3 | 
 4 | # Build output
 5 | build/
 6 | dist/
 7 | 
 8 | # Logs
 9 | *.log
10 | npm-debug.log*
11 | yarn-debug.log*
12 | yarn-error.log*
13 | 
14 | # Environment variables
15 | .env
16 | .env.local
17 | .env.*.local
18 | 
19 | # Editor directories and files
20 | .idea/
21 | .vscode/
22 | *.swp
23 | *.swo
24 | *.swn
25 | .DS_Store
26 | 
```

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
1 | # Required: OpenRouter API key for both DeepSeek and Claude models
2 | OPENROUTER_API_KEY=your_openrouter_api_key_here
3 | 
4 | # Optional: Model configuration (defaults shown below)
5 | DEEPSEEK_MODEL=deepseek/deepseek-r1:free  # DeepSeek model for reasoning
6 | CLAUDE_MODEL=anthropic/claude-3.5-sonnet:beta  # Claude model for responses
7 | 
```
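
For reference, `src/index.ts` loads these variables with `dotenv` at startup and falls back to a built-in default when `DEEPSEEK_MODEL` is unset. A minimal sketch of that lookup (the explicit guard on the required key is an illustrative addition, not in the source):

```typescript
import dotenv from "dotenv";

// Populate process.env from .env before reading any configuration.
dotenv.config();

// An unset model variable falls back to the built-in default (see src/index.ts).
const DEEPSEEK_MODEL =
  process.env.DEEPSEEK_MODEL || "deepseek/deepseek-chat-v3-0324:free";

// Illustrative guard: the server itself does not exit early, but every
// OpenRouter call will fail without this key.
if (!process.env.OPENROUTER_API_KEY) {
  throw new Error("OPENROUTER_API_KEY is required; see .env.example");
}
```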

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP
  2 | 
  3 | [![smithery badge](https://smithery.ai/badge/@newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP)](https://smithery.ai/server/@newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP)
  4 | 
  5 | A Model Context Protocol (MCP) server that combines DeepSeek R1's reasoning capabilities with Claude 3.5 Sonnet's response generation through OpenRouter. This implementation uses a two-stage process where DeepSeek provides structured reasoning which is then incorporated into Claude's response generation.
  6 | 
  7 | ## Features
  8 | 
  9 | - **Two-Stage Processing**:
 10 |   - Uses DeepSeek R1 for initial reasoning (50k character context)
 11 |   - Uses Claude 3.5 Sonnet for final response (600k character context)
 12 |   - Both models accessed through OpenRouter's unified API
 13 |   - Injects DeepSeek's reasoning tokens into Claude's context
 14 | 
 15 | - **Smart Conversation Management**:
 16 |   - Detects active conversations using file modification times
 17 |   - Handles multiple concurrent conversations
 18 |   - Filters out ended conversations automatically
 19 |   - Supports context clearing when needed
 20 | 
 21 | - **Optimized Parameters**:
 22 |   - Model-specific context limits:
 23 |     * DeepSeek: 50,000 characters for focused reasoning
 24 |     * Claude: 600,000 characters for comprehensive responses
 25 |   - Recommended settings:
 26 |     * temperature: 0.7 for balanced creativity
 27 |     * top_p: 1.0 for full probability distribution
 28 |     * repetition_penalty: 1.0 to prevent repetition
 29 | 
 30 | ## Installation
 31 | 
 32 | ### Installing via Smithery
 33 | 
 34 | To install DeepSeek Thinking with Claude 3.5 Sonnet for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP):
 35 | 
 36 | ```bash
 37 | npx -y @smithery/cli install @newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP --client claude
 38 | ```
 39 | 
 40 | ### Manual Installation
 41 | 1. Clone the repository:
 42 | ```bash
 43 | git clone https://github.com/yourusername/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP.git
 44 | cd Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP
 45 | ```
 46 | 
 47 | 2. Install dependencies:
 48 | ```bash
 49 | npm install
 50 | ```
 51 | 
 52 | 3. Create a `.env` file with your OpenRouter API key:
 53 | ```env
 54 | # Required: OpenRouter API key for both DeepSeek and Claude models
 55 | OPENROUTER_API_KEY=your_openrouter_api_key_here
 56 | 
 57 | # Optional: Model configuration (defaults shown below)
 58 | DEEPSEEK_MODEL=deepseek/deepseek-r1  # DeepSeek model for reasoning
 59 | CLAUDE_MODEL=anthropic/claude-3.5-sonnet:beta  # Claude model for responses
 60 | ```
 61 | 
 62 | 4. Build the server:
 63 | ```bash
 64 | npm run build
 65 | ```
 66 | 
 67 | ## Usage with Cline
 68 | 
 69 | Add to your Cline MCP settings (usually in `~/.vscode/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json`):
 70 | 
 71 | ```json
 72 | {
 73 |   "mcpServers": {
 74 |     "deepseek-claude": {
 75 |       "command": "/path/to/node",
 76 |       "args": ["/path/to/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP/build/index.js"],
 77 |       "env": {
 78 |         "OPENROUTER_API_KEY": "your_key_here"
 79 |       },
 80 |       "disabled": false,
 81 |       "autoApprove": []
 82 |     }
 83 |   }
 84 | }
 85 | ```
 86 | 
 87 | ## Tool Usage
 88 | 
 89 | The server provides two tools for generating and monitoring responses:
 90 | 
 91 | ### generate_response
 92 | 
 93 | Main tool for generating responses with the following parameters:
 94 | 
 95 | ```typescript
 96 | {
 97 |   "prompt": string,           // Required: The question or prompt
 98 |   "showReasoning"?: boolean, // Optional: Show DeepSeek's reasoning process
 99 |   "clearContext"?: boolean,  // Optional: Clear conversation history
100 |   "includeHistory"?: boolean // Optional: Include Cline conversation history
101 | }
102 | ```
103 | 
104 | ### check_response_status
105 | 
106 | Tool for checking the status of a response generation task:
107 | 
108 | ```typescript
109 | {
110 |   "taskId": string  // Required: The task ID from generate_response
111 | }
112 | ```
113 | 
114 | ### Response Polling
115 | 
116 | The server uses a polling mechanism to handle long-running requests:
117 | 
118 | 1. Initial Request:
119 |    - `generate_response` returns immediately with a task ID
120 |    - Response format: `{"taskId": "uuid-here", "suggestedWaitTime": 5}`
121 | 
122 | 2. Status Checking:
123 |    - Use `check_response_status` to poll the task status
124 |    - **Note:** Responses can take a minute or more to complete; tasks time out after 10 minutes
125 |    - Status progresses through: pending → reasoning → responding → complete
126 | 
127 | Example usage in Cline:
128 | ```typescript
129 | // Initial request
130 | const result = await use_mcp_tool({
131 |   server_name: "deepseek-claude",
132 |   tool_name: "generate_response",
133 |   arguments: {
134 |     prompt: "What is quantum computing?",
135 |     showReasoning: true
136 |   }
137 | });
138 | 
139 | // Get taskId from result
140 | const taskId = JSON.parse(result.content[0].text).taskId;
141 | 
142 | // Poll for status (may need multiple checks over ~60 seconds)
143 | const status = await use_mcp_tool({
144 |   server_name: "deepseek-claude",
145 |   tool_name: "check_response_status",
146 |   arguments: { taskId }
147 | });
148 | 
149 | // Example status response when complete:
150 | {
151 |   "status": "complete",
152 |   "reasoning": "...",  // If showReasoning was true
153 |   "response": "..."    // The final response
154 | }
155 | ```
156 | 
157 | ## Development
158 | 
159 | For development with auto-rebuild:
160 | ```bash
161 | npm run watch
162 | ```
163 | 
164 | ## How It Works
165 | 
166 | 1. **Reasoning Stage (DeepSeek R1)**:
167 |    - Uses OpenRouter's reasoning tokens feature
168 |    - Prompt is modified to output 'done' while capturing reasoning
169 |    - Reasoning is extracted from response metadata
170 | 
171 | 2. **Response Stage (Claude 3.5 Sonnet)**:
172 |    - Receives the original prompt and DeepSeek's reasoning
173 |    - Generates final response incorporating the reasoning
174 |    - Maintains conversation context and history
175 | 
176 | ## License
177 | 
178 | MIT License - See LICENSE file for details.
179 | 
180 | ## Credits
181 | 
182 | Based on the RAT (Retrieval Augmented Thinking) concept by [Skirano](https://x.com/skirano/status/1881922469411643413), which enhances AI responses through structured reasoning and knowledge retrieval.
183 | 
184 | This implementation specifically combines DeepSeek R1's reasoning capabilities with Claude 3.5 Sonnet's response generation through OpenRouter's unified API.
185 | 
```
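
The polling flow described in the README above can be driven by a small client-side loop. A minimal sketch, assuming a hypothetical `callTool` helper that invokes one MCP tool and returns the parsed JSON payload from `content[0].text` (the field names match the server's actual responses):

```typescript
// Hypothetical helper: performs one MCP tool call and parses the JSON result.
type CallTool = (name: string, args: object) => Promise<any>;

async function generateWithPolling(callTool: CallTool, prompt: string) {
  // Kick off the task; the server returns immediately with a task ID
  // and a suggested initial wait time in seconds.
  const start = await callTool("generate_response", { prompt, showReasoning: true });
  const { taskId, suggestedWaitTime } = start;

  // Poll until the task completes or fails, honoring the server's
  // suggested delay; the server applies exponential backoff via nextCheckIn.
  let delayMs = (suggestedWaitTime ?? 5) * 1000;
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    const status = await callTool("check_response_status", { taskId });
    if (status.status === "complete") return status.response;
    if (status.status === "error") throw new Error(status.error);
    if (typeof status.nextCheckIn === "number") delayMs = status.nextCheckIn * 1000;
  }
}
```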

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2022",
 4 |     "module": "Node16",
 5 |     "moduleResolution": "Node16",
 6 |     "outDir": "./build",
 7 |     "rootDir": "./src",
 8 |     "strict": true,
 9 |     "esModuleInterop": true,
10 |     "skipLibCheck": true,
11 |     "forceConsistentCasingInFileNames": true
12 |   },
13 |   "include": ["src/**/*"],
14 |   "exclude": ["node_modules"]
15 | }
16 | 
```
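
With `module` and `moduleResolution` set to `Node16`, the compiled output is ESM, and import specifiers (including this SDK's subpath exports) carry explicit `.js` extensions, which is why the imports in `src/index.ts` name the emitted files:

```typescript
// Node16 resolution: the specifier names the compiled .js entry point,
// even though the file being type-checked is TypeScript.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
```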

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required:
 9 |       - openrouterApiKey
10 |     properties:
11 |       openrouterApiKey:
12 |         type: string
13 |         description: The API key for accessing the OpenRouter service.
14 |   commandFunction:
15 |     # A function that produces the CLI command to start the MCP on stdio.
16 |     |-
17 |     (config) => ({ command: 'node', args: ['build/index.js'], env: { OPENROUTER_API_KEY: config.openrouterApiKey } })
18 | 
```
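
The `commandFunction` is a JavaScript arrow function that Smithery evaluates with the validated config to obtain the launch command. Applied by hand, it yields an object of roughly this shape (values illustrative):

```typescript
// Result of applying the commandFunction above to an example config.
const launch = ((config: { openrouterApiKey: string }) => ({
  command: "node",
  args: ["build/index.js"],
  env: { OPENROUTER_API_KEY: config.openrouterApiKey },
}))({ openrouterApiKey: "your_openrouter_api_key_here" });
// launch = { command: "node", args: ["build/index.js"], env: { OPENROUTER_API_KEY: "..." } }
```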

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "deepseek-thinking-claude-3-5-sonnet-cline-mcp",
 3 |   "version": "0.1.0",
 4 |   "description": "MCP server that combines DeepSeek's reasoning with Claude 3.5 Sonnet's response generation through Cline",
 5 |   "private": true,
 6 |   "type": "module",
 7 |   "bin": {
 8 |     "deepseek-thinking-claude-mcp": "./build/index.js"
 9 |   },
10 |   "files": [
11 |     "build"
12 |   ],
13 |   "scripts": {
14 |     "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
15 |     "prepare": "npm run build",
16 |     "watch": "tsc --watch",
17 |     "inspector": "npx @modelcontextprotocol/inspector build/index.js"
18 |   },
19 |   "dependencies": {
20 |     "@anthropic-ai/sdk": "^0.36.2",
21 |     "@modelcontextprotocol/sdk": "0.6.0",
22 |     "dotenv": "^16.4.7",
23 |     "openai": "^4.80.1",
24 |     "uuid": "^11.0.5"
25 |   },
26 |   "devDependencies": {
27 |     "@types/node": "^20.11.24",
28 |     "@types/uuid": "^10.0.0",
29 |     "typescript": "^5.3.3"
30 |   }
31 | }
32 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | # Stage 1: Build the application using Node.js
 3 | FROM node:18-alpine AS builder
 4 | 
 5 | # Set working directory
 6 | WORKDIR /app
 7 | 
 8 | # Copy package.json and package-lock.json to the working directory
 9 | COPY package.json package-lock.json ./
10 | 
11 | # Install dependencies
12 | RUN npm install
13 | 
14 | # Copy source files
15 | COPY src ./src
16 | 
17 | # Build the project
18 | RUN npm run build
19 | 
20 | # Stage 2: Create a lightweight image for production
21 | FROM node:18-alpine
22 | 
23 | # Set working directory
24 | WORKDIR /app
25 | 
26 | # Copy built files from builder
27 | COPY --from=builder /app/build ./build
28 | 
29 | # Copy necessary files
30 | COPY package.json package-lock.json ./
31 | 
32 | # Install only production dependencies
33 | RUN npm install --omit=dev
34 | 
35 | # Environment variables
36 | ENV NODE_ENV=production
37 | 
38 | # Entrypoint command to run the MCP server
39 | ENTRYPOINT ["node", "build/index.js"]
40 | 
41 | # (No separate CMD: with the exec-form ENTRYPOINT above, a duplicate CMD
42 | # would be appended to the entrypoint as extra arguments.)
43 | 
```
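
A typical build-and-run sequence for this image; the tag is illustrative, and `-i` keeps stdin open for the stdio transport:

```bash
# Build the multi-stage image defined above (tag name is an example)
docker build -t deepseek-thinking-claude-mcp .

# Run the MCP server over stdio, supplying the required API key
docker run -i --rm \
  -e OPENROUTER_API_KEY=your_openrouter_api_key_here \
  deepseek-thinking-claude-mcp
```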

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
  3 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  4 | import {
  5 |   CallToolRequestSchema,
  6 |   ErrorCode,
  7 |   ListToolsRequestSchema,
  8 |   McpError,
  9 | } from "@modelcontextprotocol/sdk/types.js";
 10 | import { OpenAI } from "openai";
 11 | import dotenv from "dotenv";
 12 | import * as os from "os";
 13 | import * as path from "path";
 14 | import * as fs from "fs/promises";
 15 | import { v4 as uuidv4 } from "uuid";
 16 | 
 17 | // Load environment variables
 18 | dotenv.config();
 19 | 
 20 | // Debug logging
 21 | const DEBUG = true;
 22 | const log = (...args: any[]) => {
 23 |   if (DEBUG) {
 24 |     console.error("[DEEPSEEK-CLAUDE MCP]", ...args);
 25 |   }
 26 | };
 27 | 
 28 | // Constants - use only the DeepSeek model
 29 | const DEEPSEEK_MODEL =
 30 |   process.env.DEEPSEEK_MODEL || "deepseek/deepseek-chat-v3-0324:free";
 31 | // Claude is no longer used at all
 32 | // const CLAUDE_MODEL = "anthropic/claude-3.5-sonnet:beta";
 33 | 
 34 | // Constants for the status-polling mechanism
 35 | const INITIAL_STATUS_CHECK_DELAY_MS = 5000; // 5 seconds before the first check
 36 | const MAX_STATUS_CHECK_DELAY_MS = 60000; // at most 1 minute between checks
 37 | const STATUS_CHECK_BACKOFF_FACTOR = 1.5; // multiplier applied to the delay
 38 | const MAX_STATUS_CHECK_ATTEMPTS = 20; // maximum number of checks (avoids an infinite loop)
 39 | const TASK_TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes maximum per task
 40 | 
 41 | interface ConversationEntry {
 42 |   timestamp: number;
 43 |   prompt: string;
 44 |   reasoning: string;
 45 |   response: string;
 46 |   model: string;
 47 | }
 48 | 
 49 | interface ConversationContext {
 50 |   entries: ConversationEntry[];
 51 |   maxEntries: number;
 52 | }
 53 | 
 54 | interface GenerateResponseArgs {
 55 |   prompt: string;
 56 |   showReasoning?: boolean;
 57 |   clearContext?: boolean;
 58 |   includeHistory?: boolean;
 59 | }
 60 | 
 61 | interface CheckResponseStatusArgs {
 62 |   taskId: string;
 63 | }
 64 | 
 65 | interface TaskStatus {
 66 |   status: "pending" | "reasoning" | "responding" | "complete" | "error";
 67 |   prompt: string;
 68 |   showReasoning?: boolean;
 69 |   reasoning?: string;
 70 |   response?: string;
 71 |   error?: string;
 72 |   timestamp: number;
 73 |   // Properties used to manage status polling
 74 |   lastChecked?: number;
 75 |   nextCheckDelay?: number;
 76 |   checkAttempts?: number;
 77 | }
 78 | 
 79 | const isValidCheckResponseStatusArgs = (
 80 |   args: any
 81 | ): args is CheckResponseStatusArgs =>
 82 |   typeof args === "object" && args !== null && typeof args.taskId === "string";
 83 | 
 84 | interface ClaudeMessage {
 85 |   role: "user" | "assistant";
 86 |   content: string | { type: string; text: string }[];
 87 | }
 88 | 
 89 | interface UiMessage {
 90 |   ts: number;
 91 |   type: string;
 92 |   say?: string;
 93 |   ask?: string;
 94 |   text: string;
 95 |   conversationHistoryIndex: number;
 96 | }
 97 | 
 98 | const isValidGenerateResponseArgs = (args: any): args is GenerateResponseArgs =>
 99 |   typeof args === "object" &&
100 |   args !== null &&
101 |   typeof args.prompt === "string" &&
102 |   (args.showReasoning === undefined ||
103 |     typeof args.showReasoning === "boolean") &&
104 |   (args.clearContext === undefined || typeof args.clearContext === "boolean") &&
105 |   (args.includeHistory === undefined ||
106 |     typeof args.includeHistory === "boolean");
107 | 
108 | function getClaudePath(): string {
109 |   const homeDir = os.homedir();
110 |   switch (process.platform) {
111 |     case "win32":
112 |       return path.join(
113 |         homeDir,
114 |         "AppData",
115 |         "Roaming",
116 |         "Code",
117 |         "User",
118 |         "globalStorage",
119 |         "saoudrizwan.claude-dev",
120 |         "tasks"
121 |       );
122 |     case "darwin":
123 |       return path.join(
124 |         homeDir,
125 |         "Library",
126 |         "Application Support",
127 |         "Code",
128 |         "User",
129 |         "globalStorage",
130 |         "saoudrizwan.claude-dev",
131 |         "tasks"
132 |       );
133 |     default: // linux
134 |       return path.join(
135 |         homeDir,
136 |         ".config",
137 |         "Code",
138 |         "User",
139 |         "globalStorage",
140 |         "saoudrizwan.claude-dev",
141 |         "tasks"
142 |       );
143 |   }
144 | }
145 | 
146 | async function findActiveConversation(): Promise<ClaudeMessage[] | null> {
147 |   try {
148 |     const tasksPath = getClaudePath();
149 |     const dirs = await fs.readdir(tasksPath);
150 | 
151 |     // Get modification time for each api_conversation_history.json
152 |     const dirStats = await Promise.all(
153 |       dirs.map(async (dir) => {
154 |         try {
155 |           const historyPath = path.join(
156 |             tasksPath,
157 |             dir,
158 |             "api_conversation_history.json"
159 |           );
160 |           const stats = await fs.stat(historyPath);
161 |           const uiPath = path.join(tasksPath, dir, "ui_messages.json");
162 |           const uiContent = await fs.readFile(uiPath, "utf8");
163 |           const uiMessages: UiMessage[] = JSON.parse(uiContent);
164 |           const hasEnded = uiMessages.some(
165 |             (m) => m.type === "conversation_ended"
166 |           );
167 | 
168 |           return {
169 |             dir,
170 |             mtime: stats.mtime.getTime(),
171 |             hasEnded,
172 |           };
173 |         } catch (error) {
174 |           log("Error checking folder:", dir, error);
175 |           return null;
176 |         }
177 |       })
178 |     );
179 | 
180 |     // Filter out errors and ended conversations, then sort by modification time
181 |     const sortedDirs = dirStats
182 |       .filter(
183 |         (stat): stat is NonNullable<typeof stat> =>
184 |           stat !== null && !stat.hasEnded
185 |       )
186 |       .sort((a, b) => b.mtime - a.mtime);
187 | 
188 |     // Use most recently modified active conversation
189 |     const latest = sortedDirs[0]?.dir;
190 |     if (!latest) {
191 |       log("No active conversations found");
192 |       return null;
193 |     }
194 | 
195 |     const historyPath = path.join(
196 |       tasksPath,
197 |       latest,
198 |       "api_conversation_history.json"
199 |     );
200 |     const history = await fs.readFile(historyPath, "utf8");
201 |     return JSON.parse(history);
202 |   } catch (error) {
203 |     log("Error finding active conversation:", error);
204 |     return null;
205 |   }
206 | }
207 | 
208 | function formatHistoryForModel(
209 |   history: ClaudeMessage[],
210 |   isDeepSeek: boolean
211 | ): string {
212 |   const maxLength = isDeepSeek ? 50000 : 600000; // 50k chars for DeepSeek, 600k for Claude
213 |   const formattedMessages = [];
214 |   let totalLength = 0;
215 | 
216 |   // Process messages in reverse chronological order to get most recent first
217 |   for (let i = history.length - 1; i >= 0; i--) {
218 |     const msg = history[i];
219 |     const content = Array.isArray(msg.content)
220 |       ? msg.content.map((c) => c.text).join("\n")
221 |       : msg.content;
222 | 
223 |     const formattedMsg = `${
224 |       msg.role === "user" ? "Human" : "Assistant"
225 |     }: ${content}`;
226 |     const msgLength = formattedMsg.length;
227 | 
228 |     // Stop adding messages if we'd exceed the limit
229 |     if (totalLength + msgLength > maxLength) {
230 |       break;
231 |     }
232 | 
233 |     formattedMessages.push(formattedMsg); // Add most recent messages first
234 |     totalLength += msgLength;
235 |   }
236 | 
237 |   // Reverse to get chronological order
238 |   return formattedMessages.reverse().join("\n\n");
239 | }
240 | 
241 | class DeepseekClaudeServer {
242 |   private server: Server;
243 |   private openrouterClient: OpenAI;
244 |   private context: ConversationContext = {
245 |     entries: [],
246 |     maxEntries: 10,
247 |   };
248 |   private activeTasks: Map<string, TaskStatus> = new Map();
249 | 
250 |   constructor() {
251 |     log("Initializing API clients...");
252 | 
253 |     // Initialize OpenRouter client
254 |     this.openrouterClient = new OpenAI({
255 |       baseURL: "https://openrouter.ai/api/v1",
256 |       apiKey: process.env.OPENROUTER_API_KEY,
257 |     });
258 |     log("OpenRouter client initialized");
259 | 
260 |     // Initialize MCP server
261 |     this.server = new Server(
262 |       {
263 |         name: "deepseek-thinking-claude-mcp",
264 |         version: "0.1.0",
265 |       },
266 |       {
267 |         capabilities: {
268 |           tools: {},
269 |         },
270 |       }
271 |     );
272 | 
273 |     this.setupToolHandlers();
274 | 
275 |     // Error handling
276 |     this.server.onerror = (error) => console.error("[MCP Error]", error);
277 |     process.on("SIGINT", async () => {
278 |       await this.server.close();
279 |       process.exit(0);
280 |     });
281 |   }
282 | 
283 |   private addToContext(entry: ConversationEntry) {
284 |     // Use DEEPSEEK_MODEL instead of CLAUDE_MODEL
285 |     const entryWithUpdatedModel = {
286 |       ...entry,
287 |       model: DEEPSEEK_MODEL,
288 |     };
289 |     this.context.entries.push(entryWithUpdatedModel);
290 |     if (this.context.entries.length > this.context.maxEntries) {
291 |       this.context.entries.shift(); // Remove oldest
292 |     }
293 |   }
294 | 
295 |   private formatContextForPrompt(): string {
296 |     return this.context.entries
297 |       .map(
298 |         (entry) =>
299 |           `Question: ${entry.prompt}\nReasoning: ${entry.reasoning}\nAnswer: ${entry.response}`
300 |       )
301 |       .join("\n\n");
302 |   }
303 | 
304 |   private setupToolHandlers() {
305 |     this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
306 |       tools: [
307 |         {
308 |           name: "generate_response",
309 |           description:
310 |             "Generate a response using DeepSeek's reasoning and Claude's response generation through OpenRouter.",
311 |           inputSchema: {
312 |             type: "object",
313 |             properties: {
314 |               prompt: {
315 |                 type: "string",
316 |                 description: "The user's input prompt",
317 |               },
318 |               showReasoning: {
319 |                 type: "boolean",
320 |                 description: "Whether to include reasoning in response",
321 |                 default: false,
322 |               },
323 |               clearContext: {
324 |                 type: "boolean",
325 |                 description: "Clear conversation history before this request",
326 |                 default: false,
327 |               },
328 |               includeHistory: {
329 |                 type: "boolean",
330 |                 description: "Include Cline conversation history for context",
331 |                 default: true,
332 |               },
333 |             },
334 |             required: ["prompt"],
335 |           },
336 |         },
337 |         {
338 |           name: "check_response_status",
339 |           description: "Check the status of a response generation task",
340 |           inputSchema: {
341 |             type: "object",
342 |             properties: {
343 |               taskId: {
344 |                 type: "string",
345 |                 description: "The task ID returned by generate_response",
346 |               },
347 |             },
348 |             required: ["taskId"],
349 |           },
350 |         },
351 |       ],
352 |     }));
353 | 
354 |     this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
355 |       if (request.params.name === "generate_response") {
356 |         if (!isValidGenerateResponseArgs(request.params.arguments)) {
357 |           throw new McpError(
358 |             ErrorCode.InvalidParams,
359 |             "Invalid generate_response arguments"
360 |           );
361 |         }
362 | 
363 |         const taskId = uuidv4();
364 |         const { prompt, showReasoning, clearContext, includeHistory } =
365 |           request.params.arguments;
366 | 
367 |         // Initialize task status with the tracking properties used for polling
368 |         this.activeTasks.set(taskId, {
369 |           status: "pending",
370 |           prompt,
371 |           showReasoning,
372 |           timestamp: Date.now(),
373 |           lastChecked: Date.now(),
374 |           nextCheckDelay: INITIAL_STATUS_CHECK_DELAY_MS,
375 |           checkAttempts: 0
376 |         });
377 | 
378 |         // Start processing in background
379 |         this.processTask(taskId, clearContext, includeHistory).catch(
380 |           (error) => {
381 |             log("Error processing task:", error);
382 |             this.activeTasks.set(taskId, {
383 |               ...this.activeTasks.get(taskId)!,
384 |               status: "error",
385 |               error: error.message,
386 |             });
387 |           }
388 |         );
389 | 
390 |         // Return task ID immediately
391 |         return {
392 |           content: [
393 |             {
394 |               type: "text",
395 |               text: JSON.stringify({ 
396 |                 taskId,
397 |                 suggestedWaitTime: Math.round(INITIAL_STATUS_CHECK_DELAY_MS / 1000)  // suggested wait time in seconds
398 |               }),
399 |             },
400 |           ],
401 |         };
402 |       } else if (request.params.name === "check_response_status") {
403 |         if (!isValidCheckResponseStatusArgs(request.params.arguments)) {
404 |           throw new McpError(
405 |             ErrorCode.InvalidParams,
406 |             "Invalid check_response_status arguments"
407 |           );
408 |         }
409 | 
410 |         const taskId = request.params.arguments.taskId;
411 |         const task = this.activeTasks.get(taskId);
412 | 
413 |         if (!task) {
414 |           throw new McpError(
415 |             ErrorCode.InvalidRequest,
416 |             `No task found with ID: ${taskId}`
417 |           );
418 |         }
419 | 
420 |         // Check whether the task has timed out
421 |         const currentTime = Date.now();
422 |         if (currentTime - task.timestamp > TASK_TIMEOUT_MS) {
423 |           const updatedTask = {
424 |             ...task,
425 |             status: "error" as const,
426 |             error: `Task timed out after ${TASK_TIMEOUT_MS / 60000} minutes`
427 |           };
428 |           this.activeTasks.set(taskId, updatedTask);
429 |           return {
430 |             content: [
431 |               {
432 |                 type: "text",
433 |                 text: JSON.stringify({
434 |                   status: updatedTask.status,
435 |                   reasoning: updatedTask.showReasoning ? updatedTask.reasoning : undefined,
436 |                   response: undefined,
437 |                   error: updatedTask.error,
438 |                   timeoutAfter: TASK_TIMEOUT_MS / 60000
439 |                 })
440 |               }
441 |             ]
442 |           };
443 |         }
444 | 
445 |         // Update the tracking properties
446 |         const checkAttempts = (task.checkAttempts || 0) + 1;
447 |         
448 |         // Check whether the maximum number of check attempts has been reached
449 |         if (checkAttempts > MAX_STATUS_CHECK_ATTEMPTS && task.status !== "complete" && task.status !== "error") {
450 |           const updatedTask = {
451 |             ...task,
452 |             status: "error" as const,
453 |             error: `Maximum number of check attempts reached (${MAX_STATUS_CHECK_ATTEMPTS})`,
454 |             checkAttempts
455 |           };
456 |           this.activeTasks.set(taskId, updatedTask);
457 |           return {
458 |             content: [
459 |               {
460 |                 type: "text",
461 |                 text: JSON.stringify({
462 |                   status: updatedTask.status,
463 |                   reasoning: updatedTask.showReasoning ? updatedTask.reasoning : undefined,
464 |                   response: undefined,
465 |                   error: updatedTask.error,
466 |                   maxAttempts: MAX_STATUS_CHECK_ATTEMPTS
467 |                 })
468 |               }
469 |             ]
470 |           };
471 |         }
472 | 
473 |         // Compute the delay before the next check (exponential backoff)
474 |         let nextCheckDelay = task.nextCheckDelay || INITIAL_STATUS_CHECK_DELAY_MS;
475 |         nextCheckDelay = Math.min(nextCheckDelay * STATUS_CHECK_BACKOFF_FACTOR, MAX_STATUS_CHECK_DELAY_MS);
476 |         
477 |         // Update the task status
478 |         const updatedTask = {
479 |           ...task,
480 |           lastChecked: currentTime,
481 |           nextCheckDelay,
482 |           checkAttempts
483 |         };
484 |         this.activeTasks.set(taskId, updatedTask);
485 | 
486 |         return {
487 |           content: [
488 |             {
489 |               type: "text",
490 |               text: JSON.stringify({
491 |                 status: task.status,
492 |                 reasoning: task.showReasoning ? task.reasoning : undefined,
493 |                 response: task.status === "complete" ? task.response : undefined,
494 |                 error: task.error,
495 |                 nextCheckIn: Math.round(nextCheckDelay / 1000), // suggested wait in seconds
496 |                 checkAttempts,
497 |                 elapsedTime: Math.round((currentTime - task.timestamp) / 1000) // elapsed time in seconds
498 |               }),
499 |             },
500 |           ],
501 |         };
502 |       } else {
503 |         throw new McpError(
504 |           ErrorCode.MethodNotFound,
505 |           `Unknown tool: ${request.params.name}`
506 |         );
507 |       }
508 |     });
509 |   }
510 | 
511 |   private async processTask(
512 |     taskId: string,
513 |     clearContext?: boolean,
514 |     includeHistory?: boolean
515 |   ): Promise<void> {
516 |     const task = this.activeTasks.get(taskId);
517 |     if (!task) {
518 |       throw new Error(`No task found with ID: ${taskId}`);
519 |     }
520 | 
521 |     try {
522 |       if (clearContext) {
523 |         this.context.entries = [];
524 |       }
525 | 
526 |       // Update status to reasoning
527 |       this.activeTasks.set(taskId, {
528 |         ...this.activeTasks.get(taskId)!, // re-read the live entry so polling metadata set by status checks isn't lost
529 |         status: "reasoning",
530 |       });
531 | 
532 |       // Get Cline conversation history if requested
533 |       let history: ClaudeMessage[] | null = null;
534 |       if (includeHistory !== false) {
535 |         history = await findActiveConversation();
536 |       }
537 | 
538 |       // Get DeepSeek reasoning with limited history
539 |       const reasoningHistory = history
540 |         ? formatHistoryForModel(history, true)
541 |         : "";
542 |       const reasoningPrompt = reasoningHistory
543 |         ? `${reasoningHistory}\n\nNew question: ${task.prompt}`
544 |         : task.prompt;
545 |       const reasoning = await this.getDeepseekReasoning(reasoningPrompt);
546 | 
547 |       // Update status with reasoning
548 |       this.activeTasks.set(taskId, {
549 |         ...this.activeTasks.get(taskId)!,
550 |         status: "responding",
551 |         reasoning,
552 |       });
553 | 
554 |       // Get final response with full history
555 |       const responseHistory = history
556 |         ? formatHistoryForModel(history, false)
557 |         : "";
558 |       const fullPrompt = responseHistory
559 |         ? `${responseHistory}\n\nCurrent task: ${task.prompt}`
560 |         : task.prompt;
561 |       const response = await this.getFinalResponse(fullPrompt, reasoning);
562 | 
563 |       // Add to context after successful response
564 |       this.addToContext({
565 |         timestamp: Date.now(),
566 |         prompt: task.prompt,
567 |         reasoning,
568 |         response,
569 |         model: DEEPSEEK_MODEL, // use DEEPSEEK_MODEL instead of CLAUDE_MODEL
570 |       });
571 | 
572 |       // Update status to complete
573 |       this.activeTasks.set(taskId, {
574 |         ...this.activeTasks.get(taskId)!,
575 |         status: "complete",
576 |         reasoning,
577 |         response,
578 |         timestamp: Date.now(),
579 |       });
580 |     } catch (error) {
581 |       // Update status to error
582 |       this.activeTasks.set(taskId, {
583 |         ...this.activeTasks.get(taskId)!,
584 |         status: "error",
585 |         error: error instanceof Error ? error.message : "Unknown error",
586 |         timestamp: Date.now(),
587 |       });
588 |       throw error;
589 |     }
590 |   }
591 | 
592 |   private async getDeepseekReasoning(prompt: string): Promise<string> {
593 |     const contextPrompt =
594 |       this.context.entries.length > 0
595 |         ? `Previous conversation:\n${this.formatContextForPrompt()}\n\nNew question: ${prompt}`
596 |         : prompt;
597 | 
598 |     try {
599 |       // Add an explicit instruction so the model produces its reasoning
600 |       const requestPrompt = `Analyze the following question in detail before answering. Think step by step and lay out your complete reasoning.\n\n${contextPrompt}`;
601 | 
602 |       // Get reasoning from DeepSeek (without the include_reasoning parameter)
603 |       const response = await this.openrouterClient.chat.completions.create({
604 |         model: DEEPSEEK_MODEL,
605 |         messages: [
606 |           {
607 |             role: "user",
608 |             content: requestPrompt,
609 |           },
610 |         ],
611 |         temperature: 0.7,
612 |         top_p: 1,
613 |       });
614 | 
615 |       // Use the response content directly as the reasoning
616 |       if (
617 |         !response.choices ||
618 |         !response.choices[0] ||
619 |         !response.choices[0].message ||
620 |         !response.choices[0].message.content
621 |       ) {
622 |         throw new Error("Réponse vide de DeepSeek");
623 |       }
624 | 
625 |       return response.choices[0].message.content;
626 |     } catch (error) {
627 |       log("Error in getDeepseekReasoning:", error);
628 |       throw error;
629 |     }
630 |   }
631 | 
632 |   private async getFinalResponse(
633 |     prompt: string,
634 |     reasoning: string
635 |   ): Promise<string> {
636 |     try {
637 |       // Instead of sending to Claude, use DeepSeek for the final response as well
638 |       const response = await this.openrouterClient.chat.completions.create({
639 |         model: DEEPSEEK_MODEL, // use DeepSeek here
640 |         messages: [
641 |           {
642 |             role: "user",
643 |             content: `${prompt}\n\nHere is my preliminary analysis of this question: ${reasoning}\nNow generate a complete, detailed response based on this analysis.`,
644 |           },
645 |         ],
646 |         temperature: 0.7,
647 |         top_p: 1,
648 |       });
649 | 
650 |       return (
651 |         response.choices[0].message.content || "Error: No response content"
652 |       );
653 |     } catch (error) {
654 |       log("Error in getFinalResponse:", error);
655 |       throw error;
656 |     }
657 |   }
658 | 
659 |   async run() {
660 |     const transport = new StdioServerTransport();
661 |     await this.server.connect(transport);
662 |     console.error("DeepSeek-Claude MCP server running on stdio");
663 |   }
664 | }
665 | 
666 | const server = new DeepseekClaudeServer();
667 | server.run().catch(console.error);
668 | 
```
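
Condensed, the request flow in `src/index.ts` amounts to two sequential chat completions against OpenRouter, with the first call's output injected into the second call's prompt. A minimal standalone sketch of that flow (prompts abbreviated; context and history handling omitted):

```typescript
import { OpenAI } from "openai";

// Same client construction as src/index.ts: the OpenAI SDK pointed at OpenRouter.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const MODEL = process.env.DEEPSEEK_MODEL || "deepseek/deepseek-chat-v3-0324:free";

async function twoStage(prompt: string): Promise<string> {
  // Stage 1: ask the model to expose step-by-step reasoning.
  const stage1 = await client.chat.completions.create({
    model: MODEL,
    messages: [
      {
        role: "user",
        content: `Think step by step and lay out your complete reasoning.\n\n${prompt}`,
      },
    ],
    temperature: 0.7,
    top_p: 1,
  });
  const reasoning = stage1.choices[0]?.message?.content ?? "";

  // Stage 2: inject the captured reasoning and ask for the final answer.
  const stage2 = await client.chat.completions.create({
    model: MODEL,
    messages: [
      {
        role: "user",
        content: `${prompt}\n\nPreliminary analysis: ${reasoning}\nNow produce a complete, detailed answer based on this analysis.`,
      },
    ],
    temperature: 0.7,
    top_p: 1,
  });
  return stage2.choices[0]?.message?.content ?? "";
}
```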