# Directory Structure
```
├── .env.example
├── .gitignore
├── Dockerfile
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── index.ts
│   └── old-index.ts-working.exemple
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Dependencies
node_modules/
# Build output
build/
dist/
# Logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Environment variables
.env
.env.local
.env.*.local
# Editor directories and files
.idea/
.vscode/
*.swp
*.swo
*.swn
.DS_Store
```
--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------
```
# Required: OpenRouter API key for both DeepSeek and Claude models
OPENROUTER_API_KEY=your_openrouter_api_key_here
# Optional: Model configuration (defaults shown below)
DEEPSEEK_MODEL=deepseek/deepseek-r1:free # DeepSeek model for reasoning
CLAUDE_MODEL=anthropic/claude-3.5-sonnet:beta # Claude model for responses
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP

[Smithery](https://smithery.ai/server/@newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP)

A Model Context Protocol (MCP) server that combines DeepSeek R1's reasoning capabilities with Claude 3.5 Sonnet's response generation through OpenRouter. This implementation uses a two-stage process in which DeepSeek produces structured reasoning that is then incorporated into Claude's response generation.
## Features

- **Two-Stage Processing**:
  - Uses DeepSeek R1 for initial reasoning (50k character context)
  - Uses Claude 3.5 Sonnet for the final response (600k character context)
  - Both models accessed through OpenRouter's unified API
  - Injects DeepSeek's reasoning tokens into Claude's context

- **Smart Conversation Management**:
  - Detects active conversations using file modification times
  - Handles multiple concurrent conversations
  - Filters out ended conversations automatically
  - Supports context clearing when needed

- **Optimized Parameters**:
  - Model-specific context limits:
    * DeepSeek: 50,000 characters for focused reasoning
    * Claude: 600,000 characters for comprehensive responses
  - Recommended settings (see the sketch below):
    * temperature: 0.7 for balanced creativity
    * top_p: 1.0 for the full probability distribution
    * repetition_penalty: 1.0 (neutral, i.e. no extra repetition penalty)
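For reference, here is roughly how those settings map onto an OpenRouter request. This is a minimal sketch, not the server's exact code: it assumes the same OpenAI-SDK-over-OpenRouter client that `src/index.ts` constructs, and `repetition_penalty` is an OpenRouter-specific extension absent from the SDK's types (hence the cast).

```typescript
import { OpenAI } from "openai";

// Sketch only: same client construction as src/index.ts.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "deepseek/deepseek-r1",  // reasoning stage; the Claude model is used for stage two
  messages: [{ role: "user", content: "Your prompt here" }],
  temperature: 0.7,               // balanced creativity
  top_p: 1.0,                     // full probability distribution
  repetition_penalty: 1.0,        // OpenRouter-specific field, not in the SDK types
} as any);
```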
## Installation

### Installing via Smithery

To install DeepSeek Thinking with Claude 3.5 Sonnet for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP):

```bash
npx -y @smithery/cli install @newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP --client claude
```

### Manual Installation

1. Clone the repository:

```bash
git clone https://github.com/yourusername/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP.git
cd Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP
```

2. Install dependencies:

```bash
npm install
```

3. Create a `.env` file with your OpenRouter API key:

```env
# Required: OpenRouter API key for both DeepSeek and Claude models
OPENROUTER_API_KEY=your_openrouter_api_key_here

# Optional: Model configuration (defaults shown below)
DEEPSEEK_MODEL=deepseek/deepseek-r1 # DeepSeek model for reasoning
CLAUDE_MODEL=anthropic/claude-3.5-sonnet:beta # Claude model for responses
```

4. Build the server:

```bash
npm run build
```
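Optionally, you can exercise the built server with the MCP inspector script that `package.json` already defines:

```bash
npm run inspector
```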
## Usage with Cline

Add to your Cline MCP settings file, `cline_mcp_settings.json`, which lives under VS Code's `globalStorage/saoudrizwan.claude-dev/settings/` directory (see `getClaudePath()` in `src/index.ts` for the per-platform base paths):

```json
{
  "mcpServers": {
    "deepseek-claude": {
      "command": "/path/to/node",
      "args": ["/path/to/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP/build/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "your_key_here"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
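If you are unsure what to use for `/path/to/node`, the absolute path of the Node binary on your `PATH` can be found with:

```bash
which node
```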
## Tool Usage

The server provides two tools for generating and monitoring responses:

### generate_response

Main tool for generating responses, with the following parameters:

```typescript
{
  "prompt": string,           // Required: The question or prompt
  "showReasoning"?: boolean,  // Optional: Show DeepSeek's reasoning process
  "clearContext"?: boolean,   // Optional: Clear conversation history
  "includeHistory"?: boolean  // Optional: Include Cline conversation history
}
```
### check_response_status

Tool for checking the status of a response generation task:

```typescript
{
  "taskId": string // Required: The task ID from generate_response
}
```
### Response Polling

The server uses a polling mechanism to handle long-running requests:

1. Initial Request:
   - `generate_response` returns immediately with a task ID
   - Response format: `{"taskId": "uuid-here"}`

2. Status Checking:
   - Use `check_response_status` to poll the task status
   - **Note:** Responses can take up to 60 seconds to complete
   - Status progresses through: pending → reasoning → responding → complete
Example usage in Cline:
```typescript
// Initial request
const result = await use_mcp_tool({
  server_name: "deepseek-claude",
  tool_name: "generate_response",
  arguments: {
    prompt: "What is quantum computing?",
    showReasoning: true
  }
});

// Get taskId from result
const taskId = JSON.parse(result.content[0].text).taskId;

// Poll for status (may need multiple checks over ~60 seconds)
const status = await use_mcp_tool({
  server_name: "deepseek-claude",
  tool_name: "check_response_status",
  arguments: { taskId }
});

// Example status response when complete:
// {
//   "status": "complete",
//   "reasoning": "...",  // If showReasoning was true
//   "response": "..."    // The final response
// }
```
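Putting it together, a full polling loop could look like the sketch below. This is illustrative only: `use_mcp_tool` and `taskId` are as in the example above, and `nextCheckIn` is the server-suggested wait in seconds returned by `check_response_status`.

```typescript
// Illustrative only: poll until the task completes or errors out.
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

let parsed;
do {
  const status = await use_mcp_tool({
    server_name: "deepseek-claude",
    tool_name: "check_response_status",
    arguments: { taskId }
  });
  parsed = JSON.parse(status.content[0].text);
  if (parsed.status !== "complete" && parsed.status !== "error") {
    // Respect the server's exponential-backoff hint (defaults to 5 s).
    await sleep((parsed.nextCheckIn ?? 5) * 1000);
  }
} while (parsed.status !== "complete" && parsed.status !== "error");

console.log(parsed.response);
```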
## Development

For development with auto-rebuild:

```bash
npm run watch
```
## How It Works

1. **Reasoning Stage (DeepSeek R1)**:
   - Uses OpenRouter's reasoning tokens feature
   - The prompt is modified to output 'done' while the reasoning is captured
   - The reasoning is extracted from the response metadata

2. **Response Stage (Claude 3.5 Sonnet)**:
   - Receives the original prompt and DeepSeek's reasoning
   - Generates the final response, incorporating that reasoning
   - Maintains conversation context and history
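In code, the two stages reduce to two chat-completion calls through OpenRouter. The sketch below is illustrative, assuming an OpenAI SDK `client` pointed at `https://openrouter.ai/api/v1` and a user `prompt` string; note that the current `src/index.ts` actually routes both stages to `DEEPSEEK_MODEL`.

```typescript
// Assumes: const client = new OpenAI({ baseURL: "https://openrouter.ai/api/v1", apiKey: ... })
// and a user `prompt` string. Model names mirror the defaults above.

// Stage 1: capture DeepSeek R1's reasoning.
const reasoningResp = await client.chat.completions.create({
  model: "deepseek/deepseek-r1",
  messages: [{ role: "user", content: prompt }],
  temperature: 0.7,
  top_p: 1,
});
const reasoning = reasoningResp.choices[0].message.content ?? "";

// Stage 2: inject that reasoning into the final-response request.
const finalResp = await client.chat.completions.create({
  model: "anthropic/claude-3.5-sonnet:beta",
  messages: [
    {
      role: "user",
      content: `${prompt}\n\nReasoning to incorporate:\n${reasoning}`,
    },
  ],
  temperature: 0.7,
  top_p: 1,
});
const answer = finalResp.choices[0].message.content;
```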
## License

MIT License - See LICENSE file for details.

## Credits

Based on the RAT (Retrieval Augmented Thinking) concept by [Skirano](https://x.com/skirano/status/1881922469411643413), which enhances AI responses through structured reasoning and knowledge retrieval.

This implementation specifically combines DeepSeek R1's reasoning capabilities with Claude 3.5 Sonnet's response generation through OpenRouter's unified API.
```
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
```
--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------
```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - openrouterApiKey
    properties:
      openrouterApiKey:
        type: string
        description: The API key for accessing the OpenRouter service.
  commandFunction:
    # A function that produces the CLI command to start the MCP on stdio.
    |-
    (config) => ({ command: 'node', args: ['build/index.js'], env: { OPENROUTER_API_KEY: config.openrouterApiKey } })
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
{
  "name": "deepseek-thinking-claude-3-5-sonnet-cline-mcp",
  "version": "0.1.0",
  "description": "MCP server that combines DeepSeek's reasoning with Claude 3.5 Sonnet's response generation through Cline",
  "private": true,
  "type": "module",
  "bin": {
    "deepseek-thinking-claude-mcp": "./build/index.js"
  },
  "files": [
    "build"
  ],
  "scripts": {
    "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
    "prepare": "npm run build",
    "watch": "tsc --watch",
    "inspector": "npx @modelcontextprotocol/inspector build/index.js"
  },
  "dependencies": {
    "@anthropic-ai/sdk": "^0.36.2",
    "@modelcontextprotocol/sdk": "0.6.0",
    "dotenv": "^16.4.7",
    "openai": "^4.80.1",
    "uuid": "^11.0.5"
  },
  "devDependencies": {
    "@types/node": "^20.11.24",
    "@types/uuid": "^10.0.0",
    "typescript": "^5.3.3"
  }
}
```
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Stage 1: Build the application using Node.js
FROM node:18-alpine AS builder
# Set working directory
WORKDIR /app
# Copy package.json and package-lock.json to the working directory
COPY package.json package-lock.json ./
# Install dependencies
RUN npm install
# Copy source files
COPY src ./src
# Build the project
RUN npm run build
# Stage 2: Create a lightweight image for production
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy built files from builder
COPY --from=builder /app/build ./build
# Copy necessary files
COPY package.json package-lock.json ./
# Install only production dependencies
RUN npm install --omit=dev
# Environment variables
ENV NODE_ENV=production
# Entrypoint command to run the MCP server
ENTRYPOINT ["node", "build/index.js"]
```
--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------
```typescript
#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ErrorCode,
  ListToolsRequestSchema,
  McpError,
} from "@modelcontextprotocol/sdk/types.js";
import { OpenAI } from "openai";
import dotenv from "dotenv";
import * as os from "os";
import * as path from "path";
import * as fs from "fs/promises";
import { v4 as uuidv4 } from "uuid";
// Load environment variables
dotenv.config();

// Debug logging
const DEBUG = true;
const log = (...args: any[]) => {
  if (DEBUG) {
    console.error("[DEEPSEEK-CLAUDE MCP]", ...args);
  }
};

// Constants - use only the DeepSeek model
const DEEPSEEK_MODEL =
  process.env.DEEPSEEK_MODEL || "deepseek/deepseek-chat-v3-0324:free";
// Claude is no longer used at all
// const CLAUDE_MODEL = "anthropic/claude-3.5-sonnet:beta";

// Constants for the status-check mechanism
const INITIAL_STATUS_CHECK_DELAY_MS = 5000; // 5 seconds before the first check
const MAX_STATUS_CHECK_DELAY_MS = 60000; // at most 1 minute between checks
const STATUS_CHECK_BACKOFF_FACTOR = 1.5; // factor by which the delay grows
const MAX_STATUS_CHECK_ATTEMPTS = 20; // maximum number of attempts (avoids an infinite loop)
const TASK_TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes maximum per task
interface ConversationEntry {
  timestamp: number;
  prompt: string;
  reasoning: string;
  response: string;
  model: string;
}

interface ConversationContext {
  entries: ConversationEntry[];
  maxEntries: number;
}

interface GenerateResponseArgs {
  prompt: string;
  showReasoning?: boolean;
  clearContext?: boolean;
  includeHistory?: boolean;
}

interface CheckResponseStatusArgs {
  taskId: string;
}

interface TaskStatus {
  status: "pending" | "reasoning" | "responding" | "complete" | "error";
  prompt: string;
  showReasoning?: boolean;
  reasoning?: string;
  response?: string;
  error?: string;
  timestamp: number;
  // Extra properties for managing polling
  lastChecked?: number;
  nextCheckDelay?: number;
  checkAttempts?: number;
}

const isValidCheckResponseStatusArgs = (
  args: any
): args is CheckResponseStatusArgs =>
  typeof args === "object" && args !== null && typeof args.taskId === "string";

interface ClaudeMessage {
  role: "user" | "assistant";
  content: string | { type: string; text: string }[];
}

interface UiMessage {
  ts: number;
  type: string;
  say?: string;
  ask?: string;
  text: string;
  conversationHistoryIndex: number;
}

const isValidGenerateResponseArgs = (args: any): args is GenerateResponseArgs =>
  typeof args === "object" &&
  args !== null &&
  typeof args.prompt === "string" &&
  (args.showReasoning === undefined ||
    typeof args.showReasoning === "boolean") &&
  (args.clearContext === undefined || typeof args.clearContext === "boolean") &&
  (args.includeHistory === undefined ||
    typeof args.includeHistory === "boolean");
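
// Resolve the platform-specific VS Code globalStorage directory where the
// Cline (saoudrizwan.claude-dev) extension stores its task folders.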
function getClaudePath(): string {
  const homeDir = os.homedir();
  switch (process.platform) {
    case "win32":
      return path.join(
        homeDir,
        "AppData",
        "Roaming",
        "Code",
        "User",
        "globalStorage",
        "saoudrizwan.claude-dev",
        "tasks"
      );
    case "darwin":
      return path.join(
        homeDir,
        "Library",
        "Application Support",
        "Code",
        "User",
        "globalStorage",
        "saoudrizwan.claude-dev",
        "tasks"
      );
    default: // linux
      return path.join(
        homeDir,
        ".config",
        "Code",
        "User",
        "globalStorage",
        "saoudrizwan.claude-dev",
        "tasks"
      );
  }
}
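
// Scan Cline's task folders and return the message history of the most
// recently modified conversation that has not ended, or null if none is found.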
async function findActiveConversation(): Promise<ClaudeMessage[] | null> {
  try {
    const tasksPath = getClaudePath();
    const dirs = await fs.readdir(tasksPath);

    // Get modification time for each api_conversation_history.json
    const dirStats = await Promise.all(
      dirs.map(async (dir) => {
        try {
          const historyPath = path.join(
            tasksPath,
            dir,
            "api_conversation_history.json"
          );
          const stats = await fs.stat(historyPath);
          const uiPath = path.join(tasksPath, dir, "ui_messages.json");
          const uiContent = await fs.readFile(uiPath, "utf8");
          const uiMessages: UiMessage[] = JSON.parse(uiContent);
          const hasEnded = uiMessages.some(
            (m) => m.type === "conversation_ended"
          );
          return {
            dir,
            mtime: stats.mtime.getTime(),
            hasEnded,
          };
        } catch (error) {
          log("Error checking folder:", dir, error);
          return null;
        }
      })
    );

    // Filter out errors and ended conversations, then sort by modification time
    const sortedDirs = dirStats
      .filter(
        (stat): stat is NonNullable<typeof stat> =>
          stat !== null && !stat.hasEnded
      )
      .sort((a, b) => b.mtime - a.mtime);

    // Use most recently modified active conversation
    const latest = sortedDirs[0]?.dir;
    if (!latest) {
      log("No active conversations found");
      return null;
    }

    const historyPath = path.join(
      tasksPath,
      latest,
      "api_conversation_history.json"
    );
    const history = await fs.readFile(historyPath, "utf8");
    return JSON.parse(history);
  } catch (error) {
    log("Error finding active conversation:", error);
    return null;
  }
}
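
// Flatten a Claude-style message history into "Human:/Assistant:" text, keeping
// the most recent messages that fit the model-specific character budget.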
function formatHistoryForModel(
  history: ClaudeMessage[],
  isDeepSeek: boolean
): string {
  const maxLength = isDeepSeek ? 50000 : 600000; // 50k chars for DeepSeek, 600k for Claude
  const formattedMessages = [];
  let totalLength = 0;

  // Process messages in reverse chronological order to get most recent first
  for (let i = history.length - 1; i >= 0; i--) {
    const msg = history[i];
    const content = Array.isArray(msg.content)
      ? msg.content.map((c) => c.text).join("\n")
      : msg.content;
    const formattedMsg = `${
      msg.role === "user" ? "Human" : "Assistant"
    }: ${content}`;
    const msgLength = formattedMsg.length;

    // Stop adding messages if we'd exceed the limit
    if (totalLength + msgLength > maxLength) {
      break;
    }

    formattedMessages.push(formattedMsg); // Add most recent messages first
    totalLength += msgLength;
  }

  // Reverse to get chronological order
  return formattedMessages.reverse().join("\n\n");
}
class DeepseekClaudeServer {
  private server: Server;
  private openrouterClient: OpenAI;
  private context: ConversationContext = {
    entries: [],
    maxEntries: 10,
  };
  private activeTasks: Map<string, TaskStatus> = new Map();

  constructor() {
    log("Initializing API clients...");

    // Initialize OpenRouter client
    this.openrouterClient = new OpenAI({
      baseURL: "https://openrouter.ai/api/v1",
      apiKey: process.env.OPENROUTER_API_KEY,
    });
    log("OpenRouter client initialized");

    // Initialize MCP server
    this.server = new Server(
      {
        name: "deepseek-thinking-claude-mcp",
        version: "0.1.0",
      },
      {
        capabilities: {
          tools: {},
        },
      }
    );

    this.setupToolHandlers();

    // Error handling
    this.server.onerror = (error) => console.error("[MCP Error]", error);
    process.on("SIGINT", async () => {
      await this.server.close();
      process.exit(0);
    });
  }

  private addToContext(entry: ConversationEntry) {
    // Use DEEPSEEK_MODEL instead of CLAUDE_MODEL
    const entryWithUpdatedModel = {
      ...entry,
      model: DEEPSEEK_MODEL,
    };
    this.context.entries.push(entryWithUpdatedModel);
    if (this.context.entries.length > this.context.maxEntries) {
      this.context.entries.shift(); // Remove oldest
    }
  }

  private formatContextForPrompt(): string {
    return this.context.entries
      .map(
        (entry) =>
          `Question: ${entry.prompt}\nReasoning: ${entry.reasoning}\nAnswer: ${entry.response}`
      )
      .join("\n\n");
  }
  private setupToolHandlers() {
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        {
          name: "generate_response",
          description:
            "Generate a response using DeepSeek's reasoning and Claude's response generation through OpenRouter.",
          inputSchema: {
            type: "object",
            properties: {
              prompt: {
                type: "string",
                description: "The user's input prompt",
              },
              showReasoning: {
                type: "boolean",
                description: "Whether to include reasoning in response",
                default: false,
              },
              clearContext: {
                type: "boolean",
                description: "Clear conversation history before this request",
                default: false,
              },
              includeHistory: {
                type: "boolean",
                description: "Include Cline conversation history for context",
                default: true,
              },
            },
            required: ["prompt"],
          },
        },
        {
          name: "check_response_status",
          description: "Check the status of a response generation task",
          inputSchema: {
            type: "object",
            properties: {
              taskId: {
                type: "string",
                description: "The task ID returned by generate_response",
              },
            },
            required: ["taskId"],
          },
        },
      ],
    }));
    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      if (request.params.name === "generate_response") {
        if (!isValidGenerateResponseArgs(request.params.arguments)) {
          throw new McpError(
            ErrorCode.InvalidParams,
            "Invalid generate_response arguments"
          );
        }
        const taskId = uuidv4();
        const { prompt, showReasoning, clearContext, includeHistory } =
          request.params.arguments;

        // Initialize task status with the tracking properties used for polling
        this.activeTasks.set(taskId, {
          status: "pending",
          prompt,
          showReasoning,
          timestamp: Date.now(),
          lastChecked: Date.now(),
          nextCheckDelay: INITIAL_STATUS_CHECK_DELAY_MS,
          checkAttempts: 0,
        });

        // Start processing in background
        this.processTask(taskId, clearContext, includeHistory).catch(
          (error) => {
            log("Error processing task:", error);
            this.activeTasks.set(taskId, {
              ...this.activeTasks.get(taskId)!,
              status: "error",
              error: error.message,
            });
          }
        );

        // Return task ID immediately
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                taskId,
                suggestedWaitTime: Math.round(INITIAL_STATUS_CHECK_DELAY_MS / 1000), // suggested wait in seconds
              }),
            },
          ],
        };
} else if (request.params.name === "check_response_status") {
if (!isValidCheckResponseStatusArgs(request.params.arguments)) {
throw new McpError(
ErrorCode.InvalidParams,
"Invalid check_response_status arguments"
);
}
const taskId = request.params.arguments.taskId;
const task = this.activeTasks.get(taskId);
if (!task) {
throw new McpError(
ErrorCode.InvalidRequest,
`No task found with ID: ${taskId}`
);
}
// Vérifier si la tâche a expiré
const currentTime = Date.now();
if (currentTime - task.timestamp > TASK_TIMEOUT_MS) {
const updatedTask = {
...task,
status: "error" as const,
error: `Tâche expirée après ${TASK_TIMEOUT_MS / 60000} minutes`
};
this.activeTasks.set(taskId, updatedTask);
return {
content: [
{
type: "text",
text: JSON.stringify({
status: updatedTask.status,
reasoning: updatedTask.showReasoning ? updatedTask.reasoning : undefined,
response: undefined,
error: updatedTask.error,
timeoutAfter: TASK_TIMEOUT_MS / 60000
})
}
]
};
}
// Mettre à jour les propriétés de suivi
const checkAttempts = (task.checkAttempts || 0) + 1;
// Vérifier si nous avons atteint le nombre maximal de tentatives
if (checkAttempts > MAX_STATUS_CHECK_ATTEMPTS && task.status !== "complete" && task.status !== "error") {
const updatedTask = {
...task,
status: "error" as const,
error: `Nombre maximum de tentatives atteint (${MAX_STATUS_CHECK_ATTEMPTS})`,
checkAttempts
};
this.activeTasks.set(taskId, updatedTask);
return {
content: [
{
type: "text",
text: JSON.stringify({
status: updatedTask.status,
reasoning: updatedTask.showReasoning ? updatedTask.reasoning : undefined,
response: undefined,
error: updatedTask.error,
maxAttempts: MAX_STATUS_CHECK_ATTEMPTS
})
}
]
};
}
// Calculer le délai avant la prochaine vérification (backoff exponentiel)
let nextCheckDelay = task.nextCheckDelay || INITIAL_STATUS_CHECK_DELAY_MS;
nextCheckDelay = Math.min(nextCheckDelay * STATUS_CHECK_BACKOFF_FACTOR, MAX_STATUS_CHECK_DELAY_MS);
// Mettre à jour le statut de la tâche
const updatedTask = {
...task,
lastChecked: currentTime,
nextCheckDelay,
checkAttempts
};
this.activeTasks.set(taskId, updatedTask);
return {
content: [
{
type: "text",
text: JSON.stringify({
status: task.status,
reasoning: task.showReasoning ? task.reasoning : undefined,
response: task.status === "complete" ? task.response : undefined,
error: task.error,
nextCheckIn: Math.round(nextCheckDelay / 1000), // Temps suggéré en secondes
checkAttempts,
elapsedTime: Math.round((currentTime - task.timestamp) / 1000) // Temps écoulé en secondes
}),
},
],
};
} else {
throw new McpError(
ErrorCode.MethodNotFound,
`Unknown tool: ${request.params.name}`
);
}
});
}
  private async processTask(
    taskId: string,
    clearContext?: boolean,
    includeHistory?: boolean
  ): Promise<void> {
    const task = this.activeTasks.get(taskId);
    if (!task) {
      throw new Error(`No task found with ID: ${taskId}`);
    }

    try {
      if (clearContext) {
        this.context.entries = [];
      }

      // Update status to reasoning
      this.activeTasks.set(taskId, {
        ...task,
        status: "reasoning",
      });

      // Get Cline conversation history if requested
      let history: ClaudeMessage[] | null = null;
      if (includeHistory !== false) {
        history = await findActiveConversation();
      }

      // Get DeepSeek reasoning with limited history
      const reasoningHistory = history
        ? formatHistoryForModel(history, true)
        : "";
      const reasoningPrompt = reasoningHistory
        ? `${reasoningHistory}\n\nNew question: ${task.prompt}`
        : task.prompt;
      const reasoning = await this.getDeepseekReasoning(reasoningPrompt);

      // Update status with reasoning
      this.activeTasks.set(taskId, {
        ...task,
        status: "responding",
        reasoning,
      });

      // Get final response with full history
      const responseHistory = history
        ? formatHistoryForModel(history, false)
        : "";
      const fullPrompt = responseHistory
        ? `${responseHistory}\n\nCurrent task: ${task.prompt}`
        : task.prompt;
      const response = await this.getFinalResponse(fullPrompt, reasoning);

      // Add to context after successful response
      this.addToContext({
        timestamp: Date.now(),
        prompt: task.prompt,
        reasoning,
        response,
        model: DEEPSEEK_MODEL, // use DEEPSEEK_MODEL instead of CLAUDE_MODEL
      });

      // Update status to complete
      this.activeTasks.set(taskId, {
        ...task,
        status: "complete",
        reasoning,
        response,
        timestamp: Date.now(),
      });
    } catch (error) {
      // Update status to error
      this.activeTasks.set(taskId, {
        ...task,
        status: "error",
        error: error instanceof Error ? error.message : "Unknown error",
        timestamp: Date.now(),
      });
      throw error;
    }
  }
  private async getDeepseekReasoning(prompt: string): Promise<string> {
    const contextPrompt =
      this.context.entries.length > 0
        ? `Previous conversation:\n${this.formatContextForPrompt()}\n\nNew question: ${prompt}`
        : prompt;

    try {
      // Add an explicit instruction so the model produces its reasoning
      const requestPrompt = `Analyze the following question in detail before answering. Think step by step and lay out your full reasoning.\n\n${contextPrompt}`;

      // Get reasoning from DeepSeek (without the include_reasoning parameter)
      const response = await this.openrouterClient.chat.completions.create({
        model: DEEPSEEK_MODEL,
        messages: [
          {
            role: "user",
            content: requestPrompt,
          },
        ],
        temperature: 0.7,
        top_p: 1,
      });

      // Use the response content directly as the reasoning
      if (
        !response.choices ||
        !response.choices[0] ||
        !response.choices[0].message ||
        !response.choices[0].message.content
      ) {
        throw new Error("Empty response from DeepSeek");
      }
      return response.choices[0].message.content;
    } catch (error) {
      log("Error in getDeepseekReasoning:", error);
      throw error;
    }
  }
  private async getFinalResponse(
    prompt: string,
    reasoning: string
  ): Promise<string> {
    try {
      // Instead of sending to Claude, use DeepSeek for the final response as well
      const response = await this.openrouterClient.chat.completions.create({
        model: DEEPSEEK_MODEL, // use DeepSeek here
        messages: [
          {
            role: "user",
            content: `${prompt}\n\nHere is my prior analysis of this question: ${reasoning}\nNow generate a complete, detailed response based on that analysis.`,
          },
        ],
        temperature: 0.7,
        top_p: 1,
      } as any);

      return (
        response.choices[0].message.content || "Error: No response content"
      );
    } catch (error) {
      log("Error in getFinalResponse:", error);
      throw error;
    }
  }

  async run() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);
    console.error("DeepSeek-Claude MCP server running on stdio");
  }
}

const server = new DeepseekClaudeServer();
server.run().catch(console.error);
```