# Directory Structure

```
├── .env.example
├── .gitattributes
├── .gitignore
├── config
│   ├── models.yaml
│   └── system_instructions.yaml
├── config.example.ts
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── src
│   ├── index.ts
│   ├── providers
│   │   └── openrouter.ts
│   ├── stores
│   │   ├── FileSystemStore.ts
│   │   └── Store.ts
│   └── types
│       ├── conversation.ts
│       ├── errors.ts
│       └── server.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------

```
# Auto detect text files and perform LF normalization
* text=auto

```

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
# OpenAI Configuration
OPENAI_API_KEY=your_openai_key_here

# DeepSeek Configuration
DEEPSEEK_API_KEY=your_api_key_here

# OpenRouter Configuration
OPENROUTER_API_KEY=your-openrouter-api-key

# Server Configuration
DATA_DIR=./data/conversations
LOG_LEVEL=info  # debug, info, warn, error

# Conversation storage path
CONVERSATIONS_PATH=d:\\Projects\\Conversations 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.pnpm-debug.log*

# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov

# Coverage directory used by tools like istanbul
coverage
*.lcov

# nyc test coverage
.nyc_output

# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt

# Bower dependency directory (https://bower.io/)
bower_components

# node-waf configuration
.lock-wscript

# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release

# Dependency directories
node_modules/
jspm_packages/

# Snowpack dependency directory (https://snowpack.dev/)
web_modules/

# TypeScript cache
*.tsbuildinfo

# Optional npm cache directory
.npm

# Optional eslint cache
.eslintcache

# Optional stylelint cache
.stylelintcache

# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/

# Optional REPL history
.node_repl_history

# Output of 'npm pack'
*.tgz

# Yarn Integrity file
.yarn-integrity

# dotenv environment variable files
.env
.env.development.local
.env.test.local
.env.production.local
.env.local

# parcel-bundler cache (https://parceljs.org/)
.cache
.parcel-cache

# Next.js build output
.next
out

# Nuxt.js build / generate output
.nuxt
dist

# Gatsby files
.cache/
# Comment in the public line in if your project uses Gatsby and not Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public

# vuepress build output
.vuepress/dist

# vuepress v2.x temp and cache directory
.temp
.cache

# Docusaurus cache and generated files
.docusaurus

# Serverless directories
.serverless/

# FuseBox cache
.fusebox/

# DynamoDB Local files
.dynamodb/

# TernJS port file
.tern-port

# Stores VSCode versions used for testing VSCode extensions
.vscode-test

# yarn v2
.yarn/cache
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
.pnp.*

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# MCP Conversation Server

A Model Context Protocol (MCP) server implementation for managing conversations with OpenRouter's language models. This server provides a standardized interface for applications to interact with various language models through a unified conversation management system.

## Features

- **MCP Protocol Support**
  - Full MCP protocol compliance
  - Resource management and discovery
  - Tool-based interaction model
  - Streaming response support
  - Error handling and recovery

- **OpenRouter Integration**
  - Support for all OpenRouter models
  - Real-time streaming responses
  - Automatic token counting
  - Model context window management
  - Available models include:
    - Claude 3 Opus
    - Claude 3 Sonnet
    - Llama 2 70B
    - And many more from OpenRouter's catalog

- **Conversation Management**
  - Create and manage multiple conversations
  - Support for system messages
  - Message history tracking
  - Token usage monitoring
  - Conversation filtering and search

- **Streaming Support**
  - Real-time message streaming
  - Chunked response handling
  - Token counting

- **File System Persistence**
  - Conversation state persistence
  - Configurable storage location
  - Automatic state management

## Installation

```bash
npm install mcp-conversation-server
```

## Configuration

All configuration for the MCP Conversation Server is provided via YAML. Update the `config/models.yaml` file with your settings. For example:

```yaml
# MCP Server Configuration
openRouter:
  apiKey: "YOUR_OPENROUTER_API_KEY"  # Replace with your actual OpenRouter API key.

persistence:
  path: "./conversations"  # Directory for storing conversation data.

models:
  # Define your models here
  'provider/model-name':
    id: 'provider/model-name'
    contextWindow: 123456
    streaming: true
    temperature: 0.7
    description: 'Model description'

# Default model to use if none specified
defaultModel: 'provider/model-name'
```

### Server Configuration

The MCP Conversation Server loads all of its configuration from this YAML file. In your application, load the configuration as follows:

```typescript
const config = await loadModelsConfig(); // Loads openRouter, persistence, models, and defaultModel settings from 'config/models.yaml'
```

*Note: Environment variables are no longer required as all configuration is provided via the YAML file.*
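
For reference, a trimmed sketch of such a loader, mirroring the validation performed in `src/index.ts` (the file path shown is the repository default):

```typescript
import * as fs from 'fs/promises';
import { parse } from 'yaml';

// Minimal YAML config loader; mirrors the checks in src/index.ts.
async function loadModelsConfig() {
    const raw = await fs.readFile('config/models.yaml', 'utf8');
    const config = parse(raw);

    if (!config.openRouter?.apiKey) {
        throw new Error('Missing openRouter.apiKey in models.yaml');
    }
    if (!config.models || Object.keys(config.models).length === 0) {
        throw new Error('No models configured in models.yaml');
    }
    if (!config.defaultModel) {
        throw new Error('Missing defaultModel in models.yaml');
    }
    return config;
}
```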

## Usage

### Basic Server Setup

```typescript
import { ConversationServer } from 'mcp-conversation-server';

const config = await loadModelsConfig(); // load settings from config/models.yaml
const server = new ConversationServer(config);
server.run().catch(console.error);
```

### Available Tools

The server exposes several MCP tools; a client-side invocation sketch follows the list:

1. **create-conversation**

   ```typescript
   {
       provider: 'openrouter',    // Provider is always 'openrouter'
       model: string,             // OpenRouter model ID (e.g., 'anthropic/claude-3-opus-20240229')
       title?: string;            // Optional conversation title
   }
   ```

2. **send-message**

   ```typescript
   {
       conversationId: string;  // Conversation ID
       content: string;         // Message content
       stream?: boolean;        // Enable streaming responses
   }
   ```

3. **list-conversations**

   ```typescript
   {
       filter?: {
           model?: string;      // Filter by model
           startDate?: string;  // Filter by start date
           endDate?: string;    // Filter by end date
       }
   }
   ```
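
As a rough sketch, calling these tools from an MCP client might look like the following; the client name, command, and arguments are illustrative, and the client API comes from `@modelcontextprotocol/sdk`:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Connect to the server over stdio, then create a conversation.
const client = new Client({ name: 'example-client', version: '1.0.0' });
const transport = new StdioClientTransport({
    command: 'node',
    args: ['build/index.js']
});
await client.connect(transport);

const result = await client.callTool({
    name: 'create-conversation',
    arguments: {
        provider: 'openrouter',
        model: 'anthropic/claude-3-opus-20240229'
    }
});
console.log(result.content);
```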

### Resources

The server provides access to several resources; a read example follows the list:

1. **conversation://{id}**
   - Access specific conversation details
   - View message history
   - Check conversation metadata

2. **conversation://list**
   - List all active conversations
   - Filter conversations by criteria
   - Sort by recent activity
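
A hedged sketch of reading these resources, reusing the client from the tools example above:

```typescript
// List all conversations, then read one by id.
const listing = await client.readResource({ uri: 'conversation://list' });

const conversationId = '...'; // id returned by create-conversation
const conversation = await client.readResource({
    uri: `conversation://${conversationId}`
});
```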

## Development

### Building

```bash
npm run build
```

### Running Tests

```bash
npm test
```

### Debugging

The server provides several debugging features:

1. **Error Logging**
   - All errors are logged with stack traces
   - Token usage tracking
   - Rate limit monitoring

2. **MCP Inspector**

   ```bash
   npm run inspector
   ```

   Use the MCP Inspector to:
   - Test tool execution
   - View resource contents
   - Monitor message flow
   - Validate protocol compliance

3. **Provider Validation**

   ```typescript
   await server.providerManager.validateProviders();
   ```

   Validates:
   - API key validity
   - Model availability
   - Rate limit status

### Troubleshooting

Common issues and solutions:

1. **OpenRouter Connection Issues**
   - Verify your API key is valid
   - Check rate limits on [OpenRouter's dashboard](https://openrouter.ai/dashboard)
   - Ensure the model ID is correct
   - Monitor credit usage

2. **Message Streaming Errors**
   - Verify model streaming support
   - Check connection stability
   - Monitor token limits
   - Handle timeout settings

3. **File System Errors**
   - Check directory permissions
   - Verify path configuration
   - Monitor disk space
   - Handle concurrent access

## Contributing

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request

## License

ISC License

```

--------------------------------------------------------------------------------
/src/stores/Store.ts:
--------------------------------------------------------------------------------

```typescript
import { Conversation } from '../types/conversation.js';

export interface Store {
    initialize(): Promise<void>;
    saveConversation(conversation: Conversation): Promise<void>;
    getConversation(id: string): Promise<Conversation | null>;
    listConversations(): Promise<Conversation[]>;
} 
```
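
For illustration, a minimal in-memory implementation of this interface might look like the sketch below; `MemoryStore` is a hypothetical class and is not part of the repository:

```typescript
import { Conversation } from '../types/conversation.js';
import { Store } from './Store.js';

// Hypothetical in-memory Store, e.g. for tests; not part of the repository.
export class MemoryStore implements Store {
    private conversations = new Map<string, Conversation>();

    async initialize(): Promise<void> {
        // Nothing to set up for an in-memory store.
    }

    async saveConversation(conversation: Conversation): Promise<void> {
        this.conversations.set(conversation.id, conversation);
    }

    async getConversation(id: string): Promise<Conversation | null> {
        return this.conversations.get(id) ?? null;
    }

    async listConversations(): Promise<Conversation[]> {
        return [...this.conversations.values()];
    }
}
```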

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

```

--------------------------------------------------------------------------------
/src/types/conversation.ts:
--------------------------------------------------------------------------------

```typescript
export interface Message {
    role: 'system' | 'user' | 'assistant';
    content: string;
    timestamp: string;
    name?: string;
}

export interface Conversation {
    id: string;
    model: string;
    title: string;
    messages: Message[];
    created: string;
    updated: string;
}

export interface ConversationFilter {
    model?: string;
    startDate?: string;
    endDate?: string;
} 
```

--------------------------------------------------------------------------------
/src/types/errors.ts:
--------------------------------------------------------------------------------

```typescript
export class McpError extends Error {
    constructor(public code: string, message: string) {
        super(message);
        this.name = 'McpError';
    }
}

export class ValidationError extends Error {
    constructor(message: string) {
        super(message);
        this.name = 'ValidationError';
    }
}

export class OpenRouterError extends Error {
    constructor(message: string) {
        super(message);
        this.name = 'OpenRouterError';
    }
}

export class FileSystemError extends Error {
    constructor(message: string) {
        super(message);
        this.name = 'FileSystemError';
    }
} 
```
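
These classes let callers branch on the failure kind with `instanceof`, as `src/index.ts` does for `OpenRouterError`; `describeFailure` below is a hypothetical helper for illustration:

```typescript
import { OpenRouterError, FileSystemError } from './errors.js';

// Hypothetical helper: map an error to a user-facing message by subclass.
function describeFailure(error: unknown): string {
    if (error instanceof OpenRouterError) {
        return `Provider error: ${error.message}`;
    }
    if (error instanceof FileSystemError) {
        return `Storage error: ${error.message}`;
    }
    return error instanceof Error ? error.message : 'Unknown error';
}
```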

--------------------------------------------------------------------------------
/config/models.yaml:
--------------------------------------------------------------------------------

```yaml
# OpenRouter Models Configuration
# Visit https://openrouter.ai/docs#models for the complete list of available models

# MCP Server Configuration
openRouter:
  apiKey: "<<OPEN ROUTER>>"  # Replace with your actual OpenRouter API key.

persistence:
  path: "d:/projects/conversations"  # Optional: Directory for storing conversation data.

models:
  'google/gemini-2.0-pro-exp-02-05:free':
    id: 'google/gemini-2.0-pro-exp-02-05:free'
    contextWindow: 2000000
    streaming: true
    temperature: 0.2
    description: 'Google Gemini 2.0 Pro is a powerful and versatile language model suited to complex reasoning tasks.'

  'google/gemini-2.0-flash-001':
    id: 'google/gemini-2.0-flash-001'
    contextWindow: 1000000
    streaming: true
    temperature: 0.2
    description: 'Google Gemini 2.0 Flash is a fast, lower-latency model suited to high-volume tasks.'

  # Add more models as needed following the same format
  # Example:
  # 'provider/model-name':
  #   id: 'provider/model-name'
  #   contextWindow: <window_size>
  #   streaming: true/false
  #   description: 'Model description'

# Default model to use if none specified
defaultModel: 'google/gemini-2.0-pro-exp-02-05:free'
```

--------------------------------------------------------------------------------
/src/types/server.ts:
--------------------------------------------------------------------------------

```typescript
export interface ResourceConfig {
    maxSizeBytes: number;
    allowedTypes: string[];
    chunkSize: number;
}

export interface ServerConfig {
    openRouter: {
        apiKey: string;
    };
    models: {
        [key: string]: {
            id: string;
            contextWindow: number;
            streaming: boolean;
            description?: string;
        };
    };
    defaultModel: string;
    persistence: {
        type: 'filesystem';
        path: string;
    };
    resources: {
        maxSizeBytes: number;
        allowedTypes: string[];
        chunkSize: number;
    };
}

export interface ModelConfig {
    contextWindow: number;
    streaming: boolean;
}

export interface PersistenceConfig {
    type: 'filesystem' | 'memory';
    path?: string;
}

export interface CreateConversationParams {
    provider?: string;
    model?: string;
    title?: string;
}

export interface Conversation {
    id: string;
    provider: string;
    model: string;
    title: string;
    messages: Message[];
    createdAt: number;
    updatedAt: number;
}

export interface Message {
    role: 'user' | 'assistant';
    content: string;
    timestamp: number;
    context?: {
        documents?: string[];
        code?: string[];
    };
}

```
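
For illustration, a minimal object satisfying `ServerConfig` might look like this; all values are placeholders, and a real configuration is loaded from `config/models.yaml`:

```typescript
import { ServerConfig } from './src/types/server.js';

// Placeholder values for illustration only.
const exampleConfig: ServerConfig = {
    openRouter: { apiKey: 'YOUR_OPENROUTER_API_KEY' },
    models: {
        'anthropic/claude-3-opus-20240229': {
            id: 'anthropic/claude-3-opus-20240229',
            contextWindow: 200000,
            streaming: true
        }
    },
    defaultModel: 'anthropic/claude-3-opus-20240229',
    persistence: { type: 'filesystem', path: './conversations' },
    resources: {
        maxSizeBytes: 10 * 1024 * 1024, // 10MB
        allowedTypes: ['.txt', '.md', '.json'],
        chunkSize: 1024
    }
};
```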

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
  "name": "mcp-conversation-server",
  "version": "0.1.0",
  "description": "A Model Context Protocol server used to execute various applicatoin types.",
  "private": true,
  "type": "module",
  "bin": {
    "mcp-conversation-server": "./build/index.js"
  },
  "files": [
    "build"
  ],
  "scripts": {
    "prebuild": "rimraf build",
    "build": "tsc && npm run copy-config",
    "copy-config": "copyfiles config/**/* build/",
    "start": "node build/index.js",
    "dev": "ts-node-esm src/index.ts",
    "test": "jest"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "@types/dotenv": "^8.2.3",
    "@types/express": "^4.17.21",
    "@types/uuid": "^9.0.7",
    "dotenv": "^16.4.7",
    "express": "^4.18.2",
    "openai": "^4.83.0",
    "uuid": "^9.0.1",
    "yaml": "^2.7.0"
  },
  "devDependencies": {
    "@types/jest": "^29.5.14",
    "@types/node": "^20.11.24",
    "@typescript-eslint/eslint-plugin": "^7.0.0",
    "@typescript-eslint/parser": "^7.0.0",
    "copyfiles": "^2.4.1",
    "eslint": "^8.56.0",
    "jest": "^29.7.0",
    "prettier": "^3.4.2",
    "rimraf": "^5.0.10",
    "ts-jest": "^29.2.5",
    "ts-node-dev": "^2.0.0",
    "typescript": "^5.3.3"
  }
}

```

--------------------------------------------------------------------------------
/config/system_instructions.yaml:
--------------------------------------------------------------------------------

```yaml
# System Instructions Configuration
# Define default and model-specific system instructions

default: |
  You are a helpful AI assistant focused on providing accurate and concise responses.
  Please follow these guidelines:
  - Be direct and to the point
  - Show code examples when relevant
  - Explain complex concepts clearly
  - Ask for clarification when needed

models:
  # DeepSeek Models
  'deepseek/deepseek-chat': |
    You are DeepSeek Chat, a helpful AI assistant with strong coding and technical capabilities.
    Guidelines:
    - Focus on practical, implementable solutions
    - Provide code examples with explanations
    - Use clear technical explanations
    - Follow best practices in software development
    - Ask for clarification on ambiguous requirements

  'deepseek/deepseek-r1': |
    You are DeepSeek Reasoner, an AI focused on step-by-step problem solving and logical reasoning.
    Guidelines:
    - Break down complex problems into steps
    - Show your reasoning process clearly
    - Validate assumptions
    - Consider edge cases
    - Provide concrete examples

  # Claude Models
  'anthropic/claude-3-opus-20240229': |
    You are Claude 3 Opus, a highly capable AI assistant with strong analytical and creative abilities.
    Guidelines:
    - Provide comprehensive, well-reasoned responses
    - Balance depth with clarity
    - Use examples to illustrate complex points
    - Consider multiple perspectives
    - Maintain high standards of accuracy

  'anthropic/claude-3-sonnet-20240229': |
    You are Claude 3 Sonnet, focused on efficient and practical problem-solving.
    Guidelines:
    - Provide concise, actionable responses
    - Focus on practical solutions
    - Use clear examples
    - Be direct and efficient
    - Ask for clarification when needed

  # Llama Models
  'meta-llama/llama-2-70b-chat': |
    You are Llama 2, an open-source AI assistant focused on helpful and accurate responses.
    Guidelines:
    - Provide clear, straightforward answers
    - Use examples when helpful
    - Stay within known capabilities
    - Be direct about limitations
    - Focus on practical solutions 
```
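
A small helper that resolves the instruction for a given model, falling back to the default, could look like the following sketch; the function name is hypothetical, and the file layout matches this YAML:

```typescript
import * as fs from 'fs/promises';
import { parse } from 'yaml';

// Hypothetical helper: pick the model-specific instruction, else the default.
async function getSystemInstruction(modelId: string): Promise<string> {
    const raw = await fs.readFile('config/system_instructions.yaml', 'utf8');
    const instructions = parse(raw) as {
        default: string;
        models?: Record<string, string>;
    };
    return instructions.models?.[modelId] ?? instructions.default;
}
```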

--------------------------------------------------------------------------------
/config.example.ts:
--------------------------------------------------------------------------------

```typescript
import { ServerConfig } from './src/types/server.js';
import * as path from 'path';

/**
 * Example configuration for the MCP Conversation Server
 * 
 * This configuration includes examples for the DeepSeek and OpenAI providers;
 * OpenRouter is configured via config/models.yaml (see the README).
 * 
 * Storage paths can be configured in several ways:
 * 1. Use environment variable: CONVERSATIONS_PATH
 * 2. Set absolute path in config
 * 3. Set relative path (relative to project root)
 * 4. Let it default to OS-specific app data directory
 */

const config: ServerConfig = {
    providers: {
        deepseek: {
            endpoint: 'https://api.deepseek.com/v1',
            apiKey: process.env.DEEPSEEK_API_KEY || '',
            models: {
                'deepseek-chat': {
                    id: 'deepseek-chat',
                    contextWindow: 32768,
                    streaming: true
                },
                'deepseek-reasoner': {
                    id: 'deepseek-reasoner',
                    contextWindow: 64000,
                    streaming: true
                }
            },
            timeouts: {
                completion: 300000,  // 5 minutes for non-streaming
                stream: 120000       // 2 minutes per stream chunk
            }
        },
        openai: {
            endpoint: 'https://api.openai.com/v1',
            apiKey: process.env.OPENAI_API_KEY || '',
            models: {
                'gpt-4': {
                    id: 'gpt-4',
                    contextWindow: 8192,
                    streaming: true
                },
                'gpt-3.5-turbo': {
                    id: 'gpt-3.5-turbo',
                    contextWindow: 4096,
                    streaming: true
                }
            },
            timeouts: {
                completion: 300000,  // 5 minutes for non-streaming
                stream: 60000       // 1 minute per stream chunk
            }
        }
    },
    defaultProvider: 'deepseek',
    defaultModel: 'deepseek-chat',
    persistence: {
        type: 'filesystem' as const,
        // Use environment variable or default to d:\Projects\Conversations
        path: process.env.CONVERSATIONS_PATH || path.normalize('d:\\Projects\\Conversations')
    },
    resources: {
        maxSizeBytes: 10 * 1024 * 1024, // 10MB
        allowedTypes: ['.txt', '.md', '.json', '.csv'],
        chunkSize: 1024 // 1KB chunks
    }
};

export default config;

/**
 * Example usage:
 * 
 * ```typescript
 * import { ConversationServer } from './src/index.js';
 * import config from './config.js';
 * 
 * // Override storage path if needed
 * config.persistence.path = '/custom/path/to/conversations';
 * 
 * const server = new ConversationServer(config);
 * server.initialize().then(() => {
 *     console.log('Server initialized, connecting...');
 *     server.connect().catch(err => console.error('Failed to connect:', err));
 * }).catch(err => console.error('Failed to initialize:', err));
 * ```
 */ 
```

--------------------------------------------------------------------------------
/src/stores/FileSystemStore.ts:
--------------------------------------------------------------------------------

```typescript
import * as fs from 'fs/promises';
import * as path from 'path';
import { Conversation } from '../types/conversation.js';
import { Store } from './Store.js';

interface FSError extends Error {
    code?: string;
    message: string;
}

function isFSError(error: unknown): error is FSError {
    return error instanceof Error && ('code' in error || 'message' in error);
}

export class FileSystemStore implements Store {
    private dataPath: string;
    private initialized: boolean = false;

    constructor(dataPath: string) {
        this.dataPath = dataPath;
    }

    async initialize(): Promise<void> {
        if (this.initialized) {
            return;
        }

        try {
            await fs.mkdir(this.dataPath, { recursive: true });
            this.initialized = true;
        } catch (error) {
            throw new Error(`Failed to initialize store: ${error instanceof Error ? error.message : String(error)}`);
        }
    }

    private getConversationPath(id: string): string {
        return path.join(this.dataPath, `${id}.json`);
    }

    async saveConversation(conversation: Conversation): Promise<void> {
        const filePath = this.getConversationPath(conversation.id);
        try {
            await fs.writeFile(filePath, JSON.stringify(conversation, null, 2));
        } catch (error) {
            throw new Error(`Failed to save conversation: ${error instanceof Error ? error.message : String(error)}`);
        }
    }

    async getConversation(id: string): Promise<Conversation | null> {
        const filePath = this.getConversationPath(id);
        try {
            const data = await fs.readFile(filePath, 'utf-8');
            return JSON.parse(data) as Conversation;
        } catch (error) {
            if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
                return null;
            }
            throw new Error(`Failed to read conversation: ${error instanceof Error ? error.message : String(error)}`);
        }
    }

    async listConversations(): Promise<Conversation[]> {
        try {
            const files = await fs.readdir(this.dataPath);
            const conversations: Conversation[] = [];

            for (const file of files) {
                if (path.extname(file) === '.json') {
                    try {
                        const data = await fs.readFile(path.join(this.dataPath, file), 'utf-8');
                        conversations.push(JSON.parse(data) as Conversation);
                    } catch (error) {
                        console.error(`Failed to read conversation file ${file}:`, error);
                    }
                }
            }

            return conversations;
        } catch (error) {
            if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
                return [];
            }
            throw new Error(`Failed to list conversations: ${error instanceof Error ? error.message : String(error)}`);
        }
    }

    async deleteConversation(id: string): Promise<void> {
        const filePath = this.getConversationPath(id);
        try {
            await fs.unlink(filePath);
        } catch (error) {
            if (isFSError(error)) {
                if (error.code !== 'ENOENT') {
                    throw new Error(`Failed to delete conversation ${id}: ${error.message}`);
                }
            } else {
                throw new Error(`Failed to delete conversation ${id}: Unknown error`);
            }
        }
    }
}

```
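
A brief usage sketch for this store; the conversation values are placeholders:

```typescript
import { FileSystemStore } from './src/stores/FileSystemStore.js';

// Persist a conversation to disk, then read it back.
const store = new FileSystemStore('./data/conversations');
await store.initialize();

await store.saveConversation({
    id: 'example-id',
    model: 'anthropic/claude-3-opus-20240229',
    title: 'Example conversation',
    messages: [],
    created: new Date().toISOString(),
    updated: new Date().toISOString()
});

const loaded = await store.getConversation('example-id');
console.log(loaded?.title);
```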

--------------------------------------------------------------------------------
/src/providers/openrouter.ts:
--------------------------------------------------------------------------------

```typescript
import OpenAI from 'openai';
import { Message } from '../types/conversation.js';

interface Model {
    id: string;
    contextWindow: number;
    streaming: boolean;
    supportsFunctions?: boolean;
    temperature?: number;
    description?: string;
}

interface ProviderConfig {
    apiKey: string;
    models: {
        [key: string]: {
            id: string;
            contextWindow: number;
            streaming: boolean;
            temperature?: number;
            description?: string;
        };
    };
    defaultModel: string;
    timeouts?: {
        completion?: number;
        stream?: number;
    };
}

interface ProviderResponse {
    content: string;
    model: string;
    tokenCount?: number;
    metadata?: Record<string, unknown>;
}

interface CompletionParams {
    messages: Message[];
    model: string;
    stream: boolean;
    timeout?: number;
    temperature?: number;
    maxTokens?: number;
}

interface ModelInfo extends Model {
    isDefault: boolean;
    provider: string;
    cost?: {
        prompt: number;
        completion: number;
    };
}

export class OpenRouterProvider {
    private client: OpenAI;
    private _models: Model[];
    private defaultModel: string;
    private timeouts: Required<NonNullable<ProviderConfig['timeouts']>>;
    readonly name = 'openrouter';

    constructor(config: ProviderConfig) {
        if (!config.apiKey) {
            throw new Error('Missing openRouter.apiKey in YAML configuration');
        }

        if (!config.defaultModel) {
            throw new Error('Missing defaultModel in YAML configuration');
        }

        // Initialize OpenAI client with OpenRouter configuration
        this.client = new OpenAI({
            apiKey: config.apiKey,
            baseURL: 'https://openrouter.ai/api/v1',
            defaultQuery: { use_cache: 'true' },
            defaultHeaders: {
                'HTTP-Referer': 'https://github.com/cursor-ai/mcp-conversation-server',
                'X-Title': 'MCP Conversation Server',
                'Content-Type': 'application/json',
                'OR-SITE-LOCATION': 'https://github.com/cursor-ai/mcp-conversation-server',
                'OR-ALLOW-FINE-TUNING': 'false'
            }
        });

        this.timeouts = {
            completion: config.timeouts?.completion ?? 30000,
            stream: config.timeouts?.stream ?? 60000
        };

        this.defaultModel = config.defaultModel;

        // Convert configured models to internal format
        this._models = Object.entries(config.models).map(([id, modelConfig]) => ({
            id,
            contextWindow: modelConfig.contextWindow,
            streaming: modelConfig.streaming,
            temperature: modelConfig.temperature,
            description: modelConfig.description,
            supportsFunctions: false
        }));
    }

    private getModelConfig(modelId: string): Model {
        const model = this._models.find(m => m.id === modelId);
        if (!model) {
            console.warn(`Model ${modelId} not found in configuration, falling back to default model ${this.defaultModel}`);
            const defaultModel = this._models.find(m => m.id === this.defaultModel);
            if (!defaultModel) {
                throw new Error('Default model not found in configuration');
            }
            return defaultModel;
        }
        return model;
    }

    get models(): Model[] {
        return this._models;
    }

    async validateConfig(): Promise<void> {
        if (this._models.length === 0) {
            throw new Error('No models configured for OpenRouter provider');
        }

        try {
            // Simple validation - just verify API connection works
            await this.client.chat.completions.create({
                model: this._models[0].id,
                messages: [{ role: 'user', content: 'test' }],
                max_tokens: 1  // Minimum response size for validation
            });
        } catch (error: unknown) {
            const message = error instanceof Error ? error.message : 'Unknown error';
            throw new Error(`Failed to validate OpenRouter configuration: ${message}`);
        }
    }

    async createCompletion(params: CompletionParams): Promise<ProviderResponse> {
        try {
            // Get model configuration or fall back to default
            const modelConfig = this.getModelConfig(params.model);

            const response = await this.client.chat.completions.create({
                model: modelConfig.id,
                messages: params.messages.map((msg: Message) => ({
                    role: msg.role,
                    content: msg.content,
                    name: msg.name
                })),
                temperature: params.temperature ?? modelConfig.temperature ?? 0.7,
                max_tokens: params.maxTokens,
                stream: false
            });

            // Validate response structure
            if (!response || !response.choices || !Array.isArray(response.choices) || response.choices.length === 0) {
                throw new Error('Invalid or empty response from OpenRouter');
            }

            const choice = response.choices[0];
            if (!choice || !choice.message || typeof choice.message.content !== 'string') {
                throw new Error('Invalid message structure in OpenRouter response');
            }

            return {
                content: choice.message.content,
                model: modelConfig.id,
                tokenCount: response.usage?.total_tokens,
                metadata: {
                    provider: 'openrouter',
                    modelName: modelConfig.id,
                    ...response.usage && { usage: response.usage }
                }
            };
        } catch (error: unknown) {
            if (error instanceof Error) {
                if (error.message.includes('timeout')) {
                    throw new Error('OpenRouter request timed out. Please try again.');
                }
                if (error.message.includes('rate_limit')) {
                    throw new Error('OpenRouter rate limit exceeded. Please try again later.');
                }
                if (error.message.includes('insufficient_quota')) {
                    throw new Error('OpenRouter quota exceeded. Please check your credits.');
                }
                throw new Error(`OpenRouter completion failed: ${error.message}`);
            }
            throw new Error('Unknown error occurred during OpenRouter completion');
        }
    }

    async *streamCompletion(params: CompletionParams): AsyncIterableIterator<ProviderResponse> {
        try {
            // Get model configuration or fall back to default
            const modelConfig = this.getModelConfig(params.model);

            const stream = await this.client.chat.completions.create({
                model: modelConfig.id,
                messages: params.messages.map((msg: Message) => ({
                    role: msg.role,
                    content: msg.content,
                    name: msg.name
                })),
                temperature: params.temperature ?? modelConfig.temperature ?? 0.7,
                max_tokens: params.maxTokens,
                stream: true
            });

            for await (const chunk of stream) {
                // Validate chunk structure
                if (!chunk || !chunk.choices || !Array.isArray(chunk.choices) || chunk.choices.length === 0) {
                    continue;
                }

                const delta = chunk.choices[0]?.delta;
                if (!delta || typeof delta.content !== 'string') {
                    continue;
                }

                yield {
                    content: delta.content,
                    model: modelConfig.id,
                    metadata: {
                        provider: 'openrouter',
                        modelName: modelConfig.id,
                        isPartial: true
                    }
                };
            }
        } catch (error: unknown) {
            if (error instanceof Error) {
                if (error.message.includes('timeout')) {
                    throw new Error('OpenRouter streaming request timed out. Please try again.');
                }
                if (error.message.includes('rate_limit')) {
                    throw new Error('OpenRouter rate limit exceeded. Please try again later.');
                }
                if (error.message.includes('insufficient_quota')) {
                    throw new Error('OpenRouter quota exceeded. Please check your credits.');
                }
                throw new Error(`OpenRouter streaming completion failed: ${error.message}`);
            }
            throw new Error('Unknown error occurred during OpenRouter streaming');
        }
    }

    /**
     * Get detailed information about all available models
     * @returns Array of model information including default status and pricing
     */
    async listAvailableModels(): Promise<ModelInfo[]> {
        try {
            return this._models.map(model => {
                const [provider] = model.id.split('/');
                return {
                    ...model,
                    provider: provider || 'unknown',
                    isDefault: model.id === this.defaultModel,
                    cost: undefined // Could be fetched from OpenRouter API if needed
                };
            }).sort((a, b) => {
                // Sort with default model first, then by provider/name
                if (a.isDefault) return -1;
                if (b.isDefault) return 1;
                return a.id.localeCompare(b.id);
            });
        } catch (error) {
            const message = error instanceof Error ? error.message : 'Unknown error';
            throw new Error(`Failed to list available models: ${message}`);
        }
    }

    /**
     * Get the current default model configuration
     * @returns The default model configuration
     */
    getDefaultModel(): ModelInfo {
        const defaultModel = this._models.find(m => m.id === this.defaultModel);
        if (!defaultModel) {
            throw new Error('Default model not found in configuration');
        }
        const [provider] = defaultModel.id.split('/');
        return {
            ...defaultModel,
            isDefault: true,
            provider: provider || 'unknown',
            cost: undefined
        };
    }
} 
```
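
For illustration, the provider can be exercised directly as below; the API key and model id are placeholders, and real values come from `config/models.yaml`:

```typescript
import { OpenRouterProvider } from './src/providers/openrouter.js';

// Placeholder configuration for a direct, non-streaming completion.
const provider = new OpenRouterProvider({
    apiKey: process.env.OPENROUTER_API_KEY ?? '',
    models: {
        'anthropic/claude-3-opus-20240229': {
            id: 'anthropic/claude-3-opus-20240229',
            contextWindow: 200000,
            streaming: true
        }
    },
    defaultModel: 'anthropic/claude-3-opus-20240229'
});

const response = await provider.createCompletion({
    model: 'anthropic/claude-3-opus-20240229',
    messages: [{ role: 'user', content: 'Hello!', timestamp: new Date().toISOString() }],
    stream: false
});
console.log(response.content);
```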

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
import * as dotenv from 'dotenv';
import * as path from 'path';
import * as os from 'os';
import * as fs from 'fs/promises';
import * as crypto from 'node:crypto';
import { parse } from 'yaml';
import { Message, Conversation } from './types/conversation.js';
import { ServerConfig } from './types/server.js';
import { OpenRouterError, FileSystemError } from './types/errors.js';
import { OpenRouterProvider } from './providers/openrouter.js';

// Load environment variables from .env file
dotenv.config();

// Determine the appropriate app data directory based on OS
function getAppDataPath(): string {
    switch (process.platform) {
        case 'win32':
            return process.env.APPDATA || path.join(os.homedir(), 'AppData', 'Roaming');
        case 'darwin':
            return path.join(os.homedir(), 'Library', 'Application Support');
        default:
            return process.env.XDG_DATA_HOME || path.join(os.homedir(), '.local', 'share');
    }
}

// Create the app-specific data directory path
const APP_NAME = 'mcp-conversation-server';
const defaultDataPath = path.join(getAppDataPath(), APP_NAME, 'conversations');

/**
 * MCP Conversation Server
 * 
 * Workflow:
 * 1. Create a conversation:
 *    - Use create-conversation tool
 *    - Specify provider (e.g., 'deepseek') and model (e.g., 'deepseek-chat')
 *    - Optionally provide a title
 * 
 * 2. Send messages:
 *    - Use send-message tool
 *    - Provide conversationId from step 1
 *    - Set stream: true for real-time responses
 *    - Messages maintain chat context automatically
 * 
 * 3. Access conversation history:
 *    - Use resources/read with conversation://{id}/history
 *    - Full chat history with context is preserved
 * 
 * Error Handling:
 * - All errors include detailed messages and proper error codes
 * - Automatic retries for transient failures
 * - Timeouts are configurable per operation
 */

// Schema definitions
const ListResourcesSchema = z.object({
    method: z.literal('resources/list')
});

const ReadResourceSchema = z.object({
    method: z.literal('resources/read'),
    params: z.object({
        uri: z.string()
    })
});

const ListToolsSchema = z.object({
    method: z.literal('tools/list')
});

const CallToolSchema = z.object({
    method: z.literal('tools/call'),
    params: z.object({
        name: z.string(),
        arguments: z.record(z.unknown())
    })
});

const ListPromptsSchema = z.object({
    method: z.literal('prompts/list')
});

const GetPromptSchema = z.object({
    method: z.literal('prompts/get'),
    params: z.object({
        name: z.string(),
        arguments: z.record(z.unknown()).optional()
    })
});

// Modify logging to use stderr for ALL non-JSON-RPC messages
function logDebug(...args: any[]): void {
    console.error('[DEBUG]', ...args);
}

function logError(...args: any[]): void {
    console.error('[ERROR]', ...args);
}

// Create the MCP server instance
const server = new McpServer({
    name: 'conversation-server',
    version: '1.0.0'
});

// Initialize server configuration
const config: ServerConfig = {
    openRouter: {
        apiKey: process.env.OPENROUTER_API_KEY || ''
    },
    models: {},  // Will be populated from YAML config
    defaultModel: '',  // Will be populated from YAML config
    persistence: {
        type: 'filesystem',
        path: process.env.CONVERSATIONS_PATH || defaultDataPath
    },
    resources: {
        maxSizeBytes: 10 * 1024 * 1024, // 10MB
        allowedTypes: ['.txt', '.md', '.json', '.csv', '.cs', '.ts', '.js', '.jsx', '.tsx', '.pdf'],
        chunkSize: 1024 // 1KB chunks
    }
};

let openRouterProvider: OpenRouterProvider;

// Load models configuration
async function loadModelsConfig(): Promise<ServerConfig> {
    try {
        // Try to load from build directory first (for production)
        const buildConfigPath = path.join(path.dirname(process.argv[1]), 'config', 'models.yaml');
        let fileContents: string;

        try {
            fileContents = await fs.readFile(buildConfigPath, 'utf8');
        } catch (error) {
            // If not found in build directory, try source directory (for development)
            const sourceConfigPath = path.join(process.cwd(), 'config', 'models.yaml');
            fileContents = await fs.readFile(sourceConfigPath, 'utf8');
        }

        const config = parse(fileContents);

        // Validate required configuration
        if (!config.openRouter?.apiKey) {
            throw new Error('Missing openRouter.apiKey in models.yaml configuration');
        }

        if (!config.models || Object.keys(config.models).length === 0) {
            throw new Error('No models configured in models.yaml configuration');
        }

        if (!config.defaultModel) {
            throw new Error('Missing defaultModel in models.yaml configuration');
        }

        // Set default persistence path if not specified
        if (!config.persistence?.path) {
            config.persistence = {
                path: defaultDataPath
            };
        }

        return {
            openRouter: {
                apiKey: config.openRouter.apiKey
            },
            models: config.models,
            defaultModel: config.defaultModel,
            persistence: {
                type: 'filesystem',
                path: config.persistence.path
            },
            resources: {
                maxSizeBytes: 10 * 1024 * 1024, // 10MB
                allowedTypes: ['.txt', '.md', '.json', '.csv', '.cs', '.ts', '.js', '.jsx', '.tsx', '.pdf'],
                chunkSize: 1024 // 1KB chunks
            }
        };
    } catch (error) {
        if (error instanceof Error) {
            throw new Error(`Failed to load models configuration: ${error.message}`);
        }
        throw new Error('Failed to load models configuration. Make sure models.yaml exists in the config directory.');
    }
}

// Initialize and start the server
async function startServer() {
    try {
        console.error('Starting MCP Conversation Server...');

        // Load and validate the complete configuration from YAML
        const config = await loadModelsConfig();

        console.error('Using data directory:', config.persistence.path);

        // Initialize OpenRouter provider with loaded config
        openRouterProvider = new OpenRouterProvider({
            apiKey: config.openRouter.apiKey,
            models: config.models,
            defaultModel: config.defaultModel,
            timeouts: {
                completion: 30000,
                stream: 60000
            }
        });

        // Create data directory if it doesn't exist
        await fs.mkdir(config.persistence.path, { recursive: true });

        // Validate OpenRouter connection using the provider
        await openRouterProvider.validateConfig();

        // Set up tools after provider is initialized
        setupTools();

        console.error('Successfully connected to OpenRouter');
        console.error('Available models:', Object.keys(config.models).join(', '));
        console.error('Default model:', config.defaultModel);

        // Set up server transport
        const transport = new StdioServerTransport();
        await server.connect(transport);

        console.error('Server connected and ready');
    } catch (error) {
        console.error('Failed to start server:', error);
        process.exit(1);
    }
}

// Setup server tools
function setupTools() {
    // Add create-conversation tool
    server.tool(
        'create-conversation',
        `Creates a new conversation with a specified model.`,
        {
            model: z.string().describe('The model ID to use for the conversation'),
            title: z.string().optional().describe('Optional title for the conversation')
        },
        async (args: { model: string; title?: string }, _extra: any) => {
            const { model, title } = args;
            const now = new Date().toISOString();
            const conversation: Conversation = {
                id: crypto.randomUUID(),
                model,
                title: title || `Conversation ${now}`,
                messages: [],
                created: now,
                updated: now
            };

            try {
                const conversationPath = path.join(config.persistence.path, `${conversation.id}.json`);
                await fs.writeFile(conversationPath, JSON.stringify(conversation, null, 2));
                return {
                    content: [{
                        type: 'text',
                        text: JSON.stringify(conversation, null, 2)
                    }]
                };
            } catch (error) {
                const message = error instanceof Error ? error.message : 'Unknown error';
                throw new FileSystemError(`Failed to save conversation: ${message}`);
            }
        }
    );

    // Add send-message tool
    server.tool(
        'send-message',
        `Sends a message to an existing conversation and receives a response.`,
        {
            conversationId: z.string(),
            content: z.string(),
            stream: z.boolean().optional()
        },
        async (args: { conversationId: string; content: string; stream?: boolean }, _extra: any) => {
            const { conversationId, content, stream = false } = args;

            try {
                const conversationPath = path.join(config.persistence.path, `${conversationId}.json`);
                const conversation: Conversation = JSON.parse(await fs.readFile(conversationPath, 'utf8'));

                const userMessage: Message = {
                    role: 'user',
                    content,
                    timestamp: new Date().toISOString()
                };
                conversation.messages.push(userMessage);
                conversation.updated = new Date().toISOString();

                try {
                    if (stream) {
                        // streamCompletion returns an async iterator; consume it so the
                        // full assistant reply can be persisted with the conversation.
                        let assistantContent = '';
                        for await (const chunk of openRouterProvider.streamCompletion({
                            model: conversation.model,
                            messages: conversation.messages,
                            stream: true
                        })) {
                            assistantContent += chunk.content;
                        }

                        const assistantMessage: Message = {
                            role: 'assistant',
                            content: assistantContent,
                            timestamp: new Date().toISOString()
                        };
                        conversation.messages.push(assistantMessage);
                        conversation.updated = new Date().toISOString();

                        await fs.writeFile(conversationPath, JSON.stringify(conversation, null, 2));

                        return {
                            content: [{
                                type: 'text',
                                text: JSON.stringify(assistantMessage, null, 2)
                            }]
                        };
                    } else {
                        const response = await openRouterProvider.createCompletion({
                            model: conversation.model,
                            messages: conversation.messages,
                            stream: false
                        });

                        const assistantMessage: Message = {
                            role: 'assistant',
                            content: response.content,
                            timestamp: new Date().toISOString()
                        };
                        conversation.messages.push(assistantMessage);
                        conversation.updated = new Date().toISOString();

                        await fs.writeFile(conversationPath, JSON.stringify(conversation, null, 2));

                        return {
                            content: [{
                                type: 'text',
                                text: JSON.stringify(assistantMessage, null, 2)
                            }]
                        };
                    }
                } catch (error) {
                    const message = error instanceof Error ? error.message : 'Unknown error';
                    throw new OpenRouterError(`OpenRouter request failed: ${message}`);
                }
            } catch (error) {
                if (error instanceof OpenRouterError) throw error;
                const message = error instanceof Error ? error.message : 'Unknown error';
                throw new FileSystemError(`Failed to handle message: ${message}`);
            }
        }
    );

    // Add list-models tool
    server.tool(
        'list-models',
        `Lists all available models with their configurations and capabilities.`,
        {},
        async (_args: {}, _extra: any) => {
            try {
                const models = await openRouterProvider.listAvailableModels();
                return {
                    content: [{
                        type: 'text',
                        text: JSON.stringify({
                            models,
                            defaultModel: openRouterProvider.getDefaultModel(),
                            totalModels: models.length
                        }, null, 2)
                    }]
                };
            } catch (error) {
                const message = error instanceof Error ? error.message : 'Unknown error';
                throw new Error(`Failed to list models: ${message}`);
            }
        }
    );
}

// Start the server
startServer();
```