# Directory Structure

```
├── .claude
│   └── settings.local.json
├── .gitignore
├── CLAUDE.md
├── LICENSE
├── package.json
├── prompts
│   ├── api_design.md
│   ├── chinese_text_summarizer.md
│   ├── code_review.md
│   ├── debugging_assistant.md
│   └── git_commit_push.md
├── README.md
├── src
│   ├── cache.ts
│   ├── fileOperations.ts
│   ├── index.ts
│   ├── tools.ts
│   └── types.ts
├── tests
│   ├── cache.test.ts
│   ├── fileOperations.test.ts
│   ├── helpers
│   │   ├── mocks.ts
│   │   └── testUtils.ts
│   ├── index.test.ts
│   ├── tools.test.ts
│   └── types.test.ts
├── tsconfig.json
└── vitest.config.ts
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Dependencies
node_modules/
package-lock.json
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Build output
dist/

# Test coverage
coverage/

# Runtime data
pids
*.pid
*.seed
*.pid.lock

# Environment files
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# IDE files
.vscode/
.idea/
*.swp
*.swo
*~

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Logs
logs
*.log

# Temporary files
tmp/
temp/

# Cache
.npm
.eslintcache
.cache/
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Prompts MCP Server

A Model Context Protocol (MCP) server for managing and providing prompts. This server allows users and LLMs to easily add, retrieve, and manage prompt templates stored as markdown files with YAML frontmatter support.

<a href="https://glama.ai/mcp/servers/@tanker327/prompts-mcp-server">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/@tanker327/prompts-mcp-server/badge" alt="Prompts Server MCP server" />
</a>

## Quick Start

```bash
# 1. Install from NPM
npm install -g prompts-mcp-server

# 2. Add to your MCP client config (e.g., Claude Desktop)
# Add this to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
  "mcpServers": {
    "prompts-mcp-server": {
      "command": "prompts-mcp-server"
    }
  }
}

# 3. Restart your MCP client and start using the tools!
```

## Features

- **Add Prompts**: Store new prompts as markdown files with YAML frontmatter
- **Retrieve Prompts**: Get specific prompts by name
- **List Prompts**: View all available prompts with metadata preview
- **Delete Prompts**: Remove prompts from the collection
- **File-based Storage**: Prompts are stored as markdown files in the `prompts/` directory
- **Real-time Caching**: In-memory cache with automatic file change monitoring
- **YAML Frontmatter**: Support for structured metadata (title, description, tags, etc.)
- **TypeScript**: Full TypeScript implementation with comprehensive type definitions
- **Modular Architecture**: Clean separation of concerns with dependency injection
- **Comprehensive Testing**: 95 tests with 84.53% code coverage

## Installation

### Option 1: From NPM (Recommended)

Install the package globally from NPM:
```bash
npm install -g prompts-mcp-server
```
This makes the `prompts-mcp-server` command available on your PATH.

After installation, you need to configure your MCP client to use it. See [MCP Client Configuration](#mcp-client-configuration).

### Option 2: From GitHub (for development)

```bash
# Clone the repository
git clone https://github.com/tanker327/prompts-mcp-server.git
cd prompts-mcp-server

# Install dependencies
npm install

# Build the TypeScript code
npm run build

# Test the installation
npm test
```

### Option 3: Direct Download

1. Download the latest release from GitHub
2. Extract it to your desired location
3. Follow the installation steps from Option 2

### Verification

After installation, verify the server works:

```bash
# Start the server (should show no errors)
npm start

# Or test with MCP Inspector
npx @modelcontextprotocol/inspector prompts-mcp-server
```

## Testing

Run the comprehensive test suite:
```bash
npm test
```

Run tests with coverage:
```bash
npm run test:coverage
```

Watch mode for development:
```bash
npm run test:watch
```

## MCP Tools

The server provides the following tools:

### `add_prompt`
Add a new prompt to the collection. If no YAML frontmatter is provided, default metadata is added automatically.
- **name** (string): Name of the prompt
- **filename** (string): Filename for the prompt file (without the `.md` extension)
- **content** (string): Content of the prompt in markdown format, with optional YAML frontmatter

### `create_structured_prompt`
Create a new prompt with guided metadata structure and validation.
- **name** (string): Name of the prompt
- **title** (string): Human-readable title for the prompt
- **description** (string): Brief description of what the prompt does
- **category** (string, optional): Category (defaults to "general")
- **tags** (array, optional): Array of tags for categorization (defaults to ["general"])
- **difficulty** (string, optional): "beginner", "intermediate", or "advanced" (defaults to "beginner")
- **author** (string, optional): Author of the prompt (defaults to "User")
- **content** (string): The actual prompt content (markdown)

### `get_prompt` 
Retrieve a prompt by name.
- **name** (string): Name of the prompt to retrieve

### `list_prompts`
List all available prompts with metadata preview. No parameters required.

### `delete_prompt`
Delete a prompt by name.
- **name** (string): Name of the prompt to delete

## Usage Examples

Once connected to an MCP client, you can use the tools like this:

### Method 1: Quick prompt creation with automatic metadata
```javascript
// Add a prompt without frontmatter - metadata will be added automatically
add_prompt({
  name: "debug_helper",
  filename: "debug_helper",
  content: `# Debug Helper

Help me debug this issue by:
1. Analyzing the error message
2. Suggesting potential causes
3. Recommending debugging steps`
})
// This automatically adds default frontmatter with title "Debug Helper", category "general", etc.
```

### Method 2: Structured prompt creation with full metadata control
```javascript
// Create a prompt with explicit metadata using the structured tool
create_structured_prompt({
  name: "code_review",
  title: "Code Review Assistant",
  description: "Helps review code for best practices and potential issues",
  category: "development",
  tags: ["code", "review", "quality"],
  difficulty: "intermediate",
  author: "Development Team",
  content: `# Code Review Prompt

Please review the following code for:
- Code quality and best practices
- Potential bugs or issues
- Performance considerations
- Security vulnerabilities

## Code to Review
[Insert code here]`
})
```

### Method 3: Manual frontmatter (preserves existing metadata)
```javascript
// Add a prompt with existing frontmatter - no changes made
add_prompt({
  name: "custom_prompt",
  filename: "custom_prompt",
  content: `---
title: "Custom Assistant"
category: "specialized"
tags: ["custom", "specific"]
difficulty: "advanced"
---

# Custom Prompt Content
Your specific prompt here...`
})
```

### Other operations
```javascript
// Get a prompt
get_prompt({ name: "code_review" })

// List all prompts (shows metadata preview)
list_prompts({})

// Delete a prompt
delete_prompt({ name: "old_prompt" })
```

## File Structure

```
prompts-mcp-server/
├── src/
│   ├── index.ts          # Main server orchestration
│   ├── types.ts          # TypeScript type definitions
│   ├── cache.ts          # Caching system with file watching
│   ├── fileOperations.ts # File I/O operations
│   └── tools.ts          # MCP tool definitions and handlers
├── tests/
│   ├── helpers/
│   │   ├── testUtils.ts  # Test utilities
│   │   └── mocks.ts      # Mock implementations
│   ├── cache.test.ts     # Cache module tests
│   ├── fileOperations.test.ts # File operations tests
│   ├── tools.test.ts     # Tools module tests
│   ├── types.test.ts     # Type definition tests
│   └── index.test.ts     # Integration tests
├── prompts/              # Directory for storing prompt markdown files
│   ├── api_design.md
│   ├── chinese_text_summarizer.md
│   ├── code_review.md
│   ├── debugging_assistant.md
│   └── git_commit_push.md
├── dist/                 # Compiled JavaScript output
├── CLAUDE.md            # Development documentation
├── package.json
├── tsconfig.json
└── README.md
```

## Architecture

The server uses a modular architecture with the following components, wired together as sketched below:

- **PromptCache**: In-memory caching with real-time file change monitoring via chokidar
- **PromptFileOperations**: File I/O operations with cache integration
- **PromptTools**: MCP tool definitions and request handlers
- **Type System**: Comprehensive TypeScript types for all data structures
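
A condensed view of the wiring, taken from `src/index.ts` (dependencies are injected via constructors):

```typescript
// From src/index.ts (condensed)
const cache = new PromptCache(config.promptsDir);                   // metadata cache + file watcher
const fileOps = new PromptFileOperations(config.promptsDir, cache); // file I/O, cache-aware
const tools = new PromptTools(fileOps);                             // MCP tool definitions/handlers
```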

## YAML Frontmatter Support

Prompts can include structured metadata using YAML frontmatter:

```yaml
---
title: "Prompt Title"
description: "Brief description of the prompt"
category: "development"
tags: ["tag1", "tag2", "tag3"]
difficulty: "beginner" | "intermediate" | "advanced"
author: "Author Name"
version: "1.0"
---

# Prompt Content

Your prompt content goes here...
```
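
The server parses frontmatter with [gray-matter](https://github.com/jonschlinkert/gray-matter), as in `src/cache.ts`. A minimal sketch of the parse step:

```typescript
import matter from 'gray-matter';

const fileContent = `---
title: "Prompt Title"
tags: ["tag1", "tag2"]
---

# Prompt Content`;

const parsed = matter(fileContent);
console.log(parsed.data);    // { title: 'Prompt Title', tags: [ 'tag1', 'tag2' ] }
console.log(parsed.content); // the markdown body without the frontmatter
```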

## MCP Client Configuration

This server can be configured with various MCP-compatible applications. Here are setup instructions for popular clients:

### Claude Desktop

Add this to your Claude Desktop configuration file:

**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "prompts-mcp-server": {
      "command": "prompts-mcp-server",
      "env": {
        "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
      }
    }
  }
}
```

### Cline (VS Code Extension)

Add to your Cline MCP settings in VS Code:

```json
{
  "cline.mcp.servers": {
    "prompts-mcp-server": {
      "command": "prompts-mcp-server",
      "env": {
        "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
      }
    }
  }
}
```

### Continue.dev

In your `~/.continue/config.json`:

```json
{
  "mcpServers": [
    {
      "name": "prompts-mcp-server",
      "command": "prompts-mcp-server",
      "env": {
        "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
      }
    }
  ]
}
```

### Zed Editor

In your Zed settings (`~/.config/zed/settings.json`):

```json
{
  "assistant": {
    "mcp_servers": {
      "prompts-mcp-server": {
        "command": "prompts-mcp-server",
        "env": {
          "PROMPTS_DIR": "/path/to/your/prompts/directory"
        }
      }
    }
  }
}
```

### Custom MCP Client

For any MCP-compatible application, use these connection details (a minimal client sketch follows the list):

- **Protocol**: Model Context Protocol (MCP)
- **Transport**: stdio
- **Command**: `prompts-mcp-server`
- **Environment Variables**:
  - `PROMPTS_FOLDER_PATH`: Custom directory for storing prompts (optional; defaults to the `prompts/` directory alongside the server installation)
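
As an illustration, a minimal programmatic client using the MCP TypeScript SDK might look like this sketch. The client name and error handling are placeholders; check the SDK version you use for the exact client API.

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn the server over stdio and connect a client to it.
const transport = new StdioClientTransport({ command: 'prompts-mcp-server' });
const client = new Client({ name: 'example-client', version: '1.0.0' });

await client.connect(transport);

// Call one of the server's tools.
const result = await client.callTool({
  name: 'get_prompt',
  arguments: { name: 'code_review' },
});
console.log(result.content);
```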

### Development/Testing Setup

For development or testing with the MCP Inspector:

```bash
# Install MCP Inspector
npm install -g @modelcontextprotocol/inspector

# Run the server with inspector
npx @modelcontextprotocol/inspector prompts-mcp-server
```

### Docker Configuration

Create a `docker-compose.yml` for containerized deployment:

```yaml
version: '3.8'
services:
  prompts-mcp-server:
    build: .
    environment:
      - PROMPTS_FOLDER_PATH=/app/prompts
    volumes:
      - ./prompts:/app/prompts
    stdin_open: true
    tty: true
```
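
The repository does not ship a Dockerfile, so `build: .` assumes you supply one. A minimal sketch (base image and layout are assumptions, not part of the project):

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json tsconfig.json ./
COPY src ./src
RUN npm install && npm run build
CMD ["node", "dist/index.js"]
```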

## Server Configuration

- The server automatically creates the `prompts/` directory if it doesn't exist
- Prompt names are sanitized into filesystem-safe filenames (alphanumeric characters, hyphens, and underscores only; see the example below)
- File changes are monitored in real-time and cache is updated automatically
- Prompts directory can be customized via the `PROMPTS_FOLDER_PATH` environment variable
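
The sanitization mirrors `sanitizeFileName()` in `src/fileOperations.ts`: any character outside `a-z`, `0-9`, `-`, `_` becomes an underscore, and the result is lowercased:

```typescript
// Mirrors sanitizeFileName() in src/fileOperations.ts
const sanitizeFileName = (name: string): string =>
  name.replace(/[^a-z0-9-_]/gi, '_').toLowerCase();

sanitizeFileName('Code Review!'); // "code_review_" -> stored as code_review_.md
sanitizeFileName('api_design');   // "api_design"   -> stored as api_design.md
```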

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `PROMPTS_FOLDER_PATH` | Custom directory to store prompt files (overrides default) | (not set) |
| `NODE_ENV` | Environment mode | `production` |

> **Note**: If `PROMPTS_FOLDER_PATH` is set, it will be used as the prompts directory. If not set, the server defaults to `./prompts` relative to the server location.
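
For example, to point an installed server at a shared prompts directory:

```bash
PROMPTS_FOLDER_PATH=/data/prompts prompts-mcp-server
```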

## Requirements

- Node.js 18.0.0 or higher
- TypeScript 5.0.0 or higher
- Dependencies:
  - @modelcontextprotocol/sdk ^1.0.0
  - gray-matter ^4.0.3 (YAML frontmatter parsing)
  - chokidar ^3.5.3 (file watching)

## Development

The project includes comprehensive tooling for development:

- **TypeScript**: Strict type checking and modern ES modules
- **Vitest**: Fast testing framework with 95 tests and 84.53% coverage
- **ESLint**: Code linting (if configured)
- **File Watching**: Real-time cache updates during development

## Troubleshooting

### Common Issues

#### "Module not found" errors
```bash
# Ensure TypeScript is built
npm run build

# Check that dist/ directory exists and contains .js files
ls dist/
```

#### MCP client can't connect
1. Verify the server starts without errors: `npm start`
2. Check the correct path is used in client configuration
3. Ensure Node.js 18+ is installed: `node --version`
4. Test with MCP Inspector: `npx @modelcontextprotocol/inspector prompts-mcp-server`

#### Permission errors with prompts directory
```bash
# Ensure the prompts directory is writable
mkdir -p ./prompts
chmod 755 ./prompts
```

#### File watching not working
- On Linux: you may need to raise the inotify watch limit (e.g. increase `fs.inotify.max_user_watches` via sysctl)
- On macOS: no additional setup needed
- On Windows: works with native Node.js; under WSL, watch files from within the WSL filesystem

### Debug Mode

The server logs to stderr via `console.error()`, so its diagnostic output appears in your MCP client's logs without interfering with the stdio transport. For verbose output from dependencies that use the `debug` package:

```bash
# Enable debug output from all namespaces
DEBUG=* node dist/index.js
```

### Getting Help

1. Check the [GitHub Issues](https://github.com/tanker327/prompts-mcp-server/issues)
2. Review the test files for usage examples
3. Use MCP Inspector for debugging client connections
4. Check your MCP client's documentation for configuration details

### Performance Tips

- The server uses in-memory caching for fast prompt retrieval
- File watching automatically updates the cache when files change
- Large prompt collections (1000+ files) work efficiently due to caching
- Consider using SSD storage for better file I/O performance


## Community Variants & Extensions

| Project | Maintainer | Extra Features |
|---------|-----------|----------------|
| [smart-prompts-mcp](https://github.com/jezweb/smart-prompts-mcp) | [@jezweb](https://github.com/jezweb) | GitHub-hosted prompt libraries, advanced search & composition, richer TypeScript types, etc. |

👉 Have you built something cool on top of **prompts-mcp-server**?  
Open an issue or PR to add it here so others can discover your variant!

## License

MIT
```

--------------------------------------------------------------------------------
/CLAUDE.md:
--------------------------------------------------------------------------------

```markdown
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Development Commands

- `npm install` - Install dependencies
- `npm run build` - Compile TypeScript to JavaScript
- `npm start` - Build and start the MCP server 
- `npm run dev` - Start server with auto-reload for development (uses tsx)
- `npm test` - Run all tests (exits automatically)
- `npm run test:watch` - Run tests in watch mode (continuous)
- `npm run test:coverage` - Run tests with coverage report (exits automatically)

## Architecture Overview

This is an MCP (Model Context Protocol) server written in TypeScript that manages prompt templates stored as markdown files with YAML frontmatter metadata.

### Core Components

**Main Server (`src/index.ts`)**
- Entry point that orchestrates all components
- Handles MCP server initialization and graceful shutdown
- Registers tool handlers and connects to stdio transport
- Minimal orchestration layer that delegates to specialized modules

**Type Definitions (`src/types.ts`)**
- Central location for all TypeScript interfaces
- `PromptMetadata`, `PromptInfo`, `ToolArguments`, `ServerConfig`
- Ensures type consistency across all modules

**Caching System (`src/cache.ts`)**
- `PromptCache` class manages in-memory prompt metadata
- File watcher integration with chokidar for real-time updates
- Handles cache initialization, updates, and cleanup
- Provides methods for cache access and management

**File Operations (`src/fileOperations.ts`)**
- `PromptFileOperations` class handles all file I/O
- CRUD operations: create, read, update, delete prompts
- Filename sanitization and directory management
- Integrates with cache for optimal performance

**MCP Tools (`src/tools.ts`)**
- `PromptTools` class implements MCP tool definitions and handlers
- Handles all 5 MCP tools: `add_prompt`, `create_structured_prompt`, `get_prompt`, `list_prompts`, `delete_prompt`
- Tool validation, execution, and response formatting
- Clean separation between MCP protocol and business logic

### Module Dependencies

```
index.ts (main)
├── cache.ts (PromptCache)
├── fileOperations.ts (PromptFileOperations)
│   └── cache.ts (dependency)
├── tools.ts (PromptTools)
│   └── fileOperations.ts (dependency)
└── types.ts (shared interfaces)
```

### Data Flow

1. **Startup**: Main orchestrates cache initialization and file watcher setup
2. **File Changes**: PromptCache detects changes and updates cache automatically  
3. **MCP Requests**: PromptTools delegates to PromptFileOperations which uses cached data
4. **File Operations**: PromptFileOperations writes to filesystem, cache auto-updates via watcher

### Testing Strategy

Modular design enables easy unit testing (see the sketch after this list):
- Each class can be tested in isolation with dependency injection
- Cache operations can be tested without file I/O
- File operations can be tested with mock cache
- Tool handlers can be tested with mock file operations
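
A hedged sketch of such an isolated test, using the mocks from `tests/helpers/mocks.ts` (illustrative only; not copied from the actual suite, and it assumes `get_prompt` delegates to `readPrompt` as the architecture describes):

```typescript
// Illustrative sketch, not part of the real test suite.
import { describe, it, expect } from 'vitest';
import { PromptTools } from '../src/tools.js';
import { MockPromptFileOperations } from './helpers/mocks.js';

describe('PromptTools (isolated)', () => {
  it('delegates get_prompt to the injected file operations', async () => {
    const fileOps = new MockPromptFileOperations();
    fileOps.readPrompt.mockResolvedValue('# Hello');

    // The mock provides the PromptFileOperations surface PromptTools uses.
    const tools = new PromptTools(fileOps as any);
    await tools.handleToolCall({
      params: { name: 'get_prompt', arguments: { name: 'hello' } },
    } as any);

    expect(fileOps.readPrompt).toHaveBeenCalledWith('hello');
  });
});
```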

## Key Implementation Details

- **Modular Architecture**: Clean separation of concerns across 5 focused modules
- **TypeScript**: Full type safety with centralized type definitions
- **Build Process**: TypeScript compiles to `dist/` directory, source in `src/`
- **Development**: Uses `tsx` for hot-reload during development
- **Dependency Injection**: Classes accept dependencies via constructor for testability
- **Graceful Shutdown**: Proper cleanup of file watchers and resources
- Server communicates via stdio (not HTTP)
- ES modules used throughout (`type: "module"` in package.json)
- Error handling returns MCP-compatible error responses with `isError: true`
- Console.error() used for logging (stderr) to avoid interfering with stdio transport

## Module Overview

- **`types.ts`**: Shared interfaces and type definitions
- **`cache.ts`**: In-memory caching with file watching (PromptCache class)
- **`fileOperations.ts`**: File I/O operations (PromptFileOperations class)  
- **`tools.ts`**: MCP tool definitions and handlers (PromptTools class)
- **`index.ts`**: Main orchestration and server setup

Each module is independently testable and has a single responsibility.

## Testing

The project includes comprehensive test coverage using Vitest:

### Test Structure
```
tests/
├── helpers/
│   ├── testUtils.ts    # Test utilities and helper functions
│   └── mocks.ts        # Mock implementations for testing
├── types.test.ts       # Type definition tests
├── cache.test.ts       # PromptCache class tests
├── fileOperations.test.ts  # PromptFileOperations class tests
├── tools.test.ts       # PromptTools class tests
└── index.test.ts       # Integration tests
```

### Test Coverage
- **Unit Tests**: Each class tested in isolation with dependency injection
- **Integration Tests**: End-to-end workflows and component interactions
- **Error Handling**: Comprehensive error scenarios and edge cases
- **File System**: Real file operations and mock scenarios
- **MCP Protocol**: Tool definitions and request/response handling

### Testing Approach
- **Mocking**: Uses Vitest mocking for external dependencies
- **Temporary Files**: Creates isolated temp directories for file system tests
- **Real Integration**: Tests actual file I/O, caching, and file watching
- **Error Scenarios**: Tests failure modes and error propagation
- **Type Safety**: Validates TypeScript interfaces and type constraints

### Test Results
- **95 tests** across all modules with **100% pass rate**
- **84.53% overall coverage** with critical modules at 98-100% coverage
- **Fast execution** with proper test isolation and cleanup

## Development Workflow

1. **Install dependencies**: `npm install`
2. **Run tests**: `npm test` (verifies everything works)
3. **Start development**: `npm run dev` (auto-reload)
4. **Build for production**: `npm run build`
5. **Run built server**: `npm start`

## File Structure

```
prompts-mcp/
├── src/                    # TypeScript source code
│   ├── types.ts           # Type definitions
│   ├── cache.ts           # Caching system
│   ├── fileOperations.ts  # File I/O operations
│   ├── tools.ts           # MCP tool handlers
│   └── index.ts           # Main server entry point
├── tests/                 # Comprehensive test suite
│   ├── helpers/           # Test utilities and mocks
│   └── *.test.ts          # Test files for each module
├── prompts/               # Prompt storage directory
├── dist/                  # Compiled JavaScript (after build)
├── package.json           # Dependencies and scripts
├── tsconfig.json          # TypeScript configuration
├── vitest.config.ts       # Test configuration
└── CLAUDE.md              # This documentation
```
```

--------------------------------------------------------------------------------
/vitest.config.ts:
--------------------------------------------------------------------------------

```typescript
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html'],
      exclude: [
        'node_modules/',
        'dist/',
        '**/*.d.ts',
        'coverage/',
        'vitest.config.ts'
      ]
    },
    include: ['tests/**/*.test.ts'],
    exclude: ['node_modules/', 'dist/']
  },
});
```

--------------------------------------------------------------------------------
/.claude/settings.local.json:
--------------------------------------------------------------------------------

```json
{
  "permissions": {
    "allow": [
      "Bash(mkdir:*)",
      "Bash(git init:*)",
      "Bash(git add:*)",
      "Bash(rm:*)",
      "Bash(npm install)",
      "Bash(npm test)",
      "Bash(rg:*)",
      "Bash(npm test:*)",
      "Bash(npm run test:coverage:*)",
      "Bash(gh auth:*)",
      "Bash(gh repo create:*)",
      "Bash(git remote add:*)",
      "Bash(git branch:*)",
      "Bash(git push:*)",
      "Bash(git commit:*)",
      "Bash(npm run build:*)",
      "Bash(ls:*)",
      "Bash(git rm:*)",
      "Bash(black:*)"
    ],
    "deny": []
  }
}
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "node",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true,
    "resolveJsonModule": true,
    "allowSyntheticDefaultImports": true,
    "isolatedModules": true,
    "noEmitOnError": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedIndexedAccess": true
  },
  "include": [
    "src/**/*"
  ],
  "exclude": [
    "node_modules",
    "dist"
  ]
}
```

--------------------------------------------------------------------------------
/src/types.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Type definitions for the prompts MCP server
 */

export interface PromptMetadata {
  title?: string;
  description?: string;
  category?: string;
  tags?: string[];
  difficulty?: 'beginner' | 'intermediate' | 'advanced';
  author?: string;
  version?: string;
  [key: string]: unknown;
}

export interface PromptInfo {
  name: string;
  metadata: PromptMetadata;
  preview: string;
}

export interface ToolArguments {
  name?: string;
  filename?: string;
  content?: string;
  // Fields for create_structured_prompt
  title?: string;
  description?: string;
  category?: string;
  tags?: string[];
  difficulty?: 'beginner' | 'intermediate' | 'advanced';
  author?: string;
}

export interface ServerConfig {
  name: string;
  version: string;
  promptsDir: string;
  prompts_folder_path?: string;
}
```

--------------------------------------------------------------------------------
/prompts/code_review.md:
--------------------------------------------------------------------------------

```markdown
---
title: "Code Review Assistant"
description: "Comprehensive code review with focus on quality, performance, and security"
category: "development"
tags: ["code-review", "quality", "security", "performance"]
difficulty: "intermediate"
author: "System"
version: "1.0"
---

# Code Review Prompt

You are an experienced software engineer performing a code review. Please review the following code and provide feedback on:

1. **Code Quality**: Look for best practices, readability, and maintainability
2. **Performance**: Identify potential performance issues or optimizations
3. **Security**: Check for security vulnerabilities or concerns
4. **Testing**: Assess test coverage and suggest additional test cases
5. **Documentation**: Evaluate code comments and documentation

Please be constructive in your feedback and suggest specific improvements where applicable.
```

--------------------------------------------------------------------------------
/prompts/debugging_assistant.md:
--------------------------------------------------------------------------------

```markdown
---
title: "Debugging Assistant"
description: "Systematic debugging approach for identifying and resolving code issues"
category: "development"
tags: ["debugging", "troubleshooting", "problem-solving"]
difficulty: "beginner"
author: "System"
version: "1.0"
---

# Debugging Assistant Prompt

You are a debugging expert helping to identify and resolve issues in code. When analyzing problems:

1. **Understand the Problem**: Ask clarifying questions about the expected vs actual behavior
2. **Analyze the Code**: Look for common issues like:
   - Logic errors
   - Type mismatches
   - Null/undefined references
   - Race conditions
   - Memory leaks
3. **Systematic Approach**: 
   - Check inputs and outputs
   - Verify assumptions
   - Test edge cases
   - Use debugging tools effectively
4. **Provide Solutions**: Offer step-by-step debugging strategies and potential fixes

Focus on teaching debugging techniques while solving the immediate problem.
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
    "name": "prompts-mcp-server",
    "version": "1.2.0",
    "description": "MCP server for managing and providing prompts",
    "main": "dist/index.js",
    "type": "module",
    "bin": {
        "prompts-mcp-server": "dist/index.js"
    },
    "files": [
        "dist"
    ],
    "scripts": {
        "build": "tsc",
        "start": "npm run build && node dist/index.js",
        "dev": "tsx --watch src/index.ts",
        "test": "vitest run",
        "test:watch": "vitest",
        "test:coverage": "vitest run --coverage"
    },
    "keywords": [
        "mcp",
        "prompts",
        "ai",
        "llm"
    ],
    "author": "Eric Wu",
    "license": "MIT",
    "dependencies": {
        "@modelcontextprotocol/sdk": "^1.0.0",
        "gray-matter": "^4.0.3",
        "chokidar": "^3.5.3"
    },
    "devDependencies": {
        "@types/node": "^20.0.0",
        "@vitest/coverage-v8": "^1.0.0",
        "typescript": "^5.0.0",
        "tsx": "^4.0.0",
        "vitest": "^1.0.0"
    },
    "engines": {
        "node": ">=18.0.0"
    }
}

```

--------------------------------------------------------------------------------
/prompts/api_design.md:
--------------------------------------------------------------------------------

```markdown
---
title: "API Design Expert"
description: "RESTful API design with best practices and conventions"
category: "architecture"
tags: ["api", "rest", "design", "backend"]
difficulty: "advanced"
author: "System"
version: "1.0"
---

# API Design Prompt

You are an API design expert. Help design RESTful APIs that are:

## Design Principles
1. **RESTful**: Follow REST conventions and HTTP methods properly
2. **Consistent**: Use consistent naming, structure, and patterns
3. **Intuitive**: Make endpoints discoverable and self-explanatory
4. **Versioned**: Include versioning strategy for future changes
5. **Secure**: Implement proper authentication and authorization

## Key Considerations
- Resource naming (nouns, not verbs)
- HTTP status codes usage
- Error handling and responses
- Pagination for large datasets
- Rate limiting and throttling
- Documentation and examples

## Response Format
- Clear data structures
- Meaningful error messages
- Consistent field naming (camelCase or snake_case)
- Include metadata when appropriate

Provide specific recommendations with examples for the given use case.
```

--------------------------------------------------------------------------------
/tests/helpers/mocks.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Mock implementations for testing
 */

import { vi } from 'vitest';
import { PromptInfo } from '../../src/types.js';

/**
 * Mock PromptCache class
 */
export class MockPromptCache {
  private cache = new Map<string, PromptInfo>();
  
  getAllPrompts = vi.fn(() => Array.from(this.cache.values()));
  getPrompt = vi.fn((name: string) => this.cache.get(name));
  isEmpty = vi.fn(() => this.cache.size === 0);
  size = vi.fn(() => this.cache.size);
  initializeCache = vi.fn();
  initializeFileWatcher = vi.fn();
  cleanup = vi.fn();

  // Helper methods for testing
  _setPrompt(name: string, prompt: PromptInfo) {
    this.cache.set(name, prompt);
  }

  _clear() {
    this.cache.clear();
  }
}

/**
 * Mock PromptFileOperations class
 */
export class MockPromptFileOperations {
  listPrompts = vi.fn();
  readPrompt = vi.fn();
  savePrompt = vi.fn();
  savePromptWithFilename = vi.fn();
  deletePrompt = vi.fn();
  promptExists = vi.fn();
  getPromptInfo = vi.fn();
}

/**
 * Mock chokidar watcher
 */
export function createMockWatcher() {
  const watcher = {
    on: vi.fn().mockReturnThis(),
    close: vi.fn().mockResolvedValue(undefined),
    add: vi.fn().mockReturnThis(),
    unwatch: vi.fn().mockReturnThis()
  };

  return watcher;
}

/**
 * Mock process for testing
 */
export function createMockProcess() {
  return {
    on: vi.fn(),
    exit: vi.fn(),
    cwd: vi.fn(() => '/test/cwd')
  };
}
```

--------------------------------------------------------------------------------
/prompts/git_commit_push.md:
--------------------------------------------------------------------------------

```markdown
---
title: "Git Commit and Push Assistant"
description: "Automatically stage, commit, and push all changes to the remote Git repository"
category: "development"
tags: ["git","version-control","commit","push","automation"]
difficulty: "beginner"
author: "User"
version: "1.0"
created: "2025-06-10"
---

# Git Commit and Push Assistant

You are a Git automation assistant. Your task is to:

1. **Stage all changes** - Add all modified, new, and deleted files to the staging area
2. **Create a meaningful commit** - Generate an appropriate commit message based on the changes
3. **Push to remote** - Push the committed changes to the remote repository

## Process:

1. First, check the current Git status to see what files have been modified
2. Stage all changes using `git add .` or `git add -A`
3. Create a commit with a descriptive message that summarizes the changes
4. Push the changes to the remote repository (typically `git push origin main` or the current branch)

## Commit Message Guidelines:

- Use present tense ("Add feature" not "Added feature")
- Be concise but descriptive
- If there are multiple types of changes, use a general message like "Update project files" or "Various improvements"
- For specific changes, be more precise: "Fix authentication bug", "Add user profile component", "Update documentation"

## Commands to execute:

```bash
# Check status
git status

# Stage all changes
git add .

# Commit with message
git commit -m "[generated message based on changes]"

# Push to remote
git push
```

## Error Handling:

- If there are no changes to commit, inform the user
- If there are merge conflicts, guide the user to resolve them first
- If the remote repository requires authentication, provide guidance
- If pushing to a protected branch, suggest creating a pull request instead

Execute these Git operations safely and provide clear feedback about what was done.
```

--------------------------------------------------------------------------------
/tests/helpers/testUtils.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Test utilities and helper functions
 */

import fs from 'fs/promises';
import path from 'path';
import os from 'os';
import { vi } from 'vitest';
import { PromptInfo, PromptMetadata } from '../../src/types.js';

/**
 * Create a temporary directory for testing
 */
export async function createTempDir(): Promise<string> {
  const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'prompts-test-'));
  return tempDir;
}

/**
 * Clean up temporary directory
 */
export async function cleanupTempDir(dir: string): Promise<void> {
  try {
    await fs.rm(dir, { recursive: true, force: true });
  } catch (error) {
    // Ignore cleanup errors
  }
}

/**
 * Create a test prompt file
 */
export async function createTestPromptFile(
  dir: string,
  name: string,
  metadata: PromptMetadata = {},
  content: string = 'Test prompt content'
): Promise<string> {
  // Ensure directory exists
  await fs.mkdir(dir, { recursive: true });
  
  const frontmatter = Object.keys(metadata).length > 0
    ? `---\n${Object.entries(metadata).map(([key, value]) => `${key}: ${JSON.stringify(value)}`).join('\n')}\n---\n\n`
    : '';
  
  const fullContent = frontmatter + content;
  const fileName = `${name}.md`;
  const filePath = path.join(dir, fileName);
  
  await fs.writeFile(filePath, fullContent, 'utf-8');
  return filePath;
}

/**
 * Create sample prompt info for testing
 */
export function createSamplePromptInfo(overrides: Partial<PromptInfo> = {}): PromptInfo {
  return {
    name: 'test-prompt',
    metadata: {
      title: 'Test Prompt',
      description: 'A test prompt for testing',
      category: 'test',
      tags: ['test', 'sample'],
      difficulty: 'beginner',
      author: 'Test Author',
      version: '1.0'
    },
    preview: 'This is a test prompt content for testing purposes. It contains sample text to verify functionality...',
    ...overrides
  };
}

/**
 * Mock console.error to capture error logs in tests
 */
export function mockConsoleError() {
  return vi.spyOn(console, 'error').mockImplementation(() => {});
}

/**
 * Create a mock fs module for testing
 */
export function createMockFs() {
  return {
    readFile: vi.fn(),
    writeFile: vi.fn(),
    unlink: vi.fn(),
    access: vi.fn(),
    mkdir: vi.fn(),
    readdir: vi.fn()
  };
}

/**
 * Create mock MCP request objects
 */
export function createMockCallToolRequest(toolName: string, args: Record<string, unknown>) {
  return {
    params: {
      name: toolName,
      arguments: args
    }
  };
}

/**
 * Create expected MCP response format
 */
export function createExpectedResponse(text: string, isError = false) {
  return {
    content: [
      {
        type: 'text',
        text
      }
    ],
    ...(isError && { isError: true })
  };
}

/**
 * Wait for a specified amount of time (useful for file watcher tests)
 */
export function wait(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
#!/usr/bin/env node

/**
 * Main entry point for the prompts MCP server
 */

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import path from 'path';
import { fileURLToPath } from 'url';
import { PromptCache } from './cache.js';
import { PromptFileOperations } from './fileOperations.js';
import { PromptTools } from './tools.js';
import { ServerConfig } from './types.js';

// Server configuration
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

// Read configuration from environment or use defaults
const promptsFolderPath = process.env.PROMPTS_FOLDER_PATH;
const defaultPromptsDir = path.join(__dirname, '..', 'prompts');

const config: ServerConfig = {
  name: 'prompts-mcp-server',
  version: '1.0.0',
  promptsDir: promptsFolderPath || defaultPromptsDir,
  ...(promptsFolderPath && { prompts_folder_path: promptsFolderPath }),
};

// Initialize components
const cache = new PromptCache(config.promptsDir);
const fileOps = new PromptFileOperations(config.promptsDir, cache);
const tools = new PromptTools(fileOps);

// Create MCP server
const server = new Server(
  {
    name: config.name,
    version: config.version,
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Register tool handlers
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return tools.getToolDefinitions();
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  return await tools.handleToolCall(request);
});

/**
 * Main server startup function
 */
async function main(): Promise<void> {
  try {
    // Initialize cache and file watcher on startup
    await cache.initializeCache();
    cache.initializeFileWatcher();
    
    // Connect to stdio transport
    const transport = new StdioServerTransport();
    await server.connect(transport);
    
    console.error('Prompts MCP Server running on stdio');
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : 'Unknown error';
    console.error('Failed to start server:', errorMessage);
    process.exit(1);
  }
}

/**
 * Graceful shutdown handler
 */
async function shutdown(): Promise<void> {
  console.error('Shutting down server...');
  try {
    await cache.cleanup();
    console.error('Server shutdown complete');
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : 'Unknown error';
    console.error('Error during shutdown:', errorMessage);
  }
  process.exit(0);
}

// Handle shutdown signals
process.on('SIGINT', shutdown);
process.on('SIGTERM', shutdown);

// Start the server
main().catch((error: unknown) => {
  const errorMessage = error instanceof Error ? error.message : 'Unknown error';
  console.error('Server error:', errorMessage);
  process.exit(1);
});
```

--------------------------------------------------------------------------------
/src/fileOperations.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * File operations for prompt management (CRUD operations)
 */

import fs from 'fs/promises';
import path from 'path';
import { PromptInfo } from './types.js';
import { PromptCache } from './cache.js';

export class PromptFileOperations {
  constructor(
    private promptsDir: string,
    private cache: PromptCache
  ) {}

  /**
   * Sanitize filename to be filesystem-safe
   */
  private sanitizeFileName(name: string): string {
    return name.replace(/[^a-z0-9-_]/gi, '_').toLowerCase();
  }

  /**
   * Ensure prompts directory exists
   */
  private async ensurePromptsDir(): Promise<void> {
    try {
      await fs.access(this.promptsDir);
    } catch {
      await fs.mkdir(this.promptsDir, { recursive: true });
    }
  }

  /**
   * List all prompts (uses cache for performance)
   */
  async listPrompts(): Promise<PromptInfo[]> {
    // Initialize cache and file watcher if not already done
    if (this.cache.isEmpty()) {
      await this.cache.initializeCache();
      this.cache.initializeFileWatcher();
    }
    
    return this.cache.getAllPrompts();
  }

  /**
   * Read a specific prompt by name
   */
  async readPrompt(name: string): Promise<string> {
    const fileName = this.sanitizeFileName(name) + '.md';
    const filePath = path.join(this.promptsDir, fileName);
    try {
      return await fs.readFile(filePath, 'utf-8');
    } catch (error) {
      throw new Error(`Prompt "${name}" not found`);
    }
  }

  /**
   * Save a new prompt or update existing one
   */
  async savePrompt(name: string, content: string): Promise<string> {
    await this.ensurePromptsDir();
    const fileName = this.sanitizeFileName(name) + '.md';
    const filePath = path.join(this.promptsDir, fileName);
    await fs.writeFile(filePath, content, 'utf-8');
    return fileName;
  }

  /**
   * Save a new prompt with a custom filename
   */
  async savePromptWithFilename(filename: string, content: string): Promise<string> {
    await this.ensurePromptsDir();
    const sanitizedFileName = this.sanitizeFileName(filename) + '.md';
    const filePath = path.join(this.promptsDir, sanitizedFileName);
    await fs.writeFile(filePath, content, 'utf-8');
    return sanitizedFileName;
  }

  /**
   * Delete a prompt by name
   */
  async deletePrompt(name: string): Promise<boolean> {
    const fileName = this.sanitizeFileName(name) + '.md';
    const filePath = path.join(this.promptsDir, fileName);
    try {
      await fs.unlink(filePath);
      return true;
    } catch (error) {
      throw new Error(`Prompt "${name}" not found`);
    }
  }

  /**
   * Check if a prompt exists
   */
  async promptExists(name: string): Promise<boolean> {
    const fileName = this.sanitizeFileName(name) + '.md';
    const filePath = path.join(this.promptsDir, fileName);
    try {
      await fs.access(filePath);
      return true;
    } catch {
      return false;
    }
  }

  /**
   * Get prompt info from cache (if available)
   */
  getPromptInfo(name: string): PromptInfo | undefined {
    return this.cache.getPrompt(name);
  }
}
```

--------------------------------------------------------------------------------
/src/cache.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Caching and file watching functionality for prompt metadata
 */

import fs from 'fs/promises';
import path from 'path';
import chokidar, { FSWatcher } from 'chokidar';
import matter from 'gray-matter';
import { PromptInfo, PromptMetadata } from './types.js';

export class PromptCache {
  private cache = new Map<string, PromptInfo>();
  private watcher: FSWatcher | null = null;
  private isWatcherInitialized = false;

  constructor(private promptsDir: string) {}

  /**
   * Get all cached prompts
   */
  getAllPrompts(): PromptInfo[] {
    return Array.from(this.cache.values());
  }

  /**
   * Get a specific prompt from cache
   */
  getPrompt(name: string): PromptInfo | undefined {
    return this.cache.get(name);
  }

  /**
   * Check if cache is empty
   */
  isEmpty(): boolean {
    return this.cache.size === 0;
  }

  /**
   * Get cache size
   */
  size(): number {
    return this.cache.size;
  }

  /**
   * Load prompt metadata from a file
   */
  private async loadPromptMetadata(fileName: string): Promise<PromptInfo | null> {
    const filePath = path.join(this.promptsDir, fileName);
    try {
      const content = await fs.readFile(filePath, 'utf-8');
      const parsed = matter(content);
      const name = fileName.replace('.md', '');
      
      return {
        name,
        metadata: parsed.data as PromptMetadata,
        preview: parsed.content.substring(0, 100).replace(/\n/g, ' ').trim() + '...'
      };
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Unknown error';
      console.error(`Failed to load prompt metadata for ${fileName}:`, errorMessage);
      return null;
    }
  }

  /**
   * Update cache for a specific file
   */
  private async updateCacheForFile(fileName: string): Promise<void> {
    if (!fileName.endsWith('.md')) return;
    
    const metadata = await this.loadPromptMetadata(fileName);
    if (metadata) {
      this.cache.set(metadata.name, metadata);
    }
  }

  /**
   * Remove a file from cache
   */
  private async removeFromCache(fileName: string): Promise<void> {
    if (!fileName.endsWith('.md')) return;
    
    const name = fileName.replace('.md', '');
    this.cache.delete(name);
  }

  /**
   * Ensure prompts directory exists
   */
  private async ensurePromptsDir(): Promise<void> {
    try {
      await fs.access(this.promptsDir);
    } catch {
      await fs.mkdir(this.promptsDir, { recursive: true });
    }
  }

  /**
   * Initialize cache by loading all prompt files
   */
  async initializeCache(): Promise<void> {
    await this.ensurePromptsDir();
    
    try {
      const files = await fs.readdir(this.promptsDir);
      const mdFiles = files.filter(file => file.endsWith('.md'));
      
      // Clear existing cache
      this.cache.clear();
      
      // Load all prompt metadata
      await Promise.all(
        mdFiles.map(async (file) => {
          await this.updateCacheForFile(file);
        })
      );
      
      console.error(`Loaded ${this.cache.size} prompts into cache`);
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Unknown error';
      console.error('Failed to initialize cache:', errorMessage);
    }
  }

  /**
   * Initialize file watcher to monitor changes
   */
  initializeFileWatcher(): void {
    if (this.isWatcherInitialized) return;
    
    this.watcher = chokidar.watch(path.join(this.promptsDir, '*.md'), {
      ignored: /^\./, // ignore dotfiles
      persistent: true,
      ignoreInitial: true // don't fire events for initial scan
    });

    this.watcher
      .on('add', async (filePath: string) => {
        const fileName = path.basename(filePath);
        console.error(`Prompt added: ${fileName}`);
        await this.updateCacheForFile(fileName);
      })
      .on('change', async (filePath: string) => {
        const fileName = path.basename(filePath);
        console.error(`Prompt updated: ${fileName}`);
        await this.updateCacheForFile(fileName);
      })
      .on('unlink', async (filePath: string) => {
        const fileName = path.basename(filePath);
        console.error(`Prompt deleted: ${fileName}`);
        await this.removeFromCache(fileName);
      })
      .on('error', (error: Error) => {
        console.error('File watcher error:', error);
      });

    this.isWatcherInitialized = true;
    console.error('File watcher initialized for prompts directory');
  }

  /**
   * Stop file watcher and cleanup
   */
  async cleanup(): Promise<void> {
    if (this.watcher) {
      await this.watcher.close();
      this.watcher = null;
      this.isWatcherInitialized = false;
    }
  }
}
```

--------------------------------------------------------------------------------
/tests/types.test.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Tests for type definitions and interfaces
 */

import { describe, it, expect } from 'vitest';
import type { PromptMetadata, PromptInfo, ToolArguments, ServerConfig } from '../src/types.js';

describe('Types', () => {
  describe('PromptMetadata', () => {
    it('should allow all optional fields', () => {
      const metadata: PromptMetadata = {
        title: 'Test Title',
        description: 'Test Description',
        category: 'test',
        tags: ['tag1', 'tag2'],
        difficulty: 'beginner',
        author: 'Test Author',
        version: '1.0'
      };

      expect(metadata.title).toBe('Test Title');
      expect(metadata.description).toBe('Test Description');
      expect(metadata.category).toBe('test');
      expect(metadata.tags).toEqual(['tag1', 'tag2']);
      expect(metadata.difficulty).toBe('beginner');
      expect(metadata.author).toBe('Test Author');
      expect(metadata.version).toBe('1.0');
    });

    it('should allow empty metadata object', () => {
      const metadata: PromptMetadata = {};
      expect(Object.keys(metadata)).toHaveLength(0);
    });

    it('should allow custom fields with unknown type', () => {
      const metadata: PromptMetadata = {
        customField: 'custom value',
        customNumber: 42,
        customBoolean: true,
        customArray: [1, 2, 3],
        customObject: { nested: 'value' }
      };

      expect(metadata.customField).toBe('custom value');
      expect(metadata.customNumber).toBe(42);
      expect(metadata.customBoolean).toBe(true);
      expect(metadata.customArray).toEqual([1, 2, 3]);
      expect(metadata.customObject).toEqual({ nested: 'value' });
    });

    it('should enforce difficulty type constraints', () => {
      // These should compile without issues
      const beginner: PromptMetadata = { difficulty: 'beginner' };
      const intermediate: PromptMetadata = { difficulty: 'intermediate' };
      const advanced: PromptMetadata = { difficulty: 'advanced' };

      expect(beginner.difficulty).toBe('beginner');
      expect(intermediate.difficulty).toBe('intermediate');
      expect(advanced.difficulty).toBe('advanced');
    });
  });

  describe('PromptInfo', () => {
    it('should require all fields', () => {
      const promptInfo: PromptInfo = {
        name: 'test-prompt',
        metadata: {
          title: 'Test Prompt',
          description: 'A test prompt'
        },
        preview: 'This is a preview of the prompt content...'
      };

      expect(promptInfo.name).toBe('test-prompt');
      expect(promptInfo.metadata.title).toBe('Test Prompt');
      expect(promptInfo.metadata.description).toBe('A test prompt');
      expect(promptInfo.preview).toBe('This is a preview of the prompt content...');
    });

    it('should work with minimal metadata', () => {
      const promptInfo: PromptInfo = {
        name: 'minimal-prompt',
        metadata: {},
        preview: 'Minimal preview'
      };

      expect(promptInfo.name).toBe('minimal-prompt');
      expect(Object.keys(promptInfo.metadata)).toHaveLength(0);
      expect(promptInfo.preview).toBe('Minimal preview');
    });
  });

  describe('ToolArguments', () => {
    it('should require name field', () => {
      const args: ToolArguments = {
        name: 'test-prompt'
      };

      expect(args.name).toBe('test-prompt');
      expect(args.content).toBeUndefined();
    });

    it('should allow optional content field', () => {
      const args: ToolArguments = {
        name: 'test-prompt',
        content: 'Test content for the prompt'
      };

      expect(args.name).toBe('test-prompt');
      expect(args.content).toBe('Test content for the prompt');
    });
  });

  describe('ServerConfig', () => {
    it('should require all fields', () => {
      const config: ServerConfig = {
        name: 'test-server',
        version: '1.0.0',
        promptsDir: '/path/to/prompts'
      };

      expect(config.name).toBe('test-server');
      expect(config.version).toBe('1.0.0');
      expect(config.promptsDir).toBe('/path/to/prompts');
    });

    it('should allow optional prompts_folder_path', () => {
      const configWithCustomPath: ServerConfig = {
        name: 'test-server',
        version: '1.0.0',
        promptsDir: '/default/prompts',
        prompts_folder_path: '/custom/prompts'
      };

      expect(configWithCustomPath.prompts_folder_path).toBe('/custom/prompts');
    });

    it('should work without prompts_folder_path', () => {
      const configWithoutCustomPath: ServerConfig = {
        name: 'test-server',
        version: '1.0.0',
        promptsDir: '/default/prompts'
      };

      expect(configWithoutCustomPath.prompts_folder_path).toBeUndefined();
    });
  });

  describe('Type compatibility', () => {
    it('should work together in realistic scenarios', () => {
      const config: ServerConfig = {
        name: 'prompts-mcp-server',
        version: '1.0.0',
        promptsDir: '/app/prompts'
      };

      const metadata: PromptMetadata = {
        title: 'Code Review Assistant',
        description: 'Helps review code for quality and issues',
        category: 'development',
        tags: ['code-review', 'quality'],
        difficulty: 'intermediate',
        author: 'System',
        version: '1.0'
      };

      const promptInfo: PromptInfo = {
        name: 'code-review',
        metadata,
        preview: 'You are an experienced software engineer performing a code review...'
      };

      const toolArgs: ToolArguments = {
        name: promptInfo.name,
        content: '# Code Review Prompt\n\nYou are an experienced software engineer...'
      };

      expect(config.name).toBe('prompts-mcp-server');
      expect(promptInfo.metadata.difficulty).toBe('intermediate');
      expect(toolArgs.name).toBe('code-review');
      expect(toolArgs.content).toContain('Code Review Prompt');
    });
  });
});
```

--------------------------------------------------------------------------------
/src/tools.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * MCP tool definitions and handlers
 */

import {
  ListToolsResult,
  CallToolResult,
  CallToolRequest,
  TextContent,
} from '@modelcontextprotocol/sdk/types.js';
import { ToolArguments, PromptInfo } from './types.js';
import { PromptFileOperations } from './fileOperations.js';

export class PromptTools {
  constructor(private fileOps: PromptFileOperations) {}

  /**
   * Get MCP tool definitions
   */
  getToolDefinitions(): ListToolsResult {
    return {
      tools: [
        {
          name: 'add_prompt',
          description: 'Add a new prompt to the collection',
          inputSchema: {
            type: 'object',
            properties: {
              name: {
                type: 'string',
                description: 'Name of the prompt',
              },
              filename: {
                type: 'string',
                description: 'English filename for the prompt file (without .md extension)',
              },
              content: {
                type: 'string',
                description: 'Content of the prompt in markdown format',
              },
            },
            required: ['name', 'filename', 'content'],
          },
        },
        {
          name: 'get_prompt',
          description: 'Retrieve a prompt by name',
          inputSchema: {
            type: 'object',
            properties: {
              name: {
                type: 'string',
                description: 'Name of the prompt to retrieve',
              },
            },
            required: ['name'],
          },
        },
        {
          name: 'list_prompts',
          description: 'List all available prompts',
          inputSchema: {
            type: 'object',
            properties: {},
          },
        },
        {
          name: 'delete_prompt',
          description: 'Delete a prompt by name',
          inputSchema: {
            type: 'object',
            properties: {
              name: {
                type: 'string',
                description: 'Name of the prompt to delete',
              },
            },
            required: ['name'],
          },
        },
        {
          name: 'create_structured_prompt',
          description: 'Create a new prompt with guided metadata structure',
          inputSchema: {
            type: 'object',
            properties: {
              name: {
                type: 'string',
                description: 'Name of the prompt',
              },
              title: {
                type: 'string',
                description: 'Human-readable title for the prompt',
              },
              description: {
                type: 'string',
                description: 'Brief description of what the prompt does',
              },
              category: {
                type: 'string',
                description: 'Category (e.g., development, writing, analysis)',
              },
              tags: {
                type: 'array',
                items: { type: 'string' },
                description: 'Array of tags for categorization',
              },
              difficulty: {
                type: 'string',
                enum: ['beginner', 'intermediate', 'advanced'],
                description: 'Difficulty level of the prompt',
              },
              author: {
                type: 'string',
                description: 'Author of the prompt',
              },
              content: {
                type: 'string',
                description: 'The actual prompt content (markdown)',
              },
            },
            required: ['name', 'title', 'description', 'content'],
          },
        },
      ],
    };
  }

  /**
   * Handle MCP tool calls
   */
  async handleToolCall(request: CallToolRequest): Promise<CallToolResult> {
    const { name, arguments: args } = request.params;
    const toolArgs = (args || {}) as ToolArguments;

    try {
      switch (name) {
        case 'add_prompt':
          return await this.handleAddPrompt(toolArgs);
        case 'get_prompt':
          return await this.handleGetPrompt(toolArgs);
        case 'list_prompts':
          return await this.handleListPrompts();
        case 'delete_prompt':
          return await this.handleDeletePrompt(toolArgs);
        case 'create_structured_prompt':
          return await this.handleCreateStructuredPrompt(toolArgs);
        default:
          throw new Error(`Unknown tool: ${name}`);
      }
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Unknown error';
      return {
        content: [
          {
            type: 'text',
            text: `Error: ${errorMessage}`,
          } as TextContent,
        ],
        isError: true,
      };
    }
  }

  /**
   * Handle add_prompt tool
   */
  private async handleAddPrompt(args: ToolArguments): Promise<CallToolResult> {
    if (!args.name || !args.filename || !args.content) {
      throw new Error('Name, filename, and content are required for add_prompt');
    }
    
    // Validate and enhance content with metadata if needed
    const processedContent = this.ensureMetadata(args.content, args.name);
    
    const fileName = await this.fileOps.savePromptWithFilename(args.filename, processedContent);
    return {
      content: [
        {
          type: 'text',
          text: `Prompt "${args.name}" saved as ${fileName}`,
        } as TextContent,
      ],
    };
  }

  /**
   * Ensure content has proper YAML frontmatter metadata
   */
  private ensureMetadata(content: string, promptName: string): string {
    // Check if content already has frontmatter
    if (content.trim().startsWith('---')) {
      return content; // Already has frontmatter, keep as-is
    }

    // Add default frontmatter if missing
    const defaultMetadata = `---
title: "${promptName.replace(/-/g, ' ').replace(/\b\w/g, l => l.toUpperCase())}"
description: "A prompt for ${promptName.replace(/-/g, ' ')}"
category: "general"
tags: ["general"]
difficulty: "beginner"
author: "User"
version: "1.0"
created: "${new Date().toISOString().split('T')[0]}"
---

`;

    return defaultMetadata + content;
  }

  /**
   * Handle get_prompt tool
   */
  private async handleGetPrompt(args: ToolArguments): Promise<CallToolResult> {
    if (!args.name) {
      throw new Error('Name is required for get_prompt');
    }
    
    const content = await this.fileOps.readPrompt(args.name);
    return {
      content: [
        {
          type: 'text',
          text: content,
        } as TextContent,
      ],
    };
  }

  /**
   * Handle list_prompts tool
   */
  private async handleListPrompts(): Promise<CallToolResult> {
    const prompts = await this.fileOps.listPrompts();
    
    if (prompts.length === 0) {
      return {
        content: [
          {
            type: 'text',
            text: 'No prompts available',
          } as TextContent,
        ],
      };
    }

    const text = this.formatPromptsList(prompts);
    
    return {
      content: [
        {
          type: 'text',
          text,
        } as TextContent,
      ],
    };
  }

  /**
   * Handle delete_prompt tool
   */
  private async handleDeletePrompt(args: ToolArguments): Promise<CallToolResult> {
    if (!args.name) {
      throw new Error('Name is required for delete_prompt');
    }
    
    await this.fileOps.deletePrompt(args.name);
    return {
      content: [
        {
          type: 'text',
          text: `Prompt "${args.name}" deleted successfully`,
        } as TextContent,
      ],
    };
  }

  /**
   * Handle create_structured_prompt tool
   */
  private async handleCreateStructuredPrompt(args: ToolArguments): Promise<CallToolResult> {
    if (!args.name || !args.content || !args.title || !args.description) {
      throw new Error('Name, content, title, and description are required for create_structured_prompt');
    }

    // Build structured frontmatter with provided metadata
    const metadata = {
      title: args.title,
      description: args.description,
      category: args.category || 'general',
      tags: args.tags || ['general'],
      difficulty: args.difficulty || 'beginner',
      author: args.author || 'User',
      version: '1.0',
      created: new Date().toISOString().split('T')[0],
    };

    // Create YAML frontmatter
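    // Note: metadata values are interpolated verbatim; a title or description
    // containing a double quote would need escaping to remain valid YAML.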
    const frontmatter = `---
title: "${metadata.title}"
description: "${metadata.description}"
category: "${metadata.category}"
tags: ${JSON.stringify(metadata.tags)}
difficulty: "${metadata.difficulty}"
author: "${metadata.author}"
version: "${metadata.version}"
created: "${metadata.created}"
---

`;

    const fullContent = frontmatter + args.content;
    const fileName = await this.fileOps.savePrompt(args.name, fullContent);
    
    return {
      content: [
        {
          type: 'text',
          text: `Structured prompt "${args.name}" created successfully as ${fileName} with metadata:\n- Title: ${metadata.title}\n- Category: ${metadata.category}\n- Tags: ${metadata.tags.join(', ')}\n- Difficulty: ${metadata.difficulty}`,
        } as TextContent,
      ],
    };
  }

  /**
   * Format prompts list for display
   */
  private formatPromptsList(prompts: PromptInfo[]): string {
    const formatPrompt = (prompt: PromptInfo): string => {
      let output = `## ${prompt.name}\n`;
      
      if (Object.keys(prompt.metadata).length > 0) {
        output += '**Metadata:**\n';
        Object.entries(prompt.metadata).forEach(([key, value]) => {
          output += `- ${key}: ${value}\n`;
        });
        output += '\n';
      }
      
      output += `**Preview:** ${prompt.preview}\n`;
      return output;
    };

    return `# Available Prompts\n\n${prompts.map(formatPrompt).join('\n---\n\n')}`;
  }
}
```
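
`PromptTools` only defines and dispatches tools; it still has to be registered with an MCP `Server`. Below is a minimal wiring sketch modeled on what `src/index.ts` appears to do (the schema imports come from the MCP SDK; the fallback prompts path is illustrative):

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { PromptCache } from './cache.js';
import { PromptFileOperations } from './fileOperations.js';
import { PromptTools } from './tools.js';

const promptsDir = process.env.PROMPTS_FOLDER_PATH ?? './prompts'; // illustrative default
const cache = new PromptCache(promptsDir);
const fileOps = new PromptFileOperations(promptsDir, cache);
const tools = new PromptTools(fileOps);

const server = new Server(
  { name: 'prompts-mcp-server', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

// Delegate both MCP tool endpoints to PromptTools.
server.setRequestHandler(ListToolsRequestSchema, async () => tools.getToolDefinitions());
server.setRequestHandler(CallToolRequestSchema, async (request) => tools.handleToolCall(request));

await server.connect(new StdioServerTransport());
```

Registering `getToolDefinitions` and `handleToolCall` as the two request handlers is the whole integration surface; everything else is file I/O behind `PromptFileOperations`.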

--------------------------------------------------------------------------------
/tests/fileOperations.test.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Tests for PromptFileOperations class
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { PromptFileOperations } from '../src/fileOperations.js';
import { createTempDir, cleanupTempDir, createTestPromptFile, createSamplePromptInfo } from './helpers/testUtils.js';
import { MockPromptCache } from './helpers/mocks.js';

describe('PromptFileOperations', () => {
  let tempDir: string;
  let mockCache: MockPromptCache;
  let fileOps: PromptFileOperations;

  beforeEach(async () => {
    tempDir = await createTempDir();
    mockCache = new MockPromptCache();
    fileOps = new PromptFileOperations(tempDir, mockCache as any);
  });

  afterEach(async () => {
    await cleanupTempDir(tempDir);
  });

  describe('constructor', () => {
    it('should create instance with provided directory and cache', () => {
      expect(fileOps).toBeDefined();
      expect(fileOps).toBeInstanceOf(PromptFileOperations);
    });
  });

  describe('listPrompts', () => {
    it('should initialize cache when empty', async () => {
      mockCache.isEmpty.mockReturnValue(true);
      
      await fileOps.listPrompts();
      
      expect(mockCache.initializeCache).toHaveBeenCalled();
      expect(mockCache.initializeFileWatcher).toHaveBeenCalled();
      expect(mockCache.getAllPrompts).toHaveBeenCalled();
    });

    it('should use cached data when cache is not empty', async () => {
      const samplePrompts = [createSamplePromptInfo()];
      mockCache.isEmpty.mockReturnValue(false);
      mockCache.getAllPrompts.mockReturnValue(samplePrompts);
      
      const result = await fileOps.listPrompts();
      
      expect(mockCache.initializeCache).not.toHaveBeenCalled();
      expect(mockCache.initializeFileWatcher).not.toHaveBeenCalled();
      expect(mockCache.getAllPrompts).toHaveBeenCalled();
      expect(result).toEqual(samplePrompts);
    });

    it('should return empty array when no prompts exist', async () => {
      mockCache.isEmpty.mockReturnValue(true);
      mockCache.getAllPrompts.mockReturnValue([]);
      
      const result = await fileOps.listPrompts();
      
      expect(result).toEqual([]);
    });
  });

  describe('readPrompt', () => {
    it('should read existing prompt file', async () => {
      const content = '# Test Prompt\n\nThis is a test prompt.';
      await createTestPromptFile(tempDir, 'test-prompt', {}, content);
      
      const result = await fileOps.readPrompt('test-prompt');
      
      expect(result).toContain('This is a test prompt.');
    });

    it('should sanitize prompt name for file lookup', async () => {
      const content = 'Test content';
      // Create file with sanitized name (what the sanitization function would produce)
      await createTestPromptFile(tempDir, 'test_prompt_with_special_chars___', {}, content);
      
      // Test with unsanitized name
      const result = await fileOps.readPrompt('Test Prompt With Special Chars!@#');
      
      expect(result).toContain('Test content');
    });

    it('should throw error for non-existent prompt', async () => {
      await expect(fileOps.readPrompt('non-existent')).rejects.toThrow(
        'Prompt "non-existent" not found'
      );
    });

    it('should handle file read errors', async () => {
      // Try to read from a directory that doesn't exist
      const badFileOps = new PromptFileOperations('/non/existent/path', mockCache as any);
      
      await expect(badFileOps.readPrompt('any-prompt')).rejects.toThrow(
        'Prompt "any-prompt" not found'
      );
    });
  });

  describe('savePrompt', () => {
    it('should save prompt to file', async () => {
      const content = '# New Prompt\n\nThis is a new prompt.';
      
      const fileName = await fileOps.savePrompt('new-prompt', content);
      
      expect(fileName).toBe('new-prompt.md');
      
      // Verify file was created
      const savedContent = await fileOps.readPrompt('new-prompt');
      expect(savedContent).toBe(content);
    });

    it('should sanitize filename', async () => {
      const content = 'Test content';
      
      const fileName = await fileOps.savePrompt('Test Prompt With Special Chars!@#', content);
      
      expect(fileName).toBe('test_prompt_with_special_chars___.md');
      
      // Should be readable with sanitized name
      const savedContent = await fileOps.readPrompt('test_prompt_with_special_chars___');
      expect(savedContent).toBe(content);
    });

    it('should create prompts directory if it does not exist', async () => {
      const newDir = `${tempDir}/new-prompts-dir`;
      const newFileOps = new PromptFileOperations(newDir, mockCache as any);
      
      const fileName = await newFileOps.savePrompt('test', 'content');
      
      expect(fileName).toBe('test.md');
      
      // Should be able to read the file
      const content = await newFileOps.readPrompt('test');
      expect(content).toBe('content');
    });

    it('should overwrite existing files', async () => {
      const originalContent = 'Original content';
      const updatedContent = 'Updated content';
      
      await fileOps.savePrompt('test-prompt', originalContent);
      await fileOps.savePrompt('test-prompt', updatedContent);
      
      const result = await fileOps.readPrompt('test-prompt');
      expect(result).toBe(updatedContent);
    });
  });

  describe('deletePrompt', () => {
    it('should delete existing prompt file', async () => {
      await createTestPromptFile(tempDir, 'to-delete');
      
      const result = await fileOps.deletePrompt('to-delete');
      
      expect(result).toBe(true);
      
      // File should no longer exist
      await expect(fileOps.readPrompt('to-delete')).rejects.toThrow(
        'Prompt "to-delete" not found'
      );
    });

    it('should sanitize prompt name for deletion', async () => {
      await createTestPromptFile(tempDir, 'prompt_with_special_chars___');
      
      const result = await fileOps.deletePrompt('Prompt With Special Chars!@#');
      
      expect(result).toBe(true);
      
      // Should not be readable anymore
      await expect(fileOps.readPrompt('Prompt With Special Chars!@#')).rejects.toThrow();
    });

    it('should throw error when deleting non-existent prompt', async () => {
      await expect(fileOps.deletePrompt('non-existent')).rejects.toThrow(
        'Prompt "non-existent" not found'
      );
    });
  });

  describe('promptExists', () => {
    it('should return true for existing prompt', async () => {
      await createTestPromptFile(tempDir, 'existing-prompt');
      
      const exists = await fileOps.promptExists('existing-prompt');
      
      expect(exists).toBe(true);
    });

    it('should return false for non-existent prompt', async () => {
      const exists = await fileOps.promptExists('non-existent');
      
      expect(exists).toBe(false);
    });

    it('should sanitize prompt name for existence check', async () => {
      await createTestPromptFile(tempDir, 'prompt_with_special_chars___');
      
      const exists = await fileOps.promptExists('Prompt With Special Chars!@#');
      
      expect(exists).toBe(true);
    });
  });

  describe('getPromptInfo', () => {
    it('should delegate to cache', () => {
      const samplePrompt = createSamplePromptInfo();
      mockCache.getPrompt.mockReturnValue(samplePrompt);
      
      const result = fileOps.getPromptInfo('test-prompt');
      
      expect(mockCache.getPrompt).toHaveBeenCalledWith('test-prompt');
      expect(result).toBe(samplePrompt);
    });

    it('should return undefined when prompt not in cache', () => {
      mockCache.getPrompt.mockReturnValue(undefined);
      
      const result = fileOps.getPromptInfo('non-existent');
      
      expect(result).toBeUndefined();
    });
  });

  describe('filename sanitization', () => {
    it('should convert to lowercase', async () => {
      await fileOps.savePrompt('UPPERCASE', 'content');
      const content = await fileOps.readPrompt('UPPERCASE');
      expect(content).toBe('content');
    });

    it('should replace special characters with underscores', async () => {
      const testCases = [
        { input: 'hello world', expected: 'hello_world' },
        { input: 'hello@world', expected: 'hello_world' },
        { input: 'hello#world', expected: 'hello_world' },
        { input: 'UPPERCASE', expected: 'uppercase' }
      ];

      for (const testCase of testCases) {
        await fileOps.savePrompt(testCase.input, `content for ${testCase.input}`);
        const content = await fileOps.readPrompt(testCase.input);
        expect(content).toBe(`content for ${testCase.input}`);
      }
    });

    it('should preserve allowed characters', async () => {
      const allowedNames = [
        'simple-name',
        'name_with_underscores',
        'name123',
        'abc-def_ghi789'
      ];

      for (const name of allowedNames) {
        await fileOps.savePrompt(name, `content for ${name}`);
        const content = await fileOps.readPrompt(name);
        expect(content).toBe(`content for ${name}`);
      }
    });
  });

  describe('integration with cache', () => {
    it('should work correctly when cache is populated', async () => {
      const samplePrompts = [
        createSamplePromptInfo({ name: 'prompt1' }),
        createSamplePromptInfo({ name: 'prompt2' })
      ];
      
      mockCache.isEmpty.mockReturnValue(false);
      mockCache.getAllPrompts.mockReturnValue(samplePrompts);
      mockCache.getPrompt.mockImplementation(name => 
        samplePrompts.find(p => p.name === name)
      );
      
      const allPrompts = await fileOps.listPrompts();
      const specificPrompt = fileOps.getPromptInfo('prompt1');
      
      expect(allPrompts).toEqual(samplePrompts);
      expect(specificPrompt?.name).toBe('prompt1');
    });

    it('should handle cache initialization properly', async () => {
      // Create actual files
      await createTestPromptFile(tempDir, 'real-prompt1', { title: 'Real Prompt 1' });
      await createTestPromptFile(tempDir, 'real-prompt2', { title: 'Real Prompt 2' });
      
      // Use real cache for this test
      const realCache = new (await import('../src/cache.js')).PromptCache(tempDir);
      const realFileOps = new PromptFileOperations(tempDir, realCache);
      
      const prompts = await realFileOps.listPrompts();
      
      expect(prompts).toHaveLength(2);
      expect(prompts.some(p => p.name === 'real-prompt1')).toBe(true);
      expect(prompts.some(p => p.name === 'real-prompt2')).toBe(true);
      
      await realCache.cleanup();
    });
  });
});
```
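
Taken together, the sanitization tests pin down one rule: lowercase the name, then map every character outside `[a-z0-9_-]` to an underscore. A hypothetical helper that satisfies every case asserted above (the actual implementation lives in `src/fileOperations.ts` and may differ in detail):

```typescript
// Hypothetical helper matching the behaviour the tests assert.
function sanitizeFileName(name: string): string {
  return name.toLowerCase().replace(/[^a-z0-9_-]/g, '_');
}

sanitizeFileName('Test Prompt With Special Chars!@#'); // 'test_prompt_with_special_chars___'
sanitizeFileName('abc-def_ghi789');                    // unchanged
```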

--------------------------------------------------------------------------------
/tests/index.test.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Integration tests for main index module
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { createTempDir, cleanupTempDir, createTestPromptFile, mockConsoleError } from './helpers/testUtils.js';
import { createMockProcess } from './helpers/mocks.js';

// Mock external dependencies
vi.mock('@modelcontextprotocol/sdk/server/index.js', () => ({
  Server: vi.fn().mockImplementation(() => ({
    setRequestHandler: vi.fn(),
    connect: vi.fn()
  }))
}));

vi.mock('@modelcontextprotocol/sdk/server/stdio.js', () => ({
  StdioServerTransport: vi.fn()
}));

describe('Main Server Integration', () => {
  let tempDir: string;
  let consoleErrorSpy: ReturnType<typeof mockConsoleError>;
  let originalProcess: typeof process;

  beforeEach(async () => {
    tempDir = await createTempDir();
    consoleErrorSpy = mockConsoleError();
    
    // Mock process for signal handling tests
    originalProcess = global.process;
  });

  afterEach(async () => {
    await cleanupTempDir(tempDir);
    consoleErrorSpy.mockRestore();
    global.process = originalProcess;
    vi.clearAllMocks();
  });

  describe('server configuration', () => {
    it('should have correct server configuration', async () => {
      // Since we can't easily test the actual module loading without side effects,
      // we'll test the configuration values that would be used
      const config = {
        name: 'prompts-mcp-server',
        version: '1.0.0',
        promptsDir: '/test/path'
      };

      expect(config.name).toBe('prompts-mcp-server');
      expect(config.version).toBe('1.0.0');
      expect(typeof config.promptsDir).toBe('string');
    });

    it('should use PROMPTS_FOLDER_PATH environment variable when set', async () => {
      // Save original env var
      const originalPath = process.env.PROMPTS_FOLDER_PATH;
      
      // Set test env var
      process.env.PROMPTS_FOLDER_PATH = '/custom/prompts/path';
      
      // Test configuration would use the custom path
      const customPath = process.env.PROMPTS_FOLDER_PATH;
      const defaultPath = '/default/prompts';
      const promptsDir = customPath || defaultPath;
      
      expect(promptsDir).toBe('/custom/prompts/path');
      
      // Restore original env var
      if (originalPath !== undefined) {
        process.env.PROMPTS_FOLDER_PATH = originalPath;
      } else {
        delete process.env.PROMPTS_FOLDER_PATH;
      }
    });

    it('should use default path when PROMPTS_FOLDER_PATH is not set', async () => {
      // Save original env var
      const originalPath = process.env.PROMPTS_FOLDER_PATH;
      
      // Clear env var
      delete process.env.PROMPTS_FOLDER_PATH;
      
      // Test configuration would use default path
      const customPath = process.env.PROMPTS_FOLDER_PATH;
      const defaultPath = '/default/prompts';
      const promptsDir = customPath || defaultPath;
      
      expect(promptsDir).toBe('/default/prompts');
      
      // Restore original env var
      if (originalPath !== undefined) {
        process.env.PROMPTS_FOLDER_PATH = originalPath;
      }
    });
  });

  describe('component integration', () => {
    it('should integrate cache, file operations, and tools correctly', async () => {
      // Test the integration by using the actual classes
      const { PromptCache } = await import('../src/cache.js');
      const { PromptFileOperations } = await import('../src/fileOperations.js');
      const { PromptTools } = await import('../src/tools.js');

      // Create test files
      await createTestPromptFile(tempDir, 'integration-test', 
        { title: 'Integration Test', category: 'test' },
        'This is an integration test prompt.'
      );

      // Initialize components
      const cache = new PromptCache(tempDir);
      const fileOps = new PromptFileOperations(tempDir, cache);
      const tools = new PromptTools(fileOps);

      // Test integration flow
      await cache.initializeCache();
      
      // Verify cache has the prompt
      expect(cache.size()).toBe(1);
      expect(cache.getPrompt('integration-test')?.metadata.title).toBe('Integration Test');

      // Test file operations
      const prompts = await fileOps.listPrompts();
      expect(prompts).toHaveLength(1);
      expect(prompts[0].name).toBe('integration-test');

      // Test tools
      const toolDefinitions = tools.getToolDefinitions();
      expect(toolDefinitions.tools).toHaveLength(5);

      // Cleanup
      await cache.cleanup();
    });

    it('should handle end-to-end prompt management workflow', async () => {
      const { PromptCache } = await import('../src/cache.js');
      const { PromptFileOperations } = await import('../src/fileOperations.js');
      const { PromptTools } = await import('../src/tools.js');

      const cache = new PromptCache(tempDir);
      const fileOps = new PromptFileOperations(tempDir, cache);
      const tools = new PromptTools(fileOps);

      await cache.initializeCache();

      // 1. Add a prompt via tools
      const addRequest = {
        params: {
          name: 'add_prompt',
          arguments: {
            name: 'e2e-test',
            filename: 'e2e_test',
            content: '---\ntitle: "E2E Test"\ncategory: "testing"\n---\n\n# E2E Test Prompt\n\nThis is an end-to-end test.'
          }
        }
      };

      const addResult = await tools.handleToolCall(addRequest as any);
      expect(addResult.content[0].text).toContain('saved as e2e_test.md');

      // 2. List prompts should show the new prompt
      const listRequest = { params: { name: 'list_prompts', arguments: {} } };
      const listResult = await tools.handleToolCall(listRequest as any);
      expect(listResult.content[0].text).toContain('e2e_test');
      expect(listResult.content[0].text).toContain('E2E Test');

      // 3. Get the specific prompt
      const getRequest = {
        params: {
          name: 'get_prompt',
          arguments: { name: 'e2e_test' }
        }
      };
      const getResult = await tools.handleToolCall(getRequest as any);
      expect(getResult.content[0].text).toContain('E2E Test Prompt');

      // 4. Verify prompt exists in cache
      expect(cache.getPrompt('e2e_test')?.metadata.title).toBe('E2E Test');
      expect(cache.getPrompt('e2e_test')?.metadata.category).toBe('testing');

      // 5. Delete the prompt
      const deleteRequest = {
        params: {
          name: 'delete_prompt',
          arguments: { name: 'e2e_test' }
        }
      };
      const deleteResult = await tools.handleToolCall(deleteRequest as any);
      expect(deleteResult.content[0].text).toContain('deleted successfully');

      // 6. Verify prompt is no longer accessible
      const getDeletedRequest = {
        params: {
          name: 'get_prompt',
          arguments: { name: 'e2e_test' }
        }
      };
      const getDeletedResult = await tools.handleToolCall(getDeletedRequest as any);
      expect(getDeletedResult.isError).toBe(true);
      expect(getDeletedResult.content[0].text).toContain('not found');

      await cache.cleanup();
    });
  });

  describe('error handling integration', () => {
    it('should handle errors across component boundaries', async () => {
      const { PromptCache } = await import('../src/cache.js');
      const { PromptFileOperations } = await import('../src/fileOperations.js');
      const { PromptTools } = await import('../src/tools.js');

      // Use a non-existent directory to trigger errors
      const badDir = '/non/existent/directory';
      const cache = new PromptCache(badDir);
      const fileOps = new PromptFileOperations(badDir, cache);
      const tools = new PromptTools(fileOps);

      // Try to get a prompt from non-existent directory
      const getRequest = {
        params: {
          name: 'get_prompt',
          arguments: { name: 'any-prompt' }
        }
      };

      const result = await tools.handleToolCall(getRequest as any);
      expect(result.isError).toBe(true);
      expect(result.content[0].text).toContain('Error:');

      await cache.cleanup();
    });
  });

  describe('signal handling simulation', () => {
    it('should handle shutdown signals properly', () => {
      const mockProcess = createMockProcess();
      
      // Simulate signal handler registration
      const signalHandlers = new Map();
      mockProcess.on.mockImplementation((signal: string, handler: Function) => {
        signalHandlers.set(signal, handler);
      });

      // Simulate the signal registration that would happen in main
      mockProcess.on('SIGINT', () => {
        console.error('Shutting down server...');
      });
      mockProcess.on('SIGTERM', () => {
        console.error('Shutting down server...');
      });

      expect(mockProcess.on).toHaveBeenCalledWith('SIGINT', expect.any(Function));
      expect(mockProcess.on).toHaveBeenCalledWith('SIGTERM', expect.any(Function));
    });
  });

  describe('server initialization', () => {
    it('should create server with correct configuration', async () => {
      const { Server } = await import('@modelcontextprotocol/sdk/server/index.js');
      const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');

      // These are mocked, so we can only verify that the mocked constructors are available
      expect(Server).toBeDefined();
      expect(StdioServerTransport).toBeDefined();
    });
  });

  describe('real file system integration', () => {
    it('should work with real file operations', async () => {
      const { PromptCache } = await import('../src/cache.js');
      
      // Create some test files
      await createTestPromptFile(tempDir, 'real-test-1', 
        { title: 'Real Test 1', difficulty: 'beginner' },
        'This is a real file system test.'
      );
      await createTestPromptFile(tempDir, 'real-test-2',
        { title: 'Real Test 2', difficulty: 'advanced' },
        'This is another real file system test.'
      );

      const cache = new PromptCache(tempDir);
      await cache.initializeCache();

      // Verify cache loaded the files
      expect(cache.size()).toBe(2);
      
      const prompt1 = cache.getPrompt('real-test-1');
      const prompt2 = cache.getPrompt('real-test-2');
      
      expect(prompt1?.metadata.title).toBe('Real Test 1');
      expect(prompt1?.metadata.difficulty).toBe('beginner');
      expect(prompt2?.metadata.title).toBe('Real Test 2');
      expect(prompt2?.metadata.difficulty).toBe('advanced');
      
      expect(prompt1?.preview).toContain('real file system test');
      expect(prompt2?.preview).toContain('another real file system test');

      await cache.cleanup();
    });

    it('should handle mixed valid and invalid files', async () => {
      const fs = await import('fs/promises');
      
      // Create valid prompt file
      await createTestPromptFile(tempDir, 'valid-prompt', 
        { title: 'Valid Prompt' },
        'This is valid content.'
      );

      // Create invalid file (not markdown)
      await fs.writeFile(`${tempDir}/invalid.txt`, 'This is not a markdown file');
      
      // Create file with invalid YAML (but should still work)
      await fs.writeFile(`${tempDir}/broken-yaml.md`, 
        '---\ninvalid: yaml: content\n---\n\nContent after broken YAML'
      );

      const { PromptCache } = await import('../src/cache.js');
      const cache = new PromptCache(tempDir);
      await cache.initializeCache();

      // Should load at least the valid prompt
      expect(cache.size()).toBeGreaterThanOrEqual(1);
      expect(cache.getPrompt('valid-prompt')?.metadata.title).toBe('Valid Prompt');
      
      // Invalid.txt should be ignored
      expect(cache.getPrompt('invalid')).toBeUndefined();

      await cache.cleanup();
    });
  });
});
```
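
The signal-handling test only simulates handler registration, so for context, here is a sketch of the shutdown path those handlers presumably trigger, built from the `cache.cleanup()` call seen throughout these tests; whether `src/index.ts` exits exactly this way is an assumption:

```typescript
import { PromptCache } from './cache.js';

const cache = new PromptCache(process.env.PROMPTS_FOLDER_PATH ?? './prompts');

// Sketch only: log, close the chokidar watcher via cleanup(), then exit.
async function shutdown(): Promise<void> {
  console.error('Shutting down server...');
  await cache.cleanup();
  process.exit(0);
}

process.on('SIGINT', () => { void shutdown(); });
process.on('SIGTERM', () => { void shutdown(); });
```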

--------------------------------------------------------------------------------
/tests/cache.test.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Tests for PromptCache class
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { PromptCache } from '../src/cache.js';
import { createTempDir, cleanupTempDir, createTestPromptFile, createSamplePromptInfo, mockConsoleError, wait } from './helpers/testUtils.js';
import { createMockWatcher } from './helpers/mocks.js';

// Mock chokidar
vi.mock('chokidar', () => ({
  default: {
    watch: vi.fn()
  }
}));

describe('PromptCache', () => {
  let tempDir: string;
  let cache: PromptCache;
  let consoleErrorSpy: ReturnType<typeof mockConsoleError>;

  beforeEach(async () => {
    tempDir = await createTempDir();
    cache = new PromptCache(tempDir);
    consoleErrorSpy = mockConsoleError();
    vi.clearAllMocks();
  });

  afterEach(async () => {
    await cache.cleanup();
    await cleanupTempDir(tempDir);
    consoleErrorSpy.mockRestore();
  });

  describe('constructor', () => {
    it('should create cache with empty state', () => {
      expect(cache.isEmpty()).toBe(true);
      expect(cache.size()).toBe(0);
      expect(cache.getAllPrompts()).toEqual([]);
    });
  });

  describe('getAllPrompts', () => {
    it('should return empty array when cache is empty', () => {
      expect(cache.getAllPrompts()).toEqual([]);
    });

    it('should return all cached prompts', async () => {
      await createTestPromptFile(tempDir, 'test1', { title: 'Test 1' });
      await createTestPromptFile(tempDir, 'test2', { title: 'Test 2' });
      
      await cache.initializeCache();
      
      const prompts = cache.getAllPrompts();
      expect(prompts).toHaveLength(2);
      expect(prompts.some(p => p.name === 'test1')).toBe(true);
      expect(prompts.some(p => p.name === 'test2')).toBe(true);
    });
  });

  describe('getPrompt', () => {
    it('should return undefined for non-existent prompt', () => {
      expect(cache.getPrompt('non-existent')).toBeUndefined();
    });

    it('should return cached prompt by name', async () => {
      await createTestPromptFile(tempDir, 'test-prompt', { title: 'Test Prompt' });
      await cache.initializeCache();
      
      const prompt = cache.getPrompt('test-prompt');
      expect(prompt).toBeDefined();
      expect(prompt?.name).toBe('test-prompt');
      expect(prompt?.metadata.title).toBe('Test Prompt');
    });
  });

  describe('isEmpty', () => {
    it('should return true when cache is empty', () => {
      expect(cache.isEmpty()).toBe(true);
    });

    it('should return false when cache has prompts', async () => {
      await createTestPromptFile(tempDir, 'test-prompt');
      await cache.initializeCache();
      
      expect(cache.isEmpty()).toBe(false);
    });
  });

  describe('size', () => {
    it('should return 0 when cache is empty', () => {
      expect(cache.size()).toBe(0);
    });

    it('should return correct count of cached prompts', async () => {
      await createTestPromptFile(tempDir, 'test1');
      await createTestPromptFile(tempDir, 'test2');
      await createTestPromptFile(tempDir, 'test3');
      
      await cache.initializeCache();
      
      expect(cache.size()).toBe(3);
    });
  });

  describe('initializeCache', () => {
    it('should load all markdown files from directory', async () => {
      await createTestPromptFile(tempDir, 'prompt1', { title: 'Prompt 1' });
      await createTestPromptFile(tempDir, 'prompt2', { title: 'Prompt 2' });
      
      await cache.initializeCache();
      
      expect(cache.size()).toBe(2);
      expect(cache.getPrompt('prompt1')?.metadata.title).toBe('Prompt 1');
      expect(cache.getPrompt('prompt2')?.metadata.title).toBe('Prompt 2');
    });

    it('should ignore non-markdown files', async () => {
      await createTestPromptFile(tempDir, 'prompt1');
      // Create a non-markdown file
      const fs = await import('fs/promises');
      await fs.writeFile(`${tempDir}/readme.txt`, 'Not a prompt');
      
      await cache.initializeCache();
      
      expect(cache.size()).toBe(1);
      expect(cache.getPrompt('prompt1')).toBeDefined();
      expect(cache.getPrompt('readme')).toBeUndefined();
    });

    it('should handle files with YAML frontmatter', async () => {
      const metadata = {
        title: 'Test Prompt',
        description: 'A test prompt',
        category: 'test',
        tags: ['test', 'example'],
        difficulty: 'beginner' as const,
        author: 'Test Author',
        version: '1.0'
      };
      
      await createTestPromptFile(tempDir, 'with-frontmatter', metadata, 'Content after frontmatter');
      await cache.initializeCache();
      
      const prompt = cache.getPrompt('with-frontmatter');
      expect(prompt?.metadata).toEqual(metadata);
      expect(prompt?.preview).toContain('Content after frontmatter');
    });

    it('should handle files without frontmatter', async () => {
      await createTestPromptFile(tempDir, 'no-frontmatter', {}, 'Just plain content');
      await cache.initializeCache();
      
      const prompt = cache.getPrompt('no-frontmatter');
      expect(prompt?.metadata).toEqual({});
      expect(prompt?.preview).toContain('Just plain content');
    });

    it('should create preview text', async () => {
      const longContent = 'A'.repeat(200);
      await createTestPromptFile(tempDir, 'long-content', {}, longContent);
      await cache.initializeCache();
      
      const prompt = cache.getPrompt('long-content');
      expect(prompt?.preview).toHaveLength(103); // 100 chars + '...'
      expect(prompt?.preview.endsWith('...')).toBe(true);
    });

    it('should handle file read errors gracefully', async () => {
      // Create a valid file first
      await createTestPromptFile(tempDir, 'valid-prompt');
      
      // Create an invalid file by creating a directory with .md extension
      const fs = await import('fs/promises');
      await fs.mkdir(`${tempDir}/invalid.md`, { recursive: true });
      
      await cache.initializeCache();
      
      // Should have loaded the valid file and logged error for invalid one
      expect(cache.size()).toBe(1);
      expect(consoleErrorSpy).toHaveBeenCalledWith(
        expect.stringContaining('Failed to load prompt metadata for invalid.md'),
        expect.any(String)
      );
    });

    it('should log successful cache initialization', async () => {
      await createTestPromptFile(tempDir, 'test1');
      await createTestPromptFile(tempDir, 'test2');
      
      await cache.initializeCache();
      
      expect(consoleErrorSpy).toHaveBeenCalledWith('Loaded 2 prompts into cache');
    });

    it('should handle missing directory gracefully', async () => {
      const nonExistentDir = `${tempDir}/non-existent`;
      const cacheWithBadDir = new PromptCache(nonExistentDir);
      
      await cacheWithBadDir.initializeCache();
      
      expect(cacheWithBadDir.size()).toBe(0);
      expect(consoleErrorSpy).not.toHaveBeenCalledWith(
        expect.stringContaining('Failed to initialize cache')
      );
    });
  });

  describe('initializeFileWatcher', () => {
    let mockWatcher: ReturnType<typeof createMockWatcher>;

    beforeEach(async () => {
      const chokidar = await import('chokidar');
      mockWatcher = createMockWatcher();
      vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
    });

    it('should initialize file watcher only once', async () => {
      cache.initializeFileWatcher();
      cache.initializeFileWatcher();
      
      // Should only be called once despite multiple calls
      const chokidar = await import('chokidar');
      expect(chokidar.default.watch).toHaveBeenCalledTimes(1);
    });

    it('should set up file watcher with correct options', async () => {
      const chokidar = await import('chokidar');
      
      cache.initializeFileWatcher();
      
      expect(chokidar.default.watch).toHaveBeenCalledWith(
        expect.stringContaining('*.md'),
        {
          ignored: /^\./,
          persistent: true,
          ignoreInitial: true
        }
      );
    });

    it('should register event handlers', () => {
      cache.initializeFileWatcher();
      
      expect(mockWatcher.on).toHaveBeenCalledWith('add', expect.any(Function));
      expect(mockWatcher.on).toHaveBeenCalledWith('change', expect.any(Function));
      expect(mockWatcher.on).toHaveBeenCalledWith('unlink', expect.any(Function));
      expect(mockWatcher.on).toHaveBeenCalledWith('error', expect.any(Function));
    });

    it('should log initialization message', () => {
      cache.initializeFileWatcher();
      
      expect(consoleErrorSpy).toHaveBeenCalledWith(
        'File watcher initialized for prompts directory'
      );
    });
  });

  describe('cleanup', () => {
    it('should close file watcher if initialized', async () => {
      const chokidar = await import('chokidar');
      const mockWatcher = createMockWatcher();
      vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
      
      cache.initializeFileWatcher();
      await cache.cleanup();
      
      expect(mockWatcher.close).toHaveBeenCalled();
    });

    it('should handle cleanup when watcher not initialized', async () => {
      // Should not throw
      await expect(cache.cleanup()).resolves.not.toThrow();
    });

    it('should reset watcher state after cleanup', async () => {
      const chokidar = await import('chokidar');
      const mockWatcher = createMockWatcher();
      vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
      
      cache.initializeFileWatcher();
      await cache.cleanup();
      
      // Should be able to initialize again
      cache.initializeFileWatcher();
      expect(chokidar.default.watch).toHaveBeenCalledTimes(2);
    });
  });

  describe('file watcher integration', () => {
    let mockWatcher: ReturnType<typeof createMockWatcher>;
    let addHandler: Function;
    let changeHandler: Function;
    let unlinkHandler: Function;
    let errorHandler: Function;

    beforeEach(async () => {
      const chokidar = await import('chokidar');
      mockWatcher = createMockWatcher();
      
      // Capture event handlers
      mockWatcher.on.mockImplementation((event: string, handler: Function) => {
        switch (event) {
          case 'add': addHandler = handler; break;
          case 'change': changeHandler = handler; break;
          case 'unlink': unlinkHandler = handler; break;
          case 'error': errorHandler = handler; break;
        }
        return mockWatcher;
      });
      
      vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
      
      // Initialize cache and watcher
      await cache.initializeCache();
      cache.initializeFileWatcher();
    });

    it('should handle file addition', async () => {
      const filePath = `${tempDir}/new-prompt.md`;
      await createTestPromptFile(tempDir, 'new-prompt', { title: 'New Prompt' });
      
      // Simulate file watcher add event
      await addHandler(filePath);
      
      expect(consoleErrorSpy).toHaveBeenCalledWith('Prompt added: new-prompt.md');
      expect(cache.getPrompt('new-prompt')?.metadata.title).toBe('New Prompt');
    });

    it('should handle file changes', async () => {
      // Create initial file
      await createTestPromptFile(tempDir, 'test-prompt', { title: 'Original Title' });
      await cache.initializeCache();
      
      // Update file
      await createTestPromptFile(tempDir, 'test-prompt', { title: 'Updated Title' });
      
      // Simulate file watcher change event
      const filePath = `${tempDir}/test-prompt.md`;
      await changeHandler(filePath);
      
      expect(consoleErrorSpy).toHaveBeenCalledWith('Prompt updated: test-prompt.md');
      expect(cache.getPrompt('test-prompt')?.metadata.title).toBe('Updated Title');
    });

    it('should handle file deletion', async () => {
      // Create and cache a file
      await createTestPromptFile(tempDir, 'to-delete');
      await cache.initializeCache();
      expect(cache.getPrompt('to-delete')).toBeDefined();
      
      // Simulate file watcher unlink event
      const filePath = `${tempDir}/to-delete.md`;
      await unlinkHandler(filePath);
      
      expect(consoleErrorSpy).toHaveBeenCalledWith('Prompt deleted: to-delete.md');
      expect(cache.getPrompt('to-delete')).toBeUndefined();
    });

    it('should handle watcher errors', () => {
      const error = new Error('Watcher error');
      
      errorHandler(error);
      
      expect(consoleErrorSpy).toHaveBeenCalledWith('File watcher error:', error);
    });

    it('should ignore non-markdown files in watcher events', async () => {
      const sizeBefore = cache.size();
      
      // Simulate adding a non-markdown file
      await addHandler(`${tempDir}/readme.txt`);
      
      expect(cache.size()).toBe(sizeBefore);
    });
  });
});
```
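
The preview test fixes the truncation rule exactly: 100 characters of body plus `'...'`, for a 103-character preview. A minimal sketch of that rule, assuming a gray-matter-style frontmatter parser (whether `src/cache.ts` actually uses gray-matter is an assumption here):

```typescript
import matter from 'gray-matter'; // assumed frontmatter parser

// Strip YAML frontmatter, then truncate the body to 100 chars + '...'.
function makePreview(raw: string): string {
  const { content } = matter(raw);
  const body = content.trim();
  return body.length > 100 ? `${body.slice(0, 100)}...` : body;
}

makePreview('A'.repeat(200)).length; // 103, as the test asserts
```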

--------------------------------------------------------------------------------
/tests/tools.test.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Tests for PromptTools class
 */

import { describe, it, expect, beforeEach, vi } from 'vitest';
import { PromptTools } from '../src/tools.js';
import { createSamplePromptInfo, createMockCallToolRequest, createExpectedResponse } from './helpers/testUtils.js';
import { MockPromptFileOperations } from './helpers/mocks.js';

describe('PromptTools', () => {
  let mockFileOps: MockPromptFileOperations;
  let tools: PromptTools;

  beforeEach(() => {
    mockFileOps = new MockPromptFileOperations();
    tools = new PromptTools(mockFileOps as any);
  });

  describe('constructor', () => {
    it('should create instance with file operations dependency', () => {
      expect(tools).toBeDefined();
      expect(tools).toBeInstanceOf(PromptTools);
    });
  });

  describe('getToolDefinitions', () => {
    it('should return all tool definitions', () => {
      const definitions = tools.getToolDefinitions();
      
      expect(definitions.tools).toHaveLength(5);
      
      const toolNames = definitions.tools.map(tool => tool.name);
      expect(toolNames).toEqual([
        'add_prompt',
        'get_prompt',
        'list_prompts',
        'delete_prompt',
        'create_structured_prompt'
      ]);
    });

    it('should have correct add_prompt tool definition', () => {
      const definitions = tools.getToolDefinitions();
      const addPromptTool = definitions.tools.find(tool => tool.name === 'add_prompt');
      
      expect(addPromptTool).toBeDefined();
      expect(addPromptTool?.description).toBe('Add a new prompt to the collection');
      expect(addPromptTool?.inputSchema.required).toEqual(['name', 'filename', 'content']);
      expect(addPromptTool?.inputSchema.properties).toHaveProperty('name');
      expect(addPromptTool?.inputSchema.properties).toHaveProperty('filename');
      expect(addPromptTool?.inputSchema.properties).toHaveProperty('content');
    });

    it('should have correct get_prompt tool definition', () => {
      const definitions = tools.getToolDefinitions();
      const getPromptTool = definitions.tools.find(tool => tool.name === 'get_prompt');
      
      expect(getPromptTool).toBeDefined();
      expect(getPromptTool?.description).toBe('Retrieve a prompt by name');
      expect(getPromptTool?.inputSchema.required).toEqual(['name']);
      expect(getPromptTool?.inputSchema.properties).toHaveProperty('name');
    });

    it('should have correct list_prompts tool definition', () => {
      const definitions = tools.getToolDefinitions();
      const listPromptsTool = definitions.tools.find(tool => tool.name === 'list_prompts');
      
      expect(listPromptsTool).toBeDefined();
      expect(listPromptsTool?.description).toBe('List all available prompts');
      expect(listPromptsTool?.inputSchema.properties).toEqual({});
    });

    it('should have correct delete_prompt tool definition', () => {
      const definitions = tools.getToolDefinitions();
      const deletePromptTool = definitions.tools.find(tool => tool.name === 'delete_prompt');
      
      expect(deletePromptTool).toBeDefined();
      expect(deletePromptTool?.description).toBe('Delete a prompt by name');
      expect(deletePromptTool?.inputSchema.required).toEqual(['name']);
      expect(deletePromptTool?.inputSchema.properties).toHaveProperty('name');
    });
  });

  describe('handleToolCall', () => {
    describe('add_prompt', () => {
      it('should add prompt with automatic metadata when none exists', async () => {
        const request = createMockCallToolRequest('add_prompt', {
          name: 'test-prompt',
          filename: 'test-prompt-file',
          content: '# Test Prompt\n\nThis is a test.'
        });
        
        mockFileOps.savePromptWithFilename.mockResolvedValue('test-prompt-file.md');
        
        const result = await tools.handleToolCall(request as any);
        
        // Should call savePromptWithFilename with content enhanced with metadata
        expect(mockFileOps.savePromptWithFilename).toHaveBeenCalledWith(
          'test-prompt-file',
          expect.stringContaining('title: "Test Prompt"')
        );
        expect(mockFileOps.savePromptWithFilename).toHaveBeenCalledWith(
          'test-prompt-file',
          expect.stringContaining('# Test Prompt\n\nThis is a test.')
        );
        expect(result).toEqual(createExpectedResponse(
          'Prompt "test-prompt" saved as test-prompt-file.md'
        ));
      });

      it('should preserve existing frontmatter when present', async () => {
        const contentWithFrontmatter = `---
title: "Existing Title"
category: "custom"
---

# Test Prompt

This already has metadata.`;
        
        const request = createMockCallToolRequest('add_prompt', {
          name: 'test-prompt',
          filename: 'test-prompt-file',
          content: contentWithFrontmatter
        });
        
        mockFileOps.savePromptWithFilename.mockResolvedValue('test-prompt-file.md');
        
        const result = await tools.handleToolCall(request as any);
        
        // Should call savePromptWithFilename with original content unchanged
        expect(mockFileOps.savePromptWithFilename).toHaveBeenCalledWith(
          'test-prompt-file',
          contentWithFrontmatter
        );
        expect(result).toEqual(createExpectedResponse(
          'Prompt "test-prompt" saved as test-prompt-file.md'
        ));
      });

      it('should handle missing content parameter', async () => {
        const request = createMockCallToolRequest('add_prompt', {
          name: 'test-prompt'
          // filename and content are missing
        });
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Name, filename, and content are required for add_prompt',
          true
        ));
        expect(mockFileOps.savePromptWithFilename).not.toHaveBeenCalled();
      });

      it('should handle file operation errors', async () => {
        const request = createMockCallToolRequest('add_prompt', {
          name: 'test-prompt',
          filename: 'test-prompt-file',
          content: 'test content'
        });
        
        mockFileOps.savePromptWithFilename.mockRejectedValue(new Error('Disk full'));
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Disk full',
          true
        ));
      });
    });

    describe('get_prompt', () => {
      it('should retrieve prompt successfully', async () => {
        const request = createMockCallToolRequest('get_prompt', {
          name: 'test-prompt'
        });
        
        const promptContent = '# Test Prompt\n\nThis is the full content.';
        mockFileOps.readPrompt.mockResolvedValue(promptContent);
        
        const result = await tools.handleToolCall(request as any);
        
        expect(mockFileOps.readPrompt).toHaveBeenCalledWith('test-prompt');
        expect(result).toEqual(createExpectedResponse(promptContent));
      });

      it('should handle non-existent prompt', async () => {
        const request = createMockCallToolRequest('get_prompt', {
          name: 'non-existent'
        });
        
        mockFileOps.readPrompt.mockRejectedValue(new Error('Prompt "non-existent" not found'));
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Prompt "non-existent" not found',
          true
        ));
      });
    });

    describe('list_prompts', () => {
      it('should list prompts with metadata formatting', async () => {
        const request = createMockCallToolRequest('list_prompts', {});
        
        const samplePrompts = [
          createSamplePromptInfo({
            name: 'prompt1',
            metadata: { title: 'Prompt 1', category: 'test' },
            preview: 'This is prompt 1 preview...'
          }),
          createSamplePromptInfo({
            name: 'prompt2',
            metadata: { title: 'Prompt 2', difficulty: 'advanced' },
            preview: 'This is prompt 2 preview...'
          })
        ];
        
        mockFileOps.listPrompts.mockResolvedValue(samplePrompts);
        
        const result = await tools.handleToolCall(request as any);
        
        expect(mockFileOps.listPrompts).toHaveBeenCalled();
        expect(result.content[0].type).toBe('text');
        
        const text = result.content[0].text as string;
        expect(text).toContain('# Available Prompts');
        expect(text).toContain('## prompt1');
        expect(text).toContain('## prompt2');
        expect(text).toContain('**Metadata:**');
        expect(text).toContain('- title: Prompt 1');
        expect(text).toContain('- category: test');
        expect(text).toContain('**Preview:** This is prompt 1 preview...');
        expect(text).toContain('---'); // Separator between prompts
      });

      it('should handle empty prompt list', async () => {
        const request = createMockCallToolRequest('list_prompts', {});
        
        mockFileOps.listPrompts.mockResolvedValue([]);
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse('No prompts available'));
      });

      it('should handle prompts with no metadata', async () => {
        const request = createMockCallToolRequest('list_prompts', {});
        
        const promptWithoutMetadata = createSamplePromptInfo({
          name: 'simple-prompt',
          metadata: {},
          preview: 'Simple prompt preview...'
        });
        
        mockFileOps.listPrompts.mockResolvedValue([promptWithoutMetadata]);
        
        const result = await tools.handleToolCall(request as any);
        
        const text = result.content[0].text as string;
        expect(text).toContain('## simple-prompt');
        expect(text).not.toContain('**Metadata:**');
        expect(text).toContain('**Preview:** Simple prompt preview...');
      });

      it('should handle file operation errors', async () => {
        const request = createMockCallToolRequest('list_prompts', {});
        
        mockFileOps.listPrompts.mockRejectedValue(new Error('Directory not accessible'));
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Directory not accessible',
          true
        ));
      });
    });

    describe('delete_prompt', () => {
      it('should delete prompt successfully', async () => {
        const request = createMockCallToolRequest('delete_prompt', {
          name: 'test-prompt'
        });
        
        mockFileOps.deletePrompt.mockResolvedValue(true);
        
        const result = await tools.handleToolCall(request as any);
        
        expect(mockFileOps.deletePrompt).toHaveBeenCalledWith('test-prompt');
        expect(result).toEqual(createExpectedResponse(
          'Prompt "test-prompt" deleted successfully'
        ));
      });

      it('should handle non-existent prompt deletion', async () => {
        const request = createMockCallToolRequest('delete_prompt', {
          name: 'non-existent'
        });
        
        mockFileOps.deletePrompt.mockRejectedValue(new Error('Prompt "non-existent" not found'));
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Prompt "non-existent" not found',
          true
        ));
      });
    });

    describe('unknown tool', () => {
      it('should handle unknown tool name', async () => {
        const request = createMockCallToolRequest('unknown_tool', {});
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Unknown tool: unknown_tool',
          true
        ));
      });
    });

    describe('error handling', () => {
      it('should handle non-Error exceptions', async () => {
        const request = createMockCallToolRequest('get_prompt', { name: 'test' });
        
        // Mock a non-Error exception
        mockFileOps.readPrompt.mockRejectedValue('String error');
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Unknown error',
          true
        ));
      });

      it('should handle null/undefined exceptions', async () => {
        const request = createMockCallToolRequest('get_prompt', { name: 'test' });
        
        mockFileOps.readPrompt.mockRejectedValue(null);
        
        const result = await tools.handleToolCall(request as any);
        
        expect(result).toEqual(createExpectedResponse(
          'Error: Unknown error',
          true
        ));
      });
    });
  });

  describe('formatPromptsList', () => {
    it('should format single prompt correctly', async () => {
      const request = createMockCallToolRequest('list_prompts', {});
      
      const singlePrompt = createSamplePromptInfo({
        name: 'single-prompt',
        metadata: {
          title: 'Single Prompt',
          description: 'A single test prompt',
          tags: ['test']
        },
        preview: 'This is the preview text...'
      });
      
      mockFileOps.listPrompts.mockResolvedValue([singlePrompt]);
      
      const result = await tools.handleToolCall(request as any);
      const text = result.content[0].text as string;
      
      expect(text).toContain('# Available Prompts');
      expect(text).toContain('## single-prompt');
      expect(text).toContain('- title: Single Prompt');
      expect(text).toContain('- description: A single test prompt');
      expect(text).toContain('- tags: test');
      expect(text).toContain('**Preview:** This is the preview text...');
      expect(text).not.toContain('---'); // No separator for single prompt
    });

    it('should handle complex metadata values', async () => {
      const request = createMockCallToolRequest('list_prompts', {});
      
      const complexPrompt = createSamplePromptInfo({
        name: 'complex-prompt',
        metadata: {
          title: 'Complex Prompt',
          tags: ['tag1', 'tag2', 'tag3'],
          customObject: { nested: 'value', array: [1, 2, 3] },
          customBoolean: true,
          customNumber: 42
        },
        preview: 'Complex preview...'
      });
      
      mockFileOps.listPrompts.mockResolvedValue([complexPrompt]);
      
      const result = await tools.handleToolCall(request as any);
      const text = result.content[0].text as string;
      
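      // Metadata values are expected to be stringified naively: arrays
      // join with commas and plain objects collapse to "[object Object]".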
      expect(text).toContain('- tags: tag1,tag2,tag3');
      expect(text).toContain('- customBoolean: true');
      expect(text).toContain('- customNumber: 42');
      expect(text).toContain('- customObject: [object Object]');
    });
  });

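  // End-to-end pass through the tool dispatcher: add, list, get, and
  // delete a prompt in sequence, re-stubbing the file operations mock
  // before each step.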
  describe('integration scenarios', () => {
    it('should handle complete workflow', async () => {
      // Add a prompt
      const addRequest = createMockCallToolRequest('add_prompt', {
        name: 'workflow-test',
        filename: 'workflow-test-file',
        content: '# Workflow Test\n\nTest content'
      });
      
      mockFileOps.savePromptWithFilename.mockResolvedValue('workflow-test-file.md');
      
      let result = await tools.handleToolCall(addRequest as any);
      expect(result.content[0].text).toContain('saved as workflow-test-file.md');
      
      // List prompts
      const listRequest = createMockCallToolRequest('list_prompts', {});
      const samplePrompt = createSamplePromptInfo({ name: 'workflow-test' });
      mockFileOps.listPrompts.mockResolvedValue([samplePrompt]);
      
      result = await tools.handleToolCall(listRequest as any);
      expect(result.content[0].text).toContain('workflow-test');
      
      // Get specific prompt
      const getRequest = createMockCallToolRequest('get_prompt', {
        name: 'workflow-test'
      });
      
      mockFileOps.readPrompt.mockResolvedValue('# Workflow Test\n\nTest content');
      
      result = await tools.handleToolCall(getRequest as any);
      expect(result.content[0].text).toContain('Test content');
      
      // Delete prompt
      const deleteRequest = createMockCallToolRequest('delete_prompt', {
        name: 'workflow-test'
      });
      
      mockFileOps.deletePrompt.mockResolvedValue(true);
      
      result = await tools.handleToolCall(deleteRequest as any);
      expect(result.content[0].text).toContain('deleted successfully');
    });
  });

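  // create_structured_prompt is expected to serialize its metadata fields
  // into quoted YAML frontmatter and delegate to savePrompt; the
  // stringContaining assertions below check the frontmatter piecewise
  // rather than matching the whole generated document.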
  describe('create_structured_prompt', () => {
    it('should create structured prompt with all metadata', async () => {
      const request = createMockCallToolRequest('create_structured_prompt', {
        name: 'test-structured',
        title: 'Test Structured Prompt',
        description: 'A test prompt with full metadata',
        category: 'testing',
        tags: ['test', 'structured'],
        difficulty: 'intermediate',
        author: 'Test Author',
        content: '# Test Content\n\nThis is structured content.'
      });
      
      mockFileOps.savePrompt.mockResolvedValue('test-structured.md');
      
      const result = await tools.handleToolCall(request as any);
      
      // Should call savePrompt with structured frontmatter
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'test-structured',
        expect.stringContaining('title: "Test Structured Prompt"')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'test-structured',
        expect.stringContaining('category: "testing"')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'test-structured',
        expect.stringContaining('tags: ["test","structured"]')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'test-structured',
        expect.stringContaining('difficulty: "intermediate"')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'test-structured',
        expect.stringContaining('# Test Content')
      );
      
      expect(result.content[0].text).toContain('Structured prompt "test-structured" created successfully');
      expect(result.content[0].text).toContain('Category: testing');
      expect(result.content[0].text).toContain('Tags: test, structured');
    });

    it('should use defaults for optional fields', async () => {
      const request = createMockCallToolRequest('create_structured_prompt', {
        name: 'minimal-structured',
        title: 'Minimal Prompt',
        description: 'A minimal prompt',
        content: 'Just content.'
      });
      
      mockFileOps.savePrompt.mockResolvedValue('minimal-structured.md');
      
      const result = await tools.handleToolCall(request as any);
      
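      // Omitted optional fields should fall back to the defaults asserted below.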
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'minimal-structured',
        expect.stringContaining('category: "general"')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'minimal-structured',
        expect.stringContaining('tags: ["general"]')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'minimal-structured',
        expect.stringContaining('difficulty: "beginner"')
      );
      expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
        'minimal-structured',
        expect.stringContaining('author: "User"')
      );
    });

    it('should handle missing required fields', async () => {
      const request = createMockCallToolRequest('create_structured_prompt', {
        name: 'incomplete',
        title: 'Missing Description'
        // Missing description and content
      });
      
      const result = await tools.handleToolCall(request as any);
      
      expect(result.isError).toBe(true);
      expect(result.content[0].text).toContain('Name, content, title, and description are required');
    });
  });
});
```