# Directory Structure

```
├── .claude
│   └── settings.local.json
├── .gitignore
├── CLAUDE.md
├── LICENSE
├── package.json
├── prompts
│   ├── api_design.md
│   ├── chinese_text_summarizer.md
│   ├── code_review.md
│   ├── debugging_assistant.md
│   └── git_commit_push.md
├── README.md
├── src
│   ├── cache.ts
│   ├── fileOperations.ts
│   ├── index.ts
│   ├── tools.ts
│   └── types.ts
├── tests
│   ├── cache.test.ts
│   ├── fileOperations.test.ts
│   ├── helpers
│   │   ├── mocks.ts
│   │   └── testUtils.ts
│   ├── index.test.ts
│   ├── tools.test.ts
│   └── types.test.ts
├── tsconfig.json
└── vitest.config.ts
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Dependencies
 2 | node_modules/
 3 | package-lock.json
 4 | npm-debug.log*
 5 | yarn-debug.log*
 6 | yarn-error.log*
 7 | 
 8 | # Build output
 9 | dist/
10 | 
11 | # Test coverage
12 | coverage/
13 | 
14 | # Runtime data
15 | pids
16 | *.pid
17 | *.seed
18 | *.pid.lock
19 | 
20 | # Environment files
21 | .env
22 | .env.local
23 | .env.development.local
24 | .env.test.local
25 | .env.production.local
26 | 
27 | # IDE files
28 | .vscode/
29 | .idea/
30 | *.swp
31 | *.swo
32 | *~
33 | 
34 | # OS generated files
35 | .DS_Store
36 | .DS_Store?
37 | ._*
38 | .Spotlight-V100
39 | .Trashes
40 | ehthumbs.db
41 | Thumbs.db
42 | 
43 | # Logs
44 | logs
45 | *.log
46 | 
47 | # Temporary files
48 | tmp/
49 | temp/
50 | 
51 | # Cache
52 | .npm
53 | .eslintcache
54 | .cache/
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Prompts MCP Server
  2 | 
  3 | A Model Context Protocol (MCP) server for managing and providing prompts. This server allows users and LLMs to easily add, retrieve, and manage prompt templates stored as markdown files with YAML frontmatter support.
  4 | 
  5 | <a href="https://glama.ai/mcp/servers/@tanker327/prompts-mcp-server">
  6 |   <img width="380" height="200" src="https://glama.ai/mcp/servers/@tanker327/prompts-mcp-server/badge" alt="Prompts Server MCP server" />
  7 | </a>
  8 | 
  9 | ## Quick Start
 10 | 
 11 | ```bash
 12 | # 1. Install from NPM
 13 | npm install -g prompts-mcp-server
 14 | 
 15 | # 2. Add to your MCP client config (e.g., Claude Desktop)
 16 | # Add this to ~/Library/Application Support/Claude/claude_desktop_config.json:
 17 | {
 18 |   "mcpServers": {
 19 |     "prompts-mcp-server": {
 20 |       "command": "prompts-mcp-server"
 21 |     }
 22 |   }
 23 | }
 24 | 
 25 | # 3. Restart your MCP client and start using the tools!
 26 | ```
 27 | 
 28 | ## Features
 29 | 
 30 | - **Add Prompts**: Store new prompts as markdown files with YAML frontmatter
 31 | - **Retrieve Prompts**: Get specific prompts by name
 32 | - **List Prompts**: View all available prompts with metadata preview
 33 | - **Delete Prompts**: Remove prompts from the collection
 34 | - **File-based Storage**: Prompts are stored as markdown files in the `prompts/` directory
 35 | - **Real-time Caching**: In-memory cache with automatic file change monitoring
 36 | - **YAML Frontmatter**: Support for structured metadata (title, description, tags, etc.)
 37 | - **TypeScript**: Full TypeScript implementation with comprehensive type definitions
 38 | - **Modular Architecture**: Clean separation of concerns with dependency injection
 39 | - **Comprehensive Testing**: 95 tests with 84.53% code coverage
 40 | 
 41 | ## Installation
 42 | 
 43 | ### Option 1: From NPM (Recommended)
 44 | 
 45 | Install the package globally from NPM:
 46 | ```bash
 47 | npm install -g prompts-mcp-server
 48 | ```
 49 | This makes the `prompts-mcp-server` command available on your system.
 50 | 
 51 | After installation, you need to configure your MCP client to use it. See [MCP Client Configuration](#mcp-client-configuration).
 52 | 
 53 | ### Option 2: From GitHub (for development)
 54 | 
 55 | ```bash
 56 | # Clone the repository
 57 | git clone https://github.com/tanker327/prompts-mcp-server.git
 58 | cd prompts-mcp-server
 59 | 
 60 | # Install dependencies
 61 | npm install
 62 | 
 63 | # Build the TypeScript code
 64 | npm run build
 65 | 
 66 | # Test the installation
 67 | npm test
 68 | ```
 69 | 
 70 | ### Option 3: Direct Download
 71 | 
 72 | 1. Download the latest release from GitHub
 73 | 2. Extract to your desired location
 74 | 3. Run installation steps from Option 2.
 75 | 
 76 | ### Verification
 77 | 
 78 | After installation, verify the server works:
 79 | 
 80 | ```bash
 81 | # Start the server (should show no errors)
 82 | npm start
 83 | 
 84 | # Or test with MCP Inspector
 85 | npx @modelcontextprotocol/inspector prompts-mcp-server
 86 | ```
 87 | 
 88 | ## Testing
 89 | 
 90 | Run the comprehensive test suite:
 91 | ```bash
 92 | npm test
 93 | ```
 94 | 
 95 | Run tests with coverage:
 96 | ```bash
 97 | npm run test:coverage
 98 | ```
 99 | 
100 | Watch mode for development:
101 | ```bash
102 | npm run test:watch
103 | ```
104 | 
105 | ## MCP Tools
106 | 
107 | The server provides the following tools:
108 | 
109 | ### `add_prompt`
110 | Add a new prompt to the collection. If no YAML frontmatter is provided, default metadata will be automatically added.
111 | - **name** (string): Name of the prompt
112 | - **content** (string): Content of the prompt in markdown format with optional YAML frontmatter
113 | 
114 | ### `create_structured_prompt`
115 | Create a new prompt with guided metadata structure and validation.
116 | - **name** (string): Name of the prompt
117 | - **title** (string): Human-readable title for the prompt
118 | - **description** (string): Brief description of what the prompt does
119 | - **category** (string, optional): Category (defaults to "general")
120 | - **tags** (array, optional): Array of tags for categorization (defaults to ["general"])
121 | - **difficulty** (string, optional): "beginner", "intermediate", or "advanced" (defaults to "beginner")
122 | - **author** (string, optional): Author of the prompt (defaults to "User")
123 | - **content** (string): The actual prompt content (markdown)
124 | 
125 | ### `get_prompt` 
126 | Retrieve a prompt by name.
127 | - **name** (string): Name of the prompt to retrieve
128 | 
129 | ### `list_prompts`
130 | List all available prompts with metadata preview. No parameters required.
131 | 
132 | ### `delete_prompt`
133 | Delete a prompt by name.
134 | - **name** (string): Name of the prompt to delete
135 | 
136 | ## Usage Examples
137 | 
138 | Once connected to an MCP client, you can use the tools like this:
139 | 
140 | ### Method 1: Quick prompt creation with automatic metadata
141 | ```javascript
142 | // Add a prompt without frontmatter - metadata will be added automatically
143 | add_prompt({
144 |   name: "debug_helper",
145 |   content: `# Debug Helper
146 | 
147 | Help me debug this issue by:
148 | 1. Analyzing the error message
149 | 2. Suggesting potential causes
150 | 3. Recommending debugging steps`
151 | })
152 | // This automatically adds default frontmatter with title "Debug Helper", category "general", etc.
153 | ```
154 | 
155 | ### Method 2: Structured prompt creation with full metadata control
156 | ```javascript
157 | // Create a prompt with explicit metadata using the structured tool
158 | create_structured_prompt({
159 |   name: "code_review",
160 |   title: "Code Review Assistant",
161 |   description: "Helps review code for best practices and potential issues",
162 |   category: "development",
163 |   tags: ["code", "review", "quality"],
164 |   difficulty: "intermediate",
165 |   author: "Development Team",
166 |   content: `# Code Review Prompt
167 | 
168 | Please review the following code for:
169 | - Code quality and best practices
170 | - Potential bugs or issues
171 | - Performance considerations
172 | - Security vulnerabilities
173 | 
174 | ## Code to Review
175 | [Insert code here]`
176 | })
177 | ```
178 | 
179 | ### Method 3: Manual frontmatter (preserves existing metadata)
180 | ```javascript
181 | // Add a prompt with existing frontmatter - no changes made
182 | add_prompt({
183 |   name: "custom_prompt",
184 |   content: `---
185 | title: "Custom Assistant"
186 | category: "specialized"
187 | tags: ["custom", "specific"]
188 | difficulty: "advanced"
189 | ---
190 | 
191 | # Custom Prompt Content
192 | Your specific prompt here...`
193 | })
194 | ```
195 | 
196 | ### Other operations
197 | ```javascript
198 | // Get a prompt
199 | get_prompt({ name: "code_review" })
200 | 
201 | // List all prompts (shows metadata preview)
202 | list_prompts({})
203 | 
204 | // Delete a prompt
205 | delete_prompt({ name: "old_prompt" })
206 | ```
207 | 
208 | ## File Structure
209 | 
210 | ```
211 | prompts-mcp-server/
212 | ├── src/
213 | │   ├── index.ts          # Main server orchestration
214 | │   ├── types.ts          # TypeScript type definitions
215 | │   ├── cache.ts          # Caching system with file watching
216 | │   ├── fileOperations.ts # File I/O operations
217 | │   └── tools.ts          # MCP tool definitions and handlers
218 | ├── tests/
219 | │   ├── helpers/
220 | │   │   ├── testUtils.ts  # Test utilities
221 | │   │   └── mocks.ts      # Mock implementations
222 | │   ├── cache.test.ts     # Cache module tests
223 | │   ├── fileOperations.test.ts # File operations tests
224 | │   ├── tools.test.ts     # Tools module tests
225 | │   └── index.test.ts     # Integration tests
226 | ├── prompts/              # Directory for storing prompt markdown files
227 | │   ├── code_review.md
228 | │   ├── debugging_assistant.md
229 | │   └── api_design.md
230 | ├── dist/                 # Compiled JavaScript output
231 | ├── CLAUDE.md            # Development documentation
232 | ├── package.json
233 | ├── tsconfig.json
234 | └── README.md
235 | ```
236 | 
237 | ## Architecture
238 | 
239 | The server uses a modular architecture with the following components:
240 | 
241 | - **PromptCache**: In-memory caching with real-time file change monitoring via chokidar
242 | - **PromptFileOperations**: File I/O operations with cache integration
243 | - **PromptTools**: MCP tool definitions and request handlers
244 | - **Type System**: Comprehensive TypeScript types for all data structures
245 | 
246 | ## YAML Frontmatter Support
247 | 
248 | Prompts can include structured metadata using YAML frontmatter:
249 | 
250 | ```yaml
251 | ---
252 | title: "Prompt Title"
253 | description: "Brief description of the prompt"
254 | category: "development"
255 | tags: ["tag1", "tag2", "tag3"]
256 | difficulty: "beginner"  # one of "beginner", "intermediate", "advanced"
257 | author: "Author Name"
258 | version: "1.0"
259 | ---
260 | 
261 | # Prompt Content
262 | 
263 | Your prompt content goes here...
264 | ```
265 | 
266 | ## MCP Client Configuration
267 | 
268 | This server can be configured with various MCP-compatible applications. Here are setup instructions for popular clients:
269 | 
270 | ### Claude Desktop
271 | 
272 | Add this to your Claude Desktop configuration file:
273 | 
274 | **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
275 | **Windows**: `%APPDATA%/Claude/claude_desktop_config.json`
276 | 
277 | ```json
278 | {
279 |   "mcpServers": {
280 |     "prompts-mcp-server": {
281 |       "command": "prompts-mcp-server",
282 |       "env": {
283 |         "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
284 |       }
285 |     }
286 |   }
287 | }
288 | ```
289 | 
290 | ### Cline (VS Code Extension)
291 | 
292 | Add to your Cline MCP settings in VS Code:
293 | 
294 | ```json
295 | {
296 |   "cline.mcp.servers": {
297 |     "prompts-mcp-server": {
298 |       "command": "prompts-mcp-server",
299 |       "env": {
300 |         "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
301 |       }
302 |     }
303 |   }
304 | }
305 | ```
306 | 
307 | ### Continue.dev
308 | 
309 | In your `~/.continue/config.json`:
310 | 
311 | ```json
312 | {
313 |   "mcpServers": [
314 |     {
315 |       "name": "prompts-mcp-server",
316 |       "command": "prompts-mcp-server",
317 |       "env": {
318 |         "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
319 |       }
320 |     }
321 |   ]
322 | }
323 | ```
324 | 
325 | ### Zed Editor
326 | 
327 | In your Zed settings (`~/.config/zed/settings.json`):
328 | 
329 | ```json
330 | {
331 |   "assistant": {
332 |     "mcp_servers": {
333 |       "prompts-mcp-server": {
334 |         "command": "prompts-mcp-server",
335 |         "env": {
336 |           "PROMPTS_FOLDER_PATH": "/path/to/your/prompts/directory"
337 |         }
338 |       }
339 |     }
340 |   }
341 | }
342 | ```
343 | 
344 | ### Custom MCP Client
345 | 
346 | For any MCP-compatible application, use these connection details:
347 | 
348 | - **Protocol**: Model Context Protocol (MCP)
349 | - **Transport**: stdio
350 | - **Command**: `prompts-mcp-server`
351 | - **Environment Variables**: 
352 |   - `PROMPTS_FOLDER_PATH`: Custom directory for storing prompts (optional, defaults to `./prompts`)
353 | 
354 | ### Development/Testing Setup
355 | 
356 | For development or testing with the MCP Inspector:
357 | 
358 | ```bash
359 | # Install MCP Inspector
360 | npm install -g @modelcontextprotocol/inspector
361 | 
362 | # Run the server with inspector
363 | npx @modelcontextprotocol/inspector prompts-mcp-server
364 | ```
365 | 
366 | ### Docker Configuration
367 | 
368 | Create a `docker-compose.yml` for containerized deployment:
369 | 
370 | ```yaml
371 | version: '3.8'
372 | services:
373 |   prompts-mcp-server:
374 |     build: .
375 |     environment:
376 |       - PROMPTS_FOLDER_PATH=/app/prompts
377 |     volumes:
378 |       - ./prompts:/app/prompts
379 |     stdin_open: true
380 |     tty: true
381 | ```
382 | 
383 | ## Server Configuration
384 | 
385 | - The server automatically creates the `prompts/` directory if it doesn't exist
386 | - Prompt files are automatically sanitized to use safe filenames (alphanumeric characters, hyphens, and underscores only)
387 | - File changes are monitored in real-time and cache is updated automatically
388 | - Prompts directory can be customized via the `PROMPTS_FOLDER_PATH` environment variable
389 | 
390 | ### Environment Variables
391 | 
392 | | Variable | Description | Default |
393 | |----------|-------------|---------|
394 | | `PROMPTS_FOLDER_PATH` | Custom directory to store prompt files (overrides default) | (not set) |
395 | | `NODE_ENV` | Environment mode | `production` |
396 | 
397 | > **Note**: If `PROMPTS_FOLDER_PATH` is set, it will be used as the prompts directory. If not set, the server defaults to `./prompts` relative to the server location.
398 | 
399 | ## Requirements
400 | 
401 | - Node.js 18.0.0 or higher
402 | - TypeScript 5.0.0 or higher
403 | - Dependencies:
404 |   - @modelcontextprotocol/sdk ^1.0.0
405 |   - gray-matter ^4.0.3 (YAML frontmatter parsing)
406 |   - chokidar ^3.5.3 (file watching)
407 | 
408 | ## Development
409 | 
410 | The project includes comprehensive tooling for development:
411 | 
412 | - **TypeScript**: Strict type checking and modern ES modules
413 | - **Vitest**: Fast testing framework with 95 tests and 84.53% coverage
414 | - **ESLint**: Code linting (if configured)
415 | - **File Watching**: Real-time cache updates during development
416 | 
417 | ## Troubleshooting
418 | 
419 | ### Common Issues
420 | 
421 | #### "Module not found" errors
422 | ```bash
423 | # Ensure TypeScript is built
424 | npm run build
425 | 
426 | # Check that dist/ directory exists and contains .js files
427 | ls dist/
428 | ```
429 | 
430 | #### MCP client can't connect
431 | 1. Verify the server starts without errors: `npm start`
432 | 2. Check the correct path is used in client configuration
433 | 3. Ensure Node.js 18+ is installed: `node --version`
434 | 4. Test with MCP Inspector: `npx @modelcontextprotocol/inspector prompts-mcp-server`
435 | 
436 | #### Permission errors with prompts directory
437 | ```bash
438 | # Ensure the prompts directory is writable
439 | mkdir -p ./prompts
440 | chmod 755 ./prompts
441 | ```
442 | 
443 | #### File watching not working
44 | - On Linux: Ensure the inotify watch limit is high enough (e.g. raise `fs.inotify.max_user_watches` via sysctl)
45 | - On macOS: No additional setup needed
46 | - On Windows: Use a native Node.js installation; file watching also works under WSL
447 | 
448 | ### Debug Mode
449 | 
450 | Enable debug logging by setting environment variables:
451 | 
452 | ```bash
453 | # Enable debug mode
454 | DEBUG=* node dist/index.js
455 | 
456 | # Or with specific debug namespace
457 | DEBUG=prompts-mcp:* node dist/index.js
458 | ```
459 | 
460 | ### Getting Help
461 | 
462 | 1. Check the [GitHub Issues](https://github.com/tanker327/prompts-mcp-server/issues)
463 | 2. Review the test files for usage examples
464 | 3. Use MCP Inspector for debugging client connections
465 | 4. Check your MCP client's documentation for configuration details
466 | 
467 | ### Performance Tips
468 | 
469 | - The server uses in-memory caching for fast prompt retrieval
470 | - File watching automatically updates the cache when files change
471 | - Large prompt collections (1000+ files) work efficiently due to caching
472 | - Consider using SSD storage for better file I/O performance
473 | 
474 | 
475 | ## Community Variants & Extensions
476 | 
477 | | Project | Maintainer | Extra Features |
478 | |---------|-----------|----------------|
479 | | [smart-prompts-mcp](https://github.com/jezweb/smart-prompts-mcp) | [@jezweb](https://github.com/jezweb) | GitHub-hosted prompt libraries, advanced search & composition, richer TypeScript types, etc. |
480 | 
481 | 👉 Have you built something cool on top of **prompts-mcp-server**?  
482 | Open an issue or PR to add it here so others can discover your variant!
483 | 
484 | ## License
485 | 
486 | MIT
```
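
The default-metadata behavior of `add_prompt` described in the README above can be sketched as follows. This is a minimal illustration: the function names and the exact default fields are assumptions for clarity, not the actual `src/` implementation.

```typescript
// Hypothetical sketch of "if no YAML frontmatter is provided, default
// metadata will be automatically added" (names and defaults are illustrative).
function hasFrontmatter(content: string): boolean {
  return content.startsWith("---\n");
}

function withDefaultFrontmatter(name: string, content: string): string {
  if (hasFrontmatter(content)) return content; // Method 3: preserve as-is

  // Derive a human-readable title from the prompt name,
  // e.g. "debug_helper" -> "Debug Helper".
  const title = name
    .split(/[-_]/)
    .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
    .join(" ");

  return `---\ntitle: "${title}"\ncategory: "general"\ntags: ["general"]\ndifficulty: "beginner"\n---\n\n${content}`;
}
```

Under these assumptions, `withDefaultFrontmatter("debug_helper", "# Debug Helper\n...")` would prepend a frontmatter block titled "Debug Helper", while content that already starts with `---` passes through unchanged.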

--------------------------------------------------------------------------------
/CLAUDE.md:
--------------------------------------------------------------------------------

```markdown
  1 | # CLAUDE.md
  2 | 
  3 | This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
  4 | 
  5 | ## Development Commands
  6 | 
  7 | - `npm install` - Install dependencies
  8 | - `npm run build` - Compile TypeScript to JavaScript
  9 | - `npm start` - Build and start the MCP server 
 10 | - `npm run dev` - Start server with auto-reload for development (uses tsx)
 11 | - `npm test` - Run all tests (exits automatically)
 12 | - `npm run test:watch` - Run tests in watch mode (continuous)
 13 | - `npm run test:coverage` - Run tests with coverage report (exits automatically)
 14 | 
 15 | ## Architecture Overview
 16 | 
 17 | This is an MCP (Model Context Protocol) server written in TypeScript that manages prompt templates stored as markdown files with YAML frontmatter metadata.
 18 | 
 19 | ### Core Components
 20 | 
 21 | **Main Server (`src/index.ts`)**
 22 | - Entry point that orchestrates all components
 23 | - Handles MCP server initialization and graceful shutdown
 24 | - Registers tool handlers and connects to stdio transport
 25 | - Minimal orchestration layer that delegates to specialized modules
 26 | 
 27 | **Type Definitions (`src/types.ts`)**
 28 | - Central location for all TypeScript interfaces
 29 | - `PromptMetadata`, `PromptInfo`, `ToolArguments`, `ServerConfig`
 30 | - Ensures type consistency across all modules
 31 | 
 32 | **Caching System (`src/cache.ts`)**
 33 | - `PromptCache` class manages in-memory prompt metadata
 34 | - File watcher integration with chokidar for real-time updates
 35 | - Handles cache initialization, updates, and cleanup
 36 | - Provides methods for cache access and management
 37 | 
 38 | **File Operations (`src/fileOperations.ts`)**
 39 | - `PromptFileOperations` class handles all file I/O
 40 | - CRUD operations: create, read, update, delete prompts
 41 | - Filename sanitization and directory management
 42 | - Integrates with cache for optimal performance
 43 | 
 44 | **MCP Tools (`src/tools.ts`)**
 45 | - `PromptTools` class implements MCP tool definitions and handlers
 46 | - Handles all 5 MCP tools: `add_prompt`, `create_structured_prompt`, `get_prompt`, `list_prompts`, `delete_prompt`
 47 | - Tool validation, execution, and response formatting
 48 | - Clean separation between MCP protocol and business logic
 49 | 
 50 | ### Module Dependencies
 51 | 
 52 | ```
 53 | index.ts (main)
 54 | ├── cache.ts (PromptCache)
 55 | ├── fileOperations.ts (PromptFileOperations)
 56 | │   └── cache.ts (dependency)
 57 | ├── tools.ts (PromptTools)
 58 | │   └── fileOperations.ts (dependency)
 59 | └── types.ts (shared interfaces)
 60 | ```
 61 | 
 62 | ### Data Flow
 63 | 
 64 | 1. **Startup**: Main orchestrates cache initialization and file watcher setup
 65 | 2. **File Changes**: PromptCache detects changes and updates cache automatically  
 66 | 3. **MCP Requests**: PromptTools delegates to PromptFileOperations which uses cached data
 67 | 4. **File Operations**: PromptFileOperations writes to filesystem, cache auto-updates via watcher
 68 | 
 69 | ### Testing Strategy
 70 | 
 71 | Modular design enables easy unit testing:
 72 | - Each class can be tested in isolation with dependency injection
 73 | - Cache operations can be tested without file I/O
 74 | - File operations can be tested with mock cache
 75 | - Tool handlers can be tested with mock file operations
 76 | 
 77 | ## Key Implementation Details
 78 | 
 79 | - **Modular Architecture**: Clean separation of concerns across 5 focused modules
 80 | - **TypeScript**: Full type safety with centralized type definitions
 81 | - **Build Process**: TypeScript compiles to `dist/` directory, source in `src/`
 82 | - **Development**: Uses `tsx` for hot-reload during development
 83 | - **Dependency Injection**: Classes accept dependencies via constructor for testability
 84 | - **Graceful Shutdown**: Proper cleanup of file watchers and resources
 85 | - Server communicates via stdio (not HTTP)
 86 | - ES modules used throughout (`type: "module"` in package.json)
 87 | - Error handling returns MCP-compatible error responses with `isError: true`
 88 | - Console.error() used for logging (stderr) to avoid interfering with stdio transport
 89 | 
 90 | ## Module Overview
 91 | 
 92 | - **`types.ts`**: Shared interfaces and type definitions
 93 | - **`cache.ts`**: In-memory caching with file watching (PromptCache class)
 94 | - **`fileOperations.ts`**: File I/O operations (PromptFileOperations class)  
 95 | - **`tools.ts`**: MCP tool definitions and handlers (PromptTools class)
 96 | - **`index.ts`**: Main orchestration and server setup
 97 | 
 98 | Each module is independently testable and has a single responsibility.
 99 | 
100 | ## Testing
101 | 
102 | The project includes comprehensive test coverage using Vitest:
103 | 
104 | ### Test Structure
105 | ```
106 | tests/
107 | ├── helpers/
108 | │   ├── testUtils.ts    # Test utilities and helper functions
109 | │   └── mocks.ts        # Mock implementations for testing
110 | ├── types.test.ts       # Type definition tests
111 | ├── cache.test.ts       # PromptCache class tests
112 | ├── fileOperations.test.ts  # PromptFileOperations class tests
113 | ├── tools.test.ts       # PromptTools class tests
114 | └── index.test.ts       # Integration tests
115 | ```
116 | 
117 | ### Test Coverage
118 | - **Unit Tests**: Each class tested in isolation with dependency injection
119 | - **Integration Tests**: End-to-end workflows and component interactions
120 | - **Error Handling**: Comprehensive error scenarios and edge cases
121 | - **File System**: Real file operations and mock scenarios
122 | - **MCP Protocol**: Tool definitions and request/response handling
123 | 
124 | ### Testing Approach
125 | - **Mocking**: Uses Vitest mocking for external dependencies
126 | - **Temporary Files**: Creates isolated temp directories for file system tests
127 | - **Real Integration**: Tests actual file I/O, caching, and file watching
128 | - **Error Scenarios**: Tests failure modes and error propagation
129 | - **Type Safety**: Validates TypeScript interfaces and type constraints
130 | 
131 | ### Test Results
132 | - **95 tests** across all modules with **100% pass rate**
133 | - **84.53% overall coverage** with critical modules at 98-100% coverage
134 | - **Fast execution** with proper test isolation and cleanup
135 | 
136 | ## Development Workflow
137 | 
138 | 1. **Install dependencies**: `npm install`
139 | 2. **Run tests**: `npm test` (verifies everything works)
140 | 3. **Start development**: `npm run dev` (auto-reload)
141 | 4. **Build for production**: `npm run build`
142 | 5. **Run built server**: `npm start`
143 | 
144 | ## File Structure
145 | 
146 | ```
147 | prompts-mcp/
148 | ├── src/                    # TypeScript source code
149 | │   ├── types.ts           # Type definitions
150 | │   ├── cache.ts           # Caching system
151 | │   ├── fileOperations.ts  # File I/O operations
152 | │   ├── tools.ts           # MCP tool handlers
153 | │   └── index.ts           # Main server entry point
154 | ├── tests/                 # Comprehensive test suite
155 | │   ├── helpers/           # Test utilities and mocks
156 | │   └── *.test.ts          # Test files for each module
157 | ├── prompts/               # Prompt storage directory
158 | ├── dist/                  # Compiled JavaScript (after build)
159 | ├── package.json           # Dependencies and scripts
160 | ├── tsconfig.json          # TypeScript configuration
161 | ├── vitest.config.ts       # Test configuration
162 | └── CLAUDE.md              # This documentation
163 | ```
```
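
The constructor-injection wiring in the dependency diagram above can be sketched roughly like this. The class bodies are simplified stand-ins for the real `src/` modules, kept here only to show the injection pattern:

```typescript
// Simplified sketch of the index.ts-style wiring; the real classes also
// handle file I/O, watching, and MCP request plumbing.
class PromptCache {
  private cache = new Map<string, string>();
  get(name: string): string | undefined { return this.cache.get(name); }
  set(name: string, content: string): void { this.cache.set(name, content); }
}

class PromptFileOperations {
  // Cache is injected via the constructor, so tests can pass a mock.
  constructor(private cache: PromptCache) {}
  readPrompt(name: string): string | undefined {
    return this.cache.get(name); // the real class falls back to disk
  }
}

class PromptTools {
  constructor(private fileOps: PromptFileOperations) {}
  handleGetPrompt(name: string): string | undefined {
    return this.fileOps.readPrompt(name);
  }
}

// Wiring mirrors the module dependency graph: tools -> fileOps -> cache.
const cache = new PromptCache();
const tools = new PromptTools(new PromptFileOperations(cache));
```

Because each dependency arrives through the constructor, any layer can be replaced with a mock in isolation, which is what enables the per-module testing strategy described above.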

--------------------------------------------------------------------------------
/vitest.config.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { defineConfig } from 'vitest/config';
 2 | 
 3 | export default defineConfig({
 4 |   test: {
 5 |     globals: true,
 6 |     environment: 'node',
 7 |     coverage: {
 8 |       provider: 'v8',
 9 |       reporter: ['text', 'json', 'html'],
10 |       exclude: [
11 |         'node_modules/',
12 |         'dist/',
13 |         '**/*.d.ts',
14 |         'coverage/',
15 |         'vitest.config.ts'
16 |       ]
17 |     },
18 |     include: ['tests/**/*.test.ts'],
19 |     exclude: ['node_modules/', 'dist/']
20 |   },
21 | });
```

--------------------------------------------------------------------------------
/.claude/settings.local.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "permissions": {
 3 |     "allow": [
 4 |       "Bash(mkdir:*)",
 5 |       "Bash(git init:*)",
 6 |       "Bash(git add:*)",
 7 |       "Bash(rm:*)",
 8 |       "Bash(npm install)",
 9 |       "Bash(npm test)",
10 |       "Bash(rg:*)",
11 |       "Bash(npm test:*)",
12 |       "Bash(npm run test:coverage:*)",
13 |       "Bash(gh auth:*)",
14 |       "Bash(gh repo create:*)",
15 |       "Bash(git remote add:*)",
16 |       "Bash(git branch:*)",
17 |       "Bash(git push:*)",
18 |       "Bash(git commit:*)",
19 |       "Bash(npm run build:*)",
20 |       "Bash(ls:*)",
21 |       "Bash(git rm:*)",
22 |       "Bash(black:*)"
23 |     ],
24 |     "deny": []
25 |   }
26 | }
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2022",
 4 |     "module": "ESNext",
 5 |     "moduleResolution": "node",
 6 |     "outDir": "./dist",
 7 |     "rootDir": "./src",
 8 |     "strict": true,
 9 |     "esModuleInterop": true,
10 |     "skipLibCheck": true,
11 |     "forceConsistentCasingInFileNames": true,
12 |     "declaration": true,
13 |     "declarationMap": true,
14 |     "sourceMap": true,
15 |     "resolveJsonModule": true,
16 |     "allowSyntheticDefaultImports": true,
17 |     "isolatedModules": true,
18 |     "noEmitOnError": true,
19 |     "exactOptionalPropertyTypes": true,
20 |     "noImplicitReturns": true,
21 |     "noFallthroughCasesInSwitch": true,
22 |     "noUncheckedIndexedAccess": true
23 |   },
24 |   "include": [
25 |     "src/**/*"
26 |   ],
27 |   "exclude": [
28 |     "node_modules",
29 |     "dist"
30 |   ]
31 | }
```
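
Several of the strict flags above change how ordinary code typechecks. For example, `noUncheckedIndexedAccess` types every indexed read as `T | undefined`, forcing a guard before use; a small illustration:

```typescript
// With "noUncheckedIndexedAccess": true, element access is typed
// string | undefined even though the array is non-empty at runtime.
const tags: string[] = ["code", "review"];
const first = tags[0]; // type: string | undefined under this flag
const upper = first !== undefined ? first.toUpperCase() : "";
console.log(upper);
```

Without the guard, calling `first.toUpperCase()` directly would be a compile error under this configuration, which is the kind of defect `strict` mode is meant to surface early.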

--------------------------------------------------------------------------------
/src/types.ts:
--------------------------------------------------------------------------------

```typescript
 1 | /**
 2 |  * Type definitions for the prompts MCP server
 3 |  */
 4 | 
 5 | export interface PromptMetadata {
 6 |   title?: string;
 7 |   description?: string;
 8 |   category?: string;
 9 |   tags?: string[];
10 |   difficulty?: 'beginner' | 'intermediate' | 'advanced';
11 |   author?: string;
12 |   version?: string;
13 |   [key: string]: unknown;
14 | }
15 | 
16 | export interface PromptInfo {
17 |   name: string;
18 |   metadata: PromptMetadata;
19 |   preview: string;
20 | }
21 | 
22 | export interface ToolArguments {
23 |   name?: string;
24 |   filename?: string;
25 |   content?: string;
26 |   // Fields for create_structured_prompt
27 |   title?: string;
28 |   description?: string;
29 |   category?: string;
30 |   tags?: string[];
31 |   difficulty?: 'beginner' | 'intermediate' | 'advanced';
32 |   author?: string;
33 | }
34 | 
35 | export interface ServerConfig {
36 |   name: string;
37 |   version: string;
38 |   promptsDir: string;
39 |   prompts_folder_path?: string;
40 | }
```
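
A value conforming to the interfaces above looks like this; the interfaces are repeated (in abbreviated form) so the snippet is self-contained, and the sample data is illustrative:

```typescript
// Abbreviated copies of the shapes defined in src/types.ts.
interface PromptMetadata {
  title?: string;
  tags?: string[];
  difficulty?: "beginner" | "intermediate" | "advanced";
  [key: string]: unknown; // extra frontmatter keys are allowed
}

interface PromptInfo {
  name: string;
  metadata: PromptMetadata;
  preview: string;
}

const info: PromptInfo = {
  name: "code_review",
  metadata: {
    title: "Code Review Assistant",
    tags: ["code", "review"],
    difficulty: "intermediate",
    version: "1.0", // accepted via the index signature
  },
  preview: "You are an experienced software engineer performing a code review...",
};
```

The index signature is what lets arbitrary frontmatter keys (like `version` here) flow through parsing without widening the known, typed fields.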

--------------------------------------------------------------------------------
/prompts/code_review.md:
--------------------------------------------------------------------------------

```markdown
 1 | ---
 2 | title: "Code Review Assistant"
 3 | description: "Comprehensive code review with focus on quality, performance, and security"
 4 | category: "development"
 5 | tags: ["code-review", "quality", "security", "performance"]
 6 | difficulty: "intermediate"
 7 | author: "System"
 8 | version: "1.0"
 9 | ---
10 | 
11 | # Code Review Prompt
12 | 
13 | You are an experienced software engineer performing a code review. Please review the following code and provide feedback on:
14 | 
15 | 1. **Code Quality**: Look for best practices, readability, and maintainability
16 | 2. **Performance**: Identify potential performance issues or optimizations
17 | 3. **Security**: Check for security vulnerabilities or concerns
18 | 4. **Testing**: Assess test coverage and suggest additional test cases
19 | 5. **Documentation**: Evaluate code comments and documentation
20 | 
21 | Please be constructive in your feedback and suggest specific improvements where applicable.
```

--------------------------------------------------------------------------------
/prompts/debugging_assistant.md:
--------------------------------------------------------------------------------

```markdown
 1 | ---
 2 | title: "Debugging Assistant"
 3 | description: "Systematic debugging approach for identifying and resolving code issues"
 4 | category: "development"
 5 | tags: ["debugging", "troubleshooting", "problem-solving"]
 6 | difficulty: "beginner"
 7 | author: "System"
 8 | version: "1.0"
 9 | ---
10 | 
11 | # Debugging Assistant Prompt
12 | 
13 | You are a debugging expert helping to identify and resolve issues in code. When analyzing problems:
14 | 
15 | 1. **Understand the Problem**: Ask clarifying questions about the expected vs actual behavior
16 | 2. **Analyze the Code**: Look for common issues like:
17 |    - Logic errors
18 |    - Type mismatches
19 |    - Null/undefined references
20 |    - Race conditions
21 |    - Memory leaks
22 | 3. **Systematic Approach**: 
23 |    - Check inputs and outputs
24 |    - Verify assumptions
25 |    - Test edge cases
26 |    - Use debugging tools effectively
27 | 4. **Provide Solutions**: Offer step-by-step debugging strategies and potential fixes
28 | 
29 | Focus on teaching debugging techniques while solving the immediate problem.
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |     "name": "prompts-mcp-server",
 3 |     "version": "1.2.0",
 4 |     "description": "MCP server for managing and providing prompts",
 5 |     "main": "dist/index.js",
 6 |     "type": "module",
 7 |     "bin": {
 8 |         "prompts-mcp-server": "dist/index.js"
 9 |     },
10 |     "files": [
11 |         "dist"
12 |     ],
13 |     "scripts": {
14 |         "build": "tsc",
15 |         "start": "npm run build && node dist/index.js",
16 |         "dev": "tsx --watch src/index.ts",
17 |         "test": "vitest run",
18 |         "test:watch": "vitest",
19 |         "test:coverage": "vitest run --coverage"
20 |     },
21 |     "keywords": [
22 |         "mcp",
23 |         "prompts",
24 |         "ai",
25 |         "llm"
26 |     ],
27 |     "author": "Eric Wu",
28 |     "license": "MIT",
29 |     "dependencies": {
30 |         "@modelcontextprotocol/sdk": "^1.0.0",
31 |         "gray-matter": "^4.0.3",
32 |         "chokidar": "^3.5.3"
33 |     },
34 |     "devDependencies": {
35 |         "@types/node": "^20.0.0",
36 |         "@vitest/coverage-v8": "^1.0.0",
37 |         "typescript": "^5.0.0",
38 |         "tsx": "^4.0.0",
39 |         "vitest": "^1.0.0"
40 |     },
41 |     "engines": {
42 |         "node": ">=18.0.0"
43 |     }
44 | }
45 | 
```

--------------------------------------------------------------------------------
/prompts/api_design.md:
--------------------------------------------------------------------------------

```markdown
 1 | ---
 2 | title: "API Design Expert"
 3 | description: "RESTful API design with best practices and conventions"
 4 | category: "architecture"
 5 | tags: ["api", "rest", "design", "backend"]
 6 | difficulty: "advanced"
 7 | author: "System"
 8 | version: "1.0"
 9 | ---
10 | 
11 | # API Design Prompt
12 | 
13 | You are an API design expert. Help design RESTful APIs that are:
14 | 
15 | ## Design Principles
16 | 1. **RESTful**: Follow REST conventions and HTTP methods properly
17 | 2. **Consistent**: Use consistent naming, structure, and patterns
18 | 3. **Intuitive**: Make endpoints discoverable and self-explanatory
19 | 4. **Versioned**: Include versioning strategy for future changes
20 | 5. **Secure**: Implement proper authentication and authorization
21 | 
22 | ## Key Considerations
23 | - Resource naming (nouns, not verbs)
24 | - Correct use of HTTP status codes
25 | - Error handling and responses
26 | - Pagination for large datasets
27 | - Rate limiting and throttling
28 | - Documentation and examples
29 | 
30 | ## Response Format
31 | - Clear data structures
32 | - Meaningful error messages
33 | - Consistent field naming (camelCase or snake_case)
34 | - Include metadata when appropriate
35 | 
36 | Provide specific recommendations with examples for the given use case.
```

--------------------------------------------------------------------------------
/tests/helpers/mocks.ts:
--------------------------------------------------------------------------------

```typescript
 1 | /**
 2 |  * Mock implementations for testing
 3 |  */
 4 | 
 5 | import { vi } from 'vitest';
 6 | import { PromptInfo } from '../../src/types.js';
 7 | 
 8 | /**
 9 |  * Mock PromptCache class
10 |  */
11 | export class MockPromptCache {
12 |   private cache = new Map<string, PromptInfo>();
13 |   
14 |   getAllPrompts = vi.fn(() => Array.from(this.cache.values()));
15 |   getPrompt = vi.fn((name: string) => this.cache.get(name));
16 |   isEmpty = vi.fn(() => this.cache.size === 0);
17 |   size = vi.fn(() => this.cache.size);
18 |   initializeCache = vi.fn();
19 |   initializeFileWatcher = vi.fn();
20 |   cleanup = vi.fn();
21 | 
22 |   // Helper methods for testing
23 |   _setPrompt(name: string, prompt: PromptInfo) {
24 |     this.cache.set(name, prompt);
25 |   }
26 | 
27 |   _clear() {
28 |     this.cache.clear();
29 |   }
30 | }
31 | 
32 | /**
33 |  * Mock PromptFileOperations class
34 |  */
35 | export class MockPromptFileOperations {
36 |   listPrompts = vi.fn();
37 |   readPrompt = vi.fn();
38 |   savePrompt = vi.fn();
39 |   savePromptWithFilename = vi.fn();
40 |   deletePrompt = vi.fn();
41 |   promptExists = vi.fn();
42 |   getPromptInfo = vi.fn();
43 | }
44 | 
45 | /**
46 |  * Mock chokidar watcher
47 |  */
48 | export function createMockWatcher() {
49 |   const watcher = {
50 |     on: vi.fn().mockReturnThis(),
51 |     close: vi.fn().mockResolvedValue(undefined),
52 |     add: vi.fn().mockReturnThis(),
53 |     unwatch: vi.fn().mockReturnThis()
54 |   };
55 | 
56 |   return watcher;
57 | }
58 | 
59 | /**
60 |  * Mock process for testing
61 |  */
62 | export function createMockProcess() {
63 |   return {
64 |     on: vi.fn(),
65 |     exit: vi.fn(),
66 |     cwd: vi.fn(() => '/test/cwd')
67 |   };
68 | }
```

--------------------------------------------------------------------------------
/prompts/git_commit_push.md:
--------------------------------------------------------------------------------

```markdown
 1 | ---
 2 | title: "Git Commit and Push Assistant"
 3 | description: "Automatically stage, commit, and push all changes to the remote Git repository"
 4 | category: "development"
 5 | tags: ["git","version-control","commit","push","automation"]
 6 | difficulty: "beginner"
 7 | author: "User"
 8 | version: "1.0"
 9 | created: "2025-06-10"
10 | ---
11 | 
12 | # Git Commit and Push Assistant
13 | 
14 | You are a Git automation assistant. Your task is to:
15 | 
16 | 1. **Stage all changes** - Add all modified, new, and deleted files to the staging area
17 | 2. **Create a meaningful commit** - Generate an appropriate commit message based on the changes
18 | 3. **Push to remote** - Push the committed changes to the remote repository
19 | 
20 | ## Process:
21 | 
22 | 1. First, check the current Git status to see what files have been modified
23 | 2. Stage all changes using `git add .` or `git add -A`
24 | 3. Create a commit with a descriptive message that summarizes the changes
25 | 4. Push the changes to the remote repository (typically `git push origin main` or the current branch)
26 | 
27 | ## Commit Message Guidelines:
28 | 
29 | - Use present tense ("Add feature" not "Added feature")
30 | - Be concise but descriptive
31 | - If there are multiple types of changes, use a general message like "Update project files" or "Various improvements"
32 | - For specific changes, be more precise: "Fix authentication bug", "Add user profile component", "Update documentation"
33 | 
34 | ## Commands to execute:
35 | 
36 | ```bash
37 | # Check status
38 | git status
39 | 
40 | # Stage all changes
41 | git add .
42 | 
43 | # Commit with message
44 | git commit -m "[generated message based on changes]"
45 | 
46 | # Push to remote
47 | git push
48 | ```
49 | 
50 | ## Error Handling:
51 | 
52 | - If there are no changes to commit, inform the user
53 | - If there are merge conflicts, guide the user to resolve them first
54 | - If the remote repository requires authentication, provide guidance
55 | - If pushing to a protected branch, suggest creating a pull request instead
56 | 
57 | Execute these Git operations safely and provide clear feedback about what was done.
```
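The four-step process described above can be sketched in TypeScript (the repository's language). `commitAndPush` is a hypothetical helper, not part of this codebase; the injectable `run` parameter exists only so the sketch can be exercised without a real Git repository.

```typescript
import { execSync } from 'node:child_process';

type Run = (cmd: string) => string;

// Hypothetical sketch of the status -> stage -> commit -> push flow.
// Returns the list of commands that were actually executed.
function commitAndPush(
  message: string,
  run: Run = (cmd) => execSync(cmd, { encoding: 'utf-8' })
): string[] {
  const executed: string[] = [];
  const exec = (cmd: string): string => {
    executed.push(cmd);
    return run(cmd);
  };
  // Step 1: porcelain output is empty when the working tree is clean.
  if (exec('git status --porcelain').trim() === '') {
    return executed; // nothing to commit — inform the user instead
  }
  exec('git add -A');                               // Step 2: stage all changes
  exec(`git commit -m ${JSON.stringify(message)}`); // Step 3: commit with message
  exec('git push');                                 // Step 4: push current branch
  return executed;
}
```

Injecting `run` keeps the control flow testable; real error handling (merge conflicts, protected branches) would inspect the output and exit codes of each command.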

--------------------------------------------------------------------------------
/tests/helpers/testUtils.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Test utilities and helper functions
  3 |  */
  4 | 
  5 | import fs from 'fs/promises';
  6 | import path from 'path';
  7 | import os from 'os';
  8 | import { vi } from 'vitest';
  9 | import { PromptInfo, PromptMetadata } from '../../src/types.js';
 10 | 
 11 | /**
 12 |  * Create a temporary directory for testing
 13 |  */
 14 | export async function createTempDir(): Promise<string> {
 15 |   const tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'prompts-test-'));
 16 |   return tempDir;
 17 | }
 18 | 
 19 | /**
 20 |  * Clean up temporary directory
 21 |  */
 22 | export async function cleanupTempDir(dir: string): Promise<void> {
 23 |   try {
 24 |     await fs.rm(dir, { recursive: true, force: true });
 25 |   } catch (error) {
 26 |     // Ignore cleanup errors
 27 |   }
 28 | }
 29 | 
 30 | /**
 31 |  * Create a test prompt file
 32 |  */
 33 | export async function createTestPromptFile(
 34 |   dir: string,
 35 |   name: string,
 36 |   metadata: PromptMetadata = {},
 37 |   content: string = 'Test prompt content'
 38 | ): Promise<string> {
 39 |   // Ensure directory exists
 40 |   await fs.mkdir(dir, { recursive: true });
 41 |   
 42 |   const frontmatter = Object.keys(metadata).length > 0
 43 |     ? `---\n${Object.entries(metadata).map(([key, value]) => `${key}: ${JSON.stringify(value)}`).join('\n')}\n---\n\n`
 44 |     : '';
 45 |   
 46 |   const fullContent = frontmatter + content;
 47 |   const fileName = `${name}.md`;
 48 |   const filePath = path.join(dir, fileName);
 49 |   
 50 |   await fs.writeFile(filePath, fullContent, 'utf-8');
 51 |   return filePath;
 52 | }
 53 | 
 54 | /**
 55 |  * Create sample prompt info for testing
 56 |  */
 57 | export function createSamplePromptInfo(overrides: Partial<PromptInfo> = {}): PromptInfo {
 58 |   return {
 59 |     name: 'test-prompt',
 60 |     metadata: {
 61 |       title: 'Test Prompt',
 62 |       description: 'A test prompt for testing',
 63 |       category: 'test',
 64 |       tags: ['test', 'sample'],
 65 |       difficulty: 'beginner',
 66 |       author: 'Test Author',
 67 |       version: '1.0'
 68 |     },
 69 |     preview: 'This is a test prompt content for testing purposes. It contains sample text to verify functionality...',
 70 |     ...overrides
 71 |   };
 72 | }
 73 | 
 74 | /**
 75 |  * Mock console.error to capture error logs in tests
 76 |  */
 77 | export function mockConsoleError() {
 78 |   return vi.spyOn(console, 'error').mockImplementation(() => {});
 79 | }
 80 | 
 81 | /**
 82 |  * Create a mock fs module for testing
 83 |  */
 84 | export function createMockFs() {
 85 |   return {
 86 |     readFile: vi.fn(),
 87 |     writeFile: vi.fn(),
 88 |     unlink: vi.fn(),
 89 |     access: vi.fn(),
 90 |     mkdir: vi.fn(),
 91 |     readdir: vi.fn()
 92 |   };
 93 | }
 94 | 
 95 | /**
 96 |  * Create mock MCP request objects
 97 |  */
 98 | export function createMockCallToolRequest(toolName: string, args: Record<string, unknown>) {
 99 |   return {
100 |     params: {
101 |       name: toolName,
102 |       arguments: args
103 |     }
104 |   };
105 | }
106 | 
107 | /**
108 |  * Create expected MCP response format
109 |  */
110 | export function createExpectedResponse(text: string, isError = false) {
111 |   return {
112 |     content: [
113 |       {
114 |         type: 'text',
115 |         text
116 |       }
117 |     ],
118 |     ...(isError && { isError: true })
119 |   };
120 | }
121 | 
122 | /**
123 |  * Wait for a specified amount of time (useful for file watcher tests)
124 |  */
125 | export function wait(ms: number): Promise<void> {
126 |   return new Promise(resolve => setTimeout(resolve, ms));
127 | }
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | 
  3 | /**
  4 |  * Main entry point for the prompts MCP server
  5 |  */
  6 | 
  7 | import { Server } from '@modelcontextprotocol/sdk/server/index.js';
  8 | import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
  9 | import {
 10 |   CallToolRequestSchema,
 11 |   ListToolsRequestSchema,
 12 | } from '@modelcontextprotocol/sdk/types.js';
 13 | import path from 'path';
 14 | import { fileURLToPath } from 'url';
 15 | import { PromptCache } from './cache.js';
 16 | import { PromptFileOperations } from './fileOperations.js';
 17 | import { PromptTools } from './tools.js';
 18 | import { ServerConfig } from './types.js';
 19 | 
 20 | // Server configuration
 21 | const __filename = fileURLToPath(import.meta.url);
 22 | const __dirname = path.dirname(__filename);
 23 | 
 24 | // Read configuration from environment or use defaults
 25 | const promptsFolderPath = process.env.PROMPTS_FOLDER_PATH;
 26 | const defaultPromptsDir = path.join(__dirname, '..', 'prompts');
 27 | 
 28 | const config: ServerConfig = {
 29 |   name: 'prompts-mcp-server',
 30 |   version: '1.2.0',
 31 |   promptsDir: promptsFolderPath || defaultPromptsDir,
 32 |   ...(promptsFolderPath && { prompts_folder_path: promptsFolderPath }),
 33 | };
 34 | 
 35 | // Initialize components
 36 | const cache = new PromptCache(config.promptsDir);
 37 | const fileOps = new PromptFileOperations(config.promptsDir, cache);
 38 | const tools = new PromptTools(fileOps);
 39 | 
 40 | // Create MCP server
 41 | const server = new Server(
 42 |   {
 43 |     name: config.name,
 44 |     version: config.version,
 45 |   },
 46 |   {
 47 |     capabilities: {
 48 |       tools: {},
 49 |     },
 50 |   }
 51 | );
 52 | 
 53 | // Register tool handlers
 54 | server.setRequestHandler(ListToolsRequestSchema, async () => {
 55 |   return tools.getToolDefinitions();
 56 | });
 57 | 
 58 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
 59 |   return await tools.handleToolCall(request);
 60 | });
 61 | 
 62 | /**
 63 |  * Main server startup function
 64 |  */
 65 | async function main(): Promise<void> {
 66 |   try {
 67 |     // Initialize cache and file watcher on startup
 68 |     await cache.initializeCache();
 69 |     cache.initializeFileWatcher();
 70 |     
 71 |     // Connect to stdio transport
 72 |     const transport = new StdioServerTransport();
 73 |     await server.connect(transport);
 74 |     
 75 |     console.error('Prompts MCP Server running on stdio');
 76 |   } catch (error) {
 77 |     const errorMessage = error instanceof Error ? error.message : 'Unknown error';
 78 |     console.error('Failed to start server:', errorMessage);
 79 |     process.exit(1);
 80 |   }
 81 | }
 82 | 
 83 | /**
 84 |  * Graceful shutdown handler
 85 |  */
 86 | async function shutdown(): Promise<void> {
 87 |   console.error('Shutting down server...');
 88 |   try {
 89 |     await cache.cleanup();
 90 |     console.error('Server shutdown complete');
 91 |   } catch (error) {
 92 |     const errorMessage = error instanceof Error ? error.message : 'Unknown error';
 93 |     console.error('Error during shutdown:', errorMessage);
 94 |   }
 95 |   process.exit(0);
 96 | }
 97 | 
 98 | // Handle shutdown signals
 99 | process.on('SIGINT', shutdown);
100 | process.on('SIGTERM', shutdown);
101 | 
102 | // Start the server
103 | main().catch((error: unknown) => {
104 |   const errorMessage = error instanceof Error ? error.message : 'Unknown error';
105 |   console.error('Server error:', errorMessage);
106 |   process.exit(1);
107 | });
```
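The `PROMPTS_FOLDER_PATH` fallback in the config block above reduces to a small pure function. `resolvePromptsDir` is a hypothetical name used only for illustration; it mirrors the `||` fallback used when building `ServerConfig`.

```typescript
// Mirrors the env-override logic in src/index.ts: PROMPTS_FOLDER_PATH wins
// when set and non-empty; otherwise the bundled default directory is used.
function resolvePromptsDir(
  env: { PROMPTS_FOLDER_PATH?: string },
  defaultDir: string
): string {
  return env.PROMPTS_FOLDER_PATH || defaultDir;
}
```

Because `||` treats the empty string as falsy, setting `PROMPTS_FOLDER_PATH=""` also falls back to the default directory.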

--------------------------------------------------------------------------------
/src/fileOperations.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * File operations for prompt management (CRUD operations)
  3 |  */
  4 | 
  5 | import fs from 'fs/promises';
  6 | import path from 'path';
  7 | import { PromptInfo } from './types.js';
  8 | import { PromptCache } from './cache.js';
  9 | 
 10 | export class PromptFileOperations {
 11 |   constructor(
 12 |     private promptsDir: string,
 13 |     private cache: PromptCache
 14 |   ) {}
 15 | 
 16 |   /**
 17 |    * Sanitize filename to be filesystem-safe
 18 |    */
 19 |   private sanitizeFileName(name: string): string {
 20 |     return name.replace(/[^a-z0-9-_]/gi, '_').toLowerCase();
 21 |   }
 22 | 
 23 |   /**
 24 |    * Ensure prompts directory exists
 25 |    */
 26 |   private async ensurePromptsDir(): Promise<void> {
 27 |     try {
 28 |       await fs.access(this.promptsDir);
 29 |     } catch {
 30 |       await fs.mkdir(this.promptsDir, { recursive: true });
 31 |     }
 32 |   }
 33 | 
 34 |   /**
 35 |    * List all prompts (uses cache for performance)
 36 |    */
 37 |   async listPrompts(): Promise<PromptInfo[]> {
 38 |     // Initialize cache and file watcher if not already done
 39 |     if (this.cache.isEmpty()) {
 40 |       await this.cache.initializeCache();
 41 |       this.cache.initializeFileWatcher();
 42 |     }
 43 |     
 44 |     return this.cache.getAllPrompts();
 45 |   }
 46 | 
 47 |   /**
 48 |    * Read a specific prompt by name
 49 |    */
 50 |   async readPrompt(name: string): Promise<string> {
 51 |     const fileName = this.sanitizeFileName(name) + '.md';
 52 |     const filePath = path.join(this.promptsDir, fileName);
 53 |     try {
 54 |       return await fs.readFile(filePath, 'utf-8');
 55 |     } catch (error) {
 56 |       throw new Error(`Prompt "${name}" not found`);
 57 |     }
 58 |   }
 59 | 
 60 |   /**
 61 |    * Save a new prompt or update existing one
 62 |    */
 63 |   async savePrompt(name: string, content: string): Promise<string> {
 64 |     await this.ensurePromptsDir();
 65 |     const fileName = this.sanitizeFileName(name) + '.md';
 66 |     const filePath = path.join(this.promptsDir, fileName);
 67 |     await fs.writeFile(filePath, content, 'utf-8');
 68 |     return fileName;
 69 |   }
 70 | 
 71 |   /**
 72 |    * Save a new prompt with a custom filename
 73 |    */
 74 |   async savePromptWithFilename(filename: string, content: string): Promise<string> {
 75 |     await this.ensurePromptsDir();
 76 |     const sanitizedFileName = this.sanitizeFileName(filename) + '.md';
 77 |     const filePath = path.join(this.promptsDir, sanitizedFileName);
 78 |     await fs.writeFile(filePath, content, 'utf-8');
 79 |     return sanitizedFileName;
 80 |   }
 81 | 
 82 |   /**
 83 |    * Delete a prompt by name
 84 |    */
 85 |   async deletePrompt(name: string): Promise<boolean> {
 86 |     const fileName = this.sanitizeFileName(name) + '.md';
 87 |     const filePath = path.join(this.promptsDir, fileName);
 88 |     try {
 89 |       await fs.unlink(filePath);
 90 |       return true;
 91 |     } catch (error) {
 92 |       throw new Error(`Prompt "${name}" not found`);
 93 |     }
 94 |   }
 95 | 
 96 |   /**
 97 |    * Check if a prompt exists
 98 |    */
 99 |   async promptExists(name: string): Promise<boolean> {
100 |     const fileName = this.sanitizeFileName(name) + '.md';
101 |     const filePath = path.join(this.promptsDir, fileName);
102 |     try {
103 |       await fs.access(filePath);
104 |       return true;
105 |     } catch {
106 |       return false;
107 |     }
108 |   }
109 | 
110 |   /**
111 |    * Get prompt info from cache (if available)
112 |    */
113 |   getPromptInfo(name: string): PromptInfo | undefined {
114 |     return this.cache.getPrompt(name);
115 |   }
116 | }
```
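A quick way to see how prompt names map to filenames: the standalone copy below reproduces the one-line `sanitizeFileName` implementation from the class above.

```typescript
// Identical to PromptFileOperations.sanitizeFileName: every character outside
// [a-z0-9-_] (case-insensitive) becomes '_', then the result is lowercased.
function sanitizeFileName(name: string): string {
  return name.replace(/[^a-z0-9-_]/gi, '_').toLowerCase();
}
```

Note that distinct names can collide after sanitization (for example, `a b` and `a_b` both map to `a_b`), so two prompts with such names would share one file.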

--------------------------------------------------------------------------------
/src/cache.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Caching and file watching functionality for prompt metadata
  3 |  */
  4 | 
  5 | import fs from 'fs/promises';
  6 | import path from 'path';
  7 | import chokidar, { FSWatcher } from 'chokidar';
  8 | import matter from 'gray-matter';
  9 | import { PromptInfo, PromptMetadata } from './types.js';
 10 | 
 11 | export class PromptCache {
 12 |   private cache = new Map<string, PromptInfo>();
 13 |   private watcher: FSWatcher | null = null;
 14 |   private isWatcherInitialized = false;
 15 | 
 16 |   constructor(private promptsDir: string) {}
 17 | 
 18 |   /**
 19 |    * Get all cached prompts
 20 |    */
 21 |   getAllPrompts(): PromptInfo[] {
 22 |     return Array.from(this.cache.values());
 23 |   }
 24 | 
 25 |   /**
 26 |    * Get a specific prompt from cache
 27 |    */
 28 |   getPrompt(name: string): PromptInfo | undefined {
 29 |     return this.cache.get(name);
 30 |   }
 31 | 
 32 |   /**
 33 |    * Check if cache is empty
 34 |    */
 35 |   isEmpty(): boolean {
 36 |     return this.cache.size === 0;
 37 |   }
 38 | 
 39 |   /**
 40 |    * Get cache size
 41 |    */
 42 |   size(): number {
 43 |     return this.cache.size;
 44 |   }
 45 | 
 46 |   /**
 47 |    * Load prompt metadata from a file
 48 |    */
 49 |   private async loadPromptMetadata(fileName: string): Promise<PromptInfo | null> {
 50 |     const filePath = path.join(this.promptsDir, fileName);
 51 |     try {
 52 |       const content = await fs.readFile(filePath, 'utf-8');
 53 |       const parsed = matter(content);
 54 |       const name = fileName.replace(/\.md$/, '');
 55 |       
 56 |       return {
 57 |         name,
 58 |         metadata: parsed.data as PromptMetadata,
 59 |         preview: parsed.content.substring(0, 100).replace(/\n/g, ' ').trim() + '...'
 60 |       };
 61 |     } catch (error) {
 62 |       const errorMessage = error instanceof Error ? error.message : 'Unknown error';
 63 |       console.error(`Failed to load prompt metadata for ${fileName}:`, errorMessage);
 64 |       return null;
 65 |     }
 66 |   }
 67 | 
 68 |   /**
 69 |    * Update cache for a specific file
 70 |    */
 71 |   private async updateCacheForFile(fileName: string): Promise<void> {
 72 |     if (!fileName.endsWith('.md')) return;
 73 |     
 74 |     const metadata = await this.loadPromptMetadata(fileName);
 75 |     if (metadata) {
 76 |       this.cache.set(metadata.name, metadata);
 77 |     }
 78 |   }
 79 | 
 80 |   /**
 81 |    * Remove a file from cache
 82 |    */
 83 |   private async removeFromCache(fileName: string): Promise<void> {
 84 |     if (!fileName.endsWith('.md')) return;
 85 |     
 86 |     const name = fileName.replace(/\.md$/, '');
 87 |     this.cache.delete(name);
 88 |   }
 89 | 
 90 |   /**
 91 |    * Ensure prompts directory exists
 92 |    */
 93 |   private async ensurePromptsDir(): Promise<void> {
 94 |     try {
 95 |       await fs.access(this.promptsDir);
 96 |     } catch {
 97 |       await fs.mkdir(this.promptsDir, { recursive: true });
 98 |     }
 99 |   }
100 | 
101 |   /**
102 |    * Initialize cache by loading all prompt files
103 |    */
104 |   async initializeCache(): Promise<void> {
105 |     await this.ensurePromptsDir();
106 |     
107 |     try {
108 |       const files = await fs.readdir(this.promptsDir);
109 |       const mdFiles = files.filter(file => file.endsWith('.md'));
110 |       
111 |       // Clear existing cache
112 |       this.cache.clear();
113 |       
114 |       // Load all prompt metadata
115 |       await Promise.all(
116 |         mdFiles.map(async (file) => {
117 |           await this.updateCacheForFile(file);
118 |         })
119 |       );
120 |       
121 |       console.error(`Loaded ${this.cache.size} prompts into cache`);
122 |     } catch (error) {
123 |       const errorMessage = error instanceof Error ? error.message : 'Unknown error';
124 |       console.error('Failed to initialize cache:', errorMessage);
125 |     }
126 |   }
127 | 
128 |   /**
129 |    * Initialize file watcher to monitor changes
130 |    */
131 |   initializeFileWatcher(): void {
132 |     if (this.isWatcherInitialized) return;
133 |     
134 |     this.watcher = chokidar.watch(path.join(this.promptsDir, '*.md'), {
135 |       ignored: /^\./, // ignore dotfiles
136 |       persistent: true,
137 |       ignoreInitial: true // don't fire events for initial scan
138 |     });
139 | 
140 |     this.watcher
141 |       .on('add', async (filePath: string) => {
142 |         const fileName = path.basename(filePath);
143 |         console.error(`Prompt added: ${fileName}`);
144 |         await this.updateCacheForFile(fileName);
145 |       })
146 |       .on('change', async (filePath: string) => {
147 |         const fileName = path.basename(filePath);
148 |         console.error(`Prompt updated: ${fileName}`);
149 |         await this.updateCacheForFile(fileName);
150 |       })
151 |       .on('unlink', async (filePath: string) => {
152 |         const fileName = path.basename(filePath);
153 |         console.error(`Prompt deleted: ${fileName}`);
154 |         await this.removeFromCache(fileName);
155 |       })
156 |       .on('error', (error: Error) => {
157 |         console.error('File watcher error:', error);
158 |       });
159 | 
160 |     this.isWatcherInitialized = true;
161 |     console.error('File watcher initialized for prompts directory');
162 |   }
163 | 
164 |   /**
165 |    * Stop file watcher and cleanup
166 |    */
167 |   async cleanup(): Promise<void> {
168 |     if (this.watcher) {
169 |       await this.watcher.close();
170 |       this.watcher = null;
171 |       this.isWatcherInitialized = false;
172 |     }
173 |   }
174 | }
```
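The `preview` string stored in the cache comes from a single expression in `loadPromptMetadata`; reproduced standalone here for illustration.

```typescript
// Same expression as in loadPromptMetadata: first 100 characters of the body,
// newlines collapsed to spaces, trimmed, with an ellipsis always appended.
function makePreview(markdownBody: string): string {
  return markdownBody.substring(0, 100).replace(/\n/g, ' ').trim() + '...';
}
```

Because the ellipsis is appended unconditionally, even prompts shorter than 100 characters get a trailing `...` in their preview.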

--------------------------------------------------------------------------------
/tests/types.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for type definitions and interfaces
  3 |  */
  4 | 
  5 | import { describe, it, expect } from 'vitest';
  6 | import type { PromptMetadata, PromptInfo, ToolArguments, ServerConfig } from '../src/types.js';
  7 | 
  8 | describe('Types', () => {
  9 |   describe('PromptMetadata', () => {
 10 |     it('should allow all optional fields', () => {
 11 |       const metadata: PromptMetadata = {
 12 |         title: 'Test Title',
 13 |         description: 'Test Description',
 14 |         category: 'test',
 15 |         tags: ['tag1', 'tag2'],
 16 |         difficulty: 'beginner',
 17 |         author: 'Test Author',
 18 |         version: '1.0'
 19 |       };
 20 | 
 21 |       expect(metadata.title).toBe('Test Title');
 22 |       expect(metadata.description).toBe('Test Description');
 23 |       expect(metadata.category).toBe('test');
 24 |       expect(metadata.tags).toEqual(['tag1', 'tag2']);
 25 |       expect(metadata.difficulty).toBe('beginner');
 26 |       expect(metadata.author).toBe('Test Author');
 27 |       expect(metadata.version).toBe('1.0');
 28 |     });
 29 | 
 30 |     it('should allow empty metadata object', () => {
 31 |       const metadata: PromptMetadata = {};
 32 |       expect(Object.keys(metadata)).toHaveLength(0);
 33 |     });
 34 | 
 35 |     it('should allow custom fields with unknown type', () => {
 36 |       const metadata: PromptMetadata = {
 37 |         customField: 'custom value',
 38 |         customNumber: 42,
 39 |         customBoolean: true,
 40 |         customArray: [1, 2, 3],
 41 |         customObject: { nested: 'value' }
 42 |       };
 43 | 
 44 |       expect(metadata.customField).toBe('custom value');
 45 |       expect(metadata.customNumber).toBe(42);
 46 |       expect(metadata.customBoolean).toBe(true);
 47 |       expect(metadata.customArray).toEqual([1, 2, 3]);
 48 |       expect(metadata.customObject).toEqual({ nested: 'value' });
 49 |     });
 50 | 
 51 |     it('should enforce difficulty type constraints', () => {
 52 |       // These should compile without issues
 53 |       const beginner: PromptMetadata = { difficulty: 'beginner' };
 54 |       const intermediate: PromptMetadata = { difficulty: 'intermediate' };
 55 |       const advanced: PromptMetadata = { difficulty: 'advanced' };
 56 | 
 57 |       expect(beginner.difficulty).toBe('beginner');
 58 |       expect(intermediate.difficulty).toBe('intermediate');
 59 |       expect(advanced.difficulty).toBe('advanced');
 60 |     });
 61 |   });
 62 | 
 63 |   describe('PromptInfo', () => {
 64 |     it('should require all fields', () => {
 65 |       const promptInfo: PromptInfo = {
 66 |         name: 'test-prompt',
 67 |         metadata: {
 68 |           title: 'Test Prompt',
 69 |           description: 'A test prompt'
 70 |         },
 71 |         preview: 'This is a preview of the prompt content...'
 72 |       };
 73 | 
 74 |       expect(promptInfo.name).toBe('test-prompt');
 75 |       expect(promptInfo.metadata.title).toBe('Test Prompt');
 76 |       expect(promptInfo.metadata.description).toBe('A test prompt');
 77 |       expect(promptInfo.preview).toBe('This is a preview of the prompt content...');
 78 |     });
 79 | 
 80 |     it('should work with minimal metadata', () => {
 81 |       const promptInfo: PromptInfo = {
 82 |         name: 'minimal-prompt',
 83 |         metadata: {},
 84 |         preview: 'Minimal preview'
 85 |       };
 86 | 
 87 |       expect(promptInfo.name).toBe('minimal-prompt');
 88 |       expect(Object.keys(promptInfo.metadata)).toHaveLength(0);
 89 |       expect(promptInfo.preview).toBe('Minimal preview');
 90 |     });
 91 |   });
 92 | 
 93 |   describe('ToolArguments', () => {
 94 |     it('should require name field', () => {
 95 |       const args: ToolArguments = {
 96 |         name: 'test-prompt'
 97 |       };
 98 | 
 99 |       expect(args.name).toBe('test-prompt');
100 |       expect(args.content).toBeUndefined();
101 |     });
102 | 
103 |     it('should allow optional content field', () => {
104 |       const args: ToolArguments = {
105 |         name: 'test-prompt',
106 |         content: 'Test content for the prompt'
107 |       };
108 | 
109 |       expect(args.name).toBe('test-prompt');
110 |       expect(args.content).toBe('Test content for the prompt');
111 |     });
112 |   });
113 | 
114 |   describe('ServerConfig', () => {
115 |     it('should require all fields', () => {
116 |       const config: ServerConfig = {
117 |         name: 'test-server',
118 |         version: '1.0.0',
119 |         promptsDir: '/path/to/prompts'
120 |       };
121 | 
122 |       expect(config.name).toBe('test-server');
123 |       expect(config.version).toBe('1.0.0');
124 |       expect(config.promptsDir).toBe('/path/to/prompts');
125 |     });
126 | 
127 |     it('should allow optional prompts_folder_path', () => {
128 |       const configWithCustomPath: ServerConfig = {
129 |         name: 'test-server',
130 |         version: '1.0.0',
131 |         promptsDir: '/default/prompts',
132 |         prompts_folder_path: '/custom/prompts'
133 |       };
134 | 
135 |       expect(configWithCustomPath.prompts_folder_path).toBe('/custom/prompts');
136 |     });
137 | 
138 |     it('should work without prompts_folder_path', () => {
139 |       const configWithoutCustomPath: ServerConfig = {
140 |         name: 'test-server',
141 |         version: '1.0.0',
142 |         promptsDir: '/default/prompts'
143 |       };
144 | 
145 |       expect(configWithoutCustomPath.prompts_folder_path).toBeUndefined();
146 |     });
147 |   });
148 | 
149 |   describe('Type compatibility', () => {
150 |     it('should work together in realistic scenarios', () => {
151 |       const config: ServerConfig = {
152 |         name: 'prompts-mcp-server',
153 |         version: '1.0.0',
154 |         promptsDir: '/app/prompts'
155 |       };
156 | 
157 |       const metadata: PromptMetadata = {
158 |         title: 'Code Review Assistant',
159 |         description: 'Helps review code for quality and issues',
160 |         category: 'development',
161 |         tags: ['code-review', 'quality'],
162 |         difficulty: 'intermediate',
163 |         author: 'System',
164 |         version: '1.0'
165 |       };
166 | 
167 |       const promptInfo: PromptInfo = {
168 |         name: 'code-review',
169 |         metadata,
170 |         preview: 'You are an experienced software engineer performing a code review...'
171 |       };
172 | 
173 |       const toolArgs: ToolArguments = {
174 |         name: promptInfo.name,
175 |         content: '# Code Review Prompt\n\nYou are an experienced software engineer...'
176 |       };
177 | 
178 |       expect(config.name).toBe('prompts-mcp-server');
179 |       expect(promptInfo.metadata.difficulty).toBe('intermediate');
180 |       expect(toolArgs.name).toBe('code-review');
181 |       expect(toolArgs.content).toContain('Code Review Prompt');
182 |     });
183 |   });
184 | });
```
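The type shapes these tests exercise can be approximated standalone. A minimal sketch inferred from the assertions above — the canonical definitions are exported from `src/types.ts` and may differ in detail:

```typescript
// Sketch only: interfaces inferred from the test assertions above.
// The authoritative versions live in src/types.ts.
interface ServerConfig {
  name: string;
  version: string;
  promptsDir: string;
  prompts_folder_path?: string; // optional override, per the optional-path tests
}

interface PromptMetadata {
  title?: string;
  description?: string;
  category?: string;
  tags?: string[];
  difficulty?: 'beginner' | 'intermediate' | 'advanced';
  author?: string;
  version?: string;
  [key: string]: unknown; // the list formatter iterates arbitrary entries
}

interface PromptInfo {
  name: string;
  metadata: PromptMetadata;
  preview: string;
}

// Mirrors the "realistic scenario" test: the pieces compose without casts.
const demo: PromptInfo = {
  name: 'code-review',
  metadata: { title: 'Code Review Assistant', difficulty: 'intermediate' },
  preview: 'You are an experienced software engineer...',
};
```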

--------------------------------------------------------------------------------
/src/tools.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * MCP tool definitions and handlers
  3 |  */
  4 | 
  5 | import {
  6 |   ListToolsResult,
  7 |   CallToolResult,
  8 |   CallToolRequest,
  9 |   TextContent,
 10 | } from '@modelcontextprotocol/sdk/types.js';
 11 | import { ToolArguments, PromptInfo } from './types.js';
 12 | import { PromptFileOperations } from './fileOperations.js';
 13 | 
 14 | export class PromptTools {
 15 |   constructor(private fileOps: PromptFileOperations) {}
 16 | 
 17 |   /**
 18 |    * Get MCP tool definitions
 19 |    */
 20 |   getToolDefinitions(): ListToolsResult {
 21 |     return {
 22 |       tools: [
 23 |         {
 24 |           name: 'add_prompt',
 25 |           description: 'Add a new prompt to the collection',
 26 |           inputSchema: {
 27 |             type: 'object',
 28 |             properties: {
 29 |               name: {
 30 |                 type: 'string',
 31 |                 description: 'Name of the prompt',
 32 |               },
 33 |               filename: {
 34 |                 type: 'string',
 35 |                 description: 'English filename for the prompt file (without .md extension)',
 36 |               },
 37 |               content: {
 38 |                 type: 'string',
 39 |                 description: 'Content of the prompt in markdown format',
 40 |               },
 41 |             },
 42 |             required: ['name', 'filename', 'content'],
 43 |           },
 44 |         },
 45 |         {
 46 |           name: 'get_prompt',
 47 |           description: 'Retrieve a prompt by name',
 48 |           inputSchema: {
 49 |             type: 'object',
 50 |             properties: {
 51 |               name: {
 52 |                 type: 'string',
 53 |                 description: 'Name of the prompt to retrieve',
 54 |               },
 55 |             },
 56 |             required: ['name'],
 57 |           },
 58 |         },
 59 |         {
 60 |           name: 'list_prompts',
 61 |           description: 'List all available prompts',
 62 |           inputSchema: {
 63 |             type: 'object',
 64 |             properties: {},
 65 |           },
 66 |         },
 67 |         {
 68 |           name: 'delete_prompt',
 69 |           description: 'Delete a prompt by name',
 70 |           inputSchema: {
 71 |             type: 'object',
 72 |             properties: {
 73 |               name: {
 74 |                 type: 'string',
 75 |                 description: 'Name of the prompt to delete',
 76 |               },
 77 |             },
 78 |             required: ['name'],
 79 |           },
 80 |         },
 81 |         {
 82 |           name: 'create_structured_prompt',
 83 |           description: 'Create a new prompt with guided metadata structure',
 84 |           inputSchema: {
 85 |             type: 'object',
 86 |             properties: {
 87 |               name: {
 88 |                 type: 'string',
 89 |                 description: 'Name of the prompt',
 90 |               },
 91 |               title: {
 92 |                 type: 'string',
 93 |                 description: 'Human-readable title for the prompt',
 94 |               },
 95 |               description: {
 96 |                 type: 'string',
 97 |                 description: 'Brief description of what the prompt does',
 98 |               },
 99 |               category: {
100 |                 type: 'string',
101 |                 description: 'Category (e.g., development, writing, analysis)',
102 |               },
103 |               tags: {
104 |                 type: 'array',
105 |                 items: { type: 'string' },
106 |                 description: 'Array of tags for categorization',
107 |               },
108 |               difficulty: {
109 |                 type: 'string',
110 |                 enum: ['beginner', 'intermediate', 'advanced'],
111 |                 description: 'Difficulty level of the prompt',
112 |               },
113 |               author: {
114 |                 type: 'string',
115 |                 description: 'Author of the prompt',
116 |               },
117 |               content: {
118 |                 type: 'string',
119 |                 description: 'The actual prompt content (markdown)',
120 |               },
121 |             },
122 |             required: ['name', 'title', 'description', 'content'],
123 |           },
124 |         },
125 |       ],
126 |     };
127 |   }
128 | 
129 |   /**
130 |    * Handle MCP tool calls
131 |    */
132 |   async handleToolCall(request: CallToolRequest): Promise<CallToolResult> {
133 |     const { name, arguments: args } = request.params;
134 |     const toolArgs = (args || {}) as ToolArguments;
135 | 
136 |     try {
137 |       switch (name) {
138 |         case 'add_prompt':
139 |           return await this.handleAddPrompt(toolArgs);
140 |         case 'get_prompt':
141 |           return await this.handleGetPrompt(toolArgs);
142 |         case 'list_prompts':
143 |           return await this.handleListPrompts();
144 |         case 'delete_prompt':
145 |           return await this.handleDeletePrompt(toolArgs);
146 |         case 'create_structured_prompt':
147 |           return await this.handleCreateStructuredPrompt(toolArgs);
148 |         default:
149 |           throw new Error(`Unknown tool: ${name}`);
150 |       }
151 |     } catch (error) {
152 |       const errorMessage = error instanceof Error ? error.message : 'Unknown error';
153 |       return {
154 |         content: [
155 |           {
156 |             type: 'text',
157 |             text: `Error: ${errorMessage}`,
158 |           } as TextContent,
159 |         ],
160 |         isError: true,
161 |       };
162 |     }
163 |   }
164 | 
165 |   /**
166 |    * Handle add_prompt tool
167 |    */
168 |   private async handleAddPrompt(args: ToolArguments): Promise<CallToolResult> {
169 |     if (!args.name || !args.filename || !args.content) {
170 |       throw new Error('Name, filename, and content are required for add_prompt');
171 |     }
172 |     
173 |     // Validate and enhance content with metadata if needed
174 |     const processedContent = this.ensureMetadata(args.content, args.name);
175 |     
176 |     const fileName = await this.fileOps.savePromptWithFilename(args.filename, processedContent);
177 |     return {
178 |       content: [
179 |         {
180 |           type: 'text',
181 |           text: `Prompt "${args.name}" saved as ${fileName}`,
182 |         } as TextContent,
183 |       ],
184 |     };
185 |   }
186 | 
187 |   /**
188 |    * Ensure content has proper YAML frontmatter metadata
189 |    */
190 |   private ensureMetadata(content: string, promptName: string): string {
191 |     // Check if content already has frontmatter
192 |     if (content.trim().startsWith('---')) {
193 |       return content; // Already has frontmatter, keep as-is
194 |     }
195 | 
196 |     // Add default frontmatter if missing
197 |     const defaultMetadata = `---
198 | title: "${promptName.replace(/-/g, ' ').replace(/\b\w/g, l => l.toUpperCase())}"
199 | description: "A prompt for ${promptName.replace(/-/g, ' ')}"
200 | category: "general"
201 | tags: ["general"]
202 | difficulty: "beginner"
203 | author: "User"
204 | version: "1.0"
205 | created: "${new Date().toISOString().split('T')[0]}"
206 | ---
207 | 
208 | `;
209 | 
210 |     return defaultMetadata + content;
211 |   }
212 | 
213 |   /**
214 |    * Handle get_prompt tool
215 |    */
216 |   private async handleGetPrompt(args: ToolArguments): Promise<CallToolResult> {
217 |     if (!args.name) {
218 |       throw new Error('Name is required for get_prompt');
219 |     }
220 |     
221 |     const content = await this.fileOps.readPrompt(args.name);
222 |     return {
223 |       content: [
224 |         {
225 |           type: 'text',
226 |           text: content,
227 |         } as TextContent,
228 |       ],
229 |     };
230 |   }
231 | 
232 |   /**
233 |    * Handle list_prompts tool
234 |    */
235 |   private async handleListPrompts(): Promise<CallToolResult> {
236 |     const prompts = await this.fileOps.listPrompts();
237 |     
238 |     if (prompts.length === 0) {
239 |       return {
240 |         content: [
241 |           {
242 |             type: 'text',
243 |             text: 'No prompts available',
244 |           } as TextContent,
245 |         ],
246 |       };
247 |     }
248 | 
249 |     const text = this.formatPromptsList(prompts);
250 |     
251 |     return {
252 |       content: [
253 |         {
254 |           type: 'text',
255 |           text,
256 |         } as TextContent,
257 |       ],
258 |     };
259 |   }
260 | 
261 |   /**
262 |    * Handle delete_prompt tool
263 |    */
264 |   private async handleDeletePrompt(args: ToolArguments): Promise<CallToolResult> {
265 |     if (!args.name) {
266 |       throw new Error('Name is required for delete_prompt');
267 |     }
268 |     
269 |     await this.fileOps.deletePrompt(args.name);
270 |     return {
271 |       content: [
272 |         {
273 |           type: 'text',
274 |           text: `Prompt "${args.name}" deleted successfully`,
275 |         } as TextContent,
276 |       ],
277 |     };
278 |   }
279 | 
280 |   /**
281 |    * Handle create_structured_prompt tool
282 |    */
283 |   private async handleCreateStructuredPrompt(args: ToolArguments): Promise<CallToolResult> {
284 |     if (!args.name || !args.content || !args.title || !args.description) {
285 |       throw new Error('Name, content, title, and description are required for create_structured_prompt');
286 |     }
287 | 
288 |     // Build structured frontmatter with provided metadata
289 |     const metadata = {
290 |       title: args.title,
291 |       description: args.description,
292 |       category: args.category || 'general',
293 |       tags: args.tags || ['general'],
294 |       difficulty: args.difficulty || 'beginner',
295 |       author: args.author || 'User',
296 |       version: '1.0',
297 |       created: new Date().toISOString().split('T')[0],
298 |     };
299 | 
300 |     // Create YAML frontmatter
301 |     const frontmatter = `---
302 | title: "${metadata.title}"
303 | description: "${metadata.description}"
304 | category: "${metadata.category}"
305 | tags: ${JSON.stringify(metadata.tags)}
306 | difficulty: "${metadata.difficulty}"
307 | author: "${metadata.author}"
308 | version: "${metadata.version}"
309 | created: "${metadata.created}"
310 | ---
311 | 
312 | `;
313 | 
314 |     const fullContent = frontmatter + args.content;
315 |     const fileName = await this.fileOps.savePrompt(args.name, fullContent);
316 |     
317 |     return {
318 |       content: [
319 |         {
320 |           type: 'text',
321 |           text: `Structured prompt "${args.name}" created successfully as ${fileName} with metadata:\n- Title: ${metadata.title}\n- Category: ${metadata.category}\n- Tags: ${metadata.tags.join(', ')}\n- Difficulty: ${metadata.difficulty}`,
322 |         } as TextContent,
323 |       ],
324 |     };
325 |   }
326 | 
327 |   /**
328 |    * Format prompts list for display
329 |    */
330 |   private formatPromptsList(prompts: PromptInfo[]): string {
331 |     const formatPrompt = (prompt: PromptInfo): string => {
332 |       let output = `## ${prompt.name}\n`;
333 |       
334 |       if (Object.keys(prompt.metadata).length > 0) {
335 |         output += '**Metadata:**\n';
336 |         Object.entries(prompt.metadata).forEach(([key, value]) => {
337 |           output += `- ${key}: ${value}\n`;
338 |         });
339 |         output += '\n';
340 |       }
341 |       
342 |       output += `**Preview:** ${prompt.preview}\n`;
343 |       return output;
344 |     };
345 | 
346 |     return `# Available Prompts\n\n${prompts.map(formatPrompt).join('\n---\n\n')}`;
347 |   }
348 | }
```
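The default-metadata path in `ensureMetadata` can be exercised in isolation. A sketch with the same logic extracted as free functions — the helper names here are illustrative, not part of the class:

```typescript
// Sketch of the ensureMetadata fallback above, extracted so the
// title-casing and frontmatter injection can be tested standalone.
function titleFromName(promptName: string): string {
  // "code-review" -> "Code Review": hyphens to spaces, then capitalize words
  return promptName.replace(/-/g, ' ').replace(/\b\w/g, (l) => l.toUpperCase());
}

function ensureMetadataSketch(content: string, promptName: string): string {
  if (content.trim().startsWith('---')) {
    return content; // existing frontmatter is kept as-is
  }
  const created = new Date().toISOString().split('T')[0]; // YYYY-MM-DD
  return [
    '---',
    `title: "${titleFromName(promptName)}"`,
    `description: "A prompt for ${promptName.replace(/-/g, ' ')}"`,
    'category: "general"',
    `created: "${created}"`,
    '---',
    '',
    '', // blank line between frontmatter and body, as in the original
  ].join('\n') + content;
}
```

Content that already opens with `---` passes through untouched, so caller-supplied frontmatter always wins over the generated defaults.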

--------------------------------------------------------------------------------
/tests/fileOperations.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for PromptFileOperations class
  3 |  */
  4 | 
  5 | import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
  6 | import { PromptFileOperations } from '../src/fileOperations.js';
  7 | import { createTempDir, cleanupTempDir, createTestPromptFile, createSamplePromptInfo } from './helpers/testUtils.js';
  8 | import { MockPromptCache } from './helpers/mocks.js';
  9 | 
 10 | describe('PromptFileOperations', () => {
 11 |   let tempDir: string;
 12 |   let mockCache: MockPromptCache;
 13 |   let fileOps: PromptFileOperations;
 14 | 
 15 |   beforeEach(async () => {
 16 |     tempDir = await createTempDir();
 17 |     mockCache = new MockPromptCache();
 18 |     fileOps = new PromptFileOperations(tempDir, mockCache as any);
 19 |   });
 20 | 
 21 |   afterEach(async () => {
 22 |     await cleanupTempDir(tempDir);
 23 |   });
 24 | 
 25 |   describe('constructor', () => {
 26 |     it('should create instance with provided directory and cache', () => {
 27 |       expect(fileOps).toBeDefined();
 28 |       expect(fileOps).toBeInstanceOf(PromptFileOperations);
 29 |     });
 30 |   });
 31 | 
 32 |   describe('listPrompts', () => {
 33 |     it('should initialize cache when empty', async () => {
 34 |       mockCache.isEmpty.mockReturnValue(true);
 35 |       
 36 |       await fileOps.listPrompts();
 37 |       
 38 |       expect(mockCache.initializeCache).toHaveBeenCalled();
 39 |       expect(mockCache.initializeFileWatcher).toHaveBeenCalled();
 40 |       expect(mockCache.getAllPrompts).toHaveBeenCalled();
 41 |     });
 42 | 
 43 |     it('should use cached data when cache is not empty', async () => {
 44 |       const samplePrompts = [createSamplePromptInfo()];
 45 |       mockCache.isEmpty.mockReturnValue(false);
 46 |       mockCache.getAllPrompts.mockReturnValue(samplePrompts);
 47 |       
 48 |       const result = await fileOps.listPrompts();
 49 |       
 50 |       expect(mockCache.initializeCache).not.toHaveBeenCalled();
 51 |       expect(mockCache.initializeFileWatcher).not.toHaveBeenCalled();
 52 |       expect(mockCache.getAllPrompts).toHaveBeenCalled();
 53 |       expect(result).toEqual(samplePrompts);
 54 |     });
 55 | 
 56 |     it('should return empty array when no prompts exist', async () => {
 57 |       mockCache.isEmpty.mockReturnValue(true);
 58 |       mockCache.getAllPrompts.mockReturnValue([]);
 59 |       
 60 |       const result = await fileOps.listPrompts();
 61 |       
 62 |       expect(result).toEqual([]);
 63 |     });
 64 |   });
 65 | 
 66 |   describe('readPrompt', () => {
 67 |     it('should read existing prompt file', async () => {
 68 |       const content = '# Test Prompt\n\nThis is a test prompt.';
 69 |       await createTestPromptFile(tempDir, 'test-prompt', {}, content);
 70 |       
 71 |       const result = await fileOps.readPrompt('test-prompt');
 72 |       
 73 |       expect(result).toContain('This is a test prompt.');
 74 |     });
 75 | 
 76 |     it('should sanitize prompt name for file lookup', async () => {
 77 |       const content = 'Test content';
 78 |       // Create file with sanitized name (what the sanitization function would produce)
 79 |       await createTestPromptFile(tempDir, 'test_prompt_with_special_chars___', {}, content);
 80 |       
 81 |       // Test with unsanitized name
 82 |       const result = await fileOps.readPrompt('Test Prompt With Special Chars!@#');
 83 |       
 84 |       expect(result).toContain('Test content');
 85 |     });
 86 | 
 87 |     it('should throw error for non-existent prompt', async () => {
 88 |       await expect(fileOps.readPrompt('non-existent')).rejects.toThrow(
 89 |         'Prompt "non-existent" not found'
 90 |       );
 91 |     });
 92 | 
 93 |     it('should handle file read errors', async () => {
 94 |       // Try to read from a directory that doesn't exist
 95 |       const badFileOps = new PromptFileOperations('/non/existent/path', mockCache as any);
 96 |       
 97 |       await expect(badFileOps.readPrompt('any-prompt')).rejects.toThrow(
 98 |         'Prompt "any-prompt" not found'
 99 |       );
100 |     });
101 |   });
102 | 
103 |   describe('savePrompt', () => {
104 |     it('should save prompt to file', async () => {
105 |       const content = '# New Prompt\n\nThis is a new prompt.';
106 |       
107 |       const fileName = await fileOps.savePrompt('new-prompt', content);
108 |       
109 |       expect(fileName).toBe('new-prompt.md');
110 |       
111 |       // Verify file was created
112 |       const savedContent = await fileOps.readPrompt('new-prompt');
113 |       expect(savedContent).toBe(content);
114 |     });
115 | 
116 |     it('should sanitize filename', async () => {
117 |       const content = 'Test content';
118 |       
119 |       const fileName = await fileOps.savePrompt('Test Prompt With Special Chars!@#', content);
120 |       
121 |       expect(fileName).toBe('test_prompt_with_special_chars___.md');
122 |       
123 |       // Should be readable with sanitized name
124 |       const savedContent = await fileOps.readPrompt('test_prompt_with_special_chars___');
125 |       expect(savedContent).toBe(content);
126 |     });
127 | 
128 |     it('should create prompts directory if it does not exist', async () => {
129 |       const newDir = `${tempDir}/new-prompts-dir`;
130 |       const newFileOps = new PromptFileOperations(newDir, mockCache as any);
131 |       
132 |       const fileName = await newFileOps.savePrompt('test', 'content');
133 |       
134 |       expect(fileName).toBe('test.md');
135 |       
136 |       // Should be able to read the file
137 |       const content = await newFileOps.readPrompt('test');
138 |       expect(content).toBe('content');
139 |     });
140 | 
141 |     it('should overwrite existing files', async () => {
142 |       const originalContent = 'Original content';
143 |       const updatedContent = 'Updated content';
144 |       
145 |       await fileOps.savePrompt('test-prompt', originalContent);
146 |       await fileOps.savePrompt('test-prompt', updatedContent);
147 |       
148 |       const result = await fileOps.readPrompt('test-prompt');
149 |       expect(result).toBe(updatedContent);
150 |     });
151 |   });
152 | 
153 |   describe('deletePrompt', () => {
154 |     it('should delete existing prompt file', async () => {
155 |       await createTestPromptFile(tempDir, 'to-delete');
156 |       
157 |       const result = await fileOps.deletePrompt('to-delete');
158 |       
159 |       expect(result).toBe(true);
160 |       
161 |       // File should no longer exist
162 |       await expect(fileOps.readPrompt('to-delete')).rejects.toThrow(
163 |         'Prompt "to-delete" not found'
164 |       );
165 |     });
166 | 
167 |     it('should sanitize prompt name for deletion', async () => {
168 |       await createTestPromptFile(tempDir, 'prompt_with_special_chars___');
169 |       
170 |       const result = await fileOps.deletePrompt('Prompt With Special Chars!@#');
171 |       
172 |       expect(result).toBe(true);
173 |       
174 |       // Should not be readable anymore
175 |       await expect(fileOps.readPrompt('Prompt With Special Chars!@#')).rejects.toThrow();
176 |     });
177 | 
178 |     it('should throw error when deleting non-existent prompt', async () => {
179 |       await expect(fileOps.deletePrompt('non-existent')).rejects.toThrow(
180 |         'Prompt "non-existent" not found'
181 |       );
182 |     });
183 |   });
184 | 
185 |   describe('promptExists', () => {
186 |     it('should return true for existing prompt', async () => {
187 |       await createTestPromptFile(tempDir, 'existing-prompt');
188 |       
189 |       const exists = await fileOps.promptExists('existing-prompt');
190 |       
191 |       expect(exists).toBe(true);
192 |     });
193 | 
194 |     it('should return false for non-existent prompt', async () => {
195 |       const exists = await fileOps.promptExists('non-existent');
196 |       
197 |       expect(exists).toBe(false);
198 |     });
199 | 
200 |     it('should sanitize prompt name for existence check', async () => {
201 |       await createTestPromptFile(tempDir, 'prompt_with_special_chars___');
202 |       
203 |       const exists = await fileOps.promptExists('Prompt With Special Chars!@#');
204 |       
205 |       expect(exists).toBe(true);
206 |     });
207 |   });
208 | 
209 |   describe('getPromptInfo', () => {
210 |     it('should delegate to cache', () => {
211 |       const samplePrompt = createSamplePromptInfo();
212 |       mockCache.getPrompt.mockReturnValue(samplePrompt);
213 |       
214 |       const result = fileOps.getPromptInfo('test-prompt');
215 |       
216 |       expect(mockCache.getPrompt).toHaveBeenCalledWith('test-prompt');
217 |       expect(result).toBe(samplePrompt);
218 |     });
219 | 
220 |     it('should return undefined when prompt not in cache', () => {
221 |       mockCache.getPrompt.mockReturnValue(undefined);
222 |       
223 |       const result = fileOps.getPromptInfo('non-existent');
224 |       
225 |       expect(result).toBeUndefined();
226 |     });
227 |   });
228 | 
229 |   describe('filename sanitization', () => {
230 |     it('should convert to lowercase', async () => {
231 |       await fileOps.savePrompt('UPPERCASE', 'content');
232 |       const content = await fileOps.readPrompt('UPPERCASE');
233 |       expect(content).toBe('content');
234 |     });
235 | 
236 |     it('should replace special characters with underscores', async () => {
237 |       const testCases = [
238 |         { input: 'hello world', expected: 'hello_world' },
239 |         { input: 'hello@world', expected: 'hello_world' },
240 |         { input: 'hello#world', expected: 'hello_world' },
241 |         { input: 'UPPERCASE', expected: 'uppercase' }
242 |       ];
243 | 
244 |       for (const testCase of testCases) {
245 |         await fileOps.savePrompt(testCase.input, `content for ${testCase.input}`);
246 |         const content = await fileOps.readPrompt(testCase.input);
247 |         expect(content).toBe(`content for ${testCase.input}`);
248 |       }
249 |     });
250 | 
251 |     it('should preserve allowed characters', async () => {
252 |       const allowedNames = [
253 |         'simple-name',
254 |         'name_with_underscores',
255 |         'name123',
256 |         'abc-def_ghi789'
257 |       ];
258 | 
259 |       for (const name of allowedNames) {
260 |         await fileOps.savePrompt(name, `content for ${name}`);
261 |         const content = await fileOps.readPrompt(name);
262 |         expect(content).toBe(`content for ${name}`);
263 |       }
264 |     });
265 |   });
266 | 
267 |   describe('integration with cache', () => {
268 |     it('should work correctly when cache is populated', async () => {
269 |       const samplePrompts = [
270 |         createSamplePromptInfo({ name: 'prompt1' }),
271 |         createSamplePromptInfo({ name: 'prompt2' })
272 |       ];
273 |       
274 |       mockCache.isEmpty.mockReturnValue(false);
275 |       mockCache.getAllPrompts.mockReturnValue(samplePrompts);
276 |       mockCache.getPrompt.mockImplementation(name => 
277 |         samplePrompts.find(p => p.name === name)
278 |       );
279 |       
280 |       const allPrompts = await fileOps.listPrompts();
281 |       const specificPrompt = fileOps.getPromptInfo('prompt1');
282 |       
283 |       expect(allPrompts).toEqual(samplePrompts);
284 |       expect(specificPrompt?.name).toBe('prompt1');
285 |     });
286 | 
287 |     it('should handle cache initialization properly', async () => {
288 |       // Create actual files
289 |       await createTestPromptFile(tempDir, 'real-prompt1', { title: 'Real Prompt 1' });
290 |       await createTestPromptFile(tempDir, 'real-prompt2', { title: 'Real Prompt 2' });
291 |       
292 |       // Use real cache for this test
293 |       const realCache = new (await import('../src/cache.js')).PromptCache(tempDir);
294 |       const realFileOps = new PromptFileOperations(tempDir, realCache);
295 |       
296 |       const prompts = await realFileOps.listPrompts();
297 |       
298 |       expect(prompts).toHaveLength(2);
299 |       expect(prompts.some(p => p.name === 'real-prompt1')).toBe(true);
300 |       expect(prompts.some(p => p.name === 'real-prompt2')).toBe(true);
301 |       
302 |       await realCache.cleanup();
303 |     });
304 |   });
305 | });
```
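The sanitization behavior these tests pin down (lowercase, everything outside `[a-z0-9_-]` replaced with underscores) can be reproduced with a one-liner. This is a sketch consistent with the expected filenames above, not the actual `src/fileOperations.ts` implementation:

```typescript
// Illustrative sanitizer matching the test expectations above; the real
// implementation lives in src/fileOperations.ts and may differ in detail.
function sanitizeFileName(name: string): string {
  // Lowercase first, then map every disallowed character to an underscore.
  return name.toLowerCase().replace(/[^a-z0-9_-]/g, '_');
}

// 'Test Prompt With Special Chars!@#' -> 'test_prompt_with_special_chars___'
```

Because the same mapping is applied on save, read, and delete, callers can keep using the original display name and still hit the sanitized file on disk.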

--------------------------------------------------------------------------------
/tests/index.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Integration tests for main index module
  3 |  */
  4 | 
  5 | import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
  6 | import { createTempDir, cleanupTempDir, createTestPromptFile, mockConsoleError } from './helpers/testUtils.js';
  7 | import { createMockProcess } from './helpers/mocks.js';
  8 | 
  9 | // Mock external dependencies
 10 | vi.mock('@modelcontextprotocol/sdk/server/index.js', () => ({
 11 |   Server: vi.fn().mockImplementation(() => ({
 12 |     setRequestHandler: vi.fn(),
 13 |     connect: vi.fn()
 14 |   }))
 15 | }));
 16 | 
 17 | vi.mock('@modelcontextprotocol/sdk/server/stdio.js', () => ({
 18 |   StdioServerTransport: vi.fn()
 19 | }));
 20 | 
 21 | describe('Main Server Integration', () => {
 22 |   let tempDir: string;
 23 |   let consoleErrorSpy: ReturnType<typeof mockConsoleError>;
 24 |   let originalProcess: typeof process;
 25 | 
 26 |   beforeEach(async () => {
 27 |     tempDir = await createTempDir();
 28 |     consoleErrorSpy = mockConsoleError();
 29 |     
 30 |     // Mock process for signal handling tests
 31 |     originalProcess = global.process;
 32 |   });
 33 | 
 34 |   afterEach(async () => {
 35 |     await cleanupTempDir(tempDir);
 36 |     consoleErrorSpy.mockRestore();
 37 |     global.process = originalProcess;
 38 |     vi.clearAllMocks();
 39 |   });
 40 | 
 41 |   describe('server configuration', () => {
 42 |     it('should have correct server configuration', async () => {
 43 |       // Since we can't easily test the actual module loading without side effects,
 44 |       // we'll test the configuration values that would be used
 45 |       const config = {
 46 |         name: 'prompts-mcp-server',
 47 |         version: '1.0.0',
 48 |         promptsDir: '/test/path'
 49 |       };
 50 | 
 51 |       expect(config.name).toBe('prompts-mcp-server');
 52 |       expect(config.version).toBe('1.0.0');
 53 |       expect(typeof config.promptsDir).toBe('string');
 54 |     });
 55 | 
 56 |     it('should use PROMPTS_FOLDER_PATH environment variable when set', async () => {
 57 |       // Save original env var
 58 |       const originalPath = process.env.PROMPTS_FOLDER_PATH;
 59 |       
 60 |       // Set test env var
 61 |       process.env.PROMPTS_FOLDER_PATH = '/custom/prompts/path';
 62 |       
 63 |       // Test configuration would use the custom path
 64 |       const customPath = process.env.PROMPTS_FOLDER_PATH;
 65 |       const defaultPath = '/default/prompts';
 66 |       const promptsDir = customPath || defaultPath;
 67 |       
 68 |       expect(promptsDir).toBe('/custom/prompts/path');
 69 |       
 70 |       // Restore original env var
 71 |       if (originalPath !== undefined) {
 72 |         process.env.PROMPTS_FOLDER_PATH = originalPath;
 73 |       } else {
 74 |         delete process.env.PROMPTS_FOLDER_PATH;
 75 |       }
 76 |     });
 77 | 
 78 |     it('should use default path when PROMPTS_FOLDER_PATH is not set', async () => {
 79 |       // Save original env var
 80 |       const originalPath = process.env.PROMPTS_FOLDER_PATH;
 81 |       
 82 |       // Clear env var
 83 |       delete process.env.PROMPTS_FOLDER_PATH;
 84 |       
 85 |       // Test configuration would use default path
 86 |       const customPath = process.env.PROMPTS_FOLDER_PATH;
 87 |       const defaultPath = '/default/prompts';
 88 |       const promptsDir = customPath || defaultPath;
 89 |       
 90 |       expect(promptsDir).toBe('/default/prompts');
 91 |       
 92 |       // Restore original env var
 93 |       if (originalPath !== undefined) {
 94 |         process.env.PROMPTS_FOLDER_PATH = originalPath;
 95 |       }
 96 |     });
 97 |   });
 98 | 
 99 |   describe('component integration', () => {
100 |     it('should integrate cache, file operations, and tools correctly', async () => {
101 |       // Test the integration by using the actual classes
102 |       const { PromptCache } = await import('../src/cache.js');
103 |       const { PromptFileOperations } = await import('../src/fileOperations.js');
104 |       const { PromptTools } = await import('../src/tools.js');
105 | 
106 |       // Create test files
107 |       await createTestPromptFile(tempDir, 'integration-test', 
108 |         { title: 'Integration Test', category: 'test' },
109 |         'This is an integration test prompt.'
110 |       );
111 | 
112 |       // Initialize components
113 |       const cache = new PromptCache(tempDir);
114 |       const fileOps = new PromptFileOperations(tempDir, cache);
115 |       const tools = new PromptTools(fileOps);
116 | 
117 |       // Test integration flow
118 |       await cache.initializeCache();
119 |       
120 |       // Verify cache has the prompt
121 |       expect(cache.size()).toBe(1);
122 |       expect(cache.getPrompt('integration-test')?.metadata.title).toBe('Integration Test');
123 | 
124 |       // Test file operations
125 |       const prompts = await fileOps.listPrompts();
126 |       expect(prompts).toHaveLength(1);
127 |       expect(prompts[0].name).toBe('integration-test');
128 | 
129 |       // Test tools
130 |       const toolDefinitions = tools.getToolDefinitions();
131 |       expect(toolDefinitions.tools).toHaveLength(5);
132 | 
133 |       // Cleanup
134 |       await cache.cleanup();
135 |     });
136 | 
137 |     it('should handle end-to-end prompt management workflow', async () => {
138 |       const { PromptCache } = await import('../src/cache.js');
139 |       const { PromptFileOperations } = await import('../src/fileOperations.js');
140 |       const { PromptTools } = await import('../src/tools.js');
141 | 
142 |       const cache = new PromptCache(tempDir);
143 |       const fileOps = new PromptFileOperations(tempDir, cache);
144 |       const tools = new PromptTools(fileOps);
145 | 
146 |       await cache.initializeCache();
147 | 
148 |       // 1. Add a prompt via tools
149 |       const addRequest = {
150 |         params: {
151 |           name: 'add_prompt',
152 |           arguments: {
153 |             name: 'e2e-test',
154 |             filename: 'e2e_test',
155 |             content: '---\ntitle: "E2E Test"\ncategory: "testing"\n---\n\n# E2E Test Prompt\n\nThis is an end-to-end test.'
156 |           }
157 |         }
158 |       };
159 | 
160 |       const addResult = await tools.handleToolCall(addRequest as any);
161 |       expect(addResult.content[0].text).toContain('saved as e2e_test.md');
162 | 
163 |       // 2. List prompts should show the new prompt
164 |       const listRequest = { params: { name: 'list_prompts', arguments: {} } };
165 |       const listResult = await tools.handleToolCall(listRequest as any);
166 |       expect(listResult.content[0].text).toContain('e2e_test');
167 |       expect(listResult.content[0].text).toContain('E2E Test');
168 | 
169 |       // 3. Get the specific prompt
170 |       const getRequest = {
171 |         params: {
172 |           name: 'get_prompt',
173 |           arguments: { name: 'e2e_test' }
174 |         }
175 |       };
176 |       const getResult = await tools.handleToolCall(getRequest as any);
177 |       expect(getResult.content[0].text).toContain('E2E Test Prompt');
178 | 
179 |       // 4. Verify prompt exists in cache
180 |       expect(cache.getPrompt('e2e_test')?.metadata.title).toBe('E2E Test');
181 |       expect(cache.getPrompt('e2e_test')?.metadata.category).toBe('testing');
182 | 
183 |       // 5. Delete the prompt
184 |       const deleteRequest = {
185 |         params: {
186 |           name: 'delete_prompt',
187 |           arguments: { name: 'e2e_test' }
188 |         }
189 |       };
190 |       const deleteResult = await tools.handleToolCall(deleteRequest as any);
191 |       expect(deleteResult.content[0].text).toContain('deleted successfully');
192 | 
193 |       // 6. Verify prompt is no longer accessible
194 |       const getDeletedRequest = {
195 |         params: {
196 |           name: 'get_prompt',
197 |           arguments: { name: 'e2e_test' }
198 |         }
199 |       };
200 |       const getDeletedResult = await tools.handleToolCall(getDeletedRequest as any);
201 |       expect(getDeletedResult.isError).toBe(true);
202 |       expect(getDeletedResult.content[0].text).toContain('not found');
203 | 
204 |       await cache.cleanup();
205 |     });
206 |   });
207 | 
208 |   describe('error handling integration', () => {
209 |     it('should handle errors across component boundaries', async () => {
210 |       const { PromptCache } = await import('../src/cache.js');
211 |       const { PromptFileOperations } = await import('../src/fileOperations.js');
212 |       const { PromptTools } = await import('../src/tools.js');
213 | 
214 |       // Use a non-existent directory to trigger errors
215 |       const badDir = '/non/existent/directory';
216 |       const cache = new PromptCache(badDir);
217 |       const fileOps = new PromptFileOperations(badDir, cache);
218 |       const tools = new PromptTools(fileOps);
219 | 
220 |       // Try to get a prompt from non-existent directory
221 |       const getRequest = {
222 |         params: {
223 |           name: 'get_prompt',
224 |           arguments: { name: 'any-prompt' }
225 |         }
226 |       };
227 | 
228 |       const result = await tools.handleToolCall(getRequest as any);
229 |       expect(result.isError).toBe(true);
230 |       expect(result.content[0].text).toContain('Error:');
231 | 
232 |       await cache.cleanup();
233 |     });
234 |   });
235 | 
236 |   describe('signal handling simulation', () => {
237 |     it('should handle shutdown signals properly', () => {
238 |       const mockProcess = createMockProcess();
239 |       
240 |       // Simulate signal handler registration
241 |       const signalHandlers = new Map();
242 |       mockProcess.on.mockImplementation((signal: string, handler: Function) => {
243 |         signalHandlers.set(signal, handler);
244 |       });
245 | 
246 |       // Simulate the signal registration that would happen in main
247 |       mockProcess.on('SIGINT', () => {
248 |         console.error('Shutting down server...');
249 |       });
250 |       mockProcess.on('SIGTERM', () => {
251 |         console.error('Shutting down server...');
252 |       });
253 |       expect(signalHandlers.size).toBe(2);
254 |       expect(mockProcess.on).toHaveBeenCalledWith('SIGINT', expect.any(Function));
255 |       expect(mockProcess.on).toHaveBeenCalledWith('SIGTERM', expect.any(Function));
256 |     });
257 |   });
258 | 
259 |   describe('server initialization', () => {
260 |     it('should create server with correct configuration', async () => {
261 |       const { Server } = await import('@modelcontextprotocol/sdk/server/index.js');
262 |       const { StdioServerTransport } = await import('@modelcontextprotocol/sdk/server/stdio.js');
263 | 
264 |       // These modules are mocked, so we only verify that the mocked exports resolve
265 |       expect(Server).toBeDefined();
266 |       expect(StdioServerTransport).toBeDefined();
267 |     });
268 |   });
269 | 
270 |   describe('real file system integration', () => {
271 |     it('should work with real file operations', async () => {
272 |       const { PromptCache } = await import('../src/cache.js');
273 |       
274 |       // Create some test files
275 |       await createTestPromptFile(tempDir, 'real-test-1', 
276 |         { title: 'Real Test 1', difficulty: 'beginner' },
277 |         'This is a real file system test.'
278 |       );
279 |       await createTestPromptFile(tempDir, 'real-test-2',
280 |         { title: 'Real Test 2', difficulty: 'advanced' },
281 |         'This is another real file system test.'
282 |       );
283 | 
284 |       const cache = new PromptCache(tempDir);
285 |       await cache.initializeCache();
286 | 
287 |       // Verify cache loaded the files
288 |       expect(cache.size()).toBe(2);
289 |       
290 |       const prompt1 = cache.getPrompt('real-test-1');
291 |       const prompt2 = cache.getPrompt('real-test-2');
292 |       
293 |       expect(prompt1?.metadata.title).toBe('Real Test 1');
294 |       expect(prompt1?.metadata.difficulty).toBe('beginner');
295 |       expect(prompt2?.metadata.title).toBe('Real Test 2');
296 |       expect(prompt2?.metadata.difficulty).toBe('advanced');
297 |       
298 |       expect(prompt1?.preview).toContain('real file system test');
299 |       expect(prompt2?.preview).toContain('another real file system test');
300 | 
301 |       await cache.cleanup();
302 |     });
303 | 
304 |     it('should handle mixed valid and invalid files', async () => {
305 |       const fs = await import('fs/promises');
306 |       
307 |       // Create valid prompt file
308 |       await createTestPromptFile(tempDir, 'valid-prompt', 
309 |         { title: 'Valid Prompt' },
310 |         'This is valid content.'
311 |       );
312 | 
313 |       // Create invalid file (not markdown)
314 |       await fs.writeFile(`${tempDir}/invalid.txt`, 'This is not a markdown file');
315 |       
316 |       // Create file with invalid YAML (but should still work)
317 |       await fs.writeFile(`${tempDir}/broken-yaml.md`, 
318 |         '---\ninvalid: yaml: content\n---\n\nContent after broken YAML'
319 |       );
320 | 
321 |       const { PromptCache } = await import('../src/cache.js');
322 |       const cache = new PromptCache(tempDir);
323 |       await cache.initializeCache();
324 | 
325 |       // Should load at least the valid prompt
326 |       expect(cache.size()).toBeGreaterThanOrEqual(1);
327 |       expect(cache.getPrompt('valid-prompt')?.metadata.title).toBe('Valid Prompt');
328 |       
329 |       // Invalid.txt should be ignored
330 |       expect(cache.getPrompt('invalid')).toBeUndefined();
331 | 
332 |       await cache.cleanup();
333 |     });
334 |   });
335 | });
```

--------------------------------------------------------------------------------
/tests/cache.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for PromptCache class
  3 |  */
  4 | 
  5 | import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
  6 | import { PromptCache } from '../src/cache.js';
  7 | import { createTempDir, cleanupTempDir, createTestPromptFile, createSamplePromptInfo, mockConsoleError, wait } from './helpers/testUtils.js';
  8 | import { createMockWatcher } from './helpers/mocks.js';
  9 | 
 10 | // Mock chokidar
 11 | vi.mock('chokidar', () => ({
 12 |   default: {
 13 |     watch: vi.fn()
 14 |   }
 15 | }));
 16 | 
 17 | describe('PromptCache', () => {
 18 |   let tempDir: string;
 19 |   let cache: PromptCache;
 20 |   let consoleErrorSpy: ReturnType<typeof mockConsoleError>;
 21 | 
 22 |   beforeEach(async () => {
 23 |     tempDir = await createTempDir();
 24 |     cache = new PromptCache(tempDir);
 25 |     consoleErrorSpy = mockConsoleError();
 26 |     vi.clearAllMocks();
 27 |   });
 28 | 
 29 |   afterEach(async () => {
 30 |     await cache.cleanup();
 31 |     await cleanupTempDir(tempDir);
 32 |     consoleErrorSpy.mockRestore();
 33 |   });
 34 | 
 35 |   describe('constructor', () => {
 36 |     it('should create cache with empty state', () => {
 37 |       expect(cache.isEmpty()).toBe(true);
 38 |       expect(cache.size()).toBe(0);
 39 |       expect(cache.getAllPrompts()).toEqual([]);
 40 |     });
 41 |   });
 42 | 
 43 |   describe('getAllPrompts', () => {
 44 |     it('should return empty array when cache is empty', () => {
 45 |       expect(cache.getAllPrompts()).toEqual([]);
 46 |     });
 47 | 
 48 |     it('should return all cached prompts', async () => {
 49 |       await createTestPromptFile(tempDir, 'test1', { title: 'Test 1' });
 50 |       await createTestPromptFile(tempDir, 'test2', { title: 'Test 2' });
 51 |       
 52 |       await cache.initializeCache();
 53 |       
 54 |       const prompts = cache.getAllPrompts();
 55 |       expect(prompts).toHaveLength(2);
 56 |       expect(prompts.some(p => p.name === 'test1')).toBe(true);
 57 |       expect(prompts.some(p => p.name === 'test2')).toBe(true);
 58 |     });
 59 |   });
 60 | 
 61 |   describe('getPrompt', () => {
 62 |     it('should return undefined for non-existent prompt', () => {
 63 |       expect(cache.getPrompt('non-existent')).toBeUndefined();
 64 |     });
 65 | 
 66 |     it('should return cached prompt by name', async () => {
 67 |       await createTestPromptFile(tempDir, 'test-prompt', { title: 'Test Prompt' });
 68 |       await cache.initializeCache();
 69 |       
 70 |       const prompt = cache.getPrompt('test-prompt');
 71 |       expect(prompt).toBeDefined();
 72 |       expect(prompt?.name).toBe('test-prompt');
 73 |       expect(prompt?.metadata.title).toBe('Test Prompt');
 74 |     });
 75 |   });
 76 | 
 77 |   describe('isEmpty', () => {
 78 |     it('should return true when cache is empty', () => {
 79 |       expect(cache.isEmpty()).toBe(true);
 80 |     });
 81 | 
 82 |     it('should return false when cache has prompts', async () => {
 83 |       await createTestPromptFile(tempDir, 'test-prompt');
 84 |       await cache.initializeCache();
 85 |       
 86 |       expect(cache.isEmpty()).toBe(false);
 87 |     });
 88 |   });
 89 | 
 90 |   describe('size', () => {
 91 |     it('should return 0 when cache is empty', () => {
 92 |       expect(cache.size()).toBe(0);
 93 |     });
 94 | 
 95 |     it('should return correct count of cached prompts', async () => {
 96 |       await createTestPromptFile(tempDir, 'test1');
 97 |       await createTestPromptFile(tempDir, 'test2');
 98 |       await createTestPromptFile(tempDir, 'test3');
 99 |       
100 |       await cache.initializeCache();
101 |       
102 |       expect(cache.size()).toBe(3);
103 |     });
104 |   });
105 | 
106 |   describe('initializeCache', () => {
107 |     it('should load all markdown files from directory', async () => {
108 |       await createTestPromptFile(tempDir, 'prompt1', { title: 'Prompt 1' });
109 |       await createTestPromptFile(tempDir, 'prompt2', { title: 'Prompt 2' });
110 |       
111 |       await cache.initializeCache();
112 |       
113 |       expect(cache.size()).toBe(2);
114 |       expect(cache.getPrompt('prompt1')?.metadata.title).toBe('Prompt 1');
115 |       expect(cache.getPrompt('prompt2')?.metadata.title).toBe('Prompt 2');
116 |     });
117 | 
118 |     it('should ignore non-markdown files', async () => {
119 |       await createTestPromptFile(tempDir, 'prompt1');
120 |       // Create a non-markdown file
121 |       const fs = await import('fs/promises');
122 |       await fs.writeFile(`${tempDir}/readme.txt`, 'Not a prompt');
123 |       
124 |       await cache.initializeCache();
125 |       
126 |       expect(cache.size()).toBe(1);
127 |       expect(cache.getPrompt('prompt1')).toBeDefined();
128 |       expect(cache.getPrompt('readme')).toBeUndefined();
129 |     });
130 | 
131 |     it('should handle files with YAML frontmatter', async () => {
132 |       const metadata = {
133 |         title: 'Test Prompt',
134 |         description: 'A test prompt',
135 |         category: 'test',
136 |         tags: ['test', 'example'],
137 |         difficulty: 'beginner' as const,
138 |         author: 'Test Author',
139 |         version: '1.0'
140 |       };
141 |       
142 |       await createTestPromptFile(tempDir, 'with-frontmatter', metadata, 'Content after frontmatter');
143 |       await cache.initializeCache();
144 |       
145 |       const prompt = cache.getPrompt('with-frontmatter');
146 |       expect(prompt?.metadata).toEqual(metadata);
147 |       expect(prompt?.preview).toContain('Content after frontmatter');
148 |     });
149 | 
150 |     it('should handle files without frontmatter', async () => {
151 |       await createTestPromptFile(tempDir, 'no-frontmatter', {}, 'Just plain content');
152 |       await cache.initializeCache();
153 |       
154 |       const prompt = cache.getPrompt('no-frontmatter');
155 |       expect(prompt?.metadata).toEqual({});
156 |       expect(prompt?.preview).toContain('Just plain content');
157 |     });
158 | 
159 |     it('should create preview text', async () => {
160 |       const longContent = 'A'.repeat(200);
161 |       await createTestPromptFile(tempDir, 'long-content', {}, longContent);
162 |       await cache.initializeCache();
163 |       
164 |       const prompt = cache.getPrompt('long-content');
165 |       expect(prompt?.preview).toHaveLength(103); // 100 chars + '...'
166 |       expect(prompt?.preview.endsWith('...')).toBe(true);
167 |     });
168 | 
169 |     it('should handle file read errors gracefully', async () => {
170 |       // Create a valid file first
171 |       await createTestPromptFile(tempDir, 'valid-prompt');
172 |       
173 |       // Create an invalid file by creating a directory with .md extension
174 |       const fs = await import('fs/promises');
175 |       await fs.mkdir(`${tempDir}/invalid.md`, { recursive: true });
176 |       
177 |       await cache.initializeCache();
178 | 
179 |       // Should have loaded the valid file and logged an error for the invalid one
180 |       expect(cache.size()).toBe(1);
181 |       expect(consoleErrorSpy).toHaveBeenCalledWith(
182 |         expect.stringContaining('Failed to load prompt metadata for invalid.md'),
183 |         expect.any(String)
184 |       );
185 |     });
186 | 
187 |     it('should log successful cache initialization', async () => {
188 |       await createTestPromptFile(tempDir, 'test1');
189 |       await createTestPromptFile(tempDir, 'test2');
190 |       
191 |       await cache.initializeCache();
192 |       
193 |       expect(consoleErrorSpy).toHaveBeenCalledWith('Loaded 2 prompts into cache');
194 |     });
195 | 
196 |     it('should handle missing directory gracefully', async () => {
197 |       const nonExistentDir = `${tempDir}/non-existent`;
198 |       const cacheWithBadDir = new PromptCache(nonExistentDir);
199 |       
200 |       await cacheWithBadDir.initializeCache();
201 |       
202 |       expect(cacheWithBadDir.size()).toBe(0);
203 |       expect(consoleErrorSpy).not.toHaveBeenCalledWith(
204 |         expect.stringContaining('Failed to initialize cache')
205 |       );
206 |     });
207 |   });
208 | 
209 |   describe('initializeFileWatcher', () => {
210 |     let mockWatcher: ReturnType<typeof createMockWatcher>;
211 | 
212 |     beforeEach(async () => {
213 |       const chokidar = await import('chokidar');
214 |       mockWatcher = createMockWatcher();
215 |       vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
216 |     });
217 | 
218 |     it('should initialize file watcher only once', async () => {
219 |       const chokidar = await import('chokidar');
220 |       cache.initializeFileWatcher();
221 |       cache.initializeFileWatcher();
222 |       
223 |       // Should only be called once despite multiple calls
224 |       expect(chokidar.default.watch).toHaveBeenCalledTimes(1);
225 |     });
226 | 
227 |     it('should set up file watcher with correct options', async () => {
228 |       const chokidar = await import('chokidar');
229 |       
230 |       cache.initializeFileWatcher();
231 |       
232 |       expect(chokidar.default.watch).toHaveBeenCalledWith(
233 |         expect.stringContaining('*.md'),
234 |         {
235 |           ignored: /^\./,
236 |           persistent: true,
237 |           ignoreInitial: true
238 |         }
239 |       );
240 |     });
241 | 
242 |     it('should register event handlers', () => {
243 |       cache.initializeFileWatcher();
244 |       
245 |       expect(mockWatcher.on).toHaveBeenCalledWith('add', expect.any(Function));
246 |       expect(mockWatcher.on).toHaveBeenCalledWith('change', expect.any(Function));
247 |       expect(mockWatcher.on).toHaveBeenCalledWith('unlink', expect.any(Function));
248 |       expect(mockWatcher.on).toHaveBeenCalledWith('error', expect.any(Function));
249 |     });
250 | 
251 |     it('should log initialization message', () => {
252 |       cache.initializeFileWatcher();
253 |       
254 |       expect(consoleErrorSpy).toHaveBeenCalledWith(
255 |         'File watcher initialized for prompts directory'
256 |       );
257 |     });
258 |   });
259 | 
260 |   describe('cleanup', () => {
261 |     it('should close file watcher if initialized', async () => {
262 |       const chokidar = await import('chokidar');
263 |       const mockWatcher = createMockWatcher();
264 |       vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
265 |       
266 |       cache.initializeFileWatcher();
267 |       await cache.cleanup();
268 |       
269 |       expect(mockWatcher.close).toHaveBeenCalled();
270 |     });
271 | 
272 |     it('should handle cleanup when watcher not initialized', async () => {
273 |       // Should not throw
274 |       await expect(cache.cleanup()).resolves.not.toThrow();
275 |     });
276 | 
277 |     it('should reset watcher state after cleanup', async () => {
278 |       const chokidar = await import('chokidar');
279 |       const mockWatcher = createMockWatcher();
280 |       vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
281 |       
282 |       cache.initializeFileWatcher();
283 |       await cache.cleanup();
284 |       
285 |       // Should be able to initialize again
286 |       cache.initializeFileWatcher();
287 |       expect(chokidar.default.watch).toHaveBeenCalledTimes(2);
288 |     });
289 |   });
290 | 
291 |   describe('file watcher integration', () => {
292 |     let mockWatcher: ReturnType<typeof createMockWatcher>;
293 |     let addHandler: Function;
294 |     let changeHandler: Function;
295 |     let unlinkHandler: Function;
296 |     let errorHandler: Function;
297 | 
298 |     beforeEach(async () => {
299 |       const chokidar = await import('chokidar');
300 |       mockWatcher = createMockWatcher();
301 |       
302 |       // Capture event handlers
303 |       mockWatcher.on.mockImplementation((event: string, handler: Function) => {
304 |         switch (event) {
305 |           case 'add': addHandler = handler; break;
306 |           case 'change': changeHandler = handler; break;
307 |           case 'unlink': unlinkHandler = handler; break;
308 |           case 'error': errorHandler = handler; break;
309 |         }
310 |         return mockWatcher;
311 |       });
312 |       
313 |       vi.mocked(chokidar.default.watch).mockReturnValue(mockWatcher as any);
314 |       
315 |       // Initialize cache and watcher
316 |       await cache.initializeCache();
317 |       cache.initializeFileWatcher();
318 |     });
319 | 
320 |     it('should handle file addition', async () => {
321 |       const filePath = `${tempDir}/new-prompt.md`;
322 |       await createTestPromptFile(tempDir, 'new-prompt', { title: 'New Prompt' });
323 |       
324 |       // Simulate file watcher add event
325 |       await addHandler(filePath);
326 |       
327 |       expect(consoleErrorSpy).toHaveBeenCalledWith('Prompt added: new-prompt.md');
328 |       expect(cache.getPrompt('new-prompt')?.metadata.title).toBe('New Prompt');
329 |     });
330 | 
331 |     it('should handle file changes', async () => {
332 |       // Create initial file
333 |       await createTestPromptFile(tempDir, 'test-prompt', { title: 'Original Title' });
334 |       await cache.initializeCache();
335 |       
336 |       // Update file
337 |       await createTestPromptFile(tempDir, 'test-prompt', { title: 'Updated Title' });
338 |       
339 |       // Simulate file watcher change event
340 |       const filePath = `${tempDir}/test-prompt.md`;
341 |       await changeHandler(filePath);
342 |       
343 |       expect(consoleErrorSpy).toHaveBeenCalledWith('Prompt updated: test-prompt.md');
344 |       expect(cache.getPrompt('test-prompt')?.metadata.title).toBe('Updated Title');
345 |     });
346 | 
347 |     it('should handle file deletion', async () => {
348 |       // Create and cache a file
349 |       await createTestPromptFile(tempDir, 'to-delete');
350 |       await cache.initializeCache();
351 |       expect(cache.getPrompt('to-delete')).toBeDefined();
352 |       
353 |       // Simulate file watcher unlink event
354 |       const filePath = `${tempDir}/to-delete.md`;
355 |       await unlinkHandler(filePath);
356 |       
357 |       expect(consoleErrorSpy).toHaveBeenCalledWith('Prompt deleted: to-delete.md');
358 |       expect(cache.getPrompt('to-delete')).toBeUndefined();
359 |     });
360 | 
361 |     it('should handle watcher errors', () => {
362 |       const error = new Error('Watcher error');
363 |       
364 |       errorHandler(error);
365 |       
366 |       expect(consoleErrorSpy).toHaveBeenCalledWith('File watcher error:', error);
367 |     });
368 | 
369 |     it('should ignore non-markdown files in watcher events', async () => {
370 |       const sizeBefore = cache.size();
371 |       
372 |       // Simulate adding a non-markdown file
373 |       await addHandler(`${tempDir}/readme.txt`);
374 |       
375 |       expect(cache.size()).toBe(sizeBefore);
376 |     });
377 |   });
378 | });
```

--------------------------------------------------------------------------------
/tests/tools.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for PromptTools class
  3 |  */
  4 | 
  5 | import { describe, it, expect, beforeEach, vi } from 'vitest';
  6 | import { PromptTools } from '../src/tools.js';
  7 | import { createSamplePromptInfo, createMockCallToolRequest, createExpectedResponse } from './helpers/testUtils.js';
  8 | import { MockPromptFileOperations } from './helpers/mocks.js';
  9 | 
 10 | describe('PromptTools', () => {
 11 |   let mockFileOps: MockPromptFileOperations;
 12 |   let tools: PromptTools;
 13 | 
 14 |   beforeEach(() => {
 15 |     mockFileOps = new MockPromptFileOperations();
 16 |     tools = new PromptTools(mockFileOps as any);
 17 |   });
 18 | 
 19 |   describe('constructor', () => {
 20 |     it('should create instance with file operations dependency', () => {
 21 |       expect(tools).toBeDefined();
 22 |       expect(tools).toBeInstanceOf(PromptTools);
 23 |     });
 24 |   });
 25 | 
 26 |   describe('getToolDefinitions', () => {
 27 |     it('should return all tool definitions', () => {
 28 |       const definitions = tools.getToolDefinitions();
 29 |       
 30 |       expect(definitions.tools).toHaveLength(5);
 31 |       
 32 |       const toolNames = definitions.tools.map(tool => tool.name);
 33 |       expect(toolNames).toEqual([
 34 |         'add_prompt',
 35 |         'get_prompt',
 36 |         'list_prompts',
 37 |         'delete_prompt',
 38 |         'create_structured_prompt'
 39 |       ]);
 40 |     });
 41 | 
 42 |     it('should have correct add_prompt tool definition', () => {
 43 |       const definitions = tools.getToolDefinitions();
 44 |       const addPromptTool = definitions.tools.find(tool => tool.name === 'add_prompt');
 45 |       
 46 |       expect(addPromptTool).toBeDefined();
 47 |       expect(addPromptTool?.description).toBe('Add a new prompt to the collection');
 48 |       expect(addPromptTool?.inputSchema.required).toEqual(['name', 'filename', 'content']);
 49 |       expect(addPromptTool?.inputSchema.properties).toHaveProperty('name');
 50 |       expect(addPromptTool?.inputSchema.properties).toHaveProperty('filename');
 51 |       expect(addPromptTool?.inputSchema.properties).toHaveProperty('content');
 52 |     });
 53 | 
 54 |     it('should have correct get_prompt tool definition', () => {
 55 |       const definitions = tools.getToolDefinitions();
 56 |       const getPromptTool = definitions.tools.find(tool => tool.name === 'get_prompt');
 57 |       
 58 |       expect(getPromptTool).toBeDefined();
 59 |       expect(getPromptTool?.description).toBe('Retrieve a prompt by name');
 60 |       expect(getPromptTool?.inputSchema.required).toEqual(['name']);
 61 |       expect(getPromptTool?.inputSchema.properties).toHaveProperty('name');
 62 |     });
 63 | 
 64 |     it('should have correct list_prompts tool definition', () => {
 65 |       const definitions = tools.getToolDefinitions();
 66 |       const listPromptsTool = definitions.tools.find(tool => tool.name === 'list_prompts');
 67 |       
 68 |       expect(listPromptsTool).toBeDefined();
 69 |       expect(listPromptsTool?.description).toBe('List all available prompts');
 70 |       expect(listPromptsTool?.inputSchema.properties).toEqual({});
 71 |     });
 72 | 
 73 |     it('should have correct delete_prompt tool definition', () => {
 74 |       const definitions = tools.getToolDefinitions();
 75 |       const deletePromptTool = definitions.tools.find(tool => tool.name === 'delete_prompt');
 76 |       
 77 |       expect(deletePromptTool).toBeDefined();
 78 |       expect(deletePromptTool?.description).toBe('Delete a prompt by name');
 79 |       expect(deletePromptTool?.inputSchema.required).toEqual(['name']);
 80 |       expect(deletePromptTool?.inputSchema.properties).toHaveProperty('name');
 81 |     });
 82 |   });
 83 | 
 84 |   describe('handleToolCall', () => {
 85 |     describe('add_prompt', () => {
 86 |       it('should add prompt with automatic metadata when none exists', async () => {
 87 |         const request = createMockCallToolRequest('add_prompt', {
 88 |           name: 'test-prompt',
 89 |           filename: 'test-prompt-file',
 90 |           content: '# Test Prompt\n\nThis is a test.'
 91 |         });
 92 |         
 93 |         mockFileOps.savePromptWithFilename.mockResolvedValue('test-prompt-file.md');
 94 |         
 95 |         const result = await tools.handleToolCall(request as any);
 96 |         
 97 |         // Should call savePromptWithFilename with content enhanced with metadata
 98 |         expect(mockFileOps.savePromptWithFilename).toHaveBeenCalledWith(
 99 |           'test-prompt-file',
100 |           expect.stringContaining('title: "Test Prompt"')
101 |         );
102 |         expect(mockFileOps.savePromptWithFilename).toHaveBeenCalledWith(
103 |           'test-prompt-file',
104 |           expect.stringContaining('# Test Prompt\n\nThis is a test.')
105 |         );
106 |         expect(result).toEqual(createExpectedResponse(
107 |           'Prompt "test-prompt" saved as test-prompt-file.md'
108 |         ));
109 |       });
110 | 
111 |       it('should preserve existing frontmatter when present', async () => {
112 |         const contentWithFrontmatter = `---
113 | title: "Existing Title"
114 | category: "custom"
115 | ---
116 | 
117 | # Test Prompt
118 | 
119 | This already has metadata.`;
120 |         
121 |         const request = createMockCallToolRequest('add_prompt', {
122 |           name: 'test-prompt',
123 |           filename: 'test-prompt-file',
124 |           content: contentWithFrontmatter
125 |         });
126 |         
127 |         mockFileOps.savePromptWithFilename.mockResolvedValue('test-prompt-file.md');
128 |         
129 |         const result = await tools.handleToolCall(request as any);
130 |         
131 |         // Should call savePromptWithFilename with original content unchanged
132 |         expect(mockFileOps.savePromptWithFilename).toHaveBeenCalledWith(
133 |           'test-prompt-file',
134 |           contentWithFrontmatter
135 |         );
136 |         expect(result).toEqual(createExpectedResponse(
137 |           'Prompt "test-prompt" saved as test-prompt-file.md'
138 |         ));
139 |       });
140 | 
141 |       it('should handle missing content parameter', async () => {
142 |         const request = createMockCallToolRequest('add_prompt', {
143 |           name: 'test-prompt'
144 |           // filename and content are missing
145 |         });
146 |         
147 |         const result = await tools.handleToolCall(request as any);
148 |         
149 |         expect(result).toEqual(createExpectedResponse(
150 |           'Error: Name, filename, and content are required for add_prompt',
151 |           true
152 |         ));
153 |         expect(mockFileOps.savePrompt).not.toHaveBeenCalled();
154 |       });
155 | 
156 |       it('should handle file operation errors', async () => {
157 |         const request = createMockCallToolRequest('add_prompt', {
158 |           name: 'test-prompt',
159 |           filename: 'test-prompt-file',
160 |           content: 'test content'
161 |         });
162 |         
163 |         mockFileOps.savePromptWithFilename.mockRejectedValue(new Error('Disk full'));
164 |         
165 |         const result = await tools.handleToolCall(request as any);
166 |         
167 |         expect(result).toEqual(createExpectedResponse(
168 |           'Error: Disk full',
169 |           true
170 |         ));
171 |       });
172 |     });
173 | 
174 |     describe('get_prompt', () => {
175 |       it('should retrieve prompt successfully', async () => {
176 |         const request = createMockCallToolRequest('get_prompt', {
177 |           name: 'test-prompt'
178 |         });
179 |         
180 |         const promptContent = '# Test Prompt\n\nThis is the full content.';
181 |         mockFileOps.readPrompt.mockResolvedValue(promptContent);
182 |         
183 |         const result = await tools.handleToolCall(request as any);
184 |         
185 |         expect(mockFileOps.readPrompt).toHaveBeenCalledWith('test-prompt');
186 |         expect(result).toEqual(createExpectedResponse(promptContent));
187 |       });
188 | 
189 |       it('should handle non-existent prompt', async () => {
190 |         const request = createMockCallToolRequest('get_prompt', {
191 |           name: 'non-existent'
192 |         });
193 |         
194 |         mockFileOps.readPrompt.mockRejectedValue(new Error('Prompt "non-existent" not found'));
195 |         
196 |         const result = await tools.handleToolCall(request as any);
197 |         
198 |         expect(result).toEqual(createExpectedResponse(
199 |           'Error: Prompt "non-existent" not found',
200 |           true
201 |         ));
202 |       });
203 |     });
204 | 
205 |     describe('list_prompts', () => {
206 |       it('should list prompts with metadata formatting', async () => {
207 |         const request = createMockCallToolRequest('list_prompts', {});
208 |         
209 |         const samplePrompts = [
210 |           createSamplePromptInfo({
211 |             name: 'prompt1',
212 |             metadata: { title: 'Prompt 1', category: 'test' },
213 |             preview: 'This is prompt 1 preview...'
214 |           }),
215 |           createSamplePromptInfo({
216 |             name: 'prompt2',
217 |             metadata: { title: 'Prompt 2', difficulty: 'advanced' },
218 |             preview: 'This is prompt 2 preview...'
219 |           })
220 |         ];
221 |         
222 |         mockFileOps.listPrompts.mockResolvedValue(samplePrompts);
223 |         
224 |         const result = await tools.handleToolCall(request as any);
225 |         
226 |         expect(mockFileOps.listPrompts).toHaveBeenCalled();
227 |         expect(result.content[0].type).toBe('text');
228 |         
229 |         const text = result.content[0].text as string;
230 |         expect(text).toContain('# Available Prompts');
231 |         expect(text).toContain('## prompt1');
232 |         expect(text).toContain('## prompt2');
233 |         expect(text).toContain('**Metadata:**');
234 |         expect(text).toContain('- title: Prompt 1');
235 |         expect(text).toContain('- category: test');
236 |         expect(text).toContain('**Preview:** This is prompt 1 preview...');
237 |         expect(text).toContain('---'); // Separator between prompts
238 |       });
239 | 
240 |       it('should handle empty prompt list', async () => {
241 |         const request = createMockCallToolRequest('list_prompts', {});
242 |         
243 |         mockFileOps.listPrompts.mockResolvedValue([]);
244 |         
245 |         const result = await tools.handleToolCall(request as any);
246 |         
247 |         expect(result).toEqual(createExpectedResponse('No prompts available'));
248 |       });
249 | 
250 |       it('should handle prompts with no metadata', async () => {
251 |         const request = createMockCallToolRequest('list_prompts', {});
252 |         
253 |         const promptWithoutMetadata = createSamplePromptInfo({
254 |           name: 'simple-prompt',
255 |           metadata: {},
256 |           preview: 'Simple prompt preview...'
257 |         });
258 |         
259 |         mockFileOps.listPrompts.mockResolvedValue([promptWithoutMetadata]);
260 |         
261 |         const result = await tools.handleToolCall(request as any);
262 |         
263 |         const text = result.content[0].text as string;
264 |         expect(text).toContain('## simple-prompt');
265 |         expect(text).not.toContain('**Metadata:**');
266 |         expect(text).toContain('**Preview:** Simple prompt preview...');
267 |       });
268 | 
269 |       it('should handle file operation errors', async () => {
270 |         const request = createMockCallToolRequest('list_prompts', {});
271 |         
272 |         mockFileOps.listPrompts.mockRejectedValue(new Error('Directory not accessible'));
273 |         
274 |         const result = await tools.handleToolCall(request as any);
275 |         
276 |         expect(result).toEqual(createExpectedResponse(
277 |           'Error: Directory not accessible',
278 |           true
279 |         ));
280 |       });
281 |     });
282 | 
283 |     describe('delete_prompt', () => {
284 |       it('should delete prompt successfully', async () => {
285 |         const request = createMockCallToolRequest('delete_prompt', {
286 |           name: 'test-prompt'
287 |         });
288 |         
289 |         mockFileOps.deletePrompt.mockResolvedValue(true);
290 |         
291 |         const result = await tools.handleToolCall(request as any);
292 |         
293 |         expect(mockFileOps.deletePrompt).toHaveBeenCalledWith('test-prompt');
294 |         expect(result).toEqual(createExpectedResponse(
295 |           'Prompt "test-prompt" deleted successfully'
296 |         ));
297 |       });
298 | 
299 |       it('should handle non-existent prompt deletion', async () => {
300 |         const request = createMockCallToolRequest('delete_prompt', {
301 |           name: 'non-existent'
302 |         });
303 |         
304 |         mockFileOps.deletePrompt.mockRejectedValue(new Error('Prompt "non-existent" not found'));
305 |         
306 |         const result = await tools.handleToolCall(request as any);
307 |         
308 |         expect(result).toEqual(createExpectedResponse(
309 |           'Error: Prompt "non-existent" not found',
310 |           true
311 |         ));
312 |       });
313 |     });
314 | 
315 |     describe('unknown tool', () => {
316 |       it('should handle unknown tool name', async () => {
317 |         const request = createMockCallToolRequest('unknown_tool', {});
318 |         
319 |         const result = await tools.handleToolCall(request as any);
320 |         
321 |         expect(result).toEqual(createExpectedResponse(
322 |           'Error: Unknown tool: unknown_tool',
323 |           true
324 |         ));
325 |       });
326 |     });
327 | 
328 |     describe('error handling', () => {
329 |       it('should handle non-Error exceptions', async () => {
330 |         const request = createMockCallToolRequest('get_prompt', { name: 'test' });
331 |         
332 |         // Mock a non-Error exception
333 |         mockFileOps.readPrompt.mockRejectedValue('String error');
334 |         
335 |         const result = await tools.handleToolCall(request as any);
336 |         
337 |         expect(result).toEqual(createExpectedResponse(
338 |           'Error: Unknown error',
339 |           true
340 |         ));
341 |       });
342 | 
343 |       it('should handle null/undefined exceptions', async () => {
344 |         const request = createMockCallToolRequest('get_prompt', { name: 'test' });
345 |         
346 |         mockFileOps.readPrompt.mockRejectedValue(null);
347 |         
348 |         const result = await tools.handleToolCall(request as any);
349 |         
350 |         expect(result).toEqual(createExpectedResponse(
351 |           'Error: Unknown error',
352 |           true
353 |         ));
354 |       });
355 |     });
356 |   });
357 | 
358 |   describe('formatPromptsList', () => {
359 |     it('should format single prompt correctly', async () => {
360 |       const request = createMockCallToolRequest('list_prompts', {});
361 |       
362 |       const singlePrompt = createSamplePromptInfo({
363 |         name: 'single-prompt',
364 |         metadata: {
365 |           title: 'Single Prompt',
366 |           description: 'A single test prompt',
367 |           tags: ['test']
368 |         },
369 |         preview: 'This is the preview text...'
370 |       });
371 |       
372 |       mockFileOps.listPrompts.mockResolvedValue([singlePrompt]);
373 |       
374 |       const result = await tools.handleToolCall(request as any);
375 |       const text = result.content[0].text as string;
376 |       
377 |       expect(text).toContain('# Available Prompts');
378 |       expect(text).toContain('## single-prompt');
379 |       expect(text).toContain('- title: Single Prompt');
380 |       expect(text).toContain('- description: A single test prompt');
381 |       expect(text).toContain('- tags: test');
382 |       expect(text).toContain('**Preview:** This is the preview text...');
383 |       expect(text).not.toContain('---'); // No separator for single prompt
384 |     });
385 | 
386 |     it('should handle complex metadata values', async () => {
387 |       const request = createMockCallToolRequest('list_prompts', {});
388 |       
389 |       const complexPrompt = createSamplePromptInfo({
390 |         name: 'complex-prompt',
391 |         metadata: {
392 |           title: 'Complex Prompt',
393 |           tags: ['tag1', 'tag2', 'tag3'],
394 |           customObject: { nested: 'value', array: [1, 2, 3] },
395 |           customBoolean: true,
396 |           customNumber: 42
397 |         },
398 |         preview: 'Complex preview...'
399 |       });
400 |       
401 |       mockFileOps.listPrompts.mockResolvedValue([complexPrompt]);
402 |       
403 |       const result = await tools.handleToolCall(request as any);
404 |       const text = result.content[0].text as string;
405 |       
406 |       expect(text).toContain('- tags: tag1,tag2,tag3');
407 |       expect(text).toContain('- customBoolean: true');
408 |       expect(text).toContain('- customNumber: 42');
409 |       expect(text).toContain('- customObject: [object Object]');
410 |     });
411 |   });
412 | 
413 |   describe('integration scenarios', () => {
414 |     it('should handle complete workflow', async () => {
415 |       // Add a prompt
416 |       const addRequest = createMockCallToolRequest('add_prompt', {
417 |         name: 'workflow-test',
418 |         filename: 'workflow-test-file',
419 |         content: '# Workflow Test\n\nTest content'
420 |       });
421 |       
422 |       mockFileOps.savePromptWithFilename.mockResolvedValue('workflow-test-file.md');
423 |       
424 |       let result = await tools.handleToolCall(addRequest as any);
425 |       expect(result.content[0].text).toContain('saved as workflow-test-file.md');
426 |       
427 |       // List prompts
428 |       const listRequest = createMockCallToolRequest('list_prompts', {});
429 |       const samplePrompt = createSamplePromptInfo({ name: 'workflow-test' });
430 |       mockFileOps.listPrompts.mockResolvedValue([samplePrompt]);
431 |       
432 |       result = await tools.handleToolCall(listRequest as any);
433 |       expect(result.content[0].text).toContain('workflow-test');
434 |       
435 |       // Get specific prompt
436 |       const getRequest = createMockCallToolRequest('get_prompt', {
437 |         name: 'workflow-test'
438 |       });
439 |       
440 |       mockFileOps.readPrompt.mockResolvedValue('# Workflow Test\n\nTest content');
441 |       
442 |       result = await tools.handleToolCall(getRequest as any);
443 |       expect(result.content[0].text).toContain('Test content');
444 |       
445 |       // Delete prompt
446 |       const deleteRequest = createMockCallToolRequest('delete_prompt', {
447 |         name: 'workflow-test'
448 |       });
449 |       
450 |       mockFileOps.deletePrompt.mockResolvedValue(true);
451 |       
452 |       result = await tools.handleToolCall(deleteRequest as any);
453 |       expect(result.content[0].text).toContain('deleted successfully');
454 |     });
455 |   });
456 | 
457 |   describe('create_structured_prompt', () => {
458 |     it('should create structured prompt with all metadata', async () => {
459 |       const request = createMockCallToolRequest('create_structured_prompt', {
460 |         name: 'test-structured',
461 |         title: 'Test Structured Prompt',
462 |         description: 'A test prompt with full metadata',
463 |         category: 'testing',
464 |         tags: ['test', 'structured'],
465 |         difficulty: 'intermediate',
466 |         author: 'Test Author',
467 |         content: '# Test Content\n\nThis is structured content.'
468 |       });
469 |       
470 |       mockFileOps.savePrompt.mockResolvedValue('test-structured.md');
471 |       
472 |       const result = await tools.handleToolCall(request as any);
473 |       
474 |       // Should call savePrompt with structured frontmatter
475 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
476 |         'test-structured',
477 |         expect.stringContaining('title: "Test Structured Prompt"')
478 |       );
479 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
480 |         'test-structured',
481 |         expect.stringContaining('category: "testing"')
482 |       );
483 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
484 |         'test-structured',
485 |         expect.stringContaining('tags: ["test","structured"]')
486 |       );
487 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
488 |         'test-structured',
489 |         expect.stringContaining('difficulty: "intermediate"')
490 |       );
491 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
492 |         'test-structured',
493 |         expect.stringContaining('# Test Content')
494 |       );
495 |       
496 |       expect(result.content[0].text).toContain('Structured prompt "test-structured" created successfully');
497 |       expect(result.content[0].text).toContain('Category: testing');
498 |       expect(result.content[0].text).toContain('Tags: test, structured');
499 |     });
500 | 
501 |     it('should use defaults for optional fields', async () => {
502 |       const request = createMockCallToolRequest('create_structured_prompt', {
503 |         name: 'minimal-structured',
504 |         title: 'Minimal Prompt',
505 |         description: 'A minimal prompt',
506 |         content: 'Just content.'
507 |       });
508 |       
509 |       mockFileOps.savePrompt.mockResolvedValue('minimal-structured.md');
510 |       
511 |       const result = await tools.handleToolCall(request as any);
512 |       
513 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
514 |         'minimal-structured',
515 |         expect.stringContaining('category: "general"')
516 |       );
517 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
518 |         'minimal-structured',
519 |         expect.stringContaining('tags: ["general"]')
520 |       );
521 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
522 |         'minimal-structured',
523 |         expect.stringContaining('difficulty: "beginner"')
524 |       );
525 |       expect(mockFileOps.savePrompt).toHaveBeenCalledWith(
526 |         'minimal-structured',
527 |         expect.stringContaining('author: "User"')
528 |       );
529 |     });
530 | 
531 |     it('should handle missing required fields', async () => {
532 |       const request = createMockCallToolRequest('create_structured_prompt', {
533 |         name: 'incomplete',
534 |         title: 'Missing Description'
535 |         // Missing description and content
536 |       });
537 |       
538 |       const result = await tools.handleToolCall(request as any);
539 |       
540 |       expect(result.isError).toBe(true);
541 |       expect(result.content[0].text).toContain('Name, content, title, and description are required');
542 |     });
543 |   });
544 | });
```
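
The assertions above lean on helpers from `tests/helpers/testUtils.ts`. Judging purely from how the tests use it, `createExpectedResponse` presumably builds an MCP-style tool response along these lines — a sketch inferred from the call sites, not the repository's actual implementation:

```typescript
// Hypothetical reconstruction of the helper shape implied by the tests above
// (e.g. `result.content[0].text` and the optional `isError` flag). The real
// helper lives in tests/helpers/testUtils.ts and may differ in detail.
interface ToolResponse {
  content: Array<{ type: 'text'; text: string }>;
  isError?: boolean;
}

function createExpectedResponse(text: string, isError = false): ToolResponse {
  const response: ToolResponse = {
    content: [{ type: 'text', text }],
  };
  // Only attach isError on failures, matching tests that compare
  // success responses without the flag via toEqual.
  if (isError) {
    response.isError = true;
  }
  return response;
}

// Usage mirroring the test expectations:
const ok = createExpectedResponse('Prompt "test-prompt" deleted successfully');
const err = createExpectedResponse('Error: Unknown tool: unknown_tool', true);
console.log(ok.content[0].text);
console.log(err.isError); // true
```

Keeping `isError` absent (rather than `false`) on success matters here because the tests use `toEqual`, which performs deep structural comparison.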