# Directory Structure
```
├── .gitignore
├── package-lock.json
├── package.json
├── README.md
├── src
│   ├── index.ts
│   ├── mcp
│   │   └── memoryBankMcp.ts
│   ├── templates
│   │   ├── .byterules
│   │   ├── activeContext.md
│   │   ├── productContext.md
│   │   ├── progress.md
│   │   ├── projectbrief.md
│   │   ├── systemPatterns.md
│   │   └── techContext.md
│   └── utils
│       ├── cursorRulesGenerator.ts
│       ├── fileManager.ts
│       └── gemini.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
1 | # Node.js
2 | node_modules/
3 | npm-debug.log
4 | yarn-debug.log
5 | yarn-error.log
6 |
7 | # Build and distribution
8 | dist/
9 | build/
10 | *.tsbuildinfo
11 |
12 | # Environment variables
13 | .env
14 | .env.local
15 | .env.development.local
16 | .env.test.local
17 | .env.production.local
18 |
19 | # Memory Bank data
20 | memory-bank/
21 | memory-bank-export/
22 | memory-bank-export.json
23 |
24 | # Editor and IDE
25 | .vscode/
26 | .idea/
27 | *.swp
28 | *.swo
29 |
30 | # Operating system
31 | .DS_Store
32 | Thumbs.db
33 |
34 | # Log files
35 | logs/
36 | *.log
37 | npm-debug.log*
38 | yarn-debug.log*
39 | yarn-error.log*
```
--------------------------------------------------------------------------------
/src/templates/.byterules:
--------------------------------------------------------------------------------
```
1 | # Memory Bank Document Orchestration Rules
2 |
3 | ## Document Types and Purposes
4 |
5 | ### 1. Project Brief (projectbrief.md)
6 | - **Purpose**: Acts as the project's foundation document defining key objectives
7 | - **When to Update**: At project initiation and when scope changes
8 | - **Structure**:
9 |   - Project Vision & Goals
10 |   - Key Stakeholders
11 |   - Success Metrics
12 |   - Constraints & Limitations
13 |   - Timeline Overview
14 | - **Commands**:
15 |   - `update_document projectbrief` - Update project brief details
16 |   - `query_memory_bank "project goals"` - Find project goal information
17 |
18 | ### 2. Product Context (productContext.md)
19 | - **Purpose**: Details product functionality, user experience, and market position
20 | - **When to Update**: When features change or market requirements shift
21 | - **Structure**:
22 |   - User Personas
23 |   - Feature List & Priorities
24 |   - User Stories
25 |   - Competitive Analysis
26 |   - Product Roadmap
27 | - **Commands**:
28 |   - `update_document productContext` - Update product information
29 |   - `query_memory_bank "features"` - Find feature-related information
30 |
31 | ### 3. System Patterns (systemPatterns.md)
32 | - **Purpose**: Documents system architecture and design decisions
33 | - **When to Update**: When architectural decisions are made or changed
34 | - **Structure**:
35 |   - System Architecture
36 |   - Component Diagrams
37 |   - Design Patterns Used
38 |   - Integration Points
39 |   - Data Flow
40 | - **Commands**:
41 |   - `update_document systemPatterns` - Update system architecture information
42 |   - `query_memory_bank "architecture"` - Find architecture information
43 |
44 | ### 4. Tech Context (techContext.md)
45 | - **Purpose**: Technical details, stack choices, and tooling decisions
46 | - **When to Update**: When technology decisions are made or changed
47 | - **Structure**:
48 |   - Technology Stack
49 |   - Development Environment
50 |   - Deployment Process
51 |   - Performance Considerations
52 |   - Technical Debt
53 | - **Commands**:
54 |   - `update_document techContext` - Update technology information
55 |   - `query_memory_bank "stack"` - Find technology stack information
56 |
57 | ### 5. Active Context (activeContext.md)
58 | - **Purpose**: Current tasks, open questions, and active development
59 | - **When to Update**: Daily or when switching focus areas
60 | - **Structure**:
61 |   - Current Sprint Goals
62 |   - Active Tasks
63 |   - Blockers & Challenges
64 |   - Decisions Needed
65 |   - Next Actions
66 | - **Commands**:
67 |   - `update_document activeContext` - Update current work information
68 |   - `query_memory_bank "current tasks"` - Find information about active work
69 |
70 | ### 6. Progress (progress.md)
71 | - **Purpose**: Progress tracking and milestone documentation
72 | - **When to Update**: After completing tasks or reaching milestones
73 | - **Structure**:
74 |   - Milestones Achieved
75 |   - Current Progress Status
76 |   - Sprint/Cycle History
77 |   - Learnings & Adjustments
78 |   - Next Milestones
79 | - **Commands**:
80 |   - `update_document progress` - Update project progress information
81 |   - `query_memory_bank "milestone"` - Find milestone information
82 |
83 | ## Document Workflow Processes
84 |
85 | ### Project Initialization
86 | 1. Create project brief with clear goals
87 | 2. Define product context based on brief
88 | 3. Establish initial system patterns
89 | 4. Document technology decisions
90 | 5. Set up initial active context and progress tracking
91 |
92 | ### Feature Development Cycle
93 | 1. Update active context with new feature details
94 | 2. Reference system patterns for implementation guidance
95 | 3. Document technical decisions in tech context
96 | 4. Update progress when feature is completed
97 | 5. Ensure product context reflects new capabilities
98 |
99 | ### Project Review Process
100 | 1. Review progress against project brief goals
101 | 2. Validate system patterns match implementation
102 | 3. Update product context with feedback/learnings
103 | 4. Document technical challenges in tech context
104 | 5. Set new goals in active context
105 |
106 | ## Best Practices
107 |
108 | ### Document Maintenance
109 | - Keep documents concise and focused
110 | - Use markdown formatting for readability
111 | - Include diagrams where appropriate (store in resources/)
112 | - Link between documents when referencing related content
113 | - Update documents regularly based on the workflow process
114 |
115 | ### Collaboration Guidelines
116 | - Review document changes with team members
117 | - Hold regular sync meetings to update active context
118 | - Use version control for tracking document history
119 | - Maintain changelog entries in progress.md
120 | - Cross-reference documents to maintain consistency
121 |
122 | ## Command Reference
123 | - `initialize_memory_bank` - Create a new Memory Bank structure
124 | - `update_document <docType>` - Update specific document content
125 | - `query_memory_bank <query>` - Search across all documents
126 | - `export_memory_bank` - Export current Memory Bank state
127 |
128 | ## Document Integration Flow
129 | Project Brief → Product Context → System Patterns → Tech Context → Active Context → Progress
130 |
131 | Follow this integration flow to ensure proper document orchestration and maintain project coherence.
132 |
```
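The `query_memory_bank` command described above searches all six documents with relevance ranking. The actual implementation lives in the server sources (`src/mcp/memoryBankMcp.ts` / `src/utils/fileManager.ts`, not reproduced in full in this listing); a rough, hypothetical sketch of the kind of term-overlap ranking it describes could look like:

```typescript
// Hypothetical sketch: score each document by how many query terms it
// contains, keep only documents that match at least one term, and return
// matches sorted by descending score. The real server logic may differ.
function rankDocuments(
  query: string,
  docs: Record<string, string>
): Array<[string, number]> {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return Object.entries(docs)
    .map(([name, content]): [string, number] => {
      const text = content.toLowerCase();
      // Count how many distinct query terms appear in this document.
      const score = terms.filter((t) => text.includes(t)).length;
      return [name, score];
    })
    .filter(([, score]) => score > 0)
    .sort((a, b) => b[1] - a[1]);
}

// Example: a query about architecture should surface systemPatterns.md first.
const ranked = rankDocuments("system architecture", {
  "systemPatterns.md": "## System Architecture\nA layered architecture with...",
  "progress.md": "## Milestones Achieved\nInitial release shipped.",
});
console.log(ranked);
```

This is only an illustration of the ranking idea; `rankDocuments` is not a function exported by the project.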
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # Memory Bank MCP
2 |
3 | <div align="center">
4 |   <img src="https://github.com/tuncer-byte/byte/blob/main/media/icons/icon-white.png" height="128">
5 |   <h1>Memory Bank MCP</h1>
6 |   <p>
7 |     <b>Structured project knowledge management for LLMs via Model Context Protocol (MCP)</b>
8 |   </p>
9 | </div>
10 |
11 | <a href="https://glama.ai/mcp/servers/@tuncer-byte/memory-bank-MCP">
12 | <img width="380" height="200" src="https://glama.ai/mcp/servers/@tuncer-byte/memory-bank-MCP/badge" alt="Memory Bank MCP server" />
13 | </a>
14 |
15 | ---
16 |
17 | > **Note:** This is not a traditional Node.js application. Memory Bank MCP is an **MCP server**—a component in the [Model Context Protocol](https://modelcontextprotocol.io/introduction) ecosystem. It exposes project knowledge to LLM-powered agents and tools using a standardized protocol, enabling seamless integration with AI clients (e.g., Claude Desktop, IDEs, or custom LLM agents).
18 |
19 | ---
20 |
21 | ## What is Model Context Protocol (MCP)?
22 |
23 | MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI: it provides a universal way to connect AI models to data sources and tools, both locally and remotely. MCP enables:
24 |
25 | - **Plug-and-play integrations** between LLMs, data, and tools
26 | - **Switching between LLM providers** with minimal friction
27 | - **Secure, modular architecture** for building AI workflows
28 |
29 | Learn more: [MCP Introduction](https://modelcontextprotocol.io/introduction)
30 |
31 | ## About Memory Bank MCP
32 |
33 | Memory Bank MCP is an **MCP server** that helps teams create, manage, and access structured project documentation. It generates and maintains interconnected Markdown documents capturing all aspects of project knowledge, from high-level goals to technical details and daily progress. It is designed to be accessed by MCP-compatible clients and LLM agents.
34 |
35 | ## Features
36 |
37 | - **AI-Generated Documentation**: Uses Gemini API to generate and update project documentation
38 | - **Structured Knowledge System**: Maintains six core document types in a hierarchical structure
39 | - **MCP Server**: Implements the Model Context Protocol for integration with LLM agents and tools
40 | - **Customizable Storage**: Choose where your Memory Bank directory is created
41 | - **Document Templates**: Pre-defined templates for project brief, product context, system patterns, etc.
42 | - **AI-Assisted Updates**: Update documents manually or regenerate them with AI
43 | - **Advanced Querying**: Search across all documents with context-aware relevance ranking
44 |
45 | ## Installation
46 |
47 | ```bash
48 | # Clone the repository
49 | git clone https://github.com/tuncer-byte/memory-bank-mcp.git
50 | cd memory-bank-mcp
51 |
52 | # Install dependencies
53 | npm install
54 |
55 | # (Optional) Create .env file with your Gemini API key
56 | echo "GEMINI_API_KEY=your_api_key_here" > .env
57 | ```
58 |
59 | ## Usage
60 |
61 | > **Note:** Memory Bank MCP is intended to be run as an MCP server, not as a standalone app. You typically launch it as part of an MCP workflow, and connect to it from an MCP-compatible client (such as Claude Desktop or your own LLM agent).
62 |
63 | ### Development Mode
64 |
65 | ```bash
66 | npm run dev
67 | ```
68 |
69 | ### Production Mode
70 |
71 | ```bash
72 | npm run build
73 | npm run start
74 | ```
75 |
76 | ### MCP Integration
77 |
78 | To connect Memory Bank MCP to your MCP client, add the following to your `mcp.json` configuration:
79 |
80 | ```json
81 | {
82 |   "memoryBank": {
83 |     "command": "node",
84 |     "args": ["/path/to/memory-bank-mcp/dist/index.js"],
85 |     "env": {
86 |       "GEMINI_API_KEY": "your_gemini_api_key_here"
87 |     }
88 |   }
89 | }
90 | ```
91 |
92 | Replace `/path/to/memory-bank-mcp/dist/index.js` with the absolute path to your built file, and add your Gemini API key if needed.
93 |
94 | ---
95 |
96 | ## MCP Tools Exposed by Memory Bank
97 |
98 | Memory Bank MCP provides the following tools via the Model Context Protocol:
99 |
100 | ### `initialize_memory_bank`
101 |
102 | Creates a new Memory Bank structure with all document templates.
103 |
104 | **Parameters:**
105 | - `goal` (string): Project goal description (min 10 characters)
106 | - `geminiApiKey` (string, optional): Gemini API key for document generation
107 | - `location` (string, optional): Absolute path where memory-bank folder will be created
108 |
109 | **Example:**
110 | ```javascript
111 | await callTool({
112 |   name: "initialize_memory_bank",
113 |   arguments: {
114 |     goal: "Building a self-documenting AI-powered software development assistant",
115 |     location: "/Users/username/Documents/projects/ai-assistant"
116 |   }
117 | });
118 | ```
119 |
120 | ### `update_document`
121 |
122 | Updates a specific document in the Memory Bank.
123 |
124 | **Parameters:**
125 | - `documentType` (enum): One of: `projectbrief`, `productContext`, `systemPatterns`, `techContext`, `activeContext`, `progress`
126 | - `content` (string, optional): New content for the document
127 | - `regenerate` (boolean, default: false): Whether to regenerate the document using AI
128 |
129 | **Example:**
130 | ```javascript
131 | await callTool({
132 |   name: "update_document",
133 |   arguments: {
134 |     documentType: "projectbrief",
135 |     content: "# Project Brief\n\n## Purpose\nTo develop an advanced and user-friendly AI..."
136 |   }
137 | });
138 | ```
139 |
140 | ### `query_memory_bank`
141 |
142 | Searches across all documents with context-aware relevance ranking.
143 |
144 | **Parameters:**
145 | - `query` (string): Search query (min 5 characters)
146 |
147 | **Example:**
148 | ```javascript
149 | await callTool({
150 |   name: "query_memory_bank",
151 |   arguments: {
152 |     query: "system architecture components"
153 |   }
154 | });
155 | ```
156 |
157 | ### `export_memory_bank`
158 |
159 | Exports all Memory Bank documents.
160 |
161 | **Parameters:**
162 | - `format` (enum, default: "folder"): Export format, either "json" or "folder"
163 | - `outputPath` (string, optional): Custom output path for the export
164 |
165 | **Example:**
166 | ```javascript
167 | await callTool({
168 |   name: "export_memory_bank",
169 |   arguments: {
170 |     format: "json",
171 |     outputPath: "/Users/username/Documents/exports"
172 |   }
173 | });
174 | ```
175 |
176 | ## Document Types
177 |
178 | Memory Bank organizes project knowledge into six core document types:
179 |
180 | 1. **Project Brief** (`projectbrief.md`): Core document defining project objectives, scope, and vision
181 | 2. **Product Context** (`productContext.md`): Documents product functionality from a user perspective
182 | 3. **System Patterns** (`systemPatterns.md`): Establishes system architecture and component relationships
183 | 4. **Tech Context** (`techContext.md`): Specifies technology stack and implementation details
184 | 5. **Active Context** (`activeContext.md`): Tracks current tasks, open issues, and development focus
185 | 6. **Progress** (`progress.md`): Documents completed work, milestones, and project history
186 |
187 | ## License
188 |
189 | MIT
```
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
1 | {
2 |   "compilerOptions": {
3 |     "target": "ES2022",
4 |     "module": "NodeNext",
5 |     "moduleResolution": "NodeNext",
6 |     "esModuleInterop": true,
7 |     "outDir": "./dist",
8 |     "rootDir": "./src",
9 |     "strict": true,
10 |     "declaration": true,
11 |     "resolveJsonModule": true
12 |   },
13 |   "include": ["src/**/*"],
14 |   "exclude": ["node_modules", "**/*.test.ts"]
15 | }
```
--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { startServer } from './mcp/memoryBankMcp.js';
2 |
3 | // Main entry point
4 | async function main() {
5 |   console.log('Starting the Memory Bank MCP application...');
6 |
7 |   try {
8 |     // Start the MCP server
9 |     await startServer();
10 |   } catch (error) {
11 |     console.error('Error:', error);
12 |     process.exit(1);
13 |   }
14 | }
15 |
16 | // Start the application
17 | main().catch(error => {
18 |   console.error('Critical error:', error);
19 |   process.exit(1);
20 | });
```
--------------------------------------------------------------------------------
/src/templates/projectbrief.md:
--------------------------------------------------------------------------------
```markdown
1 | # Project Brief
2 |
3 | ## Project Purpose
4 | {{projectPurpose}}
5 |
6 | ## Core Objectives
7 | {{projectGoals}}
8 |
9 | ## Target Audience
10 | {{targetAudience}}
11 |
12 | ## Key Features
13 | {{keyFeatures}}
14 |
15 | ## Success Criteria
16 | {{successCriteria}}
17 |
18 | ## Timeline
19 | {{timeline}}
20 |
21 | ## Project Scope
22 | {{projectScope}}
23 |
24 | ## Stakeholders
25 | {{stakeholders}}
26 |
27 | ## Dependencies
28 | {{dependencies}}
29 |
30 | ## Constraints
31 | {{constraints}}
32 |
33 | ---
34 | *This document was created by AI on {{date}} and serves as the foundation for all Memory Bank documentation.*
```
--------------------------------------------------------------------------------
/src/templates/productContext.md:
--------------------------------------------------------------------------------
```markdown
1 | # Product Context
2 |
3 | ## Market Analysis
4 | {{marketAnalysis}}
5 |
6 | ## Competitive Landscape
7 | {{competitiveAnalysis}}
8 |
9 | ## User Stories
10 | {{userStories}}
11 |
12 | ## Requirements
13 | {{requirements}}
14 |
15 | ## Workflows
16 | {{workflows}}
17 |
18 | ## Product Roadmap
19 | {{roadmap}}
20 |
21 | ## Problem Statement
22 | {{problemStatement}}
23 |
24 | ## Solution Overview
25 | {{solutionOverview}}
26 |
27 | ## User Experience Goals
28 | {{uxGoals}}
29 |
30 | ## Business Model
31 | {{businessModel}}
32 |
33 | ---
34 | *This document was created by AI on {{date}} and outlines why this project exists and how it should work.*
```
--------------------------------------------------------------------------------
/src/templates/activeContext.md:
--------------------------------------------------------------------------------
```markdown
1 | # Active Context
2 |
3 | ## Current Sprint
4 | {{currentSprint}}
5 |
6 | ## Ongoing Tasks
7 | {{ongoingTasks}}
8 |
9 | ## Known Issues
10 | {{knownIssues}}
11 |
12 | ## Priorities
13 | {{priorities}}
14 |
15 | ## Next Steps
16 | {{nextSteps}}
17 |
18 | ## Meeting Notes
19 | {{meetingNotes}}
20 |
21 | ## Recent Decisions
22 | {{recentDecisions}}
23 |
24 | ## Blockers
25 | {{blockers}}
26 |
27 | ## Resource Allocation
28 | {{resourceAllocation}}
29 |
30 | ## Risk Assessment
31 | {{riskAssessment}}
32 |
33 | ## Current Focus Areas
34 | {{currentFocusAreas}}
35 |
36 | ---
37 | *This document was created by AI on {{date}} and is updated regularly to reflect the current state of the project.*
```
--------------------------------------------------------------------------------
/src/templates/progress.md:
--------------------------------------------------------------------------------
```markdown
1 | # Progress Report
2 |
3 | ## Completed Tasks
4 | {{completedTasks}}
5 |
6 | ## Milestones
7 | {{milestones}}
8 |
9 | ## Test Results
10 | {{testResults}}
11 |
12 | ## Performance Metrics
13 | {{performanceMetrics}}
14 |
15 | ## Feedback
16 | {{feedback}}
17 |
18 | ## Changelog
19 | {{changelog}}
20 |
21 | ## Quality Assurance Results
22 | {{qaResults}}
23 |
24 | ## User Adoption Metrics
25 | {{userAdoption}}
26 |
27 | ## Lessons Learned
28 | {{lessonsLearned}}
29 |
30 | ## Time Tracking
31 | {{timeTracking}}
32 |
33 | ## Achievement Highlights
34 | {{achievementHighlights}}
35 |
36 | ---
37 | *This document was created by AI on {{date}} and is updated regularly to track the project's progress and outcomes.*
```
--------------------------------------------------------------------------------
/src/templates/systemPatterns.md:
--------------------------------------------------------------------------------
```markdown
1 | # System Patterns
2 |
3 | ## Architectural Design
4 | {{architectureDesign}}
5 |
6 | ## Data Models
7 | {{dataModels}}
8 |
9 | ## API Definitions
10 | {{apiDefinitions}}
11 |
12 | ## Component Structure
13 | {{componentStructure}}
14 |
15 | ## Integration Points
16 | {{integrationPoints}}
17 |
18 | ## Scalability Strategy
19 | {{scalabilityStrategy}}
20 |
21 | ## Security Architecture
22 | {{securityArchitecture}}
23 |
24 | ## Design Patterns
25 | {{designPatterns}}
26 |
27 | ## Technical Debt
28 | {{technicalDebt}}
29 |
30 | ## System Constraints
31 | {{systemConstraints}}
32 |
33 | ## Performance Considerations
34 | {{performanceConsiderations}}
35 |
36 | ---
37 | *This document was created by AI on {{date}} and documents the system architecture and key technical decisions.*
```
--------------------------------------------------------------------------------
/src/templates/techContext.md:
--------------------------------------------------------------------------------
```markdown
1 | # Technology Context
2 |
3 | ## Technologies Used
4 | {{technologiesUsed}}
5 |
6 | ## Development Tools
7 | {{developmentTools}}
8 |
9 | ## Development Environment
10 | {{developmentEnvironment}}
11 |
12 | ## Testing Strategy
13 | {{testingStrategy}}
14 |
15 | ## Deployment Process
16 | {{deploymentProcess}}
17 |
18 | ## Continuous Integration
19 | {{continuousIntegration}}
20 |
21 | ## Infrastructure
22 | {{infrastructure}}
23 |
24 | ## Third-Party Dependencies
25 | {{thirdPartyDependencies}}
26 |
27 | ## Configuration Management
28 | {{configManagement}}
29 |
30 | ## Monitoring & Logging
31 | {{monitoringLogging}}
32 |
33 | ## Backup & Recovery
34 | {{backupRecovery}}
35 |
36 | ---
37 | *This document was created by AI on {{date}} and covers the technical foundation of the project.*
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
1 | {
2 |   "name": "memory-bank-mcp",
3 |   "version": "1.0.0",
4 |   "description": "MCP-based Memory Bank: a tool for storing and accessing text-based project knowledge",
5 |   "main": "dist/index.js",
6 |   "type": "module",
7 |   "scripts": {
8 |     "build": "tsc",
9 |     "start": "node dist/index.js",
10 |     "dev": "ts-node src/index.ts"
11 |   },
12 |   "dependencies": {
13 |     "@google/generative-ai": "^0.2.0",
14 |     "@modelcontextprotocol/sdk": "^1.8.0",
15 |     "dotenv": "^16.3.1",
16 |     "zod": "^3.22.4",
17 |     "fs-extra": "^11.2.0"
18 |   },
19 |   "devDependencies": {
20 |     "@types/fs-extra": "^11.0.4",
21 |     "@types/node": "^20.10.5",
22 |     "ts-node": "^10.9.2",
23 |     "typescript": "^5.3.3"
24 |   },
25 |   "keywords": ["mcp", "memory-bank", "gemini", "ai"],
26 |   "author": "",
27 |   "license": "MIT"
28 | }
```
--------------------------------------------------------------------------------
/src/utils/gemini.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { GoogleGenerativeAI, HarmBlockThreshold, HarmCategory } from '@google/generative-ai';
2 | import dotenv from 'dotenv';
3 | import fs from 'fs-extra';
4 | import path from 'path';
5 |
6 |
7 | // Load environment variables
8 | dotenv.config();
9 |
10 | // Default API key placeholder - replace with your actual API key
11 | const DEFAULT_API_KEY = 'YOUR_DEFAULT_API_KEY';
12 |
13 | // Initialize API key
14 | let apiKey = process.env.GEMINI_API_KEY;
15 |
16 | console.log('Checking for Gemini API key...');
17 | if (!apiKey) {
18 |   console.error('GEMINI_API_KEY environment variable is not defined.');
19 |
20 |   try {
21 |     const envPath = path.resolve(process.cwd(), '.env');
22 |     if (!fs.existsSync(envPath)) {
23 |       fs.writeFileSync(envPath, `GEMINI_API_KEY=${DEFAULT_API_KEY}`, 'utf-8');
24 |       console.log('Created .env file with default API key.');
25 |     }
26 |   } catch (err) {
27 |     console.error('Failed to create .env file:', err);
28 |   }
29 |
30 |   apiKey = DEFAULT_API_KEY;
31 |   console.log('Using default API key.');
32 | }
33 |
34 | if (apiKey === 'your_gemini_api_key_here' || apiKey === 'YOUR_DEFAULT_API_KEY') {
35 |   console.warn('GEMINI_API_KEY is set to the example value. Please set a valid API key in your .env file.');
36 |   throw new Error('Invalid API key. Please set a valid GEMINI_API_KEY in your .env file.');
37 | }
38 |
39 | console.log('Gemini API key found.');
40 |
41 | // Initialize Gemini client
42 | let genAI: GoogleGenerativeAI;
43 | try {
44 |   genAI = new GoogleGenerativeAI(apiKey);
45 |   console.log('Gemini client created successfully.');
46 | } catch (error) {
47 |   console.error('Failed to create Gemini client:', error);
48 |   throw new Error(`Gemini client creation failed: ${error}`);
49 | }
50 |
51 | const safetySettings = [
52 |   {
53 |     category: HarmCategory.HARM_CATEGORY_HARASSMENT,
54 |     threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
55 |   },
56 |   {
57 |     category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
58 |     threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
59 |   },
60 |   {
61 |     category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
62 |     threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
63 |   },
64 |   {
65 |     category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
66 |     threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
67 |   },
68 | ];
69 |
70 | export async function generateContent(prompt: string): Promise<string> {
71 |   try {
72 |     const model = genAI.getGenerativeModel({ model: 'gemini-2.0-flash' });
73 |
74 |     const result = await model.generateContent({
75 |       contents: [{ role: 'user', parts: [{ text: prompt }] }],
76 |       safetySettings,
77 |       generationConfig: {
78 |         temperature: 0.7,
79 |         topK: 40,
80 |         topP: 0.95,
81 |         maxOutputTokens: 8192,
82 |       },
83 |     });
84 |
85 |     const response = result.response;
86 |     return response.text();
87 |   } catch (error) {
88 |     console.error('Gemini API error:', error);
89 |     throw new Error(`Error generating document content: ${error}`);
90 |   }
91 | }
92 |
93 | export async function fillTemplate(templatePath: string, values: Record<string, string>): Promise<string> {
94 |   try {
95 |     let templateContent = await fs.readFile(templatePath, 'utf-8');
96 |
97 |     Object.entries(values).forEach(([key, value]) => {
98 |       const regex = new RegExp(`{{${key}}}`, 'g');
99 |       templateContent = templateContent.replace(regex, value);
100 |     });
101 |
102 |     return templateContent;
103 |   } catch (error) {
104 |     console.error('Template filling error:', error);
105 |     throw new Error(`Error filling template: ${error}`);
106 |   }
107 | }
108 |
109 | export async function generateAllDocuments(goal: string): Promise<Record<string, string>> {
110 |   const currentDate = new Date().toLocaleDateString('tr-TR');
111 |
112 |   const basePrompt = `
113 | You are a project documentation expert. You will create comprehensive documentation for the following project:
114 |
115 | PROJECT PURPOSE: ${goal}
116 |
117 | Create the following documents for this project:
118 | `;
119 |
120 |   const documentTypes = {
121 |     projectbrief: `
122 | 1. Project Brief (projectbrief.md):
123 | - Explain the general purpose and vision of the project
124 | - List the main objectives
125 | - Define the target audience
126 | - Specify key features
127 | - Determine success criteria
128 | - Present a realistic timeline`,
129 |
130 |     productContext: `
131 | 2. Product Context (productContext.md):
132 | - Conduct market analysis
133 | - Evaluate competitive landscape
134 | - Write user stories
135 | - List requirements
136 | - Explain workflows
137 | - Define product roadmap`,
138 |
139 |     systemPatterns: `
140 | 3. System Patterns (systemPatterns.md):
141 | - Explain architectural design
142 | - Define data models
143 | - Specify API definitions
144 | - Show component structure
145 | - List integration points
146 | - Explain scalability strategy`,
147 |
148 |     techContext: `
149 | 4. Technology Context (techContext.md):
150 | - List technologies used
151 | - Specify software development tools
152 | - Define development environment
153 | - Explain testing strategy
154 | - Define deployment process
155 | - Explain continuous integration approach`,
156 |
157 |     activeContext: `
158 | 5. Active Context (activeContext.md):
159 | - Explain current sprint goals
160 | - List ongoing tasks
161 | - Specify known issues
162 | - Define priorities
163 | - Explain next steps
164 | - Add meeting notes`,
165 |
166 |     progress: `
167 | 6. Progress Report (progress.md):
168 | - List completed tasks
169 | - Specify milestones
170 | - Report test results
171 | - Show performance metrics
172 | - Summarize feedback
173 | - Maintain a changelog`
174 |   };
175 |
176 |
177 |   const results: Record<string, string> = {};
178 |
179 |   for (const [docType, docPrompt] of Object.entries(documentTypes)) {
180 |     console.log(`Creating ${docType} document...`);
181 |
182 |     const fullPrompt = `${basePrompt}${docPrompt}\n\nPlease create content only for the "${docType}" document. Use Markdown format with section headers marked by ##. At the end of the document, add the note "Created on ${currentDate}".`;
183 |
184 |     const content = await generateContent(fullPrompt);
185 |     results[docType] = content;
186 |   }
187 |
188 |   return results;
189 | }
```
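The `fillTemplate` helper above performs simple `{{placeholder}}` substitution on the Markdown templates in `src/templates/`. A minimal, dependency-free sketch of the same substitution logic (operating on an in-memory string instead of a file read via `fs-extra`, with sample values made up for illustration) looks like:

```typescript
// Same {{key}} substitution that fillTemplate performs, inlined on a string.
function substitutePlaceholders(
  template: string,
  values: Record<string, string>
): string {
  return Object.entries(values).reduce(
    // Replace every occurrence of {{key}} with its value, as the 'g' flag
    // in the original code does.
    (text, [key, value]) => text.replace(new RegExp(`{{${key}}}`, "g"), value),
    template
  );
}

// A fragment of src/templates/projectbrief.md filled with a sample value.
const fragment = "# Project Brief\n\n## Project Purpose\n{{projectPurpose}}\n";
const filled = substitutePlaceholders(fragment, {
  projectPurpose: "An MCP server for structured project documentation",
});
console.log(filled);
```

Note that, as in the original, the key is interpolated directly into a `RegExp`, so keys containing regex metacharacters would need escaping; the template keys used in this project are plain identifiers, so this is safe here.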
--------------------------------------------------------------------------------
/src/utils/cursorRulesGenerator.ts:
--------------------------------------------------------------------------------
```typescript
1 | /**
2 | * Utility functions for generating Cursor rules
3 | *
4 | * Note: This module dynamically imports the gemini.js module when necessary
5 | * to generate AI-powered content for cursor rules.
6 | */
7 |
8 | /**
9 | * Generates cursor rules content based on project purpose
10 | * @param purpose Project purpose description
11 | * @returns Generated cursor rules content
12 | */
13 | export async function generateCursorRules(purpose: string): Promise<string> {
14 |   // Format current date in English locale
15 |   const currentDate = new Date().toLocaleDateString("en-US");
16 |
17 |   // Project type detection based on user's purpose
18 |   const projectType = detectProjectType(purpose);
19 |
20 |   // Generate the content with AI
21 |   try {
22 |     console.log("Attempting to generate cursor rules with AI...");
23 |     const mainRulesContent = await generateMainCursorRulesWithAI(
24 |       purpose,
25 |       currentDate,
26 |       projectType
27 |     );
28 |     console.log("Successfully generated cursor rules with AI");
29 |     return mainRulesContent;
30 |   } catch (error) {
31 |     console.error("Error generating cursor rules with AI:", error);
32 |
33 |     // Log the error details
34 |     if (error instanceof Error) {
35 |       console.error("Error details:", error.message);
36 |       if (error.stack) {
37 |         console.error("Error stack:", error.stack);
38 |       }
39 |
40 |       // Rethrow with a more descriptive message for the user
41 |       throw new Error(`Failed to generate content with AI: ${error.message}`);
42 |     }
43 |
44 |     // Fallback for unknown error types
45 |     throw new Error("Failed to generate content with AI. Please try again later.");
46 |   }
47 | }
48 |
49 | /**
50 | * Detects project type based on user's purpose
51 | */
52 | function detectProjectType(purpose: string): string {
53 |   const purposeLower = purpose.toLowerCase();
54 |
55 |   if (
56 |     purposeLower.includes("frontend") ||
57 |     purposeLower.includes("web") ||
58 |     purposeLower.includes("site") ||
59 |     purposeLower.includes("ui")
60 |   ) {
61 |     return "frontend";
62 |   }
63 |
64 |   if (
65 |     purposeLower.includes("backend") ||
66 |     purposeLower.includes("api") ||
67 |     purposeLower.includes("service")
68 |   ) {
69 |     return "backend";
70 |   }
71 |
72 |   if (
73 |     purposeLower.includes("mobile") ||
74 |     purposeLower.includes("android") ||
75 |     purposeLower.includes("ios")
76 |   ) {
77 |     return "mobile";
78 |   }
79 |
80 |   if (
81 |     purposeLower.includes("fullstack") ||
82 |     purposeLower.includes("full-stack")
83 |   ) {
84 |     return "fullstack";
85 |   }
86 |
87 |   if (
88 |     purposeLower.includes("data") ||
89 |     purposeLower.includes("analytics") ||
90 |     purposeLower.includes("ml") ||
91 |     purposeLower.includes("ai")
92 |   ) {
93 |     return "data";
94 |   }
95 |
96 |   if (
97 |     purposeLower.includes("devops") ||
98 |     purposeLower.includes("infrastructure") ||
99 |     purposeLower.includes("cloud")
100 |   ) {
101 |     return "devops";
102 |   }
103 |
104 |   return "general";
105 | }
106 |
107 | /**
108 | * Generates the main cursor rules content using Gemini API
109 | */
110 | async function generateMainCursorRulesWithAI(
111 |   purpose: string,
112 |   currentDate: string,
113 |   projectType: string
114 | ): Promise<string> {
115 |   const frontmatter = `---
116 | description: Main development guidelines for the ${purpose} project
117 | globs: **/*
118 | alwaysApply: true
119 | ---`;
120 |
121 |   try {
122 |     console.log("Dynamically importing gemini.js module...");
123 |     const { generateContent } = await import("./gemini.js");
124 |     console.log("Successfully imported gemini.js module");
125 |
126 |     const prompt = `
127 | As a software development expert, you are creating Cursor rules for the ${purpose} project.
128 |
129 | PROJECT DETAILS:
130 | - PURPOSE: ${purpose}
131 | - TYPE: ${projectType}
132 | - DATE: ${currentDate}
133 |
134 | FORMAT REQUIREMENTS:
135 | 1. Start with a clear and concise main title
136 | 2. Use hierarchical markdown headings (## for main sections, ### for subsections)
137 | 3. Use numbered lists for step-by-step instructions
138 | 4. Use bullet points for important notes and guidelines
139 | 5. Include language-specific code blocks for all examples
140 | 6. Provide good and bad examples with explanatory comments
141 | 7. Use bold and italic formatting to emphasize important points
142 | 8. Include a footer with "Powered by tuncer-byte" and GitHub reference
143 |
144 | CONTENT REQUIREMENTS:
145 | 1. PROJECT OVERVIEW:
146 | - Detailed project purpose and objectives
147 | - Technical goals and success criteria
148 | - Recommended technology stack with version numbers
149 | - Architectural patterns and design decisions
150 |
151 | 2. CODE STRUCTURE AND ORGANIZATION:
152 | - Detailed file/folder structure for ${projectType} projects
153 | - Comprehensive naming conventions with examples
154 | - Module organization and dependency management
155 | - State management patterns (if applicable)
156 |
157 | 3. CODING STANDARDS:
158 | - Language-specific best practices
159 | - Error handling and logging strategies
160 | - Performance optimization techniques
161 | - Security implementation guidelines
162 | - Code review checklist
163 |
164 | 4. DEVELOPMENT WORKFLOW:
165 | - Git workflow with branch naming rules
166 | - Commit message format with examples
167 | - PR template and review process
168 | - CI/CD pipeline configuration
169 | - Environment management
170 |
171 | 5. TESTING REQUIREMENTS:
172 | - Test pyramid implementation
173 | - Framework setup and configuration
174 | - Test coverage goals and metrics
175 | - Mocking and test data strategies
176 | - E2E testing approach
177 |
178 | 6. DOCUMENTATION STANDARDS:
179 | - Code documentation templates
180 | - API documentation format
181 | - README structure and content
182 | - Architectural decision records
183 | - Deployment documentation
184 |
185 | 7. QUALITY ASSURANCE:
186 | - Code quality metrics
187 | - Static analysis tools
188 | - Performance monitoring
189 | - Security scanning
190 | - Accessibility guidelines
191 |
192 | 8. FILE ORGANIZATION:
193 | - Explain the purpose of each directory
194 | - Provide examples of correct file placement
195 |
196 | 9. ONBOARDING PROCESS:
197 | - Step-by-step guide for new developers
198 | - Required development environment setup
199 | - Access management and permissions
200 | - Communication channels and protocols
201 |
202 | 10. DEPLOYMENT STRATEGY:
203 | - Environment configuration
204 | - Release process
205 | - Rollback procedures
206 | - Monitoring and alerting setup
207 |
208 | Include specific, practical examples that directly apply to ${projectType} development.
209 | Each guideline should be actionable and specific.
210 | End with a footer containing "Powered by tuncer-byte" and GitHub reference.
211 | `;
212 |
213 | console.log("Sending request to Gemini API...");
214 | const aiGeneratedContent = await generateContent(prompt);
215 | console.log("Successfully received response from Gemini API");
216 |
217 | return `${frontmatter}
218 |
219 | ${aiGeneratedContent}
220 |
221 | ---
222 | *Powered by tuncer-byte*
223 | *GitHub: @tuncer-byte*`;
224 | } catch (error) {
225 | console.error("Error generating cursor rules with AI:", error);
226 |
227 |     // Provide a more descriptive error message
228 | if (error instanceof Error) {
229 | if (error.message.includes("GEMINI_API_KEY")) {
230 | throw new Error(
231 |           "Gemini API key not found. Please define the GEMINI_API_KEY variable in your .env file."
232 | );
233 | } else if (
234 | error.message.includes("network") ||
235 | error.message.includes("fetch")
236 | ) {
237 | throw new Error(
238 |           "Could not reach the Gemini API. Please check your internet connection."
239 | );
240 | }
241 | }
242 |
243 |     // Re-throw the original error
244 | throw error;
245 | }
246 | }
247 |
```
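The `detectProjectType` helper above routes a free-text purpose string to a project category by lowercase substring matching, with the first matching category winning. As an illustration of the same keyword-routing idea, here is a hypothetical table-driven sketch (not the project's shipped code); the keyword lists and their precedence mirror the chained `if` blocks above:

```typescript
// Table-driven sketch of keyword-based project-type detection.
// Order matters: categories are tried top to bottom, first match wins,
// exactly like the sequence of if blocks in cursorRulesGenerator.ts.
const TYPE_KEYWORDS: [string, string[]][] = [
  ['frontend', ['frontend', 'web', 'site', 'ui']],
  ['backend', ['backend', 'api', 'service']],
  ['mobile', ['mobile', 'android', 'ios']],
  ['fullstack', ['fullstack', 'full-stack']],
  ['data', ['data', 'analytics', 'ml', 'ai']],
  ['devops', ['devops', 'infrastructure', 'cloud']],
];

function detectProjectType(purpose: string): string {
  const p = purpose.toLowerCase();
  for (const [type, keywords] of TYPE_KEYWORDS) {
    if (keywords.some((kw) => p.includes(kw))) return type;
  }
  return 'general'; // nothing matched: fall back to the generic category
}

console.log(detectProjectType('A REST API service for invoices')); // backend
console.log(detectProjectType('Realtime chat'));                   // general
```

A lookup table keeps the category list easy to extend, while the original chained `if` form makes the precedence between categories more explicit at the call site; both behave the same for these inputs. Note that bare substring matching can over-trigger (e.g. `"ui"` matches inside `"guide"`), which applies equally to the original implementation.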
--------------------------------------------------------------------------------
/src/utils/fileManager.ts:
--------------------------------------------------------------------------------
```typescript
1 | import fs from 'fs-extra';
2 | import path from 'path';
3 |
4 | /**
5 | * Creates the Memory Bank directory structure
6 | * @param outputDir Output directory (must already exist)
7 | */
8 | export async function createMemoryBankStructure(outputDir: string): Promise<void> {
9 | try {
10 | console.log(`Creating Memory Bank structure in existing directory: ${outputDir}`);
11 |
12 | // Verify directory exists
13 | if (!await fs.pathExists(outputDir)) {
14 | console.warn(`Directory does not exist: ${outputDir}, will create it`);
15 | await fs.ensureDir(outputDir);
16 | }
17 |
18 | // No subdirectories needed - using a flat structure for simplicity
19 | console.log(`Using flat structure for Memory Bank in "${outputDir}"`);
20 |
21 | // Create a README.md file with component descriptions
22 | const readmePath = path.join(outputDir, 'README.md');
23 | const readmeContent = `# Memory Bank
24 |
25 | This directory serves as a structured repository for your project information and notes.
26 |
27 | ## Directory Structure
28 | This Memory Bank uses a flat structure for simplicity: all core
29 | documents are stored directly in this directory, and no
30 | subdirectories are created. This keeps project knowledge easy to
31 | browse, reference, and keep up to date.
32 |
33 | ## Core Documents
34 | - **projectbrief.md**: Project goals, scope, and vision
35 | - **productContext.md**: Product features, user stories, and market context
36 | - **systemPatterns.md**: System architecture, design patterns, and component structure
37 | - **techContext.md**: Technology stack, frameworks, and technical specifications
38 | - **activeContext.md**: Active tasks, current sprint, and in-progress work
39 | - **progress.md**: Progress tracking, milestones, and project history
40 |
41 | ## Document Management
42 | This Memory Bank uses a structured approach to organize project knowledge. Each document serves a specific purpose in the project lifecycle and should be maintained according to the rules specified in the \`.byterules\` file.
43 |
44 | See the \`.byterules\` file for detailed guidelines on how to maintain and update these documents.
45 | `;
46 | try {
47 | await fs.writeFile(readmePath, readmeContent, 'utf-8');
48 | console.log(`README file created at: ${readmePath}`);
49 | } catch (error) {
50 |     const err = error as NodeJS.ErrnoException;
51 | console.error(`Error creating README file: ${err.code} - ${err.message}`);
52 | // Continue without README
53 | }
54 |
55 | console.log(`Memory Bank structure successfully created in "${outputDir}".`);
56 | } catch (error) {
57 | console.error(`Error creating directory structure at ${outputDir}:`, error);
58 | if (error instanceof Error) {
59 | throw new Error(`Failed to create Memory Bank structure: ${error.message} (Code: ${(error as any).code || 'UNKNOWN'})`);
60 | } else {
61 | throw new Error(`Failed to create Memory Bank structure: Unknown error`);
62 | }
63 | }
64 | }
65 |
66 | /**
67 | * Saves document content to a specific file
68 | * @param content File content
69 | * @param filePath File path
70 | */
71 | export async function saveDocument(content: string, filePath: string): Promise<void> {
72 | try {
73 | // Ensure directory exists
74 | await fs.ensureDir(path.dirname(filePath));
75 |
76 | // Write file
77 | await fs.writeFile(filePath, content, 'utf-8');
78 |
79 | console.log(`Document saved: ${filePath}`);
80 | } catch (error) {
81 | console.error('Error saving document:', error);
82 | throw new Error(`Failed to save document: ${error}`);
83 | }
84 | }
85 |
86 | /**
87 | * Reads document content
88 | * @param filePath File path
89 | * @returns File content
90 | */
91 | export async function readDocument(filePath: string): Promise<string> {
92 | try {
93 | // Check if file exists
94 | if (!await fs.pathExists(filePath)) {
95 | throw new Error(`Document not found: ${filePath}`);
96 | }
97 |
98 | // Read file
99 | const content = await fs.readFile(filePath, 'utf-8');
100 |
101 | return content;
102 | } catch (error) {
103 | console.error('Error reading document:', error);
104 | throw new Error(`Failed to read document: ${error}`);
105 | }
106 | }
107 |
108 | /**
109 | * Reads all documents from a directory
110 | * @param directoryPath Directory path
111 | * @returns Object containing file paths and contents
112 | */
113 | export async function readAllDocuments(directoryPath: string): Promise<Record<string, string>> {
114 | try {
115 | // Check if directory exists
116 | if (!await fs.pathExists(directoryPath)) {
117 | throw new Error(`Directory not found: ${directoryPath}`);
118 | }
119 |
120 | // List all files
121 | const files = await fs.readdir(directoryPath);
122 |
123 | // Filter only markdown files
124 | const markdownFiles = files.filter(file => file.endsWith('.md'));
125 |
126 | // Read each file
127 | const results: Record<string, string> = {};
128 |
129 | for (const file of markdownFiles) {
130 | const filePath = path.join(directoryPath, file);
131 | const content = await fs.readFile(filePath, 'utf-8');
132 |
133 | // Use filename as key (without extension)
134 | const fileName = path.basename(file, path.extname(file));
135 | results[fileName] = content;
136 | }
137 |
138 | return results;
139 | } catch (error) {
140 | console.error('Error reading documents:', error);
141 | throw new Error(`Failed to read documents: ${error}`);
142 | }
143 | }
144 |
145 | /**
146 | * Exports Memory Bank documents
147 | * @param sourceDir Source directory
148 | * @param format Export format ('folder' or 'json')
149 | * @param outputPath Output file path
150 | * @returns Path to the exported content
151 | */
152 | export async function exportMemoryBank(sourceDir: string, format: string = 'folder', outputPath: string): Promise<string> {
153 | try {
154 | // Check if source directory exists
155 | if (!await fs.pathExists(sourceDir)) {
156 | throw new Error(`Source directory not found: ${sourceDir}`);
157 | }
158 |
159 | const exportDir = path.dirname(outputPath);
160 | await fs.ensureDir(exportDir);
161 |
162 | if (format === 'folder') {
163 | // Export as folder (copy entire directory structure)
164 | const exportFolderPath = path.join(exportDir, path.basename(sourceDir));
165 | await fs.copy(sourceDir, exportFolderPath);
166 | console.log(`Memory Bank folder exported to "${exportFolderPath}".`);
167 | return exportFolderPath;
168 | } else if (format === 'json') {
169 | // Export as JSON
170 | const documents = await readAllDocuments(sourceDir);
171 |
172 | // Add metadata
173 | const exportData = {
174 | exportDate: new Date().toISOString(),
175 | memoryBank: documents
176 | };
177 |
178 | const jsonFilePath = outputPath.endsWith('.json') ? outputPath : `${outputPath}.json`;
179 | await fs.writeFile(jsonFilePath, JSON.stringify(exportData, null, 2), 'utf-8');
180 | console.log(`Memory Bank data exported to "${jsonFilePath}" in JSON format.`);
181 | return jsonFilePath;
182 | } else {
183 | throw new Error(`Unsupported format: ${format}. Use 'folder' or 'json'.`);
184 | }
185 | } catch (error) {
186 | console.error('Error exporting:', error);
187 | throw new Error(`Failed to export Memory Bank: ${error}`);
188 | }
189 | }
190 |
191 | /**
192 | * Reads the .byterules file and returns its content
193 | * @param directory Directory where .byterules file is located
194 | * @returns Content of .byterules file
195 | */
196 | export async function readByteRules(directory: string): Promise<string> {
197 | try {
198 | const byteRulesPath = path.join(directory, '.byterules');
199 |
200 | // Check if file exists
201 | if (!await fs.pathExists(byteRulesPath)) {
202 | throw new Error('ByteRules file not found. Memory Bank may not be properly initialized.');
203 | }
204 |
205 | // Read file
206 | const content = await fs.readFile(byteRulesPath, 'utf-8');
207 |
208 | return content;
209 | } catch (error) {
210 | console.error('Error reading ByteRules:', error);
211 | throw new Error(`Failed to read ByteRules: ${error}`);
212 | }
213 | }
214 |
215 | /**
216 | * Gets document workflow information based on document type
217 | * @param directory Directory where .byterules file is located
218 | * @param documentType Type of document to get workflow for
219 | * @returns Workflow information for the document
220 | */
221 | export async function getDocumentWorkflow(directory: string, documentType: string): Promise<{
222 | purpose: string;
223 | updateTiming: string;
224 | structure: string[];
225 | commands: string[];
226 | }> {
227 | try {
228 | // Get byterules content
229 | const byteRulesContent = await readByteRules(directory);
230 |
231 | // Extract section for the specific document type
232 | const regex = new RegExp(`###\\s*\\d+\\.\\s*${documentType.replace(/Context/g, ' Context')}\\s*\\([\\w\\.]+\\)[\\s\\S]*?(?=###\\s*\\d+\\.\\s*|##\\s*|$)`, 'i');
233 | const match = byteRulesContent.match(regex);
234 |
235 | if (!match) {
236 | return {
237 | purpose: `Information about ${documentType} document`,
238 | updateTiming: 'As needed',
239 | structure: ['No specific structure defined'],
240 | commands: [`update_document ${documentType.toLowerCase()}`]
241 | };
242 | }
243 |
244 | // Parse section content
245 | const sectionContent = match[0];
246 |
247 | // Extract purpose
248 | const purposeMatch = sectionContent.match(/\*\*Purpose\*\*:\s*(.*?)(?=\n)/);
249 | const purpose = purposeMatch ? purposeMatch[1].trim() : `Information about ${documentType}`;
250 |
251 | // Extract when to update
252 | const updateMatch = sectionContent.match(/\*\*When to Update\*\*:\s*(.*?)(?=\n)/);
253 | const updateTiming = updateMatch ? updateMatch[1].trim() : 'As needed';
254 |
255 | // Extract structure
256 | const structureMatch = sectionContent.match(/\*\*Structure\*\*:[\s\S]*?(?=\*\*|$)/);
257 | const structure = structureMatch
258 | ? structureMatch[0]
259 | .replace(/\*\*Structure\*\*:\s*/, '')
260 | .trim()
261 | .split('\n')
262 | .map(line => line.replace(/^\s*-\s*/, '').trim())
263 | .filter(line => line.length > 0)
264 | : ['No specific structure defined'];
265 |
266 | // Extract commands
267 | const commandsMatch = sectionContent.match(/\*\*Commands\*\*:[\s\S]*?(?=\*\*|$)/);
268 | const commands = commandsMatch
269 | ? commandsMatch[0]
270 | .replace(/\*\*Commands\*\*:\s*/, '')
271 | .trim()
272 | .split('\n')
273 | .map(line => line.replace(/^\s*-\s*`(.*?)`.*/, '$1').trim())
274 | .filter(line => line.length > 0)
275 | : [`update_document ${documentType.toLowerCase()}`];
276 |
277 | return {
278 | purpose,
279 | updateTiming,
280 | structure,
281 | commands
282 | };
283 | } catch (error) {
284 | console.error('Error getting document workflow:', error);
285 | return {
286 | purpose: `Information about ${documentType} document`,
287 | updateTiming: 'As needed',
288 | structure: ['No specific structure defined'],
289 | commands: [`update_document ${documentType.toLowerCase()}`]
290 | };
291 | }
292 | }
293 |
294 | /**
295 | * Creates a structured template for a document based on ByteRules
296 | * @param directory Directory where .byterules file is located
297 | * @param documentType Type of document to create template for
298 | * @returns Structured template content
299 | */
300 | export async function createDocumentTemplate(directory: string, documentType: string): Promise<string> {
301 | try {
302 | // Get workflow info
303 | const workflow = await getDocumentWorkflow(directory, documentType);
304 |
305 | // Build template
306 | let template = `# ${documentType.replace(/([A-Z])/g, ' $1').trim()}\n\n`;
307 | template += `> ${workflow.purpose}\n\n`;
308 | template += `> Last Updated: ${new Date().toISOString().split('T')[0]}\n\n`;
309 |
310 | // Add sections based on structure
311 | for (const section of workflow.structure) {
312 | template += `## ${section}\n\n_Add content here_\n\n`;
313 | }
314 |
315 | // Add reference to update timing
316 | template += `---\n\n**Note:** This document should be updated ${workflow.updateTiming.toLowerCase()}.\n`;
317 |
318 | return template;
319 | } catch (error) {
320 | console.error('Error creating document template:', error);
321 |
322 | // Return basic template on error
323 | return `# ${documentType.replace(/([A-Z])/g, ' $1').trim()}\n\nLast Updated: ${new Date().toISOString().split('T')[0]}\n\n## Content\n\n_Add content here_\n`;
324 | }
325 | }
326 |
327 | /**
328 | * Analyzes document consistency with ByteRules guidelines
329 | * @param directory Directory where documents are located
330 | * @returns Analysis results with recommendations
331 | */
332 | export async function analyzeDocumentConsistency(directory: string): Promise<{
333 | documentType: string;
334 | status: 'good' | 'needs-update';
335 | recommendation: string;
336 | }[]> {
337 | try {
338 | // Get all documents
339 | const documents = await readAllDocuments(directory);
340 | const results = [];
341 |
342 | // Analyze each document
343 | for (const [docName, content] of Object.entries(documents)) {
344 | // Skip non-standard documents
345 | if (!['projectbrief', 'productContext', 'systemPatterns', 'techContext', 'activeContext', 'progress'].includes(docName)) {
346 | continue;
347 | }
348 |
349 | // Get workflow info
350 | const workflow = await getDocumentWorkflow(directory, docName);
351 |
352 | // Check for required sections
353 | const missingSections = [];
354 | for (const section of workflow.structure) {
355 | const sectionRegex = new RegExp(`##\\s*${section}`, 'i');
356 | if (!sectionRegex.test(content)) {
357 | missingSections.push(section);
358 | }
359 | }
360 |
361 | // Check if document was updated recently (within last 30 days)
362 | const lastUpdatedMatch = content.match(/Last Updated:\s*(\d{4}-\d{2}-\d{2})/);
363 | let needsUpdate = false;
364 |
365 | if (lastUpdatedMatch) {
366 | const lastUpdated = new Date(lastUpdatedMatch[1]);
367 | const thirtyDaysAgo = new Date();
368 | thirtyDaysAgo.setDate(thirtyDaysAgo.getDate() - 30);
369 | needsUpdate = lastUpdated < thirtyDaysAgo;
370 | } else {
371 | needsUpdate = true; // No update date found
372 | }
373 |
374 | let recommendation = '';
375 | let status: 'good' | 'needs-update' = 'good';
376 |
377 | if (missingSections.length > 0) {
378 | recommendation = `Missing sections: ${missingSections.join(', ')}`;
379 | status = 'needs-update';
380 | } else if (needsUpdate) {
381 | recommendation = 'Document may need updating (last update over 30 days ago)';
382 | status = 'needs-update';
383 | } else {
384 | recommendation = 'Document follows the structure defined in ByteRules';
385 | }
386 |
387 | results.push({
388 | documentType: docName,
389 | status,
390 | recommendation
391 | });
392 | }
393 |
394 | return results;
395 | } catch (error) {
396 | console.error('Error analyzing documents:', error);
397 | throw new Error(`Failed to analyze documents: ${error}`);
398 | }
399 | }
```
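`getDocumentWorkflow` above extracts `**Purpose**` and `**When to Update**` fields from a `.byterules` section using regular expressions with a lazy capture up to the end of the line. The following self-contained sketch demonstrates the same field-extraction pattern against an inline sample section (the sample text and the `extractField` helper are illustrative, not part of the shipped code):

```typescript
// A .byterules-style section, in the "- **Field**: value" format that
// the real parser expects (compare the template at the top of the repo).
const section = `### 1. Project Brief (projectbrief.md)
- **Purpose**: Acts as the project's foundation document
- **When to Update**: At project initiation and when scope changes
`;

function extractField(text: string, field: string): string | null {
  // Matches "**Field**: value" and captures the value lazily, stopping
  // at the end-of-line lookahead — the same shape as the purposeMatch
  // and updateMatch expressions in fileManager.ts.
  const match = text.match(new RegExp(`\\*\\*${field}\\*\\*:\\s*(.*?)(?=\\n)`));
  return match ? match[1].trim() : null;
}

console.log(extractField(section, 'Purpose'));
// → "Acts as the project's foundation document"
console.log(extractField(section, 'When to Update'));
// → "At project initiation and when scope changes"
console.log(extractField(section, 'Nonexistent')); // → null
```

In the real module, `getDocumentWorkflow` additionally falls back to sensible defaults when a section or field is missing, which keeps callers such as `createDocumentTemplate` from having to handle parse failures themselves.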
--------------------------------------------------------------------------------
/src/mcp/memoryBankMcp.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
2 | import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
3 | import { z } from 'zod';
4 | import path from 'path';
5 | import fs from 'fs-extra';
6 | import os from 'os';
7 | import { generateAllDocuments } from '../utils/gemini.js';
8 | import {
9 | createMemoryBankStructure,
10 | saveDocument,
11 | readDocument,
12 | readAllDocuments,
13 | exportMemoryBank
14 | } from '../utils/fileManager.js';
15 | import { generateCursorRules } from '../utils/cursorRulesGenerator.js';
16 |
17 | // Create MCP server
18 | const server = new McpServer({
19 | name: 'Memory Bank MCP',
20 | version: '1.0.0'
21 | });
22 |
23 | // Import URL and fileURLToPath for ESM compatible __dirname alternative
24 | import { fileURLToPath } from 'url';
25 | import { dirname } from 'path';
26 |
27 | // Helper function to get the workspace root directory
28 | const getWorkspaceRootDir = () => {
29 | // Try to get VS Code workspace folder from environment variables
30 | // This is more reliable than process.cwd() in VS Code environment
31 | if (process.env.VSCODE_WORKSPACE_FOLDER) {
32 | console.log(`Using VS Code workspace folder: ${process.env.VSCODE_WORKSPACE_FOLDER}`);
33 | return process.env.VSCODE_WORKSPACE_FOLDER;
34 | }
35 |
36 | // If not in VS Code or env var not available, try to determine from current file path
37 | // ESM compatible version of __dirname
38 | const __filename = fileURLToPath(import.meta.url);
39 | const __dirname = dirname(__filename);
40 |
41 | const currentFilePath = __dirname;
42 | console.log(`Current file directory: ${currentFilePath}`);
43 |
44 | // Try to find the workspace root by looking for package.json
45 | let dir = currentFilePath;
46 | while (dir !== path.parse(dir).root) {
47 | if (fs.existsSync(path.join(dir, 'package.json'))) {
48 | console.log(`Found workspace root at: ${dir}`);
49 | return dir;
50 | }
51 | dir = path.dirname(dir);
52 | }
53 |
54 | // Fallback to current working directory with warning
55 | console.warn(`Could not determine workspace root, falling back to CWD: ${process.cwd()}`);
56 | return process.cwd();
57 | };
58 |
59 | // Default document directory path - initialize to null, will be set during initialization
60 | let MEMORY_BANK_DIR: string | null = null;
61 |
62 | // Initialize Memory Bank - create new document structure
63 | server.tool(
64 | 'initialize_memory_bank',
65 | {
66 | goal: z.string().min(10, 'Project goal must be at least 10 characters'),
67 | geminiApiKey: z.string().optional().describe('Gemini API key (optional)'),
68 | location: z.string().describe('Absolute path where memory-bank folder will be created')
69 | },
70 | async ({ goal, geminiApiKey, location }) => {
71 | try {
72 | // Diagnostics: Log environment info
73 | console.log(`Current working directory: ${process.cwd()}`);
74 | console.log(`Node version: ${process.version}`);
75 | console.log(`Platform: ${process.platform}`);
76 |
77 | // Determine where to create the memory-bank directory
78 | let baseDir;
79 | let memoryBankDir;
80 |
81 | if (location) {
82 | // Use user-specified location as the base directory
83 | if (path.isAbsolute(location)) {
84 | // If absolute path is provided, use it directly as base directory
85 | baseDir = location;
86 | } else {
87 | // If relative path is provided, resolve against current working directory
88 | baseDir = path.resolve(process.cwd(), location);
89 | }
90 | console.log(`Using user specified base location: ${baseDir}`);
91 | } else {
92 | // If no location provided, use current working directory as base
93 | baseDir = process.cwd();
94 | console.log(`No location specified, using current directory as base: ${baseDir}`);
95 | }
96 |
97 | // Create memory-bank directory inside the base directory
98 | memoryBankDir = path.join(baseDir, 'memory-bank');
99 | console.log(`Will create Memory Bank structure at: ${memoryBankDir}`);
100 |
101 | // Ensure parent directory exists if needed
102 | const parentDir = path.dirname(memoryBankDir);
103 | try {
104 | await fs.ensureDir(parentDir);
105 | console.log(`Ensured parent directory exists: ${parentDir}`);
106 | } catch (error) {
107 | console.error(`Error ensuring parent directory: ${error}`);
108 | throw new Error(`Cannot create or access parent directory: ${error}`);
109 | }
110 |
111 | // Set global memory bank directory
112 | MEMORY_BANK_DIR = memoryBankDir;
113 |
114 | console.log(`Will create Memory Bank at: ${MEMORY_BANK_DIR}`);
115 |
116 | // Ensure memory-bank directory exists before passing to createMemoryBankStructure
117 | try {
118 | await fs.ensureDir(MEMORY_BANK_DIR);
119 | console.log(`Created Memory Bank root directory: ${MEMORY_BANK_DIR}`);
120 | } catch (error) {
121 | console.error(`Error creating Memory Bank directory: ${error}`);
122 | throw new Error(`Cannot create Memory Bank directory: ${error}`);
123 | }
124 |
125 | // Temporarily set the API key if provided
126 | if (geminiApiKey) {
127 | process.env.GEMINI_API_KEY = geminiApiKey;
128 | }
129 |
130 | // First, set up the .byterules file before creating other files
131 | // This ensures the byterules file is in place before other operations
132 | const byterulesDest = path.join(MEMORY_BANK_DIR, '.byterules');
133 |
134 | try {
135 | // Debug: List all search paths we're going to try
136 | console.log('Searching for .byterules template file...');
137 |
138 | // Get the ESM compatible dirname
139 | const __filename = fileURLToPath(import.meta.url);
140 | const __dirname = dirname(__filename);
141 | console.log(`Current file directory: ${__dirname}`);
142 |
143 | // Try multiple possible locations for the .byterules file
144 | const possiblePaths = [
145 | path.join(process.cwd(), 'src', 'templates', '.byterules'), // From current working dir
146 | path.join(__dirname, '..', 'templates', '.byterules'), // From mcp dir to templates
147 | path.join(__dirname, '..', '..', 'src', 'templates', '.byterules'), // From mcp dir up two levels
148 | path.join(process.cwd(), 'templates', '.byterules'), // Direct templates folder
149 | path.join(process.cwd(), '.byterules') // Root of project
150 | ];
151 |
152 | // Manually create .byterules content as fallback
153 | const defaultByterules = `# Memory Bank Document Orchestration Standard
154 |
155 | ## Directory Validation
156 |
157 | Before any operation (create/update/reference/review), ensure you are in the correct project root directory. Specifically:
158 |
159 | - A valid Memory Bank system **must contain** this \`.byterules\` file at its root.
160 | - If this file is missing, halt operations and **navigate to the correct directory** using:
161 |
162 | \`\`\`bash
163 | cd /your/project/root
164 | \`\`\`
165 |
166 | Failing to validate the directory can lead to misplaced or inconsistent documentation.
167 |
168 | ---
169 |
170 | ## System Overview
171 |
172 | Memory Bank is a structured documentation system designed to maintain project knowledge in an organized, accessible format. This \`.byterules\` file serves as the standard guide for how the system works across all projects.
173 |
174 | ## Standard Document Types
175 |
176 | ### 1. Project Brief (projectbrief.md)
177 | - **Purpose**: Core document that defines project objectives, scope, and vision
178 | - **When to Use**: Reference when making any major project decisions
179 | - **Workflow Step**: Start here; all other documents derive from this foundation
180 | - **Critical For**: Maintaining alignment with business goals throughout development
181 |
182 | ### 2. Product Context (productContext.md)
183 | - **Purpose**: Documents product functionality from a user perspective
184 | - **When to Use**: When designing features and establishing requirements
185 | - **Workflow Step**: Second document in sequence, expands on project brief goals
186 | - **Critical For**: Ensuring user needs drive technical decisions
187 |
188 | ### 3. System Patterns (systemPatterns.md)
189 | - **Purpose**: Establishes system architecture and component relationships
190 | - **When to Use**: During system design and when making integration decisions
191 | - **Workflow Step**: Third document, translates product needs to technical design
192 | - **Critical For**: Maintaining a coherent and scalable technical architecture
193 |
194 | ### 4. Tech Context (techContext.md)
195 | - **Purpose**: Specifies technology stack and implementation details
196 | - **When to Use**: During development and when onboarding technical team members
197 | - **Workflow Step**: Fourth document, makes concrete technology choices
198 | - **Critical For**: Technical consistency and efficient development
199 |
200 | ### 5. Active Context (activeContext.md)
201 | - **Purpose**: Tracks current tasks, open issues, and development focus
202 | - **When to Use**: Daily, during planning sessions, and when switching tasks
203 | - **Workflow Step**: Fifth document, operationalizes the technical approach
204 | - **Critical For**: Day-to-day execution and short-term planning
205 |
206 | ### 6. Progress (progress.md)
207 | - **Purpose**: Documents completed work, milestones, and project history
208 | - **When to Use**: After completing significant work or during reviews
209 | - **Workflow Step**: Ongoing document that records the project journey
210 | - **Critical For**: Tracking accomplishments and learning from experience
211 |
212 | ## Standard Workflows
213 |
214 | ### Documentation Sequence
215 | Always follow this sequence for document creation and reference:
216 | 1. **Project Brief** → Foundation of all project decisions
217 | 2. **Product Context** → User-focused requirements and features
218 | 3. **System Patterns** → Architecture and component design
219 | 4. **Tech Context** → Technology choices and implementation guidelines
220 | 5. **Active Context** → Current work and immediate focus
221 | 6. **Progress** → Historical record and milestone tracking
222 |
223 | ### Document Lifecycle Management
224 | Each document follows a standard lifecycle:
225 | 1. **Creation**: Establish initial content based on project needs
226 | 2. **Reference**: Use document for planning and decision-making
227 | 3. **Update**: Revise when relevant factors change
228 | 4. **Review**: Periodically validate for accuracy and completeness
229 | 5. **Archive**: Maintain as historical reference when superseded
230 |
231 | ## Best Practices
232 |
233 | ### Document Quality Standards
234 | - **Clarity**: Write in clear, concise language
235 | - **Completeness**: Include all relevant information
236 | - **Consistency**: Use consistent terminology across documents
237 | - **Structure**: Follow standardized document formats
238 | - **Granularity**: Balance detail with readability
239 | - **Traceability**: Link related concepts across documents
240 |
241 | ### Document Integration Principles
242 | - **Vertical Traceability**: Ensure business goals trace to technical implementation
243 | - **Horizontal Consistency**: Maintain alignment across documents at the same level
244 | - **Change Impact Analysis**: Update related documents when one changes
245 | - **Decision Recording**: Document the reasoning behind significant decisions
246 |
247 |
248 | `;
249 |
250 | // Try each path and use the first one that exists
251 | let bytesRulesFound = false;
252 |
253 | for (const testPath of possiblePaths) {
254 | console.log(`Checking path: ${testPath}`);
255 |
256 | if (await fs.pathExists(testPath)) {
257 | console.log(`✓ Found .byterules at: ${testPath}`);
258 | await fs.copy(testPath, byterulesDest);
259 | console.log(`Standard .byterules file copied to: ${byterulesDest}`);
260 | bytesRulesFound = true;
261 | break;
262 | } else {
263 | console.log(`✗ Not found at: ${testPath}`);
264 | }
265 | }
266 |
267 | // If no .byterules file found, create one with the default content
268 | if (!bytesRulesFound) {
269 | console.log('No .byterules template found, creating default');
270 | await fs.writeFile(byterulesDest, defaultByterules, 'utf-8');
271 | console.log(`Default .byterules file created at: ${byterulesDest}`);
272 | }
273 |
274 | } catch (error) {
275 | console.error(`Error setting up .byterules file: ${error}`);
276 | throw new Error(`Failed to set up .byterules file: ${error}`);
277 | }
278 |
279 | // Now create the full structure
280 | await createMemoryBankStructure(MEMORY_BANK_DIR);
281 |
282 | // Generate document contents
283 | const documentContents = await generateAllDocuments(goal);
284 |
285 | // Save each document
286 | for (const [docType, content] of Object.entries(documentContents)) {
287 | const filePath = path.join(MEMORY_BANK_DIR, `${docType}.md`);
288 | await saveDocument(content, filePath);
289 | }
290 |
291 | return {
292 | content: [
293 | {
294 | type: 'text',
295 | text: `✅ Memory Bank successfully created!\n\nLocation: ${MEMORY_BANK_DIR}\n\nGenerated Documents:\n- projectbrief.md\n- productContext.md\n- systemPatterns.md\n- techContext.md\n- activeContext.md\n- progress.md\n- .byterules`
296 | }
297 | ]
298 | };
299 | } catch (error) {
300 | console.error('Error creating Memory Bank:', error);
301 | return {
302 | content: [{ type: 'text', text: `❌ Error: ${error instanceof Error ? error.message : String(error)}` }],
303 | isError: true
304 | };
305 | }
306 | }
307 | );
308 |
309 | // Update document
310 | server.tool(
311 | 'update_document',
312 | {
313 | documentType: z.enum(['projectbrief', 'productContext', 'systemPatterns', 'techContext', 'activeContext', 'progress']),
314 | content: z.string().optional(),
315 | regenerate: z.boolean().default(false)
316 | },
317 | async ({ documentType, content, regenerate }) => {
318 | try {
319 | // Check if Memory Bank directory is initialized
320 | if (!MEMORY_BANK_DIR) {
321 | throw new Error('Memory Bank not initialized. Please use initialize_memory_bank tool first.');
322 | }
323 |
324 | const filePath = path.join(MEMORY_BANK_DIR, `${documentType}.md`);
325 |
326 | // Check if file exists
327 | if (!await fs.pathExists(filePath)) {
328 | // Create file if it doesn't exist
329 | await fs.ensureFile(filePath);
330 | await fs.writeFile(filePath, `# ${documentType}\n\n`, 'utf-8');
331 | }
332 |
333 | if (regenerate) {
334 | // Read existing document
335 | const currentContent = await readDocument(filePath);
336 |
337 |         // Always use the en-US locale so dates are formatted in English
338 |         const dateOptions: Intl.DateTimeFormatOptions = { year: 'numeric', month: 'long', day: 'numeric' };
339 |         const englishDate = new Date().toLocaleDateString('en-US', dateOptions);
340 | 
341 |         // TODO: Regenerate content with Gemini (placeholder append for now)
342 |         const newContent = `${currentContent}\n\n## Update\nThis document was regenerated on ${englishDate}.`;
343 |
344 | // Save document
345 | await saveDocument(newContent, filePath);
346 | } else if (content) {
347 | // Save provided content
348 | await saveDocument(content, filePath);
349 | } else {
350 | throw new Error('Content must be provided or regenerate=true');
351 | }
352 |
353 | // Always use English for all response messages
354 | return {
355 | content: [{
356 | type: 'text',
357 | text: `✅ "${documentType}.md" document successfully updated!`
358 | }]
359 | };
360 | } catch (error) {
361 | console.error('Error updating document:', error);
362 | // Ensure error messages are also in English
363 | const errorMessage = error instanceof Error ? error.message : String(error);
364 | return {
365 | content: [{ type: 'text', text: `❌ Error: ${errorMessage}` }],
366 | isError: true
367 | };
368 | }
369 | }
370 | );
371 |
372 | // Query Memory Bank
373 | server.tool(
374 | 'query_memory_bank',
375 | {
376 | query: z.string().min(5, 'Query must be at least 5 characters')
377 | },
378 | async ({ query }) => {
379 | try {
380 | // Check if Memory Bank has been initialized
381 | if (!MEMORY_BANK_DIR) {
382 | return {
383 | content: [{ type: 'text', text: `ℹ️ Memory Bank not initialized. Please use 'initialize_memory_bank' tool first.` }]
384 | };
385 | }
386 |
387 | // Check if Memory Bank directory exists on disk
388 | if (!await fs.pathExists(MEMORY_BANK_DIR)) {
389 | return {
390 | content: [{ type: 'text', text: `ℹ️ Memory Bank directory (${MEMORY_BANK_DIR}) not found on disk. Please use 'initialize_memory_bank' tool first.` }]
391 | };
392 | }
393 |
394 | // Read all documents
395 | const documents = await readAllDocuments(MEMORY_BANK_DIR);
396 |
397 | // Advanced search function
398 | const searchResults = performAdvancedSearch(query, documents);
399 |
400 | if (searchResults.length === 0) {
401 | return {
402 | content: [{ type: 'text', text: `ℹ️ No results found for query "${query}".` }]
403 | };
404 | }
405 |
406 | // Format results
407 | const formattedResults = searchResults.map(result => {
408 | return `📄 **${result.documentType}**:\n${result.snippet}\n`;
409 | }).join('\n');
410 |
411 | return {
412 | content: [{
413 | type: 'text',
414 | text: `🔍 Results for query "${query}":\n\n${formattedResults}`
415 | }]
416 | };
417 | } catch (error) {
418 | console.error('Error querying Memory Bank:', error);
419 | return {
420 | content: [{ type: 'text', text: `❌ Error: ${error instanceof Error ? error.message : String(error)}` }],
421 | isError: true
422 | };
423 | }
424 | }
425 | );
426 |
427 | // Advanced search functionality
428 | interface SearchResult {
429 | documentType: string;
430 | relevanceScore: number;
431 | snippet: string;
432 | }
433 |
434 | function performAdvancedSearch(query: string, documents: Record<string, string>): SearchResult[] {
435 | const results: SearchResult[] = [];
436 | const queryTerms = query.toLowerCase().split(/\s+/).filter(term => term.length > 2);
437 |
438 | // Search in each document
439 | for (const [docType, content] of Object.entries(documents)) {
440 | // Split document into sections and paragraphs
441 | const sections = content.split(/\n#{2,3}\s+/).filter(Boolean);
442 |
443 | for (const section of sections) {
444 | // Extract title and content
445 | const titleMatch = section.match(/^([^\n]+)/);
446 | const title = titleMatch ? titleMatch[1].trim() : '';
447 |
448 | // Evaluate each paragraph
449 | const paragraphs = section.split(/\n\n+/);
450 |
451 | for (const paragraph of paragraphs) {
452 | // Calculate relevance score
453 | const relevanceScore = calculateRelevanceScore(query, queryTerms, paragraph);
454 |
455 | // Add results above threshold
456 | if (relevanceScore > 0.3) {
457 | // Extract relevant snippet
458 | const snippet = extractRelevantSnippet(paragraph, queryTerms, title);
459 |
460 | results.push({
461 | documentType: docType,
462 | relevanceScore,
463 | snippet
464 | });
465 | }
466 | }
467 | }
468 | }
469 |
470 | // Sort results by relevance and return top 5
471 | return results
472 | .sort((a, b) => b.relevanceScore - a.relevanceScore)
473 | .slice(0, 5);
474 | }
475 |
476 | function calculateRelevanceScore(query: string, queryTerms: string[], text: string): number {
477 | const lowerText = text.toLowerCase();
478 |
479 | // Exact match check (highest score)
480 | if (lowerText.includes(query.toLowerCase())) {
481 | return 1.0;
482 | }
483 |
484 | // Term-based matching
485 | let matchCount = 0;
486 | for (const term of queryTerms) {
487 | if (lowerText.includes(term)) {
488 | matchCount++;
489 | }
490 | }
491 |
492 | // Term match ratio
493 | const termMatchRatio = queryTerms.length > 0 ? matchCount / queryTerms.length : 0;
494 |
495 | // Proximity factor calculation
496 | let proximityFactor = 0;
497 | if (matchCount >= 2) {
498 | // Calculate proximity between matching terms
499 | // (This is a simplified approach)
500 | proximityFactor = 0.2;
501 | }
502 |
503 | return termMatchRatio * 0.8 + proximityFactor;
504 | }
505 |
506 | function extractRelevantSnippet(text: string, queryTerms: string[], sectionTitle: string): string {
507 | const lowerText = text.toLowerCase();
508 | const MAX_SNIPPET_LENGTH = 150;
509 |
510 | // Find best match
511 | let bestPosition = 0;
512 | let bestTermCount = 0;
513 |
514 |   // Scan the text and count how many query terms fall within a 100-character window at each position
515 | for (let i = 0; i < lowerText.length; i++) {
516 | let termCount = 0;
517 | for (const term of queryTerms) {
518 | if (lowerText.substring(i, i + 100).includes(term)) {
519 | termCount++;
520 | }
521 | }
522 |
523 | if (termCount > bestTermCount) {
524 | bestTermCount = termCount;
525 | bestPosition = i;
526 | }
527 | }
528 |
529 | // Create snippet around best match
530 | let startPos = Math.max(0, bestPosition - 30);
531 | let endPos = Math.min(text.length, bestPosition + MAX_SNIPPET_LENGTH - 30);
532 |
533 | // Adjust to word boundaries
534 | while (startPos > 0 && text[startPos] !== ' ' && text[startPos] !== '\n') {
535 | startPos--;
536 | }
537 |
538 | while (endPos < text.length && text[endPos] !== ' ' && text[endPos] !== '\n') {
539 | endPos++;
540 | }
541 |
542 | let snippet = text.substring(startPos, endPos).trim();
543 |
544 | // Add ellipsis to indicate truncation
545 | if (startPos > 0) {
546 | snippet = '...' + snippet;
547 | }
548 |
549 | if (endPos < text.length) {
550 | snippet = snippet + '...';
551 | }
552 |
553 | // Add title
554 | if (sectionTitle) {
555 | return `**${sectionTitle}**: ${snippet}`;
556 | }
557 |
558 | return snippet;
559 | }
560 |
561 | // Export Memory Bank
562 | server.tool(
563 | 'export_memory_bank',
564 | {
565 | format: z.enum(['json', 'folder']).default('folder').describe('Export format'),
566 | outputPath: z.string().optional()
567 | },
568 | async ({ format, outputPath }) => {
569 | try {
570 | // Check if Memory Bank has been initialized
571 | if (!MEMORY_BANK_DIR) {
572 | return {
573 | content: [{ type: 'text', text: `ℹ️ Memory Bank not initialized. Please use 'initialize_memory_bank' tool first.` }]
574 | };
575 | }
576 |
577 | // Check if Memory Bank directory exists on disk
578 | if (!await fs.pathExists(MEMORY_BANK_DIR)) {
579 | return {
580 | content: [{ type: 'text', text: `ℹ️ Memory Bank directory (${MEMORY_BANK_DIR}) not found on disk. Please use 'initialize_memory_bank' tool first.` }]
581 | };
582 | }
583 |
584 | // Ensure we have an absolute path for the output
585 | const defaultOutputPath = path.resolve(path.join(process.cwd(), 'memory-bank-export'));
586 | const targetOutputPath = outputPath ? path.resolve(outputPath) : defaultOutputPath;
587 |
588 | console.log(`Exporting Memory Bank from ${MEMORY_BANK_DIR} to ${targetOutputPath}`);
589 |
590 | // Call exportMemoryBank function
591 | const exportResult = await exportMemoryBank(MEMORY_BANK_DIR, format, targetOutputPath);
592 |
593 | // Create message based on format type
594 | const formatMessage = format === 'json' ? 'JSON file' : 'folder';
595 |
596 | return {
597 | content: [{
598 | type: 'text',
599 | text: `✅ Memory Bank successfully exported as ${formatMessage}: ${exportResult}`
600 | }]
601 | };
602 | } catch (error) {
603 | console.error('Error exporting Memory Bank:', error);
604 | return {
605 | content: [{ type: 'text', text: `❌ Error: ${error instanceof Error ? error.message : String(error)}` }],
606 | isError: true
607 | };
608 | }
609 | }
610 | );
611 |
612 | // Create Cursor Rules
613 | server.tool(
614 | 'create_cursor_rules',
615 | {
616 | projectPurpose: z.string()
617 |       .min(10, 'Project purpose must be at least 10 characters')
618 |       .describe('Describe the project purpose in detail. This text will define the core goals and scope of the project.'),
619 | location: z.string()
620 | .describe('Absolute path where cursor-rules will be created')
621 | },
622 | async ({ projectPurpose, location }) => {
623 | try {
624 | // Diagnostics: Log environment info
625 | console.log(`Current working directory: ${process.cwd()}`);
626 | console.log(`Node version: ${process.version}`);
627 | console.log(`Platform: ${process.platform}`);
628 |
629 | // Determine where to create the .cursor directory
630 | let baseDir;
631 |
632 | if (location) {
633 | // Use user-specified location as the base directory
634 | if (path.isAbsolute(location)) {
635 | // If absolute path is provided, use it directly as base directory
636 | baseDir = location;
637 | } else {
638 | // If relative path is provided, resolve against current working directory
639 | baseDir = path.resolve(process.cwd(), location);
640 | }
641 | console.log(`Using user specified base location: ${baseDir}`);
642 | } else {
643 | // If no location provided, use current working directory as base
644 | baseDir = process.cwd();
645 | console.log(`No location specified, using current directory as base: ${baseDir}`);
646 | }
647 |
648 | // Create .cursor directory in the base directory
649 | const cursorDir = path.join(baseDir, '.cursor');
650 | console.log(`Will create Cursor Rules at: ${cursorDir}`);
651 |
652 | // Ensure parent directory exists if needed
653 | const parentDir = path.dirname(cursorDir);
654 | try {
655 | await fs.ensureDir(parentDir);
656 | console.log(`Ensured parent directory exists: ${parentDir}`);
657 | } catch (error) {
658 | console.error(`Error ensuring parent directory: ${error}`);
659 | throw new Error(`Cannot create or access parent directory: ${error}`);
660 | }
661 |
662 | // Ensure .cursor directory exists
663 | try {
664 | await fs.ensureDir(cursorDir);
665 | console.log(`Created .cursor directory: ${cursorDir}`);
666 | } catch (error) {
667 | console.error(`Error creating .cursor directory: ${error}`);
668 | throw new Error(`Cannot create .cursor directory: ${error}`);
669 | }
670 |
671 | // Create the cursor-rules.mdc file
672 | const cursorRulesPath = path.join(cursorDir, 'cursor-rules.mdc');
673 | console.log(`Will create cursor-rules.mdc at: ${cursorRulesPath}`);
674 |
675 | // Generate content for the rules file based on project purpose
676 | console.log(`Generating cursor rules content for purpose: ${projectPurpose}`);
677 | try {
678 | const cursorRulesContent = await generateCursorRules(projectPurpose);
679 |
680 | // Save the file
681 | try {
682 | await fs.writeFile(cursorRulesPath, cursorRulesContent, 'utf-8');
683 | console.log(`Created cursor-rules.mdc at: ${cursorRulesPath}`);
684 | } catch (error) {
685 | console.error(`Error creating cursor-rules.mdc file: ${error}`);
686 | throw new Error(`Cannot create cursor-rules.mdc file: ${error}`);
687 | }
688 |
689 | return {
690 | content: [{
691 | type: 'text',
692 | text: `✅ Cursor Rules successfully created!\n\nLocation: ${cursorRulesPath}`
693 | }]
694 | };
695 | } catch (ruleGenError) {
696 | console.error(`Error generating cursor rules content: ${ruleGenError}`);
697 |
698 |               // Build a detailed error message
699 | let errorMessage = 'Error generating Cursor Rules content: ';
700 | if (ruleGenError instanceof Error) {
701 | errorMessage += ruleGenError.message;
702 |
703 |                 // Make API key related errors more descriptive
704 | if (ruleGenError.message.includes('GEMINI_API_KEY') || ruleGenError.message.includes('API key')) {
705 |                   errorMessage += '\n\nImportant: This feature uses the Gemini API. Please make sure a valid GEMINI_API_KEY is defined in your .env file.';
706 | }
707 | } else {
708 | errorMessage += String(ruleGenError);
709 | }
710 |
711 | throw new Error(errorMessage);
712 | }
713 | } catch (error) {
714 | console.error('Error creating Cursor Rules:', error);
715 | return {
716 | content: [{ type: 'text', text: `❌ Error: ${error instanceof Error ? error.message : String(error)}` }],
717 | isError: true
718 | };
719 | }
720 | }
721 | );
722 |
723 | // Read document contents - provides as resource
724 | server.resource(
725 | 'memory_bank_document',
726 | 'memory-bank://{documentType}',
727 | async (uri) => {
728 | try {
729 | // First check if Memory Bank has been initialized
730 | if (!MEMORY_BANK_DIR) {
731 | throw new Error('Memory Bank not initialized. Please use initialize_memory_bank tool first.');
732 | }
733 |
734 |       const documentType = uri.pathname.split('/').pop() || uri.hostname; // for memory-bank://X URIs the type may parse into the hostname
735 | const validDocumentTypes = ['projectbrief', 'productContext', 'systemPatterns', 'techContext', 'activeContext', 'progress'];
736 |
737 | if (!documentType || !validDocumentTypes.includes(documentType)) {
738 | throw new Error(`Invalid document type: ${documentType}`);
739 | }
740 |
741 | const filePath = path.join(MEMORY_BANK_DIR, `${documentType}.md`);
742 |
743 | // Check if file exists
744 | if (!await fs.pathExists(filePath)) {
745 | // Create file if it doesn't exist
746 | await fs.ensureFile(filePath);
747 | await fs.writeFile(filePath, `# ${documentType}\n\nThis document has not been created yet.`, 'utf-8');
748 | }
749 |
750 | const content = await readDocument(filePath);
751 |
752 | return {
753 | contents: [{
754 | uri: uri.href,
755 | text: content
756 | }]
757 | };
758 | } catch (error) {
759 | console.error('Error reading document:', error);
760 | throw error;
761 | }
762 | }
763 | );
764 |
765 | // Export MCP server
766 | export default server;
767 |
768 | // Direct execution function
769 | export async function startServer(): Promise<void> {
770 | const transport = new StdioServerTransport();
771 |
772 | try {
773 |     console.error('Starting Memory Bank MCP server...'); // log to stderr: stdout carries the MCP stdio protocol
774 |     await server.connect(transport);
775 |     console.error('Memory Bank MCP server successfully started!');
776 | } catch (error) {
777 | console.error('Error starting server:', error);
778 | process.exit(1);
779 | }
780 | }
```
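The relevance heuristic in `calculateRelevanceScore` weights term coverage (0.8) against a small proximity bonus (0.2), with an exact phrase match short-circuiting to 1.0. A minimal standalone sketch of that weighting, using hypothetical sample strings:

```typescript
// Standalone sketch mirroring the weighting used in calculateRelevanceScore.
// The sample query and paragraph below are illustrative, not from the project.
function scoreParagraph(query: string, text: string): number {
  const lowerText = text.toLowerCase();
  if (lowerText.includes(query.toLowerCase())) return 1.0; // exact phrase match

  // Terms shorter than 3 characters are ignored, as in the source
  const queryTerms = query.toLowerCase().split(/\s+/).filter(t => t.length > 2);
  const matchCount = queryTerms.filter(t => lowerText.includes(t)).length;
  const termMatchRatio = queryTerms.length > 0 ? matchCount / queryTerms.length : 0;
  const proximityFactor = matchCount >= 2 ? 0.2 : 0; // simplified proximity bonus
  return termMatchRatio * 0.8 + proximityFactor;
}

// Two of three terms match ("memory", "export"): 2/3 * 0.8 + 0.2 ≈ 0.733,
// comfortably above the 0.3 threshold performAdvancedSearch applies.
console.log(scoreParagraph('memory bank export', 'Use the export tool to copy memory files.'));
```

Only paragraphs scoring above 0.3 are kept, and the top five results are returned, so a single shared term (ratio 0.33 or less for a three-term query) sits right at the cutoff.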