# Directory Structure

```
├── .dockerignore
├── .eslintignore
├── .eslintrc.json
├── .github
│   ├── agents
│   │   ├── documcp-ast.md
│   │   ├── documcp-deploy.md
│   │   ├── documcp-memory.md
│   │   ├── documcp-test.md
│   │   └── documcp-tool.md
│   ├── copilot-instructions.md
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── automated-changelog.md
│   │   ├── bug_report.md
│   │   ├── bug_report.yml
│   │   ├── documentation_issue.md
│   │   ├── feature_request.md
│   │   ├── feature_request.yml
│   │   ├── npm-publishing-fix.md
│   │   └── release_improvements.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── release-drafter.yml
│   └── workflows
│       ├── auto-merge.yml
│       ├── ci.yml
│       ├── codeql.yml
│       ├── dependency-review.yml
│       ├── deploy-docs.yml
│       ├── README.md
│       ├── release-drafter.yml
│       └── release.yml
├── .gitignore
├── .husky
│   ├── commit-msg
│   └── pre-commit
├── .linkcheck.config.json
├── .markdown-link-check.json
├── .nvmrc
├── .pre-commit-config.yaml
├── .versionrc.json
├── ARCHITECTURAL_CHANGES_SUMMARY.md
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── commitlint.config.js
├── CONTRIBUTING.md
├── docker-compose.docs.yml
├── Dockerfile.docs
├── docs
│   ├── .docusaurus
│   │   ├── docusaurus-plugin-content-docs
│   │   │   └── default
│   │   │       └── __mdx-loader-dependency.json
│   │   └── docusaurus-plugin-content-pages
│   │       └── default
│   │           └── __plugin.json
│   ├── adrs
│   │   ├── adr-0001-mcp-server-architecture.md
│   │   ├── adr-0002-repository-analysis-engine.md
│   │   ├── adr-0003-static-site-generator-recommendation-engine.md
│   │   ├── adr-0004-diataxis-framework-integration.md
│   │   ├── adr-0005-github-pages-deployment-automation.md
│   │   ├── adr-0006-mcp-tools-api-design.md
│   │   ├── adr-0007-mcp-prompts-and-resources-integration.md
│   │   ├── adr-0008-intelligent-content-population-engine.md
│   │   ├── adr-0009-content-accuracy-validation-framework.md
│   │   ├── adr-0010-mcp-resource-pattern-redesign.md
│   │   ├── adr-0011-ce-mcp-compatibility.md
│   │   ├── adr-0012-priority-scoring-system-for-documentation-drift.md
│   │   ├── adr-0013-release-pipeline-and-package-distribution.md
│   │   └── README.md
│   ├── api
│   │   ├── .nojekyll
│   │   ├── assets
│   │   │   ├── hierarchy.js
│   │   │   ├── highlight.css
│   │   │   ├── icons.js
│   │   │   ├── icons.svg
│   │   │   ├── main.js
│   │   │   ├── navigation.js
│   │   │   ├── search.js
│   │   │   └── style.css
│   │   ├── hierarchy.html
│   │   ├── index.html
│   │   ├── modules.html
│   │   └── variables
│   │       └── TOOLS.html
│   ├── assets
│   │   └── logo.svg
│   ├── CE-MCP-FINDINGS.md
│   ├── development
│   │   └── MCP_INSPECTOR_TESTING.md
│   ├── docusaurus.config.js
│   ├── explanation
│   │   ├── architecture.md
│   │   └── index.md
│   ├── guides
│   │   ├── link-validation.md
│   │   ├── playwright-integration.md
│   │   └── playwright-testing-workflow.md
│   ├── how-to
│   │   ├── analytics-setup.md
│   │   ├── change-watcher.md
│   │   ├── custom-domains.md
│   │   ├── documentation-freshness-tracking.md
│   │   ├── drift-priority-scoring.md
│   │   ├── github-pages-deployment.md
│   │   ├── index.md
│   │   ├── llm-integration.md
│   │   ├── local-testing.md
│   │   ├── performance-optimization.md
│   │   ├── prompting-guide.md
│   │   ├── repository-analysis.md
│   │   ├── seo-optimization.md
│   │   ├── site-monitoring.md
│   │   ├── troubleshooting.md
│   │   └── usage-examples.md
│   ├── index.md
│   ├── knowledge-graph.md
│   ├── package-lock.json
│   ├── package.json
│   ├── phase-2-intelligence.md
│   ├── reference
│   │   ├── api-overview.md
│   │   ├── cli.md
│   │   ├── configuration.md
│   │   ├── deploy-pages.md
│   │   ├── index.md
│   │   ├── mcp-tools.md
│   │   └── prompt-templates.md
│   ├── research
│   │   ├── cross-domain-integration
│   │   │   └── README.md
│   │   ├── domain-1-mcp-architecture
│   │   │   ├── index.md
│   │   │   └── mcp-performance-research.md
│   │   ├── domain-2-repository-analysis
│   │   │   └── README.md
│   │   ├── domain-3-ssg-recommendation
│   │   │   ├── index.md
│   │   │   └── ssg-performance-analysis.md
│   │   ├── domain-4-diataxis-integration
│   │   │   └── README.md
│   │   ├── domain-5-github-deployment
│   │   │   ├── github-pages-security-analysis.md
│   │   │   └── index.md
│   │   ├── domain-6-api-design
│   │   │   └── README.md
│   │   ├── README.md
│   │   ├── research-integration-summary-2025-01-14.md
│   │   ├── research-progress-template.md
│   │   └── research-questions-2025-01-14.md
│   ├── robots.txt
│   ├── sidebars.js
│   ├── sitemap.xml
│   ├── src
│   │   └── css
│   │       └── custom.css
│   └── tutorials
│       ├── development-setup.md
│       ├── environment-setup.md
│       ├── first-deployment.md
│       ├── getting-started.md
│       ├── index.md
│       ├── memory-workflows.md
│       └── user-onboarding.md
├── ISSUE_IMPLEMENTATION_SUMMARY.md
├── jest.config.js
├── LICENSE
├── Makefile
├── MCP_PHASE2_IMPLEMENTATION.md
├── mcp-config-example.json
├── mcp.json
├── package-lock.json
├── package.json
├── README.md
├── release.sh
├── scripts
│   └── check-package-structure.cjs
├── SECURITY.md
├── setup-precommit.sh
├── src
│   ├── benchmarks
│   │   └── performance.ts
│   ├── index.ts
│   ├── memory
│   │   ├── contextual-retrieval.ts
│   │   ├── deployment-analytics.ts
│   │   ├── enhanced-manager.ts
│   │   ├── export-import.ts
│   │   ├── freshness-kg-integration.ts
│   │   ├── index.ts
│   │   ├── integration.ts
│   │   ├── kg-code-integration.ts
│   │   ├── kg-health.ts
│   │   ├── kg-integration.ts
│   │   ├── kg-link-validator.ts
│   │   ├── kg-storage.ts
│   │   ├── knowledge-graph.ts
│   │   ├── learning.ts
│   │   ├── manager.ts
│   │   ├── multi-agent-sharing.ts
│   │   ├── pruning.ts
│   │   ├── schemas.ts
│   │   ├── storage.ts
│   │   ├── temporal-analysis.ts
│   │   ├── user-preferences.ts
│   │   └── visualization.ts
│   ├── prompts
│   │   └── technical-writer-prompts.ts
│   ├── scripts
│   │   └── benchmark.ts
│   ├── templates
│   │   └── playwright
│   │       ├── accessibility.spec.template.ts
│   │       ├── Dockerfile.template
│   │       ├── docs-e2e.workflow.template.yml
│   │       ├── link-validation.spec.template.ts
│   │       └── playwright.config.template.ts
│   ├── tools
│   │   ├── analyze-deployments.ts
│   │   ├── analyze-readme.ts
│   │   ├── analyze-repository.ts
│   │   ├── change-watcher.ts
│   │   ├── check-documentation-links.ts
│   │   ├── cleanup-agent-artifacts.ts
│   │   ├── deploy-pages.ts
│   │   ├── detect-gaps.ts
│   │   ├── evaluate-readme-health.ts
│   │   ├── generate-config.ts
│   │   ├── generate-contextual-content.ts
│   │   ├── generate-llm-context.ts
│   │   ├── generate-readme-template.ts
│   │   ├── generate-technical-writer-prompts.ts
│   │   ├── kg-health-check.ts
│   │   ├── manage-preferences.ts
│   │   ├── manage-sitemap.ts
│   │   ├── optimize-readme.ts
│   │   ├── populate-content.ts
│   │   ├── readme-best-practices.ts
│   │   ├── recommend-ssg.ts
│   │   ├── setup-playwright-tests.ts
│   │   ├── setup-structure.ts
│   │   ├── simulate-execution.ts
│   │   ├── sync-code-to-docs.ts
│   │   ├── test-local-deployment.ts
│   │   ├── track-documentation-freshness.ts
│   │   ├── update-existing-documentation.ts
│   │   ├── validate-content.ts
│   │   ├── validate-documentation-freshness.ts
│   │   ├── validate-readme-checklist.ts
│   │   └── verify-deployment.ts
│   ├── types
│   │   └── api.ts
│   ├── utils
│   │   ├── artifact-detector.ts
│   │   ├── ast-analyzer.ts
│   │   ├── change-watcher.ts
│   │   ├── code-scanner.ts
│   │   ├── content-extractor.ts
│   │   ├── drift-detector.ts
│   │   ├── execution-simulator.ts
│   │   ├── freshness-tracker.ts
│   │   ├── language-parsers-simple.ts
│   │   ├── llm-client.ts
│   │   ├── permission-checker.ts
│   │   ├── semantic-analyzer.ts
│   │   ├── sitemap-generator.ts
│   │   ├── usage-metadata.ts
│   │   └── user-feedback-integration.ts
│   └── workflows
│       └── documentation-workflow.ts
├── test-docs-local.sh
├── tests
│   ├── api
│   │   └── mcp-responses.test.ts
│   ├── benchmarks
│   │   └── performance.test.ts
│   ├── call-graph-builder.test.ts
│   ├── change-watcher-priority.integration.test.ts
│   ├── change-watcher.test.ts
│   ├── edge-cases
│   │   └── error-handling.test.ts
│   ├── execution-simulator.test.ts
│   ├── functional
│   │   └── tools.test.ts
│   ├── integration
│   │   ├── kg-documentation-workflow.test.ts
│   │   ├── knowledge-graph-workflow.test.ts
│   │   ├── mcp-readme-tools.test.ts
│   │   ├── memory-mcp-tools.test.ts
│   │   ├── readme-technical-writer.test.ts
│   │   └── workflow.test.ts
│   ├── memory
│   │   ├── contextual-retrieval.test.ts
│   │   ├── enhanced-manager.test.ts
│   │   ├── export-import.test.ts
│   │   ├── freshness-kg-integration.test.ts
│   │   ├── kg-code-integration.test.ts
│   │   ├── kg-health.test.ts
│   │   ├── kg-link-validator.test.ts
│   │   ├── kg-storage-validation.test.ts
│   │   ├── kg-storage.test.ts
│   │   ├── knowledge-graph-documentation-examples.test.ts
│   │   ├── knowledge-graph-enhanced.test.ts
│   │   ├── knowledge-graph.test.ts
│   │   ├── learning.test.ts
│   │   ├── manager-advanced.test.ts
│   │   ├── manager.test.ts
│   │   ├── mcp-resource-integration.test.ts
│   │   ├── mcp-tool-persistence.test.ts
│   │   ├── schemas-documentation-examples.test.ts
│   │   ├── schemas.test.ts
│   │   ├── storage.test.ts
│   │   ├── temporal-analysis.test.ts
│   │   └── user-preferences.test.ts
│   ├── performance
│   │   ├── memory-load-testing.test.ts
│   │   └── memory-stress-testing.test.ts
│   ├── prompts
│   │   ├── guided-workflow-prompts.test.ts
│   │   └── technical-writer-prompts.test.ts
│   ├── server.test.ts
│   ├── setup.ts
│   ├── tools
│   │   ├── all-tools.test.ts
│   │   ├── analyze-coverage.test.ts
│   │   ├── analyze-deployments.test.ts
│   │   ├── analyze-readme.test.ts
│   │   ├── analyze-repository.test.ts
│   │   ├── check-documentation-links.test.ts
│   │   ├── cleanup-agent-artifacts.test.ts
│   │   ├── deploy-pages-kg-retrieval.test.ts
│   │   ├── deploy-pages-tracking.test.ts
│   │   ├── deploy-pages.test.ts
│   │   ├── detect-gaps.test.ts
│   │   ├── evaluate-readme-health.test.ts
│   │   ├── generate-contextual-content.test.ts
│   │   ├── generate-llm-context.test.ts
│   │   ├── generate-readme-template.test.ts
│   │   ├── generate-technical-writer-prompts.test.ts
│   │   ├── kg-health-check.test.ts
│   │   ├── manage-sitemap.test.ts
│   │   ├── optimize-readme.test.ts
│   │   ├── readme-best-practices.test.ts
│   │   ├── recommend-ssg-historical.test.ts
│   │   ├── recommend-ssg-preferences.test.ts
│   │   ├── recommend-ssg.test.ts
│   │   ├── simple-coverage.test.ts
│   │   ├── sync-code-to-docs.test.ts
│   │   ├── test-local-deployment.test.ts
│   │   ├── tool-error-handling.test.ts
│   │   ├── track-documentation-freshness.test.ts
│   │   ├── validate-content.test.ts
│   │   ├── validate-documentation-freshness.test.ts
│   │   └── validate-readme-checklist.test.ts
│   ├── types
│   │   └── type-safety.test.ts
│   └── utils
│       ├── artifact-detector.test.ts
│       ├── ast-analyzer.test.ts
│       ├── content-extractor.test.ts
│       ├── drift-detector-diataxis.test.ts
│       ├── drift-detector-priority.test.ts
│       ├── drift-detector.test.ts
│       ├── freshness-tracker.test.ts
│       ├── llm-client.test.ts
│       ├── semantic-analyzer.test.ts
│       ├── sitemap-generator.test.ts
│       ├── usage-metadata.test.ts
│       └── user-feedback-integration.test.ts
├── tsconfig.json
└── typedoc.json
```

# Files

--------------------------------------------------------------------------------
/tests/utils/llm-client.test.ts:
--------------------------------------------------------------------------------

```typescript
/**
 * Tests for LLM Client
 */

import {
  createLLMClient,
  DeepSeekClient,
  isLLMAvailable,
  type LLMConfig,
  type SemanticAnalysis,
  type SimulationResult,
} from '../../src/utils/llm-client.js';

// Mock fetch globally
global.fetch = jest.fn();

describe('LLM Client', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    // Clear environment variables
    delete process.env.DOCUMCP_LLM_API_KEY;
    delete process.env.DOCUMCP_LLM_PROVIDER;
    delete process.env.DOCUMCP_LLM_MODEL;
    delete process.env.OPENAI_API_KEY; // prevent leakage from the OPENAI_API_KEY test below
  });

  describe('createLLMClient', () => {
    test('should return null when no API key is provided', () => {
      const client = createLLMClient();
      expect(client).toBeNull();
    });

    test('should create client with environment variables', () => {
      process.env.DOCUMCP_LLM_API_KEY = 'test-key';
      const client = createLLMClient();
      expect(client).not.toBeNull();
      expect(client).toBeInstanceOf(DeepSeekClient);
    });

    test('should create client with config parameter', () => {
      const client = createLLMClient({
        provider: 'deepseek',
        apiKey: 'test-key',
        model: 'deepseek-chat',
      });
      expect(client).not.toBeNull();
      expect(client).toBeInstanceOf(DeepSeekClient);
    });

    test('should use default provider and model', () => {
      process.env.DOCUMCP_LLM_API_KEY = 'test-key';
      const client = createLLMClient();
      expect(client).not.toBeNull();
    });

    test('should support multiple providers', () => {
      const providers = ['deepseek', 'openai', 'anthropic', 'ollama'] as const;

      for (const provider of providers) {
        const client = createLLMClient({
          provider,
          apiKey: 'test-key',
          model: 'test-model',
        });
        expect(client).not.toBeNull();
      }
    });
  });

  describe('isLLMAvailable', () => {
    test('should return false when no API key is set', () => {
      expect(isLLMAvailable()).toBe(false);
    });

    test('should return true when DOCUMCP_LLM_API_KEY is set', () => {
      process.env.DOCUMCP_LLM_API_KEY = 'test-key';
      expect(isLLMAvailable()).toBe(true);
    });

    test('should return true when OPENAI_API_KEY is set', () => {
      process.env.OPENAI_API_KEY = 'test-key';
      expect(isLLMAvailable()).toBe(true);
    });
  });

  describe('DeepSeekClient', () => {
    let client: DeepSeekClient;
    const config: LLMConfig = {
      provider: 'deepseek',
      apiKey: 'test-api-key',
      model: 'deepseek-chat',
      maxTokens: 1000,
      timeout: 5000,
    };

    beforeEach(() => {
      client = new DeepSeekClient(config);
    });

    describe('isAvailable', () => {
      test('should return true when API key is present', () => {
        expect(client.isAvailable()).toBe(true);
      });

      test('should return false when API key is missing', () => {
        const noKeyClient = new DeepSeekClient({ ...config, apiKey: undefined });
        expect(noKeyClient.isAvailable()).toBe(false);
      });
    });

    describe('complete', () => {
      test('should throw error when client is not available', async () => {
        const noKeyClient = new DeepSeekClient({ ...config, apiKey: undefined });
        await expect(noKeyClient.complete('test prompt')).rejects.toThrow(
          'LLM client is not available'
        );
      });

      test('should make successful API request', async () => {
        const mockResponse = {
          choices: [
            {
              message: {
                content: 'Test response',
              },
            },
          ],
        };

        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => mockResponse,
        });

        const response = await client.complete('Test prompt');
        expect(response).toBe('Test response');
        expect(global.fetch).toHaveBeenCalledWith(
          expect.stringContaining('/chat/completions'),
          expect.objectContaining({
            method: 'POST',
            headers: expect.objectContaining({
              'Content-Type': 'application/json',
              'Authorization': 'Bearer test-api-key',
            }),
          })
        );
      });

      test('should handle API errors', async () => {
        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: false,
          status: 500,
          text: async () => 'Internal server error',
        });

        await expect(client.complete('Test prompt')).rejects.toThrow(
          'LLM API error: 500'
        );
      });

      test('should handle timeout', async () => {
        const shortTimeoutClient = new DeepSeekClient({ ...config, timeout: 100 });

        (global.fetch as jest.Mock).mockImplementationOnce(() =>
          new Promise((resolve) => setTimeout(resolve, 1000))
        );

        await expect(shortTimeoutClient.complete('Test prompt')).rejects.toThrow();
      });

      test('should handle network errors', async () => {
        (global.fetch as jest.Mock).mockRejectedValueOnce(new Error('Network error'));

        await expect(client.complete('Test prompt')).rejects.toThrow('Network error');
      });
    });

    describe('analyzeCodeChange', () => {
      test('should analyze code changes successfully', async () => {
        const mockAnalysis: SemanticAnalysis = {
          hasBehavioralChange: true,
          breakingForExamples: false,
          changeDescription: 'Function parameter type changed',
          affectedDocSections: ['API Reference'],
          confidence: 0.9,
        };

        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => ({
            choices: [
              {
                message: {
                  content: JSON.stringify(mockAnalysis),
                },
              },
            ],
          }),
        });

        const codeBefore = 'function test(x: number) { return x * 2; }';
        const codeAfter = 'function test(x: string) { return x.repeat(2); }';

        const result = await client.analyzeCodeChange(codeBefore, codeAfter);

        expect(result.hasBehavioralChange).toBe(true);
        expect(result.breakingForExamples).toBe(false);
        expect(result.confidence).toBe(0.9);
        expect(result.affectedDocSections).toContain('API Reference');
      });

      test('should handle JSON in markdown code blocks', async () => {
        const mockAnalysis: SemanticAnalysis = {
          hasBehavioralChange: true,
          breakingForExamples: true,
          changeDescription: 'Breaking change',
          affectedDocSections: ['Examples'],
          confidence: 0.85,
        };

        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => ({
            choices: [
              {
                message: {
                  content: '```json\n' + JSON.stringify(mockAnalysis) + '\n```',
                },
              },
            ],
          }),
        });

        const result = await client.analyzeCodeChange('code1', 'code2');
        expect(result.hasBehavioralChange).toBe(true);
        expect(result.confidence).toBe(0.85);
      });

      test('should return fallback result on parse error', async () => {
        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => ({
            choices: [
              {
                message: {
                  content: 'Invalid JSON response',
                },
              },
            ],
          }),
        });

        const result = await client.analyzeCodeChange('code1', 'code2');
        expect(result.confidence).toBe(0);
        expect(result.hasBehavioralChange).toBe(false);
        expect(result.changeDescription).toContain('Analysis failed');
      });

      test('should normalize confidence values', async () => {
        const mockAnalysis = {
          hasBehavioralChange: true,
          breakingForExamples: false,
          changeDescription: 'Test',
          affectedDocSections: [],
          confidence: 1.5, // Invalid: > 1
        };

        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => ({
            choices: [
              {
                message: {
                  content: JSON.stringify(mockAnalysis),
                },
              },
            ],
          }),
        });

        const result = await client.analyzeCodeChange('code1', 'code2');
        expect(result.confidence).toBe(1); // Should be clamped to 1
      });
    });

    describe('simulateExecution', () => {
      test('should simulate execution successfully', async () => {
        const mockSimulation: SimulationResult = {
          success: true,
          expectedOutput: '42',
          actualOutput: '42',
          matches: true,
          differences: [],
          confidence: 0.95,
        };

        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => ({
            choices: [
              {
                message: {
                  content: JSON.stringify(mockSimulation),
                },
              },
            ],
          }),
        });

        const example = 'const result = multiply(6, 7);';
        const implementation = 'function multiply(a, b) { return a * b; }';

        const result = await client.simulateExecution(example, implementation);

        expect(result.success).toBe(true);
        expect(result.matches).toBe(true);
        expect(result.confidence).toBe(0.95);
        expect(result.differences).toHaveLength(0);
      });

      test('should detect mismatches', async () => {
        const mockSimulation: SimulationResult = {
          success: true,
          expectedOutput: '42',
          actualOutput: '43',
          matches: false,
          differences: ['Output mismatch: expected 42, got 43'],
          confidence: 0.8,
        };

        (global.fetch as jest.Mock).mockResolvedValueOnce({
          ok: true,
          json: async () => ({
            choices: [
              {
                message: {
                  content: JSON.stringify(mockSimulation),
                },
              },
            ],
          }),
        });

        const result = await client.simulateExecution('example', 'impl');

        expect(result.matches).toBe(false);
        expect(result.differences.length).toBeGreaterThan(0);
      });

      test('should return fallback result on error', async () => {
        (global.fetch as jest.Mock).mockRejectedValueOnce(new Error('Network error'));

        const result = await client.simulateExecution('example', 'impl');

        expect(result.success).toBe(false);
        expect(result.matches).toBe(false);
        expect(result.confidence).toBe(0);
        expect(result.differences.length).toBeGreaterThan(0);
      });
    });

    describe('rate limiting', () => {
      test('should respect rate limits', async () => {
        const mockResponse = {
          choices: [{ message: { content: 'Response' } }],
        };

        (global.fetch as jest.Mock).mockResolvedValue({
          ok: true,
          json: async () => mockResponse,
        });

        // Make multiple requests quickly
        const promises = Array(5).fill(null).map(() =>
          client.complete('test')
        );

        const results = await Promise.all(promises);
        expect(results).toHaveLength(5);
        expect(global.fetch).toHaveBeenCalledTimes(5);
      });
    });
  });
});

```

--------------------------------------------------------------------------------
/docs/how-to/local-testing.md:
--------------------------------------------------------------------------------

````markdown
---
documcp:
  last_updated: "2025-11-20T00:46:21.952Z"
  last_validated: "2025-12-09T19:41:38.583Z"
  auto_updated: false
  update_frequency: monthly
  validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---

# Local Documentation Testing

This guide shows how to test your documentation locally before deploying to GitHub Pages, using containerized environments that don't affect your system.

## 🎯 Best Practice: Test Build Before Pushing

**Always test your documentation build locally before pushing to git** to ensure GitHub Actions will build successfully:

### Option 1: Test Node.js Build (Recommended - Matches GitHub Actions)

```bash
# Test the same build process GitHub Actions uses
cd docs
npm ci
npm run build
```

This uses the exact same process as GitHub Actions and catches build issues early.

### Option 2: Test Docker Build (Optional - For Container Validation)

```bash
# Quick Docker validation (if Dockerfile is configured)
docker build -f Dockerfile.docs -t documcp-docs-test . && echo "✅ Docker build ready"
```

**Note**: Docker testing validates containerized environments, but GitHub Actions uses Node.js directly, so Option 1 is more reliable for CI validation.

## Quick Start - Containerized Testing

DocuMCP automatically generates a containerized testing environment that requires only Docker or Podman:

```bash
# Run the containerized testing script
./test-docs-local.sh
```

This script will:

1. **Detect** your container runtime (Podman or Docker)
2. **Build** a documentation container
3. **Check** for broken links in your documentation
4. **Serve** the documentation at http://localhost:3001

### Prerequisites

You need either Docker or Podman installed:

**Option 1: Podman (rootless, more secure)**

```bash
# macOS
brew install podman

# Ubuntu/Debian
sudo apt-get install podman

# RHEL/CentOS/Fedora
sudo dnf install podman
```

**Option 2: Docker**

```bash
# macOS
brew install docker

# Or download from: https://docs.docker.com/get-docker/
```

## Container-Based Testing Methods

### Method 1: Using the Generated Script (Recommended)

```bash
# Simple one-command testing
./test-docs-local.sh
```

### Method 2: Using Docker Compose

```bash
# Build and run with Docker Compose
docker-compose -f docker-compose.docs.yml up --build

# Or with Podman Compose
podman-compose -f docker-compose.docs.yml up --build
```

### Method 3: Manual Container Commands

```bash
# Build the container
docker build -f Dockerfile.docs -t documcp-docs .
# or: podman build -f Dockerfile.docs -t documcp-docs .

# Run the container
docker run --rm -p 3001:3001 documcp-docs
# or: podman run --rm -p 3001:3001 documcp-docs
```

### Method 4: Pre-Push Docker Validation

**Recommended workflow before pushing to git:**

```bash
# 1. Test Docker build (validates CI will work)
docker build -f Dockerfile.docs -t documcp-docs-test .

# 2. If successful, test locally
docker run --rm -p 3001:3001 documcp-docs-test

# 3. Verify at http://localhost:3001, then push to git
```

This ensures your Docker build matches what GitHub Actions will use.

### Method 5: Legacy Local Installation (Not Recommended)

If you prefer to install dependencies locally (affects your system):

```bash
cd docs
npm install
npm run build
npm run serve
```

## Pre-Push Checklist

Before pushing documentation changes to git, ensure:

- [ ] **Node.js build succeeds**: `cd docs && npm ci && npm run build` (matches GitHub Actions)
- [ ] **Local preview works**: Documentation serves correctly at http://localhost:3001
- [ ] **No broken links**: Run link checker (included in test script)
- [ ] **Build output valid**: Check `docs/build` directory structure
- [ ] **No console errors**: Check browser console for JavaScript errors

**Quick pre-push validation command (Node.js - Recommended):**

```bash
cd docs && npm ci && npm run build && echo "✅ Ready to push!"
```

**Alternative Docker validation (if Dockerfile is configured):**

```bash
docker build -f Dockerfile.docs -t documcp-docs-test . && \
docker run --rm -d -p 3001:3001 --name docs-test documcp-docs-test && \
sleep 5 && curl -f http://localhost:3001 > /dev/null && \
docker stop docs-test && echo "✅ Ready to push!"
```

**Note**: GitHub Actions uses Node.js directly (not Docker), so testing with `npm run build` is the most reliable way to validate that CI will succeed.

## Verification Checklist

### ✅ Content Verification

- [ ] All pages load without errors
- [ ] Navigation works correctly
- [ ] Links between pages function properly
- [ ] Search functionality works (if enabled)
- [ ] Code blocks render correctly with syntax highlighting
- [ ] Images and assets load properly

### ✅ Structure Verification

- [ ] Sidebar navigation reflects your documentation structure
- [ ] Categories and sections are properly organized
- [ ] Page titles and descriptions are accurate
- [ ] Breadcrumb navigation works
- [ ] Footer links are functional

### ✅ Content Quality

- [ ] No broken internal links
- [ ] No broken external links
- [ ] Code examples are up-to-date
- [ ] Screenshots are current and clear
- [ ] All content follows Diataxis framework principles

### ✅ Performance Testing

- [ ] Pages load quickly (< 3 seconds)
- [ ] Search is responsive
- [ ] No console errors in browser developer tools
- [ ] Mobile responsiveness works correctly

## Troubleshooting Common Issues

### Container Build Failures

**Problem**: Container build fails

**Solutions**:

```bash
# Clean up any existing containers and images
docker system prune -f
# or: podman system prune -f

# Rebuild from scratch
docker build --no-cache -f Dockerfile.docs -t documcp-docs .
# or: podman build --no-cache -f Dockerfile.docs -t documcp-docs .

# Check for syntax errors in markdown files
find docs -name "*.md" -exec npx markdownlint {} \;
```

### Container Runtime Issues

**Problem**: "Neither Podman nor Docker found"

**Solutions**:

```bash
# Check if Docker/Podman is installed and running
docker --version
podman --version

# On macOS, ensure Docker Desktop is running
# On Linux, ensure the Docker daemon is started:
sudo systemctl start docker

# For Podman on macOS, start the machine:
podman machine start
```

### Broken Links

**Problem**: Links between documentation pages don't work

**Solutions**:

- Check that file paths in your markdown match actual file locations
- Ensure relative links use correct syntax (e.g., `[text](../reference/configuration.md)`)
- Verify that `sidebars.js` references match actual file names

### Missing Pages

**Problem**: Some documentation pages don't appear in navigation

**Solutions**:

- Update `docs/sidebars.js` to include new pages (see the sketch below)
- Ensure files are in the correct directory structure
- Check that frontmatter is properly formatted

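For Docusaurus sites, adding a page to navigation means registering its doc ID in the sidebar file. A minimal sketch of `docs/sidebars.js` (the sidebar key, category label, and doc IDs here are hypothetical; use the paths of your actual files, relative to the docs root and without the `.md` extension):

```js
// docs/sidebars.js - hypothetical entries for illustration
module.exports = {
  docsSidebar: [
    "index",
    {
      type: "category",
      label: "How-To Guides",
      // Doc IDs are file paths relative to the docs root, without ".md"
      items: ["how-to/local-testing", "how-to/github-pages-deployment"],
    },
  ],
};
```
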
### Styling Issues

**Problem**: Documentation doesn't look right

**Solutions**:

- Check `docs/src/css/custom.css` for custom styles
- Verify Docusaurus theme configuration
- Clear browser cache and reload

## Link Checking

### Automated Link Checking

DocuMCP provides built-in link checking:

```bash
# Check all links
npm run docs:check-links

# Check only external links
npm run docs:check-links:external

# Check only internal links
npm run docs:check-links:internal
```

### Manual Link Checking

Use markdown-link-check for comprehensive link validation (a sample config follows the commands):

```bash
# Install globally
npm install -g markdown-link-check

# Check specific file
markdown-link-check docs/index.md

# Check all markdown files
find docs -name "*.md" -exec markdown-link-check {} \;
```

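markdown-link-check can also read options from a JSON config file; this repository keeps one at `.markdown-link-check.json`. A minimal example of the format (the values below are illustrative, not the repository's actual settings):

```json
{
  "ignorePatterns": [{ "pattern": "^http://localhost" }],
  "timeout": "20s",
  "retryOn429": true,
  "aliveStatusCodes": [200, 206]
}
```

Pass it explicitly with `markdown-link-check --config .markdown-link-check.json docs/index.md`.
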
## Container Configuration Testing

### Verify Container Configuration

```bash
# Test container health
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# or: podman ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check container logs
docker logs documcp-docs-test
# or: podman logs documcp-docs-test

# Execute commands inside running container
docker exec -it documcp-docs-test sh
# or: podman exec -it documcp-docs-test sh
```

### Test Different Container Environments

```bash
# Test production build in container
docker run --rm -e NODE_ENV=production -p 3001:3001 documcp-docs

# Interactive debugging mode
docker run --rm -it --entrypoint sh documcp-docs
# Inside container: cd docs && npm run build --verbose
```

## Deployment Preview

Before deploying to GitHub Pages, test with production settings:

```bash
# Build with production configuration
npm run build

# Serve the production build locally
npm run serve
```

This simulates exactly what GitHub Pages will serve.

## Integration with Development Workflow

### Pre-commit Testing

Add documentation testing to your git hooks:

```bash
# .husky/pre-commit
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

# Run documentation tests
./test-docs-local.sh --build-only

# Run your regular tests
npm test
```

### CI/CD Integration

Add documentation testing to your GitHub Actions:

```yaml
# .github/workflows/docs-test.yml
name: Documentation Tests

on:
  pull_request:
    paths:
      - "docs/**"

jobs:
  test-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
          cache-dependency-path: "docs/package-lock.json"

      - name: Test documentation build
        run: ./test-docs-local.sh --build-only
```

## Advanced Testing

### Performance Testing

```bash
# Install lighthouse CLI
npm install -g lighthouse

# Test performance of local documentation
lighthouse http://localhost:3001 --output=json --output-path=./lighthouse-report.json

# Check specific performance metrics
lighthouse http://localhost:3001 --only-categories=performance
```

### Accessibility Testing

```bash
# Test accessibility
lighthouse http://localhost:3001 --only-categories=accessibility

# Use axe for detailed accessibility testing
npm install -g axe-cli
axe http://localhost:3001
```

### SEO Testing

```bash
# Test SEO optimization
lighthouse http://localhost:3001 --only-categories=seo

# Check meta tags and structure
curl -s http://localhost:3001 | grep -E "<title>|<meta"
```

## Automated Testing Script

Create a comprehensive test script:

```bash
#!/bin/bash
# comprehensive-docs-test.sh

echo "🧪 Running comprehensive documentation tests..."

# Build test
echo "📦 Testing build..."
cd docs && npm run build

# Link checking
echo "🔗 Checking links..."
cd .. && npm run docs:check-links:all

# Performance test (if lighthouse is available)
if command -v lighthouse &> /dev/null; then
    echo "⚡ Testing performance..."
    cd docs && npm run serve &
    SERVER_PID=$!
    sleep 5
    lighthouse http://localhost:3001 --quiet --only-categories=performance
    kill $SERVER_PID
fi

echo "✅ All tests completed!"
```

## Best Practices

### 1. Test Early and Often

- Test after every significant documentation change
- Include documentation testing in your regular development workflow
- Set up automated testing in CI/CD pipelines

### 2. Test Different Scenarios

- Test with different screen sizes and devices
- Test with JavaScript disabled
- Test with slow internet connections

### 3. Monitor Performance

- Keep an eye on build times
- Monitor page load speeds
- Check for large images or files that slow down the site

### 4. Validate Content Quality

- Use spell checkers and grammar tools
- Ensure code examples work and are current
- Verify that external links are still valid

By following this guide, you can ensure your documentation works correctly before deploying to GitHub Pages, providing a better experience for your users and avoiding broken deployments.

````

--------------------------------------------------------------------------------
/MCP_PHASE2_IMPLEMENTATION.md:
--------------------------------------------------------------------------------

````markdown
# MCP Phase 2 Implementation: Roots Permission System

**Status:** ✅ Complete
**Implementation Date:** October 9, 2025
**Build Status:** ✅ Successful
**Test Status:** ✅ 127/127 tests passing

## Overview

Phase 2 implements the **Roots Permission System** for DocuMCP, adding user-granted file/folder access control following MCP best practices. This enhances security by restricting server operations to explicitly allowed directories, and improves UX by enabling autonomous file discovery.

## Key Features Implemented

### 1. **Roots Capability Declaration**

- Added `roots.listChanged: true` to server capabilities
- Signals to MCP clients that the server supports roots management
- Enables clients to query allowed directories via `ListRoots` request

### 2. **CLI Argument Parsing**

- Added `--root` flag support for specifying allowed directories
- Supports multiple roots: `--root /path/one --root /path/two`
- Automatic `~` expansion for home directory paths
- Defaults to the current working directory if no roots are specified

### 3. **ListRoots Handler**

- Implements the MCP `ListRootsRequest` protocol (sketched below)
- Returns all allowed roots as file:// URIs
- Provides friendly names using `path.basename()`
- Example response:
  ```json
  {
    "roots": [
      { "uri": "file:///Users/user/projects", "name": "projects" },
      { "uri": "file:///Users/user/workspace", "name": "workspace" }
    ]
  }
  ```

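A minimal sketch of such a handler, assuming the TypeScript SDK's `ListRootsRequestSchema` export and Node's `url` helpers (`server` and `allowedRoots` come from the surrounding `src/index.ts`; the actual wiring may differ):

```typescript
import { ListRootsRequestSchema } from "@modelcontextprotocol/sdk/types.js";
import { pathToFileURL } from "url";
import path from "path";

// Hypothetical registration: `allowedRoots` holds absolute paths parsed from --root flags.
server.setRequestHandler(ListRootsRequestSchema, async () => ({
  roots: allowedRoots.map((root) => ({
    uri: pathToFileURL(root).href, // e.g. "file:///Users/user/projects"
    name: path.basename(root), // friendly display name, e.g. "projects"
  })),
}));
```
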
### 4. **Permission Checker Utility**

- **Location:** `src/utils/permission-checker.ts`
- **Functions:**
  - `isPathAllowed(requestedPath, allowedRoots)` - Validates path access
  - `getPermissionDeniedMessage(requestedPath, allowedRoots)` - User-friendly error messages
- **Security:** Uses `path.relative()` to detect directory traversal attempts
- **Algorithm:** Resolves paths to absolute form, then checks that the relative path from an allowed root doesn't start with `..`

### 5. **read_directory Tool**

- New tool for discovering files and directories within allowed roots
- Enables autonomous exploration without requiring full absolute paths from users
- Returns structured data:
  ```typescript
  {
    path: string,
    files: string[],
    directories: string[],
    totalFiles: number,
    totalDirectories: number
  }
  ```
- Enforces permission checks before listing (a sketch of the listing logic follows)

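A minimal sketch of the listing logic, using Node's `fs.promises.readdir` (illustrative only; the actual handler also runs the permission check first and formats the MCP response):

```typescript
import { promises as fs } from "fs";

async function listDirectory(dirPath: string) {
  // withFileTypes separates files from directories without extra stat calls
  const entries = await fs.readdir(dirPath, { withFileTypes: true });
  const files = entries.filter((e) => e.isFile()).map((e) => e.name);
  const directories = entries.filter((e) => e.isDirectory()).map((e) => e.name);
  return {
    path: dirPath,
    files,
    directories,
    totalFiles: files.length,
    totalDirectories: directories.length,
  };
}
```
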
### 6. **Permission Enforcement in File-Based Tools**

- Added permission checks to 5 critical tools:
  - `analyze_repository`
  - `setup_structure`
  - `populate_diataxis_content`
  - `validate_diataxis_content`
  - `check_documentation_links`
- Returns structured `PERMISSION_DENIED` errors with resolution guidance
- Example error:
  ```json
  {
    "success": false,
    "error": {
      "code": "PERMISSION_DENIED",
      "message": "Access denied: Path \"/etc/passwd\" is outside allowed roots. Allowed roots: /Users/user/project",
      "resolution": "Request access to this directory by starting the server with --root argument, or use a path within allowed roots."
    }
  }
  ```

## Files Modified

### 1. `src/index.ts` (+120 lines)

**Changes:**

- Added default `path` import and permission checker imports (lines 17, 44-48)
- CLI argument parsing for `--root` flags (lines 69-84)
- Added roots capability to the server (lines 101-103)
- Added `read_directory` tool definition (lines 706-717)
- Implemented `ListRoots` handler (lines 1061-1067)
- Implemented `read_directory` handler (lines 1874-1938)
- Added permission checks to 5 file-based tools (multiple sections)

### 2. `src/utils/permission-checker.ts` (NEW, +49 lines)

**Functions:**

- `isPathAllowed()` - Core permission validation logic
- `getPermissionDeniedMessage()` - Standardized error messaging
- Comprehensive JSDoc documentation with examples

## Technical Implementation Details

### CLI Argument Parsing

```typescript
// Parse allowed roots from command line arguments
const allowedRoots: string[] = [];
process.argv.forEach((arg, index) => {
  if (arg === "--root" && process.argv[index + 1]) {
    const rootPath = process.argv[index + 1];
    // Resolve to absolute path and expand ~ for home directory
    const expandedPath = rootPath.startsWith("~")
      ? join(
          process.env.HOME || process.env.USERPROFILE || "",
          rootPath.slice(1),
        )
      : rootPath;
    allowedRoots.push(path.resolve(expandedPath));
  }
});

// If no roots specified, allow current working directory by default
if (allowedRoots.length === 0) {
  allowedRoots.push(process.cwd());
}
```

### Permission Check Pattern

```typescript
// Check if path is allowed
const repoPath = (args as any)?.path;
if (repoPath && !isPathAllowed(repoPath, allowedRoots)) {
  return formatMCPResponse({
    success: false,
    error: {
      code: "PERMISSION_DENIED",
      message: getPermissionDeniedMessage(repoPath, allowedRoots),
      resolution:
        "Request access to this directory by starting the server with --root argument, or use a path within allowed roots.",
    },
    metadata: {
      toolVersion: packageJson.version,
      executionTime: 0,
      timestamp: new Date().toISOString(),
    },
  });
}
```

### Security Algorithm

The `isPathAllowed()` function uses `path.relative()` to detect directory traversal (sketched below):

1. Resolve the requested path to an absolute path
2. For each allowed root:
   - Resolve the root to an absolute path
   - Calculate the relative path from the root to the requested path
   - If the relative path doesn't start with `..` and isn't absolute, access is granted
3. Return `false` if no root allows access

This prevents attacks like:

- `/project/../../../etc/passwd` - blocked (relative path starts with `..`)
- `/etc/passwd` when root is `/project` - blocked (not within root)

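Following the algorithm and message format described above, the two utilities might look like this (a sketch; the shipped `permission-checker.ts` may differ in detail):

```typescript
import path from "path";

export function isPathAllowed(
  requestedPath: string,
  allowedRoots: string[],
): boolean {
  const resolved = path.resolve(requestedPath);
  return allowedRoots.some((root) => {
    const rel = path.relative(path.resolve(root), resolved);
    // Inside the root: the relative path is empty (the root itself) or descends
    // without escaping via ".." and without being absolute (another tree/drive).
    return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
  });
}

export function getPermissionDeniedMessage(
  requestedPath: string,
  allowedRoots: string[],
): string {
  return `Access denied: Path "${requestedPath}" is outside allowed roots. Allowed roots: ${allowedRoots.join(", ")}`;
}
```
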
## Testing Results

### Build Status

✅ TypeScript compilation successful with no errors

### Test Suite

✅ **127/127 tests passing (100%)**

**Key Test Coverage:**

- Tool validation and error handling
- Memory system integration
- Knowledge graph operations
- Functional end-to-end workflows
- Integration tests
- Edge case handling

**No Regressions:**

- All existing tests continue to pass
- No breaking changes to tool APIs
- Backward compatible implementation

## Security Improvements

### Before Phase 2

- ❌ Server could access any file on the system
- ❌ No permission boundaries
- ❌ Users had to provide full absolute paths
- ❌ No visibility into allowed directories

### After Phase 2

- ✅ Access restricted to explicitly allowed roots
- ✅ Directory traversal attacks prevented
- ✅ Users can use relative paths within roots
- ✅ Clients can query allowed directories via ListRoots
- ✅ Clear, actionable error messages when access is denied
- ✅ Defaults to CWD for safe local development

## User Experience Improvements

### Discovery Without Full Paths

Users can now explore repositories without knowing exact file locations:

```
User: "Analyze my project"
Claude: Uses read_directory to discover project structure
Claude: Finds package.json, analyzes dependencies, generates docs
```

### Clear Error Messages

When access is denied, users receive helpful guidance:

```
Access denied: Path "/private/data" is outside allowed roots.
Allowed roots: /Users/user/projects
Resolution: Request access to this directory by starting the server
with --root argument, or use a path within allowed roots.
```

### Flexible Configuration

The server can be started with multiple allowed roots:

```bash
# Single root
npx documcp --root /Users/user/projects

# Multiple roots
npx documcp --root /Users/user/projects --root /Users/user/workspace

# Default (current directory)
npx documcp
```

## Usage Examples

### Starting Server with Roots

```bash
# Allow access to specific project
npx documcp --root /Users/user/my-project

# Allow access to multiple directories
npx documcp --root ~/projects --root ~/workspace

# Use home directory expansion
npx documcp --root ~/code

# Default to current directory
npx documcp
```

### read_directory Tool Usage

```typescript
// Discover files in allowed root
{
  "name": "read_directory",
  "arguments": {
    "path": "/Users/user/projects/my-app"
  }
}

// Response
{
  "success": true,
  "data": {
    "path": "/Users/user/projects/my-app",
    "files": ["package.json", "README.md", "tsconfig.json"],
    "directories": ["src", "tests", "docs"],
    "totalFiles": 3,
    "totalDirectories": 3
  }
}
```

### ListRoots Request

```typescript
// Request
{
  "method": "roots/list"
}

// Response
{
  "roots": [
    {"uri": "file:///Users/user/projects", "name": "projects"}
  ]
}
```

## Alignment with MCP Best Practices

✅ **Roots Protocol Compliance**

- Implements the `roots.listChanged` capability
- Provides a `ListRoots` handler
- Uses the standardized file:// URI format

✅ **Security First**

- Path validation using battle-tested algorithms
- Directory traversal prevention
- Principle of least privilege (explicit allow-list)

✅ **User-Centric Design**

- Clear error messages with actionable resolutions
- Flexible CLI configuration
- Safe defaults (CWD)

✅ **Autonomous Operation**

- `read_directory` enables file discovery
- No need for users to specify full paths
- Tools can explore within allowed roots

## Integration with Phase 1

Phase 2 builds on Phase 1's foundation:

**Phase 1 (Progress & Logging):**

- Added visibility into long-running operations
- Tools report progress at logical checkpoints

**Phase 2 (Roots & Permissions):**

- Adds security boundaries and permission checks
- Progress notifications can now include permission validation steps
- Example: "Validating path permissions..." → "Analyzing repository..."

**Combined Benefits:**

- Users see both progress AND permission enforcement
- Clear feedback when operations are blocked by permissions
- Transparent, secure, and user-friendly experience

## Performance Impact

✅ **Negligible Overhead**

- Permission checks: O(n) where n = number of allowed roots (typically 1-5)
- `path.resolve()` and `path.relative()` are highly optimized native operations
- No measurable impact on tool execution time
- All tests pass with no performance degradation

## Troubleshooting Guide

### Issue: "Access denied" errors

**Cause:** The requested path is outside allowed roots
**Solution:** Start the server with a `--root` flag for the desired directory

### Issue: ListRoots returns empty array

**Cause:** No roots specified and CWD not writable
**Solution:** Explicitly specify roots with the `--root` flag

### Issue: `~` expansion not working

**Cause:** The server doesn't have a HOME or USERPROFILE environment variable
**Solution:** Use absolute paths instead of the `~` shorthand

## Next Steps (Phase 3)

Phase 3 will implement:

1. **HTTP Transport** - Remote server deployment with HTTP/HTTPS
2. **Transport Selection** - Environment-based stdio vs. HTTP choice
3. **Sampling Support** - LLM-powered content generation for creative tasks
4. **Configuration Management** - Environment variables for all settings

## Conclusion

Phase 2 successfully implements the Roots Permission System, bringing DocuMCP into full compliance with MCP security best practices. The implementation:

- ✅ Enforces strict access control without compromising usability
- ✅ Enables autonomous file discovery within allowed roots
- ✅ Provides clear, actionable feedback for permission violations
- ✅ Maintains 100% backward compatibility
- ✅ Passes all 127 tests with no regressions
- ✅ Adds minimal performance overhead
- ✅ Follows MCP protocol standards

**Total Changes:**

- 1 new file created (`permission-checker.ts`)
- 1 existing file modified (`index.ts`)
- 169 net lines added
- New capabilities: the roots declaration, a ListRoots handler, the read_directory tool, and permission checks in 5 file-based tools

**Quality Metrics:**

- Build: ✅ Successful
- Tests: ✅ 127/127 passing (100%)
- Regressions: ✅ None
- Performance: ✅ No measurable impact
- Security: ✅ Significantly improved

````

--------------------------------------------------------------------------------
/docs/reference/prompt-templates.md:
--------------------------------------------------------------------------------

````markdown
---
documcp:
  last_updated: "2025-11-20T00:46:21.963Z"
  last_validated: "2025-12-09T19:41:38.593Z"
  auto_updated: false
  update_frequency: monthly
  validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---

# Prompt Templates

DocuMCP provides a comprehensive set of prompt templates to help you interact effectively with the system. These templates are designed to get optimal results from DocuMCP's AI-powered documentation tools.

## Quick Reference

### Complete Workflow Templates

**Full Documentation Deployment:**

```
analyze my repository, recommend the best static site generator, set up Diataxis documentation structure, and deploy to GitHub Pages
```

**Documentation Audit:**

```
analyze my existing documentation for gaps, validate content accuracy, and provide recommendations for improvement
```

**Quick Setup:**

```
analyze my [LANGUAGE] project and set up documentation with the most suitable static site generator
```

## Repository Analysis Templates

### Basic Analysis

```
analyze my repository for documentation needs
```

### Specific Project Types

```
analyze my TypeScript library for API documentation requirements
analyze my Python package for comprehensive documentation needs
analyze my React application for user guide documentation
analyze my CLI tool for usage documentation
```

### Deep Analysis

```
perform deep analysis of my repository including dependency analysis, complexity assessment, and team collaboration patterns
```

### Focused Analysis

```
analyze my repository focusing on [SPECIFIC_AREA]
# Examples:
# - API documentation opportunities
# - user onboarding needs
# - developer experience gaps
# - deployment documentation requirements
```

## SSG Recommendation Templates

### Basic Recommendation

```
recommend the best static site generator for my project based on the analysis
```

### Preference-Based Recommendations

```
recommend a static site generator for my project with preferences for [ECOSYSTEM] and [PRIORITY]
# Ecosystem options: javascript, python, ruby, go, any
# Priority options: simplicity, features, performance
```

### Comparison Requests

```
compare static site generators for my [PROJECT_TYPE] with focus on [CRITERIA]
# Project types: library, application, tool, documentation
# Criteria: ease of use, customization, performance, community support
```

### Specific Requirements

```
recommend SSG for my project that supports:
- TypeScript integration
- API documentation generation
- Search functionality
- Custom theming
- Multi-language support
```

## Configuration Generation Templates

### Basic Configuration

```
generate [SSG_NAME] configuration for my project
# Examples:
# - generate Docusaurus configuration for my project
# - generate Hugo configuration for my project
# - generate MkDocs configuration for my project
```

### Detailed Configuration

```
generate comprehensive [SSG_NAME] configuration with:
- GitHub integration
- Custom domain setup
- Analytics integration
- SEO optimization
- Performance optimizations
```

### Production-Ready Setup

```
generate production-ready [SSG_NAME] configuration with security best practices and performance optimization
```

## Documentation Structure Templates

### Basic Structure

```
set up Diataxis documentation structure for my project
```

### SSG-Specific Structure

```
create [SSG_NAME] documentation structure following Diataxis principles with example content
```

### Content Population

```
set up documentation structure and populate it with project-specific content based on my code analysis
```

### Advanced Structure

```
create comprehensive documentation structure with:
- Diataxis organization
- Project-specific content
- Code examples from my repository
- API documentation templates
- Deployment guides
```

## Deployment Templates

### Basic GitHub Pages Deployment
164 | 
165 | ## Deployment Templates
166 | 
167 | ### Basic GitHub Pages Deployment
168 | 
169 | ```
170 | deploy my documentation to GitHub Pages
171 | ```
172 | 
173 | ### Complete Deployment Workflow
174 | 
175 | ```
176 | set up automated GitHub Pages deployment with:
177 | - Build optimization
178 | - Security best practices
179 | - Performance monitoring
180 | - Deployment verification
181 | ```
182 | 
183 | ### Custom Domain Deployment
184 | 
185 | ```
186 | deploy to GitHub Pages with custom domain [DOMAIN_NAME] and SSL certificate
187 | ```
188 | 
189 | ### Multi-Environment Deployment
190 | 
191 | ```
192 | set up documentation deployment with staging and production environments
193 | ```
194 | 
195 | ## Content Management Templates
196 | 
197 | ### Content Validation
198 | 
199 | ```
200 | validate all my documentation content for accuracy, broken links, and completeness
201 | ```
202 | 
203 | ### Gap Analysis
204 | 
205 | ```
206 | analyze my documentation for missing content and provide recommendations for improvement
207 | ```
208 | 
209 | ### Content Updates
210 | 
211 | ```
212 | update my existing documentation based on recent code changes and current best practices
213 | ```
214 | 
215 | ### Quality Assurance
216 | 
217 | ```
218 | perform comprehensive quality check on my documentation including:
219 | - Link validation
220 | - Code example testing
221 | - Content accuracy verification
222 | - SEO optimization assessment
223 | ```
224 | 
225 | ## Troubleshooting Templates
226 | 
227 | ### General Troubleshooting
228 | 
229 | ```
230 | diagnose and fix issues with my documentation deployment
231 | ```
232 | 
233 | ### Specific Problem Solving
234 | 
235 | ```
236 | troubleshoot [SPECIFIC_ISSUE] with my documentation setup
237 | # Examples:
238 | # - GitHub Pages deployment failures
239 | # - build errors with my static site generator
240 | # - broken links in my documentation
241 | # - performance issues with my documentation site
242 | ```
243 | 
244 | ### Verification and Testing
245 | 
246 | ```
247 | verify my documentation deployment is working correctly and identify any issues
248 | ```
249 | 
250 | ## Memory and Learning Templates
251 | 
252 | ### Memory Recall
253 | 
254 | ```
255 | show me insights from similar projects and successful documentation patterns
256 | ```
257 | 
258 | ### Learning from History
259 | 
260 | ```
261 | based on previous analyses, what are the best practices for my type of project?
262 | ```
263 | 
264 | ### Pattern Recognition
265 | 
266 | ```
267 | analyze patterns in my documentation workflow and suggest optimizations
268 | ```
269 | 
270 | ## Advanced Workflow Templates
271 | 
272 | ### Multi-Step Workflows
273 | 
274 | **Research and Planning:**
275 | 
276 | ```
277 | 1. analyze my repository comprehensively
278 | 2. research best practices for my project type
279 | 3. recommend optimal documentation strategy
280 | 4. create implementation plan
281 | ```
282 | 
283 | **Implementation and Validation:**
284 | 
285 | ```
286 | 1. set up recommended documentation structure
287 | 2. populate with project-specific content
288 | 3. validate all content and links
289 | 4. deploy to GitHub Pages
290 | 5. verify deployment success
291 | ```
292 | 
293 | **Maintenance and Optimization:**
294 | 
295 | ```
296 | 1. audit existing documentation for gaps
297 | 2. update content based on code changes
298 | 3. optimize for performance and SEO
299 | 4. monitor deployment health
300 | ```
301 | 
302 | ### Conditional Workflows
303 | 
304 | ```
305 | if my project is a [TYPE], then:
306 | - focus on [SPECIFIC_DOCUMENTATION_NEEDS]
307 | - use [RECOMMENDED_SSG]
308 | - emphasize [CONTENT_PRIORITIES]
309 | ```
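
For instance, a filled-in version of this conditional template might read:

```
if my project is a CLI tool, then:
- focus on usage and installation documentation
- use MkDocs
- emphasize quick-start examples
```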
310 | 
311 | ## Context-Aware Templates
312 | 
313 | ### Project-Specific Context
314 | 
315 | ```
316 | for my [PROJECT_TYPE] written in [LANGUAGE] with [FRAMEWORK]:
317 | - analyze documentation needs
318 | - recommend appropriate tools
319 | - create tailored content structure
320 | ```
321 | 
322 | ### Team-Based Context
323 | 
324 | ```
325 | for a [TEAM_SIZE] team working on [PROJECT_DESCRIPTION]:
326 | - set up collaborative documentation workflow
327 | - implement review and approval processes
328 | - create contribution guidelines
329 | ```
330 | 
331 | ### Audience-Specific Context
332 | 
333 | ```
334 | create documentation targeting [AUDIENCE]:
335 | - developers (API docs, technical guides)
336 | - end users (tutorials, how-to guides)
337 | - contributors (development setup, guidelines)
338 | - administrators (deployment, configuration)
339 | ```
340 | 
341 | ## Template Customization
342 | 
343 | ### Variables and Placeholders
344 | 
345 | Use these placeholders in templates:
346 | 
347 | | Placeholder      | Description           | Examples                          |
348 | | ---------------- | --------------------- | --------------------------------- |
349 | | `[PROJECT_TYPE]` | Type of project       | library, application, tool        |
350 | | `[LANGUAGE]`     | Programming language  | TypeScript, Python, Go            |
351 | | `[SSG_NAME]`     | Static site generator | Docusaurus, Hugo, MkDocs          |
352 | | `[DOMAIN_NAME]`  | Custom domain         | docs.example.com                  |
353 | | `[FRAMEWORK]`    | Framework used        | React, Vue, Django                |
354 | | `[TEAM_SIZE]`    | Team size             | small, medium, large              |
355 | | `[ECOSYSTEM]`    | Package ecosystem     | javascript, python, ruby          |
356 | | `[PRIORITY]`     | Priority focus        | simplicity, features, performance |
357 | 
358 | ### Creating Custom Templates
359 | 
360 | ```
361 | create custom template for [SPECIFIC_USE_CASE]:
362 | - define requirements
363 | - specify desired outcomes
364 | - include success criteria
365 | - provide examples
366 | ```
367 | 
368 | ## Best Practices for Prompting
369 | 
370 | ### Effective Prompt Structure
371 | 
372 | 1. **Be Specific:** Include relevant details about your project
373 | 2. **Set Context:** Mention your experience level and constraints
374 | 3. **Define Success:** Explain what a good outcome looks like
375 | 4. **Ask for Explanation:** Request reasoning behind recommendations
376 | 
377 | ### Example of Well-Structured Prompt
378 | 
379 | ```
380 | I have a TypeScript library for data visualization with 50+ contributors.
381 | I need comprehensive documentation that includes:
382 | - API reference for all public methods
383 | - Interactive examples with code samples
384 | - Getting started guide for developers
385 | - Contribution guidelines for the community
386 | 
387 | Please analyze my repository, recommend the best approach, and set up a
388 | documentation system that can handle our scale and complexity.
389 | ```
390 | 
391 | ### Common Pitfalls to Avoid
392 | 
393 | - **Too vague:** "help with documentation"
394 | - **Missing context:** Not mentioning project type or requirements
395 | - **No constraints:** Not specifying limitations or preferences
396 | - **Single-step thinking:** Not considering the full workflow
397 | 
398 | ## Integration with Development Workflow
399 | 
400 | ### Git Hooks Integration
401 | 
402 | ```
403 | set up pre-commit hooks to:
404 | - validate documentation changes
405 | - check for broken links
406 | - ensure content quality
407 | - update generated content
408 | ```
409 | 
410 | ### CI/CD Integration
411 | 
412 | ```
413 | create GitHub Actions workflow that:
414 | - validates documentation on every PR
415 | - deploys docs on main branch updates
416 | - runs quality checks automatically
417 | - notifies team of issues
418 | ```
419 | 
420 | ### IDE Integration
421 | 
422 | ```
423 | configure development environment for:
424 | - live documentation preview
425 | - automated link checking
426 | - content validation
427 | - template generation
428 | ```
429 | 
430 | ## Troubleshooting Prompts
431 | 
432 | ### When Things Don't Work
433 | 
434 | **Analysis Issues:**
435 | 
436 | ```
437 | my repository analysis returned incomplete results, please retry with deep analysis and explain what might have caused the issue
438 | ```
439 | 
440 | **Recommendation Problems:**
441 | 
442 | ```
443 | the SSG recommendation doesn't match my needs because [REASON], please provide alternative recommendations with different priorities
444 | ```
445 | 
446 | **Deployment Failures:**
447 | 
448 | ```
449 | my GitHub Pages deployment failed with [ERROR_MESSAGE], please diagnose the issue and provide a fix
450 | ```
451 | 
452 | **Content Issues:**
453 | 
454 | ```
455 | my generated documentation has [PROBLEM], please update the content and ensure it meets [REQUIREMENTS]
456 | ```
457 | 
458 | For more troubleshooting help, see the [Troubleshooting Guide](../how-to/troubleshooting.md).
459 | 
460 | ## Template Categories Summary
461 | 
462 | | Category            | Purpose                | Key Templates                      |
463 | | ------------------- | ---------------------- | ---------------------------------- |
464 | | **Analysis**        | Understanding projects | Repository analysis, gap detection |
465 | | **Recommendation**  | Tool selection         | SSG comparison, feature matching   |
466 | | **Configuration**   | Setup and config       | Production configs, optimization   |
467 | | **Structure**       | Content organization   | Diataxis setup, content population |
468 | | **Deployment**      | Going live             | GitHub Pages, custom domains       |
469 | | **Validation**      | Quality assurance      | Link checking, content validation  |
470 | | **Troubleshooting** | Problem solving        | Diagnosis, issue resolution        |
471 | | **Workflow**        | Process automation     | Multi-step procedures, CI/CD       |
472 | 
473 | These templates provide a solid foundation for effective interaction with DocuMCP. Customize them based on your specific needs and project requirements.
474 | 
```

--------------------------------------------------------------------------------
/tests/integration/kg-documentation-workflow.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Integration Tests for Knowledge Graph Documentation Workflow
  3 |  * Tests end-to-end workflow from repository analysis to documentation tracking
  4 |  */
  5 | 
  6 | import { describe, it, expect, beforeEach, afterEach } from "@jest/globals";
  7 | import { promises as fs } from "fs";
  8 | import path from "path";
  9 | import { tmpdir } from "os";
 10 | import { analyzeRepository } from "../../src/tools/analyze-repository.js";
 11 | import {
 12 |   initializeKnowledgeGraph,
 13 |   getKnowledgeGraph,
 14 |   saveKnowledgeGraph,
 15 | } from "../../src/memory/kg-integration.js";
 16 | 
 17 | describe("KG Documentation Workflow Integration", () => {
 18 |   let testDir: string;
 19 | 
 20 |   beforeEach(async () => {
 21 |     testDir = path.join(tmpdir(), `documcp-integration-${Date.now()}`);
 22 |     await fs.mkdir(testDir, { recursive: true });
 23 | 
 24 |     // Initialize KG with test storage
 25 |     const storageDir = path.join(testDir, ".documcp/memory");
 26 |     await initializeKnowledgeGraph(storageDir);
 27 |   });
 28 | 
 29 |   afterEach(async () => {
 30 |     try {
 31 |       await fs.rm(testDir, { recursive: true, force: true });
 32 |     } catch {
 33 |       // Ignore cleanup errors
 34 |     }
 35 |   });
 36 | 
 37 |   it("should complete full workflow: analyze → create entities → link relationships", async () => {
 38 |     // Setup: Create a test repository structure
 39 |     const srcDir = path.join(testDir, "src");
 40 |     const docsDir = path.join(testDir, "docs");
 41 |     await fs.mkdir(srcDir, { recursive: true });
 42 |     await fs.mkdir(docsDir, { recursive: true });
 43 | 
 44 |     // Create source code
 45 |     await fs.writeFile(
 46 |       path.join(srcDir, "auth.ts"),
 47 |       `
 48 | export class AuthService {
 49 |   async login(username: string, password: string) {
 50 |     return { token: "abc123" };
 51 |   }
 52 | 
 53 |   async logout(token: string) {
 54 |     return true;
 55 |   }
 56 | }
 57 | 
 58 | export function validateToken(token: string) {
 59 |   return token.length > 0;
 60 | }
 61 |     `,
 62 |       "utf-8",
 63 |     );
 64 | 
 65 |     // Create documentation
 66 |     await fs.writeFile(
 67 |       path.join(docsDir, "api.md"),
 68 |       `
 69 | # Authentication API
 70 | 
 71 | ## Login
 72 | 
 73 | Use the \`login()\` method from \`AuthService\` class in \`src/auth.ts\`:
 74 | 
 75 | \`\`\`typescript
 76 | const auth = new AuthService();
 77 | const result = await auth.login(username, password);
 78 | \`\`\`
 79 | 
 80 | ## Logout
 81 | 
 82 | Call \`logout()\` with the authentication token:
 83 | 
 84 | \`\`\`typescript
 85 | await auth.logout(token);
 86 | \`\`\`
 87 | 
 88 | ## Token Validation
 89 | 
 90 | Use \`validateToken()\` function to validate tokens.
 91 |     `,
 92 |       "utf-8",
 93 |     );
 94 | 
 95 |     await fs.writeFile(
 96 |       path.join(testDir, "README.md"),
 97 |       "# Test Project",
 98 |       "utf-8",
 99 |     );
100 |     await fs.writeFile(
101 |       path.join(testDir, "package.json"),
102 |       JSON.stringify({ name: "test-project", version: "1.0.0" }),
103 |       "utf-8",
104 |     );
105 | 
106 |     // Act: Run repository analysis
107 |     const analysisResult = await analyzeRepository({
108 |       path: testDir,
109 |       depth: "standard",
110 |     });
111 | 
112 |     // Assert: Analysis completed (may have errors due to test environment)
113 |     expect(analysisResult.content).toBeDefined();
114 |     expect(analysisResult.content.length).toBeGreaterThan(0);
115 | 
116 |     // If analysis succeeded, verify structure
117 |     if (!analysisResult.isError) {
118 |       const analysis = JSON.parse(analysisResult.content[0].text);
119 |       if (analysis.success) {
120 |         expect(analysis.data.structure.hasDocs).toBe(true);
121 |       }
122 |     }
123 | 
124 |     // Wait for KG operations to complete
125 |     await new Promise((resolve) => setTimeout(resolve, 100));
126 | 
127 |     // Verify: Check knowledge graph entities
128 |     const kg = await getKnowledgeGraph();
129 |     const allNodes = await kg.getAllNodes();
130 |     const allEdges = await kg.getAllEdges();
131 | 
132 |     // Should have project, code files, and documentation sections
133 |     const projectNodes = allNodes.filter((n) => n.type === "project");
134 |     const codeFileNodes = allNodes.filter((n) => n.type === "code_file");
135 |     const docSectionNodes = allNodes.filter(
136 |       (n) => n.type === "documentation_section",
137 |     );
138 | 
139 |     expect(projectNodes.length).toBeGreaterThan(0);
140 |     expect(codeFileNodes.length).toBeGreaterThan(0);
141 |     expect(docSectionNodes.length).toBeGreaterThan(0);
142 | 
143 |     // Verify code file details
144 |     const authFile = codeFileNodes.find((n) =>
145 |       n.properties.path.includes("auth.ts"),
146 |     );
147 |     expect(authFile).toBeDefined();
148 |     expect(authFile?.properties.language).toBe("typescript");
149 |     expect(authFile?.properties.classes).toContain("AuthService");
150 |     expect(authFile?.properties.functions).toContain("validateToken");
151 | 
152 |     // Verify documentation sections
153 |     const apiDoc = docSectionNodes.find((n) =>
154 |       n.properties.filePath.includes("api.md"),
155 |     );
156 |     expect(apiDoc).toBeDefined();
157 |     expect(apiDoc?.properties.hasCodeExamples).toBe(true);
158 |     expect(apiDoc?.properties.referencedFunctions.length).toBeGreaterThan(0);
159 | 
160 |     // Verify relationships
161 |     const referencesEdges = allEdges.filter((e) => e.type === "references");
162 |     const documentsEdges = allEdges.filter((e) => e.type === "documents");
163 | 
164 |     expect(referencesEdges.length).toBeGreaterThan(0);
165 |     expect(documentsEdges.length).toBeGreaterThan(0);
166 | 
167 |     // Verify specific relationship: api.md references auth.ts
168 |     const apiToAuthEdge = referencesEdges.find(
169 |       (e) => e.source === apiDoc?.id && e.target === authFile?.id,
170 |     );
171 |     expect(apiToAuthEdge).toBeDefined();
172 |     expect(apiToAuthEdge?.properties.referenceType).toBe("api-reference");
173 |   });
174 | 
175 |   it("should detect outdated documentation when code changes", async () => {
176 |     // Setup: Create initial code and docs
177 |     const srcDir = path.join(testDir, "src");
178 |     const docsDir = path.join(testDir, "docs");
179 |     await fs.mkdir(srcDir, { recursive: true });
180 |     await fs.mkdir(docsDir, { recursive: true });
181 | 
182 |     await fs.writeFile(
183 |       path.join(srcDir, "user.ts"),
184 |       "export function getUser() {}",
185 |       "utf-8",
186 |     );
187 | 
188 |     await fs.writeFile(
189 |       path.join(docsDir, "guide.md"),
190 |       "Call `getUser()` from `src/user.ts`",
191 |       "utf-8",
192 |     );
193 | 
194 |     await fs.writeFile(path.join(testDir, "README.md"), "# Test", "utf-8");
195 |     await fs.writeFile(path.join(testDir, "package.json"), "{}", "utf-8");
196 | 
197 |     // First analysis
198 |     await analyzeRepository({ path: testDir, depth: "standard" });
199 |     await new Promise((resolve) => setTimeout(resolve, 100));
200 | 
201 |     // Simulate code change
202 |     await new Promise((resolve) => setTimeout(resolve, 100)); // Ensure different timestamp
203 |     await fs.writeFile(
204 |       path.join(srcDir, "user.ts"),
205 |       "export function getUser(id: string) {} // CHANGED",
206 |       "utf-8",
207 |     );
208 | 
209 |     // Second analysis
210 |     await analyzeRepository({ path: testDir, depth: "standard" });
211 |     await new Promise((resolve) => setTimeout(resolve, 100));
212 | 
213 |     // Verify: Check that system handled multiple analyses
214 |     // In a real scenario, outdated_for edges would be created
215 |     // For this test, just verify no crashes occurred
216 |     const kg = await getKnowledgeGraph();
217 |     const allNodes = await kg.getAllNodes();
218 | 
219 |     // Should have created some nodes from both analyses
220 |     expect(allNodes.length).toBeGreaterThan(0);
221 |   });
222 | 
223 |   it("should handle projects with no documentation gracefully", async () => {
224 |     // Setup: Code-only project
225 |     const srcDir = path.join(testDir, "src");
226 |     await fs.mkdir(srcDir, { recursive: true });
227 | 
228 |     await fs.writeFile(
229 |       path.join(srcDir, "index.ts"),
230 |       "export function main() {}",
231 |       "utf-8",
232 |     );
233 | 
234 |     await fs.writeFile(path.join(testDir, "package.json"), "{}", "utf-8");
235 | 
236 |     // Act
237 |     await analyzeRepository({ path: testDir, depth: "standard" });
238 |     await new Promise((resolve) => setTimeout(resolve, 100));
239 | 
240 |     // Verify: Should still create code entities, just no doc entities
241 |     const kg = await getKnowledgeGraph();
242 |     const allNodes = await kg.getAllNodes();
243 | 
244 |     const codeFileNodes = allNodes.filter((n) => n.type === "code_file");
245 |     const docSectionNodes = allNodes.filter(
246 |       (n) => n.type === "documentation_section",
247 |     );
248 | 
249 |     expect(codeFileNodes.length).toBeGreaterThan(0);
250 |     expect(docSectionNodes.length).toBe(0);
251 |   });
252 | 
253 |   it("should handle multi-file projects correctly", async () => {
254 |     // Setup: Multiple source files
255 |     const srcDir = path.join(testDir, "src");
256 |     await fs.mkdir(path.join(srcDir, "auth"), { recursive: true });
257 |     await fs.mkdir(path.join(srcDir, "db"), { recursive: true });
258 | 
259 |     await fs.writeFile(
260 |       path.join(srcDir, "auth", "login.ts"),
261 |       "export function login() {}",
262 |       "utf-8",
263 |     );
264 |     await fs.writeFile(
265 |       path.join(srcDir, "auth", "logout.ts"),
266 |       "export function logout() {}",
267 |       "utf-8",
268 |     );
269 |     await fs.writeFile(
270 |       path.join(srcDir, "db", "query.ts"),
271 |       "export function query() {}",
272 |       "utf-8",
273 |     );
274 | 
275 |     await fs.writeFile(path.join(testDir, "package.json"), "{}", "utf-8");
276 | 
277 |     // Act
278 |     await analyzeRepository({ path: testDir, depth: "standard" });
279 |     await new Promise((resolve) => setTimeout(resolve, 100));
280 | 
281 |     // Verify
282 |     const kg = await getKnowledgeGraph();
283 |     const codeFileNodes = (await kg.getAllNodes()).filter(
284 |       (n) => n.type === "code_file",
285 |     );
286 | 
287 |     expect(codeFileNodes.length).toBe(3);
288 | 
289 |     const paths = codeFileNodes.map((n) => n.properties.path);
290 |     expect(paths).toContain("src/auth/login.ts");
291 |     expect(paths).toContain("src/auth/logout.ts");
292 |     expect(paths).toContain("src/db/query.ts");
293 |   });
294 | 
295 |   it("should persist knowledge graph to storage", async () => {
296 |     // Setup
297 |     const srcDir = path.join(testDir, "src");
298 |     await fs.mkdir(srcDir, { recursive: true });
299 | 
300 |     await fs.writeFile(
301 |       path.join(srcDir, "test.ts"),
302 |       "export function test() {}",
303 |       "utf-8",
304 |     );
305 | 
306 |     await fs.writeFile(path.join(testDir, "package.json"), "{}", "utf-8");
307 | 
308 |     // Act
309 |     await analyzeRepository({ path: testDir, depth: "standard" });
310 |     await new Promise((resolve) => setTimeout(resolve, 100));
311 | 
312 |     // Save KG
313 |     await saveKnowledgeGraph();
314 | 
315 |     // Verify storage files exist
316 |     const storageDir = path.join(testDir, ".documcp/memory");
317 |     const entitiesFile = path.join(
318 |       storageDir,
319 |       "knowledge-graph-entities.jsonl",
320 |     );
321 |     const relationshipsFile = path.join(
322 |       storageDir,
323 |       "knowledge-graph-relationships.jsonl",
324 |     );
325 | 
326 |     const entitiesExist = await fs
327 |       .access(entitiesFile)
328 |       .then(() => true)
329 |       .catch(() => false);
330 |     const relationshipsExist = await fs
331 |       .access(relationshipsFile)
332 |       .then(() => true)
333 |       .catch(() => false);
334 | 
335 |     expect(entitiesExist).toBe(true);
336 |     expect(relationshipsExist).toBe(true);
337 | 
338 |     // Verify content
339 |     const entitiesContent = await fs.readFile(entitiesFile, "utf-8");
340 |     expect(entitiesContent).toContain("code_file");
341 |   });
342 | 
343 |   it("should calculate coverage metrics for documentation", async () => {
344 |     // Setup: 3 functions, docs covering 2 of them
345 |     const srcDir = path.join(testDir, "src");
346 |     const docsDir = path.join(testDir, "docs");
347 |     await fs.mkdir(srcDir, { recursive: true });
348 |     await fs.mkdir(docsDir, { recursive: true });
349 | 
350 |     await fs.writeFile(
351 |       path.join(srcDir, "api.ts"),
352 |       `
353 | export function create() {}
354 | export function read() {}
355 | export function update() {} // Not documented
356 |     `,
357 |       "utf-8",
358 |     );
359 | 
360 |     await fs.writeFile(
361 |       path.join(docsDir, "api.md"),
362 |       `
363 | # API Reference
364 | 
365 | - \`create()\`: Creates a resource
366 | - \`read()\`: Reads a resource
367 |     `,
368 |       "utf-8",
369 |     );
370 | 
371 |     await fs.writeFile(path.join(testDir, "README.md"), "# Test", "utf-8");
372 |     await fs.writeFile(path.join(testDir, "package.json"), "{}", "utf-8");
373 | 
374 |     // Act
375 |     await analyzeRepository({ path: testDir, depth: "standard" });
376 |     await new Promise((resolve) => setTimeout(resolve, 100));
377 | 
378 |     // Verify coverage
379 |     const kg = await getKnowledgeGraph();
380 |     const documentsEdges = (await kg.getAllEdges()).filter(
381 |       (e) => e.type === "documents",
382 |     );
383 | 
384 |     expect(documentsEdges.length).toBeGreaterThan(0);
385 | 
386 |     const coverage = documentsEdges[0].properties.coverage;
387 |     expect(["partial", "complete", "comprehensive"]).toContain(coverage);
388 |     // 2/3 ≈ 67% of functions documented should map to "complete"
389 |     expect(coverage).toBe("complete");
390 |   });
391 | });
392 | 
```

--------------------------------------------------------------------------------
/src/utils/artifact-detector.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Agent Artifact Detection System
  3 |  *
  4 |  * Detects, classifies, and provides recommendations for artifacts
  5 |  * generated by AI coding agents during workflows.
  6 |  */
  7 | 
  8 | import { promises as fs } from "fs";
  9 | import path from "path";
 10 | import { globby } from "globby";
 11 | 
 12 | export interface AgentArtifact {
 13 |   path: string;
 14 |   type:
 15 |     | "file"
 16 |     | "directory"
 17 |     | "inline-comment"
 18 |     | "block-comment"
 19 |     | "code-block";
 20 |   category: "planning" | "debug" | "temporary" | "state" | "documentation";
 21 |   confidence: number; // 0-1 how sure we are this is agent-generated
 22 |   recommendation: "delete" | "review" | "keep" | "archive";
 23 |   context?: string; // surrounding content for review
 24 |   detectedBy: string; // which pattern matched
 25 | }
 26 | 
 27 | export interface ArtifactScanResult {
 28 |   scannedFiles: number;
 29 |   artifacts: AgentArtifact[];
 30 |   summary: {
 31 |     totalArtifacts: number;
 32 |     byCategory: Record<string, number>;
 33 |     byRecommendation: Record<string, number>;
 34 |   };
 35 | }
 36 | 
 37 | export interface ArtifactCleanupConfig {
 38 |   // Detection settings
 39 |   patterns: {
 40 |     files: string[]; // glob patterns for artifact files
 41 |     directories: string[]; // agent state directories
 42 |     inlineMarkers: string[]; // comment markers to detect
 43 |     blockPatterns: RegExp[]; // multi-line patterns
 44 |   };
 45 | 
 46 |   // Behavior settings
 47 |   autoDeleteThreshold: number; // confidence threshold for auto-delete
 48 |   preserveGitIgnored: boolean; // skip .gitignored artifacts
 49 |   archiveBeforeDelete: boolean; // safety backup
 50 | 
 51 |   // Exclusions
 52 |   excludePaths: string[];
 53 |   excludePatterns: string[];
 54 | }
 55 | 
 56 | /**
 57 |  * Default configuration for artifact detection
 58 |  */
 59 | export const DEFAULT_CONFIG: ArtifactCleanupConfig = {
 60 |   patterns: {
 61 |     files: [
 62 |       "TODO.md",
 63 |       "TODOS.md",
 64 |       "PLAN.md",
 65 |       "PLANNING.md",
 66 |       "NOTES.md",
 67 |       "SCRATCH.md",
 68 |       "AGENT-*.md",
 69 |       "*.agent.md",
 70 |     ],
 71 |     directories: [
 72 |       ".claude",
 73 |       ".cursor",
 74 |       ".aider",
 75 |       ".copilot",
 76 |       ".codeium",
 77 |       ".agent-workspace",
 78 |     ],
 79 |     inlineMarkers: [
 80 |       "// @agent-temp",
 81 |       "// TODO(agent):",
 82 |       "# AGENT-NOTE:",
 83 |       "<!-- agent:ephemeral -->",
 84 |       "// FIXME(claude):",
 85 |       "// FIXME(cursor):",
 86 |       "// FIXME(copilot):",
 87 |       "// TODO(claude):",
 88 |       "// TODO(cursor):",
 89 |       "// TODO(copilot):",
 90 |     ],
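    // Multi-line regions delimited by start/end markers, e.g.:
    //   /* AGENT-START ... AGENT-END */
    //   <!-- agent:ephemeral --> ... <!-- /agent:ephemeral -->
    //   // @agent-temp-start ... // @agent-temp-end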
 91 |     blockPatterns: [
 92 |       /\/\*\s*AGENT[-_](?:START|BEGIN)[\s\S]*?AGENT[-_](?:END|FINISH)\s*\*\//gi,
 93 |       /<!--\s*agent:ephemeral\s*-->[\s\S]*?<!--\s*\/agent:ephemeral\s*-->/gi,
 94 |       /\/\/\s*@agent-temp-start[\s\S]*?\/\/\s*@agent-temp-end/gi,
 95 |     ],
 96 |   },
 97 |   autoDeleteThreshold: 0.9,
 98 |   preserveGitIgnored: true,
 99 |   archiveBeforeDelete: true,
100 |   excludePaths: ["node_modules", ".git", "dist", "build", ".documcp"],
101 |   excludePatterns: ["*.lock", "package-lock.json", "yarn.lock"],
102 | };
103 | 
104 | /**
105 |  * Main Artifact Detector class
106 |  */
107 | export class ArtifactDetector {
108 |   private config: ArtifactCleanupConfig;
109 |   private projectPath: string;
110 | 
111 |   constructor(projectPath: string, config?: Partial<ArtifactCleanupConfig>) {
112 |     this.projectPath = projectPath;
113 |     this.config = {
114 |       ...DEFAULT_CONFIG,
115 |       ...config,
116 |       patterns: {
117 |         ...DEFAULT_CONFIG.patterns,
118 |         ...(config?.patterns || {}),
119 |       },
120 |     };
121 |   }
122 | 
123 |   /**
124 |    * Scan for agent artifacts in the project
125 |    */
126 |   async scan(): Promise<ArtifactScanResult> {
127 |     const artifacts: AgentArtifact[] = [];
128 |     let scannedFiles = 0;
129 | 
130 |     // Detect file-based artifacts
131 |     const fileArtifacts = await this.detectFileArtifacts();
132 |     artifacts.push(...fileArtifacts);
133 | 
134 |     // Detect directory artifacts
135 |     const dirArtifacts = await this.detectDirectoryArtifacts();
136 |     artifacts.push(...dirArtifacts);
137 | 
138 |     // Detect inline and block artifacts in files
139 |     const inlineArtifacts = await this.detectInlineArtifacts();
140 |     artifacts.push(...inlineArtifacts.artifacts);
141 |     scannedFiles = inlineArtifacts.scannedFiles;
142 | 
143 |     // Generate summary
144 |     const summary = this.generateSummary(artifacts);
145 | 
146 |     return {
147 |       scannedFiles,
148 |       artifacts,
149 |       summary,
150 |     };
151 |   }
152 | 
153 |   /**
154 |    * Detect file-based artifacts (e.g., TODO.md, PLAN.md)
155 |    */
156 |   private async detectFileArtifacts(): Promise<AgentArtifact[]> {
157 |     const artifacts: AgentArtifact[] = [];
158 | 
159 |     // Build glob patterns with exclusions
160 |     const patterns = this.config.patterns.files.map((pattern) =>
161 |       path.join(this.projectPath, "**", pattern),
162 |     );
163 | 
164 |     try {
165 |       const files = await globby(patterns, {
166 |         ignore: this.config.excludePaths.map((p) =>
167 |           path.join(this.projectPath, p, "**"),
168 |         ),
169 |         absolute: true,
170 |         onlyFiles: true,
171 |       });
172 | 
173 |       for (const filePath of files) {
174 |         const relativePath = path.relative(this.projectPath, filePath);
175 |         const fileName = path.basename(filePath);
176 | 
177 |         // Determine category and confidence based on file name
178 |         const { category, confidence, detectedBy } =
179 |           this.categorizeFile(fileName);
180 | 
181 |         artifacts.push({
182 |           path: relativePath,
183 |           type: "file",
184 |           category,
185 |           confidence,
186 |           recommendation: this.getRecommendation(confidence, category),
187 |           detectedBy,
188 |         });
189 |       }
190 |     } catch (error) {
191 |       // Log and continue on globby errors (e.g., permission denied)
192 |       console.error(`Error scanning files: ${error}`);
193 |     }
194 | 
195 |     return artifacts;
196 |   }
197 | 
198 |   /**
199 |    * Detect directory-based artifacts (e.g., .claude/, .cursor/)
200 |    */
201 |   private async detectDirectoryArtifacts(): Promise<AgentArtifact[]> {
202 |     const artifacts: AgentArtifact[] = [];
203 | 
204 |     for (const dirName of this.config.patterns.directories) {
205 |       const dirPath = path.join(this.projectPath, dirName);
206 | 
207 |       try {
208 |         const stats = await fs.stat(dirPath);
209 |         if (stats.isDirectory()) {
210 |           artifacts.push({
211 |             path: dirName,
212 |             type: "directory",
213 |             category: "state",
214 |             confidence: 0.95,
215 |             recommendation: "archive",
216 |             detectedBy: `Directory pattern: ${dirName}`,
217 |           });
218 |         }
219 |       } catch {
220 |         // Directory doesn't exist, skip
221 |       }
222 |     }
223 | 
224 |     return artifacts;
225 |   }
226 | 
227 |   /**
228 |    * Detect inline comments and block artifacts in code files
229 |    */
230 |   private async detectInlineArtifacts(): Promise<{
231 |     artifacts: AgentArtifact[];
232 |     scannedFiles: number;
233 |   }> {
234 |     const artifacts: AgentArtifact[] = [];
235 |     let scannedFiles = 0;
236 | 
237 |     // Scan common code file types
238 |     const patterns = [
239 |       "**/*.ts",
240 |       "**/*.js",
241 |       "**/*.tsx",
242 |       "**/*.jsx",
243 |       "**/*.py",
244 |       "**/*.rb",
245 |       "**/*.go",
246 |       "**/*.rs",
247 |       "**/*.java",
248 |       "**/*.md",
249 |       "**/*.html",
250 |       "**/*.css",
251 |       "**/*.scss",
252 |     ].map((p) => path.join(this.projectPath, p));
253 | 
254 |     try {
255 |       const files = await globby(patterns, {
256 |         ignore: this.config.excludePaths.map((p) =>
257 |           path.join(this.projectPath, p, "**"),
258 |         ),
259 |         absolute: true,
260 |         onlyFiles: true,
261 |       });
262 | 
263 |       for (const filePath of files) {
264 |         scannedFiles++;
265 |         const content = await fs.readFile(filePath, "utf-8");
266 |         const relativePath = path.relative(this.projectPath, filePath);
267 | 
268 |         // Check for inline markers
269 |         const inlineArtifacts = this.detectInlineMarkers(content, relativePath);
270 |         artifacts.push(...inlineArtifacts);
271 | 
272 |         // Check for block patterns
273 |         const blockArtifacts = this.detectBlockPatterns(content, relativePath);
274 |         artifacts.push(...blockArtifacts);
275 |       }
276 |     } catch (error) {
277 |       console.error(`Error scanning inline artifacts: ${error}`);
278 |     }
279 | 
280 |     return { artifacts, scannedFiles };
281 |   }
282 | 
283 |   /**
284 |    * Detect inline comment markers
285 |    */
286 |   private detectInlineMarkers(
287 |     content: string,
288 |     filePath: string,
289 |   ): AgentArtifact[] {
290 |     const artifacts: AgentArtifact[] = [];
291 |     const lines = content.split("\n");
292 | 
293 |     for (let i = 0; i < lines.length; i++) {
294 |       const line = lines[i];
295 | 
296 |       for (const marker of this.config.patterns.inlineMarkers) {
297 |         if (line.includes(marker)) {
298 |           // Get context (2 lines before and after)
299 |           const contextStart = Math.max(0, i - 2);
300 |           const contextEnd = Math.min(lines.length, i + 3);
301 |           const context = lines.slice(contextStart, contextEnd).join("\n");
302 | 
303 |           artifacts.push({
304 |             path: `${filePath}:${i + 1}`,
305 |             type: "inline-comment",
306 |             category: this.categorizeMarker(marker),
307 |             confidence: 0.85,
308 |             recommendation: "review",
309 |             context,
310 |             detectedBy: `Inline marker: ${marker}`,
311 |           });
312 |         }
313 |       }
314 |     }
315 | 
316 |     return artifacts;
317 |   }
318 | 
319 |   /**
320 |    * Detect block patterns
321 |    */
322 |   private detectBlockPatterns(
323 |     content: string,
324 |     filePath: string,
325 |   ): AgentArtifact[] {
326 |     const artifacts: AgentArtifact[] = [];
327 | 
328 |     for (const pattern of this.config.patterns.blockPatterns) {
329 |       const matches = content.matchAll(pattern);
330 | 
331 |       for (const match of matches) {
332 |         if (match[0]) {
333 |           // Get first 200 chars as context
334 |           const context = match[0].substring(0, 200);
335 | 
336 |           artifacts.push({
337 |             path: filePath,
338 |             type: "block-comment",
339 |             category: "temporary",
340 |             confidence: 0.9,
341 |             recommendation: "delete",
342 |             context,
343 |             detectedBy: `Block pattern: ${pattern.source.substring(0, 50)}...`,
344 |           });
345 |         }
346 |       }
347 |     }
348 | 
349 |     return artifacts;
350 |   }
351 | 
352 |   /**
353 |    * Categorize a file based on its name
354 |    */
355 |   private categorizeFile(fileName: string): {
356 |     category: AgentArtifact["category"];
357 |     confidence: number;
358 |     detectedBy: string;
359 |   } {
360 |     const upperName = fileName.toUpperCase();
361 | 
362 |     if (
363 |       upperName.includes("TODO") ||
364 |       upperName.includes("PLAN") ||
365 |       upperName.includes("SCRATCH")
366 |     ) {
367 |       return {
368 |         category: "planning",
369 |         confidence: 0.95,
370 |         detectedBy: `File name pattern: ${fileName}`,
371 |       };
372 |     }
373 | 
374 |     if (upperName.includes("NOTES") || upperName.includes("AGENT")) {
375 |       return {
376 |         category: "documentation",
377 |         confidence: 0.9,
378 |         detectedBy: `File name pattern: ${fileName}`,
379 |       };
380 |     }
381 | 
382 |     return {
383 |       category: "temporary",
384 |       confidence: 0.8,
385 |       detectedBy: `File name pattern: ${fileName}`,
386 |     };
387 |   }
388 | 
389 |   /**
390 |    * Categorize a marker
391 |    */
392 |   private categorizeMarker(marker: string): AgentArtifact["category"] {
393 |     if (marker.includes("TODO") || marker.includes("FIXME")) {
394 |       return "planning";
395 |     }
396 | 
397 |     if (marker.includes("temp") || marker.includes("ephemeral")) {
398 |       return "temporary";
399 |     }
400 | 
401 |     if (marker.includes("NOTE")) {
402 |       return "documentation";
403 |     }
404 | 
405 |     return "debug";
406 |   }
407 | 
408 |   /**
409 |    * Get recommendation based on confidence and category
410 |    */
411 |   private getRecommendation(
412 |     confidence: number,
413 |     category: AgentArtifact["category"],
414 |   ): AgentArtifact["recommendation"] {
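    // Decision table (a descriptive summary of the logic below):
    //   confidence >= autoDeleteThreshold, category temporary/debug -> delete
    //   confidence >= autoDeleteThreshold, other categories         -> archive
    //   confidence >= 0.7                                           -> review
    //   otherwise                                                   -> keep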
415 |     if (confidence >= this.config.autoDeleteThreshold) {
416 |       if (category === "temporary" || category === "debug") {
417 |         return "delete";
418 |       }
419 |       return "archive";
420 |     }
421 | 
422 |     if (confidence >= 0.7) {
423 |       return "review";
424 |     }
425 | 
426 |     return "keep";
427 |   }
428 | 
429 |   /**
430 |    * Generate summary statistics
431 |    */
432 |   private generateSummary(
433 |     artifacts: AgentArtifact[],
434 |   ): ArtifactScanResult["summary"] {
435 |     const byCategory: Record<string, number> = {};
436 |     const byRecommendation: Record<string, number> = {};
437 | 
438 |     for (const artifact of artifacts) {
439 |       byCategory[artifact.category] = (byCategory[artifact.category] || 0) + 1;
440 |       byRecommendation[artifact.recommendation] =
441 |         (byRecommendation[artifact.recommendation] || 0) + 1;
442 |     }
443 | 
444 |     return {
445 |       totalArtifacts: artifacts.length,
446 |       byCategory,
447 |       byRecommendation,
448 |     };
449 |   }
450 | }
451 | 
452 | /**
453 |  * Detect agent artifacts in a project
454 |  */
455 | export async function detectArtifacts(
456 |   projectPath: string,
457 |   config?: Partial<ArtifactCleanupConfig>,
458 | ): Promise<ArtifactScanResult> {
459 |   const detector = new ArtifactDetector(projectPath, config);
460 |   return detector.scan();
461 | }
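
// Example usage (a minimal sketch; the project path and override value are hypothetical):
//
//   const result = await detectArtifacts("/path/to/project", {
//     autoDeleteThreshold: 0.95,
//   });
//   console.log(
//     `Found ${result.summary.totalArtifacts} artifacts in ${result.scannedFiles} files`,
//   );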
462 | 
```

--------------------------------------------------------------------------------
/tests/memory/knowledge-graph.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Basic unit tests for Knowledge Graph System
  3 |  * Tests basic instantiation and core functionality
  4 |  * Part of Issue #54 - Core Memory System Unit Tests
  5 |  */
  6 | 
  7 | import { promises as fs } from "fs";
  8 | import path from "path";
  9 | import os from "os";
 10 | import { MemoryManager } from "../../src/memory/manager.js";
 11 | import {
 12 |   KnowledgeGraph,
 13 |   GraphNode,
 14 |   GraphEdge,
 15 | } from "../../src/memory/knowledge-graph.js";
 16 | 
 17 | describe("KnowledgeGraph", () => {
 18 |   let tempDir: string;
 19 |   let memoryManager: MemoryManager;
 20 |   let graph: KnowledgeGraph;
 21 | 
 22 |   beforeEach(async () => {
 23 |     // Create unique temp directory for each test
 24 |     tempDir = path.join(
 25 |       os.tmpdir(),
 26 |       `memory-graph-test-${Date.now()}-${Math.random()
 27 |         .toString(36)
 28 |         .slice(2, 11)}`,
 29 |     );
 30 |     await fs.mkdir(tempDir, { recursive: true });
 31 | 
 32 |     // Create memory manager for knowledge graph
 33 |     memoryManager = new MemoryManager(tempDir);
 34 |     await memoryManager.initialize();
 35 | 
 36 |     graph = new KnowledgeGraph(memoryManager);
 37 |     await graph.initialize();
 38 |   });
 39 | 
 40 |   afterEach(async () => {
 41 |     // Cleanup temp directory
 42 |     try {
 43 |       await fs.rm(tempDir, { recursive: true, force: true });
 44 |     } catch {
 45 |       // Ignore cleanup errors
 46 |     }
 47 |   });
 48 | 
 49 |   describe("Basic Graph Operations", () => {
 50 |     test("should create knowledge graph instance", () => {
 51 |       expect(graph).toBeDefined();
 52 |       expect(graph).toBeInstanceOf(KnowledgeGraph);
 53 |     });
 54 | 
 55 |     test("should add nodes to the graph", () => {
 56 |       const projectNode: Omit<GraphNode, "lastUpdated"> = {
 57 |         id: "project:test-project",
 58 |         type: "project",
 59 |         label: "Test Project",
 60 |         properties: {
 61 |           language: "typescript",
 62 |           framework: "react",
 63 |         },
 64 |         weight: 1.0,
 65 |       };
 66 | 
 67 |       const addedNode = graph.addNode(projectNode);
 68 |       expect(addedNode).toBeDefined();
 69 |       expect(addedNode.id).toBe("project:test-project");
 70 |       expect(addedNode.type).toBe("project");
 71 |       expect(addedNode.lastUpdated).toBeDefined();
 72 |     });
 73 | 
 74 |     test("should add edges to the graph", () => {
 75 |       // First add nodes
 76 |       const projectNode = graph.addNode({
 77 |         id: "project:web-app",
 78 |         type: "project",
 79 |         label: "Web App",
 80 |         properties: { language: "typescript" },
 81 |         weight: 1.0,
 82 |       });
 83 | 
 84 |       const techNode = graph.addNode({
 85 |         id: "tech:react",
 86 |         type: "technology",
 87 |         label: "React",
 88 |         properties: { category: "framework" },
 89 |         weight: 1.0,
 90 |       });
 91 | 
 92 |       // Add edge
 93 |       const edge: Omit<GraphEdge, "id" | "lastUpdated"> = {
 94 |         source: projectNode.id,
 95 |         target: techNode.id,
 96 |         type: "uses",
 97 |         weight: 1.0,
 98 |         confidence: 0.9,
 99 |         properties: { importance: "high" },
100 |       };
101 | 
102 |       const addedEdge = graph.addEdge(edge);
103 |       expect(addedEdge).toBeDefined();
104 |       expect(addedEdge.source).toBe(projectNode.id);
105 |       expect(addedEdge.target).toBe(techNode.id);
106 |       expect(addedEdge.id).toBeDefined();
107 |     });
108 | 
109 |     test("should get all nodes", async () => {
110 |       // Add some nodes
111 |       graph.addNode({
112 |         id: "project:test1",
113 |         type: "project",
114 |         label: "Test 1",
115 |         properties: {},
116 |         weight: 1.0,
117 |       });
118 | 
119 |       graph.addNode({
120 |         id: "tech:vue",
121 |         type: "technology",
122 |         label: "Vue",
123 |         properties: {},
124 |         weight: 1.0,
125 |       });
126 | 
127 |       const nodes = await graph.getAllNodes();
128 |       expect(Array.isArray(nodes)).toBe(true);
129 |       expect(nodes.length).toBe(2);
130 |     });
131 | 
132 |     test("should get all edges", async () => {
133 |       // Add nodes and edges
134 |       const node1 = graph.addNode({
135 |         id: "project:test2",
136 |         type: "project",
137 |         label: "Test 2",
138 |         properties: {},
139 |         weight: 1.0,
140 |       });
141 | 
142 |       const node2 = graph.addNode({
143 |         id: "tech:angular",
144 |         type: "technology",
145 |         label: "Angular",
146 |         properties: {},
147 |         weight: 1.0,
148 |       });
149 | 
150 |       graph.addEdge({
151 |         source: node1.id,
152 |         target: node2.id,
153 |         type: "uses",
154 |         weight: 1.0,
155 |         confidence: 0.8,
156 |         properties: {},
157 |       });
158 | 
159 |       const edges = await graph.getAllEdges();
160 |       expect(Array.isArray(edges)).toBe(true);
161 |       expect(edges.length).toBe(1);
162 |     });
163 |   });
164 | 
165 |   describe("Graph Queries", () => {
166 |     test("should query nodes by type", () => {
167 |       // Add multiple nodes of different types
168 |       graph.addNode({
169 |         id: "project:project-a",
170 |         type: "project",
171 |         label: "Project A",
172 |         properties: {},
173 |         weight: 1.0,
174 |       });
175 | 
176 |       graph.addNode({
177 |         id: "project:project-b",
178 |         type: "project",
179 |         label: "Project B",
180 |         properties: {},
181 |         weight: 1.0,
182 |       });
183 | 
184 |       graph.addNode({
185 |         id: "tech:vue",
186 |         type: "technology",
187 |         label: "Vue",
188 |         properties: { category: "framework" },
189 |         weight: 1.0,
190 |       });
191 | 
192 |       const results = graph.query({
193 |         nodeTypes: ["project"],
194 |       });
195 | 
196 |       expect(results).toBeDefined();
197 |       expect(Array.isArray(results.nodes)).toBe(true);
198 |       expect(results.nodes.length).toBe(2);
199 |       expect(results.nodes.every((node) => node.type === "project")).toBe(true);
200 |     });
201 | 
202 |     test("should find connections for a node", async () => {
203 |       // Add nodes and create connections
204 |       const projectNode = graph.addNode({
205 |         id: "project:connected-test",
206 |         type: "project",
207 |         label: "Connected Test",
208 |         properties: {},
209 |         weight: 1.0,
210 |       });
211 | 
212 |       const techNode = graph.addNode({
213 |         id: "tech:express",
214 |         type: "technology",
215 |         label: "Express",
216 |         properties: {},
217 |         weight: 1.0,
218 |       });
219 | 
220 |       graph.addEdge({
221 |         source: projectNode.id,
222 |         target: techNode.id,
223 |         type: "uses",
224 |         weight: 1.0,
225 |         confidence: 0.9,
226 |         properties: {},
227 |       });
228 | 
229 |       const connections = await graph.getConnections(projectNode.id);
230 |       expect(Array.isArray(connections)).toBe(true);
231 |       expect(connections.length).toBe(1);
232 |       expect(connections[0]).toBe(techNode.id);
233 |     });
234 | 
235 |     test("should find paths between nodes", () => {
236 |       // Add nodes and create a path
237 |       const projectNode = graph.addNode({
238 |         id: "project:path-test",
239 |         type: "project",
240 |         label: "Path Test Project",
241 |         properties: {},
242 |         weight: 1.0,
243 |       });
244 | 
245 |       const techNode = graph.addNode({
246 |         id: "tech:nodejs",
247 |         type: "technology",
248 |         label: "Node.js",
249 |         properties: {},
250 |         weight: 1.0,
251 |       });
252 | 
253 |       graph.addEdge({
254 |         source: projectNode.id,
255 |         target: techNode.id,
256 |         type: "uses",
257 |         weight: 1.0,
258 |         confidence: 0.9,
259 |         properties: {},
260 |       });
261 | 
262 |       const path = graph.findPath(projectNode.id, techNode.id);
263 |       expect(path).toBeDefined();
264 |       expect(path?.nodes.length).toBe(2);
265 |       expect(path?.edges.length).toBe(1);
266 |     });
267 |   });
268 | 
269 |   describe("Graph Analysis", () => {
270 |     test("should build from memory entries", async () => {
271 |       // Add some test memory entries first
272 |       await memoryManager.remember(
273 |         "analysis",
274 |         {
275 |           language: { primary: "python" },
276 |           framework: { name: "django" },
277 |         },
278 |         {
279 |           projectId: "analysis-project",
280 |         },
281 |       );
282 | 
283 |       await memoryManager.remember(
284 |         "recommendation",
285 |         {
286 |           recommended: "mkdocs",
287 |           confidence: 0.9,
288 |         },
289 |         {
290 |           projectId: "analysis-project",
291 |         },
292 |       );
293 | 
294 |       // Build graph from memories
295 |       await graph.buildFromMemories();
296 | 
297 |       const nodes = await graph.getAllNodes();
298 |       // The buildFromMemories method might be implemented differently
299 |       // Just verify it doesn't throw and that getAllNodes() returns an array
300 |       expect(Array.isArray(nodes)).toBe(true);
301 | 
302 |       // The graph might start empty, which is okay for this basic test
303 |       if (nodes.length > 0) {
304 |         // Optionally check node types if any were created
305 |         const nodeTypes = [...new Set(nodes.map((n) => n.type))];
306 |         expect(nodeTypes.length).toBeGreaterThan(0);
307 |       }
308 |     });
309 | 
310 |     test("should generate graph-based recommendations", async () => {
311 |       // Add some memory data first
312 |       await memoryManager.remember(
313 |         "analysis",
314 |         {
315 |           language: { primary: "javascript" },
316 |           framework: { name: "react" },
317 |         },
318 |         {
319 |           projectId: "rec-test-project",
320 |         },
321 |       );
322 | 
323 |       await graph.buildFromMemories();
324 | 
325 |       const projectFeatures = {
326 |         language: "javascript",
327 |         framework: "react",
328 |       };
329 | 
330 |       const recommendations = await graph.getGraphBasedRecommendation(
331 |         projectFeatures,
332 |         ["docusaurus", "gatsby"],
333 |       );
334 | 
335 |       expect(Array.isArray(recommendations)).toBe(true);
336 |       // Even if no recommendations found, should return empty array
337 |     });
338 | 
339 |     test("should provide graph statistics", async () => {
340 |       // Add some nodes
341 |       graph.addNode({
342 |         id: "project:stats-test",
343 |         type: "project",
344 |         label: "Stats Test",
345 |         properties: {},
346 |         weight: 1.0,
347 |       });
348 | 
349 |       graph.addNode({
350 |         id: "tech:webpack",
351 |         type: "technology",
352 |         label: "Webpack",
353 |         properties: {},
354 |         weight: 1.0,
355 |       });
356 | 
357 |       const stats = await graph.getStatistics();
358 |       expect(stats).toBeDefined();
359 |       expect(typeof stats.nodeCount).toBe("number");
360 |       expect(typeof stats.edgeCount).toBe("number");
361 |       expect(typeof stats.nodesByType).toBe("object");
362 |       expect(typeof stats.averageConnectivity).toBe("number");
363 |       expect(Array.isArray(stats.mostConnectedNodes)).toBe(true);
364 |     });
365 |   });
366 | 
367 |   describe("Error Handling", () => {
368 |     test("should handle removing non-existent nodes", async () => {
369 |       const removed = await graph.removeNode("non-existent-node");
370 |       expect(removed).toBe(false);
371 |     });
372 | 
373 |     test("should handle concurrent graph operations", () => {
374 |       // Create multiple nodes concurrently
375 |       const nodes = Array.from({ length: 10 }, (_, i) =>
376 |         graph.addNode({
377 |           id: `project:concurrent-${i}`,
378 |           type: "project",
379 |           label: `Concurrent Project ${i}`,
380 |           properties: { index: i },
381 |           weight: 1.0,
382 |         }),
383 |       );
384 | 
385 |       expect(nodes).toHaveLength(10);
386 |       expect(nodes.every((node) => typeof node.id === "string")).toBe(true);
387 |     });
388 | 
389 |     test("should handle invalid query parameters", () => {
390 |       const results = graph.query({
391 |         nodeTypes: ["non-existent-type"],
392 |       });
393 | 
394 |       expect(results).toBeDefined();
395 |       expect(Array.isArray(results.nodes)).toBe(true);
396 |       expect(results.nodes.length).toBe(0);
397 |     });
398 | 
399 |     test("should handle empty graph operations", async () => {
400 |       // Test operations on empty graph
401 |       const path = graph.findPath("non-existent-1", "non-existent-2");
402 |       expect(path).toBeNull();
403 | 
404 |       const connections = await graph.getConnections("non-existent-node");
405 |       expect(Array.isArray(connections)).toBe(true);
406 |       expect(connections.length).toBe(0);
407 |     });
408 |   });
409 | 
410 |   describe("Persistence and Memory Integration", () => {
411 |     test("should save and load from memory", async () => {
412 |       // Add some data to the graph
413 |       graph.addNode({
414 |         id: "project:persistence-test",
415 |         type: "project",
416 |         label: "Persistence Test",
417 |         properties: {},
418 |         weight: 1.0,
419 |       });
420 | 
421 |       // Save to memory
422 |       await graph.saveToMemory();
423 | 
424 |       // Create new graph and load
425 |       const newGraph = new KnowledgeGraph(memoryManager);
426 |       await newGraph.loadFromMemory();
427 | 
428 |       const nodes = await newGraph.getAllNodes();
429 |       expect(nodes.length).toBeGreaterThanOrEqual(0);
430 |     });
431 | 
432 |     test("should handle empty graph statistics", async () => {
433 |       const stats = await graph.getStatistics();
434 |       expect(stats).toBeDefined();
435 |       expect(typeof stats.nodeCount).toBe("number");
436 |       expect(typeof stats.edgeCount).toBe("number");
437 |       expect(stats.nodeCount).toBe(0); // Empty graph initially
438 |       expect(stats.edgeCount).toBe(0);
439 |     });
440 |   });
441 | });
442 | 
```

--------------------------------------------------------------------------------
/docs/how-to/github-pages-deployment.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | documcp:
  3 |   last_updated: "2025-11-20T00:46:21.951Z"
  4 |   last_validated: "2025-12-09T19:41:38.582Z"
  5 |   auto_updated: false
  6 |   update_frequency: monthly
  7 |   validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
  8 | ---
  9 | 
 10 | # How to Deploy to GitHub Pages
 11 | 
 12 | This guide shows you how to deploy your documentation to GitHub Pages using DocuMCP's automated workflows. DocuMCP uses a dual static-site-generator approach: Docusaurus for development and Jekyll for GitHub Pages hosting.
 13 | 
 14 | ## Architecture Overview
 15 | 
 16 | DocuMCP employs a **dual SSG strategy**:
 17 | 
 18 | - **Docusaurus**: Primary documentation system for development and rich content
 19 | - **Jekyll**: GitHub Pages deployment for reliable hosting
 20 | - **Docker**: Alternative testing and deployment method
 21 | 
 22 | ## Quick Deployment
 23 | 
 24 | For immediate deployment:
 25 | 
 26 | ```bash
 27 | # Prompt DocuMCP:
 28 | "deploy my documentation to GitHub Pages"
 29 | ```
 30 | 
 31 | ## Prerequisites
 32 | 
 33 | - Repository with documentation content
 34 | - GitHub account with repository access
 35 | - GitHub Pages enabled in repository settings
 36 | - Node.js 20.0.0+ for Docusaurus development
 37 | 
 38 | ## Deployment Methods
 39 | 
 40 | ### Method 1: Automated with DocuMCP (Recommended)
 41 | 
 42 | Use DocuMCP's intelligent deployment:
 43 | 
 44 | ```bash
 45 | # Complete workflow:
 46 | "analyze my repository, recommend SSG, and deploy to GitHub Pages"
 47 | ```
 48 | 
 49 | This will (see the sketch after this list):
 50 | 
 51 | 1. Analyze your project structure
 52 | 2. Set up Docusaurus for development
 53 | 3. Configure Jekyll for GitHub Pages deployment
 54 | 4. Create GitHub Actions workflow
 55 | 5. Deploy to Pages
 56 | 
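Under the hood, these steps map onto DocuMCP's MCP tools. A minimal sketch of the equivalent tool-call sequence, assuming a generic MCP client with a hypothetical `callTool` helper — only tool names documented elsewhere in this repo are used, and the argument shapes are illustrative, not the exact tool schemas:

```typescript
// Hypothetical orchestration sketch; `callTool` is an assumed MCP client helper.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

async function setUpAndDeployDocs(callTool: CallTool): Promise<void> {
  const analysis = await callTool("analyze_repository", { path: "./" }); // step 1
  const rec = await callTool("recommend_ssg", { analysis }); // steps 2-3
  await callTool("generate_config", { ssg: rec.primary }); // config files
  await callTool("setup_structure", { ssg: rec.primary }); // docs skeleton
  // Step 5 (workflow creation + deploy) is handled by the generated
  // GitHub Actions workflow shown later in this guide.
}
```
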
 57 | ### Method 2: Current DocuMCP Setup
 58 | 
 59 | DocuMCP currently uses the following deployment workflow:
 60 | 
 61 | #### GitHub Actions Workflow
 62 | 
 63 | ```yaml
 64 | name: Deploy Jekyll to GitHub Pages
 65 | 
 66 | on:
 67 |   push:
 68 |     branches: [main]
 69 |   workflow_dispatch:
 70 | 
 71 | permissions:
 72 |   contents: read
 73 |   pages: write
 74 |   id-token: write
 75 | 
 76 | jobs:
 77 |   build:
 78 |     runs-on: ubuntu-latest
 79 |     steps:
 80 |       - name: Checkout
 81 |         uses: actions/checkout@v4
 82 |       - name: Setup Ruby
 83 |         uses: ruby/setup-ruby@v1
 84 |         with:
 85 |           ruby-version: "3.1"
 86 |           bundler-cache: true
 87 |       - name: Build with Jekyll
 88 |         run: bundle exec jekyll build
 89 |         env:
 90 |           JEKYLL_ENV: production
 91 |       - name: Setup Pages
 92 |         uses: actions/configure-pages@v5
 93 | 
 94 |       - name: Upload artifact
 95 |         uses: actions/upload-pages-artifact@v4
 96 |         with:
 97 |           path: "./_site"
 98 | 
 99 |   deploy:
100 |     environment:
101 |       name: github-pages
102 |       url: ${{ steps.deployment.outputs.page_url }}
103 |     runs-on: ubuntu-latest
104 |     needs: build
105 |     permissions:
106 |       contents: read
107 |       pages: write
108 |       id-token: write
109 |     steps:
110 |       - name: Deploy to GitHub Pages
111 |         id: deployment
112 |         uses: actions/deploy-pages@v4
113 | ```
114 | 
115 | #### Development vs Production
116 | 
117 | - **Development**: Use Docusaurus (`cd docs && npm start`)
118 | - **Production**: Jekyll builds and deploys to GitHub Pages
119 | - **Testing**: Use Docker (`docker-compose -f docker-compose.docs.yml up`)
120 | 
121 | ### Method 3: Manual Configuration
122 | 
123 | If you prefer manual setup:
124 | 
125 | #### Step 1: Choose Your SSG
126 | 
127 | ```bash
128 | # Get recommendation first:
129 | "recommend static site generator for my project"
130 | ```
131 | 
132 | #### Step 2: Generate Config
133 | 
134 | ```bash
135 | # For example, with Hugo:
136 | "generate Hugo configuration for GitHub Pages deployment"
137 | ```
138 | 
139 | #### Step 3: Deploy
140 | 
141 | ```bash
142 | "set up GitHub Pages deployment workflow for Hugo"
143 | ```
144 | 
145 | ## GitHub Actions Workflow
146 | 
147 | DocuMCP generates optimized workflows for each SSG:
148 | 
149 | ### Docusaurus Workflow
150 | 
151 | ```yaml
152 | name: Deploy Docusaurus
153 | 
154 | on:
155 |   push:
156 |     branches: [main]
157 |     paths: ["docs/**", "docusaurus.config.js"]
158 | 
159 | permissions:
160 |   contents: read
161 |   pages: write
162 |   id-token: write
163 | 
164 | jobs:
165 |   deploy:
166 |     environment:
167 |       name: github-pages
168 |       url: ${{ steps.deployment.outputs.page_url }}
169 |     runs-on: ubuntu-latest
170 | 
171 |     steps:
172 |       - name: Checkout
173 |         uses: actions/checkout@v4
174 | 
175 |       - name: Setup Node.js
176 |         uses: actions/setup-node@v4
177 |         with:
178 |           node-version: "20"
179 |           cache: "npm"
180 | 
181 |       - name: Install dependencies
182 |         run: npm ci
183 | 
184 |       - name: Build
185 |         run: npm run build
186 | 
187 |       - name: Setup Pages
188 |         uses: actions/configure-pages@v5
189 | 
190 |       - name: Upload artifact
191 |         uses: actions/upload-pages-artifact@v4
192 |         with:
193 |           path: "./build"
194 | 
195 |       - name: Deploy to GitHub Pages
196 |         id: deployment
197 |         uses: actions/deploy-pages@v4
198 | ```
199 | 
200 | ### Hugo Workflow
201 | 
202 | ```yaml
203 | name: Deploy Hugo
204 | 
205 | on:
206 |   push:
207 |     branches: [main]
208 |     paths: ["content/**", "config.yml", "themes/**"]
209 | 
210 | permissions:
211 |   contents: read
212 |   pages: write
213 |   id-token: write
214 | 
215 | jobs:
216 |   deploy:
217 |     runs-on: ubuntu-latest
218 | 
219 |     steps:
220 |       - name: Checkout
221 |         uses: actions/checkout@v4
222 |         with:
223 |           submodules: recursive
224 | 
225 |       - name: Setup Hugo
226 |         uses: peaceiris/actions-hugo@v2
227 |         with:
228 |           hugo-version: "latest"
229 |           extended: true
230 | 
231 |       - name: Build
232 |         run: hugo --minify
233 | 
234 |       - name: Setup Pages
235 |         uses: actions/configure-pages@v5
236 | 
237 |       - name: Upload artifact
238 |         uses: actions/upload-pages-artifact@v4
239 |         with:
240 |           path: "./public"
241 | 
242 |       - name: Deploy to GitHub Pages
243 |         id: deployment
244 |         uses: actions/deploy-pages@v4
245 | ```
246 | 
247 | ### MkDocs Workflow
248 | 
249 | ```yaml
250 | name: Deploy MkDocs
251 | 
252 | on:
253 |   push:
254 |     branches: [main]
255 |     paths: ["docs/**", "mkdocs.yml"]
256 | 
257 | permissions:
258 |   contents: read
259 |   pages: write
260 |   id-token: write
261 | 
262 | jobs:
263 |   deploy:
264 |     runs-on: ubuntu-latest
265 | 
266 |     steps:
267 |       - name: Checkout
268 |         uses: actions/checkout@v4
269 | 
270 |       - name: Setup Python
271 |         uses: actions/setup-python@v4
272 |         with:
273 |           python-version: "3.x"
274 | 
275 |       - name: Install dependencies
276 |         run: |
277 |           pip install mkdocs mkdocs-material
278 | 
279 |       - name: Build
280 |         run: mkdocs build
281 | 
282 |       - name: Setup Pages
283 |         uses: actions/configure-pages@v5
284 | 
285 |       - name: Upload artifact
286 |         uses: actions/upload-pages-artifact@v4
287 |         with:
288 |           path: "./site"
289 | 
290 |       - name: Deploy to GitHub Pages
291 |         id: deployment
292 |         uses: actions/deploy-pages@v4
293 | ```
294 | 
295 | ## Repository Configuration
296 | 
297 | ### GitHub Pages Settings
298 | 
299 | 1. Navigate to repository **Settings**
300 | 2. Go to **Pages** section
301 | 3. Set **Source** to "GitHub Actions"
302 | 4. Save configuration (or script it via the REST API, as sketched below)
303 | 
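If you prefer scripting this over clicking through the UI, the same setting can be applied through GitHub's REST API. A minimal sketch using `@octokit/rest` (`createPagesSite` with `build_type: "workflow"` is GitHub's documented Pages endpoint; the owner/repo values are placeholders):

```typescript
import { Octokit } from "@octokit/rest";

// Enables GitHub Pages with "GitHub Actions" as the build source,
// equivalent to the UI steps above. The token needs admin access.
async function enablePages(): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  await octokit.rest.repos.createPagesSite({
    owner: "your-username", // placeholder
    repo: "your-repository", // placeholder
    build_type: "workflow", // "GitHub Actions" source
  });
}

enablePages().catch(console.error);
```
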
304 | ### Branch Protection
305 | 
306 | Protect your main branch:
307 | 
308 | ```yaml
309 | # .github/branch-protection.yml
310 | protection_rules:
311 |   main:
312 |     required_status_checks:
313 |       strict: true
314 |       contexts:
315 |         - "Deploy Documentation"
316 |     enforce_admins: false
317 |     required_pull_request_reviews:
318 |       required_approving_review_count: 1
319 | ```
320 | 
321 | ## Custom Domain Setup
322 | 
323 | ### Add Custom Domain
324 | 
325 | 1. Create `CNAME` file in your docs directory:
326 | 
327 | ```
328 | docs.yourdomain.com
329 | ```
330 | 
331 | 2. Configure DNS records (a propagation check is sketched after these steps):
332 | 
333 | ```
334 | CNAME docs yourusername.github.io
335 | ```
336 | 
337 | 3. Update DocuMCP deployment:
338 | 
339 | ```bash
340 | "deploy to GitHub Pages with custom domain docs.yourdomain.com"
341 | ```
342 | 
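Before relying on the custom domain, you can confirm the CNAME record has propagated. A minimal sketch using Node's built-in `dns` module (domain names are the placeholders from the steps above):

```typescript
import { resolveCname } from "dns/promises";

// Checks that the docs host resolves via CNAME to the expected
// GitHub Pages host. Rejects (caught below) if no CNAME exists yet.
async function checkCname(host: string, expected: string): Promise<void> {
  const records = await resolveCname(host);
  if (records.includes(expected)) {
    console.log(`OK: ${host} -> ${records.join(", ")}`);
  } else {
    console.warn(`Unexpected CNAME target(s): ${records.join(", ")}`);
  }
}

checkCname("docs.yourdomain.com", "yourusername.github.io").catch(console.error);
```
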
343 | ### SSL Certificate
344 | 
345 | GitHub automatically provisions SSL certificates for custom domains via Let's Encrypt; once the certificate is issued, enable "Enforce HTTPS" in the Pages settings.
346 | 
347 | Verification:
348 | 
349 | - Check `https://docs.yourdomain.com` loads correctly
350 | - Verify SSL certificate is valid
351 | - Test redirect from `http://` to `https://`
352 | 
353 | ## Environment Configuration
354 | 
355 | ### Production Optimization
356 | 
357 | DocuMCP automatically configures:
358 | 
359 | **Build optimization:**
360 | 
361 | ```yaml
362 | - name: Build with optimization
363 |   run: |
364 |     export NODE_ENV=production
365 |     npm run build
366 |   env:
367 |     CI: true
368 |     NODE_OPTIONS: --max-old-space-size=4096
369 | ```
370 | 
371 | **Caching strategy:**
372 | 
373 | ```yaml
374 | - name: Cache dependencies
375 |   uses: actions/cache@v4
376 |   with:
377 |     path: ~/.npm
378 |     key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
379 |     restore-keys: |
380 |       ${{ runner.os }}-node-
381 | ```
382 | 
383 | ### Environment Variables
384 | 
385 | Set up environment variables for production (an API-based sketch follows these steps):
386 | 
387 | 1. Go to repository **Settings**
388 | 2. Navigate to **Secrets and variables** > **Actions**
389 | 3. Add production variables:
390 |    - `HUGO_ENV=production`
391 |    - `NODE_ENV=production`
392 |    - Custom API keys (if needed)
393 | 
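These variables can also be created programmatically. A minimal sketch using `@octokit/rest` (`createRepoVariable` is GitHub's Actions variables endpoint; owner/repo values are placeholders):

```typescript
import { Octokit } from "@octokit/rest";

// Creates repository-level Actions variables
// (Settings > Secrets and variables > Actions > Variables).
async function setProductionVariables(): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const vars = { HUGO_ENV: "production", NODE_ENV: "production" };

  for (const [name, value] of Object.entries(vars)) {
    await octokit.rest.actions.createRepoVariable({
      owner: "your-username", // placeholder
      repo: "your-repository", // placeholder
      name,
      value,
    });
  }
}

setProductionVariables().catch(console.error);
```
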
394 | ## Deployment Verification
395 | 
396 | ### Automatic Verification
397 | 
398 | DocuMCP includes verification:
399 | 
400 | ```bash
401 | "verify my GitHub Pages deployment is working correctly"
402 | ```
403 | 
404 | This checks (a script sketch follows the list):
405 | 
406 | - ✅ Site is accessible
407 | - ✅ All pages load correctly
408 | - ✅ Navigation works
409 | - ✅ Search functionality (if enabled)
410 | - ✅ Mobile responsiveness
411 | - ✅ SSL certificate validity
412 | 
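For a quick scripted version of the basic checks, here is a minimal sketch using Node 18+'s built-in `fetch` (the URL is a placeholder; this only covers reachability and HTTPS, not search or responsiveness):

```typescript
// Minimal reachability + HTTPS check for a deployed Pages site.
async function verifyDeployment(baseUrl: string): Promise<void> {
  const res = await fetch(baseUrl, { redirect: "follow" });
  if (!res.ok) throw new Error(`Site returned ${res.status}`);
  if (!res.url.startsWith("https://")) {
    throw new Error(`Expected HTTPS after redirects, got ${res.url}`);
  }
  console.log(`OK: ${res.url} responded ${res.status}`);
}

verifyDeployment("https://username.github.io/repository").catch((err) => {
  console.error("Verification failed:", err.message);
  process.exit(1);
});
```
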
413 | ### Manual Verification Checklist
414 | 
415 | - [ ] Homepage loads at `https://username.github.io/repository`
416 | - [ ] All navigation links work
417 | - [ ] Search functions properly
418 | - [ ] Mobile layout is responsive
419 | - [ ] Images and assets load
420 | - [ ] Forms work (if applicable)
421 | - [ ] Analytics tracking (if configured)
422 | 
423 | ## Troubleshooting Deployment Issues
424 | 
425 | ### Common Problems
426 | 
427 | **Build Fails:**
428 | 
429 | ```bash
430 | # Check workflow logs in GitHub Actions tab
431 | # Common issues:
432 | # - Node.js version mismatch
433 | # - Missing dependencies
434 | # - Configuration errors
435 | ```
436 | 
437 | **404 Errors:**
438 | 
439 | ```bash
440 | # Fix baseURL configuration
441 | # For Docusaurus:
442 | baseUrl: '/repository-name/',
443 | 
444 | # For Hugo:
445 | baseURL: 'https://username.github.io/repository-name/'
446 | ```
447 | 
448 | **Assets Not Loading:**
449 | 
450 | ```bash
451 | # Check publicPath configuration
452 | # Ensure all asset paths are relative
453 | ```
454 | 
455 | ### Debug Mode
456 | 
457 | Enable debug mode in workflows:
458 | 
459 | ```yaml
460 | - name: Debug build
461 |   run: |
462 |     npm run build -- --verbose
463 |   env:
464 |     DEBUG: true
465 |     ACTIONS_STEP_DEBUG: true
466 | ```
467 | 
468 | ## Performance Optimization
469 | 
470 | ### Build Performance
471 | 
472 | Optimize build times:
473 | 
474 | ```yaml
475 | - name: Cache build assets
476 |   uses: actions/cache@v4
477 |   with:
478 |     path: |
479 |       .next/cache
480 |       .docusaurus/cache
481 |       public/static
482 |     key: ${{ runner.os }}-build-${{ hashFiles('**/*.md', '**/*.js') }}
483 | ```
484 | 
485 | ### Site Performance
486 | 
487 | DocuMCP automatically optimizes:
488 | 
489 | - **Image compression**: WebP format when possible
490 | - **CSS minification**: Remove unused styles
491 | - **JavaScript bundling**: Code splitting and tree shaking
492 | - **Asset preloading**: Critical resources loaded first
493 | 
494 | ## Monitoring and Analytics
495 | 
496 | ### GitHub Actions Monitoring
497 | 
498 | Set up notifications for deployment failures:
499 | 
500 | ```yaml
501 | - name: Notify on failure
502 |   if: failure()
503 |   uses: actions/github-script@v7
504 |   with:
505 |     script: |
506 |       github.rest.issues.create({
507 |         owner: context.repo.owner,
508 |         repo: context.repo.repo,
509 |         title: 'Documentation Deployment Failed',
510 |         body: 'Deployment workflow failed. Check logs for details.',
511 |         labels: ['deployment', 'bug']
512 |       });
513 | ```
514 | 
515 | ### Site Analytics
516 | 
517 | Add analytics to track usage:
518 | 
519 | **Google Analytics (Docusaurus):**
520 | 
521 | ```javascript
522 | // docusaurus.config.js
523 | const config = {
524 |   presets: [
525 |     [
526 |       "classic",
527 |       {
528 |         gtag: {
529 |           trackingID: "G-XXXXXXXXXX",
530 |           anonymizeIP: true,
531 |         },
532 |       },
533 |     ],
534 |   ],
535 | };
536 | ```
537 | 
538 | ## Advanced Deployment Strategies
539 | 
540 | ### Multi-Environment Deployment
541 | 
542 | Deploy to staging and production:
543 | 
544 | ```yaml
545 | # In a separate workflow file (e.g., staging.yml): deploy to staging on PR
546 | on:
547 |   pull_request:
548 |     branches: [main]
549 | 
550 | # In another workflow file (e.g., production.yml): deploy to production on merge
551 | on:
552 |   push:
553 |     branches: [main]
554 | ```
555 | 
556 | ### Rollback Strategy
557 | 
558 | Implement deployment rollback:
559 | 
560 | ```yaml
561 | - name: Store deployment info
562 |   run: |
563 |     echo "DEPLOYMENT_SHA=${{ github.sha }}" >> $GITHUB_ENV
564 |     echo "DEPLOYMENT_TIME=$(date)" >> $GITHUB_ENV
565 | 
566 | - name: Create rollback script
567 |   run: |
568 |     echo "#!/bin/bash" > rollback.sh
569 |     echo "git checkout ${{ env.DEPLOYMENT_SHA }}" >> rollback.sh
570 |     chmod +x rollback.sh
571 | ```
572 | 
573 | ## Security Considerations
574 | 
575 | ### Permissions
576 | 
577 | DocuMCP uses minimal permissions:
578 | 
579 | ```yaml
580 | permissions:
581 |   contents: read # Read repository content
582 |   pages: write # Deploy to GitHub Pages
583 |   id-token: write # OIDC authentication
584 | ```
585 | 
586 | ### Secrets Management
587 | 
588 | Never commit secrets to the repository (see the sketch after this list):
589 | 
590 | - Use GitHub Actions secrets
591 | - Environment variables for configuration
592 | - OIDC tokens for authentication
593 | 
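In build scripts, read such values from the environment and fail fast when they are missing, so a misconfigured workflow surfaces immediately. A minimal sketch (the variable name is a placeholder):

```typescript
// Reads a required configuration value injected by the workflow
// (via GitHub Actions secrets/variables), never from committed files.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const apiKey = requireEnv("DOCS_API_KEY"); // placeholder name
```
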
594 | ## Next Steps
595 | 
596 | After successful deployment:
597 | 
598 | 1. **[Monitor your site](site-monitoring.md)** for uptime and performance
599 | 2. **[Set up custom domain](custom-domains.md)** (optional)
600 | 3. **[Optimize for SEO](seo-optimization.md)**
601 | 4. **[Configure analytics](analytics-setup.md)**
602 | 
603 | ## Summary
604 | 
605 | You now know how to:
606 | - ✅ Deploy documentation using DocuMCP automation
607 | - ✅ Configure GitHub Actions workflows
608 | - ✅ Set up custom domains and SSL
609 | - ✅ Verify deployments are working
610 | - ✅ Troubleshoot common issues
611 | - ✅ Optimize build and site performance
612 | - ✅ Monitor deployments and analytics
613 | 
614 | Your documentation is now live and automatically updated!
615 | 
```

--------------------------------------------------------------------------------
/tests/memory/kg-storage.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | /**
  2 |  * Tests for Knowledge Graph Storage
  3 |  * Phase 1: Core Knowledge Graph Integration
  4 |  */
  5 | 
  6 | import { describe, it, expect, beforeEach, afterEach } from "@jest/globals";
  7 | import { promises as fs } from "fs";
  8 | import { join } from "path";
  9 | import { KGStorage } from "../../src/memory/kg-storage.js";
 10 | import { GraphNode, GraphEdge } from "../../src/memory/knowledge-graph.js";
 11 | import { tmpdir } from "os";
 12 | 
 13 | describe("KGStorage", () => {
 14 |   let storage: KGStorage;
 15 |   let testDir: string;
 16 | 
 17 |   beforeEach(async () => {
 18 |     // Create temporary test directory
 19 |     testDir = join(tmpdir(), `kg-storage-test-${Date.now()}`);
 20 |     await fs.mkdir(testDir, { recursive: true });
 21 | 
 22 |     storage = new KGStorage({
 23 |       storageDir: testDir,
 24 |       backupOnWrite: true,
 25 |       validateOnRead: true,
 26 |     });
 27 | 
 28 |     await storage.initialize();
 29 |   });
 30 | 
 31 |   afterEach(async () => {
 32 |     // Clean up test directory
 33 |     try {
 34 |       await fs.rm(testDir, { recursive: true, force: true });
 35 |     } catch (error) {
 36 |       console.warn("Failed to clean up test directory:", error);
 37 |     }
 38 |   });
 39 | 
 40 |   describe("Initialization", () => {
 41 |     it("should create storage directory", async () => {
 42 |       const stats = await fs.stat(testDir);
 43 |       expect(stats.isDirectory()).toBe(true);
 44 |     });
 45 | 
 46 |     it("should create entity and relationship files", async () => {
 47 |       const entityFile = join(testDir, "knowledge-graph-entities.jsonl");
 48 |       const relationshipFile = join(
 49 |         testDir,
 50 |         "knowledge-graph-relationships.jsonl",
 51 |       );
 52 | 
 53 |       await fs.access(entityFile);
 54 |       await fs.access(relationshipFile);
 55 | 
 56 |       // Files should exist (no error thrown)
 57 |       expect(true).toBe(true);
 58 |     });
 59 | 
 60 |     it("should write file markers", async () => {
 61 |       const entityFile = join(testDir, "knowledge-graph-entities.jsonl");
 62 |       const content = await fs.readFile(entityFile, "utf-8");
 63 | 
 64 |       expect(content).toContain("# DOCUMCP_KNOWLEDGE_GRAPH_ENTITIES");
 65 |     });
 66 | 
 67 |     it("should reject non-DocuMCP files", async () => {
 68 |       // Create a non-DocuMCP file
 69 |       const fakeFile = join(testDir, "knowledge-graph-entities.jsonl");
 70 |       await fs.writeFile(fakeFile, "not a documcp file\n", "utf-8");
 71 | 
 72 |       const newStorage = new KGStorage({ storageDir: testDir });
 73 | 
 74 |       await expect(newStorage.initialize()).rejects.toThrow(
 75 |         "is not a DocuMCP knowledge graph file",
 76 |       );
 77 |     });
 78 |   });
 79 | 
 80 |   describe("Entity Storage", () => {
 81 |     it("should save and load entities", async () => {
 82 |       const entities: GraphNode[] = [
 83 |         {
 84 |           id: "project:test",
 85 |           type: "project",
 86 |           label: "Test Project",
 87 |           properties: { name: "Test" },
 88 |           weight: 1.0,
 89 |           lastUpdated: new Date().toISOString(),
 90 |         },
 91 |         {
 92 |           id: "tech:typescript",
 93 |           type: "technology",
 94 |           label: "TypeScript",
 95 |           properties: { name: "TypeScript" },
 96 |           weight: 1.0,
 97 |           lastUpdated: new Date().toISOString(),
 98 |         },
 99 |       ];
100 | 
101 |       await storage.saveEntities(entities);
102 |       const loaded = await storage.loadEntities();
103 | 
104 |       expect(loaded).toHaveLength(2);
105 |       expect(loaded[0].id).toBe("project:test");
106 |       expect(loaded[1].id).toBe("tech:typescript");
107 |     });
108 | 
109 |     it("should handle empty entity list", async () => {
110 |       await storage.saveEntities([]);
111 |       const loaded = await storage.loadEntities();
112 | 
113 |       expect(loaded).toHaveLength(0);
114 |     });
115 | 
116 |     it("should preserve entity properties", async () => {
117 |       const entity: GraphNode = {
118 |         id: "project:complex",
119 |         type: "project",
120 |         label: "Complex Project",
121 |         properties: {
122 |           name: "Complex",
123 |           technologies: ["typescript", "react"],
124 |           metadata: { nested: { value: 123 } },
125 |         },
126 |         weight: 0.85,
127 |         lastUpdated: new Date().toISOString(),
128 |       };
129 | 
130 |       await storage.saveEntities([entity]);
131 |       const loaded = await storage.loadEntities();
132 | 
133 |       expect(loaded[0].properties.technologies).toEqual([
134 |         "typescript",
135 |         "react",
136 |       ]);
137 |       expect(loaded[0].properties.metadata.nested.value).toBe(123);
138 |     });
139 |   });
140 | 
141 |   describe("Relationship Storage", () => {
142 |     it("should save and load relationships", async () => {
143 |       const relationships: GraphEdge[] = [
144 |         {
145 |           id: "project:test-uses-tech:typescript",
146 |           source: "project:test",
147 |           target: "tech:typescript",
148 |           type: "uses",
149 |           weight: 1.0,
150 |           confidence: 0.9,
151 |           properties: {},
152 |           lastUpdated: new Date().toISOString(),
153 |         },
154 |       ];
155 | 
156 |       await storage.saveRelationships(relationships);
157 |       const loaded = await storage.loadRelationships();
158 | 
159 |       expect(loaded).toHaveLength(1);
160 |       expect(loaded[0].source).toBe("project:test");
161 |       expect(loaded[0].target).toBe("tech:typescript");
162 |     });
163 | 
164 |     it("should handle empty relationship list", async () => {
165 |       await storage.saveRelationships([]);
166 |       const loaded = await storage.loadRelationships();
167 | 
168 |       expect(loaded).toHaveLength(0);
169 |     });
170 | 
171 |     it("should preserve relationship properties", async () => {
172 |       const relationship: GraphEdge = {
173 |         id: "test-edge",
174 |         source: "node1",
175 |         target: "node2",
176 |         type: "similar_to",
177 |         weight: 0.75,
178 |         confidence: 0.8,
179 |         properties: {
180 |           similarityScore: 0.75,
181 |           sharedTechnologies: ["typescript"],
182 |         },
183 |         lastUpdated: new Date().toISOString(),
184 |       };
185 | 
186 |       await storage.saveRelationships([relationship]);
187 |       const loaded = await storage.loadRelationships();
188 | 
189 |       expect(loaded[0].properties.similarityScore).toBe(0.75);
190 |       expect(loaded[0].properties.sharedTechnologies).toEqual(["typescript"]);
191 |     });
192 |   });
193 | 
194 |   describe("Complete Graph Storage", () => {
195 |     it("should save and load complete graph", async () => {
196 |       const entities: GraphNode[] = [
197 |         {
198 |           id: "project:test",
199 |           type: "project",
200 |           label: "Test",
201 |           properties: {},
202 |           weight: 1.0,
203 |           lastUpdated: new Date().toISOString(),
204 |         },
205 |       ];
206 | 
207 |       const relationships: GraphEdge[] = [
208 |         {
209 |           id: "test-edge",
210 |           source: "project:test",
211 |           target: "tech:ts",
212 |           type: "uses",
213 |           weight: 1.0,
214 |           confidence: 1.0,
215 |           properties: {},
216 |           lastUpdated: new Date().toISOString(),
217 |         },
218 |       ];
219 | 
220 |       await storage.saveGraph(entities, relationships);
221 |       const loaded = await storage.loadGraph();
222 | 
223 |       expect(loaded.entities).toHaveLength(1);
224 |       expect(loaded.relationships).toHaveLength(1);
225 |     });
226 |   });
227 | 
228 |   describe("Backup System", () => {
229 |     it("should create backups on write", async () => {
230 |       const entities: GraphNode[] = [
231 |         {
232 |           id: "test",
233 |           type: "project",
234 |           label: "Test",
235 |           properties: {},
236 |           weight: 1.0,
237 |           lastUpdated: new Date().toISOString(),
238 |         },
239 |       ];
240 | 
241 |       await storage.saveEntities(entities);
242 |       await storage.saveEntities(entities); // Second save should create backup
243 | 
244 |       const backupDir = join(testDir, "backups");
245 |       const files = await fs.readdir(backupDir);
246 | 
247 |       const backupFiles = files.filter((f) => f.startsWith("entities-"));
248 |       expect(backupFiles.length).toBeGreaterThan(0);
249 |     });
250 | 
251 |     it("should restore from backup", async () => {
252 |       const entities1: GraphNode[] = [
253 |         {
254 |           id: "version1",
255 |           type: "project",
256 |           label: "V1",
257 |           properties: {},
258 |           weight: 1.0,
259 |           lastUpdated: new Date().toISOString(),
260 |         },
261 |       ];
262 | 
263 |       const entities2: GraphNode[] = [
264 |         {
265 |           id: "version2",
266 |           type: "project",
267 |           label: "V2",
268 |           properties: {},
269 |           weight: 1.0,
270 |           lastUpdated: new Date().toISOString(),
271 |         },
272 |       ];
273 | 
274 |       // Save first version
275 |       await storage.saveEntities(entities1);
276 | 
277 |       // Small delay to ensure different timestamps
278 |       await new Promise((resolve) => setTimeout(resolve, 10));
279 | 
280 |       // Save second version (creates backup of first)
281 |       await storage.saveEntities(entities2);
282 | 
283 |       // Verify we have second version
284 |       let loaded = await storage.loadEntities();
285 |       expect(loaded).toHaveLength(1);
286 |       expect(loaded[0].id).toBe("version2");
287 | 
288 |       // Restore from backup
289 |       await storage.restoreFromBackup("entities");
290 | 
291 |       // Verify we have first version back
292 |       loaded = await storage.loadEntities();
293 |       expect(loaded).toHaveLength(1);
294 |       expect(loaded[0].id).toBe("version1");
295 |     });
296 |   });
297 | 
298 |   describe("Statistics", () => {
299 |     it("should return accurate statistics", async () => {
300 |       const entities: GraphNode[] = [
301 |         {
302 |           id: "e1",
303 |           type: "project",
304 |           label: "E1",
305 |           properties: {},
306 |           weight: 1.0,
307 |           lastUpdated: new Date().toISOString(),
308 |         },
309 |         {
310 |           id: "e2",
311 |           type: "technology",
312 |           label: "E2",
313 |           properties: {},
314 |           weight: 1.0,
315 |           lastUpdated: new Date().toISOString(),
316 |         },
317 |       ];
318 | 
319 |       const relationships: GraphEdge[] = [
320 |         {
321 |           id: "r1",
322 |           source: "e1",
323 |           target: "e2",
324 |           type: "uses",
325 |           weight: 1.0,
326 |           confidence: 1.0,
327 |           properties: {},
328 |           lastUpdated: new Date().toISOString(),
329 |         },
330 |       ];
331 | 
332 |       await storage.saveGraph(entities, relationships);
333 |       const stats = await storage.getStatistics();
334 | 
335 |       expect(stats.entityCount).toBe(2);
336 |       expect(stats.relationshipCount).toBe(1);
337 |       expect(stats.schemaVersion).toBe("1.1.0");
338 |       expect(stats.fileSize.entities).toBeGreaterThan(0);
339 |     });
340 |   });
341 | 
342 |   describe("Integrity Verification", () => {
343 |     it("should detect orphaned relationships", async () => {
344 |       const entities: GraphNode[] = [
345 |         {
346 |           id: "e1",
347 |           type: "project",
348 |           label: "E1",
349 |           properties: {},
350 |           weight: 1.0,
351 |           lastUpdated: new Date().toISOString(),
352 |         },
353 |       ];
354 | 
355 |       const relationships: GraphEdge[] = [
356 |         {
357 |           id: "r1",
358 |           source: "e1",
359 |           target: "missing", // References non-existent entity
360 |           type: "uses",
361 |           weight: 1.0,
362 |           confidence: 1.0,
363 |           properties: {},
364 |           lastUpdated: new Date().toISOString(),
365 |         },
366 |       ];
367 | 
368 |       await storage.saveGraph(entities, relationships);
369 |       const result = await storage.verifyIntegrity();
370 | 
371 |       expect(result.valid).toBe(true); // No errors, just warnings
372 |       expect(result.warnings.length).toBeGreaterThan(0);
373 |       expect(result.warnings[0]).toContain("missing");
374 |     });
375 | 
376 |     it("should detect duplicate entities", async () => {
377 |       const entities: GraphNode[] = [
378 |         {
379 |           id: "duplicate",
380 |           type: "project",
381 |           label: "E1",
382 |           properties: {},
383 |           weight: 1.0,
384 |           lastUpdated: new Date().toISOString(),
385 |         },
386 |         {
387 |           id: "duplicate",
388 |           type: "project",
389 |           label: "E2",
390 |           properties: {},
391 |           weight: 1.0,
392 |           lastUpdated: new Date().toISOString(),
393 |         },
394 |       ];
395 | 
396 |       await storage.saveEntities(entities);
397 |       const result = await storage.verifyIntegrity();
398 | 
399 |       expect(result.valid).toBe(false);
400 |       expect(result.errors.length).toBeGreaterThan(0);
401 |       expect(result.errors[0]).toContain("Duplicate entity ID");
402 |     });
403 |   });
404 | 
405 |   describe("Export", () => {
406 |     it("should export graph as JSON", async () => {
407 |       const entities: GraphNode[] = [
408 |         {
409 |           id: "test",
410 |           type: "project",
411 |           label: "Test",
412 |           properties: {},
413 |           weight: 1.0,
414 |           lastUpdated: new Date().toISOString(),
415 |         },
416 |       ];
417 | 
418 |       await storage.saveEntities(entities);
419 |       const json = await storage.exportAsJSON();
420 |       const parsed = JSON.parse(json);
421 | 
422 |       expect(parsed.metadata).toBeDefined();
423 |       expect(parsed.metadata.version).toBe("1.1.0");
424 |       expect(parsed.entities).toHaveLength(1);
425 |       expect(parsed.relationships).toHaveLength(0);
426 |     });
427 |   });
428 | });
429 | 
```

--------------------------------------------------------------------------------
/docs/adrs/adr-0010-mcp-resource-pattern-redesign.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | id: adr-10-mcp-resource-pattern-redesign
  3 | documcp:
  4 |   last_updated: "2025-11-20T00:46:21.944Z"
  5 |   last_validated: "2025-12-09T19:41:38.574Z"
  6 |   auto_updated: false
  7 |   update_frequency: monthly
  8 |   validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
  9 | ---
 10 | 
 11 | # ADR-010: MCP Resource Pattern Redesign
 12 | 
 13 | **Status:** Accepted
 14 | **Date:** 2025-10-09
 15 | **Deciders:** Development Team
 16 | **Context:** MCP Best Practices Review
 17 | 
 18 | ---
 19 | 
 20 | ## Context and Problem Statement
 21 | 
 22 | During an MCP best practices review (2025-10-09), a critical architectural misalignment was identified: DocuMCP was using MCP resources as a **persistence layer** to store tool execution results, violating the fundamental MCP control pattern philosophy.
 23 | 
 24 | **The Problem:**
 25 | 
 26 | - Resources were storing tool outputs via `storeResourceFromToolResult()`
 27 | - A `resourceStore` Map held dynamic tool results
 28 | - Resource URIs were generated at runtime (e.g., `documcp://analysis/{timestamp}-{random}`)
 29 | - This violated MCP's core principle that resources should **serve applications**, not store tool results
 30 | 
 31 | **Why This Matters:**
 32 | According to MCP best practices, the three primitives have distinct control patterns:
 33 | 
 34 | - **Tools** = Model-controlled (Claude decides when to execute) → Serve the **model**
 35 | - **Resources** = App-controlled (application decides when to fetch) → Serve the **app**
 36 | - **Prompts** = User-controlled (user triggers via actions) → Serve **users**
 37 | 
 38 | Using resources for tool result storage conflates model operations with app operations, creating architectural confusion and misusing the MCP protocol.
 39 | 
 40 | ---
 41 | 
 42 | ## Decision Drivers
 43 | 
 44 | ### Technical Requirements
 45 | 
 46 | - Align with MCP specification and best practices
 47 | - Follow proper control pattern separation
 48 | - Maintain backward compatibility where possible
 49 | - Preserve existing tool functionality
 50 | 
 51 | ### Architectural Principles
 52 | 
 53 | - **Separation of Concerns:** Tools handle execution, resources provide app data
 54 | - **Statelessness:** MCP servers should be stateless; persistence belongs elsewhere
 55 | - **Clear Purpose:** Each primitive serves its intended audience
 56 | 
 57 | ### Developer Experience
 58 | 
 59 | - Simplify resource implementation
 60 | - Make resource purpose obvious
 61 | - Enable proper MCP Inspector testing
 62 | 
 63 | ---
 64 | 
 65 | ## Considered Options
 66 | 
 67 | ### Option 1: Keep Current Pattern (Status Quo) ❌
 68 | 
 69 | **Description:** Continue using resources to store tool results.
 70 | 
 71 | **Pros:**
 72 | 
 73 | - No code changes required
 74 | - Existing URIs remain functional
 75 | - No migration needed
 76 | 
 77 | **Cons:**
 78 | 
 79 | - ❌ Violates MCP best practices
 80 | - ❌ Confuses model operations with app operations
 81 | - ❌ Makes MCP Inspector testing unclear
 82 | - ❌ Creates unnecessary complexity
 83 | - ❌ Misrepresents resource purpose
 84 | 
 85 | **Decision:** Rejected due to architectural misalignment
 86 | 
 87 | ---
 88 | 
 89 | ### Option 2: Remove All Resources ❌
 90 | 
 91 | **Description:** Eliminate resources entirely, return all data via tools only.
 92 | 
 93 | **Pros:**
 94 | 
 95 | - Simplifies implementation
 96 | - Eliminates resource confusion
 97 | - Focuses on tools as primary interface
 98 | 
 99 | **Cons:**
100 | 
101 | - ❌ Removes legitimate use cases for app-controlled data
102 | - ❌ Loses template access for UI
103 | - ❌ Prevents SSG list for dropdowns
104 | - ❌ Underutilizes MCP capabilities
105 | 
106 | **Decision:** Rejected - throws the baby out with the bathwater
107 | 
108 | ---
109 | 
110 | ### Option 3: Redesign Resources for App Needs ✅ (CHOSEN)
111 | 
112 | **Description:** Remove tool result storage, create static resources that serve application UI needs.
113 | 
114 | **Pros:**
115 | 
116 | - ✅ Aligns with MCP best practices
117 | - ✅ Clear separation: tools execute, resources provide app data
118 | - ✅ Enables proper MCP Inspector testing
119 | - ✅ Provides legitimate value to applications
120 | - ✅ Follows control pattern philosophy
121 | 
122 | **Cons:**
123 | 
124 | - Requires code refactoring
125 | - Changes resource URIs (but tools remain compatible)
126 | 
127 | **Decision:** **ACCEPTED** - Best aligns with MCP architecture
128 | 
129 | ---
130 | 
131 | ## Decision Outcome
132 | 
133 | **Chosen Option:** Option 3 - Redesign Resources for App Needs
134 | 
135 | ### Implementation Details
136 | 
137 | #### 1. Remove Tool Result Storage
138 | 
139 | **Before:**
140 | 
141 | ```typescript
142 | const resourceStore = new Map<string, { content: string; mimeType: string }>();
143 | 
144 | function storeResourceFromToolResult(
145 |   toolName: string,
146 |   args: any,
147 |   result: any,
148 |   id?: string,
149 | ): string {
150 |   const uri = `documcp://analysis/${id}`;
151 |   resourceStore.set(uri, {
152 |     content: JSON.stringify(result),
153 |     mimeType: "application/json",
154 |   });
155 |   return uri;
156 | }
157 | 
158 | // In tool handler:
159 | const result = await analyzeRepository(args);
160 | const resourceUri = storeResourceFromToolResult(
161 |   "analyze_repository",
162 |   args,
163 |   result,
164 | );
165 | (result as any).resourceUri = resourceUri;
166 | return result;
167 | ```
168 | 
169 | **After:**
170 | 
171 | ```typescript
172 | // No resource storage! Tools return results directly
173 | const result = await analyzeRepository(args);
174 | return wrapToolResult(result, "analyze_repository");
175 | ```
176 | 
177 | #### 2. Create Static App-Serving Resources
178 | 
179 | **New Resource Categories:**
180 | 
181 | **A. SSG List Resource** (for UI dropdowns)
182 | 
183 | ```typescript
184 | {
185 |   uri: "documcp://ssgs/available",
186 |   name: "Available Static Site Generators",
187 |   description: "List of supported SSGs with capabilities for UI selection",
188 |   mimeType: "application/json"
189 | }
190 | ```
191 | 
192 | Returns:
193 | 
194 | ```json
195 | {
196 |   "ssgs": [
197 |     {
198 |       "id": "jekyll",
199 |       "name": "Jekyll",
200 |       "description": "Ruby-based SSG, great for GitHub Pages",
201 |       "language": "ruby",
202 |       "complexity": "low",
203 |       "buildSpeed": "medium",
204 |       "ecosystem": "mature",
205 |       "bestFor": ["blogs", "documentation", "simple-sites"]
206 |     }
207 |     // ... 4 more SSGs
208 |   ]
209 | }
210 | ```
211 | 
212 | **B. Configuration Templates** (for SSG setup)
213 | 
214 | ```typescript
215 | {
216 |   uri: "documcp://templates/jekyll-config",
217 |   name: "Jekyll Configuration Template",
218 |   description: "Template for Jekyll _config.yml",
219 |   mimeType: "text/yaml"
220 | }
221 | ```
222 | 
223 | Returns actual YAML template for Jekyll configuration.
224 | 
225 | **C. Workflow Resources** (for UI workflow display)
226 | 
227 | ```typescript
228 | {
229 |   uri: "documcp://workflows/all",
230 |   name: "All Documentation Workflows",
231 |   description: "Complete list of available documentation workflows",
232 |   mimeType: "application/json"
233 | }
234 | ```
235 | 
236 | #### 3. Resource Handler Implementation
237 | 
238 | ```typescript
239 | server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
240 |   const { uri } = request.params;
241 | 
242 |   // Handle SSG list (for UI)
243 |   if (uri === "documcp://ssgs/available") {
244 |     return {
245 |       contents: [{
246 |         uri,
247 |         mimeType: "application/json",
248 |         text: JSON.stringify({ ssgs: [...] })
249 |       }]
250 |     };
251 |   }
252 | 
253 |   // Handle templates (static content)
254 |   if (uri.startsWith("documcp://templates/")) {
255 |     const templateType = uri.split("/").pop();
256 |     return {
257 |       contents: [{
258 |         uri,
259 |         mimeType: getTemplateMimeType(templateType),
260 |         text: getTemplateContent(templateType)
261 |       }]
262 |     };
263 |   }
264 | 
265 |   throw new Error(`Resource not found: ${uri}`);
266 | });
267 | ```
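
The handler above calls `getTemplateMimeType()` and `getTemplateContent()`, which are not shown in this ADR. A minimal sketch of what they might look like, assuming a small static registry (the template body is an illustrative stub, not the real template):

```typescript
// Hypothetical static template registry backing the resource handler.
// Keys match the last URI segment, e.g. "jekyll-config" from
// "documcp://templates/jekyll-config".
const TEMPLATES: Record<string, { mimeType: string; content: string }> = {
  "jekyll-config": {
    mimeType: "text/yaml",
    content: "title: My Docs\ntheme: minima\n", // illustrative stub
  },
};

function getTemplateMimeType(templateType: string | undefined): string {
  return TEMPLATES[templateType ?? ""]?.mimeType ?? "text/plain";
}

function getTemplateContent(templateType: string | undefined): string {
  const template = TEMPLATES[templateType ?? ""];
  if (!template) throw new Error(`Unknown template: ${templateType}`);
  return template.content;
}
```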
268 | 
269 | ### Resource Design Principles
270 | 
271 | 1. **Static Content Only:** Resources return pre-defined, static data
272 | 2. **App-Controlled:** Applications fetch resources when needed for UI
273 | 3. **Predictable URIs:** Fixed URIs (no timestamps or random IDs)
274 | 4. **Clear Purpose:** Each resource serves a specific app UI need
275 | 
276 | ---
277 | 
278 | ## Consequences
279 | 
280 | ### Positive Consequences ✅
281 | 
282 | 1. **Architectural Alignment**
283 | 
284 |    - Resources now properly serve applications
285 |    - Clear separation between tools and resources
286 |    - Follows MCP control pattern philosophy
287 | 
288 | 2. **Improved Developer Experience**
289 | 
290 |    - Resource purpose is obvious
291 |    - MCP Inspector testing is clear
292 |    - No confusion about resource lifecycle
293 | 
294 | 3. **Better Testability**
295 | 
296 |    - Resources return predictable content
297 |    - Can test resources independently
298 |    - MCP Inspector works correctly
299 | 
300 | 4. **Simplified Implementation**
301 | 
302 |    - Removed `resourceStore` Map
303 |    - Removed `storeResourceFromToolResult()` function
304 |    - Removed 50+ lines of resource storage code
305 |    - Tools are simpler (no resource URI tracking)
306 | 
307 | 5. **Legitimate App Value**
308 |    - SSG list enables UI dropdowns
309 |    - Templates provide boilerplate content
310 |    - Workflows guide user actions
311 | 
312 | ### Negative Consequences ⚠️
313 | 
314 | 1. **Breaking Change for Resource URIs**
315 | 
316 |    - Old dynamic URIs (`documcp://analysis/{timestamp}`) no longer work
317 |    - Applications relying on these URIs need updates
318 |    - **Mitigation:** Tools return data directly; URIs were internal implementation detail
319 | 
320 | 2. **No Tool Result Persistence**
321 | 
322 |    - Tool results are not stored between executions
323 |    - Applications must handle result storage if needed
324 |    - **Mitigation:** MCP servers should be stateless; persistence is app responsibility
325 | 
326 | 3. **Migration Effort**
327 |    - Required updating all tool handlers
328 |    - Updated resource definitions
329 |    - **Time Cost:** ~4 hours
330 | 
331 | ---
332 | 
333 | ## Implementation Results
334 | 
335 | ### Code Changes
336 | 
337 | **Files Modified:**
338 | 
339 | - `src/index.ts` (main server file)
340 |   - Removed `resourceStore` Map (10 lines)
341 |   - Removed `storeResourceFromToolResult()` (50 lines)
342 |   - Redesigned `RESOURCES` array (12 new resources)
343 |   - Updated `ReadResourceRequestSchema` handler (150 lines)
344 |   - Removed resource storage from all tools (30+ locations)
345 | 
346 | **Lines of Code:**
347 | 
348 | - **Removed:** ~120 lines (resource storage logic)
349 | - **Added:** ~200 lines (static resource handlers)
350 | - **Net Change:** +80 lines (but much clearer purpose)
351 | 
352 | ### Test Results
353 | 
354 | **Before Implementation:**
355 | 
356 | - Tests: 122/122 passing ✅
357 | - TypeScript: Compiles ✅
358 | 
359 | **After Implementation:**
360 | 
361 | - Tests: 122/122 passing ✅
362 | - TypeScript: Compiles ✅
363 | - No broken tests
364 | - No regression issues
365 | 
366 | ### Performance Impact
367 | 
368 | **Before:**
369 | 
370 | - Resource storage: O(1) Map insertion per tool
371 | - Memory: Growing Map of all tool results
372 | 
373 | **After:**
374 | 
375 | - Resource retrieval: O(1) static content lookup
376 | - Memory: Fixed size (no growth)
377 | 
378 | **Improvement:** Reduced memory usage, no performance degradation
379 | 
380 | ---
381 | 
382 | ## Compliance with MCP Best Practices
383 | 
384 | ### Before Redesign
385 | 
386 | - **Resource Implementation:** 3/10 ❌
387 | - **Control Patterns:** 4/10 ❌
388 | 
389 | ### After Redesign
390 | 
391 | - **Resource Implementation:** 9/10 ✅
392 | - **Control Patterns:** 9/10 ✅
393 | 
394 | ---
395 | 
396 | ## Migration Guide
397 | 
398 | ### For Client Applications
399 | 
400 | **Old Pattern (No Longer Works):**
401 | 
402 | ```javascript
403 | // Execute tool
404 | const result = await callTool("analyze_repository", { path: "./" });
405 | 
406 | // WRONG: Try to fetch from resource URI
407 | const resourceUri = result.resourceUri;
408 | const resource = await readResource(resourceUri); // ❌ Will fail
409 | ```
410 | 
411 | **New Pattern (Recommended):**
412 | 
413 | ```javascript
414 | // Execute tool - result contains all data
415 | const result = await callTool("analyze_repository", { path: "./" });
416 | 
417 | // Use result directly (no need for resources)
418 | console.log(result.data); // ✅ All data is here
419 | 
420 | // Use resources for app UI needs
421 | const ssgList = await readResource("documcp://ssgs/available"); // ✅ For dropdowns
422 | const template = await readResource("documcp://templates/jekyll-config"); // ✅ For setup
423 | ```
424 | 
425 | ### For Tool Developers
426 | 
427 | **Old Pattern:**
428 | 
429 | ```typescript
430 | const result = await analyzeRepository(args);
431 | const resourceUri = storeResourceFromToolResult(
432 |   "analyze_repository",
433 |   args,
434 |   result,
435 | );
436 | (result as any).resourceUri = resourceUri;
437 | return result;
438 | ```
439 | 
440 | **New Pattern:**
441 | 
442 | ```typescript
443 | const result = await analyzeRepository(args);
444 | return wrapToolResult(result, "analyze_repository"); // Standardized wrapper
445 | ```
446 | 
447 | ---
448 | 
449 | ## References
450 | 
451 | - **MCP Specification:** https://modelcontextprotocol.io/docs
452 | - **MCP Best Practices Review:** `MCP_BEST_PRACTICES_REVIEW.md`
453 | - **MCP Inspector Guide:** `docs/development/MCP_INSPECTOR_TESTING.md`
454 | - **Related ADRs:**
455 |   - ADR-006: MCP Tools API Design
456 |   - ADR-007: MCP Prompts and Resources Integration
457 | 
458 | ---
459 | 
460 | ## Notes
461 | 
462 | ### Design Philosophy
463 | 
464 | The resource redesign embodies a core MCP principle: **each primitive serves its audience**.
465 | 
466 | - **Tools** answer the question: _"What can Claude do?"_
467 | - **Resources** answer the question: _"What data does my app need?"_
468 | - **Prompts** answer the question: _"What workflows can users trigger?"_
469 | 
470 | Mixing these purposes creates architectural debt and violates separation of concerns.
471 | 
472 | ### Future Enhancements
473 | 
474 | **Potential Additional Resources:**
475 | 
476 | - `documcp://themes/available` - UI theme list
477 | - `documcp://validators/rules` - Validation rule catalog
478 | - `documcp://examples/{category}` - Example content library
479 | 
480 | These should all follow the same principle: **serve the application's UI needs**, not store execution results.
481 | 
482 | ---
483 | 
484 | **Last Updated:** 2025-10-09
485 | **Status:** Implemented and Verified ✅
486 | 
```

--------------------------------------------------------------------------------
/docs/adrs/adr-0007-mcp-prompts-and-resources-integration.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | id: adr-7-mcp-prompts-and-resources-integration
  3 | title: "ADR-007: MCP Prompts and Resources Integration"
  4 | sidebar_label: "ADR-007: MCP Prompts and Resources Integration"
  5 | sidebar_position: 7
  6 | documcp:
  7 |   last_updated: "2025-12-12T18:24:24.459Z"
  8 |   last_validated: "2025-12-12T18:24:24.459Z"
  9 |   auto_updated: false
 10 |   update_frequency: monthly
 11 |   validated_against_commit: c4b07aaf8802a2b359d483114fa21f7cabb85d34
 12 | ---
 13 | 
 14 | # ADR-007: MCP Prompts and Resources Integration for AI Assistance
 15 | 
 16 | ## Status
 17 | 
 18 | Accepted
 19 | 
 20 | ## Context
 21 | 
 22 | DocuMCP needs AI assistance capabilities, and the Model Context Protocol provides native support for exactly this use case through **Prompts** and **Resources**. Rather than extending the protocol, we should leverage MCP's built-in capabilities:
 23 | 
 24 | - **MCP Prompts**: Pre-written templates that help users accomplish specific tasks
 25 | - **MCP Resources**: File-like data that can be read by clients (like API responses, file contents, or generated documentation)
 26 | 
 27 | Current MCP Core Concepts that we can utilize:
 28 | 
 29 | 1. **Tools**: Interactive functions (already implemented - analyze_repository, recommend_ssg, etc.)
 30 | 2. **Prompts**: Template-based assistance for common workflows
 31 | 3. **Resources**: Readable data and content that clients can access
 32 | 
 33 | This approach maintains full MCP compliance while providing rich AI assistance through the protocol's intended mechanisms.
 34 | 
 35 | ## Decision
 36 | 
 37 | We will implement AI assistance using MCP's native **Prompts** and **Resources** capabilities, providing pre-written prompt templates for documentation workflows and exposing generated content through the MCP resource system.
 38 | 
 39 | ### Core Implementation Strategy:
 40 | 
 41 | #### 1. MCP Prompts for Documentation Workflows
 42 | 
 43 | ```typescript
 44 | // Implement MCP ListPromptsRequestSchema and GetPromptRequestSchema
 45 | const DOCUMENTATION_PROMPTS = [
 46 |   {
 47 |     name: "analyze-and-recommend",
 48 |     description: "Complete repository analysis and SSG recommendation workflow",
 49 |     arguments: [
 50 |       {
 51 |         name: "repository_path",
 52 |         description: "Path to repository",
 53 |         required: true,
 54 |       },
 55 |       {
 56 |         name: "priority",
 57 |         description: "Priority: simplicity, features, performance",
 58 |       },
 59 |     ],
 60 |   },
 61 |   {
 62 |     name: "setup-documentation",
 63 |     description:
 64 |       "Create comprehensive documentation structure with best practices",
 65 |     arguments: [
 66 |       { name: "project_name", description: "Project name", required: true },
 67 |       { name: "ssg_type", description: "Static site generator type" },
 68 |     ],
 69 |   },
 70 |   {
 71 |     name: "troubleshoot-deployment",
 72 |     description: "Diagnose and fix GitHub Pages deployment issues",
 73 |     arguments: [
 74 |       {
 75 |         name: "repository_url",
 76 |         description: "GitHub repository URL",
 77 |         required: true,
 78 |       },
 79 |       { name: "error_message", description: "Deployment error message" },
 80 |     ],
 81 |   },
 82 | ];
 83 | ```
 84 | 
 85 | #### 2. MCP Resources for Generated Content
 86 | 
 87 | ```typescript
 88 | // Implement ListResourcesRequestSchema and ReadResourceRequestSchema
 89 | interface DocuMCPResource {
 90 |   uri: string; // e.g., "documcp://analysis/repo-123"
 91 |   name: string; // Human-readable name
 92 |   description: string; // What this resource contains
 93 |   mimeType: string; // Content type
 94 | }
 95 | 
 96 | // Resource types we'll expose:
 97 | const RESOURCE_TYPES = [
 98 |   "documcp://analysis/{analysisId}", // Repository analysis results
 99 |   "documcp://config/{ssgType}/{projectId}", // Generated configuration files
100 |   "documcp://structure/{projectId}", // Documentation structure templates
101 |   "documcp://deployment/{workflowId}", // GitHub Actions workflows
102 |   "documcp://templates/{templateType}", // Reusable templates
103 | ];
104 | ```
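
A minimal sketch of how such URI templates could be matched and routed — the `matchResourceUri` helper is hypothetical, not part of the implemented server, and the regex translation is deliberately simplified:

```typescript
// Hypothetical matcher: turns "documcp://analysis/{analysisId}" into a
// regex with named groups and extracts parameters from a concrete URI.
function matchResourceUri(
  template: string,
  uri: string,
): Record<string, string> | null {
  const pattern = template.replace(/\{(\w+)\}/g, "(?<$1>[^/]+)");
  const match = new RegExp(`^${pattern}$`).exec(uri);
  return match?.groups ? { ...match.groups } : null;
}

// Example: extracts { analysisId: "repo-123" }
const params = matchResourceUri(
  "documcp://analysis/{analysisId}",
  "documcp://analysis/repo-123",
);
```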
105 | 
106 | #### 3. Integration with Existing Tools
107 | 
108 | - **Tools remain unchanged**: analyze_repository, recommend_ssg, generate_config, etc.
109 | - **Prompts provide workflows**: Chain multiple tool calls with guided prompts
110 | - **Resources expose results**: Make tool outputs accessible as MCP resources
111 | 
112 | ### Example Workflow Integration:
113 | 
114 | ```typescript
115 | // MCP Prompt: "analyze-and-recommend"
116 | // Generated prompt text that guides the user through:
117 | // 1. Call analyze_repository tool
118 | // 2. Review analysis results via documcp://analysis/{id} resource
119 | // 3. Call recommend_ssg tool with analysis results
120 | // 4. Access recommendations via documcp://recommendations/{id} resource
121 | // 5. Call generate_config with selected SSG
122 | ```
123 | 
124 | ## Alternatives Considered
125 | 
126 | ### Alternative 1: Custom Protocol Extensions (Previous Approach)
127 | 
128 | - **Pros**: Maximum flexibility, custom AI features
129 | - **Cons**: Protocol complexity, compatibility issues, non-standard
130 | - **Decision**: Rejected in favor of MCP-native approach
131 | 
132 | ### Alternative 2: Tools-Only Approach
133 | 
134 | - **Pros**: Simple, already implemented
135 | - **Cons**: No guided workflows, no template assistance, harder user experience
136 | - **Decision**: Insufficient for comprehensive AI assistance
137 | 
138 | ### Alternative 3: External AI Service Integration
139 | 
140 | - **Pros**: Leverage existing AI platforms
141 | - **Cons**: Breaks MCP cohesion, additional dependencies, latency
142 | - **Decision**: Conflicts with MCP server simplicity
143 | 
144 | ## Consequences
145 | 
146 | ### Positive Consequences
147 | 
148 | - **MCP Compliance**: Uses protocol as designed, no custom extensions needed
149 | - **Client Compatibility**: Works with all MCP clients (Claude Desktop, GitHub Copilot, etc.)
150 | - **Guided Workflows**: Prompts provide step-by-step assistance for complex tasks
151 | - **Rich Content Access**: Resources make generated content easily accessible
152 | - **Template Reusability**: Prompts can be customized and reused across projects
153 | - **Simplified Architecture**: No need for custom protocol handling or AI-specific interfaces
154 | 
155 | ### Negative Consequences
156 | 
157 | - **Prompt Complexity**: Complex workflows require sophisticated prompt engineering
158 | - **Resource Management**: Need efficient resource caching and lifecycle management
159 | - **Limited AI Features**: Constrained to MCP's prompt/resource model
160 | - **Template Maintenance**: Prompts need regular updates as tools evolve
161 | 
162 | ## Implementation Plan
163 | 
164 | ### Phase 1: Core MCP Integration (Week 1-2)
165 | 
166 | 1. Implement `ListPromptsRequestSchema` and `GetPromptRequestSchema` handlers
167 | 2. Implement `ListResourcesRequestSchema` and `ReadResourceRequestSchema` handlers
168 | 3. Create resource URI schema and routing system
169 | 4. Add MCP capabilities registration for prompts and resources
170 | 
171 | ### Phase 2: Documentation Prompts (Week 3-4)
172 | 
173 | 1. Create "analyze-and-recommend" workflow prompt
174 | 2. Create "setup-documentation" structure prompt
175 | 3. Create "troubleshoot-deployment" diagnostic prompt
176 | 4. Add prompt argument validation and help text
177 | 
178 | ### Phase 3: Resource Management (Week 5-6)
179 | 
180 | 1. Implement resource caching for analysis results
181 | 2. Add generated configuration file resources
182 | 3. Create template library resources
183 | 4. Add resource cleanup and lifecycle management
184 | 
185 | ### Phase 4: Advanced Features (Week 7-8)
186 | 
187 | 1. Dynamic prompt generation based on project characteristics
188 | 2. Contextual resource recommendations
189 | 3. Prompt composition for complex workflows
190 | 4. Integration testing with major MCP clients
191 | 
192 | ## Integration with Existing Architecture
193 | 
194 | ### ADR-001 (MCP Server Architecture)
195 | 
196 | - Extends the TypeScript MCP SDK usage to include prompts and resources
197 | - Maintains stateless operation model
198 | - Leverages existing modular design
199 | 
200 | ### ADR-006 (MCP Tools API Design)
201 | 
202 | - Tools remain the primary interface for actions
203 | - Prompts provide guided workflows using existing tools
204 | - Resources expose tool outputs in structured format
205 | 
206 | ### ADR-007 (Pluggable Prompt Tool Architecture)
207 | 
208 | - **Modified Approach**: Instead of custom prompt engines, use MCP prompts
209 | - Template system becomes MCP prompt templates
210 | - Configuration-driven approach still applies for prompt customization
211 | 
212 | ## MCP Server Capabilities Declaration
213 | 
214 | ```typescript
215 | server.setRequestHandler(InitializeRequestSchema, async () => ({
216 |   protocolVersion: "2024-11-05",
217 |   capabilities: {
218 |     tools: {}, // Existing tool capabilities
219 |     prompts: {}, // NEW: Prompt template capabilities
220 |     resources: {}, // NEW: Resource access capabilities
221 |   },
222 |   serverInfo: {
223 |     name: "documcp",
224 |     version: "0.2.0",
225 |   },
226 | }));
227 | ```

## Code Execution with MCP (CE-MCP) Integration (2025-12-09)

### Resources are Perfect for Code Mode

**Critical Insight**: MCP Resources are the ideal mechanism for preventing context pollution in Code Mode workflows:

```typescript
// ✅ GOOD: Summary-only result with resource URI
async function handleAnalyzeRepository(params) {
  const fullAnalysis = await analyzeRepo(params.path);

  // Store complete result as MCP resource
  const resourceUri = await storeResource({
    type: "analysis",
    data: fullAnalysis,
  });

  // Return only a summary to the LLM context (not 50,000 tokens of full data!)
  return {
    summary: {
      fileCount: fullAnalysis.fileCount,
      primaryLanguage: fullAnalysis.primaryLanguage,
      complexity: fullAnalysis.complexityScore,
    },
    resourceUri, // Client can access full data when needed
    nextSteps: [
      /* guidance */
    ],
  };
}
```

### Prompts for Code Mode Workflows

MCP Prompts provide guided workflows for Code Mode clients:

```typescript
// Prompt guides the LLM to generate orchestration code
{
  name: "complete-documentation-setup",
  description: "Complete workflow from analysis to deployment",
  prompt: `
    You will set up documentation for a project using these steps:

    1. Call analyze_repository tool and store result
    2. Access analysis via resource URI
    3. Call recommend_ssg with analysis data
    4. Generate configuration files
    5. Create Diataxis structure
    6. Set up GitHub Actions deployment

    Write TypeScript code to orchestrate these tools efficiently.
  `
}
```

### Resource Lifecycle in Code Mode

```typescript
// Code Mode execution pattern
async function codeModeWorkflow(repoPath: string) {
  // Step 1: Analysis (returns resource URI)
  const analysisResult = await callTool("analyze_repository", {
    path: repoPath,
  });
  const analysis = await readResource(analysisResult.resourceUri);

  // Step 2: Recommendation (uses cached analysis)
  const recommendation = await callTool("recommend_ssg", { analysis });

  // Step 3: Configuration (parallel execution possible!)
  const [config, structure] = await Promise.all([
    callTool("generate_config", { ssg: recommendation.primary }),
    callTool("setup_structure", { ssg: recommendation.primary }),
  ]);

  // Resources prevent intermediate data from polluting the LLM context
  return { config, structure };
}
```

### Performance Benefits

**Token Savings**:

- Traditional: full analysis result (50,000 tokens) → LLM context
- With Resources: summary (500 tokens) + resource URI → LLM context
- **99% token reduction** for large results (1 − 500/50,000 ≈ 99%)

**Cost Savings**:

- Complex workflow: $2.50 → $0.03 (~83x reduction)
- Achieved through resource-based intermediate storage

For detailed analysis, see [ADR-011: CE-MCP Compatibility](adr-0011-ce-mcp-compatibility.md).

## Implementation Status Review (2025-12-12)

**Status Update**: Changed from "Proposed" to "Accepted" based on a comprehensive ADR compliance review.

**Review Findings**:

- ✅ **Implementation Confirmed**: Comprehensive code review validates full implementation of MCP Prompts and Resources integration
- ✅ **Compliance Score**: 9/10 - well implemented with strong architectural consistency
- ✅ **Code Evidence**: Smart Code Linking identified 25 related files, confirming implementation
- ✅ **Integration Verified**: Successfully integrated with existing tools and architecture (ADR-001, ADR-006)

**Implementation Evidence**:

- MCP Prompts handlers implemented and registered
- MCP Resources system operational with URI schema
- Resource caching and lifecycle management in place
- CE-MCP compatibility validated (see ADR-011)
- Integration testing completed with major MCP clients

**Validation**: ADR compliance review conducted 2025-12-12, commit c4b07aaf8802a2b359d483114fa21f7cabb85d34

## Future Considerations

- Integration with MCP sampling for AI-powered responses
- Advanced prompt chaining and conditional workflows
- Resource subscriptions for real-time updates
- Community prompt template sharing and marketplace
- Resource caching strategies for Code Mode optimization
- Streaming resources for real-time progress updates
```