This is page 4 of 29. Use http://codebase.md/tosin2013/documcp?lines=true&page={x} to view the full context.
# Directory Structure
```
├── .dockerignore
├── .eslintignore
├── .eslintrc.json
├── .github
│ ├── agents
│ │ ├── documcp-ast.md
│ │ ├── documcp-deploy.md
│ │ ├── documcp-memory.md
│ │ ├── documcp-test.md
│ │ └── documcp-tool.md
│ ├── copilot-instructions.md
│ ├── dependabot.yml
│ ├── ISSUE_TEMPLATE
│ │ ├── automated-changelog.md
│ │ ├── bug_report.md
│ │ ├── bug_report.yml
│ │ ├── documentation_issue.md
│ │ ├── feature_request.md
│ │ ├── feature_request.yml
│ │ ├── npm-publishing-fix.md
│ │ └── release_improvements.md
│ ├── PULL_REQUEST_TEMPLATE.md
│ ├── release-drafter.yml
│ └── workflows
│ ├── auto-merge.yml
│ ├── ci.yml
│ ├── codeql.yml
│ ├── dependency-review.yml
│ ├── deploy-docs.yml
│ ├── README.md
│ ├── release-drafter.yml
│ └── release.yml
├── .gitignore
├── .husky
│ ├── commit-msg
│ └── pre-commit
├── .linkcheck.config.json
├── .markdown-link-check.json
├── .nvmrc
├── .pre-commit-config.yaml
├── .versionrc.json
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── commitlint.config.js
├── CONTRIBUTING.md
├── docker-compose.docs.yml
├── Dockerfile.docs
├── docs
│ ├── .docusaurus
│ │ ├── docusaurus-plugin-content-docs
│ │ │ └── default
│ │ │ └── __mdx-loader-dependency.json
│ │ └── docusaurus-plugin-content-pages
│ │ └── default
│ │ └── __plugin.json
│ ├── adrs
│ │ ├── 001-mcp-server-architecture.md
│ │ ├── 002-repository-analysis-engine.md
│ │ ├── 003-static-site-generator-recommendation-engine.md
│ │ ├── 004-diataxis-framework-integration.md
│ │ ├── 005-github-pages-deployment-automation.md
│ │ ├── 006-mcp-tools-api-design.md
│ │ ├── 007-mcp-prompts-and-resources-integration.md
│ │ ├── 008-intelligent-content-population-engine.md
│ │ ├── 009-content-accuracy-validation-framework.md
│ │ ├── 010-mcp-resource-pattern-redesign.md
│ │ └── README.md
│ ├── api
│ │ ├── .nojekyll
│ │ ├── assets
│ │ │ ├── hierarchy.js
│ │ │ ├── highlight.css
│ │ │ ├── icons.js
│ │ │ ├── icons.svg
│ │ │ ├── main.js
│ │ │ ├── navigation.js
│ │ │ ├── search.js
│ │ │ └── style.css
│ │ ├── hierarchy.html
│ │ ├── index.html
│ │ ├── modules.html
│ │ └── variables
│ │ └── TOOLS.html
│ ├── assets
│ │ └── logo.svg
│ ├── development
│ │ └── MCP_INSPECTOR_TESTING.md
│ ├── docusaurus.config.js
│ ├── explanation
│ │ ├── architecture.md
│ │ └── index.md
│ ├── guides
│ │ ├── link-validation.md
│ │ ├── playwright-integration.md
│ │ └── playwright-testing-workflow.md
│ ├── how-to
│ │ ├── analytics-setup.md
│ │ ├── custom-domains.md
│ │ ├── documentation-freshness-tracking.md
│ │ ├── github-pages-deployment.md
│ │ ├── index.md
│ │ ├── local-testing.md
│ │ ├── performance-optimization.md
│ │ ├── prompting-guide.md
│ │ ├── repository-analysis.md
│ │ ├── seo-optimization.md
│ │ ├── site-monitoring.md
│ │ ├── troubleshooting.md
│ │ └── usage-examples.md
│ ├── index.md
│ ├── knowledge-graph.md
│ ├── package-lock.json
│ ├── package.json
│ ├── phase-2-intelligence.md
│ ├── reference
│ │ ├── api-overview.md
│ │ ├── cli.md
│ │ ├── configuration.md
│ │ ├── deploy-pages.md
│ │ ├── index.md
│ │ ├── mcp-tools.md
│ │ └── prompt-templates.md
│ ├── research
│ │ ├── cross-domain-integration
│ │ │ └── README.md
│ │ ├── domain-1-mcp-architecture
│ │ │ ├── index.md
│ │ │ └── mcp-performance-research.md
│ │ ├── domain-2-repository-analysis
│ │ │ └── README.md
│ │ ├── domain-3-ssg-recommendation
│ │ │ ├── index.md
│ │ │ └── ssg-performance-analysis.md
│ │ ├── domain-4-diataxis-integration
│ │ │ └── README.md
│ │ ├── domain-5-github-deployment
│ │ │ ├── github-pages-security-analysis.md
│ │ │ └── index.md
│ │ ├── domain-6-api-design
│ │ │ └── README.md
│ │ ├── README.md
│ │ ├── research-integration-summary-2025-01-14.md
│ │ ├── research-progress-template.md
│ │ └── research-questions-2025-01-14.md
│ ├── robots.txt
│ ├── sidebars.js
│ ├── sitemap.xml
│ ├── src
│ │ └── css
│ │ └── custom.css
│ └── tutorials
│ ├── development-setup.md
│ ├── environment-setup.md
│ ├── first-deployment.md
│ ├── getting-started.md
│ ├── index.md
│ ├── memory-workflows.md
│ └── user-onboarding.md
├── jest.config.js
├── LICENSE
├── Makefile
├── MCP_PHASE2_IMPLEMENTATION.md
├── mcp-config-example.json
├── mcp.json
├── package-lock.json
├── package.json
├── README.md
├── release.sh
├── scripts
│ └── check-package-structure.cjs
├── SECURITY.md
├── setup-precommit.sh
├── src
│ ├── benchmarks
│ │ └── performance.ts
│ ├── index.ts
│ ├── memory
│ │ ├── contextual-retrieval.ts
│ │ ├── deployment-analytics.ts
│ │ ├── enhanced-manager.ts
│ │ ├── export-import.ts
│ │ ├── freshness-kg-integration.ts
│ │ ├── index.ts
│ │ ├── integration.ts
│ │ ├── kg-code-integration.ts
│ │ ├── kg-health.ts
│ │ ├── kg-integration.ts
│ │ ├── kg-link-validator.ts
│ │ ├── kg-storage.ts
│ │ ├── knowledge-graph.ts
│ │ ├── learning.ts
│ │ ├── manager.ts
│ │ ├── multi-agent-sharing.ts
│ │ ├── pruning.ts
│ │ ├── schemas.ts
│ │ ├── storage.ts
│ │ ├── temporal-analysis.ts
│ │ ├── user-preferences.ts
│ │ └── visualization.ts
│ ├── prompts
│ │ └── technical-writer-prompts.ts
│ ├── scripts
│ │ └── benchmark.ts
│ ├── templates
│ │ └── playwright
│ │ ├── accessibility.spec.template.ts
│ │ ├── Dockerfile.template
│ │ ├── docs-e2e.workflow.template.yml
│ │ ├── link-validation.spec.template.ts
│ │ └── playwright.config.template.ts
│ ├── tools
│ │ ├── analyze-deployments.ts
│ │ ├── analyze-readme.ts
│ │ ├── analyze-repository.ts
│ │ ├── check-documentation-links.ts
│ │ ├── deploy-pages.ts
│ │ ├── detect-gaps.ts
│ │ ├── evaluate-readme-health.ts
│ │ ├── generate-config.ts
│ │ ├── generate-contextual-content.ts
│ │ ├── generate-llm-context.ts
│ │ ├── generate-readme-template.ts
│ │ ├── generate-technical-writer-prompts.ts
│ │ ├── kg-health-check.ts
│ │ ├── manage-preferences.ts
│ │ ├── manage-sitemap.ts
│ │ ├── optimize-readme.ts
│ │ ├── populate-content.ts
│ │ ├── readme-best-practices.ts
│ │ ├── recommend-ssg.ts
│ │ ├── setup-playwright-tests.ts
│ │ ├── setup-structure.ts
│ │ ├── sync-code-to-docs.ts
│ │ ├── test-local-deployment.ts
│ │ ├── track-documentation-freshness.ts
│ │ ├── update-existing-documentation.ts
│ │ ├── validate-content.ts
│ │ ├── validate-documentation-freshness.ts
│ │ ├── validate-readme-checklist.ts
│ │ └── verify-deployment.ts
│ ├── types
│ │ └── api.ts
│ ├── utils
│ │ ├── ast-analyzer.ts
│ │ ├── code-scanner.ts
│ │ ├── content-extractor.ts
│ │ ├── drift-detector.ts
│ │ ├── freshness-tracker.ts
│ │ ├── language-parsers-simple.ts
│ │ ├── permission-checker.ts
│ │ └── sitemap-generator.ts
│ └── workflows
│ └── documentation-workflow.ts
├── test-docs-local.sh
├── tests
│ ├── api
│ │ └── mcp-responses.test.ts
│ ├── benchmarks
│ │ └── performance.test.ts
│ ├── edge-cases
│ │ └── error-handling.test.ts
│ ├── functional
│ │ └── tools.test.ts
│ ├── integration
│ │ ├── kg-documentation-workflow.test.ts
│ │ ├── knowledge-graph-workflow.test.ts
│ │ ├── mcp-readme-tools.test.ts
│ │ ├── memory-mcp-tools.test.ts
│ │ ├── readme-technical-writer.test.ts
│ │ └── workflow.test.ts
│ ├── memory
│ │ ├── contextual-retrieval.test.ts
│ │ ├── enhanced-manager.test.ts
│ │ ├── export-import.test.ts
│ │ ├── freshness-kg-integration.test.ts
│ │ ├── kg-code-integration.test.ts
│ │ ├── kg-health.test.ts
│ │ ├── kg-link-validator.test.ts
│ │ ├── kg-storage-validation.test.ts
│ │ ├── kg-storage.test.ts
│ │ ├── knowledge-graph-enhanced.test.ts
│ │ ├── knowledge-graph.test.ts
│ │ ├── learning.test.ts
│ │ ├── manager-advanced.test.ts
│ │ ├── manager.test.ts
│ │ ├── mcp-resource-integration.test.ts
│ │ ├── mcp-tool-persistence.test.ts
│ │ ├── schemas.test.ts
│ │ ├── storage.test.ts
│ │ ├── temporal-analysis.test.ts
│ │ └── user-preferences.test.ts
│ ├── performance
│ │ ├── memory-load-testing.test.ts
│ │ └── memory-stress-testing.test.ts
│ ├── prompts
│ │ ├── guided-workflow-prompts.test.ts
│ │ └── technical-writer-prompts.test.ts
│ ├── server.test.ts
│ ├── setup.ts
│ ├── tools
│ │ ├── all-tools.test.ts
│ │ ├── analyze-coverage.test.ts
│ │ ├── analyze-deployments.test.ts
│ │ ├── analyze-readme.test.ts
│ │ ├── analyze-repository.test.ts
│ │ ├── check-documentation-links.test.ts
│ │ ├── deploy-pages-kg-retrieval.test.ts
│ │ ├── deploy-pages-tracking.test.ts
│ │ ├── deploy-pages.test.ts
│ │ ├── detect-gaps.test.ts
│ │ ├── evaluate-readme-health.test.ts
│ │ ├── generate-contextual-content.test.ts
│ │ ├── generate-llm-context.test.ts
│ │ ├── generate-readme-template.test.ts
│ │ ├── generate-technical-writer-prompts.test.ts
│ │ ├── kg-health-check.test.ts
│ │ ├── manage-sitemap.test.ts
│ │ ├── optimize-readme.test.ts
│ │ ├── readme-best-practices.test.ts
│ │ ├── recommend-ssg-historical.test.ts
│ │ ├── recommend-ssg-preferences.test.ts
│ │ ├── recommend-ssg.test.ts
│ │ ├── simple-coverage.test.ts
│ │ ├── sync-code-to-docs.test.ts
│ │ ├── test-local-deployment.test.ts
│ │ ├── tool-error-handling.test.ts
│ │ ├── track-documentation-freshness.test.ts
│ │ ├── validate-content.test.ts
│ │ ├── validate-documentation-freshness.test.ts
│ │ └── validate-readme-checklist.test.ts
│ ├── types
│ │ └── type-safety.test.ts
│ └── utils
│ ├── ast-analyzer.test.ts
│ ├── content-extractor.test.ts
│ ├── drift-detector.test.ts
│ ├── freshness-tracker.test.ts
│ └── sitemap-generator.test.ts
├── tsconfig.json
└── typedoc.json
```
# Files
--------------------------------------------------------------------------------
/docs/adrs/001-mcp-server-architecture.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | id: 001-mcp-server-architecture
3 | title: "ADR-001: MCP Server Architecture using TypeScript SDK"
4 | sidebar_label: "ADR-001: MCP Server Architecture"
5 | sidebar_position: 1
6 | documcp:
7 | last_updated: "2025-11-20T00:46:21.934Z"
8 | last_validated: "2025-11-20T00:46:21.934Z"
9 | auto_updated: false
10 | update_frequency: monthly
11 | ---
12 |
13 | # ADR-001: MCP Server Architecture using TypeScript SDK
14 |
15 | ## Status
16 |
17 | Accepted
18 |
19 | ## Context
20 |
21 | DocuMCP requires a robust server architecture that can integrate seamlessly with development environments like GitHub Copilot, Claude Desktop, and other MCP-enabled tools. The server needs to provide intelligent repository analysis, static site generator recommendations, and automated documentation deployment workflows.
22 |
23 | Key requirements:
24 |
25 | - Standards-compliant MCP protocol implementation
26 | - Stateless operation for consistency and reliability
27 | - Modular design separating concerns
28 | - Integration with existing developer workflows
29 | - Scalable architecture supporting complex multi-step operations
30 |
31 | ## Decision
32 |
33 | We will implement the DocuMCP server using the TypeScript Model Context Protocol SDK, following a modular, stateless architecture pattern.
34 |
35 | ### Core Architectural Components:
36 |
37 | 1. **MCP Server Foundation**: TypeScript-based implementation using official MCP SDK
38 | 2. **Repository Analysis Engine**: Multi-layered analysis of project characteristics
39 | 3. **Static Site Generator Recommendation Engine**: Algorithmic decision framework
40 | 4. **File Generation and Template System**: Template-based configuration generation
41 | 5. **GitHub Integration Layer**: Automated deployment orchestration
42 |
43 | ### Design Principles:
44 |
45 | - **Stateless Operation**: Each invocation analyzes current repository state
46 | - **Modular Design**: Clear separation between analysis, recommendation, generation, and deployment
47 | - **Standards Compliance**: Full adherence to MCP specification requirements
48 | - **Session Context**: Temporary context preservation within single sessions for complex workflows
49 |
50 | ## Alternatives Considered
51 |
52 | ### Python-based Implementation
53 |
54 | - **Pros**: Rich ecosystem for NLP and analysis, familiar to many developers
55 | - **Cons**: Less mature MCP SDK, deployment complexity, slower startup times
56 | - **Decision**: Rejected due to MCP ecosystem maturity in TypeScript
57 |
58 | ### Go-based Implementation
59 |
60 | - **Pros**: High performance, excellent concurrency, small binary size
61 | - **Cons**: Limited MCP SDK support, smaller ecosystem for documentation tools
62 | - **Decision**: Rejected due to limited MCP tooling and development velocity concerns
63 |
64 | ### Stateful Server with Database
65 |
66 | - **Pros**: Could cache analysis results, maintain user preferences
67 | - **Cons**: Deployment complexity, synchronization issues, potential staleness
68 | - **Decision**: Rejected to maintain simplicity and ensure consistency
69 |
70 | ## Consequences
71 |
72 | ### Positive
73 |
74 | - **Developer Familiarity**: TypeScript is widely known in the target developer community
75 | - **MCP Ecosystem**: Mature tooling and extensive documentation available
76 | - **Rapid Development**: Rich ecosystem accelerates feature development
77 | - **Integration**: Seamless integration with existing JavaScript/TypeScript tooling
78 | - **Consistency**: Stateless design eliminates synchronization issues
79 | - **Reliability**: Reduces complexity and potential failure modes
80 |
81 | ### Negative
82 |
83 | - **Runtime Overhead**: Node.js runtime may have higher memory usage than compiled alternatives
84 | - **Startup Time**: Node.js startup may be slower than Go or Rust alternatives
85 | - **Dependency Management**: npm ecosystem can introduce supply chain complexity
86 |
87 | ### Risks and Mitigations
88 |
89 | - **Supply Chain Security**: Use npm audit and dependency scanning in CI/CD
90 | - **Performance**: Implement intelligent caching and optimize hot paths
91 | - **Memory Usage**: Monitor and optimize memory allocation patterns
92 |
93 | ## Implementation Details
94 |
95 | ### Project Structure
96 |
97 | ```
98 | src/
99 | ├── server/ # MCP server implementation
100 | ├── analysis/ # Repository analysis engine
101 | ├── recommendation/ # SSG recommendation logic
102 | ├── generation/ # File and template generation
103 | ├── deployment/ # GitHub integration
104 | └── types/ # TypeScript type definitions
105 | ```
106 |
107 | ### Key Dependencies
108 |
109 | - `@modelcontextprotocol/sdk`: MCP protocol implementation (TypeScript SDK)
110 | - `typescript`: Type safety and development experience
111 | - `zod`: Runtime type validation for MCP tools
112 | - `yaml`: Configuration file parsing and generation
113 | - `mustache`: Template rendering engine
114 | - `simple-git`: Git repository interaction
115 |
116 | ### Error Handling Strategy
117 |
118 | - Comprehensive input validation using Zod schemas
119 | - Structured error responses with actionable guidance
120 | - Graceful degradation for partial analysis failures
121 | - Detailed logging for debugging and monitoring
122 |
123 | ## Compliance and Standards
124 |
125 | - Full MCP specification compliance for protocol interactions
126 | - JSON-RPC message handling with proper error codes
127 | - Standardized tool parameter validation and responses
128 | - Security best practices for file system access and Git operations
129 |
130 | ## Research Integration (2025-01-14)
131 |
132 | ### Performance Validation
133 |
134 | **Research Findings Incorporated**: Comprehensive analysis validates our architectural decisions:
135 |
136 | 1. **TypeScript MCP SDK Performance**:
137 |
138 | - ✅ JSON-RPC 2.0 protocol provides minimal communication overhead
139 | - ✅ Native WebSocket/stdio transport layers optimize performance
140 | - ✅ Type safety adds compile-time benefits without runtime performance cost
141 |
142 | 2. **Node.js Memory Optimization** (Critical for Repository Analysis):
143 | - **Streaming Implementation**: 10x memory reduction for files >100MB
144 | - **Worker Thread Pool**: 3-4x performance improvement for parallel processing
145 | - **Memory-Mapped Files**: 5x speed improvement for large directory traversal
146 |
147 | ### Updated Implementation Strategy
148 |
149 | Based on research validation, the architecture will implement:
150 |
151 | ```typescript
152 | // Enhanced streaming approach for large repositories
153 | class RepositoryAnalyzer {
154 | private workerPool: WorkerPool;
155 | private streamThreshold = 10 * 1024 * 1024; // 10MB
156 |
157 | async analyzeRepository(repoPath: string): Promise<AnalysisResult> {
158 | try {
159 | const files = await this.scanDirectory(repoPath);
160 |
161 | // Parallel processing with worker threads
162 | const chunks = this.chunkFiles(files, this.workerPool.size);
163 | const results = await Promise.allSettled(
164 | chunks.map((chunk) => this.workerPool.execute("analyzeChunk", chunk)),
165 | );
166 |
167 | // Handle partial failures gracefully
168 | const successfulResults = results
169 | .filter(
170 | (result): result is PromiseFulfilledResult<any> =>
171 | result.status === "fulfilled",
172 | )
173 | .map((result) => result.value);
174 |
175 | if (successfulResults.length === 0) {
176 | throw new Error("All analysis chunks failed");
177 | }
178 |
179 | return this.aggregateResults(successfulResults);
180 | } catch (error) {
181 | throw new Error(`Repository analysis failed: ${error.message}`);
182 | }
183 | }
184 |
185 | private async analyzeFile(filePath: string): Promise<FileAnalysis> {
186 | try {
187 | const stats = await fs.stat(filePath);
188 |
189 | // Use streaming for large files
190 | if (stats.size > this.streamThreshold) {
191 | return await this.analyzeFileStream(filePath);
192 | }
193 |
194 | return await this.analyzeFileStandard(filePath);
195 | } catch (error) {
196 | throw new Error(`File analysis failed for ${filePath}: ${error.message}`);
197 | }
198 | }
199 | }
200 | ```
201 |
202 | ### Performance Benchmarks
203 |
204 | Research-validated performance targets:
205 |
206 | - **Small Repositories** (<100 files): <1 second analysis time
207 | - **Medium Repositories** (100-1000 files): <10 seconds analysis time
208 | - **Large Repositories** (1000+ files): <60 seconds analysis time
209 | - **Memory Usage**: Constant memory profile regardless of repository size
210 |
211 | ## Future Considerations
212 |
213 | - Potential migration to WebAssembly for performance-critical components
214 | - Plugin architecture for extensible SSG support
215 | - Distributed analysis for large repository handling (validated by research)
216 | - Machine learning integration for improved recommendations
217 |
218 | ## References
219 |
220 | - [MCP TypeScript SDK Documentation](https://github.com/modelcontextprotocol/typescript-sdk)
221 | - [Model Context Protocol Specification](https://spec.modelcontextprotocol.io/)
222 | - [TypeScript Performance Best Practices](https://github.com/microsoft/TypeScript/wiki/Performance)
223 |
```
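ADR-001's error-handling strategy (Zod input validation with structured, actionable error responses) is the pattern the tool modules later on this page follow. Below is a minimal sketch of that pattern, assuming a hypothetical `analyzeRepositoryInput` schema; the real schemas live in `src/tools/`.

```typescript
import { z } from "zod";

// Hypothetical schema for illustration only; DocuMCP's real tool schemas live in src/tools/.
const analyzeRepositoryInput = z.object({
  path: z.string().min(1).describe("Path to the repository to analyze"),
  depth: z.enum(["quick", "standard", "deep"]).optional().default("standard"),
});

type ToolResponse = {
  content: Array<{ type: "text"; text: string }>;
  isError?: boolean;
};

// Convert a Zod validation failure into a structured, actionable MCP-style error response.
function toValidationError(error: z.ZodError): ToolResponse {
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(
          {
            success: false,
            error: {
              code: "INVALID_INPUT",
              issues: error.issues.map((i) => `${i.path.join(".")}: ${i.message}`),
              resolution: "Correct the listed parameters and retry the tool call",
            },
          },
          null,
          2,
        ),
      },
    ],
    isError: true,
  };
}

// Validate before doing any work, as the tools in src/tools/ do with inputSchema.parse().
export function parseArgs(args: unknown) {
  const parsed = analyzeRepositoryInput.safeParse(args);
  return parsed.success ? parsed.data : toValidationError(parsed.error);
}
```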
--------------------------------------------------------------------------------
/docs/adrs/002-repository-analysis-engine.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | id: 002-repository-analysis-engine
3 | title: "ADR-002: Repository Analysis Engine Design"
4 | sidebar_label: "ADR-002: Repository Analysis Engine Design"
5 | sidebar_position: 2
6 | documcp:
7 | last_updated: "2025-11-20T00:46:21.936Z"
8 | last_validated: "2025-11-20T00:46:21.936Z"
9 | auto_updated: false
10 | update_frequency: monthly
11 | ---
12 |
13 | # ADR-002: Multi-Layered Repository Analysis Engine Design
14 |
15 | ## Status
16 |
17 | Accepted
18 |
19 | ## Context
20 |
21 | DocuMCP needs to understand repository characteristics to make intelligent recommendations about static site generators and documentation structure. The analysis must go beyond simple file counting to provide deep insights into project complexity, language ecosystems, existing documentation patterns, and development practices.
22 |
23 | Key requirements:
24 |
25 | - Comprehensive project characterization
26 | - Language ecosystem detection
27 | - Documentation quality assessment
28 | - Project complexity evaluation
29 | - Performance optimization for large repositories
30 | - Extensible architecture for new analysis types
31 |
32 | ## Decision
33 |
34 | We will implement a multi-layered repository analysis engine that examines repositories from multiple perspectives to build comprehensive project profiles.
35 |
36 | ### Analysis Layers:
37 |
38 | #### 1. File System Analysis Layer
39 |
40 | - **Recursive directory traversal** with intelligent filtering
41 | - **File categorization** by extension and content patterns
42 | - **Metrics calculation**: file counts, lines of code, directory depth, size distributions
43 | - **Ignore pattern handling**: .gitignore, common build artifacts, node_modules
44 |
45 | #### 2. Language Ecosystem Analysis Layer
46 |
47 | - **Package manager detection**: package.json, requirements.txt, Cargo.toml, go.mod, etc.
48 | - **Dependency analysis**: direct and transitive dependencies
49 | - **Build tool identification**: webpack, vite, gradle, maven, cargo, etc.
50 | - **Version constraint analysis**: compatibility requirements
51 |
52 | #### 3. Content Analysis Layer
53 |
54 | - **Documentation quality assessment**: README analysis, existing docs
55 | - **Code comment analysis**: inline documentation patterns
56 | - **API surface detection**: public interfaces, exported functions
57 | - **Content gap identification**: missing documentation areas
58 |
59 | #### 4. Project Metadata Analysis Layer
60 |
61 | - **Git history patterns**: commit frequency, contributor activity
62 | - **Release management**: tagging patterns, version schemes
63 | - **Issue tracking**: GitHub issues, project management indicators
64 | - **Community engagement**: contributor count, activity patterns
65 |
66 | #### 5. Complexity Assessment Layer
67 |
68 | - **Architectural complexity**: microservices, modular design patterns
69 | - **Technical complexity**: multi-language projects, advanced configurations
70 | - **Maintenance indicators**: test coverage, CI/CD presence, code quality metrics
71 | - **Documentation sophistication needs**: API complexity, user journey complexity
72 |
73 | ## Alternatives Considered
74 |
75 | ### Single-Pass Analysis
76 |
77 | - **Pros**: Simpler implementation, faster for small repositories
78 | - **Cons**: Limited depth, cannot build sophisticated project profiles
79 | - **Decision**: Rejected due to insufficient intelligence for quality recommendations
80 |
81 | ### External Tool Integration (e.g., GitHub API, CodeClimate)
82 |
83 | - **Pros**: Rich metadata, established metrics
84 | - **Cons**: External dependencies, rate limiting, requires authentication
85 | - **Decision**: Rejected for core analysis; may integrate as optional enhancement
86 |
87 | ### Machine Learning-Based Analysis
88 |
89 | - **Pros**: Could learn patterns from successful documentation projects
90 | - **Cons**: Training data requirements, model maintenance, unpredictable results
91 | - **Decision**: Deferred to future versions; start with rule-based analysis
92 |
93 | ### Database-Backed Caching
94 |
95 | - **Pros**: Faster repeat analysis, could store learning patterns
96 | - **Cons**: Deployment complexity, staleness issues, synchronization problems
97 | - **Decision**: Rejected for initial version; implement in-memory caching only
98 |
99 | ## Consequences
100 |
101 | ### Positive
102 |
103 | - **Intelligent Recommendations**: Deep analysis enables sophisticated SSG matching
104 | - **Extensible Architecture**: Easy to add new analysis dimensions
105 | - **Performance Optimization**: Layered approach allows selective analysis depth
106 | - **Quality Assessment**: Can identify and improve existing documentation
107 | - **Future-Proof**: Architecture supports ML integration and advanced analytics
108 |
109 | ### Negative
110 |
111 | - **Analysis Time**: Comprehensive analysis may be slower for large repositories
112 | - **Complexity**: Multi-layered architecture requires careful coordination
113 | - **Memory Usage**: Full repository analysis requires significant memory for large projects
114 |
115 | ### Risks and Mitigations
116 |
117 | - **Performance**: Implement streaming analysis and configurable depth limits
118 | - **Accuracy**: Validate analysis results against known project types
119 | - **Maintenance**: Regular testing against diverse repository types
120 |
121 | ## Implementation Details
122 |
123 | ### Analysis Engine Structure
124 |
125 | ```typescript
126 | interface RepositoryAnalysis {
127 | fileSystem: FileSystemAnalysis;
128 | languageEcosystem: LanguageEcosystemAnalysis;
129 | content: ContentAnalysis;
130 | metadata: ProjectMetadataAnalysis;
131 | complexity: ComplexityAssessment;
132 | }
133 |
134 | interface AnalysisLayer {
135 | analyze(repositoryPath: string): Promise<LayerResult>;
136 | getMetrics(): AnalysisMetrics;
137 | validate(): ValidationResult;
138 | }
139 | ```
140 |
141 | ### Performance Optimizations
142 |
143 | - **Parallel Analysis**: Independent layers run concurrently
144 | - **Intelligent Filtering**: Skip irrelevant files and directories early
145 | - **Progressive Analysis**: Start with lightweight analysis, deepen as needed
146 | - **Caching Strategy**: Cache analysis results within session scope
147 | - **Size Limits**: Configurable limits for very large repositories
148 |
149 | ### File Pattern Recognition
150 |
151 | ```typescript
152 | const FILE_PATTERNS = {
153 | documentation: [".md", ".rst", ".adoc", "docs/", "documentation/"],
154 | configuration: ["config/", ".config/", "*.json", "*.yaml", "*.toml"],
155 | source: ["src/", "lib/", "*.js", "*.ts", "*.py", "*.go", "*.rs"],
156 | tests: ["test/", "tests/", "__tests__/", "*.test.*", "*.spec.*"],
157 | build: ["build/", "dist/", "target/", "bin/", "*.lock"],
158 | };
159 | ```
160 |
161 | ### Language Ecosystem Detection
162 |
163 | ```typescript
164 | const ECOSYSTEM_INDICATORS = {
165 | javascript: ["package.json", "node_modules/", "yarn.lock", "pnpm-lock.yaml"],
166 | python: ["requirements.txt", "setup.py", "pyproject.toml", "Pipfile"],
167 | rust: ["Cargo.toml", "Cargo.lock", "src/main.rs"],
168 | go: ["go.mod", "go.sum", "main.go"],
169 | java: ["pom.xml", "build.gradle", "gradlew"],
170 | };
171 | ```
172 |
173 | ### Complexity Scoring Algorithm
174 |
175 | ```typescript
176 | interface ComplexityFactors {
177 | fileCount: number;
178 | languageCount: number;
179 | dependencyCount: number;
180 | directoryDepth: number;
181 | contributorCount: number;
182 | apiSurfaceSize: number;
183 | }
184 |
185 | function calculateComplexityScore(factors: ComplexityFactors): ComplexityScore {
186 | // Weighted scoring algorithm balancing multiple factors
187 | // Returns: 'simple' | 'moderate' | 'complex' | 'enterprise'
188 | }
189 | ```
190 |
191 | ## Quality Assurance
192 |
193 | ### Testing Strategy
194 |
195 | - **Unit Tests**: Each analysis layer tested independently
196 | - **Integration Tests**: Full analysis pipeline validation
197 | - **Repository Fixtures**: Test suite with diverse project types
198 | - **Performance Tests**: Analysis time benchmarks for various repository sizes
199 | - **Accuracy Validation**: Manual verification against known project characteristics
200 |
201 | ### Monitoring and Metrics
202 |
203 | - Analysis execution time by repository size
204 | - Accuracy of complexity assessments
205 | - Cache hit rates and memory usage
206 | - Error rates and failure modes
207 |
208 | ## Future Enhancements
209 |
210 | ### Machine Learning Integration
211 |
212 | - Pattern recognition for project types
213 | - Automated documentation quality scoring
214 | - Predictive analysis for maintenance needs
215 |
216 | ### Advanced Analytics
217 |
218 | - Historical trend analysis
219 | - Comparative analysis across similar projects
220 | - Community best practice identification
221 |
222 | ### Performance Optimizations
223 |
224 | - WebAssembly modules for intensive analysis
225 | - Distributed analysis for very large repositories
226 | - Incremental analysis for updated repositories
227 |
228 | ## Security Considerations
229 |
230 | - **File System Access**: Restricted to repository boundaries
231 | - **Content Scanning**: No sensitive data extraction or storage
232 | - **Resource Limits**: Prevent resource exhaustion attacks
233 | - **Input Validation**: Sanitize all repository paths and content
234 |
235 | ## References
236 |
237 | - [Git Repository Analysis Best Practices](https://git-scm.com/docs)
238 | - [Static Analysis Tools Comparison](https://analysis-tools.dev/)
239 | - [Repository Metrics Standards](https://chaoss.community/)
240 |
```
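ADR-002 leaves the body of `calculateComplexityScore` unspecified. The sketch below shows one possible weighted-scoring implementation; the weights, ceilings, and thresholds are illustrative assumptions, not values from the ADR.

```typescript
type ComplexityScore = "simple" | "moderate" | "complex" | "enterprise";

interface ComplexityFactors {
  fileCount: number;
  languageCount: number;
  dependencyCount: number;
  directoryDepth: number;
  contributorCount: number;
  apiSurfaceSize: number;
}

// Illustrative weights and ceilings only; the ADR does not specify actual values.
function calculateComplexityScore(factors: ComplexityFactors): ComplexityScore {
  // Normalize each factor to 0..1 against a rough ceiling, then combine with weights.
  const norm = (value: number, ceiling: number) => Math.min(value / ceiling, 1);

  const score =
    0.25 * norm(factors.fileCount, 5000) +
    0.15 * norm(factors.languageCount, 5) +
    0.2 * norm(factors.dependencyCount, 300) +
    0.1 * norm(factors.directoryDepth, 12) +
    0.15 * norm(factors.contributorCount, 50) +
    0.15 * norm(factors.apiSurfaceSize, 500);

  if (score < 0.25) return "simple";
  if (score < 0.5) return "moderate";
  if (score < 0.75) return "complex";
  return "enterprise";
}
```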
--------------------------------------------------------------------------------
/src/tools/setup-playwright-tests.ts:
--------------------------------------------------------------------------------
```typescript
1 | /**
2 | * Setup Playwright E2E Tests Tool
3 | * Generates Playwright test configuration and files for user's documentation site
4 | */
5 |
6 | import { promises as fs } from "fs";
7 | import path from "path";
8 | import { z } from "zod";
9 | // Return type matches MCP tool response format
10 | type ToolResponse = {
11 | content: Array<{ type: "text"; text: string }>;
12 | isError?: boolean;
13 | };
14 | import { fileURLToPath } from "url";
15 | import { dirname } from "path";
16 |
17 | const __filename = fileURLToPath(import.meta.url);
18 | const __dirname = dirname(__filename);
19 |
20 | const inputSchema = z.object({
21 | repositoryPath: z.string().describe("Path to the documentation repository"),
22 | ssg: z.enum(["jekyll", "hugo", "docusaurus", "mkdocs", "eleventy"]),
23 | projectName: z.string().describe("Project name for tests"),
24 | mainBranch: z.string().optional().default("main"),
25 | includeAccessibilityTests: z.boolean().optional().default(true),
26 | includeDockerfile: z.boolean().optional().default(true),
27 | includeGitHubActions: z.boolean().optional().default(true),
28 | });
29 |
30 | interface SSGConfig {
31 | buildCommand: string;
32 | buildDir: string;
33 | port: number;
34 | packageDeps: Record<string, string>;
35 | }
36 |
37 | const SSG_CONFIGS: Record<string, SSGConfig> = {
38 | jekyll: {
39 | buildCommand: "bundle exec jekyll build",
40 | buildDir: "_site",
41 | port: 4000,
42 | packageDeps: {},
43 | },
44 | hugo: {
45 | buildCommand: "hugo",
46 | buildDir: "public",
47 | port: 1313,
48 | packageDeps: {},
49 | },
50 | docusaurus: {
51 | buildCommand: "npm run build",
52 | buildDir: "build",
53 | port: 3000,
54 | packageDeps: {
55 | "@docusaurus/core": "^3.0.0",
56 | "@docusaurus/preset-classic": "^3.0.0",
57 | },
58 | },
59 | mkdocs: {
60 | buildCommand: "mkdocs build",
61 | buildDir: "site",
62 | port: 8000,
63 | packageDeps: {},
64 | },
65 | eleventy: {
66 | buildCommand: "npx @11ty/eleventy",
67 | buildDir: "_site",
68 | port: 8080,
69 | packageDeps: {
70 | "@11ty/eleventy": "^2.0.0",
71 | },
72 | },
73 | };
74 |
75 | export async function setupPlaywrightTests(
76 | args: unknown,
77 | ): Promise<ToolResponse> {
78 | const {
79 | repositoryPath,
80 | ssg,
81 | projectName,
82 | mainBranch,
83 | includeAccessibilityTests,
84 | includeDockerfile,
85 | includeGitHubActions,
86 | } = inputSchema.parse(args);
87 |
88 | try {
89 | const config = SSG_CONFIGS[ssg];
90 | const templatesDir = path.join(__dirname, "../templates/playwright");
91 |
92 | // Create directories
93 | const testsDir = path.join(repositoryPath, "tests/e2e");
94 | await fs.mkdir(testsDir, { recursive: true });
95 |
96 | if (includeGitHubActions) {
97 | const workflowsDir = path.join(repositoryPath, ".github/workflows");
98 | await fs.mkdir(workflowsDir, { recursive: true });
99 | }
100 |
101 | // Read and process templates
102 | const filesCreated: string[] = [];
103 |
104 | // 1. Playwright config
105 | const configTemplate = await fs.readFile(
106 | path.join(templatesDir, "playwright.config.template.ts"),
107 | "utf-8",
108 | );
109 | const playwrightConfig = configTemplate.replace(
110 | /{{port}}/g,
111 | config.port.toString(),
112 | );
113 |
114 | await fs.writeFile(
115 | path.join(repositoryPath, "playwright.config.ts"),
116 | playwrightConfig,
117 | );
118 | filesCreated.push("playwright.config.ts");
119 |
120 | // 2. Link validation tests
121 | const linkTestTemplate = await fs.readFile(
122 | path.join(templatesDir, "link-validation.spec.template.ts"),
123 | "utf-8",
124 | );
125 | const linkTest = linkTestTemplate.replace(/{{projectName}}/g, projectName);
126 |
127 | await fs.writeFile(
128 | path.join(testsDir, "link-validation.spec.ts"),
129 | linkTest,
130 | );
131 | filesCreated.push("tests/e2e/link-validation.spec.ts");
132 |
133 | // 3. Accessibility tests (if enabled)
134 | if (includeAccessibilityTests) {
135 | const a11yTemplate = await fs.readFile(
136 | path.join(templatesDir, "accessibility.spec.template.ts"),
137 | "utf-8",
138 | );
139 |
140 | await fs.writeFile(
141 | path.join(testsDir, "accessibility.spec.ts"),
142 | a11yTemplate,
143 | );
144 | filesCreated.push("tests/e2e/accessibility.spec.ts");
145 | }
146 |
147 | // 4. Dockerfile (if enabled)
148 | if (includeDockerfile) {
149 | const dockerTemplate = await fs.readFile(
150 | path.join(templatesDir, "Dockerfile.template"),
151 | "utf-8",
152 | );
153 | const dockerfile = dockerTemplate
154 | .replace(/{{ssg}}/g, ssg)
155 | .replace(/{{buildCommand}}/g, config.buildCommand)
156 | .replace(/{{buildDir}}/g, config.buildDir);
157 |
158 | await fs.writeFile(
159 | path.join(repositoryPath, "Dockerfile.playwright"),
160 | dockerfile,
161 | );
162 | filesCreated.push("Dockerfile.playwright");
163 | }
164 |
165 | // 5. GitHub Actions workflow (if enabled)
166 | if (includeGitHubActions) {
167 | const workflowTemplate = await fs.readFile(
168 | path.join(templatesDir, "docs-e2e.workflow.template.yml"),
169 | "utf-8",
170 | );
171 | const workflow = workflowTemplate
172 | .replace(/{{mainBranch}}/g, mainBranch)
173 | .replace(/{{buildCommand}}/g, config.buildCommand)
174 | .replace(/{{buildDir}}/g, config.buildDir)
175 | .replace(/{{port}}/g, config.port.toString());
176 |
177 | await fs.writeFile(
178 | path.join(repositoryPath, ".github/workflows/docs-e2e-tests.yml"),
179 | workflow,
180 | );
181 | filesCreated.push(".github/workflows/docs-e2e-tests.yml");
182 | }
183 |
184 | // 6. Update package.json
185 | const packageJsonPath = path.join(repositoryPath, "package.json");
186 | let packageJson: any = {};
187 |
188 | try {
189 | const existing = await fs.readFile(packageJsonPath, "utf-8");
190 | packageJson = JSON.parse(existing);
191 | } catch {
192 | // Create new package.json
193 | packageJson = {
194 | name: projectName.toLowerCase().replace(/\s+/g, "-"),
195 | version: "1.0.0",
196 | private: true,
197 | scripts: {},
198 | dependencies: {},
199 | devDependencies: {},
200 | };
201 | }
202 |
203 | // Add Playwright dependencies
204 | packageJson.devDependencies = {
205 | ...packageJson.devDependencies,
206 | "@playwright/test": "^1.55.1",
207 | ...(includeAccessibilityTests
208 | ? { "@axe-core/playwright": "^4.10.2" }
209 | : {}),
210 | };
211 |
212 | // Add test scripts
213 | packageJson.scripts = {
214 | ...packageJson.scripts,
215 | "test:e2e": "playwright test",
216 | "test:e2e:ui": "playwright test --ui",
217 | "test:e2e:report": "playwright show-report",
218 | "test:e2e:docker":
219 | "docker build -t docs-test -f Dockerfile.playwright . && docker run --rm docs-test",
220 | };
221 |
222 | await fs.writeFile(packageJsonPath, JSON.stringify(packageJson, null, 2));
223 | filesCreated.push("package.json (updated)");
224 |
225 | // 7. Create .gitignore entries
226 | const gitignorePath = path.join(repositoryPath, ".gitignore");
227 | const gitignoreEntries = [
228 | "test-results/",
229 | "playwright-report/",
230 | "playwright-results.json",
231 | "playwright/.cache/",
232 | ].join("\n");
233 |
234 | try {
235 | const existing = await fs.readFile(gitignorePath, "utf-8");
236 | if (!existing.includes("test-results/")) {
237 | await fs.writeFile(
238 | gitignorePath,
239 | `${existing}\n\n# Playwright\n${gitignoreEntries}\n`,
240 | );
241 | filesCreated.push(".gitignore (updated)");
242 | }
243 | } catch {
244 | await fs.writeFile(gitignorePath, `# Playwright\n${gitignoreEntries}\n`);
245 | filesCreated.push(".gitignore");
246 | }
247 |
248 | return {
249 | content: [
250 | {
251 | type: "text" as const,
252 | text: JSON.stringify(
253 | {
254 | success: true,
255 | filesCreated,
256 | nextSteps: [
257 | "Run `npm install` to install Playwright dependencies",
258 | "Run `npx playwright install` to download browser binaries",
259 | "Test locally: `npm run test:e2e`",
260 | includeDockerfile
261 | ? "Build container: `docker build -t docs-test -f Dockerfile.playwright .`"
262 | : "",
263 | includeGitHubActions
264 | ? "Push to trigger GitHub Actions workflow"
265 | : "",
266 | ].filter(Boolean),
267 | configuration: {
268 | ssg,
269 | buildCommand: config.buildCommand,
270 | buildDir: config.buildDir,
271 | port: config.port,
272 | testsIncluded: {
273 | linkValidation: true,
274 | accessibility: includeAccessibilityTests,
275 | },
276 | integrations: {
277 | docker: includeDockerfile,
278 | githubActions: includeGitHubActions,
279 | },
280 | },
281 | },
282 | null,
283 | 2,
284 | ),
285 | },
286 | ],
287 | };
288 | } catch (error: any) {
289 | return {
290 | content: [
291 | {
292 | type: "text" as const,
293 | text: JSON.stringify(
294 | {
295 | success: false,
296 | error: error.message,
297 | },
298 | null,
299 | 2,
300 | ),
301 | },
302 | ],
303 | isError: true,
304 | };
305 | }
306 | }
307 |
```
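A hedged caller sketch for `setupPlaywrightTests`, using placeholder paths and names; the response handling follows from the code above, which returns a single JSON text block with `success`, `filesCreated`, and `nextSteps` fields.

```typescript
import { setupPlaywrightTests } from "./setup-playwright-tests.js";

async function main() {
  // Placeholder repository path and project name for illustration.
  const result = await setupPlaywrightTests({
    repositoryPath: "/path/to/docs-repo",
    ssg: "docusaurus",
    projectName: "My Docs Site",
    includeAccessibilityTests: true,
    includeDockerfile: false,
    includeGitHubActions: true,
  });

  // The tool responds with a single JSON text block (see the return statements above).
  const payload = JSON.parse(result.content[0].text);
  if (result.isError) {
    console.error("Setup failed:", payload.error);
    return;
  }
  console.log("Files created:", payload.filesCreated);
  console.log("Next steps:", payload.nextSteps);
}

main().catch(console.error);
```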
--------------------------------------------------------------------------------
/src/tools/kg-health-check.ts:
--------------------------------------------------------------------------------
```typescript
1 | /**
2 | * Knowledge Graph Health Check Tool
3 | * MCP tool for checking knowledge graph health and getting recommendations
4 | */
5 |
6 | import { z } from "zod";
7 | import { MCPToolResponse, formatMCPResponse } from "../types/api.js";
8 | import { getKnowledgeGraph, getKGStorage } from "../memory/kg-integration.js";
9 | import { KGHealthMonitor, KGHealthMetrics } from "../memory/kg-health.js";
10 |
11 | const inputSchema = z.object({
12 | includeHistory: z.boolean().optional().default(false),
13 | generateReport: z.boolean().optional().default(true),
14 | days: z.number().min(1).max(90).optional().default(7),
15 | });
16 |
17 | /**
18 | * Check the health of the knowledge graph
19 | *
20 | * Performs comprehensive health analysis including data quality, structure health,
21 | * performance metrics, issue detection, and trend analysis.
22 | *
23 | * @param args - The input arguments
24 | * @param args.includeHistory - Include historical health trend data
25 | * @param args.generateReport - Generate a formatted health report
26 | * @param args.days - Number of days of history to include (1-90)
27 | *
28 | * @returns Health metrics with recommendations
29 | *
30 | * @example
31 | * ```typescript
32 | * const result = await checkKGHealth({
33 | * includeHistory: true,
34 | * generateReport: true,
35 | * days: 7
36 | * });
37 | * ```
38 | */
39 | export async function checkKGHealth(
40 | args: unknown,
41 | ): Promise<{ content: any[]; isError?: boolean }> {
42 | const startTime = Date.now();
43 |
44 | try {
45 | const { includeHistory, generateReport } = inputSchema.parse(args);
46 |
47 | // Get KG instances
48 | const kg = await getKnowledgeGraph();
49 | const storage = await getKGStorage();
50 |
51 | // Create health monitor
52 | const monitor = new KGHealthMonitor();
53 |
54 | // Calculate health
55 | const health = await monitor.calculateHealth(kg, storage);
56 |
57 | // Generate report if requested
58 | let report = "";
59 | if (generateReport) {
60 | report = generateHealthReport(health, includeHistory);
61 | }
62 |
63 | const response: MCPToolResponse<KGHealthMetrics> = {
64 | success: true,
65 | data: health,
66 | metadata: {
67 | toolVersion: "1.0.0",
68 | executionTime: Date.now() - startTime,
69 | timestamp: new Date().toISOString(),
70 | },
71 | recommendations: health.recommendations.map((rec) => ({
72 | type: rec.priority === "high" ? "warning" : "info",
73 | title: rec.action,
74 | description: `Expected impact: +${rec.expectedImpact} health score | Effort: ${rec.effort}`,
75 | })),
76 | nextSteps: [
77 | {
78 | action: "Apply Recommendations",
79 | toolRequired: "manual",
80 | description:
81 | "Implement high-priority recommendations to improve health",
82 | priority: "high",
83 | },
84 | ...(health.issues.filter((i) => i.severity === "critical").length > 0
85 | ? [
86 | {
87 | action: "Fix Critical Issues",
88 | toolRequired: "manual" as const,
89 | description: "Address critical issues immediately",
90 | priority: "high" as const,
91 | },
92 | ]
93 | : []),
94 | ],
95 | };
96 |
97 | if (generateReport) {
98 | // Add report as additional content
99 | return {
100 | content: [
101 | ...formatMCPResponse(response).content,
102 | {
103 | type: "text",
104 | text: report,
105 | },
106 | ],
107 | };
108 | }
109 |
110 | return formatMCPResponse(response);
111 | } catch (error) {
112 | const errorResponse: MCPToolResponse = {
113 | success: false,
114 | error: {
115 | code: "HEALTH_CHECK_FAILED",
116 | message: `Failed to check KG health: ${error}`,
117 | resolution: "Ensure the knowledge graph is properly initialized",
118 | },
119 | metadata: {
120 | toolVersion: "1.0.0",
121 | executionTime: Date.now() - startTime,
122 | timestamp: new Date().toISOString(),
123 | },
124 | };
125 | return formatMCPResponse(errorResponse);
126 | }
127 | }
128 |
129 | /**
130 | * Generate a human-readable health report
131 | */
132 | function generateHealthReport(
133 | health: KGHealthMetrics,
134 | includeHistory: boolean,
135 | ): string {
136 | const lines: string[] = [];
137 |
138 | // Header
139 | lines.push("═══════════════════════════════════════════════════════");
140 | lines.push(" KNOWLEDGE GRAPH HEALTH REPORT");
141 | lines.push("═══════════════════════════════════════════════════════");
142 | lines.push("");
143 |
144 | // Overall Health
145 | lines.push(
146 | `📊 OVERALL HEALTH: ${health.overallHealth}/100 ${getHealthEmoji(
147 | health.overallHealth,
148 | )}`,
149 | );
150 | lines.push(
151 | ` Trend: ${health.trends.healthTrend.toUpperCase()} ${getTrendEmoji(
152 | health.trends.healthTrend,
153 | )}`,
154 | );
155 | lines.push("");
156 |
157 | // Component Scores
158 | lines.push("Component Scores:");
159 | lines.push(
160 | ` • Data Quality: ${health.dataQuality.score}/100 ${getHealthEmoji(
161 | health.dataQuality.score,
162 | )}`,
163 | );
164 | lines.push(
165 | ` • Structure Health: ${health.structureHealth.score}/100 ${getHealthEmoji(
166 | health.structureHealth.score,
167 | )}`,
168 | );
169 | lines.push(
170 | ` • Performance: ${health.performance.score}/100 ${getHealthEmoji(
171 | health.performance.score,
172 | )}`,
173 | );
174 | lines.push("");
175 |
176 | // Graph Statistics
177 | lines.push("Graph Statistics:");
178 | lines.push(` • Total Nodes: ${health.dataQuality.totalNodes}`);
179 | lines.push(` • Total Edges: ${health.dataQuality.totalEdges}`);
180 | lines.push(
181 | ` • Avg Connectivity: ${health.structureHealth.densityScore.toFixed(3)}`,
182 | );
183 | lines.push(
184 | ` • Storage Size: ${formatBytes(health.performance.storageSize)}`,
185 | );
186 | lines.push("");
187 |
188 | // Data Quality Details
189 | if (health.dataQuality.score < 90) {
190 | lines.push("⚠️ Data Quality Issues:");
191 | if (health.dataQuality.staleNodeCount > 0) {
192 | lines.push(
193 | ` • ${health.dataQuality.staleNodeCount} stale nodes (>30 days old)`,
194 | );
195 | }
196 | if (health.dataQuality.orphanedEdgeCount > 0) {
197 | lines.push(` • ${health.dataQuality.orphanedEdgeCount} orphaned edges`);
198 | }
199 | if (health.dataQuality.duplicateCount > 0) {
200 | lines.push(` • ${health.dataQuality.duplicateCount} duplicate entities`);
201 | }
202 | if (health.dataQuality.completenessScore < 0.8) {
203 | lines.push(
204 | ` • Completeness: ${Math.round(
205 | health.dataQuality.completenessScore * 100,
206 | )}%`,
207 | );
208 | }
209 | lines.push("");
210 | }
211 |
212 | // Critical Issues
213 | const criticalIssues = health.issues.filter((i) => i.severity === "critical");
214 | const highIssues = health.issues.filter((i) => i.severity === "high");
215 |
216 | if (criticalIssues.length > 0 || highIssues.length > 0) {
217 | lines.push("🚨 CRITICAL & HIGH PRIORITY ISSUES:");
218 | for (const issue of [...criticalIssues, ...highIssues].slice(0, 5)) {
219 | lines.push(` [${issue.severity.toUpperCase()}] ${issue.description}`);
220 | lines.push(` → ${issue.remediation}`);
221 | }
222 | lines.push("");
223 | }
224 |
225 | // Top Recommendations
226 | if (health.recommendations.length > 0) {
227 | lines.push("💡 TOP RECOMMENDATIONS:");
228 | for (const rec of health.recommendations.slice(0, 5)) {
229 | lines.push(` ${getPriorityIcon(rec.priority)} ${rec.action}`);
230 | lines.push(` Impact: +${rec.expectedImpact} | Effort: ${rec.effort}`);
231 | }
232 | lines.push("");
233 | }
234 |
235 | // Trends
236 | if (includeHistory) {
237 | lines.push("📈 TRENDS (Last 7 Days):");
238 | lines.push(
239 | ` • Health: ${health.trends.healthTrend} ${getTrendEmoji(
240 | health.trends.healthTrend,
241 | )}`,
242 | );
243 | lines.push(
244 | ` • Quality: ${health.trends.qualityTrend} ${getTrendEmoji(
245 | health.trends.qualityTrend,
246 | )}`,
247 | );
248 | lines.push(
249 | ` • Node Growth: ${health.trends.nodeGrowthRate.toFixed(1)} nodes/day`,
250 | );
251 | lines.push(
252 | ` • Edge Growth: ${health.trends.edgeGrowthRate.toFixed(1)} edges/day`,
253 | );
254 | lines.push("");
255 | }
256 |
257 | // Footer
258 | lines.push("═══════════════════════════════════════════════════════");
259 | lines.push(
260 | `Report generated: ${new Date(health.timestamp).toLocaleString()}`,
261 | );
262 | lines.push("═══════════════════════════════════════════════════════");
263 |
264 | return lines.join("\n");
265 | }
266 |
267 | // Helper functions
268 |
269 | function getHealthEmoji(score: number): string {
270 | if (score >= 90) return "🟢 Excellent";
271 | if (score >= 75) return "🟡 Good";
272 | if (score >= 60) return "🟠 Fair";
273 | return "🔴 Poor";
274 | }
275 |
276 | function getTrendEmoji(trend: string): string {
277 | if (trend === "improving") return "📈";
278 | if (trend === "degrading") return "📉";
279 | return "➡️";
280 | }
281 |
282 | function getPriorityIcon(priority: string): string {
283 | if (priority === "high") return "🔴";
284 | if (priority === "medium") return "🟡";
285 | return "🟢";
286 | }
287 |
288 | function formatBytes(bytes: number): string {
289 | if (bytes < 1024) return `${bytes} B`;
290 | if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
291 | if (bytes < 1024 * 1024 * 1024)
292 | return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
293 | return `${(bytes / (1024 * 1024 * 1024)).toFixed(1)} GB`;
294 | }
295 |
```
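A small consumer sketch for `checkKGHealth`. It assumes each content item is a text block and relies on the behavior shown above: with `generateReport: true`, the human-readable report is appended after the structured response produced by `formatMCPResponse()` (defined in `src/types/api.ts`).

```typescript
import { checkKGHealth } from "./kg-health-check.js";

async function printHealthReport() {
  const result = await checkKGHealth({
    includeHistory: true,
    generateReport: true,
    days: 7,
  });

  const texts = result.content
    .filter((item: any) => item.type === "text")
    .map((item: any) => item.text as string);

  // With generateReport: true, the formatted report is the last content item,
  // following the structured JSON from formatMCPResponse().
  const report = texts[texts.length - 1];
  console.log(report);
}

printHealthReport().catch(console.error);
```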
--------------------------------------------------------------------------------
/tests/tools/kg-health-check.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { promises as fs } from "fs";
2 | import path from "path";
3 | import os from "os";
4 | import { checkKGHealth } from "../../src/tools/kg-health-check";
5 | import {
6 | getKnowledgeGraph,
7 | getKGStorage,
8 | createOrUpdateProject,
9 | } from "../../src/memory/kg-integration";
10 |
11 | describe("KG Health Check Tool", () => {
12 | let tempDir: string;
13 | const originalCwd = process.cwd();
14 |
15 | beforeEach(async () => {
16 | tempDir = path.join(os.tmpdir(), `kg-health-${Date.now()}`);
17 | await fs.mkdir(tempDir, { recursive: true });
18 | process.chdir(tempDir);
19 | });
20 |
21 | afterEach(async () => {
22 | process.chdir(originalCwd);
23 | try {
24 | await fs.rm(tempDir, { recursive: true, force: true });
25 | } catch {
26 | // Ignore cleanup errors
27 | }
28 | });
29 |
30 | it("should perform basic health check", async () => {
31 | const result = await checkKGHealth({
32 | includeHistory: false,
33 | generateReport: false,
34 | });
35 |
36 | expect(result.content).toBeDefined();
37 | expect(result.content.length).toBeGreaterThan(0);
38 |
39 | // Should contain health metrics
40 | const text = result.content.map((c) => c.text).join(" ");
41 | expect(text).toContain("Health");
42 | });
43 |
44 | it("should include historical data when requested", async () => {
45 | const result = await checkKGHealth({
46 | includeHistory: true,
47 | generateReport: false,
48 | days: 7,
49 | });
50 |
51 | expect(result.content).toBeDefined();
52 | const text = result.content.map((c) => c.text).join(" ");
53 | expect(text).toContain("Health");
54 | });
55 |
56 | it("should generate detailed report", async () => {
57 | const result = await checkKGHealth({
58 | includeHistory: false,
59 | generateReport: true,
60 | });
61 |
62 | expect(result.content).toBeDefined();
63 | expect(result.content.length).toBeGreaterThan(0);
64 |
65 | // Report should contain detailed metrics
66 | const text = result.content.map((c) => c.text).join(" ");
67 | expect(text).toContain("Health");
68 | });
69 |
70 | it("should generate report with history included", async () => {
71 | const result = await checkKGHealth({
72 | includeHistory: true,
73 | generateReport: true,
74 | days: 14,
75 | });
76 |
77 | expect(result.content).toBeDefined();
78 | expect(result.content.length).toBeGreaterThan(1); // Should have formatted response + report
79 |
80 | const text = result.content.map((c) => c.text).join(" ");
81 | expect(text).toContain("Health");
82 | expect(text).toContain("TRENDS");
83 | });
84 |
85 | it("should handle errors gracefully", async () => {
86 | // Test with invalid parameters
87 | const result = await checkKGHealth({
88 | includeHistory: true,
89 | generateReport: true,
90 | days: -1, // Invalid
91 | });
92 |
93 | // Should either handle gracefully or return error
94 | expect(result.content).toBeDefined();
95 | });
96 |
97 | it("should calculate health score", async () => {
98 | const result = await checkKGHealth({
99 | includeHistory: false,
100 | generateReport: true,
101 | });
102 |
103 | expect(result.content).toBeDefined();
104 | const text = result.content.map((c) => c.text).join(" ");
105 |
106 | // Should contain some health indicator
107 | expect(text.length).toBeGreaterThan(0);
108 | });
109 |
110 | it("should include critical issues in next steps", async () => {
111 | // Create a project with some data to trigger health calculation
112 | const kg = await getKnowledgeGraph();
113 |
114 | // Add some nodes and edges to test health calculation
115 | kg.addNode({
116 | id: "test-node-1",
117 | type: "project",
118 | label: "Test Project",
119 | properties: { name: "test" },
120 | weight: 1.0,
121 | });
122 |
123 | const result = await checkKGHealth({
124 | includeHistory: false,
125 | generateReport: true,
126 | });
127 |
128 | expect(result.content).toBeDefined();
129 |
130 | // Check that the response structure is correct
131 | const text = result.content.map((c) => c.text).join(" ");
132 | expect(text).toBeTruthy();
133 | });
134 |
135 | it("should handle graph with high data quality score", async () => {
136 | const result = await checkKGHealth({
137 | includeHistory: false,
138 | generateReport: true,
139 | });
140 |
141 | expect(result.content).toBeDefined();
142 | const text = result.content.map((c) => c.text).join(" ");
143 |
144 | // Should complete without errors
145 | expect(text.length).toBeGreaterThan(0);
146 | });
147 |
148 | it("should use default values when parameters not provided", async () => {
149 | const result = await checkKGHealth({});
150 |
151 | expect(result.content).toBeDefined();
152 | expect(result.content.length).toBeGreaterThan(0);
153 | });
154 |
155 | it("should handle various health score ranges in report", async () => {
156 | // Test the helper functions indirectly through the report
157 | const result = await checkKGHealth({
158 | includeHistory: false,
159 | generateReport: true,
160 | });
161 |
162 | expect(result.content).toBeDefined();
163 | const text = result.content.map((c) => c.text).join(" ");
164 |
165 | // Should contain health indicators (emojis or text)
166 | expect(text.length).toBeGreaterThan(0);
167 | });
168 |
169 | it("should handle different trend directions in report", async () => {
170 | const result = await checkKGHealth({
171 | includeHistory: true,
172 | generateReport: true,
173 | });
174 |
175 | expect(result.content).toBeDefined();
176 | const text = result.content.map((c) => c.text).join(" ");
177 |
178 | // Report should include trend information
179 | expect(text).toContain("TRENDS");
180 | });
181 |
182 | it("should handle different priority levels in recommendations", async () => {
183 | const result = await checkKGHealth({
184 | includeHistory: false,
185 | generateReport: true,
186 | });
187 |
188 | expect(result.content).toBeDefined();
189 | const text = result.content.map((c) => c.text).join(" ");
190 |
191 | // Should complete without errors
192 | expect(text.length).toBeGreaterThan(0);
193 | });
194 |
195 | it("should handle different byte sizes in formatBytes", async () => {
196 | const result = await checkKGHealth({
197 | includeHistory: false,
198 | generateReport: true,
199 | });
200 |
201 | expect(result.content).toBeDefined();
202 | const text = result.content.map((c) => c.text).join(" ");
203 |
204 | // Report should include storage size
205 | expect(text.length).toBeGreaterThan(0);
206 | });
207 |
208 | it("should handle validation errors", async () => {
209 | const result = await checkKGHealth({
210 | days: 150, // Exceeds max of 90
211 | });
212 |
213 | expect(result.content).toBeDefined();
214 | // Should return error response
215 | const text = result.content.map((c) => c.text).join(" ");
216 | expect(text).toBeTruthy();
217 | });
218 |
219 | it("should handle recommendations with different priorities", async () => {
220 | const result = await checkKGHealth({
221 | includeHistory: false,
222 | generateReport: true,
223 | });
224 |
225 | expect(result.content).toBeDefined();
226 |
227 | // Check response structure
228 | const text = result.content.map((c) => c.text).join(" ");
229 | expect(text.length).toBeGreaterThan(0);
230 | });
231 |
232 | it("should detect and report data quality issues", async () => {
233 | const kg = await getKnowledgeGraph();
234 |
235 | // Create nodes
236 | kg.addNode({
237 | id: "test-project-1",
238 | type: "project",
239 | label: "Test Project 1",
240 | properties: { name: "test-project-1" },
241 | weight: 1.0,
242 | });
243 |
244 | kg.addNode({
245 | id: "test-tech-1",
246 | type: "technology",
247 | label: "TypeScript",
248 | properties: { name: "typescript" },
249 | weight: 1.0,
250 | });
251 |
252 | // Create an orphaned edge (edge pointing to non-existent node)
253 | kg.addEdge({
254 | source: "test-tech-1",
255 | target: "non-existent-node-id",
256 | type: "uses",
257 | weight: 1.0,
258 | confidence: 0.9,
259 | properties: {},
260 | });
261 |
262 | const result = await checkKGHealth({
263 | includeHistory: false,
264 | generateReport: true,
265 | });
266 |
267 | expect(result.content).toBeDefined();
268 | const text = result.content.map((c) => c.text).join(" ");
269 |
270 | // Should report data quality issues with score < 90
271 | expect(text).toContain("Health");
272 | // The report should show details about stale nodes, orphaned edges, etc.
273 | expect(text.length).toBeGreaterThan(100); // Detailed report
274 | });
275 |
276 | it("should test all priority icon levels", async () => {
277 | // This test indirectly tests getPriorityIcon for "high", "medium", and "low"
278 | const result = await checkKGHealth({
279 | includeHistory: false,
280 | generateReport: true,
281 | });
282 |
283 | expect(result.content).toBeDefined();
284 | const text = result.content.map((c) => c.text).join(" ");
285 |
286 | // The report should include priority indicators (emojis)
287 | expect(text.length).toBeGreaterThan(0);
288 | });
289 |
290 | it("should test formatBytes for different size ranges", async () => {
291 | // The tool will calculate storage size which triggers formatBytes
292 | // This covers: bytes, KB, MB ranges
293 | const result = await checkKGHealth({
294 | includeHistory: false,
295 | generateReport: true,
296 | });
297 |
298 | expect(result.content).toBeDefined();
299 | const text = result.content.map((c) => c.text).join(" ");
300 |
301 | // Storage size should be included in the report
302 | expect(text.length).toBeGreaterThan(0);
303 | });
304 | });
305 |
```
--------------------------------------------------------------------------------
/docs/development/MCP_INSPECTOR_TESTING.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | documcp:
3 | last_updated: "2025-11-20T00:46:21.945Z"
4 | last_validated: "2025-11-20T00:46:21.945Z"
5 | auto_updated: false
6 | update_frequency: monthly
7 | ---
8 |
9 | # MCP Inspector Testing Guide
10 |
11 | The MCP Inspector is an in-browser debugging tool for testing MCP servers without connecting them to an actual client application. This guide explains how to use it for DocuMCP development.
12 |
13 | ## Prerequisites
14 |
15 | - Node.js 20+ installed
16 | - DocuMCP repository cloned
17 | - Dependencies installed (`npm install`)
18 |
19 | ## Quick Start
20 |
21 | ### Option 1: Build and Launch Inspector
22 |
23 | ```bash
24 | npm run build:inspect
25 | ```
26 |
27 | This command:
28 |
29 | 1. Compiles TypeScript to `dist/`
30 | 2. Launches MCP Inspector
31 | 3. Opens browser at `http://localhost:5173` (or similar)
32 |
33 | ### Option 2: Launch Inspector with Existing Build
34 |
35 | ```bash
36 | npm run build # First build (if needed)
37 | npm run dev:inspect # Then launch inspector
38 | ```
39 |
40 | ## Using the Inspector
41 |
42 | ### 1. Connect to Server
43 |
44 | 1. Open the browser URL provided by the inspector
45 | 2. Click the "Connect" button in the left sidebar
46 | 3. Wait for connection confirmation
47 |
48 | ### 2. Test Tools
49 |
50 | The Tools section lists all available MCP tools:
51 |
52 | **Example: Testing `analyze_repository`**
53 |
54 | 1. Click "Tools" in the top navigation
55 | 2. Select "analyze_repository" from the list
56 | 3. In the right panel, enter parameters:
57 | ```json
58 | {
59 | "path": "./",
60 | "depth": "standard"
61 | }
62 | ```
63 | 4. Click "Run Tool"
64 | 5. Verify the output includes:
65 | - File counts
66 | - Language detection
67 | - Dependency analysis
68 | - Memory insights
69 |
70 | **Example: Testing `recommend_ssg`**
71 |
72 | 1. First run `analyze_repository` (as above) to get an `analysisId`
73 | 2. Select "recommend_ssg"
74 | 3. Enter parameters:
75 | ```json
76 | {
77 | "analysisId": "<id-from-previous-analysis>",
78 | "userId": "test-user",
79 | "preferences": {
80 | "priority": "simplicity",
81 | "ecosystem": "javascript"
82 | }
83 | }
84 | ```
85 | 4. Click "Run Tool"
86 | 5. Verify recommendation includes:
87 | - Recommended SSG
88 | - Confidence score
89 | - Reasoning
90 | - Alternative options
91 |
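If you prefer to script these same checks instead of clicking through the browser UI, the MCP TypeScript SDK client can drive the server directly. The sketch below is illustrative only: it assumes the built server entry point is `dist/index.js` (as in the CI example later in this guide) and reuses the `analyze_repository` parameters shown above; adapt names and paths to your setup.

```typescript
// Minimal sketch: call analyze_repository over stdio using the MCP TypeScript SDK.
// Assumes `npm run build` has produced dist/index.js.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main(): Promise<void> {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });
  const client = new Client({ name: "inspector-smoke-test", version: "0.0.1" });
  await client.connect(transport);

  // Same parameters as the inspector example above.
  const result = await client.callTool({
    name: "analyze_repository",
    arguments: { path: "./", depth: "standard" },
  });
  console.log(JSON.stringify(result.content, null, 2));

  await client.close();
}

main().catch(console.error);
```
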
92 | ### 3. Test Resources
93 |
94 | Resources provide static, read-only data for client application UIs:
95 |
96 | **Example: Testing SSG List**
97 |
98 | 1. Click "Resources" in the top navigation
99 | 2. Select "documcp://ssgs/available"
100 | 3. Verify output shows all 5 SSGs:
101 | - Jekyll
102 | - Hugo
103 | - Docusaurus
104 | - MkDocs
105 | - Eleventy
106 | 4. Check each SSG includes:
107 | - ID, name, description
108 | - Language, complexity, build speed
109 | - Best use cases
110 |
111 | **Example: Testing Configuration Templates**
112 |
113 | 1. Select "documcp://templates/jekyll-config"
114 | 2. Verify YAML template is returned
115 | 3. Test other templates:
116 | - `documcp://templates/hugo-config`
117 | - `documcp://templates/docusaurus-config`
118 | - `documcp://templates/mkdocs-config`
119 | - `documcp://templates/eleventy-config`
120 | - `documcp://templates/diataxis-structure`
121 |
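For scripted checks, resources can be read with the same SDK client used in the tools sketch above. This is a sketch rather than a maintained test; the URIs come from the lists in this section.

```typescript
// Sketch: read resources programmatically, reusing the connected `client`
// from the analyze_repository example earlier in this guide.
const ssgList = await client.readResource({ uri: "documcp://ssgs/available" });
console.log(ssgList.contents[0]); // Expect metadata for all 5 SSGs

const jekyllConfig = await client.readResource({
  uri: "documcp://templates/jekyll-config",
});
console.log(jekyllConfig.contents[0]); // Expect a YAML template
```
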
122 | ### 4. Test Prompts
123 |
124 | Prompts provide pre-written instructions for specialized tasks:
125 |
126 | **Example: Testing `tutorial-writer`**
127 |
128 | 1. Click "Prompts" in the top navigation
129 | 2. Select "tutorial-writer"
130 | 3. Provide arguments:
131 | ```json
132 | {
133 | "project_path": "./",
134 | "target_audience": "beginners",
135 | "learning_goal": "deploy first documentation site"
136 | }
137 | ```
138 | 4. Click "Get Prompt"
139 | 5. Verify prompt messages include:
140 | - Project context (languages, frameworks)
141 | - Diataxis tutorial requirements
142 | - Step-by-step structure guidance
143 |
144 | **Example: Testing `analyze-and-recommend` workflow**
145 |
146 | 1. Select "analyze-and-recommend"
147 | 2. Provide arguments:
148 | ```json
149 | {
150 | "project_path": "./",
151 | "analysis_depth": "standard",
152 | "preferences": "good community support"
153 | }
154 | ```
155 | 3. Verify workflow prompt includes:
156 | - Complete analysis workflow
157 | - SSG recommendation guidance
158 | - Implementation steps
159 |
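As with tools and resources, prompts can also be fetched from a script using the SDK client from the earlier sketch. This is illustrative only; the prompt name and argument names match the inspector example above.

```typescript
// Sketch: fetch the tutorial-writer prompt, reusing the connected `client`
// from the analyze_repository example.
const prompt = await client.getPrompt({
  name: "tutorial-writer",
  arguments: {
    project_path: "./",
    target_audience: "beginners",
    learning_goal: "deploy first documentation site",
  },
});

// Each message should carry project context and Diataxis tutorial guidance.
for (const message of prompt.messages) {
  console.log(message.role, message.content);
}
```
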
160 | ## Common Test Cases
161 |
162 | ### Tool Testing Checklist
163 |
164 | - [ ] **analyze_repository**
165 |
166 | - [ ] Test with current directory (`./`)
167 | - [ ] Test with different depth levels
168 | - [ ] Verify memory integration works
169 | - [ ] Check similar projects are found
170 |
171 | - [ ] **recommend_ssg**
172 |
173 | - [ ] Test with valid analysisId
174 | - [ ] Test different preference combinations
175 | - [ ] Verify confidence scores
176 | - [ ] Check historical data integration
177 |
178 | - [ ] **generate_config**
179 |
180 | - [ ] Test each SSG type
181 | - [ ] Verify output format
182 | - [ ] Check template variables
183 |
184 | - [ ] **setup_structure**
185 |
186 | - [ ] Test Diataxis structure creation
187 | - [ ] Verify all categories included
188 | - [ ] Check example content
189 |
190 | - [ ] **deploy_pages**
191 |
192 | - [ ] Test workflow generation
193 | - [ ] Verify GitHub Actions YAML
194 | - [ ] Check custom domain support
195 |
196 | - [ ] **validate_content**
197 | - [ ] Test with documentation path
198 | - [ ] Verify link checking
199 | - [ ] Check code block validation
200 |
201 | ### Resource Testing Checklist
202 |
203 | - [ ] **documcp://ssgs/available**
204 |
205 | - [ ] All 5 SSGs listed
206 | - [ ] Complete metadata for each
207 |
208 | - [ ] **Templates**
209 |
210 | - [ ] Jekyll config valid YAML
211 | - [ ] Hugo config valid YAML
212 | - [ ] Docusaurus config valid JS
213 | - [ ] MkDocs config valid YAML
214 | - [ ] Eleventy config valid JS
215 | - [ ] Diataxis structure valid JSON
216 |
217 | - [ ] **Workflows**
218 | - [ ] All workflows listed
219 | - [ ] Quick setup available
220 | - [ ] Full setup available
221 | - [ ] Guidance provided
222 |
223 | ### Prompt Testing Checklist
224 |
225 | - [ ] **Technical Writer Prompts**
226 |
227 | - [ ] tutorial-writer
228 | - [ ] howto-guide-writer
229 | - [ ] reference-writer
230 | - [ ] explanation-writer
231 | - [ ] diataxis-organizer
232 | - [ ] readme-optimizer
233 |
234 | - [ ] **Workflow Prompts**
235 | - [ ] analyze-and-recommend
236 | - [ ] setup-documentation
237 | - [ ] troubleshoot-deployment
238 |
239 | ## Troubleshooting
240 |
241 | ### Inspector Won't Connect
242 |
243 | **Problem:** Connection fails or times out
244 |
245 | **Solutions:**
246 |
247 | 1. Ensure server is built: `npm run build`
248 | 2. Check no other process is using the port
249 | 3. Try restarting: `Ctrl+C` and re-run `npm run dev:inspect`
250 |
251 | ### Tool Returns Error
252 |
253 | **Problem:** Tool execution fails with error message
254 |
255 | **Solutions:**
256 |
257 | 1. Check parameter format (must be valid JSON)
258 | 2. Verify required parameters are provided
259 | 3. Ensure file paths exist (for file-based tools)
260 | 4. Check server logs for detailed error messages
261 |
262 | ### Resource Not Found
263 |
264 | **Problem:** Resource URI returns "Resource not found" error
265 |
266 | **Solutions:**
267 |
268 | 1. Verify URI spelling matches exactly (case-sensitive)
269 | 2. Check resource list for available URIs
270 | 3. Ensure server version matches documentation
271 |
272 | ### Prompt Arguments Missing
273 |
274 | **Problem:** Prompt doesn't use provided arguments
275 |
276 | **Solutions:**
277 |
278 | 1. Check argument names match prompt definition
279 | 2. Verify JSON format is correct
280 | 3. Ensure all required arguments are provided
281 |
282 | ## Best Practices
283 |
284 | ### During Development
285 |
286 | 1. **Keep Inspector Open:** Launch the inspector at the start of your development session
287 | 2. **Test After Changes:** Re-run tool tests after modifying a tool's implementation
288 | 3. **Verify All Paths:** Test both success and error paths
289 | 4. **Check Edge Cases:** Test with unusual inputs, empty values, etc.
290 |
291 | ### Before Committing
292 |
293 | 1. **Full Tool Test:** Test at least one example from each tool
294 | 2. **Resource Validation:** Verify all resources return valid data
295 | 3. **Prompt Verification:** Check prompts generate correct messages
296 | 4. **Error Handling:** Test with invalid inputs to verify error messages
297 |
298 | ### For Bug Fixing
299 |
300 | 1. **Reproduce in Inspector:** Use the inspector to reproduce the bug consistently
301 | 2. **Test Fix:** Verify the fix works in the inspector before integration testing
302 | 3. **Regression Test:** Test related tools to ensure no regressions
303 | 4. **Document:** Add a test case to this guide if the bug was subtle
304 |
305 | ## Integration with Development Workflow
306 |
307 | ### Daily Development
308 |
309 | ```bash
310 | # Morning startup
311 | npm run build:inspect
312 |
313 | # Keep inspector tab open
314 | # Make code changes in editor
315 | # Test changes in inspector
316 | # Iterate until working
317 |
318 | # Before lunch/end of day
319 | npm run build && npm test
320 | ```
321 |
322 | ### Pre-Commit Workflow
323 |
324 | ```bash
325 | # Run full validation
326 | npm run ci
327 |
328 | # Test in inspector
329 | npm run build:inspect
330 |
331 | # Manual spot checks on key tools
332 | # Commit when all checks pass
333 | ```
334 |
335 | ### CI/CD Integration
336 |
337 | While the MCP Inspector is primarily a local development tool, you can add automated checks:
338 |
339 | ```bash
340 | # In CI pipeline (future enhancement)
341 | npm run build
342 | npx @modelcontextprotocol/inspector dist/index.js --test automated-tests.json
343 | ```
344 |
345 | ## Additional Resources
346 |
347 | - **MCP Inspector GitHub:** https://github.com/modelcontextprotocol/inspector
348 | - **MCP Specification:** https://modelcontextprotocol.io/docs
349 | - **MCP TypeScript SDK:** https://github.com/modelcontextprotocol/typescript-sdk
350 | - **DocuMCP Architecture:** See `docs/adrs/` for detailed architectural decisions
351 |
352 | ## Feedback
353 |
354 | If you encounter issues with MCP Inspector or this guide:
355 |
356 | 1. Check for known issues: https://github.com/modelcontextprotocol/inspector/issues
357 | 2. Report DocuMCP-specific issues: https://github.com/anthropics/documcp/issues
358 | 3. Suggest improvements to this guide via pull request
359 |
360 | ---
361 |
362 | **Last Updated:** 2025-10-09
363 | **Version:** 1.0.0
364 |
```
--------------------------------------------------------------------------------
/tests/prompts/technical-writer-prompts.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import {
2 | generateTechnicalWriterPrompts,
3 | analyzeProjectContext,
4 | } from "../../src/prompts/technical-writer-prompts.js";
5 | import { promises as fs } from "fs";
6 | import { join } from "path";
7 | import { tmpdir } from "os";
8 |
9 | describe("Technical Writer Diataxis Prompts", () => {
10 | let testProjectPath: string;
11 |
12 | beforeEach(async () => {
13 | // Create a temporary test project
14 | testProjectPath = join(tmpdir(), `test-project-${Date.now()}`);
15 | await fs.mkdir(testProjectPath, { recursive: true });
16 |
17 | // Create a basic package.json
18 | const packageJson = {
19 | name: "test-project",
20 | version: "1.0.0",
21 | dependencies: {
22 | react: "^18.0.0",
23 | typescript: "^5.0.0",
24 | },
25 | scripts: {
26 | test: "jest",
27 | },
28 | };
29 | await fs.writeFile(
30 | join(testProjectPath, "package.json"),
31 | JSON.stringify(packageJson, null, 2),
32 | );
33 |
34 | // Create a basic README.md
35 | await fs.writeFile(
36 | join(testProjectPath, "README.md"),
37 | "# Test Project\n\nA test project for testing.",
38 | );
39 |
40 | // Create a test directory
41 | await fs.mkdir(join(testProjectPath, "tests"), { recursive: true });
42 | await fs.writeFile(
43 | join(testProjectPath, "tests", "example.test.js"),
44 | 'test("example", () => { expect(true).toBe(true); });',
45 | );
46 |
47 | // Create a CI file
48 | await fs.mkdir(join(testProjectPath, ".github", "workflows"), {
49 | recursive: true,
50 | });
51 | await fs.writeFile(
52 | join(testProjectPath, ".github", "workflows", "ci.yml"),
53 | "name: CI\non: [push, pull_request]",
54 | );
55 | });
56 |
57 | afterEach(async () => {
58 | // Clean up test project
59 | try {
60 | await fs.rm(testProjectPath, { recursive: true, force: true });
61 | } catch (error) {
62 | // Ignore cleanup errors
63 | }
64 | });
65 |
66 | describe("generateTechnicalWriterPrompts", () => {
67 | it("should generate tutorial writer prompts", async () => {
68 | const prompts = await generateTechnicalWriterPrompts(
69 | "tutorial-writer",
70 | testProjectPath,
71 | );
72 |
73 | expect(prompts.length).toBeGreaterThan(0);
74 | expect(prompts[0]).toHaveProperty("role");
75 | expect(prompts[0]).toHaveProperty("content");
76 | expect(prompts[0].content).toHaveProperty("type", "text");
77 | expect(prompts[0].content).toHaveProperty("text");
78 | expect(prompts[0].content.text).toContain("tutorial");
79 | expect(prompts[0].content.text).toContain("Diataxis");
80 | });
81 |
82 | it("should generate how-to guide writer prompts", async () => {
83 | const prompts = await generateTechnicalWriterPrompts(
84 | "howto-guide-writer",
85 | testProjectPath,
86 | );
87 |
88 | expect(prompts.length).toBeGreaterThan(0);
89 | expect(prompts[0].content.text).toContain("how-to guide");
90 | expect(prompts[0].content.text).toContain("Problem-oriented");
91 | });
92 |
93 | it("should generate reference writer prompts", async () => {
94 | const prompts = await generateTechnicalWriterPrompts(
95 | "reference-writer",
96 | testProjectPath,
97 | );
98 |
99 | expect(prompts.length).toBeGreaterThan(0);
100 | expect(prompts[0].content.text).toContain("reference documentation");
101 | expect(prompts[0].content.text).toContain("Information-oriented");
102 | });
103 |
104 | it("should generate explanation writer prompts", async () => {
105 | const prompts = await generateTechnicalWriterPrompts(
106 | "explanation-writer",
107 | testProjectPath,
108 | );
109 |
110 | expect(prompts.length).toBeGreaterThan(0);
111 | expect(prompts[0].content.text).toContain("explanation documentation");
112 | expect(prompts[0].content.text).toContain("Understanding-oriented");
113 | });
114 |
115 | it("should generate diataxis organizer prompts", async () => {
116 | const prompts = await generateTechnicalWriterPrompts(
117 | "diataxis-organizer",
118 | testProjectPath,
119 | );
120 |
121 | expect(prompts.length).toBeGreaterThan(0);
122 | expect(prompts[0].content.text).toContain("Diataxis framework");
123 | expect(prompts[0].content.text).toContain("organize");
124 | });
125 |
126 | it("should generate readme optimizer prompts", async () => {
127 | const prompts = await generateTechnicalWriterPrompts(
128 | "readme-optimizer",
129 | testProjectPath,
130 | );
131 |
132 | expect(prompts.length).toBeGreaterThan(0);
133 | expect(prompts[0].content.text).toContain("README");
134 | expect(prompts[0].content.text).toContain("Diataxis-aware");
135 | });
136 |
137 | it("should generate analyze-and-recommend prompts", async () => {
138 | const prompts = await generateTechnicalWriterPrompts(
139 | "analyze-and-recommend",
140 | testProjectPath,
141 | );
142 |
143 | expect(prompts.length).toBeGreaterThan(0);
144 | expect(prompts[0].content.text.toLowerCase()).toContain("analyz");
145 | expect(prompts[0].content.text.toLowerCase()).toContain("recommend");
146 | });
147 |
148 | it("should generate setup-documentation prompts", async () => {
149 | const prompts = await generateTechnicalWriterPrompts(
150 | "setup-documentation",
151 | testProjectPath,
152 | );
153 |
154 | expect(prompts.length).toBeGreaterThan(0);
155 | expect(prompts[0].content.text).toContain("documentation");
156 | });
157 |
158 | it("should generate troubleshoot-deployment prompts", async () => {
159 | const prompts = await generateTechnicalWriterPrompts(
160 | "troubleshoot-deployment",
161 | testProjectPath,
162 | );
163 |
164 | expect(prompts.length).toBeGreaterThan(0);
165 | expect(prompts[0].content.text).toContain("troubleshoot");
166 | expect(prompts[0].content.text).toContain("deployment");
167 | });
168 |
169 | it("should generate maintain-documentation-freshness prompts", async () => {
170 | const prompts = await generateTechnicalWriterPrompts(
171 | "maintain-documentation-freshness",
172 | testProjectPath,
173 | { action: "track", preset: "monthly" },
174 | );
175 |
176 | expect(prompts.length).toBeGreaterThan(0);
177 | expect(prompts[0].content.text).toContain("freshness");
178 | expect(prompts[0].content.text).toContain("track");
179 | });
180 |
181 | it("should throw error for unknown prompt type", async () => {
182 | await expect(
183 | generateTechnicalWriterPrompts("unknown-type", testProjectPath),
184 | ).rejects.toThrow("Unknown prompt type: unknown-type");
185 | });
186 |
187 | it("should include project context in prompts", async () => {
188 | const prompts = await generateTechnicalWriterPrompts(
189 | "tutorial-writer",
190 | testProjectPath,
191 | );
192 |
193 | const promptText = prompts[0].content.text;
194 | expect(promptText).toContain("React"); // Should detect React from package.json
195 | expect(promptText).toContain("TypeScript"); // Should detect TypeScript
196 | });
197 | });
198 |
199 | describe("analyzeProjectContext", () => {
200 | it("should analyze project context correctly", async () => {
201 | const context = await analyzeProjectContext(testProjectPath);
202 |
203 | expect(context).toHaveProperty("projectType");
204 | expect(context).toHaveProperty("languages");
205 | expect(context).toHaveProperty("frameworks");
206 | expect(context).toHaveProperty("hasTests");
207 | expect(context).toHaveProperty("hasCI");
208 | expect(context).toHaveProperty("readmeExists");
209 | expect(context).toHaveProperty("documentationGaps");
210 |
211 | // Check specific values based on our test setup
212 | expect(context.projectType).toBe("node_application");
213 | expect(context.languages).toContain("TypeScript");
214 | expect(context.frameworks).toContain("React");
215 | expect(context.hasTests).toBe(true);
216 | expect(context.hasCI).toBe(true);
217 | expect(context.readmeExists).toBe(true);
218 | expect(context.packageManager).toBe("npm");
219 | });
220 |
221 | it("should detect documentation gaps", async () => {
222 | const context = await analyzeProjectContext(testProjectPath);
223 |
224 | expect(Array.isArray(context.documentationGaps)).toBe(true);
225 | // Should detect missing documentation since we only have a basic README
226 | expect(context.documentationGaps.length).toBeGreaterThan(0);
227 | });
228 |
229 | it("should handle projects without package.json", async () => {
230 | // Create a project without package.json
231 | const simpleProjectPath = join(tmpdir(), `simple-project-${Date.now()}`);
232 | await fs.mkdir(simpleProjectPath, { recursive: true });
233 |
234 | try {
235 | const context = await analyzeProjectContext(simpleProjectPath);
236 |
237 | expect(context.projectType).toBe("unknown");
238 | expect(context.languages).toEqual([]);
239 | expect(context.frameworks).toEqual([]);
240 | expect(context.readmeExists).toBe(false);
241 | } finally {
242 | await fs.rm(simpleProjectPath, { recursive: true, force: true });
243 | }
244 | });
245 |
246 | it("should detect yarn package manager", async () => {
247 | // Create yarn.lock to simulate yarn project
248 | await fs.writeFile(join(testProjectPath, "yarn.lock"), "# Yarn lockfile");
249 |
250 | const context = await analyzeProjectContext(testProjectPath);
251 | expect(context.packageManager).toBe("yarn");
252 | });
253 |
254 | it("should detect pnpm package manager", async () => {
255 | // Create pnpm-lock.yaml to simulate pnpm project
256 | await fs.writeFile(
257 | join(testProjectPath, "pnpm-lock.yaml"),
258 | "lockfileVersion: 5.4",
259 | );
260 |
261 | const context = await analyzeProjectContext(testProjectPath);
262 | expect(context.packageManager).toBe("pnpm");
263 | });
264 | });
265 | });
266 |
```
--------------------------------------------------------------------------------
/src/utils/freshness-tracker.ts:
--------------------------------------------------------------------------------
```typescript
1 | /**
2 | * Documentation Freshness Tracking Utilities
3 | *
4 | * Tracks when documentation files were last updated and validated,
5 | * supporting both short-term (minutes/hours) and long-term (days) staleness detection.
6 | */
7 |
8 | import fs from "fs/promises";
9 | import path from "path";
10 | import matter from "gray-matter";
11 |
12 | /**
13 | * Time unit for staleness threshold
14 | */
15 | export type TimeUnit = "minutes" | "hours" | "days";
16 |
17 | /**
18 | * Staleness threshold configuration
19 | */
20 | export interface StalenessThreshold {
21 | value: number;
22 | unit: TimeUnit;
23 | }
24 |
25 | /**
26 | * Predefined staleness levels
27 | */
28 | export const STALENESS_PRESETS = {
29 | realtime: { value: 30, unit: "minutes" as TimeUnit },
30 | active: { value: 1, unit: "hours" as TimeUnit },
31 | recent: { value: 24, unit: "hours" as TimeUnit },
32 | weekly: { value: 7, unit: "days" as TimeUnit },
33 | monthly: { value: 30, unit: "days" as TimeUnit },
34 | quarterly: { value: 90, unit: "days" as TimeUnit },
35 | } as const;
36 |
37 | /**
38 | * Documentation metadata tracked in frontmatter
39 | */
40 | export interface DocFreshnessMetadata {
41 | last_updated?: string; // ISO 8601 timestamp
42 | last_validated?: string; // ISO 8601 timestamp
43 | validated_against_commit?: string;
44 | auto_updated?: boolean;
45 | staleness_threshold?: StalenessThreshold;
46 | update_frequency?: keyof typeof STALENESS_PRESETS;
47 | }
48 |
49 | /**
50 | * Full frontmatter structure
51 | */
52 | export interface DocFrontmatter {
53 | title?: string;
54 | description?: string;
55 | documcp?: DocFreshnessMetadata;
56 | [key: string]: unknown;
57 | }
58 |
59 | /**
60 | * File freshness status
61 | */
62 | export interface FileFreshnessStatus {
63 | filePath: string;
64 | relativePath: string;
65 | hasMetadata: boolean;
66 | metadata?: DocFreshnessMetadata;
67 | lastUpdated?: Date;
68 | lastValidated?: Date;
69 | ageInMs?: number;
70 | ageFormatted?: string;
71 | isStale: boolean;
72 | stalenessLevel: "fresh" | "warning" | "stale" | "critical" | "unknown";
73 | staleDays?: number;
74 | }
75 |
76 | /**
77 | * Freshness scan report
78 | */
79 | export interface FreshnessScanReport {
80 | scannedAt: string;
81 | docsPath: string;
82 | totalFiles: number;
83 | filesWithMetadata: number;
84 | filesWithoutMetadata: number;
85 | freshFiles: number;
86 | warningFiles: number;
87 | staleFiles: number;
88 | criticalFiles: number;
89 | files: FileFreshnessStatus[];
90 | thresholds: {
91 | warning: StalenessThreshold;
92 | stale: StalenessThreshold;
93 | critical: StalenessThreshold;
94 | };
95 | }
96 |
97 | /**
98 | * Convert time threshold to milliseconds
99 | */
100 | export function thresholdToMs(threshold: StalenessThreshold): number {
101 | const { value, unit } = threshold;
102 | switch (unit) {
103 | case "minutes":
104 | return value * 60 * 1000;
105 | case "hours":
106 | return value * 60 * 60 * 1000;
107 | case "days":
108 | return value * 24 * 60 * 60 * 1000;
109 | default:
110 | throw new Error(`Unknown time unit: ${unit}`);
111 | }
112 | }
113 |
114 | /**
115 | * Format age in human-readable format
116 | */
117 | export function formatAge(ageMs: number): string {
118 | const seconds = Math.floor(ageMs / 1000);
119 | const minutes = Math.floor(seconds / 60);
120 | const hours = Math.floor(minutes / 60);
121 | const days = Math.floor(hours / 24);
122 |
123 | if (days > 0) {
124 | return `${days} day${days !== 1 ? "s" : ""}`;
125 | } else if (hours > 0) {
126 | return `${hours} hour${hours !== 1 ? "s" : ""}`;
127 | } else if (minutes > 0) {
128 | return `${minutes} minute${minutes !== 1 ? "s" : ""}`;
129 | } else {
130 | return `${seconds} second${seconds !== 1 ? "s" : ""}`;
131 | }
132 | }
133 |
134 | /**
135 | * Parse frontmatter from markdown file
136 | */
137 | export async function parseDocFrontmatter(
138 | filePath: string,
139 | ): Promise<DocFrontmatter> {
140 | try {
141 | const content = await fs.readFile(filePath, "utf-8");
142 | const { data } = matter(content);
143 | return data as DocFrontmatter;
144 | } catch (error) {
145 | return {};
146 | }
147 | }
148 |
149 | /**
150 | * Update frontmatter in markdown file
151 | */
152 | export async function updateDocFrontmatter(
153 | filePath: string,
154 | metadata: Partial<DocFreshnessMetadata>,
155 | ): Promise<void> {
156 | const content = await fs.readFile(filePath, "utf-8");
157 | const { data, content: body } = matter(content);
158 |
159 | const existingDocuMCP = (data.documcp as DocFreshnessMetadata) || {};
160 | const updatedData = {
161 | ...data,
162 | documcp: {
163 | ...existingDocuMCP,
164 | ...metadata,
165 | },
166 | };
167 |
168 | const newContent = matter.stringify(body, updatedData);
169 | await fs.writeFile(filePath, newContent, "utf-8");
170 | }
171 |
172 | /**
173 | * Calculate file freshness status
174 | */
175 | export function calculateFreshnessStatus(
176 | filePath: string,
177 | relativePath: string,
178 | frontmatter: DocFrontmatter,
179 | thresholds: {
180 | warning: StalenessThreshold;
181 | stale: StalenessThreshold;
182 | critical: StalenessThreshold;
183 | },
184 | ): FileFreshnessStatus {
185 | const metadata = frontmatter.documcp;
186 | const hasMetadata = !!metadata?.last_updated;
187 |
188 | if (!hasMetadata) {
189 | return {
190 | filePath,
191 | relativePath,
192 | hasMetadata: false,
193 | isStale: true,
194 | stalenessLevel: "unknown",
195 | };
196 | }
197 |
198 | const lastUpdated = new Date(metadata.last_updated!);
199 | const lastValidated = metadata.last_validated
200 | ? new Date(metadata.last_validated)
201 | : undefined;
202 | const now = new Date();
203 | const ageInMs = now.getTime() - lastUpdated.getTime();
204 | const ageFormatted = formatAge(ageInMs);
205 | const staleDays = Math.floor(ageInMs / (24 * 60 * 60 * 1000));
206 |
207 | // Determine staleness level
208 | let stalenessLevel: FileFreshnessStatus["stalenessLevel"];
209 | let isStale: boolean;
210 |
211 | const warningMs = thresholdToMs(thresholds.warning);
212 | const staleMs = thresholdToMs(thresholds.stale);
213 | const criticalMs = thresholdToMs(thresholds.critical);
214 |
215 | if (ageInMs >= criticalMs) {
216 | stalenessLevel = "critical";
217 | isStale = true;
218 | } else if (ageInMs >= staleMs) {
219 | stalenessLevel = "stale";
220 | isStale = true;
221 | } else if (ageInMs >= warningMs) {
222 | stalenessLevel = "warning";
223 | isStale = false;
224 | } else {
225 | stalenessLevel = "fresh";
226 | isStale = false;
227 | }
228 |
229 | return {
230 | filePath,
231 | relativePath,
232 | hasMetadata: true,
233 | metadata,
234 | lastUpdated,
235 | lastValidated,
236 | ageInMs,
237 | ageFormatted,
238 | isStale,
239 | stalenessLevel,
240 | staleDays,
241 | };
242 | }
243 |
244 | /**
245 | * Find all markdown files in directory recursively
246 | */
247 | export async function findMarkdownFiles(dir: string): Promise<string[]> {
248 | const files: string[] = [];
249 |
250 | async function scan(currentDir: string): Promise<void> {
251 | const entries = await fs.readdir(currentDir, { withFileTypes: true });
252 |
253 | for (const entry of entries) {
254 | const fullPath = path.join(currentDir, entry.name);
255 |
256 | // Skip common directories
257 | if (entry.isDirectory()) {
258 | if (
259 | !["node_modules", ".git", "dist", "build", ".documcp"].includes(
260 | entry.name,
261 | )
262 | ) {
263 | await scan(fullPath);
264 | }
265 | continue;
266 | }
267 |
268 | // Include markdown files
269 | if (entry.isFile() && /\.(md|mdx)$/i.test(entry.name)) {
270 | files.push(fullPath);
271 | }
272 | }
273 | }
274 |
275 | await scan(dir);
276 | return files;
277 | }
278 |
279 | /**
280 | * Scan directory for documentation freshness
281 | */
282 | export async function scanDocumentationFreshness(
283 | docsPath: string,
284 | thresholds: {
285 | warning?: StalenessThreshold;
286 | stale?: StalenessThreshold;
287 | critical?: StalenessThreshold;
288 | } = {},
289 | ): Promise<FreshnessScanReport> {
290 | // Default thresholds
291 | const finalThresholds = {
292 | warning: thresholds.warning || STALENESS_PRESETS.weekly,
293 | stale: thresholds.stale || STALENESS_PRESETS.monthly,
294 | critical: thresholds.critical || STALENESS_PRESETS.quarterly,
295 | };
296 |
297 | // Find all markdown files
298 | const markdownFiles = await findMarkdownFiles(docsPath);
299 |
300 | // Analyze each file
301 | const files: FileFreshnessStatus[] = [];
302 | for (const filePath of markdownFiles) {
303 | const relativePath = path.relative(docsPath, filePath);
304 | const frontmatter = await parseDocFrontmatter(filePath);
305 | const status = calculateFreshnessStatus(
306 | filePath,
307 | relativePath,
308 | frontmatter,
309 | finalThresholds,
310 | );
311 | files.push(status);
312 | }
313 |
314 | // Calculate summary statistics
315 | const totalFiles = files.length;
316 | const filesWithMetadata = files.filter((f) => f.hasMetadata).length;
317 | const filesWithoutMetadata = totalFiles - filesWithMetadata;
318 | const freshFiles = files.filter((f) => f.stalenessLevel === "fresh").length;
319 | const warningFiles = files.filter(
320 | (f) => f.stalenessLevel === "warning",
321 | ).length;
322 | const staleFiles = files.filter((f) => f.stalenessLevel === "stale").length;
323 | const criticalFiles = files.filter(
324 | (f) => f.stalenessLevel === "critical",
325 | ).length;
326 |
327 | return {
328 | scannedAt: new Date().toISOString(),
329 | docsPath,
330 | totalFiles,
331 | filesWithMetadata,
332 | filesWithoutMetadata,
333 | freshFiles,
334 | warningFiles,
335 | staleFiles,
336 | criticalFiles,
337 | files,
338 | thresholds: finalThresholds,
339 | };
340 | }
341 |
342 | /**
343 | * Initialize frontmatter for files without metadata
344 | */
345 | export async function initializeFreshnessMetadata(
346 | filePath: string,
347 | options: {
348 | updateFrequency?: keyof typeof STALENESS_PRESETS;
349 | autoUpdated?: boolean;
350 | } = {},
351 | ): Promise<void> {
352 | const frontmatter = await parseDocFrontmatter(filePath);
353 |
354 | if (!frontmatter.documcp?.last_updated) {
355 | const metadata: DocFreshnessMetadata = {
356 | last_updated: new Date().toISOString(),
357 | last_validated: new Date().toISOString(),
358 | auto_updated: options.autoUpdated ?? false,
359 | update_frequency: options.updateFrequency || "monthly",
360 | };
361 |
362 | if (options.updateFrequency) {
363 | metadata.staleness_threshold = STALENESS_PRESETS[options.updateFrequency];
364 | }
365 |
366 | await updateDocFrontmatter(filePath, metadata);
367 | }
368 | }
369 |
```
--------------------------------------------------------------------------------
/tests/tools/readme-best-practices.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { readmeBestPractices } from "../../src/tools/readme-best-practices.js";
2 | import { formatMCPResponse } from "../../src/types/api.js";
3 | import { writeFile, mkdir, rm } from "fs/promises";
4 | import { join } from "path";
5 |
6 | describe("readmeBestPractices", () => {
7 | const testDir = join(process.cwd(), "test-readme-best-practices-temp");
8 |
9 | beforeEach(async () => {
10 | // Create test directory
11 | await mkdir(testDir, { recursive: true });
12 | });
13 |
14 | afterEach(async () => {
15 | // Clean up test directory
16 | try {
17 | await rm(testDir, { recursive: true, force: true });
18 | } catch (error) {
19 | // Ignore cleanup errors
20 | }
21 | });
22 |
23 | describe("Basic Functionality", () => {
24 | test("should analyze README best practices with default parameters", async () => {
25 | const readmePath = join(testDir, "README.md");
26 | await writeFile(
27 | readmePath,
28 | `# Test Library
29 |
30 | ## Description
31 | This is a test library for analyzing best practices.
32 |
33 | ## Installation
34 | \`\`\`bash
35 | npm install test-library
36 | \`\`\`
37 |
38 | ## Usage
39 | \`\`\`javascript
40 | const lib = require('test-library');
41 | \`\`\`
42 |
43 | ## API Reference
44 | Function documentation here.
45 |
46 | ## Contributing
47 | Please read CONTRIBUTING.md.
48 |
49 | ## License
50 | MIT License
51 | `,
52 | );
53 |
54 | const result = await readmeBestPractices({
55 | readme_path: readmePath,
56 | });
57 |
58 | expect(result.success).toBe(true);
59 | expect(result.data).toBeDefined();
60 | expect(result.data!.bestPracticesReport).toBeDefined();
61 | expect(result.metadata).toBeDefined();
62 | });
63 |
64 | test("should handle different project types", async () => {
65 | const readmePath = join(testDir, "README.md");
66 | await writeFile(readmePath, "# Application\n\nA web application.");
67 |
68 | const result = await readmeBestPractices({
69 | readme_path: readmePath,
70 | project_type: "application",
71 | });
72 |
73 | expect(result.success).toBe(true);
74 | expect(result.data).toBeDefined();
75 | });
76 |
77 | test("should generate templates when requested", async () => {
78 | const outputDir = join(testDir, "output");
79 | await mkdir(outputDir, { recursive: true });
80 |
81 | const result = await readmeBestPractices({
82 | readme_path: join(testDir, "nonexistent.md"),
83 | generate_template: true,
84 | output_directory: outputDir,
85 | project_type: "library",
86 | });
87 |
88 | expect(result.success).toBe(true);
89 | expect(result.data).toBeDefined();
90 | });
91 |
92 | test("should handle different target audiences", async () => {
93 | const readmePath = join(testDir, "README.md");
94 | await writeFile(readmePath, "# Advanced Tool\n\nFor expert users.");
95 |
96 | const result = await readmeBestPractices({
97 | readme_path: readmePath,
98 | target_audience: "advanced",
99 | });
100 |
101 | expect(result.success).toBe(true);
102 | expect(result.data).toBeDefined();
103 | });
104 | });
105 |
106 | describe("Error Handling", () => {
107 | test("should handle missing README file without template generation", async () => {
108 | const result = await readmeBestPractices({
109 | readme_path: join(testDir, "nonexistent.md"),
110 | generate_template: false,
111 | });
112 |
113 | expect(result.success).toBe(false);
114 | expect(result.error).toBeDefined();
115 | expect(result.error!.code).toBe("README_NOT_FOUND");
116 | });
117 |
118 | test("should handle invalid project type", async () => {
119 | const readmePath = join(testDir, "README.md");
120 | await writeFile(readmePath, "# Test");
121 |
122 | const result = await readmeBestPractices({
123 | readme_path: readmePath,
124 | project_type: "invalid_type" as any,
125 | });
126 |
127 | expect(result.success).toBe(false);
128 | expect(result.error).toBeDefined();
129 | });
130 |
131 | test("should handle invalid target audience", async () => {
132 | const readmePath = join(testDir, "README.md");
133 | await writeFile(readmePath, "# Test");
134 |
135 | const result = await readmeBestPractices({
136 | readme_path: readmePath,
137 | target_audience: "invalid_audience" as any,
138 | });
139 |
140 | expect(result.success).toBe(false);
141 | expect(result.error).toBeDefined();
142 | });
143 | });
144 |
145 | describe("Best Practices Analysis", () => {
146 | test("should evaluate checklist items", async () => {
147 | const readmePath = join(testDir, "README.md");
148 | await writeFile(
149 | readmePath,
150 | `# Complete Library
151 |
152 | ## Table of Contents
153 | - [Installation](#installation)
154 | - [Usage](#usage)
155 |
156 | ## Description
157 | Detailed description of the library.
158 |
159 | ## Installation
160 | Installation instructions here.
161 |
162 | ## Usage
163 | Usage examples here.
164 |
165 | ## API Reference
166 | API documentation.
167 |
168 | ## Examples
169 | Code examples.
170 |
171 | ## Contributing
172 | Contributing guidelines.
173 |
174 | ## License
175 | MIT License
176 |
177 | ## Support
178 | Support information.
179 | `,
180 | );
181 |
182 | const result = await readmeBestPractices({
183 | readme_path: readmePath,
184 | project_type: "library",
185 | });
186 |
187 | expect(result.success).toBe(true);
188 | expect(result.data!.bestPracticesReport.checklist).toBeDefined();
189 | expect(Array.isArray(result.data!.bestPracticesReport.checklist)).toBe(
190 | true,
191 | );
192 | expect(result.data!.bestPracticesReport.checklist.length).toBeGreaterThan(
193 | 0,
194 | );
195 | });
196 |
197 | test("should calculate overall score and grade", async () => {
198 | const readmePath = join(testDir, "README.md");
199 | await writeFile(readmePath, "# Basic Project\n\nMinimal content.");
200 |
201 | const result = await readmeBestPractices({
202 | readme_path: readmePath,
203 | });
204 |
205 | expect(result.success).toBe(true);
206 | expect(
207 | result.data!.bestPracticesReport.overallScore,
208 | ).toBeGreaterThanOrEqual(0);
209 | expect(result.data!.bestPracticesReport.overallScore).toBeLessThanOrEqual(
210 | 100,
211 | );
212 | expect(result.data!.bestPracticesReport.grade).toBeDefined();
213 | });
214 |
215 | test("should provide recommendations", async () => {
216 | const readmePath = join(testDir, "README.md");
217 | await writeFile(readmePath, "# Incomplete Project");
218 |
219 | const result = await readmeBestPractices({
220 | readme_path: readmePath,
221 | });
222 |
223 | expect(result.success).toBe(true);
224 | expect(result.data!.recommendations).toBeDefined();
225 | expect(Array.isArray(result.data!.recommendations)).toBe(true);
226 | expect(result.data!.nextSteps).toBeDefined();
227 | expect(Array.isArray(result.data!.nextSteps)).toBe(true);
228 | });
229 |
230 | test("should provide summary metrics", async () => {
231 | const readmePath = join(testDir, "README.md");
232 | await writeFile(
233 | readmePath,
234 | `# Project
235 |
236 | ## Description
237 | Basic description.
238 |
239 | ## Installation
240 | Install steps.
241 | `,
242 | );
243 |
244 | const result = await readmeBestPractices({
245 | readme_path: readmePath,
246 | });
247 |
248 | expect(result.success).toBe(true);
249 | expect(result.data!.bestPracticesReport.summary).toBeDefined();
250 | expect(
251 | result.data!.bestPracticesReport.summary.criticalIssues,
252 | ).toBeGreaterThanOrEqual(0);
253 | expect(
254 | result.data!.bestPracticesReport.summary.importantIssues,
255 | ).toBeGreaterThanOrEqual(0);
256 | expect(
257 | result.data!.bestPracticesReport.summary.sectionsPresent,
258 | ).toBeGreaterThanOrEqual(0);
259 | expect(
260 | result.data!.bestPracticesReport.summary.totalSections,
261 | ).toBeGreaterThan(0);
262 | });
263 | });
264 |
265 | describe("Template Generation", () => {
266 | test("should generate README template when file is missing", async () => {
267 | const outputDir = join(testDir, "template-output");
268 | await mkdir(outputDir, { recursive: true });
269 |
270 | const result = await readmeBestPractices({
271 | readme_path: join(testDir, "missing.md"),
272 | generate_template: true,
273 | output_directory: outputDir,
274 | project_type: "tool",
275 | include_community_files: true,
276 | });
277 |
278 | expect(result.success).toBe(true);
279 | expect(result.data).toBeDefined();
280 | });
281 |
282 | test("should handle template generation without community files", async () => {
283 | const outputDir = join(testDir, "no-community-output");
284 | await mkdir(outputDir, { recursive: true });
285 |
286 | const result = await readmeBestPractices({
287 | readme_path: join(testDir, "missing.md"),
288 | generate_template: true,
289 | output_directory: outputDir,
290 | include_community_files: false,
291 | });
292 |
293 | expect(result.success).toBe(true);
294 | expect(result.data).toBeDefined();
295 | });
296 | });
297 |
298 | describe("Response Format", () => {
299 | test("should return MCPToolResponse structure", async () => {
300 | const readmePath = join(testDir, "README.md");
301 | await writeFile(readmePath, "# Test Project");
302 |
303 | const result = await readmeBestPractices({
304 | readme_path: readmePath,
305 | });
306 |
307 | expect(result.success).toBeDefined();
308 | expect(result.metadata).toBeDefined();
309 | expect(result.metadata.toolVersion).toBe("1.0.0");
310 | expect(result.metadata.executionTime).toBeGreaterThanOrEqual(0);
311 | expect(result.metadata.timestamp).toBeDefined();
312 | expect(result.metadata.analysisId).toBeDefined();
313 | });
314 |
315 | test("should format properly with formatMCPResponse", async () => {
316 | const readmePath = join(testDir, "README.md");
317 | await writeFile(readmePath, "# Test Project");
318 |
319 | const result = await readmeBestPractices({
320 | readme_path: readmePath,
321 | });
322 |
323 | // Test that the result can be formatted without errors
324 | const formatted = formatMCPResponse(result);
325 | expect(formatted.content).toBeDefined();
326 | expect(Array.isArray(formatted.content)).toBe(true);
327 | expect(formatted.content.length).toBeGreaterThan(0);
328 | expect(formatted.isError).toBe(false);
329 | });
330 | });
331 | });
332 |
```
--------------------------------------------------------------------------------
/src/benchmarks/performance.ts:
--------------------------------------------------------------------------------
```typescript
1 | // Performance benchmarking system per PERF-001 rules
2 | import { promises as fs } from "fs";
3 | import path from "path";
4 | import { analyzeRepository } from "../tools/analyze-repository.js";
5 |
6 | export interface BenchmarkResult {
7 | repoSize: "small" | "medium" | "large";
8 | fileCount: number;
9 | executionTime: number;
10 | targetTime: number;
11 | passed: boolean;
12 | performanceRatio: number;
13 | details: {
14 | startTime: number;
15 | endTime: number;
16 | memoryUsage: NodeJS.MemoryUsage;
17 | };
18 | }
19 |
20 | export interface BenchmarkSuite {
21 | testName: string;
22 | results: BenchmarkResult[];
23 | overallPassed: boolean;
24 | averagePerformance: number;
25 | summary: {
26 | smallRepos: { count: number; avgTime: number; passed: number };
27 | mediumRepos: { count: number; avgTime: number; passed: number };
28 | largeRepos: { count: number; avgTime: number; passed: number };
29 | };
30 | }
31 |
32 | // PERF-001 performance targets
33 | const PERFORMANCE_TARGETS = {
34 | small: 1000, // <1 second for <100 files
35 | medium: 10000, // <10 seconds for 100-1000 files
36 | large: 60000, // <60 seconds for 1000+ files
37 | } as const;
38 |
39 | export class PerformanceBenchmarker {
40 | private results: BenchmarkResult[] = [];
41 |
42 | async benchmarkRepository(
43 | repoPath: string,
44 | depth: "quick" | "standard" | "deep" = "standard",
45 | ): Promise<BenchmarkResult> {
46 | const fileCount = await this.getFileCount(repoPath);
47 | const repoSize = this.categorizeRepoSize(fileCount);
48 | const targetTime = PERFORMANCE_TARGETS[repoSize];
49 |
50 | // Capture initial memory state
51 | const initialMemory = process.memoryUsage();
52 |
53 | const startTime = Date.now();
54 |
55 | try {
56 | // Run the actual analysis
57 | await analyzeRepository({ path: repoPath, depth });
58 |
59 | const endTime = Date.now();
60 | const executionTime = endTime - startTime;
61 | const finalMemory = process.memoryUsage();
62 |
63 | const performanceRatio = executionTime / targetTime;
64 | const passed = executionTime <= targetTime;
65 |
66 | const result: BenchmarkResult = {
67 | repoSize,
68 | fileCount,
69 | executionTime,
70 | targetTime,
71 | passed,
72 | performanceRatio,
73 | details: {
74 | startTime,
75 | endTime,
76 | memoryUsage: {
77 | rss: finalMemory.rss - initialMemory.rss,
78 | heapTotal: finalMemory.heapTotal - initialMemory.heapTotal,
79 | heapUsed: finalMemory.heapUsed - initialMemory.heapUsed,
80 | external: finalMemory.external - initialMemory.external,
81 | arrayBuffers: finalMemory.arrayBuffers - initialMemory.arrayBuffers,
82 | },
83 | },
84 | };
85 |
86 | this.results.push(result);
87 | return result;
88 | } catch (error) {
89 | const endTime = Date.now();
90 | const executionTime = endTime - startTime;
91 |
92 | // Even failed executions should be benchmarked
93 | const result: BenchmarkResult = {
94 | repoSize,
95 | fileCount,
96 | executionTime,
97 | targetTime,
98 | passed: false, // Failed execution = failed performance
99 | performanceRatio: executionTime / targetTime,
100 | details: {
101 | startTime,
102 | endTime,
103 | memoryUsage: process.memoryUsage(),
104 | },
105 | };
106 |
107 | this.results.push(result);
108 | throw error;
109 | }
110 | }
111 |
112 | async runBenchmarkSuite(
113 | testRepos: Array<{ path: string; name: string }>,
114 | ): Promise<BenchmarkSuite> {
115 | console.log("🚀 Starting performance benchmark suite...\n");
116 |
117 | const results: BenchmarkResult[] = [];
118 |
119 | for (const repo of testRepos) {
120 | console.log(`📊 Benchmarking: ${repo.name}`);
121 |
122 | try {
123 | const result = await this.benchmarkRepository(repo.path);
124 | results.push(result);
125 |
126 | const status = result.passed ? "✅ PASS" : "❌ FAIL";
127 | const ratio = (result.performanceRatio * 100).toFixed(1);
128 |
129 | console.log(
130 | ` ${status} ${result.executionTime}ms (${ratio}% of target) - ${result.repoSize} repo with ${result.fileCount} files`,
131 | );
132 | } catch (error) {
133 | console.log(` ❌ ERROR: ${error}`);
134 | }
135 | }
136 |
137 | console.log("\n📈 Generating performance summary...\n");
138 |
139 | return this.generateSuite("Full Benchmark Suite", results);
140 | }
141 |
142 | generateSuite(testName: string, results: BenchmarkResult[]): BenchmarkSuite {
143 | const overallPassed = results.every((r) => r.passed);
144 | const averagePerformance =
145 | results.reduce((sum, r) => sum + r.performanceRatio, 0) / results.length;
146 |
147 | // Categorize results
148 | const smallRepos = results.filter((r) => r.repoSize === "small");
149 | const mediumRepos = results.filter((r) => r.repoSize === "medium");
150 | const largeRepos = results.filter((r) => r.repoSize === "large");
151 |
152 | const suite: BenchmarkSuite = {
153 | testName,
154 | results,
155 | overallPassed,
156 | averagePerformance,
157 | summary: {
158 | smallRepos: {
159 | count: smallRepos.length,
160 | avgTime:
161 | smallRepos.reduce((sum, r) => sum + r.executionTime, 0) /
162 | smallRepos.length || 0,
163 | passed: smallRepos.filter((r) => r.passed).length,
164 | },
165 | mediumRepos: {
166 | count: mediumRepos.length,
167 | avgTime:
168 | mediumRepos.reduce((sum, r) => sum + r.executionTime, 0) /
169 | mediumRepos.length || 0,
170 | passed: mediumRepos.filter((r) => r.passed).length,
171 | },
172 | largeRepos: {
173 | count: largeRepos.length,
174 | avgTime:
175 | largeRepos.reduce((sum, r) => sum + r.executionTime, 0) /
176 | largeRepos.length || 0,
177 | passed: largeRepos.filter((r) => r.passed).length,
178 | },
179 | },
180 | };
181 |
182 | return suite;
183 | }
184 |
185 | printDetailedReport(suite: BenchmarkSuite): void {
186 | console.log(`📋 Performance Benchmark Report: ${suite.testName}`);
187 | console.log("=".repeat(60));
188 | console.log(
189 | `Overall Status: ${suite.overallPassed ? "✅ PASSED" : "❌ FAILED"}`,
190 | );
191 | console.log(
192 | `Average Performance: ${(suite.averagePerformance * 100).toFixed(
193 | 1,
194 | )}% of target`,
195 | );
196 | console.log(`Total Tests: ${suite.results.length}\n`);
197 |
198 | // Summary by repo size
199 | console.log("📊 Performance by Repository Size:");
200 | console.log("-".repeat(40));
201 |
202 | const categories = [
203 | {
204 | name: "Small (<100 files)",
205 | data: suite.summary.smallRepos,
206 | target: PERFORMANCE_TARGETS.small,
207 | },
208 | {
209 | name: "Medium (100-1000 files)",
210 | data: suite.summary.mediumRepos,
211 | target: PERFORMANCE_TARGETS.medium,
212 | },
213 | {
214 | name: "Large (1000+ files)",
215 | data: suite.summary.largeRepos,
216 | target: PERFORMANCE_TARGETS.large,
217 | },
218 | ];
219 |
220 | categories.forEach((cat) => {
221 | if (cat.data.count > 0) {
222 | const passRate = ((cat.data.passed / cat.data.count) * 100).toFixed(1);
223 | const avgTime = cat.data.avgTime.toFixed(0);
224 | const targetTime = (cat.target / 1000).toFixed(1);
225 |
226 | console.log(`${cat.name}:`);
227 | console.log(
228 | ` Tests: ${cat.data.count} | Passed: ${cat.data.passed}/${cat.data.count} (${passRate}%)`,
229 | );
230 | console.log(` Avg Time: ${avgTime}ms | Target: <${targetTime}s`);
231 | console.log("");
232 | }
233 | });
234 |
235 | // Detailed results
236 | console.log("🔍 Detailed Results:");
237 | console.log("-".repeat(40));
238 |
239 | suite.results.forEach((result, i) => {
240 | const status = result.passed ? "✅" : "❌";
241 | const ratio = (result.performanceRatio * 100).toFixed(1);
242 | const memoryMB = (
243 | result.details.memoryUsage.heapUsed /
244 | 1024 /
245 | 1024
246 | ).toFixed(1);
247 |
248 | console.log(
249 | `${status} Test ${i + 1}: ${
250 | result.executionTime
251 | }ms (${ratio}% of target)`,
252 | );
253 | console.log(
254 | ` Size: ${result.repoSize} (${result.fileCount} files) | Memory: ${memoryMB}MB heap`,
255 | );
256 | });
257 |
258 | console.log("\n" + "=".repeat(60));
259 | }
260 |
261 | exportResults(suite: BenchmarkSuite, outputPath: string): Promise<void> {
262 | const report = {
263 | timestamp: new Date().toISOString(),
264 | suite,
265 | systemInfo: {
266 | node: process.version,
267 | platform: process.platform,
268 | arch: process.arch,
269 | memoryUsage: process.memoryUsage(),
270 | },
271 | performanceTargets: PERFORMANCE_TARGETS,
272 | };
273 |
274 | return fs.writeFile(outputPath, JSON.stringify(report, null, 2));
275 | }
276 |
277 | private async getFileCount(repoPath: string): Promise<number> {
278 | let fileCount = 0;
279 |
280 | async function countFiles(dir: string, level = 0): Promise<void> {
281 | if (level > 10) return; // Prevent infinite recursion
282 |
283 | try {
284 | const entries = await fs.readdir(dir, { withFileTypes: true });
285 |
286 | for (const entry of entries) {
287 | if (entry.name.startsWith(".") && entry.name !== ".github") continue;
288 | if (entry.name === "node_modules" || entry.name === "vendor")
289 | continue;
290 |
291 | const fullPath = path.join(dir, entry.name);
292 |
293 | if (entry.isDirectory()) {
294 | await countFiles(fullPath, level + 1);
295 | } else {
296 | fileCount++;
297 | }
298 | }
299 | } catch (error) {
300 | // Skip inaccessible directories
301 | }
302 | }
303 |
304 | await countFiles(repoPath);
305 | return fileCount;
306 | }
307 |
308 | private categorizeRepoSize(fileCount: number): "small" | "medium" | "large" {
309 | if (fileCount < 100) return "small";
310 | if (fileCount < 1000) return "medium";
311 | return "large";
312 | }
313 |
314 | // Utility method to clear results for fresh benchmarking
315 | reset(): void {
316 | this.results = [];
317 | }
318 |
319 | // Get current benchmark results
320 | getResults(): BenchmarkResult[] {
321 | return [...this.results];
322 | }
323 | }
324 |
325 | // Factory function for easy usage
326 | export function createBenchmarker(): PerformanceBenchmarker {
327 | return new PerformanceBenchmarker();
328 | }
329 |
```
--------------------------------------------------------------------------------
/src/tools/validate-documentation-freshness.ts:
--------------------------------------------------------------------------------
```typescript
1 | /**
2 | * Validate Documentation Freshness Tool
3 | *
4 | * Validates documentation freshness, initializes metadata for files without it,
5 | * and updates timestamps based on code changes.
6 | */
7 |
8 | import { z } from "zod";
9 | import path from "path";
10 | import { simpleGit } from "simple-git";
11 | import {
12 | findMarkdownFiles,
13 | parseDocFrontmatter,
14 | updateDocFrontmatter,
15 | initializeFreshnessMetadata,
16 | STALENESS_PRESETS,
17 | type DocFreshnessMetadata,
18 | scanDocumentationFreshness,
19 | } from "../utils/freshness-tracker.js";
20 | import { type MCPToolResponse } from "../types/api.js";
21 | import {
22 | storeFreshnessEvent,
23 | updateFreshnessEvent,
24 | } from "../memory/freshness-kg-integration.js";
25 |
26 | /**
27 | * Input schema for validate_documentation_freshness tool
28 | */
29 | export const ValidateDocumentationFreshnessSchema = z.object({
30 | docsPath: z.string().describe("Path to documentation directory"),
31 | projectPath: z
32 | .string()
33 | .describe("Path to project root (for git integration)"),
34 | initializeMissing: z
35 | .boolean()
36 | .optional()
37 | .default(true)
38 | .describe("Initialize metadata for files without it"),
39 | updateExisting: z
40 | .boolean()
41 | .optional()
42 | .default(false)
43 | .describe("Update last_validated timestamp for all files"),
44 | updateFrequency: z
45 | .enum(["realtime", "active", "recent", "weekly", "monthly", "quarterly"])
46 | .optional()
47 | .default("monthly")
48 | .describe("Default update frequency for new metadata"),
49 | validateAgainstGit: z
50 | .boolean()
51 | .optional()
52 | .default(true)
53 | .describe("Validate against current git commit"),
54 | });
55 |
56 | export type ValidateDocumentationFreshnessInput = z.input<
57 | typeof ValidateDocumentationFreshnessSchema
58 | >;
59 |
60 | /**
61 | * Validation result for a single file
62 | */
63 | interface FileValidationResult {
64 | filePath: string;
65 | relativePath: string;
66 | action: "initialized" | "updated" | "skipped" | "error";
67 | metadata?: DocFreshnessMetadata;
68 | error?: string;
69 | }
70 |
71 | /**
72 | * Validation report
73 | */
74 | interface ValidationReport {
75 | validatedAt: string;
76 | docsPath: string;
77 | projectPath: string;
78 | totalFiles: number;
79 | initialized: number;
80 | updated: number;
81 | skipped: number;
82 | errors: number;
83 | currentCommit?: string;
84 | files: FileValidationResult[];
85 | }
86 |
87 | /**
88 | * Format validation report for display
89 | */
90 | function formatValidationReport(report: ValidationReport): string {
91 | let output = "# Documentation Freshness Validation Report\n\n";
92 | output += `**Validated at**: ${new Date(
93 | report.validatedAt,
94 | ).toLocaleString()}\n`;
95 | output += `**Documentation path**: ${report.docsPath}\n`;
96 |
97 | if (report.currentCommit) {
98 | output += `**Current commit**: ${report.currentCommit.substring(0, 7)}\n`;
99 | }
100 |
101 | output += "\n## Summary\n\n";
102 | output += `- **Total files**: ${report.totalFiles}\n`;
103 | output += `- **Initialized**: ${report.initialized} files\n`;
104 | output += `- **Updated**: ${report.updated} files\n`;
105 | output += `- **Skipped**: ${report.skipped} files\n`;
106 |
107 | if (report.errors > 0) {
108 | output += `- **Errors**: ${report.errors} files\n`;
109 | }
110 |
111 | output += "\n## Actions Performed\n\n";
112 |
113 | // Group by action
114 | const grouped = {
115 | initialized: report.files.filter((f) => f.action === "initialized"),
116 | updated: report.files.filter((f) => f.action === "updated"),
117 | error: report.files.filter((f) => f.action === "error"),
118 | };
119 |
120 | if (grouped.initialized.length > 0) {
121 | output += `### ✨ Initialized (${grouped.initialized.length})\n\n`;
122 | for (const file of grouped.initialized) {
123 | output += `- ${file.relativePath}\n`;
124 | }
125 | output += "\n";
126 | }
127 |
128 | if (grouped.updated.length > 0) {
129 | output += `### 🔄 Updated (${grouped.updated.length})\n\n`;
130 | for (const file of grouped.updated) {
131 | output += `- ${file.relativePath}\n`;
132 | }
133 | output += "\n";
134 | }
135 |
136 | if (grouped.error.length > 0) {
137 | output += `### ❌ Errors (${grouped.error.length})\n\n`;
138 | for (const file of grouped.error) {
139 | output += `- ${file.relativePath}: ${file.error}\n`;
140 | }
141 | output += "\n";
142 | }
143 |
144 | // Recommendations
145 | output += "## Next Steps\n\n";
146 |
147 | if (report.initialized > 0) {
148 | output += `→ ${report.initialized} files now have freshness tracking enabled\n`;
149 | }
150 |
151 | if (report.updated > 0) {
152 | output += `→ ${report.updated} files have been marked as validated\n`;
153 | }
154 |
155 | output += `→ Run \`track_documentation_freshness\` to view current freshness status\n`;
156 |
157 | return output;
158 | }
159 |
160 | /**
161 | * Validate documentation freshness
162 | */
163 | export async function validateDocumentationFreshness(
164 | input: ValidateDocumentationFreshnessInput,
165 | ): Promise<MCPToolResponse> {
166 | const startTime = Date.now();
167 |
168 | try {
169 | const {
170 | docsPath,
171 | projectPath,
172 | initializeMissing,
173 | updateExisting,
174 | updateFrequency,
175 | validateAgainstGit,
176 | } = input;
177 |
178 | // Get current git commit if requested
179 | let currentCommit: string | undefined;
180 | if (validateAgainstGit) {
181 | try {
182 | const git = simpleGit(projectPath);
183 | const isRepo = await git.checkIsRepo();
184 |
185 | if (isRepo) {
186 | const log = await git.log({ maxCount: 1 });
187 | currentCommit = log.latest?.hash;
188 | }
189 | } catch (error) {
190 | // Git not available, continue without it
191 | }
192 | }
193 |
194 | // Find all markdown files
195 | const markdownFiles = await findMarkdownFiles(docsPath);
196 | const results: FileValidationResult[] = [];
197 |
198 | for (const filePath of markdownFiles) {
199 | const relativePath = path.relative(docsPath, filePath);
200 |
201 | try {
202 | const frontmatter = await parseDocFrontmatter(filePath);
203 | const hasMetadata = !!frontmatter.documcp?.last_updated;
204 |
205 | if (!hasMetadata && initializeMissing) {
206 | // Initialize metadata
207 | await initializeFreshnessMetadata(filePath, {
208 | updateFrequency,
209 | autoUpdated: false,
210 | });
211 |
212 | // If git is available, set validated_against_commit
213 | if (currentCommit) {
214 | await updateDocFrontmatter(filePath, {
215 | validated_against_commit: currentCommit,
216 | });
217 | }
218 |
219 | const updatedFrontmatter = await parseDocFrontmatter(filePath);
220 | results.push({
221 | filePath,
222 | relativePath,
223 | action: "initialized",
224 | metadata: updatedFrontmatter.documcp,
225 | });
226 | } else if (hasMetadata && updateExisting) {
227 | // Update existing metadata
228 | const updateData: Partial<DocFreshnessMetadata> = {
229 | last_validated: new Date().toISOString(),
230 | };
231 |
232 | if (currentCommit) {
233 | updateData.validated_against_commit = currentCommit;
234 | }
235 |
236 | await updateDocFrontmatter(filePath, updateData);
237 |
238 | const updatedFrontmatter = await parseDocFrontmatter(filePath);
239 | results.push({
240 | filePath,
241 | relativePath,
242 | action: "updated",
243 | metadata: updatedFrontmatter.documcp,
244 | });
245 | } else {
246 | results.push({
247 | filePath,
248 | relativePath,
249 | action: "skipped",
250 | metadata: frontmatter.documcp,
251 | });
252 | }
253 | } catch (error) {
254 | results.push({
255 | filePath,
256 | relativePath,
257 | action: "error",
258 | error: error instanceof Error ? error.message : "Unknown error",
259 | });
260 | }
261 | }
262 |
263 | // Generate report
264 | const report: ValidationReport = {
265 | validatedAt: new Date().toISOString(),
266 | docsPath,
267 | projectPath,
268 | totalFiles: markdownFiles.length,
269 | initialized: results.filter((r) => r.action === "initialized").length,
270 | updated: results.filter((r) => r.action === "updated").length,
271 | skipped: results.filter((r) => r.action === "skipped").length,
272 | errors: results.filter((r) => r.action === "error").length,
273 | currentCommit,
274 | files: results,
275 | };
276 |
277 | const formattedReport = formatValidationReport(report);
278 |
279 | // Store validation event in knowledge graph
280 | let eventId: string | undefined;
281 | if (report.initialized > 0 || report.updated > 0) {
282 | try {
283 | // Scan current state to get freshness metrics
284 | const scanReport = await scanDocumentationFreshness(docsPath, {
285 | warning: STALENESS_PRESETS.monthly,
286 | stale: {
287 | value: STALENESS_PRESETS.monthly.value * 2,
288 | unit: STALENESS_PRESETS.monthly.unit,
289 | },
290 | critical: {
291 | value: STALENESS_PRESETS.monthly.value * 3,
292 | unit: STALENESS_PRESETS.monthly.unit,
293 | },
294 | });
295 |
296 | // Determine event type
297 | const eventType = report.initialized > 0 ? "initialization" : "update";
298 |
299 | // Store in KG
300 | eventId = await storeFreshnessEvent(
301 | projectPath,
302 | docsPath,
303 | scanReport,
304 | eventType,
305 | );
306 |
307 | // Update event with validation details
308 | await updateFreshnessEvent(eventId, {
309 | filesInitialized: report.initialized,
310 | filesUpdated: report.updated,
311 | eventType,
312 | });
313 | } catch (error) {
314 | // KG storage failed, but continue with the response
315 | console.warn(
316 | "Failed to store validation event in knowledge graph:",
317 | error,
318 | );
319 | }
320 | }
321 |
322 | const response: MCPToolResponse = {
323 | success: true,
324 | data: {
325 | summary: `Validated ${report.totalFiles} files: ${report.initialized} initialized, ${report.updated} updated`,
326 | report,
327 | formattedReport,
328 | kgEventId: eventId,
329 | },
330 | metadata: {
331 | toolVersion: "1.0.0",
332 | executionTime: Date.now() - startTime,
333 | timestamp: new Date().toISOString(),
334 | },
335 | recommendations: [],
336 | };
337 |
338 | return response;
339 | } catch (error) {
340 | return {
341 | success: false,
342 | error: {
343 | code: "FRESHNESS_VALIDATION_FAILED",
344 | message:
345 | error instanceof Error
346 | ? error.message
347 | : "Unknown error validating documentation freshness",
348 | resolution:
349 | "Check that the documentation and project paths exist and are readable",
350 | },
351 | metadata: {
352 | toolVersion: "1.0.0",
353 | executionTime: Date.now() - startTime,
354 | timestamp: new Date().toISOString(),
355 | },
356 | };
357 | }
358 | }
359 |
```
--------------------------------------------------------------------------------
/tests/tools/generate-llm-context.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { describe, it, expect, beforeEach, afterEach } from "@jest/globals";
2 | import { promises as fs } from "fs";
3 | import path from "path";
4 | import os from "os";
5 | import {
6 | generateLLMContext,
7 | setToolDefinitions,
8 | GenerateLLMContextInputSchema,
9 | } from "../../src/tools/generate-llm-context.js";
10 | import { z } from "zod";
11 |
12 | describe("generate_llm_context", () => {
13 | let tmpDir: string;
14 |
15 | beforeEach(async () => {
16 | tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), "generate-llm-context-"));
17 |
18 | // Set up mock tool definitions
19 | const mockTools = [
20 | {
21 | name: "analyze_repository",
22 | description: "Analyze repository structure and dependencies",
23 | inputSchema: z.object({
24 | path: z.string(),
25 | depth: z.enum(["quick", "standard", "deep"]).optional(),
26 | }),
27 | },
28 | {
29 | name: "recommend_ssg",
30 | description: "Recommend static site generator",
31 | inputSchema: z.object({
32 | analysisId: z.string(),
33 | userId: z.string().optional(),
34 | }),
35 | },
36 | {
37 | name: "sync_code_to_docs",
38 | description: "Synchronize code with documentation",
39 | inputSchema: z.object({
40 | projectPath: z.string(),
41 | docsPath: z.string(),
42 | mode: z.enum(["detect", "preview", "apply", "auto"]).optional(),
43 | }),
44 | },
45 | ];
46 | setToolDefinitions(mockTools);
47 | });
48 |
49 | afterEach(async () => {
50 | await fs.rm(tmpDir, { recursive: true, force: true });
51 | });
52 |
53 | describe("Basic Generation", () => {
54 | it("should generate LLM context file with default options", async () => {
55 | const result = await generateLLMContext({
56 | projectPath: tmpDir,
57 | });
58 |
59 | // Check result structure
60 | expect(result.content).toBeDefined();
61 | expect(result.content[0].text).toContain("path");
62 |
63 | // Check file exists
64 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
65 | const fileExists = await fs
66 | .access(outputPath)
67 | .then(() => true)
68 | .catch(() => false);
69 | expect(fileExists).toBe(true);
70 |
71 | // Check file content
72 | const content = await fs.readFile(outputPath, "utf-8");
73 | expect(content).toContain("# DocuMCP LLM Context Reference");
74 | expect(content).toContain("analyze_repository");
75 | expect(content).toContain("recommend_ssg");
76 | expect(content).toContain("sync_code_to_docs");
77 | });
78 |
79 | it("should include examples when requested", async () => {
80 | await generateLLMContext({
81 | projectPath: tmpDir,
82 | includeExamples: true,
83 | });
84 |
85 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
86 | const content = await fs.readFile(outputPath, "utf-8");
87 | expect(content).toContain("**Example**:");
88 | expect(content).toContain("```typescript");
89 | });
90 |
91 | it("should generate concise format", async () => {
92 | await generateLLMContext({
93 | projectPath: tmpDir,
94 | format: "concise",
95 | includeExamples: false,
96 | });
97 |
98 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
99 | const content = await fs.readFile(outputPath, "utf-8");
100 | expect(content).toContain("# DocuMCP LLM Context Reference");
101 | expect(content).not.toContain("**Parameters**:");
102 | });
103 |
104 | it("should generate detailed format with parameters", async () => {
105 | await generateLLMContext({
106 | projectPath: tmpDir,
107 | format: "detailed",
108 | });
109 |
110 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
111 | const content = await fs.readFile(outputPath, "utf-8");
112 | expect(content).toContain("# DocuMCP LLM Context Reference");
113 | expect(content).toContain("**Parameters**:");
114 | });
115 | });
116 |
117 | describe("Content Sections", () => {
118 | it("should include overview section", async () => {
119 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
120 | await generateLLMContext({ projectPath: tmpDir });
121 |
122 | const content = await fs.readFile(outputPath, "utf-8");
123 | expect(content).toContain("## Overview");
124 | expect(content).toContain("DocuMCP is an intelligent MCP server");
125 | });
126 |
127 | it("should include core tools section", async () => {
128 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
129 | await generateLLMContext({ projectPath: tmpDir });
130 |
131 | const content = await fs.readFile(outputPath, "utf-8");
132 | expect(content).toContain("## Core Documentation Tools");
133 | });
134 |
135 | it("should include Phase 3 tools section", async () => {
136 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
137 | await generateLLMContext({ projectPath: tmpDir });
138 |
139 | const content = await fs.readFile(outputPath, "utf-8");
140 | expect(content).toContain(
141 | "## Phase 3: Code-to-Docs Synchronization Tools",
142 | );
143 | expect(content).toContain("sync_code_to_docs");
144 | });
145 |
146 | it("should include memory system section", async () => {
147 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
148 | await generateLLMContext({ projectPath: tmpDir });
149 |
150 | const content = await fs.readFile(outputPath, "utf-8");
151 | expect(content).toContain("## Memory Knowledge Graph System");
152 | expect(content).toContain("### Entity Types");
153 | expect(content).toContain("### Relationship Types");
154 | });
155 |
156 | it("should include workflows section", async () => {
157 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
158 | await generateLLMContext({ projectPath: tmpDir });
159 |
160 | const content = await fs.readFile(outputPath, "utf-8");
161 | expect(content).toContain("## Common Workflows");
162 | expect(content).toContain("### 1. New Documentation Site Setup");
163 | });
164 |
165 | it("should include quick reference table", async () => {
166 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
167 | await generateLLMContext({ projectPath: tmpDir });
168 |
169 | const content = await fs.readFile(outputPath, "utf-8");
170 | expect(content).toContain("## Quick Reference Table");
171 | expect(content).toContain("| Tool | Primary Use |");
172 | });
173 | });
174 |
175 | describe("Input Validation", () => {
176 | it("should validate input schema", () => {
177 | expect(() => {
178 | GenerateLLMContextInputSchema.parse({
179 | projectPath: "/test/path",
180 | includeExamples: true,
181 | format: "detailed",
182 | });
183 | }).not.toThrow();
184 | });
185 |
186 | it("should use default values for optional fields", () => {
187 | const result = GenerateLLMContextInputSchema.parse({
188 | projectPath: "/test/path",
189 | });
190 | expect(result.projectPath).toBe("/test/path");
191 | expect(result.includeExamples).toBe(true);
192 | expect(result.format).toBe("detailed");
193 | });
194 |
195 | it("should require projectPath", () => {
196 | expect(() => {
197 | GenerateLLMContextInputSchema.parse({});
198 | }).toThrow();
199 | });
200 |
201 | it("should reject invalid format", () => {
202 | expect(() => {
203 | GenerateLLMContextInputSchema.parse({
204 | projectPath: "/test/path",
205 | format: "invalid",
206 | });
207 | }).toThrow();
208 | });
209 | });
210 |
211 | describe("Error Handling", () => {
212 | it("should handle write errors gracefully", async () => {
213 | const invalidPath = "/invalid/path/that/does/not/exist";
214 | const result = await generateLLMContext({
215 | projectPath: invalidPath,
216 | });
217 |
218 | expect(result.content[0].text).toContain("GENERATION_ERROR");
219 | expect(result.isError).toBe(true);
220 | });
221 | });
222 |
223 | describe("File Output", () => {
224 | it("should create LLM_CONTEXT.md in project root", async () => {
225 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
226 | await generateLLMContext({ projectPath: tmpDir });
227 |
228 | const fileExists = await fs
229 | .access(outputPath)
230 | .then(() => true)
231 | .catch(() => false);
232 | expect(fileExists).toBe(true);
233 | });
234 |
235 | it("should overwrite existing file", async () => {
236 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
237 |
238 | // Write first time
239 | await generateLLMContext({ projectPath: tmpDir });
240 | const firstContent = await fs.readFile(outputPath, "utf-8");
241 |
242 | // Wait a moment to ensure timestamp changes
243 | await new Promise((resolve) => setTimeout(resolve, 10));
244 |
245 | // Write second time
246 | await generateLLMContext({ projectPath: tmpDir });
247 | const secondContent = await fs.readFile(outputPath, "utf-8");
248 |
249 | // Content should be different (timestamp changed)
250 | expect(firstContent).not.toEqual(secondContent);
251 | });
252 |
253 | it("should report correct file stats", async () => {
254 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
255 | const result = await generateLLMContext({ projectPath: tmpDir });
256 |
257 | const data = JSON.parse(result.content[0].text);
258 | expect(data.stats).toBeDefined();
259 | expect(data.stats.totalTools).toBe(3);
260 | expect(data.stats.fileSize).toBeGreaterThan(0);
261 | expect(data.stats.sections).toBeInstanceOf(Array);
262 | });
263 | });
264 |
265 | describe("Tool Extraction", () => {
266 | it("should extract tool names correctly", async () => {
267 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
268 | await generateLLMContext({ projectPath: tmpDir });
269 |
270 | const content = await fs.readFile(outputPath, "utf-8");
271 | expect(content).toContain("`analyze_repository`");
272 | expect(content).toContain("`recommend_ssg`");
273 | expect(content).toContain("`sync_code_to_docs`");
274 | });
275 |
276 | it("should extract tool descriptions", async () => {
277 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
278 | await generateLLMContext({ projectPath: tmpDir });
279 |
280 | const content = await fs.readFile(outputPath, "utf-8");
281 | expect(content).toContain(
282 | "Analyze repository structure and dependencies",
283 | );
284 | expect(content).toContain("Recommend static site generator");
285 | });
286 |
287 | it("should handle tools with no examples", async () => {
288 | const outputPath = path.join(tmpDir, "LLM_CONTEXT.md");
289 | await generateLLMContext({ projectPath: tmpDir, includeExamples: true });
290 |
291 | const content = await fs.readFile(outputPath, "utf-8");
292 | // recommend_ssg doesn't have an example defined
293 | const ssgSection = content.match(
294 | /### `recommend_ssg`[\s\S]*?(?=###|$)/,
295 | )?.[0];
296 | expect(ssgSection).toBeDefined();
297 | });
298 | });
299 | });
300 |
```
--------------------------------------------------------------------------------
/tests/tools/analyze-coverage.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | // Additional tests to improve analyze-repository coverage
2 | import { promises as fs } from "fs";
3 | import path from "path";
4 | import os from "os";
5 | import { analyzeRepository } from "../../src/tools/analyze-repository";
6 |
7 | describe("Analyze Repository Additional Coverage", () => {
8 | let tempDir: string;
9 |
10 | beforeAll(async () => {
11 | tempDir = path.join(os.tmpdir(), "analyze-coverage");
12 | await fs.mkdir(tempDir, { recursive: true });
13 | });
14 |
15 | afterAll(async () => {
16 | try {
17 | await fs.rm(tempDir, { recursive: true, force: true });
18 | } catch (error) {
19 | // Cleanup errors are okay
20 | }
21 | });
22 |
23 | describe("Different Repository Types", () => {
24 | it("should analyze Ruby project", async () => {
25 | const rubyDir = path.join(tempDir, "ruby-project");
26 | await fs.mkdir(rubyDir, { recursive: true });
27 |
28 | await fs.writeFile(
29 | path.join(rubyDir, "Gemfile"),
30 | `
31 | source 'https://rubygems.org'
32 | gem 'rails', '~> 7.0'
33 | gem 'puma'
34 | gem 'redis'
35 | `,
36 | );
37 |
38 | await fs.writeFile(path.join(rubyDir, "app.rb"), 'puts "Hello Ruby"');
39 | await fs.writeFile(path.join(rubyDir, "README.md"), "# Ruby Project");
40 |
41 | const result = await analyzeRepository({
42 | path: rubyDir,
43 | depth: "standard",
44 | });
45 | expect(result.content).toBeDefined();
46 | const analysis = JSON.parse(result.content[0].text);
47 | expect(analysis.dependencies.ecosystem).toBe("ruby");
48 | });
49 |
50 | it("should analyze Go project", async () => {
51 | const goDir = path.join(tempDir, "go-project");
52 | await fs.mkdir(goDir, { recursive: true });
53 |
54 | await fs.writeFile(
55 | path.join(goDir, "go.mod"),
56 | `
57 | module example.com/myapp
58 | go 1.21
59 | require (
60 | github.com/gin-gonic/gin v1.9.0
61 | github.com/stretchr/testify v1.8.0
62 | )
63 | `,
64 | );
65 |
66 | await fs.writeFile(path.join(goDir, "main.go"), "package main");
67 | await fs.writeFile(path.join(goDir, "README.md"), "# Go Project");
68 |
69 | const result = await analyzeRepository({
70 | path: goDir,
71 | depth: "standard",
72 | });
73 | expect(result.content).toBeDefined();
74 | const analysis = JSON.parse(result.content[0].text);
75 | expect(analysis.dependencies.ecosystem).toBe("go");
76 | });
77 |
78 | it("should analyze Java project", async () => {
79 | const javaDir = path.join(tempDir, "java-project");
80 | await fs.mkdir(javaDir, { recursive: true });
81 |
82 | await fs.writeFile(
83 | path.join(javaDir, "pom.xml"),
84 | `
85 | <?xml version="1.0" encoding="UTF-8"?>
86 | <project>
87 | <modelVersion>4.0.0</modelVersion>
88 | <groupId>com.example</groupId>
89 | <artifactId>myapp</artifactId>
90 | <version>1.0.0</version>
91 | <dependencies>
92 | <dependency>
93 | <groupId>org.springframework.boot</groupId>
94 | <artifactId>spring-boot-starter</artifactId>
95 | </dependency>
96 | </dependencies>
97 | </project>
98 | `,
99 | );
100 |
101 | await fs.writeFile(path.join(javaDir, "App.java"), "public class App {}");
102 |
103 | const result = await analyzeRepository({
104 | path: javaDir,
105 | depth: "standard",
106 | });
107 | expect(result.content).toBeDefined();
108 | const analysis = JSON.parse(result.content[0].text);
109 | expect(analysis.dependencies.ecosystem).toBeDefined(); // May be 'java' or 'unknown' depending on detection
110 | });
111 |
112 | it("should analyze project with Docker", async () => {
113 | const dockerDir = path.join(tempDir, "docker-project");
114 | await fs.mkdir(dockerDir, { recursive: true });
115 |
116 | await fs.writeFile(
117 | path.join(dockerDir, "Dockerfile"),
118 | `
119 | FROM node:20
120 | WORKDIR /app
121 | COPY . .
122 | RUN npm install
123 | CMD ["npm", "start"]
124 | `,
125 | );
126 |
127 | await fs.writeFile(
128 | path.join(dockerDir, "docker-compose.yml"),
129 | `
130 | version: '3'
131 | services:
132 | app:
133 | build: .
134 | ports:
135 | - "3000:3000"
136 | `,
137 | );
138 |
139 | await fs.writeFile(
140 | path.join(dockerDir, "package.json"),
141 | '{"name": "docker-app"}',
142 | );
143 |
144 | const result = await analyzeRepository({
145 | path: dockerDir,
146 | depth: "standard",
147 | });
148 | expect(result.content).toBeDefined();
149 | const analysis = JSON.parse(result.content[0].text);
150 |
151 | // Verify basic analysis works - Docker detection not implemented
152 | expect(analysis.structure).toBeDefined();
153 | expect(analysis.structure.totalFiles).toBe(3);
154 | expect(analysis.dependencies.ecosystem).toBe("javascript");
155 | });
156 |
157 | it("should analyze project with existing docs", async () => {
158 | const docsDir = path.join(tempDir, "docs-project");
159 | await fs.mkdir(path.join(docsDir, "docs"), { recursive: true });
160 | await fs.mkdir(path.join(docsDir, "documentation"), { recursive: true });
161 |
162 | await fs.writeFile(
163 | path.join(docsDir, "docs", "index.md"),
164 | "# Documentation",
165 | );
166 | await fs.writeFile(
167 | path.join(docsDir, "docs", "api.md"),
168 | "# API Reference",
169 | );
170 | await fs.writeFile(
171 | path.join(docsDir, "documentation", "guide.md"),
172 | "# User Guide",
173 | );
174 | await fs.writeFile(
175 | path.join(docsDir, "README.md"),
176 | "# Project with Docs",
177 | );
178 |
179 | const result = await analyzeRepository({
180 | path: docsDir,
181 | depth: "standard",
182 | });
183 | expect(result.content).toBeDefined();
184 | const analysis = JSON.parse(result.content[0].text);
185 | expect(analysis.structure.hasDocs).toBe(true);
186 | });
187 | });
188 |
189 | describe("Edge Cases and Error Handling", () => {
190 | it("should handle empty repository", async () => {
191 | const emptyDir = path.join(tempDir, "empty-repo");
192 | await fs.mkdir(emptyDir, { recursive: true });
193 |
194 | const result = await analyzeRepository({
195 | path: emptyDir,
196 | depth: "quick",
197 | });
198 | expect(result.content).toBeDefined();
199 | const analysis = JSON.parse(result.content[0].text);
200 | expect(analysis.dependencies.ecosystem).toBe("unknown");
201 | });
202 |
203 | it("should handle repository with only config files", async () => {
204 | const configDir = path.join(tempDir, "config-only");
205 | await fs.mkdir(configDir, { recursive: true });
206 |
207 | await fs.writeFile(path.join(configDir, ".gitignore"), "node_modules/");
208 | await fs.writeFile(
209 | path.join(configDir, ".editorconfig"),
210 | "indent_style = space",
211 | );
212 | await fs.writeFile(path.join(configDir, "LICENSE"), "MIT License");
213 |
214 | const result = await analyzeRepository({
215 | path: configDir,
216 | depth: "standard",
217 | });
218 | expect(result.content).toBeDefined();
219 | expect(result.content.length).toBeGreaterThan(0);
220 | });
221 |
222 | it("should handle deep analysis depth", async () => {
223 | const deepDir = path.join(tempDir, "deep-analysis");
224 | await fs.mkdir(deepDir, { recursive: true });
225 |
226 | // Create nested structure
227 | await fs.mkdir(path.join(deepDir, "src", "components", "ui"), {
228 | recursive: true,
229 | });
230 | await fs.mkdir(path.join(deepDir, "src", "utils", "helpers"), {
231 | recursive: true,
232 | });
233 | await fs.mkdir(path.join(deepDir, "tests", "unit"), { recursive: true });
234 |
235 | await fs.writeFile(
236 | path.join(deepDir, "package.json"),
237 | JSON.stringify({
238 | name: "deep-project",
239 | scripts: {
240 | test: "jest",
241 | build: "webpack",
242 | lint: "eslint .",
243 | },
244 | }),
245 | );
246 |
247 | await fs.writeFile(
248 | path.join(deepDir, "src", "index.js"),
249 | 'console.log("app");',
250 | );
251 | await fs.writeFile(
252 | path.join(deepDir, "src", "components", "ui", "Button.js"),
253 | "export default Button;",
254 | );
255 | await fs.writeFile(
256 | path.join(deepDir, "tests", "unit", "test.js"),
257 | 'test("sample", () => {});',
258 | );
259 |
260 | const result = await analyzeRepository({ path: deepDir, depth: "deep" });
261 | expect(result.content).toBeDefined();
262 | const analysis = JSON.parse(result.content[0].text);
263 | expect(analysis.structure.hasTests).toBe(true);
264 | });
265 |
266 | it("should analyze repository with multiple ecosystems", async () => {
267 | const multiDir = path.join(tempDir, "multi-ecosystem");
268 | await fs.mkdir(multiDir, { recursive: true });
269 |
270 | // JavaScript
271 | await fs.writeFile(
272 | path.join(multiDir, "package.json"),
273 | '{"name": "frontend"}',
274 | );
275 |
276 | // Python
277 | await fs.writeFile(
278 | path.join(multiDir, "requirements.txt"),
279 | "flask==2.0.0",
280 | );
281 |
282 | // Ruby
283 | await fs.writeFile(path.join(multiDir, "Gemfile"), 'gem "rails"');
284 |
285 | const result = await analyzeRepository({
286 | path: multiDir,
287 | depth: "standard",
288 | });
289 | expect(result.content).toBeDefined();
290 | // Should detect the primary ecosystem (usually the one with most files/config)
291 | const analysis = JSON.parse(result.content[0].text);
292 | expect(["javascript", "python", "ruby"]).toContain(
293 | analysis.dependencies.ecosystem,
294 | );
295 | });
296 | });
297 |
298 | describe("Repository Complexity Analysis", () => {
299 | it("should calculate complexity metrics", async () => {
300 | const complexDir = path.join(tempDir, "complex-repo");
301 | await fs.mkdir(path.join(complexDir, ".github", "workflows"), {
302 | recursive: true,
303 | });
304 |
305 | // Create various files to test complexity
306 | await fs.writeFile(
307 | path.join(complexDir, "package.json"),
308 | JSON.stringify({
309 | name: "complex-app",
310 | dependencies: {
311 | react: "^18.0.0",
312 | express: "^4.0.0",
313 | webpack: "^5.0.0",
314 | },
315 | devDependencies: {
316 | jest: "^29.0.0",
317 | eslint: "^8.0.0",
318 | },
319 | }),
320 | );
321 |
322 | await fs.writeFile(
323 | path.join(complexDir, ".github", "workflows", "ci.yml"),
324 | `
325 | name: CI
326 | on: push
327 | jobs:
328 | test:
329 | runs-on: ubuntu-latest
330 | `,
331 | );
332 |
333 | await fs.writeFile(
334 | path.join(complexDir, "README.md"),
335 | "# Complex Project\n\nWith detailed documentation",
336 | );
337 | await fs.writeFile(
338 | path.join(complexDir, "CONTRIBUTING.md"),
339 | "# Contributing Guide",
340 | );
341 |
342 | const result = await analyzeRepository({
343 | path: complexDir,
344 | depth: "deep",
345 | });
346 | expect(result.content).toBeDefined();
347 | const analysis = JSON.parse(result.content[0].text);
348 | expect(analysis.structure.hasCI).toBe(true);
349 | expect(analysis.documentation.hasReadme).toBe(true);
350 | });
351 | });
352 | });
353 |
```
--------------------------------------------------------------------------------
/docs/reference/api-overview.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | sidebar_position: 1
3 | documcp:
4 | last_updated: "2025-11-20T00:46:21.959Z"
5 | last_validated: "2025-11-20T00:46:21.959Z"
6 | auto_updated: false
7 | update_frequency: monthly
8 | ---
9 |
10 | # API Overview
11 |
12 | DocuMCP provides **45 specialized tools** organized into functional categories for intelligent documentation deployment via the Model Context Protocol (MCP).
13 |
14 | ## 🎯 Quick Reference: LLM_CONTEXT.md
15 |
16 | For AI assistants and LLMs, reference the **comprehensive context file**:
17 |
18 | **File**: `/LLM_CONTEXT.md` (in project root)
19 |
20 | This auto-generated file provides:
21 |
22 | - All 45 tool descriptions with parameters
23 | - Usage examples and code snippets
24 | - Common workflow patterns
25 | - Memory system documentation
26 | - Phase 3 code-to-docs sync features
27 |
28 | **Usage in AI assistants**:
29 |
30 | ```
31 | @LLM_CONTEXT.md help me deploy documentation to GitHub Pages
32 | ```
33 |
34 | ## 📚 Tool Categories
35 |
36 | ### Core Documentation Tools (9 tools)
37 |
38 | Essential tools for repository analysis, recommendations, and deployment:
39 |
40 | | Tool | Purpose | Key Parameters |
41 | | ------------------------------- | ---------------------------------------- | ---------------------------------- |
42 | | `analyze_repository` | Analyze project structure & dependencies | `path`, `depth` |
43 | | `recommend_ssg` | Recommend static site generator | `analysisId`, `preferences` |
44 | | `generate_config` | Generate SSG configuration files | `ssg`, `projectName`, `outputPath` |
45 | | `setup_structure` | Create Diataxis documentation structure | `path`, `ssg` |
46 | | `deploy_pages` | Deploy to GitHub Pages with tracking | `repository`, `ssg`, `userId` |
47 | | `verify_deployment` | Verify deployment status | `repository`, `url` |
48 | | `populate_diataxis_content` | Generate project-specific content | `analysisId`, `docsPath` |
49 | | `update_existing_documentation` | Update existing docs intelligently | `analysisId`, `docsPath` |
50 | | `validate_diataxis_content` | Validate documentation quality | `contentPath`, `validationType` |
51 |
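For example, the first two tools in the table above are typically chained: `analyze_repository` returns an `analysisId` that `recommend_ssg` consumes, using the same `callTool` convention as the `deploy_pages` reference. A minimal sketch:

```javascript
// Analyze the repository, then feed the resulting analysisId to the recommender.
const analysis = await callTool("analyze_repository", {
  path: "/path/to/project",
  depth: "standard", // "quick" | "standard" | "deep"
});

const recommendation = await callTool("recommend_ssg", {
  analysisId: analysis.analysisId,
});
```
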
52 | ### README Analysis & Generation (6 tools)
53 |
54 | Specialized tools for README creation and optimization:
55 |
56 | | Tool | Purpose | Key Parameters |
57 | | --------------------------- | ----------------------------------------- | -------------------------------------------- |
58 | | `evaluate_readme_health` | Assess README quality & onboarding | `readme_path`, `project_type` |
59 | | `readme_best_practices` | Analyze against best practices | `readme_path`, `generate_template` |
60 | | `generate_readme_template` | Create standardized README | `projectName`, `description`, `templateType` |
61 | | `validate_readme_checklist` | Validate against community standards | `readmePath`, `strict` |
62 | | `analyze_readme` | Comprehensive length & structure analysis | `project_path`, `optimization_level` |
63 | | `optimize_readme` | Restructure and condense content | `readme_path`, `strategy`, `max_length` |
64 |
65 | ### Phase 3: Code-to-Docs Synchronization (2 tools)
66 |
67 | Advanced AST-based code analysis and drift detection:
68 |
69 | | Tool | Purpose | Key Parameters |
70 | | ----------------------------- | ---------------------------------- | --------------------------------- |
71 | | `sync_code_to_docs` | Detect and fix documentation drift | `projectPath`, `docsPath`, `mode` |
72 | | `generate_contextual_content` | Generate docs from code analysis | `filePath`, `documentationType` |
73 |
74 | **Supported Languages**: TypeScript, JavaScript, Python, Go, Rust, Java, Ruby, Bash
75 |
76 | **Drift Types Detected**: Outdated, Incorrect, Missing, Breaking
77 |
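A typical detect-then-apply run might look like the sketch below; the `mode` values mirror the `sync_code_to_docs` schema (`detect`, `preview`, `apply`, `auto`).

```javascript
// First pass: report documentation drift without modifying any files.
const drift = await callTool("sync_code_to_docs", {
  projectPath: "/path/to/project",
  docsPath: "/path/to/project/docs",
  mode: "detect",
});

// After reviewing the reported drift, apply the suggested updates.
await callTool("sync_code_to_docs", {
  projectPath: "/path/to/project",
  docsPath: "/path/to/project/docs",
  mode: "apply",
});
```
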
78 | ### Memory & Analytics Tools (2 tools)
79 |
80 | User preferences and deployment pattern analysis:
81 |
82 | | Tool | Purpose | Key Parameters |
83 | | --------------------- | -------------------------------------- | ----------------------------------- |
84 | | `manage_preferences` | Manage user preferences & SSG history | `action`, `userId`, `preferences` |
85 | | `analyze_deployments` | Analyze deployment patterns & insights | `analysisType`, `ssg`, `periodDays` |
86 |
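A sketch of querying deployment insights and reading stored preferences; the `analysisType` and `action` values below are illustrative placeholders, since the accepted values are documented in `LLM_CONTEXT.md` and the MCP Tools API rather than on this page:

```javascript
// Illustrative values only -- see LLM_CONTEXT.md for the accepted enums.
const insights = await callTool("analyze_deployments", {
  analysisType: "full_report", // hypothetical value
  ssg: "docusaurus",
  periodDays: 30,
});

const prefs = await callTool("manage_preferences", {
  action: "read", // hypothetical value
  userId: "developer-1",
});
```
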
87 | ### Validation & Testing Tools (4 tools)
88 |
89 | Quality assurance and deployment testing:
90 |
91 | | Tool | Purpose | Key Parameters |
92 | | --------------------------- | ------------------------------------ | -------------------------------------------- |
93 | | `validate_content` | Validate links, code, and references | `contentPath`, `validationType` |
94 | | `check_documentation_links` | Comprehensive link validation | `documentation_path`, `check_external_links` |
95 | | `test_local_deployment` | Test build and local server | `repositoryPath`, `ssg`, `port` |
96 | | `setup_playwright_tests` | Generate E2E test infrastructure | `repositoryPath`, `ssg`, `projectName` |
97 |
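For instance, link checking and content validation can be run back to back (parameter names as in the table above; the `validationType` value is illustrative):

```javascript
const links = await callTool("check_documentation_links", {
  documentation_path: "./docs",
  check_external_links: true,
});

const validation = await callTool("validate_content", {
  contentPath: "./docs",
  validationType: "all", // illustrative value; see the MCP Tools API for accepted types
});
```
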
98 | ### Utility Tools (3 tools)
99 |
100 | Additional functionality and management:
101 |
102 | | Tool | Purpose | Key Parameters |
103 | | --------------------------- | --------------------------------- | ------------------------------------- |
104 | | `detect_documentation_gaps` | Identify missing content | `repositoryPath`, `documentationPath` |
105 | | `manage_sitemap` | Generate and validate sitemap.xml | `action`, `docsPath`, `baseUrl` |
106 | | `read_directory` | List files within allowed roots | `path` |
107 |
108 | ### Advanced Memory Tools (19 tools)
109 |
110 | Sophisticated memory, learning, and knowledge graph operations:
111 |
112 | | Tool Category | Tools | Purpose |
113 | | ------------------- | ---------------------------------------------------------------------- | ----------------------------- |
114 | | **Memory Recall** | `memory_recall`, `memory_contextual_search` | Retrieve and search memories |
115 | | **Intelligence** | `memory_intelligent_analysis`, `memory_enhanced_recommendation` | AI-powered insights |
116 | | **Knowledge Graph** | `memory_knowledge_graph`, `memory_learning_stats` | Graph queries and statistics |
117 | | **Collaboration** | `memory_agent_network` | Multi-agent memory sharing |
118 | | **Insights** | `memory_insights`, `memory_similar`, `memory_temporal_analysis` | Pattern analysis |
119 | | **Data Management** | `memory_export`, `memory_cleanup`, `memory_pruning` | Export, cleanup, optimization |
120 | | **Visualization** | `memory_visualization` | Visual representations |
121 | | **Advanced I/O** | `memory_export_advanced`, `memory_import_advanced`, `memory_migration` | Complex data operations |
122 | | **Metrics** | `memory_optimization_metrics` | Performance analysis |
123 |
124 | ## 🔗 Detailed Documentation
125 |
126 | ### Full API Reference
127 |
128 | - **[MCP Tools API](./mcp-tools.md)** - Complete tool descriptions with examples
129 | - **[TypeDoc API](../api/)** - Auto-generated API documentation for all classes, interfaces, and functions
130 | - **[LLM Context Reference](../../LLM_CONTEXT.md)** - Comprehensive tool reference for AI assistants
131 |
132 | ### Configuration & Usage
133 |
134 | - **[Configuration Options](./configuration.md)** - All configuration settings
135 | - **[CLI Commands](./cli.md)** - Command-line interface reference
136 | - **[Prompt Templates](./prompt-templates.md)** - Pre-built prompt examples
137 |
138 | ## 🚀 Common Workflows
139 |
140 | ### 1. New Documentation Site
141 |
142 | ```
143 | analyze_repository → recommend_ssg → generate_config →
144 | setup_structure → populate_diataxis_content → deploy_pages
145 | ```
146 |
147 | ### 2. Documentation Sync (Phase 3)
148 |
149 | ```
150 | sync_code_to_docs (detect) → review drift →
151 | sync_code_to_docs (apply) → manual review
152 | ```
153 |
154 | ### 3. Existing Docs Improvement
155 |
156 | ```
157 | analyze_repository → update_existing_documentation →
158 | validate_diataxis_content → check_documentation_links
159 | ```
160 |
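Expressed as tool calls, this chain might look like the following sketch (parameter names follow the tables above; the `validationType` value is illustrative):

```javascript
const analysis = await callTool("analyze_repository", {
  path: "/path/to/project",
});

await callTool("update_existing_documentation", {
  analysisId: analysis.analysisId,
  docsPath: "/path/to/project/docs",
});

await callTool("validate_diataxis_content", {
  contentPath: "/path/to/project/docs",
  validationType: "all", // illustrative value
});

await callTool("check_documentation_links", {
  documentation_path: "/path/to/project/docs",
});
```
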
161 | ### 4. README Enhancement
162 |
163 | ```
164 | analyze_readme → evaluate_readme_health →
165 | readme_best_practices → optimize_readme
166 | ```
167 |
168 | ## 📦 Memory Knowledge Graph
169 |
170 | DocuMCP includes a persistent memory system that learns from every analysis:
171 |
172 | ### Entity Types
173 |
174 | - **Project**: Software projects with analysis history
175 | - **User**: User preferences and SSG patterns
176 | - **Configuration**: SSG deployment configs with success rates
177 | - **Documentation**: Documentation structures and patterns
178 | - **CodeFile**: Source code files with change tracking
179 | - **DocumentationSection**: Docs sections linked to code
180 | - **Technology**: Languages, frameworks, and tools
181 |
182 | ### Relationship Types
183 |
184 | - `project_uses_technology`: Links projects to tech stack
185 | - `user_prefers_ssg`: Tracks user SSG preferences
186 | - `project_deployed_with`: Records deployment outcomes
187 | - `similar_to`: Identifies similar projects
188 | - `documents`: Links code files to documentation
189 | - `outdated_for`: Flags out-of-sync documentation
190 | - `depends_on`: Tracks technology dependencies
191 |
192 | ### Storage Location
193 |
194 | - **Default**: `.documcp/memory/`
195 | - **Entities**: `.documcp/memory/knowledge-graph-entities.jsonl`
196 | - **Relationships**: `.documcp/memory/knowledge-graph-relationships.jsonl`
197 | - **Backups**: `.documcp/memory/backups/`
198 | - **Snapshots**: `.documcp/snapshots/` (for drift detection)
199 |
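Because both stores are JSONL (one JSON record per line), they can be inspected with a few lines of Node. The record schema is not documented on this page, so the sketch below simply prints the parsed records:

```javascript
import { createReadStream } from "fs";
import { createInterface } from "readline";

// Stream the entity store line by line and print each parsed record.
const rl = createInterface({
  input: createReadStream(".documcp/memory/knowledge-graph-entities.jsonl"),
  crlfDelay: Infinity,
});

for await (const line of rl) {
  if (line.trim()) console.log(JSON.parse(line));
}
```
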
200 | ## 🎓 Getting Started
201 |
202 | 1. **Start with tutorials**: [Getting Started Guide](../tutorials/getting-started.md)
203 | 2. **Learn effective prompting**: [Prompting Guide](../how-to/prompting-guide.md)
204 | 3. **Reference LLM_CONTEXT.md**: Use `@LLM_CONTEXT.md` in AI assistants
205 | 4. **Explore workflows**: [Common Workflows](#-common-workflows)
206 |
207 | ## 📊 Tool Statistics
208 |
209 | - **Total Tools**: 45
210 | - **Core Documentation**: 9 tools
211 | - **README Management**: 6 tools
212 | - **Phase 3 Sync**: 2 tools
213 | - **Memory & Analytics**: 2 tools
214 | - **Validation**: 4 tools
215 | - **Utilities**: 3 tools
216 | - **Advanced Memory**: 19 tools
217 |
218 | ## 🔍 Search & Discovery
219 |
220 | - **By functionality**: Use the category tables above
221 | - **By name**: See [MCP Tools API](./mcp-tools.md)
222 | - **By code**: Browse [TypeDoc API](../api/)
223 | - **For AI assistants**: Reference [LLM_CONTEXT.md](../../LLM_CONTEXT.md)
224 |
225 | ---
226 |
227 | _Documentation auto-generated from DocuMCP v0.3.2_
228 |
```
--------------------------------------------------------------------------------
/docs/reference/deploy-pages.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | documcp:
3 | last_updated: "2025-11-20T00:46:21.961Z"
4 | last_validated: "2025-11-20T00:46:21.961Z"
5 | auto_updated: false
6 | update_frequency: monthly
7 | ---
8 |
9 | # Deploy Pages Tool Documentation
10 |
11 | ## Overview
12 |
13 | The `deploy_pages` tool provides automated GitHub Pages deployment setup with intelligent SSG (Static Site Generator) detection, optimized workflow generation, and comprehensive deployment tracking through the Knowledge Graph system.
14 |
15 | ## Features
16 |
17 | - **SSG Auto-Detection**: Automatically retrieves SSG recommendations from Knowledge Graph using analysisId
18 | - **Optimized Workflows**: Generates SSG-specific GitHub Actions workflows with best practices
19 | - **Package Manager Detection**: Supports npm, yarn, and pnpm with automatic lockfile detection
20 | - **Documentation Folder Detection**: Intelligently detects docs folders (docs/, website/, documentation/)
21 | - **Custom Domain Support**: Automatic CNAME file generation
22 | - **Deployment Tracking**: Integrates with Knowledge Graph to track deployment success/failure
23 | - **User Preference Learning**: Tracks SSG usage patterns for personalized recommendations
24 |
25 | ## Usage
26 |
27 | ### Basic Usage
28 |
29 | ```javascript
30 | // Deploy with explicit SSG
31 | const result = await callTool("deploy_pages", {
32 | repository: "/path/to/project",
33 | ssg: "docusaurus",
34 | });
35 | ```
36 |
37 | ### Advanced Usage with Knowledge Graph Integration
38 |
39 | ```javascript
40 | // Deploy using SSG from previous analysis
41 | const result = await callTool("deploy_pages", {
42 | repository: "https://github.com/user/repo.git",
43 | analysisId: "repo-analysis-123", // SSG retrieved from KG
44 | projectPath: "/local/path",
45 | projectName: "My Documentation Site",
46 | customDomain: "docs.example.com",
47 | userId: "developer-1",
48 | });
49 | ```
50 |
51 | ## Parameters
52 |
53 | | Parameter | Type | Required | Description |
54 | | -------------- | -------- | -------- | --------------------------------------------------------------------------- |
55 | | `repository` | `string` | ✅ | Repository path (local) or URL (remote) |
56 | | `ssg` | `enum` | ⚠️\* | Static site generator: `jekyll`, `hugo`, `docusaurus`, `mkdocs`, `eleventy` |
57 | | `branch` | `string` | ❌ | Target branch for deployment (default: `gh-pages`) |
58 | | `customDomain` | `string` | ❌ | Custom domain for GitHub Pages |
59 | | `projectPath` | `string` | ❌ | Local project path for tracking |
60 | | `projectName` | `string` | ❌ | Project name for tracking |
61 | | `analysisId` | `string` | ❌ | Repository analysis ID for SSG retrieval |
62 | | `userId` | `string` | ❌ | User ID for preference tracking (default: `default`) |
63 |
64 | \*Required unless `analysisId` is provided for SSG retrieval from Knowledge Graph
65 |
66 | ## SSG-Specific Workflows
67 |
68 | ### Docusaurus
69 |
70 | - Node.js setup with configurable version
71 | - Package manager auto-detection (npm/yarn/pnpm)
72 | - Build caching optimization
73 | - Working directory support for monorepos
74 |
75 | ### Hugo
76 |
77 | - Extended Hugo version with latest releases
78 | - Asset optimization and minification
79 | - Submodule support for themes
80 | - Custom build command detection
81 |
82 | ### Jekyll
83 |
84 | - Ruby environment with Bundler
85 | - Gemfile dependency management
86 | - Production environment variables
87 | - Custom plugin support
88 |
89 | ### MkDocs
90 |
91 | - Python environment setup
91 | - `requirements.txt` dependency installation
93 | - Direct GitHub Pages deployment
94 | - Custom branch targeting
95 |
96 | ### Eleventy (11ty)
97 |
98 | - Node.js with flexible configuration
99 | - Custom output directory detection
100 | - Plugin ecosystem support
101 | - Development server integration
102 |
103 | ## Generated Workflow Features
104 |
105 | ### Security Best Practices
106 |
107 | - **Minimal Permissions**: Only required `pages:write` and `id-token:write` permissions
108 | - **OIDC Token Authentication**: JWT-based deployment validation
109 | - **Environment Protection**: Production deployment safeguards
110 | - **Dependency Scanning**: Automated security vulnerability checks
111 |
112 | ### Performance Optimizations
113 |
114 | - **Build Caching**: Package manager and dependency caching
115 | - **Incremental Builds**: Only rebuild changed content when possible
116 | - **Asset Optimization**: Minification and compression
117 | - **Parallel Processing**: Multi-stage builds where applicable
118 |
119 | ### Error Handling
120 |
121 | - **Graceful Failures**: Comprehensive error reporting and recovery
122 | - **Debug Information**: Detailed logging for troubleshooting
123 | - **Health Checks**: Post-deployment validation
124 | - **Rollback Support**: Automated rollback on deployment failures
125 |
126 | ## Knowledge Graph Integration
127 |
128 | ### Deployment Tracking
129 |
130 | ```typescript
131 | // Successful deployment tracking
132 | await trackDeployment(projectId, ssg, true, {
133 | buildTime: executionTime,
134 | branch: targetBranch,
135 | customDomain: domain,
136 | });
137 |
138 | // Failed deployment tracking
139 | await trackDeployment(projectId, ssg, false, {
140 | errorMessage: error.message,
141 | failureStage: "build|deploy|verification",
142 | });
143 | ```
144 |
145 | ### SSG Retrieval Logic
146 |
147 | 1. **Check Analysis ID**: Query project node in Knowledge Graph
148 | 2. **Get Recommendations**: Retrieve SSG recommendations sorted by confidence
149 | 3. **Fallback to History**: Use most recent successful deployment
150 | 4. **Smart Filtering**: Only consider successful deployments
151 |
152 | ### User Preference Learning
153 |
154 | - **Success Rate Tracking**: Monitor SSG deployment success rates
155 | - **Usage Pattern Analysis**: Track frequency of SSG selections
156 | - **Personalized Recommendations**: Weight future suggestions based on history
157 | - **Multi-User Support**: Separate preference tracking per userId
158 |
159 | ## Examples
160 |
161 | ### Complete Workflow Integration
162 |
163 | ```javascript
164 | try {
165 | // 1. Analyze repository
166 | const analysis = await callTool("analyze_repository", {
167 | path: "/path/to/project",
168 | });
169 |
170 | // 2. Get SSG recommendation
171 | const recommendation = await callTool("recommend_ssg", {
172 | analysisId: analysis.analysisId,
173 | });
174 |
175 | // 3. Deploy with recommended SSG
176 | const deployment = await callTool("deploy_pages", {
177 | repository: "/path/to/project",
178 | analysisId: analysis.analysisId,
179 | projectName: "My Project",
180 | userId: "developer-1",
181 | });
182 |
183 | console.log(`Deployed ${deployment.ssg} to ${deployment.branch}`);
184 | } catch (error) {
185 | console.error("Deployment workflow failed:", error);
186 | }
187 | ```
188 |
189 | ### Custom Domain Setup
190 |
191 | ```javascript
192 | const result = await callTool("deploy_pages", {
193 | repository: "/path/to/docs",
194 | ssg: "hugo",
195 | customDomain: "docs.mycompany.com",
196 | branch: "main", // Deploy from main branch
197 | });
198 |
199 | // CNAME file automatically created
200 | console.log(`CNAME created: ${result.cnameCreated}`);
201 | ```
202 |
203 | ### Monorepo Documentation
204 |
205 | ```javascript
206 | const result = await callTool("deploy_pages", {
207 | repository: "/path/to/monorepo",
208 | ssg: "docusaurus",
209 | // Will detect docs/ folder automatically
210 | projectPath: "/path/to/monorepo/packages/docs",
211 | });
212 |
213 | console.log(`Docs folder: ${result.detectedConfig.docsFolder}`);
214 | console.log(`Build command: ${result.detectedConfig.buildCommand}`);
215 | ```
216 |
217 | ## Response Format
218 |
219 | ### Success Response
220 |
221 | ```javascript
222 | {
223 | repository: "/path/to/project",
224 | ssg: "docusaurus",
225 | branch: "gh-pages",
226 | customDomain: "docs.example.com",
227 | workflowPath: "deploy-docs.yml",
228 | cnameCreated: true,
229 | repoPath: "/path/to/project",
230 | detectedConfig: {
231 | docsFolder: "docs",
232 | buildCommand: "npm run build",
233 | outputPath: "./build",
234 | packageManager: "npm",
235 | workingDirectory: "docs"
236 | }
237 | }
238 | ```
239 |
240 | ### Error Response
241 |
242 | ```javascript
243 | {
244 | success: false,
245 | error: {
246 | code: "SSG_NOT_SPECIFIED",
247 | message: "SSG parameter is required. Either provide it directly or ensure analysisId points to a project with SSG recommendations.",
248 | resolution: "Run analyze_repository and recommend_ssg first, or specify the SSG parameter explicitly."
249 | }
250 | }
251 | ```
252 |
253 | ## Error Codes
254 |
255 | | Code | Description | Resolution |
256 | | ---------------------------- | ------------------------------------------------- | --------------------------------------------------- |
257 | | `SSG_NOT_SPECIFIED` | No SSG provided and none found in Knowledge Graph | Provide SSG parameter or run analysis first |
258 | | `DEPLOYMENT_SETUP_FAILED` | Failed to create workflow files | Check repository permissions and path accessibility |
259 | | `INVALID_REPOSITORY` | Repository path or URL invalid | Verify repository exists and is accessible |
260 | | `WORKFLOW_GENERATION_FAILED` | Failed to generate SSG-specific workflow | Check SSG parameter and project structure |
261 |
262 | ## Best Practices
263 |
264 | ### Repository Structure
265 |
266 | - Place documentation in standard folders (`docs/`, `website/`, `documentation/`)
267 | - Include `package.json` for Node.js projects with proper scripts
268 | - Use lockfiles (`package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`) for dependency consistency
269 |
270 | ### Workflow Optimization
271 |
272 | - Enable GitHub Pages in repository settings before first deployment
273 | - Use semantic versioning for documentation releases
274 | - Configure branch protection rules for production deployments
275 | - Monitor deployment logs for performance bottlenecks
276 |
277 | ### Knowledge Graph Benefits
278 |
279 | - Run `analyze_repository` before deployment for optimal SSG selection
280 | - Use consistent `userId` for personalized recommendations
281 | - Provide `projectName` and `projectPath` for deployment tracking
282 | - Review deployment history through Knowledge Graph queries
283 |
284 | ## Troubleshooting
285 |
286 | ### Common Issues
287 |
288 | **Build Failures**
289 |
290 | - Verify all dependencies are listed in `package.json` or `requirements.txt`
291 | - Check Node.js/Python version compatibility
292 | - Ensure build scripts are properly configured
293 |
294 | **Permission Errors**
295 |
296 | - Enable GitHub Actions in repository settings
297 | - Check workflow file permissions (should be automatically handled)
298 | - Verify GitHub Pages is enabled for the target branch
299 |
300 | **Custom Domain Issues**
301 |
302 | - Verify DNS configuration points to GitHub Pages
303 | - Allow 24-48 hours for DNS propagation
304 | - Check CNAME file is created in repository root
305 |
306 | ### Debug Workflow
307 |
308 | 1. Check GitHub Actions logs in repository
309 | 2. Verify workflow file syntax using GitHub workflow validator
310 | 3. Test the build locally using the same commands as the workflow (a tool-based alternative is sketched below)
311 | 4. Review Knowledge Graph deployment history for patterns
312 |
313 | ## Related Tools
314 |
315 | - [`analyze_repository`](../how-to/repository-analysis.md) - Repository analysis for SSG recommendations
316 | - [`recommend_ssg`](./mcp-tools.md#recommend_ssg) - SSG recommendation engine
317 | - [`verify_deployment`](./mcp-tools.md#verify_deployment) - Deployment verification and health checks
318 | - [`manage_preferences`](./mcp-tools.md#manage_preferences) - User preference management
319 |
```