# Directory Structure
```
├── .dockerignore
├── .eslintignore
├── .eslintrc.json
├── .github
│ ├── agents
│ │ ├── documcp-ast.md
│ │ ├── documcp-deploy.md
│ │ ├── documcp-memory.md
│ │ ├── documcp-test.md
│ │ └── documcp-tool.md
│ ├── copilot-instructions.md
│ ├── dependabot.yml
│ ├── ISSUE_TEMPLATE
│ │ ├── automated-changelog.md
│ │ ├── bug_report.md
│ │ ├── bug_report.yml
│ │ ├── documentation_issue.md
│ │ ├── feature_request.md
│ │ ├── feature_request.yml
│ │ ├── npm-publishing-fix.md
│ │ └── release_improvements.md
│ ├── PULL_REQUEST_TEMPLATE.md
│ ├── release-drafter.yml
│ └── workflows
│ ├── auto-merge.yml
│ ├── ci.yml
│ ├── codeql.yml
│ ├── dependency-review.yml
│ ├── deploy-docs.yml
│ ├── README.md
│ ├── release-drafter.yml
│ └── release.yml
├── .gitignore
├── .husky
│ ├── commit-msg
│ └── pre-commit
├── .linkcheck.config.json
├── .markdown-link-check.json
├── .nvmrc
├── .pre-commit-config.yaml
├── .versionrc.json
├── ARCHITECTURAL_CHANGES_SUMMARY.md
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── commitlint.config.js
├── CONTRIBUTING.md
├── docker-compose.docs.yml
├── Dockerfile.docs
├── docs
│ ├── .docusaurus
│ │ ├── docusaurus-plugin-content-docs
│ │ │ └── default
│ │ │ └── __mdx-loader-dependency.json
│ │ └── docusaurus-plugin-content-pages
│ │ └── default
│ │ └── __plugin.json
│ ├── adrs
│ │ ├── adr-0001-mcp-server-architecture.md
│ │ ├── adr-0002-repository-analysis-engine.md
│ │ ├── adr-0003-static-site-generator-recommendation-engine.md
│ │ ├── adr-0004-diataxis-framework-integration.md
│ │ ├── adr-0005-github-pages-deployment-automation.md
│ │ ├── adr-0006-mcp-tools-api-design.md
│ │ ├── adr-0007-mcp-prompts-and-resources-integration.md
│ │ ├── adr-0008-intelligent-content-population-engine.md
│ │ ├── adr-0009-content-accuracy-validation-framework.md
│ │ ├── adr-0010-mcp-resource-pattern-redesign.md
│ │ ├── adr-0011-ce-mcp-compatibility.md
│ │ ├── adr-0012-priority-scoring-system-for-documentation-drift.md
│ │ ├── adr-0013-release-pipeline-and-package-distribution.md
│ │ └── README.md
│ ├── api
│ │ ├── .nojekyll
│ │ ├── assets
│ │ │ ├── hierarchy.js
│ │ │ ├── highlight.css
│ │ │ ├── icons.js
│ │ │ ├── icons.svg
│ │ │ ├── main.js
│ │ │ ├── navigation.js
│ │ │ ├── search.js
│ │ │ └── style.css
│ │ ├── hierarchy.html
│ │ ├── index.html
│ │ ├── modules.html
│ │ └── variables
│ │ └── TOOLS.html
│ ├── assets
│ │ └── logo.svg
│ ├── CE-MCP-FINDINGS.md
│ ├── development
│ │ └── MCP_INSPECTOR_TESTING.md
│ ├── docusaurus.config.js
│ ├── explanation
│ │ ├── architecture.md
│ │ └── index.md
│ ├── guides
│ │ ├── link-validation.md
│ │ ├── playwright-integration.md
│ │ └── playwright-testing-workflow.md
│ ├── how-to
│ │ ├── analytics-setup.md
│ │ ├── change-watcher.md
│ │ ├── custom-domains.md
│ │ ├── documentation-freshness-tracking.md
│ │ ├── drift-priority-scoring.md
│ │ ├── github-pages-deployment.md
│ │ ├── index.md
│ │ ├── llm-integration.md
│ │ ├── local-testing.md
│ │ ├── performance-optimization.md
│ │ ├── prompting-guide.md
│ │ ├── repository-analysis.md
│ │ ├── seo-optimization.md
│ │ ├── site-monitoring.md
│ │ ├── troubleshooting.md
│ │ └── usage-examples.md
│ ├── index.md
│ ├── knowledge-graph.md
│ ├── package-lock.json
│ ├── package.json
│ ├── phase-2-intelligence.md
│ ├── reference
│ │ ├── api-overview.md
│ │ ├── cli.md
│ │ ├── configuration.md
│ │ ├── deploy-pages.md
│ │ ├── index.md
│ │ ├── mcp-tools.md
│ │ └── prompt-templates.md
│ ├── research
│ │ ├── cross-domain-integration
│ │ │ └── README.md
│ │ ├── domain-1-mcp-architecture
│ │ │ ├── index.md
│ │ │ └── mcp-performance-research.md
│ │ ├── domain-2-repository-analysis
│ │ │ └── README.md
│ │ ├── domain-3-ssg-recommendation
│ │ │ ├── index.md
│ │ │ └── ssg-performance-analysis.md
│ │ ├── domain-4-diataxis-integration
│ │ │ └── README.md
│ │ ├── domain-5-github-deployment
│ │ │ ├── github-pages-security-analysis.md
│ │ │ └── index.md
│ │ ├── domain-6-api-design
│ │ │ └── README.md
│ │ ├── README.md
│ │ ├── research-integration-summary-2025-01-14.md
│ │ ├── research-progress-template.md
│ │ └── research-questions-2025-01-14.md
│ ├── robots.txt
│ ├── sidebars.js
│ ├── sitemap.xml
│ ├── src
│ │ └── css
│ │ └── custom.css
│ └── tutorials
│ ├── development-setup.md
│ ├── environment-setup.md
│ ├── first-deployment.md
│ ├── getting-started.md
│ ├── index.md
│ ├── memory-workflows.md
│ └── user-onboarding.md
├── ISSUE_IMPLEMENTATION_SUMMARY.md
├── jest.config.js
├── LICENSE
├── Makefile
├── MCP_PHASE2_IMPLEMENTATION.md
├── mcp-config-example.json
├── mcp.json
├── package-lock.json
├── package.json
├── README.md
├── release.sh
├── scripts
│ └── check-package-structure.cjs
├── SECURITY.md
├── setup-precommit.sh
├── src
│ ├── benchmarks
│ │ └── performance.ts
│ ├── index.ts
│ ├── memory
│ │ ├── contextual-retrieval.ts
│ │ ├── deployment-analytics.ts
│ │ ├── enhanced-manager.ts
│ │ ├── export-import.ts
│ │ ├── freshness-kg-integration.ts
│ │ ├── index.ts
│ │ ├── integration.ts
│ │ ├── kg-code-integration.ts
│ │ ├── kg-health.ts
│ │ ├── kg-integration.ts
│ │ ├── kg-link-validator.ts
│ │ ├── kg-storage.ts
│ │ ├── knowledge-graph.ts
│ │ ├── learning.ts
│ │ ├── manager.ts
│ │ ├── multi-agent-sharing.ts
│ │ ├── pruning.ts
│ │ ├── schemas.ts
│ │ ├── storage.ts
│ │ ├── temporal-analysis.ts
│ │ ├── user-preferences.ts
│ │ └── visualization.ts
│ ├── prompts
│ │ └── technical-writer-prompts.ts
│ ├── scripts
│ │ └── benchmark.ts
│ ├── templates
│ │ └── playwright
│ │ ├── accessibility.spec.template.ts
│ │ ├── Dockerfile.template
│ │ ├── docs-e2e.workflow.template.yml
│ │ ├── link-validation.spec.template.ts
│ │ └── playwright.config.template.ts
│ ├── tools
│ │ ├── analyze-deployments.ts
│ │ ├── analyze-readme.ts
│ │ ├── analyze-repository.ts
│ │ ├── change-watcher.ts
│ │ ├── check-documentation-links.ts
│ │ ├── cleanup-agent-artifacts.ts
│ │ ├── deploy-pages.ts
│ │ ├── detect-gaps.ts
│ │ ├── evaluate-readme-health.ts
│ │ ├── generate-config.ts
│ │ ├── generate-contextual-content.ts
│ │ ├── generate-llm-context.ts
│ │ ├── generate-readme-template.ts
│ │ ├── generate-technical-writer-prompts.ts
│ │ ├── kg-health-check.ts
│ │ ├── manage-preferences.ts
│ │ ├── manage-sitemap.ts
│ │ ├── optimize-readme.ts
│ │ ├── populate-content.ts
│ │ ├── readme-best-practices.ts
│ │ ├── recommend-ssg.ts
│ │ ├── setup-playwright-tests.ts
│ │ ├── setup-structure.ts
│ │ ├── simulate-execution.ts
│ │ ├── sync-code-to-docs.ts
│ │ ├── test-local-deployment.ts
│ │ ├── track-documentation-freshness.ts
│ │ ├── update-existing-documentation.ts
│ │ ├── validate-content.ts
│ │ ├── validate-documentation-freshness.ts
│ │ ├── validate-readme-checklist.ts
│ │ └── verify-deployment.ts
│ ├── types
│ │ └── api.ts
│ ├── utils
│ │ ├── artifact-detector.ts
│ │ ├── ast-analyzer.ts
│ │ ├── change-watcher.ts
│ │ ├── code-scanner.ts
│ │ ├── content-extractor.ts
│ │ ├── drift-detector.ts
│ │ ├── execution-simulator.ts
│ │ ├── freshness-tracker.ts
│ │ ├── language-parsers-simple.ts
│ │ ├── llm-client.ts
│ │ ├── permission-checker.ts
│ │ ├── semantic-analyzer.ts
│ │ ├── sitemap-generator.ts
│ │ ├── usage-metadata.ts
│ │ └── user-feedback-integration.ts
│ └── workflows
│ └── documentation-workflow.ts
├── test-docs-local.sh
├── tests
│ ├── api
│ │ └── mcp-responses.test.ts
│ ├── benchmarks
│ │ └── performance.test.ts
│ ├── call-graph-builder.test.ts
│ ├── change-watcher-priority.integration.test.ts
│ ├── change-watcher.test.ts
│ ├── edge-cases
│ │ └── error-handling.test.ts
│ ├── execution-simulator.test.ts
│ ├── functional
│ │ └── tools.test.ts
│ ├── integration
│ │ ├── kg-documentation-workflow.test.ts
│ │ ├── knowledge-graph-workflow.test.ts
│ │ ├── mcp-readme-tools.test.ts
│ │ ├── memory-mcp-tools.test.ts
│ │ ├── readme-technical-writer.test.ts
│ │ └── workflow.test.ts
│ ├── memory
│ │ ├── contextual-retrieval.test.ts
│ │ ├── enhanced-manager.test.ts
│ │ ├── export-import.test.ts
│ │ ├── freshness-kg-integration.test.ts
│ │ ├── kg-code-integration.test.ts
│ │ ├── kg-health.test.ts
│ │ ├── kg-link-validator.test.ts
│ │ ├── kg-storage-validation.test.ts
│ │ ├── kg-storage.test.ts
│ │ ├── knowledge-graph-documentation-examples.test.ts
│ │ ├── knowledge-graph-enhanced.test.ts
│ │ ├── knowledge-graph.test.ts
│ │ ├── learning.test.ts
│ │ ├── manager-advanced.test.ts
│ │ ├── manager.test.ts
│ │ ├── mcp-resource-integration.test.ts
│ │ ├── mcp-tool-persistence.test.ts
│ │ ├── schemas-documentation-examples.test.ts
│ │ ├── schemas.test.ts
│ │ ├── storage.test.ts
│ │ ├── temporal-analysis.test.ts
│ │ └── user-preferences.test.ts
│ ├── performance
│ │ ├── memory-load-testing.test.ts
│ │ └── memory-stress-testing.test.ts
│ ├── prompts
│ │ ├── guided-workflow-prompts.test.ts
│ │ └── technical-writer-prompts.test.ts
│ ├── server.test.ts
│ ├── setup.ts
│ ├── tools
│ │ ├── all-tools.test.ts
│ │ ├── analyze-coverage.test.ts
│ │ ├── analyze-deployments.test.ts
│ │ ├── analyze-readme.test.ts
│ │ ├── analyze-repository.test.ts
│ │ ├── check-documentation-links.test.ts
│ │ ├── cleanup-agent-artifacts.test.ts
│ │ ├── deploy-pages-kg-retrieval.test.ts
│ │ ├── deploy-pages-tracking.test.ts
│ │ ├── deploy-pages.test.ts
│ │ ├── detect-gaps.test.ts
│ │ ├── evaluate-readme-health.test.ts
│ │ ├── generate-contextual-content.test.ts
│ │ ├── generate-llm-context.test.ts
│ │ ├── generate-readme-template.test.ts
│ │ ├── generate-technical-writer-prompts.test.ts
│ │ ├── kg-health-check.test.ts
│ │ ├── manage-sitemap.test.ts
│ │ ├── optimize-readme.test.ts
│ │ ├── readme-best-practices.test.ts
│ │ ├── recommend-ssg-historical.test.ts
│ │ ├── recommend-ssg-preferences.test.ts
│ │ ├── recommend-ssg.test.ts
│ │ ├── simple-coverage.test.ts
│ │ ├── sync-code-to-docs.test.ts
│ │ ├── test-local-deployment.test.ts
│ │ ├── tool-error-handling.test.ts
│ │ ├── track-documentation-freshness.test.ts
│ │ ├── validate-content.test.ts
│ │ ├── validate-documentation-freshness.test.ts
│ │ └── validate-readme-checklist.test.ts
│ ├── types
│ │ └── type-safety.test.ts
│ └── utils
│ ├── artifact-detector.test.ts
│ ├── ast-analyzer.test.ts
│ ├── content-extractor.test.ts
│ ├── drift-detector-diataxis.test.ts
│ ├── drift-detector-priority.test.ts
│ ├── drift-detector.test.ts
│ ├── freshness-tracker.test.ts
│ ├── llm-client.test.ts
│ ├── semantic-analyzer.test.ts
│ ├── sitemap-generator.test.ts
│ ├── usage-metadata.test.ts
│ └── user-feedback-integration.test.ts
├── tsconfig.json
└── typedoc.json
```
# Files
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/automated-changelog.md:
--------------------------------------------------------------------------------
```markdown
---
name: "📝 Implement Automated Changelog Generation"
description: "Add automated changelog generation from conventional commits"
labels: ["enhancement", "automation", "documentation", "medium-priority"]
assignees:
- "tosinakinosho"
---
## 📋 Problem Description
Currently, changelog updates are manual processes that can lead to:
- Inconsistent formatting and content
- Missed entries or inaccurate information
- Time-consuming maintenance
- Potential human error
**Current State:**
- Manual `CHANGELOG.md` updates
- Basic automation in release workflow
- Partial conventional commit adoption
- Generic release notes in GitHub Releases
**Impact:**
- Reduced changelog reliability
- Increased maintenance overhead
- Inconsistent user communication
- Poor developer experience
## 🎯 Expected Behavior
- Changelog automatically updated on each release
- Commit messages follow conventional format
- Release notes include all relevant changes
- Consistent formatting and structure
- Automated categorization of changes
## 🔧 Solution Proposal
### Phase 1: Conventional Commits Setup
1. **Add commitlint configuration**:
```bash
npm install --save-dev @commitlint/cli @commitlint/config-conventional
```
2. **Create commitlint config** (`commitlint.config.js`):
```javascript
module.exports = { extends: ["@commitlint/config-conventional"] };
```
3. **Set up husky hooks** for commit validation:
```bash
npm install --save-dev husky
npx husky init
echo 'npx commitlint --edit "$1"' > .husky/commit-msg
```
### Phase 2: Automated Changelog Generation
1. **Add standard-version** for automated releases:
```bash
npm install --save-dev standard-version
```
2. **Update package.json scripts**:
```json
{
  "scripts": {
    "release": "standard-version",
    "release:minor": "standard-version --release-as minor",
    "release:major": "standard-version --release-as major"
  }
}
```
3. **Configure standard-version** (`.versionrc.js`):
```javascript
module.exports = {
  types: [
    { type: "feat", section: "Features" },
    { type: "fix", section: "Bug Fixes" },
    { type: "chore", section: "Chores" },
    { type: "docs", section: "Documentation" },
  ],
};
```
### Phase 3: Workflow Integration
1. **Update release workflow** to use automated changelog:
```yaml
- name: Generate changelog
  id: changelog
  run: npx standard-version --release-as ${{ github.event.inputs.version_type }}
- name: Create GitHub Release
  uses: softprops/action-gh-release@v1
  with:
    body: "${{ steps.changelog.outputs.content }}"
```
2. **Remove manual changelog steps** from current workflow
## 📋 Acceptance Criteria
- [ ] Changelog automatically updated on release
- [ ] Commit messages follow conventional format
- [ ] Release notes include all relevant changes
- [ ] Consistent formatting across all releases
- [ ] Automated categorization of changes (Features, Fixes, etc.)
- [ ] Husky hooks enforce commit message standards
## 🔍 Technical Details
**Relevant Files:**
- `.github/workflows/release.yml`
- `CHANGELOG.md`
- `package.json`
- `commitlint.config.js`
- `.husky/commit-msg`
**Dependencies:**
- @commitlint/cli
- @commitlint/config-conventional
- husky
- standard-version
## ⚠️ Potential Issues
1. **Existing commit history** - May not follow conventional format
2. **Learning curve** - Team needs to adopt new commit conventions
3. **Tool compatibility** - Ensure all tools work with Node.js 20+
4. **CI/CD integration** - Need to handle git operations in workflows
## 📚 References
- [Conventional Commits](https://www.conventionalcommits.org/)
- [commitlint](https://commitlint.js.org/)
- [standard-version](https://github.com/conventional-changelog/standard-version)
- [Husky](https://typicode.github.io/husky/)
## 🎪 Priority: Medium
Improves documentation reliability and reduces maintenance overhead.
```
--------------------------------------------------------------------------------
/src/tools/analyze-deployments.ts:
--------------------------------------------------------------------------------
```typescript
/**
* Analyze Deployments Tool
* Phase 2.4: Deployment Analytics and Insights
*
* MCP tool for analyzing deployment patterns and generating insights
*/
import { z } from "zod";
import { MCPToolResponse, formatMCPResponse } from "../types/api.js";
import { getDeploymentAnalytics } from "../memory/deployment-analytics.js";
const inputSchema = z.object({
analysisType: z
.enum(["full_report", "ssg_stats", "compare", "health", "trends"])
.optional()
.default("full_report"),
ssg: z.string().optional().describe("SSG name for ssg_stats analysis"),
ssgs: z
.array(z.string())
.optional()
.describe("Array of SSG names for comparison"),
periodDays: z
.number()
.optional()
.default(30)
.describe("Period in days for trend analysis"),
});
export async function analyzeDeployments(
args: unknown,
): Promise<{ content: any[] }> {
const startTime = Date.now();
try {
const { analysisType, ssg, ssgs, periodDays } = inputSchema.parse(args);
const analytics = getDeploymentAnalytics();
let result: any;
let actionDescription: string;
switch (analysisType) {
case "full_report":
result = await analytics.generateReport();
actionDescription =
"Generated comprehensive deployment analytics report";
break;
case "ssg_stats":
if (!ssg) {
throw new Error("SSG name required for ssg_stats analysis");
}
result = await analytics.getSSGStatistics(ssg);
if (!result) {
throw new Error(`No deployment data found for SSG: ${ssg}`);
}
actionDescription = `Retrieved statistics for ${ssg}`;
break;
case "compare":
if (!ssgs || ssgs.length < 2) {
throw new Error(
"At least 2 SSG names required for comparison analysis",
);
}
result = await analytics.compareSSGs(ssgs);
actionDescription = `Compared ${ssgs.length} SSGs`;
break;
case "health":
result = await analytics.getHealthScore();
actionDescription = "Calculated deployment health score";
break;
case "trends":
result = await analytics.identifyTrends(periodDays);
actionDescription = `Identified deployment trends over ${periodDays} days`;
break;
default:
throw new Error(`Unknown analysis type: ${analysisType}`);
}
const response: MCPToolResponse<any> = {
success: true,
data: result,
metadata: {
toolVersion: "1.0.0",
executionTime: Date.now() - startTime,
timestamp: new Date().toISOString(),
},
recommendations: [
{
type: "info",
title: actionDescription,
description: `Analysis completed successfully`,
},
],
};
// Add context-specific recommendations
if (analysisType === "full_report" && result.recommendations) {
response.recommendations?.push(
...result.recommendations.slice(0, 3).map((rec: string) => ({
type: "info" as const,
title: "Recommendation",
description: rec,
})),
);
}
if (analysisType === "health") {
const healthStatus =
result.score > 70 ? "good" : result.score > 40 ? "warning" : "critical";
response.recommendations?.push({
type: healthStatus === "good" ? "info" : "warning",
title: `Health Score: ${result.score}/100`,
description: `Deployment health is ${healthStatus}`,
});
}
return formatMCPResponse(response);
} catch (error) {
const errorResponse: MCPToolResponse = {
success: false,
error: {
code: "ANALYTICS_FAILED",
message: `Failed to analyze deployments: ${error}`,
resolution:
"Ensure deployment data exists in the knowledge graph and parameters are valid",
},
metadata: {
toolVersion: "1.0.0",
executionTime: Date.now() - startTime,
timestamp: new Date().toISOString(),
},
};
return formatMCPResponse(errorResponse);
}
}
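// Example invocations (illustrative; argument shapes follow inputSchema above):
//   await analyzeDeployments({ analysisType: "trends", periodDays: 14 });
//   await analyzeDeployments({ analysisType: "compare", ssgs: ["hugo", "docusaurus"] });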
```
--------------------------------------------------------------------------------
/docs/how-to/site-monitoring.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
  last_updated: "2025-11-20T00:46:21.955Z"
  last_validated: "2025-12-09T19:41:38.586Z"
  auto_updated: false
  update_frequency: monthly
  validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# How to Verify and Monitor Your Documentation Deployment
This guide shows you how to verify your deployed documentation site and monitor deployment health using DocuMCP's built-in tools.
## Quick Setup
```bash
# Verify your deployment:
"verify my GitHub Pages deployment and check for issues"
```
## Deployment Verification Overview
DocuMCP provides deployment verification and health monitoring capabilities:
### Verification Features
- **Deployment Status**: Check if GitHub Pages deployment succeeded
- **Site Accessibility**: Verify your site is reachable
- **Content Validation**: Check documentation accuracy and links
- **Build Health**: Monitor deployment pipeline health
### Health Monitoring
- **Deployment Analytics**: Track success/failure rates over time
- **Build Time Monitoring**: Monitor deployment performance
- **Error Detection**: Identify common deployment issues
## Setup Methods
### Method 1: Deployment Verification
```bash
# Verify deployment status:
"verify my GitHub Pages deployment and check for issues"
```
This will:
1. Check GitHub Pages deployment status
2. Verify site accessibility
3. Validate documentation links
4. Check content accuracy
5. Generate health report
### Method 2: Content Validation
#### Step 1: Link Checking
```bash
# Check documentation links:
"check all my documentation links for broken references"
```
#### Step 2: Content Accuracy
```bash
# Validate content accuracy:
"validate my documentation content for errors and inconsistencies"
```
#### Step 3: Deployment Health
```bash
# Check deployment health:
"analyze my deployment health and provide recommendations"
```
## Deployment Health Monitoring
### Using MCP Tools
```typescript
// Check deployment verification
import { verifyDeployment } from "./dist/tools/verify-deployment.js";

const verification = await verifyDeployment({
  repository: "username/repo-name",
  url: "https://username.github.io/repo-name",
});

// Check documentation links
import { checkDocumentationLinks } from "./dist/tools/check-documentation-links.js";

const linkCheck = await checkDocumentationLinks({
  documentation_path: "./docs",
  check_external_links: true,
  check_internal_links: true,
});
```
### Key Health Indicators
- **Deployment Success**: GitHub Pages build status
- **Link Health**: Broken/working link ratio
- **Content Accuracy**: Documentation validation score
- **Build Performance**: Deployment time trends
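A minimal sketch of how such indicators could roll up into a single 0-100 score (the `Indicators` shape and the weights are assumptions for illustration, not DocuMCP's internal model):

```typescript
// Illustrative only; DocuMCP reports health via its analyze_deployments tool.
interface Indicators {
  deploySuccessRate: number; // 0..1, successful builds / total builds
  workingLinkRatio: number; // 0..1, working links / total links
  contentAccuracy: number; // 0..1, documentation validation score
}

function healthScore(i: Indicators): number {
  // Weighted blend; the weights here are assumed, tune them to your priorities.
  const s =
    0.5 * i.deploySuccessRate + 0.3 * i.workingLinkRatio + 0.2 * i.contentAccuracy;
  return Math.round(s * 100); // compare against a health threshold (see below)
}
```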
## Troubleshooting
### Common Issues
**Problem**: Deployment verification fails
**Solution**: Check GitHub Pages settings and repository permissions
**Problem**: Link checker reports false broken links
**Solution**: Verify external link accessibility and adjust timeout settings
**Problem**: Content validation shows low accuracy
**Solution**: Review code examples and update outdated documentation
**Problem**: Health score seems low
**Solution**: Analyze deployment failures and optimize configurations
## Advanced Configuration
### Custom Validation
```yaml
# validation-config.yml
validation:
  links:
    timeout: 30s
    check_external: true
    check_internal: true
  content:
    accuracy_threshold: 70
    include_code_validation: true
  deployment:
    health_threshold: 80
    track_build_times: true
```
### Integration Options
- **GitHub Actions**: Automated validation in CI/CD workflows
- **MCP Tools**: Direct integration with documcp verification tools
- **Custom Scripts**: Tailored monitoring solutions
## Best Practices
1. **Set Realistic Thresholds**: Avoid alert fatigue
2. **Monitor Key Pages**: Focus on critical documentation
3. **Regular Reviews**: Check metrics weekly
4. **Automated Responses**: Set up auto-healing where possible
## Next Steps
- [Custom Domains Setup](custom-domains.md)
- [SEO Optimization](seo-optimization.md)
- [Analytics Setup](analytics-setup.md)
- [Troubleshooting Guide](../how-to/troubleshooting.md)
```
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/release_improvements.md:
--------------------------------------------------------------------------------
```markdown
# Release Pipeline Improvement Issues
## 🚀 Critical Improvements
### Issue 1: Fix npm Package Publishing
**Title:** Verify and fix npm package publishing configuration
**Priority:** High
**Labels:** bug, release, npm
**Problem:**
The release workflow is configured to publish to npm, but the package "documcp" is not found on the npm registry, indicating either publication failures or configuration issues.
**Solution:**
1. Verify NPM_TOKEN secret configuration in GitHub repository secrets
2. Test npm publication locally with dry-run
3. Add publication verification step to release workflow
4. Implement fallback handling for publication failures
**Acceptance Criteria:**
- [ ] npm package "documcp" is publicly accessible on npm registry
- [ ] Release workflow successfully publishes new versions
- [ ] Publication failures are properly logged and handled
---
### Issue 2: Automated Changelog Generation
**Title:** Implement automated changelog generation from commits
**Priority:** High
**Labels:** enhancement, automation, documentation
**Problem:**
Changelog updates are currently manual, leading to potential inconsistencies and missed entries.
**Solution:**
1. Implement conventional commits standard
2. Add automated changelog generation tool (e.g., standard-version, semantic-release)
3. Integrate with release workflow
4. Add commit validation
**Acceptance Criteria:**
- [ ] Changelog automatically updated on release
- [ ] Commit messages follow conventional format
- [ ] Release notes include all relevant changes
---
### Issue 3: Improve Test Coverage to 85%
**Title:** Increase test coverage to meet 85% target threshold
**Priority:** High
**Labels:** testing, quality, coverage
**Problem:**
Current test coverage (82.59%) is below the 85% target, particularly in critical files.
**Solution:**
1. Focus on files with <60% coverage first
2. Add comprehensive tests for error handling
3. Improve branch coverage
4. Add integration tests
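One way to make the 85% target binding is Jest's `coverageThreshold` option; a sketch, with numbers mirroring the acceptance criteria below (the branch figure and any per-file overrides are assumptions):

```typescript
// jest.config.ts (sketch): fail CI when coverage drops below target.
export default {
  coverageThreshold: {
    global: { statements: 85, branches: 80, functions: 85, lines: 85 },
  },
};
```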
**Acceptance Criteria:**
- [ ] Overall statement coverage ≥85%
- [ ] No files with <70% coverage
- [ ] Critical business logic fully tested
---
## 🎯 Quality Improvements
### Issue 4: Conventional Commits Enforcement
**Title:** Implement commitlint for conventional commits enforcement
**Priority:** Medium
**Labels:** enhancement, automation, quality
**Solution:**
1. Add commitlint configuration
2. Set up husky hooks
3. Add commit message validation
4. Update contributing guidelines
---
### Issue 5: Enhanced Release Notes
**Title:** Improve release note quality and structure
**Priority:** Medium
**Labels:** documentation, enhancement
**Solution:**
1. Create release note templates
2. Add categorized sections (Features, Fixes, Breaking Changes)
3. Include contributor recognition
4. Add performance metrics
---
### Issue 6: Smart Dependabot Auto-merge
**Title:** Enhance Dependabot auto-merge with semver awareness
**Priority:** Medium
**Labels:** dependencies, automation, security
**Solution:**
1. Implement semver-based merge rules
2. Add major version update review requirement
3. Include test verification before auto-merge
4. Add security update prioritization
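A sketch of the semver rule at the heart of this (version parsing is deliberately simplified and prerelease tags are ignored; a real implementation could lean on the `semver` package):

```typescript
type Bump = "major" | "minor" | "patch";

// Classify a Dependabot update by comparing old and new version strings.
function classifyBump(from: string, to: string): Bump {
  const [fMaj, fMin] = from.split(".").map(Number);
  const [tMaj, tMin] = to.split(".").map(Number);
  if (tMaj > fMaj) return "major";
  if (tMin > fMin) return "minor";
  return "patch";
}

// Auto-merge patch/minor updates only; majors require human review.
const autoMergeable = (bump: Bump) => bump !== "major";
```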
---
## 🔮 Advanced Features
### Issue 7: AI-Enhanced Release Notes
**Title:** Implement AI-powered release note generation
**Priority:** Low
**Labels:** enhancement, ai, automation
**Solution:**
1. Integrate with AI API (OpenAI, Gemini, etc.)
2. Create context-aware prompt templates
3. Add project-specific terminology
4. Implement quality validation
---
### Issue 8: Release Health Dashboard
**Title:** Create release pipeline monitoring dashboard
**Priority:** Low
**Labels:** monitoring, enhancement, devops
**Solution:**
1. Track release success rates
2. Monitor publication times
3. Track test coverage trends
4. Add alerting for failures
## 📊 Implementation Priority
1. **Critical:** Issues 1-3 (npm, changelog, coverage)
2. **Quality:** Issues 4-6 (commits, notes, dependabot)
3. **Advanced:** Issues 7-8 (AI, dashboard)
## 🛠️ Technical Dependencies
- Requires Node.js 20+
- GitHub Actions environment
- npm registry access
- Optional: AI API access for enhanced features
```
--------------------------------------------------------------------------------
/docs/explanation/index.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
  last_updated: "2025-11-20T00:46:21.947Z"
  last_validated: "2025-12-09T19:41:38.578Z"
  auto_updated: false
  update_frequency: monthly
  validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Explanation Documentation
Conceptual documentation and background information about DocuMCP's architecture and design principles.
## Architecture Overview
- [DocuMCP Architecture](architecture.md) - Complete system architecture overview
- [Phase 2: Intelligence & Learning System](../phase-2-intelligence.md) - Advanced AI features
## Design Principles
### Methodological Pragmatism
DocuMCP is built on methodological pragmatism frameworks, emphasizing:
- **Practical Outcomes**: Focus on what works reliably
- **Systematic Verification**: Structured processes for validating knowledge
- **Explicit Fallibilism**: Acknowledging limitations and uncertainty
- **Cognitive Systematization**: Organizing knowledge into coherent systems
### Error Architecture Awareness
The system recognizes different types of errors:
- **Human-Cognitive Errors**: Knowledge gaps, attention limitations, cognitive biases
- **Artificial-Stochastic Errors**: Pattern completion errors, context limitations, training artifacts
### Systematic Verification
All recommendations include:
- Confidence scores for significant recommendations
- Explicit checks for different error types
- Verification approaches and validation methods
- Consideration of edge cases and limitations
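A sketch of what a verification-annotated recommendation might look like (field names are illustrative, not the actual MCP types; see the Reference docs for those):

```typescript
interface VerifiedRecommendation {
  recommendation: string;
  confidence: number; // 0..1, attached to significant recommendations
  errorChecks: ("human-cognitive" | "artificial-stochastic")[];
  limitations: string[]; // edge cases and known constraints considered
}
```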
## System Components
### Core Architecture
- **MCP Server**: Model Context Protocol implementation
- **Repository Analysis Engine**: Multi-layered project analysis
- **SSG Recommendation Engine**: Data-driven static site generator selection
- **Documentation Generation**: Intelligent content creation
- **Deployment Automation**: Automated GitHub Pages deployment
### Intelligence System (Phase 2)
- **Memory System**: Historical data and pattern learning
- **User Preferences**: Personalized recommendations
- **Deployment Analytics**: Success pattern analysis
- **Smart Scoring**: Intelligent SSG scoring based on historical data
## Integration Patterns
### MCP Integration
DocuMCP integrates seamlessly with:
- **Claude Desktop**: AI assistant integration
- **GitHub Copilot**: Development environment integration
- **Other MCP Clients**: Broad compatibility through protocol compliance
### Development Workflow
- **Repository Analysis**: Understand project structure and needs
- **SSG Recommendation**: Select optimal static site generator
- **Documentation Generation**: Create comprehensive documentation
- **Deployment**: Automated deployment to GitHub Pages
## Research Foundation
DocuMCP is built on extensive research across multiple domains:
- **Repository Analysis**: Multi-layered analysis techniques
- **SSG Performance**: Comprehensive static site generator analysis
- **Documentation Patterns**: Diataxis framework integration
- **Deployment Optimization**: GitHub Pages deployment best practices
- **API Design**: Model Context Protocol best practices
## Future Directions
### Planned Enhancements
- **Advanced AI Integration**: Enhanced machine learning capabilities
- **Real-time Collaboration**: Multi-user documentation workflows
- **Extended Platform Support**: Support for additional deployment platforms
- **Advanced Analytics**: Comprehensive documentation analytics
### Research Areas
- **Cross-Domain Integration**: Seamless workflow integration
- **Performance Optimization**: Advanced performance tuning
- **User Experience**: Enhanced user interaction patterns
- **Scalability**: Large-scale deployment management
## Philosophy
DocuMCP embodies the principle that documentation should be:
- **Intelligent**: AI-powered analysis and recommendations
- **Automated**: Minimal manual intervention required
- **Comprehensive**: Complete documentation lifecycle coverage
- **Accessible**: Easy to use for developers of all skill levels
- **Reliable**: Consistent, high-quality results
## Related Documentation
- [Tutorials](../tutorials/) - Step-by-step guides
- [How-to Guides](../how-to/) - Task-specific instructions
- [Reference](../reference/) - Technical API reference
- [Architecture Decision Records](../adrs/) - Design decisions and rationale
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
{
"name": "documcp",
"version": "0.5.10",
"description": "Intelligent MCP server for GitHub Pages documentation deployment",
"main": "dist/index.js",
"type": "module",
"bin": {
"documcp": "./dist/index.js"
},
"scripts": {
"build": "tsc",
"dev": "tsx watch src/index.ts",
"dev:inspect": "npx @modelcontextprotocol/inspector dist/index.js",
"build:inspect": "npm run build && npm run dev:inspect",
"start": "node dist/index.js",
"test": "jest",
"test:coverage": "jest --coverage",
"test:performance": "jest --testPathPattern=benchmarks",
"test:ci": "jest --coverage --ci --watchAll=false --forceExit",
"lint": "eslint . --ext .ts",
"lint:fix": "eslint . --ext .ts --fix",
"format": "prettier --write \"src/**/*.ts\"",
"format:check": "prettier --check \"src/**/*.ts\"",
"typecheck": "tsc --noEmit",
"validate:rules": "npm run lint && npm run typecheck && npm run test:coverage",
"security:check": "npm audit --audit-level=moderate",
"ci": "npm run typecheck && npm run lint && npm run test:ci && npm run build",
"commitlint": "commitlint --from HEAD~1 --to HEAD --verbose",
"release": "standard-version",
"release:minor": "standard-version --release-as minor",
"release:major": "standard-version --release-as major",
"release:patch": "standard-version --release-as patch",
"docs:generate": "typedoc",
"docs:watch": "typedoc --watch",
"release:dry-run": "standard-version --dry-run",
"prepare": "husky",
"benchmark:run": "tsx src/scripts/benchmark.ts run",
"benchmark:current": "tsx src/scripts/benchmark.ts current",
"benchmark:create-config": "tsx src/scripts/benchmark.ts create-config",
"benchmark:help": "tsx src/scripts/benchmark.ts help",
"docs:check-links": "tsx src/scripts/link-checker.ts",
"docs:check-links:markdown": "markdown-link-check docs/**/*.md --config .markdown-link-check.json",
"docs:check-links:external": "tsx src/scripts/link-checker.ts --external true --internal false",
"docs:check-links:internal": "tsx src/scripts/link-checker.ts --external false --internal true",
"docs:check-links:ci": "tsx src/scripts/link-checker.ts --env ci",
"docs:check-links:all": "npm run docs:check-links:markdown && npm run docs:check-links",
"docs:validate": "./test-docs.sh",
"docs:test": "npm run docs:check-links:all && npm run docs:validate",
"docs:start": "cd docs && npm start",
"docs:build": "cd docs && npm run build",
"docs:serve": "cd docs && npm run serve"
},
"keywords": [
"mcp",
"documentation",
"github-pages",
"static-site-generator",
"diataxis"
],
"author": "Tosin Akinosho",
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/tosin2013/documcp.git"
},
"bugs": {
"url": "https://github.com/tosin2013/documcp/issues"
},
"homepage": "https://github.com/tosin2013/documcp#readme",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.24.0",
"@typescript-eslint/typescript-estree": "^8.44.0",
"chokidar": "^4.0.1",
"globby": "^14.1.0",
"gray-matter": "^4.0.3",
"linkinator": "^6.1.4",
"simple-git": "^3.30.0",
"tmp": "^0.2.5",
"tree-sitter-bash": "^0.25.0",
"tree-sitter-go": "^0.25.0",
"tree-sitter-java": "^0.23.5",
"tree-sitter-javascript": "^0.25.0",
"tree-sitter-python": "^0.25.0",
"tree-sitter-ruby": "^0.23.1",
"tree-sitter-rust": "^0.24.0",
"tree-sitter-typescript": "^0.23.2",
"tree-sitter-yaml": "^0.5.0",
"web-tree-sitter": "^0.25.9",
"zod": "^3.22.4",
"zod-to-json-schema": "^3.24.6"
},
"devDependencies": {
"@commitlint/cli": "^19.8.1",
"@commitlint/config-conventional": "^19.8.1",
"@types/jest": "^29.5.11",
"@types/node": "^20.11.0",
"@types/tmp": "^0.2.6",
"@typescript-eslint/eslint-plugin": "^6.19.0",
"@typescript-eslint/parser": "^6.19.0",
"eslint": "^8.56.0",
"husky": "^9.1.7",
"jest": "^29.7.0",
"markdown-link-check": "^3.12.2",
"prettier": "^3.2.4",
"standard-version": "^9.5.0",
"ts-jest": "^29.1.1",
"tsx": "^4.7.0",
"typedoc": "^0.28.13",
"typescript": "^5.3.3"
},
"engines": {
"node": ">=20.0.0"
},
"overrides": {
"markdown-link-check": {
"xmlbuilder2": "^4.0.0"
}
}
}
```
--------------------------------------------------------------------------------
/docs/how-to/custom-domains.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
  last_updated: "2025-11-20T00:46:21.950Z"
  last_validated: "2025-12-09T19:41:38.581Z"
  auto_updated: false
  update_frequency: monthly
  validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# How to Set Up Custom Domains
This guide shows you how to configure custom domains for your DocuMCP-deployed documentation site.
## Quick Setup
```bash
# Prompt DocuMCP:
"set up custom domain for my documentation site"
```
## Custom Domain Overview
DocuMCP supports custom domain configuration for professional documentation sites:
### Domain Types
- **Subdomains**: `docs.yourcompany.com`
- **Root Domains**: `yourcompany.com`
- **Path-based**: `yourcompany.com/docs`
### Requirements
- Domain ownership verification
- DNS configuration access
- GitHub Pages enabled
- SSL certificate (automatic with GitHub Pages)
## Setup Methods
### Method 1: Automated Setup (Recommended)
```bash
# Complete domain setup:
"configure custom domain docs.example.com for my site"
```
This will:
1. Guide you through DNS configuration
2. Set up GitHub Pages custom domain
3. Configure SSL certificate
4. Test domain connectivity
5. Set up redirects if needed
### Method 2: Manual Configuration
#### Step 1: DNS Configuration
Add the following DNS records to your domain:
**For Subdomain (docs.example.com):**
```
Type: CNAME
Name: docs
Value: yourusername.github.io
```
> **Note**: Replace `yourusername` with your GitHub username or organization name.
**For Root Domain (example.com):**
```
Type: A
Name: @
Value: 185.199.108.153
Value: 185.199.109.153
Value: 185.199.110.153
Value: 185.199.111.153
```
#### Step 2: GitHub Pages Configuration
1. Go to your repository settings
2. Navigate to "Pages" section
3. Enter your custom domain
4. Enable "Enforce HTTPS"
#### Step 3: Verification
```bash
# Verify domain setup:
"verify my custom domain configuration"
```
## Domain Configuration Examples
### Subdomain Setup
```yaml
# Custom domain configuration
domain:
  type: subdomain
  name: "docs.example.com"
  redirects:
    - from: "example.com/docs"
      to: "docs.example.com"
```
### Root Domain Setup
```yaml
# Root domain configuration
domain:
  type: root
  name: "example.com"
  path: "/docs"
  ssl: true
```
## Advanced Configuration
### Multiple Domains
```bash
# Set up multiple domains:
"configure domains docs.example.com and help.example.com"
```
### Redirects
```bash
# Set up redirects:
"redirect old-domain.com to new-domain.com"
```
### SSL Configuration
```bash
# Verify SSL setup:
"check SSL certificate for my domain"
```
## Troubleshooting
### Common Issues
**Problem**: Domain not resolving
**Solution**: Check DNS propagation (up to 48 hours)
**Problem**: SSL certificate issues
**Solution**: Verify GitHub Pages settings and DNS
**Problem**: Redirects not working
**Solution**: Check CNAME vs A record configuration
**Problem**: Mixed content warnings
**Solution**: Ensure all resources use HTTPS
### DNS Troubleshooting
```bash
# Check DNS propagation:
dig docs.example.com
nslookup docs.example.com
# Test connectivity:
curl -I https://docs.example.com
```
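If you prefer to script the check, Node's built-in `dns` module can confirm the CNAME target (hostnames below are placeholders):

```typescript
// Sketch: verify the docs subdomain points at GitHub Pages.
import { resolveCname } from "node:dns/promises";

const records = await resolveCname("docs.example.com");
if (!records.includes("yourusername.github.io")) {
  console.warn("CNAME not pointing at GitHub Pages yet:", records);
}
```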
## Security Considerations
### HTTPS Enforcement
- Always enable HTTPS in GitHub Pages
- Use HSTS headers for security
- Monitor certificate expiration
### Access Control
- Configure appropriate permissions
- Set up authentication if needed
- Monitor access logs
## Performance Optimization
### CDN Configuration
```bash
# Optimize with CDN:
"set up CDN for my custom domain"
```
### Caching Headers
```yaml
# Cache configuration
caching:
  static_assets: "1 year"
  html_pages: "1 hour"
  api_responses: "5 minutes"
```
## Monitoring
### Domain Health
```bash
# Monitor domain health:
"set up monitoring for my custom domain"
```
### SSL Monitoring
```bash
# Monitor SSL certificate:
"monitor SSL certificate for my domain"
```
## Best Practices
1. **Use Subdomains**: Easier to manage than root domains
2. **Enable HTTPS**: Essential for security and SEO
3. **Set Up Redirects**: Maintain old URLs for SEO
4. **Monitor Uptime**: Track domain availability
5. **Document Changes**: Keep DNS records documented
## Next Steps
- [Site Monitoring](site-monitoring.md)
- [SEO Optimization](seo-optimization.md)
- [Analytics Setup](analytics-setup.md)
- [Performance Optimization](performance-optimization.md)
```
--------------------------------------------------------------------------------
/tests/tools/simple-coverage.test.ts:
--------------------------------------------------------------------------------
```typescript
// Simple coverage tests for all tools
import { promises as fs } from "fs";
import path from "path";
import os from "os";
// Import all tools to increase coverage
import { recommendSSG } from "../../src/tools/recommend-ssg";
import { generateConfig } from "../../src/tools/generate-config";
import { setupStructure } from "../../src/tools/setup-structure";
import { deployPages } from "../../src/tools/deploy-pages";
import { verifyDeployment } from "../../src/tools/verify-deployment";
describe("Simple Tool Coverage Tests", () => {
let tempDir: string;
const originalCwd = process.cwd();
beforeAll(async () => {
tempDir = path.join(os.tmpdir(), "simple-coverage");
await fs.mkdir(tempDir, { recursive: true });
// Clean up any existing KG data in temp directory
const kgDir = path.join(tempDir, ".documcp", "memory");
try {
await fs.rm(kgDir, { recursive: true, force: true });
} catch {
// Ignore if doesn't exist
}
});
afterAll(async () => {
try {
await fs.rm(tempDir, { recursive: true, force: true });
} catch (error) {
// Cleanup errors are okay
}
});
it("should test recommend_ssg tool", async () => {
// Change to temp directory to avoid KG conflicts
process.chdir(tempDir);
try {
const result = await recommendSSG({
analysisId: "test-123",
});
expect(result.content).toBeDefined();
expect(result.content.length).toBeGreaterThan(0);
} finally {
process.chdir(originalCwd);
}
});
it("should test generate_config for each SSG", async () => {
const ssgs = [
"docusaurus",
"mkdocs",
"hugo",
"jekyll",
"eleventy",
] as const;
for (const ssg of ssgs) {
const outputPath = path.join(tempDir, ssg);
const result = await generateConfig({
ssg,
projectName: `Test ${ssg}`,
outputPath,
});
expect(result.content).toBeDefined();
// Verify files were created
const files = await fs.readdir(outputPath);
expect(files.length).toBeGreaterThan(0);
}
});
it("should test setup_structure tool", async () => {
const structurePath = path.join(tempDir, "structure-test");
const result = await setupStructure({
path: structurePath,
ssg: "docusaurus",
includeExamples: true,
});
expect(result.content).toBeDefined();
// Check Diataxis categories were created
const categories = ["tutorials", "how-to", "reference", "explanation"];
for (const category of categories) {
const categoryPath = path.join(structurePath, category);
const stat = await fs.stat(categoryPath);
expect(stat.isDirectory()).toBe(true);
}
});
it("should test deploy_pages tool", async () => {
const deployPath = path.join(tempDir, "deploy-test");
const result = await deployPages({
repository: deployPath,
ssg: "docusaurus",
branch: "gh-pages",
});
expect(result.content).toBeDefined();
// Check workflow was created
const workflowPath = path.join(
deployPath,
".github",
"workflows",
"deploy-docs.yml",
);
const stat = await fs.stat(workflowPath);
expect(stat.isFile()).toBe(true);
});
it("should test verify_deployment tool", async () => {
const verifyPath = path.join(tempDir, "verify-test");
await fs.mkdir(verifyPath, { recursive: true });
const result = await verifyDeployment({
repository: verifyPath,
});
expect(result.content).toBeDefined();
expect(result.content.length).toBeGreaterThan(0);
// Should contain check results with recommendation icons
const fullText = result.content.map((c) => c.text).join(" ");
expect(fullText).toContain("🔴"); // Should contain recommendation icons
});
it("should test error cases", async () => {
// Test generate_config with invalid path
try {
await generateConfig({
ssg: "docusaurus",
projectName: "Test",
outputPath: "/invalid/path/that/should/fail",
});
} catch (error) {
expect(error).toBeDefined();
}
// Test setup_structure error handling
const result = await setupStructure({
path: path.join(tempDir, "new-structure"),
ssg: "mkdocs",
includeExamples: false,
});
expect(result.content).toBeDefined();
});
});
```
--------------------------------------------------------------------------------
/src/tools/change-watcher.ts:
--------------------------------------------------------------------------------
```typescript
import { Tool } from "@modelcontextprotocol/sdk/types.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChangeWatcher } from "../utils/change-watcher.js";
import { formatMCPResponse, MCPContentWrapper } from "../types/api.js";
export const changeWatcherSchema = z.object({
action: z
.enum(["start", "status", "stop", "trigger", "install_hook"])
.default("status")
.describe("Action to perform"),
projectPath: z.string().describe("Project root path"),
docsPath: z.string().describe("Documentation path"),
watchPaths: z
.array(z.string())
.optional()
.describe("Paths to watch (defaults to src/)"),
excludePatterns: z
.array(z.string())
.optional()
.describe("Glob patterns to exclude"),
debounceMs: z
.number()
.min(50)
.max(600000)
.default(500)
.describe("Debounce window for drift detection"),
triggerOnCommit: z
.boolean()
.default(true)
.describe("Respond to git commit events"),
triggerOnPR: z.boolean().default(true).describe("Respond to PR/merge events"),
webhookEndpoint: z
.string()
.optional()
.describe("Webhook endpoint path (e.g., /hooks/documcp/change-watcher)"),
webhookSecret: z
.string()
.optional()
.describe("Shared secret for webhook signature validation"),
port: z
.number()
.min(1)
.max(65535)
.optional()
.describe("Port for webhook server (default 8787)"),
snapshotDir: z.string().optional().describe("Snapshot directory override"),
reason: z.string().optional().describe("Reason for manual trigger"),
files: z
.array(z.string())
.optional()
.describe("Changed files (for manual trigger)"),
});
type ChangeWatcherArgs = z.infer<typeof changeWatcherSchema>;
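// Module-level singleton: a single watcher instance per MCP server process.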
let watcher: ChangeWatcher | null = null;
// Exported for tests
export function __resetChangeWatcher(): void {
watcher = null;
}
function makeResponse<T>(data: T): ReturnType<typeof formatMCPResponse<T>> {
return formatMCPResponse(
{
success: true,
data,
metadata: {
toolVersion: "0.0.0",
executionTime: 0,
timestamp: new Date().toISOString(),
},
},
{ fullResponse: true },
);
}
export async function handleChangeWatcher(
args: unknown,
context?: any,
): Promise<MCPContentWrapper> {
const parsed = changeWatcherSchema.parse(args);
switch (parsed.action) {
case "start":
return await startWatcher(parsed, context);
case "status":
return makeResponse(watcher ? watcher.getStatus() : { running: false });
case "stop":
if (watcher) {
await watcher.stop();
watcher = null;
}
return makeResponse({ running: false });
case "trigger":
if (!watcher) {
await startWatcher(parsed, context);
}
if (!watcher) {
throw new Error("Change watcher not available");
}
return makeResponse(
await watcher.triggerManual(parsed.reason, parsed.files),
);
case "install_hook":
if (!watcher) {
await startWatcher(parsed, context);
}
if (!watcher) {
throw new Error("Change watcher not available");
}
return makeResponse({
hook: await watcher.installGitHook("post-commit"),
});
}
}
async function startWatcher(
options: ChangeWatcherArgs,
context?: any,
): Promise<MCPContentWrapper> {
if (!watcher) {
watcher = new ChangeWatcher(
{
projectPath: options.projectPath,
docsPath: options.docsPath,
watchPaths: options.watchPaths,
excludePatterns: options.excludePatterns,
debounceMs: options.debounceMs,
triggerOnCommit: options.triggerOnCommit,
triggerOnPR: options.triggerOnPR,
webhookEndpoint: options.webhookEndpoint,
webhookSecret: options.webhookSecret,
port: options.port,
snapshotDir: options.snapshotDir,
},
{
logger: {
info: context?.info,
warn: context?.warn,
error: context?.error,
},
},
);
await watcher.start();
}
return makeResponse(watcher.getStatus());
}
export const changeWatcherTool: Tool = {
name: "change_watcher",
description:
"Watch code changes and trigger documentation drift detection in near real-time.",
inputSchema: zodToJsonSchema(changeWatcherSchema) as any,
};
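// Example invocation (illustrative; shapes follow changeWatcherSchema above):
//   await handleChangeWatcher({ action: "start", projectPath: ".", docsPath: "docs" });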
```
--------------------------------------------------------------------------------
/src/tools/manage-preferences.ts:
--------------------------------------------------------------------------------
```typescript
/**
* Manage User Preferences Tool
* Phase 2.2: User Preference Management
*
* MCP tool for viewing and updating user preferences
*/
import { z } from "zod";
import { MCPToolResponse, formatMCPResponse } from "../types/api.js";
import { getUserPreferenceManager } from "../memory/user-preferences.js";
const inputSchema = z.object({
action: z.enum([
"get",
"update",
"reset",
"export",
"import",
"recommendations",
]),
userId: z.string().optional().default("default"),
preferences: z
.object({
preferredSSGs: z.array(z.string()).optional(),
documentationStyle: z
.enum(["minimal", "comprehensive", "tutorial-heavy"])
.optional(),
expertiseLevel: z
.enum(["beginner", "intermediate", "advanced"])
.optional(),
preferredTechnologies: z.array(z.string()).optional(),
preferredDiataxisCategories: z
.array(z.enum(["tutorials", "how-to", "reference", "explanation"]))
.optional(),
autoApplyPreferences: z.boolean().optional(),
})
.optional(),
json: z.string().optional(), // For import action
});
export async function managePreferences(
args: unknown,
): Promise<{ content: any[] }> {
const startTime = Date.now();
try {
const { action, userId, preferences, json } = inputSchema.parse(args);
const manager = await getUserPreferenceManager(userId);
let result: any;
let actionDescription: string;
switch (action) {
case "get":
result = await manager.getPreferences();
actionDescription = "Retrieved user preferences";
break;
case "update":
if (!preferences) {
throw new Error("Preferences object required for update action");
}
result = await manager.updatePreferences(preferences);
actionDescription = "Updated user preferences";
break;
case "reset":
result = await manager.resetPreferences();
actionDescription = "Reset preferences to defaults";
break;
case "export": {
const exportedJson = await manager.exportPreferences();
result = { exported: exportedJson };
actionDescription = "Exported preferences as JSON";
break;
}
case "import": {
if (!json) {
throw new Error("JSON string required for import action");
}
result = await manager.importPreferences(json);
actionDescription = "Imported preferences from JSON";
break;
}
case "recommendations": {
const recommendations = await manager.getSSGRecommendations();
result = {
recommendations,
summary: `Found ${recommendations.length} SSG recommendation(s) based on usage history`,
};
actionDescription = "Retrieved SSG recommendations";
break;
}
default:
throw new Error(`Unknown action: ${action}`);
}
const response: MCPToolResponse<any> = {
success: true,
data: result,
metadata: {
toolVersion: "1.0.0",
executionTime: Date.now() - startTime,
timestamp: new Date().toISOString(),
},
recommendations: [
{
type: "info",
title: actionDescription,
description: `User preferences ${action} completed successfully for user: ${userId}`,
},
],
};
// Add context-specific next steps
if (action === "get" || action === "recommendations") {
response.nextSteps = [
{
action: "Update Preferences",
toolRequired: "manage_preferences",
description: "Modify your preferences using the update action",
priority: "medium",
},
];
} else if (action === "update" || action === "import") {
response.nextSteps = [
{
action: "Test Recommendations",
toolRequired: "recommend_ssg",
description: "See how your preferences affect SSG recommendations",
priority: "high",
},
];
}
return formatMCPResponse(response);
} catch (error) {
const errorResponse: MCPToolResponse = {
success: false,
error: {
code: "PREFERENCE_MANAGEMENT_FAILED",
message: `Failed to manage preferences: ${error}`,
resolution:
"Check that action and parameters are valid, and user ID exists",
},
metadata: {
toolVersion: "1.0.0",
executionTime: Date.now() - startTime,
timestamp: new Date().toISOString(),
},
};
return formatMCPResponse(errorResponse);
}
}
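// Example invocations (illustrative; shapes follow inputSchema above):
//   await managePreferences({ action: "get" });
//   await managePreferences({ action: "update", preferences: { expertiseLevel: "advanced" } });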
```
--------------------------------------------------------------------------------
/docs/how-to/seo-optimization.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
  last_updated: "2025-11-20T00:46:21.955Z"
  last_validated: "2025-12-09T19:41:38.586Z"
  auto_updated: false
  update_frequency: monthly
  validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# How to Manage Documentation SEO
This guide shows you how to use DocuMCP's sitemap management tools to improve your documentation's search engine visibility.
## Quick Setup
```bash
# Generate sitemap for your documentation:
"generate sitemap for my documentation"
```
## SEO Overview
DocuMCP provides basic SEO support through sitemap management:
### Available SEO Features
- **XML Sitemap Generation**: Automatic sitemap creation for documentation
- **Sitemap Validation**: Verify sitemap structure and URLs
- **Link Discovery**: Automatic detection of documentation pages
- **GitHub Pages Integration**: Optimized for GitHub Pages deployment
### SEO Benefits
- **Search Engine Discovery**: Help search engines find your documentation
- **Crawling Efficiency**: Provide structured navigation for crawlers
- **URL Organization**: Maintain clean URL structure
- **Update Tracking**: Track when pages were last modified
## Setup Methods
### Method 1: Automatic Sitemap Generation
```bash
# Generate sitemap for your documentation:
"generate sitemap for my documentation"
```
This will:
1. Scan your documentation directory
2. Discover all markdown and HTML files
3. Generate XML sitemap with proper URLs
4. Include last modified dates from git history
5. Validate sitemap structure
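The core of that generation step is small enough to sketch: discovered pages map to `<url>` entries in the XML (file discovery and git-based `lastmod` are elided; paths and the base URL are placeholders):

```typescript
// Sketch: turn a list of doc pages into a minimal XML sitemap.
function buildSitemap(pages: string[], baseUrl: string): string {
  const urls = pages
    .map((p) => p.replace(/\.md$/, ".html")) // assumes .md compiles to .html
    .map((p) => `  <url><loc>${baseUrl}/${p}</loc></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`;
}

// Usage (illustrative):
// buildSitemap(["index.md", "how-to/seo-optimization.md"], "https://mydocs.github.io/repo");
```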
### Method 2: Manual Sitemap Management
#### Step 1: Generate Sitemap
```bash
# Create XML sitemap:
"create sitemap for my documentation with base URL https://mydocs.com"
```
#### Step 2: Validate Sitemap
```bash
# Validate existing sitemap:
"validate my documentation sitemap"
```
#### Step 3: Update Sitemap
```bash
# Update sitemap with new content:
"update my documentation sitemap"
```
## Sitemap Management
### Using MCP Tools
```typescript
// Generate sitemap using MCP tools
import { manageSitemap } from "./dist/tools/manage-sitemap.js";

// Generate new sitemap
const sitemap = await manageSitemap({
  action: "generate",
  docsPath: "./docs",
  baseUrl: "https://mydocs.github.io/repo",
});

// Validate existing sitemap
const validation = await manageSitemap({
  action: "validate",
  docsPath: "./docs",
});

// Update sitemap with new content
const update = await manageSitemap({
  action: "update",
  docsPath: "./docs",
  baseUrl: "https://mydocs.github.io/repo",
});
```
### Sitemap Configuration
```yaml
# Sitemap generation settings
sitemap:
  base_url: "https://mydocs.github.io/repo"
  include_patterns:
    - "**/*.md"
    - "**/*.html"
  exclude_patterns:
    - "node_modules/**"
    - ".git/**"
  update_frequency: "weekly"
  use_git_history: true
```
## Best Practices
### Sitemap Management
1. **Regular Updates**: Regenerate sitemap when adding new content
2. **Proper URLs**: Ensure all URLs in sitemap are accessible
3. **Git Integration**: Use git history for accurate last modified dates
4. **Validation**: Always validate sitemap after generation
5. **Submit to Search Engines**: Submit sitemap to Google Search Console
### URL Structure
- Use clean, descriptive URLs
- Maintain consistent URL patterns
- Avoid deep nesting when possible
- Include keywords in URLs naturally
### Content Organization
- Structure content logically
- Use clear headings and navigation
- Maintain consistent documentation patterns
- Link related content appropriately
## Troubleshooting
### Common Issues
**Problem**: Sitemap not generating
**Solution**: Check documentation directory permissions and file patterns
**Problem**: Invalid URLs in sitemap
**Solution**: Verify base URL configuration and file paths
**Problem**: Sitemap not updating
**Solution**: Ensure git history is accessible for last modified dates
**Problem**: Search engines not finding pages
**Solution**: Submit sitemap to Google Search Console and verify accessibility
### Sitemap Debugging
```bash
# Debug sitemap issues:
"validate my sitemap and check for errors"
```
## Sitemap Tools
### Built-in DocuMCP Tools
- **Sitemap Generation**: Create XML sitemaps automatically
- **Sitemap Validation**: Verify sitemap structure and URLs
- **Link Discovery**: Find all documentation pages
- **Git Integration**: Use git history for modification dates
### MCP Tools Available
- `manage_sitemap`: Generate, validate, and update sitemaps
- `check_documentation_links`: Verify all links work correctly
- `validate_content`: Check documentation accuracy
## Next Steps
- [Deploy Pages](../reference/mcp-tools.md#deploy_pages)
- [Site Monitoring](site-monitoring.md)
- [Custom Domains](custom-domains.md)
- [Troubleshooting](troubleshooting.md)
```
--------------------------------------------------------------------------------
/tests/memory/knowledge-graph-enhanced.test.ts:
--------------------------------------------------------------------------------
```typescript
import { promises as fs } from "fs";
import { join } from "path";
import { tmpdir } from "os";
import { KnowledgeGraph } from "../../src/memory/knowledge-graph.js";
import { MemoryManager } from "../../src/memory/manager.js";
describe("Knowledge Graph Basic Tests", () => {
let tempDir: string;
let memoryManager: MemoryManager;
let knowledgeGraph: KnowledgeGraph;
beforeEach(async () => {
tempDir = join(
tmpdir(),
`test-kg-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`,
);
await fs.mkdir(tempDir, { recursive: true });
memoryManager = new MemoryManager(tempDir);
await memoryManager.initialize();
knowledgeGraph = new KnowledgeGraph(memoryManager);
await knowledgeGraph.initialize();
// Add test data to memory manager
await memoryManager.remember(
"analysis",
{
projectType: "javascript",
complexity: "medium",
framework: "react",
technologies: ["webpack", "babel", "jest"],
},
{
projectId: "project-1",
tags: ["frontend", "spa"],
},
);
await memoryManager.remember(
"recommendation",
{
ssg: "docusaurus",
confidence: 0.9,
reasons: ["React ecosystem", "Good documentation features"],
},
{
projectId: "project-1",
tags: ["react", "documentation"],
},
);
});
afterEach(async () => {
try {
await fs.rm(tempDir, { recursive: true });
} catch {
// Ignore cleanup errors
}
});
describe("Basic Functionality", () => {
it("should initialize knowledge graph", async () => {
expect(knowledgeGraph).toBeDefined();
});
it("should build graph from memories", async () => {
await knowledgeGraph.buildFromMemories();
const stats = await knowledgeGraph.getStatistics();
expect(stats).toBeDefined();
expect(typeof stats.nodeCount).toBe("number");
expect(typeof stats.edgeCount).toBe("number");
expect(stats.nodeCount).toBeGreaterThanOrEqual(0);
});
it("should get all nodes", async () => {
await knowledgeGraph.buildFromMemories();
const nodes = await knowledgeGraph.getAllNodes();
expect(Array.isArray(nodes)).toBe(true);
expect(nodes.length).toBeGreaterThanOrEqual(0);
});
it("should get all edges", async () => {
await knowledgeGraph.buildFromMemories();
const edges = await knowledgeGraph.getAllEdges();
expect(Array.isArray(edges)).toBe(true);
expect(edges.length).toBeGreaterThanOrEqual(0);
});
it("should get connections for a node", async () => {
await knowledgeGraph.buildFromMemories();
const nodes = await knowledgeGraph.getAllNodes();
if (nodes.length > 0) {
const connections = await knowledgeGraph.getConnections(nodes[0].id);
expect(Array.isArray(connections)).toBe(true);
}
});
});
describe("Data Management", () => {
it("should save and load from memory", async () => {
await knowledgeGraph.buildFromMemories();
// Save the current state
await knowledgeGraph.saveToMemory();
// Create new instance and load
const newKG = new KnowledgeGraph(memoryManager);
await newKG.initialize();
await newKG.loadFromMemory();
const originalStats = await knowledgeGraph.getStatistics();
const loadedStats = await newKG.getStatistics();
expect(loadedStats.nodeCount).toBe(originalStats.nodeCount);
});
it("should remove nodes", async () => {
await knowledgeGraph.buildFromMemories();
const nodes = await knowledgeGraph.getAllNodes();
if (nodes.length > 0) {
const initialCount = nodes.length;
const removed = await knowledgeGraph.removeNode(nodes[0].id);
expect(removed).toBe(true);
const remainingNodes = await knowledgeGraph.getAllNodes();
expect(remainingNodes.length).toBe(initialCount - 1);
}
});
});
describe("Performance", () => {
it("should handle multiple memories efficiently", async () => {
// Add more test data
const promises = [];
for (let i = 0; i < 20; i++) {
promises.push(
memoryManager.remember(
"analysis",
{
projectType: i % 2 === 0 ? "javascript" : "python",
complexity: ["low", "medium", "high"][i % 3],
index: i,
},
{
projectId: `project-${Math.floor(i / 5)}`,
tags: [`tag-${i % 3}`],
},
),
);
}
await Promise.all(promises);
const startTime = Date.now();
await knowledgeGraph.buildFromMemories();
const buildTime = Date.now() - startTime;
expect(buildTime).toBeLessThan(2000); // Should complete within 2 seconds
const stats = await knowledgeGraph.getStatistics();
expect(stats.nodeCount).toBeGreaterThanOrEqual(0);
});
});
});
```
--------------------------------------------------------------------------------
/src/types/api.ts:
--------------------------------------------------------------------------------
```typescript
// Standardized API response types per DEVELOPMENT_RULES.md CODE-002
export interface MCPToolResponse<T = any> {
success: boolean;
data?: T;
error?: ErrorDetails;
metadata: ResponseMetadata;
recommendations?: Recommendation[];
nextSteps?: NextStep[];
}
export interface ErrorDetails {
code: string;
message: string;
details?: any;
resolution?: string;
}
export interface ResponseMetadata {
toolVersion: string;
executionTime: number;
timestamp: string;
analysisId?: string;
}
export interface Recommendation {
type: "info" | "warning" | "critical";
title: string;
description: string;
action?: string;
}
export interface NextStep {
action: string;
toolRequired?: string;
description?: string;
priority?: "low" | "medium" | "high";
}
// Additional types for README health analysis and best practices
// These types prevent compilation errors when health analysis functionality is added
export interface HealthAnalysis {
score: number;
issues: HealthIssue[];
recommendations: string[];
metadata: {
checkDate: string;
version: string;
};
}
export interface HealthIssue {
type: "critical" | "warning" | "info";
message: string;
section?: string;
line?: number;
}
export interface ChecklistItem {
id: string;
title: string;
description: string;
completed: boolean;
required: boolean;
category: string;
}
export interface BestPracticesReport {
items: ChecklistItem[];
score: number;
categories: {
[category: string]: {
total: number;
completed: number;
score: number;
};
};
recommendations: string[];
}
// MCP content format wrapper for backward compatibility
export interface MCPContentWrapper {
content: Array<{
type: "text";
text: string;
}>;
isError?: boolean;
}
// Helper to convert MCPToolResponse to MCP format
export function formatMCPResponse<T>(
response: MCPToolResponse<T>,
options?: { fullResponse?: boolean },
): MCPContentWrapper {
const content: Array<{ type: "text"; text: string }> = [];
// For backward compatibility: by default, use rich formatting with metadata, recommendations, etc.
// If fullResponse is true (Phase 3 tools), include the full response object as JSON
if (options?.fullResponse) {
content.push({
type: "text",
text: JSON.stringify(response, null, 2),
});
} else {
// Legacy format with rich formatting (original behavior)
if (response.success) {
// Main data response
if (response.data) {
content.push({
type: "text",
text: JSON.stringify(response.data, null, 2),
});
} else {
content.push({
type: "text",
text: "Operation completed successfully",
});
}
// Metadata section
content.push({
type: "text",
text: `\nExecution completed in ${response.metadata.executionTime}ms at ${response.metadata.timestamp}`,
});
// Recommendations section with emoji icons
if (response.recommendations?.length) {
content.push({
type: "text",
text:
"\nRecommendations:\n" +
response.recommendations
.map(
(r) =>
`${getRecommendationIcon(r.type)} ${r.title}: ${
r.description
}`,
)
.join("\n"),
});
}
// Next steps section with arrow symbols
if (response.nextSteps?.length) {
content.push({
type: "text",
text:
"\nNext Steps:\n" +
response.nextSteps
.map((s) => {
let stepText = `→ ${s.action}`;
if (s.toolRequired) {
stepText += ` (use ${s.toolRequired}`;
if (s.description) {
stepText += `: ${s.description}`;
}
stepText += ")";
} else if (s.description) {
stepText += `: ${s.description}`;
}
return stepText;
})
.join("\n"),
});
}
} else if (response.error) {
// Error responses need to be JSON for programmatic error handling
content.push({
type: "text",
text: JSON.stringify(response, null, 2),
});
}
}
return {
content,
isError: !response.success,
};
}
function getRecommendationIcon(type: Recommendation["type"]): string {
switch (type) {
case "info":
return "ℹ️";
case "warning":
return "⚠️";
case "critical":
return "🔴";
default:
return "•";
}
}
// Utility functions for type conversions to prevent common type errors
export function convertBestPracticesReportToChecklistItems(
report: BestPracticesReport,
): ChecklistItem[] {
return report.items;
}
export function generateHealthRecommendations(
analysis: HealthAnalysis,
): string[] {
return analysis.recommendations;
}
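// Illustrative usage (not part of the exported API):
//   const wrapped = formatMCPResponse({
//     success: true,
//     data: { analyzed: true },
//     metadata: {
//       toolVersion: "1.0.0",
//       executionTime: 42,
//       timestamp: new Date().toISOString(),
//     },
//   });
//   // wrapped.content is the MCP-format payload a tool handler returns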
```
--------------------------------------------------------------------------------
/tests/memory/temporal-analysis.test.ts:
--------------------------------------------------------------------------------
```typescript
/**
* Basic unit tests for Temporal Memory Analysis System
* Tests core temporal analysis functionality
* Part of Issue #55 - Advanced Memory Components Unit Tests
*/
import { promises as fs } from "fs";
import path from "path";
import os from "os";
import { MemoryManager } from "../../src/memory/manager.js";
import {
TemporalMemoryAnalysis,
TimeWindow,
TemporalPattern,
TemporalMetrics,
TemporalQuery,
TemporalInsight,
} from "../../src/memory/temporal-analysis.js";
describe("TemporalMemoryAnalysis", () => {
let tempDir: string;
let memoryManager: MemoryManager;
let temporalAnalysis: TemporalMemoryAnalysis;
beforeEach(async () => {
// Create unique temp directory for each test
tempDir = path.join(
os.tmpdir(),
`temporal-analysis-test-${Date.now()}-${Math.random()
.toString(36)
.substr(2, 9)}`,
);
await fs.mkdir(tempDir, { recursive: true });
memoryManager = new MemoryManager(tempDir);
await memoryManager.initialize();
// Create required dependencies for TemporalMemoryAnalysis
const storage = (memoryManager as any).storage;
const learningSystem = {
learn: jest.fn(),
predict: jest.fn(),
adaptModel: jest.fn(),
};
const knowledgeGraph = {
addNode: jest.fn(),
addEdge: jest.fn(),
findPaths: jest.fn(),
};
temporalAnalysis = new TemporalMemoryAnalysis(
storage,
memoryManager,
learningSystem as any,
knowledgeGraph as any,
);
});
afterEach(async () => {
// Cleanup temp directory
try {
await fs.rm(tempDir, { recursive: true, force: true });
} catch (error) {
// Ignore cleanup errors
}
});
describe("Temporal Analysis Initialization", () => {
test("should create temporal analysis system instance", () => {
expect(temporalAnalysis).toBeDefined();
expect(temporalAnalysis).toBeInstanceOf(TemporalMemoryAnalysis);
});
test("should analyze temporal patterns", async () => {
// Add some test memories
await memoryManager.remember("analysis", {
projectPath: "/test/project",
timestamp: new Date().toISOString(),
});
// Test temporal pattern analysis
const patterns = await temporalAnalysis.analyzeTemporalPatterns();
expect(Array.isArray(patterns)).toBe(true);
});
test("should get temporal metrics", async () => {
// Add test memory
await memoryManager.remember("deployment", {
status: "success",
timestamp: new Date().toISOString(),
});
// Test temporal metrics
const metrics = await temporalAnalysis.getTemporalMetrics();
expect(metrics).toBeDefined();
expect(typeof metrics.activityLevel).toBe("number");
});
test("should predict future activity", async () => {
// Add test memories
await memoryManager.remember("analysis", { test: "data1" });
await memoryManager.remember("analysis", { test: "data2" });
// Test prediction
const prediction = await temporalAnalysis.predictFutureActivity();
expect(prediction).toBeDefined();
expect(typeof prediction.nextActivity.confidence).toBe("number");
});
test("should get temporal insights", async () => {
// Add test memory
await memoryManager.remember("recommendation", {
type: "ssg",
recommendation: "use-hugo",
});
// Test insights
const insights = await temporalAnalysis.getTemporalInsights();
expect(Array.isArray(insights)).toBe(true);
});
});
describe("Temporal Query Support", () => {
test("should handle temporal queries with parameters", async () => {
// Add test data
await memoryManager.remember("analysis", { framework: "react" });
await memoryManager.remember("deployment", { status: "success" });
const query: TemporalQuery = {
granularity: "day",
aggregation: "count",
filters: { types: ["analysis"] },
};
const patterns = await temporalAnalysis.analyzeTemporalPatterns(query);
expect(Array.isArray(patterns)).toBe(true);
const metrics = await temporalAnalysis.getTemporalMetrics(query);
expect(metrics).toBeDefined();
});
});
describe("Error Handling", () => {
test("should handle empty data gracefully", async () => {
// Test with no memories
const patterns = await temporalAnalysis.analyzeTemporalPatterns();
expect(Array.isArray(patterns)).toBe(true);
expect(patterns.length).toBe(0);
const metrics = await temporalAnalysis.getTemporalMetrics();
expect(metrics).toBeDefined();
expect(metrics.activityLevel).toBe(0);
});
test("should handle invalid query parameters", async () => {
const invalidQuery = {
granularity: "invalid" as any,
aggregation: "count" as any,
};
// Should not throw but handle gracefully
await expect(
temporalAnalysis.analyzeTemporalPatterns(invalidQuery),
).resolves.toBeDefined();
});
});
});
```
--------------------------------------------------------------------------------
/setup-precommit.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for DocuMCP pre-commit hooks
# Based on: https://gist.githubusercontent.com/tosin2013/15b1d7bffafe17dff6374edf1530469b/raw/324c60dffb93ddd62c007effc1dbf3918c6483e8/install-precommit-tools.sh
set -e
echo "🚀 Setting up DocuMCP pre-commit hooks..."
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check if we're in a git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
print_error "This is not a git repository!"
exit 1
fi
# Check if Node.js and npm are installed
if ! command -v node &> /dev/null; then
print_error "Node.js is not installed. Please install Node.js 20+ first."
exit 1
fi
if ! command -v npm &> /dev/null; then
print_error "npm is not installed. Please install npm first."
exit 1
fi
# Check Node.js version
NODE_VERSION=$(node --version | cut -d'v' -f2 | cut -d'.' -f1)
if [ "$NODE_VERSION" -lt 20 ]; then
print_error "Node.js version 20 or higher is required. Current version: $(node --version)"
exit 1
fi
print_status "Node.js version: $(node --version) ✓"
# Install npm dependencies if needed
if [ ! -d "node_modules" ]; then
print_status "Installing npm dependencies..."
npm install
else
print_status "npm dependencies already installed ✓"
fi
# Install pre-commit
print_status "Installing pre-commit..."
if command -v brew &> /dev/null; then
# macOS with Homebrew
if ! command -v pre-commit &> /dev/null; then
print_status "Installing pre-commit via Homebrew..."
brew install pre-commit
else
print_status "pre-commit already installed ✓"
fi
elif command -v pip3 &> /dev/null; then
# Linux/WSL with pip3
if ! command -v pre-commit &> /dev/null; then
print_status "Installing pre-commit via pip3..."
pip3 install --user pre-commit
# Add to PATH if needed
if [[ ":$PATH:" != *":$HOME/.local/bin:"* ]]; then
print_warning "Adding ~/.local/bin to PATH"
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
export PATH="$HOME/.local/bin:$PATH"
fi
else
print_status "pre-commit already installed ✓"
fi
elif command -v pipx &> /dev/null; then
# Alternative installation via pipx
if ! command -v pre-commit &> /dev/null; then
print_status "Installing pre-commit via pipx..."
pipx install pre-commit
else
print_status "pre-commit already installed ✓"
fi
else
print_error "Cannot install pre-commit. Please install either:"
print_error " - Homebrew (macOS): brew install pre-commit"
print_error " - pip3: pip3 install --user pre-commit"
print_error " - pipx: pipx install pre-commit"
exit 1
fi
# Verify pre-commit installation
if ! command -v pre-commit &> /dev/null; then
print_error "pre-commit installation failed!"
exit 1
fi
print_success "pre-commit installed: $(pre-commit --version)"
# Install pre-commit hooks
print_status "Installing pre-commit hooks..."
pre-commit install-hooks
# Update Husky pre-commit hook to use pre-commit
if [ -f ".husky/pre-commit" ]; then
if ! grep -q "pre-commit run" .husky/pre-commit; then
print_status "Updating Husky pre-commit hook..."
echo "pre-commit run --all-files" > .husky/pre-commit
chmod +x .husky/pre-commit
else
print_status "Husky pre-commit hook already configured ✓"
fi
else
print_warning "Husky pre-commit hook not found. Creating..."
mkdir -p .husky
echo "pre-commit run --all-files" > .husky/pre-commit
chmod +x .husky/pre-commit
fi
# Test the setup
print_status "Testing pre-commit setup..."
if pre-commit run --all-files > /dev/null 2>&1; then
print_success "Pre-commit hooks are working!"
else
print_warning "Pre-commit hooks encountered some issues (this is normal for first run)"
print_status "Running pre-commit with output for diagnosis..."
pre-commit run --all-files || true
fi
print_success "🎉 Pre-commit setup complete!"
echo
echo "📋 Summary of installed hooks:"
echo " ✅ File integrity checks (trailing whitespace, end-of-file, etc.)"
echo " ✅ YAML/JSON validation"
echo " ✅ Security checks (private keys, large files)"
echo " ✅ ESLint code linting with auto-fix"
echo " ✅ Prettier code formatting"
echo " ✅ TypeScript type checking"
echo " ✅ npm security audit"
echo " ✅ Core Jest tests (stable tests only)"
echo " ✅ Documentation link checking"
echo " ✅ Package.json validation"
echo " ✅ Build verification"
echo
echo "🔧 Usage:"
echo " • Hooks run automatically on every commit"
echo " • Run manually: pre-commit run --all-files"
echo " • Update hooks: pre-commit autoupdate"
echo " • Skip hooks (emergency): git commit --no-verify"
echo
echo "📖 For team members:"
echo " • New team members should run: ./setup-precommit.sh"
echo " • All hooks are configured to match existing npm scripts"
echo " • Hooks focus on code quality without blocking development"
echo
print_success "Happy coding! 🚀"
```
--------------------------------------------------------------------------------
/docs/api/modules.html:
--------------------------------------------------------------------------------
```html
<!DOCTYPE html><html class="default" lang="en" data-base="./"><head><meta charset="utf-8"/><meta http-equiv="x-ua-compatible" content="IE=edge"/><title>DocuMCP API Documentation - v0.4.1</title><meta name="description" content="Documentation for DocuMCP API Documentation"/><meta name="viewport" content="width=device-width, initial-scale=1"/><link rel="stylesheet" href="assets/style.css"/><link rel="stylesheet" href="assets/highlight.css"/><script defer src="assets/main.js"></script><script async src="assets/icons.js" id="tsd-icons-script"></script><script async src="assets/search.js" id="tsd-search-script"></script><script async src="assets/navigation.js" id="tsd-nav-script"></script></head><body><script>document.documentElement.dataset.theme = localStorage.getItem("tsd-theme") || "os";document.body.style.display="none";setTimeout(() => window.app?app.showPage():document.body.style.removeProperty("display"),500)</script><header class="tsd-page-toolbar"><div class="tsd-toolbar-contents container"><a href="index.html" class="title">DocuMCP API Documentation - v0.4.1</a><div id="tsd-toolbar-links"></div><button id="tsd-search-trigger" class="tsd-widget" aria-label="Search"><svg width="16" height="16" viewBox="0 0 16 16" fill="none" aria-hidden="true"><use href="assets/icons.svg#icon-search"></use></svg></button><dialog id="tsd-search" aria-label="Search"><input role="combobox" id="tsd-search-input" aria-controls="tsd-search-results" aria-autocomplete="list" aria-expanded="true" autocapitalize="off" autocomplete="off" placeholder="Search the docs" maxLength="100"/><ul role="listbox" id="tsd-search-results"></ul><div id="tsd-search-status" aria-live="polite" aria-atomic="true"><div>Preparing search index...</div></div></dialog><a href="#" class="tsd-widget menu" id="tsd-toolbar-menu-trigger" data-toggle="menu" aria-label="Menu"><svg width="16" height="16" viewBox="0 0 16 16" fill="none" aria-hidden="true"><use href="assets/icons.svg#icon-menu"></use></svg></a></div></header><div class="container container-main"><div class="col-content"><div class="tsd-page-title"><ul class="tsd-breadcrumb" aria-label="Breadcrumb"></ul><h1>DocuMCP API Documentation - v0.4.1</h1></div><details class="tsd-panel-group tsd-member-group tsd-accordion" open><summary class="tsd-accordion-summary" data-key="section-Variables"><svg width="20" height="20" viewBox="0 0 24 24" fill="none" aria-hidden="true"><use href="assets/icons.svg#icon-chevronDown"></use></svg><h2>Variables</h2></summary><dl class="tsd-member-summaries"><dt class="tsd-member-summary" id="tools"><span class="tsd-member-summary-name"><svg class="tsd-kind-icon" viewBox="0 0 24 24" aria-label="Variable"><use href="assets/icons.svg#icon-32"></use></svg><a href="variables/TOOLS.html">TOOLS</a><a href="#tools" aria-label="Permalink" class="tsd-anchor-icon"><svg viewBox="0 0 24 24" aria-hidden="true"><use href="assets/icons.svg#icon-anchor"></use></svg></a></span></dt><dd class="tsd-member-summary"></dd></dl></details></div><div class="col-sidebar"><div class="page-menu"><div class="tsd-navigation settings"><details class="tsd-accordion"><summary class="tsd-accordion-summary"><svg width="20" height="20" viewBox="0 0 24 24" fill="none" aria-hidden="true"><use href="assets/icons.svg#icon-chevronDown"></use></svg><h3>Settings</h3></summary><div class="tsd-accordion-details"><div class="tsd-filter-visibility"><span class="settings-label">Member Visibility</span><ul id="tsd-filter-options"><li class="tsd-filter-item"><label class="tsd-filter-input"><input 
type="checkbox" id="tsd-filter-inherited" name="inherited" checked/><svg width="32" height="32" viewBox="0 0 32 32" aria-hidden="true"><rect class="tsd-checkbox-background" width="30" height="30" x="1" y="1" rx="6" fill="none"></rect><path class="tsd-checkbox-checkmark" d="M8.35422 16.8214L13.2143 21.75L24.6458 10.25" stroke="none" stroke-width="3.5" stroke-linejoin="round" fill="none"></path></svg><span>Inherited</span></label></li></ul></div><div class="tsd-theme-toggle"><label class="settings-label" for="tsd-theme">Theme</label><select id="tsd-theme"><option value="os">OS</option><option value="light">Light</option><option value="dark">Dark</option></select></div></div></details></div><details open class="tsd-accordion tsd-page-navigation"><summary class="tsd-accordion-summary"><svg width="20" height="20" viewBox="0 0 24 24" fill="none" aria-hidden="true"><use href="assets/icons.svg#icon-chevronDown"></use></svg><h3>On This Page</h3></summary><div class="tsd-accordion-details"><details open class="tsd-accordion tsd-page-navigation-section"><summary class="tsd-accordion-summary" data-key="section-Variables"><svg width="20" height="20" viewBox="0 0 24 24" fill="none" aria-hidden="true"><use href="assets/icons.svg#icon-chevronDown"></use></svg>Variables</summary><div><a href="#tools"><svg class="tsd-kind-icon" viewBox="0 0 24 24" aria-label="Variable"><use href="assets/icons.svg#icon-32"></use></svg><span>TOOLS</span></a></div></details></div></details></div><div class="site-menu"><nav class="tsd-navigation"><a href="modules.html" class="current">DocuMCP API Documentation - v0.4.1</a><ul class="tsd-small-nested-navigation" id="tsd-nav-container"><li>Loading...</li></ul></nav></div></div></div><footer><p class="tsd-generator">Generated using <a href="https://typedoc.org/" target="_blank">TypeDoc</a></p></footer><div class="overlay"></div></body></html>
```
--------------------------------------------------------------------------------
/docs/how-to/performance-optimization.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.953Z"
last_validated: "2025-12-09T19:41:38.584Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# How to Optimize Documentation Deployment Performance
This guide shows you how to optimize your DocuMCP deployment process for faster builds and better deployment success rates.
## Quick Setup
```bash
# Analyze deployment performance:
"analyze my deployment performance and provide optimization recommendations"
```
## Deployment Performance Overview
DocuMCP tracks deployment performance metrics to help you optimize your documentation build process:
### Key Metrics
- **Build Time**: Time taken for documentation generation
- **Deployment Success Rate**: Percentage of successful deployments
- **SSG Performance**: Static site generator efficiency comparison
- **Error Recovery**: Time to resolve deployment failures
### Performance Benefits
- **Faster Deployments**: Reduced time from commit to live site
- **Higher Success Rates**: More reliable deployment pipeline
- **Better Developer Experience**: Quicker feedback cycles
- **Reduced Resource Usage**: Optimized build processes
## Setup Methods
### Method 1: Deployment Performance Analysis
```bash
# Analyze deployment performance:
"analyze my deployment performance and provide optimization recommendations"
```
This will:
1. Analyze current deployment metrics
2. Compare SSG build times
3. Identify deployment bottlenecks
4. Provide optimization recommendations
5. Track performance improvements
### Method 2: SSG Performance Comparison
#### Step 1: Build Time Analysis
```bash
# Analyze build performance:
"compare build times across different static site generators"
```
#### Step 2: Success Rate Optimization
```bash
# Optimize deployment success:
"analyze deployment failures and suggest improvements"
```
#### Step 3: Performance Monitoring
```bash
# Monitor deployment performance:
"track my deployment performance over time"
```
## Deployment Optimization Techniques
### SSG Selection Optimization
```bash
# Analyze SSG performance:
"compare static site generator build times and success rates"
```
#### SSG Performance Factors
- **Build Speed**: Time to generate documentation
- **Success Rate**: Reliability of builds
- **Resource Usage**: Memory and CPU requirements
- **Feature Support**: Compatibility with documentation needs
### Build Configuration Optimization
```typescript
// Optimize build configuration for faster deployments
const buildConfig = {
// Use faster package managers
packageManager: "pnpm", // or "yarn" for faster installs
// Optimize Node.js version
nodeVersion: "20", // Latest LTS for better performance
// Configure build caching
cache: {
enabled: true,
strategy: "aggressive",
},
};
```
### Deployment Pipeline Optimization
```bash
# Optimize deployment pipeline:
"analyze my deployment pipeline and suggest performance improvements"
```
#### Pipeline Best Practices
- **Parallel Processing**: Run independent tasks concurrently (see the sketch after this list)
- **Build Caching**: Cache dependencies and build artifacts
- **Incremental Builds**: Only rebuild changed content
- **Resource Allocation**: Optimize memory and CPU usage
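A minimal sketch of the parallel-processing and caching points above (the task names and cache probe are illustrative assumptions, not DocuMCP configuration):
```typescript
import { existsSync } from "fs";

// Placeholder for invoking a real build step (lint, typecheck, etc.)
async function runTask(name: string): Promise<void> {
  console.log(`running ${name}...`);
}

// Run independent tasks concurrently; skip dependency install on a cache hit
async function runPipeline(): Promise<void> {
  const cacheHit = existsSync("node_modules/.cache"); // naive cache probe
  await Promise.all([
    runTask("lint"),
    runTask("typecheck"),
    cacheHit ? Promise.resolve() : runTask("install-deps"),
  ]);
  await runTask("build-docs"); // runs only after the parallel tasks finish
}
```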
## Troubleshooting
### Common Issues
**Problem**: Slow deployment builds
**Solution**: Analyze SSG performance and switch to faster alternatives
**Problem**: Frequent deployment failures
**Solution**: Review error patterns and optimize build configurations
**Problem**: Inconsistent build times
**Solution**: Enable build caching and optimize dependencies
**Problem**: Resource exhaustion during builds
**Solution**: Optimize memory usage and build parallelization
### Performance Debugging
```bash
# Debug deployment performance issues:
"analyze my deployment bottlenecks and suggest optimizations"
```
## Best Practices
### Deployment Performance Guidelines
1. **Choose Fast SSGs**: Use performance data to select optimal static site generators
2. **Enable Caching**: Implement build caching for faster subsequent deployments
3. **Optimize Dependencies**: Keep dependencies minimal and up-to-date
4. **Monitor Build Times**: Track deployment performance over time
5. **Use Analytics**: Leverage deployment analytics for optimization decisions
### Build Optimization Strategies
1. **Incremental Builds**: Only rebuild changed content when possible
2. **Parallel Processing**: Run independent build tasks concurrently
3. **Resource Management**: Optimize memory and CPU usage during builds
4. **Dependency Caching**: Cache node_modules and build artifacts
5. **Build Environment**: Use optimized build environments and Node.js versions
## Deployment Analytics Tools
### Built-in DocuMCP Analytics
- **Build time tracking**: Monitor deployment speed over time
- **Success rate analysis**: Track deployment reliability
- **SSG performance comparison**: Compare static site generator efficiency
- **Failure pattern analysis**: Identify common deployment issues
### MCP Tools Available
- `analyze_deployments`: Get comprehensive deployment performance analytics (see the sketch below)
- `deploy_pages`: Track deployment attempts and build times
- `recommend_ssg`: Get performance-based SSG recommendations
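As a rough sketch, `analyze_deployments` can be invoked programmatically in the same style as the other MCP tools; the import path and parameter names below are assumptions, so verify them against the tool's actual schema:
```typescript
// Illustrative only — check the real export and parameters in the repo
import { analyzeDeployments } from "./dist/tools/analyze-deployments.js";

const report = await analyzeDeployments({
  analysisType: "full_report", // assumed parameter
});
console.log(report);
```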
## Next Steps
- [Deploy Pages](../reference/mcp-tools.md#deploy_pages)
- [Analytics Setup](analytics-setup.md)
- [Site Monitoring](site-monitoring.md)
- [Troubleshooting](troubleshooting.md)
```
--------------------------------------------------------------------------------
/src/scripts/benchmark.ts:
--------------------------------------------------------------------------------
```typescript
#!/usr/bin/env node
// Performance benchmark CLI script per PERF-001 rules
import { promises as fs } from "fs";
import path from "path";
import { createBenchmarker } from "../benchmarks/performance.js";
interface BenchmarkConfig {
testRepos: Array<{
path: string;
name: string;
expectedSize?: "small" | "medium" | "large";
}>;
outputDir?: string;
verbose?: boolean;
}
async function main() {
const args = process.argv.slice(2);
const command = args[0] || "help";
switch (command) {
case "run":
await runBenchmarks(args.slice(1));
break;
case "current":
await benchmarkCurrentRepo();
break;
case "create-config":
await createDefaultConfig();
break;
case "help":
default:
printHelp();
break;
}
}
async function runBenchmarks(args: string[]) {
const configPath = args[0] || "./benchmark-config.json";
try {
const configContent = await fs.readFile(configPath, "utf-8");
const config: BenchmarkConfig = JSON.parse(configContent);
console.log("🎯 Performance Benchmarking System (PERF-001 Compliance)");
console.log("Target Performance:");
console.log(" • Small repos (<100 files): <1 second");
console.log(" • Medium repos (100-1000 files): <10 seconds");
console.log(" • Large repos (1000+ files): <60 seconds\\n");
const benchmarker = createBenchmarker();
const suite = await benchmarker.runBenchmarkSuite(config.testRepos);
// Print detailed report
benchmarker.printDetailedReport(suite);
// Export results if output directory specified
if (config.outputDir) {
await fs.mkdir(config.outputDir, { recursive: true });
const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
const outputPath = path.join(
config.outputDir,
`benchmark-${timestamp}.json`,
);
await benchmarker.exportResults(suite, outputPath);
      console.log(`\n📄 Results exported to: ${outputPath}`);
}
// Exit with appropriate code
process.exit(suite.overallPassed ? 0 : 1);
} catch (error) {
console.error("❌ Benchmark failed:", error);
    console.error(
      '\nTry running "npm run benchmark:create-config" to create a default configuration.',
    );
process.exit(1);
}
}
async function benchmarkCurrentRepo() {
console.log("🎯 Benchmarking Current Repository");
console.log("=".repeat(40));
const currentRepo = process.cwd();
const repoName = path.basename(currentRepo);
const benchmarker = createBenchmarker();
try {
    console.log(`📊 Analyzing: ${repoName} at ${currentRepo}\n`);
const result = await benchmarker.benchmarkRepository(
currentRepo,
"standard",
);
// Generate single-repo suite
const suite = benchmarker.generateSuite(`Current Repository: ${repoName}`, [
result,
]);
// Print results
benchmarker.printDetailedReport(suite);
// Export to current directory
const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
const outputPath = `./benchmark-current-${timestamp}.json`;
await benchmarker.exportResults(suite, outputPath);
    console.log(`\n📄 Results saved to: ${outputPath}`);
process.exit(suite.overallPassed ? 0 : 1);
} catch (error) {
console.error("❌ Benchmark failed:", error);
process.exit(1);
}
}
async function createDefaultConfig() {
const defaultConfig: BenchmarkConfig = {
testRepos: [
{
path: ".",
name: "Current Repository",
expectedSize: "small",
},
// Add more test repositories here
// {
// path: "/path/to/medium/repo",
// name: "Medium Test Repo",
// expectedSize: "medium"
// },
// {
// path: "/path/to/large/repo",
// name: "Large Test Repo",
// expectedSize: "large"
// }
],
outputDir: "./benchmark-results",
verbose: true,
};
const configPath = "./benchmark-config.json";
await fs.writeFile(configPath, JSON.stringify(defaultConfig, null, 2));
console.log("✅ Created default benchmark configuration:");
console.log(` ${configPath}`);
console.log("");
console.log("📝 Edit this file to add your test repositories, then run:");
console.log(" npm run benchmark:run");
}
function printHelp() {
console.log("🎯 DocuMCP Performance Benchmarking Tool");
console.log("");
console.log("USAGE:");
console.log(
" npm run benchmark:run [config-file] Run full benchmark suite",
);
console.log(
" npm run benchmark:current Benchmark current repository only",
);
console.log(
" npm run benchmark:create-config Create default configuration",
);
console.log(" npm run benchmark:help Show this help");
console.log("");
console.log("PERFORMANCE TARGETS (PERF-001):");
console.log(" • Small repositories (<100 files): <1 second");
console.log(" • Medium repositories (100-1000 files): <10 seconds");
console.log(" • Large repositories (1000+ files): <60 seconds");
console.log("");
console.log("EXAMPLES:");
console.log(" npm run benchmark:current");
console.log(" npm run benchmark:create-config");
console.log(" npm run benchmark:run ./my-config.json");
}
// Handle unhandled promise rejections
process.on("unhandledRejection", (error) => {
console.error("❌ Unhandled rejection:", error);
process.exit(1);
});
main().catch((error) => {
console.error("❌ Script failed:", error);
process.exit(1);
});
```
--------------------------------------------------------------------------------
/docs/research/domain-3-ssg-recommendation/ssg-performance-analysis.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.966Z"
last_validated: "2025-12-09T19:41:38.597Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Static Site Generator Performance Analysis
**Research Date**: 2025-01-14
**Domain**: SSG Recommendation Engine
**Status**: Completed
## Research Overview
Comprehensive analysis of static site generator performance characteristics, build times, and deployment considerations for DocuMCP recommendation engine.
## Key Research Findings
### Build Performance Comparison
Based on CSS-Tricks comprehensive benchmarking study:
| SSG | Language | Small Sites (1-1024 files) | Large Sites (1K-64K files) | Key Characteristics |
| -------------- | -------- | --------------------------------- | ------------------------------- | ------------------------------- |
| **Hugo** | Go | ~250x faster than Gatsby | ~40x faster than Gatsby | Fastest across all scales |
| **Jekyll** | Ruby | Competitive with Eleventy | Slower scaling, Ruby bottleneck | Good for small-medium sites |
| **Eleventy** | Node.js | Fast, lightweight | Good scaling | Excellent developer experience |
| **Gatsby** | React | Slower startup (webpack overhead) | Improves relatively at scale | Rich features, plugin ecosystem |
| **Next.js** | React | Framework overhead | Good with optimization | Hybrid capabilities |
| **Docusaurus** | React | Moderate performance | Documentation optimized | Purpose-built for docs |
### Performance Characteristics Analysis
#### **Tier 1: Speed Champions (Hugo)**
- **Build Time**: Sub-second for small sites, seconds for large sites
- **Scaling**: Linear performance, excellent for content-heavy sites
- **Trade-offs**: Limited plugin ecosystem, steeper learning curve
#### **Tier 2: Balanced Performance (Jekyll, Eleventy)**
- **Build Time**: Fast for small sites, moderate scaling
- **Scaling**: Jekyll hits Ruby performance ceiling, Eleventy scales better
- **Trade-offs**: Good balance of features and performance
#### **Tier 3: Feature-Rich (Gatsby, Next.js, Docusaurus)**
- **Build Time**: Significant webpack/framework overhead
- **Scaling**: Performance gap narrows at scale due to optimizations
- **Trade-offs**: Rich ecosystems, modern features, slower builds
### Real-World Performance Implications
#### **For DocuMCP Recommendation Logic:**
1. **Small Projects** (< 100 pages):
- All SSGs perform adequately
- Developer experience becomes primary factor
- Hugo is still ~250x faster than Gatsby even for simple sites
2. **Medium Projects** (100-1000 pages):
- Performance differences become noticeable
- Hugo maintains significant advantage
- Jekyll starts showing Ruby limitations
3. **Large Projects** (1000+ pages):
- Hugo remains fastest but gap narrows
- Framework-based SSGs benefit from optimizations
- Build time becomes CI/CD bottleneck consideration
### Deployment and CI/CD Considerations
#### **GitHub Actions Build Time Impact**
- **Free Plan Limitations**: 2000 minutes/month
- **Cost Implications**: Slow builds consume more CI time
- **Real Example**: a Gatsby site build taking 15 minutes vs. a Hugo build taking 30 seconds
#### **Content Editor Experience**
- **Preview Generation**: Fast builds enable quick content previews
- **Development Workflow**: Build speed affects local development experience
- **Incremental Builds**: Framework support varies significantly
### Recommendation Engine Criteria
Based on research findings, DocuMCP should weight these factors:
1. **Project Scale Weight**:
- Small projects: 40% performance, 60% features/DX
- Medium projects: 60% performance, 40% features/DX
- Large projects: 80% performance, 20% features/DX
2. **Team Context Multipliers**:
- Technical team: Favor performance (Hugo/Eleventy)
- Non-technical content creators: Favor ease-of-use (Jekyll/Docusaurus)
- Mixed teams: Balanced approach (Next.js/Gatsby)
3. **Use Case Optimization**:
- **Documentation**: Docusaurus > MkDocs > Hugo
- **Marketing Sites**: Next.js > Gatsby > Hugo
- **Blogs**: Jekyll > Eleventy > Hugo
- **Large Content Sites**: Hugo > Eleventy > Others
## Implementation Recommendations for DocuMCP
### Algorithm Design
```typescript
// Performance scoring algorithm
// Minimal metrics shape assumed for this sketch
interface ProjectMetrics {
  pageCount: number;
  teamSize: number;
  techLevel: "technical" | "mixed" | "non-technical";
  updateFrequency: "low" | "medium" | "high";
}

const calculatePerformanceScore = (projectMetrics: ProjectMetrics) => {
  // teamSize, techLevel, and updateFrequency feed the team-context
  // multipliers described above; only pageCount drives this score
  const { pageCount } = projectMetrics;
  // Scale-based performance weighting (40%/60%/80% per the criteria above)
  const performanceWeight =
    pageCount > 1000 ? 0.8 : pageCount > 100 ? 0.6 : 0.4;
  // SSG-specific raw performance scores (0-100)
  const performanceScores = {
    hugo: 100,
    eleventy: 85,
    jekyll: pageCount > 500 ? 60 : 80,
    nextjs: 70,
    gatsby: pageCount > 1000 ? 65 : 45,
    docusaurus: 75,
  };
  // Apply the scale weight; the remaining (1 - performanceWeight) share
  // would come from a features/DX score combined by the caller
  return Object.fromEntries(
    Object.entries(performanceScores).map(([ssg, score]) => [
      ssg,
      score * performanceWeight,
    ]),
  ) as Record<keyof typeof performanceScores, number>;
};
```
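A quick check of the weighting logic for a large project (the values follow directly from the sketch above):
```typescript
// 1,500 pages → performanceWeight = 0.8
const scores = calculatePerformanceScore({
  pageCount: 1500,
  teamSize: 4,
  techLevel: "technical",
  updateFrequency: "high",
});
// hugo 80, eleventy 68, jekyll 48, nextjs 56, gatsby 52, docusaurus 60
console.log(scores);
```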
### Research Validation
- ✅ Performance benchmarks analyzed from multiple sources
- ✅ Real-world implications documented
- ✅ Recommendation criteria established
- ⚠️ Needs validation: Edge case performance scenarios
- ⚠️ Needs testing: Algorithm implementation with real project data
## Sources & References
1. CSS-Tricks Comprehensive SSG Build Time Analysis
2. Jamstack.org Performance Surveys
3. GitHub Actions CI/CD Cost Analysis
4. Community Performance Reports (Hugo, Gatsby, Next.js)
```
--------------------------------------------------------------------------------
/ARCHITECTURAL_CHANGES_SUMMARY.md:
--------------------------------------------------------------------------------
```markdown
# Architectural Changes Summary
## January 14, 2025
This document summarizes all architectural changes, ADR updates, and implementations completed in this session.
## ✅ Implementations Completed
### 1. Release Pipeline Improvements (Issues #1, #2, #3)
**Status**: ✅ Fully Implemented
**Files Changed**:
- `.github/workflows/release.yml` - Enhanced with verification and automation
**Features Implemented**:
- ✅ npm publishing verification with retry mechanism (3 attempts)
- ✅ Package installation test after publication
- ✅ Automated changelog generation using standard-version
- ✅ Commit message validation before release
- ✅ Coverage threshold updated from 80% to 85% (currently at 91.65%)
- ✅ Enhanced changelog extraction for GitHub Releases
**Verification**:
- ✅ `npm run release:dry-run` tested and working
- ✅ All quality gates in place
- ✅ Error handling implemented throughout
### 2. ADR Documentation Updates
**Status**: ✅ Completed
**New ADRs Created**:
- **ADR-012**: Priority Scoring System for Documentation Drift Detection
- **ADR-013**: Release Pipeline and Package Distribution Architecture
**ADRs Updated**:
- **ADR-002**: Added GitHub issue references (#77, #78)
- **ADR-004**: Added Diataxis type tracking documentation (#81)
- **ADR-005**: Added release pipeline reference
- **ADR-006**: Added agent artifact cleanup tool reference (#80)
- **ADR-009**: Added LLM integration documentation (#82)
- **ADR-012**: Added GitHub issue reference (#83)
- **ADR-013**: Updated status to Accepted with implementation details
**ADR README**: Updated with all new ADRs and status changes
## 📋 Code Implementation Verification
### Priority Scoring System (Issue #83)
**Implementation**: ✅ Found in `src/utils/drift-detector.ts`
- `DriftPriorityScore` interface (lines 91-103)
- `calculatePriorityScore()` method (line 1307)
- Integration with drift detection results
### LLM Integration Layer (Issue #82)
**Implementation**: ✅ Found in multiple files
- `src/utils/llm-client.ts` - LLM client implementation
- `src/utils/semantic-analyzer.ts` - Semantic analysis integration
- Supports DeepSeek, OpenAI, Anthropic, Ollama providers
- Hybrid analysis (LLM + AST fallback)
### Agent Artifact Cleanup (Issue #80)
**Implementation**: ✅ Found in multiple files
- `src/tools/cleanup-agent-artifacts.ts` - MCP tool implementation
- `src/utils/artifact-detector.ts` - Detection logic
- Integrated into main MCP server (`src/index.ts`)
### Diataxis Type Tracking (Issue #81)
**Implementation**: ✅ Found in multiple files
- `src/utils/drift-detector.ts` - Diataxis type detection (lines 699-984)
- `src/memory/schemas.ts` - Schema definition (line 266)
- CodeExample interface extended with diataxisType field
### Knowledge Graph Extensions (Issues #77, #78)
**Implementation**: ✅ Found in `src/memory/schemas.ts`
- DocumentationExampleEntitySchema (line 262)
- ExampleValidationEntitySchema (line 284)
- CallGraphEntitySchema (referenced in commit)
## 📊 Project Statistics
- **Total TypeScript Files**: 72
- **ADRs**: 13 (11 Accepted, 2 Proposed)
- **Test Coverage**: 91.65% (exceeds 85% target)
- **Recent Commits**: 10+ in last 2 days
## 🔗 GitHub Issues Status
| Issue # | Title | Status | ADR Reference |
| ------- | ------------------------------ | -------------------- | ------------- |
| #1 | Fix npm Package Publishing | ✅ Fixed | ADR-013 |
| #2 | Automated Changelog Generation | ✅ Implemented | ADR-013 |
| #3 | Test Coverage to 85% | ✅ Exceeded (91.65%) | ADR-013 |
| #77 | Knowledge Graph Extensions | ✅ Implemented | ADR-002 |
| #78 | Documentation Example Entities | ✅ Implemented | ADR-002 |
| #80 | Agent Artifact Cleanup | ✅ Implemented | ADR-006 |
| #81 | Diataxis Type Tracking | ✅ Implemented | ADR-004 |
| #82 | LLM Integration Layer | ✅ Implemented | ADR-009 |
| #83 | Priority Scoring System | ✅ Implemented | ADR-012 |
## 📝 Commits Made
1. **dbef13f** - `feat(release): implement npm publishing verification and automated changelog (#1, #2)`
- Release pipeline improvements
- New ADRs (012, 013)
- ADR updates with issue references
2. **ef03918** - `docs(adrs): update ADR-013 status to Accepted with implementation details`
- ADR-013 status update
- Implementation details added
## 🎯 Next Steps
### Ready for Implementation
- **Issue #74**: Change Watcher for Real-time Documentation Drift Monitoring
- Dependencies: ✅ Drift detection system exists
- Dependencies: ✅ LLM integration available (optional)
- Status: Ready to implement
### Future Enhancements (From ADR-013)
- Issue #7: AI-enhanced release notes
- Issue #8: Release health dashboard
- Issue #6: Smart Dependabot auto-merge
## 📚 Documentation Created
1. **ISSUE_IMPLEMENTATION_SUMMARY.md** - Detailed implementation summary
2. **ARCHITECTURAL_CHANGES_SUMMARY.md** - This document
3. **ADR-012** - Priority Scoring System documentation
4. **ADR-013** - Release Pipeline Architecture documentation
## ✅ Quality Assurance
- ✅ All implementations verified in codebase
- ✅ ADRs updated with implementation status
- ✅ GitHub issues referenced in ADRs
- ✅ Commit messages follow conventional format
- ✅ Test coverage exceeds targets
- ✅ Release pipeline tested and working
---
**Last Updated**: 2025-01-14
**Status**: All changes committed and pushed to GitHub
**Ready for**: Issue #74 implementation
```
--------------------------------------------------------------------------------
/tests/utils/usage-metadata.test.ts:
--------------------------------------------------------------------------------
```typescript
import { UsageMetadataCollector } from "../../src/utils/usage-metadata.js";
import {
DriftSnapshot,
DocumentationSnapshot,
DriftDetectionResult,
} from "../../src/utils/drift-detector.js";
import { ASTAnalysisResult } from "../../src/utils/ast-analyzer.js";
import { DriftDetector } from "../../src/utils/drift-detector.js";
describe("UsageMetadataCollector", () => {
const collector = new UsageMetadataCollector();
const makeSnapshot = (): DriftSnapshot => {
const producerFile: ASTAnalysisResult = {
filePath: "/repo/src/producer.ts",
language: "typescript",
functions: [
{
name: "produce",
parameters: [],
returnType: null,
isAsync: false,
isExported: true,
isPublic: true,
docComment: null,
startLine: 1,
endLine: 1,
complexity: 1,
dependencies: [],
},
],
classes: [
{
name: "Widget",
isExported: true,
extends: null,
implements: [],
methods: [],
properties: [],
docComment: null,
startLine: 1,
endLine: 1,
},
],
interfaces: [],
types: [],
imports: [],
exports: ["produce", "Widget"],
contentHash: "abc",
lastModified: new Date().toISOString(),
linesOfCode: 10,
complexity: 1,
};
const consumerFile: ASTAnalysisResult = {
filePath: "/repo/src/consumer.ts",
language: "typescript",
functions: [],
classes: [],
interfaces: [],
types: [],
imports: [
{
source: "./producer",
imports: [{ name: "produce" }, { name: "Widget" }],
isDefault: false,
startLine: 1,
},
],
exports: [],
contentHash: "def",
lastModified: new Date().toISOString(),
linesOfCode: 10,
complexity: 1,
};
const docSnapshot: DocumentationSnapshot = {
filePath: "/repo/docs/api.md",
contentHash: "ghi",
referencedCode: ["/repo/src/producer.ts"],
lastUpdated: new Date().toISOString(),
sections: [
{
title: "Widget",
content: "Widget docs",
referencedFunctions: [],
referencedClasses: ["Widget"],
referencedTypes: [],
codeExamples: [],
startLine: 1,
endLine: 5,
},
],
};
return {
projectPath: "/repo",
timestamp: new Date().toISOString(),
files: new Map([
[producerFile.filePath, producerFile],
[consumerFile.filePath, consumerFile],
]),
documentation: new Map([[docSnapshot.filePath, docSnapshot]]),
};
};
it("counts imports and class/function references (sync fallback)", () => {
const snapshot = makeSnapshot();
const metadata = collector.collectSync(snapshot);
expect(metadata.imports.get("produce")).toBe(1);
expect(metadata.imports.get("Widget")).toBe(1);
expect(metadata.functionCalls.get("produce")).toBe(1);
// Widget is identified as a class and should increment class instantiations
// once from docs and once from imports.
expect(metadata.classInstantiations.get("Widget")).toBe(2);
});
it("collects usage metadata asynchronously with call graph analysis", async () => {
const snapshot = makeSnapshot();
const metadata = await collector.collect(snapshot);
expect(metadata.imports.get("produce")).toBeGreaterThanOrEqual(1);
expect(metadata.imports.get("Widget")).toBeGreaterThanOrEqual(1);
// Function calls may be counted from call graph or imports
expect(metadata.functionCalls.get("produce")).toBeGreaterThanOrEqual(0);
expect(metadata.classInstantiations.get("Widget")).toBeGreaterThanOrEqual(
1,
);
});
it("integrates with DriftDetector scoring when usage metadata is supplied", async () => {
const snapshot = makeSnapshot();
const metadata = await collector
.collect(snapshot)
.catch(() => collector.collectSync(snapshot));
const detector = new DriftDetector("/repo");
const result: DriftDetectionResult = {
filePath: "/repo/src/producer.ts",
hasDrift: true,
severity: "medium" as const,
drifts: [
{
type: "outdated" as const,
affectedDocs: ["/repo/docs/api.md"],
codeChanges: [
{
type: "modified" as const,
category: "function" as const,
name: "produce",
details: "signature update",
impactLevel: "minor" as const,
},
],
description: "function changed",
detectedAt: new Date().toISOString(),
severity: "medium" as const,
},
],
suggestions: [],
impactAnalysis: {
breakingChanges: 0,
majorChanges: 0,
minorChanges: 1,
affectedDocFiles: ["/repo/docs/api.md"],
estimatedUpdateEffort: "low" as const,
requiresManualReview: false,
},
};
const scoreWithoutUsage = detector.calculatePriorityScore(result, snapshot);
const scoreWithUsage = detector.calculatePriorityScore(
result,
snapshot,
metadata,
);
// Usage frequency should align with observed usage (imports + calls).
const expectedUsage =
(metadata.functionCalls.get("produce") ?? 0) +
(metadata.imports.get("produce") ?? 0);
expect(scoreWithUsage.factors.usageFrequency).toBe(expectedUsage);
expect(scoreWithUsage.factors.usageFrequency).toBeGreaterThan(0);
});
});
```
--------------------------------------------------------------------------------
/tests/utils/user-feedback-integration.test.ts:
--------------------------------------------------------------------------------
```typescript
/**
* User Feedback Integration Tests (ADR-012 Phase 3)
*/
import { UserFeedbackIntegration } from "../../src/utils/user-feedback-integration.js";
import { DriftDetectionResult } from "../../src/utils/drift-detector.js";
// Mock fetch globally
global.fetch = jest.fn();
describe("UserFeedbackIntegration", () => {
let integration: UserFeedbackIntegration;
beforeEach(() => {
integration = new UserFeedbackIntegration();
(global.fetch as jest.Mock).mockClear();
});
afterEach(() => {
jest.clearAllMocks();
});
describe("Configuration", () => {
test("should configure GitHub integration", () => {
integration.configure({
provider: "github",
apiToken: "test-token",
owner: "test-owner",
repo: "test-repo",
});
expect(integration).toBeDefined();
});
test("should clear cache on configuration change", () => {
integration.configure({
provider: "github",
owner: "test",
repo: "test",
});
integration.clearCache();
expect(integration).toBeDefined();
});
});
describe("Feedback Score Calculation", () => {
test("should return 0 when no integration configured", async () => {
const result: DriftDetectionResult = {
filePath: "/test/file.ts",
hasDrift: true,
severity: "medium",
drifts: [],
suggestions: [],
impactAnalysis: {
breakingChanges: 0,
majorChanges: 1,
minorChanges: 0,
affectedDocFiles: [],
estimatedUpdateEffort: "medium",
requiresManualReview: false,
},
};
const score = await integration.calculateFeedbackScore(result);
expect(score).toBe(0);
});
test("should handle API errors gracefully", async () => {
integration.configure({
provider: "github",
apiToken: "invalid-token",
owner: "nonexistent",
repo: "nonexistent",
});
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: false,
status: 401,
});
const result: DriftDetectionResult = {
filePath: "/test/file.ts",
hasDrift: true,
severity: "medium",
drifts: [],
suggestions: [],
impactAnalysis: {
breakingChanges: 0,
majorChanges: 1,
minorChanges: 0,
affectedDocFiles: [],
estimatedUpdateEffort: "medium",
requiresManualReview: false,
},
};
const score = await integration.calculateFeedbackScore(result);
expect(score).toBe(0);
});
test("should calculate feedback score from GitHub issues", async () => {
integration.configure({
provider: "github",
apiToken: "test-token",
owner: "test-owner",
repo: "test-repo",
});
const mockIssues = [
{
number: 1,
title: "Documentation issue",
body: "The file `src/utils/test.ts` has outdated docs",
state: "open",
labels: [{ name: "documentation" }, { name: "critical" }],
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
},
{
number: 2,
title: "Another docs issue",
body: "Function `testFunction()` needs documentation",
state: "open",
labels: [{ name: "docs" }],
created_at: new Date().toISOString(),
updated_at: new Date(
Date.now() - 10 * 24 * 60 * 60 * 1000,
).toISOString(), // 10 days ago
},
];
(global.fetch as jest.Mock).mockResolvedValueOnce({
ok: true,
json: async () => mockIssues,
});
const result: DriftDetectionResult = {
filePath: "src/utils/test.ts",
hasDrift: true,
severity: "medium",
drifts: [
{
type: "missing",
affectedDocs: [],
codeChanges: [
{
name: "testFunction",
type: "added",
category: "function",
details: "New function added",
impactLevel: "minor",
},
],
description: "Function testFunction is missing documentation",
detectedAt: new Date().toISOString(),
severity: "medium",
},
],
suggestions: [],
impactAnalysis: {
breakingChanges: 0,
majorChanges: 1,
minorChanges: 0,
affectedDocFiles: [],
estimatedUpdateEffort: "medium",
requiresManualReview: false,
},
};
const score = await integration.calculateFeedbackScore(result);
// Should have score > 0 due to open issues
expect(score).toBeGreaterThan(0);
});
test("should use cache for repeated requests", async () => {
integration.configure({
provider: "github",
apiToken: "test-token",
owner: "test-owner",
repo: "test-repo",
});
(global.fetch as jest.Mock).mockResolvedValue({
ok: true,
json: async () => [],
});
const result: DriftDetectionResult = {
filePath: "/test/file.ts",
hasDrift: true,
severity: "medium",
drifts: [],
suggestions: [],
impactAnalysis: {
breakingChanges: 0,
majorChanges: 1,
minorChanges: 0,
affectedDocFiles: [],
estimatedUpdateEffort: "medium",
requiresManualReview: false,
},
};
// First call
await integration.calculateFeedbackScore(result);
// Second call should use cache
await integration.calculateFeedbackScore(result);
// Fetch should only be called once due to caching
expect(global.fetch).toHaveBeenCalledTimes(1);
});
});
});
```
--------------------------------------------------------------------------------
/ISSUE_IMPLEMENTATION_SUMMARY.md:
--------------------------------------------------------------------------------
```markdown
# GitHub Issues Implementation Summary
This document summarizes the implementation of GitHub issues related to the release pipeline and package distribution (ADR-013).
## ✅ Completed Implementations
### Issue #1: Fix npm Package Publishing ✅
**Status**: Implemented
**Changes Made**:
1. **Enhanced npm Publishing Step** (`.github/workflows/release.yml`):
- Added npm authentication verification before publishing
- Implemented retry mechanism (3 attempts with 5-second delays)
- Added error handling with clear failure messages
- Captured package version for verification
2. **Added npm Publication Verification**:
- New step to verify package exists on npm registry after publication
- 10-second wait for registry propagation
- Clear success/failure indicators
- Automatic failure if package not found
3. **Added Package Installation Test**:
- Tests that published package can be installed globally
- Verifies `documcp` command is available after installation
- Ensures end-to-end package functionality
**Key Features**:
- Retry mechanism for transient failures
- Comprehensive error messages
- Verification steps prevent false success
- Installation test ensures package works correctly
### Issue #2: Automated Changelog Generation ✅
**Status**: Already Configured, Enhanced Integration
**Existing Configuration**:
- ✅ `standard-version` package installed
- ✅ `.versionrc.json` configured with proper formatting
- ✅ Release scripts in `package.json`
**Enhancements Made**:
1. **Improved Changelog Extraction**:
- Better parsing of CHANGELOG.md sections
- Handles version format correctly
- Improved error handling if changelog missing
2. **Added Commit Message Validation**:
- Validates commits follow conventional format before release
- Prevents releases with invalid commit messages
- Clear error messages for developers
3. **Enhanced Release Workflow**:
- Better integration with standard-version
- Improved changelog content extraction for GitHub Releases
- Proper error handling throughout
**Verification**:
- ✅ `npm run release:dry-run` works correctly
- ✅ Generates properly formatted changelog entries
- ✅ Links commits and issues correctly
### Issue #3: Improve Test Coverage to 85% ✅
**Status**: Already Exceeded Target
**Current Status**:
- **Statement Coverage**: 91.65% ✅ (Target: 85%)
- **Branch Coverage**: 81.44%
- **Function Coverage**: 93.97%
- **Line Coverage**: 92.39%
**Changes Made**:
1. **Updated Coverage Threshold**:
- Changed from 80% to 85% in release workflow
- Updated threshold check to use correct output parsing
- Added clear success message with actual coverage percentage
**Note**: Coverage already exceeds target, but threshold updated to reflect new standard.
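As a sketch, the threshold check boils down to parsing Jest's `coverage-summary.json` (available when the `json-summary` coverage reporter is enabled) and comparing the statement percentage; the workflow performs the equivalent in shell:

```typescript
import { readFileSync } from "node:fs";

const THRESHOLD = 85;
// Jest's json-summary reporter writes coverage/coverage-summary.json.
const summary = JSON.parse(
  readFileSync("coverage/coverage-summary.json", "utf8"),
);
const statements: number = summary.total.statements.pct;

if (statements < THRESHOLD) {
  console.error(
    `Statement coverage ${statements}% is below the ${THRESHOLD}% threshold`,
  );
  process.exit(1);
}
console.log(`Coverage check passed: ${statements}% >= ${THRESHOLD}%`);
```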
## 📋 Implementation Details
### Release Workflow Improvements
The release workflow (`.github/workflows/release.yml`) now includes:
1. **Pre-Release Quality Gates**:
- Test coverage verification (85% threshold)
- Commit message validation
- Full test suite execution
- Build verification
2. **Automated Changelog Generation**:
- Uses `standard-version` for version bumping
- Generates formatted changelog entries
- Extracts changelog content for GitHub Releases
- Handles both manual and tag-based releases
3. **npm Publishing with Verification**:
- Authentication verification
- Retry mechanism (3 attempts)
- Publication verification
- Installation test
### Configuration Files
**commitlint.config.js**:
- ✅ Already configured with conventional commit rules
- ✅ Enforces commit message format
- ✅ Integrated with Husky hooks
**.versionrc.json**:
- ✅ Configured with proper changelog formatting
- ✅ Includes emoji sections
- ✅ Proper URL formats for GitHub
**.husky/commit-msg**:
- ✅ Pre-commit hook validates commit messages
- ✅ Prevents invalid commits from being created
## 🎯 Acceptance Criteria Status
### Issue #1: npm Package Publishing
- [x] npm package "documcp" verification step added
- [x] Release workflow includes publication verification
- [x] Publication failures are properly logged and handled
- [x] Retry mechanism implemented
- [x] Installation test added
### Issue #2: Automated Changelog Generation
- [x] Changelog automatically updated on release
- [x] Commit messages follow conventional format (enforced)
- [x] Release notes include all relevant changes
- [x] Consistent formatting across all releases
- [x] Automated categorization of changes
### Issue #3: Test Coverage
- [x] Overall statement coverage ≥85% (currently 91.65%)
- [x] Coverage threshold updated in workflow
- [x] Coverage check integrated into release pipeline
## 🚀 Next Steps
### Recommended Actions
1. **Test Release Pipeline**:
- Run a test release to verify all steps work correctly
- Verify npm publication succeeds
- Confirm changelog generation works
2. **Monitor First Release**:
- Watch for any issues in the enhanced workflow
- Verify package appears on npm registry
- Confirm installation works for users
3. **Documentation Updates**:
- Update CONTRIBUTING.md with commit message guidelines
- Add release process documentation
- Document npm publishing process
### Future Enhancements (From ADR-013)
- [ ] AI-enhanced release notes (Issue #7)
- [ ] Release health dashboard (Issue #8)
- [ ] Smart Dependabot auto-merge (Issue #6)
- [ ] Enhanced release notes with performance metrics
## 📝 Related ADRs
- **ADR-013**: Release Pipeline and Package Distribution Architecture
- **ADR-005**: GitHub Pages Deployment Automation (related workflow)
## 🔗 References
- GitHub Issue: #1 - Fix npm Package Publishing
- GitHub Issue: #2 - Implement Automated Changelog Generation
- GitHub Issue: #3 - Improve Test Coverage to 85%
- [Conventional Commits](https://www.conventionalcommits.org/)
- [standard-version](https://github.com/conventional-changelog/standard-version)
- [commitlint](https://commitlint.js.org/)
---
**Last Updated**: 2025-01-14
**Implementation Status**: ✅ Complete
**Ready for Testing**: Yes
```
--------------------------------------------------------------------------------
/tests/change-watcher-priority.integration.test.ts:
--------------------------------------------------------------------------------
```typescript
import { ChangeWatcher } from "../src/utils/change-watcher.js";
import {
DriftSnapshot,
PrioritizedDriftResult,
} from "../src/utils/drift-detector.js";
import { ASTAnalysisResult } from "../src/utils/ast-analyzer.js";
describe("ChangeWatcher priority integration", () => {
it("passes collected usage metadata into prioritized drift results", async () => {
// Baseline snapshot (pre-change)
const producerFile: ASTAnalysisResult = {
filePath: "/repo/src/producer.ts",
language: "typescript",
functions: [
{
name: "produce",
parameters: [],
returnType: null,
isAsync: false,
isExported: true,
isPublic: true,
docComment: null,
startLine: 1,
endLine: 1,
complexity: 1,
dependencies: [],
},
],
classes: [],
interfaces: [],
types: [],
imports: [],
exports: ["produce"],
contentHash: "abc",
lastModified: new Date().toISOString(),
linesOfCode: 10,
complexity: 1,
};
const baselineSnapshot: DriftSnapshot = {
projectPath: "/repo",
timestamp: new Date().toISOString(),
files: new Map([[producerFile.filePath, producerFile]]),
documentation: new Map(),
};
// Current snapshot adds an importing consumer and doc references
const consumerFile: ASTAnalysisResult = {
filePath: "/repo/src/consumer.ts",
language: "typescript",
functions: [],
classes: [],
interfaces: [],
types: [],
imports: [
{
source: "./producer",
imports: [{ name: "produce" }],
isDefault: false,
startLine: 1,
},
],
exports: [],
contentHash: "def",
lastModified: new Date().toISOString(),
linesOfCode: 5,
complexity: 1,
};
const docSnapshot = {
filePath: "/repo/docs/api.md",
contentHash: "ghi",
referencedCode: [producerFile.filePath],
lastUpdated: new Date().toISOString(),
sections: [
{
title: "produce",
content: "Description",
referencedFunctions: ["produce"],
referencedClasses: [],
referencedTypes: [],
codeExamples: [],
startLine: 1,
endLine: 5,
},
],
};
const currentSnapshot: DriftSnapshot = {
projectPath: "/repo",
timestamp: new Date().toISOString(),
files: new Map([
[producerFile.filePath, producerFile],
[consumerFile.filePath, consumerFile],
]),
documentation: new Map([[docSnapshot.filePath, docSnapshot]]),
};
const driftResults: PrioritizedDriftResult[] = [
{
filePath: producerFile.filePath,
hasDrift: true,
severity: "medium",
drifts: [
{
type: "outdated",
affectedDocs: [docSnapshot.filePath],
codeChanges: [
{
type: "modified",
category: "function",
name: "produce",
details: "signature update",
impactLevel: "minor",
},
],
description: "function changed",
detectedAt: new Date().toISOString(),
severity: "medium",
},
],
suggestions: [],
impactAnalysis: {
breakingChanges: 0,
majorChanges: 0,
minorChanges: 1,
affectedDocFiles: [docSnapshot.filePath],
estimatedUpdateEffort: "low",
requiresManualReview: false,
},
priorityScore: {
overall: 0,
factors: {
codeComplexity: 0,
usageFrequency: 0,
changeMagnitude: 0,
documentationCoverage: 0,
staleness: 0,
userFeedback: 0,
},
recommendation: "low",
suggestedAction: "",
},
},
];
let capturedUsage: any = null;
const detectorStub = {
initialize: jest.fn().mockResolvedValue(undefined),
loadLatestSnapshot: jest.fn().mockResolvedValue(baselineSnapshot),
createSnapshot: jest.fn().mockResolvedValue(currentSnapshot),
getPrioritizedDriftResults: jest
.fn()
.mockImplementation(
async (
_oldSnapshot: DriftSnapshot,
_newSnapshot: DriftSnapshot,
usageMetadata: any,
) => {
capturedUsage = usageMetadata;
// Encode usage frequency into the priority score for assertion
const usageFreq = usageMetadata?.imports?.get("produce") ?? 0;
return driftResults.map((dr) => ({
...dr,
priorityScore: {
...dr.priorityScore!,
factors: {
...dr.priorityScore!.factors,
usageFrequency: usageFreq,
},
overall: usageFreq,
},
}));
},
),
};
const watcher = new ChangeWatcher(
{
projectPath: "/repo",
docsPath: "/repo/docs",
watchPaths: [], // disable FS watcher side effects
},
{
createDetector: () => detectorStub as any,
},
);
await watcher.start();
const result = await watcher.triggerManual("test-run");
expect(detectorStub.initialize).toHaveBeenCalled();
expect(detectorStub.getPrioritizedDriftResults).toHaveBeenCalled();
// Usage metadata should reflect imports and doc references
expect(capturedUsage).toBeTruthy();
expect(capturedUsage.imports.get("produce")).toBe(1);
// produce is exported as a function; collector should count it in functionCalls
expect(capturedUsage.functionCalls.get("produce")).toBeGreaterThanOrEqual(
1,
);
// Drift results returned to the caller should carry the usage-influenced score
expect(result.driftResults[0].priorityScore?.overall).toBe(1);
expect(result.driftResults[0].priorityScore?.factors.usageFrequency).toBe(
1,
);
});
});
```
--------------------------------------------------------------------------------
/docs/CE-MCP-FINDINGS.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-12-09T19:18:14.152Z"
last_validated: "2025-12-09T19:41:38.564Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Code Execution with MCP (CE-MCP) Research Findings
**Date**: 2025-12-09
**Status**: Validated - documcp is CE-MCP Compatible ✅
## Executive Summary
After comprehensive research into the Code Execution with MCP (CE-MCP) paradigm, we've confirmed that **documcp's existing architecture is fully compatible with Code Mode clients** without requiring architectural changes.
## Key Discoveries
### 1. CE-MCP is Client-Side, Not Server-Side
The CE-MCP paradigm described in the architectural guide is implemented by **MCP clients** (Claude Code, Cloudflare Workers AI, pctx), not servers:
| Responsibility | Implementation | Status for documcp |
| ------------------------------ | ------------------------------------------- | -------------------------- |
| Code generation | MCP Client | ✅ Client handles |
| Tool discovery | MCP Client (generates filesystem structure) | ✅ Compatible |
| Sandboxed execution | MCP Client (isolates, Docker, etc.) | ✅ Client handles |
| Security (AgentBound-style) | MCP Client (MCP Guardian, etc.) | ✅ Client handles |
| Summary filtering | MCP Client | ✅ Compatible |
| **Tool definitions & schemas** | **MCP Server (documcp)** | ✅ **Already implemented** |
| **Tool execution** | **MCP Server (documcp)** | ✅ **Already implemented** |
### 2. What MCP Servers Provide
According to Anthropic and Cloudflare's documentation:
> "MCP is designed for tool-calling, but it doesn't actually _have to_ be used that way. The 'tools' that an MCP server exposes are really just an RPC interface with attached documentation."
**MCP servers (like documcp) provide:**
- Standard MCP protocol tools ✅ (documcp has 25+ tools)
- Tool schemas and documentation ✅ (Zod-validated)
- JSON-RPC interface ✅ (MCP SDK handles this)
**That's it!** The client SDK handles everything else.
### 3. How Code Mode Works
**Client-Side Transformation:**
1. Client connects to MCP server and receives tool definitions
2. Client converts tool definitions → TypeScript/Python code APIs
3. Client creates filesystem structure for tool discovery (e.g., `./servers/google-drive/getDocument.ts`)
4. LLM navigates filesystem and reads only needed tool definitions
5. LLM generates orchestration code using the tool APIs
6. Client executes code in secure sandbox (isolate, Docker, etc.)
7. Only final summary returned to LLM context
**Result**: 98.7% token reduction, 75x cost reduction, 60% faster execution
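To make the transformation concrete, a Code Mode client might generate a wrapper like the following from one of documcp's tool schemas. This is a hypothetical illustration: documcp does not ship these wrappers, and `callMCPTool` stands in for whatever the client's sandbox runtime provides.

```typescript
// ./servers/documcp/analyzeRepository.ts (client-generated, illustrative)
export interface AnalyzeRepositoryInput {
  path: string;
  depth?: "quick" | "standard" | "deep";
}

export async function analyzeRepository(
  input: AnalyzeRepositoryInput,
): Promise<unknown> {
  return callMCPTool("documcp", "analyze_repository", input);
}

// Provided by the client's sandbox, not by documcp.
declare function callMCPTool(
  server: string,
  tool: string,
  args: unknown,
): Promise<unknown>;
```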
### 4. MCP SDK 1.24.0 New Features
PR #69 upgrades us from v0.6.0 → v1.24.0, bringing:
- **SEP-1686: Tasks API** - New MCP primitive for long-running agent operations
- Better SSE (Server-Sent Events) handling
- OAuth enhancements (client credentials flow)
- Improved type safety and Zod V4 compatibility
## Validation Results
### ✅ SDK Upgrade Successful
- All tests pass: 91.67% coverage
- No breaking changes detected
- TypeScript compilation successful
- Build successful
### ✅ documcp Architecture Validated
**Why documcp is already Code Mode compatible:**
1. **Stateless Design** (ADR-001): Perfect for Code Mode workflows
2. **Modular Tools** (ADR-006): Each tool is independent and composable
3. **Zod Validation**: Provides excellent schema docs for code generation
4. **JSON-RPC**: Standard MCP protocol, works with all clients
5. **MCP Resources** (ADR-007): Perfect for summary-only result filtering
## Architectural Implications
### What documcp Does NOT Need
❌ Filesystem-based tool discovery system (client does this)
❌ Sandbox execution environment (client does this)
❌ AgentBound security framework (client does this)
❌ Code generation layer (client does this)
❌ Tool wrappers (client generates these)
❌ Major architectural changes
### What documcp COULD Optimize (Optional)
These are **optional enhancements** for better Code Mode UX, not requirements:
1. **Tool Categorization** - Add metadata tags for easier discovery (see the sketch after this list)
2. **Concise Descriptions** - Optimize tool descriptions for token efficiency
3. **Result Summarization** - Return more concise results where appropriate
4. **MCP Tasks Integration** - Use new Tasks API for long-running operations
5. **Resource Optimization** - Better use of MCP resources for intermediate results
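For example, tool categorization metadata (item 1) could be as simple as the optional structure sketched below. None of these fields exist in documcp today; the shape is purely illustrative.

```typescript
interface ToolMetadata {
  category: "analysis" | "deployment" | "memory" | "validation";
  tags: string[];
  // Hint for Code Mode clients deciding how much output to surface.
  resultSize?: "small" | "medium" | "large";
}

const toolMetadata: Record<string, ToolMetadata> = {
  analyze_repository: {
    category: "analysis",
    tags: ["repository", "structure", "dependencies"],
    resultSize: "medium",
  },
};
```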
## Recommended Actions
### Immediate (Completed ✅)
- [x] Merge PR #69 (SDK upgrade to 1.24.0)
- [x] Run tests to validate compatibility
- [x] Document CE-MCP findings
### Short-Term (This Sprint)
- [ ] Create ADR-011: CE-MCP Compatibility and Code Mode Support
- [ ] Update ADR-001: Add Code Mode compatibility note
- [ ] Update ADR-006: Add tool organization recommendations
- [ ] Update ADR-007: Add resource optimization for Code Mode
- [ ] Test with Code Mode client (Claude Code, pctx)
- [ ] Create CE-MCP usage documentation
### Medium-Term (Optional Optimizations)
- [ ] Research which tools benefit from MCP Tasks API
- [ ] Add tool categorization metadata
- [ ] Optimize tool descriptions for token efficiency
- [ ] Implement result summarization for large outputs
- [ ] Create example Code Mode workflows
## References
- [Anthropic: Code Execution with MCP](https://www.anthropic.com/engineering/code-execution-with-mcp)
- [Cloudflare: Code Mode](https://blog.cloudflare.com/code-mode/)
- [MCP Specification 2025-06-18](https://modelcontextprotocol.io/specification/2025-06-18)
- [MCP SDK 1.24.0 Release Notes](https://github.com/modelcontextprotocol/typescript-sdk/releases/tag/1.24.0)
## Conclusion
**documcp's existing architecture is fully Code Mode compatible.** The stateless, tool-based design aligns perfectly with the CE-MCP paradigm. No architectural changes are required—only optional optimizations to enhance the user experience with Code Mode clients.
The CE-MCP paradigm validates our architectural decisions in ADR-001, ADR-006, and ADR-007. The focus should now shift to testing with Code Mode clients and documenting best practices for developers using documcp in Code Mode workflows.
```
--------------------------------------------------------------------------------
/docs/tutorials/getting-started.md:
--------------------------------------------------------------------------------
```markdown
---
id: getting-started
title: Getting Started with DocuMCP
sidebar_label: Getting Started
documcp:
last_updated: "2025-11-20T00:46:21.972Z"
last_validated: "2025-12-09T19:41:38.603Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Getting Started with DocuMCP
This tutorial will guide you through setting up and using DocuMCP's intelligent documentation deployment system with memory-enhanced capabilities.
## Prerequisites
Before you begin, ensure you have:
- Node.js 20.0.0 or higher installed
- Access to a GitHub repository
- Claude Desktop, Cursor, or another MCP-compatible client
- Basic familiarity with documentation workflows
## 🎯 Pro Tip: Reference LLM_CONTEXT.md
When using DocuMCP through an AI assistant, **always reference the LLM_CONTEXT.md file** for comprehensive tool context:
```
@LLM_CONTEXT.md help me set up documentation for my TypeScript project
```
The `LLM_CONTEXT.md` file is auto-generated and contains:
- All 45 tool descriptions and parameters
- Usage examples and workflows
- Memory system documentation
- Phase 3 code-to-docs sync features
**Location**: `/LLM_CONTEXT.md` (in project root)
This ensures your AI assistant has complete context and can provide optimal recommendations.
## Step 1: Initial Repository Analysis
Start by analyzing your repository with the `analyze_repository` tool to understand its characteristics and documentation needs:
```json
{
"path": "/path/to/your/project",
"depth": "standard"
}
```
This will analyze your project and return:
- **Project structure**: File counts, languages used, and organization
- **Dependencies**: Production and development packages detected
- **Documentation status**: Existing docs, README, contributing guidelines
- **Smart recommendations**: Primary language, project type, team size estimates
- **Unique analysis ID**: For use in subsequent steps
Example response snippet:
```json
{
"id": "analysis_abc123xyz",
"structure": {
"totalFiles": 150,
"languages": { ".ts": 45, ".js": 12, ".md": 8 },
"hasTests": true,
"hasCI": true
},
"dependencies": {
"ecosystem": "javascript",
"packages": ["react", "typescript"]
},
"recommendations": {
"primaryLanguage": "typescript",
"projectType": "library"
}
}
```
## Step 2: Memory-Enhanced SSG Recommendation
Next, get intelligent recommendations powered by DocuMCP's memory system:
```json
{
"analysisId": "analysis_abc123xyz",
"preferences": {
"ecosystem": "javascript",
"priority": "features"
}
}
```
The memory system leverages patterns from 130+ previous projects to provide:
- **Confidence-scored recommendations** (e.g., Docusaurus with 85% confidence)
- **Historical success data** (69% deployment success rate insights)
- **Pattern-based insights** (Hugo most common with 98 projects, but Docusaurus optimal for TypeScript)
- **Similar project examples** to learn from successful configurations
Example recommendation response:
```json
{
"recommended": "docusaurus",
"confidence": 0.85,
"reasoning": [
"JavaScript/TypeScript ecosystem detected",
"Modern React-based framework aligns with project stack",
"Strong community support and active development"
],
"alternatives": [
{
"name": "MkDocs",
"score": 0.75,
"pros": ["Simple setup", "Great themes"],
"cons": ["Limited React component support"]
}
]
}
```
## Step 3: Configuration Generation
Generate optimized configuration files for your chosen SSG:
```javascript
// Generate Docusaurus configuration
{
"ssg": "docusaurus",
"projectName": "Your Project",
"projectDescription": "Your project description",
"outputPath": "/path/to/your/repository"
}
```
## Step 4: Diataxis Structure Setup
Create a professional documentation structure following the Diataxis framework:
```javascript
// Setup documentation structure
{
"path": "/path/to/your/repository/docs",
"ssg": "docusaurus",
"includeExamples": true
}
```
This creates four optimized sections following the Diataxis framework:
- **Tutorials**: Learning-oriented guides for skill acquisition (study context)
- **How-to Guides**: Problem-solving guides for specific tasks (work context)
- **Reference**: Information-oriented content for lookup and verification (information context)
- **Explanation**: Understanding-oriented content for context and background (understanding context)
## Step 5: GitHub Pages Deployment
Set up automated deployment with security best practices:
```javascript
// Deploy to GitHub Pages
{
"repository": "/path/to/your/repository",
"ssg": "docusaurus",
"branch": "gh-pages"
}
```
This generates:
- GitHub Actions workflow with OIDC authentication
- Minimal security permissions (pages:write, id-token:write only)
- Automated build and deployment pipeline
## Step 6: Memory System Exploration
Explore DocuMCP's advanced memory capabilities:
```javascript
// Get learning statistics
{
"includeDetails": true
}
// Recall similar projects
{
"query": "typescript documentation",
"type": "recommendation",
"limit": 5
}
```
The memory system provides:
- **Pattern Recognition**: Most successful SSG choices for your project type
- **Historical Insights**: Success rates and common issues
- **Smart Recommendations**: Enhanced suggestions based on similar projects
## Verification
Verify your setup with these checks:
1. **Documentation Structure**: Confirm all Diataxis directories are created
2. **Configuration Files**: Check generated config files are valid
3. **GitHub Actions**: Verify workflow file in `.github/workflows/`
4. **Memory Insights**: Review recommendations and confidence scores
## Summary
In this tutorial, you learned how to:
- **Analyze repositories** with comprehensive project profiling
- **Get intelligent SSG recommendations** using memory-enhanced insights
- **Generate optimized configurations** for your chosen static site generator
- **Create Diataxis-compliant structures** for professional documentation
- **Set up automated GitHub Pages deployment** with security best practices
- **Leverage the memory system** for enhanced recommendations and insights
## Next Steps
- Explore [Memory-Enhanced Workflows](./memory-workflows.md)
- Read [How-To Guides](../how-to/) for specific tasks
- Check the [API Reference](../reference/) for complete tool documentation
- Learn about [Diataxis Framework](../explanation/) principles
```
--------------------------------------------------------------------------------
/tests/memory/contextual-retrieval.test.ts:
--------------------------------------------------------------------------------
```typescript
/**
* Basic unit tests for Contextual Memory Retrieval System
* Tests basic context-aware memory retrieval capabilities
* Part of Issue #55 - Advanced Memory Components Unit Tests
*/
import { promises as fs } from "fs";
import path from "path";
import os from "os";
import { MemoryManager } from "../../src/memory/manager.js";
import ContextualRetrievalSystem, {
RetrievalContext,
} from "../../src/memory/contextual-retrieval.js";
describe("ContextualRetrievalSystem", () => {
let tempDir: string;
let memoryManager: MemoryManager;
let knowledgeGraph: any;
let contextualRetrieval: ContextualRetrievalSystem;
beforeEach(async () => {
// Create unique temp directory for each test
tempDir = path.join(
os.tmpdir(),
`contextual-retrieval-test-${Date.now()}-${Math.random()
.toString(36)
        .slice(2, 11)}`,
);
await fs.mkdir(tempDir, { recursive: true });
memoryManager = new MemoryManager(tempDir);
await memoryManager.initialize();
// Create a mock knowledge graph for testing
knowledgeGraph = {
findRelatedNodes: jest.fn().mockResolvedValue([]),
getConnectionStrength: jest.fn().mockResolvedValue(0.5),
query: jest.fn().mockReturnValue({ nodes: [], edges: [] }),
};
contextualRetrieval = new ContextualRetrievalSystem(
memoryManager,
knowledgeGraph,
);
});
afterEach(async () => {
// Cleanup temp directory
try {
await fs.rm(tempDir, { recursive: true, force: true });
} catch (error) {
// Ignore cleanup errors
}
});
describe("Initialization and Configuration", () => {
test("should create ContextualRetrievalSystem instance", () => {
expect(contextualRetrieval).toBeInstanceOf(ContextualRetrievalSystem);
});
test("should have memory manager and knowledge graph dependencies", () => {
expect(contextualRetrieval).toBeDefined();
// Basic integration test - system should be created with dependencies
});
});
describe("Basic Contextual Retrieval", () => {
beforeEach(async () => {
// Set up test memories for retrieval tests
await memoryManager.remember("analysis", {
projectPath: "/test/typescript-project",
language: "typescript",
framework: "react",
outcome: "success",
recommendation: "Use TypeScript for better type safety",
});
await memoryManager.remember("deployment", {
projectPath: "/test/node-project",
language: "javascript",
framework: "express",
outcome: "success",
recommendation: "Deploy with Docker for consistency",
});
await memoryManager.remember("recommendation", {
projectPath: "/test/python-project",
language: "python",
framework: "django",
outcome: "failure",
recommendation: "Check Python version compatibility",
});
});
test("should retrieve contextual matches based on project context", async () => {
const retrievalContext: RetrievalContext = {
currentProject: {
path: "/test/typescript-project",
language: "typescript",
framework: "react",
},
userIntent: {
action: "analyze",
urgency: "medium",
experience: "intermediate",
},
temporalContext: {
recency: "recent",
},
};
const result = await contextualRetrieval.retrieve(
"typescript react documentation",
retrievalContext,
);
expect(result).toBeDefined();
expect(result.matches).toBeDefined();
expect(Array.isArray(result.matches)).toBe(true);
// Basic structure validation
if (result.matches.length > 0) {
const match = result.matches[0];
expect(match).toHaveProperty("memory");
expect(match).toHaveProperty("relevanceScore");
expect(typeof match.relevanceScore).toBe("number");
}
});
test("should handle different user intents", async () => {
const troubleshootContext: RetrievalContext = {
userIntent: {
action: "troubleshoot",
urgency: "high",
experience: "novice",
},
};
const recommendContext: RetrievalContext = {
userIntent: {
action: "recommend",
urgency: "low",
experience: "expert",
},
};
const troubleshootResult = await contextualRetrieval.retrieve(
"deployment failed",
troubleshootContext,
);
const recommendResult = await contextualRetrieval.retrieve(
"best practices",
recommendContext,
);
expect(troubleshootResult).toBeDefined();
expect(recommendResult).toBeDefined();
expect(Array.isArray(troubleshootResult.matches)).toBe(true);
expect(Array.isArray(recommendResult.matches)).toBe(true);
});
test("should consider temporal context for relevance", async () => {
const recentContext: RetrievalContext = {
temporalContext: {
recency: "recent",
},
};
const historicalContext: RetrievalContext = {
temporalContext: {
recency: "historical",
},
};
const recentResult = await contextualRetrieval.retrieve(
"recent activity",
recentContext,
);
const historicalResult = await contextualRetrieval.retrieve(
"historical data",
historicalContext,
);
expect(recentResult).toBeDefined();
expect(historicalResult).toBeDefined();
expect(Array.isArray(recentResult.matches)).toBe(true);
expect(Array.isArray(historicalResult.matches)).toBe(true);
});
});
describe("Error Handling and Edge Cases", () => {
test("should handle empty query gracefully", async () => {
const context: RetrievalContext = {
userIntent: {
action: "analyze",
urgency: "medium",
experience: "intermediate",
},
};
const result = await contextualRetrieval.retrieve("", context);
expect(result).toBeDefined();
expect(result.matches).toBeDefined();
expect(Array.isArray(result.matches)).toBe(true);
});
test("should handle minimal context", async () => {
const minimalContext: RetrievalContext = {};
const result = await contextualRetrieval.retrieve(
"test query",
minimalContext,
);
expect(result).toBeDefined();
expect(result.matches).toBeDefined();
expect(Array.isArray(result.matches)).toBe(true);
});
});
});
```
--------------------------------------------------------------------------------
/tests/integration/mcp-readme-tools.test.ts:
--------------------------------------------------------------------------------
```typescript
import { promises as fs } from "fs";
import { join } from "path";
describe("MCP Integration Tests", () => {
let tempDir: string;
beforeEach(async () => {
tempDir = join(process.cwd(), "test-mcp-integration-temp");
await fs.mkdir(tempDir, { recursive: true });
});
afterEach(async () => {
try {
await fs.rm(tempDir, { recursive: true, force: true });
} catch (error) {
// Ignore cleanup errors
}
});
describe("Tool Registration", () => {
test("should include evaluate_readme_health in tools list", async () => {
// This test verifies that the README health tool is properly registered
// Since we can't directly access the server instance, we'll test the tool functions directly
// but verify they match the expected MCP interface
const { evaluateReadmeHealth } = await import(
"../../src/tools/evaluate-readme-health.js"
);
// Test with valid parameters that match the MCP schema
const readmePath = join(tempDir, "README.md");
await fs.writeFile(readmePath, "# Test Project\n\nBasic README content.");
const result = await evaluateReadmeHealth({
readme_path: readmePath,
project_type: "community_library", // Valid enum value from schema
});
expect(result.content).toBeDefined();
expect(result.isError).toBe(false);
});
test("should include readme_best_practices in tools list", async () => {
const { readmeBestPractices } = await import(
"../../src/tools/readme-best-practices.js"
);
const readmePath = join(tempDir, "README.md");
await fs.writeFile(
readmePath,
"# Test Library\n\nLibrary documentation.",
);
const result = await readmeBestPractices({
readme_path: readmePath,
project_type: "library", // Valid enum value from schema
});
expect(result.success).toBe(true);
expect(result.data).toBeDefined();
});
});
describe("Parameter Validation", () => {
test("evaluate_readme_health should handle invalid project_type", async () => {
const { evaluateReadmeHealth } = await import(
"../../src/tools/evaluate-readme-health.js"
);
const readmePath = join(tempDir, "README.md");
await fs.writeFile(readmePath, "# Test");
const result = await evaluateReadmeHealth({
readme_path: readmePath,
project_type: "invalid_type" as any,
});
expect(result.isError).toBe(true);
});
test("readme_best_practices should handle invalid project_type", async () => {
const { readmeBestPractices } = await import(
"../../src/tools/readme-best-practices.js"
);
const readmePath = join(tempDir, "README.md");
await fs.writeFile(readmePath, "# Test");
const result = await readmeBestPractices({
readme_path: readmePath,
project_type: "invalid_type" as any,
});
expect(result.success).toBe(false);
expect(result.error).toBeDefined();
});
test("evaluate_readme_health should handle missing file", async () => {
const { evaluateReadmeHealth } = await import(
"../../src/tools/evaluate-readme-health.js"
);
const result = await evaluateReadmeHealth({
readme_path: join(tempDir, "nonexistent.md"),
});
expect(result.isError).toBe(true);
});
test("readme_best_practices should handle missing file without template", async () => {
const { readmeBestPractices } = await import(
"../../src/tools/readme-best-practices.js"
);
const result = await readmeBestPractices({
readme_path: join(tempDir, "nonexistent.md"),
generate_template: false,
});
expect(result.success).toBe(false);
expect(result.error?.code).toBe("README_NOT_FOUND");
});
});
describe("Response Format Consistency", () => {
test("evaluate_readme_health should return MCP-formatted response", async () => {
const { evaluateReadmeHealth } = await import(
"../../src/tools/evaluate-readme-health.js"
);
const readmePath = join(tempDir, "README.md");
await fs.writeFile(
readmePath,
"# Complete Project\n\n## Description\nDetailed description.",
);
const result = await evaluateReadmeHealth({
readme_path: readmePath,
});
// Should be already formatted for MCP
expect(result.content).toBeDefined();
expect(Array.isArray(result.content)).toBe(true);
expect(result.isError).toBeDefined();
// Should have execution metadata
const metadataContent = result.content.find((c) =>
c.text.includes("Execution completed"),
);
expect(metadataContent).toBeDefined();
});
test("readme_best_practices should return MCPToolResponse that can be formatted", async () => {
const { readmeBestPractices } = await import(
"../../src/tools/readme-best-practices.js"
);
const { formatMCPResponse } = await import("../../src/types/api.js");
const readmePath = join(tempDir, "README.md");
await fs.writeFile(
readmePath,
"# Library Project\n\n## Installation\nnpm install",
);
const result = await readmeBestPractices({
readme_path: readmePath,
});
// Should be raw MCPToolResponse
expect(result.success).toBeDefined();
expect(result.metadata).toBeDefined();
// Should be formattable
const formatted = formatMCPResponse(result);
expect(formatted.content).toBeDefined();
expect(Array.isArray(formatted.content)).toBe(true);
expect(formatted.isError).toBe(false);
});
});
describe("Cross-tool Consistency", () => {
test("both tools should handle the same README file", async () => {
const { evaluateReadmeHealth } = await import(
"../../src/tools/evaluate-readme-health.js"
);
const { readmeBestPractices } = await import(
"../../src/tools/readme-best-practices.js"
);
const readmePath = join(tempDir, "README.md");
await fs.writeFile(
readmePath,
`# Test Project
## Description
This is a comprehensive test project.
## Installation
\`\`\`bash
npm install test-project
\`\`\`
## Usage
\`\`\`javascript
const test = require('test-project');
test.run();
\`\`\`
## Contributing
Please read our contributing guidelines.
## License
MIT License
`,
);
// Both tools should work on the same file
const healthResult = await evaluateReadmeHealth({
readme_path: readmePath,
project_type: "community_library",
});
const practicesResult = await readmeBestPractices({
readme_path: readmePath,
project_type: "library",
});
expect(healthResult.isError).toBe(false);
expect(practicesResult.success).toBe(true);
});
});
});
```
--------------------------------------------------------------------------------
/docs/guides/playwright-integration.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.948Z"
last_validated: "2025-12-09T19:41:38.579Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Playwright Integration Guide
## Overview
DocuMCP can generate a complete Playwright E2E testing setup for your documentation site, including:
- Playwright configuration
- Link validation tests
- Accessibility tests (WCAG 2.1 AA)
- Docker/Podman containerization
- GitHub Actions CI/CD workflow
**Important**: Playwright is NOT a dependency of DocuMCP itself. Instead, DocuMCP **generates** the Playwright setup for your documentation site.
## Quick Start
### Generate Playwright Setup
Use the `setup_playwright_tests` tool to generate all necessary files:
```typescript
{
tool: "setup_playwright_tests",
arguments: {
repositoryPath: "./my-docs-site",
ssg: "docusaurus",
projectName: "My Documentation",
mainBranch: "main",
includeAccessibilityTests: true,
includeDockerfile: true,
includeGitHubActions: true
}
}
```
### What Gets Generated
```
my-docs-site/
├── playwright.config.ts # Playwright configuration
├── Dockerfile.playwright # Multi-stage Docker build
├── .github/workflows/
│ └── docs-e2e-tests.yml # CI/CD workflow
├── tests/e2e/
│ ├── link-validation.spec.ts # Link tests
│ └── accessibility.spec.ts # A11y tests
├── package.json # Updated with Playwright deps
└── .gitignore # Updated with test artifacts
```
## Generated Files Explained
### 1. Playwright Config (`playwright.config.ts`)
```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
testDir: "./tests/e2e",
timeout: 30 * 1000,
use: {
baseURL: process.env.BASE_URL || "http://localhost:3000",
},
projects: [{ name: "chromium" }, { name: "firefox" }, { name: "webkit" }],
});
```
### 2. Link Validation Tests
- ✅ Internal navigation links
- ✅ External link HTTP status
- ✅ Anchor/hash links
- ✅ 404 detection
### 3. Accessibility Tests
- ✅ WCAG 2.1 AA compliance (axe-core)
- ✅ Keyboard navigation
- ✅ Image alt text
- ✅ Color contrast
### 4. Docker Multi-Stage Build
```dockerfile
# Build docs
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Run tests against the built site
FROM mcr.microsoft.com/playwright:v1.55.1 AS tester
WORKDIR /app
COPY --from=builder /app .
RUN npx playwright test

# Serve production
FROM nginx:alpine AS server
COPY --from=builder /app/build /usr/share/nginx/html
```
### 5. GitHub Actions Workflow
Automated testing on every push/PR:
1. **Build** → Compile documentation
2. **Test** → Run Playwright in container (chromium, firefox, webkit)
3. **Deploy** → Push to GitHub Pages (if tests pass)
4. **Verify** → Test live production site
## Usage After Generation
### Local Testing
```bash
# Install dependencies (in YOUR docs site, not DocuMCP)
cd my-docs-site
npm install
# Install Playwright browsers
npx playwright install
# Run tests
npm run test:e2e
# Run tests in UI mode
npm run test:e2e:ui
# View test report
npm run test:e2e:report
```
### Docker Testing
```bash
# Build test container
docker build -t my-docs-test -f Dockerfile.playwright .
# Run tests in container
docker run --rm my-docs-test
# Or with Podman
podman build -t my-docs-test -f Dockerfile.playwright .
podman run --rm my-docs-test
```
### CI/CD Integration
Push to trigger GitHub Actions:
```bash
git add .
git commit -m "Add Playwright E2E tests"
git push origin main
```
Workflow will automatically:
- Build docs
- Run E2E tests across browsers
- Deploy to GitHub Pages (if all tests pass)
- Test production site after deployment
## Customization
### Add More Tests
Create new test files in `tests/e2e/`:
```typescript
// tests/e2e/navigation.spec.ts
import { test, expect } from "@playwright/test";
test("breadcrumbs should work", async ({ page }) => {
await page.goto("/docs/some-page");
const breadcrumbs = page.locator('[aria-label="breadcrumb"]');
await expect(breadcrumbs).toBeVisible();
});
```
### Modify Configuration
Edit `playwright.config.ts`:
```typescript
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
// Increase timeout for slow networks
timeout: 60 * 1000,
// Add mobile viewports
projects: [
{ name: "chromium" },
{ name: "Mobile Chrome", use: devices["Pixel 5"] },
],
});
```
## SSG-Specific Configuration
DocuMCP automatically configures for your SSG:
| SSG | Build Command | Build Dir | Port |
| ---------- | -------------------------- | --------- | ---- |
| Jekyll | `bundle exec jekyll build` | `_site` | 4000 |
| Hugo | `hugo` | `public` | 1313 |
| Docusaurus | `npm run build` | `build` | 3000 |
| MkDocs | `mkdocs build` | `site` | 8000 |
| Eleventy | `npx @11ty/eleventy` | `_site` | 8080 |
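Conceptually, the generator maps each SSG to its build settings along the lines of the sketch below (simplified; the actual generated configuration may differ):

```typescript
interface SSGBuildSettings {
  buildCommand: string;
  buildDir: string;
  port: number;
}

// Mirrors the table above; keys are the supported `ssg` values.
const SSG_SETTINGS: Record<string, SSGBuildSettings> = {
  jekyll: { buildCommand: "bundle exec jekyll build", buildDir: "_site", port: 4000 },
  hugo: { buildCommand: "hugo", buildDir: "public", port: 1313 },
  docusaurus: { buildCommand: "npm run build", buildDir: "build", port: 3000 },
  mkdocs: { buildCommand: "mkdocs build", buildDir: "site", port: 8000 },
  eleventy: { buildCommand: "npx @11ty/eleventy", buildDir: "_site", port: 8080 },
};
```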
## Knowledge Graph Integration
Test results are tracked in DocuMCP's Knowledge Graph:
```typescript
{
type: "deployment_validation",
properties: {
playwrightResults: {
totalTests: 25,
passed: 24,
failed: 1,
browsers: ["chromium", "firefox", "webkit"],
linksChecked: 127,
brokenLinks: 0,
accessibilityScore: 98,
}
}
}
```
## Troubleshooting
### Tests Fail on External Links
External link validation can fail due to:
- Network timeouts
- Rate limiting
- CORS issues
**Solution**: By default, the tests check only the first 10 external links; if external checks still fail, increase the timeout in `playwright.config.ts`.
### Container Build Fails
**Issue**: Docker build fails on dependency installation
**Solution**: Check SSG-specific dependencies in package.json
### CI/CD Workflow Times Out
**Issue**: GitHub Actions workflow exceeds time limit
**Solution**: Run only chromium in CI, full matrix locally:
```yaml
# .github/workflows/docs-e2e-tests.yml
strategy:
matrix:
browser: [chromium] # Only chromium in CI
```
## Best Practices
1. **Run tests before pushing** - `npm run test:e2e`
2. **Use Docker locally** - Same environment as CI
3. **Update baselines** - When changing UI intentionally
4. **Monitor CI reports** - Check artifacts for failures
5. **Test production** - Workflow tests live site automatically
## Example Workflow
```bash
# 1. User analyzes their documentation repo with DocuMCP
documcp analyze_repository --path ./my-docs
# 2. User generates Playwright setup
documcp setup_playwright_tests \
--repositoryPath ./my-docs \
--ssg docusaurus \
--projectName "My Docs"
# 3. User installs dependencies (in THEIR repo)
cd my-docs
npm install
npx playwright install
# 4. User runs tests locally
npm run test:e2e
# 5. User pushes to GitHub
git push origin main
# 6. GitHub Actions runs tests automatically
# 7. If tests pass, deploys to GitHub Pages
# 8. Tests production site
```
## Resources
- [Playwright Documentation](https://playwright.dev/)
- [Complete Workflow Guide](./playwright-testing-workflow.md)
- [Link Validation Integration](./link-validation.md)
- [Axe Accessibility Testing](https://github.com/dequelabs/axe-core)
```
--------------------------------------------------------------------------------
/docs/how-to/repository-analysis.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.954Z"
last_validated: "2025-12-09T19:41:38.585Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# How to Analyze Your Repository with DocuMCP
This guide walks you through using DocuMCP's repository analysis capabilities to understand your project's documentation needs.
## What Repository Analysis Provides
DocuMCP's analysis examines your project from multiple perspectives:
- **Project Structure**: File organization, language distribution, directory structure
- **Dependencies**: Package ecosystems, frameworks, and libraries in use
- **Documentation Status**: Existing documentation files, README quality, coverage gaps
- **Complexity Assessment**: Project size, team size estimates, maintenance requirements
- **Recommendations**: Tailored suggestions based on your project characteristics
## Basic Analysis
### Simple Analysis Request
```
analyze my repository
```
This performs a standard-depth analysis covering all key aspects of your project.
### Specify Analysis Depth
```
analyze my repository with deep analysis
```
Available depth levels:
- **quick**: Fast overview focusing on basic structure and languages
- **standard**: Comprehensive analysis including dependencies and documentation (recommended)
- **deep**: Detailed analysis with advanced insights and recommendations
## Understanding Analysis Results
### Project Structure Section
```json
{
"structure": {
"totalFiles": 2034,
"totalDirectories": 87,
"languages": {
".ts": 86,
".js": 13,
".css": 3,
".html": 37
},
"hasTests": true,
"hasCI": true,
"hasDocs": true
}
}
```
This tells you:
- Scale of your project (file/directory count)
- Primary programming languages
- Presence of tests, CI/CD, and existing documentation
### Dependencies Analysis
```json
{
"dependencies": {
"ecosystem": "javascript",
"packages": ["@modelcontextprotocol/sdk", "zod", "typescript"],
"devPackages": ["jest", "@types/node", "eslint"]
}
}
```
This reveals:
- Primary package ecosystem (npm, pip, cargo, etc.)
- Key runtime dependencies
- Development and tooling dependencies
### Documentation Assessment
```json
{
"documentation": {
"hasReadme": true,
"hasContributing": true,
"hasLicense": true,
"existingDocs": ["README.md", "docs/api.md"],
"estimatedComplexity": "complex"
}
}
```
This shows:
- Presence of essential documentation files
- Existing documentation structure
- Complexity level for documentation planning
## Advanced Analysis Techniques
### Target Specific Directories
```
analyze the src directory for API documentation needs
```
### Focus on Documentation Gaps
```
what documentation is missing from my project?
```
### Analyze for Specific Use Cases
```
analyze my repository to determine if it needs user guides or developer documentation
```
## Using Analysis Results
### For SSG Selection
After analysis, use the results to get targeted recommendations:
```
based on the analysis, what static site generator works best for my TypeScript project?
```
### For Documentation Planning
Use analysis insights to plan your documentation structure:
```
given my project complexity, how should I organize my documentation?
```
### For Deployment Strategy
Let analysis guide your deployment approach:
```
considering my project setup, what's the best way to deploy documentation?
```
## Analysis-Driven Workflows
### Complete Documentation Setup
1. **Analyze**: `analyze my repository for documentation needs`
2. **Plan**: Use analysis results to understand project characteristics
3. **Recommend**: `recommend documentation tools based on the analysis`
4. **Implement**: `set up documentation based on the recommendations`
### Documentation Audit
1. **Current State**: `analyze my existing documentation structure`
2. **Gap Analysis**: `what documentation gaps exist in my project?`
3. **Improvement Plan**: `how can I improve my current documentation?`
### Migration Planning
1. **Legacy Analysis**: `analyze my project's current documentation approach`
2. **Modern Approach**: `what modern documentation tools would work better?`
3. **Migration Strategy**: `how should I migrate from my current setup?`
## Interpreting Recommendations
### Project Type Classification
Analysis categorizes your project as:
- **library**: Reusable code packages requiring API documentation
- **application**: End-user software needing user guides and tutorials
- **tool**: Command-line or developer tools requiring usage documentation
### Team Size Estimation
- **small**: 1-3 developers, favor simple solutions
- **medium**: 4-10 developers, need collaborative features
- **large**: 10+ developers, require enterprise-grade solutions
### Complexity Assessment
- **simple**: Basic projects with minimal documentation needs
- **moderate**: Standard projects requiring structured documentation
- **complex**: Large projects needing comprehensive documentation strategies
## Common Analysis Patterns
### JavaScript/TypeScript Projects
Analysis typically reveals:
- npm ecosystem with extensive dev dependencies
- Need for API documentation (if library)
- Integration with existing build tools
- Recommendation: Often Docusaurus or VuePress
### Python Projects
Analysis usually shows:
- pip/poetry ecosystem
- Sphinx-compatible documentation needs
- Strong preference for MkDocs
- Integration with Python documentation standards
### Multi-Language Projects
Analysis identifies:
- Mixed ecosystems and dependencies
- Need for language-agnostic solutions
- Recommendation: Usually Hugo or Jekyll for flexibility
## Troubleshooting Analysis
### Incomplete Results
If analysis seems incomplete:
```
run deep analysis on my repository to get more detailed insights
```
### Focus on Specific Areas
If you need more details about certain aspects:
```
analyze my project's dependencies in detail
```
### Re-analyze After Changes
After making significant changes:
```
re-analyze my repository to see updated recommendations
```
## Analysis Memory and Caching
DocuMCP stores analysis results for reference in future operations:
- Analysis IDs are provided for referencing specific analyses
- Results remain accessible throughout your session
- Memory system learns from successful documentation deployments
Use analysis IDs in follow-up requests:
```
using analysis analysis_abc123, set up the recommended documentation structure
```
## Best Practices
1. **Start Fresh**: Begin new documentation projects with analysis
2. **Regular Reviews**: Re-analyze periodically as projects evolve
3. **Deep Dive When Needed**: Use deep analysis for complex projects
4. **Combine with Expertise**: Use analysis as a starting point, not the final decision
5. **Iterate**: Refine based on analysis feedback and results
Analysis is the foundation of effective documentation planning with DocuMCP. Use it to make informed decisions about tools, structure, and deployment strategies.
```
--------------------------------------------------------------------------------
/docs/research/domain-5-github-deployment/github-pages-security-analysis.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.967Z"
last_validated: "2025-12-09T19:41:38.598Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# GitHub Pages Deployment Security and Limitations Analysis
**Research Date**: 2025-01-14
**Domain**: GitHub Pages Deployment Automation
**Status**: Completed
## Research Overview
Comprehensive analysis of GitHub Pages deployment security considerations, limitations, and automation best practices for DocuMCP implementation.
## GitHub Pages Security Model Analysis
### Deployment Methods & Security Implications
#### **1. GitHub Actions (Official Method)**
**Security Profile**:
- ✅ **OIDC Token-based Authentication**: Uses JWT tokens with branch validation
- ✅ **Permissions Model**: Requires explicit `pages: write` and `id-token: write`
- ✅ **Environment Protection**: Supports environment rules and approvals
- ⚠️ **First Deploy Challenge**: Manual branch selection required initially
**Implementation Pattern**:
```yaml
permissions:
pages: write # Deploy to Pages
id-token: write # Verify deployment origin
contents: read # Checkout repository
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
```
#### **2. Deploy Keys (SSH Method)**
**Security Profile**:
- ✅ **Repository-specific**: Keys scoped to individual repositories
- ✅ **Write Access Control**: Can be limited to deployment-only access
- ⚠️ **Key Management**: Requires secure key generation and storage
- ⚠️ **Cross-repo Complexity**: Each external repo needs separate key setup
#### **3. Personal Access Tokens**
**Security Profile**:
- ⚠️ **Broad Permissions**: Often have wider access than needed
- ⚠️ **Expiration Management**: Tokens expire and need rotation
- ⚠️ **Account-wide Risk**: Compromise affects all accessible repositories
### GitHub Pages Deployment Limitations
#### **Technical Constraints**
1. **Site Size Limits**:
- Maximum 1GB per repository
- Impacts large documentation sites with assets
- No compression before size calculation
2. **Build Frequency Limits**:
- 10 builds per hour soft limit
- Additional builds queued for next hour
- Can impact rapid deployment cycles
3. **Static Content Only**:
- No server-side processing
- No dynamic content generation
- Limited to client-side JavaScript
#### **Security Constraints**
1. **Content Security Policy**:
- Default CSP may block certain resources
- Limited ability to customize security headers
- No server-side security controls
2. **HTTPS Enforcement**:
- Custom domains require manual HTTPS setup
- Certificate management through GitHub
- No control over TLS configuration
### CI/CD Workflow Security Best Practices
#### **Recommended Security Architecture**
```yaml
name: Deploy Documentation
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
security-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Security scan
run: |
# Scan for secrets, vulnerabilities
npm audit --audit-level high
build:
needs: security-scan
runs-on: ubuntu-latest
steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: npm ci
      - name: Build site
        run: npm run build
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: ./dist
deploy:
if: github.ref == 'refs/heads/main'
needs: build
runs-on: ubuntu-latest
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
```
#### **Security Validation Steps**
1. **Pre-deployment Checks**:
- Secret scanning
- Dependency vulnerability assessment
- Content validation
2. **Deployment Security**:
- Environment protection rules
- Required reviewers for production
- Branch protection enforcement
3. **Post-deployment Verification**:
- Site accessibility validation
- Security header verification
- Content integrity checks
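A post-deployment verification (step 3) can be as lightweight as fetching the live site and checking the response status plus a couple of expected headers. A minimal sketch, assuming Node 18+ with the global `fetch`:

```typescript
async function verifyDeployment(url: string): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Deployed site unreachable: HTTP ${res.status}`);
  }
  // Headers worth confirming on a Pages site with HTTPS enforcement enabled.
  for (const header of ["strict-transport-security", "content-type"]) {
    if (!res.headers.get(header)) {
      console.warn(`Missing expected response header: ${header}`);
    }
  }
  console.log(`Verified ${url} (HTTP ${res.status})`);
}
```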
### DocuMCP Security Implementation Recommendations
#### **Multi-layered Security Approach**
1. **Tool-level Security**:
```typescript
// Example security validation in MCP tool
const validateDeploymentSecurity = (config: DeploymentConfig) => {
const securityChecks = {
hasSecretScanning: checkSecretScanning(config),
hasEnvironmentProtection: checkEnvironmentRules(config),
hasProperPermissions: validatePermissions(config),
hasSecurityHeaders: validateSecurityHeaders(config),
};
return securityChecks;
};
```
2. **Configuration Template Security**:
- Generate workflows with minimal required permissions
- Include security scanning by default
- Enforce environment protection for production
3. **User Education Components**:
- Security best practices documentation
- Common vulnerability warnings
- Regular security updates guidance
### Risk Assessment & Mitigation
#### **High-Risk Scenarios**
1. **Secret Exposure in Repositories**:
- **Risk**: API keys, tokens in code
- **Mitigation**: Mandatory secret scanning, education
2. **Malicious Pull Request Deployments**:
- **Risk**: Untrusted code in preview deployments
- **Mitigation**: Environment protection, review requirements
3. **Supply Chain Attacks**:
- **Risk**: Compromised dependencies
- **Mitigation**: Dependency scanning, lock files
#### **Medium-Risk Scenarios**
1. **Excessive Permissions**:
- **Risk**: Overprivileged deployment workflows
- **Mitigation**: Principle of least privilege templates
2. **Unprotected Environments**:
- **Risk**: Direct production deployments
- **Mitigation**: Default environment protection
### Implementation Priorities for DocuMCP
#### **Critical Security Features**
1. **Automated Security Scanning**: Integrate secret and vulnerability scanning
2. **Permission Minimization**: Generate workflows with minimal required permissions
3. **Environment Protection**: Default protection rules for production environments
4. **Security Documentation**: Clear guidance on security best practices
#### **Enhanced Security Features**
1. **Custom Security Checks**: Advanced validation for specific project types
2. **Security Reporting**: Automated security posture assessment
3. **Incident Response**: Guidance for security issue handling
## Research Validation Status
- ✅ GitHub Pages security model analyzed
- ✅ Deployment methods evaluated
- ✅ Security best practices documented
- ✅ Risk assessment completed
- ⚠️ Needs validation: Security template effectiveness testing
- ⚠️ Needs implementation: DocuMCP security feature integration
## Sources & References
1. GitHub Pages Official Documentation - Security Guidelines
2. GitHub Actions Security Best Practices
3. OWASP Static Site Security Guide
4. GitHub Security Advisory Database
5. Community Security Analysis Reports
```
--------------------------------------------------------------------------------
/docs/research/research-integration-summary-2025-01-14.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.968Z"
last_validated: "2025-12-09T19:41:38.599Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Research Integration Summary
**Date**: 2025-01-14
**Status**: Completed
**Integration Method**: Direct ADR Updates + Implementation Recommendations
## Research Integration Overview
This document summarizes how research findings from systematic web research using Firecrawl MCP server have been incorporated into DocuMCP's architectural decisions and implementation planning.
## Research Areas Integrated
### ✅ **1. MCP Server Architecture (ADR-001)**
**Research Source**: `domain-1-mcp-architecture/mcp-performance-research.md`
**Key Integrations**:
- **Performance Validation**: Confirmed TypeScript MCP SDK provides minimal overhead with JSON-RPC 2.0
- **Memory Optimization**: Integrated streaming patterns (10x memory reduction) and worker threads (3-4x performance)
- **Implementation Strategy**: Added concrete code patterns for repository analysis with performance benchmarks
**ADR Updates Applied**:
- Added "Research Integration" section with validated performance characteristics
- Integrated specific implementation patterns for streaming and worker threads
- Established research-validated performance targets for different repository sizes
### ✅ **2. SSG Recommendation Engine (ADR-003)**
**Research Source**: `domain-3-ssg-recommendation/ssg-performance-analysis.md`
**Key Integrations**:
- **Performance Matrix**: Comprehensive build time analysis across SSG scales
- **Algorithm Enhancement**: Research-validated scoring with scale-based weighting
- **Real-World Data**: Hugo 250x faster than Gatsby (small sites), gap narrows to 40x (large sites)
**ADR Updates Applied**:
- Enhanced performance modeling with research-validated SSG performance matrix
- Updated recommendation algorithm with evidence-based scoring
- Integrated scale-based performance weighting (critical path vs features)
### ✅ **3. GitHub Pages Deployment Security (ADR-005)**
**Research Source**: `domain-5-github-deployment/github-pages-security-analysis.md`
**Key Integrations**:
- **Security Architecture**: OIDC token authentication with JWT validation
- **Permission Minimization**: Specific `pages: write` and `id-token: write` requirements
- **Environment Protection**: Default security rules with approval workflows
- **Automated Scanning**: Integrated secret and vulnerability detection
**ADR Updates Applied**:
- Enhanced repository configuration management with research-validated security practices
- Added multi-layered security approach with specific implementation details
- Integrated automated security scanning and environment protection requirements
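A sketch of the minimal permission block a generated workflow would declare, expressed here as a TypeScript object mirroring the YAML (`contents: read` is the usual checkout requirement and is an assumption, not from the research):
```typescript
// Minimal permission set for an OIDC-based Pages deployment job;
// anything not listed is implicitly denied.
const pagesDeployPermissions = {
  contents: "read", // assumption: needed by actions/checkout
  pages: "write", // publish the built site to GitHub Pages
  "id-token": "write", // request the OIDC token for authentication
};
```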
## Implementation Impact Analysis
### **Immediate Implementation Requirements**
1. **High Priority Updates** (Week 1-2):
- Implement streaming-based repository analysis with 10MB threshold
   - Create worker thread pool for parallel file processing (see the sketch after this list)
- Integrate OIDC-based GitHub Pages deployment templates
2. **Medium Priority Enhancements** (Week 3-4):
- Develop SSG performance scoring algorithm with research-validated weights
- Implement automated security scanning in generated workflows
- Create environment protection templates
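A minimal fan-out sketch of the worker-thread pattern, where `./analyze-worker.js` is a hypothetical worker script that posts one message with results for the file paths it receives:
```typescript
import { Worker } from "worker_threads";
import os from "os";
async function analyzeInParallel(files: string[]): Promise<unknown[]> {
  const poolSize = Math.min(os.cpus().length, 4);
  // Round-robin the file list across the pool.
  const chunks: string[][] = Array.from({ length: poolSize }, () => []);
  files.forEach((file, i) => chunks[i % poolSize].push(file));
  return Promise.all(
    chunks.map(
      (chunk) =>
        new Promise<unknown>((resolve, reject) => {
          const worker = new Worker("./analyze-worker.js", {
            workerData: chunk, // the worker reads its file list from here
          });
          worker.once("message", resolve); // one reply per worker
          worker.once("error", reject);
        }),
    ),
  );
}
```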
### **Architecture Validation Status**
| **Decision Area** | **Research Status** | **Validation Result** | **Implementation Ready** |
| --------------------- | ------------------- | ---------------------------- | ------------------------ |
| TypeScript MCP SDK | ✅ Validated | Confirmed optimal choice | ✅ Yes |
| Node.js Performance | ✅ Validated | Specific patterns identified | ✅ Yes |
| SSG Recommendation | ✅ Validated | Algorithm refined | ✅ Yes |
| GitHub Pages Security | ✅ Validated | Security model confirmed | ✅ Yes |
| Repository Analysis | ✅ Validated | Streaming patterns proven | ✅ Yes |
### **Risk Mitigation Updates**
**Original Risk**: Memory constraints for large repository analysis
**Research Mitigation**: 10x memory reduction with streaming + worker threads
**Implementation**: Concrete code patterns integrated into ADR-001
**Original Risk**: SSG recommendation accuracy
**Research Mitigation**: Evidence-based performance weighting algorithm
**Implementation**: Performance matrix and scoring algorithm in ADR-003
**Original Risk**: Deployment security vulnerabilities
**Research Mitigation**: Multi-layered security with OIDC authentication
**Implementation**: Enhanced security configuration in ADR-005
## Research Validation Metrics
### **Research Quality Assessment**
- **Sources Analyzed**: 15+ authoritative sources (GitHub docs, CSS-Tricks benchmarks, security guides)
- **Data Points Validated**: 50+ specific performance metrics and security practices
- **Implementation Patterns**: 12+ concrete code examples and configuration templates
- **Best Practices**: 25+ industry-validated approaches integrated
### **ADR Enhancement Metrics**
- **ADRs Updated**: 3 core architectural decisions
- **New Content Added**: ~500 lines of research-validated implementation guidance
- **Performance Targets**: Quantitative benchmarks established for all components
- **Security Practices**: Comprehensive security model with specific configurations
## Next Steps & Continuous Integration
### **Immediate Actions** (Next 48 hours)
1. **Implementation Planning**: Use research-validated patterns for MVP development
2. **Security Review**: Validate enhanced security configurations with team
3. **Performance Testing**: Create benchmarks based on research targets
### **Short-term Integration** (Next 2 weeks)
1. **Prototype Development**: Implement streaming repository analysis
2. **Algorithm Validation**: Test SSG recommendation scoring with real projects
3. **Security Testing**: Validate OIDC deployment workflows
### **Long-term Monitoring** (Ongoing)
1. **Performance Validation**: Compare actual performance against research predictions
2. **Security Auditing**: Regular validation of security practices
3. **Research Updates**: Monitor for new performance data and security practices
## Research Integration Success Criteria
✅ **Architectural Validation**: All core decisions validated with evidence
✅ **Implementation Guidance**: Concrete patterns and code examples provided
✅ **Performance Targets**: Quantitative benchmarks established
✅ **Security Framework**: Comprehensive security model implemented
✅ **Risk Mitigation**: Major risks addressed with validated solutions
**Overall Integration Status**: **SUCCESSFUL** - Ready for implementation phase
---
**Research Conducted Using**: Firecrawl MCP Server systematic web research
**Research Duration**: 4 hours of intensive analysis
**Integration Method**: Direct ADR updates with validation tracking
**Confidence Level**: 95% - Based on authoritative sources and comprehensive analysis
```
--------------------------------------------------------------------------------
/tests/memory/learning.test.ts:
--------------------------------------------------------------------------------
```typescript
/**
* Basic unit tests for Incremental Learning System
* Tests basic instantiation and core functionality
* Part of Issue #54 - Core Memory System Unit Tests
*/
import { promises as fs } from "fs";
import path from "path";
import os from "os";
import { MemoryManager } from "../../src/memory/manager.js";
import {
IncrementalLearningSystem,
ProjectFeatures,
} from "../../src/memory/learning.js";
describe("IncrementalLearningSystem", () => {
let tempDir: string;
let memoryManager: MemoryManager;
let learning: IncrementalLearningSystem;
beforeEach(async () => {
// Create unique temp directory for each test
tempDir = path.join(
os.tmpdir(),
`memory-learning-test-${Date.now()}-${Math.random()
.toString(36)
        .slice(2, 11)}`,
);
await fs.mkdir(tempDir, { recursive: true });
// Create memory manager for learning system
memoryManager = new MemoryManager(tempDir);
await memoryManager.initialize();
learning = new IncrementalLearningSystem(memoryManager);
await learning.initialize();
});
afterEach(async () => {
// Cleanup temp directory
try {
await fs.rm(tempDir, { recursive: true, force: true });
} catch (error) {
// Ignore cleanup errors
}
});
describe("Basic Learning System Tests", () => {
test("should create learning system instance", () => {
expect(learning).toBeDefined();
expect(learning).toBeInstanceOf(IncrementalLearningSystem);
});
test("should be able to enable and disable learning", () => {
learning.setLearningEnabled(false);
learning.setLearningEnabled(true);
// Just test that the methods exist and don't throw
expect(true).toBe(true);
});
test("should have pattern retrieval capabilities", async () => {
// Test pattern retrieval without throwing errors
const patterns = await learning.getPatterns();
expect(Array.isArray(patterns)).toBe(true);
});
test("should provide learning statistics", async () => {
const stats = await learning.getStatistics();
expect(stats).toBeDefined();
expect(typeof stats.totalPatterns).toBe("number");
expect(typeof stats.averageConfidence).toBe("number");
expect(Array.isArray(stats.insights)).toBe(true);
});
test("should handle clearing patterns", async () => {
await learning.clearPatterns();
// Verify patterns are cleared
const patterns = await learning.getPatterns();
expect(Array.isArray(patterns)).toBe(true);
expect(patterns.length).toBe(0);
});
test("should provide improved recommendations", async () => {
const projectFeatures: ProjectFeatures = {
language: "typescript",
framework: "react",
size: "medium" as const,
complexity: "moderate" as const,
hasTests: true,
hasCI: true,
hasDocs: false,
isOpenSource: true,
};
const baseRecommendation = {
recommended: "docusaurus",
confidence: 0.8,
score: 0.85,
};
const improved = await learning.getImprovedRecommendation(
projectFeatures,
baseRecommendation,
);
expect(improved).toBeDefined();
expect(improved.recommendation).toBeDefined();
expect(typeof improved.confidence).toBe("number");
expect(Array.isArray(improved.insights)).toBe(true);
});
test("should handle learning from memory entries", async () => {
const memoryEntry = await memoryManager.remember(
"recommendation",
{
recommended: "docusaurus",
confidence: 0.9,
language: { primary: "typescript" },
framework: { name: "react" },
},
{
projectId: "test-project",
ssg: "docusaurus",
},
);
// Learn from successful outcome
await learning.learn(memoryEntry, "success");
// Verify no errors thrown
expect(true).toBe(true);
});
});
describe("Learning Statistics and Analysis", () => {
test("should provide comprehensive learning statistics", async () => {
const stats = await learning.getStatistics();
expect(stats).toBeDefined();
expect(typeof stats.totalPatterns).toBe("number");
expect(typeof stats.averageConfidence).toBe("number");
expect(typeof stats.learningVelocity).toBe("number");
expect(typeof stats.patternsByType).toBe("object");
expect(Array.isArray(stats.insights)).toBe(true);
});
test("should handle multiple learning iterations", async () => {
const projectFeatures: ProjectFeatures = {
language: "javascript",
framework: "vue",
size: "small" as const,
complexity: "simple" as const,
hasTests: false,
hasCI: false,
hasDocs: true,
isOpenSource: false,
};
const baseRecommendation = {
recommended: "vuepress",
confidence: 0.7,
score: 0.75,
};
// Multiple learning cycles
for (let i = 0; i < 3; i++) {
const improved = await learning.getImprovedRecommendation(
projectFeatures,
baseRecommendation,
);
expect(improved.recommendation).toBeDefined();
}
});
});
describe("Error Handling", () => {
test("should handle empty patterns gracefully", async () => {
// Clear all patterns first
await learning.clearPatterns();
const patterns = await learning.getPatterns();
expect(Array.isArray(patterns)).toBe(true);
expect(patterns.length).toBe(0);
});
test("should handle learning with minimal data", async () => {
const projectFeatures: ProjectFeatures = {
language: "unknown",
size: "small" as const,
complexity: "simple" as const,
hasTests: false,
hasCI: false,
hasDocs: false,
isOpenSource: false,
};
const baseRecommendation = {
recommended: "jekyll",
confidence: 0.5,
};
const improved = await learning.getImprovedRecommendation(
projectFeatures,
baseRecommendation,
);
expect(improved).toBeDefined();
expect(improved.recommendation).toBeDefined();
});
test("should handle concurrent learning operations", async () => {
const promises = Array.from({ length: 3 }, async (_, i) => {
const projectFeatures: ProjectFeatures = {
language: "go",
size: "medium" as const,
complexity: "moderate" as const,
hasTests: true,
hasCI: true,
hasDocs: true,
isOpenSource: true,
};
const baseRecommendation = {
recommended: "hugo",
confidence: 0.8 + i * 0.02,
};
return learning.getImprovedRecommendation(
projectFeatures,
baseRecommendation,
);
});
const results = await Promise.all(promises);
expect(results.length).toBe(3);
results.forEach((result) => {
expect(result.recommendation).toBeDefined();
});
});
});
});
```
--------------------------------------------------------------------------------
/docs/how-to/prompting-guide.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.953Z"
last_validated: "2025-12-09T19:41:38.585Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# How to Prompt DocuMCP Effectively
This guide shows you how to interact with DocuMCP using effective prompts to get the best results from the system.
## 🎯 Pro Tip: Use @LLM_CONTEXT.md
When using DocuMCP in your AI assistant (Claude, ChatGPT, etc.), **reference the LLM_CONTEXT.md file** for instant context about all 45 available tools:
```
@LLM_CONTEXT.md analyze my repository and recommend the best deployment strategy
```
The `LLM_CONTEXT.md` file provides:
- Complete tool descriptions and parameters
- Usage examples for all 45 tools
- Common workflow patterns
- Memory system documentation
- Phase 3 code-to-docs sync capabilities
**Location**: `/LLM_CONTEXT.md` in the root of your project
This ensures your AI assistant has full context about DocuMCP's capabilities and can provide more accurate recommendations.
## Quick Start
DocuMCP responds to natural language prompts. Here are the most common patterns:
### Basic Analysis
```
analyze my repository for documentation needs
```
### Get Recommendations
```
what static site generator should I use for my project?
```
### Deploy Documentation
```
set up GitHub Pages deployment for my docs
```
## Available Tools
DocuMCP provides several tools you can invoke through natural prompts:
### 1. Repository Analysis
**Purpose**: Analyze your project structure, dependencies, and documentation needs.
**Example Prompts**:
- "Analyze my repository structure"
- "What documentation gaps do I have?"
- "Examine my project for documentation opportunities"
**What it returns**: Project analysis with language detection, dependency mapping, and complexity assessment.
### 2. SSG Recommendations
**Purpose**: Get intelligent static site generator recommendations based on your project.
**Example Prompts**:
- "Recommend a static site generator for my TypeScript project"
- "Which SSG works best with my Python documentation?"
- "Compare documentation tools for my project"
**What it returns**: Weighted recommendations with justifications for Jekyll, Hugo, Docusaurus, MkDocs, or Eleventy.
### 3. Configuration Generation
**Purpose**: Generate SSG-specific configuration files.
**Example Prompts**:
- "Generate a Hugo config for my project"
- "Create MkDocs configuration files"
- "Set up Docusaurus for my documentation"
**What it returns**: Ready-to-use configuration files optimized for your project.
### 4. Documentation Structure
**Purpose**: Create Diataxis-compliant documentation structure.
**Example Prompts**:
- "Set up documentation structure following Diataxis"
- "Create organized docs folders for my project"
- "Build a comprehensive documentation layout"
**What it returns**: Organized folder structure with templates following documentation best practices.
### 5. GitHub Pages Deployment
**Purpose**: Automate GitHub Pages deployment workflows.
**Example Prompts**:
- "Deploy my docs to GitHub Pages"
- "Set up automated documentation deployment"
- "Create GitHub Actions for my documentation site"
**What it returns**: GitHub Actions workflows configured for your chosen SSG.
### 6. Deployment Verification
**Purpose**: Verify and troubleshoot GitHub Pages deployments.
**Example Prompts**:
- "Check if my GitHub Pages deployment is working"
- "Troubleshoot my documentation deployment"
- "Verify my docs site is live"
**What it returns**: Deployment status and troubleshooting recommendations.
## Advanced Prompting Techniques
### Chained Operations
You can chain multiple operations in a single conversation:
```
1. First analyze my repository
2. Then recommend the best SSG
3. Finally set up the deployment workflow
```
### Specific Requirements
Be specific about your needs:
```
I need a documentation site that:
- Works with TypeScript
- Supports API documentation
- Has good search functionality
- Deploys automatically on commits
```
### Context-Aware Requests
Reference previous analysis:
```
Based on the analysis you just did, create the documentation structure and deploy it to GitHub Pages
```
## Best Practices
### 1. Start with Analysis
Always begin with repository analysis to get tailored recommendations:
```
analyze my project for documentation needs
```
### 2. Be Specific About Goals
Tell DocuMCP what you're trying to achieve:
- "I need developer documentation for my API"
- "I want user guides for my application"
- "I need project documentation for contributors"
### 3. Specify Constraints
Mention any limitations or preferences:
- "I prefer minimal setup"
- "I need something that works with our CI/CD pipeline"
- "I want to use our existing design system"
### 4. Ask for Explanations
Request reasoning behind recommendations:
```
why did you recommend Hugo over Jekyll for my project?
```
### 5. Iterate and Refine
Use follow-up prompts to refine results:
```
can you modify the GitHub Actions workflow to also run tests?
```
## Common Workflows
### Complete Documentation Setup
```
1. "Analyze my repository for documentation needs"
2. "Recommend the best static site generator for my project"
3. "Generate configuration files for the recommended SSG"
4. "Set up Diataxis-compliant documentation structure"
5. "Deploy everything to GitHub Pages"
```
### Documentation Audit
```
1. "Analyze my existing documentation"
2. "What gaps do you see in my current docs?"
3. "How can I improve my documentation structure?"
```
### Deployment Troubleshooting
```
1. "My GitHub Pages site isn't working"
2. "Check my deployment configuration"
3. "Help me fix the build errors"
```
## Memory and Context
DocuMCP remembers context within a conversation, so you can:
- Reference previous analysis results
- Build on earlier recommendations
- Chain operations together seamlessly
Example conversation flow:
```
User: "analyze my repository"
DocuMCP: [provides analysis]
User: "based on that analysis, what SSG do you recommend?"
DocuMCP: [provides recommendation using analysis context]
User: "set it up with that recommendation"
DocuMCP: [configures the recommended SSG]
```
## Troubleshooting Prompts
If you're not getting the results you expect, try:
### More Specific Prompts
Instead of: "help with docs"
Try: "analyze my TypeScript project and recommend documentation tools"
### Context Setting
Instead of: "set up deployment"
Try: "set up GitHub Pages deployment for the MkDocs site we just configured"
### Direct Tool Requests
If you know exactly what you want:
- "use the analyze_repository tool on my current directory"
- "run the recommend_ssg tool with my project data"
## Getting Help
If you need assistance with prompting:
- Ask DocuMCP to explain available tools: "what can you help me with?"
- Request examples: "show me example prompts for documentation setup"
- Ask for clarification: "I don't understand the recommendation, can you explain?"
Remember: DocuMCP is designed to understand natural language, so don't hesitate to ask questions in your own words!
```
--------------------------------------------------------------------------------
/docs/tutorials/memory-workflows.md:
--------------------------------------------------------------------------------
```markdown
---
documcp:
last_updated: "2025-11-20T00:46:21.973Z"
last_validated: "2025-12-09T19:41:38.604Z"
auto_updated: false
update_frequency: monthly
validated_against_commit: 306567b32114502c606244ad6c2930360bcd4201
---
# Memory Workflows and Advanced Features
This tutorial covers DocuMCP's memory system and advanced workflow features for intelligent documentation management.
## Overview
DocuMCP includes a sophisticated memory system that learns from your documentation patterns and provides intelligent assistance:
### Memory System Features
- **Historical Analysis**: Learns from past documentation projects
- **User Preferences**: Adapts to your documentation style
- **Pattern Recognition**: Identifies successful documentation patterns
- **Smart Recommendations**: Provides context-aware suggestions
### Advanced Workflows
- **Multi-Project Memory**: Share insights across projects
- **Collaborative Learning**: Learn from team documentation patterns
- **Automated Optimization**: Continuously improve documentation quality
## Getting Started with Memory
### Initial Setup
```bash
# Initialize memory system:
"initialize memory system for my documentation workflow"
```
### Basic Memory Operations
```bash
# Store analysis results:
"store this analysis in memory for future reference"
# Recall similar projects:
"find similar projects in my memory"
# Update preferences:
"update my documentation preferences based on this project"
```
## Memory System Architecture
### Memory Components
1. **Analysis Memory**: Stores repository analysis results
2. **Recommendation Memory**: Tracks SSG recommendation patterns
3. **Deployment Memory**: Records deployment success patterns
4. **User Preference Memory**: Learns individual preferences
### Memory Storage
```yaml
# Memory configuration
memory:
storage_path: ".documcp/memory"
retention_policy: "keep_all"
backup_enabled: true
compression: true
```
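Programmatic access mirrors the project's own test suite; a sketch where the import path is illustrative and depends on how documcp is installed:
```typescript
import { MemoryManager } from "documcp/dist/memory/manager.js";
const manager = new MemoryManager(".documcp/memory");
await manager.initialize();
// Store a recommendation result with project metadata for later recall.
const entry = await manager.remember(
  "recommendation",
  { recommended: "docusaurus", confidence: 0.9 },
  { projectId: "my-project", ssg: "docusaurus" },
);
```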
## Advanced Memory Features
### Contextual Retrieval
```bash
# Find relevant memories:
"find memories related to TypeScript documentation projects"
# Get contextual suggestions:
"get suggestions based on my previous documentation patterns"
```
### Pattern Learning
```bash
# Learn from successful deployments:
"learn from successful documentation deployments"
# Identify patterns:
"identify successful documentation patterns in my memory"
```
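Under the hood this is the `IncrementalLearningSystem`. A sketch continuing the `MemoryManager` example above (API shapes taken from the project's test suite; the import path is illustrative):
```typescript
import { IncrementalLearningSystem } from "documcp/dist/memory/learning.js";
const learning = new IncrementalLearningSystem(manager);
await learning.initialize();
// Feed a stored entry back with its real-world outcome...
await learning.learn(entry, "success");
// ...so future recommendations are re-weighted by learned patterns.
const improved = await learning.getImprovedRecommendation(
  {
    language: "typescript",
    size: "medium",
    complexity: "moderate",
    hasTests: true,
    hasCI: true,
    hasDocs: false,
    isOpenSource: true,
  },
  { recommended: "docusaurus", confidence: 0.8 },
);
console.log(improved.confidence, improved.insights);
```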
### Collaborative Memory
```bash
# Share memories with team:
"share documentation patterns with my team"
# Import team memories:
"import documentation patterns from team members"
```
## Memory Workflow Examples
### Project Analysis Workflow
```bash
# Complete analysis with memory integration:
"analyze my repository and store results in memory for future reference"
```
This workflow:
1. Analyzes the current repository
2. Compares with similar projects in memory
3. Provides enhanced recommendations
4. Stores results for future reference
### Recommendation Workflow
```bash
# Get memory-enhanced recommendations:
"recommend SSG based on my memory and current project"
```
This workflow:
1. Retrieves relevant memories
2. Applies learned patterns
3. Provides personalized recommendations
4. Updates memory with results
### Deployment Workflow
```bash
# Deploy with memory insights:
"deploy documentation using insights from my memory"
```
This workflow:
1. Applies learned deployment patterns
2. Uses successful configuration templates
3. Monitors for known issues
4. Records results for future learning
## Memory Management
### Memory Operations
```bash
# List all memories:
"list all memories in my system"
# Search memories:
"search memories for 'React documentation'"
# Export memories:
"export my documentation memories"
# Import memories:
"import documentation memories from file"
```
### Memory Optimization
```bash
# Optimize memory storage:
"optimize memory storage and remove duplicates"
# Clean up old memories:
"clean up memories older than 6 months"
# Compress memory:
"compress memory storage for efficiency"
```
## Advanced Workflow Patterns
### Multi-Project Memory Sharing
```bash
# Set up project memory sharing:
"set up memory sharing between my projects"
```
### Team Collaboration
```bash
# Enable team memory sharing:
"enable team memory sharing for documentation patterns"
```
### Automated Learning
```bash
# Enable automated learning:
"enable automated learning from documentation patterns"
```
## Memory Analytics
### Memory Insights
```bash
# Get memory insights:
"provide insights from my documentation memory"
```
### Success Pattern Analysis
```bash
# Analyze success patterns:
"analyze successful documentation patterns in my memory"
```
### Performance Tracking
```bash
# Track memory performance:
"track performance of memory-enhanced recommendations"
```
## Troubleshooting Memory Issues
### Common Problems
**Problem**: Memory not loading
**Solution**: Check memory file permissions and integrity
**Problem**: Slow memory operations
**Solution**: Optimize memory storage and clean up old data
**Problem**: Inconsistent recommendations
**Solution**: Review memory data quality and patterns
### Memory Debugging
```bash
# Debug memory issues:
"debug memory system problems"
# Validate memory integrity:
"validate memory data integrity"
# Reset memory system:
"reset memory system to defaults"
```
## Best Practices
### Memory Management
1. **Regular Backups**: Backup memory data regularly
2. **Quality Control**: Review and clean memory data
3. **Privacy**: Be mindful of sensitive data in memories
4. **Performance**: Monitor memory system performance
5. **Documentation**: Document your memory workflows
### Workflow Optimization
1. **Consistent Patterns**: Use consistent documentation patterns
2. **Regular Updates**: Update memory with new learnings
3. **Team Sharing**: Share successful patterns with team
4. **Continuous Learning**: Enable continuous learning features
5. **Performance Monitoring**: Monitor workflow performance
## Memory System Integration
### With Documentation Tools
- **Repository Analysis**: Enhanced with historical data
- **SSG Recommendations**: Improved with pattern learning
- **Deployment Automation**: Optimized with success patterns
- **Content Generation**: Informed by previous content
### With External Systems
- **CI/CD Integration**: Memory-aware deployment pipelines
- **Analytics Integration**: Memory-enhanced performance tracking
- **Team Tools**: Collaborative memory sharing
- **Backup Systems**: Automated memory backup
## Advanced Configuration
### Memory Configuration
```yaml
# Advanced memory configuration
memory:
storage:
type: "local"
path: ".documcp/memory"
encryption: true
learning:
enabled: true
auto_update: true
pattern_detection: true
sharing:
team_enabled: false
project_sharing: true
export_format: "json"
```
### Performance Tuning
```bash
# Tune memory performance:
"optimize memory system performance"
```
## Next Steps
- [How-to Guides](../how-to/)
- [Reference Documentation](../reference/)
- [Architecture Explanation](../explanation/)
- [Advanced Configuration](../reference/configuration.md)
```