This is page 1 of 38. Use http://codebase.md/eyaltoledano/claude-task-master?page={x} to view the full context. # Directory Structure ``` ├── .changeset │ ├── config.json │ └── README.md ├── .claude │ ├── agents │ │ ├── task-checker.md │ │ ├── task-executor.md │ │ └── task-orchestrator.md │ ├── commands │ │ ├── dedupe.md │ │ └── tm │ │ ├── add-dependency │ │ │ └── add-dependency.md │ │ ├── add-subtask │ │ │ ├── add-subtask.md │ │ │ └── convert-task-to-subtask.md │ │ ├── add-task │ │ │ └── add-task.md │ │ ├── analyze-complexity │ │ │ └── analyze-complexity.md │ │ ├── complexity-report │ │ │ └── complexity-report.md │ │ ├── expand │ │ │ ├── expand-all-tasks.md │ │ │ └── expand-task.md │ │ ├── fix-dependencies │ │ │ └── fix-dependencies.md │ │ ├── generate │ │ │ └── generate-tasks.md │ │ ├── help.md │ │ ├── init │ │ │ ├── init-project-quick.md │ │ │ └── init-project.md │ │ ├── learn.md │ │ ├── list │ │ │ ├── list-tasks-by-status.md │ │ │ ├── list-tasks-with-subtasks.md │ │ │ └── list-tasks.md │ │ ├── models │ │ │ ├── setup-models.md │ │ │ └── view-models.md │ │ ├── next │ │ │ └── next-task.md │ │ ├── parse-prd │ │ │ ├── parse-prd-with-research.md │ │ │ └── parse-prd.md │ │ ├── remove-dependency │ │ │ └── remove-dependency.md │ │ ├── remove-subtask │ │ │ └── remove-subtask.md │ │ ├── remove-subtasks │ │ │ ├── remove-all-subtasks.md │ │ │ └── remove-subtasks.md │ │ ├── remove-task │ │ │ └── remove-task.md │ │ ├── set-status │ │ │ ├── to-cancelled.md │ │ │ ├── to-deferred.md │ │ │ ├── to-done.md │ │ │ ├── to-in-progress.md │ │ │ ├── to-pending.md │ │ │ └── to-review.md │ │ ├── setup │ │ │ ├── install-taskmaster.md │ │ │ └── quick-install-taskmaster.md │ │ ├── show │ │ │ └── show-task.md │ │ ├── status │ │ │ └── project-status.md │ │ ├── sync-readme │ │ │ └── sync-readme.md │ │ ├── tm-main.md │ │ ├── update │ │ │ ├── update-single-task.md │ │ │ ├── update-task.md │ │ │ └── update-tasks-from-id.md │ │ ├── utils │ │ │ └── analyze-project.md │ │ ├── validate-dependencies │ │ │ └── validate-dependencies.md │ │ └── workflows │ │ ├── auto-implement-tasks.md │ │ ├── command-pipeline.md │ │ └── smart-workflow.md │ └── TM_COMMANDS_GUIDE.md ├── .coderabbit.yaml ├── .cursor │ ├── mcp.json │ └── rules │ ├── ai_providers.mdc │ ├── ai_services.mdc │ ├── architecture.mdc │ ├── changeset.mdc │ ├── commands.mdc │ ├── context_gathering.mdc │ ├── cursor_rules.mdc │ ├── dependencies.mdc │ ├── dev_workflow.mdc │ ├── git_workflow.mdc │ ├── glossary.mdc │ ├── mcp.mdc │ ├── new_features.mdc │ ├── self_improve.mdc │ ├── tags.mdc │ ├── taskmaster.mdc │ ├── tasks.mdc │ ├── telemetry.mdc │ ├── test_workflow.mdc │ ├── tests.mdc │ ├── ui.mdc │ └── utilities.mdc ├── .cursorignore ├── .env.example ├── .github │ ├── ISSUE_TEMPLATE │ │ ├── bug_report.md │ │ ├── enhancements---feature-requests.md │ │ └── feedback.md │ ├── PULL_REQUEST_TEMPLATE │ │ ├── bugfix.md │ │ ├── config.yml │ │ ├── feature.md │ │ └── integration.md │ ├── PULL_REQUEST_TEMPLATE.md │ ├── scripts │ │ ├── auto-close-duplicates.mjs │ │ ├── backfill-duplicate-comments.mjs │ │ ├── check-pre-release-mode.mjs │ │ ├── parse-metrics.mjs │ │ ├── release.mjs │ │ ├── tag-extension.mjs │ │ └── utils.mjs │ └── workflows │ ├── auto-close-duplicates.yml │ ├── backfill-duplicate-comments.yml │ ├── ci.yml │ ├── claude-dedupe-issues.yml │ ├── claude-docs-trigger.yml │ ├── claude-docs-updater.yml │ ├── claude-issue-triage.yml │ ├── claude.yml │ ├── extension-ci.yml │ ├── extension-release.yml │ ├── log-issue-events.yml │ ├── pre-release.yml │ ├── release-check.yml │ ├── 
release.yml │ ├── update-models-md.yml │ └── weekly-metrics-discord.yml ├── .gitignore ├── .kiro │ ├── hooks │ │ ├── tm-code-change-task-tracker.kiro.hook │ │ ├── tm-complexity-analyzer.kiro.hook │ │ ├── tm-daily-standup-assistant.kiro.hook │ │ ├── tm-git-commit-task-linker.kiro.hook │ │ ├── tm-pr-readiness-checker.kiro.hook │ │ ├── tm-task-dependency-auto-progression.kiro.hook │ │ └── tm-test-success-task-completer.kiro.hook │ ├── settings │ │ └── mcp.json │ └── steering │ ├── dev_workflow.md │ ├── kiro_rules.md │ ├── self_improve.md │ ├── taskmaster_hooks_workflow.md │ └── taskmaster.md ├── .manypkg.json ├── .mcp.json ├── .npmignore ├── .nvmrc ├── .taskmaster │ ├── CLAUDE.md │ ├── config.json │ ├── docs │ │ ├── MIGRATION-ROADMAP.md │ │ ├── prd-tm-start.txt │ │ ├── prd.txt │ │ ├── README.md │ │ ├── research │ │ │ ├── 2025-06-14_how-can-i-improve-the-scope-up-and-scope-down-comm.md │ │ │ ├── 2025-06-14_should-i-be-using-any-specific-libraries-for-this.md │ │ │ ├── 2025-06-14_test-save-functionality.md │ │ │ ├── 2025-06-14_test-the-fix-for-duplicate-saves-final-test.md │ │ │ └── 2025-08-01_do-we-need-to-add-new-commands-or-can-we-just-weap.md │ │ ├── task-template-importing-prd.txt │ │ ├── test-prd.txt │ │ └── tm-core-phase-1.txt │ ├── reports │ │ ├── task-complexity-report_cc-kiro-hooks.json │ │ ├── task-complexity-report_test-prd-tag.json │ │ ├── task-complexity-report_tm-core-phase-1.json │ │ ├── task-complexity-report.json │ │ └── tm-core-complexity.json │ ├── state.json │ ├── tasks │ │ ├── task_001_tm-start.txt │ │ ├── task_002_tm-start.txt │ │ ├── task_003_tm-start.txt │ │ ├── task_004_tm-start.txt │ │ ├── task_007_tm-start.txt │ │ └── tasks.json │ └── templates │ └── example_prd.txt ├── .vscode │ ├── extensions.json │ └── settings.json ├── apps │ ├── cli │ │ ├── CHANGELOG.md │ │ ├── package.json │ │ ├── src │ │ │ ├── commands │ │ │ │ ├── auth.command.ts │ │ │ │ ├── context.command.ts │ │ │ │ ├── list.command.ts │ │ │ │ ├── set-status.command.ts │ │ │ │ ├── show.command.ts │ │ │ │ └── start.command.ts │ │ │ ├── index.ts │ │ │ ├── ui │ │ │ │ ├── components │ │ │ │ │ ├── dashboard.component.ts │ │ │ │ │ ├── header.component.ts │ │ │ │ │ ├── index.ts │ │ │ │ │ ├── next-task.component.ts │ │ │ │ │ ├── suggested-steps.component.ts │ │ │ │ │ └── task-detail.component.ts │ │ │ │ └── index.ts │ │ │ └── utils │ │ │ ├── auto-update.ts │ │ │ └── ui.ts │ │ └── tsconfig.json │ ├── docs │ │ ├── archive │ │ │ ├── ai-client-utils-example.mdx │ │ │ ├── ai-development-workflow.mdx │ │ │ ├── command-reference.mdx │ │ │ ├── configuration.mdx │ │ │ ├── cursor-setup.mdx │ │ │ ├── examples.mdx │ │ │ └── Installation.mdx │ │ ├── best-practices │ │ │ ├── advanced-tasks.mdx │ │ │ ├── configuration-advanced.mdx │ │ │ └── index.mdx │ │ ├── capabilities │ │ │ ├── cli-root-commands.mdx │ │ │ ├── index.mdx │ │ │ ├── mcp.mdx │ │ │ └── task-structure.mdx │ │ ├── CHANGELOG.md │ │ ├── docs.json │ │ ├── favicon.svg │ │ ├── getting-started │ │ │ ├── contribute.mdx │ │ │ ├── faq.mdx │ │ │ └── quick-start │ │ │ ├── configuration-quick.mdx │ │ │ ├── execute-quick.mdx │ │ │ ├── installation.mdx │ │ │ ├── moving-forward.mdx │ │ │ ├── prd-quick.mdx │ │ │ ├── quick-start.mdx │ │ │ ├── requirements.mdx │ │ │ ├── rules-quick.mdx │ │ │ └── tasks-quick.mdx │ │ ├── introduction.mdx │ │ ├── licensing.md │ │ ├── logo │ │ │ ├── dark.svg │ │ │ ├── light.svg │ │ │ └── task-master-logo.png │ │ ├── package.json │ │ ├── README.md │ │ ├── style.css │ │ ├── vercel.json │ │ └── whats-new.mdx │ └── extension │ ├── .vscodeignore │ ├── assets │ │ 
├── banner.png │ │ ├── icon-dark.svg │ │ ├── icon-light.svg │ │ ├── icon.png │ │ ├── screenshots │ │ │ ├── kanban-board.png │ │ │ └── task-details.png │ │ └── sidebar-icon.svg │ ├── CHANGELOG.md │ ├── components.json │ ├── docs │ │ ├── extension-CI-setup.md │ │ └── extension-development-guide.md │ ├── esbuild.js │ ├── LICENSE │ ├── package.json │ ├── package.mjs │ ├── package.publish.json │ ├── README.md │ ├── src │ │ ├── components │ │ │ ├── ConfigView.tsx │ │ │ ├── constants.ts │ │ │ ├── TaskDetails │ │ │ │ ├── AIActionsSection.tsx │ │ │ │ ├── DetailsSection.tsx │ │ │ │ ├── PriorityBadge.tsx │ │ │ │ ├── SubtasksSection.tsx │ │ │ │ ├── TaskMetadataSidebar.tsx │ │ │ │ └── useTaskDetails.ts │ │ │ ├── TaskDetailsView.tsx │ │ │ ├── TaskMasterLogo.tsx │ │ │ └── ui │ │ │ ├── badge.tsx │ │ │ ├── breadcrumb.tsx │ │ │ ├── button.tsx │ │ │ ├── card.tsx │ │ │ ├── collapsible.tsx │ │ │ ├── CollapsibleSection.tsx │ │ │ ├── dropdown-menu.tsx │ │ │ ├── label.tsx │ │ │ ├── scroll-area.tsx │ │ │ ├── separator.tsx │ │ │ ├── shadcn-io │ │ │ │ └── kanban │ │ │ │ └── index.tsx │ │ │ └── textarea.tsx │ │ ├── extension.ts │ │ ├── index.ts │ │ ├── lib │ │ │ └── utils.ts │ │ ├── services │ │ │ ├── config-service.ts │ │ │ ├── error-handler.ts │ │ │ ├── notification-preferences.ts │ │ │ ├── polling-service.ts │ │ │ ├── polling-strategies.ts │ │ │ ├── sidebar-webview-manager.ts │ │ │ ├── task-repository.ts │ │ │ ├── terminal-manager.ts │ │ │ └── webview-manager.ts │ │ ├── test │ │ │ └── extension.test.ts │ │ ├── utils │ │ │ ├── configManager.ts │ │ │ ├── connectionManager.ts │ │ │ ├── errorHandler.ts │ │ │ ├── event-emitter.ts │ │ │ ├── logger.ts │ │ │ ├── mcpClient.ts │ │ │ ├── notificationPreferences.ts │ │ │ └── task-master-api │ │ │ ├── cache │ │ │ │ └── cache-manager.ts │ │ │ ├── index.ts │ │ │ ├── mcp-client.ts │ │ │ ├── transformers │ │ │ │ └── task-transformer.ts │ │ │ └── types │ │ │ └── index.ts │ │ └── webview │ │ ├── App.tsx │ │ ├── components │ │ │ ├── AppContent.tsx │ │ │ ├── EmptyState.tsx │ │ │ ├── ErrorBoundary.tsx │ │ │ ├── PollingStatus.tsx │ │ │ ├── PriorityBadge.tsx │ │ │ ├── SidebarView.tsx │ │ │ ├── TagDropdown.tsx │ │ │ ├── TaskCard.tsx │ │ │ ├── TaskEditModal.tsx │ │ │ ├── TaskMasterKanban.tsx │ │ │ ├── ToastContainer.tsx │ │ │ └── ToastNotification.tsx │ │ ├── constants │ │ │ └── index.ts │ │ ├── contexts │ │ │ └── VSCodeContext.tsx │ │ ├── hooks │ │ │ ├── useTaskQueries.ts │ │ │ ├── useVSCodeMessages.ts │ │ │ └── useWebviewHeight.ts │ │ ├── index.css │ │ ├── index.tsx │ │ ├── providers │ │ │ └── QueryProvider.tsx │ │ ├── reducers │ │ │ └── appReducer.ts │ │ ├── sidebar.tsx │ │ ├── types │ │ │ └── index.ts │ │ └── utils │ │ ├── logger.ts │ │ └── toast.ts │ └── tsconfig.json ├── assets │ ├── .windsurfrules │ ├── AGENTS.md │ ├── claude │ │ ├── agents │ │ │ ├── task-checker.md │ │ │ ├── task-executor.md │ │ │ └── task-orchestrator.md │ │ ├── commands │ │ │ └── tm │ │ │ ├── add-dependency │ │ │ │ └── add-dependency.md │ │ │ ├── add-subtask │ │ │ │ ├── add-subtask.md │ │ │ │ └── convert-task-to-subtask.md │ │ │ ├── add-task │ │ │ │ └── add-task.md │ │ │ ├── analyze-complexity │ │ │ │ └── analyze-complexity.md │ │ │ ├── clear-subtasks │ │ │ │ ├── clear-all-subtasks.md │ │ │ │ └── clear-subtasks.md │ │ │ ├── complexity-report │ │ │ │ └── complexity-report.md │ │ │ ├── expand │ │ │ │ ├── expand-all-tasks.md │ │ │ │ └── expand-task.md │ │ │ ├── fix-dependencies │ │ │ │ └── fix-dependencies.md │ │ │ ├── generate │ │ │ │ └── generate-tasks.md │ │ │ ├── help.md │ │ │ ├── init │ │ │ │ ├── 
init-project-quick.md │ │ │ │ └── init-project.md │ │ │ ├── learn.md │ │ │ ├── list │ │ │ │ ├── list-tasks-by-status.md │ │ │ │ ├── list-tasks-with-subtasks.md │ │ │ │ └── list-tasks.md │ │ │ ├── models │ │ │ │ ├── setup-models.md │ │ │ │ └── view-models.md │ │ │ ├── next │ │ │ │ └── next-task.md │ │ │ ├── parse-prd │ │ │ │ ├── parse-prd-with-research.md │ │ │ │ └── parse-prd.md │ │ │ ├── remove-dependency │ │ │ │ └── remove-dependency.md │ │ │ ├── remove-subtask │ │ │ │ └── remove-subtask.md │ │ │ ├── remove-subtasks │ │ │ │ ├── remove-all-subtasks.md │ │ │ │ └── remove-subtasks.md │ │ │ ├── remove-task │ │ │ │ └── remove-task.md │ │ │ ├── set-status │ │ │ │ ├── to-cancelled.md │ │ │ │ ├── to-deferred.md │ │ │ │ ├── to-done.md │ │ │ │ ├── to-in-progress.md │ │ │ │ ├── to-pending.md │ │ │ │ └── to-review.md │ │ │ ├── setup │ │ │ │ ├── install-taskmaster.md │ │ │ │ └── quick-install-taskmaster.md │ │ │ ├── show │ │ │ │ └── show-task.md │ │ │ ├── status │ │ │ │ └── project-status.md │ │ │ ├── sync-readme │ │ │ │ └── sync-readme.md │ │ │ ├── tm-main.md │ │ │ ├── update │ │ │ │ ├── update-single-task.md │ │ │ │ ├── update-task.md │ │ │ │ └── update-tasks-from-id.md │ │ │ ├── utils │ │ │ │ └── analyze-project.md │ │ │ ├── validate-dependencies │ │ │ │ └── validate-dependencies.md │ │ │ └── workflows │ │ │ ├── auto-implement-tasks.md │ │ │ ├── command-pipeline.md │ │ │ └── smart-workflow.md │ │ └── TM_COMMANDS_GUIDE.md │ ├── config.json │ ├── env.example │ ├── example_prd.txt │ ├── gitignore │ ├── kiro-hooks │ │ ├── tm-code-change-task-tracker.kiro.hook │ │ ├── tm-complexity-analyzer.kiro.hook │ │ ├── tm-daily-standup-assistant.kiro.hook │ │ ├── tm-git-commit-task-linker.kiro.hook │ │ ├── tm-pr-readiness-checker.kiro.hook │ │ ├── tm-task-dependency-auto-progression.kiro.hook │ │ └── tm-test-success-task-completer.kiro.hook │ ├── roocode │ │ ├── .roo │ │ │ ├── rules-architect │ │ │ │ └── architect-rules │ │ │ ├── rules-ask │ │ │ │ └── ask-rules │ │ │ ├── rules-code │ │ │ │ └── code-rules │ │ │ ├── rules-debug │ │ │ │ └── debug-rules │ │ │ ├── rules-orchestrator │ │ │ │ └── orchestrator-rules │ │ │ └── rules-test │ │ │ └── test-rules │ │ └── .roomodes │ ├── rules │ │ ├── cursor_rules.mdc │ │ ├── dev_workflow.mdc │ │ ├── self_improve.mdc │ │ ├── taskmaster_hooks_workflow.mdc │ │ └── taskmaster.mdc │ └── scripts_README.md ├── bin │ └── task-master.js ├── biome.json ├── CHANGELOG.md ├── CLAUDE.md ├── context │ ├── chats │ │ ├── add-task-dependencies-1.md │ │ └── max-min-tokens.txt.md │ ├── fastmcp-core.txt │ ├── fastmcp-docs.txt │ ├── MCP_INTEGRATION.md │ ├── mcp-js-sdk-docs.txt │ ├── mcp-protocol-repo.txt │ ├── mcp-protocol-schema-03262025.json │ └── mcp-protocol-spec.txt ├── CONTRIBUTING.md ├── docs │ ├── CLI-COMMANDER-PATTERN.md │ ├── command-reference.md │ ├── configuration.md │ ├── contributor-docs │ │ └── testing-roo-integration.md │ ├── cross-tag-task-movement.md │ ├── examples │ │ └── claude-code-usage.md │ ├── examples.md │ ├── licensing.md │ ├── mcp-provider-guide.md │ ├── mcp-provider.md │ ├── migration-guide.md │ ├── models.md │ ├── providers │ │ └── gemini-cli.md │ ├── README.md │ ├── scripts │ │ └── models-json-to-markdown.js │ ├── task-structure.md │ └── tutorial.md ├── images │ └── logo.png ├── index.js ├── jest.config.js ├── jest.resolver.cjs ├── LICENSE ├── llms-install.md ├── mcp-server │ ├── server.js │ └── src │ ├── core │ │ ├── __tests__ │ │ │ └── context-manager.test.js │ │ ├── context-manager.js │ │ ├── direct-functions │ │ │ ├── add-dependency.js │ │ │ ├── add-subtask.js │ │ 
│ ├── add-tag.js │ │ │ ├── add-task.js │ │ │ ├── analyze-task-complexity.js │ │ │ ├── cache-stats.js │ │ │ ├── clear-subtasks.js │ │ │ ├── complexity-report.js │ │ │ ├── copy-tag.js │ │ │ ├── create-tag-from-branch.js │ │ │ ├── delete-tag.js │ │ │ ├── expand-all-tasks.js │ │ │ ├── expand-task.js │ │ │ ├── fix-dependencies.js │ │ │ ├── generate-task-files.js │ │ │ ├── initialize-project.js │ │ │ ├── list-tags.js │ │ │ ├── list-tasks.js │ │ │ ├── models.js │ │ │ ├── move-task-cross-tag.js │ │ │ ├── move-task.js │ │ │ ├── next-task.js │ │ │ ├── parse-prd.js │ │ │ ├── remove-dependency.js │ │ │ ├── remove-subtask.js │ │ │ ├── remove-task.js │ │ │ ├── rename-tag.js │ │ │ ├── research.js │ │ │ ├── response-language.js │ │ │ ├── rules.js │ │ │ ├── scope-down.js │ │ │ ├── scope-up.js │ │ │ ├── set-task-status.js │ │ │ ├── show-task.js │ │ │ ├── update-subtask-by-id.js │ │ │ ├── update-task-by-id.js │ │ │ ├── update-tasks.js │ │ │ ├── use-tag.js │ │ │ └── validate-dependencies.js │ │ ├── task-master-core.js │ │ └── utils │ │ ├── env-utils.js │ │ └── path-utils.js │ ├── custom-sdk │ │ ├── errors.js │ │ ├── index.js │ │ ├── json-extractor.js │ │ ├── language-model.js │ │ ├── message-converter.js │ │ └── schema-converter.js │ ├── index.js │ ├── logger.js │ ├── providers │ │ └── mcp-provider.js │ └── tools │ ├── add-dependency.js │ ├── add-subtask.js │ ├── add-tag.js │ ├── add-task.js │ ├── analyze.js │ ├── clear-subtasks.js │ ├── complexity-report.js │ ├── copy-tag.js │ ├── delete-tag.js │ ├── expand-all.js │ ├── expand-task.js │ ├── fix-dependencies.js │ ├── generate.js │ ├── get-operation-status.js │ ├── get-task.js │ ├── get-tasks.js │ ├── index.js │ ├── initialize-project.js │ ├── list-tags.js │ ├── models.js │ ├── move-task.js │ ├── next-task.js │ ├── parse-prd.js │ ├── remove-dependency.js │ ├── remove-subtask.js │ ├── remove-task.js │ ├── rename-tag.js │ ├── research.js │ ├── response-language.js │ ├── rules.js │ ├── scope-down.js │ ├── scope-up.js │ ├── set-task-status.js │ ├── update-subtask.js │ ├── update-task.js │ ├── update.js │ ├── use-tag.js │ ├── utils.js │ └── validate-dependencies.js ├── mcp-test.js ├── output.json ├── package-lock.json ├── package.json ├── packages │ ├── build-config │ │ ├── CHANGELOG.md │ │ ├── package.json │ │ ├── src │ │ │ └── tsdown.base.ts │ │ └── tsconfig.json │ └── tm-core │ ├── .gitignore │ ├── CHANGELOG.md │ ├── docs │ │ └── listTasks-architecture.md │ ├── package.json │ ├── POC-STATUS.md │ ├── README.md │ ├── src │ │ ├── auth │ │ │ ├── auth-manager.test.ts │ │ │ ├── auth-manager.ts │ │ │ ├── config.ts │ │ │ ├── credential-store.test.ts │ │ │ ├── credential-store.ts │ │ │ ├── index.ts │ │ │ ├── oauth-service.ts │ │ │ ├── supabase-session-storage.ts │ │ │ └── types.ts │ │ ├── clients │ │ │ ├── index.ts │ │ │ └── supabase-client.ts │ │ ├── config │ │ │ ├── config-manager.spec.ts │ │ │ ├── config-manager.ts │ │ │ ├── index.ts │ │ │ └── services │ │ │ ├── config-loader.service.spec.ts │ │ │ ├── config-loader.service.ts │ │ │ ├── config-merger.service.spec.ts │ │ │ ├── config-merger.service.ts │ │ │ ├── config-persistence.service.spec.ts │ │ │ ├── config-persistence.service.ts │ │ │ ├── environment-config-provider.service.spec.ts │ │ │ ├── environment-config-provider.service.ts │ │ │ ├── index.ts │ │ │ ├── runtime-state-manager.service.spec.ts │ │ │ └── runtime-state-manager.service.ts │ │ ├── constants │ │ │ └── index.ts │ │ ├── entities │ │ │ └── task.entity.ts │ │ ├── errors │ │ │ ├── index.ts │ │ │ └── task-master-error.ts │ │ ├── executors │ │ │ ├── 
base-executor.ts │ │ │ ├── claude-executor.ts │ │ │ ├── executor-factory.ts │ │ │ ├── executor-service.ts │ │ │ ├── index.ts │ │ │ └── types.ts │ │ ├── index.ts │ │ ├── interfaces │ │ │ ├── ai-provider.interface.ts │ │ │ ├── configuration.interface.ts │ │ │ ├── index.ts │ │ │ └── storage.interface.ts │ │ ├── logger │ │ │ ├── factory.ts │ │ │ ├── index.ts │ │ │ └── logger.ts │ │ ├── mappers │ │ │ └── TaskMapper.ts │ │ ├── parser │ │ │ └── index.ts │ │ ├── providers │ │ │ ├── ai │ │ │ │ ├── base-provider.ts │ │ │ │ └── index.ts │ │ │ └── index.ts │ │ ├── repositories │ │ │ ├── supabase-task-repository.ts │ │ │ └── task-repository.interface.ts │ │ ├── services │ │ │ ├── index.ts │ │ │ ├── organization.service.ts │ │ │ ├── task-execution-service.ts │ │ │ └── task-service.ts │ │ ├── storage │ │ │ ├── api-storage.ts │ │ │ ├── file-storage │ │ │ │ ├── file-operations.ts │ │ │ │ ├── file-storage.ts │ │ │ │ ├── format-handler.ts │ │ │ │ ├── index.ts │ │ │ │ └── path-resolver.ts │ │ │ ├── index.ts │ │ │ └── storage-factory.ts │ │ ├── subpath-exports.test.ts │ │ ├── task-master-core.ts │ │ ├── types │ │ │ ├── database.types.ts │ │ │ ├── index.ts │ │ │ └── legacy.ts │ │ └── utils │ │ ├── id-generator.ts │ │ └── index.ts │ ├── tests │ │ ├── integration │ │ │ └── list-tasks.test.ts │ │ ├── mocks │ │ │ └── mock-provider.ts │ │ ├── setup.ts │ │ └── unit │ │ ├── base-provider.test.ts │ │ ├── executor.test.ts │ │ └── smoke.test.ts │ ├── tsconfig.json │ └── vitest.config.ts ├── README-task-master.md ├── README.md ├── scripts │ ├── dev.js │ ├── init.js │ ├── modules │ │ ├── ai-services-unified.js │ │ ├── commands.js │ │ ├── config-manager.js │ │ ├── dependency-manager.js │ │ ├── index.js │ │ ├── prompt-manager.js │ │ ├── supported-models.json │ │ ├── sync-readme.js │ │ ├── task-manager │ │ │ ├── add-subtask.js │ │ │ ├── add-task.js │ │ │ ├── analyze-task-complexity.js │ │ │ ├── clear-subtasks.js │ │ │ ├── expand-all-tasks.js │ │ │ ├── expand-task.js │ │ │ ├── find-next-task.js │ │ │ ├── generate-task-files.js │ │ │ ├── is-task-dependent.js │ │ │ ├── list-tasks.js │ │ │ ├── migrate.js │ │ │ ├── models.js │ │ │ ├── move-task.js │ │ │ ├── parse-prd │ │ │ │ ├── index.js │ │ │ │ ├── parse-prd-config.js │ │ │ │ ├── parse-prd-helpers.js │ │ │ │ ├── parse-prd-non-streaming.js │ │ │ │ ├── parse-prd-streaming.js │ │ │ │ └── parse-prd.js │ │ │ ├── remove-subtask.js │ │ │ ├── remove-task.js │ │ │ ├── research.js │ │ │ ├── response-language.js │ │ │ ├── scope-adjustment.js │ │ │ ├── set-task-status.js │ │ │ ├── tag-management.js │ │ │ ├── task-exists.js │ │ │ ├── update-single-task-status.js │ │ │ ├── update-subtask-by-id.js │ │ │ ├── update-task-by-id.js │ │ │ └── update-tasks.js │ │ ├── task-manager.js │ │ ├── ui.js │ │ ├── update-config-tokens.js │ │ ├── utils │ │ │ ├── contextGatherer.js │ │ │ ├── fuzzyTaskSearch.js │ │ │ └── git-utils.js │ │ └── utils.js │ ├── task-complexity-report.json │ ├── test-claude-errors.js │ └── test-claude.js ├── src │ ├── ai-providers │ │ ├── anthropic.js │ │ ├── azure.js │ │ ├── base-provider.js │ │ ├── bedrock.js │ │ ├── claude-code.js │ │ ├── custom-sdk │ │ │ ├── claude-code │ │ │ │ ├── errors.js │ │ │ │ ├── index.js │ │ │ │ ├── json-extractor.js │ │ │ │ ├── language-model.js │ │ │ │ ├── message-converter.js │ │ │ │ └── types.js │ │ │ └── grok-cli │ │ │ ├── errors.js │ │ │ ├── index.js │ │ │ ├── json-extractor.js │ │ │ ├── language-model.js │ │ │ ├── message-converter.js │ │ │ └── types.js │ │ ├── gemini-cli.js │ │ ├── google-vertex.js │ │ ├── google.js │ │ ├── grok-cli.js │ │ ├── 
groq.js │ │ ├── index.js │ │ ├── ollama.js │ │ ├── openai.js │ │ ├── openrouter.js │ │ ├── perplexity.js │ │ └── xai.js │ ├── constants │ │ ├── commands.js │ │ ├── paths.js │ │ ├── profiles.js │ │ ├── providers.js │ │ ├── rules-actions.js │ │ ├── task-priority.js │ │ └── task-status.js │ ├── profiles │ │ ├── amp.js │ │ ├── base-profile.js │ │ ├── claude.js │ │ ├── cline.js │ │ ├── codex.js │ │ ├── cursor.js │ │ ├── gemini.js │ │ ├── index.js │ │ ├── kilo.js │ │ ├── kiro.js │ │ ├── opencode.js │ │ ├── roo.js │ │ ├── trae.js │ │ ├── vscode.js │ │ ├── windsurf.js │ │ └── zed.js │ ├── progress │ │ ├── base-progress-tracker.js │ │ ├── cli-progress-factory.js │ │ ├── parse-prd-tracker.js │ │ ├── progress-tracker-builder.js │ │ └── tracker-ui.js │ ├── prompts │ │ ├── add-task.json │ │ ├── analyze-complexity.json │ │ ├── expand-task.json │ │ ├── parse-prd.json │ │ ├── README.md │ │ ├── research.json │ │ ├── schemas │ │ │ ├── parameter.schema.json │ │ │ ├── prompt-template.schema.json │ │ │ ├── README.md │ │ │ └── variant.schema.json │ │ ├── update-subtask.json │ │ ├── update-task.json │ │ └── update-tasks.json │ ├── provider-registry │ │ └── index.js │ ├── task-master.js │ ├── ui │ │ ├── confirm.js │ │ ├── indicators.js │ │ └── parse-prd.js │ └── utils │ ├── asset-resolver.js │ ├── create-mcp-config.js │ ├── format.js │ ├── getVersion.js │ ├── logger-utils.js │ ├── manage-gitignore.js │ ├── path-utils.js │ ├── profiles.js │ ├── rule-transformer.js │ ├── stream-parser.js │ └── timeout-manager.js ├── test-clean-tags.js ├── test-config-manager.js ├── test-prd.txt ├── test-tag-functions.js ├── test-version-check-full.js ├── test-version-check.js ├── tests │ ├── e2e │ │ ├── e2e_helpers.sh │ │ ├── parse_llm_output.cjs │ │ ├── run_e2e.sh │ │ ├── run_fallback_verification.sh │ │ └── test_llm_analysis.sh │ ├── fixture │ │ └── test-tasks.json │ ├── fixtures │ │ ├── .taskmasterconfig │ │ ├── sample-claude-response.js │ │ ├── sample-prd.txt │ │ └── sample-tasks.js │ ├── integration │ │ ├── claude-code-optional.test.js │ │ ├── cli │ │ │ ├── commands.test.js │ │ │ ├── complex-cross-tag-scenarios.test.js │ │ │ └── move-cross-tag.test.js │ │ ├── manage-gitignore.test.js │ │ ├── mcp-server │ │ │ └── direct-functions.test.js │ │ ├── move-task-cross-tag.integration.test.js │ │ ├── move-task-simple.integration.test.js │ │ └── profiles │ │ ├── amp-init-functionality.test.js │ │ ├── claude-init-functionality.test.js │ │ ├── cline-init-functionality.test.js │ │ ├── codex-init-functionality.test.js │ │ ├── cursor-init-functionality.test.js │ │ ├── gemini-init-functionality.test.js │ │ ├── opencode-init-functionality.test.js │ │ ├── roo-files-inclusion.test.js │ │ ├── roo-init-functionality.test.js │ │ ├── rules-files-inclusion.test.js │ │ ├── trae-init-functionality.test.js │ │ ├── vscode-init-functionality.test.js │ │ └── windsurf-init-functionality.test.js │ ├── manual │ │ ├── progress │ │ │ ├── parse-prd-analysis.js │ │ │ ├── test-parse-prd.js │ │ │ └── TESTING_GUIDE.md │ │ └── prompts │ │ ├── prompt-test.js │ │ └── README.md │ ├── README.md │ ├── setup.js │ └── unit │ ├── ai-providers │ │ ├── claude-code.test.js │ │ ├── custom-sdk │ │ │ └── claude-code │ │ │ └── language-model.test.js │ │ ├── gemini-cli.test.js │ │ ├── mcp-components.test.js │ │ └── openai.test.js │ ├── ai-services-unified.test.js │ ├── commands.test.js │ ├── config-manager.test.js │ ├── config-manager.test.mjs │ ├── dependency-manager.test.js │ ├── init.test.js │ ├── initialize-project.test.js │ ├── kebab-case-validation.test.js │ ├── 
manage-gitignore.test.js │ ├── mcp │ │ └── tools │ │ ├── __mocks__ │ │ │ └── move-task.js │ │ ├── add-task.test.js │ │ ├── analyze-complexity.test.js │ │ ├── expand-all.test.js │ │ ├── get-tasks.test.js │ │ ├── initialize-project.test.js │ │ ├── move-task-cross-tag-options.test.js │ │ ├── move-task-cross-tag.test.js │ │ └── remove-task.test.js │ ├── mcp-providers │ │ ├── mcp-components.test.js │ │ └── mcp-provider.test.js │ ├── parse-prd.test.js │ ├── profiles │ │ ├── amp-integration.test.js │ │ ├── claude-integration.test.js │ │ ├── cline-integration.test.js │ │ ├── codex-integration.test.js │ │ ├── cursor-integration.test.js │ │ ├── gemini-integration.test.js │ │ ├── kilo-integration.test.js │ │ ├── kiro-integration.test.js │ │ ├── mcp-config-validation.test.js │ │ ├── opencode-integration.test.js │ │ ├── profile-safety-check.test.js │ │ ├── roo-integration.test.js │ │ ├── rule-transformer-cline.test.js │ │ ├── rule-transformer-cursor.test.js │ │ ├── rule-transformer-gemini.test.js │ │ ├── rule-transformer-kilo.test.js │ │ ├── rule-transformer-kiro.test.js │ │ ├── rule-transformer-opencode.test.js │ │ ├── rule-transformer-roo.test.js │ │ ├── rule-transformer-trae.test.js │ │ ├── rule-transformer-vscode.test.js │ │ ├── rule-transformer-windsurf.test.js │ │ ├── rule-transformer-zed.test.js │ │ ├── rule-transformer.test.js │ │ ├── selective-profile-removal.test.js │ │ ├── subdirectory-support.test.js │ │ ├── trae-integration.test.js │ │ ├── vscode-integration.test.js │ │ ├── windsurf-integration.test.js │ │ └── zed-integration.test.js │ ├── progress │ │ └── base-progress-tracker.test.js │ ├── prompt-manager.test.js │ ├── prompts │ │ └── expand-task-prompt.test.js │ ├── providers │ │ └── provider-registry.test.js │ ├── scripts │ │ └── modules │ │ ├── commands │ │ │ ├── move-cross-tag.test.js │ │ │ └── README.md │ │ ├── dependency-manager │ │ │ ├── circular-dependencies.test.js │ │ │ ├── cross-tag-dependencies.test.js │ │ │ └── fix-dependencies-command.test.js │ │ ├── task-manager │ │ │ ├── add-subtask.test.js │ │ │ ├── add-task.test.js │ │ │ ├── analyze-task-complexity.test.js │ │ │ ├── clear-subtasks.test.js │ │ │ ├── complexity-report-tag-isolation.test.js │ │ │ ├── expand-all-tasks.test.js │ │ │ ├── expand-task.test.js │ │ │ ├── find-next-task.test.js │ │ │ ├── generate-task-files.test.js │ │ │ ├── list-tasks.test.js │ │ │ ├── move-task-cross-tag.test.js │ │ │ ├── move-task.test.js │ │ │ ├── parse-prd.test.js │ │ │ ├── remove-subtask.test.js │ │ │ ├── remove-task.test.js │ │ │ ├── research.test.js │ │ │ ├── scope-adjustment.test.js │ │ │ ├── set-task-status.test.js │ │ │ ├── setup.js │ │ │ ├── update-single-task-status.test.js │ │ │ ├── update-subtask-by-id.test.js │ │ │ ├── update-task-by-id.test.js │ │ │ └── update-tasks.test.js │ │ ├── ui │ │ │ └── cross-tag-error-display.test.js │ │ └── utils-tag-aware-paths.test.js │ ├── task-finder.test.js │ ├── task-manager │ │ ├── clear-subtasks.test.js │ │ ├── move-task.test.js │ │ ├── tag-boundary.test.js │ │ └── tag-management.test.js │ ├── task-master.test.js │ ├── ui │ │ └── indicators.test.js │ ├── ui.test.js │ ├── utils-strip-ansi.test.js │ └── utils.test.js ├── tsconfig.json ├── tsdown.config.ts └── turbo.json ``` # Files -------------------------------------------------------------------------------- /.nvmrc: -------------------------------------------------------------------------------- ``` 22 ``` -------------------------------------------------------------------------------- /.cursorignore: 
-------------------------------------------------------------------------------- ``` package-lock.json # Add directories or file patterns to ignore during indexing (e.g. foo/ or *.csv) node_modules/ ``` -------------------------------------------------------------------------------- /.mcp.json: -------------------------------------------------------------------------------- ```json { "mcpServers": { "task-master-ai": { "type": "stdio", "command": "npx", "args": ["-y", "task-master-ai"] } } } ``` -------------------------------------------------------------------------------- /.coderabbit.yaml: -------------------------------------------------------------------------------- ```yaml reviews: profile: assertive poem: false auto_review: base_branches: - rc - beta - alpha - production - next ``` -------------------------------------------------------------------------------- /.manypkg.json: -------------------------------------------------------------------------------- ```json { "$schema": "https://unpkg.com/@manypkg/[email protected]/schema.json", "defaultBranch": "main", "ignoredRules": ["ROOT_HAS_DEPENDENCIES", "INTERNAL_MISMATCH"], "ignoredPackages": ["@tm/core", "@tm/cli", "@tm/build-config"] } ``` -------------------------------------------------------------------------------- /tests/fixtures/.taskmasterconfig: -------------------------------------------------------------------------------- ``` { "models": { "main": { "provider": "openai", "modelId": "gpt-4o" }, "research": { "provider": "perplexity", "modelId": "sonar-pro" }, "fallback": { "provider": "anthropic", "modelId": "claude-3-haiku-20240307" } } } ``` -------------------------------------------------------------------------------- /apps/extension/.vscodeignore: -------------------------------------------------------------------------------- ``` # Ignore everything by default * # Only include specific essential files !package.json !README.md !CHANGELOG.md !LICENSE !icon.png !assets/** # Include only the built files we need (not source maps) !dist/extension.js !dist/index.js !dist/index.css # Exclude development documentation docs/extension-CI-setup.md docs/extension-DEV-guide.md # Exclude assets/.DS_Store assets/banner.png ``` -------------------------------------------------------------------------------- /.npmignore: -------------------------------------------------------------------------------- ``` # Development files .git .github .vscode .idea .DS_Store # Logs logs *.log npm-debug.log* dev-debug.log init-debug.log # Source files not needed in the package src test tests docs examples .editorconfig .eslintrc .prettierrc .travis.yml .gitlab-ci.yml tsconfig.json jest.config.js # Original project files tasks.json tasks/ prd.txt scripts/prd.txt .env # Temporary files .tmp .temp *.swp *.swo # Node modules node_modules/ # Debug files *.debug ``` -------------------------------------------------------------------------------- /.env.example: -------------------------------------------------------------------------------- ``` # API Keys (Required for using in any role i.e. 
main/research/fallback -- see `task-master models`) ANTHROPIC_API_KEY=YOUR_ANTHROPIC_KEY_HERE PERPLEXITY_API_KEY=YOUR_PERPLEXITY_KEY_HERE OPENAI_API_KEY=YOUR_OPENAI_KEY_HERE GOOGLE_API_KEY=YOUR_GOOGLE_KEY_HERE MISTRAL_API_KEY=YOUR_MISTRAL_KEY_HERE GROQ_API_KEY=YOUR_GROQ_KEY_HERE OPENROUTER_API_KEY=YOUR_OPENROUTER_KEY_HERE XAI_API_KEY=YOUR_XAI_KEY_HERE AZURE_OPENAI_API_KEY=YOUR_AZURE_KEY_HERE OLLAMA_API_KEY=YOUR_OLLAMA_API_KEY_HERE # Google Vertex AI Configuration VERTEX_PROJECT_ID=your-gcp-project-id VERTEX_LOCATION=us-central1 # Optional: Path to service account credentials JSON file (alternative to API key) GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-credentials.json ``` -------------------------------------------------------------------------------- /packages/tm-core/.gitignore: -------------------------------------------------------------------------------- ``` # Dependencies node_modules/ *.pnp .pnp.js # Build output dist/ build/ *.tsbuildinfo # Coverage reports coverage/ *.lcov # Runtime data pids *.pid *.seed *.pid.lock # Logs logs *.log npm-debug.log* yarn-debug.log* yarn-error.log* lerna-debug.log* # Diagnostic reports report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json # Runtime data pids *.pid *.seed *.pid.lock # Directory for instrumented libs generated by jscoverage/JSCover lib-cov # nyc test coverage .nyc_output # Dependency directories jspm_packages/ # Optional npm cache directory .npm # Optional eslint cache .eslintcache # Optional REPL history .node_repl_history # Output of 'npm pack' *.tgz # Yarn Integrity file .yarn-integrity # Environment variables .env .env.local .env.development.local .env.test.local .env.production.local # IDE .vscode/ .idea/ *.swp *.swo *~ # OS generated files .DS_Store .DS_Store? ._* .Spotlight-V100 .Trashes ehthumbs.db Thumbs.db ``` -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- ``` # Dependency directories node_modules/ jspm_packages/ # Environment variables .env .env.local .env.development.local .env.test.local .env.production.local # Cursor configuration -- might have ENV variables. Included by default # .cursor/mcp.json # Logs logs *.log npm-debug.log* yarn-debug.log* yarn-error.log* lerna-debug.log* # Coverage directory used by tools like istanbul coverage/ *.lcov # Jest cache .jest/ # Test temporary files and directories tests/temp/ tests/e2e/_runs/ tests/e2e/log/ tests/**/*.log tests/**/coverage/ # Test database files (if any) tests/**/*.db tests/**/*.sqlite tests/**/*.sqlite3 # Optional npm cache directory .npm # Optional eslint cache .eslintcache # Optional REPL history .node_repl_history # Output of 'npm pack' *.tgz # Yarn Integrity file .yarn-integrity # dotenv environment variables file .env.test # parcel-bundler cache .cache # Next.js build output .next # Nuxt.js build / generate output .nuxt dist # Mac files .DS_Store # Debug files *.debug init-debug.log dev-debug.log # NPMRC .npmrc # Added by Task Master AI # Editor directories and files .idea .vscode *.suo *.ntvs* *.njsproj *.sln *.sw? 
# VS Code extension test files .vscode-test/ apps/extension/.vscode-test/ # apps/extension apps/extension/vsix-build/ # turbo .turbo ``` -------------------------------------------------------------------------------- /assets/roocode/.roomodes: -------------------------------------------------------------------------------- ``` { "customModes": [ { "slug": "orchestrator", "name": "Orchestrator", "roleDefinition": "You are Roo, a strategic workflow orchestrator who coordinates complex tasks by delegating them to appropriate specialized modes. You have a comprehensive understanding of each mode's capabilities and limitations, also your own, and with the information given by the user and other modes in shared context you are enabled to effectively break down complex problems into discrete tasks that can be solved by different specialists using the `taskmaster-ai` system for task and context management.", "customInstructions": "Your role is to coordinate complex workflows by delegating tasks to specialized modes, using `taskmaster-ai` as the central hub for task definition, progress tracking, and context management. \nAs an orchestrator, you should:\nn1. When given a complex task, use contextual information (which gets updated frequently) to break it down into logical subtasks that can be delegated to appropriate specialized modes.\nn2. For each subtask, use the `new_task` tool to delegate. Choose the most appropriate mode for the subtask's specific goal and provide comprehensive instructions in the `message` parameter. \nThese instructions must include:\n* All necessary context from the parent task or previous subtasks required to complete the work.\n* A clearly defined scope, specifying exactly what the subtask should accomplish.\n* An explicit statement that the subtask should *only* perform the work outlined in these instructions and not deviate.\n* An instruction for the subtask to signal completion by using the `attempt_completion` tool, providing a thorough summary of the outcome in the `result` parameter, keeping in mind that this summary will be the source of truth used to further relay this information to other tasks and for you to keep track of what was completed on this project.\nn3. Track and manage the progress of all subtasks. When a subtask is completed, acknowledge its results and determine the next steps.\nn4. Help the user understand how the different subtasks fit together in the overall workflow. Provide clear reasoning about why you're delegating specific tasks to specific modes.\nn5. Ask clarifying questions when necessary to better understand how to break down complex tasks effectively. If it seems complex delegate to architect to accomplish that \nn6. Use subtasks to maintain clarity. If a request significantly shifts focus or requires a different expertise (mode), consider creating a subtask rather than overloading the current one.", "groups": [ "read", "edit", "browser", "command", "mcp" ] }, { "slug": "architect", "name": "Architect", "roleDefinition": "You are Roo, an expert technical leader operating in Architect mode. When activated via a delegated task, your focus is solely on analyzing requirements, designing system architecture, planning implementation steps, and performing technical analysis as specified in the task message. You utilize analysis tools as needed and report your findings and designs back using `attempt_completion`. You do not deviate from the delegated task scope.", "customInstructions": "1. 
Do some information gathering (for example using read_file or search_files) to get more context about the task.\n\n2. You should also ask the user clarifying questions to get a better understanding of the task.\n\n3. Once you've gained more context about the user's request, you should create a detailed plan for how to accomplish the task. Include Mermaid diagrams if they help make your plan clearer.\n\n4. Ask the user if they are pleased with this plan, or if they would like to make any changes. Think of this as a brainstorming session where you can discuss the task and plan the best way to accomplish it.\n\n5. Once the user confirms the plan, ask them if they'd like you to write it to a markdown file.\n\n6. Use the switch_mode tool to request that the user switch to another mode to implement the solution.", "groups": [ "read", ["edit", { "fileRegex": "\\.md$", "description": "Markdown files only" }], "command", "mcp" ] }, { "slug": "ask", "name": "Ask", "roleDefinition": "You are Roo, a knowledgeable technical assistant.\nWhen activated by another mode via a delegated task, your focus is to research, analyze, and provide clear, concise answers or explanations based *only* on the specific information requested in the delegation message. Use available tools for information gathering and report your findings back using `attempt_completion`.", "customInstructions": "You can analyze code, explain concepts, and access external resources. Make sure to answer the user's questions and don't rush to switch to implementing code. Include Mermaid diagrams if they help make your response clearer.", "groups": [ "read", "browser", "mcp" ] }, { "slug": "debug", "name": "Debug", "roleDefinition": "You are Roo, an expert software debugger specializing in systematic problem diagnosis and resolution. When activated by another mode, your task is to meticulously analyze the provided debugging request (potentially referencing Taskmaster tasks, logs, or metrics), use diagnostic tools as instructed to investigate the issue, identify the root cause, and report your findings and recommended next steps back via `attempt_completion`. You focus solely on diagnostics within the scope defined by the delegated task.", "customInstructions": "Reflect on 5-7 different possible sources of the problem, distill those down to 1-2 most likely sources, and then add logs to validate your assumptions. Explicitly ask the user to confirm the diagnosis before fixing the problem.", "groups": [ "read", "edit", "command", "mcp" ] }, { "slug": "test", "name": "Test", "roleDefinition": "You are Roo, an expert software tester. Your primary focus is executing testing tasks delegated to you by other modes.\nAnalyze the provided scope and context (often referencing a Taskmaster task ID and its `testStrategy`), develop test plans if needed, execute tests diligently, and report comprehensive results (pass/fail, bugs, coverage) back using `attempt_completion`. You operate strictly within the delegated task's boundaries.", "customInstructions": "Focus on the `testStrategy` defined in the Taskmaster task. Develop and execute test plans accordingly. 
Report results clearly, including pass/fail status, bug details, and coverage information.", "groups": [ "read", "command", "mcp" ] } ] } ``` -------------------------------------------------------------------------------- /assets/.windsurfrules: -------------------------------------------------------------------------------- ``` Below you will find a variety of important rules spanning: - the dev_workflow - the .windsurfrules document self-improvement workflow - the template to follow when modifying or adding new sections/rules to this document. --- ## DEV_WORKFLOW description: Guide for using meta-development script (scripts/dev.js) to manage task-driven development workflows globs: **/\* filesToApplyRule: **/\* alwaysApply: true --- - **Global CLI Commands** - Task Master now provides a global CLI through the `task-master` command - All functionality from `scripts/dev.js` is available through this interface - Install globally with `npm install -g claude-task-master` or use locally via `npx` - Use `task-master <command>` instead of `node scripts/dev.js <command>` - Examples: - `task-master list` instead of `node scripts/dev.js list` - `task-master next` instead of `node scripts/dev.js next` - `task-master expand --id=3` instead of `node scripts/dev.js expand --id=3` - All commands accept the same options as their script equivalents - The CLI provides additional commands like `task-master init` for project setup - **Development Workflow Process** - Start new projects by running `task-master init` or `node scripts/dev.js parse-prd --input=<prd-file.txt>` to generate initial tasks.json - Begin coding sessions with `task-master list` to see current tasks, status, and IDs - Analyze task complexity with `task-master analyze-complexity --research` before breaking down tasks - Select tasks based on dependencies (all marked 'done'), priority level, and ID order - Clarify tasks by checking task files in tasks/ directory or asking for user input - View specific task details using `task-master show <id>` to understand implementation requirements - Break down complex tasks using `task-master expand --id=<id>` with appropriate flags - Clear existing subtasks if needed using `task-master clear-subtasks --id=<id>` before regenerating - Implement code following task details, dependencies, and project standards - Verify tasks according to test strategies before marking as complete - Mark completed tasks with `task-master set-status --id=<id> --status=done` - Update dependent tasks when implementation differs from original plan - Generate task files with `task-master generate` after updating tasks.json - Maintain valid dependency structure with `task-master fix-dependencies` when needed - Respect dependency chains and task priorities when selecting work - Report progress regularly using the list command - **Task Complexity Analysis** - Run `node scripts/dev.js analyze-complexity --research` for comprehensive analysis - Review complexity report in scripts/task-complexity-report.json - Or use `node scripts/dev.js complexity-report` for a formatted, readable version of the report - Focus on tasks with highest complexity scores (8-10) for detailed breakdown - Use analysis results to determine appropriate subtask allocation - Note that reports are automatically used by the expand command - **Task Breakdown Process** - For tasks with complexity analysis, use `node scripts/dev.js expand --id=<id>` - Otherwise use `node scripts/dev.js expand --id=<id> --subtasks=<number>` - Add `--research` flag to leverage 
Perplexity AI for research-backed expansion - Use `--prompt="<context>"` to provide additional context when needed - Review and adjust generated subtasks as necessary - Use `--all` flag to expand multiple pending tasks at once - If subtasks need regeneration, clear them first with `clear-subtasks` command - **Implementation Drift Handling** - When implementation differs significantly from planned approach - When future tasks need modification due to current implementation choices - When new dependencies or requirements emerge - Call `node scripts/dev.js update --from=<futureTaskId> --prompt="<explanation>"` to update tasks.json - **Task Status Management** - Use 'pending' for tasks ready to be worked on - Use 'done' for completed and verified tasks - Use 'deferred' for postponed tasks - Add custom status values as needed for project-specific workflows - **Task File Format Reference** ``` # Task ID: <id> # Title: <title> # Status: <status> # Dependencies: <comma-separated list of dependency IDs> # Priority: <priority> # Description: <brief description> # Details: <detailed implementation notes> # Test Strategy: <verification approach> ``` - **Command Reference: parse-prd** - Legacy Syntax: `node scripts/dev.js parse-prd --input=<prd-file.txt>` - CLI Syntax: `task-master parse-prd --input=<prd-file.txt>` - Description: Parses a PRD document and generates a tasks.json file with structured tasks - Parameters: - `--input=<file>`: Path to the PRD text file (default: sample-prd.txt) - Example: `task-master parse-prd --input=requirements.txt` - Notes: Will overwrite existing tasks.json file. Use with caution. - **Command Reference: update** - Legacy Syntax: `node scripts/dev.js update --from=<id> --prompt="<prompt>"` - CLI Syntax: `task-master update --from=<id> --prompt="<prompt>"` - Description: Updates tasks with ID >= specified ID based on the provided prompt - Parameters: - `--from=<id>`: Task ID from which to start updating (required) - `--prompt="<text>"`: Explanation of changes or new context (required) - Example: `task-master update --from=4 --prompt="Now we are using Express instead of Fastify."` - Notes: Only updates tasks not marked as 'done'. Completed tasks remain unchanged. - **Command Reference: generate** - Legacy Syntax: `node scripts/dev.js generate` - CLI Syntax: `task-master generate` - Description: Generates individual task files based on tasks.json - Parameters: - `--file=<path>, -f`: Use alternative tasks.json file (default: '.taskmaster/tasks/tasks.json') - `--output=<dir>, -o`: Output directory (default: '.taskmaster/tasks') - Example: `task-master generate` - Notes: Overwrites existing task files. Creates output directory if needed. - **Command Reference: set-status** - Legacy Syntax: `node scripts/dev.js set-status --id=<id> --status=<status>` - CLI Syntax: `task-master set-status --id=<id> --status=<status>` - Description: Updates the status of a specific task in tasks.json - Parameters: - `--id=<id>`: ID of the task to update (required) - `--status=<status>`: New status value (required) - Example: `task-master set-status --id=3 --status=done` - Notes: Common values are 'done', 'pending', and 'deferred', but any string is accepted. 
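- **Example Session: From PRD to Done Task** - A minimal sketch of how the commands documented above (parse-prd, list, set-status, generate) chain together in one session; the PRD filename `requirements.txt` and the task ID `3` are hypothetical placeholders:

```bash
# Generate tasks.json from a PRD document (overwrites any existing tasks.json)
task-master parse-prd --input=requirements.txt

# Review current tasks, their IDs, and statuses
task-master list

# After implementing and verifying task 3, mark it done
task-master set-status --id=3 --status=done

# Regenerate the individual task files so they stay in sync with tasks.json
task-master generate
```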
- **Command Reference: list** - Legacy Syntax: `node scripts/dev.js list` - CLI Syntax: `task-master list` - Description: Lists all tasks in tasks.json with IDs, titles, and status - Parameters: - `--status=<status>, -s`: Filter by status - `--with-subtasks`: Show subtasks for each task - `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json') - Example: `task-master list` - Notes: Provides quick overview of project progress. Use at start of sessions. - **Command Reference: expand** - Legacy Syntax: `node scripts/dev.js expand --id=<id> [--num=<number>] [--research] [--prompt="<context>"]` - CLI Syntax: `task-master expand --id=<id> [--num=<number>] [--research] [--prompt="<context>"]` - Description: Expands a task with subtasks for detailed implementation - Parameters: - `--id=<id>`: ID of task to expand (required unless using --all) - `--all`: Expand all pending tasks, prioritized by complexity - `--num=<number>`: Number of subtasks to generate (default: from complexity report) - `--research`: Use Perplexity AI for research-backed generation - `--prompt="<text>"`: Additional context for subtask generation - `--force`: Regenerate subtasks even for tasks that already have them - Example: `task-master expand --id=3 --num=5 --research --prompt="Focus on security aspects"` - Notes: Uses complexity report recommendations if available. - **Command Reference: analyze-complexity** - Legacy Syntax: `node scripts/dev.js analyze-complexity [options]` - CLI Syntax: `task-master analyze-complexity [options]` - Description: Analyzes task complexity and generates expansion recommendations - Parameters: - `--output=<file>, -o`: Output file path (default: scripts/task-complexity-report.json) - `--model=<model>, -m`: Override LLM model to use - `--threshold=<number>, -t`: Minimum score for expansion recommendation (default: 5) - `--file=<path>, -f`: Use alternative tasks.json file - `--research, -r`: Use Perplexity AI for research-backed analysis - Example: `task-master analyze-complexity --research` - Notes: Report includes complexity scores, recommended subtasks, and tailored prompts. 
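- **Example Session: Complexity-Driven Expansion** - A minimal sketch of the analyze-then-expand flow described above; the task ID `3` and the prompt text are hypothetical placeholders:

```bash
# Score all tasks and write scripts/task-complexity-report.json
task-master analyze-complexity --research

# Display the report in a formatted, readable way
task-master complexity-report

# Expand a high-complexity task; subtask count defaults to the report's recommendation
task-master expand --id=3 --research --prompt="Focus on security aspects"
```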
- **Command Reference: clear-subtasks** - Legacy Syntax: `node scripts/dev.js clear-subtasks --id=<id>` - CLI Syntax: `task-master clear-subtasks --id=<id>` - Description: Removes subtasks from specified tasks to allow regeneration - Parameters: - `--id=<id>`: ID or comma-separated IDs of tasks to clear subtasks from - `--all`: Clear subtasks from all tasks - Examples: - `task-master clear-subtasks --id=3` - `task-master clear-subtasks --id=1,2,3` - `task-master clear-subtasks --all` - Notes: - Task files are automatically regenerated after clearing subtasks - Can be combined with expand command to immediately generate new subtasks - Works with both parent tasks and individual subtasks - **Task Structure Fields** - **id**: Unique identifier for the task (Example: `1`) - **title**: Brief, descriptive title (Example: `"Initialize Repo"`) - **description**: Concise summary of what the task involves (Example: `"Create a new repository, set up initial structure."`) - **status**: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`) - **dependencies**: IDs of prerequisite tasks (Example: `[1, 2]`) - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) - This helps quickly identify which prerequisite tasks are blocking work - **priority**: Importance level (Example: `"high"`, `"medium"`, `"low"`) - **details**: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`) - **testStrategy**: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`) - **subtasks**: List of smaller, more specific tasks (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`) - **Environment Variables Configuration** - **ANTHROPIC_API_KEY** (Required): Your Anthropic API key for Claude (Example: `ANTHROPIC_API_KEY=sk-ant-api03-...`) - **MODEL** (Default: `"claude-3-7-sonnet-20250219"`): Claude model to use (Example: `MODEL=claude-3-opus-20240229`) - **MAX_TOKENS** (Default: `"4000"`): Maximum tokens for responses (Example: `MAX_TOKENS=8000`) - **TEMPERATURE** (Default: `"0.7"`): Temperature for model responses (Example: `TEMPERATURE=0.5`) - **DEBUG** (Default: `"false"`): Enable debug logging (Example: `DEBUG=true`) - **TASKMASTER_LOG_LEVEL** (Default: `"info"`): Console output level (Example: `TASKMASTER_LOG_LEVEL=debug`) - **DEFAULT_SUBTASKS** (Default: `"3"`): Default subtask count (Example: `DEFAULT_SUBTASKS=5`) - **DEFAULT_PRIORITY** (Default: `"medium"`): Default priority (Example: `DEFAULT_PRIORITY=high`) - **PROJECT_NAME** (Default: `"MCP SaaS MVP"`): Project name in metadata (Example: `PROJECT_NAME=My Awesome Project`) - **PROJECT_VERSION** (Default: `"1.0.0"`): Version in metadata (Example: `PROJECT_VERSION=2.1.0`) - **PERPLEXITY_API_KEY**: For research-backed features (Example: `PERPLEXITY_API_KEY=pplx-...`) - **PERPLEXITY_MODEL** (Default: `"sonar-medium-online"`): Perplexity model (Example: `PERPLEXITY_MODEL=sonar-large-online`) - **Determining the Next Task** - Run `task-master next` to show the next task to work on - The next command identifies tasks with all dependencies satisfied - Tasks are prioritized by priority level, dependency count, and ID - The command shows comprehensive task information including: - Basic task details and description - Implementation details - Subtasks (if they exist) - Contextual suggested actions - Recommended before starting any new development work - Respects your project's dependency structure - Ensures tasks are completed in 
the appropriate sequence - Provides ready-to-use commands for common task actions - **Viewing Specific Task Details** - Run `task-master show <id>` or `task-master show --id=<id>` to view a specific task - Use dot notation for subtasks: `task-master show 1.2` (shows subtask 2 of task 1) - Displays comprehensive information similar to the next command, but for a specific task - For parent tasks, shows all subtasks and their current status - For subtasks, shows parent task information and relationship - Provides contextual suggested actions appropriate for the specific task - Useful for examining task details before implementation or checking status - **Managing Task Dependencies** - Use `task-master add-dependency --id=<id> --depends-on=<id>` to add a dependency - Use `task-master remove-dependency --id=<id> --depends-on=<id>` to remove a dependency - The system prevents circular dependencies and duplicate dependency entries - Dependencies are checked for existence before being added or removed - Task files are automatically regenerated after dependency changes - Dependencies are visualized with status indicators in task listings and files - **Command Reference: add-dependency** - Legacy Syntax: `node scripts/dev.js add-dependency --id=<id> --depends-on=<id>` - CLI Syntax: `task-master add-dependency --id=<id> --depends-on=<id>` - Description: Adds a dependency relationship between two tasks - Parameters: - `--id=<id>`: ID of task that will depend on another task (required) - `--depends-on=<id>`: ID of task that will become a dependency (required) - Example: `task-master add-dependency --id=22 --depends-on=21` - Notes: Prevents circular dependencies and duplicates; updates task files automatically - **Command Reference: remove-dependency** - Legacy Syntax: `node scripts/dev.js remove-dependency --id=<id> --depends-on=<id>` - CLI Syntax: `task-master remove-dependency --id=<id> --depends-on=<id>` - Description: Removes a dependency relationship between two tasks - Parameters: - `--id=<id>`: ID of task to remove dependency from (required) - `--depends-on=<id>`: ID of task to remove as a dependency (required) - Example: `task-master remove-dependency --id=22 --depends-on=21` - Notes: Checks if dependency actually exists; updates task files automatically - **Command Reference: validate-dependencies** - Legacy Syntax: `node scripts/dev.js validate-dependencies [options]` - CLI Syntax: `task-master validate-dependencies [options]` - Description: Checks for and identifies invalid dependencies in tasks.json and task files - Parameters: - `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json') - Example: `task-master validate-dependencies` - Notes: - Reports all non-existent dependencies and self-dependencies without modifying files - Provides detailed statistics on task dependency state - Use before fix-dependencies to audit your task structure - **Command Reference: fix-dependencies** - Legacy Syntax: `node scripts/dev.js fix-dependencies [options]` - CLI Syntax: `task-master fix-dependencies [options]` - Description: Finds and fixes all invalid dependencies in tasks.json and task files - Parameters: - `--file=<path>, -f`: Use alternative tasks.json file (default: 'tasks/tasks.json') - Example: `task-master fix-dependencies` - Notes: - Removes references to non-existent tasks and subtasks - Eliminates self-dependencies (tasks depending on themselves) - Regenerates task files with corrected dependencies - Provides detailed report of all fixes made - **Command Reference: 
complexity-report** - Legacy Syntax: `node scripts/dev.js complexity-report [options]` - CLI Syntax: `task-master complexity-report [options]` - Description: Displays the task complexity analysis report in a formatted, easy-to-read way - Parameters: - `--file=<path>, -f`: Path to the complexity report file (default: 'scripts/task-complexity-report.json') - Example: `task-master complexity-report` - Notes: - Shows tasks organized by complexity score with recommended actions - Provides complexity distribution statistics - Displays ready-to-use expansion commands for complex tasks - If no report exists, offers to generate one interactively - **Command Reference: add-task** - CLI Syntax: `task-master add-task [options]` - Description: Add a new task to tasks.json using AI - Parameters: - `--file=<path>, -f`: Path to the tasks file (default: 'tasks/tasks.json') - `--prompt=<text>, -p`: Description of the task to add (required) - `--dependencies=<ids>, -d`: Comma-separated list of task IDs this task depends on - `--priority=<priority>`: Task priority (high, medium, low) (default: 'medium') - Example: `task-master add-task --prompt="Create user authentication using Auth0"` - Notes: Uses AI to convert description into structured task with appropriate details - **Command Reference: init** - CLI Syntax: `task-master init` - Description: Initialize a new project with Task Master structure - Parameters: None - Example: `task-master init` - Notes: - Creates initial project structure with required files - Prompts for project settings if not provided - Merges with existing files when appropriate - Can be used to bootstrap a new Task Master project quickly - **Code Analysis & Refactoring Techniques** - **Top-Level Function Search** - Use grep pattern matching to find all exported functions across the codebase - Command: `grep -E "export (function|const) \w+|function \w+\(|const \w+ = \(|module\.exports" --include="*.js" -r ./` - Benefits: - Quickly identify all public API functions without reading implementation details - Compare functions between files during refactoring (e.g., monolithic to modular structure) - Verify all expected functions exist in refactored modules - Identify duplicate functionality or naming conflicts - Usage examples: - When migrating from `scripts/dev.js` to modular structure: `grep -E "function \w+\(" scripts/dev.js` - Check function exports in a directory: `grep -E "export (function|const)" scripts/modules/` - Find potential naming conflicts: `grep -E "function (get|set|create|update)\w+\(" -r ./` - Variations: - Add `-n` flag to include line numbers - Add `--include="*.ts"` to filter by file extension - Use with `| sort` to alphabetize results - Integration with refactoring workflow: - Start by mapping all functions in the source file - Create target module files based on function grouping - Verify all functions were properly migrated - Check for any unintentional duplications or omissions --- ## WINDSURF_RULES description: Guidelines for creating and maintaining Windsurf rules to ensure consistency and effectiveness. globs: .windsurfrules filesToApplyRule: .windsurfrules alwaysApply: true --- The below describes how you should be structuring new rule sections in this document. 
- **Required Rule Structure:** ```markdown --- description: Clear, one-line description of what the rule enforces globs: path/to/files/*.ext, other/path/**/* alwaysApply: boolean --- - **Main Points in Bold** - Sub-points with details - Examples and explanations ``` - **Section References:** - Use `ALL_CAPS_SECTION` to reference files - Example: `WINDSURF_RULES` - **Code Examples:** - Use language-specific code blocks ```typescript // ✅ DO: Show good examples const goodExample = true; // ❌ DON'T: Show anti-patterns const badExample = false; ``` - **Rule Content Guidelines:** - Start with high-level overview - Include specific, actionable requirements - Show examples of correct implementation - Reference existing code when possible - Keep rules DRY by referencing other rules - **Rule Maintenance:** - Update rules when new patterns emerge - Add examples from actual codebase - Remove outdated patterns - Cross-reference related rules - **Best Practices:** - Use bullet points for clarity - Keep descriptions concise - Include both DO and DON'T examples - Reference actual code over theoretical examples - Use consistent formatting across rules --- ## SELF_IMPROVE description: Guidelines for continuously improving this rules document based on emerging code patterns and best practices. globs: **/\* filesToApplyRule: **/\* alwaysApply: true --- - **Rule Improvement Triggers:** - New code patterns not covered by existing rules - Repeated similar implementations across files - Common error patterns that could be prevented - New libraries or tools being used consistently - Emerging best practices in the codebase - **Analysis Process:** - Compare new code with existing rules - Identify patterns that should be standardized - Look for references to external documentation - Check for consistent error handling patterns - Monitor test patterns and coverage - **Rule Updates:** - **Add New Rules When:** - A new technology/pattern is used in 3+ files - Common bugs could be prevented by a rule - Code reviews repeatedly mention the same feedback - New security or performance patterns emerge - **Modify Existing Rules When:** - Better examples exist in the codebase - Additional edge cases are discovered - Related rules have been updated - Implementation details have changed - **Example Pattern Recognition:** ```typescript // If you see repeated patterns like: const data = await prisma.user.findMany({ select: { id: true, email: true }, where: { status: "ACTIVE" }, }); // Consider adding a PRISMA section in the .windsurfrules: // - Standard select fields // - Common where conditions // - Performance optimization patterns ``` - **Rule Quality Checks:** - Rules should be actionable and specific - Examples should come from actual code - References should be up to date - Patterns should be consistently enforced - **Continuous Improvement:** - Monitor code review comments - Track common development questions - Update rules after major refactors - Add links to relevant documentation - Cross-reference related rules - **Rule Deprecation:** - Mark outdated patterns as deprecated - Remove rules that no longer apply - Update references to deprecated rules - Document migration paths for old patterns - **Documentation Updates:** - Keep examples synchronized with code - Update references to external docs - Maintain links between related rules - Document breaking changes Follow WINDSURF_RULES for proper rule formatting and structure of windsurf rule sections. 
``` -------------------------------------------------------------------------------- /apps/docs/README.md: -------------------------------------------------------------------------------- ```markdown # Task Master Documentation Welcome to the Task Master documentation. Use the links below to navigate to the information you need: ## Getting Started - [Configuration Guide](archive/configuration.md) - Set up environment variables and customize Task Master - [Tutorial](archive/tutorial.md) - Step-by-step guide to getting started with Task Master ## Reference - [Command Reference](archive/command-reference.md) - Complete list of all available commands - [Task Structure](archive/task-structure.md) - Understanding the task format and features ## Examples & Licensing - [Example Interactions](archive/examples.md) - Common Cursor AI interaction examples - [Licensing Information](archive/licensing.md) - Detailed information about the license ## Need More Help? If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master). ``` -------------------------------------------------------------------------------- /docs/README.md: -------------------------------------------------------------------------------- ```markdown # Task Master Documentation Welcome to the Task Master documentation. Use the links below to navigate to the information you need: ## Getting Started - [Configuration Guide](configuration.md) - Set up environment variables and customize Task Master - [Tutorial](tutorial.md) - Step-by-step guide to getting started with Task Master ## Reference - [Command Reference](command-reference.md) - Complete list of all available commands (including research and multi-task viewing) - [Task Structure](task-structure.md) - Understanding the task format and features - [Available Models](models.md) - Complete list of supported AI models and providers ## Examples & Licensing - [Example Interactions](examples.md) - Common Cursor AI interaction examples - [Licensing Information](licensing.md) - Detailed information about the license ## Need More Help? If you can't find what you're looking for in these docs, please check the [main README](../README.md) or visit our [GitHub repository](https://github.com/eyaltoledano/claude-task-master). ``` -------------------------------------------------------------------------------- /tests/README.md: -------------------------------------------------------------------------------- ```markdown # Task Master Test Suite This directory contains tests for the Task Master CLI. The tests are organized into different categories to ensure comprehensive test coverage. ## Test Structure - `unit/`: Unit tests for individual functions and components - `integration/`: Integration tests for testing interactions between components - `e2e/`: End-to-end tests for testing complete workflows - `fixtures/`: Test fixtures and sample data ## Running Tests To run all tests: ```bash npm test ``` To run tests in watch mode (for development): ```bash npm run test:watch ``` To run tests with coverage reporting: ```bash npm run test:coverage ``` ## Testing Approach ### Unit Tests Unit tests focus on testing individual functions and components in isolation. These tests should be fast and should mock external dependencies. ### Integration Tests Integration tests focus on testing interactions between components. These tests ensure that components work together correctly.
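For illustration, a unit test in this suite generally exercises one function in isolation and replaces its collaborators with `jest.fn()` stubs. The snippet below is a minimal sketch with hypothetical names (`formatTaskId`, `notify`), not code from the actual CLI:

```javascript
import { describe, expect, it, jest } from '@jest/globals';

// Hypothetical unit under test: formats a display label and reports it to a collaborator.
function formatTaskId(task, notify) {
  const label = `#${task.id} ${task.title}`;
  notify(label);
  return label;
}

describe('formatTaskId', () => {
  it('formats the label and notifies the collaborator exactly once', () => {
    const notify = jest.fn(); // stub collaborator instead of a real dependency

    const result = formatTaskId({ id: 3, title: 'Initialize Repo' }, notify);

    expect(result).toBe('#3 Initialize Repo');
    expect(notify).toHaveBeenCalledTimes(1);
    expect(notify).toHaveBeenCalledWith('#3 Initialize Repo');
  });
});
```

Passing the collaborator in and stubbing it keeps the unit fast and avoids touching real external dependencies, which is the isolation goal described above.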
### End-to-End Tests End-to-end tests focus on testing complete workflows from a user's perspective. These tests ensure that the CLI works correctly as a whole. ## Test Fixtures Test fixtures provide sample data for tests. Fixtures should be small, focused, and representative of real-world data. ## Mocking For external dependencies like file system operations and API calls, we use mocking to isolate the code being tested. - File system operations: Use `mock-fs` to mock the file system - API calls: Use Jest's mocking capabilities to mock API responses ## Test Coverage We aim for at least 80% test coverage for all code paths. Coverage reports can be generated with: ```bash npm run test:coverage ``` ``` -------------------------------------------------------------------------------- /tests/unit/scripts/modules/commands/README.md: -------------------------------------------------------------------------------- ```markdown # Mock System Documentation ## Overview The `move-cross-tag.test.js` file has been refactored to use a focused, maintainable mock system that addresses the brittleness and complexity of the original implementation. ## Key Improvements ### 1. **Focused Mocking** - **Before**: Mocked 20+ modules, many irrelevant to cross-tag functionality - **After**: Only mocks 5 core modules actually used in cross-tag moves ### 2. **Configuration-Driven Mocking** ```javascript const mockConfig = { core: { moveTasksBetweenTags: true, generateTaskFiles: true, readJSON: true, initTaskMaster: true, findProjectRoot: true } }; ``` ### 3. **Reusable Mock Factory** ```javascript function createMockFactory(config = mockConfig) { const mocks = {}; if (config.core?.moveTasksBetweenTags) { mocks.moveTasksBetweenTags = createMock('moveTasksBetweenTags'); } // ... other mocks return mocks; } ``` ## Mock Configuration ### Core Mocks (Required for Cross-Tag Functionality) - `moveTasksBetweenTags`: Core move functionality - `generateTaskFiles`: File generation after moves - `readJSON`: Reading task data - `initTaskMaster`: TaskMaster initialization - `findProjectRoot`: Project path resolution ### Optional Mocks - Console methods: `error`, `log`, `exit` - TaskMaster instance methods: `getCurrentTag`, `getTasksPath`, `getProjectRoot` ## Usage Examples ### Default Configuration ```javascript const mocks = setupMocks(); // Uses default mockConfig ``` ### Minimal Configuration ```javascript const minimalConfig = { core: { moveTasksBetweenTags: true, generateTaskFiles: true, readJSON: true } }; const mocks = setupMocks(minimalConfig); ``` ### Selective Mocking ```javascript const selectiveConfig = { core: { moveTasksBetweenTags: true, generateTaskFiles: false, // Disabled readJSON: true } }; const mocks = setupMocks(selectiveConfig); ``` ## Benefits 1. **Reduced Complexity**: From 150+ lines of mock setup to 50 lines 2. **Better Maintainability**: Clear configuration object shows dependencies 3. **Focused Testing**: Only mocks what's actually used 4. **Flexible Configuration**: Easy to enable/disable specific mocks 5. **Consistent Naming**: All mocks use `createMock()` with descriptive names ## Migration Guide ### For Other Test Files 1. Identify actual module dependencies 2. Create configuration object for required mocks 3. Use `createMockFactory()` and `setupMocks()` 4. Remove unnecessary mocks ### Example Migration ```javascript // Before: 20+ jest.mock() calls jest.mock('module1', () => ({ ... })); jest.mock('module2', () => ({ ... })); // ... 
many more // After: Configuration-driven const mockConfig = { core: { requiredFunction1: true, requiredFunction2: true } }; const mocks = setupMocks(mockConfig); ``` ## Testing the Mock System The test suite includes validation tests: - `should work with minimal mock configuration` - `should allow disabling specific mocks` These ensure the mock factory works correctly and can be configured flexibly. ``` -------------------------------------------------------------------------------- /.changeset/README.md: -------------------------------------------------------------------------------- ```markdown # Changesets This folder has been automatically generated by `@changesets/cli`, a build tool that works with multi-package repos or single-package repos to help version and publish code. Full documentation is available in the [Changesets repository](https://github.com/changesets/changesets). ## What are Changesets? Changesets are a way to track changes to packages in your repository. Each changeset: - Describes the changes you've made - Specifies the type of version bump needed (patch, minor, or major) - Connects these changes with release notes - Automates the versioning and publishing process ## How to Use Changesets in Task Master ### 2. Making Changes 1. Create a new branch for your changes 2. Make your code changes 3. Write tests and ensure all tests pass ### 3. Creating a Changeset After making changes, create a changeset by running: ```bash npx changeset ``` This will: - Walk you through a CLI to describe your changes - Ask you to select impact level (patch, minor, major) - Create a markdown file in the `.changeset` directory ### 4. Impact Level Guidelines When choosing the impact level for your changes: - **Patch**: Bug fixes and minor changes that don't affect how users interact with the system - Example: Fixing a typo in output text, optimizing code without changing behavior - **Minor**: New features or enhancements that don't break existing functionality - Example: Adding a new flag to an existing command, adding new task metadata fields - **Major**: Breaking changes that require users to update their usage - Example: Renaming a command, changing the format of the tasks.json file ### 5. Writing Good Changeset Descriptions Your changeset description should: - Be written for end-users, not developers - Clearly explain what changed and why - Include any migration steps or backward compatibility notes - Reference related issues or pull requests with `#issue-number` Examples: ```md # Good Added new `--research` flag to the `expand` command that uses Perplexity AI to provide research-backed task expansions. Requires PERPLEXITY_API_KEY environment variable. # Not Good Fixed stuff and added new flag ``` ### 6. Committing Your Changes Commit both your code changes and the generated changeset file: ```bash git add . git commit -m "Add feature X with changeset" git push ``` ### 7. Pull Request Process 1. Open a pull request 2. Ensure CI passes 3. Await code review 4. Once approved and merged, your changeset will be used during the next release ## Release Process (for Maintainers) When it's time to make a release: 1. Ensure all desired changesets are merged 2. Run `npx changeset version` to update package versions and changelog 3. Review and commit the changes 4. 
Run `npm publish` to publish to npm This can be automated through Github Actions ## Common Issues and Solutions - **Merge Conflicts in Changeset Files**: Resolve just like any other merge conflict - **Multiple Changes in One PR**: Create multiple changesets if changes affect different areas - **Accidentally Committed Without Changeset**: Create the changeset after the fact and commit it separately ## Additional Resources - [Changesets Documentation](https://github.com/changesets/changesets) - [Common Questions](https://github.com/changesets/changesets/blob/main/docs/common-questions.md) ``` -------------------------------------------------------------------------------- /packages/tm-core/README.md: -------------------------------------------------------------------------------- ```markdown # @task-master/tm-core Core library for Task Master AI - providing task management and orchestration capabilities with TypeScript support. ## Overview `tm-core` is the foundational library that powers Task Master AI's task management system. It provides a comprehensive set of tools for creating, managing, and orchestrating tasks with AI integration. ## Features - **TypeScript-first**: Built with full TypeScript support and strict type checking - **Dual Format**: Supports both ESM and CommonJS with automatic format detection - **Modular Architecture**: Clean separation of concerns with dedicated modules for different functionality - **AI Provider Integration**: Pluggable AI provider system for task generation and management - **Flexible Storage**: Abstracted storage layer supporting different persistence strategies - **Task Parsing**: Advanced parsing capabilities for various task definition formats - **Error Handling**: Comprehensive error system with specific error types - **Testing**: Complete test coverage with Jest and TypeScript support ## Installation ```bash npm install @task-master/tm-core ``` ## Usage ### Basic Usage ```typescript import { generateTaskId, PlaceholderTask } from '@task-master/tm-core'; // Generate a unique task ID const taskId = generateTaskId(); // Create a task (coming soon - full implementation) const task: PlaceholderTask = { id: taskId, title: 'My Task', status: 'pending', priority: 'medium' }; ``` ### Modular Imports You can import specific modules to reduce bundle size: ```typescript // Import types only import type { TaskId, TaskStatus } from '@task-master/tm-core/types'; // Import utilities import { generateTaskId, formatDate } from '@task-master/tm-core/utils'; // Import providers (AI providers coming soon) // import { AIProvider } from '@task-master/tm-core/providers'; // Import storage import { PlaceholderStorage } from '@task-master/tm-core/storage'; // Import parsers import { PlaceholderParser } from '@task-master/tm-core/parser'; // Import errors import { TmCoreError, TaskNotFoundError } from '@task-master/tm-core/errors'; ``` ## Architecture The library is organized into several key modules: - **types/**: TypeScript type definitions and interfaces - **providers/**: AI provider implementations for task generation - **storage/**: Storage adapters for different persistence strategies - **parser/**: Task parsing utilities for various formats - **utils/**: Common utility functions and helpers - **errors/**: Custom error classes and error handling ## Development ### Prerequisites - Node.js >= 18.0.0 - npm or yarn ### Setup ```bash # Install dependencies npm install # Build the library npm run build # Run tests npm test # Run tests with coverage npm run test:coverage # 
Lint code npm run lint # Format code npm run format ``` ### Scripts - `build`: Build the library for both ESM and CJS formats - `build:watch`: Build in watch mode for development - `test`: Run the test suite - `test:watch`: Run tests in watch mode - `test:coverage`: Run tests with coverage reporting - `lint`: Lint TypeScript files - `lint:fix`: Lint and auto-fix issues - `format`: Format code with Prettier - `format:check`: Check code formatting - `typecheck`: Type-check without emitting files - `clean`: Clean build artifacts - `dev`: Development mode with watch ## ESM and CommonJS Support This package supports both ESM and CommonJS formats automatically: ```javascript // ESM import { generateTaskId } from '@task-master/tm-core'; // CommonJS const { generateTaskId } = require('@task-master/tm-core'); ``` ## Roadmap This is the initial package structure. The following features are planned for implementation: ### Task 116: TypeScript Types - [ ] Complete type definitions for tasks, projects, and configurations - [ ] Zod schema validation - [ ] Generic type utilities ### Task 117: AI Provider System - [ ] Base provider interface - [ ] Anthropic Claude integration - [ ] OpenAI integration - [ ] Perplexity integration - [ ] Provider factory and registry ### Task 118: Storage Layer - [ ] File system storage adapter - [ ] Memory storage adapter - [ ] Storage interface and factory ### Task 119: Task Parser - [ ] PRD parser implementation - [ ] Markdown parser - [ ] JSON task format parser - [ ] Validation utilities ### Task 120: Utility Functions - [ ] Task ID generation - [ ] Date formatting - [ ] Validation helpers - [ ] File system utilities ### Task 121: Error Handling - [ ] Task-specific errors - [ ] Storage errors - [ ] Provider errors - [ ] Validation errors ### Task 122: Configuration System - [ ] Configuration schema - [ ] Default configurations - [ ] Environment variable support ### Task 123: Testing Infrastructure - [ ] Unit test coverage - [ ] Integration tests - [ ] Mock utilities ### Task 124: Documentation - [ ] API documentation - [ ] Usage examples - [ ] Migration guides ### Task 125: Package Finalization - [ ] Final testing and validation - [ ] Release preparation - [ ] CI/CD integration ## Implementation Checklist ### ✅ Task 115: Initialize tm-core Package Structure (COMPLETED) - [x] Create tm-core directory structure and base configuration files - [x] Configure build and test infrastructure - [x] Create barrel export files for all directories - [x] Add development tooling and documentation - [x] Validate package structure and prepare for development ### 🚧 Remaining Implementation Tasks - [ ] **Task 116**: TypeScript Types - Complete type definitions for tasks, projects, and configurations - [ ] **Task 117**: AI Provider System - Base provider interface and integrations - [ ] **Task 118**: Storage Layer - File system and memory storage adapters - [ ] **Task 119**: Task Parser - PRD, Markdown, and JSON parsers - [ ] **Task 120**: Utility Functions - Task ID generation, validation helpers - [ ] **Task 121**: Error Handling - Task-specific and validation errors - [ ] **Task 122**: Configuration System - Schema and environment support - [ ] **Task 123**: Testing Infrastructure - Complete unit and integration tests - [ ] **Task 124**: Documentation - API docs and usage examples - [ ] **Task 125**: Package Finalization - Release preparation and CI/CD ## Contributing This package is part of the Task Master AI project. Please refer to the main project's contributing guidelines. 
## License MIT - See the main project's LICENSE file for details. ## Support For questions and support, please refer to the main Task Master AI documentation. ``` -------------------------------------------------------------------------------- /apps/extension/README.md: -------------------------------------------------------------------------------- ```markdown # Official Taskmaster AI Extension Transform your AI-driven development workflow with a beautiful, interactive Kanban board directly in VS Code. Seamlessly manage tasks from [Taskmaster AI](https://github.com/eyaltoledano/claude-task-master) projects with real-time synchronization and intelligent task management.     ## 🎯 What is Taskmaster AI? Taskmaster AI is an intelligent task management system designed for AI-assisted development. It helps you break down complex projects into manageable tasks, track progress, and leverage AI to enhance your development workflow. ## ✨ Key Features ### 📊 **Interactive Kanban Board** - **Drag & Drop Interface** - Effortlessly move tasks between status columns - **Real-time Sync** - Changes instantly reflect in your Taskmaster project files - **Multiple Views** - Board view and detailed task sidebar - **Smart Columns** - Pending, In Progress, Review, Done, Deferred, and Cancelled  ### 🤖 **AI-Powered Features** - **Task Content Generation** - Regenerate task descriptions using AI - **Smart Task Updates** - Append findings and progress notes automatically - **MCP Integration** - Seamless connection to Taskmaster AI via Model Context Protocol - **Intelligent Caching** - Smart performance optimization with background refresh  ### 🚀 **Performance & Usability** - **Offline Support** - Continue working even when disconnected - **Auto-refresh** - Automatic polling for task changes with smart frequency - **VS Code Native** - Perfectly integrated with VS Code themes and UI - **Modern Interface** - Built with ShadCN UI components and Tailwind CSS ## 🛠️ Installation ### Prerequisites 1. **VS Code** 1.90.0 or higher 2. **Node.js** 18.0 or higher (for Taskmaster MCP server) ### Install the Extension 1. **From VS Code Marketplace:** - Click the **Install** button above - The extension will be automatically added to your VS Code instance ## 🚀 Quick Start ### 1. **Initialize Taskmaster Project** If you don't have a Taskmaster project yet: ```bash cd your-project npm i -g task-master-ai task-master init ``` ### 2. **Open Kanban Board** - **Command Palette** (Ctrl+Shift+P): `Taskmaster Kanban: Show Board` - **Or** the extension automatically activates when you have a `.taskmaster` folder in your workspace ### 3. **MCP Server Setup** The extension automatically handles the Taskmaster MCP server connection: - **No manual installation required** - The extension spawns the MCP server automatically - **Uses npx by default** - Automatically downloads Taskmaster AI when needed - **Configurable** - You can customize the MCP server command in settings if needed ### 4. 
**Start Managing Tasks** - **Drag tasks** between columns to change status - **Click tasks** to view detailed information - **Use AI features** to enhance task content - **Add subtasks** with the + button on parent tasks ## 📋 Usage Guide ### Task Management | Action | How to Do It | |--------|--------------| | **View Kanban Board** | `Ctrl/Cmd + Shift + P` → "Taskmaster: Show Board" | | **Change Task Status** | Drag task card to different column | | **View Task Details** | Click on any task card | | **Edit Task Content** | Click task → Use edit buttons in details panel | | **Add Subtasks** | Click the + button on parent task cards | | **Use AI Features** | Open task details → Click AI action buttons | ### Understanding Task Statuses - 📋 **Pending** - Tasks ready to be started - 🚀 **In Progress** - Currently being worked on - 👀 **Review** - Awaiting review or feedback - ✅ **Done** - Completed tasks - ⏸️ **Deferred** - Postponed for later ### **AI-Powered Task Management** The extension integrates seamlessly with Taskmaster AI via MCP to provide: - **Smart Task Generation** - AI creates detailed implementation plans - **Progress Tracking** - Append timestamped notes and findings - **Content Enhancement** - Regenerate task descriptions for clarity - **Research Integration** - Get up-to-date information for your tasks ## ⚙️ Configuration Access settings via **File → Preferences → Settings** and search for "Taskmaster": ### **MCP Connection Settings** - **MCP Server Command** - Path to task-master-ai executable (default: `npx`) - **MCP Server Args** - Arguments for the server command (default: `-y`, `task-master-ai`) - **Connection Timeout** - Server response timeout (default: 30s) - **Auto Refresh** - Enable automatic task updates (default: enabled) ### **UI Preferences** - **Theme** - Auto, Light, or Dark mode - **Show Completed Tasks** - Display done tasks in board (default: enabled) - **Task Display Limit** - Maximum tasks to show (default: 100) ### **Performance Options** - **Cache Duration** - How long to cache task data (default: 5s) - **Concurrent Requests** - Max simultaneous API calls (default: 5) ## 🔧 Troubleshooting ### **Extension Not Loading** 1. Ensure Node.js 18+ is installed 2. Check workspace contains `.taskmaster` folder 3. Restart VS Code 4. Check Output panel (View → Output → Taskmaster Kanban) ### **MCP Connection Issues** 1. **Command not found**: Ensure Node.js and npx are in your PATH 2. **Timeout errors**: Increase timeout in settings 3. **Permission errors**: Check Node.js permissions 4. **Network issues**: Verify internet connection for npx downloads ### **Tasks Not Updating** 1. Check MCP connection status in status bar 2. Verify `.taskmaster/tasks/tasks.json` exists 3. Try manual refresh: `Taskmaster Kanban: Check Connection` 4. Review error logs in Output panel ### **Performance Issues** 1. Reduce task display limit in settings 2. Increase cache duration 3. Disable auto-refresh if needed 4. 
Close other VS Code extensions temporarily ## 🆘 Support & Resources ### **Getting Help** - 📖 **Documentation**: [Taskmaster AI Docs](https://github.com/eyaltoledano/claude-task-master) - 🐛 **Report Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues) - 💬 **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions) ## 🎯 Tips for Best Results ### **Project Organization** - Use descriptive task titles - Add detailed implementation notes - Set appropriate task dependencies - Leverage AI features for complex tasks ### **Workflow Optimization** - Review task details before starting work - Use subtasks for complex features - Update task status as you progress - Add findings and learnings to task notes ### **Collaboration** - Keep task descriptions updated - Use consistent status conventions - Document decisions in task details - Share knowledge through task notes --- ## 🏆 Why Taskmaster Kanban? ✅ **Visual workflow management** for your Taskmaster projects ✅ **AI-powered task enhancement** built right in ✅ **Real-time synchronization** keeps everything in sync ✅ **Native VS Code integration** feels like part of the editor ✅ **Free and open source** with active development **Transform your development workflow today!** 🚀 --- *Originally Made with ❤️ by [David Maliglowka](https://x.com/DavidMaliglowka)* ## Support This is an open-source project maintained in my spare time. While I strive to fix bugs and improve the extension, support is provided on a best-effort basis. Feel free to: - Report issues on [GitHub](https://github.com/eyaltoledano/claude-task-master/issues) - Submit pull requests with improvements - Fork the project if you need specific modifications ## Disclaimer This extension is provided "as is" without any warranties. Use at your own risk. The author is not responsible for any issues, data loss, or damages that may occur from using this extension. Please backup your work regularly and test thoroughly before using in important projects. ``` -------------------------------------------------------------------------------- /tests/manual/prompts/README.md: -------------------------------------------------------------------------------- ```markdown # Task Master Prompt Template Testing This directory contains comprehensive testing tools for Task Master's centralized prompt template system. ## Interactive Menu System (Recommended) The test script now includes an interactive menu system for easy testing and exploration: ```bash node prompt-test.js ``` ### Menu Features **Main Menu Options:** 1. **Test specific prompt template** - Choose individual templates and variants 2. **Run all tests** - Execute the full test suite 3. **Toggle full prompt display** - Switch between preview and full prompt output (default: ON) 4. **Generate HTML report** - Create a professional HTML report and open in browser 5. **Exit** - Close the application **Template Selection:** - Choose from 8 available prompt templates - See available variants for each template - Test individual variants or all variants at once **Interactive Flow:** - Select template → Select variant → View results → Choose next action - Easy navigation back to previous menus - Color-coded output for better readability ## Batch Mode Options ### Run All Tests (Batch) ```bash node prompt-test.js --batch ``` Runs all tests non-interactively and exits with appropriate status code.
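The exit code makes the batch run easy to wire into other tooling. As a minimal sketch (the wrapper script below is hypothetical; the 0/1 contract is documented under "Exit Codes (Batch Mode)" later in this README):

```javascript
// check-prompts.js — hypothetical wrapper around the batch run
import { spawnSync } from 'node:child_process';

const result = spawnSync('node', ['tests/manual/prompts/prompt-test.js', '--batch'], {
  stdio: 'inherit' // stream test output directly to the console
});

// Batch mode exits 0 when all tests pass and 1 when any test fails.
if (result.status !== 0) {
  console.error('Prompt template tests failed');
  process.exit(result.status ?? 1);
}
console.log('All prompt template tests passed');
```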
### Generate HTML Report ```bash node prompt-test.js --html ``` Generates a professional HTML report with all test results and full prompt content. The report includes: - **Test summary dashboard** with pass/fail statistics at the top - **Compact single-line format** - Each template shows: `template: [variant ✓] [variant ✗] - x/y passed` - **Individual pass/fail badges** - Visual ✓/✗ indicators for each variant test result - **Template status summary** - Shows x/y passed count at the end of each line - **Separate error condition section** - Tests for missing parameters, invalid variants, nonexistent templates - **Alphabetically sorted** - Templates and variants are sorted for predictable ordering - **Space-efficient layout** - Optimized for developer review with minimal vertical space - **Two-section layout**: 1. **Prompt Templates** - Real template variants testing 2. **Error Condition Tests** - Error handling validation (empty-prompt, missing-parameters, invalid-variant, etc.) 3. **Detailed Content** - Full system and user prompts below - **Full prompt content** displayed without scrolling (no truncation) - **Professional styling** with clear visual hierarchy and responsive design - **Automatic browser opening** (cross-platform) Reports are saved to `tests/manual/prompts/output/` with timestamps. ### Legacy Full Test Mode ```bash node prompt-test.js --full ``` Runs all tests and shows sample full prompts for verification. ### Help ```bash node prompt-test.js --help ``` Shows usage information and examples. ## Test Coverage The comprehensive test suite covers: ## Test Coverage Summary **Total Test Cases: 23** (18 functional + 5 error condition tests) ### Templates with Research Conditional Content These templates have `useResearch` or `research` parameters that modify prompt content: - **add-task** (default, research variants) - **analyze-complexity** (default, research variants) - **parse-prd** (default, research variants) - **update-subtask** (default, research variants) - **update-task** (default, append, research variants) ### Templates with Legitimate Separate Variants These templates have genuinely different prompts for different use cases: - **expand-task** (default, research, complexity-report variants) - Three sophisticated strategies with advanced parameter support - **research** (low, medium, high detail level variants) ### Single Variant Templates These templates only have one variant because research mode only changes AI role, not prompt content: - **update-tasks** (default variant only) ### Prompt Templates (8 total) - **add-task** (default, research variants) - **expand-task** (default, research, complexity-report variants) - Enhanced with sophisticated parameter support and context handling - **analyze-complexity** (default variant) - **research** (low, medium, high detail variants) - **parse-prd** (default variant) - Enhanced with sophisticated numTasks conditional logic - **update-subtask** (default variant with `useResearch` conditional content) - **update-task** (default, append variants; research uses `useResearch` conditional content) - **update-tasks** (default variant with `useResearch` conditional content) ### Test Scenarios (27 total) - 16 valid template/variant combinations (including enhanced expand-task with new parameter support) - 4 conditional logic validation tests (testing new gt/gte helper functions) - 7 error condition tests (nonexistent variants, templates, missing params, invalid detail levels) ### Validation - Parameter schema compliance - Template 
loading success/failure - Error handling for invalid inputs - Realistic test data for each template type - **Output content validation** for conditional logic (NEW) #### Conditional Logic Testing (NEW) The test suite now includes specific validation for the new `gt` (greater than) and `gte` (greater than or equal) helper functions: **Helper Function Tests:** - `conditional-zero-tasks`: Validates `numTasks = 0` produces "an appropriate number of" text - `conditional-positive-tasks`: Validates `numTasks = 5` produces "approximately 5" text - `conditional-zero-subtasks`: Validates `subtaskCount = 0` produces "an appropriate number of" text - `conditional-positive-subtasks`: Validates `subtaskCount = 3` produces "exactly 3" text These tests use the new `validateOutput` function to verify that conditional template logic produces the expected rendered content, ensuring our helper functions work correctly beyond just successful template loading. ## Output Modes ### Preview Mode (Default) Shows truncated prompts (200 characters) for quick overview: ``` System Prompt Preview: You are an AI assistant helping with task management... User Prompt Preview: Create a new task based on the following description... Tip: Use option 3 in main menu to toggle full prompt display ``` ### Full Mode Shows complete system and user prompts for detailed verification: ``` System Prompt: [Complete system prompt content] User Prompt: [Complete user prompt content] ``` ## Test Data Each template uses realistic test data: - **Tasks**: Complete task objects with proper IDs, titles, descriptions - **Context**: Simulated project context and gathered information - **Parameters**: Properly formatted parameters matching each template's schema - **Research**: Sample queries and detail levels for research prompts ## Error Testing The test suite includes error condition validation: - Nonexistent template variants - Invalid template names - Missing required parameters - Malformed parameter data ## Exit Codes (Batch Mode) - **0**: All tests passed - **1**: One or more tests failed ## Use Cases ### Development Workflow 1. **Template Development**: Test new templates interactively 2. **Variant Testing**: Verify all variants work correctly 3. **Parameter Validation**: Ensure parameter schemas are working 4. **Regression Testing**: Run batch tests after changes ### Manual Verification 1. **Prompt Review**: Human verification of generated prompts 2. **Parameter Exploration**: See how different parameters affect output 3. **Context Testing**: Verify context inclusion and formatting ### CI/CD Integration ```bash # In CI pipeline node tests/manual/prompts/prompt-test.js --batch ``` The interactive menu makes it easy to explore and verify prompt templates during development, while batch mode enables automated testing in CI/CD pipelines. ## 🎯 Purpose - **Verify all 8 prompt templates** work correctly with the prompt manager - **Test multiple variants** for each prompt (default, research, complexity-report, etc.) 
- **Show full generated prompts** for human verification and debugging - **Test error conditions** and parameter validation - **Provide realistic sample data** for each prompt type ## 📁 Files - `prompt-test.js` - Main test script - `output/` - Generated HTML reports (when using --html flag or menu option) ## 🎯 Use Cases ### For Developers - **Verify prompt changes** don't break existing functionality - **Test new prompt variants** before deployment - **Debug prompt generation** issues with full output - **Validate parameter schemas** work correctly ### For QA - **Regression testing** after prompt template changes - **Verification of prompt outputs** match expectations - **Parameter validation testing** for robustness - **Cross-variant consistency** checking ### For Documentation - **Reference for prompt usage** with realistic examples - **Parameter requirements** demonstration - **Variant differences** visualization - **Expected output formats** examples ## ⚠️ Important Notes 1. **Real Prompt Manager**: This test uses the actual prompt manager, not mocks 2. **Parameter Accuracy**: All parameters match the exact schema requirements of each prompt template 3. **Variant Coverage**: Tests all documented variants for each prompt type 4. **Sample Data**: Uses realistic project scenarios, not dummy data 5. **Exit Codes**: Returns exit code 1 if any tests fail, 0 if all pass ## 🔄 Maintenance When adding new prompt templates or variants: 1. Add sample data to the `sampleData` object 2. Include realistic parameters matching the prompt's schema 3. Test all documented variants 4. Verify with the `--full` flag that prompts generate correctly 5. Update this README with new coverage information This test suite should be run whenever: - Prompt templates are modified - New variants are added - Parameter schemas change - Prompt manager logic is updated - Before major releases ``` -------------------------------------------------------------------------------- /src/prompts/schemas/README.md: -------------------------------------------------------------------------------- ```markdown # Task Master JSON Schemas This directory contains JSON schemas for validating Task Master prompt templates. These schemas provide IDE support, validation, and better developer experience when working with prompt templates. 
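Outside the IDE, the same schemas can also be checked programmatically with AJV, the validator these schemas rely on (see the Troubleshooting section below). The following is only an illustrative sketch — the standalone script and the template path are assumptions, not something shipped with the package:

```javascript
// validate-template.js — illustrative one-off check, not part of Task Master itself
import Ajv from 'ajv';
import addFormats from 'ajv-formats';
import { readFileSync } from 'node:fs';

const readJson = (path) => JSON.parse(readFileSync(path, 'utf8'));

const ajv = new Ajv({ allErrors: true, strict: false });
addFormats(ajv);

// Register the referenced sub-schemas so $ref lookups can resolve.
ajv.addSchema(readJson('src/prompts/schemas/parameter.schema.json'));
ajv.addSchema(readJson('src/prompts/schemas/variant.schema.json'));

const validate = ajv.compile(readJson('src/prompts/schemas/prompt-template.schema.json'));
const template = readJson('src/prompts/add-task.json'); // any template file

if (!validate(template)) {
  // Errors follow the AJV shape shown under "Debugging Validation Errors" below.
  console.error(validate.errors);
  process.exit(1);
}
console.log(`${template.id} is valid`);
```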
## Overview The schema system provides: - **Structural Validation**: Ensures all required fields and proper JSON structure - **Type Safety**: Validates parameter types and value constraints - **IDE Integration**: IntelliSense and auto-completion in VS Code - **Development Safety**: Catches errors before runtime - **Documentation**: Self-documenting templates through schema definitions ## Schema Files ### `prompt-template.schema.json` (Main Schema) **Version**: 1.0.0 **Purpose**: Main schema for Task Master prompt template files **Validates**: - Template metadata (id, version, description) - Parameter definitions with comprehensive type validation - Prompt variants with conditional logic - Cross-references between parameters and template variables - Semantic versioning compliance - Handlebars template syntax **Required Fields**: - `id`: Unique template identifier (kebab-case) - `version`: Semantic version (e.g., "1.0.0") - `description`: Human-readable description - `prompts.default`: Default prompt variant **Optional Fields**: - `metadata`: Additional template information - `parameters`: Parameter definitions for template variables - `prompts.*`: Additional prompt variants ### `parameter.schema.json` (Parameter Schema) **Version**: 1.0.0 **Purpose**: Reusable schema for individual prompt parameters **Supports**: - **Type Validation**: `string`, `number`, `boolean`, `array`, `object` - **Constraints**: Required/optional parameters, default values - **String Validation**: Pattern matching (regex), enum constraints - **Numeric Validation**: Minimum/maximum values, integer constraints - **Array Validation**: Item types, minimum/maximum length - **Object Validation**: Property definitions and required fields **Parameter Properties**: ```json { "type": "string|number|boolean|array|object", "required": true|false, "default": "any value matching type", "description": "Parameter documentation", "enum": ["option1", "option2"], "pattern": "^regex$", "minimum": 0, "maximum": 100, "minLength": 1, "maxLength": 255, "items": { "type": "string" }, "properties": { "key": { "type": "string" } } } ``` ### `variant.schema.json` (Variant Schema) **Version**: 1.0.0 **Purpose**: Schema for prompt template variants **Validates**: - System and user prompt templates - Conditional expressions for variant selection - Variable placeholders using Handlebars syntax - Variant metadata and descriptions **Variant Structure**: ```json { "condition": "JavaScript expression", "system": "System prompt template", "user": "User prompt template", "metadata": { "description": "When to use this variant" } } ``` ## Schema Validation Rules ### Template ID Validation - **Pattern**: `^[a-z][a-z0-9-]*[a-z0-9]$` - **Format**: Kebab-case, alphanumeric with hyphens - **Examples**: - ✅ `add-task`, `parse-prd`, `analyze-complexity` - ❌ `AddTask`, `add_task`, `-invalid-`, `task-` ### Version Validation - **Pattern**: Semantic versioning (semver) - **Format**: `MAJOR.MINOR.PATCH` - **Examples**: - ✅ `1.0.0`, `2.1.3`, `10.0.0` - ❌ `1.0`, `v1.0.0`, `1.0.0-beta` ### Parameter Type Validation - **String**: Text values with optional pattern/enum constraints - **Number**: Numeric values with optional min/max constraints - **Boolean**: True/false values - **Array**: Lists with optional item type validation - **Object**: Complex structures with property definitions ### Template Variable Validation - **Handlebars Syntax**: `{{variable}}`, `{{#if condition}}`, `{{#each array}}` - **Parameter References**: All template variables must have corresponding 
parameters - **Nested Access**: Support for `{{object.property}}` notation - **Special Variables**: `{{@index}}`, `{{@first}}`, `{{@last}}` in loops ## IDE Integration ### VS Code Setup The VS Code profile automatically configures schema validation: ```json { "json.schemas": [ { "fileMatch": [ "src/prompts/**/*.json", ".taskmaster/prompts/**/*.json", "prompts/**/*.json" ], "url": "./src/prompts/schemas/prompt-template.schema.json" } ] } ``` **Features Provided**: - **Auto-completion**: IntelliSense for all schema properties - **Real-time Validation**: Immediate error highlighting - **Hover Documentation**: Parameter descriptions on hover - **Error Messages**: Detailed validation error explanations ### Other IDEs For other development environments: **Schema URLs**: - **Local Development**: `./src/prompts/schemas/prompt-template.schema.json` - **GitHub Reference**: `https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/prompt-template.schema.json` **File Patterns**: - `src/prompts/**/*.json` - `.taskmaster/prompts/**/*.json` - `prompts/**/*.json` ## Validation Examples ### Valid Template Example ```json { "id": "example-prompt", "version": "1.0.0", "description": "Example prompt template with comprehensive validation", "metadata": { "author": "Task Master Team", "category": "task", "tags": ["example", "validation"] }, "parameters": { "taskDescription": { "type": "string", "description": "Description of the task to perform", "required": true, "minLength": 5, "maxLength": 500 }, "priority": { "type": "string", "description": "Task priority level", "required": false, "enum": ["high", "medium", "low"], "default": "medium" }, "maxTokens": { "type": "number", "description": "Maximum tokens for response", "required": false, "minimum": 100, "maximum": 4000, "default": 1000 }, "useResearch": { "type": "boolean", "description": "Whether to include research context", "required": false, "default": false }, "tags": { "type": "array", "description": "Task tags for categorization", "required": false, "items": { "type": "string", "pattern": "^[a-z][a-z0-9-]*$" } } }, "prompts": { "default": { "system": "You are a helpful AI assistant that creates tasks with {{priority}} priority.", "user": "Create a task: {{taskDescription}}{{#if tags}}\nTags: {{#each tags}}{{this}}{{#unless @last}}, {{/unless}}{{/each}}{{/if}}" }, "research": { "condition": "useResearch === true", "system": "You are a research-focused AI assistant with access to current information.", "user": "Research and create a task: {{taskDescription}}" } } } ``` ### Common Validation Errors **Missing Required Fields**: ```json // ❌ Error: Missing required 'id' field { "version": "1.0.0", "description": "Missing ID" } ``` **Invalid ID Format**: ```json // ❌ Error: ID must be kebab-case { "id": "InvalidID_Format", "version": "1.0.0" } ``` **Parameter Type Mismatch**: ```json // ❌ Error: Parameter type doesn't match usage { "parameters": { "count": { "type": "string" } }, "prompts": { "default": { "user": "Process {{count}} items" // Should be number for counting } } } ``` **Invalid Condition Syntax**: ```json // ❌ Error: Invalid JavaScript in condition { "prompts": { "variant": { "condition": "useResearch = true", // Should be === "user": "Research prompt" } } } ``` ## Development Workflow ### Creating New Templates 1. **Start with Schema**: Use VS Code with schema validation enabled 2. **Define Structure**: Begin with required fields (id, version, description) 3. 
**Add Parameters**: Define all template variables with proper types 4. **Create Prompts**: Write system and user prompts with template variables 5. **Test Validation**: Ensure template validates without errors 6. **Add Variants**: Create additional variants if needed 7. **Document Usage**: Update the main README with template details ### Modifying Existing Templates 1. **Check Current Version**: Note the current version number 2. **Assess Changes**: Determine if changes are breaking or non-breaking 3. **Update Version**: Increment version following semantic versioning 4. **Maintain Compatibility**: Avoid breaking existing parameter contracts 5. **Test Thoroughly**: Verify all existing code still works 6. **Update Documentation**: Reflect changes in README files ### Schema Evolution When updating schemas themselves: 1. **Backward Compatibility**: Ensure existing templates remain valid 2. **Version Increment**: Update schema version in `$id` and `version` fields 3. **Test Migration**: Validate all existing templates against new schema 4. **Document Changes**: Update this README with schema changes 5. **Coordinate Release**: Ensure schema and template changes are synchronized ## Advanced Validation Features ### Cross-Reference Validation The schema validates that: - All template variables have corresponding parameters - Parameter types match their usage in templates - Variant conditions reference valid parameters - Nested property access is properly defined ### Conditional Validation - **Dynamic Schemas**: Different validation rules based on parameter values - **Variant Conditions**: JavaScript expression validation - **Template Syntax**: Handlebars syntax validation - **Parameter Dependencies**: Required parameters based on other parameters ### Custom Validation Rules The schema includes custom validation for: - **Semantic Versioning**: Proper version format validation - **Template Variables**: Handlebars syntax and parameter references - **Condition Expressions**: JavaScript expression syntax validation - **File Patterns**: Consistent naming conventions ## Performance Considerations ### Schema Loading - **Caching**: Schemas are loaded once and cached - **Lazy Loading**: Validation only occurs when templates are accessed - **Memory Efficiency**: Shared schema instances across templates ### Validation Performance - **Fast Validation**: AJV provides optimized validation - **Error Batching**: Multiple errors reported in single validation pass - **Minimal Overhead**: Validation adds minimal runtime cost ### Development Impact - **IDE Responsiveness**: Real-time validation without performance impact - **Build Time**: Schema validation during development, not production - **Testing Speed**: Fast validation during test execution ## Troubleshooting ### Common Schema Issues **Schema Not Loading**: - Check file paths in VS Code settings - Verify schema files exist and are valid JSON - Restart VS Code if changes aren't recognized **Validation Not Working**: - Ensure `ajv` and `ajv-formats` dependencies are installed - Check for JSON syntax errors in templates - Verify schema file paths are correct **Performance Issues**: - Check for circular references in schemas - Verify schema caching is working - Monitor validation frequency in development ### Debugging Validation Errors **Understanding Error Messages**: ```javascript // Example error output { "instancePath": "/parameters/priority/type", "schemaPath": "#/properties/parameters/additionalProperties/properties/type/enum", "keyword": "enum", "params": { 
"allowedValues": ["string", "number", "boolean", "array", "object"] }, "message": "must be equal to one of the allowed values" } ``` **Common Error Patterns**: - `instancePath`: Shows where in the template the error occurred - `schemaPath`: Shows which schema rule was violated - `keyword`: Indicates the type of validation that failed - `params`: Provides additional context about the validation rule - `message`: Human-readable description of the error ### Getting Help **Internal Resources**: - Main prompt README: `src/prompts/README.md` - Schema files: `src/prompts/schemas/*.json` - PromptManager code: `scripts/modules/prompt-manager.js` **External Resources**: - JSON Schema documentation: https://json-schema.org/ - AJV validation library: https://ajv.js.org/ - Handlebars template syntax: https://handlebarsjs.com/ ## Schema URLs and References ### Current Schema Locations - **Local Development**: `./src/prompts/schemas/prompt-template.schema.json` - **GitHub Blob**: `https://github.com/eyaltoledano/claude-task-master/blob/main/src/prompts/schemas/prompt-template.schema.json` - **Schema ID**: Used for internal references and validation ### URL Usage Guidelines - **`$id` Field**: Use GitHub blob URLs for stable schema identification - **Local References**: Use relative paths for development and testing - **External Tools**: GitHub blob URLs provide stable, version-controlled access - **Documentation**: Link to GitHub for public schema access ``` -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- ```markdown <a name="readme-top"></a> <div align='center'> <a href="https://trendshift.io/repositories/13971" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13971" alt="eyaltoledano%2Fclaude-task-master | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> </div> <p align="center"> <a href="https://task-master.dev"><img src="./images/logo.png?raw=true" alt="Taskmaster logo"></a> </p> <p align="center"> <b>Taskmaster</b>: A task management system for AI-driven development, designed to work seamlessly with any AI chat. 
</p> <p align="center"> <a href="https://discord.gg/taskmasterai" target="_blank"><img src="https://dcbadge.limes.pink/api/server/https://discord.gg/taskmasterai?style=flat" alt="Discord"></a> | <a href="https://docs.task-master.dev" target="_blank">Docs</a> </p> <p align="center"> <a href="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml"><img src="https://github.com/eyaltoledano/claude-task-master/actions/workflows/ci.yml/badge.svg" alt="CI"></a> <a href="https://github.com/eyaltoledano/claude-task-master/stargazers"><img src="https://img.shields.io/github/stars/eyaltoledano/claude-task-master?style=social" alt="GitHub stars"></a> <a href="https://badge.fury.io/js/task-master-ai"><img src="https://badge.fury.io/js/task-master-ai.svg" alt="npm version"></a> <a href="LICENSE"><img src="https://img.shields.io/badge/license-MIT%20with%20Commons%20Clause-blue.svg" alt="License"></a> </p> <p align="center"> <a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/d18m/task-master-ai?style=flat" alt="NPM Downloads"></a> <a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dm/task-master-ai?style=flat" alt="NPM Downloads"></a> <a href="https://www.npmjs.com/package/task-master-ai"><img src="https://img.shields.io/npm/dw/task-master-ai?style=flat" alt="NPM Downloads"></a> </p> ## By [@eyaltoledano](https://x.com/eyaltoledano) & [@RalphEcom](https://x.com/RalphEcom) [](https://x.com/eyaltoledano) [](https://x.com/RalphEcom) A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI. ## Documentation 📚 **[View Full Documentation](https://docs.task-master.dev)** For detailed guides, API references, and comprehensive examples, visit our documentation site. ### Quick Reference The following documentation is also available in the `docs` directory: - [Configuration Guide](docs/configuration.md) - Set up environment variables and customize Task Master - [Tutorial](docs/tutorial.md) - Step-by-step guide to getting started with Task Master - [Command Reference](docs/command-reference.md) - Complete list of all available commands - [Task Structure](docs/task-structure.md) - Understanding the task format and features - [Example Interactions](docs/examples.md) - Common Cursor AI interaction examples - [Migration Guide](docs/migration-guide.md) - Guide to migrating to the new project structure #### Quick Install for Cursor 1.0+ (One-Click) [](https://cursor.com/en/install-mcp?name=task-master-ai&config=eyJjb21tYW5kIjoibnB4IC15IC0tcGFja2FnZT10YXNrLW1hc3Rlci1haSB0YXNrLW1hc3Rlci1haSIsImVudiI6eyJBTlRIUk9QSUNfQVBJX0tFWSI6IllPVVJfQU5USFJPUElDX0FQSV9LRVlfSEVSRSIsIlBFUlBMRVhJVFlfQVBJX0tFWSI6IllPVVJfUEVSUExFWElUWV9BUElfS0VZX0hFUkUiLCJPUEVOQUlfQVBJX0tFWSI6IllPVVJfT1BFTkFJX0tFWV9IRVJFIiwiR09PR0xFX0FQSV9LRVkiOiJZT1VSX0dPT0dMRV9LRVlfSEVSRSIsIk1JU1RSQUxfQVBJX0tFWSI6IllPVVJfTUlTVFJBTF9LRVlfSEVSRSIsIkdST1FfQVBJX0tFWSI6IllPVVJfR1JPUV9LRVlfSEVSRSIsIk9QRU5ST1VURVJfQVBJX0tFWSI6IllPVVJfT1BFTlJPVVRFUl9LRVlfSEVSRSIsIlhBSV9BUElfS0VZIjoiWU9VUl9YQUlfS0VZX0hFUkUiLCJBWlVSRV9PUEVOQUlfQVBJX0tFWSI6IllPVVJfQVpVUkVfS0VZX0hFUkUiLCJPTExBTUFfQVBJX0tFWSI6IllPVVJfT0xMQU1BX0FQSV9LRVlfSEVSRSJ9fQ%3D%3D) > **Note:** After clicking the link, you'll still need to add your API keys to the configuration. The link installs the MCP server with placeholder keys that you'll need to replace with your actual API keys. 
## Requirements Taskmaster utilizes AI across several commands, and those commands require a separate API key. You can use a variety of models from different AI providers, provided you add your API keys. For example, if you want to use Claude 3.7, you'll need an Anthropic API key. You can define 3 types of models to be used: the main model, the research model, and the fallback model (in case either the main or research model fails). Whatever model you use, its provider API key must be present in either mcp.json or .env. At least one (1) of the following is required: - Anthropic API key (Claude API) - OpenAI API key - Google Gemini API key - Perplexity API key (for research model) - xAI API Key (for research or main model) - OpenRouter API Key (for research or main model) - Claude Code (no API key required - requires Claude Code CLI) Using the research model is optional but highly recommended. You will need at least ONE API key (unless using Claude Code). Adding all API keys enables you to seamlessly switch between model providers at will. ## Quick Start ### Option 1: MCP (Recommended) MCP (Model Context Protocol) lets you run Task Master directly from your editor. #### 1. Add your MCP config at the following path depending on your editor | Editor | Scope | Linux/macOS Path | Windows Path | Key | | ------------ | ------- | ------------------------------------- | ------------------------------------------------- | ------------ | | **Cursor** | Global | `~/.cursor/mcp.json` | `%USERPROFILE%\.cursor\mcp.json` | `mcpServers` | | | Project | `<project_folder>/.cursor/mcp.json` | `<project_folder>\.cursor\mcp.json` | `mcpServers` | | **Windsurf** | Global | `~/.codeium/windsurf/mcp_config.json` | `%USERPROFILE%\.codeium\windsurf\mcp_config.json` | `mcpServers` | | **VS Code** | Project | `<project_folder>/.vscode/mcp.json` | `<project_folder>\.vscode\mcp.json` | `servers` | ##### Manual Configuration ###### Cursor & Windsurf (`mcpServers`) ```json { "mcpServers": { "task-master-ai": { "command": "npx", "args": ["-y", "task-master-ai"], "env": { "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE", "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE", "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE", "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE", "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE", "GROQ_API_KEY": "YOUR_GROQ_KEY_HERE", "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE", "XAI_API_KEY": "YOUR_XAI_KEY_HERE", "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE", "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE" } } } } ``` > 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use. > **Note**: If you see `0 tools enabled` in the MCP settings, restart your editor and check that your API keys are correctly configured. ###### VS Code (`servers` + `type`) ```json { "servers": { "task-master-ai": { "command": "npx", "args": ["-y", "task-master-ai"], "env": { "ANTHROPIC_API_KEY": "YOUR_ANTHROPIC_API_KEY_HERE", "PERPLEXITY_API_KEY": "YOUR_PERPLEXITY_API_KEY_HERE", "OPENAI_API_KEY": "YOUR_OPENAI_KEY_HERE", "GOOGLE_API_KEY": "YOUR_GOOGLE_KEY_HERE", "MISTRAL_API_KEY": "YOUR_MISTRAL_KEY_HERE", "GROQ_API_KEY": "YOUR_GROQ_KEY_HERE", "OPENROUTER_API_KEY": "YOUR_OPENROUTER_KEY_HERE", "XAI_API_KEY": "YOUR_XAI_KEY_HERE", "AZURE_OPENAI_API_KEY": "YOUR_AZURE_KEY_HERE", "OLLAMA_API_KEY": "YOUR_OLLAMA_API_KEY_HERE" }, "type": "stdio" } } } ``` > 🔑 Replace `YOUR_…_KEY_HERE` with your real API keys. You can remove keys you don't use. #### 2.
(Cursor-only) Enable Taskmaster MCP Open Cursor Settings (Ctrl+Shift+J) ➡ Click on MCP tab on the left ➡ Enable task-master-ai with the toggle #### 3. (Optional) Configure the models you want to use In your editor's AI chat pane, say: ```txt Change the main, research and fallback models to <model_name>, <model_name> and <model_name> respectively. ``` For example, to use Claude Code (no API key required): ```txt Change the main model to claude-code/sonnet ``` [Table of available models](docs/models.md) | [Claude Code setup](docs/examples/claude-code-usage.md) #### 4. Initialize Task Master In your editor's AI chat pane, say: ```txt Initialize taskmaster-ai in my project ``` #### 5. Make sure you have a PRD (Recommended) For **new projects**: Create your PRD at `.taskmaster/docs/prd.txt` For **existing projects**: You can use `scripts/prd.txt` or migrate with `task-master migrate` An example PRD template is available after initialization in `.taskmaster/templates/example_prd.txt`. > [!NOTE] > While a PRD is recommended for complex projects, you can always create individual tasks by asking "Can you help me implement [description of what you want to do]?" in chat. **Always start with a detailed PRD.** The more detailed your PRD, the better the generated tasks will be. #### 6. Common Commands Use your AI assistant to: - Parse requirements: `Can you parse my PRD at scripts/prd.txt?` - Plan next step: `What's the next task I should work on?` - Implement a task: `Can you help me implement task 3?` - View multiple tasks: `Can you show me tasks 1, 3, and 5?` - Expand a task: `Can you help me expand task 4?` - **Research fresh information**: `Research the latest best practices for implementing JWT authentication with Node.js` - **Research with context**: `Research React Query v5 migration strategies for our current API implementation in src/api.js` [More examples on how to use Task Master in chat](docs/examples.md) ### Option 2: Using Command Line #### Installation ```bash # Install globally npm install -g task-master-ai # OR install locally within your project npm install task-master-ai ``` #### Initialize a new project ```bash # If installed globally task-master init # If installed locally npx task-master init # Initialize project with specific rules task-master init --rules cursor,windsurf,vscode ``` This will prompt you for project details and set up a new project with the necessary files and structure. #### Common Commands ```bash # Initialize a new project task-master init # Parse a PRD and generate tasks task-master parse-prd your-prd.txt # List all tasks task-master list # Show the next task to work on task-master next # Show specific task(s) - supports comma-separated IDs task-master show 1,3,5 # Research fresh information with project context task-master research "What are the latest best practices for JWT authentication?" 
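# A couple more everyday commands (see docs/command-reference.md for the full list)

# Break a task into subtasks
task-master expand --id=4

# Mark a task as done when it is complete
task-master set-status --id=4 --status=done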
# Move tasks between tags (cross-tag movement) task-master move --from=5 --from-tag=backlog --to-tag=in-progress task-master move --from=5,6,7 --from-tag=backlog --to-tag=done --with-dependencies task-master move --from=5 --from-tag=backlog --to-tag=in-progress --ignore-dependencies # Generate task files task-master generate # Add rules after initialization task-master rules add windsurf,roo,vscode ``` ## Claude Code Support Task Master now supports Claude models through the Claude Code CLI, which requires no API key: - **Models**: `claude-code/opus` and `claude-code/sonnet` - **Requirements**: Claude Code CLI installed - **Benefits**: No API key needed, uses your local Claude instance [Learn more about Claude Code setup](docs/examples/claude-code-usage.md) ## Troubleshooting ### If `task-master init` doesn't respond Try running it with Node directly: ```bash node node_modules/claude-task-master/scripts/init.js ``` Or clone the repository and run: ```bash git clone https://github.com/eyaltoledano/claude-task-master.git cd claude-task-master node scripts/init.js ``` ## Contributors <a href="https://github.com/eyaltoledano/claude-task-master/graphs/contributors"> <img src="https://contrib.rocks/image?repo=eyaltoledano/claude-task-master" alt="Task Master project contributors" /> </a> ## Star History [](https://www.star-history.com/#eyaltoledano/claude-task-master&Timeline) ## Licensing Task Master is licensed under the MIT License with Commons Clause. This means you can: ✅ **Allowed**: - Use Task Master for any purpose (personal, commercial, academic) - Modify the code - Distribute copies - Create and sell products built using Task Master ❌ **Not Allowed**: - Sell Task Master itself - Offer Task Master as a hosted service - Create competing products based on Task Master See the [LICENSE](LICENSE) file for the complete license text and [licensing details](docs/licensing.md) for more information. ``` -------------------------------------------------------------------------------- /.taskmaster/docs/README.md: -------------------------------------------------------------------------------- ```markdown # Meta-Development Script This folder contains a **meta-development script** (`dev.js`) and related utilities that manage tasks for an AI-driven or traditional software development workflow. The script revolves around a `tasks.json` file, which holds an up-to-date list of development tasks. ## Overview In an AI-driven development process—particularly with tools like [Cursor](https://www.cursor.so/)—it's beneficial to have a **single source of truth** for tasks. This script allows you to: 1. **Parse** a PRD or requirements document (`.txt`) to initialize a set of tasks (`tasks.json`). 2. **List** all existing tasks (IDs, statuses, titles). 3. **Update** tasks to accommodate new prompts or architecture changes (useful if you discover "implementation drift"). 4. **Generate** individual task files (e.g., `task_001.txt`) for easy reference or to feed into an AI coding workflow. 5. **Set task status**—mark tasks as `done`, `pending`, or `deferred` based on progress. 6. **Expand** tasks with subtasks—break down complex tasks into smaller, more manageable subtasks. 7. **Research-backed subtask generation**—use Perplexity AI to generate more informed and contextually relevant subtasks. 8. **Clear subtasks**—remove subtasks from specified tasks to allow regeneration or restructuring. 9. **Show task details**—display detailed information about a specific task and its subtasks. 
## Configuration The script can be configured through environment variables in a `.env` file at the root of the project: ### Required Configuration - `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude ### Optional Configuration - `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219") - `MAX_TOKENS`: Maximum tokens for model responses (default: 4000) - `TEMPERATURE`: Temperature for model responses (default: 0.7) - `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation - `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online") - `DEBUG`: Enable debug logging (default: false) - `TASKMASTER_LOG_LEVEL`: Log level - debug, info, warn, error (default: info) - `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3) - `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium) - `PROJECT_NAME`: Override default project name in tasks.json - `PROJECT_VERSION`: Override default version in tasks.json ## How It Works 1. **`tasks.json`**: - A JSON file at the project root containing an array of tasks (each with `id`, `title`, `description`, `status`, etc.). - The `meta` field can store additional info like the project's name, version, or reference to the PRD. - Tasks can have `subtasks` for more detailed implementation steps. - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending) to easily track progress. 2. **Script Commands** You can run the script via: ```bash node scripts/dev.js [command] [options] ``` Available commands: - `parse-prd`: Generate tasks from a PRD document - `list`: Display all tasks with their status - `update`: Update tasks based on new information - `generate`: Create individual task files - `set-status`: Change a task's status - `expand`: Add subtasks to a task or all tasks - `clear-subtasks`: Remove subtasks from specified tasks - `next`: Determine the next task to work on based on dependencies - `show`: Display detailed information about a specific task Run `node scripts/dev.js` without arguments to see detailed usage information. 
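To make this concrete, here is a small illustrative sketch of the `tasks.json` shape described above, written as an annotated JavaScript object. It is only a sketch: the exact keys inside `meta`, and any fields beyond `id`, `title`, `description`, `status`, `priority`, `dependencies`, and `subtasks`, are assumptions for illustration rather than the canonical schema.

```javascript
// Illustrative sketch of a tasks.json payload (not the canonical schema).
// Key names inside `meta` and the subtask fields are assumptions for illustration.
const tasksFile = {
	meta: {
		projectName: 'My Project', // can be overridden via PROJECT_NAME
		version: '1.0.0', // can be overridden via PROJECT_VERSION
		prdSource: 'scripts/prd.txt' // reference back to the parsed PRD
	},
	tasks: [
		{
			id: 1,
			title: 'Set up project scaffolding',
			description: 'Initialize the repository and tooling',
			status: 'done', // e.g. done, pending, deferred
			priority: 'high', // falls back to DEFAULT_PRIORITY
			dependencies: [], // IDs of tasks that must be completed first
			subtasks: []
		},
		{
			id: 2,
			title: 'Implement authentication',
			description: 'Add login and session handling',
			status: 'pending',
			priority: 'medium',
			dependencies: [1], // rendered with ✅/⏱️ status indicators
			subtasks: [
				{ id: 1, title: 'Design login form', status: 'pending' } // addressed as 2.1
			]
		}
	]
};

console.log(`${tasksFile.tasks.length} tasks loaded for ${tasksFile.meta.projectName}`);
```

Commands such as `list`, `set-status`, and `expand` all read and rewrite this single file, which is what keeps the CLI output, the generated task files, and the AI workflows in sync.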
## Listing Tasks The `list` command allows you to view all tasks and their status: ```bash # List all tasks node scripts/dev.js list # List tasks with a specific status node scripts/dev.js list --status=pending # List tasks and include their subtasks node scripts/dev.js list --with-subtasks # List tasks with a specific status and include their subtasks node scripts/dev.js list --status=pending --with-subtasks ``` ## Updating Tasks The `update` command allows you to update tasks based on new information or implementation changes: ```bash # Update tasks starting from ID 4 with a new prompt node scripts/dev.js update --from=4 --prompt="Refactor tasks from ID 4 onward to use Express instead of Fastify" # Update all tasks (default from=1) node scripts/dev.js update --prompt="Add authentication to all relevant tasks" # With research-backed updates using Perplexity AI node scripts/dev.js update --from=4 --prompt="Integrate OAuth 2.0" --research # Specify a different tasks file node scripts/dev.js update --file=custom-tasks.json --from=5 --prompt="Change database from MongoDB to PostgreSQL" ``` Notes: - The `--prompt` parameter is required and should explain the changes or new context - Only tasks that aren't marked as 'done' will be updated - Tasks with ID >= the specified --from value will be updated - The `--research` flag uses Perplexity AI for more informed updates when available ## Updating a Single Task The `update-task` command allows you to update a specific task instead of multiple tasks: ```bash # Update a specific task with new information node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication" # With research-backed updates using Perplexity AI node scripts/dev.js update-task --id=4 --prompt="Use JWT for authentication" --research ``` This command: - Updates only the specified task rather than a range of tasks - Provides detailed validation with helpful error messages - Checks for required API keys when using research mode - Falls back gracefully if Perplexity API is unavailable - Preserves tasks that are already marked as "done" - Includes contextual error handling for common issues ## Setting Task Status The `set-status` command allows you to change a task's status: ```bash # Mark a task as done node scripts/dev.js set-status --id=3 --status=done # Mark a task as pending node scripts/dev.js set-status --id=4 --status=pending # Mark a specific subtask as done node scripts/dev.js set-status --id=3.1 --status=done # Mark multiple tasks at once node scripts/dev.js set-status --id=1,2,3 --status=done ``` Notes: - When marking a parent task as "done", all of its subtasks will automatically be marked as "done" as well - Common status values are 'done', 'pending', and 'deferred', but any string is accepted - You can specify multiple task IDs by separating them with commas - Subtask IDs are specified using the format `parentId.subtaskId` (e.g., `3.1`) - Dependencies are updated to show completion status (✅ for completed, ⏱️ for pending) throughout the system ## Expanding Tasks The `expand` command allows you to break down tasks into subtasks for more detailed implementation: ```bash # Expand a specific task with 3 subtasks (default) node scripts/dev.js expand --id=3 # Expand a specific task with 5 subtasks node scripts/dev.js expand --id=3 --num=5 # Expand a task with additional context node scripts/dev.js expand --id=3 --prompt="Focus on security aspects" # Expand all pending tasks that don't have subtasks node scripts/dev.js expand --all # Force regeneration of subtasks 
for all pending tasks node scripts/dev.js expand --all --force # Use Perplexity AI for research-backed subtask generation node scripts/dev.js expand --id=3 --research # Use Perplexity AI for research-backed generation on all pending tasks node scripts/dev.js expand --all --research ``` ## Clearing Subtasks The `clear-subtasks` command allows you to remove subtasks from specified tasks: ```bash # Clear subtasks from a specific task node scripts/dev.js clear-subtasks --id=3 # Clear subtasks from multiple tasks node scripts/dev.js clear-subtasks --id=1,2,3 # Clear subtasks from all tasks node scripts/dev.js clear-subtasks --all ``` Notes: - After clearing subtasks, task files are automatically regenerated - This is useful when you want to regenerate subtasks with a different approach - Can be combined with the `expand` command to immediately generate new subtasks - Works with both parent tasks and individual subtasks ## AI Integration The script integrates with two AI services: 1. **Anthropic Claude**: Used for parsing PRDs, generating tasks, and creating subtasks. 2. **Perplexity AI**: Used for research-backed subtask generation when the `--research` flag is specified. The Perplexity integration uses the OpenAI client to connect to Perplexity's API, which provides enhanced research capabilities for generating more informed subtasks. If the Perplexity API is unavailable or encounters an error, the script will automatically fall back to using Anthropic's Claude. To use the Perplexity integration: 1. Obtain a Perplexity API key 2. Add `PERPLEXITY_API_KEY` to your `.env` file 3. Optionally specify `PERPLEXITY_MODEL` in your `.env` file (default: "sonar-medium-online") 4. Use the `--research` flag with the `expand` command ## Logging The script supports different logging levels controlled by the `TASKMASTER_LOG_LEVEL` environment variable: - `debug`: Detailed information, typically useful for troubleshooting - `info`: Confirmation that things are working as expected (default) - `warn`: Warning messages that don't prevent execution - `error`: Error messages that might prevent execution When `DEBUG=true` is set, debug logs are also written to a `dev-debug.log` file in the project root. ## Managing Task Dependencies The `add-dependency` and `remove-dependency` commands allow you to manage task dependencies: ```bash # Add a dependency to a task node scripts/dev.js add-dependency --id=<id> --depends-on=<id> # Remove a dependency from a task node scripts/dev.js remove-dependency --id=<id> --depends-on=<id> ``` These commands: 1. **Allow precise dependency management**: - Add dependencies between tasks with automatic validation - Remove dependencies when they're no longer needed - Update task files automatically after changes 2. **Include validation checks**: - Prevent circular dependencies (a task depending on itself) - Prevent duplicate dependencies - Verify that both tasks exist before adding/removing dependencies - Check if dependencies exist before attempting to remove them 3. **Provide clear feedback**: - Success messages confirm when dependencies are added/removed - Error messages explain why operations failed (if applicable) 4. 
**Automatically update task files**: - Regenerates task files to reflect dependency changes - Ensures tasks and their files stay synchronized ## Dependency Validation and Fixing The script provides two specialized commands to ensure task dependencies remain valid and properly maintained: ### Validating Dependencies The `validate-dependencies` command allows you to check for invalid dependencies without making changes: ```bash # Check for invalid dependencies in tasks.json node scripts/dev.js validate-dependencies # Specify a different tasks file node scripts/dev.js validate-dependencies --file=custom-tasks.json ``` This command: - Scans all tasks and subtasks for non-existent dependencies - Identifies potential self-dependencies (tasks referencing themselves) - Reports all found issues without modifying files - Provides a comprehensive summary of dependency state - Gives detailed statistics on task dependencies Use this command to audit your task structure before applying fixes. ### Fixing Dependencies The `fix-dependencies` command proactively finds and fixes all invalid dependencies: ```bash # Find and fix all invalid dependencies node scripts/dev.js fix-dependencies # Specify a different tasks file node scripts/dev.js fix-dependencies --file=custom-tasks.json ``` This command: 1. **Validates all dependencies** across tasks and subtasks 2. **Automatically removes**: - References to non-existent tasks and subtasks - Self-dependencies (tasks depending on themselves) 3. **Fixes issues in both**: - The tasks.json data structure - Individual task files during regeneration 4. **Provides a detailed report**: - Types of issues fixed (non-existent vs. self-dependencies) - Number of tasks affected (tasks vs. subtasks) - Where fixes were applied (tasks.json vs. task files) - List of all individual fixes made This is especially useful when tasks have been deleted or IDs have changed, potentially breaking dependency chains. 
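As a rough mental model of what these two commands check, the sketch below shows how non-existent references and self-dependencies can be detected and stripped from the task structure described earlier. It is not the actual implementation in `scripts/dev.js`; the function names and the `id`/`dependencies`/`subtasks` access patterns are assumptions used purely for illustration.

```javascript
// Minimal illustration of the two dependency checks described above.
// This is NOT the real scripts/dev.js logic, just a sketch of the idea.
function findDependencyIssues(tasks) {
	// Collect every valid ID, including subtask IDs in "parentId.subtaskId" form.
	const validIds = new Set();
	for (const task of tasks) {
		validIds.add(String(task.id));
		for (const sub of task.subtasks ?? []) {
			validIds.add(`${task.id}.${sub.id}`);
		}
	}

	const issues = [];
	const check = (ownerId, dependencies = []) => {
		for (const dep of dependencies) {
			const depId = String(dep);
			if (depId === ownerId) {
				issues.push({ ownerId, dep: depId, type: 'self-dependency' });
			} else if (!validIds.has(depId)) {
				issues.push({ ownerId, dep: depId, type: 'non-existent' });
			}
		}
	};

	for (const task of tasks) {
		check(String(task.id), task.dependencies);
		for (const sub of task.subtasks ?? []) {
			check(`${task.id}.${sub.id}`, sub.dependencies);
		}
	}
	return issues;
}

// "Fixing" is then just filtering out the offending references.
function stripInvalidDependencies(tasks) {
	const bad = new Set(
		findDependencyIssues(tasks).map((issue) => `${issue.ownerId}->${issue.dep}`)
	);
	for (const task of tasks) {
		task.dependencies = (task.dependencies ?? []).filter(
			(dep) => !bad.has(`${task.id}->${dep}`)
		);
		for (const sub of task.subtasks ?? []) {
			sub.dependencies = (sub.dependencies ?? []).filter(
				(dep) => !bad.has(`${task.id}.${sub.id}->${dep}`)
			);
		}
	}
	return tasks;
}
```

The real `validate-dependencies` command stops after reporting issues like these, while `fix-dependencies` also applies the fixes and regenerates the affected task files.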
## Analyzing Task Complexity The `analyze-complexity` command allows you to automatically assess task complexity and generate expansion recommendations: ```bash # Analyze all tasks and generate expansion recommendations node scripts/dev.js analyze-complexity # Specify a custom output file node scripts/dev.js analyze-complexity --output=custom-report.json # Override the model used for analysis node scripts/dev.js analyze-complexity --model=claude-3-opus-20240229 # Set a custom complexity threshold (1-10) node scripts/dev.js analyze-complexity --threshold=6 # Use Perplexity AI for research-backed complexity analysis node scripts/dev.js analyze-complexity --research ``` Notes: - The command uses Claude to analyze each task's complexity (or Perplexity with --research flag) - Tasks are scored on a scale of 1-10 - Each task receives a recommended number of subtasks based on DEFAULT_SUBTASKS configuration - The default output path is `scripts/task-complexity-report.json` - Each task in the analysis includes a ready-to-use `expansionCommand` that can be copied directly to the terminal or executed programmatically - Tasks with complexity scores below the threshold (default: 5) may not need expansion - The research flag provides more contextual and informed complexity assessments ### Integration with Expand Command The `expand` command automatically checks for and uses complexity analysis if available: ```bash # Expand a task, using complexity report recommendations if available node scripts/dev.js expand --id=8 # Expand all tasks, prioritizing by complexity score if a report exists node scripts/dev.js expand --all # Override recommendations with explicit values node scripts/dev.js expand --id=8 --num=5 --prompt="Custom prompt" ``` When a complexity report exists: - The `expand` command will use the recommended subtask count from the report (unless overridden) - It will use the tailored expansion prompt from the report (unless a custom prompt is provided) - When using `--all`, tasks are sorted by complexity score (highest first) - The `--research` flag is preserved from the complexity analysis to expansion The output report structure is: ```json { "meta": { "generatedAt": "2023-06-15T12:34:56.789Z", "tasksAnalyzed": 20, "thresholdScore": 5, "projectName": "Your Project Name", "usedResearch": true }, "complexityAnalysis": [ { "taskId": 8, "taskTitle": "Develop Implementation Drift Handling", "complexityScore": 9.5, "recommendedSubtasks": 6, "expansionPrompt": "Create subtasks that handle detecting...", "reasoning": "This task requires sophisticated logic...", "expansionCommand": "node scripts/dev.js expand --id=8 --num=6 --prompt=\"Create subtasks...\" --research" } // More tasks sorted by complexity score (highest first) ] } ``` ## Finding the Next Task The `next` command helps you determine which task to work on next based on dependencies and status: ```bash # Show the next task to work on node scripts/dev.js next # Specify a different tasks file node scripts/dev.js next --file=custom-tasks.json ``` This command: 1. Identifies all **eligible tasks** - pending or in-progress tasks whose dependencies are all satisfied (marked as done) 2. **Prioritizes** these eligible tasks by: - Priority level (high > medium > low) - Number of dependencies (fewer dependencies first) - Task ID (lower ID first) 3. **Displays** comprehensive information about the selected task: - Basic task details (ID, title, priority, dependencies) - Detailed description and implementation details - Subtasks if they exist 4. 
Provides **contextual suggested actions**: - Command to mark the task as in-progress - Command to mark the task as done when completed - Commands for working with subtasks (update status or expand) This feature ensures you're always working on the most appropriate task based on your project's current state and dependency structure. ## Showing Task Details The `show` command allows you to view detailed information about a specific task: ```bash # Show details for a specific task node scripts/dev.js show 1 # Alternative syntax with --id option node scripts/dev.js show --id=1 # Show details for a subtask node scripts/dev.js show --id=1.2 # Specify a different tasks file node scripts/dev.js show 3 --file=custom-tasks.json ``` This command: 1. **Displays comprehensive information** about the specified task: - Basic task details (ID, title, priority, dependencies, status) - Full description and implementation details - Test strategy information - Subtasks if they exist 2. **Handles both regular tasks and subtasks**: - For regular tasks, shows all subtasks and their status - For subtasks, shows the parent task relationship 3. **Provides contextual suggested actions**: - Commands to update the task status - Commands for working with subtasks - For subtasks, provides a link to view the parent task This command is particularly useful when you need to examine a specific task in detail before implementing it or when you want to check the status and details of a particular task. ## Enhanced Error Handling The script now includes improved error handling throughout all commands: 1. **Detailed Validation**: - Required parameters (like task IDs and prompts) are validated early - File existence is checked with customized errors for common scenarios - Parameter type conversion is handled with clear error messages 2. **Contextual Error Messages**: - Task not found errors include suggestions to run the list command - API key errors include reminders to check environment variables - Invalid ID format errors show the expected format 3. **Command-Specific Help Displays**: - When validation fails, detailed help for the specific command is shown - Help displays include usage examples and parameter descriptions - Formatted in clear, color-coded boxes with examples 4. **Helpful Error Recovery**: - Detailed troubleshooting steps for common errors - Graceful fallbacks for missing optional dependencies - Clear instructions for how to fix configuration issues ## Version Checking The script now automatically checks for updates without slowing down execution: 1. **Background Version Checking**: - Non-blocking version checks run in the background while commands execute - Actual command execution isn't delayed by version checking - Update notifications appear after command completion 2. **Update Notifications**: - When a newer version is available, a notification is displayed - Notifications include current version, latest version, and update command - Formatted in an attention-grabbing box with clear instructions 3. 
**Implementation Details**: - Uses semantic versioning to compare current and latest versions - Fetches version information from npm registry with a timeout - Gracefully handles connection issues without affecting command execution ## Subtask Management The script now includes enhanced commands for managing subtasks: ### Adding Subtasks ```bash # Add a subtask to an existing task node scripts/dev.js add-subtask --parent=5 --title="Implement login UI" --description="Create login form" # Convert an existing task to a subtask node scripts/dev.js add-subtask --parent=5 --task-id=8 # Add a subtask with dependencies node scripts/dev.js add-subtask --parent=5 --title="Authentication middleware" --dependencies=5.1,5.2 # Skip regenerating task files node scripts/dev.js add-subtask --parent=5 --title="Login API route" --skip-generate ``` Key features: - Create new subtasks with detailed properties or convert existing tasks - Define dependencies between subtasks - Set custom status for new subtasks - Provides next-step suggestions after creation ### Removing Subtasks ```bash # Remove a subtask node scripts/dev.js remove-subtask --id=5.2 # Remove multiple subtasks node scripts/dev.js remove-subtask --id=5.2,5.3,5.4 # Convert a subtask to a standalone task node scripts/dev.js remove-subtask --id=5.2 --convert # Skip regenerating task files node scripts/dev.js remove-subtask --id=5.2 --skip-generate ``` Key features: - Remove subtasks individually or in batches - Optionally convert subtasks to standalone tasks - Control whether task files are regenerated - Provides detailed success messages and next steps ``` -------------------------------------------------------------------------------- /src/prompts/README.md: -------------------------------------------------------------------------------- ```markdown # Task Master Prompt Management System This directory contains the centralized prompt templates for all AI-powered features in Task Master. ## Overview The prompt management system provides: - **Centralized Storage**: All prompts in one location (`/src/prompts`) - **JSON Schema Validation**: Comprehensive validation using AJV with detailed error reporting - **Version Control**: Track changes to prompts over time - **Variant Support**: Different prompts for different contexts (research mode, complexity levels, etc.) - **Template Variables**: Dynamic prompt generation with variable substitution - **IDE Integration**: VS Code IntelliSense and validation support ## Directory Structure ``` src/prompts/ ├── README.md # This file ├── schemas/ # JSON schemas for validation │ ├── README.md # Schema documentation │ ├── prompt-template.schema.json # Main template schema │ ├── parameter.schema.json # Parameter validation schema │ └── variant.schema.json # Prompt variant schema ├── parse-prd.json # PRD parsing prompts ├── expand-task.json # Task expansion prompts ├── add-task.json # Task creation prompts ├── update-tasks.json # Bulk task update prompts ├── update-task.json # Single task update prompts ├── update-subtask.json # Subtask update prompts ├── analyze-complexity.json # Complexity analysis prompts └── research.json # Research query prompts ``` ## Schema Validation All prompt templates are validated against JSON schemas located in `/src/prompts/schemas/`. 
The validation system: - **Structural Validation**: Ensures required fields and proper nesting - **Parameter Type Checking**: Validates parameter types, patterns, and ranges - **Template Syntax**: Validates Handlebars syntax and variable references - **Semantic Versioning**: Enforces proper version format - **Cross-Reference Validation**: Ensures parameters match template variables ### Validation Features - **Required Fields**: `id`, `version`, `description`, `prompts.default` - **Type Safety**: String, number, boolean, array, object validation - **Pattern Matching**: Regex validation for string parameters - **Range Validation**: Min/max values for numeric parameters - **Enum Constraints**: Restricted value sets for categorical parameters ## Development Workflow ### Setting Up Development Environment 1. **VS Code Integration**: Schemas are automatically configured for IntelliSense 2. **Dependencies**: `ajv` and `ajv-formats` are required for validation 3. **File Watching**: Changes to templates trigger automatic validation ### Creating New Prompts 1. Create a new `.json` file in `/src/prompts/` 2. Follow the schema structure (see Template Structure section) 3. Define parameters with proper types and validation 4. Create system and user prompts with template variables 5. Test with the PromptManager before committing ### Modifying Existing Prompts 1. Update the `version` field following semantic versioning 2. Maintain backward compatibility when possible 3. Test with existing code that uses the prompt 4. Update documentation if parameters change ## Prompt Template Reference ### 1. parse-prd.json **Purpose**: Parse a Product Requirements Document into structured tasks **Variants**: `default`, `research` (when research mode is enabled) **Required Parameters**: - `numTasks` (number): Target number of tasks to generate - `nextId` (number): Starting ID for tasks - `prdContent` (string): Content of the PRD file - `prdPath` (string): Path to the PRD file - `defaultTaskPriority` (string): Default priority for generated tasks **Optional Parameters**: - `research` (boolean): Enable research mode for latest best practices (default: false) **Usage**: Used by `task-master parse-prd` command to convert PRD documents into actionable task lists. ### 2. add-task.json **Purpose**: Generate a new task based on user description **Variants**: `default`, `research` (when research mode is enabled) **Required Parameters**: - `prompt` (string): User's task description - `newTaskId` (number): ID for the new task **Optional Parameters**: - `existingTasks` (array): List of existing tasks for context - `gatheredContext` (string): Context gathered from codebase analysis - `contextFromArgs` (string): Additional context from manual args - `priority` (string): Task priority (high/medium/low, default: medium) - `dependencies` (array): Task dependency IDs - `useResearch` (boolean): Use research mode (default: false) **Usage**: Used by `task-master add-task` command to create new tasks with AI assistance. ### 3. 
expand-task.json **Purpose**: Break down a task into detailed subtasks with three sophisticated strategies **Variants**: `complexity-report` (when expansionPrompt exists), `research` (when research mode is enabled), `default` (standard case) **Required Parameters**: - `subtaskCount` (number): Number of subtasks to generate - `task` (object): The task to expand - `nextSubtaskId` (number): Starting ID for new subtasks **Optional Parameters**: - `additionalContext` (string): Additional context for expansion (default: "") - `complexityReasoningContext` (string): Complexity analysis reasoning context (default: "") - `gatheredContext` (string): Gathered project context (default: "") - `useResearch` (boolean): Use research mode (default: false) - `expansionPrompt` (string): Expansion prompt from complexity report **Variant Selection Strategy**: 1. **complexity-report**: Used when `expansionPrompt` exists (highest priority) 2. **research**: Used when `useResearch === true && !expansionPrompt` 3. **default**: Standard fallback strategy **Usage**: Used by `task-master expand` command to break complex tasks into manageable subtasks using the most appropriate strategy based on available context and complexity analysis. ### 4. update-task.json **Purpose**: Update a single task with new information, supporting full updates and append mode **Variants**: `default`, `append` (when appendMode is true), `research` (when research mode is enabled) **Required Parameters**: - `task` (object): The task to update - `taskJson` (string): JSON string representation of the task - `updatePrompt` (string): Description of changes to apply **Optional Parameters**: - `appendMode` (boolean): Whether to append to details or do full update (default: false) - `useResearch` (boolean): Use research mode (default: false) - `currentDetails` (string): Current task details for context (default: "(No existing details)") - `gatheredContext` (string): Additional project context **Usage**: Used by `task-master update-task` command to modify existing tasks. ### 5. update-tasks.json **Purpose**: Update multiple tasks based on new context or changes **Variants**: `default`, `research` (when research mode is enabled) **Required Parameters**: - `tasks` (array): Array of tasks to update - `updatePrompt` (string): Description of changes to apply **Optional Parameters**: - `useResearch` (boolean): Use research mode (default: false) - `projectContext` (string): Additional project context **Usage**: Used by `task-master update` command to bulk update multiple tasks. ### 6. update-subtask.json **Purpose**: Append information to a subtask by generating only new content **Variants**: `default`, `research` (when research mode is enabled) **Required Parameters**: - `parentTask` (object): The parent task context - `currentDetails` (string): Current subtask details (default: "(No existing details)") - `updatePrompt` (string): User request for what to add **Optional Parameters**: - `prevSubtask` (object): The previous subtask if any - `nextSubtask` (object): The next subtask if any - `useResearch` (boolean): Use research mode (default: false) - `gatheredContext` (string): Additional project context **Usage**: Used by `task-master update-subtask` command to log progress and findings on subtasks. ### 7. 
analyze-complexity.json **Purpose**: Analyze task complexity and generate expansion recommendations **Variants**: `default`, `research` (when research mode is enabled), `batch` (when analyzing >10 tasks) **Required Parameters**: - `tasks` (array): Array of tasks to analyze **Optional Parameters**: - `gatheredContext` (string): Additional project context - `threshold` (number): Complexity threshold for expansion recommendation (1-10, default: 5) - `useResearch` (boolean): Use research mode for deeper analysis (default: false) **Usage**: Used by `task-master analyze-complexity` command to determine which tasks need breakdown. ### 8. research.json **Purpose**: Perform AI-powered research with project context **Variants**: `default`, `low` (concise responses), `medium` (balanced), `high` (detailed) **Required Parameters**: - `query` (string): Research query **Optional Parameters**: - `gatheredContext` (string): Gathered project context - `detailLevel` (string): Level of detail (low/medium/high, default: medium) - `projectInfo` (object): Project information with properties: - `root` (string): Project root path - `taskCount` (number): Number of related tasks - `fileCount` (number): Number of related files **Usage**: Used by `task-master research` command to get contextual information and guidance. ## Template Structure Each prompt template is a JSON file with the following structure: ```json { "id": "unique-identifier", "version": "1.0.0", "description": "What this prompt does", "metadata": { "author": "system", "created": "2024-01-01T00:00:00Z", "updated": "2024-01-01T00:00:00Z", "tags": ["category", "feature"], "category": "task" }, "parameters": { "paramName": { "type": "string|number|boolean|array|object", "required": true|false, "default": "default value", "description": "Parameter description", "enum": ["option1", "option2"], "pattern": "^[a-z]+$", "minimum": 1, "maximum": 100 } }, "prompts": { "default": { "system": "System prompt template", "user": "User prompt template" }, "variant-name": { "condition": "JavaScript expression", "system": "Variant system prompt", "user": "Variant user prompt", "metadata": { "description": "When to use this variant" } } } } ``` ## Template Features ### Variable Substitution Use `{{variableName}}` to inject dynamic values: ``` "user": "Analyze these {{tasks.length}} tasks with threshold {{threshold}}" ``` ### Conditionals Use `{{#if variable}}...{{/if}}` for conditional content: ``` "user": "{{#if useResearch}}Research and {{/if}}create a task" ``` ### Helper Functions #### Equality Helper Use `{{#if (eq variable "value")}}...{{/if}}` for string comparisons: ``` "user": "{{#if (eq detailLevel \"low\")}}Provide a brief summary{{/if}}" "user": "{{#if (eq priority \"high\")}}URGENT: {{/if}}{{taskTitle}}" ``` The `eq` helper enables clean conditional logic based on parameter values: - Compare strings: `(eq detailLevel "medium")` - Compare with enum values: `(eq status "pending")` - Multiple conditions: `{{#if (eq level "1")}}First{{/if}}{{#if (eq level "2")}}Second{{/if}}` #### Negation Helper Use `{{#if (not variable)}}...{{/if}}` for negation conditions: ``` "user": "{{#if (not useResearch)}}Use basic analysis{{/if}}" "user": "{{#if (not hasSubtasks)}}This task has no subtasks{{/if}}" ``` The `not` helper enables clean negative conditional logic: - Negate boolean values: `(not useResearch)` - Negate truthy/falsy values: `(not emptyArray)` - Cleaner than separate boolean parameters: No need for `notUseResearch` flags #### Numeric Comparison Helpers Use `{{#if 
(gt variable number)}}...{{/if}}` for greater than comparisons: ``` "user": "generate {{#if (gt numTasks 0)}}approximately {{numTasks}}{{else}}an appropriate number of{{/if}} top-level development tasks" "user": "{{#if (gt complexity 5)}}This is a complex task{{/if}}" "system": "create {{#if (gt subtaskCount 0)}}exactly {{subtaskCount}}{{else}}an appropriate number of{{/if}} subtasks" ``` Use `{{#if (gte variable number)}}...{{/if}}` for greater than or equal comparisons: ``` "user": "{{#if (gte priority 8)}}HIGH PRIORITY{{/if}}" "user": "{{#if (gte threshold 1)}}Analysis enabled{{/if}}" "system": "{{#if (gte complexityScore 8)}}Use detailed breakdown approach{{/if}}" ``` The numeric comparison helpers enable sophisticated conditional logic: - **Dynamic counting**: `{{#if (gt numTasks 0)}}exactly {{numTasks}}{{else}}an appropriate number of{{/if}}` - **Threshold-based behavior**: `(gte complexityScore 8)` for high-complexity handling - **Zero checks**: `(gt subtaskCount 0)` for conditional content generation - **Decimal support**: `(gt score 7.5)` for fractional comparisons - **Enhanced prompt sophistication**: Enables parse-prd and expand-task logic matching GitHub specifications ### Loops Use `{{#each array}}...{{/each}}` to iterate over arrays: ``` "user": "Tasks:\n{{#each tasks}}- {{id}}: {{title}}\n{{/each}}" ``` ### Special Loop Variables Inside `{{#each}}` blocks, you have access to: - `{{@index}}`: Current array index (0-based) - `{{@first}}`: Boolean, true for first item - `{{@last}}`: Boolean, true for last item ``` "user": "{{#each tasks}}{{@index}}. {{title}}{{#unless @last}}\n{{/unless}}{{/each}}" ``` ### JSON Serialization Use `{{{json variable}}}` (triple braces) to serialize objects/arrays to JSON: ``` "user": "Analyze these tasks: {{{json tasks}}}" ``` ### Nested Properties Access nested properties with dot notation: ``` "user": "Project: {{context.projectName}}" ``` ## Prompt Variants Variants allow different prompts based on conditions: ```json { "prompts": { "default": { "system": "Default system prompt", "user": "Default user prompt" }, "research": { "condition": "useResearch === true", "system": "Research-focused system prompt", "user": "Research-focused user prompt" }, "high-complexity": { "condition": "complexityScore >= 8", "system": "Complex task handling prompt", "user": "Detailed breakdown request" } } } ``` ### Condition Evaluation Conditions are JavaScript expressions evaluated with parameter values as context: - Simple comparisons: `useResearch === true` - Numeric comparisons: `threshold >= 5` - String matching: `priority === 'high'` - Complex logic: `useResearch && threshold > 7` ## PromptManager Module The PromptManager is implemented in `scripts/modules/prompt-manager.js` and provides: - **Template loading and caching**: Templates are loaded once and cached for performance - **Schema validation**: Comprehensive validation using AJV with detailed error reporting - **Variable substitution**: Handlebars-like syntax for dynamic content - **Variant selection**: Automatic selection based on conditions - **Error handling**: Graceful fallbacks and detailed error messages - **Singleton pattern**: One instance per project root for efficiency ### Validation Behavior - **Schema Available**: Full validation with detailed error messages - **Schema Missing**: Falls back to basic structural validation - **Invalid Templates**: Throws descriptive errors with field-level details - **Parameter Validation**: Type checking, pattern matching, range validation ## Usage in Code 
### Basic Usage ```javascript import { getPromptManager } from '../prompt-manager.js'; const promptManager = getPromptManager(); const { systemPrompt, userPrompt, metadata } = promptManager.loadPrompt('add-task', { // Parameters matching the template's parameter definitions prompt: 'Create a user authentication system', newTaskId: 5, priority: 'high', useResearch: false }); // Use with AI service const result = await generateObjectService({ systemPrompt, prompt: userPrompt, // ... other AI parameters }); ``` ### With Variants ```javascript // Research variant will be selected automatically const { systemPrompt, userPrompt } = promptManager.loadPrompt('expand-task', { useResearch: true, // Triggers research variant task: taskObject, subtaskCount: 5 }); ``` ### Error Handling ```javascript try { const result = promptManager.loadPrompt('invalid-template', {}); } catch (error) { if (error.message.includes('Schema validation failed')) { console.error('Template validation error:', error.message); } else if (error.message.includes('not found')) { console.error('Template not found:', error.message); } } ``` ## Adding New Prompts 1. **Create the JSON file** following the template structure 2. **Define parameters** with proper types, validation, and descriptions 3. **Create prompts** with clear system and user templates 4. **Use template variables** for dynamic content 5. **Add variants** if needed for different contexts 6. **Test thoroughly** with the PromptManager 7. **Update this documentation** with the new prompt details ### Example New Prompt ```json { "id": "new-feature", "version": "1.0.0", "description": "Generate code for a new feature", "parameters": { "featureName": { "type": "string", "required": true, "pattern": "^[a-zA-Z][a-zA-Z0-9-]*$", "description": "Name of the feature to implement" }, "complexity": { "type": "string", "required": false, "enum": ["simple", "medium", "complex"], "default": "medium", "description": "Feature complexity level" } }, "prompts": { "default": { "system": "You are a senior software engineer.", "user": "Create a {{complexity}} {{featureName}} feature." } } } ``` ## Best Practices ### Template Design 1. **Clear IDs**: Use kebab-case, descriptive identifiers 2. **Semantic Versioning**: Follow semver for version management 3. **Comprehensive Parameters**: Define all required and optional parameters 4. **Type Safety**: Use proper parameter types and validation 5. **Clear Descriptions**: Document what each prompt and parameter does ### Variable Usage 1. **Meaningful Names**: Use descriptive variable names 2. **Consistent Patterns**: Follow established naming conventions 3. **Safe Defaults**: Provide sensible default values 4. **Validation**: Use patterns, enums, and ranges for validation ### Variant Strategy 1. **Simple Conditions**: Keep variant conditions easy to understand 2. **Clear Purpose**: Each variant should have a distinct use case 3. **Fallback Logic**: Always provide a default variant 4. **Documentation**: Explain when each variant is used ### Performance 1. **Caching**: Templates are cached automatically 2. **Lazy Loading**: Templates load only when needed 3. **Minimal Variants**: Don't create unnecessary variants 4. 
**Efficient Conditions**: Keep condition evaluation fast ## Testing Prompts ### Validation Testing ```javascript // Test schema validation const promptManager = getPromptManager(); const results = promptManager.validateAllPrompts(); console.log(`Valid: ${results.valid.length}, Errors: ${results.errors.length}`); ``` ### Integration Testing When modifying prompts, ensure to test: - Variable substitution works with actual data structures - Variant selection triggers correctly based on conditions - AI responses remain consistent with expected behavior - All parameters are properly validated - Error handling works for invalid inputs ### Quick Testing ```javascript // Test prompt loading and variable substitution const promptManager = getPromptManager(); const result = promptManager.loadPrompt('research', { query: 'What are the latest React best practices?', detailLevel: 'medium', gatheredContext: 'React project with TypeScript' }); console.log('System:', result.systemPrompt); console.log('User:', result.userPrompt); console.log('Metadata:', result.metadata); ``` ### Testing Checklist - [ ] Template validates against schema - [ ] All required parameters are defined - [ ] Variable substitution works correctly - [ ] Variants trigger under correct conditions - [ ] Error messages are clear and helpful - [ ] Performance is acceptable for repeated usage ## Troubleshooting ### Common Issues **Schema Validation Errors**: - Check required fields are present - Verify parameter types match schema - Ensure version follows semantic versioning - Validate JSON syntax **Variable Substitution Problems**: - Check variable names match parameter names - Verify nested property access syntax - Ensure array iteration syntax is correct - Test with actual data structures **Variant Selection Issues**: - Verify condition syntax is valid JavaScript - Check parameter values match condition expectations - Ensure default variant exists - Test condition evaluation with debug logging **Performance Issues**: - Check for circular references in templates - Verify caching is working correctly - Monitor template loading frequency - Consider simplifying complex conditions ``` -------------------------------------------------------------------------------- /CLAUDE.md: -------------------------------------------------------------------------------- ```markdown # Claude Code Instructions ## Task Master AI Instructions **Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.** @./.taskmaster/CLAUDE.md ## Changeset Guidelines - When creating changesets, remember that it's user-facing, meaning we don't have to get into the specifics of the code, but rather mention what the end-user is getting or fixing from this changeset. ``` -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- ```markdown # Contributing to Task Master Thank you for your interest in contributing to Task Master! We're excited to work with you and appreciate your help in making this project better. 
🚀 ## 🤝 Our Collaborative Approach We're a **PR-friendly team** that values collaboration: - ✅ **We review PRs quickly** - Usually within hours, not days - ✅ **We're super reactive** - Expect fast feedback and engagement - ✅ **We sometimes take over PRs** - If your contribution is valuable but needs cleanup, we might jump in to help finish it - ✅ **We're open to all contributions** - From bug fixes to major features **We don't mind AI-generated code**, but we do expect you to: - ✅ **Review and understand** what the AI generated - ✅ **Test the code thoroughly** before submitting - ✅ **Ensure it's well-written** and follows our patterns - ❌ **Don't submit "AI slop"** - untested, unreviewed AI output > **Why this matters**: We spend significant time reviewing PRs. Help us help you by submitting quality contributions that save everyone time! ## 🚀 Quick Start for Contributors ### 1. Fork and Clone ```bash git clone https://github.com/YOUR_USERNAME/claude-task-master.git cd claude-task-master npm install ``` ### 2. Create a Feature Branch **Important**: Always target the `next` branch, not `main`: ```bash git checkout next git pull origin next git checkout -b feature/your-feature-name ``` ### 3. Make Your Changes Follow our development guidelines below. ### 4. Test Everything Yourself **Before submitting your PR**, ensure: ```bash # Run all tests npm test # Check formatting npm run format-check # Fix formatting if needed npm run format ``` ### 5. Create a Changeset **Required for most changes**: ```bash npm run changeset ``` See the [Changeset Guidelines](#changeset-guidelines) below for details. ### 6. Submit Your PR - Target the `next` branch - Write a clear description - Reference any related issues ## 📋 Development Guidelines ### Branch Strategy - **`main`**: Production-ready code - **`next`**: Development branch - **target this for PRs** - **Feature branches**: `feature/description` or `fix/description` ### Code Quality Standards 1. **Write tests** for new functionality 2. **Follow existing patterns** in the codebase 3. **Add JSDoc comments** for functions 4. **Keep functions focused** and single-purpose ### Testing Requirements Your PR **must pass all CI checks**: - ✅ **Unit tests**: `npm test` - ✅ **Format check**: `npm run format-check` **Test your changes locally first** - this saves review time and shows you care about quality. ## 📦 Changeset Guidelines We use [Changesets](https://github.com/changesets/changesets) to manage versioning and generate changelogs. ### When to Create a Changeset **Always create a changeset for**: - ✅ New features - ✅ Bug fixes - ✅ Breaking changes - ✅ Performance improvements - ✅ User-facing documentation updates - ✅ Dependency updates that affect functionality **Skip changesets for**: - ❌ Internal documentation only - ❌ Test-only changes - ❌ Code formatting/linting - ❌ Development tooling that doesn't affect users ### How to Create a Changeset 1. **After making your changes**: ```bash npm run changeset ``` 2. **Choose the bump type**: - **Major**: Breaking changes - **Minor**: New features - **Patch**: Bug fixes, docs, performance improvements 3. **Write a clear summary**: ``` Add support for custom AI models in MCP configuration ``` 4. 
**Commit the changeset file** with your changes: ```bash git add .changeset/*.md git commit -m "feat: add custom AI model support" ``` ### Changeset vs Git Commit Messages - **Changeset summary**: User-facing, goes in CHANGELOG.md - **Git commit**: Developer-facing, explains the technical change Example: ```bash # Changeset summary (user-facing) "Add support for custom Ollama models" # Git commit message (developer-facing) "feat(models): implement custom Ollama model validation - Add model validation for custom Ollama endpoints - Update configuration schema to support custom models - Add tests for new validation logic" ``` ## 🔧 Development Setup ### Prerequisites - Node.js 18+ - npm or yarn ### Environment Setup 1. **Copy environment template**: ```bash cp .env.example .env ``` 2. **Add your API keys** (for testing AI features): ```bash ANTHROPIC_API_KEY=your_key_here OPENAI_API_KEY=your_key_here # Add others as needed ``` ### Running Tests ```bash # Run all tests npm test # Run tests in watch mode npm run test:watch # Run with coverage npm run test:coverage # Run E2E tests npm run test:e2e ``` ### Code Formatting We use Prettier for consistent formatting: ```bash # Check formatting npm run format-check # Fix formatting npm run format ``` ## 📝 PR Guidelines ### Before Submitting - [ ] **Target the `next` branch** - [ ] **Test everything locally** - [ ] **Run the full test suite** - [ ] **Check code formatting** - [ ] **Create a changeset** (if needed) - [ ] **Re-read your changes** - ensure they're clean and well-thought-out ### PR Description Template ```markdown ## Description Brief description of what this PR does. ## Type of Change - [ ] Bug fix - [ ] New feature - [ ] Breaking change - [ ] Documentation update ## Testing - [ ] I have tested this locally - [ ] All existing tests pass - [ ] I have added tests for new functionality ## Changeset - [ ] I have created a changeset (or this change doesn't need one) ## Additional Notes Any additional context or notes for reviewers. 
``` ### What We Look For ✅ **Good PRs**: - Clear, focused changes - Comprehensive testing - Good commit messages - Proper changeset (when needed) - Self-reviewed code ❌ **Avoid**: - Massive PRs that change everything - Untested code - Formatting issues - Missing changesets for user-facing changes - AI-generated code that wasn't reviewed ## 🏗️ Project Structure ``` claude-task-master/ ├── bin/ # CLI executables ├── mcp-server/ # MCP server implementation ├── scripts/ # Core task management logic ├── src/ # Shared utilities and providers and well refactored code (we are slowly moving everything here) ├── tests/ # Test files ├── docs/ # Documentation └── .cursor/ # Cursor IDE rules and configuration └── assets/ # Assets like rules and configuration for all IDEs ``` ### Key Areas for Contribution - **CLI Commands**: `scripts/modules/commands.js` - **MCP Tools**: `mcp-server/src/tools/` - **Core Logic**: `scripts/modules/task-manager/` - **AI Providers**: `src/ai-providers/` - **Tests**: `tests/` ## 🐛 Reporting Issues ### Bug Reports Include: - Task Master version - Node.js version - Operating system - Steps to reproduce - Expected vs actual behavior - Error messages/logs ### Feature Requests Include: - Clear description of the feature - Use case/motivation - Proposed implementation (if you have ideas) - Willingness to contribute ## 💬 Getting Help - **Discord**: [Join our community](https://discord.gg/taskmasterai) - **Issues**: [GitHub Issues](https://github.com/eyaltoledano/claude-task-master/issues) - **Discussions**: [GitHub Discussions](https://github.com/eyaltoledano/claude-task-master/discussions) ## 📄 License By contributing, you agree that your contributions will be licensed under the same license as the project (MIT with Commons Clause). --- **Thank you for contributing to Task Master!** 🎉 Your contributions help make AI-driven development more accessible and efficient for everyone. 
```

--------------------------------------------------------------------------------
/.taskmaster/CLAUDE.md:
--------------------------------------------------------------------------------

```markdown
# Task Master AI - Agent Integration Guide

## Essential Commands

### Core Workflow Commands

```bash
# Project Setup
task-master init                                  # Initialize Task Master in current project
task-master parse-prd .taskmaster/docs/prd.txt    # Generate tasks from PRD document
task-master models --setup                        # Configure AI models interactively

# Daily Development Workflow
task-master list                                  # Show all tasks with status
task-master next                                  # Get next available task to work on
task-master show <id>                             # View detailed task information (e.g., task-master show 1.2)
task-master set-status --id=<id> --status=done    # Mark task complete

# Task Management
task-master add-task --prompt="description" --research    # Add new task with AI assistance
task-master expand --id=<id> --research --force           # Break task into subtasks
task-master update-task --id=<id> --prompt="changes"      # Update specific task
task-master update --from=<id> --prompt="changes"         # Update multiple tasks from ID onwards
task-master update-subtask --id=<id> --prompt="notes"     # Add implementation notes to subtask

# Analysis & Planning
task-master analyze-complexity --research    # Analyze task complexity
task-master complexity-report                # View complexity analysis
task-master expand --all --research          # Expand all eligible tasks

# Dependencies & Organization
task-master add-dependency --id=<id> --depends-on=<id>    # Add task dependency
task-master move --from=<id> --to=<id>                    # Reorganize task hierarchy
task-master validate-dependencies                         # Check for dependency issues
task-master generate                                      # Update task markdown files (usually auto-called)
```

## Key Files & Project Structure

### Core Files

- `.taskmaster/tasks/tasks.json` - Main task data file (auto-managed)
- `.taskmaster/config.json` - AI model configuration (use `task-master models` to modify)
- `.taskmaster/docs/prd.txt` - Product Requirements Document for parsing
- `.taskmaster/tasks/*.txt` - Individual task files (auto-generated from tasks.json)
- `.env` - API keys for CLI usage

### Claude Code Integration Files

- `CLAUDE.md` - Auto-loaded context for Claude Code (this file)
- `.claude/settings.json` - Claude Code tool allowlist and preferences
- `.claude/commands/` - Custom slash commands for repeated workflows
- `.mcp.json` - MCP server configuration (project-specific)

### Directory Structure

```
project/
├── .taskmaster/
│   ├── tasks/              # Task files directory
│   │   ├── tasks.json      # Main task database
│   │   ├── task-1.md       # Individual task files
│   │   └── task-2.md
│   ├── docs/               # Documentation directory
│   │   └── prd.txt         # Product requirements
│   ├── reports/            # Analysis reports directory
│   │   └── task-complexity-report.json
│   ├── templates/          # Template files
│   │   └── example_prd.txt # Example PRD template
│   └── config.json         # AI models & settings
├── .claude/
│   ├── settings.json       # Claude Code configuration
│   └── commands/           # Custom slash commands
├── .env                    # API keys
├── .mcp.json               # MCP configuration
└── CLAUDE.md               # This file - auto-loaded by Claude Code
```

## MCP Integration

Task Master provides an MCP server that Claude Code can connect to. Configure in `.mcp.json`:

```json
{
  "mcpServers": {
    "task-master-ai": {
      "command": "npx",
      "args": ["-y", "task-master-ai"],
      "env": {
        "ANTHROPIC_API_KEY": "your_key_here",
        "PERPLEXITY_API_KEY": "your_key_here",
        "OPENAI_API_KEY": "OPENAI_API_KEY_HERE",
        "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE",
        "XAI_API_KEY": "XAI_API_KEY_HERE",
        "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE",
        "MISTRAL_API_KEY": "MISTRAL_API_KEY_HERE",
        "AZURE_OPENAI_API_KEY": "AZURE_OPENAI_API_KEY_HERE",
        "OLLAMA_API_KEY": "OLLAMA_API_KEY_HERE"
      }
    }
  }
}
```
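If the connection fails, it can help to confirm that the server command itself resolves. A quick sanity check from the project root (an assumption about the package's behavior: `npx` fetches `task-master-ai` on demand and the server then idles waiting for MCP messages on stdin):

```bash
# Resolve and launch the MCP server manually to verify the command works.
# It should start without errors and then wait for stdio input;
# press Ctrl+C to exit once it has started cleanly.
npx -y task-master-ai
```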
### Essential MCP Tools

```javascript
help; // = shows available taskmaster commands

// Project setup
initialize_project; // = task-master init
parse_prd; // = task-master parse-prd

// Daily workflow
get_tasks; // = task-master list
next_task; // = task-master next
get_task; // = task-master show <id>
set_task_status; // = task-master set-status

// Task management
add_task; // = task-master add-task
expand_task; // = task-master expand
update_task; // = task-master update-task
update_subtask; // = task-master update-subtask
update; // = task-master update

// Analysis
analyze_project_complexity; // = task-master analyze-complexity
complexity_report; // = task-master complexity-report
```

## Claude Code Workflow Integration

### Standard Development Workflow

#### 1. Project Initialization

```bash
# Initialize Task Master
task-master init

# Create or obtain PRD, then parse it
task-master parse-prd .taskmaster/docs/prd.txt

# Analyze complexity and expand tasks
task-master analyze-complexity --research
task-master expand --all --research
```

If tasks already exist, another PRD can be parsed (with new information only!) using `parse-prd` with the `--append` flag. This will add the generated tasks to the existing list of tasks.

#### 2. Daily Development Loop

```bash
# Start each session
task-master next         # Find next available task
task-master show <id>    # Review task details

# During implementation, log code context into the tasks and subtasks
task-master update-subtask --id=<id> --prompt="implementation notes..."

# Complete tasks
task-master set-status --id=<id> --status=done
```

#### 3. Multi-Claude Workflows

For complex projects, use multiple Claude Code sessions:

```bash
# Terminal 1: Main implementation
cd project && claude

# Terminal 2: Testing and validation
cd project-test-worktree && claude

# Terminal 3: Documentation updates
cd project-docs-worktree && claude
```

### Custom Slash Commands

Create `.claude/commands/taskmaster-next.md`:

```markdown
Find the next available Task Master task and show its details.

Steps:

1. Run `task-master next` to get the next task
2. If a task is available, run `task-master show <id>` for full details
3. Provide a summary of what needs to be implemented
4. Suggest the first implementation step
```

Create `.claude/commands/taskmaster-complete.md`:

```markdown
Complete a Task Master task: $ARGUMENTS

Steps:

1. Review the current task with `task-master show $ARGUMENTS`
2. Verify all implementation is complete
3. Run any tests related to this task
4. Mark as complete: `task-master set-status --id=$ARGUMENTS --status=done`
5. Show the next available task with `task-master next`
```
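The same pattern extends to any repeated workflow. For instance, a hypothetical `taskmaster-log` command (the name and file below are illustrative, not part of Task Master) could wrap progress logging:

```bash
# Hypothetical: create a slash command that logs implementation notes
# for a given subtask via `task-master update-subtask`.
cat > .claude/commands/taskmaster-log.md << 'EOF'
Log implementation notes for Task Master subtask: $ARGUMENTS

Steps:

1. Summarize what was just implemented, what worked, and what didn't
2. Run `task-master update-subtask --id=$ARGUMENTS --prompt="<summary>"`
EOF
```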
## Tool Allowlist Recommendations

Add to `.claude/settings.json`:

```json
{
  "allowedTools": [
    "Edit",
    "Bash(task-master *)",
    "Bash(git commit:*)",
    "Bash(git add:*)",
    "Bash(npm run *)",
    "mcp__task_master_ai__*"
  ]
}
```

## Configuration & Setup

### API Keys Required

At least **one** of these API keys must be configured:

- `ANTHROPIC_API_KEY` (Claude models) - **Recommended**
- `PERPLEXITY_API_KEY` (Research features) - **Highly recommended**
- `OPENAI_API_KEY` (GPT models)
- `GOOGLE_API_KEY` (Gemini models)
- `MISTRAL_API_KEY` (Mistral models)
- `OPENROUTER_API_KEY` (Multiple models)
- `XAI_API_KEY` (Grok models)

An API key is required for any provider used across any of the three roles (main, research, fallback) defined in the `models` command.

### Model Configuration

```bash
# Interactive setup (recommended)
task-master models --setup

# Set specific models
task-master models --set-main claude-3-5-sonnet-20241022
task-master models --set-research perplexity-llama-3.1-sonar-large-128k-online
task-master models --set-fallback gpt-4o-mini
```

## Task Structure & IDs

### Task ID Format

- Main tasks: `1`, `2`, `3`, etc.
- Subtasks: `1.1`, `1.2`, `2.1`, etc.
- Sub-subtasks: `1.1.1`, `1.1.2`, etc.

### Task Status Values

- `pending` - Ready to work on
- `in-progress` - Currently being worked on
- `done` - Completed and verified
- `deferred` - Postponed
- `cancelled` - No longer needed
- `blocked` - Waiting on external factors

### Task Fields

```json
{
  "id": "1.2",
  "title": "Implement user authentication",
  "description": "Set up JWT-based auth system",
  "status": "pending",
  "priority": "high",
  "dependencies": ["1.1"],
  "details": "Use bcrypt for hashing, JWT for tokens...",
  "testStrategy": "Unit tests for auth functions, integration tests for login flow",
  "subtasks": []
}
```

## Claude Code Best Practices with Task Master

### Context Management

- Use `/clear` between different tasks to maintain focus
- This CLAUDE.md file is automatically loaded for context
- Use `task-master show <id>` to pull specific task context when needed

### Iterative Implementation

1. `task-master show <subtask-id>` - Understand requirements
2. Explore codebase and plan implementation
3. `task-master update-subtask --id=<id> --prompt="detailed plan"` - Log plan
4. `task-master set-status --id=<id> --status=in-progress` - Start work
5. Implement code following logged plan
6. `task-master update-subtask --id=<id> --prompt="what worked/didn't work"` - Log progress
7. `task-master set-status --id=<id> --status=done` - Complete task

### Complex Workflows with Checklists

For large migrations or multi-step processes (see the command sketch after this list):

1. Create a markdown PRD file describing the new changes: `touch task-migration-checklist.md` (PRDs can be `.txt` or `.md`)
2. Use Taskmaster to parse the new PRD with `task-master parse-prd --append` (also available in MCP). This adds the generated tasks to the existing list
3. Use Taskmaster to expand the newly generated tasks into subtasks. Consider using `analyze-complexity` with the correct `--from` and `--to` IDs (the new IDs) to identify the ideal subtask count for each task, then expand them
4. Work through items systematically, checking them off as completed
5. Use `task-master update-subtask` to log progress on each task/subtask, and to update or research them before/during implementation if you get stuck
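A minimal sketch of steps 1-3 as shell commands (the file name, the `--from`/`--to` range, and task ID `25` are placeholders; substitute the IDs of the newly appended tasks):

```bash
# 1. Describe only the new work in a PRD file (.txt or .md both work)
touch task-migration-checklist.md   # then fill it in

# 2. Append the generated tasks to the existing list
task-master parse-prd task-migration-checklist.md --append

# 3. Analyze just the newly added tasks (placeholder ID range),
#    then expand them into subtasks
task-master analyze-complexity --from=25 --to=30 --research
task-master expand --id=25 --research   # repeat for each new task
```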
### Git Integration

Task Master works well with the `gh` CLI:

```bash
# Create PR for completed task
gh pr create --title "Complete task 1.2: User authentication" --body "Implements JWT auth system as specified in task 1.2"

# Reference task in commits
git commit -m "feat: implement JWT auth (task 1.2)"
```

### Parallel Development with Git Worktrees

```bash
# Create worktrees for parallel task development
git worktree add ../project-auth feature/auth-system
git worktree add ../project-api feature/api-refactor

# Run Claude Code in each worktree
cd ../project-auth && claude    # Terminal 1: Auth work
cd ../project-api && claude     # Terminal 2: API work
```

## Troubleshooting

### AI Commands Failing

```bash
# Check API keys are configured
cat .env    # For CLI usage

# Verify model configuration
task-master models

# Test with a different model
task-master models --set-fallback gpt-4o-mini
```

### MCP Connection Issues

- Check `.mcp.json` configuration
- Verify Node.js installation
- Use the `--mcp-debug` flag when starting Claude Code
- Use the CLI as a fallback if MCP is unavailable

### Task File Sync Issues

```bash
# Regenerate task files from tasks.json
task-master generate

# Fix dependency issues
task-master fix-dependencies
```

DO NOT RE-INITIALIZE. That will not do anything beyond re-adding the same Taskmaster core files.

## Important Notes

### AI-Powered Operations

These commands make AI calls and may take up to a minute:

- `parse_prd` / `task-master parse-prd`
- `analyze_project_complexity` / `task-master analyze-complexity`
- `expand_task` / `task-master expand`
- `expand_all` / `task-master expand --all`
- `add_task` / `task-master add-task`
- `update` / `task-master update`
- `update_task` / `task-master update-task`
- `update_subtask` / `task-master update-subtask`

### File Management

- Never manually edit `tasks.json` - use commands instead
- Never manually edit `.taskmaster/config.json` - use `task-master models`
- Task markdown files in `tasks/` are auto-generated
- Run `task-master generate` after manual changes to tasks.json

### Claude Code Session Management

- Use `/clear` frequently to maintain focused context
- Create custom slash commands for repeated Task Master workflows
- Configure the tool allowlist to streamline permissions
- Use headless mode for automation: `claude -p "task-master next"`

### Multi-Task Updates

- Use `update --from=<id>` to update multiple future tasks
- Use `update-task --id=<id>` for single task updates
- Use `update-subtask --id=<id>` for implementation logging

### Research Mode

- Add the `--research` flag for research-based AI enhancement
- Requires a research model API key such as Perplexity (`PERPLEXITY_API_KEY`) in the environment
- Provides more informed task creation and updates
- Recommended for complex technical tasks

---

_This guide ensures Claude Code has immediate access to Task Master's essential functionality for agentic development workflows._
```