This is page 42 of 52. Use http://codebase.md/eyaltoledano/claude-task-master?lines=true&page={x} to view the full context.

# Directory Structure

``` ├── .changeset │ ├── config.json │ └── README.md ├── .claude │ ├── agents │ │ ├── task-checker.md │ │ ├── task-executor.md │ │ └── task-orchestrator.md │ ├── commands │ │ ├── dedupe.md │ │ └── tm │ │ ├── add-dependency │ │ │ └── add-dependency.md │ │ ├── add-subtask │ │ │ ├── add-subtask.md │ │ │ └── convert-task-to-subtask.md │ │ ├── add-task │ │ │ └── add-task.md │ │ ├── analyze-complexity │ │ │ └── analyze-complexity.md │ │ ├── complexity-report │ │ │ └── complexity-report.md │ │ ├── expand │ │ │ ├── expand-all-tasks.md │ │ │ └── expand-task.md │ │ ├── fix-dependencies │ │ │ └── fix-dependencies.md │ │ ├── generate │ │ │ └── generate-tasks.md │ │ ├── help.md │ │ ├── init │ │ │ ├── init-project-quick.md │ │ │ └── init-project.md │ │ ├── learn.md │ │ ├── list │ │ │ ├── list-tasks-by-status.md │ │ │ ├── list-tasks-with-subtasks.md │ │ │ └── list-tasks.md │ │ ├── models │ │ │ ├── setup-models.md │ │ │ └── view-models.md │ │ ├── next │ │ │ └── next-task.md │ │ ├── parse-prd │ │ │ ├── parse-prd-with-research.md │ │ │ └── parse-prd.md │ │ ├── remove-dependency │ │ │ └── remove-dependency.md │ │ ├── remove-subtask │ │ │ └── remove-subtask.md │ │ ├── remove-subtasks │ │ │ ├── remove-all-subtasks.md │ │ │ └── remove-subtasks.md │ │ ├── remove-task │ │ │ └── remove-task.md │ │ ├── set-status │ │ │ ├── to-cancelled.md │ │ │ ├── to-deferred.md │ │ │ ├── to-done.md │ │ │ ├── to-in-progress.md │ │ │ ├── to-pending.md │ │ │ └── to-review.md │ │ ├── setup │ │ │ ├── install-taskmaster.md │ │ │ └── quick-install-taskmaster.md │ │ ├── show │ │ │ └── show-task.md │ │ ├── status │ │ │ └── project-status.md │ │ ├── sync-readme │ │ │ └── sync-readme.md │ │ ├── tm-main.md │ │ ├── update │ │ │ ├── update-single-task.md │ │ │ ├── update-task.md │ │ │ └── update-tasks-from-id.md │ │ ├── utils │ │ │ └── analyze-project.md │ │ ├── 
validate-dependencies │ │ │ └── validate-dependencies.md │ │ └── workflows │ │ ├── auto-implement-tasks.md │ │ ├── command-pipeline.md │ │ └── smart-workflow.md │ └── TM_COMMANDS_GUIDE.md ├── .coderabbit.yaml ├── .cursor │ ├── mcp.json │ └── rules │ ├── ai_providers.mdc │ ├── ai_services.mdc │ ├── architecture.mdc │ ├── changeset.mdc │ ├── commands.mdc │ ├── context_gathering.mdc │ ├── cursor_rules.mdc │ ├── dependencies.mdc │ ├── dev_workflow.mdc │ ├── git_workflow.mdc │ ├── glossary.mdc │ ├── mcp.mdc │ ├── new_features.mdc │ ├── self_improve.mdc │ ├── tags.mdc │ ├── taskmaster.mdc │ ├── tasks.mdc │ ├── telemetry.mdc │ ├── test_workflow.mdc │ ├── tests.mdc │ ├── ui.mdc │ └── utilities.mdc ├── .cursorignore ├── .env.example ├── .github │ ├── ISSUE_TEMPLATE │ │ ├── bug_report.md │ │ ├── enhancements---feature-requests.md │ │ └── feedback.md │ ├── PULL_REQUEST_TEMPLATE │ │ ├── bugfix.md │ │ ├── config.yml │ │ ├── feature.md │ │ └── integration.md │ ├── PULL_REQUEST_TEMPLATE.md │ ├── scripts │ │ ├── auto-close-duplicates.mjs │ │ ├── backfill-duplicate-comments.mjs │ │ ├── check-pre-release-mode.mjs │ │ ├── parse-metrics.mjs │ │ ├── release.mjs │ │ ├── tag-extension.mjs │ │ └── utils.mjs │ └── workflows │ ├── auto-close-duplicates.yml │ ├── backfill-duplicate-comments.yml │ ├── ci.yml │ ├── claude-dedupe-issues.yml │ ├── claude-docs-trigger.yml │ ├── claude-docs-updater.yml │ ├── claude-issue-triage.yml │ ├── claude.yml │ ├── extension-ci.yml │ ├── extension-release.yml │ ├── log-issue-events.yml │ ├── pre-release.yml │ ├── release-check.yml │ ├── release.yml │ ├── update-models-md.yml │ └── weekly-metrics-discord.yml ├── .gitignore ├── .kiro │ ├── hooks │ │ ├── tm-code-change-task-tracker.kiro.hook │ │ ├── tm-complexity-analyzer.kiro.hook │ │ ├── tm-daily-standup-assistant.kiro.hook │ │ ├── tm-git-commit-task-linker.kiro.hook │ │ ├── tm-pr-readiness-checker.kiro.hook │ │ ├── tm-task-dependency-auto-progression.kiro.hook │ │ └── tm-test-success-task-completer.kiro.hook 
│ ├── settings │ │ └── mcp.json │ └── steering │ ├── dev_workflow.md │ ├── kiro_rules.md │ ├── self_improve.md │ ├── taskmaster_hooks_workflow.md │ └── taskmaster.md ├── .manypkg.json ├── .mcp.json ├── .npmignore ├── .nvmrc ├── .taskmaster │ ├── CLAUDE.md │ ├── config.json │ ├── docs │ │ ├── MIGRATION-ROADMAP.md │ │ ├── prd-tm-start.txt │ │ ├── prd.txt │ │ ├── README.md │ │ ├── research │ │ │ ├── 2025-06-14_how-can-i-improve-the-scope-up-and-scope-down-comm.md │ │ │ ├── 2025-06-14_should-i-be-using-any-specific-libraries-for-this.md │ │ │ ├── 2025-06-14_test-save-functionality.md │ │ │ ├── 2025-06-14_test-the-fix-for-duplicate-saves-final-test.md │ │ │ └── 2025-08-01_do-we-need-to-add-new-commands-or-can-we-just-weap.md │ │ ├── task-template-importing-prd.txt │ │ ├── test-prd.txt │ │ └── tm-core-phase-1.txt │ ├── reports │ │ ├── task-complexity-report_cc-kiro-hooks.json │ │ ├── task-complexity-report_test-prd-tag.json │ │ ├── task-complexity-report_tm-core-phase-1.json │ │ ├── task-complexity-report.json │ │ └── tm-core-complexity.json │ ├── state.json │ ├── tasks │ │ ├── task_001_tm-start.txt │ │ ├── task_002_tm-start.txt │ │ ├── task_003_tm-start.txt │ │ ├── task_004_tm-start.txt │ │ ├── task_007_tm-start.txt │ │ └── tasks.json │ └── templates │ └── example_prd.txt ├── .vscode │ ├── extensions.json │ └── settings.json ├── apps │ ├── cli │ │ ├── CHANGELOG.md │ │ ├── package.json │ │ ├── src │ │ │ ├── commands │ │ │ │ ├── auth.command.ts │ │ │ │ ├── context.command.ts │ │ │ │ ├── list.command.ts │ │ │ │ ├── set-status.command.ts │ │ │ │ ├── show.command.ts │ │ │ │ └── start.command.ts │ │ │ ├── index.ts │ │ │ ├── ui │ │ │ │ ├── components │ │ │ │ │ ├── dashboard.component.ts │ │ │ │ │ ├── header.component.ts │ │ │ │ │ ├── index.ts │ │ │ │ │ ├── next-task.component.ts │ │ │ │ │ ├── suggested-steps.component.ts │ │ │ │ │ └── task-detail.component.ts │ │ │ │ └── index.ts │ │ │ └── utils │ │ │ ├── auto-update.ts │ │ │ └── ui.ts │ │ └── tsconfig.json │ ├── docs │ │ ├── 
archive │ │ │ ├── ai-client-utils-example.mdx │ │ │ ├── ai-development-workflow.mdx │ │ │ ├── command-reference.mdx │ │ │ ├── configuration.mdx │ │ │ ├── cursor-setup.mdx │ │ │ ├── examples.mdx │ │ │ └── Installation.mdx │ │ ├── best-practices │ │ │ ├── advanced-tasks.mdx │ │ │ ├── configuration-advanced.mdx │ │ │ └── index.mdx │ │ ├── capabilities │ │ │ ├── cli-root-commands.mdx │ │ │ ├── index.mdx │ │ │ ├── mcp.mdx │ │ │ └── task-structure.mdx │ │ ├── CHANGELOG.md │ │ ├── docs.json │ │ ├── favicon.svg │ │ ├── getting-started │ │ │ ├── contribute.mdx │ │ │ ├── faq.mdx │ │ │ └── quick-start │ │ │ ├── configuration-quick.mdx │ │ │ ├── execute-quick.mdx │ │ │ ├── installation.mdx │ │ │ ├── moving-forward.mdx │ │ │ ├── prd-quick.mdx │ │ │ ├── quick-start.mdx │ │ │ ├── requirements.mdx │ │ │ ├── rules-quick.mdx │ │ │ └── tasks-quick.mdx │ │ ├── introduction.mdx │ │ ├── licensing.md │ │ ├── logo │ │ │ ├── dark.svg │ │ │ ├── light.svg │ │ │ └── task-master-logo.png │ │ ├── package.json │ │ ├── README.md │ │ ├── style.css │ │ ├── vercel.json │ │ └── whats-new.mdx │ └── extension │ ├── .vscodeignore │ ├── assets │ │ ├── banner.png │ │ ├── icon-dark.svg │ │ ├── icon-light.svg │ │ ├── icon.png │ │ ├── screenshots │ │ │ ├── kanban-board.png │ │ │ └── task-details.png │ │ └── sidebar-icon.svg │ ├── CHANGELOG.md │ ├── components.json │ ├── docs │ │ ├── extension-CI-setup.md │ │ └── extension-development-guide.md │ ├── esbuild.js │ ├── LICENSE │ ├── package.json │ ├── package.mjs │ ├── package.publish.json │ ├── README.md │ ├── src │ │ ├── components │ │ │ ├── ConfigView.tsx │ │ │ ├── constants.ts │ │ │ ├── TaskDetails │ │ │ │ ├── AIActionsSection.tsx │ │ │ │ ├── DetailsSection.tsx │ │ │ │ ├── PriorityBadge.tsx │ │ │ │ ├── SubtasksSection.tsx │ │ │ │ ├── TaskMetadataSidebar.tsx │ │ │ │ └── useTaskDetails.ts │ │ │ ├── TaskDetailsView.tsx │ │ │ ├── TaskMasterLogo.tsx │ │ │ └── ui │ │ │ ├── badge.tsx │ │ │ ├── breadcrumb.tsx │ │ │ ├── button.tsx │ │ │ ├── card.tsx │ │ │ ├── 
collapsible.tsx │ │ │ ├── CollapsibleSection.tsx │ │ │ ├── dropdown-menu.tsx │ │ │ ├── label.tsx │ │ │ ├── scroll-area.tsx │ │ │ ├── separator.tsx │ │ │ ├── shadcn-io │ │ │ │ └── kanban │ │ │ │ └── index.tsx │ │ │ └── textarea.tsx │ │ ├── extension.ts │ │ ├── index.ts │ │ ├── lib │ │ │ └── utils.ts │ │ ├── services │ │ │ ├── config-service.ts │ │ │ ├── error-handler.ts │ │ │ ├── notification-preferences.ts │ │ │ ├── polling-service.ts │ │ │ ├── polling-strategies.ts │ │ │ ├── sidebar-webview-manager.ts │ │ │ ├── task-repository.ts │ │ │ ├── terminal-manager.ts │ │ │ └── webview-manager.ts │ │ ├── test │ │ │ └── extension.test.ts │ │ ├── utils │ │ │ ├── configManager.ts │ │ │ ├── connectionManager.ts │ │ │ ├── errorHandler.ts │ │ │ ├── event-emitter.ts │ │ │ ├── logger.ts │ │ │ ├── mcpClient.ts │ │ │ ├── notificationPreferences.ts │ │ │ └── task-master-api │ │ │ ├── cache │ │ │ │ └── cache-manager.ts │ │ │ ├── index.ts │ │ │ ├── mcp-client.ts │ │ │ ├── transformers │ │ │ │ └── task-transformer.ts │ │ │ └── types │ │ │ └── index.ts │ │ └── webview │ │ ├── App.tsx │ │ ├── components │ │ │ ├── AppContent.tsx │ │ │ ├── EmptyState.tsx │ │ │ ├── ErrorBoundary.tsx │ │ │ ├── PollingStatus.tsx │ │ │ ├── PriorityBadge.tsx │ │ │ ├── SidebarView.tsx │ │ │ ├── TagDropdown.tsx │ │ │ ├── TaskCard.tsx │ │ │ ├── TaskEditModal.tsx │ │ │ ├── TaskMasterKanban.tsx │ │ │ ├── ToastContainer.tsx │ │ │ └── ToastNotification.tsx │ │ ├── constants │ │ │ └── index.ts │ │ ├── contexts │ │ │ └── VSCodeContext.tsx │ │ ├── hooks │ │ │ ├── useTaskQueries.ts │ │ │ ├── useVSCodeMessages.ts │ │ │ └── useWebviewHeight.ts │ │ ├── index.css │ │ ├── index.tsx │ │ ├── providers │ │ │ └── QueryProvider.tsx │ │ ├── reducers │ │ │ └── appReducer.ts │ │ ├── sidebar.tsx │ │ ├── types │ │ │ └── index.ts │ │ └── utils │ │ ├── logger.ts │ │ └── toast.ts │ └── tsconfig.json ├── assets │ ├── .windsurfrules │ ├── AGENTS.md │ ├── claude │ │ ├── agents │ │ │ ├── task-checker.md │ │ │ ├── task-executor.md │ │ │ └── 
task-orchestrator.md │ │ ├── commands │ │ │ └── tm │ │ │ ├── add-dependency │ │ │ │ └── add-dependency.md │ │ │ ├── add-subtask │ │ │ │ ├── add-subtask.md │ │ │ │ └── convert-task-to-subtask.md │ │ │ ├── add-task │ │ │ │ └── add-task.md │ │ │ ├── analyze-complexity │ │ │ │ └── analyze-complexity.md │ │ │ ├── clear-subtasks │ │ │ │ ├── clear-all-subtasks.md │ │ │ │ └── clear-subtasks.md │ │ │ ├── complexity-report │ │ │ │ └── complexity-report.md │ │ │ ├── expand │ │ │ │ ├── expand-all-tasks.md │ │ │ │ └── expand-task.md │ │ │ ├── fix-dependencies │ │ │ │ └── fix-dependencies.md │ │ │ ├── generate │ │ │ │ └── generate-tasks.md │ │ │ ├── help.md │ │ │ ├── init │ │ │ │ ├── init-project-quick.md │ │ │ │ └── init-project.md │ │ │ ├── learn.md │ │ │ ├── list │ │ │ │ ├── list-tasks-by-status.md │ │ │ │ ├── list-tasks-with-subtasks.md │ │ │ │ └── list-tasks.md │ │ │ ├── models │ │ │ │ ├── setup-models.md │ │ │ │ └── view-models.md │ │ │ ├── next │ │ │ │ └── next-task.md │ │ │ ├── parse-prd │ │ │ │ ├── parse-prd-with-research.md │ │ │ │ └── parse-prd.md │ │ │ ├── remove-dependency │ │ │ │ └── remove-dependency.md │ │ │ ├── remove-subtask │ │ │ │ └── remove-subtask.md │ │ │ ├── remove-subtasks │ │ │ │ ├── remove-all-subtasks.md │ │ │ │ └── remove-subtasks.md │ │ │ ├── remove-task │ │ │ │ └── remove-task.md │ │ │ ├── set-status │ │ │ │ ├── to-cancelled.md │ │ │ │ ├── to-deferred.md │ │ │ │ ├── to-done.md │ │ │ │ ├── to-in-progress.md │ │ │ │ ├── to-pending.md │ │ │ │ └── to-review.md │ │ │ ├── setup │ │ │ │ ├── install-taskmaster.md │ │ │ │ └── quick-install-taskmaster.md │ │ │ ├── show │ │ │ │ └── show-task.md │ │ │ ├── status │ │ │ │ └── project-status.md │ │ │ ├── sync-readme │ │ │ │ └── sync-readme.md │ │ │ ├── tm-main.md │ │ │ ├── update │ │ │ │ ├── update-single-task.md │ │ │ │ ├── update-task.md │ │ │ │ └── update-tasks-from-id.md │ │ │ ├── utils │ │ │ │ └── analyze-project.md │ │ │ ├── validate-dependencies │ │ │ │ └── validate-dependencies.md │ │ │ └── workflows │ │ 
│ ├── auto-implement-tasks.md │ │ │ ├── command-pipeline.md │ │ │ └── smart-workflow.md │ │ └── TM_COMMANDS_GUIDE.md │ ├── config.json │ ├── env.example │ ├── example_prd.txt │ ├── gitignore │ ├── kiro-hooks │ │ ├── tm-code-change-task-tracker.kiro.hook │ │ ├── tm-complexity-analyzer.kiro.hook │ │ ├── tm-daily-standup-assistant.kiro.hook │ │ ├── tm-git-commit-task-linker.kiro.hook │ │ ├── tm-pr-readiness-checker.kiro.hook │ │ ├── tm-task-dependency-auto-progression.kiro.hook │ │ └── tm-test-success-task-completer.kiro.hook │ ├── roocode │ │ ├── .roo │ │ │ ├── rules-architect │ │ │ │ └── architect-rules │ │ │ ├── rules-ask │ │ │ │ └── ask-rules │ │ │ ├── rules-code │ │ │ │ └── code-rules │ │ │ ├── rules-debug │ │ │ │ └── debug-rules │ │ │ ├── rules-orchestrator │ │ │ │ └── orchestrator-rules │ │ │ └── rules-test │ │ │ └── test-rules │ │ └── .roomodes │ ├── rules │ │ ├── cursor_rules.mdc │ │ ├── dev_workflow.mdc │ │ ├── self_improve.mdc │ │ ├── taskmaster_hooks_workflow.mdc │ │ └── taskmaster.mdc │ └── scripts_README.md ├── bin │ └── task-master.js ├── biome.json ├── CHANGELOG.md ├── CLAUDE.md ├── context │ ├── chats │ │ ├── add-task-dependencies-1.md │ │ └── max-min-tokens.txt.md │ ├── fastmcp-core.txt │ ├── fastmcp-docs.txt │ ├── MCP_INTEGRATION.md │ ├── mcp-js-sdk-docs.txt │ ├── mcp-protocol-repo.txt │ ├── mcp-protocol-schema-03262025.json │ └── mcp-protocol-spec.txt ├── CONTRIBUTING.md ├── docs │ ├── CLI-COMMANDER-PATTERN.md │ ├── command-reference.md │ ├── configuration.md │ ├── contributor-docs │ │ └── testing-roo-integration.md │ ├── cross-tag-task-movement.md │ ├── examples │ │ └── claude-code-usage.md │ ├── examples.md │ ├── licensing.md │ ├── mcp-provider-guide.md │ ├── mcp-provider.md │ ├── migration-guide.md │ ├── models.md │ ├── providers │ │ └── gemini-cli.md │ ├── README.md │ ├── scripts │ │ └── models-json-to-markdown.js │ ├── task-structure.md │ └── tutorial.md ├── images │ └── logo.png ├── index.js ├── jest.config.js ├── jest.resolver.cjs ├── 
LICENSE ├── llms-install.md ├── mcp-server │ ├── server.js │ └── src │ ├── core │ │ ├── __tests__ │ │ │ └── context-manager.test.js │ │ ├── context-manager.js │ │ ├── direct-functions │ │ │ ├── add-dependency.js │ │ │ ├── add-subtask.js │ │ │ ├── add-tag.js │ │ │ ├── add-task.js │ │ │ ├── analyze-task-complexity.js │ │ │ ├── cache-stats.js │ │ │ ├── clear-subtasks.js │ │ │ ├── complexity-report.js │ │ │ ├── copy-tag.js │ │ │ ├── create-tag-from-branch.js │ │ │ ├── delete-tag.js │ │ │ ├── expand-all-tasks.js │ │ │ ├── expand-task.js │ │ │ ├── fix-dependencies.js │ │ │ ├── generate-task-files.js │ │ │ ├── initialize-project.js │ │ │ ├── list-tags.js │ │ │ ├── list-tasks.js │ │ │ ├── models.js │ │ │ ├── move-task-cross-tag.js │ │ │ ├── move-task.js │ │ │ ├── next-task.js │ │ │ ├── parse-prd.js │ │ │ ├── remove-dependency.js │ │ │ ├── remove-subtask.js │ │ │ ├── remove-task.js │ │ │ ├── rename-tag.js │ │ │ ├── research.js │ │ │ ├── response-language.js │ │ │ ├── rules.js │ │ │ ├── scope-down.js │ │ │ ├── scope-up.js │ │ │ ├── set-task-status.js │ │ │ ├── show-task.js │ │ │ ├── update-subtask-by-id.js │ │ │ ├── update-task-by-id.js │ │ │ ├── update-tasks.js │ │ │ ├── use-tag.js │ │ │ └── validate-dependencies.js │ │ ├── task-master-core.js │ │ └── utils │ │ ├── env-utils.js │ │ └── path-utils.js │ ├── custom-sdk │ │ ├── errors.js │ │ ├── index.js │ │ ├── json-extractor.js │ │ ├── language-model.js │ │ ├── message-converter.js │ │ └── schema-converter.js │ ├── index.js │ ├── logger.js │ ├── providers │ │ └── mcp-provider.js │ └── tools │ ├── add-dependency.js │ ├── add-subtask.js │ ├── add-tag.js │ ├── add-task.js │ ├── analyze.js │ ├── clear-subtasks.js │ ├── complexity-report.js │ ├── copy-tag.js │ ├── delete-tag.js │ ├── expand-all.js │ ├── expand-task.js │ ├── fix-dependencies.js │ ├── generate.js │ ├── get-operation-status.js │ ├── get-task.js │ ├── get-tasks.js │ ├── index.js │ ├── initialize-project.js │ ├── list-tags.js │ ├── models.js │ ├── move-task.js │ ├── 
next-task.js │ ├── parse-prd.js │ ├── remove-dependency.js │ ├── remove-subtask.js │ ├── remove-task.js │ ├── rename-tag.js │ ├── research.js │ ├── response-language.js │ ├── rules.js │ ├── scope-down.js │ ├── scope-up.js │ ├── set-task-status.js │ ├── update-subtask.js │ ├── update-task.js │ ├── update.js │ ├── use-tag.js │ ├── utils.js │ └── validate-dependencies.js ├── mcp-test.js ├── output.json ├── package-lock.json ├── package.json ├── packages │ ├── build-config │ │ ├── CHANGELOG.md │ │ ├── package.json │ │ ├── src │ │ │ └── tsdown.base.ts │ │ └── tsconfig.json │ └── tm-core │ ├── .gitignore │ ├── CHANGELOG.md │ ├── docs │ │ └── listTasks-architecture.md │ ├── package.json │ ├── POC-STATUS.md │ ├── README.md │ ├── src │ │ ├── auth │ │ │ ├── auth-manager.test.ts │ │ │ ├── auth-manager.ts │ │ │ ├── config.ts │ │ │ ├── credential-store.test.ts │ │ │ ├── credential-store.ts │ │ │ ├── index.ts │ │ │ ├── oauth-service.ts │ │ │ ├── supabase-session-storage.ts │ │ │ └── types.ts │ │ ├── clients │ │ │ ├── index.ts │ │ │ └── supabase-client.ts │ │ ├── config │ │ │ ├── config-manager.spec.ts │ │ │ ├── config-manager.ts │ │ │ ├── index.ts │ │ │ └── services │ │ │ ├── config-loader.service.spec.ts │ │ │ ├── config-loader.service.ts │ │ │ ├── config-merger.service.spec.ts │ │ │ ├── config-merger.service.ts │ │ │ ├── config-persistence.service.spec.ts │ │ │ ├── config-persistence.service.ts │ │ │ ├── environment-config-provider.service.spec.ts │ │ │ ├── environment-config-provider.service.ts │ │ │ ├── index.ts │ │ │ ├── runtime-state-manager.service.spec.ts │ │ │ └── runtime-state-manager.service.ts │ │ ├── constants │ │ │ └── index.ts │ │ ├── entities │ │ │ └── task.entity.ts │ │ ├── errors │ │ │ ├── index.ts │ │ │ └── task-master-error.ts │ │ ├── executors │ │ │ ├── base-executor.ts │ │ │ ├── claude-executor.ts │ │ │ ├── executor-factory.ts │ │ │ ├── executor-service.ts │ │ │ ├── index.ts │ │ │ └── types.ts │ │ ├── index.ts │ │ ├── interfaces │ │ │ ├── 
ai-provider.interface.ts │ │ │ ├── configuration.interface.ts │ │ │ ├── index.ts │ │ │ └── storage.interface.ts │ │ ├── logger │ │ │ ├── factory.ts │ │ │ ├── index.ts │ │ │ └── logger.ts │ │ ├── mappers │ │ │ └── TaskMapper.ts │ │ ├── parser │ │ │ └── index.ts │ │ ├── providers │ │ │ ├── ai │ │ │ │ ├── base-provider.ts │ │ │ │ └── index.ts │ │ │ └── index.ts │ │ ├── repositories │ │ │ ├── supabase-task-repository.ts │ │ │ └── task-repository.interface.ts │ │ ├── services │ │ │ ├── index.ts │ │ │ ├── organization.service.ts │ │ │ ├── task-execution-service.ts │ │ │ └── task-service.ts │ │ ├── storage │ │ │ ├── api-storage.ts │ │ │ ├── file-storage │ │ │ │ ├── file-operations.ts │ │ │ │ ├── file-storage.ts │ │ │ │ ├── format-handler.ts │ │ │ │ ├── index.ts │ │ │ │ └── path-resolver.ts │ │ │ ├── index.ts │ │ │ └── storage-factory.ts │ │ ├── subpath-exports.test.ts │ │ ├── task-master-core.ts │ │ ├── types │ │ │ ├── database.types.ts │ │ │ ├── index.ts │ │ │ └── legacy.ts │ │ └── utils │ │ ├── id-generator.ts │ │ └── index.ts │ ├── tests │ │ ├── integration │ │ │ └── list-tasks.test.ts │ │ ├── mocks │ │ │ └── mock-provider.ts │ │ ├── setup.ts │ │ └── unit │ │ ├── base-provider.test.ts │ │ ├── executor.test.ts │ │ └── smoke.test.ts │ ├── tsconfig.json │ └── vitest.config.ts ├── README-task-master.md ├── README.md ├── scripts │ ├── dev.js │ ├── init.js │ ├── modules │ │ ├── ai-services-unified.js │ │ ├── commands.js │ │ ├── config-manager.js │ │ ├── dependency-manager.js │ │ ├── index.js │ │ ├── prompt-manager.js │ │ ├── supported-models.json │ │ ├── sync-readme.js │ │ ├── task-manager │ │ │ ├── add-subtask.js │ │ │ ├── add-task.js │ │ │ ├── analyze-task-complexity.js │ │ │ ├── clear-subtasks.js │ │ │ ├── expand-all-tasks.js │ │ │ ├── expand-task.js │ │ │ ├── find-next-task.js │ │ │ ├── generate-task-files.js │ │ │ ├── is-task-dependent.js │ │ │ ├── list-tasks.js │ │ │ ├── migrate.js │ │ │ ├── models.js │ │ │ ├── move-task.js │ │ │ ├── parse-prd │ │ │ │ ├── index.js │ │ 
│ │ ├── parse-prd-config.js │ │ │ │ ├── parse-prd-helpers.js │ │ │ │ ├── parse-prd-non-streaming.js │ │ │ │ ├── parse-prd-streaming.js │ │ │ │ └── parse-prd.js │ │ │ ├── remove-subtask.js │ │ │ ├── remove-task.js │ │ │ ├── research.js │ │ │ ├── response-language.js │ │ │ ├── scope-adjustment.js │ │ │ ├── set-task-status.js │ │ │ ├── tag-management.js │ │ │ ├── task-exists.js │ │ │ ├── update-single-task-status.js │ │ │ ├── update-subtask-by-id.js │ │ │ ├── update-task-by-id.js │ │ │ └── update-tasks.js │ │ ├── task-manager.js │ │ ├── ui.js │ │ ├── update-config-tokens.js │ │ ├── utils │ │ │ ├── contextGatherer.js │ │ │ ├── fuzzyTaskSearch.js │ │ │ └── git-utils.js │ │ └── utils.js │ ├── task-complexity-report.json │ ├── test-claude-errors.js │ └── test-claude.js ├── src │ ├── ai-providers │ │ ├── anthropic.js │ │ ├── azure.js │ │ ├── base-provider.js │ │ ├── bedrock.js │ │ ├── claude-code.js │ │ ├── custom-sdk │ │ │ ├── claude-code │ │ │ │ ├── errors.js │ │ │ │ ├── index.js │ │ │ │ ├── json-extractor.js │ │ │ │ ├── language-model.js │ │ │ │ ├── message-converter.js │ │ │ │ └── types.js │ │ │ └── grok-cli │ │ │ ├── errors.js │ │ │ ├── index.js │ │ │ ├── json-extractor.js │ │ │ ├── language-model.js │ │ │ ├── message-converter.js │ │ │ └── types.js │ │ ├── gemini-cli.js │ │ ├── google-vertex.js │ │ ├── google.js │ │ ├── grok-cli.js │ │ ├── groq.js │ │ ├── index.js │ │ ├── ollama.js │ │ ├── openai.js │ │ ├── openrouter.js │ │ ├── perplexity.js │ │ └── xai.js │ ├── constants │ │ ├── commands.js │ │ ├── paths.js │ │ ├── profiles.js │ │ ├── providers.js │ │ ├── rules-actions.js │ │ ├── task-priority.js │ │ └── task-status.js │ ├── profiles │ │ ├── amp.js │ │ ├── base-profile.js │ │ ├── claude.js │ │ ├── cline.js │ │ ├── codex.js │ │ ├── cursor.js │ │ ├── gemini.js │ │ ├── index.js │ │ ├── kilo.js │ │ ├── kiro.js │ │ ├── opencode.js │ │ ├── roo.js │ │ ├── trae.js │ │ ├── vscode.js │ │ ├── windsurf.js │ │ └── zed.js │ ├── progress │ │ ├── base-progress-tracker.js │ │ ├── 
cli-progress-factory.js │ │ ├── parse-prd-tracker.js │ │ ├── progress-tracker-builder.js │ │ └── tracker-ui.js │ ├── prompts │ │ ├── add-task.json │ │ ├── analyze-complexity.json │ │ ├── expand-task.json │ │ ├── parse-prd.json │ │ ├── README.md │ │ ├── research.json │ │ ├── schemas │ │ │ ├── parameter.schema.json │ │ │ ├── prompt-template.schema.json │ │ │ ├── README.md │ │ │ └── variant.schema.json │ │ ├── update-subtask.json │ │ ├── update-task.json │ │ └── update-tasks.json │ ├── provider-registry │ │ └── index.js │ ├── task-master.js │ ├── ui │ │ ├── confirm.js │ │ ├── indicators.js │ │ └── parse-prd.js │ └── utils │ ├── asset-resolver.js │ ├── create-mcp-config.js │ ├── format.js │ ├── getVersion.js │ ├── logger-utils.js │ ├── manage-gitignore.js │ ├── path-utils.js │ ├── profiles.js │ ├── rule-transformer.js │ ├── stream-parser.js │ └── timeout-manager.js ├── test-clean-tags.js ├── test-config-manager.js ├── test-prd.txt ├── test-tag-functions.js ├── test-version-check-full.js ├── test-version-check.js ├── tests │ ├── e2e │ │ ├── e2e_helpers.sh │ │ ├── parse_llm_output.cjs │ │ ├── run_e2e.sh │ │ ├── run_fallback_verification.sh │ │ └── test_llm_analysis.sh │ ├── fixture │ │ └── test-tasks.json │ ├── fixtures │ │ ├── .taskmasterconfig │ │ ├── sample-claude-response.js │ │ ├── sample-prd.txt │ │ └── sample-tasks.js │ ├── integration │ │ ├── claude-code-optional.test.js │ │ ├── cli │ │ │ ├── commands.test.js │ │ │ ├── complex-cross-tag-scenarios.test.js │ │ │ └── move-cross-tag.test.js │ │ ├── manage-gitignore.test.js │ │ ├── mcp-server │ │ │ └── direct-functions.test.js │ │ ├── move-task-cross-tag.integration.test.js │ │ ├── move-task-simple.integration.test.js │ │ └── profiles │ │ ├── amp-init-functionality.test.js │ │ ├── claude-init-functionality.test.js │ │ ├── cline-init-functionality.test.js │ │ ├── codex-init-functionality.test.js │ │ ├── cursor-init-functionality.test.js │ │ ├── gemini-init-functionality.test.js │ │ ├── 
opencode-init-functionality.test.js │ │ ├── roo-files-inclusion.test.js │ │ ├── roo-init-functionality.test.js │ │ ├── rules-files-inclusion.test.js │ │ ├── trae-init-functionality.test.js │ │ ├── vscode-init-functionality.test.js │ │ └── windsurf-init-functionality.test.js │ ├── manual │ │ ├── progress │ │ │ ├── parse-prd-analysis.js │ │ │ ├── test-parse-prd.js │ │ │ └── TESTING_GUIDE.md │ │ └── prompts │ │ ├── prompt-test.js │ │ └── README.md │ ├── README.md │ ├── setup.js │ └── unit │ ├── ai-providers │ │ ├── claude-code.test.js │ │ ├── custom-sdk │ │ │ └── claude-code │ │ │ └── language-model.test.js │ │ ├── gemini-cli.test.js │ │ ├── mcp-components.test.js │ │ └── openai.test.js │ ├── ai-services-unified.test.js │ ├── commands.test.js │ ├── config-manager.test.js │ ├── config-manager.test.mjs │ ├── dependency-manager.test.js │ ├── init.test.js │ ├── initialize-project.test.js │ ├── kebab-case-validation.test.js │ ├── manage-gitignore.test.js │ ├── mcp │ │ └── tools │ │ ├── __mocks__ │ │ │ └── move-task.js │ │ ├── add-task.test.js │ │ ├── analyze-complexity.test.js │ │ ├── expand-all.test.js │ │ ├── get-tasks.test.js │ │ ├── initialize-project.test.js │ │ ├── move-task-cross-tag-options.test.js │ │ ├── move-task-cross-tag.test.js │ │ └── remove-task.test.js │ ├── mcp-providers │ │ ├── mcp-components.test.js │ │ └── mcp-provider.test.js │ ├── parse-prd.test.js │ ├── profiles │ │ ├── amp-integration.test.js │ │ ├── claude-integration.test.js │ │ ├── cline-integration.test.js │ │ ├── codex-integration.test.js │ │ ├── cursor-integration.test.js │ │ ├── gemini-integration.test.js │ │ ├── kilo-integration.test.js │ │ ├── kiro-integration.test.js │ │ ├── mcp-config-validation.test.js │ │ ├── opencode-integration.test.js │ │ ├── profile-safety-check.test.js │ │ ├── roo-integration.test.js │ │ ├── rule-transformer-cline.test.js │ │ ├── rule-transformer-cursor.test.js │ │ ├── rule-transformer-gemini.test.js │ │ ├── rule-transformer-kilo.test.js │ │ ├── 
rule-transformer-kiro.test.js │ │ ├── rule-transformer-opencode.test.js │ │ ├── rule-transformer-roo.test.js │ │ ├── rule-transformer-trae.test.js │ │ ├── rule-transformer-vscode.test.js │ │ ├── rule-transformer-windsurf.test.js │ │ ├── rule-transformer-zed.test.js │ │ ├── rule-transformer.test.js │ │ ├── selective-profile-removal.test.js │ │ ├── subdirectory-support.test.js │ │ ├── trae-integration.test.js │ │ ├── vscode-integration.test.js │ │ ├── windsurf-integration.test.js │ │ └── zed-integration.test.js │ ├── progress │ │ └── base-progress-tracker.test.js │ ├── prompt-manager.test.js │ ├── prompts │ │ └── expand-task-prompt.test.js │ ├── providers │ │ └── provider-registry.test.js │ ├── scripts │ │ └── modules │ │ ├── commands │ │ │ ├── move-cross-tag.test.js │ │ │ └── README.md │ │ ├── dependency-manager │ │ │ ├── circular-dependencies.test.js │ │ │ ├── cross-tag-dependencies.test.js │ │ │ └── fix-dependencies-command.test.js │ │ ├── task-manager │ │ │ ├── add-subtask.test.js │ │ │ ├── add-task.test.js │ │ │ ├── analyze-task-complexity.test.js │ │ │ ├── clear-subtasks.test.js │ │ │ ├── complexity-report-tag-isolation.test.js │ │ │ ├── expand-all-tasks.test.js │ │ │ ├── expand-task.test.js │ │ │ ├── find-next-task.test.js │ │ │ ├── generate-task-files.test.js │ │ │ ├── list-tasks.test.js │ │ │ ├── move-task-cross-tag.test.js │ │ │ ├── move-task.test.js │ │ │ ├── parse-prd.test.js │ │ │ ├── remove-subtask.test.js │ │ │ ├── remove-task.test.js │ │ │ ├── research.test.js │ │ │ ├── scope-adjustment.test.js │ │ │ ├── set-task-status.test.js │ │ │ ├── setup.js │ │ │ ├── update-single-task-status.test.js │ │ │ ├── update-subtask-by-id.test.js │ │ │ ├── update-task-by-id.test.js │ │ │ └── update-tasks.test.js │ │ ├── ui │ │ │ └── cross-tag-error-display.test.js │ │ └── utils-tag-aware-paths.test.js │ ├── task-finder.test.js │ ├── task-manager │ │ ├── clear-subtasks.test.js │ │ ├── move-task.test.js │ │ ├── tag-boundary.test.js │ │ └── tag-management.test.js │ ├── 
task-master.test.js │ ├── ui │ │ └── indicators.test.js │ ├── ui.test.js │ ├── utils-strip-ansi.test.js │ └── utils.test.js ├── tsconfig.json ├── tsdown.config.ts └── turbo.json ```

# Files

--------------------------------------------------------------------------------
/tests/unit/config-manager.test.js:
--------------------------------------------------------------------------------

```javascript
1 | import fs from 'fs';
2 | import path from 'path';
3 | import { jest } from '@jest/globals';
4 | import { fileURLToPath } from 'url';
5 |
6 | // Mock modules first before any imports
7 | jest.mock('fs', () => ({
8 | 	existsSync: jest.fn((filePath) => {
9 | 		// Prevent Jest internal file access
10 | 		if (
11 | 			filePath.includes('jest-message-util') ||
12 | 			filePath.includes('node_modules')
13 | 		) {
14 | 			return false;
15 | 		}
16 | 		return false; // Default to false for config discovery prevention
17 | 	}),
18 | 	readFileSync: jest.fn(() => '{}'),
19 | 	writeFileSync: jest.fn(),
20 | 	mkdirSync: jest.fn()
21 | }));
22 |
23 | jest.mock('path', () => ({
24 | 	join: jest.fn((dir, file) => `${dir}/${file}`),
25 | 	dirname: jest.fn((filePath) => filePath.split('/').slice(0, -1).join('/')),
26 | 	resolve: jest.fn((...paths) => paths.join('/')),
27 | 	basename: jest.fn((filePath) => filePath.split('/').pop())
28 | }));
29 |
30 | jest.mock('chalk', () => ({
31 | 	red: jest.fn((text) => text),
32 | 	blue: jest.fn((text) => text),
33 | 	green: jest.fn((text) => text),
34 | 	yellow: jest.fn((text) => text),
35 | 	white: jest.fn((text) => ({
36 | 		bold: jest.fn((text) => text)
37 | 	})),
38 | 	reset: jest.fn((text) => text),
39 | 	dim: jest.fn((text) => text) // Add dim function to prevent chalk errors
40 | }));
41 |
42 | // Mock console to prevent Jest internal access
43 | const mockConsole = {
44 | 	log: jest.fn(),
45 | 	info: jest.fn(),
46 | 	warn: jest.fn(),
47 | 	error: jest.fn()
48 | };
49 | global.console = mockConsole;
50 |
51 | // --- Define Mock Function Instances ---
52 | const mockFindConfigPath = jest.fn(() => null); // Default to null, can be overridden in tests
53 |
54 | // Mock path-utils to prevent config file path discovery and logging
55 | jest.mock('../../src/utils/path-utils.js', () => ({
56 | 	__esModule: true,
57 | 	findProjectRoot: jest.fn(() => '/mock/project'),
58 | 	findConfigPath: mockFindConfigPath, // Use the mock function instance
59 | 	findTasksPath: jest.fn(() => '/mock/tasks.json'),
60 | 	findComplexityReportPath: jest.fn(() => null),
61 | 	resolveTasksOutputPath: jest.fn(() => '/mock/tasks.json'),
62 | 	resolveComplexityReportOutputPath: jest.fn(() => '/mock/report.json')
63 | }));
64 |
65 | // --- Read REAL supported-models.json data BEFORE mocks ---
66 | const __filename = fileURLToPath(import.meta.url); // Get current file path
67 | const __dirname = path.dirname(__filename); // Get current directory
68 | const realSupportedModelsPath = path.resolve(
69 | 	__dirname,
70 | 	'../../scripts/modules/supported-models.json'
71 | );
72 | let REAL_SUPPORTED_MODELS_CONTENT;
73 | let REAL_SUPPORTED_MODELS_DATA;
74 | try {
75 | 	REAL_SUPPORTED_MODELS_CONTENT = fs.readFileSync(
76 | 		realSupportedModelsPath,
77 | 		'utf-8'
78 | 	);
79 | 	REAL_SUPPORTED_MODELS_DATA = JSON.parse(REAL_SUPPORTED_MODELS_CONTENT);
80 | } catch (err) {
81 | 	console.error(
82 | 		'FATAL TEST SETUP ERROR: Could not read or parse real supported-models.json',
83 | 		err
84 | 	);
85 | 	REAL_SUPPORTED_MODELS_CONTENT = '{}'; // Default to empty object on error
86 | 	REAL_SUPPORTED_MODELS_DATA = {};
87 | 	process.exit(1); // Exit if essential test data can't be loaded
88 | }
89 |
90 | // --- Define Mock Function Instances ---
91 | const mockFindProjectRoot = jest.fn();
92 | const mockLog = jest.fn();
93 |
94 | // --- Mock Dependencies BEFORE importing the module under test ---
95 |
96 | // Mock the 'utils.js' module using a factory function
97 | jest.mock('../../scripts/modules/utils.js', () => ({
98 | 	__esModule: true, // Indicate it's an ES module mock
99 | 	findProjectRoot: mockFindProjectRoot, // Use the mock function instance
100 | 	log: mockLog, // Use the mock function instance
101 | 	// Include other necessary exports from utils if config-manager uses them directly
102 | 	resolveEnvVariable: jest.fn() // Example if needed
103 | }));
104 |
105 | // --- Import the module under test AFTER mocks are defined ---
106 | import * as configManager from '../../scripts/modules/config-manager.js';
107 | // Import the mocked 'fs' module to allow spying on its functions
108 | import fsMocked from 'fs';
109 |
110 | // --- Test Data (Keep as is, ensure DEFAULT_CONFIG is accurate) ---
111 | const MOCK_PROJECT_ROOT = '/mock/project';
112 | const MOCK_CONFIG_PATH = path.join(
113 | 	MOCK_PROJECT_ROOT,
114 | 	'.taskmaster/config.json'
115 | );
116 |
117 | // Updated DEFAULT_CONFIG reflecting the implementation
118 | const DEFAULT_CONFIG = {
119 | 	models: {
120 | 		main: {
121 | 			provider: 'anthropic',
122 | 			modelId: 'claude-sonnet-4-20250514',
123 | 			maxTokens: 64000,
124 | 			temperature: 0.2
125 | 		},
126 | 		research: {
127 | 			provider: 'perplexity',
128 | 			modelId: 'sonar',
129 | 			maxTokens: 8700,
130 | 			temperature: 0.1
131 | 		},
132 | 		fallback: {
133 | 			provider: 'anthropic',
134 | 			modelId: 'claude-3-7-sonnet-20250219',
135 | 			maxTokens: 120000,
136 | 			temperature: 0.2
137 | 		}
138 | 	},
139 | 	global: {
140 | 		logLevel: 'info',
141 | 		debug: false,
142 | 		defaultNumTasks: 10,
143 | 		defaultSubtasks: 5,
144 | 		defaultPriority: 'medium',
145 | 		projectName: 'Task Master',
146 | 		ollamaBaseURL: 'http://localhost:11434/api',
147 | 		bedrockBaseURL: 'https://bedrock.us-east-1.amazonaws.com',
148 | 		enableCodebaseAnalysis: true,
149 | 		responseLanguage: 'English'
150 | 	},
151 | 	claudeCode: {},
152 | 	grokCli: {
153 | 		timeout: 120000,
154 | 		workingDirectory: null,
155 | 		defaultModel: 'grok-4-latest'
156 | 	}
157 | };
158 |
159 | // Other test data (VALID_CUSTOM_CONFIG, PARTIAL_CONFIG, INVALID_PROVIDER_CONFIG)
160 | const VALID_CUSTOM_CONFIG = {
161 | 	models: {
162 | 		main: {
163 | 			provider: 'openai',
164 | 			modelId: 'gpt-4o',
165 | 			maxTokens: 4096,
166 | 			temperature: 0.5
167 | 		},
168 | 		research: {
169 | 			provider: 'google',
170 | 			modelId: 'gemini-1.5-pro-latest',
171 | 			maxTokens: 8192,
172 | 			temperature: 0.3
173 | 		},
174 | 		fallback: {
175 | 			provider: 'anthropic',
176 | 			modelId: 'claude-3-opus-20240229',
177 | 			maxTokens: 100000,
178 | 			temperature: 0.4
179 | 		}
180 | 	},
181 | 	global: {
182 | 		logLevel: 'debug',
183 | 		defaultPriority: 'high',
184 | 		projectName: 'My Custom Project'
185 | 	}
186 | };
187 |
188 | const PARTIAL_CONFIG = {
189 | 	models: {
190 | 		main: { provider: 'openai', modelId: 'gpt-4-turbo' }
191 | 	},
192 | 	global: {
193 | 		projectName: 'Partial Project'
194 | 	}
195 | };
196 |
197 | const INVALID_PROVIDER_CONFIG = {
198 | 	models: {
199 | 		main: { provider: 'invalid-provider', modelId: 'some-model' },
200 | 		research: {
201 | 			provider: 'perplexity',
202 | 			modelId: 'llama-3-sonar-large-32k-online'
203 | 		}
204 | 	},
205 | 	global: {
206 | 		logLevel: 'warn'
207 | 	}
208 | };
209 |
210 | // Claude Code test data
211 | const VALID_CLAUDE_CODE_CONFIG = {
212 | 	maxTurns: 5,
213 | 	customSystemPrompt: 'You are a helpful coding assistant',
214 | 	appendSystemPrompt: 'Always follow best practices',
215 | 	permissionMode: 'acceptEdits',
216 | 	allowedTools: ['Read', 'LS', 'Edit'],
217 | 	disallowedTools: ['Write'],
218 | 	mcpServers: {
219 | 		'test-server': {
220 | 			type: 'stdio',
221 | 			command: 'node',
222 | 			args: ['server.js'],
223 | 			env: { NODE_ENV: 'test' }
224 | 		}
225 | 	},
226 | 	commandSpecific: {
227 | 		'add-task': {
228 | 			maxTurns: 3,
229 | 			permissionMode: 'plan'
230 | 		},
231 | 		research: {
232 | 			customSystemPrompt: 'You are a research assistant'
233 | 		}
234 | 	}
235 | };
236 |
237 | const INVALID_CLAUDE_CODE_CONFIG = {
238 | 	maxTurns: 'invalid', // Should be number
239 | 	permissionMode: 'invalid-mode', // Invalid enum value
240 | 	allowedTools: 'not-an-array', // Should be array
241 | 	mcpServers: {
242 | 		'invalid-server': {
243 | 			type: 'invalid-type', // Invalid enum value
244 | 			url: 'not-a-valid-url' // Invalid URL format
245 | 		}
246 | 	},
247 | 	commandSpecific: {
248 | 		'invalid-command': {
249 | 			// Invalid command name
250 | 			maxTurns: -1 // Invalid negative number
251 | 		}
252 | 	}
253 | };
254 |
255 | const PARTIAL_CLAUDE_CODE_CONFIG = {
256 | 	maxTurns: 10,
257 | 	permissionMode: 'default',
258 | 	commandSpecific: {
259 | 		'expand-task': {
260 | 			customSystemPrompt: 'Focus on task breakdown'
261 | 		}
262 | 	}
263 | };
264 |
265 | // Define spies globally to be restored in afterAll
266 | let consoleErrorSpy;
267 | let consoleWarnSpy;
268 | let fsReadFileSyncSpy;
269 | let fsWriteFileSyncSpy;
270 | let fsExistsSyncSpy;
271 |
272 | beforeAll(() => {
273 | 	// Set up console spies
274 | 	consoleErrorSpy = jest.spyOn(console, 'error').mockImplementation(() => {});
275 | 	consoleWarnSpy = jest.spyOn(console, 'warn').mockImplementation(() => {});
276 | });
277 |
278 | afterAll(() => {
279 | 	// Restore all spies
280 | 	jest.restoreAllMocks();
281 | });
282 |
283 | // Reset mocks before each test for isolation
284 | beforeEach(() => {
285 | 	// Clear all mock calls and reset implementations between tests
286 | 	jest.clearAllMocks();
287 | 	// Reset the external mock instances for utils
288 | 	mockFindProjectRoot.mockReset();
289 | 	mockLog.mockReset();
290 | 	mockFindConfigPath.mockReset();
291 |
292 | 	// --- Set up spies ON the imported 'fs' mock ---
293 | 	fsExistsSyncSpy = jest.spyOn(fsMocked, 'existsSync');
294 | 	fsReadFileSyncSpy = jest.spyOn(fsMocked, 'readFileSync');
295 | 	fsWriteFileSyncSpy = jest.spyOn(fsMocked, 'writeFileSync');
296 |
297 | 	// --- Default Mock Implementations ---
298 | 	mockFindProjectRoot.mockReturnValue(MOCK_PROJECT_ROOT); // Default for utils.findProjectRoot
299 | 	mockFindConfigPath.mockReturnValue(null); // Default to no config file found
300 | 	fsExistsSyncSpy.mockReturnValue(true); // Assume files exist by default
301 |
302 | 	// Default readFileSync: Return REAL models content, mocked config, or throw error
fsReadFileSyncSpy.mockImplementation((filePath) => { 304 | const baseName = path.basename(filePath); 305 | if (baseName === 'supported-models.json') { 306 | // Return the REAL file content stringified 307 | return REAL_SUPPORTED_MODELS_CONTENT; 308 | } else if (filePath === MOCK_CONFIG_PATH) { 309 | // Still mock the .taskmasterconfig reads 310 | return JSON.stringify(DEFAULT_CONFIG); // Default behavior 311 | } 312 | // For Jest internal files or other unexpected files, return empty string instead of throwing 313 | // This prevents Jest's internal file operations from breaking tests 314 | if ( 315 | filePath.includes('jest-message-util') || 316 | filePath.includes('node_modules') 317 | ) { 318 | return '{}'; // Return empty JSON for Jest internal files 319 | } 320 | // Throw for truly unexpected reads that should be caught in tests 321 | throw new Error(`Unexpected fs.readFileSync call in test: ${filePath}`); 322 | }); 323 | 324 | // Default writeFileSync: Do nothing, just allow calls 325 | fsWriteFileSyncSpy.mockImplementation(() => {}); 326 | }); 327 | 328 | // --- Validation Functions --- 329 | describe('Validation Functions', () => { 330 | // Tests for validateProvider and validateProviderModelCombination 331 | test('validateProvider should return true for valid providers', () => { 332 | expect(configManager.validateProvider('openai')).toBe(true); 333 | expect(configManager.validateProvider('anthropic')).toBe(true); 334 | expect(configManager.validateProvider('google')).toBe(true); 335 | expect(configManager.validateProvider('perplexity')).toBe(true); 336 | expect(configManager.validateProvider('ollama')).toBe(true); 337 | expect(configManager.validateProvider('openrouter')).toBe(true); 338 | expect(configManager.validateProvider('bedrock')).toBe(true); 339 | }); 340 | 341 | test('validateProvider should return false for invalid providers', () => { 342 | expect(configManager.validateProvider('invalid-provider')).toBe(false); 343 | 
expect(configManager.validateProvider('grok')).toBe(false); // Not in mock map 344 | expect(configManager.validateProvider('')).toBe(false); 345 | expect(configManager.validateProvider(null)).toBe(false); 346 | }); 347 | 348 | test('validateProviderModelCombination should validate known good combinations', () => { 349 | // Re-load config to ensure MODEL_MAP is populated from mock (now real data) 350 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 351 | expect( 352 | configManager.validateProviderModelCombination('openai', 'gpt-4o') 353 | ).toBe(true); 354 | expect( 355 | configManager.validateProviderModelCombination( 356 | 'anthropic', 357 | 'claude-3-5-sonnet-20241022' 358 | ) 359 | ).toBe(true); 360 | }); 361 | 362 | test('validateProviderModelCombination should return false for known bad combinations', () => { 363 | // Re-load config to ensure MODEL_MAP is populated from mock (now real data) 364 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 365 | expect( 366 | configManager.validateProviderModelCombination( 367 | 'openai', 368 | 'claude-3-opus-20240229' 369 | ) 370 | ).toBe(false); 371 | }); 372 | 373 | test('validateProviderModelCombination should return true for ollama/openrouter (empty lists in map)', () => { 374 | // Re-load config to ensure MODEL_MAP is populated from mock (now real data) 375 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 376 | expect( 377 | configManager.validateProviderModelCombination('ollama', 'any-model') 378 | ).toBe(false); 379 | expect( 380 | configManager.validateProviderModelCombination('openrouter', 'any/model') 381 | ).toBe(false); 382 | }); 383 | 384 | test('validateProviderModelCombination should return true for providers not in map', () => { 385 | // Re-load config to ensure MODEL_MAP is populated from mock (now real data) 386 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 387 | // The implementation returns true if the provider isn't in the map 388 | expect( 389 | 
configManager.validateProviderModelCombination( 390 | 'unknown-provider', 391 | 'some-model' 392 | ) 393 | ).toBe(true); 394 | }); 395 | }); 396 | 397 | // --- Claude Code Validation Tests --- 398 | describe('Claude Code Validation', () => { 399 | test('validateClaudeCodeSettings should return valid settings for correct input', () => { 400 | const result = configManager.validateClaudeCodeSettings( 401 | VALID_CLAUDE_CODE_CONFIG 402 | ); 403 | 404 | expect(result).toEqual(VALID_CLAUDE_CODE_CONFIG); 405 | expect(consoleWarnSpy).not.toHaveBeenCalled(); 406 | }); 407 | 408 | test('validateClaudeCodeSettings should return empty object for invalid input', () => { 409 | const result = configManager.validateClaudeCodeSettings( 410 | INVALID_CLAUDE_CODE_CONFIG 411 | ); 412 | 413 | expect(result).toEqual({}); 414 | expect(consoleWarnSpy).toHaveBeenCalledWith( 415 | expect.stringContaining('Warning: Invalid Claude Code settings in config') 416 | ); 417 | }); 418 | 419 | test('validateClaudeCodeSettings should handle partial valid configuration', () => { 420 | const result = configManager.validateClaudeCodeSettings( 421 | PARTIAL_CLAUDE_CODE_CONFIG 422 | ); 423 | 424 | expect(result).toEqual(PARTIAL_CLAUDE_CODE_CONFIG); 425 | expect(consoleWarnSpy).not.toHaveBeenCalled(); 426 | }); 427 | 428 | test('validateClaudeCodeSettings should return empty object for empty input', () => { 429 | const result = configManager.validateClaudeCodeSettings({}); 430 | 431 | expect(result).toEqual({}); 432 | expect(consoleWarnSpy).not.toHaveBeenCalled(); 433 | }); 434 | 435 | test('validateClaudeCodeSettings should handle null/undefined input', () => { 436 | expect(configManager.validateClaudeCodeSettings(null)).toEqual({}); 437 | expect(configManager.validateClaudeCodeSettings(undefined)).toEqual({}); 438 | expect(consoleWarnSpy).toHaveBeenCalledTimes(2); 439 | }); 440 | }); 441 | 442 | // --- Claude Code Getter Tests --- 443 | describe('Claude Code Getter Functions', () => { 444 | 
test('getClaudeCodeSettings should return default empty object when no config exists', () => { 445 | // No config file exists, should return empty object 446 | fsExistsSyncSpy.mockReturnValue(false); 447 | const settings = configManager.getClaudeCodeSettings(MOCK_PROJECT_ROOT); 448 | 449 | expect(settings).toEqual({}); 450 | }); 451 | 452 | test('getClaudeCodeSettings should return merged settings from config file', () => { 453 | // Config file with Claude Code settings 454 | const configWithClaudeCode = { 455 | ...VALID_CUSTOM_CONFIG, 456 | claudeCode: VALID_CLAUDE_CODE_CONFIG 457 | }; 458 | 459 | // Mock findConfigPath to return the mock config path 460 | mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH); 461 | 462 | fsReadFileSyncSpy.mockImplementation((filePath) => { 463 | if (filePath === MOCK_CONFIG_PATH) 464 | return JSON.stringify(configWithClaudeCode); 465 | if (path.basename(filePath) === 'supported-models.json') { 466 | return JSON.stringify({ 467 | openai: [{ id: 'gpt-4o' }], 468 | google: [{ id: 'gemini-1.5-pro-latest' }], 469 | anthropic: [ 470 | { id: 'claude-3-opus-20240229' }, 471 | { id: 'claude-3-7-sonnet-20250219' }, 472 | { id: 'claude-3-5-sonnet' } 473 | ], 474 | perplexity: [{ id: 'sonar-pro' }], 475 | ollama: [], 476 | openrouter: [] 477 | }); 478 | } 479 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 480 | }); 481 | fsExistsSyncSpy.mockReturnValue(true); 482 | 483 | const settings = configManager.getClaudeCodeSettings( 484 | MOCK_PROJECT_ROOT, 485 | true 486 | ); // Force reload 487 | 488 | expect(settings).toEqual(VALID_CLAUDE_CODE_CONFIG); 489 | }); 490 | 491 | test('getClaudeCodeSettingsForCommand should return command-specific settings', () => { 492 | // Config with command-specific settings 493 | const configWithClaudeCode = { 494 | ...VALID_CUSTOM_CONFIG, 495 | claudeCode: VALID_CLAUDE_CODE_CONFIG 496 | }; 497 | 498 | // Mock findConfigPath to return the mock config path 499 | 
mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH); 500 | 501 | fsReadFileSyncSpy.mockImplementation((filePath) => { 502 | if (path.basename(filePath) === 'supported-models.json') return '{}'; 503 | if (filePath === MOCK_CONFIG_PATH) 504 | return JSON.stringify(configWithClaudeCode); 505 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 506 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 507 | }); 508 | fsExistsSyncSpy.mockReturnValue(true); 509 | 510 | const settings = configManager.getClaudeCodeSettingsForCommand( 511 | 'add-task', 512 | MOCK_PROJECT_ROOT, 513 | true 514 | ); // Force reload 515 | 516 | // Should merge global settings with command-specific settings 517 | const expectedSettings = { 518 | ...VALID_CLAUDE_CODE_CONFIG, 519 | ...VALID_CLAUDE_CODE_CONFIG.commandSpecific['add-task'] 520 | }; 521 | expect(settings).toEqual(expectedSettings); 522 | }); 523 | 524 | test('getClaudeCodeSettingsForCommand should return global settings for unknown command', () => { 525 | // Config with Claude Code settings 526 | const configWithClaudeCode = { 527 | ...VALID_CUSTOM_CONFIG, 528 | claudeCode: PARTIAL_CLAUDE_CODE_CONFIG 529 | }; 530 | 531 | // Mock findConfigPath to return the mock config path 532 | mockFindConfigPath.mockReturnValue(MOCK_CONFIG_PATH); 533 | 534 | fsReadFileSyncSpy.mockImplementation((filePath) => { 535 | if (path.basename(filePath) === 'supported-models.json') return '{}'; 536 | if (filePath === MOCK_CONFIG_PATH) 537 | return JSON.stringify(configWithClaudeCode); 538 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 539 | }); 540 | fsExistsSyncSpy.mockReturnValue(true); 541 | 542 | const settings = configManager.getClaudeCodeSettingsForCommand( 543 | 'unknown-command', 544 | MOCK_PROJECT_ROOT, 545 | true 546 | ); // Force reload 547 | 548 | // Should return global settings only 549 | expect(settings).toEqual(PARTIAL_CLAUDE_CODE_CONFIG); 550 | }); 551 | }); 552 | 553 | // --- getConfig Tests 
--- 554 | describe('getConfig Tests', () => { 555 | test('should return default config if .taskmasterconfig does not exist', () => { 556 | // Arrange 557 | fsExistsSyncSpy.mockReturnValue(false); 558 | // findProjectRoot mock is set in beforeEach 559 | 560 | // Act: Call getConfig with explicit root 561 | const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); // Force reload 562 | 563 | // Assert 564 | expect(config).toEqual(DEFAULT_CONFIG); 565 | expect(mockFindProjectRoot).not.toHaveBeenCalled(); // Explicit root provided 566 | // The implementation checks for .taskmaster directory first 567 | expect(fsExistsSyncSpy).toHaveBeenCalledWith( 568 | path.join(MOCK_PROJECT_ROOT, '.taskmaster') 569 | ); 570 | expect(fsReadFileSyncSpy).not.toHaveBeenCalled(); // No read if file doesn't exist 571 | expect(consoleWarnSpy).toHaveBeenCalledWith( 572 | expect.stringContaining('not found at provided project root') 573 | ); 574 | }); 575 | 576 | test.skip('should use findProjectRoot and return defaults if file not found', () => { 577 | // TODO: Fix mock interaction, findProjectRoot isn't being registered as called 578 | // Arrange 579 | fsExistsSyncSpy.mockReturnValue(false); 580 | // findProjectRoot mock is set in beforeEach 581 | 582 | // Act: Call getConfig without explicit root 583 | const config = configManager.getConfig(null, true); // Force reload 584 | 585 | // Assert 586 | expect(mockFindProjectRoot).toHaveBeenCalled(); // Should be called now 587 | expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH); 588 | expect(config).toEqual(DEFAULT_CONFIG); 589 | expect(fsReadFileSyncSpy).not.toHaveBeenCalled(); 590 | expect(consoleWarnSpy).toHaveBeenCalledWith( 591 | expect.stringContaining('not found at derived root') 592 | ); // Adjusted expected warning 593 | }); 594 | 595 | test('should read and merge valid config file with defaults', () => { 596 | // Arrange: Override readFileSync for this test 597 | fsReadFileSyncSpy.mockImplementation((filePath) => { 
598 | if (filePath === MOCK_CONFIG_PATH) 599 | return JSON.stringify(VALID_CUSTOM_CONFIG); 600 | if (path.basename(filePath) === 'supported-models.json') { 601 | // Provide necessary models for validation within getConfig 602 | return JSON.stringify({ 603 | openai: [{ id: 'gpt-4o' }], 604 | google: [{ id: 'gemini-1.5-pro-latest' }], 605 | perplexity: [{ id: 'sonar-pro' }], 606 | anthropic: [ 607 | { id: 'claude-3-opus-20240229' }, 608 | { id: 'claude-3-5-sonnet' }, 609 | { id: 'claude-3-7-sonnet-20250219' }, 610 | { id: 'claude-3-5-sonnet' } 611 | ], 612 | ollama: [], 613 | openrouter: [] 614 | }); 615 | } 616 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 617 | }); 618 | fsExistsSyncSpy.mockReturnValue(true); 619 | // findProjectRoot mock set in beforeEach 620 | 621 | // Act 622 | const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); // Force reload 623 | 624 | // Assert: Construct expected merged config 625 | const expectedMergedConfig = { 626 | models: { 627 | main: { 628 | ...DEFAULT_CONFIG.models.main, 629 | ...VALID_CUSTOM_CONFIG.models.main 630 | }, 631 | research: { 632 | ...DEFAULT_CONFIG.models.research, 633 | ...VALID_CUSTOM_CONFIG.models.research 634 | }, 635 | fallback: { 636 | ...DEFAULT_CONFIG.models.fallback, 637 | ...VALID_CUSTOM_CONFIG.models.fallback 638 | } 639 | }, 640 | global: { ...DEFAULT_CONFIG.global, ...VALID_CUSTOM_CONFIG.global }, 641 | claudeCode: { 642 | ...DEFAULT_CONFIG.claudeCode, 643 | ...VALID_CUSTOM_CONFIG.claudeCode 644 | }, 645 | grokCli: { ...DEFAULT_CONFIG.grokCli } 646 | }; 647 | expect(config).toEqual(expectedMergedConfig); 648 | expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH); 649 | expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8'); 650 | }); 651 | 652 | test('should merge defaults for partial config file', () => { 653 | // Arrange 654 | fsReadFileSyncSpy.mockImplementation((filePath) => { 655 | if (filePath === MOCK_CONFIG_PATH) return 
JSON.stringify(PARTIAL_CONFIG); 656 | if (path.basename(filePath) === 'supported-models.json') { 657 | return JSON.stringify({ 658 | openai: [{ id: 'gpt-4-turbo' }], 659 | perplexity: [{ id: 'sonar-pro' }], 660 | anthropic: [ 661 | { id: 'claude-3-7-sonnet-20250219' }, 662 | { id: 'claude-3-5-sonnet' } 663 | ], 664 | ollama: [], 665 | openrouter: [] 666 | }); 667 | } 668 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 669 | }); 670 | fsExistsSyncSpy.mockReturnValue(true); 671 | // findProjectRoot mock set in beforeEach 672 | 673 | // Act 674 | const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); 675 | 676 | // Assert: Construct expected merged config 677 | const expectedMergedConfig = { 678 | models: { 679 | main: { ...DEFAULT_CONFIG.models.main, ...PARTIAL_CONFIG.models.main }, 680 | research: { ...DEFAULT_CONFIG.models.research }, 681 | fallback: { ...DEFAULT_CONFIG.models.fallback } 682 | }, 683 | global: { ...DEFAULT_CONFIG.global, ...PARTIAL_CONFIG.global }, 684 | claudeCode: { 685 | ...DEFAULT_CONFIG.claudeCode, 686 | ...VALID_CUSTOM_CONFIG.claudeCode 687 | }, 688 | grokCli: { ...DEFAULT_CONFIG.grokCli } 689 | }; 690 | expect(config).toEqual(expectedMergedConfig); 691 | expect(fsReadFileSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH, 'utf-8'); 692 | }); 693 | 694 | test('should handle JSON parsing error and return defaults', () => { 695 | // Arrange 696 | fsReadFileSyncSpy.mockImplementation((filePath) => { 697 | if (filePath === MOCK_CONFIG_PATH) return 'invalid json'; 698 | // Mock models read needed for initial load before parse error 699 | if (path.basename(filePath) === 'supported-models.json') { 700 | return JSON.stringify({ 701 | anthropic: [{ id: 'claude-3-7-sonnet-20250219' }], 702 | perplexity: [{ id: 'sonar-pro' }], 703 | fallback: [{ id: 'claude-3-5-sonnet' }], 704 | ollama: [], 705 | openrouter: [] 706 | }); 707 | } 708 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 709 | }); 710 | 
fsExistsSyncSpy.mockReturnValue(true); 711 | // findProjectRoot mock set in beforeEach 712 | 713 | // Act 714 | const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); 715 | 716 | // Assert 717 | expect(config).toEqual(DEFAULT_CONFIG); 718 | expect(consoleErrorSpy).toHaveBeenCalledWith( 719 | expect.stringContaining('Error reading or parsing') 720 | ); 721 | }); 722 | 723 | test('should handle file read error and return defaults', () => { 724 | // Arrange 725 | const readError = new Error('Permission denied'); 726 | fsReadFileSyncSpy.mockImplementation((filePath) => { 727 | if (filePath === MOCK_CONFIG_PATH) throw readError; 728 | // Mock models read needed for initial load before read error 729 | if (path.basename(filePath) === 'supported-models.json') { 730 | return JSON.stringify({ 731 | anthropic: [{ id: 'claude-3-7-sonnet-20250219' }], 732 | perplexity: [{ id: 'sonar-pro' }], 733 | fallback: [{ id: 'claude-3-5-sonnet' }], 734 | ollama: [], 735 | openrouter: [] 736 | }); 737 | } 738 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 739 | }); 740 | fsExistsSyncSpy.mockReturnValue(true); 741 | // findProjectRoot mock set in beforeEach 742 | 743 | // Act 744 | const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); 745 | 746 | // Assert 747 | expect(config).toEqual(DEFAULT_CONFIG); 748 | expect(consoleErrorSpy).toHaveBeenCalledWith( 749 | expect.stringContaining('Permission denied. 
Using default configuration.') 750 | ); 751 | }); 752 | 753 | test('should validate provider and fallback to default if invalid', () => { 754 | // Arrange 755 | fsReadFileSyncSpy.mockImplementation((filePath) => { 756 | if (filePath === MOCK_CONFIG_PATH) 757 | return JSON.stringify(INVALID_PROVIDER_CONFIG); 758 | if (path.basename(filePath) === 'supported-models.json') { 759 | return JSON.stringify({ 760 | perplexity: [{ id: 'llama-3-sonar-large-32k-online' }], 761 | anthropic: [ 762 | { id: 'claude-3-7-sonnet-20250219' }, 763 | { id: 'claude-3-5-sonnet' } 764 | ], 765 | ollama: [], 766 | openrouter: [] 767 | }); 768 | } 769 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 770 | }); 771 | fsExistsSyncSpy.mockReturnValue(true); 772 | // findProjectRoot mock set in beforeEach 773 | 774 | // Act 775 | const config = configManager.getConfig(MOCK_PROJECT_ROOT, true); 776 | 777 | // Assert 778 | expect(consoleWarnSpy).toHaveBeenCalledWith( 779 | expect.stringContaining( 780 | 'Warning: Invalid main provider "invalid-provider"' 781 | ) 782 | ); 783 | const expectedMergedConfig = { 784 | models: { 785 | main: { ...DEFAULT_CONFIG.models.main }, 786 | research: { 787 | ...DEFAULT_CONFIG.models.research, 788 | ...INVALID_PROVIDER_CONFIG.models.research 789 | }, 790 | fallback: { ...DEFAULT_CONFIG.models.fallback } 791 | }, 792 | global: { ...DEFAULT_CONFIG.global, ...INVALID_PROVIDER_CONFIG.global }, 793 | claudeCode: { 794 | ...DEFAULT_CONFIG.claudeCode, 795 | ...VALID_CUSTOM_CONFIG.claudeCode 796 | }, 797 | grokCli: { ...DEFAULT_CONFIG.grokCli } 798 | }; 799 | expect(config).toEqual(expectedMergedConfig); 800 | }); 801 | }); 802 | 803 | // --- writeConfig Tests --- 804 | describe('writeConfig', () => { 805 | test('should write valid config to file', () => { 806 | // Arrange (Default mocks are sufficient) 807 | // findProjectRoot mock set in beforeEach 808 | fsWriteFileSyncSpy.mockImplementation(() => {}); // Ensure it doesn't throw 809 | 810 | // Act 811 | 
const success = configManager.writeConfig( 812 | VALID_CUSTOM_CONFIG, 813 | MOCK_PROJECT_ROOT 814 | ); 815 | 816 | // Assert 817 | expect(success).toBe(true); 818 | expect(fsWriteFileSyncSpy).toHaveBeenCalledWith( 819 | MOCK_CONFIG_PATH, 820 | JSON.stringify(VALID_CUSTOM_CONFIG, null, 2) // writeConfig stringifies 821 | ); 822 | expect(consoleErrorSpy).not.toHaveBeenCalled(); 823 | }); 824 | 825 | test('should return false and log error if write fails', () => { 826 | // Arrange 827 | const mockWriteError = new Error('Disk full'); 828 | fsWriteFileSyncSpy.mockImplementation(() => { 829 | throw mockWriteError; 830 | }); 831 | // findProjectRoot mock set in beforeEach 832 | 833 | // Act 834 | const success = configManager.writeConfig( 835 | VALID_CUSTOM_CONFIG, 836 | MOCK_PROJECT_ROOT 837 | ); 838 | 839 | // Assert 840 | expect(success).toBe(false); 841 | expect(fsWriteFileSyncSpy).toHaveBeenCalled(); 842 | expect(consoleErrorSpy).toHaveBeenCalledWith( 843 | expect.stringContaining('Disk full') 844 | ); 845 | }); 846 | 847 | test.skip('should return false if project root cannot be determined', () => { 848 | // TODO: Fix mock interaction or function logic, returns true unexpectedly in test 849 | // Arrange: Override mock for this specific test 850 | mockFindProjectRoot.mockReturnValue(null); 851 | 852 | // Act: Call without explicit root 853 | const success = configManager.writeConfig(VALID_CUSTOM_CONFIG); 854 | 855 | // Assert 856 | expect(success).toBe(false); // Function should return false if root is null 857 | expect(mockFindProjectRoot).toHaveBeenCalled(); 858 | expect(fsWriteFileSyncSpy).not.toHaveBeenCalled(); 859 | expect(consoleErrorSpy).toHaveBeenCalledWith( 860 | expect.stringContaining('Could not determine project root') 861 | ); 862 | }); 863 | }); 864 | 865 | // --- Getter Functions --- 866 | describe('Getter Functions', () => { 867 | test('getMainProvider should return provider from config', () => { 868 | // Arrange: Set up readFileSync to return 
VALID_CUSTOM_CONFIG 869 | fsReadFileSyncSpy.mockImplementation((filePath) => { 870 | if (filePath === MOCK_CONFIG_PATH) 871 | return JSON.stringify(VALID_CUSTOM_CONFIG); 872 | if (path.basename(filePath) === 'supported-models.json') { 873 | return JSON.stringify({ 874 | openai: [{ id: 'gpt-4o' }], 875 | google: [{ id: 'gemini-1.5-pro-latest' }], 876 | anthropic: [ 877 | { id: 'claude-3-opus-20240229' }, 878 | { id: 'claude-3-7-sonnet-20250219' }, 879 | { id: 'claude-3-5-sonnet' } 880 | ], 881 | perplexity: [{ id: 'sonar-pro' }], 882 | ollama: [], 883 | openrouter: [] 884 | }); // Added perplexity 885 | } 886 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 887 | }); 888 | fsExistsSyncSpy.mockReturnValue(true); 889 | // findProjectRoot mock set in beforeEach 890 | 891 | // Act 892 | const provider = configManager.getMainProvider(MOCK_PROJECT_ROOT); 893 | 894 | // Assert 895 | expect(provider).toBe(VALID_CUSTOM_CONFIG.models.main.provider); 896 | }); 897 | 898 | test('getLogLevel should return logLevel from config', () => { 899 | // Arrange: Set up readFileSync to return VALID_CUSTOM_CONFIG 900 | fsReadFileSyncSpy.mockImplementation((filePath) => { 901 | if (filePath === MOCK_CONFIG_PATH) 902 | return JSON.stringify(VALID_CUSTOM_CONFIG); 903 | if (path.basename(filePath) === 'supported-models.json') { 904 | // Provide enough mock model data for validation within getConfig 905 | return JSON.stringify({ 906 | openai: [{ id: 'gpt-4o' }], 907 | google: [{ id: 'gemini-1.5-pro-latest' }], 908 | anthropic: [ 909 | { id: 'claude-3-opus-20240229' }, 910 | { id: 'claude-3-7-sonnet-20250219' }, 911 | { id: 'claude-3-5-sonnet' } 912 | ], 913 | perplexity: [{ id: 'sonar-pro' }], 914 | ollama: [], 915 | openrouter: [] 916 | }); 917 | } 918 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 919 | }); 920 | fsExistsSyncSpy.mockReturnValue(true); 921 | // findProjectRoot mock set in beforeEach 922 | 923 | // Act 924 | const logLevel = 
configManager.getLogLevel(MOCK_PROJECT_ROOT); 925 | 926 | // Assert 927 | expect(logLevel).toBe(VALID_CUSTOM_CONFIG.global.logLevel); 928 | }); 929 | 930 | test('getResponseLanguage should return responseLanguage from config', () => { 931 | // Arrange 932 | // Prepare a config object with responseLanguage property for this test 933 | const configWithLanguage = JSON.stringify({ 934 | models: { 935 | main: { provider: 'openai', modelId: 'gpt-4-turbo' } 936 | }, 937 | global: { 938 | projectName: 'Test Project', 939 | responseLanguage: '中文' 940 | } 941 | }); 942 | 943 | // Set up fs.readFileSync to return our test config 944 | fsReadFileSyncSpy.mockImplementation((filePath) => { 945 | if (filePath === MOCK_CONFIG_PATH) { 946 | return configWithLanguage; 947 | } 948 | if (path.basename(filePath) === 'supported-models.json') { 949 | return JSON.stringify({ 950 | openai: [{ id: 'gpt-4-turbo' }] 951 | }); 952 | } 953 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 954 | }); 955 | 956 | fsExistsSyncSpy.mockReturnValue(true); 957 | 958 | // Ensure getConfig returns new values instead of cached ones 959 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 960 | 961 | // Act 962 | const responseLanguage = 963 | configManager.getResponseLanguage(MOCK_PROJECT_ROOT); 964 | 965 | // Assert 966 | expect(responseLanguage).toBe('中文'); 967 | }); 968 | 969 | test('getResponseLanguage should return undefined when responseLanguage is not in config', () => { 970 | // Arrange 971 | const configWithoutLanguage = JSON.stringify({ 972 | models: { 973 | main: { provider: 'openai', modelId: 'gpt-4-turbo' } 974 | }, 975 | global: { 976 | projectName: 'Test Project' 977 | // No responseLanguage property 978 | } 979 | }); 980 | 981 | fsReadFileSyncSpy.mockImplementation((filePath) => { 982 | if (filePath === MOCK_CONFIG_PATH) { 983 | return configWithoutLanguage; 984 | } 985 | if (path.basename(filePath) === 'supported-models.json') { 986 | return JSON.stringify({ 987 | openai: 
[{ id: 'gpt-4-turbo' }] 988 | }); 989 | } 990 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 991 | }); 992 | 993 | fsExistsSyncSpy.mockReturnValue(true); 994 | 995 | // Ensure getConfig returns new values instead of cached ones 996 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 997 | 998 | // Act 999 | const responseLanguage = 1000 | configManager.getResponseLanguage(MOCK_PROJECT_ROOT); 1001 | 1002 | // Assert 1003 | expect(responseLanguage).toBe('English'); 1004 | }); 1005 | 1006 | // Add more tests for other getters (getResearchProvider, getProjectName, etc.) 1007 | }); 1008 | 1009 | // --- isConfigFilePresent Tests --- 1010 | describe('isConfigFilePresent', () => { 1011 | test('should return true if config file exists', () => { 1012 | fsExistsSyncSpy.mockReturnValue(true); 1013 | // findProjectRoot mock set in beforeEach 1014 | expect(configManager.isConfigFilePresent(MOCK_PROJECT_ROOT)).toBe(true); 1015 | expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH); 1016 | }); 1017 | 1018 | test('should return false if config file does not exist', () => { 1019 | fsExistsSyncSpy.mockReturnValue(false); 1020 | // findProjectRoot mock set in beforeEach 1021 | expect(configManager.isConfigFilePresent(MOCK_PROJECT_ROOT)).toBe(false); 1022 | expect(fsExistsSyncSpy).toHaveBeenCalledWith(MOCK_CONFIG_PATH); 1023 | }); 1024 | 1025 | test.skip('should use findProjectRoot if explicitRoot is not provided', () => { 1026 | // TODO: Fix mock interaction, findProjectRoot isn't being registered as called 1027 | fsExistsSyncSpy.mockReturnValue(true); 1028 | // findProjectRoot mock set in beforeEach 1029 | expect(configManager.isConfigFilePresent()).toBe(true); 1030 | expect(mockFindProjectRoot).toHaveBeenCalled(); // Should be called now 1031 | }); 1032 | }); 1033 | 1034 | // --- getAllProviders Tests --- 1035 | describe('getAllProviders', () => { 1036 | test('should return all providers from ALL_PROVIDERS constant', () => { 1037 | // Arrange: Ensure 
config is loaded with real data 1038 | configManager.getConfig(null, true); // Force load using the mock that returns real data 1039 | 1040 | // Act 1041 | const providers = configManager.getAllProviders(); 1042 | 1043 | // Assert 1044 | // getAllProviders() should return the same as the ALL_PROVIDERS constant 1045 | expect(providers).toEqual(configManager.ALL_PROVIDERS); 1046 | expect(providers.length).toBe(configManager.ALL_PROVIDERS.length); 1047 | 1048 | // Verify it includes both validated and custom providers 1049 | expect(providers).toEqual( 1050 | expect.arrayContaining(configManager.VALIDATED_PROVIDERS) 1051 | ); 1052 | expect(providers).toEqual( 1053 | expect.arrayContaining(Object.values(configManager.CUSTOM_PROVIDERS)) 1054 | ); 1055 | }); 1056 | }); 1057 | 1058 | // Add tests for getParametersForRole if needed 1059 | 1060 | // --- defaultNumTasks Tests --- 1061 | describe('Configuration Getters', () => { 1062 | test('getDefaultNumTasks should return default value when config is valid', () => { 1063 | // Arrange: Mock fs.readFileSync to return valid config when called with the expected path 1064 | fsReadFileSyncSpy.mockImplementation((filePath) => { 1065 | if (filePath === MOCK_CONFIG_PATH) { 1066 | return JSON.stringify({ 1067 | global: { 1068 | defaultNumTasks: 15 1069 | } 1070 | }); 1071 | } 1072 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 1073 | }); 1074 | fsExistsSyncSpy.mockReturnValue(true); 1075 | 1076 | // Force reload to clear cache 1077 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 1078 | 1079 | // Act: Call getDefaultNumTasks with explicit root 1080 | const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT); 1081 | 1082 | // Assert 1083 | expect(result).toBe(15); 1084 | }); 1085 | 1086 | test('getDefaultNumTasks should return fallback when config value is invalid', () => { 1087 | // Arrange: Mock fs.readFileSync to return invalid config 1088 | fsReadFileSyncSpy.mockImplementation((filePath) => { 1089 | 
if (filePath === MOCK_CONFIG_PATH) { 1090 | return JSON.stringify({ 1091 | global: { 1092 | defaultNumTasks: 'invalid' 1093 | } 1094 | }); 1095 | } 1096 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 1097 | }); 1098 | fsExistsSyncSpy.mockReturnValue(true); 1099 | 1100 | // Force reload to clear cache 1101 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 1102 | 1103 | // Act: Call getDefaultNumTasks with explicit root 1104 | const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT); 1105 | 1106 | // Assert 1107 | expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks 1108 | }); 1109 | 1110 | test('getDefaultNumTasks should return fallback when config value is missing', () => { 1111 | // Arrange: Mock fs.readFileSync to return config without defaultNumTasks 1112 | fsReadFileSyncSpy.mockImplementation((filePath) => { 1113 | if (filePath === MOCK_CONFIG_PATH) { 1114 | return JSON.stringify({ 1115 | global: {} 1116 | }); 1117 | } 1118 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 1119 | }); 1120 | fsExistsSyncSpy.mockReturnValue(true); 1121 | 1122 | // Force reload to clear cache 1123 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 1124 | 1125 | // Act: Call getDefaultNumTasks with explicit root 1126 | const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT); 1127 | 1128 | // Assert 1129 | expect(result).toBe(10); // Should fallback to DEFAULTS.global.defaultNumTasks 1130 | }); 1131 | 1132 | test('getDefaultNumTasks should handle non-existent config file', () => { 1133 | // Arrange: Mock file not existing 1134 | fsExistsSyncSpy.mockReturnValue(false); 1135 | 1136 | // Force reload to clear cache 1137 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 1138 | 1139 | // Act: Call getDefaultNumTasks with explicit root 1140 | const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT); 1141 | 1142 | // Assert 1143 | expect(result).toBe(10); // Should fallback to 
DEFAULTS.global.defaultNumTasks 1144 | }); 1145 | 1146 | test('getDefaultNumTasks should accept explicit project root', () => { 1147 | // Arrange: Mock fs.readFileSync to return valid config 1148 | fsReadFileSyncSpy.mockImplementation((filePath) => { 1149 | if (filePath === MOCK_CONFIG_PATH) { 1150 | return JSON.stringify({ 1151 | global: { 1152 | defaultNumTasks: 20 1153 | } 1154 | }); 1155 | } 1156 | throw new Error(`Unexpected fs.readFileSync call: ${filePath}`); 1157 | }); 1158 | fsExistsSyncSpy.mockReturnValue(true); 1159 | 1160 | // Force reload to clear cache 1161 | configManager.getConfig(MOCK_PROJECT_ROOT, true); 1162 | 1163 | // Act: Call getDefaultNumTasks with explicit project root 1164 | const result = configManager.getDefaultNumTasks(MOCK_PROJECT_ROOT); 1165 | 1166 | // Assert 1167 | expect(result).toBe(20); 1168 | }); 1169 | }); 1170 | 1171 | // Note: Tests for setMainModel, setResearchModel were removed as the functions were removed in the implementation. 1172 | // If similar setter functions exist, add tests for them following the writeConfig pattern. 1173 | ``` -------------------------------------------------------------------------------- /tests/e2e/run_e2e.sh: -------------------------------------------------------------------------------- ```bash 1 | #!/bin/bash 2 | 3 | # Treat unset variables as an error when substituting. 4 | set -u 5 | # Prevent errors in pipelines from being masked. 6 | set -o pipefail 7 | 8 | # --- Default Settings --- 9 | run_verification_test=true 10 | 11 | # --- Argument Parsing --- 12 | # Simple loop to check for the skip flag 13 | # Note: This needs to happen *before* the main block piped to tee 14 | # if we want the decision logged early. Or handle args inside. 15 | # Let's handle it before for clarity. 16 | processed_args=() 17 | while [[ $# -gt 0 ]]; do 18 | case "$1" in 19 | --skip-verification) 20 | run_verification_test=false 21 | echo "[INFO] Argument '--skip-verification' detected. 
Fallback verification will be skipped." 22 | shift # Consume the flag 23 | ;; 24 | --analyze-log) 25 | # Keep the analyze-log flag handling separate for now 26 | # It exits early, so doesn't conflict with the main run flags 27 | processed_args+=("$1") 28 | if [[ $# -gt 1 ]]; then 29 | processed_args+=("$2") 30 | shift 2 31 | else 32 | shift 1 33 | fi 34 | ;; 35 | *) 36 | # Unknown argument, pass it along or handle error 37 | # For now, just pass it along in case --analyze-log needs it later 38 | processed_args+=("$1") 39 | shift 40 | ;; 41 | esac 42 | done 43 | # Restore processed arguments ONLY if the array is not empty 44 | if [ ${#processed_args[@]} -gt 0 ]; then 45 | set -- "${processed_args[@]}" 46 | fi 47 | 48 | 49 | # --- Configuration --- 50 | # Assumes script is run from the project root (claude-task-master) 51 | TASKMASTER_SOURCE_DIR="." # Current directory is the source 52 | # Base directory for test runs, relative to project root 53 | BASE_TEST_DIR="$TASKMASTER_SOURCE_DIR/tests/e2e/_runs" 54 | # Log directory, relative to project root 55 | LOG_DIR="$TASKMASTER_SOURCE_DIR/tests/e2e/log" 56 | # Path to the sample PRD, relative to project root 57 | SAMPLE_PRD_SOURCE="$TASKMASTER_SOURCE_DIR/tests/fixtures/sample-prd.txt" 58 | # Path to the main .env file in the source directory 59 | MAIN_ENV_FILE="$TASKMASTER_SOURCE_DIR/.env" 60 | # --- 61 | 62 | # <<< Source the helper script >>> 63 | # shellcheck source=tests/e2e/e2e_helpers.sh 64 | source "$TASKMASTER_SOURCE_DIR/tests/e2e/e2e_helpers.sh" 65 | 66 | # ========================================== 67 | # >>> Global Helper Functions Defined in run_e2e.sh <<< 68 | # --- Helper Functions (Define globally before export) --- 69 | _format_duration() { 70 | local total_seconds=$1 71 | local minutes=$((total_seconds / 60)) 72 | local seconds=$((total_seconds % 60)) 73 | printf "%dm%02ds" "$minutes" "$seconds" 74 | } 75 | 76 | # Note: This relies on 'overall_start_time' being set globally before the function is called 
77 | _get_elapsed_time_for_log() { 78 | local current_time 79 | current_time=$(date +%s) 80 | # Use overall_start_time here, as start_time_for_helpers might not be relevant globally 81 | local elapsed_seconds 82 | elapsed_seconds=$((current_time - overall_start_time)) 83 | _format_duration "$elapsed_seconds" 84 | } 85 | 86 | log_info() { 87 | echo "[INFO] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" 88 | } 89 | 90 | log_success() { 91 | echo "[SUCCESS] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" 92 | } 93 | 94 | log_error() { 95 | echo "[ERROR] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" >&2 96 | } 97 | 98 | log_step() { 99 | test_step_count=$((test_step_count + 1)) 100 | echo "" 101 | echo "=============================================" 102 | echo " STEP ${test_step_count}: [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" 103 | echo "=============================================" 104 | } 105 | # ========================================== 106 | 107 | # <<< Export helper functions for subshells >>> 108 | export -f log_info log_success log_error log_step _format_duration _get_elapsed_time_for_log extract_and_sum_cost 109 | 110 | # --- Argument Parsing for Analysis-Only Mode --- 111 | # This remains the same, as it exits early if matched 112 | if [ "$#" -ge 1 ] && [ "$1" == "--analyze-log" ]; then 113 | LOG_TO_ANALYZE="" 114 | # Check if a log file path was provided as the second argument 115 | if [ "$#" -ge 2 ] && [ -n "$2" ]; then 116 | LOG_TO_ANALYZE="$2" 117 | echo "[INFO] Using specified log file for analysis: $LOG_TO_ANALYZE" 118 | else 119 | echo "[INFO] Log file not specified. Attempting to find the latest log..." 
120 | # Find the latest log file in the LOG_DIR 121 | # Ensure LOG_DIR is absolute for ls to work correctly regardless of PWD 122 | ABS_LOG_DIR="$(cd "$TASKMASTER_SOURCE_DIR/$LOG_DIR" && pwd)" 123 | LATEST_LOG=$(ls -t "$ABS_LOG_DIR"/e2e_run_*.log 2>/dev/null | head -n 1) 124 | 125 | if [ -z "$LATEST_LOG" ]; then 126 | echo "[ERROR] No log files found matching 'e2e_run_*.log' in $ABS_LOG_DIR. Cannot analyze." >&2 127 | exit 1 128 | fi 129 | LOG_TO_ANALYZE="$LATEST_LOG" 130 | echo "[INFO] Found latest log file: $LOG_TO_ANALYZE" 131 | fi 132 | 133 | # Ensure the log path is absolute (it should be if found by ls, but double-check) 134 | if [[ "$LOG_TO_ANALYZE" != /* ]]; then 135 | LOG_TO_ANALYZE="$(pwd)/$LOG_TO_ANALYZE" # Fallback if relative path somehow occurred 136 | fi 137 | echo "[INFO] Running in analysis-only mode for log: $LOG_TO_ANALYZE" 138 | 139 | # --- Derive TEST_RUN_DIR from log file path --- 140 | # Extract timestamp like YYYYMMDD_HHMMSS from e2e_run_YYYYMMDD_HHMMSS.log 141 | log_basename=$(basename "$LOG_TO_ANALYZE") 142 | # Ensure the sed command matches the .log suffix correctly 143 | timestamp_match=$(echo "$log_basename" | sed -n 's/^e2e_run_\([0-9]\{8\}_[0-9]\{6\}\)\.log$/\1/p') 144 | 145 | if [ -z "$timestamp_match" ]; then 146 | echo "[ERROR] Could not extract timestamp from log file name: $log_basename" >&2 147 | echo "[ERROR] Expected format: e2e_run_YYYYMMDD_HHMMSS.log" >&2 148 | exit 1 149 | fi 150 | 151 | # Construct the expected run directory path relative to project root 152 | EXPECTED_RUN_DIR="$TASKMASTER_SOURCE_DIR/tests/e2e/_runs/run_$timestamp_match" 153 | # Make it absolute 154 | EXPECTED_RUN_DIR_ABS="$(cd "$TASKMASTER_SOURCE_DIR" && pwd)/tests/e2e/_runs/run_$timestamp_match" 155 | 156 | if [ ! 
-d "$EXPECTED_RUN_DIR_ABS" ]; then 157 | echo "[ERROR] Corresponding test run directory not found: $EXPECTED_RUN_DIR_ABS" >&2 158 | exit 1 159 | fi 160 | 161 | # Save original dir before changing 162 | ORIGINAL_DIR=$(pwd) 163 | 164 | echo "[INFO] Changing directory to $EXPECTED_RUN_DIR_ABS for analysis context..." 165 | cd "$EXPECTED_RUN_DIR_ABS" 166 | 167 | # Call the analysis function (sourced from helpers) 168 | echo "[INFO] Calling analyze_log_with_llm function..." 169 | analyze_log_with_llm "$LOG_TO_ANALYZE" "$(cd "$ORIGINAL_DIR/$TASKMASTER_SOURCE_DIR" && pwd)" # Pass absolute project root 170 | ANALYSIS_EXIT_CODE=$? 171 | 172 | # Return to original directory 173 | cd "$ORIGINAL_DIR" 174 | exit $ANALYSIS_EXIT_CODE 175 | fi 176 | # --- End Analysis-Only Mode Logic --- 177 | 178 | # --- Normal Execution Starts Here (if not in analysis-only mode) --- 179 | 180 | # --- Test State Variables --- 181 | # Note: These are mainly for step numbering within the log now, not for final summary 182 | test_step_count=0 183 | start_time_for_helpers=0 # Separate start time for helper functions inside the pipe 184 | total_e2e_cost="0.0" # Initialize total E2E cost 185 | # --- 186 | 187 | # --- Log File Setup --- 188 | # Create the log directory if it doesn't exist 189 | mkdir -p "$LOG_DIR" 190 | # Define timestamped log file path 191 | TIMESTAMP=$(date +"%Y%m%d_%H%M%S") 192 | # <<< Use pwd to create an absolute path AND add .log extension >>> 193 | LOG_FILE="$(pwd)/$LOG_DIR/e2e_run_${TIMESTAMP}.log" 194 | 195 | # Define and create the test run directory *before* the main pipe 196 | mkdir -p "$BASE_TEST_DIR" # Ensure base exists first 197 | TEST_RUN_DIR="$BASE_TEST_DIR/run_$TIMESTAMP" 198 | mkdir -p "$TEST_RUN_DIR" 199 | 200 | # Echo starting message to the original terminal BEFORE the main piped block 201 | echo "Starting E2E test. 
Output will be shown here and saved to: $LOG_FILE" 202 | echo "Running from directory: $(pwd)" 203 | echo "--- Starting E2E Run ---" # Separator before piped output starts 204 | 205 | # Record start time for overall duration *before* the pipe 206 | overall_start_time=$(date +%s) 207 | 208 | # <<< DEFINE ORIGINAL_DIR GLOBALLY HERE >>> 209 | ORIGINAL_DIR=$(pwd) 210 | 211 | # ========================================== 212 | # >>> MOVE FUNCTION DEFINITION HERE <<< 213 | # --- Helper Functions (Define globally) --- 214 | _format_duration() { 215 | local total_seconds=$1 216 | local minutes=$((total_seconds / 60)) 217 | local seconds=$((total_seconds % 60)) 218 | printf "%dm%02ds" "$minutes" "$seconds" 219 | } 220 | 221 | # Note: This relies on 'overall_start_time' being set globally before the function is called 222 | _get_elapsed_time_for_log() { 223 | local current_time=$(date +%s) 224 | # Use overall_start_time here, as start_time_for_helpers might not be relevant globally 225 | local elapsed_seconds=$((current_time - overall_start_time)) 226 | _format_duration "$elapsed_seconds" 227 | } 228 | 229 | log_info() { 230 | echo "[INFO] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" 231 | } 232 | 233 | log_success() { 234 | echo "[SUCCESS] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" 235 | } 236 | 237 | log_error() { 238 | echo "[ERROR] [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" >&2 239 | } 240 | 241 | log_step() { 242 | test_step_count=$((test_step_count + 1)) 243 | echo "" 244 | echo "=============================================" 245 | echo " STEP ${test_step_count}: [$(_get_elapsed_time_for_log)] $(date +"%Y-%m-%d %H:%M:%S") $1" 246 | echo "=============================================" 247 | } 248 | 249 | # ========================================== 250 | 251 | # --- Main Execution Block (Piped to tee) --- 252 | # Wrap the main part of the script in braces and pipe its output (stdout and stderr) to tee 253 | { 
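# --- Editorial note (added for clarity; not part of the original script) ---
# This brace group is piped to `tee`, so it runs in a subshell: an `exit 1`
# inside it stops only the left side of the pipeline. Because the script sets
# `set -o pipefail` near the top, the pipeline (and thus the script's final
# status) still reflects that failure. A minimal sketch of the pattern:
#   { echo ok; exit 3; } | tee /dev/null
#   echo "${PIPESTATUS[0]}"   # -> 3 (the brace group's exit status)
# ----------------------------------------------------------------------------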
254 | # Note: Helper functions are now defined globally above, 255 | # but we still need start_time_for_helpers if any logging functions 256 | # called *inside* this block depend on it. If not, it can be removed. 257 | start_time_for_helpers=$(date +%s) # Keep if needed by helpers called inside this block 258 | 259 | # Log the verification decision 260 | if [ "$run_verification_test" = true ]; then 261 | log_info "Fallback verification test will be run as part of this E2E test." 262 | else 263 | log_info "Fallback verification test will be SKIPPED (--skip-verification flag detected)." 264 | fi 265 | 266 | # --- Dependency Checks --- 267 | log_step "Checking for dependencies (jq, bc)" 268 | if ! command -v jq &> /dev/null; then 269 | log_error "Dependency 'jq' is not installed or not found in PATH. Please install jq (e.g., 'brew install jq' or 'sudo apt-get install jq')." 270 | exit 1 271 | fi 272 | if ! command -v bc &> /dev/null; then 273 | log_error "Dependency 'bc' not installed (for cost calculation). Please install bc (e.g., 'brew install bc' or 'sudo apt-get install bc')." 274 | exit 1 275 | fi 276 | log_success "Dependencies 'jq' and 'bc' found." 277 | 278 | # --- Test Setup (Output to tee) --- 279 | log_step "Setting up test environment" 280 | 281 | log_step "Creating global npm link for task-master-ai" 282 | if npm link; then 283 | log_success "Global link created/updated." 284 | else 285 | log_error "Failed to run 'npm link'. Check permissions or output for details." 286 | exit 1 287 | fi 288 | 289 | log_info "Ensured base test directory exists: $BASE_TEST_DIR" 290 | 291 | log_info "Using test run directory (created earlier): $TEST_RUN_DIR" 292 | 293 | # Check if source .env file exists 294 | if [ ! -f "$MAIN_ENV_FILE" ]; then 295 | log_error "Source .env file not found at $MAIN_ENV_FILE. Cannot proceed with API-dependent tests." 296 | exit 1 297 | fi 298 | log_info "Source .env file found at $MAIN_ENV_FILE." 
299 | 300 | # Check if sample PRD exists 301 | if [ ! -f "$SAMPLE_PRD_SOURCE" ]; then 302 | log_error "Sample PRD not found at $SAMPLE_PRD_SOURCE. Please check path." 303 | exit 1 304 | fi 305 | 306 | log_info "Copying sample PRD to test directory..." 307 | cp "$SAMPLE_PRD_SOURCE" "$TEST_RUN_DIR/prd.txt" 308 | if [ ! -f "$TEST_RUN_DIR/prd.txt" ]; then 309 | log_error "Failed to copy sample PRD to $TEST_RUN_DIR." 310 | exit 1 311 | fi 312 | log_success "Sample PRD copied." 313 | 314 | # ORIGINAL_DIR=$(pwd) # Save original dir # <<< REMOVED FROM HERE 315 | cd "$TEST_RUN_DIR" 316 | log_info "Changed directory to $(pwd)" 317 | 318 | # === Copy .env file BEFORE init === 319 | log_step "Copying source .env file for API keys" 320 | if cp "$ORIGINAL_DIR/.env" ".env"; then 321 | log_success ".env file copied successfully." 322 | else 323 | log_error "Failed to copy .env file from $ORIGINAL_DIR/.env" 324 | exit 1 325 | fi 326 | # ======================================== 327 | 328 | # --- Test Execution (Output to tee) --- 329 | 330 | log_step "Linking task-master-ai package locally" 331 | npm link task-master-ai 332 | log_success "Package linked locally." 333 | 334 | log_step "Initializing Task Master project (non-interactive)" 335 | task-master init -y --name="E2E Test $TIMESTAMP" --description="Automated E2E test run" 336 | if [ ! -f ".taskmaster/config.json" ]; then 337 | log_error "Initialization failed: .taskmaster/config.json not found." 338 | exit 1 339 | fi 340 | log_success "Project initialized." 341 | 342 | log_step "Parsing PRD" 343 | cmd_output_prd=$(task-master parse-prd ./prd.txt --force 2>&1) 344 | exit_status_prd=$? 345 | echo "$cmd_output_prd" 346 | extract_and_sum_cost "$cmd_output_prd" 347 | if [ $exit_status_prd -ne 0 ] || [ ! -s ".taskmaster/tasks/tasks.json" ]; then 348 | log_error "Parsing PRD failed: .taskmaster/tasks/tasks.json not found or is empty. Exit status: $exit_status_prd" 349 | exit 1 350 | else 351 | log_success "PRD parsed successfully." 
352 | fi 353 | 354 | log_step "Analyzing complexity (research mode)" 355 | cmd_output_analyze=$(task-master analyze-complexity --research --output complexity_results.json 2>&1) 356 | exit_status_analyze=$? 357 | echo "$cmd_output_analyze" 358 | extract_and_sum_cost "$cmd_output_analyze" 359 | if [ $exit_status_analyze -ne 0 ] || [ ! -f "complexity_results.json" ]; then 360 | log_error "Complexity analysis failed: complexity_results.json not found. Exit status: $exit_status_analyze" 361 | exit 1 362 | else 363 | log_success "Complexity analysis saved to complexity_results.json" 364 | fi 365 | 366 | log_step "Generating complexity report" 367 | task-master complexity-report --file complexity_results.json > complexity_report_formatted.log 368 | log_success "Formatted complexity report saved to complexity_report_formatted.log" 369 | 370 | log_step "Expanding Task 1 (to ensure subtask 1.1 exists)" 371 | cmd_output_expand1=$(task-master expand --id=1 --cr complexity_results.json 2>&1) 372 | exit_status_expand1=$? 373 | echo "$cmd_output_expand1" 374 | extract_and_sum_cost "$cmd_output_expand1" 375 | if [ $exit_status_expand1 -ne 0 ]; then 376 | log_error "Expanding Task 1 failed. Exit status: $exit_status_expand1" 377 | else 378 | log_success "Attempted to expand Task 1." 379 | fi 380 | 381 | log_step "Setting status for Subtask 1.1 (assuming it exists)" 382 | task-master set-status --id=1.1 --status=done 383 | log_success "Attempted to set status for Subtask 1.1 to 'done'." 384 | 385 | log_step "Listing tasks again (after changes)" 386 | task-master list --with-subtasks > task_list_after_changes.log 387 | log_success "Task list after changes saved to task_list_after_changes.log" 388 | 389 | # === Start New Test Section: Tag-Aware Expand Testing === 390 | log_step "Creating additional tag for expand testing" 391 | task-master add-tag feature-expand --description="Tag for testing expand command with tag preservation" 392 | log_success "Created feature-expand tag."
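# (Editorial sanity check, added — safe to remove.) Tag names are the
# top-level keys of tasks.json, which is exactly what the `jq 'keys | length'`
# counts in this section rely on; listing them here makes any tag-corruption
# failure easier to diagnose from the log:
jq -r 'keys[]' .taskmaster/tasks/tasks.json 2>/dev/null | sed 's/^/[INFO]   tag: /' || true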
393 | 394 | log_step "Adding task to feature-expand tag" 395 | task-master add-task --tag=feature-expand --prompt="Test task for tag-aware expansion" --priority=medium 396 | # Get the new task ID dynamically 397 | new_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json) 398 | log_success "Added task $new_expand_task_id to feature-expand tag." 399 | 400 | log_step "Verifying tags exist before expand test" 401 | task-master tags > tags_before_expand.log 402 | tag_count_before=$(jq 'keys | length' .taskmaster/tasks/tasks.json) 403 | log_success "Tag count before expand: $tag_count_before" 404 | 405 | log_step "Expanding task in feature-expand tag (testing tag corruption fix)" 406 | cmd_output_expand_tagged=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" 2>&1) 407 | exit_status_expand_tagged=$? 408 | echo "$cmd_output_expand_tagged" 409 | extract_and_sum_cost "$cmd_output_expand_tagged" 410 | if [ $exit_status_expand_tagged -ne 0 ]; then 411 | log_error "Tagged expand failed. Exit status: $exit_status_expand_tagged" 412 | else 413 | log_success "Tagged expand completed." 414 | fi 415 | 416 | log_step "Verifying tag preservation after expand" 417 | task-master tags > tags_after_expand.log 418 | tag_count_after=$(jq 'keys | length' .taskmaster/tasks/tasks.json) 419 | 420 | if [ "$tag_count_before" -eq "$tag_count_after" ]; then 421 | log_success "Tag count preserved: $tag_count_after (no corruption detected)" 422 | else 423 | log_error "Tag corruption detected! 
Before: $tag_count_before, After: $tag_count_after" 424 | fi 425 | 426 | log_step "Verifying master tag still exists and has tasks" 427 | master_task_count=$(jq -r '.master.tasks | length' .taskmaster/tasks/tasks.json 2>/dev/null || echo "0") 428 | if [ "$master_task_count" -gt "0" ]; then 429 | log_success "Master tag preserved with $master_task_count tasks" 430 | else 431 | log_error "Master tag corrupted or empty after tagged expand" 432 | fi 433 | 434 | log_step "Verifying feature-expand tag has expanded subtasks" 435 | expanded_subtask_count=$(jq -r ".\"feature-expand\".tasks[] | select(.id == $new_expand_task_id) | .subtasks | length" .taskmaster/tasks/tasks.json 2>/dev/null || echo "0") 436 | if [ "$expanded_subtask_count" -gt "0" ]; then 437 | log_success "Expand successful: $expanded_subtask_count subtasks created in feature-expand tag" 438 | else 439 | log_error "Expand failed: No subtasks found in feature-expand tag" 440 | fi 441 | 442 | log_step "Testing force expand with tag preservation" 443 | cmd_output_force_expand=$(task-master expand --tag=feature-expand --id="$new_expand_task_id" --force 2>&1) 444 | exit_status_force_expand=$? 
445 | echo "$cmd_output_force_expand" 446 | extract_and_sum_cost "$cmd_output_force_expand" 447 | 448 | # Verify tags still preserved after force expand 449 | tag_count_after_force=$(jq 'keys | length' .taskmaster/tasks/tasks.json) 450 | if [ "$tag_count_before" -eq "$tag_count_after_force" ]; then 451 | log_success "Force expand preserved all tags" 452 | else 453 | log_error "Force expand caused tag corruption" 454 | fi 455 | 456 | log_step "Testing expand --all with tag preservation" 457 | # Add another task to feature-expand for expand-all testing 458 | task-master add-task --tag=feature-expand --prompt="Second task for expand-all testing" --priority=low 459 | second_expand_task_id=$(jq -r '.["feature-expand"].tasks[-1].id' .taskmaster/tasks/tasks.json) 460 | 461 | cmd_output_expand_all=$(task-master expand --tag=feature-expand --all 2>&1) 462 | exit_status_expand_all=$? 463 | echo "$cmd_output_expand_all" 464 | extract_and_sum_cost "$cmd_output_expand_all" 465 | 466 | # Verify tags preserved after expand-all 467 | tag_count_after_all=$(jq 'keys | length' .taskmaster/tasks/tasks.json) 468 | if [ "$tag_count_before" -eq "$tag_count_after_all" ]; then 469 | log_success "Expand --all preserved all tags" 470 | else 471 | log_error "Expand --all caused tag corruption" 472 | fi 473 | 474 | log_success "Completed expand --all tag preservation test." 475 | 476 | # === End New Test Section: Tag-Aware Expand Testing === 477 | 478 | # === Test Model Commands === 479 | log_step "Checking initial model configuration" 480 | task-master models > models_initial_config.log 481 | log_success "Initial model config saved to models_initial_config.log" 482 | 483 | log_step "Setting main model" 484 | task-master models --set-main claude-3-7-sonnet-20250219 485 | log_success "Set main model." 486 | 487 | log_step "Setting research model" 488 | task-master models --set-research sonar-pro 489 | log_success "Set research model." 
490 | 491 | log_step "Setting fallback model" 492 | task-master models --set-fallback claude-3-5-sonnet-20241022 493 | log_success "Set fallback model." 494 | 495 | log_step "Checking final model configuration" 496 | task-master models > models_final_config.log 497 | log_success "Final model config saved to models_final_config.log" 498 | 499 | log_step "Resetting main model to default (Claude Sonnet) before provider tests" 500 | task-master models --set-main claude-3-7-sonnet-20250219 501 | log_success "Main model reset to claude-3-7-sonnet-20250219." 502 | 503 | # === End Model Commands Test === 504 | 505 | # === Fallback Model generateObjectService Verification === 506 | if [ "$run_verification_test" = true ]; then 507 | log_step "Starting Fallback Model (generateObjectService) Verification (Calls separate script)" 508 | verification_script_path="$ORIGINAL_DIR/tests/e2e/run_fallback_verification.sh" 509 | 510 | if [ -x "$verification_script_path" ]; then 511 | log_info "--- Executing Fallback Verification Script: $verification_script_path ---" 512 | verification_output=$("$verification_script_path" "$(pwd)" 2>&1) 513 | verification_exit_code=$? 514 | echo "$verification_output" 515 | extract_and_sum_cost "$verification_output" 516 | 517 | log_info "--- Finished Fallback Verification Script Execution (Exit Code: $verification_exit_code) ---" 518 | 519 | # Log success/failure based on captured exit code 520 | if [ $verification_exit_code -eq 0 ]; then 521 | log_success "Fallback verification script reported success." 522 | else 523 | log_error "Fallback verification script reported FAILURE (Exit Code: $verification_exit_code)." 524 | fi 525 | else 526 | log_error "Fallback verification script not found or not executable at $verification_script_path. Skipping verification." 527 | fi 528 | else 529 | log_info "Skipping Fallback Verification test as requested by flag." 
530 | fi 531 | # === END Verification Section === 532 | 533 | 534 | # === Multi-Provider Add-Task Test (Keep as is) === 535 | log_step "Starting Multi-Provider Add-Task Test Sequence" 536 | 537 | # Define providers, models, and flags 538 | # Array order matters: providers[i] corresponds to models[i] and flags[i] 539 | declare -a providers=("anthropic" "openai" "google" "perplexity" "xai" "openrouter") 540 | declare -a models=( 541 | "claude-3-7-sonnet-20250219" 542 | "gpt-4o" 543 | "gemini-2.5-pro-preview-05-06" 544 | "sonar-pro" # Note: This is research-only, add-task might fail if not using research model 545 | "grok-3" 546 | "anthropic/claude-3.7-sonnet" # OpenRouter uses Claude 3.7 547 | ) 548 | # Flags: Add provider-specific flags here, e.g., --openrouter. Use empty string if none. 549 | declare -a flags=("" "" "" "" "" "--openrouter") 550 | 551 | # Consistent prompt for all providers 552 | add_task_prompt="Create a task to implement user authentication using OAuth 2.0 with Google as the provider. Include steps for registering the app, handling the callback, and storing user sessions." 553 | log_info "Using consistent prompt for add-task tests: \"$add_task_prompt\"" 554 | echo "--- Multi-Provider Add Task Summary ---" > provider_add_task_summary.log # Initialize summary log 555 | 556 | for i in "${!providers[@]}"; do 557 | provider="${providers[$i]}" 558 | model="${models[$i]}" 559 | flag="${flags[$i]}" 560 | 561 | log_step "Testing Add-Task with Provider: $provider (Model: $model)" 562 | 563 | # 1. Set the main model for this provider 564 | log_info "Setting main model to $model for $provider ${flag:+using flag $flag}..." 565 | set_model_cmd="task-master models --set-main \"$model\" $flag" 566 | echo "Executing: $set_model_cmd" 567 | if eval $set_model_cmd; then 568 | log_success "Successfully set main model for $provider." 569 | else 570 | log_error "Failed to set main model for $provider. Skipping add-task for this provider." 
571 | # Optionally save failure info here if needed for LLM analysis 572 | echo "Provider $provider set-main FAILED" >> provider_add_task_summary.log 573 | continue # Skip to the next provider 574 | fi 575 | 576 | # 2. Run add-task 577 | log_info "Running add-task with prompt..." 578 | add_task_output_file="add_task_raw_output_${provider}_${model//\//_}.log" # Sanitize ID 579 | # Run add-task and capture ALL output (stdout & stderr) to a file AND a variable 580 | add_task_cmd_output=$(task-master add-task --prompt "$add_task_prompt" 2>&1 | tee "$add_task_output_file") 581 | add_task_exit_code=${PIPESTATUS[0]} 582 | 583 | # 3. Check for success and extract task ID 584 | new_task_id="" 585 | extract_and_sum_cost "$add_task_cmd_output" 586 | if [ $add_task_exit_code -eq 0 ] && (echo "$add_task_cmd_output" | grep -q "✓ Added new task #" || echo "$add_task_cmd_output" | grep -q "✅ New task created successfully:" || echo "$add_task_cmd_output" | grep -q "Task [0-9]\+ Created Successfully"); then 587 | new_task_id=$(echo "$add_task_cmd_output" | grep -o -E "(Task |#)[0-9.]+" | grep -o -E "[0-9.]+" | head -n 1) 588 | if [ -n "$new_task_id" ]; then 589 | log_success "Add-task succeeded for $provider. New task ID: $new_task_id" 590 | echo "Provider $provider add-task SUCCESS (ID: $new_task_id)" >> provider_add_task_summary.log 591 | else 592 | # Succeeded but couldn't parse ID - treat as warning/anomaly 593 | log_error "Add-task command succeeded for $provider, but failed to extract task ID from output." 594 | echo "Provider $provider add-task SUCCESS (ID extraction FAILED)" >> provider_add_task_summary.log 595 | new_task_id="UNKNOWN_ID_EXTRACTION_FAILED" 596 | fi 597 | else 598 | log_error "Add-task command failed for $provider (Exit Code: $add_task_exit_code). See $add_task_output_file for details." 599 | echo "Provider $provider add-task FAILED (Exit Code: $add_task_exit_code)" >> provider_add_task_summary.log 600 | new_task_id="FAILED" 601 | fi 602 | 603 | # 4. 
Run task show if ID was obtained (even if extraction failed, use placeholder) 604 | if [ "$new_task_id" != "FAILED" ] && [ "$new_task_id" != "UNKNOWN_ID_EXTRACTION_FAILED" ]; then 605 | log_info "Running task show for new task ID: $new_task_id" 606 | show_output_file="add_task_show_output_${provider}_id_${new_task_id}.log" 607 | if task-master show "$new_task_id" > "$show_output_file"; then 608 | log_success "Task show output saved to $show_output_file" 609 | else 610 | log_error "task show command failed for ID $new_task_id. Check log." 611 | # Still keep the file, it might contain error output 612 | fi 613 | elif [ "$new_task_id" == "UNKNOWN_ID_EXTRACTION_FAILED" ]; then 614 | log_info "Skipping task show for $provider due to ID extraction failure." 615 | else 616 | log_info "Skipping task show for $provider due to add-task failure." 617 | fi 618 | 619 | done # End of provider loop 620 | 621 | log_step "Finished Multi-Provider Add-Task Test Sequence" 622 | echo "Provider add-task summary log available at: provider_add_task_summary.log" 623 | # === End Multi-Provider Add-Task Test === 624 | 625 | log_step "Listing tasks again (after multi-add)" 626 | task-master list --with-subtasks > task_list_after_multi_add.log 627 | log_success "Task list after multi-add saved to task_list_after_multi_add.log" 628 | 629 | 630 | # === Resume Core Task Commands Test === 631 | log_step "Listing tasks (for core tests)" 632 | task-master list > task_list_core_test_start.log 633 | log_success "Core test initial task list saved." 634 | 635 | log_step "Getting next task" 636 | task-master next > next_task_core_test.log 637 | log_success "Core test next task saved." 638 | 639 | log_step "Showing Task 1 details" 640 | task-master show 1 > task_1_details_core_test.log 641 | log_success "Task 1 details saved." 642 | 643 | log_step "Adding dependency (Task 2 depends on Task 1)" 644 | task-master add-dependency --id=2 --depends-on=1 645 | log_success "Added dependency 2->1." 
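# (Editorial sanity check, added — assumes the .master.tasks[] layout used by
# the jq queries later in this script; remove if the schema differs.)
# Confirm the dependency just added actually landed on Task 2:
#   jq -c '.master.tasks[] | select(.id == 2) | .dependencies' \
#       .taskmaster/tasks/tasks.json    # expected to contain 1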
646 | 647 | log_step "Validating dependencies (after add)" 648 | task-master validate-dependencies > validate_dependencies_after_add_core.log 649 | log_success "Dependency validation after add saved." 650 | 651 | log_step "Removing dependency (Task 2 depends on Task 1)" 652 | task-master remove-dependency --id=2 --depends-on=1 653 | log_success "Removed dependency 2->1." 654 | 655 | log_step "Fixing dependencies (should be no-op now)" 656 | task-master fix-dependencies > fix_dependencies_output_core.log 657 | log_success "Fix dependencies attempted." 658 | 659 | # === Start New Test Section: Validate/Fix Bad Dependencies === 660 | 661 | log_step "Intentionally adding non-existent dependency (1 -> 999)" 662 | task-master add-dependency --id=1 --depends-on=999 || log_error "Failed to add non-existent dependency (unexpected)" 663 | # Don't exit even if the above fails, the goal is to test validation 664 | log_success "Attempted to add dependency 1 -> 999." 665 | 666 | log_step "Validating dependencies (expecting non-existent error)" 667 | task-master validate-dependencies > validate_deps_non_existent.log 2>&1 || true # Allow command to fail without exiting script 668 | if grep -q "Non-existent dependency ID: 999" validate_deps_non_existent.log; then 669 | log_success "Validation correctly identified non-existent dependency 999." 670 | else 671 | log_error "Validation DID NOT report non-existent dependency 999 as expected. Check validate_deps_non_existent.log" 672 | fi 673 | 674 | log_step "Fixing dependencies (should remove 1 -> 999)" 675 | task-master fix-dependencies > fix_deps_after_non_existent.log 676 | log_success "Attempted to fix dependencies." 
677 | 678 | log_step "Validating dependencies (after fix)" 679 | task-master validate-dependencies > validate_deps_after_fix_non_existent.log 2>&1 || true # Allow potential failure 680 | if grep -q "Non-existent dependency ID: 999" validate_deps_after_fix_non_existent.log; then 681 | log_error "Validation STILL reports non-existent dependency 999 after fix. Check logs." 682 | else 683 | log_success "Validation shows non-existent dependency 999 was removed." 684 | fi 685 | 686 | 687 | log_step "Intentionally adding circular dependency (4 -> 5 -> 4)" 688 | task-master add-dependency --id=4 --depends-on=5 || log_error "Failed to add dependency 4->5" 689 | task-master add-dependency --id=5 --depends-on=4 || log_error "Failed to add dependency 5->4" 690 | log_success "Attempted to add dependencies 4 -> 5 and 5 -> 4." 691 | 692 | 693 | log_step "Validating dependencies (expecting circular error)" 694 | task-master validate-dependencies > validate_deps_circular.log 2>&1 || true # Allow command to fail 695 | # Note: Adjust the grep pattern based on the EXACT error message from validate-dependencies 696 | if grep -q -E "Circular dependency detected involving task IDs: (4, 5|5, 4)" validate_deps_circular.log; then 697 | log_success "Validation correctly identified circular dependency between 4 and 5." 698 | else 699 | log_error "Validation DID NOT report circular dependency 4<->5 as expected. Check validate_deps_circular.log" 700 | fi 701 | 702 | log_step "Fixing dependencies (should remove one side of 4 <-> 5)" 703 | task-master fix-dependencies > fix_deps_after_circular.log 704 | log_success "Attempted to fix dependencies." 
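# --- Illustrative sketch (not part of the original test flow; sketch_* names are new) ---
# The `grep -qE` alternation above accepts the circular-dependency message
# with the task IDs in either order. Quick self-check of that pattern:
sketch_matches=0
for sketch_msg in \
  "Circular dependency detected involving task IDs: 4, 5" \
  "Circular dependency detected involving task IDs: 5, 4"; do
  if printf '%s\n' "$sketch_msg" | grep -qE "Circular dependency detected involving task IDs: (4, 5|5, 4)"; then
    sketch_matches=$((sketch_matches + 1))
  fi
done
echo "sketch: circular-dependency pattern matched orderings: $sketch_matches/2"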
705 | 706 | log_step "Validating dependencies (after fix circular)" 707 | task-master validate-dependencies > validate_deps_after_fix_circular.log 2>&1 || true # Allow potential failure 708 | if grep -q -E "Circular dependency detected involving task IDs: (4, 5|5, 4)" validate_deps_after_fix_circular.log; then 709 | log_error "Validation STILL reports circular dependency 4<->5 after fix. Check logs." 710 | else 711 | log_success "Validation shows circular dependency 4<->5 was resolved." 712 | fi 713 | 714 | # === End New Test Section === 715 | 716 | # Find the next available task ID dynamically instead of hardcoding 11, 12 717 | # Assuming tasks are added sequentially and we didn't remove any core tasks yet 718 | last_task_id=$(jq '[.master.tasks[].id] | max' .taskmaster/tasks/tasks.json) 719 | manual_task_id=$((last_task_id + 1)) 720 | ai_task_id=$((manual_task_id + 1)) 721 | 722 | log_step "Adding Task $manual_task_id (Manual)" 723 | task-master add-task --title="Manual E2E Task" --description="Add basic health check endpoint" --priority=low --dependencies=3 # Depends on backend setup 724 | log_success "Added Task $manual_task_id manually." 725 | 726 | log_step "Adding Task $ai_task_id (AI)" 727 | cmd_output_add_ai=$(task-master add-task --prompt="Implement basic UI styling using CSS variables for colors and spacing" --priority=medium --dependencies=1 2>&1) 728 | exit_status_add_ai=$? 729 | echo "$cmd_output_add_ai" 730 | extract_and_sum_cost "$cmd_output_add_ai" 731 | if [ $exit_status_add_ai -ne 0 ]; then 732 | log_error "Adding AI Task $ai_task_id failed. Exit status: $exit_status_add_ai" 733 | else 734 | log_success "Added Task $ai_task_id via AI prompt." 735 | fi 736 | 737 | 738 | log_step "Updating Task 3 (update-task AI)" 739 | cmd_output_update_task3=$(task-master update-task --id=3 --prompt="Update backend server setup: Ensure CORS is configured to allow requests from the frontend origin." 2>&1) 740 | exit_status_update_task3=$? 
741 | echo "$cmd_output_update_task3" 742 | extract_and_sum_cost "$cmd_output_update_task3" 743 | if [ $exit_status_update_task3 -ne 0 ]; then 744 | log_error "Updating Task 3 failed. Exit status: $exit_status_update_task3" 745 | else 746 | log_success "Attempted update for Task 3." 747 | fi 748 | 749 | log_step "Updating Tasks from Task 5 (update AI)" 750 | cmd_output_update_from5=$(task-master update --from=5 --prompt="Refactor the backend storage module to use a simple JSON file (storage.json) instead of an in-memory object for persistence. Update relevant tasks." 2>&1) 751 | exit_status_update_from5=$? 752 | echo "$cmd_output_update_from5" 753 | extract_and_sum_cost "$cmd_output_update_from5" 754 | if [ $exit_status_update_from5 -ne 0 ]; then 755 | log_error "Updating from Task 5 failed. Exit status: $exit_status_update_from5" 756 | else 757 | log_success "Attempted update from Task 5 onwards." 758 | fi 759 | 760 | log_step "Expanding Task 8 (AI)" 761 | cmd_output_expand8=$(task-master expand --id=8 2>&1) 762 | exit_status_expand8=$? 763 | echo "$cmd_output_expand8" 764 | extract_and_sum_cost "$cmd_output_expand8" 765 | if [ $exit_status_expand8 -ne 0 ]; then 766 | log_error "Expanding Task 8 failed. Exit status: $exit_status_expand8" 767 | else 768 | log_success "Attempted to expand Task 8." 769 | fi 770 | 771 | log_step "Updating Subtask 8.1 (update-subtask AI)" 772 | cmd_output_update_subtask81=$(task-master update-subtask --id=8.1 --prompt="Implementation note: Remember to handle potential API errors and display a user-friendly message." 2>&1) 773 | exit_status_update_subtask81=$? 774 | echo "$cmd_output_update_subtask81" 775 | extract_and_sum_cost "$cmd_output_update_subtask81" 776 | if [ $exit_status_update_subtask81 -ne 0 ]; then 777 | log_error "Updating Subtask 8.1 failed. Exit status: $exit_status_update_subtask81" 778 | else 779 | log_success "Attempted update for Subtask 8.1." 
780 | fi 781 | 782 | # Add a couple more subtasks for multi-remove test 783 | log_step 'Adding subtasks to Task 2 (for multi-remove test)' 784 | task-master add-subtask --parent=2 --title="Subtask 2.1 for removal" 785 | task-master add-subtask --parent=2 --title="Subtask 2.2 for removal" 786 | log_success "Added subtasks 2.1 and 2.2." 787 | 788 | log_step "Removing Subtasks 2.1 and 2.2 (multi-ID)" 789 | task-master remove-subtask --id=2.1,2.2 790 | log_success "Removed subtasks 2.1 and 2.2." 791 | 792 | log_step "Setting status for Task 1 to done" 793 | task-master set-status --id=1 --status=done 794 | log_success "Set status for Task 1 to done." 795 | 796 | log_step "Getting next task (after status change)" 797 | task-master next > next_task_after_change_core.log 798 | log_success "Next task after change saved." 799 | 800 | # === Start New Test Section: List Filtering === 801 | log_step "Listing tasks filtered by status 'done'" 802 | task-master list --status=done > task_list_status_done.log 803 | log_success "Filtered list saved to task_list_status_done.log (Manual/LLM check recommended)" 804 | # Optional assertion: Check if Task 1 ID exists and Task 2 ID does NOT 805 | # if grep -q "^1\." task_list_status_done.log && ! grep -q "^2\." task_list_status_done.log; then 806 | # log_success "Basic check passed: Task 1 found, Task 2 not found in 'done' list." 807 | # else 808 | # log_error "Basic check failed for list --status=done." 809 | # fi 810 | # === End New Test Section === 811 | 812 | log_step "Clearing subtasks from Task 8" 813 | task-master clear-subtasks --id=8 814 | log_success "Attempted to clear subtasks from Task 8." 815 | 816 | log_step "Removing Tasks $manual_task_id and $ai_task_id (multi-ID)" 817 | # Remove the tasks we added earlier 818 | task-master remove-task --id="$manual_task_id,$ai_task_id" -y 819 | log_success "Removed tasks $manual_task_id and $ai_task_id." 
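# --- Illustrative sketch (not part of the original test flow; sketch_* names are new) ---
# The dynamic-ID computation above derives new task IDs from the current
# maximum in tasks.json instead of hardcoding them. The same jq expression,
# demonstrated against an inline sample document (jq is already a
# dependency of this script):
sketch_json='{"master":{"tasks":[{"id":1},{"id":2},{"id":7}]}}'
sketch_last_id=$(printf '%s' "$sketch_json" | jq '[.master.tasks[].id] | max')
sketch_next_id=$((sketch_last_id + 1))
echo "sketch: max existing id=$sketch_last_id, next id=$sketch_next_id"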
820 | 821 | # === Start New Test Section: Subtasks & Dependencies === 822 | 823 | log_step "Expanding Task 2 (to ensure multiple tasks have subtasks)" 824 | task-master expand --id=2 # Expand task 2: Backend setup 825 | log_success "Attempted to expand Task 2." 826 | 827 | log_step "Listing tasks with subtasks (Before Clear All)" 828 | task-master list --with-subtasks > task_list_before_clear_all.log 829 | log_success "Task list before clear-all saved." 830 | 831 | log_step "Clearing ALL subtasks" 832 | task-master clear-subtasks --all 833 | log_success "Attempted to clear all subtasks." 834 | 835 | log_step "Listing tasks with subtasks (After Clear All)" 836 | task-master list --with-subtasks > task_list_after_clear_all.log 837 | log_success "Task list after clear-all saved. (Manual/LLM check recommended to verify subtasks removed)" 838 | 839 | log_step "Expanding Task 3 again (to have subtasks for next test)" 840 | task-master expand --id=3 841 | log_success "Attempted to expand Task 3." 842 | # Verify 3.1 exists 843 | if ! jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' .taskmaster/tasks/tasks.json > /dev/null; then 844 | log_error "Subtask 3.1 not found in tasks.json after expanding Task 3." 845 | exit 1 846 | fi 847 | 848 | log_step "Adding dependency: Task 4 depends on Subtask 3.1" 849 | task-master add-dependency --id=4 --depends-on=3.1 850 | log_success "Added dependency 4 -> 3.1." 851 | 852 | log_step "Showing Task 4 details (after adding subtask dependency)" 853 | task-master show 4 > task_4_details_after_dep_add.log 854 | log_success "Task 4 details saved. (Manual/LLM check recommended for dependency [3.1])" 855 | 856 | log_step "Removing dependency: Task 4 depends on Subtask 3.1" 857 | task-master remove-dependency --id=4 --depends-on=3.1 858 | log_success "Removed dependency 4 -> 3.1." 
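# --- Illustrative sketch (not part of the original test flow; sketch_* names are new) ---
# The `jq -e` guard above (verifying subtask 3.1 exists) works because -e
# sets jq's exit status from its output (non-empty, non-false => 0), so the
# check can drive an if/else directly. Demo against an inline sample:
sketch_tasks='{"master":{"tasks":[{"id":3,"subtasks":[{"id":1}]}]}}'
if printf '%s' "$sketch_tasks" | jq -e '.master.tasks[] | select(.id == 3) | .subtasks[] | select(.id == 1)' > /dev/null; then
  sketch_subtask_state=present
else
  sketch_subtask_state=missing
fi
echo "sketch: subtask 3.1 is $sketch_subtask_state"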
859 | 860 | log_step "Showing Task 4 details (after removing subtask dependency)" 861 | task-master show 4 > task_4_details_after_dep_remove.log 862 | log_success "Task 4 details saved. (Manual/LLM check recommended to verify dependency removed)" 863 | 864 | # === End New Test Section === 865 | 866 | log_step "Generating task files (final)" 867 | task-master generate 868 | log_success "Generated task files." 869 | # === End Core Task Commands Test === 870 | 871 | # === AI Commands (Re-test some after changes) === 872 | log_step "Analyzing complexity (AI with Research - Final Check)" 873 | cmd_output_analyze_final=$(task-master analyze-complexity --research --output complexity_results_final.json 2>&1) 874 | exit_status_analyze_final=$? 875 | echo "$cmd_output_analyze_final" 876 | extract_and_sum_cost "$cmd_output_analyze_final" 877 | if [ $exit_status_analyze_final -ne 0 ] || [ ! -f "complexity_results_final.json" ]; then 878 | log_error "Final Complexity analysis failed. Exit status: $exit_status_analyze_final. File found: $(test -f complexity_results_final.json && echo true || echo false)" 879 | exit 1 # Critical for subsequent report step 880 | else 881 | log_success "Final Complexity analysis command executed and file created." 882 | fi 883 | 884 | log_step "Generating complexity report (Non-AI - Final Check)" 885 | task-master complexity-report --file complexity_results_final.json > complexity_report_formatted_final.log 886 | log_success "Final Formatted complexity report saved." 
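# --- Illustrative sketch (not part of the original test flow; sketch_* names are new) ---
# The `} 2>&1 | tee "$LOG_FILE"` pipeline that closes this block reports
# tee's exit status in $?, which is why the script reads ${PIPESTATUS[0]}
# (bash-specific) to recover the first command's status. Demo; both values
# must be read in one command because PIPESTATUS is reset after each one:
( exit 3 ) | cat > /dev/null
read -r sketch_first sketch_last <<< "${PIPESTATUS[*]}"
echo "sketch: pipeline first=$sketch_first last=$sketch_last"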
887 | 888 | # === End AI Commands Re-test === 889 | 890 | log_step "Listing tasks again (final)" 891 | task-master list --with-subtasks > task_list_final.log 892 | log_success "Final task list saved to task_list_final.log" 893 | 894 | # --- Test Completion (Output to tee) --- 895 | log_step "E2E Test Steps Completed" 896 | echo "" 897 | ABS_TEST_RUN_DIR="$(pwd)" 898 | echo "Test artifacts and logs are located in: $ABS_TEST_RUN_DIR" 899 | echo "Key artifact files (within above dir):" 900 | ls -1 # List files in the current directory 901 | echo "" 902 | echo "Full script log also available at: $LOG_FILE (relative to project root)" 903 | 904 | # Optional: cd back to original directory 905 | # cd "$ORIGINAL_DIR" 906 | 907 | # End of the main execution block brace 908 | } 2>&1 | tee "$LOG_FILE" 909 | 910 | # --- Final Terminal Message --- 911 | EXIT_CODE=${PIPESTATUS[0]} 912 | overall_end_time=$(date +%s) 913 | total_elapsed_seconds=$((overall_end_time - overall_start_time)) 914 | 915 | # Format total duration 916 | total_minutes=$((total_elapsed_seconds / 60)) 917 | total_sec_rem=$((total_elapsed_seconds % 60)) 918 | formatted_total_time=$(printf "%dm%02ds" "$total_minutes" "$total_sec_rem") 919 | 920 | # Count steps and successes from the log file *after* the pipe finishes 921 | # Use grep -c for counting lines matching the pattern 922 | # Corrected pattern to match ' STEP X:' format 923 | final_step_count=$(grep -c '^[[:space:]]\+STEP [0-9]\+:' "$LOG_FILE" || true) 924 | final_success_count=$(grep -c '\[SUCCESS\]' "$LOG_FILE" || true) # Count lines containing [SUCCESS] 925 | 926 | echo "--- E2E Run Summary ---" 927 | echo "Log File: $LOG_FILE" 928 | echo "Total Elapsed Time: ${formatted_total_time}" 929 | echo "Total Steps Executed: ${final_step_count}" # Use count from log 930 | 931 | if [ $EXIT_CODE -eq 0 ]; then 932 | echo "Status: SUCCESS" 933 | # Use counts from log file 934 | echo "Successful Steps: ${final_success_count}/${final_step_count}" 935 | else 936 | 
echo "Status: FAILED" 937 | # Use count from log file for total steps attempted 938 | echo "Failure likely occurred during/after Step: ${final_step_count}" 939 | # Use count from log file for successes before failure 940 | echo "Successful Steps Before Failure: ${final_success_count}" 941 | echo "Please check the log file '$LOG_FILE' for error details." 942 | fi 943 | echo "-------------------------" 944 | 945 | # --- Attempt LLM Analysis --- 946 | # Run this *after* the main execution block and tee pipe finish writing the log file 947 | if [ -d "$TEST_RUN_DIR" ]; then 948 | # Define absolute path to source dir if not already defined (though it should be by setup) 949 | TASKMASTER_SOURCE_DIR_ABS=${TASKMASTER_SOURCE_DIR_ABS:-$(cd "$ORIGINAL_DIR/$TASKMASTER_SOURCE_DIR" && pwd)} 950 | 951 | cd "$TEST_RUN_DIR" 952 | # Pass the absolute source directory path 953 | analyze_log_with_llm "$LOG_FILE" "$TASKMASTER_SOURCE_DIR_ABS" 954 | ANALYSIS_EXIT_CODE=$? # Capture the exit code of the analysis function 955 | # Optional: cd back again if needed 956 | cd "$ORIGINAL_DIR" # Ensure we change back to the original directory 957 | else 958 | formatted_duration_for_error=$(_format_duration "$total_elapsed_seconds") 959 | echo "[ERROR] [$formatted_duration_for_error] $(date +"%Y-%m-%d %H:%M:%S") Test run directory $TEST_RUN_DIR not found. Cannot perform LLM analysis." 
>&2 960 | fi 961 | 962 | # Final cost formatting 963 | formatted_total_e2e_cost=$(printf "%.6f" "$total_e2e_cost") 964 | echo "Total E2E AI Cost: $formatted_total_e2e_cost USD" 965 | 966 | exit $EXIT_CODE ``` -------------------------------------------------------------------------------- /.kiro/steering/taskmaster.md: -------------------------------------------------------------------------------- ```markdown 1 | --- 2 | inclusion: always 3 | --- 4 | 5 | # Taskmaster Tool & Command Reference 6 | 7 | This document provides a detailed reference for interacting with Taskmaster, covering both the recommended MCP tools, suitable for integrations like Kiro, and the corresponding `task-master` CLI commands, designed for direct user interaction or fallback. 8 | 9 | **Note:** For interacting with Taskmaster programmatically or via integrated tools, using the **MCP tools is strongly recommended** due to better performance, structured data, and error handling. The CLI commands serve as a user-friendly alternative and fallback. 10 | 11 | **Important:** Several MCP tools involve AI processing... The AI-powered tools include `parse_prd`, `analyze_project_complexity`, `update_subtask`, `update_task`, `update`, `expand_all`, `expand_task`, and `add_task`. 12 | 13 | **🏷️ Tagged Task Lists System:** Task Master now supports **tagged task lists** for multi-context task management. This allows you to maintain separate, isolated lists of tasks for different features, branches, or experiments. Existing projects are seamlessly migrated to use a default "master" tag. Most commands now support a `--tag <name>` flag to specify which context to operate on. If omitted, commands use the currently active tag. 14 | 15 | --- 16 | 17 | ## Initialization & Setup 18 | 19 | ### 1. 
Initialize Project (`init`) 20 | 21 | * **MCP Tool:** `initialize_project` 22 | * **CLI Command:** `task-master init [options]` 23 | * **Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project.` 24 | * **Key CLI Options:** 25 | * `--name <name>`: `Set the name for your project in Taskmaster's configuration.` 26 | * `--description <text>`: `Provide a brief description for your project.` 27 | * `--version <version>`: `Set the initial version for your project, e.g., '0.1.0'.` 28 | * `-y, --yes`: `Initialize Taskmaster quickly using default settings without interactive prompts.` 29 | * **Usage:** Run this once at the beginning of a new project. 30 | * **MCP Variant Description:** `Set up the basic Taskmaster file structure and configuration in the current directory for a new project by running the 'task-master init' command.` 31 | * **Key MCP Parameters/Options:** 32 | * `projectName`: `Set the name for your project.` (CLI: `--name <name>`) 33 | * `projectDescription`: `Provide a brief description for your project.` (CLI: `--description <text>`) 34 | * `projectVersion`: `Set the initial version for your project, e.g., '0.1.0'.` (CLI: `--version <version>`) 35 | * `authorName`: `Author name.` (CLI: `--author <author>`) 36 | * `skipInstall`: `Skip installing dependencies. Default is false.` (CLI: `--skip-install`) 37 | * `addAliases`: `Add shell aliases tm and taskmaster. Default is false.` (CLI: `--aliases`) 38 | * `yes`: `Skip prompts and use defaults/provided arguments. Default is false.` (CLI: `-y, --yes`) 39 | * **Usage:** Run this once at the beginning of a new project, typically via an integrated tool like Kiro. Operates on the current working directory of the MCP server. 40 | * **Important:** Once complete, you *MUST* parse a PRD in order to generate tasks. There will be no task files until then.
The next step after initializing should be to create a PRD using the example PRD in .taskmaster/templates/example_prd.txt. 41 | * **Tagging:** Use the `--tag` option to parse the PRD into a specific, non-default tag context. If the tag doesn't exist, it will be created automatically. Example: `task-master parse-prd spec.txt --tag=new-feature`. 42 | 43 | ### 2. Parse PRD (`parse_prd`) 44 | 45 | * **MCP Tool:** `parse_prd` 46 | * **CLI Command:** `task-master parse-prd [file] [options]` 47 | * **Description:** `Parse a Product Requirements Document, PRD, or text file with Taskmaster to automatically generate an initial set of tasks in tasks.json.` 48 | * **Key Parameters/Options:** 49 | * `input`: `Path to your PRD or requirements text file that Taskmaster should parse for tasks.` (CLI: `[file]` positional or `-i, --input <file>`) 50 | * `output`: `Specify where Taskmaster should save the generated 'tasks.json' file. Defaults to '.taskmaster/tasks/tasks.json'.` (CLI: `-o, --output <file>`) 51 | * `numTasks`: `Approximate number of top-level tasks Taskmaster should aim to generate from the document.` (CLI: `-n, --num-tasks <number>`) 52 | * `force`: `Use this to allow Taskmaster to overwrite an existing 'tasks.json' without asking for confirmation.` (CLI: `-f, --force`) 53 | * **Usage:** Useful for bootstrapping a project from an existing requirements document. 54 | * **Notes:** Task Master will strictly adhere to any specific requirements mentioned in the PRD, such as libraries, database schemas, frameworks, tech stacks, etc., while filling in any gaps where the PRD isn't fully specified. Tasks are designed to provide the most direct implementation path while avoiding over-engineering. 55 | * **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. 
If the user does not have a PRD, suggest discussing their idea and then use the example PRD in `.taskmaster/templates/example_prd.txt` as a template for creating the PRD based on their idea, for use with `parse-prd`. 56 | 57 | --- 58 | 59 | ## AI Model Configuration 60 | 61 | ### 2. Manage Models (`models`) 62 | * **MCP Tool:** `models` 63 | * **CLI Command:** `task-master models [options]` 64 | * **Description:** `View the current AI model configuration or set specific models for different roles (main, research, fallback). Allows setting custom model IDs for Ollama and OpenRouter.` 65 | * **Key MCP Parameters/Options:** 66 | * `setMain <model_id>`: `Set the primary model ID for task generation/updates.` (CLI: `--set-main <model_id>`) 67 | * `setResearch <model_id>`: `Set the model ID for research-backed operations.` (CLI: `--set-research <model_id>`) 68 | * `setFallback <model_id>`: `Set the model ID to use if the primary fails.` (CLI: `--set-fallback <model_id>`) 69 | * `ollama <boolean>`: `Indicates the set model ID is a custom Ollama model.` (CLI: `--ollama`) 70 | * `openrouter <boolean>`: `Indicates the set model ID is a custom OpenRouter model.` (CLI: `--openrouter`) 71 | * `listAvailableModels <boolean>`: `If true, lists available models not currently assigned to a role.` (CLI: No direct equivalent; CLI lists available automatically) 72 | * `projectRoot <string>`: `Optional. Absolute path to the project root directory.` (CLI: Determined automatically) 73 | * **Key CLI Options:** 74 | * `--set-main <model_id>`: `Set the primary model.` 75 | * `--set-research <model_id>`: `Set the research model.` 76 | * `--set-fallback <model_id>`: `Set the fallback model.` 77 | * `--ollama`: `Specify that the provided model ID is for Ollama (use with --set-*).` 78 | * `--openrouter`: `Specify that the provided model ID is for OpenRouter (use with --set-*). 
Validates against OpenRouter API.` 79 | * `--bedrock`: `Specify that the provided model ID is for AWS Bedrock (use with --set-*).` 80 | * `--setup`: `Run interactive setup to configure models, including custom Ollama/OpenRouter IDs.` 81 | * **Usage (MCP):** Call without set flags to get current config. Use `setMain`, `setResearch`, or `setFallback` with a valid model ID to update the configuration. Use `listAvailableModels: true` to get a list of unassigned models. To set a custom model, provide the model ID and set `ollama: true` or `openrouter: true`. 82 | * **Usage (CLI):** Run without flags to view current configuration and available models. Use set flags to update specific roles. Use `--setup` for guided configuration, including custom models. To set a custom model via flags, use `--set-<role>=<model_id>` along with either `--ollama` or `--openrouter`. 83 | * **Notes:** Configuration is stored in `.taskmaster/config.json` in the project root. This command/tool modifies that file. Use `listAvailableModels` or `task-master models` to see internally supported models. OpenRouter custom models are validated against their live API. Ollama custom models are not validated live. 84 | * **API note:** API keys for selected AI providers (based on their model) need to exist in the mcp.json file to be accessible in MCP context. The API keys must be present in the local .env file for the CLI to be able to read them. 85 | * **Model costs:** The costs in supported models are expressed in dollars. An input/output value of 3 is $3.00. A value of 0.8 is $0.80. 86 | * **Warning:** DO NOT MANUALLY EDIT THE .taskmaster/config.json FILE. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback. 87 | 88 | --- 89 | 90 | ## Task Listing & Viewing 91 | 92 | ### 3. 
Get Tasks (`get_tasks`) 93 | 94 | * **MCP Tool:** `get_tasks` 95 | * **CLI Command:** `task-master list [options]` 96 | * **Description:** `List your Taskmaster tasks, optionally filtering by status and showing subtasks.` 97 | * **Key Parameters/Options:** 98 | * `status`: `Show only Taskmaster tasks matching this status (or multiple statuses, comma-separated), e.g., 'pending' or 'done,in-progress'.` (CLI: `-s, --status <status>`) 99 | * `withSubtasks`: `Include subtasks indented under their parent tasks in the list.` (CLI: `--with-subtasks`) 100 | * `tag`: `Specify which tag context to list tasks from. Defaults to the current active tag.` (CLI: `--tag <name>`) 101 | * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) 102 | * **Usage:** Get an overview of the project status, often used at the start of a work session. 103 | 104 | ### 4. Get Next Task (`next_task`) 105 | 106 | * **MCP Tool:** `next_task` 107 | * **CLI Command:** `task-master next [options]` 108 | * **Description:** `Ask Taskmaster to show the next available task you can work on, based on status and completed dependencies.` 109 | * **Key Parameters/Options:** 110 | * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) 111 | * `tag`: `Specify which tag context to use. Defaults to the current active tag.` (CLI: `--tag <name>`) 112 | * **Usage:** Identify what to work on next according to the plan. 113 | 114 | ### 5. Get Task Details (`get_task`) 115 | 116 | * **MCP Tool:** `get_task` 117 | * **CLI Command:** `task-master show [id] [options]` 118 | * **Description:** `Display detailed information for one or more specific Taskmaster tasks or subtasks by ID.` 119 | * **Key Parameters/Options:** 120 | * `id`: `Required. 
The ID of the Taskmaster task (e.g., '15'), subtask (e.g., '15.2'), or a comma-separated list of IDs ('1,5,10.2') you want to view.` (CLI: `[id]` positional or `-i, --id <id>`) 121 | * `tag`: `Specify which tag context to get the task(s) from. Defaults to the current active tag.` (CLI: `--tag <name>`) 122 | * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) 123 | * **Usage:** Understand the full details for a specific task. When multiple IDs are provided, a summary table is shown. 124 | * **CRITICAL INFORMATION** If you need to collect information from multiple tasks, use comma-separated IDs (i.e. 1,2,3) to receive an array of tasks. Do not needlessly get tasks one at a time if you need to get many as that is wasteful. 125 | 126 | --- 127 | 128 | ## Task Creation & Modification 129 | 130 | ### 6. Add Task (`add_task`) 131 | 132 | * **MCP Tool:** `add_task` 133 | * **CLI Command:** `task-master add-task [options]` 134 | * **Description:** `Add a new task to Taskmaster by describing it; AI will structure it.` 135 | * **Key Parameters/Options:** 136 | * `prompt`: `Required. Describe the new task you want Taskmaster to create, e.g., "Implement user authentication using JWT".` (CLI: `-p, --prompt <text>`) 137 | * `dependencies`: `Specify the IDs of any Taskmaster tasks that must be completed before this new one can start, e.g., '12,14'.` (CLI: `-d, --dependencies <ids>`) 138 | * `priority`: `Set the priority for the new task: 'high', 'medium', or 'low'. Default is 'medium'.` (CLI: `--priority <priority>`) 139 | * `research`: `Enable Taskmaster to use the research role for potentially more informed task creation.` (CLI: `-r, --research`) 140 | * `tag`: `Specify which tag context to add the task to. Defaults to the current active tag.` (CLI: `--tag <name>`) 141 | * `file`: `Path to your Taskmaster 'tasks.json' file. 
Default relies on auto-detection.` (CLI: `-f, --file <file>`) 142 | * **Usage:** Quickly add newly identified tasks during development. 143 | * **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress. 144 | 145 | ### 7. Add Subtask (`add_subtask`) 146 | 147 | * **MCP Tool:** `add_subtask` 148 | * **CLI Command:** `task-master add-subtask [options]` 149 | * **Description:** `Add a new subtask to a Taskmaster parent task, or convert an existing task into a subtask.` 150 | * **Key Parameters/Options:** 151 | * `id` / `parent`: `Required. The ID of the Taskmaster task that will be the parent.` (MCP: `id`, CLI: `-p, --parent <id>`) 152 | * `taskId`: `Use this if you want to convert an existing top-level Taskmaster task into a subtask of the specified parent.` (CLI: `-i, --task-id <id>`) 153 | * `title`: `Required if not using taskId. The title for the new subtask Taskmaster should create.` (CLI: `-t, --title <title>`) 154 | * `description`: `A brief description for the new subtask.` (CLI: `-d, --description <text>`) 155 | * `details`: `Provide implementation notes or details for the new subtask.` (CLI: `--details <text>`) 156 | * `dependencies`: `Specify IDs of other tasks or subtasks, e.g., '15' or '16.1', that must be done before this new subtask.` (CLI: `--dependencies <ids>`) 157 | * `status`: `Set the initial status for the new subtask. Default is 'pending'.` (CLI: `-s, --status <status>`) 158 | * `generate`: `Enable Taskmaster to regenerate markdown task files after adding the subtask.` (CLI: `--generate`) 159 | * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`) 160 | * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`) 161 | * **Usage:** Break down tasks manually or reorganize existing tasks. 162 | 163 | ### 8. 
Update Tasks (`update`)

* **MCP Tool:** `update`
* **CLI Command:** `task-master update [options]`
* **Description:** `Update multiple upcoming tasks in Taskmaster based on new context or changes, starting from a specific task ID.`
* **Key Parameters/Options:**
    * `from`: `Required. The ID of the first task Taskmaster should update. All tasks with this ID or higher that are not 'done' will be considered.` (CLI: `--from <id>`)
    * `prompt`: `Required. Explain the change or new context for Taskmaster to apply to the tasks, e.g., "We are now using React Query instead of Redux Toolkit for data fetching".` (CLI: `-p, --prompt <text>`)
    * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Handle significant implementation changes or pivots that affect multiple future tasks. Example CLI: `task-master update --from='18' --prompt='Switching to React Query.\nNeed to refactor data fetching...'`
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 9. Update Task (`update_task`)

* **MCP Tool:** `update_task`
* **CLI Command:** `task-master update-task [options]`
* **Description:** `Modify a specific Taskmaster task by ID, incorporating new information or changes. By default, this replaces the existing task details.`
* **Key Parameters/Options:**
    * `id`: `Required. The specific ID of the Taskmaster task, e.g., '15', you want to update.` (CLI: `-i, --id <id>`)
    * `prompt`: `Required. Explain the specific changes or provide the new information Taskmaster should incorporate into this task.` (CLI: `-p, --prompt <text>`)
    * `append`: `If true, appends the prompt content to the task's details with a timestamp, rather than replacing them. Behaves like update-subtask.` (CLI: `--append`)
    * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Refine a specific task based on new understanding. Use `--append` to log progress without creating subtasks.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 10. Update Subtask (`update_subtask`)

* **MCP Tool:** `update_subtask`
* **CLI Command:** `task-master update-subtask [options]`
* **Description:** `Append timestamped notes or details to a specific Taskmaster subtask without overwriting existing content. Intended for iterative implementation logging.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster subtask, e.g., '5.2', to update with new information.` (CLI: `-i, --id <id>`)
    * `prompt`: `Required. The information, findings, or progress notes to append to the subtask's details with a timestamp.` (CLI: `-p, --prompt <text>`)
    * `research`: `Enable Taskmaster to use the research role for more informed updates. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context the subtask belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Log implementation progress, findings, and discoveries during subtask development. Each update is timestamped and appended to preserve the implementation journey.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 11. Set Task Status (`set_task_status`)

* **MCP Tool:** `set_task_status`
* **CLI Command:** `task-master set-status [options]`
* **Description:** `Update the status of one or more Taskmaster tasks or subtasks, e.g., 'pending', 'in-progress', 'done'.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID(s) of the Taskmaster task(s) or subtask(s), e.g., '15', '15.2', or '16,17.1', to update.` (CLI: `-i, --id <id>`)
    * `status`: `Required. The new status to set, e.g., 'done', 'pending', 'in-progress', 'review', 'cancelled'.` (CLI: `-s, --status <status>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Mark progress as tasks move through the development cycle.

### 12. Remove Task (`remove_task`)

* **MCP Tool:** `remove_task`
* **CLI Command:** `task-master remove-task [options]`
* **Description:** `Permanently remove a task or subtask from the Taskmaster tasks list.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task, e.g., '5', or subtask, e.g., '5.2', to permanently remove.` (CLI: `-i, --id <id>`)
    * `yes`: `Skip the confirmation prompt and immediately delete the task.` (CLI: `-y, --yes`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Permanently delete tasks or subtasks that are no longer needed in the project.
* **Notes:** Use with caution as this operation cannot be undone. Consider using 'blocked', 'cancelled', or 'deferred' status instead if you just want to exclude a task from active planning but keep it for reference. The command automatically cleans up dependency references in other tasks.

---

## Task Structure & Breakdown

### 13. Expand Task (`expand_task`)

* **MCP Tool:** `expand_task`
* **CLI Command:** `task-master expand [options]`
* **Description:** `Use Taskmaster's AI to break down a complex task into smaller, manageable subtasks. Appends subtasks by default.`
* **Key Parameters/Options:**
    * `id`: `The ID of the specific Taskmaster task you want to break down into subtasks.` (CLI: `-i, --id <id>`)
    * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create. Uses complexity analysis/defaults otherwise.` (CLI: `-n, --num <number>`)
    * `research`: `Enable Taskmaster to use the research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
    * `prompt`: `Optional: Provide extra context or specific instructions to Taskmaster for generating the subtasks.` (CLI: `-p, --prompt <text>`)
    * `force`: `Optional: If true, clear existing subtasks before generating new ones. Default is false (append).` (CLI: `--force`)
    * `tag`: `Specify which tag context the task belongs to. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Generate a detailed implementation plan for a complex task before starting coding. Automatically uses complexity report recommendations if available and `num` is not specified.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 14. Expand All Tasks (`expand_all`)

* **MCP Tool:** `expand_all`
* **CLI Command:** `task-master expand --all [options]` (Note: CLI uses the `expand` command with the `--all` flag)
* **Description:** `Tell Taskmaster to automatically expand all eligible pending/in-progress tasks based on complexity analysis or defaults. Appends subtasks by default.`
* **Key Parameters/Options:**
    * `num`: `Optional: Suggests how many subtasks Taskmaster should aim to create per task.` (CLI: `-n, --num <number>`)
    * `research`: `Enable research role for more informed subtask generation. Requires appropriate API key.` (CLI: `-r, --research`)
    * `prompt`: `Optional: Provide extra context for Taskmaster to apply generally during expansion.` (CLI: `-p, --prompt <text>`)
    * `force`: `Optional: If true, clear existing subtasks before generating new ones for each eligible task. Default is false (append).` (CLI: `--force`)
    * `tag`: `Specify which tag context to expand. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Useful after initial task generation or complexity analysis to break down multiple tasks at once.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 15. Clear Subtasks (`clear_subtasks`)

* **MCP Tool:** `clear_subtasks`
* **CLI Command:** `task-master clear-subtasks [options]`
* **Description:** `Remove all subtasks from one or more specified Taskmaster parent tasks.`
* **Key Parameters/Options:**
    * `id`: `The ID(s) of the Taskmaster parent task(s) whose subtasks you want to remove, e.g., '15' or '16,18'. Required unless using 'all'.` (CLI: `-i, --id <ids>`)
    * `all`: `Tell Taskmaster to remove subtasks from all parent tasks.` (CLI: `--all`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before regenerating subtasks with `expand_task` if the previous breakdown needs replacement.

### 16. Remove Subtask (`remove_subtask`)

* **MCP Tool:** `remove_subtask`
* **CLI Command:** `task-master remove-subtask [options]`
* **Description:** `Remove a subtask from its Taskmaster parent, optionally converting it into a standalone task.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID(s) of the Taskmaster subtask(s) to remove, e.g., '15.2' or '16.1,16.3'.` (CLI: `-i, --id <id>`)
    * `convert`: `If used, Taskmaster will turn the subtask into a regular top-level task instead of deleting it.` (CLI: `-c, --convert`)
    * `generate`: `Enable Taskmaster to regenerate markdown task files after removing the subtask.` (CLI: `--generate`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Delete unnecessary subtasks or promote a subtask to a top-level task.

### 17. Move Task (`move_task`)

* **MCP Tool:** `move_task`
* **CLI Command:** `task-master move [options]`
* **Description:** `Move a task or subtask to a new position within the task hierarchy.`
* **Key Parameters/Options:**
    * `from`: `Required. ID of the task/subtask to move (e.g., "5" or "5.2"). Can be comma-separated for multiple tasks.` (CLI: `--from <id>`)
    * `to`: `Required. ID of the destination (e.g., "7" or "7.3"). Must match the number of source IDs if comma-separated.` (CLI: `--to <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Reorganize tasks by moving them within the hierarchy. Supports various scenarios like:
    * Moving a task to become a subtask
    * Moving a subtask to become a standalone task
    * Moving a subtask to a different parent
    * Reordering subtasks within the same parent
    * Moving a task to a new, non-existent ID (automatically creates placeholders)
    * Moving multiple tasks at once with comma-separated IDs
* **Validation Features:**
    * Allows moving tasks to non-existent destination IDs (creates placeholder tasks)
    * Prevents moving to existing task IDs that already have content (to avoid overwriting)
    * Validates that source tasks exist before attempting to move them
    * Maintains proper parent-child relationships
* **Example CLI:** `task-master move --from=5.2 --to=7.3` to move subtask 5.2 to become subtask 7.3.
* **Example Multi-Move:** `task-master move --from=10,11,12 --to=16,17,18` to move multiple tasks to new positions.
* **Common Use:** Resolving merge conflicts in tasks.json when multiple team members create tasks on different branches.

---

## Dependency Management

### 18. Add Dependency (`add_dependency`)

* **MCP Tool:** `add_dependency`
* **CLI Command:** `task-master add-dependency [options]`
* **Description:** `Define a dependency in Taskmaster, making one task a prerequisite for another.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task that will depend on another.` (CLI: `-i, --id <id>`)
    * `dependsOn`: `Required. The ID of the Taskmaster task that must be completed first, the prerequisite.` (CLI: `-d, --depends-on <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <path>`)
* **Usage:** Establish the correct order of execution between tasks.

### 19. Remove Dependency (`remove_dependency`)

* **MCP Tool:** `remove_dependency`
* **CLI Command:** `task-master remove-dependency [options]`
* **Description:** `Remove a dependency relationship between two Taskmaster tasks.`
* **Key Parameters/Options:**
    * `id`: `Required. The ID of the Taskmaster task you want to remove a prerequisite from.` (CLI: `-i, --id <id>`)
    * `dependsOn`: `Required. The ID of the Taskmaster task that should no longer be a prerequisite.` (CLI: `-d, --depends-on <id>`)
    * `tag`: `Specify which tag context to operate on. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Update task relationships when the order of execution changes.

### 20. Validate Dependencies (`validate_dependencies`)

* **MCP Tool:** `validate_dependencies`
* **CLI Command:** `task-master validate-dependencies [options]`
* **Description:** `Check your Taskmaster tasks for dependency issues (like circular references or links to non-existent tasks) without making changes.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to validate. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Audit the integrity of your task dependencies.

### 21. Fix Dependencies (`fix_dependencies`)

* **MCP Tool:** `fix_dependencies`
* **CLI Command:** `task-master fix-dependencies [options]`
* **Description:** `Automatically fix dependency issues (like circular references or links to non-existent tasks) in your Taskmaster tasks.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to fix dependencies in. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Clean up dependency errors automatically.

---

## Analysis & Reporting

### 22. Analyze Project Complexity (`analyze_project_complexity`)

* **MCP Tool:** `analyze_project_complexity`
* **CLI Command:** `task-master analyze-complexity [options]`
* **Description:** `Have Taskmaster analyze your tasks to determine their complexity and suggest which ones need to be broken down further.`
* **Key Parameters/Options:**
    * `output`: `Where to save the complexity analysis report. Default is '.taskmaster/reports/task-complexity-report.json' (or '..._tagname.json' if a tag is used).` (CLI: `-o, --output <file>`)
    * `threshold`: `The minimum complexity score (1-10) that should trigger a recommendation to expand a task.` (CLI: `-t, --threshold <number>`)
    * `research`: `Enable research role for more accurate complexity analysis. Requires appropriate API key.` (CLI: `-r, --research`)
    * `tag`: `Specify which tag context to analyze. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Used before breaking down tasks to identify which ones need the most attention.
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. Please inform users to hang tight while the operation is in progress.

### 23. View Complexity Report (`complexity_report`)

* **MCP Tool:** `complexity_report`
* **CLI Command:** `task-master complexity-report [options]`
* **Description:** `Display the task complexity analysis report in a readable format.`
* **Key Parameters/Options:**
    * `tag`: `Specify which tag context to show the report for. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to the complexity report (default: '.taskmaster/reports/task-complexity-report.json').` (CLI: `-f, --file <file>`)
* **Usage:** Review and understand the complexity analysis results after running analyze-complexity.

---

## File Management

### 24. Generate Task Files (`generate`)

* **MCP Tool:** `generate`
* **CLI Command:** `task-master generate [options]`
* **Description:** `Create or update individual Markdown files for each task based on your tasks.json.`
* **Key Parameters/Options:**
    * `output`: `The directory where Taskmaster should save the task files (default: in a 'tasks' directory).` (CLI: `-o, --output <directory>`)
    * `tag`: `Specify which tag context to generate files for. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
* **Usage:** Run this after making changes to tasks.json to keep individual task files up to date. This command is now manual and no longer runs automatically.

---

## AI-Powered Research

### 25. Research (`research`)

* **MCP Tool:** `research`
* **CLI Command:** `task-master research [options]`
* **Description:** `Perform AI-powered research queries with project context to get fresh, up-to-date information beyond the AI's knowledge cutoff.`
* **Key Parameters/Options:**
    * `query`: `Required. Research query/prompt (e.g., "What are the latest best practices for React Query v5?").` (CLI: `[query]` positional or `-q, --query <text>`)
    * `taskIds`: `Comma-separated list of task/subtask IDs from the current tag context (e.g., "15,16.2,17").` (CLI: `-i, --id <ids>`)
    * `filePaths`: `Comma-separated list of file paths for context (e.g., "src/api.js,docs/readme.md").` (CLI: `-f, --files <paths>`)
    * `customContext`: `Additional custom context text to include in the research.` (CLI: `-c, --context <text>`)
    * `includeProjectTree`: `Include project file tree structure in context (default: false).` (CLI: `--tree`)
    * `detailLevel`: `Detail level for the research response: 'low', 'medium', 'high' (default: medium).` (CLI: `--detail <level>`)
    * `saveTo`: `Task or subtask ID (e.g., "15", "15.2") to automatically save the research conversation to.` (CLI: `--save-to <id>`)
    * `saveFile`: `If true, saves the research conversation to a markdown file in '.taskmaster/docs/research/'.` (CLI: `--save-file`)
    * `noFollowup`: `Disables the interactive follow-up question menu in the CLI.` (CLI: `--no-followup`)
    * `tag`: `Specify which tag context to use for task-based context gathering. Defaults to the current active tag.` (CLI: `--tag <name>`)
    * `projectRoot`: `The directory of the project. Must be an absolute path.` (CLI: Determined automatically)
* **Usage:** **This is a POWERFUL tool that agents should use FREQUENTLY** to:
    * Get fresh information beyond knowledge cutoff dates
    * Research latest best practices, library updates, security patches
    * Find implementation examples for specific technologies
    * Validate approaches against current industry standards
    * Get contextual advice based on project files and tasks
* **When to Consider Using Research:**
    * **Before implementing any task** - Research current best practices
    * **When encountering new technologies** - Get up-to-date implementation guidance (libraries, APIs, etc.)
    * **For security-related tasks** - Find latest security recommendations
    * **When updating dependencies** - Research breaking changes and migration guides
    * **For performance optimization** - Get current performance best practices
    * **When debugging complex issues** - Research known solutions and workarounds
* **Research + Action Pattern:**
    * Use `research` to gather fresh information
    * Use `update_subtask` to commit findings with timestamps
    * Use `update_task` to incorporate research into task details
    * Use `add_task` with research flag for informed task creation
* **Important:** This MCP tool makes AI calls and can take up to a minute to complete. The research provides FRESH data beyond the AI's training cutoff, making it invaluable for current best practices and recent developments.

---

## Tag Management

This new suite of commands allows you to manage different task contexts (tags).

### 26. List Tags (`tags`)

* **MCP Tool:** `list_tags`
* **CLI Command:** `task-master tags [options]`
* **Description:** `List all available tags with task counts, completion status, and other metadata.`
* **Key Parameters/Options:**
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)
    * `--show-metadata`: `Include detailed metadata in the output (e.g., creation date, description).` (CLI: `--show-metadata`)

### 27. Add Tag (`add_tag`)

* **MCP Tool:** `add_tag`
* **CLI Command:** `task-master add-tag <tagName> [options]`
* **Description:** `Create a new, empty tag context, or copy tasks from another tag.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the new tag to create (alphanumeric, hyphens, underscores).` (CLI: `<tagName>` positional)
    * `--from-branch`: `Creates a tag with a name derived from the current git branch, ignoring the <tagName> argument.` (CLI: `--from-branch`)
    * `--copy-from-current`: `Copy tasks from the currently active tag to the new tag.` (CLI: `--copy-from-current`)
    * `--copy-from <tag>`: `Copy tasks from a specific source tag to the new tag.` (CLI: `--copy-from <tag>`)
    * `--description <text>`: `Provide an optional description for the new tag.` (CLI: `-d, --description <text>`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 28. Delete Tag (`delete_tag`)

* **MCP Tool:** `delete_tag`
* **CLI Command:** `task-master delete-tag <tagName> [options]`
* **Description:** `Permanently delete a tag and all of its associated tasks.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the tag to delete.` (CLI: `<tagName>` positional)
    * `--yes`: `Skip the confirmation prompt.` (CLI: `-y, --yes`)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 29. Use Tag (`use_tag`)

* **MCP Tool:** `use_tag`
* **CLI Command:** `task-master use-tag <tagName>`
* **Description:** `Switch your active task context to a different tag.`
* **Key Parameters/Options:**
    * `tagName`: `Name of the tag to switch to.` (CLI: `<tagName>` positional)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 30. Rename Tag (`rename_tag`)

* **MCP Tool:** `rename_tag`
* **CLI Command:** `task-master rename-tag <oldName> <newName>`
* **Description:** `Rename an existing tag.`
* **Key Parameters/Options:**
    * `oldName`: `The current name of the tag.` (CLI: `<oldName>` positional)
    * `newName`: `The new name for the tag.` (CLI: `<newName>` positional)
    * `file`: `Path to your Taskmaster 'tasks.json' file. Default relies on auto-detection.` (CLI: `-f, --file <file>`)

### 31. Copy Tag (`copy_tag`)

* **MCP Tool:** `copy_tag`
* **CLI Command:** `task-master copy-tag <sourceName> <targetName> [options]`
* **Description:** `Copy an entire tag context, including all its tasks and metadata, to a new tag.`
* **Key Parameters/Options:**
    * `sourceName`: `Name of the tag to copy from.` (CLI: `<sourceName>` positional)
    * `targetName`: `Name of the new tag to create.` (CLI: `<targetName>` positional)
    * `--description <text>`: `Optional description for the new tag.` (CLI: `-d, --description <text>`)

---

## Miscellaneous

### 32. Sync Readme (`sync-readme`) -- experimental

* **MCP Tool:** N/A
* **CLI Command:** `task-master sync-readme [options]`
* **Description:** `Exports your task list to your project's README.md file, useful for showcasing progress.`
* **Key Parameters/Options:**
    * `status`: `Filter tasks by status (e.g., 'pending', 'done').` (CLI: `-s, --status <status>`)
    * `withSubtasks`: `Include subtasks in the export.` (CLI: `--with-subtasks`)
    * `tag`: `Specify which tag context to export from. Defaults to the current active tag.` (CLI: `--tag <name>`)

---

## Environment Variables Configuration (Updated)

Taskmaster primarily uses the **`.taskmaster/config.json`** file (in project root) for configuration (models, parameters, logging level, etc.), managed via `task-master models --setup`.
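For instance, the configuration wizard can be run straight from the project root. This is a minimal illustration using only the commands documented in this file; the exact interactive prompts vary by Task Master version:

```shell
# Interactive wizard: pick models for each role (including the research
# role used by the --research flags above) and write the choices to
# .taskmaster/config.json in the project root.
task-master models --setup

# Run without flags to view the current model configuration
# without changing anything.
task-master models
```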

Environment variables are used **only** for sensitive API keys related to AI providers and specific overrides like the Ollama base URL:

* **API Keys (Required for corresponding provider):**
    * `ANTHROPIC_API_KEY`
    * `PERPLEXITY_API_KEY`
    * `OPENAI_API_KEY`
    * `GOOGLE_API_KEY`
    * `MISTRAL_API_KEY`
    * `AZURE_OPENAI_API_KEY` (Requires `AZURE_OPENAI_ENDPOINT` too)
    * `OPENROUTER_API_KEY`
    * `XAI_API_KEY`
    * `OLLAMA_API_KEY` (Requires `OLLAMA_BASE_URL` too)
* **Endpoints (Optional/Provider Specific inside .taskmaster/config.json):**
    * `AZURE_OPENAI_ENDPOINT`
    * `OLLAMA_BASE_URL` (Default: `http://localhost:11434/api`)

**Set API keys** in your **`.env`** file in the project root (for CLI use) or within the `env` section of your **`.kiro/mcp.json`** file (for MCP/Kiro integration). All other settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in `.taskmaster/config.json` via the `task-master models` command or `models` MCP tool.

---

For details on how these commands fit into the development process, see the [dev_workflow.md](.kiro/steering/dev_workflow.md).
```
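To make the split concrete, here is a hypothetical `.env` for a project that uses only the Anthropic and Perplexity providers. The variable names come from the list above; the values are placeholders, not real keys:

```shell
# .env (project root) — API keys only. Model choice, max tokens,
# temperature, log level, etc. belong in .taskmaster/config.json.
ANTHROPIC_API_KEY=your-anthropic-key-here
PERPLEXITY_API_KEY=your-perplexity-key-here

# Azure OpenAI and Ollama each require a second variable alongside the key:
# AZURE_OPENAI_API_KEY=...  plus  AZURE_OPENAI_ENDPOINT=...
# OLLAMA_API_KEY=...        plus  OLLAMA_BASE_URL=http://localhost:11434/api
```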