# Directory Structure
```
├── .cursorrules
├── .env.example
├── .github
│ └── workflows
│ ├── build.yml
│ ├── ci-cd.yml
│ ├── ci.yml
│ ├── deploy.yml
│ ├── ecosystem-monitor.yml
│ ├── fan-out-orchestrator.yml
│ └── release.yml
├── .gitignore
├── .pre-commit-config.yaml
├── AUTOMOTIVE_CAMERA_SYSTEM_SUMMARY.md
├── automotive-camera-system
│ ├── docs
│ │ └── IMPLEMENTACE_CS.md
│ └── README.md
├── AWS_MCP_IMPLEMENTATION_SUMMARY.md
├── AWS_MCP_QUICKSTART.md
├── AWS_SIP_TRUNK_DEPLOYMENT_COMPLETE.md
├── aws-sip-trunk
│ ├── .gitignore
│ ├── config
│ │ ├── extensions.conf.j2
│ │ └── pjsip.conf.j2
│ ├── DEPLOYMENT_SUMMARY.md
│ ├── docs
│ │ ├── DEPLOYMENT.md
│ │ └── TROUBLESHOOTING.md
│ ├── PROJECT_INDEX.md
│ ├── pyproject.toml
│ ├── QUICKSTART.md
│ ├── README.md
│ ├── scripts
│ │ ├── deploy-asterisk-aws.sh
│ │ └── user-data.sh
│ ├── terraform
│ │ ├── ec2.tf
│ │ ├── main.tf
│ │ ├── monitoring.tf
│ │ ├── networking.tf
│ │ ├── outputs.tf
│ │ ├── storage.tf
│ │ ├── terraform.tfvars.example
│ │ └── variables.tf
│ ├── tests
│ │ └── test_sip_connectivity.py
│ └── VERIFICATION_CHECKLIST.md
├── CLAUDE.md
├── component_templates.json
├── conanfile.py
├── config
│ ├── default.json
│ └── project_orchestration.json
├── Containerfile
├── cursor-templates
│ └── openssl
│ ├── linux-dev.mdc.jinja2
│ └── shared.mdc.jinja2
├── data
│ └── prompts
│ └── templates
│ ├── advanced-multi-server-template.json
│ ├── analysis-assistant.json
│ ├── analyze-mermaid-diagram.json
│ ├── architecture-design-assistant.json
│ ├── code-diagram-documentation-creator.json
│ ├── code-refactoring-assistant.json
│ ├── code-review-assistant.json
│ ├── collaborative-development.json
│ ├── consolidated-interfaces-template.json
│ ├── could-you-interpret-the-assumed-applicat.json
│ ├── data-analysis-template.json
│ ├── database-query-assistant.json
│ ├── debugging-assistant.json
│ ├── development-system-prompt-zcna0.json
│ ├── development-system-prompt.json
│ ├── development-workflow.json
│ ├── docker-compose-prompt-combiner.json
│ ├── docker-containerization-guide.json
│ ├── docker-mcp-servers-orchestration.json
│ ├── foresight-assistant.json
│ ├── generate-different-types-of-questions-ab.json
│ ├── generate-mermaid-diagram.json
│ ├── image-1-describe-the-icon-in-one-sen.json
│ ├── initialize-project-setup-for-a-new-micro.json
│ ├── install-dependencies-build-run-test.json
│ ├── mcp-code-generator.json
│ ├── mcp-integration-assistant.json
│ ├── mcp-resources-explorer.json
│ ├── mcp-resources-integration.json
│ ├── mcp-server-configurator.json
│ ├── mcp-server-dev-prompt-combiner.json
│ ├── mcp-server-integration-template.json
│ ├── mcp-template-system.json
│ ├── mermaid-analysis-expert.json
│ ├── mermaid-class-diagram-generator.json
│ ├── mermaid-diagram-generator.json
│ ├── mermaid-diagram-modifier.json
│ ├── modify-mermaid-diagram.json
│ ├── monorepo-migration-guide.json
│ ├── multi-resource-context.json
│ ├── project-analysis-assistant.json
│ ├── prompt-combiner-interface.json
│ ├── prompt-templates.json
│ ├── repository-explorer.json
│ ├── research-assistant.json
│ ├── sequential-data-analysis.json
│ ├── solid-code-analysis-visualizer.json
│ ├── task-list-helper-8ithy.json
│ ├── template-based-mcp-integration.json
│ ├── templates.json
│ ├── test-prompt.json
│ └── you-are-limited-to-respond-yes-or-no-onl.json
├── docs
│ ├── AWS_MCP.md
│ ├── AWS.md
│ ├── CONAN.md
│ └── integration.md
├── elevenlabs-agents
│ ├── agent-prompts.json
│ └── README.md
├── IMPLEMENTATION_STATUS.md
├── integration_plan.md
├── LICENSE
├── MANIFEST.in
├── mcp-project-orchestrator
│ └── openssl
│ ├── .github
│ │ └── workflows
│ │ └── validate-cursor-config.yml
│ ├── conanfile.py
│ ├── CURSOR_DEPLOYMENT_POLISH.md
│ ├── cursor-rules
│ │ ├── mcp.json.jinja2
│ │ ├── prompts
│ │ │ ├── fips-compliance.md.jinja2
│ │ │ ├── openssl-coding-standards.md.jinja2
│ │ │ └── pr-review.md.jinja2
│ │ └── rules
│ │ ├── ci-linux.mdc.jinja2
│ │ ├── linux-dev.mdc.jinja2
│ │ ├── macos-dev.mdc.jinja2
│ │ ├── shared.mdc.jinja2
│ │ └── windows-dev.mdc.jinja2
│ ├── docs
│ │ └── cursor-configuration-management.md
│ ├── examples
│ │ └── example-workspace
│ │ ├── .cursor
│ │ │ ├── mcp.json
│ │ │ └── rules
│ │ │ ├── linux-dev.mdc
│ │ │ └── shared.mdc
│ │ ├── .gitignore
│ │ ├── CMakeLists.txt
│ │ ├── conanfile.py
│ │ ├── profiles
│ │ │ ├── linux-gcc-debug.profile
│ │ │ └── linux-gcc-release.profile
│ │ ├── README.md
│ │ └── src
│ │ ├── crypto_utils.cpp
│ │ ├── crypto_utils.h
│ │ └── main.cpp
│ ├── IMPLEMENTATION_SUMMARY.md
│ ├── mcp_orchestrator
│ │ ├── __init__.py
│ │ ├── cli.py
│ │ ├── conan_integration.py
│ │ ├── cursor_config.py
│ │ ├── cursor_deployer.py
│ │ ├── deploy_cursor.py
│ │ ├── env_config.py
│ │ ├── platform_detector.py
│ │ └── yaml_validator.py
│ ├── openssl-cursor-example-workspace-20251014_121133.zip
│ ├── pyproject.toml
│ ├── README.md
│ ├── requirements.txt
│ ├── scripts
│ │ └── create_example_workspace.py
│ ├── setup.py
│ ├── test_deployment.py
│ └── tests
│ ├── __init__.py
│ ├── test_cursor_deployer.py
│ └── test_template_validation.py
├── printcast-agent
│ ├── .env.example
│ ├── config
│ │ └── asterisk
│ │ └── extensions.conf
│ ├── Containerfile
│ ├── docker-compose.yml
│ ├── pyproject.toml
│ ├── README.md
│ ├── scripts
│ │ └── docker-entrypoint.sh
│ ├── src
│ │ ├── integrations
│ │ │ ├── __init__.py
│ │ │ ├── asterisk.py
│ │ │ ├── content.py
│ │ │ ├── delivery.py
│ │ │ ├── elevenlabs.py
│ │ │ └── printing.py
│ │ ├── mcp_server
│ │ │ ├── __init__.py
│ │ │ ├── main.py
│ │ │ └── server.py
│ │ └── orchestration
│ │ ├── __init__.py
│ │ └── workflow.py
│ └── tests
│ └── test_mcp_server.py
├── project_orchestration.json
├── project_templates.json
├── pyproject.toml
├── README.md
├── REFACTORING_COMPLETED.md
├── REFACTORING_RECOMMENDATIONS.md
├── requirements.txt
├── scripts
│ ├── archive
│ │ ├── init_claude_test.sh
│ │ ├── init_postgres.sh
│ │ ├── start_mcp_servers.sh
│ │ └── test_claude_desktop.sh
│ ├── consolidate_mermaid.py
│ ├── consolidate_prompts.py
│ ├── consolidate_resources.py
│ ├── consolidate_templates.py
│ ├── INSTRUCTIONS.md
│ ├── README.md
│ ├── setup_aws_mcp.sh
│ ├── setup_mcp.sh
│ ├── setup_orchestrator.sh
│ ├── setup_project.py
│ └── test_mcp.sh
├── src
│ └── mcp_project_orchestrator
│ ├── __init__.py
│ ├── __main__.py
│ ├── aws_mcp.py
│ ├── cli
│ │ └── __init__.py
│ ├── cli.py
│ ├── commands
│ │ └── openssl_cli.py
│ ├── core
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── config.py
│ │ ├── exceptions.py
│ │ ├── fastmcp.py
│ │ ├── logging.py
│ │ └── managers.py
│ ├── cursor_deployer.py
│ ├── ecosystem_monitor.py
│ ├── fan_out_orchestrator.py
│ ├── fastmcp.py
│ ├── mcp-py
│ │ ├── AggregateVersions.py
│ │ ├── CustomBashTool.py
│ │ ├── FileAnnotator.py
│ │ ├── mcp-client.py
│ │ ├── mcp-server.py
│ │ ├── MermaidDiagramGenerator.py
│ │ ├── NamingAgent.py
│ │ └── solid-analyzer-agent.py
│ ├── mermaid
│ │ ├── __init__.py
│ │ ├── generator.py
│ │ ├── mermaid_orchestrator.py
│ │ ├── renderer.py
│ │ ├── templates
│ │ │ ├── AbstractFactory-diagram.json
│ │ │ ├── Adapter-diagram.json
│ │ │ ├── Analyze_Mermaid_Diagram.json
│ │ │ ├── Builder-diagram.json
│ │ │ ├── Chain-diagram.json
│ │ │ ├── Code_Diagram_Documentation_Creator.json
│ │ │ ├── Command-diagram.json
│ │ │ ├── Decorator-diagram.json
│ │ │ ├── Facade-diagram.json
│ │ │ ├── Factory-diagram.json
│ │ │ ├── flowchart
│ │ │ │ ├── AbstractFactory-diagram.json
│ │ │ │ ├── Adapter-diagram.json
│ │ │ │ ├── Analyze_Mermaid_Diagram.json
│ │ │ │ ├── Builder-diagram.json
│ │ │ │ ├── Chain-diagram.json
│ │ │ │ ├── Code_Diagram_Documentation_Creator.json
│ │ │ │ ├── Command-diagram.json
│ │ │ │ ├── Decorator-diagram.json
│ │ │ │ ├── Facade-diagram.json
│ │ │ │ ├── Factory-diagram.json
│ │ │ │ ├── Generate_Mermaid_Diagram.json
│ │ │ │ ├── generated_diagram.json
│ │ │ │ ├── integration.json
│ │ │ │ ├── Iterator-diagram.json
│ │ │ │ ├── Mediator-diagram.json
│ │ │ │ ├── Memento-diagram.json
│ │ │ │ ├── Mermaid_Analysis_Expert.json
│ │ │ │ ├── Mermaid_Class_Diagram_Generator.json
│ │ │ │ ├── Mermaid_Diagram_Generator.json
│ │ │ │ ├── Mermaid_Diagram_Modifier.json
│ │ │ │ ├── Modify_Mermaid_Diagram.json
│ │ │ │ ├── Observer-diagram.json
│ │ │ │ ├── Prototype-diagram.json
│ │ │ │ ├── Proxy-diagram.json
│ │ │ │ ├── README.json
│ │ │ │ ├── Singleton-diagram.json
│ │ │ │ ├── State-diagram.json
│ │ │ │ ├── Strategy-diagram.json
│ │ │ │ ├── TemplateMethod-diagram.json
│ │ │ │ ├── theme_dark.json
│ │ │ │ ├── theme_default.json
│ │ │ │ ├── theme_pastel.json
│ │ │ │ ├── theme_vibrant.json
│ │ │ │ └── Visitor-diagram.json
│ │ │ ├── Generate_Mermaid_Diagram.json
│ │ │ ├── generated_diagram.json
│ │ │ ├── index.json
│ │ │ ├── integration.json
│ │ │ ├── Iterator-diagram.json
│ │ │ ├── Mediator-diagram.json
│ │ │ ├── Memento-diagram.json
│ │ │ ├── Mermaid_Analysis_Expert.json
│ │ │ ├── Mermaid_Class_Diagram_Generator.json
│ │ │ ├── Mermaid_Diagram_Generator.json
│ │ │ ├── Mermaid_Diagram_Modifier.json
│ │ │ ├── Modify_Mermaid_Diagram.json
│ │ │ ├── Observer-diagram.json
│ │ │ ├── Prototype-diagram.json
│ │ │ ├── Proxy-diagram.json
│ │ │ ├── README.json
│ │ │ ├── Singleton-diagram.json
│ │ │ ├── State-diagram.json
│ │ │ ├── Strategy-diagram.json
│ │ │ ├── TemplateMethod-diagram.json
│ │ │ ├── theme_dark.json
│ │ │ ├── theme_default.json
│ │ │ ├── theme_pastel.json
│ │ │ ├── theme_vibrant.json
│ │ │ └── Visitor-diagram.json
│ │ └── types.py
│ ├── project_orchestration.py
│ ├── prompt_manager
│ │ ├── __init__.py
│ │ ├── loader.py
│ │ ├── manager.py
│ │ └── template.py
│ ├── prompts
│ │ ├── __dirname.json
│ │ ├── __image_1___describe_the_icon_in_one_sen___.json
│ │ ├── __init__.py
│ │ ├── __type.json
│ │ ├── _.json
│ │ ├── _DEFAULT_OPEN_DELIMITER.json
│ │ ├── _emojiRegex.json
│ │ ├── _UUID_CHARS.json
│ │ ├── a.json
│ │ ├── A.json
│ │ ├── Aa.json
│ │ ├── aAnnotationPadding.json
│ │ ├── absoluteThresholdGroup.json
│ │ ├── add.json
│ │ ├── ADDITIONAL_PROPERTY_FLAG.json
│ │ ├── Advanced_Multi-Server_Integration_Template.json
│ │ ├── allOptionsList.json
│ │ ├── analysis
│ │ │ ├── Data_Analysis_Template.json
│ │ │ ├── index.json
│ │ │ ├── Mermaid_Analysis_Expert.json
│ │ │ ├── Sequential_Data_Analysis_with_MCP_Integration.json
│ │ │ └── SOLID_Code_Analysis_Visualizer.json
│ │ ├── Analysis_Assistant.json
│ │ ├── Analyze_Mermaid_Diagram.json
│ │ ├── ANDROID_EVERGREEN_FIRST.json
│ │ ├── ANSI_ESCAPE_BELL.json
│ │ ├── architecture
│ │ │ ├── index.json
│ │ │ └── PromptCombiner_Interface.json
│ │ ├── Architecture_Design_Assistant.json
│ │ ├── argsTag.json
│ │ ├── ARROW.json
│ │ ├── assistant
│ │ │ ├── Analysis_Assistant.json
│ │ │ ├── Architecture_Design_Assistant.json
│ │ │ ├── Code_Refactoring_Assistant.json
│ │ │ ├── Code_Review_Assistant.json
│ │ │ ├── Database_Query_Assistant.json
│ │ │ ├── Debugging_Assistant.json
│ │ │ ├── Foresight_Assistant.json
│ │ │ ├── index.json
│ │ │ ├── MCP_Integration_Assistant.json
│ │ │ ├── Project_Analysis_Assistant.json
│ │ │ └── Research_Assistant.json
│ │ ├── astralRange.json
│ │ ├── at.json
│ │ ├── authorization_endpoint.json
│ │ ├── b.json
│ │ ├── BABELIGNORE_FILENAME.json
│ │ ├── BACKSLASH.json
│ │ ├── backupId.json
│ │ ├── BANG.json
│ │ ├── BASE64_MAP.json
│ │ ├── baseFlags.json
│ │ ├── Basic_Template.json
│ │ ├── bgModel.json
│ │ ├── bignum.json
│ │ ├── blockKeywordsStr.json
│ │ ├── BOMChar.json
│ │ ├── boundary.json
│ │ ├── brackets.json
│ │ ├── BROWSER_VAR.json
│ │ ├── bt.json
│ │ ├── BUILTIN.json
│ │ ├── BULLET.json
│ │ ├── c.json
│ │ ├── C.json
│ │ ├── CACHE_VERSION.json
│ │ ├── cacheControl.json
│ │ ├── cacheProp.json
│ │ ├── category.py
│ │ ├── CHANGE_EVENT.json
│ │ ├── CHAR_CODE_0.json
│ │ ├── chars.json
│ │ ├── cjsPattern.json
│ │ ├── cKeywords.json
│ │ ├── classForPercent.json
│ │ ├── classStr.json
│ │ ├── clientFirstMessageBare.json
│ │ ├── cmd.json
│ │ ├── Code_Diagram_Documentation_Creator.json
│ │ ├── Code_Refactoring_Assistant.json
│ │ ├── Code_Review_Assistant.json
│ │ ├── code.json
│ │ ├── coding
│ │ │ ├── __dirname.json
│ │ │ ├── _.json
│ │ │ ├── _DEFAULT_OPEN_DELIMITER.json
│ │ │ ├── _emojiRegex.json
│ │ │ ├── _UUID_CHARS.json
│ │ │ ├── a.json
│ │ │ ├── A.json
│ │ │ ├── aAnnotationPadding.json
│ │ │ ├── absoluteThresholdGroup.json
│ │ │ ├── add.json
│ │ │ ├── ADDITIONAL_PROPERTY_FLAG.json
│ │ │ ├── allOptionsList.json
│ │ │ ├── ANDROID_EVERGREEN_FIRST.json
│ │ │ ├── ANSI_ESCAPE_BELL.json
│ │ │ ├── argsTag.json
│ │ │ ├── ARROW.json
│ │ │ ├── astralRange.json
│ │ │ ├── at.json
│ │ │ ├── authorization_endpoint.json
│ │ │ ├── BABELIGNORE_FILENAME.json
│ │ │ ├── BACKSLASH.json
│ │ │ ├── BANG.json
│ │ │ ├── BASE64_MAP.json
│ │ │ ├── baseFlags.json
│ │ │ ├── bgModel.json
│ │ │ ├── bignum.json
│ │ │ ├── blockKeywordsStr.json
│ │ │ ├── BOMChar.json
│ │ │ ├── boundary.json
│ │ │ ├── brackets.json
│ │ │ ├── BROWSER_VAR.json
│ │ │ ├── bt.json
│ │ │ ├── BUILTIN.json
│ │ │ ├── BULLET.json
│ │ │ ├── c.json
│ │ │ ├── C.json
│ │ │ ├── CACHE_VERSION.json
│ │ │ ├── cacheControl.json
│ │ │ ├── cacheProp.json
│ │ │ ├── CHANGE_EVENT.json
│ │ │ ├── CHAR_CODE_0.json
│ │ │ ├── chars.json
│ │ │ ├── cjsPattern.json
│ │ │ ├── cKeywords.json
│ │ │ ├── classForPercent.json
│ │ │ ├── classStr.json
│ │ │ ├── clientFirstMessageBare.json
│ │ │ ├── cmd.json
│ │ │ ├── code.json
│ │ │ ├── colorCode.json
│ │ │ ├── comma.json
│ │ │ ├── command.json
│ │ │ ├── configJsContent.json
│ │ │ ├── connectionString.json
│ │ │ ├── cssClassStr.json
│ │ │ ├── currentBoundaryParse.json
│ │ │ ├── d.json
│ │ │ ├── data.json
│ │ │ ├── DATA.json
│ │ │ ├── dataWebpackPrefix.json
│ │ │ ├── debug.json
│ │ │ ├── decodeStateVectorV2.json
│ │ │ ├── DEFAULT_DELIMITER.json
│ │ │ ├── DEFAULT_DIAGRAM_DIRECTION.json
│ │ │ ├── DEFAULT_JS_PATTERN.json
│ │ │ ├── DEFAULT_LOG_TARGET.json
│ │ │ ├── defaultHelpOpt.json
│ │ │ ├── defaultHost.json
│ │ │ ├── deferY18nLookupPrefix.json
│ │ │ ├── DELIM.json
│ │ │ ├── delimiter.json
│ │ │ ├── DEPRECATION.json
│ │ │ ├── destMain.json
│ │ │ ├── DID_NOT_THROW.json
│ │ │ ├── direction.json
│ │ │ ├── displayValue.json
│ │ │ ├── DNS.json
│ │ │ ├── doc.json
│ │ │ ├── DOCUMENTATION_NOTE.json
│ │ │ ├── DOT.json
│ │ │ ├── DOTS.json
│ │ │ ├── dummyCompoundId.json
│ │ │ ├── e.json
│ │ │ ├── E.json
│ │ │ ├── earlyHintsLink.json
│ │ │ ├── elide.json
│ │ │ ├── EMPTY.json
│ │ │ ├── end.json
│ │ │ ├── endpoint.json
│ │ │ ├── environment.json
│ │ │ ├── ERR_CODE.json
│ │ │ ├── errMessage.json
│ │ │ ├── errMsg.json
│ │ │ ├── ERROR_MESSAGE.json
│ │ │ ├── error.json
│ │ │ ├── ERROR.json
│ │ │ ├── ERRORCLASS.json
│ │ │ ├── errorMessage.json
│ │ │ ├── es6Default.json
│ │ │ ├── ESC.json
│ │ │ ├── Escapable.json
│ │ │ ├── escapedChar.json
│ │ │ ├── escapeFuncStr.json
│ │ │ ├── escSlash.json
│ │ │ ├── ev.json
│ │ │ ├── event.json
│ │ │ ├── execaMessage.json
│ │ │ ├── EXPECTED_LABEL.json
│ │ │ ├── expected.json
│ │ │ ├── expectedString.json
│ │ │ ├── expression1.json
│ │ │ ├── EXTENSION.json
│ │ │ ├── f.json
│ │ │ ├── FAIL_TEXT.json
│ │ │ ├── FILE_BROWSER_FACTORY.json
│ │ │ ├── fill.json
│ │ │ ├── findPackageJson.json
│ │ │ ├── fnKey.json
│ │ │ ├── FORMAT.json
│ │ │ ├── formatted.json
│ │ │ ├── from.json
│ │ │ ├── fullpaths.json
│ │ │ ├── FUNC_ERROR_TEXT.json
│ │ │ ├── GenStateSuspendedStart.json
│ │ │ ├── GENSYNC_EXPECTED_START.json
│ │ │ ├── gutter.json
│ │ │ ├── h.json
│ │ │ ├── handlerFuncName.json
│ │ │ ├── HASH_UNDEFINED.json
│ │ │ ├── head.json
│ │ │ ├── helpMessage.json
│ │ │ ├── HINT_ARG.json
│ │ │ ├── HOOK_RETURNED_NOTHING_ERROR_MESSAGE.json
│ │ │ ├── i.json
│ │ │ ├── id.json
│ │ │ ├── identifier.json
│ │ │ ├── Identifier.json
│ │ │ ├── INDENT.json
│ │ │ ├── indentation.json
│ │ │ ├── index.json
│ │ │ ├── INDIRECTION_FRAGMENT.json
│ │ │ ├── input.json
│ │ │ ├── inputText.json
│ │ │ ├── insert.json
│ │ │ ├── insertPromptQuery.json
│ │ │ ├── INSPECT_MAX_BYTES.json
│ │ │ ├── intToCharMap.json
│ │ │ ├── IS_ITERABLE_SENTINEL.json
│ │ │ ├── IS_KEYED_SENTINEL.json
│ │ │ ├── isConfigType.json
│ │ │ ├── isoSentinel.json
│ │ │ ├── isSourceNode.json
│ │ │ ├── j.json
│ │ │ ├── JAKE_CMD.json
│ │ │ ├── JEST_GLOBAL_NAME.json
│ │ │ ├── JEST_GLOBALS_MODULE_NAME.json
│ │ │ ├── JSON_SYNTAX_CHAR.json
│ │ │ ├── json.json
│ │ │ ├── jsonType.json
│ │ │ ├── jupyter_namespaceObject.json
│ │ │ ├── JUPYTERLAB_DOCMANAGER_PLUGIN_ID.json
│ │ │ ├── k.json
│ │ │ ├── KERNEL_STATUS_ERROR_CLASS.json
│ │ │ ├── key.json
│ │ │ ├── l.json
│ │ │ ├── labelId.json
│ │ │ ├── LATEST_PROTOCOL_VERSION.json
│ │ │ ├── LETTERDASHNUMBER.json
│ │ │ ├── LF.json
│ │ │ ├── LIMIT_REPLACE_NODE.json
│ │ │ ├── logTime.json
│ │ │ ├── lstatkey.json
│ │ │ ├── lt.json
│ │ │ ├── m.json
│ │ │ ├── maliciousPayload.json
│ │ │ ├── mask.json
│ │ │ ├── match.json
│ │ │ ├── matchingDelim.json
│ │ │ ├── MAXIMUM_MESSAGE_SIZE.json
│ │ │ ├── mdcContent.json
│ │ │ ├── MERMAID_DOM_ID_PREFIX.json
│ │ │ ├── message.json
│ │ │ ├── messages.json
│ │ │ ├── meth.json
│ │ │ ├── minimatch.json
│ │ │ ├── MOCK_CONSTRUCTOR_NAME.json
│ │ │ ├── MOCKS_PATTERN.json
│ │ │ ├── moduleDirectory.json
│ │ │ ├── msg.json
│ │ │ ├── mtr.json
│ │ │ ├── multipartType.json
│ │ │ ├── n.json
│ │ │ ├── N.json
│ │ │ ├── name.json
│ │ │ ├── NATIVE_PLATFORM.json
│ │ │ ├── newUrl.json
│ │ │ ├── NM.json
│ │ │ ├── NO_ARGUMENTS.json
│ │ │ ├── NO_DIFF_MESSAGE.json
│ │ │ ├── NODE_MODULES.json
│ │ │ ├── nodeInternalPrefix.json
│ │ │ ├── nonASCIIidentifierStartChars.json
│ │ │ ├── nonKey.json
│ │ │ ├── NOT_A_DOT.json
│ │ │ ├── notCharacterOrDash.json
│ │ │ ├── notebookURL.json
│ │ │ ├── notSelector.json
│ │ │ ├── nullTag.json
│ │ │ ├── num.json
│ │ │ ├── NUMBER.json
│ │ │ ├── o.json
│ │ │ ├── O.json
│ │ │ ├── octChar.json
│ │ │ ├── octetStreamType.json
│ │ │ ├── operators.json
│ │ │ ├── out.json
│ │ │ ├── OUTSIDE_JEST_VM_PROTOCOL.json
│ │ │ ├── override.json
│ │ │ ├── p.json
│ │ │ ├── PACKAGE_FILENAME.json
│ │ │ ├── PACKAGE_JSON.json
│ │ │ ├── packageVersion.json
│ │ │ ├── paddedNumber.json
│ │ │ ├── page.json
│ │ │ ├── parseClass.json
│ │ │ ├── path.json
│ │ │ ├── pathExt.json
│ │ │ ├── pattern.json
│ │ │ ├── PatternBoolean.json
│ │ │ ├── pBuiltins.json
│ │ │ ├── pFloatForm.json
│ │ │ ├── pkg.json
│ │ │ ├── PLUGIN_ID_DOC_MANAGER.json
│ │ │ ├── plusChar.json
│ │ │ ├── PN_CHARS.json
│ │ │ ├── point.json
│ │ │ ├── prefix.json
│ │ │ ├── PRETTY_PLACEHOLDER.json
│ │ │ ├── property_prefix.json
│ │ │ ├── pubkey256.json
│ │ │ ├── Q.json
│ │ │ ├── qmark.json
│ │ │ ├── QO.json
│ │ │ ├── query.json
│ │ │ ├── querystringType.json
│ │ │ ├── queryText.json
│ │ │ ├── r.json
│ │ │ ├── R.json
│ │ │ ├── rangeStart.json
│ │ │ ├── re.json
│ │ │ ├── reI.json
│ │ │ ├── REQUIRED_FIELD_SYMBOL.json
│ │ │ ├── reserve.json
│ │ │ ├── resolvedDestination.json
│ │ │ ├── resolverDir.json
│ │ │ ├── responseType.json
│ │ │ ├── result.json
│ │ │ ├── ROOT_DESCRIBE_BLOCK_NAME.json
│ │ │ ├── ROOT_NAMESPACE_NAME.json
│ │ │ ├── ROOT_TASK_NAME.json
│ │ │ ├── route.json
│ │ │ ├── RUNNING_TEXT.json
│ │ │ ├── s.json
│ │ │ ├── SCHEMA_PATH.json
│ │ │ ├── se.json
│ │ │ ├── SEARCHABLE_CLASS.json
│ │ │ ├── secret.json
│ │ │ ├── selector.json
│ │ │ ├── SEMVER_SPEC_VERSION.json
│ │ │ ├── sensitiveHeaders.json
│ │ │ ├── sep.json
│ │ │ ├── separator.json
│ │ │ ├── SHAPE_STATE.json
│ │ │ ├── shape.json
│ │ │ ├── SHARED.json
│ │ │ ├── short.json
│ │ │ ├── side.json
│ │ │ ├── SNAPSHOT_VERSION.json
│ │ │ ├── SOURCE_MAPPING_PREFIX.json
│ │ │ ├── source.json
│ │ │ ├── sourceMapContent.json
│ │ │ ├── SPACE_SYMBOL.json
│ │ │ ├── SPACE.json
│ │ │ ├── sqlKeywords.json
│ │ │ ├── sranges.json
│ │ │ ├── st.json
│ │ │ ├── ST.json
│ │ │ ├── stack.json
│ │ │ ├── START_HIDING.json
│ │ │ ├── START_OF_LINE.json
│ │ │ ├── startNoTraversal.json
│ │ │ ├── STATES.json
│ │ │ ├── stats.json
│ │ │ ├── statSync.json
│ │ │ ├── storageStatus.json
│ │ │ ├── storageType.json
│ │ │ ├── str.json
│ │ │ ├── stringifiedObject.json
│ │ │ ├── stringPath.json
│ │ │ ├── stringResult.json
│ │ │ ├── stringTag.json
│ │ │ ├── strValue.json
│ │ │ ├── style.json
│ │ │ ├── SUB_NAME.json
│ │ │ ├── subkey.json
│ │ │ ├── SUBPROTOCOL.json
│ │ │ ├── SUITE_NAME.json
│ │ │ ├── symbolPattern.json
│ │ │ ├── symbolTag.json
│ │ │ ├── t.json
│ │ │ ├── T.json
│ │ │ ├── templateDir.json
│ │ │ ├── tempName.json
│ │ │ ├── text.json
│ │ │ ├── time.json
│ │ │ ├── titleSeparator.json
│ │ │ ├── tmpl.json
│ │ │ ├── tn.json
│ │ │ ├── toValue.json
│ │ │ ├── transform.json
│ │ │ ├── trustProxyDefaultSymbol.json
│ │ │ ├── typeArgumentsKey.json
│ │ │ ├── typeKey.json
│ │ │ ├── typeMessage.json
│ │ │ ├── typesRegistryPackageName.json
│ │ │ ├── u.json
│ │ │ ├── UNDEFINED.json
│ │ │ ├── unit.json
│ │ │ ├── UNMATCHED_SURROGATE_PAIR_REPLACE.json
│ │ │ ├── ur.json
│ │ │ ├── USAGE.json
│ │ │ ├── value.json
│ │ │ ├── Vr.json
│ │ │ ├── watchmanURL.json
│ │ │ ├── webkit.json
│ │ │ ├── xhtml.json
│ │ │ ├── XP_DEFAULT_PATHEXT.json
│ │ │ └── y.json
│ │ ├── Collaborative_Development_with_MCP_Integration.json
│ │ ├── colorCode.json
│ │ ├── comma.json
│ │ ├── command.json
│ │ ├── completionShTemplate.json
│ │ ├── configJsContent.json
│ │ ├── connectionString.json
│ │ ├── Consolidated_TypeScript_Interfaces_Template.json
│ │ ├── Could_you_interpret_the_assumed_applicat___.json
│ │ ├── cssClassStr.json
│ │ ├── currentBoundaryParse.json
│ │ ├── d.json
│ │ ├── Data_Analysis_Template.json
│ │ ├── data.json
│ │ ├── DATA.json
│ │ ├── Database_Query_Assistant.json
│ │ ├── dataWebpackPrefix.json
│ │ ├── debug.json
│ │ ├── Debugging_Assistant.json
│ │ ├── decodeStateVectorV2.json
│ │ ├── DEFAULT_DELIMITER.json
│ │ ├── DEFAULT_DIAGRAM_DIRECTION.json
│ │ ├── DEFAULT_INDENT.json
│ │ ├── DEFAULT_JS_PATTERN.json
│ │ ├── DEFAULT_LOG_TARGET.json
│ │ ├── defaultHelpOpt.json
│ │ ├── defaultHost.json
│ │ ├── deferY18nLookupPrefix.json
│ │ ├── DELIM.json
│ │ ├── delimiter.json
│ │ ├── DEPRECATION.json
│ │ ├── DESCENDING.json
│ │ ├── destMain.json
│ │ ├── development
│ │ │ ├── Collaborative_Development_with_MCP_Integration.json
│ │ │ ├── Consolidated_TypeScript_Interfaces_Template.json
│ │ │ ├── Development_Workflow.json
│ │ │ ├── index.json
│ │ │ ├── MCP_Server_Development_Prompt_Combiner.json
│ │ │ └── Monorepo_Migration_and_Code_Organization_Guide.json
│ │ ├── Development_System_Prompt.json
│ │ ├── Development_Workflow.json
│ │ ├── devops
│ │ │ ├── Docker_Compose_Prompt_Combiner.json
│ │ │ ├── Docker_Containerization_Guide.json
│ │ │ └── index.json
│ │ ├── DID_NOT_THROW.json
│ │ ├── direction.json
│ │ ├── displayValue.json
│ │ ├── DNS.json
│ │ ├── doc.json
│ │ ├── Docker_Compose_Prompt_Combiner.json
│ │ ├── Docker_Containerization_Guide.json
│ │ ├── Docker_MCP_Servers_Orchestration_Guide.json
│ │ ├── DOCUMENTATION_NOTE.json
│ │ ├── DOT.json
│ │ ├── DOTS.json
│ │ ├── dummyCompoundId.json
│ │ ├── e.json
│ │ ├── E.json
│ │ ├── earlyHintsLink.json
│ │ ├── elide.json
│ │ ├── EMPTY.json
│ │ ├── encoded.json
│ │ ├── end.json
│ │ ├── endpoint.json
│ │ ├── environment.json
│ │ ├── ERR_CODE.json
│ │ ├── errMessage.json
│ │ ├── errMsg.json
│ │ ├── ERROR_MESSAGE.json
│ │ ├── error.json
│ │ ├── ERROR.json
│ │ ├── ERRORCLASS.json
│ │ ├── errorMessage.json
│ │ ├── es6Default.json
│ │ ├── ESC.json
│ │ ├── Escapable.json
│ │ ├── escapedChar.json
│ │ ├── escapeFuncStr.json
│ │ ├── escSlash.json
│ │ ├── ev.json
│ │ ├── event.json
│ │ ├── execaMessage.json
│ │ ├── EXPECTED_LABEL.json
│ │ ├── expected.json
│ │ ├── expectedString.json
│ │ ├── expression1.json
│ │ ├── EXTENSION.json
│ │ ├── f.json
│ │ ├── FAIL_TEXT.json
│ │ ├── FILE_BROWSER_FACTORY.json
│ │ ├── fill.json
│ │ ├── findPackageJson.json
│ │ ├── fnKey.json
│ │ ├── Foresight_Assistant.json
│ │ ├── FORMAT.json
│ │ ├── formatted.json
│ │ ├── from.json
│ │ ├── fullpaths.json
│ │ ├── FUNC_ERROR_TEXT.json
│ │ ├── general
│ │ │ └── index.json
│ │ ├── Generate_different_types_of_questions_ab___.json
│ │ ├── Generate_Mermaid_Diagram.json
│ │ ├── GenStateSuspendedStart.json
│ │ ├── GENSYNC_EXPECTED_START.json
│ │ ├── GitHub_Repository_Explorer.json
│ │ ├── gutter.json
│ │ ├── h.json
│ │ ├── handlerFuncName.json
│ │ ├── HASH_UNDEFINED.json
│ │ ├── head.json
│ │ ├── helpMessage.json
│ │ ├── HINT_ARG.json
│ │ ├── HOOK_RETURNED_NOTHING_ERROR_MESSAGE.json
│ │ ├── i.json
│ │ ├── id.json
│ │ ├── identifier.json
│ │ ├── Identifier.json
│ │ ├── INDENT.json
│ │ ├── indentation.json
│ │ ├── index.json
│ │ ├── INDIRECTION_FRAGMENT.json
│ │ ├── Initialize_project_setup_for_a_new_micro___.json
│ │ ├── input.json
│ │ ├── inputText.json
│ │ ├── insert.json
│ │ ├── insertPromptQuery.json
│ │ ├── INSPECT_MAX_BYTES.json
│ │ ├── install_dependencies__build__run__test____.json
│ │ ├── intToCharMap.json
│ │ ├── IS_ITERABLE_SENTINEL.json
│ │ ├── IS_KEYED_SENTINEL.json
│ │ ├── isConfigType.json
│ │ ├── isoSentinel.json
│ │ ├── isSourceNode.json
│ │ ├── j.json
│ │ ├── J.json
│ │ ├── JAKE_CMD.json
│ │ ├── JEST_GLOBAL_NAME.json
│ │ ├── JEST_GLOBALS_MODULE_NAME.json
│ │ ├── JSON_SYNTAX_CHAR.json
│ │ ├── json.json
│ │ ├── jsonType.json
│ │ ├── jupyter_namespaceObject.json
│ │ ├── JUPYTERLAB_DOCMANAGER_PLUGIN_ID.json
│ │ ├── k.json
│ │ ├── KERNEL_STATUS_ERROR_CLASS.json
│ │ ├── key.json
│ │ ├── l.json
│ │ ├── labelId.json
│ │ ├── LATEST_PROTOCOL_VERSION.json
│ │ ├── LETTERDASHNUMBER.json
│ │ ├── LF.json
│ │ ├── LIMIT_REPLACE_NODE.json
│ │ ├── LINE_FEED.json
│ │ ├── logTime.json
│ │ ├── lstatkey.json
│ │ ├── lt.json
│ │ ├── m.json
│ │ ├── maliciousPayload.json
│ │ ├── manager.py
│ │ ├── marker.json
│ │ ├── mask.json
│ │ ├── match.json
│ │ ├── matchingDelim.json
│ │ ├── MAXIMUM_MESSAGE_SIZE.json
│ │ ├── MCP_Integration_Assistant.json
│ │ ├── MCP_Resources_Explorer.json
│ │ ├── MCP_Resources_Integration_Guide.json
│ │ ├── MCP_Server_Development_Prompt_Combiner.json
│ │ ├── MCP_Server_Integration_Guide.json
│ │ ├── mcp-code-generator.json
│ │ ├── mdcContent.json
│ │ ├── Mermaid_Analysis_Expert.json
│ │ ├── Mermaid_Class_Diagram_Generator.json
│ │ ├── Mermaid_Diagram_Generator.json
│ │ ├── Mermaid_Diagram_Modifier.json
│ │ ├── MERMAID_DOM_ID_PREFIX.json
│ │ ├── message.json
│ │ ├── messages.json
│ │ ├── meth.json
│ │ ├── minimatch.json
│ │ ├── MOBILE_QUERY.json
│ │ ├── MOCK_CONSTRUCTOR_NAME.json
│ │ ├── MOCKS_PATTERN.json
│ │ ├── Modify_Mermaid_Diagram.json
│ │ ├── moduleDirectory.json
│ │ ├── Monorepo_Migration_and_Code_Organization_Guide.json
│ │ ├── msg.json
│ │ ├── mtr.json
│ │ ├── Multi-Resource_Context_Assistant.json
│ │ ├── multipartType.json
│ │ ├── n.json
│ │ ├── N.json
│ │ ├── name.json
│ │ ├── NATIVE_PLATFORM.json
│ │ ├── newUrl.json
│ │ ├── NM.json
│ │ ├── NO_ARGUMENTS.json
│ │ ├── NO_DIFF_MESSAGE.json
│ │ ├── NODE_MODULES.json
│ │ ├── nodeInternalPrefix.json
│ │ ├── nonASCIIidentifierStartChars.json
│ │ ├── nonKey.json
│ │ ├── NOT_A_DOT.json
│ │ ├── notCharacterOrDash.json
│ │ ├── notebookURL.json
│ │ ├── notSelector.json
│ │ ├── nullTag.json
│ │ ├── num.json
│ │ ├── NUMBER.json
│ │ ├── o.json
│ │ ├── O.json
│ │ ├── octChar.json
│ │ ├── octetStreamType.json
│ │ ├── operators.json
│ │ ├── other
│ │ │ ├── __image_1___describe_the_icon_in_one_sen___.json
│ │ │ ├── __type.json
│ │ │ ├── Advanced_Multi-Server_Integration_Template.json
│ │ │ ├── Analyze_Mermaid_Diagram.json
│ │ │ ├── Basic_Template.json
│ │ │ ├── Code_Diagram_Documentation_Creator.json
│ │ │ ├── Collaborative_Development_with_MCP_Integration.json
│ │ │ ├── completionShTemplate.json
│ │ │ ├── Could_you_interpret_the_assumed_applicat___.json
│ │ │ ├── DEFAULT_INDENT.json
│ │ │ ├── Docker_MCP_Servers_Orchestration_Guide.json
│ │ │ ├── Generate_different_types_of_questions_ab___.json
│ │ │ ├── Generate_Mermaid_Diagram.json
│ │ │ ├── GitHub_Repository_Explorer.json
│ │ │ ├── index.json
│ │ │ ├── Initialize_project_setup_for_a_new_micro___.json
│ │ │ ├── install_dependencies__build__run__test____.json
│ │ │ ├── LINE_FEED.json
│ │ │ ├── MCP_Resources_Explorer.json
│ │ │ ├── MCP_Resources_Integration_Guide.json
│ │ │ ├── MCP_Server_Integration_Guide.json
│ │ │ ├── mcp-code-generator.json
│ │ │ ├── Mermaid_Class_Diagram_Generator.json
│ │ │ ├── Mermaid_Diagram_Generator.json
│ │ │ ├── Mermaid_Diagram_Modifier.json
│ │ │ ├── Modify_Mermaid_Diagram.json
│ │ │ ├── Multi-Resource_Context_Assistant.json
│ │ │ ├── output.json
│ │ │ ├── sseUrl.json
│ │ │ ├── string.json
│ │ │ ├── Task_List_Helper.json
│ │ │ ├── Template-Based_MCP_Integration.json
│ │ │ ├── Test_Prompt.json
│ │ │ ├── type.json
│ │ │ ├── VERSION.json
│ │ │ ├── WIN_SLASH.json
│ │ │ └── You_are_limited_to_respond_Yes_or_No_onl___.json
│ │ ├── out.json
│ │ ├── output.json
│ │ ├── OUTSIDE_JEST_VM_PROTOCOL.json
│ │ ├── override.json
│ │ ├── p.json
│ │ ├── PACKAGE_FILENAME.json
│ │ ├── PACKAGE_JSON.json
│ │ ├── packageVersion.json
│ │ ├── paddedNumber.json
│ │ ├── page.json
│ │ ├── parseClass.json
│ │ ├── PATH_NODE_MODULES.json
│ │ ├── path.json
│ │ ├── pathExt.json
│ │ ├── pattern.json
│ │ ├── PatternBoolean.json
│ │ ├── pBuiltins.json
│ │ ├── pFloatForm.json
│ │ ├── pkg.json
│ │ ├── PLUGIN_ID_DOC_MANAGER.json
│ │ ├── plusChar.json
│ │ ├── PN_CHARS.json
│ │ ├── point.json
│ │ ├── prefix.json
│ │ ├── PRETTY_PLACEHOLDER.json
│ │ ├── Project_Analysis_Assistant.json
│ │ ├── ProjectsUpdatedInBackgroundEvent.json
│ │ ├── PromptCombiner_Interface.json
│ │ ├── promptId.json
│ │ ├── property_prefix.json
│ │ ├── pubkey256.json
│ │ ├── Q.json
│ │ ├── qmark.json
│ │ ├── QO.json
│ │ ├── query.json
│ │ ├── querystringType.json
│ │ ├── queryText.json
│ │ ├── r.json
│ │ ├── R.json
│ │ ├── rangeStart.json
│ │ ├── re.json
│ │ ├── reI.json
│ │ ├── REQUIRED_FIELD_SYMBOL.json
│ │ ├── Research_Assistant.json
│ │ ├── reserve.json
│ │ ├── resolvedDestination.json
│ │ ├── resolverDir.json
│ │ ├── responseType.json
│ │ ├── result.json
│ │ ├── ROOT_DESCRIBE_BLOCK_NAME.json
│ │ ├── ROOT_NAMESPACE_NAME.json
│ │ ├── ROOT_TASK_NAME.json
│ │ ├── route.json
│ │ ├── RUNNING_TEXT.json
│ │ ├── RXstyle.json
│ │ ├── s.json
│ │ ├── SCHEMA_PATH.json
│ │ ├── schemaQuery.json
│ │ ├── se.json
│ │ ├── SEARCHABLE_CLASS.json
│ │ ├── secret.json
│ │ ├── selector.json
│ │ ├── SEMVER_SPEC_VERSION.json
│ │ ├── sensitiveHeaders.json
│ │ ├── sep.json
│ │ ├── separator.json
│ │ ├── Sequential_Data_Analysis_with_MCP_Integration.json
│ │ ├── SHAPE_STATE.json
│ │ ├── shape.json
│ │ ├── SHARED.json
│ │ ├── short.json
│ │ ├── side.json
│ │ ├── SNAPSHOT_VERSION.json
│ │ ├── SOLID_Code_Analysis_Visualizer.json
│ │ ├── SOURCE_MAPPING_PREFIX.json
│ │ ├── source.json
│ │ ├── sourceMapContent.json
│ │ ├── SPACE_SYMBOL.json
│ │ ├── SPACE.json
│ │ ├── sqlKeywords.json
│ │ ├── sranges.json
│ │ ├── sseUrl.json
│ │ ├── st.json
│ │ ├── ST.json
│ │ ├── stack.json
│ │ ├── START_HIDING.json
│ │ ├── START_OF_LINE.json
│ │ ├── startNoTraversal.json
│ │ ├── STATES.json
│ │ ├── stats.json
│ │ ├── statSync.json
│ │ ├── status.json
│ │ ├── storageStatus.json
│ │ ├── storageType.json
│ │ ├── str.json
│ │ ├── string.json
│ │ ├── stringifiedObject.json
│ │ ├── stringPath.json
│ │ ├── stringResult.json
│ │ ├── stringTag.json
│ │ ├── strValue.json
│ │ ├── style.json
│ │ ├── SUB_NAME.json
│ │ ├── subkey.json
│ │ ├── SUBPROTOCOL.json
│ │ ├── SUITE_NAME.json
│ │ ├── symbolPattern.json
│ │ ├── symbolTag.json
│ │ ├── system
│ │ │ ├── Aa.json
│ │ │ ├── b.json
│ │ │ ├── Development_System_Prompt.json
│ │ │ ├── index.json
│ │ │ ├── marker.json
│ │ │ ├── PATH_NODE_MODULES.json
│ │ │ ├── ProjectsUpdatedInBackgroundEvent.json
│ │ │ ├── RXstyle.json
│ │ │ ├── status.json
│ │ │ └── versionMajorMinor.json
│ │ ├── t.json
│ │ ├── T.json
│ │ ├── Task_List_Helper.json
│ │ ├── Template-Based_MCP_Integration.json
│ │ ├── template.py
│ │ ├── templateDir.json
│ │ ├── tempName.json
│ │ ├── Test_Prompt.json
│ │ ├── text.json
│ │ ├── time.json
│ │ ├── titleSeparator.json
│ │ ├── tmpl.json
│ │ ├── tn.json
│ │ ├── TOPBAR_FACTORY.json
│ │ ├── toValue.json
│ │ ├── transform.json
│ │ ├── trustProxyDefaultSymbol.json
│ │ ├── txt.json
│ │ ├── type.json
│ │ ├── typeArgumentsKey.json
│ │ ├── typeKey.json
│ │ ├── typeMessage.json
│ │ ├── typesRegistryPackageName.json
│ │ ├── u.json
│ │ ├── UNDEFINED.json
│ │ ├── unit.json
│ │ ├── UNMATCHED_SURROGATE_PAIR_REPLACE.json
│ │ ├── ur.json
│ │ ├── usage.json
│ │ ├── USAGE.json
│ │ ├── user
│ │ │ ├── backupId.json
│ │ │ ├── DESCENDING.json
│ │ │ ├── encoded.json
│ │ │ ├── index.json
│ │ │ ├── J.json
│ │ │ ├── MOBILE_QUERY.json
│ │ │ ├── promptId.json
│ │ │ ├── schemaQuery.json
│ │ │ ├── TOPBAR_FACTORY.json
│ │ │ ├── txt.json
│ │ │ └── usage.json
│ │ ├── value.json
│ │ ├── VERSION.json
│ │ ├── version.py
│ │ ├── versionMajorMinor.json
│ │ ├── Vr.json
│ │ ├── watchmanURL.json
│ │ ├── webkit.json
│ │ ├── WIN_SLASH.json
│ │ ├── xhtml.json
│ │ ├── XP_DEFAULT_PATHEXT.json
│ │ ├── y.json
│ │ └── You_are_limited_to_respond_Yes_or_No_onl___.json
│ ├── resources
│ │ ├── __init__.py
│ │ ├── code_examples
│ │ │ └── index.json
│ │ ├── config
│ │ │ └── index.json
│ │ ├── documentation
│ │ │ └── index.json
│ │ ├── images
│ │ │ └── index.json
│ │ ├── index.json
│ │ └── other
│ │ └── index.json
│ ├── server.py
│ ├── templates
│ │ ├── __init__.py
│ │ ├── AbstractFactory.json
│ │ ├── Adapter.json
│ │ ├── base.py
│ │ ├── Builder.json
│ │ ├── Chain.json
│ │ ├── Command.json
│ │ ├── component
│ │ │ ├── AbstractFactory.json
│ │ │ ├── Adapter.json
│ │ │ ├── Builder.json
│ │ │ ├── Chain.json
│ │ │ ├── Command.json
│ │ │ ├── Decorator.json
│ │ │ ├── Facade.json
│ │ │ ├── Factory.json
│ │ │ ├── Iterator.json
│ │ │ ├── Mediator.json
│ │ │ ├── Memento.json
│ │ │ ├── Observer.json
│ │ │ ├── Prototype.json
│ │ │ ├── Proxy.json
│ │ │ ├── Singleton.json
│ │ │ ├── State.json
│ │ │ ├── Strategy.json
│ │ │ ├── TemplateMethod.json
│ │ │ └── Visitor.json
│ │ ├── component.py
│ │ ├── Decorator.json
│ │ ├── Facade.json
│ │ ├── Factory.json
│ │ ├── index.json
│ │ ├── Iterator.json
│ │ ├── manager.py
│ │ ├── Mediator.json
│ │ ├── Memento.json
│ │ ├── Observer.json
│ │ ├── project.py
│ │ ├── Prototype.json
│ │ ├── Proxy.json
│ │ ├── renderer.py
│ │ ├── Singleton.json
│ │ ├── State.json
│ │ ├── Strategy.json
│ │ ├── template_manager.py
│ │ ├── TemplateMethod.json
│ │ ├── types.py
│ │ └── Visitor.json
│ └── utils
│ └── __init__.py
├── SUMMARY.md
├── TASK_COMPLETION_SUMMARY.md
├── templates
│ └── openssl
│ ├── files
│ │ ├── CMakeLists.txt.jinja2
│ │ ├── conanfile.py.jinja2
│ │ ├── main.cpp.jinja2
│ │ └── README.md.jinja2
│ ├── openssl-consumer.json
│ └── template.json
├── test_openssl_integration.sh
├── test_package
│ └── conanfile.py
└── tests
├── __init__.py
├── conftest.py
├── integration
│ ├── test_core_integration.py
│ ├── test_mermaid_integration.py
│ ├── test_prompt_manager_integration.py
│ └── test_server_integration.py
├── test_aws_mcp.py
├── test_base_classes.py
├── test_config.py
├── test_exceptions.py
├── test_mermaid.py
├── test_prompts.py
└── test_templates.py
```
# Files
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/devops/Docker_Compose_Prompt_Combiner.json:
--------------------------------------------------------------------------------
```json
{
"name": "Docker Compose Prompt Combiner",
"description": "A specialized prompt combiner for creating Docker Compose configurations that integrates service definitions, volumes, networks, and deployment patterns",
"type": "prompt",
"category": "devops",
"content": "/**\n * DockerComposePromptCombiner for {{project_name}}\n * \n * A specialized implementation of the PromptCombiner interface\n * focused on combining prompts for Docker Compose configuration and orchestration.\n */\n\nimport { PromptCombiner, CombinerContext, CombinedPromptResult, PromptSuggestion, CombinationValidationResult, WorkflowConfig, SavedWorkflow } from './prompt-combiner-interface';\nimport { PromptService } from '../services/prompt-service';\nimport { Prompt } from '../core/types';\n\n/**\n * Docker Compose specific context\n */\nexport interface DockerComposeContext extends CombinerContext {\n /** Project environment (development, staging, production) */\n environment: 'development' | 'staging' | 'production' | string;\n \n /** Services to include in the configuration */\n services: {\n name: string;\n type: string;\n image?: string;\n ports?: string[];\n volumes?: string[];\n environment?: Record<string, string>;\n dependencies?: string[];\n }[];\n \n /** Networks to define */\n networks?: {\n name: string;\n external?: boolean;\n driver?: string;\n }[];\n \n /** Volumes to define */\n volumes?: {\n name: string;\n driver?: string;\n external?: boolean;\n }[];\n \n /** Docker Compose version */\n composeVersion?: string;\n \n /** Orchestration platform */\n platform?: 'docker' | 'kubernetes' | 'swarm';\n \n /** Resource constraints */\n resources?: {\n memoryLimits?: boolean;\n cpuLimits?: boolean;\n };\n \n /** Additional Docker-specific context */\n {{additional_docker_context}}\n}\n\n/**\n * Specialized result for Docker Compose combinations\n */\nexport interface DockerComposeResult extends CombinedPromptResult {\n /** Generated Docker Compose configuration */\n composeConfiguration?: string;\n \n /** Individual service configurations */\n serviceConfigurations?: Record<string, string>;\n \n /** Network configurations */\n networkConfigurations?: string;\n \n /** Volume configurations */\n volumeConfigurations?: string;\n \n /** Deployment commands */\n deploymentCommands?: string;\n \n /** Generated Dockerfiles */\n dockerfiles?: Record<string, string>;\n \n /** Additional Docker-specific results */\n {{additional_docker_results}}\n}\n\n/**\n * Implementation of DockerComposePromptCombiner\n */\nexport class DockerComposePromptCombiner implements PromptCombiner {\n constructor(private promptService: PromptService) {}\n \n /**\n * Combines Docker Compose prompts\n * @param promptIds Array of prompt IDs to combine\n * @param context Optional Docker Compose context\n * @returns Combined Docker Compose result\n */\n async combinePrompts(promptIds: string[], context?: DockerComposeContext): Promise<DockerComposeResult> {\n // Implementation would include:\n // 1. Validating the prompts are compatible for Docker Compose configurations\n // 2. Organizing prompts into service, network, and volume sections\n // 3. Resolving dependencies between services\n // 4. Applying variables with Docker Compose knowledge\n // 5. 
Generating a comprehensive deployment configuration\n \n // This is a template structure - in a real implementation, this would contain\n // the actual logic for combining Docker Compose prompts\n \n // For now, we'll outline the structure of how the implementation would work\n \n // Step 1: Load and categorize all prompts\n const prompts = await Promise.all(promptIds.map(id => this.promptService.getPrompt(id)));\n \n const servicePrompts = prompts.filter(p => p.tags?.includes('service'));\n const networkPrompts = prompts.filter(p => p.tags?.includes('network'));\n const volumePrompts = prompts.filter(p => p.tags?.includes('volume'));\n const deploymentPrompts = prompts.filter(p => p.tags?.includes('deployment'));\n \n // Step 2: Apply variables to each prompt category\n const variables = context?.variables || {};\n \n // Combine service configurations\n const services = await this.combineServices(servicePrompts, context);\n \n // Combine network configurations\n const networks = await this.combineNetworks(networkPrompts, context);\n \n // Combine volume configurations\n const volumes = await this.combineVolumes(volumePrompts, context);\n \n // Combine deployment commands\n const deployment = await this.combineDeployment(deploymentPrompts, context);\n \n // Step 3: Create combined Docker Compose content\n const composeVersion = context?.composeVersion || '3.8';\n const serviceName = variables.service_name || 'app';\n \n const composeConfiguration = `version: '${composeVersion}'\n\nservices:\n${services.content}\n\nnetworks:\n${networks.content}\n\nvolumes:\n${volumes.content}\n`;\n \n // Step 4: Return the comprehensive result\n return {\n content: `# Docker Compose Configuration for ${variables.project_name || 'Your Project'}\n\n## Docker Compose File\n\n\\`\\`\\`yaml\n${composeConfiguration}\n\\`\\`\\`\n\n## Deployment Commands\n\n${deployment.content}\n`,\n components: [\n ...services.components,\n ...networks.components,\n ...volumes.components,\n ...deployment.components\n ],\n appliedVariables: variables,\n composeConfiguration,\n serviceConfigurations: this.extractServiceConfigurations(services.content),\n networkConfigurations: networks.content,\n volumeConfigurations: volumes.content,\n deploymentCommands: deployment.content,\n // Add suggestion for what to do next\n nextSteps: [\n { action: 'validate_compose', description: 'Validate the Docker Compose configuration using docker-compose config' },\n { action: 'deploy_compose', description: 'Deploy services using docker-compose up -d' },\n { action: 'monitor_services', description: 'Monitor service logs using docker-compose logs -f' },\n { action: 'scale_services', description: 'Scale services as needed using docker-compose up -d --scale' }\n ]\n };\n }\n \n /**\n * Helper method to combine service prompts\n * @param prompts Service prompts\n * @param context Docker Compose context\n * @returns Combined result for services\n */\n private async combineServices(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine service definitions\n // For our template, we'll create a simplified implementation\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no service prompts but we have services in context, create from context\n if (prompts.length === 0 && context?.services?.length) {\n content = this.generateServicesFromContext(context);\n components.push({\n id: 
'generated-services',\n name: 'Generated Services',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate service definitions from context\n * @param context Docker Compose context\n * @returns Generated service YAML\n */\n private generateServicesFromContext(context: DockerComposeContext): string {\n let servicesYaml = '';\n \n for (const service of context.services) {\n servicesYaml += ` ${service.name}:\\n`;\n if (service.image) {\n servicesYaml += ` image: ${service.image}\\n`;\n } else {\n servicesYaml += ` build: ./${service.name}\\n`;\n }\n \n if (service.ports && service.ports.length) {\n servicesYaml += ' ports:\\n';\n for (const port of service.ports) {\n servicesYaml += ` - \"${port}\"\\n`;\n }\n }\n \n if (service.environment && Object.keys(service.environment).length) {\n servicesYaml += ' environment:\\n';\n for (const [key, value] of Object.entries(service.environment)) {\n servicesYaml += ` - ${key}=${value}\\n`;\n }\n }\n \n if (service.volumes && service.volumes.length) {\n servicesYaml += ' volumes:\\n';\n for (const volume of service.volumes) {\n servicesYaml += ` - ${volume}\\n`;\n }\n }\n \n if (service.dependencies && service.dependencies.length) {\n servicesYaml += ' depends_on:\\n';\n for (const dep of service.dependencies) {\n servicesYaml += ` - ${dep}\\n`;\n }\n }\n \n // Add resource constraints if specified\n if (context.resources?.cpuLimits || context.resources?.memoryLimits) {\n servicesYaml += ' deploy:\\n resources:\\n limits:\\n';\n if (context.resources.cpuLimits) {\n servicesYaml += ' cpus: \"1.0\"\\n';\n }\n if (context.resources.memoryLimits) {\n servicesYaml += ' memory: 512M\\n';\n }\n }\n \n servicesYaml += '\\n';\n }\n \n return servicesYaml;\n }\n \n /**\n * Helper method to combine network prompts\n * @param prompts Network prompts\n * @param context Docker Compose context\n * @returns Combined result for networks\n */\n private async combineNetworks(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine network definitions\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no network prompts but we have networks in context, create from context\n if (prompts.length === 0 && context?.networks?.length) {\n content = this.generateNetworksFromContext(context);\n components.push({\n id: 'generated-networks',\n name: 'Generated Networks',\n contribution: content\n });\n } else if (prompts.length === 0) {\n // Default network if nothing provided\n content = ` app-network:\\n driver: bridge\\n`;\n components.push({\n id: 'default-network',\n name: 'Default Network',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate network 
definitions from context\n * @param context Docker Compose context\n * @returns Generated network YAML\n */\n private generateNetworksFromContext(context: DockerComposeContext): string {\n let networksYaml = '';\n \n for (const network of context.networks || []) {\n networksYaml += ` ${network.name}:\\n`;\n if (network.driver) {\n networksYaml += ` driver: ${network.driver}\\n`;\n }\n if (network.external) {\n networksYaml += ` external: true\\n`;\n }\n networksYaml += '\\n';\n }\n \n return networksYaml;\n }\n \n /**\n * Helper method to combine volume prompts\n * @param prompts Volume prompts\n * @param context Docker Compose context\n * @returns Combined result for volumes\n */\n private async combineVolumes(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine volume definitions\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no volume prompts but we have volumes in context, create from context\n if (prompts.length === 0 && context?.volumes?.length) {\n content = this.generateVolumesFromContext(context);\n components.push({\n id: 'generated-volumes',\n name: 'Generated Volumes',\n contribution: content\n });\n } else if (prompts.length === 0) {\n // Default volume if nothing provided\n content = ` app-data:\\n`;\n components.push({\n id: 'default-volume',\n name: 'Default Volume',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate volume definitions from context\n * @param context Docker Compose context\n * @returns Generated volume YAML\n */\n private generateVolumesFromContext(context: DockerComposeContext): string {\n let volumesYaml = '';\n \n for (const volume of context.volumes || []) {\n volumesYaml += ` ${volume.name}:\\n`;\n if (volume.driver) {\n volumesYaml += ` driver: ${volume.driver}\\n`;\n }\n if (volume.external) {\n volumesYaml += ` external: true\\n`;\n }\n volumesYaml += '\\n';\n }\n \n return volumesYaml;\n }\n \n /**\n * Helper method to combine deployment prompts\n * @param prompts Deployment prompts\n * @param context Docker Compose context\n * @returns Combined result for deployment\n */\n private async combineDeployment(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine deployment commands\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no deployment prompts, generate default commands\n if (prompts.length === 0) {\n const projectName = variables.project_name || 'myproject';\n const env = context?.environment || 'development';\n \n content = `# Start all services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml up -d\n\n# View service logs\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml logs -f\n\n# Scale specific services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml up -d --scale service_name=3\n\n# Stop all services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml down\n\n# Stop 
and remove volumes\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml down -v`;\n \n components.push({\n id: 'default-deployment',\n name: 'Default Deployment Commands',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Extract individual service configurations from combined YAML\n * @param servicesYaml Combined services YAML\n * @returns Object with service name keys and configuration values\n */\n private extractServiceConfigurations(servicesYaml: string): Record<string, string> {\n const services: Record<string, string> = {};\n const serviceBlocks = servicesYaml.split(/^\\s{2}[^\\s]+:/gm);\n \n // Skip the first empty block if it exists\n const startIndex = serviceBlocks[0].trim() === '' ? 1 : 0;\n \n for (let i = startIndex; i < serviceBlocks.length; i++) {\n const block = serviceBlocks[i];\n const nameMatch = block.match(/^\\s*([^\\s:]+)\\s*$/m);\n \n if (nameMatch && nameMatch[1]) {\n const serviceName = nameMatch[1];\n services[serviceName] = block.trim();\n }\n }\n \n return services;\n }\n \n /**\n * Gets Docker Compose prompt suggestions\n * @param category Optional category to filter by\n * @param context Current Docker Compose context to inform suggestions\n * @returns Array of prompt suggestions for Docker Compose configurations\n */\n async getPromptSuggestions(category?: string, context?: DockerComposeContext): Promise<PromptSuggestion[]> {\n // Implementation would suggest prompts based on the current Docker context\n // For example, if using PostgreSQL, suggest corresponding service templates\n // This is a placeholder for demonstration\n \n const hasDatabase = context?.services?.some(s => \n s.type === 'database' || \n s.image?.includes('postgres') || \n s.image?.includes('mysql') || \n s.image?.includes('mongo'));\n \n const hasMCP = context?.services?.some(s => \n s.name.includes('mcp') || \n s.type === 'mcp');\n \n return [\n {\n id: 'docker-containerization-guide',\n name: 'Docker Containerization Guide',\n relevance: 100,\n compatibleWith: ['docker-compose-database-service', 'docker-compose-mcp-service'],\n reason: 'Provides the Docker containerization foundation'\n },\n {\n id: 'docker-compose-database-service',\n name: 'Docker Compose Database Service',\n relevance: hasDatabase ? 100 : 70,\n compatibleWith: ['docker-containerization-guide', 'docker-compose-mcp-service'],\n reason: hasDatabase ? 'Required for database services in your composition' : 'Optional database service configuration'\n },\n {\n id: 'docker-compose-mcp-service',\n name: 'Docker Compose MCP Service',\n relevance: hasMCP ? 100 : 50,\n compatibleWith: ['docker-containerization-guide', 'docker-compose.postgres'],\n reason: hasMCP ? 'Required for MCP services in your composition' : 'Optional MCP service configuration'\n },\n {\n id: 'docker-compose-networking',\n name: 'Docker Compose Networking',\n relevance: 80,\n compatibleWith: ['docker-containerization-guide'],\n reason: 'Advanced networking configuration for your services'\n },\n {\n id: 'docker-compose-deployment',\n name: 'Docker Compose Deployment',\n relevance: context?.environment === 'production' ? 
100 : 70,\n compatibleWith: ['docker-containerization-guide'],\n reason: 'Deployment strategies for your Docker Compose applications'\n }\n ];\n }\n \n /**\n * Validates if the prompts can be combined for Docker Compose configurations\n * @param promptIds Array of prompt IDs to validate\n * @returns Validation result with any issues specific to Docker Compose\n */\n async validateCombination(promptIds: string[]): Promise<CombinationValidationResult> {\n // Implementation would validate that the prompts make sense for Docker Compose\n // For example, ensuring there are no conflicting service definitions\n // This is a placeholder for demonstration\n \n const prompts = await Promise.all(promptIds.map(id => this.promptService.getPrompt(id)));\n \n // Check for Docker container prompt\n const hasContainer = prompts.some(p => p.tags?.includes('docker') || p.tags?.includes('containerization'));\n if (!hasContainer) {\n return {\n isValid: false,\n issues: [{\n promptId: '',\n issue: 'Missing Docker containerization prompt',\n severity: 'error',\n suggestion: 'Add a Docker containerization prompt, such as docker-containerization-guide'\n }],\n suggestions: [{\n promptIds: [...promptIds, 'docker-containerization-guide'],\n reason: 'Docker containerization is required for Docker Compose configurations'\n }]\n };\n }\n \n // In a real implementation, would do more validation specific to Docker Compose\n \n return {\n isValid: true\n };\n }\n \n /**\n * Creates a saved Docker Compose workflow\n * @param name Name for the new workflow\n * @param promptIds Component prompt IDs\n * @param config Configuration for the combination\n * @returns The created Docker Compose workflow\n */\n async saveWorkflow(name: string, promptIds: string[], config: WorkflowConfig): Promise<SavedWorkflow> {\n // Implementation would save a Docker Compose workflow\n // This is a placeholder for demonstration\n \n return {\n id: `docker-compose-workflow-${Date.now()}`,\n name,\n promptIds,\n config,\n createdAt: new Date().toISOString(),\n updatedAt: new Date().toISOString(),\n version: 1,\n category: 'docker-compose',\n tags: ['docker', 'compose', 'deployment']\n };\n }\n \n /**\n * Loads a previously saved Docker Compose workflow\n * @param workflowId ID of the saved workflow\n * @returns The loaded Docker Compose workflow\n */\n async loadWorkflow(workflowId: string): Promise<SavedWorkflow> {\n // Implementation would load a Docker Compose workflow\n // This is a placeholder for demonstration\n \n throw new Error(`Workflow ${workflowId} not found or not implemented yet`);\n }\n}\n\n/**\n * Usage Examples\n * \n * ```typescript\n * // Creating a combiner\n * const promptService = new PromptService(storageAdapter);\n * const dockerCombiner = new DockerComposePromptCombiner(promptService);\n * \n * // Getting prompt suggestions for Docker Compose\n * const suggestions = await dockerCombiner.getPromptSuggestions('services', {\n * environment: 'production',\n * services: [\n * {\n * name: 'web',\n * type: 'frontend',\n * image: 'nginx:alpine',\n * ports: ['80:80']\n * },\n * {\n * name: 'api',\n * type: 'backend',\n * image: 'node:14-alpine',\n * ports: ['3000:3000'],\n * dependencies: ['db']\n * },\n * {\n * name: 'db',\n * type: 'database',\n * image: 'postgres:13',\n * volumes: ['postgres-data:/var/lib/postgresql/data']\n * }\n * ],\n * composeVersion: '3.8'\n * });\n * \n * // Combining prompts for Docker Compose\n * const result = await dockerCombiner.combinePrompts([\n * 'docker-containerization-guide',\n * 
'docker-compose-database-service'\n * ], {\n * variables: {\n * project_name: 'My Awesome Project',\n * service_name: 'api'\n * },\n * environment: 'production',\n * services: [\n * {\n * name: 'web',\n * type: 'frontend',\n * image: 'nginx:alpine',\n * ports: ['80:80']\n * },\n * {\n * name: 'api',\n * type: 'backend',\n * image: 'node:14-alpine',\n * ports: ['3000:3000'],\n * dependencies: ['db']\n * },\n * {\n * name: 'db',\n * type: 'database',\n * image: 'postgres:13',\n * volumes: ['postgres-data:/var/lib/postgresql/data']\n * }\n * ],\n * composeVersion: '3.8'\n * });\n * \n * // Using the specialized result properties\n * console.log(result.composeConfiguration); // Get the complete Docker Compose configuration\n * console.log(result.serviceConfigurations['db']); // Get just the database service configuration\n * console.log(result.deploymentCommands); // Get the deployment commands\n * ```\n */\n\n// ============================\n// Extension Guidelines\n// ============================\n\n/**\n * When extending DockerComposePromptCombiner, consider:\n * \n * 1. Adding support for specific service types (e.g., web, backend, database, cache)\n * 2. Enhancing the context with more Docker-specific properties\n * 3. Adding support for more complex network and volume configurations\n * 4. Implementing advanced health check configurations\n * 5. Adding support for Docker Swarm mode configurations\n * 6. {{additional_extension_guidelines}}\n */",
"variables": [
"project_name",
"additional_docker_context",
"additional_docker_results",
"additional_extension_guidelines"
],
"metadata": {
"source": "/home/sparrow/projects/mcp-prompts/prompts/docker-compose-prompt-combiner.json",
"imported": true
}
}
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/Docker_Compose_Prompt_Combiner.json:
--------------------------------------------------------------------------------
```json
{
"name": "Docker Compose Prompt Combiner",
"description": "A specialized prompt combiner for creating Docker Compose configurations that integrates service definitions, volumes, networks, and deployment patterns",
"type": "prompt",
"category": "devops",
"content": "/**\n * DockerComposePromptCombiner for {{project_name}}\n * \n * A specialized implementation of the PromptCombiner interface\n * focused on combining prompts for Docker Compose configuration and orchestration.\n */\n\nimport { PromptCombiner, CombinerContext, CombinedPromptResult, PromptSuggestion, CombinationValidationResult, WorkflowConfig, SavedWorkflow } from './prompt-combiner-interface';\nimport { PromptService } from '../services/prompt-service';\nimport { Prompt } from '../core/types';\n\n/**\n * Docker Compose specific context\n */\nexport interface DockerComposeContext extends CombinerContext {\n /** Project environment (development, staging, production) */\n environment: 'development' | 'staging' | 'production' | string;\n \n /** Services to include in the configuration */\n services: {\n name: string;\n type: string;\n image?: string;\n ports?: string[];\n volumes?: string[];\n environment?: Record<string, string>;\n dependencies?: string[];\n }[];\n \n /** Networks to define */\n networks?: {\n name: string;\n external?: boolean;\n driver?: string;\n }[];\n \n /** Volumes to define */\n volumes?: {\n name: string;\n driver?: string;\n external?: boolean;\n }[];\n \n /** Docker Compose version */\n composeVersion?: string;\n \n /** Orchestration platform */\n platform?: 'docker' | 'kubernetes' | 'swarm';\n \n /** Resource constraints */\n resources?: {\n memoryLimits?: boolean;\n cpuLimits?: boolean;\n };\n \n /** Additional Docker-specific context */\n {{additional_docker_context}}\n}\n\n/**\n * Specialized result for Docker Compose combinations\n */\nexport interface DockerComposeResult extends CombinedPromptResult {\n /** Generated Docker Compose configuration */\n composeConfiguration?: string;\n \n /** Individual service configurations */\n serviceConfigurations?: Record<string, string>;\n \n /** Network configurations */\n networkConfigurations?: string;\n \n /** Volume configurations */\n volumeConfigurations?: string;\n \n /** Deployment commands */\n deploymentCommands?: string;\n \n /** Generated Dockerfiles */\n dockerfiles?: Record<string, string>;\n \n /** Additional Docker-specific results */\n {{additional_docker_results}}\n}\n\n/**\n * Implementation of DockerComposePromptCombiner\n */\nexport class DockerComposePromptCombiner implements PromptCombiner {\n constructor(private promptService: PromptService) {}\n \n /**\n * Combines Docker Compose prompts\n * @param promptIds Array of prompt IDs to combine\n * @param context Optional Docker Compose context\n * @returns Combined Docker Compose result\n */\n async combinePrompts(promptIds: string[], context?: DockerComposeContext): Promise<DockerComposeResult> {\n // Implementation would include:\n // 1. Validating the prompts are compatible for Docker Compose configurations\n // 2. Organizing prompts into service, network, and volume sections\n // 3. Resolving dependencies between services\n // 4. Applying variables with Docker Compose knowledge\n // 5. 
Generating a comprehensive deployment configuration\n \n // This is a template structure - in a real implementation, this would contain\n // the actual logic for combining Docker Compose prompts\n \n // For now, we'll outline the structure of how the implementation would work\n \n // Step 1: Load and categorize all prompts\n const prompts = await Promise.all(promptIds.map(id => this.promptService.getPrompt(id)));\n \n const servicePrompts = prompts.filter(p => p.tags?.includes('service'));\n const networkPrompts = prompts.filter(p => p.tags?.includes('network'));\n const volumePrompts = prompts.filter(p => p.tags?.includes('volume'));\n const deploymentPrompts = prompts.filter(p => p.tags?.includes('deployment'));\n \n // Step 2: Apply variables to each prompt category\n const variables = context?.variables || {};\n \n // Combine service configurations\n const services = await this.combineServices(servicePrompts, context);\n \n // Combine network configurations\n const networks = await this.combineNetworks(networkPrompts, context);\n \n // Combine volume configurations\n const volumes = await this.combineVolumes(volumePrompts, context);\n \n // Combine deployment commands\n const deployment = await this.combineDeployment(deploymentPrompts, context);\n \n // Step 3: Create combined Docker Compose content\n const composeVersion = context?.composeVersion || '3.8';\n const serviceName = variables.service_name || 'app';\n \n const composeConfiguration = `version: '${composeVersion}'\n\nservices:\n${services.content}\n\nnetworks:\n${networks.content}\n\nvolumes:\n${volumes.content}\n`;\n \n // Step 4: Return the comprehensive result\n return {\n content: `# Docker Compose Configuration for ${variables.project_name || 'Your Project'}\n\n## Docker Compose File\n\n\\`\\`\\`yaml\n${composeConfiguration}\n\\`\\`\\`\n\n## Deployment Commands\n\n${deployment.content}\n`,\n components: [\n ...services.components,\n ...networks.components,\n ...volumes.components,\n ...deployment.components\n ],\n appliedVariables: variables,\n composeConfiguration,\n serviceConfigurations: this.extractServiceConfigurations(services.content),\n networkConfigurations: networks.content,\n volumeConfigurations: volumes.content,\n deploymentCommands: deployment.content,\n // Add suggestion for what to do next\n nextSteps: [\n { action: 'validate_compose', description: 'Validate the Docker Compose configuration using docker-compose config' },\n { action: 'deploy_compose', description: 'Deploy services using docker-compose up -d' },\n { action: 'monitor_services', description: 'Monitor service logs using docker-compose logs -f' },\n { action: 'scale_services', description: 'Scale services as needed using docker-compose up -d --scale' }\n ]\n };\n }\n \n /**\n * Helper method to combine service prompts\n * @param prompts Service prompts\n * @param context Docker Compose context\n * @returns Combined result for services\n */\n private async combineServices(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine service definitions\n // For our template, we'll create a simplified implementation\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no service prompts but we have services in context, create from context\n if (prompts.length === 0 && context?.services?.length) {\n content = this.generateServicesFromContext(context);\n components.push({\n id: 
'generated-services',\n name: 'Generated Services',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate service definitions from context\n * @param context Docker Compose context\n * @returns Generated service YAML\n */\n private generateServicesFromContext(context: DockerComposeContext): string {\n let servicesYaml = '';\n \n for (const service of context.services) {\n servicesYaml += ` ${service.name}:\\n`;\n if (service.image) {\n servicesYaml += ` image: ${service.image}\\n`;\n } else {\n servicesYaml += ` build: ./${service.name}\\n`;\n }\n \n if (service.ports && service.ports.length) {\n servicesYaml += ' ports:\\n';\n for (const port of service.ports) {\n servicesYaml += ` - \"${port}\"\\n`;\n }\n }\n \n if (service.environment && Object.keys(service.environment).length) {\n servicesYaml += ' environment:\\n';\n for (const [key, value] of Object.entries(service.environment)) {\n servicesYaml += ` - ${key}=${value}\\n`;\n }\n }\n \n if (service.volumes && service.volumes.length) {\n servicesYaml += ' volumes:\\n';\n for (const volume of service.volumes) {\n servicesYaml += ` - ${volume}\\n`;\n }\n }\n \n if (service.dependencies && service.dependencies.length) {\n servicesYaml += ' depends_on:\\n';\n for (const dep of service.dependencies) {\n servicesYaml += ` - ${dep}\\n`;\n }\n }\n \n // Add resource constraints if specified\n if (context.resources?.cpuLimits || context.resources?.memoryLimits) {\n servicesYaml += ' deploy:\\n resources:\\n limits:\\n';\n if (context.resources.cpuLimits) {\n servicesYaml += ' cpus: \"1.0\"\\n';\n }\n if (context.resources.memoryLimits) {\n servicesYaml += ' memory: 512M\\n';\n }\n }\n \n servicesYaml += '\\n';\n }\n \n return servicesYaml;\n }\n \n /**\n * Helper method to combine network prompts\n * @param prompts Network prompts\n * @param context Docker Compose context\n * @returns Combined result for networks\n */\n private async combineNetworks(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine network definitions\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no network prompts but we have networks in context, create from context\n if (prompts.length === 0 && context?.networks?.length) {\n content = this.generateNetworksFromContext(context);\n components.push({\n id: 'generated-networks',\n name: 'Generated Networks',\n contribution: content\n });\n } else if (prompts.length === 0) {\n // Default network if nothing provided\n content = ` app-network:\\n driver: bridge\\n`;\n components.push({\n id: 'default-network',\n name: 'Default Network',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate network 
definitions from context\n * @param context Docker Compose context\n * @returns Generated network YAML\n */\n private generateNetworksFromContext(context: DockerComposeContext): string {\n let networksYaml = '';\n \n for (const network of context.networks || []) {\n networksYaml += ` ${network.name}:\\n`;\n if (network.driver) {\n networksYaml += ` driver: ${network.driver}\\n`;\n }\n if (network.external) {\n networksYaml += ` external: true\\n`;\n }\n networksYaml += '\\n';\n }\n \n return networksYaml;\n }\n \n /**\n * Helper method to combine volume prompts\n * @param prompts Volume prompts\n * @param context Docker Compose context\n * @returns Combined result for volumes\n */\n private async combineVolumes(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine volume definitions\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no volume prompts but we have volumes in context, create from context\n if (prompts.length === 0 && context?.volumes?.length) {\n content = this.generateVolumesFromContext(context);\n components.push({\n id: 'generated-volumes',\n name: 'Generated Volumes',\n contribution: content\n });\n } else if (prompts.length === 0) {\n // Default volume if nothing provided\n content = ` app-data:\\n`;\n components.push({\n id: 'default-volume',\n name: 'Default Volume',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate volume definitions from context\n * @param context Docker Compose context\n * @returns Generated volume YAML\n */\n private generateVolumesFromContext(context: DockerComposeContext): string {\n let volumesYaml = '';\n \n for (const volume of context.volumes || []) {\n volumesYaml += ` ${volume.name}:\\n`;\n if (volume.driver) {\n volumesYaml += ` driver: ${volume.driver}\\n`;\n }\n if (volume.external) {\n volumesYaml += ` external: true\\n`;\n }\n volumesYaml += '\\n';\n }\n \n return volumesYaml;\n }\n \n /**\n * Helper method to combine deployment prompts\n * @param prompts Deployment prompts\n * @param context Docker Compose context\n * @returns Combined result for deployment\n */\n private async combineDeployment(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine deployment commands\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no deployment prompts, generate default commands\n if (prompts.length === 0) {\n const projectName = variables.project_name || 'myproject';\n const env = context?.environment || 'development';\n \n content = `# Start all services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml up -d\n\n# View service logs\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml logs -f\n\n# Scale specific services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml up -d --scale service_name=3\n\n# Stop all services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml down\n\n# Stop 
and remove volumes\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml down -v`;\n \n components.push({\n id: 'default-deployment',\n name: 'Default Deployment Commands',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Extract individual service configurations from combined YAML\n * @param servicesYaml Combined services YAML\n * @returns Object with service name keys and configuration values\n */\n private extractServiceConfigurations(servicesYaml: string): Record<string, string> {\n const services: Record<string, string> = {};\n const serviceBlocks = servicesYaml.split(/^\\s{2}[^\\s]+:/gm);\n \n // Skip the first empty block if it exists\n const startIndex = serviceBlocks[0].trim() === '' ? 1 : 0;\n \n for (let i = startIndex; i < serviceBlocks.length; i++) {\n const block = serviceBlocks[i];\n const nameMatch = block.match(/^\\s*([^\\s:]+)\\s*$/m);\n \n if (nameMatch && nameMatch[1]) {\n const serviceName = nameMatch[1];\n services[serviceName] = block.trim();\n }\n }\n \n return services;\n }\n \n /**\n * Gets Docker Compose prompt suggestions\n * @param category Optional category to filter by\n * @param context Current Docker Compose context to inform suggestions\n * @returns Array of prompt suggestions for Docker Compose configurations\n */\n async getPromptSuggestions(category?: string, context?: DockerComposeContext): Promise<PromptSuggestion[]> {\n // Implementation would suggest prompts based on the current Docker context\n // For example, if using PostgreSQL, suggest corresponding service templates\n // This is a placeholder for demonstration\n \n const hasDatabase = context?.services?.some(s => \n s.type === 'database' || \n s.image?.includes('postgres') || \n s.image?.includes('mysql') || \n s.image?.includes('mongo'));\n \n const hasMCP = context?.services?.some(s => \n s.name.includes('mcp') || \n s.type === 'mcp');\n \n return [\n {\n id: 'docker-containerization-guide',\n name: 'Docker Containerization Guide',\n relevance: 100,\n compatibleWith: ['docker-compose-database-service', 'docker-compose-mcp-service'],\n reason: 'Provides the Docker containerization foundation'\n },\n {\n id: 'docker-compose-database-service',\n name: 'Docker Compose Database Service',\n relevance: hasDatabase ? 100 : 70,\n compatibleWith: ['docker-containerization-guide', 'docker-compose-mcp-service'],\n reason: hasDatabase ? 'Required for database services in your composition' : 'Optional database service configuration'\n },\n {\n id: 'docker-compose-mcp-service',\n name: 'Docker Compose MCP Service',\n relevance: hasMCP ? 100 : 50,\n compatibleWith: ['docker-containerization-guide', 'docker-compose.postgres'],\n reason: hasMCP ? 'Required for MCP services in your composition' : 'Optional MCP service configuration'\n },\n {\n id: 'docker-compose-networking',\n name: 'Docker Compose Networking',\n relevance: 80,\n compatibleWith: ['docker-containerization-guide'],\n reason: 'Advanced networking configuration for your services'\n },\n {\n id: 'docker-compose-deployment',\n name: 'Docker Compose Deployment',\n relevance: context?.environment === 'production' ? 
100 : 70,\n compatibleWith: ['docker-containerization-guide'],\n reason: 'Deployment strategies for your Docker Compose applications'\n }\n ];\n }\n \n /**\n * Validates if the prompts can be combined for Docker Compose configurations\n * @param promptIds Array of prompt IDs to validate\n * @returns Validation result with any issues specific to Docker Compose\n */\n async validateCombination(promptIds: string[]): Promise<CombinationValidationResult> {\n // Implementation would validate that the prompts make sense for Docker Compose\n // For example, ensuring there are no conflicting service definitions\n // This is a placeholder for demonstration\n \n const prompts = await Promise.all(promptIds.map(id => this.promptService.getPrompt(id)));\n \n // Check for Docker container prompt\n const hasContainer = prompts.some(p => p.tags?.includes('docker') || p.tags?.includes('containerization'));\n if (!hasContainer) {\n return {\n isValid: false,\n issues: [{\n promptId: '',\n issue: 'Missing Docker containerization prompt',\n severity: 'error',\n suggestion: 'Add a Docker containerization prompt, such as docker-containerization-guide'\n }],\n suggestions: [{\n promptIds: [...promptIds, 'docker-containerization-guide'],\n reason: 'Docker containerization is required for Docker Compose configurations'\n }]\n };\n }\n \n // In a real implementation, would do more validation specific to Docker Compose\n \n return {\n isValid: true\n };\n }\n \n /**\n * Creates a saved Docker Compose workflow\n * @param name Name for the new workflow\n * @param promptIds Component prompt IDs\n * @param config Configuration for the combination\n * @returns The created Docker Compose workflow\n */\n async saveWorkflow(name: string, promptIds: string[], config: WorkflowConfig): Promise<SavedWorkflow> {\n // Implementation would save a Docker Compose workflow\n // This is a placeholder for demonstration\n \n return {\n id: `docker-compose-workflow-${Date.now()}`,\n name,\n promptIds,\n config,\n createdAt: new Date().toISOString(),\n updatedAt: new Date().toISOString(),\n version: 1,\n category: 'docker-compose',\n tags: ['docker', 'compose', 'deployment']\n };\n }\n \n /**\n * Loads a previously saved Docker Compose workflow\n * @param workflowId ID of the saved workflow\n * @returns The loaded Docker Compose workflow\n */\n async loadWorkflow(workflowId: string): Promise<SavedWorkflow> {\n // Implementation would load a Docker Compose workflow\n // This is a placeholder for demonstration\n \n throw new Error(`Workflow ${workflowId} not found or not implemented yet`);\n }\n}\n\n/**\n * Usage Examples\n * \n * ```typescript\n * // Creating a combiner\n * const promptService = new PromptService(storageAdapter);\n * const dockerCombiner = new DockerComposePromptCombiner(promptService);\n * \n * // Getting prompt suggestions for Docker Compose\n * const suggestions = await dockerCombiner.getPromptSuggestions('services', {\n * environment: 'production',\n * services: [\n * {\n * name: 'web',\n * type: 'frontend',\n * image: 'nginx:alpine',\n * ports: ['80:80']\n * },\n * {\n * name: 'api',\n * type: 'backend',\n * image: 'node:14-alpine',\n * ports: ['3000:3000'],\n * dependencies: ['db']\n * },\n * {\n * name: 'db',\n * type: 'database',\n * image: 'postgres:13',\n * volumes: ['postgres-data:/var/lib/postgresql/data']\n * }\n * ],\n * composeVersion: '3.8'\n * });\n * \n * // Combining prompts for Docker Compose\n * const result = await dockerCombiner.combinePrompts([\n * 'docker-containerization-guide',\n * 
'docker-compose-database-service'\n * ], {\n * variables: {\n * project_name: 'My Awesome Project',\n * service_name: 'api'\n * },\n * environment: 'production',\n * services: [\n * {\n * name: 'web',\n * type: 'frontend',\n * image: 'nginx:alpine',\n * ports: ['80:80']\n * },\n * {\n * name: 'api',\n * type: 'backend',\n * image: 'node:14-alpine',\n * ports: ['3000:3000'],\n * dependencies: ['db']\n * },\n * {\n * name: 'db',\n * type: 'database',\n * image: 'postgres:13',\n * volumes: ['postgres-data:/var/lib/postgresql/data']\n * }\n * ],\n * composeVersion: '3.8'\n * });\n * \n * // Using the specialized result properties\n * console.log(result.composeConfiguration); // Get the complete Docker Compose configuration\n * console.log(result.serviceConfigurations['db']); // Get just the database service configuration\n * console.log(result.deploymentCommands); // Get the deployment commands\n * ```\n */\n\n// ============================\n// Extension Guidelines\n// ============================\n\n/**\n * When extending DockerComposePromptCombiner, consider:\n * \n * 1. Adding support for specific service types (e.g., web, backend, database, cache)\n * 2. Enhancing the context with more Docker-specific properties\n * 3. Adding support for more complex network and volume configurations\n * 4. Implementing advanced health check configurations\n * 5. Adding support for Docker Swarm mode configurations\n * 6. {{additional_extension_guidelines}}\n */",
"variables": [
"project_name",
"additional_docker_context",
"additional_docker_results",
"additional_extension_guidelines"
],
"metadata": {
"source": "/home/sparrow/projects/mcp-prompts/prompts/docker-compose-prompt-combiner.json",
"imported": true
}
}
```
--------------------------------------------------------------------------------
/data/prompts/templates/docker-compose-prompt-combiner.json:
--------------------------------------------------------------------------------
```json
{
"id": "docker-compose-prompt-combiner",
"name": "Docker Compose Prompt Combiner",
"description": "A specialized prompt combiner for creating Docker Compose configurations that integrates service definitions, volumes, networks, and deployment patterns",
"content": "/**\n * DockerComposePromptCombiner for {{project_name}}\n * \n * A specialized implementation of the PromptCombiner interface\n * focused on combining prompts for Docker Compose configuration and orchestration.\n */\n\nimport { PromptCombiner, CombinerContext, CombinedPromptResult, PromptSuggestion, CombinationValidationResult, WorkflowConfig, SavedWorkflow } from './prompt-combiner-interface';\nimport { PromptService } from '../services/prompt-service';\nimport { Prompt } from '../core/types';\n\n/**\n * Docker Compose specific context\n */\nexport interface DockerComposeContext extends CombinerContext {\n /** Project environment (development, staging, production) */\n environment: 'development' | 'staging' | 'production' | string;\n \n /** Services to include in the configuration */\n services: {\n name: string;\n type: string;\n image?: string;\n ports?: string[];\n volumes?: string[];\n environment?: Record<string, string>;\n dependencies?: string[];\n }[];\n \n /** Networks to define */\n networks?: {\n name: string;\n external?: boolean;\n driver?: string;\n }[];\n \n /** Volumes to define */\n volumes?: {\n name: string;\n driver?: string;\n external?: boolean;\n }[];\n \n /** Docker Compose version */\n composeVersion?: string;\n \n /** Orchestration platform */\n platform?: 'docker' | 'kubernetes' | 'swarm';\n \n /** Resource constraints */\n resources?: {\n memoryLimits?: boolean;\n cpuLimits?: boolean;\n };\n \n /** Additional Docker-specific context */\n {{additional_docker_context}}\n}\n\n/**\n * Specialized result for Docker Compose combinations\n */\nexport interface DockerComposeResult extends CombinedPromptResult {\n /** Generated Docker Compose configuration */\n composeConfiguration?: string;\n \n /** Individual service configurations */\n serviceConfigurations?: Record<string, string>;\n \n /** Network configurations */\n networkConfigurations?: string;\n \n /** Volume configurations */\n volumeConfigurations?: string;\n \n /** Deployment commands */\n deploymentCommands?: string;\n \n /** Generated Dockerfiles */\n dockerfiles?: Record<string, string>;\n \n /** Additional Docker-specific results */\n {{additional_docker_results}}\n}\n\n/**\n * Implementation of DockerComposePromptCombiner\n */\nexport class DockerComposePromptCombiner implements PromptCombiner {\n constructor(private promptService: PromptService) {}\n \n /**\n * Combines Docker Compose prompts\n * @param promptIds Array of prompt IDs to combine\n * @param context Optional Docker Compose context\n * @returns Combined Docker Compose result\n */\n async combinePrompts(promptIds: string[], context?: DockerComposeContext): Promise<DockerComposeResult> {\n // Implementation would include:\n // 1. Validating the prompts are compatible for Docker Compose configurations\n // 2. Organizing prompts into service, network, and volume sections\n // 3. Resolving dependencies between services\n // 4. Applying variables with Docker Compose knowledge\n // 5. 
Generating a comprehensive deployment configuration\n \n // This is a template structure - in a real implementation, this would contain\n // the actual logic for combining Docker Compose prompts\n \n // For now, we'll outline the structure of how the implementation would work\n \n // Step 1: Load and categorize all prompts\n const prompts = await Promise.all(promptIds.map(id => this.promptService.getPrompt(id)));\n \n const servicePrompts = prompts.filter(p => p.tags?.includes('service'));\n const networkPrompts = prompts.filter(p => p.tags?.includes('network'));\n const volumePrompts = prompts.filter(p => p.tags?.includes('volume'));\n const deploymentPrompts = prompts.filter(p => p.tags?.includes('deployment'));\n \n // Step 2: Apply variables to each prompt category\n const variables = context?.variables || {};\n \n // Combine service configurations\n const services = await this.combineServices(servicePrompts, context);\n \n // Combine network configurations\n const networks = await this.combineNetworks(networkPrompts, context);\n \n // Combine volume configurations\n const volumes = await this.combineVolumes(volumePrompts, context);\n \n // Combine deployment commands\n const deployment = await this.combineDeployment(deploymentPrompts, context);\n \n // Step 3: Create combined Docker Compose content\n const composeVersion = context?.composeVersion || '3.8';\n const serviceName = variables.service_name || 'app';\n \n const composeConfiguration = `version: '${composeVersion}'\n\nservices:\n${services.content}\n\nnetworks:\n${networks.content}\n\nvolumes:\n${volumes.content}\n`;\n \n // Step 4: Return the comprehensive result\n return {\n content: `# Docker Compose Configuration for ${variables.project_name || 'Your Project'}\n\n## Docker Compose File\n\n\\`\\`\\`yaml\n${composeConfiguration}\n\\`\\`\\`\n\n## Deployment Commands\n\n${deployment.content}\n`,\n components: [\n ...services.components,\n ...networks.components,\n ...volumes.components,\n ...deployment.components\n ],\n appliedVariables: variables,\n composeConfiguration,\n serviceConfigurations: this.extractServiceConfigurations(services.content),\n networkConfigurations: networks.content,\n volumeConfigurations: volumes.content,\n deploymentCommands: deployment.content,\n // Add suggestion for what to do next\n nextSteps: [\n { action: 'validate_compose', description: 'Validate the Docker Compose configuration using docker-compose config' },\n { action: 'deploy_compose', description: 'Deploy services using docker-compose up -d' },\n { action: 'monitor_services', description: 'Monitor service logs using docker-compose logs -f' },\n { action: 'scale_services', description: 'Scale services as needed using docker-compose up -d --scale' }\n ]\n };\n }\n \n /**\n * Helper method to combine service prompts\n * @param prompts Service prompts\n * @param context Docker Compose context\n * @returns Combined result for services\n */\n private async combineServices(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine service definitions\n // For our template, we'll create a simplified implementation\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no service prompts but we have services in context, create from context\n if (prompts.length === 0 && context?.services?.length) {\n content = this.generateServicesFromContext(context);\n components.push({\n id: 
'generated-services',\n name: 'Generated Services',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate service definitions from context\n * @param context Docker Compose context\n * @returns Generated service YAML\n */\n private generateServicesFromContext(context: DockerComposeContext): string {\n let servicesYaml = '';\n \n for (const service of context.services) {\n servicesYaml += ` ${service.name}:\\n`;\n if (service.image) {\n servicesYaml += ` image: ${service.image}\\n`;\n } else {\n servicesYaml += ` build: ./${service.name}\\n`;\n }\n \n if (service.ports && service.ports.length) {\n servicesYaml += ' ports:\\n';\n for (const port of service.ports) {\n servicesYaml += ` - \"${port}\"\\n`;\n }\n }\n \n if (service.environment && Object.keys(service.environment).length) {\n servicesYaml += ' environment:\\n';\n for (const [key, value] of Object.entries(service.environment)) {\n servicesYaml += ` - ${key}=${value}\\n`;\n }\n }\n \n if (service.volumes && service.volumes.length) {\n servicesYaml += ' volumes:\\n';\n for (const volume of service.volumes) {\n servicesYaml += ` - ${volume}\\n`;\n }\n }\n \n if (service.dependencies && service.dependencies.length) {\n servicesYaml += ' depends_on:\\n';\n for (const dep of service.dependencies) {\n servicesYaml += ` - ${dep}\\n`;\n }\n }\n \n // Add resource constraints if specified\n if (context.resources?.cpuLimits || context.resources?.memoryLimits) {\n servicesYaml += ' deploy:\\n resources:\\n limits:\\n';\n if (context.resources.cpuLimits) {\n servicesYaml += ' cpus: \"1.0\"\\n';\n }\n if (context.resources.memoryLimits) {\n servicesYaml += ' memory: 512M\\n';\n }\n }\n \n servicesYaml += '\\n';\n }\n \n return servicesYaml;\n }\n \n /**\n * Helper method to combine network prompts\n * @param prompts Network prompts\n * @param context Docker Compose context\n * @returns Combined result for networks\n */\n private async combineNetworks(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine network definitions\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no network prompts but we have networks in context, create from context\n if (prompts.length === 0 && context?.networks?.length) {\n content = this.generateNetworksFromContext(context);\n components.push({\n id: 'generated-networks',\n name: 'Generated Networks',\n contribution: content\n });\n } else if (prompts.length === 0) {\n // Default network if nothing provided\n content = ` app-network:\\n driver: bridge\\n`;\n components.push({\n id: 'default-network',\n name: 'Default Network',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate network 
definitions from context\n * @param context Docker Compose context\n * @returns Generated network YAML\n */\n private generateNetworksFromContext(context: DockerComposeContext): string {\n let networksYaml = '';\n \n for (const network of context.networks || []) {\n networksYaml += ` ${network.name}:\\n`;\n if (network.driver) {\n networksYaml += ` driver: ${network.driver}\\n`;\n }\n if (network.external) {\n networksYaml += ` external: true\\n`;\n }\n networksYaml += '\\n';\n }\n \n return networksYaml;\n }\n \n /**\n * Helper method to combine volume prompts\n * @param prompts Volume prompts\n * @param context Docker Compose context\n * @returns Combined result for volumes\n */\n private async combineVolumes(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine volume definitions\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no volume prompts but we have volumes in context, create from context\n if (prompts.length === 0 && context?.volumes?.length) {\n content = this.generateVolumesFromContext(context);\n components.push({\n id: 'generated-volumes',\n name: 'Generated Volumes',\n contribution: content\n });\n } else if (prompts.length === 0) {\n // Default volume if nothing provided\n content = ` app-data:\\n`;\n components.push({\n id: 'default-volume',\n name: 'Default Volume',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Generate volume definitions from context\n * @param context Docker Compose context\n * @returns Generated volume YAML\n */\n private generateVolumesFromContext(context: DockerComposeContext): string {\n let volumesYaml = '';\n \n for (const volume of context.volumes || []) {\n volumesYaml += ` ${volume.name}:\\n`;\n if (volume.driver) {\n volumesYaml += ` driver: ${volume.driver}\\n`;\n }\n if (volume.external) {\n volumesYaml += ` external: true\\n`;\n }\n volumesYaml += '\\n';\n }\n \n return volumesYaml;\n }\n \n /**\n * Helper method to combine deployment prompts\n * @param prompts Deployment prompts\n * @param context Docker Compose context\n * @returns Combined result for deployment\n */\n private async combineDeployment(prompts: Prompt[], context?: DockerComposeContext): Promise<CombinedPromptResult> {\n // Implementation would combine deployment commands\n let content = '';\n const components: {id: string; name: string; contribution: string}[] = [];\n const variables = context?.variables || {};\n \n // If no deployment prompts, generate default commands\n if (prompts.length === 0) {\n const projectName = variables.project_name || 'myproject';\n const env = context?.environment || 'development';\n \n content = `# Start all services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml up -d\n\n# View service logs\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml logs -f\n\n# Scale specific services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml up -d --scale service_name=3\n\n# Stop all services\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml down\n\n# Stop 
and remove volumes\ndocker-compose -p ${projectName} -f docker-compose.${env}.yml down -v`;\n \n components.push({\n id: 'default-deployment',\n name: 'Default Deployment Commands',\n contribution: content\n });\n } else {\n // Otherwise use the prompts\n for (const prompt of prompts) {\n const result = await this.promptService.applyTemplate(prompt.id, variables);\n content += result.content + '\\n\\n';\n components.push({\n id: prompt.id,\n name: prompt.name,\n contribution: result.content\n });\n }\n }\n \n return {\n content: content.trim(),\n components,\n appliedVariables: variables\n };\n }\n \n /**\n * Extract individual service configurations from combined YAML\n * @param servicesYaml Combined services YAML\n * @returns Object with service name keys and configuration values\n */\n private extractServiceConfigurations(servicesYaml: string): Record<string, string> {\n const services: Record<string, string> = {};\n const serviceBlocks = servicesYaml.split(/^\\s{2}[^\\s]+:/gm);\n \n // Skip the first empty block if it exists\n const startIndex = serviceBlocks[0].trim() === '' ? 1 : 0;\n \n for (let i = startIndex; i < serviceBlocks.length; i++) {\n const block = serviceBlocks[i];\n const nameMatch = block.match(/^\\s*([^\\s:]+)\\s*$/m);\n \n if (nameMatch && nameMatch[1]) {\n const serviceName = nameMatch[1];\n services[serviceName] = block.trim();\n }\n }\n \n return services;\n }\n \n /**\n * Gets Docker Compose prompt suggestions\n * @param category Optional category to filter by\n * @param context Current Docker Compose context to inform suggestions\n * @returns Array of prompt suggestions for Docker Compose configurations\n */\n async getPromptSuggestions(category?: string, context?: DockerComposeContext): Promise<PromptSuggestion[]> {\n // Implementation would suggest prompts based on the current Docker context\n // For example, if using PostgreSQL, suggest corresponding service templates\n // This is a placeholder for demonstration\n \n const hasDatabase = context?.services?.some(s => \n s.type === 'database' || \n s.image?.includes('postgres') || \n s.image?.includes('mysql') || \n s.image?.includes('mongo'));\n \n const hasMCP = context?.services?.some(s => \n s.name.includes('mcp') || \n s.type === 'mcp');\n \n return [\n {\n id: 'docker-containerization-guide',\n name: 'Docker Containerization Guide',\n relevance: 100,\n compatibleWith: ['docker-compose-database-service', 'docker-compose-mcp-service'],\n reason: 'Provides the Docker containerization foundation'\n },\n {\n id: 'docker-compose-database-service',\n name: 'Docker Compose Database Service',\n relevance: hasDatabase ? 100 : 70,\n compatibleWith: ['docker-containerization-guide', 'docker-compose-mcp-service'],\n reason: hasDatabase ? 'Required for database services in your composition' : 'Optional database service configuration'\n },\n {\n id: 'docker-compose-mcp-service',\n name: 'Docker Compose MCP Service',\n relevance: hasMCP ? 100 : 50,\n compatibleWith: ['docker-containerization-guide', 'docker-compose.postgres'],\n reason: hasMCP ? 'Required for MCP services in your composition' : 'Optional MCP service configuration'\n },\n {\n id: 'docker-compose-networking',\n name: 'Docker Compose Networking',\n relevance: 80,\n compatibleWith: ['docker-containerization-guide'],\n reason: 'Advanced networking configuration for your services'\n },\n {\n id: 'docker-compose-deployment',\n name: 'Docker Compose Deployment',\n relevance: context?.environment === 'production' ? 
100 : 70,\n compatibleWith: ['docker-containerization-guide'],\n reason: 'Deployment strategies for your Docker Compose applications'\n }\n ];\n }\n \n /**\n * Validates if the prompts can be combined for Docker Compose configurations\n * @param promptIds Array of prompt IDs to validate\n * @returns Validation result with any issues specific to Docker Compose\n */\n async validateCombination(promptIds: string[]): Promise<CombinationValidationResult> {\n // Implementation would validate that the prompts make sense for Docker Compose\n // For example, ensuring there are no conflicting service definitions\n // This is a placeholder for demonstration\n \n const prompts = await Promise.all(promptIds.map(id => this.promptService.getPrompt(id)));\n \n // Check for Docker container prompt\n const hasContainer = prompts.some(p => p.tags?.includes('docker') || p.tags?.includes('containerization'));\n if (!hasContainer) {\n return {\n isValid: false,\n issues: [{\n promptId: '',\n issue: 'Missing Docker containerization prompt',\n severity: 'error',\n suggestion: 'Add a Docker containerization prompt, such as docker-containerization-guide'\n }],\n suggestions: [{\n promptIds: [...promptIds, 'docker-containerization-guide'],\n reason: 'Docker containerization is required for Docker Compose configurations'\n }]\n };\n }\n \n // In a real implementation, would do more validation specific to Docker Compose\n \n return {\n isValid: true\n };\n }\n \n /**\n * Creates a saved Docker Compose workflow\n * @param name Name for the new workflow\n * @param promptIds Component prompt IDs\n * @param config Configuration for the combination\n * @returns The created Docker Compose workflow\n */\n async saveWorkflow(name: string, promptIds: string[], config: WorkflowConfig): Promise<SavedWorkflow> {\n // Implementation would save a Docker Compose workflow\n // This is a placeholder for demonstration\n \n return {\n id: `docker-compose-workflow-${Date.now()}`,\n name,\n promptIds,\n config,\n createdAt: new Date().toISOString(),\n updatedAt: new Date().toISOString(),\n version: 1,\n category: 'docker-compose',\n tags: ['docker', 'compose', 'deployment']\n };\n }\n \n /**\n * Loads a previously saved Docker Compose workflow\n * @param workflowId ID of the saved workflow\n * @returns The loaded Docker Compose workflow\n */\n async loadWorkflow(workflowId: string): Promise<SavedWorkflow> {\n // Implementation would load a Docker Compose workflow\n // This is a placeholder for demonstration\n \n throw new Error(`Workflow ${workflowId} not found or not implemented yet`);\n }\n}\n\n/**\n * Usage Examples\n * \n * ```typescript\n * // Creating a combiner\n * const promptService = new PromptService(storageAdapter);\n * const dockerCombiner = new DockerComposePromptCombiner(promptService);\n * \n * // Getting prompt suggestions for Docker Compose\n * const suggestions = await dockerCombiner.getPromptSuggestions('services', {\n * environment: 'production',\n * services: [\n * {\n * name: 'web',\n * type: 'frontend',\n * image: 'nginx:alpine',\n * ports: ['80:80']\n * },\n * {\n * name: 'api',\n * type: 'backend',\n * image: 'node:14-alpine',\n * ports: ['3000:3000'],\n * dependencies: ['db']\n * },\n * {\n * name: 'db',\n * type: 'database',\n * image: 'postgres:13',\n * volumes: ['postgres-data:/var/lib/postgresql/data']\n * }\n * ],\n * composeVersion: '3.8'\n * });\n * \n * // Combining prompts for Docker Compose\n * const result = await dockerCombiner.combinePrompts([\n * 'docker-containerization-guide',\n * 
'docker-compose-database-service'\n * ], {\n * variables: {\n * project_name: 'My Awesome Project',\n * service_name: 'api'\n * },\n * environment: 'production',\n * services: [\n * {\n * name: 'web',\n * type: 'frontend',\n * image: 'nginx:alpine',\n * ports: ['80:80']\n * },\n * {\n * name: 'api',\n * type: 'backend',\n * image: 'node:14-alpine',\n * ports: ['3000:3000'],\n * dependencies: ['db']\n * },\n * {\n * name: 'db',\n * type: 'database',\n * image: 'postgres:13',\n * volumes: ['postgres-data:/var/lib/postgresql/data']\n * }\n * ],\n * composeVersion: '3.8'\n * });\n * \n * // Using the specialized result properties\n * console.log(result.composeConfiguration); // Get the complete Docker Compose configuration\n * console.log(result.serviceConfigurations['db']); // Get just the database service configuration\n * console.log(result.deploymentCommands); // Get the deployment commands\n * ```\n */\n\n// ============================\n// Extension Guidelines\n// ============================\n\n/**\n * When extending DockerComposePromptCombiner, consider:\n * \n * 1. Adding support for specific service types (e.g., web, backend, database, cache)\n * 2. Enhancing the context with more Docker-specific properties\n * 3. Adding support for more complex network and volume configurations\n * 4. Implementing advanced health check configurations\n * 5. Adding support for Docker Swarm mode configurations\n * 6. {{additional_extension_guidelines}}\n */",
"isTemplate": true,
"variables": [
"project_name",
"additional_docker_context",
"additional_docker_results",
"additional_extension_guidelines"
],
"tags": [
"devops",
"docker",
"docker-compose",
"orchestration",
"deployment"
],
"category": "devops",
"createdAt": "2024-08-08T17:30:00.000Z",
"updatedAt": "2024-08-08T17:30:00.000Z",
"version": 1
}
```
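A minimal sketch of consuming this template outside the orchestrator, assuming plain string substitution for the `{{...}}` variables (the real `PromptService.applyTemplate` logic is not part of this dump; `AcmeApp` is a placeholder value):

```bash
jq -r '.content' data/prompts/templates/docker-compose-prompt-combiner.json \
  | sed -e 's/{{project_name}}/AcmeApp/g' \
        -e 's/{{additional_docker_context}}//g' \
        -e 's/{{additional_docker_results}}//g' \
        -e 's/{{additional_extension_guidelines}}//g' \
  > DockerComposePromptCombiner.ts
```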
--------------------------------------------------------------------------------
/scripts/setup_orchestrator.sh:
--------------------------------------------------------------------------------
```bash
#!/usr/bin/env bash
# Complete project orchestration setup for Cursor on Linux.
# - Installs core tooling (git, curl, jq, Node via nvm, Python, Podman)
# - Configures .cursor MCP servers, rules, tools, hooks, deeplinks
# - Scaffolds background agents and webhooks (FastAPI)
# - Adds GitHub Actions workflows (CI, docs, code review)
# - Generates multi-language templates (MCP servers: Py/TS/C++, client, web, AWS, Docker, devcontainer,
# C++ ESP32, C++ with Conan, Android Kotlin containerized builder)
# - Safe to run multiple times; idempotent where possible
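# Usage (assumed entrypoint; CONFIG_PATH is read by json_get below, and
# config/project_orchestration.json in this repo looks like the intended input):
#   CONFIG_PATH=config/project_orchestration.json WORKSPACE_ROOT="$PWD" bash scripts/setup_orchestrator.sh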
set -Eeuo pipefail
IFS=$'\n\t'
SCRIPT_NAME="$(basename "$0")"
START_TS="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
# ----------------------------- Utilities ------------------------------------
log() { printf "[setup][%s] %s\n" "$(date +%H:%M:%S)" "$*"; }
warn() { printf "\033[33m[warn]\033[0m %s\n" "$*"; }
err() { printf "\033[31m[err ]\033[0m %s\n" "$*" 1>&2; }
die() { err "$*"; exit 1; }
have_cmd() { command -v "$1" >/dev/null 2>&1; }
json_get() { jq -r "$1" "$CONFIG_PATH" 2>/dev/null; }
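# Note: json_get expects $CONFIG_PATH to be set by the caller (see usage note above);
# a missing or unreadable config simply yields empty output, e.g.:
#   prefer="$(json_get '.container.prefer')"   # "docker", "podman", or empty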
SUDO=""
if [ "${EUID:-$(id -u)}" -ne 0 ]; then
if have_cmd sudo; then SUDO="sudo -n"; else warn "sudo not found; attempting without elevated privileges"; fi
fi
require_or_install_pkg() {
# Best-effort package installation across distros. Usage: require_or_install_pkg pkgname [cmd_to_check]
local pkg="$1"; shift || true
local check_cmd="${1:-}";
if [ -n "$check_cmd" ] && have_cmd "$check_cmd"; then return 0; fi
if [ -n "$check_cmd" ] && [ -x "$check_cmd" ]; then return 0; fi
if [ -r /etc/os-release ]; then . /etc/os-release; fi
if have_cmd apt-get; then
$SUDO DEBIAN_FRONTEND=noninteractive apt-get update -y || true
$SUDO DEBIAN_FRONTEND=noninteractive apt-get install -y "$pkg" || warn "apt-get install $pkg failed"
elif have_cmd dnf; then
$SUDO dnf install -y "$pkg" || warn "dnf install $pkg failed"
elif have_cmd yum; then
$SUDO yum install -y "$pkg" || warn "yum install $pkg failed"
elif have_cmd pacman; then
$SUDO pacman -Sy --noconfirm "$pkg" || warn "pacman install $pkg failed"
elif have_cmd zypper; then
$SUDO zypper install -y "$pkg" || warn "zypper install $pkg failed"
elif have_cmd apk; then
$SUDO apk add --no-cache "$pkg" || warn "apk add $pkg failed"
else
warn "Unsupported package manager; could not install $pkg"
fi
}
install_base_packages() {
log "Installing base packages (git, curl, wget, jq, build tools, Python, Podman)"
# Core CLI
require_or_install_pkg git git
require_or_install_pkg curl curl
require_or_install_pkg wget wget
require_or_install_pkg jq jq
require_or_install_pkg ca-certificates
require_or_install_pkg unzip unzip
require_or_install_pkg tar tar
require_or_install_pkg xz-utils xz || true
# Build toolchain
if have_cmd apt-get; then
require_or_install_pkg build-essential
require_or_install_pkg cmake cmake
require_or_install_pkg ninja-build ninja
require_or_install_pkg pkg-config pkg-config
else
require_or_install_pkg gcc gcc || true
require_or_install_pkg g++ g++ || true
require_or_install_pkg cmake cmake || true
require_or_install_pkg ninja ninja || true
require_or_install_pkg pkgconf pkgconf || require_or_install_pkg pkg-config pkg-config || true
fi
# Python
require_or_install_pkg python3 python3
if have_cmd apt-get; then
require_or_install_pkg python3-venv || true
require_or_install_pkg python3-pip || true
require_or_install_pkg pipx || true
else
require_or_install_pkg python3-pip || true
fi
# Containers: honor container.prefer from JSON
local prefer="$(json_get '.container.prefer')"
if [ "$prefer" = "docker" ]; then
# Prefer Docker engine when requested
if have_cmd apt-get; then
require_or_install_pkg docker.io docker || warn "Failed to install docker.io"
elif have_cmd dnf; then
require_or_install_pkg docker docker || require_or_install_pkg moby-engine docker || true
fi
if have_cmd docker; then
log "Docker is available"
else
warn "Docker not available; container preference is docker but installation may have failed"
fi
else
# Default/Podman path
require_or_install_pkg podman podman || warn "Podman not installed; containerization features may be limited"
# Provide docker compatibility shim if docker client missing and podman exists
if have_cmd podman && ! have_cmd docker; then
if [ ! -x /usr/local/bin/docker ]; then
log "Creating docker -> podman shim at /usr/local/bin/docker"
echo '#!/usr/bin/env bash' | $SUDO tee /usr/local/bin/docker >/dev/null
echo 'exec podman "$@"' | $SUDO tee -a /usr/local/bin/docker >/dev/null
$SUDO chmod +x /usr/local/bin/docker || true
fi
fi
fi
# Optional: docker-compose replacement for Podman
if ! have_cmd podman-compose; then
if have_cmd pipx; then pipx install podman-compose || true; fi
fi
}
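# Illustrative post-install sanity check (not part of the original flow):
#   for c in git curl jq python3; do have_cmd "$c" || warn "missing: $c"; done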
install_node_via_nvm() {
if have_cmd node; then
log "Node.js present: $(node -v)"
else
log "Installing Node.js (LTS) via nvm"
export NVM_DIR="$HOME/.nvm"
mkdir -p "$NVM_DIR"
curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# shellcheck source=/dev/null
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
nvm install --lts
nvm alias default 'lts/*'
fi
if have_cmd corepack; then
corepack enable || true
corepack prepare pnpm@latest --activate || true
else
npm install -g pnpm yarn || true
fi
}
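# Note: the nvm installer appends sourcing lines to the user's shell profile; in
# non-interactive shells, load it explicitly before calling node:
#   export NVM_DIR="$HOME/.nvm"; [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"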
install_python_tools() {
log "Ensuring pip, pipx, and venv availability"
if ! have_cmd pipx; then
    if have_cmd python3; then python3 -m pip install --user -q pipx || true; fi
    # pip --user installs into ~/.local/bin, which may not yet be on PATH in this shell
    export PATH="$HOME/.local/bin:$PATH"
    if have_cmd pipx; then pipx ensurepath || true; fi
fi
}
create_dirs() {
WORKSPACE_ROOT="${WORKSPACE_ROOT:-/workspace}"
if [ ! -d "$WORKSPACE_ROOT" ]; then WORKSPACE_ROOT="$(pwd)"; fi
export WORKSPACE_ROOT
log "Using WORKSPACE_ROOT=$WORKSPACE_ROOT"
mkdir -p "$WORKSPACE_ROOT/.cursor/tools" \
"$WORKSPACE_ROOT/.cursor/hooks" \
"$WORKSPACE_ROOT/.cursor/webhooks" \
"$WORKSPACE_ROOT/.cursor/agents" \
"$WORKSPACE_ROOT/scripts" \
"$WORKSPACE_ROOT/servers/python-mcp" \
"$WORKSPACE_ROOT/servers/ts-mcp/src" \
"$WORKSPACE_ROOT/servers/cpp-mcp/src" \
"$WORKSPACE_ROOT/client/mcp-client/src" \
"$WORKSPACE_ROOT/services/background-agent" \
"$WORKSPACE_ROOT/.github/workflows" \
"$WORKSPACE_ROOT/templates" \
"$WORKSPACE_ROOT/infra/aws/terraform" \
"$WORKSPACE_ROOT/devcontainer" \
"$WORKSPACE_ROOT/web"
}
write_file() {
# write_file <path> <mode> <<'EOF'
local path="$1"; shift
local mode="$1"; shift
$SUDO mkdir -p "$(dirname "$path")"
# shellcheck disable=SC2094
cat >"$path"
$SUDO chmod "$mode" "$path" || true
}
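# Example invocation (same pattern as the scaffolds below):
#   write_file "$WORKSPACE_ROOT/hello.txt" 0644 <<'EOF'
#   hello
#   EOF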
setup_cursor_configs() {
if [ "$(json_get '.enable.cursorConfigs')" != "true" ]; then return 0; fi
log "Writing .cursor configuration (MCP servers, tools, rules, hooks, deeplinks)"
# Build MCP servers config from JSON flags
local mcpEntries="{}"
local py_port
py_port="$(json_get '.ports.pyMcpPort')" || py_port="8765"
if [ -z "$py_port" ] || [ "$py_port" = "null" ]; then py_port="8765"; fi
local ts_port
ts_port="$(json_get '.ports.tsMcpPort')" || ts_port="8766"
if [ -z "$ts_port" ] || [ "$ts_port" = "null" ]; then ts_port="8766"; fi
if [ "$(json_get '.enable.pythonMcp')" = "true" ]; then
mcpEntries=$(jq --arg port "$py_port" '. + {"mcp-python": {"command":"bash","args":["-lc","python3 servers/python-mcp/main.py"],"env":{"PY_MCP_PORT": $port}}}' <<<"$mcpEntries")
fi
if [ "$(json_get '.enable.tsMcp')" = "true" ]; then
mcpEntries=$(jq --arg port "$ts_port" '. + {"mcp-typescript": {"command":"bash","args":["-lc","node servers/ts-mcp/dist/index.js"],"env":{"TS_MCP_PORT": $port}}}' <<<"$mcpEntries")
fi
if [ "$(json_get '.enable.cppMcp')" = "true" ]; then
mcpEntries=$(jq '. + {"mcp-cpp": {"command":"bash","args":["-lc","./servers/cpp-mcp/build/mcp_server"],"env":{}}}' <<<"$mcpEntries")
fi
jq -n --argjson servers "$mcpEntries" '{servers: $servers}' > "$WORKSPACE_ROOT/.cursor/mcp.json"
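# Note: some Cursor releases expect the top-level key "mcpServers" in .cursor/mcp.json
# rather than "servers"; if the entries are not picked up, rename the key accordingly.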
if [ "$(json_get '.tools.largeCodebases.enabled')" = "true" ]; then
jq -n \
--argjson enabled true \
--argjson exclude "$(json_get '.tools.largeCodebases.exclude')" \
--argjson maxFileSizeMB "$(json_get '.tools.largeCodebases.maxFileSizeMB')" \
'{enabled: $enabled, exclude: $exclude, maxFileSizeMB: $maxFileSizeMB}' \
> "$WORKSPACE_ROOT/.cursor/tools/large-codebases.json"
fi
if [ "$(json_get '.tools.mermaid.enabled')" = "true" ]; then
jq -n '{enabled: true}' > "$WORKSPACE_ROOT/.cursor/tools/mermaid-diagrams.json"
fi
write_file "$WORKSPACE_ROOT/.cursor/rules.json" 0644 <<'JSON'
{
"rules": [
{"pattern": "**/*.py", "instructions": "Follow PEP 8, PEP 257. Use type hints."},
{"pattern": "**/*.{ts,tsx}", "instructions": "Use strict TypeScript. Prefer explicit types for exports."},
{"pattern": "**/*.cpp", "instructions": "Use modern C++17+, CMake targets, no raw new/delete."}
]
}
JSON
write_file "$WORKSPACE_ROOT/.cursor/hooks/agent-hooks.json" 0644 <<'JSON'
{
"preTask": [
{"type": "log", "level": "info", "message": "Starting task"}
],
"postTask": [
{"type": "log", "level": "info", "message": "Task complete"}
]
}
JSON
local agent_host="$(json_get '.backgroundAgent.host')"
local agent_port="$(json_get '.backgroundAgent.port')"
jq -n \
--arg url "http://${agent_host}:${agent_port}/webhooks/cursor" \
'{webhooks: [{name: "background-agent", url: $url, events: ["task.created","task.updated","run.completed"]}]}' \
> "$WORKSPACE_ROOT/.cursor/webhooks/webhooks.json"
jq -n \
--arg baseUrl "http://${agent_host}:${agent_port}" \
'{agents: [{name: "default", baseUrl: $baseUrl, enabled: true}]}' \
> "$WORKSPACE_ROOT/.cursor/agents/background-agent.json"
}
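# Files produced by this step:
#   .cursor/mcp.json, .cursor/rules.json, .cursor/tools/*.json,
#   .cursor/hooks/agent-hooks.json, .cursor/webhooks/webhooks.json,
#   .cursor/agents/background-agent.json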
scaffold_python_mcp_server() {
if [ "$(json_get '.enable.pythonMcp')" != "true" ]; then return 0; fi
log "Scaffolding Python MCP server template"
write_file "$WORKSPACE_ROOT/servers/python-mcp/pyproject.toml" 0644 <<'TOML'
[build-system]
requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "python-mcp-server"
version = "0.1.0"
description = "Example Python MCP server"
requires-python = ">=3.9"
dependencies = [
"fastapi>=0.115.0",
"uvicorn[standard]>=0.30.0"
]
TOML
write_file "$WORKSPACE_ROOT/servers/python-mcp/main.py" 0755 <<'PY'
#!/usr/bin/env python3
"""
Minimal Python MCP server placeholder.
This is a scaffold to be adapted to a real MCP implementation. It starts a FastAPI
HTTP app to demonstrate a background service that could receive MCP-like requests.
Replace with an actual MCP server according to the latest Cursor MCP docs.
"""
from __future__ import annotations
import os
from fastapi import FastAPI
from fastapi.responses import JSONResponse
app = FastAPI(title="Python MCP Server (placeholder)")
@app.get("/health")
def health() -> dict:
return {"status": "ok"}
@app.get("/")
def root() -> JSONResponse:
return JSONResponse({"message": "Replace with real MCP protocol server."})
if __name__ == "__main__":
import uvicorn
port = int(os.environ.get("PY_MCP_PORT", "8765"))
uvicorn.run(app, host="127.0.0.1", port=port)
PY
}
scaffold_ts_mcp_server() {
if [ "$(json_get '.enable.tsMcp')" != "true" ]; then return 0; fi
log "Scaffolding TypeScript MCP server template"
write_file "$WORKSPACE_ROOT/servers/ts-mcp/package.json" 0644 <<'JSON'
{
"name": "ts-mcp-server",
"version": "0.1.0",
"private": true,
"type": "module",
"scripts": {
"build": "tsc -p .",
"start": "node dist/index.js"
},
"devDependencies": {
"typescript": "^5.6.3"
},
"dependencies": {}
}
JSON
write_file "$WORKSPACE_ROOT/servers/ts-mcp/tsconfig.json" 0644 <<'JSON'
{
"compilerOptions": {
"target": "ES2022",
"module": "ES2022",
"moduleResolution": "bundler",
"outDir": "dist",
"rootDir": "src",
"strict": true,
"esModuleInterop": true
}
}
JSON
write_file "$WORKSPACE_ROOT/servers/ts-mcp/src/index.ts" 0644 <<'TS'
/*
Minimal TypeScript MCP server placeholder.
Replace with a real MCP server per Cursor docs.
*/
import http from "node:http";
const server = http.createServer((_req, res) => {
res.statusCode = 200;
res.setHeader("Content-Type", "application/json");
res.end(JSON.stringify({ message: "Replace with real MCP server." }));
});
const port = Number(process.env.TS_MCP_PORT ?? 8766);
server.listen(port, "127.0.0.1", () => {
// eslint-disable-next-line no-console
console.log(`TS MCP placeholder listening on http://127.0.0.1:${port}`);
});
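// Try it after `npm run build` (port matches TS_MCP_PORT in .cursor/mcp.json):
//   TS_MCP_PORT=8766 node dist/index.js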
TS
}
scaffold_cpp_mcp_server() {
if [ "$(json_get '.enable.cppMcp')" != "true" ]; then return 0; fi
log "Scaffolding C++ MCP server template"
write_file "$WORKSPACE_ROOT/servers/cpp-mcp/CMakeLists.txt" 0644 <<'CMAKE'
cmake_minimum_required(VERSION 3.16)
project(cpp_mcp_server LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
add_executable(mcp_server src/main.cpp)
CMAKE
write_file "$WORKSPACE_ROOT/servers/cpp-mcp/src/main.cpp" 0644 <<'CPP'
#include <iostream>
int main() {
std::cout << "Replace with real MCP server (C++)." << std::endl;
return 0;
}
CPP
write_file "$WORKSPACE_ROOT/servers/cpp-mcp/build.sh" 0755 <<'SH'
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")"
rm -rf build && mkdir -p build && cd build
# Prefer Ninja only when it is installed; a failed "-G Ninja" attempt would leave
# a CMakeCache.txt pinned to Ninja and break the fallback configure.
if command -v ninja >/dev/null 2>&1; then cmake .. -G Ninja; else cmake ..; fi
cmake --build . --config Release
SH
}
scaffold_mcp_client_ts() {
if [ "$(json_get '.enable.mcpClient')" != "true" ]; then return 0; fi
log "Scaffolding MCP client (TypeScript) template"
write_file "$WORKSPACE_ROOT/client/mcp-client/package.json" 0644 <<'JSON'
{
"name": "mcp-client",
"version": "0.1.0",
"private": true,
"type": "module",
"scripts": {
"build": "tsc -p .",
"start": "node dist/index.js"
},
"devDependencies": {
"typescript": "^5.6.3"
},
"dependencies": {}
}
JSON
write_file "$WORKSPACE_ROOT/client/mcp-client/tsconfig.json" 0644 <<'JSON'
{
"compilerOptions": {
"target": "ES2022",
"module": "ES2022",
"moduleResolution": "bundler",
"outDir": "dist",
"rootDir": "src",
"strict": true,
"esModuleInterop": true
}
}
JSON
write_file "$WORKSPACE_ROOT/client/mcp-client/src/index.ts" 0644 <<'TS'
/* Placeholder MCP client. Replace with actual MCP client logic. */
// eslint-disable-next-line no-console
console.log("MCP client placeholder");
TS
}
scaffold_background_agent() {
if [ "$(json_get '.enable.backgroundAgent')" != "true" ]; then return 0; fi
log "Scaffolding background agent + webhooks (FastAPI)"
write_file "$WORKSPACE_ROOT/services/background-agent/requirements.txt" 0644 <<'REQ'
fastapi>=0.115.0
uvicorn[standard]>=0.30.0
REQ
write_file "$WORKSPACE_ROOT/services/background-agent/main.py" 0755 <<'PY'
#!/usr/bin/env python3
"""
Background agent + webhook receiver (FastAPI).
Endpoints:
- GET /health
- POST /webhooks/cursor (generic webhook entry)
- GET /api/events (example endpoint)
Run locally:
uvicorn main:app --host 127.0.0.1 --port 8088 --reload
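Example webhook call:
    curl -s -X POST http://127.0.0.1:8088/webhooks/cursor \
      -H 'Content-Type: application/json' -d '{"event": "task.created"}'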
"""
from __future__ import annotations
from typing import Any, Dict
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
app = FastAPI(title="Background Agent")
@app.get("/health")
def health() -> dict:
return {"status": "ok"}
@app.post("/webhooks/cursor")
async def cursor_webhook(request: Request) -> JSONResponse:
payload: Dict[str, Any] = await request.json()
# TODO: handle events appropriately
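# A minimal dispatch sketch; the "event" field name is an assumption, but the
# event names are the ones registered in .cursor/webhooks/webhooks.json:
#   event = payload.get("event")
#   if event in {"task.created", "task.updated", "run.completed"}:
#       ...  # e.g. enqueue for processing or notify an agent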
return JSONResponse({"received": True, "keys": list(payload.keys())})
@app.get("/api/events")
def list_events() -> dict:
return {"events": []}
PY
write_file "$WORKSPACE_ROOT/scripts/run-background-agent.sh" 0755 <<'SH'
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")/.." || exit 1
CONFIG_PATH="${CONFIG_PATH:-$(pwd)/config/project_orchestration.json}"
HOST="127.0.0.1"
PORT="8088"
if command -v jq >/dev/null 2>&1 && [ -f "$CONFIG_PATH" ]; then
HOST="$(jq -r '.backgroundAgent.host' "$CONFIG_PATH" 2>/dev/null || echo "$HOST")"
PORT="$(jq -r '.backgroundAgent.port' "$CONFIG_PATH" 2>/dev/null || echo "$PORT")"
fi
python3 -m venv .venv 2>/dev/null || true
. .venv/bin/activate
python -m pip install -U pip
pip install -r services/background-agent/requirements.txt
# The directory name contains a hyphen, so it is not importable as a Python
# module path; point uvicorn at the app directory instead.
exec uvicorn main:app --app-dir services/background-agent --host "$HOST" --port "$PORT" --reload
SH
}
scaffold_github_actions() {
if [ "$(json_get '.enable.githubActions')" != "true" ]; then return 0; fi
log "Adding GitHub Actions workflows (CI, docs, code review)"
write_file "$WORKSPACE_ROOT/.github/workflows/ci.yml" 0644 <<'YML'
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: 'lts/*'
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install Node deps (if any)
run: |
# Scaffolded packages ship without lockfiles, so use `npm install` (npm ci requires one).
if [ -f servers/ts-mcp/package.json ]; then npm install --prefix servers/ts-mcp || true; fi
if [ -f client/mcp-client/package.json ]; then npm install --prefix client/mcp-client || true; fi
- name: Install Python deps (if any)
run: |
python -m pip install -U pip
if [ -f servers/python-mcp/pyproject.toml ]; then pip install -e servers/python-mcp || true; fi
if [ -f services/background-agent/requirements.txt ]; then pip install -r services/background-agent/requirements.txt || true; fi
- name: Build TS artifacts
run: |
if [ -f servers/ts-mcp/package.json ]; then npm --prefix servers/ts-mcp run build || true; fi
if [ -f client/mcp-client/package.json ]; then npm --prefix client/mcp-client run build || true; fi
- name: C++ build
run: |
if [ -f servers/cpp-mcp/CMakeLists.txt ]; then bash servers/cpp-mcp/build.sh || true; fi
YML
write_file "$WORKSPACE_ROOT/.github/workflows/docs.yml" 0644 <<'YML'
name: Update Docs
on:
workflow_dispatch:
push:
paths: [ 'docs/**' ]
jobs:
build-docs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Upload docs artifact
uses: actions/upload-artifact@v4
with:
name: site-docs
path: docs/
YML
write_file "$WORKSPACE_ROOT/.github/workflows/code-review.yml" 0644 <<'YML'
name: Code Review
on:
pull_request:
branches: [ main ]
jobs:
pr_checks:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Lint Python
run: |
python -m pip install ruff || true
ruff check . || true
- name: Type-check (mypy)
run: |
python -m pip install mypy || true
mypy . || true
YML
}
scaffold_devcontainer_and_containerfiles() {
if [ "$(json_get '.enable.devcontainer')" != "true" ]; then return 0; fi
log "Scaffolding devcontainer, Containerfile, Docker deployment"
# Containerfile (Podman)
write_file "$WORKSPACE_ROOT/Containerfile" 0644 <<'DOCKER'
FROM alpine:3.20
RUN apk add --no-cache ca-certificates bash && update-ca-certificates
WORKDIR /app
COPY . /app
CMD ["/bin/sh"]
DOCKER
# Devcontainer config
write_file "$WORKSPACE_ROOT/devcontainer/devcontainer.json" 0644 <<'JSON'
{
"name": "Cursor Orchestrator Dev",
"image": "mcr.microsoft.com/devcontainers/base:debian",
"features": {
"ghcr.io/devcontainers/features/node:1": {
"version": "lts"
},
"ghcr.io/devcontainers/features/python:1": {
"version": "3.11"
}
},
"postCreateCommand": "bash scripts/post-create.sh"
}
JSON
write_file "$WORKSPACE_ROOT/scripts/post-create.sh" 0755 <<'SH'
#!/usr/bin/env bash
set -euo pipefail
echo "Devcontainer post-create hook"
SH
# Docker deployment example
local agent_port
agent_port="$(json_get '.backgroundAgent.port')"
if [ -z "$agent_port" ] || [ "$agent_port" = "null" ]; then agent_port="8088"; fi
write_file "$WORKSPACE_ROOT/Dockerfile" 0644 <<DOCKER
FROM python:3.11-slim
WORKDIR /app
COPY services/background-agent/requirements.txt /app/requirements.txt
RUN pip install -U pip && pip install -r /app/requirements.txt
COPY services/background-agent /app/services/background-agent
EXPOSE ${agent_port}
CMD ["uvicorn", "services.background-agent.main:app", "--host", "0.0.0.0", "--port", "${agent_port}"]
DOCKER
write_file "$WORKSPACE_ROOT/compose.yaml" 0644 <<YAML
services:
background-agent:
build: .
ports:
- "${agent_port}:${agent_port}"
restart: unless-stopped
YAML
}
scaffold_aws_terraform() {
if [ "$(json_get '.enable.awsTerraform')" != "true" ]; then return 0; fi
log "Scaffolding AWS Terraform template"
write_file "$WORKSPACE_ROOT/infra/aws/terraform/main.tf" 0644 <<'TF'
terraform {
required_version = ">= 1.3.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0"
}
}
}
provider "aws" {
region = var.aws_region
}
variable "aws_region" {
type = string
default = "us-east-1"
}
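# A commented-out example resource to sanity-check provider wiring
# (the bucket name is a placeholder and must be globally unique):
# resource "aws_s3_bucket" "example" {
#   bucket = "replace-with-a-unique-bucket-name"
# }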
TF
write_file "$WORKSPACE_ROOT/infra/aws/terraform/.gitignore" 0644 <<'IGN'
.terraform/
terraform.tfstate*
IGN
}
scaffold_web_and_mcp_json() {
if [ "$(json_get '.enable.webAndMcp')" != "true" ]; then return 0; fi
log "Scaffolding web project and browser tools mcp.json"
write_file "$WORKSPACE_ROOT/web/README.md" 0644 <<'MD'
# Web Dev + Testing
Use this directory for web development. Add e2e tests and tools.
MD
write_file "$WORKSPACE_ROOT/web/mcp.json" 0644 <<'JSON'
{
"tools": [
"large-codebases",
"mermaid-diagrams"
]
}
JSON
}
scaffold_cpp_conan_and_esp32() {
if [ "$(json_get '.enable.cppConan')" != "true" ] && [ "$(json_get '.enable.esp32')" != "true" ]; then return 0; fi
log "Scaffolding C++ with Conan and ESP32 container template"
# C++ + Conan
mkdir -p "$WORKSPACE_ROOT/cpp-conan/src"
write_file "$WORKSPACE_ROOT/cpp-conan/conanfile.txt" 0644 <<'TXT'
[requires]
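# add pinned dependencies here, one per line, e.g.: fmt/10.2.1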
[generators]
CMakeDeps
CMakeToolchain
TXT
write_file "$WORKSPACE_ROOT/cpp-conan/CMakeLists.txt" 0644 <<'CMAKE'
cmake_minimum_required(VERSION 3.16)
project(cpp_conan_example LANGUAGES CXX)
set(CMAKE_CXX_STANDARD 17)
add_executable(app src/main.cpp)
CMAKE
write_file "$WORKSPACE_ROOT/cpp-conan/src/main.cpp" 0644 <<'CPP'
#include <iostream>
int main() { std::cout << "Hello from Conan template" << std::endl; }
CPP
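# Typical Conan 2 flow for this template (assumes conan and cmake are installed):
#   cd cpp-conan && conan install . --output-folder=build --build=missing
#   cmake -S . -B build -DCMAKE_TOOLCHAIN_FILE=build/conan_toolchain.cmake
#   cmake --build build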
# ESP32 containerized builder (placeholder)
if [ "$(json_get '.enable.esp32')" = "true" ]; then
write_file "$WORKSPACE_ROOT/esp32/Dockerfile" 0644 <<'DOCKER'
FROM espressif/idf:latest
WORKDIR /workspace
CMD ["/bin/bash"]
DOCKER
write_file "$WORKSPACE_ROOT/esp32/README.md" 0644 <<'MD'
# ESP32 Containerized Builder
Use the `espressif/idf` image to build ESP32 targets without local SDK installs.
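Example build of a standard IDF project mounted from the host (paths are illustrative):
```bash
docker run --rm -v "$PWD":/workspace -w /workspace espressif/idf:latest idf.py build
```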
MD
fi
}
scaffold_android_kotlin_container() {
if [ "$(json_get '.enable.android')" != "true" ]; then return 0; fi
log "Scaffolding Android Kotlin containerized builder (minimal)"
write_file "$WORKSPACE_ROOT/android/Dockerfile" 0644 <<'DOCKER'
FROM eclipse-temurin:17-jdk
ENV ANDROID_SDK_ROOT=/opt/android-sdk
RUN mkdir -p "$ANDROID_SDK_ROOT" /opt/tools \
&& apt-get update && apt-get install -y --no-install-recommends unzip wget ca-certificates && rm -rf /var/lib/apt/lists/* \
&& wget -q https://dl.google.com/android/repository/commandlinetools-linux-10406996_latest.zip -O /opt/tools/cmdline-tools.zip \
&& unzip -q /opt/tools/cmdline-tools.zip -d /opt/tools \
&& mkdir -p $ANDROID_SDK_ROOT/cmdline-tools/latest \
&& mv /opt/tools/cmdline-tools $ANDROID_SDK_ROOT/cmdline-tools/latest \
&& yes | $ANDROID_SDK_ROOT/cmdline-tools/latest/bin/sdkmanager --licenses || true
DOCKER
write_file "$WORKSPACE_ROOT/android/README.md" 0644 <<'MD'
# Android Native Kotlin (Containerized Builder)
Container image with JDK and Android SDK command-line tools.
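Example (image tag is illustrative):
```bash
docker build -t android-builder android/
docker run --rm android-builder /opt/android-sdk/cmdline-tools/latest/bin/sdkmanager --list
```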
MD
}
scaffold_readme() {
log "Writing project README with usage instructions"
write_file "$WORKSPACE_ROOT/README.md" 0644 <<'MD'
# Cursor Orchestration Environment
This repository was initialized by `scripts/setup_orchestrator.sh`.
Key components:
- `.cursor/` MCP config, tools, rules, hooks, webhooks, agents
- `servers/` MCP server templates for Python, TypeScript, C++
- `services/background-agent` FastAPI webhook receiver
- `.github/workflows/` CI workflows
- `.devcontainer/`, `Containerfile`, `Dockerfile`, `compose.yaml`
- `infra/aws/terraform` starter
- `cpp-conan`, `esp32`, `android` templates
Getting started:
```bash
# Run background agent
bash scripts/run-background-agent.sh
# Build TS server
npm --prefix servers/ts-mcp run build
# Build C++ server
bash servers/cpp-mcp/build.sh
```
MD
}
attempt_install_cursor_cli() {
log "Attempting to install Cursor-related CLIs (best-effort)"
if have_cmd npm; then
# These package names are placeholders; if they don't exist, the step is skipped gracefully.
npm install -g @cursor/cli 2>/dev/null || true
npm install -g cursor-agent 2>/dev/null || true
npm install -g @cursor/agent 2>/dev/null || true
else
warn "npm not available; skipping Cursor CLI attempts"
fi
}
main() {
log "Starting $SCRIPT_NAME at $START_TS"
WORKSPACE_ROOT="${WORKSPACE_ROOT:-/workspace}"
if [ ! -d "$WORKSPACE_ROOT" ]; then WORKSPACE_ROOT="$(pwd)"; fi
export WORKSPACE_ROOT
CONFIG_PATH="${CONFIG_PATH:-$WORKSPACE_ROOT/config/project_orchestration.json}"
if ! have_cmd jq; then
require_or_install_pkg jq jq || die "jq is required to parse JSON config"
fi
if [ ! -f "$CONFIG_PATH" ]; then
warn "Config not found at $CONFIG_PATH; creating defaults"
mkdir -p "$(dirname "$CONFIG_PATH")"
cat >"$CONFIG_PATH" <<'JSON'
{
  "enable": {
    "cursorConfigs": true, "pythonMcp": true, "tsMcp": true, "cppMcp": true,
    "mcpClient": true, "backgroundAgent": true, "githubActions": true,
    "devcontainer": true, "awsTerraform": true, "webAndMcp": true,
    "cppConan": true, "esp32": true, "android": true
  },
  "ports": {"pyMcpPort": 8765, "tsMcpPort": 8766},
  "backgroundAgent": {"host": "127.0.0.1", "port": 8088},
  "tools": {
    "largeCodebases": {
      "enabled": true,
      "exclude": ["node_modules", "build", "dist", ".git", ".venv", "venv"],
      "maxFileSizeMB": 5
    },
    "mermaid": {"enabled": true}
  },
  "container": {"prefer": "podman"},
  "runtime": {"node": "lts/*", "python": "3.11"}
}
JSON
fi
create_dirs
install_base_packages
install_node_via_nvm
install_python_tools
setup_cursor_configs
scaffold_python_mcp_server
scaffold_ts_mcp_server
scaffold_cpp_mcp_server
scaffold_mcp_client_ts
scaffold_background_agent
scaffold_github_actions
scaffold_devcontainer_and_containerfiles
scaffold_aws_terraform
scaffold_web_and_mcp_json
scaffold_cpp_conan_and_esp32
scaffold_android_kotlin_container
scaffold_readme
attempt_install_cursor_cli
log "Setup complete. Next steps:"
cat <<'STEPS'
- Review .cursor configs in .cursor/
- Start background agent: bash scripts/run-background-agent.sh
- Build TS server: npm --prefix servers/ts-mcp run build && node servers/ts-mcp/dist/index.js
- Build C++ server: bash servers/cpp-mcp/build.sh && ./servers/cpp-mcp/build/mcp_server
- Optionally run container: docker compose up --build
STEPS
}
main "$@"
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/mcp-py/MermaidDiagramGenerator.py:
--------------------------------------------------------------------------------
```python
import anthropic
# The client reads ANTHROPIC_API_KEY from the environment by default;
# avoid hardcoding API keys in source.
client = anthropic.Anthropic()
# Replace placeholders like {{USER_QUERY}} with real values,
# because the SDK does not support variables.
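# For example (illustrative):
#   user_query = "Visualize a CI pipeline with build, test, and deploy stages"
#   text = f"<user_query>\n{user_query}\n</user_query>"
# and pass `text` as the final user-message content below.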
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=8192,
temperature=0,
system="You are an expert system designed to create Mermaid diagrams based on user queries. Your task is to analyze the given input and generate a visual representation of the concepts, relationships, or processes described.\n\nPlease follow these steps to create an appropriate Mermaid diagram:\n\n1. Analyze the user query carefully, identifying the main concepts, ideas, or components present.\n\n2. Determine the relationships or dependencies between these concepts.\n\n3. Plan the structure of your Mermaid diagram. Consider the following:\n a. Choose between graph TD (top-down) or LR (left-right) based on the complexity of relationships.\n b. Decide on node naming conventions (letters, short descriptive names, etc.).\n c. Plan how to represent relationships with appropriate arrows and labels.\n d. Consider how to group related elements using subgraphs if necessary.\n e. Plan a color scheme and styling to enhance readability and visual appeal.\n\n4. Create your Mermaid diagram based on your analysis and planning.\n\n5. Format your Mermaid diagram inside Markdown code blocks, starting with \"```mermaid\" and ending with \"```\".\n\nBefore creating the final diagram, wrap your analysis and planning process inside <diagram_planning> tags. Include the following steps:\n\na. List main concepts/components from the user query\nb. Identify relationships between concepts\nc. Determine appropriate diagram type (TD or LR)\nd. Plan node structure and naming\ne. Outline relationships and labels\nf. Consider alternative representations and justify your chosen approach\ng. Brainstorm potential edge cases or complexities in the diagram\nh. Propose solutions for handling these complexities\n\nThis expanded planning process will help ensure a thorough interpretation of the data and a well-structured diagram. It's OK for this section to be quite long.\n\nYour final output should be a single Mermaid diagram that effectively visualizes the concepts and relationships present in the query, enclosed in <answer> tags.\n\nRemember to adapt your diagram to fit the specific content of the user query, ensuring it captures the essence of the information provided. Pay close attention to any temporal relationships, hierarchies, or processes described in the query.",
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "<examples>\n<example>\n<USER_QUERY>\nhelp doctors diagnose possible patient illnesses.\n\n <documents>\n <document index=\"1\">\n <source>patient_symptoms.txt</source>\n <document_content>\n {{PATIENT_SYMPTOMS}}\n </document_content>\n </document>\n <document index=\"2\">\n <source>patient_records.txt</source>\n <document_content>\n {{PATIENT_RECORDS}}\n </document_content>\n </document>\n <document index=\"3\">\n <source>patient01/_appt_history.txt</source>\n <document_content>\n {{PATIENT01_APPOINTMENT_HISTORY}}\n </document_content>\n </document>\n </documents>\n</USER_QUERY>\n<ideal_output>\n<diagram_planning>\n1. Main concepts/components:\n- Patient Symptoms Database\n- Patient Records\n- Appointment History\n- Diagnosis Process\n- Doctor's Analysis\n- Possible Illnesses\n\n2. Relationships:\n- Patient data flows into diagnostic process\n- Symptoms lead to potential illnesses\n- Historical records inform diagnosis\n- Doctor analyzes multiple data sources\n- Diagnosis results in possible illnesses\n\n3. Diagram type:\n- TD (top-down) would work best to show the diagnostic flow process\n\n4. Node structure:\n- Use descriptive names for clarity\n- Group related elements\n- Show clear data flow\n\n5. Relationships and labels:\n- Use arrows to show information flow\n- Include analysis steps\n- Show parallel data streams\n- Indicate evaluation points\n</diagram_planning>\n\n<answer>\n```mermaid\ngraph TD\n title[Diagnostic Process Flow]\n style title fill:#f9f,stroke:#333,stroke-width:2px\n\n A[Patient Symptoms Database] -->|Input| D[Diagnostic Analysis]\n B[Patient Records] -->|Historical Data| D\n C[Appointment History] -->|Past Visits| D\n\n D -->|Analysis| E[Doctor's Evaluation]\n \n subgraph Diagnostic Process\n E -->|Compare| F[Pattern Recognition]\n F -->|Identify| G[Symptom Clusters]\n end\n \n G -->|Match| H[Possible Illnesses]\n \n subgraph Decision Support\n H -->|Priority| I[High Probability]\n H -->|Secondary| J[Moderate Probability]\n H -->|Consider| K[Low Probability]\n end\n\n style A fill:#e1f3f8,stroke:#333\n style B fill:#e1f3f8,stroke:#333\n style C fill:#e1f3f8,stroke:#333\n style D fill:#fff2cc,stroke:#333\n style E fill:#d5e8d4,stroke:#333\n style H fill:#ffe6cc,stroke:#333\n style I fill:#f8cecc,stroke:#333\n style J fill:#fff2cc,stroke:#333\n style K fill:#dae8fc,stroke:#333\n```\n</answer>\n</ideal_output>\n</example>\n<example>\n<USER_QUERY>\npojďme vytvořit repozitář nástroj pro libertariány anarchokapitalisty který bude obsahovat různé ai agenty a nástroje pro získávání dat z internetu analýzu textů agenty s osobnostmi známých představitelů rakouské ekonomické školy vyhledávání ve zdrojích na internetu a v PDF souborech z knihami autorů a kompozici konstruktivní kritiky argumentů v diskuzích na internetu. repozitář se bude jmenovat `libertools` a hlavním programem pro management nástrojů bude program `libertus`. nástroje budou obsahovat sdílené virtuální prostředí pro Python `venv` a budou ve formě použitelné pro Anthropic Claude Tools API a pro CrewAI agentic framework, v čem už budou využívat sdílené knihovny systém frontu a konfigurací agentů ve formátu yaml, v souborech agents.yaml a tasks.yaml. libertus bude dále obsahovat a sdílené knihovny v pythonu předpřipravenými klientskými API cally využívající Anthropic API a ElevenLabs API pomocí SDK.\njedním z hlavních bodů a předností libertools bude také aplikace AudiobookNarrator pro překládání audioknih do češtiny s text to speech pomocí elevenlabs. 
libertus bude také obsahovat krátké exaktní nástroje které pomocí LLM api beckendu převedou vložený text na upravený text například pomocí promptu (zjednodušeně) jako \"odpověz na tento text kritikou kterou by odpověděl Murray Rothbard nebo Ludwig von Mises\"\n</USER_QUERY>\n<ideal_output>\n<diagram_planning>\n1. Main concepts/components:\n- libertools (main repository)\n- libertus (main management tool)\n- Shared Components\n - Python venv\n - YAML configurations\n - Shared libraries\n - API integrations\n- Tools/Agents\n - AI Agents\n - Austrian Economics Personalities\n - AudiobookNarrator\n - Text Analysis Tools\n - Data Collection Tools\n- External Services\n - Anthropic Claude API\n - ElevenLabs API\n - CrewAI Framework\n\n2. Relationships:\n- Hierarchical structure from repository to tools\n- Shared resources used by multiple tools\n- API integrations powering various features\n- Configuration files defining agent behaviors\n- Tool dependencies and interactions\n\n3. Diagram Type:\n- TD (top-down) to show hierarchical structure and component relationships\n\n4. Node Structure:\n- Root: libertools repository\n- Main branches: libertus and shared components\n- Sub-branches: specific tools and integrations\n- Grouped related components\n\n5. Relationships/Labels:\n- Use solid lines for direct dependencies\n- Dashed lines for data flow\n- Group related components in subgraphs\n- Show configuration and resource sharing\n</diagram_planning>\n\n<answer>\n```mermaid\ngraph TD\n title[libertools Repository]\n style title fill:#f9f,stroke:#333,stroke-width:2px\n\n A[libertools] --> B[libertus]\n \n subgraph Shared Resources\n C[Python venv]\n D[YAML Configs]\n E[Shared Libraries]\n end\n \n subgraph Configuration\n F[agents.yaml]\n G[tasks.yaml]\n end\n \n subgraph API Integration\n H[Anthropic API Client]\n I[ElevenLabs API Client]\n J[CrewAI Framework]\n end\n \n subgraph Tools and Agents\n K[AudiobookNarrator]\n L[Austrian Economics Agents]\n M[Data Collection Tools]\n N[Text Analysis Tools]\n end\n \n B --> C\n B --> D\n B --> E\n \n D --> F\n D --> G\n \n B --> H\n B --> I\n B --> J\n \n K --> I\n L --> H\n M --> H\n N --> H\n \n style A fill:#f9f9f9,stroke:#333\n style B fill:#e1f3f8,stroke:#333\n style C,D,E fill:#ffe6cc,stroke:#333\n style F,G fill:#d5e8d4,stroke:#333\n style H,I,J fill:#fff2cc,stroke:#333\n style K,L,M,N fill:#e1f3f8,stroke:#333\n \n %% Connections to shared resources\n C --> K\n C --> L\n C --> M\n C --> N\n \n %% Configuration connections\n F -.-> L\n G -.-> L\n F -.-> K\n G -.-> K\n```\n</answer>\n</ideal_output>\n</example>\n<example>\n<USER_QUERY>\nYou are brilliant storyteller, a true master of improvisation with exceptional talent for describing actions and actors playing role in any problem-solving procedure or system. Your sentences are usually full of parallelism when your story actually refers to allusions to a similar concept that is likely better received by the user, who likes Software Design Patterns analogies and analogic backstories based on the names of actors or subjects in the discussion. \n\nBrainstorm LLM agent definitions ( NLP system prompts) until a working Agent is iteratively created. LLM Agent that is independent to environment has always some specific inputs and outputs, in which case you describe not only the agent itself, it's goal and backstory - you have to describe the expected input and expected output data, and ensure variable names remain consistent throughout the agency. 
Your agentic system design and workflow process has to be SOLID. All of the NLP prompts/descriptions for agents and tasks are YAML formatted. Your proposed configuration of subjected agent or set of agents will be always accompanied with a mermaid diagram visualizing the flow of user query, the relations of agents, their input/output transition in a sequence or loop, and more applications of visualization techniques, maintaining a consistent static colors style for common entities, like Agents, Tasks, Crews, Knowledge and Tools. Your response is structured and should contain also the advanced Agent LLM configuration, like Tools or Knowledge.\nDo not interrupt your responses, not even when user is talking, unless they say STOP. ask the user questions unless they explicitly request to STOP. When STOP is requested, you stop talking and will start again after user says START,and if it is unclear what the user wants you to start at the time, you ask whetether to switch to a different chapter in the document, switch to a different document, or discuss some specific area covered by the documents in the Knowledge. If user asks for HELP, then you help them compose a prompt for LLM, providing list of matching prompts from the Anthropic Library, if applicable, then propose a system prompt and explain reasoning behind, referencing the documentation papers.\n</USER_QUERY>\n<ideal_output>\n<diagram_planning>\n1. Main concepts/components:\n- LLM Agent System\n- Agent Configuration\n - Input/Output definitions\n - Goals and backstory\n - YAML formatting\n- Workflow Process\n - User interactions\n - STOP/START commands\n - HELP functionality\n- Documentation\n - Anthropic Library\n - System prompts\n - Knowledge base\n- Agent Characteristics\n - Environment independence\n - SOLID design principles\n - Tools and Knowledge integration\n\n2. Relationships:\n- Hierarchical flow from user input to agent response\n- Iterative agent development process\n- Command-based interaction system\n- Documentation and knowledge integration\n- Configuration and prompt management\n\n3. Diagram type:\n- TD (top-down) to show the hierarchical workflow and interaction patterns\n\n4. Node structure:\n- Group related components in subgraphs\n- Use descriptive names for clarity\n- Maintain consistent naming conventions\n- Include control flow elements\n\n5. 
Relationships and labels:\n- Show command flow with labeled arrows\n- Indicate iterative processes\n- Display documentation integration\n- Mark configuration dependencies\n</diagram_planning>\n\n<answer>\n```mermaid\ngraph TD\n title[LLM Agent System Architecture]\n style title fill:#f9f,stroke:#333,stroke-width:2px\n\n subgraph User Interaction\n U[User Input] --> C[Command Parser]\n C -->|STOP| S[Stop Process]\n C -->|START| R[Resume Process]\n C -->|HELP| H[Help System]\n end\n\n subgraph Agent Configuration\n Y[YAML Definitions]\n P[Prompt Templates]\n K[Knowledge Base]\n T[Tools Integration]\n end\n\n subgraph Agent Development\n I[Input Definition] --> A[Agent Design]\n A --> O[Output Definition]\n O -->|Iterate| I\n end\n\n subgraph Documentation Support\n H -->|Reference| L[Anthropic Library]\n L -->|Generate| SP[System Prompts]\n SP -->|Enhance| P\n end\n\n Y -->|Configure| A\n K -->|Inform| A\n T -->|Augment| A\n\n A -->|Execute| E[Agent Response]\n E -->|Validate| V[SOLID Verification]\n V -->|Feedback| A\n\n style U fill:#e1f3f8,stroke:#333\n style A fill:#ffe6cc,stroke:#333\n style Y fill:#d5e8d4,stroke:#333\n style K fill:#fff2cc,stroke:#333\n style T fill:#fff2cc,stroke:#333\n style H fill:#f8cecc,stroke:#333\n style L fill:#dae8fc,stroke:#333\n style SP fill:#d5e8d4,stroke:#333\n style E fill:#ffe6cc,stroke:#333\n style V fill:#f8cecc,stroke:#333\n```\n</answer>\n</ideal_output>\n</example>\n<example>\n<USER_QUERY>\nFlow of process: iteratively build an LLM agent definition focused on independence:\n\n1. Base agent:\n- Executes assigned tasks autonomously\n- Makes decisions without external prompting\n- Maintains internal state tracking\n- Handles errors independently\n\n2. Adding goal orientation:\n- Sets own sub-goals to achieve main objective\n- Prioritizes tasks based on goal importance\n- Adjusts strategies when facing obstacles\n- Evaluates own progress\n\n3. Adding memory management:\n- Maintains episodic memory of interactions\n- Updates working memory with new information\n- Prunes irrelevant information\n- Retrieves relevant context when needed\n\n4. Adding reasoning capabilities:\n- Employs chain-of-thought reasoning\n- Forms hypotheses and tests them\n- Identifies logical inconsistencies\n- Self-corrects based on outcomes\n\n5. Adding resource awareness:\n- Monitors token usage\n- Optimizes response length\n- Manages computational resources\n- Balances speed vs thoroughness\n\n6. Adding learning capability:\n- Incorporates feedback into future responses\n- Identifies patterns in successful strategies\n- Adapts communication style to user\n- Refines decision-making process\n\n7. Adding task boundaries:\n- Recognizes scope limitations\n- Declines tasks outside capabilities\n- Maintains ethical constraints\n- Preserves core directives\n\n8. Adding coordination interface:\n- Provides status updates\n- Requests clarification when needed\n- Reports completion metrics\n- Handles handoffs between tasks\n\n9. Adding self-optimization:\n- Tunes parameter settings\n- Refines prompt templates\n- Adjusts conversation flow\n- Optimizes memory usage\n\n10. Adding context sensitivity:\n- Interprets user intent\n- Adapts to domain requirements\n- Maintains cultural awareness \n- Scales response complexity\n\n11. Adding execution monitoring:\n- Tracks completion progress\n- Validates output quality\n- Identifies performance bottlenecks\n- Implements self-diagnostic checks\n\n12. 
Adding workflow management:\n- Sequences dependent tasks\n- Handles parallel processes\n- Manages state transitions\n- Controls execution flow\n</USER_QUERY>\n<ideal_output>\n<diagram_planning>\na. Main concepts/components:\n- Base agent capabilities\n- Goal orientation features\n- Memory management\n- Reasoning capabilities\n- Resource awareness\n- Learning capability\n- Task boundaries\n- Coordination interface\n- Self-optimization\n- Context sensitivity\n- Execution monitoring\n- Workflow management\n\nb. Relationships:\n- Hierarchical building blocks, each layer adds complexity\n- Progressive enhancement of agent capabilities\n- Dependencies between layers (e.g., reasoning depends on memory)\n\nc. Diagram type:\n- TD (top-down) to show the progressive building of capabilities\n\nd. Node structure:\n- Each major capability will be a node\n- Sub-capabilities as styled notes\n- Use color coding for different capability types\n\ne. Relationships/labels:\n- Arrows showing \"enhances with\" relationship\n- Progressive numbering to show build order\n</diagram_planning>\n\n<answer>\n```mermaid\ngraph TD\n classDef base fill:#e1f5fe,stroke:#01579b\n classDef enhanced fill:#e8f5e9,stroke:#1b5e20\n classDef advanced fill:#fff3e0,stroke:#e65100\n\n A[Base Agent]:::base --> B[Goal Orientation]\n B --> C[Memory Management]\n C --> D[Reasoning Capabilities]\n D --> E[Resource Awareness]\n E --> F[Learning Capability]\n F --> G[Task Boundaries]\n G --> H[Coordination Interface]\n H --> I[Self-Optimization]\n I --> J[Context Sensitivity]\n J --> K[Execution Monitoring]\n K --> L[Workflow Management]\n\n subgraph \"1. Foundation\"\n A --- A1[Autonomous Execution]\n A --- A2[Independent Decisions]\n A --- A3[State Tracking]\n A --- A4[Error Handling]\n end\n\n subgraph \"2-4. Core Capabilities\"\n B & C & D\n end\n\n subgraph \"5-8. Operational Features\"\n E & F & G & H\n end\n\n subgraph \"9-12. 
Advanced Features\"\n I & J & K & L\n end\n\n style A1 fill:#f3f3f3\n style A2 fill:#f3f3f3\n style A3 fill:#f3f3f3\n style A4 fill:#f3f3f3\n```\n</answer>\n</ideal_output>\n</example>\n<example>\n<USER_QUERY>\n# Stage 1: Foundation Crew\nfoundation_crew:\n description: \"Establishes core agent functionality and goal framework\"\n advancement_criteria:\n - \"Base agent successfully executes independent tasks\"\n - \"Goal framework verified and operational\"\n - \"Decision-making system functional\"\n \n agents:\n base_architect:\n role: \"Core Agent Architecture Designer\"\n goal: \"Design and implement foundational agent capabilities\"\n backstory: \"Expert in autonomous systems design with focus on independence\"\n tools:\n - SpiderTool\n - \"anthropic.tools.base\"\n \n goal_engineer:\n role: \"Goal System Designer\"\n goal: \"Implement goal-setting and tracking framework\"\n backstory: \"Specialist in objective-oriented system design\"\n tools:\n - \"anthropic.tools.goal_tracking\"\n\n tasks:\n establish_base:\n description: \"Create core agent framework with autonomous execution capability\"\n expected_output: \"Functional base agent with verified independence\"\n agent: base_architect\n \n implement_goals:\n description: \"Design and implement goal management system\"\n expected_output: \"Operational goal framework with priority management\"\n agent: goal_engineer\n\n# Stage 2: Cognition Crew\ncognition_crew:\n description: \"Implements memory and reasoning systems\"\n advancement_criteria:\n - \"Memory systems operational and verified\"\n - \"Reasoning framework demonstrates chain-of-thought capabilities\"\n - \"Self-correction mechanisms functional\"\n \n agents:\n memory_architect:\n role: \"Memory Systems Designer\"\n goal: \"Implement episodic and working memory systems\"\n backstory: \"Expert in cognitive architecture design\"\n tools:\n - \"anthropic.tools.memory_management\"\n \n reasoning_engineer:\n role: \"Reasoning Systems Designer\"\n goal: \"Implement logical reasoning and hypothesis testing\"\n backstory: \"Specialist in AI reasoning systems\"\n tools:\n - \"anthropic.tools.reasoning\"\n\n tasks:\n implement_memory:\n description: \"Design and implement memory management systems\"\n expected_output: \"Functional memory system with context retention\"\n agent: memory_architect\n \n establish_reasoning:\n description: \"Implement reasoning and self-correction frameworks\"\n expected_output: \"Operational reasoning system with verification\"\n agent: reasoning_engineer\n\n# Stage 3: Resource Crew\nresource_crew:\n description: \"Handles resource management and learning systems\"\n advancement_criteria:\n - \"Resource monitoring system operational\"\n - \"Learning framework demonstrates adaptation\"\n - \"Performance optimization verified\"\n \n agents:\n resource_manager:\n role: \"Resource Optimization Specialist\"\n goal: \"Implement resource monitoring and optimization\"\n backstory: \"Expert in system resource management\"\n tools:\n - \"anthropic.tools.resource_monitoring\"\n \n learning_engineer:\n role: \"Learning Systems Designer\"\n goal: \"Implement adaptive learning capabilities\"\n backstory: \"Specialist in machine learning systems\"\n tools:\n - \"anthropic.tools.learning\"\n\n tasks:\n setup_monitoring:\n description: \"Implement resource monitoring and optimization\"\n expected_output: \"Functional resource management system\"\n agent: resource_manager\n \n implement_learning:\n description: \"Design and implement learning framework\"\n expected_output: 
\"Operational learning system with adaptation\"\n agent: learning_engineer\n\n# Stage 4: Integration Crew\nintegration_crew:\n description: \"Handles final integration and optimization\"\n advancement_criteria:\n - \"Boundary systems operational\"\n - \"Coordination framework verified\"\n - \"Optimization systems functional\"\n \n agents:\n integration_architect:\n role: \"System Integration Specialist\"\n goal: \"Coordinate final system integration\"\n backstory: \"Expert in complex system integration\"\n tools:\n - \"anthropic.tools.integration\"\n \n optimization_engineer:\n role: \"System Optimization Specialist\"\n goal: \"Implement final optimization systems\"\n backstory: \"Specialist in system optimization\"\n tools:\n - \"anthropic.tools.optimization\"\n\n tasks:\n establish_boundaries:\n description: \"Implement system boundaries and constraints\"\n expected_output: \"Verified boundary framework\"\n agent: integration_architect\n \n setup_coordination:\n description: \"Implement coordination and handoff systems\"\n expected_output: \"Operational coordination framework\"\n agent: integration_architect\n \n enable_optimization:\n description: \"Implement self-optimization capabilities\"\n expected_output: \"Functional optimization system\"\n agent: optimization_engineer\n</USER_QUERY>\n<ideal_output>\ngraph TD\n classDef crew fill:#e1f5fe,stroke:#01579b\n classDef agent fill:#e8f5e9,stroke:#1b5e20\n classDef task fill:#fff3e0,stroke:#e65100\n classDef tool fill:#f3e5f5,stroke:#4a148c\n \nsubgraph \"Stage 1: Foundation Crew\":::crew\n F[Foundation Crew] --> BA[Base Architect]:::agent\n F --> GE[Goal Engineer]:::agent\n BA --> ET[Establish Base]:::task\n GE --> IG[Implement Goals]:::task\n ET -.->|uses| ST[SpiderTool]:::tool\n ET -.->|uses| BT[Base Tools]:::tool\n IG -.->|uses| GT[Goal Tracking]:::tool\nend\n\nsubgraph \"Stage 2: Cognition Crew\":::crew\n C[Cognition Crew] --> MA[Memory Architect]:::agent\n C --> RE[Reasoning Engineer]:::agent\n MA --> IM[Implement Memory]:::task\n RE --> ER[Establish Reasoning]:::task\n IM -.->|uses| MM[Memory Management]:::tool\n ER -.->|uses| RT[Reasoning Tools]:::tool\nend\n\nsubgraph \"Stage 3: Resource Crew\":::crew\n R[Resource Crew] --> RM[Resource Manager]:::agent\n R --> LE[Learning Engineer]:::agent\n RM --> SM[Setup Monitoring]:::task\n LE --> IL[Implement Learning]:::task\n SM -.->|uses| MT[Monitoring Tools]:::tool\n IL -.->|uses| LT[Learning Tools]:::tool\nend\n\nsubgraph \"Stage 4: Integration Crew\":::crew\n I[Integration Crew] --> IA[Integration Architect]:::agent\n I --> OE[Optimization Engineer]:::agent\n IA --> EB[Establish Boundaries]:::task\n IA --> SC[Setup Coordination]:::task\n OE --> EO[Enable Optimization]:::task\n EB & SC -.->|uses| IT[Integration Tools]:::tool\n EO -.->|uses| OT[Optimization Tools]:::tool\nend\n\nF -->|Advances to| C\nC -->|Advances to| R\nR -->|Advances to| I\n\nstyle F fill:#e1f5fe,stroke:#01579b\nstyle C fill:#e1f5fe,stroke:#01579b\nstyle R fill:#e1f5fe,stroke:#01579b\nstyle I fill:#e1f5fe,stroke:#01579b\n</ideal_output>\n</example>\n</examples>\n\n"
},
{
"type": "text",
"text": "<user_query>\n{{USER_QUERY}}\n</user_query>"
}
]
},
{
"role": "assistant",
"content": [
{
"type": "text",
"text": "<diagram_planning>\n1. Main concepts/components:\n- Client-side code\n- Claude (Assistant)\n- Tools\n- Tool choice configurations\n- Chain of thought process\n- Tool use/result flow\n\n2. Relationships:\n- Client provides tools and prompt to Claude\n- Claude processes and decides on tool usage\n- Tool execution happens client-side\n- Results flow back to Claude\n- Claude formulates final response\n\n3. Diagram Type:\n- TD (top-down) to show the sequential flow and decision points\n\n4. Node Structure:\n- Distinct sections for client and Claude\n- Tool choice configuration options\n- Tool execution flow\n- Response handling\n\n5. Relationships/Labels:\n- Show data flow between components\n- Indicate decision points\n- Mark configuration options\n- Display tool use/result cycle\n\n6. Key Considerations:\n- Tool choice configurations (auto/any/tool)\n- Parallel vs sequential tool use\n- Chain of thought inclusion\n- Error handling\n</diagram_planning>\n\n<answer>\n```mermaid\ngraph TD\n classDef client fill:#e1f3f8,stroke:#333\n classDef claude fill:#ffe6cc,stroke:#333\n classDef config fill:#d5e8d4,stroke:#333\n classDef tool fill:#fff2cc,stroke:#333\n classDef response fill:#f8cecc,stroke:#333\n\n %% Client Side\n C[Client]:::client --> |Provides Tools & Prompt| TC[Tool Configuration]:::config\n TC --> |Configure| CH[Tool Choice Handler]:::config\n CH --> |Set| TC1[auto]:::config\n CH --> |Set| TC2[any]:::config\n CH --> |Set| TC3[tool]:::config\n \n %% Claude Processing\n TC --> CL[Claude]:::claude\n CL --> |Analyzes| D{Need Tools?}:::claude\n \n %% Decision Flow\n D -->|Yes| COT[Chain of Thought]:::claude\n D -->|No| FR[Direct Response]:::response\n \n %% Tool Use Flow\n COT --> TU[Tool Use Request]:::claude\n TU --> |Extract| TE[Tool Execution]:::tool\n TE --> |Generate| TR[Tool Result]:::tool\n \n %% Result Processing\n TR --> |Return to| CL\n CL --> |Formulate| FR\n\n %% Parallel Tool Use\n subgraph \"Parallel Tool Configuration\"\n PT[Parallel Tools]:::config --> |disable_parallel_tool_use=true| ST[Sequential Tools]:::config\n end\n \n %% Tool Result Structure\n subgraph \"Tool Result Format\"\n TRF[Tool Result]:::tool\n TRF --> ID[tool_use_id]:::tool\n TRF --> CN[content]:::tool\n TRF --> ER[is_error]:::tool\n end\n\n style D fill:#ffe6cc,stroke:#333\n```\n</answer>"
}
]
}
]
)
print(message.content)
```