This is page 6 of 21. Use http://codebase.md/sparesparrow/mcp-project-orchestrator?page={x} to view the full context.
# Directory Structure
```
├── .cursorrules
├── .env.example
├── .github
│   └── workflows
│       ├── build.yml
│       ├── ci-cd.yml
│       ├── ci.yml
│       ├── deploy.yml
│       ├── ecosystem-monitor.yml
│       ├── fan-out-orchestrator.yml
│       └── release.yml
├── .gitignore
├── .pre-commit-config.yaml
├── AUTOMOTIVE_CAMERA_SYSTEM_SUMMARY.md
├── automotive-camera-system
│   ├── docs
│   │   └── IMPLEMENTACE_CS.md
│   └── README.md
├── AWS_MCP_IMPLEMENTATION_SUMMARY.md
├── AWS_MCP_QUICKSTART.md
├── AWS_SIP_TRUNK_DEPLOYMENT_COMPLETE.md
├── aws-sip-trunk
│   ├── .gitignore
│   ├── config
│   │   ├── extensions.conf.j2
│   │   └── pjsip.conf.j2
│   ├── DEPLOYMENT_SUMMARY.md
│   ├── docs
│   │   ├── DEPLOYMENT.md
│   │   └── TROUBLESHOOTING.md
│   ├── PROJECT_INDEX.md
│   ├── pyproject.toml
│   ├── QUICKSTART.md
│   ├── README.md
│   ├── scripts
│   │   ├── deploy-asterisk-aws.sh
│   │   └── user-data.sh
│   ├── terraform
│   │   ├── ec2.tf
│   │   ├── main.tf
│   │   ├── monitoring.tf
│   │   ├── networking.tf
│   │   ├── outputs.tf
│   │   ├── storage.tf
│   │   ├── terraform.tfvars.example
│   │   └── variables.tf
│   ├── tests
│   │   └── test_sip_connectivity.py
│   └── VERIFICATION_CHECKLIST.md
├── CLAUDE.md
├── component_templates.json
├── conanfile.py
├── config
│   ├── default.json
│   └── project_orchestration.json
├── Containerfile
├── cursor-templates
│   └── openssl
│       ├── linux-dev.mdc.jinja2
│       └── shared.mdc.jinja2
├── data
│   └── prompts
│       └── templates
│           ├── advanced-multi-server-template.json
│           ├── analysis-assistant.json
│           ├── analyze-mermaid-diagram.json
│           ├── architecture-design-assistant.json
│           ├── code-diagram-documentation-creator.json
│           ├── code-refactoring-assistant.json
│           ├── code-review-assistant.json
│           ├── collaborative-development.json
│           ├── consolidated-interfaces-template.json
│           ├── could-you-interpret-the-assumed-applicat.json
│           ├── data-analysis-template.json
│           ├── database-query-assistant.json
│           ├── debugging-assistant.json
│           ├── development-system-prompt-zcna0.json
│           ├── development-system-prompt.json
│           ├── development-workflow.json
│           ├── docker-compose-prompt-combiner.json
│           ├── docker-containerization-guide.json
│           ├── docker-mcp-servers-orchestration.json
│           ├── foresight-assistant.json
│           ├── generate-different-types-of-questions-ab.json
│           ├── generate-mermaid-diagram.json
│           ├── image-1-describe-the-icon-in-one-sen.json
│           ├── initialize-project-setup-for-a-new-micro.json
│           ├── install-dependencies-build-run-test.json
│           ├── mcp-code-generator.json
│           ├── mcp-integration-assistant.json
│           ├── mcp-resources-explorer.json
│           ├── mcp-resources-integration.json
│           ├── mcp-server-configurator.json
│           ├── mcp-server-dev-prompt-combiner.json
│           ├── mcp-server-integration-template.json
│           ├── mcp-template-system.json
│           ├── mermaid-analysis-expert.json
│           ├── mermaid-class-diagram-generator.json
│           ├── mermaid-diagram-generator.json
│           ├── mermaid-diagram-modifier.json
│           ├── modify-mermaid-diagram.json
│           ├── monorepo-migration-guide.json
│           ├── multi-resource-context.json
│           ├── project-analysis-assistant.json
│           ├── prompt-combiner-interface.json
│           ├── prompt-templates.json
│           ├── repository-explorer.json
│           ├── research-assistant.json
│           ├── sequential-data-analysis.json
│           ├── solid-code-analysis-visualizer.json
│           ├── task-list-helper-8ithy.json
│           ├── template-based-mcp-integration.json
│           ├── templates.json
│           ├── test-prompt.json
│           └── you-are-limited-to-respond-yes-or-no-onl.json
├── docs
│   ├── AWS_MCP.md
│   ├── AWS.md
│   ├── CONAN.md
│   └── integration.md
├── elevenlabs-agents
│   ├── agent-prompts.json
│   └── README.md
├── IMPLEMENTATION_STATUS.md
├── integration_plan.md
├── LICENSE
├── MANIFEST.in
├── mcp-project-orchestrator
│   └── openssl
│       ├── .github
│       │   └── workflows
│       │       └── validate-cursor-config.yml
│       ├── conanfile.py
│       ├── CURSOR_DEPLOYMENT_POLISH.md
│       ├── cursor-rules
│       │   ├── mcp.json.jinja2
│       │   ├── prompts
│       │   │   ├── fips-compliance.md.jinja2
│       │   │   ├── openssl-coding-standards.md.jinja2
│       │   │   └── pr-review.md.jinja2
│       │   └── rules
│       │       ├── ci-linux.mdc.jinja2
│       │       ├── linux-dev.mdc.jinja2
│       │       ├── macos-dev.mdc.jinja2
│       │       ├── shared.mdc.jinja2
│       │       └── windows-dev.mdc.jinja2
│       ├── docs
│       │   └── cursor-configuration-management.md
│       ├── examples
│       │   └── example-workspace
│       │       ├── .cursor
│       │       │   ├── mcp.json
│       │       │   └── rules
│       │       │       ├── linux-dev.mdc
│       │       │       └── shared.mdc
│       │       ├── .gitignore
│       │       ├── CMakeLists.txt
│       │       ├── conanfile.py
│       │       ├── profiles
│       │       │   ├── linux-gcc-debug.profile
│       │       │   └── linux-gcc-release.profile
│       │       ├── README.md
│       │       └── src
│       │           ├── crypto_utils.cpp
│       │           ├── crypto_utils.h
│       │           └── main.cpp
│       ├── IMPLEMENTATION_SUMMARY.md
│       ├── mcp_orchestrator
│       │   ├── __init__.py
│       │   ├── cli.py
│       │   ├── conan_integration.py
│       │   ├── cursor_config.py
│       │   ├── cursor_deployer.py
│       │   ├── deploy_cursor.py
│       │   ├── env_config.py
│       │   ├── platform_detector.py
│       │   └── yaml_validator.py
│       ├── openssl-cursor-example-workspace-20251014_121133.zip
│       ├── pyproject.toml
│       ├── README.md
│       ├── requirements.txt
│       ├── scripts
│       │   └── create_example_workspace.py
│       ├── setup.py
│       ├── test_deployment.py
│       └── tests
│           ├── __init__.py
│           ├── test_cursor_deployer.py
│           └── test_template_validation.py
├── printcast-agent
│   ├── .env.example
│   ├── config
│   │   └── asterisk
│   │       └── extensions.conf
│   ├── Containerfile
│   ├── docker-compose.yml
│   ├── pyproject.toml
│   ├── README.md
│   ├── scripts
│   │   └── docker-entrypoint.sh
│   ├── src
│   │   ├── integrations
│   │   │   ├── __init__.py
│   │   │   ├── asterisk.py
│   │   │   ├── content.py
│   │   │   ├── delivery.py
│   │   │   ├── elevenlabs.py
│   │   │   └── printing.py
│   │   ├── mcp_server
│   │   │   ├── __init__.py
│   │   │   ├── main.py
│   │   │   └── server.py
│   │   └── orchestration
│   │       ├── __init__.py
│   │       └── workflow.py
│   └── tests
│       └── test_mcp_server.py
├── project_orchestration.json
├── project_templates.json
├── pyproject.toml
├── README.md
├── REFACTORING_COMPLETED.md
├── REFACTORING_RECOMMENDATIONS.md
├── requirements.txt
├── scripts
│   ├── archive
│   │   ├── init_claude_test.sh
│   │   ├── init_postgres.sh
│   │   ├── start_mcp_servers.sh
│   │   └── test_claude_desktop.sh
│   ├── consolidate_mermaid.py
│   ├── consolidate_prompts.py
│   ├── consolidate_resources.py
│   ├── consolidate_templates.py
│   ├── INSTRUCTIONS.md
│   ├── README.md
│   ├── setup_aws_mcp.sh
│   ├── setup_mcp.sh
│   ├── setup_orchestrator.sh
│   ├── setup_project.py
│   └── test_mcp.sh
├── src
│   └── mcp_project_orchestrator
│       ├── __init__.py
│       ├── __main__.py
│       ├── aws_mcp.py
│       ├── cli
│       │   └── __init__.py
│       ├── cli.py
│       ├── commands
│       │   └── openssl_cli.py
│       ├── core
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── config.py
│       │   ├── exceptions.py
│       │   ├── fastmcp.py
│       │   ├── logging.py
│       │   └── managers.py
│       ├── cursor_deployer.py
│       ├── ecosystem_monitor.py
│       ├── fan_out_orchestrator.py
│       ├── fastmcp.py
│       ├── mcp-py
│       │   ├── AggregateVersions.py
│       │   ├── CustomBashTool.py
│       │   ├── FileAnnotator.py
│       │   ├── mcp-client.py
│       │   ├── mcp-server.py
│       │   ├── MermaidDiagramGenerator.py
│       │   ├── NamingAgent.py
│       │   └── solid-analyzer-agent.py
│       ├── mermaid
│       │   ├── __init__.py
│       │   ├── generator.py
│       │   ├── mermaid_orchestrator.py
│       │   ├── renderer.py
│       │   ├── templates
│       │   │   ├── AbstractFactory-diagram.json
│       │   │   ├── Adapter-diagram.json
│       │   │   ├── Analyze_Mermaid_Diagram.json
│       │   │   ├── Builder-diagram.json
│       │   │   ├── Chain-diagram.json
│       │   │   ├── Code_Diagram_Documentation_Creator.json
│       │   │   ├── Command-diagram.json
│       │   │   ├── Decorator-diagram.json
│       │   │   ├── Facade-diagram.json
│       │   │   ├── Factory-diagram.json
│       │   │   ├── flowchart
│       │   │   │   ├── AbstractFactory-diagram.json
│       │   │   │   ├── Adapter-diagram.json
│       │   │   │   ├── Analyze_Mermaid_Diagram.json
│       │   │   │   ├── Builder-diagram.json
│       │   │   │   ├── Chain-diagram.json
│       │   │   │   ├── Code_Diagram_Documentation_Creator.json
│       │   │   │   ├── Command-diagram.json
│       │   │   │   ├── Decorator-diagram.json
│       │   │   │   ├── Facade-diagram.json
│       │   │   │   ├── Factory-diagram.json
│       │   │   │   ├── Generate_Mermaid_Diagram.json
│       │   │   │   ├── generated_diagram.json
│       │   │   │   ├── integration.json
│       │   │   │   ├── Iterator-diagram.json
│       │   │   │   ├── Mediator-diagram.json
│       │   │   │   ├── Memento-diagram.json
│       │   │   │   ├── Mermaid_Analysis_Expert.json
│       │   │   │   ├── Mermaid_Class_Diagram_Generator.json
│       │   │   │   ├── Mermaid_Diagram_Generator.json
│       │   │   │   ├── Mermaid_Diagram_Modifier.json
│       │   │   │   ├── Modify_Mermaid_Diagram.json
│       │   │   │   ├── Observer-diagram.json
│       │   │   │   ├── Prototype-diagram.json
│       │   │   │   ├── Proxy-diagram.json
│       │   │   │   ├── README.json
│       │   │   │   ├── Singleton-diagram.json
│       │   │   │   ├── State-diagram.json
│       │   │   │   ├── Strategy-diagram.json
│       │   │   │   ├── TemplateMethod-diagram.json
│       │   │   │   ├── theme_dark.json
│       │   │   │   ├── theme_default.json
│       │   │   │   ├── theme_pastel.json
│       │   │   │   ├── theme_vibrant.json
│       │   │   │   └── Visitor-diagram.json
│       │   │   ├── Generate_Mermaid_Diagram.json
│       │   │   ├── generated_diagram.json
│       │   │   ├── index.json
│       │   │   ├── integration.json
│       │   │   ├── Iterator-diagram.json
│       │   │   ├── Mediator-diagram.json
│       │   │   ├── Memento-diagram.json
│       │   │   ├── Mermaid_Analysis_Expert.json
│       │   │   ├── Mermaid_Class_Diagram_Generator.json
│       │   │   ├── Mermaid_Diagram_Generator.json
│       │   │   ├── Mermaid_Diagram_Modifier.json
│       │   │   ├── Modify_Mermaid_Diagram.json
│       │   │   ├── Observer-diagram.json
│       │   │   ├── Prototype-diagram.json
│       │   │   ├── Proxy-diagram.json
│       │   │   ├── README.json
│       │   │   ├── Singleton-diagram.json
│       │   │   ├── State-diagram.json
│       │   │   ├── Strategy-diagram.json
│       │   │   ├── TemplateMethod-diagram.json
│       │   │   ├── theme_dark.json
│       │   │   ├── theme_default.json
│       │   │   ├── theme_pastel.json
│       │   │   ├── theme_vibrant.json
│       │   │   └── Visitor-diagram.json
│       │   └── types.py
│       ├── project_orchestration.py
│       ├── prompt_manager
│       │   ├── __init__.py
│       │   ├── loader.py
│       │   ├── manager.py
│       │   └── template.py
│       ├── prompts
│       │   ├── __dirname.json
│       │   ├── __image_1___describe_the_icon_in_one_sen___.json
│       │   ├── __init__.py
│       │   ├── __type.json
│       │   ├── _.json
│       │   ├── _DEFAULT_OPEN_DELIMITER.json
│       │   ├── _emojiRegex.json
│       │   ├── _UUID_CHARS.json
│       │   ├── a.json
│       │   ├── A.json
│       │   ├── Aa.json
│       │   ├── aAnnotationPadding.json
│       │   ├── absoluteThresholdGroup.json
│       │   ├── add.json
│       │   ├── ADDITIONAL_PROPERTY_FLAG.json
│       │   ├── Advanced_Multi-Server_Integration_Template.json
│       │   ├── allOptionsList.json
│       │   ├── analysis
│       │   │   ├── Data_Analysis_Template.json
│       │   │   ├── index.json
│       │   │   ├── Mermaid_Analysis_Expert.json
│       │   │   ├── Sequential_Data_Analysis_with_MCP_Integration.json
│       │   │   └── SOLID_Code_Analysis_Visualizer.json
│       │   ├── Analysis_Assistant.json
│       │   ├── Analyze_Mermaid_Diagram.json
│       │   ├── ANDROID_EVERGREEN_FIRST.json
│       │   ├── ANSI_ESCAPE_BELL.json
│       │   ├── architecture
│       │   │   ├── index.json
│       │   │   └── PromptCombiner_Interface.json
│       │   ├── Architecture_Design_Assistant.json
│       │   ├── argsTag.json
│       │   ├── ARROW.json
│       │   ├── assistant
│       │   │   ├── Analysis_Assistant.json
│       │   │   ├── Architecture_Design_Assistant.json
│       │   │   ├── Code_Refactoring_Assistant.json
│       │   │   ├── Code_Review_Assistant.json
│       │   │   ├── Database_Query_Assistant.json
│       │   │   ├── Debugging_Assistant.json
│       │   │   ├── Foresight_Assistant.json
│       │   │   ├── index.json
│       │   │   ├── MCP_Integration_Assistant.json
│       │   │   ├── Project_Analysis_Assistant.json
│       │   │   └── Research_Assistant.json
│       │   ├── astralRange.json
│       │   ├── at.json
│       │   ├── authorization_endpoint.json
│       │   ├── b.json
│       │   ├── BABELIGNORE_FILENAME.json
│       │   ├── BACKSLASH.json
│       │   ├── backupId.json
│       │   ├── BANG.json
│       │   ├── BASE64_MAP.json
│       │   ├── baseFlags.json
│       │   ├── Basic_Template.json
│       │   ├── bgModel.json
│       │   ├── bignum.json
│       │   ├── blockKeywordsStr.json
│       │   ├── BOMChar.json
│       │   ├── boundary.json
│       │   ├── brackets.json
│       │   ├── BROWSER_VAR.json
│       │   ├── bt.json
│       │   ├── BUILTIN.json
│       │   ├── BULLET.json
│       │   ├── c.json
│       │   ├── C.json
│       │   ├── CACHE_VERSION.json
│       │   ├── cacheControl.json
│       │   ├── cacheProp.json
│       │   ├── category.py
│       │   ├── CHANGE_EVENT.json
│       │   ├── CHAR_CODE_0.json
│       │   ├── chars.json
│       │   ├── cjsPattern.json
│       │   ├── cKeywords.json
│       │   ├── classForPercent.json
│       │   ├── classStr.json
│       │   ├── clientFirstMessageBare.json
│       │   ├── cmd.json
│       │   ├── Code_Diagram_Documentation_Creator.json
│       │   ├── Code_Refactoring_Assistant.json
│       │   ├── Code_Review_Assistant.json
│       │   ├── code.json
│       │   ├── coding
│       │   │   ├── __dirname.json
│       │   │   ├── _.json
│       │   │   ├── _DEFAULT_OPEN_DELIMITER.json
│       │   │   ├── _emojiRegex.json
│       │   │   ├── _UUID_CHARS.json
│       │   │   ├── a.json
│       │   │   ├── A.json
│       │   │   ├── aAnnotationPadding.json
│       │   │   ├── absoluteThresholdGroup.json
│       │   │   ├── add.json
│       │   │   ├── ADDITIONAL_PROPERTY_FLAG.json
│       │   │   ├── allOptionsList.json
│       │   │   ├── ANDROID_EVERGREEN_FIRST.json
│       │   │   ├── ANSI_ESCAPE_BELL.json
│       │   │   ├── argsTag.json
│       │   │   ├── ARROW.json
│       │   │   ├── astralRange.json
│       │   │   ├── at.json
│       │   │   ├── authorization_endpoint.json
│       │   │   ├── BABELIGNORE_FILENAME.json
│       │   │   ├── BACKSLASH.json
│       │   │   ├── BANG.json
│       │   │   ├── BASE64_MAP.json
│       │   │   ├── baseFlags.json
│       │   │   ├── bgModel.json
│       │   │   ├── bignum.json
│       │   │   ├── blockKeywordsStr.json
│       │   │   ├── BOMChar.json
│       │   │   ├── boundary.json
│       │   │   ├── brackets.json
│       │   │   ├── BROWSER_VAR.json
│       │   │   ├── bt.json
│       │   │   ├── BUILTIN.json
│       │   │   ├── BULLET.json
│       │   │   ├── c.json
│       │   │   ├── C.json
│       │   │   ├── CACHE_VERSION.json
│       │   │   ├── cacheControl.json
│       │   │   ├── cacheProp.json
│       │   │   ├── CHANGE_EVENT.json
│       │   │   ├── CHAR_CODE_0.json
│       │   │   ├── chars.json
│       │   │   ├── cjsPattern.json
│       │   │   ├── cKeywords.json
│       │   │   ├── classForPercent.json
│       │   │   ├── classStr.json
│       │   │   ├── clientFirstMessageBare.json
│       │   │   ├── cmd.json
│       │   │   ├── code.json
│       │   │   ├── colorCode.json
│       │   │   ├── comma.json
│       │   │   ├── command.json
│       │   │   ├── configJsContent.json
│       │   │   ├── connectionString.json
│       │   │   ├── cssClassStr.json
│       │   │   ├── currentBoundaryParse.json
│       │   │   ├── d.json
│       │   │   ├── data.json
│       │   │   ├── DATA.json
│       │   │   ├── dataWebpackPrefix.json
│       │   │   ├── debug.json
│       │   │   ├── decodeStateVectorV2.json
│       │   │   ├── DEFAULT_DELIMITER.json
│       │   │   ├── DEFAULT_DIAGRAM_DIRECTION.json
│       │   │   ├── DEFAULT_JS_PATTERN.json
│       │   │   ├── DEFAULT_LOG_TARGET.json
│       │   │   ├── defaultHelpOpt.json
│       │   │   ├── defaultHost.json
│       │   │   ├── deferY18nLookupPrefix.json
│       │   │   ├── DELIM.json
│       │   │   ├── delimiter.json
│       │   │   ├── DEPRECATION.json
│       │   │   ├── destMain.json
│       │   │   ├── DID_NOT_THROW.json
│       │   │   ├── direction.json
│       │   │   ├── displayValue.json
│       │   │   ├── DNS.json
│       │   │   ├── doc.json
│       │   │   ├── DOCUMENTATION_NOTE.json
│       │   │   ├── DOT.json
│       │   │   ├── DOTS.json
│       │   │   ├── dummyCompoundId.json
│       │   │   ├── e.json
│       │   │   ├── E.json
│       │   │   ├── earlyHintsLink.json
│       │   │   ├── elide.json
│       │   │   ├── EMPTY.json
│       │   │   ├── end.json
│       │   │   ├── endpoint.json
│       │   │   ├── environment.json
│       │   │   ├── ERR_CODE.json
│       │   │   ├── errMessage.json
│       │   │   ├── errMsg.json
│       │   │   ├── ERROR_MESSAGE.json
│       │   │   ├── error.json
│       │   │   ├── ERROR.json
│       │   │   ├── ERRORCLASS.json
│       │   │   ├── errorMessage.json
│       │   │   ├── es6Default.json
│       │   │   ├── ESC.json
│       │   │   ├── Escapable.json
│       │   │   ├── escapedChar.json
│       │   │   ├── escapeFuncStr.json
│       │   │   ├── escSlash.json
│       │   │   ├── ev.json
│       │   │   ├── event.json
│       │   │   ├── execaMessage.json
│       │   │   ├── EXPECTED_LABEL.json
│       │   │   ├── expected.json
│       │   │   ├── expectedString.json
│       │   │   ├── expression1.json
│       │   │   ├── EXTENSION.json
│       │   │   ├── f.json
│       │   │   ├── FAIL_TEXT.json
│       │   │   ├── FILE_BROWSER_FACTORY.json
│       │   │   ├── fill.json
│       │   │   ├── findPackageJson.json
│       │   │   ├── fnKey.json
│       │   │   ├── FORMAT.json
│       │   │   ├── formatted.json
│       │   │   ├── from.json
│       │   │   ├── fullpaths.json
│       │   │   ├── FUNC_ERROR_TEXT.json
│       │   │   ├── GenStateSuspendedStart.json
│       │   │   ├── GENSYNC_EXPECTED_START.json
│       │   │   ├── gutter.json
│       │   │   ├── h.json
│       │   │   ├── handlerFuncName.json
│       │   │   ├── HASH_UNDEFINED.json
│       │   │   ├── head.json
│       │   │   ├── helpMessage.json
│       │   │   ├── HINT_ARG.json
│       │   │   ├── HOOK_RETURNED_NOTHING_ERROR_MESSAGE.json
│       │   │   ├── i.json
│       │   │   ├── id.json
│       │   │   ├── identifier.json
│       │   │   ├── Identifier.json
│       │   │   ├── INDENT.json
│       │   │   ├── indentation.json
│       │   │   ├── index.json
│       │   │   ├── INDIRECTION_FRAGMENT.json
│       │   │   ├── input.json
│       │   │   ├── inputText.json
│       │   │   ├── insert.json
│       │   │   ├── insertPromptQuery.json
│       │   │   ├── INSPECT_MAX_BYTES.json
│       │   │   ├── intToCharMap.json
│       │   │   ├── IS_ITERABLE_SENTINEL.json
│       │   │   ├── IS_KEYED_SENTINEL.json
│       │   │   ├── isConfigType.json
│       │   │   ├── isoSentinel.json
│       │   │   ├── isSourceNode.json
│       │   │   ├── j.json
│       │   │   ├── JAKE_CMD.json
│       │   │   ├── JEST_GLOBAL_NAME.json
│       │   │   ├── JEST_GLOBALS_MODULE_NAME.json
│       │   │   ├── JSON_SYNTAX_CHAR.json
│       │   │   ├── json.json
│       │   │   ├── jsonType.json
│       │   │   ├── jupyter_namespaceObject.json
│       │   │   ├── JUPYTERLAB_DOCMANAGER_PLUGIN_ID.json
│       │   │   ├── k.json
│       │   │   ├── KERNEL_STATUS_ERROR_CLASS.json
│       │   │   ├── key.json
│       │   │   ├── l.json
│       │   │   ├── labelId.json
│       │   │   ├── LATEST_PROTOCOL_VERSION.json
│       │   │   ├── LETTERDASHNUMBER.json
│       │   │   ├── LF.json
│       │   │   ├── LIMIT_REPLACE_NODE.json
│       │   │   ├── logTime.json
│       │   │   ├── lstatkey.json
│       │   │   ├── lt.json
│       │   │   ├── m.json
│       │   │   ├── maliciousPayload.json
│       │   │   ├── mask.json
│       │   │   ├── match.json
│       │   │   ├── matchingDelim.json
│       │   │   ├── MAXIMUM_MESSAGE_SIZE.json
│       │   │   ├── mdcContent.json
│       │   │   ├── MERMAID_DOM_ID_PREFIX.json
│       │   │   ├── message.json
│       │   │   ├── messages.json
│       │   │   ├── meth.json
│       │   │   ├── minimatch.json
│       │   │   ├── MOCK_CONSTRUCTOR_NAME.json
│       │   │   ├── MOCKS_PATTERN.json
│       │   │   ├── moduleDirectory.json
│       │   │   ├── msg.json
│       │   │   ├── mtr.json
│       │   │   ├── multipartType.json
│       │   │   ├── n.json
│       │   │   ├── N.json
│       │   │   ├── name.json
│       │   │   ├── NATIVE_PLATFORM.json
│       │   │   ├── newUrl.json
│       │   │   ├── NM.json
│       │   │   ├── NO_ARGUMENTS.json
│       │   │   ├── NO_DIFF_MESSAGE.json
│       │   │   ├── NODE_MODULES.json
│       │   │   ├── nodeInternalPrefix.json
│       │   │   ├── nonASCIIidentifierStartChars.json
│       │   │   ├── nonKey.json
│       │   │   ├── NOT_A_DOT.json
│       │   │   ├── notCharacterOrDash.json
│       │   │   ├── notebookURL.json
│       │   │   ├── notSelector.json
│       │   │   ├── nullTag.json
│       │   │   ├── num.json
│       │   │   ├── NUMBER.json
│       │   │   ├── o.json
│       │   │   ├── O.json
│       │   │   ├── octChar.json
│       │   │   ├── octetStreamType.json
│       │   │   ├── operators.json
│       │   │   ├── out.json
│       │   │   ├── OUTSIDE_JEST_VM_PROTOCOL.json
│       │   │   ├── override.json
│       │   │   ├── p.json
│       │   │   ├── PACKAGE_FILENAME.json
│       │   │   ├── PACKAGE_JSON.json
│       │   │   ├── packageVersion.json
│       │   │   ├── paddedNumber.json
│       │   │   ├── page.json
│       │   │   ├── parseClass.json
│       │   │   ├── path.json
│       │   │   ├── pathExt.json
│       │   │   ├── pattern.json
│       │   │   ├── PatternBoolean.json
│       │   │   ├── pBuiltins.json
│       │   │   ├── pFloatForm.json
│       │   │   ├── pkg.json
│       │   │   ├── PLUGIN_ID_DOC_MANAGER.json
│       │   │   ├── plusChar.json
│       │   │   ├── PN_CHARS.json
│       │   │   ├── point.json
│       │   │   ├── prefix.json
│       │   │   ├── PRETTY_PLACEHOLDER.json
│       │   │   ├── property_prefix.json
│       │   │   ├── pubkey256.json
│       │   │   ├── Q.json
│       │   │   ├── qmark.json
│       │   │   ├── QO.json
│       │   │   ├── query.json
│       │   │   ├── querystringType.json
│       │   │   ├── queryText.json
│       │   │   ├── r.json
│       │   │   ├── R.json
│       │   │   ├── rangeStart.json
│       │   │   ├── re.json
│       │   │   ├── reI.json
│       │   │   ├── REQUIRED_FIELD_SYMBOL.json
│       │   │   ├── reserve.json
│       │   │   ├── resolvedDestination.json
│       │   │   ├── resolverDir.json
│       │   │   ├── responseType.json
│       │   │   ├── result.json
│       │   │   ├── ROOT_DESCRIBE_BLOCK_NAME.json
│       │   │   ├── ROOT_NAMESPACE_NAME.json
│       │   │   ├── ROOT_TASK_NAME.json
│       │   │   ├── route.json
│       │   │   ├── RUNNING_TEXT.json
│       │   │   ├── s.json
│       │   │   ├── SCHEMA_PATH.json
│       │   │   ├── se.json
│       │   │   ├── SEARCHABLE_CLASS.json
│       │   │   ├── secret.json
│       │   │   ├── selector.json
│       │   │   ├── SEMVER_SPEC_VERSION.json
│       │   │   ├── sensitiveHeaders.json
│       │   │   ├── sep.json
│       │   │   ├── separator.json
│       │   │   ├── SHAPE_STATE.json
│       │   │   ├── shape.json
│       │   │   ├── SHARED.json
│       │   │   ├── short.json
│       │   │   ├── side.json
│       │   │   ├── SNAPSHOT_VERSION.json
│       │   │   ├── SOURCE_MAPPING_PREFIX.json
│       │   │   ├── source.json
│       │   │   ├── sourceMapContent.json
│       │   │   ├── SPACE_SYMBOL.json
│       │   │   ├── SPACE.json
│       │   │   ├── sqlKeywords.json
│       │   │   ├── sranges.json
│       │   │   ├── st.json
│       │   │   ├── ST.json
│       │   │   ├── stack.json
│       │   │   ├── START_HIDING.json
│       │   │   ├── START_OF_LINE.json
│       │   │   ├── startNoTraversal.json
│       │   │   ├── STATES.json
│       │   │   ├── stats.json
│       │   │   ├── statSync.json
│       │   │   ├── storageStatus.json
│       │   │   ├── storageType.json
│       │   │   ├── str.json
│       │   │   ├── stringifiedObject.json
│       │   │   ├── stringPath.json
│       │   │   ├── stringResult.json
│       │   │   ├── stringTag.json
│       │   │   ├── strValue.json
│       │   │   ├── style.json
│       │   │   ├── SUB_NAME.json
│       │   │   ├── subkey.json
│       │   │   ├── SUBPROTOCOL.json
│       │   │   ├── SUITE_NAME.json
│       │   │   ├── symbolPattern.json
│       │   │   ├── symbolTag.json
│       │   │   ├── t.json
│       │   │   ├── T.json
│       │   │   ├── templateDir.json
│       │   │   ├── tempName.json
│       │   │   ├── text.json
│       │   │   ├── time.json
│       │   │   ├── titleSeparator.json
│       │   │   ├── tmpl.json
│       │   │   ├── tn.json
│       │   │   ├── toValue.json
│       │   │   ├── transform.json
│       │   │   ├── trustProxyDefaultSymbol.json
│       │   │   ├── typeArgumentsKey.json
│       │   │   ├── typeKey.json
│       │   │   ├── typeMessage.json
│       │   │   ├── typesRegistryPackageName.json
│       │   │   ├── u.json
│       │   │   ├── UNDEFINED.json
│       │   │   ├── unit.json
│       │   │   ├── UNMATCHED_SURROGATE_PAIR_REPLACE.json
│       │   │   ├── ur.json
│       │   │   ├── USAGE.json
│       │   │   ├── value.json
│       │   │   ├── Vr.json
│       │   │   ├── watchmanURL.json
│       │   │   ├── webkit.json
│       │   │   ├── xhtml.json
│       │   │   ├── XP_DEFAULT_PATHEXT.json
│       │   │   └── y.json
│       │   ├── Collaborative_Development_with_MCP_Integration.json
│       │   ├── colorCode.json
│       │   ├── comma.json
│       │   ├── command.json
│       │   ├── completionShTemplate.json
│       │   ├── configJsContent.json
│       │   ├── connectionString.json
│       │   ├── Consolidated_TypeScript_Interfaces_Template.json
│       │   ├── Could_you_interpret_the_assumed_applicat___.json
│       │   ├── cssClassStr.json
│       │   ├── currentBoundaryParse.json
│       │   ├── d.json
│       │   ├── Data_Analysis_Template.json
│       │   ├── data.json
│       │   ├── DATA.json
│       │   ├── Database_Query_Assistant.json
│       │   ├── dataWebpackPrefix.json
│       │   ├── debug.json
│       │   ├── Debugging_Assistant.json
│       │   ├── decodeStateVectorV2.json
│       │   ├── DEFAULT_DELIMITER.json
│       │   ├── DEFAULT_DIAGRAM_DIRECTION.json
│       │   ├── DEFAULT_INDENT.json
│       │   ├── DEFAULT_JS_PATTERN.json
│       │   ├── DEFAULT_LOG_TARGET.json
│       │   ├── defaultHelpOpt.json
│       │   ├── defaultHost.json
│       │   ├── deferY18nLookupPrefix.json
│       │   ├── DELIM.json
│       │   ├── delimiter.json
│       │   ├── DEPRECATION.json
│       │   ├── DESCENDING.json
│       │   ├── destMain.json
│       │   ├── development
│       │   │   ├── Collaborative_Development_with_MCP_Integration.json
│       │   │   ├── Consolidated_TypeScript_Interfaces_Template.json
│       │   │   ├── Development_Workflow.json
│       │   │   ├── index.json
│       │   │   ├── MCP_Server_Development_Prompt_Combiner.json
│       │   │   └── Monorepo_Migration_and_Code_Organization_Guide.json
│       │   ├── Development_System_Prompt.json
│       │   ├── Development_Workflow.json
│       │   ├── devops
│       │   │   ├── Docker_Compose_Prompt_Combiner.json
│       │   │   ├── Docker_Containerization_Guide.json
│       │   │   └── index.json
│       │   ├── DID_NOT_THROW.json
│       │   ├── direction.json
│       │   ├── displayValue.json
│       │   ├── DNS.json
│       │   ├── doc.json
│       │   ├── Docker_Compose_Prompt_Combiner.json
│       │   ├── Docker_Containerization_Guide.json
│       │   ├── Docker_MCP_Servers_Orchestration_Guide.json
│       │   ├── DOCUMENTATION_NOTE.json
│       │   ├── DOT.json
│       │   ├── DOTS.json
│       │   ├── dummyCompoundId.json
│       │   ├── e.json
│       │   ├── E.json
│       │   ├── earlyHintsLink.json
│       │   ├── elide.json
│       │   ├── EMPTY.json
│       │   ├── encoded.json
│       │   ├── end.json
│       │   ├── endpoint.json
│       │   ├── environment.json
│       │   ├── ERR_CODE.json
│       │   ├── errMessage.json
│       │   ├── errMsg.json
│       │   ├── ERROR_MESSAGE.json
│       │   ├── error.json
│       │   ├── ERROR.json
│       │   ├── ERRORCLASS.json
│       │   ├── errorMessage.json
│       │   ├── es6Default.json
│       │   ├── ESC.json
│       │   ├── Escapable.json
│       │   ├── escapedChar.json
│       │   ├── escapeFuncStr.json
│       │   ├── escSlash.json
│       │   ├── ev.json
│       │   ├── event.json
│       │   ├── execaMessage.json
│       │   ├── EXPECTED_LABEL.json
│       │   ├── expected.json
│       │   ├── expectedString.json
│       │   ├── expression1.json
│       │   ├── EXTENSION.json
│       │   ├── f.json
│       │   ├── FAIL_TEXT.json
│       │   ├── FILE_BROWSER_FACTORY.json
│       │   ├── fill.json
│       │   ├── findPackageJson.json
│       │   ├── fnKey.json
│       │   ├── Foresight_Assistant.json
│       │   ├── FORMAT.json
│       │   ├── formatted.json
│       │   ├── from.json
│       │   ├── fullpaths.json
│       │   ├── FUNC_ERROR_TEXT.json
│       │   ├── general
│       │   │   └── index.json
│       │   ├── Generate_different_types_of_questions_ab___.json
│       │   ├── Generate_Mermaid_Diagram.json
│       │   ├── GenStateSuspendedStart.json
│       │   ├── GENSYNC_EXPECTED_START.json
│       │   ├── GitHub_Repository_Explorer.json
│       │   ├── gutter.json
│       │   ├── h.json
│       │   ├── handlerFuncName.json
│       │   ├── HASH_UNDEFINED.json
│       │   ├── head.json
│       │   ├── helpMessage.json
│       │   ├── HINT_ARG.json
│       │   ├── HOOK_RETURNED_NOTHING_ERROR_MESSAGE.json
│       │   ├── i.json
│       │   ├── id.json
│       │   ├── identifier.json
│       │   ├── Identifier.json
│       │   ├── INDENT.json
│       │   ├── indentation.json
│       │   ├── index.json
│       │   ├── INDIRECTION_FRAGMENT.json
│       │   ├── Initialize_project_setup_for_a_new_micro___.json
│       │   ├── input.json
│       │   ├── inputText.json
│       │   ├── insert.json
│       │   ├── insertPromptQuery.json
│       │   ├── INSPECT_MAX_BYTES.json
│       │   ├── install_dependencies__build__run__test____.json
│       │   ├── intToCharMap.json
│       │   ├── IS_ITERABLE_SENTINEL.json
│       │   ├── IS_KEYED_SENTINEL.json
│       │   ├── isConfigType.json
│       │   ├── isoSentinel.json
│       │   ├── isSourceNode.json
│       │   ├── j.json
│       │   ├── J.json
│       │   ├── JAKE_CMD.json
│       │   ├── JEST_GLOBAL_NAME.json
│       │   ├── JEST_GLOBALS_MODULE_NAME.json
│       │   ├── JSON_SYNTAX_CHAR.json
│       │   ├── json.json
│       │   ├── jsonType.json
│       │   ├── jupyter_namespaceObject.json
│       │   ├── JUPYTERLAB_DOCMANAGER_PLUGIN_ID.json
│       │   ├── k.json
│       │   ├── KERNEL_STATUS_ERROR_CLASS.json
│       │   ├── key.json
│       │   ├── l.json
│       │   ├── labelId.json
│       │   ├── LATEST_PROTOCOL_VERSION.json
│       │   ├── LETTERDASHNUMBER.json
│       │   ├── LF.json
│       │   ├── LIMIT_REPLACE_NODE.json
│       │   ├── LINE_FEED.json
│       │   ├── logTime.json
│       │   ├── lstatkey.json
│       │   ├── lt.json
│       │   ├── m.json
│       │   ├── maliciousPayload.json
│       │   ├── manager.py
│       │   ├── marker.json
│       │   ├── mask.json
│       │   ├── match.json
│       │   ├── matchingDelim.json
│       │   ├── MAXIMUM_MESSAGE_SIZE.json
│       │   ├── MCP_Integration_Assistant.json
│       │   ├── MCP_Resources_Explorer.json
│       │   ├── MCP_Resources_Integration_Guide.json
│       │   ├── MCP_Server_Development_Prompt_Combiner.json
│       │   ├── MCP_Server_Integration_Guide.json
│       │   ├── mcp-code-generator.json
│       │   ├── mdcContent.json
│       │   ├── Mermaid_Analysis_Expert.json
│       │   ├── Mermaid_Class_Diagram_Generator.json
│       │   ├── Mermaid_Diagram_Generator.json
│       │   ├── Mermaid_Diagram_Modifier.json
│       │   ├── MERMAID_DOM_ID_PREFIX.json
│       │   ├── message.json
│       │   ├── messages.json
│       │   ├── meth.json
│       │   ├── minimatch.json
│       │   ├── MOBILE_QUERY.json
│       │   ├── MOCK_CONSTRUCTOR_NAME.json
│       │   ├── MOCKS_PATTERN.json
│       │   ├── Modify_Mermaid_Diagram.json
│       │   ├── moduleDirectory.json
│       │   ├── Monorepo_Migration_and_Code_Organization_Guide.json
│       │   ├── msg.json
│       │   ├── mtr.json
│       │   ├── Multi-Resource_Context_Assistant.json
│       │   ├── multipartType.json
│       │   ├── n.json
│       │   ├── N.json
│       │   ├── name.json
│       │   ├── NATIVE_PLATFORM.json
│       │   ├── newUrl.json
│       │   ├── NM.json
│       │   ├── NO_ARGUMENTS.json
│       │   ├── NO_DIFF_MESSAGE.json
│       │   ├── NODE_MODULES.json
│       │   ├── nodeInternalPrefix.json
│       │   ├── nonASCIIidentifierStartChars.json
│       │   ├── nonKey.json
│       │   ├── NOT_A_DOT.json
│       │   ├── notCharacterOrDash.json
│       │   ├── notebookURL.json
│       │   ├── notSelector.json
│       │   ├── nullTag.json
│       │   ├── num.json
│       │   ├── NUMBER.json
│       │   ├── o.json
│       │   ├── O.json
│       │   ├── octChar.json
│       │   ├── octetStreamType.json
│       │   ├── operators.json
│       │   ├── other
│       │   │   ├── __image_1___describe_the_icon_in_one_sen___.json
│       │   │   ├── __type.json
│       │   │   ├── Advanced_Multi-Server_Integration_Template.json
│       │   │   ├── Analyze_Mermaid_Diagram.json
│       │   │   ├── Basic_Template.json
│       │   │   ├── Code_Diagram_Documentation_Creator.json
│       │   │   ├── Collaborative_Development_with_MCP_Integration.json
│       │   │   ├── completionShTemplate.json
│       │   │   ├── Could_you_interpret_the_assumed_applicat___.json
│       │   │   ├── DEFAULT_INDENT.json
│       │   │   ├── Docker_MCP_Servers_Orchestration_Guide.json
│       │   │   ├── Generate_different_types_of_questions_ab___.json
│       │   │   ├── Generate_Mermaid_Diagram.json
│       │   │   ├── GitHub_Repository_Explorer.json
│       │   │   ├── index.json
│       │   │   ├── Initialize_project_setup_for_a_new_micro___.json
│       │   │   ├── install_dependencies__build__run__test____.json
│       │   │   ├── LINE_FEED.json
│       │   │   ├── MCP_Resources_Explorer.json
│       │   │   ├── MCP_Resources_Integration_Guide.json
│       │   │   ├── MCP_Server_Integration_Guide.json
│       │   │   ├── mcp-code-generator.json
│       │   │   ├── Mermaid_Class_Diagram_Generator.json
│       │   │   ├── Mermaid_Diagram_Generator.json
│       │   │   ├── Mermaid_Diagram_Modifier.json
│       │   │   ├── Modify_Mermaid_Diagram.json
│       │   │   ├── Multi-Resource_Context_Assistant.json
│       │   │   ├── output.json
│       │   │   ├── sseUrl.json
│       │   │   ├── string.json
│       │   │   ├── Task_List_Helper.json
│       │   │   ├── Template-Based_MCP_Integration.json
│       │   │   ├── Test_Prompt.json
│       │   │   ├── type.json
│       │   │   ├── VERSION.json
│       │   │   ├── WIN_SLASH.json
│       │   │   └── You_are_limited_to_respond_Yes_or_No_onl___.json
│       │   ├── out.json
│       │   ├── output.json
│       │   ├── OUTSIDE_JEST_VM_PROTOCOL.json
│       │   ├── override.json
│       │   ├── p.json
│       │   ├── PACKAGE_FILENAME.json
│       │   ├── PACKAGE_JSON.json
│       │   ├── packageVersion.json
│       │   ├── paddedNumber.json
│       │   ├── page.json
│       │   ├── parseClass.json
│       │   ├── PATH_NODE_MODULES.json
│       │   ├── path.json
│       │   ├── pathExt.json
│       │   ├── pattern.json
│       │   ├── PatternBoolean.json
│       │   ├── pBuiltins.json
│       │   ├── pFloatForm.json
│       │   ├── pkg.json
│       │   ├── PLUGIN_ID_DOC_MANAGER.json
│       │   ├── plusChar.json
│       │   ├── PN_CHARS.json
│       │   ├── point.json
│       │   ├── prefix.json
│       │   ├── PRETTY_PLACEHOLDER.json
│       │   ├── Project_Analysis_Assistant.json
│       │   ├── ProjectsUpdatedInBackgroundEvent.json
│       │   ├── PromptCombiner_Interface.json
│       │   ├── promptId.json
│       │   ├── property_prefix.json
│       │   ├── pubkey256.json
│       │   ├── Q.json
│       │   ├── qmark.json
│       │   ├── QO.json
│       │   ├── query.json
│       │   ├── querystringType.json
│       │   ├── queryText.json
│       │   ├── r.json
│       │   ├── R.json
│       │   ├── rangeStart.json
│       │   ├── re.json
│       │   ├── reI.json
│       │   ├── REQUIRED_FIELD_SYMBOL.json
│       │   ├── Research_Assistant.json
│       │   ├── reserve.json
│       │   ├── resolvedDestination.json
│       │   ├── resolverDir.json
│       │   ├── responseType.json
│       │   ├── result.json
│       │   ├── ROOT_DESCRIBE_BLOCK_NAME.json
│       │   ├── ROOT_NAMESPACE_NAME.json
│       │   ├── ROOT_TASK_NAME.json
│       │   ├── route.json
│       │   ├── RUNNING_TEXT.json
│       │   ├── RXstyle.json
│       │   ├── s.json
│       │   ├── SCHEMA_PATH.json
│       │   ├── schemaQuery.json
│       │   ├── se.json
│       │   ├── SEARCHABLE_CLASS.json
│       │   ├── secret.json
│       │   ├── selector.json
│       │   ├── SEMVER_SPEC_VERSION.json
│       │   ├── sensitiveHeaders.json
│       │   ├── sep.json
│       │   ├── separator.json
│       │   ├── Sequential_Data_Analysis_with_MCP_Integration.json
│       │   ├── SHAPE_STATE.json
│       │   ├── shape.json
│       │   ├── SHARED.json
│       │   ├── short.json
│       │   ├── side.json
│       │   ├── SNAPSHOT_VERSION.json
│       │   ├── SOLID_Code_Analysis_Visualizer.json
│       │   ├── SOURCE_MAPPING_PREFIX.json
│       │   ├── source.json
│       │   ├── sourceMapContent.json
│       │   ├── SPACE_SYMBOL.json
│       │   ├── SPACE.json
│       │   ├── sqlKeywords.json
│       │   ├── sranges.json
│       │   ├── sseUrl.json
│       │   ├── st.json
│       │   ├── ST.json
│       │   ├── stack.json
│       │   ├── START_HIDING.json
│       │   ├── START_OF_LINE.json
│       │   ├── startNoTraversal.json
│       │   ├── STATES.json
│       │   ├── stats.json
│       │   ├── statSync.json
│       │   ├── status.json
│       │   ├── storageStatus.json
│       │   ├── storageType.json
│       │   ├── str.json
│       │   ├── string.json
│       │   ├── stringifiedObject.json
│       │   ├── stringPath.json
│       │   ├── stringResult.json
│       │   ├── stringTag.json
│       │   ├── strValue.json
│       │   ├── style.json
│       │   ├── SUB_NAME.json
│       │   ├── subkey.json
│       │   ├── SUBPROTOCOL.json
│       │   ├── SUITE_NAME.json
│       │   ├── symbolPattern.json
│       │   ├── symbolTag.json
│       │   ├── system
│       │   │   ├── Aa.json
│       │   │   ├── b.json
│       │   │   ├── Development_System_Prompt.json
│       │   │   ├── index.json
│       │   │   ├── marker.json
│       │   │   ├── PATH_NODE_MODULES.json
│       │   │   ├── ProjectsUpdatedInBackgroundEvent.json
│       │   │   ├── RXstyle.json
│       │   │   ├── status.json
│       │   │   └── versionMajorMinor.json
│       │   ├── t.json
│       │   ├── T.json
│       │   ├── Task_List_Helper.json
│       │   ├── Template-Based_MCP_Integration.json
│       │   ├── template.py
│       │   ├── templateDir.json
│       │   ├── tempName.json
│       │   ├── Test_Prompt.json
│       │   ├── text.json
│       │   ├── time.json
│       │   ├── titleSeparator.json
│       │   ├── tmpl.json
│       │   ├── tn.json
│       │   ├── TOPBAR_FACTORY.json
│       │   ├── toValue.json
│       │   ├── transform.json
│       │   ├── trustProxyDefaultSymbol.json
│       │   ├── txt.json
│       │   ├── type.json
│       │   ├── typeArgumentsKey.json
│       │   ├── typeKey.json
│       │   ├── typeMessage.json
│       │   ├── typesRegistryPackageName.json
│       │   ├── u.json
│       │   ├── UNDEFINED.json
│       │   ├── unit.json
│       │   ├── UNMATCHED_SURROGATE_PAIR_REPLACE.json
│       │   ├── ur.json
│       │   ├── usage.json
│       │   ├── USAGE.json
│       │   ├── user
│       │   │   ├── backupId.json
│       │   │   ├── DESCENDING.json
│       │   │   ├── encoded.json
│       │   │   ├── index.json
│       │   │   ├── J.json
│       │   │   ├── MOBILE_QUERY.json
│       │   │   ├── promptId.json
│       │   │   ├── schemaQuery.json
│       │   │   ├── TOPBAR_FACTORY.json
│       │   │   ├── txt.json
│       │   │   └── usage.json
│       │   ├── value.json
│       │   ├── VERSION.json
│       │   ├── version.py
│       │   ├── versionMajorMinor.json
│       │   ├── Vr.json
│       │   ├── watchmanURL.json
│       │   ├── webkit.json
│       │   ├── WIN_SLASH.json
│       │   ├── xhtml.json
│       │   ├── XP_DEFAULT_PATHEXT.json
│       │   ├── y.json
│       │   └── You_are_limited_to_respond_Yes_or_No_onl___.json
│       ├── resources
│       │   ├── __init__.py
│       │   ├── code_examples
│       │   │   └── index.json
│       │   ├── config
│       │   │   └── index.json
│       │   ├── documentation
│       │   │   └── index.json
│       │   ├── images
│       │   │   └── index.json
│       │   ├── index.json
│       │   └── other
│       │       └── index.json
│       ├── server.py
│       ├── templates
│       │   ├── __init__.py
│       │   ├── AbstractFactory.json
│       │   ├── Adapter.json
│       │   ├── base.py
│       │   ├── Builder.json
│       │   ├── Chain.json
│       │   ├── Command.json
│       │   ├── component
│       │   │   ├── AbstractFactory.json
│       │   │   ├── Adapter.json
│       │   │   ├── Builder.json
│       │   │   ├── Chain.json
│       │   │   ├── Command.json
│       │   │   ├── Decorator.json
│       │   │   ├── Facade.json
│       │   │   ├── Factory.json
│       │   │   ├── Iterator.json
│       │   │   ├── Mediator.json
│       │   │   ├── Memento.json
│       │   │   ├── Observer.json
│       │   │   ├── Prototype.json
│       │   │   ├── Proxy.json
│       │   │   ├── Singleton.json
│       │   │   ├── State.json
│       │   │   ├── Strategy.json
│       │   │   ├── TemplateMethod.json
│       │   │   └── Visitor.json
│       │   ├── component.py
│       │   ├── Decorator.json
│       │   ├── Facade.json
│       │   ├── Factory.json
│       │   ├── index.json
│       │   ├── Iterator.json
│       │   ├── manager.py
│       │   ├── Mediator.json
│       │   ├── Memento.json
│       │   ├── Observer.json
│       │   ├── project.py
│       │   ├── Prototype.json
│       │   ├── Proxy.json
│       │   ├── renderer.py
│       │   ├── Singleton.json
│       │   ├── State.json
│       │   ├── Strategy.json
│       │   ├── template_manager.py
│       │   ├── TemplateMethod.json
│       │   ├── types.py
│       │   └── Visitor.json
│       └── utils
│           └── __init__.py
├── SUMMARY.md
├── TASK_COMPLETION_SUMMARY.md
├── templates
│   └── openssl
│       ├── files
│       │   ├── CMakeLists.txt.jinja2
│       │   ├── conanfile.py.jinja2
│       │   ├── main.cpp.jinja2
│ │ └── README.md.jinja2
│ ├── openssl-consumer.json
│ └── template.json
├── test_openssl_integration.sh
├── test_package
│ └── conanfile.py
└── tests
├── __init__.py
├── conftest.py
├── integration
│ ├── test_core_integration.py
│ ├── test_mermaid_integration.py
│ ├── test_prompt_manager_integration.py
│ └── test_server_integration.py
├── test_aws_mcp.py
├── test_base_classes.py
├── test_config.py
├── test_exceptions.py
├── test_mermaid.py
├── test_prompts.py
└── test_templates.py
```
# Files
--------------------------------------------------------------------------------
/printcast-agent/docker-compose.yml:
--------------------------------------------------------------------------------
```yaml
version: '3.8'

services:
  # Main PrintCast MCP Server
  printcast:
    build:
      context: .
      dockerfile: Containerfile
    container_name: printcast-agent
    restart: unless-stopped
    ports:
      - "8000:8000"                     # MCP Server
      - "5038:5038"                     # Asterisk AMI
      - "5060:5060/udp"                 # SIP UDP
      - "5060:5060/tcp"                 # SIP TCP
      - "10000-10100:10000-10100/udp"   # RTP ports
    volumes:
      - ./config:/app/config
      - ./logs:/var/log/printcast
      - ./data:/var/lib/printcast
      - /var/run/cups:/var/run/cups     # CUPS socket
    env_file:
      - .env
    environment:
      - ASTERISK_ENABLED=true
      - PYTHONUNBUFFERED=1
    networks:
      - printcast-network
    depends_on:
      - redis
      - postgres

  # Redis for caching and task queue
  redis:
    image: redis:7-alpine
    container_name: printcast-redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    networks:
      - printcast-network
    command: redis-server --appendonly yes

  # PostgreSQL for persistent data
  postgres:
    image: postgres:15-alpine
    container_name: printcast-db
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: printcast
      POSTGRES_USER: printcast
      POSTGRES_PASSWORD: ${DB_PASSWORD:-printcast123}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - printcast-network

  # Asterisk PBX (optional, if not using host Asterisk)
  asterisk:
    image: andrius/asterisk:latest
    container_name: printcast-asterisk
    restart: unless-stopped
    ports:
      - "5038:5038"                     # AMI
      - "5060:5060/udp"                 # SIP
      - "5060:5060/tcp"                 # SIP
      - "10000-10100:10000-10100/udp"   # RTP
    volumes:
      - ./config/asterisk:/etc/asterisk
      - asterisk-data:/var/lib/asterisk
      - asterisk-logs:/var/log/asterisk
    networks:
      - printcast-network
    profiles:
      - asterisk

  # CUPS print server (optional, if not using host CUPS)
  cups:
    image: ydkn/cups:latest
    container_name: printcast-cups
    restart: unless-stopped
    ports:
      - "631:631"
    volumes:
      - /var/run/dbus:/var/run/dbus
      - cups-data:/etc/cups
    environment:
      CUPS_USER: admin
      CUPS_PASSWORD: ${CUPS_PASSWORD:-admin}
    networks:
      - printcast-network
    profiles:
      - printing

  # Monitoring with Prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: printcast-prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
    networks:
      - printcast-network
    profiles:
      - monitoring

  # Grafana for visualization
  grafana:
    image: grafana/grafana:latest
    container_name: printcast-grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
      - ./config/grafana:/etc/grafana/provisioning
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-admin}
      GF_INSTALL_PLUGINS: redis-datasource
    networks:
      - printcast-network
    depends_on:
      - prometheus
    profiles:
      - monitoring

networks:
  printcast-network:
    driver: bridge

volumes:
  redis-data:
  postgres-data:
  asterisk-data:
  asterisk-logs:
  cups-data:
  prometheus-data:
  grafana-data:
```
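The optional services above are gated behind Compose `profiles`, so only the core stack starts by default. A typical set of invocations might look like this (commands assume Docker Compose v2; with the standalone v1 binary use `docker-compose` instead):

```shell
# Core stack only: printcast + redis + postgres
docker compose up -d

# Opt in to the monitoring profile (adds prometheus + grafana)
docker compose --profile monitoring up -d

# Multiple profiles can be combined
docker compose --profile asterisk --profile printing up -d
```

Note that the `asterisk` service publishes the same 5038/5060 host ports as `printcast`, so its profile should only be enabled when the main service is not binding those ports itself.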
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/core/base.py:
--------------------------------------------------------------------------------
```python
"""Base classes for MCP Project Orchestrator components.
This module provides abstract base classes that define the core interfaces
for templates, components, and managers in the MCP Project Orchestrator.
"""
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel
class BaseComponent(ABC):
"""Abstract base class for all MCP components."""
def __init__(self, name: str, config: Optional[Dict[str, Any]] = None):
"""Initialize a base component.
Args:
name: The name of the component
config: Optional configuration dictionary
"""
self.name = name
self.config = config or {}
@abstractmethod
async def initialize(self) -> None:
"""Initialize the component."""
pass
@abstractmethod
async def cleanup(self) -> None:
"""Clean up component resources."""
pass
class BaseTemplate(ABC):
"""Abstract base class for all MCP templates."""
def __init__(self, template_path: Union[str, Path]):
"""Initialize a base template.
Args:
template_path: Path to the template file or directory
"""
self.template_path = Path(template_path)
@abstractmethod
async def render(self, context: Dict[str, Any]) -> str:
"""Render the template with the given context.
Args:
context: Dictionary containing template variables
Returns:
str: The rendered template
"""
pass
@abstractmethod
async def validate(self) -> bool:
"""Validate the template structure and content.
Returns:
bool: True if valid, False otherwise
"""
pass
class BaseManager(ABC):
"""Abstract base class for all MCP managers."""
def __init__(self, config_path: Optional[Union[str, Path]] = None):
"""Initialize a base manager.
Args:
config_path: Optional path to configuration file
"""
self.config_path = Path(config_path) if config_path else None
self.components: Dict[str, BaseComponent] = {}
@abstractmethod
async def load_config(self) -> None:
"""Load manager configuration."""
pass
@abstractmethod
async def register_component(self, component: BaseComponent) -> None:
"""Register a new component with the manager.
Args:
component: The component to register
"""
pass
@abstractmethod
async def get_component(self, name: str) -> Optional[BaseComponent]:
"""Get a registered component by name.
Args:
name: Name of the component
Returns:
Optional[BaseComponent]: The component if found, None otherwise
"""
pass
@abstractmethod
async def list_components(self) -> List[str]:
"""List all registered components.
Returns:
List[str]: List of component names
"""
pass
class BaseOrchestrator(BaseComponent):
"""Base class for orchestrator components.
This class provides a common interface for components that manage
resources and interact with other parts of the system.
"""
def __init__(self, config):
"""Initialize a base orchestrator.
Args:
config: Configuration instance
"""
super().__init__(name=config.name if hasattr(config, "name") else "orchestrator", config=config)
self.config = config
```
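A minimal sketch of how the `BaseComponent` lifecycle contract is meant to be used: subclasses implement `initialize()`/`cleanup()` and callers bracket work between them. The ABC is restated inline so the sketch runs standalone, and `DummyComponent` is purely illustrative, not part of the package:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


# Restated inline so the sketch is self-contained; mirrors the package's BaseComponent.
class BaseComponent(ABC):
    def __init__(self, name: str, config: Optional[Dict[str, Any]] = None):
        self.name = name
        self.config = config or {}

    @abstractmethod
    async def initialize(self) -> None: ...

    @abstractmethod
    async def cleanup(self) -> None: ...


# Hypothetical concrete component: acquires a "resource" on initialize,
# releases it on cleanup.
class DummyComponent(BaseComponent):
    async def initialize(self) -> None:
        self.resource = {"ready": True}

    async def cleanup(self) -> None:
        self.resource = None


async def main() -> str:
    comp = DummyComponent("dummy")
    await comp.initialize()
    try:
        return "ready" if comp.resource["ready"] else "not ready"
    finally:
        # cleanup() always runs, even if the work above raises
        await comp.cleanup()


print(asyncio.run(main()))  # → ready
```

The try/finally pairing is the point of the split interface: managers can guarantee `cleanup()` runs for every component they registered, regardless of how `initialize()` or the work in between fared.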
--------------------------------------------------------------------------------
/tests/test_exceptions.py:
--------------------------------------------------------------------------------
```python
"""Tests for exception handling."""
import pytest
from mcp_project_orchestrator.core.exceptions import (
MCPException,
ConfigError,
TemplateError,
PromptError,
MermaidError,
ValidationError,
ResourceError
)
def test_mcp_exception_basic():
"""Test basic MCPException."""
exc = MCPException("Test error")
assert "Test error" in str(exc)
assert exc.message == "Test error"
assert isinstance(exc, Exception)
assert hasattr(exc, 'code')
assert hasattr(exc, 'details')
def test_config_error():
"""Test ConfigError."""
exc = ConfigError("Invalid config", "/path/to/config")
assert "Invalid config" in str(exc)
assert exc.config_path == "/path/to/config"
assert exc.message == "Invalid config"
assert isinstance(exc, MCPException)
def test_template_error():
"""Test TemplateError."""
exc = TemplateError("Template not found", "/path/to/template")
assert "Template not found" in str(exc)
assert exc.template_path == "/path/to/template"
assert exc.message == "Template not found"
assert isinstance(exc, MCPException)
def test_prompt_error():
"""Test PromptError."""
exc = PromptError("Prompt failed", "my-prompt")
assert "Prompt failed" in str(exc)
assert exc.prompt_name == "my-prompt"
assert exc.message == "Prompt failed"
assert isinstance(exc, MCPException)
def test_mermaid_error():
"""Test MermaidError."""
exc = MermaidError("Diagram generation failed", "flowchart")
assert "Diagram generation failed" in str(exc)
assert exc.diagram_type == "flowchart"
assert exc.message == "Diagram generation failed"
assert isinstance(exc, MCPException)
def test_validation_error():
"""Test ValidationError."""
errors = ["error1", "error2"]
exc = ValidationError("Validation failed", errors)
assert "Validation failed" in str(exc)
assert exc.validation_errors == errors
assert exc.message == "Validation failed"
assert isinstance(exc, MCPException)
def test_resource_error():
"""Test ResourceError."""
exc = ResourceError("Resource missing", "/path/to/resource")
assert "Resource missing" in str(exc)
assert exc.resource_path == "/path/to/resource"
assert exc.message == "Resource missing"
assert isinstance(exc, MCPException)
def test_exception_hierarchy():
"""Test exception inheritance hierarchy."""
# All custom exceptions should inherit from MCPException
assert issubclass(ConfigError, MCPException)
assert issubclass(TemplateError, MCPException)
assert issubclass(PromptError, MCPException)
assert issubclass(MermaidError, MCPException)
assert issubclass(ValidationError, MCPException)
assert issubclass(ResourceError, MCPException)
# MCPException should inherit from Exception
assert issubclass(MCPException, Exception)
def test_exception_catching():
"""Test that exceptions can be caught properly."""
try:
raise TemplateError("Test template error")
except MCPException as e:
assert "Test template error" in str(e)
except Exception:
pytest.fail("Should have caught as MCPException")
def test_exception_with_cause():
"""Test exception with underlying cause."""
try:
try:
raise ValueError("Original error")
except ValueError as e:
raise TemplateError("Template error", cause=e) from e
except TemplateError as e:
assert "Template error" in str(e)
assert isinstance(e.__cause__, ValueError)
assert e.cause == e.__cause__
```
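The last test exercises a dual-tracking pattern: `raise ... from e` sets the standard `__cause__` attribute, while the explicit `cause=` keyword keeps a reference on the exception object itself. A standalone sketch of that pattern — this inline `MCPException`/`TemplateError` only mirrors the observable behavior the tests assert, not the real implementation in `mcp_project_orchestrator.core.exceptions`:

```python
class MCPException(Exception):
    """Base exception carrying a message, optional code/details, and cause."""

    def __init__(self, message, code=None, details=None, cause=None):
        super().__init__(message)
        self.message = message
        self.code = code
        self.details = details or {}
        self.cause = cause


class TemplateError(MCPException):
    """Template-related failure, tracking the offending template path."""

    def __init__(self, message, template_path=None, cause=None):
        super().__init__(message, cause=cause)
        self.template_path = template_path


try:
    try:
        raise ValueError("Original error")
    except ValueError as e:
        # `from e` sets __cause__ for tracebacks; `cause=e` exposes the
        # same object as a plain attribute for programmatic access.
        raise TemplateError("Template error", cause=e) from e
except TemplateError as exc:
    assert isinstance(exc.__cause__, ValueError)
    assert exc.cause is exc.__cause__
```

Keeping both references means tracebacks show the full chain while handlers can inspect `exc.cause` without touching dunder attributes.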
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/cursor_deployer.py:
--------------------------------------------------------------------------------
```python
"""Deploy Cursor configuration to local repository (profile management pattern)"""
from pathlib import Path
from jinja2 import Template
import platform
import os
import shutil
class CursorConfigDeployer:
"""Deploy Cursor configuration templates to local repository"""
def __init__(self, repo_root: Path, package_root: Path):
self.repo_root = Path(repo_root)
self.package_root = Path(package_root)
self.cursor_dir = self.repo_root / ".cursor"
self.templates_dir = self.package_root / "cursor-templates"
def deploy(self, force: bool = False, platform: str = None, project_type: str = "openssl"):
"""Deploy Cursor configuration to repository"""
if self.cursor_dir.exists() and not force:
print(f"ℹ️ .cursor/ already exists. Use --force to overwrite.")
return
# Auto-detect platform if not specified
if platform is None:
platform = platform.system().lower()
platform_info = {
"os": platform,
"project_type": project_type,
"user": os.getenv("USER", "developer"),
"home": str(Path.home()),
"repo_root": str(self.repo_root)
}
# Create .cursor directory structure
self.cursor_dir.mkdir(exist_ok=True)
(self.cursor_dir / "rules").mkdir(exist_ok=True)
# Deploy platform-specific rules
self._deploy_rules(platform_info, platform, project_type)
# Deploy MCP configuration
self._deploy_mcp_config(platform_info, project_type)
print(f"✅ Cursor configuration deployed to {self.cursor_dir}")
print(f" Platform: {platform}")
print(f" Project type: {project_type}")
def _deploy_rules(self, platform_info: dict, platform: str, project_type: str):
"""Deploy platform-specific rule files"""
# Deploy shared rules
shared_template = self.templates_dir / project_type / "shared.mdc.jinja2"
if shared_template.exists():
self._render_template(
shared_template,
self.cursor_dir / "rules" / "shared.mdc",
platform_info
)
# Deploy platform-specific rules
platform_template = self.templates_dir / project_type / f"{platform}-dev.mdc.jinja2"
if platform_template.exists():
self._render_template(
platform_template,
self.cursor_dir / "rules" / f"{platform}-dev.mdc",
platform_info
)
def _deploy_mcp_config(self, platform_info: dict, project_type: str):
"""Deploy MCP server configuration"""
# Create basic MCP configuration
mcp_config = {
"mcpServers": {
"mcp-project-orchestrator": {
"command": "python",
"args": ["-m", "mcp_project_orchestrator"],
"env": {
"PROJECT_TYPE": project_type,
"PLATFORM": platform_info["os"]
}
}
}
}
import json
(self.cursor_dir / "mcp.json").write_text(json.dumps(mcp_config, indent=2))
def _render_template(self, template_path: Path, output_path: Path, context: dict):
"""Render Jinja2 template with context"""
template = Template(template_path.read_text())
rendered = template.render(**context)
output_path.write_text(rendered)
print(f" 📄 {output_path.relative_to(self.cursor_dir)}")
```
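The deployer picks which rules template to render from the lowercased `platform.system()` value, so `Linux` resolves to `linux-dev.mdc.jinja2` under the project-type directory. A standalone sketch of that lookup (the helper name `rules_template_name` is hypothetical, introduced only for illustration):

```python
import platform
from pathlib import PurePosixPath


def rules_template_name(override=None, project_type="openssl"):
    # Mirrors deploy(): fall back to the host OS when no platform is given.
    plat = (override or platform.system()).lower()
    # PurePosixPath keeps forward slashes regardless of host OS.
    return str(PurePosixPath(project_type) / f"{plat}-dev.mdc.jinja2")


print(rules_template_name("Linux"))  # → openssl/linux-dev.mdc.jinja2
```

On macOS hosts `platform.system()` returns `"Darwin"`, so the deployer would look for `darwin-dev.mdc.jinja2`; only templates that actually exist on disk get rendered, which is why the `exists()` checks in `_deploy_rules` make missing platforms a no-op rather than an error.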
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/analysis/Sequential_Data_Analysis_with_MCP_Integration.json:
--------------------------------------------------------------------------------
```json
{
  "name": "Sequential Data Analysis with MCP Integration",
  "description": "Advanced prompt template for multi-stage data analysis that integrates filesystem, database, memory, and sequential thinking MCP servers for comprehensive data workflows.",
  "type": "prompt",
  "category": "analysis",
  "content": "# Sequential Data Analysis Assistant\n\nYou are a specialized AI assistant for comprehensive data analysis, with access to multiple MCP servers that enhance your capabilities. Your task is to analyze {{data_type}} data from {{data_source}} and provide insights about {{analysis_objective}}.\n\n## Available MCP Servers\n\nYou have access to the following MCP servers to assist with this analysis:\n\n- **Filesystem**: Access data files, configuration, and save analysis outputs\n- **PostgreSQL**: Query structured data from databases\n- **Memory**: Store intermediate analysis results and insights\n- **Sequential Thinking**: Break complex analysis into logical steps\n- **GitHub**: Access code repositories, documentation, and data processing scripts\n{{additional_servers}}\n\n## Data Context\n\n- **Data Type**: {{data_type}}\n- **Data Source**: {{data_source}}\n- **Analysis Objective**: {{analysis_objective}}\n- **Technical Background**: {{technical_background}}\n- **Required Output Format**: {{output_format}}\n\n## Analysis Plan\n\nYour data analysis should follow these sequential steps, utilizing appropriate MCP servers at each stage:\n\n### 1. Data Discovery and Acquisition\n- Identify all relevant data sources across available servers\n- Use Filesystem MCP to check available data files\n- Use PostgreSQL MCP to explore database schema and available tables\n- Use GitHub MCP to locate relevant data processing scripts\n- Document data types, formats, and relationships\n\n### 2. Data Preparation\n- Use Sequential Thinking MCP to plan data cleaning steps\n- Process data to handle missing values, outliers, transformations\n- Use Memory MCP to store intermediate processing results\n- Document data preparation decisions and their rationale\n\n### 3. Exploratory Analysis\n- Calculate descriptive statistics\n- Identify patterns, correlations, and potential insights\n- Generate appropriate visualizations (described textually)\n- Store key observations in Memory MCP for later reference\n\n### 4. Advanced Analysis\n- Apply statistical methods or machine learning techniques appropriate for {{analysis_objective}}\n- Use Sequential Thinking MCP to break down complex analysis into logical steps\n- Reference relevant GitHub repositories for specialized algorithms\n- Document methodology, assumptions, and limitations\n\n### 5. Synthesis and Reporting\n- Summarize key findings and insights\n- Relate results back to {{analysis_objective}}\n- Provide actionable recommendations\n- Use Filesystem MCP to save analysis results in {{output_format}}\n\n## Guidelines for Your Response\n\n1. Begin by outlining your understanding of the analysis objective and the data context\n2. Specify which MCP servers you'll use for each analysis stage\n3. Provide a structured analysis following the sequential steps above\n4. For complex analyses, use the Sequential Thinking MCP to break down your reasoning\n5. Store important intermediate findings in Memory MCP and reference them in your final analysis\n6. Present results in the required {{output_format}}\n7. Include recommendations for further analysis or actions\n8. Document any limitations of your analysis or areas requiring human validation\n\n{{additional_guidelines}}",
  "variables": [
    "data_type",
    "data_source",
    "analysis_objective",
    "technical_background",
    "output_format",
    "additional_servers",
    "additional_guidelines"
  ],
  "metadata": {
    "source": "/home/sparrow/projects/mcp-prompts/prompts/sequential-data-analysis.json",
    "imported": true
  }
}
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/Sequential_Data_Analysis_with_MCP_Integration.json:
--------------------------------------------------------------------------------
```json
{
  "name": "Sequential Data Analysis with MCP Integration",
  "description": "Advanced prompt template for multi-stage data analysis that integrates filesystem, database, memory, and sequential thinking MCP servers for comprehensive data workflows.",
  "type": "prompt",
  "category": "analysis",
  "content": "# Sequential Data Analysis Assistant\n\nYou are a specialized AI assistant for comprehensive data analysis, with access to multiple MCP servers that enhance your capabilities. Your task is to analyze {{data_type}} data from {{data_source}} and provide insights about {{analysis_objective}}.\n\n## Available MCP Servers\n\nYou have access to the following MCP servers to assist with this analysis:\n\n- **Filesystem**: Access data files, configuration, and save analysis outputs\n- **PostgreSQL**: Query structured data from databases\n- **Memory**: Store intermediate analysis results and insights\n- **Sequential Thinking**: Break complex analysis into logical steps\n- **GitHub**: Access code repositories, documentation, and data processing scripts\n{{additional_servers}}\n\n## Data Context\n\n- **Data Type**: {{data_type}}\n- **Data Source**: {{data_source}}\n- **Analysis Objective**: {{analysis_objective}}\n- **Technical Background**: {{technical_background}}\n- **Required Output Format**: {{output_format}}\n\n## Analysis Plan\n\nYour data analysis should follow these sequential steps, utilizing appropriate MCP servers at each stage:\n\n### 1. Data Discovery and Acquisition\n- Identify all relevant data sources across available servers\n- Use Filesystem MCP to check available data files\n- Use PostgreSQL MCP to explore database schema and available tables\n- Use GitHub MCP to locate relevant data processing scripts\n- Document data types, formats, and relationships\n\n### 2. Data Preparation\n- Use Sequential Thinking MCP to plan data cleaning steps\n- Process data to handle missing values, outliers, transformations\n- Use Memory MCP to store intermediate processing results\n- Document data preparation decisions and their rationale\n\n### 3. Exploratory Analysis\n- Calculate descriptive statistics\n- Identify patterns, correlations, and potential insights\n- Generate appropriate visualizations (described textually)\n- Store key observations in Memory MCP for later reference\n\n### 4. Advanced Analysis\n- Apply statistical methods or machine learning techniques appropriate for {{analysis_objective}}\n- Use Sequential Thinking MCP to break down complex analysis into logical steps\n- Reference relevant GitHub repositories for specialized algorithms\n- Document methodology, assumptions, and limitations\n\n### 5. Synthesis and Reporting\n- Summarize key findings and insights\n- Relate results back to {{analysis_objective}}\n- Provide actionable recommendations\n- Use Filesystem MCP to save analysis results in {{output_format}}\n\n## Guidelines for Your Response\n\n1. Begin by outlining your understanding of the analysis objective and the data context\n2. Specify which MCP servers you'll use for each analysis stage\n3. Provide a structured analysis following the sequential steps above\n4. For complex analyses, use the Sequential Thinking MCP to break down your reasoning\n5. Store important intermediate findings in Memory MCP and reference them in your final analysis\n6. Present results in the required {{output_format}}\n7. Include recommendations for further analysis or actions\n8. Document any limitations of your analysis or areas requiring human validation\n\n{{additional_guidelines}}",
  "variables": [
    "data_type",
    "data_source",
    "analysis_objective",
    "technical_background",
    "output_format",
    "additional_servers",
    "additional_guidelines"
  ],
  "metadata": {
    "source": "/home/sparrow/projects/mcp-prompts/prompts/sequential-data-analysis.json",
    "imported": true
  }
}
```
--------------------------------------------------------------------------------
/tests/test_config.py:
--------------------------------------------------------------------------------
```python
"""Tests for configuration management."""
import pytest
from pathlib import Path
import tempfile
import json
import yaml
from mcp_project_orchestrator.core import MCPConfig, Config
def test_config_creation():
"""Test basic config creation."""
config = MCPConfig()
assert config.settings is not None
assert config.settings.workspace_dir == Path.cwd()
assert config.settings.host == "localhost"
assert config.settings.port == 8000
def test_config_alias():
"""Test that Config is an alias for MCPConfig."""
assert Config is MCPConfig
config = Config()
assert isinstance(config, MCPConfig)
def test_config_path_helpers(tmp_path):
"""Test configuration path helper methods."""
config = MCPConfig()
config.settings.workspace_dir = tmp_path
config.settings.templates_dir = tmp_path / "templates"
config.settings.prompts_dir = tmp_path / "prompts"
config.settings.resources_dir = tmp_path / "resources"
# Test path helpers
workspace_path = config.get_workspace_path("test", "file.txt")
assert workspace_path == tmp_path / "test" / "file.txt"
template_path = config.get_template_path("template.json")
assert template_path == tmp_path / "templates" / "template.json"
prompt_path = config.get_prompt_path("prompt.json")
assert prompt_path == tmp_path / "prompts" / "prompt.json"
resource_path = config.get_resource_path("resource.txt")
assert resource_path == tmp_path / "resources" / "resource.txt"
def test_config_json_loading(tmp_path):
"""Test loading configuration from JSON file."""
config_file = tmp_path / "config.json"
config_data = {
"workspace_dir": str(tmp_path / "workspace"),
"templates_dir": str(tmp_path / "templates"),
"prompts_dir": str(tmp_path / "prompts"),
"host": "0.0.0.0",
"port": 9000,
"debug": True
}
with open(config_file, 'w') as f:
json.dump(config_data, f)
config = MCPConfig(config_path=config_file)
config.load_config()
assert config.settings.host == "0.0.0.0"
assert config.settings.port == 9000
assert config.settings.debug is True
def test_config_yaml_loading(tmp_path):
"""Test loading configuration from YAML file."""
config_file = tmp_path / "config.yml"
config_data = {
"workspace_dir": str(tmp_path / "workspace"),
"host": "127.0.0.1",
"port": 8080
}
with open(config_file, 'w') as f:
yaml.dump(config_data, f)
config = MCPConfig(config_path=config_file)
config.load_config()
assert config.settings.host == "127.0.0.1"
assert config.settings.port == 8080
def test_config_directory_creation(tmp_path):
"""Test that config creates required directories."""
config = MCPConfig()
config.settings.workspace_dir = tmp_path / "workspace"
config.settings.templates_dir = tmp_path / "templates"
config.settings.prompts_dir = tmp_path / "prompts"
config.settings.resources_dir = tmp_path / "resources"
config.settings.output_dir = tmp_path / "output"
config._create_directories()
assert config.settings.workspace_dir.exists()
assert config.settings.templates_dir.exists()
assert config.settings.prompts_dir.exists()
assert config.settings.resources_dir.exists()
assert config.settings.output_dir.exists()
def test_config_invalid_file_format(tmp_path):
"""Test error handling for invalid config file format."""
config_file = tmp_path / "config.txt"
config_file.write_text("invalid config")
config = MCPConfig(config_path=config_file)
with pytest.raises(ValueError, match="Unsupported config file format"):
config.load_config()
def test_config_settings_defaults():
"""Test default settings values."""
config = MCPConfig()
assert config.settings.host == "localhost"
assert config.settings.port == 8000
assert config.settings.debug is False
assert config.settings.template_extensions[".py"] == "python"
assert config.settings.template_extensions[".js"] == "javascript"
```
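The JSON/YAML/invalid-format tests above all hinge on one behavior: `load_config()` dispatches on the file suffix and raises `ValueError` for anything it does not recognize. A standalone sketch of that dispatch — `load_settings` is a hypothetical stand-in for the real `MCPConfig.load_config()`, and only the JSON branch is implemented here (the real class also parses `.yml`/`.yaml` via PyYAML):

```python
import json
import tempfile
from pathlib import Path


def load_settings(config_path: Path) -> dict:
    """Suffix-dispatched config loader (JSON branch only in this sketch)."""
    suffix = config_path.suffix.lower()
    if suffix == ".json":
        return json.loads(config_path.read_text())
    if suffix in (".yml", ".yaml"):
        raise NotImplementedError("YAML branch omitted in this sketch")
    raise ValueError(f"Unsupported config file format: {suffix}")


# Usage: round-trip a config dict through a temporary JSON file.
with tempfile.TemporaryDirectory() as tmp:
    cfg = Path(tmp) / "config.json"
    cfg.write_text(json.dumps({"host": "0.0.0.0", "port": 9000, "debug": True}))
    settings = load_settings(cfg)
    print(settings["port"])  # → 9000
```

Raising on unknown suffixes (rather than guessing a parser) is what lets the `pytest.raises(ValueError, match="Unsupported config file format")` test pin the error message.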
--------------------------------------------------------------------------------
/tests/conftest.py:
--------------------------------------------------------------------------------
```python
"""Pytest configuration and fixtures."""
import os
import shutil
import tempfile
from pathlib import Path
import pytest
import json
from mcp_project_orchestrator.core import MCPConfig
from mcp_project_orchestrator.templates import TemplateManager
from mcp_project_orchestrator.prompt_manager import PromptManager
from mcp_project_orchestrator.mermaid import MermaidGenerator, MermaidRenderer
@pytest.fixture
def temp_dir():
"""Create a temporary directory for tests."""
temp_dir = tempfile.mkdtemp()
yield Path(temp_dir)
shutil.rmtree(temp_dir)
@pytest.fixture
def test_config(temp_dir):
"""Create a test configuration."""
config = MCPConfig()
config.settings.workspace_dir = temp_dir / "workspace"
config.settings.templates_dir = temp_dir / "templates"
config.settings.resources_dir = temp_dir / "resources"
config.settings.prompts_dir = temp_dir / "prompts"
config.settings.output_dir = temp_dir / "diagrams"
# Create directories
config.settings.workspace_dir.mkdir(parents=True, exist_ok=True)
config.settings.templates_dir.mkdir(parents=True, exist_ok=True)
config.settings.resources_dir.mkdir(parents=True, exist_ok=True)
config.settings.prompts_dir.mkdir(parents=True, exist_ok=True)
config.settings.output_dir.mkdir(parents=True, exist_ok=True)
return config
@pytest.fixture
def template_manager(test_config):
"""Create a template manager instance."""
return TemplateManager(test_config.settings.templates_dir)
@pytest.fixture
def prompt_manager(test_config):
"""Create a prompt manager instance."""
manager = PromptManager(test_config)
return manager
@pytest.fixture
def mermaid_generator(test_config):
"""Create a Mermaid generator instance."""
return MermaidGenerator(test_config)
@pytest.fixture
def mermaid_renderer(test_config):
"""Create a Mermaid renderer instance."""
return MermaidRenderer(test_config)
@pytest.fixture
def sample_project_template(temp_dir):
"""Create a sample project template for testing."""
template_dir = temp_dir / "templates" / "sample-project"
template_dir.mkdir(parents=True)
# Create template.json
template_json = {
"name": "sample-project",
"description": "A sample project template for testing",
"type": "project",
"version": "1.0.0",
"variables": {
"project_name": "Name of the project",
"project_description": "Project description",
"author_name": "Author's name",
"author_email": "Author's email"
}
}
with open(template_dir / "template.json", "w") as f:
json.dump(template_json, f, indent=2)
# Create sample files
files_dir = template_dir / "files"
files_dir.mkdir()
with open(files_dir / "README.md", "w") as f:
f.write("# {{ project_name }}\n\n{{ project_description }}")
with open(files_dir / "pyproject.toml", "w") as f:
f.write('[project]\nname = "{{ project_name }}"\nauthor = "{{ author_name }}"')
(files_dir / "src").mkdir()
(files_dir / "tests").mkdir()
return template_dir
@pytest.fixture
def sample_component_template(temp_dir):
"""Create a sample component template for testing."""
template_dir = temp_dir / "templates" / "sample-component"
template_dir.mkdir(parents=True)
# Create template.json
template_json = {
"name": "sample-component",
"description": "A sample component template for testing",
"type": "component",
"version": "1.0.0",
"variables": {
"component_name": "Name of the component",
"component_description": "Component description"
}
}
with open(template_dir / "template.json", "w") as f:
json.dump(template_json, f, indent=2)
# Create sample files
files_dir = template_dir / "files"
files_dir.mkdir()
with open(files_dir / "{{ component_name }}.py", "w") as f:
f.write('"""{{ component_description }}"""\n\nclass {{ component_name }}:\n pass')
return template_dir
```
--------------------------------------------------------------------------------
/mcp-project-orchestrator/openssl/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "mcp-project-orchestrator-openssl"
version = "0.1.0"
description = "Cursor configuration management for OpenSSL development"
readme = "README.md"
requires-python = ">=3.8"
license = {text = "MIT"}
authors = [
    {name = "MCP Project Orchestrator Team", email = "[email protected]"},
]
maintainers = [
    {name = "MCP Project Orchestrator Team", email = "[email protected]"},
]
keywords = ["openssl", "cursor", "ide", "configuration", "management", "conan", "build", "profiles"]
classifiers = [
    "Development Status :: 4 - Beta",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Topic :: Software Development :: Build Tools",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Topic :: System :: Systems Administration",
    "Topic :: Security :: Cryptography",
]
dependencies = [
    "click>=8.0.0",
    "jinja2>=3.0.0",
    "pathlib2>=2.3.0; python_version < '3.4'",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.0.0",
    "pytest-cov>=4.0.0",
    "pytest-xdist>=3.0.0",
    "black>=23.0.0",
    "ruff>=0.1.0",
    "mypy>=1.0.0",
    "pre-commit>=3.0.0",
]
test = [
    "pytest>=7.0.0",
    "pytest-cov>=4.0.0",
    "pytest-xdist>=3.0.0",
]

[project.urls]
Homepage = "https://github.com/sparesparrow/mcp-project-orchestrator"
Documentation = "https://github.com/sparesparrow/mcp-project-orchestrator/blob/main/docs/"
Repository = "https://github.com/sparesparrow/mcp-project-orchestrator"
"Bug Tracker" = "https://github.com/sparesparrow/mcp-project-orchestrator/issues"

[project.scripts]
mcp-orchestrator = "mcp_orchestrator.cli:cli"
deploy-cursor = "mcp_orchestrator.deploy_cursor:deploy_cursor"

[tool.setuptools.packages.find]
where = ["."]
include = ["mcp_orchestrator*"]

[tool.setuptools.package-data]
mcp_orchestrator = [
    "cursor-rules/**/*",
    "cursor-rules/**/*.jinja2",
    "cursor-rules/**/*.md",
]

[tool.black]
line-length = 88
target-version = ['py38', 'py39', 'py310', 'py311', 'py312']
include = '\.pyi?$'
extend-exclude = '''
/(
  # directories
  \.eggs
  | \.git
  | \.hg
  | \.mypy_cache
  | \.tox
  | \.venv
  | build
  | dist
)/
'''

[tool.ruff]
target-version = "py38"
line-length = 88
select = [
    "E",   # pycodestyle errors
    "W",   # pycodestyle warnings
    "F",   # pyflakes
    "I",   # isort
    "B",   # flake8-bugbear
    "C4",  # flake8-comprehensions
    "UP",  # pyupgrade
]
ignore = [
    "E501",  # line too long, handled by black
    "B008",  # do not perform function calls in argument defaults
    "C901",  # too complex
]

[tool.ruff.per-file-ignores]
"__init__.py" = ["F401"]

[tool.mypy]
python_version = "3.8"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
check_untyped_defs = true
disallow_untyped_decorators = true
no_implicit_optional = true
warn_redundant_casts = true
warn_unused_ignores = true
warn_no_return = true
warn_unreachable = true
strict_equality = true

[tool.pytest.ini_options]
minversion = "7.0"
addopts = "-ra -q --strict-markers --strict-config"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks tests as integration tests",
    "unit: marks tests as unit tests",
]

[tool.coverage.run]
source = ["mcp_orchestrator"]
omit = [
    "*/tests/*",
    "*/test_*",
    "*/__pycache__/*",
    "*/venv/*",
    "*/env/*",
]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "if self.debug:",
    "if settings.DEBUG",
    "raise AssertionError",
    "raise NotImplementedError",
    "if 0:",
    "if __name__ == .__main__.:",
    "class .*\\bProtocol\\):",
    "@(abc\\.)?abstractmethod",
]
```
--------------------------------------------------------------------------------
/.github/workflows/fan-out-orchestrator.yml:
--------------------------------------------------------------------------------
```yaml
name: Fan-Out Release Orchestrator
on:
  workflow_dispatch:
    inputs:
      source_repository:
        description: 'Source repository that triggered the release'
        required: true
        type: string
      source_version:
        description: 'Version of the source repository'
        required: true
        type: string
      release_type:
        description: 'Type of release (foundation, tooling, domain, orchestration)'
        required: true
        type: choice
        options:
          - foundation
          - tooling
          - domain
          - orchestration
      dependency_update:
        description: 'Whether this is a dependency update'
        required: false
        default: 'false'
        type: boolean
      triggered_by:
        description: 'What triggered this orchestration'
        required: false
        default: 'manual'
        type: string

env:
  PYTHON_VERSION: "3.11"

jobs:
  orchestrate-release:
    name: Orchestrate Cross-Repository Release
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      actions: read
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v6
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip'

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install PyGithub httpx

      - name: Run fan-out orchestration
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          python -c "
          import asyncio
          from mcp_project_orchestrator.fan_out_orchestrator import ReleaseCoordinator

          async def orchestrate():
              coordinator = ReleaseCoordinator('${{ env.GITHUB_TOKEN }}')
              if '${{ inputs.release_type }}' == 'foundation':
                  result = await coordinator.coordinate_foundation_release('${{ inputs.source_version }}')
              elif '${{ inputs.release_type }}' == 'tooling':
                  result = await coordinator.coordinate_tooling_release('${{ inputs.source_version }}')
              elif '${{ inputs.release_type }}' == 'domain':
                  result = await coordinator.coordinate_domain_release('${{ inputs.source_version }}')
              else:
                  print('Unknown release type: ${{ inputs.release_type }}')
                  return
              print(f'Orchestration result: {result}')

          asyncio.run(orchestrate())
          "

      - name: Generate orchestration report
        if: always()
        run: |
          echo "## 🚀 Fan-Out Release Orchestration Report" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Orchestration Run:** $(date -u '+%Y-%m-%d %H:%M:%S UTC')" >> $GITHUB_STEP_SUMMARY
          echo "**Source Repository:** ${{ inputs.source_repository }}" >> $GITHUB_STEP_SUMMARY
          echo "**Source Version:** ${{ inputs.source_version }}" >> $GITHUB_STEP_SUMMARY
          echo "**Release Type:** ${{ inputs.release_type }}" >> $GITHUB_STEP_SUMMARY
          echo "**Triggered By:** ${{ inputs.triggered_by }}" >> $GITHUB_STEP_SUMMARY
          echo "**Dependency Update:** ${{ inputs.dependency_update }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Orchestration Actions" >> $GITHUB_STEP_SUMMARY
          echo "- Analyzed dependency relationships" >> $GITHUB_STEP_SUMMARY
          echo "- Triggered dependent repository workflows" >> $GITHUB_STEP_SUMMARY
          echo "- Created dependency update PRs where needed" >> $GITHUB_STEP_SUMMARY
          echo "- Monitored release status across ecosystem" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Next Steps" >> $GITHUB_STEP_SUMMARY
          echo "- Review triggered workflows in dependent repositories" >> $GITHUB_STEP_SUMMARY
          echo "- Monitor build and test results" >> $GITHUB_STEP_SUMMARY
          echo "- Address any dependency conflicts" >> $GITHUB_STEP_SUMMARY
          echo "- Verify end-to-end integration" >> $GITHUB_STEP_SUMMARY
```
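Because every input except `release_type` is a free-form string, it can help to validate a dispatch payload locally before calling the GitHub API. A minimal sketch mirroring the schema declared above (the function name and example values are illustrative, not part of the repository):

```python
# Valid values for the 'release_type' choice input declared in the workflow.
VALID_RELEASE_TYPES = {"foundation", "tooling", "domain", "orchestration"}


def validate_dispatch_inputs(inputs: dict) -> dict:
    """Check a workflow_dispatch payload against the inputs schema above."""
    for required in ("source_repository", "source_version", "release_type"):
        if required not in inputs:
            raise ValueError(f"missing required input: {required}")
    if inputs["release_type"] not in VALID_RELEASE_TYPES:
        raise ValueError(f"invalid release_type: {inputs['release_type']}")
    # Optional inputs fall back to the defaults from the workflow file.
    return {"dependency_update": "false", "triggered_by": "manual", **inputs}


payload = validate_dispatch_inputs({
    "source_repository": "sparesparrow/mcp-prompts",  # example value
    "source_version": "1.2.3",                        # example value
    "release_type": "foundation",
})
assert payload["triggered_by"] == "manual"
```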
--------------------------------------------------------------------------------
/scripts/setup_aws_mcp.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for AWS MCP integration
# This script helps configure AWS credentials and test the integration

set -e

echo "======================================"
echo "AWS MCP Integration Setup"
echo "======================================"
echo ""

# Check if boto3 is installed
if ! python3 -c "import boto3" 2>/dev/null; then
    echo "❌ boto3 is not installed"
    echo "Installing boto3 and botocore..."
    pip install boto3 botocore
    echo "✅ boto3 installed successfully"
else
    echo "✅ boto3 is already installed"
fi

# Check if .env file exists
if [ ! -f .env ]; then
    echo ""
    echo "Creating .env file from template..."
    cp .env.example .env
    echo "✅ .env file created"
    echo ""
    echo "⚠️  Please edit .env file and add your AWS credentials"
    echo "   Required variables:"
    echo "   - AWS_REGION"
    echo "   - AWS_ACCESS_KEY_ID (optional if using IAM roles)"
    echo "   - AWS_SECRET_ACCESS_KEY (optional if using IAM roles)"
    echo ""
    echo "You can also use AWS CLI profiles by setting AWS_PROFILE"
else
    echo "✅ .env file already exists"
fi

# Check if AWS CLI is configured
echo ""
echo "Checking AWS CLI configuration..."
if command -v aws &> /dev/null; then
    echo "✅ AWS CLI is installed"
    if aws sts get-caller-identity &> /dev/null; then
        echo "✅ AWS credentials are configured"
        echo ""
        echo "Current AWS Identity:"
        aws sts get-caller-identity
    else
        echo "⚠️  AWS CLI is not configured or credentials are invalid"
        echo ""
        echo "To configure AWS CLI, run:"
        echo "  aws configure"
        echo ""
        echo "Or use environment variables in .env file"
    fi
else
    echo "⚠️  AWS CLI is not installed"
    echo "   Install it from: https://aws.amazon.com/cli/"
fi

# Test AWS MCP integration
echo ""
echo "======================================"
echo "Testing AWS MCP Integration"
echo "======================================"
echo ""

# Create a test script
cat > /tmp/test_aws_mcp.py << 'EOF'
"""Test AWS MCP integration."""
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

print("Loading AWS MCP integration...")

try:
    from mcp_project_orchestrator.aws_mcp import AWSConfig, AWSMCPIntegration

    # Check configuration
    config = AWSConfig()
    print(f"✅ AWS Region: {config.region}")
    if config.validate():
        print("✅ AWS configuration is valid")
    else:
        print("⚠️  AWS configuration validation failed")
        print("   Check your environment variables")

    # Initialize integration
    aws = AWSMCPIntegration(config)
    print("✅ AWS MCP integration initialized")

    # Test best practices
    print("\nTesting AWS best practices...")
    practices = aws.get_aws_best_practices('s3')
    print(f"✅ Retrieved {len(practices)} best practice categories for S3")

    # Test cost estimation
    print("\nTesting cost estimation...")
    estimate = aws.estimate_costs('s3', {'storage_gb': 100, 'requests': 10000})
    print(f"✅ Cost estimate: ${estimate['total_usd']} USD")

    print("\n" + "="*50)
    print("✅ All tests passed!")
    print("="*50)
    print("\nAWS MCP integration is ready to use.")
    print("\nAvailable MCP tools:")
    print("  - aws_list_s3_buckets")
    print("  - aws_list_ec2_instances")
    print("  - aws_list_lambda_functions")
    print("  - aws_best_practices")
    print("  - aws_estimate_costs")

except ImportError as e:
    print(f"❌ Import error: {e}")
    print("\nMake sure you have installed the package:")
    print("  pip install -e .[aws]")
except Exception as e:
    print(f"❌ Error: {e}")
    import traceback
    traceback.print_exc()
EOF

# Run the test
python3 /tmp/test_aws_mcp.py

# Cleanup
rm /tmp/test_aws_mcp.py

echo ""
echo "======================================"
echo "Setup Complete!"
echo "======================================"
echo ""
echo "Next steps:"
echo "1. Edit .env file with your AWS credentials (if needed)"
echo "2. Run the MCP server: python -m mcp_project_orchestrator.project_orchestration"
echo "3. Use AWS MCP tools in your AI assistant"
echo ""
echo "Documentation:"
echo "  - See docs/AWS_MCP.md for detailed usage"
echo "  - See .env.example for all configuration options"
echo ""
```
--------------------------------------------------------------------------------
/tests/test_base_classes.py:
--------------------------------------------------------------------------------
```python
"""Tests for base classes."""
import pytest
from pathlib import Path

from mcp_project_orchestrator.core.base import (
    BaseComponent,
    BaseTemplate,
    BaseManager,
    BaseOrchestrator
)


class ConcreteComponent(BaseComponent):
    """Concrete implementation for testing."""

    async def initialize(self):
        self.initialized = True

    async def cleanup(self):
        self.cleaned_up = True


class ConcreteTemplate(BaseTemplate):
    """Concrete implementation for testing."""

    async def render(self, context):
        return f"Rendered: {context.get('name', 'unknown')}"

    async def validate(self):
        return self.template_path.exists() if hasattr(self, 'template_path') else True


class ConcreteManager(BaseManager):
    """Concrete implementation for testing."""

    async def load_config(self):
        self.config_loaded = True

    async def register_component(self, component):
        self.components[component.name] = component

    async def get_component(self, name):
        return self.components.get(name)

    async def list_components(self):
        return list(self.components.keys())


class ConcreteOrchestrator(BaseOrchestrator):
    """Concrete implementation for testing."""

    async def initialize(self):
        self.initialized = True

    async def cleanup(self):
        self.cleaned_up = True


@pytest.mark.asyncio
async def test_base_component():
    """Test BaseComponent."""
    component = ConcreteComponent("test-component", {"key": "value"})
    assert component.name == "test-component"
    assert component.config == {"key": "value"}

    await component.initialize()
    assert component.initialized is True

    await component.cleanup()
    assert component.cleaned_up is True


@pytest.mark.asyncio
async def test_base_template(tmp_path):
    """Test BaseTemplate."""
    template_file = tmp_path / "template.txt"
    template_file.write_text("Test template")

    template = ConcreteTemplate(template_file)
    assert template.template_path == template_file

    # Test render
    result = await template.render({"name": "Test"})
    assert result == "Rendered: Test"

    # Test validate
    is_valid = await template.validate()
    assert is_valid is True


@pytest.mark.asyncio
async def test_base_manager(tmp_path):
    """Test BaseManager."""
    config_file = tmp_path / "config.json"
    manager = ConcreteManager(config_file)
    assert manager.config_path == config_file
    assert manager.components == {}

    # Test load config
    await manager.load_config()
    assert manager.config_loaded is True

    # Test component registration
    component = ConcreteComponent("comp1", {})
    await manager.register_component(component)

    # Test get component
    retrieved = await manager.get_component("comp1")
    assert retrieved is component

    # Test list components
    components = await manager.list_components()
    assert components == ["comp1"]


@pytest.mark.asyncio
async def test_base_orchestrator():
    """Test BaseOrchestrator."""
    class MockConfig:
        name = "test-orchestrator"

    config = MockConfig()
    orchestrator = ConcreteOrchestrator(config)
    assert orchestrator.config is config
    assert orchestrator.name == "test-orchestrator"

    await orchestrator.initialize()
    assert orchestrator.initialized is True

    await orchestrator.cleanup()
    assert orchestrator.cleaned_up is True


def test_base_component_without_config():
    """Test BaseComponent without config."""
    component = ConcreteComponent("test")
    assert component.name == "test"
    assert component.config == {}


def test_abstract_methods():
    """Test that abstract methods must be implemented."""
    # BaseComponent requires initialize and cleanup
    with pytest.raises(TypeError):
        class IncompleteComponent(BaseComponent):
            async def initialize(self):
                pass
            # Missing cleanup

        IncompleteComponent("test")

    # BaseTemplate requires render and validate
    with pytest.raises(TypeError):
        class IncompleteTemplate(BaseTemplate):
            async def render(self, context):
                pass
            # Missing validate

        IncompleteTemplate(Path("test"))
```
--------------------------------------------------------------------------------
/mcp-project-orchestrator/openssl/conanfile.py:
--------------------------------------------------------------------------------
```python
"""
Conan package for mcp-project-orchestrator/openssl

This package provides Cursor configuration management for OpenSSL development,
similar to how Conan manages build profiles.
"""
from conan import ConanFile
from conan.tools.files import copy, get
from conan.tools.layout import basic_layout
import os


class MCPProjectOrchestratorOpenSSLConan(ConanFile):
    name = "mcp-project-orchestrator-openssl"
    version = "0.1.0"
    description = "Cursor configuration management for OpenSSL development"
    license = "MIT"
    url = "https://github.com/sparesparrow/mcp-project-orchestrator"
    homepage = "https://github.com/sparesparrow/mcp-project-orchestrator"
    topics = ("openssl", "cursor", "ide", "configuration", "management", "conan", "build", "profiles")
    package_type = "python-require"
    settings = "os", "arch", "compiler", "build_type"

    options = {
        "with_cursor": [True, False],
        "cursor_opt_out": [True, False],
    }
    default_options = {
        "with_cursor": True,
        "cursor_opt_out": False,
    }

    def configure(self):
        """Configure the package."""
        # This is a Python package, not a C++ library
        self.settings.rm_safe("compiler")
        self.settings.rm_safe("build_type")
        self.settings.rm_safe("arch")

    def layout(self):
        """Set up the package layout."""
        basic_layout(self)

    def requirements(self):
        """Define package requirements."""
        self.requires("python_requires/click/8.0.0")
        self.requires("python_requires/jinja2/3.0.0")

    def build_requirements(self):
        """Define build requirements."""
        if self.options.with_cursor:
            self.build_requires("python_requires/click/8.0.0")
            self.build_requires("python_requires/jinja2/3.0.0")

    def source(self):
        """Download source code."""
        # This package contains only Python code and templates
        # No external source download needed
        pass

    def build(self):
        """Build the package."""
        # This is a Python package, no compilation needed
        pass

    def package(self):
        """Package the files."""
        # Copy Python package
        copy(self, "mcp_orchestrator/*", src=self.source_folder, dst=os.path.join(self.package_folder, "mcp_orchestrator"))
        # Copy cursor-rules templates
        copy(self, "cursor-rules/**/*", src=self.source_folder, dst=os.path.join(self.package_folder, "cursor-rules"))
        # Copy configuration files
        copy(self, "pyproject.toml", src=self.source_folder, dst=self.package_folder)
        copy(self, "setup.py", src=self.source_folder, dst=self.package_folder)
        copy(self, "requirements.txt", src=self.source_folder, dst=self.package_folder)

    def package_info(self):
        """Define package information."""
        # Set Python path
        self.cpp_info.bindirs = []
        self.cpp_info.libdirs = []
        self.cpp_info.includedirs = []
        # Set Python package path
        self.env_info.PYTHONPATH.append(os.path.join(self.package_folder, "mcp_orchestrator"))
        # Set cursor-rules path
        self.env_info.CURSOR_RULES_PATH = os.path.join(self.package_folder, "cursor-rules")
        # Set package options
        self.env_info.MCP_ORCHESTRATOR_WITH_CURSOR = str(self.options.with_cursor)
        self.env_info.MCP_ORCHESTRATOR_CURSOR_OPT_OUT = str(self.options.cursor_opt_out)

    def deploy(self):
        """Deploy the package."""
        # Copy Python package to destination
        copy(self, "mcp_orchestrator/*", src=self.package_folder, dst=self.build_folder)
        # Copy cursor-rules templates
        copy(self, "cursor-rules/**/*", src=self.package_folder, dst=self.build_folder)
        # Copy configuration files
        copy(self, "pyproject.toml", src=self.package_folder, dst=self.build_folder)
        copy(self, "setup.py", src=self.package_folder, dst=self.build_folder)
        copy(self, "requirements.txt", src=self.package_folder, dst=self.build_folder)

    def package_id(self):
        """Customize package ID."""
        # Include options in package ID
        self.info.options.with_cursor = self.options.with_cursor
        self.info.options.cursor_opt_out = self.options.cursor_opt_out
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/development/Monorepo_Migration_and_Code_Organization_Guide.json:
--------------------------------------------------------------------------------
```json
{
"name": "Monorepo Migration and Code Organization Guide",
"description": "A template for guiding the migration of code into a monorepo structure with best practices for TypeScript interfaces, Docker configuration, and CI/CD workflows",
"type": "prompt",
"category": "development",
"content": "# Monorepo Migration and Code Organization Guide for {{project_name}}\n\n## Overview\n\nThis guide outlines the process for migrating {{project_type}} codebases into a monorepo structure while adhering to best practices for code organization, interface consolidation, containerization, and CI/CD workflows.\n\n## Interface Consolidation\n\n### TypeScript Interfaces Unification\n\n1. Create a centralized interfaces directory:\n ```bash\n mkdir -p src/interfaces\n ```\n\n2. Consolidate related interfaces into a single file to reduce fragmentation:\n - Group interfaces by domain/purpose\n - Maintain consistent naming conventions\n - Document each interface with JSDoc comments\n - Export all interfaces from a single entry point `index.ts`\n\n3. Example unified interface structure:\n ```typescript\n /**\n * Core domain interfaces\n */\n export interface {{primary_interface_name}} {\n id: string;\n name: string;\n // Additional properties...\n }\n\n /**\n * Service interfaces\n */\n export interface {{service_interface_name}} {\n // Service methods...\n }\n\n /**\n * Storage adapters\n */\n export interface StorageAdapter {\n // Storage operations...\n }\n ```\n\n## Docker Configuration\n\n### Dockerfile Best Practices\n\n1. Use multi-stage builds for better efficiency:\n ```dockerfile\n # Build stage\n FROM node:{{node_version}}-alpine AS build\n WORKDIR /app\n COPY package*.json ./\n RUN npm ci\n COPY . .\n RUN npm run build\n\n # Production stage\n FROM node:{{node_version}}-alpine\n WORKDIR /app\n COPY --from=build /app/build ./build\n # Additional configuration...\n ```\n\n2. Set appropriate environment variables\n3. Use non-root users for security\n4. Implement health checks\n5. Add proper LABEL metadata\n6. Configure volumes for persistent data\n\n### Docker Compose\n\n1. Base configuration for core functionality:\n ```yaml\n services:\n {{service_name}}:\n build: .\n volumes:\n - ./data:/app/data\n environment:\n - NODE_ENV=production\n # Additional environment variables...\n ```\n\n2. Extended configurations for additional functionality (database, etc.)\n3. Development-specific configurations\n\n## GitHub Workflows\n\n### Essential CI/CD Workflows\n\n1. Main CI workflow for testing and linting\n2. Build and publish workflow for releases\n3. Containerized testing workflow\n\n### Workflow Structure\n\n```yaml\nname: {{workflow_name}}\n\non:\n push:\n branches: [main]\n pull_request:\n branches: [main]\n\njobs:\n test:\n runs-on: ubuntu-latest\n # Job configuration...\n\n build:\n needs: [test]\n # Build configuration...\n```\n\n## Containerized Testing\n\nImplement containerized testing to ensure consistent environments:\n\n1. Create test-specific Dockerfiles\n2. Set up Docker networks for integrated tests\n3. Use Docker Compose for multi-container testing scenarios\n4. Implement proper cleanup procedures\n\n## DevContainer Configuration\n\nProvide consistent development environments:\n\n```json\n{\n \"name\": \"{{project_name}} Dev Environment\",\n \"build\": {\n \"dockerfile\": \"../Dockerfile\",\n \"context\": \"..\"\n },\n \"customizations\": {\n \"vscode\": {\n \"extensions\": [\n \"dbaeumer.vscode-eslint\",\n \"esbenp.prettier-vscode\"\n // Additional extensions...\n ]\n }\n }\n}\n```\n\n## Implementation Strategy\n\n1. Create a feature branch for interface consolidation\n2. Migrate interfaces in stages, testing thoroughly\n3. Add Docker and CI configurations\n4. Validate with containerized tests\n5. Create comprehensive documentation\n\n## Technical Considerations\n\n{{technical_considerations}}\n",
"variables": [
"project_name",
"project_type",
"primary_interface_name",
"service_interface_name",
"node_version",
"service_name",
"workflow_name",
"technical_considerations"
],
"metadata": {
"source": "/home/sparrow/projects/mcp-prompts/prompts/monorepo-migration-guide.json",
"imported": true
}
}
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/Monorepo_Migration_and_Code_Organization_Guide.json:
--------------------------------------------------------------------------------
```json
{
"name": "Monorepo Migration and Code Organization Guide",
"description": "A template for guiding the migration of code into a monorepo structure with best practices for TypeScript interfaces, Docker configuration, and CI/CD workflows",
"type": "prompt",
"category": "development",
"content": "# Monorepo Migration and Code Organization Guide for {{project_name}}\n\n## Overview\n\nThis guide outlines the process for migrating {{project_type}} codebases into a monorepo structure while adhering to best practices for code organization, interface consolidation, containerization, and CI/CD workflows.\n\n## Interface Consolidation\n\n### TypeScript Interfaces Unification\n\n1. Create a centralized interfaces directory:\n ```bash\n mkdir -p src/interfaces\n ```\n\n2. Consolidate related interfaces into a single file to reduce fragmentation:\n - Group interfaces by domain/purpose\n - Maintain consistent naming conventions\n - Document each interface with JSDoc comments\n - Export all interfaces from a single entry point `index.ts`\n\n3. Example unified interface structure:\n ```typescript\n /**\n * Core domain interfaces\n */\n export interface {{primary_interface_name}} {\n id: string;\n name: string;\n // Additional properties...\n }\n\n /**\n * Service interfaces\n */\n export interface {{service_interface_name}} {\n // Service methods...\n }\n\n /**\n * Storage adapters\n */\n export interface StorageAdapter {\n // Storage operations...\n }\n ```\n\n## Docker Configuration\n\n### Dockerfile Best Practices\n\n1. Use multi-stage builds for better efficiency:\n ```dockerfile\n # Build stage\n FROM node:{{node_version}}-alpine AS build\n WORKDIR /app\n COPY package*.json ./\n RUN npm ci\n COPY . .\n RUN npm run build\n\n # Production stage\n FROM node:{{node_version}}-alpine\n WORKDIR /app\n COPY --from=build /app/build ./build\n # Additional configuration...\n ```\n\n2. Set appropriate environment variables\n3. Use non-root users for security\n4. Implement health checks\n5. Add proper LABEL metadata\n6. Configure volumes for persistent data\n\n### Docker Compose\n\n1. Base configuration for core functionality:\n ```yaml\n services:\n {{service_name}}:\n build: .\n volumes:\n - ./data:/app/data\n environment:\n - NODE_ENV=production\n # Additional environment variables...\n ```\n\n2. Extended configurations for additional functionality (database, etc.)\n3. Development-specific configurations\n\n## GitHub Workflows\n\n### Essential CI/CD Workflows\n\n1. Main CI workflow for testing and linting\n2. Build and publish workflow for releases\n3. Containerized testing workflow\n\n### Workflow Structure\n\n```yaml\nname: {{workflow_name}}\n\non:\n push:\n branches: [main]\n pull_request:\n branches: [main]\n\njobs:\n test:\n runs-on: ubuntu-latest\n # Job configuration...\n\n build:\n needs: [test]\n # Build configuration...\n```\n\n## Containerized Testing\n\nImplement containerized testing to ensure consistent environments:\n\n1. Create test-specific Dockerfiles\n2. Set up Docker networks for integrated tests\n3. Use Docker Compose for multi-container testing scenarios\n4. Implement proper cleanup procedures\n\n## DevContainer Configuration\n\nProvide consistent development environments:\n\n```json\n{\n \"name\": \"{{project_name}} Dev Environment\",\n \"build\": {\n \"dockerfile\": \"../Dockerfile\",\n \"context\": \"..\"\n },\n \"customizations\": {\n \"vscode\": {\n \"extensions\": [\n \"dbaeumer.vscode-eslint\",\n \"esbenp.prettier-vscode\"\n // Additional extensions...\n ]\n }\n }\n}\n```\n\n## Implementation Strategy\n\n1. Create a feature branch for interface consolidation\n2. Migrate interfaces in stages, testing thoroughly\n3. Add Docker and CI configurations\n4. Validate with containerized tests\n5. Create comprehensive documentation\n\n## Technical Considerations\n\n{{technical_considerations}}\n",
"variables": [
"project_name",
"project_type",
"primary_interface_name",
"service_interface_name",
"node_version",
"service_name",
"workflow_name",
"technical_considerations"
],
"metadata": {
"source": "/home/sparrow/projects/mcp-prompts/prompts/monorepo-migration-guide.json",
"imported": true
}
}
```
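The `content` field uses `{{variable}}` placeholders (no surrounding spaces) for each name listed in `variables`. A minimal sketch of the substitution, leaving unknown placeholders untouched; the `render` helper and example value are illustrative, not the project's actual API:

```python
import re

# First line of the template's content field, with one placeholder.
template = "# Monorepo Migration and Code Organization Guide for {{project_name}}"
variables = {"project_name": "acme-platform"}  # example value


def render(text: str, variables: dict) -> str:
    """Substitute {{name}} placeholders; unknown names are left as-is."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables.get(m.group(1), m.group(0)), text)


rendered = render(template, variables)
assert rendered.endswith("for acme-platform")
assert render("{{unknown}}", variables) == "{{unknown}}"
```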
--------------------------------------------------------------------------------
/tests/test_prompts.py:
--------------------------------------------------------------------------------
```python
"""Tests for the prompt management system."""
import pytest
from pathlib import Path

from mcp_project_orchestrator.prompt_manager import (
    PromptManager,
    PromptTemplate,
    PromptCategory,
    PromptMetadata,
)


def test_prompt_metadata():
    """Test prompt metadata creation and conversion."""
    metadata = PromptMetadata(
        name="test-prompt",
        description="Test prompt",
        category=PromptCategory.SYSTEM,
        version="1.0.0",
        author="Test Author",
        tags=["test", "system"],
        variables={"var1": "desc1", "var2": "desc2"},
    )

    # Test to_dict
    data = metadata.to_dict()
    assert data["name"] == "test-prompt"
    assert data["category"] == "system"

    # Test from_dict
    new_metadata = PromptMetadata.from_dict(data)
    assert new_metadata.name == metadata.name
    assert new_metadata.category == metadata.category


def test_prompt_template():
    """Test prompt template creation and rendering."""
    template = PromptTemplate(
        metadata=PromptMetadata(
            name="test-prompt",
            description="Test prompt",
            category=PromptCategory.SYSTEM,
        ),
        content="Hello {{ name }}! Welcome to {{ project }}.",
    )

    # Test variable substitution
    rendered = template.render({
        "name": "User",
        "project": "MCP",
    })
    assert rendered == "Hello User! Welcome to MCP."

    # Test missing variable
    with pytest.raises(KeyError):
        template.render({"name": "User"})


def test_prompt_manager(prompt_manager, temp_dir):
    """Test prompt manager functionality."""
    # Create test prompt
    prompt_dir = temp_dir / "prompts"
    prompt_dir.mkdir(parents=True, exist_ok=True)
    prompt_file = prompt_dir / "test-prompt.json"
    prompt_file.write_text("""
    {
        "metadata": {
            "name": "test-prompt",
            "description": "Test prompt",
            "category": "system",
            "version": "1.0.0",
            "author": "Test Author",
            "tags": ["test", "system"],
            "variables": {
                "name": "User name",
                "project": "Project name"
            }
        },
        "content": "Hello {{ name }}! Welcome to {{ project }}."
    }
    """)

    # Load prompts
    prompt_manager.discover_prompts()

    # List prompts
    all_prompts = prompt_manager.list_prompts()
    assert "test-prompt" in all_prompts

    # Get specific prompts
    system_prompts = prompt_manager.list_prompts(PromptCategory.SYSTEM)
    assert "test-prompt" in system_prompts

    # Get prompt
    prompt = prompt_manager.get_prompt("test-prompt")
    assert prompt is not None
    assert prompt.metadata.name == "test-prompt"
    assert prompt.metadata.category == PromptCategory.SYSTEM

    # Render prompt
    rendered = prompt_manager.render_prompt("test-prompt", {
        "name": "User",
        "project": "MCP",
    })
    assert rendered == "Hello User! Welcome to MCP."


def test_prompt_validation(prompt_manager):
    """Test prompt validation."""
    # Invalid prompt (missing required fields)
    metadata = PromptMetadata(
        name="invalid-prompt",
        description="Invalid prompt",
        category=PromptCategory.SYSTEM,
    )
    template = PromptTemplate(metadata=metadata, content="")
    assert not template.validate()

    # Valid prompt
    metadata = PromptMetadata(
        name="valid-prompt",
        description="Valid prompt",
        category=PromptCategory.SYSTEM,
        version="1.0.0",
        author="Test Author",
        tags=["test"],
        variables={"var1": "desc1"},
    )
    template = PromptTemplate(metadata=metadata, content="Test {{ var1 }}")
    assert template.validate()


def test_prompt_save_load(prompt_manager, temp_dir):
    """Test saving and loading prompts."""
    # Create prompt
    metadata = PromptMetadata(
        name="save-test",
        description="Save test prompt",
        category=PromptCategory.SYSTEM,
        version="1.0.0",
        author="Test Author",
        tags=["test"],
        variables={"var1": "desc1"},
    )
    template = PromptTemplate(metadata=metadata, content="Test {{ var1 }}")

    # Save prompt
    prompt_manager.save_prompt(template)

    # Load prompt
    loaded = prompt_manager.get_prompt("save-test")
    assert loaded is not None
    assert loaded.metadata.name == template.metadata.name
    assert loaded.content == template.content
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/mcp-py/mcp-server.py:
--------------------------------------------------------------------------------
```python
# server.py
import asyncio
import json
import logging
import os
from typing import Any, Dict, List, Optional

from fastapi import FastAPI, Request, Response
from fastapi.responses import JSONResponse
from sse_starlette.sse import EventSourceResponse
import uvicorn

from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
from mcp.server.stdio import stdio_server
from mcp.server.sse import SseServerTransport
import mcp.types as types

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class MCPToolServer(Server):
    def __init__(self):
        super().__init__("tool-server")
        self.tools: List[Dict[str, Any]] = []
        self.load_tools()

    def load_tools(self):
        """Load tool definitions from tools.json"""
        try:
            with open("tools.json", "r") as f:
                self.tools = json.load(f)
        except Exception as e:
            logger.error(f"Failed to load tools.json: {e}")
            self.tools = []

    async def handle_list_tools(self) -> List[types.Tool]:
        """Handle tools/list request"""
        return [
            types.Tool(
                name=tool["name"],
                description=tool.get("description", ""),
                inputSchema=tool["input_schema"]
            )
            for tool in self.tools
        ]

    async def handle_call_tool(self, name: str, arguments: Optional[Dict[str, Any]] = None) -> List[types.TextContent]:
        """Handle tools/call request"""
        # Find the requested tool
        tool = next((t for t in self.tools if t["name"] == name), None)
        if not tool:
            raise ValueError(f"Tool not found: {name}")
        # Here you would implement the actual tool execution logic
        # For now, we'll just echo back the call details
        result = f"Called tool {name} with arguments: {json.dumps(arguments or {})}"
        return [types.TextContent(type="text", text=result)]


class TransportManager:
    """Manages different transport types for the MCP server"""

    def __init__(self, server: MCPToolServer):
        self.server = server
        self.app = FastAPI()
        self.setup_routes()

    def setup_routes(self):
        """Set up FastAPI routes for SSE and HTTP endpoints"""

        @self.app.get("/sse")
        async def sse_endpoint(request: Request):
            transport = SseServerTransport("/message")
            return EventSourceResponse(self.handle_sse(transport, request))

        @self.app.post("/message")
        async def message_endpoint(request: Request):
            message = await request.json()
            # Handle incoming messages for SSE transport
return JSONResponse({"status": "ok"})
@self.app.post("/tools/call/{tool_name}")
async def call_tool(tool_name: str, request: Request):
arguments = await request.json()
result = await self.server.handle_call_tool(tool_name, arguments)
return JSONResponse({"result": result})
@self.app.get("/tools")
async def list_tools():
tools = await self.server.handle_list_tools()
return JSONResponse({"tools": [t.dict() for t in tools]})
async def handle_sse(self, transport, request):
"""Handle SSE connection"""
async with transport.connect_sse(request.scope, request.receive, request.send) as streams:
await self.server.run(
streams[0],
streams[1],
self.server.create_initialization_options()
)
async def run_stdio(self):
"""Run server with stdio transport"""
async with stdio_server() as (read_stream, write_stream):
await self.server.run(
read_stream,
write_stream,
self.server.create_initialization_options()
)
async def main():
# Create server and transport manager
server = MCPToolServer()
transport_mgr = TransportManager(server)
# Determine transport type from environment
transport_type = os.environ.get("MCP_TRANSPORT", "stdio")
if transport_type == "stdio":
await transport_mgr.run_stdio()
    else:
        # Run HTTP/SSE server
        port = int(os.environ.get("MCP_PORT", 8000))
        config = uvicorn.Config(transport_mgr.app, host="0.0.0.0", port=port)
        # Use a distinct name so the MCP server above is not shadowed.
        http_server = uvicorn.Server(config)
        await http_server.serve()
if __name__ == "__main__":
asyncio.run(main())
```
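`load_tools()` above reads tool definitions from `tools.json`. The schema is not shown on this page, but judging from the keys the code accesses (`name`, `description`, `input_schema`), a hypothetical minimal file might look like:

```json
[
  {
    "name": "echo",
    "description": "Echo back the provided text",
    "input_schema": {
      "type": "object",
      "properties": { "text": { "type": "string" } },
      "required": ["text"]
    }
  }
]
```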
--------------------------------------------------------------------------------
/data/prompts/templates/monorepo-migration-guide.json:
--------------------------------------------------------------------------------
```json
{
"id": "monorepo-migration-guide",
"name": "Monorepo Migration and Code Organization Guide",
"description": "A template for guiding the migration of code into a monorepo structure with best practices for TypeScript interfaces, Docker configuration, and CI/CD workflows",
"content": "# Monorepo Migration and Code Organization Guide for {{project_name}}\n\n## Overview\n\nThis guide outlines the process for migrating {{project_type}} codebases into a monorepo structure while adhering to best practices for code organization, interface consolidation, containerization, and CI/CD workflows.\n\n## Interface Consolidation\n\n### TypeScript Interfaces Unification\n\n1. Create a centralized interfaces directory:\n ```bash\n mkdir -p src/interfaces\n ```\n\n2. Consolidate related interfaces into a single file to reduce fragmentation:\n - Group interfaces by domain/purpose\n - Maintain consistent naming conventions\n - Document each interface with JSDoc comments\n - Export all interfaces from a single entry point `index.ts`\n\n3. Example unified interface structure:\n ```typescript\n /**\n * Core domain interfaces\n */\n export interface {{primary_interface_name}} {\n id: string;\n name: string;\n // Additional properties...\n }\n\n /**\n * Service interfaces\n */\n export interface {{service_interface_name}} {\n // Service methods...\n }\n\n /**\n * Storage adapters\n */\n export interface StorageAdapter {\n // Storage operations...\n }\n ```\n\n## Docker Configuration\n\n### Dockerfile Best Practices\n\n1. Use multi-stage builds for better efficiency:\n ```dockerfile\n # Build stage\n FROM node:{{node_version}}-alpine AS build\n WORKDIR /app\n COPY package*.json ./\n RUN npm ci\n COPY . .\n RUN npm run build\n\n # Production stage\n FROM node:{{node_version}}-alpine\n WORKDIR /app\n COPY --from=build /app/build ./build\n # Additional configuration...\n ```\n\n2. Set appropriate environment variables\n3. Use non-root users for security\n4. Implement health checks\n5. Add proper LABEL metadata\n6. Configure volumes for persistent data\n\n### Docker Compose\n\n1. 
Base configuration for core functionality:\n ```yaml\n services:\n {{service_name}}:\n build: .\n volumes:\n - ./data:/app/data\n environment:\n - NODE_ENV=production\n # Additional environment variables...\n ```\n\n2. Extended configurations for additional functionality (database, etc.)\n3. Development-specific configurations\n\n## GitHub Workflows\n\n### Essential CI/CD Workflows\n\n1. Main CI workflow for testing and linting\n2. Build and publish workflow for releases\n3. Containerized testing workflow\n\n### Workflow Structure\n\n```yaml\nname: {{workflow_name}}\n\non:\n push:\n branches: [main]\n pull_request:\n branches: [main]\n\njobs:\n test:\n runs-on: ubuntu-latest\n # Job configuration...\n\n build:\n needs: [test]\n # Build configuration...\n```\n\n## Containerized Testing\n\nImplement containerized testing to ensure consistent environments:\n\n1. Create test-specific Dockerfiles\n2. Set up Docker networks for integrated tests\n3. Use Docker Compose for multi-container testing scenarios\n4. Implement proper cleanup procedures\n\n## DevContainer Configuration\n\nProvide consistent development environments:\n\n```json\n{\n \"name\": \"{{project_name}} Dev Environment\",\n \"build\": {\n \"dockerfile\": \"../Dockerfile\",\n \"context\": \"..\"\n },\n \"customizations\": {\n \"vscode\": {\n \"extensions\": [\n \"dbaeumer.vscode-eslint\",\n \"esbenp.prettier-vscode\"\n // Additional extensions...\n ]\n }\n }\n}\n```\n\n## Implementation Strategy\n\n1. Create a feature branch for interface consolidation\n2. Migrate interfaces in stages, testing thoroughly\n3. Add Docker and CI configurations\n4. Validate with containerized tests\n5. Create comprehensive documentation\n\n## Technical Considerations\n\n{{technical_considerations}}\n",
"isTemplate": true,
"variables": [
"project_name",
"project_type",
"primary_interface_name",
"service_interface_name",
"node_version",
"service_name",
"workflow_name",
"technical_considerations"
],
"tags": [
"development",
"monorepo",
"typescript",
"docker",
"ci-cd",
"migration"
],
"category": "development",
"createdAt": "2024-08-08T15:30:00.000Z",
"updatedAt": "2024-08-08T15:30:00.000Z",
"version": 1
}
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/mcp-py/mcp-client.py:
--------------------------------------------------------------------------------
```python
# client.py
import asyncio
import json
from typing import Any, Dict, List, Optional
import os
import httpx
from urllib.parse import urljoin
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import mcp.types as types
class MCPClient:
"""A flexible MCP client that supports both stdio and HTTP/SSE transports"""
    def __init__(self, transport_type: str = "stdio", server_url: Optional[str] = None):
        self.transport_type = transport_type
        self.server_url = server_url or "http://localhost:8000"
        self.session: Optional[ClientSession] = None
        self._stdio_cm = None
        self._session_cm = None
        self.http_client = httpx.AsyncClient()
    async def connect_stdio(self, server_command: str, server_args: Optional[List[str]] = None):
        """Connect using stdio transport"""
        params = StdioServerParameters(
            command=server_command,
            args=server_args or [],
            env=None
        )
        # Keep references to the context managers so close() can exit them;
        # calling __aenter__ on a temporary would make clean shutdown impossible.
        self._stdio_cm = stdio_client(params)
        streams = await self._stdio_cm.__aenter__()
        self._session_cm = ClientSession(streams[0], streams[1])
        self.session = await self._session_cm.__aenter__()
        await self.session.initialize()
async def connect_http(self):
"""Connect using HTTP transport"""
# For HTTP transport, we don't need to maintain a persistent connection
# We'll just make HTTP requests as needed
pass
async def list_tools(self) -> List[Dict[str, Any]]:
"""List available tools"""
if self.transport_type == "stdio":
if not self.session:
raise RuntimeError("Not connected")
response = await self.session.list_tools()
return [tool.dict() for tool in response.tools]
else:
# Use HTTP endpoint
async with httpx.AsyncClient() as client:
response = await client.get(urljoin(self.server_url, "/tools"))
response.raise_for_status()
return response.json()["tools"]
async def call_tool(self, tool_name: str, arguments: Optional[Dict[str, Any]] = None) -> Any:
"""Call a specific tool"""
if self.transport_type == "stdio":
if not self.session:
raise RuntimeError("Not connected")
result = await self.session.call_tool(tool_name, arguments or {})
return result
else:
# Use HTTP endpoint
async with httpx.AsyncClient() as client:
response = await client.post(
urljoin(self.server_url, f"/tools/call/{tool_name}"),
json=arguments or {}
)
response.raise_for_status()
return response.json()["result"]
    async def close(self):
        """Clean up resources"""
        # Exit the session and transport context managers in reverse order
        # of acquisition, falling back to the session itself if needed.
        if getattr(self, "_session_cm", None) is not None:
            await self._session_cm.__aexit__(None, None, None)
        elif self.session:
            await self.session.__aexit__(None, None, None)
        if getattr(self, "_stdio_cm", None) is not None:
            await self._stdio_cm.__aexit__(None, None, None)
        await self.http_client.aclose()
class MCPClientCLI:
"""Command-line interface for the MCP client"""
def __init__(self):
self.transport_type = os.environ.get("MCP_TRANSPORT", "stdio")
self.server_url = os.environ.get("MCP_SERVER_URL", "http://localhost:8000")
self.client = MCPClient(self.transport_type, self.server_url)
async def run(self):
"""Run the CLI"""
try:
if self.transport_type == "stdio":
await self.client.connect_stdio("python", ["server.py"])
else:
await self.client.connect_http()
while True:
command = input("\nEnter command (list_tools/call_tool/quit): ").strip()
if command == "quit":
break
elif command == "list_tools":
tools = await self.client.list_tools()
print("\nAvailable tools:")
for tool in tools:
print(f"- {tool['name']}: {tool['description']}")
elif command == "call_tool":
tool_name = input("Enter tool name: ").strip()
args_str = input("Enter arguments as JSON (or empty): ").strip()
arguments = json.loads(args_str) if args_str else {}
result = await self.client.call_tool(tool_name, arguments)
print("\nResult:", result)
else:
print("Unknown command")
finally:
await self.client.close()
async def main():
cli = MCPClientCLI()
await cli.run()
if __name__ == "__main__":
asyncio.run(main())
```
--------------------------------------------------------------------------------
/data/prompts/templates/advanced-multi-server-template.json:
--------------------------------------------------------------------------------
```json
{
"id": "advanced-multi-server-template",
"name": "Advanced Multi-Server Integration Template",
"description": "A comprehensive template that coordinates multiple MCP servers for complex tasks requiring diverse capabilities",
"content": "# Advanced Multi-Server Assistant\n\nYou are an advanced AI assistant with access to multiple specialized MCP servers that significantly enhance your capabilities. Your task is to help with {{primary_task}} by coordinating these diverse tools and resources effectively.\n\n## Available MCP Servers and Capabilities\n\n### Core Resources and Data Access\n- **filesystem**: Access files and directories on the local system\n - Use for: examining code, reading configuration files, accessing project documentation\n- **github**: Interact with repositories, issues, pull requests, and code on GitHub\n - Use for: code exploration, commit history analysis, repository management\n- **postgres**: Execute SQL queries and interact with database content\n - Use for: data analysis, schema exploration, complex data retrieval\n\n### Knowledge Management\n- **prompts**: Access and apply specialized templates for different tasks\n - Use for: structured workflows, consistent outputs, domain-specific prompting\n- **memory**: Store and retrieve key information across conversation sessions\n - Use for: retaining context, tracking progress on multi-step tasks\n\n### Enhanced Reasoning\n- **sequential-thinking**: Break down complex problems into logical steps\n - Use for: multi-step reasoning, maintaining clarity in complex analyses\n- **mcp-compass**: Navigate between different capabilities with strategic direction\n - Use for: orchestrating complex workflows involving multiple servers\n\n### Specialized Capabilities\n- **puppeteer**: Automate browser interactions and web scraping\n - Use for: testing web applications, extracting data from websites\n- **elevenlabs**: Convert text to realistic speech\n - Use for: creating audio versions of content, accessibility enhancements\n- **brave-search**: Perform web searches for up-to-date information\n - Use for: research, finding relevant resources, staying current\n\n## Integration Strategy\n\nI will coordinate these capabilities based 
on your needs by:\n1. **Understanding the primary goal** of {{primary_task}}\n2. **Identifying which MCP servers** are most relevant for this task\n3. **Creating a workflow** that efficiently combines their capabilities\n4. **Executing tasks** in an optimal sequence\n5. **Synthesizing results** into a comprehensive response\n\n## Specialized Task Approach\n\nFor your specific task in {{domain_expertise}}, I'll focus on using:\n- {{primary_server_1}}\n- {{primary_server_2}}\n- {{primary_server_3}}\n\nAdditional servers may be utilized as needed based on our conversation.\n\n## Guiding Principles\n\n- I'll prioritize {{priority_principle}} in my approach\n- I'll maintain awareness of {{ethical_consideration}} throughout our interaction\n- I'll structure my responses to emphasize {{output_focus}}\n\nLet's begin by clarifying your specific needs for {{primary_task}} and how I can best leverage these MCP servers to assist you.",
"variables": [
"primary_task",
"domain_expertise",
"primary_server_1",
"primary_server_2",
"primary_server_3",
"priority_principle",
"ethical_consideration",
"output_focus"
],
"examples": [
{
"name": "Code Repository Analysis",
"values": {
"primary_task": "analyzing a GitHub repository structure and suggesting improvements",
"domain_expertise": "software architecture",
"primary_server_1": "github",
"primary_server_2": "filesystem",
"primary_server_3": "sequential-thinking",
"priority_principle": "maintainability and clarity",
"ethical_consideration": "respecting original code design intentions",
"output_focus": "actionable recommendations with examples"
}
},
{
"name": "Data Analysis Project",
"values": {
"primary_task": "exploring a database and generating insights about customer behavior",
"domain_expertise": "data analytics",
"primary_server_1": "postgres",
"primary_server_2": "sequential-thinking",
"primary_server_3": "memory",
"priority_principle": "finding meaningful patterns in complex data",
"ethical_consideration": "privacy and data protection concerns",
"output_focus": "visualizable insights and business recommendations"
}
}
],
"categories": ["integration", "multi-server", "advanced", "orchestration"]
}
```
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/version.py:
--------------------------------------------------------------------------------
```python
"""Prompt version implementation for MCP Project Orchestrator.
This module provides the PromptVersion class that handles version information
for prompts, following semantic versioning principles.
"""
from typing import Dict, Any
class PromptVersion:
"""Class representing a prompt version."""
def __init__(self, major: int = 1, minor: int = 0, patch: int = 0):
"""Initialize prompt version.
Args:
major: Major version number
minor: Minor version number
patch: Patch version number
"""
self.major = major
self.minor = minor
self.patch = patch
def bump_major(self) -> None:
"""Bump major version number.
This resets minor and patch numbers to 0.
"""
self.major += 1
self.minor = 0
self.patch = 0
def bump_minor(self) -> None:
"""Bump minor version number.
This resets patch number to 0.
"""
self.minor += 1
self.patch = 0
def bump_patch(self) -> None:
"""Bump patch version number."""
self.patch += 1
def is_compatible_with(self, other: "PromptVersion") -> bool:
"""Check if this version is compatible with another version.
Compatible versions have the same major version number.
Args:
other: Version to compare with
Returns:
bool: True if versions are compatible
"""
return self.major == other.major
def is_newer_than(self, other: "PromptVersion") -> bool:
"""Check if this version is newer than another version.
Args:
other: Version to compare with
Returns:
bool: True if this version is newer
"""
if self.major != other.major:
return self.major > other.major
if self.minor != other.minor:
return self.minor > other.minor
return self.patch > other.patch
def to_dict(self) -> Dict[str, Any]:
"""Convert version to dictionary representation.
Returns:
Dict[str, Any]: Dictionary representation
"""
return {
"major": self.major,
"minor": self.minor,
"patch": self.patch,
}
def __str__(self) -> str:
"""Get string representation.
Returns:
str: String representation in format 'major.minor.patch'
"""
return f"{self.major}.{self.minor}.{self.patch}"
def __repr__(self) -> str:
"""Get detailed string representation.
Returns:
str: Detailed string representation
"""
return (
f"PromptVersion(major={self.major}, "
f"minor={self.minor}, "
f"patch={self.patch})"
)
def __eq__(self, other: object) -> bool:
"""Check if versions are equal.
Args:
other: Version to compare with
Returns:
bool: True if versions are equal
"""
if not isinstance(other, PromptVersion):
return NotImplemented
return (
self.major == other.major
and self.minor == other.minor
and self.patch == other.patch
)
def __lt__(self, other: "PromptVersion") -> bool:
"""Check if this version is less than another version.
Args:
other: Version to compare with
Returns:
bool: True if this version is less than other
"""
if self.major != other.major:
return self.major < other.major
if self.minor != other.minor:
return self.minor < other.minor
return self.patch < other.patch
def __le__(self, other: "PromptVersion") -> bool:
"""Check if this version is less than or equal to another version.
Args:
other: Version to compare with
Returns:
bool: True if this version is less than or equal to other
"""
return self < other or self == other
def __gt__(self, other: "PromptVersion") -> bool:
"""Check if this version is greater than another version.
Args:
other: Version to compare with
Returns:
bool: True if this version is greater than other
"""
return not (self <= other)
def __ge__(self, other: "PromptVersion") -> bool:
"""Check if this version is greater than or equal to another version.
Args:
other: Version to compare with
Returns:
bool: True if this version is greater than or equal to other
"""
return not (self < other)
```
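The ordering defined by `__lt__`/`__eq__` above coincides with Python's built-in tuple comparison; a standalone sketch (independent of the class, with a hypothetical helper name) illustrating the same semantics:

```python
# (major, minor, patch) tuples compare lexicographically, which is
# exactly the ordering PromptVersion implements.
def is_compatible(a: tuple, b: tuple) -> bool:
    # Same rule as PromptVersion.is_compatible_with: equal major versions.
    return a[0] == b[0]

v1, v2, v3 = (1, 0, 0), (1, 2, 3), (2, 0, 0)
assert v1 < v2 < v3            # semver ordering via tuple comparison
assert is_compatible(v1, v2)   # same major -> compatible
assert not is_compatible(v2, v3)
```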
--------------------------------------------------------------------------------
/mcp-project-orchestrator/openssl/mcp_orchestrator/platform_detector.py:
--------------------------------------------------------------------------------
```python
"""
Platform detection utilities for Cursor configuration deployment.
This module detects the developer's platform and environment to select
appropriate rule templates and configuration settings.
"""
import os
import platform
import sys
from datetime import datetime
from pathlib import Path
from typing import Any, Dict
class PlatformDetector:
"""Detect developer platform and environment for Cursor configuration."""
def __init__(self):
self._cache: Dict[str, Any] = {}
def detect_platform(self) -> Dict[str, Any]:
"""
Detect developer platform and environment.
Returns:
Dictionary containing platform information including OS,
version, Python version, CI status, and user details.
"""
if self._cache:
return self._cache
system = platform.system().lower()
# Detect CI environment
is_ci = os.getenv("CI", "false").lower() == "true"
is_github_actions = os.getenv("GITHUB_ACTIONS", "false").lower() == "true"
is_gitlab_ci = os.getenv("GITLAB_CI", "false").lower() == "true"
is_jenkins = os.getenv("JENKINS_URL") is not None
# Get user information
user = os.getenv("USER", os.getenv("USERNAME", "developer"))
home = str(Path.home())
# Detect shell
shell = os.getenv("SHELL", "/bin/bash")
if system == "windows":
shell = os.getenv("COMSPEC", "cmd.exe")
# Detect development tools
has_git = self._has_command("git")
has_conan = self._has_command("conan")
has_cursor = self._has_command("cursor")
# Detect Python environment
python_version = platform.python_version()
python_implementation = platform.python_implementation()
# Detect if running in virtual environment
in_venv = (
hasattr(sys, 'real_prefix') or
(hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix) or
os.getenv("VIRTUAL_ENV") is not None
)
platform_info = {
"os": system,
"os_version": platform.version(),
"os_release": platform.release(),
"architecture": platform.machine(),
"python_version": python_version,
"python_implementation": python_implementation,
"in_venv": in_venv,
"is_ci": is_ci,
"is_github_actions": is_github_actions,
"is_gitlab_ci": is_gitlab_ci,
"is_jenkins": is_jenkins,
"user": user,
"home": home,
"shell": shell,
"has_git": has_git,
"has_conan": has_conan,
"has_cursor": has_cursor,
"timestamp": datetime.now().isoformat(),
}
self._cache = platform_info
return platform_info
def _has_command(self, command: str) -> bool:
"""Check if a command is available in PATH."""
import shutil
return shutil.which(command) is not None
def get_rule_template_name(self) -> str:
"""
Get the appropriate rule template name based on platform detection.
Returns:
Template filename (without .jinja2 extension)
"""
platform_info = self.detect_platform()
if platform_info["is_ci"]:
return "ci-linux" # Default CI template
else:
os_name = platform_info["os"]
return f"{os_name}-dev"
def get_mcp_command(self) -> str:
"""
Get the appropriate MCP command for the platform.
Returns:
Command to run MCP servers (npx or npx.cmd)
"""
platform_info = self.detect_platform()
if platform_info["os"] == "windows":
return "npx.cmd"
else:
return "npx"
def is_development_environment(self) -> bool:
"""Check if this is a development environment (not CI)."""
platform_info = self.detect_platform()
return not platform_info["is_ci"]
def get_conan_home(self) -> str:
"""Get the Conan home directory for this platform."""
platform_info = self.detect_platform()
conan_home = os.getenv("CONAN_USER_HOME")
if conan_home:
return conan_home
# Default Conan home
home = platform_info["home"]
if platform_info["os"] == "windows":
return f"{home}\\.conan2"
else:
return f"{home}/.conan2"
```
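`_has_command` above delegates to `shutil.which`, which performs the same `PATH` lookup a shell would; a minimal standalone check of that pattern:

```python
import shutil

def has_command(command: str) -> bool:
    # True when `command` resolves to an executable somewhere on PATH.
    return shutil.which(command) is not None

# A name this implausible should never resolve on any machine.
assert has_command("no-such-command-xyz-12345") is False
```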
--------------------------------------------------------------------------------
/tests/integration/test_core_integration.py:
--------------------------------------------------------------------------------
```python
"""
Integration tests for the core module functionality.
These tests verify that the core components work together properly.
"""
import os
import pytest
import tempfile
from pathlib import Path
import json
from mcp_project_orchestrator.core import FastMCPServer, MCPConfig
from mcp_project_orchestrator.core import setup_logging, MCPException
class TestCoreIntegration:
"""Integration tests for core module components."""
@pytest.fixture
def temp_config_dir(self):
"""Create a temporary config directory."""
with tempfile.TemporaryDirectory() as temp_dir:
yield Path(temp_dir)
@pytest.fixture
def config(self, temp_config_dir):
"""Create a test configuration."""
config_data = {
"name": "test-mcp-server",
"version": "0.1.0",
"description": "Test MCP Server",
"server": {
"host": "127.0.0.1",
"port": 8080
},
"paths": {
"prompts": str(temp_config_dir / "prompts"),
"templates": str(temp_config_dir / "templates"),
"mermaid": str(temp_config_dir / "mermaid"),
"resources": str(temp_config_dir / "resources")
}
}
config_file = temp_config_dir / "config.json"
with open(config_file, "w") as f:
json.dump(config_data, f)
# Create required directories
for dir_name in ["prompts", "templates", "mermaid", "resources"]:
(temp_config_dir / dir_name).mkdir(exist_ok=True)
return MCPConfig(config_file=config_file)
def test_config_loading(self, config):
"""Test that configuration is loaded properly."""
assert config.name == "test-mcp-server"
assert config.version == "0.1.0"
assert config.description == "Test MCP Server"
assert config.server_host == "127.0.0.1"
assert config.server_port == 8080
def test_server_initialization(self, config):
"""Test that the server initializes properly."""
server = FastMCPServer(config=config)
assert server.name == config.name
assert server.config == config
def test_tool_registration(self, config):
"""Test that tools can be registered."""
server = FastMCPServer(config=config)
tool_name = "test-tool"
tool_description = "A test tool"
tool_params = {
"type": "object",
"properties": {
"param1": {"type": "string"},
"param2": {"type": "number"}
},
"required": ["param1"]
}
# Define a simple tool function
def tool_function(params):
return {"result": f"Hello, {params['param1']}!"}
# Register the tool
server.register_tool(
name=tool_name,
description=tool_description,
parameters=tool_params,
handler=tool_function
)
# Check if the tool was registered
assert tool_name in server.tools
assert server.tools[tool_name]["description"] == tool_description
assert server.tools[tool_name]["parameters"] == tool_params
assert server.tools[tool_name]["handler"] == tool_function
def test_resource_registration(self, config):
"""Test that resources can be registered."""
server = FastMCPServer(config=config)
resource_name = "test-resource"
resource_content = {"key": "value"}
# Register the resource
server.register_resource(
name=resource_name,
content=resource_content
)
# Check if the resource was registered
assert resource_name in server.resources
assert server.resources[resource_name] == resource_content
def test_exception_handling(self):
"""Test that exceptions are handled properly."""
with pytest.raises(MCPException) as excinfo:
raise MCPException("Test exception")
assert "Test exception" in str(excinfo.value)
def test_logging_setup(self, temp_config_dir):
"""Test that logging is set up properly."""
log_file = temp_config_dir / "test.log"
logger = setup_logging(log_file=log_file)
# Log a test message
test_message = "Test log message"
logger.info(test_message)
# Check if the message was logged
with open(log_file, "r") as f:
log_content = f.read()
assert test_message in log_content
```
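The registration tests above imply that `FastMCPServer` keeps tools in a dict keyed by name. A minimal standalone sketch of that registry pattern (hypothetical class name, not the actual implementation):

```python
class ToolRegistry:
    """Toy registry mirroring the register_tool contract the tests check."""

    def __init__(self):
        self.tools = {}

    def register_tool(self, name, description, parameters, handler):
        # Store everything the tests later assert on, keyed by tool name.
        self.tools[name] = {
            "description": description,
            "parameters": parameters,
            "handler": handler,
        }

reg = ToolRegistry()
reg.register_tool(
    "test-tool", "A test tool",
    {"type": "object", "required": ["param1"]},
    lambda params: {"result": f"Hello, {params['param1']}!"},
)
assert "test-tool" in reg.tools
assert reg.tools["test-tool"]["handler"]({"param1": "World"}) == {"result": "Hello, World!"}
```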
--------------------------------------------------------------------------------
/component_templates.json:
--------------------------------------------------------------------------------
```json
{
"component_templates": [
{
"name": "Factory",
"description": "Creates objects using the Factory Method.",
"mermaid": "Generate a Mermaid diagram visualizing FactoryMethod Design Pattern, showing the factory, product, and client interactions."
},
{
"name": "AbstractFactory",
"description": "Creates related objects without specifying concrete classes.",
"mermaid": "Generate a Mermaid diagram visualizing AbstractFactory Design Pattern, illustrating families of related objects and their creation process."
},
{
"name": "Builder",
"description": "Constructs objects using a step-by-step approach.",
"mermaid": "Generate a Mermaid diagram visualizing Builder Design Pattern, that shows the step-by-step construction process of complex objects."
},
{
"name": "Prototype",
"description": "Clones existing objects as prototypes.",
"mermaid": "Generate a Mermaid diagram visualizing Prototype Design Pattern, displaying object cloning and prototype relationships."
},
{
"name": "Singleton",
"description": "Manages a single instance across the application.",
"mermaid": "Generate a Mermaid diagram visualizing Singleton Design Pattern, illustrating the single instance and global access point."
},
{
"name": "Adapter",
"description": "Adapts one interface to another.",
"mermaid": "Generate a Mermaid diagram visualizing Adapter Design Pattern, showing how incompatible interfaces are adapted to work together."
},
{
"name": "Decorator",
"description": "Wraps objects to extend functionality.",
"mermaid": "Generate a Mermaid diagram visualizing Decorator Design Pattern, that shows dynamic addition of behaviors to objects."
},
{
"name": "Facade",
"description": "Simplifies interactions with a subsystem.",
"mermaid": "Generate a Mermaid diagram visualizing Facade Design Pattern, showing a simplified interface versus a complex subsystem."
},
{
"name": "Proxy",
"description": "Intermediates access to a real object.",
"mermaid": "Generate a Mermaid diagram visualizing Proxy Design Pattern, demonstrating the surrogate control pattern."
},
{
"name": "Chain",
"description": "Processes a request through a chain of handlers.",
"mermaid": "Generate a Mermaid diagram visualizing ChainOfResponsibility Design Pattern, showing the flow of requests through handlers."
},
{
"name": "Command",
"description": "Encapsulates requests as objects.",
"mermaid": "Generate a Mermaid diagram visualizing Command Design Pattern, outlining encapsulated request objects and their invokers."
},
{
"name": "Iterator",
"description": "Enables sequential access to collection elements.",
"mermaid": "Generate a Mermaid diagram visualizing Iterator Design Pattern, showing sequential access to a collection."
},
{
"name": "Mediator",
"description": "Manages communication between objects.",
"mermaid": "Generate a Mermaid diagram visualizing Mediator Design Pattern, showing how components communicate via a mediator."
},
{
"name": "Memento",
"description": "Stores and restores object state.",
"mermaid": "Generate a Mermaid diagram visualizing Memento Design Pattern, illustrating state capture and restoration processes."
},
{
"name": "Observer",
"description": "Notifies subscribers of state changes.",
"mermaid": "Generate a Mermaid diagram visualizing Observer Design Pattern, showing one-to-many dependencies between subjects and observers."
},
{
"name": "State",
"description": "Manages state-dependent behavior.",
"mermaid": "Generate a Mermaid diagram visualizing State Design Pattern, outlining state transitions and behavior changes."
},
{
"name": "Strategy",
"description": "Encapsulates interchangeable algorithms.",
"mermaid": "Generate a Mermaid diagram visualizing Strategy Design Pattern, showing interchangeable algorithms and their selection criteria."
},
{
"name": "TemplateMethod",
"description": "Defines a skeletal algorithm with overrideable steps.",
"mermaid": "Generate a Mermaid diagram visualizing TemplateMethod Design Pattern, illustrating the skeletal algorithm with customizable steps."
},
{
"name": "Visitor",
"description": "Separates operations from object structure.",
"mermaid": "Generate a Mermaid diagram visualizing Visitor Design Pattern, showing the separation of operations from object structure."
}
]
}
```
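The catalogue above pairs each behavioral pattern with a Mermaid prompt rather than an implementation. For orientation, a minimal Python sketch of one entry (Observer); class and method names here are illustrative only, not part of the catalogue:

```python
class Subject:
    """Holds state and notifies registered observers on each change."""

    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, callback):
        """Register a callable to be invoked with the new state."""
        self._observers.append(callback)

    def set_state(self, state):
        self._state = state
        for notify in self._observers:
            notify(state)


seen = []
subject = Subject()
subject.attach(seen.append)
subject.set_state("ready")
print(seen)  # ['ready']
```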
--------------------------------------------------------------------------------
/project_orchestration.json:
--------------------------------------------------------------------------------
```json
{
"communication_protocol": "JSON-RPC",
"mcp_compliance": true,
"system_prompt": "You are an AI assistant specializing in guiding users through software project implementation using systematic approaches and design patterns. Your goal is to orchestrate the development of a project from an idea provided by the user. Execute the following steps:\n1. Extract key information from the user's query and decide on relevant context - files, MCP tools or prompts - to link each next step with.\n2. Determine which known design patterns and SW architecture abstraction concepts cover the logic behind the user's idea.\n3. Select one of the project templates from the catalogue and apply it by creating a new directory in my common SW projects directory and copying in the contents of the selected template's data folder. In the copied template's files, make sure to correctly substitute variable placeholders with their actual values, combining the user's idea with design-pattern best practices and other results of the conducted research.\n4. Create Project Documentation - describe the SW architecture, components and modules, their relationships, interfaces, communication protocols, technologies used, dependencies, and installation, build, run and test commands. Use simplified mermaid tools to visualize the various parts of the documentation.\n5. Prepare the file structure and visualize the directory tree of the project. For each source code file, assign a suitable file name and location within the project. Filenames must be consistent, clear, descriptive, and unambiguous about the assumed file contents. Maintain consistency with existing naming patterns if present. Follow standard naming conventions for the type of item being named and ensure names are unique within their context.\n6. Decide in which order files should be implemented, how features should be tested, and how components should be built and deployed. Update the project README with final aggregated notes on project orchestration and instructions for the composer implementor agent.",
"goals": [
"Analyze user input to identify suitable design patterns and project templates.",
"Initialize and define the tools required for project setup.",
"Prepare an implementation plan that includes design patterns, file structures, and testing strategies.",
"Provide clear instructions for tool usage during the project lifecycle."
],
"tools": [
{
"name": "MermaidTool",
"description": "A unified tool to generate Mermaid diagrams for visualizing software architecture, design patterns, and process flows.",
"input_schema": {
"type": "object",
"properties": {
"diagram_planning": {
"type": "string",
"description": "Planned strategy for the diagram implementation."
}
},
"required": ["diagram_planning"]
}
}
],
"steps": [
{
"phase": "Information Extraction",
"description": "Extract key information from the user's query and decide on relevant context - files, MCP tools or prompts - to link each next step with."
},
{
"phase": "Design Patterns & Architecture Identification",
"description": "Determine which known design patterns and software architecture abstraction concepts align with the user's idea, considering system modularity and orchestrated design."
},
{
"phase": "Project Template Application",
"description": "Select one of the project templates from the catalogue and apply it by creating a new directory in the common SW projects directory, copying in the contents of the selected template's data folder, and substituting variable placeholders appropriately."
},
{
"phase": "Project Documentation & Visual Design",
"description": "Create comprehensive project documentation that outlines the software architecture, components, and modules, and includes visual representations using MermaidTool."
},
{
"phase": "File Structure Preparation",
"description": "Prepare the file structure and visualize the directory tree of the project. Assign clear, descriptive, and consistent file names and locations with temporary TODO comments for future implementation."
},
{
"phase": "Implementation Strategy",
"description": "Decide on the implementation order for files, define testing strategies for implemented features, and update the project README with final orchestration notes and instructions for the composer implementor agent."
}
]
}
```
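The `MermaidTool` entry above declares a JSON-Schema-like `input_schema`. Enforcing it does not need a schema library; a minimal validation sketch (the helper name is an assumption, not part of the config):

```python
def validate_tool_input(schema: dict, payload: dict) -> list:
    """Return a list of validation errors for a JSON-Schema-like object schema."""
    errors = []
    # Every field listed under "required" must be present.
    for field in schema.get("required", []):
        if field not in payload:
            errors.append(f"missing required field: {field}")
    # Declared string properties must actually hold strings.
    for field, spec in schema.get("properties", {}).items():
        if field in payload and spec.get("type") == "string" and not isinstance(payload[field], str):
            errors.append(f"field {field} must be a string")
    return errors


schema = {
    "type": "object",
    "properties": {"diagram_planning": {"type": "string"}},
    "required": ["diagram_planning"],
}
print(validate_tool_input(schema, {}))  # ['missing required field: diagram_planning']
print(validate_tool_input(schema, {"diagram_planning": "flow of handlers"}))  # []
```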
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/templates/renderer.py:
--------------------------------------------------------------------------------
```python
"""Template renderer for MCP Project Orchestrator.
This module provides the TemplateRenderer class that handles template rendering
using Jinja2, with support for custom filters and extensions.
"""
from pathlib import Path
from typing import Dict, Any, Optional, Union
import re
from jinja2 import Environment, FileSystemLoader, Template, select_autoescape
from ..core.exceptions import TemplateError
class TemplateRenderer:
"""Template renderer implementation."""
def __init__(self, template_dir: Optional[Union[str, Path]] = None):
"""Initialize template renderer.
Args:
template_dir: Optional directory containing template files
"""
self.template_dir = Path(template_dir) if template_dir else None
self.env = Environment(
loader=FileSystemLoader(str(template_dir)) if template_dir else None,
autoescape=select_autoescape(['html', 'xml']),
trim_blocks=True,
lstrip_blocks=True,
keep_trailing_newline=True,
)
self._setup_filters()
self._setup_extensions()
def _setup_filters(self) -> None:
"""Set up custom Jinja2 filters."""
self.env.filters.update({
'camelcase': self._to_camel_case,
'snakecase': self._to_snake_case,
'kebabcase': self._to_kebab_case,
'pascalcase': self._to_pascal_case,
'quote': lambda s: f'"{s}"',
'indent': self._indent_text,
})
def _setup_extensions(self) -> None:
"""Set up Jinja2 extensions."""
self.env.add_extension('jinja2.ext.do')
self.env.add_extension('jinja2.ext.loopcontrols')
def render_string(self, template_str: str, context: Dict[str, Any]) -> str:
"""Render a template string with context.
Args:
template_str: Template string to render
context: Template variables
Returns:
str: Rendered content
Raises:
TemplateError: If rendering fails
"""
try:
template = self.env.from_string(template_str)
return template.render(**context)
except Exception as e:
raise TemplateError(f"Failed to render template string: {str(e)}")
def render_file(self, template_path: Union[str, Path], context: Dict[str, Any]) -> str:
"""Render a template file with context.
Args:
template_path: Path to template file
context: Template variables
Returns:
str: Rendered content
Raises:
TemplateError: If rendering fails
"""
if not self.template_dir:
raise TemplateError("Template directory not set")
try:
template = self.env.get_template(str(template_path))
return template.render(**context)
except Exception as e:
raise TemplateError(
f"Failed to render template file {template_path}: {str(e)}"
)
@staticmethod
def _to_camel_case(s: str) -> str:
"""Convert string to camelCase.
Args:
s: Input string
Returns:
str: Converted string
"""
s = re.sub(r"[^a-zA-Z0-9]+", " ", s).title().replace(" ", "")
return s[0].lower() + s[1:] if s else s
@staticmethod
def _to_snake_case(s: str) -> str:
"""Convert string to snake_case.
Args:
s: Input string
Returns:
str: Converted string
"""
s = re.sub(r"[^a-zA-Z0-9]+", " ", s)
return "_".join(s.lower().split())
@staticmethod
def _to_kebab_case(s: str) -> str:
"""Convert string to kebab-case.
Args:
s: Input string
Returns:
str: Converted string
"""
s = re.sub(r"[^a-zA-Z0-9]+", " ", s)
return "-".join(s.lower().split())
@staticmethod
def _to_pascal_case(s: str) -> str:
"""Convert string to PascalCase.
Args:
s: Input string
Returns:
str: Converted string
"""
s = re.sub(r"[^a-zA-Z0-9]+", " ", s).title().replace(" ", "")
return s
@staticmethod
def _indent_text(text: str, width: int = 4, first: bool = False) -> str:
"""Indent text by specified width.
Args:
text: Text to indent
width: Number of spaces for indentation
first: Whether to indent first line
Returns:
str: Indented text
"""
prefix = " " * width
lines = text.splitlines()
if not lines:
return ""
if first:
return "\n".join(prefix + line if line else line for line in lines)
# Leave the first line untouched; indent the remaining non-empty lines.
head, *rest = lines
return "\n".join([head] + [prefix + line if line else line for line in rest])
```
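The case-conversion filters registered in `_setup_filters` can be exercised standalone; a self-contained sketch of the same regex-based conversions:

```python
import re


def to_snake_case(s: str) -> str:
    """Replace non-alphanumerics with spaces, then join lowercase words."""
    s = re.sub(r"[^a-zA-Z0-9]+", " ", s)
    return "_".join(s.lower().split())


def to_camel_case(s: str) -> str:
    """Title-case the words, strip spaces, then lowercase the first letter."""
    s = re.sub(r"[^a-zA-Z0-9]+", " ", s).title().replace(" ", "")
    return s[0].lower() + s[1:] if s else s


print(to_snake_case("My Project-Name"))  # my_project_name
print(to_camel_case("my project name"))  # myProjectName
```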
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompt_manager/loader.py:
--------------------------------------------------------------------------------
```python
"""
Prompt loader for MCP Project Orchestrator.
This module provides functionality for loading prompt templates
from various sources (files, directories, remote repositories).
"""
import asyncio
from pathlib import Path
from typing import Dict, List, Optional, Set
import json
import aiohttp
import logging
from ..core import BaseOrchestrator, Config
from .template import PromptTemplate
class PromptLoader(BaseOrchestrator):
"""Class for loading prompt templates from various sources."""
def __init__(self, config: Config):
"""Initialize the prompt loader.
Args:
config: Configuration instance
"""
super().__init__(config)
self.templates: Dict[str, PromptTemplate] = {}
self.categories: Set[str] = set()
self.tags: Set[str] = set()
async def initialize(self) -> None:
"""Initialize the prompt loader.
Loads templates from the configured templates directory.
"""
await self.load_templates_from_directory(
self.config.prompt_templates_dir
)
async def cleanup(self) -> None:
"""Clean up resources."""
self.templates.clear()
self.categories.clear()
self.tags.clear()
async def load_templates_from_directory(
self, directory: Path
) -> None:
"""Load all template files from a directory.
Args:
directory: Directory containing template files
Raises:
FileNotFoundError: If the directory doesn't exist
"""
if not directory.exists():
raise FileNotFoundError(f"Templates directory not found: {directory}")
for file_path in directory.glob("*.json"):
try:
template = PromptTemplate.from_file(file_path)
template.validate()
self.templates[template.name] = template
if template.category:
self.categories.add(template.category)
self.tags.update(template.tags)
except (json.JSONDecodeError, ValueError) as e:
self.log_error(f"Error loading template {file_path}", e)
async def load_template_from_url(self, url: str) -> Optional[PromptTemplate]:
"""Load a template from a URL.
Args:
url: URL of the template JSON file
Returns:
Loaded template or None if loading failed
"""
try:
async with aiohttp.ClientSession() as session:
async with session.get(url) as response:
if response.status != 200:
self.log_error(
f"Failed to fetch template from {url}: "
f"Status {response.status}"
)
return None
data = await response.json()
template = PromptTemplate(**data)
template.validate()
return template
except (aiohttp.ClientError, ValueError) as e:
self.log_error(f"Error loading template from {url}", e)
return None
def get_template(self, name: str) -> Optional[PromptTemplate]:
"""Get a template by name.
Args:
name: Name of the template
Returns:
Template instance or None if not found
"""
return self.templates.get(name)
def get_templates_by_category(self, category: str) -> List[PromptTemplate]:
"""Get all templates in a category.
Args:
category: Category to filter by
Returns:
List of templates in the category
"""
return [
template for template in self.templates.values()
if template.category == category
]
def get_templates_by_tag(self, tag: str) -> List[PromptTemplate]:
"""Get all templates with a specific tag.
Args:
tag: Tag to filter by
Returns:
List of templates with the tag
"""
return [
template for template in self.templates.values()
if tag in template.tags
]
def get_all_categories(self) -> List[str]:
"""Get all available categories.
Returns:
List of category names
"""
return sorted(self.categories)
def get_all_tags(self) -> List[str]:
"""Get all available tags.
Returns:
List of tag names
"""
return sorted(self.tags)
```
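The directory scan in `load_templates_from_directory` boils down to globbing for `*.json`, validating each file, and skipping invalid ones. A self-contained sketch of that loop, assuming a hypothetical minimal schema with just a `name` field:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory


def load_templates(directory: Path) -> dict:
    """Load a name -> data mapping for every valid JSON template in a directory."""
    templates = {}
    for file_path in sorted(directory.glob("*.json")):
        try:
            data = json.loads(file_path.read_text())
            if "name" not in data:
                raise ValueError("missing 'name'")
            templates[data["name"]] = data
        except (json.JSONDecodeError, ValueError):
            continue  # skip invalid files, mirroring the loader's log-and-continue
    return templates


with TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "good.json").write_text('{"name": "greeting", "template": "Hi {{ who }}"}')
    (d / "bad.json").write_text("not json")
    loaded = load_templates(d)
print(sorted(loaded))  # ['greeting']
```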
--------------------------------------------------------------------------------
/mcp-project-orchestrator/openssl/mcp_orchestrator/cursor_config.py:
--------------------------------------------------------------------------------
```python
"""
Cursor configuration management utilities.
This module provides classes and utilities for managing Cursor IDE
configuration files and settings.
"""
import json
from pathlib import Path
from typing import Dict, Any, List, Optional
from dataclasses import dataclass
@dataclass
class CursorRule:
"""Represents a Cursor rule configuration."""
title: str
description: str
platform: str
content: str
created: str
user: str
def to_mdc_content(self) -> str:
"""Convert to .mdc file content with YAML frontmatter."""
frontmatter = {
"title": self.title,
"description": self.description,
"platform": self.platform,
"created": self.created,
"user": self.user,
}
yaml_content = "\n".join([
"---",
*[f"{k}: {v}" for k, v in frontmatter.items()],
"---",
"",
])
return yaml_content + self.content
@dataclass
class MCPServerConfig:
"""Represents an MCP server configuration."""
name: str
command: str
args: List[str]
env: Dict[str, str]
disabled: bool = False
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for JSON serialization."""
config = {
"command": self.command,
"args": self.args,
"env": self.env,
}
if self.disabled:
config["disabled"] = True
return config
class CursorConfig:
"""Manages Cursor IDE configuration files."""
def __init__(self, cursor_dir: Path):
self.cursor_dir = Path(cursor_dir)
self.rules_dir = self.cursor_dir / "rules"
self.prompts_dir = self.cursor_dir / "prompts"
self.mcp_config_file = self.cursor_dir / "mcp.json"
def create_directory_structure(self) -> None:
"""Create the standard .cursor directory structure."""
self.cursor_dir.mkdir(exist_ok=True)
self.rules_dir.mkdir(exist_ok=True)
self.prompts_dir.mkdir(exist_ok=True)
(self.rules_dir / "custom").mkdir(exist_ok=True)
def write_rule(self, rule: CursorRule, filename: str) -> None:
"""Write a rule to the rules directory."""
rule_file = self.rules_dir / f"{filename}.mdc"
rule_file.write_text(rule.to_mdc_content())
def write_prompt(self, title: str, content: str, filename: str) -> None:
"""Write a prompt to the prompts directory."""
prompt_file = self.prompts_dir / f"{filename}.md"
prompt_file.write_text(f"# {title}\n\n{content}")
def write_mcp_config(self, servers: List[MCPServerConfig],
global_shortcut: str = "Ctrl+Shift+.",
logging_level: str = "info") -> None:
"""Write MCP server configuration."""
config = {
"mcpServers": {
server.name: server.to_dict()
for server in servers
},
"globalShortcut": global_shortcut,
"logging": {
"level": logging_level
}
}
self.mcp_config_file.write_text(json.dumps(config, indent=2))
def create_gitignore(self) -> None:
"""Create .gitignore for .cursor directory."""
gitignore_content = """# Cursor IDE local customizations
*.log
*.cache
.cursor-session
# Keep rule templates and prompts in VCS
!rules/
!prompts/
!mcp.json
# Ignore developer-specific overrides
rules/custom/
"""
gitignore_file = self.cursor_dir / ".gitignore"
gitignore_file.write_text(gitignore_content)
def get_existing_rules(self) -> List[str]:
"""Get list of existing rule files."""
if not self.rules_dir.exists():
return []
return [f.stem for f in self.rules_dir.glob("*.mdc")]
def get_existing_prompts(self) -> List[str]:
"""Get list of existing prompt files."""
if not self.prompts_dir.exists():
return []
return [f.stem for f in self.prompts_dir.glob("*.md")]
def has_mcp_config(self) -> bool:
"""Check if MCP configuration exists."""
return self.mcp_config_file.exists()
def read_mcp_config(self) -> Optional[Dict[str, Any]]:
"""Read existing MCP configuration."""
if not self.has_mcp_config():
return None
try:
return json.loads(self.mcp_config_file.read_text())
except (json.JSONDecodeError, FileNotFoundError):
return None
def is_configured(self) -> bool:
"""Check if Cursor is configured (has rules or prompts)."""
return (
len(self.get_existing_rules()) > 0 or
len(self.get_existing_prompts()) > 0 or
self.has_mcp_config()
)
```
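`CursorRule.to_mdc_content()` above builds its YAML frontmatter by plain string joining. The same shape, standalone (function and field names here are illustrative):

```python
def mdc_content(meta: dict, body: str) -> str:
    """Render a .mdc document: YAML frontmatter block followed by the body."""
    lines = ["---", *[f"{k}: {v}" for k, v in meta.items()], "---", ""]
    return "\n".join(lines) + body


doc = mdc_content({"title": "Build rules", "platform": "linux"}, "Use ninja.")
print(doc)
```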
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/templates/__init__.py:
--------------------------------------------------------------------------------
```python
"""
Project and component templates for the MCP Project Orchestrator.
This package exposes a simple template API used in tests:
- TemplateType, TemplateCategory, TemplateMetadata, TemplateFile
- ProjectTemplate, ComponentTemplate
- TemplateManager (directory-based discovery)
"""
from pathlib import Path
from typing import Dict, List, Optional, Union
from .types import TemplateType, TemplateCategory, TemplateMetadata, TemplateFile
from .base import BaseTemplate
class ProjectTemplate(BaseTemplate):
"""Project template with validation and apply logic for tests."""
def validate(self) -> bool:
# Valid if at least one file is present
return len(self.files) > 0
def apply(self, target_path: Union[str, Path]) -> None:
target_path = Path(target_path)
target_path.mkdir(parents=True, exist_ok=True)
# Create a nested directory named after the project if provided
project_dir_name = self.get_variable("project_name", "Project")
project_root = target_path / project_dir_name
project_root.mkdir(parents=True, exist_ok=True)
for file in self.files:
dest = project_root / file.path
dest.parent.mkdir(parents=True, exist_ok=True)
content = self.substitute_variables_jinja2(file.content) if file.path.endswith(".jinja2") else self.substitute_variables(file.content)
dest.write_text(content)
class ComponentTemplate(BaseTemplate):
"""Component template with validation and apply logic for tests."""
def validate(self) -> bool:
# Valid if at least one file is present
return len(self.files) > 0
def apply(self, target_path: Union[str, Path]) -> None:
target_path = Path(target_path)
target_path.mkdir(parents=True, exist_ok=True)
for file in self.files:
# Substitute variables in the file path as well
file_path = self.substitute_variables(file.path)
dest = target_path / file_path
dest.parent.mkdir(parents=True, exist_ok=True)
content = self.substitute_variables_jinja2(file.content) if file.path.endswith(".jinja2") else self.substitute_variables(file.content)
dest.write_text(content)
class TemplateManager:
"""Directory-based template discovery and access used in tests."""
def __init__(self, templates_dir: Union[str, Path, None] = None) -> None:
self.templates_dir = Path(templates_dir) if templates_dir else Path.cwd() / "templates"
self._templates: Dict[str, BaseTemplate] = {}
def discover_templates(self) -> None:
self._templates.clear()
if not self.templates_dir.exists():
return
for sub in self.templates_dir.iterdir():
if not sub.is_dir():
continue
metadata_path = sub / "template.json"
if not metadata_path.exists():
continue
try:
meta = TemplateMetadata.from_dict(__import__("json").loads(metadata_path.read_text()))
except Exception:
continue
# Choose template class based on type
if meta.type == TemplateType.PROJECT:
template = ProjectTemplate(meta)
else:
template = ComponentTemplate(meta)
# Load files directory if present
files_dir = sub / "files"
if files_dir.exists():
for fp in files_dir.rglob("*"):
if fp.is_file():
rel = fp.relative_to(files_dir)
content = fp.read_text()
template.add_file(TemplateFile(path=str(rel), content=content))
self._templates[meta.name] = template
def list_templates(self, template_type: Optional[TemplateType] = None) -> List[str]:
if template_type is None:
return list(self._templates.keys())
return [
name for name, t in self._templates.items()
if (template_type == TemplateType.PROJECT and isinstance(t, ProjectTemplate))
or (template_type == TemplateType.COMPONENT and isinstance(t, ComponentTemplate))
]
def get_template(self, name: str) -> Optional[BaseTemplate]:
return self._templates.get(name)
def apply_template(self, template_name: str, variables: dict, target_dir: str) -> None:
"""Apply a template with variables to create a new project."""
template = self.get_template(template_name)
if not template:
raise ValueError(f"Template '{template_name}' not found")
# Set variables
for key, value in variables.items():
template.set_variable(key, str(value))
# Apply template
template.apply(target_dir)
__all__ = [
"TemplateType",
"TemplateCategory",
"TemplateMetadata",
"TemplateFile",
"ProjectTemplate",
"ComponentTemplate",
"TemplateManager",
]
```
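`ProjectTemplate.apply()` above copies each `TemplateFile` under a project root and substitutes placeholders in the content. The core loop, reduced to the stdlib; the `{{ name }}` placeholder syntax is an assumption here, matching the Jinja2-style files the package ships:

```python
import re
from pathlib import Path
from tempfile import TemporaryDirectory


def apply_files(files: dict, variables: dict, target: Path) -> None:
    """Write each rel_path -> content file under target, replacing {{ var }}."""
    for rel_path, content in files.items():
        dest = target / rel_path
        dest.parent.mkdir(parents=True, exist_ok=True)
        rendered = re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),  # keep unknown placeholders
            content,
        )
        dest.write_text(rendered)


with TemporaryDirectory() as tmp:
    root = Path(tmp)
    apply_files({"README.md": "# {{ project_name }}"}, {"project_name": "Demo"}, root)
    result = (root / "README.md").read_text()
print(result)  # # Demo
```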
--------------------------------------------------------------------------------
/mcp-project-orchestrator/openssl/docs/cursor-configuration-management.md:
--------------------------------------------------------------------------------
```markdown
# Cursor Configuration Management
## Overview
Cursor configuration is managed like Conan profiles:
- **Templates** stored in package: `cursor-rules/`
- **Deployed** to each repository: `.cursor/`
- **Platform-specific** rules auto-selected
- **Developer customization** via `--custom-rules`
## Deployment Model
| Component | Location | Version Control |
|-----------|----------|-----------------|
| Templates | `mcp-project-orchestrator/openssl/cursor-rules/` | ✅ In package |
| Deployed config | `<repo>/.cursor/` | ✅ In repo (standard rules) |
| Custom rules | `<repo>/.cursor/rules/custom/` | ❌ Not committed (.gitignore) |
## Platform Detection
The deployer automatically detects:
- **OS**: Linux, macOS, Windows
- **CI environment**: GitHub Actions, GitLab CI, etc.
- **Python version**
- **User home directory**
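The detection above can be approximated with the stdlib alone; a minimal sketch (hypothetical helper, not the actual deployer code):

```python
import os
import platform
import sys


def detect_environment() -> dict:
    """Collect the platform facts the deployer keys rule selection on."""
    ci_vars = ("CI", "GITHUB_ACTIONS", "GITLAB_CI")
    return {
        "os": platform.system(),  # "Linux", "Darwin", or "Windows"
        "is_ci": any(os.environ.get(v) for v in ci_vars),
        "python_version": f"{sys.version_info.major}.{sys.version_info.minor}",
        "home": os.path.expanduser("~"),
    }


print(detect_environment())
```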
## Usage
### Quick Start
```bash
# Install mcp-project-orchestrator/openssl
pip install mcp-project-orchestrator-openssl
# Deploy to current repository
mcp-orchestrator setup-cursor
```
### Advanced Options
```bash
# Force overwrite existing config
mcp-orchestrator setup-cursor --force
# Import custom rules
mcp-orchestrator setup-cursor \
--custom-rules ~/my-rules/crypto.mdc \
--custom-rules ~/my-rules/testing.mdc
# Opt out of AI features
mcp-orchestrator setup-cursor --opt-out
# Dry run (see what would be deployed)
mcp-orchestrator setup-cursor --dry-run
```
## Customization
### Scenario: Add Team-Specific Rules
1. Fork `mcp-project-orchestrator/openssl`
2. Edit `cursor-rules/rules/shared.mdc.jinja2`
3. Publish to private PyPI or install from Git
```bash
pip install git+https://github.com/mycompany/mcp-project-orchestrator/openssl.git@custom-rules
```
### Scenario: Disable MCP Servers
Edit `.cursor/mcp.json` manually:
```json
{
"mcpServers": {
"openssl-context": {
"disabled": true
}
}
}
```
## Version Control Best Practices
Commit to Git:
- ✅ `.cursor/rules/*.mdc` (standard rules)
- ✅ `.cursor/prompts/*.md` (standard prompts)
- ✅ `.cursor/mcp.json` (MCP configuration)
Exclude from Git:
- ❌ `.cursor/rules/custom/` (personal rules)
- ❌ `.cursor/*.log`, `.cursor/*.cache`
## FAQ
**Q: Can I skip Cursor deployment entirely?**
A: Yes, use `--opt-out` or set `MCP_ORCHESTRATOR_OPT_OUT=true`.
**Q: How do I update rules after package upgrade?**
A: Run `mcp-orchestrator setup-cursor --force`.
**Q: Can I use my own rule templates?**
A: Yes, use `--custom-rules` to import your files.
**Q: What if I want CI-specific rules locally?**
A: Set `export CI=true` before running `setup-cursor`.
## Integration with OpenSSL Tools
### Setup Script Integration
```python
# openssl-tools/setup_openssl_env.py (UPDATED)
from openssl_conan_base.profile_deployer import deploy_conan_profiles
from mcp_orchestrator.cursor_deployer import CursorConfigDeployer
import click
@click.command()
@click.option('--with-cursor', is_flag=True, default=False,
help='Also deploy Cursor AI configuration')
@click.option('--cursor-opt-out', is_flag=True, default=False,
help='Skip Cursor configuration deployment')
def setup_environment(with_cursor, cursor_opt_out):
"""Setup OpenSSL development environment"""
# Step 1: Deploy Conan profiles (always)
click.echo("📦 Deploying Conan profiles...")
deploy_conan_profiles()
# Step 2: Deploy Cursor configuration (optional)
if with_cursor and not cursor_opt_out:
click.echo("🤖 Deploying Cursor AI configuration...")
deployer = CursorConfigDeployer(Path.cwd(), get_mcp_package_root())
deployer.deploy()
elif cursor_opt_out:
click.echo("⏭️ Skipping Cursor configuration (opt-out)")
else:
click.echo("ℹ️ Cursor configuration not deployed (use --with-cursor)")
click.echo("✅ OpenSSL environment setup complete!")
if __name__ == '__main__':
setup_environment()
```
## Summary: Cursor as Profile Management
| Aspect | Implementation |
| :-- | :-- |
| **Template Storage** | `mcp-project-orchestrator/openssl/cursor-rules/` (like `profiles/` in Conan packages) |
| **Deployment Target** | `<repo>/.cursor/` (like `~/.conan2/profiles/`) |
| **Platform Detection** | Auto-detect OS, CI, Python version (like profile selection logic) |
| **Customization** | `--custom-rules` flag (like profile inheritance/override) |
| **Opt-Out** | `--opt-out` flag or env var (like disabling Conan for a project) |
| **Version Control** | Standard rules committed, custom rules excluded (like shared profiles + local overrides) |
| **Update Strategy** | `--force` flag to re-deploy (like `conan config install --force`) |
This pattern provides:
- ✅ **Platform-specific** rules without manual selection
- ✅ **Developer opt-out** for those who don't want AI
- ✅ **Custom rule import** for personal preferences
- ✅ **CI environment** detection and adaptation
- ✅ **Reproducibility** through VCS-tracked standard rules
- ✅ **Flexibility** through local customization
```
--------------------------------------------------------------------------------
/mcp-project-orchestrator/openssl/examples/example-workspace/src/crypto_utils.cpp:
--------------------------------------------------------------------------------
```cpp
#include "crypto_utils.h"
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <openssl/sha.h>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>
std::string encrypt_aes256_gcm(const std::string& plaintext, const std::string& key) {
if (key.length() != 32) {
return ""; // Key must be 32 bytes for AES-256
}
EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
if (!ctx) {
return "";
}
// Generate random IV
unsigned char iv[12]; // GCM uses 12-byte IV
if (RAND_bytes(iv, sizeof(iv)) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
// Initialize encryption
if (EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL,
reinterpret_cast<const unsigned char*>(key.c_str()), iv) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
// Encrypt
std::vector<unsigned char> ciphertext(plaintext.length() + EVP_CIPHER_block_size(EVP_aes_256_gcm()));
int len;
int ciphertext_len;
if (EVP_EncryptUpdate(ctx, ciphertext.data(), &len,
reinterpret_cast<const unsigned char*>(plaintext.c_str()),
plaintext.length()) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
ciphertext_len = len;
if (EVP_EncryptFinal_ex(ctx, ciphertext.data() + len, &len) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
ciphertext_len += len;
// Get authentication tag
unsigned char tag[16];
if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
EVP_CIPHER_CTX_free(ctx);
// Combine IV + ciphertext + tag
std::stringstream ss;
ss << std::hex << std::setfill('0');
// Add IV
for (int i = 0; i < 12; i++) {
ss << std::setw(2) << static_cast<int>(iv[i]);
}
// Add ciphertext
for (int i = 0; i < ciphertext_len; i++) {
ss << std::setw(2) << static_cast<int>(ciphertext[i]);
}
// Add tag
for (int i = 0; i < 16; i++) {
ss << std::setw(2) << static_cast<int>(tag[i]);
}
return ss.str();
}
std::string decrypt_aes256_gcm(const std::string& ciphertext, const std::string& key) {
if (key.length() != 32 || ciphertext.length() < 56) { // 24 (IV) + 32 (tag) minimum
return "";
}
EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
if (!ctx) {
return "";
}
// Parse hex string
std::vector<unsigned char> data;
for (size_t i = 0; i < ciphertext.length(); i += 2) {
std::string byte_str = ciphertext.substr(i, 2);
data.push_back(static_cast<unsigned char>(std::stoi(byte_str, nullptr, 16)));
}
if (data.size() < 28) { // 12 (IV) + 16 (tag) minimum
EVP_CIPHER_CTX_free(ctx);
return "";
}
// Extract components
unsigned char iv[12];
std::copy(data.begin(), data.begin() + 12, iv);
std::vector<unsigned char> encrypted_data(data.begin() + 12, data.end() - 16);
unsigned char tag[16];
std::copy(data.end() - 16, data.end(), tag);
// Initialize decryption
if (EVP_DecryptInit_ex(ctx, EVP_aes_256_gcm(), NULL,
reinterpret_cast<const unsigned char*>(key.c_str()), iv) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
// Decrypt
std::vector<unsigned char> plaintext(encrypted_data.size() + EVP_CIPHER_block_size(EVP_aes_256_gcm()));
int len;
int plaintext_len;
if (EVP_DecryptUpdate(ctx, plaintext.data(), &len,
encrypted_data.data(), encrypted_data.size()) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
plaintext_len = len;
// Set authentication tag
if (EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, 16, tag) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
if (EVP_DecryptFinal_ex(ctx, plaintext.data() + len, &len) != 1) {
EVP_CIPHER_CTX_free(ctx);
return "";
}
plaintext_len += len;
EVP_CIPHER_CTX_free(ctx);
return std::string(reinterpret_cast<char*>(plaintext.data()), plaintext_len);
}
std::string generate_aes256_key() {
unsigned char key[32];
if (RAND_bytes(key, sizeof(key)) != 1) {
return "";
}
std::stringstream ss;
ss << std::hex << std::setfill('0');
for (int i = 0; i < 32; i++) {
ss << std::setw(2) << static_cast<int>(key[i]);
}
return ss.str();
}
std::string sha256_hash(const std::string& input) {
unsigned char hash[SHA256_DIGEST_LENGTH];
unsigned int hash_len = 0;
// One-shot EVP digest; SHA256_Init/Update/Final are deprecated since OpenSSL 3.0.
if (EVP_Digest(input.c_str(), input.length(), hash, &hash_len, EVP_sha256(), NULL) != 1) {
return "";
}
std::stringstream ss;
ss << std::hex << std::setfill('0');
for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) {
ss << std::setw(2) << static_cast<int>(hash[i]);
}
return ss.str();
}
```
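Both functions above transmit a hex-encoded `IV || ciphertext || tag` blob (12-byte IV, 16-byte tag, matching the C++ constants). A parsing sketch of that wire layout in Python; the helper name is illustrative:

```python
def split_payload(hex_blob: str) -> tuple:
    """Split a hex-encoded IV||ciphertext||tag blob into its three parts."""
    data = bytes.fromhex(hex_blob)
    if len(data) < 12 + 16:  # must at least hold the IV and the GCM tag
        raise ValueError("payload shorter than IV + tag")
    return data[:12], data[12:-16], data[-16:]


iv, ct, tag = split_payload("00" * 12 + "ab" * 5 + "ff" * 16)
print(len(iv), len(ct), len(tag))  # 12 5 16
```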
--------------------------------------------------------------------------------
/src/mcp_project_orchestrator/prompts/template.py:
--------------------------------------------------------------------------------
```python
"""Prompt template implementation for MCP Project Orchestrator.
This module provides the PromptTemplate class that handles individual prompt
templates, including loading, validation, and rendering.
"""
from typing import Dict, List, Any, Optional
from jinja2 import Environment, Template, StrictUndefined
from ..core.exceptions import PromptError
from .version import PromptVersion
class PromptTemplate:
"""Class representing a prompt template."""
def __init__(
self,
name: str,
description: str,
version: PromptVersion,
author: str,
template: str,
variables: Dict[str, Any],
category: str = "general",
tags: Optional[List[str]] = None
):
"""Initialize prompt template.
Args:
name: Template name
description: Template description
version: Template version
author: Template author
template: Template content
variables: Template variables with descriptions
category: Template category
tags: Template tags
"""
self.name = name
self.description = description
self.version = version
self.author = author
self.template = template
self.variables = variables
self.category = category
self.tags = tags or []
self._jinja_env = Environment(
undefined=StrictUndefined,
trim_blocks=True,
lstrip_blocks=True,
keep_trailing_newline=True
)
self._compiled_template: Optional[Template] = None
async def validate(self) -> None:
"""Validate the prompt template.
This method checks that:
- Required fields are present
- Template syntax is valid
- Variable references are valid
Raises:
PromptError: If validation fails
"""
if not self.name:
raise PromptError("Template name is required")
if not self.template:
raise PromptError("Template content is required")
try:
self._compiled_template = self._jinja_env.from_string(self.template)
except Exception as e:
raise PromptError(
f"Invalid template syntax in {self.name}",
self.name
) from e
# Validate variable references
try:
# Create dummy variables with None values
dummy_vars = {name: None for name in self.variables}
self._compiled_template.render(**dummy_vars)
except Exception as e:
raise PromptError(
f"Invalid variable references in {self.name}",
self.name
) from e
    async def render(self, variables: Dict[str, Any]) -> str:
        """Render the prompt template with variables.
        Args:
            variables: Template variables
        Returns:
            str: Rendered prompt
        Raises:
            PromptError: If rendering fails
        """
        if not self._compiled_template:
            await self.validate()
        # Check required variables before rendering, outside the generic
        # handler, so the specific "missing variables" error is not
        # swallowed and re-wrapped as a rendering failure.
        missing = set(self.variables) - set(variables)
        if missing:
            raise PromptError(
                f"Missing required variables: {', '.join(sorted(missing))}",
                self.name
            )
        try:
            return self._compiled_template.render(**variables)
        except Exception as e:
            raise PromptError(
                f"Failed to render template {self.name}",
                self.name
            ) from e
def get_variable_info(self) -> Dict[str, Dict[str, Any]]:
"""Get information about template variables.
Returns:
Dict[str, Dict[str, Any]]: Variable information
"""
return self.variables
def to_dict(self) -> Dict[str, Any]:
"""Convert template to dictionary representation.
Returns:
Dict[str, Any]: Dictionary representation
"""
return {
"name": self.name,
"description": self.description,
"version": {
"major": self.version.major,
"minor": self.version.minor,
"patch": self.version.patch,
},
"author": self.author,
"template": self.template,
"variables": self.variables,
"category": self.category,
"tags": self.tags,
}
def __str__(self) -> str:
"""Get string representation.
Returns:
str: String representation
"""
return f"{self.name} (v{self.version})"
def __repr__(self) -> str:
"""Get detailed string representation.
Returns:
str: Detailed string representation
"""
return (
f"PromptTemplate(name='{self.name}', "
f"version={self.version}, "
f"category='{self.category}')"
)
```
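The render path above enforces that every declared variable is supplied before Jinja2 ever sees the template. A dependency-free sketch of that same fail-fast check, using a simple `{{name}}` regex in place of full Jinja2 semantics (the `render_prompt` helper and its placeholder pattern are illustrative assumptions, not part of the module above):

```python
import re
from typing import Any, Dict

# Matches {{ name }} style placeholders, with optional inner whitespace.
PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def render_prompt(template: str,
                  declared: Dict[str, Any],
                  provided: Dict[str, Any]) -> str:
    """Substitute placeholders, failing fast if a declared variable is missing."""
    missing = set(declared) - set(provided)
    if missing:
        raise ValueError(f"Missing required variables: {', '.join(sorted(missing))}")
    return PLACEHOLDER.sub(lambda m: str(provided[m.group(1)]), template)
```

Raising before substitution mirrors the class above: the caller gets a precise "missing variables" error rather than a generic rendering failure.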
--------------------------------------------------------------------------------
/tests/test_mermaid.py:
--------------------------------------------------------------------------------
```python
"""Tests for the Mermaid diagram generation system."""
import pytest
from pathlib import Path
from mcp_project_orchestrator.mermaid import (
MermaidGenerator,
MermaidRenderer,
DiagramType,
DiagramMetadata,
)
def test_diagram_metadata():
"""Test diagram metadata creation and conversion."""
metadata = DiagramMetadata(
name="test-diagram",
description="Test diagram",
type=DiagramType.FLOWCHART,
version="1.0.0",
author="Test Author",
tags=["test", "flowchart"],
)
# Test to_dict
data = metadata.to_dict()
assert data["name"] == "test-diagram"
assert data["type"] == "flowchart"
# Test from_dict
new_metadata = DiagramMetadata.from_dict(data)
assert new_metadata.name == metadata.name
assert new_metadata.type == metadata.type
def test_mermaid_generator(mermaid_generator):
"""Test Mermaid diagram generation."""
# Generate flowchart
flowchart = mermaid_generator.generate_flowchart(
nodes=[
("A", "Start"),
("B", "Process"),
("C", "End"),
],
edges=[
("A", "B", ""),
("B", "C", ""),
],
)
assert "flowchart TD" in flowchart
assert "A[Start]" in flowchart
assert "B[Process]" in flowchart
assert "C[End]" in flowchart
assert "A --> B" in flowchart
assert "B --> C" in flowchart
# Generate sequence diagram
sequence = mermaid_generator.generate_sequence(
participants=["User", "System"],
messages=[
("User", "System", "Request"),
("System", "User", "Response"),
],
)
assert "sequenceDiagram" in sequence
assert "participant User" in sequence
assert "participant System" in sequence
assert "User->>System: Request" in sequence
assert "System->>User: Response" in sequence
# Generate class diagram
class_diagram = mermaid_generator.generate_class(
classes=[
{
"name": "Animal",
"attributes": ["name: str", "age: int"],
"methods": ["speak()", "move()"],
},
{
"name": "Dog",
"attributes": ["breed: str"],
"methods": ["bark()"],
},
],
relationships=[
("Dog", "Animal", "extends"),
],
)
assert "classDiagram" in class_diagram
assert "class Animal" in class_diagram
assert "class Dog" in class_diagram
assert "Dog --|> Animal" in class_diagram
def test_mermaid_renderer(mermaid_renderer, temp_dir):
"""Test Mermaid diagram rendering."""
# Create test diagram
diagram = """
flowchart TD
A[Start] --> B[Process]
B --> C[End]
"""
# Render to SVG
svg_path = temp_dir / "test.svg"
mermaid_renderer.render(diagram, svg_path)
assert svg_path.exists()
assert svg_path.stat().st_size > 0
# Render to PNG
png_path = temp_dir / "test.png"
mermaid_renderer.render(diagram, png_path)
assert png_path.exists()
assert png_path.stat().st_size > 0
def test_diagram_save_load(mermaid_generator, temp_dir):
"""Test saving and loading diagrams."""
# Create diagram
metadata = DiagramMetadata(
name="save-test",
description="Save test diagram",
type=DiagramType.FLOWCHART,
version="1.0.0",
author="Test Author",
tags=["test"],
)
diagram = """
flowchart TD
A[Start] --> B[Process]
B --> C[End]
"""
# Save diagram
diagram_dir = temp_dir / "diagrams"
diagram_dir.mkdir(parents=True, exist_ok=True)
diagram_file = diagram_dir / "save-test.mmd"
mermaid_generator.save_diagram(metadata, diagram, diagram_file)
# Load diagram
loaded_metadata, loaded_diagram = mermaid_generator.load_diagram(diagram_file)
assert loaded_metadata.name == metadata.name
assert loaded_metadata.type == metadata.type
assert loaded_diagram.strip() == diagram.strip()
def test_diagram_validation(mermaid_generator):
"""Test diagram validation."""
# Valid flowchart
valid_flowchart = """
flowchart TD
A[Start] --> B[Process]
B --> C[End]
"""
assert mermaid_generator.validate_diagram(valid_flowchart, DiagramType.FLOWCHART)
# Invalid flowchart (syntax error)
invalid_flowchart = """
flowchart TD
A[Start] --> B[Process
B --> C[End]
"""
assert not mermaid_generator.validate_diagram(invalid_flowchart, DiagramType.FLOWCHART)
# Valid sequence diagram
valid_sequence = """
sequenceDiagram
participant A
participant B
A->>B: Message
"""
assert mermaid_generator.validate_diagram(valid_sequence, DiagramType.SEQUENCE)
# Invalid sequence diagram (missing participant)
invalid_sequence = """
sequenceDiagram
A->>B: Message
"""
assert not mermaid_generator.validate_diagram(invalid_sequence, DiagramType.SEQUENCE)
```
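The assertions in `test_mermaid_generator` pin down the exact textual shape of a generated flowchart. A minimal sketch of a generator that would satisfy those assertions (the real `MermaidGenerator.generate_flowchart` signature is assumed from the test, not shown in this chunk):

```python
from typing import List, Tuple

def generate_flowchart(nodes: List[Tuple[str, str]],
                       edges: List[Tuple[str, str, str]],
                       direction: str = "TD") -> str:
    """Emit Mermaid flowchart text: header, node definitions, then edges."""
    lines = [f"flowchart {direction}"]
    for node_id, label in nodes:
        lines.append(f"    {node_id}[{label}]")
    for src, dst, label in edges:
        # An empty label yields a plain arrow, matching "A --> B" in the tests.
        if label:
            lines.append(f"    {src} -->|{label}| {dst}")
        else:
            lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)
```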
--------------------------------------------------------------------------------
/data/prompts/templates/sequential-data-analysis.json:
--------------------------------------------------------------------------------
```json
{
"id": "sequential-data-analysis",
"name": "Sequential Data Analysis with MCP Integration",
"description": "Advanced prompt template for multi-stage data analysis that integrates filesystem, database, memory, and sequential thinking MCP servers for comprehensive data workflows.",
"content": "# Sequential Data Analysis Assistant\n\nYou are a specialized AI assistant for comprehensive data analysis, with access to multiple MCP servers that enhance your capabilities. Your task is to analyze {{data_type}} data from {{data_source}} and provide insights about {{analysis_objective}}.\n\n## Available MCP Servers\n\nYou have access to the following MCP servers to assist with this analysis:\n\n- **Filesystem**: Access data files, configuration, and save analysis outputs\n- **PostgreSQL**: Query structured data from databases\n- **Memory**: Store intermediate analysis results and insights\n- **Sequential Thinking**: Break complex analysis into logical steps\n- **GitHub**: Access code repositories, documentation, and data processing scripts\n{{additional_servers}}\n\n## Data Context\n\n- **Data Type**: {{data_type}}\n- **Data Source**: {{data_source}}\n- **Analysis Objective**: {{analysis_objective}}\n- **Technical Background**: {{technical_background}}\n- **Required Output Format**: {{output_format}}\n\n## Analysis Plan\n\nYour data analysis should follow these sequential steps, utilizing appropriate MCP servers at each stage:\n\n### 1. Data Discovery and Acquisition\n- Identify all relevant data sources across available servers\n- Use Filesystem MCP to check available data files\n- Use PostgreSQL MCP to explore database schema and available tables\n- Use GitHub MCP to locate relevant data processing scripts\n- Document data types, formats, and relationships\n\n### 2. Data Preparation\n- Use Sequential Thinking MCP to plan data cleaning steps\n- Process data to handle missing values, outliers, transformations\n- Use Memory MCP to store intermediate processing results\n- Document data preparation decisions and their rationale\n\n### 3. Exploratory Analysis\n- Calculate descriptive statistics\n- Identify patterns, correlations, and potential insights\n- Generate appropriate visualizations (described textually)\n- Store key observations in Memory MCP for later reference\n\n### 4. Advanced Analysis\n- Apply statistical methods or machine learning techniques appropriate for {{analysis_objective}}\n- Use Sequential Thinking MCP to break down complex analysis into logical steps\n- Reference relevant GitHub repositories for specialized algorithms\n- Document methodology, assumptions, and limitations\n\n### 5. Synthesis and Reporting\n- Summarize key findings and insights\n- Relate results back to {{analysis_objective}}\n- Provide actionable recommendations\n- Use Filesystem MCP to save analysis results in {{output_format}}\n\n## Guidelines for Your Response\n\n1. Begin by outlining your understanding of the analysis objective and the data context\n2. Specify which MCP servers you'll use for each analysis stage\n3. Provide a structured analysis following the sequential steps above\n4. For complex analyses, use the Sequential Thinking MCP to break down your reasoning\n5. Store important intermediate findings in Memory MCP and reference them in your final analysis\n6. Present results in the required {{output_format}}\n7. Include recommendations for further analysis or actions\n8. Document any limitations of your analysis or areas requiring human validation\n\n{{additional_guidelines}}",
"isTemplate": true,
"variables": [
"data_type",
"data_source",
"analysis_objective",
"technical_background",
"output_format",
"additional_servers",
"additional_guidelines"
],
"tags": [
"data-analysis",
"mcp-integration",
"sequential-processing",
"filesystem",
"postgres",
"memory",
"sequential-thinking",
"template"
],
"createdAt": "2025-03-15T12:00:00.000Z",
"updatedAt": "2025-03-15T12:00:00.000Z",
"version": 1,
"metadata": {
"recommended_servers": [
"filesystem",
"postgres",
"memory",
"sequential-thinking",
"github"
],
"example_variables": {
"data_type": "time series",
"data_source": "PostgreSQL database with sensor readings and JSON log files",
"analysis_objective": "identifying anomalies in IoT device performance",
"technical_background": "The IoT devices are deployed in manufacturing environments and collect temperature, vibration, and power consumption data at 5-minute intervals",
"output_format": "JSON report with statistical summary, detected anomalies, and visualization descriptions",
"additional_servers": "- **Brave Search**: Access relevant research papers on IoT anomaly detection\n- **ElevenLabs**: Generate audio summary of critical findings",
"additional_guidelines": "Focus particularly on correlations between temperature spikes and subsequent power consumption anomalies. The stakeholders are especially interested in predictive maintenance opportunities."
}
}
}
```
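Because the template's `content` string and its `variables` list are maintained by hand, the two can silently drift apart. A small consistency check, assuming the double-brace placeholder syntax used above (the `undeclared_placeholders` helper is a hypothetical name, not part of the package):

```python
import re
from typing import List, Set

def undeclared_placeholders(content: str, declared: List[str]) -> Set[str]:
    """Return {{...}} placeholders used in content but absent from the declared list."""
    used = set(re.findall(r"\{\{\s*(\w+)\s*\}\}", content))
    return used - set(declared)

# Example against a fragment of the template above:
drift = undeclared_placeholders(
    "Analyze {{data_type}} data from {{data_source}}",
    ["data_type"],
)
# drift would flag "data_source" as used but undeclared.
```

Running such a check in CI against every file under `data/prompts/templates/` would catch a renamed variable before a `StrictUndefined` rendering error surfaces at runtime.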