# Directory Structure

```
├── .gitignore
├── examples
│   ├── agents.yml
│   └── tasks.yml
├── pyproject.toml
├── README.md
├── src
│   └── mcp_crew_ai
│       ├── __main__.py
│       ├── __pycache__
│       │   └── server.cpython-311.pyc
│       ├── cli.py
│       ├── server_cmd.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
  1 | # Byte-compiled / optimized / DLL files
  2 | __pycache__/
  3 | *.py[cod]
  4 | *$py.class
  5 | 
  6 | # C extensions
  7 | *.so
  8 | 
  9 | # Distribution / packaging
 10 | .Python
 11 | build/
 12 | develop-eggs/
 13 | dist/
 14 | downloads/
 15 | eggs/
 16 | .eggs/
 17 | lib/
 18 | lib64/
 19 | parts/
 20 | sdist/
 21 | var/
 22 | wheels/
 23 | *.egg-info/
 24 | .installed.cfg
 25 | *.egg
 26 | MANIFEST
 27 | 
 28 | # PyInstaller
 29 | *.manifest
 30 | *.spec
 31 | 
 32 | # Installer logs
 33 | pip-log.txt
 34 | pip-delete-this-directory.txt
 35 | 
 36 | # Unit test / coverage reports
 37 | htmlcov/
 38 | .tox/
 39 | .nox/
 40 | .coverage
 41 | .coverage.*
 42 | .cache
 43 | nosetests.xml
 44 | coverage.xml
 45 | *.cover
 46 | .hypothesis/
 47 | .pytest_cache/
 48 | 
 49 | # Translations
 50 | *.mo
 51 | *.pot
 52 | 
 53 | # Django stuff:
 54 | *.log
 55 | local_settings.py
 56 | db.sqlite3
 57 | db.sqlite3-journal
 58 | 
 59 | # Flask stuff:
 60 | instance/
 61 | .webassets-cache
 62 | 
 63 | # Scrapy stuff:
 64 | .scrapy
 65 | 
 66 | # Sphinx documentation
 67 | docs/_build/
 68 | 
 69 | # PyBuilder
 70 | target/
 71 | 
 72 | # Jupyter Notebook
 73 | .ipynb_checkpoints
 74 | 
 75 | # IPython
 76 | profile_default/
 77 | ipython_config.py
 78 | 
 79 | # pyenv
 80 | .python-version
 81 | 
 82 | # Virtual environments
 83 | venv/
 84 | env/
 85 | ENV/
 86 | .venv/
 87 | .env/
 88 | env.bak/
 89 | venv.bak/
 90 | .virtualenv/
 91 | .python-virtualenv/
 92 | Pipfile.lock
 93 | 
 94 | # Spyder project settings
 95 | .spyderproject
 96 | .spyproject
 97 | 
 98 | # Rope project settings
 99 | .ropeproject
100 | 
101 | # mkdocs documentation
102 | /site
103 | 
104 | # mypy
105 | .mypy_cache/
106 | .dmypy.json
107 | dmypy.json
108 | 
109 | # Pyre type checker
110 | .pyre/
111 | 
112 | # IDE specific files
113 | .idea/
114 | .vscode/
115 | *.swp
116 | *.swo
117 | .DS_Store
118 | Thumbs.db
119 | *.sublime-project
120 | *.sublime-workspace
121 | 
122 | # Poetry
123 | poetry.lock
124 | 
125 | # dotenv
126 | .env
127 | .env.*
128 | 
129 | # pytest
130 | pytest.ini
131 | 
132 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | <div align="center">
  2 |   <img src="https://github.com/crewAIInc/crewAI/blob/main/docs/crewai_logo.png" alt="CrewAI Logo" />
  3 | </div>
  4 | 
  5 | # MCP Crew AI Server
  6 | 
  7 | MCP Crew AI Server is a lightweight Python-based server for creating, running, and managing CrewAI workflows. It uses the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) to expose those workflows to MCP clients such as Claude Desktop or Cursor IDE, allowing you to orchestrate multi-agent workflows with ease.
  8 | 
  9 | ## Features
 10 | 
 11 | - **Automatic Configuration:** Automatically loads agent and task configurations from two YAML files (`agents.yml` and `tasks.yml`), so you don't need to write custom code for basic setups.
 12 | - **Command Line Flexibility:** Pass custom paths to your configuration files via command line arguments (`--agents` and `--tasks`).
 13 | - **Seamless Workflow Execution:** Easily run pre-configured workflows through the MCP `run_workflow` tool.
 14 | - **Local Development:** Run the server locally in STDIO mode, making it ideal for development and testing.
 15 | 
 16 | ## Installation
 17 | 
 18 | There are several ways to install the MCP Crew AI server:
 19 | 
 20 | ### Option 1: Install from PyPI (Recommended)
 21 | 
 22 | ```bash
 23 | pip install mcp-crew-ai
 24 | ```
 25 | 
 26 | ### Option 2: Install from GitHub
 27 | 
 28 | ```bash
 29 | pip install git+https://github.com/adam-paterson/mcp-crew-ai.git
 30 | ```
 31 | 
 32 | ### Option 3: Clone and Install
 33 | 
 34 | ```bash
 35 | git clone https://github.com/adam-paterson/mcp-crew-ai.git
 36 | cd mcp-crew-ai
 37 | pip install -e .
 38 | ```
 39 | 
 40 | ### Requirements
 41 | 
 42 | - Python 3.11+
 43 | - MCP SDK
 44 | - CrewAI
 45 | - PyYAML
 46 | 
 47 | ## Configuration
 48 | 
 49 | - **agents.yml:** Define your agents with roles, goals, and backstories.
 50 | - **tasks.yml:** Define tasks with descriptions, expected outputs, and assign them to agents.
 51 | 
 52 | **Example `agents.yml`:**
 53 | 
 54 | ```yaml
 55 | zookeeper:
 56 |   role: Zookeeper
 57 |   goal: Manage zoo operations
 58 |   backstory: >
 59 |     You are a seasoned zookeeper with a passion for wildlife conservation...
 60 | ```
 61 | 
 62 | **Example `tasks.yml`:**
 63 | 
 64 | ```yaml
 65 | write_stories:
 66 |   description: >
 67 |     Write an engaging zoo update capturing the day's highlights.
 68 |   expected_output: 5 engaging stories
 69 |   agent: zookeeper
 70 |   output_file: zoo_report.md
 71 | ```
 72 | 
 73 | ## Usage
 74 | 
 75 | Once installed, you can run the MCP CrewAI server using either of these methods:
 76 | 
 77 | ### Standard Python Command
 78 | 
 79 | ```bash
 80 | mcp-crew-ai --agents path/to/agents.yml --tasks path/to/tasks.yml
 81 | ```
 82 | 
 83 | ### Using UV Execution (uvx)
 84 | 
 85 | For a more streamlined experience, you can use the UV execution command:
 86 | 
 87 | ```bash
 88 | uvx mcp-crew-ai --agents path/to/agents.yml --tasks path/to/tasks.yml
 89 | ```
 90 | 
 91 | Or run just the server directly:
 92 | 
 93 | ```bash
 94 | uvx mcp-crew-ai-server
 95 | ```
 96 | 
 97 | This will start the server using default configuration from environment variables.
 98 | 
 99 | ### Command Line Options
100 | 
101 | - `--agents`: Path to the agents YAML file (required)
102 | - `--tasks`: Path to the tasks YAML file (required)
103 | - `--topic`: The main topic for the crew to work on (default: "Artificial Intelligence")
104 | - `--process`: Process type to use (choices: "sequential" or "hierarchical", default: "sequential")
105 | - `--verbose`: Enable verbose output
106 | - `--variables`: JSON string or path to JSON file with additional variables to replace in YAML files
107 | - `--version`: Show version information and exit
108 | 
109 | ### Advanced Usage
110 | 
111 | You can also provide additional variables to be used in your YAML templates:
112 | 
113 | ```bash
114 | mcp-crew-ai --agents examples/agents.yml --tasks examples/tasks.yml --topic "Machine Learning" --variables '{"year": 2025, "focus": "deep learning"}'
115 | ```
116 | 
117 | These variables will replace placeholders in your YAML files. For example, `{topic}` will be replaced with "Machine Learning" and `{year}` with "2025".
118 | 
119 | ## Contributing
120 | 
121 | Contributions are welcome! Please open issues or submit pull requests with improvements, bug fixes, or new features.
122 | 
123 | ## Licence
124 | 
125 | This project is licensed under the MIT Licence. See the LICENSE file for details.
126 | 
127 | Happy workflow orchestration!
128 | 
```
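
To show how an MCP client talks to this server, here is a minimal sketch using the `mcp` Python SDK's stdio client. It is illustrative only: the `mcp-crew-ai-server` command and example paths are assumptions about a local setup, and the exact SDK surface may vary between releases; the tool name `kickoff` and the `MCP_CREW_*` variables come from `cli.py` and `server.py` below.

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def run() -> None:
    # Launch the standalone server over STDIO. The MCP_CREW_* variables mirror
    # what cli.py exports before starting the server; the rest of the
    # environment is passed through so PATH and any LLM credentials survive.
    params = StdioServerParameters(
        command="mcp-crew-ai-server",
        env={
            **os.environ,
            "MCP_CREW_AGENTS_FILE": "examples/agents.yml",
            "MCP_CREW_TASKS_FILE": "examples/tasks.yml",
            "MCP_CREW_TOPIC": "Machine Learning",
        },
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "kickoff" is the tool registered in server.py; its arguments match
            # the kickoff() signature (agents_file, tasks_file, topic,
            # additional_context), all optional.
            result = await session.call_tool(
                "kickoff", arguments={"topic": "Machine Learning"}
            )
            print(result)


if __name__ == "__main__":
    asyncio.run(run())
```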

--------------------------------------------------------------------------------
/src/mcp_crew_ai/__main__.py:
--------------------------------------------------------------------------------

```python
1 | #!/usr/bin/env python3
2 | """
3 | MCP Crew AI - Main module entry point
4 | Allows running the module directly with: python -m mcp_crew_ai
5 | """
6 | from mcp_crew_ai.cli import main
7 | 
8 | if __name__ == "__main__":
9 |     main()
```

--------------------------------------------------------------------------------
/src/mcp_crew_ai/server_cmd.py:
--------------------------------------------------------------------------------

```python
1 | #!/usr/bin/env python3
2 | """
3 | MCP Crew AI Server - Standalone executable
4 | This module provides a direct command-line interface to run the server via uvx
5 | """
6 | from mcp_crew_ai.server import main
7 | 
8 | if __name__ == "__main__":
9 |     main()
```
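
Because the server reads its configuration from `MCP_CREW_*` environment variables in `initialize()` (see `server.py` below), the same standalone entry point can also be driven programmatically. A minimal sketch, assuming the package is installed and the example files exist:

```python
import os

# Configuration is read from environment variables when mcp_crew_ai.server is
# imported (initialize() runs at import time), so set them first.
os.environ["MCP_CREW_AGENTS_FILE"] = "examples/agents.yml"
os.environ["MCP_CREW_TASKS_FILE"] = "examples/tasks.yml"
os.environ["MCP_CREW_TOPIC"] = "Wildlife Conservation"
os.environ["MCP_CREW_PROCESS"] = "sequential"
os.environ["MCP_CREW_VERBOSE"] = "1"

from mcp_crew_ai.server import main

main()  # starts the FastMCP server on STDIO and blocks until stopped
```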

--------------------------------------------------------------------------------
/examples/tasks.yml:
--------------------------------------------------------------------------------

```yaml
 1 | research_task:
 2 |   description: >
 3 |     Conduct a thorough research about {topic}
 4 |     Make sure you find any interesting and relevant information given
 5 |     the current year is 2025.
 6 |   expected_output: >
 7 |     A list with 10 bullet points of the most relevant information about {topic}
 8 |   agent: researcher
 9 | 
10 | reporting_task:
11 |   description: >
12 |     Review the context you got and expand each topic into a full section for a report.
13 |     Make sure the report is detailed and contains any and all relevant information.
14 |   expected_output: >
15 |     A fully fledged report with the main topics, each with a full section of information.
16 |     Formatted as markdown without '```'
17 |   agent: reporting_analyst
18 |   output_file: report.md
19 | 
```

--------------------------------------------------------------------------------
/examples/agents.yml:
--------------------------------------------------------------------------------

```yaml
 1 | researcher:
 2 |   role: >
 3 |     {topic} Senior Data Researcher
 4 |   goal: >
 5 |     Uncover cutting-edge developments in {topic}
 6 |   backstory: >
 7 |     You're a seasoned researcher with a knack for uncovering the latest
 8 |     developments in {topic}. Known for your ability to find the most relevant
 9 |     information and present it in a clear and concise manner.
10 | 
11 | reporting_analyst:
12 |   role: >
13 |     {topic} Reporting Analyst
14 |   goal: >
15 |     Create detailed reports based on {topic} data analysis and research findings
16 |   backstory: >
17 |     You're a meticulous analyst with a keen eye for detail. You're known for
18 |     your ability to turn complex data into clear and concise reports, making
19 |     it easy for others to understand and act on the information you provide.
20 | 
```
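
The `{topic}` placeholders above are filled in by plain string substitution: `load_yaml_with_variables()` in `cli.py` replaces each `{key}` in the raw file before parsing, and `server.py` applies `str.format()` to individual fields. A minimal sketch of the same idea, assuming the example files above and an illustrative variable set:

```python
import yaml

variables = {"topic": "Machine Learning", "year": 2025}

with open("examples/agents.yml", "r") as f:
    content = f.read()

# Replace each {placeholder} with its value, as load_yaml_with_variables()
# does in cli.py, then parse the resulting YAML.
for key, value in variables.items():
    content = content.replace("{" + key + "}", str(value))

agents = yaml.safe_load(content)
print(agents["researcher"]["role"])  # -> "Machine Learning Senior Data Researcher"
```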

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp-crew-ai"
 3 | version = "0.1.0"
 4 | description = "MCP Crew AI Server - Run CrewAI agents through Model Context Protocol"
 5 | readme = "README.md"
 6 | authors = [
 7 |     { name = "adam.paterson", email = "[email protected]" }
 8 | ]
 9 | requires-python = ">=3.11"
10 | dependencies = [
11 |     "mcp[cli]>=1.3.0",
12 |     "crewai>=0.8.0",
13 |     "pyyaml>=6.0",
14 |     "importlib-metadata>=6.0.0",
15 | ]
16 | 
17 | [build-system]
18 | requires = ["hatchling"]
19 | build-backend = "hatchling.build"
20 | 
21 | [tool.hatch.build.targets.wheel]
22 | packages = ["src/mcp_crew_ai"]
23 | 
24 | [project.scripts]
25 | mcp-crew-ai = "mcp_crew_ai.cli:main"
26 | mcp-crew-ai-server = "mcp_crew_ai.server_cmd:main"
27 | 
28 | [project.entry-points.uv]
29 | mcp-crew-ai = "mcp_crew_ai.cli:main"
30 | mcp-crew-ai-server = "mcp_crew_ai.server_cmd:main"
31 | 
32 | [project.urls]
33 | Homepage = "https://github.com/adam-paterson/mcp-crew-ai"
34 | Repository = "https://github.com/adam-paterson/mcp-crew-ai"
35 | Issues = "https://github.com/adam-paterson/mcp-crew-ai/issues"
36 | 
```
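
The `[project.scripts]` table is what turns `mcp_crew_ai.cli:main` and `mcp_crew_ai.server_cmd:main` into console commands on installation. An optional, illustrative check that the entry points resolve in the current environment (assumes the package is installed):

```python
from importlib.metadata import entry_points, version

# The installed version (this is also what `mcp-crew-ai --version` reports).
print("mcp-crew-ai", version("mcp-crew-ai"))

# The console-script entry points contributed by this package.
for ep in entry_points(group="console_scripts"):
    if ep.value.startswith("mcp_crew_ai"):
        print(f"{ep.name} -> {ep.value}")
```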

--------------------------------------------------------------------------------
/src/mcp_crew_ai/cli.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import argparse
  3 | import yaml
  4 | import tempfile
  5 | import json
  6 | import subprocess
  7 | import sys
  8 | import logging
  9 | from pathlib import Path
 10 | from typing import Dict, Any, Optional, List, Callable
 11 | import importlib.metadata
 12 | 
 13 | # Configure logging
 14 | logging.basicConfig(
 15 |     level=logging.INFO,
 16 |     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
 17 |     handlers=[
 18 |         logging.FileHandler("crew_ai_server.log"),
 19 |         logging.StreamHandler()
 20 |     ]
 21 | )
 22 | logger = logging.getLogger("mcp_crew_ai")
 23 | 
 24 | def main():
 25 |     """
 26 |     Main entry point for the MCP Crew AI CLI.
 27 |     Parses command line arguments and starts an MCP server with the specified configuration.
 28 |     """
 29 |     parser = argparse.ArgumentParser(description='MCP Crew AI - Run CrewAI agents through MCP')
 30 |     parser.add_argument('--agents', type=str, help='Path to agents YAML file')
 31 |     parser.add_argument('--tasks', type=str, help='Path to tasks YAML file')
 32 |     parser.add_argument('--topic', type=str, default='Artificial Intelligence', 
 33 |                       help='The main topic for the crew to work on')
 34 |     parser.add_argument('--process', type=str, default='sequential', 
 35 |                       choices=['sequential', 'hierarchical'], 
 36 |                       help='Process type: sequential or hierarchical')
 37 |     parser.add_argument('--verbose', action='store_true', help='Enable verbose output')
 38 |     parser.add_argument('--variables', type=str, 
 39 |                       help='JSON string or path to JSON file with variables to replace in YAML files')
 40 |     parser.add_argument('--version', action='store_true', help='Show version and exit')
 41 |     
 42 |     args = parser.parse_args()
 43 |     
 44 |     # Show version and exit if requested
 45 |     if args.version:
 46 |         try:
 47 |             version = importlib.metadata.version("mcp-crew-ai")
 48 |             print(f"MCP Crew AI v{version}")
 49 |         except importlib.metadata.PackageNotFoundError:
 50 |             print("MCP Crew AI (development version)")
 51 |         return
 52 |         
 53 |     # Get version for MCP_CREW_VERSION environment variable
 54 |     try:
 55 |         version = importlib.metadata.version("mcp-crew-ai")
 56 |     except importlib.metadata.PackageNotFoundError:
 57 |         version = "0.1.0"
 58 |     
 59 |     # Process YAML file paths
 60 |     agents_path = args.agents
 61 |     tasks_path = args.tasks
 62 |     
 63 |     if not agents_path or not tasks_path:
 64 |         logger.error("Both --agents and --tasks arguments are required. Use --help for more information.")
 65 |         sys.exit(1)
 66 |     
 67 |     # Validate that the files exist
 68 |     agents_file = Path(agents_path)
 69 |     tasks_file = Path(tasks_path)
 70 |     
 71 |     if not agents_file.exists():
 72 |         logger.error(f"Agents file not found: {agents_path}")
 73 |         sys.exit(1)
 74 |         
 75 |     if not tasks_file.exists():
 76 |         logger.error(f"Tasks file not found: {tasks_path}")
 77 |         sys.exit(1)
 78 |     
 79 |     # Process variables if provided
 80 |     variables = {}
 81 |     if args.variables:
 82 |         if os.path.isfile(args.variables):
 83 |             with open(args.variables, 'r') as f:
 84 |                 variables = json.load(f)
 85 |         else:
 86 |             try:
 87 |                 variables = json.loads(args.variables)
 88 |             except json.JSONDecodeError:
 89 |                 logger.warning(f"Could not parse variables as JSON: {args.variables}")
 90 |     
 91 |     # Add topic to variables
 92 |     variables['topic'] = args.topic
 93 |     
 94 |     logger.info(f"Starting MCP Crew AI server with:")
 95 |     logger.info(f"- Agents file: {agents_file}")
 96 |     logger.info(f"- Tasks file: {tasks_file}")
 97 |     logger.info(f"- Topic: {args.topic}")
 98 |     logger.info(f"- Process type: {args.process}")
 99 |     
100 |     # Set environment variables for the server to use
101 |     os.environ["MCP_CREW_AGENTS_FILE"] = str(agents_file.absolute())
102 |     os.environ["MCP_CREW_TASKS_FILE"] = str(tasks_file.absolute())
103 |     os.environ["MCP_CREW_TOPIC"] = args.topic
104 |     os.environ["MCP_CREW_PROCESS"] = args.process
105 |     os.environ["MCP_CREW_VERBOSE"] = "1" if args.verbose else "0"
106 |     os.environ["MCP_CREW_VERSION"] = version
107 |     
108 |     if variables:
109 |         os.environ["MCP_CREW_VARIABLES"] = json.dumps(variables)
110 |         
111 |     # Build MCP command to run the server
112 |     server_module = os.path.join(os.path.dirname(__file__), "server.py")
113 |     cmd = ["mcp", "dev", server_module]
114 |     
115 |     logger.info(f"Executing: {' '.join(cmd)}")
116 |     
117 |     try:
118 |         # Run the MCP server
119 |         subprocess.run(cmd)
120 |     except KeyboardInterrupt:
121 |         logger.info("Server stopped by user")
122 |     except Exception as e:
123 |         logger.error(f"Error running MCP server: {e}")
124 |         sys.exit(1)
125 | 
126 | 
127 | def load_yaml_with_variables(file_path: Path, variables: Dict[str, Any]) -> Dict[str, Any]:
128 |     """Load YAML and replace variables in memory"""
129 |     if not file_path.exists():
130 |         logger.error(f"File not found: {file_path}")
131 |         return {}
132 |     
133 |     try:
134 |         with open(file_path, 'r') as file:
135 |             content = file.read()
136 |         
137 |         # Replace all variables in the content
138 |         for key, value in variables.items():
139 |             placeholder = '{' + key + '}'
140 |             content = content.replace(placeholder, str(value))
141 |         
142 |         # Parse the YAML content
143 |         yaml_content = yaml.safe_load(content) or {}
144 |         return yaml_content
145 |     except Exception as e:
146 |         logger.error(f"Error loading YAML file {file_path}: {e}")
147 |         return {}
148 | 
149 | 
150 | if __name__ == "__main__":
151 |     main()
```
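
As the argument parsing above shows, `--variables` accepts either an inline JSON string or a path to a JSON file (detected with `os.path.isfile`). A minimal sketch of the file-based form; the `vars.json` filename is illustrative and the CLI must already be installed:

```python
import json
import subprocess

# Write the extra template variables to a file (filename is illustrative).
with open("vars.json", "w") as f:
    json.dump({"year": 2025, "focus": "deep learning"}, f)

# Invoke the CLI as documented in the README; cli.py sees that --variables
# points at an existing file and loads it with json.load().
subprocess.run([
    "mcp-crew-ai",
    "--agents", "examples/agents.yml",
    "--tasks", "examples/tasks.yml",
    "--topic", "Machine Learning",
    "--variables", "vars.json",
])
```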

--------------------------------------------------------------------------------
/src/mcp_crew_ai/server.py:
--------------------------------------------------------------------------------

```python
  1 | from mcp.server.fastmcp import FastMCP
  2 | from crewai import Crew, Agent, Task, Process
  3 | import yaml
  4 | import os
  5 | import sys
  6 | import io
  7 | import contextlib
  8 | import json
  9 | import argparse
 10 | import logging
 11 | from pathlib import Path
 12 | 
 13 | # Configure logging
 14 | logging.basicConfig(
 15 |     level=logging.INFO,
 16 |     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
 17 |     handlers=[
 18 |         logging.FileHandler("crew_ai_server.log"),
 19 |         logging.StreamHandler(sys.stderr)
 20 |     ]
 21 | )
 22 | logger = logging.getLogger("mcp_crew_ai_server")
 23 | 
 24 | # Initialize server at module level
 25 | server = None
 26 | 
 27 | 
 28 | @contextlib.contextmanager
 29 | def capture_output():
 30 |     """Capture stdout and stderr."""
 31 |     new_out, new_err = io.StringIO(), io.StringIO()
 32 |     old_out, old_err = sys.stdout, sys.stderr
 33 |     try:
 34 |         sys.stdout, sys.stderr = new_out, new_err
 35 |         yield new_out, new_err
 36 |     finally:
 37 |         sys.stdout, sys.stderr = old_out, old_err
 38 | 
 39 | def format_output(output):
 40 |     """Format the output to make it more readable."""
 41 |     # Split by lines and filter out LiteLLM log lines
 42 |     lines = output.split('\n')
 43 |     filtered_lines = [line for line in lines if not line.strip().startswith('[') or 'LiteLLM' not in line]
 44 |     
 45 |     # Join the filtered lines back together
 46 |     return '\n'.join(filtered_lines)
 47 | 
 48 | def kickoff(
 49 |     agents_file: str = None, 
 50 |     tasks_file: str = None,
 51 |     topic: str = None,
 52 |     additional_context: dict = None
 53 | ):
 54 |     """
 55 |     Execute a CrewAI workflow using YAML configuration files.
 56 |     
 57 |     Args:
 58 |         agents_file: Optional path to override the default agents YAML file
 59 |         tasks_file: Optional path to override the default tasks YAML file
 60 |         topic: The main topic for the crew to work on
 61 |         additional_context: Additional context variables for template formatting
 62 |     
 63 |     Returns:
 64 |         The results from the crew execution
 65 |     """
 66 |     logger.info(f"Tool kickoff called with: agents_file={agents_file}, tasks_file={tasks_file}, topic={topic}")
 67 |     
 68 |     # Use default paths if none provided
 69 |     agents_path = agents_file if agents_file else str(agents_yaml_path)
 70 |     tasks_path = tasks_file if tasks_file else str(tasks_yaml_path)
 71 |     
 72 |     # Use provided topic or default from environment variable
 73 |     current_topic = topic if topic else os.environ.get("MCP_CREW_TOPIC", "Artificial Intelligence")
 74 |     
 75 |     logger.info(f"Using agents file: {agents_path}")
 76 |     logger.info(f"Using tasks file: {tasks_path}")
 77 |     logger.info(f"Using topic: {current_topic}")
 78 |     
 79 |     # Check if files exist
 80 |     if not os.path.exists(agents_path):
 81 |         logger.error(f"Agent file not found: {agents_path}")
 82 |         return {"error": f"Agent file not found: {agents_path}"}
 83 |         
 84 |     if not os.path.exists(tasks_path):
 85 |         logger.error(f"Task file not found: {tasks_path}")
 86 |         return {"error": f"Task file not found: {tasks_path}"}
 87 |     
 88 |     # Template variables
 89 |     current_variables = {"topic": current_topic}
 90 |     
 91 |     # Add additional context if provided
 92 |     if additional_context:
 93 |         current_variables.update(additional_context)
 94 |     
 95 |     # Also add variables from command line if they exist
 96 |     if variables:
 97 |         # Don't overwrite explicit variables with command line ones
 98 |         for key, value in variables.items():
 99 |             if key not in current_variables:
100 |                 current_variables[key] = value
101 |     
102 |     logger.info(f"Template variables: {current_variables}")
103 |     
104 |     # Load agent configurations
105 |     try:
106 |         with open(agents_path, 'r') as f:
107 |             agents_data = yaml.safe_load(f)
108 |         logger.info(f"Loaded agents data: {list(agents_data.keys())}")
109 |     except Exception as e:
110 |         logger.error(f"Error loading agents file: {str(e)}")
111 |         return {"error": f"Error loading agents file: {str(e)}"}
112 |         
113 |     # Create agents
114 |     agents_dict = {}
115 |     for name, config in agents_data.items():
116 |         try:
117 |             # Format template strings in config
118 |             role = config.get("role", "")
119 |             goal = config.get("goal", "")
120 |             backstory = config.get("backstory", "")
121 |             
122 |             # Format with variables if they contain placeholders
123 |             if "{" in role:
124 |                 role = role.format(**current_variables)
125 |             if "{" in goal:
126 |                 goal = goal.format(**current_variables)
127 |             if "{" in backstory:
128 |                 backstory = backstory.format(**current_variables)
129 |             
130 |             logger.info(f"Creating agent: {name}")
131 |             agents_dict[name] = Agent(
132 |                 name=name,
133 |                 role=role,
134 |                 goal=goal,
135 |                 backstory=backstory,
136 |                 verbose=verbose,
137 |                 allow_delegation=True
138 |             )
139 |         except Exception as e:
140 |             logger.error(f"Error creating agent {name}: {str(e)}")
141 |             return {"error": f"Error creating agent {name}: {str(e)}"}
142 |         
143 |     # Load task configurations
144 |     try:
145 |         with open(tasks_path, 'r') as f:
146 |             tasks_data = yaml.safe_load(f)
147 |         logger.info(f"Loaded tasks data: {list(tasks_data.keys())}")
148 |     except Exception as e:
149 |         logger.error(f"Error loading tasks file: {str(e)}")
150 |         return {"error": f"Error loading tasks file: {str(e)}"}
151 |         
152 |     # Create tasks
153 |     tasks_list = []
154 |     for name, config in tasks_data.items():
155 |         try:
156 |             description = config.get("description", "")
157 |             expected_output = config.get("expected_output", "")
158 |             agent_name = config.get("agent")
159 |             
160 |             # Format with variables if they contain placeholders
161 |             if "{" in description:
162 |                 description = description.format(**current_variables)
163 |             if "{" in expected_output:
164 |                 expected_output = expected_output.format(**current_variables)
165 |             
166 |             if not agent_name or agent_name not in agents_dict:
167 |                 logger.error(f"Task {name} has invalid agent: {agent_name}")
168 |                 logger.error(f"Available agents: {list(agents_dict.keys())}")
169 |                 return {"error": f"Task {name} has invalid agent: {agent_name}"}
170 |                 
171 |             logger.info(f"Creating task: {name} for agent: {agent_name}")
172 |             task = Task(
173 |                 description=description,
174 |                 expected_output=expected_output,
175 |                 agent=agents_dict[agent_name]
176 |             )
177 |             
178 |             # Optional output file
179 |             output_file = config.get("output_file")
180 |             if output_file:
181 |                 task.output_file = output_file
182 |                 
183 |             tasks_list.append(task)
184 |         except Exception as e:
185 |             logger.error(f"Error creating task {name}: {str(e)}")
186 |             return {"error": f"Error creating task {name}: {str(e)}"}
187 |         
188 |     # Create the crew
189 |     logger.info("Creating crew")
190 |     logger.info(f"Number of agents: {len(agents_dict)}")
191 |     logger.info(f"Number of tasks: {len(tasks_list)}")
192 |     
193 |     # Check if we have agents and tasks
194 |     if not agents_dict:
195 |         logger.error("No agents were created")
196 |         return {"error": "No agents were created"}
197 |     if not tasks_list:
198 |         logger.error("No tasks were created")
199 |         return {"error": "No tasks were created"}
200 |         
201 |     try:
202 |         crew = Crew(
203 |             agents=list(agents_dict.values()),
204 |             tasks=tasks_list,
205 |             verbose=verbose,
206 |             process=process_type
207 |         )
208 |         logger.info("Crew created successfully")
209 |     except Exception as e:
210 |         logger.error(f"Error creating crew: {str(e)}")
211 |         return {"error": f"Error creating crew: {str(e)}"}
212 |     
213 |     # Execute the crew with captured output
214 |     try:
215 |         logger.info("Starting crew kickoff with captured output")
216 |         with capture_output() as (out, err):
217 |             result = crew.kickoff()
218 |             
219 |         # Get the captured output
220 |         stdout_content = out.getvalue()
221 |         stderr_content = err.getvalue()
222 |         
223 |         # Format the output to make it more readable
224 |         formatted_stdout = format_output(stdout_content)
225 |         formatted_stderr = format_output(stderr_content)
226 |         
227 |         logger.info("Crew kickoff completed successfully")
228 |         
229 |         # Convert result to string if it's not a simple type
230 |         if not isinstance(result, (str, int, float, bool, list, dict)) and result is not None:
231 |             logger.info(f"Converting result of type {type(result)} to string")
232 |             result = str(result)
233 |         
234 |         # Create a structured response with the agent outputs
235 |         response = {
236 |             "result": result,
237 |             "agent_outputs": formatted_stdout,
238 |             "errors": formatted_stderr if formatted_stderr.strip() else None
239 |         }
240 |         
241 |         # Log a sample of the output for debugging
242 |         if formatted_stdout:
243 |             sample = formatted_stdout[:500] + "..." if len(formatted_stdout) > 500 else formatted_stdout
244 |             logger.info(f"Sample of agent outputs: {sample}")
245 |         
246 |         return response
247 |     except Exception as e:
248 |         logger.error(f"Error in crew kickoff: {str(e)}")
249 |         return {"error": f"Error in crew kickoff: {str(e)}"}
250 | 
251 | 
252 | def initialize():
253 |     """Initialize the server with configuration from environment variables."""
254 |     global server, agents_yaml_path, tasks_yaml_path, topic, process_type_str, verbose, variables_json, variables, process_type
255 |     
256 |     # Log startup
257 |     logger.info("Starting Crew AI Server")
258 |     
259 |     # Create FastMCP server
260 |     server = FastMCP("Crew AI Server", version=os.environ.get("MCP_CREW_VERSION", "0.1.0"))
261 |     
262 |     # Get configuration from environment variables
263 |     agents_yaml_path = os.environ.get("MCP_CREW_AGENTS_FILE", "")
264 |     tasks_yaml_path = os.environ.get("MCP_CREW_TASKS_FILE", "")
265 |     topic = os.environ.get("MCP_CREW_TOPIC", "Artificial Intelligence")
266 |     process_type_str = os.environ.get("MCP_CREW_PROCESS", "sequential")
267 |     verbose = os.environ.get("MCP_CREW_VERBOSE", "0") == "1"
268 |     variables_json = os.environ.get("MCP_CREW_VARIABLES", "")
269 |     
270 |     # Define fallback paths
271 |     if not agents_yaml_path or not tasks_yaml_path:
272 |         current_dir = Path(os.path.dirname(os.path.abspath(__file__)))
273 |         project_root = current_dir.parent.parent
274 |         examples_dir = project_root / "examples"
275 |         
276 |         if not agents_yaml_path:
277 |             agents_yaml_path = str(examples_dir / "agents.yml")
278 |         
279 |         if not tasks_yaml_path:
280 |             tasks_yaml_path = str(examples_dir / "tasks.yml")
281 |     
282 |     # Convert paths to Path objects
283 |     agents_yaml_path = Path(agents_yaml_path)
284 |     tasks_yaml_path = Path(tasks_yaml_path)
285 |     
286 |     logger.info(f"Agents YAML path: {agents_yaml_path} (exists: {agents_yaml_path.exists()})")
287 |     logger.info(f"Tasks YAML path: {tasks_yaml_path} (exists: {tasks_yaml_path.exists()})")
288 |     logger.info(f"Topic: {topic}")
289 |     logger.info(f"Process type: {process_type_str}")
290 |     logger.info(f"Verbose: {verbose}")
291 |     
292 |     # Parse variables
293 |     variables = {"topic": topic}
294 |     if variables_json:
295 |         try:
296 |             additional_vars = json.loads(variables_json)
297 |             variables.update(additional_vars)
298 |             logger.info(f"Loaded additional variables: {list(additional_vars.keys())}")
299 |         except json.JSONDecodeError:
300 |             logger.warning(f"Could not parse variables JSON: {variables_json}")
301 |     
302 |     logger.info(f"Template variables: {variables}")
303 |     
304 |     # Set process type
305 |     process_type = Process.sequential
306 |     if process_type_str.lower() == 'hierarchical':
307 |         process_type = Process.hierarchical
308 |         logger.info("Using hierarchical process")
309 |         
310 |     # Register the kickoff tool
311 |     server.tool()(kickoff)
312 |     
313 |     return server
314 | 
315 | 
316 | def main():
317 |     """Run the MCP server as a standalone application."""
318 |     server = initialize()
319 | 
320 |     # Run unconditionally: when main() is invoked from server_cmd.py,
321 |     # __name__ is "mcp_crew_ai.server", so gating on __name__ here would
322 |     # silently skip starting the server.
323 |     server.run()
324 |     return server
325 | 
326 | 
327 | # Initialize server when module is imported
328 | initialize()
```
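
Since `kickoff()` is a plain function that `initialize()` registers as the MCP tool, it can also be called directly, which is convenient for smoke-testing a crew outside any MCP client. A minimal sketch, assuming the example YAML files are present and whatever LLM credentials CrewAI needs are available in the environment:

```python
from mcp_crew_ai.server import kickoff

# Paths override the MCP_CREW_* defaults resolved in initialize();
# additional_context supplies extra {placeholder} values for the templates.
response = kickoff(
    agents_file="examples/agents.yml",
    tasks_file="examples/tasks.yml",
    topic="Renewable Energy",
    additional_context={"year": 2025},
)

if "error" in response:
    print("Failed:", response["error"])
else:
    print(response["result"])
```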