# Directory Structure
```
├── .gitignore
├── .python-version
├── main.py
├── pyproject.toml
├── README.md
├── tools
│   ├── crawl.py
│   ├── scrape.py
│   └── utils.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
3.13
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info
# Virtual environments
.venv
.DS_Store
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Crawl4AI MCP Server
A Model Context Protocol (MCP) server implementation that integrates Crawl4AI with Cursor AI, providing web scraping and crawling capabilities as tools for LLMs in Cursor Composer's agent mode.
## System Requirements
Python 3.13 or higher, as required by `pyproject.toml` and `.python-version`.
## Current Features
- Single page scraping
- Website crawling
## Installation
Basic setup instructions are also available in the [Official Docs for MCP Server QuickStart](https://modelcontextprotocol.io/quickstart/server#why-claude-for-desktop-and-not-claude-ai).
### Set up your environment
First, let's install `uv` and set up our Python project and environment:
macOS/Linux:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Windows:
```bash
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Make sure to restart your terminal afterwards so that the `uv` command is picked up.
After that:
1. Clone the repository
2. Install dependencies using `uv`:
```bash
# Navigate to the crawl4ai-mcp directory
cd crawl4ai-mcp
# Install dependencies (first time only)
uv venv
uv sync
# Activate the venv
source .venv/bin/activate
# Run the server
python main.py
```
3. Add the server to Cursor's MCP servers or Claude Desktop's MCP servers
You may need to put the full path to the `uv` executable in the command field. You can get this by running `which uv` on macOS/Linux or `where uv` on Windows.
```json
{
  "mcpServers": {
    "Crawl4AI": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/crawl4ai-mcp",
        "run",
        "main.py"
      ]
    }
  }
}
```
## Tools Provided
This MCP server exposes the following tools to the LLM; a minimal client example is shown after the list:
1.  **`scrape_webpage(url: str)`**
    - **Description:** Scrapes the content and metadata from a single webpage using Crawl4AI.
    - **Parameters:**
      - `url` (string, required): The URL of the webpage to scrape.
    - **Returns:** A list containing a `TextContent` object whose text is a JSON object holding the scraped page's markdown content.
2.  **`crawl_website(url: str, crawl_depth: int = 1, max_pages: int = 5)`**
    - **Description:** Crawls a website starting from the given URL up to a specified depth and page limit using Crawl4AI.
    - **Parameters:**
      - `url` (string, required): The starting URL to crawl.
      - `crawl_depth` (integer, optional, default: 1): The maximum depth to crawl relative to the starting URL.
      - `max_pages` (integer, optional, default: 5): The maximum number of pages to scrape during the crawl.
    - **Returns:** A list containing a `TextContent` object whose text is a JSON object with a `results` array; each entry includes the page URL, its success status, and either the markdown content or an error message.
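### Example: calling the tools from a Python client
The snippet below is a minimal sketch of how these tools could be exercised over stdio using the official `mcp` Python client SDK. The absolute path mirrors the placeholder in the JSON configuration above, and the example URL is purely illustrative.
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server as a subprocess over stdio, mirroring the JSON config above.
server = StdioServerParameters(
    command="uv",
    args=[
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/crawl4ai-mcp",
        "run",
        "main.py",
    ],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Scrape a single page (URL is illustrative).
            scraped = await session.call_tool("scrape_webpage", {"url": "https://example.com"})
            print(scraped.content[0].text)
            # Crawl one level deep, at most 3 pages.
            crawled = await session.call_tool(
                "crawl_website",
                {"url": "https://example.com", "crawl_depth": 1, "max_pages": 3},
            )
            print(crawled.content[0].text)

asyncio.run(main())
```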
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[project]
name = "crawl4ai-mcp"
version = "0.1.0"
description = "MCP server that exposes Crawl4AI web scraping and crawling tools"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "crawl4ai>=0.5.0.post8",
    "httpx>=0.28.1",
    "mcp[cli]>=1.6.0",
]
[tool.uv.sources]
crawl4ai = { git = "https://github.com/unclecode/crawl4ai.git", rev = "2025-MAR-ALPHA-1" }
# The pinned git revision above is needed for bug fixes from the crawl4ai-mcp branch
```
--------------------------------------------------------------------------------
/tools/utils.py:
--------------------------------------------------------------------------------
```python
import re
def validate_and_normalize_url(url: str) -> str | None:
    """Validate and normalize a URL.
    Args:
        url: The URL string to validate.
    Returns:
        The normalized URL with https scheme if valid, otherwise None.
    """
    # Simple validation for domains/subdomains with http(s)
    # Allows for optional paths
    url_pattern = re.compile(
        r"^(?:https?://)?(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|"  # domain...
        r"localhost|"  # localhost...
        r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"  # ...or ip
        r"(?::\d+)?"  # optional port
        r"(?:/?|[/?]\S+)$",
        re.IGNORECASE,
    )
    if not url_pattern.match(url):
        return None
    # Add https:// if missing
    if not url.startswith("http://") and not url.startswith("https://"):
        url = f"https://{url}"
    return url
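# Illustrative self-check with hypothetical inputs; not part of the MCP server's
# runtime path. Run it directly to see how URLs are normalized.
if __name__ == "__main__":
    for candidate in ("example.com/docs", "http://localhost:8080", "not a url"):
        print(f"{candidate!r} -> {validate_and_normalize_url(candidate)!r}")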
```
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
```python
import mcp.types as types
from mcp.server.fastmcp import FastMCP
from tools.scrape import scrape_url
from tools.crawl import crawl_website_async
# Initialize FastMCP server
mcp = FastMCP("crawl4ai")
@mcp.tool()
async def scrape_webpage(
    url: str,
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    """
    Scrape content and metadata from a single webpage using Crawl4AI.
    Args:
        url: The URL of the webpage to scrape
    Returns:
        List containing TextContent with the result as JSON.
    """
    return await scrape_url(url)
@mcp.tool()
async def crawl_website(
    url: str,
    crawl_depth: int = 1,
    max_pages: int = 5,
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    """
    Crawl a website starting from the given URL up to a specified depth and page limit.
    Args:
        url: The starting URL to crawl.
        crawl_depth: The maximum depth to crawl relative to the starting URL (default: 1).
        max_pages: The maximum number of pages to scrape during the crawl (default: 5).
    Returns:
        List containing TextContent with a JSON array of results for crawled pages.
    """
    return await crawl_website_async(url, crawl_depth, max_pages)
if __name__ == "__main__":
    # Initialize and run the server
    mcp.run(transport="stdio")
```
--------------------------------------------------------------------------------
/tools/scrape.py:
--------------------------------------------------------------------------------
```python
import asyncio
import mcp.types as types
from typing import Any, List
import json
import re
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
async def scrape_url(url: str) -> List[Any]:
    """Scrape a webpage using crawl4ai with simple implementation.
    Args:
        url: The URL to scrape
    Returns:
        A list containing TextContent object with the result as JSON
    """
    try:
        # Simple validation for domains/subdomains with http(s)
        url_pattern = re.compile(
            r"^(?:https?://)?(?:[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?\.)+[A-Za-z]{2,}(?:/[^/\s]*)*$"
        )
        if not url_pattern.match(url):
            return [
                types.TextContent(
                    type="text",
                    text=json.dumps(
                        {
                            "success": False,
                            "url": url,
                            "error": "Invalid URL format",
                        }
                    ),
                )
            ]
        # Add https:// if missing
        if not url.startswith("http://") and not url.startswith("https://"):
            url = f"https://{url}"
        # Use default configurations with minimal customization
        browser_config = BrowserConfig(
            browser_type="chromium",
            headless=True,
            ignore_https_errors=True,
            verbose=False,
            extra_args=[
                "--no-sandbox",
                "--disable-setuid-sandbox",
                "--disable-dev-shm-usage",
            ],
        )
        run_config = CrawlerRunConfig(
            cache_mode=CacheMode.BYPASS,
            verbose=False,
            page_timeout=30 * 1000,  # Convert to milliseconds
        )
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result = await asyncio.wait_for(
                crawler.arun(
                    url=url,
                    config=run_config,
                ),
                timeout=30,
            )
            # Create response in the format requested
            return [
                types.TextContent(
                    type="text", text=json.dumps({"markdown": result.markdown})
                )
            ]
    except Exception as e:
        return [
            types.TextContent(
                type="text",
                text=json.dumps({"success": False, "url": url, "error": str(e)}),
            )
        ]
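# Illustrative standalone run for local testing (the URL is a placeholder);
# the MCP server itself invokes scrape_url via the scrape_webpage tool.
if __name__ == "__main__":
    print(asyncio.run(scrape_url("https://example.com")))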
```
--------------------------------------------------------------------------------
/tools/crawl.py:
--------------------------------------------------------------------------------
```python
import asyncio
import mcp.types as types
from typing import Any, List
import json
from crawl4ai import AsyncWebCrawler, CacheMode, CrawlResult
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy
from .utils import validate_and_normalize_url
CRAWL_TIMEOUT_SECONDS = 300  # Overall timeout for the crawl operation
async def crawl_website_async(url: str, crawl_depth: int, max_pages: int) -> List[Any]:
    """Crawl a website using crawl4ai.
    Args:
        url: The starting URL to crawl.
        crawl_depth: The maximum depth to crawl.
        max_pages: The maximum number of pages to crawl.
    Returns:
        A list containing TextContent objects with the results as JSON.
    """
    normalized_url = validate_and_normalize_url(url)
    if not normalized_url:
        return [
            types.TextContent(
                type="text",
                text=json.dumps(
                    {
                        "success": False,
                        "url": url,
                        "error": "Invalid URL format",
                    }
                ),
            )
        ]
    try:
        # Use default configurations with minimal customization
        browser_config = BrowserConfig(
            browser_type="chromium",
            headless=True,
            ignore_https_errors=True,
            verbose=False,
            extra_args=[
                "--no-sandbox",
                "--disable-setuid-sandbox",
                "--disable-dev-shm-usage",
            ],
        )
        # 1. Create the deep crawl strategy with depth and page limits
        crawl_strategy = BFSDeepCrawlStrategy(
            max_depth=crawl_depth, max_pages=max_pages
        )
        # 2. Create the run config, passing the strategy
        run_config = CrawlerRunConfig(
            cache_mode=CacheMode.BYPASS,
            verbose=False,
            page_timeout=30 * 1000,  # 30 seconds per page
            deep_crawl_strategy=crawl_strategy,  # Pass the strategy here
        )
        results_list = []
        async with AsyncWebCrawler(config=browser_config) as crawler:
            # 3. Use arun and wrap in asyncio.wait_for for overall timeout
            crawl_results: List[CrawlResult] = await asyncio.wait_for(
                crawler.arun(
                    url=normalized_url,
                    config=run_config,
                ),
                timeout=CRAWL_TIMEOUT_SECONDS,
            )
            # Convert each CrawlResult into a JSON-serializable entry
            for result in crawl_results:
                if result.success:
                    results_list.append(
                        {
                            "url": result.url,
                            "success": True,
                            "markdown": result.markdown,
                        }
                    )
                else:
                    results_list.append(
                        {
                            "url": result.url,
                            "success": False,
                            "error": result.error_message,
                        }
                    )
            # Return a single TextContent with a JSON array of results
            return [
                types.TextContent(
                    type="text", text=json.dumps({"results": results_list})
                )
            ]
    except asyncio.TimeoutError:
        return [
            types.TextContent(
                type="text",
                text=json.dumps(
                    {
                        "success": False,
                        "url": normalized_url,
                        "error": f"Crawl operation timed out after {CRAWL_TIMEOUT_SECONDS} seconds.",
                    }
                ),
            )
        ]
    except Exception as e:
        return [
            types.TextContent(
                type="text",
                text=json.dumps(
                    {"success": False, "url": normalized_url, "error": str(e)}
                ),
            )
        ]
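# Illustrative standalone run for local testing (URL and limits are placeholders).
# Because of the relative import above, run this as `python -m tools.crawl`
# from the project root rather than executing the file directly.
if __name__ == "__main__":
    print(asyncio.run(crawl_website_async("https://example.com", crawl_depth=1, max_pages=2)))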
```