# Directory Structure

```
├── .continue
│   └── prompts
│       └── fastmcp.prompt
├── .continuerc.json
├── .gitignore
├── .python-version
├── .vscode
│   └── settings.json
├── pyproject.toml
├── README.md
├── SPEC-GEMINI.md
├── SPEC.md
├── src
│   └── mcps
│       ├── __init__.py
│       ├── config.py
│       ├── logs.py
│       ├── prompts
│       │   ├── __init__.py
│       │   └── file_prompts.py
│       ├── resources
│       │   ├── __init__.py
│       │   ├── doc_resource.py
│       │   ├── project_resource.py
│       │   └── url_resource.py
│       ├── server.py
│       └── tools
│           ├── __init__.py
│           ├── internet_search.py
│           └── perplexity_search.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.12

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info
.pytest_cache

# Virtual environments
.venv

```

--------------------------------------------------------------------------------
/.continuerc.json:
--------------------------------------------------------------------------------

```json
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "stdio",
          "command": "uv",
          "args": [
            "run",
            "--project",
            "/Users/alsmirnov/work/mcp-server-continue",
            "mcps"
          ],
          "env": {
            "ROOT": "/Users/alsmirnov/work/mcp-server-continue",
          }
        }
      }
    ]
  }
}
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown

# Model Context Protocol (MCP) Python server for use with continue.dev
An MCP server that exposes customizable prompt templates, resources, and tools.
It uses FastMCP to run as a server application.

Dependencies, build, and run are managed by the uv tool.

## Provided functionality
### prompts
Prompts are created from markdown files in the `prompts` folder.
Additional content can be injected through templating, using variable names in `{{variable}}` format (a substitution sketch follows below).
Initial list of prompts:
- review code created by another LLM
- check code for readability, conforming to *Clean Code* rules
- use a conversational LLM to hone in on an idea
- wrap up at the end of the brainstorm and save it as a `spec.md` file
- test-driven development: create tests from the spec
- draft a detailed, step-by-step blueprint for building the project from the spec

### resources
**NOTE: continue does not understand templates, so the resource name should contain all the information.**
**The resource name is left as is in the prompt, so it should not confuse the LLM.**
- extract URL content as markdown
- full documentation for libraries, preferably from llms-full.txt
- complete project structure and content, created by `CodeWeawer` or `Repomix`

### tools
- web search, using `serper`
- web search results with a summary, via `perplexity.io`
- find missing tests
- run unit tests and collect errors
```
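
The README above describes `{{variable}}` templating as plain string substitution; a minimal sketch of that idea (the helper name `render_template` is illustrative, not part of the codebase):

```python
import re


def render_template(template: str, arguments: dict[str, str]) -> str:
    """Replace every {{variable}} placeholder with its value from `arguments`.

    Unknown placeholders are left untouched so they remain visible in the prompt.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        return arguments.get(name, match.group(0))

    return re.sub(r"\{\{(.*?)\}\}", substitute, template)


# render_template("Review the {{language}} code below", {"language": "Python"})
# -> "Review the Python code below"
```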

--------------------------------------------------------------------------------
/src/mcps/resources/__init__.py:
--------------------------------------------------------------------------------

```python

```

--------------------------------------------------------------------------------
/src/mcps/tools/__init__.py:
--------------------------------------------------------------------------------

```python

```

--------------------------------------------------------------------------------
/src/mcps/prompts/__init__.py:
--------------------------------------------------------------------------------

```python
from .file_prompts import setup_prompts

__all__ = ["setup_prompts"]

```

--------------------------------------------------------------------------------
/src/mcps/resources/url_resource.py:
--------------------------------------------------------------------------------

```python

from mcps.config import ServerConfig


async def get_resource(encoded_url: str, config: ServerConfig) -> str:
    return f"URL resource: {encoded_url}"
```
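
The handler above is a placeholder. A minimal sketch of the behavior described in SPEC-GEMINI.md, fetching the page through the r.jina.ai reader and returning errors as resource content, could look like the following (the httpx usage mirrors `perplexity_search.py`; the timeout value is an assumption):

```python
import httpx

from mcps.config import ServerConfig


async def get_resource(encoded_url: str, config: ServerConfig) -> str:
    """Fetch the URL as Markdown via the r.jina.ai reader service."""
    reader_url = f"https://r.jina.ai/{encoded_url}"
    try:
        async with httpx.AsyncClient(timeout=30.0, follow_redirects=True) as client:
            response = await client.get(reader_url)
            response.raise_for_status()
            # r.jina.ai returns Markdown (or plain text), which is passed through as-is
            return response.text
    except httpx.HTTPError as exc:
        # Per SPEC-GEMINI.md, errors are returned as the resource content
        return f"Error fetching {encoded_url}: {exc}"
```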

--------------------------------------------------------------------------------
/src/mcps/resources/doc_resource.py:
--------------------------------------------------------------------------------

```python

from mcps.config import ServerConfig


async def get_resource(library_name: str, config: ServerConfig) -> str:
    return f"docs resource: {library_name}"
```
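
Likewise for the documentation placeholder, a sketch of the doc: resource described in SPEC-GEMINI.md, looking up the llms.txt URL in `config.library_docs` (returning a message instead of raising is a simplification):

```python
import httpx

from mcps.config import ServerConfig


async def get_resource(library_name: str, config: ServerConfig) -> str:
    """Return library documentation fetched from the configured llms.txt URL."""
    docs_url = config.library_docs.get(library_name)
    if docs_url is None:
        return f"No documentation configured for library: {library_name}"

    async with httpx.AsyncClient(timeout=30.0) as client:
        response = await client.get(docs_url)
        response.raise_for_status()
        # The llms.txt content is assumed to be plain text ready for LLM use
        return response.text
```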

--------------------------------------------------------------------------------
/src/mcps/resources/project_resource.py:
--------------------------------------------------------------------------------

```python

from mcps.config import ServerConfig


async def get_resource(project_name: str, config: ServerConfig) -> str:
    return f"project resource: {project_name}"
```
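
And for the project placeholder, a sketch of the project: resource from SPEC-GEMINI.md, which shells out to the `codeweawer` tool named there (the tool's availability on PATH is an assumption):

```python
import asyncio

from mcps.config import ServerConfig


async def get_resource(project_name: str, config: ServerConfig) -> str:
    """Return the project structure produced by the external `codeweawer` tool."""
    project_path = config.project_paths.get(project_name)
    if project_path is None:
        return f"Unknown project: {project_name}"

    try:
        process = await asyncio.create_subprocess_exec(
            "codeweawer",
            project_path,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
    except FileNotFoundError:
        return "codeweawer was not found on PATH"

    stdout, stderr = await process.communicate()
    if process.returncode != 0:
        return f"codeweawer failed for {project_name}: {stderr.decode().strip()}"
    return stdout.decode()
```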

--------------------------------------------------------------------------------
/src/mcps/tools/internet_search.py:
--------------------------------------------------------------------------------

```python
import logging

from mcps.config import ServerConfig


logger = logging.getLogger("mcps")

async def do_search(query: str, config: ServerConfig) -> str:
    """
    Performs a search and returns the results.  This is a placeholder.
    In a real implementation, this would use a search engine API.

    Args:
        query: The search query.

    Returns:
        The search query string back.
    """
    logger.info(f"Performing search with query: {query}")
    return query
```
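
A sketch of what a Serper-backed implementation could look like, per the `web_search` tool in SPEC-GEMINI.md. The `google.serper.dev` endpoint, the `organic` result field, and the `SERPER_API_KEY` environment variable are assumptions (ServerConfig does not carry a Serper key yet), and the function name `do_serper_search` is illustrative:

```python
import logging
import os

import httpx

from mcps.config import ServerConfig

logger = logging.getLogger("mcps")


async def do_serper_search(query: str, config: ServerConfig) -> str:
    """Search the web via the Serper API and return results as a markdown list."""
    headers = {
        "X-API-KEY": os.getenv("SERPER_API_KEY", ""),
        "Content-Type": "application/json",
    }
    async with httpx.AsyncClient(timeout=30.0) as client:
        response = await client.post(
            "https://google.serper.dev/search", json={"q": query}, headers=headers
        )
        response.raise_for_status()
        results = response.json().get("organic", [])

    if not results:
        # SPEC-GEMINI.md: return an empty string when there are no results
        return ""
    return "\n".join(
        f"- [{item.get('title', '')}]({item.get('link', '')}): {item.get('snippet', '')}"
        for item in results
    )
```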

--------------------------------------------------------------------------------
/src/mcps/prompts/file_prompts.py:
--------------------------------------------------------------------------------

```python
import logging

from fastmcp import FastMCP

from mcps.config import ServerConfig


def setup_prompts(mcp: FastMCP, config: ServerConfig):
    """
    Sets up prompts for the server. Currently registers a single sample
    `echo` prompt; loading prompts from the prompts directory is not implemented yet.

    Args:
        mcp: The FastMCP instance.
        config: The server configuration.
    """
    @mcp.prompt("echo")
    def echo_prompt(text: str, workspaceDir: str) -> str:
        logging.info(f"Echo prompt called with text: {text}")
        logging.info(f"Workspace directory: {workspaceDir}")
        return "provide short and concise answer: "+text
```
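
A sketch of how prompts could actually be loaded from `config.prompts_dir`, as the docstring and SPEC-GEMINI.md describe. The function name `register_file_prompts` is illustrative, and template arguments are omitted; each Markdown file is registered as a prompt named after its stem:

```python
from pathlib import Path

from fastmcp import FastMCP

from mcps.config import ServerConfig


def register_file_prompts(mcp: FastMCP, config: ServerConfig) -> None:
    """Register every *.md file in the prompts directory as a named prompt."""
    for path in sorted(Path(config.prompts_dir).glob("*.md")):
        content = path.read_text(encoding="utf-8")

        def make_prompt(text: str):
            # Bind the file content to a zero-argument prompt function
            def prompt_fn() -> str:
                return text

            return prompt_fn

        mcp.prompt(name=path.stem)(make_prompt(content))
```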

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "mcps"
version = "0.1.0"
description = "Model Context Protocol server for continue.dev"
readme = "README.md"
authors = [
    { name = "Alexander Smirnov", email = "[email protected]" }
]
requires-python = ">=3.12"
dependencies = [
    "fastmcp>=0.4.1",
]
[tool.uv]
package = true

[project.scripts]
mcps = "mcps:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/mcps"]

[dependency-groups]
dev = [
    "pytest>=8.3.4",
]

[tool.pytest.ini_options]
testpaths = ["test"]
pythonpath = ["src/mcps","test"]
asyncio_mode = "auto"

```

--------------------------------------------------------------------------------
/src/mcps/__init__.py:
--------------------------------------------------------------------------------

```python
import os
import logging

import mcps.server
import mcps.config
from mcps.logs import setup_logging


def main() -> None:
    config = mcps.config.create_config()  # Use the factory method
    server = mcps.server.create_server(config)
    # The MCP server configures logging in its constructor; redirect output
    # to a log file and remove the default console handlers.
    setup_logging()
    logger = logging.getLogger("mcps")
    # Current working directory
    logger.info(f"Current working directory: {os.getcwd()}")
    # File location
    logger.info(f"File location: {__file__}")
    # Current package name
    logger.info(f"Current package name: {__package__}")

    server.start()
```

--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------

```json
{
    "python.defaultInterpreterPath": ".venv/bin/python",
    "python.venvPath": ".venv",
    "python.testing.pytestArgs": [
        "test"
    ],
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true,
    "python-envs.defaultEnvManager": "ms-python.python:venv",
    "python-envs.defaultPackageManager": "ms-python.python:pip",
    "python-envs.pythonProjects": [],
    "mcp": {
        "inputs": [],
        "servers": {
            "mcps": {
                "command": "uv",
                "args": [
                    "run",
                    "--project",
                    "/Users/alsmirnov/work/mcp-server-continue",
                    "mcps"
                ],
                "env": {
                    "ROOT": "/Users/alsmirnov/work/mcp-server-continue",
                }
            }
        }
    }
}
```

--------------------------------------------------------------------------------
/src/mcps/logs.py:
--------------------------------------------------------------------------------

```python
import os
import logging
import logging.handlers

def setup_logging():
    """
    Set up logging to write to a file in the user's Library/Logs/Mcps directory.
    """
    log_dir = os.path.expanduser("~/Library/Logs/Mcps")
    os.makedirs(log_dir, exist_ok=True)
    log_file = os.path.join(log_dir, "mcps.log")


    file_handler = logging.handlers.RotatingFileHandler(
        log_file, maxBytes=10 * 1024 * 1024, backupCount=5
    )
    file_handler.setLevel(logging.DEBUG)

    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )
    file_handler.setFormatter(formatter)
    # Disable console output by removing any default console handlers
    try:
        from rich.logging import RichHandler

        # MCP uses rich for console logging when it is available
        for handler in logging.root.handlers[:]:
            if isinstance(handler, RichHandler):
                logging.root.removeHandler(handler)
    except ImportError:
        pass
    for handler in logging.root.handlers[:]:
        if isinstance(handler, logging.StreamHandler):
            logging.root.removeHandler(handler)

    # Route all logging through the rotating file handler
    logging.basicConfig(
        handlers=[file_handler],
        level=logging.INFO,
        force=True,  # Override any existing logging configuration
    )
```

--------------------------------------------------------------------------------
/src/mcps/tools/perplexity_search.py:
--------------------------------------------------------------------------------

```python
from mcps.config import ServerConfig
import httpx

async def do_search(query: str, config: ServerConfig) -> str:
    """
    Performs a search and returns the results. 
    Args:
        query: The search query.

    Returns:
        The search query string back.
    """
    
    url = "https://api.perplexity.ai/chat/completions"
    headers = {
        "Authorization": f"Bearer {config.perplexity_api_key}",
        "Content-Type": "application/json"
    }
    payload = {
        "model": "sonar",
        "messages": [
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": query}
        ],
        "max_tokens": 1000,
        "temperature": 0.01,
        "top_p": 0.9,
        "return_related_questions": False,
        "web_search_options": {
           "search_context_size": "medium"
      }
    }

    async with httpx.AsyncClient() as client:
        response = await client.post(url, json=payload, headers=headers)
        response.raise_for_status()
        return format_response_with_citations(response.json())

def format_response_with_citations(response: dict) -> str:
    """
    Formats the response from Perplexity.ai to include citations as a markdown list.

    Args:
        response: The JSON response from Perplexity.ai.

    Returns:
        A formatted string with the content and citations.
    """
    content = response.get("choices", [{}])[0].get("message", {}).get("content", "No content available")
    citations = response.get("citations", [])

    if citations:
        citations_md = "\n".join([f"- {url}" for url in citations])
        return f"{content}\n\n### Citations\n{citations_md}"
    return content
```
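
For reference, a small usage example of `format_response_with_citations`; the sample payload is illustrative but follows the `choices`/`citations` shape the function expects:

```python
from mcps.tools.perplexity_search import format_response_with_citations

sample_response = {
    "choices": [{"message": {"content": "FastMCP wraps the MCP Python SDK."}}],
    "citations": ["https://github.com/jlowin/fastmcp"],
}

print(format_response_with_citations(sample_response))
# FastMCP wraps the MCP Python SDK.
#
# ### Citations
# - https://github.com/jlowin/fastmcp
```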

--------------------------------------------------------------------------------
/src/mcps/config.py:
--------------------------------------------------------------------------------

```python
# mcps/config.py
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict
from dotenv import load_dotenv
import os


@dataclass
class ServerConfig:
    prompts_dir: Path = field(default_factory=lambda: Path(__file__).parent / "prompts")
    cache_dir: Path = field(default_factory=lambda: Path(__file__).parent / "cache")
    tests_dir: Path = field(default_factory=lambda: Path(__file__).parent / "tests")
    library_docs: Dict[str, str] = field(default_factory=dict)
    project_paths: Dict[str, str] = field(default_factory=dict)
    openai_api_key: str = ""
    anthropic_api_key: str = ""
    perplexity_api_key: str = ""

def create_config(
    prompts_dir: Path = Path("./prompts"),
    cache_dir: Path = Path("./cache"),
    tests_dir: Path = Path("./tests"),
    library_docs: Dict[str, str] | None = None,
    project_paths: Dict[str, str] | None = None,
) -> ServerConfig:
    """
    Creates a ServerConfig instance, ensuring directories exist and
    handling default values for library_docs and project_paths.
    """
    # Load environment variables from .env files
    for env_path in [
        Path(__file__).parent.parent.parent,
        Path.home()
    ]:
        dotenv_path = env_path / ".env"
        if dotenv_path.exists():
            load_dotenv(dotenv_path)

    # Use provided dictionaries or default to empty dictionaries
    library_docs = library_docs if library_docs is not None else {}
    project_paths = project_paths if project_paths is not None else {}

    return ServerConfig(
        prompts_dir=prompts_dir,
        cache_dir=cache_dir,
        tests_dir=tests_dir,
        library_docs=library_docs,
        project_paths=project_paths,
        openai_api_key=os.getenv("OPENAI_API_KEY", ""),
        anthropic_api_key=os.getenv("ANTHROPIC_API_KEY", ""),
        perplexity_api_key=os.getenv("PERPLEXITY_API_KEY", ""),
    )

```
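
A usage sketch for `create_config`, passing explicit `library_docs` and `project_paths` mappings; the URL and path shown are placeholders, not shipped defaults:

```python
from pathlib import Path

from mcps.config import create_config

config = create_config(
    prompts_dir=Path("./prompts"),
    library_docs={"fastmcp": "https://example.com/fastmcp/llms-full.txt"},
    project_paths={"mcps": "/path/to/mcp-server-continue"},
)

# API keys are picked up from .env (project root or home directory) if present
print(bool(config.perplexity_api_key))
```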

--------------------------------------------------------------------------------
/SPEC.md:
--------------------------------------------------------------------------------

```markdown
# Development Automation Server Specification

## Overview
FastMCP server implementation providing development automation tools, with a focus on TDD and documentation management.

## Server Configuration

### Directory Structure
mcp-server/ 
├── prompts/ # Markdown prompt templates 
├── cache/
│ ├── docs/ # Cached documentation 
│ └── search / # Search results 
├── tests/ # Generated test files 
└── config/ # Server configuration


### Configuration Parameters
```python
@dataclass
class ServerConfig:
    prompts_dir: Path = Path("./prompts")
    cache_dir: Path = Path("./cache")
    tests_dir: Path = Path("./tests")
```
### Core Components
#### Prompt Templates
- `test_generator.md` - Creates test cases from spec
- `doc_extractor.md` - Formats documentation for caching
- `spec_parser.md` - Extracts requirements from free-form specs

#### Resource Endpoints
- `docs://{library_name}` - Get cached library documentation
- `spec://{spec_name}` - Get parsed specification
- `spec://{spec_name}/tests` - Get generated tests for spec
- `url://{encoded_url}` - Get cached URL content as markdown
#### Tools
```python
@mcp.tool()
def generate_tests(spec_name: str) -> str:
    """Generate test cases from a specification file"""

@mcp.tool()
def validate_tests(spec_name: str) -> str:
    """Validate that generated tests match specification requirements"""

@mcp.tool()
def suggest_test_improvements(test_file: str) -> str:
    """Analyze existing tests and suggest improvements for better coverage"""
```
### Server Implementation
#### Core Server Setup
```python
from mcp.server.fastmcp import FastMCP
from dataclasses import dataclass
from pathlib import Path

@dataclass
class ServerConfig:
    prompts_dir: Path
    cache_dir: Path
    tests_dir: Path

@dataclass
class AppContext:
    config: ServerConfig

def create_server(config: ServerConfig) -> FastMCP:
    mcp = FastMCP(
        "Development Automation Server",
        dependencies=["pytest"]
    )
    
    for dir_path in [config.prompts_dir, config.cache_dir, config.tests_dir]:
        dir_path.mkdir(parents=True, exist_ok=True)
        
    return mcp
```
### Integration
#### continue.dev Configuration
```json
{
  "mcpServers": [
    {
      "name": "Development Automation Server", 
      "command": "uv",
      "args": ["run", "server.py"]
    }
  ]
}
```
### Dependencies
- FastMCP
- pytest
```

--------------------------------------------------------------------------------
/src/mcps/server.py:
--------------------------------------------------------------------------------

```python
from dataclasses import dataclass
import logging
import os

from mcp import ClientCapabilities, RootsCapability
from mcp.server.session import ServerSession
from mcp.server.fastmcp import FastMCP

import mcps.prompts as prompts_module
import mcps.resources.url_resource as url_resource
import mcps.resources.doc_resource as doc_resource
import mcps.resources.project_resource as project_resource
import mcps.tools.internet_search as internet_search
import mcps.tools.perplexity_search as perplexity_search
from mcps.config import ServerConfig, create_config  # Import from config module


logger = logging.getLogger("mcps")


@dataclass
class AppContext:
    config: ServerConfig


class DevAutomationServer:
    def __init__(self, config: ServerConfig):
        self.config = config
        self.mcp = FastMCP(
            "Development Automation Server",
            # dependencies=["pytest", "httpx", "beautifulsoup4"],  # dependencies for resources/tools
        )
        self._setup_resources()
        self._setup_tools()
        self._setup_prompts()


    def _setup_resources(self):
        @self.mcp.resource("url://{encoded_url}")
        async def url_resource_handler(encoded_url: str) -> str:
            return await url_resource.get_resource(encoded_url, self.config)

        @self.mcp.resource("doc://{library_name}")
        async def doc_resource_handler(library_name: str) -> str:
            return await doc_resource.get_resource(library_name, self.config)

        @self.mcp.resource("project://{project_name}")
        async def project_resource_handler(project_name: str) -> str:
            return await project_resource.get_resource(project_name, self.config)
        @self.mcp.resource("resource://test", name="test/resource", description="Test project resource")
        async def test_resource_handler() -> str:
            try:
                session: ServerSession = self.mcp.get_context().session
                if session.check_client_capability(ClientCapabilities(roots=RootsCapability())) :
                    result = await session.list_roots()
                    logger.info(f"Result: {result}")
                    for root in result.roots:
                        logger.info(f"Root: {root.name} , {root.uri}")
            except Exception as e:
                logger.error(f"Error listing roots: {e}")
            return "Test project resource"
        @self.mcp.resource("documentation://test/docs")
        async def test_docs_handler() -> str:
            return "Test project documentation"

    def _setup_tools(self):
        @self.mcp.tool(name="web_search", description="Search the web for information")
        async def web_search(query: str) -> str:
            """
            Performs a web search using the provided query. Find the most relevant pages
            and return summary result.
            Args:
                query: The search query.
            Returns:
                The summary of the most relevant search results.
            """
            try:
                session: ServerSession = self.mcp.get_context().session
                if session.check_client_capability(ClientCapabilities(roots=RootsCapability())) :
                    result = await session.list_roots()
                    logger.info(f"Result: {result}")
                    for root in result.roots:
                        logger.info(f"Root: {root.name} , location: {root.uri}")
                else:
                    logger.info("Client does not support roots capability")
                    # Try to get the roots from the environment variable ROOT
                    root_value = os.getenv("ROOT")
                    logger.info(f"ROOT environment variable: {root_value}")
            except Exception as e:
                logger.error(f"Error listing roots: {e}")
            return await perplexity_search.do_search(query, self.config)

        # @self.mcp.tool()
        # async def perplexity_summary_search(query: str) -> str:
        #     return await perplexity_search.do_search(query, self.config)

    def _setup_prompts(self):
        # Dynamically register prompts from the prompts directory
        prompts_module.setup_prompts(self.mcp, self.config)

    def start(self):
        self.mcp.run()


def create_server(config: ServerConfig) -> DevAutomationServer:
    """
    Creates and configures the Development Automation Server.

    Args:
        config: The server configuration.

    Returns:
        The configured DevAutomationServer instance.
    """
    server = DevAutomationServer(config)
    return server


if __name__ == "__main__":
    # Example usage with configuration from the config module
    config = create_config()  # Use the factory method
    server = create_server(config)
    server.start()
```

--------------------------------------------------------------------------------
/SPEC-GEMINI.md:
--------------------------------------------------------------------------------

```markdown
# FastMCP Server Project Specification
This document outlines the specification for a FastMCP server designed to provide prompts, resources, and tools to Language Model (LLM) clients, such as continue.dev.

## 1. Prompts
- **Source:** Prompts are stored in Markdown files within a dedicated prompts directory on the server.
- **Prompt Identification:** Each prompt is identified by its filename (without the .md extension). For example, a file named code_review.md corresponds to a prompt named code_review.
- **Prompt Templating:** Prompt files can contain template variables in the format {{variable}}. These variables are placeholders that will be replaced with values provided by the client when requesting a prompt.
- **Templating Mechanism:** Simple string replacement. The server will receive a dictionary of variable names and values from the client and replace all occurrences of {{variable}} with their corresponding values.
- **Client Interaction (MCP):**
  - **Listing Prompts:** Clients can use the MCP listPrompts request to get a list of available prompt names. The server will scan the prompts directory and return a list of filenames (without extensions).
  - **Retrieving Prompts:** Clients can use the MCP getPrompt request to retrieve a specific prompt. The request must include:
    - name: The name of the prompt (filename without extension).
    - arguments: A dictionary where keys are variable names used in the prompt template, and values are the strings to replace the placeholders.
- **Server Processing:** Upon receiving a getPrompt request, the server will:
  1. Locate the Markdown file corresponding to the requested name in the prompts directory.
  2. Read the content of the Markdown file.
  3. Perform template replacement using the provided arguments dictionary.
  4. Return the processed prompt content as a string within the MCP GetPromptResult response.
## 2. Resources
The server will provide the following resource types, identified by their URI schemes:

### url: Resource (Fetch URL Content as Markdown)
- **URI Format:** url:http://<host>/<page> (e.g., url:http://example.com/page)
- **Functionality:**
  1. Extract the URL from the URI (e.g., http://example.com/page).
  2. Use the external service r.jina.ai to fetch and convert the URL content to Markdown by transforming the URL to https://r.jina.ai/<original_url> and making a request.
  3. If the fetched content is plain text, return it as is.
  4. Return the content (Markdown or plain text) as the resource.
- **Error Handling:** Any errors from the r.jina.ai service or the response content itself will be returned as the resource content to the client.

### doc: Resource (Library Documentation)
- **URI Format:** doc://<library_name> (e.g., doc://pandas)
- **Configuration:** The server will have a configuration dictionary (library_docs) mapping library names to URLs of llms.txt files. This dictionary will be initially hardcoded in the Configuration class.
- **Functionality:**
  1. Extract the <library_name> from the URI.
  2. Look up the <library_name> in the library_docs dictionary to get the corresponding llms.txt URL.
  3. Fetch the content from the llms.txt URL.
  4. Return the fetched content (assumed to be plain text, ready for LLM use) as the resource.
- **Error Handling:** If the <library_name> is not found in the library_docs dictionary, the server will return an error to the client.

### project: Resource (Project Structure and Content)
- **URI Format:** project://<project_name> (e.g., project://my_project)
- **Configuration:** The server will have a configuration dictionary (project_paths) mapping project names to local project root folder paths.
- **External Tool:** "CodeWeawer" - assumed to be a command-line tool named codeweawer available in the system's PATH.
- **Functionality:**
  1. Extract the <project_name> from the URI.
  2. Look up the <project_name> in the project_paths dictionary to get the project root folder path.
  3. Execute the codeweawer command in the shell, passing the project root folder path as an argument (e.g., codeweawer /user/projects/my_project).
  4. Capture the standard output (stdout) from the codeweawer command.
  5. Return the captured stdout (plain text project structure) as the resource.
- **Error Handling:**
  - If the <project_name> is not found in project_paths, return an error.
  - If the codeweawer command is not found in the system's PATH, return an error.
  - If the codeweawer command execution fails (non-zero exit code), return an error.
  In all error cases, an error response will be returned to the client.
## 3. Tools
The server will provide the following tools:

### web_search Tool (Web Search using Serper)
- **Tool Name:** web_search
- **Argument:** query (string, required) - The search query.
- **Functionality:** Uses the serper API to perform a web search using the provided query.
- **Output:** Returns a plain text summary of the search results as a string. If no results are found, returns an empty string.
- **Error Handling:** If the Serper API call fails, returns an error message string to the client. If no search results are found, returns an empty string.

### perplexity_summary_search Tool (Summarized Web Search using Perplexity.io)
- **Tool Name:** perplexity_summary_search
- **Argument:** query (string, required) - The search query.
- **Functionality:** Uses the perplexity.io API to perform a web search and get a summarized response for the query.
- **Output:** Returns the summarized search result as a string. If no summary is available or an error occurs, returns an empty string.
- **Error Handling:** If the Perplexity.io API call fails, returns an error message string to the client. If no summary is available or other issues occur, returns an empty string.

```