# Directory Structure
```
├── .github
│   └── workflows
│       ├── publish.yaml
│       └── python-ci.yaml
├── .gitignore
├── .pre-commit-config.yaml
├── .python-version
├── atla_mcp_server
│   ├── __init__.py
│   ├── __main__.py
│   ├── debug.py
│   └── server.py
├── CONTRIBUTING.md
├── LICENSE
├── pyproject.toml
└── README.md
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
3.11
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info
# Virtual environments
.venv
# Lock files
uv.lock
```
--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-yaml
      - id: check-json
      - id: check-toml
      - id: check-merge-conflict
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: mixed-line-ending
      - id: check-case-conflict
      - id: detect-private-key
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.9.7
    hooks:
      # Run the linter
      - id: ruff
        args: [--fix]
      # Run the formatter
      - id: ruff-format
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Atla MCP Server

> [!CAUTION]
> This repository was archived on July 21, 2025. The Atla API is no longer active.

An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API for state-of-the-art LLM-as-a-Judge (LLMJ) evaluation.

> Learn more about Atla [here](https://docs.atla-ai.com). Learn more about the Model Context Protocol [here](https://modelcontextprotocol.io).

<a href="https://glama.ai/mcp/servers/@atla-ai/atla-mcp-server">
<img width="380" height="200" src="https://glama.ai/mcp/servers/@atla-ai/atla-mcp-server/badge" alt="Atla MCP server" />
</a>
## Available Tools
- `evaluate_llm_response`: Evaluate an LLM's response to a prompt using a given evaluation criteria. This function uses an Atla evaluation model under the hood to return a dictionary containing a score for the model's response and a textual critique containing feedback on the model's response.
- `evaluate_llm_response_on_multiple_criteria`: Evaluate an LLM's response to a prompt across _multiple_ evaluation criteria. This function uses an Atla evaluation model under the hood to return a list of dictionaries, each containing an evaluation score and critique for a given criteria.
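For reference, both tools return score/critique pairs. An illustrative sketch of the shapes (values are made up; the format follows the tool docstrings in `server.py`):
```python
# evaluate_llm_response -> a single score/critique dict
{"score": "5", "critique": "The response fully satisfies the instruction."}

# evaluate_llm_response_on_multiple_criteria -> one dict per criterion,
# in the same order as the criteria provided
[
    {"score": "Yes", "critique": "The response is consistent with the reference."},
    {"score": "4", "critique": "Minor requirements of the instruction are missed."},
]
```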
## Usage
> To use the MCP server, you will need an Atla API key. You can find your existing API key [here](https://www.atla-ai.com/sign-in) or create a new one [here](https://www.atla-ai.com/sign-up).
### Installation
> We recommend using `uv` to manage the Python environment. See [here](https://docs.astral.sh/uv/getting-started/installation/) for installation instructions.
### Manually running the server
Once you have `uv` installed and have your Atla API key, you can manually run the MCP server using `uvx` (which is provided by `uv`):
```bash
ATLA_API_KEY=<your-api-key> uvx atla-mcp-server
```
### Connecting to the server
> Having issues or need help connecting to another client? Feel free to open an issue or [contact us](mailto:[email protected])!
#### OpenAI Agents SDK
> For more details on using the OpenAI Agents SDK with MCP servers, refer to the [official documentation](https://openai.github.io/openai-agents-python/).
1. Install the OpenAI Agents SDK:
```shell
pip install openai-agents
```
2. Use the OpenAI Agents SDK to connect to the server:
```python
import os

from agents import Agent
from agents.mcp import MCPServerStdio

async with MCPServerStdio(
    params={
        "command": "uvx",
        "args": ["atla-mcp-server"],
        "env": {"ATLA_API_KEY": os.environ.get("ATLA_API_KEY")}
    }
) as atla_mcp_server:
    ...
```
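Within the `async with` block, the server can be handed to an agent. A minimal sketch (the `Agent`/`Runner` usage follows the OpenAI Agents SDK documentation and is illustrative, not part of this repository; the agent name and instructions are hypothetical):
```python
from agents import Agent, Runner

agent = Agent(
    name="AssistantWithAtlaEval",  # hypothetical agent name
    instructions="Use the Atla evaluation tools to check your answers before replying.",
    mcp_servers=[atla_mcp_server],
)
result = await Runner.run(agent, "What is the capital of the moon?")
print(result.final_output)
```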
#### Claude Desktop
> For more details on configuring MCP servers in Claude Desktop, refer to the [official MCP quickstart guide](https://modelcontextprotocol.io/quickstart/user).
1. Add the following to your `claude_desktop_config.json` file:
```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```
2. **Restart Claude Desktop** to apply the changes.
You should now see options from `atla-mcp-server` in the list of available MCP tools.
#### Cursor
> For more details on configuring MCP servers in Cursor, refer to the [official documentation](https://docs.cursor.com/context/model-context-protocol).
1. Add the following to your `.cursor/mcp.json` file:
```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```
You should now see `atla-mcp-server` in the list of available MCP servers.
## Contributing
Contributions are welcome! Please see the [CONTRIBUTING.md](CONTRIBUTING.md) file for details.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
```
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
```markdown
# Contributing to Atla MCP Server
We welcome contributions to the Atla MCP Server! This document provides guidelines and steps for contributing.
## Development Setup
Follow the installation steps in the [README.md](README.md#installation), making sure to install the development dependencies:
```shell
uv pip install -e ".[dev]"
pre-commit install # Set up git hooks
```
## Making Changes
1. Fork the repository on GitHub
2. Clone your fork locally
3. Create a new branch for your changes
4. Make your changes
5. Commit your changes (pre-commit hooks will run automatically)
6. Push to your fork
7. Submit a pull request from your fork to our main repository
## Questions?
Feel free to open an issue if you have questions or run into problems.
```
--------------------------------------------------------------------------------
/atla_mcp_server/__init__.py:
--------------------------------------------------------------------------------
```python
"""An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API.""" # noqa: E501
```
--------------------------------------------------------------------------------
/atla_mcp_server/debug.py:
--------------------------------------------------------------------------------
```python
"""File for debugging the Atla MCP Server via the MCP Inspector."""
import os
from atla_mcp_server.server import app_factory
app = app_factory(atla_api_key=os.getenv("ATLA_API_KEY", ""))
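# Illustrative usage: point the MCP Inspector at this module's `app` object,
# e.g. via `mcp dev atla_mcp_server/debug.py` (assumes the `mcp[cli]` extra is installed).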
```
--------------------------------------------------------------------------------
/.github/workflows/python-ci.yaml:
--------------------------------------------------------------------------------
```yaml
name: Python CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  python-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install uv
        run: curl -LsSf https://astral.sh/uv/install.sh | sh
      - name: Setup Python environment
        run: |
          uv venv
          . .venv/bin/activate
          uv pip install -e ".[dev]"
      - name: Run ruff checks
        run: |
          . .venv/bin/activate
          ruff check .
          ruff format --check .
      - name: Run mypy checks
        run: |
          . .venv/bin/activate
          dmypy run -- .
```
--------------------------------------------------------------------------------
/.github/workflows/publish.yaml:
--------------------------------------------------------------------------------
```yaml
name: Publishing

on:
  release:
    types: [published]

jobs:
  build:
    runs-on: ubuntu-latest
    name: Build distribution
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v3
      - name: Build
        run: uv build
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: release-dists
          path: dist/

  pypi-publish:
    name: Upload release to PyPI
    runs-on: ubuntu-latest
    environment: release
    needs: [build]
    permissions:
      id-token: write
    steps:
      - name: Retrieve release distribution
        uses: actions/download-artifact@v4
        with:
          name: release-dists
          path: dist/
      - name: Publish package distribution to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```
--------------------------------------------------------------------------------
/atla_mcp_server/__main__.py:
--------------------------------------------------------------------------------
```python
"""Entrypoint for the Atla MCP Server."""
import argparse
import os
from atla_mcp_server.server import app_factory
def main():
"""Entrypoint for the Atla MCP Server."""
print("Starting Atla MCP Server with stdio transport...")
parser = argparse.ArgumentParser()
parser.add_argument(
"--atla-api-key",
type=str,
required=False,
help="Atla API key. Can also be set via ATLA_API_KEY environment variable.",
)
args = parser.parse_args()
if args.atla_api_key:
print("Using Atla API key from --atla-api-key CLI argument...")
atla_api_key = args.atla_api_key
elif os.getenv("ATLA_API_KEY"):
atla_api_key = os.getenv("ATLA_API_KEY")
print("Using Atla API key from ATLA_API_KEY environment variable...")
else:
parser.error(
"Atla API key must be provided either via --atla-api-key argument "
"or ATLA_API_KEY environment variable"
)
print("Creating server...")
app = app_factory(atla_api_key)
print("Running server...")
app.run(transport="stdio")
if __name__ == "__main__":
main()
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[build-system]
requires = ["hatchling", "uv-dynamic-versioning"]
build-backend = "hatchling.build"
[tool.hatch.version]
source = "uv-dynamic-versioning"
[tool.uv-dynamic-versioning]
vcs = "git"
style = "pep440"
bump = true
[tool.hatch.build.targets.wheel]
packages = ["atla_mcp_server"]
[tool.hatch.build.targets.sdist]
packages = ["atla_mcp_server"]
[project]
name = "atla-mcp-server"
dynamic = ["version"]
description = "An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API."
readme = "README.md"
requires-python = ">=3.11"
authors = [
{ name="Atla", email="[email protected]" }
]
license = { text = "MIT" }
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
]
dependencies = [
"atla>=0.6.0",
"mcp[cli]>=1.6.0",
]
[project.optional-dependencies]
dev = [
"mypy>=1.15.0",
"pre-commit>=3.7.1",
"ruff>=0.9.7",
]
[project.scripts]
atla-mcp-server = "atla_mcp_server.__main__:main"
[project.urls]
Homepage = "https://atla-ai.com"
Repository = "https://github.com/atla-ai/atla-mcp-server"
Issues = "https://github.com/atla-ai/atla-mcp-server/issues"
[tool.mypy]
exclude = ['.venv']
explicit_package_bases = false
follow_untyped_imports = true
implicit_optional = false
mypy_path = ["atla_mcp_server"]
plugins = ['pydantic.mypy']
python_version = "3.11"
[tool.ruff]
line-length = 90
indent-width = 4
[tool.ruff.lint]
exclude = [".venv"]
# See: https://docs.astral.sh/ruff/rules/
select = [
"B", # Bugbear
"C", # Complexity
"E", # Pycodestyle
"F", # Pyflakes
"I", # Isort
"RUF", # Ruff
"W", # Pycodestyle
"D", # Docstrings
]
ignore = []
fixable = ["ALL"]
unfixable = []
[tool.ruff.lint.isort]
known-first-party = ["atla_mcp_server"]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.format]
quote-style = "double"
```
--------------------------------------------------------------------------------
/atla_mcp_server/server.py:
--------------------------------------------------------------------------------
```python
"""MCP server implementation."""
import asyncio
from contextlib import asynccontextmanager
from dataclasses import dataclass
from textwrap import dedent
from typing import Annotated, AsyncIterator, Literal, Optional, cast
from atla import AsyncAtla
from mcp.server.fastmcp import Context, FastMCP
from pydantic import WithJsonSchema
# config
@dataclass
class MCPState:
"""State of the MCP server."""
atla_client: AsyncAtla
# types
AnnotatedLlmPrompt = Annotated[
str,
WithJsonSchema(
{
"description": dedent(
"""The prompt given to an LLM to generate the `llm_response` to be \
evaluated."""
),
"examples": [
"What is the capital of the moon?",
"Explain the difference between supervised and unsupervised learning.",
"Can you summarize the main idea behind transformers in NLP?",
],
}
),
]
AnnotatedLlmResponse = Annotated[
str,
WithJsonSchema(
{
"description": dedent(
"""The output generated by the model in response to the `llm_prompt`, \
which needs to be evaluated."""
),
"examples": [
dedent(
"""The Moon doesn't have a capital — it has no countries, \
governments, or permanent residents"""
),
dedent(
"""Supervised learning uses labeled data to train models to make \
predictions or classifications. Unsupervised learning, on the other \
hand, works with unlabeled data to uncover hidden patterns or \
groupings, such as through clustering or dimensionality reduction."""
),
dedent(
"""Transformers are neural network architectures designed for \
sequence modeling tasks like NLP. They rely on self-attention \
mechanisms to weigh the importance of different input tokens, \
enabling parallel processing of input data. Unlike RNNs, they don't \
process sequentially, which allows for faster training and better \
handling of long-range dependencies."""
),
],
}
),
]
AnnotatedEvaluationCriteria = Annotated[
str,
WithJsonSchema(
{
"description": dedent(
"""The specific criteria or instructions on which to evaluate the \
model output. A good evaluation criteria should provide the model \
with: (1) a description of the evaluation task, (2) a rubric of \
possible scores and their corresponding criteria, and (3) a \
final sentence clarifying expected score format. A good evaluation \
criteria should also be specific and focus on a single aspect of \
the model output. To evaluate a model's response on multiple \
criteria, use the `evaluate_llm_response_on_multiple_criteria` \
function and create individual criteria for each relevant evaluation \
task. Typical rubrics score responses either on a Likert scale from \
1 to 5 or binary scale with scores of 'Yes' or 'No', depending on \
the specific evaluation task."""
),
"examples": [
dedent(
"""Evaluate how well the response fulfills the requirements of the instruction by providing relevant information. This includes responding in accordance with the explicit and implicit purpose of given instruction.
Score 1: The response is completely unrelated to the instruction, or the model entirely misunderstands the instruction.
Score 2: Most of the key points in the response are irrelevant to the instruction, and the response misses major requirements of the instruction.
Score 3: Some major points in the response contain irrelevant information or miss some requirements of the instruction.
Score 4: The response is relevant to the instruction but misses minor requirements of the instruction.
Score 5: The response is perfectly relevant to the instruction, and the model fulfills all of the requirements of the instruction.
Your score should be an integer between 1 and 5.""" # noqa: E501
),
dedent(
"""Evaluate whether the information provided in the response is correct given the reference response.
Ignore differences in punctuation and phrasing between the response and reference response.
It is okay if the response contains more information than the reference response, as long as it does not contain any conflicting statements.
Binary scoring
"No": The response is not factually accurate when compared against the reference response or includes conflicting statements.
"Yes": The response is supported by the reference response and does not contain conflicting statements.
Your score should be either "No" or "Yes".
""" # noqa: E501
),
],
}
),
]
AnnotatedExpectedLlmOutput = Annotated[
Optional[str],
WithJsonSchema(
{
"description": dedent(
"""A reference or ideal answer to compare against the `llm_response`. \
This is useful in cases where a specific output is expected from \
the model. Defaults to None."""
)
}
),
]
AnnotatedLlmContext = Annotated[
Optional[str],
WithJsonSchema(
{
"description": dedent(
"""Additional context or information provided to the model during \
generation. This is useful in cases where the model was provided \
with additional information that is not part of the `llm_prompt` \
or `expected_llm_output` (e.g., a RAG retrieval context). \
Defaults to None."""
)
}
),
]
AnnotatedModelId = Annotated[
Literal["atla-selene", "atla-selene-mini"],
WithJsonSchema(
{
"description": dedent(
"""The Atla model ID to use for evaluation. `atla-selene` is the \
flagship Atla model, optimized for the highest all-round performance. \
`atla-selene-mini` is a compact model that is generally faster and \
cheaper to run. Defaults to `atla-selene`."""
)
}
),
]
# tools
async def evaluate_llm_response(
ctx: Context,
evaluation_criteria: AnnotatedEvaluationCriteria,
llm_prompt: AnnotatedLlmPrompt,
llm_response: AnnotatedLlmResponse,
expected_llm_output: AnnotatedExpectedLlmOutput = None,
llm_context: AnnotatedLlmContext = None,
model_id: AnnotatedModelId = "atla-selene",
) -> dict[str, str]:
"""Evaluate an LLM's response to a prompt using a given evaluation criteria.
This function uses an Atla evaluation model under the hood to return a dictionary
containing a score for the model's response and a textual critique containing
feedback on the model's response.
Returns:
dict[str, str]: A dictionary containing the evaluation score and critique, in
the format `{"score": <score>, "critique": <critique>}`.
"""
state = cast(MCPState, ctx.request_context.lifespan_context)
result = await state.atla_client.evaluation.create(
model_id=model_id,
model_input=llm_prompt,
model_output=llm_response,
evaluation_criteria=evaluation_criteria,
expected_model_output=expected_llm_output,
model_context=llm_context,
)
return {
"score": result.result.evaluation.score,
"critique": result.result.evaluation.critique,
}
async def evaluate_llm_response_on_multiple_criteria(
ctx: Context,
evaluation_criteria_list: list[AnnotatedEvaluationCriteria],
llm_prompt: AnnotatedLlmPrompt,
llm_response: AnnotatedLlmResponse,
expected_llm_output: AnnotatedExpectedLlmOutput = None,
llm_context: AnnotatedLlmContext = None,
model_id: AnnotatedModelId = "atla-selene",
) -> list[dict[str, str]]:
"""Evaluate an LLM's response to a prompt across *multiple* evaluation criteria.
This function uses an Atla evaluation model under the hood to return a list of
dictionaries, each containing an evaluation score and critique for a given
criteria.
Returns:
list[dict[str, str]]: A list of dictionaries containing the evaluation score
and critique, in the format `{"score": <score>, "critique": <critique>}`.
The order of the dictionaries in the list will match the order of the
criteria in the `evaluation_criteria_list` argument.
"""
tasks = [
evaluate_llm_response(
ctx=ctx,
evaluation_criteria=criterion,
llm_prompt=llm_prompt,
llm_response=llm_response,
expected_llm_output=expected_llm_output,
llm_context=llm_context,
model_id=model_id,
)
for criterion in evaluation_criteria_list
]
results = await asyncio.gather(*tasks)
return results
# app factory
def app_factory(atla_api_key: str) -> FastMCP:
"""Factory function to create an Atla MCP server with the given API key."""
@asynccontextmanager
async def lifespan(_: FastMCP) -> AsyncIterator[MCPState]:
async with AsyncAtla(
api_key=atla_api_key,
default_headers={
"X-Atla-Source": "mcp-server",
},
) as client:
yield MCPState(atla_client=client)
mcp = FastMCP("Atla", lifespan=lifespan)
mcp.tool()(evaluate_llm_response)
mcp.tool()(evaluate_llm_response_on_multiple_criteria)
return mcp
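

# Illustrative usage (mirrors __main__.py; the environment-variable lookup is an
# example, not part of this module):
#
#   app = app_factory(atla_api_key=os.environ["ATLA_API_KEY"])
#   app.run(transport="stdio")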
```