# Directory Structure
```
├── .github
│   └── workflows
│       ├── main.yml
│       └── publish.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .python-version
├── .zed
│   └── settings.json
├── glama.json
├── LICENSE
├── logfire_mcp
│   ├── __init__.py
│   ├── __main__.py
│   └── main.py
├── Makefile
├── pyproject.toml
├── README.md
├── tests
│   ├── __init__.py
│   ├── cassettes
│   │   ├── test_logfire_link
│   │   │   └── test_logfire_link.yaml
│   │   └── test_schema_reference
│   │       └── test_schema_reference.yaml
│   ├── conftest.py
│   ├── README.md.jinja
│   ├── test_logfire_link.py
│   ├── test_readme.py
│   └── test_schema_reference.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
3.12
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info
# Virtual environments
.venv
.envrc
.env
.claude
```
--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
```yaml
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v4.3.0
  hooks:
  - id: no-commit-to-branch  # prevent direct commits to the `main` branch
  - id: check-yaml
  - id: check-toml
  - id: end-of-file-fixer
  - id: trailing-whitespace
- repo: https://github.com/sirosen/texthooks
  rev: 0.6.8
  hooks:
  - id: fix-smartquotes
    exclude: "cassettes/"
  - id: fix-spaces
    exclude: "cassettes/"
  - id: fix-ligatures
    exclude: "cassettes/"
- repo: https://github.com/codespell-project/codespell
  # Configuration for codespell is in pyproject.toml
  rev: v2.3.0
  hooks:
  - id: codespell
    args: ["--skip", "tests/cassettes/*"]
    additional_dependencies:
    - tomli
- repo: local
  hooks:
  - id: format
    name: Format
    entry: make
    args: [format]
    language: system
    types: [python]
    pass_filenames: false
  - id: lint
    name: Lint
    entry: make
    args: [lint]
    types: [python]
    language: system
    pass_filenames: false
  - id: typecheck
    name: Typecheck
    entry: make
    args: [typecheck]
    language: system
    types: [python]
    pass_filenames: false
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
<!-- DO NOT MODIFY THIS FILE DIRECTLY, IT IS GENERATED BY THE TESTS! -->
# Pydantic Logfire MCP Server
This repository contains a Model Context Protocol (MCP) server with tools that can access the OpenTelemetry traces and
metrics you've sent to Pydantic Logfire.
<a href="https://glama.ai/mcp/servers/@pydantic/logfire-mcp">
<img width="380" height="200" src="https://glama.ai/mcp/servers/@pydantic/logfire-mcp/badge" alt="Pydantic Logfire Server MCP server" />
</a>
This MCP server enables LLMs to retrieve your application's telemetry data, analyze distributed
traces, and make use of the results of arbitrary SQL queries executed using the Pydantic Logfire APIs.
## Available Tools
* `find_exceptions_in_file` - Get the details about the 10 most recent exceptions in the file.
  * Arguments:
    * `filepath` (string) - The path to the file to find exceptions in.
    * `age` (integer) - Number of minutes to look back, e.g. 30 for last 30 minutes. Maximum allowed value is 7 days.

* `arbitrary_query` - Run an arbitrary query on the Pydantic Logfire database.
  * Arguments:
    * `query` (string) - The query to run, as a SQL string.
    * `age` (integer) - Number of minutes to look back, e.g. 30 for last 30 minutes. Maximum allowed value is 7 days.

* `logfire_link` - Creates a link to help the user to view the trace in the Logfire UI.
  * Arguments:
    * `trace_id` (string) - The trace ID to link to.

* `schema_reference` - The database schema for the Logfire DataFusion database.
## Setup
### Install `uv`
The first thing to do is make sure `uv` is installed, as `uv` is used to run the MCP server.
For installation instructions, see the [`uv` installation docs](https://docs.astral.sh/uv/getting-started/installation/).
If you already have an older version of `uv` installed, you might need to update it with `uv self update`.
### Obtain a Pydantic Logfire read token
In order to make requests to the Pydantic Logfire APIs, the Pydantic Logfire MCP server requires a "read token".
You can create one under the "Read Tokens" section of your project settings in Pydantic Logfire:
https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens
> [!IMPORTANT]
> Pydantic Logfire read tokens are project-specific, so you need to create one for the specific project you want to expose to the Pydantic Logfire MCP server.
### Manually run the server
Once you have `uv` installed and have a Pydantic Logfire read token, you can manually run the MCP server using `uvx` (which is provided by `uv`).
You can specify your read token using the `LOGFIRE_READ_TOKEN` environment variable:
```bash
LOGFIRE_READ_TOKEN=YOUR_READ_TOKEN uvx logfire-mcp@latest
```
You can also set `LOGFIRE_READ_TOKEN` in a `.env` file:
```bash
LOGFIRE_READ_TOKEN=pylf_v1_us_...
```
**NOTE:** For this to work, the MCP server needs to run with the directory containing the `.env` file as its working directory.

Alternatively, use the `--read-token` flag:
```bash
uvx logfire-mcp@latest --read-token=YOUR_READ_TOKEN
```
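
You can also sanity-check your setup with the `--test` flag (defined in `logfire_mcp/__main__.py`), which starts the server, lists its tools and resources, calls one of them, and exits. Note that this makes a real request to the Logfire APIs, so it requires a valid read token:
```bash
uvx logfire-mcp@latest --test --read-token=YOUR_READ_TOKEN
```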
> [!NOTE]
> If you are using Cursor, Claude Desktop, Cline, or other MCP clients that manage your MCP servers for you, you **_do
> NOT_** need to manually run the server yourself. The next section will show you how to configure these clients to make
> use of the Pydantic Logfire MCP server.
### Base URL
If you are running Logfire in a self-hosted environment, you need to specify the base URL.
This can be done using the `LOGFIRE_BASE_URL` environment variable:
```bash
LOGFIRE_BASE_URL=https://logfire.my-company.com uvx logfire-mcp@latest --read-token=YOUR_READ_TOKEN
```
You can also use the `--base-url` argument:
```bash
uvx logfire-mcp@latest --base-url=https://logfire.my-company.com --read-token=YOUR_READ_TOKEN
```
## Configuration with well-known MCP clients
### Configure for Cursor
Create a `.cursor/mcp.json` file in your project root:
```json
{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp@latest", "--read-token=YOUR-TOKEN"]
    }
  }
}
```
Cursor doesn't accept the `env` field, so you need to use the `--read-token` flag instead.

### Configure for Claude Code
Run the following command:
```bash
claude mcp add logfire -e LOGFIRE_READ_TOKEN=YOUR_TOKEN -- uvx logfire-mcp@latest
```
### Configure for Claude Desktop
Add to your Claude settings:
```json
{
  "command": ["uvx"],
  "args": ["logfire-mcp@latest"],
  "type": "stdio",
  "env": {
    "LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
  }
}
```
### Configure for Cline
Add to your Cline settings in `cline_mcp_settings.json`:
```json
{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp@latest"],
      "env": {
        "LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}
```
### Configure for VS Code
Make sure you [enabled MCP support in VS Code](https://code.visualstudio.com/docs/copilot/chat/mcp-servers#_enable-mcp-support-in-vs-code).
Create a `.vscode/mcp.json` file in your project's root directory:
```json
{
  "servers": {
    "logfire": {
      "type": "stdio",
      "command": "uvx", // or the absolute /path/to/uvx
      "args": ["logfire-mcp@latest"],
      "env": {
        "LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
      }
    }
  }
}
```
### Configure for Zed
Create a `.zed/settings.json` file in your project's root directory:
```json
{
  "context_servers": {
    "logfire": {
      "source": "custom",
      "command": "uvx",
      "args": ["logfire-mcp@latest"],
      "env": {
        "LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
      },
      "enabled": true
    }
  }
}
```
## Example Interactions
1. Get details about exceptions from traces in a specific file:

   ```json
   {
     "name": "find_exceptions_in_file",
     "arguments": {
       "filepath": "app/api.py",
       "age": 1440
     }
   }
   ```

   Response:

   ```json
   [
     {
       "created_at": "2024-03-20T10:30:00Z",
       "message": "Failed to process request",
       "exception_type": "ValueError",
       "exception_message": "Invalid input format",
       "function_name": "process_request",
       "line_number": "42",
       "attributes": {
         "service.name": "api-service",
         "code.filepath": "app/api.py"
       },
       "trace_id": "1234567890abcdef"
     }
   ]
   ```

2. Run a custom query on traces:

   ```json
   {
     "name": "arbitrary_query",
     "arguments": {
       "query": "SELECT trace_id, message, created_at, attributes->>'service.name' as service FROM records WHERE severity_text = 'ERROR' ORDER BY created_at DESC LIMIT 10",
       "age": 1440
     }
   }
   ```
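
Since `arbitrary_query` accepts any DataFusion SQL, aggregations work too. A hypothetical example (the `service_name` and `is_exception` columns come from the schema exposed by `schema_reference`) counting errors per service over the last hour:

```json
{
  "name": "arbitrary_query",
  "arguments": {
    "query": "SELECT service_name, count(*) AS error_count FROM records WHERE is_exception = true GROUP BY service_name ORDER BY error_count DESC",
    "age": 60
  }
}
```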
## Examples of Questions for Claude
1. "What exceptions occurred in traces from the last hour across all services?"
2. "Show me the recent errors in the file 'app/api.py' with their trace context"
3. "How many errors were there in the last 24 hours per service?"
4. "What are the most common exception types in my traces, grouped by service name?"
5. "Get me the OpenTelemetry schema for traces and metrics"
6. "Find all errors from yesterday and show their trace contexts"
## Getting Started
1. First, obtain a Pydantic Logfire read token from:
   https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens

2. Run the MCP server:

   ```bash
   uvx logfire-mcp@latest --read-token=YOUR_TOKEN
   ```

3. Configure your preferred client (Cursor, Claude Desktop, or Cline) using the configuration examples above.

4. Start using the MCP server to analyze your OpenTelemetry traces and metrics!
## Contributing
We welcome contributions to help improve the Pydantic Logfire MCP server. Whether you want to add new trace analysis tools, enhance metrics querying functionality, or improve documentation, your input is valuable.
For examples of other MCP servers and implementation patterns, see the [Model Context Protocol servers repository](https://github.com/modelcontextprotocol/servers).
## License
Pydantic Logfire MCP is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License.
```
--------------------------------------------------------------------------------
/logfire_mcp/__init__.py:
--------------------------------------------------------------------------------
```python
```
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
```python
```
--------------------------------------------------------------------------------
/glama.json:
--------------------------------------------------------------------------------
```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "Kludex",
    "samuelcolvin"
  ]
}
```
--------------------------------------------------------------------------------
/.zed/settings.json:
--------------------------------------------------------------------------------
```json
{
  "context_servers": {
    "logfire": {
      "source": "custom",
      // use uv run logfire-mcp, not uvx so we use the local version
      "command": "uv",
      "args": ["run", "logfire-mcp"],
      "enabled": true
    }
  }
}
```
--------------------------------------------------------------------------------
/tests/test_logfire_link.py:
--------------------------------------------------------------------------------
```python
import pytest
from mcp.client.session import ClientSession
from mcp.types import TextContent

pytestmark = [pytest.mark.vcr, pytest.mark.anyio]


async def test_logfire_link(session: ClientSession) -> None:
    result = await session.call_tool('logfire_link', {'trace_id': '019837e6ba8ab0ede383b398b6706f28'})
    assert result.content == [
        TextContent(
            type='text',
            text='https://logfire-us.pydantic.dev/kludex/logfire-mcp?q=trace_id%3D%27019837e6ba8ab0ede383b398b6706f28%27',
        )
    ]
```
--------------------------------------------------------------------------------
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
```yaml
name: Publishing

on:
  release:
    types: [published]

jobs:
  build:
    name: Build distribution
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v6
        with:
          enable-cache: true
      - name: Build
        run: uv build
      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: release-dists
          path: dist/

  pypi-publish:
    name: Upload release to PyPI
    runs-on: ubuntu-latest
    environment: release
    needs: [build]
    permissions:
      id-token: write  # IMPORTANT: this permission is mandatory for trusted publishing
    steps:
      - name: Retrieve release distributions
        uses: actions/download-artifact@v4
        with:
          name: release-dists
          path: dist/
      - uses: astral-sh/setup-uv@v6
      - run: uv publish --trusted-publishing always
```
--------------------------------------------------------------------------------
/tests/conftest.py:
--------------------------------------------------------------------------------
```python
import os
from collections.abc import AsyncGenerator

import pytest
from mcp.client.session import ClientSession
from mcp.server.fastmcp import FastMCP
from mcp.shared.memory import create_connected_server_and_client_session

from logfire_mcp.__main__ import app_factory


@pytest.fixture
def anyio_backend():
    return 'asyncio'


@pytest.fixture
def vcr_config():
    return {'filter_headers': [('authorization', None)]}


@pytest.fixture
async def logfire_read_token() -> str:
    # To get a read token, go to https://logfire-us.pydantic.dev/kludex/logfire-mcp/settings/read-tokens/.
    return os.getenv('LOGFIRE_READ_TOKEN', 'fake-token')


@pytest.fixture
def app(logfire_read_token: str) -> FastMCP:
    return app_factory(logfire_read_token)


@pytest.fixture
async def session(app: FastMCP) -> AsyncGenerator[ClientSession]:
    mcp_server = app._mcp_server  # type: ignore
    async with create_connected_server_and_client_session(mcp_server, raise_exceptions=True) as _session:
        yield _session
```
--------------------------------------------------------------------------------
/.github/workflows/main.yml:
--------------------------------------------------------------------------------
```yaml
name: CI

on:
  push:
    branches:
      - main
  pull_request: {}

env:
  COLUMNS: 150
  UV_PYTHON: 3.12
  UV_FROZEN: "1"

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v6
        with:
          enable-cache: true
      - run: uv sync
      - uses: actions/cache@v4
        with:
          path: ~/.cache/pre-commit
          key: pre-commit|${{ env.UV_PYTHON }}|${{ hashFiles('.pre-commit-config.yaml') }}
      - run: uvx pre-commit run --color=always --all-files --verbose
        env:
          SKIP: no-commit-to-branch

  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v6
        with:
          python-version: ${{ matrix.python-version }}
          enable-cache: true
      - run: uv sync --frozen
      - run: uv run pytest

  # https://github.com/marketplace/actions/alls-green#why used for branch protection checks
  check:
    if: always()
    needs: [lint, test]
    runs-on: ubuntu-latest
    steps:
      - uses: re-actors/alls-green@release/v1
        with:
          jobs: ${{ toJSON(needs) }}
```
--------------------------------------------------------------------------------
/tests/test_readme.py:
--------------------------------------------------------------------------------
```python
from pathlib import Path
from typing import TypedDict

import pytest
from jinja2 import Environment, FileSystemLoader
from mcp.client.session import ClientSession

env = Environment(loader=FileSystemLoader(Path(__file__).parent))
template = env.get_template('README.md.jinja')

pytestmark = [pytest.mark.vcr, pytest.mark.anyio]


class Argument(TypedDict):
    name: str
    description: str
    type: str


class Tool(TypedDict):
    name: str
    description: str
    arguments: list[Argument]


async def test_generate_readme(session: ClientSession) -> None:
    tools: list[Tool] = []
    mcp_tools = await session.list_tools()
    for tool in mcp_tools.tools:
        assert tool.description
        description = tool.description.split('\n', 1)[0].strip()
        arguments: list[Argument] = []
        for argument_name, argument_schema in tool.inputSchema['properties'].items():
            arguments.append(
                {'name': argument_name, 'description': argument_schema['description'], 'type': argument_schema['type']}
            )
        tools.append({'name': tool.name, 'description': description, 'arguments': arguments})
    readme = template.render(tools=tools)
    with open('README.md', 'w') as f:
        f.write(readme)
```
--------------------------------------------------------------------------------
/tests/cassettes/test_logfire_link/test_logfire_link.yaml:
--------------------------------------------------------------------------------
```yaml
interactions:
- request:
    body: ''
    headers:
      accept:
      - '*/*'
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      host:
      - logfire-us.pydantic.dev
      user-agent:
      - logfire-mcp/0.0.1
    method: GET
    uri: https://logfire-us.pydantic.dev/api/read-token-info
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA1yNSQrDMAwA/6JzBN4S2/lMsS0puFkcQgqlpX/vJYfS68DMvOFsM2+3SjBCCr0u
        YgkDeYuOs2D2VqPW3gWT+hy5hw7aMaWtvtJZ2yUaoiKKBxRKCp0vgoGiYLAmqaDUECJBB/vR7lzO
        a9ZrTqwMRi2CTgbBmKNHUkNmU4JT0fzPtrQyjDAvD+LnT/DiS5ukHoxr2eHzBQAA//8DAGdhjOza
        AAAA
    headers:
      CF-RAY:
      - 9643f8370dddf5dc-AMS
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Thu, 24 Jul 2025 14:04:56 GMT
      NEL:
      - '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
      Report-To:
      - '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=VUHjqy%2BTrdTOsaOXBzrBG9FuRkXpOwjRA%2FxWrdx0nSl%2BXScKANX0WkbEpjBsbCTgwWOd%2Frk0%2FjuqV3cex4dPjNbjFciTk%2BIEkmBhrwxOEBIu3pPQdTf8ia7Htc8EMWlnpT0SPCfxfM3v"}],"group":"cf-nel","max_age":604800}'
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      access-control-expose-headers:
      - traceresponse
      cf-cache-status:
      - DYNAMIC
      server-timing:
      - cfL4;desc="?proto=TCP&rtt=12945&min_rtt=12595&rtt_var=4973&sent=4&recv=6&lost=0&retrans=0&sent_bytes=2843&recv_bytes=859&delivery_rate=229932&cwnd=252&unsent_bytes=0&cid=f1a8bab48a8921f3&ts=186&x=0"
      traceresponse:
      - 00-01983cc05eb814b18b2b449cd2e76a6d-8432e93ab1a5c2bc-01
      via:
      - 1.1 google
      x-api-version:
      - 8B3/58p72Z+yU8ZrVkBW+6WjdphoLsyQ1sXK9c1305Y=
    status:
      code: 200
      message: OK
version: 1
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[build-system]
requires = ["hatchling", "uv-dynamic-versioning"]
build-backend = "hatchling.build"

[tool.hatch.version]
source = "uv-dynamic-versioning"

[tool.uv-dynamic-versioning]
vcs = "git"
style = "pep440"
bump = true

[project]
name = "logfire-mcp"
dynamic = ["version"]
description = "The Pydantic Logfire MCP server! 🔍"
authors = [
    { name = "Marcelo Trylesinski", email = "[email protected]" },
    { name = "Samuel Colvin", email = "[email protected]" },
]
readme = "README.md"
requires-python = ">=3.11"
license = "MIT"
license-files = ["LICENSE"]
classifiers = [
    "Development Status :: 4 - Beta",
    "Intended Audience :: Developers",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Programming Language :: Python :: 3.13",
]
dependencies = ["logfire>=3.7.1", "mcp[cli]>=1.10.0", "python-dotenv>=1.1.1"]

[project.scripts]
logfire-mcp = "logfire_mcp.__main__:main"

[project.urls]
Homepage = "https://github.com/pydantic/logfire-mcp"
Repository = "https://github.com/pydantic/logfire-mcp"
Issues = "https://github.com/pydantic/logfire-mcp/issues"

[dependency-groups]
dev = [
    "devtools>=0.12.2",
    "inline-snapshot[black]>=0.24.0",
    "jinja2>=3.1.6",
    "pyright>=1.1.403",
    "pytest-recording>=0.13.4",
    "ruff",
]

[tool.ruff]
line-length = 120

[tool.ruff.lint]
extend-select = [
    "Q",
    "RUF100",
    "RUF018", # https://docs.astral.sh/ruff/rules/assignment-in-assert/
    "C90",
    "UP",
    "I",
    "TID251",
]
ignore = ["UP031"] # https://docs.astral.sh/ruff/rules/printf-string-formatting/
flake8-quotes = { inline-quotes = "single", multiline-quotes = "double" }
isort = { combine-as-imports = true }

[tool.ruff.format]
# don't format python in docstrings, pytest-examples takes care of it
docstring-code-format = false
quote-style = "single"

[tool.inline-snapshot]
format-command = "ruff format --stdin-filename {filename}"

[tool.inline-snapshot.shortcuts]
snap-fix = ["create", "fix"]
snap = ["create"]
```
--------------------------------------------------------------------------------
/tests/cassettes/test_schema_reference/test_schema_reference.yaml:
--------------------------------------------------------------------------------
```yaml
interactions:
- request:
    body: ''
    headers:
      accept:
      - '*/*'
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      host:
      - logfire-us.pydantic.dev
      user-agent:
      - logfire-mcp/0.3.2.dev4+ae94fe7
    method: GET
    uri: https://logfire-us.pydantic.dev/v1/schemas
  response:
    body:
      string: !!binary |
        H4sIAI0k1GgC/+2YbW/bIBDHv0rk1/0Eeddu1hapbao206ZVFSL4YtNgYHBEzaJ89+GkbZYHLw7Q
        SdP6ztjm57vjuOPvRYZ0LMBm/ftFJmkNWT8zwJQpbHaWFWCZ4Rq5kv6+v2FZBTXN+ouMIho+dthM
        XWQFRUpwrpvpo/zbyL8qnRANOuujcbDHWp79hiCPVkmygQfhmAGKUBCKe4jBVX43Or+66X0djD73
        mmHv+/A638JOqLAHuQWd7wI/no+6TgYt1LwGiQTkjBslm+tgHwtn6Hq4Y9Dwy8Vl3ru5zT8M7gbD
        604wkAVBXoNFWuu0MYMnBqsx8XhLSwj2eEPyZrIpGspSwNYzAzEVovaOYaWKOIYBq5W00PiGzhKm
        ij2rBtej/FN+ewJU+S0VbBe35DVKu5CL4fAyP++WXFMujwenNX8EzEB0iUQ7QZVkrIp5cCC6Jm6r
        BQpBEO+HxPASuWIILqeRCJ9oyhkGJEHZXgEtUzo1bd1+4jkzMPZA9p6Iat+RQaDYOqipaZqI1VQS
        Hl52tFHMW0L0PuOUQuMxj8CwiyWt+8OCmXGfklz6EMnmItytF1anFDpqUUPxgY5oNC+k2FTsutzt
        LjWAyKggNfhWBwWk5cHacf/QKT5+g4FvxGZObDElgsrSxWyybVxUPdpGxebB6tgTlQjOCDLxrwWb
        0AA0xSoK8MOBCW3Jy+XZq0RpIsvZMYlSlgZKuj7vQa2VoYLjPIFeeZc8f0/ywJNfOkGNjSBoUnGL
        qjS0JnKVEjMgY8emgL7DuwMntOdm2LEMdfkCUZOJBYzputuf0cryt3Xk8BeSO2IZFZAO9xOMWpua
        mImVP0ZXShRJ1PcGnnD9NtDoAGxQPhSCM45eTzlZ2GPen2xqTZ8SR7TmMjHRujoJ0cvrWkmFSnIW
        I6/XDZBsPQgWuitU3CHxGdLpt8qfFfO7VP035dz6rzHE/lJsVUK+SVBDZlQ4SLIT3+VnB04C6fe/
        SzcnOYbmxHL5sPwF6X85hKIZAAA=
    headers:
      CF-RAY:
      - 9843dc15cb69cba0-LAX
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Length:
      - '761'
      Content-Type:
      - application/json
      Date:
      - Wed, 24 Sep 2025 17:04:13 GMT
      Nel:
      - '{"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}'
      Report-To:
      - '{"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=PzOyGdfLWRnle1nd8OxmfFcCYNg7p4jnLwzUghIyvIkSPowyFzWQt1xCSwl77Pi7c5k4tJOTJnp0aXIaT7c2AdQYy4ZfGJxsI5IUFBvuNAqb%2F1gYuFo%3D"}]}'
      Server:
      - cloudflare
      access-control-expose-headers:
      - traceresponse
      cf-cache-status:
      - DYNAMIC
      traceresponse:
      - 00-01997caec9c993425dabd0264fe732ed-2c46a3fa819c4259-01
      vary:
      - Accept-Encoding
      via:
      - 1.1 google
      x-api-version:
      - 54zyD1FXxO9eq0NYG+0f6iMvkItkOgIG55xblSjRFPw=
    status:
      code: 200
      message: OK
version: 1
```
--------------------------------------------------------------------------------
/tests/test_schema_reference.py:
--------------------------------------------------------------------------------
```python
import pytest
from inline_snapshot import snapshot
from mcp.client.session import ClientSession
from mcp.types import TextContent

pytestmark = [pytest.mark.vcr, pytest.mark.anyio]


async def test_schema_reference(session: ClientSession) -> None:
    result = await session.call_tool('schema_reference')
    assert result.content == snapshot(
        [
            TextContent(
                type='text',
                text="""\
CREATE TABLE records (
    attributes TEXT,
    attributes_json_schema TEXT,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL,
    day DATE NOT NULL,
    deployment_environment TEXT,
    duration DOUBLE PRECISION,
    end_timestamp TIMESTAMP WITH TIME ZONE NOT NULL,
    exception_message TEXT,
    exception_stacktrace TEXT,
    exception_type TEXT,
    http_method TEXT,
    http_response_status_code INTEGER,
    http_route TEXT,
    is_exception BOOLEAN,
    kind TEXT NOT NULL,
    level INTEGER NOT NULL,
    log_body TEXT,
    message TEXT NOT NULL,
    otel_events TEXT,
    otel_links TEXT,
    otel_resource_attributes TEXT,
    otel_scope_attributes TEXT,
    otel_scope_name TEXT,
    otel_scope_version TEXT,
    otel_status_code TEXT,
    otel_status_message TEXT,
    parent_span_id TEXT,
    process_pid INTEGER,
    project_id TEXT NOT NULL,
    service_instance_id TEXT,
    service_name TEXT NOT NULL,
    service_namespace TEXT,
    service_version TEXT,
    span_id TEXT NOT NULL,
    span_name TEXT NOT NULL,
    start_timestamp TIMESTAMP WITH TIME ZONE NOT NULL,
    tags TEXT[],
    telemetry_sdk_language TEXT,
    telemetry_sdk_name TEXT,
    telemetry_sdk_version TEXT,
    trace_id TEXT NOT NULL,
    url_full TEXT,
    url_path TEXT,
    url_query TEXT
);

CREATE TABLE metrics (
    aggregation_temporality TEXT,
    attributes TEXT,
    attributes_json_schema TEXT,
    created_at TIMESTAMP WITH TIME ZONE NOT NULL,
    day DATE NOT NULL,
    deployment_environment TEXT,
    exemplars TEXT,
    exp_histogram_negative_bucket_counts INTEGER[],
    exp_histogram_negative_bucket_counts_offset INTEGER,
    exp_histogram_positive_bucket_counts INTEGER[],
    exp_histogram_positive_bucket_counts_offset INTEGER,
    exp_histogram_scale INTEGER,
    exp_histogram_zero_count INTEGER,
    exp_histogram_zero_threshold DOUBLE PRECISION,
    histogram_bucket_counts INTEGER[],
    histogram_count INTEGER,
    histogram_explicit_bounds DOUBLE PRECISION[],
    histogram_max DOUBLE PRECISION,
    histogram_min DOUBLE PRECISION,
    histogram_sum DOUBLE PRECISION,
    is_monotonic BOOLEAN,
    metric_description TEXT,
    metric_name TEXT NOT NULL,
    metric_type TEXT NOT NULL,
    otel_resource_attributes TEXT,
    otel_scope_attributes TEXT,
    otel_scope_name TEXT,
    otel_scope_version TEXT,
    process_pid INTEGER,
    project_id TEXT NOT NULL,
    recorded_timestamp TIMESTAMP WITH TIME ZONE,
    scalar_value DOUBLE PRECISION,
    service_instance_id TEXT,
    service_name TEXT NOT NULL,
    service_namespace TEXT,
    service_version TEXT,
    start_timestamp TIMESTAMP WITH TIME ZONE,
    telemetry_sdk_language TEXT,
    telemetry_sdk_name TEXT,
    telemetry_sdk_version TEXT,
    unit TEXT NOT NULL
);\
""",
            )
        ]
    )
```
--------------------------------------------------------------------------------
/logfire_mcp/__main__.py:
--------------------------------------------------------------------------------
```python
import argparse
import asyncio
import os
import sys

from dotenv import dotenv_values, find_dotenv
from mcp import ClientSession, StdioServerParameters, stdio_client
from mcp.types import TextContent

from .main import __version__, app_factory


def main():
    name_version = f'Logfire MCP v{__version__}'
    parser = argparse.ArgumentParser(
        prog='logfire-mcp',
        description=f'{name_version}\n\nSee github.com/pydantic/logfire-mcp',
        formatter_class=argparse.RawTextHelpFormatter,
    )
    parser.add_argument(
        '--read-token',
        type=str,
        help='Pydantic Logfire read token. Can also be set via LOGFIRE_READ_TOKEN environment variable.',
    )
    parser.add_argument(
        '--base-url',
        type=str,
        required=False,
        help='Pydantic Logfire base URL. Can also be set via LOGFIRE_BASE_URL environment variable.',
    )
    parser.add_argument('--test', action='store_true', help='Test the MCP server and exit')
    parser.add_argument('--version', action='store_true', help='Show version and exit')
    args = parser.parse_args()

    if args.version:
        print(name_version)
        return

    # Get token from args or environment
    logfire_read_token, source = get_read_token(args)
    if not logfire_read_token:
        parser.error(
            'Pydantic Logfire read token must be provided either via --read-token argument '
            'or LOGFIRE_READ_TOKEN environment variable'
        )

    logfire_base_url = args.base_url or os.getenv('LOGFIRE_BASE_URL')

    if args.test:
        asyncio.run(test(logfire_read_token, logfire_base_url, source))
    else:
        app = app_factory(logfire_read_token, logfire_base_url)
        app.run(transport='stdio')


async def test(logfire_read_token: str, logfire_base_url: str | None, source: str):
    print('testing Logfire MCP server:\n')
    print(f'logfire_read_token: `{logfire_read_token[:12]}...{logfire_read_token[-5:]}` from {source}\n')
    args = ['-m', 'logfire_mcp', '--read-token', logfire_read_token]
    if logfire_base_url:
        print(f'logfire_base_url: `{logfire_base_url}`')
        args += ['--base-url', logfire_base_url]
    server_params = StdioServerParameters(command=sys.executable, args=args)
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print('tools:')
            for tool in tools.tools:
                print(f'  - {tool.name}')
            list_resources = await session.list_resources()
            print('resources:')
            for resource in list_resources.resources:
                print(f'  - {resource.name}')

            # `schema_reference` is the only registered tool that takes no arguments
            for tool in ('schema_reference',):
                print(f'\ncalling `{tool}`:')
                output = await session.call_tool(tool)
                # debug(output)
                content = output.content[0]
                assert isinstance(content, TextContent), f'Expected TextContent, got {type(content)}'
                if len(content.text) < 200:
                    print(f'> {content.text.strip()}')
                else:
                    first_line = content.text.strip().split('\n', 1)[0]
                    print(f'> {first_line}... ({len(content.text) - len(first_line)} more characters)\n')


def get_read_token(args: argparse.Namespace) -> tuple[str | None, str]:
    if args.read_token:
        return args.read_token, 'CLI argument'
    elif token := os.getenv('LOGFIRE_READ_TOKEN'):
        return token, 'environment variable'
    else:
        return dotenv_values(dotenv_path=find_dotenv(usecwd=True)).get('LOGFIRE_READ_TOKEN'), 'dotenv file'


if __name__ == '__main__':
    main()
```
--------------------------------------------------------------------------------
/logfire_mcp/main.py:
--------------------------------------------------------------------------------
```python
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass
from datetime import UTC, datetime, timedelta
from importlib.metadata import version
from typing import Annotated, Any, TypedDict, cast

from logfire.experimental.query_client import AsyncLogfireQueryClient
from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession
from pydantic import Field, WithJsonSchema


@dataclass
class MCPState:
    logfire_client: AsyncLogfireQueryClient


HOUR = 60  # minutes
DAY = 24 * HOUR

__version__ = version('logfire-mcp')

Age = Annotated[
    int,
    Field(
        ge=0,
        le=7 * DAY,
        description='Number of minutes to look back, e.g. 30 for last 30 minutes. Maximum allowed value is 7 days.',
    ),
    WithJsonSchema({'type': 'integer'}),
]


async def find_exceptions_in_file(
    ctx: Context[ServerSession, MCPState],
    filepath: Annotated[str, Field(description='The path to the file to find exceptions in.')],
    age: Age,
) -> list[Any]:
    """Get the details about the 10 most recent exceptions in the file."""
    logfire_client = ctx.request_context.lifespan_context.logfire_client
    min_timestamp = datetime.now(UTC) - timedelta(minutes=age)
    result = await logfire_client.query_json_rows(
        f"""\
SELECT
    created_at,
    message,
    exception_type,
    exception_message,
    exception_stacktrace
FROM records
WHERE is_exception = true
    AND exception_stacktrace like '%{filepath}%'
ORDER BY created_at DESC
LIMIT 10
""",
        min_timestamp=min_timestamp,
    )
    return result['rows']


async def arbitrary_query(
    ctx: Context[ServerSession, MCPState],
    query: Annotated[str, Field(description='The query to run, as a SQL string.')],
    age: Age,
) -> list[Any]:
    """Run an arbitrary query on the Pydantic Logfire database.

    The database schema is available via the `schema_reference` tool.
    """
    logfire_client = ctx.request_context.lifespan_context.logfire_client
    min_timestamp = datetime.now(UTC) - timedelta(minutes=age)
    result = await logfire_client.query_json_rows(query, min_timestamp=min_timestamp)
    return result['rows']


async def schema_reference(ctx: Context[ServerSession, MCPState]) -> str:
    """The database schema for the Logfire DataFusion database.

    This includes all tables, columns, and their types as well as descriptions.

    For example:

    ```sql
    -- The records table contains spans and logs.
    CREATE TABLE records (
        message TEXT, -- The message of the record
        span_name TEXT, -- The name of the span, message is usually templated from this
        trace_id TEXT, -- The trace ID, identifies a group of spans in a trace
        exception_type TEXT, -- The type of the exception
        exception_message TEXT, -- The message of the exception
        -- other columns...
    );
    ```

    The SQL syntax is similar to Postgres, although the query engine is actually Apache DataFusion.

    To access nested JSON fields e.g. in the `attributes` column use the `->` and `->>` operators.
    You may need to cast the result of these operators e.g. `(attributes->'cost')::float + 10`.

    You should apply as much filtering as reasonable to reduce the amount of data queried.
    Filters on `start_timestamp`, `service_name`, `span_name`, `metric_name`, `trace_id` are efficient.
    """
    logfire_client = ctx.request_context.lifespan_context.logfire_client
    response = await logfire_client.client.get('/v1/schemas')
    schema_data = response.json()

    def schema_to_sql(schema_json: dict[str, Any]) -> str:
        sql_commands: list[str] = []
        for table in schema_json.get('tables', []):
            table_name = table['name']
            columns: list[str] = []
            for col_name, col_info in table['schema'].items():
                data_type = col_info['data_type']
                nullable = col_info.get('nullable', True)
                description = col_info.get('description', '').strip()
                column_def = f'{col_name} {data_type}'
                if not nullable:
                    column_def += ' NOT NULL'
                if description:
                    column_def += f' -- {description}'
                columns.append(column_def)
            create_table = f'CREATE TABLE {table_name} (\n    ' + ',\n    '.join(columns) + '\n);'
            sql_commands.append(create_table)
        return '\n\n'.join(sql_commands)

    return schema_to_sql(schema_data)


async def logfire_link(
    ctx: Context[ServerSession, MCPState],
    trace_id: Annotated[str, Field(description='The trace ID to link to.')],
) -> str:
    """Creates a link to help the user to view the trace in the Logfire UI."""
    logfire_client = ctx.request_context.lifespan_context.logfire_client
    response = await logfire_client.client.get('/api/read-token-info')
    read_token_info = cast(ReadTokenInfo, response.json())
    organization_name = read_token_info['organization_name']
    project_name = read_token_info['project_name']
    url = logfire_client.client.base_url
    url = url.join(f'{organization_name}/{project_name}')
    url = url.copy_add_param('q', f"trace_id='{trace_id}'")
    return str(url)


def app_factory(logfire_read_token: str, logfire_base_url: str | None = None) -> FastMCP:
    @asynccontextmanager
    async def lifespan(server: FastMCP) -> AsyncIterator[MCPState]:
        # set a custom User-Agent so requests from the MCP server are identifiable
        headers = {'User-Agent': f'logfire-mcp/{__version__}'}
        async with AsyncLogfireQueryClient(logfire_read_token, headers=headers, base_url=logfire_base_url) as client:
            yield MCPState(logfire_client=client)

    mcp = FastMCP('Logfire', lifespan=lifespan)
    mcp.tool()(find_exceptions_in_file)
    mcp.tool()(arbitrary_query)
    mcp.tool()(logfire_link)
    mcp.tool()(schema_reference)
    return mcp


class ReadTokenInfo(TypedDict):
    token_id: str
    organization_id: str
    project_id: str
    organization_name: str
    project_name: str
```