#
tokens: 7032/50000 19/19 files
lines: off (toggle) GitHub
raw markdown copy
# Directory Structure

```
├── .DS_Store
├── .github
│   ├── dependabot.yaml
│   ├── ISSUE_TEMPLATE
│   │   └── bug_report.md
│   └── workflows
│       ├── cd.yaml
│       ├── changelog.yml
│       ├── opencommit.yaml
│       └── scan.yaml
├── .gitignore
├── .vscode
│   └── settings.json
├── assets
│   ├── config.png
│   ├── config2.png
│   ├── logs.gif
│   ├── logs.png
│   └── monitor.gif
├── datadog
│   ├── .env.template
│   ├── main.py
│   ├── Pipfile
│   ├── requirements.txt
│   └── taskfile.yaml
├── Dockerfile
├── README.md
├── ruff.toml
├── smithery.yaml
├── sonar-project.properties
├── taskfile.yaml
└── todo.md
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
.envrc
__pycache__
*.pyc
.DS_Store

```

--------------------------------------------------------------------------------
/datadog/.env.template:
--------------------------------------------------------------------------------

```
DD_API_KEY=your_api_key
DD_APP_KEY=your_app_key
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Datadog Model Context Protocol (MCP) 🔍

[![smithery badge](https://smithery.ai/badge/@didlawowo/mcp-collection)](https://smithery.ai/server/@didlawowo/mcp-collection)

A Python-based tool to interact with Datadog API and fetch monitoring data from your infrastructure. This MCP provides easy access to monitor states and Kubernetes logs through a simple interface.

## Datadog Features 🌟

- **Monitor State Tracking**: Fetch and analyze specific monitor states
- **Kubernetes Log Analysis**: Extract and format error logs from Kubernetes clusters

## Prerequisites 📋

- Python 3.11+
- Datadog API and Application keys (with correct permissions)
- Access to Datadog site

## Installation 🔧

### Installing via Smithery

To install Datadog for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@didlawowo/mcp-collection):

```bash
npx -y @smithery/cli install @didlawowo/mcp-collection --client claude
```

Required packages:

```text
datadog-api-client
fastmcp
loguru
icecream
python-dotenv
uv
```

## Environment Setup 🔑

Create a `.env` file with your Datadog credentials:

```env
DD_API_KEY=your_api_key
DD_APP_KEY=your_app_key
```

## Setup Claude Desktop Setup for MCP 🖥️

1. Install Claude Desktop

```bash
# Assuming you're on macOS
brew install claude-desktop

# Or download from official website
https://claude.ai/desktop
```

2. Set up Datadog MCP config:

```bash
# on mac is 
~/Library/Application\ Support/Claude/claude_desktop_config.json


# Add this to your claude config json
```json
    "Datadog-MCP-Server": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "datadog-api-client",
        "--with",
        "fastmcp",
        "--with",
        "icecream",
        "--with",
        "loguru",
        "--with",
        "python-dotenv",
        "fastmcp",
        "run",
        "/your-path/mcp-collection/datadog/main.py"
      ],
      "env": {
        "DD_API_KEY": "xxxx",
        "DD_APP_KEY": "xxx"
      }
    },
```

## Usage 💻

![get logs](assets/logs.gif)

![get monitor](assets/monitor.gif)

## Architecture 🏗

- **FastMCP Base**: Utilizes FastMCP framework for tool management
- **Modular Design**: Separate functions for monitors and logs
- **Type Safety**: Full typing support with Python type hints
- **API Abstraction**: Wrapped Datadog API calls with error handling

I'll add a section about MCP and Claude Desktop setup:

# Model Context Protocol (MCP) Introduction 🤖

## What is MCP?

Model Context Protocol (MCP) is a framework allowing AI models to interact with external tools and APIs in a standardized way. It enables models like Claude to:

- Access external data
- Execute commands
- Interact with APIs
- Maintain context across conversations

## some examples of MCP servers

<https://github.com/punkpeye/awesome-mcp-servers?tab=readme-ov-file>

## Tutorial for setup MCP

<https://medium.com/@pedro.aquino.se/how-to-use-mcp-tools-on-claude-desktop-app-and-automate-your-daily-tasks-1c38e22bc4b0>

## How it works - Available Functions 🛠️

the LLM use provided function to get the data and use it

### 1. Get Monitor States

```python
get_monitor_states(
    name: str,           # Monitor name to search
    timeframe: int = 1   # Hours to look back
)
```

Example:

```python

response = get_monitor_states(name="traefik")

# Sample Output
{
    "id": "12345678",
    "name": "traefik",
    "status": "OK",
    "query": "avg(last_5m):avg:traefik.response_time{*} > 1000",
    "message": "Response time is too high",
    "type": "metric alert",
    "created": "2024-01-14T10:00:00Z",
    "modified": "2024-01-14T15:30:00Z"
}
```

### 2. Get Kubernetes Logs

```python
get_k8s_logs(
    cluster: str,            # Kubernetes cluster name
    timeframe: int = 5,      # Hours to look back
    namespace: str = None    # Optional namespace filter
)
```

Example:

```python
logs = get_k8s_logs(
    cluster="prod-cluster",
    timeframe=3,
    namespace="default"
)

# Sample Output
{
    "timestamp": "2024-01-14T22:00:00Z",
    "host": "worker-1",
    "service": "nginx-ingress",
    "pod_name": "nginx-ingress-controller-abc123",
    "namespace": "default",
    "container_name": "controller",
    "message": "Connection refused",
    "status": "error"
}
```

```bash
# Install as MCP extension
cd datadog
task install-mcp
```

## 4. Verify Installation

### In Claude chat desktop

 check datadog connection in claude

![setup claude](assets/config.png)

## 5. Use Datadog MCP Tools

## Security Considerations 🔒

- Store API keys in `.env`
- MCP runs in isolated environment
- Each tool has defined permissions
- Rate limiting is implemented

## Troubleshooting 🔧

### Using MCP Inspector

```bash
# Launch MCP Inspector for debugging
task run-mcp-inspector
```

The MCP Inspector provides:

- Real-time view of MCP server status
- Function call logs
- Error tracing
- API response monitoring

### Common issues and solutions

1. **API Authentication Errors**

   ```bash
   Error: (403) Forbidden
   ```

   ➡️ Check your DD_API_KEY and DD_APP_KEY in .env

2. **MCP Connection Issues**

   ```bash
   Error: Failed to connect to MCP server
   ```

   ➡️ Verify your claude_desktop_config.json path and content

3. **Monitor Not Found**

   ```bash
   Error: No monitor found with name 'xxx'
   ```

   ➡️ Check monitor name spelling and case sensitivity

4. **logs can be found here**

![alt text](assets/logs.png)

## Contributing 🤝

Feel free to:

1. Open issues for bugs
2. Submit PRs for improvements
3. Add new features

## Notes 📝

- API calls are made to Datadog EU site
- Default timeframe is 1 hour for monitor states
- Page size limits are set to handle most use cases

```

--------------------------------------------------------------------------------
/ruff.toml:
--------------------------------------------------------------------------------

```toml
[lint]
ignore = ["F722", "F821"]
extend-select = ["I"]

```

--------------------------------------------------------------------------------
/todo.md:
--------------------------------------------------------------------------------

```markdown
- shodan
- virustotal
- youtube
- applenot
- home assistant
- 
```

--------------------------------------------------------------------------------
/.github/dependabot.yaml:
--------------------------------------------------------------------------------

```yaml
version: 2
updates:
- package-ecosystem: "github-actions"
  directory: "/"
  schedule:
    interval: "weekly"

```

--------------------------------------------------------------------------------
/datadog/requirements.txt:
--------------------------------------------------------------------------------

```

datadog-api-client==2.31.0

mcp==1.2.0

python-dotenv==1.0.1

loguru  ==0.7.3

icecream    ==2.1.3

fastmcp==0.4.1
```

--------------------------------------------------------------------------------
/taskfile.yaml:
--------------------------------------------------------------------------------

```yaml
version: '3'

tasks:

  shell:
    desc: start venv shell
    cmds:
    - pipenv shell

  build:
    desc: Build Docker image
    cmds:
    - docker compose build --push

  run:
    desc: Run the Docker container
    cmds:
    - docker compose up -d

default:
  desc: List available tasks
  cmds:
  - task --list

```

--------------------------------------------------------------------------------
/sonar-project.properties:
--------------------------------------------------------------------------------

```
sonar.projectKey=didlawowo_mcp-collection
sonar.organization=dcc

# This is the name and version displayed in the SonarCloud UI.
sonar.projectName=mcp-collection
sonar.projectVersion=1.0


# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
sonar.sources=datadog

# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
```

--------------------------------------------------------------------------------
/.github/workflows/scan.yaml:
--------------------------------------------------------------------------------

```yaml
name: Build

on:
  push:
    branches:
      - main


jobs:
  build:
    name: Build and analyze
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      # - uses: sonarsource/sonarqube-scan-action@v3
      #   env:
      #     SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      #     SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}

```

--------------------------------------------------------------------------------
/datadog/taskfile.yaml:
--------------------------------------------------------------------------------

```yaml
version: '3'

tasks:
  setup:
    desc: Install Python dependencies using pipenv
    cmds:
    - pipenv install --python 3.12 && pipenv shell

  clean:
    desc: Clean up pipenv environment
    cmds:
    - pipenv --rm

  shell:
    desc: Start pipenv shell
    cmds:
    - pipenv shell

  install-mcp:
    desc: Install mcp
    cmds:
    - uv run fastmcp install {{ .CLI_ARGS }}

  run-mcp-inspector:
    desc: Run mcp inspector
    cmds:
    - uv run fastmcp dev {{.CLI_ARGS}}

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - ddApiKey
      - ddAppKey
    properties:
      ddApiKey:
        type: string
        description: Datadog API key for authentication.
      ddAppKey:
        type: string
        description: Datadog application key for authentication.
  commandFunction:
    # A function that produces the CLI command to start the MCP on stdio.
    |-
    (config) => ({command: 'python', args: ['main.py'], env: {DD_API_KEY: config.ddApiKey, DD_APP_KEY: config.ddAppKey}})
```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------

```markdown
# Bug Report

## Description

A clear and concise description of the bug.

## Steps to Reproduce

1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error

## Expected Behavior

A clear description of what you expected to happen.

## Actual Behavior

A clear description of what actually happened.

## Screenshots

If applicable, add screenshots to help explain the problem.

## Environment

- OS: [e.g. Windows 10]
- Browser: [e.g. Chrome 91.0]
- Version: [e.g. 22.1.1]

## Additional Context

Add any other relevant context about the problem here.

## Possible Solution

If you have suggestions on how to fix the bug, add them here.

## Related Issues

Link any related issues here.

```

--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------

```json
{
    "python.defaultInterpreterPath": "./.venv/bin/python",
    "python.linting.enabled": true,
    "python.linting.ruffEnabled": true,
    "python.linting.flake8Enabled": false,
    "python.linting.pycodestyleEnabled": false,
    "python.linting.pylintEnabled": false,
    "python.formatting.provider": "none",
    "python.sortImports.args": ["--profile", "black"],
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
        "source.organizeImports": true
    },
    "files.exclude": {
        "**/__pycache__": true,
        "**/*.pyc": true,
        "**/*.pyo": true,
        "**/*.pyd": true,
        ".pytest_cache": true,
        ".coverage": true,
        ".mypy_cache": true,
        ".ruff_cache": true
    },
    "python.testing.pytestEnabled": true,
    "python.testing.unittestEnabled": false,
    "python.testing.pytestArgs": [
        "tests"
    ]
}
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Start with a base Python image
FROM python:3.12-alpine

# Set the working directory
WORKDIR /app

# Copy the requirements file
COPY datadog/requirements.txt /app/requirements.txt

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY datadog/main.py /app/main.py

# Copy the .env file for environment variables
# Ensure that your .env file is in the same directory as your Dockerfile
# COPY datadog/.env /app/.env

# Set environment variables for Datadog credentials
# This should ideally be managed via a secrets management tool
# ENV DD_API_KEY=your_api_key
# ENV DD_APP_KEY=your_app_key

# Create a non-root
RUN adduser -D mcpuser && chown -R mcpuser:mcpuser /app

# Set the user to run the application
USER mcpuser

# Run the MCP server
CMD ["python", "main.py"]
```

--------------------------------------------------------------------------------
/.github/workflows/opencommit.yaml:
--------------------------------------------------------------------------------

```yaml
name: 'OpenCommit Action'

on:
  push:
    # this list of branches is often enough,
    # but you may still ignore other public branches
    branches-ignore: [main master dev development release]

jobs:
  opencommit:
    timeout-minutes: 10
    name: OpenCommit
    runs-on: ubuntu-latest
    permissions: write-all
    steps:
      - name: Setup Node.js Environment
        uses: actions/setup-node@v4
        with:
          node-version: '16'
      # - uses: actions/checkout@v3
      #   with:
      #     fetch-depth: 0
      # - uses: di-sukharev/[email protected]
      #   with:
      #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      #   env:
      #     # set openAI api key in repo actions secrets,
      #     # for openAI keys go to: https://platform.openai.com/account/api-keys
      #     # for repo secret go to: <your_repo_url>/settings/secrets/actions
      #     OCO_API_KEY: ${{ secrets.OCO_API_KEY }}

      #     # customization
      #     OCO_TOKENS_MAX_INPUT: 4096
      #     OCO_TOKENS_MAX_OUTPUT: 500
      #     OCO_OPENAI_BASE_PATH: ''
      #     OCO_DESCRIPTION: false
      #     OCO_EMOJI: false
      #     OCO_MODEL: gpt-4o
      #     OCO_LANGUAGE: en
      #     OCO_PROMPT_MODULE: conventional-commit
```

--------------------------------------------------------------------------------
/.github/workflows/changelog.yml:
--------------------------------------------------------------------------------

```yaml
name: Generate Changelog

on:
  push:
    tags:
      - 'v*'

permissions:
  contents: write
  pull-requests: write

jobs:
  changelog:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          
      - name: Generate changelog
        run: |
          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.1.0")
          RANGE="$(git describe --tags --abbrev=0 HEAD^ 2>/dev/null || git rev-list --max-parents=0 HEAD)..$LATEST_TAG"
          
          echo "# Changelog pour $LATEST_TAG" > CHANGELOG.md
          echo "" >> CHANGELOG.md
          echo "## 🚀 Nouvelles fonctionnalités" >> CHANGELOG.md
          git log $RANGE --pretty=format:"- %s" --grep="^feat:" >> CHANGELOG.md
          echo "" >> CHANGELOG.md
          echo "## 🐛 Corrections de bugs" >> CHANGELOG.md
          git log $RANGE --pretty=format:"- %s" --grep="^fix:" >> CHANGELOG.md
          echo "" >> CHANGELOG.md
          echo "## 📝 Autres changements" >> CHANGELOG.md
          git log $RANGE --pretty=format:"- %s" --grep="^(chore|docs|style|refactor|perf|test):" >> CHANGELOG.md

      - name: Create Release
        uses: softprops/action-gh-release@v1
        with:
          body_path: CHANGELOG.md
          token: ${{ secrets.GITHUB_TOKEN }}
```

--------------------------------------------------------------------------------
/.github/workflows/cd.yaml:
--------------------------------------------------------------------------------

```yaml
name: Build and Update Helm Values

on:
  push:
    branches:
      - main  # adjust this to your branch

jobs:
  build-and-update:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          platforms: linux/amd64,linux/arm64
      
      
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      
      - name: Get repository name and SHA
        id: vars
        run: |
          REPO_NAME=$(echo "${{ github.repository }}" | cut -d'/' -f2 | tr '[:upper:]' '[:lower:]')
          echo "REPO_NAME=${REPO_NAME}" >> $GITHUB_OUTPUT
          echo "SHORT_SHA=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
      
      # - name: Build and push Docker image
      #   uses: docker/build-push-action@v6
      #   with:
      #     sbom: true
      #     provenance: mode=max
      #     platforms: linux/amd64,linux/arm64
      #     push: true
      #     tags: ${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}:${{ steps.vars.outputs.SHORT_SHA }}
      #     context: .
      #     cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}:buildcache
      #     cache-to: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}:buildcache,mode=max
      
      # - name: Update Helm values
      #   run: |
      #     yq e -i '.image.repository = "${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}"' helm/values.yaml
      #     yq e -i '.image.tag = "${{ steps.vars.outputs.SHORT_SHA }}"' helm/values.yaml
      
      # - name: Configure Git
      #   run: |
      #     git config user.name "GitHub Actions"
      #     git config user.email "[email protected]"
      
      # - name: Commit and push changes
      #   run: |
      #     git add helm/values.yaml
      #     git commit -m "chore: update image tag to ${{ steps.vars.outputs.SHORT_SHA }}"
      #     git push
```

--------------------------------------------------------------------------------
/datadog/main.py:
--------------------------------------------------------------------------------

```python
import mcp.server.stdio
import mcp.types as types
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.logs_api import LogsApi
import os
import json
from loguru import logger
from fastmcp import FastMCP
from fastmcp.prompts.base import UserMessage, AssistantMessage
from typing import Generator
from datadog_api_client.v2.models import LogsListResponse
from icecream import ic
from dotenv import load_dotenv
from datadog_api_client.v1.api.monitors_api import MonitorsApi


load_dotenv()

mcp = FastMCP(
    "Datadog-MCP-Server",
    dependencies=[
        "loguru",
        "icecream",
        "python-dotenv",
        "datadog-api-client",
    ],
)


def fetch_logs_paginated(
    api_instance: LogsApi, query_params: dict, max_results: int = 1000
) -> Generator[LogsListResponse, None, None]:
    """Fetch logs with pagination support."""
    current_page = 0
    total_logs = 0

    while total_logs < max_results:
        query_params["page"] = {
            "limit": min(100, max_results - total_logs),
            "cursor": current_page,
        }
        response = api_instance.list_logs(body=query_params)

        if not response.data:
            break

        yield response
        total_logs += len(response.data)
        current_page += 1


def extract_tag_value(tags: list, prefix: str) -> str:
    """Helper pour extraire une valeur de tag avec un préfixe donné"""
    for tag in tags:
        if tag.startswith(prefix):
            return tag.split(":", 1)[1]
    return None


@mcp.tool()
def get_monitor_states(
    name: str,
    timeframe: int = 1,
) -> list[types.TextContent]:
    """
    Get monitor states for a specific monitor with retry mechanism

    Args:
        name: monitor name
        timeframe: Hours to look back (default: 1)
    """

    def serialize_monitor(monitor) -> dict:
        """Helper to serialize monitor data"""
        return {
            "id": str(monitor.id),
            "name": monitor.name,
            "query": monitor.query,
            "status": str(monitor.overall_state),
            "last_triggered": monitor.last_triggered_ts
            if hasattr(monitor, "last_triggered_ts")
            else None,
            "message": monitor.message if hasattr(monitor, "message") else None,
            "type": monitor.type if hasattr(monitor, "type") else None,
            "created": str(monitor.created) if hasattr(monitor, "created") else None,
            "modified": str(monitor.modified) if hasattr(monitor, "modified") else None,
        }

    def fetch_monitors():
        with ApiClient(configuration) as api_client:
            monitors_api = MonitorsApi(api_client)

            # Get all monitors and filter by name
            response = monitors_api.list_monitors(
                page_size=100  # 👈 Increased page size
            )

            # Filter monitors by name (case insensitive)
            monitor_details = []
            for monitor in response:
                if name.lower() in monitor.name.lower():
                    monitor_details.append(monitor)

            return monitor_details

    try:
        configuration = Configuration()
        api_key = os.getenv("DD_API_KEY")
        app_key = os.getenv("DD_APP_KEY")

        if not api_key or not app_key:
            return [
                types.TextContent(
                    type="text", text="Error: Missing Datadog API credentials"
                )
            ]

        configuration.api_key["DD-API-KEY"] = api_key
        configuration.api_key["DD-APPLICATION-KEY"] = app_key
        configuration.server_variables["site"] = "datadoghq.eu"

        monitors = fetch_monitors()

        if not monitors:
            return [
                types.TextContent(
                    type="text", text=f"No monitors found with name containing '{name}'"
                )
            ]

        # Serialize monitors
        monitor_states = [serialize_monitor(monitor) for monitor in monitors]

        return [
            types.TextContent(
                type="text",
                text=json.dumps(
                    monitor_states, indent=2, default=str
                ),  # 👈 Added default serializer
            )
        ]

    except ValueError as ve:
        return [types.TextContent(type="text", text=str(ve))]
    except Exception as e:
        logger.error(f"Error fetching monitor states: {str(e)}")
        return [
            types.TextContent(
                type="text", text=f"Error fetching monitor states: {str(e)}"
            )
        ]


@mcp.tool()
def get_k8s_logs(
    cluster: str, timeframe: int = 5, namespace: str = None
) -> list[types.TextContent]:
    try:
        configuration = Configuration()
        api_key = os.getenv("DD_API_KEY")
        app_key = os.getenv("DD_APP_KEY")

        configuration.server_variables["site"] = "datadoghq.eu"

        configuration.api_key["DD-API-KEY"] = api_key
        configuration.api_key["DD-APPLICATION-KEY"] = app_key
        with ApiClient(configuration) as api_client:
            api_instance = LogsApi(api_client)

            # Construction d'une requête plus précise pour les erreurs
            query_components = [
                # "source:kubernetes",
                f"kube_cluster_name:{cluster}",
                "status:error OR level:error OR severity:error",  # 👈 Filtre des erreurs
            ]

            if namespace:
                query_components.append(f"kube_namespace:{namespace}")

            query = " AND ".join(query_components)

            response = api_instance.list_logs(
                body={
                    "filter": {
                        "query": query,
                        "from": f"now-{timeframe}h",  # 👈 Timeframe dynamique
                        "to": "now",
                    },
                    "sort": "-timestamp",  # Plus récent d'abord
                    "page": {
                        "limit": 100,  # Augmenté pour voir plus d'erreurs
                    },
                }
            )

            # Formatage plus pertinent de la réponse
            ic(f"Query: {query}")  # 👈 Log de la requête
            # ic(f"Response: {response}")  # 👈 Log de la réponse brute

            logs_data = response.to_dict()
            # ic(f"Logs data: {logs_data}")  # 👈 Log des données
            formatted_logs = []

            for log in logs_data.get("data", []):
                attributes = log.get("attributes", {})
                ic(attributes)
                formatted_logs.append(
                    {
                        "timestamp": attributes.get("timestamp"),
                        "host": attributes.get("host"),
                        "service": attributes.get("service"),
                        "pod_name": extract_tag_value(
                            attributes.get("tags", []), "pod_name:"
                        ),
                        "namespace": extract_tag_value(
                            attributes.get("tags", []), "kube_namespace:"
                        ),
                        "container_name": extract_tag_value(
                            attributes.get("tags", []), "kube_container_name:"
                        ),
                        "message": attributes.get("message"),
                        "status": attributes.get("status"),
                    }
                )

            return [
                types.TextContent(
                    type="text", text=json.dumps(formatted_logs, indent=2)
                )
            ]

    except Exception as e:
        logger.error(f"Error fetching logs: {str(e)}")
        return [types.TextContent(type="text", text=f"Error: {str(e)}")]


@mcp.prompt()
def analyze_monitors_data(name: str, timeframe: int = 3) -> list:
    """
    Analyze monitor data for a specific monitor.
    Parameters:
        name (str): The name of the monitor to analyze
        timeframe (int): Hours to look back for data
    Returns:
        list: Structured monitor analysis
    """
    try:
        monitor_data = get_monitor_states(name=name, timeframe=timeframe)
        if not monitor_data:
            return [
                AssistantMessage(
                    f"No monitor data found for '{name}' in the last {timeframe} hours."
                )
            ]
        # Format the response more naturally
        messages = [
            UserMessage(f"Monitor Analysis for '{name}' (last {timeframe} hours):")
        ]
        for data in monitor_data:
            messages.append(
                AssistantMessage(
                    f"Monitor State: {data['state']}, Timestamp: {data['timestamp']}"
                )
            )
        return messages
    except Exception as e:
        logger.error(f"Error analyzing monitor data: {str(e)}")
        return [AssistantMessage(f"Error: {str(e)}")]


@mcp.prompt()
def analyze_error_logs(
    cluster: str = "rke2", timeframe: int = 3, namespace: str = None
) -> list:
    """
    Analyze error logs from a Kubernetes cluster.

    Parameters:
        cluster (str): The cluster name to analyze
        timeframe (int): Hours to look back for errors
        namespace (str): Optional namespace filter

    Returns:
        list: Structured error analysis
    """
    logs = get_k8s_logs(cluster=cluster, namespace=namespace, timeframe=timeframe)

    if not logs:
        return [
            AssistantMessage(
                f"No error logs found for cluster '{cluster}' in the last {timeframe} hours."
            )
        ]

    # Format the response more naturally
    messages = [
        UserMessage(f"Error Analysis for cluster '{cluster}' (last {timeframe} hours):")
    ]

    try:
        log_data = json.loads(logs[0].text)
        if not log_data:
            messages.append(
                AssistantMessage("No errors found in the specified timeframe.")
            )
        else:
            # Group errors by service for better analysis
            errors_by_service = {}
            for log in log_data:
                service = log.get("service", "unknown")
                if service not in errors_by_service:
                    errors_by_service[service] = []
                errors_by_service[service].append(log)

            for service, errors in errors_by_service.items():
                messages.append(
                    AssistantMessage(
                        f"Service {service}: Found {len(errors)} errors\n"
                        + f"Most recent error: {errors[0].get('error_message', 'No message')}"
                    )
                )
    except json.JSONDecodeError:
        messages.append(AssistantMessage("Error parsing log data."))

    return messages

```