# Directory Structure

```
├── .DS_Store
├── .github
│   ├── dependabot.yaml
│   ├── ISSUE_TEMPLATE
│   │   └── bug_report.md
│   └── workflows
│       ├── cd.yaml
│       ├── changelog.yml
│       ├── opencommit.yaml
│       └── scan.yaml
├── .gitignore
├── .vscode
│   └── settings.json
├── assets
│   ├── config.png
│   ├── config2.png
│   ├── logs.gif
│   ├── logs.png
│   └── monitor.gif
├── datadog
│   ├── .env.template
│   ├── main.py
│   ├── Pipfile
│   ├── requirements.txt
│   └── taskfile.yaml
├── Dockerfile
├── README.md
├── ruff.toml
├── smithery.yaml
├── sonar-project.properties
├── taskfile.yaml
└── todo.md
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | .envrc
2 | __pycache__
3 | *.pyc
4 | .DS_Store
5 | 
```

--------------------------------------------------------------------------------
/datadog/.env.template:
--------------------------------------------------------------------------------

```
1 | DD_API_KEY=your_api_key
2 | DD_APP_KEY=your_app_key
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Datadog Model Context Protocol (MCP) 🔍
  2 | 
  3 | [![smithery badge](https://smithery.ai/badge/@didlawowo/mcp-collection)](https://smithery.ai/server/@didlawowo/mcp-collection)
  4 | 
  5 | A Python-based tool to interact with the Datadog API and fetch monitoring data from your infrastructure. This MCP server provides easy access to monitor states and Kubernetes logs through a simple interface.
  6 | 
  7 | ## Datadog Features 🌟
  8 | 
  9 | - **Monitor State Tracking**: Fetch and analyze specific monitor states
 10 | - **Kubernetes Log Analysis**: Extract and format error logs from Kubernetes clusters
 11 | 
 12 | ## Prerequisites 📋
 13 | 
 14 | - Python 3.11+
 15 | - Datadog API and Application keys (with correct permissions)
 16 | - Access to Datadog site
 17 | 
 18 | ## Installation 🔧
 19 | 
 20 | ### Installing via Smithery
 21 | 
 22 | To install the Datadog MCP server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@didlawowo/mcp-collection):
 23 | 
 24 | ```bash
 25 | npx -y @smithery/cli install @didlawowo/mcp-collection --client claude
 26 | ```
 27 | 
 28 | Required packages:
 29 | 
 30 | ```text
 31 | datadog-api-client
 32 | fastmcp
 33 | loguru
 34 | icecream
 35 | python-dotenv
 36 | uv
 37 | ```
 38 | 
 39 | ## Environment Setup 🔑
 40 | 
 41 | Create a `.env` file with your Datadog credentials:
 42 | 
 43 | ```env
 44 | DD_API_KEY=your_api_key
 45 | DD_APP_KEY=your_app_key
 46 | ```
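
These keys are read at startup via `python-dotenv`; a minimal sketch of what `datadog/main.py` does with them:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory

# Both keys are required; the MCP tools return an error message if either is missing
api_key = os.getenv("DD_API_KEY")
app_key = os.getenv("DD_APP_KEY")
```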
 47 | 
 48 | ## Claude Desktop Setup for MCP 🖥️
 49 | 
 50 | 1. Install Claude Desktop
 51 | 
 52 | ```bash
 53 | # Assuming you're on macOS
 54 | brew install claude-desktop
 55 | 
 56 | # Or download from official website
 57 | https://claude.ai/desktop
 58 | ```
 59 | 
 60 | 2. Set up Datadog MCP config:
 61 | 
 62 | ```bash
 63 | # On macOS the config file is at:
 64 | ~/Library/Application\ Support/Claude/claude_desktop_config.json
 65 | ```
 66 | 
 67 | Add this to your Claude config JSON:
 68 | ```json
 69 |     "Datadog-MCP-Server": {
 70 |       "command": "uv",
 71 |       "args": [
 72 |         "run",
 73 |         "--with",
 74 |         "datadog-api-client",
 75 |         "--with",
 76 |         "fastmcp",
 77 |         "--with",
 78 |         "icecream",
 79 |         "--with",
 80 |         "loguru",
 81 |         "--with",
 82 |         "python-dotenv",
 83 |         "fastmcp",
 84 |         "run",
 85 |         "/your-path/mcp-collection/datadog/main.py"
 86 |       ],
 87 |       "env": {
 88 |         "DD_API_KEY": "xxxx",
 89 |         "DD_APP_KEY": "xxx"
 90 |       }
 91 |     },
 92 | ```
 93 | 
 94 | ## Usage 💻
 95 | 
 96 | ![get logs](assets/logs.gif)
 97 | 
 98 | ![get monitor](assets/monitor.gif)
 99 | 
100 | ## Architecture 🏗
101 | 
102 | - **FastMCP Base**: Utilizes FastMCP framework for tool management
103 | - **Modular Design**: Separate functions for monitors and logs
104 | - **Type Safety**: Full typing support with Python type hints
105 | - **API Abstraction**: Wrapped Datadog API calls with error handling
106 | 
109 | # Model Context Protocol (MCP) Introduction 🤖
110 | 
111 | ## What is MCP?
112 | 
113 | Model Context Protocol (MCP) is a framework allowing AI models to interact with external tools and APIs in a standardized way. It enables models like Claude to:
114 | 
115 | - Access external data
116 | - Execute commands
117 | - Interact with APIs
118 | - Maintain context across conversations
119 | 
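In this repository, that interaction surface is declared with FastMCP decorators; a trimmed sketch of how `datadog/main.py` registers a tool:

```python
from fastmcp import FastMCP

mcp = FastMCP("Datadog-MCP-Server")


@mcp.tool()
def get_monitor_states(name: str, timeframe: int = 1) -> list:
    """The docstring becomes the tool description shown to the model."""
    ...
```
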
120 | ## Some Examples of MCP Servers
121 | 
122 | <https://github.com/punkpeye/awesome-mcp-servers?tab=readme-ov-file>
123 | 
124 | ## Tutorial for Setting Up MCP
125 | 
126 | <https://medium.com/@pedro.aquino.se/how-to-use-mcp-tools-on-claude-desktop-app-and-automate-your-daily-tasks-1c38e22bc4b0>
127 | 
128 | ## How it works - Available Functions 🛠️
129 | 
130 | The LLM calls the provided functions to fetch the data it needs and uses the results in its answers.
131 | 
132 | ### 1. Get Monitor States
133 | 
134 | ```python
135 | get_monitor_states(
136 |     name: str,           # Monitor name to search
137 |     timeframe: int = 1   # Hours to look back
138 | )
139 | ```
140 | 
141 | Example:
142 | 
143 | ```python
144 | 
145 | response = get_monitor_states(name="traefik")
146 | 
147 | # Sample Output
148 | {
149 |     "id": "12345678",
150 |     "name": "traefik",
151 |     "status": "OK",
152 |     "query": "avg(last_5m):avg:traefik.response_time{*} > 1000",
153 |     "message": "Response time is too high",
154 |     "type": "metric alert",
155 |     "created": "2024-01-14T10:00:00Z",
156 |     "modified": "2024-01-14T15:30:00Z"
157 | }
158 | ```
159 | 
160 | ### 2. Get Kubernetes Logs
161 | 
162 | ```python
163 | get_k8s_logs(
164 |     cluster: str,            # Kubernetes cluster name
165 |     timeframe: int = 5,      # Hours to look back
166 |     namespace: str | None = None  # Optional namespace filter
167 | )
168 | ```
169 | 
170 | Example:
171 | 
172 | ```python
173 | logs = get_k8s_logs(
174 |     cluster="prod-cluster",
175 |     timeframe=3,
176 |     namespace="default"
177 | )
178 | 
179 | # Sample Output
180 | {
181 |     "timestamp": "2024-01-14T22:00:00Z",
182 |     "host": "worker-1",
183 |     "service": "nginx-ingress",
184 |     "pod_name": "nginx-ingress-controller-abc123",
185 |     "namespace": "default",
186 |     "container_name": "controller",
187 |     "message": "Connection refused",
188 |     "status": "error"
189 | }
190 | ```
191 | 
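Under the hood, the tool assembles a Datadog log search query from these arguments; a self-contained sketch mirroring `datadog/main.py`:

```python
cluster, namespace = "prod-cluster", "default"  # example arguments

query_components = [
    f"kube_cluster_name:{cluster}",
    "(status:error OR level:error OR severity:error)",  # error-level filter
]
if namespace:
    query_components.append(f"kube_namespace:{namespace}")
query = " AND ".join(query_components)
print(query)
```
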
192 | ## 3. Install as MCP Extension
193 | 
194 | ```bash
195 | cd datadog && task install-mcp
196 | ```
197 | 
198 | ## 4. Verify Installation
199 | 
200 | ### In the Claude Desktop Chat
201 | 
202 | Check the Datadog connection in Claude:
203 | 
204 | ![setup claude](assets/config.png)
205 | 
206 | ## 5. Use Datadog MCP Tools
207 | 
208 | Ask Claude about your monitors or recent cluster errors; it will call the tools described above (see the usage GIFs).
209 | 
208 | ## Security Considerations 🔒
209 | 
210 | - Store API keys in `.env`
211 | - MCP runs in isolated environment
212 | - Each tool has defined permissions
213 | - Rate limiting is implemented
214 | 
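The credential check, for one, is enforced inside each tool; a self-contained sketch mirroring the guard in `datadog/main.py` (the helper name here is illustrative):

```python
import os

import mcp.types as types


def check_credentials() -> list[types.TextContent] | None:
    """Return an error payload when Datadog credentials are missing."""
    api_key = os.getenv("DD_API_KEY")
    app_key = os.getenv("DD_APP_KEY")
    if not api_key or not app_key:
        return [
            types.TextContent(
                type="text", text="Error: Missing Datadog API credentials"
            )
        ]
    return None
```
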
215 | ## Troubleshooting 🔧
216 | 
217 | ### Using MCP Inspector
218 | 
219 | ```bash
220 | # Launch MCP Inspector for debugging
221 | task run-mcp-inspector
222 | ```
223 | 
224 | The MCP Inspector provides:
225 | 
226 | - Real-time view of MCP server status
227 | - Function call logs
228 | - Error tracing
229 | - API response monitoring
230 | 
231 | ### Common issues and solutions
232 | 
233 | 1. **API Authentication Errors**
234 | 
235 |    ```bash
236 |    Error: (403) Forbidden
237 |    ```
238 | 
239 |    ➡️ Check your DD_API_KEY and DD_APP_KEY in .env
240 | 
241 | 2. **MCP Connection Issues**
242 | 
243 |    ```bash
244 |    Error: Failed to connect to MCP server
245 |    ```
246 | 
247 |    ➡️ Verify your claude_desktop_config.json path and content
248 | 
249 | 3. **Monitor Not Found**
250 | 
251 |    ```bash
252 |    Error: No monitor found with name 'xxx'
253 |    ```
254 | 
255 |    ➡️ Check the monitor name spelling (matching is a case-insensitive substring search)
256 | 
257 | 4. **Logs can be found here:**
258 | 
259 | ![alt text](assets/logs.png)
260 | 
261 | ## Contributing 🤝
262 | 
263 | Feel free to:
264 | 
265 | 1. Open issues for bugs
266 | 2. Submit PRs for improvements
267 | 3. Add new features
268 | 
269 | ## Notes 📝
270 | 
271 | - API calls are made to the Datadog EU site (`datadoghq.eu`)
272 | - Default timeframe is 1 hour for monitor states
273 | - Page size limits are set to handle most use cases
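
The EU site is hardcoded in `datadog/main.py`; if your organization is on a different Datadog site, adjust the client configuration accordingly (a sketch; `datadoghq.com` here is only an example):

```python
from datadog_api_client import Configuration

configuration = Configuration()
# main.py sets "datadoghq.eu"; point this at your own site if it differs
configuration.server_variables["site"] = "datadoghq.com"
```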
274 | 
```

--------------------------------------------------------------------------------
/ruff.toml:
--------------------------------------------------------------------------------

```toml
1 | [lint]
2 | ignore = ["F722", "F821"]
3 | extend-select = ["I"]
4 | 
```

--------------------------------------------------------------------------------
/todo.md:
--------------------------------------------------------------------------------

```markdown
1 | - shodan
2 | - virustotal
3 | - youtube
4 | - applenot
5 | - home assistant
6 | - 
```

--------------------------------------------------------------------------------
/.github/dependabot.yaml:
--------------------------------------------------------------------------------

```yaml
1 | version: 2
2 | updates:
3 | - package-ecosystem: "github-actions"
4 |   directory: "/"
5 |   schedule:
6 |     interval: "weekly"
7 | 
```

--------------------------------------------------------------------------------
/datadog/requirements.txt:
--------------------------------------------------------------------------------

```
 1 | datadog-api-client==2.31.0
 2 | mcp==1.2.0
 3 | python-dotenv==1.0.1
 4 | loguru==0.7.3
 5 | icecream==2.1.3
 6 | fastmcp==0.4.1
```

--------------------------------------------------------------------------------
/taskfile.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | version: '3'
 2 | 
 3 | tasks:
 4 | 
 5 |   shell:
 6 |     desc: start venv shell
 7 |     cmds:
 8 |     - pipenv shell
 9 | 
10 |   build:
11 |     desc: Build Docker image
12 |     cmds:
13 |     - docker compose build --push
14 | 
15 |   run:
16 |     desc: Run the Docker container
17 |     cmds:
18 |     - docker compose up -d
19 | 
20 |   default:
21 |     desc: List available tasks
22 |     cmds:
23 |     - task --list
24 | 
```

--------------------------------------------------------------------------------
/sonar-project.properties:
--------------------------------------------------------------------------------

```
 1 | sonar.projectKey=didlawowo_mcp-collection
 2 | sonar.organization=dcc
 3 | 
 4 | # This is the name and version displayed in the SonarCloud UI.
 5 | sonar.projectName=mcp-collection
 6 | sonar.projectVersion=1.0
 7 | 
 8 | 
 9 | # Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
10 | sonar.sources=datadog
11 | 
12 | # Encoding of the source code. Default is default system encoding
13 | #sonar.sourceEncoding=UTF-8
```

--------------------------------------------------------------------------------
/.github/workflows/scan.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | name: Build
 2 | 
 3 | on:
 4 |   push:
 5 |     branches:
 6 |       - main
 7 | 
 8 | 
 9 | jobs:
10 |   build:
11 |     name: Build and analyze
12 |     runs-on: ubuntu-latest
13 |     
14 |     steps:
15 |       - uses: actions/checkout@v4
16 |         with:
17 |           fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
18 |       # - uses: sonarsource/sonarqube-scan-action@v3
19 |       #   env:
20 |       #     SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
21 |       #     SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
22 | 
```

--------------------------------------------------------------------------------
/datadog/taskfile.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | version: '3'
 2 | 
 3 | tasks:
 4 |   setup:
 5 |     desc: Install Python dependencies using pipenv
 6 |     cmds:
 7 |     - pipenv install --python 3.12 && pipenv shell
 8 | 
 9 |   clean:
10 |     desc: Clean up pipenv environment
11 |     cmds:
12 |     - pipenv --rm
13 | 
14 |   shell:
15 |     desc: Start pipenv shell
16 |     cmds:
17 |     - pipenv shell
18 | 
19 |   install-mcp:
20 |     desc: Install mcp
21 |     cmds:
22 |     - uv run fastmcp install {{ .CLI_ARGS }}
23 | 
24 |   run-mcp-inspector:
25 |     desc: Run mcp inspector
26 |     cmds:
27 |     - uv run fastmcp dev {{.CLI_ARGS}}
28 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required:
 9 |       - ddApiKey
10 |       - ddAppKey
11 |     properties:
12 |       ddApiKey:
13 |         type: string
14 |         description: Datadog API key for authentication.
15 |       ddAppKey:
16 |         type: string
17 |         description: Datadog application key for authentication.
18 |   commandFunction:
19 |     # A function that produces the CLI command to start the MCP on stdio.
20 |     |-
21 |     (config) => ({command: 'python', args: ['main.py'], env: {DD_API_KEY: config.ddApiKey, DD_APP_KEY: config.ddAppKey}})
```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Bug Report
 2 | 
 3 | ## Description
 4 | 
 5 | A clear and concise description of the bug.
 6 | 
 7 | ## Steps to Reproduce
 8 | 
 9 | 1. Go to '...'
10 | 2. Click on '...'
11 | 3. Scroll down to '...'
12 | 4. See error
13 | 
14 | ## Expected Behavior
15 | 
16 | A clear description of what you expected to happen.
17 | 
18 | ## Actual Behavior
19 | 
20 | A clear description of what actually happened.
21 | 
22 | ## Screenshots
23 | 
24 | If applicable, add screenshots to help explain the problem.
25 | 
26 | ## Environment
27 | 
28 | - OS: [e.g. Windows 10]
29 | - Browser: [e.g. Chrome 91.0]
30 | - Version: [e.g. 22.1.1]
31 | 
32 | ## Additional Context
33 | 
34 | Add any other relevant context about the problem here.
35 | 
36 | ## Possible Solution
37 | 
38 | If you have suggestions on how to fix the bug, add them here.
39 | 
40 | ## Related Issues
41 | 
42 | Link any related issues here.
43 | 
```

--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |     "python.defaultInterpreterPath": "./.venv/bin/python",
 3 |     "python.linting.enabled": true,
 4 |     "python.linting.ruffEnabled": true,
 5 |     "python.linting.flake8Enabled": false,
 6 |     "python.linting.pycodestyleEnabled": false,
 7 |     "python.linting.pylintEnabled": false,
 8 |     "python.formatting.provider": "none",
 9 |     "python.sortImports.args": ["--profile", "black"],
10 |     "editor.formatOnSave": true,
11 |     "editor.codeActionsOnSave": {
12 |         "source.organizeImports": true
13 |     },
14 |     "files.exclude": {
15 |         "**/__pycache__": true,
16 |         "**/*.pyc": true,
17 |         "**/*.pyo": true,
18 |         "**/*.pyd": true,
19 |         ".pytest_cache": true,
20 |         ".coverage": true,
21 |         ".mypy_cache": true,
22 |         ".ruff_cache": true
23 |     },
24 |     "python.testing.pytestEnabled": true,
25 |     "python.testing.unittestEnabled": false,
26 |     "python.testing.pytestArgs": [
27 |         "tests"
28 |     ]
29 | }
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | # Start with a base Python image
 3 | FROM python:3.12-alpine
 4 | 
 5 | # Set the working directory
 6 | WORKDIR /app
 7 | 
 8 | # Copy the requirements file
 9 | COPY datadog/requirements.txt /app/requirements.txt
10 | 
11 | # Install dependencies
12 | RUN pip install --no-cache-dir -r requirements.txt
13 | 
14 | # Copy the rest of the application code
15 | COPY datadog/main.py /app/main.py
16 | 
17 | # Copy the .env file for environment variables
18 | # Ensure that your .env file is in the same directory as your Dockerfile
19 | # COPY datadog/.env /app/.env
20 | 
21 | # Set environment variables for Datadog credentials
22 | # This should ideally be managed via a secrets management tool
23 | # ENV DD_API_KEY=your_api_key
24 | # ENV DD_APP_KEY=your_app_key
25 | 
 26 | # Create a non-root user and give it ownership of /app
27 | RUN adduser -D mcpuser && chown -R mcpuser:mcpuser /app
28 | 
29 | # Set the user to run the application
30 | USER mcpuser
31 | 
32 | # Run the MCP server
33 | CMD ["python", "main.py"]
```

--------------------------------------------------------------------------------
/.github/workflows/opencommit.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | name: 'OpenCommit Action'
 2 | 
 3 | on:
 4 |   push:
 5 |     # this list of branches is often enough,
 6 |     # but you may still ignore other public branches
 7 |     branches-ignore: [main, master, dev, development, release]
 8 | 
 9 | jobs:
10 |   opencommit:
11 |     timeout-minutes: 10
12 |     name: OpenCommit
13 |     runs-on: ubuntu-latest
14 |     permissions: write-all
15 |     steps:
16 |       - name: Setup Node.js Environment
17 |         uses: actions/setup-node@v4
18 |         with:
19 |           node-version: '16'
20 |       # - uses: actions/checkout@v3
21 |       #   with:
22 |       #     fetch-depth: 0
23 |       # - uses: di-sukharev/[email protected]
24 |       #   with:
25 |       #     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
26 | 
27 |       #   env:
28 |       #     # set openAI api key in repo actions secrets,
29 |       #     # for openAI keys go to: https://platform.openai.com/account/api-keys
30 |       #     # for repo secret go to: <your_repo_url>/settings/secrets/actions
31 |       #     OCO_API_KEY: ${{ secrets.OCO_API_KEY }}
32 | 
33 |       #     # customization
34 |       #     OCO_TOKENS_MAX_INPUT: 4096
35 |       #     OCO_TOKENS_MAX_OUTPUT: 500
36 |       #     OCO_OPENAI_BASE_PATH: ''
37 |       #     OCO_DESCRIPTION: false
38 |       #     OCO_EMOJI: false
39 |       #     OCO_MODEL: gpt-4o
40 |       #     OCO_LANGUAGE: en
41 |       #     OCO_PROMPT_MODULE: conventional-commit
```

--------------------------------------------------------------------------------
/.github/workflows/changelog.yml:
--------------------------------------------------------------------------------

```yaml
 1 | name: Generate Changelog
 2 | 
 3 | on:
 4 |   push:
 5 |     tags:
 6 |       - 'v*'
 7 | 
 8 | permissions:
 9 |   contents: write
10 |   pull-requests: write
11 | 
12 | jobs:
13 |   changelog:
14 |     runs-on: ubuntu-latest
15 |     
16 |     steps:
17 |       - uses: actions/checkout@v4
18 |         with:
19 |           fetch-depth: 0
20 |           
21 |       - name: Generate changelog
22 |         run: |
23 |           LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.1.0")
24 |           RANGE="$(git describe --tags --abbrev=0 HEAD^ 2>/dev/null || git rev-list --max-parents=0 HEAD)..$LATEST_TAG"
25 |           
26 |           echo "# Changelog for $LATEST_TAG" > CHANGELOG.md
27 |           echo "" >> CHANGELOG.md
28 |           echo "## 🚀 New features" >> CHANGELOG.md
29 |           git log $RANGE --pretty=format:"- %s" --grep="^feat:" >> CHANGELOG.md
30 |           echo "" >> CHANGELOG.md
31 |           echo "## 🐛 Bug fixes" >> CHANGELOG.md
32 |           git log $RANGE --pretty=format:"- %s" --grep="^fix:" >> CHANGELOG.md
33 |           echo "" >> CHANGELOG.md
34 |           echo "## 📝 Other changes" >> CHANGELOG.md
35 |           git log $RANGE --pretty=format:"- %s" --extended-regexp --grep="^(chore|docs|style|refactor|perf|test):" >> CHANGELOG.md
36 | 
37 |       - name: Create Release
38 |         uses: softprops/action-gh-release@v1
39 |         with:
40 |           body_path: CHANGELOG.md
41 |           token: ${{ secrets.GITHUB_TOKEN }}
```

--------------------------------------------------------------------------------
/.github/workflows/cd.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | name: Build and Update Helm Values
 2 | 
 3 | on:
 4 |   push:
 5 |     branches:
 6 |       - main  # adjust this to your branch
 7 | 
 8 | jobs:
 9 |   build-and-update:
10 |     runs-on: ubuntu-latest
11 |     permissions:
12 |       contents: write
13 |     
14 |     steps:
15 |       - uses: actions/checkout@v4
16 |         with:
17 |           fetch-depth: 0
18 |       - name: Set up QEMU
19 |         uses: docker/setup-qemu-action@v3
20 |       
21 |       - name: Set up Docker Buildx
22 |         uses: docker/setup-buildx-action@v3
23 |         with:
24 |           platforms: linux/amd64,linux/arm64
25 |       
26 |       
27 |       - name: Login to DockerHub
28 |         uses: docker/login-action@v3
29 |         with:
30 |           username: ${{ secrets.DOCKER_USERNAME }}
31 |           password: ${{ secrets.DOCKER_PASSWORD }}
32 |       
33 |       - name: Get repository name and SHA
34 |         id: vars
35 |         run: |
36 |           REPO_NAME=$(echo "${{ github.repository }}" | cut -d'/' -f2 | tr '[:upper:]' '[:lower:]')
37 |           echo "REPO_NAME=${REPO_NAME}" >> $GITHUB_OUTPUT
38 |           echo "SHORT_SHA=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT
39 |       
40 |       # - name: Build and push Docker image
41 |       #   uses: docker/build-push-action@v6
42 |       #   with:
43 |       #     sbom: true
44 |       #     provenance: mode=max
45 |       #     platforms: linux/amd64,linux/arm64
46 |       #     push: true
47 |       #     tags: ${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}:${{ steps.vars.outputs.SHORT_SHA }}
48 |       #     context: .
49 |       #     cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}:buildcache
50 |       #     cache-to: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}:buildcache,mode=max
51 |       
52 |       # - name: Update Helm values
53 |       #   run: |
54 |       #     yq e -i '.image.repository = "${{ secrets.DOCKER_USERNAME }}/${{ steps.vars.outputs.REPO_NAME }}"' helm/values.yaml
55 |       #     yq e -i '.image.tag = "${{ steps.vars.outputs.SHORT_SHA }}"' helm/values.yaml
56 |       
57 |       # - name: Configure Git
58 |       #   run: |
59 |       #     git config user.name "GitHub Actions"
60 |       #     git config user.email "[email protected]"
61 |       
62 |       # - name: Commit and push changes
63 |       #   run: |
64 |       #     git add helm/values.yaml
65 |       #     git commit -m "chore: update image tag to ${{ steps.vars.outputs.SHORT_SHA }}"
66 |       #     git push
```

--------------------------------------------------------------------------------
/datadog/main.py:
--------------------------------------------------------------------------------

```python
  1 | import mcp.server.stdio
  2 | import mcp.types as types
  3 | from datadog_api_client import ApiClient, Configuration
  4 | from datadog_api_client.v2.api.logs_api import LogsApi
  5 | import os
  6 | import json
  7 | from loguru import logger
  8 | from fastmcp import FastMCP
  9 | from fastmcp.prompts.base import UserMessage, AssistantMessage
 10 | from typing import Generator
 11 | from datadog_api_client.v2.models import LogsListResponse
 12 | from icecream import ic
 13 | from dotenv import load_dotenv
 14 | from datadog_api_client.v1.api.monitors_api import MonitorsApi
 15 | 
 16 | 
 17 | load_dotenv()
 18 | 
 19 | mcp = FastMCP(
 20 |     "Datadog-MCP-Server",
 21 |     dependencies=[
 22 |         "loguru",
 23 |         "icecream",
 24 |         "python-dotenv",
 25 |         "datadog-api-client",
 26 |     ],
 27 | )
 28 | 
 29 | 
  30 | def fetch_logs_paginated(
  31 |     api_instance: LogsApi, query_params: dict, max_results: int = 1000
  32 | ) -> Generator[LogsListResponse, None, None]:
  33 |     """Fetch logs with cursor-based pagination support."""
  34 |     cursor = None
  35 |     total_logs = 0
  36 | 
  37 |     while total_logs < max_results:
  38 |         page = {"limit": min(100, max_results - total_logs)}
  39 |         if cursor:
  40 |             page["cursor"] = cursor
  41 |         query_params["page"] = page
  42 |         response = api_instance.list_logs(body=query_params)
  43 | 
  44 |         if not response.data:
  45 |             break
  46 | 
  47 |         yield response
  48 |         total_logs += len(response.data)
  49 |         # The Logs API returns the next-page cursor in meta.page.after
  50 |         meta_page = getattr(getattr(response, "meta", None), "page", None)
  51 |         cursor = getattr(meta_page, "after", None)
  52 |         if not cursor:
  53 |             break
 50 | 
 51 | 
  52 | def extract_tag_value(tags: list, prefix: str) -> str | None:
  53 |     """Helper to extract a tag value for a given prefix."""
 54 |     for tag in tags:
 55 |         if tag.startswith(prefix):
 56 |             return tag.split(":", 1)[1]
 57 |     return None
 58 | 
 59 | 
 60 | @mcp.tool()
 61 | def get_monitor_states(
 62 |     name: str,
 63 |     timeframe: int = 1,
 64 | ) -> list[types.TextContent]:
  65 |     """
  66 |     Get the states of monitors whose name matches the given string
  67 | 
  68 |     Args:
  69 |         name: monitor name (case-insensitive substring match)
  70 |         timeframe: Hours to look back (default: 1; currently not applied to the query)
  71 |     """
 72 | 
 73 |     def serialize_monitor(monitor) -> dict:
 74 |         """Helper to serialize monitor data"""
 75 |         return {
 76 |             "id": str(monitor.id),
 77 |             "name": monitor.name,
 78 |             "query": monitor.query,
 79 |             "status": str(monitor.overall_state),
 80 |             "last_triggered": monitor.last_triggered_ts
 81 |             if hasattr(monitor, "last_triggered_ts")
 82 |             else None,
 83 |             "message": monitor.message if hasattr(monitor, "message") else None,
 84 |             "type": monitor.type if hasattr(monitor, "type") else None,
 85 |             "created": str(monitor.created) if hasattr(monitor, "created") else None,
 86 |             "modified": str(monitor.modified) if hasattr(monitor, "modified") else None,
 87 |         }
 88 | 
 89 |     def fetch_monitors():
 90 |         with ApiClient(configuration) as api_client:
 91 |             monitors_api = MonitorsApi(api_client)
 92 | 
 93 |             # Get all monitors and filter by name
 94 |             response = monitors_api.list_monitors(
 95 |                 page_size=100  # 👈 Increased page size
 96 |             )
 97 | 
 98 |             # Filter monitors by name (case insensitive)
 99 |             monitor_details = []
100 |             for monitor in response:
101 |                 if name.lower() in monitor.name.lower():
102 |                     monitor_details.append(monitor)
103 | 
104 |             return monitor_details
105 | 
106 |     try:
107 |         configuration = Configuration()
108 |         api_key = os.getenv("DD_API_KEY")
109 |         app_key = os.getenv("DD_APP_KEY")
110 | 
111 |         if not api_key or not app_key:
112 |             return [
113 |                 types.TextContent(
114 |                     type="text", text="Error: Missing Datadog API credentials"
115 |                 )
116 |             ]
117 | 
118 |         configuration.api_key["DD-API-KEY"] = api_key
119 |         configuration.api_key["DD-APPLICATION-KEY"] = app_key
120 |         configuration.server_variables["site"] = "datadoghq.eu"
121 | 
122 |         monitors = fetch_monitors()
123 | 
124 |         if not monitors:
125 |             return [
126 |                 types.TextContent(
127 |                     type="text", text=f"No monitors found with name containing '{name}'"
128 |                 )
129 |             ]
130 | 
131 |         # Serialize monitors
132 |         monitor_states = [serialize_monitor(monitor) for monitor in monitors]
133 | 
134 |         return [
135 |             types.TextContent(
136 |                 type="text",
137 |                 text=json.dumps(
138 |                     monitor_states, indent=2, default=str
139 |                 ),  # 👈 Added default serializer
140 |             )
141 |         ]
142 | 
143 |     except ValueError as ve:
144 |         return [types.TextContent(type="text", text=str(ve))]
145 |     except Exception as e:
146 |         logger.error(f"Error fetching monitor states: {str(e)}")
147 |         return [
148 |             types.TextContent(
149 |                 type="text", text=f"Error fetching monitor states: {str(e)}"
150 |             )
151 |         ]
152 | 
153 | 
154 | @mcp.tool()
 155 | def get_k8s_logs(
 156 |     cluster: str, timeframe: int = 5, namespace: str | None = None
 157 | ) -> list[types.TextContent]:
 158 |     """
 159 |     Get error logs from a Kubernetes cluster.
 160 | 
 161 |     Args:
 162 |         cluster: Kubernetes cluster name
 163 |         timeframe: Hours to look back (default: 5)
 164 |         namespace: Optional namespace filter
 165 |     """
158 |     try:
159 |         configuration = Configuration()
160 |         api_key = os.getenv("DD_API_KEY")
161 |         app_key = os.getenv("DD_APP_KEY")
162 | 
163 |         configuration.server_variables["site"] = "datadoghq.eu"
164 | 
165 |         configuration.api_key["DD-API-KEY"] = api_key
166 |         configuration.api_key["DD-APPLICATION-KEY"] = app_key
167 |         with ApiClient(configuration) as api_client:
168 |             api_instance = LogsApi(api_client)
169 | 
 170 |             # Build a precise query targeting error-level logs
 171 |             query_components = [
 172 |                 # "source:kubernetes",
 173 |                 f"kube_cluster_name:{cluster}",
 174 |                 "(status:error OR level:error OR severity:error)",  # 👈 Error filter; parentheses keep the OR group intact once AND-joined
 175 |             ]
176 | 
177 |             if namespace:
178 |                 query_components.append(f"kube_namespace:{namespace}")
179 | 
180 |             query = " AND ".join(query_components)
181 | 
182 |             response = api_instance.list_logs(
183 |                 body={
184 |                     "filter": {
185 |                         "query": query,
 186 |                         "from": f"now-{timeframe}h",  # 👈 Dynamic timeframe
 187 |                         "to": "now",
 188 |                     },
 189 |                     "sort": "-timestamp",  # Most recent first
 190 |                     "page": {
 191 |                         "limit": 100,  # Raised to surface more errors
192 |                     },
193 |                 }
194 |             )
195 | 
 196 |             # Reduce the response to the most relevant fields
 197 |             ic(f"Query: {query}")  # 👈 Log the query
 198 |             # ic(f"Response: {response}")  # 👈 Log the raw response
 199 | 
 200 |             logs_data = response.to_dict()
 201 |             # ic(f"Logs data: {logs_data}")  # 👈 Log the parsed data
202 |             formatted_logs = []
203 | 
204 |             for log in logs_data.get("data", []):
205 |                 attributes = log.get("attributes", {})
206 |                 ic(attributes)
207 |                 formatted_logs.append(
208 |                     {
209 |                         "timestamp": attributes.get("timestamp"),
210 |                         "host": attributes.get("host"),
211 |                         "service": attributes.get("service"),
212 |                         "pod_name": extract_tag_value(
213 |                             attributes.get("tags", []), "pod_name:"
214 |                         ),
215 |                         "namespace": extract_tag_value(
216 |                             attributes.get("tags", []), "kube_namespace:"
217 |                         ),
218 |                         "container_name": extract_tag_value(
219 |                             attributes.get("tags", []), "kube_container_name:"
220 |                         ),
221 |                         "message": attributes.get("message"),
222 |                         "status": attributes.get("status"),
223 |                     }
224 |                 )
225 | 
226 |             return [
227 |                 types.TextContent(
228 |                     type="text", text=json.dumps(formatted_logs, indent=2)
229 |                 )
230 |             ]
231 | 
232 |     except Exception as e:
233 |         logger.error(f"Error fetching logs: {str(e)}")
234 |         return [types.TextContent(type="text", text=f"Error: {str(e)}")]
235 | 
236 | 
237 | @mcp.prompt()
238 | def analyze_monitors_data(name: str, timeframe: int = 3) -> list:
239 |     """
240 |     Analyze monitor data for a specific monitor.
241 |     Parameters:
242 |         name (str): The name of the monitor to analyze
243 |         timeframe (int): Hours to look back for data
244 |     Returns:
245 |         list: Structured monitor analysis
246 |     """
247 |     try:
248 |         monitor_data = get_monitor_states(name=name, timeframe=timeframe)
249 |         if not monitor_data:
250 |             return [
251 |                 AssistantMessage(
252 |                     f"No monitor data found for '{name}' in the last {timeframe} hours."
253 |                 )
254 |             ]
255 |         # Format the response more naturally
256 |         messages = [
257 |             UserMessage(f"Monitor Analysis for '{name}' (last {timeframe} hours):")
258 |         ]
 259 |         # get_monitor_states returns a single TextContent whose text is a JSON array
 260 |         for data in json.loads(monitor_data[0].text):
 261 |             messages.append(
 262 |                 AssistantMessage(
 263 |                     f"Monitor State: {data.get('status')}, Last Triggered: {data.get('last_triggered')}"
 264 |                 )
 265 |             )
265 |         return messages
266 |     except Exception as e:
267 |         logger.error(f"Error analyzing monitor data: {str(e)}")
268 |         return [AssistantMessage(f"Error: {str(e)}")]
269 | 
270 | 
271 | @mcp.prompt()
272 | def analyze_error_logs(
 273 |     cluster: str = "rke2", timeframe: int = 3, namespace: str | None = None
274 | ) -> list:
275 |     """
276 |     Analyze error logs from a Kubernetes cluster.
277 | 
278 |     Parameters:
279 |         cluster (str): The cluster name to analyze
280 |         timeframe (int): Hours to look back for errors
281 |         namespace (str): Optional namespace filter
282 | 
283 |     Returns:
284 |         list: Structured error analysis
285 |     """
286 |     logs = get_k8s_logs(cluster=cluster, namespace=namespace, timeframe=timeframe)
287 | 
288 |     if not logs:
289 |         return [
290 |             AssistantMessage(
291 |                 f"No error logs found for cluster '{cluster}' in the last {timeframe} hours."
292 |             )
293 |         ]
294 | 
295 |     # Format the response more naturally
296 |     messages = [
297 |         UserMessage(f"Error Analysis for cluster '{cluster}' (last {timeframe} hours):")
298 |     ]
299 | 
300 |     try:
301 |         log_data = json.loads(logs[0].text)
302 |         if not log_data:
303 |             messages.append(
304 |                 AssistantMessage("No errors found in the specified timeframe.")
305 |             )
306 |         else:
307 |             # Group errors by service for better analysis
308 |             errors_by_service = {}
309 |             for log in log_data:
310 |                 service = log.get("service", "unknown")
311 |                 if service not in errors_by_service:
312 |                     errors_by_service[service] = []
313 |                 errors_by_service[service].append(log)
314 | 
315 |             for service, errors in errors_by_service.items():
316 |                 messages.append(
317 |                     AssistantMessage(
318 |                         f"Service {service}: Found {len(errors)} errors\n"
 319 |                         + f"Most recent error: {errors[0].get('message', 'No message')}"
320 |                     )
321 |                 )
322 |     except json.JSONDecodeError:
323 |         messages.append(AssistantMessage("Error parsing log data."))
324 | 
325 |     return messages
326 | 
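 327 | 
 328 | if __name__ == "__main__":
 329 |     # Assumed entry point: the Dockerfile and smithery.yaml both launch this
 330 |     # file with `python main.py`, so start the MCP server over stdio here.
 331 |     mcp.run()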
```