# pab1it0/prometheus-mcp-server

# Directory Structure

```
├── .dockerignore
├── .env.template
├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── question.yml
│   ├── TRIAGE_AUTOMATION.md
│   ├── VALIDATION_SUMMARY.md
│   └── workflows
│       ├── bug-triage.yml
│       ├── ci.yml
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── issue-management.yml
│       ├── label-management.yml
│       ├── security.yml
│       └── triage-metrics.yml
├── .gitignore
├── Dockerfile
├── docs
│   ├── api_reference.md
│   ├── configuration.md
│   ├── contributing.md
│   ├── deploying_with_toolhive.md
│   ├── docker_deployment.md
│   ├── installation.md
│   └── usage.md
├── LICENSE
├── pyproject.toml
├── README.md
├── server.json
├── src
│   └── prometheus_mcp_server
│       ├── __init__.py
│       ├── logging_config.py
│       ├── main.py
│       └── server.py
├── tests
│   ├── test_docker_integration.py
│   ├── test_logging_config.py
│   ├── test_main.py
│   ├── test_mcp_protocol_compliance.py
│   ├── test_server.py
│   └── test_tools.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/docs/docker_deployment.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Docker Deployment Guide
  2 | 
  3 | This guide covers deploying the Prometheus MCP Server using Docker, including Docker Compose configurations, environment setup, and best practices for production deployments.
  4 | 
  5 | ## Table of Contents
  6 | 
  7 | - [Quick Start](#quick-start)
  8 | - [Environment Variables](#environment-variables)
  9 | - [Transport Modes](#transport-modes)
 10 | - [Docker Compose Examples](#docker-compose-examples)
 11 | - [Production Deployment](#production-deployment)
 12 | - [Security Considerations](#security-considerations)
 13 | - [Monitoring and Health Checks](#monitoring-and-health-checks)
 14 | - [Troubleshooting](#troubleshooting)
 15 | 
 16 | ## Quick Start
 17 | 
 18 | ### Pull from Docker Hub (Recommended)
 19 | 
 20 | ```bash
 21 | # Pull the official image from Docker MCP registry
 22 | docker pull mcp/prometheus-mcp-server:latest
 23 | ```
 24 | 
 25 | ### Run with Docker
 26 | 
 27 | ```bash
 28 | # Basic stdio mode (default)
 29 | docker run --rm \
 30 |   -e PROMETHEUS_URL=http://your-prometheus:9090 \
 31 |   mcp/prometheus-mcp-server:latest
 32 | 
 33 | # HTTP mode with port mapping
 34 | docker run --rm -p 8080:8080 \
 35 |   -e PROMETHEUS_URL=http://your-prometheus:9090 \
 36 |   -e PROMETHEUS_MCP_SERVER_TRANSPORT=http \
 37 |   -e PROMETHEUS_MCP_BIND_HOST=0.0.0.0 \
 38 |   mcp/prometheus-mcp-server:latest
 39 | ```
 40 | 
 41 | ### Build from Source
 42 | 
 43 | ```bash
 44 | # Clone the repository
 45 | git clone https://github.com/pab1it0/prometheus-mcp-server.git
 46 | cd prometheus-mcp-server
 47 | 
 48 | # Build the Docker image
 49 | docker build -t prometheus-mcp-server:local .
 50 | 
 51 | # Run the locally built image
 52 | docker run --rm \
 53 |   -e PROMETHEUS_URL=http://your-prometheus:9090 \
 54 |   prometheus-mcp-server:local
 55 | ```
 56 | 
 57 | ## Environment Variables
 58 | 
 59 | ### Required Configuration
 60 | 
 61 | | Variable | Description | Example |
 62 | |----------|-------------|---------|
 63 | | `PROMETHEUS_URL` | Base URL of your Prometheus server | `http://prometheus:9090` |
 64 | 
 65 | ### Authentication (Optional)
 66 | 
 67 | | Variable | Description | Example |
 68 | |----------|-------------|---------|
 69 | | `PROMETHEUS_USERNAME` | Username for basic authentication | `admin` |
 70 | | `PROMETHEUS_PASSWORD` | Password for basic authentication | `secretpassword` |
 71 | | `PROMETHEUS_TOKEN` | Bearer token (takes precedence over basic auth) | `eyJhbGciOiJIUzI1NiIs...` |
 72 | 
 73 | ### Multi-tenancy (Optional)
 74 | 
 75 | | Variable | Description | Example |
 76 | |----------|-------------|---------|
 77 | | `ORG_ID` | Organization ID for multi-tenant setups | `tenant-1` |
 78 | 
 79 | ### MCP Server Configuration
 80 | 
 81 | | Variable | Default | Description | Options |
 82 | |----------|---------|-------------|---------|
 83 | | `PROMETHEUS_MCP_SERVER_TRANSPORT` | `stdio` | Transport protocol | `stdio`, `http`, `sse` |
 84 | | `PROMETHEUS_MCP_BIND_HOST` | `127.0.0.1` | Host to bind (HTTP/SSE modes) | `0.0.0.0`, `127.0.0.1` |
 85 | | `PROMETHEUS_MCP_BIND_PORT` | `8080` | Port to bind (HTTP/SSE modes) | `1024-65535` |
 86 | 
 87 | ## Transport Modes
 88 | 
 89 | The Prometheus MCP Server supports three transport modes:
 90 | 
 91 | ### 1. STDIO Mode (Default)
 92 | 
 93 | Best for local development and CLI integration:
 94 | 
 95 | ```bash
 96 | docker run --rm \
 97 |   -e PROMETHEUS_URL=http://prometheus:9090 \
 98 |   -e PROMETHEUS_MCP_SERVER_TRANSPORT=stdio \
 99 |   mcp/prometheus-mcp-server:latest
100 | ```
101 | 
102 | ### 2. HTTP Mode
103 | 
104 | Best for web applications and remote access:
105 | 
106 | ```bash
107 | docker run --rm -p 8080:8080 \
108 |   -e PROMETHEUS_URL=http://prometheus:9090 \
109 |   -e PROMETHEUS_MCP_SERVER_TRANSPORT=http \
110 |   -e PROMETHEUS_MCP_BIND_HOST=0.0.0.0 \
111 |   -e PROMETHEUS_MCP_BIND_PORT=8080 \
112 |   mcp/prometheus-mcp-server:latest
113 | ```
114 | 
115 | ### 3. Server-Sent Events (SSE) Mode
116 | 
117 | Best for real-time applications:
118 | 
119 | ```bash
120 | docker run --rm -p 8080:8080 \
121 |   -e PROMETHEUS_URL=http://prometheus:9090 \
122 |   -e PROMETHEUS_MCP_SERVER_TRANSPORT=sse \
123 |   -e PROMETHEUS_MCP_BIND_HOST=0.0.0.0 \
124 |   -e PROMETHEUS_MCP_BIND_PORT=8080 \
125 |   mcp/prometheus-mcp-server:latest
126 | ```
127 | 
128 | ## Docker Compose Examples
129 | 
130 | ### Basic Setup with Prometheus
131 | 
132 | ```yaml
133 | version: '3.8'
134 | services:
135 |   prometheus:
136 |     image: prom/prometheus:latest
137 |     ports:
138 |       - "9090:9090"
139 |     volumes:
140 |       - ./prometheus.yml:/etc/prometheus/prometheus.yml
141 |     
142 |   prometheus-mcp-server:
143 |     image: mcp/prometheus-mcp-server:latest
144 |     depends_on:
145 |       - prometheus
146 |     environment:
147 |       - PROMETHEUS_URL=http://prometheus:9090
148 |       - PROMETHEUS_MCP_SERVER_TRANSPORT=stdio
149 |     restart: unless-stopped
150 | ```
151 | 
152 | ### HTTP Mode with Authentication
153 | 
154 | ```yaml
155 | version: '3.8'
156 | services:
157 |   prometheus:
158 |     image: prom/prometheus:latest
159 |     ports:
160 |       - "9090:9090"
161 |     command:
162 |       - '--config.file=/etc/prometheus/prometheus.yml'
163 |       - '--web.config.file=/etc/prometheus/web.yml'
164 |     volumes:
165 |       - ./prometheus.yml:/etc/prometheus/prometheus.yml
166 |       - ./web.yml:/etc/prometheus/web.yml
167 |     
168 |   prometheus-mcp-server:
169 |     image: mcp/prometheus-mcp-server:latest
170 |     ports:
171 |       - "8080:8080"
172 |     depends_on:
173 |       - prometheus
174 |     environment:
175 |       - PROMETHEUS_URL=http://prometheus:9090
176 |       - PROMETHEUS_USERNAME=admin
177 |       - PROMETHEUS_PASSWORD=secretpassword
178 |       - PROMETHEUS_MCP_SERVER_TRANSPORT=http
179 |       - PROMETHEUS_MCP_BIND_HOST=0.0.0.0
180 |       - PROMETHEUS_MCP_BIND_PORT=8080
181 |     restart: unless-stopped
182 |     healthcheck:
183 |       test: ["CMD", "curl", "-f", "http://localhost:8080/"]
184 |       interval: 30s
185 |       timeout: 10s
186 |       retries: 3
187 |       start_period: 40s
188 | ```
189 | 
190 | ### Multi-tenant Setup
191 | 
192 | ```yaml
193 | version: '3.8'
194 | services:
195 |   prometheus-mcp-tenant1:
196 |     image: mcp/prometheus-mcp-server:latest
197 |     ports:
198 |       - "8081:8080"
199 |     environment:
200 |       - PROMETHEUS_URL=http://prometheus:9090
201 |       - PROMETHEUS_TOKEN=${TENANT1_TOKEN}
202 |       - ORG_ID=tenant-1
203 |       - PROMETHEUS_MCP_SERVER_TRANSPORT=http
204 |       - PROMETHEUS_MCP_BIND_HOST=0.0.0.0
205 |     restart: unless-stopped
206 |     
207 |   prometheus-mcp-tenant2:
208 |     image: mcp/prometheus-mcp-server:latest
209 |     ports:
210 |       - "8082:8080"
211 |     environment:
212 |       - PROMETHEUS_URL=http://prometheus:9090
213 |       - PROMETHEUS_TOKEN=${TENANT2_TOKEN}
214 |       - ORG_ID=tenant-2
215 |       - PROMETHEUS_MCP_SERVER_TRANSPORT=http
216 |       - PROMETHEUS_MCP_BIND_HOST=0.0.0.0
217 |     restart: unless-stopped
218 | ```
219 | 
220 | ### Production Setup with Secrets
221 | 
222 | ```yaml
223 | version: '3.8'
224 | services:
225 |   prometheus-mcp-server:
226 |     image: mcp/prometheus-mcp-server:latest
227 |     ports:
228 |       - "8080:8080"
229 |     environment:
230 |       - PROMETHEUS_URL=http://prometheus:9090
231 |       - PROMETHEUS_MCP_SERVER_TRANSPORT=http
232 |       - PROMETHEUS_MCP_BIND_HOST=0.0.0.0
233 |       - PROMETHEUS_TOKEN_FILE=/run/secrets/prometheus_token
234 |     secrets:
235 |       - prometheus_token
236 |     restart: unless-stopped
237 | 
238 |     deploy:
239 |       resources:
240 |         limits:
241 |           memory: 256M
242 |           cpus: '0.5'
243 |         reservations:
244 |           memory: 128M
245 |           cpus: '0.25'
246 | 
247 | secrets:
248 |   prometheus_token:
249 |     external: true
250 | ```
251 | 
252 | ## Production Deployment
253 | 
254 | ### Resource Requirements
255 | 
256 | #### Minimum Requirements
257 | - **CPU**: 0.1 cores
258 | - **Memory**: 64MB
259 | - **Storage**: 100MB (for container image)
260 | 
261 | #### Recommended for Production
262 | - **CPU**: 0.25 cores
263 | - **Memory**: 128MB
264 | - **Storage**: 200MB
265 | 
266 | ### Docker Compose Production Example
267 | 
268 | ```yaml
269 | version: '3.8'
270 | services:
271 |   prometheus-mcp-server:
272 |     image: mcp/prometheus-mcp-server:latest
273 |     ports:
274 |       - "8080:8080"
275 |     environment:
276 |       - PROMETHEUS_URL=https://prometheus.example.com
277 |       - PROMETHEUS_TOKEN_FILE=/run/secrets/prometheus_token
278 |       - PROMETHEUS_MCP_SERVER_TRANSPORT=http
279 |       - PROMETHEUS_MCP_BIND_HOST=0.0.0.0
280 |       - ORG_ID=production
281 |     secrets:
282 |       - prometheus_token
283 |     restart: unless-stopped
284 |     deploy:
285 |       replicas: 2
286 |       resources:
287 |         limits:
288 |           memory: 256M
289 |           cpus: '0.5'
290 |         reservations:
291 |           memory: 128M
292 |           cpus: '0.25'
293 |       restart_policy:
294 |         condition: on-failure
295 |         delay: 5s
296 |         max_attempts: 3
297 |         window: 120s
298 |     healthcheck:
299 |       test: ["CMD", "curl", "-f", "http://localhost:8080/"]
300 |       interval: 30s
301 |       timeout: 10s
302 |       retries: 3
303 |       start_period: 40s
304 |     logging:
305 |       driver: "json-file"
306 |       options:
307 |         max-size: "100m"
308 |         max-file: "3"
309 | 
310 | secrets:
311 |   prometheus_token:
312 |     external: true
313 | ```
314 | 
315 | ### Kubernetes Deployment
316 | 
317 | ```yaml
318 | apiVersion: apps/v1
319 | kind: Deployment
320 | metadata:
321 |   name: prometheus-mcp-server
322 |   labels:
323 |     app: prometheus-mcp-server
324 | spec:
325 |   replicas: 2
326 |   selector:
327 |     matchLabels:
328 |       app: prometheus-mcp-server
329 |   template:
330 |     metadata:
331 |       labels:
332 |         app: prometheus-mcp-server
333 |     spec:
334 |       containers:
335 |       - name: prometheus-mcp-server
336 |         image: mcp/prometheus-mcp-server:latest
337 |         ports:
338 |         - containerPort: 8080
339 |         env:
340 |         - name: PROMETHEUS_URL
341 |           value: "http://prometheus:9090"
342 |         - name: PROMETHEUS_MCP_SERVER_TRANSPORT
343 |           value: "http"
344 |         - name: PROMETHEUS_MCP_BIND_HOST
345 |           value: "0.0.0.0"
346 |         - name: PROMETHEUS_TOKEN
347 |           valueFrom:
348 |             secretKeyRef:
349 |               name: prometheus-token
350 |               key: token
351 |         resources:
352 |           limits:
353 |             memory: "256Mi"
354 |             cpu: "500m"
355 |           requests:
356 |             memory: "128Mi"
357 |             cpu: "250m"
358 |         livenessProbe:
359 |           httpGet:
360 |             path: /
361 |             port: 8080
362 |           initialDelaySeconds: 30
363 |           periodSeconds: 30
364 |         readinessProbe:
365 |           httpGet:
366 |             path: /
367 |             port: 8080
368 |           initialDelaySeconds: 5
369 |           periodSeconds: 10
370 | ---
371 | apiVersion: v1
372 | kind: Service
373 | metadata:
374 |   name: prometheus-mcp-server
375 | spec:
376 |   selector:
377 |     app: prometheus-mcp-server
378 |   ports:
379 |   - protocol: TCP
380 |     port: 80
381 |     targetPort: 8080
382 |   type: ClusterIP
383 | ```
384 | 
385 | ## Security Considerations
386 | 
387 | ### 1. Network Security
388 | 
389 | ```yaml
390 | # Use internal networks for container communication
391 | version: '3.8'
392 | networks:
393 |   internal:
394 |     driver: bridge
395 |     internal: true
396 |   external:
397 |     driver: bridge
398 | 
399 | services:
400 |   prometheus-mcp-server:
401 |     networks:
402 |       - internal
403 |       - external
404 |     # Only expose necessary ports externally
405 | ```
406 | 
407 | ### 2. Secrets Management
408 | 
409 | ```bash
410 | # Create Docker secrets for sensitive data
411 | echo "your-prometheus-token" | docker secret create prometheus_token -
412 | 
413 | # Use secrets in compose
414 | version: '3.8'
415 | services:
416 |   prometheus-mcp-server:
417 |     secrets:
418 |       - prometheus_token
419 |     environment:
420 |       - PROMETHEUS_TOKEN_FILE=/run/secrets/prometheus_token
421 | ```
422 | 
423 | ### 3. User Permissions
424 | 
425 | The container runs as non-root user `app` (UID 1000) by default. No additional configuration needed.
426 | 
427 | ### 4. TLS/HTTPS
428 | 
429 | ```yaml
430 | # Use HTTPS for Prometheus URL
431 | environment:
432 |   - PROMETHEUS_URL=https://prometheus.example.com
433 |   - PROMETHEUS_TOKEN_FILE=/run/secrets/prometheus_token
434 | ```
435 | 
436 | ## Monitoring and Health Checks
437 | 
438 | ### Built-in Health Checks
439 | 
440 | The Docker image includes built-in health checks:
441 | 
442 | ```bash
443 | # Check container health
444 | docker ps
445 | # Look for "healthy" status
446 | 
447 | # Manual health check
448 | docker exec <container-id> curl -f http://localhost:8080/ || echo "unhealthy"
449 | ```
450 | 
451 | ### Custom Health Check
452 | 
453 | ```yaml
454 | healthcheck:
455 |   test: ["CMD", "curl", "-f", "http://localhost:8080/"]
456 |   interval: 30s
457 |   timeout: 10s
458 |   retries: 3
459 |   start_period: 40s
460 | ```
461 | 
462 | ### Prometheus Metrics
463 | 
464 | The server itself doesn't expose Prometheus metrics, but you can monitor it using standard container metrics.
465 | 
466 | ### Logging
467 | 
468 | ```yaml
469 | logging:
470 |   driver: "json-file"
471 |   options:
472 |     max-size: "100m"
473 |     max-file: "3"
474 | ```
475 | 
476 | View logs:
477 | ```bash
478 | docker logs prometheus-mcp-server
479 | docker logs -f prometheus-mcp-server  # Follow logs
480 | ```
481 | 
482 | ## Troubleshooting
483 | 
484 | ### Common Issues
485 | 
486 | #### 1. Connection Refused
487 | 
488 | ```bash
489 | # Open a shell in the container, then check Prometheus reachability from inside it
490 | docker run --rm -it mcp/prometheus-mcp-server:latest /bin/bash
491 | curl -v http://your-prometheus:9090/api/v1/status/config
492 | ```
493 | 
494 | #### 2. Authentication Failures
495 | 
496 | ```bash
497 | # Test authentication
498 | curl -H "Authorization: Bearer your-token" \
499 |   http://your-prometheus:9090/api/v1/status/config
500 | 
501 | # Or with basic auth
502 | curl -u username:password \
503 |   http://your-prometheus:9090/api/v1/status/config
504 | ```
505 | 
506 | #### 3. Permission Errors
507 | 
508 | ```bash
509 | # Check container user
510 | docker exec container-id id
511 | # Should show: uid=1000(app) gid=1000(app)
512 | ```
513 | 
514 | #### 4. Port Binding Issues
515 | 
516 | ```bash
517 | # Check port availability
518 | netstat -tulpn | grep 8080
519 | 
520 | # Use different port
521 | docker run -p 8081:8080 ...
522 | ```
523 | 
524 | ### Debug Mode
525 | 
526 | ```bash
527 | # Run with verbose logging
528 | docker run --rm \
529 |   -e PROMETHEUS_URL=http://prometheus:9090 \
530 |   -e PYTHONUNBUFFERED=1 \
531 |   mcp/prometheus-mcp-server:latest
532 | ```
533 | 
534 | ### Container Inspection
535 | 
536 | ```bash
537 | # Inspect container configuration
538 | docker inspect prometheus-mcp-server
539 | 
540 | # Check resource usage
541 | docker stats prometheus-mcp-server
542 | 
543 | # Access container shell
544 | docker exec -it prometheus-mcp-server /bin/bash
545 | ```
546 | 
547 | ### Common Environment Variable Issues
548 | 
549 | | Issue | Solution |
550 | |-------|----------|
551 | | `PROMETHEUS_URL not set` | Set the `PROMETHEUS_URL` environment variable |
552 | | `Invalid transport` | Use `stdio`, `http`, or `sse` |
553 | | `Invalid port` | Use a valid port number (1024-65535) |
554 | | `Connection refused` | Check network connectivity to Prometheus |
555 | | `Authentication failed` | Verify credentials or token |
556 | 
557 | ### Getting Help
558 | 
559 | 1. Check the [GitHub Issues](https://github.com/pab1it0/prometheus-mcp-server/issues)
560 | 2. Review container logs: `docker logs <container-name>`
561 | 3. Test Prometheus connectivity manually
562 | 4. Verify environment variables are set correctly
563 | 5. Check Docker network configuration
564 | 
565 | For production deployments, consider implementing monitoring and alerting for the MCP server container health and performance.
```
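
The compose examples above bind-mount a `./prometheus.yml` that the guide itself never shows. A minimal sketch, assuming Prometheus only needs to scrape its own metrics (extend `scrape_configs` for real targets):

```bash
# Hypothetical minimal Prometheus config for the compose examples above;
# written next to docker-compose.yml so the bind mount resolves.
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s        # how often to scrape targets

scrape_configs:
  - job_name: prometheus      # Prometheus scraping its own metrics
    static_configs:
      - targets: ['localhost:9090']
EOF
```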

--------------------------------------------------------------------------------
/tests/test_docker_integration.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for Docker integration and container functionality."""
  2 | 
  3 | import os
  4 | import time
  5 | import pytest
  6 | import subprocess
  7 | import requests
  8 | import json
  9 | import tempfile
 10 | from pathlib import Path
 11 | from typing import Dict, Any
 12 | import docker
 13 | from unittest.mock import patch
 14 | 
 15 | 
 16 | @pytest.fixture(scope="module")
 17 | def docker_client():
 18 |     """Create a Docker client for testing."""
 19 |     try:
 20 |         client = docker.from_env()
 21 |         # Test Docker connection
 22 |         client.ping()
 23 |         return client
 24 |     except Exception as e:
 25 |         pytest.skip(f"Docker not available: {e}")
 26 | 
 27 | 
 28 | @pytest.fixture(scope="module") 
 29 | def docker_image(docker_client):
 30 |     """Build the Docker image for testing."""
 31 |     # Build the Docker image
 32 |     image_tag = "prometheus-mcp-server:test"
 33 |     
 34 |     # Get the project root directory
 35 |     project_root = Path(__file__).parent.parent
 36 |     
 37 |     try:
 38 |         # Build the image
 39 |         image, logs = docker_client.images.build(
 40 |             path=str(project_root),
 41 |             tag=image_tag,
 42 |             rm=True,
 43 |             forcerm=True
 44 |         )
 45 |         
 46 |         # Print build logs for debugging
 47 |         for log in logs:
 48 |             if 'stream' in log:
 49 |                 print(log['stream'], end='')
 50 |         
 51 |         yield image_tag
 52 |         
 53 |     except Exception as e:
 54 |         pytest.skip(f"Failed to build Docker image: {e}")
 55 |     
 56 |     finally:
 57 |         # Cleanup: remove the test image
 58 |         try:
 59 |             docker_client.images.remove(image_tag, force=True)
 60 |         except Exception:
 61 |             pass  # Image might already be removed
 62 | 
 63 | 
 64 | class TestDockerBuild:
 65 |     """Test Docker image build and basic functionality."""
 66 |     
 67 |     def test_docker_image_builds_successfully(self, docker_image):
 68 |         """Test that Docker image builds without errors."""
 69 |         assert docker_image is not None
 70 |     
 71 |     def test_docker_image_has_correct_labels(self, docker_client, docker_image):
 72 |         """Test that Docker image has the required OCI labels."""
 73 |         image = docker_client.images.get(docker_image)
 74 |         labels = image.attrs['Config']['Labels']
 75 |         
 76 |         # Test OCI standard labels
 77 |         assert 'org.opencontainers.image.title' in labels
 78 |         assert labels['org.opencontainers.image.title'] == 'Prometheus MCP Server'
 79 |         assert 'org.opencontainers.image.description' in labels
 80 |         assert 'org.opencontainers.image.version' in labels
 81 |         assert 'org.opencontainers.image.source' in labels
 82 |         assert 'org.opencontainers.image.licenses' in labels
 83 |         assert labels['org.opencontainers.image.licenses'] == 'MIT'
 84 |         
 85 |         # Test MCP-specific labels
 86 |         assert 'mcp.server.name' in labels
 87 |         assert labels['mcp.server.name'] == 'prometheus-mcp-server'
 88 |         assert 'mcp.server.category' in labels
 89 |         assert labels['mcp.server.category'] == 'monitoring'
 90 |         assert 'mcp.server.transport.stdio' in labels
 91 |         assert labels['mcp.server.transport.stdio'] == 'true'
 92 |         assert 'mcp.server.transport.http' in labels
 93 |         assert labels['mcp.server.transport.http'] == 'true'
 94 |     
 95 |     def test_docker_image_exposes_correct_port(self, docker_client, docker_image):
 96 |         """Test that Docker image exposes the correct port."""
 97 |         image = docker_client.images.get(docker_image)
 98 |         exposed_ports = image.attrs['Config']['ExposedPorts']
 99 |         
100 |         assert '8080/tcp' in exposed_ports
101 |     
102 |     def test_docker_image_runs_as_non_root(self, docker_client, docker_image):
103 |         """Test that Docker image runs as non-root user."""
104 |         image = docker_client.images.get(docker_image)
105 |         user = image.attrs['Config']['User']
106 |         
107 |         assert user == 'app'
108 | 
109 | 
110 | class TestDockerContainerStdio:
111 |     """Test Docker container running in stdio mode."""
112 |     
113 |     def test_container_starts_with_missing_prometheus_url(self, docker_client, docker_image):
114 |         """Test container behavior when PROMETHEUS_URL is not set."""
115 |         container = docker_client.containers.run(
116 |             docker_image,
117 |             environment={},
118 |             detach=True,
119 |             remove=True
120 |         )
121 |         
122 |         try:
123 |             # Wait for container to exit with timeout
124 |             # Container with missing PROMETHEUS_URL should exit quickly with error
125 |             result = container.wait(timeout=10)
126 |             
127 |             # Check that it exited with non-zero status (indicating configuration error)
128 |             assert result['StatusCode'] != 0
129 |             
130 |             # The fact that it exited quickly with non-zero status indicates
131 |             # the missing PROMETHEUS_URL was detected properly
132 |             
133 |         finally:
134 |             try:
135 |                 container.stop()
136 |                 container.remove()
137 |             except Exception:
138 |                 pass  # Container might already be auto-removed
139 |     
140 |     def test_container_starts_with_valid_config(self, docker_client, docker_image):
141 |         """Test container starts successfully with valid configuration."""
142 |         container = docker_client.containers.run(
143 |             docker_image,
144 |             environment={
145 |                 'PROMETHEUS_URL': 'http://mock-prometheus:9090',
146 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'stdio'
147 |             },
148 |             detach=True,
149 |             remove=True
150 |         )
151 |         
152 |         try:
153 |             # In stdio mode without TTY/stdin, containers exit immediately after startup
154 |             # This is expected behavior - the server starts successfully then exits
155 |             result = container.wait(timeout=10)
156 |             
157 |             # Check that it exited with zero status (successful startup and normal exit)
158 |             assert result['StatusCode'] == 0
159 |             
160 |             # The fact that it exited with code 0 indicates successful configuration
161 |             # and normal termination (no stdin available in detached container)
162 |             
163 |         finally:
164 |             try:
165 |                 container.stop()
166 |                 container.remove()
167 |             except Exception:
168 |                 pass  # Container might already be auto-removed
169 | 
170 | 
171 | class TestDockerContainerHTTP:
172 |     """Test Docker container running in HTTP mode."""
173 |     
174 |     def test_container_http_mode_binds_to_port(self, docker_client, docker_image):
175 |         """Test container in HTTP mode binds to the correct port."""
176 |         container = docker_client.containers.run(
177 |             docker_image,
178 |             environment={
179 |                 'PROMETHEUS_URL': 'http://mock-prometheus:9090',
180 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'http',
181 |                 'PROMETHEUS_MCP_BIND_HOST': '0.0.0.0',
182 |                 'PROMETHEUS_MCP_BIND_PORT': '8080'
183 |             },
184 |             ports={'8080/tcp': 8080},
185 |             detach=True,
186 |             remove=True
187 |         )
188 |         
189 |         try:
190 |             # Wait for the container to start
191 |             time.sleep(3)
192 |             
193 |             # Container should be running
194 |             container.reload()
195 |             assert container.status == 'running'
196 |             
197 |             # Try to connect to the HTTP port
198 |             # Note: This might fail if the MCP server doesn't accept HTTP requests
199 |             # but the port should be open
200 |             try:
201 |                 response = requests.get('http://localhost:8080', timeout=5)
202 |                 # Any response (including error) means the port is accessible
203 |             except requests.exceptions.ConnectionError:
204 |                 pytest.fail("HTTP port not accessible")
205 |             except requests.exceptions.RequestException:
206 |                 # Other request exceptions are okay - port is open but MCP protocol
207 |                 pass
208 |             
209 |         finally:
210 |             try:
211 |                 container.stop()
212 |                 container.remove()
213 |             except Exception:
214 |                 pass
215 |     
216 |     def test_container_health_check_stdio_mode(self, docker_client, docker_image):
217 |         """Test Docker health check in stdio mode."""
218 |         container = docker_client.containers.run(
219 |             docker_image,
220 |             environment={
221 |                 'PROMETHEUS_URL': 'http://mock-prometheus:9090',
222 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'stdio'
223 |             },
224 |             detach=True,
225 |             remove=True
226 |         )
227 |         
228 |         try:
229 |             # In stdio mode, container will exit quickly since no stdin is available
230 |             # Test verifies that the container starts up properly (health check design)
231 |             result = container.wait(timeout=10)
232 |             
233 |             # Container should exit with code 0 (successful startup and normal termination)
234 |             assert result['StatusCode'] == 0
235 |             
236 |             # The successful exit indicates the server started properly
237 |             # In stdio mode without stdin, immediate exit is expected behavior
238 |             
239 |         finally:
240 |             try:
241 |                 container.stop()
242 |                 container.remove()
243 |             except Exception:
244 |                 pass  # Container might already be auto-removed
245 | 
246 | 
247 | class TestDockerEnvironmentVariables:
248 |     """Test Docker container environment variable handling."""
249 |     
250 |     def test_all_environment_variables_accepted(self, docker_client, docker_image):
251 |         """Test that container accepts all expected environment variables."""
252 |         env_vars = {
253 |             'PROMETHEUS_URL': 'http://test-prometheus:9090',
254 |             'PROMETHEUS_USERNAME': 'testuser',
255 |             'PROMETHEUS_PASSWORD': 'testpass',
256 |             'PROMETHEUS_TOKEN': 'test-token',
257 |             'ORG_ID': 'test-org',
258 |             'PROMETHEUS_MCP_SERVER_TRANSPORT': 'http',
259 |             'PROMETHEUS_MCP_BIND_HOST': '0.0.0.0',
260 |             'PROMETHEUS_MCP_BIND_PORT': '8080'
261 |         }
262 |         
263 |         container = docker_client.containers.run(
264 |             docker_image,
265 |             environment=env_vars,
266 |             detach=True,
267 |             remove=True
268 |         )
269 |         
270 |         try:
271 |             # Wait for the container to start
272 |             time.sleep(3)
273 |             
274 |             # Container should be running
275 |             container.reload()
276 |             assert container.status == 'running'
277 |             
278 |             # Check logs don't contain environment variable errors
279 |             logs = container.logs().decode('utf-8')
280 |             assert 'environment variable is invalid' not in logs
281 |             assert 'configuration missing' not in logs.lower()
282 |             
283 |         finally:
284 |             try:
285 |                 container.stop()
286 |                 container.remove()
287 |             except Exception:
288 |                 pass
289 |     
290 |     def test_invalid_transport_mode_fails(self, docker_client, docker_image):
291 |         """Test that invalid transport mode causes container to fail."""
292 |         container = docker_client.containers.run(
293 |             docker_image,
294 |             environment={
295 |                 'PROMETHEUS_URL': 'http://test-prometheus:9090',
296 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'invalid-transport'
297 |             },
298 |             detach=True,
299 |             remove=True
300 |         )
301 |         
302 |         try:
303 |             # Wait for container to exit with timeout
304 |             # Container with invalid transport should exit quickly with error
305 |             result = container.wait(timeout=10)
306 |             
307 |             # Check that it exited with non-zero status (indicating configuration error)
308 |             assert result['StatusCode'] != 0
309 |             
310 |             # The fact that it exited quickly with non-zero status indicates
311 |             # the invalid transport was detected properly
312 |             
313 |         finally:
314 |             try:
315 |                 container.stop()
316 |                 container.remove()
317 |             except Exception:
318 |                 pass  # Container might already be auto-removed
319 |     
320 |     def test_invalid_port_fails(self, docker_client, docker_image):
321 |         """Test that invalid port causes container to fail."""
322 |         container = docker_client.containers.run(
323 |             docker_image,
324 |             environment={
325 |                 'PROMETHEUS_URL': 'http://test-prometheus:9090',
326 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'http',
327 |                 'PROMETHEUS_MCP_BIND_PORT': 'invalid-port'
328 |             },
329 |             detach=True,
330 |             remove=True
331 |         )
332 |         
333 |         try:
334 |             # Wait for container to exit with timeout
335 |             # Container with invalid port should exit quickly with error
336 |             result = container.wait(timeout=10)
337 |             
338 |             # Check that it exited with non-zero status (indicating configuration error)
339 |             assert result['StatusCode'] != 0
340 |             
341 |             # The fact that it exited quickly with non-zero status indicates
342 |             # the invalid port was detected properly
343 |             
344 |         finally:
345 |             try:
346 |                 container.stop()
347 |                 container.remove()
348 |             except Exception:
349 |                 pass  # Container might already be auto-removed
350 | 
351 | 
352 | class TestDockerSecurity:
353 |     """Test Docker security features."""
354 |     
355 |     def test_container_runs_as_non_root_user(self, docker_client, docker_image):
356 |         """Test that container processes run as non-root user."""
357 |         container = docker_client.containers.run(
358 |             docker_image,
359 |             environment={
360 |                 'PROMETHEUS_URL': 'http://test-prometheus:9090',
361 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'http'
362 |             },
363 |             detach=True,
364 |             remove=True
365 |         )
366 |         
367 |         try:
368 |             # Wait for container to start
369 |             time.sleep(2)
370 |             
371 |             # Execute id command to check user
372 |             result = container.exec_run('id')
373 |             output = result.output.decode('utf-8')
374 |             
375 |             # Should run as app user (uid=1000, gid=1000)
376 |             assert 'uid=1000(app)' in output
377 |             assert 'gid=1000(app)' in output
378 |             
379 |         finally:
380 |             try:
381 |                 container.stop()
382 |                 container.remove()
383 |             except Exception:
384 |                 pass
385 |     
386 |     def test_container_filesystem_permissions(self, docker_client, docker_image):
387 |         """Test that container filesystem has correct permissions."""
388 |         container = docker_client.containers.run(
389 |             docker_image,
390 |             environment={
391 |                 'PROMETHEUS_URL': 'http://test-prometheus:9090',
392 |                 'PROMETHEUS_MCP_SERVER_TRANSPORT': 'http'
393 |             },
394 |             detach=True,
395 |             remove=True
396 |         )
397 |         
398 |         try:
399 |             # Wait for container to start
400 |             time.sleep(2)
401 |             
402 |             # Check app directory ownership
403 |             result = container.exec_run('ls -la /app')
404 |             output = result.output.decode('utf-8')
405 |             
406 |             # App directory should be owned by app user
407 |             # Check that the directory shows app user and app group
408 |             assert 'app  app' in output or 'app app' in output
409 |             
410 |         finally:
411 |             try:
412 |                 container.stop()
413 |                 container.remove()
414 |             except Exception:
415 |                 pass
```
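
These tests assume a reachable Docker daemon and the `docker` Python SDK; the `docker_client` fixture skips the module otherwise. A minimal sketch of running just this file, assuming the project environment is managed with `uv` (the repository ships a `uv.lock`):

```bash
# Run only the Docker integration tests, verbosely.
# The suite builds a throwaway image tagged prometheus-mcp-server:test
# and skips itself if the Docker daemon is unreachable.
uv run pytest tests/test_docker_integration.py -v
```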

--------------------------------------------------------------------------------
/.github/workflows/issue-management.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Issue Management
  2 | 
  3 | on:
  4 |   issues:
  5 |     types: [opened, edited, closed, reopened, labeled, unlabeled]
  6 |   issue_comment:
  7 |     types: [created, edited, deleted]
  8 |   schedule:
  9 |     # Run daily at 9 AM UTC for maintenance tasks
 10 |     - cron: '0 9 * * *'
 11 |   workflow_dispatch:
 12 |     inputs:
 13 |       action:
 14 |         description: 'Management action to perform'
 15 |         required: true
 16 |         default: 'health-check'
 17 |         type: choice
 18 |         options:
 19 |         - health-check
 20 |         - close-stale
 21 |         - update-metrics
 22 |         - sync-labels
 23 | 
 24 | permissions:
 25 |   issues: write
 26 |   contents: read
 27 |   pull-requests: read
 28 | 
 29 | jobs:
 30 |   issue-triage-rules:
 31 |     runs-on: ubuntu-latest
 32 |     if: github.event_name == 'issues' && (github.event.action == 'opened' || github.event.action == 'edited')
 33 |     
 34 |     steps:
 35 |       - name: Enhanced Auto-Triage
 36 |         uses: actions/github-script@v7
 37 |         with:
 38 |           script: |
 39 |             const issue = context.payload.issue;
 40 |             const title = issue.title.toLowerCase();
 41 |             const body = issue.body ? issue.body.toLowerCase() : '';
 42 |             
 43 |             // Advanced pattern matching for better categorization
 44 |             const patterns = {
 45 |               critical: {
 46 |                 keywords: ['critical', 'crash', 'data loss', 'security', 'urgent', 'production down'],
 47 |                 priority: 'priority: critical'
 48 |               },
 49 |               performance: {
 50 |                 keywords: ['slow', 'timeout', 'performance', 'memory', 'cpu', 'optimization'],
 51 |                 labels: ['type: performance', 'priority: high']
 52 |               },
 53 |               authentication: {
 54 |                 keywords: ['auth', 'login', 'token', 'credentials', 'unauthorized', '401', '403'],
 55 |                 labels: ['component: authentication', 'priority: medium']
 56 |               },
 57 |               configuration: {
 58 |                 keywords: ['config', 'setup', 'environment', 'variables', 'installation'],
 59 |                 labels: ['component: configuration', 'type: configuration']
 60 |               },
 61 |               docker: {
 62 |                 keywords: ['docker', 'container', 'image', 'deployment', 'kubernetes'],
 63 |                 labels: ['component: deployment', 'env: docker']
 64 |               }
 65 |             };
 66 | 
 67 |             const labelsToAdd = new Set();
 68 |             
 69 |             // Apply pattern-based labeling
 70 |             for (const [category, pattern] of Object.entries(patterns)) {
 71 |               const hasKeyword = pattern.keywords.some(keyword => 
 72 |                 title.includes(keyword) || body.includes(keyword)
 73 |               );
 74 |               
 75 |               if (hasKeyword) {
 76 |                 if (pattern.labels) {
 77 |                   pattern.labels.forEach(label => labelsToAdd.add(label));
 78 |                 } else if (pattern.priority) {
 79 |                   labelsToAdd.add(pattern.priority);
 80 |                 }
 81 |               }
 82 |             }
 83 | 
 84 |             // Intelligent component detection
 85 |             if (body.includes('promql') || body.includes('prometheus') || body.includes('metrics')) {
 86 |               labelsToAdd.add('component: prometheus');
 87 |             }
 88 |             
 89 |             if (body.includes('mcp') || body.includes('transport') || body.includes('server')) {
 90 |               labelsToAdd.add('component: mcp-server');
 91 |             }
 92 | 
 93 |             // Environment detection from issue body
 94 |             const envPatterns = {
 95 |               'env: windows': /windows|win32|powershell/i,
 96 |               'env: macos': /macos|darwin|mac\s+os|osx/i,
 97 |               'env: linux': /linux|ubuntu|debian|centos|rhel/i,
 98 |               'env: docker': /docker|container|kubernetes|k8s/i
 99 |             };
100 | 
101 |             for (const [label, pattern] of Object.entries(envPatterns)) {
102 |               if (pattern.test(body) || pattern.test(title)) {
103 |                 labelsToAdd.add(label);
104 |               }
105 |             }
106 | 
107 |             // Apply all detected labels
108 |             if (labelsToAdd.size > 0) {
109 |               await github.rest.issues.addLabels({
110 |                 owner: context.repo.owner,
111 |                 repo: context.repo.repo,
112 |                 issue_number: issue.number,
113 |                 labels: Array.from(labelsToAdd)
114 |               });
115 |             }
116 | 
117 |   intelligent-assignment:
118 |     runs-on: ubuntu-latest
119 |     if: github.event_name == 'issues' && github.event.action == 'labeled'
120 |     
121 |     steps:
122 |       - name: Smart Assignment Logic
123 |         uses: actions/github-script@v7
124 |         with:
125 |           script: |
126 |             const issue = context.payload.issue;
127 |             const labelName = context.payload.label.name;
128 |             
129 |             // Skip if already assigned
130 |             if (issue.assignees.length > 0) return;
131 |             
132 |             // Assignment rules based on labels and content
133 |             const assignmentRules = {
134 |               'priority: critical': {
135 |                 assignees: ['pab1it0'],
136 |                 notify: true,
137 |                 milestone: 'urgent-fixes'
138 |               },
139 |               'component: prometheus': {
140 |                 assignees: ['pab1it0'],
141 |                 notify: false
142 |               },
143 |               'component: authentication': {
144 |                 assignees: ['pab1it0'],
145 |                 notify: true
146 |               },
147 |               'type: performance': {
148 |                 assignees: ['pab1it0'],
149 |                 notify: false
150 |               }
151 |             };
152 | 
153 |             const rule = assignmentRules[labelName];
154 |             if (rule) {
155 |               // Assign to maintainer
156 |               await github.rest.issues.addAssignees({
157 |                 owner: context.repo.owner,
158 |                 repo: context.repo.repo,
159 |                 issue_number: issue.number,
160 |                 assignees: rule.assignees
161 |               });
162 |               
163 |               // Add notification comment if needed
164 |               if (rule.notify) {
165 |                 await github.rest.issues.createComment({
166 |                   owner: context.repo.owner,
167 |                   repo: context.repo.repo,
168 |                   issue_number: issue.number,
169 |                   body: `🚨 This issue has been marked as **${labelName}** and requires immediate attention from the maintainer team.`
170 |                 });
171 |               }
172 |               
173 |               // Set milestone if specified
174 |               if (rule.milestone) {
175 |                 try {
176 |                   const milestones = await github.rest.issues.listMilestones({
177 |                     owner: context.repo.owner,
178 |                     repo: context.repo.repo,
179 |                     state: 'open'
180 |                   });
181 |                   
182 |                   const milestone = milestones.data.find(m => m.title === rule.milestone);
183 |                   if (milestone) {
184 |                     await github.rest.issues.update({
185 |                       owner: context.repo.owner,
186 |                       repo: context.repo.repo,
187 |                       issue_number: issue.number,
188 |                       milestone: milestone.number
189 |                     });
190 |                   }
191 |                 } catch (error) {
192 |                   console.log(`Could not set milestone: ${error.message}`);
193 |                 }
194 |               }
195 |             }
196 | 
197 |   issue-health-monitoring:
198 |     runs-on: ubuntu-latest
199 |     if: github.event_name == 'schedule' || github.event.inputs.action == 'health-check'
200 |     
201 |     steps:
202 |       - name: Issue Health Check
203 |         uses: actions/github-script@v7
204 |         with:
205 |           script: |
206 |             const { data: issues } = await github.rest.issues.listForRepo({
207 |               owner: context.repo.owner,
208 |               repo: context.repo.repo,
209 |               state: 'open',
210 |               per_page: 100
211 |             });
212 | 
213 |             const now = new Date();
214 |             const healthMetrics = {
215 |               needsAttention: [],
216 |               staleIssues: [],
217 |               missingLabels: [],
218 |               duplicateCandidates: [],
219 |               escalationCandidates: []
220 |             };
221 | 
222 |             for (const issue of issues) {
223 |               if (issue.pull_request) continue;
224 |               
225 |               const updatedAt = new Date(issue.updated_at);
226 |               const daysSinceUpdate = Math.floor((now - updatedAt) / (1000 * 60 * 60 * 24));
227 |               
228 |               // Check for issues needing attention
229 |               const hasNeedsTriageLabel = issue.labels.some(l => l.name === 'status: needs-triage');
230 |               const hasAssignee = issue.assignees.length > 0;
231 |               const hasTypeLabel = issue.labels.some(l => l.name.startsWith('type:'));
232 |               const hasPriorityLabel = issue.labels.some(l => l.name.startsWith('priority:'));
233 |               
234 |               // Issues that need attention
235 |               if (hasNeedsTriageLabel && daysSinceUpdate > 3) {
236 |                 healthMetrics.needsAttention.push({
237 |                   number: issue.number,
238 |                   title: issue.title,
239 |                   daysSinceUpdate,
240 |                   reason: 'Needs triage for > 3 days'
241 |                 });
242 |               }
243 |               
244 |               // Stale issues
245 |               if (daysSinceUpdate > 30) {
246 |                 healthMetrics.staleIssues.push({
247 |                   number: issue.number,
248 |                   title: issue.title,
249 |                   daysSinceUpdate
250 |                 });
251 |               }
252 |               
253 |               // Missing essential labels
254 |               if (!hasTypeLabel || !hasPriorityLabel) {
255 |                 healthMetrics.missingLabels.push({
256 |                   number: issue.number,
257 |                   title: issue.title,
258 |                   missing: [
259 |                     !hasTypeLabel ? 'type' : null,
260 |                     !hasPriorityLabel ? 'priority' : null
261 |                   ].filter(Boolean)
262 |                 });
263 |               }
264 |               
265 |               // Escalation candidates (high priority, old, unassigned)
266 |               const hasHighPriority = issue.labels.some(l => 
267 |                 l.name === 'priority: high' || l.name === 'priority: critical'
268 |               );
269 |               
270 |               if (hasHighPriority && !hasAssignee && daysSinceUpdate > 2) {
271 |                 healthMetrics.escalationCandidates.push({
272 |                   number: issue.number,
273 |                   title: issue.title,
274 |                   daysSinceUpdate,
275 |                   labels: issue.labels.map(l => l.name)
276 |                 });
277 |               }
278 |             }
279 | 
280 |             // Generate health report
281 |             console.log('=== ISSUE HEALTH REPORT ===');
282 |             console.log(`Issues needing attention: ${healthMetrics.needsAttention.length}`);
283 |             console.log(`Stale issues (>30 days): ${healthMetrics.staleIssues.length}`);
284 |             console.log(`Issues missing labels: ${healthMetrics.missingLabels.length}`);
285 |             console.log(`Escalation candidates: ${healthMetrics.escalationCandidates.length}`);
286 |             
287 |             // Take action on health issues
288 |             if (healthMetrics.escalationCandidates.length > 0) {
289 |               for (const issue of healthMetrics.escalationCandidates) {
290 |                 await github.rest.issues.addAssignees({
291 |                   owner: context.repo.owner,
292 |                   repo: context.repo.repo,
293 |                   issue_number: issue.number,
294 |                   assignees: ['pab1it0']
295 |                 });
296 |                 
297 |                 await github.rest.issues.createComment({
298 |                   owner: context.repo.owner,
299 |                   repo: context.repo.repo,
300 |                   issue_number: issue.number,
301 |                   body: `⚡ This high-priority issue has been automatically escalated due to inactivity (${issue.daysSinceUpdate} days since last update).`
302 |                 });
303 |               }
304 |             }
305 | 
306 |   comment-management:
307 |     runs-on: ubuntu-latest
308 |     if: github.event_name == 'issue_comment'
309 |     
310 |     steps:
311 |       - name: Comment-Based Actions
312 |         uses: actions/github-script@v7
313 |         with:
314 |           script: |
315 |             const comment = context.payload.comment;
316 |             const issue = context.payload.issue;
317 |             const commentBody = comment.body.toLowerCase();
318 |             
319 |             // Skip if comment is from a bot
320 |             if (comment.user.type === 'Bot') return;
321 |             
322 |             // Auto-response to common questions
323 |             const autoResponses = {
324 |               'how to install': '📚 Please check our [installation guide](https://github.com/pab1it0/prometheus-mcp-server/blob/main/docs/installation.md) for detailed setup instructions.',
325 |               'docker setup': '🐳 For Docker setup instructions, see our [Docker deployment guide](https://github.com/pab1it0/prometheus-mcp-server/blob/main/docs/docker_deployment.md).',
326 |               'configuration help': '⚙️ Configuration details can be found in our [configuration guide](https://github.com/pab1it0/prometheus-mcp-server/blob/main/docs/configuration.md).'
327 |             };
328 | 
329 |             // Check for help requests
330 |             for (const [trigger, response] of Object.entries(autoResponses)) {
331 |               if (commentBody.includes(trigger)) {
332 |                 await github.rest.issues.createComment({
333 |                   owner: context.repo.owner,
334 |                   repo: context.repo.repo,
335 |                   issue_number: issue.number,
336 |                   body: `${response}\n\nIf this doesn't help, please provide more specific details about your setup and the issue you're experiencing.`
337 |                 });
338 |                 break;
339 |               }
340 |             }
341 |             
342 |             // Update status based on maintainer responses
343 |             const isMaintainer = comment.user.login === 'pab1it0';
344 |             if (isMaintainer) {
345 |               const hasWaitingLabel = issue.labels.some(l => l.name === 'status: waiting-for-response');
346 |               const hasNeedsTriageLabel = issue.labels.some(l => l.name === 'status: needs-triage');
347 |               
348 |               // Remove waiting label if maintainer responds
349 |               if (hasWaitingLabel) {
350 |                 await github.rest.issues.removeLabel({
351 |                   owner: context.repo.owner,
352 |                   repo: context.repo.repo,
353 |                   issue_number: issue.number,
354 |                   name: 'status: waiting-for-response'
355 |                 });
356 |               }
357 |               
358 |               // Remove needs-triage if maintainer responds
359 |               if (hasNeedsTriageLabel) {
360 |                 await github.rest.issues.removeLabel({
361 |                   owner: context.repo.owner,
362 |                   repo: context.repo.repo,
363 |                   issue_number: issue.number,
364 |                   name: 'status: needs-triage'
365 |                 });
366 |                 
367 |                 await github.rest.issues.addLabels({
368 |                   owner: context.repo.owner,
369 |                   repo: context.repo.repo,
370 |                   issue_number: issue.number,
371 |                   labels: ['status: in-progress']
372 |                 });
373 |               }
374 |             }
375 | 
376 |   duplicate-detection:
377 |     runs-on: ubuntu-latest
378 |     if: github.event_name == 'issues' && github.event.action == 'opened'
379 |     
380 |     steps:
381 |       - name: Detect Potential Duplicates
382 |         uses: actions/github-script@v7
383 |         with:
384 |           script: |
385 |             const newIssue = context.payload.issue;
386 |             const newTitle = newIssue.title.toLowerCase();
387 |             const newBody = newIssue.body ? newIssue.body.toLowerCase() : '';
388 |             
389 |             // Get recent issues for comparison
390 |             const { data: existingIssues } = await github.rest.issues.listForRepo({
391 |               owner: context.repo.owner,
392 |               repo: context.repo.repo,
393 |               state: 'all',
394 |               per_page: 50,
395 |               sort: 'created',
396 |               direction: 'desc'
397 |             });
398 |             
399 |             // Filter out the new issue itself and PRs
400 |             const candidates = existingIssues.filter(issue => 
401 |               issue.number !== newIssue.number && !issue.pull_request
402 |             );
403 |             
404 |             // Simple duplicate detection based on title similarity
405 |             const potentialDuplicates = candidates.filter(issue => {
406 |               const existingTitle = issue.title.toLowerCase();
407 |               const titleWords = newTitle.split(/\s+/).filter(word => word.length > 3);
408 |               const matchingWords = titleWords.filter(word => existingTitle.includes(word));
409 |               
410 |               // Consider it a potential duplicate if >50% of significant words match
411 |               return matchingWords.length / titleWords.length > 0.5 && titleWords.length > 2;
412 |             });
413 |             
414 |             if (potentialDuplicates.length > 0) {
415 |               const duplicateLinks = potentialDuplicates
416 |                 .slice(0, 3) // Limit to top 3 matches
417 |                 .map(dup => `- #${dup.number}: ${dup.title}`)
418 |                 .join('\n');
419 |               
420 |               await github.rest.issues.createComment({
421 |                 owner: context.repo.owner,
422 |                 repo: context.repo.repo,
423 |                 issue_number: newIssue.number,
424 |                 body: `🔍 **Potential Duplicate Detection**
425 |                 
426 | This issue might be similar to:
427 | ${duplicateLinks}
428 | 
429 | Please check if your issue is already reported. If this is indeed a duplicate, we'll close it to keep discussions consolidated. If it's different, please clarify how this issue differs from the existing ones.`
430 |               });
431 |               
432 |               await github.rest.issues.addLabels({
433 |                 owner: context.repo.owner,
434 |                 repo: context.repo.repo,
435 |                 issue_number: newIssue.number,
436 |                 labels: ['needs-investigation']
437 |               });
438 |             }
```
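
Beyond the `issues`, `issue_comment`, and `schedule` triggers, the workflow declares a `workflow_dispatch` input named `action` (of the listed options, only `health-check` is wired to a job above). A sketch of invoking it manually, assuming an authenticated GitHub CLI (`gh`) in a clone of this repository:

```bash
# Manually trigger the scheduled health check outside its daily cron
gh workflow run issue-management.yml -f action=health-check

# Inspect recent runs of this workflow
gh run list --workflow=issue-management.yml
```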

--------------------------------------------------------------------------------
/.github/workflows/triage-metrics.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Triage Metrics & Reporting
  2 | 
  3 | on:
  4 |   schedule:
  5 |     # Daily metrics at 8 AM UTC
  6 |     - cron: '0 8 * * *'
  7 |     # Weekly detailed report on Mondays at 9 AM UTC
  8 |     - cron: '0 9 * * 1'
  9 |   workflow_dispatch:
 10 |     inputs:
 11 |       report_type:
 12 |         description: 'Type of report to generate'
 13 |         required: true
 14 |         default: 'daily'
 15 |         type: choice
 16 |         options:
 17 |         - daily
 18 |         - weekly
 19 |         - monthly
 20 |         - custom
 21 |       days_back:
 22 |         description: 'Days back to analyze (for custom reports)'
 23 |         required: false
 24 |         default: '7'
 25 |         type: string
 26 | 
 27 | permissions:
 28 |   issues: write
 29 |   contents: write
 30 |   pull-requests: read
 31 | 
 32 | jobs:
 33 |   collect-metrics:
 34 |     runs-on: ubuntu-latest
 35 |     outputs:
 36 |       metrics_json: ${{ steps.calculate.outputs.metrics }}
 37 |     
 38 |     steps:
 39 |       - name: Calculate Triage Metrics
 40 |         id: calculate
 41 |         uses: actions/github-script@v7
 42 |         with:
 43 |           script: |
 44 |             const reportType = '${{ github.event.inputs.report_type }}' || ('${{ github.event.schedule }}' === '0 9 * * 1' ? 'weekly' : 'daily');
 45 |             const daysBack = parseInt('${{ github.event.inputs.days_back }}' || '7');
 46 |             
 47 |             // Determine date range based on report type
 48 |             const now = new Date();
 49 |             let startDate;
 50 |             
 51 |             switch (reportType) {
 52 |               case 'daily':
 53 |                 startDate = new Date(now.getTime() - (1 * 24 * 60 * 60 * 1000));
 54 |                 break;
 55 |               case 'weekly':
 56 |                 startDate = new Date(now.getTime() - (7 * 24 * 60 * 60 * 1000));
 57 |                 break;
 58 |               case 'monthly':
 59 |                 startDate = new Date(now.getTime() - (30 * 24 * 60 * 60 * 1000));
 60 |                 break;
 61 |               case 'custom':
 62 |                 startDate = new Date(now.getTime() - (daysBack * 24 * 60 * 60 * 1000));
 63 |                 break;
 64 |               default:
 65 |                 startDate = new Date(now.getTime() - (7 * 24 * 60 * 60 * 1000));
 66 |             }
 67 | 
 68 |             console.log(`Analyzing ${reportType} metrics from ${startDate.toISOString()} to ${now.toISOString()}`);
 69 | 
 70 |             // Fetch all issues and PRs
 71 |             const allIssues = [];
 72 |             let page = 1;
 73 |             let hasMore = true;
 74 | 
 75 |             while (hasMore && page <= 10) { // Limit to prevent excessive API calls
 76 |               const { data: pageIssues } = await github.rest.issues.listForRepo({
 77 |                 owner: context.repo.owner,
 78 |                 repo: context.repo.repo,
 79 |                 state: 'all',
 80 |                 sort: 'updated',
 81 |                 direction: 'desc',
 82 |                 per_page: 100,
 83 |                 page: page
 84 |               });
 85 | 
 86 |               allIssues.push(...pageIssues);
 87 |               
 88 |               // Check if we've gone back far enough
 89 |               const oldestInPage = new Date(Math.min(...pageIssues.map(i => new Date(i.updated_at))));
 90 |               hasMore = pageIssues.length === 100 && oldestInPage > startDate;
 91 |               page++;
 92 |             }
 93 | 
 94 |             // Initialize metrics
 95 |             const metrics = {
 96 |               period: {
 97 |                 type: reportType,
 98 |                 start: startDate.toISOString(),
 99 |                 end: now.toISOString(),
100 |                 days: Math.ceil((now - startDate) / (1000 * 60 * 60 * 24))
101 |               },
102 |               overview: {
103 |                 total_issues: 0,
104 |                 total_prs: 0,
105 |                 open_issues: 0,
106 |                 closed_issues: 0,
107 |                 new_issues: 0,
108 |                 resolved_issues: 0
109 |               },
110 |               triage: {
111 |                 needs_triage: 0,
112 |                 triaged_this_period: 0,
113 |                 avg_triage_time_hours: 0,
114 |                 overdue_triage: 0
115 |               },
116 |               labels: {
117 |                 by_priority: {},
118 |                 by_component: {},
119 |                 by_type: {},
120 |                 by_status: {}
121 |               },
122 |               response_times: {
123 |                 avg_first_response_hours: 0,
124 |                 avg_resolution_time_hours: 0,
125 |                 issues_without_response: 0
126 |               },
127 |               contributors: {
128 |                 issue_creators: new Set(),
129 |                 comment_authors: new Set(),
130 |                 assignees: new Set()
131 |               },
132 |               quality: {
133 |                 issues_with_templates: 0,
134 |                 issues_missing_info: 0,
135 |                 duplicate_issues: 0,
136 |                 stale_issues: 0
137 |               }
138 |             };
139 | 
140 |             const triageEvents = [];
141 |             const responseTimeData = [];
142 | 
143 |             // Analyze each issue
144 |             for (const issue of allIssues) {
145 |               const createdAt = new Date(issue.created_at);
146 |               const updatedAt = new Date(issue.updated_at);
147 |               const closedAt = issue.closed_at ? new Date(issue.closed_at) : null;
148 |               const isPR = !!issue.pull_request;
149 |               const isInPeriod = updatedAt >= startDate;
150 |               
151 |               if (!isInPeriod && createdAt < startDate) continue;
152 | 
153 |               // Basic counts
154 |               if (isPR) {
155 |                 metrics.overview.total_prs++;
156 |               } else {
157 |                 metrics.overview.total_issues++;
158 |                 
159 |                 if (issue.state === 'open') {
160 |                   metrics.overview.open_issues++;
161 |                 } else {
162 |                   metrics.overview.closed_issues++;
163 |                 }
164 |                 
165 |                 // New issues in period
166 |                 if (createdAt >= startDate) {
167 |                   metrics.overview.new_issues++;
168 |                   metrics.contributors.issue_creators.add(issue.user.login);
169 |                 }
170 |                 
171 |                 // Resolved issues in period
172 |                 if (closedAt && closedAt >= startDate) {
173 |                   metrics.overview.resolved_issues++;
174 |                 }
175 |               }
176 | 
177 |               if (isPR) continue; // Skip PRs for issue-specific analysis
178 | 
179 |               // Triage analysis
180 |               const hasNeedsTriageLabel = issue.labels.some(l => l.name === 'status: needs-triage');
181 |               if (hasNeedsTriageLabel) {
182 |                 metrics.triage.needs_triage++;
183 |                 const daysSinceCreated = (now - createdAt) / (1000 * 60 * 60 * 24);
184 |                 if (daysSinceCreated > 3) {
185 |                   metrics.triage.overdue_triage++;
186 |                 }
187 |               }
188 | 
189 |               // Label analysis
190 |               for (const label of issue.labels) {
191 |                 const labelName = label.name;
192 |                 
193 |                 if (labelName.startsWith('priority: ')) {
194 |                   const priority = labelName.replace('priority: ', '');
195 |                   metrics.labels.by_priority[priority] = (metrics.labels.by_priority[priority] || 0) + 1;
196 |                 }
197 |                 
198 |                 if (labelName.startsWith('component: ')) {
199 |                   const component = labelName.replace('component: ', '');
200 |                   metrics.labels.by_component[component] = (metrics.labels.by_component[component] || 0) + 1;
201 |                 }
202 |                 
203 |                 if (labelName.startsWith('type: ')) {
204 |                   const type = labelName.replace('type: ', '');
205 |                   metrics.labels.by_type[type] = (metrics.labels.by_type[type] || 0) + 1;
206 |                 }
207 |                 
208 |                 if (labelName.startsWith('status: ')) {
209 |                   const status = labelName.replace('status: ', '');
210 |                   metrics.labels.by_status[status] = (metrics.labels.by_status[status] || 0) + 1;
211 |                 }
212 |               }
213 | 
214 |               // Assignment analysis
215 |               if (issue.assignees.length > 0) {
216 |                 issue.assignees.forEach(assignee => {
217 |                   metrics.contributors.assignees.add(assignee.login);
218 |                 });
219 |               }
220 | 
221 |               // Quality analysis
222 |               const bodyLength = issue.body ? issue.body.length : 0;
223 |               if (bodyLength > 100 && issue.body.includes('###')) {
224 |                 metrics.quality.issues_with_templates++;
225 |               } else if (bodyLength < 50) {
226 |                 metrics.quality.issues_missing_info++;
227 |               }
228 | 
229 |               // Check for stale issues
230 |               const daysSinceUpdate = (now - updatedAt) / (1000 * 60 * 60 * 24);
231 |               if (issue.state === 'open' && daysSinceUpdate > 30) {
232 |                 metrics.quality.stale_issues++;
233 |               }
234 | 
235 |               // Get comments for response time analysis
236 |               if (createdAt >= startDate) {
237 |                 try {
238 |                   const { data: comments } = await github.rest.issues.listComments({
239 |                     owner: context.repo.owner,
240 |                     repo: context.repo.repo,
241 |                     issue_number: issue.number
242 |                   });
243 | 
244 |                   comments.forEach(comment => {
245 |                     metrics.contributors.comment_authors.add(comment.user.login);
246 |                   });
247 | 
248 |                   // Find first maintainer response
249 |                   const maintainerResponse = comments.find(comment => 
250 |                     comment.user.login === 'pab1it0' ||
251 |                     comment.author_association === 'OWNER' ||
252 |                     comment.author_association === 'MEMBER'
253 |                   );
254 | 
255 |                   if (maintainerResponse) {
256 |                     const responseTime = (new Date(maintainerResponse.created_at) - createdAt) / (1000 * 60 * 60);
257 |                     responseTimeData.push(responseTime);
258 |                   } else {
259 |                     metrics.response_times.issues_without_response++;
260 |                   }
261 | 
262 |                   // Check for triage events
263 |                   const events = await github.rest.issues.listEvents({
264 |                     owner: context.repo.owner,
265 |                     repo: context.repo.repo,
266 |                     issue_number: issue.number
267 |                   });
268 | 
269 |                   for (const event of events.data) {
270 |                     if (event.event === 'labeled' && event.created_at >= startDate.toISOString()) {
271 |                       const labelName = event.label?.name;
272 |                       if (labelName && !labelName.startsWith('status: needs-triage')) {
273 |                         const triageTime = (new Date(event.created_at) - createdAt) / (1000 * 60 * 60);
274 |                         triageEvents.push(triageTime);
275 |                         metrics.triage.triaged_this_period++;
276 |                         break;
277 |                       }
278 |                     }
279 |                   }
280 |                 } catch (error) {
281 |                   console.log(`Error fetching comments/events for issue #${issue.number}: ${error.message}`);
282 |                 }
283 |               }
284 |             }
285 | 
286 |             // Calculate averages
287 |             if (responseTimeData.length > 0) {
288 |               metrics.response_times.avg_first_response_hours = 
289 |                 Math.round(responseTimeData.reduce((a, b) => a + b, 0) / responseTimeData.length * 100) / 100;
290 |             }
291 | 
292 |             if (triageEvents.length > 0) {
293 |               metrics.triage.avg_triage_time_hours = 
294 |                 Math.round(triageEvents.reduce((a, b) => a + b, 0) / triageEvents.length * 100) / 100;
295 |             }
296 | 
297 |             // Convert sets to counts
298 |             metrics.contributors.unique_issue_creators = metrics.contributors.issue_creators.size;
299 |             metrics.contributors.unique_commenters = metrics.contributors.comment_authors.size;
300 |             metrics.contributors.unique_assignees = metrics.contributors.assignees.size;
301 | 
302 |             // Clean up for JSON serialization
303 |             delete metrics.contributors.issue_creators;
304 |             delete metrics.contributors.comment_authors;
305 |             delete metrics.contributors.assignees;
306 | 
307 |             console.log('Metrics calculation completed');
308 |             core.setOutput('metrics', JSON.stringify(metrics));  // compact JSON keeps the job output on a single line
309 |             
310 |             return metrics;
311 | 
312 |   generate-report:
313 |     runs-on: ubuntu-latest
314 |     needs: collect-metrics
315 |     
316 |     steps:
317 |       - name: Checkout repository
318 |         uses: actions/checkout@v4
319 | 
320 |       - name: Generate Markdown Report
321 |         uses: actions/github-script@v7
322 |         with:
323 |           script: |
324 |             const metrics = ${{ needs.collect-metrics.outputs.metrics_json }};  // inject the JSON directly; wrapping it in a quoted string breaks on quotes/newlines
325 |             
326 |             // Generate markdown report
327 |             let report = `# 📊 Issue Triage Report\n\n`;
328 |             report += `**Period**: ${metrics.period.type} (${metrics.period.days} days)\n`;
329 |             report += `**Generated**: ${new Date().toISOString()}\n\n`;
330 | 
331 |             // Overview Section
332 |             report += `## 📈 Overview\n\n`;
333 |             report += `| Metric | Count |\n`;
334 |             report += `|--------|-------|\n`;
335 |             report += `| Total Issues | ${metrics.overview.total_issues} |\n`;
336 |             report += `| Open Issues | ${metrics.overview.open_issues} |\n`;
337 |             report += `| Closed Issues | ${metrics.overview.closed_issues} |\n`;
338 |             report += `| New Issues | ${metrics.overview.new_issues} |\n`;
339 |             report += `| Resolved Issues | ${metrics.overview.resolved_issues} |\n`;
340 |             report += `| Total PRs | ${metrics.overview.total_prs} |\n\n`;
341 | 
342 |             // Triage Section
343 |             report += `## 🏷️ Triage Status\n\n`;
344 |             report += `| Metric | Value |\n`;
345 |             report += `|--------|-------|\n`;
346 |             report += `| Issues Needing Triage | ${metrics.triage.needs_triage} |\n`;
347 |             report += `| Issues Triaged This Period | ${metrics.triage.triaged_this_period} |\n`;
348 |             report += `| Average Triage Time | ${metrics.triage.avg_triage_time_hours}h |\n`;
349 |             report += `| Overdue Triage (>3 days) | ${metrics.triage.overdue_triage} |\n\n`;
350 | 
351 |             // Response Times Section
352 |             report += `## ⏱️ Response Times\n\n`;
353 |             report += `| Metric | Value |\n`;
354 |             report += `|--------|-------|\n`;
355 |             report += `| Average First Response | ${metrics.response_times.avg_first_response_hours}h |\n`;
356 |             report += `| Issues Without Response | ${metrics.response_times.issues_without_response} |\n\n`;
357 | 
358 |             // Labels Distribution
359 |             report += `## 🏷️ Label Distribution\n\n`;
360 |             
361 |             if (Object.keys(metrics.labels.by_priority).length > 0) {
362 |               report += `### Priority Distribution\n`;
363 |               for (const [priority, count] of Object.entries(metrics.labels.by_priority)) {
364 |                 report += `- **${priority}**: ${count} issues\n`;
365 |               }
366 |               report += `\n`;
367 |             }
368 | 
369 |             if (Object.keys(metrics.labels.by_component).length > 0) {
370 |               report += `### Component Distribution\n`;
371 |               for (const [component, count] of Object.entries(metrics.labels.by_component)) {
372 |                 report += `- **${component}**: ${count} issues\n`;
373 |               }
374 |               report += `\n`;
375 |             }
376 | 
377 |             if (Object.keys(metrics.labels.by_type).length > 0) {
378 |               report += `### Type Distribution\n`;
379 |               for (const [type, count] of Object.entries(metrics.labels.by_type)) {
380 |                 report += `- **${type}**: ${count} issues\n`;
381 |               }
382 |               report += `\n`;
383 |             }
384 | 
385 |             // Contributors Section
386 |             report += `## 👥 Contributors\n\n`;
387 |             report += `| Metric | Count |\n`;
388 |             report += `|--------|-------|\n`;
389 |             report += `| Unique Issue Creators | ${metrics.contributors.unique_issue_creators} |\n`;
390 |             report += `| Unique Commenters | ${metrics.contributors.unique_commenters} |\n`;
391 |             report += `| Active Assignees | ${metrics.contributors.unique_assignees} |\n\n`;
392 | 
393 |             // Quality Metrics Section
394 |             report += `## ✅ Quality Metrics\n\n`;
395 |             report += `| Metric | Count |\n`;
396 |             report += `|--------|-------|\n`;
397 |             report += `| Issues Using Templates | ${metrics.quality.issues_with_templates} |\n`;
398 |             report += `| Issues Missing Information | ${metrics.quality.issues_missing_info} |\n`;
399 |             report += `| Stale Issues (>30 days) | ${metrics.quality.stale_issues} |\n\n`;
400 | 
401 |             // Recommendations Section
402 |             report += `## 💡 Recommendations\n\n`;
403 |             
404 |             if (metrics.triage.overdue_triage > 0) {
405 |               report += `- ⚠️ **${metrics.triage.overdue_triage} issues need immediate triage** (overdue >3 days)\n`;
406 |             }
407 |             
408 |             if (metrics.response_times.issues_without_response > 0) {
409 |               report += `- 📝 **${metrics.response_times.issues_without_response} issues lack maintainer response**\n`;
410 |             }
411 |             
412 |             if (metrics.quality.stale_issues > 5) {
413 |               report += `- 🧹 **Consider reviewing ${metrics.quality.stale_issues} stale issues** for closure\n`;
414 |             }
415 |             
416 |             if (metrics.quality.issues_missing_info > metrics.quality.issues_with_templates) {
417 |               report += `- 📋 **Improve issue template adoption** - many issues lack sufficient information\n`;
418 |             }
419 | 
420 |             const triageEfficiency = (metrics.triage.triaged_this_period + metrics.triage.needs_triage) > 0 ? (metrics.triage.triaged_this_period / (metrics.triage.triaged_this_period + metrics.triage.needs_triage)) * 100 : 100; // avoid NaN when nothing needed triage
421 |             if (triageEfficiency < 80) {
422 |               report += `- ⏰ **Triage efficiency is ${Math.round(triageEfficiency)}%** - consider increasing triage frequency\n`;
423 |             }
424 | 
425 |             report += `\n---\n`;
426 |             report += `*Report generated automatically by GitHub Actions*\n`;
427 | 
428 |             // Save report as an artifact and optionally create an issue
429 |             const fs = require('fs');
430 |             const reportPath = `/tmp/triage-report-${new Date().toISOString().split('T')[0]}.md`;
431 |             fs.writeFileSync(reportPath, report);
432 |             
433 |             console.log('Generated triage report:');
434 |             console.log(report);
435 | 
436 |             // For weekly reports, create a discussion or issue with the report
437 |             if (metrics.period.type === 'weekly' || '${{ github.event_name }}' === 'workflow_dispatch') {
438 |               try {
439 |                 await github.rest.issues.create({
440 |                   owner: context.repo.owner,
441 |                   repo: context.repo.repo,
442 |                   title: `📊 Weekly Triage Report - ${new Date().toISOString().split('T')[0]}`,
443 |                   body: report,
444 |                   labels: ['type: maintenance', 'status: informational']
445 |                 });
446 |               } catch (error) {
447 |                 console.log(`Could not create issue with report: ${error.message}`);
448 |               }
449 |             }
450 | 
451 |       - name: Upload Report Artifact
452 |         uses: actions/upload-artifact@v4
453 |         with:
454 |           name: triage-report-${{ github.run_id }}
455 |           path: /tmp/triage-report-*.md
456 |           retention-days: 30
```
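
Besides the two cron schedules, this workflow can be started on demand through its `workflow_dispatch` trigger. A minimal sketch using the GitHub REST API's workflow-dispatch endpoint (assumption: a token with permission to dispatch workflow runs is exported as `GITHUB_TOKEN`; owner/repo and the input names come from this repository):

```python
import os
import requests

# POST /repos/{owner}/{repo}/actions/workflows/{file}/dispatches
# starts a workflow_dispatch run on the given ref.
url = ("https://api.github.com/repos/pab1it0/prometheus-mcp-server"
       "/actions/workflows/triage-metrics.yml/dispatches")
resp = requests.post(
    url,
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    json={"ref": "main", "inputs": {"report_type": "custom", "days_back": "14"}},
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
```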

--------------------------------------------------------------------------------
/tests/test_mcp_protocol_compliance.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for MCP protocol compliance and tool functionality."""
  2 | 
  3 | import pytest
  4 | import json
  5 | import asyncio
  6 | from unittest.mock import patch, MagicMock, AsyncMock
  7 | from datetime import datetime, timezone
  8 | from prometheus_mcp_server import server
  9 | from prometheus_mcp_server.server import (
 10 |     make_prometheus_request, get_prometheus_auth, config, TransportType,
 11 |     execute_query, execute_range_query, list_metrics, get_metric_metadata, get_targets, health_check
 12 | )
 13 | 
 14 | # Test the MCP tools by testing them through async wrappers
 15 | async def execute_query_wrapper(query: str, time=None):
 16 |     """Wrapper to test execute_query functionality."""
 17 |     params = {"query": query}
 18 |     if time:
 19 |         params["time"] = time
 20 |     data = make_prometheus_request("query", params=params)
 21 |     return {"resultType": data["resultType"], "result": data["result"]}
 22 | 
 23 | async def execute_range_query_wrapper(query: str, start: str, end: str, step: str):
 24 |     """Wrapper to test execute_range_query functionality."""  
 25 |     params = {"query": query, "start": start, "end": end, "step": step}
 26 |     data = make_prometheus_request("query_range", params=params)
 27 |     return {"resultType": data["resultType"], "result": data["result"]}
 28 | 
 29 | async def list_metrics_wrapper():
 30 |     """Wrapper to test list_metrics functionality."""
 31 |     return make_prometheus_request("label/__name__/values")
 32 | 
 33 | async def get_metric_metadata_wrapper(metric: str):
 34 |     """Wrapper to test get_metric_metadata functionality."""
 35 |     params = {"metric": metric}
 36 |     data = make_prometheus_request("metadata", params=params)
 37 |     return data["data"][metric]
 38 | 
 39 | async def get_targets_wrapper():
 40 |     """Wrapper to test get_targets functionality."""
 41 |     data = make_prometheus_request("targets")
 42 |     return {"activeTargets": data["activeTargets"], "droppedTargets": data["droppedTargets"]}
 43 | 
 44 | async def health_check_wrapper():
 45 |     """Wrapper to test health_check functionality."""
 46 |     try:
 47 |         health_status = {
 48 |             "status": "healthy",
 49 |             "service": "prometheus-mcp-server", 
 50 |             "version": "1.2.3",
 51 |             "timestamp": datetime.now(timezone.utc).isoformat(),
 52 |             "transport": config.mcp_server_config.mcp_server_transport if config.mcp_server_config else "stdio",
 53 |             "configuration": {
 54 |                 "prometheus_url_configured": bool(config.url),
 55 |                 "authentication_configured": bool(config.username or config.token),
 56 |                 "org_id_configured": bool(config.org_id)
 57 |             }
 58 |         }
 59 |         
 60 |         if config.url:
 61 |             try:
 62 |                 make_prometheus_request("query", params={"query": "up", "time": str(int(datetime.now(timezone.utc).timestamp()))})
 63 |                 health_status["prometheus_connectivity"] = "healthy"
 64 |                 health_status["prometheus_url"] = config.url
 65 |             except Exception as e:
 66 |                 health_status["prometheus_connectivity"] = "unhealthy"
 67 |                 health_status["prometheus_error"] = str(e)
 68 |                 health_status["status"] = "degraded"
 69 |         else:
 70 |             health_status["status"] = "unhealthy"
 71 |             health_status["error"] = "PROMETHEUS_URL not configured"
 72 |         
 73 |         return health_status
 74 |     except Exception as e:
 75 |         return {
 76 |             "status": "unhealthy",
 77 |             "service": "prometheus-mcp-server",
 78 |             "error": str(e),
 79 |             "timestamp": datetime.now(timezone.utc).isoformat()
 80 |         }
 81 | 
 82 | 
 83 | @pytest.fixture
 84 | def mock_prometheus_response():
 85 |     """Mock successful Prometheus API response."""
 86 |     return {
 87 |         "status": "success",
 88 |         "data": {
 89 |             "resultType": "vector",
 90 |             "result": [
 91 |                 {
 92 |                     "metric": {"__name__": "up", "instance": "localhost:9090"},
 93 |                     "value": [1609459200, "1"]
 94 |                 }
 95 |             ]
 96 |         }
 97 |     }
 98 | 
 99 | 
100 | @pytest.fixture
101 | def mock_metrics_response():
102 |     """Mock Prometheus metrics list response."""
103 |     return {
104 |         "status": "success", 
105 |         "data": ["up", "prometheus_build_info", "prometheus_config_last_reload_successful"]
106 |     }
107 | 
108 | 
109 | @pytest.fixture
110 | def mock_metadata_response():
111 |     """Mock Prometheus metadata response."""
112 |     return {
113 |         "status": "success",
114 |         "data": {
115 |             "data": {
116 |                 "up": [
117 |                     {
118 |                         "type": "gauge",
119 |                         "help": "1 if the instance is healthy, 0 otherwise",
120 |                         "unit": ""
121 |                     }
122 |                 ]
123 |             }
124 |         }
125 |     }
126 | 
127 | 
128 | @pytest.fixture
129 | def mock_targets_response():
130 |     """Mock Prometheus targets response."""
131 |     return {
132 |         "status": "success",
133 |         "data": {
134 |             "activeTargets": [
135 |                 {
136 |                     "discoveredLabels": {"__address__": "localhost:9090"},
137 |                     "labels": {"instance": "localhost:9090", "job": "prometheus"},
138 |                     "scrapePool": "prometheus",
139 |                     "scrapeUrl": "http://localhost:9090/metrics",
140 |                     "lastError": "",
141 |                     "lastScrape": "2023-01-01T00:00:00Z",
142 |                     "lastScrapeDuration": 0.001,
143 |                     "health": "up"
144 |                 }
145 |             ],
146 |             "droppedTargets": []
147 |         }
148 |     }
149 | 
150 | 
151 | class TestMCPToolCompliance:
152 |     """Test MCP tool interface compliance."""
153 |     
154 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
155 |     @pytest.mark.asyncio  
156 |     async def test_execute_query_tool_signature(self, mock_request, mock_prometheus_response):
157 |         """Test execute_query tool has correct MCP signature."""
158 |         mock_request.return_value = mock_prometheus_response["data"]
159 |         
160 |         # Ensure config has a URL set for tests
161 |         original_url = config.url
162 |         if not config.url:
163 |             config.url = "http://test-prometheus:9090"
164 |             
165 |         try:
166 |             # Test required parameters
167 |             result = await execute_query_wrapper("up")
168 |             assert isinstance(result, dict)
169 |             assert "resultType" in result
170 |             assert "result" in result
171 |             
172 |             # Test optional parameters
173 |             result = await execute_query_wrapper("up", time="2023-01-01T00:00:00Z")
174 |             assert isinstance(result, dict)
175 |         finally:
176 |             config.url = original_url
177 |     
178 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
179 |     @pytest.mark.asyncio
180 |     async def test_execute_range_query_tool_signature(self, mock_request, mock_prometheus_response):
181 |         """Test execute_range_query tool has correct MCP signature."""
182 |         mock_request.return_value = mock_prometheus_response["data"]
183 |         
184 |         # Test all required parameters
185 |         result = await execute_range_query_wrapper(
186 |             query="up",
187 |             start="2023-01-01T00:00:00Z", 
188 |             end="2023-01-01T01:00:00Z",
189 |             step="1m"
190 |         )
191 |         assert isinstance(result, dict)
192 |         assert "resultType" in result
193 |         assert "result" in result
194 |     
195 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
196 |     @pytest.mark.asyncio
197 |     async def test_list_metrics_tool_signature(self, mock_request, mock_metrics_response):
198 |         """Test list_metrics tool has correct MCP signature."""
199 |         mock_request.return_value = mock_metrics_response["data"]
200 |         
201 |         result = await list_metrics_wrapper()
202 |         assert isinstance(result, list)
203 |         assert all(isinstance(metric, str) for metric in result)
204 |     
205 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
206 |     @pytest.mark.asyncio
207 |     async def test_get_metric_metadata_tool_signature(self, mock_request, mock_metadata_response):
208 |         """Test get_metric_metadata tool has correct MCP signature."""
209 |         mock_request.return_value = mock_metadata_response["data"]
210 |         
211 |         result = await get_metric_metadata_wrapper("up")
212 |         assert isinstance(result, list)
213 |         assert all(isinstance(metadata, dict) for metadata in result)
214 |     
215 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
216 |     @pytest.mark.asyncio
217 |     async def test_get_targets_tool_signature(self, mock_request, mock_targets_response):
218 |         """Test get_targets tool has correct MCP signature."""
219 |         mock_request.return_value = mock_targets_response["data"]
220 |         
221 |         result = await get_targets_wrapper()
222 |         assert isinstance(result, dict)
223 |         assert "activeTargets" in result
224 |         assert "droppedTargets" in result
225 |         assert isinstance(result["activeTargets"], list)
226 |         assert isinstance(result["droppedTargets"], list)
227 |     
228 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
229 |     @pytest.mark.asyncio
230 |     async def test_health_check_tool_signature(self, mock_request):
231 |         """Test health_check tool has correct MCP signature."""
232 |         # Mock successful Prometheus connectivity
233 |         mock_request.return_value = {"resultType": "vector", "result": []}
234 |         
235 |         result = await health_check_wrapper()
236 |         assert isinstance(result, dict)
237 |         assert "status" in result
238 |         assert "service" in result
239 |         assert "timestamp" in result
240 |         assert result["service"] == "prometheus-mcp-server"
241 | 
242 | 
243 | class TestMCPToolErrorHandling:
244 |     """Test MCP tool error handling compliance."""
245 |     
246 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
247 |     @pytest.mark.asyncio
248 |     async def test_execute_query_handles_prometheus_errors(self, mock_request):
249 |         """Test execute_query handles Prometheus API errors gracefully."""
250 |         mock_request.side_effect = ValueError("Prometheus API error: query timeout")
251 |         
252 |         with pytest.raises(ValueError):
253 |             await execute_query_wrapper("invalid_query{")
254 |     
255 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
256 |     @pytest.mark.asyncio
257 |     async def test_execute_range_query_handles_network_errors(self, mock_request):
258 |         """Test execute_range_query handles network errors gracefully."""
259 |         import requests
260 |         mock_request.side_effect = requests.exceptions.ConnectionError("Connection refused")
261 |         
262 |         with pytest.raises(requests.exceptions.ConnectionError):
263 |             await execute_range_query_wrapper("up", "now-1h", "now", "1m")
264 |     
265 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
266 |     @pytest.mark.asyncio
267 |     async def test_health_check_handles_configuration_errors(self, mock_request):
268 |         """Test health_check handles configuration errors gracefully."""
269 |         # Test with missing Prometheus URL
270 |         original_url = config.url
271 |         config.url = ""
272 |         
273 |         try:
274 |             result = await health_check_wrapper()
275 |             assert result["status"] == "unhealthy" 
276 |             assert "error" in result or "PROMETHEUS_URL" in str(result)
277 |         finally:
278 |             config.url = original_url
279 |     
280 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
281 |     @pytest.mark.asyncio
282 |     async def test_health_check_handles_connectivity_errors(self, mock_request):
283 |         """Test health_check handles Prometheus connectivity errors."""
284 |         mock_request.side_effect = Exception("Connection timeout")
285 |         
286 |         result = await health_check_wrapper()
287 |         assert result["status"] in ["unhealthy", "degraded"]
288 |         assert "prometheus_connectivity" in result or "error" in result
289 | 
290 | 
291 | class TestMCPDataFormats:
292 |     """Test MCP tool data format compliance."""
293 |     
294 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
295 |     @pytest.mark.asyncio
296 |     async def test_execute_query_returns_valid_json(self, mock_request, mock_prometheus_response):
297 |         """Test execute_query returns JSON-serializable data."""
298 |         mock_request.return_value = mock_prometheus_response["data"]
299 |         
300 |         result = await execute_query_wrapper("up")
301 |         
302 |         # Verify JSON serializability
303 |         json_str = json.dumps(result)
304 |         assert json_str is not None
305 |         
306 |         # Verify structure
307 |         parsed = json.loads(json_str)
308 |         assert "resultType" in parsed
309 |         assert "result" in parsed
310 |     
311 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
312 |     @pytest.mark.asyncio
313 |     async def test_all_tools_return_json_serializable_data(self, mock_request):
314 |         """Test all MCP tools return JSON-serializable data."""
315 |         # Setup various mock responses
316 |         mock_request.side_effect = [
317 |             {"resultType": "vector", "result": []},  # execute_query
318 |             {"resultType": "matrix", "result": []},  # execute_range_query
319 |             ["metric1", "metric2"],  # list_metrics
320 |             {"data": {"metric1": [{"type": "gauge", "help": "test"}]}},  # get_metric_metadata
321 |             {"activeTargets": [], "droppedTargets": []},  # get_targets
322 |         ]
323 |         
324 |         # Test all tools
325 |         tools_and_calls = [
326 |             (execute_query_wrapper, ("up",)),
327 |             (execute_range_query_wrapper, ("up", "now-1h", "now", "1m")),
328 |             (list_metrics_wrapper, ()),
329 |             (get_metric_metadata_wrapper, ("metric1",)),
330 |             (get_targets_wrapper, ()),
331 |         ]
332 |         
333 |         for tool, args in tools_and_calls:
334 |             result = await tool(*args)
335 |             
336 |             # Verify JSON serializability
337 |             try:
338 |                 json_str = json.dumps(result)
339 |                 assert json_str is not None
340 |             except (TypeError, ValueError) as e:
341 |                 pytest.fail(f"Tool {tool.__name__} returned non-JSON-serializable data: {e}")
342 | 
343 | 
344 | class TestMCPServerConfiguration:
345 |     """Test MCP server configuration compliance."""
346 |     
347 |     def test_transport_type_validation(self):
348 |         """Test transport type validation works correctly."""
349 |         # Valid transport types
350 |         valid_transports = ["stdio", "http", "sse"]
351 |         for transport in valid_transports:
352 |             assert transport in TransportType.values()
353 |         
354 |         # Invalid transport types should not be in values
355 |         invalid_transports = ["tcp", "websocket", "grpc"]
356 |         for transport in invalid_transports:
357 |             assert transport not in TransportType.values()
358 |     
359 |     def test_server_config_validation(self):
360 |         """Test server configuration validation."""
361 |         from prometheus_mcp_server.server import MCPServerConfig, PrometheusConfig
362 |         
363 |         # Valid configuration
364 |         mcp_config = MCPServerConfig(
365 |             mcp_server_transport="http",
366 |             mcp_bind_host="127.0.0.1", 
367 |             mcp_bind_port=8080
368 |         )
369 |         assert mcp_config.mcp_server_transport == "http"
370 |         
371 |         # Test Prometheus config
372 |         prometheus_config = PrometheusConfig(
373 |             url="http://prometheus:9090",
374 |             mcp_server_config=mcp_config
375 |         )
376 |         assert prometheus_config.url == "http://prometheus:9090"
377 |     
378 |     def test_authentication_configuration(self):
379 |         """Test authentication configuration options."""
380 |         from prometheus_mcp_server.server import get_prometheus_auth
381 |         
382 |         # Test with no authentication
383 |         original_config = {
384 |             'username': config.username,
385 |             'password': config.password, 
386 |             'token': config.token
387 |         }
388 |         
389 |         try:
390 |             config.username = ""
391 |             config.password = ""
392 |             config.token = ""
393 |             
394 |             auth = get_prometheus_auth()
395 |             assert auth is None
396 |             
397 |             # Test with basic auth
398 |             config.username = "testuser"
399 |             config.password = "testpass"
400 |             config.token = ""
401 |             
402 |             auth = get_prometheus_auth()
403 |             assert auth is not None
404 |             
405 |             # Test with token auth (should take precedence)
406 |             config.token = "test-token"
407 |             
408 |             auth = get_prometheus_auth()
409 |             assert auth is not None
410 |             assert "Authorization" in auth
411 |             assert "Bearer" in auth["Authorization"]
412 |             
413 |         finally:
414 |             # Restore original config
415 |             config.username = original_config['username']
416 |             config.password = original_config['password']
417 |             config.token = original_config['token']
418 | 
419 | 
420 | class TestMCPProtocolVersioning:
421 |     """Test MCP protocol versioning and capabilities."""
422 |     
423 |     def test_mcp_server_info(self):
424 |         """Test MCP server provides correct server information."""
425 |         # Test FastMCP server instantiation
426 |         from prometheus_mcp_server.server import mcp
427 |         
428 |         assert mcp is not None
429 |         # FastMCP should have a name
430 |         assert hasattr(mcp, 'name') or hasattr(mcp, '_name')
431 |     
432 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
433 |     @pytest.mark.asyncio
434 |     async def test_tool_descriptions_are_present(self, mock_request):
435 |         """Test that all MCP tools have proper descriptions."""
436 |         # All tools should be registered with descriptions
437 |         tools = [
438 |             execute_query,
439 |             execute_range_query,
440 |             list_metrics,
441 |             get_metric_metadata,
442 |             get_targets,
443 |             health_check
444 |         ]
445 |         
446 |         for tool in tools:
447 |             # Each tool should have a description (FastMCP tools have description attribute)
448 |             assert hasattr(tool, 'description')
449 |             assert tool.description is not None and tool.description.strip() != ""
450 |     
451 |     def test_server_capabilities(self):
452 |         """Test server declares proper MCP capabilities."""
453 |         # Test that the server supports the expected transports
454 |         transports = ["stdio", "http", "sse"]
455 |         
456 |         for transport in transports:
457 |             assert transport in TransportType.values()
458 |     
459 |     @pytest.mark.asyncio
460 |     async def test_error_response_format(self):
461 |         """Test that error responses follow MCP format."""
462 |         # Test with invalid configuration to trigger errors
463 |         original_url = config.url
464 |         config.url = ""
465 |         
466 |         try:
467 |             result = await health_check_wrapper()
468 |             
469 |             # Error responses should be structured
470 |             assert isinstance(result, dict)
471 |             assert "status" in result
472 |             assert result["status"] in ["unhealthy", "degraded", "error"]
473 |             
474 |         finally:
475 |             config.url = original_url
476 | 
477 | 
478 | class TestMCPConcurrencyAndPerformance:
479 |     """Test MCP tools handle concurrency and perform well."""
480 |     
481 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
482 |     @pytest.mark.asyncio
483 |     async def test_concurrent_tool_execution(self, mock_request, mock_prometheus_response):
484 |         """Test tools can handle concurrent execution."""
485 |         def mock_side_effect(endpoint, params=None):
486 |             if endpoint == "targets":
487 |                 return {"activeTargets": [], "droppedTargets": []}
488 |             elif endpoint == "label/__name__/values":
489 |                 return ["up", "prometheus_build_info"]
490 |             else:
491 |                 return mock_prometheus_response["data"]
492 |         
493 |         mock_request.side_effect = mock_side_effect
494 |         
495 |         # Create multiple concurrent tasks
496 |         tasks = [
497 |             execute_query_wrapper("up"),
498 |             execute_query_wrapper("prometheus_build_info"),
499 |             list_metrics_wrapper(),
500 |             get_targets_wrapper()
501 |         ]
502 |         
503 |         # Execute concurrently
504 |         results = await asyncio.gather(*tasks)
505 |         
506 |         # All should complete successfully
507 |         assert len(results) == 4
508 |         for result in results:
509 |             assert result is not None
510 |     
511 |     @patch('test_mcp_protocol_compliance.make_prometheus_request')
512 |     @pytest.mark.asyncio
513 |     async def test_tool_timeout_handling(self, mock_request):
514 |         """Test tools handle timeouts gracefully."""
515 |         # Simulate slow response
516 |         def slow_response(*args, **kwargs):
517 |             import time
518 |             time.sleep(0.1)
519 |             return {"resultType": "vector", "result": []}
520 |         
521 |         mock_request.side_effect = slow_response
522 |         
523 |         # This should complete (not testing actual timeout, just that it's async)
524 |         result = await execute_query_wrapper("up")
525 |         assert result is not None
```
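
One detail worth calling out: the `@patch('test_mcp_protocol_compliance.make_prometheus_request')` decorators target the test module's own namespace, because the wrapper functions call the name imported at the top of the file, not the attribute on `prometheus_mcp_server.server`. A self-contained sketch of that lookup rule (the module names `lib` and `consumer` are illustrative stand-ins):

```python
import sys
import types
from unittest.mock import patch

lib = types.ModuleType("lib")            # stands in for prometheus_mcp_server.server
lib.fetch = lambda: "real"
sys.modules["lib"] = lib

consumer = types.ModuleType("consumer")  # stands in for the test module
exec("from lib import fetch\ndef get(): return fetch()", consumer.__dict__)
sys.modules["consumer"] = consumer

with patch("lib.fetch", return_value="mock"):
    assert consumer.get() == "real"      # the consumer's imported copy is untouched
with patch("consumer.fetch", return_value="mock"):
    assert consumer.get() == "mock"      # patch where the name is looked up at call time
```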

--------------------------------------------------------------------------------
/.github/workflows/bug-triage.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Bug Triage Automation
  2 | 
  3 | on:
  4 |   issues:
  5 |     types: [opened, edited, labeled, unlabeled, assigned, unassigned]
  6 |   issue_comment:
  7 |     types: [created, edited]
  8 |   pull_request:
  9 |     types: [opened, closed]  # 'merged' is not a valid activity type; merges arrive as 'closed' events
 10 |   schedule:
 11 |     # Run triage check every 6 hours
 12 |     - cron: '0 */6 * * *'
 13 |   workflow_dispatch:
 14 |     inputs:
 15 |       triage_all:
 16 |         description: 'Re-triage all open issues'
 17 |         required: false
 18 |         default: false
 19 |         type: boolean
 20 | 
 21 | jobs:
 22 |   auto-triage:
 23 |     runs-on: ubuntu-latest
 24 |     if: github.event_name == 'issues' || github.event_name == 'issue_comment'
 25 |     permissions:
 26 |       issues: write
 27 |       contents: read
 28 |       pull-requests: read
 29 | 
 30 |     steps:
 31 |       - name: Checkout repository
 32 |         uses: actions/checkout@v4
 33 | 
 34 |       - name: Auto-label new issues
 35 |         if: github.event.action == 'opened' && github.event_name == 'issues'
 36 |         uses: actions/github-script@v7
 37 |         with:
 38 |           script: |
 39 |             const issue = context.payload.issue;
 40 |             const title = issue.title.toLowerCase();
 41 |             const body = issue.body ? issue.body.toLowerCase() : '';
 42 |             const labels = [];
 43 | 
 44 |             // Severity-based labeling
 45 |             if (title.includes('critical') || title.includes('crash') || title.includes('data loss') || 
 46 |                 body.includes('critical') || body.includes('crash') || body.includes('data loss')) {
 47 |               labels.push('priority: critical');
 48 |             } else if (title.includes('urgent') || title.includes('blocking') || 
 49 |                       body.includes('urgent') || body.includes('blocking')) {
 50 |               labels.push('priority: high');
 51 |             } else if (title.includes('minor') || title.includes('cosmetic') ||
 52 |                       body.includes('minor') || body.includes('cosmetic')) {
 53 |               labels.push('priority: low');
 54 |             } else {
 55 |               labels.push('priority: medium');
 56 |             }
 57 | 
 58 |             // Component-based labeling
 59 |             if (title.includes('prometheus') || title.includes('metrics') || title.includes('query') ||
 60 |                 body.includes('prometheus') || body.includes('metrics') || body.includes('promql')) {
 61 |               labels.push('component: prometheus');
 62 |             }
 63 |             if (title.includes('mcp') || title.includes('server') || title.includes('transport') ||
 64 |                 body.includes('mcp') || body.includes('server') || body.includes('transport')) {
 65 |               labels.push('component: mcp-server');
 66 |             }
 67 |             if (title.includes('docker') || title.includes('container') || title.includes('deployment') ||
 68 |                 body.includes('docker') || body.includes('container') || body.includes('deployment')) {
 69 |               labels.push('component: deployment');
 70 |             }
 71 |             if (title.includes('auth') || title.includes('authentication') || title.includes('token') ||
 72 |                 body.includes('auth') || body.includes('authentication') || body.includes('token')) {
 73 |               labels.push('component: authentication');
 74 |             }
 75 | 
 76 |             // Type-based labeling
 77 |             if (title.includes('feature') || title.includes('enhancement') || title.includes('improvement') ||
 78 |                 body.includes('feature request') || body.includes('enhancement')) {
 79 |               labels.push('type: feature');
 80 |             } else if (title.includes('doc') || title.includes('documentation') ||
 81 |                       body.includes('documentation')) {
 82 |               labels.push('type: documentation');
 83 |             } else if (title.includes('test') || body.includes('test')) {
 84 |               labels.push('type: testing');
 85 |             } else if (title.includes('performance') || body.includes('performance') || 
 86 |                       title.includes('slow') || body.includes('slow')) {
 87 |               labels.push('type: performance');
 88 |             } else {
 89 |               labels.push('type: bug');
 90 |             }
 91 | 
 92 |             // Environment-based labeling
 93 |             if (body.includes('windows') || title.includes('windows')) {
 94 |               labels.push('env: windows');
 95 |             } else if (body.includes('macos') || body.includes('mac os') || title.includes('macos')) {
 96 |               labels.push('env: macos');
 97 |             } else if (body.includes('linux') || title.includes('linux')) {
 98 |               labels.push('env: linux');
 99 |             }
100 | 
101 |             // Add status label
102 |             labels.push('status: needs-triage');
103 | 
104 |             if (labels.length > 0) {
105 |               await github.rest.issues.addLabels({
106 |                 owner: context.repo.owner,
107 |                 repo: context.repo.repo,
108 |                 issue_number: issue.number,
109 |                 labels: labels
110 |               });
111 |             }
112 | 
113 |       - name: Auto-assign based on component
114 |         if: github.event.action == 'labeled' && github.event_name == 'issues'
115 |         uses: actions/github-script@v7
116 |         with:
117 |           script: |
118 |             const issue = context.payload.issue;
119 |             const labelName = context.payload.label.name;
120 |             
121 |             // Define component maintainers
122 |             const componentAssignees = {
123 |               'component: prometheus': ['pab1it0'],
124 |               'component: mcp-server': ['pab1it0'],
125 |               'component: deployment': ['pab1it0'],
126 |               'component: authentication': ['pab1it0']
127 |             };
128 |             
129 |             if (componentAssignees[labelName] && issue.assignees.length === 0) {
130 |               await github.rest.issues.addAssignees({
131 |                 owner: context.repo.owner,
132 |                 repo: context.repo.repo,
133 |                 issue_number: issue.number,
134 |                 assignees: componentAssignees[labelName]
135 |               });
136 |             }
137 | 
138 |       - name: Update triage status
139 |         if: github.event.action == 'assigned' && github.event_name == 'issues'
140 |         uses: actions/github-script@v7
141 |         with:
142 |           script: |
143 |             const issue = context.payload.issue;
144 |             const hasTriageLabel = issue.labels.some(label => label.name === 'status: needs-triage');
145 |             
146 |             if (hasTriageLabel) {
147 |               await github.rest.issues.removeLabel({
148 |                 owner: context.repo.owner,
149 |                 repo: context.repo.repo,
150 |                 issue_number: issue.number,
151 |                 name: 'status: needs-triage'
152 |               });
153 |               
154 |               await github.rest.issues.addLabels({
155 |                 owner: context.repo.owner,
156 |                 repo: context.repo.repo,
157 |                 issue_number: issue.number,
158 |                 labels: ['status: in-progress']
159 |               });
160 |             }
161 | 
162 |       - name: Welcome new contributors
163 |         if: github.event.action == 'opened' && github.event_name == 'issues'
164 |         uses: actions/github-script@v7
165 |         with:
166 |           script: |
167 |             const issue = context.payload.issue;
168 |             const author = issue.user.login;
169 |             
170 |             // Check if this is the user's first issue
171 |             const issues = await github.rest.issues.listForRepo({
172 |               owner: context.repo.owner,
173 |               repo: context.repo.repo,
174 |               creator: author,
175 |               state: 'all'
176 |             });
177 |             
178 |             if (issues.data.length === 1) {
179 |               const welcomeMessage = `
180 |             👋 Welcome to the Prometheus MCP Server project, @${author}!
181 | 
182 |             Thank you for taking the time to report this issue. This project provides AI assistants with access to Prometheus metrics through the Model Context Protocol (MCP).
183 | 
184 |             To help us resolve your issue quickly:
185 |             - Please ensure you've filled out all relevant sections of the issue template
186 |             - Include your environment details (OS, Python version, Prometheus version)
187 |             - Provide steps to reproduce if applicable
188 |             - Check if this might be related to Prometheus configuration rather than the MCP server
189 | 
190 |             A maintainer will review and triage your issue soon. If you're interested in contributing a fix, please feel free to submit a pull request!
191 | 
192 |             **Useful resources:**
193 |             - [Configuration Guide](https://github.com/pab1it0/prometheus-mcp-server/blob/main/docs/configuration.md)
194 |             - [Installation Guide](https://github.com/pab1it0/prometheus-mcp-server/blob/main/docs/installation.md)
195 |             - [Contributing Guidelines](https://github.com/pab1it0/prometheus-mcp-server/blob/main/docs/contributing.md)
196 |             `;
197 |               
198 |               await github.rest.issues.createComment({
199 |                 owner: context.repo.owner,
200 |                 repo: context.repo.repo,
201 |                 issue_number: issue.number,
202 |                 body: welcomeMessage
203 |               });
204 |             }
205 | 
206 |   scheduled-triage:
207 |     runs-on: ubuntu-latest
208 |     if: github.event_name == 'schedule' || github.event.inputs.triage_all == 'true'
209 |     permissions:
210 |       issues: write
211 |       contents: read
212 | 
213 |     steps:
214 |       - name: Checkout repository
215 |         uses: actions/checkout@v4
216 | 
217 |       - name: Triage stale issues
218 |         uses: actions/github-script@v7
219 |         with:
220 |           script: |
221 |             const { data: issues } = await github.rest.issues.listForRepo({
222 |               owner: context.repo.owner,
223 |               repo: context.repo.repo,
224 |               state: 'open',
225 |               sort: 'updated',
226 |               direction: 'asc',
227 |               per_page: 100
228 |             });
229 | 
230 |             const now = new Date();
231 |             const sevenDaysAgo = new Date(now.getTime() - (7 * 24 * 60 * 60 * 1000));
232 |             const thirtyDaysAgo = new Date(now.getTime() - (30 * 24 * 60 * 60 * 1000));
233 | 
234 |             for (const issue of issues) {
235 |               if (issue.pull_request) continue; // Skip PRs
236 |               
237 |               const updatedAt = new Date(issue.updated_at);
238 |               const hasNeedsTriageLabel = issue.labels.some(label => label.name === 'status: needs-triage');
239 |               const hasStaleLabel = issue.labels.some(label => label.name === 'status: stale');
240 |               const hasWaitingLabel = issue.labels.some(label => label.name === 'status: waiting-for-response');
241 | 
242 |               // Mark issues as stale if no activity for 30 days
243 |               if (updatedAt < thirtyDaysAgo && !hasStaleLabel && !hasWaitingLabel) {
244 |                 await github.rest.issues.addLabels({
245 |                   owner: context.repo.owner,
246 |                   repo: context.repo.repo,
247 |                   issue_number: issue.number,
248 |                   labels: ['status: stale']
249 |                 });
250 | 
251 |                 await github.rest.issues.createComment({
252 |                   owner: context.repo.owner,
253 |                   repo: context.repo.repo,
254 |                   issue_number: issue.number,
255 |                   body: `This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.`
256 |                 });
257 |               }
258 | 
259 |               // Auto-close issues that have been stale for 7+ days with no further activity.
    |               // (Adding the stale label/comment bumps updated_at, so the window
    |               //  is measured against sevenDaysAgo rather than thirtyDaysAgo.)
260 |               else if (hasStaleLabel && updatedAt < sevenDaysAgo) {
261 |                 const comments = await github.rest.issues.listComments({
262 |                   owner: context.repo.owner,
263 |                   repo: context.repo.repo,
264 |                   issue_number: issue.number
265 |                 });
266 | 
267 |                 const staleComment = comments.data.find(comment => 
268 |                   comment.body.includes('automatically marked as stale')
269 |                 );
270 | 
271 |                 if (staleComment) {
272 |                   const staleCommentDate = new Date(staleComment.created_at);
273 |                   const sevenDaysAfterStale = new Date(staleCommentDate.getTime() + (7 * 24 * 60 * 60 * 1000));
274 | 
275 |                   if (now > sevenDaysAfterStale) {
276 |                     await github.rest.issues.update({
277 |                       owner: context.repo.owner,
278 |                       repo: context.repo.repo,
279 |                       issue_number: issue.number,
280 |                       state: 'closed'
281 |                     });
282 | 
283 |                     await github.rest.issues.createComment({
284 |                       owner: context.repo.owner,
285 |                       repo: context.repo.repo,
286 |                       issue_number: issue.number,
287 |                       body: `This issue has been automatically closed due to inactivity. If you believe this issue is still relevant, please reopen it with updated information.`
288 |                     });
289 |                   }
290 |                 }
291 |               }
292 | 
293 |               // Remove needs-triage if issue has been responded to by maintainer
294 |               else if (hasNeedsTriageLabel && updatedAt > sevenDaysAgo) {
295 |                 const comments = await github.rest.issues.listComments({
296 |                   owner: context.repo.owner,
297 |                   repo: context.repo.repo,
298 |                   issue_number: issue.number
299 |                 });
300 | 
301 |                 const maintainerResponse = comments.data.some(comment => 
302 |                   comment.user.login === 'pab1it0' && 
303 |                   new Date(comment.created_at) > sevenDaysAgo
304 |                 );
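    |                 // The maintainer login is hardcoded; widen this check if
    |                 // additional maintainers join the project.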
305 | 
306 |                 if (maintainerResponse) {
307 |                   await github.rest.issues.removeLabel({
308 |                     owner: context.repo.owner,
309 |                     repo: context.repo.repo,
310 |                     issue_number: issue.number,
311 |                     name: 'status: needs-triage'
312 |                   });
313 |                 }
314 |               }
315 |             }
316 | 
317 |   metrics-report:
318 |     runs-on: ubuntu-latest
319 |     if: github.event_name == 'schedule'
320 |     permissions:
321 |       issues: read
322 |       contents: read
323 | 
324 |     steps:
325 |       - name: Generate triage metrics
326 |         uses: actions/github-script@v7
327 |         with:
328 |           script: |
329 |             const { data: issues } = await github.rest.issues.listForRepo({
330 |               owner: context.repo.owner,
331 |               repo: context.repo.repo,
332 |               state: 'all',
333 |               per_page: 100
334 |             });
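    |             // Only the 100 most recently created issues are fetched (no
    |             // pagination), so these metrics are approximate for larger repos.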
335 | 
336 |             const now = new Date();
337 |             const oneWeekAgo = new Date(now.getTime() - (7 * 24 * 60 * 60 * 1000));
338 |             const oneMonthAgo = new Date(now.getTime() - (30 * 24 * 60 * 60 * 1000)); // currently unused
339 | 
340 |             let metrics = {
341 |               total_open: 0,
342 |               needs_triage: 0,
343 |               in_progress: 0,
344 |               waiting_response: 0,
345 |               stale: 0,
346 |               new_this_week: 0,
347 |               closed_this_week: 0,
348 |               by_priority: { critical: 0, high: 0, medium: 0, low: 0 },
349 |               by_component: { prometheus: 0, 'mcp-server': 0, deployment: 0, authentication: 0 },
350 |               by_type: { bug: 0, feature: 0, documentation: 0, performance: 0 }
351 |             };
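    |             // All category counts below are label-driven; issues without
    |             // matching labels contribute only to total_open.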
352 | 
353 |             for (const issue of issues) {
354 |               if (issue.pull_request) continue;
355 | 
356 |               const createdAt = new Date(issue.created_at);
357 |               const closedAt = issue.closed_at ? new Date(issue.closed_at) : null;
358 | 
359 |               if (issue.state === 'open') {
360 |                 metrics.total_open++;
361 | 
362 |                 // Count by status
363 |                 issue.labels.forEach(label => {
364 |                   if (label.name === 'status: needs-triage') metrics.needs_triage++;
365 |                   if (label.name === 'status: in-progress') metrics.in_progress++;
366 |                   if (label.name === 'status: waiting-for-response') metrics.waiting_response++;
367 |                   if (label.name === 'status: stale') metrics.stale++;
368 | 
369 |                   // Count by priority
370 |                   if (label.name.startsWith('priority: ')) {
371 |                     const priority = label.name.replace('priority: ', '');
372 |                     if (metrics.by_priority[priority] !== undefined) {
373 |                       metrics.by_priority[priority]++;
374 |                     }
375 |                   }
376 | 
377 |                   // Count by component
378 |                   if (label.name.startsWith('component: ')) {
379 |                     const component = label.name.replace('component: ', '');
380 |                     if (metrics.by_component[component] !== undefined) {
381 |                       metrics.by_component[component]++;
382 |                     }
383 |                   }
384 | 
385 |                   // Count by type
386 |                   if (label.name.startsWith('type: ')) {
387 |                     const type = label.name.replace('type: ', '');
388 |                     if (metrics.by_type[type] !== undefined) {
389 |                       metrics.by_type[type]++;
390 |                     }
391 |                   }
392 |                 });
393 |               }
394 | 
395 |               // Count new issues this week
396 |               if (createdAt > oneWeekAgo) {
397 |                 metrics.new_this_week++;
398 |               }
399 | 
400 |               // Count closed issues this week
401 |               if (closedAt && closedAt > oneWeekAgo) {
402 |                 metrics.closed_this_week++;
403 |               }
404 |             }
405 | 
406 |             // Log metrics (can be extended to send to external systems)
407 |             console.log('=== ISSUE TRIAGE METRICS ===');
408 |             console.log(`Total Open Issues: ${metrics.total_open}`);
409 |             console.log(`Needs Triage: ${metrics.needs_triage}`);
410 |             console.log(`In Progress: ${metrics.in_progress}`);
411 |             console.log(`Waiting for Response: ${metrics.waiting_response}`);
412 |             console.log(`Stale Issues: ${metrics.stale}`);
413 |             console.log(`New This Week: ${metrics.new_this_week}`);
414 |             console.log(`Closed This Week: ${metrics.closed_this_week}`);
415 |             console.log('Priority Distribution:', JSON.stringify(metrics.by_priority));
416 |             console.log('Component Distribution:', JSON.stringify(metrics.by_component));
417 |             console.log('Type Distribution:', JSON.stringify(metrics.by_type));
418 | 
419 |   pr-integration:
420 |     runs-on: ubuntu-latest
421 |     if: github.event_name == 'pull_request'
422 |     permissions:
423 |       issues: write
424 |       pull-requests: write
425 |       contents: read
426 | 
427 |     steps:
428 |       - name: Link PR to issues
429 |         if: github.event.action == 'opened'
430 |         uses: actions/github-script@v7
431 |         with:
432 |           script: |
433 |             const pr = context.payload.pull_request;
434 |             const body = pr.body || '';
435 |             
436 |             // Extract issue numbers from PR body
437 |             const issueMatches = body.match(/(close|closes|closed|fix|fixes|fixed|resolve|resolves|resolved)\s+#(\d+)/gi);
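    |             // Mirrors GitHub's closing keywords ("fixes #12", "closes #34",
    |             // etc.); case-insensitive, and may yield multiple matches per PR.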
438 |             
439 |             if (issueMatches) {
440 |               for (const match of issueMatches) {
441 |                 const issueNumber = match.match(/#(\d+)/)[1];
442 |                 
443 |                 try {
444 |                   // Add a comment to the issue
445 |                   await github.rest.issues.createComment({
446 |                     owner: context.repo.owner,
447 |                     repo: context.repo.repo,
448 |                     issue_number: parseInt(issueNumber),
449 |                     body: `🔗 This issue is being addressed by PR #${pr.number}`
450 |                   });
451 |                   
452 |                   // Add in-review label to the issue
453 |                   await github.rest.issues.addLabels({
454 |                     owner: context.repo.owner,
455 |                     repo: context.repo.repo,
456 |                     issue_number: parseInt(issueNumber),
457 |                     labels: ['status: in-review']
458 |                   });
459 |                 } catch (error) {
460 |                   console.log(`Could not update issue #${issueNumber}: ${error.message}`);
461 |                 }
462 |               }
463 |             }
464 | 
465 |       - name: Update issue status on PR merge
466 |         if: github.event.action == 'closed' && github.event.pull_request.merged
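    |         # Fires only for merged PRs; PRs closed without merging are ignored.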
467 |         uses: actions/github-script@v7
468 |         with:
469 |           script: |
470 |             const pr = context.payload.pull_request;
471 |             const body = pr.body || '';
472 |             
473 |             // Extract issue numbers from PR body
474 |             const issueMatches = body.match(/(close|closes|closed|fix|fixes|fixed|resolve|resolves|resolved)\s+#(\d+)/gi);
475 |             
476 |             if (issueMatches) {
477 |               for (const match of issueMatches) {
478 |                 const issueNumber = match.match(/#(\d+)/)[1];
479 |                 
480 |                 try {
481 |                   // Add a comment to the issue
482 |                   await github.rest.issues.createComment({
483 |                     owner: context.repo.owner,
484 |                     repo: context.repo.repo,
485 |                     issue_number: parseInt(issueNumber),
486 |                     body: `✅ This issue has been resolved by PR #${pr.number}, which was merged in commit ${pr.merge_commit_sha}`
487 |                   });
488 |                   
489 |                   // Remove in-review label
490 |                   try {
491 |                     await github.rest.issues.removeLabel({
492 |                       owner: context.repo.owner,
493 |                       repo: context.repo.repo,
494 |                       issue_number: parseInt(issueNumber),
495 |                       name: 'status: in-review'
496 |                     });
497 |                   } catch (error) {
498 |                     // Label might not exist, ignore
499 |                   }
500 |                   
501 |                 } catch (error) {
502 |                   console.log(`Could not update issue #${issueNumber}: ${error.message}`);
503 |                 }
504 |               }
505 |             }
```