This is page 1 of 8. Use http://codebase.md/manusa/kubernetes-mcp-server?page={x} to view the full context.

# Directory Structure

```
├── .github
│   ├── dependabot.yml
│   └── workflows
│       ├── build.yaml
│       ├── gevals.yaml
│       ├── release-helm.yaml
│       ├── release-image.yml
│       └── release.yaml
├── .gitignore
├── AGENTS.md
├── build
│   ├── gevals.mk
│   ├── helm.mk
│   ├── keycloak.mk
│   ├── kind.mk
│   ├── node.mk
│   ├── python.mk
│   └── tools.mk
├── charts
│   └── kubernetes-mcp-server
│       ├── .helmignore
│       ├── Chart.yaml
│       ├── README.md
│       ├── README.md.gotmpl
│       ├── templates
│       │   ├── _helpers.tpl
│       │   ├── configmap.yaml
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   ├── service.yaml
│       │   └── serviceaccount.yaml
│       └── values.yaml
├── CLAUDE.md
├── cmd
│   └── kubernetes-mcp-server
│       ├── main_test.go
│       └── main.go
├── dev
│   └── config
│       ├── cert-manager
│       │   └── selfsigned-issuer.yaml
│       ├── ingress
│       │   └── nginx-ingress.yaml
│       ├── keycloak
│       │   ├── client-scopes
│       │   │   ├── groups.json
│       │   │   ├── mcp-openshift.json
│       │   │   └── mcp-server.json
│       │   ├── clients
│       │   │   ├── mcp-client.json
│       │   │   ├── mcp-server-update.json
│       │   │   ├── mcp-server.json
│       │   │   └── openshift.json
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   ├── mappers
│       │   │   ├── groups-membership.json
│       │   │   ├── mcp-server-audience.json
│       │   │   ├── openshift-audience.json
│       │   │   └── username.json
│       │   ├── rbac.yaml
│       │   ├── realm
│       │   │   ├── realm-create.json
│       │   │   └── realm-events-config.json
│       │   └── users
│       │       └── mcp.json
│       └── kind
│           └── cluster.yaml
├── Dockerfile
├── docs
│   ├── GETTING_STARTED_CLAUDE_CODE.md
│   ├── GETTING_STARTED_KUBERNETES.md
│   ├── images
│   │   ├── keycloak-login-page.png
│   │   ├── keycloak-mcp-inspector-connect.png
│   │   ├── keycloak-mcp-inspector-results.png
│   │   ├── kubernetes-mcp-server-github-copilot.jpg
│   │   └── vibe-coding.jpg
│   ├── KEYCLOAK_OIDC_SETUP.md
│   ├── KIALI.md
│   ├── PROMPTS.md
│   └── README.md
├── evals
│   ├── claude-code
│   │   ├── agent.yaml
│   │   ├── eval-inline.yaml
│   │   └── eval.yaml
│   ├── mcp-config.yaml
│   ├── openai-agent
│   │   ├── agent.yaml
│   │   ├── eval-inline.yaml
│   │   └── eval.yaml
│   ├── README.md
│   └── tasks
│       ├── create-canary-deployment
│       │   ├── artifacts
│       │   │   ├── deployment-v1.yaml
│       │   │   └── service.yaml
│       │   ├── cleanup.sh
│       │   ├── create-canary-deployment.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── create-network-policy
│       │   ├── artifacts
│       │   │   └── desired-policy.yaml
│       │   ├── cleanup.sh
│       │   ├── create-network-policy.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── create-pod
│       │   ├── cleanup.sh
│       │   ├── create-pod.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── create-pod-mount-configmaps
│       │   ├── cleanup.sh
│       │   ├── create-pod-mount-configmaps.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── create-pod-resources-limits
│       │   ├── cleanup.sh
│       │   ├── create-pod-resources-limits.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── create-simple-rbac
│       │   ├── cleanup.sh
│       │   ├── create-simple-rbac.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── debug-app-logs
│       │   ├── artifacts
│       │   │   ├── calc-app-pod.yaml
│       │   │   └── calc-app.py
│       │   ├── cleanup.sh
│       │   ├── debug-app-logs.yaml
│       │   └── setup.sh
│       ├── deployment-traffic-switch
│       │   ├── artifacts
│       │   │   ├── blue-deployment.yaml
│       │   │   ├── green-deployment.yaml
│       │   │   └── service.yaml
│       │   ├── cleanup.sh
│       │   ├── deployment-traffix-switch.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-crashloop
│       │   ├── cleanup.sh
│       │   ├── fix-crashloop.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-image-pull
│       │   ├── cleanup.sh
│       │   ├── fix-image-pull.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-pending-pod
│       │   ├── artifacts
│       │   │   ├── homepage-pod.yaml
│       │   │   └── homepage-pvc.yaml
│       │   ├── cleanup.sh
│       │   ├── fix-pending-pod.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-probes
│       │   ├── cleanup.sh
│       │   ├── fix-probes.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-rbac-wrong-resource
│       │   ├── cleanup.sh
│       │   ├── fix-rbac-wrong-resource.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-service-routing
│       │   ├── cleanup.sh
│       │   ├── fix-service-routing.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── fix-service-with-no-endpoints
│       │   ├── artifacts
│       │   │   ├── deployment.yaml
│       │   │   └── service.yaml
│       │   ├── cleanup.sh
│       │   ├── fix-service-with-no-endpoints.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── horizontal-pod-autoscaler
│       │   ├── cleanup.sh
│       │   ├── horizontal-pod-autoscaler.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── list-images-for-pods
│       │   ├── artifacts
│       │   │   └── manifest.yaml
│       │   ├── cleanup.sh
│       │   ├── list-images-for-pods.yaml
│       │   └── setup.sh
│       ├── multi-container-pod-communication
│       │   ├── cleanup.sh
│       │   ├── multi-container-pod-communication.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── resize-pvc
│       │   ├── artifacts
│       │   │   ├── storage-pod.yaml
│       │   │   └── storage-pvc.yaml
│       │   ├── cleanup.sh
│       │   ├── resize-pvc.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── rolling-update-deployment
│       │   ├── cleanup.sh
│       │   ├── rolling-update-deployment.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── scale-deployment
│       │   ├── cleanup.sh
│       │   ├── scale-deployment.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── scale-down-deployment
│       │   ├── cleanup.sh
│       │   ├── scale-down-deployment.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       ├── setup-dev-cluster
│       │   ├── cleanup.sh
│       │   ├── setup-dev-cluster.md
│       │   ├── setup-dev-cluster.yaml
│       │   ├── setup.sh
│       │   └── verify.sh
│       └── statefulset-lifecycle
│           ├── cleanup.sh
│           ├── setup.sh
│           ├── statefulset-lifecycle.yaml
│           └── verify.sh
├── go.mod
├── go.sum
├── hack
│   └── generate-placeholder-ca.sh
├── internal
│   ├── test
│   │   ├── env.go
│   │   ├── kubernetes.go
│   │   ├── mcp.go
│   │   ├── mock_server.go
│   │   ├── test.go
│   │   ├── unstructured_test.go
│   │   └── unstructured.go
│   └── tools
│       └── update-readme
│           └── main.go
├── LICENSE
├── Makefile
├── npm
│   └── kubernetes-mcp-server
│       └── bin
│           └── index.js
├── pkg
│   ├── api
│   │   ├── config.go
│   │   ├── imports_test.go
│   │   ├── kubernetes.go
│   │   ├── params_test.go
│   │   ├── params.go
│   │   ├── prompt_serialization_test.go
│   │   ├── prompts_test.go
│   │   ├── prompts.go
│   │   ├── toolsets_test.go
│   │   └── toolsets.go
│   ├── config
│   │   ├── config_default_overrides.go
│   │   ├── config_default.go
│   │   ├── config_test.go
│   │   ├── config.go
│   │   ├── context.go
│   │   ├── extended.go
│   │   ├── provider_config_test.go
│   │   ├── provider_config.go
│   │   ├── toolset_config_test.go
│   │   └── toolset_config.go
│   ├── helm
│   │   └── helm.go
│   ├── http
│   │   ├── authorization_test.go
│   │   ├── authorization.go
│   │   ├── http_authorization_test.go
│   │   ├── http_mcp_test.go
│   │   ├── http_test.go
│   │   ├── http.go
│   │   ├── middleware.go
│   │   ├── sts_test.go
│   │   ├── sts.go
│   │   └── wellknown.go
│   ├── kiali
│   │   ├── config_test.go
│   │   ├── config.go
│   │   ├── defaults.go
│   │   ├── endpoints.go
│   │   ├── get_mesh_graph.go
│   │   ├── graph.go
│   │   ├── health_calculation.go
│   │   ├── health.go
│   │   ├── istio.go
│   │   ├── kiali_test.go
│   │   ├── kiali.go
│   │   ├── logs.go
│   │   ├── mesh.go
│   │   ├── namespaces.go
│   │   ├── services.go
│   │   ├── traces.go
│   │   ├── types.go
│   │   ├── validations.go
│   │   └── workloads.go
│   ├── kubernetes
│   │   ├── accesscontrol_round_tripper_test.go
│   │   ├── accesscontrol_round_tripper.go
│   │   ├── common_test.go
│   │   ├── configuration.go
│   │   ├── events.go
│   │   ├── impersonate_roundtripper.go
│   │   ├── kubernetes_derived_test.go
│   │   ├── kubernetes.go
│   │   ├── manager_test.go
│   │   ├── manager.go
│   │   ├── namespaces.go
│   │   ├── nodes.go
│   │   ├── openshift.go
│   │   ├── pods.go
│   │   ├── provider_kubeconfig_test.go
│   │   ├── provider_kubeconfig.go
│   │   ├── provider_registry_test.go
│   │   ├── provider_registry.go
│   │   ├── provider_single_test.go
│   │   ├── provider_single.go
│   │   ├── provider_test.go
│   │   ├── provider_watch_test.go
│   │   ├── provider.go
│   │   ├── resources.go
│   │   ├── token.go
│   │   └── watcher
│   │       ├── cluster_test.go
│   │       ├── cluster.go
│   │       ├── kubeconfig_test.go
│   │       ├── kubeconfig.go
│   │       └── watcher.go
│   ├── kubernetes-mcp-server
│   │   └── cmd
│   │       ├── root_sighup_test.go
│   │       ├── root_test.go
│   │       ├── root.go
│   │       └── testdata
│   │           ├── empty-config.toml
│   │           └── valid-config.toml
│   ├── kubevirt
│   │   ├── resources_test.go
│   │   ├── resources.go
│   │   ├── testing
│   │   │   └── helpers.go
│   │   ├── vm_test.go
│   │   └── vm.go
│   ├── mcp
│   │   ├── common_crd_test.go
│   │   ├── common_test.go
│   │   ├── configuration_test.go
│   │   ├── events_test.go
│   │   ├── gosdk.go
│   │   ├── helm_test.go
│   │   ├── kiali_test.go
│   │   ├── kubevirt_test.go
│   │   ├── mcp_middleware_test.go
│   │   ├── mcp_prompts_test.go
│   │   ├── mcp_reload_test.go
│   │   ├── mcp_test.go
│   │   ├── mcp_tools_test.go
│   │   ├── mcp_toolset_prompts_test.go
│   │   ├── mcp_watch_test.go
│   │   ├── mcp.go
│   │   ├── middleware.go
│   │   ├── modules.go
│   │   ├── namespaces_test.go
│   │   ├── nodes_test.go
│   │   ├── nodes_top_test.go
│   │   ├── pods_exec_test.go
│   │   ├── pods_run_test.go
│   │   ├── pods_test.go
│   │   ├── pods_top_test.go
│   │   ├── prompts_config_test.go
│   │   ├── prompts_gosdk_test.go
│   │   ├── prompts_gosdk.go
│   │   ├── resources_test.go
│   │   ├── testdata
│   │   │   ├── helm-chart-no-op
│   │   │   │   └── Chart.yaml
│   │   │   ├── helm-chart-secret
│   │   │   │   ├── Chart.yaml
│   │   │   │   └── templates
│   │   │   │       └── secret.yaml
│   │   │   ├── toolsets-config-tools.json
│   │   │   ├── toolsets-core-tools.json
│   │   │   ├── toolsets-full-tools-multicluster-enum.json
│   │   │   ├── toolsets-full-tools-multicluster.json
│   │   │   ├── toolsets-full-tools-openshift.json
│   │   │   ├── toolsets-full-tools.json
│   │   │   ├── toolsets-helm-tools.json
│   │   │   ├── toolsets-kiali-tools.json
│   │   │   └── toolsets-kubevirt-tools.json
│   │   ├── tool_filter_test.go
│   │   ├── tool_filter.go
│   │   ├── tool_mutator_test.go
│   │   ├── tool_mutator.go
│   │   └── toolsets_test.go
│   ├── openshift
│   │   └── openshift.go
│   ├── output
│   │   ├── output_test.go
│   │   └── output.go
│   ├── prompts
│   │   ├── prompts_test.go
│   │   └── prompts.go
│   ├── toolsets
│   │   ├── config
│   │   │   ├── configuration.go
│   │   │   └── toolset.go
│   │   ├── core
│   │   │   ├── events.go
│   │   │   ├── namespaces.go
│   │   │   ├── nodes.go
│   │   │   ├── pods.go
│   │   │   ├── resources.go
│   │   │   └── toolset.go
│   │   ├── helm
│   │   │   ├── helm.go
│   │   │   └── toolset.go
│   │   ├── kiali
│   │   │   ├── internal
│   │   │   │   └── defaults
│   │   │   │       ├── defaults_override.go
│   │   │   │       └── defaults.go
│   │   │   ├── tools
│   │   │   │   ├── get_mesh_graph.go
│   │   │   │   ├── get_metrics.go
│   │   │   │   ├── get_resource_details.go
│   │   │   │   ├── get_traces.go
│   │   │   │   ├── helpers_test.go
│   │   │   │   ├── helpers.go
│   │   │   │   ├── logs.go
│   │   │   │   └── manage_istio_config.go
│   │   │   └── toolset.go
│   │   ├── kubevirt
│   │   │   ├── toolset.go
│   │   │   └── vm
│   │   │       ├── create
│   │   │       │   ├── tool.go
│   │   │       │   └── vm.yaml.tmpl
│   │   │       └── lifecycle
│   │   │           └── tool.go
│   │   ├── toolsets_test.go
│   │   └── toolsets.go
│   └── version
│       └── version.go
├── python
│   ├── kubernetes_mcp_server
│   │   ├── __init__.py
│   │   ├── __main__.py
│   │   └── kubernetes_mcp_server.py
│   ├── pyproject.toml
│   └── README.md
├── README.md
└── smithery.yaml
```

# Files

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/.helmignore:
--------------------------------------------------------------------------------

```
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
_output/
.idea/
.vscode/
.docusaurus/
node_modules/

.npmrc
kubernetes-mcp-server
!charts/kubernetes-mcp-server
!cmd/kubernetes-mcp-server
!pkg/kubernetes-mcp-server
npm/kubernetes-mcp-server/README.md
npm/kubernetes-mcp-server/LICENSE
!npm/kubernetes-mcp-server
kubernetes-mcp-server-darwin-amd64
!npm/kubernetes-mcp-server-darwin-amd64/
kubernetes-mcp-server-darwin-arm64
!npm/kubernetes-mcp-server-darwin-arm64
kubernetes-mcp-server-linux-amd64
!npm/kubernetes-mcp-server-linux-amd64
kubernetes-mcp-server-linux-arm64
!npm/kubernetes-mcp-server-linux-arm64
kubernetes-mcp-server-windows-amd64.exe
kubernetes-mcp-server-windows-arm64.exe

python/.venv/
python/build/
python/dist/
python/kubernetes_mcp_server.egg-info/
!python/kubernetes-mcp-server

```

--------------------------------------------------------------------------------
/python/README.md:
--------------------------------------------------------------------------------

```markdown
../README.md
```

--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------

```markdown
# Kubernetes MCP Server Documentation

Welcome to the Kubernetes MCP Server documentation! This directory contains guides to help you set up and use the Kubernetes MCP Server with your Kubernetes cluster and Claude Code CLI.

## Getting Started Guides

Choose the guide that matches your needs:

| Guide | Description | Best For |
|-------|-------------|----------|
| **[Getting Started with Kubernetes](GETTING_STARTED_KUBERNETES.md)** | Base setup: Create ServiceAccount, token, and kubeconfig | Everyone - **start here first** |
| **[Using with Claude Code CLI](GETTING_STARTED_CLAUDE_CODE.md)** | Configure MCP server with Claude Code CLI | Claude Code CLI users |

## Recommended Workflow

1. **Complete the base setup**: Start with [Getting Started with Kubernetes](GETTING_STARTED_KUBERNETES.md) to create a ServiceAccount and kubeconfig file
2. **Configure Claude Code**: Then follow the [Claude Code CLI guide](GETTING_STARTED_CLAUDE_CODE.md)

## Other toolsets

- **[Kiali](KIALI.md)** - Tools for Kiali ServiceMesh with Istio

## Additional Documentation

- **[Keycloak OIDC Setup](KEYCLOAK_OIDC_SETUP.md)** - Developer guide for local Keycloak environment and testing with MCP Inspector
- **[Main README](../README.md)** - Project overview and general information




```

--------------------------------------------------------------------------------
/evals/README.md:
--------------------------------------------------------------------------------

```markdown
# Kubernetes MCP Server Test Examples

This directory contains examples for testing the **same Kubernetes MCP server** using different AI agents.

## Structure

```
evals/
├── README.md                    # This file
├── mcp-config.yaml              # Shared MCP server configuration
├── tasks/                       # Shared test tasks (one directory per task)
│   └── create-pod/
│       ├── create-pod.yaml
│       ├── setup.sh
│       ├── verify.sh
│       └── cleanup.sh
├── claude-code/                 # Claude Code agent configuration
│   ├── agent.yaml
│   ├── eval.yaml
│   └── eval-inline.yaml
└── openai-agent/                # OpenAI-compatible agent configuration
    ├── agent.yaml
    ├── eval.yaml
    └── eval-inline.yaml
```

## What This Tests

Both examples test the **same Kubernetes MCP server** using **shared task definitions**:
- Creates an nginx pod named `web-server` in the `create-pod-test` namespace
- Verifies the pod is running
- Validates that the agent called appropriate Kubernetes tools
- Cleans up resources

The tasks and MCP configuration are shared - only the agent configuration differs.

## Prerequisites

- Kubernetes cluster (kind, minikube, or any cluster)
- kubectl configured
- Kubernetes MCP server running at `http://localhost:8080/mcp`
- Built binaries: `gevals` and `agent`

## Running Examples

### Option 1: Claude Code

```bash
./gevals eval evals/claude-code/eval.yaml
```

**Requirements:**
- Claude Code installed and in PATH

**Tool Usage:**
- Claude typically uses pod-specific tools like `pods_run`, `pods_create`

---

### Option 2: OpenAI-Compatible Agent (Built-in)

```bash
# Set your model credentials
export MODEL_BASE_URL='https://your-api-endpoint.com/v1'
export MODEL_KEY='your-api-key'

# Run the test
./gevals eval evals/openai-agent/eval.yaml
```

**Note:** Different AI models may choose different tools from the MCP server (`pods_*` or `resources_*`) to accomplish the same task. Both approaches work correctly.

## Assertions

Both examples use flexible assertions that accept either tool approach:

```yaml
toolPattern: "(pods_.*|resources_.*)"  # Accepts both pod-specific and generic resource tools
```

This makes the tests robust across different AI models that may prefer different tools.

## Key Difference: Agent Configuration

### Claude Code (claude-code/agent.yaml)
```yaml
commands:
  argTemplateMcpServer: "--mcp-config {{ .File }}"
  argTemplateAllowedTools: "mcp__{{ .ServerName }}__{{ .ToolName }}"
  runPrompt: |-
    claude {{ .McpServerFileArgs }} --print "{{ .Prompt }}"
```

### OpenAI Agent (openai-agent/agent.yaml)
```yaml
builtin:
  type: "openai-agent"
  model: "gpt-4"
```

Uses the built-in OpenAI agent with model configuration.

## Expected Results

Both examples should produce:
- ✅ Task passed - pod created successfully
- ✅ Assertions passed - appropriate tools were called
- ✅ Verification passed - pod exists and is running

Results saved to: `gevals-<eval-name>-out.json`

```

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/README.md:
--------------------------------------------------------------------------------

```markdown
# kubernetes-mcp-server

![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![AppVersion: latest](https://img.shields.io/badge/AppVersion-latest-informational?style=flat-square)

Helm Chart for the Kubernetes MCP Server

**Homepage:** <https://github.com/containers/kubernetes-mcp-server>

## Maintainers

| Name | Email | Url |
| ---- | ------ | --- |
| Andrew Block | <[email protected]> |  |
| Marc Nuri | <[email protected]> |  |

## Installing the Chart

The Chart can be installed quickly and easily to a Kubernetes cluster. Since an _Ingress_ is added as part of the default install of the Chart, the `ingress.host` Value must be specified.

Install the Chart using the following command from the root of this directory:

```shell
helm upgrade -i -n kubernetes-mcp-server --create-namespace kubernetes-mcp-server oci://ghcr.io/containers/charts/kubernetes-mcp-server --set ingress.host=<hostname>
```

### Optimized OpenShift Deployment

Functionality has been added to the Chart to simplify deployment to an OpenShift cluster.

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` |  |
| config.port | string | `"{{ .Values.service.port }}"` |  |
| configFilePath | string | `"/etc/kubernetes-mcp-server/config.toml"` |  |
| defaultPodSecurityContext | object | `{"seccompProfile":{"type":"RuntimeDefault"}}` | Default Security Context for the Pod when one is not provided |
| defaultSecurityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"runAsNonRoot":true}` | Default Security Context for the Container when one is not provided |
| extraVolumeMounts | list | `[]` | Additional volumeMounts on the output Deployment definition. |
| extraVolumes | list | `[]` | Additional volumes on the output Deployment definition. |
| fullnameOverride | string | `""` |  |
| image | object | `{"pullPolicy":"IfNotPresent","registry":"quay.io","repository":"containers/kubernetes_mcp_server","version":"latest"}` | This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/ |
| image.pullPolicy | string | `"IfNotPresent"` | This sets the pull policy for images. |
| image.version | string | `"latest"` | This sets the tag or sha digest for the image. |
| imagePullSecrets | list | `[]` | This is for the secrets for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ |
| ingress | object | `{"annotations":{},"className":"","enabled":true,"host":"","hosts":null,"path":"/","pathType":"ImplementationSpecific","termination":"edge","tls":null}` | This block is for setting up the ingress for more information can be found here: https://kubernetes.io/docs/concepts/services-networking/ingress/ |
| livenessProbe | object | `{"httpGet":{"path":"/healthz","port":"http"}}` | Liveness and readiness probes for the container. |
| nameOverride | string | `""` |  |
| nodeSelector | object | `{}` |  |
| openshift | bool | `false` | Enable OpenShift specific features |
| podAnnotations | object | `{}` | For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ |
| podLabels | object | `{}` | For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ |
| podSecurityContext | object | `{}` | Define the Security Context for the Pod |
| readinessProbe.httpGet.path | string | `"/healthz"` |  |
| readinessProbe.httpGet.port | string | `"http"` |  |
| replicaCount | int | `1` | This will set the replicaset count more information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ |
| resources | object | `{"limits":{"cpu":"100m","memory":"128Mi"},"requests":{"cpu":"100m","memory":"128Mi"}}` | Resource requests and limits for the container. |
| securityContext | object | `{}` | Define the Security Context for the Container |
| service | object | `{"port":8080,"type":"ClusterIP"}` | This is for setting up a service more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/ |
| service.port | int | `8080` | This sets the ports more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports |
| service.type | string | `"ClusterIP"` | This sets the service type more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types |
| serviceAccount | object | `{"annotations":{},"create":true,"name":""}` | This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/ |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the fullname template |
| tolerations | list | `[]` |  |

## Updating the README

The contents of the README.md file are generated using [helm-docs](https://github.com/norwoodj/helm-docs). Whenever changes are introduced to the Chart and its _Values_, the documentation should be regenerated.

To regenerate the documentation, run the following command from within the Helm Chart directory:

```shell
helm-docs -t README.md.gotmpl
```

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Kubernetes MCP Server

[![GitHub License](https://img.shields.io/github/license/containers/kubernetes-mcp-server)](https://github.com/containers/kubernetes-mcp-server/blob/main/LICENSE)
[![npm](https://img.shields.io/npm/v/kubernetes-mcp-server)](https://www.npmjs.com/package/kubernetes-mcp-server)
[![PyPI - Version](https://img.shields.io/pypi/v/kubernetes-mcp-server)](https://pypi.org/project/kubernetes-mcp-server/)
[![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/containers/kubernetes-mcp-server?sort=semver)](https://github.com/containers/kubernetes-mcp-server/releases/latest)
[![Build](https://github.com/containers/kubernetes-mcp-server/actions/workflows/build.yaml/badge.svg)](https://github.com/containers/kubernetes-mcp-server/actions/workflows/build.yaml)

[✨ Features](#features) | [🚀 Getting Started](#getting-started) | [🎥 Demos](#demos) | [⚙️ Configuration](#configuration) | [🛠️ Tools](#tools-and-functionalities) | [🧑‍💻 Development](#development)

https://github.com/user-attachments/assets/be2b67b3-fc1c-4d11-ae46-93deba8ed98e

## ✨ Features <a id="features"></a>

A powerful and flexible Kubernetes [Model Context Protocol (MCP)](https://blog.marcnuri.com/model-context-protocol-mcp-introduction) server implementation with support for **Kubernetes** and **OpenShift**.

- **✅ Configuration**:
  - Automatically detect changes in the Kubernetes configuration and update the MCP server.
  - **View** and manage the current [Kubernetes `.kube/config`](https://blog.marcnuri.com/where-is-my-default-kubeconfig-file) or in-cluster configuration.
- **✅ Generic Kubernetes Resources**: Perform operations on **any** Kubernetes or OpenShift resource.
  - Any CRUD operation (Create or Update, Get, List, Delete).
- **✅ Pods**: Perform Pod-specific operations.
  - **List** pods in all namespaces or in a specific namespace.
  - **Get** a pod by name from the specified namespace.
  - **Delete** a pod by name from the specified namespace.
  - **Show logs** for a pod by name from the specified namespace.
  - **Top** gets resource usage metrics for all pods or a specific pod in the specified namespace.
  - **Exec** into a pod and run a command.
  - **Run** a container image in a pod and optionally expose it.
- **✅ Namespaces**: List Kubernetes Namespaces.
- **✅ Events**: View Kubernetes events in all namespaces or in a specific namespace.
- **✅ Projects**: List OpenShift Projects.
- **☸️ Helm**:
  - **Install** a Helm chart in the current or provided namespace.
  - **List** Helm releases in all namespaces or in a specific namespace.
  - **Uninstall** a Helm release in the current or provided namespace.

Unlike other Kubernetes MCP server implementations, this **IS NOT** just a wrapper around `kubectl` or `helm` command-line tools.
It is a **Go-based native implementation** that interacts directly with the Kubernetes API server.

There is **NO NEED** for external dependencies or tools to be installed on the system.
If you're using the native binaries, you don't even need Node or Python installed.

- **✅ Lightweight**: The server is distributed as a single native binary for Linux, macOS, and Windows.
- **✅ High-Performance / Low-Latency**: Directly interacts with the Kubernetes API server without the overhead of calling and waiting for external commands.
- **✅ Multi-Cluster**: Can interact with multiple Kubernetes clusters simultaneously (as defined in your kubeconfig files).
- **✅ Cross-Platform**: Available as a native binary for Linux, macOS, and Windows, as well as an npm package, a Python package, and container/Docker image.
- **✅ Configurable**: Supports [command-line arguments](#configuration) to configure the server behavior.
- **✅ Well tested**: The server has an extensive test suite to ensure its reliability and correctness across different Kubernetes environments.

## 🚀 Getting Started <a id="getting-started"></a>

### Requirements

- Access to a Kubernetes cluster.

<details>
<summary><b>Claude Code</b></summary>

Follow the [dedicated Claude Code getting started guide](docs/GETTING_STARTED_CLAUDE_CODE.md) in our [user documentation](docs/).

For a secure production setup with dedicated ServiceAccount and read-only access, also review the [Kubernetes setup guide](docs/GETTING_STARTED_KUBERNETES.md).

</details>

### Claude Desktop

#### Using npx

If you have npm installed, this is the fastest way to get started with `kubernetes-mcp-server` on Claude Desktop.

Open your `claude_desktop_config.json` and add the MCP server to the list of `mcpServers`:
``` json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": [
        "-y",
        "kubernetes-mcp-server@latest"
      ]
    }
  }
}
```

### VS Code / VS Code Insiders

Install the Kubernetes MCP server extension in VS Code or VS Code Insiders by clicking the corresponding badge:

[<img src="https://img.shields.io/badge/VS_Code-VS_Code?style=flat-square&label=Install%20Server&color=0098FF" alt="Install in VS Code">](https://insiders.vscode.dev/redirect?url=vscode%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522kubernetes%2522%252C%2522command%2522%253A%2522npx%2522%252C%2522args%2522%253A%255B%2522-y%2522%252C%2522kubernetes-mcp-server%2540latest%2522%255D%257D)
[<img alt="Install in VS Code Insiders" src="https://img.shields.io/badge/VS_Code_Insiders-VS_Code_Insiders?style=flat-square&label=Install%20Server&color=24bfa5">](https://insiders.vscode.dev/redirect?url=vscode-insiders%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522kubernetes%2522%252C%2522command%2522%253A%2522npx%2522%252C%2522args%2522%253A%255B%2522-y%2522%252C%2522kubernetes-mcp-server%2540latest%2522%255D%257D)

Alternatively, you can install the extension manually by running one of the following commands:

```shell
# For VS Code
code --add-mcp '{"name":"kubernetes","command":"npx","args":["kubernetes-mcp-server@latest"]}'
# For VS Code Insiders
code-insiders --add-mcp '{"name":"kubernetes","command":"npx","args":["kubernetes-mcp-server@latest"]}'
```

### Cursor

Install the Kubernetes MCP server extension in Cursor by clicking the following link:

[![Install MCP Server](https://cursor.com/deeplink/mcp-install-dark.svg)](https://cursor.com/en/install-mcp?name=kubernetes-mcp-server&config=eyJjb21tYW5kIjoibnB4IC15IGt1YmVybmV0ZXMtbWNwLXNlcnZlckBsYXRlc3QifQ%3D%3D)

Alternatively, you can configure the server manually by editing the `mcp.json` file:

```json
{
  "mcpServers": {
    "kubernetes-mcp-server": {
      "command": "npx",
      "args": ["-y", "kubernetes-mcp-server@latest"]
    }
  }
}
```

### Goose CLI

[Goose CLI](https://blog.marcnuri.com/goose-on-machine-ai-agent-cli-introduction) is the easiest (and cheapest) way to get rolling with artificial intelligence (AI) agents.

#### Using npm

If you have npm installed, this is the fastest way to get started with `kubernetes-mcp-server`.

Open your Goose `config.yaml` and add the MCP server to the list of `extensions`:
```yaml
extensions:
  kubernetes:
    command: npx
    args:
      - -y
      - kubernetes-mcp-server@latest
```

## 🎥 Demos <a id="demos"></a>

### Diagnosing and automatically fixing an OpenShift Deployment

Demo showcasing how Claude Desktop leverages the Kubernetes MCP server to automatically diagnose and fix a deployment in OpenShift without any user assistance.

https://github.com/user-attachments/assets/a576176d-a142-4c19-b9aa-a83dc4b8d941

### _Vibe Coding_ a simple game and deploying it to OpenShift

In this demo, I walk you through the process of _Vibe Coding_ a simple game using VS Code and how to leverage [Podman MCP server](https://github.com/manusa/podman-mcp-server) and Kubernetes MCP server to deploy it to OpenShift.

<a href="https://www.youtube.com/watch?v=l05jQDSrzVI" target="_blank">
 <img src="docs/images/vibe-coding.jpg" alt="Vibe Coding: Build & Deploy a Game on Kubernetes" width="240"  />
</a>

### Supercharge GitHub Copilot with Kubernetes MCP Server in VS Code - One-Click Setup!

In this demo, I'll show you how to set up the Kubernetes MCP server in VS Code just by clicking a link.

<a href="https://youtu.be/AI4ljYMkgtA" target="_blank">
 <img src="docs/images/kubernetes-mcp-server-github-copilot.jpg" alt="Supercharge GitHub Copilot with Kubernetes MCP Server in VS Code - One-Click Setup!" width="240"  />
</a>

## ⚙️ Configuration <a id="configuration"></a>

The Kubernetes MCP server can be configured using command-line (CLI) arguments.

You can run the CLI executable either by using `npx`, `uvx`, or by downloading the [latest release binary](https://github.com/containers/kubernetes-mcp-server/releases/latest).

```shell
# Run the Kubernetes MCP server using npx (in case you have npm and node installed)
npx kubernetes-mcp-server@latest --help
```

```shell
# Run the Kubernetes MCP server using uvx (in case you have uv and python installed)
uvx kubernetes-mcp-server@latest --help
```

```shell
# Run the Kubernetes MCP server using the latest release binary
./kubernetes-mcp-server --help
```

### Configuration Options

| Option                    | Description                                                                                                                                                                                                                                                                                   |
|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--port`                  | Starts the MCP server in Streamable HTTP mode (path /mcp) and Server-Sent Events (SSE) mode (path /sse), listening on the specified port.                                                                                                                                                     |
| `--log-level`             | Sets the logging level (values [from 0-9](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)). Similar to [kubectl logging levels](https://kubernetes.io/docs/reference/kubectl/quick-reference/#kubectl-output-verbosity-and-debugging). |
| `--config`                | (Optional) Path to the main TOML configuration file. See [Drop-in Configuration](#drop-in-configuration) section below for details.                                                                                                                                                          |
| `--config-dir`            | (Optional) Path to drop-in configuration directory. Files are loaded in lexical (alphabetical) order. Defaults to `conf.d` relative to the main config file if `--config` is specified. See [Drop-in Configuration](#drop-in-configuration) section below for details.                       |
| `--kubeconfig`            | Path to the Kubernetes configuration file. If not provided, it will try to resolve the configuration (in-cluster, default location, etc.).                                                                                                                                                    |
| `--list-output`           | Output format for resource list operations (one of: yaml, table) (default "table")                                                                                                                                                                                                            |
| `--read-only`             | If set, the MCP server will run in read-only mode, meaning it will not allow any write operations (create, update, delete) on the Kubernetes cluster. This is useful for debugging or inspecting the cluster without making changes.                                                          |
| `--disable-destructive`   | If set, the MCP server will disable all destructive operations (delete, update, etc.) on the Kubernetes cluster. This is useful for debugging or inspecting the cluster without accidentally making changes. This option has no effect when `--read-only` is used.                            |
| `--toolsets`              | Comma-separated list of toolsets to enable. Check the [🛠️ Tools and Functionalities](#tools-and-functionalities) section for more information.                                                                                                                                               |
| `--disable-multi-cluster` | If set, the MCP server will disable multi-cluster support and will only use the current context from the kubeconfig file. This is useful if you want to restrict the MCP server to a single cluster.                                                                                          |
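
Several of these options can also be set in the TOML configuration file described below. As a hedged sketch: the `toolsets`, `log_level`, and `read_only` keys appear in the drop-in examples later in this document, while the remaining key names are assumptions to verify against your server version:

```toml
# config.toml (sketch; verify key names against your server version)
toolsets = ["core", "config", "helm"]  # key confirmed by the drop-in examples below
log_level = 2                          # key confirmed by the drop-in examples below
read_only = false                      # key confirmed by the drop-in examples below

# Assumed keys, mirroring the CLI flags (unverified):
# port = "8080"
# list_output = "table"
# disable_destructive = true
```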

### Drop-in Configuration <a id="drop-in-configuration"></a>

The Kubernetes MCP server supports flexible configuration through both a main config file and drop-in files. **Both are optional** - you can use either, both, or neither (the server will fall back to built-in defaults).

#### Configuration Loading Order

Configuration values are loaded and merged in the following order (later sources override earlier ones):

1. **Internal Defaults** - Always loaded (hardcoded default values)
2. **Main Configuration File** - Optional, loaded via `--config` flag
3. **Drop-in Files** - Optional, loaded from `--config-dir` in **lexical (alphabetical) order**

#### How Drop-in Files Work

- **Default Directory**: If `--config-dir` is not specified, the server looks for drop-in files in `conf.d/` relative to the main config file's directory (when `--config` is provided)
- **File Naming**: Use numeric prefixes to control loading order (e.g., `00-base.toml`, `10-cluster.toml`, `99-override.toml`)
- **File Extension**: Only `.toml` files are processed; dotfiles (starting with `.`) are ignored
- **Partial Configuration**: Drop-in files can contain only a subset of configuration options
- **Merge Behavior**: Values present in a drop-in file override previous values; missing values are preserved
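
The selection and ordering rules above can be illustrated with a small shell sketch. This is not the server's code; it only mimics the documented behavior (lexical order, `.toml` files only, dotfiles skipped):

```shell
# Sketch: which files the server would pick up from a drop-in directory,
# and in what order. Dotfiles and non-.toml files are skipped.
conf_d=$(mktemp -d)
touch "$conf_d/10-toolsets.toml" "$conf_d/00-base.toml" "$conf_d/99-local.toml"
touch "$conf_d/.backup.toml" "$conf_d/README.txt"  # both ignored by the server

# Shell globs expand in lexical order, matching the documented load order,
# and do not match leading-dot files.
for f in "$conf_d"/*.toml; do
  basename "$f"
done
```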

#### Dynamic Configuration Reload

To reload configuration after modifying config files, send a `SIGHUP` signal to the running server process.

**Prerequisite**: SIGHUP reload requires the server to be started with either the `--config` flag or `--config-dir` flag (or both). If neither is specified, SIGHUP signals will be ignored.

**How to reload:**

```shell
# Find the process ID
ps aux | grep kubernetes-mcp-server

# Send SIGHUP to reload configuration
kill -HUP <pid>

# Or use pkill
pkill -HUP kubernetes-mcp-server
```

The server will:
- Reload the main config file and all drop-in files
- Update configuration values (log level, output format, etc.)
- Rebuild the toolset registry with new tool configurations
- Log the reload status

**Note**: Changing `kubeconfig` or cluster-related settings requires a server restart. Only tool configurations, log levels, and output formats can be reloaded dynamically.

**Note**: SIGHUP reload is not available on Windows. On Windows, restart the server to reload configuration.
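
If you are unfamiliar with SIGHUP-based reloads, the mechanism can be illustrated generically in shell. This is not the server's implementation; it only shows the pattern of a process trapping `HUP` to trigger a reload action instead of terminating:

```shell
# Generic SIGHUP illustration (not the server's code):
# install a handler so HUP triggers a reload action instead of termination.
trap 'echo "reloading configuration"' HUP

# Deliver HUP to ourselves; the trap runs the reload action and the
# process keeps running afterwards.
kill -HUP $$
```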

#### Example: Using Both Config Methods

**Command (using default `conf.d` directory):**
```shell
kubernetes-mcp-server --config /etc/kubernetes-mcp-server/config.toml
```

**Directory structure:**
```
/etc/kubernetes-mcp-server/
├── config.toml              # Main configuration
└── conf.d/                  # Default drop-in directory (automatically loaded)
    ├── 00-base.toml         # Base overrides
    ├── 10-toolsets.toml     # Toolset-specific config
    └── 99-local.toml        # Local overrides
```

**Command (with explicit `--config-dir`):**
```shell
kubernetes-mcp-server --config /etc/kubernetes-mcp-server/config.toml \
                      --config-dir /etc/kubernetes-mcp-server/config.d/
```

**Example drop-in file** (`10-toolsets.toml`):
```toml
# Override only the toolsets - all other config preserved
toolsets = ["core", "config", "helm", "logs"]
```

**Example drop-in file** (`99-local.toml`):
```toml
# Local development overrides
log_level = 9
read_only = true
```
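
With the files above in place, the merged result would look like this (illustrative; every value not overridden by a drop-in file keeps its value from `config.toml` or the built-in defaults):

```toml
# Effective configuration after merging (illustrative)
toolsets = ["core", "config", "helm", "logs"]  # from 10-toolsets.toml
log_level = 9                                  # from 99-local.toml
read_only = true                               # from 99-local.toml
```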

**To apply changes:**
```shell
# Edit config files
vim /etc/kubernetes-mcp-server/conf.d/99-local.toml

# Reload without restarting
pkill -HUP kubernetes-mcp-server
```

### MCP Prompts

The server supports two kinds of MCP prompts:

1. Custom workflow-template prompts, defined in `config.toml`:

   ```toml
   [[prompts]]
   name = "my-workflow"
   title = "my workflow"
   description = "Custom workflow"

   [[prompts.arguments]]
   name = "resource_name"
   required = true

   [[prompts.messages]]
   role = "user"
   content = "Help me with {{resource_name}}"
   ```

2. Toolset prompts, implemented by toolset developers.

See [docs/PROMPTS.md](docs/PROMPTS.md) for detailed documentation.

## 🛠️ Tools and Functionalities <a id="tools-and-functionalities"></a>

The Kubernetes MCP server supports enabling or disabling specific groups of tools and functionalities (tools, resources, prompts, and so on) via the `--toolsets` command-line flag or `toolsets` configuration option.
This allows you to control which Kubernetes functionalities are available to your AI tools.
Enabling only the toolsets you need can help reduce the context size and improve the LLM's tool selection accuracy.

### Available Toolsets

The following sets of tools are available (toolsets marked with ✓ in the Default column are enabled by default):

<!-- AVAILABLE-TOOLSETS-START -->

| Toolset  | Description                                                                                                                                                          | Default |
|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
| config   | View and manage the current local Kubernetes configuration (kubeconfig)                                                                                              | ✓       |
| core     | Most common tools for Kubernetes management (Pods, Generic Resources, Events, etc.)                                                                                  | ✓       |
| kiali    | Most common tools for managing Kiali, check the [Kiali documentation](https://github.com/containers/kubernetes-mcp-server/blob/main/docs/KIALI.md) for more details. |         |
| kubevirt | KubeVirt virtual machine management tools                                                                                                                            |         |
| helm     | Tools for managing Helm charts and releases                                                                                                                          | ✓       |

<!-- AVAILABLE-TOOLSETS-END -->

### Tools

When multi-cluster support is enabled (the default) and you have access to multiple clusters, all applicable tools will include an additional `context` argument to specify the Kubernetes context (cluster) to use for that operation.

<!-- AVAILABLE-TOOLSETS-TOOLS-START -->

<details>

<summary>config</summary>

- **configuration_contexts_list** - List all available context names and associated server URLs from the kubeconfig file

- **configuration_view** - Get the current Kubernetes configuration content as a kubeconfig YAML
  - `minified` (`boolean`) - Return a minified version of the configuration. If set to true, keeps only the current-context and the relevant pieces of the configuration for that context. If set to false, all contexts, clusters, auth-infos, and users are returned in the configuration. (Optional, default true)

</details>

<details>

<summary>core</summary>

- **events_list** - List all the Kubernetes events in the current cluster from all namespaces
  - `namespace` (`string`) - Optional Namespace to retrieve the events from. If not provided, will list events from all namespaces

- **namespaces_list** - List all the Kubernetes namespaces in the current cluster

- **projects_list** - List all the OpenShift projects in the current cluster

- **nodes_log** - Get logs from a Kubernetes node (kubelet, kube-proxy, or other system logs). This accesses node logs through the Kubernetes API proxy to the kubelet
  - `name` (`string`) **(required)** - Name of the node to get logs from
  - `query` (`string`) **(required)** - query specifies the service(s) or files from which to return logs (required). Example: "kubelet" to fetch kubelet logs, "/<log-file-name>" to fetch a specific log file from the node (e.g., "/var/log/kubelet.log" or "/var/log/kube-proxy.log")
  - `tailLines` (`integer`) - Number of lines to retrieve from the end of the logs (Optional, 0 means all logs)

- **nodes_stats_summary** - Get detailed resource usage statistics from a Kubernetes node via the kubelet's Summary API. Provides comprehensive metrics including CPU, memory, filesystem, and network usage at the node, pod, and container levels. On systems with cgroup v2 and kernel 4.20+, also includes PSI (Pressure Stall Information) metrics that show resource pressure for CPU, memory, and I/O. See https://kubernetes.io/docs/reference/instrumentation/understand-psi-metrics/ for details on PSI metrics
  - `name` (`string`) **(required)** - Name of the node to get stats from

- **nodes_top** - List the resource consumption (CPU and memory) as recorded by the Kubernetes Metrics Server for the specified Kubernetes Nodes or all nodes in the cluster
  - `label_selector` (`string`) - Kubernetes label selector (e.g. 'node-role.kubernetes.io/worker=') to filter nodes by label (Optional, only applicable when name is not provided)
  - `name` (`string`) - Name of the Node to get the resource consumption from (Optional, all Nodes if not provided)

- **pods_list** - List all the Kubernetes pods in the current cluster from all namespaces
  - `labelSelector` (`string`) - Optional Kubernetes label selector (e.g. 'app=myapp,env=prod' or 'app in (myapp,yourapp)'), use this option when you want to filter the pods by label

- **pods_list_in_namespace** - List all the Kubernetes pods in the specified namespace in the current cluster
  - `labelSelector` (`string`) - Optional Kubernetes label selector (e.g. 'app=myapp,env=prod' or 'app in (myapp,yourapp)'), use this option when you want to filter the pods by label
  - `namespace` (`string`) **(required)** - Namespace to list pods from

- **pods_get** - Get a Kubernetes Pod in the current or provided namespace with the provided name
  - `name` (`string`) **(required)** - Name of the Pod
  - `namespace` (`string`) - Namespace to get the Pod from

- **pods_delete** - Delete a Kubernetes Pod in the current or provided namespace with the provided name
  - `name` (`string`) **(required)** - Name of the Pod to delete
  - `namespace` (`string`) - Namespace to delete the Pod from

- **pods_top** - List the resource consumption (CPU and memory) as recorded by the Kubernetes Metrics Server for the specified Kubernetes Pods in all namespaces, the provided namespace, or the current namespace
  - `all_namespaces` (`boolean`) - If true, list the resource consumption for all Pods in all namespaces. If false, list the resource consumption for Pods in the provided namespace or the current namespace
  - `label_selector` (`string`) - Kubernetes label selector (e.g. 'app=myapp,env=prod' or 'app in (myapp,yourapp)'), use this option when you want to filter the pods by label (Optional, only applicable when name is not provided)
  - `name` (`string`) - Name of the Pod to get the resource consumption from (Optional, all Pods in the namespace if not provided)
  - `namespace` (`string`) - Namespace to get the Pods resource consumption from (Optional, current namespace if not provided and all_namespaces is false)

- **pods_exec** - Execute a command in a Kubernetes Pod in the current or provided namespace with the provided name and command
  - `command` (`array`) **(required)** - Command to execute in the Pod container. The first item is the command to be run, and the rest are the arguments to that command. Example: ["ls", "-l", "/tmp"]
  - `container` (`string`) - Name of the Pod container where the command will be executed (Optional)
  - `name` (`string`) **(required)** - Name of the Pod where the command will be executed
  - `namespace` (`string`) - Namespace of the Pod where the command will be executed

- **pods_log** - Get the logs of a Kubernetes Pod in the current or provided namespace with the provided name
  - `container` (`string`) - Name of the Pod container to get the logs from (Optional)
  - `name` (`string`) **(required)** - Name of the Pod to get the logs from
  - `namespace` (`string`) - Namespace to get the Pod logs from
  - `previous` (`boolean`) - Return previous terminated container logs (Optional)
  - `tail` (`integer`) - Number of lines to retrieve from the end of the logs (Optional, default: 100)

- **pods_run** - Run a Kubernetes Pod in the current or provided namespace with the provided container image and optional name
  - `image` (`string`) **(required)** - Container Image to run in the Pod
  - `name` (`string`) - Name of the Pod (Optional, random name if not provided)
  - `namespace` (`string`) - Namespace to run the Pod in
  - `port` (`number`) - TCP/IP port to expose from the Pod container (Optional, no port exposed if not provided)

- **resources_list** - List Kubernetes resources and objects in the current cluster by providing their apiVersion and kind and optionally the namespace and label selector
(common apiVersion and kind include: v1 Pod, v1 Service, v1 Node, apps/v1 Deployment, networking.k8s.io/v1 Ingress, route.openshift.io/v1 Route)
  - `apiVersion` (`string`) **(required)** - apiVersion of the resources (examples of valid apiVersion are: v1, apps/v1, networking.k8s.io/v1)
  - `kind` (`string`) **(required)** - kind of the resources (examples of valid kind are: Pod, Service, Deployment, Ingress)
  - `labelSelector` (`string`) - Optional Kubernetes label selector (e.g. 'app=myapp,env=prod' or 'app in (myapp,yourapp)'), use this option when you want to filter the pods by label
  - `namespace` (`string`) - Optional Namespace to retrieve the namespaced resources from (ignored in case of cluster scoped resources). If not provided, will list resources from all namespaces

- **resources_get** - Get a Kubernetes resource in the current cluster by providing its apiVersion, kind, optionally the namespace, and its name
(common apiVersion and kind include: v1 Pod, v1 Service, v1 Node, apps/v1 Deployment, networking.k8s.io/v1 Ingress, route.openshift.io/v1 Route)
  - `apiVersion` (`string`) **(required)** - apiVersion of the resource (examples of valid apiVersion are: v1, apps/v1, networking.k8s.io/v1)
  - `kind` (`string`) **(required)** - kind of the resource (examples of valid kind are: Pod, Service, Deployment, Ingress)
  - `name` (`string`) **(required)** - Name of the resource
  - `namespace` (`string`) - Optional Namespace to retrieve the namespaced resource from (ignored in case of cluster scoped resources). If not provided, will get resource from configured namespace

- **resources_create_or_update** - Create or update a Kubernetes resource in the current cluster by providing a YAML or JSON representation of the resource
(common apiVersion and kind include: v1 Pod, v1 Service, v1 Node, apps/v1 Deployment, networking.k8s.io/v1 Ingress, route.openshift.io/v1 Route)
  - `resource` (`string`) **(required)** - A JSON or YAML containing a representation of the Kubernetes resource. Should include top-level fields such as apiVersion,kind,metadata, and spec

- **resources_delete** - Delete a Kubernetes resource in the current cluster by providing its apiVersion, kind, optionally the namespace, and its name
(common apiVersion and kind include: v1 Pod, v1 Service, v1 Node, apps/v1 Deployment, networking.k8s.io/v1 Ingress, route.openshift.io/v1 Route)
  - `apiVersion` (`string`) **(required)** - apiVersion of the resource (examples of valid apiVersion are: v1, apps/v1, networking.k8s.io/v1)
  - `kind` (`string`) **(required)** - kind of the resource (examples of valid kind are: Pod, Service, Deployment, Ingress)
  - `name` (`string`) **(required)** - Name of the resource
  - `namespace` (`string`) - Optional Namespace to delete the namespaced resource from (ignored in case of cluster scoped resources). If not provided, will delete resource from configured namespace

- **resources_scale** - Get or update the scale of a Kubernetes resource in the current cluster by providing its apiVersion, kind, name, and optionally the namespace. If the scale is set in the tool call, the scale will be updated to that value. Always returns the current scale of the resource
  - `apiVersion` (`string`) **(required)** - apiVersion of the resource (examples of valid apiVersion are apps/v1)
  - `kind` (`string`) **(required)** - kind of the resource (examples of valid kind are: StatefulSet, Deployment)
  - `name` (`string`) **(required)** - Name of the resource
  - `namespace` (`string`) - Optional Namespace to get/update the namespaced resource scale from (ignored in case of cluster scoped resources). If not provided, will get/update resource scale from configured namespace
  - `scale` (`integer`) - Optional scale to update the resources scale to. If not provided, will return the current scale of the resource, and not update it

</details>

<details>

<summary>kiali</summary>

- **kiali_mesh_graph** - Returns the topology, health, and status of the mesh for the specified namespaces. Includes a mesh health summary overview with aggregated counts of healthy, degraded, and failing apps, workloads, and services. Use this for high-level overviews
  - `graphType` (`string`) - Optional type of graph to return: 'versionedApp', 'app', 'service', 'workload', 'mesh'
  - `namespace` (`string`) - Optional single namespace to include in the graph (alternative to namespaces)
  - `namespaces` (`string`) - Optional comma-separated list of namespaces to include in the graph
  - `rateInterval` (`string`) - Optional rate interval for fetching (e.g., '10m', '5m', '1h').

- **kiali_manage_istio_config** - Manages Istio configuration objects (Gateways, VirtualServices, etc.). Can list (objects and validations), get, create, patch, or delete objects
  - `action` (`string`) **(required)** - Action to perform: list, get, create, patch, or delete
  - `group` (`string`) - API group of the Istio object (e.g., 'networking.istio.io', 'gateway.networking.k8s.io')
  - `json_data` (`string`) - JSON data to apply or create the object
  - `kind` (`string`) - Kind of the Istio object (e.g., 'DestinationRule', 'VirtualService', 'HTTPRoute', 'Gateway')
  - `name` (`string`) - Name of the Istio object
  - `namespace` (`string`) - Namespace containing the Istio object
  - `version` (`string`) - API version of the Istio object (e.g., 'v1', 'v1beta1')

- **kiali_get_resource_details** - Gets lists or detailed info for Kubernetes resources (services, workloads) within the mesh
  - `namespaces` (`string`) - Comma-separated list of namespaces to get services from (e.g. 'bookinfo' or 'bookinfo,default'). If not provided, will list services from all accessible namespaces
  - `resource_name` (`string`) - Name of the resource to get details for (optional string - if provided, gets details; if empty, lists all).
  - `resource_type` (`string`) - Type of resource to get details for (service, workload)

- **kiali_get_metrics** - Gets metrics for Kubernetes resources (services, workloads) within the mesh
  - `byLabels` (`string`) - Comma-separated list of labels to group metrics by (e.g., 'source_workload,destination_service'). Optional
  - `direction` (`string`) - Traffic direction: 'inbound' or 'outbound'. Optional, defaults to 'outbound'
  - `duration` (`string`) - Time range to get metrics for (e.g., '1m', '5m', '1h'). Optional, defaults to '30m'
  - `namespace` (`string`) **(required)** - Namespace to get resources from
  - `quantiles` (`string`) - Comma-separated list of quantiles for histogram metrics (e.g., '0.5,0.95,0.99'). Optional
  - `rateInterval` (`string`) - Rate interval for metrics (e.g., '1m', '5m'). Optional, defaults to '10m'
  - `reporter` (`string`) - Metrics reporter: 'source', 'destination', or 'both'. Optional, defaults to 'source'
  - `requestProtocol` (`string`) - Filter by request protocol (e.g., 'http', 'grpc', 'tcp'). Optional
  - `resource_name` (`string`) **(required)** - Name of the resource to get metrics for
  - `resource_type` (`string`) **(required)** - Type of resource to get metrics for (service, workload)
  - `step` (`string`) - Step between data points in seconds (e.g., '15'). Optional, defaults to 15 seconds

- **kiali_workload_logs** - Get logs for a specific workload's pods in a namespace. Only requires namespace and workload name - automatically discovers pods and containers. Optionally filter by container name, time range, and other parameters. Container is auto-detected if not specified.
  - `container` (`string`) - Optional container name to filter logs. If not provided, automatically detects and uses the main application container (excludes istio-proxy and istio-init)
  - `namespace` (`string`) **(required)** - Namespace containing the workload
  - `since` (`string`) - Time duration to fetch logs from (e.g., '5m', '1h', '30s'). If not provided, returns recent logs
  - `tail` (`integer`) - Number of lines to retrieve from the end of logs (default: 100)
  - `workload` (`string`) **(required)** - Name of the workload to get logs for

- **kiali_get_traces** - Gets traces for a specific resource (app, service, workload) in a namespace, or gets detailed information for a specific trace by its ID. If traceId is provided, it returns detailed trace information and other parameters are not required.
  - `clusterName` (`string`) - Cluster name for multi-cluster environments (optional, only used when traceId is not provided)
  - `endMicros` (`string`) - End time for traces in microseconds since epoch (optional, defaults to 10 minutes after startMicros if not provided, only used when traceId is not provided)
  - `limit` (`integer`) - Maximum number of traces to return (default: 100, only used when traceId is not provided)
  - `minDuration` (`integer`) - Minimum trace duration in microseconds (optional, only used when traceId is not provided)
  - `namespace` (`string`) - Namespace to get resources from. Required if traceId is not provided.
  - `resource_name` (`string`) - Name of the resource to get traces for. Required if traceId is not provided.
  - `resource_type` (`string`) - Type of resource to get traces for (app, service, workload). Required if traceId is not provided.
  - `startMicros` (`string`) - Start time for traces in microseconds since epoch (optional, defaults to 10 minutes before current time if not provided, only used when traceId is not provided)
  - `tags` (`string`) - JSON string of tags to filter traces (optional, only used when traceId is not provided)
  - `traceId` (`string`) - Unique identifier of the trace to retrieve detailed information for. If provided, this will return detailed trace information and other parameters (resource_type, namespace, resource_name) are not required.

</details>

<details>

<summary>kubevirt</summary>

- **vm_create** - Create a VirtualMachine in the cluster with the specified configuration, automatically resolving instance types, preferences, and container disk images. VM will be created in Halted state by default; use autostart parameter to start it immediately.
  - `autostart` (`boolean`) - Optional flag to automatically start the VM after creation (sets runStrategy to Always instead of Halted). Defaults to false.
  - `instancetype` (`string`) - Optional instance type name for the VM (e.g., 'u1.small', 'u1.medium', 'u1.large')
  - `name` (`string`) **(required)** - The name of the virtual machine
  - `namespace` (`string`) **(required)** - The namespace for the virtual machine
  - `performance` (`string`) - Optional performance family hint for the VM instance type (e.g., 'u1' for general-purpose, 'o1' for overcommitted, 'c1' for compute-optimized, 'm1' for memory-optimized). Defaults to 'u1' (general-purpose) if not specified.
  - `preference` (`string`) - Optional preference name for the VM
  - `size` (`string`) - Optional workload size hint for the VM (e.g., 'small', 'medium', 'large', 'xlarge'). Used to auto-select an appropriate instance type if not explicitly specified.
  - `storage` (`string`) - Optional storage size for the VM's root disk when using DataSources (e.g., '30Gi', '50Gi', '100Gi'). Defaults to 30Gi. Ignored when using container disks.
  - `workload` (`string`) - The workload for the VM. Accepts OS names (e.g., 'fedora' (default), 'ubuntu', 'centos', 'centos-stream', 'debian', 'rhel', 'opensuse', 'opensuse-tumbleweed', 'opensuse-leap') or full container disk image URLs

- **vm_lifecycle** - Manage VirtualMachine lifecycle: start, stop, or restart a VM
  - `action` (`string`) **(required)** - The lifecycle action to perform: 'start' (changes runStrategy to Always), 'stop' (changes runStrategy to Halted), or 'restart' (stops then starts the VM)
  - `name` (`string`) **(required)** - The name of the virtual machine
  - `namespace` (`string`) **(required)** - The namespace of the virtual machine

</details>

<details>

<summary>helm</summary>

- **helm_install** - Install a Helm chart in the current or provided namespace
  - `chart` (`string`) **(required)** - Chart reference to install (for example: stable/grafana, oci://ghcr.io/nginxinc/charts/nginx-ingress)
  - `name` (`string`) - Name of the Helm release (Optional, random name if not provided)
  - `namespace` (`string`) - Namespace to install the Helm chart in (Optional, current namespace if not provided)
  - `values` (`object`) - Values to pass to the Helm chart (Optional)

- **helm_list** - List all the Helm releases in the current or provided namespace (or in all namespaces if specified)
  - `all_namespaces` (`boolean`) - If true, lists all Helm releases in all namespaces ignoring the namespace argument (Optional)
  - `namespace` (`string`) - Namespace to list Helm releases from (Optional, all namespaces if not provided)

- **helm_uninstall** - Uninstall a Helm release in the current or provided namespace
  - `name` (`string`) **(required)** - Name of the Helm release to uninstall
  - `namespace` (`string`) - Namespace to uninstall the Helm release from (Optional, current namespace if not provided)

</details>


<!-- AVAILABLE-TOOLSETS-TOOLS-END -->

## Helm Chart

A [Helm Chart](https://helm.sh) is available to simplify the deployment of the Kubernetes MCP server. Additional details can be found in the [chart README](./charts/kubernetes-mcp-server/README.md).

## 🧑‍💻 Development <a id="development"></a>

### Running with mcp-inspector

Compile the project and run the Kubernetes MCP server with [mcp-inspector](https://modelcontextprotocol.io/docs/tools/inspector) to inspect the MCP server.

```shell
# Compile the project
make build
# Run the Kubernetes MCP server with mcp-inspector
npx @modelcontextprotocol/inspector@latest $(pwd)/kubernetes-mcp-server
```
```

--------------------------------------------------------------------------------
/CLAUDE.md:
--------------------------------------------------------------------------------

```markdown
AGENTS.md
```

--------------------------------------------------------------------------------
/AGENTS.md:
--------------------------------------------------------------------------------

```markdown
# Project Agents.md for Kubernetes MCP Server

This Agents.md file provides comprehensive guidance for AI assistants and coding agents (like Claude, Gemini, Cursor, and others) working with this codebase.

This repository contains the kubernetes-mcp-server project,
a powerful Go-based Model Context Protocol (MCP) server that provides native Kubernetes and OpenShift cluster management capabilities without external dependencies.
This MCP server enables those AI assistants to interact with Kubernetes clusters using the Model Context Protocol (MCP).

## Project Structure and Repository layout

- Go package layout follows the standard Go conventions:
  - `cmd/kubernetes-mcp-server/` – main application entry point using Cobra CLI framework.
  - `pkg/` – libraries grouped by domain.
    - `api/` - API-related functionality, tool definitions, and toolset interfaces.
    - `config/` – configuration management.
    - `helm/` - Helm chart operations integration.
    - `http/` - HTTP server and authorization middleware.
    - `kubernetes/` - Kubernetes client management, authentication, and access control.
    - `mcp/` - Model Context Protocol (MCP) server implementation with tool registration and STDIO/HTTP support.
    - `output/` - output formatting and rendering.
    - `toolsets/` - Toolset registration and management for MCP tools.
    - `version/` - Version information management.
- `.github/` – GitHub-related configuration (Actions workflows, issue templates...).
- `docs/` – documentation files.
- `npm/` – Node packages that wrap the compiled binaries for distribution through npmjs.com.
- `python/` – Python package providing a script that downloads the correct platform binary from the GitHub releases page and runs it for distribution through pypi.org.
- `Dockerfile` - container image description file to distribute the server as a container image.
- `Makefile` – tasks for building, formatting, linting and testing.

## Feature development

Implement new functionality in the Go sources under `cmd/` and `pkg/`.
The JavaScript (`npm/`) and Python (`python/`) directories only wrap the compiled binary for distribution (npm and PyPI).
Most changes will not require touching them unless the version or packaging needs to be updated.

### Adding new MCP tools

The project uses a toolset-based architecture for organizing MCP tools:

- **Tool definitions** are created in `pkg/api/` using the `ServerTool` struct.
- **Toolsets** group related tools together (e.g., config tools, core Kubernetes tools, Helm tools).
- **Registration** happens in `pkg/toolsets/` where toolsets are registered at initialization.
- Each toolset lives in its own subdirectory under `pkg/toolsets/` (e.g., `pkg/toolsets/config/`, `pkg/toolsets/core/`, `pkg/toolsets/helm/`).

When adding a new tool:
1. Define the tool handler function that implements the tool's logic.
2. Create a `ServerTool` struct with the tool definition and handler.
3. Add the tool to an appropriate toolset (or create a new toolset if needed).
4. Register the toolset in `pkg/toolsets/` if it's a new toolset.
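
The four steps above can be sketched with simplified stand-ins for the real types (the actual `ServerTool` fields and registration helpers live in `pkg/api` and `pkg/toolsets` and may differ; all names below are illustrative):

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real types in pkg/api
// and pkg/toolsets; field and function names are illustrative only.
type ToolHandler func(args map[string]any) (string, error)

type ServerTool struct {
	Name        string
	Description string
	Handler     ToolHandler
}

type Toolset struct {
	Name  string
	Tools []ServerTool
}

var registry = map[string]Toolset{}

func RegisterToolset(ts Toolset) { registry[ts.Name] = ts }

func init() {
	// 1-2. Define the handler and wrap it in a ServerTool.
	ping := ServerTool{
		Name:        "cluster_ping",
		Description: "Check that the cluster API is reachable",
		Handler: func(args map[string]any) (string, error) {
			return "pong", nil
		},
	}
	// 3-4. Add it to a toolset and register the toolset.
	RegisterToolset(Toolset{Name: "diagnostics", Tools: []ServerTool{ping}})
}

func main() {
	out, _ := registry["diagnostics"].Tools[0].Handler(nil)
	fmt.Println(out) // prints: pong
}
```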

## Building

Use the provided Makefile targets:

```bash
# Format source and build the binary
make build

# Build for all supported platforms
make build-all-platforms
```

`make build` will run `go fmt` and `go mod tidy` before compiling.
The resulting executable is `kubernetes-mcp-server`.

## Running

The README demonstrates running the server via
[`mcp-inspector`](https://modelcontextprotocol.io/docs/tools/inspector):

```bash
make build
npx @modelcontextprotocol/inspector@latest $(pwd)/kubernetes-mcp-server
```

To run the server locally, you can use `npx` or `uvx`, or execute the binary directly:

```bash
# Using npx (Node.js package runner)
npx -y kubernetes-mcp-server@latest

# Using uvx (Python package runner)
uvx kubernetes-mcp-server@latest

# Binary execution
./kubernetes-mcp-server
```

This MCP server is designed to run both locally and remotely.

### Local Execution

When running locally, the server connects to a Kubernetes or OpenShift cluster using the kubeconfig file.
It reads the kubeconfig from the `--kubeconfig` flag, the `KUBECONFIG` environment variable, or defaults to `~/.kube/config`.

This means that `npx -y kubernetes-mcp-server@latest` on a workstation will talk to whatever cluster your current kubeconfig points to (e.g. a local Kind cluster).

### Remote Execution

When running remotely, the server can be deployed as a container image in a Kubernetes or OpenShift cluster.
The server can be run as a Deployment, StatefulSet, or any other Kubernetes resource that suits your needs.
The server will automatically use the in-cluster configuration to connect to the Kubernetes API server.

## Tests

Run all Go tests with:

```bash
make test
```

The test suite relies on the `setup-envtest` tooling from `sigs.k8s.io/controller-runtime`.
The first run downloads a Kubernetes `envtest` environment from the internet, so network access is required; without it, some tests will fail during setup.

### Testing Patterns and Guidelines

This project follows specific testing patterns to ensure consistency, maintainability, and quality. When writing tests, adhere to the following guidelines:

#### Test Framework

- **Use `testify/suite`** for organizing tests into suites
- Tests should be structured using test suites that embed `suite.Suite`
- Each test file should have a corresponding suite struct (e.g., `UnstructuredSuite`, `KubevirtSuite`)
- Use the `suite.Run()` function to execute test suites

Example:
```go
type MyTestSuite struct {
    suite.Suite
}

func (s *MyTestSuite) TestSomething() {
    s.Run("descriptive scenario name", func() {
        // test implementation
    })
}

func TestMyFeature(t *testing.T) {
    suite.Run(t, new(MyTestSuite))
}
```

#### Behavior-Based Testing

- **Test the public API only** - tests should be black-box and not access internal/private functions
- **No mocks** - use real implementations and integration testing where possible
- **Behavior over implementation** - test what the code does, not how it does it
- Focus on observable behavior and outcomes rather than internal state

#### Test Organization

- **Use nested subtests** with `s.Run()` to organize related test cases
- **Descriptive names** - subtest names should clearly describe the scenario being tested
- Group related scenarios together under a parent test (e.g., "edge cases", "with valid input")

Example structure:
```go
func (s *MySuite) TestFeature() {
    s.Run("valid input scenarios", func() {
        s.Run("handles simple case correctly", func() {
            // test code
        })
        s.Run("handles complex case with nested data", func() {
            // test code
        })
    })
    s.Run("edge cases", func() {
        s.Run("returns error for nil input", func() {
            // test code
        })
        s.Run("handles empty input gracefully", func() {
            // test code
        })
    })
}
```

#### Assertions

- **One assertion per test case** - each `s.Run()` block should ideally test one specific behavior
- Use `testify` assertion methods: `s.Equal()`, `s.True()`, `s.False()`, `s.Nil()`, `s.NotNil()`, etc.
- Provide clear assertion messages when the failure reason might not be obvious

Example:
```go
s.Run("returns expected value", func() {
    result := functionUnderTest()
    s.Equal("expected", result, "function should return the expected string")
})
```

#### Coverage

- **Aim for high test coverage** of the public API
- Add edge case tests to cover error paths and boundary conditions
- Common edge cases to consider:
  - Nil/null inputs
  - Empty strings, slices, maps
  - Negative numbers or invalid indices
  - Type mismatches
  - Malformed input (e.g., invalid paths, formats)

#### Error Handling

- **Never ignore errors** in production code
- Always check and handle errors from functions that return them
- In tests, use `s.Require().NoError(err)` for operations that must succeed for the test to be valid
- Use `s.Error(err)` or `s.NoError(err)` for testing error conditions

Example:
```go
s.Run("returns error for invalid input", func() {
    result, err := functionUnderTest(invalidInput)
    s.Error(err, "expected error for invalid input")
    s.Nil(result, "result should be nil when error occurs")
})
```

#### Test Helpers

- Create reusable test helpers in `internal/test/` for common testing utilities
- Test helpers should be generic and reusable across multiple test files
- Document test helpers with clear godoc comments explaining their purpose and usage

Example from this project:
```go
// FieldString retrieves a string field from an unstructured object using JSONPath-like notation.
// Examples:
//   - "spec.runStrategy"
//   - "spec.template.spec.volumes[0].containerDisk.image"
func FieldString(obj *unstructured.Unstructured, path string) string {
    // implementation
}
```
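
A minimal self-contained sketch of such a path helper, using plain `map[string]any` instead of `*unstructured.Unstructured` to avoid the apimachinery dependency (the real helper in `internal/test` may differ):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fieldString walks a dotted path with optional [i] indices through
// nested maps and slices, returning "" when the path does not resolve
// to a string. Illustrative sketch only.
func fieldString(obj map[string]any, path string) string {
	var cur any = obj
	for _, part := range strings.Split(path, ".") {
		// Split "volumes[0]" into the key and an optional index.
		key, idx := part, -1
		if i := strings.IndexByte(part, '['); i >= 0 {
			key = part[:i]
			n, err := strconv.Atoi(strings.TrimSuffix(part[i+1:], "]"))
			if err != nil {
				return ""
			}
			idx = n
		}
		m, ok := cur.(map[string]any)
		if !ok {
			return ""
		}
		cur = m[key]
		if idx >= 0 {
			s, ok := cur.([]any)
			if !ok || idx >= len(s) {
				return ""
			}
			cur = s[idx]
		}
	}
	s, _ := cur.(string)
	return s
}

func main() {
	vm := map[string]any{
		"spec": map[string]any{
			"runStrategy": "Always",
			"volumes":     []any{map[string]any{"image": "quay.io/containerdisks/fedora"}},
		},
	}
	fmt.Println(fieldString(vm, "spec.runStrategy"))      // prints: Always
	fmt.Println(fieldString(vm, "spec.volumes[0].image")) // prints: quay.io/containerdisks/fedora
}
```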

#### Examples from the Codebase

Good examples of these patterns can be found in:
- `internal/test/unstructured_test.go` - demonstrates proper use of testify/suite, nested subtests, and edge case testing
- `pkg/mcp/kubevirt_test.go` - shows behavior-based testing of the MCP layer
- `pkg/kubernetes/manager_test.go` - illustrates testing with proper setup/teardown and subtests

## Linting

Static analysis is performed with `golangci-lint`:

```bash
make lint
```

The `lint` target downloads the specified `golangci-lint` version if it is not already present under `_output/tools/bin/`.

## Additional Makefile targets

Beyond the basic build, test, and lint targets, the Makefile provides additional utilities:

**Local Development:**
```bash
# Setup a complete local development environment with Kind cluster
make local-env-setup

# Tear down the local Kind cluster
make local-env-teardown

# Show Keycloak status and connection info (for OIDC testing)
make keycloak-status

# Tail Keycloak logs
make keycloak-logs

# Install required development tools (like Kind) to ./_output/bin/
make tools
```

**Distribution and Publishing:**
```bash
# Copy compiled binaries to each npm package
make npm-copy-binaries

# Publish the npm packages
make npm-publish

# Publish the Python packages
make python-publish

# Update README.md with the latest toolsets
make update-readme-tools
```

Run `make help` to see all available targets with descriptions.

## Dependencies

When introducing new modules, run `make tidy` so that `go.mod` and `go.sum` remain tidy.

## Coding style

- Go modules target Go **1.24** (see `go.mod`).
- Tests use the standard library `testing` package together with `testify/suite` (see the Testing Patterns and Guidelines above).
- Build, test, and lint steps are defined in the Makefile; keep them working.

## Distribution Methods

The server is distributed as a binary executable, a Docker image, an npm package, and a Python package.

- **Native binaries** for Linux, macOS, and Windows are available in the GitHub releases.
- A **container image** (Docker) is built and pushed to the `quay.io/containers/kubernetes_mcp_server` repository.
- An **npm** package is available at [npmjs.com](https://www.npmjs.com/package/kubernetes-mcp-server).
  It wraps the platform-specific binary and provides a convenient way to run the server using `npx`.
- A **Python** package is available at [pypi.org](https://pypi.org/project/kubernetes-mcp-server/).
  It provides a script that downloads the correct platform binary from the GitHub releases page and runs it,
  making it convenient to run the server using `uvx` or `python -m kubernetes_mcp_server`.

```

--------------------------------------------------------------------------------
/pkg/kubernetes-mcp-server/cmd/testdata/empty-config.toml:
--------------------------------------------------------------------------------

```toml

```

--------------------------------------------------------------------------------
/pkg/mcp/testdata/helm-chart-no-op/Chart.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
name: no-op
version: 1.33.7

```

--------------------------------------------------------------------------------
/dev/config/keycloak/realm/realm-create.json:
--------------------------------------------------------------------------------

```json
{
  "realm": "openshift",
  "enabled": true
}

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-routing/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace web 

```

--------------------------------------------------------------------------------
/evals/tasks/fix-image-pull/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace debug

```

--------------------------------------------------------------------------------
/evals/tasks/debug-app-logs/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace calc-app

```

--------------------------------------------------------------------------------
/evals/tasks/fix-crashloop/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace crashloop-test 

```

--------------------------------------------------------------------------------
/evals/tasks/scale-down-deployment/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace scale-down-test 
```

--------------------------------------------------------------------------------
/pkg/mcp/testdata/helm-chart-secret/Chart.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v2
name: secret-chart
version: 0.1.0
type: application


```

--------------------------------------------------------------------------------
/evals/claude-code/agent.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Agent
metadata:
  name: "claude-code"
builtin:
  type: "claude-code"

```

--------------------------------------------------------------------------------
/evals/tasks/resize-pvc/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace resize-pv --ignore-not-found=true

```

--------------------------------------------------------------------------------
/python/kubernetes_mcp_server/__main__.py:
--------------------------------------------------------------------------------

```python
from .kubernetes_mcp_server import main

if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/evals/tasks/fix-pending-pod/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace homepage-ns --ignore-not-found=true

```

--------------------------------------------------------------------------------
/evals/tasks/fix-rbac-wrong-resource/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace simple-rbac-setup --ignore-not-found

```

--------------------------------------------------------------------------------
/pkg/kubernetes/watcher/watcher.go:
--------------------------------------------------------------------------------

```go
package watcher

type Watcher interface {
	Watch(onChange func() error)
	Close()
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete pod web-server -n create-pod-test --ignore-not-found

```

--------------------------------------------------------------------------------
/evals/tasks/create-simple-rbac/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace create-simple-rbac --ignore-not-found=true

```

--------------------------------------------------------------------------------
/evals/mcp-config.yaml:
--------------------------------------------------------------------------------

```yaml
mcpServers:
  kubernetes:
    type: http
    url: http://localhost:8080/mcp
    enableAllTools: true

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-with-no-endpoints/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Clean up all resources
kubectl delete namespace webshop-frontend --ignore-not-found

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod-mount-configmaps/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

kubectl delete namespace color-size-settings --ignore-not-found
echo "Cleanup completed" 

```

--------------------------------------------------------------------------------
/evals/tasks/horizontal-pod-autoscaler/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Tear down namespace and HPA resources
kubectl delete namespace hpa-test --ignore-not-found
exit 0

```

--------------------------------------------------------------------------------
/evals/tasks/rolling-update-deployment/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Tear down namespace and deployment
kubectl delete namespace rollout-test --ignore-not-found
exit 0
```

--------------------------------------------------------------------------------
/evals/tasks/create-pod/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace create-pod-test --ignore-not-found
kubectl create namespace create-pod-test

```

--------------------------------------------------------------------------------
/evals/tasks/multi-container-pod-communication/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

kubectl delete namespace multi-container-test --ignore-not-found
kubectl create namespace multi-container-test

```

--------------------------------------------------------------------------------
/evals/tasks/statefulset-lifecycle/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Tear down namespace and StatefulSet resources
kubectl delete namespace statefulset-test --ignore-not-found
exit 0
```

--------------------------------------------------------------------------------
/evals/tasks/create-pod-resources-limits/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Delete the namespace if it exists to ensure a clean state
kubectl delete namespace limits-test --ignore-not-found

```

--------------------------------------------------------------------------------
/evals/tasks/list-images-for-pods/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

set -o errexit
set -o nounset
set -o pipefail

NAMESPACE=list-images-for-pods

kubectl delete namespace ${NAMESPACE}

```

--------------------------------------------------------------------------------
/evals/tasks/create-canary-deployment/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e
NAMESPACE="canary-deployment-ns"

# Delete the namespace
kubectl delete namespace $NAMESPACE --wait=false --ignore-not-found

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod-mount-configmaps/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Delete the namespace if it exists to ensure a clean state
kubectl delete namespace color-size-settings --ignore-not-found

```

--------------------------------------------------------------------------------
/pkg/version/version.go:
--------------------------------------------------------------------------------

```go
package version

var CommitHash = "unknown"
var BuildTime = "1970-01-01T00:00:00Z"
var Version = "0.0.0"
var BinaryName = "kubernetes-mcp-server"

```

--------------------------------------------------------------------------------
/python/kubernetes_mcp_server/__init__.py:
--------------------------------------------------------------------------------

```python
"""
Kubernetes MCP Server (Model Context Protocol) with special support for OpenShift.
"""
from .kubernetes_mcp_server import main

__all__ = ['main']


```

--------------------------------------------------------------------------------
/dev/config/keycloak/client-scopes/groups.json:
--------------------------------------------------------------------------------

```json
{
  "name": "groups",
  "protocol": "openid-connect",
  "attributes": {
    "display.on.consent.screen": "false",
    "include.in.token.scope": "true"
  }
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/client-scopes/mcp-server.json:
--------------------------------------------------------------------------------

```json
{
  "name": "mcp-server",
  "protocol": "openid-connect",
  "attributes": {
    "display.on.consent.screen": "false",
    "include.in.token.scope": "true"
  }
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/client-scopes/mcp-openshift.json:
--------------------------------------------------------------------------------

```json
{
  "name": "mcp:openshift",
  "protocol": "openid-connect",
  "attributes": {
    "display.on.consent.screen": "false",
    "include.in.token.scope": "true"
  }
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod-resources-limits/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Delete the namespace which will also delete all resources in it
kubectl delete namespace limits-test --ignore-not-found
echo "Cleanup completed" 

```

--------------------------------------------------------------------------------
/evals/tasks/statefulset-lifecycle/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Teardown any existing namespace
kubectl delete namespace statefulset-test --ignore-not-found

# Create namespace
kubectl create namespace statefulset-test
```

--------------------------------------------------------------------------------
/dev/config/keycloak/realm/realm-events-config.json:
--------------------------------------------------------------------------------

```json
{
  "realm": "openshift",
  "enabled": true,
  "eventsEnabled": true,
  "eventsListeners": ["jboss-logging"],
  "adminEventsEnabled": true,
  "adminEventsDetailsEnabled": true
}

```

--------------------------------------------------------------------------------
/evals/tasks/fix-probes/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Delete the namespace which will remove all resources created for this task
kubectl delete namespace health-check --ignore-not-found

echo "Cleanup completed"

```

--------------------------------------------------------------------------------
/pkg/config/toolset_config.go:
--------------------------------------------------------------------------------

```go
package config

var toolsetConfigRegistry = newExtendedConfigRegistry()

func RegisterToolsetConfig(name string, parser ExtendedConfigParser) {
	toolsetConfigRegistry.register(name, parser)
}

```

--------------------------------------------------------------------------------
/pkg/config/provider_config.go:
--------------------------------------------------------------------------------

```go
package config

var providerConfigRegistry = newExtendedConfigRegistry()

func RegisterProviderConfig(name string, parser ExtendedConfigParser) {
	providerConfigRegistry.register(name, parser)
}

```

--------------------------------------------------------------------------------
/evals/tasks/resize-pvc/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace resize-pv --ignore-not-found
kubectl create namespace resize-pv

kubectl apply -f artifacts/storage-pvc.yaml
kubectl apply -f artifacts/storage-pod.yaml

```

--------------------------------------------------------------------------------
/cmd/kubernetes-mcp-server/main_test.go:
--------------------------------------------------------------------------------

```go
package main

import (
	"os"
)

func Example_version() {
	oldArgs := os.Args
	defer func() { os.Args = oldArgs }()
	os.Args = []string{"kubernetes-mcp-server", "--version"}
	main()
	// Output: 0.0.0
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-canary-deployment/artifacts/service.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Service
metadata:
  name: recommendation-engine
spec:
  selector:
    app: recommendation-engine
    version: v2.0
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

```

--------------------------------------------------------------------------------
/evals/tasks/fix-pending-pod/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace homepage-ns --ignore-not-found
kubectl create namespace homepage-ns

kubectl apply -f artifacts/homepage-pvc.yaml
kubectl apply -f artifacts/homepage-pod.yaml

```

--------------------------------------------------------------------------------
/evals/tasks/setup-dev-cluster/setup-dev-cluster.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: setup-dev-cluster
  difficulty: hard
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    file: setup-dev-cluster.md

```

--------------------------------------------------------------------------------
/evals/tasks/fix-pending-pod/artifacts/homepage-pvc.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: homepage-pvc
  namespace: homepage-ns
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: sc 
  resources:
    requests:
      storage: 1Gi

```

--------------------------------------------------------------------------------
/pkg/config/config_default_overrides.go:
--------------------------------------------------------------------------------

```go
package config

func defaultOverrides() StaticConfig {
	return StaticConfig{
		// IMPORTANT: this file is used to override default config values in downstream builds.
		// This is intentionally left blank.
	}
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-network-policy/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Delete the namespaces which will also delete all resources in them
kubectl delete namespace ns1 --ignore-not-found
kubectl delete namespace ns2 --ignore-not-found

echo "Cleanup completed" 

```

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e
NAMESPACE="e-commerce"

# Delete the namespace to clean up all resources created during the evaluation
echo "Deleting namespace '$NAMESPACE'..."
kubectl delete namespace $NAMESPACE --wait=false

```

--------------------------------------------------------------------------------
/evals/tasks/resize-pvc/artifacts/storage-pvc.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-pvc
  namespace: resize-pv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard

```

--------------------------------------------------------------------------------
/evals/tasks/create-simple-rbac/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace create-simple-rbac --ignore-not-found # clean up, just in case
kubectl create namespace create-simple-rbac
kubectl create serviceaccount reader-sa -n create-simple-rbac

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-with-no-endpoints/artifacts/service.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
  namespace: webshop-frontend
  labels:
    app: web-app
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

```

--------------------------------------------------------------------------------
/internal/test/env.go:
--------------------------------------------------------------------------------

```go
package test

import (
	"os"
	"strings"
)

func RestoreEnv(originalEnv []string) {
	os.Clearenv()
	for _, env := range originalEnv {
		if key, value, found := strings.Cut(env, "="); found {
			_ = os.Setenv(key, value)
		}
	}
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/mappers/openshift-audience.json:
--------------------------------------------------------------------------------

```json
{
  "name": "openshift-audience",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-audience-mapper",
  "config": {
    "included.client.audience": "openshift",
    "id.token.claim": "true",
    "access.token.claim": "true"
  }
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/mappers/mcp-server-audience.json:
--------------------------------------------------------------------------------

```json
{
  "name": "mcp-server-audience",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-audience-mapper",
  "config": {
    "included.client.audience": "mcp-server",
    "id.token.claim": "true",
    "access.token.claim": "true"
  }
}

```

--------------------------------------------------------------------------------
/pkg/toolsets/kiali/internal/defaults/defaults_override.go:
--------------------------------------------------------------------------------

```go
package defaults

const (
	toolsetNameOverride        = ""
	toolsetDescriptionOverride = ""
)

func ToolsetNameOverride() string {
	return toolsetNameOverride
}

func ToolsetDescriptionOverride() string {
	return toolsetDescriptionOverride
}

```

--------------------------------------------------------------------------------
/pkg/kubernetes/token.go:
--------------------------------------------------------------------------------

```go
package kubernetes

import (
	"context"

	authenticationv1api "k8s.io/api/authentication/v1"
)

type TokenVerifier interface {
	VerifyToken(ctx context.Context, cluster, token, audience string) (*authenticationv1api.UserInfo, []string, error)
}

```

--------------------------------------------------------------------------------
/evals/tasks/multi-container-pod-communication/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

kubectl delete pod communication-pod -n multi-container-test --ignore-not-found
kubectl delete configmap shared-data -n multi-container-test --ignore-not-found
kubectl delete namespace multi-container-test --ignore-not-found

```

--------------------------------------------------------------------------------
/evals/openai-agent/agent.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Agent
metadata:
  name: "openai-agent"
builtin:
  type: "openai-agent"
  model: "gpt-5"  # Change to your model
# Before running, set environment variables:
#   export MODEL_BASE_URL="https://api.openai.com/v1"
#   export MODEL_KEY="sk-..."

```

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/artifacts/service.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Service
metadata:
  name: checkout-service
spec:
  selector:
    # This selector initially targets the 'blue' version
    app: checkout-service
    version: blue
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

```

--------------------------------------------------------------------------------
/evals/tasks/list-images-for-pods/list-images-for-pods.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: list-images-for-pods
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    contains: "mysql:8.0.36"
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "What images are all pods running in the cluster?"

```

--------------------------------------------------------------------------------
/dev/config/keycloak/users/mcp.json:
--------------------------------------------------------------------------------

```json
{
  "username": "mcp",
  "email": "[email protected]",
  "firstName": "MCP",
  "lastName": "User",
  "enabled": true,
  "emailVerified": true,
  "credentials": [
    {
      "type": "password",
      "value": "mcp",
      "temporary": false
    }
  ]
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Wait for pod to be running with kubectl wait
if kubectl wait --for=condition=Ready pod/web-server -n create-pod-test --timeout=120s; then
    exit 0
else
    # If we get here, pod didn't reach Running state in time
    exit 1
fi

```

--------------------------------------------------------------------------------
/evals/tasks/fix-probes/fix-probes.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: fix-probes
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: My webapp in the orders namespace is not working. Can you please fix it?

```

--------------------------------------------------------------------------------
/evals/tasks/setup-dev-cluster/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

echo "Setting up cluster state for dev environment eval..."

# Clean up any existing resources
kubectl delete namespace dev-alice dev-bob dev-charlie dev-shared staging prod --ignore-not-found=true

echo "Setup completed successfully"
```

--------------------------------------------------------------------------------
/evals/tasks/debug-app-logs/debug-app-logs.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: debug-app-logs
  difficulty: medium
steps:
  setup:
    file: setup.sh
  prompt:
    inline: "What's wrong with my calc-app-pod in the calc-app namespace?"
  cleanup:
    file: cleanup.sh
  verify:
    contains: "division by zero"

```

--------------------------------------------------------------------------------
/evals/tasks/resize-pvc/resize-pvc.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: resize-pvc
  difficulty: easy
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "resize the storage volume to 15Gi for the storage-pod in `resize-pv` namespace"

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod/create-pod.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: "create-nginx-pod"
  difficulty: easy
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: Please create a nginx pod named web-server in the create-pod-test namespace

```

--------------------------------------------------------------------------------
/evals/tasks/fix-image-pull/fix-image-pull.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: "fix-image-pull"
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: Please fix the error in the deployment named 'app' in the namespace 'debug'

```

--------------------------------------------------------------------------------
/evals/tasks/fix-crashloop/fix-crashloop.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: "fix-crashloop"
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: Please fix the error in the deployment named 'app' in namespace 'crashloop-test'

```

--------------------------------------------------------------------------------
/evals/tasks/scale-deployment/scale-deployment.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: scale-deployment
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Scale up the replicas of deployment 'web-app' in namespace 'scale-test' by 100%"

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery.ai configuration https://smithery.ai/docs/config#smitheryyaml
startCommand:
  type: stdio
  configSchema:
    {}
  commandFunction:
    |-
    (config) => ({
      "command": "npx",
      "args": [
        "-y", "kubernetes-mcp-server@latest"
      ]
    })

```

--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------

```yaml
version: 2
updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "daily"
    open-pull-requests-limit: 10
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
    open-pull-requests-limit: 10

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-routing/fix-service-routing.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: fix-service-routing
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Please diagnose and fix why the nginx service in namespace 'web' isn't accessible"

```

--------------------------------------------------------------------------------
/pkg/mcp/testdata/helm-chart-secret/templates/secret.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-secret
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
type: Opaque
data:
  username: {{ b64enc "aitana" }}
  password: {{ b64enc "alex" }}
```

--------------------------------------------------------------------------------
/dev/config/keycloak/mappers/groups-membership.json:
--------------------------------------------------------------------------------

```json
{
  "name": "groups",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-group-membership-mapper",
  "config": {
    "claim.name": "groups",
    "full.path": "false",
    "id.token.claim": "true",
    "access.token.claim": "true",
    "userinfo.token.claim": "true"
  }
}

```

--------------------------------------------------------------------------------
/evals/tasks/scale-down-deployment/scale-down-deployment.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: scale-down-deployment
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Scale down the replicas of deployment 'web-service' in namespace 'scale-down-test' by 50%"

```

--------------------------------------------------------------------------------
/evals/tasks/create-simple-rbac/create-simple-rbac.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: create-simple-rbac
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Create a read-only role for pods bound to my reader-sa service account in the create-simple-rbac namespace."

```

--------------------------------------------------------------------------------
/evals/tasks/fix-rbac-wrong-resource/fix-rbac-wrong-resource.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: fix-rbac-wrong-resource
  difficulty: easy
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Why can't I list pods with my pod-reader service account in simple-rbac-setup namespace? Please fix it."

```

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/templates/configmap.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "kubernetes-mcp-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "kubernetes-mcp-server.labels" . | nindent 4 }}
data:
  config.toml: |
    {{- tpl (toToml .Values.config) . | replace ".0" "" | nindent 4 }}

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
FROM golang:latest AS builder

WORKDIR /app

COPY ./ ./
RUN make build

FROM registry.access.redhat.com/ubi9/ubi-minimal:latest
WORKDIR /app
COPY --from=builder /app/kubernetes-mcp-server /app/kubernetes-mcp-server
USER 65532:65532
ENTRYPOINT ["/app/kubernetes-mcp-server", "--port", "8080"]

EXPOSE 8080

```

--------------------------------------------------------------------------------
/evals/tasks/fix-pending-pod/fix-pending-pod.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: fix-pending-pod
  difficulty: easy
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "My pod homepage-pod in homepage-ns is stuck in pending state. Can you fix the issue so that pod status is in running state?"

```

--------------------------------------------------------------------------------
/evals/tasks/rolling-update-deployment/rolling-update-deployment.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: rolling-update-deployment
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Update the image of the web-app in rollout-test namespace to 1.22 version. Ensure there is zero downtime (or minimize disruption)"

```

--------------------------------------------------------------------------------
/evals/tasks/create-network-policy/artifacts/desired-policy.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: ns1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ns2
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

```

--------------------------------------------------------------------------------
/dev/config/keycloak/mappers/username.json:
--------------------------------------------------------------------------------

```json
{
  "name": "username",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-usermodel-property-mapper",
  "config": {
    "userinfo.token.claim": "true",
    "user.attribute": "username",
    "id.token.claim": "true",
    "access.token.claim": "true",
    "claim.name": "preferred_username",
    "jsonType.label": "String"
  }
}

```

--------------------------------------------------------------------------------
/evals/tasks/horizontal-pod-autoscaler/horizontal-pod-autoscaler.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: horizontal-pod-autoscaler
  difficulty: hard
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Create a HorizontalPodAutoscaler for deployment 'web-app' in namespace 'hpa-test' targeting 50% CPU utilization with min=1 and max=3 replicas"

```

--------------------------------------------------------------------------------
/pkg/openshift/openshift.go:
--------------------------------------------------------------------------------

```go
package openshift

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
)

func IsOpenshift(discoveryClient discovery.DiscoveryInterface) bool {
	_, err := discoveryClient.ServerResourcesForGroupVersion(schema.GroupVersion{
		Group:   "project.openshift.io",
		Version: "v1",
	}.String())
	return err == nil
}

```

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/Chart.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v2
name: kubernetes-mcp-server
description: Helm Chart for the Kubernetes MCP Server
home: https://github.com/containers/kubernetes-mcp-server
keywords:
  - kubernetes
  - mcp
maintainers:
  - name: Andrew Block
    email: [email protected]
  - name: Marc Nuri
    email: [email protected]
version: 0.1.0
appVersion: "latest"

```

--------------------------------------------------------------------------------
/pkg/kubernetes/openshift.go:
--------------------------------------------------------------------------------

```go
package kubernetes

import (
	"context"

	"github.com/containers/kubernetes-mcp-server/pkg/openshift"
)

func (m *Manager) IsOpenShift(ctx context.Context) bool {
	// This method should be fast and not block (it's called at startup)
	k, err := m.Derived(ctx)
	if err != nil {
		return false
	}
	return openshift.IsOpenshift(k.DiscoveryClient())
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/clients/openshift.json:
--------------------------------------------------------------------------------

```json
{
  "clientId": "openshift",
  "enabled": true,
  "publicClient": false,
  "standardFlowEnabled": true,
  "directAccessGrantsEnabled": true,
  "serviceAccountsEnabled": true,
  "authorizationServicesEnabled": false,
  "redirectUris": ["*"],
  "webOrigins": ["*"],
  "defaultClientScopes": ["profile", "email", "groups"],
  "optionalClientScopes": []
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/clients/mcp-client.json:
--------------------------------------------------------------------------------

```json
{
  "clientId": "mcp-client",
  "enabled": true,
  "publicClient": true,
  "standardFlowEnabled": true,
  "directAccessGrantsEnabled": true,
  "serviceAccountsEnabled": false,
  "authorizationServicesEnabled": false,
  "redirectUris": ["*"],
  "webOrigins": ["*"],
  "defaultClientScopes": ["profile", "email"],
  "optionalClientScopes": ["mcp-server"]
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-canary-deployment/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e
NAMESPACE="canary-deployment-ns"

# Create the namespace
kubectl delete namespace $NAMESPACE --ignore-not-found
kubectl create namespace $NAMESPACE

# Apply the initial stable deployment and the service pointing only to it
kubectl apply -n $NAMESPACE -f artifacts/deployment-v1.yaml
kubectl apply -n $NAMESPACE -f artifacts/service.yaml

```

--------------------------------------------------------------------------------
/pkg/kiali/namespaces.go:
--------------------------------------------------------------------------------

```go
package kiali

import (
	"context"
	"net/http"
)

// ListNamespaces calls the Kiali namespaces API using the provided Authorization header value.
// Returns all namespaces in the mesh that the user has access to.
func (k *Kiali) ListNamespaces(ctx context.Context) (string, error) {
	return k.executeRequest(ctx, http.MethodGet, NamespacesEndpoint, "", nil)
}

```

--------------------------------------------------------------------------------
/pkg/mcp/modules.go:
--------------------------------------------------------------------------------

```go
package mcp

import (
	_ "github.com/containers/kubernetes-mcp-server/pkg/toolsets/config"
	_ "github.com/containers/kubernetes-mcp-server/pkg/toolsets/core"
	_ "github.com/containers/kubernetes-mcp-server/pkg/toolsets/helm"
	_ "github.com/containers/kubernetes-mcp-server/pkg/toolsets/kiali"
	_ "github.com/containers/kubernetes-mcp-server/pkg/toolsets/kubevirt"
)

```

--------------------------------------------------------------------------------
/evals/tasks/resize-pvc/artifacts/storage-pod.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
  namespace: resize-pv
spec:
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: storage-pvc
  containers:
    - name: storage-user
      image: nginx:alpine
      volumeMounts:
        - name: my-storage
          mountPath: /usr/share/nginx/html
      ports:
        - containerPort: 80

```

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/artifacts/blue-deployment.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service-blue
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout-service
      version: blue
  template:
    metadata:
      labels:
        app: checkout-service
        version: blue
    spec:
      containers:
      - name: app
        image: nginx:1.25
        ports:
        - containerPort: 80

```

--------------------------------------------------------------------------------
/evals/tasks/create-pod-resources-limits/create-pod-resources-limits.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: create-pod-resources-limits
  difficulty: easy
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Create namespace 'limits-test' with a pod 'resource-limits-pod' using httpd:alpine image. Container 'my-container' should have CPU request 60m, limit 600m, and memory request/limit of 62Mi."

```

--------------------------------------------------------------------------------
/evals/tasks/fix-pending-pod/artifacts/homepage-pod.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: homepage-pod
  namespace: homepage-ns
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: my-persistent-storage
          mountPath: /usr/share/nginx/html
  volumes:
    - name: my-persistent-storage
      persistentVolumeClaim:
        claimName: homepage-pvc

```

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/templates/serviceaccount.yaml:
--------------------------------------------------------------------------------

```yaml
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "kubernetes-mcp-server.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "kubernetes-mcp-server.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- tpl (toYaml .) . | nindent 4 }}
  {{- end }}
{{- end }}

```

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/deployment-traffix-switch.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: deployment-traffic-switch
  difficulty: easy
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Our new checkout-service-green deployment in the e-commerce namespace has passed all tests. The current live version is checkout-service-blue. Can you switch all live traffic over to the green version now?"

```

--------------------------------------------------------------------------------
/evals/tasks/statefulset-lifecycle/statefulset-lifecycle.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: statefulset-lifecycle
  difficulty: hard
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Deploy a 3-replica StatefulSet db in namespace statefulset-test with each pod mounting a 1Gi PVC at /data containing a file `test` populated with the string `initial_data`. Then, scale back down to 1 replica."

```

--------------------------------------------------------------------------------
/pkg/config/context.go:
--------------------------------------------------------------------------------

```go
package config

import "context"

type configDirPathKey struct{}

func withConfigDirPath(ctx context.Context, dirPath string) context.Context {
	return context.WithValue(ctx, configDirPathKey{}, dirPath)
}

func ConfigDirPathFromContext(ctx context.Context) string {
	val := ctx.Value(configDirPathKey{})

	if val == nil {
		return ""
	}

	if strVal, ok := val.(string); ok {
		return strVal
	}

	return ""
}

```

--------------------------------------------------------------------------------
/evals/tasks/debug-app-logs/artifacts/calc-app-pod.yaml:
--------------------------------------------------------------------------------

```yaml
# Pod for running the `calc-app.py` script
apiVersion: v1
kind: Pod
metadata:
  name: calc-app-pod
  namespace: calc-app
spec:
  containers:
  - name: calc-app-executor
    image: python:3.9-slim-buster
    command: ["python"]
    args: ["/etc/config/calc-app.py"]
    volumeMounts:
    - name: calc-app-volume
      mountPath: /etc/config
  volumes:
  - name: calc-app-volume
    configMap:
      name: calc-app-map

```

--------------------------------------------------------------------------------
/evals/tasks/create-network-policy/create-network-policy.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: create-network-policy
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Create a NetworkPolicy named 'np' in namespace 'ns1' that: 1. Allows egress traffic only to pods in namespace 'ns2' (incoming traffic not affected) 2. Allows DNS traffic (port 53 TCP and UDP) 3. Blocks all other outgoing traffic"

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-routing/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Check if service has endpoints
endpoints=$(kubectl get endpoints nginx -n web -o jsonpath='{.subsets[0].addresses}')
if [[ ! -z "$endpoints" ]]; then
    # Verify service can access the pod
    if kubectl run -n web test-connection --image=busybox --restart=Never --rm -i --wait --timeout=180s \
        -- wget -qO- nginx; then
        exit 0
    fi
fi

# If we get here, service connection failed
exit 1 

```

--------------------------------------------------------------------------------
/pkg/kubernetes/common_test.go:
--------------------------------------------------------------------------------

```go
package kubernetes

import (
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	// Set up
	_ = os.Setenv("KUBECONFIG", "/dev/null")     // Avoid interference from existing kubeconfig
	_ = os.Setenv("KUBERNETES_SERVICE_HOST", "") // Avoid interference from in-cluster config
	_ = os.Setenv("KUBERNETES_SERVICE_PORT", "") // Avoid interference from in-cluster config

	// Run tests
	code := m.Run()

	// Tear down
	os.Exit(code)
}

```

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/artifacts/green-deployment.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service-green
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout-service
      version: green
  template:
    metadata:
      labels:
        app: checkout-service
        version: green
    spec:
      containers:
      - name: app
        image: nginx:1.26 # Use a different image tag to represent a new version
        ports:
        - containerPort: 80

```

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/templates/service.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "kubernetes-mcp-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "kubernetes-mcp-server.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "kubernetes-mcp-server.selectorLabels" . | nindent 4 }}

```

--------------------------------------------------------------------------------
/evals/tasks/create-canary-deployment/artifacts/deployment-v1.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: engine-v2-0
  labels:
    app: recommendation-engine
    version: v2.0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: recommendation-engine
      version: v2.0
  template:
    metadata:
      labels:
        app: recommendation-engine
        version: v2.0
    spec:
      containers:
      - name: rec-engine
        image: nginx:1.28
        ports:
        - containerPort: 80

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-with-no-endpoints/artifacts/deployment.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  namespace: webshop-frontend
  labels:
    app: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      nodeSelector:
        environment: production-gpu
      containers:
      - name: web-app-container
        image: nginx:latest
        ports:
        - containerPort: 80

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-with-no-endpoints/fix-service-with-no-endpoints.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: fix-service-with-no-endpoints
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Our frontend application in the webshop-frontend namespace is reporting connection errors. The logs show: 'Error: connection to web-app-service.webshop-frontend.svc.cluster.local failed: connection refused'. Can you help diagnose and fix this issue?"

```

--------------------------------------------------------------------------------
/cmd/kubernetes-mcp-server/main.go:
--------------------------------------------------------------------------------

```go
package main

import (
	"os"

	"github.com/spf13/pflag"
	"k8s.io/cli-runtime/pkg/genericiooptions"

	"github.com/containers/kubernetes-mcp-server/pkg/kubernetes-mcp-server/cmd"
)

func main() {
	flags := pflag.NewFlagSet("kubernetes-mcp-server", pflag.ExitOnError)
	pflag.CommandLine = flags

	root := cmd.NewMCPServer(genericiooptions.IOStreams{In: os.Stdin, Out: os.Stdout, ErrOut: os.Stderr})
	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}

```

--------------------------------------------------------------------------------
/evals/claude-code/eval.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Eval
metadata:
  name: "kubernetes-basic-operations"
config:
  agent:
    type: "file"
    path: agent.yaml
  mcpConfigFile: ../mcp-config.yaml
  llmJudge:
    env:
      baseUrlKey: JUDGE_BASE_URL
      apiKeyKey: JUDGE_API_KEY
      modelNameKey: JUDGE_MODEL_NAME
  taskSets:
    - glob: ../tasks/*/*.yaml
      assertions:
        toolsUsed:
          - server: kubernetes
            toolPattern: ".*"
        minToolCalls: 1
        maxToolCalls: 20

```

--------------------------------------------------------------------------------
/evals/openai-agent/eval.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Eval
metadata:
  name: "openai-agent-kubernetes-test"
config:
  agent:
    type: "file"
    path: agent.yaml
  mcpConfigFile: ../mcp-config.yaml
  llmJudge:
    env:
      baseUrlKey: JUDGE_BASE_URL
      apiKeyKey: JUDGE_API_KEY
      modelNameKey: JUDGE_MODEL_NAME
  taskSets:
    - glob: ../tasks/*/*.yaml
      assertions:
        toolsUsed:
          - server: kubernetes
            toolPattern: ".*"
        minToolCalls: 1
        maxToolCalls: 20

```

--------------------------------------------------------------------------------
/evals/tasks/setup-dev-cluster/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

echo "Cleaning up dev cluster eval resources..."

# Delete all created namespaces (this will clean up most resources)
kubectl delete namespace dev-alice dev-bob dev-charlie dev-shared staging prod --ignore-not-found=true

# Clean up any cluster-level RBAC resources that might have been created.
# kubectl does not expand glob patterns in resource names, so list by name
# and filter on the dev- prefix instead.
kubectl get clusterroles -o name | grep '/dev-' | xargs -r kubectl delete
kubectl get clusterrolebindings -o name | grep '/dev-' | xargs -r kubectl delete

echo "Cleanup completed"
```

--------------------------------------------------------------------------------
/evals/tasks/debug-app-logs/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace calc-app --ignore-not-found # clean up, just in case
kubectl create namespace calc-app
kubectl create configmap calc-app-map --from-file=artifacts/calc-app.py --namespace=calc-app
kubectl apply -f artifacts/calc-app-pod.yaml --namespace=calc-app

# Wait for pod to be ready
echo "Waiting for pod to be ready..."
TIMEOUT="120s"
kubectl wait --for=condition=Ready pod/calc-app-pod --namespace=calc-app --timeout=$TIMEOUT

```

--------------------------------------------------------------------------------
/evals/tasks/fix-rbac-wrong-resource/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace simple-rbac-setup --ignore-not-found
kubectl create namespace simple-rbac-setup
kubectl create serviceaccount pod-reader -n simple-rbac-setup
# role is misconfigured to list deployments instead of pods
kubectl create role pod-reader-role --verb=list --resource=deployments -n simple-rbac-setup
kubectl create rolebinding pod-reader-binding --role=pod-reader-role --serviceaccount=simple-rbac-setup:pod-reader -n simple-rbac-setup

```

--------------------------------------------------------------------------------
/evals/tasks/debug-app-logs/artifacts/calc-app.py:
--------------------------------------------------------------------------------

```python
import random
import sys
import time

print("Starting calc-app...")
sys.stdout.flush()

counter = 0
while True:
    counter += 1
    if random.randint(1, 4) == 4:  # 25% chance of failure
        try:
            result = 1 / 0
        except ZeroDivisionError as e:
            print(f"Run {counter} failed with error: {e}")
            sys.stdout.flush()
    else:
        print(f"Run {counter} succeeded")
        sys.stdout.flush()

    sleep_time = (1.2**counter)/20
    time.sleep(sleep_time)

```

--------------------------------------------------------------------------------
/evals/tasks/create-network-policy/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Cleanup existing namespaces if they exist
kubectl delete namespace ns1 --ignore-not-found
kubectl delete namespace ns2 --ignore-not-found

TIMEOUT=120

# Wait for namespaces to be fully deleted (bounded at $TIMEOUT seconds)
echo "Waiting for namespaces to be fully deleted..."
for ((i = 0; i < TIMEOUT; i++)); do
    kubectl get namespace ns1 2>/dev/null || kubectl get namespace ns2 2>/dev/null || break
    sleep 1
done

# Create the namespaces
kubectl create namespace ns1
kubectl create namespace ns2

echo "Setup completed" 

```

--------------------------------------------------------------------------------
/pkg/kubernetes/impersonate_roundtripper.go:
--------------------------------------------------------------------------------

```go
package kubernetes

import "net/http"

// nolint:unused
type impersonateRoundTripper struct {
	delegate http.RoundTripper
}

// nolint:unused
func (irt *impersonateRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// TODO: Solution won't work with discoveryclient which uses context.TODO() instead of the passed-in context
	if v, ok := req.Context().Value(OAuthAuthorizationHeader).(string); ok {
		req.Header.Set("Authorization", v)
	}
	return irt.delegate.RoundTrip(req)
}

```

--------------------------------------------------------------------------------
/evals/tasks/scale-deployment/cleanup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Remove the namespace created for this task; deleting it also removes the
# web-app deployment inside it.
kubectl delete namespace scale-test --ignore-not-found
```

--------------------------------------------------------------------------------
/evals/tasks/scale-deployment/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Wait for deployment to scale to 2 replicas with kubectl wait
TIMEOUT="120s"
if kubectl wait --for=condition=Available=True --timeout=$TIMEOUT deployment/web-app -n scale-test; then
    # Verify the replica count is exactly 2
    if [ "$(kubectl get deployment web-app -n scale-test -o jsonpath='{.status.availableReplicas}')" = "2" ]; then
        exit 0
    fi
fi

# If we get here, deployment didn't scale up correctly in time
echo "Verification failed for scale-deployment"
exit 1 
```

--------------------------------------------------------------------------------
/evals/tasks/scale-deployment/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Create namespace and a deployment with initial replicas
kubectl delete namespace scale-test --ignore-not-found
kubectl create namespace scale-test
kubectl create deployment web-app --image=nginx --replicas=1 -n scale-test
# Wait for initial deployment to be ready
for i in {1..30}; do
    if kubectl get deployment web-app -n scale-test -o jsonpath='{.status.availableReplicas}' | grep -q "1"; then
        exit 0
    fi
    sleep 2
done

echo "Setup failed for scale-deployment"
exit 1
```

--------------------------------------------------------------------------------
/evals/tasks/scale-down-deployment/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Create namespace and a deployment with initial replicas
kubectl delete namespace scale-down-test --ignore-not-found
kubectl create namespace scale-down-test
kubectl create deployment web-service --image=nginx --replicas=2 -n scale-down-test
# Wait for initial deployment to be ready
for i in {1..30}; do
    if kubectl --request-timeout=10s get deployment web-service -n scale-down-test -o jsonpath='{.status.availableReplicas}' | grep -q "2"; then
        exit 0
    fi
    sleep 1
done

echo "Setup failed for scale-down-deployment"
exit 1
```

--------------------------------------------------------------------------------
/evals/tasks/create-canary-deployment/create-canary-deployment.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: create-canary-deployment
  difficulty: hard
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "We want to test a new version of our recommendation engine (image tag 1.29) in production without disturbing the existing stable deployment. Can you deploy it as a canary (as engine-v2-1)? We want about 50% of traffic to go to the new version. The current deployment is named engine-v2-0 in the canary-deployment-ns."

```

--------------------------------------------------------------------------------
/evals/claude-code/eval-inline.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Eval
metadata:
  name: "kubernetes-basic-operations"
config:
  # Inline agent configuration - no separate agent.yaml file needed
  agent:
    type: "builtin.claude-code"
  mcpConfigFile: ../mcp-config.yaml
  llmJudge:
    env:
      baseUrlKey: JUDGE_BASE_URL
      apiKeyKey: JUDGE_API_KEY
      modelNameKey: JUDGE_MODEL_NAME
  taskSets:
    - glob: ../tasks/*/*.yaml
      assertions:
        toolsUsed:
          - server: kubernetes
            toolPattern: ".*"
        minToolCalls: 1
        maxToolCalls: 20

```

--------------------------------------------------------------------------------
/pkg/kubernetes-mcp-server/cmd/testdata/valid-config.toml:
--------------------------------------------------------------------------------

```toml
log_level = 1
port = "9999"
kubeconfig = "test"
list_output = "yaml"
read_only = true
disable_destructive = true

denied_resources = [
    {group = "apps", version = "v1", kind = "Deployment"},
    {group = "rbac.authorization.k8s.io", version = "v1", kind = "Role"}
]

enabled_tools = ["configuration_view", "events_list", "namespaces_list", "pods_list", "resources_list", "resources_get", "resources_create_or_update", "resources_delete"]
disabled_tools = ["pods_delete", "pods_top", "pods_log", "pods_run", "pods_exec"]


```

--------------------------------------------------------------------------------
/evals/tasks/scale-down-deployment/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Wait for deployment to scale down to 1 replicas with kubectl wait
TIMEOUT="120s"
if kubectl wait --for=condition=Available=True --timeout=$TIMEOUT deployment/web-service -n scale-down-test; then
    # Verify the replica count is exactly 1
    if [ "$(kubectl get deployment web-service -n scale-down-test -o jsonpath='{.status.availableReplicas}')" = "1" ]; then
        exit 0
    fi
fi

# If we get here, deployment didn't scale down correctly in time
echo "Verification failed for scale-down-deployment"
exit 1 
```

--------------------------------------------------------------------------------
/evals/tasks/list-images-for-pods/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

set -o errexit
set -o nounset
set -o pipefail

NAMESPACE=list-images-for-pods

kubectl delete namespace ${NAMESPACE} --ignore-not-found
kubectl create namespace ${NAMESPACE}

# Create artifacts
kubectl apply -f artifacts/manifest.yaml

# Wait for everything to be ready
# Can't wait for statefulset directly (sadly)
# Needs a new version of kubectl: kubectl wait --for=create --timeout=30s Pod/mysql-0 -n ${NAMESPACE}
sleep 5 # Wait for pod to be created (hopefully)
kubectl wait --for=condition=Ready --timeout=180s Pod/mysql-0 -n ${NAMESPACE}

```

--------------------------------------------------------------------------------
/internal/test/kubernetes.go:
--------------------------------------------------------------------------------

```go
package test

import (
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func KubeConfigFake() *clientcmdapi.Config {
	fakeConfig := clientcmdapi.NewConfig()
	fakeConfig.Clusters["fake"] = clientcmdapi.NewCluster()
	fakeConfig.Clusters["fake"].Server = "https://127.0.0.1:6443"
	fakeConfig.AuthInfos["fake"] = clientcmdapi.NewAuthInfo()
	fakeConfig.Contexts["fake-context"] = clientcmdapi.NewContext()
	fakeConfig.Contexts["fake-context"].Cluster = "fake"
	fakeConfig.Contexts["fake-context"].AuthInfo = "fake"
	fakeConfig.CurrentContext = "fake-context"
	return fakeConfig
}

```

--------------------------------------------------------------------------------
/dev/config/cert-manager/selfsigned-issuer.yaml:
--------------------------------------------------------------------------------

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: selfsigned-ca
  secretName: selfsigned-ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-ca-issuer
spec:
  ca:
    secretName: selfsigned-ca-secret

```

--------------------------------------------------------------------------------
/pkg/toolsets/kiali/internal/defaults/defaults.go:
--------------------------------------------------------------------------------

```go
package defaults

const (
	DefaultToolsetName        = "kiali"
	DefaultToolsetDescription = "Most common tools for managing Kiali, check the [Kiali documentation](https://github.com/containers/kubernetes-mcp-server/blob/main/docs/KIALI.md) for more details."
)

func ToolsetName() string {
	overrideName := ToolsetNameOverride()
	if overrideName != "" {
		return overrideName
	}
	return DefaultToolsetName
}

func ToolsetDescription() string {
	overrideDescription := ToolsetDescriptionOverride()
	if overrideDescription != "" {
		return overrideDescription
	}
	return DefaultToolsetDescription
}

```

--------------------------------------------------------------------------------
/evals/tasks/rolling-update-deployment/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Initialize namespace and deployment with the old image
kubectl delete namespace rollout-test --ignore-not-found
kubectl create namespace rollout-test
kubectl create deployment web-app --image=nginx:1.21 --replicas=3 -n rollout-test

# Wait until all replicas are available
TIMEOUT="120s"
if kubectl wait deployment/web-app -n rollout-test --for=condition=Available=True --timeout=$TIMEOUT; then
  echo "Setup succeeded for rolling-update-deployment"
  exit 0
else
  echo "Setup failed for rolling-update-deployment. Initial deployment did not become ready in time"
  exit 1
fi
```

--------------------------------------------------------------------------------
/evals/tasks/create-pod-mount-configmaps/create-pod-mount-configmaps.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: create-pod-mount-configmaps
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: "Create namespace 'color-size-settings' with two ConfigMaps - 'color-settings' with key 'color=blue' and 'size-settings' with key 'size=medium'. Create an nginx:alpine pod 'pod1' in namespace 'color-size-settings'. The pod `pod1` should use the value of the 'color' key from the 'color-settings' ConfigMap as an env var 'COLOR' and mount all keys in the 'size-settings' ConfigMap under the '/etc/sizes/' directory."

```

--------------------------------------------------------------------------------
/dev/config/keycloak/clients/mcp-server.json:
--------------------------------------------------------------------------------

```json
{
  "clientId": "mcp-server",
  "enabled": true,
  "publicClient": false,
  "standardFlowEnabled": true,
  "directAccessGrantsEnabled": true,
  "serviceAccountsEnabled": true,
  "authorizationServicesEnabled": false,
  "redirectUris": ["*"],
  "webOrigins": ["*"],
  "defaultClientScopes": ["profile", "email", "groups", "mcp-server"],
  "optionalClientScopes": ["mcp:openshift"],
  "attributes": {
    "oauth2.device.authorization.grant.enabled": "false",
    "oidc.ciba.grant.enabled": "false",
    "backchannel.logout.session.required": "true",
    "backchannel.logout.revoke.offline.tokens": "false"
  }
}

```

--------------------------------------------------------------------------------
/pkg/toolsets/helm/toolset.go:
--------------------------------------------------------------------------------

```go
package helm

import (
	"slices"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/toolsets"
)

type Toolset struct{}

var _ api.Toolset = (*Toolset)(nil)

func (t *Toolset) GetName() string {
	return "helm"
}

func (t *Toolset) GetDescription() string {
	return "Tools for managing Helm charts and releases"
}

func (t *Toolset) GetTools(_ api.Openshift) []api.ServerTool {
	return slices.Concat(
		initHelm(),
	)
}

func (t *Toolset) GetPrompts() []api.ServerPrompt {
	// Helm toolset does not provide prompts
	return nil
}

func init() {
	toolsets.Register(&Toolset{})
}

```

--------------------------------------------------------------------------------
/hack/generate-placeholder-ca.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e

# Generate a placeholder self-signed CA certificate for KIND cluster startup
# This will be replaced with the real cert-manager CA after the cluster is created

CERT_DIR="_output/cert-manager-ca"
CA_CERT="$CERT_DIR/ca.crt"
CA_KEY="$CERT_DIR/ca.key"

mkdir -p "$CERT_DIR"

# Generate a self-signed CA certificate (valid placeholder)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$CA_KEY" \
  -out "$CA_CERT" \
  -days 365 \
  -subj "/CN=placeholder-ca" \
  2>/dev/null

echo "✅ Placeholder CA certificate created at $CA_CERT"
echo "⚠️  This will be replaced with cert-manager CA after cluster creation"

```

--------------------------------------------------------------------------------
/evals/tasks/fix-rbac-wrong-resource/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
NAMESPACE=simple-rbac-setup
SERVICE_ACCOUNT=pod-reader
SERVICE_ACCOUNT_USER="system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT}"

# Check for allowed permissions
if ! kubectl auth can-i list pods --as=${SERVICE_ACCOUNT_USER} -n=${NAMESPACE} &> /dev/null; then
    echo "ServiceAccount still can't list pods."
    exit 1
fi

# Check for denied permissions
if kubectl auth can-i list pods --as=${SERVICE_ACCOUNT_USER} -A &> /dev/null; then
    echo "ServiceAccount has excessive permissions (can 'list' pods in other namespaces)."
    exit 1
fi

echo "Verification successful: RBAC role correctly reconfigured."
exit 0

```

--------------------------------------------------------------------------------
/pkg/kiali/mesh.go:
--------------------------------------------------------------------------------

```go
package kiali

import (
	"context"
	"net/http"
	"net/url"
)

// MeshStatus calls the Kiali mesh graph API to get the status of mesh components.
// This returns information about mesh components like Istio, Kiali, Grafana, Prometheus
// and their interactions, versions, and health status.
func (k *Kiali) MeshStatus(ctx context.Context) (string, error) {
	u, err := url.Parse(MeshGraphEndpoint)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("includeGateways", DefaultIncludeGateways)
	q.Set("includeWaypoints", DefaultIncludeWaypoints)
	u.RawQuery = q.Encode()
	return k.executeRequest(ctx, http.MethodGet, u.String(), "", nil)
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/clients/mcp-server-update.json:
--------------------------------------------------------------------------------

```json
{
  "clientId": "mcp-server",
  "enabled": true,
  "publicClient": false,
  "standardFlowEnabled": true,
  "directAccessGrantsEnabled": true,
  "serviceAccountsEnabled": true,
  "authorizationServicesEnabled": false,
  "redirectUris": ["*"],
  "webOrigins": ["*"],
  "defaultClientScopes": ["profile", "email", "groups", "mcp-server"],
  "optionalClientScopes": ["mcp:openshift"],
  "attributes": {
    "oauth2.device.authorization.grant.enabled": "false",
    "oidc.ciba.grant.enabled": "false",
    "backchannel.logout.session.required": "true",
    "backchannel.logout.revoke.offline.tokens": "false",
    "standard.token.exchange.enabled": "true"
  }
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/rbac.yaml:
--------------------------------------------------------------------------------

```yaml
# RBAC ClusterRoleBinding for mcp user with OIDC authentication
#
# IMPORTANT: This requires Kubernetes API server to be configured with OIDC:
#   --oidc-issuer-url=https://keycloak.127-0-0-1.sslip.io:8443/realms/openshift
#   --oidc-username-claim=preferred_username
#
# Without OIDC configuration, this binding will not work.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-mcp-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: https://keycloak.127-0-0-1.sslip.io:8443/realms/openshift#mcp

```

--------------------------------------------------------------------------------
/pkg/kubernetes/namespaces.go:
--------------------------------------------------------------------------------

```go
package kubernetes

import (
	"context"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func (k *Kubernetes) NamespacesList(ctx context.Context, options api.ListOptions) (runtime.Unstructured, error) {
	return k.ResourcesList(ctx, &schema.GroupVersionKind{
		Group: "", Version: "v1", Kind: "Namespace",
	}, "", options)
}

func (k *Kubernetes) ProjectsList(ctx context.Context, options api.ListOptions) (runtime.Unstructured, error) {
	return k.ResourcesList(ctx, &schema.GroupVersionKind{
		Group: "project.openshift.io", Version: "v1", Kind: "Project",
	}, "", options)
}

```

--------------------------------------------------------------------------------
/pkg/toolsets/config/toolset.go:
--------------------------------------------------------------------------------

```go
package config

import (
	"slices"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/toolsets"
)

type Toolset struct{}

var _ api.Toolset = (*Toolset)(nil)

func (t *Toolset) GetName() string {
	return "config"
}

func (t *Toolset) GetDescription() string {
	return "View and manage the current local Kubernetes configuration (kubeconfig)"
}

func (t *Toolset) GetTools(_ api.Openshift) []api.ServerTool {
	return slices.Concat(
		initConfiguration(),
	)
}

func (t *Toolset) GetPrompts() []api.ServerPrompt {
	// Config toolset does not provide prompts
	return nil
}

func init() {
	toolsets.Register(&Toolset{})
}

```

--------------------------------------------------------------------------------
/evals/tasks/fix-crashloop/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Wait for pod to be ready
if kubectl wait --for=condition=Ready pod -l app=nginx -n crashloop-test --timeout=25s; then
    # Get current restart count
    restarts=$(kubectl get pods -n crashloop-test -l app=nginx -o jsonpath='{.items[0].status.containerStatuses[0].restartCount}')
    
    # Wait additional 5 seconds to ensure stability
    sleep 5
    
    # Check if restart count hasn't increased
    new_restarts=$(kubectl get pods -n crashloop-test -l app=nginx -o jsonpath='{.items[0].status.containerStatuses[0].restartCount}')
    if [[ "$restarts" == "$new_restarts" ]]; then
        exit 0
    fi
fi

# If we get here, deployment's pod didn't stabilize in time
exit 1 

```

--------------------------------------------------------------------------------
/evals/openai-agent/eval-inline.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Eval
metadata:
  name: "openai-agent-kubernetes-test"
config:
  # Inline agent configuration - no separate agent.yaml file needed
  agent:
    type: "builtin.openai-agent"
    model: "gpt-5"  # Change to your model
  # Before running, set environment variables:
  #   export MODEL_BASE_URL="https://api.openai.com/v1"
  #   export MODEL_KEY="sk-..."
  mcpConfigFile: ../mcp-config.yaml
  llmJudge:
    env:
      baseUrlKey: JUDGE_BASE_URL
      apiKeyKey: JUDGE_API_KEY
      modelNameKey: JUDGE_MODEL_NAME
  taskSets:
    - glob: ../tasks/*/*.yaml
      assertions:
        toolsUsed:
          - server: kubernetes
            toolPattern: ".*"
        minToolCalls: 1
        maxToolCalls: 20

```

--------------------------------------------------------------------------------
/evals/tasks/horizontal-pod-autoscaler/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Initialize namespace and deployment with CPU load generator
kubectl delete namespace hpa-test --ignore-not-found
kubectl create namespace hpa-test

# Create a Deployment with CPU request to allow HPA to target utilization
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: hpa-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: busybox
        command: ["sh", "-c", "while true; do dd if=/dev/zero of=/dev/null; done"]
        resources:
          requests:
            cpu: "100m"
EOF

```

--------------------------------------------------------------------------------
/evals/tasks/multi-container-pod-communication/multi-container-pod-communication.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Task
metadata:
  name: multi-container-pod-communication
  difficulty: medium
steps:
  setup:
    file: setup.sh
  verify:
    file: verify.sh
  cleanup:
    file: cleanup.sh
  prompt:
    inline: |
      In the multi-container-logging namespace, run a pod called communication-pod with two containers:
      1. A 'web-server' nginx instance that serves traffic
      2. A 'logger' busybox instance that processes those logs from a shared volume, 'logs-volume' with 'tail -f /var/log/nginx/access.log'

      Both containers should mount logs-volume at '/var/log/nginx'.
      The logger should only start once the web server is ready. The pod should be considered ready when the web server is serving traffic.

```

--------------------------------------------------------------------------------
/pkg/mcp/testdata/toolsets-config-tools.json:
--------------------------------------------------------------------------------

```json
[
  {
    "annotations": {
      "title": "Configuration: View",
      "readOnlyHint": true,
      "destructiveHint": false,
      "openWorldHint": true
    },
    "description": "Get the current Kubernetes configuration content as a kubeconfig YAML",
    "inputSchema": {
      "type": "object",
      "properties": {
        "minified": {
          "description": "Return a minified version of the configuration. If set to true, keeps only the current-context and the relevant pieces of the configuration for that context. If set to false, all contexts, clusters, auth-infos, and users are returned in the configuration. (Optional, default true)",
          "type": "boolean"
        }
      }
    },
    "name": "configuration_view"
  }
]

```

--------------------------------------------------------------------------------
/evals/tasks/resize-pvc/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# Configuration
PVC_NAME="storage-pvc" 
EXPECTED_SIZE="15Gi"
TIMEOUT="120s"

echo "Attempting to get PV name from PVC: $PVC_NAME"

# Dynamically get the PV name from the PVC
PV_NAME=$(kubectl get pvc "$PVC_NAME" -n resize-pv -o jsonpath='{.spec.volumeName}')

if [ -z "$PV_NAME" ]; then
  echo "Error: Could not retrieve PersistentVolume name for PVC '$PVC_NAME'. Make sure the PVC exists and is bound."
  exit 1
fi

if ! kubectl wait --for=jsonpath='{.spec.capacity.storage}'="$EXPECTED_SIZE" pv/$PV_NAME --timeout=$TIMEOUT; then
  echo "FAILURE: PersistentVolume '$PV_NAME' did not reach the expected capacity of $EXPECTED_SIZE."
  exit 1 
else 
  echo "SUCCESS: PersistentVolume '$PV_NAME' reached the expected capacity of $EXPECTED_SIZE."
fi

```

--------------------------------------------------------------------------------
/pkg/toolsets/core/toolset.go:
--------------------------------------------------------------------------------

```go
package core

import (
	"slices"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/toolsets"
)

type Toolset struct{}

var _ api.Toolset = (*Toolset)(nil)

func (t *Toolset) GetName() string {
	return "core"
}

func (t *Toolset) GetDescription() string {
	return "Most common tools for Kubernetes management (Pods, Generic Resources, Events, etc.)"
}

func (t *Toolset) GetTools(o api.Openshift) []api.ServerTool {
	return slices.Concat(
		initEvents(),
		initNamespaces(o),
		initNodes(),
		initPods(),
		initResources(o),
	)
}

func (t *Toolset) GetPrompts() []api.ServerPrompt {
	// Core toolset prompts will be added in Feature 3
	return nil
}

func init() {
	toolsets.Register(&Toolset{})
}

```

--------------------------------------------------------------------------------
/python/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[build-system]
requires = ["setuptools>=77", "wheel"]  # SPDX license expression (PEP 639) needs setuptools >= 77
build-backend = "setuptools.build_meta"

[project]
name = "kubernetes-mcp-server"
version = "0.0.0"
description = "Kubernetes MCP Server (Model Context Protocol) with special support for OpenShift"
readme = {file="README.md", content-type="text/markdown"}
requires-python = ">=3.6"
license = "Apache-2.0"
authors = [
    { name = "Marc Nuri", email = "[email protected]" }
]
classifiers = [
    "Programming Language :: Python :: 3",
    "Operating System :: OS Independent",
]

[project.urls]
Homepage = "https://github.com/containers/kubernetes-mcp-server"
Repository = "https://github.com/containers/kubernetes-mcp-server"

[project.scripts]
kubernetes-mcp-server = "kubernetes_mcp_server:main"

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-routing/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Create namespace and deployment with one set of labels
kubectl delete namespace web --ignore-not-found
kubectl create namespace web

# Create deployment with label app=nginx
kubectl create deployment nginx --image=nginx -n web
# kubectl label deployment nginx -n web app=nginx --overwrite

# Create service with different selector (app=web)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: web
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: web  # Mismatched label - deployment has app=nginx
EOF

# Wait for deployment to be ready
for i in {1..30}; do
    if kubectl get deployment nginx -n web -o jsonpath='{.status.availableReplicas}' | grep -q "1"; then
        exit 0
    fi
    sleep 2
done

echo "Setup failed: deployment 'nginx' did not become available in time"
exit 1

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-with-no-endpoints/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
set -e

# Check if the deployment exists
if ! kubectl get deployment web-app-deployment -n webshop-frontend &>/dev/null; then
  echo "Deployment 'web-app-deployment' does not exist in namespace 'webshop-frontend'"
  exit 1
fi

# Check if pods are being created successfully
echo "Waiting for pods to become ready..."
TIMEOUT="120s"
if ! kubectl wait --for=condition=Ready pods -l app=web-app -n webshop-frontend --timeout=$TIMEOUT; then
  echo "Pods are not reaching Ready state after fixing the node selector"
  exit 1
fi

# Verify that the service now has endpoints
ENDPOINTS=$(kubectl get endpoints web-app-service -n webshop-frontend -o jsonpath='{.subsets[0].addresses}')
if [[ -z "$ENDPOINTS" ]]; then
  echo "Service still has no endpoints after fixing the deployment"
  exit 1
fi

```

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e
NAMESPACE="e-commerce"
TIMEOUT="120s"

# Create the namespace if it doesn't exist to make the script idempotent
kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -

# Apply all resource manifests from the artifacts directory
echo "Applying Kubernetes resources from artifacts/ directory..."
kubectl apply -n $NAMESPACE -f artifacts/

# Wait for both deployments to be available to ensure a stable starting state
echo "Waiting for blue deployment to be ready..."
kubectl rollout status deployment/checkout-service-blue -n $NAMESPACE --timeout=$TIMEOUT

echo "Waiting for green deployment to be ready..."
kubectl rollout status deployment/checkout-service-green -n $NAMESPACE --timeout=$TIMEOUT

echo "Setup complete. Service 'checkout-service' is pointing to 'blue'."

```

--------------------------------------------------------------------------------
/dev/config/kind/cluster.yaml:
--------------------------------------------------------------------------------

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: ./_output/cert-manager-ca/ca.crt
    containerPath: /etc/kubernetes/pki/keycloak-ca.crt
    readOnly: true
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        oidc-issuer-url: https://keycloak.127-0-0-1.sslip.io:8443/realms/openshift
        oidc-client-id: openshift
        oidc-username-claim: preferred_username
        oidc-groups-claim: groups
        oidc-ca-file: /etc/kubernetes/pki/keycloak-ca.crt
  extraPortMappings:
  - containerPort: 80
    hostPort: 8000
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP

```

--------------------------------------------------------------------------------
/evals/tasks/fix-image-pull/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
# Create namespace and a deployment with an invalid image that will cause ImagePullBackOff
kubectl delete namespace debug --ignore-not-found
kubectl create namespace debug
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:invalid-tag  # This will cause ImagePullBackOff error
EOF

# Wait for deployment's pod to enter ImagePullBackOff state
for i in {1..30}; do
    if kubectl get pods -n debug -l app=nginx -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}' | grep -q "ImagePullBackOff"; then
        exit 0
    fi
    sleep 1
done

echo "Setup failed: pod did not reach ImagePullBackOff in time"
exit 1

```

--------------------------------------------------------------------------------
/pkg/toolsets/kubevirt/toolset.go:
--------------------------------------------------------------------------------

```go
package kubevirt

import (
	"slices"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/toolsets"
	vm_create "github.com/containers/kubernetes-mcp-server/pkg/toolsets/kubevirt/vm/create"
	vm_lifecycle "github.com/containers/kubernetes-mcp-server/pkg/toolsets/kubevirt/vm/lifecycle"
)

type Toolset struct{}

var _ api.Toolset = (*Toolset)(nil)

func (t *Toolset) GetName() string {
	return "kubevirt"
}

func (t *Toolset) GetDescription() string {
	return "KubeVirt virtual machine management tools"
}

func (t *Toolset) GetTools(_ api.Openshift) []api.ServerTool {
	return slices.Concat(
		vm_create.Tools(),
		vm_lifecycle.Tools(),
	)
}

func (t *Toolset) GetPrompts() []api.ServerPrompt {
	// KubeVirt toolset does not provide prompts
	return nil
}

func init() {
	toolsets.Register(&Toolset{})
}

```

--------------------------------------------------------------------------------
/evals/tasks/fix-crashloop/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
kubectl delete namespace crashloop-test --ignore-not-found
# Create namespace and a deployment with an invalid command that will cause crashloop
kubectl create namespace crashloop-test
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: crashloop-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        command: ["/bin/sh", "-c"]
        args: ["nonexistent_command"]  # This will cause the pod to crash
EOF

# Wait for pod to enter crashloop state
for i in {1..30}; do
    if kubectl get pods -n crashloop-test -l app=nginx -o jsonpath='{.items[0].status.containerStatuses[0].restartCount}' | grep -q "[1-9]"; then
        exit 0
    fi
    sleep 1
done

echo "Setup failed: pod did not enter a crashloop in time"
exit 1

```

--------------------------------------------------------------------------------
/pkg/kiali/validations.go:
--------------------------------------------------------------------------------

```go
package kiali

import (
	"context"
	"net/http"
	"net/url"
	"strings"
)

// ValidationsList calls the Kiali validations API.
// `namespaces` may contain zero, one or many namespaces. If empty, returns validations from all namespaces.
func (k *Kiali) ValidationsList(ctx context.Context, namespaces []string) (string, error) {
	// Add namespaces query parameter if any provided
	cleaned := make([]string, 0, len(namespaces))
	endpoint := ValidationsEndpoint
	for _, ns := range namespaces {
		ns = strings.TrimSpace(ns)
		if ns != "" {
			cleaned = append(cleaned, ns)
		}
	}
	if len(cleaned) > 0 {
		u, err := url.Parse(endpoint)
		if err != nil {
			return "", err
		}
		q := u.Query()
		q.Set("namespaces", strings.Join(cleaned, ","))
		u.RawQuery = q.Encode()
		endpoint = u.String()
	}

	return k.executeRequest(ctx, http.MethodGet, endpoint, "", nil)
}

```
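
The namespace cleaning and query encoding above are a reusable pattern. Here is a minimal standalone sketch of the same steps; the function name `buildEndpoint` is illustrative and not part of the package:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildEndpoint trims and drops empty namespaces, then appends the rest
// as a single comma-separated "namespaces" query parameter.
func buildEndpoint(base string, namespaces []string) (string, error) {
	cleaned := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		if ns = strings.TrimSpace(ns); ns != "" {
			cleaned = append(cleaned, ns)
		}
	}
	if len(cleaned) == 0 {
		return base, nil
	}
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("namespaces", strings.Join(cleaned, ","))
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	endpoint, _ := buildEndpoint("/api/istio/validations", []string{" bookinfo ", "", "istio-system"})
	fmt.Println(endpoint) // /api/istio/validations?namespaces=bookinfo%2Cistio-system
}
```

Note that `url.Values.Encode` percent-encodes the comma, which Kiali accepts as a list separator.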

--------------------------------------------------------------------------------
/dev/config/keycloak/ingress.yaml:
--------------------------------------------------------------------------------

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
  annotations:
    cert-manager.io/cluster-issuer: "selfsigned-ca-issuer"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    # Required for Keycloak 26.2.0+ to include port in issuer URLs
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-Port 8443;
      proxy_set_header X-Forwarded-Host $host:8443;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - keycloak.127-0-0-1.sslip.io
    secretName: keycloak-tls-cert
  rules:
  - host: keycloak.127-0-0-1.sslip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port:
              number: 80

```

--------------------------------------------------------------------------------
/evals/tasks/fix-service-with-no-endpoints/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
set -e

# Delete namespace if it exists
kubectl delete namespace webshop-frontend --ignore-not-found

# Create a fresh namespace
kubectl create namespace webshop-frontend

# Apply the service and deployment with the invalid node selector
kubectl apply -f artifacts/service.yaml
kubectl apply -f artifacts/deployment.yaml

# Wait for the deployment to be available or timeout after 120 seconds
echo "Waiting for resources to be created..."
TIMEOUT="120s"
kubectl wait --for=condition=Available=False --timeout=$TIMEOUT deployment/web-app-deployment -n webshop-frontend || true

# Check the service has no endpoints (due to deployment with invalid node selector)
ENDPOINTS=$(kubectl get endpoints web-app-service -n webshop-frontend -o jsonpath='{.subsets}')
if [[ -z "$ENDPOINTS" ]]; then
  echo "Setup successful: Service has no endpoints as expected"
else
  echo "Unexpected state: Service has endpoints"
  exit 1
fi

```

--------------------------------------------------------------------------------
/pkg/toolsets/kiali/toolset.go:
--------------------------------------------------------------------------------

```go
package kiali

import (
	"slices"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/toolsets"
	"github.com/containers/kubernetes-mcp-server/pkg/toolsets/kiali/internal/defaults"
	kialiTools "github.com/containers/kubernetes-mcp-server/pkg/toolsets/kiali/tools"
)

type Toolset struct{}

var _ api.Toolset = (*Toolset)(nil)

func (t *Toolset) GetName() string {
	return defaults.ToolsetName()
}

func (t *Toolset) GetDescription() string {
	return defaults.ToolsetDescription()
}

func (t *Toolset) GetTools(_ api.Openshift) []api.ServerTool {
	return slices.Concat(
		kialiTools.InitGetMeshGraph(),
		kialiTools.InitManageIstioConfig(),
		kialiTools.InitGetResourceDetails(),
		kialiTools.InitGetMetrics(),
		kialiTools.InitLogs(),
		kialiTools.InitGetTraces(),
	)
}

func (t *Toolset) GetPrompts() []api.ServerPrompt {
	// Kiali toolset does not provide prompts
	return nil
}

func init() {
	toolsets.Register(&Toolset{})
}

```

--------------------------------------------------------------------------------
/.github/workflows/build.yaml:
--------------------------------------------------------------------------------

```yaml
name: Build

on:
  push:
    branches:
      - 'main'
    paths-ignore:
      - '.gitignore'
      - 'LICENSE'
      - '*.md'
  pull_request:
    paths-ignore:
      - '.gitignore'
      - 'LICENSE'
      - '*.md'

concurrency:
  # Only run once for latest commit per ref and cancel other (previous) runs.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  GO_VERSION: 1.25

defaults:
  run:
    shell: bash

jobs:
  build:
    name: Build on ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os:
          - ubuntu-latest #x64
          - ubuntu-24.04-arm #arm64
          - windows-latest #x64
          - macos-15-intel #x64
          - macos-latest #arm64
    runs-on: ${{ matrix.os }}
    steps:
      - name: Checkout
        uses: actions/checkout@v6
      - uses: actions/setup-go@v6
        with:
          go-version: ${{ env.GO_VERSION }}
      - name: Build
        run: make build
      - name: Test
        run: make test

```

--------------------------------------------------------------------------------
/pkg/toolsets/toolsets.go:
--------------------------------------------------------------------------------

```go
package toolsets

import (
	"fmt"
	"slices"
	"strings"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
)

var toolsets []api.Toolset

// Clear removes all registered toolsets, TESTING PURPOSES ONLY.
func Clear() {
	toolsets = []api.Toolset{}
}

func Register(toolset api.Toolset) {
	toolsets = append(toolsets, toolset)
}

func Toolsets() []api.Toolset {
	return toolsets
}

func ToolsetNames() []string {
	names := make([]string, 0)
	for _, toolset := range Toolsets() {
		names = append(names, toolset.GetName())
	}
	slices.Sort(names)
	return names
}

func ToolsetFromString(name string) api.Toolset {
	for _, toolset := range Toolsets() {
		if toolset.GetName() == strings.TrimSpace(name) {
			return toolset
		}
	}
	return nil
}

func Validate(toolsets []string) error {
	for _, toolset := range toolsets {
		if ToolsetFromString(toolset) == nil {
			return fmt.Errorf("invalid toolset name: %s, valid names are: %s", toolset, strings.Join(ToolsetNames(), ", "))
		}
	}
	return nil
}

```
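
The registry above is populated by each toolset package's `init()` (see the `toolsets.Register(&Toolset{})` calls elsewhere in this repo). The following standalone sketch mirrors that pattern with a trimmed stand-in for `api.Toolset`, so the registration and validation flow can be seen end to end:

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// Toolset is a simplified stand-in for api.Toolset.
type Toolset interface{ GetName() string }

type named string

func (n named) GetName() string { return string(n) }

var registry []Toolset

// Register appends a toolset, as each package's init() does in the real code.
func Register(t Toolset) { registry = append(registry, t) }

// Names returns the sorted registered toolset names.
func Names() []string {
	names := make([]string, 0, len(registry))
	for _, t := range registry {
		names = append(names, t.GetName())
	}
	slices.Sort(names)
	return names
}

// Validate rejects any requested name that is not registered.
func Validate(requested []string) error {
	for _, name := range requested {
		if !slices.Contains(Names(), strings.TrimSpace(name)) {
			return fmt.Errorf("invalid toolset name: %s, valid names are: %s", name, strings.Join(Names(), ", "))
		}
	}
	return nil
}

func main() {
	Register(named("core"))
	Register(named("helm"))
	fmt.Println(Validate([]string{"helm", " core "})) // <nil>
	fmt.Println(Validate([]string{"missing"}) != nil) // true
}
```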

--------------------------------------------------------------------------------
/evals/tasks/deployment-traffic-switch/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -e
NAMESPACE="e-commerce"
SERVICE_NAME="checkout-service"
EXPECTED_SELECTOR_VERSION="green"
TIMEOUT="120s"

echo "Waiting for the Service '$SERVICE_NAME' to point to version '$EXPECTED_SELECTOR_VERSION'..."
# Use 'kubectl wait' to verify the service selector condition
if ! kubectl wait --for=jsonpath='{.spec.selector.version}'="$EXPECTED_SELECTOR_VERSION" service/$SERVICE_NAME -n $NAMESPACE --timeout=$TIMEOUT; then
    echo "Failed to verify the service selector."
    exit 1
fi

echo "Service selector updated correctly."

echo "Verifying that service endpoints match the green deployment..."
# Check that the service's EndpointSlices report at least one ready endpoint
kubectl get endpointslices -n $NAMESPACE -l kubernetes.io/service-name=$SERVICE_NAME \
  -o jsonpath='{.items[0].endpoints[*].conditions.ready}' | grep -q "true" || { echo "Failed to verify service endpoints."; exit 1; }

echo "Service endpoints correctly point to the green deployment."
echo "Verification successful!"
exit 0

```

--------------------------------------------------------------------------------
/pkg/output/output_test.go:
--------------------------------------------------------------------------------

```go
package output

import (
	"encoding/json"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"regexp"
	"testing"
)

func TestPlainTextUnstructuredList(t *testing.T) {
	var podList unstructured.UnstructuredList
	_ = json.Unmarshal([]byte(`
			{ "apiVersion": "v1", "kind": "PodList", "items": [{ 
			  "apiVersion": "v1", "kind": "Pod",
			  "metadata": {
			    "name": "pod-1", "namespace": "default", "creationTimestamp": "2023-10-01T00:00:00Z", "labels": { "app": "nginx" }
			  },
			  "spec": { "containers": [{ "name": "container-1", "image": "marcnuri/chuck-norris" }] } }
			]}`), &podList)
	out, err := Table.PrintObj(&podList)
	t.Run("processes the list", func(t *testing.T) {
		if err != nil {
			t.Fatalf("Error printing pod list: %v", err)
		}
	})
	t.Run("prints headers", func(t *testing.T) {
		expectedHeaders := "NAME\\s+AGE\\s+LABELS"
		if m, e := regexp.MatchString(expectedHeaders, out); !m || e != nil {
			t.Errorf("Expected headers '%s' not found in output: %s", expectedHeaders, out)
		}
	})
}

```

--------------------------------------------------------------------------------
/evals/tasks/statefulset-lifecycle/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
set -euo pipefail

# --- Configuration ---
NAMESPACE="statefulset-test"
STS_NAME="db"
EXPECTED_CONTENT="initial_data"

echo "Verifying old pods are deleted"
# Wait for scale-down: old pods db-1 and db-2 must be deleted
kubectl wait pod/db-1 pod/db-2 -n "${NAMESPACE}" --for=delete --timeout=120s
echo "Old pods are deleted"

# Verify correct number of replicas
echo "Verifying StatefulSet replica count"
replicas=$(kubectl get sts "${STS_NAME}" -n "${NAMESPACE}" -o jsonpath='{.spec.replicas}')
if [[ "${replicas}" -ne 1 ]]; then
  echo "Expected 1 replica, but got $replicas"
  exit 1
fi
echo "StatefulSet is running with 1 replica"

# Verify db-0 exists and has the correct data
for pod in db-0; do
  if ! kubectl get pod "$pod" -n "${NAMESPACE}" &> /dev/null; then
    echo "Pod $pod not found in namespace $NAMESPACE"
    exit 1
  fi

  data=$(kubectl exec "$pod" -n "${NAMESPACE}" -- cat /data/test)
  if [[ "$data" != "${EXPECTED_CONTENT}" ]]; then
    echo "Data missing or incorrect in $pod"
    exit 1
  fi
done

exit 0
```

--------------------------------------------------------------------------------
/evals/tasks/fix-image-pull/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
TIMEOUT="120s"

# Wait for the deployment rollout to complete and become "Available"
if kubectl wait --for=condition=Available deployment/app -n debug --timeout=$TIMEOUT; then
    # Get the restart count *only* from the new, running pod.
    restarts=$(kubectl get pods -n debug -l app=nginx --field-selector=status.phase=Running -o jsonpath='{.items[0].status.containerStatuses[0].restartCount}')
    
    # Wait additional 5 seconds to ensure stability
    sleep 5
    
    # Check if restart count hasn't increased
    new_restarts=$(kubectl get pods -n debug -l app=nginx --field-selector=status.phase=Running -o jsonpath='{.items[0].status.containerStatuses[0].restartCount}')
    if [[ "$restarts" == "$new_restarts" ]]; then
        echo "Pod is stable. Verification successful."
        exit 0
    else
        echo "Verification failed: Pod restarted unexpectedly."
        exit 1
    fi
fi

# If we get here, the deployment never became available
echo "Verification failed: Deployment 'app' did not become Available in $TIMEOUT."
exit 1

```

--------------------------------------------------------------------------------
/pkg/kiali/defaults.go:
--------------------------------------------------------------------------------

```go
package kiali

// Default values for Kiali API parameters shared across this package.
const (
	// DefaultRateInterval is the default rate interval for fetching error rates and metrics.
	// This value is used when rateInterval is not explicitly provided in API calls.
	DefaultRateInterval    = "10m"
	DefaultGraphType       = "versionedApp"
	DefaultDuration        = "30m"
	DefaultStep            = "15"
	DefaultDirection       = "outbound"
	DefaultReporter        = "source"
	DefaultRequestProtocol = "http"
	DefaultQuantiles       = "0.5,0.95,0.99,0.999"
	DefaultLimit           = "100"
	DefaultTail            = "100"

	// Default graph parameters
	DefaultIncludeIdleEdges   = "false"
	DefaultInjectServiceNodes = "true"
	DefaultBoxBy              = "cluster,namespace,app"
	DefaultAmbientTraffic     = "none"
	DefaultAppenders          = "deadNode,istio,serviceEntry,meshCheck,workloadEntry,health"
	DefaultRateGrpc           = "requests"
	DefaultRateHttp           = "requests"
	DefaultRateTcp            = "sent"

	// Default mesh status parameters
	DefaultIncludeGateways  = "false"
	DefaultIncludeWaypoints = "false"
)

```

--------------------------------------------------------------------------------
/pkg/mcp/tool_filter.go:
--------------------------------------------------------------------------------

```go
package mcp

import (
	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/kubernetes"
)

// ToolFilter is a function that takes a ServerTool and returns a boolean indicating whether to include the tool
type ToolFilter func(tool api.ServerTool) bool

func CompositeFilter(filters ...ToolFilter) ToolFilter {
	return func(tool api.ServerTool) bool {
		for _, f := range filters {
			if !f(tool) {
				return false
			}
		}

		return true
	}
}

func ShouldIncludeTargetListTool(targetName string, targets []string) ToolFilter {
	return func(tool api.ServerTool) bool {
		if !tool.IsTargetListProvider() {
			return true
		}
		if len(targets) <= 1 {
			// there is no need to provide a tool to list the single available target
			return false
		}

		// TODO: this check should be removed or made more generic when other target list tools exist
		if tool.Tool.Name == "configuration_contexts_list" && targetName != kubernetes.KubeConfigTargetParameterName {
			// let's not include configuration_contexts_list if we aren't targeting contexts in our Provider
			return false
		}

		return true
	}
}

```
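
`CompositeFilter` is plain predicate composition: every filter must accept the tool, and the first rejection short-circuits. A standalone sketch, with `Tool` as a simplified stand-in for `api.ServerTool`:

```go
package main

import "fmt"

// Tool is a trimmed stand-in for api.ServerTool.
type Tool struct {
	Name         string
	TargetLister bool
}

// ToolFilter reports whether a tool should be included.
type ToolFilter func(Tool) bool

// CompositeFilter AND-combines filters; the first failing filter wins.
func CompositeFilter(filters ...ToolFilter) ToolFilter {
	return func(t Tool) bool {
		for _, f := range filters {
			if !f(t) {
				return false
			}
		}
		return true
	}
}

func main() {
	notLister := func(t Tool) bool { return !t.TargetLister }
	hasName := func(t Tool) bool { return t.Name != "" }
	keep := CompositeFilter(notLister, hasName)
	fmt.Println(keep(Tool{Name: "pods_list"}))                         // true
	fmt.Println(keep(Tool{Name: "contexts_list", TargetLister: true})) // false
}
```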

--------------------------------------------------------------------------------
/charts/kubernetes-mcp-server/templates/ingress.yaml:
--------------------------------------------------------------------------------

```yaml
{{- if .Values.ingress.enabled -}}
{{- $host := required "Ingress hostname must be specified" (tpl .Values.ingress.host .) }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "kubernetes-mcp-server.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "kubernetes-mcp-server.labels" . | nindent 4 }}
  annotations:
    {{- if eq .Values.openshift true }}
    route.openshift.io/termination: {{ .Values.ingress.termination }}
    {{- end }}
  {{- with .Values.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- with .Values.ingress.className }}
  ingressClassName: {{ . }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    - hosts:
        - "{{ $host }}"
      secretName: {{ .Values.ingress.tls.secretName }}
  {{- end }}
  rules:
    - host: "{{ $host }}"
      http:
        paths:
          - path: {{ .Values.ingress.path }}
            pathType: {{ .Values.ingress.pathType }}
            backend:
              service:
                name: {{ include "kubernetes-mcp-server.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
{{- end }}

```

--------------------------------------------------------------------------------
/evals/tasks/list-images-for-pods/artifacts/manifest.yaml:
--------------------------------------------------------------------------------

```yaml
# artifacts/manifest.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: list-images-for-pods
stringData:
  ROOT_PASSWORD: "S1VKd0hBOGRJcg=="
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: list-images-for-pods
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: list-images-for-pods
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:8.0.36 # Using the official, public MySQL image
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: ROOT_PASSWORD
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

```

--------------------------------------------------------------------------------
/evals/tasks/fix-pending-pod/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Configuration
POD_NAME="homepage-pod"
NAMESPACE="homepage-ns"
PVC_NAME="homepage-pvc"
TIMEOUT="120s"
TASK_NAME="fix-pending-pods"

echo "Starting verification for $TASK_NAME..." 
# Verify the PersistentVolumeClaim is bound
echo "Waiting for PVC '$PVC_NAME' to be 'Bound'..."
if ! kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/$PVC_NAME -n $NAMESPACE --timeout=$TIMEOUT; then
    echo "PVC '$PVC_NAME' did not become Bound within $TIMEOUT."
    echo "Info for '$PVC_NAME' in namespace '$NAMESPACE':"
    kubectl describe pvc $PVC_NAME -n $NAMESPACE
    echo "---"
    echo "Info for StorageClass and PersistentVolumes:"
    kubectl get sc,pv
    exit 1
fi
echo "'$PVC_NAME' is Bound. Verifying that desired state is realized..."

# Verify the Pod is Ready
echo "Waiting for Pod '$POD_NAME' to be 'Ready'..."
if ! kubectl wait --for=condition=Ready pod/$POD_NAME -n $NAMESPACE --timeout=$TIMEOUT; then
    echo "Pod '$POD_NAME' did not become Ready within $TIMEOUT."
    echo "---"
    echo "Info for Pod '$POD_NAME' in namespace '$NAMESPACE':"
    kubectl describe pod $POD_NAME -n $NAMESPACE
    exit 1
fi
echo "Pod '$POD_NAME' is Ready. Verification successful for $TASK_NAME."
exit 0

```

--------------------------------------------------------------------------------
/pkg/config/config_default.go:
--------------------------------------------------------------------------------

```go
package config

import (
	"bytes"

	"github.com/BurntSushi/toml"
)

func Default() *StaticConfig {
	defaultConfig := StaticConfig{
		ListOutput: "table",
		Toolsets:   []string{"core", "config", "helm"},
	}
	overrides := defaultOverrides()
	mergedConfig := mergeConfig(defaultConfig, overrides)
	return &mergedConfig
}

// HasDefaultOverrides indicates whether the internal defaultOverrides function
// provides any overrides or an empty StaticConfig.
func HasDefaultOverrides() bool {
	overrides := defaultOverrides()
	var buf bytes.Buffer
	if err := toml.NewEncoder(&buf).Encode(overrides); err != nil {
		// If marshaling fails, assume no overrides
		return false
	}
	return len(bytes.TrimSpace(buf.Bytes())) > 0
}

// mergeConfig applies non-zero values from override to base using TOML serialization
// and returns the merged StaticConfig.
// In case of any error during marshalling or unmarshalling, it returns the base config unchanged.
func mergeConfig(base, override StaticConfig) StaticConfig {
	var overrideBuffer bytes.Buffer
	if err := toml.NewEncoder(&overrideBuffer).Encode(override); err != nil {
		// If marshaling fails, return base unchanged
		return base
	}

	_, _ = toml.NewDecoder(&overrideBuffer).Decode(&base)
	return base
}

```
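
The merge trick above works by round-tripping the override through a codec that skips zero values, so only fields the override actually sets land on top of the base. A standalone sketch of the same technique, using stdlib `encoding/json` with `omitempty` in place of the TOML codec the real code uses:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StaticConfig is a trimmed stand-in for the real config struct;
// omitempty plays the role of TOML's zero-value skipping.
type StaticConfig struct {
	ListOutput string   `json:"list_output,omitempty"`
	Toolsets   []string `json:"toolsets,omitempty"`
}

// mergeConfig encodes the override and decodes it on top of the base,
// so unset override fields leave the base values untouched.
func mergeConfig(base, override StaticConfig) StaticConfig {
	raw, err := json.Marshal(override)
	if err != nil {
		return base // on error, return base unchanged
	}
	_ = json.Unmarshal(raw, &base)
	return base
}

func main() {
	base := StaticConfig{ListOutput: "table", Toolsets: []string{"core", "config", "helm"}}
	merged := mergeConfig(base, StaticConfig{ListOutput: "yaml"})
	fmt.Println(merged.ListOutput, merged.Toolsets) // yaml [core config helm]
}
```

The design choice is that the struct itself, via its field tags, decides what counts as "unset", so the merge needs no per-field code.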

--------------------------------------------------------------------------------
/pkg/kiali/graph.go:
--------------------------------------------------------------------------------

```go
package kiali

import (
	"context"
	"net/http"
	"net/url"
	"strings"
)

// Graph calls the Kiali graph API.
// `namespaces` may contain zero, one or many namespaces. If empty, the API may return an empty graph
// or the server default, depending on Kiali configuration.
func (k *Kiali) Graph(ctx context.Context, namespaces []string, queryParams map[string]string) (string, error) {
	u, err := url.Parse(GraphEndpoint)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("duration", queryParams["rateInterval"])
	q.Set("graphType", queryParams["graphType"])
	q.Set("includeIdleEdges", DefaultIncludeIdleEdges)
	q.Set("injectServiceNodes", DefaultInjectServiceNodes)
	q.Set("boxBy", DefaultBoxBy)
	q.Set("ambientTraffic", DefaultAmbientTraffic)
	q.Set("appenders", DefaultAppenders)
	q.Set("rateGrpc", DefaultRateGrpc)
	q.Set("rateHttp", DefaultRateHttp)
	q.Set("rateTcp", DefaultRateTcp)
	// Optional namespaces param
	cleaned := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		ns = strings.TrimSpace(ns)
		if ns != "" {
			cleaned = append(cleaned, ns)
		}
	}
	if len(cleaned) > 0 {
		q.Set("namespaces", strings.Join(cleaned, ","))
	}
	u.RawQuery = q.Encode()
	endpoint := u.String()

	return k.executeRequest(ctx, http.MethodGet, endpoint, "", nil)
}

```

--------------------------------------------------------------------------------
/evals/tasks/create-simple-rbac/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

NAMESPACE="create-simple-rbac"
SERVICE_ACCOUNT="reader-sa"
SERVICE_ACCOUNT_USER="system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT}"

# Check for allowed permissions
if ! kubectl auth can-i get pods --as=$SERVICE_ACCOUNT_USER -n $NAMESPACE &> /dev/null; then
    echo "ServiceAccount cannot 'get' pods."
    exit 1
fi

if ! kubectl auth can-i list pods --as=$SERVICE_ACCOUNT_USER -n $NAMESPACE &> /dev/null; then
    echo "ServiceAccount cannot 'list' pods."
    exit 1
fi

# Check for denied permissions
if kubectl auth can-i delete pods --as=$SERVICE_ACCOUNT_USER -n $NAMESPACE &> /dev/null; then
  echo "ServiceAccount has excessive permissions (can 'delete' pods)."
  exit 1
fi

if kubectl auth can-i create pods --as=$SERVICE_ACCOUNT_USER -n $NAMESPACE &> /dev/null; then
  echo "ServiceAccount has excessive permissions (can 'create' pods)."
  exit 1
fi

if kubectl auth can-i create pods --as=$SERVICE_ACCOUNT_USER &> /dev/null; then
  echo "ServiceAccount has excessive permissions (can 'create' pods in other namespace)."
  exit 1
fi

if kubectl auth can-i list pods --as=$SERVICE_ACCOUNT_USER -A &> /dev/null; then
  echo "ServiceAccount has excessive permissions (can 'list' pods in other namespace)."
  exit 1
fi

echo "Verification successful: RBAC role and binding correctly configured."
exit 0

```

--------------------------------------------------------------------------------
/pkg/kiali/endpoints.go:
--------------------------------------------------------------------------------

```go
package kiali

// Kiali API endpoint paths shared across this package.
const (
	AuthInfoEndpoint = "/api/auth/info"
	// MeshGraphEndpoint is the Kiali API path that returns the mesh graph/status.
	MeshGraphEndpoint         = "/api/mesh/graph"
	GraphEndpoint             = "/api/namespaces/graph"
	HealthEndpoint            = "/api/clusters/health"
	IstioConfigEndpoint       = "/api/istio/config"
	IstioObjectEndpoint       = "/api/namespaces/%s/istio/%s/%s/%s/%s"
	IstioObjectCreateEndpoint = "/api/namespaces/%s/istio/%s/%s/%s"
	NamespacesEndpoint        = "/api/namespaces"
	PodDetailsEndpoint        = "/api/namespaces/%s/pods/%s"
	PodsLogsEndpoint          = "/api/namespaces/%s/pods/%s/logs"
	ServicesEndpoint          = "/api/clusters/services"
	ServiceDetailsEndpoint    = "/api/namespaces/%s/services/%s"
	ServiceMetricsEndpoint    = "/api/namespaces/%s/services/%s/metrics"
	AppTracesEndpoint         = "/api/namespaces/%s/apps/%s/traces"
	ServiceTracesEndpoint     = "/api/namespaces/%s/services/%s/traces"
	WorkloadTracesEndpoint    = "/api/namespaces/%s/workloads/%s/traces"
	WorkloadsEndpoint         = "/api/clusters/workloads"
	WorkloadDetailsEndpoint   = "/api/namespaces/%s/workloads/%s"
	WorkloadMetricsEndpoint   = "/api/namespaces/%s/workloads/%s/metrics"
	ValidationsEndpoint       = "/api/istio/validations"
)

```

--------------------------------------------------------------------------------
/pkg/http/middleware.go:
--------------------------------------------------------------------------------

```go
package http

import (
	"bufio"
	"net"
	"net/http"
	"time"

	"k8s.io/klog/v2"
)

func RequestMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/healthz" {
			next.ServeHTTP(w, r)
			return
		}

		start := time.Now()

		lrw := &loggingResponseWriter{
			ResponseWriter: w,
			statusCode:     http.StatusOK,
		}

		next.ServeHTTP(lrw, r)

		duration := time.Since(start)
		klog.V(5).Infof("%s %s %d %v", r.Method, r.URL.Path, lrw.statusCode, duration)
	})
}

type loggingResponseWriter struct {
	http.ResponseWriter
	statusCode    int
	headerWritten bool
}

func (lrw *loggingResponseWriter) WriteHeader(code int) {
	if !lrw.headerWritten {
		lrw.statusCode = code
		lrw.headerWritten = true
		lrw.ResponseWriter.WriteHeader(code)
	}
}

func (lrw *loggingResponseWriter) Write(b []byte) (int, error) {
	if !lrw.headerWritten {
		lrw.statusCode = http.StatusOK
		lrw.headerWritten = true
	}
	return lrw.ResponseWriter.Write(b)
}

func (lrw *loggingResponseWriter) Flush() {
	if flusher, ok := lrw.ResponseWriter.(http.Flusher); ok {
		flusher.Flush()
	}
}

func (lrw *loggingResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
	if hijacker, ok := lrw.ResponseWriter.(http.Hijacker); ok {
		return hijacker.Hijack()
	}
	return nil, nil, http.ErrNotSupported
}

```

--------------------------------------------------------------------------------
/pkg/api/toolsets_test.go:
--------------------------------------------------------------------------------

```go
package api

import (
	"testing"

	"github.com/stretchr/testify/suite"
	"k8s.io/utils/ptr"
)

type ToolsetsSuite struct {
	suite.Suite
}

func (s *ToolsetsSuite) TestServerTool() {
	s.Run("IsClusterAware", func() {
		s.Run("defaults to true", func() {
			tool := &ServerTool{}
			s.True(tool.IsClusterAware(), "Expected IsClusterAware to be true by default")
		})
		s.Run("can be set to false", func() {
			tool := &ServerTool{ClusterAware: ptr.To(false)}
			s.False(tool.IsClusterAware(), "Expected IsClusterAware to be false when set to false")
		})
		s.Run("can be set to true", func() {
			tool := &ServerTool{ClusterAware: ptr.To(true)}
			s.True(tool.IsClusterAware(), "Expected IsClusterAware to be true when set to true")
		})
	})
	s.Run("IsTargetListProvider", func() {
		s.Run("defaults to false", func() {
			tool := &ServerTool{}
			s.False(tool.IsTargetListProvider(), "Expected IsTargetListProvider to be false by default")
		})
		s.Run("can be set to false", func() {
			tool := &ServerTool{TargetListProvider: ptr.To(false)}
			s.False(tool.IsTargetListProvider(), "Expected IsTargetListProvider to be false when set to false")
		})
		s.Run("can be set to true", func() {
			tool := &ServerTool{TargetListProvider: ptr.To(true)}
			s.True(tool.IsTargetListProvider(), "Expected IsTargetListProvider to be true when set to true")
		})
	})
}

func TestToolsets(t *testing.T) {
	suite.Run(t, new(ToolsetsSuite))
}

```

--------------------------------------------------------------------------------
/dev/config/keycloak/deployment.yaml:
--------------------------------------------------------------------------------

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: keycloak
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:26.4
        args: ["start-dev"]
        env:
        - name: KC_BOOTSTRAP_ADMIN_USERNAME
          value: "admin"
        - name: KC_BOOTSTRAP_ADMIN_PASSWORD
          value: "admin"
        - name: KC_HOSTNAME
          value: "https://keycloak.127-0-0-1.sslip.io:8443"
        - name: KC_HTTP_ENABLED
          value: "true"
        - name: KC_HEALTH_ENABLED
          value: "true"
        - name: KC_PROXY_HEADERS
          value: "xforwarded"
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 9000
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health/live
            port: 9000
          initialDelaySeconds: 60
          periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP

```

--------------------------------------------------------------------------------
/evals/tasks/fix-probes/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash

# Delete namespace if exists and create a fresh one
kubectl delete namespace orders --ignore-not-found
kubectl create namespace orders

TIMEOUT="120s"

# Create a deployment with problematic health checks
cat <<YAML | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: orders
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:latest
        ports:
        - containerPort: 80
        # The problem: incorrect health probes causing restarts
        livenessProbe:
          httpGet:
            path: /get_status 
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /is_ready  
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
YAML

# Create a service for the webapp
kubectl create service clusterip webapp -n orders --tcp=80:80

# Wait for the pod to start and begin restarting due to failed probes
echo "Waiting for pod to start and begin failing health checks..."
if ! kubectl wait --for=condition=Available=False --timeout=$TIMEOUT deployment/webapp -n orders; then
    echo "Error: Timed out waiting for the deployment to become unavailable."
    exit 1
fi

echo "Setup successful: Deployment is no longer available."
exit 0

```

--------------------------------------------------------------------------------
/pkg/toolsets/kiali/tools/helpers.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
)

// getStringArgOrDefault returns the string argument value for the given key,
// or the provided default if the argument is absent or empty (after trimming).
func getStringArgOrDefault(params api.ToolHandlerParams, key, defaultVal string) (string, error) {
	if raw, ok := params.GetArguments()[key]; ok {
		// Normalize the argument to a string based on its decoded type;
		// numeric values are formatted without trailing zeros.
		switch v := raw.(type) {
		case string:
			s := strings.TrimSpace(v)
			if s != "" {
				return s, nil
			}
		case float64:
			// JSON numbers decode to float64 in maps; use -1 precision to trim trailing zeros
			return strconv.FormatFloat(v, 'f', -1, 64), nil
		case int:
			return strconv.Itoa(v), nil
		case int64:
			return strconv.FormatInt(v, 10), nil
		case json.Number:
			return v.String(), nil
		default:
			s := strings.TrimSpace(fmt.Sprint(v))
			if s != "" {
				return s, nil
			}
		}
	}
	return defaultVal, nil
}

// setQueryParam sets queryParams[key] from tool arguments (with default handling).
// It uses getStringArgOrDefault and wraps errors with a useful message.
func setQueryParam(params api.ToolHandlerParams, queryParams map[string]string, key, defaultVal string) error {
	v, err := getStringArgOrDefault(params, key, defaultVal)
	if err != nil {
		return fmt.Errorf("invalid %s: %v", key, err)
	}
	queryParams[key] = v
	return nil
}

```

--------------------------------------------------------------------------------
/evals/tasks/multi-container-pod-communication/verify.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/env bash
set -euo pipefail

NAMESPACE="multi-container-logging"
POD_NAME="communication-pod"
TIMEOUT="120s"

# Wait for pod to be running
echo "Waiting for pod '$POD_NAME' to be ready..."
if ! kubectl wait --for=condition=Ready pod/$POD_NAME -n "$NAMESPACE" --timeout=$TIMEOUT; then
    echo "Pod failed to reach Ready state in time"
    echo "Current pod status:"
    kubectl describe pod "$POD_NAME" -n "$NAMESPACE"
    exit 1
fi

echo "Pod is ready. Verifying configuration..."

# Verify that both containers are defined in the pod spec
CONTAINERS=$(kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o jsonpath='{.spec.containers[*].name}')
if [[ ! "$CONTAINERS" == *"web-server"* ]] || [[ ! "$CONTAINERS" == *"logger"* ]]; then
    echo "Pod does not have both 'web-server' and 'logger' containers"
    exit 1
fi

# Verify the shared volume exists
VOLUMES=$(kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o jsonpath='{.spec.volumes[*].name}')
if [[ ! "$VOLUMES" == *"logs-volume"* ]]; then
    echo "Pod does not have the required 'logs-volume' volume"
    exit 1
fi

# Verify the web server is accessible
echo "Testing web server access..."
kubectl exec "$POD_NAME" -n "$NAMESPACE" -c web-server -- curl -s -o /dev/null -w "%{http_code}" localhost:80 | grep -q 200

# Verify the logger container can see the access logs
echo "Verifying logger container can access nginx logs..."
kubectl exec "$POD_NAME" -n "$NAMESPACE" -c logger -- ls -la /var/log/nginx/access.log

echo "All verification checks passed!"
exit 0

```

--------------------------------------------------------------------------------
/pkg/config/extended.go:
--------------------------------------------------------------------------------

```go
package config

import (
	"context"
	"fmt"

	"github.com/BurntSushi/toml"
	"github.com/containers/kubernetes-mcp-server/pkg/api"
)

type ExtendedConfigParser func(ctx context.Context, primitive toml.Primitive, md toml.MetaData) (api.ExtendedConfig, error)

type extendedConfigRegistry struct {
	parsers map[string]ExtendedConfigParser
}

func newExtendedConfigRegistry() *extendedConfigRegistry {
	return &extendedConfigRegistry{
		parsers: make(map[string]ExtendedConfigParser),
	}
}

func (r *extendedConfigRegistry) register(name string, parser ExtendedConfigParser) {
	if _, exists := r.parsers[name]; exists {
		panic("extended config parser already registered for name: " + name)
	}

	r.parsers[name] = parser
}

func (r *extendedConfigRegistry) parse(ctx context.Context, metaData toml.MetaData, configs map[string]toml.Primitive) (map[string]api.ExtendedConfig, error) {
	if len(configs) == 0 {
		return make(map[string]api.ExtendedConfig), nil
	}
	parsedConfigs := make(map[string]api.ExtendedConfig, len(configs))

	for name, primitive := range configs {
		parser, ok := r.parsers[name]
		if !ok {
			continue
		}

		extendedConfig, err := parser(ctx, primitive, metaData)
		if err != nil {
			return nil, fmt.Errorf("failed to parse extended config for '%s': %w", name, err)
		}

		if err = extendedConfig.Validate(); err != nil {
			return nil, fmt.Errorf("failed to validate extended config for '%s': %w", name, err)
		}

		parsedConfigs[name] = extendedConfig
	}

	return parsedConfigs, nil
}

```

--------------------------------------------------------------------------------
/pkg/api/prompts_test.go:
--------------------------------------------------------------------------------

```go
package api

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"k8s.io/utils/ptr"
)

func TestServerPrompt_IsClusterAware(t *testing.T) {
	tests := []struct {
		name         string
		clusterAware *bool
		want         bool
	}{
		{
			name:         "nil defaults to true",
			clusterAware: nil,
			want:         true,
		},
		{
			name:         "explicitly true",
			clusterAware: ptr.To(true),
			want:         true,
		},
		{
			name:         "explicitly false",
			clusterAware: ptr.To(false),
			want:         false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			sp := &ServerPrompt{
				ClusterAware: tt.clusterAware,
			}
			assert.Equal(t, tt.want, sp.IsClusterAware())
		})
	}
}

func TestNewPromptCallResult(t *testing.T) {
	tests := []struct {
		name        string
		description string
		messages    []PromptMessage
		err         error
	}{
		{
			name:        "successful result",
			description: "Test description",
			messages: []PromptMessage{
				{
					Role: "user",
					Content: PromptContent{
						Type: "text",
						Text: "Hello",
					},
				},
			},
			err: nil,
		},
		{
			name:        "result with error",
			description: "Error description",
			messages:    nil,
			err:         assert.AnError,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := NewPromptCallResult(tt.description, tt.messages, tt.err)
			assert.Equal(t, tt.description, result.Description)
			assert.Equal(t, tt.messages, result.Messages)
			assert.Equal(t, tt.err, result.Error)
		})
	}
}

```

--------------------------------------------------------------------------------
/internal/test/test.go:
--------------------------------------------------------------------------------

```go
package test

import (
	"fmt"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"runtime"
	"time"
)

func Must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func ReadFile(path ...string) string {
	_, file, _, _ := runtime.Caller(1)
	filePath := filepath.Join(append([]string{filepath.Dir(file)}, path...)...)
	fileBytes := Must(os.ReadFile(filePath))
	return string(fileBytes)
}

func RandomPortAddress() (*net.TCPAddr, error) {
	ln, err := net.Listen("tcp", "0.0.0.0:0")
	if err != nil {
		return nil, fmt.Errorf("failed to find random port for HTTP server: %v", err)
	}
	defer func() { _ = ln.Close() }()
	tcpAddr, ok := ln.Addr().(*net.TCPAddr)
	if !ok {
		return nil, fmt.Errorf("failed to cast listener address to TCPAddr")
	}
	return tcpAddr, nil
}

func WaitForServer(tcpAddr *net.TCPAddr) error {
	var conn *net.TCPConn
	var err error
	for i := 0; i < 10; i++ {
		conn, err = net.DialTCP("tcp", nil, tcpAddr)
		if err == nil {
			_ = conn.Close()
			break
		}
		time.Sleep(50 * time.Millisecond)
	}
	return err
}

// WaitForHealthz waits for the /healthz endpoint to return a non-404 response
func WaitForHealthz(tcpAddr *net.TCPAddr) error {
	url := fmt.Sprintf("http://%s/healthz", tcpAddr.String())
	var resp *http.Response
	var err error
	for i := 0; i < 100; i++ {
		resp, err = http.Get(url)
		if err == nil {
			_ = resp.Body.Close()
			if resp.StatusCode != http.StatusNotFound {
				return nil
			}
		}
		time.Sleep(50 * time.Millisecond)
	}
	if err != nil {
		return err
	}
	return fmt.Errorf("healthz endpoint returned 404 after retries")
}

```

--------------------------------------------------------------------------------
/evals/tasks/setup-dev-cluster/setup-dev-cluster.md:
--------------------------------------------------------------------------------

```markdown
You are a Kubernetes administrator setting up a development cluster for a team of 3 developers (alice, bob, and charlie).

Create a secure, multi-tenant development environment with the following requirements:

1. **Namespaces**: Create separate namespaces for each developer (dev-alice, dev-bob, dev-charlie) plus shared namespaces (dev-shared, staging, prod)

2. **RBAC Configuration**:
    - Each developer should have full access to their own namespace
    - Developers should have read-only access to the dev-shared namespace
    - Only cluster admins should access staging and prod
    - Create service accounts for each developer (alice-sa, bob-sa, charlie-sa) in their respective namespaces
    - Each developer's service account should have full access to their respective namespace and read-only access to the dev-shared namespace

3. **Resource Quotas**:
    - Each developer namespace: max 2 CPUs, 4Gi memory, 10 pods, 5 services
    - dev-shared namespace: max 4 CPUs, 8Gi memory, 20 pods, 10 services  
    - staging/prod: max 8 CPUs, 16Gi memory, 50 pods, 20 services

4. **Network Policies**:
    - Developers can only access their own namespace and dev-shared
    - Block cross-developer namespace communication
    - Allow all namespaces to access DNS and system services
    - staging and prod should be completely isolated from dev namespaces

5. **Default Deny Policies**: Implement default deny network policies for all namespaces except system namespaces

Ensure all configurations follow the principle of least privilege and provide appropriate isolation between environments.
```

--------------------------------------------------------------------------------
/npm/kubernetes-mcp-server/bin/index.js:
--------------------------------------------------------------------------------

```javascript
#!/usr/bin/env node

const childProcess = require('child_process');

const BINARY_MAP = {
  darwin_x64: {name: 'kubernetes-mcp-server-darwin-amd64', suffix: ''},
  darwin_arm64: {name: 'kubernetes-mcp-server-darwin-arm64', suffix: ''},
  linux_x64: {name: 'kubernetes-mcp-server-linux-amd64', suffix: ''},
  linux_arm64: {name: 'kubernetes-mcp-server-linux-arm64', suffix: ''},
  win32_x64: {name: 'kubernetes-mcp-server-windows-amd64', suffix: '.exe'},
  win32_arm64: {name: 'kubernetes-mcp-server-windows-arm64', suffix: '.exe'},
};

// Resolving will fail if the optionalDependency was not installed or the platform/arch is not supported
const resolveBinaryPath = () => {
  try {
    const binary = BINARY_MAP[`${process.platform}_${process.arch}`];
    return require.resolve(`${binary.name}/bin/${binary.name}${binary.suffix}`);
  } catch (e) {
    throw new Error(`Could not resolve binary path for platform/arch: ${process.platform}/${process.arch}`);
  }
};

const child = childProcess.spawn(resolveBinaryPath(), process.argv.slice(2), {
  stdio: 'inherit',
});

const handleSignal = (signal) => () => {
  console.log(`Received ${signal}, terminating child process...`);
  if (child && !child.killed) {
    child.kill(signal);
  }
};

['SIGTERM', 'SIGINT', 'SIGHUP'].forEach((signal) => {
  process.on(signal, handleSignal(signal));
});

child.on('close', (code, signal) => {
  if (signal) {
    console.log(`Child process terminated by signal: ${signal}`);
    process.exit(128 + (signal === 'SIGTERM' ? 15 : signal === 'SIGINT' ? 2 : 1));
  } else {
    process.exit(code || 0);
  }
});

```

--------------------------------------------------------------------------------
/pkg/api/imports_test.go:
--------------------------------------------------------------------------------

```go
package api

import (
	"go/build"
	"strings"
	"testing"

	"github.com/stretchr/testify/suite"
)

const modulePrefix = "github.com/containers/kubernetes-mcp-server/"

// ImportsSuite verifies that pkg/api doesn't accidentally import internal packages
// that would create cyclic dependencies.
type ImportsSuite struct {
	suite.Suite
}

func (s *ImportsSuite) TestNoCyclicDependencies() {
	// Whitelist of allowed internal packages that pkg/api can import.
	// Any other internal import will cause the test to fail.
	allowedInternalPackages := map[string]bool{
		"github.com/containers/kubernetes-mcp-server/pkg/output": true,
	}

	s.Run("pkg/api only imports whitelisted internal packages", func() {
		pkg, err := build.Import("github.com/containers/kubernetes-mcp-server/pkg/api", "", 0)
		s.Require().NoError(err, "Failed to import pkg/api")

		for _, imp := range pkg.Imports {
			// Skip external packages (not part of this module)
			if !strings.HasPrefix(imp, modulePrefix) {
				continue
			}

			// Internal package - must be in whitelist
			if !allowedInternalPackages[imp] {
				s.Failf("Forbidden internal import detected",
					"pkg/api imports %q which is not in the whitelist. "+
						"To prevent cyclic dependencies, pkg/api can only import: %v. "+
						"If this import is intentional, add it to allowedInternalPackages in this test.",
					imp, keys(allowedInternalPackages))
			}
		}
	})
}

func keys(m map[string]bool) []string {
	result := make([]string, 0, len(m))
	for k := range m {
		result = append(result, k)
	}
	return result
}

func TestImports(t *testing.T) {
	suite.Run(t, new(ImportsSuite))
}

```

--------------------------------------------------------------------------------
/pkg/mcp/tool_mutator.go:
--------------------------------------------------------------------------------

```go
package mcp

import (
	"fmt"
	"sort"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/google/jsonschema-go/jsonschema"
)

type ToolMutator func(tool api.ServerTool) api.ServerTool

const maxTargetsInEnum = 5 // TODO: test and validate that this is a reasonable cutoff

// WithTargetParameter adds a target selection parameter to the tool's input schema if the tool is cluster-aware
func WithTargetParameter(defaultCluster, targetParameterName string, targets []string) ToolMutator {
	return func(tool api.ServerTool) api.ServerTool {
		if !tool.IsClusterAware() {
			return tool
		}

		if tool.Tool.InputSchema == nil {
			tool.Tool.InputSchema = &jsonschema.Schema{Type: "object"}
		}

		if tool.Tool.InputSchema.Properties == nil {
			tool.Tool.InputSchema.Properties = make(map[string]*jsonschema.Schema)
		}

		if len(targets) > 1 {
			tool.Tool.InputSchema.Properties[targetParameterName] = createTargetProperty(
				defaultCluster,
				targetParameterName,
				targets,
			)
		}

		return tool
	}
}

func createTargetProperty(defaultCluster, targetName string, targets []string) *jsonschema.Schema {
	baseSchema := &jsonschema.Schema{
		Type: "string",
		Description: fmt.Sprintf(
			"Optional parameter selecting which %s to run the tool in. Defaults to %s if not set",
			targetName,
			defaultCluster,
		),
	}

	if len(targets) <= maxTargetsInEnum {
		// Sort a copy to ensure consistent enum ordering without mutating the caller's slice
		targets = append([]string(nil), targets...)
		sort.Strings(targets)

		enumValues := make([]any, 0, len(targets))
		for _, c := range targets {
			enumValues = append(enumValues, c)
		}
		baseSchema.Enum = enumValues
	}

	return baseSchema
}

```

--------------------------------------------------------------------------------
/pkg/kubernetes/events.go:
--------------------------------------------------------------------------------

```go
package kubernetes

import (
	"context"
	"strings"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func (k *Kubernetes) EventsList(ctx context.Context, namespace string) ([]map[string]any, error) {
	var eventMap []map[string]any
	raw, err := k.ResourcesList(ctx, &schema.GroupVersionKind{
		Group: "", Version: "v1", Kind: "Event",
	}, namespace, api.ListOptions{})
	if err != nil {
		return eventMap, err
	}
	unstructuredList := raw.(*unstructured.UnstructuredList)
	if len(unstructuredList.Items) == 0 {
		return eventMap, nil
	}
	for _, item := range unstructuredList.Items {
		event := &v1.Event{}
		if err = runtime.DefaultUnstructuredConverter.FromUnstructured(item.Object, event); err != nil {
			return eventMap, err
		}
		timestamp := event.EventTime.Time
		if timestamp.IsZero() && event.Series != nil {
			timestamp = event.Series.LastObservedTime.Time
		} else if timestamp.IsZero() && event.Count > 1 {
			timestamp = event.LastTimestamp.Time
		} else if timestamp.IsZero() {
			timestamp = event.FirstTimestamp.Time
		}
		eventMap = append(eventMap, map[string]any{
			"Namespace": event.Namespace,
			"Timestamp": timestamp.String(),
			"Type":      event.Type,
			"Reason":    event.Reason,
			"InvolvedObject": map[string]string{
				"apiVersion": event.InvolvedObject.APIVersion,
				"Kind":       event.InvolvedObject.Kind,
				"Name":       event.InvolvedObject.Name,
			},
			"Message": strings.TrimSpace(event.Message),
		})
	}
	return eventMap, nil
}

```

--------------------------------------------------------------------------------
/pkg/toolsets/core/events.go:
--------------------------------------------------------------------------------

```go
package core

import (
	"fmt"

	"github.com/google/jsonschema-go/jsonschema"
	"k8s.io/utils/ptr"

	"github.com/containers/kubernetes-mcp-server/pkg/api"
	"github.com/containers/kubernetes-mcp-server/pkg/output"
)

func initEvents() []api.ServerTool {
	return []api.ServerTool{
		{Tool: api.Tool{
			Name:        "events_list",
			Description: "List Kubernetes events in the current cluster, optionally filtered by namespace",
			InputSchema: &jsonschema.Schema{
				Type: "object",
				Properties: map[string]*jsonschema.Schema{
					"namespace": {
						Type:        "string",
						Description: "Optional Namespace to retrieve the events from. If not provided, will list events from all namespaces",
					},
				},
			},
			Annotations: api.ToolAnnotations{
				Title:           "Events: List",
				ReadOnlyHint:    ptr.To(true),
				DestructiveHint: ptr.To(false),
				OpenWorldHint:   ptr.To(true),
			},
		}, Handler: eventsList},
	}
}

func eventsList(params api.ToolHandlerParams) (*api.ToolCallResult, error) {
	namespace := params.GetArguments()["namespace"]
	if namespace == nil {
		namespace = ""
	}
	eventMap, err := params.EventsList(params, namespace.(string))
	if err != nil {
		return api.NewToolCallResult("", fmt.Errorf("failed to list events: %v", err)), nil
	}
	if len(eventMap) == 0 {
		return api.NewToolCallResult("# No events found", nil), nil
	}
	yamlEvents, err := output.MarshalYaml(eventMap)
	if err != nil {
		err = fmt.Errorf("failed to marshal events to YAML: %v", err)
	}
	return api.NewToolCallResult(fmt.Sprintf("# The following events (YAML format) were found:\n%s", yamlEvents), err), nil
}

```

--------------------------------------------------------------------------------
/.github/workflows/release.yaml:
--------------------------------------------------------------------------------

```yaml
name: Release

on:
  push:
    tags:
      - '*'

concurrency:
  # Only run once for latest commit per ref and cancel other (previous) runs.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  GO_VERSION: 1.25
  UV_PUBLISH_TOKEN: ${{ secrets.UV_PUBLISH_TOKEN }}

permissions:
  contents: write
  id-token: write  # Required for npmjs OIDC
  discussions: write

jobs:
  release:
    name: Release
    if: github.repository == 'containers/kubernetes-mcp-server'
    runs-on: macos-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v6
      - uses: actions/setup-go@v6
        with:
          go-version: ${{ env.GO_VERSION }}
      - name: Build
        run: make build-all-platforms
      - name: Upload artifacts
        uses: softprops/action-gh-release@v2
        with:
          generate_release_notes: true
          make_latest: true
          files: |
            LICENSE
            kubernetes-mcp-server-*
      # Ensure npm 11.5.1 or later is installed (required for https://docs.npmjs.com/trusted-publishers)
      - name: Setup node
        uses: actions/setup-node@v6
        with:
          node-version: 24
          registry-url: 'https://registry.npmjs.org'
      - name: Publish npm
        run:
          make npm-publish
  python:
    name: Release Python
    if: github.repository == 'containers/kubernetes-mcp-server'
    # Python logic requires the tag/release version to be available from GitHub
    needs: release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v6
      - uses: astral-sh/setup-uv@v7
      - name: Publish Python
        run:
          make python-publish

```

--------------------------------------------------------------------------------
/docs/KIALI.md:
--------------------------------------------------------------------------------

```markdown
## Kiali integration

This server can expose Kiali tools so assistants can query mesh information (e.g., mesh status/graph).

### Enable the Kiali toolset

Enable the Kiali tools via the server TOML configuration file.

Config (TOML):

```toml
toolsets = ["core", "kiali"]

[toolset_configs.kiali]
url = "https://kiali.example" # Endpoint/route to reach Kiali console
# insecure = true  # optional: allow insecure TLS (not recommended in production)
# certificate_authority = "/path/to/ca.crt"  # File path to CA certificate
# When url is https and insecure is false, certificate_authority is required.
```

When the `kiali` toolset is enabled, a Kiali toolset configuration is required via `[toolset_configs.kiali]`. If missing or invalid, the server will refuse to start.

### How authentication works

- The server uses your existing Kubernetes credentials (from kubeconfig or in-cluster) to set a bearer token for Kiali calls.
- An HTTP Authorization header passed to the MCP HTTP endpoint is not needed for Kiali; Kiali calls use the server's configured token.

### Troubleshooting

- Missing Kiali configuration when `kiali` toolset is enabled → set `[toolset_configs.kiali].url` in the config TOML.
- Invalid URL → ensure `[toolset_configs.kiali].url` is a valid `http(s)://host` URL.
- TLS certificate validation:
  - If `[toolset_configs.kiali].url` uses HTTPS and `[toolset_configs.kiali].insecure` is false, you must set `[toolset_configs.kiali].certificate_authority` with the path to the CA certificate file. Relative paths are resolved relative to the directory containing the config file.
  - For non-production environments you can set `[toolset_configs.kiali].insecure = true` to skip certificate verification.


```
Page 1/8FirstPrevNextLast