# Directory Structure
```
├── .env.example
├── .github
│   └── workflows
│       ├── publish-mcp.yml
│       ├── pypi-publish.yaml
│       └── release.yml
├── .gitignore
├── .python-version
├── cliff.toml
├── CONTRIBUTING.md
├── docker-compose-elasticsearch.yml
├── docker-compose-opensearch.yml
├── LICENSE
├── Makefile
├── mcp_client
│   ├── python-sdk-anthropic
│   │   ├── __init__.py
│   │   ├── .gitignore
│   │   ├── client.py
│   │   └── config.py
│   └── spring-ai
│       ├── build.gradle
│       ├── gradle
│       │   └── wrapper
│       │       ├── gradle-wrapper.jar
│       │       └── gradle-wrapper.properties
│       ├── gradle.properties
│       ├── gradlew
│       ├── gradlew.bat
│       ├── README.md
│       ├── settings.gradle
│       └── src
│           ├── main
│           │   ├── java
│           │   │   └── spring
│           │   │       └── ai
│           │   │           └── mcp
│           │   │               └── spring_ai_mcp
│           │   │                   └── Application.java
│           │   └── resources
│           │       ├── application.yml
│           │       └── mcp-servers-config.json
│           └── test
│               └── java
│                   └── spring
│                       └── ai
│                           └── mcp
│                               └── spring_ai_mcp
│                                   └── SpringAiMcpApplicationTests.java
├── pyproject.toml
├── README.md
├── server.json
├── src
│   ├── __init__.py
│   ├── clients
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── common
│   │   │   ├── __init__.py
│   │   │   ├── alias.py
│   │   │   ├── client.py
│   │   │   ├── cluster.py
│   │   │   ├── data_stream.py
│   │   │   ├── document.py
│   │   │   ├── general.py
│   │   │   └── index.py
│   │   └── exceptions.py
│   ├── server.py
│   ├── tools
│   │   ├── __init__.py
│   │   ├── alias.py
│   │   ├── cluster.py
│   │   ├── data_stream.py
│   │   ├── document.py
│   │   ├── general.py
│   │   ├── index.py
│   │   └── register.py
│   └── version.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
3.10
```
--------------------------------------------------------------------------------
/mcp_client/python-sdk-anthropic/.gitignore:
--------------------------------------------------------------------------------
```
.env
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# IDE
.idea
.vscode
.kiro
# Python
.venv
dist
__pycache__
*.egg-info
# Configuration and Credentials
.env
# Spring AI
.gradle/
build
# MCP Registry
.mcpregistry*
```
--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------
```
# Elasticsearch connection settings
ELASTICSEARCH_HOSTS=https://localhost:9200
ELASTICSEARCH_USERNAME=elastic
ELASTICSEARCH_PASSWORD=test123
ELASTICSEARCH_VERIFY_CERTS=false
# OpenSearch connection settings
OPENSEARCH_HOSTS=https://localhost:9200
OPENSEARCH_USERNAME=admin
OPENSEARCH_PASSWORD=admin
OPENSEARCH_VERIFY_CERTS=false
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/README.md:
--------------------------------------------------------------------------------
```markdown
## Start MCP Client
```bash
export OPENAI_API_KEY=<your-openai-api-key>
export OPENAI_BASE_URL=<your-openai-base-url>
./gradlew bootRun
```
## Ask Questions
```bash
USER: List the indices in the cluster.
ASSISTANT: Here are the indices currently present in the cluster:
1. `.internal.alerts-default.alerts-default-000001`
2. `.internal.alerts-observability.threshold.alerts-default-000001`
3. `.internal.alerts-ml.anomaly-detection-health.alerts-default-000001`
4. `.internal.alerts-observability.metrics.alerts-default-000001`
5. `.internal.alerts-stack.alerts-default-000001`
6. `.internal.alerts-security.alerts-default-000001`
7. `.internal.alerts-observability.slo.alerts-default-000001`
8. `.internal.alerts-ml.anomaly-detection.alerts-default-000001`
9. `.internal.alerts-transform.health.alerts-default-000001`
10. `student`
11. `.internal.alerts-observability.logs.alerts-default-000001`
12. `.internal.alerts-observability.uptime.alerts-default-000001`
13. `.internal.alerts-observability.apm.alerts-default-000001`
If you need more information about any specific index, feel free to ask!
USER: Get student index's mapping.
ASSISTANT: The mapping for the `student` index is as follows:
- **Mappings:**
  - **Properties:**
    - **age:** Type is `long`
    - **major:** Type is `text` with a sub-field `keyword` of type `keyword`, and `ignore_above` set to 256.
    - **name:** Type is `text` with a sub-field `keyword` of type `keyword`, and `ignore_above` set to 256.
- **Settings:**
  - **Index:**
    - Routing:
      - Allocation:
        - Include:
          - `_tier_preference`: `data_content`
    - Number of shards: `1`
    - Provided name: `student`
    - Creation date: `1745300793755`
    - Number of replicas: `1`
    - UUID: `RIsbajmcSwKr8DGFjL0sMQ`
    - Version created: `8521000`
If you need further details or have any other questions, feel free to ask!
```
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
<!-- mcp-name: io.github.cr7258/elasticsearch-mcp-server -->
# Elasticsearch/OpenSearch MCP Server
[](https://mseep.ai/app/cr7258-elasticsearch-mcp-server)
[](https://archestra.ai/mcp-catalog/cr7258__elasticsearch-mcp-server)
[MCP Official Registry](https://registry.modelcontextprotocol.io/v0/servers?search=io.github.cr7258/elasticsearch-mcp-server)
## Overview
A Model Context Protocol (MCP) server implementation that provides interaction with Elasticsearch and OpenSearch. This server enables searching documents, analyzing indices, and managing the cluster through a set of tools.
<a href="https://glama.ai/mcp/servers/b3po3delex"><img width="380" height="200" src="https://glama.ai/mcp/servers/b3po3delex/badge" alt="Elasticsearch MCP Server" /></a>
## Demo
https://github.com/user-attachments/assets/f7409e31-fac4-4321-9c94-b0ff2ea7ff15
## Features
### General Operations
- `general_api_request`: Perform a general HTTP API request. Use this tool for any Elasticsearch/OpenSearch API that does not have a dedicated tool.
### Index Operations
- `list_indices`: List all indices.
- `get_index`: Returns information (mappings, settings, aliases) about one or more indices.
- `create_index`: Create a new index.
- `delete_index`: Delete an index.
- `create_data_stream`: Create a new data stream (requires matching index template).
- `get_data_stream`: Get information about one or more data streams.
- `delete_data_stream`: Delete one or more data streams and their backing indices.
### Document Operations
- `search_documents`: Search for documents.
- `index_document`: Creates or updates a document in the index.
- `get_document`: Get a document by ID.
- `delete_document`: Delete a document by ID.
- `delete_by_query`: Deletes documents matching the provided query.
### Cluster Operations
- `get_cluster_health`: Returns basic information about the health of the cluster.
- `get_cluster_stats`: Returns high-level overview of cluster statistics.
### Alias Operations
- `list_aliases`: List all aliases.
- `get_alias`: Get alias information for a specific index.
- `put_alias`: Create or update an alias for a specific index.
- `delete_alias`: Delete an alias for a specific index.
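As a concrete illustration, the `general_api_request` tool accepts `method`, `path`, and optional `params`/`body` arguments (see `src/tools/general.py`); a hypothetical tool call to list indices might carry arguments like the following (the specific method/path/params values are illustrative, not prescribed by this repo):

```python
# Hypothetical arguments an MCP client could send to the `general_api_request` tool.
args = {
    "method": "GET",               # HTTP method (GET, POST, PUT, DELETE, etc.)
    "path": "/_cat/indices",       # API endpoint path
    "params": {"format": "json"},  # query parameters
    "body": None,                  # request body (unused for GET)
}
```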
## Configure Environment Variables
The MCP server supports the following environment variables for authentication:
### Basic Authentication (Username/Password)
- `ELASTICSEARCH_USERNAME`: Username for basic authentication
- `ELASTICSEARCH_PASSWORD`: Password for basic authentication
- `OPENSEARCH_USERNAME`: Username for OpenSearch basic authentication
- `OPENSEARCH_PASSWORD`: Password for OpenSearch basic authentication
### API Key Authentication (Elasticsearch only) - Recommended
- `ELASTICSEARCH_API_KEY`: API key for [Elasticsearch](https://www.elastic.co/docs/deploy-manage/api-keys/elasticsearch-api-keys) or [Elastic Cloud](https://www.elastic.co/docs/deploy-manage/api-keys/elastic-cloud-api-keys) Authentication.
### Other Configuration
- `ELASTICSEARCH_HOSTS` / `OPENSEARCH_HOSTS`: Comma-separated list of hosts (default: `https://localhost:9200`)
- `ELASTICSEARCH_VERIFY_CERTS` / `OPENSEARCH_VERIFY_CERTS`: Whether to verify SSL certificates (default: `false`)
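Internally, the server reads these variables with the defaults listed above (see `src/clients/__init__.py`); a minimal sketch of that parsing logic (the function name here is illustrative):

```python
import os

def parse_search_env(prefix: str) -> dict:
    """Sketch of how the server builds its config from {PREFIX}_* variables;
    mirrors src/clients/__init__.py."""
    hosts_str = os.environ.get(f"{prefix}_HOSTS", "https://localhost:9200")
    return {
        # Comma-separated hosts are split and whitespace-trimmed
        "hosts": [h.strip() for h in hosts_str.split(",")],
        "username": os.environ.get(f"{prefix}_USERNAME"),
        "password": os.environ.get(f"{prefix}_PASSWORD"),
        "api_key": os.environ.get(f"{prefix}_API_KEY"),
        # Any value other than the string "true" leaves certificate verification off
        "verify_certs": os.environ.get(f"{prefix}_VERIFY_CERTS", "false").lower() == "true",
    }

# Example: two hosts supplied as a comma-separated list
os.environ["ELASTICSEARCH_HOSTS"] = "https://es1:9200, https://es2:9200"
config = parse_search_env("ELASTICSEARCH")
```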
## Start Elasticsearch/OpenSearch Cluster
Start the Elasticsearch/OpenSearch cluster using Docker Compose:
```bash
# For Elasticsearch
docker-compose -f docker-compose-elasticsearch.yml up -d
# For OpenSearch
docker-compose -f docker-compose-opensearch.yml up -d
```
The default Elasticsearch username is `elastic` and password is `test123`. The default OpenSearch username is `admin` and password is `admin`.
You can access Kibana/OpenSearch Dashboards from http://localhost:5601.
## Stdio
### Option 1: Using uvx
Using `uvx` will automatically install the package from PyPI, no need to clone the repository locally. Add the following configuration to Claude Desktop's config file `claude_desktop_config.json`.
```json
// For Elasticsearch with username/password
{
  "mcpServers": {
    "elasticsearch-mcp-server": {
      "command": "uvx",
      "args": [
        "elasticsearch-mcp-server"
      ],
      "env": {
        "ELASTICSEARCH_HOSTS": "https://localhost:9200",
        "ELASTICSEARCH_USERNAME": "elastic",
        "ELASTICSEARCH_PASSWORD": "test123"
      }
    }
  }
}
// For Elasticsearch with API key
{
  "mcpServers": {
    "elasticsearch-mcp-server": {
      "command": "uvx",
      "args": [
        "elasticsearch-mcp-server"
      ],
      "env": {
        "ELASTICSEARCH_HOSTS": "https://localhost:9200",
        "ELASTICSEARCH_API_KEY": "<YOUR_ELASTICSEARCH_API_KEY>"
      }
    }
  }
}
// For OpenSearch
{
  "mcpServers": {
    "opensearch-mcp-server": {
      "command": "uvx",
      "args": [
        "opensearch-mcp-server"
      ],
      "env": {
        "OPENSEARCH_HOSTS": "https://localhost:9200",
        "OPENSEARCH_USERNAME": "admin",
        "OPENSEARCH_PASSWORD": "admin"
      }
    }
  }
}
```
### Option 2: Using uv with local development
Using `uv` requires cloning the repository locally and specifying the path to the source code. Add the following configuration to Claude Desktop's config file `claude_desktop_config.json`.
```json
// For Elasticsearch with username/password
{
  "mcpServers": {
    "elasticsearch-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "path/to/elasticsearch-mcp-server",
        "run",
        "elasticsearch-mcp-server"
      ],
      "env": {
        "ELASTICSEARCH_HOSTS": "https://localhost:9200",
        "ELASTICSEARCH_USERNAME": "elastic",
        "ELASTICSEARCH_PASSWORD": "test123"
      }
    }
  }
}
// For Elasticsearch with API key
{
  "mcpServers": {
    "elasticsearch-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "path/to/elasticsearch-mcp-server",
        "run",
        "elasticsearch-mcp-server"
      ],
      "env": {
        "ELASTICSEARCH_HOSTS": "https://localhost:9200",
        "ELASTICSEARCH_API_KEY": "<YOUR_ELASTICSEARCH_API_KEY>"
      }
    }
  }
}
// For OpenSearch
{
  "mcpServers": {
    "opensearch-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "path/to/elasticsearch-mcp-server",
        "run",
        "opensearch-mcp-server"
      ],
      "env": {
        "OPENSEARCH_HOSTS": "https://localhost:9200",
        "OPENSEARCH_USERNAME": "admin",
        "OPENSEARCH_PASSWORD": "admin"
      }
    }
  }
}
```
## SSE
### Option 1: Using uvx
```bash
# export environment variables (with username/password)
export ELASTICSEARCH_HOSTS="https://localhost:9200"
export ELASTICSEARCH_USERNAME="elastic"
export ELASTICSEARCH_PASSWORD="test123"
# OR export environment variables (with API key)
export ELASTICSEARCH_HOSTS="https://localhost:9200"
export ELASTICSEARCH_API_KEY="<YOUR_ELASTICSEARCH_API_KEY>"
# By default, the SSE MCP server will serve on http://127.0.0.1:8000/sse
uvx elasticsearch-mcp-server --transport sse
# The host, port, and path can be specified using the --host, --port, and --path options
uvx elasticsearch-mcp-server --transport sse --host 0.0.0.0 --port 8000 --path /sse
```
### Option 2: Using uv
```bash
# By default, the SSE MCP server will serve on http://127.0.0.1:8000/sse
uv run src/server.py elasticsearch-mcp-server --transport sse
# The host, port, and path can be specified using the --host, --port, and --path options
uv run src/server.py elasticsearch-mcp-server --transport sse --host 0.0.0.0 --port 8000 --path /sse
```
## Streamable HTTP
### Option 1: Using uvx
```bash
# export environment variables (with username/password)
export ELASTICSEARCH_HOSTS="https://localhost:9200"
export ELASTICSEARCH_USERNAME="elastic"
export ELASTICSEARCH_PASSWORD="test123"
# OR export environment variables (with API key)
export ELASTICSEARCH_HOSTS="https://localhost:9200"
export ELASTICSEARCH_API_KEY="<YOUR_ELASTICSEARCH_API_KEY>"
# By default, the Streamable HTTP MCP server will serve on http://127.0.0.1:8000/mcp
uvx elasticsearch-mcp-server --transport streamable-http
# The host, port, and path can be specified using the --host, --port, and --path options
uvx elasticsearch-mcp-server --transport streamable-http --host 0.0.0.0 --port 8000 --path /mcp
```
### Option 2: Using uv
```bash
# By default, the Streamable HTTP MCP server will serve on http://127.0.0.1:8000/mcp
uv run src/server.py elasticsearch-mcp-server --transport streamable-http
# The host, port, and path can be specified using the --host, --port, and --path options
uv run src/server.py elasticsearch-mcp-server --transport streamable-http --host 0.0.0.0 --port 8000 --path /mcp
```
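The `--transport`, `--host`, `--port`, and `--path` flags used above can be modeled with a small argument parser; a sketch of that CLI surface (the actual implementation lives in `src/server.py` and may differ — flag names and defaults here simply follow the README):

```python
import argparse

# Sketch of the command-line surface described above; defaults assumed from the README.
parser = argparse.ArgumentParser(prog="elasticsearch-mcp-server")
parser.add_argument("--transport", choices=["stdio", "sse", "streamable-http"], default="stdio")
parser.add_argument("--host", default="127.0.0.1")
parser.add_argument("--port", type=int, default=8000)
parser.add_argument("--path", default=None)

# Parse the example invocation from the Streamable HTTP section
args = parser.parse_args(
    ["--transport", "streamable-http", "--host", "0.0.0.0", "--port", "8000", "--path", "/mcp"]
)
```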
## Compatibility
The MCP server is compatible with Elasticsearch 7.x, 8.x, and 9.x. By default, it uses the Elasticsearch 8.x client (without a suffix).
| MCP Server | Elasticsearch |
| --- | --- |
| elasticsearch-mcp-server-es7 | Elasticsearch 7.x |
| elasticsearch-mcp-server | Elasticsearch 8.x |
| elasticsearch-mcp-server-es9 | Elasticsearch 9.x |
| opensearch-mcp-server | OpenSearch 1.x, 2.x, 3.x |
To use the Elasticsearch 7.x client, run the `elasticsearch-mcp-server-es7` variant. For Elasticsearch 9.x, use `elasticsearch-mcp-server-es9`. For example:
```bash
uvx elasticsearch-mcp-server-es7
```
If you want to run different Elasticsearch variants (e.g., 7.x or 9.x) locally, simply update the `elasticsearch` dependency version in `pyproject.toml`, then start the server with:
```bash
uv run src/server.py elasticsearch-mcp-server
```
## License
This project is licensed under the Apache License Version 2.0 - see the [LICENSE](LICENSE) file for details.
```
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
```markdown
# Contributing to Elasticsearch/OpenSearch MCP Server
Thank you for your interest in contributing to the Elasticsearch/OpenSearch MCP Server! All kinds of contributions are welcome.
## Bug reports
If you think you've found a bug in the Elasticsearch/OpenSearch MCP Server, we welcome your report. It's very helpful if you can provide steps to reproduce the bug, as it makes it easier to identify and fix the issue.
## Feature requests
If you find yourself wishing for a feature that doesn't exist in the Elasticsearch/OpenSearch MCP Server, you are probably not alone. Don't hesitate to open an issue describing the feature you would like to see, why you need it, and how it should work.
## Pull requests
If you have a fix or a new feature, we welcome your pull requests. You can follow these steps:
1. Fork your own copy of the repository to your GitHub account by clicking on
`Fork` button on [elasticsearch-mcp-server's GitHub repository](https://github.com/cr7258/elasticsearch-mcp-server).
2. Clone the forked repository on your local setup.
```bash
git clone https://github.com/$user/elasticsearch-mcp-server
```
Add a remote upstream to track upstream `elasticsearch-mcp-server` repository.
```bash
git remote add upstream https://github.com/cr7258/elasticsearch-mcp-server
```
3. Create a topic branch.
```bash
git checkout -b <branch-name>
```
4. Make changes and commit them locally.
```bash
git add <modifiedFile>
git commit
```
A clear commit message helps reviewers understand the purpose of a submitted PR and can speed up code review. We encourage contributors to use **EXPLICIT** commit messages rather than ambiguous ones. In general, we advocate the following commit message types:
- Features: commit message starts with `feat`. For example: "feat: add user authentication module"
- Bug Fixes: commit message starts with `fix`. For example: "fix: resolve null pointer exception in user service"
- Documentation: commit message starts with `doc`. For example: "doc: update API documentation for user endpoints"
- Performance: commit message starts with `perf`. For example: "perf: improve the performance of user service"
- Refactor: commit message starts with `refactor`. For example: "refactor: refactor user service to improve code readability"
- Test: commit message starts with `test`. For example: "test: add unit test for user service"
- Chore: commit message starts with `chore`. For example: "chore: update dependencies in pom.xml"
- Style: commit message starts with `style`. For example: "style: format the code in user service"
- Revert: commit message starts with `revert`. For example: "revert: revert the changes in user service"
5. Push the local branch to your forked repository.
```bash
git push -u origin <branch-name>
```
6. Create a Pull request on GitHub.
Visit your fork at `https://github.com/$user/elasticsearch-mcp-server` and click
`Compare & Pull Request` button next to your `<branch-name>`.
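The commit-message prefixes listed in step 4 follow a conventional-commit style and can be checked mechanically; a minimal sketch (the regex and helper are illustrative, not part of this repository):

```python
import re

# Prefixes accepted by the convention above (note: `doc`, not `docs`, per this guide)
COMMIT_RE = re.compile(r"^(feat|fix|doc|perf|refactor|test|chore|style|revert): \S")

def is_conventional(message: str) -> bool:
    """Return True if the first line of a commit message uses an accepted prefix."""
    return bool(COMMIT_RE.match(message.splitlines()[0]))
```

Such a check could be wired into a local `commit-msg` git hook to catch ambiguous messages before review.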
## Keeping branch in sync with upstream
Click `Sync fork` button on your forked repository to keep your forked repository in sync with the upstream repository.
If you have already created a branch and want to keep it in sync with the upstream repository, follow the steps below:
```bash
git checkout <branch-name>
git fetch upstream
git rebase upstream/main
```
## Release
```bash
uv sync
source .venv/bin/activate
# example: make release version=v0.0.6
make release version=<RELEASE_VERSION>
```
```
--------------------------------------------------------------------------------
/mcp_client/python-sdk-anthropic/__init__.py:
--------------------------------------------------------------------------------
```python
```
--------------------------------------------------------------------------------
/src/version.py:
--------------------------------------------------------------------------------
```python
__version__ = "2.0.16"
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/gradle.properties:
--------------------------------------------------------------------------------
```
org.gradle.console = plain
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/settings.gradle:
--------------------------------------------------------------------------------
```
rootProject.name = 'spring-ai-mcp'
```
--------------------------------------------------------------------------------
/src/__init__.py:
--------------------------------------------------------------------------------
```python
"""
Search MCP Server package.
"""
from src.server import elasticsearch_mcp_server, opensearch_mcp_server, run_search_server
__all__ = ['elasticsearch_mcp_server', 'opensearch_mcp_server', 'run_search_server']
```
--------------------------------------------------------------------------------
/src/clients/common/__init__.py:
--------------------------------------------------------------------------------
```python
from .index import IndexClient
from .document import DocumentClient
from .cluster import ClusterClient
from .alias import AliasClient
__all__ = ['IndexClient', 'DocumentClient', 'ClusterClient', 'AliasClient']
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/src/test/java/spring/ai/mcp/spring_ai_mcp/SpringAiMcpApplicationTests.java:
--------------------------------------------------------------------------------
```java
package spring.ai.mcp.spring_ai_mcp;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class SpringAiMcpApplicationTests {

    @Test
    void contextLoads() {
    }

}
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/gradle/wrapper/gradle-wrapper.properties:
--------------------------------------------------------------------------------
```
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.13-bin.zip
networkTimeout=10000
validateDistributionUrl=true
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/src/main/resources/mcp-servers-config.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
    "elasticsearch-mcp-server": {
      "command": "uvx",
      "args": [
        "elasticsearch-mcp-server"
      ],
      "env": {
        "ELASTICSEARCH_HOSTS": "https://localhost:9200",
        "ELASTICSEARCH_USERNAME": "elastic",
        "ELASTICSEARCH_PASSWORD": "test123"
      }
    }
  }
}
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/src/main/resources/application.yml:
--------------------------------------------------------------------------------
```yaml
spring:
  application:
    name: spring-ai-mcp
  main:
    web-application-type: none
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      base-url: ${OPENAI_BASE_URL:https://openrouter.ai/api}
      chat:
        options:
          model: ${CHAT_MODEL:openai/gpt-4o}
    mcp:
      client:
        stdio:
          servers-configuration: classpath:mcp-servers-config.json
--------------------------------------------------------------------------------
/src/tools/__init__.py:
--------------------------------------------------------------------------------
```python
from src.tools.alias import AliasTools
from src.tools.cluster import ClusterTools
from src.tools.document import DocumentTools
from src.tools.general import GeneralTools
from src.tools.index import IndexTools
from src.tools.register import ToolsRegister

__all__ = [
    'AliasTools',
    'ClusterTools',
    'DocumentTools',
    'GeneralTools',
    'IndexTools',
    'ToolsRegister',
]
```
--------------------------------------------------------------------------------
/src/clients/common/cluster.py:
--------------------------------------------------------------------------------
```python
from typing import Dict

from src.clients.base import SearchClientBase


class ClusterClient(SearchClientBase):
    def get_cluster_health(self) -> Dict:
        """Get cluster health information."""
        return self.client.cluster.health()

    def get_cluster_stats(self) -> Dict:
        """Get cluster statistics."""
        return self.client.cluster.stats()
--------------------------------------------------------------------------------
/src/clients/common/general.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional

from src.clients.base import SearchClientBase


class GeneralClient(SearchClientBase):
    def general_api_request(self, method: str, path: str, params: Optional[Dict] = None, body: Optional[Dict] = None):
        """Perform a general HTTP API request.

        Use this tool for any Elasticsearch/OpenSearch API that does not have a dedicated tool.
        """
        return self.general_client.request(method, path, params, body)
--------------------------------------------------------------------------------
/src/tools/cluster.py:
--------------------------------------------------------------------------------
```python
from typing import Dict

from fastmcp import FastMCP


class ClusterTools:
    def __init__(self, search_client):
        self.search_client = search_client

    def register_tools(self, mcp: FastMCP):
        @mcp.tool()
        def get_cluster_health() -> Dict:
            """Returns basic information about the health of the cluster."""
            return self.search_client.get_cluster_health()

        @mcp.tool()
        def get_cluster_stats() -> Dict:
            """Returns high-level overview of cluster statistics."""
            return self.search_client.get_cluster_stats()
--------------------------------------------------------------------------------
/.github/workflows/publish-mcp.yml:
--------------------------------------------------------------------------------
```yaml
name: Publish Python MCP Server

on:
  workflow_run:
    workflows: ["PyPI Publish"]
    types:
      - completed

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Install MCP Publisher
        run: |
          curl -L "https://github.com/modelcontextprotocol/registry/releases/download/v1.0.0/mcp-publisher_1.0.0_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/').tar.gz" | tar xz mcp-publisher
      - name: Publish to MCP Registry
        run: |
          ./mcp-publisher login github-oidc
          ./mcp-publisher publish
--------------------------------------------------------------------------------
/src/clients/common/data_stream.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional

from src.clients.base import SearchClientBase


class DataStreamClient(SearchClientBase):
    def create_data_stream(self, name: str) -> Dict:
        """Create a new data stream."""
        return self.client.indices.create_data_stream(name=name)

    def get_data_stream(self, name: Optional[str] = None) -> Dict:
        """Get information about one or more data streams."""
        if name:
            return self.client.indices.get_data_stream(name=name)
        else:
            return self.client.indices.get_data_stream()

    def delete_data_stream(self, name: str) -> Dict:
        """Delete one or more data streams."""
        return self.client.indices.delete_data_stream(name=name)
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[project]
name = "elasticsearch-mcp-server"
version = "2.0.16"
description = "MCP Server for interacting with Elasticsearch and OpenSearch"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "elasticsearch==8.17.2",
    "opensearch-py==2.8.0",
    "mcp==1.9.2",
    "python-dotenv==1.1.0",
    "fastmcp==2.8.0",
    "pydantic>=2.11.0,<2.12.0",
    "anthropic==0.49.0",
    "tomli==2.2.1",
    "tomli-w==1.2.0",
]

[project.license]
file = "LICENSE"

[project.scripts]
elasticsearch-mcp-server = "src.server:elasticsearch_mcp_server"
opensearch-mcp-server = "src.server:opensearch_mcp_server"

[tool.hatch.build.targets.wheel]
packages = [
    "src",
]

[build-system]
requires = [
    "hatchling",
]
build-backend = "hatchling.build"
--------------------------------------------------------------------------------
/src/clients/common/alias.py:
--------------------------------------------------------------------------------
```python
from typing import Dict

from src.clients.base import SearchClientBase


class AliasClient(SearchClientBase):
    def list_aliases(self) -> Dict:
        """Get all aliases."""
        return self.client.cat.aliases()

    def get_alias(self, index: str) -> Dict:
        """Get aliases for the specified index."""
        return self.client.indices.get_alias(index=index)

    def put_alias(self, index: str, name: str, body: Dict) -> Dict:
        """Creates or updates an alias."""
        return self.client.indices.put_alias(index=index, name=name, body=body)

    def delete_alias(self, index: str, name: str) -> Dict:
        """Delete an alias for the specified index."""
        return self.client.indices.delete_alias(index=index, name=name)
--------------------------------------------------------------------------------
/src/clients/common/index.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional

from src.clients.base import SearchClientBase


class IndexClient(SearchClientBase):
    def list_indices(self) -> Dict:
        """List all indices."""
        return self.client.cat.indices()

    def get_index(self, index: str) -> Dict:
        """Returns information (mappings, settings, aliases) about one or more indices."""
        return self.client.indices.get(index=index)

    def create_index(self, index: str, body: Optional[Dict] = None) -> Dict:
        """Creates an index with optional settings and mappings."""
        return self.client.indices.create(index=index, body=body)

    def delete_index(self, index: str) -> Dict:
        """Delete an index."""
        return self.client.indices.delete(index=index)
--------------------------------------------------------------------------------
/src/tools/general.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional

from fastmcp import FastMCP


class GeneralTools:
    def __init__(self, search_client):
        self.search_client = search_client

    def register_tools(self, mcp: FastMCP):
        @mcp.tool()
        def general_api_request(method: str, path: str, params: Optional[Dict] = None, body: Optional[Dict] = None):
            """Perform a general HTTP API request.

            Use this tool for any Elasticsearch/OpenSearch API that does not have a dedicated tool.

            Args:
                method: HTTP method (GET, POST, PUT, DELETE, etc.)
                path: API endpoint path
                params: Query parameters
                body: Request body
            """
            return self.search_client.general_api_request(method, path, params, body)
--------------------------------------------------------------------------------
/mcp_client/spring-ai/build.gradle:
--------------------------------------------------------------------------------
```
plugins {
    id 'java'
    id 'org.springframework.boot' version '3.4.4'
    id 'io.spring.dependency-management' version '1.1.7'
}

group = 'spring.ai.mcp'
version = '0.0.1-SNAPSHOT'

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(24)
    }
}

repositories {
    mavenCentral()
}

ext {
    set('springAiVersion', "1.0.0-M7")
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.ai:spring-ai-starter-model-openai'
    implementation 'org.springframework.ai:spring-ai-starter-mcp-client'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testRuntimeOnly 'org.junit.platform:junit-platform-launcher'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.ai:spring-ai-bom:${springAiVersion}"
    }
}

tasks.named('test') {
    useJUnitPlatform()
}

bootRun {
    standardInput = System.in
}
--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------
```yaml
name: Release

on:
  push:
    tags:
      - 'v*'

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install git-cliff
      - name: Get version from tag
        id: get_version
        run: echo "VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_ENV
      - name: Generate changelog
        run: |
          git-cliff --output CHANGELOG.md --latest
      - name: Create Release
        uses: softprops/action-gh-release@v1
        with:
          name: v${{ env.VERSION }}
          body_path: CHANGELOG.md
          draft: false
          prerelease: false
--------------------------------------------------------------------------------
/src/clients/common/client.py:
--------------------------------------------------------------------------------
```python
from typing import Dict
from src.clients.common.alias import AliasClient
from src.clients.common.cluster import ClusterClient
from src.clients.common.data_stream import DataStreamClient
from src.clients.common.document import DocumentClient
from src.clients.common.general import GeneralClient
from src.clients.common.index import IndexClient
class SearchClient(IndexClient, DocumentClient, ClusterClient, AliasClient, DataStreamClient, GeneralClient):
"""
Unified search client that combines all search functionality.
This class uses multiple inheritance to combine all specialized client implementations
(index, document, cluster, alias) into a single unified client.
"""
def __init__(self, config: Dict, engine_type: str):
"""
Initialize the search client.
Args:
config: Configuration dictionary with connection parameters
engine_type: Type of search engine to use ("elasticsearch" or "opensearch")
"""
super().__init__(config, engine_type)
self.logger.info(f"Initialized the {engine_type} client")
```
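The `SearchClient` composition above leans on Python's method resolution order (MRO): each mixin inherits from a shared base, and one cooperative `super().__init__` call initializes that base exactly once. A minimal self-contained sketch with toy classes (not the real clients):

```python
# Toy mixins illustrating how the MRO composes SearchClient-style mixins.
class Base:
    def __init__(self, config, engine_type):
        self.config = config
        self.engine_type = engine_type

class IndexMixin(Base):
    def list_indices(self):
        return ["logs-1"]

class DocumentMixin(Base):
    def get_document(self, id):
        return {"_id": id}

class Client(IndexMixin, DocumentMixin):
    def __init__(self, config, engine_type):
        # One cooperative super().__init__ walks the MRO and runs Base.__init__ once
        super().__init__(config, engine_type)

c = Client({"hosts": ["https://localhost:9200"]}, "elasticsearch")
```

Python linearizes the hierarchy as `Client -> IndexMixin -> DocumentMixin -> Base -> object`, so methods from every mixin are available on one instance without diamond-inheritance surprises.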
--------------------------------------------------------------------------------
/src/clients/__init__.py:
--------------------------------------------------------------------------------
```python
import os
from dotenv import load_dotenv
from src.clients.common.client import SearchClient
from src.clients.exceptions import handle_search_exceptions
def create_search_client(engine_type: str) -> SearchClient:
"""
Create a search client for the specified engine type.
Args:
engine_type: Type of search engine to use ("elasticsearch" or "opensearch")
Returns:
A search client instance
"""
# Load configuration from environment variables
load_dotenv()
# Get configuration from environment variables
prefix = engine_type.upper()
hosts_str = os.environ.get(f"{prefix}_HOSTS", "https://localhost:9200")
hosts = [host.strip() for host in hosts_str.split(",")]
username = os.environ.get(f"{prefix}_USERNAME")
password = os.environ.get(f"{prefix}_PASSWORD")
api_key = os.environ.get(f"{prefix}_API_KEY")
verify_certs = os.environ.get(f"{prefix}_VERIFY_CERTS", "false").lower() == "true"
config = {
"hosts": hosts,
"username": username,
"password": password,
"api_key": api_key,
"verify_certs": verify_certs
}
return SearchClient(config, engine_type)
__all__ = [
'create_search_client',
'handle_search_exceptions',
'SearchClient',
]
```
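`create_search_client` derives all connection settings from prefixed environment variables (`ELASTICSEARCH_*` or `OPENSEARCH_*`). A self-contained sketch of that parsing logic (a simplified stand-in, not the real function):

```python
import os

def build_config(engine_type: str) -> dict:
    # Mirrors create_search_client's env parsing: comma-separated hosts,
    # boolean verify_certs defaulting to false.
    prefix = engine_type.upper()
    hosts_str = os.environ.get(f"{prefix}_HOSTS", "https://localhost:9200")
    return {
        "hosts": [h.strip() for h in hosts_str.split(",")],
        "verify_certs": os.environ.get(f"{prefix}_VERIFY_CERTS", "false").lower() == "true",
    }

os.environ["ELASTICSEARCH_HOSTS"] = "https://es1:9200, https://es2:9200"
os.environ["ELASTICSEARCH_VERIFY_CERTS"] = "false"
cfg = build_config("elasticsearch")
```

Note that whitespace around commas is stripped, so `"a, b"` and `"a,b"` yield the same host list.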
--------------------------------------------------------------------------------
/src/tools/register.py:
--------------------------------------------------------------------------------
```python
import logging
from typing import List, Type
from fastmcp import FastMCP
from src.clients import SearchClient
from src.clients.exceptions import with_exception_handling
class ToolsRegister:
"""Class to handle registration of MCP tools."""
def __init__(self, logger: logging.Logger, search_client: SearchClient, mcp: FastMCP):
"""
Initialize the tools register.
Args:
logger: Logger instance
search_client: Search client instance
mcp: FastMCP instance
"""
self.logger = logger
self.search_client = search_client
self.mcp = mcp
def register_all_tools(self, tool_classes: List[Type]):
"""
Register all tools with the MCP server.
Args:
tool_classes: List of tool classes to register
"""
for tool_class in tool_classes:
self.logger.info(f"Registering tools from {tool_class.__name__}")
tool_instance = tool_class(self.search_client)
# Set logger and client attributes
tool_instance.logger = self.logger
tool_instance.search_client = self.search_client
# Register tools with automatic exception handling
with_exception_handling(tool_instance, self.mcp)
```
--------------------------------------------------------------------------------
/src/tools/index.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional, List
from fastmcp import FastMCP
class IndexTools:
def __init__(self, search_client):
self.search_client = search_client
def register_tools(self, mcp: FastMCP):
@mcp.tool()
def list_indices() -> List[Dict]:
"""List all indices."""
return self.search_client.list_indices()
@mcp.tool()
def get_index(index: str) -> Dict:
"""
Returns information (mappings, settings, aliases) about one or more indices.
Args:
index: Name of the index (a comma-separated list or wildcard pattern is also accepted)
"""
return self.search_client.get_index(index=index)
@mcp.tool()
def create_index(index: str, body: Optional[Dict] = None) -> Dict:
"""
Create a new index.
Args:
index: Name of the index
body: Optional index configuration including mappings and settings
"""
return self.search_client.create_index(index=index, body=body)
@mcp.tool()
def delete_index(index: str) -> Dict:
"""
Delete an index.
Args:
index: Name of the index
"""
return self.search_client.delete_index(index=index)
```
--------------------------------------------------------------------------------
/src/tools/alias.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, List
from fastmcp import FastMCP
class AliasTools:
def __init__(self, search_client):
self.search_client = search_client
def register_tools(self, mcp: FastMCP):
@mcp.tool()
def list_aliases() -> List[Dict]:
"""List all aliases."""
return self.search_client.list_aliases()
@mcp.tool()
def get_alias(index: str) -> Dict:
"""
Get alias information for a specific index.
Args:
index: Name of the index
"""
return self.search_client.get_alias(index=index)
@mcp.tool()
def put_alias(index: str, name: str, body: Dict) -> Dict:
"""
Create or update an alias for a specific index.
Args:
index: Name of the index
name: Name of the alias
body: Alias configuration
"""
return self.search_client.put_alias(index=index, name=name, body=body)
@mcp.tool()
def delete_alias(index: str, name: str) -> Dict:
"""
Delete an alias for a specific index.
Args:
index: Name of the index
name: Name of the alias
"""
return self.search_client.delete_alias(index=index, name=name)
```
--------------------------------------------------------------------------------
/mcp_client/spring-ai/src/main/java/spring/ai/mcp/spring_ai_mcp/Application.java:
--------------------------------------------------------------------------------
```java
package spring.ai.mcp.spring_ai_mcp;
import java.util.List;
import java.util.Scanner;
import io.modelcontextprotocol.client.McpSyncClient;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.MessageChatMemoryAdvisor;
import org.springframework.ai.chat.memory.InMemoryChatMemory;
import org.springframework.ai.mcp.SyncMcpToolCallbackProvider;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
@SpringBootApplication
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@Bean
public CommandLineRunner chatbot(ChatClient.Builder chatClientBuilder, List<McpSyncClient> mcpSyncClients) {
return args -> {
var chatClient = chatClientBuilder
.defaultSystem("You are a helpful assistant and can query Elasticsearch to answer questions.")
.defaultTools(new SyncMcpToolCallbackProvider(mcpSyncClients))
.defaultAdvisors(new MessageChatMemoryAdvisor(new InMemoryChatMemory()))
.build();
// Start the chat loop
System.out.println("\nI am your AI assistant.\n");
try (Scanner scanner = new Scanner(System.in)) {
while (true) {
System.out.print("\nUSER: ");
System.out.println("\nASSISTANT: " +
chatClient.prompt(scanner.nextLine()) // Get the user input
.call()
.content());
}
}
};
}
}
```
--------------------------------------------------------------------------------
/src/clients/common/document.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional
from src.clients.base import SearchClientBase
class DocumentClient(SearchClientBase):
def search_documents(self, index: str, body: Dict) -> Dict:
"""Search for documents in the index."""
return self.client.search(index=index, body=body)
def index_document(self, index: str, document: Dict, id: Optional[str] = None) -> Dict:
"""Creates a new document in the index."""
# Handle parameter name differences between Elasticsearch and OpenSearch
if self.engine_type == "elasticsearch":
# For Elasticsearch: index(index, document, id=None, ...)
if id is not None:
return self.client.index(index=index, document=document, id=id)
else:
return self.client.index(index=index, document=document)
else:
# For OpenSearch: index(index, body, id=None, ...)
if id is not None:
return self.client.index(index=index, body=document, id=id)
else:
return self.client.index(index=index, body=document)
def get_document(self, index: str, id: str) -> Dict:
"""Get a document by ID."""
return self.client.get(index=index, id=id)
def delete_document(self, index: str, id: str) -> Dict:
"""Removes a document from the index."""
return self.client.delete(index=index, id=id)
def delete_by_query(self, index: str, body: Dict) -> Dict:
"""Deletes documents matching the provided query."""
return self.client.delete_by_query(index=index, body=body)
```
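The branch in `index_document` exists because the two client libraries name the payload keyword differently: the Elasticsearch client takes `document=`, while the OpenSearch client takes `body=`. A self-contained sketch of that dispatch with a stub client (the stub is illustrative, not either real library):

```python
class StubClient:
    # Records which keyword carried the payload, standing in for both client libraries.
    def index(self, index, document=None, body=None, id=None):
        return {"index": index, "payload": document if document is not None else body, "id": id}

def index_document(client, engine_type, index, document, id=None):
    # Same dispatch as DocumentClient.index_document: pick the keyword per engine
    kwargs = {"id": id} if id is not None else {}
    if engine_type == "elasticsearch":
        return client.index(index=index, document=document, **kwargs)
    return client.index(index=index, body=document, **kwargs)

res = index_document(StubClient(), "opensearch", "logs", {"msg": "hi"}, id="1")
```

Omitting `id` lets the engine auto-generate one, which is why the real code only passes `id` when it is not `None`.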
--------------------------------------------------------------------------------
/src/tools/data_stream.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional
from fastmcp import FastMCP
class DataStreamTools:
def __init__(self, search_client):
self.search_client = search_client
def register_tools(self, mcp: FastMCP):
"""Register data stream tools with the MCP server."""
@mcp.tool()
def create_data_stream(name: str) -> Dict:
"""Create a new data stream.
This creates a new data stream with the specified name.
The data stream must have a matching index template before creation.
Args:
name: Name of the data stream to create
"""
return self.search_client.create_data_stream(name=name)
@mcp.tool()
def get_data_stream(name: Optional[str] = None) -> Dict:
"""Get information about one or more data streams.
Retrieves configuration, mappings, settings, and other information
about the specified data streams.
Args:
name: Name of the data stream(s) to retrieve.
Can be a comma-separated list or wildcard pattern.
If not provided, retrieves all data streams.
"""
return self.search_client.get_data_stream(name=name)
@mcp.tool()
def delete_data_stream(name: str) -> Dict:
"""Delete one or more data streams.
Permanently deletes the specified data streams and all their backing indices.
Args:
name: Name of the data stream(s) to delete.
Can be a comma-separated list or wildcard pattern.
"""
return self.search_client.delete_data_stream(name=name)
```
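As the `create_data_stream` docstring notes, a matching index template must exist first. An illustrative template body (names like `my-logs-*` are placeholders, not values from this repo) that would be sent to `PUT _index_template/my-logs-template` before creating a data stream such as `my-logs-web`:

```json
{
  "index_patterns": ["my-logs-*"],
  "data_stream": {},
  "priority": 200
}
```

The empty `data_stream` object is what marks the template as data-stream-enabled; without it, creating the data stream fails.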
--------------------------------------------------------------------------------
/.github/workflows/pypi-publish.yaml:
--------------------------------------------------------------------------------
```yaml
# This workflow will upload Python Packages using uv when a release is created
# It builds and publishes multiple packages for different Elasticsearch versions
name: PyPI Publish
on:
workflow_run:
workflows: ["Release"]
types:
- completed
env:
UV_PUBLISH_TOKEN: '${{ secrets.PYPI_API_TOKEN }}'
jobs:
deploy:
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' }}
strategy:
matrix:
variant:
- name: "elasticsearch-mcp-server-es7"
elasticsearch_version: "7.13.0"
- name: "elasticsearch-mcp-server"
elasticsearch_version: "8.17.2"
- name: "elasticsearch-mcp-server-es9"
elasticsearch_version: "9.0.0"
- name: "opensearch-mcp-server"
elasticsearch_version: "8.17.2"
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.10.x'
- name: Install dependencies
run: |
python -m pip install uv
uv sync
- name: Modify pyproject.toml for ${{ matrix.variant.name }}
run: |
# Update package name
sed -i 's/^name = .*$/name = "${{ matrix.variant.name }}"/' pyproject.toml
# Update elasticsearch version
sed -i 's/elasticsearch==.*/elasticsearch==${{ matrix.variant.elasticsearch_version }}",/' pyproject.toml
# Update script name to match package name (only for non-opensearch packages)
if [[ "${{ matrix.variant.name }}" != "opensearch-mcp-server" ]]; then
sed -i 's/^elasticsearch-mcp-server = /"${{ matrix.variant.name }}" = /' pyproject.toml
fi
- name: Build ${{ matrix.variant.name }} package
run: uv build
- name: Publish ${{ matrix.variant.name }} package
run: uv publish
- name: Clean dist directory
run: rm -rf dist/*
```
--------------------------------------------------------------------------------
/server.json:
--------------------------------------------------------------------------------
```json
{
"$schema": "https://static.modelcontextprotocol.io/schemas/2025-07-09/server.schema.json",
"name": "io.github.cr7258/elasticsearch-mcp-server",
"description": "MCP server for interacting with Elasticsearch",
"status": "active",
"repository": {
"url": "https://github.com/cr7258/elasticsearch-mcp-server",
"source": "github"
},
"version": "2.0.16",
"packages": [
{
"registry_type": "pypi",
"registry_base_url": "https://pypi.org",
"identifier": "elasticsearch-mcp-server",
"version": "2.0.16",
"transport": {
"type": "stdio"
},
"environment_variables": [
{
"name": "ELASTICSEARCH_HOSTS",
"description": "Comma-separated list of Elasticsearch hosts (e.g., https://localhost:9200)",
"is_required": false,
"format": "string",
"is_secret": false,
"default": "https://localhost:9200"
},
{
"name": "ELASTICSEARCH_API_KEY",
"description": "API key for Elasticsearch or Elastic Cloud authentication (recommended)",
"is_required": false,
"format": "string",
"is_secret": true
},
{
"name": "ELASTICSEARCH_USERNAME",
"description": "Username for basic authentication (alternative to API key)",
"is_required": false,
"format": "string",
"is_secret": false
},
{
"name": "ELASTICSEARCH_PASSWORD",
"description": "Password for basic authentication (used with ELASTICSEARCH_USERNAME)",
"is_required": false,
"format": "string",
"is_secret": true
},
{
"name": "ELASTICSEARCH_VERIFY_CERTS",
"description": "Whether to verify SSL certificates (true/false)",
"is_required": false,
"format": "boolean",
"is_secret": false,
"default": "false"
}
]
}
]
}
```
--------------------------------------------------------------------------------
/mcp_client/python-sdk-anthropic/config.py:
--------------------------------------------------------------------------------
```python
from dotenv import load_dotenv
import logging
from os import getenv
import pydantic_settings as py_set
load_dotenv()
class LoggerConfig(py_set.BaseSettings):
file: str = "logs/notifications_telegram.log"
format: str = "[{name}]-[%(levelname)s]-[%(asctime)s]-[%(message)s]"
to_file: bool = getenv("LOG_TO_FILE", "False").lower() == "true"
to_terminal: bool = getenv("LOG_TO_TERMINAL", "True").lower() == "true"
file_level: int = logging.DEBUG
terminal_level: int = logging.INFO
class ElasticsearchConfig(py_set.BaseSettings):
host: str = getenv("ELASTICSEARCH_HOSTS", "")
port: int = int(getenv("ELASTICSEARCH_PORT", "30930"))
scroll_size: int = 10_000
scroll: str = "1m"
timeout: int = 60
class AnthropicConfig(py_set.BaseSettings):
model: str = getenv("MODEL", "claude-3-5-sonnet-20241022")
max_tokens_message: int = int(getenv("MAX_TOKENS_MESSAGE", "1000"))
class Config(py_set.BaseSettings):
logger: LoggerConfig
elasticsearch: ElasticsearchConfig
anthropic: AnthropicConfig
def read_config() -> Config:
logger_config = LoggerConfig()
elasticsearch_config = ElasticsearchConfig()
anthropic_config = AnthropicConfig()
return Config(
logger=logger_config,
elasticsearch=elasticsearch_config,
anthropic=anthropic_config,
)
def get_logger(name: str) -> logging.Logger:
log_config = LoggerConfig()
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(log_config.format.format(name=name))
if log_config.to_file:
file_handler = logging.FileHandler(log_config.file, mode="a")
file_handler.setLevel(log_config.file_level)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
if log_config.to_terminal:
console_handler = logging.StreamHandler()
console_handler.setLevel(log_config.terminal_level)
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
return logger
```
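The logger format string above mixes two templating passes: `{name}` is filled by `str.format` when the formatter is built, while `%(levelname)s`-style fields are filled by `logging` per record. A reduced, self-contained sketch of that two-stage formatting (the `asctime` field is dropped here so the output is deterministic):

```python
import io
import logging

fmt = "[{name}]-[%(levelname)s]-[%(message)s]"
# Stage 1: str.format resolves {name}; stage 2: logging resolves %(...)s per record
formatter = logging.Formatter(fmt.format(name="fmt-demo"))

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(formatter)

logger = logging.getLogger("fmt-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("hello")
out = stream.getvalue().strip()
```

Keeping `{name}` out of the `%`-style fields means the logger name is baked in once per handler rather than recomputed on every record.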
--------------------------------------------------------------------------------
/src/tools/document.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, Optional
from fastmcp import FastMCP
class DocumentTools:
def __init__(self, search_client):
self.search_client = search_client
def register_tools(self, mcp: FastMCP):
@mcp.tool()
def search_documents(index: str, body: Dict) -> Dict:
"""
Search for documents.
Args:
index: Name of the index
body: Search query
"""
return self.search_client.search_documents(index=index, body=body)
@mcp.tool()
def index_document(index: str, document: Dict, id: Optional[str] = None) -> Dict:
"""
Creates or updates a document in the index.
Args:
index: Name of the index
document: Document data
id: Optional document ID
"""
return self.search_client.index_document(index=index, id=id, document=document)
@mcp.tool()
def get_document(index: str, id: str) -> Dict:
"""
Get a document by ID.
Args:
index: Name of the index
id: Document ID
"""
return self.search_client.get_document(index=index, id=id)
@mcp.tool()
def delete_document(index: str, id: str) -> Dict:
"""
Delete a document by ID.
Args:
index: Name of the index
id: Document ID
"""
return self.search_client.delete_document(index=index, id=id)
@mcp.tool()
def delete_by_query(index: str, body: Dict) -> Dict:
"""
Deletes documents matching the provided query.
Args:
index: Name of the index
body: Query to match documents for deletion
"""
return self.search_client.delete_by_query(index=index, body=body)
```
--------------------------------------------------------------------------------
/src/clients/exceptions.py:
--------------------------------------------------------------------------------
```python
import functools
import logging
from typing import TypeVar, Callable
from fastmcp import FastMCP
from mcp.types import TextContent
T = TypeVar('T')
def handle_search_exceptions(func: Callable[..., T]) -> Callable[..., list[TextContent]]:
"""
Decorator to handle exceptions in search client operations.
Args:
func: The function to decorate
Returns:
Decorated function that handles exceptions
"""
@functools.wraps(func)
def wrapper(*args, **kwargs):
logger = logging.getLogger()
try:
return func(*args, **kwargs)
except Exception as e:
logger.error(f"Unexpected error in {func.__name__}: {e}")
return [TextContent(type="text", text=f"Unexpected error in {func.__name__}: {str(e)}")]
return wrapper
def with_exception_handling(tool_instance: object, mcp: FastMCP) -> None:
"""
Register tools from a tool instance with automatic exception handling applied to all tools.
This function temporarily replaces mcp.tool with a wrapped version that automatically
applies the handle_search_exceptions decorator to all registered tool methods.
Args:
tool_instance: The tool instance that has a register_tools method
mcp: The FastMCP instance used for tool registration
"""
# Save the original tool method
original_tool = mcp.tool
@functools.wraps(original_tool)
def wrapped_tool(*args, **kwargs):
# Get the original decorator
decorator = original_tool(*args, **kwargs)
# Return a new decorator that applies both the exception handler and original decorator
def combined_decorator(func):
# First apply the exception handling decorator
wrapped_func = handle_search_exceptions(func)
# Then apply the original mcp.tool decorator
return decorator(wrapped_func)
return combined_decorator
try:
# Temporarily replace mcp.tool with our wrapped version
mcp.tool = wrapped_tool
# Call the registration method on the tool instance
tool_instance.register_tools(mcp)
finally:
# Restore the original mcp.tool to avoid affecting other code that might use mcp.tool
# This ensures that our modification is isolated to just this tool registration
# and prevents multiple nested decorators if register_all_tools is called multiple times
mcp.tool = original_tool
```
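The temporary replacement of `mcp.tool` in `with_exception_handling` can be reproduced end to end with a toy registrar (the `ToyMCP` class below is a stand-in for FastMCP, not its real API):

```python
import functools

def handle_exceptions(func):
    # Same shape as handle_search_exceptions: catch everything, return a message
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            return f"error: {e}"
    return wrapper

class ToyMCP:
    def __init__(self):
        self.tools = {}
    def tool(self):
        def decorator(func):
            self.tools[func.__name__] = func
            return func
        return decorator

def with_exception_handling(register, mcp):
    original_tool = mcp.tool
    @functools.wraps(original_tool)
    def wrapped_tool(*args, **kwargs):
        decorator = original_tool(*args, **kwargs)
        def combined(func):
            # Exception handler first, then the real registration decorator
            return decorator(handle_exceptions(func))
        return combined
    try:
        mcp.tool = wrapped_tool  # shadow the registrar's decorator temporarily
        register(mcp)
    finally:
        mcp.tool = original_tool  # restore so later registrations are unaffected

def register(mcp):
    @mcp.tool()
    def boom():
        raise ValueError("nope")

mcp = ToyMCP()
with_exception_handling(register, mcp)
result = mcp.tools["boom"]()
```

The `try/finally` is the load-bearing part: even if registration raises, the original decorator is restored, so repeated registrations never stack the exception wrapper twice.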
--------------------------------------------------------------------------------
/mcp_client/spring-ai/gradlew.bat:
--------------------------------------------------------------------------------
```
@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem
@rem SPDX-License-Identifier: Apache-2.0
@rem
@if "%DEBUG%"=="" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
set DIRNAME=%~dp0
if "%DIRNAME%"=="" set DIRNAME=.
@rem This is normally unused
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if %ERRORLEVEL% equ 0 goto execute
echo. 1>&2
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. 1>&2
echo. 1>&2
echo Please set the JAVA_HOME variable in your environment to match the 1>&2
echo location of your Java installation. 1>&2
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto execute
echo. 1>&2
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% 1>&2
echo. 1>&2
echo Please set the JAVA_HOME variable in your environment to match the 1>&2
echo location of your Java installation. 1>&2
goto fail
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %*
:end
@rem End local scope for the variables with windows NT shell
if %ERRORLEVEL% equ 0 goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
set EXIT_CODE=%ERRORLEVEL%
if %EXIT_CODE% equ 0 set EXIT_CODE=1
if not ""=="%GRADLE_EXIT_CONSOLE%" exit %EXIT_CODE%
exit /b %EXIT_CODE%
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega
```
--------------------------------------------------------------------------------
/docker-compose-opensearch.yml:
--------------------------------------------------------------------------------
```yaml
services:
opensearch-node1: # This is also the hostname of the container within the Docker network (i.e. https://opensearch-node1/)
image: opensearchproject/opensearch:2.11.0
container_name: opensearch-node1
environment:
- cluster.name=opensearch-cluster # Name the cluster
- node.name=opensearch-node1 # Name the node that will run in this container
- discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
- cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
- bootstrap.memory_lock=true # Disable JVM heap memory swapping
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
- cluster.routing.allocation.disk.watermark.low=2gb
- cluster.routing.allocation.disk.watermark.high=1gb
- cluster.routing.allocation.disk.watermark.flood_stage=512mb
ulimits:
memlock:
soft: -1 # Set memlock to unlimited (no soft or hard limit)
hard: -1
nofile:
soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
hard: 65536
volumes:
- opensearch-data1:/usr/share/opensearch/data # Creates volume called opensearch-data1 and mounts it to the container
ports:
- 9200:9200 # REST API
- 9600:9600 # Performance Analyzer
networks:
- opensearch-net # All of the containers will join the same Docker bridge network
opensearch-node2:
image: opensearchproject/opensearch:2.11.0 # This should be the same image used for opensearch-node1 to avoid issues
container_name: opensearch-node2
environment:
- cluster.name=opensearch-cluster
- node.name=opensearch-node2
- discovery.seed_hosts=opensearch-node1,opensearch-node2
- cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
- bootstrap.memory_lock=true
- "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
- cluster.routing.allocation.disk.watermark.low=2gb
- cluster.routing.allocation.disk.watermark.high=1gb
- cluster.routing.allocation.disk.watermark.flood_stage=512mb
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
volumes:
- opensearch-data2:/usr/share/opensearch/data
networks:
- opensearch-net
opensearch-dashboards:
image: opensearchproject/opensearch-dashboards:2.11.0 # Make sure the version of opensearch-dashboards matches the version of opensearch installed on other nodes
container_name: opensearch-dashboards
ports:
- 5601:5601 # Map host port 5601 to container port 5601
expose:
- "5601" # Expose port 5601 for web access to OpenSearch Dashboards
environment:
OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query
networks:
- opensearch-net
volumes:
opensearch-data1:
opensearch-data2:
networks:
opensearch-net:
```
--------------------------------------------------------------------------------
/cliff.toml:
--------------------------------------------------------------------------------
```toml
# git-cliff ~ configuration file
# https://git-cliff.org/docs/configuration
[changelog]
# template for the changelog header
header = """
# Changelog\n
"""
# template for the changelog body
# https://keats.github.io/tera/docs/#introduction
body = """
{% if version %}\
{% if previous.version %}\
## [{{ version | trim_start_matches(pat="v") }}]($REPO/compare/{{ previous.version }}..{{ version }}) - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% endif %}\
{% else %}\
## [unreleased]
{% endif %}\
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group | striptags | trim | upper_first }}
{% for commit in commits
| filter(attribute="scope")
| sort(attribute="scope") %}
- **({{commit.scope}})**{% if commit.breaking %} [**breaking**]{% endif %} \
{{ commit.message }} - ([{{ commit.id | truncate(length=7, end="") }}]($REPO/commit/{{ commit.id }})) - @{{ commit.author.name }}
{%- endfor -%}
{% raw %}\n{% endraw %}\
{%- for commit in commits %}
{%- if commit.scope -%}
{% else -%}
- {% if commit.breaking %} [**breaking**]{% endif %}\
{{ commit.message }} - ([{{ commit.id | truncate(length=7, end="") }}]($REPO/commit/{{ commit.id }})) - @{{ commit.author.name }}
{% endif -%}
{% endfor -%}
{% endfor %}\n
"""
# template for the changelog footer
footer = """
<!-- generated by git-cliff -->
"""
# remove the leading and trailing whitespace from the templates
trim = true
# postprocessors
postprocessors = [
{ pattern = '\$REPO', replace = "https://github.com/cr7258/elasticsearch-mcp-server" }, # replace repository URL
]
[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = true
# process each line of a commit as an individual commit
split_commits = false
# regex for preprocessing the commit messages
commit_preprocessors = [
# { pattern = '\((\w+\s)?#([0-9]+)\)', replace = "([#${2}](https://github.com/cr7258/elasticsearch-mcp-server/issues/${2}))"}, # replace issue numbers
]
# regex for parsing and grouping commits
commit_parsers = [
{ message = "^feat", group = "<!-- 0 -->⛰️ Features" },
{ message = "^fix", group = "<!-- 1 -->🐛 Bug Fixes" },
{ message = "^doc", group = "<!-- 3 -->📚 Documentation" },
{ message = "^perf", group = "<!-- 4 -->⚡ Performance" },
{ message = "^refactor\\(clippy\\)", skip = true },
{ message = "^refactor", group = "<!-- 2 -->🚜 Refactor" },
{ message = "^style", group = "<!-- 5 -->🎨 Styling" },
{ message = "^test", group = "<!-- 6 -->🧪 Testing" },
{ message = "^chore\\(release\\): prepare for", skip = true },
{ message = "^chore\\(deps.*\\)", skip = true },
{ message = "^chore\\(pr\\)", skip = true },
{ message = "^chore\\(pull\\)", skip = true },
{ message = "^chore\\(npm\\).*yarn\\.lock", skip = true },
{ message = "^chore|^ci", group = "<!-- 7 -->⚙️ Miscellaneous Tasks" },
{ body = ".*security", group = "<!-- 8 -->🛡️ Security" },
{ message = "^revert", group = "<!-- 9 -->◀️ Revert" },
]
# filter out the commits that are not matched by commit parsers
filter_commits = false
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "oldest"
# regex for matching git tags
tag_pattern = "^v[0-9]"
# regex for skipping tags
skip_tags = ""
# regex for ignoring tags
ignore_tags = ""
# use tag date instead of commit date
date_order = true
# path to git binary
git_path = "git"
# whether to use relaxed or strict semver parsing
relaxed_semver = true
# only show the changes for the current version
tag_range = true
```
--------------------------------------------------------------------------------
/mcp_client/python-sdk-anthropic/client.py:
--------------------------------------------------------------------------------
```python
"""
Client example copied from https://modelcontextprotocol.io/quickstart/client
"""
import asyncio
import sys
from typing import Optional
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from anthropic import Anthropic
from config import get_logger, read_config
logger = get_logger(__name__)
class MCPClient:
def __init__(self):
self.session: Optional[ClientSession] = None
self.exit_stack = AsyncExitStack()
self.anthropic = Anthropic()
self.config = read_config()
async def connect_to_server(self, server_script_path: str):
"""Connect to an MCP server
Args:
server_script_path: Path to the server script (.py or .js)
"""
is_python = server_script_path.endswith('.py')
is_js = server_script_path.endswith('.js')
if not (is_python or is_js):
raise ValueError("Server script must be a .py or .js file")
command = "python" if is_python else "node"
server_params = StdioServerParameters(command=command, args=[server_script_path], env=None)
stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
self.stdio, self.write = stdio_transport
self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
if self.session is not None:
await self.session.initialize()
response = await self.session.list_tools()
tools = response.tools
logger.info(f"\nConnected to server with tools: {', '.join(tool.name for tool in tools)}")
async def process_query(self, query: str) -> str:
"""Process a query using Claude and available tools"""
messages = [
{
"role": "user",
"content": query
}
]
response = await self.session.list_tools()
available_tools = [{
"name": tool.name,
"description": tool.description,
"input_schema": tool.inputSchema
} for tool in response.tools]
# Initial Claude API call
response = self.anthropic.messages.create(
model=self.config.anthropic.model,
max_tokens=self.config.anthropic.max_tokens_message,
messages=messages,
tools=available_tools
)
# Process response and handle tool calls
final_text = []
assistant_message_content = []
for content in response.content:
if content.type == 'text':
final_text.append(content.text)
assistant_message_content.append(content)
elif content.type == 'tool_use':
tool_name = content.name
tool_args = content.input
# Execute tool call
result = await self.session.call_tool(tool_name, tool_args)
final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
assistant_message_content.append(content)
messages.append({
"role": "assistant",
"content": assistant_message_content
})
messages.append({
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": content.id,
"content": result.content
}
]
})
# Get next response from Claude
response = self.anthropic.messages.create(
model=self.config.anthropic.model,
max_tokens=self.config.anthropic.max_tokens_message,
messages=messages,
tools=available_tools
)
if response.content and response.content[0].type == 'text':
final_text.append(response.content[0].text)
return "\n".join(final_text)
async def chat_loop(self):
"""Run an interactive chat loop"""
logger.info("\nMCP Client Started!")
logger.info("Type your queries or 'quit' to exit.")
while True:
try:
query = input("\nQuery: ").strip()
if query.lower() == 'quit':
break
response = await self.process_query(query)
logger.info("\n" + response)
except Exception as e:
logger.error(f"\nError: {str(e)}")
async def cleanup(self):
"""Clean up resources"""
await self.exit_stack.aclose()
async def main():
if len(sys.argv) < 2:
logger.error("Usage: python client.py <path_to_server_script>")
return
client = MCPClient()
try:
await client.connect_to_server(sys.argv[1])
logger.info(f"Connected to the server: {sys.argv[1]}.")
await client.chat_loop()
finally:
await client.cleanup()
logger.info(f"Disconnected from the server: {sys.argv[1]}.")
if __name__ == "__main__":
import sys
asyncio.run(main())
```
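The `connect_to_server` method above infers the interpreter from the script's file extension before spawning the stdio transport. A minimal sketch of that validation logic, factored into a standalone helper (the name `detect_server_command` is illustrative, not part of the client):

```python
def detect_server_command(server_script_path: str) -> str:
    """Return the interpreter command for a .py or .js MCP server script."""
    if server_script_path.endswith('.py'):
        return "python"
    if server_script_path.endswith('.js'):
        return "node"
    raise ValueError("Server script must be a .py or .js file")

print(detect_server_command("server.py"))  # python
print(detect_server_command("server.js"))  # node
```

Any other extension raises `ValueError`, matching the guard at the top of `connect_to_server`.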
--------------------------------------------------------------------------------
/src/server.py:
--------------------------------------------------------------------------------
```python
import logging
import sys
import argparse
from fastmcp import FastMCP
from src.clients import create_search_client
from src.tools.alias import AliasTools
from src.tools.cluster import ClusterTools
from src.tools.data_stream import DataStreamTools
from src.tools.document import DocumentTools
from src.tools.general import GeneralTools
from src.tools.index import IndexTools
from src.tools.register import ToolsRegister
from src.version import __version__ as VERSION
class SearchMCPServer:
def __init__(self, engine_type):
# Set engine type
self.engine_type = engine_type
self.name = f"{self.engine_type}-mcp-server"
self.mcp = FastMCP(self.name)
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)
self.logger = logging.getLogger(__name__)
self.logger.info(f"Initializing {self.name}, Version: {VERSION}")
# Create the corresponding search client
self.search_client = create_search_client(self.engine_type)
# Initialize tools
self._register_tools()
def _register_tools(self):
"""Register all MCP tools."""
# Create a tools register
register = ToolsRegister(self.logger, self.search_client, self.mcp)
# Define all tool classes to register
tool_classes = [
IndexTools,
DocumentTools,
ClusterTools,
AliasTools,
DataStreamTools,
GeneralTools,
]
# Register all tools
register.register_all_tools(tool_classes)
def run_search_server(engine_type, transport, host, port, path):
"""Run search server with specified engine type and transport options.
Args:
engine_type: Type of search engine to use ("elasticsearch" or "opensearch")
transport: Transport protocol to use ("stdio", "streamable-http", or "sse")
host: Host to bind to when using HTTP transports
port: Port to bind to when using HTTP transports
path: URL path prefix for HTTP transports
"""
server = SearchMCPServer(engine_type=engine_type)
if transport in ["streamable-http", "sse"]:
server.logger.info(f"Starting {server.name} with {transport} transport on {host}:{port}{path}")
server.mcp.run(transport=transport, host=host, port=port, path=path)
else:
server.logger.info(f"Starting {server.name} with {transport} transport")
server.mcp.run(transport=transport)
def parse_server_args():
"""Parse command line arguments for the MCP server.
Returns:
Parsed arguments
"""
parser = argparse.ArgumentParser(description="Search engine MCP server")
parser.add_argument(
"--transport", "-t",
default="stdio",
choices=["stdio", "streamable-http", "sse"],
help="Transport protocol to use (default: stdio)"
)
parser.add_argument(
"--host", "-H",
default="127.0.0.1",
help="Host to bind to when using HTTP transports (default: 127.0.0.1)"
)
parser.add_argument(
"--port", "-p",
type=int,
default=8000,
help="Port to bind to when using HTTP transports (default: 8000)"
)
parser.add_argument(
"--path", "-P",
help="URL path prefix for HTTP transports (default: /mcp for streamable-http, /sse for sse)"
)
args = parser.parse_args()
# Set default path based on transport type if not specified
if args.path is None:
if args.transport == "sse":
args.path = "/sse"
else:
args.path = "/mcp"
return args
def elasticsearch_mcp_server():
"""Entry point for Elasticsearch MCP server."""
args = parse_server_args()
# Run the server with the specified options
run_search_server(
engine_type="elasticsearch",
transport=args.transport,
host=args.host,
port=args.port,
path=args.path
)
def opensearch_mcp_server():
"""Entry point for OpenSearch MCP server."""
args = parse_server_args()
# Run the server with the specified options
run_search_server(
engine_type="opensearch",
transport=args.transport,
host=args.host,
port=args.port,
path=args.path
)
if __name__ == "__main__":
# Require elasticsearch-mcp-server or opensearch-mcp-server as the first argument
if len(sys.argv) <= 1 or sys.argv[1] not in ["elasticsearch-mcp-server", "opensearch-mcp-server"]:
print("Error: First argument must be 'elasticsearch-mcp-server' or 'opensearch-mcp-server'")
sys.exit(1)
# Determine engine type based on the first argument
engine_type = "elasticsearch" # Default
if sys.argv[1] == "opensearch-mcp-server":
engine_type = "opensearch"
# Remove the first argument so it doesn't interfere with argparse
sys.argv.pop(1)
# Parse command line arguments
args = parse_server_args()
# Run the server with the specified options
run_search_server(
engine_type=engine_type,
transport=args.transport,
host=args.host,
port=args.port,
path=args.path
)
```
--------------------------------------------------------------------------------
/src/clients/base.py:
--------------------------------------------------------------------------------
```python
from abc import ABC
import logging
import warnings
from typing import Dict, Optional
from elasticsearch import Elasticsearch
import httpx
from opensearchpy import OpenSearch
class SearchClientBase(ABC):
def __init__(self, config: Dict, engine_type: str):
"""
Initialize the search client.
Args:
config: Configuration dictionary with connection parameters
engine_type: Type of search engine to use ("elasticsearch" or "opensearch")
"""
self.logger = logging.getLogger(__name__)
self.config = config
self.engine_type = engine_type
# Extract common configuration
hosts = config.get("hosts")
username = config.get("username")
password = config.get("password")
api_key = config.get("api_key")
verify_certs = config.get("verify_certs", False)
# Disable insecure request warnings if verify_certs is False
if not verify_certs:
warnings.filterwarnings("ignore", message=".*verify_certs=False is insecure.*")
warnings.filterwarnings("ignore", message=".*Unverified HTTPS request is being made to host.*")
try:
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
except ImportError:
pass
# Initialize client based on engine type
if engine_type == "elasticsearch":
# Get auth parameters based on elasticsearch package version and authentication method
auth_params = self._get_elasticsearch_auth_params(username, password, api_key)
self.client = Elasticsearch(
hosts=hosts,
verify_certs=verify_certs,
**auth_params
)
self.logger.info(f"Elasticsearch client initialized with hosts: {hosts}")
elif engine_type == "opensearch":
self.client = OpenSearch(
hosts=hosts,
http_auth=(username, password) if username and password else None,
verify_certs=verify_certs
)
self.logger.info(f"OpenSearch client initialized with hosts: {hosts}")
else:
raise ValueError(f"Unsupported engine type: {engine_type}")
# General REST client
base_url = hosts[0] if isinstance(hosts, list) else hosts
self.general_client = GeneralRestClient(
base_url=base_url,
username=username,
password=password,
api_key=api_key,
verify_certs=verify_certs,
)
def _get_elasticsearch_auth_params(self, username: Optional[str], password: Optional[str], api_key: Optional[str]) -> Dict:
"""
Get authentication parameters for Elasticsearch client based on package version.
Args:
username: Username for authentication
password: Password for authentication
api_key: API key for authentication
Returns:
Dictionary with appropriate auth parameters for the ES version
"""
# API key takes precedence over username/password
if api_key:
return {"api_key": api_key}
if not username or not password:
return {}
# Check Elasticsearch package version to determine auth parameter name
try:
from elasticsearch import __version__ as es_version
# Convert version tuple to string format
version_str = '.'.join(map(str, es_version))
self.logger.info(f"Elasticsearch client version: {version_str}")
major_version = es_version[0]
if major_version >= 8:
# ES 8+ uses basic_auth
return {"basic_auth": (username, password)}
else:
# ES 7 and below use http_auth
return {"http_auth": (username, password)}
except Exception as e:
self.logger.error(f"Failed to detect Elasticsearch version: {e}")
# If we can't detect version, try basic_auth first (ES 8+ default)
return {"basic_auth": (username, password)}
class GeneralRestClient:
def __init__(self, base_url: Optional[str], username: Optional[str], password: Optional[str], api_key: Optional[str], verify_certs: bool):
self.base_url = base_url.rstrip("/") if base_url else ""
self.auth = (username, password) if username and password else None
self.api_key = api_key
self.verify_certs = verify_certs
def request(self, method, path, params=None, body=None):
url = f"{self.base_url}/{path.lstrip('/')}"
headers = {}
# Add API key to Authorization header if provided
if self.api_key:
headers["Authorization"] = f"ApiKey {self.api_key}"
with httpx.Client(verify=self.verify_certs) as client:
resp = client.request(
method=method.upper(),
url=url,
params=params,
json=body,
auth=self.auth if not self.api_key else None, # Use basic auth only if no API key
headers=headers
)
resp.raise_for_status()
ct = resp.headers.get("content-type", "")
if ct.startswith("application/json"):
return resp.json()
return resp.text
```
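`_get_elasticsearch_auth_params` applies a precedence order: an API key wins outright, then username/password is passed as `basic_auth` for elasticsearch-py 8+ or `http_auth` for 7 and below. A self-contained sketch of those rules as a pure function (the name `select_auth_params` and the explicit `es_major` parameter are illustrative; the real method reads the version from the installed package):

```python
from typing import Dict, Optional

def select_auth_params(username: Optional[str], password: Optional[str],
                       api_key: Optional[str], es_major: int) -> Dict:
    """Pick client auth kwargs: api_key first, then version-appropriate basic auth."""
    if api_key:
        return {"api_key": api_key}
    if not username or not password:
        return {}
    key = "basic_auth" if es_major >= 8 else "http_auth"
    return {key: (username, password)}

print(select_auth_params(None, None, "abc123", 8))   # {'api_key': 'abc123'}
print(select_auth_params("elastic", "pw", None, 7))  # {'http_auth': ('elastic', 'pw')}
```

Isolating the rule this way makes the precedence easy to unit-test without a live cluster or a pinned client version.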
--------------------------------------------------------------------------------
/docker-compose-elasticsearch.yml:
--------------------------------------------------------------------------------
```yaml
services:
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:8.17.2
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTICSEARCH_PASSWORD} == x ]; then
echo "Set the ELASTICSEARCH_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: es02\n"\
" dns:\n"\
" - es02\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: es03\n"\
" dns:\n"\
" - es03\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTICSEARCH_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"kibana123\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:8.17.2
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
ports:
- 9200:9200
environment:
- node.name=es01
- cluster.name=es-mcp-cluster
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es02,es03
- ELASTIC_PASSWORD=${ELASTICSEARCH_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=basic
- cluster.routing.allocation.disk.watermark.low=2gb
- cluster.routing.allocation.disk.watermark.high=1gb
- cluster.routing.allocation.disk.watermark.flood_stage=512mb
mem_limit: 1073741824
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es02:
depends_on:
- es01
image: docker.elastic.co/elasticsearch/elasticsearch:8.17.2
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata02:/usr/share/elasticsearch/data
environment:
- node.name=es02
- cluster.name=es-mcp-cluster
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es02/es02.key
- xpack.security.http.ssl.certificate=certs/es02/es02.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es02/es02.key
- xpack.security.transport.ssl.certificate=certs/es02/es02.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=basic
- cluster.routing.allocation.disk.watermark.low=2gb
- cluster.routing.allocation.disk.watermark.high=1gb
- cluster.routing.allocation.disk.watermark.flood_stage=512mb
mem_limit: 1073741824
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es03:
depends_on:
- es02
image: docker.elastic.co/elasticsearch/elasticsearch:8.17.2
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata03:/usr/share/elasticsearch/data
environment:
- node.name=es03
- cluster.name=es-mcp-cluster
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es02
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es03/es03.key
- xpack.security.http.ssl.certificate=certs/es03/es03.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es03/es03.key
- xpack.security.transport.ssl.certificate=certs/es03/es03.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=basic
- cluster.routing.allocation.disk.watermark.low=2gb
- cluster.routing.allocation.disk.watermark.high=1gb
- cluster.routing.allocation.disk.watermark.flood_stage=512mb
mem_limit: 1073741824
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
kibana:
depends_on:
es01:
condition: service_healthy
es02:
condition: service_healthy
es03:
condition: service_healthy
image: docker.elastic.co/kibana/kibana:8.17.2
volumes:
- certs:/usr/share/kibana/config/certs
- kibanadata:/usr/share/kibana/data
ports:
- 5601:5601
environment:
- SERVERNAME=kibana
- ELASTICSEARCH_HOSTS=https://es01:9200
- ELASTICSEARCH_USERNAME=kibana_system
- ELASTICSEARCH_PASSWORD=kibana123
- ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
mem_limit: 1073741824
healthcheck:
test:
[
"CMD-SHELL",
"curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
]
interval: 10s
timeout: 10s
retries: 120
volumes:
certs:
driver: local
esdata01:
driver: local
esdata02:
driver: local
esdata03:
driver: local
kibanadata:
driver: local
```