# Directory Structure

```
├── .github
│   └── workflows
│       └── release.yml
├── .gitignore
├── Dockerfile
├── go.mod
├── go.sum
├── LICENSE
├── main.go
├── npm
│   ├── bin
│   │   └── index.js
│   └── package.json
├── README.md
├── smithery.yaml
├── tools
│   ├── common_test.go
│   ├── common.go
│   ├── execute_sql.go
│   ├── get_table.go
│   ├── list_catalogs.go
│   ├── list_schemas.go
│   ├── list_tables.go
│   └── list_warehouses.go
└── version.go
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
databricks-mcp-server

# Test binary, built with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out

# Dependency directories (remove the comment below to include it)
# vendor/

# Go workspace file
go.work

# IDE specific files
.idea/
.vscode/
*.swp
*.swo
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Databricks MCP Server

A Model Context Protocol (MCP) server for interacting with Databricks.

## Installation

You can download the latest release for your platform from the [Releases](https://github.com/characat0/databricks-mcp-server/releases) page.

### VS Code

Install the Databricks MCP Server in VS Code by clicking the following badge:

[<img src="https://img.shields.io/badge/VS_Code-VS_Code?style=flat-square&label=Install%20Server&color=0098FF" alt="Install in VS Code">](https://vscode.dev/redirect?url=vscode%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522databricks%2522%252C%2522command%2522%253A%2522npx%2522%252C%2522args%2522%253A%255B%2522-y%2522%252C%2522databricks-mcp-server%2540latest%2522%255D%257D)

Alternatively, you can register the server manually by running one of the following commands:

```shell
# For VS Code
code --add-mcp '{"name":"databricks","command":"npx","args":["databricks-mcp-server@latest"]}'
# For VS Code Insiders
code-insiders --add-mcp '{"name":"databricks","command":"npx","args":["databricks-mcp-server@latest"]}'
```

## Tools

The Databricks MCP Server provides a Model Context Protocol (MCP) interface to Databricks workspaces. It offers the following tools:

### List Catalogs

Lists all catalogs available in the Databricks workspace.

**Tool name:** `list_catalogs`

**Parameters:** None

**Returns:** JSON array of catalog objects

### List Schemas

Lists all schemas in a specified Databricks catalog.

**Tool name:** `list_schemas`

**Parameters:**
- `catalog` (string, required): Name of the catalog to list schemas from

**Returns:** JSON array of schema objects

### List Tables

Lists all tables in a specified Databricks schema with optional filtering.

**Tool name:** `list_tables`

**Parameters:**
- `catalog` (string, required): Name of the catalog containing the schema
- `schema` (string, required): Name of the schema to list tables from
- `table_name_pattern` (string, optional, default: ".*"): Regular expression pattern to filter table names
- `omit_properties` (boolean, optional, default: true): Whether to omit table properties in the response
- `omit_columns` (boolean, optional, default: false): Whether to omit column details in the response
- `max_results` (number, optional, default: 10): Maximum number of tables to return (0 for all; not recommended)

**Returns:** JSON object containing the matching tables, a `total_count`, and a `truncated` flag

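### Get Table

Gets detailed information about a single table using its full name.

**Tool name:** `get_table`

**Parameters:**
- `full_name` (string, required): Full name of the table in the format `catalog.schema.table`

**Returns:** JSON object describing the table
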
### Execute SQL

Executes SQL statements on a Databricks SQL warehouse and returns the results.

**Tool name:** `execute_sql`

**Parameters:**
- `statement` (string, required): SQL statement to execute
- `execution_timeout_seconds` (number, optional, default: 60): Maximum time in seconds to wait for statement execution
- `max_rows` (number, optional, default: 100): Maximum number of rows to return in the result
- `warehouse_id` (string, optional): ID of the SQL warehouse to use; if not specified, the first available warehouse is used

**Returns:** JSON object containing the columns and rows of the query result

### List SQL Warehouses

Lists all SQL warehouses available in the Databricks workspace.

**Tool name:** `list_warehouses`

**Parameters:** None

**Returns:** JSON array of SQL warehouse objects

## Supported Platforms

- Linux (amd64)
- Windows (amd64)
- macOS (Intel/amd64)
- macOS (Apple Silicon/arm64)

## Usage

### Authentication

The application uses Databricks unified authentication. For details on how to configure authentication, please refer to the [Databricks Authentication documentation](https://docs.databricks.com/en/dev-tools/auth.html).
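
For example, the simplest configuration uses environment variables with a personal access token (the host and token values below are placeholders):

```shell
export DATABRICKS_HOST="https://<your-workspace>.cloud.databricks.com"
export DATABRICKS_TOKEN="<personal-access-token>"
```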

### Running the Server

Start the MCP server:

```bash
./databricks-mcp-server
```

The server will start and listen for MCP protocol messages on standard input/output.

## Development

### Prerequisites

- Go 1.24 or later
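
### Building

A typical local build; release binaries additionally stamp version metadata via `-ldflags`, as in the release workflow:

```shell
go build -o databricks-mcp-server .
```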

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
startCommand:
  type: stdio
  build:
    dockerfile: ./Dockerfile
    dockerBuildPath: .
  configSchema:
    {}
  commandFunction: |
    (config) => ({
      "command": "./databricks-mcp-server",
      "env": {}
    })

```

--------------------------------------------------------------------------------
/version.go:
--------------------------------------------------------------------------------

```go
package main

// Version information
var (
	// Version is the current version of the application
	Version = "0.0.10"

	// BuildDate is the date when the binary was built
	BuildDate = "unknown"

	// GitCommit is the git commit hash of the build
	GitCommit = "unknown"
)

```
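
These defaults are overridden at build time with `-ldflags`; a sketch mirroring the release workflow's build step:

```shell
go build -trimpath \
  -ldflags="-s -w -X 'main.Version=0.0.10' -X 'main.BuildDate=$(date -u +%Y-%m-%dT%H:%M:%SZ)' -X 'main.GitCommit=$(git rev-parse --short HEAD)'" \
  -o databricks-mcp-server
```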

--------------------------------------------------------------------------------
/tools/list_catalogs.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// ListCatalogs retrieves all catalogs from the Databricks workspace.
// The result is marshalled to JSON by the ExecuteTool wrapper.
func ListCatalogs(ctx context.Context, _ mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}
	return w.Catalogs.ListAll(ctx, catalog.ListCatalogsRequest{})
}

```

--------------------------------------------------------------------------------
/tools/list_warehouses.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/sql"
	"github.com/mark3labs/mcp-go/mcp"
)

// ListWarehouses retrieves all SQL warehouses from the Databricks workspace.
// The result is marshalled to JSON by the ExecuteTool wrapper.
func ListWarehouses(ctx context.Context, _ mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}
	return w.Warehouses.ListAll(ctx, sql.ListWarehousesRequest{})
}

```

--------------------------------------------------------------------------------
/tools/list_schemas.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// ListSchemas retrieves all schemas in the specified catalog.
// The result is marshalled to JSON by the ExecuteTool wrapper.
func ListSchemas(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}
	catalogName := request.GetString("catalog", "")
	return w.Schemas.ListAll(ctx, catalog.ListSchemasRequest{
		CatalogName: catalogName,
	})
}

```

--------------------------------------------------------------------------------
/tools/get_table.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// GetTable retrieves information about a single table using its full name
// (catalog.schema.table). The result is marshalled to JSON by the ExecuteTool wrapper.
func GetTable(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}

	fullName := request.GetString("full_name", "")

	// Note: The Get method doesn't support omitProperties and omitColumns parameters
	return w.Tables.Get(ctx, catalog.GetTableRequest{
		FullName: fullName,
	})
}

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Multi-stage build for Databricks MCP Server

# Build stage
FROM golang:1.24-alpine AS builder

# Set working directory
WORKDIR /app

# Copy go.mod and go.sum files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy the source code
COPY . .

# Build the application with version information
RUN CGO_ENABLED=0 go build -ldflags="-X 'main.BuildDate=$(date -u +%Y-%m-%d)' -X 'main.GitCommit=$(git rev-parse --short HEAD || echo unknown)'" -o databricks-mcp-server

# Runtime stage
FROM alpine:latest

# Install CA certificates for HTTPS connections
RUN apk --no-cache add ca-certificates

# Set working directory
WORKDIR /app

# Copy the binary from the builder stage
COPY --from=builder /app/databricks-mcp-server /app/

# Set the entrypoint
ENTRYPOINT ["/app/databricks-mcp-server"]

# Document that the server listens on stdin/stdout
LABEL description="Databricks MCP Server - A Model Context Protocol (MCP) server for interacting with Databricks"
LABEL version="0.0.10"

```
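
A sketch of building and running the image locally; `-i` keeps stdin open for the stdio transport, and the Databricks auth variables are assumed to already be set in the calling shell:

```shell
docker build -t databricks-mcp-server .
docker run --rm -i \
  -e DATABRICKS_HOST -e DATABRICKS_TOKEN \
  databricks-mcp-server
```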

--------------------------------------------------------------------------------
/npm/bin/index.js:
--------------------------------------------------------------------------------

```javascript
#!/usr/bin/env node

const childProcess = require('child_process');

// Base name shared by the platform-specific binary packages.
// ("package" is a reserved word in strict mode, so avoid it as an identifier.)
const packageName = "databricks-mcp-server";

const BINARY_MAP = {
    darwin_x64: {name: `${packageName}-darwin-amd64`, suffix: ''},
    darwin_arm64: {name: `${packageName}-darwin-arm64`, suffix: ''},
    linux_x64: {name: `${packageName}-linux-amd64`, suffix: ''},
    linux_arm64: {name: `${packageName}-linux-arm64`, suffix: ''},
    win32_x64: {name: `${packageName}-windows-amd64`, suffix: '.exe'},
    win32_arm64: {name: `${packageName}-windows-amd64`, suffix: '.exe'},
};

// Resolving will fail if the optionalDependency was not installed or the platform/arch is not supported
const resolveBinaryPath = () => {
    try {
        const binary = BINARY_MAP[`${process.platform}_${process.arch}`];
        return require.resolve(`${binary.name}/bin/databricks-mcp-server${binary.suffix}`);
    } catch (e) {
        throw new Error(`Could not resolve binary path for platform/arch: ${process.platform}/${process.arch}`);
    }
};

childProcess.execFileSync(resolveBinaryPath(), process.argv.slice(2), {
    stdio: 'inherit',
});

```
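
Once published, the launcher can be invoked directly; it resolves the platform-specific binary above and execs it with the same arguments:

```shell
npx -y databricks-mcp-server@latest
```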

--------------------------------------------------------------------------------
/npm/package.json:
--------------------------------------------------------------------------------

```json
{
    "name": "databricks-mcp-server",
    "version": "0.0.0",
    "description": "Model Context Protocol (MCP) server for interacting with Databricks",
    "main": "bin/index.js",
    "bin": {
        "databricks-mcp-server": "bin/index.js"
    },
    "optionalDependencies": {
        "databricks-mcp-server-darwin-amd64": "0.0.0",
        "databricks-mcp-server-darwin-arm64": "0.0.0",
        "databricks-mcp-server-linux-amd64": "0.0.0",
        "databricks-mcp-server-linux-arm64": "0.0.0",
        "databricks-mcp-server-windows-amd64": "0.0.0",
        "databricks-mcp-server-windows-arm64": "0.0.0"
    },
    "repository": {
        "type": "git",
        "url": "git+https://github.com/characat0/databricks-mcp-server.git"
    },
    "keywords": [
        "mcp",
        "databricks",
        "model context protocol"
    ],
    "author": {
        "name": "Marco Vela",
        "url": "https://www.marcovela.com"
    },
    "license": "MIT",
    "bugs": {
        "url": "https://github.com/characat0/databricks-mcp-server/issues"
    },
    "homepage": "https://github.com/characat0/databricks-mcp-server#readme"
}

```

--------------------------------------------------------------------------------
/tools/list_tables.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"regexp"

	"github.com/databricks/databricks-sdk-go/listing"
	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// filterTables filters a list of tables based on a regex pattern applied to table names.
// Returns the filtered list of tables and any error that occurred during pattern compilation.
func filterTables(tables []catalog.TableInfo, pattern string) ([]catalog.TableInfo, error) {
	regex, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	var filteredTables []catalog.TableInfo
	for _, table := range tables {
		if regex.MatchString(table.Name) {
			filteredTables = append(filteredTables, table)
		}
	}
	return filteredTables, nil
}

// ListTables retrieves all tables in the specified catalog and schema,
// optionally filtering them by a regex pattern. Table properties and column
// details can be omitted from the response, and max_results limits the number
// of tables returned (0 for all). The result is marshalled to JSON by ExecuteTool.
func ListTables(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}

	catalogName := request.GetString("catalog", "")
	schemaName := request.GetString("schema", "")
	tableNamePattern := request.GetString("table_name_pattern", ".*")
	omitProperties := request.GetBool("omit_properties", true)
	omitColumns := request.GetBool("omit_columns", false)
	maxResults := request.GetInt("max_results", 10)

	// Retrieve all tables from the specified catalog and schema
	tablesIt := w.Tables.List(ctx, catalog.ListTablesRequest{
		CatalogName:    catalogName,
		SchemaName:     schemaName,
		OmitProperties: omitProperties,
		OmitColumns:    omitColumns,
		MaxResults:     maxResults + 1,
	})
	tables, err := listing.ToSliceN[catalog.TableInfo](ctx, tablesIt, maxResults)
	if err != nil {
		return nil, err
	}

	truncated := false
	if len(tables) > maxResults {
		tables = tables[:maxResults]
		truncated = true
	}

	// Apply filter if pattern is not ".*" (match everything)
	if tableNamePattern != "" && tableNamePattern != ".*" {
		tables, err = filterTables(tables, tableNamePattern)
		if err != nil {
			return nil, err
		}
	}

	// Return a structured response
	return map[string]interface{}{
		"tables":      tables,
		"total_count": len(tables),
		"truncated":   truncated,
	}, nil
}

```
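
A minimal sketch of how `filterTables` behaves (the table names are made up for illustration):

```go
package tools

import (
	"testing"

	"github.com/databricks/databricks-sdk-go/service/catalog"
)

// TestFilterTablesSketch checks that only names matching the pattern survive.
func TestFilterTablesSketch(t *testing.T) {
	tables := []catalog.TableInfo{
		{Name: "sales_2023"},
		{Name: "sales_2024"},
		{Name: "customers"},
	}
	filtered, err := filterTables(tables, `^sales_`)
	if err != nil {
		t.Fatal(err)
	}
	if len(filtered) != 2 {
		t.Fatalf("want 2 tables, got %d", len(filtered))
	}
}
```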

--------------------------------------------------------------------------------
/tools/common.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

// contextKey is a type for context keys to avoid collisions
type contextKey string

// workspaceClientKey is the key used to store the workspace client in the context
const workspaceClientKey contextKey = "workspaceClient"

// WithWorkspaceClient returns a new context with the workspace client added
func WithWorkspaceClient(ctx context.Context, w *databricks.WorkspaceClient) context.Context {
	return context.WithValue(ctx, workspaceClientKey, w)
}

// WorkspaceClientFromContext retrieves the workspace client from the context
func WorkspaceClientFromContext(ctx context.Context) (*databricks.WorkspaceClient, error) {
	w, ok := ctx.Value(workspaceClientKey).(*databricks.WorkspaceClient)
	if !ok || w == nil {
		return nil, fmt.Errorf("workspace client not found in context")
	}
	return w, nil
}

// DatabricksTool represents a generic tool on Databricks
type DatabricksTool func(ctx context.Context, request mcp.CallToolRequest) (interface{}, error)

// ExecuteTool is a helper function that executes a Databricks tool and handles common error patterns
func ExecuteTool(tool DatabricksTool) server.ToolHandlerFunc {
	return func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		result, err := tool(ctx, request)
		if err != nil {
			return mcp.NewToolResultErrorFromErr("Error executing tool", err), nil
		}

		// Marshal the result to JSON
		jsonResult, err := json.Marshal(result)
		if err != nil {
			return mcp.NewToolResultErrorFromErr("Error marshalling result into JSON", err), nil
		}

		return mcp.NewToolResultText(string(jsonResult)), nil
	}
}

// WithWorkspaceClientHandler wraps a tool handler function with the workspace client
func WithWorkspaceClientHandler(w *databricks.WorkspaceClient, handler server.ToolHandlerFunc) server.ToolHandlerFunc {
	return func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		// Add the workspace client to the context
		ctx = WithWorkspaceClient(ctx, w)
		return handler(ctx, request)
	}
}

// SendProgressNotification sends a progress notification to the client
func SendProgressNotification(ctx context.Context, message string, progress, total int) error {
	mcpServer := server.ServerFromContext(ctx)
	if mcpServer == nil {
		return fmt.Errorf("server not found in context")
	}

	var token interface{} = 0
	return mcpServer.SendNotificationToClient(ctx, "notifications/progress", map[string]interface{}{
		"message":       message,
		"progressToken": token,
		"progress":      progress,
		"total":         total,
	})
}

```
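
A sketch of the context round trip these helpers provide, assuming Databricks unified auth is configured in the environment:

```go
package main

import (
	"context"
	"log"

	"databricks-mcp-server/tools"
	"github.com/databricks/databricks-sdk-go"
)

func main() {
	// Uses unified auth from the environment (e.g. DATABRICKS_HOST/DATABRICKS_TOKEN).
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// Attach the client once...
	ctx := tools.WithWorkspaceClient(context.Background(), w)

	// ...and every tool implementation can recover it later.
	if _, err := tools.WorkspaceClientFromContext(ctx); err != nil {
		log.Fatal(err)
	}
}
```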

--------------------------------------------------------------------------------
/tools/common_test.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"encoding/json"
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestRequest is a simplified version of mcp.CallToolRequest for testing
type TestRequest struct {
	Arguments map[string]interface{}
}

// TestResult is a simplified version of mcp.CallToolResult for testing
type TestResult struct {
	Type  string
	Text  string
	Error string
}

// TestOperation is a simplified version of DatabricksTool for testing
type TestOperation func(ctx context.Context, request TestRequest) (interface{}, error)

// NewTestResultText creates a new test result with text content
func NewTestResultText(text string) *TestResult {
	return &TestResult{
		Type: "text",
		Text: text,
	}
}

// NewTestResultErrorFromErr creates a new test result with error content from an error
func NewTestResultErrorFromErr(message string, err error) *TestResult {
	return &TestResult{
		Type:  "error",
		Error: message + ": " + err.Error(),
	}
}

// ExecuteTestOperation is a simplified version of ExecuteTool for testing
func ExecuteTestOperation(operation TestOperation) func(ctx context.Context, request TestRequest) (*TestResult, error) {
	return func(ctx context.Context, request TestRequest) (*TestResult, error) {
		result, err := operation(ctx, request)
		if err != nil {
			return NewTestResultErrorFromErr("Error executing operation", err), nil
		}

		// Marshal the result to JSON
		jsonResult, err := json.Marshal(result)
		if err != nil {
			return NewTestResultErrorFromErr("Error marshalling result into JSON", err), nil
		}

		return NewTestResultText(string(jsonResult)), nil
	}
}

// TestExecuteTestOperation tests the ExecuteTestOperation function
func TestExecuteTestOperation(t *testing.T) {
	// Create a mock operation that returns a successful result
	successOp := func(ctx context.Context, request TestRequest) (interface{}, error) {
		return map[string]string{"result": "success"}, nil
	}

	// Create a mock operation that returns an error
	errorOp := func(ctx context.Context, request TestRequest) (interface{}, error) {
		return nil, errors.New("operation failed")
	}

	// Test successful operation
	t.Run("SuccessfulOperation", func(t *testing.T) {
		handler := ExecuteTestOperation(successOp)
		result, err := handler(context.Background(), TestRequest{})
		assert.NoError(t, err)
		assert.NotNil(t, result)
		assert.Equal(t, "text", result.Type)
		assert.NotEmpty(t, result.Text)
	})

	// Test failed operation
	t.Run("FailedOperation", func(t *testing.T) {
		handler := ExecuteTestOperation(errorOp)
		result, err := handler(context.Background(), TestRequest{})
		assert.NoError(t, err)
		assert.NotNil(t, result)
		assert.Equal(t, "error", result.Type)
		assert.Contains(t, result.Error, "operation failed")
	})
}

```

--------------------------------------------------------------------------------
/tools/execute_sql.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"fmt"
	"time"

	"github.com/databricks/databricks-sdk-go/service/sql"
	"github.com/mark3labs/mcp-go/mcp"
)

// ExecuteSQL executes a SQL statement on a Databricks warehouse and returns the results.
// It handles statement execution, polling for completion, and fetching result chunks.
func ExecuteSQL(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}

	sqlStatement := request.GetString("statement", "")
	timeoutSeconds := request.GetFloat("execution_timeout_seconds", 60)
	maxRows := request.GetInt("max_rows", 100)
	warehouseId := request.GetString("warehouse_id", "")

	// Poll for statement completion every 10 seconds, up to the requested timeout
	pollingInterval := 10 * time.Second
	maxAttempts := int(timeoutSeconds / 10)

	// Determine which warehouse to use
	if warehouseId == "" {
		// Get available warehouses and use the first one
		warehouses, err := w.Warehouses.ListAll(ctx, sql.ListWarehousesRequest{})
		if err != nil {
			return nil, fmt.Errorf("error listing warehouses: %w", err)
		}
		if len(warehouses) == 0 {
			return nil, fmt.Errorf("no warehouses available")
		}
		warehouseId = warehouses[0].Id
	}

	// Execute the SQL statement with the specified row limit
	res, err := w.StatementExecution.ExecuteStatement(ctx, sql.ExecuteStatementRequest{
		RowLimit:    int64(maxRows),
		Statement:   sqlStatement,
		WaitTimeout: "5s",
		WarehouseId: warehouseId,
	})
	if err != nil {
		return nil, fmt.Errorf("error executing SQL statement: %w", err)
	}

	attempts := 0

	for attempts < maxAttempts && res.Status.State != sql.StatementStateSucceeded && res.Status.Error == nil {
		// Send progress notification
		err = SendProgressNotification(ctx,
			fmt.Sprintf("Statement execution in progress (%d seconds), current status: %s", attempts*10, res.Status.State),
			attempts, maxAttempts)
		if err != nil {
			return nil, err
		}

		// Wait before checking again
		time.Sleep(pollingInterval)

		// Check statement status
		res, err = w.StatementExecution.GetStatementByStatementId(ctx, res.StatementId)
		if err != nil {
			return nil, fmt.Errorf("error getting statement status: %w", err)
		}
		attempts++
	}

	// Handle statement errors
	if res.Status.Error != nil {
		return nil, fmt.Errorf("error executing the statement, current status %s: %s",
			res.Status.State, res.Status.Error.Message)
	}

	if res.Status.State != sql.StatementStateSucceeded {
		_ = w.StatementExecution.CancelExecution(ctx, sql.CancelExecutionRequest{
			StatementId: res.StatementId,
		})
		return nil, fmt.Errorf("statement execution timed out after %v seconds, current status %s.\nHint: Try with a higher timeout or simplying the query.", timeoutSeconds, res.Status.State)
	}

	// Collect all result chunks, guarding against statements that return no result data
	var resultDataArray [][]string
	resultData := res.Result
	if resultData == nil || res.Manifest == nil {
		return nil, fmt.Errorf("statement succeeded but no result data was returned")
	}
	resultDataArray = append(resultDataArray, resultData.DataArray...)

	// Fetch additional chunks if available
	for resultData.NextChunkIndex != 0 {
		resultData, err = w.StatementExecution.GetStatementResultChunkN(ctx, sql.GetStatementResultChunkNRequest{
			ChunkIndex:  resultData.NextChunkIndex,
			StatementId: res.StatementId,
		})
		if err != nil {
			return nil, fmt.Errorf("error getting statement result chunk: %w", err)
		}
		resultDataArray = append(resultDataArray, resultData.DataArray...)
	}

	// Return structured results
	return map[string]interface{}{
		"columns": res.Manifest.Schema.Columns,
		"rows":    resultDataArray,
	}, nil
}

```
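
An example `execute_sql` tool-call payload (parameter names as registered in `main.go`; the statement itself is illustrative):

```json
{
  "statement": "SELECT 1 AS one",
  "execution_timeout_seconds": 60,
  "max_rows": 100
}
```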

--------------------------------------------------------------------------------
/main.go:
--------------------------------------------------------------------------------

```go
package main

import (
	"fmt"
	"log"
	"os"

	"databricks-mcp-server/tools"
	"github.com/databricks/databricks-sdk-go"
	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

// w is the Databricks workspace client used for all API operations
var w *databricks.WorkspaceClient

func init() {
	var err error
	w, err = databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatalf("Failed to initialize Databricks client: %v", err)
	}
}

func main() {
	// Create an MCP server
	s := server.NewMCPServer(
		"Databricks MCP Server",
		Version,
		server.WithLogging(),
	)

	// Add tool handlers for Databricks operations
	s.AddTool(mcp.NewTool("list_catalogs",
		mcp.WithDescription("Lists all catalogs available in the Databricks workspace"),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListCatalogs)))

	s.AddTool(mcp.NewTool("list_schemas",
		mcp.WithDescription("Lists all schemas in a specified Databricks catalog"),
		mcp.WithString("catalog", mcp.Description("Name of the catalog to list schemas from"), mcp.Required()),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListSchemas)))

	s.AddTool(mcp.NewTool("list_tables",
		mcp.WithDescription("Lists all tables in a specified Databricks schema with optional filtering"),
		mcp.WithString("catalog", mcp.Description("Name of the catalog containing the schema"), mcp.Required()),
		mcp.WithString("schema", mcp.Description("Name of the schema to list tables from"), mcp.Required()),
		mcp.WithString("table_name_pattern", mcp.Description("Regular expression pattern to filter table names"), mcp.DefaultString(".*")),
		mcp.WithBoolean("omit_properties", mcp.Description("Whether to omit table properties in the response, helps to reduce response size"), mcp.DefaultBool(true)),
		mcp.WithBoolean("omit_columns", mcp.Description("Whether to omit column details in the response"), mcp.DefaultBool(false)),
		mcp.WithNumber("max_results", mcp.Description("Maximum number of tables to return (0 for all, non-recommended)"), mcp.DefaultNumber(10)),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListTables)))

	s.AddTool(mcp.NewTool("get_table",
		mcp.WithDescription("Gets detailed information about a single Databricks table"),
		mcp.WithString("full_name", mcp.Description("Full name of the table in format 'catalog.schema.table'"), mcp.Required()),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.GetTable)))

	s.AddTool(mcp.NewTool("execute_sql",
		mcp.WithDescription(`
<use_case>
  Use this tool to execute SQL statements against a Databricks warehouse and retrieve results in JSON format.
</use_case>

<important_notes>
  The flavor of SQL supported is based on the Databricks SQL engine, which is similar to Apache Spark SQL.
  If asked explicitly to use a specific warehouse, you can use the "list_warehouses" tool to get available warehouses.
  Ensure that the SQL is optimized for performance, especially for large datasets; avoid running statements that do not use partitioning or indexing effectively.
</important_notes>
`),
		mcp.WithString("statement", mcp.Description("SQL statement to execute"), mcp.Required()),
		mcp.WithNumber("execution_timeout_seconds", mcp.Description("Maximum time in seconds to wait for query execution"), mcp.DefaultNumber(60)),
		mcp.WithNumber("max_rows", mcp.Description("Maximum number of rows to return in the result"), mcp.DefaultNumber(100)),
		mcp.WithString("warehouse_id", mcp.Description("ID of the warehouse to use for execution. If not specified, the first available warehouse will be used")),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ExecuteSQL)))

	s.AddTool(mcp.NewTool("list_warehouses",
		mcp.WithDescription(`
<use_case>
  Use this tool when asked explicitly to use a specific warehouse for SQL execution.
</use_case>
`),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListWarehouses)))

	// Start the stdio server; log to stderr so that stdout stays reserved
	// for MCP protocol messages.
	logger := log.New(os.Stderr, "INFO: ", log.LstdFlags)
	if err := server.ServeStdio(s, server.WithErrorLogger(logger)); err != nil {
		fmt.Fprintf(os.Stderr, "Server error: %v\n", err)
	}
}

```
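
Additional tools follow the same registration pattern; a sketch with a hypothetical `count_rows` tool (the tool name, parameter, and handler body are illustrative, not part of this server):

```go
package main

import (
	"context"

	"databricks-mcp-server/tools"
	"github.com/databricks/databricks-sdk-go"
	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

// registerCountRows demonstrates the registration pattern with a hypothetical tool.
func registerCountRows(s *server.MCPServer, w *databricks.WorkspaceClient) {
	handler := func(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
		// A real implementation would use the workspace client from ctx.
		table := request.GetString("full_name", "")
		return map[string]string{"table": table}, nil
	}
	s.AddTool(mcp.NewTool("count_rows",
		mcp.WithDescription("Hypothetical example tool"),
		mcp.WithString("full_name", mcp.Description("Full table name in 'catalog.schema.table' format"), mcp.Required()),
	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(handler)))
}
```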

--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------

```yaml
name: Build and Release

on:
  push:
    tags:
      - 'v[0-9]+.[0-9]+.[0-9]+'
    # This ensures the workflow only runs on tags matching the pattern vX.Y.Z

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        go-version: ['1.24.x']
        platform: [linux-amd64, windows-amd64, darwin-amd64, darwin-arm64]
        include:
          - platform: linux-amd64
            os: ubuntu-latest
            GOOS: linux
            GOARCH: amd64
            binary_name: databricks-mcp-server
            asset_name: databricks-mcp-server-linux-amd64
          - platform: windows-amd64
            os: ubuntu-latest
            GOOS: windows
            GOARCH: amd64
            binary_name: databricks-mcp-server.exe
            asset_name: databricks-mcp-server-windows-amd64.exe
          - platform: darwin-amd64
            os: ubuntu-latest
            GOOS: darwin
            GOARCH: amd64
            binary_name: databricks-mcp-server
            asset_name: databricks-mcp-server-darwin-amd64
          - platform: darwin-arm64
            os: ubuntu-latest
            GOOS: darwin
            GOARCH: arm64
            binary_name: databricks-mcp-server
            asset_name: databricks-mcp-server-darwin-arm64

    steps:
    - name: Checkout code
      uses: actions/checkout@v4
      with:
        fetch-depth: 0

    - name: Set up Go
      uses: actions/setup-go@v4
      with:
        go-version: ${{ matrix.go-version }}

    - name: Get version from tag
      id: get_version
      run: |
        if [[ $GITHUB_REF == refs/tags/v* ]]; then
          VERSION=${GITHUB_REF#refs/tags/v}
        else
          # For non-tag builds, try to get the latest tag or use version from version.go if no tags exist
          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
          if [ -z "$LATEST_TAG" ]; then
            # Extract version from version.go if no tags exist
            VERSION=$(grep -oP 'Version = "\K[^"]+' version.go || echo "0.0.0")
            VERSION="$VERSION-dev-$(git rev-parse --short HEAD)"
          else
            VERSION="${LATEST_TAG#v}-dev-$(git rev-parse --short HEAD)"
          fi
        fi
        echo "VERSION=$VERSION" >> $GITHUB_ENV
        echo "version=$VERSION" >> $GITHUB_OUTPUT

    - name: Build
      env:
        GOOS: ${{ matrix.GOOS }}
        GOARCH: ${{ matrix.GOARCH }}
      run: |
        go build -trimpath -ldflags="-s -w -X 'main.Version=${{ env.VERSION }}' -X 'main.BuildDate=$(date -u +%Y-%m-%dT%H:%M:%SZ)' -X 'main.GitCommit=$(git rev-parse --short HEAD)'" -o ${{ matrix.binary_name }}

    - name: Install UPX
      if: matrix.GOOS != 'darwin'
      run: |
        sudo apt-get update
        sudo apt-get install -y upx-ucl

    - name: Compress binary with UPX
      if: matrix.GOOS != 'darwin'
      run: |
        upx --best --lzma ${{ matrix.binary_name }}

    - name: Upload artifact
      uses: actions/upload-artifact@v4
      with:
        name: ${{ matrix.asset_name }}
        path: ${{ matrix.binary_name }}

  release:
    permissions: write-all
    name: Create Release
    needs: build
    runs-on: ubuntu-latest
    outputs:
      upload_url: ${{ steps.release_outputs.outputs.upload_url }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Generate Changelog
        id: changelog
        run: |
          # Get the current tag
          CURRENT_TAG=${GITHUB_REF#refs/tags/}
          echo "Current tag: $CURRENT_TAG"

          # Check if this is the first tag
          TAG_COUNT=$(git tag | wc -l)

          if [ "$TAG_COUNT" -le 1 ]; then
            # This is the first tag or there's only one tag (the current one)
            echo "This is the first release. Including all commits."
            # Get all commits up to the current tag
            CHANGELOG=$(git log --pretty=format:"* %s (%h)" $CURRENT_TAG)

            # If the changelog is empty (can happen with the first tag), get all commits
            if [ -z "$CHANGELOG" ]; then
              echo "Getting all commits for the first release."
              CHANGELOG=$(git log --pretty=format:"* %s (%h)")
            fi
          else
            # Try to get the previous tag
            PREVIOUS_TAG=$(git describe --tags --abbrev=0 $CURRENT_TAG^ 2>/dev/null || echo "")

            if [ -z "$PREVIOUS_TAG" ]; then
              # If we can't get the previous tag, get all commits up to the current tag
              echo "No previous tag found. Using all commits up to $CURRENT_TAG."
              CHANGELOG=$(git log --pretty=format:"* %s (%h)" $CURRENT_TAG)
            else
              echo "Previous tag: $PREVIOUS_TAG"
              # Get commits between the previous tag and the current tag
              CHANGELOG=$(git log --pretty=format:"* %s (%h)" $PREVIOUS_TAG..$CURRENT_TAG)
            fi
          fi

          # Save changelog to output
          echo "CHANGELOG<<EOF" >> $GITHUB_ENV
          echo "$CHANGELOG" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV

      - name: Download all artifacts
        uses: actions/download-artifact@v4
        with:
          path: artifacts

      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref_name }}
          release_name: Release ${{ github.ref_name }}
          body: |
            # Release ${{ github.ref_name }} of databricks-mcp-server

            ## Changelog
            ${{ env.CHANGELOG }}
          draft: false
          prerelease: false

      # The release ID is needed for the upload-assets job
      - name: Set release outputs
        id: release_outputs
        run: |
          echo "release_id=${{ steps.create_release.outputs.id }}" >> $GITHUB_OUTPUT
          echo "upload_url=${{ steps.create_release.outputs.upload_url }}" >> $GITHUB_OUTPUT

  upload-assets:
    name: Upload Release Assets
    needs: release
    runs-on: ubuntu-latest
    strategy:
      matrix:
        asset:
          - name: databricks-mcp-server-linux-amd64
            path: artifacts/databricks-mcp-server-linux-amd64/databricks-mcp-server
            content_type: application/octet-stream
          - name: databricks-mcp-server-windows-amd64.exe
            path: artifacts/databricks-mcp-server-windows-amd64.exe/databricks-mcp-server.exe
            content_type: application/octet-stream
          - name: databricks-mcp-server-darwin-amd64
            path: artifacts/databricks-mcp-server-darwin-amd64/databricks-mcp-server
            content_type: application/octet-stream
          - name: databricks-mcp-server-darwin-arm64
            path: artifacts/databricks-mcp-server-darwin-arm64/databricks-mcp-server
            content_type: application/octet-stream
    steps:
      - name: Download all artifacts
        uses: actions/download-artifact@v4
        with:
          path: artifacts

      - name: Upload Release Asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ needs.release.outputs.upload_url }}
          asset_path: ${{ matrix.asset.path }}
          asset_name: ${{ matrix.asset.name }}
          asset_content_type: ${{ matrix.asset.content_type }}

  publish-npm:
    name: Publish NPM Packages
    needs: [release, upload-assets]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          registry-url: 'https://registry.npmjs.org/'

      - name: Download all artifacts
        uses: actions/download-artifact@v4
        with:
          path: artifacts

      - name: Get version from tag
        id: get_version
        run: |
          # Remove 'v' prefix from tag name
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "VERSION=$VERSION" >> $GITHUB_ENV

      - name: Prepare main npm package
        run: |
          # Update main package version and dependencies
          jq ".version = \"${VERSION}\"" npm/package.json > tmp.json && mv tmp.json npm/package.json
          jq ".optionalDependencies |= with_entries(.value = \"${VERSION}\")" npm/package.json > tmp.json && mv tmp.json npm/package.json

          # Copy README and LICENSE to main package
          cp README.md LICENSE npm/

      - name: Publish main package
        run: |
          cd npm
          npm publish --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Prepare and publish platform-specific packages
        run: |
          # Create directories for platform-specific packages
          mkdir -p npm/databricks-mcp-server-darwin-amd64/bin
          mkdir -p npm/databricks-mcp-server-darwin-arm64/bin
          mkdir -p npm/databricks-mcp-server-linux-amd64/bin
          mkdir -p npm/databricks-mcp-server-windows-amd64/bin

          # Copy binaries to their respective npm package directories
          cp artifacts/databricks-mcp-server-darwin-amd64/databricks-mcp-server npm/databricks-mcp-server-darwin-amd64/bin/
          cp artifacts/databricks-mcp-server-darwin-arm64/databricks-mcp-server npm/databricks-mcp-server-darwin-arm64/bin/
          cp artifacts/databricks-mcp-server-linux-amd64/databricks-mcp-server npm/databricks-mcp-server-linux-amd64/bin/
          cp artifacts/databricks-mcp-server-windows-amd64.exe/databricks-mcp-server.exe npm/databricks-mcp-server-windows-amd64/bin/

          # Make binaries executable
          chmod +x npm/databricks-mcp-server-darwin-amd64/bin/databricks-mcp-server
          chmod +x npm/databricks-mcp-server-darwin-arm64/bin/databricks-mcp-server
          chmod +x npm/databricks-mcp-server-linux-amd64/bin/databricks-mcp-server
          chmod +x npm/databricks-mcp-server-windows-amd64/bin/databricks-mcp-server.exe

          # Create package.json and publish for each platform-specific package
          for dir in npm/databricks-mcp-server-*; do
            if [ -d "$dir" ]; then
              pkg_name=$(basename "$dir")
              os_name=${pkg_name#databricks-mcp-server-}

              # Extract CPU architecture from package name
              if [[ "$os_name" == *"arm64"* ]]; then
                cpu_arch="arm64"
              else
                cpu_arch="x64"
              fi

              # Extract only the OS part and convert windows to win32
              if [[ "$os_name" == windows* ]]; then
                os_name="win32"
              elif [[ "$os_name" == darwin* ]]; then
                os_name="darwin"
              elif [[ "$os_name" == linux* ]]; then
                os_name="linux"
              fi

              # Create package.json file
              echo '{
                "name": "'$pkg_name'",
                "version": "'$VERSION'",
                "description": "Platform-specific binary for databricks-mcp-server",
                "os": ["'$os_name'"],
                "cpu": ["'$cpu_arch'"]
              }' > "$dir/package.json"

              # Publish the platform-specific package
              cd "$dir"
              npm publish --access public
              cd ../..
            fi
          done
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

```