# Directory Structure

```
├── .github
│   └── workflows
│       └── release.yml
├── .gitignore
├── Dockerfile
├── go.mod
├── go.sum
├── LICENSE
├── main.go
├── npm
│   ├── bin
│   │   └── index.js
│   └── package.json
├── README.md
├── smithery.yaml
├── tools
│   ├── common_test.go
│   ├── common.go
│   ├── execute_sql.go
│   ├── get_table.go
│   ├── list_catalogs.go
│   ├── list_schemas.go
│   ├── list_tables.go
│   └── list_warehouses.go
└── version.go
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
databricks-mcp-server

# Test binary, built with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out

# Dependency directories (remove the comment below to include it)
# vendor/

# Go workspace file
go.work

# IDE specific files
.idea/
.vscode/
*.swp
*.swo
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

````markdown
# Databricks MCP Server

A Model Context Protocol (MCP) server for interacting with Databricks.

## Installation

You can download the latest release for your platform from the [Releases](https://github.com/characat0/databricks-mcp-server/releases) page.

### VS Code

Install the Databricks MCP Server in VS Code by clicking the following link:

[<img src="https://img.shields.io/badge/VS_Code-VS_Code?style=flat-square&label=Install%20Server&color=0098FF" alt="Install in VS Code">](https://vscode.dev/redirect?url=vscode%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522databricks%2522%252C%2522command%2522%253A%2522npx%2522%252C%2522args%2522%253A%255B%2522-y%2522%252C%2522databricks-mcp-server%2540latest%2522%255D%257D)

Alternatively, you can install the server manually by running the following command:

```shell
# For VS Code
code --add-mcp '{"name":"databricks","command":"npx","args":["databricks-mcp-server@latest"]}'
# For VS Code Insiders
code-insiders --add-mcp '{"name":"databricks","command":"npx","args":["databricks-mcp-server@latest"]}'
```

## Tools

The Databricks MCP Server provides a Model Context Protocol (MCP) interface for interacting with Databricks workspaces. It offers the following tools:

### List Catalogs

Lists all catalogs available in the Databricks workspace.

**Tool name:** `list_catalogs`

**Parameters:** None

**Returns:** JSON array of catalog objects

### List Schemas

Lists all schemas in a specified Databricks catalog.

**Tool name:** `list_schemas`

**Parameters:**
- `catalog` (string, required): Name of the catalog to list schemas from

**Returns:** JSON array of schema objects

### List Tables

Lists all tables in a specified Databricks schema, with optional name filtering.

**Tool name:** `list_tables`

**Parameters:**
- `catalog` (string, required): Name of the catalog containing the schema
- `schema` (string, required): Name of the schema to list tables from
- `table_name_pattern` (string, optional, default: ".*"): Regular expression pattern to filter table names
- `omit_properties` (boolean, optional, default: true): Whether to omit table properties in the response
- `omit_columns` (boolean, optional, default: false): Whether to omit column details in the response
- `max_results` (number, optional, default: 10): Maximum number of tables to return

**Returns:** JSON object containing the matching tables, their count, and a flag indicating whether the list was truncated

### Get Table

Gets detailed information about a single Databricks table.

**Tool name:** `get_table`

**Parameters:**
- `full_name` (string, required): Full name of the table in the format `catalog.schema.table`

**Returns:** JSON object describing the table

### Execute SQL

Executes SQL statements on a Databricks SQL warehouse and returns the results.

**Tool name:** `execute_sql`

**Parameters:**
- `statement` (string, required): SQL statement to execute
- `execution_timeout_seconds` (number, optional, default: 60): Timeout in seconds for the statement execution
- `max_rows` (number, optional, default: 100): Maximum number of rows to return in the result
- `warehouse_id` (string, optional): ID of the warehouse to use; if omitted, the first available warehouse is used

**Returns:** JSON object containing the columns and rows of the query result

### List SQL Warehouses

Lists all SQL warehouses available in the Databricks workspace.

**Tool name:** `list_warehouses`

**Parameters:** None

**Returns:** JSON array of SQL warehouse objects

## Supported Platforms

- Linux (amd64)
- Windows (amd64)
- macOS (Intel/amd64)
- macOS (Apple Silicon/arm64)

## Usage

### Authentication

The application uses Databricks unified authentication. For details on how to configure authentication, refer to the [Databricks authentication documentation](https://docs.databricks.com/en/dev-tools/auth.html).

### Running the Server

Start the MCP server:

```bash
./databricks-mcp-server
```

The server starts and listens for MCP protocol messages on standard input/output.

## Development

### Prerequisites

- Go 1.24 or later
````

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
startCommand:
  type: stdio
  build:
    dockerfile: ./Dockerfile
    dockerBuildPath: .
  configSchema:
    {}
  commandFunction: |
    (config) => ({
      "command": "./databricks-mcp-server",
      "env": {}
    })
```

--------------------------------------------------------------------------------
/version.go:
--------------------------------------------------------------------------------

```go
package main

// Version information
var (
	// Version is the current version of the application
	Version = "0.0.10"

	// BuildDate is the date when the binary was built
	BuildDate = "unknown"

	// GitCommit is the git commit hash of the build
	GitCommit = "unknown"
)
```

--------------------------------------------------------------------------------
/tools/list_catalogs.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// ListCatalogs retrieves all catalogs from the Databricks workspace.
// The caller (ExecuteTool) marshals the result to JSON.
func ListCatalogs(ctx context.Context, _ mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}
	return w.Catalogs.ListAll(ctx, catalog.ListCatalogsRequest{})
}
```

--------------------------------------------------------------------------------
/tools/list_warehouses.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/sql"
	"github.com/mark3labs/mcp-go/mcp"
)

// ListWarehouses retrieves all SQL warehouses from the Databricks workspace.
// The caller (ExecuteTool) marshals the result to JSON.
func ListWarehouses(ctx context.Context, _ mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}
	return w.Warehouses.ListAll(ctx, sql.ListWarehousesRequest{})
}
```

--------------------------------------------------------------------------------
/tools/list_schemas.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// ListSchemas retrieves all schemas in the specified catalog.
// The caller (ExecuteTool) marshals the result to JSON.
func ListSchemas(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}
	catalogName := request.GetString("catalog", "")
	return w.Schemas.ListAll(ctx, catalog.ListSchemasRequest{
		CatalogName: catalogName,
	})
}
```

--------------------------------------------------------------------------------
/tools/get_table.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"

	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// GetTable retrieves information about a single table using its full name
// (catalog.schema.table). The caller (ExecuteTool) marshals the result to JSON.
func GetTable(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}

	fullName := request.GetString("full_name", "")

	// Note: The Get method doesn't support omitProperties and omitColumns parameters
	return w.Tables.Get(ctx, catalog.GetTableRequest{
		FullName: fullName,
	})
}
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Multi-stage build for Databricks MCP Server

# Build stage
FROM golang:1.24-alpine AS builder

# git is needed to embed the commit hash below; without it the build falls back to "unknown"
RUN apk add --no-cache git

# Set working directory
WORKDIR /app

# Copy go.mod and go.sum files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy the source code
COPY . .

# Build the application with version information
RUN CGO_ENABLED=0 go build -ldflags="-X 'main.BuildDate=$(date -u +%Y-%m-%d)' -X 'main.GitCommit=$(git rev-parse --short HEAD || echo unknown)'" -o databricks-mcp-server

# Runtime stage
FROM alpine:latest

# Install CA certificates for HTTPS connections
RUN apk --no-cache add ca-certificates

# Set working directory
WORKDIR /app

# Copy the binary from the builder stage
COPY --from=builder /app/databricks-mcp-server /app/

# Set the entrypoint
ENTRYPOINT ["/app/databricks-mcp-server"]

# Document that the server listens on stdin/stdout
LABEL description="Databricks MCP Server - A Model Context Protocol (MCP) server for interacting with Databricks"
LABEL version="0.0.10"
```

--------------------------------------------------------------------------------
/npm/bin/index.js:
--------------------------------------------------------------------------------

```javascript
#!/usr/bin/env node

const childProcess = require('child_process');

// 'package' is a reserved word in strict mode, so use a distinct name
const packageName = "databricks-mcp-server";

const BINARY_MAP = {
    darwin_x64: {name: `${packageName}-darwin-amd64`, suffix: ''},
    darwin_arm64: {name: `${packageName}-darwin-arm64`, suffix: ''},
    linux_x64: {name: `${packageName}-linux-amd64`, suffix: ''},
    linux_arm64: {name: `${packageName}-linux-arm64`, suffix: ''},
    win32_x64: {name: `${packageName}-windows-amd64`, suffix: '.exe'},
    win32_arm64: {name: `${packageName}-windows-arm64`, suffix: '.exe'},
};

// Resolving will fail if the optionalDependency was not installed or the platform/arch is not supported
const resolveBinaryPath = () => {
    try {
        const binary = BINARY_MAP[`${process.platform}_${process.arch}`];
        return require.resolve(`${binary.name}/bin/databricks-mcp-server${binary.suffix}`);
    } catch (e) {
        throw new Error(`Could not resolve binary path for platform/arch: ${process.platform}/${process.arch}`);
    }
};

childProcess.execFileSync(resolveBinaryPath(), process.argv.slice(2), {
    stdio: 'inherit',
});
```

--------------------------------------------------------------------------------
/npm/package.json:
--------------------------------------------------------------------------------

```json
{
    "name": "databricks-mcp-server",
    "version": "0.0.0",
    "description": "Model Context Protocol (MCP) server for interacting with Databricks",
    "main": "bin/index.js",
    "bin": {
        "databricks-mcp-server": "bin/index.js"
    },
    "optionalDependencies": {
        "databricks-mcp-server-darwin-amd64": "0.0.0",
        "databricks-mcp-server-darwin-arm64": "0.0.0",
        "databricks-mcp-server-linux-amd64": "0.0.0",
        "databricks-mcp-server-linux-arm64": "0.0.0",
        "databricks-mcp-server-windows-amd64": "0.0.0",
        "databricks-mcp-server-windows-arm64": "0.0.0"
    },
    "repository": {
        "type": "git",
        "url": "git+https://github.com/characat0/databricks-mcp-server.git"
    },
    "keywords": [
        "mcp",
        "databricks",
        "model context protocol"
    ],
    "author": {
        "name": "Marco Vela",
        "url": "https://www.marcovela.com"
    },
    "license": "MIT",
    "bugs": {
        "url": "https://github.com/characat0/databricks-mcp-server/issues"
    },
    "homepage": "https://github.com/characat0/databricks-mcp-server#readme"
}
```

--------------------------------------------------------------------------------
/tools/list_tables.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"regexp"

	"github.com/databricks/databricks-sdk-go/listing"
	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/mark3labs/mcp-go/mcp"
)

// filterTables filters a list of tables based on a regex pattern applied to table names.
// Returns the filtered list of tables and any error that occurred during pattern compilation.
func filterTables(tables []catalog.TableInfo, pattern string) ([]catalog.TableInfo, error) {
	regex, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	var filteredTables []catalog.TableInfo
	for _, table := range tables {
		if regex.MatchString(table.Name) {
			filteredTables = append(filteredTables, table)
		}
	}
	return filteredTables, nil
}

// ListTables retrieves all tables in the specified catalog and schema,
// optionally filtering them by a regex pattern.
// It also supports omitting table properties and column details from the response.
// The max_results parameter limits the number of tables returned (0 for all).
func ListTables(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}

	catalogName := request.GetString("catalog", "")
	schemaName := request.GetString("schema", "")
	tableNamePattern := request.GetString("table_name_pattern", ".*")
	omitProperties := request.GetBool("omit_properties", true)
	omitColumns := request.GetBool("omit_columns", false)
	maxResults := request.GetInt("max_results", 10)

	// Retrieve tables from the specified catalog and schema; request one extra
	// so we can detect truncation
	tablesIt := w.Tables.List(ctx, catalog.ListTablesRequest{
		CatalogName:    catalogName,
		SchemaName:     schemaName,
		OmitProperties: omitProperties,
		OmitColumns:    omitColumns,
		MaxResults:     maxResults + 1,
	})
	tables, err := listing.ToSliceN[catalog.TableInfo](ctx, tablesIt, maxResults)
	if err != nil {
		return nil, err
	}

	var truncated = false
	if len(tables) > maxResults {
		tables = tables[:maxResults]
		truncated = true
	}

	// Apply the filter unless the pattern matches everything
	if tableNamePattern != "" && tableNamePattern != ".*" {
		tables, err = filterTables(tables, tableNamePattern)
		if err != nil {
			return nil, err
		}
	}

	// Return a structured response
	return map[string]interface{}{
		"tables":      tables,
		"total_count": len(tables),
		"truncated":   truncated,
	}, nil
}
```

--------------------------------------------------------------------------------
/tools/common.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

// contextKey is a type for context keys to avoid collisions
type contextKey string

// workspaceClientKey is the key used to store the workspace client in the context
const workspaceClientKey contextKey = "workspaceClient"

// WithWorkspaceClient returns a new context with the workspace client added
func WithWorkspaceClient(ctx context.Context, w *databricks.WorkspaceClient) context.Context {
	return context.WithValue(ctx, workspaceClientKey, w)
}

// WorkspaceClientFromContext retrieves the workspace client from the context
func WorkspaceClientFromContext(ctx context.Context) (*databricks.WorkspaceClient, error) {
	w, ok := ctx.Value(workspaceClientKey).(*databricks.WorkspaceClient)
	if !ok || w == nil {
		return nil, fmt.Errorf("workspace client not found in context")
	}
	return w, nil
}

// DatabricksTool represents a generic tool on Databricks
type DatabricksTool func(ctx context.Context, request mcp.CallToolRequest) (interface{}, error)

// ExecuteTool is a helper function that executes a Databricks tool and handles common error patterns
func ExecuteTool(tool DatabricksTool) server.ToolHandlerFunc {
	return func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		result, err := tool(ctx, request)
		if err != nil {
			return mcp.NewToolResultErrorFromErr("Error executing tool", err), nil
		}

		// Marshal the result to JSON
		jsonResult, err := json.Marshal(result)
		if err != nil {
			return mcp.NewToolResultErrorFromErr("Error marshalling result into JSON", err), nil
		}

		return mcp.NewToolResultText(string(jsonResult)), nil
	}
}

// WithWorkspaceClientHandler wraps a tool handler function with the workspace client
func WithWorkspaceClientHandler(w *databricks.WorkspaceClient, handler server.ToolHandlerFunc) server.ToolHandlerFunc {
	return func(ctx context.Context, request mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		// Add the workspace client to the context
		ctx = WithWorkspaceClient(ctx, w)
		return handler(ctx, request)
	}
}

// SendProgressNotification sends a progress notification to the client
func SendProgressNotification(ctx context.Context, message string, progress, total int) error {
	mcpServer := server.ServerFromContext(ctx)
	if mcpServer == nil {
		return fmt.Errorf("server not found in context")
	}

	var token interface{} = 0
	return mcpServer.SendNotificationToClient(ctx, "notifications/progress", map[string]interface{}{
		"message":       message,
		"progressToken": token,
		"progress":      progress,
		"total":         total,
	})
}
```

--------------------------------------------------------------------------------
/tools/common_test.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"encoding/json"
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestRequest is a simplified version of mcp.CallToolRequest for testing
type TestRequest struct {
	Arguments map[string]interface{}
}

// TestResult is a simplified version of mcp.CallToolResult for testing
type TestResult struct {
	Type  string
	Text  string
	Error string
}

// TestOperation is a simplified version of DatabricksTool for testing
type TestOperation func(ctx context.Context, request TestRequest) (interface{}, error)

// NewTestResultText creates a new test result with text content
func NewTestResultText(text string) *TestResult {
	return &TestResult{
		Type: "text",
		Text: text,
	}
}

// NewTestResultErrorFromErr creates a new test result with error content from an error
func NewTestResultErrorFromErr(message string, err error) *TestResult {
	return &TestResult{
		Type:  "error",
		Error: message + ": " + err.Error(),
	}
}

// ExecuteTestOperation is a simplified version of ExecuteTool for testing
func ExecuteTestOperation(operation TestOperation) func(ctx context.Context, request TestRequest) (*TestResult, error) {
	return func(ctx context.Context, request TestRequest) (*TestResult, error) {
		result, err := operation(ctx, request)
		if err != nil {
			return NewTestResultErrorFromErr("Error executing operation", err), nil
		}

		// Marshal the result to JSON
		jsonResult, err := json.Marshal(result)
		if err != nil {
			return NewTestResultErrorFromErr("Error marshalling result into JSON", err), nil
		}

		return NewTestResultText(string(jsonResult)), nil
	}
}

// TestExecuteTestOperation tests the ExecuteTestOperation function
func TestExecuteTestOperation(t *testing.T) {
	// Create a mock operation that returns a successful result
	successOp := func(ctx context.Context, request TestRequest) (interface{}, error) {
		return map[string]string{"result": "success"}, nil
	}

	// Create a mock operation that returns an error
	errorOp := func(ctx context.Context, request TestRequest) (interface{}, error) {
		return nil, errors.New("operation failed")
	}

	// Test successful operation
	t.Run("SuccessfulOperation", func(t *testing.T) {
		handler := ExecuteTestOperation(successOp)
		result, err := handler(context.Background(), TestRequest{})
		assert.NoError(t, err)
		assert.NotNil(t, result)
		assert.Equal(t, "text", result.Type)
		assert.NotEmpty(t, result.Text)
	})

	// Test failed operation
	t.Run("FailedOperation", func(t *testing.T) {
		handler := ExecuteTestOperation(errorOp)
		result, err := handler(context.Background(), TestRequest{})
		assert.NoError(t, err)
		assert.NotNil(t, result)
		assert.Equal(t, "error", result.Type)
		assert.Contains(t, result.Error, "operation failed")
	})
}
```

--------------------------------------------------------------------------------
/tools/execute_sql.go:
--------------------------------------------------------------------------------

```go
package tools

import (
	"context"
	"fmt"
	"time"

	"github.com/databricks/databricks-sdk-go/service/sql"
	"github.com/mark3labs/mcp-go/mcp"
)

// ExecuteSQL executes a SQL statement on a Databricks warehouse and returns the results.
// It handles statement execution, polling for completion, and fetching result chunks.
func ExecuteSQL(ctx context.Context, request mcp.CallToolRequest) (interface{}, error) {
	w, err := WorkspaceClientFromContext(ctx)
	if err != nil {
		return nil, err
	}

	sqlStatement := request.GetString("statement", "")
	timeoutSeconds := request.GetFloat("execution_timeout_seconds", 60)
	maxRows := request.GetInt("max_rows", 100)
	warehouseId := request.GetString("warehouse_id", "")

	// Poll every 10 seconds and derive the number of attempts from the timeout
	pollingInterval := 10 * time.Second
	maxAttempts := int(timeoutSeconds / 10)

	// Determine which warehouse to use
	if warehouseId == "" {
		// Get available warehouses and use the first one
		warehouses, err := w.Warehouses.ListAll(ctx, sql.ListWarehousesRequest{})
		if err != nil {
			return nil, fmt.Errorf("error listing warehouses: %w", err)
		}
		if len(warehouses) == 0 {
			return nil, fmt.Errorf("no warehouses available")
		}
		warehouseId = warehouses[0].Id
	}

	// Execute the SQL statement with the specified row limit
	res, err := w.StatementExecution.ExecuteStatement(ctx, sql.ExecuteStatementRequest{
		RowLimit:    int64(maxRows),
		Statement:   sqlStatement,
		WaitTimeout: "5s",
		WarehouseId: warehouseId,
	})
	if err != nil {
		return nil, fmt.Errorf("error executing SQL statement: %w", err)
	}

	attempts := 0

	// Poll for statement completion
	for attempts < maxAttempts && res.Status.State != sql.StatementStateSucceeded && res.Status.Error == nil {
		// Send progress notification
		err = SendProgressNotification(ctx,
			fmt.Sprintf("Statement execution in progress (%d seconds), current status: %s", attempts*10, res.Status.State),
			attempts, maxAttempts)
		if err != nil {
			return nil, err
		}

		// Wait before checking again
		time.Sleep(pollingInterval)

		// Check statement status
		res, err = w.StatementExecution.GetStatementByStatementId(ctx, res.StatementId)
		if err != nil {
			return nil, fmt.Errorf("error getting statement status: %w", err)
		}
		attempts++
	}

	// Handle statement errors
	if res.Status.Error != nil {
		return nil, fmt.Errorf("error executing the statement, current status %s: %s",
			res.Status.State, res.Status.Error.Message)
	}

	if res.Status.State != sql.StatementStateSucceeded {
		_ = w.StatementExecution.CancelExecution(ctx, sql.CancelExecutionRequest{
			StatementId: res.StatementId,
		})
		return nil, fmt.Errorf("statement execution timed out after %v seconds, current status %s.\nHint: Try a higher timeout or simplify the query", timeoutSeconds, res.Status.State)
	}

	// Some statements (e.g. DDL) can succeed without producing a result set
	if res.Result == nil || res.Manifest == nil || res.Manifest.Schema == nil {
		return map[string]interface{}{
			"columns": nil,
			"rows":    [][]string{},
		}, nil
	}

	// Collect all result chunks
	var resultDataArray [][]string
	resultData := res.Result
	resultDataArray = append(resultDataArray, resultData.DataArray...)

	// Fetch additional chunks if available
	for resultData.NextChunkIndex != 0 {
		resultData, err = w.StatementExecution.GetStatementResultChunkN(ctx, sql.GetStatementResultChunkNRequest{
			ChunkIndex:  resultData.NextChunkIndex,
			StatementId: res.StatementId,
		})
		if err != nil {
			return nil, fmt.Errorf("error getting statement result chunk: %w", err)
		}
		resultDataArray = append(resultDataArray, resultData.DataArray...)
	}

	// Return structured results
	return map[string]interface{}{
		"columns": res.Manifest.Schema.Columns,
		"rows":    resultDataArray,
	}, nil
}
```

--------------------------------------------------------------------------------
/main.go:
--------------------------------------------------------------------------------

```go
 1 | package main
 2 | 
 3 | import (
 4 | 	"fmt"
 5 | 	"log"
 6 | 	"os"
 7 | 
 8 | 	"databricks-mcp-server/tools"
 9 | 	"github.com/databricks/databricks-sdk-go"
10 | 	"github.com/mark3labs/mcp-go/mcp"
11 | 	"github.com/mark3labs/mcp-go/server"
12 | )
13 | 
14 | // w is the Databricks workspace client used for all API operations
15 | var w *databricks.WorkspaceClient
16 | 
17 | func init() {
18 | 	var err error
19 | 	w, err = databricks.NewWorkspaceClient()
20 | 	if err != nil {
21 | 		log.Fatalf("Failed to initialize Databricks client: %v", err)
22 | 	}
23 | }
24 | 
25 | func main() {
26 | 	// Create an MCP server
27 | 	s := server.NewMCPServer(
28 | 		"Databricks MCP Server",
29 | 		Version,
30 | 		server.WithLogging(),
31 | 	)
32 | 
33 | 	// Add tool handlers for Databricks operations
34 | 	s.AddTool(mcp.NewTool("list_catalogs",
35 | 		mcp.WithDescription("Lists all catalogs available in the Databricks workspace"),
36 | 	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListCatalogs)))
37 | 
38 | 	s.AddTool(mcp.NewTool("list_schemas",
39 | 		mcp.WithDescription("Lists all schemas in a specified Databricks catalog"),
40 | 		mcp.WithString("catalog", mcp.Description("Name of the catalog to list schemas from"), mcp.Required()),
41 | 	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListSchemas)))
42 | 
43 | 	s.AddTool(mcp.NewTool("list_tables",
44 | 		mcp.WithDescription("Lists all tables in a specified Databricks schema with optional filtering"),
45 | 		mcp.WithString("catalog", mcp.Description("Name of the catalog containing the schema"), mcp.Required()),
46 | 		mcp.WithString("schema", mcp.Description("Name of the schema to list tables from"), mcp.Required()),
47 | 		mcp.WithString("table_name_pattern", mcp.Description("Regular expression pattern to filter table names"), mcp.DefaultString(".*")),
48 | 		mcp.WithBoolean("omit_properties", mcp.Description("Whether to omit table properties in the response, helps to reduce response size"), mcp.DefaultBool(true)),
49 | 		mcp.WithBoolean("omit_columns", mcp.Description("Whether to omit column details in the response"), mcp.DefaultBool(false)),
50 | 		mcp.WithNumber("max_results", mcp.Description("Maximum number of tables to return (0 for all, non-recommended)"), mcp.DefaultNumber(10)),
51 | 	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListTables)))
52 | 
53 | 	s.AddTool(mcp.NewTool("get_table",
54 | 		mcp.WithDescription("Gets detailed information about a single Databricks table"),
55 | 		mcp.WithString("full_name", mcp.Description("Full name of the table in format 'catalog.schema.table'"), mcp.Required()),
56 | 	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.GetTable)))
57 | 
58 | 	s.AddTool(mcp.NewTool("execute_sql",
59 | 		mcp.WithDescription(`
60 | <use_case>
61 |   Use this tool to execute SQL statements against a Databricks warehouse and retrieve results in JSON format.
62 | </use_case>
63 | 
64 | <important_notes>
65 |   The flavor of SQL supported is based on the Databricks SQL engine, which is similar to Apache Spark SQL.
66 |   If asked explicitly to use a specific warehouse, you can use the "list_warehouses" tool to get available warehouses.
67 |   Ensure that the SQL is optimized for performance, especially for large datasets; avoid running statements that do not use partitioning or indexing effectively.
68 | </important_notes>
69 | `),
70 | 		mcp.WithString("statement", mcp.Description("SQL statement to execute"), mcp.Required()),
71 | 		mcp.WithNumber("execution_timeout_seconds", mcp.Description("Maximum time in seconds to wait for query execution"), mcp.DefaultNumber(60)),
72 | 		mcp.WithNumber("max_rows", mcp.Description("Maximum number of rows to return in the result"), mcp.DefaultNumber(100)),
73 | 		mcp.WithString("warehouse_id", mcp.Description("ID of the warehouse to use for execution. If not specified, the first available warehouse will be used")),
74 | 	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ExecuteSQL)))
75 | 
76 | 	s.AddTool(mcp.NewTool("list_warehouses",
77 | 		mcp.WithDescription(`
78 | <use_case>
79 |   Use this tool when asked explicitly to use a specific warehouse for SQL execution.
80 | </use_case>
81 | `),
82 | 	), tools.WithWorkspaceClientHandler(w, tools.ExecuteTool(tools.ListWarehouses)))
83 | 
 84 | 	// Start the stdio server, logging to stderr so stdout stays reserved for protocol messages
 85 | 	logger := log.New(os.Stderr, "INFO: ", log.LstdFlags)
 86 | 	if err := server.ServeStdio(s, server.WithErrorLogger(logger)); err != nil {
 87 | 		fmt.Fprintf(os.Stderr, "Server error: %v\n", err)
 88 | 	}
89 | }
90 | 
```
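The tools registered above are invoked over stdio with JSON-RPC 2.0 `tools/call` messages. A minimal sketch of such a payload, with a sanity check before piping it to the server (the binary path is an assumption, and a real MCP client would perform an `initialize` handshake first):

```shell
# Hypothetical smoke-test payload for the stdio server.
# Method name and params shape follow the MCP tools/call convention.
REQUEST='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"list_warehouses","arguments":{}}}'

# Sanity-check the payload contains the expected method before sending it:
echo "$REQUEST" | grep -q '"method":"tools/call"' && echo "payload ok"

# Then pipe it into the server (path is illustrative):
# echo "$REQUEST" | ./databricks-mcp-server
```

Each tool's `arguments` object mirrors the `mcp.WithString`/`mcp.WithNumber` parameters declared at registration time, e.g. `{"statement": "SELECT 1", "max_rows": 10}` for `execute_sql`.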

--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Build and Release
  2 | 
  3 | on:
  4 |   push:
  5 |     tags:
  6 |       - 'v[0-9]+.[0-9]+.[0-9]+'
  7 |     # This ensures the workflow only runs on tags matching the pattern vX.Y.Z
  8 | 
  9 | jobs:
 10 |   build:
 11 |     name: Build
 12 |     runs-on: ubuntu-latest
 13 |     strategy:
 14 |       matrix:
 15 |         go-version: ['1.24.x']
 16 |         platform: [linux-amd64, windows-amd64, darwin-amd64, darwin-arm64]
 17 |         include:
 18 |           - platform: linux-amd64
 19 |             os: ubuntu-latest
 20 |             GOOS: linux
 21 |             GOARCH: amd64
 22 |             binary_name: databricks-mcp-server
 23 |             asset_name: databricks-mcp-server-linux-amd64
 24 |           - platform: windows-amd64
 25 |             os: ubuntu-latest
 26 |             GOOS: windows
 27 |             GOARCH: amd64
 28 |             binary_name: databricks-mcp-server.exe
 29 |             asset_name: databricks-mcp-server-windows-amd64.exe
 30 |           - platform: darwin-amd64
 31 |             os: ubuntu-latest
 32 |             GOOS: darwin
 33 |             GOARCH: amd64
 34 |             binary_name: databricks-mcp-server
 35 |             asset_name: databricks-mcp-server-darwin-amd64
 36 |           - platform: darwin-arm64
 37 |             os: ubuntu-latest
 38 |             GOOS: darwin
 39 |             GOARCH: arm64
 40 |             binary_name: databricks-mcp-server
 41 |             asset_name: databricks-mcp-server-darwin-arm64
 42 | 
 43 |     steps:
 44 |     - name: Checkout code
 45 |       uses: actions/checkout@v4
 46 |       with:
 47 |         fetch-depth: 0
 48 | 
 49 |     - name: Set up Go
 50 |       uses: actions/setup-go@v4
 51 |       with:
 52 |         go-version: ${{ matrix.go-version }}
 53 | 
 54 |     - name: Get version from tag
 55 |       id: get_version
 56 |       run: |
 57 |         if [[ $GITHUB_REF == refs/tags/v* ]]; then
 58 |           VERSION=${GITHUB_REF#refs/tags/v}
 59 |         else
 60 |           # For non-tag builds, try to get the latest tag or use version from version.go if no tags exist
 61 |           LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
 62 |           if [ -z "$LATEST_TAG" ]; then
 63 |             # Extract version from version.go if no tags exist
 64 |             VERSION=$(grep -oP 'Version = "\K[^"]+' version.go || echo "0.0.0")
 65 |             VERSION="$VERSION-dev-$(git rev-parse --short HEAD)"
 66 |           else
 67 |             VERSION="${LATEST_TAG#v}-dev-$(git rev-parse --short HEAD)"
 68 |           fi
 69 |         fi
 70 |         echo "VERSION=$VERSION" >> $GITHUB_ENV
 71 |         echo "version=$VERSION" >> $GITHUB_OUTPUT
 72 | 
 73 |     - name: Build
 74 |       env:
 75 |         GOOS: ${{ matrix.GOOS }}
 76 |         GOARCH: ${{ matrix.GOARCH }}
 77 |       run: |
 78 |         go build -trimpath -ldflags="-s -w -X 'main.Version=${{ env.VERSION }}' -X 'main.BuildDate=$(date -u +%Y-%m-%dT%H:%M:%SZ)' -X 'main.GitCommit=$(git rev-parse --short HEAD)'" -o ${{ matrix.binary_name }}
 79 | 
 80 |     - name: Install UPX
 81 |       if: matrix.GOOS != 'darwin'
 82 |       run: |
 83 |         sudo apt-get update
 84 |         sudo apt-get install -y upx-ucl
 85 | 
 86 |     - name: Compress binary with UPX
 87 |       if: matrix.GOOS != 'darwin'
 88 |       run: |
 89 |         upx --best --lzma ${{ matrix.binary_name }}
 90 | 
 91 |     - name: Upload artifact
 92 |       uses: actions/upload-artifact@v4
 93 |       with:
 94 |         name: ${{ matrix.asset_name }}
 95 |         path: ${{ matrix.binary_name }}
 96 | 
 97 |   release:
 98 |     permissions: write-all
 99 |     name: Create Release
100 |     needs: build
101 |     runs-on: ubuntu-latest
102 |     outputs:
103 |       upload_url: ${{ steps.release_outputs.outputs.upload_url }}
104 |     steps:
105 |       - name: Checkout code
106 |         uses: actions/checkout@v4
107 |         with:
108 |           fetch-depth: 0
109 | 
110 |       - name: Generate Changelog
111 |         id: changelog
112 |         run: |
113 |           # Get the current tag
114 |           CURRENT_TAG=${GITHUB_REF#refs/tags/}
115 |           echo "Current tag: $CURRENT_TAG"
116 | 
117 |           # Check if this is the first tag
118 |           TAG_COUNT=$(git tag | wc -l)
119 | 
120 |           if [ "$TAG_COUNT" -le 1 ]; then
121 |             # This is the first tag or there's only one tag (the current one)
122 |             echo "This is the first release. Including all commits."
123 |             # Get all commits up to the current tag
124 |             CHANGELOG=$(git log --pretty=format:"* %s (%h)" $CURRENT_TAG)
125 | 
126 |             # If the changelog is empty (can happen with the first tag), get all commits
127 |             if [ -z "$CHANGELOG" ]; then
128 |               echo "Getting all commits for the first release."
129 |               CHANGELOG=$(git log --pretty=format:"* %s (%h)")
130 |             fi
131 |           else
132 |             # Try to get the previous tag
133 |             PREVIOUS_TAG=$(git describe --tags --abbrev=0 $CURRENT_TAG^ 2>/dev/null || echo "")
134 | 
135 |             if [ -z "$PREVIOUS_TAG" ]; then
136 |               # If we can't get the previous tag, get all commits up to the current tag
137 |               echo "No previous tag found. Using all commits up to $CURRENT_TAG."
138 |               CHANGELOG=$(git log --pretty=format:"* %s (%h)" $CURRENT_TAG)
139 |             else
140 |               echo "Previous tag: $PREVIOUS_TAG"
141 |               # Get commits between the previous tag and the current tag
142 |               CHANGELOG=$(git log --pretty=format:"* %s (%h)" $PREVIOUS_TAG..$CURRENT_TAG)
143 |             fi
144 |           fi
145 | 
146 |           # Save changelog to output
147 |           echo "CHANGELOG<<EOF" >> $GITHUB_ENV
148 |           echo "$CHANGELOG" >> $GITHUB_ENV
149 |           echo "EOF" >> $GITHUB_ENV
150 | 
151 |       - name: Download all artifacts
152 |         uses: actions/download-artifact@v4
153 |         with:
154 |           path: artifacts
155 | 
156 |       - name: Create Release
157 |         id: create_release
158 |         uses: actions/create-release@v1
159 |         env:
160 |           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
161 |         with:
162 |           tag_name: ${{ github.ref_name }}
163 |           release_name: Release ${{ github.ref_name }}
164 |           body: |
165 |             # Release ${{ github.ref_name }} of databricks-mcp-server
166 | 
167 |             ## Changelog
168 |             ${{ env.CHANGELOG }}
169 |           draft: false
170 |           prerelease: false
171 | 
172 |       # The release ID is needed for the upload-assets job
173 |       - name: Set release outputs
174 |         id: release_outputs
175 |         run: |
176 |           echo "release_id=${{ steps.create_release.outputs.id }}" >> $GITHUB_OUTPUT
177 |           echo "upload_url=${{ steps.create_release.outputs.upload_url }}" >> $GITHUB_OUTPUT
178 | 
179 |   upload-assets:
180 |     name: Upload Release Assets
181 |     needs: release
182 |     runs-on: ubuntu-latest
183 |     strategy:
184 |       matrix:
185 |         asset:
186 |           - name: databricks-mcp-server-linux-amd64
187 |             path: artifacts/databricks-mcp-server-linux-amd64/databricks-mcp-server
188 |             content_type: application/octet-stream
189 |           - name: databricks-mcp-server-windows-amd64.exe
190 |             path: artifacts/databricks-mcp-server-windows-amd64.exe/databricks-mcp-server.exe
191 |             content_type: application/octet-stream
192 |           - name: databricks-mcp-server-darwin-amd64
193 |             path: artifacts/databricks-mcp-server-darwin-amd64/databricks-mcp-server
194 |             content_type: application/octet-stream
195 |           - name: databricks-mcp-server-darwin-arm64
196 |             path: artifacts/databricks-mcp-server-darwin-arm64/databricks-mcp-server
197 |             content_type: application/octet-stream
198 |     steps:
199 |       - name: Download all artifacts
200 |         uses: actions/download-artifact@v4
201 |         with:
202 |           path: artifacts
203 | 
204 |       - name: Upload Release Asset
205 |         uses: actions/upload-release-asset@v1
206 |         env:
207 |           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
208 |         with:
209 |           upload_url: ${{ needs.release.outputs.upload_url }}
210 |           asset_path: ${{ matrix.asset.path }}
211 |           asset_name: ${{ matrix.asset.name }}
212 |           asset_content_type: ${{ matrix.asset.content_type }}
213 | 
214 |   publish-npm:
215 |     name: Publish NPM Packages
216 |     needs: [release, upload-assets]
217 |     runs-on: ubuntu-latest
218 |     steps:
219 |       - name: Checkout code
220 |         uses: actions/checkout@v4
221 |         with:
222 |           fetch-depth: 0
223 | 
224 |       - name: Set up Node.js
225 |         uses: actions/setup-node@v4
226 |         with:
227 |           node-version: '20'
228 |           registry-url: 'https://registry.npmjs.org/'
229 | 
230 |       - name: Download all artifacts
231 |         uses: actions/download-artifact@v4
232 |         with:
233 |           path: artifacts
234 | 
235 |       - name: Get version from tag
236 |         id: get_version
237 |         run: |
238 |           # Remove 'v' prefix from tag name
239 |           VERSION=${GITHUB_REF#refs/tags/v}
240 |           echo "VERSION=$VERSION" >> $GITHUB_ENV
241 | 
242 |       - name: Prepare main npm package
243 |         run: |
244 |           # Update main package version and dependencies
245 |           jq ".version = \"${VERSION}\"" npm/package.json > tmp.json && mv tmp.json npm/package.json
246 |           jq ".optionalDependencies |= with_entries(.value = \"${VERSION}\")" npm/package.json > tmp.json && mv tmp.json npm/package.json
247 | 
248 |           # Copy README and LICENSE to main package
249 |           cp README.md LICENSE npm/
250 | 
251 |       - name: Publish main package
252 |         run: |
253 |           cd npm
254 |           npm publish --access public
255 |         env:
256 |           NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
257 | 
258 |       - name: Prepare and publish platform-specific packages
259 |         run: |
260 |           # Create directories for platform-specific packages
261 |           mkdir -p npm/databricks-mcp-server-darwin-amd64/bin
262 |           mkdir -p npm/databricks-mcp-server-darwin-arm64/bin
263 |           mkdir -p npm/databricks-mcp-server-linux-amd64/bin
264 |           mkdir -p npm/databricks-mcp-server-windows-amd64/bin
265 | 
266 |           # Copy binaries to their respective npm package directories
267 |           cp artifacts/databricks-mcp-server-darwin-amd64/databricks-mcp-server npm/databricks-mcp-server-darwin-amd64/bin/
268 |           cp artifacts/databricks-mcp-server-darwin-arm64/databricks-mcp-server npm/databricks-mcp-server-darwin-arm64/bin/
269 |           cp artifacts/databricks-mcp-server-linux-amd64/databricks-mcp-server npm/databricks-mcp-server-linux-amd64/bin/
270 |           cp artifacts/databricks-mcp-server-windows-amd64.exe/databricks-mcp-server.exe npm/databricks-mcp-server-windows-amd64/bin/
271 | 
272 |           # Make binaries executable
273 |           chmod +x npm/databricks-mcp-server-darwin-amd64/bin/databricks-mcp-server
274 |           chmod +x npm/databricks-mcp-server-darwin-arm64/bin/databricks-mcp-server
275 |           chmod +x npm/databricks-mcp-server-linux-amd64/bin/databricks-mcp-server
276 |           chmod +x npm/databricks-mcp-server-windows-amd64/bin/databricks-mcp-server.exe
277 | 
278 |           # Create package.json and publish for each platform-specific package
279 |           for dir in npm/databricks-mcp-server-*; do
280 |             if [ -d "$dir" ]; then
281 |               pkg_name=$(basename "$dir")
282 |               os_name=${pkg_name#databricks-mcp-server-}
283 | 
284 |               # Extract CPU architecture from package name
285 |               if [[ "$os_name" == *"arm64"* ]]; then
286 |                 cpu_arch="arm64"
287 |               else
288 |                 cpu_arch="x64"
289 |               fi
290 | 
291 |               # Extract only the OS part and convert windows to win32
292 |               if [[ "$os_name" == windows* ]]; then
293 |                 os_name="win32"
294 |               elif [[ "$os_name" == darwin* ]]; then
295 |                 os_name="darwin"
296 |               elif [[ "$os_name" == linux* ]]; then
297 |                 os_name="linux"
298 |               fi
299 | 
300 |               # Create package.json file
301 |               echo '{
302 |                 "name": "'$pkg_name'",
303 |                 "version": "'$VERSION'",
304 |                 "description": "Platform-specific binary for databricks-mcp-server",
305 |                 "os": ["'$os_name'"],
306 |                 "cpu": ["'$cpu_arch'"]
307 |               }' > "$dir/package.json"
308 | 
309 |               # Publish the platform-specific package
310 |               cd "$dir"
311 |               npm publish --access public
312 |               cd ../..
313 |             fi
314 |           done
315 |         env:
316 |           NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
317 | 
```
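The tag-to-version stripping and the platform-name mapping used in the `publish-npm` job can be replayed locally. The sketch below uses illustrative values (the real workflow reads `GITHUB_REF` from CI, and POSIX `case` stands in for the workflow's bash `[[ ]]` tests):

```shell
#!/usr/bin/env sh
# Replay of the workflow's version extraction and npm platform mapping.
GITHUB_REF="refs/tags/v1.2.3"
VERSION=${GITHUB_REF#refs/tags/v}   # strips the "refs/tags/v" prefix
echo "version: $VERSION"

pkg_name="databricks-mcp-server-windows-amd64"
os_name=${pkg_name#databricks-mcp-server-}

# CPU architecture: arm64 packages keep arm64, everything else maps to x64.
case "$os_name" in
  *arm64*) cpu_arch="arm64" ;;
  *)       cpu_arch="x64"   ;;
esac

# OS part: npm uses "win32" for Windows in the package.json "os" field.
case "$os_name" in
  windows*) os_name="win32"  ;;
  darwin*)  os_name="darwin" ;;
  linux*)   os_name="linux"  ;;
esac

echo "os: $os_name cpu: $cpu_arch"
```

Running it prints `version: 1.2.3` followed by `os: win32 cpu: x64`, matching the values the workflow would write into the platform package's `package.json`.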