# googleapis/genai-toolbox
This is page 24 of 35. Use http://codebase.md/googleapis/genai-toolbox?lines=false&page={x} to view the full context.

# Directory Structure

```
├── .ci
│   ├── continuous.release.cloudbuild.yaml
│   ├── generate_release_table.sh
│   ├── integration.cloudbuild.yaml
│   ├── quickstart_test
│   │   ├── go.integration.cloudbuild.yaml
│   │   ├── js.integration.cloudbuild.yaml
│   │   ├── py.integration.cloudbuild.yaml
│   │   ├── run_go_tests.sh
│   │   ├── run_js_tests.sh
│   │   ├── run_py_tests.sh
│   │   └── setup_hotels_sample.sql
│   ├── test_with_coverage.sh
│   └── versioned.release.cloudbuild.yaml
├── .github
│   ├── auto-label.yaml
│   ├── blunderbuss.yml
│   ├── CODEOWNERS
│   ├── header-checker-lint.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── question.yml
│   ├── label-sync.yml
│   ├── labels.yaml
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── release-please.yml
│   ├── renovate.json5
│   ├── sync-repo-settings.yaml
│   └── workflows
│       ├── cloud_build_failure_reporter.yml
│       ├── deploy_dev_docs.yaml
│       ├── deploy_previous_version_docs.yaml
│       ├── deploy_versioned_docs.yaml
│       ├── docs_deploy.yaml
│       ├── docs_preview_clean.yaml
│       ├── docs_preview_deploy.yaml
│       ├── lint.yaml
│       ├── schedule_reporter.yml
│       ├── sync-labels.yaml
│       └── tests.yaml
├── .gitignore
├── .gitmodules
├── .golangci.yaml
├── .hugo
│   ├── archetypes
│   │   └── default.md
│   ├── assets
│   │   ├── icons
│   │   │   └── logo.svg
│   │   └── scss
│   │       ├── _styles_project.scss
│   │       └── _variables_project.scss
│   ├── go.mod
│   ├── go.sum
│   ├── hugo.toml
│   ├── layouts
│   │   ├── _default
│   │   │   └── home.releases.releases
│   │   ├── index.llms-full.txt
│   │   ├── index.llms.txt
│   │   ├── partials
│   │   │   ├── hooks
│   │   │   │   └── head-end.html
│   │   │   ├── navbar-version-selector.html
│   │   │   ├── page-meta-links.html
│   │   │   └── td
│   │   │       └── render-heading.html
│   │   ├── robot.txt
│   │   └── shortcodes
│   │       ├── include.html
│   │       ├── ipynb.html
│   │       └── regionInclude.html
│   ├── package-lock.json
│   ├── package.json
│   └── static
│       ├── favicons
│       │   ├── android-chrome-192x192.png
│       │   ├── android-chrome-512x512.png
│       │   ├── apple-touch-icon.png
│       │   ├── favicon-16x16.png
│       │   ├── favicon-32x32.png
│       │   └── favicon.ico
│       └── js
│           └── w3.js
├── CHANGELOG.md
├── cmd
│   ├── options_test.go
│   ├── options.go
│   ├── root_test.go
│   ├── root.go
│   └── version.txt
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DEVELOPER.md
├── Dockerfile
├── docs
│   └── en
│       ├── _index.md
│       ├── about
│       │   ├── _index.md
│       │   └── faq.md
│       ├── concepts
│       │   ├── _index.md
│       │   └── telemetry
│       │       ├── index.md
│       │       ├── telemetry_flow.png
│       │       └── telemetry_traces.png
│       ├── getting-started
│       │   ├── _index.md
│       │   ├── colab_quickstart.ipynb
│       │   ├── configure.md
│       │   ├── introduction
│       │   │   ├── _index.md
│       │   │   └── architecture.png
│       │   ├── local_quickstart_go.md
│       │   ├── local_quickstart_js.md
│       │   ├── local_quickstart.md
│       │   ├── mcp_quickstart
│       │   │   ├── _index.md
│       │   │   ├── inspector_tools.png
│       │   │   └── inspector.png
│       │   └── quickstart
│       │       ├── go
│       │       │   ├── genAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── genkit
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── langchain
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── openAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   └── quickstart_test.go
│       │       ├── golden.txt
│       │       ├── js
│       │       │   ├── genAI
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── genkit
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── langchain
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── llamaindex
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   └── quickstart.test.js
│       │       ├── python
│       │       │   ├── __init__.py
│       │       │   ├── adk
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── core
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── langchain
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── llamaindex
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   └── quickstart_test.py
│       │       └── shared
│       │           ├── cloud_setup.md
│       │           ├── configure_toolbox.md
│       │           └── database_setup.md
│       ├── how-to
│       │   ├── _index.md
│       │   ├── connect_via_geminicli.md
│       │   ├── connect_via_mcp.md
│       │   ├── connect-ide
│       │   │   ├── _index.md
│       │   │   ├── alloydb_pg_admin_mcp.md
│       │   │   ├── alloydb_pg_mcp.md
│       │   │   ├── bigquery_mcp.md
│       │   │   ├── cloud_sql_mssql_admin_mcp.md
│       │   │   ├── cloud_sql_mssql_mcp.md
│       │   │   ├── cloud_sql_mysql_admin_mcp.md
│       │   │   ├── cloud_sql_mysql_mcp.md
│       │   │   ├── cloud_sql_pg_admin_mcp.md
│       │   │   ├── cloud_sql_pg_mcp.md
│       │   │   ├── firestore_mcp.md
│       │   │   ├── looker_mcp.md
│       │   │   ├── mssql_mcp.md
│       │   │   ├── mysql_mcp.md
│       │   │   ├── neo4j_mcp.md
│       │   │   ├── postgres_mcp.md
│       │   │   ├── spanner_mcp.md
│       │   │   └── sqlite_mcp.md
│       │   ├── deploy_docker.md
│       │   ├── deploy_gke.md
│       │   ├── deploy_toolbox.md
│       │   ├── export_telemetry.md
│       │   └── toolbox-ui
│       │       ├── edit-headers.gif
│       │       ├── edit-headers.png
│       │       ├── index.md
│       │       ├── optional-param-checked.png
│       │       ├── optional-param-unchecked.png
│       │       ├── run-tool.gif
│       │       ├── tools.png
│       │       └── toolsets.png
│       ├── reference
│       │   ├── _index.md
│       │   ├── cli.md
│       │   └── prebuilt-tools.md
│       ├── resources
│       │   ├── _index.md
│       │   ├── authServices
│       │   │   ├── _index.md
│       │   │   └── google.md
│       │   ├── sources
│       │   │   ├── _index.md
│       │   │   ├── alloydb-admin.md
│       │   │   ├── alloydb-pg.md
│       │   │   ├── bigquery.md
│       │   │   ├── bigtable.md
│       │   │   ├── cassandra.md
│       │   │   ├── clickhouse.md
│       │   │   ├── cloud-monitoring.md
│       │   │   ├── cloud-sql-admin.md
│       │   │   ├── cloud-sql-mssql.md
│       │   │   ├── cloud-sql-mysql.md
│       │   │   ├── cloud-sql-pg.md
│       │   │   ├── couchbase.md
│       │   │   ├── dataplex.md
│       │   │   ├── dgraph.md
│       │   │   ├── firebird.md
│       │   │   ├── firestore.md
│       │   │   ├── http.md
│       │   │   ├── looker.md
│       │   │   ├── mongodb.md
│       │   │   ├── mssql.md
│       │   │   ├── mysql.md
│       │   │   ├── neo4j.md
│       │   │   ├── oceanbase.md
│       │   │   ├── oracle.md
│       │   │   ├── postgres.md
│       │   │   ├── redis.md
│       │   │   ├── spanner.md
│       │   │   ├── sqlite.md
│       │   │   ├── tidb.md
│       │   │   ├── trino.md
│       │   │   ├── valkey.md
│       │   │   └── yugabytedb.md
│       │   └── tools
│       │       ├── _index.md
│       │       ├── alloydb
│       │       │   ├── _index.md
│       │       │   ├── alloydb-create-cluster.md
│       │       │   ├── alloydb-create-instance.md
│       │       │   ├── alloydb-create-user.md
│       │       │   ├── alloydb-get-cluster.md
│       │       │   ├── alloydb-get-instance.md
│       │       │   ├── alloydb-get-user.md
│       │       │   ├── alloydb-list-clusters.md
│       │       │   ├── alloydb-list-instances.md
│       │       │   ├── alloydb-list-users.md
│       │       │   └── alloydb-wait-for-operation.md
│       │       ├── alloydbainl
│       │       │   ├── _index.md
│       │       │   └── alloydb-ai-nl.md
│       │       ├── bigquery
│       │       │   ├── _index.md
│       │       │   ├── bigquery-analyze-contribution.md
│       │       │   ├── bigquery-conversational-analytics.md
│       │       │   ├── bigquery-execute-sql.md
│       │       │   ├── bigquery-forecast.md
│       │       │   ├── bigquery-get-dataset-info.md
│       │       │   ├── bigquery-get-table-info.md
│       │       │   ├── bigquery-list-dataset-ids.md
│       │       │   ├── bigquery-list-table-ids.md
│       │       │   ├── bigquery-search-catalog.md
│       │       │   └── bigquery-sql.md
│       │       ├── bigtable
│       │       │   ├── _index.md
│       │       │   └── bigtable-sql.md
│       │       ├── cassandra
│       │       │   ├── _index.md
│       │       │   └── cassandra-cql.md
│       │       ├── clickhouse
│       │       │   ├── _index.md
│       │       │   ├── clickhouse-execute-sql.md
│       │       │   ├── clickhouse-list-databases.md
│       │       │   ├── clickhouse-list-tables.md
│       │       │   └── clickhouse-sql.md
│       │       ├── cloudmonitoring
│       │       │   ├── _index.md
│       │       │   └── cloud-monitoring-query-prometheus.md
│       │       ├── cloudsql
│       │       │   ├── _index.md
│       │       │   ├── cloudsqlcreatedatabase.md
│       │       │   ├── cloudsqlcreateusers.md
│       │       │   ├── cloudsqlgetinstances.md
│       │       │   ├── cloudsqllistdatabases.md
│       │       │   ├── cloudsqllistinstances.md
│       │       │   ├── cloudsqlmssqlcreateinstance.md
│       │       │   ├── cloudsqlmysqlcreateinstance.md
│       │       │   ├── cloudsqlpgcreateinstances.md
│       │       │   └── cloudsqlwaitforoperation.md
│       │       ├── couchbase
│       │       │   ├── _index.md
│       │       │   └── couchbase-sql.md
│       │       ├── dataform
│       │       │   ├── _index.md
│       │       │   └── dataform-compile-local.md
│       │       ├── dataplex
│       │       │   ├── _index.md
│       │       │   ├── dataplex-lookup-entry.md
│       │       │   ├── dataplex-search-aspect-types.md
│       │       │   └── dataplex-search-entries.md
│       │       ├── dgraph
│       │       │   ├── _index.md
│       │       │   └── dgraph-dql.md
│       │       ├── firebird
│       │       │   ├── _index.md
│       │       │   ├── firebird-execute-sql.md
│       │       │   └── firebird-sql.md
│       │       ├── firestore
│       │       │   ├── _index.md
│       │       │   ├── firestore-add-documents.md
│       │       │   ├── firestore-delete-documents.md
│       │       │   ├── firestore-get-documents.md
│       │       │   ├── firestore-get-rules.md
│       │       │   ├── firestore-list-collections.md
│       │       │   ├── firestore-query-collection.md
│       │       │   ├── firestore-query.md
│       │       │   ├── firestore-update-document.md
│       │       │   └── firestore-validate-rules.md
│       │       ├── http
│       │       │   ├── _index.md
│       │       │   └── http.md
│       │       ├── looker
│       │       │   ├── _index.md
│       │       │   ├── looker-add-dashboard-element.md
│       │       │   ├── looker-conversational-analytics.md
│       │       │   ├── looker-create-project-file.md
│       │       │   ├── looker-delete-project-file.md
│       │       │   ├── looker-dev-mode.md
│       │       │   ├── looker-get-dashboards.md
│       │       │   ├── looker-get-dimensions.md
│       │       │   ├── looker-get-explores.md
│       │       │   ├── looker-get-filters.md
│       │       │   ├── looker-get-looks.md
│       │       │   ├── looker-get-measures.md
│       │       │   ├── looker-get-models.md
│       │       │   ├── looker-get-parameters.md
│       │       │   ├── looker-get-project-file.md
│       │       │   ├── looker-get-project-files.md
│       │       │   ├── looker-get-projects.md
│       │       │   ├── looker-health-analyze.md
│       │       │   ├── looker-health-pulse.md
│       │       │   ├── looker-health-vacuum.md
│       │       │   ├── looker-make-dashboard.md
│       │       │   ├── looker-make-look.md
│       │       │   ├── looker-query-sql.md
│       │       │   ├── looker-query-url.md
│       │       │   ├── looker-query.md
│       │       │   ├── looker-run-look.md
│       │       │   └── looker-update-project-file.md
│       │       ├── mongodb
│       │       │   ├── _index.md
│       │       │   ├── mongodb-aggregate.md
│       │       │   ├── mongodb-delete-many.md
│       │       │   ├── mongodb-delete-one.md
│       │       │   ├── mongodb-find-one.md
│       │       │   ├── mongodb-find.md
│       │       │   ├── mongodb-insert-many.md
│       │       │   ├── mongodb-insert-one.md
│       │       │   ├── mongodb-update-many.md
│       │       │   └── mongodb-update-one.md
│       │       ├── mssql
│       │       │   ├── _index.md
│       │       │   ├── mssql-execute-sql.md
│       │       │   ├── mssql-list-tables.md
│       │       │   └── mssql-sql.md
│       │       ├── mysql
│       │       │   ├── _index.md
│       │       │   ├── mysql-execute-sql.md
│       │       │   ├── mysql-list-active-queries.md
│       │       │   ├── mysql-list-table-fragmentation.md
│       │       │   ├── mysql-list-tables-missing-unique-indexes.md
│       │       │   ├── mysql-list-tables.md
│       │       │   └── mysql-sql.md
│       │       ├── neo4j
│       │       │   ├── _index.md
│       │       │   ├── neo4j-cypher.md
│       │       │   ├── neo4j-execute-cypher.md
│       │       │   └── neo4j-schema.md
│       │       ├── oceanbase
│       │       │   ├── _index.md
│       │       │   ├── oceanbase-execute-sql.md
│       │       │   └── oceanbase-sql.md
│       │       ├── oracle
│       │       │   ├── _index.md
│       │       │   ├── oracle-execute-sql.md
│       │       │   └── oracle-sql.md
│       │       ├── postgres
│       │       │   ├── _index.md
│       │       │   ├── postgres-execute-sql.md
│       │       │   ├── postgres-list-active-queries.md
│       │       │   ├── postgres-list-available-extensions.md
│       │       │   ├── postgres-list-installed-extensions.md
│       │       │   ├── postgres-list-tables.md
│       │       │   └── postgres-sql.md
│       │       ├── redis
│       │       │   ├── _index.md
│       │       │   └── redis.md
│       │       ├── spanner
│       │       │   ├── _index.md
│       │       │   ├── spanner-execute-sql.md
│       │       │   ├── spanner-list-tables.md
│       │       │   └── spanner-sql.md
│       │       ├── sqlite
│       │       │   ├── _index.md
│       │       │   ├── sqlite-execute-sql.md
│       │       │   └── sqlite-sql.md
│       │       ├── tidb
│       │       │   ├── _index.md
│       │       │   ├── tidb-execute-sql.md
│       │       │   └── tidb-sql.md
│       │       ├── trino
│       │       │   ├── _index.md
│       │       │   ├── trino-execute-sql.md
│       │       │   └── trino-sql.md
│       │       ├── utility
│       │       │   ├── _index.md
│       │       │   └── wait.md
│       │       ├── valkey
│       │       │   ├── _index.md
│       │       │   └── valkey.md
│       │       └── yuagbytedb
│       │           ├── _index.md
│       │           └── yugabytedb-sql.md
│       ├── samples
│       │   ├── _index.md
│       │   ├── alloydb
│       │   │   ├── _index.md
│       │   │   ├── ai-nl
│       │   │   │   ├── alloydb_ai_nl.ipynb
│       │   │   │   └── index.md
│       │   │   └── mcp_quickstart.md
│       │   ├── bigquery
│       │   │   ├── _index.md
│       │   │   ├── colab_quickstart_bigquery.ipynb
│       │   │   ├── local_quickstart.md
│       │   │   └── mcp_quickstart
│       │   │       ├── _index.md
│       │   │       ├── inspector_tools.png
│       │   │       └── inspector.png
│       │   └── looker
│       │       ├── _index.md
│       │       ├── looker_gemini_oauth
│       │       │   ├── _index.md
│       │       │   ├── authenticated.png
│       │       │   ├── authorize.png
│       │       │   └── registration.png
│       │       ├── looker_gemini.md
│       │       └── looker_mcp_inspector
│       │           ├── _index.md
│       │           ├── inspector_tools.png
│       │           └── inspector.png
│       └── sdks
│           ├── _index.md
│           ├── go-sdk.md
│           ├── js-sdk.md
│           └── python-sdk.md
├── gemini-extension.json
├── go.mod
├── go.sum
├── internal
│   ├── auth
│   │   ├── auth.go
│   │   └── google
│   │       └── google.go
│   ├── log
│   │   ├── handler.go
│   │   ├── log_test.go
│   │   ├── log.go
│   │   └── logger.go
│   ├── prebuiltconfigs
│   │   ├── prebuiltconfigs_test.go
│   │   ├── prebuiltconfigs.go
│   │   └── tools
│   │       ├── alloydb-postgres-admin.yaml
│   │       ├── alloydb-postgres-observability.yaml
│   │       ├── alloydb-postgres.yaml
│   │       ├── bigquery.yaml
│   │       ├── clickhouse.yaml
│   │       ├── cloud-sql-mssql-admin.yaml
│   │       ├── cloud-sql-mssql-observability.yaml
│   │       ├── cloud-sql-mssql.yaml
│   │       ├── cloud-sql-mysql-admin.yaml
│   │       ├── cloud-sql-mysql-observability.yaml
│   │       ├── cloud-sql-mysql.yaml
│   │       ├── cloud-sql-postgres-admin.yaml
│   │       ├── cloud-sql-postgres-observability.yaml
│   │       ├── cloud-sql-postgres.yaml
│   │       ├── dataplex.yaml
│   │       ├── firestore.yaml
│   │       ├── looker-conversational-analytics.yaml
│   │       ├── looker.yaml
│   │       ├── mssql.yaml
│   │       ├── mysql.yaml
│   │       ├── neo4j.yaml
│   │       ├── oceanbase.yaml
│   │       ├── postgres.yaml
│   │       ├── spanner-postgres.yaml
│   │       ├── spanner.yaml
│   │       └── sqlite.yaml
│   ├── server
│   │   ├── api_test.go
│   │   ├── api.go
│   │   ├── common_test.go
│   │   ├── config.go
│   │   ├── mcp
│   │   │   ├── jsonrpc
│   │   │   │   ├── jsonrpc_test.go
│   │   │   │   └── jsonrpc.go
│   │   │   ├── mcp.go
│   │   │   ├── util
│   │   │   │   └── lifecycle.go
│   │   │   ├── v20241105
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   ├── v20250326
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   └── v20250618
│   │   │       ├── method.go
│   │   │       └── types.go
│   │   ├── mcp_test.go
│   │   ├── mcp.go
│   │   ├── server_test.go
│   │   ├── server.go
│   │   ├── static
│   │   │   ├── assets
│   │   │   │   └── mcptoolboxlogo.png
│   │   │   ├── css
│   │   │   │   └── style.css
│   │   │   ├── index.html
│   │   │   ├── js
│   │   │   │   ├── auth.js
│   │   │   │   ├── loadTools.js
│   │   │   │   ├── mainContent.js
│   │   │   │   ├── navbar.js
│   │   │   │   ├── runTool.js
│   │   │   │   ├── toolDisplay.js
│   │   │   │   ├── tools.js
│   │   │   │   └── toolsets.js
│   │   │   ├── tools.html
│   │   │   └── toolsets.html
│   │   ├── web_test.go
│   │   └── web.go
│   ├── sources
│   │   ├── alloydbadmin
│   │   │   ├── alloydbadmin_test.go
│   │   │   └── alloydbadmin.go
│   │   ├── alloydbpg
│   │   │   ├── alloydb_pg_test.go
│   │   │   └── alloydb_pg.go
│   │   ├── bigquery
│   │   │   ├── bigquery_test.go
│   │   │   └── bigquery.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   ├── cassandra_test.go
│   │   │   └── cassandra.go
│   │   ├── clickhouse
│   │   │   ├── clickhouse_test.go
│   │   │   └── clickhouse.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloud_monitoring_test.go
│   │   │   └── cloud_monitoring.go
│   │   ├── cloudsqladmin
│   │   │   ├── cloud_sql_admin_test.go
│   │   │   └── cloud_sql_admin.go
│   │   ├── cloudsqlmssql
│   │   │   ├── cloud_sql_mssql_test.go
│   │   │   └── cloud_sql_mssql.go
│   │   ├── cloudsqlmysql
│   │   │   ├── cloud_sql_mysql_test.go
│   │   │   └── cloud_sql_mysql.go
│   │   ├── cloudsqlpg
│   │   │   ├── cloud_sql_pg_test.go
│   │   │   └── cloud_sql_pg.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataplex
│   │   │   ├── dataplex_test.go
│   │   │   └── dataplex.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── dialect.go
│   │   ├── firebird
│   │   │   ├── firebird_test.go
│   │   │   └── firebird.go
│   │   ├── firestore
│   │   │   ├── firestore_test.go
│   │   │   └── firestore.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── ip_type.go
│   │   ├── looker
│   │   │   ├── looker_test.go
│   │   │   └── looker.go
│   │   ├── mongodb
│   │   │   ├── mongodb_test.go
│   │   │   └── mongodb.go
│   │   ├── mssql
│   │   │   ├── mssql_test.go
│   │   │   └── mssql.go
│   │   ├── mysql
│   │   │   ├── mysql_test.go
│   │   │   └── mysql.go
│   │   ├── neo4j
│   │   │   ├── neo4j_test.go
│   │   │   └── neo4j.go
│   │   ├── oceanbase
│   │   │   ├── oceanbase_test.go
│   │   │   └── oceanbase.go
│   │   ├── oracle
│   │   │   └── oracle.go
│   │   ├── postgres
│   │   │   ├── postgres_test.go
│   │   │   └── postgres.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── sources.go
│   │   ├── spanner
│   │   │   ├── spanner_test.go
│   │   │   └── spanner.go
│   │   ├── sqlite
│   │   │   ├── sqlite_test.go
│   │   │   └── sqlite.go
│   │   ├── tidb
│   │   │   ├── tidb_test.go
│   │   │   └── tidb.go
│   │   ├── trino
│   │   │   ├── trino_test.go
│   │   │   └── trino.go
│   │   ├── util.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedb
│   │       ├── yugabytedb_test.go
│   │       └── yugabytedb.go
│   ├── telemetry
│   │   ├── instrumentation.go
│   │   └── telemetry.go
│   ├── testutils
│   │   └── testutils.go
│   ├── tools
│   │   ├── alloydb
│   │   │   ├── alloydbcreatecluster
│   │   │   │   ├── alloydbcreatecluster_test.go
│   │   │   │   └── alloydbcreatecluster.go
│   │   │   ├── alloydbcreateinstance
│   │   │   │   ├── alloydbcreateinstance_test.go
│   │   │   │   └── alloydbcreateinstance.go
│   │   │   ├── alloydbcreateuser
│   │   │   │   ├── alloydbcreateuser_test.go
│   │   │   │   └── alloydbcreateuser.go
│   │   │   ├── alloydbgetcluster
│   │   │   │   ├── alloydbgetcluster_test.go
│   │   │   │   └── alloydbgetcluster.go
│   │   │   ├── alloydbgetinstance
│   │   │   │   ├── alloydbgetinstance_test.go
│   │   │   │   └── alloydbgetinstance.go
│   │   │   ├── alloydbgetuser
│   │   │   │   ├── alloydbgetuser_test.go
│   │   │   │   └── alloydbgetuser.go
│   │   │   ├── alloydblistclusters
│   │   │   │   ├── alloydblistclusters_test.go
│   │   │   │   └── alloydblistclusters.go
│   │   │   ├── alloydblistinstances
│   │   │   │   ├── alloydblistinstances_test.go
│   │   │   │   └── alloydblistinstances.go
│   │   │   ├── alloydblistusers
│   │   │   │   ├── alloydblistusers_test.go
│   │   │   │   └── alloydblistusers.go
│   │   │   └── alloydbwaitforoperation
│   │   │       ├── alloydbwaitforoperation_test.go
│   │   │       └── alloydbwaitforoperation.go
│   │   ├── alloydbainl
│   │   │   ├── alloydbainl_test.go
│   │   │   └── alloydbainl.go
│   │   ├── bigquery
│   │   │   ├── bigqueryanalyzecontribution
│   │   │   │   ├── bigqueryanalyzecontribution_test.go
│   │   │   │   └── bigqueryanalyzecontribution.go
│   │   │   ├── bigquerycommon
│   │   │   │   ├── table_name_parser_test.go
│   │   │   │   ├── table_name_parser.go
│   │   │   │   └── util.go
│   │   │   ├── bigqueryconversationalanalytics
│   │   │   │   ├── bigqueryconversationalanalytics_test.go
│   │   │   │   └── bigqueryconversationalanalytics.go
│   │   │   ├── bigqueryexecutesql
│   │   │   │   ├── bigqueryexecutesql_test.go
│   │   │   │   └── bigqueryexecutesql.go
│   │   │   ├── bigqueryforecast
│   │   │   │   ├── bigqueryforecast_test.go
│   │   │   │   └── bigqueryforecast.go
│   │   │   ├── bigquerygetdatasetinfo
│   │   │   │   ├── bigquerygetdatasetinfo_test.go
│   │   │   │   └── bigquerygetdatasetinfo.go
│   │   │   ├── bigquerygettableinfo
│   │   │   │   ├── bigquerygettableinfo_test.go
│   │   │   │   └── bigquerygettableinfo.go
│   │   │   ├── bigquerylistdatasetids
│   │   │   │   ├── bigquerylistdatasetids_test.go
│   │   │   │   └── bigquerylistdatasetids.go
│   │   │   ├── bigquerylisttableids
│   │   │   │   ├── bigquerylisttableids_test.go
│   │   │   │   └── bigquerylisttableids.go
│   │   │   ├── bigquerysearchcatalog
│   │   │   │   ├── bigquerysearchcatalog_test.go
│   │   │   │   └── bigquerysearchcatalog.go
│   │   │   └── bigquerysql
│   │   │       ├── bigquerysql_test.go
│   │   │       └── bigquerysql.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   └── cassandracql
│   │   │       ├── cassandracql_test.go
│   │   │       └── cassandracql.go
│   │   ├── clickhouse
│   │   │   ├── clickhouseexecutesql
│   │   │   │   ├── clickhouseexecutesql_test.go
│   │   │   │   └── clickhouseexecutesql.go
│   │   │   ├── clickhouselistdatabases
│   │   │   │   ├── clickhouselistdatabases_test.go
│   │   │   │   └── clickhouselistdatabases.go
│   │   │   ├── clickhouselisttables
│   │   │   │   ├── clickhouselisttables_test.go
│   │   │   │   └── clickhouselisttables.go
│   │   │   └── clickhousesql
│   │   │       ├── clickhousesql_test.go
│   │   │       └── clickhousesql.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloudmonitoring_test.go
│   │   │   └── cloudmonitoring.go
│   │   ├── cloudsql
│   │   │   ├── cloudsqlcreatedatabase
│   │   │   │   ├── cloudsqlcreatedatabase_test.go
│   │   │   │   └── cloudsqlcreatedatabase.go
│   │   │   ├── cloudsqlcreateusers
│   │   │   │   ├── cloudsqlcreateusers_test.go
│   │   │   │   └── cloudsqlcreateusers.go
│   │   │   ├── cloudsqlgetinstances
│   │   │   │   ├── cloudsqlgetinstances_test.go
│   │   │   │   └── cloudsqlgetinstances.go
│   │   │   ├── cloudsqllistdatabases
│   │   │   │   ├── cloudsqllistdatabases_test.go
│   │   │   │   └── cloudsqllistdatabases.go
│   │   │   ├── cloudsqllistinstances
│   │   │   │   ├── cloudsqllistinstances_test.go
│   │   │   │   └── cloudsqllistinstances.go
│   │   │   └── cloudsqlwaitforoperation
│   │   │       ├── cloudsqlwaitforoperation_test.go
│   │   │       └── cloudsqlwaitforoperation.go
│   │   ├── cloudsqlmssql
│   │   │   └── cloudsqlmssqlcreateinstance
│   │   │       ├── cloudsqlmssqlcreateinstance_test.go
│   │   │       └── cloudsqlmssqlcreateinstance.go
│   │   ├── cloudsqlmysql
│   │   │   └── cloudsqlmysqlcreateinstance
│   │   │       ├── cloudsqlmysqlcreateinstance_test.go
│   │   │       └── cloudsqlmysqlcreateinstance.go
│   │   ├── cloudsqlpg
│   │   │   └── cloudsqlpgcreateinstances
│   │   │       ├── cloudsqlpgcreateinstances_test.go
│   │   │       └── cloudsqlpgcreateinstances.go
│   │   ├── common_test.go
│   │   ├── common.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataform
│   │   │   └── dataformcompilelocal
│   │   │       ├── dataformcompilelocal_test.go
│   │   │       └── dataformcompilelocal.go
│   │   ├── dataplex
│   │   │   ├── dataplexlookupentry
│   │   │   │   ├── dataplexlookupentry_test.go
│   │   │   │   └── dataplexlookupentry.go
│   │   │   ├── dataplexsearchaspecttypes
│   │   │   │   ├── dataplexsearchaspecttypes_test.go
│   │   │   │   └── dataplexsearchaspecttypes.go
│   │   │   └── dataplexsearchentries
│   │   │       ├── dataplexsearchentries_test.go
│   │   │       └── dataplexsearchentries.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── firebird
│   │   │   ├── firebirdexecutesql
│   │   │   │   ├── firebirdexecutesql_test.go
│   │   │   │   └── firebirdexecutesql.go
│   │   │   └── firebirdsql
│   │   │       ├── firebirdsql_test.go
│   │   │       └── firebirdsql.go
│   │   ├── firestore
│   │   │   ├── firestoreadddocuments
│   │   │   │   ├── firestoreadddocuments_test.go
│   │   │   │   └── firestoreadddocuments.go
│   │   │   ├── firestoredeletedocuments
│   │   │   │   ├── firestoredeletedocuments_test.go
│   │   │   │   └── firestoredeletedocuments.go
│   │   │   ├── firestoregetdocuments
│   │   │   │   ├── firestoregetdocuments_test.go
│   │   │   │   └── firestoregetdocuments.go
│   │   │   ├── firestoregetrules
│   │   │   │   ├── firestoregetrules_test.go
│   │   │   │   └── firestoregetrules.go
│   │   │   ├── firestorelistcollections
│   │   │   │   ├── firestorelistcollections_test.go
│   │   │   │   └── firestorelistcollections.go
│   │   │   ├── firestorequery
│   │   │   │   ├── firestorequery_test.go
│   │   │   │   └── firestorequery.go
│   │   │   ├── firestorequerycollection
│   │   │   │   ├── firestorequerycollection_test.go
│   │   │   │   └── firestorequerycollection.go
│   │   │   ├── firestoreupdatedocument
│   │   │   │   ├── firestoreupdatedocument_test.go
│   │   │   │   └── firestoreupdatedocument.go
│   │   │   ├── firestorevalidaterules
│   │   │   │   ├── firestorevalidaterules_test.go
│   │   │   │   └── firestorevalidaterules.go
│   │   │   └── util
│   │   │       ├── converter_test.go
│   │   │       ├── converter.go
│   │   │       ├── validator_test.go
│   │   │       └── validator.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── http_method.go
│   │   ├── looker
│   │   │   ├── lookeradddashboardelement
│   │   │   │   ├── lookeradddashboardelement_test.go
│   │   │   │   └── lookeradddashboardelement.go
│   │   │   ├── lookercommon
│   │   │   │   ├── lookercommon_test.go
│   │   │   │   └── lookercommon.go
│   │   │   ├── lookerconversationalanalytics
│   │   │   │   ├── lookerconversationalanalytics_test.go
│   │   │   │   └── lookerconversationalanalytics.go
│   │   │   ├── lookercreateprojectfile
│   │   │   │   ├── lookercreateprojectfile_test.go
│   │   │   │   └── lookercreateprojectfile.go
│   │   │   ├── lookerdeleteprojectfile
│   │   │   │   ├── lookerdeleteprojectfile_test.go
│   │   │   │   └── lookerdeleteprojectfile.go
│   │   │   ├── lookerdevmode
│   │   │   │   ├── lookerdevmode_test.go
│   │   │   │   └── lookerdevmode.go
│   │   │   ├── lookergetdashboards
│   │   │   │   ├── lookergetdashboards_test.go
│   │   │   │   └── lookergetdashboards.go
│   │   │   ├── lookergetdimensions
│   │   │   │   ├── lookergetdimensions_test.go
│   │   │   │   └── lookergetdimensions.go
│   │   │   ├── lookergetexplores
│   │   │   │   ├── lookergetexplores_test.go
│   │   │   │   └── lookergetexplores.go
│   │   │   ├── lookergetfilters
│   │   │   │   ├── lookergetfilters_test.go
│   │   │   │   └── lookergetfilters.go
│   │   │   ├── lookergetlooks
│   │   │   │   ├── lookergetlooks_test.go
│   │   │   │   └── lookergetlooks.go
│   │   │   ├── lookergetmeasures
│   │   │   │   ├── lookergetmeasures_test.go
│   │   │   │   └── lookergetmeasures.go
│   │   │   ├── lookergetmodels
│   │   │   │   ├── lookergetmodels_test.go
│   │   │   │   └── lookergetmodels.go
│   │   │   ├── lookergetparameters
│   │   │   │   ├── lookergetparameters_test.go
│   │   │   │   └── lookergetparameters.go
│   │   │   ├── lookergetprojectfile
│   │   │   │   ├── lookergetprojectfile_test.go
│   │   │   │   └── lookergetprojectfile.go
│   │   │   ├── lookergetprojectfiles
│   │   │   │   ├── lookergetprojectfiles_test.go
│   │   │   │   └── lookergetprojectfiles.go
│   │   │   ├── lookergetprojects
│   │   │   │   ├── lookergetprojects_test.go
│   │   │   │   └── lookergetprojects.go
│   │   │   ├── lookerhealthanalyze
│   │   │   │   ├── lookerhealthanalyze_test.go
│   │   │   │   └── lookerhealthanalyze.go
│   │   │   ├── lookerhealthpulse
│   │   │   │   ├── lookerhealthpulse_test.go
│   │   │   │   └── lookerhealthpulse.go
│   │   │   ├── lookerhealthvacuum
│   │   │   │   ├── lookerhealthvacuum_test.go
│   │   │   │   └── lookerhealthvacuum.go
│   │   │   ├── lookermakedashboard
│   │   │   │   ├── lookermakedashboard_test.go
│   │   │   │   └── lookermakedashboard.go
│   │   │   ├── lookermakelook
│   │   │   │   ├── lookermakelook_test.go
│   │   │   │   └── lookermakelook.go
│   │   │   ├── lookerquery
│   │   │   │   ├── lookerquery_test.go
│   │   │   │   └── lookerquery.go
│   │   │   ├── lookerquerysql
│   │   │   │   ├── lookerquerysql_test.go
│   │   │   │   └── lookerquerysql.go
│   │   │   ├── lookerqueryurl
│   │   │   │   ├── lookerqueryurl_test.go
│   │   │   │   └── lookerqueryurl.go
│   │   │   ├── lookerrunlook
│   │   │   │   ├── lookerrunlook_test.go
│   │   │   │   └── lookerrunlook.go
│   │   │   └── lookerupdateprojectfile
│   │   │       ├── lookerupdateprojectfile_test.go
│   │   │       └── lookerupdateprojectfile.go
│   │   ├── mongodb
│   │   │   ├── mongodbaggregate
│   │   │   │   ├── mongodbaggregate_test.go
│   │   │   │   └── mongodbaggregate.go
│   │   │   ├── mongodbdeletemany
│   │   │   │   ├── mongodbdeletemany_test.go
│   │   │   │   └── mongodbdeletemany.go
│   │   │   ├── mongodbdeleteone
│   │   │   │   ├── mongodbdeleteone_test.go
│   │   │   │   └── mongodbdeleteone.go
│   │   │   ├── mongodbfind
│   │   │   │   ├── mongodbfind_test.go
│   │   │   │   └── mongodbfind.go
│   │   │   ├── mongodbfindone
│   │   │   │   ├── mongodbfindone_test.go
│   │   │   │   └── mongodbfindone.go
│   │   │   ├── mongodbinsertmany
│   │   │   │   ├── mongodbinsertmany_test.go
│   │   │   │   └── mongodbinsertmany.go
│   │   │   ├── mongodbinsertone
│   │   │   │   ├── mongodbinsertone_test.go
│   │   │   │   └── mongodbinsertone.go
│   │   │   ├── mongodbupdatemany
│   │   │   │   ├── mongodbupdatemany_test.go
│   │   │   │   └── mongodbupdatemany.go
│   │   │   └── mongodbupdateone
│   │   │       ├── mongodbupdateone_test.go
│   │   │       └── mongodbupdateone.go
│   │   ├── mssql
│   │   │   ├── mssqlexecutesql
│   │   │   │   ├── mssqlexecutesql_test.go
│   │   │   │   └── mssqlexecutesql.go
│   │   │   ├── mssqllisttables
│   │   │   │   ├── mssqllisttables_test.go
│   │   │   │   └── mssqllisttables.go
│   │   │   └── mssqlsql
│   │   │       ├── mssqlsql_test.go
│   │   │       └── mssqlsql.go
│   │   ├── mysql
│   │   │   ├── mysqlcommon
│   │   │   │   └── mysqlcommon.go
│   │   │   ├── mysqlexecutesql
│   │   │   │   ├── mysqlexecutesql_test.go
│   │   │   │   └── mysqlexecutesql.go
│   │   │   ├── mysqllistactivequeries
│   │   │   │   ├── mysqllistactivequeries_test.go
│   │   │   │   └── mysqllistactivequeries.go
│   │   │   ├── mysqllisttablefragmentation
│   │   │   │   ├── mysqllisttablefragmentation_test.go
│   │   │   │   └── mysqllisttablefragmentation.go
│   │   │   ├── mysqllisttables
│   │   │   │   ├── mysqllisttables_test.go
│   │   │   │   └── mysqllisttables.go
│   │   │   ├── mysqllisttablesmissinguniqueindexes
│   │   │   │   ├── mysqllisttablesmissinguniqueindexes_test.go
│   │   │   │   └── mysqllisttablesmissinguniqueindexes.go
│   │   │   └── mysqlsql
│   │   │       ├── mysqlsql_test.go
│   │   │       └── mysqlsql.go
│   │   ├── neo4j
│   │   │   ├── neo4jcypher
│   │   │   │   ├── neo4jcypher_test.go
│   │   │   │   └── neo4jcypher.go
│   │   │   ├── neo4jexecutecypher
│   │   │   │   ├── classifier
│   │   │   │   │   ├── classifier_test.go
│   │   │   │   │   └── classifier.go
│   │   │   │   ├── neo4jexecutecypher_test.go
│   │   │   │   └── neo4jexecutecypher.go
│   │   │   └── neo4jschema
│   │   │       ├── cache
│   │   │       │   ├── cache_test.go
│   │   │       │   └── cache.go
│   │   │       ├── helpers
│   │   │       │   ├── helpers_test.go
│   │   │       │   └── helpers.go
│   │   │       ├── neo4jschema_test.go
│   │   │       ├── neo4jschema.go
│   │   │       └── types
│   │   │           └── types.go
│   │   ├── oceanbase
│   │   │   ├── oceanbaseexecutesql
│   │   │   │   ├── oceanbaseexecutesql_test.go
│   │   │   │   └── oceanbaseexecutesql.go
│   │   │   └── oceanbasesql
│   │   │       ├── oceanbasesql_test.go
│   │   │       └── oceanbasesql.go
│   │   ├── oracle
│   │   │   ├── oracleexecutesql
│   │   │   │   └── oracleexecutesql.go
│   │   │   └── oraclesql
│   │   │       └── oraclesql.go
│   │   ├── parameters_test.go
│   │   ├── parameters.go
│   │   ├── postgres
│   │   │   ├── postgresexecutesql
│   │   │   │   ├── postgresexecutesql_test.go
│   │   │   │   └── postgresexecutesql.go
│   │   │   ├── postgreslistactivequeries
│   │   │   │   ├── postgreslistactivequeries_test.go
│   │   │   │   └── postgreslistactivequeries.go
│   │   │   ├── postgreslistavailableextensions
│   │   │   │   ├── postgreslistavailableextensions_test.go
│   │   │   │   └── postgreslistavailableextensions.go
│   │   │   ├── postgreslistinstalledextensions
│   │   │   │   ├── postgreslistinstalledextensions_test.go
│   │   │   │   └── postgreslistinstalledextensions.go
│   │   │   ├── postgreslisttables
│   │   │   │   ├── postgreslisttables_test.go
│   │   │   │   └── postgreslisttables.go
│   │   │   └── postgressql
│   │   │       ├── postgressql_test.go
│   │   │       └── postgressql.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── spanner
│   │   │   ├── spannerexecutesql
│   │   │   │   ├── spannerexecutesql_test.go
│   │   │   │   └── spannerexecutesql.go
│   │   │   ├── spannerlisttables
│   │   │   │   ├── spannerlisttables_test.go
│   │   │   │   └── spannerlisttables.go
│   │   │   └── spannersql
│   │   │       ├── spanner_test.go
│   │   │       └── spannersql.go
│   │   ├── sqlite
│   │   │   ├── sqliteexecutesql
│   │   │   │   ├── sqliteexecutesql_test.go
│   │   │   │   └── sqliteexecutesql.go
│   │   │   └── sqlitesql
│   │   │       ├── sqlitesql_test.go
│   │   │       └── sqlitesql.go
│   │   ├── tidb
│   │   │   ├── tidbexecutesql
│   │   │   │   ├── tidbexecutesql_test.go
│   │   │   │   └── tidbexecutesql.go
│   │   │   └── tidbsql
│   │   │       ├── tidbsql_test.go
│   │   │       └── tidbsql.go
│   │   ├── tools_test.go
│   │   ├── tools.go
│   │   ├── toolsets.go
│   │   ├── trino
│   │   │   ├── trinoexecutesql
│   │   │   │   ├── trinoexecutesql_test.go
│   │   │   │   └── trinoexecutesql.go
│   │   │   └── trinosql
│   │   │       ├── trinosql_test.go
│   │   │       └── trinosql.go
│   │   ├── utility
│   │   │   └── wait
│   │   │       ├── wait_test.go
│   │   │       └── wait.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedbsql
│   │       ├── yugabytedbsql_test.go
│   │       └── yugabytedbsql.go
│   └── util
│       └── util.go
├── LICENSE
├── logo.png
├── main.go
├── MCP-TOOLBOX-EXTENSION.md
├── README.md
└── tests
    ├── alloydb
    │   ├── alloydb_integration_test.go
    │   └── alloydb_wait_for_operation_test.go
    ├── alloydbainl
    │   └── alloydb_ai_nl_integration_test.go
    ├── alloydbpg
    │   └── alloydb_pg_integration_test.go
    ├── auth.go
    ├── bigquery
    │   └── bigquery_integration_test.go
    ├── bigtable
    │   └── bigtable_integration_test.go
    ├── cassandra
    │   └── cassandra_integration_test.go
    ├── clickhouse
    │   └── clickhouse_integration_test.go
    ├── cloudmonitoring
    │   └── cloud_monitoring_integration_test.go
    ├── cloudsql
    │   ├── cloud_sql_create_database_test.go
    │   ├── cloud_sql_create_users_test.go
    │   ├── cloud_sql_get_instances_test.go
    │   ├── cloud_sql_list_databases_test.go
    │   ├── cloudsql_list_instances_test.go
    │   └── cloudsql_wait_for_operation_test.go
    ├── cloudsqlmssql
    │   ├── cloud_sql_mssql_create_instance_integration_test.go
    │   └── cloud_sql_mssql_integration_test.go
    ├── cloudsqlmysql
    │   ├── cloud_sql_mysql_create_instance_integration_test.go
    │   └── cloud_sql_mysql_integration_test.go
    ├── cloudsqlpg
    │   ├── cloud_sql_pg_create_instances_test.go
    │   └── cloud_sql_pg_integration_test.go
    ├── common.go
    ├── couchbase
    │   └── couchbase_integration_test.go
    ├── dataform
    │   └── dataform_integration_test.go
    ├── dataplex
    │   └── dataplex_integration_test.go
    ├── dgraph
    │   └── dgraph_integration_test.go
    ├── firebird
    │   └── firebird_integration_test.go
    ├── firestore
    │   └── firestore_integration_test.go
    ├── http
    │   └── http_integration_test.go
    ├── looker
    │   └── looker_integration_test.go
    ├── mongodb
    │   └── mongodb_integration_test.go
    ├── mssql
    │   └── mssql_integration_test.go
    ├── mysql
    │   └── mysql_integration_test.go
    ├── neo4j
    │   └── neo4j_integration_test.go
    ├── oceanbase
    │   └── oceanbase_integration_test.go
    ├── option.go
    ├── oracle
    │   └── oracle_integration_test.go
    ├── postgres
    │   └── postgres_integration_test.go
    ├── redis
    │   └── redis_test.go
    ├── server.go
    ├── source.go
    ├── spanner
    │   └── spanner_integration_test.go
    ├── sqlite
    │   └── sqlite_integration_test.go
    ├── tidb
    │   └── tidb_integration_test.go
    ├── tool.go
    ├── trino
    │   └── trino_integration_test.go
    ├── utility
    │   └── wait_integration_test.go
    ├── valkey
    │   └── valkey_test.go
    └── yugabytedb
        └── yugabytedb_integration_test.go
```

# Files

--------------------------------------------------------------------------------
/internal/tools/firestore/firestorequery/firestorequery.go:
--------------------------------------------------------------------------------

```go
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package firestorequery

import (
	"context"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"

	firestoreapi "cloud.google.com/go/firestore"
	yaml "github.com/goccy/go-yaml"
	"github.com/googleapis/genai-toolbox/internal/sources"
	firestoreds "github.com/googleapis/genai-toolbox/internal/sources/firestore"
	"github.com/googleapis/genai-toolbox/internal/tools"
	"github.com/googleapis/genai-toolbox/internal/tools/firestore/util"
)

// Constants for tool configuration
const (
	kind         = "firestore-query"
	defaultLimit = 100
)

// Firestore operators
var validOperators = map[string]bool{
	"<":                  true,
	"<=":                 true,
	">":                  true,
	">=":                 true,
	"==":                 true,
	"!=":                 true,
	"array-contains":     true,
	"array-contains-any": true,
	"in":                 true,
	"not-in":             true,
}

// Error messages
const (
	errFilterParseFailed      = "failed to parse filters: %w"
	errQueryExecutionFailed   = "failed to execute query: %w"
	errTemplateParseFailed    = "failed to parse template: %w"
	errTemplateExecFailed     = "failed to execute template: %w"
	errLimitParseFailed       = "failed to parse limit value '%s': %w"
	errSelectFieldParseFailed = "failed to parse select field: %w"
)

func init() {
	if !tools.Register(kind, newConfig) {
		panic(fmt.Sprintf("tool kind %q already registered", kind))
	}
}

func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
	actual := Config{Name: name}
	if err := decoder.DecodeContext(ctx, &actual); err != nil {
		return nil, err
	}
	return actual, nil
}

// compatibleSource defines the interface for sources that can provide a Firestore client
type compatibleSource interface {
	FirestoreClient() *firestoreapi.Client
}

// compile-time check that the Firestore source still implements compatibleSource
var _ compatibleSource = &firestoreds.Source{}

var compatibleSources = [...]string{firestoreds.SourceKind}

// Config represents the configuration for the Firestore query tool
type Config struct {
	Name         string   `yaml:"name" validate:"required"`
	Kind         string   `yaml:"kind" validate:"required"`
	Source       string   `yaml:"source" validate:"required"`
	Description  string   `yaml:"description" validate:"required"`
	AuthRequired []string `yaml:"authRequired"`

	// Template fields
	CollectionPath string         `yaml:"collectionPath" validate:"required"`
	Filters        string         `yaml:"filters"`      // JSON string template
	Select         []string       `yaml:"select"`       // Fields to select
	OrderBy        map[string]any `yaml:"orderBy"`      // Order by configuration
	Limit          string         `yaml:"limit"`        // Limit template (can be a number or template)
	AnalyzeQuery   bool           `yaml:"analyzeQuery"` // Analyze query (boolean, not parameterizable)

	// Parameters for template substitution
	Parameters tools.Parameters `yaml:"parameters"`
}

// validate interface
var _ tools.ToolConfig = Config{}

// ToolConfigKind returns the kind of tool configuration
func (cfg Config) ToolConfigKind() string {
	return kind
}

// Initialize creates a new Tool instance from the configuration
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
	// verify source exists
	rawS, ok := srcs[cfg.Source]
	if !ok {
		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
	}

	// verify the source is compatible
	s, ok := rawS.(compatibleSource)
	if !ok {
		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
	}

	// Set default limit if not specified
	if cfg.Limit == "" {
		cfg.Limit = fmt.Sprintf("%d", defaultLimit)
	}

	// Create MCP manifest
	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, cfg.Parameters)

	// finish tool setup
	t := Tool{
		Name:                   cfg.Name,
		Kind:                   kind,
		AuthRequired:           cfg.AuthRequired,
		Client:                 s.FirestoreClient(),
		CollectionPathTemplate: cfg.CollectionPath,
		FiltersTemplate:        cfg.Filters,
		SelectTemplate:         cfg.Select,
		OrderByTemplate:        cfg.OrderBy,
		LimitTemplate:          cfg.Limit,
		AnalyzeQuery:           cfg.AnalyzeQuery,
		Parameters:             cfg.Parameters,
		manifest:               tools.Manifest{Description: cfg.Description, Parameters: cfg.Parameters.Manifest(), AuthRequired: cfg.AuthRequired},
		mcpManifest:            mcpManifest,
	}
	return t, nil
}

// validate interface
var _ tools.Tool = Tool{}

// Tool represents the Firestore query tool
type Tool struct {
	Name         string   `yaml:"name"`
	Kind         string   `yaml:"kind"`
	AuthRequired []string `yaml:"authRequired"`

	Client                 *firestoreapi.Client
	CollectionPathTemplate string
	FiltersTemplate        string
	SelectTemplate         []string
	OrderByTemplate        map[string]any
	LimitTemplate          string
	AnalyzeQuery           bool
	Parameters             tools.Parameters

	manifest    tools.Manifest
	mcpManifest tools.McpManifest
}

// SimplifiedFilter represents the simplified filter format
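// For illustration, a hypothetical filter in this format (field names and
// values are invented; operators must come from validOperators above):
//
//	{
//	  "and": [
//	    {"field": "status", "op": "==", "value": "active"},
//	    {"or": [
//	      {"field": "priority", "op": ">=", "value": 2},
//	      {"field": "tags", "op": "array-contains", "value": "urgent"}
//	    ]}
//	  ]
//	}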
type SimplifiedFilter struct {
	And   []SimplifiedFilter `json:"and,omitempty"`
	Or    []SimplifiedFilter `json:"or,omitempty"`
	Field string             `json:"field,omitempty"`
	Op    string             `json:"op,omitempty"`
	Value interface{}        `json:"value,omitempty"`
}

// OrderByConfig represents ordering configuration
type OrderByConfig struct {
	Field     string `json:"field"`
	Direction string `json:"direction"`
}

// GetDirection returns the Firestore direction constant
func (o *OrderByConfig) GetDirection() firestoreapi.Direction {
	if strings.EqualFold(o.Direction, "DESCENDING") || strings.EqualFold(o.Direction, "DESC") {
		return firestoreapi.Desc
	}
	return firestoreapi.Asc
}

// QueryResult represents a document result from the query
type QueryResult struct {
	ID         string         `json:"id"`
	Path       string         `json:"path"`
	Data       map[string]any `json:"data"`
	CreateTime interface{}    `json:"createTime,omitempty"`
	UpdateTime interface{}    `json:"updateTime,omitempty"`
	ReadTime   interface{}    `json:"readTime,omitempty"`
}

// QueryResponse represents the full response including optional metrics
type QueryResponse struct {
	Documents      []QueryResult  `json:"documents"`
	ExplainMetrics map[string]any `json:"explainMetrics,omitempty"`
}

// Invoke executes the Firestore query based on the provided parameters
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
	paramsMap := params.AsMap()

	// Process collection path with template substitution
	collectionPath, err := tools.PopulateTemplate("collectionPath", t.CollectionPathTemplate, paramsMap)
	if err != nil {
		return nil, fmt.Errorf("failed to process collection path: %w", err)
	}

	// Build the query
	query, err := t.buildQuery(collectionPath, paramsMap)
	if err != nil {
		return nil, err
	}

	// Execute the query and return results
	return t.executeQuery(ctx, query)
}

// buildQuery constructs the Firestore query from parameters
func (t Tool) buildQuery(collectionPath string, params map[string]any) (*firestoreapi.Query, error) {
	collection := t.Client.Collection(collectionPath)
	query := collection.Query

	// Process and apply filters if template is provided
	if t.FiltersTemplate != "" {
		// Apply template substitution to filters
		filtersJSON, err := tools.PopulateTemplateWithJSON("filters", t.FiltersTemplate, params)
		if err != nil {
			return nil, fmt.Errorf("failed to process filters template: %w", err)
		}

		// Parse the simplified filter format
		var simplifiedFilter SimplifiedFilter
		if err := json.Unmarshal([]byte(filtersJSON), &simplifiedFilter); err != nil {
			return nil, fmt.Errorf(errFilterParseFailed, err)
		}

		// Convert simplified filter to Firestore filter
		if filter := t.convertToFirestoreFilter(simplifiedFilter); filter != nil {
			query = query.WhereEntity(filter)
		}
	}

	// Process select fields
	selectFields, err := t.processSelectFields(params)
	if err != nil {
		return nil, err
	}
	if len(selectFields) > 0 {
		query = query.Select(selectFields...)
	}

	// Process and apply ordering
	orderBy, err := t.getOrderBy(params)
	if err != nil {
		return nil, err
	}
	if orderBy != nil {
		query = query.OrderBy(orderBy.Field, orderBy.GetDirection())
	}

	// Process and apply limit
	limit, err := t.getLimit(params)
	if err != nil {
		return nil, err
	}
	query = query.Limit(limit)

	// Apply analyze options if enabled
	if t.AnalyzeQuery {
		query = query.WithRunOptions(firestoreapi.ExplainOptions{
			Analyze: true,
		})
	}

	return &query, nil
}

// convertToFirestoreFilter converts simplified filter format to Firestore EntityFilter
func (t Tool) convertToFirestoreFilter(filter SimplifiedFilter) firestoreapi.EntityFilter {
	// Handle AND filters
	if len(filter.And) > 0 {
		filters := make([]firestoreapi.EntityFilter, 0, len(filter.And))
		for _, f := range filter.And {
			if converted := t.convertToFirestoreFilter(f); converted != nil {
				filters = append(filters, converted)
			}
		}
		if len(filters) > 0 {
			return firestoreapi.AndFilter{Filters: filters}
		}
		return nil
	}

	// Handle OR filters
	if len(filter.Or) > 0 {
		filters := make([]firestoreapi.EntityFilter, 0, len(filter.Or))
		for _, f := range filter.Or {
			if converted := t.convertToFirestoreFilter(f); converted != nil {
				filters = append(filters, converted)
			}
		}
		if len(filters) > 0 {
			return firestoreapi.OrFilter{Filters: filters}
		}
		return nil
	}

	// Handle simple property filter
	if filter.Field != "" && filter.Op != "" && filter.Value != nil {
		if validOperators[filter.Op] {
			// Convert the value using the Firestore native JSON converter
			convertedValue, err := util.JSONToFirestoreValue(filter.Value, t.Client)
			if err != nil {
				// If conversion fails, use the original value
				convertedValue = filter.Value
			}

			return firestoreapi.PropertyFilter{
				Path:     filter.Field,
				Operator: filter.Op,
				Value:    convertedValue,
			}
		}
	}

	return nil
}

// processSelectFields processes the select fields with parameter substitution
func (t Tool) processSelectFields(params map[string]any) ([]string, error) {
	var selectFields []string

	// Process configured select fields with template substitution
	for _, field := range t.SelectTemplate {
		// Check if it's a template
		if strings.Contains(field, "{{") {
			processed, err := tools.PopulateTemplate("selectField", field, params)
			if err != nil {
				return nil, err
			}
			if processed != "" {
				// The processed field might be an array format [a b c] or a single value
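				// (e.g. a []string parameter rendered through a Go template
				// prints via fmt as "[name email]", which is unpacked below)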
				trimmedProcessed := strings.TrimSpace(processed)

				// Check if it's in array format [a b c]
				if strings.HasPrefix(trimmedProcessed, "[") && strings.HasSuffix(trimmedProcessed, "]") {
					// Remove brackets and split by spaces
					arrayContent := strings.TrimPrefix(trimmedProcessed, "[")
					arrayContent = strings.TrimSuffix(arrayContent, "]")
					fields := strings.Fields(arrayContent) // Fields splits by any whitespace
					for _, f := range fields {
						if f != "" {
							selectFields = append(selectFields, f)
						}
					}
				} else {
					selectFields = append(selectFields, processed)
				}
			}
		} else {
			selectFields = append(selectFields, field)
		}
	}

	return selectFields, nil
}

// getOrderBy processes the orderBy configuration with parameter substitution
func (t Tool) getOrderBy(params map[string]any) (*OrderByConfig, error) {
	if t.OrderByTemplate == nil {
		return nil, nil
	}

	orderBy := &OrderByConfig{}

	// Process field
	field, err := t.getOrderByForKey("field", params)
	if err != nil {
		return nil, err
	}
	orderBy.Field = field

	// Process direction
	direction, err := t.getOrderByForKey("direction", params)
	if err != nil {
		return nil, err
	}
	orderBy.Direction = direction

	if orderBy.Field == "" {
		return nil, nil
	}

	return orderBy, nil
}

func (t Tool) getOrderByForKey(key string, params map[string]any) (string, error) {
	value, ok := t.OrderByTemplate[key].(string)
	if !ok {
		return "", nil
	}

	processedValue, err := tools.PopulateTemplate(fmt.Sprintf("orderBy%s", key), value, params)
	if err != nil {
		return "", err
	}

	return processedValue, nil
}

// getLimit processes the limit template with parameter substitution
func (t Tool) getLimit(params map[string]any) (int, error) {
	limit := defaultLimit
	if t.LimitTemplate != "" {
		processedValue, err := tools.PopulateTemplate("limit", t.LimitTemplate, params)
		if err != nil {
			return 0, err
		}

		// Try to parse as integer
		if processedValue != "" {
			parsedLimit, err := strconv.Atoi(processedValue)
			if err != nil {
				return 0, fmt.Errorf(errLimitParseFailed, processedValue, err)
			}
			limit = parsedLimit
		}
	}
	return limit, nil
}

// executeQuery runs the query and formats the results
func (t Tool) executeQuery(ctx context.Context, query *firestoreapi.Query) (any, error) {
	docIterator := query.Documents(ctx)
	docs, err := docIterator.GetAll()
	if err != nil {
		return nil, fmt.Errorf(errQueryExecutionFailed, err)
	}

	// Convert results to structured format
	results := make([]QueryResult, len(docs))
	for i, doc := range docs {
		results[i] = QueryResult{
			ID:         doc.Ref.ID,
			Path:       doc.Ref.Path,
			Data:       doc.Data(),
			CreateTime: doc.CreateTime,
			UpdateTime: doc.UpdateTime,
			ReadTime:   doc.ReadTime,
		}
	}

	// Return with explain metrics if requested
	if t.AnalyzeQuery {
		explainMetrics, err := t.getExplainMetrics(docIterator)
		if err == nil && explainMetrics != nil {
			response := QueryResponse{
				Documents:      results,
				ExplainMetrics: explainMetrics,
			}
			return response, nil
		}
	}

	return results, nil
}

// getExplainMetrics extracts explain metrics from the query iterator
func (t Tool) getExplainMetrics(docIterator *firestoreapi.DocumentIterator) (map[string]any, error) {
	explainMetrics, err := docIterator.ExplainMetrics()
	if err != nil || explainMetrics == nil {
		return nil, err
	}

	metricsData := make(map[string]any)

	// Add plan summary if available
	if explainMetrics.PlanSummary != nil {
		planSummary := make(map[string]any)
		planSummary["indexesUsed"] = explainMetrics.PlanSummary.IndexesUsed
		metricsData["planSummary"] = planSummary
	}

	// Add execution stats if available
	if explainMetrics.ExecutionStats != nil {
		executionStats := make(map[string]any)
		executionStats["resultsReturned"] = explainMetrics.ExecutionStats.ResultsReturned
		executionStats["readOperations"] = explainMetrics.ExecutionStats.ReadOperations

		if explainMetrics.ExecutionStats.ExecutionDuration != nil {
			executionStats["executionDuration"] = explainMetrics.ExecutionStats.ExecutionDuration.String()
		}

		if explainMetrics.ExecutionStats.DebugStats != nil {
			executionStats["debugStats"] = *explainMetrics.ExecutionStats.DebugStats
		}

		metricsData["executionStats"] = executionStats
	}

	return metricsData, nil
}
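
// Shape of the returned metrics map (editor's illustrative sample, assuming
// both the plan summary and execution stats are present):
//
//	{
//	  "planSummary": {"indexesUsed": [...]},
//	  "executionStats": {
//	    "resultsReturned":   42,
//	    "readOperations":    48,
//	    "executionDuration": "12ms",
//	    "debugStats":        {...},
//	  },
//	}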

// ParseParams parses and validates input parameters
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
	return tools.ParseParams(t.Parameters, data, claims)
}

// Manifest returns the tool manifest
func (t Tool) Manifest() tools.Manifest {
	return t.manifest
}

// McpManifest returns the MCP manifest
func (t Tool) McpManifest() tools.McpManifest {
	return t.mcpManifest
}

// Authorized checks if the tool is authorized based on verified auth services
func (t Tool) Authorized(verifiedAuthServices []string) bool {
	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

func (t Tool) RequiresClientAuthorization() bool {
	return false
}

```
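
The bracketed-array handling in `getSelectFields` above is easy to miss: a
template that expands to a Go-printed slice such as `[a b c]` is split back
into individual field names. Below is a minimal, self-contained sketch of just
that parsing step (an editor's illustration; `splitTemplatedFields` is a
hypothetical helper, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// splitTemplatedFields mirrors the parsing in getSelectFields: a value that
// looks like a Go-printed slice ("[a b c]") is split on whitespace, while any
// other non-empty value is kept as a single field name.
func splitTemplatedFields(processed string) []string {
	trimmed := strings.TrimSpace(processed)
	if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
		inner := strings.TrimSuffix(strings.TrimPrefix(trimmed, "["), "]")
		return strings.Fields(inner) // Fields drops empty tokens for us
	}
	if trimmed == "" {
		return nil
	}
	return []string{processed}
}

func main() {
	fmt.Println(splitTemplatedFields("[name age city]")) // [name age city]
	fmt.Println(splitTemplatedFields("name"))            // [name]
}
```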

--------------------------------------------------------------------------------
/internal/tools/neo4j/neo4jexecutecypher/classifier/classifier.go:
--------------------------------------------------------------------------------

```go
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/*
Package classifier provides tools to classify Cypher queries as either read-only or write operations.

It uses a keyword-based and procedure-based approach to determine the query's nature.
The main entry point is the `Classify` method on a `QueryClassifier` object. The classifier
is designed to be conservative, defaulting to classifying unknown procedures as write
operations to ensure safety in read-only environments.

It can handle:
- Standard Cypher keywords (MATCH, CREATE, MERGE, etc.).
- Multi-word keywords (DETACH DELETE, ORDER BY).
- Comments and string literals, which are ignored during classification.
- Procedure calls (CALL db.labels), with predefined lists of known read/write procedures.
- Subqueries (CALL { ... }), checking for write operations within the subquery block.
*/
package classifier

import (
	"regexp"
	"sort"
	"strings"
)

// QueryType represents the classification of a Cypher query as either read or write.
type QueryType int

const (
	// ReadQuery indicates a query that only reads data.
	ReadQuery QueryType = iota
	// WriteQuery indicates a query that modifies data.
	WriteQuery
)

// String provides a human-readable representation of the QueryType.
func (qt QueryType) String() string {
	if qt == ReadQuery {
		return "READ"
	}
	return "WRITE"
}

// QueryClassification represents the detailed result of a query classification.
type QueryClassification struct {
	// Type is the overall classification of the query (READ or WRITE).
	Type QueryType
	// Confidence is a score from 0.0 to 1.0 indicating the classifier's certainty.
	// 1.0 is fully confident. Lower scores may be assigned for ambiguous cases,
	// like unknown procedures.
	Confidence float64
	// WriteTokens is a list of keywords or procedures found that indicate a write operation.
	WriteTokens []string
	// ReadTokens is a list of keywords or procedures found that indicate a read operation.
	ReadTokens []string
	// HasSubquery is true if the query contains a `CALL { ... }` block.
	HasSubquery bool
	// Error holds any error that occurred during classification, though this is not
	// currently used in the implementation.
	Error error
}

// QueryClassifier contains the logic and data for classifying Cypher queries.
// It should be instantiated via the NewQueryClassifier() function.
type QueryClassifier struct {
	writeKeywords map[string]struct{}
	readKeywords  map[string]struct{}
	// writeProcedures is a map of known write procedure prefixes for quick lookup.
	writeProcedures map[string]struct{}
	// readProcedures is a map of known read procedure prefixes for quick lookup.
	readProcedures         map[string]struct{}
	multiWordWriteKeywords []string
	multiWordReadKeywords  []string
	commentPattern         *regexp.Regexp
	stringLiteralPattern   *regexp.Regexp
	procedureCallPattern   *regexp.Regexp
	subqueryPattern        *regexp.Regexp
	whitespacePattern      *regexp.Regexp
	tokenSplitPattern      *regexp.Regexp
}

// NewQueryClassifier creates and initializes a new QueryClassifier instance.
// It pre-compiles regular expressions and populates the internal lists of
// known Cypher keywords and procedures.
func NewQueryClassifier() *QueryClassifier {
	c := &QueryClassifier{
		writeKeywords:        make(map[string]struct{}),
		readKeywords:         make(map[string]struct{}),
		writeProcedures:      make(map[string]struct{}),
		readProcedures:       make(map[string]struct{}),
		commentPattern:       regexp.MustCompile(`(?m)//.*?$|/\*[\s\S]*?\*/`),
		stringLiteralPattern: regexp.MustCompile(`'[^']*'|"[^"]*"`),
		procedureCallPattern: regexp.MustCompile(`(?i)\bCALL\s+([a-zA-Z0-9_.]+)`),
		subqueryPattern:      regexp.MustCompile(`(?i)\bCALL\s*\{`),
		whitespacePattern:    regexp.MustCompile(`\s+`),
		tokenSplitPattern:    regexp.MustCompile(`[\s,(){}[\]]+`),
	}

	// Lists of known keywords that perform write operations.
	writeKeywordsList := []string{
		"CREATE", "MERGE", "DELETE", "DETACH DELETE", "SET", "REMOVE", "FOREACH",
		"CREATE INDEX", "DROP INDEX", "CREATE CONSTRAINT", "DROP CONSTRAINT",
	}
	// Lists of known keywords that perform read operations.
	readKeywordsList := []string{
		"MATCH", "OPTIONAL MATCH", "WITH", "WHERE", "RETURN", "ORDER BY", "SKIP", "LIMIT",
		"UNION", "UNION ALL", "UNWIND", "CASE", "WHEN", "THEN", "ELSE", "END",
		"SHOW", "PROFILE", "EXPLAIN",
	}
	// A list of procedure prefixes known to perform write operations.
	writeProceduresList := []string{
		"apoc.create", "apoc.merge", "apoc.refactor", "apoc.atomic", "apoc.trigger",
		"apoc.periodic.commit", "apoc.load.jdbc", "apoc.load.json", "apoc.load.csv",
		"apoc.export", "apoc.import", "db.create", "db.drop", "db.index.create",
		"db.constraints.create", "dbms.security.create", "gds.graph.create", "gds.graph.drop",
	}
	// A list of procedure prefixes known to perform read operations.
	readProceduresList := []string{
		"apoc.meta", "apoc.help", "apoc.version", "apoc.text", "apoc.math", "apoc.coll",
		"apoc.path", "apoc.algo", "apoc.date", "db.labels", "db.propertyKeys",
		"db.relationshipTypes", "db.schema", "db.indexes", "db.constraints",
		"dbms.components", "dbms.listConfig", "gds.graph.list", "gds.util",
	}

	c.populateKeywords(writeKeywordsList, c.writeKeywords, &c.multiWordWriteKeywords)
	c.populateKeywords(readKeywordsList, c.readKeywords, &c.multiWordReadKeywords)
	c.populateProcedures(writeProceduresList, c.writeProcedures)
	c.populateProcedures(readProceduresList, c.readProcedures)

	return c
}

// populateKeywords processes a list of keyword strings, separating them into
// single-word and multi-word lists for easier processing later.
// Multi-word keywords (e.g., "DETACH DELETE") are sorted by length descending
// to ensure longer matches are replaced first.
func (c *QueryClassifier) populateKeywords(keywords []string, keywordMap map[string]struct{}, multiWord *[]string) {
	for _, kw := range keywords {
		if strings.Contains(kw, " ") {
			*multiWord = append(*multiWord, kw)
		}
		// Replace spaces with underscores for unified tokenization.
		keywordMap[strings.ReplaceAll(kw, " ", "_")] = struct{}{}
	}
	// Sort multi-word keywords by length (longest first) to prevent
	// partial matches, e.g., replacing "CREATE OR REPLACE" before "CREATE".
	sort.SliceStable(*multiWord, func(i, j int) bool {
		return len((*multiWord)[i]) > len((*multiWord)[j])
	})
}

// populateProcedures adds a list of procedure prefixes to the given map.
func (c *QueryClassifier) populateProcedures(procedures []string, procedureMap map[string]struct{}) {
	for _, proc := range procedures {
		procedureMap[strings.ToLower(proc)] = struct{}{}
	}
}

// Classify analyzes a Cypher query string and returns a QueryClassification result.
// It is the main method for this package.
//
// The process is as follows:
// 1. Normalize the query by removing comments and extra whitespace.
// 2. Replace string literals to prevent keywords inside them from being classified.
// 3. Unify multi-word keywords (e.g., "DETACH DELETE" becomes "DETACH_DELETE").
// 4. Extract all procedure calls (e.g., `CALL db.labels`).
// 5. Tokenize the remaining query string.
// 6. Check tokens and procedures against known read/write lists.
// 7. If a subquery `CALL { ... }` exists, check its contents for write operations.
// 8. Assign a final classification and confidence score.
//
// Usage example:
//
//	classifier := NewQueryClassifier()
//	query := "MATCH (n:Person) WHERE n.name = 'Alice' SET n.age = 30"
//	result := classifier.Classify(query)
//	fmt.Printf("Query is a %s query with confidence %f\n", result.Type, result.Confidence)
//	// Output: Query is a WRITE query with confidence 0.900000
//	fmt.Printf("Write tokens found: %v\n", result.WriteTokens)
//	// Output: Write tokens found: [SET]
func (c *QueryClassifier) Classify(query string) QueryClassification {
	result := QueryClassification{
		Type:       ReadQuery, // Default to read, upgrade to write if write tokens are found.
		Confidence: 1.0,
	}

	normalizedQuery := c.normalizeQuery(query)
	if normalizedQuery == "" {
		return result // Return default for empty queries.
	}

	// Early check for subqueries to set the flag.
	result.HasSubquery = c.subqueryPattern.MatchString(normalizedQuery)
	procedures := c.extractProcedureCalls(normalizedQuery)

	// Sanitize the query by replacing string literals to avoid misinterpreting their contents.
	sanitizedQuery := c.stringLiteralPattern.ReplaceAllString(normalizedQuery, "STRING_LITERAL")
	// Unify multi-word keywords to treat them as single tokens.
	unifiedQuery := c.unifyMultiWordKeywords(sanitizedQuery)
	tokens := c.extractTokens(unifiedQuery)

	// Classify based on standard keywords.
	for _, token := range tokens {
		upperToken := strings.ToUpper(token)

		if _, isWrite := c.writeKeywords[upperToken]; isWrite {
			result.WriteTokens = append(result.WriteTokens, upperToken)
			result.Type = WriteQuery
		} else if _, isRead := c.readKeywords[upperToken]; isRead {
			result.ReadTokens = append(result.ReadTokens, upperToken)
		}
	}

	// Classify based on procedure calls.
	for _, proc := range procedures {
		if c.isWriteProcedure(proc) {
			result.WriteTokens = append(result.WriteTokens, "CALL "+proc)
			result.Type = WriteQuery
		} else if c.isReadProcedure(proc) {
			result.ReadTokens = append(result.ReadTokens, "CALL "+proc)
		} else {
			// CONSERVATIVE APPROACH: If a procedure is not in a known list,
			// we guess its type. If it looks like a read (get, list), we treat it as such.
			// Otherwise, we assume it's a write operation with lower confidence.
			if strings.Contains(proc, ".get") || strings.Contains(proc, ".list") ||
				strings.Contains(proc, ".show") || strings.Contains(proc, ".meta") {
				result.ReadTokens = append(result.ReadTokens, "CALL "+proc)
			} else {
				result.WriteTokens = append(result.WriteTokens, "CALL "+proc)
				result.Type = WriteQuery
				result.Confidence = 0.8 // Lower confidence for unknown procedures.
			}
		}
	}

	// If a subquery exists, explicitly check its contents for write operations.
	if result.HasSubquery && c.hasWriteInSubquery(unifiedQuery) {
		result.Type = WriteQuery
		// Add a specific token to indicate the reason for the write classification.
		found := false
		for _, t := range result.WriteTokens {
			if t == "WRITE_IN_SUBQUERY" {
				found = true
				break
			}
		}
		if !found {
			result.WriteTokens = append(result.WriteTokens, "WRITE_IN_SUBQUERY")
		}
	}

	// If a query contains both read and write operations (e.g., MATCH ... DELETE),
	// it's a write query. We lower the confidence slightly to reflect the mixed nature.
	if len(result.WriteTokens) > 0 && len(result.ReadTokens) > 0 {
		result.Confidence = 0.9
	}

	return result
}

// unifyMultiWordKeywords replaces multi-word keywords in a query with a single,
// underscore-separated token. This simplifies the tokenization process.
// Example: "DETACH DELETE" becomes "DETACH_DELETE".
func (c *QueryClassifier) unifyMultiWordKeywords(query string) string {
	upperQuery := strings.ToUpper(query)
	// Combine all multi-word keywords into a fresh slice for a single pass.
	// Appending directly to c.multiWordWriteKeywords could alias its backing
	// array and overwrite it with read keywords.
	allMultiWord := make([]string, 0, len(c.multiWordWriteKeywords)+len(c.multiWordReadKeywords))
	allMultiWord = append(allMultiWord, c.multiWordWriteKeywords...)
	allMultiWord = append(allMultiWord, c.multiWordReadKeywords...)

	for _, kw := range allMultiWord {
		placeholder := strings.ReplaceAll(kw, " ", "_")
		upperQuery = strings.ReplaceAll(upperQuery, kw, placeholder)
	}
	return upperQuery
}

// normalizeQuery cleans a query string by removing comments and collapsing
// all whitespace into single spaces.
func (c *QueryClassifier) normalizeQuery(query string) string {
	// Remove single-line and multi-line comments.
	query = c.commentPattern.ReplaceAllString(query, " ")
	// Collapse consecutive whitespace characters into a single space.
	query = c.whitespacePattern.ReplaceAllString(query, " ")
	return strings.TrimSpace(query)
}

// extractTokens splits a query string into a slice of individual tokens.
// It splits on whitespace and various punctuation marks.
func (c *QueryClassifier) extractTokens(query string) []string {
	tokens := c.tokenSplitPattern.Split(query, -1)
	// Filter out empty strings that can result from the split.
	result := make([]string, 0, len(tokens))
	for _, token := range tokens {
		if token != "" {
			result = append(result, token)
		}
	}
	return result
}

// extractProcedureCalls finds all procedure calls (e.g., `CALL db.labels`)
// in the query and returns a slice of their names.
func (c *QueryClassifier) extractProcedureCalls(query string) []string {
	matches := c.procedureCallPattern.FindAllStringSubmatch(query, -1)
	procedures := make([]string, 0, len(matches))
	for _, match := range matches {
		if len(match) > 1 {
			procedures = append(procedures, strings.ToLower(match[1]))
		}
	}
	return procedures
}

// isWriteProcedure checks if a given procedure name matches any of the known
// write procedure prefixes.
func (c *QueryClassifier) isWriteProcedure(procedure string) bool {
	procedure = strings.ToLower(procedure)
	for wp := range c.writeProcedures {
		if strings.HasPrefix(procedure, wp) {
			return true
		}
	}
	return false
}

// isReadProcedure checks if a given procedure name matches any of the known
// read procedure prefixes.
func (c *QueryClassifier) isReadProcedure(procedure string) bool {
	procedure = strings.ToLower(procedure)
	for rp := range c.readProcedures {
		if strings.HasPrefix(procedure, rp) {
			return true
		}
	}
	return false
}
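
// Both prefix checks rely on strings.HasPrefix, so for example a call to
// "apoc.meta.graph" matches the read prefix "apoc.meta", while
// "apoc.create.node" matches the write prefix "apoc.create" (editor's note).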

// hasWriteInSubquery detects if a write keyword exists within a `CALL { ... }` block.
// It correctly handles nested braces to find the content of the top-level subquery.
func (c *QueryClassifier) hasWriteInSubquery(unifiedQuery string) bool {
	loc := c.subqueryPattern.FindStringIndex(unifiedQuery)
	if loc == nil {
		return false
	}

	// The search starts from the beginning of the `CALL {` match.
	subqueryContent := unifiedQuery[loc[0]:]
	openBraces := 0
	startIndex := -1
	endIndex := -1

	// Find the boundaries of the first complete `{...}` block.
	for i, char := range subqueryContent {
		if char == '{' {
			if openBraces == 0 {
				startIndex = i + 1
			}
			openBraces++
		} else if char == '}' {
			openBraces--
			if openBraces == 0 {
				endIndex = i
				break
			}
		}
	}

	var block string
	if startIndex != -1 {
		if endIndex != -1 {
			// A complete `{...}` block was found.
			block = subqueryContent[startIndex:endIndex]
		} else {
			// An opening brace was found but no closing one; this indicates a
			// likely syntax error, but we check the rest of the string anyway.
			block = subqueryContent[startIndex:]
		}

		// Check if any write keyword exists as a whole word within the subquery block.
		for writeOp := range c.writeKeywords {
			// Use regex to match the keyword as a whole word to avoid partial matches
			// (e.g., finding "SET" in "ASSET").
			re := regexp.MustCompile(`\b` + writeOp + `\b`)
			if re.MatchString(block) {
				return true
			}
		}
	}

	return false
}

// AddWriteProcedure allows users to dynamically add a custom procedure prefix to the
// list of known write procedures. This is useful for environments with custom plugins.
// The pattern is matched using `strings.HasPrefix`.
//
// Usage example:
//
//	classifier := NewQueryClassifier()
//	classifier.AddWriteProcedure("my.custom.writer")
//	result := classifier.Classify("CALL my.custom.writer.createUser()")
//	// result.Type will be WriteQuery
func (c *QueryClassifier) AddWriteProcedure(pattern string) {
	if pattern != "" {
		c.writeProcedures[strings.ToLower(pattern)] = struct{}{}
	}
}

// AddReadProcedure allows users to dynamically add a custom procedure prefix to the
// list of known read procedures.
// The pattern is matched using `strings.HasPrefix`.
//
// Usage example:
//
//	classifier := NewQueryClassifier()
//	classifier.AddReadProcedure("my.custom.reader")
//	result := classifier.Classify("CALL my.custom.reader.getData()")
//	// result.Type will be ReadQuery
func (c *QueryClassifier) AddReadProcedure(pattern string) {
	if pattern != "" {
		c.readProcedures[strings.ToLower(pattern)] = struct{}{}
	}
}

```
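
To make the conservative keyword pass concrete, here is a compact,
self-contained sketch (an editor's illustration, not the package's API) of the
core decision: strip comments, tokenize, and escalate to WRITE on any known
write keyword. String literals, multi-word keywords, and procedure calls are
deliberately omitted here; see the full implementation above for those.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	comments  = regexp.MustCompile(`(?m)//.*?$|/\*[\s\S]*?\*/`)
	tokenizer = regexp.MustCompile(`[\s,(){}\[\]]+`)
	writeKw   = map[string]bool{"CREATE": true, "MERGE": true, "DELETE": true, "SET": true, "REMOVE": true}
)

// classify returns "WRITE" if any write keyword appears outside comments,
// defaulting to "READ" otherwise -- a simplified version of the package's
// keyword pass.
func classify(query string) string {
	clean := comments.ReplaceAllString(query, " ")
	for _, tok := range tokenizer.Split(clean, -1) {
		if writeKw[strings.ToUpper(tok)] {
			return "WRITE"
		}
	}
	return "READ"
}

func main() {
	fmt.Println(classify("MATCH (n:Person) RETURN n"))         // READ
	fmt.Println(classify("MATCH (n) SET n.age = 30 // patch")) // WRITE
}
```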

--------------------------------------------------------------------------------
/internal/tools/looker/lookerhealthanalyze/lookerhealthanalyze.go:
--------------------------------------------------------------------------------

```go
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//	http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package lookerhealthanalyze

import (
	"context"
	"encoding/json"
	"fmt"
	"regexp"
	"strings"

	yaml "github.com/goccy/go-yaml"
	"github.com/googleapis/genai-toolbox/internal/sources"
	lookersrc "github.com/googleapis/genai-toolbox/internal/sources/looker"
	"github.com/googleapis/genai-toolbox/internal/tools"
	"github.com/googleapis/genai-toolbox/internal/tools/looker/lookercommon"
	"github.com/googleapis/genai-toolbox/internal/util"
	"github.com/looker-open-source/sdk-codegen/go/rtl"
	v4 "github.com/looker-open-source/sdk-codegen/go/sdk/v4"
)

// =================================================================================================================
// START MCP SERVER CORE LOGIC
// =================================================================================================================
const kind string = "looker-health-analyze"

func init() {
	if !tools.Register(kind, newConfig) {
		panic(fmt.Sprintf("tool kind %q already registered", kind))
	}
}

func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
	actual := Config{Name: name}
	if err := decoder.DecodeContext(ctx, &actual); err != nil {
		return nil, err
	}
	return actual, nil
}

type Config struct {
	Name         string         `yaml:"name" validate:"required"`
	Kind         string         `yaml:"kind" validate:"required"`
	Source       string         `yaml:"source" validate:"required"`
	Description  string         `yaml:"description" validate:"required"`
	AuthRequired []string       `yaml:"authRequired"`
	Parameters   map[string]any `yaml:"parameters"`
}

var _ tools.ToolConfig = Config{}

func (cfg Config) ToolConfigKind() string {
	return kind
}

func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
	rawS, ok := srcs[cfg.Source]
	if !ok {
		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
	}

	s, ok := rawS.(*lookersrc.Source)
	if !ok {
		return nil, fmt.Errorf("invalid source for %q tool: source kind must be `looker`", kind)
	}

	actionParameter := tools.NewStringParameterWithRequired("action", "The analysis to run. Can be 'projects', 'models', or 'explores'.", true)
	projectParameter := tools.NewStringParameterWithRequired("project", "The Looker project to analyze (optional).", false)
	modelParameter := tools.NewStringParameterWithRequired("model", "The Looker model to analyze (optional).", false)
	exploreParameter := tools.NewStringParameterWithRequired("explore", "The Looker explore to analyze (optional).", false)
	timeframeParameter := tools.NewIntParameterWithDefault("timeframe", 90, "The timeframe in days to analyze.")
	minQueriesParameter := tools.NewIntParameterWithDefault("min_queries", 0, "The minimum number of queries for a model or explore to be considered used.")

	parameters := tools.Parameters{
		actionParameter,
		projectParameter,
		modelParameter,
		exploreParameter,
		timeframeParameter,
		minQueriesParameter,
	}

	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, parameters)

	return Tool{
		Name:           cfg.Name,
		Kind:           kind,
		Parameters:     parameters,
		AuthRequired:   cfg.AuthRequired,
		UseClientOAuth: s.UseClientOAuth,
		Client:         s.Client,
		ApiSettings:    s.ApiSettings,
		manifest: tools.Manifest{
			Description:  cfg.Description,
			Parameters:   parameters.Manifest(),
			AuthRequired: cfg.AuthRequired,
		},
		mcpManifest: mcpManifest,
	}, nil
}

var _ tools.Tool = Tool{}

type Tool struct {
	Name           string `yaml:"name"`
	Kind           string `yaml:"kind"`
	UseClientOAuth bool
	Client         *v4.LookerSDK
	ApiSettings    *rtl.ApiSettings
	AuthRequired   []string `yaml:"authRequired"`
	Parameters     tools.Parameters
	manifest       tools.Manifest
	mcpManifest    tools.McpManifest
}

func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
	logger, err := util.LoggerFromContext(ctx)
	if err != nil {
		return nil, fmt.Errorf("unable to get logger from ctx: %s", err)
	}

	sdk, err := lookercommon.GetLookerSDK(t.UseClientOAuth, t.ApiSettings, t.Client, accessToken)
	if err != nil {
		return nil, fmt.Errorf("error getting sdk: %w", err)
	}

	paramsMap := params.AsMap()
	timeframe, _ := paramsMap["timeframe"].(int)
	if timeframe == 0 {
		timeframe = 90
	}
	minQueries, _ := paramsMap["min_queries"].(int)
	if minQueries == 0 {
		minQueries = 1
	}

	analyzeTool := &analyzeTool{
		SdkClient:  sdk,
		timeframe:  timeframe,
		minQueries: minQueries,
	}

	action, ok := paramsMap["action"].(string)
	if !ok {
		return nil, fmt.Errorf("action parameter not found")
	}

	switch action {
	case "projects":
		projectId, _ := paramsMap["project"].(string)
		result, err := analyzeTool.projects(ctx, projectId)
		if err != nil {
			return nil, fmt.Errorf("error analyzing projects: %w", err)
		}
		logger.DebugContext(ctx, "result = ", result)
		return result, nil
	case "models":
		projectName, _ := paramsMap["project"].(string)
		modelName, _ := paramsMap["model"].(string)
		result, err := analyzeTool.models(ctx, projectName, modelName)
		if err != nil {
			return nil, fmt.Errorf("error analyzing models: %w", err)
		}
		logger.DebugContext(ctx, "result = ", result)
		return result, nil
	case "explores":
		modelName, _ := paramsMap["model"].(string)
		exploreName, _ := paramsMap["explore"].(string)
		result, err := analyzeTool.explores(ctx, modelName, exploreName)
		if err != nil {
			return nil, fmt.Errorf("error analyzing explores: %w", err)
		}
		logger.DebugContext(ctx, "result = ", result)
		return result, nil
	default:
		return nil, fmt.Errorf("unknown action: %s", action)
	}
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
	return tools.ParseParams(t.Parameters, data, claims)
}

func (t Tool) Manifest() tools.Manifest {
	return t.manifest
}

func (t Tool) McpManifest() tools.McpManifest {
	return t.mcpManifest
}

func (t Tool) Authorized(verifiedAuthServices []string) bool {
	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

func (t Tool) RequiresClientAuthorization() bool {
	return t.UseClientOAuth
}

// =================================================================================================================
// END MCP SERVER CORE LOGIC
// =================================================================================================================

// =================================================================================================================
// START LOOKER HEALTH ANALYZE CORE LOGIC
// =================================================================================================================
type analyzeTool struct {
	SdkClient  *v4.LookerSDK
	timeframe  int
	minQueries int
}

func (t *analyzeTool) projects(ctx context.Context, id string) ([]map[string]interface{}, error) {
	logger, err := util.LoggerFromContext(ctx)
	if err != nil {
		return nil, fmt.Errorf("unable to get logger from ctx: %s", err)
	}

	var projects []*v4.Project
	if id != "" {
		p, err := t.SdkClient.Project(id, "", nil)
		if err != nil {
			return nil, fmt.Errorf("error fetching project %s: %w", id, err)
		}
		projects = append(projects, &p)
	} else {
		allProjects, err := t.SdkClient.AllProjects("", nil)
		if err != nil {
			return nil, fmt.Errorf("error fetching all projects: %w", err)
		}
		for i := range allProjects {
			projects = append(projects, &allProjects[i])
		}
	}

	var results []map[string]interface{}
	for _, p := range projects {
		pName := *p.Name
		pID := *p.Id
		logger.InfoContext(ctx, fmt.Sprintf("Analyzing project: %s", pName))

		projectFiles, err := t.SdkClient.AllProjectFiles(pID, "", nil)
		if err != nil {
			return nil, fmt.Errorf("error fetching files for project %s: %w", pName, err)
		}

		modelCount := 0
		viewFileCount := 0
		for _, f := range projectFiles {
			if f.Type != nil {
				if *f.Type == "model" {
					modelCount++
				}
				if *f.Type == "view" {
					viewFileCount++
				}
			}
		}

		gitConnectionStatus := "OK"
		if p.GitRemoteUrl == nil {
			gitConnectionStatus = "No repo found"
		} else if strings.Contains(*p.GitRemoteUrl, "/bare_models/") {
			gitConnectionStatus = "Bare repo, no tests required"
		}

		results = append(results, map[string]interface{}{
			"Project":                pName,
			"# Models":               modelCount,
			"# View Files":           viewFileCount,
			"Git Connection Status":  gitConnectionStatus,
			"PR Mode":                string(*p.PullRequestMode),
			"Is Validation Required": *p.ValidationRequired,
		})
	}
	return results, nil
}

func (t *analyzeTool) models(ctx context.Context, project, model string) ([]map[string]interface{}, error) {
	logger, err := util.LoggerFromContext(ctx)
	if err != nil {
		return nil, fmt.Errorf("unable to get logger from ctx: %s", err)
	}
	logger.InfoContext(ctx, "Analyzing models...")

	usedModels, err := t.getUsedModels(ctx)
	if err != nil {
		return nil, err
	}

	lookmlModels, err := t.SdkClient.AllLookmlModels(v4.RequestAllLookmlModels{}, nil)
	if err != nil {
		return nil, fmt.Errorf("error fetching LookML models: %w", err)
	}

	var results []map[string]interface{}
	for _, m := range lookmlModels {
		if (project == "" || (m.ProjectName != nil && *m.ProjectName == project)) &&
			(model == "" || (m.Name != nil && *m.Name == model)) {

			queryCount := 0
			if qc, ok := usedModels[*m.Name]; ok {
				queryCount = qc
			}

			exploreCount := 0
			if m.Explores != nil {
				exploreCount = len(*m.Explores)
			}

			results = append(results, map[string]interface{}{
				"Project":     *m.ProjectName,
				"Model":       *m.Name,
				"# Explores":  exploreCount,
				"Query Count": queryCount,
			})
		}
	}
	return results, nil
}

func (t *analyzeTool) getUsedModels(ctx context.Context) (map[string]int, error) {
	limit := "5000"
	query := &v4.WriteQuery{
		Model:  "system__activity",
		View:   "history",
		Fields: &[]string{"history.query_run_count", "query.model"},
		Filters: &map[string]any{
			"history.created_date":    fmt.Sprintf("%d days", t.timeframe),
			"query.model":             "-system__activity, -i__looker",
			"history.query_run_count": fmt.Sprintf(">%d", t.minQueries-1),
			"user.dev_branch_name":    "NULL",
		},
		Limit: &limit,
	}
	raw, err := lookercommon.RunInlineQuery(ctx, t.SdkClient, query, "json", nil)
	if err != nil {
		return nil, err
	}

	var data []map[string]interface{}
	_ = json.Unmarshal([]byte(raw), &data)

	results := make(map[string]int)
	for _, row := range data {
		model, _ := row["query.model"].(string)
		count, _ := row["history.query_run_count"].(float64)
		results[model] = int(count)
	}
	return results, nil
}
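
// The inline query above returns rows shaped like the following (editor's
// illustrative sample), which the loop folds into a model -> run-count map:
//
//	[{"query.model": "thelook", "history.query_run_count": 123},
//	 {"query.model": "ecommerce", "history.query_run_count": 45}]
//	=> map[string]int{"thelook": 123, "ecommerce": 45}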

func (t *analyzeTool) getUsedExploreFields(ctx context.Context, model, explore string) (map[string]int, error) {
	limit := "5000"
	query := &v4.WriteQuery{
		Model:  "system__activity",
		View:   "history",
		Fields: &[]string{"query.formatted_fields", "query.filters", "history.query_run_count"},
		Filters: &map[string]any{
			"history.created_date":   fmt.Sprintf("%d days", t.timeframe),
			"query.model":            strings.ReplaceAll(model, "_", "^_"),
			"query.view":             strings.ReplaceAll(explore, "_", "^_"),
			"query.formatted_fields": "-NULL",
			"history.workspace_id":   "production",
		},
		Limit: &limit,
	}
	raw, err := lookercommon.RunInlineQuery(ctx, t.SdkClient, query, "json", nil)
	if err != nil {
		return nil, err
	}

	var data []map[string]interface{}
	_ = json.Unmarshal([]byte(raw), &data)

	results := make(map[string]int)
	fieldRegex := regexp.MustCompile(`(\w+\.\w+)`)

	for _, row := range data {
		count, _ := row["history.query_run_count"].(float64)
		formattedFields, _ := row["query.formatted_fields"].(string)
		filters, _ := row["query.filters"].(string)

		usedFields := make(map[string]bool)

		for _, field := range fieldRegex.FindAllString(formattedFields, -1) {
			results[field] += int(count)
			usedFields[field] = true
		}

		for _, field := range fieldRegex.FindAllString(filters, -1) {
			if _, ok := usedFields[field]; !ok {
				results[field] += int(count)
			}
		}
	}
	return results, nil
}

func (t *analyzeTool) explores(ctx context.Context, model, explore string) ([]map[string]interface{}, error) {
	logger, err := util.LoggerFromContext(ctx)
	if err != nil {
		return nil, fmt.Errorf("unable to get logger from ctx: %s", err)
	}
	logger.InfoContext(ctx, "Analyzing explores...")

	lookmlModels, err := t.SdkClient.AllLookmlModels(v4.RequestAllLookmlModels{}, nil)
	if err != nil {
		return nil, fmt.Errorf("error fetching LookML models: %w", err)
	}

	var results []map[string]interface{}
	for _, m := range lookmlModels {
		if model != "" && (m.Name == nil || *m.Name != model) {
			continue
		}
		if m.Explores == nil {
			continue
		}

		for _, e := range *m.Explores {
			if explore != "" && (e.Name == nil || *e.Name != explore) {
				continue
			}
			if e.Name == nil {
				continue
			}

			// Get detailed explore info to count fields and joins
			req := v4.RequestLookmlModelExplore{
				LookmlModelName: *m.Name,
				ExploreName:     *e.Name,
			}
			exploreDetail, err := t.SdkClient.LookmlModelExplore(req, nil)
			if err != nil {
				// Log the error but continue to the next explore if possible
				logger.ErrorContext(ctx, fmt.Sprintf("Error fetching detail for explore %s.%s: %v", *m.Name, *e.Name, err))
				continue
			}

			fieldCount := 0
			if exploreDetail.Fields != nil {
				fieldCount = len(*exploreDetail.Fields.Dimensions) + len(*exploreDetail.Fields.Measures)
			}

			joinCount := 0
			if exploreDetail.Joins != nil {
				joinCount = len(*exploreDetail.Joins)
			}

			usedFields, err := t.getUsedExploreFields(ctx, *m.Name, *e.Name)
			if err != nil {
				logger.ErrorContext(ctx, fmt.Sprintf("Error fetching used fields for explore %s.%s: %v", *m.Name, *e.Name, err))
				continue
			}

			allFields := []string{}
			if exploreDetail.Fields != nil {
				for _, d := range *exploreDetail.Fields.Dimensions {
					if !*d.Hidden {
						allFields = append(allFields, *d.Name)
					}
				}
				for _, ms := range *exploreDetail.Fields.Measures {
					if !*ms.Hidden {
						allFields = append(allFields, *ms.Name)
					}
				}
			}

			unusedFieldsCount := 0
			for _, field := range allFields {
				if _, ok := usedFields[field]; !ok {
					unusedFieldsCount++
				}
			}

			joinStats := make(map[string]int)
			if exploreDetail.Joins != nil {
				for field, queryCount := range usedFields {
					join := strings.Split(field, ".")[0]
					joinStats[join] += queryCount
				}
				for _, join := range *exploreDetail.Joins {
					if _, ok := joinStats[*join.Name]; !ok {
						joinStats[*join.Name] = 0
					}
				}
			}

			unusedJoinsCount := 0
			for _, count := range joinStats {
				if count == 0 {
					unusedJoinsCount++
				}
			}

			// Use an inline query to get query count for the explore
			limit := "1"
			queryCountQueryBody := &v4.WriteQuery{
				Model:  "system__activity",
				View:   "history",
				Fields: &[]string{"history.query_run_count"},
				Filters: &map[string]any{
					"query.model":             *m.Name,
					"query.view":              *e.Name,
					"history.created_date":    fmt.Sprintf("%d days", t.timeframe),
					"history.query_run_count": fmt.Sprintf(">%d", t.minQueries-1),
					"user.dev_branch_name":    "NULL",
				},
				Limit: &limit,
			}

			rawQueryCount, err := lookercommon.RunInlineQuery(ctx, t.SdkClient, queryCountQueryBody, "json", nil)
			if err != nil {
				return nil, err
			}
			queryCount := 0
			var data []map[string]interface{}
			_ = json.Unmarshal([]byte(rawQueryCount), &data)
			if len(data) > 0 {
				if count, ok := data[0]["history.query_run_count"].(float64); ok {
					queryCount = int(count)
				}
			}

			results = append(results, map[string]interface{}{
				"Model":           *m.Name,
				"Explore":         *e.Name,
				"Is Hidden":       *e.Hidden,
				"Has Description": e.Description != nil && *e.Description != "",
				"# Joins":         joinCount,
				"# Unused Joins":  unusedJoinsCount,
				"# Unused Fields": unusedFieldsCount,
				"# Fields":        fieldCount,
				"Query Count":     queryCount,
			})
		}
	}
	return results, nil
}

// =================================================================================================================
// END LOOKER HEALTH ANALYZE CORE LOGIC
// =================================================================================================================

```
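
`getUsedExploreFields` above credits each row's run count to a field at most
once, even when the field shows up in both the selected fields and the
filters. Here is a self-contained sketch of that de-duplication (an editor's
illustration with made-up row data; `tallyFieldUsage` and `row` are
hypothetical names):

```go
package main

import (
	"fmt"
	"regexp"
)

var fieldRe = regexp.MustCompile(`(\w+\.\w+)`)

type row struct {
	Fields, Filters string
	RunCount        int
}

// tallyFieldUsage mirrors the aggregation in getUsedExploreFields: fields in
// the select list are counted first, then fields appearing only in filters,
// so a field present in both is not double-counted for the same row.
func tallyFieldUsage(rows []row) map[string]int {
	totals := map[string]int{}
	for _, r := range rows {
		seen := map[string]bool{}
		for _, f := range fieldRe.FindAllString(r.Fields, -1) {
			totals[f] += r.RunCount
			seen[f] = true
		}
		for _, f := range fieldRe.FindAllString(r.Filters, -1) {
			if !seen[f] {
				totals[f] += r.RunCount
			}
		}
	}
	return totals
}

func main() {
	rows := []row{{Fields: "orders.count, users.id", Filters: `users.id: "42"`, RunCount: 10}}
	fmt.Println(tallyFieldUsage(rows)) // map[orders.count:10 users.id:10]
}
```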

--------------------------------------------------------------------------------
/internal/sources/bigquery/bigquery.go:
--------------------------------------------------------------------------------

```go
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package bigquery

import (
	"context"
	"fmt"
	"net/http"
	"strings"
	"sync"
	"time"

	bigqueryapi "cloud.google.com/go/bigquery"
	dataplexapi "cloud.google.com/go/dataplex/apiv1"
	"github.com/goccy/go-yaml"
	"github.com/googleapis/genai-toolbox/internal/sources"
	"github.com/googleapis/genai-toolbox/internal/util"
	"go.opentelemetry.io/otel/trace"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	bigqueryrestapi "google.golang.org/api/bigquery/v2"
	"google.golang.org/api/googleapi"
	"google.golang.org/api/option"
)

const SourceKind string = "bigquery"

const (
	// No write operations are allowed.
	WriteModeBlocked string = "blocked"
	// Only protected write operations are allowed in a BigQuery session.
	WriteModeProtected string = "protected"
	// All write operations are allowed.
	WriteModeAllowed string = "allowed"
)

// validate interface
var _ sources.SourceConfig = Config{}

type BigqueryClientCreator func(tokenString string, wantRestService bool) (*bigqueryapi.Client, *bigqueryrestapi.Service, error)

type BigQuerySessionProvider func(ctx context.Context) (*Session, error)

type DataplexClientCreator func(tokenString string) (*dataplexapi.CatalogClient, error)

func init() {
	if !sources.Register(SourceKind, newConfig) {
		panic(fmt.Sprintf("source kind %q already registered", SourceKind))
	}
}

func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (sources.SourceConfig, error) {
	actual := Config{Name: name}
	if err := decoder.DecodeContext(ctx, &actual); err != nil {
		return nil, err
	}
	return actual, nil
}

type Config struct {
	// BigQuery configs
	Name            string   `yaml:"name" validate:"required"`
	Kind            string   `yaml:"kind" validate:"required"`
	Project         string   `yaml:"project" validate:"required"`
	Location        string   `yaml:"location"`
	WriteMode       string   `yaml:"writeMode"`
	AllowedDatasets []string `yaml:"allowedDatasets"`
	UseClientOAuth  bool     `yaml:"useClientOAuth"`
}

func (r Config) SourceConfigKind() string {
	// Returns BigQuery source kind
	return SourceKind
}

func (r Config) Initialize(ctx context.Context, tracer trace.Tracer) (sources.Source, error) {
	if r.WriteMode == "" {
		r.WriteMode = WriteModeAllowed
	}

	if r.WriteMode == WriteModeProtected && r.UseClientOAuth {
		return nil, fmt.Errorf("writeMode 'protected' cannot be used with useClientOAuth 'true'")
	}

	var client *bigqueryapi.Client
	var restService *bigqueryrestapi.Service
	var tokenSource oauth2.TokenSource
	var clientCreator BigqueryClientCreator
	var err error

	if r.UseClientOAuth {
		clientCreator, err = newBigQueryClientCreator(ctx, tracer, r.Project, r.Location, r.Name)
		if err != nil {
			return nil, fmt.Errorf("error constructing client creator: %w", err)
		}
	} else {
		// Initializes a BigQuery Google SQL source
		client, restService, tokenSource, err = initBigQueryConnection(ctx, tracer, r.Name, r.Project, r.Location)
		if err != nil {
			return nil, fmt.Errorf("error creating client from ADC: %w", err)
		}
	}

	allowedDatasets := make(map[string]struct{})
	// Get full id of allowed datasets and verify they exist.
	if len(r.AllowedDatasets) > 0 {
		for _, allowed := range r.AllowedDatasets {
			var projectID, datasetID, allowedFullID string
			if strings.Contains(allowed, ".") {
				parts := strings.Split(allowed, ".")
				if len(parts) != 2 {
					return nil, fmt.Errorf("invalid allowedDataset format: %q, expected 'project.dataset' or 'dataset'", allowed)
				}
				projectID = parts[0]
				datasetID = parts[1]
				allowedFullID = allowed
			} else {
				projectID = r.Project
				datasetID = allowed
				allowedFullID = fmt.Sprintf("%s.%s", projectID, datasetID)
			}

			if client != nil {
				dataset := client.DatasetInProject(projectID, datasetID)
				_, err := dataset.Metadata(ctx)
				if err != nil {
					if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == http.StatusNotFound {
						return nil, fmt.Errorf("allowedDataset '%s' not found in project '%s'", datasetID, projectID)
					}
					return nil, fmt.Errorf("failed to verify allowedDataset '%s' in project '%s': %w", datasetID, projectID, err)
				}
			}
			allowedDatasets[allowedFullID] = struct{}{}
		}
	}

	s := &Source{
		Name:               r.Name,
		Kind:               SourceKind,
		Project:            r.Project,
		Location:           r.Location,
		Client:             client,
		RestService:        restService,
		TokenSource:        tokenSource,
		MaxQueryResultRows: 50,
		ClientCreator:      clientCreator,
		WriteMode:          r.WriteMode,
		AllowedDatasets:    allowedDatasets,
		UseClientOAuth:     r.UseClientOAuth,
	}
	s.SessionProvider = s.newBigQuerySessionProvider()

	if r.WriteMode != WriteModeAllowed && r.WriteMode != WriteModeBlocked && r.WriteMode != WriteModeProtected {
		return nil, fmt.Errorf("invalid writeMode %q: must be one of %q, %q, or %q", r.WriteMode, WriteModeAllowed, WriteModeProtected, WriteModeBlocked)
	}
	s.makeDataplexCatalogClient = s.lazyInitDataplexClient(ctx, tracer)
	return s, nil
}

var _ sources.Source = &Source{}

type Source struct {
	// BigQuery Google SQL struct with client
	Name                      string `yaml:"name"`
	Kind                      string `yaml:"kind"`
	Project                   string
	Location                  string
	Client                    *bigqueryapi.Client
	RestService               *bigqueryrestapi.Service
	TokenSource               oauth2.TokenSource
	MaxQueryResultRows        int
	ClientCreator             BigqueryClientCreator
	AllowedDatasets           map[string]struct{}
	UseClientOAuth            bool
	WriteMode                 string
	sessionMutex              sync.Mutex
	makeDataplexCatalogClient func() (*dataplexapi.CatalogClient, DataplexClientCreator, error)
	SessionProvider           BigQuerySessionProvider
	Session                   *Session
}

type Session struct {
	ID           string
	ProjectID    string
	DatasetID    string
	CreationTime time.Time
	LastUsed     time.Time
}

func (s *Source) SourceKind() string {
	// Returns BigQuery Google SQL source kind
	return SourceKind
}

func (s *Source) BigQueryClient() *bigqueryapi.Client {
	return s.Client
}

func (s *Source) BigQueryRestService() *bigqueryrestapi.Service {
	return s.RestService
}

func (s *Source) BigQueryWriteMode() string {
	return s.WriteMode
}

func (s *Source) BigQuerySession() BigQuerySessionProvider {
	return s.SessionProvider
}

func (s *Source) newBigQuerySessionProvider() BigQuerySessionProvider {
	return func(ctx context.Context) (*Session, error) {
		if s.WriteMode != WriteModeProtected {
			return nil, nil
		}

		s.sessionMutex.Lock()
		defer s.sessionMutex.Unlock()

		logger, err := util.LoggerFromContext(ctx)
		if err != nil {
			return nil, fmt.Errorf("failed to get logger from context: %w", err)
		}

		if s.Session != nil {
			// Absolute 7-day lifetime check.
			const sessionMaxLifetime = 7 * 24 * time.Hour
			// This assumes a single task will not exceed 30 minutes, preventing it from failing mid-execution.
			const refreshThreshold = 30 * time.Minute
			if time.Since(s.Session.CreationTime) > (sessionMaxLifetime - refreshThreshold) {
				logger.DebugContext(ctx, "Session is approaching its 7-day maximum lifetime. Creating a new one.")
			} else {
				job := &bigqueryrestapi.Job{
					Configuration: &bigqueryrestapi.JobConfiguration{
						DryRun: true,
						Query: &bigqueryrestapi.JobConfigurationQuery{
							Query:                "SELECT 1",
							UseLegacySql:         new(bool),
							ConnectionProperties: []*bigqueryrestapi.ConnectionProperty{{Key: "session_id", Value: s.Session.ID}},
						},
					},
				}
				_, err := s.RestService.Jobs.Insert(s.Project, job).Do()
				if err == nil {
					s.Session.LastUsed = time.Now()
					return s.Session, nil
				}
				logger.DebugContext(ctx, "Session validation failed (likely expired), creating a new one.", "error", err)
			}
		}

		// Create a new session if one doesn't exist, it has passed its 7-day lifetime,
		// or it failed the validation dry run.

		creationTime := time.Now()
		job := &bigqueryrestapi.Job{
			JobReference: &bigqueryrestapi.JobReference{
				ProjectId: s.Project,
				Location:  s.Location,
			},
			Configuration: &bigqueryrestapi.JobConfiguration{
				DryRun: true,
				Query: &bigqueryrestapi.JobConfigurationQuery{
					Query:         "SELECT 1",
					CreateSession: true,
				},
			},
		}

		createdJob, err := s.RestService.Jobs.Insert(s.Project, job).Do()
		if err != nil {
			return nil, fmt.Errorf("failed to create new session: %w", err)
		}

		var sessionID, sessionDatasetID, projectID string
		if createdJob.Statistics != nil && createdJob.Statistics.SessionInfo != nil {
			sessionID = createdJob.Statistics.SessionInfo.SessionId
		} else {
			return nil, fmt.Errorf("failed to get session ID from new session job")
		}

		if createdJob.Configuration != nil && createdJob.Configuration.Query != nil && createdJob.Configuration.Query.DestinationTable != nil {
			sessionDatasetID = createdJob.Configuration.Query.DestinationTable.DatasetId
			projectID = createdJob.Configuration.Query.DestinationTable.ProjectId
		} else {
			return nil, fmt.Errorf("failed to get session dataset ID from new session job")
		}

		s.Session = &Session{
			ID:           sessionID,
			ProjectID:    projectID,
			DatasetID:    sessionDatasetID,
			CreationTime: creationTime,
			LastUsed:     creationTime,
		}
		return s.Session, nil
	}
}

func (s *Source) UseClientAuthorization() bool {
	return s.UseClientOAuth
}

func (s *Source) BigQueryProject() string {
	return s.Project
}

func (s *Source) BigQueryLocation() string {
	return s.Location
}

func (s *Source) BigQueryTokenSource() oauth2.TokenSource {
	return s.TokenSource
}

func (s *Source) BigQueryTokenSourceWithScope(ctx context.Context, scope string) (oauth2.TokenSource, error) {
	return google.DefaultTokenSource(ctx, scope)
}

func (s *Source) GetMaxQueryResultRows() int {
	return s.MaxQueryResultRows
}

func (s *Source) BigQueryClientCreator() BigqueryClientCreator {
	return s.ClientCreator
}

func (s *Source) BigQueryAllowedDatasets() []string {
	if len(s.AllowedDatasets) == 0 {
		return nil
	}
	datasets := make([]string, 0, len(s.AllowedDatasets))
	for d := range s.AllowedDatasets {
		datasets = append(datasets, d)
	}
	return datasets
}

// IsDatasetAllowed checks if a given dataset is accessible based on the source's configuration.
func (s *Source) IsDatasetAllowed(projectID, datasetID string) bool {
	// If the normalized map is empty, it means no restrictions were configured.
	if len(s.AllowedDatasets) == 0 {
		return true
	}

	targetDataset := fmt.Sprintf("%s.%s", projectID, datasetID)
	_, ok := s.AllowedDatasets[targetDataset]
	return ok
}
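
// Illustrative behavior (editor's note): with AllowedDatasets containing
// "proj-a.sales", IsDatasetAllowed("proj-a", "sales") is true and
// IsDatasetAllowed("proj-b", "sales") is false. With no restrictions
// configured, the map is empty and every dataset is allowed.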

func (s *Source) MakeDataplexCatalogClient() func() (*dataplexapi.CatalogClient, DataplexClientCreator, error) {
	return s.makeDataplexCatalogClient
}

func (s *Source) lazyInitDataplexClient(ctx context.Context, tracer trace.Tracer) func() (*dataplexapi.CatalogClient, DataplexClientCreator, error) {
	var once sync.Once
	var client *dataplexapi.CatalogClient
	var clientCreator DataplexClientCreator
	var err error

	return func() (*dataplexapi.CatalogClient, DataplexClientCreator, error) {
		once.Do(func() {
			c, cc, e := initDataplexConnection(ctx, tracer, s.Name, s.Project, s.UseClientOAuth)
			if e != nil {
				err = fmt.Errorf("failed to initialize dataplex client: %w", e)
				return
			}
			client = c
			clientCreator = cc
		})
		return client, clientCreator, err
	}
}

func initBigQueryConnection(
	ctx context.Context,
	tracer trace.Tracer,
	name string,
	project string,
	location string,
) (*bigqueryapi.Client, *bigqueryrestapi.Service, oauth2.TokenSource, error) {
	ctx, span := sources.InitConnectionSpan(ctx, tracer, SourceKind, name)
	defer span.End()

	cred, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		return nil, nil, nil, fmt.Errorf("failed to find default Google Cloud credentials with scope %q: %w", bigqueryapi.Scope, err)
	}

	userAgent, err := util.UserAgentFromContext(ctx)
	if err != nil {
		return nil, nil, nil, err
	}

	// Initialize the high-level BigQuery client
	client, err := bigqueryapi.NewClient(ctx, project, option.WithUserAgent(userAgent), option.WithCredentials(cred))
	if err != nil {
		return nil, nil, nil, fmt.Errorf("failed to create BigQuery client for project %q: %w", project, err)
	}
	client.Location = location

	// Initialize the low-level BigQuery REST service using the same credentials
	restService, err := bigqueryrestapi.NewService(ctx, option.WithUserAgent(userAgent), option.WithCredentials(cred))
	if err != nil {
		return nil, nil, nil, fmt.Errorf("failed to create BigQuery v2 service: %w", err)
	}

	return client, restService, cred.TokenSource, nil
}

// initBigQueryConnectionWithOAuthToken initializes a BigQuery client with an
// OAuth access token.
func initBigQueryConnectionWithOAuthToken(
	ctx context.Context,
	tracer trace.Tracer,
	project string,
	location string,
	name string,
	userAgent string,
	tokenString string,
	wantRestService bool,
) (*bigqueryapi.Client, *bigqueryrestapi.Service, error) {
	ctx, span := sources.InitConnectionSpan(ctx, tracer, SourceKind, name)
	defer span.End()
	// Construct token source
	token := &oauth2.Token{
		AccessToken: tokenString,
	}
	ts := oauth2.StaticTokenSource(token)

	// Initialize the BigQuery client with tokenSource
	client, err := bigqueryapi.NewClient(ctx, project, option.WithUserAgent(userAgent), option.WithTokenSource(ts))
	if err != nil {
		return nil, nil, fmt.Errorf("failed to create BigQuery client for project %q: %w", project, err)
	}
	client.Location = location

	if wantRestService {
		// Initialize the low-level BigQuery REST service using the same credentials
		restService, err := bigqueryrestapi.NewService(ctx, option.WithUserAgent(userAgent), option.WithTokenSource(ts))
		if err != nil {
			return nil, nil, fmt.Errorf("failed to create BigQuery v2 service: %w", err)
		}
		return client, restService, nil
	}

	return client, nil, nil
}

// newBigQueryClientCreator sets the project parameters for the init helper
// function. The returned function takes in an OAuth access token and uses it to
// create a BQ client.
func newBigQueryClientCreator(
	ctx context.Context,
	tracer trace.Tracer,
	project string,
	location string,
	name string,
) (func(string, bool) (*bigqueryapi.Client, *bigqueryrestapi.Service, error), error) {
	userAgent, err := util.UserAgentFromContext(ctx)
	if err != nil {
		return nil, err
	}

	return func(tokenString string, wantRestService bool) (*bigqueryapi.Client, *bigqueryrestapi.Service, error) {
		return initBigQueryConnectionWithOAuthToken(ctx, tracer, project, location, name, userAgent, tokenString, wantRestService)
	}, nil
}

func initDataplexConnection(
	ctx context.Context,
	tracer trace.Tracer,
	name string,
	project string,
	useClientOAuth bool,
) (*dataplexapi.CatalogClient, DataplexClientCreator, error) {
	var client *dataplexapi.CatalogClient
	var clientCreator DataplexClientCreator
	var err error

	ctx, span := sources.InitConnectionSpan(ctx, tracer, SourceKind, name)
	defer span.End()

	cred, err := google.FindDefaultCredentials(ctx)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to find default Google Cloud credentials: %w", err)
	}

	userAgent, err := util.UserAgentFromContext(ctx)
	if err != nil {
		return nil, nil, err
	}

	if useClientOAuth {
		clientCreator = newDataplexClientCreator(ctx, project, userAgent)
	} else {
		client, err = dataplexapi.NewCatalogClient(ctx, option.WithUserAgent(userAgent), option.WithCredentials(cred))
		if err != nil {
			return nil, nil, fmt.Errorf("failed to create Dataplex client for project %q: %w", project, err)
		}
	}

	return client, clientCreator, nil
}

func initDataplexConnectionWithOAuthToken(
	ctx context.Context,
	project string,
	userAgent string,
	tokenString string,
) (*dataplexapi.CatalogClient, error) {
	// Construct token source
	token := &oauth2.Token{
		AccessToken: tokenString,
	}
	ts := oauth2.StaticTokenSource(token)

	client, err := dataplexapi.NewCatalogClient(ctx, option.WithUserAgent(userAgent), option.WithTokenSource(ts))
	if err != nil {
		return nil, fmt.Errorf("failed to create Dataplex client for project %q: %w", project, err)
	}
	return client, nil
}

func newDataplexClientCreator(
	ctx context.Context,
	project string,
	userAgent string,
) func(string) (*dataplexapi.CatalogClient, error) {
	return func(tokenString string) (*dataplexapi.CatalogClient, error) {
		return initDataplexConnectionWithOAuthToken(ctx, project, userAgent, tokenString)
	}
}

```
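
The session provider above recreates a BigQuery session once it is within 30
minutes of the 7-day maximum lifetime, so a long-running task does not fail
mid-execution. A minimal sketch of that window check (an editor's
illustration; `needsNewSession` is a hypothetical helper, but the constants
mirror the source):

```go
package main

import (
	"fmt"
	"time"
)

const (
	sessionMaxLifetime = 7 * 24 * time.Hour
	refreshThreshold   = 30 * time.Minute
)

// needsNewSession reports whether a session created at the given time is
// close enough to its maximum lifetime that it should be replaced now.
func needsNewSession(creation, now time.Time) bool {
	return now.Sub(creation) > sessionMaxLifetime-refreshThreshold
}

func main() {
	now := time.Now()
	fresh := now.Add(-1 * time.Hour)                   // one hour old
	stale := now.Add(-7*24*time.Hour + 10*time.Minute) // ten minutes shy of 7 days
	fmt.Println(needsNewSession(fresh, now))           // false
	fmt.Println(needsNewSession(stale, now))           // true
}
```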

--------------------------------------------------------------------------------
/docs/en/getting-started/introduction/_index.md:
--------------------------------------------------------------------------------

```markdown
---
title: "Introduction"
type: docs
weight: 1
description: >
  An introduction to MCP Toolbox for Databases.
---

MCP Toolbox for Databases is an open source MCP server for databases. It
enables you to develop tools more easily, quickly, and securely by handling
complexities such as connection pooling, authentication, and more.

{{< notice note >}}
This solution was originally named “Gen AI Toolbox for
Databases” as its initial development predated MCP, but was renamed to align
with recently added MCP compatibility.
{{< /notice >}}

## Why Toolbox?

Toolbox helps you build Gen AI tools that let your agents access data in your
database. Toolbox provides:

- **Simplified development**: Integrate tools to your agent in less than 10
  lines of code, reuse tools between multiple agents or frameworks, and deploy
  new versions of tools more easily.
- **Better performance**: Best practices such as connection pooling,
  authentication, and more.
- **Enhanced security**: Integrated auth for more secure access to your data.
- **End-to-end observability**: Out of the box metrics and tracing with built-in
  support for OpenTelemetry.

**⚡ Supercharge Your Workflow with an AI Database Assistant ⚡**

Stop context-switching and let your AI assistant become a true co-developer. By
[connecting your IDE to your databases with MCP Toolbox][connect-ide], you can
delegate complex and time-consuming database tasks, allowing you to build faster
and focus on what matters. This isn't just about code completion; it's about
giving your AI the context it needs to handle the entire development lifecycle.

Here’s how it will save you time:

- **Query in Plain English**: Interact with your data using natural language
  right from your IDE. Ask complex questions like, *"How many orders were
  delivered in 2024, and what items were in them?"* without writing any SQL.
- **Automate Database Management**: Simply describe your data needs, and let the
  AI assistant manage your database for you. It can handle generating queries,
  creating tables, adding indexes, and more.
- **Generate Context-Aware Code**: Empower your AI assistant to generate
  application code and tests with a deep understanding of your real-time
  database schema. This accelerates the development cycle by ensuring the
  generated code is directly usable.
- **Slash Development Overhead**: Radically reduce the time spent on manual
  setup and boilerplate. MCP Toolbox helps streamline lengthy database
  configurations, repetitive code, and error-prone schema migrations.

Learn [how to connect your AI tools (IDEs) to Toolbox using MCP][connect-ide].

[connect-ide]: ../../how-to/connect-ide/

## General Architecture

Toolbox sits between your application's orchestration framework and your
database, providing a control plane that is used to modify, distribute, or
invoke tools. It simplifies the management of your tools by providing you with a
centralized location to store and update tools, allowing you to share tools
between agents and applications and update those tools without necessarily
redeploying your application.

![architecture](./architecture.png)

## Getting Started

### Installing the server

For the latest version, check the [releases page][releases] and use the
following instructions for your OS and CPU architecture.

[releases]: https://github.com/googleapis/genai-toolbox/releases

<!-- {x-release-please-start-version} -->
{{< tabpane text=true >}}
{{% tab header="Binary" lang="en" %}}
{{< tabpane text=true >}}
{{% tab header="Linux (AMD64)" lang="en" %}}
To install Toolbox as a binary on Linux (AMD64):
```sh
# see releases page for other versions
export VERSION=0.17.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Apple Silicon)" lang="en" %}}
To install Toolbox as a binary on macOS (Apple Silicon):
```sh
# see releases page for other versions
export VERSION=0.17.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Intel)" lang="en" %}}
To install Toolbox as a binary on macOS (Intel):
```sh
# see releases page for other versions
export VERSION=0.17.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="Windows (AMD64)" lang="en" %}}
To install Toolbox as a binary on Windows (AMD64):
```powershell
# see releases page for other versions
$VERSION = "0.17.0"
Invoke-WebRequest -Uri "https://storage.googleapis.com/genai-toolbox/v$VERSION/windows/amd64/toolbox.exe" -OutFile "toolbox.exe"
```
{{% /tab %}}
{{< /tabpane >}}
{{% /tab %}}
{{% tab header="Container image" lang="en" %}}
You can also install Toolbox as a container:

```sh
# see releases page for other versions
export VERSION=0.17.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```

{{% /tab %}}
{{% tab header="Homebrew" lang="en" %}}
To install Toolbox using Homebrew on macOS or Linux:

```sh
brew install mcp-toolbox
```

{{% /tab %}}
{{% tab header="Compile from source" lang="en" %}}

To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:

```sh
go install github.com/googleapis/genai-toolbox@v0.17.0
```

{{% /tab %}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->

### Running the server

[Configure](../configure.md) a `tools.yaml` to define your tools, and then
execute `toolbox` to start the server:

```sh
./toolbox --tools-file "tools.yaml"
```
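
For orientation, here is a minimal `tools.yaml` sketch with a single PostgreSQL
source and tool (the connection details, tool name, and statement are
illustrative; see the [configuration guide](../configure.md) for the full
schema):

```yaml
sources:
  my-pg-source:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: toolbox_db
    user: toolbox_user
    password: my-password
tools:
  search-hotels-by-name:
    kind: postgres-sql
    source: my-pg-source
    description: Search for hotels based on name.
    parameters:
      - name: name
        type: string
        description: The name of the hotel.
    statement: SELECT * FROM hotels WHERE name ILIKE '%' || $1 || '%';
```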

{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
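
For example, to start the server with live reloading disabled:

```sh
./toolbox --tools-file "tools.yaml" --disable-reload
```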

#### Launching Toolbox UI

To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../../how-to/toolbox-ui/index.md).

```sh
./toolbox --ui
```

#### Homebrew Users

If you installed Toolbox using Homebrew, the `toolbox` binary is available in
your system path. You can start the server with the same command:

```sh
toolbox --tools-file "tools.yaml"
```

You can use `toolbox help` for a full list of flags! To stop the server, send a
terminate signal (`ctrl+c` on most platforms).

For more detailed documentation on deploying to different environments, check
out the resources in the [How-to section](../../how-to/).

### Integrating your application

Once your server is up and running, you can load the tools into your
application. See the client SDKs below for various frameworks:

#### Python

{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}

Once you've installed the [Toolbox Core
SDK](https://pypi.org/project/toolbox-core/), you can load
tools:

{{< highlight python >}}
from toolbox_core import ToolboxClient

# update the url to point to your server

async with ToolboxClient("http://127.0.0.1:5000") as client:

    # these tools can be passed to your application!
    tools = await client.load_toolset("toolset_name")
{{< /highlight >}}
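
Loaded tools are directly callable. As a sketch (assuming a tool named
`search-hotels-by-name` that takes a single `name` parameter), you can invoke
one like this:

{{< highlight python >}}
tool = await client.load_tool("search-hotels-by-name")
result = await tool(name="Hilton")
{{< /highlight >}}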

For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md).

{{% /tab %}}
{{% tab header="LangChain" lang="en" %}}

Once you've installed the [Toolbox LangChain
SDK](https://pypi.org/project/toolbox-langchain/), you can load
tools:

{{< highlight python >}}
from toolbox_langchain import ToolboxClient

# update the url to point to your server

async with ToolboxClient("http://127.0.0.1:5000") as client:

    # these tools can be passed to your application!
    tools = client.load_toolset()
{{< /highlight >}}

For more detailed instructions on using the Toolbox LangChain SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/README.md).

{{% /tab %}}
{{% tab header="Llamaindex" lang="en" %}}

Once you've installed the [Toolbox LlamaIndex
SDK](https://github.com/googleapis/genai-toolbox-llamaindex-python), you can load
tools:

{{< highlight python >}}
from toolbox_llamaindex import ToolboxClient

# update the url to point to your server

async with ToolboxClient("http://127.0.0.1:5000") as client:

    # these tools can be passed to your application!
    tools = client.load_toolset()
{{< /highlight >}}

For more detailed instructions on using the Toolbox LlamaIndex SDK, see the
[project's
README](https://github.com/googleapis/genai-toolbox-llamaindex-python/blob/main/README.md).

{{% /tab %}}
{{< /tabpane >}}

#### JavaScript/TypeScript

Once you've installed the [Toolbox Core
SDK](https://www.npmjs.com/package/@toolbox-sdk/core), you can load
tools:

{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}

{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';

// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');
{{< /highlight >}}
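
Each loaded tool is a plain async function. As a sketch (the tool name and its
`name` parameter are illustrative), you can call one directly:

{{< highlight javascript >}}
const myTool = await client.loadTool('toolName');
const result = await myTool({ name: 'Hilton' });
{{< /highlight >}}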

{{% /tab %}}
{{% tab header="LangChain/Langraph" lang="en" %}}

{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { tool } from '@langchain/core/tools';

// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');

// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => tool(toolboxTool, {
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    schema: toolboxTool.getParamSchema()
});

// Use these tools in your LangChain/LangGraph applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}

{{% /tab %}}
{{% tab header="Genkit" lang="en" %}}

{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { googleAI } from '@genkit-ai/googleai';
import { genkit } from 'genkit';

// Initialise genkit
const ai = genkit({
    plugins: [
        googleAI({
            apiKey: process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY
        })
    ],
    model: googleAI.model('gemini-2.0-flash'),
});

// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');

// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => ai.defineTool({
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    inputSchema: toolboxTool.getParamSchema()
}, toolboxTool)

// Use these tools in your Genkit applications
const tools = toolboxTools.map(getTool);
{{< /highlight >}}

{{% /tab %}}
{{% tab header="LlamaIndex" lang="en" %}}

{{< highlight javascript >}}
import { ToolboxClient } from '@toolbox-sdk/core';
import { tool } from "llamaindex";

// update the url to point to your server
const URL = 'http://127.0.0.1:5000';
let client = new ToolboxClient(URL);

// these tools can be passed to your application!
const toolboxTools = await client.loadToolset('toolsetName');

// Define the basics of the tool: name, description, schema and core logic
const getTool = (toolboxTool) => tool({
    name: toolboxTool.getName(),
    description: toolboxTool.getDescription(),
    parameters: toolboxTool.getParamSchema(),
    execute: toolboxTool
});

// Use these tools in your LlamaIndex applications
const tools = toolboxTools.map(getTool);

{{< /highlight >}}

{{% /tab %}}
{{< /tabpane >}}

For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).

#### Go

Once you've installed the [Toolbox Go
SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core), you can load
tools:

{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}

{{< highlight go >}}
package main

import (
	"context"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
)

func main() {
	// update the url to point to your server
	URL := "http://127.0.0.1:5000"
	ctx := context.Background()

	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}

	// Framework agnostic tools
	tools, err := client.LoadToolset("toolsetName", ctx)
	if err != nil {
		log.Fatalf("Failed to load tools: %v", err)
	}
	_ = tools // pass these tools to your agent or framework
}
{{< /highlight >}}

{{% /tab %}}
{{% tab header="LangChain Go" lang="en" %}}

{{< highlight go >}}
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"github.com/tmc/langchaingo/llms"
)

func main() {
	// Make sure to add the error checks
	// update the url to point to your server
	URL := "http://127.0.0.1:5000"
	ctx := context.Background()

	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}

	// Framework agnostic tool
	tool, err := client.LoadTool("toolName", ctx)
	if err != nil {
		log.Fatalf("Failed to load tools: %v", err)
	}

	// Fetch the tool's input schema
	inputschema, err := tool.InputSchema()
	if err != nil {
		log.Fatalf("Failed to fetch inputSchema: %v", err)
	}

	var paramsSchema map[string]any
	_ = json.Unmarshal(inputschema, &paramsSchema)

	// Use this tool with LangChainGo
	langChainTool := llms.Tool{
		Type: "function",
		Function: &llms.FunctionDefinition{
			Name:        tool.Name(),
			Description: tool.Description(),
			Parameters:  paramsSchema,
		},
	}
	_ = langChainTool // pass this tool definition to your LLM calls
}
{{< /highlight >}}

{{% /tab %}}
{{% tab header="Genkit Go" lang="en" %}}

{{< highlight go >}}
package main

import (
	"context"
	"log"

	"github.com/firebase/genkit/go/genkit"
	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"github.com/googleapis/mcp-toolbox-sdk-go/tbgenkit"
)

func main() {
	// Make sure to add the error checks
	// Update the url to point to your server
	URL := "http://127.0.0.1:5000"
	ctx := context.Background()
	g, err := genkit.Init(ctx)
	if err != nil {
		log.Fatalf("Failed to init Genkit: %v", err)
	}

	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}

	// Framework agnostic tool
	tool, err := client.LoadTool("toolName", ctx)
	if err != nil {
		log.Fatalf("Failed to load tools: %v", err)
	}

	// Convert the tool using the tbgenkit package
	// Use this tool with Genkit Go
	genkitTool, err := tbgenkit.ToGenkitTool(tool, g)
	if err != nil {
		log.Fatalf("Failed to convert tool: %v\n", err)
	}
	_ = genkitTool // register this tool with your Genkit flows
}
{{< /highlight >}}

{{% /tab %}}
{{% tab header="Go GenAI" lang="en" %}}

{{< highlight go >}}
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	"google.golang.org/genai"
)

func main() {
	// Make sure to add the error checks
	// Update the url to point to your server
	URL := "http://127.0.0.1:5000"
	ctx := context.Background()

	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}

	// Framework agnostic tool
	tool, err := client.LoadTool("toolName", ctx)
	if err != nil {
		log.Fatalf("Failed to load tools: %v", err)
	}

	// Fetch the tool's input schema
	inputschema, err := tool.InputSchema()
	if err != nil {
		log.Fatalf("Failed to fetch inputSchema: %v", err)
	}

	var schema *genai.Schema
	_ = json.Unmarshal(inputschema, &schema)

	funcDeclaration := &genai.FunctionDeclaration{
		Name:        tool.Name(),
		Description: tool.Description(),
		Parameters:  schema,
	}

	// Use this tool with Go GenAI
	genAITool := &genai.Tool{
		FunctionDeclarations: []*genai.FunctionDeclaration{funcDeclaration},
	}
	_ = genAITool // pass this tool to your Go GenAI calls
}
{{< /highlight >}}

{{% /tab %}}

{{% tab header="OpenAI Go" lang="en" %}}

{{< highlight go >}}
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/googleapis/mcp-toolbox-sdk-go/core"
	openai "github.com/openai/openai-go"
)

func main() {
	// Make sure to add the error checks
	// Update the url to point to your server
	URL := "http://127.0.0.1:5000"
	ctx := context.Background()

	client, err := core.NewToolboxClient(URL)
	if err != nil {
		log.Fatalf("Failed to create Toolbox client: %v", err)
	}

	// Framework agnostic tool
	tool, err := client.LoadTool("toolName", ctx)
	if err != nil {
		log.Fatalf("Failed to load tools: %v", err)
	}

	// Fetch the tool's input schema
	inputschema, err := tool.InputSchema()
	if err != nil {
		log.Fatalf("Failed to fetch inputSchema: %v", err)
	}

	var paramsSchema openai.FunctionParameters
	_ = json.Unmarshal(inputschema, &paramsSchema)

	// Use this tool with OpenAI Go
	openAITool := openai.ChatCompletionToolParam{
		Function: openai.FunctionDefinitionParam{
			Name:        tool.Name(),
			Description: openai.String(tool.Description()),
			Parameters:  paramsSchema,
		},
	}
	_ = openAITool // pass this tool to your chat completion requests
}
{{< /highlight >}}

{{% /tab %}}
{{< /tabpane >}}

For more detailed instructions on using the Toolbox Go SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md).

For end-to-end samples on using the Toolbox Go SDK with orchestration
frameworks, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples).

```

--------------------------------------------------------------------------------
/tests/firebird/firebird_integration_test.go:
--------------------------------------------------------------------------------

```go
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package firebird

import (
	"context"
	"database/sql"
	"fmt"
	"os"
	"regexp"
	"strings"
	"testing"
	"time"

	"github.com/google/uuid"
	"github.com/googleapis/genai-toolbox/internal/testutils"
	"github.com/googleapis/genai-toolbox/internal/tools"
	"github.com/googleapis/genai-toolbox/tests"
	_ "github.com/nakagami/firebirdsql"
)

var (
	FirebirdSourceKind = "firebird"
	FirebirdToolKind   = "firebird-sql"
	FirebirdDatabase   = os.Getenv("FIREBIRD_DATABASE")
	FirebirdHost       = os.Getenv("FIREBIRD_HOST")
	FirebirdPort       = os.Getenv("FIREBIRD_PORT")
	FirebirdUser       = os.Getenv("FIREBIRD_USER")
	FirebirdPass       = os.Getenv("FIREBIRD_PASS")
)

func getFirebirdVars(t *testing.T) map[string]any {
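	// Switching on the empty string makes the first unset environment
	// variable match its case and fail the test with a pointed message.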
	switch "" {
	case FirebirdDatabase:
		t.Fatal("'FIREBIRD_DATABASE' not set")
	case FirebirdHost:
		t.Fatal("'FIREBIRD_HOST' not set")
	case FirebirdPort:
		t.Fatal("'FIREBIRD_PORT' not set")
	case FirebirdUser:
		t.Fatal("'FIREBIRD_USER' not set")
	case FirebirdPass:
		t.Fatal("'FIREBIRD_PASS' not set")
	}

	return map[string]any{
		"kind":     FirebirdSourceKind,
		"host":     FirebirdHost,
		"port":     FirebirdPort,
		"database": FirebirdDatabase,
		"user":     FirebirdUser,
		"password": FirebirdPass,
	}
}

func initFirebirdConnection(host, port, user, pass, dbname string) (*sql.DB, error) {
	dsn := fmt.Sprintf("%s:%s@%s:%s/%s", user, pass, host, port, dbname)
	db, err := sql.Open("firebirdsql", dsn)
	if err != nil {
		return nil, fmt.Errorf("unable to create connection pool: %w", err)
	}

	// Configure connection pool to prevent deadlocks
	db.SetMaxOpenConns(5)
	db.SetMaxIdleConns(2)
	db.SetConnMaxLifetime(5 * time.Minute)
	db.SetConnMaxIdleTime(1 * time.Minute)

	return db, nil
}

func TestFirebirdToolEndpoints(t *testing.T) {
	sourceConfig := getFirebirdVars(t)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	var args []string

	db, err := initFirebirdConnection(FirebirdHost, FirebirdPort, FirebirdUser, FirebirdPass, FirebirdDatabase)
	if err != nil {
		t.Fatalf("unable to create firebird connection pool: %s", err)
	}
	defer db.Close()

	shortUUID := strings.ReplaceAll(uuid.New().String(), "-", "")[:8]
	tableNameParam := fmt.Sprintf("param_table_%s", shortUUID)
	tableNameAuth := fmt.Sprintf("auth_table_%s", shortUUID)
	tableNameTemplateParam := fmt.Sprintf("template_param_table_%s", shortUUID)

	createParamTableStmts, insertParamTableStmt, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, paramTestParams := getFirebirdParamToolInfo(tableNameParam)
	teardownTable1 := setupFirebirdTable(t, ctx, db, createParamTableStmts, insertParamTableStmt, tableNameParam, paramTestParams)
	defer teardownTable1(t)

	createAuthTableStmts, insertAuthTableStmt, authToolStmt, authTestParams := getFirebirdAuthToolInfo(tableNameAuth)
	teardownTable2 := setupFirebirdTable(t, ctx, db, createAuthTableStmts, insertAuthTableStmt, tableNameAuth, authTestParams)
	defer teardownTable2(t)

	toolsFile := getFirebirdToolsConfig(sourceConfig, FirebirdToolKind, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, authToolStmt)
	toolsFile = addFirebirdExecuteSqlConfig(t, toolsFile)
	tmplSelectCombined, tmplSelectFilterCombined := getFirebirdTmplToolStatement()
	toolsFile = addFirebirdTemplateParamConfig(t, toolsFile, FirebirdToolKind, tmplSelectCombined, tmplSelectFilterCombined)

	cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
	if err != nil {
		t.Fatalf("command initialization returned an error: %s", err)
	}
	defer cleanup()

	waitCtx, cancelWait := context.WithTimeout(ctx, 10*time.Second)
	defer cancelWait()
	out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
	if err != nil {
		t.Logf("toolbox command logs: \n%s", out)
		t.Fatalf("toolbox didn't start successfully: %s", err)
	}

	// Get configs for tests
	select1Want, mcpMyFailToolWant, createTableStatement, mcpSelect1Want := getFirebirdWants()
	nullWant := `[{"id":4,"name":null}]`
	select1Statement := `"SELECT 1 AS \"constant\" FROM RDB$DATABASE;"`
	templateParamCreateColArray := `["id INTEGER","name VARCHAR(255)","age INTEGER"]`

	// Run tests
	tests.RunToolGetTest(t)
	tests.RunToolInvokeTest(t, select1Want,
		tests.WithNullWant(nullWant),
		tests.DisableArrayTest())
	tests.RunMCPToolCallMethod(t, mcpMyFailToolWant, mcpSelect1Want)
	tests.RunExecuteSqlToolInvokeTest(t, createTableStatement, select1Want, tests.WithSelect1Statement(select1Statement))
	tests.RunToolInvokeWithTemplateParameters(t, tableNameTemplateParam,
		tests.WithCreateColArray(templateParamCreateColArray))
}

func setupFirebirdTable(t *testing.T, ctx context.Context, db *sql.DB, createStatements []string, insertStatement, tableName string, params []any) func(*testing.T) {
	err := db.PingContext(ctx)
	if err != nil {
		t.Fatalf("unable to connect to test database: %s", err)
	}

	for _, stmt := range createStatements {
		_, err = db.ExecContext(ctx, stmt)
		if err != nil {
			t.Fatalf("unable to execute create statement for table %s: %s\nStatement: %s", tableName, err, stmt)
		}
	}

	if insertStatement != "" && len(params) > 0 {
		stmt, err := db.PrepareContext(ctx, insertStatement)
		if err != nil {
			t.Fatalf("unable to prepare insert statement: %v", err)
		}
		defer stmt.Close()

		numPlaceholders := strings.Count(insertStatement, "?")
		if numPlaceholders == 0 {
			t.Fatalf("insert statement has no placeholders '?' but params were provided")
		}
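		// params is a flat list of values; each pass consumes numPlaceholders
		// of them and inserts them as a single row.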
		for i := 0; i < len(params); i += numPlaceholders {
			end := i + numPlaceholders
			if end > len(params) {
				end = len(params)
			}
			batchParams := params[i:end]

			_, err = stmt.ExecContext(ctx, batchParams...)
			if err != nil {
				t.Fatalf("unable to insert test data row with params %v: %v", batchParams, err)
			}
		}
	}

	return func(t *testing.T) {
		// Close the main connection to free up resources
		db.Close()

		// Helper function to check if error indicates object doesn't exist
		isNotFoundError := func(err error) bool {
			if err == nil {
				return false
			}
			errMsg := strings.ToLower(err.Error())
			return strings.Contains(errMsg, "does not exist") ||
				strings.Contains(errMsg, "not found") ||
				strings.Contains(errMsg, "is not defined") ||
				strings.Contains(errMsg, "unknown") ||
				strings.Contains(errMsg, "invalid")
		}

		// Create dedicated cleanup connection with minimal configuration
		createCleanupConnection := func() (*sql.DB, error) {
			dsn := fmt.Sprintf("%s:%s@%s:%s/%s", FirebirdUser, FirebirdPass, FirebirdHost, FirebirdPort, FirebirdDatabase)

			cleanupDb, err := sql.Open("firebirdsql", dsn)
			if err != nil {
				return nil, err
			}

			// Ultra minimal connection pool for cleanup only
			cleanupDb.SetMaxOpenConns(1)
			cleanupDb.SetMaxIdleConns(0)
			cleanupDb.SetConnMaxLifetime(5 * time.Second)
			cleanupDb.SetConnMaxIdleTime(1 * time.Second)

			return cleanupDb, nil
		}

		// Drop each object with its own dedicated connection and aggressive timeout
		dropObjects := []struct {
			objType string
			query   string
		}{
			{"trigger", fmt.Sprintf("DROP TRIGGER BI_%s_ID", tableName)},
			{"table", fmt.Sprintf("DROP TABLE %s", tableName)},
			{"generator", fmt.Sprintf("DROP GENERATOR GEN_%s_ID", tableName)},
		}

		for _, obj := range dropObjects {
			cleanupDb, err := createCleanupConnection()
			if err != nil {
				t.Logf("Failed to create cleanup connection for %s: %s", obj.objType, err)
				continue
			}

			// Use aggressive short timeout for each operation
			ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
			_, dropErr := cleanupDb.ExecContext(ctx, obj.query)
			cancel()
			cleanupDb.Close()

			if dropErr == nil {
				t.Logf("Successfully dropped %s", obj.objType)
			} else if isNotFoundError(dropErr) {
				t.Logf("%s does not exist, skipping", obj.objType)
			} else if ctx.Err() == context.DeadlineExceeded {
				t.Logf("Timeout dropping %s (3s limit exceeded) - continuing anyway", obj.objType)
			} else {
				t.Logf("Failed to drop %s: %s - continuing anyway", obj.objType, dropErr)
			}

			// Small delay between operations to reduce contention
			time.Sleep(100 * time.Millisecond)
		}
	}
}

func getFirebirdParamToolInfo(tableName string) ([]string, string, string, string, string, string, []any) {
	createStatements := []string{
		fmt.Sprintf("CREATE TABLE %s (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR(255));", tableName),
		fmt.Sprintf("CREATE GENERATOR GEN_%s_ID;", tableName),
		fmt.Sprintf(`
			CREATE TRIGGER BI_%s_ID FOR %s
			ACTIVE BEFORE INSERT POSITION 0
			AS
			BEGIN
				IF (NEW.id IS NULL) THEN
					NEW.id = GEN_ID(GEN_%s_ID, 1);
			END;
		`, tableName, tableName, tableName),
	}

	insertStatement := fmt.Sprintf("INSERT INTO %s (name) VALUES (?);", tableName)
	toolStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE id = ? OR name = ?;", tableName)
	idParamStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE id = ?;", tableName)
	nameParamStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE name IS NOT DISTINCT FROM ?;", tableName)
	// Firebird doesn't support array parameters in IN clause the same way as other databases
	// We'll use a simpler approach for testing
	arrayToolStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE id = ? ORDER BY id;", tableName)

	params := []any{"Alice", "Jane", "Sid", nil}
	return createStatements, insertStatement, toolStatement, idParamStatement, nameParamStatement, arrayToolStatement, params
}

func getFirebirdAuthToolInfo(tableName string) ([]string, string, string, []any) {
	createStatements := []string{
		fmt.Sprintf("CREATE TABLE %s (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR(255), email VARCHAR(255));", tableName),
		fmt.Sprintf("CREATE GENERATOR GEN_%s_ID;", tableName),
		fmt.Sprintf(`
			CREATE TRIGGER BI_%s_ID FOR %s
			ACTIVE BEFORE INSERT POSITION 0
			AS
			BEGIN
				IF (NEW.id IS NULL) THEN
					NEW.id = GEN_ID(GEN_%s_ID, 1);
			END;
		`, tableName, tableName, tableName),
	}

	insertStatement := fmt.Sprintf("INSERT INTO %s (name, email) VALUES (?, ?)", tableName)
	toolStatement := fmt.Sprintf("SELECT name AS \"name\" FROM %s WHERE email = ?;", tableName)
	params := []any{"Alice", tests.ServiceAccountEmail, "Jane", "[email protected]"}
	return createStatements, insertStatement, toolStatement, params
}

func getFirebirdWants() (string, string, string, string) {
	select1Want := `[{"constant":1}]`
	mcpMyFailToolWant := `{"jsonrpc":"2.0","id":"invoke-fail-tool","result":{"content":[{"type":"text","text":"unable to execute query: Dynamic SQL Error\nSQL error code = -104\nToken unknown - line 1, column 1\nSELEC\n"}],"isError":true}}`
	createTableStatement := `"CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR(50))"`
	mcpSelect1Want := `{"jsonrpc":"2.0","id":"invoke my-auth-required-tool","result":{"content":[{"type":"text","text":"{\"constant\":1}"}]}}`
	return select1Want, mcpMyFailToolWant, createTableStatement, mcpSelect1Want
}

func getFirebirdToolsConfig(sourceConfig map[string]any, toolKind, paramToolStatement, idParamToolStmt, nameParamToolStmt, arrayToolStatement, authToolStatement string) map[string]any {
	toolsFile := tests.GetToolsConfig(sourceConfig, toolKind, paramToolStatement, idParamToolStmt, nameParamToolStmt, arrayToolStatement, authToolStatement)

	toolsMap, ok := toolsFile["tools"].(map[string]any)
	if !ok {
		return toolsFile
	}

	if simpleTool, ok := toolsMap["my-simple-tool"].(map[string]any); ok {
		simpleTool["statement"] = "SELECT 1 AS \"constant\" FROM RDB$DATABASE;"
		toolsMap["my-simple-tool"] = simpleTool
	}
	if authRequiredTool, ok := toolsMap["my-auth-required-tool"].(map[string]any); ok {
		authRequiredTool["statement"] = "SELECT 1 AS \"constant\" FROM RDB$DATABASE;"
		toolsMap["my-auth-required-tool"] = authRequiredTool
	}

	if arrayTool, ok := toolsMap["my-array-tool"].(map[string]any); ok {
		// Firebird array tool - accept array but use only first element for compatibility
		arrayTool["parameters"] = []any{
			map[string]any{
				"name":        "idArray",
				"type":        "array",
				"description": "ID array (Firebird will use first element only)",
				"items": map[string]any{
					"name":        "id",
					"type":        "integer",
					"description": "ID",
				},
			},
		}
		// Statement is already defined in arrayToolStatement parameter
		toolsMap["my-array-tool"] = arrayTool
	}

	toolsFile["tools"] = toolsMap
	return toolsFile
}

func addFirebirdTemplateParamConfig(t *testing.T, config map[string]any, toolKind, tmplSelectCombined, tmplSelectFilterCombined string) map[string]any {
	toolsMap, ok := config["tools"].(map[string]any)
	if !ok {
		t.Fatalf("unable to get tools from config")
	}

	// Firebird-specific template parameter tools with compatible syntax
	toolsMap["create-table-templateParams-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Create table tool with template parameters",
		"statement":   "CREATE TABLE {{.tableName}} ({{array .columns}})",
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
			tools.NewArrayParameter("columns", "The columns to create", tools.NewStringParameter("column", "A column name that will be created")),
		},
	}
	toolsMap["insert-table-templateParams-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Insert table tool with template parameters",
		"statement":   "INSERT INTO {{.tableName}} ({{array .columns}}) VALUES ({{.values}})",
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
			tools.NewArrayParameter("columns", "The columns to insert into", tools.NewStringParameter("column", "A column name that will be returned from the query.")),
			tools.NewStringParameter("values", "The values to insert as a comma separated string"),
		},
	}
	toolsMap["select-templateParams-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Select table tool with template parameters",
		"statement":   "SELECT id AS \"id\", name AS \"name\", age AS \"age\" FROM {{.tableName}}",
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
		},
	}
	toolsMap["select-templateParams-combined-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Select table tool with combined template parameters",
		"statement":   tmplSelectCombined,
		"parameters": []tools.Parameter{
			tools.NewIntParameter("id", "the id of the user"),
		},
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
		},
	}
	toolsMap["select-fields-templateParams-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Select specific fields tool with template parameters",
		"statement":   "SELECT name AS \"name\" FROM {{.tableName}}",
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
		},
	}
	toolsMap["select-filter-templateParams-combined-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Select table tool with filter template parameters",
		"statement":   tmplSelectFilterCombined,
		"parameters": []tools.Parameter{
			tools.NewStringParameter("name", "the name to filter by"),
		},
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
			tools.NewStringParameter("columnFilter", "some description"),
		},
	}
	// Firebird uses simple DROP TABLE syntax without IF EXISTS
	toolsMap["drop-table-templateParams-tool"] = map[string]any{
		"kind":        toolKind,
		"source":      "my-instance",
		"description": "Drop table tool with template parameters",
		"statement":   "DROP TABLE {{.tableName}}",
		"templateParameters": []tools.Parameter{
			tools.NewStringParameter("tableName", "some description"),
		},
	}
	config["tools"] = toolsMap
	return config
}

func addFirebirdExecuteSqlConfig(t *testing.T, config map[string]any) map[string]any {
	tools, ok := config["tools"].(map[string]any)
	if !ok {
		t.Fatalf("unable to get tools from config")
	}
	tools["my-exec-sql-tool"] = map[string]any{
		"kind":        "firebird-execute-sql",
		"source":      "my-instance",
		"description": "Tool to execute sql",
	}
	tools["my-auth-exec-sql-tool"] = map[string]any{
		"kind":        "firebird-execute-sql",
		"source":      "my-instance",
		"description": "Tool to execute sql",
		"authRequired": []string{
			"my-google-auth",
		},
	}
	config["tools"] = tools
	return config
}

func getFirebirdTmplToolStatement() (string, string) {
	tmplSelectCombined := "SELECT id AS \"id\", name AS \"name\", age AS \"age\" FROM {{.tableName}} WHERE id = ?"
	tmplSelectFilterCombined := "SELECT id AS \"id\", name AS \"name\", age AS \"age\" FROM {{.tableName}} WHERE {{.columnFilter}} = ?"
	return tmplSelectCombined, tmplSelectFilterCombined
}

```

--------------------------------------------------------------------------------
/internal/tools/looker/lookerconversationalanalytics/lookerconversationalanalytics.go:
--------------------------------------------------------------------------------

```go
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package lookerconversationalanalytics

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"

	yaml "github.com/goccy/go-yaml"
	"github.com/googleapis/genai-toolbox/internal/sources"
	lookerds "github.com/googleapis/genai-toolbox/internal/sources/looker"
	"github.com/googleapis/genai-toolbox/internal/tools"
	"github.com/googleapis/genai-toolbox/internal/util"
	"github.com/looker-open-source/sdk-codegen/go/rtl"
	"golang.org/x/oauth2"
)

const kind string = "looker-conversational-analytics"

const instructions = `**INSTRUCTIONS - FOLLOW THESE RULES:**
1. **CONTENT:** Your answer should present the supporting data and then provide a conclusion based on that data.
2. **OUTPUT FORMAT:** Your entire response MUST be in plain text format ONLY.
3. **NO CHARTS:** You are STRICTLY FORBIDDEN from generating any charts, graphs, images, or any other form of visualization.`

func init() {
	if !tools.Register(kind, newConfig) {
		panic(fmt.Sprintf("tool kind %q already registered", kind))
	}
}

func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
	actual := Config{Name: name}
	if err := decoder.DecodeContext(ctx, &actual); err != nil {
		return nil, err
	}
	return actual, nil
}

type compatibleSource interface {
	GetApiSettings() *rtl.ApiSettings
	GoogleCloudTokenSourceWithScope(ctx context.Context, scope string) (oauth2.TokenSource, error)
	GoogleCloudProject() string
	GoogleCloudLocation() string
	UseClientAuthorization() bool
}

// Structs for building the JSON payload
type UserMessage struct {
	Text string `json:"text"`
}
type Message struct {
	UserMessage UserMessage `json:"userMessage"`
}
type LookerExploreReference struct {
	LookerInstanceUri string `json:"lookerInstanceUri"`
	LookmlModel       string `json:"lookmlModel"`
	Explore           string `json:"explore"`
}
type LookerExploreReferences struct {
	ExploreReferences []LookerExploreReference `json:"exploreReferences"`
	Credentials       Credentials              `json:"credentials,omitzero"`
}
type SecretBased struct {
	ClientId     string `json:"clientId"`
	ClientSecret string `json:"clientSecret"`
}
type TokenBased struct {
	AccessToken string `json:"accessToken"`
}
type OAuthCredentials struct {
	Secret SecretBased `json:"secret,omitzero"`
	Token  TokenBased  `json:"token,omitzero"`
}
type Credentials struct {
	OAuth OAuthCredentials `json:"oauth"`
}
type DatasourceReferences struct {
	Looker LookerExploreReferences `json:"looker"`
}
type ImageOptions struct {
	NoImage map[string]any `json:"noImage"`
}
type ChartOptions struct {
	Image ImageOptions `json:"image"`
}
type Python struct {
	Enabled bool `json:"enabled"`
}
type AnalysisOptions struct {
	Python Python `json:"python"`
}
type ConversationOptions struct {
	Chart    ChartOptions    `json:"chart,omitzero"`
	Analysis AnalysisOptions `json:"analysis,omitzero"`
}
type InlineContext struct {
	SystemInstruction    string               `json:"systemInstruction"`
	DatasourceReferences DatasourceReferences `json:"datasourceReferences"`
	Options              ConversationOptions  `json:"options"`
}
type CAPayload struct {
	Messages      []Message     `json:"messages"`
	InlineContext InlineContext `json:"inlineContext"`
	ClientIdEnum  string        `json:"clientIdEnum"`
}

// validate compatible sources are still compatible
var _ compatibleSource = &lookerds.Source{}

var compatibleSources = [...]string{lookerds.SourceKind}

type Config struct {
	Name         string   `yaml:"name" validate:"required"`
	Kind         string   `yaml:"kind" validate:"required"`
	Source       string   `yaml:"source" validate:"required"`
	Description  string   `yaml:"description" validate:"required"`
	AuthRequired []string `yaml:"authRequired"`
}

// validate interface
var _ tools.ToolConfig = Config{}

func (cfg Config) ToolConfigKind() string {
	return kind
}

func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
	// verify source exists
	rawS, ok := srcs[cfg.Source]
	if !ok {
		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
	}

	// verify the source is compatible
	s, ok := rawS.(compatibleSource)
	if !ok {
		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
	}

	if s.GoogleCloudProject() == "" {
		return nil, fmt.Errorf("project must be defined for source to use with %q tool", kind)
	}

	userQueryParameter := tools.NewStringParameter("user_query_with_context", "The user's question, potentially including conversation history and system instructions for context.")

	exploreRefsDescription := `An Array of at least one and up to 5 explore references like [{'model': 'MODEL_NAME', 'explore': 'EXPLORE_NAME'}]`
	exploreRefsParameter := tools.NewArrayParameter(
		"explore_references",
		exploreRefsDescription,
		tools.NewMapParameter(
			"explore_reference",
			"An explore reference like {'model': 'MODEL_NAME', 'explore': 'EXPLORE_NAME'}",
			"",
		),
	)

	parameters := tools.Parameters{userQueryParameter, exploreRefsParameter}
	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, parameters)

	// Get cloud-platform token source for Gemini Data Analytics API during initialization
	ctx := context.Background()
	ts, err := s.GoogleCloudTokenSourceWithScope(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		return nil, fmt.Errorf("failed to get cloud-platform token source: %w", err)
	}

	// finish tool setup
	t := Tool{
		Name:           cfg.Name,
		Kind:           kind,
		ApiSettings:    s.GetApiSettings(),
		Project:        s.GoogleCloudProject(),
		Location:       s.GoogleCloudLocation(),
		Parameters:     parameters,
		AuthRequired:   cfg.AuthRequired,
		UseClientOAuth: s.UseClientAuthorization(),
		TokenSource:    ts,
		manifest:       tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired},
		mcpManifest:    mcpManifest,
	}
	return t, nil
}

// validate interface
var _ tools.Tool = Tool{}

type Tool struct {
	Name           string `yaml:"name"`
	Kind           string `yaml:"kind"`
	ApiSettings    *rtl.ApiSettings
	AuthRequired   []string         `yaml:"authRequired"`
	UseClientOAuth bool             `yaml:"useClientOAuth"`
	Parameters     tools.Parameters `yaml:"parameters"`
	Project        string
	Location       string
	TokenSource    oauth2.TokenSource
	manifest       tools.Manifest
	mcpManifest    tools.McpManifest
}

func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
	var tokenStr string
	var err error

	// Get credentials for the API call
	// Use cloud-platform token source for Gemini Data Analytics API
	if t.TokenSource == nil {
		return nil, fmt.Errorf("cloud-platform token source is missing")
	}
	token, err := t.TokenSource.Token()
	if err != nil {
		return nil, fmt.Errorf("failed to get token from cloud-platform token source: %w", err)
	}
	tokenStr = token.AccessToken

	// Extract parameters from the map
	mapParams := params.AsMap()
	userQuery, _ := mapParams["user_query_with_context"].(string)
	exploreReferences, _ := mapParams["explore_references"].([]any)

	ler := make([]LookerExploreReference, 0)
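	// Each explore reference is expected to be a map with "model" and
	// "explore" string keys; the type assertions below assume that shape.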
	for _, er := range exploreReferences {
		ler = append(ler, LookerExploreReference{
			LookerInstanceUri: t.ApiSettings.BaseUrl,
			LookmlModel:       er.(map[string]any)["model"].(string),
			Explore:           er.(map[string]any)["explore"].(string),
		})
	}
	oauth_creds := OAuthCredentials{}
	if t.UseClientOAuth {
		oauth_creds.Token = TokenBased{AccessToken: string(accessToken)}
	} else {
		oauth_creds.Secret = SecretBased{ClientId: t.ApiSettings.ClientId, ClientSecret: t.ApiSettings.ClientSecret}
	}

	lers := LookerExploreReferences{
		ExploreReferences: ler,
		Credentials: Credentials{
			OAuth: oauth_creds,
		},
	}

	// Construct URL, headers, and payload
	projectID := t.Project
	location := t.Location
	caURL := fmt.Sprintf("https://geminidataanalytics.googleapis.com/v1beta/projects/%s/locations/%s:chat", url.PathEscape(projectID), url.PathEscape(location))

	headers := map[string]string{
		"Authorization": fmt.Sprintf("Bearer %s", tokenStr),
		"Content-Type":  "application/json",
	}

	payload := CAPayload{
		Messages: []Message{{UserMessage: UserMessage{Text: userQuery}}},
		InlineContext: InlineContext{
			SystemInstruction: instructions,
			DatasourceReferences: DatasourceReferences{
				Looker: lers,
			},
			Options: ConversationOptions{Chart: ChartOptions{Image: ImageOptions{NoImage: map[string]any{}}}},
		},
		ClientIdEnum: "GENAI_TOOLBOX",
	}

	// Call the streaming API
	response, err := getStream(ctx, caURL, payload, headers)
	if err != nil {
		return nil, fmt.Errorf("failed to get response from conversational analytics API: %w", err)
	}

	return response, nil
}

func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
	return tools.ParseParams(t.Parameters, data, claims)
}

func (t Tool) Manifest() tools.Manifest {
	return t.manifest
}

func (t Tool) McpManifest() tools.McpManifest {
	return t.mcpManifest
}

func (t Tool) Authorized(verifiedAuthServices []string) bool {
	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

func (t Tool) RequiresClientAuthorization() bool {
	return t.UseClientOAuth
}

// StreamMessage represents a single message object from the streaming API response.
type StreamMessage struct {
	SystemMessage *SystemMessage `json:"systemMessage,omitempty"`
}

// SystemMessage contains different types of system-generated content.
type SystemMessage struct {
	Text     *TextMessage     `json:"text,omitempty"`
	Schema   *SchemaMessage   `json:"schema,omitempty"`
	Data     *DataMessage     `json:"data,omitempty"`
	Analysis *AnalysisMessage `json:"analysis,omitempty"`
	Error    *ErrorMessage    `json:"error,omitempty"`
}

// TextMessage contains textual parts of a message.
type TextMessage struct {
	Parts []string `json:"parts"`
}

// SchemaMessage contains schema-related information.
type SchemaMessage struct {
	Query  *SchemaQuery  `json:"query,omitempty"`
	Result *SchemaResult `json:"result,omitempty"`
}

// SchemaQuery holds the question that prompted a schema lookup.
type SchemaQuery struct {
	Question string `json:"question"`
}

// SchemaResult contains the datasources with their schemas.
type SchemaResult struct {
	Datasources []Datasource `json:"datasources"`
}

// Datasource represents a data source with its reference and schema.
type Datasource struct {
	LookerExploreReference LookerExploreReference `json:"lookerExploreReference"`
}

// DataMessage contains data-related information, like queries and results.
type DataMessage struct {
	GeneratedLookerQuery *LookerQuery `json:"generatedLookerQuery,omitempty"`
	Result               *DataResult  `json:"result,omitempty"`
}

type LookerQuery struct {
	Model   string   `json:"model"`
	Explore string   `json:"explore"`
	Fields  []string `json:"fields"`
	Filters []Filter `json:"filters,omitempty"`
	Sorts   []string `json:"sorts,omitempty"`
	Limit   string   `json:"limit,omitempty"`
}

type Filter struct {
	Field string `json:"field,omitempty"`
	Value string `json:"value,omitempty"`
}

// DataResult contains the schema and rows of a query result.
type DataResult struct {
	Data []map[string]any `json:"data"`
}

type AnalysisQuery struct {
	Question        string   `json:"question,omitempty"`
	DataResultNames []string `json:"dataResultNames,omitempty"`
}
type AnalysisEvent struct {
	PlannerReasoning      string `json:"plannerReasoning,omitempty"`
	CoderInstructions     string `json:"coderInstructions,omitempty"`
	Code                  string `json:"code,omitempty"`
	ExecutionOutput       string `json:"executionOutput,omitempty"`
	ExecutionError        string `json:"executionError,omitempty"`
	ResultVegaChartJson   string `json:"resultVegaChartJson,omitempty"`
	ResultNaturalLanguage string `json:"resultNaturalLanguage,omitempty"`
	ResultCsvData         string `json:"resultCsvData,omitempty"`
	ResultReferenceData   string `json:"resultReferenceData,omitempty"`
	Error                 string `json:"error,omitempty"`
}
type AnalysisMessage struct {
	Query         AnalysisQuery `json:"query,omitempty"`
	ProgressEvent AnalysisEvent `json:"progressEvent,omitempty"`
}

// ErrorMessage represents an error message from the API.
type ErrorMessage struct {
	Text string `json:"text"`
}

func getStream(ctx context.Context, url string, payload CAPayload, headers map[string]string) ([]map[string]any, error) {
	payloadBytes, err := json.Marshal(payload)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal payload: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(payloadBytes))
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}
	for k, v := range headers {
		req.Header.Set(k, v)
	}

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return nil, fmt.Errorf("failed to send request: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return nil, fmt.Errorf("API returned non-200 status: %d %s", resp.StatusCode, string(body))
	}

	var messages []map[string]any
	decoder := json.NewDecoder(resp.Body)

	// The response is a JSON array, so we read the opening bracket.
	if _, err := decoder.Token(); err != nil {
		if err == io.EOF {
			return nil, nil // Empty response is valid
		}
		return nil, fmt.Errorf("error reading start of json array: %w", err)
	}

	for decoder.More() {
		var msg StreamMessage
		if err := decoder.Decode(&msg); err != nil {
			if err == io.EOF {
				break
			}
			return nil, fmt.Errorf("error decoding stream message: %w", err)
		}

		var newMessage map[string]any
		if msg.SystemMessage != nil {
			if msg.SystemMessage.Text != nil {
				newMessage = handleTextResponse(ctx, msg.SystemMessage.Text)
			} else if msg.SystemMessage.Schema != nil {
				newMessage = handleSchemaResponse(ctx, msg.SystemMessage.Schema)
			} else if msg.SystemMessage.Data != nil {
				newMessage = handleDataResponse(ctx, msg.SystemMessage.Data)
			} else if msg.SystemMessage.Analysis != nil {
				newMessage = handleAnalysisResponse(ctx, msg.SystemMessage.Analysis)
			} else if msg.SystemMessage.Error != nil {
				newMessage = handleError(ctx, msg.SystemMessage.Error)
			}
			messages = appendMessage(messages, newMessage)
		}
	}

	return messages, nil
}

func formatDatasourceAsDict(ctx context.Context, datasource *Datasource) map[string]any {
	logger, _ := util.LoggerFromContext(ctx)
	logger.DebugContext(ctx, "Datasource %s", *datasource)
	ds := make(map[string]any)
	ds["model"] = datasource.LookerExploreReference.LookmlModel
	ds["explore"] = datasource.LookerExploreReference.Explore
	ds["lookerInstanceUri"] = datasource.LookerExploreReference.LookerInstanceUri
	return map[string]any{"Datasource": ds}
}

func handleAnalysisResponse(ctx context.Context, resp *AnalysisMessage) map[string]any {
	logger, _ := util.LoggerFromContext(ctx)
	jsonData, err := json.Marshal(*resp)
	if err != nil {
		logger.ErrorContext(ctx, "error marshaling struct: %v", err)
		return map[string]any{"Analysis": "error"}
	}
	return map[string]any{"Analysis": jsonData}
}

func handleTextResponse(ctx context.Context, resp *TextMessage) map[string]any {
	logger, _ := util.LoggerFromContext(ctx)
	logger.DebugContext(ctx, "Text Response: %s", strings.Join(resp.Parts, ""))
	return map[string]any{"Answer": strings.Join(resp.Parts, "")}
}

func handleSchemaResponse(ctx context.Context, resp *SchemaMessage) map[string]any {
	if resp.Query != nil {
		return map[string]any{"Question": resp.Query.Question}
	}
	if resp.Result != nil {
		var formattedSources []map[string]any
		for _, ds := range resp.Result.Datasources {
			formattedSources = append(formattedSources, formatDatasourceAsDict(ctx, &ds))
		}
		return map[string]any{"Schema Resolved": formattedSources}
	}
	return nil
}

func handleDataResponse(ctx context.Context, resp *DataMessage) map[string]any {
	if resp.GeneratedLookerQuery != nil {
		logger, _ := util.LoggerFromContext(ctx)
		jsonData, err := json.Marshal(resp.GeneratedLookerQuery)
		if err != nil {
			logger.ErrorContext(ctx, "error marshaling struct: %v", err)
			return map[string]any{"Retrieval Query": "error"}
		}
		return map[string]any{
			"Retrieval Query": jsonData,
		}
	}
	if resp.Result != nil {

		return map[string]any{
			"Data Retrieved": resp.Result.Data,
		}
	}
	return nil
}

func handleError(ctx context.Context, resp *ErrorMessage) map[string]any {
	logger, _ := util.LoggerFromContext(ctx)
	logger.DebugContext(ctx, "Error Response: %s", resp.Text)
	return map[string]any{
		"Error": map[string]any{
			"Message": resp.Text,
		},
	}
}

func appendMessage(messages []map[string]any, newMessage map[string]any) []map[string]any {
	if newMessage == nil {
		return messages
	}
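	// Drop a trailing "Data Retrieved" entry so the incoming message
	// supersedes the raw data payload that immediately preceded it.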
	if len(messages) > 0 {
		if _, ok := messages[len(messages)-1]["Data Retrieved"]; ok {
			messages = messages[:len(messages)-1]
		}
	}
	return append(messages, newMessage)
}

```

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/tools/cloud-sql-postgres-observability.yaml:
--------------------------------------------------------------------------------

```yaml
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
  cloud-monitoring-source:
    kind: cloud-monitoring
tools:
  get_system_metrics:
    kind: cloud-monitoring-query-prometheus
    source: cloud-monitoring-source
    description: |
      Fetches system-level Cloud Monitoring data (timeseries metrics) for a Postgres instance using a PromQL query. Take projectId and instanceId from the user for which the metrics timeseries data needs to be fetched.
      To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.

      Generate PromQL `query` for Postgres system metrics. Use the provided metrics and rules to construct queries, and get labels like `instance_id` from user intent.

      Defaults:
      1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.

      PromQL Query Examples:
      1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m])`
      2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
      3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
      4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
      5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
      6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
      7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
      8. Percentile with groupby on database_id: `quantile by ("database_id")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`

      Available metrics list (each entry: metric name. description. monitored resource. labels.). Note that `database_id` is actually the instance identifier, in the format `project_id:instance_id`.
      1. `cloudsql.googleapis.com/database/postgresql/new_connection_count`: Count of new connections added to the postgres instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
      2. `cloudsql.googleapis.com/database/postgresql/backends_in_wait`: Number of backends in wait in postgres instance. `cloudsql_database`. `backend_type`, `wait_event`, `wait_event_type`, `project_id`, `database_id`.
      3. `cloudsql.googleapis.com/database/postgresql/transaction_count`: Delta count of number of transactions. `cloudsql_database`. `database`, `transaction_type`, `project_id`, `database_id`.
      4. `cloudsql.googleapis.com/database/memory/components`: Memory stats components in percentage as usage, cache and free memory for the database. `cloudsql_database`. `component`, `project_id`, `database_id`.
      5. `cloudsql.googleapis.com/database/postgresql/external_sync/max_replica_byte_lag`: Replication lag in bytes for Postgres External Server (ES) replicas. Aggregated across all DBs on the replica. `cloudsql_database`. `project_id`, `database_id`.
      6. `cloudsql.googleapis.com/database/cpu/utilization`: Current CPU utilization represented as a percentage of the reserved CPU that is currently in use. Values are typically numbers between 0.0 and 1.0 (but might exceed 1.0). Charts display the values as a percentage between 0% and 100% (or more). `cloudsql_database`. `project_id`, `database_id`.
      7. `cloudsql.googleapis.com/database/disk/bytes_used_by_data_type`: Data utilization in bytes. `cloudsql_database`. `data_type`, `project_id`, `database_id`.
      8. `cloudsql.googleapis.com/database/disk/read_ops_count`: Delta count of data disk read IO operations. `cloudsql_database`. `project_id`, `database_id`.
      9. `cloudsql.googleapis.com/database/disk/write_ops_count`: Delta count of data disk write IO operations. `cloudsql_database`. `project_id`, `database_id`.
      10. `cloudsql.googleapis.com/database/postgresql/num_backends_by_state`: Number of connections to the Cloud SQL PostgreSQL instance, grouped by its state. `cloudsql_database`. `database`, `state`, `project_id`, `database_id`.
      11. `cloudsql.googleapis.com/database/postgresql/num_backends`: Number of connections to the Cloud SQL PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
      12. `cloudsql.googleapis.com/database/network/received_bytes_count`: Delta count of bytes received through the network. `cloudsql_database`. `project_id`, `database_id`.
      13. `cloudsql.googleapis.com/database/network/sent_bytes_count`: Delta count of bytes sent through the network. `cloudsql_database`. `destination`, `project_id`, `database_id`.
      14. `cloudsql.googleapis.com/database/postgresql/deadlock_count`: Number of deadlocks detected for this database. `cloudsql_database`. `database`, `project_id`, `database_id`.
      15. `cloudsql.googleapis.com/database/postgresql/blocks_read_count`: Number of disk blocks read by this database. The source field distinguishes actual reads from disk versus reads from buffer cache. `cloudsql_database`. `database`, `source`, `project_id`, `database_id`.
      16. `cloudsql.googleapis.com/database/postgresql/tuples_processed_count`: Number of tuples (rows) processed for a given database for operations like insert, update, or delete. `cloudsql_database`. `operation_type`, `database`, `project_id`, `database_id`.
      17. `cloudsql.googleapis.com/database/postgresql/tuple_size`: Number of tuples (rows) in the database. `cloudsql_database`. `database`, `tuple_state`, `project_id`, `database_id`.
      18. `cloudsql.googleapis.com/database/postgresql/vacuum/oldest_transaction_age`: Age of the oldest transaction yet to be vacuumed in the Cloud SQL PostgreSQL instance, measured in number of transactions that have happened since the oldest transaction. `cloudsql_database`. `oldest_transaction_type`, `project_id`, `database_id`.
      19. `cloudsql.googleapis.com/database/replication/log_archive_success_count`: Number of successful attempts for archiving replication log files. `cloudsql_database`. `project_id`, `database_id`.
      20. `cloudsql.googleapis.com/database/replication/log_archive_failure_count`: Number of failed attempts for archiving replication log files. `cloudsql_database`. `project_id`, `database_id`.
      21. `cloudsql.googleapis.com/database/postgresql/transaction_id_utilization`: Current utilization represented as a percentage of transaction IDs consumed by the Cloud SQL PostgreSQL instance. Values are typically numbers between 0.0 and 1.0. Charts display the values as a percentage between 0% and 100%. `cloudsql_database`. `project_id`, `database_id`.
      22. `cloudsql.googleapis.com/database/postgresql/num_backends_by_application`: Number of connections to the Cloud SQL PostgreSQL instance, grouped by applications. `cloudsql_database`. `application`, `project_id`, `database_id`.
      23. `cloudsql.googleapis.com/database/postgresql/tuples_fetched_count`: Total number of rows fetched as a result of queries per database in the PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
      24. `cloudsql.googleapis.com/database/postgresql/tuples_returned_count`: Total number of rows scanned while processing the queries per database in the PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
      25. `cloudsql.googleapis.com/database/postgresql/temp_bytes_written_count`: Total amount of data (in bytes) written to temporary files by the queries per database. `cloudsql_database`. `database`, `project_id`, `database_id`.
      26. `cloudsql.googleapis.com/database/postgresql/temp_files_written_count`: Total number of temporary files used for writing data while performing algorithms such as join and sort. `cloudsql_database`. `database`, `project_id`, `database_id`.
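  # Illustrative sketch only (not read by the tool): a query in the same
  # pattern as the examples above, grouping deadlocks per database; all
  # label values are placeholders.
  #   sum by ("database")(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/deadlock_count","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))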

  get_query_metrics:
    kind: cloud-monitoring-query-prometheus
    source: cloud-monitoring-source
    description: |
      Fetches query-level Cloud Monitoring data (timeseries metrics) for queries running in a Postgres instance using a PromQL query. Take the `projectId` and `instanceId` from the user for which the metrics timeseries data needs to be fetched.
      To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.

      Generate the PromQL `query` for Postgres query metrics. Use the provided metrics and rules to construct queries, and take labels like `instance_id` and `query_hash` from the user's intent. If a `query_hash` is provided, use the `perquery` metrics. Query hash and query ID are the same thing.

      Defaults:
      1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.

      PromQL Query Examples:
      1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m])`
      2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
      3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
      4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
      5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
      6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
      7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
      8. Percentile with groupby on resource_id, database: `quantile by ("resource_id","database")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`

      Available Metrics List (format: metric name. description. monitored resource. labels.). Note that the `resource_id` label is actually the instance identifier, in the format `project_id:instance_id`. The `aggregate` metrics hold values aggregated across all query stats; use them when no query ID is provided. For `perquery` metrics, do not fetch the `querystring` unless the user asks for it specifically; aggregate on `query_hash` to avoid fetching the `querystring`. Do not use the latency metrics for anything.
      1. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/latencies`: Aggregated query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
      2. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time`: Accumulated aggregated query execution time since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
      3. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/io_time`: Accumulated aggregated IO time since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `io_type`, `project_id`, `resource_id`.
      4. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/lock_time`: Accumulated aggregated lock wait time since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `lock_type`, `project_id`, `resource_id`.
      5. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/row_count`: Aggregated number of retrieved or affected rows since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
      6. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/shared_blk_access_count`: Aggregated shared blocks accessed by statement execution. `cloudsql_instance_database`. `user`, `client_addr`, `access_type`, `project_id`, `resource_id`.
      7. `cloudsql.googleapis.com/database/postgresql/insights/perquery/latencies`: Per query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
      8. `cloudsql.googleapis.com/database/postgresql/insights/perquery/execution_time`: Accumulated execution times per user per database per query. `cloudsql_instance_database`. `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
      9. `cloudsql.googleapis.com/database/postgresql/insights/perquery/io_time`: Accumulated IO time since the last sample per query. `cloudsql_instance_database`. `user`, `client_addr`, `io_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
      10. `cloudsql.googleapis.com/database/postgresql/insights/perquery/lock_time`: Accumulated lock wait time since the last sample per query. `cloudsql_instance_database`. `user`, `client_addr`, `lock_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
      11. `cloudsql.googleapis.com/database/postgresql/insights/perquery/row_count`: The number of retrieved or affected rows since the last sample per query. `cloudsql_instance_database`. `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
      12. `cloudsql.googleapis.com/database/postgresql/insights/perquery/shared_blk_access_count`: Shared blocks accessed by statement execution per query. `cloudsql_instance_database`. `user`, `client_addr`, `access_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
      13. `cloudsql.googleapis.com/database/postgresql/insights/pertag/latencies`: Query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
      14. `cloudsql.googleapis.com/database/postgresql/insights/pertag/execution_time`: Accumulated execution times since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
      15. `cloudsql.googleapis.com/database/postgresql/insights/pertag/io_time`: Accumulated IO time since the last sample per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `io_type`, `tag_hash`, `project_id`, `resource_id`.
      16. `cloudsql.googleapis.com/database/postgresql/insights/pertag/lock_time`: Accumulated lock wait time since the last sample per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `lock_type`, `tag_hash`, `project_id`, `resource_id`.
      17. `cloudsql.googleapis.com/database/postgresql/insights/pertag/shared_blk_access_count`: Shared blocks accessed by statement execution per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `access_type`, `tag_hash`, `project_id`, `resource_id`.
      18. `cloudsql.googleapis.com/database/postgresql/insights/pertag/row_count`: The number of retrieved or affected rows since the last sample per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
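  # Illustrative sketch only (not read by the tool): the top 10 queries by
  # average execution time, aggregated on query_hash so the querystring is
  # never fetched; all label values are placeholders.
  #   topk(10, sum by ("query_hash")(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/perquery/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m])))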

toolsets:
  cloud_sql_postgres_cloud_monitoring_tools:
    - get_system_metrics
    - get_query_metrics

```
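With a configuration like the one above loaded by a running Toolbox server, either tool can be invoked over HTTP. Below is a minimal Go sketch, assuming the server's default address of `127.0.0.1:5000` and a `POST /api/tool/<name>/invoke` endpoint; both are assumptions to verify against the Toolbox docs for your version.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Invoke get_system_metrics with the two parameters the tool's
	// description requires: a projectId and a PromQL query.
	// All values here are placeholders.
	payload, err := json.Marshal(map[string]string{
		"projectId": "my-projectId",
		"query":     `avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m])`,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Endpoint path is an assumption; adjust host/port to your deployment.
	resp, err := http.Post("http://127.0.0.1:5000/api/tool/get_system_metrics/invoke",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```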