# googleapis/genai-toolbox
This is page 22 of 48. Use http://codebase.md/googleapis/genai-toolbox?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .ci
│   ├── continuous.release.cloudbuild.yaml
│   ├── generate_release_table.sh
│   ├── integration.cloudbuild.yaml
│   ├── quickstart_test
│   │   ├── go.integration.cloudbuild.yaml
│   │   ├── js.integration.cloudbuild.yaml
│   │   ├── py.integration.cloudbuild.yaml
│   │   ├── run_go_tests.sh
│   │   ├── run_js_tests.sh
│   │   ├── run_py_tests.sh
│   │   └── setup_hotels_sample.sql
│   ├── test_with_coverage.sh
│   └── versioned.release.cloudbuild.yaml
├── .github
│   ├── auto-label.yaml
│   ├── blunderbuss.yml
│   ├── CODEOWNERS
│   ├── header-checker-lint.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── question.yml
│   ├── label-sync.yml
│   ├── labels.yaml
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── release-please.yml
│   ├── renovate.json5
│   ├── sync-repo-settings.yaml
│   └── workflows
│       ├── cloud_build_failure_reporter.yml
│       ├── deploy_dev_docs.yaml
│       ├── deploy_previous_version_docs.yaml
│       ├── deploy_versioned_docs.yaml
│       ├── docs_deploy.yaml
│       ├── docs_preview_clean.yaml
│       ├── docs_preview_deploy.yaml
│       ├── lint.yaml
│       ├── schedule_reporter.yml
│       ├── sync-labels.yaml
│       └── tests.yaml
├── .gitignore
├── .gitmodules
├── .golangci.yaml
├── .hugo
│   ├── archetypes
│   │   └── default.md
│   ├── assets
│   │   ├── icons
│   │   │   └── logo.svg
│   │   └── scss
│   │       ├── _styles_project.scss
│   │       └── _variables_project.scss
│   ├── go.mod
│   ├── go.sum
│   ├── hugo.toml
│   ├── layouts
│   │   ├── _default
│   │   │   └── home.releases.releases
│   │   ├── index.llms-full.txt
│   │   ├── index.llms.txt
│   │   ├── partials
│   │   │   ├── hooks
│   │   │   │   └── head-end.html
│   │   │   ├── navbar-version-selector.html
│   │   │   ├── page-meta-links.html
│   │   │   └── td
│   │   │       └── render-heading.html
│   │   ├── robot.txt
│   │   └── shortcodes
│   │       ├── include.html
│   │       ├── ipynb.html
│   │       └── regionInclude.html
│   ├── package-lock.json
│   ├── package.json
│   └── static
│       ├── favicons
│       │   ├── android-chrome-192x192.png
│       │   ├── android-chrome-512x512.png
│       │   ├── apple-touch-icon.png
│       │   ├── favicon-16x16.png
│       │   ├── favicon-32x32.png
│       │   └── favicon.ico
│       └── js
│           └── w3.js
├── CHANGELOG.md
├── cmd
│   ├── options_test.go
│   ├── options.go
│   ├── root_test.go
│   ├── root.go
│   └── version.txt
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DEVELOPER.md
├── Dockerfile
├── docs
│   └── en
│       ├── _index.md
│       ├── about
│       │   ├── _index.md
│       │   └── faq.md
│       ├── concepts
│       │   ├── _index.md
│       │   └── telemetry
│       │       ├── index.md
│       │       ├── telemetry_flow.png
│       │       └── telemetry_traces.png
│       ├── getting-started
│       │   ├── _index.md
│       │   ├── colab_quickstart.ipynb
│       │   ├── configure.md
│       │   ├── introduction
│       │   │   ├── _index.md
│       │   │   └── architecture.png
│       │   ├── local_quickstart_go.md
│       │   ├── local_quickstart_js.md
│       │   ├── local_quickstart.md
│       │   ├── mcp_quickstart
│       │   │   ├── _index.md
│       │   │   ├── inspector_tools.png
│       │   │   └── inspector.png
│       │   └── quickstart
│       │       ├── go
│       │       │   ├── genAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── genkit
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── langchain
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── openAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   └── quickstart_test.go
│       │       ├── golden.txt
│       │       ├── js
│       │       │   ├── genAI
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── genkit
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── langchain
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── llamaindex
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   └── quickstart.test.js
│       │       ├── python
│       │       │   ├── __init__.py
│       │       │   ├── adk
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── core
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── langchain
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── llamaindex
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   └── quickstart_test.py
│       │       └── shared
│       │           ├── cloud_setup.md
│       │           ├── configure_toolbox.md
│       │           └── database_setup.md
│       ├── how-to
│       │   ├── _index.md
│       │   ├── connect_via_geminicli.md
│       │   ├── connect_via_mcp.md
│       │   ├── connect-ide
│       │   │   ├── _index.md
│       │   │   ├── alloydb_pg_admin_mcp.md
│       │   │   ├── alloydb_pg_mcp.md
│       │   │   ├── bigquery_mcp.md
│       │   │   ├── cloud_sql_mssql_admin_mcp.md
│       │   │   ├── cloud_sql_mssql_mcp.md
│       │   │   ├── cloud_sql_mysql_admin_mcp.md
│       │   │   ├── cloud_sql_mysql_mcp.md
│       │   │   ├── cloud_sql_pg_admin_mcp.md
│       │   │   ├── cloud_sql_pg_mcp.md
│       │   │   ├── firestore_mcp.md
│       │   │   ├── looker_mcp.md
│       │   │   ├── mssql_mcp.md
│       │   │   ├── mysql_mcp.md
│       │   │   ├── neo4j_mcp.md
│       │   │   ├── postgres_mcp.md
│       │   │   ├── spanner_mcp.md
│       │   │   └── sqlite_mcp.md
│       │   ├── deploy_docker.md
│       │   ├── deploy_gke.md
│       │   ├── deploy_toolbox.md
│       │   ├── export_telemetry.md
│       │   └── toolbox-ui
│       │       ├── edit-headers.gif
│       │       ├── edit-headers.png
│       │       ├── index.md
│       │       ├── optional-param-checked.png
│       │       ├── optional-param-unchecked.png
│       │       ├── run-tool.gif
│       │       ├── tools.png
│       │       └── toolsets.png
│       ├── reference
│       │   ├── _index.md
│       │   ├── cli.md
│       │   └── prebuilt-tools.md
│       ├── resources
│       │   ├── _index.md
│       │   ├── authServices
│       │   │   ├── _index.md
│       │   │   └── google.md
│       │   ├── sources
│       │   │   ├── _index.md
│       │   │   ├── alloydb-admin.md
│       │   │   ├── alloydb-pg.md
│       │   │   ├── bigquery.md
│       │   │   ├── bigtable.md
│       │   │   ├── cassandra.md
│       │   │   ├── clickhouse.md
│       │   │   ├── cloud-monitoring.md
│       │   │   ├── cloud-sql-admin.md
│       │   │   ├── cloud-sql-mssql.md
│       │   │   ├── cloud-sql-mysql.md
│       │   │   ├── cloud-sql-pg.md
│       │   │   ├── couchbase.md
│       │   │   ├── dataplex.md
│       │   │   ├── dgraph.md
│       │   │   ├── firebird.md
│       │   │   ├── firestore.md
│       │   │   ├── http.md
│       │   │   ├── looker.md
│       │   │   ├── mongodb.md
│       │   │   ├── mssql.md
│       │   │   ├── mysql.md
│       │   │   ├── neo4j.md
│       │   │   ├── oceanbase.md
│       │   │   ├── oracle.md
│       │   │   ├── postgres.md
│       │   │   ├── redis.md
│       │   │   ├── spanner.md
│       │   │   ├── sqlite.md
│       │   │   ├── tidb.md
│       │   │   ├── trino.md
│       │   │   ├── valkey.md
│       │   │   └── yugabytedb.md
│       │   └── tools
│       │       ├── _index.md
│       │       ├── alloydb
│       │       │   ├── _index.md
│       │       │   ├── alloydb-create-cluster.md
│       │       │   ├── alloydb-create-instance.md
│       │       │   ├── alloydb-create-user.md
│       │       │   ├── alloydb-get-cluster.md
│       │       │   ├── alloydb-get-instance.md
│       │       │   ├── alloydb-get-user.md
│       │       │   ├── alloydb-list-clusters.md
│       │       │   ├── alloydb-list-instances.md
│       │       │   ├── alloydb-list-users.md
│       │       │   └── alloydb-wait-for-operation.md
│       │       ├── alloydbainl
│       │       │   ├── _index.md
│       │       │   └── alloydb-ai-nl.md
│       │       ├── bigquery
│       │       │   ├── _index.md
│       │       │   ├── bigquery-analyze-contribution.md
│       │       │   ├── bigquery-conversational-analytics.md
│       │       │   ├── bigquery-execute-sql.md
│       │       │   ├── bigquery-forecast.md
│       │       │   ├── bigquery-get-dataset-info.md
│       │       │   ├── bigquery-get-table-info.md
│       │       │   ├── bigquery-list-dataset-ids.md
│       │       │   ├── bigquery-list-table-ids.md
│       │       │   ├── bigquery-search-catalog.md
│       │       │   └── bigquery-sql.md
│       │       ├── bigtable
│       │       │   ├── _index.md
│       │       │   └── bigtable-sql.md
│       │       ├── cassandra
│       │       │   ├── _index.md
│       │       │   └── cassandra-cql.md
│       │       ├── clickhouse
│       │       │   ├── _index.md
│       │       │   ├── clickhouse-execute-sql.md
│       │       │   ├── clickhouse-list-databases.md
│       │       │   ├── clickhouse-list-tables.md
│       │       │   └── clickhouse-sql.md
│       │       ├── cloudmonitoring
│       │       │   ├── _index.md
│       │       │   └── cloud-monitoring-query-prometheus.md
│       │       ├── cloudsql
│       │       │   ├── _index.md
│       │       │   ├── cloudsqlcreatedatabase.md
│       │       │   ├── cloudsqlcreateusers.md
│       │       │   ├── cloudsqlgetinstances.md
│       │       │   ├── cloudsqllistdatabases.md
│       │       │   ├── cloudsqllistinstances.md
│       │       │   ├── cloudsqlmssqlcreateinstance.md
│       │       │   ├── cloudsqlmysqlcreateinstance.md
│       │       │   ├── cloudsqlpgcreateinstances.md
│       │       │   └── cloudsqlwaitforoperation.md
│       │       ├── couchbase
│       │       │   ├── _index.md
│       │       │   └── couchbase-sql.md
│       │       ├── dataform
│       │       │   ├── _index.md
│       │       │   └── dataform-compile-local.md
│       │       ├── dataplex
│       │       │   ├── _index.md
│       │       │   ├── dataplex-lookup-entry.md
│       │       │   ├── dataplex-search-aspect-types.md
│       │       │   └── dataplex-search-entries.md
│       │       ├── dgraph
│       │       │   ├── _index.md
│       │       │   └── dgraph-dql.md
│       │       ├── firebird
│       │       │   ├── _index.md
│       │       │   ├── firebird-execute-sql.md
│       │       │   └── firebird-sql.md
│       │       ├── firestore
│       │       │   ├── _index.md
│       │       │   ├── firestore-add-documents.md
│       │       │   ├── firestore-delete-documents.md
│       │       │   ├── firestore-get-documents.md
│       │       │   ├── firestore-get-rules.md
│       │       │   ├── firestore-list-collections.md
│       │       │   ├── firestore-query-collection.md
│       │       │   ├── firestore-query.md
│       │       │   ├── firestore-update-document.md
│       │       │   └── firestore-validate-rules.md
│       │       ├── http
│       │       │   ├── _index.md
│       │       │   └── http.md
│       │       ├── looker
│       │       │   ├── _index.md
│       │       │   ├── looker-add-dashboard-element.md
│       │       │   ├── looker-conversational-analytics.md
│       │       │   ├── looker-create-project-file.md
│       │       │   ├── looker-delete-project-file.md
│       │       │   ├── looker-dev-mode.md
│       │       │   ├── looker-get-dashboards.md
│       │       │   ├── looker-get-dimensions.md
│       │       │   ├── looker-get-explores.md
│       │       │   ├── looker-get-filters.md
│       │       │   ├── looker-get-looks.md
│       │       │   ├── looker-get-measures.md
│       │       │   ├── looker-get-models.md
│       │       │   ├── looker-get-parameters.md
│       │       │   ├── looker-get-project-file.md
│       │       │   ├── looker-get-project-files.md
│       │       │   ├── looker-get-projects.md
│       │       │   ├── looker-health-analyze.md
│       │       │   ├── looker-health-pulse.md
│       │       │   ├── looker-health-vacuum.md
│       │       │   ├── looker-make-dashboard.md
│       │       │   ├── looker-make-look.md
│       │       │   ├── looker-query-sql.md
│       │       │   ├── looker-query-url.md
│       │       │   ├── looker-query.md
│       │       │   ├── looker-run-look.md
│       │       │   └── looker-update-project-file.md
│       │       ├── mongodb
│       │       │   ├── _index.md
│       │       │   ├── mongodb-aggregate.md
│       │       │   ├── mongodb-delete-many.md
│       │       │   ├── mongodb-delete-one.md
│       │       │   ├── mongodb-find-one.md
│       │       │   ├── mongodb-find.md
│       │       │   ├── mongodb-insert-many.md
│       │       │   ├── mongodb-insert-one.md
│       │       │   ├── mongodb-update-many.md
│       │       │   └── mongodb-update-one.md
│       │       ├── mssql
│       │       │   ├── _index.md
│       │       │   ├── mssql-execute-sql.md
│       │       │   ├── mssql-list-tables.md
│       │       │   └── mssql-sql.md
│       │       ├── mysql
│       │       │   ├── _index.md
│       │       │   ├── mysql-execute-sql.md
│       │       │   ├── mysql-list-active-queries.md
│       │       │   ├── mysql-list-table-fragmentation.md
│       │       │   ├── mysql-list-tables-missing-unique-indexes.md
│       │       │   ├── mysql-list-tables.md
│       │       │   └── mysql-sql.md
│       │       ├── neo4j
│       │       │   ├── _index.md
│       │       │   ├── neo4j-cypher.md
│       │       │   ├── neo4j-execute-cypher.md
│       │       │   └── neo4j-schema.md
│       │       ├── oceanbase
│       │       │   ├── _index.md
│       │       │   ├── oceanbase-execute-sql.md
│       │       │   └── oceanbase-sql.md
│       │       ├── oracle
│       │       │   ├── _index.md
│       │       │   ├── oracle-execute-sql.md
│       │       │   └── oracle-sql.md
│       │       ├── postgres
│       │       │   ├── _index.md
│       │       │   ├── postgres-execute-sql.md
│       │       │   ├── postgres-list-active-queries.md
│       │       │   ├── postgres-list-available-extensions.md
│       │       │   ├── postgres-list-installed-extensions.md
│       │       │   ├── postgres-list-tables.md
│       │       │   └── postgres-sql.md
│       │       ├── redis
│       │       │   ├── _index.md
│       │       │   └── redis.md
│       │       ├── spanner
│       │       │   ├── _index.md
│       │       │   ├── spanner-execute-sql.md
│       │       │   ├── spanner-list-tables.md
│       │       │   └── spanner-sql.md
│       │       ├── sqlite
│       │       │   ├── _index.md
│       │       │   ├── sqlite-execute-sql.md
│       │       │   └── sqlite-sql.md
│       │       ├── tidb
│       │       │   ├── _index.md
│       │       │   ├── tidb-execute-sql.md
│       │       │   └── tidb-sql.md
│       │       ├── trino
│       │       │   ├── _index.md
│       │       │   ├── trino-execute-sql.md
│       │       │   └── trino-sql.md
│       │       ├── utility
│       │       │   ├── _index.md
│       │       │   └── wait.md
│       │       ├── valkey
│       │       │   ├── _index.md
│       │       │   └── valkey.md
│       │       └── yuagbytedb
│       │           ├── _index.md
│       │           └── yugabytedb-sql.md
│       ├── samples
│       │   ├── _index.md
│       │   ├── alloydb
│       │   │   ├── _index.md
│       │   │   ├── ai-nl
│       │   │   │   ├── alloydb_ai_nl.ipynb
│       │   │   │   └── index.md
│       │   │   └── mcp_quickstart.md
│       │   ├── bigquery
│       │   │   ├── _index.md
│       │   │   ├── colab_quickstart_bigquery.ipynb
│       │   │   ├── local_quickstart.md
│       │   │   └── mcp_quickstart
│       │   │       ├── _index.md
│       │   │       ├── inspector_tools.png
│       │   │       └── inspector.png
│       │   └── looker
│       │       ├── _index.md
│       │       ├── looker_gemini_oauth
│       │       │   ├── _index.md
│       │       │   ├── authenticated.png
│       │       │   ├── authorize.png
│       │       │   └── registration.png
│       │       ├── looker_gemini.md
│       │       └── looker_mcp_inspector
│       │           ├── _index.md
│       │           ├── inspector_tools.png
│       │           └── inspector.png
│       └── sdks
│           ├── _index.md
│           ├── go-sdk.md
│           ├── js-sdk.md
│           └── python-sdk.md
├── gemini-extension.json
├── go.mod
├── go.sum
├── internal
│   ├── auth
│   │   ├── auth.go
│   │   └── google
│   │       └── google.go
│   ├── log
│   │   ├── handler.go
│   │   ├── log_test.go
│   │   ├── log.go
│   │   └── logger.go
│   ├── prebuiltconfigs
│   │   ├── prebuiltconfigs_test.go
│   │   ├── prebuiltconfigs.go
│   │   └── tools
│   │       ├── alloydb-postgres-admin.yaml
│   │       ├── alloydb-postgres-observability.yaml
│   │       ├── alloydb-postgres.yaml
│   │       ├── bigquery.yaml
│   │       ├── clickhouse.yaml
│   │       ├── cloud-sql-mssql-admin.yaml
│   │       ├── cloud-sql-mssql-observability.yaml
│   │       ├── cloud-sql-mssql.yaml
│   │       ├── cloud-sql-mysql-admin.yaml
│   │       ├── cloud-sql-mysql-observability.yaml
│   │       ├── cloud-sql-mysql.yaml
│   │       ├── cloud-sql-postgres-admin.yaml
│   │       ├── cloud-sql-postgres-observability.yaml
│   │       ├── cloud-sql-postgres.yaml
│   │       ├── dataplex.yaml
│   │       ├── firestore.yaml
│   │       ├── looker-conversational-analytics.yaml
│   │       ├── looker.yaml
│   │       ├── mssql.yaml
│   │       ├── mysql.yaml
│   │       ├── neo4j.yaml
│   │       ├── oceanbase.yaml
│   │       ├── postgres.yaml
│   │       ├── spanner-postgres.yaml
│   │       ├── spanner.yaml
│   │       └── sqlite.yaml
│   ├── server
│   │   ├── api_test.go
│   │   ├── api.go
│   │   ├── common_test.go
│   │   ├── config.go
│   │   ├── mcp
│   │   │   ├── jsonrpc
│   │   │   │   ├── jsonrpc_test.go
│   │   │   │   └── jsonrpc.go
│   │   │   ├── mcp.go
│   │   │   ├── util
│   │   │   │   └── lifecycle.go
│   │   │   ├── v20241105
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   ├── v20250326
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   └── v20250618
│   │   │       ├── method.go
│   │   │       └── types.go
│   │   ├── mcp_test.go
│   │   ├── mcp.go
│   │   ├── server_test.go
│   │   ├── server.go
│   │   ├── static
│   │   │   ├── assets
│   │   │   │   └── mcptoolboxlogo.png
│   │   │   ├── css
│   │   │   │   └── style.css
│   │   │   ├── index.html
│   │   │   ├── js
│   │   │   │   ├── auth.js
│   │   │   │   ├── loadTools.js
│   │   │   │   ├── mainContent.js
│   │   │   │   ├── navbar.js
│   │   │   │   ├── runTool.js
│   │   │   │   ├── toolDisplay.js
│   │   │   │   ├── tools.js
│   │   │   │   └── toolsets.js
│   │   │   ├── tools.html
│   │   │   └── toolsets.html
│   │   ├── web_test.go
│   │   └── web.go
│   ├── sources
│   │   ├── alloydbadmin
│   │   │   ├── alloydbadmin_test.go
│   │   │   └── alloydbadmin.go
│   │   ├── alloydbpg
│   │   │   ├── alloydb_pg_test.go
│   │   │   └── alloydb_pg.go
│   │   ├── bigquery
│   │   │   ├── bigquery_test.go
│   │   │   └── bigquery.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   ├── cassandra_test.go
│   │   │   └── cassandra.go
│   │   ├── clickhouse
│   │   │   ├── clickhouse_test.go
│   │   │   └── clickhouse.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloud_monitoring_test.go
│   │   │   └── cloud_monitoring.go
│   │   ├── cloudsqladmin
│   │   │   ├── cloud_sql_admin_test.go
│   │   │   └── cloud_sql_admin.go
│   │   ├── cloudsqlmssql
│   │   │   ├── cloud_sql_mssql_test.go
│   │   │   └── cloud_sql_mssql.go
│   │   ├── cloudsqlmysql
│   │   │   ├── cloud_sql_mysql_test.go
│   │   │   └── cloud_sql_mysql.go
│   │   ├── cloudsqlpg
│   │   │   ├── cloud_sql_pg_test.go
│   │   │   └── cloud_sql_pg.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataplex
│   │   │   ├── dataplex_test.go
│   │   │   └── dataplex.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── dialect.go
│   │   ├── firebird
│   │   │   ├── firebird_test.go
│   │   │   └── firebird.go
│   │   ├── firestore
│   │   │   ├── firestore_test.go
│   │   │   └── firestore.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── ip_type.go
│   │   ├── looker
│   │   │   ├── looker_test.go
│   │   │   └── looker.go
│   │   ├── mongodb
│   │   │   ├── mongodb_test.go
│   │   │   └── mongodb.go
│   │   ├── mssql
│   │   │   ├── mssql_test.go
│   │   │   └── mssql.go
│   │   ├── mysql
│   │   │   ├── mysql_test.go
│   │   │   └── mysql.go
│   │   ├── neo4j
│   │   │   ├── neo4j_test.go
│   │   │   └── neo4j.go
│   │   ├── oceanbase
│   │   │   ├── oceanbase_test.go
│   │   │   └── oceanbase.go
│   │   ├── oracle
│   │   │   └── oracle.go
│   │   ├── postgres
│   │   │   ├── postgres_test.go
│   │   │   └── postgres.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── sources.go
│   │   ├── spanner
│   │   │   ├── spanner_test.go
│   │   │   └── spanner.go
│   │   ├── sqlite
│   │   │   ├── sqlite_test.go
│   │   │   └── sqlite.go
│   │   ├── tidb
│   │   │   ├── tidb_test.go
│   │   │   └── tidb.go
│   │   ├── trino
│   │   │   ├── trino_test.go
│   │   │   └── trino.go
│   │   ├── util.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedb
│   │       ├── yugabytedb_test.go
│   │       └── yugabytedb.go
│   ├── telemetry
│   │   ├── instrumentation.go
│   │   └── telemetry.go
│   ├── testutils
│   │   └── testutils.go
│   ├── tools
│   │   ├── alloydb
│   │   │   ├── alloydbcreatecluster
│   │   │   │   ├── alloydbcreatecluster_test.go
│   │   │   │   └── alloydbcreatecluster.go
│   │   │   ├── alloydbcreateinstance
│   │   │   │   ├── alloydbcreateinstance_test.go
│   │   │   │   └── alloydbcreateinstance.go
│   │   │   ├── alloydbcreateuser
│   │   │   │   ├── alloydbcreateuser_test.go
│   │   │   │   └── alloydbcreateuser.go
│   │   │   ├── alloydbgetcluster
│   │   │   │   ├── alloydbgetcluster_test.go
│   │   │   │   └── alloydbgetcluster.go
│   │   │   ├── alloydbgetinstance
│   │   │   │   ├── alloydbgetinstance_test.go
│   │   │   │   └── alloydbgetinstance.go
│   │   │   ├── alloydbgetuser
│   │   │   │   ├── alloydbgetuser_test.go
│   │   │   │   └── alloydbgetuser.go
│   │   │   ├── alloydblistclusters
│   │   │   │   ├── alloydblistclusters_test.go
│   │   │   │   └── alloydblistclusters.go
│   │   │   ├── alloydblistinstances
│   │   │   │   ├── alloydblistinstances_test.go
│   │   │   │   └── alloydblistinstances.go
│   │   │   ├── alloydblistusers
│   │   │   │   ├── alloydblistusers_test.go
│   │   │   │   └── alloydblistusers.go
│   │   │   └── alloydbwaitforoperation
│   │   │       ├── alloydbwaitforoperation_test.go
│   │   │       └── alloydbwaitforoperation.go
│   │   ├── alloydbainl
│   │   │   ├── alloydbainl_test.go
│   │   │   └── alloydbainl.go
│   │   ├── bigquery
│   │   │   ├── bigqueryanalyzecontribution
│   │   │   │   ├── bigqueryanalyzecontribution_test.go
│   │   │   │   └── bigqueryanalyzecontribution.go
│   │   │   ├── bigquerycommon
│   │   │   │   ├── table_name_parser_test.go
│   │   │   │   ├── table_name_parser.go
│   │   │   │   └── util.go
│   │   │   ├── bigqueryconversationalanalytics
│   │   │   │   ├── bigqueryconversationalanalytics_test.go
│   │   │   │   └── bigqueryconversationalanalytics.go
│   │   │   ├── bigqueryexecutesql
│   │   │   │   ├── bigqueryexecutesql_test.go
│   │   │   │   └── bigqueryexecutesql.go
│   │   │   ├── bigqueryforecast
│   │   │   │   ├── bigqueryforecast_test.go
│   │   │   │   └── bigqueryforecast.go
│   │   │   ├── bigquerygetdatasetinfo
│   │   │   │   ├── bigquerygetdatasetinfo_test.go
│   │   │   │   └── bigquerygetdatasetinfo.go
│   │   │   ├── bigquerygettableinfo
│   │   │   │   ├── bigquerygettableinfo_test.go
│   │   │   │   └── bigquerygettableinfo.go
│   │   │   ├── bigquerylistdatasetids
│   │   │   │   ├── bigquerylistdatasetids_test.go
│   │   │   │   └── bigquerylistdatasetids.go
│   │   │   ├── bigquerylisttableids
│   │   │   │   ├── bigquerylisttableids_test.go
│   │   │   │   └── bigquerylisttableids.go
│   │   │   ├── bigquerysearchcatalog
│   │   │   │   ├── bigquerysearchcatalog_test.go
│   │   │   │   └── bigquerysearchcatalog.go
│   │   │   └── bigquerysql
│   │   │       ├── bigquerysql_test.go
│   │   │       └── bigquerysql.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   └── cassandracql
│   │   │       ├── cassandracql_test.go
│   │   │       └── cassandracql.go
│   │   ├── clickhouse
│   │   │   ├── clickhouseexecutesql
│   │   │   │   ├── clickhouseexecutesql_test.go
│   │   │   │   └── clickhouseexecutesql.go
│   │   │   ├── clickhouselistdatabases
│   │   │   │   ├── clickhouselistdatabases_test.go
│   │   │   │   └── clickhouselistdatabases.go
│   │   │   ├── clickhouselisttables
│   │   │   │   ├── clickhouselisttables_test.go
│   │   │   │   └── clickhouselisttables.go
│   │   │   └── clickhousesql
│   │   │       ├── clickhousesql_test.go
│   │   │       └── clickhousesql.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloudmonitoring_test.go
│   │   │   └── cloudmonitoring.go
│   │   ├── cloudsql
│   │   │   ├── cloudsqlcreatedatabase
│   │   │   │   ├── cloudsqlcreatedatabase_test.go
│   │   │   │   └── cloudsqlcreatedatabase.go
│   │   │   ├── cloudsqlcreateusers
│   │   │   │   ├── cloudsqlcreateusers_test.go
│   │   │   │   └── cloudsqlcreateusers.go
│   │   │   ├── cloudsqlgetinstances
│   │   │   │   ├── cloudsqlgetinstances_test.go
│   │   │   │   └── cloudsqlgetinstances.go
│   │   │   ├── cloudsqllistdatabases
│   │   │   │   ├── cloudsqllistdatabases_test.go
│   │   │   │   └── cloudsqllistdatabases.go
│   │   │   ├── cloudsqllistinstances
│   │   │   │   ├── cloudsqllistinstances_test.go
│   │   │   │   └── cloudsqllistinstances.go
│   │   │   └── cloudsqlwaitforoperation
│   │   │       ├── cloudsqlwaitforoperation_test.go
│   │   │       └── cloudsqlwaitforoperation.go
│   │   ├── cloudsqlmssql
│   │   │   └── cloudsqlmssqlcreateinstance
│   │   │       ├── cloudsqlmssqlcreateinstance_test.go
│   │   │       └── cloudsqlmssqlcreateinstance.go
│   │   ├── cloudsqlmysql
│   │   │   └── cloudsqlmysqlcreateinstance
│   │   │       ├── cloudsqlmysqlcreateinstance_test.go
│   │   │       └── cloudsqlmysqlcreateinstance.go
│   │   ├── cloudsqlpg
│   │   │   └── cloudsqlpgcreateinstances
│   │   │       ├── cloudsqlpgcreateinstances_test.go
│   │   │       └── cloudsqlpgcreateinstances.go
│   │   ├── common_test.go
│   │   ├── common.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataform
│   │   │   └── dataformcompilelocal
│   │   │       ├── dataformcompilelocal_test.go
│   │   │       └── dataformcompilelocal.go
│   │   ├── dataplex
│   │   │   ├── dataplexlookupentry
│   │   │   │   ├── dataplexlookupentry_test.go
│   │   │   │   └── dataplexlookupentry.go
│   │   │   ├── dataplexsearchaspecttypes
│   │   │   │   ├── dataplexsearchaspecttypes_test.go
│   │   │   │   └── dataplexsearchaspecttypes.go
│   │   │   └── dataplexsearchentries
│   │   │       ├── dataplexsearchentries_test.go
│   │   │       └── dataplexsearchentries.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── firebird
│   │   │   ├── firebirdexecutesql
│   │   │   │   ├── firebirdexecutesql_test.go
│   │   │   │   └── firebirdexecutesql.go
│   │   │   └── firebirdsql
│   │   │       ├── firebirdsql_test.go
│   │   │       └── firebirdsql.go
│   │   ├── firestore
│   │   │   ├── firestoreadddocuments
│   │   │   │   ├── firestoreadddocuments_test.go
│   │   │   │   └── firestoreadddocuments.go
│   │   │   ├── firestoredeletedocuments
│   │   │   │   ├── firestoredeletedocuments_test.go
│   │   │   │   └── firestoredeletedocuments.go
│   │   │   ├── firestoregetdocuments
│   │   │   │   ├── firestoregetdocuments_test.go
│   │   │   │   └── firestoregetdocuments.go
│   │   │   ├── firestoregetrules
│   │   │   │   ├── firestoregetrules_test.go
│   │   │   │   └── firestoregetrules.go
│   │   │   ├── firestorelistcollections
│   │   │   │   ├── firestorelistcollections_test.go
│   │   │   │   └── firestorelistcollections.go
│   │   │   ├── firestorequery
│   │   │   │   ├── firestorequery_test.go
│   │   │   │   └── firestorequery.go
│   │   │   ├── firestorequerycollection
│   │   │   │   ├── firestorequerycollection_test.go
│   │   │   │   └── firestorequerycollection.go
│   │   │   ├── firestoreupdatedocument
│   │   │   │   ├── firestoreupdatedocument_test.go
│   │   │   │   └── firestoreupdatedocument.go
│   │   │   ├── firestorevalidaterules
│   │   │   │   ├── firestorevalidaterules_test.go
│   │   │   │   └── firestorevalidaterules.go
│   │   │   └── util
│   │   │       ├── converter_test.go
│   │   │       ├── converter.go
│   │   │       ├── validator_test.go
│   │   │       └── validator.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── http_method.go
│   │   ├── looker
│   │   │   ├── lookeradddashboardelement
│   │   │   │   ├── lookeradddashboardelement_test.go
│   │   │   │   └── lookeradddashboardelement.go
│   │   │   ├── lookercommon
│   │   │   │   ├── lookercommon_test.go
│   │   │   │   └── lookercommon.go
│   │   │   ├── lookerconversationalanalytics
│   │   │   │   ├── lookerconversationalanalytics_test.go
│   │   │   │   └── lookerconversationalanalytics.go
│   │   │   ├── lookercreateprojectfile
│   │   │   │   ├── lookercreateprojectfile_test.go
│   │   │   │   └── lookercreateprojectfile.go
│   │   │   ├── lookerdeleteprojectfile
│   │   │   │   ├── lookerdeleteprojectfile_test.go
│   │   │   │   └── lookerdeleteprojectfile.go
│   │   │   ├── lookerdevmode
│   │   │   │   ├── lookerdevmode_test.go
│   │   │   │   └── lookerdevmode.go
│   │   │   ├── lookergetdashboards
│   │   │   │   ├── lookergetdashboards_test.go
│   │   │   │   └── lookergetdashboards.go
│   │   │   ├── lookergetdimensions
│   │   │   │   ├── lookergetdimensions_test.go
│   │   │   │   └── lookergetdimensions.go
│   │   │   ├── lookergetexplores
│   │   │   │   ├── lookergetexplores_test.go
│   │   │   │   └── lookergetexplores.go
│   │   │   ├── lookergetfilters
│   │   │   │   ├── lookergetfilters_test.go
│   │   │   │   └── lookergetfilters.go
│   │   │   ├── lookergetlooks
│   │   │   │   ├── lookergetlooks_test.go
│   │   │   │   └── lookergetlooks.go
│   │   │   ├── lookergetmeasures
│   │   │   │   ├── lookergetmeasures_test.go
│   │   │   │   └── lookergetmeasures.go
│   │   │   ├── lookergetmodels
│   │   │   │   ├── lookergetmodels_test.go
│   │   │   │   └── lookergetmodels.go
│   │   │   ├── lookergetparameters
│   │   │   │   ├── lookergetparameters_test.go
│   │   │   │   └── lookergetparameters.go
│   │   │   ├── lookergetprojectfile
│   │   │   │   ├── lookergetprojectfile_test.go
│   │   │   │   └── lookergetprojectfile.go
│   │   │   ├── lookergetprojectfiles
│   │   │   │   ├── lookergetprojectfiles_test.go
│   │   │   │   └── lookergetprojectfiles.go
│   │   │   ├── lookergetprojects
│   │   │   │   ├── lookergetprojects_test.go
│   │   │   │   └── lookergetprojects.go
│   │   │   ├── lookerhealthanalyze
│   │   │   │   ├── lookerhealthanalyze_test.go
│   │   │   │   └── lookerhealthanalyze.go
│   │   │   ├── lookerhealthpulse
│   │   │   │   ├── lookerhealthpulse_test.go
│   │   │   │   └── lookerhealthpulse.go
│   │   │   ├── lookerhealthvacuum
│   │   │   │   ├── lookerhealthvacuum_test.go
│   │   │   │   └── lookerhealthvacuum.go
│   │   │   ├── lookermakedashboard
│   │   │   │   ├── lookermakedashboard_test.go
│   │   │   │   └── lookermakedashboard.go
│   │   │   ├── lookermakelook
│   │   │   │   ├── lookermakelook_test.go
│   │   │   │   └── lookermakelook.go
│   │   │   ├── lookerquery
│   │   │   │   ├── lookerquery_test.go
│   │   │   │   └── lookerquery.go
│   │   │   ├── lookerquerysql
│   │   │   │   ├── lookerquerysql_test.go
│   │   │   │   └── lookerquerysql.go
│   │   │   ├── lookerqueryurl
│   │   │   │   ├── lookerqueryurl_test.go
│   │   │   │   └── lookerqueryurl.go
│   │   │   ├── lookerrunlook
│   │   │   │   ├── lookerrunlook_test.go
│   │   │   │   └── lookerrunlook.go
│   │   │   └── lookerupdateprojectfile
│   │   │       ├── lookerupdateprojectfile_test.go
│   │   │       └── lookerupdateprojectfile.go
│   │   ├── mongodb
│   │   │   ├── mongodbaggregate
│   │   │   │   ├── mongodbaggregate_test.go
│   │   │   │   └── mongodbaggregate.go
│   │   │   ├── mongodbdeletemany
│   │   │   │   ├── mongodbdeletemany_test.go
│   │   │   │   └── mongodbdeletemany.go
│   │   │   ├── mongodbdeleteone
│   │   │   │   ├── mongodbdeleteone_test.go
│   │   │   │   └── mongodbdeleteone.go
│   │   │   ├── mongodbfind
│   │   │   │   ├── mongodbfind_test.go
│   │   │   │   └── mongodbfind.go
│   │   │   ├── mongodbfindone
│   │   │   │   ├── mongodbfindone_test.go
│   │   │   │   └── mongodbfindone.go
│   │   │   ├── mongodbinsertmany
│   │   │   │   ├── mongodbinsertmany_test.go
│   │   │   │   └── mongodbinsertmany.go
│   │   │   ├── mongodbinsertone
│   │   │   │   ├── mongodbinsertone_test.go
│   │   │   │   └── mongodbinsertone.go
│   │   │   ├── mongodbupdatemany
│   │   │   │   ├── mongodbupdatemany_test.go
│   │   │   │   └── mongodbupdatemany.go
│   │   │   └── mongodbupdateone
│   │   │       ├── mongodbupdateone_test.go
│   │   │       └── mongodbupdateone.go
│   │   ├── mssql
│   │   │   ├── mssqlexecutesql
│   │   │   │   ├── mssqlexecutesql_test.go
│   │   │   │   └── mssqlexecutesql.go
│   │   │   ├── mssqllisttables
│   │   │   │   ├── mssqllisttables_test.go
│   │   │   │   └── mssqllisttables.go
│   │   │   └── mssqlsql
│   │   │       ├── mssqlsql_test.go
│   │   │       └── mssqlsql.go
│   │   ├── mysql
│   │   │   ├── mysqlcommon
│   │   │   │   └── mysqlcommon.go
│   │   │   ├── mysqlexecutesql
│   │   │   │   ├── mysqlexecutesql_test.go
│   │   │   │   └── mysqlexecutesql.go
│   │   │   ├── mysqllistactivequeries
│   │   │   │   ├── mysqllistactivequeries_test.go
│   │   │   │   └── mysqllistactivequeries.go
│   │   │   ├── mysqllisttablefragmentation
│   │   │   │   ├── mysqllisttablefragmentation_test.go
│   │   │   │   └── mysqllisttablefragmentation.go
│   │   │   ├── mysqllisttables
│   │   │   │   ├── mysqllisttables_test.go
│   │   │   │   └── mysqllisttables.go
│   │   │   ├── mysqllisttablesmissinguniqueindexes
│   │   │   │   ├── mysqllisttablesmissinguniqueindexes_test.go
│   │   │   │   └── mysqllisttablesmissinguniqueindexes.go
│   │   │   └── mysqlsql
│   │   │       ├── mysqlsql_test.go
│   │   │       └── mysqlsql.go
│   │   ├── neo4j
│   │   │   ├── neo4jcypher
│   │   │   │   ├── neo4jcypher_test.go
│   │   │   │   └── neo4jcypher.go
│   │   │   ├── neo4jexecutecypher
│   │   │   │   ├── classifier
│   │   │   │   │   ├── classifier_test.go
│   │   │   │   │   └── classifier.go
│   │   │   │   ├── neo4jexecutecypher_test.go
│   │   │   │   └── neo4jexecutecypher.go
│   │   │   └── neo4jschema
│   │   │       ├── cache
│   │   │       │   ├── cache_test.go
│   │   │       │   └── cache.go
│   │   │       ├── helpers
│   │   │       │   ├── helpers_test.go
│   │   │       │   └── helpers.go
│   │   │       ├── neo4jschema_test.go
│   │   │       ├── neo4jschema.go
│   │   │       └── types
│   │   │           └── types.go
│   │   ├── oceanbase
│   │   │   ├── oceanbaseexecutesql
│   │   │   │   ├── oceanbaseexecutesql_test.go
│   │   │   │   └── oceanbaseexecutesql.go
│   │   │   └── oceanbasesql
│   │   │       ├── oceanbasesql_test.go
│   │   │       └── oceanbasesql.go
│   │   ├── oracle
│   │   │   ├── oracleexecutesql
│   │   │   │   └── oracleexecutesql.go
│   │   │   └── oraclesql
│   │   │       └── oraclesql.go
│   │   ├── parameters_test.go
│   │   ├── parameters.go
│   │   ├── postgres
│   │   │   ├── postgresexecutesql
│   │   │   │   ├── postgresexecutesql_test.go
│   │   │   │   └── postgresexecutesql.go
│   │   │   ├── postgreslistactivequeries
│   │   │   │   ├── postgreslistactivequeries_test.go
│   │   │   │   └── postgreslistactivequeries.go
│   │   │   ├── postgreslistavailableextensions
│   │   │   │   ├── postgreslistavailableextensions_test.go
│   │   │   │   └── postgreslistavailableextensions.go
│   │   │   ├── postgreslistinstalledextensions
│   │   │   │   ├── postgreslistinstalledextensions_test.go
│   │   │   │   └── postgreslistinstalledextensions.go
│   │   │   ├── postgreslisttables
│   │   │   │   ├── postgreslisttables_test.go
│   │   │   │   └── postgreslisttables.go
│   │   │   └── postgressql
│   │   │       ├── postgressql_test.go
│   │   │       └── postgressql.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── spanner
│   │   │   ├── spannerexecutesql
│   │   │   │   ├── spannerexecutesql_test.go
│   │   │   │   └── spannerexecutesql.go
│   │   │   ├── spannerlisttables
│   │   │   │   ├── spannerlisttables_test.go
│   │   │   │   └── spannerlisttables.go
│   │   │   └── spannersql
│   │   │       ├── spanner_test.go
│   │   │       └── spannersql.go
│   │   ├── sqlite
│   │   │   ├── sqliteexecutesql
│   │   │   │   ├── sqliteexecutesql_test.go
│   │   │   │   └── sqliteexecutesql.go
│   │   │   └── sqlitesql
│   │   │       ├── sqlitesql_test.go
│   │   │       └── sqlitesql.go
│   │   ├── tidb
│   │   │   ├── tidbexecutesql
│   │   │   │   ├── tidbexecutesql_test.go
│   │   │   │   └── tidbexecutesql.go
│   │   │   └── tidbsql
│   │   │       ├── tidbsql_test.go
│   │   │       └── tidbsql.go
│   │   ├── tools_test.go
│   │   ├── tools.go
│   │   ├── toolsets.go
│   │   ├── trino
│   │   │   ├── trinoexecutesql
│   │   │   │   ├── trinoexecutesql_test.go
│   │   │   │   └── trinoexecutesql.go
│   │   │   └── trinosql
│   │   │       ├── trinosql_test.go
│   │   │       └── trinosql.go
│   │   ├── utility
│   │   │   └── wait
│   │   │       ├── wait_test.go
│   │   │       └── wait.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedbsql
│   │       ├── yugabytedbsql_test.go
│   │       └── yugabytedbsql.go
│   └── util
│       └── util.go
├── LICENSE
├── logo.png
├── main.go
├── MCP-TOOLBOX-EXTENSION.md
├── README.md
└── tests
    ├── alloydb
    │   ├── alloydb_integration_test.go
    │   └── alloydb_wait_for_operation_test.go
    ├── alloydbainl
    │   └── alloydb_ai_nl_integration_test.go
    ├── alloydbpg
    │   └── alloydb_pg_integration_test.go
    ├── auth.go
    ├── bigquery
    │   └── bigquery_integration_test.go
    ├── bigtable
    │   └── bigtable_integration_test.go
    ├── cassandra
    │   └── cassandra_integration_test.go
    ├── clickhouse
    │   └── clickhouse_integration_test.go
    ├── cloudmonitoring
    │   └── cloud_monitoring_integration_test.go
    ├── cloudsql
    │   ├── cloud_sql_create_database_test.go
    │   ├── cloud_sql_create_users_test.go
    │   ├── cloud_sql_get_instances_test.go
    │   ├── cloud_sql_list_databases_test.go
    │   ├── cloudsql_list_instances_test.go
    │   └── cloudsql_wait_for_operation_test.go
    ├── cloudsqlmssql
    │   ├── cloud_sql_mssql_create_instance_integration_test.go
    │   └── cloud_sql_mssql_integration_test.go
    ├── cloudsqlmysql
    │   ├── cloud_sql_mysql_create_instance_integration_test.go
    │   └── cloud_sql_mysql_integration_test.go
    ├── cloudsqlpg
    │   ├── cloud_sql_pg_create_instances_test.go
    │   └── cloud_sql_pg_integration_test.go
    ├── common.go
    ├── couchbase
    │   └── couchbase_integration_test.go
    ├── dataform
    │   └── dataform_integration_test.go
    ├── dataplex
    │   └── dataplex_integration_test.go
    ├── dgraph
    │   └── dgraph_integration_test.go
    ├── firebird
    │   └── firebird_integration_test.go
    ├── firestore
    │   └── firestore_integration_test.go
    ├── http
    │   └── http_integration_test.go
    ├── looker
    │   └── looker_integration_test.go
    ├── mongodb
    │   └── mongodb_integration_test.go
    ├── mssql
    │   └── mssql_integration_test.go
    ├── mysql
    │   └── mysql_integration_test.go
    ├── neo4j
    │   └── neo4j_integration_test.go
    ├── oceanbase
    │   └── oceanbase_integration_test.go
    ├── option.go
    ├── oracle
    │   └── oracle_integration_test.go
    ├── postgres
    │   └── postgres_integration_test.go
    ├── redis
    │   └── redis_test.go
    ├── server.go
    ├── source.go
    ├── spanner
    │   └── spanner_integration_test.go
    ├── sqlite
    │   └── sqlite_integration_test.go
    ├── tidb
    │   └── tidb_integration_test.go
    ├── tool.go
    ├── trino
    │   └── trino_integration_test.go
    ├── utility
    │   └── wait_integration_test.go
    ├── valkey
    │   └── valkey_test.go
    └── yugabytedb
        └── yugabytedb_integration_test.go
```

# Files

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/prebuiltconfigs_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2024 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package prebuiltconfigs
 16 | 
 17 | import (
 18 | 	"testing"
 19 | 
 20 | 	"github.com/google/go-cmp/cmp"
 21 | )
 22 | 
 23 | var expectedToolSources = []string{
 24 | 	"alloydb-postgres-admin",
 25 | 	"alloydb-postgres-observability",
 26 | 	"alloydb-postgres",
 27 | 	"bigquery",
 28 | 	"clickhouse",
 29 | 	"cloud-sql-mssql-admin",
 30 | 	"cloud-sql-mssql-observability",
 31 | 	"cloud-sql-mssql",
 32 | 	"cloud-sql-mysql-admin",
 33 | 	"cloud-sql-mysql-observability",
 34 | 	"cloud-sql-mysql",
 35 | 	"cloud-sql-postgres-admin",
 36 | 	"cloud-sql-postgres-observability",
 37 | 	"cloud-sql-postgres",
 38 | 	"dataplex",
 39 | 	"firestore",
 40 | 	"looker-conversational-analytics",
 41 | 	"looker",
 42 | 	"mssql",
 43 | 	"mysql",
 44 | 	"neo4j",
 45 | 	"oceanbase",
 46 | 	"postgres",
 47 | 	"spanner-postgres",
 48 | 	"spanner",
 49 | 	"sqlite",
 50 | }
 51 | 
 52 | func TestGetPrebuiltSources(t *testing.T) {
 53 | 	t.Run("Test Get Prebuilt Sources", func(t *testing.T) {
 54 | 		sources := GetPrebuiltSources()
 55 | 		if diff := cmp.Diff(expectedToolSources, sources); diff != "" {
 56 | 			t.Fatalf("incorrect sources parse: diff %v", diff)
 57 | 		}
 58 | 
 59 | 	})
 60 | }
 61 | 
 62 | func TestLoadPrebuiltToolYAMLs(t *testing.T) {
 63 | 	test_name := "test load prebuilt configs"
 64 | 	expectedKeys := expectedToolSources
 65 | 	t.Run(test_name, func(t *testing.T) {
 66 | 		configsMap, keys, err := loadPrebuiltToolYAMLs()
 67 | 		if err != nil {
 68 | 			t.Fatalf("unexpected error: %s", err)
 69 | 		}
 70 | 		foundExpectedKeys := make(map[string]bool)
 71 | 
 72 | 		if len(expectedKeys) != len(configsMap) {
 73 | 			t.Fatalf("Failed to load all prebuilt tools.")
 74 | 		}
 75 | 
 76 | 		for _, expectedKey := range expectedKeys {
 77 | 			_, ok := configsMap[expectedKey]
 78 | 			if !ok {
 79 | 				t.Fatalf("Prebuilt tools for '%s' were NOT FOUND in the loaded map.", expectedKey)
 80 | 			} else {
 81 | 				foundExpectedKeys[expectedKey] = true // Mark as found
 82 | 			}
 83 | 		}
 84 | 
 85 | 		t.Log(expectedKeys)
 86 | 		t.Log(keys)
 87 | 
 88 | 		if diff := cmp.Diff(expectedKeys, keys); diff != "" {
 89 | 			t.Fatalf("incorrect sources parse: diff %v", diff)
 90 | 		}
 91 | 
 92 | 	})
 93 | }
 94 | 
 95 | func TestGetPrebuiltTool(t *testing.T) {
 96 | 	alloydb_admin_config, _ := Get("alloydb-postgres-admin")
 97 | 	alloydb_observability_config, _ := Get("alloydb-postgres-observability")
 98 | 	alloydb_config, _ := Get("alloydb-postgres")
 99 | 	bigquery_config, _ := Get("bigquery")
100 | 	clickhouse_config, _ := Get("clickhouse")
101 | 	cloudsqlpg_observability_config, _ := Get("cloud-sql-postgres-observability")
102 | 	cloudsqlpg_config, _ := Get("cloud-sql-postgres")
103 | 	cloudsqlpg_admin_config, _ := Get("cloud-sql-postgres-admin")
104 | 	cloudsqlmysql_admin_config, _ := Get("cloud-sql-mysql-admin")
105 | 	cloudsqlmssql_admin_config, _ := Get("cloud-sql-mssql-admin")
106 | 	cloudsqlmysql_observability_config, _ := Get("cloud-sql-mysql-observability")
107 | 	cloudsqlmysql_config, _ := Get("cloud-sql-mysql")
108 | 	cloudsqlmssql_observability_config, _ := Get("cloud-sql-mssql-observability")
109 | 	cloudsqlmssql_config, _ := Get("cloud-sql-mssql")
110 | 	dataplex_config, _ := Get("dataplex")
111 | 	firestoreconfig, _ := Get("firestore")
112 | 	looker_config, _ := Get("looker")
113 | 	lookerca_config, _ := Get("looker-conversational-analytics")
114 | 	mysql_config, _ := Get("mysql")
115 | 	mssql_config, _ := Get("mssql")
116 | 	oceanbase_config, _ := Get("oceanbase")
117 | 	postgresconfig, _ := Get("postgres")
118 | 	spanner_config, _ := Get("spanner")
119 | 	spannerpg_config, _ := Get("spanner-postgres")
120 | 	sqlite_config, _ := Get("sqlite")
121 | 	neo4jconfig, _ := Get("neo4j")
122 | 	if len(alloydb_admin_config) <= 0 {
123 | 		t.Fatalf("unexpected error: could not fetch alloydb prebuilt tools yaml")
124 | 	}
125 | 	if len(alloydb_config) <= 0 {
126 | 		t.Fatalf("unexpected error: could not fetch alloydb prebuilt tools yaml")
127 | 	}
128 | 	if len(alloydb_observability_config) <= 0 {
129 | 		t.Fatalf("unexpected error: could not fetch alloydb-observability prebuilt tools yaml")
130 | 	}
131 | 	if len(bigquery_config) <= 0 {
132 | 		t.Fatalf("unexpected error: could not fetch bigquery prebuilt tools yaml")
133 | 	}
134 | 	if len(clickhouse_config) <= 0 {
135 | 		t.Fatalf("unexpected error: could not fetch clickhouse prebuilt tools yaml")
136 | 	}
137 | 	if len(cloudsqlpg_observability_config) <= 0 {
138 | 		t.Fatalf("unexpected error: could not fetch cloud sql pg observability prebuilt tools yaml")
139 | 	}
140 | 	if len(cloudsqlpg_config) <= 0 {
141 | 		t.Fatalf("unexpected error: could not fetch cloud sql pg prebuilt tools yaml")
142 | 	}
143 | 	if len(cloudsqlpg_admin_config) <= 0 {
144 | 		t.Fatalf("unexpected error: could not fetch cloud sql pg admin prebuilt tools yaml")
145 | 	}
146 | 	if len(cloudsqlmysql_admin_config) <= 0 {
147 | 		t.Fatalf("unexpected error: could not fetch cloud sql mysql admin prebuilt tools yaml")
148 | 	}
149 | 	if len(cloudsqlmysql_observability_config) <= 0 {
150 | 		t.Fatalf("unexpected error: could not fetch cloud sql mysql observability prebuilt tools yaml")
151 | 	}
152 | 	if len(cloudsqlmysql_config) <= 0 {
153 | 		t.Fatalf("unexpected error: could not fetch cloud sql mysql prebuilt tools yaml")
154 | 	}
155 | 	if len(cloudsqlmssql_observability_config) <= 0 {
156 | 		t.Fatalf("unexpected error: could not fetch cloud sql mssql observability prebuilt tools yaml")
157 | 	}
158 | 	if len(cloudsqlmssql_admin_config) <= 0 {
159 | 		t.Fatalf("unexpected error: could not fetch cloud sql mssql admin prebuilt tools yaml")
160 | 	}
161 | 	if len(cloudsqlmssql_config) <= 0 {
162 | 		t.Fatalf("unexpected error: could not fetch cloud sql mssql prebuilt tools yaml")
163 | 	}
164 | 	if len(dataplex_config) <= 0 {
165 | 		t.Fatalf("unexpected error: could not fetch dataplex prebuilt tools yaml")
166 | 	}
167 | 	if len(firestoreconfig) <= 0 {
168 | 		t.Fatalf("unexpected error: could not fetch firestore prebuilt tools yaml")
169 | 	}
170 | 	if len(looker_config) <= 0 {
171 | 		t.Fatalf("unexpected error: could not fetch looker prebuilt tools yaml")
172 | 	}
173 | 	if len(lookerca_config) <= 0 {
174 | 		t.Fatalf("unexpected error: could not fetch looker-conversational-analytics prebuilt tools yaml")
175 | 	}
176 | 	if len(mysql_config) <= 0 {
177 | 		t.Fatalf("unexpected error: could not fetch mysql prebuilt tools yaml")
178 | 	}
179 | 	if len(mssql_config) <= 0 {
180 | 		t.Fatalf("unexpected error: could not fetch mssql prebuilt tools yaml")
181 | 	}
182 | 	if len(oceanbase_config) <= 0 {
183 | 		t.Fatalf("unexpected error: could not fetch oceanbase prebuilt tools yaml")
184 | 	}
185 | 	if len(postgresconfig) <= 0 {
186 | 		t.Fatalf("unexpected error: could not fetch postgres prebuilt tools yaml")
187 | 	}
188 | 	if len(spanner_config) <= 0 {
189 | 		t.Fatalf("unexpected error: could not fetch spanner prebuilt tools yaml")
190 | 	}
191 | 	if len(spannerpg_config) <= 0 {
192 | 		t.Fatalf("unexpected error: could not fetch spanner pg prebuilt tools yaml")
193 | 	}
194 | 	if len(sqlite_config) <= 0 {
195 | 		t.Fatalf("unexpected error: could not fetch sqlite prebuilt tools yaml")
196 | 	}
197 | 	if len(neo4jconfig) <= 0 {
198 | 		t.Fatalf("unexpected error: could not fetch neo4j prebuilt tools yaml")
199 | 	}
200 | }
201 | 
202 | func TestFailGetPrebuiltTool(t *testing.T) {
203 | 	_, err := Get("sql")
204 | 	if err == nil {
205 | 		t.Fatalf("expected an error but got nil.")
206 | 	}
207 | }
208 | 
```
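
The tests above only check that each embedded YAML is present and non-empty. As a rough, hypothetical sketch (not part of the repository), this is how the package's exported API might be consumed from elsewhere in the module, assuming `Get` returns the raw YAML bytes for a named prebuilt source:

```go
// Sketch only: prebuiltconfigs is an internal package, so code like this
// would have to live inside the genai-toolbox module itself.
package main

import (
	"fmt"
	"log"

	"github.com/googleapis/genai-toolbox/internal/prebuiltconfigs"
)

func main() {
	// List every prebuilt source name bundled with the binary.
	for _, name := range prebuiltconfigs.GetPrebuiltSources() {
		fmt.Println(name)
	}

	// Fetch one prebuilt tools YAML by name; an unknown name returns an
	// error, as exercised by TestFailGetPrebuiltTool above.
	cfg, err := prebuiltconfigs.Get("postgres")
	if err != nil {
		log.Fatalf("could not load prebuilt config: %v", err)
	}
	fmt.Printf("loaded %d bytes of YAML\n", len(cfg))
}
```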

--------------------------------------------------------------------------------
/internal/server/common_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package server
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"fmt"
 20 | 	"io"
 21 | 	"net/http"
 22 | 	"net/http/httptest"
 23 | 	"os"
 24 | 	"testing"
 25 | 
 26 | 	"github.com/go-chi/chi/v5"
 27 | 	"github.com/googleapis/genai-toolbox/internal/log"
 28 | 	"github.com/googleapis/genai-toolbox/internal/telemetry"
 29 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 30 | )
 31 | 
 32 | // fakeVersionString is used as a temporary version string in tests
 33 | const fakeVersionString = "0.0.0"
 34 | 
 35 | var _ tools.Tool = &MockTool{}
 36 | 
 37 | // MockTool is used to mock tools in tests
 38 | type MockTool struct {
 39 | 	Name                         string
 40 | 	Description                  string
 41 | 	Params                       []tools.Parameter
 42 | 	manifest                     tools.Manifest
 43 | 	unauthorized                 bool
 44 | 	requiresClientAuthrorization bool
 45 | }
 46 | 
 47 | func (t MockTool) Invoke(context.Context, tools.ParamValues, tools.AccessToken) (any, error) {
 48 | 	mock := []any{t.Name}
 49 | 	return mock, nil
 50 | }
 51 | 
 52 | // ParseParams parses the given data into parameter values; claimsMap is a map of user info decoded from auth tokens
 53 | func (t MockTool) ParseParams(data map[string]any, claimsMap map[string]map[string]any) (tools.ParamValues, error) {
 54 | 	return tools.ParseParams(t.Params, data, claimsMap)
 55 | }
 56 | 
 57 | func (t MockTool) Manifest() tools.Manifest {
 58 | 	pMs := make([]tools.ParameterManifest, 0, len(t.Params))
 59 | 	for _, p := range t.Params {
 60 | 		pMs = append(pMs, p.Manifest())
 61 | 	}
 62 | 	return tools.Manifest{Description: t.Description, Parameters: pMs}
 63 | }
 64 | 
 65 | func (t MockTool) Authorized(verifiedAuthServices []string) bool {
 66 | 	// defaults to true
 67 | 	return !t.unauthorized
 68 | }
 69 | 
 70 | func (t MockTool) RequiresClientAuthorization() bool {
 71 | 	// defaults to false
 72 | 	return t.requiresClientAuthrorization
 73 | }
 74 | 
 75 | func (t MockTool) McpManifest() tools.McpManifest {
 76 | 	properties := make(map[string]tools.ParameterMcpManifest)
 77 | 	required := make([]string, 0)
 78 | 	authParams := make(map[string][]string)
 79 | 
 80 | 	for _, p := range t.Params {
 81 | 		name := p.GetName()
 82 | 		paramManifest, authParamList := p.McpManifest()
 83 | 		properties[name] = paramManifest
 84 | 		required = append(required, name)
 85 | 
 86 | 		if len(authParamList) > 0 {
 87 | 			authParams[name] = authParamList
 88 | 		}
 89 | 	}
 90 | 
 91 | 	toolsSchema := tools.McpToolsSchema{
 92 | 		Type:       "object",
 93 | 		Properties: properties,
 94 | 		Required:   required,
 95 | 	}
 96 | 
 97 | 	mcpManifest := tools.McpManifest{
 98 | 		Name:        t.Name,
 99 | 		Description: t.Description,
100 | 		InputSchema: toolsSchema,
101 | 	}
102 | 
103 | 	if len(authParams) > 0 {
104 | 		mcpManifest.Metadata = map[string]any{
105 | 			"toolbox/authParams": authParams,
106 | 		}
107 | 	}
108 | 
109 | 	return mcpManifest
110 | }
111 | 
112 | var tool1 = MockTool{
113 | 	Name:   "no_params",
114 | 	Params: []tools.Parameter{},
115 | }
116 | 
117 | var tool2 = MockTool{
118 | 	Name: "some_params",
119 | 	Params: tools.Parameters{
120 | 		tools.NewIntParameter("param1", "This is the first parameter."),
121 | 		tools.NewIntParameter("param2", "This is the second parameter."),
122 | 	},
123 | }
124 | 
125 | var tool3 = MockTool{
126 | 	Name:        "array_param",
127 | 	Description: "some description",
128 | 	Params: tools.Parameters{
129 | 		tools.NewArrayParameter("my_array", "this param is an array of strings", tools.NewStringParameter("my_string", "string item")),
130 | 	},
131 | }
132 | 
133 | var tool4 = MockTool{
134 | 	Name:         "unauthorized_tool",
135 | 	Params:       []tools.Parameter{},
136 | 	unauthorized: true,
137 | }
138 | 
139 | var tool5 = MockTool{
140 | 	Name:                         "require_client_auth_tool",
141 | 	Params:                       []tools.Parameter{},
142 | 	requiresClientAuthrorization: true,
143 | }
144 | 
145 | // setUpResources sets up resources to test against
146 | func setUpResources(t *testing.T, mockTools []MockTool) (map[string]tools.Tool, map[string]tools.Toolset) {
147 | 	toolsMap := make(map[string]tools.Tool)
148 | 	var allTools []string
149 | 	for _, tool := range mockTools {
150 | 		tool.manifest = tool.Manifest()
151 | 		toolsMap[tool.Name] = tool
152 | 		allTools = append(allTools, tool.Name)
153 | 	}
154 | 
155 | 	toolsets := make(map[string]tools.Toolset)
156 | 	for name, l := range map[string][]string{
157 | 		"":           allTools,
158 | 		"tool1_only": {allTools[0]},
159 | 		"tool2_only": {allTools[1]},
160 | 	} {
161 | 		tc := tools.ToolsetConfig{Name: name, ToolNames: l}
162 | 		m, err := tc.Initialize(fakeVersionString, toolsMap)
163 | 		if err != nil {
164 | 			t.Fatalf("unable to initialize toolset %q: %s", name, err)
165 | 		}
166 | 		toolsets[name] = m
167 | 	}
168 | 	return toolsMap, toolsets
169 | }
170 | 
171 | // setUpServer creates a new server with the given tools and toolsets
172 | func setUpServer(t *testing.T, router string, tools map[string]tools.Tool, toolsets map[string]tools.Toolset) (chi.Router, func()) {
173 | 	ctx, cancel := context.WithCancel(context.Background())
174 | 
175 | 	testLogger, err := log.NewStdLogger(os.Stdout, os.Stderr, "info")
176 | 	if err != nil {
177 | 		t.Fatalf("unable to initialize logger: %s", err)
178 | 	}
179 | 
180 | 	otelShutdown, err := telemetry.SetupOTel(ctx, fakeVersionString, "", false, "toolbox")
181 | 	if err != nil {
182 | 		t.Fatalf("unable to setup otel: %s", err)
183 | 	}
184 | 
185 | 	instrumentation, err := telemetry.CreateTelemetryInstrumentation(fakeVersionString)
186 | 	if err != nil {
187 | 		t.Fatalf("unable to create custom metrics: %s", err)
188 | 	}
189 | 
190 | 	sseManager := newSseManager(ctx)
191 | 
192 | 	resourceManager := NewResourceManager(nil, nil, tools, toolsets)
193 | 
194 | 	server := Server{
195 | 		version:         fakeVersionString,
196 | 		logger:          testLogger,
197 | 		instrumentation: instrumentation,
198 | 		sseManager:      sseManager,
199 | 		ResourceMgr:     resourceManager,
200 | 	}
201 | 
202 | 	var r chi.Router
203 | 	switch router {
204 | 	case "api":
205 | 		r, err = apiRouter(&server)
206 | 		if err != nil {
207 | 			t.Fatalf("unable to initialize api router: %s", err)
208 | 		}
209 | 	case "mcp":
210 | 		r, err = mcpRouter(&server)
211 | 		if err != nil {
212 | 			t.Fatalf("unable to initialize mcp router: %s", err)
213 | 		}
214 | 	default:
215 | 		t.Fatalf("unknown router")
216 | 	}
217 | 	shutdown := func() {
218 | 		// cancel context
219 | 		cancel()
220 | 		// shutdown otel
221 | 		err := otelShutdown(ctx)
222 | 		if err != nil {
223 | 			t.Fatalf("error shutting down OpenTelemetry: %s", err)
224 | 		}
225 | 	}
226 | 
227 | 	return r, shutdown
228 | }
229 | 
230 | func runServer(r chi.Router, tls bool) *httptest.Server {
231 | 	var ts *httptest.Server
232 | 	if tls {
233 | 		ts = httptest.NewTLSServer(r)
234 | 	} else {
235 | 		ts = httptest.NewServer(r)
236 | 	}
237 | 	return ts
238 | }
239 | 
240 | func runRequest(ts *httptest.Server, method, path string, body io.Reader, header map[string]string) (*http.Response, []byte, error) {
241 | 	req, err := http.NewRequest(method, ts.URL+path, body)
242 | 	if err != nil {
243 | 		return nil, nil, fmt.Errorf("unable to create request: %w", err)
244 | 	}
245 | 
246 | 	req.Header.Set("Content-Type", "application/json")
247 | 	for k, v := range header {
248 | 		req.Header.Set(k, v)
249 | 	}
250 | 
251 | 	resp, err := http.DefaultClient.Do(req)
252 | 	if err != nil {
253 | 		return nil, nil, fmt.Errorf("unable to send request: %w", err)
254 | 	}
255 | 
256 | 	respBody, err := io.ReadAll(resp.Body)
257 | 	if err != nil {
258 | 		return nil, nil, fmt.Errorf("unable to read response body: %w", err)
259 | 	}
260 | 	defer resp.Body.Close()
261 | 
262 | 	return resp, respBody, nil
263 | }
264 | 
```
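
Taken together, the helpers above cover the full lifecycle of an endpoint test: build mock tools and toolsets, wire them into a router, start an `httptest` server, and issue requests. The sketch below is a hypothetical composition of those helpers; the test name and the `/toolset/` path are assumptions for illustration, not part of this file.

```go
// Hypothetical test showing how the helpers in this file might compose.
func TestAPIRouterSketch(t *testing.T) {
	toolsMap, toolsets := setUpResources(t, []MockTool{tool1, tool2})
	r, shutdown := setUpServer(t, "api", toolsMap, toolsets)
	defer shutdown()

	ts := runServer(r, false)
	defer ts.Close()

	// Fetch the default toolset manifest; the route is assumed for illustration.
	resp, body, err := runRequest(ts, http.MethodGet, "/toolset/", nil, nil)
	if err != nil {
		t.Fatalf("request failed: %s", err)
	}
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("unexpected status %d: %s", resp.StatusCode, body)
	}
}
```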

--------------------------------------------------------------------------------
/internal/tools/cloudsqlmssql/cloudsqlmssqlcreateinstance/cloudsqlmssqlcreateinstance.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //      http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package cloudsqlmssqlcreateinstance
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"fmt"
 20 | 	"strings"
 21 | 
 22 | 	yaml "github.com/goccy/go-yaml"
 23 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 24 | 	"github.com/googleapis/genai-toolbox/internal/sources/cloudsqladmin"
 25 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 26 | 	sqladmin "google.golang.org/api/sqladmin/v1"
 27 | )
 28 | 
 29 | const kind string = "cloud-sql-mssql-create-instance"
 30 | 
 31 | func init() {
 32 | 	if !tools.Register(kind, newConfig) {
 33 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 34 | 	}
 35 | }
 36 | 
 37 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 38 | 	actual := Config{Name: name}
 39 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 40 | 		return nil, err
 41 | 	}
 42 | 	return actual, nil
 43 | }
 44 | 
 45 | // Config defines the configuration for the create-instances tool.
 46 | type Config struct {
 47 | 	Name         string   `yaml:"name" validate:"required"`
 48 | 	Kind         string   `yaml:"kind" validate:"required"`
 49 | 	Description  string   `yaml:"description"`
 50 | 	Source       string   `yaml:"source" validate:"required"`
 51 | 	AuthRequired []string `yaml:"authRequired"`
 52 | }
 53 | 
 54 | // validate interface
 55 | var _ tools.ToolConfig = Config{}
 56 | 
 57 | // ToolConfigKind returns the kind of the tool.
 58 | func (cfg Config) ToolConfigKind() string {
 59 | 	return kind
 60 | }
 61 | 
 62 | // Initialize initializes the tool from the configuration.
 63 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
 64 | 	rawS, ok := srcs[cfg.Source]
 65 | 	if !ok {
 66 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
 67 | 	}
 68 | 	s, ok := rawS.(*cloudsqladmin.Source)
 69 | 	if !ok {
 70 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be `cloud-sql-admin`", kind)
 71 | 	}
 72 | 
 73 | 	allParameters := tools.Parameters{
 74 | 		tools.NewStringParameter("project", "The project ID"),
 75 | 		tools.NewStringParameter("name", "The name of the instance"),
 76 | 		tools.NewStringParameterWithDefault("databaseVersion", "SQLSERVER_2022_STANDARD", "The database version for SQL Server. If not specified, defaults to SQLSERVER_2022_STANDARD."),
 77 | 		tools.NewStringParameter("rootPassword", "The root password for the instance"),
 78 | 		tools.NewStringParameterWithDefault("editionPreset", "Development", "The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`."),
 79 | 	}
 80 | 	paramManifest := allParameters.Manifest()
 81 | 
 82 | 	description := cfg.Description
 83 | 	if description == "" {
 84 | 		description = "Creates a SQL Server instance using `Production` and `Development` presets. For the `Development` template, it chooses a 2 vCPU, 8 GiB RAM (`db-custom-2-8192`) configuration with Non-HA/zonal availability. For the `Production` template, it chooses a 4 vCPU, 26 GiB RAM (`db-custom-4-26624`) configuration with HA/regional availability. The Enterprise edition is used in both cases. The default database version is `SQLSERVER_2022_STANDARD`. The agent should ask the user if they want to use a different version."
 85 | 	}
 86 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, description, cfg.AuthRequired, allParameters)
 87 | 
 88 | 	return Tool{
 89 | 		Name:         cfg.Name,
 90 | 		Kind:         kind,
 91 | 		AuthRequired: cfg.AuthRequired,
 92 | 		Source:       s,
 93 | 		AllParams:    allParameters,
 94 | 		manifest:     tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
 95 | 		mcpManifest:  mcpManifest,
 96 | 	}, nil
 97 | }
 98 | 
 99 | // Tool represents the create-instances tool.
100 | type Tool struct {
101 | 	Name         string   `yaml:"name"`
102 | 	Kind         string   `yaml:"kind"`
103 | 	Description  string   `yaml:"description"`
104 | 	AuthRequired []string `yaml:"authRequired"`
105 | 
106 | 	Source      *cloudsqladmin.Source
107 | 	AllParams   tools.Parameters `yaml:"allParams"`
108 | 	manifest    tools.Manifest
109 | 	mcpManifest tools.McpManifest
110 | }
111 | 
112 | // Invoke executes the tool's logic.
113 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
114 | 	paramsMap := params.AsMap()
115 | 
116 | 	project, ok := paramsMap["project"].(string)
117 | 	if !ok {
118 | 		return nil, fmt.Errorf("error casting 'project' parameter: %s", paramsMap["project"])
119 | 	}
120 | 	name, ok := paramsMap["name"].(string)
121 | 	if !ok {
122 | 		return nil, fmt.Errorf("error casting 'name' parameter: %s", paramsMap["name"])
123 | 	}
124 | 	dbVersion, ok := paramsMap["databaseVersion"].(string)
125 | 	if !ok {
126 | 		return nil, fmt.Errorf("error casting 'databaseVersion' parameter: %s", paramsMap["databaseVersion"])
127 | 	}
128 | 	rootPassword, ok := paramsMap["rootPassword"].(string)
129 | 	if !ok {
130 | 		return nil, fmt.Errorf("error casting 'rootPassword' parameter: %s", paramsMap["rootPassword"])
131 | 	}
132 | 	editionPreset, ok := paramsMap["editionPreset"].(string)
133 | 	if !ok {
134 | 		return nil, fmt.Errorf("error casting 'editionPreset' parameter: %s", paramsMap["editionPreset"])
135 | 	}
136 | 
137 | 	settings := sqladmin.Settings{}
138 | 	switch strings.ToLower(editionPreset) {
139 | 	case "production":
140 | 		settings.AvailabilityType = "REGIONAL"
141 | 		settings.Edition = "ENTERPRISE"
142 | 		settings.Tier = "db-custom-4-26624"
143 | 		settings.DataDiskSizeGb = 250
144 | 		settings.DataDiskType = "PD_SSD"
145 | 	case "development":
146 | 		settings.AvailabilityType = "ZONAL"
147 | 		settings.Edition = "ENTERPRISE"
148 | 		settings.Tier = "db-custom-2-8192"
149 | 		settings.DataDiskSizeGb = 100
150 | 		settings.DataDiskType = "PD_SSD"
151 | 	default:
152 | 		return nil, fmt.Errorf("invalid 'editionPreset': %q. Must be either 'Production' or 'Development'", editionPreset)
153 | 	}
154 | 
155 | 	instance := sqladmin.DatabaseInstance{
156 | 		Name:            name,
157 | 		DatabaseVersion: dbVersion,
158 | 		RootPassword:    rootPassword,
159 | 		Settings:        &settings,
160 | 		Project:         project,
161 | 	}
162 | 
163 | 	service, err := t.Source.GetService(ctx, string(accessToken))
164 | 	if err != nil {
165 | 		return nil, err
166 | 	}
167 | 
168 | 	resp, err := service.Instances.Insert(project, &instance).Do()
169 | 	if err != nil {
170 | 		return nil, fmt.Errorf("error creating instance: %w", err)
171 | 	}
172 | 
173 | 	return resp, nil
174 | }
175 | 
176 | // ParseParams parses the parameters for the tool.
177 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
178 | 	return tools.ParseParams(t.AllParams, data, claims)
179 | }
180 | 
181 | // Manifest returns the tool's manifest.
182 | func (t Tool) Manifest() tools.Manifest {
183 | 	return t.manifest
184 | }
185 | 
186 | // McpManifest returns the tool's MCP manifest.
187 | func (t Tool) McpManifest() tools.McpManifest {
188 | 	return t.mcpManifest
189 | }
190 | 
191 | // Authorized checks if the tool is authorized.
192 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
193 | 	return true
194 | }
195 | 
196 | func (t Tool) RequiresClientAuthorization() bool {
197 | 	return t.Source.UseClientAuthorization()
198 | }
199 | 
```
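
The `editionPreset` switch is the heart of this tool: it maps a human-friendly preset onto concrete Cloud SQL settings. As a rough standalone sketch of the same mapping, using only the `sqladmin.Settings` fields shown above (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"

	sqladmin "google.golang.org/api/sqladmin/v1"
)

// settingsForPreset mirrors the Production/Development mapping in Invoke above.
func settingsForPreset(preset string) (*sqladmin.Settings, error) {
	switch strings.ToLower(preset) {
	case "production":
		// HA/regional, 4 vCPU / 26 GiB RAM, 250 GB SSD.
		return &sqladmin.Settings{
			AvailabilityType: "REGIONAL",
			Edition:          "ENTERPRISE",
			Tier:             "db-custom-4-26624",
			DataDiskSizeGb:   250,
			DataDiskType:     "PD_SSD",
		}, nil
	case "development":
		// Non-HA/zonal, 2 vCPU / 8 GiB RAM, 100 GB SSD.
		return &sqladmin.Settings{
			AvailabilityType: "ZONAL",
			Edition:          "ENTERPRISE",
			Tier:             "db-custom-2-8192",
			DataDiskSizeGb:   100,
			DataDiskType:     "PD_SSD",
		}, nil
	default:
		return nil, fmt.Errorf("invalid preset %q: must be 'Production' or 'Development'", preset)
	}
}

func main() {
	s, err := settingsForPreset("Development")
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Tier) // db-custom-2-8192
}
```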

--------------------------------------------------------------------------------
/internal/tools/alloydb/alloydbcreateuser/alloydbcreateuser.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //      http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package alloydbcreateuser
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"fmt"
 20 | 
 21 | 	yaml "github.com/goccy/go-yaml"
 22 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 23 | 	alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin"
 24 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 25 | 	"google.golang.org/api/alloydb/v1"
 26 | )
 27 | 
 28 | const kind string = "alloydb-create-user"
 29 | 
 30 | func init() {
 31 | 	if !tools.Register(kind, newConfig) {
 32 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 33 | 	}
 34 | }
 35 | 
 36 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 37 | 	actual := Config{Name: name}
 38 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 39 | 		return nil, err
 40 | 	}
 41 | 	return actual, nil
 42 | }
 43 | 
 44 | // Configuration for the create-user tool.
 45 | type Config struct {
 46 | 	Name         string   `yaml:"name" validate:"required"`
 47 | 	Kind         string   `yaml:"kind" validate:"required"`
 48 | 	Source       string   `yaml:"source" validate:"required"`
 49 | 	Description  string   `yaml:"description"`
 50 | 	AuthRequired []string `yaml:"authRequired"`
 51 | }
 52 | 
 53 | // validate interface
 54 | var _ tools.ToolConfig = Config{}
 55 | 
 56 | // ToolConfigKind returns the kind of the tool.
 57 | func (cfg Config) ToolConfigKind() string {
 58 | 	return kind
 59 | }
 60 | 
 61 | // Initialize initializes the tool from the configuration.
 62 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
 63 | 	rawS, ok := srcs[cfg.Source]
 64 | 	if !ok {
 65 | 		return nil, fmt.Errorf("source %q not found", cfg.Source)
 66 | 	}
 67 | 
 68 | 	s, ok := rawS.(*alloydbadmin.Source)
 69 | 	if !ok {
 70 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be `alloydb-admin`", kind)
 71 | 	}
 72 | 
 73 | 	allParameters := tools.Parameters{
 74 | 		tools.NewStringParameter("project", "The GCP project ID."),
 75 | 		tools.NewStringParameter("location", "The location of the cluster (e.g., 'us-central1')."),
 76 | 		tools.NewStringParameter("cluster", "The ID of the cluster where the user will be created."),
 77 | 		tools.NewStringParameter("user", "The name for the new user. Must be unique within the cluster."),
 78 | 		tools.NewStringParameterWithRequired("password", "A secure password for the new user. Required only for ALLOYDB_BUILT_IN userType.", false),
 79 | 		tools.NewArrayParameterWithDefault("databaseRoles", []any{}, "Optional. A list of database roles to grant to the new user (e.g., ['pg_read_all_data']).", tools.NewStringParameter("role", "A single database role to grant to the user (e.g., 'pg_read_all_data').")),
 80 | 		tools.NewStringParameter("userType", "The type of user to create. Valid values are: ALLOYDB_BUILT_IN and ALLOYDB_IAM_USER. ALLOYDB_IAM_USER is recommended."),
 81 | 	}
 82 | 	paramManifest := allParameters.Manifest()
 83 | 
 84 | 	description := cfg.Description
 85 | 	if description == "" {
 86 | 		description = "Creates a new AlloyDB user within a cluster. Takes the new user's name and a secure password. Optionally, a list of database roles can be assigned. Always ask the user for the type of user to create. ALLOYDB_IAM_USER is recommended."
 87 | 	}
 88 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, description, cfg.AuthRequired, allParameters)
 89 | 
 90 | 	return Tool{
 91 | 		Name:        cfg.Name,
 92 | 		Kind:        kind,
 93 | 		Source:      s,
 94 | 		AllParams:   allParameters,
 95 | 		manifest:    tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
 96 | 		mcpManifest: mcpManifest,
 97 | 	}, nil
 98 | }
 99 | 
100 | // Tool represents the create-user tool.
101 | type Tool struct {
102 | 	Name        string `yaml:"name"`
103 | 	Kind        string `yaml:"kind"`
104 | 	Description string `yaml:"description"`
105 | 
106 | 	Source    *alloydbadmin.Source
107 | 	AllParams tools.Parameters `yaml:"allParams"`
108 | 
109 | 	manifest    tools.Manifest
110 | 	mcpManifest tools.McpManifest
111 | }
112 | 
113 | // Invoke executes the tool's logic.
114 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
115 | 	paramsMap := params.AsMap()
116 | 	project, ok := paramsMap["project"].(string)
117 | 	if !ok || project == "" {
118 | 		return nil, fmt.Errorf("invalid or missing 'project' parameter; expected a non-empty string")
119 | 	}
120 | 
121 | 	location, ok := paramsMap["location"].(string)
122 | 	if !ok || location == "" {
123 | 		return nil, fmt.Errorf("invalid or missing 'location' parameter; expected a non-empty string")
124 | 	}
125 | 
126 | 	cluster, ok := paramsMap["cluster"].(string)
127 | 	if !ok || cluster == "" {
128 | 		return nil, fmt.Errorf("invalid or missing 'cluster' parameter; expected a non-empty string")
129 | 	}
130 | 
131 | 	userID, ok := paramsMap["user"].(string)
132 | 	if !ok || userID == "" {
133 | 		return nil, fmt.Errorf("invalid or missing 'user' parameter; expected a non-empty string")
134 | 	}
135 | 
136 | 	userType, ok := paramsMap["userType"].(string)
137 | 	if !ok || (userType != "ALLOYDB_BUILT_IN" && userType != "ALLOYDB_IAM_USER") {
138 | 		return nil, fmt.Errorf("invalid or missing 'userType' parameter; expected 'ALLOYDB_BUILT_IN' or 'ALLOYDB_IAM_USER'")
139 | 	}
140 | 
141 | 	service, err := t.Source.GetService(ctx, string(accessToken))
142 | 	if err != nil {
143 | 		return nil, err
144 | 	}
145 | 
146 | 	urlString := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, cluster)
147 | 
148 | 	// Build the request body using the type-safe User struct.
149 | 	user := &alloydb.User{
150 | 		UserType: userType,
151 | 	}
152 | 
153 | 	if userType == "ALLOYDB_BUILT_IN" {
154 | 		password, ok := paramsMap["password"].(string)
155 | 		if !ok || password == "" {
156 | 			return nil, fmt.Errorf("password is required when userType is ALLOYDB_BUILT_IN")
157 | 		}
158 | 		user.Password = password
159 | 	}
160 | 
161 | 	if dbRolesRaw, ok := paramsMap["databaseRoles"].([]any); ok && len(dbRolesRaw) > 0 {
162 | 		var roles []string
163 | 		for _, r := range dbRolesRaw {
164 | 			if role, ok := r.(string); ok {
165 | 				roles = append(roles, role)
166 | 			}
167 | 		}
168 | 		if len(roles) > 0 {
169 | 			user.DatabaseRoles = roles
170 | 		}
171 | 	}
172 | 
173 | 	// The Create API returns a long-running operation.
174 | 	resp, err := service.Projects.Locations.Clusters.Users.Create(urlString, user).UserId(userID).Do()
175 | 	if err != nil {
176 | 		return nil, fmt.Errorf("error creating AlloyDB user: %w", err)
177 | 	}
178 | 
179 | 	return resp, nil
180 | }
181 | 
182 | // ParseParams parses the parameters for the tool.
183 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
184 | 	return tools.ParseParams(t.AllParams, data, claims)
185 | }
186 | 
187 | // Manifest returns the tool's manifest.
188 | func (t Tool) Manifest() tools.Manifest {
189 | 	return t.manifest
190 | }
191 | 
192 | // McpManifest returns the tool's MCP manifest.
193 | func (t Tool) McpManifest() tools.McpManifest {
194 | 	return t.mcpManifest
195 | }
196 | 
197 | // Authorized checks if the tool is authorized.
198 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
199 | 	return true
200 | }
201 | 
202 | func (t Tool) RequiresClientAuthorization() bool {
203 | 	return t.Source.UseClientAuthorization()
204 | }
205 | 
```
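
The only branching in Invoke is around the user type: built-in users must carry a password, IAM users must not, and database roles are optional for both. A minimal sketch of that decision, using the `alloydb.User` fields referenced above (the helper name is illustrative):

```go
package main

import (
	"fmt"

	"google.golang.org/api/alloydb/v1"
)

// buildUser mirrors the branching in Invoke above: built-in users carry a
// password, IAM users do not, and database roles are optional for both.
func buildUser(userType, password string, roles []string) (*alloydb.User, error) {
	u := &alloydb.User{UserType: userType}
	switch userType {
	case "ALLOYDB_BUILT_IN":
		if password == "" {
			return nil, fmt.Errorf("password is required for ALLOYDB_BUILT_IN users")
		}
		u.Password = password
	case "ALLOYDB_IAM_USER":
		// No password; authentication happens through IAM.
	default:
		return nil, fmt.Errorf("unsupported userType %q", userType)
	}
	if len(roles) > 0 {
		u.DatabaseRoles = roles
	}
	return u, nil
}

func main() {
	u, err := buildUser("ALLOYDB_IAM_USER", "", []string{"pg_read_all_data"})
	if err != nil {
		panic(err)
	}
	fmt.Println(u.UserType, u.DatabaseRoles)
}
```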

--------------------------------------------------------------------------------
/internal/tools/mongodb/mongodbfindone/mongodbfindone.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //	http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | package mongodbfindone
 15 | 
 16 | import (
 17 | 	"context"
 18 | 	"encoding/json"
 19 | 	"fmt"
 20 | 	"slices"
 21 | 
 22 | 	"github.com/goccy/go-yaml"
 23 | 	mongosrc "github.com/googleapis/genai-toolbox/internal/sources/mongodb"
 24 | 	"go.mongodb.org/mongo-driver/bson"
 25 | 	"go.mongodb.org/mongo-driver/mongo"
 26 | 	"go.mongodb.org/mongo-driver/mongo/options"
 27 | 
 28 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 29 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 30 | )
 31 | 
 32 | const kind string = "mongodb-find-one"
 33 | 
 34 | func init() {
 35 | 	if !tools.Register(kind, newConfig) {
 36 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 37 | 	}
 38 | }
 39 | 
 40 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 41 | 	actual := Config{Name: name}
 42 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 43 | 		return nil, err
 44 | 	}
 45 | 	return actual, nil
 46 | }
 47 | 
 48 | type Config struct {
 49 | 	Name           string           `yaml:"name" validate:"required"`
 50 | 	Kind           string           `yaml:"kind" validate:"required"`
 51 | 	Source         string           `yaml:"source" validate:"required"`
 52 | 	AuthRequired   []string         `yaml:"authRequired" validate:"required"`
 53 | 	Description    string           `yaml:"description" validate:"required"`
 54 | 	Database       string           `yaml:"database" validate:"required"`
 55 | 	Collection     string           `yaml:"collection" validate:"required"`
 56 | 	FilterPayload  string           `yaml:"filterPayload" validate:"required"`
 57 | 	FilterParams   tools.Parameters `yaml:"filterParams" validate:"required"`
 58 | 	ProjectPayload string           `yaml:"projectPayload"`
 59 | 	ProjectParams  tools.Parameters `yaml:"projectParams"`
 60 | 	SortPayload    string           `yaml:"sortPayload"`
 61 | 	SortParams     tools.Parameters `yaml:"sortParams"`
 62 | }
 63 | 
 64 | // validate interface
 65 | var _ tools.ToolConfig = Config{}
 66 | 
 67 | func (cfg Config) ToolConfigKind() string {
 68 | 	return kind
 69 | }
 70 | 
 71 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
 72 | 	// verify source exists
 73 | 	rawS, ok := srcs[cfg.Source]
 74 | 	if !ok {
 75 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
 76 | 	}
 77 | 
 78 | 	// verify the source is compatible
 79 | 	s, ok := rawS.(*mongosrc.Source)
 80 | 	if !ok {
 81 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be `mongodb`", kind)
 82 | 	}
 83 | 
 84 | 	// Create a slice for all parameters
 85 | 	allParameters := slices.Concat(cfg.FilterParams, cfg.ProjectParams, cfg.SortParams)
 86 | 
 87 | 	// Verify no duplicate parameter names
 88 | 	err := tools.CheckDuplicateParameters(allParameters)
 89 | 	if err != nil {
 90 | 		return nil, err
 91 | 	}
 92 | 
 93 | 	// Create Toolbox manifest
 94 | 	paramManifest := allParameters.Manifest()
 95 | 
 96 | 	if paramManifest == nil {
 97 | 		paramManifest = make([]tools.ParameterManifest, 0)
 98 | 	}
 99 | 
100 | 	// Create MCP manifest
101 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, allParameters)
102 | 
103 | 	// finish tool setup
104 | 	return Tool{
105 | 		Name:           cfg.Name,
106 | 		Kind:           kind,
107 | 		AuthRequired:   cfg.AuthRequired,
108 | 		Collection:     cfg.Collection,
109 | 		FilterPayload:  cfg.FilterPayload,
110 | 		FilterParams:   cfg.FilterParams,
111 | 		ProjectPayload: cfg.ProjectPayload,
112 | 		ProjectParams:  cfg.ProjectParams,
113 | 		SortPayload:    cfg.SortPayload,
114 | 		SortParams:     cfg.SortParams,
115 | 		AllParams:      allParameters,
116 | 		database:       s.Client.Database(cfg.Database),
117 | 		manifest:       tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
118 | 		mcpManifest:    mcpManifest,
119 | 	}, nil
120 | }
121 | 
122 | // validate interface
123 | var _ tools.Tool = Tool{}
124 | 
125 | type Tool struct {
126 | 	Name           string           `yaml:"name"`
127 | 	Kind           string           `yaml:"kind"`
128 | 	AuthRequired   []string         `yaml:"authRequired"`
129 | 	Description    string           `yaml:"description"`
130 | 	Collection     string           `yaml:"collection"`
131 | 	FilterPayload  string           `yaml:"filterPayload"`
132 | 	FilterParams   tools.Parameters `yaml:"filterParams"`
133 | 	ProjectPayload string           `yaml:"projectPayload"`
134 | 	ProjectParams  tools.Parameters `yaml:"projectParams"`
135 | 	SortPayload    string           `yaml:"sortPayload"`
136 | 	SortParams     tools.Parameters `yaml:"sortParams"`
137 | 	AllParams      tools.Parameters `yaml:"allParams"`
138 | 
139 | 	database    *mongo.Database
140 | 	manifest    tools.Manifest
141 | 	mcpManifest tools.McpManifest
142 | }
143 | 
144 | func getOptions(sortParameters tools.Parameters, projectPayload string, paramsMap map[string]any) (*options.FindOneOptions, error) {
145 | 	opts := options.FindOne()
146 | 
147 | 	sort := bson.M{}
148 | 	for _, p := range sortParameters {
149 | 		sort[p.GetName()] = paramsMap[p.GetName()]
150 | 	}
151 | 	opts = opts.SetSort(sort)
152 | 
153 | 	if len(projectPayload) == 0 {
154 | 		return opts, nil
155 | 	}
156 | 
157 | 	result, err := tools.PopulateTemplateWithJSON("MongoDBFindOneProjectString", projectPayload, paramsMap)
158 | 	if err != nil {
159 | 		return nil, fmt.Errorf("error populating project payload: %s", err)
160 | 	}
161 | 
162 | 	var projection any
163 | 	err = bson.UnmarshalExtJSON([]byte(result), false, &projection)
164 | 	if err != nil {
165 | 		return nil, fmt.Errorf("error unmarshalling projection: %s", err)
166 | 	}
167 | 	opts = opts.SetProjection(projection)
168 | 
169 | 	return opts, nil
170 | }
171 | 
172 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
173 | 	paramsMap := params.AsMap()
174 | 
175 | 	filterString, err := tools.PopulateTemplateWithJSON("MongoDBFindOneFilterString", t.FilterPayload, paramsMap)
176 | 
177 | 	if err != nil {
178 | 		return nil, fmt.Errorf("error populating filter: %s", err)
179 | 	}
180 | 
181 | 	opts, err := getOptions(t.SortParams, t.ProjectPayload, paramsMap)
182 | 	if err != nil {
183 | 		return nil, fmt.Errorf("error populating options: %s", err)
184 | 	}
185 | 
186 | 	var filter = bson.D{}
187 | 	err = bson.UnmarshalExtJSON([]byte(filterString), false, &filter)
188 | 	if err != nil {
189 | 		return nil, err
190 | 	}
191 | 
192 | 	res := t.database.Collection(t.Collection).FindOne(ctx, filter, opts)
193 | 	if res.Err() != nil {
194 | 		return nil, res.Err()
195 | 	}
196 | 
197 | 	var data any
198 | 	err = res.Decode(&data)
199 | 	if err != nil {
200 | 		return nil, err
201 | 	}
202 | 
203 | 	var final []any
204 | 	tmp, _ := bson.MarshalExtJSON(data, false, false)
205 | 	var tmp2 any
206 | 	err = json.Unmarshal(tmp, &tmp2)
207 | 	if err != nil {
208 | 		return nil, err
209 | 	}
210 | 	final = append(final, tmp2)
211 | 
212 | 	return final, err
213 | }
214 | 
215 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
216 | 	return tools.ParseParams(t.AllParams, data, claims)
217 | }
218 | 
219 | func (t Tool) Manifest() tools.Manifest {
220 | 	return t.manifest
221 | }
222 | 
223 | func (t Tool) McpManifest() tools.McpManifest {
224 | 	return t.mcpManifest
225 | }
226 | 
227 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
228 | 	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
229 | }
230 | 
231 | func (t Tool) RequiresClientAuthorization() bool {
232 | 	return false
233 | }
234 | 
```
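
Both the filter and the projection travel as MongoDB extended JSON before being turned into driver values. A small self-contained sketch of that conversion, independent of the toolbox template helpers (the literal filter and projection here are placeholders):

```go
package main

import (
	"fmt"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// Stand-ins for what filterPayload/projectPayload would look like after
	// template expansion.
	filterJSON := `{"name": "alice"}`
	projectionJSON := `{"_id": 0, "name": 1}`

	var filter bson.D
	if err := bson.UnmarshalExtJSON([]byte(filterJSON), false, &filter); err != nil {
		panic(err)
	}

	var projection any
	if err := bson.UnmarshalExtJSON([]byte(projectionJSON), false, &projection); err != nil {
		panic(err)
	}

	// The resulting values plug straight into FindOne, as in Invoke above.
	opts := options.FindOne().SetSort(bson.M{"name": 1}).SetProjection(projection)
	fmt.Printf("filter: %v, projection set: %v\n", filter, opts.Projection != nil)
}
```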

--------------------------------------------------------------------------------
/internal/tools/clickhouse/clickhousesql/clickhousesql_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package clickhouse
 16 | 
 17 | import (
 18 | 	"testing"
 19 | 
 20 | 	"github.com/goccy/go-yaml"
 21 | 	"github.com/google/go-cmp/cmp"
 22 | 	"github.com/googleapis/genai-toolbox/internal/server"
 23 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 24 | 	"github.com/googleapis/genai-toolbox/internal/sources/clickhouse"
 25 | 	"github.com/googleapis/genai-toolbox/internal/testutils"
 26 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 27 | )
 28 | 
 29 | func TestConfigToolConfigKind(t *testing.T) {
 30 | 	config := Config{}
 31 | 	if config.ToolConfigKind() != sqlKind {
 32 | 		t.Errorf("Expected %s, got %s", sqlKind, config.ToolConfigKind())
 33 | 	}
 34 | }
 35 | 
 36 | func TestParseFromYamlClickHouseSQL(t *testing.T) {
 37 | 	ctx, err := testutils.ContextWithNewLogger()
 38 | 	if err != nil {
 39 | 		t.Fatalf("unexpected error: %s", err)
 40 | 	}
 41 | 	tcs := []struct {
 42 | 		desc string
 43 | 		in   string
 44 | 		want server.ToolConfigs
 45 | 	}{
 46 | 		{
 47 | 			desc: "basic example",
 48 | 			in: `
 49 | 			tools:
 50 | 				example_tool:
 51 | 					kind: clickhouse-sql
 52 | 					source: my-instance
 53 | 					description: some description
 54 | 					statement: SELECT 1
 55 | 			`,
 56 | 			want: server.ToolConfigs{
 57 | 				"example_tool": Config{
 58 | 					Name:         "example_tool",
 59 | 					Kind:         "clickhouse-sql",
 60 | 					Source:       "my-instance",
 61 | 					Description:  "some description",
 62 | 					Statement:    "SELECT 1",
 63 | 					AuthRequired: []string{},
 64 | 				},
 65 | 			},
 66 | 		},
 67 | 		{
 68 | 			desc: "with parameters",
 69 | 			in: `
 70 | 			tools:
 71 | 				param_tool:
 72 | 					kind: clickhouse-sql
 73 | 					source: test-source
 74 | 					description: Test ClickHouse tool
 75 | 					statement: SELECT * FROM test_table WHERE id = $1
 76 | 					parameters:
 77 | 					  - name: id
 78 | 					    type: string
 79 | 					    description: Test ID
 80 | 			`,
 81 | 			want: server.ToolConfigs{
 82 | 				"param_tool": Config{
 83 | 					Name:        "param_tool",
 84 | 					Kind:        "clickhouse-sql",
 85 | 					Source:      "test-source",
 86 | 					Description: "Test ClickHouse tool",
 87 | 					Statement:   "SELECT * FROM test_table WHERE id = $1",
 88 | 					Parameters: tools.Parameters{
 89 | 						tools.NewStringParameter("id", "Test ID"),
 90 | 					},
 91 | 					AuthRequired: []string{},
 92 | 				},
 93 | 			},
 94 | 		},
 95 | 	}
 96 | 	for _, tc := range tcs {
 97 | 		t.Run(tc.desc, func(t *testing.T) {
 98 | 			got := struct {
 99 | 				Tools server.ToolConfigs `yaml:"tools"`
100 | 			}{}
101 | 			err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
102 | 			if err != nil {
103 | 				t.Fatalf("unable to unmarshal: %s", err)
104 | 			}
105 | 			if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
106 | 				t.Fatalf("incorrect parse: diff %v", diff)
107 | 			}
108 | 		})
109 | 	}
110 | }
111 | 
112 | func TestSQLConfigInitializeValidSource(t *testing.T) {
113 | 	config := Config{
114 | 		Name:        "test-tool",
115 | 		Kind:        sqlKind,
116 | 		Source:      "test-clickhouse",
117 | 		Description: "Test tool",
118 | 		Statement:   "SELECT 1",
119 | 		Parameters:  tools.Parameters{},
120 | 	}
121 | 
122 | 	// Create a mock ClickHouse source
123 | 	mockSource := &clickhouse.Source{}
124 | 
125 | 	sources := map[string]sources.Source{
126 | 		"test-clickhouse": mockSource,
127 | 	}
128 | 
129 | 	tool, err := config.Initialize(sources)
130 | 	if err != nil {
131 | 		t.Fatalf("Expected no error, got: %v", err)
132 | 	}
133 | 
134 | 	clickhouseTool, ok := tool.(Tool)
135 | 	if !ok {
136 | 		t.Fatalf("Expected Tool type, got %T", tool)
137 | 	}
138 | 
139 | 	if clickhouseTool.Name != "test-tool" {
140 | 		t.Errorf("Expected name 'test-tool', got %s", clickhouseTool.Name)
141 | 	}
142 | }
143 | 
144 | func TestSQLConfigInitializeMissingSource(t *testing.T) {
145 | 	config := Config{
146 | 		Name:        "test-tool",
147 | 		Kind:        sqlKind,
148 | 		Source:      "missing-source",
149 | 		Description: "Test tool",
150 | 		Statement:   "SELECT 1",
151 | 		Parameters:  tools.Parameters{},
152 | 	}
153 | 
154 | 	sources := map[string]sources.Source{}
155 | 
156 | 	_, err := config.Initialize(sources)
157 | 	if err == nil {
158 | 		t.Fatal("Expected error for missing source, got nil")
159 | 	}
160 | 
161 | 	expectedErr := `no source named "missing-source" configured`
162 | 	if err.Error() != expectedErr {
163 | 		t.Errorf("Expected error %q, got %q", expectedErr, err.Error())
164 | 	}
165 | }
166 | 
167 | // mockIncompatibleSource is a mock source that doesn't implement the compatibleSource interface
168 | type mockIncompatibleSource struct{}
169 | 
170 | func (m *mockIncompatibleSource) SourceKind() string {
171 | 	return "mock"
172 | }
173 | 
174 | func TestSQLConfigInitializeIncompatibleSource(t *testing.T) {
175 | 	config := Config{
176 | 		Name:        "test-tool",
177 | 		Kind:        sqlKind,
178 | 		Source:      "incompatible-source",
179 | 		Description: "Test tool",
180 | 		Statement:   "SELECT 1",
181 | 		Parameters:  tools.Parameters{},
182 | 	}
183 | 
184 | 	mockSource := &mockIncompatibleSource{}
185 | 
186 | 	sources := map[string]sources.Source{
187 | 		"incompatible-source": mockSource,
188 | 	}
189 | 
190 | 	_, err := config.Initialize(sources)
191 | 	if err == nil {
192 | 		t.Fatal("Expected error for incompatible source, got nil")
193 | 	}
194 | 
195 | 	if err.Error() == "" {
196 | 		t.Error("Expected non-empty error message")
197 | 	}
198 | }
199 | 
200 | func TestToolManifest(t *testing.T) {
201 | 	tool := Tool{
202 | 		manifest: tools.Manifest{
203 | 			Description: "Test description",
204 | 			Parameters:  []tools.ParameterManifest{},
205 | 		},
206 | 	}
207 | 
208 | 	manifest := tool.Manifest()
209 | 	if manifest.Description != "Test description" {
210 | 		t.Errorf("Expected description 'Test description', got %s", manifest.Description)
211 | 	}
212 | }
213 | 
214 | func TestToolMcpManifest(t *testing.T) {
215 | 	tool := Tool{
216 | 		mcpManifest: tools.McpManifest{
217 | 			Name:        "test-tool",
218 | 			Description: "Test description",
219 | 		},
220 | 	}
221 | 
222 | 	manifest := tool.McpManifest()
223 | 	if manifest.Name != "test-tool" {
224 | 		t.Errorf("Expected name 'test-tool', got %s", manifest.Name)
225 | 	}
226 | 	if manifest.Description != "Test description" {
227 | 		t.Errorf("Expected description 'Test description', got %s", manifest.Description)
228 | 	}
229 | }
230 | 
231 | func TestToolAuthorized(t *testing.T) {
232 | 	tests := []struct {
233 | 		name                 string
234 | 		authRequired         []string
235 | 		verifiedAuthServices []string
236 | 		expectedAuthorized   bool
237 | 	}{
238 | 		{
239 | 			name:                 "no auth required",
240 | 			authRequired:         []string{},
241 | 			verifiedAuthServices: []string{},
242 | 			expectedAuthorized:   true,
243 | 		},
244 | 		{
245 | 			name:                 "auth required and verified",
246 | 			authRequired:         []string{"google"},
247 | 			verifiedAuthServices: []string{"google"},
248 | 			expectedAuthorized:   true,
249 | 		},
250 | 		{
251 | 			name:                 "auth required but not verified",
252 | 			authRequired:         []string{"google"},
253 | 			verifiedAuthServices: []string{},
254 | 			expectedAuthorized:   false,
255 | 		},
256 | 		{
257 | 			name:                 "auth required but different service verified",
258 | 			authRequired:         []string{"google"},
259 | 			verifiedAuthServices: []string{"aws"},
260 | 			expectedAuthorized:   false,
261 | 		},
262 | 	}
263 | 
264 | 	for _, tt := range tests {
265 | 		t.Run(tt.name, func(t *testing.T) {
266 | 			tool := Tool{
267 | 				AuthRequired: tt.authRequired,
268 | 			}
269 | 
270 | 			authorized := tool.Authorized(tt.verifiedAuthServices)
271 | 			if authorized != tt.expectedAuthorized {
272 | 				t.Errorf("Expected authorized %t, got %t", tt.expectedAuthorized, authorized)
273 | 			}
274 | 		})
275 | 	}
276 | }
277 | 
```

--------------------------------------------------------------------------------
/internal/server/mcp/v20250326/method.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package v20250326
 16 | 
 17 | import (
 18 | 	"bytes"
 19 | 	"context"
 20 | 	"encoding/json"
 21 | 	"errors"
 22 | 	"fmt"
 23 | 	"net/http"
 24 | 	"strings"
 25 | 
 26 | 	"github.com/googleapis/genai-toolbox/internal/auth"
 27 | 	"github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc"
 28 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 29 | 	"github.com/googleapis/genai-toolbox/internal/util"
 30 | )
 31 | 
 32 | // ProcessMethod returns a response for the request.
 33 | func ProcessMethod(ctx context.Context, id jsonrpc.RequestId, method string, toolset tools.Toolset, tools map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) {
 34 | 	switch method {
 35 | 	case PING:
 36 | 		return pingHandler(id)
 37 | 	case TOOLS_LIST:
 38 | 		return toolsListHandler(id, toolset, body)
 39 | 	case TOOLS_CALL:
 40 | 		return toolsCallHandler(ctx, id, tools, authServices, body, header)
 41 | 	default:
 42 | 		err := fmt.Errorf("invalid method %s", method)
 43 | 		return jsonrpc.NewError(id, jsonrpc.METHOD_NOT_FOUND, err.Error(), nil), err
 44 | 	}
 45 | }
 46 | 
 47 | // pingHandler handles the "ping" method by returning an empty response.
 48 | func pingHandler(id jsonrpc.RequestId) (any, error) {
 49 | 	return jsonrpc.JSONRPCResponse{
 50 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
 51 | 		Id:      id,
 52 | 		Result:  struct{}{},
 53 | 	}, nil
 54 | }
 55 | 
 56 | func toolsListHandler(id jsonrpc.RequestId, toolset tools.Toolset, body []byte) (any, error) {
 57 | 	var req ListToolsRequest
 58 | 	if err := json.Unmarshal(body, &req); err != nil {
 59 | 		err = fmt.Errorf("invalid mcp tools list request: %w", err)
 60 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
 61 | 	}
 62 | 
 63 | 	result := ListToolsResult{
 64 | 		Tools: toolset.McpManifest,
 65 | 	}
 66 | 	return jsonrpc.JSONRPCResponse{
 67 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
 68 | 		Id:      id,
 69 | 		Result:  result,
 70 | 	}, nil
 71 | }
 72 | 
 73 | // toolsCallHandler generates a response for a tools call.
 74 | func toolsCallHandler(ctx context.Context, id jsonrpc.RequestId, toolsMap map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) {
 75 | 	// retrieve logger from context
 76 | 	logger, err := util.LoggerFromContext(ctx)
 77 | 	if err != nil {
 78 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
 79 | 	}
 80 | 
 81 | 	var req CallToolRequest
 82 | 	if err = json.Unmarshal(body, &req); err != nil {
 83 | 		err = fmt.Errorf("invalid mcp tools call request: %w", err)
 84 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
 85 | 	}
 86 | 
 87 | 	toolName := req.Params.Name
 88 | 	toolArgument := req.Params.Arguments
 89 | 	logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName))
 90 | 	tool, ok := toolsMap[toolName]
 91 | 	if !ok {
 92 | 		err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName)
 93 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err
 94 | 	}
 95 | 
 96 | 	// Get access token
 97 | 	accessToken := tools.AccessToken(header.Get("Authorization"))
 98 | 
 99 | 	// Check if this specific tool requires the standard authorization header
100 | 	if tool.RequiresClientAuthorization() {
101 | 		if accessToken == "" {
102 | 			return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, "missing access token in the 'Authorization' header", nil), tools.ErrUnauthorized
103 | 		}
104 | 	}
105 | 
106 | 	// marshal the arguments, then decode them with util.DecodeJSON to avoid losing the int/float distinction.
107 | 	aMarshal, err := json.Marshal(toolArgument)
108 | 	if err != nil {
109 | 		err = fmt.Errorf("unable to marshal tools argument: %w", err)
110 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
111 | 	}
112 | 
113 | 	var data map[string]any
114 | 	if err = util.DecodeJSON(bytes.NewBuffer(aMarshal), &data); err != nil {
115 | 		err = fmt.Errorf("unable to decode tools argument: %w", err)
116 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
117 | 	}
118 | 
119 | 	// Tool authentication
120 | 	// claimsFromAuth maps the name of the authservice to the claims retrieved from it.
121 | 	claimsFromAuth := make(map[string]map[string]any)
122 | 
123 | 	// if using stdio, header will be nil and auth will not be supported
124 | 	if header != nil {
125 | 		for _, aS := range authServices {
126 | 			claims, err := aS.GetClaimsFromHeader(ctx, header)
127 | 			if err != nil {
128 | 				logger.DebugContext(ctx, err.Error())
129 | 				continue
130 | 			}
131 | 			if claims == nil {
132 | 				// authService not present in header
133 | 				continue
134 | 			}
135 | 			claimsFromAuth[aS.GetName()] = claims
136 | 		}
137 | 	}
138 | 
139 | 	// Tool authorization check
140 | 	verifiedAuthServices := make([]string, len(claimsFromAuth))
141 | 	i := 0
142 | 	for k := range claimsFromAuth {
143 | 		verifiedAuthServices[i] = k
144 | 		i++
145 | 	}
146 | 
147 | 	// Check if any of the specified auth services is verified
148 | 	isAuthorized := tool.Authorized(verifiedAuthServices)
149 | 	if !isAuthorized {
150 | 		err = fmt.Errorf("unauthorized Tool call: Please make sure you specify the correct auth headers: %w", tools.ErrUnauthorized)
151 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
152 | 	}
153 | 	logger.DebugContext(ctx, "tool invocation authorized")
154 | 
155 | 	params, err := tool.ParseParams(data, claimsFromAuth)
156 | 	if err != nil {
157 | 		err = fmt.Errorf("provided parameters were invalid: %w", err)
158 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err
159 | 	}
160 | 	logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params))
161 | 
162 | 	// run tool invocation and generate response.
163 | 	results, err := tool.Invoke(ctx, params, accessToken)
164 | 	if err != nil {
165 | 		errStr := err.Error()
166 | 		// Missing authService tokens.
167 | 		if errors.Is(err, tools.ErrUnauthorized) {
168 | 			return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
169 | 		}
170 | 		// Upstream auth error
171 | 		if strings.Contains(errStr, "Error 401") || strings.Contains(errStr, "Error 403") {
172 | 			if tool.RequiresClientAuthorization() {
173 | 				// Error with client credentials should pass down to the client
174 | 				return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
175 | 			}
176 | 			// Auth error with ADC should raise internal 500 error
177 | 			return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
178 | 		}
179 | 		text := TextContent{
180 | 			Type: "text",
181 | 			Text: err.Error(),
182 | 		}
183 | 		return jsonrpc.JSONRPCResponse{
184 | 			Jsonrpc: jsonrpc.JSONRPC_VERSION,
185 | 			Id:      id,
186 | 			Result:  CallToolResult{Content: []TextContent{text}, IsError: true},
187 | 		}, nil
188 | 	}
189 | 
190 | 	content := make([]TextContent, 0)
191 | 
192 | 	sliceRes, ok := results.([]any)
193 | 	if !ok {
194 | 		sliceRes = []any{results}
195 | 	}
196 | 
197 | 	for _, d := range sliceRes {
198 | 		text := TextContent{Type: "text"}
199 | 		dM, err := json.Marshal(d)
200 | 		if err != nil {
201 | 			text.Text = fmt.Sprintf("failed to marshal: %s, result: %s", err, d)
202 | 		} else {
203 | 			text.Text = string(dM)
204 | 		}
205 | 		content = append(content, text)
206 | 	}
207 | 
208 | 	return jsonrpc.JSONRPCResponse{
209 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
210 | 		Id:      id,
211 | 		Result:  CallToolResult{Content: content},
212 | 	}, nil
213 | }
214 | 
```
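
For context, toolsCallHandler parses a standard MCP `tools/call` JSON-RPC request. A hedged sketch of the payload shape it expects, with the tool name and arguments as placeholders matching the mock tools used elsewhere in this repository:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The body that toolsCallHandler unmarshals into CallToolRequest:
	// params.name selects the tool and params.arguments carries its inputs.
	body := map[string]any{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "tools/call",
		"params": map[string]any{
			"name":      "some_params",
			"arguments": map[string]any{"param1": 1, "param2": 2},
		},
	}
	b, err := json.MarshalIndent(body, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

The handler then re-decodes the arguments with util.DecodeJSON rather than relying on the initial unmarshal, so that the integer/float distinction a plain `map[string]any` decode would lose is preserved.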

--------------------------------------------------------------------------------
/internal/server/mcp/v20250618/method.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package v20250618
 16 | 
 17 | import (
 18 | 	"bytes"
 19 | 	"context"
 20 | 	"encoding/json"
 21 | 	"errors"
 22 | 	"fmt"
 23 | 	"net/http"
 24 | 	"strings"
 25 | 
 26 | 	"github.com/googleapis/genai-toolbox/internal/auth"
 27 | 	"github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc"
 28 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 29 | 	"github.com/googleapis/genai-toolbox/internal/util"
 30 | )
 31 | 
 32 | // ProcessMethod returns a response for the request.
 33 | func ProcessMethod(ctx context.Context, id jsonrpc.RequestId, method string, toolset tools.Toolset, tools map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) {
 34 | 	switch method {
 35 | 	case PING:
 36 | 		return pingHandler(id)
 37 | 	case TOOLS_LIST:
 38 | 		return toolsListHandler(id, toolset, body)
 39 | 	case TOOLS_CALL:
 40 | 		return toolsCallHandler(ctx, id, tools, authServices, body, header)
 41 | 	default:
 42 | 		err := fmt.Errorf("invalid method %s", method)
 43 | 		return jsonrpc.NewError(id, jsonrpc.METHOD_NOT_FOUND, err.Error(), nil), err
 44 | 	}
 45 | }
 46 | 
 47 | // pingHandler handles the "ping" method by returning an empty response.
 48 | func pingHandler(id jsonrpc.RequestId) (any, error) {
 49 | 	return jsonrpc.JSONRPCResponse{
 50 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
 51 | 		Id:      id,
 52 | 		Result:  struct{}{},
 53 | 	}, nil
 54 | }
 55 | 
 56 | func toolsListHandler(id jsonrpc.RequestId, toolset tools.Toolset, body []byte) (any, error) {
 57 | 	var req ListToolsRequest
 58 | 	if err := json.Unmarshal(body, &req); err != nil {
 59 | 		err = fmt.Errorf("invalid mcp tools list request: %w", err)
 60 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
 61 | 	}
 62 | 
 63 | 	result := ListToolsResult{
 64 | 		Tools: toolset.McpManifest,
 65 | 	}
 66 | 	return jsonrpc.JSONRPCResponse{
 67 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
 68 | 		Id:      id,
 69 | 		Result:  result,
 70 | 	}, nil
 71 | }
 72 | 
 73 | // toolsCallHandler generates a response for a tools call.
 74 | func toolsCallHandler(ctx context.Context, id jsonrpc.RequestId, toolsMap map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) {
 75 | 	// retrieve logger from context
 76 | 	logger, err := util.LoggerFromContext(ctx)
 77 | 	if err != nil {
 78 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
 79 | 	}
 80 | 
 81 | 	var req CallToolRequest
 82 | 	if err = json.Unmarshal(body, &req); err != nil {
 83 | 		err = fmt.Errorf("invalid mcp tools call request: %w", err)
 84 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
 85 | 	}
 86 | 
 87 | 	toolName := req.Params.Name
 88 | 	toolArgument := req.Params.Arguments
 89 | 	logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName))
 90 | 	tool, ok := toolsMap[toolName]
 91 | 	if !ok {
 92 | 		err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName)
 93 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err
 94 | 	}
 95 | 
 96 | 	// Get access token
 97 | 	accessToken := tools.AccessToken(header.Get("Authorization"))
 98 | 
 99 | 	// Check if this specific tool requires the standard authorization header
100 | 	if tool.RequiresClientAuthorization() {
101 | 		if accessToken == "" {
102 | 			return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, "missing access token in the 'Authorization' header", nil), tools.ErrUnauthorized
103 | 		}
104 | 	}
105 | 
106 | 	// marshal the arguments, then decode them with util.DecodeJSON to avoid losing the int/float distinction.
107 | 	aMarshal, err := json.Marshal(toolArgument)
108 | 	if err != nil {
109 | 		err = fmt.Errorf("unable to marshal tools argument: %w", err)
110 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
111 | 	}
112 | 
113 | 	var data map[string]any
114 | 	if err = util.DecodeJSON(bytes.NewBuffer(aMarshal), &data); err != nil {
115 | 		err = fmt.Errorf("unable to decode tools argument: %w", err)
116 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
117 | 	}
118 | 
119 | 	// Tool authentication
120 | 	// claimsFromAuth maps the name of the authservice to the claims retrieved from it.
121 | 	claimsFromAuth := make(map[string]map[string]any)
122 | 
123 | 	// if using stdio, header will be nil and auth will not be supported
124 | 	if header != nil {
125 | 		for _, aS := range authServices {
126 | 			claims, err := aS.GetClaimsFromHeader(ctx, header)
127 | 			if err != nil {
128 | 				logger.DebugContext(ctx, err.Error())
129 | 				continue
130 | 			}
131 | 			if claims == nil {
132 | 				// authService not present in header
133 | 				continue
134 | 			}
135 | 			claimsFromAuth[aS.GetName()] = claims
136 | 		}
137 | 	}
138 | 
139 | 	// Tool authorization check
140 | 	verifiedAuthServices := make([]string, len(claimsFromAuth))
141 | 	i := 0
142 | 	for k := range claimsFromAuth {
143 | 		verifiedAuthServices[i] = k
144 | 		i++
145 | 	}
146 | 
147 | 	// Check if any of the specified auth services is verified
148 | 	isAuthorized := tool.Authorized(verifiedAuthServices)
149 | 	if !isAuthorized {
150 | 		err = fmt.Errorf("unauthorized Tool call: Please make sure you specify the correct auth headers: %w", tools.ErrUnauthorized)
151 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
152 | 	}
153 | 	logger.DebugContext(ctx, "tool invocation authorized")
154 | 
155 | 	params, err := tool.ParseParams(data, claimsFromAuth)
156 | 	if err != nil {
157 | 		err = fmt.Errorf("provided parameters were invalid: %w", err)
158 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err
159 | 	}
160 | 	logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params))
161 | 
162 | 	// run tool invocation and generate response.
163 | 	results, err := tool.Invoke(ctx, params, accessToken)
164 | 	if err != nil {
165 | 		errStr := err.Error()
166 | 		// Missing authService tokens.
167 | 		if errors.Is(err, tools.ErrUnauthorized) {
168 | 			return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
169 | 		}
170 | 		// Upstream auth error
171 | 		if strings.Contains(errStr, "Error 401") || strings.Contains(errStr, "Error 403") {
172 | 			if tool.RequiresClientAuthorization() {
173 | 				// Error with client credentials should pass down to the client
174 | 				return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
175 | 			}
176 | 			// Auth error with ADC should raise internal 500 error
177 | 			return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
178 | 		}
179 | 		text := TextContent{
180 | 			Type: "text",
181 | 			Text: err.Error(),
182 | 		}
183 | 		return jsonrpc.JSONRPCResponse{
184 | 			Jsonrpc: jsonrpc.JSONRPC_VERSION,
185 | 			Id:      id,
186 | 			Result:  CallToolResult{Content: []TextContent{text}, IsError: true},
187 | 		}, nil
188 | 	}
189 | 
190 | 	content := make([]TextContent, 0)
191 | 
192 | 	sliceRes, ok := results.([]any)
193 | 	if !ok {
194 | 		sliceRes = []any{results}
195 | 	}
196 | 
197 | 	for _, d := range sliceRes {
198 | 		text := TextContent{Type: "text"}
199 | 		dM, err := json.Marshal(d)
200 | 		if err != nil {
201 | 			text.Text = fmt.Sprintf("failed to marshal: %s, result: %s", err, d)
202 | 		} else {
203 | 			text.Text = string(dM)
204 | 		}
205 | 		content = append(content, text)
206 | 	}
207 | 
208 | 	return jsonrpc.JSONRPCResponse{
209 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
210 | 		Id:      id,
211 | 		Result:  CallToolResult{Content: content},
212 | 	}, nil
213 | }
214 | 
```
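
The tail of each of these handlers normalizes the tool result into MCP text content: non-slice results are wrapped in a single-element slice, then each element is JSON-marshaled into a text item. A standalone sketch of that step (the `textContent` type here is illustrative, not the package's own `TextContent`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type textContent struct {
	Type string `json:"type"`
	Text string `json:"text"`
}

// wrapResults mirrors the result-wrapping step shared by these handlers.
func wrapResults(results any) []textContent {
	sliceRes, ok := results.([]any)
	if !ok {
		// A scalar or map result is treated as a one-element slice.
		sliceRes = []any{results}
	}
	content := make([]textContent, 0, len(sliceRes))
	for _, d := range sliceRes {
		tc := textContent{Type: "text"}
		if b, err := json.Marshal(d); err != nil {
			tc.Text = fmt.Sprintf("failed to marshal: %s", err)
		} else {
			tc.Text = string(b)
		}
		content = append(content, tc)
	}
	return content
}

func main() {
	fmt.Println(wrapResults(map[string]any{"rows": 3}))
}
```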

--------------------------------------------------------------------------------
/internal/server/mcp/v20241105/method.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package v20241105
 16 | 
 17 | import (
 18 | 	"bytes"
 19 | 	"context"
 20 | 	"encoding/json"
 21 | 	"errors"
 22 | 	"fmt"
 23 | 	"net/http"
 24 | 	"strings"
 25 | 
 26 | 	"github.com/googleapis/genai-toolbox/internal/auth"
 27 | 	"github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc"
 28 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 29 | 	"github.com/googleapis/genai-toolbox/internal/util"
 30 | )
 31 | 
 32 | // ProcessMethod returns a response for the request.
 33 | func ProcessMethod(ctx context.Context, id jsonrpc.RequestId, method string, toolset tools.Toolset, tools map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) {
 34 | 	switch method {
 35 | 	case PING:
 36 | 		return pingHandler(id)
 37 | 	case TOOLS_LIST:
 38 | 		return toolsListHandler(id, toolset, body)
 39 | 	case TOOLS_CALL:
 40 | 		return toolsCallHandler(ctx, id, tools, authServices, body, header)
 41 | 	default:
 42 | 		err := fmt.Errorf("invalid method %s", method)
 43 | 		return jsonrpc.NewError(id, jsonrpc.METHOD_NOT_FOUND, err.Error(), nil), err
 44 | 	}
 45 | }
 46 | 
 47 | // pingHandler handles the "ping" method by returning an empty response.
 48 | func pingHandler(id jsonrpc.RequestId) (any, error) {
 49 | 	return jsonrpc.JSONRPCResponse{
 50 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
 51 | 		Id:      id,
 52 | 		Result:  struct{}{},
 53 | 	}, nil
 54 | }
 55 | 
 56 | func toolsListHandler(id jsonrpc.RequestId, toolset tools.Toolset, body []byte) (any, error) {
 57 | 	var req ListToolsRequest
 58 | 	if err := json.Unmarshal(body, &req); err != nil {
 59 | 		err = fmt.Errorf("invalid mcp tools list request: %w", err)
 60 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
 61 | 	}
 62 | 
 63 | 	result := ListToolsResult{
 64 | 		Tools: toolset.McpManifest,
 65 | 	}
 66 | 	return jsonrpc.JSONRPCResponse{
 67 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
 68 | 		Id:      id,
 69 | 		Result:  result,
 70 | 	}, nil
 71 | }
 72 | 
 73 | // toolsCallHandler generates a response for a tools call.
 74 | func toolsCallHandler(ctx context.Context, id jsonrpc.RequestId, toolsMap map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) {
 75 | 	// retrieve logger from context
 76 | 	logger, err := util.LoggerFromContext(ctx)
 77 | 	if err != nil {
 78 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
 79 | 	}
 80 | 
 81 | 	var req CallToolRequest
 82 | 	if err = json.Unmarshal(body, &req); err != nil {
 83 | 		err = fmt.Errorf("invalid mcp tools call request: %w", err)
 84 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
 85 | 	}
 86 | 
 87 | 	toolName := req.Params.Name
 88 | 	toolArgument := req.Params.Arguments
 89 | 	logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName))
 90 | 	tool, ok := toolsMap[toolName]
 91 | 	if !ok {
 92 | 		err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName)
 93 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err
 94 | 	}
 95 | 
 96 | 	// Get access token
 97 | 	accessToken := tools.AccessToken(header.Get("Authorization"))
 98 | 
 99 | 	// Check if this specific tool requires the standard authorization header
100 | 	if tool.RequiresClientAuthorization() {
101 | 		if accessToken == "" {
102 | 			return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, "missing access token in the 'Authorization' header", nil), tools.ErrUnauthorized
103 | 		}
104 | 	}
105 | 
106 | 	// Marshal the arguments and decode them with util.DecodeJSON to prevent precision loss between floats and ints.
107 | 	aMarshal, err := json.Marshal(toolArgument)
108 | 	if err != nil {
109 | 		err = fmt.Errorf("unable to marshal tools argument: %w", err)
110 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
111 | 	}
112 | 
113 | 	var data map[string]any
114 | 	if err = util.DecodeJSON(bytes.NewBuffer(aMarshal), &data); err != nil {
115 | 		err = fmt.Errorf("unable to decode tools argument: %w", err)
116 | 		return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
117 | 	}
118 | 
119 | 	// Tool authentication
120 | 	// claimsFromAuth maps the name of the authservice to the claims retrieved from it.
121 | 	claimsFromAuth := make(map[string]map[string]any)
122 | 
123 | 	// if using stdio, header will be nil and auth will not be supported
124 | 	if header != nil {
125 | 		for _, aS := range authServices {
126 | 			claims, err := aS.GetClaimsFromHeader(ctx, header)
127 | 			if err != nil {
128 | 				logger.DebugContext(ctx, err.Error())
129 | 				continue
130 | 			}
131 | 			if claims == nil {
132 | 				// authService not present in header
133 | 				continue
134 | 			}
135 | 			claimsFromAuth[aS.GetName()] = claims
136 | 		}
137 | 	}
138 | 
139 | 	// Tool authorization check
140 | 	verifiedAuthServices := make([]string, len(claimsFromAuth))
141 | 	i := 0
142 | 	for k := range claimsFromAuth {
143 | 		verifiedAuthServices[i] = k
144 | 		i++
145 | 	}
146 | 
147 | 	// Check if any of the specified auth services is verified
148 | 	isAuthorized := tool.Authorized(verifiedAuthServices)
149 | 	if !isAuthorized {
150 | 		err = fmt.Errorf("unauthorized tool call: please make sure you specify the correct auth headers: %w", tools.ErrUnauthorized)
151 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
152 | 	}
153 | 	logger.DebugContext(ctx, "tool invocation authorized")
154 | 
155 | 	params, err := tool.ParseParams(data, claimsFromAuth)
156 | 	if err != nil {
157 | 		err = fmt.Errorf("provided parameters were invalid: %w", err)
158 | 		return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err
159 | 	}
160 | 	logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params))
161 | 
162 | 	// run tool invocation and generate response.
163 | 	results, err := tool.Invoke(ctx, params, accessToken)
164 | 	if err != nil {
165 | 		errStr := err.Error()
166 | 		// Missing authService tokens.
167 | 		if errors.Is(err, tools.ErrUnauthorized) {
168 | 			return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
169 | 		}
170 | 		// Upstream auth error
171 | 		if strings.Contains(errStr, "Error 401") || strings.Contains(errStr, "Error 403") {
172 | 			if tool.RequiresClientAuthorization() {
173 | 				// Error with client credentials should pass down to the client
174 | 				return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err
175 | 			}
176 | 			// Auth error with ADC should raise internal 500 error
177 | 			return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err
178 | 		}
179 | 
180 | 		text := TextContent{
181 | 			Type: "text",
182 | 			Text: err.Error(),
183 | 		}
184 | 		return jsonrpc.JSONRPCResponse{
185 | 			Jsonrpc: jsonrpc.JSONRPC_VERSION,
186 | 			Id:      id,
187 | 			Result:  CallToolResult{Content: []TextContent{text}, IsError: true},
188 | 		}, nil
189 | 	}
190 | 
191 | 	content := make([]TextContent, 0)
192 | 
193 | 	sliceRes, ok := results.([]any)
194 | 	if !ok {
195 | 		sliceRes = []any{results}
196 | 	}
197 | 
198 | 	for _, d := range sliceRes {
199 | 		text := TextContent{Type: "text"}
200 | 		dM, err := json.Marshal(d)
201 | 		if err != nil {
202 | 			text.Text = fmt.Sprintf("failed to marshal: %s, result: %v", err, d)
203 | 		} else {
204 | 			text.Text = string(dM)
205 | 		}
206 | 		content = append(content, text)
207 | 	}
208 | 
209 | 	return jsonrpc.JSONRPCResponse{
210 | 		Jsonrpc: jsonrpc.JSONRPC_VERSION,
211 | 		Id:      id,
212 | 		Result:  CallToolResult{Content: content},
213 | 	}, nil
214 | }
215 | 
```
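
For orientation, this is roughly the `tools/call` body that `toolsCallHandler` unmarshals. The exact wire format is defined by `CallToolRequest` (not shown on this page), so the field names below are an assumption based on the standard MCP shape (`params.name`, `params.arguments`):

```go
// Hypothetical request builder (assumed field names): produces a "tools/call"
// body like the []byte that ProcessMethod receives and toolsCallHandler parses.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	req := map[string]any{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "tools/call",
		"params": map[string]any{
			"name":      "execute_sql",                      // must match a key in the tools map
			"arguments": map[string]any{"sql": "SELECT 1;"}, // later re-decoded with util.DecodeJSON to preserve int vs. float
		},
	}
	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```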

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/tools/postgres.yaml:
--------------------------------------------------------------------------------

```yaml
  1 | # Copyright 2025 Google LLC
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | 
 15 | sources:
 16 |     postgresql-source:
 17 |         kind: postgres
 18 |         host: ${POSTGRES_HOST}
 19 |         port: ${POSTGRES_PORT}
 20 |         database: ${POSTGRES_DATABASE}
 21 |         user: ${POSTGRES_USER}
 22 |         password: ${POSTGRES_PASSWORD}
 23 |         queryParams: ${POSTGRES_QUERY_PARAMS:}
 24 | 
 25 | tools:
 26 |     execute_sql:
 27 |         kind: postgres-execute-sql
 28 |         source: postgresql-source
 29 |         description: Use this tool to execute SQL.
 30 | 
 31 |     list_tables:
 32 |         kind: postgres-list-tables
 33 |         source: postgresql-source
 34 |         description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
 35 | 
 36 |     list_active_queries:
 37 |         kind: postgres-list-active-queries
 38 |         source: postgresql-source
 39 |         description: "List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text."
 40 | 
 41 |     list_available_extensions:
 42 |         kind: postgres-list-available-extensions
 43 |         source: postgresql-source
 44 |         description: "Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description."
 45 | 
 46 |     list_installed_extensions:
 47 |         kind: postgres-list-installed-extensions
 48 |         source: postgresql-source
 49 |         description: "List all installed PostgreSQL extensions with their name, version, schema, owner, and description."
 50 | 
 51 |     list_autovacuum_configurations:
 52 |         kind: postgres-sql
 53 |         source: postgresql-source
 54 |         description: "List PostgreSQL autovacuum-related configurations (name and current setting) from pg_settings."
 55 |         statement: |
 56 |             SELECT name,
 57 |                 setting
 58 |             FROM   pg_settings
 59 |             WHERE  category = 'Autovacuum';
 60 | 
 61 |     list_memory_configurations:
 62 |         kind: postgres-sql
 63 |         source: postgresql-source
 64 |         description: "List PostgreSQL memory-related configurations (name and current setting) from pg_settings."
 65 |         statement: |
 66 |             (
 67 |                 SELECT
 68 |                 name,
 69 |                 pg_size_pretty((setting::bigint * 1024)::bigint) setting
 70 |                 FROM pg_settings
 71 |                 WHERE name IN ('work_mem', 'maintenance_work_mem')
 72 |             )
 73 |             UNION ALL
 74 |             (
 75 |                 SELECT
 76 |                 name,
 77 |                 pg_size_pretty((((setting::bigint) * 8) * 1024)::bigint)
 78 |                 FROM pg_settings
 79 |                 WHERE name IN ('shared_buffers', 'wal_buffers', 'effective_cache_size', 'temp_buffers')
 80 |             )
 81 |             ORDER BY 1 DESC;
 82 | 
 83 |     list_top_bloated_tables:
 84 |         kind: postgres-sql
 85 |         source: postgresql-source
 86 |         description: |
 87 |             List the top tables by dead-tuple count (an approximate bloat signal), returning schema, table, live/dead tuples, percentage, and last vacuum/analyze times.
 88 |         statement: |
 89 |             SELECT
 90 |                 schemaname AS schema_name,
 91 |                 relname AS relation_name,
 92 |                 n_live_tup AS live_tuples,
 93 |                 n_dead_tup AS dead_tuples,
 94 |                 TRUNC((n_dead_tup::NUMERIC / NULLIF(n_live_tup + n_dead_tup, 0)) * 100, 2) AS dead_tuple_percentage,
 95 |                 last_vacuum,
 96 |                 last_autovacuum,
 97 |                 last_analyze,
 98 |                 last_autoanalyze
 99 |             FROM pg_stat_user_tables
100 |             ORDER BY n_dead_tup DESC
101 |             LIMIT COALESCE($1::int, 50);
102 |         parameters:
103 |             - name: limit
104 |               description: "The maximum number of results to return."
105 |               type: integer
106 |               default: 50
107 | 
108 |     list_replication_slots:
109 |         kind: postgres-sql
110 |         source: postgresql-source
111 |         description: "List key details for all PostgreSQL replication slots (e.g., type, database, active status) and calculate the size of the outstanding WAL that the slot prevents from being removed."
112 |         statement: |
113 |             SELECT
114 |             slot_name,
115 |             slot_type,
116 |             plugin,
117 |             database,
118 |             temporary,
119 |             active,
120 |             restart_lsn,
121 |             confirmed_flush_lsn,
122 |             xmin,
123 |             catalog_xmin,
124 |             pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
125 |             FROM pg_replication_slots;
126 | 
127 |     list_invalid_indexes:
128 |         kind: postgres-sql
129 |         source: postgresql-source
130 |         description: "Lists all invalid PostgreSQL indexes which are taking up disk space but are unusable by the query planner. Typically created by failed CREATE INDEX CONCURRENTLY operations."
131 |         statement: |
132 |             SELECT
133 |                 nspname AS schema_name,
134 |                 indexrelid::regclass AS index_name,
135 |                 indrelid::regclass AS table_name,
136 |                 pg_size_pretty(pg_total_relation_size(indexrelid)) AS index_size,
137 |                 indisready,
138 |                 indisvalid,
139 |                 pg_get_indexdef(pg_class.oid) AS index_def
140 |             FROM pg_index
141 |             JOIN pg_class ON pg_class.oid = pg_index.indexrelid
142 |             JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
143 |             WHERE indisvalid = FALSE;
144 | 
145 |     get_query_plan:
146 |         kind: postgres-sql
147 |         source: postgresql-source
148 |         description: "Generate a PostgreSQL EXPLAIN plan in JSON format for a single SQL statement—without executing it. This returns the optimizer's estimated plan, costs, and rows (no ANALYZE, no extra options). Use in production safely for plan inspection, regression checks, and query tuning workflows."
149 |         statement: |
150 |             EXPLAIN (FORMAT JSON) {{.query}};
151 |         templateParameters:
152 |             - name: query
153 |               type: string
154 |               description: "The SQL statement for which you want to generate a plan (omit the EXPLAIN keyword)."
155 |               required: true
156 | 
157 | toolsets:
158 |     postgres_database_tools:
159 |         - execute_sql
160 |         - list_tables
161 |         - list_active_queries
162 |         - list_available_extensions
163 |         - list_installed_extensions
164 |         - list_autovacuum_configurations
165 |         - list_memory_configurations
166 |         - list_top_bloated_tables
167 |         - list_replication_slots
168 |         - list_invalid_indexes
169 |         - get_query_plan
170 | 
```
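
Note how `get_query_plan` splices its `templateParameter` straight into the statement via `{{.query}}` rather than binding it as a query parameter. A minimal sketch of that substitution, assuming Go `text/template` semantics (which the `{{.query}}` syntax suggests; the real substitution code lives elsewhere in the repository):

```go
// Minimal sketch, assuming text/template semantics for templateParameters:
// the value of "query" is substituted as raw text before the statement runs.
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	const stmt = "EXPLAIN (FORMAT JSON) {{.query}};"

	tmpl := template.Must(template.New("stmt").Parse(stmt))

	var out bytes.Buffer
	if err := tmpl.Execute(&out, map[string]any{
		"query": "SELECT * FROM flights WHERE airline = 'CY'",
	}); err != nil {
		panic(err)
	}
	fmt.Println(out.String()) // EXPLAIN (FORMAT JSON) SELECT * FROM flights WHERE airline = 'CY';
}
```

Because the text is interpolated rather than bound, template parameters are the part of these configs most exposed to SQL injection, which is why the tool docs recommend plain `parameters` wherever possible.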

--------------------------------------------------------------------------------
/docs/en/resources/tools/spanner/spanner-sql.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: "spanner-sql"
  3 | type: docs
  4 | weight: 1
  5 | description: >
  6 |   A "spanner-sql" tool executes a pre-defined SQL statement against a Google
  7 |   Cloud Spanner database.
  8 | aliases:
  9 | - /resources/tools/spanner-sql
 10 | ---
 11 | 
 12 | ## About
 13 | 
 14 | A `spanner-sql` tool executes a pre-defined SQL statement (either `googlesql` or
 15 | `postgresql`) against a Cloud Spanner database. It's compatible with any of the
 16 | following sources:
 17 | 
 18 | - [spanner](../../sources/spanner.md)
 19 | 
 20 | ### GoogleSQL
 21 | 
 22 | For the `googlesql` dialect, the specified SQL statement is executed as a [data
 23 | manipulation language (DML)][gsql-dml] statement, and the specified parameters will be
 24 | inserted according to their name: e.g. `@name`.
 25 | 
 26 | > **Note:** This tool uses parameterized queries to prevent SQL injections.
 27 | > Query parameters can be used as substitutes for arbitrary expressions.
 28 | > Parameters cannot be used as substitutes for identifiers, column names, table
 29 | > names, or other parts of the query.
 30 | 
 31 | [gsql-dml]:
 32 |     https://cloud.google.com/spanner/docs/reference/standard-sql/dml-syntax
 33 | 
 34 | ### PostgreSQL
 35 | 
 36 | For the `postgresql` dialect, the specified SQL statement is executed as a [prepared
 37 | statement][pg-prepare], and specified parameters will be inserted according to
 38 | their position: e.g. `$1` will be the first parameter specified, `$2` will be
 39 | the second parameter, and so on.
 40 | 
 41 | [pg-prepare]: https://www.postgresql.org/docs/current/sql-prepare.html
 42 | 
 43 | ## Example
 44 | 
 45 | > **Note:** This tool uses parameterized queries to prevent SQL injections.
 46 | > Query parameters can be used as substitutes for arbitrary expressions.
 47 | > Parameters cannot be used as substitutes for identifiers, column names, table
 48 | > names, or other parts of the query.
 49 | 
 50 | {{< tabpane persist="header" >}}
 51 | {{< tab header="GoogleSQL" lang="yaml" >}}
 52 | 
 53 | tools:
 54 |  search_flights_by_number:
 55 |     kind: spanner-sql
 56 |     source: my-spanner-instance
 57 |     statement: |
 58 |       SELECT * FROM flights
 59 |       WHERE airline = @airline
 60 |       AND flight_number = @flight_number
 61 |       LIMIT 10
 62 |     description: |
 63 |       Use this tool to get information for a specific flight.
 64 |       Takes an airline code and flight number and returns info on the flight.
 65 |       Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
 66 |       An airline code is a code for an airline service consisting of a two-character
 67 |       airline designator followed by a flight number, which is a 1 to 4 digit number.
 68 |       For example, if given CY 0123, the airline is "CY", and flight_number is "123".
 69 |       Another example for this is DL 1234, the airline is "DL", and flight_number is "1234".
 70 |       If the tool returns more than one option, choose the date closest to today.
 71 |       Example:
 72 |       {{
 73 |           "airline": "CY",
 74 |           "flight_number": "888",
 75 |       }}
 76 |       Example:
 77 |       {{
 78 |           "airline": "DL",
 79 |           "flight_number": "1234",
 80 |       }}
 81 |     parameters:
 82 |       - name: airline
 83 |         type: string
 84 |         description: Airline unique 2 letter identifier
 85 |       - name: flight_number
 86 |         type: string
 87 |         description: 1 to 4 digit number
 88 | 
 89 | {{< /tab >}}
 90 | {{< tab header="PostgreSQL" lang="yaml" >}}
 91 | 
 92 | tools:
 93 |  search_flights_by_number:
 94 |     kind: spanner-sql
 95 |     source: my-spanner-instance
 96 |     statement: |
 97 |       SELECT * FROM flights
 98 |       WHERE airline = $1
 99 |       AND flight_number = $2
100 |       LIMIT 10
101 |     description: |
102 |       Use this tool to get information for a specific flight.
103 |       Takes an airline code and flight number and returns info on the flight.
104 |       Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number.
105 |       An airline code is a code for an airline service consisting of a two-character
106 |       airline designator followed by a flight number, which is a 1 to 4 digit number.
107 |       For example, if given CY 0123, the airline is "CY", and flight_number is "123".
108 |       Another example for this is DL 1234, the airline is "DL", and flight_number is "1234".
109 |       If the tool returns more than one option, choose the date closest to today.
110 |       Example:
111 |       {{
112 |           "airline": "CY",
113 |           "flight_number": "888",
114 |       }}
115 |       Example:
116 |       {{
117 |           "airline": "DL",
118 |           "flight_number": "1234",
119 |       }}
120 |     parameters:
121 |       - name: airline
122 |         type: string
123 |         description: Airline unique 2 letter identifier
124 |       - name: flight_number
125 |         type: string
126 |         description: 1 to 4 digit number
127 | 
128 | {{< /tab >}}
129 | {{< /tabpane >}}
130 | 
131 | ### Example with Template Parameters
132 | 
133 | > **Note:** This tool allows direct modifications to the SQL statement,
134 | > including identifiers, column names, and table names. **This makes it more
135 | > vulnerable to SQL injections**. Using basic parameters only (see above) is
136 | > recommended for performance and safety reasons. For more details, please check
137 | > [templateParameters](..#template-parameters).
138 | 
139 | ```yaml
140 | tools:
141 |  list_table:
142 |     kind: spanner-sql
143 |     source: my-spanner-instance
144 |     statement: |
145 |       SELECT * FROM {{.tableName}};
146 |     description: |
147 |       Use this tool to list all information from a specific table.
148 |       Example:
149 |       {{
150 |           "tableName": "flights",
151 |       }}
152 |     templateParameters:
153 |       - name: tableName
154 |         type: string
155 |         description: Table to select from
156 | ```
157 | 
158 | ## Reference
159 | 
160 | | **field**          |                   **type**                   | **required** | **description**                                                                                                                        |
161 | |--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
162 | | kind               |                    string                    |     true     | Must be "spanner-sql".                                                                                                                 |
163 | | source             |                    string                    |     true     | Name of the source the SQL should execute on.                                                                                          |
164 | | description        |                    string                    |     true     | Description of the tool that is passed to the LLM.                                                                                     |
165 | | statement          |                    string                    |     true     | SQL statement to execute on.                                                                                                           |
166 | | parameters         |   [parameters](../#specifying-parameters)    |    false     | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement.                                          |
167 | | readOnly           |                     bool                     |    false     | When set to `true`, the `statement` is run as a read-only transaction. Default: `false`.                                               |
168 | | templateParameters | [templateParameters](..#template-parameters) |    false     | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
169 | 
```
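
To make the dialect difference concrete outside of Toolbox, here is a hedged sketch of how the GoogleSQL named parameters above (`@airline`, `@flight_number`) bind with the Cloud Spanner Go client; it is independent of this tool's implementation, and the database path is a placeholder:

```go
// Sketch: binding named parameters with the Cloud Spanner Go client
// (GoogleSQL dialect). Replace the database path with a real one to run.
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/spanner"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	stmt := spanner.Statement{
		SQL: "SELECT * FROM flights WHERE airline = @airline AND flight_number = @flight_number LIMIT 10",
		Params: map[string]interface{}{
			"airline":       "CY",
			"flight_number": "888",
		},
	}
	iter := client.Single().Query(ctx, stmt)
	defer iter.Stop()
	for {
		row, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(row)
	}
}
```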

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/tools/cloud-sql-postgres.yaml:
--------------------------------------------------------------------------------

```yaml
  1 | # Copyright 2025 Google LLC
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | 
 15 | sources:
 16 |     cloudsql-pg-source:
 17 |         kind: cloud-sql-postgres
 18 |         project: ${CLOUD_SQL_POSTGRES_PROJECT}
 19 |         region: ${CLOUD_SQL_POSTGRES_REGION}
 20 |         instance: ${CLOUD_SQL_POSTGRES_INSTANCE}
 21 |         database: ${CLOUD_SQL_POSTGRES_DATABASE}
 22 |         user: ${CLOUD_SQL_POSTGRES_USER:}
 23 |         password: ${CLOUD_SQL_POSTGRES_PASSWORD:}
 24 |         ipType: ${CLOUD_SQL_POSTGRES_IP_TYPE:public}
 25 | 
 26 | tools:
 27 |     execute_sql:
 28 |         kind: postgres-execute-sql
 29 |         source: cloudsql-pg-source
 30 |         description: Use this tool to execute SQL.
 31 | 
 32 |     list_tables:
 33 |         kind: postgres-list-tables
 34 |         source: cloudsql-pg-source
 35 |         description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
 36 | 
 37 |     list_active_queries:
 38 |         kind: postgres-list-active-queries
 39 |         source: cloudsql-pg-source
 40 |         description: "List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text."
 41 | 
 42 |     list_available_extensions:
 43 |         kind: postgres-list-available-extensions
 44 |         source: cloudsql-pg-source
 45 |         description: "Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description."
 46 | 
 47 |     list_installed_extensions:
 48 |         kind: postgres-list-installed-extensions
 49 |         source: cloudsql-pg-source
 50 |         description: "List all installed PostgreSQL extensions with their name, version, schema, owner, and description."
 51 | 
 52 |     list_autovacuum_configurations:
 53 |         kind: postgres-sql
 54 |         source: cloudsql-pg-source
 55 |         description: "List PostgreSQL autovacuum-related configurations (name and current setting) from pg_settings."
 56 |         statement: |
 57 |             SELECT name,
 58 |                 setting
 59 |             FROM   pg_settings
 60 |             WHERE  category = 'Autovacuum';
 61 | 
 62 |     list_memory_configurations:
 63 |         kind: postgres-sql
 64 |         source: cloudsql-pg-source
 65 |         description: "List PostgreSQL memory-related configurations (name and current setting) from pg_settings."
 66 |         statement: |
 67 |             (
 68 |                 SELECT
 69 |                 name,
 70 |                 pg_size_pretty((setting::bigint * 1024)::bigint) setting
 71 |                 FROM pg_settings
 72 |                 WHERE name IN ('work_mem', 'maintenance_work_mem')
 73 |             )
 74 |             UNION ALL
 75 |             (
 76 |                 SELECT
 77 |                 name,
 78 |                 pg_size_pretty((((setting::bigint) * 8) * 1024)::bigint)
 79 |                 FROM pg_settings
 80 |                 WHERE name IN ('shared_buffers', 'wal_buffers', 'effective_cache_size', 'temp_buffers')
 81 |             )
 82 |             ORDER BY 1 DESC;
 83 | 
 84 |     list_top_bloated_tables:
 85 |         kind: postgres-sql
 86 |         source: cloudsql-pg-source
 87 |         description: |
 88 |             List the top tables by dead-tuple count (an approximate bloat signal), returning schema, table, live/dead tuples, percentage, and last vacuum/analyze times.
 89 |         statement: |
 90 |             SELECT
 91 |                 schemaname AS schema_name,
 92 |                 relname AS relation_name,
 93 |                 n_live_tup AS live_tuples,
 94 |                 n_dead_tup AS dead_tuples,
 95 |                 TRUNC((n_dead_tup::NUMERIC / NULLIF(n_live_tup + n_dead_tup, 0)) * 100, 2) AS dead_tuple_percentage,
 96 |                 last_vacuum,
 97 |                 last_autovacuum,
 98 |                 last_analyze,
 99 |                 last_autoanalyze
100 |             FROM pg_stat_user_tables
101 |             ORDER BY n_dead_tup DESC
102 |             LIMIT COALESCE($1::int, 50);
103 |         parameters:
104 |             - name: limit
105 |               description: "The maximum number of results to return."
106 |               type: integer
107 |               default: 50
108 | 
109 |     list_replication_slots:
110 |         kind: postgres-sql
111 |         source: cloudsql-pg-source
112 |         description: "List key details for all PostgreSQL replication slots (e.g., type, database, active status) and calculate the size of the outstanding WAL that the slot prevents from being removed."
113 |         statement: |
114 |             SELECT
115 |             slot_name,
116 |             slot_type,
117 |             plugin,
118 |             database,
119 |             temporary,
120 |             active,
121 |             restart_lsn,
122 |             confirmed_flush_lsn,
123 |             xmin,
124 |             catalog_xmin,
125 |             pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
126 |             FROM pg_replication_slots;
127 | 
128 |     list_invalid_indexes:
129 |         kind: postgres-sql
130 |         source: cloudsql-pg-source
131 |         description: "Lists all invalid PostgreSQL indexes which are taking up disk space but are unusable by the query planner. Typically created by failed CREATE INDEX CONCURRENTLY operations."
132 |         statement: |
133 |             SELECT
134 |                 nspname AS schema_name,
135 |                 indexrelid::regclass AS index_name,
136 |                 indrelid::regclass AS table_name,
137 |                 pg_size_pretty(pg_total_relation_size(indexrelid)) AS index_size,
138 |                 indisready,
139 |                 indisvalid,
140 |                 pg_get_indexdef(pg_class.oid) AS index_def
141 |             FROM pg_index
142 |             JOIN pg_class ON pg_class.oid = pg_index.indexrelid
143 |             JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
144 |             WHERE indisvalid = FALSE;
145 | 
146 |     get_query_plan:
147 |         kind: postgres-sql
148 |         source: cloudsql-pg-source
149 |         description: "Generate a PostgreSQL EXPLAIN plan in JSON format for a single SQL statement—without executing it. This returns the optimizer's estimated plan, costs, and rows (no ANALYZE, no extra options). Use in production safely for plan inspection, regression checks, and query tuning workflows."
150 |         statement: |
151 |             EXPLAIN (FORMAT JSON) {{.query}};
152 |         templateParameters:
153 |             - name: query
154 |               type: string
155 |               description: "The SQL statement for which you want to generate a plan (omit the EXPLAIN keyword)."
156 |               required: true
157 | 
158 | toolsets:
159 |     cloud_sql_postgres_database_tools:
160 |         - execute_sql
161 |         - list_tables
162 |         - list_active_queries
163 |         - list_available_extensions
164 |         - list_installed_extensions
165 |         - list_autovacuum_configurations
166 |         - list_memory_configurations
167 |         - list_top_bloated_tables
168 |         - list_replication_slots
169 |         - list_invalid_indexes
170 |         - get_query_plan
171 | 
```

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/tools/alloydb-postgres.yaml:
--------------------------------------------------------------------------------

```yaml
  1 | # Copyright 2025 Google LLC
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | 
 15 | sources:
 16 |     alloydb-pg-source:
 17 |         kind: "alloydb-postgres"
 18 |         project: ${ALLOYDB_POSTGRES_PROJECT}
 19 |         region: ${ALLOYDB_POSTGRES_REGION}
 20 |         cluster: ${ALLOYDB_POSTGRES_CLUSTER}
 21 |         instance: ${ALLOYDB_POSTGRES_INSTANCE}
 22 |         database: ${ALLOYDB_POSTGRES_DATABASE}
 23 |         user: ${ALLOYDB_POSTGRES_USER:}
 24 |         password: ${ALLOYDB_POSTGRES_PASSWORD:}
 25 |         ipType: ${ALLOYDB_POSTGRES_IP_TYPE:public}
 26 | 
 27 | tools:
 28 |     execute_sql:
 29 |         kind: postgres-execute-sql
 30 |         source: alloydb-pg-source
 31 |         description: Use this tool to execute SQL.
 32 | 
 33 |     list_tables:
 34 |         kind: postgres-list-tables
 35 |         source: alloydb-pg-source
 36 |         description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
 37 | 
 38 |     list_active_queries:
 39 |         kind: postgres-list-active-queries
 40 |         source: alloydb-pg-source
 41 |         description: "List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text."
 42 | 
 43 |     list_available_extensions:
 44 |         kind: postgres-list-available-extensions
 45 |         source: alloydb-pg-source
 46 |         description: "Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description."
 47 | 
 48 |     list_installed_extensions:
 49 |         kind: postgres-list-installed-extensions
 50 |         source: alloydb-pg-source
 51 |         description: "List all installed PostgreSQL extensions with their name, version, schema, owner, and description."
 52 | 
 53 |     list_autovacuum_configurations:
 54 |         kind: postgres-sql
 55 |         source: alloydb-pg-source
 56 |         description: "List PostgreSQL autovacuum-related configurations (name and current setting) from pg_settings."
 57 |         statement: |
 58 |             SELECT name,
 59 |                 setting
 60 |             FROM   pg_settings
 61 |             WHERE  category = 'Autovacuum';
 62 | 
 63 |     list_memory_configurations:
 64 |         kind: postgres-sql
 65 |         source: alloydb-pg-source
 66 |         description: "List PostgreSQL memory-related configurations (name and current setting) from pg_settings."
 67 |         statement: |
 68 |             (
 69 |                 SELECT
 70 |                 name,
 71 |                 pg_size_pretty((setting::bigint * 1024)::bigint) setting
 72 |                 FROM pg_settings
 73 |                 WHERE name IN ('work_mem', 'maintenance_work_mem')
 74 |             )
 75 |             UNION ALL
 76 |             (
 77 |                 SELECT
 78 |                 name,
 79 |                 pg_size_pretty((((setting::bigint) * 8) * 1024)::bigint)
 80 |                 FROM pg_settings
 81 |                 WHERE name IN ('shared_buffers', 'wal_buffers', 'effective_cache_size', 'temp_buffers')
 82 |             )
 83 |             ORDER BY 1 DESC;
 84 | 
 85 |     list_top_bloated_tables:
 86 |         kind: postgres-sql
 87 |         source: alloydb-pg-source
 88 |         description: |
 89 |             List the top tables by dead-tuple count (an approximate bloat signal), returning schema, table, live/dead tuples, percentage, and last vacuum/analyze times.
 90 |         statement: |
 91 |             SELECT
 92 |                 schemaname AS schema_name,
 93 |                 relname AS relation_name,
 94 |                 n_live_tup AS live_tuples,
 95 |                 n_dead_tup AS dead_tuples,
 96 |                 TRUNC((n_dead_tup::NUMERIC / NULLIF(n_live_tup + n_dead_tup, 0)) * 100, 2) AS dead_tuple_percentage,
 97 |                 last_vacuum,
 98 |                 last_autovacuum,
 99 |                 last_analyze,
100 |                 last_autoanalyze
101 |             FROM pg_stat_user_tables
102 |             ORDER BY n_dead_tup DESC
103 |             LIMIT COALESCE($1::int, 50);
104 |         parameters:
105 |             - name: limit
106 |               description: "The maximum number of results to return."
107 |               type: integer
108 |               default: 50
109 | 
110 |     list_replication_slots:
111 |         kind: postgres-sql
112 |         source: alloydb-pg-source
113 |         description: "List key details for all PostgreSQL replication slots (e.g., type, database, active status) and calculate the size of the outstanding WAL that the slot prevents from being removed."
114 |         statement: |
115 |             SELECT
116 |             slot_name,
117 |             slot_type,
118 |             plugin,
119 |             database,
120 |             temporary,
121 |             active,
122 |             restart_lsn,
123 |             confirmed_flush_lsn,
124 |             xmin,
125 |             catalog_xmin,
126 |             pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
127 |             FROM pg_replication_slots;
128 | 
129 |     list_invalid_indexes:
130 |         kind: postgres-sql
131 |         source: alloydb-pg-source
132 |         description: "Lists all invalid PostgreSQL indexes which are taking up disk space but are unusable by the query planner. Typically created by failed CREATE INDEX CONCURRENTLY operations."
133 |         statement: |
134 |             SELECT
135 |                 nspname AS schema_name,
136 |                 indexrelid::regclass AS index_name,
137 |                 indrelid::regclass AS table_name,
138 |                 pg_size_pretty(pg_total_relation_size(indexrelid)) AS index_size,
139 |                 indisready,
140 |                 indisvalid,
141 |                 pg_get_indexdef(pg_class.oid) AS index_def
142 |             FROM pg_index
143 |             JOIN pg_class ON pg_class.oid = pg_index.indexrelid
144 |             JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
145 |             WHERE indisvalid = FALSE;
146 | 
147 |     get_query_plan:
148 |         kind: postgres-sql
149 |         source: alloydb-pg-source
150 |         description: "Generate a PostgreSQL EXPLAIN plan in JSON format for a single SQL statement—without executing it. This returns the optimizer's estimated plan, costs, and rows (no ANALYZE, no extra options). Use in production safely for plan inspection, regression checks, and query tuning workflows."
151 |         statement: |
152 |             EXPLAIN (FORMAT JSON) {{.query}};
153 |         templateParameters:
154 |             - name: query
155 |               type: string
156 |               description: "The SQL statement for which you want to generate a plan (omit the EXPLAIN keyword)."
157 |               required: true
158 | 
159 | toolsets:
160 |     alloydb_postgres_database_tools:
161 |         - execute_sql
162 |         - list_tables
163 |         - list_active_queries
164 |         - list_available_extensions
165 |         - list_installed_extensions
166 |         - list_autovacuum_configurations
167 |         - list_memory_configurations
168 |         - list_top_bloated_tables
169 |         - list_replication_slots
170 |         - list_invalid_indexes
171 |         - get_query_plan
172 | 
```

--------------------------------------------------------------------------------
/internal/tools/firestore/firestoreadddocuments/firestoreadddocuments.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package firestoreadddocuments
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"fmt"
 20 | 
 21 | 	firestoreapi "cloud.google.com/go/firestore"
 22 | 	yaml "github.com/goccy/go-yaml"
 23 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 24 | 	firestoreds "github.com/googleapis/genai-toolbox/internal/sources/firestore"
 25 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 26 | 	"github.com/googleapis/genai-toolbox/internal/tools/firestore/util"
 27 | )
 28 | 
 29 | const kind string = "firestore-add-documents"
 30 | const collectionPathKey string = "collectionPath"
 31 | const documentDataKey string = "documentData"
 32 | const returnDocumentDataKey string = "returnData"
 33 | 
 34 | func init() {
 35 | 	if !tools.Register(kind, newConfig) {
 36 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 37 | 	}
 38 | }
 39 | 
 40 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 41 | 	actual := Config{Name: name}
 42 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 43 | 		return nil, err
 44 | 	}
 45 | 	return actual, nil
 46 | }
 47 | 
 48 | type compatibleSource interface {
 49 | 	FirestoreClient() *firestoreapi.Client
 50 | }
 51 | 
 52 | // validate compatible sources are still compatible
 53 | var _ compatibleSource = &firestoreds.Source{}
 54 | 
 55 | var compatibleSources = [...]string{firestoreds.SourceKind}
 56 | 
 57 | type Config struct {
 58 | 	Name         string   `yaml:"name" validate:"required"`
 59 | 	Kind         string   `yaml:"kind" validate:"required"`
 60 | 	Source       string   `yaml:"source" validate:"required"`
 61 | 	Description  string   `yaml:"description" validate:"required"`
 62 | 	AuthRequired []string `yaml:"authRequired"`
 63 | }
 64 | 
 65 | // validate interface
 66 | var _ tools.ToolConfig = Config{}
 67 | 
 68 | func (cfg Config) ToolConfigKind() string {
 69 | 	return kind
 70 | }
 71 | 
 72 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
 73 | 	// verify source exists
 74 | 	rawS, ok := srcs[cfg.Source]
 75 | 	if !ok {
 76 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
 77 | 	}
 78 | 
 79 | 	// verify the source is compatible
 80 | 	s, ok := rawS.(compatibleSource)
 81 | 	if !ok {
 82 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
 83 | 	}
 84 | 
 85 | 	// Create parameters
 86 | 	collectionPathParameter := tools.NewStringParameter(
 87 | 		collectionPathKey,
 88 | 		"The relative path of the collection to which the document will be added (e.g., 'users' or 'users/userId/posts'). Note: This is a relative path, NOT an absolute path like 'projects/{project_id}/databases/{database_id}/documents/...'",
 89 | 	)
 90 | 
 91 | 	documentDataParameter := tools.NewMapParameter(
 92 | 		documentDataKey,
 93 | 		`The document data in Firestore's native JSON format. Each field must be wrapped with a type indicator:
 94 | - Strings: {"stringValue": "text"}
 95 | - Integers: {"integerValue": "123"} or {"integerValue": 123}
 96 | - Doubles: {"doubleValue": 123.45}
 97 | - Booleans: {"booleanValue": true}
 98 | - Timestamps: {"timestampValue": "2025-01-07T10:00:00Z"}
 99 | - GeoPoints: {"geoPointValue": {"latitude": 34.05, "longitude": -118.24}}
100 | - Arrays: {"arrayValue": {"values": [{"stringValue": "item1"}, {"integerValue": "2"}]}}
101 | - Maps: {"mapValue": {"fields": {"key1": {"stringValue": "value1"}, "key2": {"booleanValue": true}}}}
102 | - Null: {"nullValue": null}
103 | - Bytes: {"bytesValue": "base64EncodedString"}
104 | - References: {"referenceValue": "collection/document"}`,
105 | 		"", // Empty string for generic map that accepts any value type
106 | 	)
107 | 
108 | 	returnDataParameter := tools.NewBooleanParameterWithDefault(
109 | 		returnDocumentDataKey,
110 | 		false,
111 | 		"If set to true, the output will include the data of the created document. Leaving it false helps avoid overloading the agent's context.",
112 | 	)
113 | 
114 | 	parameters := tools.Parameters{
115 | 		collectionPathParameter,
116 | 		documentDataParameter,
117 | 		returnDataParameter,
118 | 	}
119 | 
120 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, parameters)
121 | 
122 | 	// finish tool setup
123 | 	t := Tool{
124 | 		Name:         cfg.Name,
125 | 		Kind:         kind,
126 | 		Parameters:   parameters,
127 | 		AuthRequired: cfg.AuthRequired,
128 | 		Client:       s.FirestoreClient(),
129 | 		manifest:     tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired},
130 | 		mcpManifest:  mcpManifest,
131 | 	}
132 | 	return t, nil
133 | }
134 | 
135 | // validate interface
136 | var _ tools.Tool = Tool{}
137 | 
138 | type Tool struct {
139 | 	Name         string           `yaml:"name"`
140 | 	Kind         string           `yaml:"kind"`
141 | 	AuthRequired []string         `yaml:"authRequired"`
142 | 	Parameters   tools.Parameters `yaml:"parameters"`
143 | 
144 | 	Client      *firestoreapi.Client
145 | 	manifest    tools.Manifest
146 | 	mcpManifest tools.McpManifest
147 | }
148 | 
149 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
150 | 	mapParams := params.AsMap()
151 | 
152 | 	// Get collection path
153 | 	collectionPath, ok := mapParams[collectionPathKey].(string)
154 | 	if !ok || collectionPath == "" {
155 | 		return nil, fmt.Errorf("invalid or missing '%s' parameter", collectionPathKey)
156 | 	}
157 | 
158 | 	// Validate collection path
159 | 	if err := util.ValidateCollectionPath(collectionPath); err != nil {
160 | 		return nil, fmt.Errorf("invalid collection path: %w", err)
161 | 	}
162 | 
163 | 	// Get document data
164 | 	documentDataRaw, ok := mapParams[documentDataKey]
165 | 	if !ok {
166 | 		return nil, fmt.Errorf("invalid or missing '%s' parameter", documentDataKey)
167 | 	}
168 | 
169 | 	// Convert the document data from JSON format to Firestore format
170 | 	// The client is passed to handle referenceValue types
171 | 	documentData, err := util.JSONToFirestoreValue(documentDataRaw, t.Client)
172 | 	if err != nil {
173 | 		return nil, fmt.Errorf("failed to convert document data: %w", err)
174 | 	}
175 | 
176 | 	// Get return document data flag
177 | 	returnData := false
178 | 	if val, ok := mapParams[returnDocumentDataKey].(bool); ok {
179 | 		returnData = val
180 | 	}
181 | 
182 | 	// Get the collection reference
183 | 	collection := t.Client.Collection(collectionPath)
184 | 
185 | 	// Add the document to the collection
186 | 	docRef, writeResult, err := collection.Add(ctx, documentData)
187 | 	if err != nil {
188 | 		return nil, fmt.Errorf("failed to add document: %w", err)
189 | 	}
190 | 
191 | 	// Build the response
192 | 	response := map[string]any{
193 | 		"documentPath": docRef.Path,
194 | 		"createTime":   writeResult.UpdateTime.Format("2006-01-02T15:04:05.999999999Z"),
195 | 	}
196 | 
197 | 	// Add document data if requested
198 | 	if returnData {
199 | 		// Convert the document data back to simple JSON format
200 | 		simplifiedData := util.FirestoreValueToJSON(documentData)
201 | 		response["documentData"] = simplifiedData
202 | 	}
203 | 
204 | 	return response, nil
205 | }
206 | 
207 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
208 | 	return tools.ParseParams(t.Parameters, data, claims)
209 | }
210 | 
211 | func (t Tool) Manifest() tools.Manifest {
212 | 	return t.manifest
213 | }
214 | 
215 | func (t Tool) McpManifest() tools.McpManifest {
216 | 	return t.mcpManifest
217 | }
218 | 
219 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
220 | 	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
221 | }
222 | 
223 | func (t Tool) RequiresClientAuthorization() bool {
224 | 	return false
225 | }
226 | 
```
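
Since the `documentData` parameter expects Firestore's wrapped, type-tagged JSON rather than plain JSON, here is a small sketch of what a valid call payload for this tool might look like (field values are illustrative):

```go
// Illustrative payload for firestore-add-documents: every field in documentData
// is wrapped with a type indicator, as described by the parameter above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	payload := map[string]any{
		"collectionPath": "users",
		"returnData":     true,
		"documentData": map[string]any{
			"name":   map[string]any{"stringValue": "Ada"},
			"age":    map[string]any{"integerValue": 36},
			"active": map[string]any{"booleanValue": true},
			"tags": map[string]any{
				"arrayValue": map[string]any{
					"values": []any{map[string]any{"stringValue": "admin"}},
				},
			},
		},
	}
	b, err := json.MarshalIndent(payload, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```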

--------------------------------------------------------------------------------
/MCP-TOOLBOX-EXTENSION.md:
--------------------------------------------------------------------------------

```markdown
  1 | This document helps you find and install the right Gemini CLI extension to interact with your databases.
  2 | 
  3 | ## How to Install an Extension
  4 | 
  5 | To install any of the extensions listed below, use the `gemini extensions install` command followed by the extension's GitHub repository URL.
  6 | 
  7 | For complete instructions on finding, installing, and managing extensions, please see the [official Gemini CLI extensions documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/extensions/index.md).
  8 | 
  9 | **Example Installation Command:**
 10 | 
 11 | ```bash
 12 | gemini extensions install https://github.com/gemini-cli-extensions/EXTENSION_NAME
 13 | ```
 14 | 
 15 | Make sure the user knows:
 16 | * These commands are not supported from within the CLI
 17 | * These commands will only take effect in active CLI sessions after a restart
 18 | * Extensions require Application Default Credentials in your environment. See [Set up ADC for a local development environment](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment) to learn how you can provide either your user credentials or service account credentials to ADC in a local development environment.
 19 | * Most extensions require you to set environment variables to connect to a database. If there is a link provided for the configuration, fetch the web page and return the configuration.
 20 | 
 21 | -----
 22 | 
 23 | ## Find Your Database Extension
 24 | 
 25 | Find your database or service in the list below to get the correct installation command.
 26 | 
 27 | **Note on Observability:** Extensions with `-observability` in their name are designed to help you understand the health and performance of your database instances, often by analyzing metrics and logs.
 28 | 
 29 | ### Google Cloud Managed Databases
 30 | 
 31 | #### BigQuery
 32 | 
 33 |   * For data analytics and querying:
 34 |     ```bash
 35 |     gemini extensions install https://github.com/gemini-cli-extensions/bigquery-data-analytics
 36 |     ```
 37 | 
 38 |     Configuration: https://github.com/gemini-cli-extensions/bigquery-data-analytics/tree/main?tab=readme-ov-file#configuration
 39 | 
 40 |   * For conversational analytics (using natural language):
 41 |     ```bash
 42 |     gemini extensions install https://github.com/gemini-cli-extensions/bigquery-conversational-analytics
 43 |     ```
 44 |     
 45 |     Configuration: https://github.com/gemini-cli-extensions/bigquery-conversational-analytics/tree/main?tab=readme-ov-file#configuration
 46 | 
 47 | #### Cloud SQL for MySQL
 48 | 
 49 |   * Main Extension:
 50 |     ```bash
 51 |     gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-mysql
 52 |     ```
 53 |     Configuration: https://github.com/gemini-cli-extensions/cloud-sql-mysql/tree/main?tab=readme-ov-file#configuration
 54 | 
 55 |   * Observability:
 56 |     ```bash
 57 |     gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-mysql-observability
 58 |     ```
 59 | 
 60 |     If you are looking for self-hosted MySQL, consider the `mysql` extension.
 61 | 
 62 | #### Cloud SQL for PostgreSQL
 63 | 
 64 |   * Main Extension:
 65 |     ```bash
 66 |     gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql
 67 |     ```
 68 |     Configuration: https://github.com/gemini-cli-extensions/cloud-sql-postgresql/tree/main?tab=readme-ov-file#configuration
 69 | 
 70 |   * Observability:
 71 |     ```bash
 72 |     gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql-observability
 73 |     ```
 74 | 
 75 |     If you are looking for other PostgreSQL options, consider the `postgres` extension for self-hosted instances, or the `alloydb` extension for AlloyDB for PostgreSQL.
 76 | 
 77 | #### Cloud SQL for SQL Server
 78 | 
 79 |   * Main Extension:
 80 |     ```bash
 81 |     gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-sqlserver
 82 |     ```
 83 |     
 84 |     Configuration: https://github.com/gemini-cli-extensions/cloud-sql-sqlserver/tree/main?tab=readme-ov-file#configuration
 85 | 
 86 |   * Observability:
 87 |     ```bash
 88 |     gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-sqlserver-observability
 89 |     ```
 90 | 
 91 |     If you are looking for self-hosted SQL Server, consider the `sql-server` extension.
 92 | 
 93 | #### AlloyDB for PostgreSQL
 94 | 
 95 |   * Main Extension:
 96 |     ```bash
 97 |     gemini extensions install https://github.com/gemini-cli-extensions/alloydb
 98 |     ```
 99 |     
100 |     Configuration: https://github.com/gemini-cli-extensions/alloydb/tree/main?tab=readme-ov-file#configuration
101 | 
102 |   * Observability:
103 |     ```bash
104 |     gemini extensions install https://github.com/gemini-cli-extensions/alloydb-observability
105 |     ```
106 | 
107 |     If you are looking for other PostgreSQL options, consider the `postgres` extension for self-hosted instances, or the `cloud-sql-postgresql` extension for Cloud SQL for PostgreSQL.
108 | 
109 | #### Spanner
110 | 
111 |   * For querying Spanner databases:
112 |     ```bash
113 |     gemini extensions install https://github.com/gemini-cli-extensions/spanner
114 |     ```
115 |     
116 |     Configuration: https://github.com/gemini-cli-extensions/spanner/tree/main?tab=readme-ov-file#configuration
117 | 
118 | #### Firestore
119 | 
120 |   * For querying Firestore in Native Mode:
121 |     ```bash
122 |     gemini extensions install https://github.com/gemini-cli-extensions/firestore-native
123 |     ```
124 | 
125 |     Configuration: https://github.com/gemini-cli-extensions/firestore-native/tree/main?tab=readme-ov-file#configuration
126 | 
127 | ### Other Google Cloud Data Services
128 | 
129 | #### Dataplex
130 | 
131 |   * For interacting with Dataplex data lakes and assets:
132 |     ```bash
133 |     gemini extensions install https://github.com/gemini-cli-extensions/dataplex
134 |     ```
135 | 
136 |     Configuration: https://github.com/gemini-cli-extensions/dataplex/tree/main?tab=readme-ov-file#configuration
137 | 
138 | #### Looker
139 | 
140 |   * For querying Looker instances:
141 |     ```bash
142 |     gemini extensions install https://github.com/gemini-cli-extensions/looker
143 |     ```
144 | 
145 |     Configuration: https://github.com/gemini-cli-extensions/looker/tree/main?tab=readme-ov-file#configuration
146 | 
147 | ### Other Database Engines
148 | 
149 | These extensions are for connecting to database instances not managed by Cloud SQL (e.g., self-hosted on-prem, on a VM, or in another cloud).
150 | 
151 |   * MySQL:
152 |     ```bash
153 |     gemini extensions install https://github.com/gemini-cli-extensions/mysql
154 |     ```
155 |     
156 |     Configuration: https://github.com/gemini-cli-extensions/mysql/tree/main?tab=readme-ov-file#configuration
157 | 
158 |     If you are looking for Google Cloud managed MySQL, consider the `cloud-sql-mysql` extension.
159 | 
160 |   * PostgreSQL:
161 |     ```bash
162 |     gemini extensions install https://github.com/gemini-cli-extensions/postgres
163 |     ```
164 |     
165 |     Configuration: https://github.com/gemini-cli-extensions/postgres/tree/main?tab=readme-ov-file#configuration
166 | 
167 |     If you are looking for Google Cloud managed PostgreSQL, consider the `cloud-sql-postgresql` or `alloydb` extensions.
168 | 
169 |   * SQL Server:
170 |     ```bash
171 |     gemini extensions install https://github.com/gemini-cli-extensions/sql-server
172 |     ```
173 | 
174 |     Configuration: https://github.com/gemini-cli-extensions/sql-server/tree/main?tab=readme-ov-file#configuration
175 | 
176 |     If you are looking for Google Cloud managed SQL Server, consider the `cloud-sql-sqlserver` extension.
177 | 
178 | ### Custom Tools
179 | 
180 | #### MCP Toolbox
181 | 
182 |   * For connecting to MCP Toolbox servers:
183 | 
184 |     This extension can be used with any Google Cloud database to build custom tools. For more information, see the [MCP Toolbox documentation](https://googleapis.github.io/genai-toolbox/getting-started/introduction/).
185 |     ```bash
186 |     gemini extensions install https://github.com/gemini-cli-extensions/mcp-toolbox
187 |     ```
188 | 
189 |     Configuration: https://github.com/gemini-cli-extensions/mcp-toolbox/tree/main?tab=readme-ov-file#configuration
190 | 
```