This is page 28 of 48. Use http://codebase.md/googleapis/genai-toolbox?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .ci
│   ├── continuous.release.cloudbuild.yaml
│   ├── generate_release_table.sh
│   ├── integration.cloudbuild.yaml
│   ├── quickstart_test
│   │   ├── go.integration.cloudbuild.yaml
│   │   ├── js.integration.cloudbuild.yaml
│   │   ├── py.integration.cloudbuild.yaml
│   │   ├── run_go_tests.sh
│   │   ├── run_js_tests.sh
│   │   ├── run_py_tests.sh
│   │   └── setup_hotels_sample.sql
│   ├── test_with_coverage.sh
│   └── versioned.release.cloudbuild.yaml
├── .github
│   ├── auto-label.yaml
│   ├── blunderbuss.yml
│   ├── CODEOWNERS
│   ├── header-checker-lint.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── question.yml
│   ├── label-sync.yml
│   ├── labels.yaml
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── release-please.yml
│   ├── renovate.json5
│   ├── sync-repo-settings.yaml
│   └── workflows
│       ├── cloud_build_failure_reporter.yml
│       ├── deploy_dev_docs.yaml
│       ├── deploy_previous_version_docs.yaml
│       ├── deploy_versioned_docs.yaml
│       ├── docs_deploy.yaml
│       ├── docs_preview_clean.yaml
│       ├── docs_preview_deploy.yaml
│       ├── lint.yaml
│       ├── schedule_reporter.yml
│       ├── sync-labels.yaml
│       └── tests.yaml
├── .gitignore
├── .gitmodules
├── .golangci.yaml
├── .hugo
│   ├── archetypes
│   │   └── default.md
│   ├── assets
│   │   ├── icons
│   │   │   └── logo.svg
│   │   └── scss
│   │       ├── _styles_project.scss
│   │       └── _variables_project.scss
│   ├── go.mod
│   ├── go.sum
│   ├── hugo.toml
│   ├── layouts
│   │   ├── _default
│   │   │   └── home.releases.releases
│   │   ├── index.llms-full.txt
│   │   ├── index.llms.txt
│   │   ├── partials
│   │   │   ├── hooks
│   │   │   │   └── head-end.html
│   │   │   ├── navbar-version-selector.html
│   │   │   ├── page-meta-links.html
│   │   │   └── td
│   │   │       └── render-heading.html
│   │   ├── robot.txt
│   │   └── shortcodes
│   │       ├── include.html
│   │       ├── ipynb.html
│   │       └── regionInclude.html
│   ├── package-lock.json
│   ├── package.json
│   └── static
│       ├── favicons
│       │   ├── android-chrome-192x192.png
│       │   ├── android-chrome-512x512.png
│       │   ├── apple-touch-icon.png
│       │   ├── favicon-16x16.png
│       │   ├── favicon-32x32.png
│       │   └── favicon.ico
│       └── js
│           └── w3.js
├── CHANGELOG.md
├── cmd
│   ├── options_test.go
│   ├── options.go
│   ├── root_test.go
│   ├── root.go
│   └── version.txt
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DEVELOPER.md
├── Dockerfile
├── docs
│   └── en
│       ├── _index.md
│       ├── about
│       │   ├── _index.md
│       │   └── faq.md
│       ├── concepts
│       │   ├── _index.md
│       │   └── telemetry
│       │       ├── index.md
│       │       ├── telemetry_flow.png
│       │       └── telemetry_traces.png
│       ├── getting-started
│       │   ├── _index.md
│       │   ├── colab_quickstart.ipynb
│       │   ├── configure.md
│       │   ├── introduction
│       │   │   ├── _index.md
│       │   │   └── architecture.png
│       │   ├── local_quickstart_go.md
│       │   ├── local_quickstart_js.md
│       │   ├── local_quickstart.md
│       │   ├── mcp_quickstart
│       │   │   ├── _index.md
│       │   │   ├── inspector_tools.png
│       │   │   └── inspector.png
│       │   └── quickstart
│       │       ├── go
│       │       │   ├── genAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── genkit
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── langchain
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── openAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   └── quickstart_test.go
│       │       ├── golden.txt
│       │       ├── js
│       │       │   ├── genAI
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── genkit
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── langchain
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── llamaindex
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   └── quickstart.test.js
│       │       ├── python
│       │       │   ├── __init__.py
│       │       │   ├── adk
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── core
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── langchain
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── llamaindex
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   └── quickstart_test.py
│       │       └── shared
│       │           ├── cloud_setup.md
│       │           ├── configure_toolbox.md
│       │           └── database_setup.md
│       ├── how-to
│       │   ├── _index.md
│       │   ├── connect_via_geminicli.md
│       │   ├── connect_via_mcp.md
│       │   ├── connect-ide
│       │   │   ├── _index.md
│       │   │   ├── alloydb_pg_admin_mcp.md
│       │   │   ├── alloydb_pg_mcp.md
│       │   │   ├── bigquery_mcp.md
│       │   │   ├── cloud_sql_mssql_admin_mcp.md
│       │   │   ├── cloud_sql_mssql_mcp.md
│       │   │   ├── cloud_sql_mysql_admin_mcp.md
│       │   │   ├── cloud_sql_mysql_mcp.md
│       │   │   ├── cloud_sql_pg_admin_mcp.md
│       │   │   ├── cloud_sql_pg_mcp.md
│       │   │   ├── firestore_mcp.md
│       │   │   ├── looker_mcp.md
│       │   │   ├── mssql_mcp.md
│       │   │   ├── mysql_mcp.md
│       │   │   ├── neo4j_mcp.md
│       │   │   ├── postgres_mcp.md
│       │   │   ├── spanner_mcp.md
│       │   │   └── sqlite_mcp.md
│       │   ├── deploy_docker.md
│       │   ├── deploy_gke.md
│       │   ├── deploy_toolbox.md
│       │   ├── export_telemetry.md
│       │   └── toolbox-ui
│       │       ├── edit-headers.gif
│       │       ├── edit-headers.png
│       │       ├── index.md
│       │       ├── optional-param-checked.png
│       │       ├── optional-param-unchecked.png
│       │       ├── run-tool.gif
│       │       ├── tools.png
│       │       └── toolsets.png
│       ├── reference
│       │   ├── _index.md
│       │   ├── cli.md
│       │   └── prebuilt-tools.md
│       ├── resources
│       │   ├── _index.md
│       │   ├── authServices
│       │   │   ├── _index.md
│       │   │   └── google.md
│       │   ├── sources
│       │   │   ├── _index.md
│       │   │   ├── alloydb-admin.md
│       │   │   ├── alloydb-pg.md
│       │   │   ├── bigquery.md
│       │   │   ├── bigtable.md
│       │   │   ├── cassandra.md
│       │   │   ├── clickhouse.md
│       │   │   ├── cloud-monitoring.md
│       │   │   ├── cloud-sql-admin.md
│       │   │   ├── cloud-sql-mssql.md
│       │   │   ├── cloud-sql-mysql.md
│       │   │   ├── cloud-sql-pg.md
│       │   │   ├── couchbase.md
│       │   │   ├── dataplex.md
│       │   │   ├── dgraph.md
│       │   │   ├── firebird.md
│       │   │   ├── firestore.md
│       │   │   ├── http.md
│       │   │   ├── looker.md
│       │   │   ├── mongodb.md
│       │   │   ├── mssql.md
│       │   │   ├── mysql.md
│       │   │   ├── neo4j.md
│       │   │   ├── oceanbase.md
│       │   │   ├── oracle.md
│       │   │   ├── postgres.md
│       │   │   ├── redis.md
│       │   │   ├── serverless-spark.md
│       │   │   ├── spanner.md
│       │   │   ├── sqlite.md
│       │   │   ├── tidb.md
│       │   │   ├── trino.md
│       │   │   ├── valkey.md
│       │   │   └── yugabytedb.md
│       │   └── tools
│       │       ├── _index.md
│       │       ├── alloydb
│       │       │   ├── _index.md
│       │       │   ├── alloydb-create-cluster.md
│       │       │   ├── alloydb-create-instance.md
│       │       │   ├── alloydb-create-user.md
│       │       │   ├── alloydb-get-cluster.md
│       │       │   ├── alloydb-get-instance.md
│       │       │   ├── alloydb-get-user.md
│       │       │   ├── alloydb-list-clusters.md
│       │       │   ├── alloydb-list-instances.md
│       │       │   ├── alloydb-list-users.md
│       │       │   └── alloydb-wait-for-operation.md
│       │       ├── alloydbainl
│       │       │   ├── _index.md
│       │       │   └── alloydb-ai-nl.md
│       │       ├── bigquery
│       │       │   ├── _index.md
│       │       │   ├── bigquery-analyze-contribution.md
│       │       │   ├── bigquery-conversational-analytics.md
│       │       │   ├── bigquery-execute-sql.md
│       │       │   ├── bigquery-forecast.md
│       │       │   ├── bigquery-get-dataset-info.md
│       │       │   ├── bigquery-get-table-info.md
│       │       │   ├── bigquery-list-dataset-ids.md
│       │       │   ├── bigquery-list-table-ids.md
│       │       │   ├── bigquery-search-catalog.md
│       │       │   └── bigquery-sql.md
│       │       ├── bigtable
│       │       │   ├── _index.md
│       │       │   └── bigtable-sql.md
│       │       ├── cassandra
│       │       │   ├── _index.md
│       │       │   └── cassandra-cql.md
│       │       ├── clickhouse
│       │       │   ├── _index.md
│       │       │   ├── clickhouse-execute-sql.md
│       │       │   ├── clickhouse-list-databases.md
│       │       │   ├── clickhouse-list-tables.md
│       │       │   └── clickhouse-sql.md
│       │       ├── cloudmonitoring
│       │       │   ├── _index.md
│       │       │   └── cloud-monitoring-query-prometheus.md
│       │       ├── cloudsql
│       │       │   ├── _index.md
│       │       │   ├── cloudsqlcreatedatabase.md
│       │       │   ├── cloudsqlcreateusers.md
│       │       │   ├── cloudsqlgetinstances.md
│       │       │   ├── cloudsqllistdatabases.md
│       │       │   ├── cloudsqllistinstances.md
│       │       │   ├── cloudsqlmssqlcreateinstance.md
│       │       │   ├── cloudsqlmysqlcreateinstance.md
│       │       │   ├── cloudsqlpgcreateinstances.md
│       │       │   └── cloudsqlwaitforoperation.md
│       │       ├── couchbase
│       │       │   ├── _index.md
│       │       │   └── couchbase-sql.md
│       │       ├── dataform
│       │       │   ├── _index.md
│       │       │   └── dataform-compile-local.md
│       │       ├── dataplex
│       │       │   ├── _index.md
│       │       │   ├── dataplex-lookup-entry.md
│       │       │   ├── dataplex-search-aspect-types.md
│       │       │   └── dataplex-search-entries.md
│       │       ├── dgraph
│       │       │   ├── _index.md
│       │       │   └── dgraph-dql.md
│       │       ├── firebird
│       │       │   ├── _index.md
│       │       │   ├── firebird-execute-sql.md
│       │       │   └── firebird-sql.md
│       │       ├── firestore
│       │       │   ├── _index.md
│       │       │   ├── firestore-add-documents.md
│       │       │   ├── firestore-delete-documents.md
│       │       │   ├── firestore-get-documents.md
│       │       │   ├── firestore-get-rules.md
│       │       │   ├── firestore-list-collections.md
│       │       │   ├── firestore-query-collection.md
│       │       │   ├── firestore-query.md
│       │       │   ├── firestore-update-document.md
│       │       │   └── firestore-validate-rules.md
│       │       ├── http
│       │       │   ├── _index.md
│       │       │   └── http.md
│       │       ├── looker
│       │       │   ├── _index.md
│       │       │   ├── looker-add-dashboard-element.md
│       │       │   ├── looker-conversational-analytics.md
│       │       │   ├── looker-create-project-file.md
│       │       │   ├── looker-delete-project-file.md
│       │       │   ├── looker-dev-mode.md
│       │       │   ├── looker-get-dashboards.md
│       │       │   ├── looker-get-dimensions.md
│       │       │   ├── looker-get-explores.md
│       │       │   ├── looker-get-filters.md
│       │       │   ├── looker-get-looks.md
│       │       │   ├── looker-get-measures.md
│       │       │   ├── looker-get-models.md
│       │       │   ├── looker-get-parameters.md
│       │       │   ├── looker-get-project-file.md
│       │       │   ├── looker-get-project-files.md
│       │       │   ├── looker-get-projects.md
│       │       │   ├── looker-health-analyze.md
│       │       │   ├── looker-health-pulse.md
│       │       │   ├── looker-health-vacuum.md
│       │       │   ├── looker-make-dashboard.md
│       │       │   ├── looker-make-look.md
│       │       │   ├── looker-query-sql.md
│       │       │   ├── looker-query-url.md
│       │       │   ├── looker-query.md
│       │       │   ├── looker-run-look.md
│       │       │   └── looker-update-project-file.md
│       │       ├── mongodb
│       │       │   ├── _index.md
│       │       │   ├── mongodb-aggregate.md
│       │       │   ├── mongodb-delete-many.md
│       │       │   ├── mongodb-delete-one.md
│       │       │   ├── mongodb-find-one.md
│       │       │   ├── mongodb-find.md
│       │       │   ├── mongodb-insert-many.md
│       │       │   ├── mongodb-insert-one.md
│       │       │   ├── mongodb-update-many.md
│       │       │   └── mongodb-update-one.md
│       │       ├── mssql
│       │       │   ├── _index.md
│       │       │   ├── mssql-execute-sql.md
│       │       │   ├── mssql-list-tables.md
│       │       │   └── mssql-sql.md
│       │       ├── mysql
│       │       │   ├── _index.md
│       │       │   ├── mysql-execute-sql.md
│       │       │   ├── mysql-list-active-queries.md
│       │       │   ├── mysql-list-table-fragmentation.md
│       │       │   ├── mysql-list-tables-missing-unique-indexes.md
│       │       │   ├── mysql-list-tables.md
│       │       │   └── mysql-sql.md
│       │       ├── neo4j
│       │       │   ├── _index.md
│       │       │   ├── neo4j-cypher.md
│       │       │   ├── neo4j-execute-cypher.md
│       │       │   └── neo4j-schema.md
│       │       ├── oceanbase
│       │       │   ├── _index.md
│       │       │   ├── oceanbase-execute-sql.md
│       │       │   └── oceanbase-sql.md
│       │       ├── oracle
│       │       │   ├── _index.md
│       │       │   ├── oracle-execute-sql.md
│       │       │   └── oracle-sql.md
│       │       ├── postgres
│       │       │   ├── _index.md
│       │       │   ├── postgres-execute-sql.md
│       │       │   ├── postgres-list-active-queries.md
│       │       │   ├── postgres-list-available-extensions.md
│       │       │   ├── postgres-list-installed-extensions.md
│       │       │   ├── postgres-list-tables.md
│       │       │   └── postgres-sql.md
│       │       ├── redis
│       │       │   ├── _index.md
│       │       │   └── redis.md
│       │       ├── serverless-spark
│       │       │   ├── _index.md
│       │       │   └── serverless-spark-list-batches.md
│       │       ├── spanner
│       │       │   ├── _index.md
│       │       │   ├── spanner-execute-sql.md
│       │       │   ├── spanner-list-tables.md
│       │       │   └── spanner-sql.md
│       │       ├── sqlite
│       │       │   ├── _index.md
│       │       │   ├── sqlite-execute-sql.md
│       │       │   └── sqlite-sql.md
│       │       ├── tidb
│       │       │   ├── _index.md
│       │       │   ├── tidb-execute-sql.md
│       │       │   └── tidb-sql.md
│       │       ├── trino
│       │       │   ├── _index.md
│       │       │   ├── trino-execute-sql.md
│       │       │   └── trino-sql.md
│       │       ├── utility
│       │       │   ├── _index.md
│       │       │   └── wait.md
│       │       ├── valkey
│       │       │   ├── _index.md
│       │       │   └── valkey.md
│       │       └── yuagbytedb
│       │           ├── _index.md
│       │           └── yugabytedb-sql.md
│       ├── samples
│       │   ├── _index.md
│       │   ├── alloydb
│       │   │   ├── _index.md
│       │   │   ├── ai-nl
│       │   │   │   ├── alloydb_ai_nl.ipynb
│       │   │   │   └── index.md
│       │   │   └── mcp_quickstart.md
│       │   ├── bigquery
│       │   │   ├── _index.md
│       │   │   ├── colab_quickstart_bigquery.ipynb
│       │   │   ├── local_quickstart.md
│       │   │   └── mcp_quickstart
│       │   │       ├── _index.md
│       │   │       ├── inspector_tools.png
│       │   │       └── inspector.png
│       │   └── looker
│       │       ├── _index.md
│       │       ├── looker_gemini_oauth
│       │       │   ├── _index.md
│       │       │   ├── authenticated.png
│       │       │   ├── authorize.png
│       │       │   └── registration.png
│       │       ├── looker_gemini.md
│       │       └── looker_mcp_inspector
│       │           ├── _index.md
│       │           ├── inspector_tools.png
│       │           └── inspector.png
│       └── sdks
│           ├── _index.md
│           ├── go-sdk.md
│           ├── js-sdk.md
│           └── python-sdk.md
├── gemini-extension.json
├── go.mod
├── go.sum
├── internal
│   ├── auth
│   │   ├── auth.go
│   │   └── google
│   │       └── google.go
│   ├── log
│   │   ├── handler.go
│   │   ├── log_test.go
│   │   ├── log.go
│   │   └── logger.go
│   ├── prebuiltconfigs
│   │   ├── prebuiltconfigs_test.go
│   │   ├── prebuiltconfigs.go
│   │   └── tools
│   │       ├── alloydb-postgres-admin.yaml
│   │       ├── alloydb-postgres-observability.yaml
│   │       ├── alloydb-postgres.yaml
│   │       ├── bigquery.yaml
│   │       ├── clickhouse.yaml
│   │       ├── cloud-sql-mssql-admin.yaml
│   │       ├── cloud-sql-mssql-observability.yaml
│   │       ├── cloud-sql-mssql.yaml
│   │       ├── cloud-sql-mysql-admin.yaml
│   │       ├── cloud-sql-mysql-observability.yaml
│   │       ├── cloud-sql-mysql.yaml
│   │       ├── cloud-sql-postgres-admin.yaml
│   │       ├── cloud-sql-postgres-observability.yaml
│   │       ├── cloud-sql-postgres.yaml
│   │       ├── dataplex.yaml
│   │       ├── firestore.yaml
│   │       ├── looker-conversational-analytics.yaml
│   │       ├── looker.yaml
│   │       ├── mssql.yaml
│   │       ├── mysql.yaml
│   │       ├── neo4j.yaml
│   │       ├── oceanbase.yaml
│   │       ├── postgres.yaml
│   │       ├── serverless-spark.yaml
│   │       ├── spanner-postgres.yaml
│   │       ├── spanner.yaml
│   │       └── sqlite.yaml
│   ├── server
│   │   ├── api_test.go
│   │   ├── api.go
│   │   ├── common_test.go
│   │   ├── config.go
│   │   ├── mcp
│   │   │   ├── jsonrpc
│   │   │   │   ├── jsonrpc_test.go
│   │   │   │   └── jsonrpc.go
│   │   │   ├── mcp.go
│   │   │   ├── util
│   │   │   │   └── lifecycle.go
│   │   │   ├── v20241105
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   ├── v20250326
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   └── v20250618
│   │   │       ├── method.go
│   │   │       └── types.go
│   │   ├── mcp_test.go
│   │   ├── mcp.go
│   │   ├── server_test.go
│   │   ├── server.go
│   │   ├── static
│   │   │   ├── assets
│   │   │   │   └── mcptoolboxlogo.png
│   │   │   ├── css
│   │   │   │   └── style.css
│   │   │   ├── index.html
│   │   │   ├── js
│   │   │   │   ├── auth.js
│   │   │   │   ├── loadTools.js
│   │   │   │   ├── mainContent.js
│   │   │   │   ├── navbar.js
│   │   │   │   ├── runTool.js
│   │   │   │   ├── toolDisplay.js
│   │   │   │   ├── tools.js
│   │   │   │   └── toolsets.js
│   │   │   ├── tools.html
│   │   │   └── toolsets.html
│   │   ├── web_test.go
│   │   └── web.go
│   ├── sources
│   │   ├── alloydbadmin
│   │   │   ├── alloydbadmin_test.go
│   │   │   └── alloydbadmin.go
│   │   ├── alloydbpg
│   │   │   ├── alloydb_pg_test.go
│   │   │   └── alloydb_pg.go
│   │   ├── bigquery
│   │   │   ├── bigquery_test.go
│   │   │   └── bigquery.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   ├── cassandra_test.go
│   │   │   └── cassandra.go
│   │   ├── clickhouse
│   │   │   ├── clickhouse_test.go
│   │   │   └── clickhouse.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloud_monitoring_test.go
│   │   │   └── cloud_monitoring.go
│   │   ├── cloudsqladmin
│   │   │   ├── cloud_sql_admin_test.go
│   │   │   └── cloud_sql_admin.go
│   │   ├── cloudsqlmssql
│   │   │   ├── cloud_sql_mssql_test.go
│   │   │   └── cloud_sql_mssql.go
│   │   ├── cloudsqlmysql
│   │   │   ├── cloud_sql_mysql_test.go
│   │   │   └── cloud_sql_mysql.go
│   │   ├── cloudsqlpg
│   │   │   ├── cloud_sql_pg_test.go
│   │   │   └── cloud_sql_pg.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataplex
│   │   │   ├── dataplex_test.go
│   │   │   └── dataplex.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── dialect.go
│   │   ├── firebird
│   │   │   ├── firebird_test.go
│   │   │   └── firebird.go
│   │   ├── firestore
│   │   │   ├── firestore_test.go
│   │   │   └── firestore.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── ip_type.go
│   │   ├── looker
│   │   │   ├── looker_test.go
│   │   │   └── looker.go
│   │   ├── mongodb
│   │   │   ├── mongodb_test.go
│   │   │   └── mongodb.go
│   │   ├── mssql
│   │   │   ├── mssql_test.go
│   │   │   └── mssql.go
│   │   ├── mysql
│   │   │   ├── mysql_test.go
│   │   │   └── mysql.go
│   │   ├── neo4j
│   │   │   ├── neo4j_test.go
│   │   │   └── neo4j.go
│   │   ├── oceanbase
│   │   │   ├── oceanbase_test.go
│   │   │   └── oceanbase.go
│   │   ├── oracle
│   │   │   └── oracle.go
│   │   ├── postgres
│   │   │   ├── postgres_test.go
│   │   │   └── postgres.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── serverlessspark
│   │   │   ├── serverlessspark_test.go
│   │   │   └── serverlessspark.go
│   │   ├── sources.go
│   │   ├── spanner
│   │   │   ├── spanner_test.go
│   │   │   └── spanner.go
│   │   ├── sqlite
│   │   │   ├── sqlite_test.go
│   │   │   └── sqlite.go
│   │   ├── tidb
│   │   │   ├── tidb_test.go
│   │   │   └── tidb.go
│   │   ├── trino
│   │   │   ├── trino_test.go
│   │   │   └── trino.go
│   │   ├── util.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedb
│   │       ├── yugabytedb_test.go
│   │       └── yugabytedb.go
│   ├── telemetry
│   │   ├── instrumentation.go
│   │   └── telemetry.go
│   ├── testutils
│   │   └── testutils.go
│   ├── tools
│   │   ├── alloydb
│   │   │   ├── alloydbcreatecluster
│   │   │   │   ├── alloydbcreatecluster_test.go
│   │   │   │   └── alloydbcreatecluster.go
│   │   │   ├── alloydbcreateinstance
│   │   │   │   ├── alloydbcreateinstance_test.go
│   │   │   │   └── alloydbcreateinstance.go
│   │   │   ├── alloydbcreateuser
│   │   │   │   ├── alloydbcreateuser_test.go
│   │   │   │   └── alloydbcreateuser.go
│   │   │   ├── alloydbgetcluster
│   │   │   │   ├── alloydbgetcluster_test.go
│   │   │   │   └── alloydbgetcluster.go
│   │   │   ├── alloydbgetinstance
│   │   │   │   ├── alloydbgetinstance_test.go
│   │   │   │   └── alloydbgetinstance.go
│   │   │   ├── alloydbgetuser
│   │   │   │   ├── alloydbgetuser_test.go
│   │   │   │   └── alloydbgetuser.go
│   │   │   ├── alloydblistclusters
│   │   │   │   ├── alloydblistclusters_test.go
│   │   │   │   └── alloydblistclusters.go
│   │   │   ├── alloydblistinstances
│   │   │   │   ├── alloydblistinstances_test.go
│   │   │   │   └── alloydblistinstances.go
│   │   │   ├── alloydblistusers
│   │   │   │   ├── alloydblistusers_test.go
│   │   │   │   └── alloydblistusers.go
│   │   │   └── alloydbwaitforoperation
│   │   │       ├── alloydbwaitforoperation_test.go
│   │   │       └── alloydbwaitforoperation.go
│   │   ├── alloydbainl
│   │   │   ├── alloydbainl_test.go
│   │   │   └── alloydbainl.go
│   │   ├── bigquery
│   │   │   ├── bigqueryanalyzecontribution
│   │   │   │   ├── bigqueryanalyzecontribution_test.go
│   │   │   │   └── bigqueryanalyzecontribution.go
│   │   │   ├── bigquerycommon
│   │   │   │   ├── table_name_parser_test.go
│   │   │   │   ├── table_name_parser.go
│   │   │   │   └── util.go
│   │   │   ├── bigqueryconversationalanalytics
│   │   │   │   ├── bigqueryconversationalanalytics_test.go
│   │   │   │   └── bigqueryconversationalanalytics.go
│   │   │   ├── bigqueryexecutesql
│   │   │   │   ├── bigqueryexecutesql_test.go
│   │   │   │   └── bigqueryexecutesql.go
│   │   │   ├── bigqueryforecast
│   │   │   │   ├── bigqueryforecast_test.go
│   │   │   │   └── bigqueryforecast.go
│   │   │   ├── bigquerygetdatasetinfo
│   │   │   │   ├── bigquerygetdatasetinfo_test.go
│   │   │   │   └── bigquerygetdatasetinfo.go
│   │   │   ├── bigquerygettableinfo
│   │   │   │   ├── bigquerygettableinfo_test.go
│   │   │   │   └── bigquerygettableinfo.go
│   │   │   ├── bigquerylistdatasetids
│   │   │   │   ├── bigquerylistdatasetids_test.go
│   │   │   │   └── bigquerylistdatasetids.go
│   │   │   ├── bigquerylisttableids
│   │   │   │   ├── bigquerylisttableids_test.go
│   │   │   │   └── bigquerylisttableids.go
│   │   │   ├── bigquerysearchcatalog
│   │   │   │   ├── bigquerysearchcatalog_test.go
│   │   │   │   └── bigquerysearchcatalog.go
│   │   │   └── bigquerysql
│   │   │       ├── bigquerysql_test.go
│   │   │       └── bigquerysql.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   └── cassandracql
│   │   │       ├── cassandracql_test.go
│   │   │       └── cassandracql.go
│   │   ├── clickhouse
│   │   │   ├── clickhouseexecutesql
│   │   │   │   ├── clickhouseexecutesql_test.go
│   │   │   │   └── clickhouseexecutesql.go
│   │   │   ├── clickhouselistdatabases
│   │   │   │   ├── clickhouselistdatabases_test.go
│   │   │   │   └── clickhouselistdatabases.go
│   │   │   ├── clickhouselisttables
│   │   │   │   ├── clickhouselisttables_test.go
│   │   │   │   └── clickhouselisttables.go
│   │   │   └── clickhousesql
│   │   │       ├── clickhousesql_test.go
│   │   │       └── clickhousesql.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloudmonitoring_test.go
│   │   │   └── cloudmonitoring.go
│   │   ├── cloudsql
│   │   │   ├── cloudsqlcreatedatabase
│   │   │   │   ├── cloudsqlcreatedatabase_test.go
│   │   │   │   └── cloudsqlcreatedatabase.go
│   │   │   ├── cloudsqlcreateusers
│   │   │   │   ├── cloudsqlcreateusers_test.go
│   │   │   │   └── cloudsqlcreateusers.go
│   │   │   ├── cloudsqlgetinstances
│   │   │   │   ├── cloudsqlgetinstances_test.go
│   │   │   │   └── cloudsqlgetinstances.go
│   │   │   ├── cloudsqllistdatabases
│   │   │   │   ├── cloudsqllistdatabases_test.go
│   │   │   │   └── cloudsqllistdatabases.go
│   │   │   ├── cloudsqllistinstances
│   │   │   │   ├── cloudsqllistinstances_test.go
│   │   │   │   └── cloudsqllistinstances.go
│   │   │   └── cloudsqlwaitforoperation
│   │   │       ├── cloudsqlwaitforoperation_test.go
│   │   │       └── cloudsqlwaitforoperation.go
│   │   ├── cloudsqlmssql
│   │   │   └── cloudsqlmssqlcreateinstance
│   │   │       ├── cloudsqlmssqlcreateinstance_test.go
│   │   │       └── cloudsqlmssqlcreateinstance.go
│   │   ├── cloudsqlmysql
│   │   │   └── cloudsqlmysqlcreateinstance
│   │   │       ├── cloudsqlmysqlcreateinstance_test.go
│   │   │       └── cloudsqlmysqlcreateinstance.go
│   │   ├── cloudsqlpg
│   │   │   └── cloudsqlpgcreateinstances
│   │   │       ├── cloudsqlpgcreateinstances_test.go
│   │   │       └── cloudsqlpgcreateinstances.go
│   │   ├── common_test.go
│   │   ├── common.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataform
│   │   │   └── dataformcompilelocal
│   │   │       ├── dataformcompilelocal_test.go
│   │   │       └── dataformcompilelocal.go
│   │   ├── dataplex
│   │   │   ├── dataplexlookupentry
│   │   │   │   ├── dataplexlookupentry_test.go
│   │   │   │   └── dataplexlookupentry.go
│   │   │   ├── dataplexsearchaspecttypes
│   │   │   │   ├── dataplexsearchaspecttypes_test.go
│   │   │   │   └── dataplexsearchaspecttypes.go
│   │   │   └── dataplexsearchentries
│   │   │       ├── dataplexsearchentries_test.go
│   │   │       └── dataplexsearchentries.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── firebird
│   │   │   ├── firebirdexecutesql
│   │   │   │   ├── firebirdexecutesql_test.go
│   │   │   │   └── firebirdexecutesql.go
│   │   │   └── firebirdsql
│   │   │       ├── firebirdsql_test.go
│   │   │       └── firebirdsql.go
│   │   ├── firestore
│   │   │   ├── firestoreadddocuments
│   │   │   │   ├── firestoreadddocuments_test.go
│   │   │   │   └── firestoreadddocuments.go
│   │   │   ├── firestoredeletedocuments
│   │   │   │   ├── firestoredeletedocuments_test.go
│   │   │   │   └── firestoredeletedocuments.go
│   │   │   ├── firestoregetdocuments
│   │   │   │   ├── firestoregetdocuments_test.go
│   │   │   │   └── firestoregetdocuments.go
│   │   │   ├── firestoregetrules
│   │   │   │   ├── firestoregetrules_test.go
│   │   │   │   └── firestoregetrules.go
│   │   │   ├── firestorelistcollections
│   │   │   │   ├── firestorelistcollections_test.go
│   │   │   │   └── firestorelistcollections.go
│   │   │   ├── firestorequery
│   │   │   │   ├── firestorequery_test.go
│   │   │   │   └── firestorequery.go
│   │   │   ├── firestorequerycollection
│   │   │   │   ├── firestorequerycollection_test.go
│   │   │   │   └── firestorequerycollection.go
│   │   │   ├── firestoreupdatedocument
│   │   │   │   ├── firestoreupdatedocument_test.go
│   │   │   │   └── firestoreupdatedocument.go
│   │   │   ├── firestorevalidaterules
│   │   │   │   ├── firestorevalidaterules_test.go
│   │   │   │   └── firestorevalidaterules.go
│   │   │   └── util
│   │   │       ├── converter_test.go
│   │   │       ├── converter.go
│   │   │       ├── validator_test.go
│   │   │       └── validator.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── http_method.go
│   │   ├── looker
│   │   │   ├── lookeradddashboardelement
│   │   │   │   ├── lookeradddashboardelement_test.go
│   │   │   │   └── lookeradddashboardelement.go
│   │   │   ├── lookercommon
│   │   │   │   ├── lookercommon_test.go
│   │   │   │   └── lookercommon.go
│   │   │   ├── lookerconversationalanalytics
│   │   │   │   ├── lookerconversationalanalytics_test.go
│   │   │   │   └── lookerconversationalanalytics.go
│   │   │   ├── lookercreateprojectfile
│   │   │   │   ├── lookercreateprojectfile_test.go
│   │   │   │   └── lookercreateprojectfile.go
│   │   │   ├── lookerdeleteprojectfile
│   │   │   │   ├── lookerdeleteprojectfile_test.go
│   │   │   │   └── lookerdeleteprojectfile.go
│   │   │   ├── lookerdevmode
│   │   │   │   ├── lookerdevmode_test.go
│   │   │   │   └── lookerdevmode.go
│   │   │   ├── lookergetdashboards
│   │   │   │   ├── lookergetdashboards_test.go
│   │   │   │   └── lookergetdashboards.go
│   │   │   ├── lookergetdimensions
│   │   │   │   ├── lookergetdimensions_test.go
│   │   │   │   └── lookergetdimensions.go
│   │   │   ├── lookergetexplores
│   │   │   │   ├── lookergetexplores_test.go
│   │   │   │   └── lookergetexplores.go
│   │   │   ├── lookergetfilters
│   │   │   │   ├── lookergetfilters_test.go
│   │   │   │   └── lookergetfilters.go
│   │   │   ├── lookergetlooks
│   │   │   │   ├── lookergetlooks_test.go
│   │   │   │   └── lookergetlooks.go
│   │   │   ├── lookergetmeasures
│   │   │   │   ├── lookergetmeasures_test.go
│   │   │   │   └── lookergetmeasures.go
│   │   │   ├── lookergetmodels
│   │   │   │   ├── lookergetmodels_test.go
│   │   │   │   └── lookergetmodels.go
│   │   │   ├── lookergetparameters
│   │   │   │   ├── lookergetparameters_test.go
│   │   │   │   └── lookergetparameters.go
│   │   │   ├── lookergetprojectfile
│   │   │   │   ├── lookergetprojectfile_test.go
│   │   │   │   └── lookergetprojectfile.go
│   │   │   ├── lookergetprojectfiles
│   │   │   │   ├── lookergetprojectfiles_test.go
│   │   │   │   └── lookergetprojectfiles.go
│   │   │   ├── lookergetprojects
│   │   │   │   ├── lookergetprojects_test.go
│   │   │   │   └── lookergetprojects.go
│   │   │   ├── lookerhealthanalyze
│   │   │   │   ├── lookerhealthanalyze_test.go
│   │   │   │   └── lookerhealthanalyze.go
│   │   │   ├── lookerhealthpulse
│   │   │   │   ├── lookerhealthpulse_test.go
│   │   │   │   └── lookerhealthpulse.go
│   │   │   ├── lookerhealthvacuum
│   │   │   │   ├── lookerhealthvacuum_test.go
│   │   │   │   └── lookerhealthvacuum.go
│   │   │   ├── lookermakedashboard
│   │   │   │   ├── lookermakedashboard_test.go
│   │   │   │   └── lookermakedashboard.go
│   │   │   ├── lookermakelook
│   │   │   │   ├── lookermakelook_test.go
│   │   │   │   └── lookermakelook.go
│   │   │   ├── lookerquery
│   │   │   │   ├── lookerquery_test.go
│   │   │   │   └── lookerquery.go
│   │   │   ├── lookerquerysql
│   │   │   │   ├── lookerquerysql_test.go
│   │   │   │   └── lookerquerysql.go
│   │   │   ├── lookerqueryurl
│   │   │   │   ├── lookerqueryurl_test.go
│   │   │   │   └── lookerqueryurl.go
│   │   │   ├── lookerrunlook
│   │   │   │   ├── lookerrunlook_test.go
│   │   │   │   └── lookerrunlook.go
│   │   │   └── lookerupdateprojectfile
│   │   │       ├── lookerupdateprojectfile_test.go
│   │   │       └── lookerupdateprojectfile.go
│   │   ├── mongodb
│   │   │   ├── mongodbaggregate
│   │   │   │   ├── mongodbaggregate_test.go
│   │   │   │   └── mongodbaggregate.go
│   │   │   ├── mongodbdeletemany
│   │   │   │   ├── mongodbdeletemany_test.go
│   │   │   │   └── mongodbdeletemany.go
│   │   │   ├── mongodbdeleteone
│   │   │   │   ├── mongodbdeleteone_test.go
│   │   │   │   └── mongodbdeleteone.go
│   │   │   ├── mongodbfind
│   │   │   │   ├── mongodbfind_test.go
│   │   │   │   └── mongodbfind.go
│   │   │   ├── mongodbfindone
│   │   │   │   ├── mongodbfindone_test.go
│   │   │   │   └── mongodbfindone.go
│   │   │   ├── mongodbinsertmany
│   │   │   │   ├── mongodbinsertmany_test.go
│   │   │   │   └── mongodbinsertmany.go
│   │   │   ├── mongodbinsertone
│   │   │   │   ├── mongodbinsertone_test.go
│   │   │   │   └── mongodbinsertone.go
│   │   │   ├── mongodbupdatemany
│   │   │   │   ├── mongodbupdatemany_test.go
│   │   │   │   └── mongodbupdatemany.go
│   │   │   └── mongodbupdateone
│   │   │       ├── mongodbupdateone_test.go
│   │   │       └── mongodbupdateone.go
│   │   ├── mssql
│   │   │   ├── mssqlexecutesql
│   │   │   │   ├── mssqlexecutesql_test.go
│   │   │   │   └── mssqlexecutesql.go
│   │   │   ├── mssqllisttables
│   │   │   │   ├── mssqllisttables_test.go
│   │   │   │   └── mssqllisttables.go
│   │   │   └── mssqlsql
│   │   │       ├── mssqlsql_test.go
│   │   │       └── mssqlsql.go
│   │   ├── mysql
│   │   │   ├── mysqlcommon
│   │   │   │   └── mysqlcommon.go
│   │   │   ├── mysqlexecutesql
│   │   │   │   ├── mysqlexecutesql_test.go
│   │   │   │   └── mysqlexecutesql.go
│   │   │   ├── mysqllistactivequeries
│   │   │   │   ├── mysqllistactivequeries_test.go
│   │   │   │   └── mysqllistactivequeries.go
│   │   │   ├── mysqllisttablefragmentation
│   │   │   │   ├── mysqllisttablefragmentation_test.go
│   │   │   │   └── mysqllisttablefragmentation.go
│   │   │   ├── mysqllisttables
│   │   │   │   ├── mysqllisttables_test.go
│   │   │   │   └── mysqllisttables.go
│   │   │   ├── mysqllisttablesmissinguniqueindexes
│   │   │   │   ├── mysqllisttablesmissinguniqueindexes_test.go
│   │   │   │   └── mysqllisttablesmissinguniqueindexes.go
│   │   │   └── mysqlsql
│   │   │       ├── mysqlsql_test.go
│   │   │       └── mysqlsql.go
│   │   ├── neo4j
│   │   │   ├── neo4jcypher
│   │   │   │   ├── neo4jcypher_test.go
│   │   │   │   └── neo4jcypher.go
│   │   │   ├── neo4jexecutecypher
│   │   │   │   ├── classifier
│   │   │   │   │   ├── classifier_test.go
│   │   │   │   │   └── classifier.go
│   │   │   │   ├── neo4jexecutecypher_test.go
│   │   │   │   └── neo4jexecutecypher.go
│   │   │   └── neo4jschema
│   │   │       ├── cache
│   │   │       │   ├── cache_test.go
│   │   │       │   └── cache.go
│   │   │       ├── helpers
│   │   │       │   ├── helpers_test.go
│   │   │       │   └── helpers.go
│   │   │       ├── neo4jschema_test.go
│   │   │       ├── neo4jschema.go
│   │   │       └── types
│   │   │           └── types.go
│   │   ├── oceanbase
│   │   │   ├── oceanbaseexecutesql
│   │   │   │   ├── oceanbaseexecutesql_test.go
│   │   │   │   └── oceanbaseexecutesql.go
│   │   │   └── oceanbasesql
│   │   │       ├── oceanbasesql_test.go
│   │   │       └── oceanbasesql.go
│   │   ├── oracle
│   │   │   ├── oracleexecutesql
│   │   │   │   └── oracleexecutesql.go
│   │   │   └── oraclesql
│   │   │       └── oraclesql.go
│   │   ├── parameters_test.go
│   │   ├── parameters.go
│   │   ├── postgres
│   │   │   ├── postgresexecutesql
│   │   │   │   ├── postgresexecutesql_test.go
│   │   │   │   └── postgresexecutesql.go
│   │   │   ├── postgreslistactivequeries
│   │   │   │   ├── postgreslistactivequeries_test.go
│   │   │   │   └── postgreslistactivequeries.go
│   │   │   ├── postgreslistavailableextensions
│   │   │   │   ├── postgreslistavailableextensions_test.go
│   │   │   │   └── postgreslistavailableextensions.go
│   │   │   ├── postgreslistinstalledextensions
│   │   │   │   ├── postgreslistinstalledextensions_test.go
│   │   │   │   └── postgreslistinstalledextensions.go
│   │   │   ├── postgreslisttables
│   │   │   │   ├── postgreslisttables_test.go
│   │   │   │   └── postgreslisttables.go
│   │   │   └── postgressql
│   │   │       ├── postgressql_test.go
│   │   │       └── postgressql.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── serverlessspark
│   │   │   └── serverlesssparklistbatches
│   │   │       ├── serverlesssparklistbatches_test.go
│   │   │       └── serverlesssparklistbatches.go
│   │   ├── spanner
│   │   │   ├── spannerexecutesql
│   │   │   │   ├── spannerexecutesql_test.go
│   │   │   │   └── spannerexecutesql.go
│   │   │   ├── spannerlisttables
│   │   │   │   ├── spannerlisttables_test.go
│   │   │   │   └── spannerlisttables.go
│   │   │   └── spannersql
│   │   │       ├── spanner_test.go
│   │   │       └── spannersql.go
│   │   ├── sqlite
│   │   │   ├── sqliteexecutesql
│   │   │   │   ├── sqliteexecutesql_test.go
│   │   │   │   └── sqliteexecutesql.go
│   │   │   └── sqlitesql
│   │   │       ├── sqlitesql_test.go
│   │   │       └── sqlitesql.go
│   │   ├── tidb
│   │   │   ├── tidbexecutesql
│   │   │   │   ├── tidbexecutesql_test.go
│   │   │   │   └── tidbexecutesql.go
│   │   │   └── tidbsql
│   │   │       ├── tidbsql_test.go
│   │   │       └── tidbsql.go
│   │   ├── tools_test.go
│   │   ├── tools.go
│   │   ├── toolsets.go
│   │   ├── trino
│   │   │   ├── trinoexecutesql
│   │   │   │   ├── trinoexecutesql_test.go
│   │   │   │   └── trinoexecutesql.go
│   │   │   └── trinosql
│   │   │       ├── trinosql_test.go
│   │   │       └── trinosql.go
│   │   ├── utility
│   │   │   └── wait
│   │   │       ├── wait_test.go
│   │   │       └── wait.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedbsql
│   │       ├── yugabytedbsql_test.go
│   │       └── yugabytedbsql.go
│   └── util
│       └── util.go
├── LICENSE
├── logo.png
├── main.go
├── MCP-TOOLBOX-EXTENSION.md
├── README.md
└── tests
    ├── alloydb
    │   ├── alloydb_integration_test.go
    │   └── alloydb_wait_for_operation_test.go
    ├── alloydbainl
    │   └── alloydb_ai_nl_integration_test.go
    ├── alloydbpg
    │   └── alloydb_pg_integration_test.go
    ├── auth.go
    ├── bigquery
    │   └── bigquery_integration_test.go
    ├── bigtable
    │   └── bigtable_integration_test.go
    ├── cassandra
    │   └── cassandra_integration_test.go
    ├── clickhouse
    │   └── clickhouse_integration_test.go
    ├── cloudmonitoring
    │   └── cloud_monitoring_integration_test.go
    ├── cloudsql
    │   ├── cloud_sql_create_database_test.go
    │   ├── cloud_sql_create_users_test.go
    │   ├── cloud_sql_get_instances_test.go
    │   ├── cloud_sql_list_databases_test.go
    │   ├── cloudsql_list_instances_test.go
    │   └── cloudsql_wait_for_operation_test.go
    ├── cloudsqlmssql
    │   ├── cloud_sql_mssql_create_instance_integration_test.go
    │   └── cloud_sql_mssql_integration_test.go
    ├── cloudsqlmysql
    │   ├── cloud_sql_mysql_create_instance_integration_test.go
    │   └── cloud_sql_mysql_integration_test.go
    ├── cloudsqlpg
    │   ├── cloud_sql_pg_create_instances_test.go
    │   └── cloud_sql_pg_integration_test.go
    ├── common.go
    ├── couchbase
    │   └── couchbase_integration_test.go
    ├── dataform
    │   └── dataform_integration_test.go
    ├── dataplex
    │   └── dataplex_integration_test.go
    ├── dgraph
    │   └── dgraph_integration_test.go
    ├── firebird
    │   └── firebird_integration_test.go
    ├── firestore
    │   └── firestore_integration_test.go
    ├── http
    │   └── http_integration_test.go
    ├── looker
    │   └── looker_integration_test.go
    ├── mongodb
    │   └── mongodb_integration_test.go
    ├── mssql
    │   └── mssql_integration_test.go
    ├── mysql
    │   └── mysql_integration_test.go
    ├── neo4j
    │   └── neo4j_integration_test.go
    ├── oceanbase
    │   └── oceanbase_integration_test.go
    ├── option.go
    ├── oracle
    │   └── oracle_integration_test.go
    ├── postgres
    │   └── postgres_integration_test.go
    ├── redis
    │   └── redis_test.go
    ├── server.go
    ├── serverlessspark
    │   └── serverless_spark_integration_test.go
    ├── source.go
    ├── spanner
    │   └── spanner_integration_test.go
    ├── sqlite
    │   └── sqlite_integration_test.go
    ├── tidb
    │   └── tidb_integration_test.go
    ├── tool.go
    ├── trino
    │   └── trino_integration_test.go
    ├── utility
    │   └── wait_integration_test.go
    ├── valkey
    │   └── valkey_test.go
    └── yugabytedb
        └── yugabytedb_integration_test.go
```

# Files

--------------------------------------------------------------------------------
/internal/tools/alloydb/alloydbwaitforoperation/alloydbwaitforoperation.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //      http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package alloydbwaitforoperation
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"encoding/json"
 20 | 	"fmt"
 21 | 	"net/http"
 22 | 	"strings"
 23 | 	"text/template"
 24 | 	"time"
 25 | 
 26 | 	yaml "github.com/goccy/go-yaml"
 27 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 28 | 	alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin"
 29 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 30 | )
 31 | 
 32 | const kind string = "alloydb-wait-for-operation"
 33 | 
 34 | var alloyDBConnectionMessageTemplate = `Your AlloyDB resource is ready.
 35 | 
 36 | To connect, please configure your environment. The method depends on how you are running the toolbox:
 37 | 
 38 | **If running locally via stdio:**
 39 | Update the MCP server configuration with the following environment variables:
 40 | ` + "```json" + `
 41 | {
 42 |   "mcpServers": {
 43 |     "alloydb": {
 44 |       "command": "./PATH/TO/toolbox",
 45 |       "args": ["--prebuilt","alloydb-postgres","--stdio"],
 46 |       "env": {
 47 |           "ALLOYDB_POSTGRES_PROJECT": "{{.Project}}",
 48 |           "ALLOYDB_POSTGRES_REGION": "{{.Region}}",
 49 |           "ALLOYDB_POSTGRES_CLUSTER": "{{.Cluster}}",
 50 | {{if .Instance}}          "ALLOYDB_POSTGRES_INSTANCE": "{{.Instance}}",
 51 | {{end}}          "ALLOYDB_POSTGRES_DATABASE": "postgres",
 52 |           "ALLOYDB_POSTGRES_USER": "<your-user>",
 53 |           "ALLOYDB_POSTGRES_PASSWORD": "<your-password>"
 54 |       }
 55 |     }
 56 |   }
 57 | }
 58 | ` + "```" + `
 59 | 
 60 | **If running remotely:**
 61 | For remote deployments, you will need to set the following environment variables in your deployment configuration:
 62 | ` + "```" + `
 63 | ALLOYDB_POSTGRES_PROJECT={{.Project}}
 64 | ALLOYDB_POSTGRES_REGION={{.Region}}
 65 | ALLOYDB_POSTGRES_CLUSTER={{.Cluster}}
 66 | {{if .Instance}}ALLOYDB_POSTGRES_INSTANCE={{.Instance}}
 67 | {{end}}ALLOYDB_POSTGRES_DATABASE=postgres
 68 | ALLOYDB_POSTGRES_USER=<your-user>
 69 | ALLOYDB_POSTGRES_PASSWORD=<your-password>
 70 | ` + "```" + `
 71 | 
 72 | Please refer to the official documentation for guidance on deploying the toolbox:
 73 | - Deploying the Toolbox: https://googleapis.github.io/genai-toolbox/how-to/deploy_toolbox/
 74 | - Deploying on GKE: https://googleapis.github.io/genai-toolbox/how-to/deploy_gke/
 75 | `
 76 | 
 77 | func init() {
 78 | 	if !tools.Register(kind, newConfig) {
 79 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 80 | 	}
 81 | }
 82 | 
 83 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 84 | 	actual := Config{Name: name}
 85 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 86 | 		return nil, err
 87 | 	}
 88 | 	return actual, nil
 89 | }
 90 | 
 91 | // Config defines the configuration for the wait-for-operation tool.
 92 | type Config struct {
 93 | 	Name         string   `yaml:"name" validate:"required"`
 94 | 	Kind         string   `yaml:"kind" validate:"required"`
 95 | 	Source       string   `yaml:"source" validate:"required"`
 96 | 	Description  string   `yaml:"description"`
 97 | 	AuthRequired []string `yaml:"authRequired"`
 98 | 
 99 | 	// Polling configuration
100 | 	Delay      string  `yaml:"delay"`
101 | 	MaxDelay   string  `yaml:"maxDelay"`
102 | 	Multiplier float64 `yaml:"multiplier"`
103 | 	MaxRetries int     `yaml:"maxRetries"`
104 | }
105 | 
106 | // validate interface
107 | var _ tools.ToolConfig = Config{}
108 | 
109 | // ToolConfigKind returns the kind of the tool.
110 | func (cfg Config) ToolConfigKind() string {
111 | 	return kind
112 | }
113 | 
114 | // Initialize initializes the tool from the configuration.
115 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
116 | 	rawS, ok := srcs[cfg.Source]
117 | 	if !ok {
118 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
119 | 	}
120 | 
121 | 	s, ok := rawS.(*alloydbadmin.Source)
122 | 	if !ok {
123 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be `%s`", kind, alloydbadmin.SourceKind)
124 | 	}
125 | 	allParameters := tools.Parameters{
126 | 		tools.NewStringParameter("project", "The project ID"),
127 | 		tools.NewStringParameter("location", "The location ID"),
128 | 		tools.NewStringParameter("operation", "The operation ID"),
129 | 	}
130 | 	paramManifest := allParameters.Manifest()
131 | 
132 | 	description := cfg.Description
133 | 	if description == "" {
134 | 		description = "Polls the operations API until the operation is done. Checking operation status requires a projectId, locationId, and operationId. Once the instance is created, provide follow-up steps on how to use the variables to bring the data plane MCP server up in local and remote setups."
135 | 	}
136 | 
137 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, description, cfg.AuthRequired, allParameters)
138 | 
139 | 	var delay time.Duration
140 | 	if cfg.Delay == "" {
141 | 		delay = 3 * time.Second
142 | 	} else {
143 | 		var err error
144 | 		delay, err = time.ParseDuration(cfg.Delay)
145 | 		if err != nil {
146 | 			return nil, fmt.Errorf("invalid value for delay: %w", err)
147 | 		}
148 | 	}
149 | 
150 | 	var maxDelay time.Duration
151 | 	if cfg.MaxDelay == "" {
152 | 		maxDelay = 4 * time.Minute
153 | 	} else {
154 | 		var err error
155 | 		maxDelay, err = time.ParseDuration(cfg.MaxDelay)
156 | 		if err != nil {
157 | 			return nil, fmt.Errorf("invalid value for maxDelay: %w", err)
158 | 		}
159 | 	}
160 | 
161 | 	multiplier := cfg.Multiplier
162 | 	if multiplier == 0 {
163 | 		multiplier = 2.0
164 | 	}
165 | 
166 | 	maxRetries := cfg.MaxRetries
167 | 	if maxRetries == 0 {
168 | 		maxRetries = 10
169 | 	}
170 | 
171 | 	return Tool{
172 | 		Name:         cfg.Name,
173 | 		Kind:         kind,
174 | 		AuthRequired: cfg.AuthRequired,
175 | 		Source:       s,
176 | 		AllParams:    allParameters,
177 | 		manifest:     tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
178 | 		mcpManifest:  mcpManifest,
179 | 		Delay:        delay,
180 | 		MaxDelay:     maxDelay,
181 | 		Multiplier:   multiplier,
182 | 		MaxRetries:   maxRetries,
183 | 	}, nil
184 | }
185 | 
186 | // Tool represents the wait-for-operation tool.
187 | type Tool struct {
188 | 	Name         string   `yaml:"name"`
189 | 	Kind         string   `yaml:"kind"`
190 | 	Description  string   `yaml:"description"`
191 | 	AuthRequired []string `yaml:"authRequired"`
192 | 
193 | 	Source    *alloydbadmin.Source
194 | 	AllParams tools.Parameters `yaml:"allParams"`
195 | 
196 | 	// Polling configuration
197 | 	Delay      time.Duration
198 | 	MaxDelay   time.Duration
199 | 	Multiplier float64
200 | 	MaxRetries int
201 | 
202 | 	Client      *http.Client
203 | 	manifest    tools.Manifest
204 | 	mcpManifest tools.McpManifest
205 | }
206 | 
207 | // Invoke executes the tool's logic.
208 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
209 | 	paramsMap := params.AsMap()
210 | 
211 | 	project, ok := paramsMap["project"].(string)
212 | 	if !ok {
213 | 		return nil, fmt.Errorf("missing 'project' parameter")
214 | 	}
215 | 	location, ok := paramsMap["location"].(string)
216 | 	if !ok {
217 | 		return nil, fmt.Errorf("missing 'location' parameter")
218 | 	}
219 | 	operation, ok := paramsMap["operation"].(string)
220 | 	if !ok {
221 | 		return nil, fmt.Errorf("missing 'operation' parameter")
222 | 	}
223 | 
224 | 	service, err := t.Source.GetService(ctx, string(accessToken))
225 | 	if err != nil {
226 | 		return nil, err
227 | 	}
228 | 
229 | 	ctx, cancel := context.WithTimeout(ctx, 30*time.Minute)
230 | 	defer cancel()
231 | 
232 | 	name := fmt.Sprintf("projects/%s/locations/%s/operations/%s", project, location, operation)
233 | 
234 | 	delay := t.Delay
235 | 	maxDelay := t.MaxDelay
236 | 	multiplier := t.Multiplier
237 | 	maxRetries := t.MaxRetries
238 | 	retries := 0
239 | 
240 | 	for retries < maxRetries {
241 | 		select {
242 | 		case <-ctx.Done():
243 | 			return nil, fmt.Errorf("timed out waiting for operation: %w", ctx.Err())
244 | 		default:
245 | 		}
246 | 
247 | 		op, err := service.Projects.Locations.Operations.Get(name).Do()
248 | 		if err != nil {
249 | 			fmt.Printf("error getting operation: %s, retrying in %v\n", err, delay)
250 | 		} else {
251 | 			if op.Done {
252 | 				if op.Error != nil {
253 | 					var errorBytes []byte
254 | 					errorBytes, err = json.Marshal(op.Error)
255 | 					if err != nil {
256 | 						return nil, fmt.Errorf("operation finished with error but could not marshal error object: %w", err)
257 | 					}
258 | 					return nil, fmt.Errorf("operation finished with error: %s", string(errorBytes))
259 | 				}
260 | 
261 | 				var opBytes []byte
262 | 				opBytes, err = op.MarshalJSON()
263 | 				if err != nil {
264 | 					return nil, fmt.Errorf("could not marshal operation: %w", err)
265 | 				}
266 | 
267 | 				if msg, ok := t.generateAlloyDBConnectionMessage(map[string]any{"response": op.Response}); ok {
268 | 					return msg, nil
269 | 				}
270 | 
271 | 				return string(opBytes), nil
272 | 			}
273 | 			fmt.Printf("Operation not complete, retrying in %v\n", delay)
274 | 		}
275 | 
276 | 		time.Sleep(delay)
277 | 		delay = time.Duration(float64(delay) * multiplier)
278 | 		if delay > maxDelay {
279 | 			delay = maxDelay
280 | 		}
281 | 		retries++
282 | 	}
283 | 	return nil, fmt.Errorf("exceeded max retries waiting for operation")
284 | }
285 | 
286 | func (t Tool) generateAlloyDBConnectionMessage(responseData map[string]any) (string, bool) {
287 | 	resourceName, ok := responseData["name"].(string)
288 | 	if !ok {
289 | 		return "", false
290 | 	}
291 | 
292 | 	parts := strings.Split(resourceName, "/")
293 | 	var project, region, cluster, instance string
294 | 
295 | 	// Expected format: projects/{project}/locations/{location}/clusters/{cluster}
296 | 	// or projects/{project}/locations/{location}/clusters/{cluster}/instances/{instance}
297 | 	if len(parts) < 6 || parts[0] != "projects" || parts[2] != "locations" || parts[4] != "clusters" {
298 | 		return "", false
299 | 	}
300 | 
301 | 	project = parts[1]
302 | 	region = parts[3]
303 | 	cluster = parts[5]
304 | 
305 | 	if len(parts) >= 8 && parts[6] == "instances" {
306 | 		instance = parts[7]
307 | 	} else {
308 | 		return "", false
309 | 	}
310 | 
311 | 	tmpl, err := template.New("alloydb-connection").Parse(alloyDBConnectionMessageTemplate)
312 | 	if err != nil {
313 | 		// This should not happen with a static template
314 | 		return fmt.Sprintf("template parsing error: %v", err), false
315 | 	}
316 | 
317 | 	data := struct {
318 | 		Project  string
319 | 		Region   string
320 | 		Cluster  string
321 | 		Instance string
322 | 	}{
323 | 		Project:  project,
324 | 		Region:   region,
325 | 		Cluster:  cluster,
326 | 		Instance: instance,
327 | 	}
328 | 
329 | 	var b strings.Builder
330 | 	if err := tmpl.Execute(&b, data); err != nil {
331 | 		return fmt.Sprintf("template execution error: %v", err), false
332 | 	}
333 | 
334 | 	return b.String(), true
335 | }
336 | 
337 | // ParseParams parses the parameters for the tool.
338 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
339 | 	return tools.ParseParams(t.AllParams, data, claims)
340 | }
341 | 
342 | // Manifest returns the tool's manifest.
343 | func (t Tool) Manifest() tools.Manifest {
344 | 	return t.manifest
345 | }
346 | 
347 | // McpManifest returns the tool's MCP manifest.
348 | func (t Tool) McpManifest() tools.McpManifest {
349 | 	return t.mcpManifest
350 | }
351 | 
352 | // Authorized checks if the tool is authorized.
353 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
354 | 	return true
355 | }
356 | 
357 | func (t Tool) RequiresClientAuthorization() bool {
358 | 	return t.Source.UseClientAuthorization()
359 | }
360 | 
```

--------------------------------------------------------------------------------
/tests/trino/trino_integration_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package trino
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"database/sql"
 20 | 	"fmt"
 21 | 	"os"
 22 | 	"regexp"
 23 | 	"strings"
 24 | 	"testing"
 25 | 	"time"
 26 | 
 27 | 	"github.com/google/uuid"
 28 | 	"github.com/googleapis/genai-toolbox/internal/testutils"
 29 | 	"github.com/googleapis/genai-toolbox/tests"
 30 | 	_ "github.com/trinodb/trino-go-client/trino" // Import Trino SQL driver
 31 | )
 32 | 
 33 | var (
 34 | 	TrinoSourceKind = "trino"
 35 | 	TrinoToolKind   = "trino-sql"
 36 | 	TrinoHost       = os.Getenv("TRINO_HOST")
 37 | 	TrinoPort       = os.Getenv("TRINO_PORT")
 38 | 	TrinoUser       = os.Getenv("TRINO_USER")
 39 | 	TrinoPass       = os.Getenv("TRINO_PASS")
 40 | 	TrinoCatalog    = os.Getenv("TRINO_CATALOG")
 41 | 	TrinoSchema     = os.Getenv("TRINO_SCHEMA")
 42 | )
 43 | 
 44 | func getTrinoVars(t *testing.T) map[string]any {
 45 | 	switch "" {
 46 | 	case TrinoHost:
 47 | 		t.Fatal("'TRINO_HOST' not set")
 48 | 	case TrinoPort:
 49 | 		t.Fatal("'TRINO_PORT' not set")
 50 | 	// TrinoUser is optional for anonymous access
 51 | 	case TrinoCatalog:
 52 | 		t.Fatal("'TRINO_CATALOG' not set")
 53 | 	case TrinoSchema:
 54 | 		t.Fatal("'TRINO_SCHEMA' not set")
 55 | 	}
 56 | 
 57 | 	return map[string]any{
 58 | 		"kind":     TrinoSourceKind,
 59 | 		"host":     TrinoHost,
 60 | 		"port":     TrinoPort,
 61 | 		"user":     TrinoUser,
 62 | 		"password": TrinoPass,
 63 | 		"catalog":  TrinoCatalog,
 64 | 		"schema":   TrinoSchema,
 65 | 	}
 66 | }
 67 | 
 68 | // initTrinoConnectionPool creates a Trino connection pool (copied from trino.go)
 69 | func initTrinoConnectionPool(host, port, user, pass, catalog, schema string) (*sql.DB, error) {
 70 | 	dsn, err := buildTrinoDSN(host, port, user, pass, catalog, schema, "", "", false, false)
 71 | 	if err != nil {
 72 | 		return nil, fmt.Errorf("failed to build DSN: %w", err)
 73 | 	}
 74 | 
 75 | 	db, err := sql.Open("trino", dsn)
 76 | 	if err != nil {
 77 | 		return nil, fmt.Errorf("failed to open connection: %w", err)
 78 | 	}
 79 | 
 80 | 	// Configure connection pool
 81 | 	db.SetMaxOpenConns(10)
 82 | 	db.SetMaxIdleConns(5)
 83 | 	db.SetConnMaxLifetime(time.Hour)
 84 | 
 85 | 	return db, nil
 86 | }
 87 | 
 88 | // buildTrinoDSN builds a Trino DSN string (simplified version from trino.go)
 89 | func buildTrinoDSN(host, port, user, password, catalog, schema, queryTimeout, accessToken string, kerberosEnabled, sslEnabled bool) (string, error) {
 90 | 	scheme := "http"
 91 | 	if sslEnabled {
 92 | 		scheme = "https"
 93 | 	}
 94 | 
 95 | 	// Build base DSN without user info
 96 | 	dsn := fmt.Sprintf("%s://%s:%s?catalog=%s&schema=%s", scheme, host, port, catalog, schema)
 97 | 
 98 | 	// Add user authentication if provided
 99 | 	if user != "" {
100 | 		if password != "" {
101 | 			dsn = fmt.Sprintf("%s://%s:%s@%s:%s?catalog=%s&schema=%s", scheme, user, password, host, port, catalog, schema)
102 | 		} else {
103 | 			dsn = fmt.Sprintf("%s://%s@%s:%s?catalog=%s&schema=%s", scheme, user, host, port, catalog, schema)
104 | 		}
105 | 	}
106 | 
107 | 	if queryTimeout != "" {
108 | 		dsn += "&queryTimeout=" + queryTimeout
109 | 	}
110 | 
111 | 	if accessToken != "" {
112 | 		dsn += "&accessToken=" + accessToken
113 | 	}
114 | 
115 | 	if kerberosEnabled {
116 | 		dsn += "&KerberosEnabled=true"
117 | 	}
118 | 
119 | 	return dsn, nil
120 | }
121 | 
122 | // getTrinoParamToolInfo returns statements and param for my-tool trino-sql kind
123 | func getTrinoParamToolInfo(tableName string) (string, string, string, string, string, string, []any) {
124 | 	createStatement := fmt.Sprintf("CREATE TABLE %s (id BIGINT NOT NULL, name VARCHAR(255))", tableName)
125 | 	insertStatement := fmt.Sprintf("INSERT INTO %s (id, name) VALUES (1, ?), (2, ?), (3, ?), (4, ?)", tableName)
126 | 	toolStatement := fmt.Sprintf("SELECT * FROM %s WHERE id = ? OR name = ?", tableName)
127 | 	idParamStatement := fmt.Sprintf("SELECT * FROM %s WHERE id = ?", tableName)
128 | 	nameParamStatement := fmt.Sprintf("SELECT * FROM %s WHERE name = ?", tableName)
129 | 	arrayToolStatement := fmt.Sprintf("SELECT * FROM %s WHERE id IN (?, ?) AND name IN (?, ?)", tableName) // Trino doesn't use ANY() like MySQL/PostgreSQL
130 | 	params := []any{"Alice", "Jane", "Sid", nil}
131 | 	return createStatement, insertStatement, toolStatement, idParamStatement, nameParamStatement, arrayToolStatement, params
132 | }
133 | 
134 | // getTrinoAuthToolInfo returns statements and param of my-auth-tool for trino-sql kind
135 | func getTrinoAuthToolInfo(tableName string) (string, string, string, []any) {
136 | 	createStatement := fmt.Sprintf("CREATE TABLE %s (id BIGINT NOT NULL, name VARCHAR(255), email VARCHAR(255))", tableName)
137 | 	insertStatement := fmt.Sprintf("INSERT INTO %s (id, name, email) VALUES (1, ?, ?), (2, ?, ?)", tableName)
138 | 	toolStatement := fmt.Sprintf("SELECT name FROM %s WHERE email = ?", tableName)
139 | 	params := []any{"Alice", tests.ServiceAccountEmail, "Jane", "[email protected]"}
140 | 	return createStatement, insertStatement, toolStatement, params
141 | }
142 | 
143 | // getTrinoTmplToolStatement returns statements and param for template parameter test cases for trino-sql kind
144 | func getTrinoTmplToolStatement() (string, string) {
145 | 	tmplSelectCombined := "SELECT * FROM {{.tableName}} WHERE id = ?"
146 | 	tmplSelectFilterCombined := "SELECT * FROM {{.tableName}} WHERE {{.columnFilter}} = ?"
147 | 	return tmplSelectCombined, tmplSelectFilterCombined
148 | }
149 | 
150 | // getTrinoWants return the expected wants for trino
151 | func getTrinoWants() (string, string, string, string) {
152 | 	select1Want := `[{"_col0":1}]`
153 | 	failInvocationWant := `{"jsonrpc":"2.0","id":"invoke-fail-tool","result":{"content":[{"type":"text","text":"unable to execute query: trino: query failed (200 OK): \"USER_ERROR: line 1:1: mismatched input 'SELEC'. Expecting: 'ALTER', 'ANALYZE', 'CALL', 'COMMENT', 'COMMIT', 'CREATE', 'DEALLOCATE', 'DELETE', 'DENY', 'DESC', 'DESCRIBE', 'DROP', 'EXECUTE', 'EXPLAIN', 'GRANT', 'INSERT', 'MERGE', 'PREPARE', 'REFRESH', 'RESET', 'REVOKE', 'ROLLBACK', 'SET', 'SHOW', 'START', 'TRUNCATE', 'UPDATE', 'USE', 'WITH', \u003cquery\u003e\""}],"isError":true}}`
154 | 	createTableStatement := `"CREATE TABLE t (id BIGINT NOT NULL, name VARCHAR(255))"`
155 | 	mcpSelect1Want := `{"jsonrpc":"2.0","id":"invoke my-auth-required-tool","result":{"content":[{"type":"text","text":"{\"_col0\":1}"}]}}`
156 | 	return select1Want, failInvocationWant, createTableStatement, mcpSelect1Want
157 | }
158 | 
159 | // setupTrinoTable creates and inserts data into a table of tool
160 | // compatible with trino-sql tool
161 | func setupTrinoTable(t *testing.T, ctx context.Context, pool *sql.DB, createStatement, insertStatement, tableName string, params []any) func(*testing.T) {
162 | 	err := pool.PingContext(ctx)
163 | 	if err != nil {
164 | 		t.Fatalf("unable to connect to test database: %s", err)
165 | 	}
166 | 
167 | 	// Create table
168 | 	_, err = pool.ExecContext(ctx, createStatement)
169 | 	if err != nil {
170 | 		t.Fatalf("unable to create test table %s: %s", tableName, err)
171 | 	}
172 | 
173 | 	// Insert test data
174 | 	_, err = pool.ExecContext(ctx, insertStatement, params...)
175 | 	if err != nil {
176 | 		t.Fatalf("unable to insert test data: %s", err)
177 | 	}
178 | 
179 | 	return func(t *testing.T) {
180 | 		// tear down test
181 | 		_, err = pool.ExecContext(ctx, fmt.Sprintf("DROP TABLE %s", tableName))
182 | 		if err != nil {
183 | 			t.Errorf("Teardown failed: %s", err)
184 | 		}
185 | 	}
186 | }
187 | 
188 | // addTrinoExecuteSqlConfig gets the tools config for `trino-execute-sql`
189 | func addTrinoExecuteSqlConfig(t *testing.T, config map[string]any) map[string]any {
190 | 	tools, ok := config["tools"].(map[string]any)
191 | 	if !ok {
192 | 		t.Fatalf("unable to get tools from config")
193 | 	}
194 | 	tools["my-exec-sql-tool"] = map[string]any{
195 | 		"kind":        "trino-execute-sql",
196 | 		"source":      "my-instance",
197 | 		"description": "Tool to execute sql",
198 | 	}
199 | 	tools["my-auth-exec-sql-tool"] = map[string]any{
200 | 		"kind":        "trino-execute-sql",
201 | 		"source":      "my-instance",
202 | 		"description": "Tool to execute sql",
203 | 		"authRequired": []string{
204 | 			"my-google-auth",
205 | 		},
206 | 	}
207 | 	config["tools"] = tools
208 | 	return config
209 | }
210 | 
211 | func TestTrinoToolEndpoints(t *testing.T) {
212 | 	sourceConfig := getTrinoVars(t)
213 | 	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
214 | 	defer cancel()
215 | 
216 | 	var args []string
217 | 
218 | 	pool, err := initTrinoConnectionPool(TrinoHost, TrinoPort, TrinoUser, TrinoPass, TrinoCatalog, TrinoSchema)
219 | 	if err != nil {
220 | 		t.Fatalf("unable to create Trino connection pool: %s", err)
221 | 	}
222 | 
223 | 	// create table name with UUID
224 | 	tableNameParam := "param_table_" + strings.ReplaceAll(uuid.New().String(), "-", "")
225 | 	tableNameAuth := "auth_table_" + strings.ReplaceAll(uuid.New().String(), "-", "")
226 | 	tableNameTemplateParam := "template_param_table_" + strings.ReplaceAll(uuid.New().String(), "-", "")
227 | 
228 | 	// set up data for param tool
229 | 	createParamTableStmt, insertParamTableStmt, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, paramTestParams := getTrinoParamToolInfo(tableNameParam)
230 | 	teardownTable1 := setupTrinoTable(t, ctx, pool, createParamTableStmt, insertParamTableStmt, tableNameParam, paramTestParams)
231 | 	defer teardownTable1(t)
232 | 
233 | 	// set up data for auth tool
234 | 	createAuthTableStmt, insertAuthTableStmt, authToolStmt, authTestParams := getTrinoAuthToolInfo(tableNameAuth)
235 | 	teardownTable2 := setupTrinoTable(t, ctx, pool, createAuthTableStmt, insertAuthTableStmt, tableNameAuth, authTestParams)
236 | 	defer teardownTable2(t)
237 | 
238 | 	// Write config into a file and pass it to command
239 | 	toolsFile := tests.GetToolsConfig(sourceConfig, TrinoToolKind, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, authToolStmt)
240 | 	toolsFile = addTrinoExecuteSqlConfig(t, toolsFile)
241 | 	tmplSelectCombined, tmplSelectFilterCombined := getTrinoTmplToolStatement()
242 | 	toolsFile = tests.AddTemplateParamConfig(t, toolsFile, TrinoToolKind, tmplSelectCombined, tmplSelectFilterCombined, "")
243 | 
244 | 	cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
245 | 	if err != nil {
246 | 		t.Fatalf("command initialization returned an error: %s", err)
247 | 	}
248 | 	defer cleanup()
249 | 
250 | 	waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
251 | 	defer cancel()
252 | 	out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
253 | 	if err != nil {
254 | 		t.Logf("toolbox command logs: \n%s", out)
255 | 		t.Fatalf("toolbox didn't start successfully: %s", err)
256 | 	}
257 | 
258 | 	// Get configs for tests
259 | 	select1Want, mcpMyFailToolWant, createTableStatement, mcpSelect1Want := getTrinoWants()
260 | 
261 | 	// Run tests
262 | 	tests.RunToolGetTest(t)
263 | 	tests.RunToolInvokeTest(t, select1Want, tests.DisableArrayTest())
264 | 	tests.RunMCPToolCallMethod(t, mcpMyFailToolWant, mcpSelect1Want)
265 | 	tests.RunExecuteSqlToolInvokeTest(t, createTableStatement, select1Want)
266 | 	tests.RunToolInvokeWithTemplateParameters(t, tableNameTemplateParam, tests.WithInsert1Want(`[{"rows":1}]`))
267 | }
268 | 
```

--------------------------------------------------------------------------------
/internal/tools/postgres/postgreslisttables/postgreslisttables.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package postgreslisttables
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"fmt"
 20 | 
 21 | 	yaml "github.com/goccy/go-yaml"
 22 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 23 | 	"github.com/googleapis/genai-toolbox/internal/sources/alloydbpg"
 24 | 	"github.com/googleapis/genai-toolbox/internal/sources/cloudsqlpg"
 25 | 	"github.com/googleapis/genai-toolbox/internal/sources/postgres"
 26 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 27 | 	"github.com/jackc/pgx/v5/pgxpool"
 28 | )
 29 | 
 30 | const kind string = "postgres-list-tables"
 31 | 
 32 | const listTablesStatement = `
 33 | 	WITH desired_relkinds AS (
 34 | 		SELECT ARRAY['r', 'p']::char[] AS kinds -- Always consider both 'TABLE' and 'PARTITIONED TABLE'
 35 | 	),
 36 | 	table_info AS (
 37 | 		SELECT
 38 | 			t.oid AS table_oid,
 39 | 			ns.nspname AS schema_name,
 40 | 			t.relname AS table_name,
 41 | 			pg_get_userbyid(t.relowner) AS table_owner,
 42 | 			obj_description(t.oid, 'pg_class') AS table_comment,
 43 | 			t.relkind AS object_kind
 44 | 		FROM
 45 | 			pg_class t
 46 | 		JOIN
 47 | 			pg_namespace ns ON ns.oid = t.relnamespace
 48 | 		CROSS JOIN desired_relkinds dk
 49 | 		WHERE
 50 | 			t.relkind = ANY(dk.kinds) -- Filter by selected table relkinds ('r', 'p')
 51 | 			AND (NULLIF(TRIM($1), '') IS NULL OR t.relname = ANY(string_to_array($1,','))) -- $1 is object_names
 52 | 			AND ns.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
 53 | 			AND ns.nspname NOT LIKE 'pg_temp_%' AND ns.nspname NOT LIKE 'pg_toast_temp_%'
 54 | 	),
 55 | 	columns_info AS (
 56 | 		SELECT
 57 | 			att.attrelid AS table_oid, att.attname AS column_name, format_type(att.atttypid, att.atttypmod) AS data_type,
 58 | 			att.attnum AS column_ordinal_position, att.attnotnull AS is_not_nullable,
 59 | 			pg_get_expr(ad.adbin, ad.adrelid) AS column_default, col_description(att.attrelid, att.attnum) AS column_comment
 60 | 		FROM pg_attribute att LEFT JOIN pg_attrdef ad ON att.attrelid = ad.adrelid AND att.attnum = ad.adnum
 61 | 		JOIN table_info ti ON att.attrelid = ti.table_oid WHERE att.attnum > 0 AND NOT att.attisdropped
 62 | 	),
 63 | 	constraints_info AS (
 64 | 		SELECT
 65 | 			con.conrelid AS table_oid, con.conname AS constraint_name, pg_get_constraintdef(con.oid) AS constraint_definition,
 66 | 			CASE con.contype WHEN 'p' THEN 'PRIMARY KEY' WHEN 'f' THEN 'FOREIGN KEY' WHEN 'u' THEN 'UNIQUE' WHEN 'c' THEN 'CHECK' ELSE con.contype::text END AS constraint_type,
 67 | 			(SELECT array_agg(att.attname ORDER BY u.attposition) FROM unnest(con.conkey) WITH ORDINALITY AS u(attnum, attposition) JOIN pg_attribute att ON att.attrelid = con.conrelid AND att.attnum = u.attnum) AS constraint_columns,
 68 | 			NULLIF(con.confrelid, 0)::regclass AS foreign_key_referenced_table,
 69 | 			(SELECT array_agg(att.attname ORDER BY u.attposition) FROM unnest(con.confkey) WITH ORDINALITY AS u(attnum, attposition) JOIN pg_attribute att ON att.attrelid = con.confrelid AND att.attnum = u.attnum WHERE con.contype = 'f') AS foreign_key_referenced_columns
 70 | 		FROM pg_constraint con JOIN table_info ti ON con.conrelid = ti.table_oid
 71 | 	),
 72 | 	indexes_info AS (
 73 | 		SELECT
 74 | 			idx.indrelid AS table_oid, ic.relname AS index_name, pg_get_indexdef(idx.indexrelid) AS index_definition,
 75 | 			idx.indisunique AS is_unique, idx.indisprimary AS is_primary, am.amname AS index_method,
 76 | 			(SELECT array_agg(att.attname ORDER BY u.ord) FROM unnest(idx.indkey::int[]) WITH ORDINALITY AS u(colidx, ord) LEFT JOIN pg_attribute att ON att.attrelid = idx.indrelid AND att.attnum = u.colidx WHERE u.colidx <> 0) AS index_columns
 77 | 		FROM pg_index idx JOIN pg_class ic ON ic.oid = idx.indexrelid JOIN pg_am am ON am.oid = ic.relam JOIN table_info ti ON idx.indrelid = ti.table_oid
 78 | 	),
 79 | 	triggers_info AS (
 80 | 		SELECT tg.tgrelid AS table_oid, tg.tgname AS trigger_name, pg_get_triggerdef(tg.oid) AS trigger_definition, tg.tgenabled AS trigger_enabled_state
 81 | 		FROM pg_trigger tg JOIN table_info ti ON tg.tgrelid = ti.table_oid WHERE NOT tg.tgisinternal
 82 | 	)
 83 | 	SELECT
 84 | 		ti.schema_name,
 85 | 		ti.table_name AS object_name,
 86 | 		CASE
 87 | 		  WHEN $2 = 'simple' THEN
 88 | 			  -- IF format is 'simple', return basic JSON
 89 | 			  json_build_object('name', ti.table_name)
 90 | 		  ELSE
 91 | 			json_build_object(
 92 | 				'schema_name', ti.schema_name,
 93 | 				'object_name', ti.table_name,
 94 | 				'object_type', CASE ti.object_kind
 95 | 								WHEN 'r' THEN 'TABLE'
 96 | 								WHEN 'p' THEN 'PARTITIONED TABLE'
 97 | 								ELSE ti.object_kind::text -- Should not happen due to WHERE clause
 98 | 							END,
 99 | 				'owner', ti.table_owner,
100 | 				'comment', ti.table_comment,
101 | 				'columns', COALESCE((SELECT json_agg(json_build_object('column_name',ci.column_name,'data_type',ci.data_type,'ordinal_position',ci.column_ordinal_position,'is_not_nullable',ci.is_not_nullable,'column_default',ci.column_default,'column_comment',ci.column_comment) ORDER BY ci.column_ordinal_position) FROM columns_info ci WHERE ci.table_oid = ti.table_oid), '[]'::json),
102 | 				'constraints', COALESCE((SELECT json_agg(json_build_object('constraint_name',cons.constraint_name,'constraint_type',cons.constraint_type,'constraint_definition',cons.constraint_definition,'constraint_columns',cons.constraint_columns,'foreign_key_referenced_table',cons.foreign_key_referenced_table,'foreign_key_referenced_columns',cons.foreign_key_referenced_columns)) FROM constraints_info cons WHERE cons.table_oid = ti.table_oid), '[]'::json),
103 | 				'indexes', COALESCE((SELECT json_agg(json_build_object('index_name',ii.index_name,'index_definition',ii.index_definition,'is_unique',ii.is_unique,'is_primary',ii.is_primary,'index_method',ii.index_method,'index_columns',ii.index_columns)) FROM indexes_info ii WHERE ii.table_oid = ti.table_oid), '[]'::json),
104 | 				'triggers', COALESCE((SELECT json_agg(json_build_object('trigger_name',tri.trigger_name,'trigger_definition',tri.trigger_definition,'trigger_enabled_state',tri.trigger_enabled_state)) FROM triggers_info tri WHERE tri.table_oid = ti.table_oid), '[]'::json)
105 | 			) 
106 | 		END AS object_details
107 | 	FROM table_info ti ORDER BY ti.schema_name, ti.table_name;
108 | `
109 | 
110 | func init() {
111 | 	if !tools.Register(kind, newConfig) {
112 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
113 | 	}
114 | }
115 | 
116 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
117 | 	actual := Config{Name: name}
118 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
119 | 		return nil, err
120 | 	}
121 | 	return actual, nil
122 | }
123 | 
124 | type compatibleSource interface {
125 | 	PostgresPool() *pgxpool.Pool
126 | }
127 | 
128 | // validate compatible sources are still compatible
129 | var _ compatibleSource = &alloydbpg.Source{}
130 | var _ compatibleSource = &cloudsqlpg.Source{}
131 | var _ compatibleSource = &postgres.Source{}
132 | 
133 | var compatibleSources = [...]string{alloydbpg.SourceKind, cloudsqlpg.SourceKind, postgres.SourceKind}
134 | 
135 | type Config struct {
136 | 	Name         string   `yaml:"name" validate:"required"`
137 | 	Kind         string   `yaml:"kind" validate:"required"`
138 | 	Source       string   `yaml:"source" validate:"required"`
139 | 	Description  string   `yaml:"description" validate:"required"`
140 | 	AuthRequired []string `yaml:"authRequired"`
141 | }
142 | 
143 | // validate interface
144 | var _ tools.ToolConfig = Config{}
145 | 
146 | func (cfg Config) ToolConfigKind() string {
147 | 	return kind
148 | }
149 | 
150 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
151 | 	// verify source exists
152 | 	rawS, ok := srcs[cfg.Source]
153 | 	if !ok {
154 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
155 | 	}
156 | 
157 | 	// verify the source is compatible
158 | 	s, ok := rawS.(compatibleSource)
159 | 	if !ok {
160 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
161 | 	}
162 | 
163 | 	allParameters := tools.Parameters{
164 | 		tools.NewStringParameterWithDefault("table_names", "", "Optional: A comma-separated list of table names. If empty, details for all tables will be listed."),
165 | 		tools.NewStringParameterWithDefault("output_format", "detailed", "Optional: Use 'simple' for names only or 'detailed' for full info."),
166 | 	}
167 | 	paramManifest := allParameters.Manifest()
168 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, allParameters)
169 | 
170 | 	t := Tool{
171 | 		Name:         cfg.Name,
172 | 		Kind:         kind,
173 | 		AuthRequired: cfg.AuthRequired,
174 | 		AllParams:    allParameters,
175 | 		Pool:         s.PostgresPool(),
176 | 		manifest:     tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
177 | 		mcpManifest:  mcpManifest,
178 | 	}
179 | 
180 | 	return t, nil
181 | }
182 | 
183 | // validate interface
184 | var _ tools.Tool = Tool{}
185 | 
186 | type Tool struct {
187 | 	Name         string           `yaml:"name"`
188 | 	Kind         string           `yaml:"kind"`
189 | 	AuthRequired []string         `yaml:"authRequired"`
190 | 	AllParams    tools.Parameters `yaml:"allParams"`
191 | 
192 | 	Pool        *pgxpool.Pool
193 | 	manifest    tools.Manifest
194 | 	mcpManifest tools.McpManifest
195 | }
196 | 
197 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
198 | 	paramsMap := params.AsMap()
199 | 
200 | 	tableNames, ok := paramsMap["table_names"].(string)
201 | 	if !ok {
202 | 		return nil, fmt.Errorf("invalid 'table_names' parameter; expected a string")
203 | 	}
204 | 	outputFormat, _ := paramsMap["output_format"].(string)
205 | 	if outputFormat != "simple" && outputFormat != "detailed" {
206 | 		return nil, fmt.Errorf("invalid value for output_format: must be 'simple' or 'detailed', but got %q", outputFormat)
207 | 	}
208 | 
209 | 	results, err := t.Pool.Query(ctx, listTablesStatement, tableNames, outputFormat)
210 | 	if err != nil {
211 | 		return nil, fmt.Errorf("unable to execute query: %w", err)
212 | 	}
213 | 	defer results.Close()
214 | 
215 | 	fields := results.FieldDescriptions()
216 | 	var out []map[string]any
217 | 
218 | 	for results.Next() {
219 | 		values, err := results.Values()
220 | 		if err != nil {
221 | 			return nil, fmt.Errorf("unable to parse row: %w", err)
222 | 		}
223 | 		rowMap := make(map[string]any)
224 | 		for i, field := range fields {
225 | 			rowMap[string(field.Name)] = values[i]
226 | 		}
227 | 		out = append(out, rowMap)
228 | 	}
229 | 
230 | 	if err := results.Err(); err != nil {
231 | 		return nil, fmt.Errorf("error reading query results: %w", err)
232 | 	}
233 | 
234 | 	return out, nil
235 | }
236 | 
237 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
238 | 	return tools.ParseParams(t.AllParams, data, claims)
239 | }
240 | 
241 | func (t Tool) Manifest() tools.Manifest {
242 | 	return t.manifest
243 | }
244 | 
245 | func (t Tool) McpManifest() tools.McpManifest {
246 | 	return t.mcpManifest
247 | }
248 | 
249 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
250 | 	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
251 | }
252 | 
253 | func (t Tool) RequiresClientAuthorization() bool {
254 | 	return false
255 | }
256 | 
```

--------------------------------------------------------------------------------
/internal/server/api.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2024 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package server
 16 | 
 17 | import (
 18 | 	"encoding/json"
 19 | 	"errors"
 20 | 	"fmt"
 21 | 	"net/http"
 22 | 	"strings"
 23 | 
 24 | 	"github.com/go-chi/chi/v5"
 25 | 	"github.com/go-chi/chi/v5/middleware"
 26 | 	"github.com/go-chi/render"
 27 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 28 | 	"github.com/googleapis/genai-toolbox/internal/util"
 29 | 	"go.opentelemetry.io/otel/attribute"
 30 | 	"go.opentelemetry.io/otel/codes"
 31 | 	"go.opentelemetry.io/otel/metric"
 32 | )
 33 | 
 34 | // apiRouter creates a router that represents the routes under /api
 35 | func apiRouter(s *Server) (chi.Router, error) {
 36 | 	r := chi.NewRouter()
 37 | 
 38 | 	r.Use(middleware.AllowContentType("application/json"))
 39 | 	r.Use(middleware.StripSlashes)
 40 | 	r.Use(render.SetContentType(render.ContentTypeJSON))
 41 | 
 42 | 	r.Get("/toolset", func(w http.ResponseWriter, r *http.Request) { toolsetHandler(s, w, r) })
 43 | 	r.Get("/toolset/{toolsetName}", func(w http.ResponseWriter, r *http.Request) { toolsetHandler(s, w, r) })
 44 | 
 45 | 	r.Route("/tool/{toolName}", func(r chi.Router) {
 46 | 		r.Get("/", func(w http.ResponseWriter, r *http.Request) { toolGetHandler(s, w, r) })
 47 | 		r.Post("/invoke", func(w http.ResponseWriter, r *http.Request) { toolInvokeHandler(s, w, r) })
 48 | 	})
 49 | 
 50 | 	return r, nil
 51 | }
 52 | 
 53 | // toolsetHandler handles the request for information about a Toolset.
 54 | func toolsetHandler(s *Server, w http.ResponseWriter, r *http.Request) {
 55 | 	ctx, span := s.instrumentation.Tracer.Start(r.Context(), "toolbox/server/toolset/get")
 56 | 	r = r.WithContext(ctx)
 57 | 
 58 | 	toolsetName := chi.URLParam(r, "toolsetName")
 59 | 	s.logger.DebugContext(ctx, fmt.Sprintf("toolset name: %s", toolsetName))
 60 | 	span.SetAttributes(attribute.String("toolset_name", toolsetName))
 61 | 	var err error
 62 | 	defer func() {
 63 | 		if err != nil {
 64 | 			span.SetStatus(codes.Error, err.Error())
 65 | 		}
 66 | 		span.End()
 67 | 
 68 | 		status := "success"
 69 | 		if err != nil {
 70 | 			status = "error"
 71 | 		}
 72 | 		s.instrumentation.ToolsetGet.Add(
 73 | 			r.Context(),
 74 | 			1,
 75 | 			metric.WithAttributes(attribute.String("toolbox.name", toolsetName)),
 76 | 			metric.WithAttributes(attribute.String("toolbox.operation.status", status)),
 77 | 		)
 78 | 	}()
 79 | 
 80 | 	toolset, ok := s.ResourceMgr.GetToolset(toolsetName)
 81 | 	if !ok {
 82 | 		err = fmt.Errorf("toolset %q does not exist", toolsetName)
 83 | 		s.logger.DebugContext(ctx, err.Error())
 84 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusNotFound))
 85 | 		return
 86 | 	}
 87 | 	render.JSON(w, r, toolset.Manifest)
 88 | }
 89 | 
 90 | // toolGetHandler handles requests for a single Tool.
 91 | func toolGetHandler(s *Server, w http.ResponseWriter, r *http.Request) {
 92 | 	ctx, span := s.instrumentation.Tracer.Start(r.Context(), "toolbox/server/tool/get")
 93 | 	r = r.WithContext(ctx)
 94 | 
 95 | 	toolName := chi.URLParam(r, "toolName")
 96 | 	s.logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName))
 97 | 	span.SetAttributes(attribute.String("tool_name", toolName))
 98 | 	var err error
 99 | 	defer func() {
100 | 		if err != nil {
101 | 			span.SetStatus(codes.Error, err.Error())
102 | 		}
103 | 		span.End()
104 | 
105 | 		status := "success"
106 | 		if err != nil {
107 | 			status = "error"
108 | 		}
109 | 		s.instrumentation.ToolGet.Add(
110 | 			r.Context(),
111 | 			1,
112 | 			metric.WithAttributes(attribute.String("toolbox.name", toolName)),
113 | 			metric.WithAttributes(attribute.String("toolbox.operation.status", status)),
114 | 		)
115 | 	}()
116 | 	tool, ok := s.ResourceMgr.GetTool(toolName)
117 | 	if !ok {
118 | 		err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName)
119 | 		s.logger.DebugContext(ctx, err.Error())
120 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusNotFound))
121 | 		return
122 | 	}
123 | 	// TODO: this can be optimized later with some caching
124 | 	m := tools.ToolsetManifest{
125 | 		ServerVersion: s.version,
126 | 		ToolsManifest: map[string]tools.Manifest{
127 | 			toolName: tool.Manifest(),
128 | 		},
129 | 	}
130 | 
131 | 	render.JSON(w, r, m)
132 | }
133 | 
134 | // toolInvokeHandler handles the API request to invoke a specific Tool.
135 | func toolInvokeHandler(s *Server, w http.ResponseWriter, r *http.Request) {
136 | 	ctx, span := s.instrumentation.Tracer.Start(r.Context(), "toolbox/server/tool/invoke")
137 | 	r = r.WithContext(ctx)
138 | 	ctx = util.WithLogger(r.Context(), s.logger)
139 | 
140 | 	toolName := chi.URLParam(r, "toolName")
141 | 	s.logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName))
142 | 	span.SetAttributes(attribute.String("tool_name", toolName))
143 | 	var err error
144 | 	defer func() {
145 | 		if err != nil {
146 | 			span.SetStatus(codes.Error, err.Error())
147 | 		}
148 | 		span.End()
149 | 
150 | 		status := "success"
151 | 		if err != nil {
152 | 			status = "error"
153 | 		}
154 | 		s.instrumentation.ToolInvoke.Add(
155 | 			r.Context(),
156 | 			1,
157 | 			metric.WithAttributes(attribute.String("toolbox.name", toolName)),
158 | 			metric.WithAttributes(attribute.String("toolbox.operation.status", status)),
159 | 		)
160 | 	}()
161 | 
162 | 	tool, ok := s.ResourceMgr.GetTool(toolName)
163 | 	if !ok {
164 | 		err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName)
165 | 		s.logger.DebugContext(ctx, err.Error())
166 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusNotFound))
167 | 		return
168 | 	}
169 | 
170 | 	// Extract OAuth access token from the "Authorization" header (currently for
171 | 	// BigQuery end-user credentials usage only)
172 | 	accessToken := tools.AccessToken(r.Header.Get("Authorization"))
173 | 
174 | 	// Check if this specific tool requires the standard authorization header
175 | 	if tool.RequiresClientAuthorization() {
176 | 		if accessToken == "" {
177 | 			err = fmt.Errorf("tool requires client authorization but access token is missing from the request header")
178 | 			s.logger.DebugContext(ctx, err.Error())
179 | 			_ = render.Render(w, r, newErrResponse(err, http.StatusUnauthorized))
180 | 			return
181 | 		}
182 | 	}
183 | 
184 | 	// Tool authentication
185 | 	// claimsFromAuth maps the name of the authservice to the claims retrieved from it.
186 | 	claimsFromAuth := make(map[string]map[string]any)
187 | 	for _, aS := range s.ResourceMgr.GetAuthServiceMap() {
188 | 		claims, err := aS.GetClaimsFromHeader(ctx, r.Header)
189 | 		if err != nil {
190 | 			s.logger.DebugContext(ctx, err.Error())
191 | 			continue
192 | 		}
193 | 		if claims == nil {
194 | 			// authService not present in header
195 | 			continue
196 | 		}
197 | 		claimsFromAuth[aS.GetName()] = claims
198 | 	}
199 | 
200 | 	// Tool authorization check
201 | 	verifiedAuthServices := make([]string, len(claimsFromAuth))
202 | 	i := 0
203 | 	for k := range claimsFromAuth {
204 | 		verifiedAuthServices[i] = k
205 | 		i++
206 | 	}
207 | 
208 | 	// Check if any of the specified auth services is verified
209 | 	isAuthorized := tool.Authorized(verifiedAuthServices)
210 | 	if !isAuthorized {
211 | 		err = fmt.Errorf("tool invocation not authorized; please make sure you specify the correct auth headers")
212 | 		s.logger.DebugContext(ctx, err.Error())
213 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusUnauthorized))
214 | 		return
215 | 	}
216 | 	s.logger.DebugContext(ctx, "tool invocation authorized")
217 | 
218 | 	var data map[string]any
219 | 	if err = util.DecodeJSON(r.Body, &data); err != nil {
220 | 		render.Status(r, http.StatusBadRequest)
221 | 		err = fmt.Errorf("request body was invalid JSON: %w", err)
222 | 		s.logger.DebugContext(ctx, err.Error())
223 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusBadRequest))
224 | 		return
225 | 	}
226 | 
227 | 	params, err := tool.ParseParams(data, claimsFromAuth)
228 | 	if err != nil {
229 | 		// If auth error, return 401
230 | 		if errors.Is(err, tools.ErrUnauthorized) {
231 | 			s.logger.DebugContext(ctx, fmt.Sprintf("error parsing authenticated parameters from ID token: %s", err))
232 | 			_ = render.Render(w, r, newErrResponse(err, http.StatusUnauthorized))
233 | 			return
234 | 		}
235 | 		err = fmt.Errorf("provided parameters were invalid: %w", err)
236 | 		s.logger.DebugContext(ctx, err.Error())
237 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusBadRequest))
238 | 		return
239 | 	}
240 | 	s.logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params))
241 | 
242 | 	res, err := tool.Invoke(ctx, params, accessToken)
243 | 
244 | 	// Determine what error to return to the users.
245 | 	if err != nil {
246 | 		errStr := err.Error()
247 | 		var statusCode int
248 | 
249 | 		// Upstream API auth error propagation
250 | 		switch {
251 | 		case strings.Contains(errStr, "Error 401"):
252 | 			statusCode = http.StatusUnauthorized
253 | 		case strings.Contains(errStr, "Error 403"):
254 | 			statusCode = http.StatusForbidden
255 | 		}
256 | 
257 | 		if statusCode == http.StatusUnauthorized || statusCode == http.StatusForbidden {
258 | 			if tool.RequiresClientAuthorization() {
259 | 				// Propagate the original 401/403 error.
260 | 				s.logger.DebugContext(ctx, fmt.Sprintf("error invoking tool. Client credentials lack authorization to the source: %v", err))
261 | 				_ = render.Render(w, r, newErrResponse(err, statusCode))
262 | 				return
263 | 			}
264 | 			// ADC lacking permission or credentials configuration error.
265 | 			internalErr := fmt.Errorf("unexpected auth error occurred during Tool invocation: %w", err)
266 | 			s.logger.ErrorContext(ctx, internalErr.Error())
267 | 			_ = render.Render(w, r, newErrResponse(internalErr, http.StatusInternalServerError))
268 | 			return
269 | 		}
270 | 		err = fmt.Errorf("error while invoking tool: %w", err)
271 | 		s.logger.DebugContext(ctx, err.Error())
272 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusBadRequest))
273 | 		return
274 | 	}
275 | 
276 | 	resMarshal, err := json.Marshal(res)
277 | 	if err != nil {
278 | 		err = fmt.Errorf("unable to marshal result: %w", err)
279 | 		s.logger.DebugContext(ctx, err.Error())
280 | 		_ = render.Render(w, r, newErrResponse(err, http.StatusInternalServerError))
281 | 		return
282 | 	}
283 | 
284 | 	_ = render.Render(w, r, &resultResponse{Result: string(resMarshal)})
285 | }
286 | 
287 | var _ render.Renderer = &resultResponse{} // Renderer interface for managing response payloads.
288 | 
289 | // resultResponse is the response sent back when the tool was invoked successfully.
290 | type resultResponse struct {
291 | 	Result string `json:"result"` // result of tool invocation
292 | }
293 | 
294 | // Render renders a single payload and respond to the client request.
295 | func (rr resultResponse) Render(w http.ResponseWriter, r *http.Request) error {
296 | 	render.Status(r, http.StatusOK)
297 | 	return nil
298 | }
299 | 
300 | var _ render.Renderer = &errResponse{} // Renderer interface for managing response payloads.
301 | 
302 | // newErrResponse is a helper function initializing an ErrResponse
303 | func newErrResponse(err error, code int) *errResponse {
304 | 	return &errResponse{
305 | 		Err:            err,
306 | 		HTTPStatusCode: code,
307 | 
308 | 		StatusText: http.StatusText(code),
309 | 		ErrorText:  err.Error(),
310 | 	}
311 | }
312 | 
313 | // errResponse is the response sent back when an error has been encountered.
314 | type errResponse struct {
315 | 	Err            error `json:"-"` // low-level runtime error
316 | 	HTTPStatusCode int   `json:"-"` // http response status code
317 | 
318 | 	StatusText string `json:"status"`          // user-level status message
319 | 	ErrorText  string `json:"error,omitempty"` // application-level error message, for debugging
320 | }
321 | 
322 | func (e *errResponse) Render(w http.ResponseWriter, r *http.Request) error {
323 | 	render.Status(r, e.HTTPStatusCode)
324 | 	return nil
325 | }
326 | 
```

--------------------------------------------------------------------------------
/docs/en/resources/sources/bigquery.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: "BigQuery"
  3 | type: docs
  4 | weight: 1
  5 | description: >
  6 |   BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective
  7 |   analytics data warehouse that lets you run analytics over vast amounts of 
  8 |   data in near real time. With BigQuery, there's no infrastructure to set 
  9 |   up or manage, letting you focus on finding meaningful insights using 
 10 |   GoogleSQL and taking advantage of flexible pricing models across on-demand 
 11 |   and flat-rate options.
 12 | ---
 13 | 
 14 | # BigQuery Source
 15 | 
 16 | [BigQuery][bigquery-docs] is Google Cloud's fully managed, petabyte-scale,
 17 | and cost-effective analytics data warehouse that lets you run analytics
 18 | over vast amounts of data in near real time. With BigQuery, there's no
 19 | infrastructure to set up or manage, letting you focus on finding meaningful
 20 | insights using GoogleSQL and taking advantage of flexible pricing models
 21 | across on-demand and flat-rate options.
 22 | 
 23 | If you are new to BigQuery, you can try to
 24 | [load and query data with the bq tool][bigquery-quickstart-cli].
 25 | 
 26 | BigQuery uses [GoogleSQL][bigquery-googlesql] for querying data. GoogleSQL
 27 | is an ANSI-compliant structured query language (SQL) that is also
 28 | implemented for other Google Cloud services. When writing SQL queries to
 29 | run against your BigQuery data, the usual best practices apply, such as
 30 | avoiding unnecessary full table scans and overly complex filters, which
 31 | helps keep queries fast and cost-effective.
 32 | 
 33 | [bigquery-docs]: https://cloud.google.com/bigquery/docs
 34 | [bigquery-quickstart-cli]:
 35 |     https://cloud.google.com/bigquery/docs/quickstarts/quickstart-command-line
 36 | [bigquery-googlesql]:
 37 |     https://cloud.google.com/bigquery/docs/reference/standard-sql/
 38 | 
 39 | ## Available Tools
 40 | 
 41 | - [`bigquery-analyze-contribution`](../tools/bigquery/bigquery-analyze-contribution.md)
 42 |   Performs contribution analysis, also called key driver analysis, in BigQuery.
 43 | 
 44 | - [`bigquery-conversational-analytics`](../tools/bigquery/bigquery-conversational-analytics.md)
 45 |   Allows conversational interaction with a BigQuery source.
 46 | 
 47 | - [`bigquery-execute-sql`](../tools/bigquery/bigquery-execute-sql.md)  
 48 |   Execute SQL statements supplied as a parameter at invocation time.
 49 | 
 50 | - [`bigquery-forecast`](../tools/bigquery/bigquery-forecast.md)
 51 |   Forecasts time series data in BigQuery.
 52 | 
 53 | - [`bigquery-get-dataset-info`](../tools/bigquery/bigquery-get-dataset-info.md)  
 54 |   Retrieves metadata for a specific dataset.
 55 | 
 56 | - [`bigquery-get-table-info`](../tools/bigquery/bigquery-get-table-info.md)  
 57 |   Retrieves metadata for a specific table.
 58 | 
 59 | - [`bigquery-list-dataset-ids`](../tools/bigquery/bigquery-list-dataset-ids.md)  
 60 |   Lists available dataset IDs.
 61 | 
 62 | - [`bigquery-list-table-ids`](../tools/bigquery/bigquery-list-table-ids.md)  
 63 |   Lists tables in a given dataset.
 64 | 
 65 | - [`bigquery-sql`](../tools/bigquery/bigquery-sql.md)  
 66 |   Runs SQL queries directly against BigQuery datasets.
 67 | 
 68 | - [`bigquery-search-catalog`](../tools/bigquery/bigquery-search_catalog.md)
 69 |   Lists all entries in the Dataplex Catalog (e.g., tables, views, models) that
 70 |   match the given user query.
 71 | 
 72 | ### Pre-built Configurations
 73 | 
 74 | - [BigQuery using
 75 |   MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/bigquery_mcp/)
 76 |   Connect your IDE to BigQuery using Toolbox.
 77 | 
 78 | ## Requirements
 79 | 
 80 | ### IAM Permissions
 81 | 
 82 | BigQuery uses [Identity and Access Management (IAM)][iam-overview] to control
 83 | user and group access to BigQuery resources like projects, datasets, and tables.
 84 | 
 85 | ### Authentication via Application Default Credentials (ADC)
 86 | 
 87 | By **default**, Toolbox will use your [Application Default Credentials
 88 | (ADC)][adc] to authorize and authenticate when interacting with
 89 | [BigQuery][bigquery-docs].
 90 | 
 91 | When using this method, you need to ensure the IAM identity associated with your
 92 | ADC (such as a service account) has the correct permissions for the queries you
 93 | intend to run. Common roles include `roles/bigquery.user` (which includes
 94 | permissions to run jobs and read data) or `roles/bigquery.dataViewer`.
 95 | Follow this [guide][set-adc] to set up your ADC.
 96 | 
 97 | ### Authentication via User's OAuth Access Token
 98 | 
 99 | If the `useClientOAuth` parameter is set to `true`, Toolbox will instead use the
100 | OAuth access token for authentication. This token is parsed from the
101 | `Authorization` header passed in with the tool invocation request. This method
102 | allows Toolbox to make queries to [BigQuery][bigquery-docs] on behalf of the
103 | client or the end-user.
104 | 
105 | When using this on-behalf-of authentication, you must ensure that the
106 | identity used has been granted the correct IAM permissions.
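
Conceptually, the invocation request carries the end-user's token in the `Authorization` header. The sketch below shows the shape of such a request; the endpoint path, tool name, and request body are illustrative assumptions, not part of this page:

```
POST /api/tool/my-bigquery-tool/invoke HTTP/1.1
Host: 127.0.0.1:5000
Authorization: Bearer <end-user's OAuth access token>
Content-Type: application/json

{"state": "TX"}
```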
107 | 
108 | [iam-overview]: <https://cloud.google.com/bigquery/docs/access-control>
109 | [adc]: <https://cloud.google.com/docs/authentication#adc>
110 | [set-adc]: <https://cloud.google.com/docs/authentication/provide-credentials-adc>
111 | 
112 | ## Example
113 | 
114 | Initialize a BigQuery source that uses ADC:
115 | 
116 | ```yaml
117 | sources:
118 |   my-bigquery-source:
119 |     kind: "bigquery"
120 |     project: "my-project-id"
121 |     # location: "US" # Optional: Specifies the location for query jobs.
122 |     # writeMode: "allowed" # One of: allowed, blocked, protected. Defaults to "allowed".
123 |     # allowedDatasets: # Optional: Restricts tool access to a specific list of datasets.
124 |     #   - "my_dataset_1"
125 |     #   - "other_project.my_dataset_2"
126 | ```
127 | 
128 | Initialize a BigQuery source that uses the client's access token:
129 | 
130 | ```yaml
131 | sources:
132 |   my-bigquery-client-auth-source:
133 |     kind: "bigquery"
134 |     project: "my-project-id"
135 |     useClientOAuth: true
136 |     # location: "US" # Optional: Specifies the location for query jobs.
137 |     # writeMode: "allowed" # One of: allowed, blocked, protected. Defaults to "allowed".
138 |     # allowedDatasets: # Optional: Restricts tool access to a specific list of datasets.
139 |     #   - "my_dataset_1"
140 |     #   - "other_project.my_dataset_2"
141 | ```
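
A source is only useful once a tool references it. As a hedged sketch (the tool name, statement, table, and parameter below are illustrative assumptions), a `bigquery-sql` tool built on the ADC source above might be configured as:

```yaml
tools:
  count-rows-by-state:
    kind: bigquery-sql
    source: my-bigquery-source
    description: Counts rows in a table for a given US state.
    statement: |
      SELECT COUNT(*) AS total
      FROM `my_dataset.my_table`  -- hypothetical table
      WHERE state = @state
    parameters:
      - name: state
        type: string
        description: Two-letter US state code.
```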
142 | 
143 | ## Reference
144 | 
145 | | **field**       | **type** | **required** | **description**                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
146 | |-----------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
147 | | kind            |  string  |     true     | Must be "bigquery".                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
 148 | | project         |  string  |     true     | ID of the Google Cloud project to use for billing and as the default project for BigQuery resources.                                                                                                                                                                                                                                                                                                                                                                                                                |
149 | | location        |  string  |    false     | Specifies the location (e.g., 'us', 'asia-northeast1') in which to run the query job. This location must match the location of any tables referenced in the query. Defaults to the table's location or 'US' if the location cannot be determined. [Learn More](https://cloud.google.com/bigquery/docs/locations)                                                                                                                                                                                                    |
150 | | writeMode       |  string  |    false     | Controls the write behavior for tools. `allowed` (default): All queries are permitted. `blocked`: Only `SELECT` statements are allowed for the `bigquery-execute-sql` tool. `protected`: Enables session-based execution where all tools associated with this source instance share the same [BigQuery session](https://cloud.google.com/bigquery/docs/sessions-intro). This allows for stateful operations using temporary tables (e.g., `CREATE TEMP TABLE`). For `bigquery-execute-sql`, `SELECT` statements can be used on all tables, but write operations are restricted to the session's temporary dataset. For tools like `bigquery-sql`, `bigquery-forecast`, and `bigquery-analyze-contribution`, the `writeMode` restrictions do not apply, but they will operate within the shared session. **Note:** The `protected` mode cannot be used with `useClientOAuth: true`. It is also not recommended for multi-user server environments, as all users would share the same session. A session is terminated automatically after 24 hours of inactivity or after 7 days, whichever comes first. A new session is created on the next request, and any temporary data from the previous session will be lost. |
151 | | allowedDatasets | []string |    false     | An optional list of dataset IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a dataset not in this list will be rejected. To enforce this, two types of operations are also disallowed: 1) Dataset-level operations (e.g., `CREATE SCHEMA`), and 2) operations where table access cannot be statically analyzed (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`). If a single dataset is provided, it will be treated as the default for prebuilt tools. |
152 | | useClientOAuth  |   bool   |    false     | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. **Note:** This cannot be used with `writeMode: protected`.                                                                                                                                                                                                                                                                                                                                                |
153 | 
```

--------------------------------------------------------------------------------
/internal/tools/neo4j/neo4jexecutecypher/classifier/classifier_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package classifier
 16 | 
 17 | import (
 18 | 	"reflect"
 19 | 	"sort"
 20 | 	"testing"
 21 | )
 22 | 
 23 | // assertElementsMatch checks if two string slices have the same elements, ignoring order.
 24 | // It serves as a replacement for testify's assert.ElementsMatch.
 25 | func assertElementsMatch(t *testing.T, expected, actual []string, msg string) {
 26 | 	// t.Helper() marks this function as a test helper.
 27 | 	// When t.Errorf is called from this function, the line number of the calling code is reported, not the line number inside this helper.
 28 | 	t.Helper()
 29 | 	if len(expected) == 0 && len(actual) == 0 {
 30 | 		return // Both are empty or nil, they match.
 31 | 	}
 32 | 
 33 | 	// Create copies to sort, leaving the original slices unmodified.
 34 | 	expectedCopy := make([]string, len(expected))
 35 | 	actualCopy := make([]string, len(actual))
 36 | 	copy(expectedCopy, expected)
 37 | 	copy(actualCopy, actual)
 38 | 
 39 | 	sort.Strings(expectedCopy)
 40 | 	sort.Strings(actualCopy)
 41 | 
 42 | 	// reflect.DeepEqual provides a robust comparison for complex types, including sorted slices.
 43 | 	if !reflect.DeepEqual(expectedCopy, actualCopy) {
 44 | 		t.Errorf("%s: \nexpected: %v\n     got: %v", msg, expected, actual)
 45 | 	}
 46 | }
 47 | 
 48 | func TestQueryClassifier_Classify(t *testing.T) {
 49 | 	classifier := NewQueryClassifier()
 50 | 
 51 | 	tests := []struct {
 52 | 		name          string
 53 | 		query         string
 54 | 		expectedType  QueryType
 55 | 		expectedWrite []string
 56 | 		expectedRead  []string
 57 | 		minConfidence float64
 58 | 	}{
 59 | 		// Read queries
 60 | 		{
 61 | 			name:          "simple MATCH query",
 62 | 			query:         "MATCH (n:Person) RETURN n",
 63 | 			expectedType:  ReadQuery,
 64 | 			expectedRead:  []string{"MATCH", "RETURN"},
 65 | 			expectedWrite: []string{},
 66 | 			minConfidence: 1.0,
 67 | 		},
 68 | 		{
 69 | 			name:          "complex read query",
 70 | 			query:         "MATCH (p:Person)-[:KNOWS]->(f) WHERE p.age > 30 RETURN p.name, count(f) ORDER BY p.name SKIP 10 LIMIT 5",
 71 | 			expectedType:  ReadQuery,
 72 | 			expectedRead:  []string{"MATCH", "WHERE", "RETURN", "ORDER_BY", "SKIP", "LIMIT"},
 73 | 			expectedWrite: []string{},
 74 | 			minConfidence: 1.0,
 75 | 		},
 76 | 		{
 77 | 			name:          "UNION query",
 78 | 			query:         "MATCH (n:Person) RETURN n.name UNION MATCH (m:Company) RETURN m.name",
 79 | 			expectedType:  ReadQuery,
 80 | 			expectedRead:  []string{"MATCH", "RETURN", "UNION", "MATCH", "RETURN"},
 81 | 			expectedWrite: []string{},
 82 | 			minConfidence: 1.0,
 83 | 		},
 84 | 
 85 | 		// Write queries
 86 | 		{
 87 | 			name:          "CREATE query",
 88 | 			query:         "CREATE (n:Person {name: 'John', age: 30})",
 89 | 			expectedType:  WriteQuery,
 90 | 			expectedWrite: []string{"CREATE"},
 91 | 			expectedRead:  []string{},
 92 | 			minConfidence: 1.0,
 93 | 		},
 94 | 		{
 95 | 			name:          "MERGE query",
 96 | 			query:         "MERGE (n:Person {id: 123}) ON CREATE SET n.created = timestamp()",
 97 | 			expectedType:  WriteQuery,
 98 | 			expectedWrite: []string{"MERGE", "CREATE", "SET"},
 99 | 			expectedRead:  []string{},
100 | 			minConfidence: 1.0,
101 | 		},
102 | 		{
103 | 			name:          "DETACH DELETE query",
104 | 			query:         "MATCH (n:Person) DETACH DELETE n",
105 | 			expectedType:  WriteQuery,
106 | 			expectedWrite: []string{"DETACH_DELETE"},
107 | 			expectedRead:  []string{"MATCH"},
108 | 			minConfidence: 0.9,
109 | 		},
110 | 
111 | 		// Procedure calls
112 | 		{
113 | 			name:          "read procedure",
114 | 			query:         "CALL db.labels() YIELD label RETURN label",
115 | 			expectedType:  ReadQuery,
116 | 			expectedRead:  []string{"RETURN", "CALL db.labels"},
117 | 			expectedWrite: []string{},
118 | 			minConfidence: 1.0,
119 | 		},
120 | 		{
121 | 			name:          "unknown procedure conservative",
122 | 			query:         "CALL custom.procedure.doSomething()",
123 | 			expectedType:  WriteQuery,
124 | 			expectedWrite: []string{"CALL custom.procedure.dosomething"},
125 | 			expectedRead:  []string{},
126 | 			minConfidence: 0.8,
127 | 		},
128 | 		{
129 | 			name:          "unknown read-like procedure",
130 | 			query:         "CALL custom.procedure.getUsers()",
131 | 			expectedType:  ReadQuery,
132 | 			expectedRead:  []string{"CALL custom.procedure.getusers"},
133 | 			expectedWrite: []string{},
134 | 			minConfidence: 1.0,
135 | 		},
136 | 
137 | 		// Subqueries
138 | 		{
139 | 			name:          "read subquery",
140 | 			query:         "CALL { MATCH (n:Person) RETURN n } RETURN n",
141 | 			expectedType:  ReadQuery,
142 | 			expectedRead:  []string{"MATCH", "RETURN", "RETURN"},
143 | 			expectedWrite: []string{},
144 | 			minConfidence: 1.0,
145 | 		},
146 | 		{
147 | 			name:          "write subquery",
148 | 			query:         "CALL { CREATE (n:Person) RETURN n } RETURN n",
149 | 			expectedType:  WriteQuery,
150 | 			expectedWrite: []string{"CREATE", "WRITE_IN_SUBQUERY"},
151 | 			expectedRead:  []string{"RETURN", "RETURN"},
152 | 			minConfidence: 0.9,
153 | 		},
154 | 
155 | 		// Multiline Queries
156 | 		{
157 | 			name: "multiline read query with comments",
158 | 			query: `
159 | 				// Find all people and their friends
160 | 				MATCH (p:Person)-[:KNOWS]->(f:Friend)
161 | 				/*
162 | 				  Where the person is older than 25
163 | 				*/
164 | 				WHERE p.age > 25
165 | 				RETURN p.name, f.name
166 | 			`,
167 | 			expectedType:  ReadQuery,
168 | 			expectedWrite: []string{},
169 | 			expectedRead:  []string{"MATCH", "WHERE", "RETURN"},
170 | 			minConfidence: 1.0,
171 | 		},
172 | 		{
173 | 			name: "multiline write query",
174 | 			query: `
175 | 				MATCH (p:Person {name: 'Alice'})
176 | 				CREATE (c:Company {name: 'Neo4j'})
177 | 				CREATE (p)-[:WORKS_FOR]->(c)
178 | 			`,
179 | 			expectedType:  WriteQuery,
180 | 			expectedWrite: []string{"CREATE", "CREATE"},
181 | 			expectedRead:  []string{"MATCH"},
182 | 			minConfidence: 0.9,
183 | 		},
184 | 
185 | 		// Complex Subqueries
186 | 		{
187 | 			name: "nested read subquery",
188 | 			query: `
189 | 				CALL {
190 | 					MATCH (p:Person)
191 | 					RETURN p
192 | 				}
193 | 				CALL {
194 | 					MATCH (c:Company)
195 | 					RETURN c
196 | 				}
197 | 				RETURN p, c
198 | 			`,
199 | 			expectedType:  ReadQuery,
200 | 			expectedWrite: []string{},
201 | 			expectedRead:  []string{"MATCH", "RETURN", "MATCH", "RETURN", "RETURN"},
202 | 			minConfidence: 1.0,
203 | 		},
204 | 		{
205 | 			name: "subquery with write and outer read",
206 | 			query: `
207 | 				MATCH (u:User {id: 1})
208 | 				CALL {
209 | 					WITH u
210 | 					CREATE (p:Post {content: 'New post'})
211 | 					CREATE (u)-[:AUTHORED]->(p)
212 | 					RETURN p
213 | 				}
214 | 				RETURN u.name, p.content
215 | 			`,
216 | 			expectedType:  WriteQuery,
217 | 			expectedWrite: []string{"CREATE", "CREATE", "WRITE_IN_SUBQUERY"},
218 | 			expectedRead:  []string{"MATCH", "WITH", "RETURN", "RETURN"},
219 | 			minConfidence: 0.9,
220 | 		},
221 | 		{
222 | 			name: "subquery with read passing to outer write",
223 | 			query: `
224 | 				CALL {
225 | 					MATCH (p:Product {id: 'abc'})
226 | 					RETURN p
227 | 				}
228 | 				WITH p
229 | 				SET p.lastViewed = timestamp()
230 | 			`,
231 | 			expectedType:  WriteQuery,
232 | 			expectedWrite: []string{"SET"},
233 | 			expectedRead:  []string{"MATCH", "RETURN", "WITH"},
234 | 			minConfidence: 0.9,
235 | 		},
236 | 	}
237 | 
238 | 	for _, tt := range tests {
239 | 		t.Run(tt.name, func(t *testing.T) {
240 | 			result := classifier.Classify(tt.query)
241 | 
242 | 			if tt.expectedType != result.Type {
243 | 				t.Errorf("Query type mismatch: expected %v, got %v", tt.expectedType, result.Type)
244 | 			}
245 | 			if result.Confidence < tt.minConfidence {
246 | 				t.Errorf("Confidence too low: expected at least %f, got %f", tt.minConfidence, result.Confidence)
247 | 			}
248 | 			assertElementsMatch(t, tt.expectedWrite, result.WriteTokens, "Write tokens mismatch")
249 | 			assertElementsMatch(t, tt.expectedRead, result.ReadTokens, "Read tokens mismatch")
250 | 		})
251 | 	}
252 | }
253 | 
254 | func TestQueryClassifier_AbuseCases(t *testing.T) {
255 | 	classifier := NewQueryClassifier()
256 | 
257 | 	tests := []struct {
258 | 		name          string
259 | 		query         string
260 | 		expectedType  QueryType
261 | 		expectedWrite []string
262 | 		expectedRead  []string
263 | 	}{
264 | 		{
265 | 			name:          "write keyword in a string literal",
266 | 			query:         `MATCH (n) WHERE n.name = 'MERGE (m)' RETURN n`,
267 | 			expectedType:  ReadQuery,
268 | 			expectedWrite: []string{},
269 | 			expectedRead:  []string{"MATCH", "WHERE", "RETURN"},
270 | 		},
271 | 		{
272 | 			name:          "incomplete SET clause",
273 | 			query:         `MATCH (n) SET`,
274 | 			expectedType:  WriteQuery,
275 | 			expectedWrite: []string{"SET"},
276 | 			expectedRead:  []string{"MATCH"},
277 | 		},
278 | 		{
279 | 			name:          "keyword as a node label",
280 | 			query:         `MATCH (n:CREATE) RETURN n`,
281 | 			expectedType:  ReadQuery,
282 | 			expectedWrite: []string{}, // 'CREATE' should be seen as an identifier, not a keyword
283 | 			expectedRead:  []string{"MATCH", "RETURN"},
284 | 		},
285 | 		{
286 | 			name:          "unbalanced parentheses",
287 | 			query:         `MATCH (n:Person RETURN n`,
288 | 			expectedType:  ReadQuery,
289 | 			expectedWrite: []string{},
290 | 			expectedRead:  []string{"MATCH", "RETURN"},
291 | 		},
292 | 		{
293 | 			name:          "unclosed curly brace in subquery",
294 | 			query:         `CALL { MATCH (n) CREATE (m)`,
295 | 			expectedType:  WriteQuery,
296 | 			expectedWrite: []string{"CREATE", "WRITE_IN_SUBQUERY"},
297 | 			expectedRead:  []string{"MATCH"},
298 | 		},
299 | 		{
300 | 			name:          "semicolon inside a query part",
301 | 			query:         `MATCH (n;Person) RETURN n`,
302 | 			expectedType:  ReadQuery,
303 | 			expectedWrite: []string{},
304 | 			expectedRead:  []string{"MATCH", "RETURN"},
305 | 		},
306 | 		{
307 | 			name:         "jumbled keywords without proper syntax",
308 | 			query:        `RETURN CREATE MATCH DELETE`,
309 | 			expectedType: WriteQuery,
310 | 			// The classifier's job is to find the tokens, not validate the syntax.
311 | 			// It should find both read and write tokens.
312 | 			expectedWrite: []string{"CREATE", "DELETE"},
313 | 			expectedRead:  []string{"RETURN", "MATCH"},
314 | 		},
315 | 		{
316 | 			name: "write in a nested subquery",
317 | 			query: `
318 | 				CALL {
319 | 					MATCH (a)
320 | 					CALL {
321 | 						CREATE (b:Thing)
322 | 					}
323 | 					RETURN a
324 | 				}
325 | 				RETURN "done"
326 | 			`,
327 | 			expectedType:  WriteQuery,
328 | 			expectedWrite: []string{"CREATE", "WRITE_IN_SUBQUERY"},
329 | 			expectedRead:  []string{"MATCH", "RETURN", "RETURN"},
330 | 		},
331 | 	}
332 | 
333 | 	for _, tt := range tests {
334 | 		t.Run(tt.name, func(t *testing.T) {
335 | 			// This defer-recover block ensures the test fails gracefully if the Classify function panics,
336 | 			// which was the goal of the original assert.NotPanics call.
337 | 			defer func() {
338 | 				if r := recover(); r != nil {
339 | 					t.Fatalf("The code panicked on test '%s': %v", tt.name, r)
340 | 				}
341 | 			}()
342 | 
343 | 			result := classifier.Classify(tt.query)
344 | 			if tt.expectedType != result.Type {
345 | 				t.Errorf("Query type mismatch: expected %v, got %v", tt.expectedType, result.Type)
346 | 			}
347 | 			if tt.expectedWrite != nil {
348 | 				assertElementsMatch(t, tt.expectedWrite, result.WriteTokens, "Write tokens mismatch")
349 | 			}
350 | 			if tt.expectedRead != nil {
351 | 				assertElementsMatch(t, tt.expectedRead, result.ReadTokens, "Read tokens mismatch")
352 | 			}
353 | 		})
354 | 	}
355 | }
356 | 
357 | func TestNormalizeQuery(t *testing.T) {
358 | 	classifier := NewQueryClassifier()
359 | 	t.Run("single line comment", func(t *testing.T) {
360 | 		input := "MATCH (n) // comment\nRETURN n"
361 | 		expected := "MATCH (n) RETURN n"
362 | 		result := classifier.normalizeQuery(input)
363 | 		if expected != result {
364 | 			t.Errorf("normalizeQuery failed:\nexpected: %q\n     got: %q", expected, result)
365 | 		}
366 | 	})
367 | }
368 | 
```

--------------------------------------------------------------------------------
/internal/tools/firestore/util/converter_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package util
 16 | 
 17 | import (
 18 | 	"bytes"
 19 | 	"encoding/base64"
 20 | 	"encoding/json"
 21 | 	"strings"
 22 | 	"testing"
 23 | 	"time"
 24 | 
 25 | 	"google.golang.org/genproto/googleapis/type/latlng"
 26 | )
 27 | 
 28 | func TestJSONToFirestoreValue_ComplexDocument(t *testing.T) {
 29 | 	// This is the exact JSON format provided by the user
 30 | 	jsonData := `{
 31 | 		"name": {
 32 | 			"stringValue": "Acme Corporation"
 33 | 		},
 34 | 		"establishmentDate": {
 35 | 			"timestampValue": "2000-01-15T10:30:00Z"
 36 | 		},
 37 | 		"location": {
 38 | 			"geoPointValue": {
 39 | 				"latitude": 34.052235,
 40 | 				"longitude": -118.243683
 41 | 			}
 42 | 		},
 43 | 		"active": {
 44 | 			"booleanValue": true
 45 | 		},
 46 | 		"employeeCount": {
 47 | 			"integerValue": "1500"
 48 | 		},
 49 | 		"annualRevenue": {
 50 | 			"doubleValue": 1234567.89
 51 | 		},
 52 | 		"website": {
 53 | 			"stringValue": "https://www.acmecorp.com"
 54 | 		},
 55 | 		"contactInfo": {
 56 | 			"mapValue": {
 57 | 				"fields": {
 58 | 					"email": {
 59 | 						"stringValue": "[email protected]"
 60 | 					},
 61 | 					"phone": {
 62 | 						"stringValue": "+1-555-123-4567"
 63 | 					},
 64 | 					"address": {
 65 | 						"mapValue": {
 66 | 							"fields": {
 67 | 								"street": {
 68 | 									"stringValue": "123 Business Blvd"
 69 | 								},
 70 | 								"city": {
 71 | 									"stringValue": "Los Angeles"
 72 | 								},
 73 | 								"state": {
 74 | 									"stringValue": "CA"
 75 | 								},
 76 | 								"zipCode": {
 77 | 									"stringValue": "90012"
 78 | 								}
 79 | 							}
 80 | 						}
 81 | 					}
 82 | 				}
 83 | 			}
 84 | 		},
 85 | 		"products": {
 86 | 			"arrayValue": {
 87 | 				"values": [
 88 | 					{
 89 | 						"stringValue": "Product A"
 90 | 					},
 91 | 					{
 92 | 						"stringValue": "Product B"
 93 | 					},
 94 | 					{
 95 | 						"mapValue": {
 96 | 							"fields": {
 97 | 								"productName": {
 98 | 									"stringValue": "Product C Deluxe"
 99 | 								},
100 | 								"version": {
101 | 									"integerValue": "2"
102 | 								},
103 | 								"features": {
104 | 									"arrayValue": {
105 | 										"values": [
106 | 											{
107 | 												"stringValue": "Feature X"
108 | 											},
109 | 											{
110 | 												"stringValue": "Feature Y"
111 | 											}
112 | 										]
113 | 									}
114 | 								}
115 | 							}
116 | 						}
117 | 					}
118 | 				]
119 | 			}
120 | 		},
121 | 		"notes": {
122 | 			"nullValue": null
123 | 		},
124 | 		"lastUpdated": {
125 | 			"timestampValue": "2025-07-30T11:47:59.000Z"
126 | 		},
127 | 		"binaryData": {
128 | 			"bytesValue": "SGVsbG8gV29ybGQh"
129 | 		}
130 | 	}`
131 | 
132 | 	// Parse JSON
133 | 	var data interface{}
134 | 	err := json.Unmarshal([]byte(jsonData), &data)
135 | 	if err != nil {
136 | 		t.Fatalf("Failed to unmarshal JSON: %v", err)
137 | 	}
138 | 
139 | 	// Convert to Firestore format
140 | 	result, err := JSONToFirestoreValue(data, nil)
141 | 	if err != nil {
142 | 		t.Fatalf("Failed to convert JSON to Firestore value: %v", err)
143 | 	}
144 | 
145 | 	// Verify the result is a map
146 | 	resultMap, ok := result.(map[string]interface{})
147 | 	if !ok {
148 | 		t.Fatalf("Result should be a map, got %T", result)
149 | 	}
150 | 
151 | 	// Verify string values
152 | 	if resultMap["name"] != "Acme Corporation" {
153 | 		t.Errorf("Expected name 'Acme Corporation', got %v", resultMap["name"])
154 | 	}
155 | 	if resultMap["website"] != "https://www.acmecorp.com" {
156 | 		t.Errorf("Expected website 'https://www.acmecorp.com', got %v", resultMap["website"])
157 | 	}
158 | 
159 | 	// Verify timestamp
160 | 	establishmentDate, ok := resultMap["establishmentDate"].(time.Time)
161 | 	if !ok {
162 | 		t.Fatalf("establishmentDate should be time.Time, got %T", resultMap["establishmentDate"])
163 | 	}
164 | 	expectedDate, _ := time.Parse(time.RFC3339, "2000-01-15T10:30:00Z")
165 | 	if !establishmentDate.Equal(expectedDate) {
166 | 		t.Errorf("Expected date %v, got %v", expectedDate, establishmentDate)
167 | 	}
168 | 
169 | 	// Verify geopoint
170 | 	location, ok := resultMap["location"].(*latlng.LatLng)
171 | 	if !ok {
172 | 		t.Fatalf("location should be *latlng.LatLng, got %T", resultMap["location"])
173 | 	}
174 | 	if location.Latitude != 34.052235 {
175 | 		t.Errorf("Expected latitude 34.052235, got %v", location.Latitude)
176 | 	}
177 | 	if location.Longitude != -118.243683 {
178 | 		t.Errorf("Expected longitude -118.243683, got %v", location.Longitude)
179 | 	}
180 | 
181 | 	// Verify boolean
182 | 	if resultMap["active"] != true {
183 | 		t.Errorf("Expected active true, got %v", resultMap["active"])
184 | 	}
185 | 
186 | 	// Verify integer (should be int64)
187 | 	employeeCount, ok := resultMap["employeeCount"].(int64)
188 | 	if !ok {
189 | 		t.Fatalf("employeeCount should be int64, got %T", resultMap["employeeCount"])
190 | 	}
191 | 	if employeeCount != int64(1500) {
192 | 		t.Errorf("Expected employeeCount 1500, got %v", employeeCount)
193 | 	}
194 | 
195 | 	// Verify double
196 | 	annualRevenue, ok := resultMap["annualRevenue"].(float64)
197 | 	if !ok {
198 | 		t.Fatalf("annualRevenue should be float64, got %T", resultMap["annualRevenue"])
199 | 	}
200 | 	if annualRevenue != 1234567.89 {
201 | 		t.Errorf("Expected annualRevenue 1234567.89, got %v", annualRevenue)
202 | 	}
203 | 
204 | 	// Verify nested map
205 | 	contactInfo, ok := resultMap["contactInfo"].(map[string]interface{})
206 | 	if !ok {
207 | 		t.Fatalf("contactInfo should be a map, got %T", resultMap["contactInfo"])
208 | 	}
209 | 	if contactInfo["email"] != "[email protected]" {
210 | 		t.Errorf("Expected email '[email protected]', got %v", contactInfo["email"])
211 | 	}
212 | 	if contactInfo["phone"] != "+1-555-123-4567" {
213 | 		t.Errorf("Expected phone '+1-555-123-4567', got %v", contactInfo["phone"])
214 | 	}
215 | 
216 | 	// Verify nested nested map
217 | 	address, ok := contactInfo["address"].(map[string]interface{})
218 | 	if !ok {
219 | 		t.Fatalf("address should be a map, got %T", contactInfo["address"])
220 | 	}
221 | 	if address["street"] != "123 Business Blvd" {
222 | 		t.Errorf("Expected street '123 Business Blvd', got %v", address["street"])
223 | 	}
224 | 	if address["city"] != "Los Angeles" {
225 | 		t.Errorf("Expected city 'Los Angeles', got %v", address["city"])
226 | 	}
227 | 	if address["state"] != "CA" {
228 | 		t.Errorf("Expected state 'CA', got %v", address["state"])
229 | 	}
230 | 	if address["zipCode"] != "90012" {
231 | 		t.Errorf("Expected zipCode '90012', got %v", address["zipCode"])
232 | 	}
233 | 
234 | 	// Verify array
235 | 	products, ok := resultMap["products"].([]interface{})
236 | 	if !ok {
237 | 		t.Fatalf("products should be an array, got %T", resultMap["products"])
238 | 	}
239 | 	if len(products) != 3 {
240 | 		t.Errorf("Expected 3 products, got %d", len(products))
241 | 	}
242 | 	if products[0] != "Product A" {
243 | 		t.Errorf("Expected products[0] 'Product A', got %v", products[0])
244 | 	}
245 | 	if products[1] != "Product B" {
246 | 		t.Errorf("Expected products[1] 'Product B', got %v", products[1])
247 | 	}
248 | 
249 | 	// Verify complex item in array
250 | 	product3, ok := products[2].(map[string]interface{})
251 | 	if !ok {
252 | 		t.Fatalf("products[2] should be a map, got %T", products[2])
253 | 	}
254 | 	if product3["productName"] != "Product C Deluxe" {
255 | 		t.Errorf("Expected productName 'Product C Deluxe', got %v", product3["productName"])
256 | 	}
257 | 	version, ok := product3["version"].(int64)
258 | 	if !ok {
259 | 		t.Fatalf("version should be int64, got %T", product3["version"])
260 | 	}
261 | 	if version != int64(2) {
262 | 		t.Errorf("Expected version 2, got %v", version)
263 | 	}
264 | 
265 | 	features, ok := product3["features"].([]interface{})
266 | 	if !ok {
267 | 		t.Fatalf("features should be an array, got %T", product3["features"])
268 | 	}
269 | 	if len(features) != 2 {
270 | 		t.Errorf("Expected 2 features, got %d", len(features))
271 | 	}
272 | 	if features[0] != "Feature X" {
273 | 		t.Errorf("Expected features[0] 'Feature X', got %v", features[0])
274 | 	}
275 | 	if features[1] != "Feature Y" {
276 | 		t.Errorf("Expected features[1] 'Feature Y', got %v", features[1])
277 | 	}
278 | 
279 | 	// Verify null value
280 | 	if resultMap["notes"] != nil {
281 | 		t.Errorf("Expected notes to be nil, got %v", resultMap["notes"])
282 | 	}
283 | 
284 | 	// Verify bytes
285 | 	binaryData, ok := resultMap["binaryData"].([]byte)
286 | 	if !ok {
287 | 		t.Fatalf("binaryData should be []byte, got %T", resultMap["binaryData"])
288 | 	}
289 | 	expectedBytes, _ := base64.StdEncoding.DecodeString("SGVsbG8gV29ybGQh")
290 | 	if !bytes.Equal(binaryData, expectedBytes) {
291 | 		t.Errorf("Expected bytes %v, got %v", expectedBytes, binaryData)
292 | 	}
293 | }
294 | 
295 | func TestJSONToFirestoreValue_IntegerFromString(t *testing.T) {
296 | 	// Test that integerValue as string gets converted to int64
297 | 	data := map[string]interface{}{
298 | 		"integerValue": "1500",
299 | 	}
300 | 
301 | 	result, err := JSONToFirestoreValue(data, nil)
302 | 	if err != nil {
303 | 		t.Fatalf("Failed to convert: %v", err)
304 | 	}
305 | 
306 | 	intVal, ok := result.(int64)
307 | 	if !ok {
308 | 		t.Fatalf("Result should be int64, got %T", result)
309 | 	}
310 | 	if intVal != int64(1500) {
311 | 		t.Errorf("Expected 1500, got %v", intVal)
312 | 	}
313 | }
314 | 
315 | func TestFirestoreValueToJSON_RoundTrip(t *testing.T) {
316 | 	// Test round-trip conversion
317 | 	original := map[string]interface{}{
318 | 		"name":   "Test",
319 | 		"count":  int64(42),
320 | 		"price":  19.99,
321 | 		"active": true,
322 | 		"tags":   []interface{}{"tag1", "tag2"},
323 | 		"metadata": map[string]interface{}{
324 | 			"created": time.Now(),
325 | 		},
326 | 		"nullField": nil,
327 | 	}
328 | 
329 | 	// Convert to JSON representation
330 | 	jsonRepresentation := FirestoreValueToJSON(original)
331 | 
332 | 	// Verify types are simplified
333 | 	jsonMap, ok := jsonRepresentation.(map[string]interface{})
334 | 	if !ok {
335 | 		t.Fatalf("Expected map, got %T", jsonRepresentation)
336 | 	}
337 | 
338 | 	// Time should be converted to string
339 | 	metadata, ok := jsonMap["metadata"].(map[string]interface{})
340 | 	if !ok {
341 | 		t.Fatalf("metadata should be a map, got %T", jsonMap["metadata"])
342 | 	}
343 | 	_, ok = metadata["created"].(string)
344 | 	if !ok {
345 | 		t.Errorf("created should be a string, got %T", metadata["created"])
346 | 	}
347 | }
348 | 
349 | func TestJSONToFirestoreValue_InvalidFormats(t *testing.T) {
350 | 	tests := []struct {
351 | 		name    string
352 | 		input   interface{}
353 | 		wantErr bool
354 | 		errMsg  string
355 | 	}{
356 | 		{
357 | 			name: "invalid integer value",
358 | 			input: map[string]interface{}{
359 | 				"integerValue": "not-a-number",
360 | 			},
361 | 			wantErr: true,
362 | 			errMsg:  "invalid integer value",
363 | 		},
364 | 		{
365 | 			name: "invalid timestamp",
366 | 			input: map[string]interface{}{
367 | 				"timestampValue": "not-a-timestamp",
368 | 			},
369 | 			wantErr: true,
370 | 			errMsg:  "invalid timestamp format",
371 | 		},
372 | 		{
373 | 			name: "invalid geopoint - missing latitude",
374 | 			input: map[string]interface{}{
375 | 				"geoPointValue": map[string]interface{}{
376 | 					"longitude": -118.243683,
377 | 				},
378 | 			},
379 | 			wantErr: true,
380 | 			errMsg:  "invalid geopoint value format",
381 | 		},
382 | 		{
383 | 			name: "invalid array format",
384 | 			input: map[string]interface{}{
385 | 				"arrayValue": "not-an-array",
386 | 			},
387 | 			wantErr: true,
388 | 			errMsg:  "invalid array value format",
389 | 		},
390 | 		{
391 | 			name: "invalid map format",
392 | 			input: map[string]interface{}{
393 | 				"mapValue": "not-a-map",
394 | 			},
395 | 			wantErr: true,
396 | 			errMsg:  "invalid map value format",
397 | 		},
398 | 		{
399 | 			name: "invalid bytes - not base64",
400 | 			input: map[string]interface{}{
401 | 				"bytesValue": "!!!not-base64!!!",
402 | 			},
403 | 			wantErr: true,
404 | 		},
405 | 	}
406 | 
407 | 	for _, tt := range tests {
408 | 		t.Run(tt.name, func(t *testing.T) {
409 | 			_, err := JSONToFirestoreValue(tt.input, nil)
410 | 			if tt.wantErr {
411 | 				if err == nil {
412 | 					t.Errorf("Expected error but got none")
413 | 				} else if tt.errMsg != "" && !strings.Contains(err.Error(), tt.errMsg) {
414 | 					t.Errorf("Expected error containing '%s', got '%v'", tt.errMsg, err)
415 | 				}
416 | 			} else {
417 | 				if err != nil {
418 | 					t.Errorf("Unexpected error: %v", err)
419 | 				}
420 | 			}
421 | 		})
422 | 	}
423 | }
424 | 
```

--------------------------------------------------------------------------------
/internal/tools/bigquery/bigquerysql/bigquerysql.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package bigquerysql
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"fmt"
 20 | 	"reflect"
 21 | 	"strings"
 22 | 
 23 | 	bigqueryapi "cloud.google.com/go/bigquery"
 24 | 	yaml "github.com/goccy/go-yaml"
 25 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 26 | 
 27 | 	bigqueryds "github.com/googleapis/genai-toolbox/internal/sources/bigquery"
 28 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 29 | 	bqutil "github.com/googleapis/genai-toolbox/internal/tools/bigquery/bigquerycommon"
 30 | 	bigqueryrestapi "google.golang.org/api/bigquery/v2"
 31 | 	"google.golang.org/api/iterator"
 32 | )
 33 | 
 34 | const kind string = "bigquery-sql"
 35 | 
 36 | func init() {
 37 | 	if !tools.Register(kind, newConfig) {
 38 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 39 | 	}
 40 | }
 41 | 
 42 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 43 | 	actual := Config{Name: name}
 44 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 45 | 		return nil, err
 46 | 	}
 47 | 	return actual, nil
 48 | }
 49 | 
 50 | type compatibleSource interface {
 51 | 	BigQueryClient() *bigqueryapi.Client
 52 | 	BigQuerySession() bigqueryds.BigQuerySessionProvider
 53 | 	BigQueryWriteMode() string
 54 | 	BigQueryRestService() *bigqueryrestapi.Service
 55 | 	BigQueryClientCreator() bigqueryds.BigqueryClientCreator
 56 | 	UseClientAuthorization() bool
 57 | }
 58 | 
 59 | // validate compatible sources are still compatible
 60 | var _ compatibleSource = &bigqueryds.Source{}
 61 | 
 62 | var compatibleSources = [...]string{bigqueryds.SourceKind}
 63 | 
 64 | type Config struct {
 65 | 	Name               string           `yaml:"name" validate:"required"`
 66 | 	Kind               string           `yaml:"kind" validate:"required"`
 67 | 	Source             string           `yaml:"source" validate:"required"`
 68 | 	Description        string           `yaml:"description" validate:"required"`
 69 | 	Statement          string           `yaml:"statement" validate:"required"`
 70 | 	AuthRequired       []string         `yaml:"authRequired"`
 71 | 	Parameters         tools.Parameters `yaml:"parameters"`
 72 | 	TemplateParameters tools.Parameters `yaml:"templateParameters"`
 73 | }
 74 | 
 75 | // validate interface
 76 | var _ tools.ToolConfig = Config{}
 77 | 
 78 | func (cfg Config) ToolConfigKind() string {
 79 | 	return kind
 80 | }
 81 | 
 82 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
 83 | 	// verify source exists
 84 | 	rawS, ok := srcs[cfg.Source]
 85 | 	if !ok {
 86 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
 87 | 	}
 88 | 
 89 | 	// verify the source is compatible
 90 | 	s, ok := rawS.(compatibleSource)
 91 | 	if !ok {
 92 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
 93 | 	}
 94 | 
 95 | 	allParameters, paramManifest, err := tools.ProcessParameters(cfg.TemplateParameters, cfg.Parameters)
 96 | 	if err != nil {
 97 | 		return nil, err
 98 | 	}
 99 | 
100 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, allParameters)
101 | 
102 | 	// finish tool setup
103 | 	t := Tool{
104 | 		Name:               cfg.Name,
105 | 		Kind:               kind,
106 | 		AuthRequired:       cfg.AuthRequired,
107 | 		Parameters:         cfg.Parameters,
108 | 		TemplateParameters: cfg.TemplateParameters,
109 | 		AllParams:          allParameters,
110 | 
111 | 		Statement:       cfg.Statement,
112 | 		UseClientOAuth:  s.UseClientAuthorization(),
113 | 		Client:          s.BigQueryClient(),
114 | 		RestService:     s.BigQueryRestService(),
115 | 		SessionProvider: s.BigQuerySession(),
116 | 		ClientCreator:   s.BigQueryClientCreator(),
117 | 		manifest:        tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
118 | 		mcpManifest:     mcpManifest,
119 | 	}
120 | 	return t, nil
121 | }
122 | 
123 | // validate interface
124 | var _ tools.Tool = Tool{}
125 | 
126 | type Tool struct {
127 | 	Name               string           `yaml:"name"`
128 | 	Kind               string           `yaml:"kind"`
129 | 	AuthRequired       []string         `yaml:"authRequired"`
130 | 	UseClientOAuth     bool             `yaml:"useClientOAuth"`
131 | 	Parameters         tools.Parameters `yaml:"parameters"`
132 | 	TemplateParameters tools.Parameters `yaml:"templateParameters"`
133 | 	AllParams          tools.Parameters `yaml:"allParams"`
134 | 
135 | 	Statement       string
136 | 	Client          *bigqueryapi.Client
137 | 	RestService     *bigqueryrestapi.Service
138 | 	SessionProvider bigqueryds.BigQuerySessionProvider
139 | 	ClientCreator   bigqueryds.BigqueryClientCreator
140 | 	manifest        tools.Manifest
141 | 	mcpManifest     tools.McpManifest
142 | }
143 | 
144 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
145 | 	highLevelParams := make([]bigqueryapi.QueryParameter, 0, len(t.Parameters))
146 | 	lowLevelParams := make([]*bigqueryrestapi.QueryParameter, 0, len(t.Parameters))
147 | 
148 | 	paramsMap := params.AsMap()
149 | 	newStatement, err := tools.ResolveTemplateParams(t.TemplateParameters, t.Statement, paramsMap)
150 | 	if err != nil {
151 | 		return nil, fmt.Errorf("unable to extract template params %w", err)
152 | 	}
153 | 
154 | 	for _, p := range t.Parameters {
155 | 		name := p.GetName()
156 | 		value := paramsMap[name]
157 | 
158 | 		// Convert []any parameter values into typed slices for array parameters.
159 | 		if arrayParam, ok := p.(*tools.ArrayParameter); ok {
160 | 			arrayParamValue, ok := value.([]any)
161 | 			if !ok {
162 | 				return nil, fmt.Errorf("unable to convert parameter `%s` to []any", name)
163 | 			}
164 | 			itemType := arrayParam.GetItems().GetType()
165 | 			var err error
166 | 			value, err = tools.ConvertAnySliceToTyped(arrayParamValue, itemType)
167 | 			if err != nil {
168 | 				return nil, fmt.Errorf("unable to convert parameter `%s` from []any to typed slice: %w", name, err)
169 | 			}
170 | 		}
171 | 
172 | 		// Determine if the parameter is named or positional for the high-level client.
173 | 		var paramNameForHighLevel string
174 | 		if strings.Contains(newStatement, "@"+name) {
175 | 			paramNameForHighLevel = name
176 | 		}
177 | 
178 | 		// 1. Create the high-level parameter for the final query execution.
179 | 		highLevelParams = append(highLevelParams, bigqueryapi.QueryParameter{
180 | 			Name:  paramNameForHighLevel,
181 | 			Value: value,
182 | 		})
183 | 
184 | 		// 2. Create the low-level parameter for the dry run, using the defined type from `p`.
185 | 		lowLevelParam := &bigqueryrestapi.QueryParameter{
186 | 			Name:           paramNameForHighLevel,
187 | 			ParameterType:  &bigqueryrestapi.QueryParameterType{},
188 | 			ParameterValue: &bigqueryrestapi.QueryParameterValue{},
189 | 		}
190 | 
191 | 		if arrayParam, ok := p.(*tools.ArrayParameter); ok {
192 | 			// Handle array types based on their defined item type.
193 | 			lowLevelParam.ParameterType.Type = "ARRAY"
194 | 			itemType, err := bqutil.BQTypeStringFromToolType(arrayParam.GetItems().GetType())
195 | 			if err != nil {
196 | 				return nil, err
197 | 			}
198 | 			lowLevelParam.ParameterType.ArrayType = &bigqueryrestapi.QueryParameterType{Type: itemType}
199 | 
200 | 			// Build the array values.
201 | 			sliceVal := reflect.ValueOf(value)
202 | 			arrayValues := make([]*bigqueryrestapi.QueryParameterValue, sliceVal.Len())
203 | 			for i := 0; i < sliceVal.Len(); i++ {
204 | 				arrayValues[i] = &bigqueryrestapi.QueryParameterValue{
205 | 					Value: fmt.Sprintf("%v", sliceVal.Index(i).Interface()),
206 | 				}
207 | 			}
208 | 			lowLevelParam.ParameterValue.ArrayValues = arrayValues
209 | 		} else {
210 | 			// Handle scalar types based on their defined type.
211 | 			bqType, err := bqutil.BQTypeStringFromToolType(p.GetType())
212 | 			if err != nil {
213 | 				return nil, err
214 | 			}
215 | 			lowLevelParam.ParameterType.Type = bqType
216 | 			lowLevelParam.ParameterValue.Value = fmt.Sprintf("%v", value)
217 | 		}
218 | 		lowLevelParams = append(lowLevelParams, lowLevelParam)
219 | 	}
220 | 
221 | 	bqClient := t.Client
222 | 	restService := t.RestService
223 | 
224 | 	// Initialize new client if using user OAuth token
225 | 	if t.UseClientOAuth {
226 | 		tokenStr, err := accessToken.ParseBearerToken()
227 | 		if err != nil {
228 | 			return nil, fmt.Errorf("error parsing access token: %w", err)
229 | 		}
230 | 		bqClient, restService, err = t.ClientCreator(tokenStr, true)
231 | 		if err != nil {
232 | 			return nil, fmt.Errorf("error creating client from OAuth access token: %w", err)
233 | 		}
234 | 	}
235 | 
236 | 	query := bqClient.Query(newStatement)
237 | 	query.Parameters = highLevelParams
238 | 	query.Location = bqClient.Location
239 | 
240 | 	connProps := []*bigqueryapi.ConnectionProperty{}
241 | 	if t.SessionProvider != nil {
242 | 		session, err := t.SessionProvider(ctx)
243 | 		if err != nil {
244 | 			return nil, fmt.Errorf("failed to get BigQuery session: %w", err)
245 | 		}
246 | 		if session != nil {
247 | 			// Add session ID to the connection properties for subsequent calls.
248 | 			connProps = append(connProps, &bigqueryapi.ConnectionProperty{Key: "session_id", Value: session.ID})
249 | 		}
250 | 	}
251 | 	query.ConnectionProperties = connProps
252 | 	dryRunJob, err := bqutil.DryRunQuery(ctx, restService, bqClient.Project(), query.Location, newStatement, lowLevelParams, connProps)
253 | 	if err != nil {
254 | 		return nil, fmt.Errorf("query validation failed: %w", err)
255 | 	}
256 | 
257 | 	statementType := dryRunJob.Statistics.Query.StatementType
258 | 
259 | 	// Execute the query. For statements that return rows (such as SELECT),
260 | 	// iterate through the results, convert each row into a map of
261 | 	// column names to values, and collect the rows.
262 | 	job, err := query.Run(ctx)
263 | 	if err != nil {
264 | 		return nil, fmt.Errorf("unable to execute query: %w", err)
265 | 	}
266 | 	it, err := job.Read(ctx)
267 | 	if err != nil {
268 | 		return nil, fmt.Errorf("unable to read query results: %w", err)
269 | 	}
270 | 
271 | 	var out []any
272 | 	for {
273 | 		var row map[string]bigqueryapi.Value
274 | 		err = it.Next(&row)
275 | 		if err == iterator.Done {
276 | 			break
277 | 		}
278 | 		if err != nil {
279 | 			return nil, fmt.Errorf("unable to iterate through query results: %w", err)
280 | 		}
281 | 		vMap := make(map[string]any)
282 | 		for key, value := range row {
283 | 			vMap[key] = value
284 | 		}
285 | 		out = append(out, vMap)
286 | 	}
287 | 	// If the query returned any rows, return them directly.
288 | 	if len(out) > 0 {
289 | 		return out, nil
290 | 	}
291 | 
292 | 	// This handles the standard case for a SELECT query that successfully
293 | 	// executes but returns zero rows.
294 | 	if statementType == "SELECT" {
295 | 		return "The query returned 0 rows.", nil
296 | 	}
297 | 	// This is the fallback for a successful query that doesn't return content.
298 | 	// In most cases, this will be for DML/DDL statements like INSERT, UPDATE, CREATE, etc.
299 | 	// However, it is also possible that this was a query that was expected to return rows
300 | 	// but returned none, a case that we cannot distinguish here.
301 | 	return "Query executed successfully and returned no content.", nil
302 | }
303 | 
304 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
305 | 	return tools.ParseParams(t.AllParams, data, claims)
306 | }
307 | 
308 | func (t Tool) Manifest() tools.Manifest {
309 | 	return t.manifest
310 | }
311 | 
312 | func (t Tool) McpManifest() tools.McpManifest {
313 | 	return t.mcpManifest
314 | }
315 | 
316 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
317 | 	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
318 | }
319 | 
320 | func (t Tool) RequiresClientAuthorization() bool {
321 | 	return t.UseClientOAuth
322 | }
323 | 
```

--------------------------------------------------------------------------------
/docs/en/samples/alloydb/mcp_quickstart.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: "Quickstart (MCP with AlloyDB)"
  3 | type: docs
  4 | weight: 1
  5 | description: >
  6 |   How to get started running Toolbox with MCP Inspector and AlloyDB as the source.
  7 | ---
  8 | 
  9 | ## Overview
 10 | 
 11 | [Model Context Protocol](https://modelcontextprotocol.io) is an open protocol
 12 | that standardizes how applications provide context to LLMs. Check out this page
 13 | on how to [connect to Toolbox via MCP](../../how-to/connect_via_mcp.md).
 14 | 
 15 | ## Before you begin
 16 | 
 17 | This guide assumes you have already done the following:
 18 | 
 19 | 1.  [Create an AlloyDB cluster and
 20 |     instance](https://cloud.google.com/alloydb/docs/cluster-create) with a
 21 |     database and user.
 22 | 1.  Connect to the instance using [AlloyDB
 23 |     Studio](https://cloud.google.com/alloydb/docs/manage-data-using-studio),
 24 |     [`psql` command-line tool](https://www.postgresql.org/download/), or any
 25 |     other PostgreSQL client.
 26 | 
 27 | 1.  Enable the `pgvector` and `google_ml_integration`
 28 |     [extensions](https://cloud.google.com/alloydb/docs/ai). These are required
 29 |     for Semantic Search and Natural Language to SQL tools. Run the following SQL
 30 |     commands:
 31 | 
 32 |     ```sql
 33 |     CREATE EXTENSION IF NOT EXISTS "vector";
 34 |     CREATE EXTENSION IF NOT EXISTS "google_ml_integration";
 35 |     CREATE EXTENSION IF NOT EXISTS alloydb_ai_nl cascade;
 36 |     CREATE EXTENSION IF NOT EXISTS parameterized_views;
 37 |     ```
 38 | 
 39 | ## Step 1: Set up your AlloyDB database
 40 | 
 41 | In this section, we will create the necessary tables and functions in your
 42 | AlloyDB instance.
 43 | 
 44 | 1.  Create tables using the following commands:
 45 | 
 46 |     ```sql
 47 |     CREATE TABLE products (
 48 |       product_id SERIAL PRIMARY KEY,
 49 |       name VARCHAR(255) NOT NULL,
 50 |       description TEXT,
 51 |       price DECIMAL(10, 2) NOT NULL,
 52 |       category_id INT,
 53 |       embedding vector(3072) -- Vector size for model(gemini-embedding-001)
 54 |     );
 55 | 
 56 |     CREATE TABLE customers (
 57 |       customer_id SERIAL PRIMARY KEY,
 58 |       name VARCHAR(255) NOT NULL,
 59 |       email VARCHAR(255) UNIQUE NOT NULL
 60 |     );
 61 | 
 62 |     CREATE TABLE cart (
 63 |       cart_id SERIAL PRIMARY KEY,
 64 |       customer_id INT UNIQUE NOT NULL,
 65 |       created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
 66 |       FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
 67 |     );
 68 | 
 69 |     CREATE TABLE cart_items (
 70 |       cart_item_id SERIAL PRIMARY KEY,
 71 |       cart_id INT NOT NULL,
 72 |       product_id INT NOT NULL,
 73 |       quantity INT NOT NULL,
 74 |       price DECIMAL(10, 2) NOT NULL,
 75 |       FOREIGN KEY (cart_id) REFERENCES cart(cart_id),
 76 |       FOREIGN KEY (product_id) REFERENCES products(product_id)
 77 |     );
 78 | 
 79 |     CREATE TABLE categories (
 80 |       category_id SERIAL PRIMARY KEY,
 81 |       name VARCHAR(255) NOT NULL
 82 |     );
 83 |     ```
 84 | 
 85 | 2.  Insert sample data into the tables:
 86 | 
 87 |     ```sql
 88 |     INSERT INTO categories (category_id, name) VALUES
 89 |     (1, 'Flowers'),
 90 |     (2, 'Vases');
 91 | 
 92 |     INSERT INTO products (product_id, name, description, price, category_id, embedding) VALUES
 93 |     (1, 'Rose', 'A beautiful red rose', 2.50, 1, embedding('gemini-embedding-001', 'A beautiful red rose')),
 94 |     (2, 'Tulip', 'A colorful tulip', 1.50, 1, embedding('gemini-embedding-001', 'A colorful tulip')),
 95 |     (3, 'Glass Vase', 'A transparent glass vase', 10.00, 2, embedding('gemini-embedding-001', 'A transparent glass vase')),
 96 |     (4, 'Ceramic Vase', 'A handmade ceramic vase', 15.00, 2, embedding('gemini-embedding-001', 'A handmade ceramic vase'));
 97 | 
 98 |     INSERT INTO customers (customer_id, name, email) VALUES
 99 |     (1, 'John Doe', '[email protected]'),
100 |     (2, 'Jane Smith', '[email protected]');
101 | 
102 |     INSERT INTO cart (cart_id, customer_id) VALUES
103 |     (1, 1),
104 |     (2, 2);
105 | 
106 |     INSERT INTO cart_items (cart_id, product_id, quantity, price) VALUES
107 |     (1, 1, 2, 2.50),
108 |     (1, 3, 1, 10.00),
109 |     (2, 2, 5, 1.50);
110 |     ```
111 | 
112 | ## Step 2: Install Toolbox
113 | 
114 | In this section, we will download and install the Toolbox binary.
115 | 
116 | 1. Download the latest version of Toolbox as a binary:
117 | 
118 |     {{< notice tip >}}
119 |    Select the
120 |    [correct binary](https://github.com/googleapis/genai-toolbox/releases)
121 |    corresponding to your OS and CPU architecture.
122 |     {{< /notice >}}
123 |     <!-- {x-release-please-start-version} -->
124 |     ```bash
125 |     export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
126 |     export VERSION="0.18.0"
127 |     curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/$OS/toolbox
128 |     ```
129 |     <!-- {x-release-please-end} -->
130 | 
131 | 1. Make the binary executable:
132 | 
133 |     ```bash
134 |     chmod +x toolbox
135 |     ```
136 | 
137 | ## Step 3: Configure the tools
138 | 
139 | Create a `tools.yaml` file and add the following content. You must replace the
140 | placeholders with your actual AlloyDB configuration.
141 | 
142 | First, define the data source for your tools. This tells Toolbox how to connect
143 | to your AlloyDB instance.
144 | 
145 | ```yaml
146 | sources:
147 |   alloydb-pg-source:
148 |     kind: alloydb-postgres
149 |     project: YOUR_PROJECT_ID
150 |     region: YOUR_REGION
151 |     cluster: YOUR_CLUSTER
152 |     instance: YOUR_INSTANCE
153 |     database: YOUR_DATABASE
154 |     user: YOUR_USER
155 |     password: YOUR_PASSWORD
156 | ```
157 | 
158 | Next, define the tools the agent can use. We will categorize them into three
159 | types:
160 | 
161 | ### 1. Structured Queries Tools
162 | 
163 | These tools execute predefined SQL statements. They are ideal for common,
164 | structured queries like managing a shopping cart. Add the following to your
165 | `tools.yaml` file:
166 | 
167 | ```yaml
168 | tools:
169 | 
170 |   access-cart-information:
171 |     kind: postgres-sql
172 |     source: alloydb-pg-source
173 |     description: >-
174 |       List items in customer cart.
175 |       Use this tool to list items in a customer cart. This tool requires the cart ID.
176 |     parameters:
177 |       - name: cart_id
178 |         type: integer
179 |         description: The id of the cart.
180 |     statement: |
181 |       SELECT
182 |         p.name AS product_name,
183 |         ci.quantity,
184 |         ci.price AS item_price,
185 |         (ci.quantity * ci.price) AS total_item_price,
186 |         c.created_at AS cart_created_at,
187 |         ci.product_id AS product_id
188 |       FROM
189 |         cart_items ci JOIN cart c ON ci.cart_id = c.cart_id
190 |         JOIN products p ON ci.product_id = p.product_id
191 |       WHERE
192 |         c.cart_id = $1;
193 | 
194 |   add-to-cart:
195 |     kind: postgres-sql
196 |     source: alloydb-pg-source
197 |     description: >-
198 |       Add items to customer cart using the product ID and product prices from the product list.
199 |       Use this tool to add items to a customer cart.
200 |       This tool requires the cart ID, product ID, quantity, and price.
201 |     parameters:
202 |       - name: cart_id
203 |         type: integer
204 |         description: The id of the cart.
205 |       - name: product_id
206 |         type: integer
207 |         description: The id of the product.
208 |       - name: quantity
209 |         type: integer
210 |         description: The quantity of items to add.
211 |       - name: price
212 |         type: float
213 |         description: The price of items to add.
214 |     statement: |
215 |       INSERT INTO
216 |         cart_items (cart_id, product_id, quantity, price)
217 |       VALUES($1,$2,$3,$4);
218 | 
219 |   delete-from-cart:
220 |     kind: postgres-sql
221 |     source: alloydb-pg-source
222 |     description: >-
223 |       Remove products from customer cart.
224 |       Use this tool to remove products from a customer cart.
225 |       This tool requires the cart ID and product ID.
226 |     parameters:
227 |       - name: cart_id
228 |         type: integer
229 |         description: The id of the cart.
230 |       - name: product_id
231 |         type: integer
232 |         description: The id of the product.
233 |     statement: |
234 |       DELETE FROM
235 |         cart_items
236 |       WHERE
237 |         cart_id = $1 AND product_id = $2;
238 | ```
239 | 
240 | ### 2. Semantic Search Tools
241 | 
242 | These tools use vector embeddings to find the most relevant results based on the
243 | meaning of a user's query, rather than just keywords. Append the following tools
244 | to the `tools` section in your `tools.yaml`:
245 | 
246 | ```yaml
247 |   search-product-recommendations:
248 |     kind: postgres-sql
249 |     source: alloydb-pg-source
250 |     description: >-
251 |       Search for products based on user needs.
252 |       Use this tool to search for products. This tool requires the user's needs.
253 |     parameters:
254 |       - name: query
255 |         type: string
256 |         description: The product characteristics
257 |     statement: |
258 |       SELECT
259 |         product_id,
260 |         name,
261 |         description,
262 |         ROUND(CAST(price AS numeric), 2) as price
263 |       FROM
264 |         products
265 |       ORDER BY
266 |         embedding('gemini-embedding-001', $1)::vector <=> embedding
267 |       LIMIT 5;
268 | ```
269 | 
270 | ### 3. Natural Language to SQL (NL2SQL) Tools
271 | 
272 | 1. Create a [natural language
273 |    configuration](https://cloud.google.com/alloydb/docs/ai/use-natural-language-generate-sql-queries#create-config)
274 |    for your AlloyDB cluster.
275 | 
276 |     {{< notice tip >}}Before using NL2SQL tools,
277 |     you must first install the `alloydb_ai_nl` extension and
278 |     create the [semantic
279 |     layer](https://cloud.google.com/alloydb/docs/ai/natural-language-overview)
280 |     under a configuration named `flower_shop`.
281 |     {{< /notice >}}
282 | 
283 | 2. Configure your NL2SQL tool to use your configuration. These tools translate
284 |    natural language questions into SQL queries, allowing users to interact with
285 |    the database conversationally. Append the following tool to the `tools`
286 |    section:
287 | 
288 | ```yaml
289 |   ask-questions-about-products:
290 |     kind: alloydb-ai-nl
291 |     source: alloydb-pg-source
292 |     nlConfig: flower_shop
293 |     description: >-
294 |       Ask questions related to products or brands.
295 |       Use this tool to ask questions about products or brands.
296 |       Always SELECT the IDs of objects when generating queries.
297 | ```
298 | 
299 | Finally, group the tools into a `toolset` to make them available to the model.
300 | Add the following to the end of your `tools.yaml` file:
301 | 
302 | ```yaml
303 | toolsets:
304 |   flower_shop:
305 |     - access-cart-information
306 |     - search-product-recommendations
307 |     - ask-questions-about-products
308 |     - add-to-cart
309 |     - delete-from-cart
310 | ```
311 | 
312 | For more info on tools, check out the
313 | [Tools](../../resources/tools/) section.
314 | 
315 | ## Step 4: Run the Toolbox server
316 | 
317 | Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
318 | 
319 | ```bash
320 | ./toolbox --tools-file "tools.yaml"
321 | ```
322 | 
323 | ## Step 5: Connect to MCP Inspector
324 | 
325 | 1. Run the MCP Inspector:
326 | 
327 |     ```bash
328 |     npx @modelcontextprotocol/inspector
329 |     ```
330 | 
331 | 1. Type `y` when it asks to install the inspector package.
332 | 
333 | 1. When the MCP Inspector is up and running, it shows the following (take note
334 |    of `<YOUR_SESSION_TOKEN>`):
335 | 
336 |     ```bash
337 |     Starting MCP inspector...
338 |     ⚙️ Proxy server listening on localhost:6277
339 |     🔑 Session token: <YOUR_SESSION_TOKEN>
340 |        Use this token to authenticate requests or set DANGEROUSLY_OMIT_AUTH=true to disable auth
341 | 
342 |     🚀 MCP Inspector is up and running at:
343 |        http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=<YOUR_SESSION_TOKEN>
344 |     ```
345 | 
346 | 1. Open the above link in your browser.
347 | 
348 | 1. For `Transport Type`, select `Streamable HTTP`.
349 | 
350 | 1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
351 | 
352 | 1. For `Configuration` -> `Proxy Session Token`, make sure `<YOUR_SESSION_TOKEN>` is present.
353 | 
354 | 1. Click Connect.
355 | 
356 | 1. Select `List Tools` to see the list of tools configured in `tools.yaml`.
357 | 
358 | 1. Test out your tools here!
359 | 
360 | ## What's next
361 | 
362 | - Learn more about [MCP Inspector](../../how-to/connect_via_mcp.md).
363 | - Learn more about [Toolbox Resources](../../resources/).
364 | - Learn more about [Toolbox How-to guides](../../how-to/).
365 | 
```

--------------------------------------------------------------------------------
/docs/en/concepts/telemetry/index.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: "Telemetry"
  3 | type: docs
  4 | weight: 2
  5 | description: >
  6 |   An overview of telemetry and observability in Toolbox.
  7 | ---
  8 | 
  9 | ## About
 10 | 
 11 | Telemetry data such as logs, metrics, and traces helps developers understand
 12 | the internal state of the system. This page walks through the different types
 13 | of telemetry and observability available in Toolbox.
 14 | 
 15 | Toolbox exports logs via standard out/err, and exports traces and metrics
 16 | through [OpenTelemetry](https://opentelemetry.io/). Additional flags can be
 17 | passed to Toolbox to customize logging behavior, or to export telemetry
 18 | through a specific [exporter](#exporter).
 19 | 
 20 | ## Logging
 21 | 
 22 | The following flags can be used to customize Toolbox logging:
 23 | 
 24 | | **Flag**           | **Description**                                                                         |
 25 | |--------------------|-----------------------------------------------------------------------------------------|
 26 | | `--log-level`      | Preferred log level, allowed values: `debug`, `info`, `warn`, `error`. Default: `info`. |
 27 | | `--logging-format` | Preferred logging format, allowed values: `standard`, `json`. Default: `standard`.      |
 28 | 
 29 | **Example:**
 30 | 
 31 | ```bash
 32 | ./toolbox --tools-file "tools.yaml" --log-level warn --logging-format json
 33 | ```
 34 | 
 35 | ### Level
 36 | 
 37 | Toolbox supports the following log levels:
 38 | 
 39 | | **Log level** | **Description**                                                                                                                                                                    |
 40 | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 41 | | Debug         | Debug logs typically contain information that is only useful during the debugging phase and may be of little value during production.                                              |
 42 | | Info          | Info logs include information about successful operations within the application, such as a successful start, pause, or exit of the application.                                   |
 43 | | Warn          | Warning logs are slightly less severe than error conditions. While a warning does not cause an error, it indicates that an operation might fail in the future if action is not taken now. |
 44 | | Error         | Error logs contain an application error message.                                                                                                                                         |
 45 | 
 46 | Toolbox only outputs logs that are at or above the configured severity
 47 | level. The table above lists the supported log levels in order of increasing
 48 | severity.
 49 | 
 50 | ### Format
 51 | 
 52 | Toolbox supports both standard and structured logging formats.
 53 | 
 54 | The standard format outputs logs as plain strings:
 55 | 
 56 | ```
 57 | 2024-11-12T15:08:11.451377-08:00 INFO "Initialized 0 sources.\n"
 58 | ```
 59 | 
 60 | The structured format outputs logs as JSON:
 61 | 
 62 | ```json
 63 | {
 64 |   "timestamp":"2024-11-04T16:45:11.987299-08:00",
 65 |   "severity":"ERROR",
 66 |   "logging.googleapis.com/sourceLocation":{...},
 67 |   "message":"unable to parse tool file at \"tools.yaml\": \"cloud-sql-postgres1\" is not a valid kind of data source"
 68 | }
 69 | ```
 70 | 
 71 | {{< notice tip >}}
 72 | `logging.googleapis.com/sourceLocation` shows the source code
 73 | location information associated with the log entry, if any.
 74 | {{< /notice >}}
 75 | 
 76 | ## Telemetry
 77 | 
 78 | Toolbox supports exporting metrics and traces to any OpenTelemetry-compatible
 79 | exporter.
 80 | 
 81 | ### Metrics
 82 | 
 83 | A metric is a measurement of a service captured at runtime. The collected data
 84 | can be used to provide important insights into the service. Toolbox provides the
 85 | following custom metrics:
 86 | 
 87 | | **Metric Name**                    | **Description**                                         |
 88 | |------------------------------------|---------------------------------------------------------|
 89 | | `toolbox.server.toolset.get.count` | Counts the number of toolset manifest requests served   |
 90 | | `toolbox.server.tool.get.count`    | Counts the number of tool manifest requests served      |
 91 | | `toolbox.server.tool.get.invoke`   | Counts the number of tool invocation requests served    |
 92 | | `toolbox.server.mcp.sse.count`     | Counts the number of MCP SSE connection requests served |
 93 | | `toolbox.server.mcp.post.count`    | Counts the number of MCP POST requests served           |
 94 | 
 95 | All custom metrics have the following attributes/labels:
 96 | 
 97 | | **Metric Attributes**      | **Description**                                           |
 98 | |----------------------------|-----------------------------------------------------------|
 99 | | `toolbox.name`             | Name of the toolset or tool, if applicable.               |
100 | | `toolbox.operation.status` | Operation status code, for example: `success`, `failure`. |
101 | | `toolbox.sse.sessionId`    | Session ID for the SSE connection, if applicable.         |
102 | | `toolbox.method`           | Method of JSON-RPC request, if applicable.                |
103 | 
104 | ### Traces
105 | 
106 | A trace is a tree of spans that shows the path that a request makes through an
107 | application.
108 | 
109 | Spans generated by the Toolbox server are prefixed with `toolbox/server/`. For
110 | example, when a user runs Toolbox, it generates spans such as the following,
111 | with `toolbox/server/init` as the root span:
112 | 
113 | ![traces](./telemetry_traces.png)
114 | 
115 | ### Resource Attributes
116 | 
117 | All metrics and traces generated within Toolbox will be associated with a
118 | unified [resource][resource]. The list of resource attributes included are:
119 | 
120 | | **Resource Name**                                                                         | **Description**                                                                                                                                               |
121 | |-------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
122 | | [TelemetrySDK](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithTelemetrySDK) | TelemetrySDK version info.                                                                                                                                    |
123 | | [OS](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithOS)                     | OS attributes including OS description and OS type.                                                                                                           |
124 | | [Container](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithContainer)       | Container attributes including container ID, if applicable.                                                                                                   |
125 | | [Host](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithHost)                 | Host attributes including host name.                                                                                                                          |
126 | | [SchemaURL](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/resource#WithSchemaURL)       | Sets the schema URL for the configured resource.                                                                                                              |
127 | | `service.name`                                                                            | OpenTelemetry service name. Defaults to `toolbox`. Users can set the service name via the `--telemetry-service-name` flag to distinguish between different Toolbox services.   |
128 | | `service.version`                                                                         | The version of Toolbox used.                                                                                                                                  |
129 | 
130 | [resource]: https://opentelemetry.io/docs/languages/go/resources/
131 | 
132 | ### Exporter
133 | 
134 | An exporter is responsible for processing and exporting telemetry data. Toolbox
135 | generates telemetry data in the OpenTelemetry Protocol (OTLP), and users can
136 | choose any exporter that supports the OpenTelemetry Protocol. Toolbox provides
137 | two exporter implementations to choose from: the Google Cloud Exporter, which
138 | sends data directly to the backend, or the OTLP Exporter together with a
139 | Collector, which acts as a proxy that collects and exports data to the
140 | telemetry backend of the user's choice.
141 | 
142 | ![telemetry_flow](./telemetry_flow.png)
143 | 
144 | #### Google Cloud Exporter
145 | 
146 | The Google Cloud Exporter directly exports telemetry to Google Cloud Monitoring.
147 | It utilizes the [GCP Metric Exporter][gcp-metric-exporter] and [GCP Trace
148 | Exporter][gcp-trace-exporter].
149 | 
150 | [gcp-metric-exporter]:
151 |     https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/tree/main/exporter/metric
152 | [gcp-trace-exporter]:
153 |     https://github.com/GoogleCloudPlatform/opentelemetry-operations-go/tree/main/exporter/trace
154 | 
155 | {{< notice note >}}
156 | If you're using Google Cloud Monitoring, the following APIs will need to be
157 | enabled:
158 | 
159 | - [Cloud Logging API](https://cloud.google.com/logging/docs/api/enable-api)
160 | - [Cloud Monitoring API](https://cloud.google.com/monitoring/api/enable-api)
161 | - [Cloud Trace API](https://console.cloud.google.com/apis/enableflow?apiid=cloudtrace.googleapis.com)
162 | {{< /notice >}}
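
If these APIs are not yet enabled in your project, one way to enable them is with the `gcloud` CLI (a sketch; assumes the Google Cloud CLI is installed and authenticated against the right project):

```shell
# Enable the Logging, Monitoring, and Trace APIs in the current project
gcloud services enable \
    logging.googleapis.com \
    monitoring.googleapis.com \
    cloudtrace.googleapis.com
```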
163 | 
164 | #### OTLP Exporter
165 | 
166 | This implementation uses the default OTLP Exporter over HTTP for
167 | [metrics][otlp-metric-exporter] and [traces][otlp-trace-exporter]. You can use
168 | this exporter if you choose to export your telemetry data to a Collector.
169 | 
170 | [otlp-metric-exporter]: https://opentelemetry.io/docs/languages/go/exporters/#otlp-metrics-over-http
171 | [otlp-trace-exporter]: https://opentelemetry.io/docs/languages/go/exporters/#otlp-traces-over-http
172 | 
173 | ### Collector
174 | 
175 | A collector acts as a proxy between the application and the telemetry backend.
176 | It receives telemetry data, transforms it, and then exports the data to
177 | backends that can store it permanently. Toolbox provides an option to export
178 | telemetry data to the user's choice of backend(s) compatible with the
179 | OpenTelemetry Protocol (OTLP). If you would like to use a collector, refer to
180 | [Export Telemetry using the OTel Collector](../../how-to/export_telemetry.md).
181 | 
182 | ### Flags
183 | 
184 | The following flags are used to determine Toolbox's telemetry configuration:
185 | 
186 | | **flag**                   | **type** | **description**                                                                                                |
187 | |----------------------------|----------|----------------------------------------------------------------------------------------------------------------|
188 | | `--telemetry-gcp`          | bool     | Enable exporting directly to Google Cloud Monitoring. Default is `false`.                                      |
189 | | `--telemetry-otlp`         | string   | Enable exporting using the OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. `http://127.0.0.1:4318`). |
190 | | `--telemetry-service-name` | string   | Sets the value of the `service.name` resource attribute. Default is `toolbox`.                                 |
191 | 
192 | In addition to the flags noted above, you can also make additional configuration
193 | for OpenTelemetry via the [General SDK Configuration][sdk-configuration] through
194 | environmental variables.
195 | 
196 | [sdk-configuration]:
197 |     https://opentelemetry.io/docs/languages/sdk-configuration/general/
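
For example, trace sampling can be tuned through the standard SDK environment variables (a sketch; the variable names come from the OpenTelemetry SDK configuration spec linked above, not from Toolbox-specific flags, and the endpoint value assumes a local Collector):

```shell
# Sample only 10% of traces via the standard OTel SDK env vars,
# then start Toolbox exporting over OTLP
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"
export OTEL_TRACES_SAMPLER_ARG="0.1"

./toolbox --telemetry-otlp="http://127.0.0.1:4318"
```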
198 | 
199 | **Examples:**
200 | 
201 | To enable the Google Cloud Exporter:
202 | 
203 | ```bash
204 | ./toolbox --telemetry-gcp
205 | ```
206 | 
207 | To enable the OTLP Exporter, provide the Collector endpoint:
208 | 
209 | ```bash
210 | ./toolbox --telemetry-otlp="http://127.0.0.1:4553"
211 | ```
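
The flags can also be combined. For instance, a sketch running Toolbox with JSON logging, an OTLP endpoint, and a custom service name (the endpoint and the `toolbox-dev` name are illustrative assumptions):

```shell
# JSON logs, OTLP export to a local Collector, and a distinct service name
./toolbox --tools-file "tools.yaml" \
    --logging-format json \
    --telemetry-otlp="http://127.0.0.1:4318" \
    --telemetry-service-name="toolbox-dev"
```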
212 | 
```