This is page 32 of 45. Use http://codebase.md/googleapis/genai-toolbox?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .ci
│   ├── continuous.release.cloudbuild.yaml
│   ├── generate_release_table.sh
│   ├── integration.cloudbuild.yaml
│   ├── quickstart_test
│   │   ├── go.integration.cloudbuild.yaml
│   │   ├── js.integration.cloudbuild.yaml
│   │   ├── py.integration.cloudbuild.yaml
│   │   ├── run_go_tests.sh
│   │   ├── run_js_tests.sh
│   │   ├── run_py_tests.sh
│   │   └── setup_hotels_sample.sql
│   ├── test_with_coverage.sh
│   └── versioned.release.cloudbuild.yaml
├── .github
│   ├── auto-label.yaml
│   ├── blunderbuss.yml
│   ├── CODEOWNERS
│   ├── header-checker-lint.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── question.yml
│   ├── label-sync.yml
│   ├── labels.yaml
│   ├── PULL_REQUEST_TEMPLATE.md
│   ├── release-please.yml
│   ├── renovate.json5
│   ├── sync-repo-settings.yaml
│   └── workflows
│       ├── cloud_build_failure_reporter.yml
│       ├── deploy_dev_docs.yaml
│       ├── deploy_previous_version_docs.yaml
│       ├── deploy_versioned_docs.yaml
│       ├── docs_deploy.yaml
│       ├── docs_preview_clean.yaml
│       ├── docs_preview_deploy.yaml
│       ├── lint.yaml
│       ├── schedule_reporter.yml
│       ├── sync-labels.yaml
│       └── tests.yaml
├── .gitignore
├── .gitmodules
├── .golangci.yaml
├── .hugo
│   ├── archetypes
│   │   └── default.md
│   ├── assets
│   │   ├── icons
│   │   │   └── logo.svg
│   │   └── scss
│   │       ├── _styles_project.scss
│   │       └── _variables_project.scss
│   ├── go.mod
│   ├── go.sum
│   ├── hugo.toml
│   ├── layouts
│   │   ├── _default
│   │   │   └── home.releases.releases
│   │   ├── index.llms-full.txt
│   │   ├── index.llms.txt
│   │   ├── partials
│   │   │   ├── hooks
│   │   │   │   └── head-end.html
│   │   │   ├── navbar-version-selector.html
│   │   │   ├── page-meta-links.html
│   │   │   └── td
│   │   │       └── render-heading.html
│   │   ├── robot.txt
│   │   └── shortcodes
│   │       ├── include.html
│   │       ├── ipynb.html
│   │       └── regionInclude.html
│   ├── package-lock.json
│   ├── package.json
│   └── static
│       ├── favicons
│       │   ├── android-chrome-192x192.png
│       │   ├── android-chrome-512x512.png
│       │   ├── apple-touch-icon.png
│       │   ├── favicon-16x16.png
│       │   ├── favicon-32x32.png
│       │   └── favicon.ico
│       └── js
│           └── w3.js
├── CHANGELOG.md
├── cmd
│   ├── options_test.go
│   ├── options.go
│   ├── root_test.go
│   ├── root.go
│   └── version.txt
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DEVELOPER.md
├── Dockerfile
├── docs
│   └── en
│       ├── _index.md
│       ├── about
│       │   ├── _index.md
│       │   └── faq.md
│       ├── concepts
│       │   ├── _index.md
│       │   └── telemetry
│       │       ├── index.md
│       │       ├── telemetry_flow.png
│       │       └── telemetry_traces.png
│       ├── getting-started
│       │   ├── _index.md
│       │   ├── colab_quickstart.ipynb
│       │   ├── configure.md
│       │   ├── introduction
│       │   │   ├── _index.md
│       │   │   └── architecture.png
│       │   ├── local_quickstart_go.md
│       │   ├── local_quickstart_js.md
│       │   ├── local_quickstart.md
│       │   ├── mcp_quickstart
│       │   │   ├── _index.md
│       │   │   ├── inspector_tools.png
│       │   │   └── inspector.png
│       │   └── quickstart
│       │       ├── go
│       │       │   ├── genAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── genkit
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── langchain
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   ├── openAI
│       │       │   │   ├── go.mod
│       │       │   │   ├── go.sum
│       │       │   │   └── quickstart.go
│       │       │   └── quickstart_test.go
│       │       ├── golden.txt
│       │       ├── js
│       │       │   ├── genAI
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── genkit
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── langchain
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   ├── llamaindex
│       │       │   │   ├── package-lock.json
│       │       │   │   ├── package.json
│       │       │   │   └── quickstart.js
│       │       │   └── quickstart.test.js
│       │       ├── python
│       │       │   ├── __init__.py
│       │       │   ├── adk
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── core
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── langchain
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   ├── llamaindex
│       │       │   │   ├── quickstart.py
│       │       │   │   └── requirements.txt
│       │       │   └── quickstart_test.py
│       │       └── shared
│       │           ├── cloud_setup.md
│       │           ├── configure_toolbox.md
│       │           └── database_setup.md
│       ├── how-to
│       │   ├── _index.md
│       │   ├── connect_via_geminicli.md
│       │   ├── connect_via_mcp.md
│       │   ├── connect-ide
│       │   │   ├── _index.md
│       │   │   ├── alloydb_pg_admin_mcp.md
│       │   │   ├── alloydb_pg_mcp.md
│       │   │   ├── bigquery_mcp.md
│       │   │   ├── cloud_sql_mssql_admin_mcp.md
│       │   │   ├── cloud_sql_mssql_mcp.md
│       │   │   ├── cloud_sql_mysql_admin_mcp.md
│       │   │   ├── cloud_sql_mysql_mcp.md
│       │   │   ├── cloud_sql_pg_admin_mcp.md
│       │   │   ├── cloud_sql_pg_mcp.md
│       │   │   ├── firestore_mcp.md
│       │   │   ├── looker_mcp.md
│       │   │   ├── mssql_mcp.md
│       │   │   ├── mysql_mcp.md
│       │   │   ├── neo4j_mcp.md
│       │   │   ├── postgres_mcp.md
│       │   │   ├── spanner_mcp.md
│       │   │   └── sqlite_mcp.md
│       │   ├── deploy_docker.md
│       │   ├── deploy_gke.md
│       │   ├── deploy_toolbox.md
│       │   ├── export_telemetry.md
│       │   └── toolbox-ui
│       │       ├── edit-headers.gif
│       │       ├── edit-headers.png
│       │       ├── index.md
│       │       ├── optional-param-checked.png
│       │       ├── optional-param-unchecked.png
│       │       ├── run-tool.gif
│       │       ├── tools.png
│       │       └── toolsets.png
│       ├── reference
│       │   ├── _index.md
│       │   ├── cli.md
│       │   └── prebuilt-tools.md
│       ├── resources
│       │   ├── _index.md
│       │   ├── authServices
│       │   │   ├── _index.md
│       │   │   └── google.md
│       │   ├── sources
│       │   │   ├── _index.md
│       │   │   ├── alloydb-admin.md
│       │   │   ├── alloydb-pg.md
│       │   │   ├── bigquery.md
│       │   │   ├── bigtable.md
│       │   │   ├── cassandra.md
│       │   │   ├── clickhouse.md
│       │   │   ├── cloud-monitoring.md
│       │   │   ├── cloud-sql-admin.md
│       │   │   ├── cloud-sql-mssql.md
│       │   │   ├── cloud-sql-mysql.md
│       │   │   ├── cloud-sql-pg.md
│       │   │   ├── couchbase.md
│       │   │   ├── dataplex.md
│       │   │   ├── dgraph.md
│       │   │   ├── firebird.md
│       │   │   ├── firestore.md
│       │   │   ├── http.md
│       │   │   ├── looker.md
│       │   │   ├── mongodb.md
│       │   │   ├── mssql.md
│       │   │   ├── mysql.md
│       │   │   ├── neo4j.md
│       │   │   ├── oceanbase.md
│       │   │   ├── oracle.md
│       │   │   ├── postgres.md
│       │   │   ├── redis.md
│       │   │   ├── spanner.md
│       │   │   ├── sqlite.md
│       │   │   ├── tidb.md
│       │   │   ├── trino.md
│       │   │   ├── valkey.md
│       │   │   └── yugabytedb.md
│       │   └── tools
│       │       ├── _index.md
│       │       ├── alloydb
│       │       │   ├── _index.md
│       │       │   ├── alloydb-create-cluster.md
│       │       │   ├── alloydb-create-instance.md
│       │       │   ├── alloydb-create-user.md
│       │       │   ├── alloydb-get-cluster.md
│       │       │   ├── alloydb-get-instance.md
│       │       │   ├── alloydb-get-user.md
│       │       │   ├── alloydb-list-clusters.md
│       │       │   ├── alloydb-list-instances.md
│       │       │   ├── alloydb-list-users.md
│       │       │   └── alloydb-wait-for-operation.md
│       │       ├── alloydbainl
│       │       │   ├── _index.md
│       │       │   └── alloydb-ai-nl.md
│       │       ├── bigquery
│       │       │   ├── _index.md
│       │       │   ├── bigquery-analyze-contribution.md
│       │       │   ├── bigquery-conversational-analytics.md
│       │       │   ├── bigquery-execute-sql.md
│       │       │   ├── bigquery-forecast.md
│       │       │   ├── bigquery-get-dataset-info.md
│       │       │   ├── bigquery-get-table-info.md
│       │       │   ├── bigquery-list-dataset-ids.md
│       │       │   ├── bigquery-list-table-ids.md
│       │       │   ├── bigquery-search-catalog.md
│       │       │   └── bigquery-sql.md
│       │       ├── bigtable
│       │       │   ├── _index.md
│       │       │   └── bigtable-sql.md
│       │       ├── cassandra
│       │       │   ├── _index.md
│       │       │   └── cassandra-cql.md
│       │       ├── clickhouse
│       │       │   ├── _index.md
│       │       │   ├── clickhouse-execute-sql.md
│       │       │   ├── clickhouse-list-databases.md
│       │       │   ├── clickhouse-list-tables.md
│       │       │   └── clickhouse-sql.md
│       │       ├── cloudmonitoring
│       │       │   ├── _index.md
│       │       │   └── cloud-monitoring-query-prometheus.md
│       │       ├── cloudsql
│       │       │   ├── _index.md
│       │       │   ├── cloudsqlcreatedatabase.md
│       │       │   ├── cloudsqlcreateusers.md
│       │       │   ├── cloudsqlgetinstances.md
│       │       │   ├── cloudsqllistdatabases.md
│       │       │   ├── cloudsqllistinstances.md
│       │       │   ├── cloudsqlmssqlcreateinstance.md
│       │       │   ├── cloudsqlmysqlcreateinstance.md
│       │       │   ├── cloudsqlpgcreateinstances.md
│       │       │   └── cloudsqlwaitforoperation.md
│       │       ├── couchbase
│       │       │   ├── _index.md
│       │       │   └── couchbase-sql.md
│       │       ├── dataform
│       │       │   ├── _index.md
│       │       │   └── dataform-compile-local.md
│       │       ├── dataplex
│       │       │   ├── _index.md
│       │       │   ├── dataplex-lookup-entry.md
│       │       │   ├── dataplex-search-aspect-types.md
│       │       │   └── dataplex-search-entries.md
│       │       ├── dgraph
│       │       │   ├── _index.md
│       │       │   └── dgraph-dql.md
│       │       ├── firebird
│       │       │   ├── _index.md
│       │       │   ├── firebird-execute-sql.md
│       │       │   └── firebird-sql.md
│       │       ├── firestore
│       │       │   ├── _index.md
│       │       │   ├── firestore-add-documents.md
│       │       │   ├── firestore-delete-documents.md
│       │       │   ├── firestore-get-documents.md
│       │       │   ├── firestore-get-rules.md
│       │       │   ├── firestore-list-collections.md
│       │       │   ├── firestore-query-collection.md
│       │       │   ├── firestore-query.md
│       │       │   ├── firestore-update-document.md
│       │       │   └── firestore-validate-rules.md
│       │       ├── http
│       │       │   ├── _index.md
│       │       │   └── http.md
│       │       ├── looker
│       │       │   ├── _index.md
│       │       │   ├── looker-add-dashboard-element.md
│       │       │   ├── looker-conversational-analytics.md
│       │       │   ├── looker-get-dashboards.md
│       │       │   ├── looker-get-dimensions.md
│       │       │   ├── looker-get-explores.md
│       │       │   ├── looker-get-filters.md
│       │       │   ├── looker-get-looks.md
│       │       │   ├── looker-get-measures.md
│       │       │   ├── looker-get-models.md
│       │       │   ├── looker-get-parameters.md
│       │       │   ├── looker-health-analyze.md
│       │       │   ├── looker-health-pulse.md
│       │       │   ├── looker-health-vacuum.md
│       │       │   ├── looker-make-dashboard.md
│       │       │   ├── looker-make-look.md
│       │       │   ├── looker-query-sql.md
│       │       │   ├── looker-query-url.md
│       │       │   ├── looker-query.md
│       │       │   └── looker-run-look.md
│       │       ├── mongodb
│       │       │   ├── _index.md
│       │       │   ├── mongodb-aggregate.md
│       │       │   ├── mongodb-delete-many.md
│       │       │   ├── mongodb-delete-one.md
│       │       │   ├── mongodb-find-one.md
│       │       │   ├── mongodb-find.md
│       │       │   ├── mongodb-insert-many.md
│       │       │   ├── mongodb-insert-one.md
│       │       │   ├── mongodb-update-many.md
│       │       │   └── mongodb-update-one.md
│       │       ├── mssql
│       │       │   ├── _index.md
│       │       │   ├── mssql-execute-sql.md
│       │       │   ├── mssql-list-tables.md
│       │       │   └── mssql-sql.md
│       │       ├── mysql
│       │       │   ├── _index.md
│       │       │   ├── mysql-execute-sql.md
│       │       │   ├── mysql-list-active-queries.md
│       │       │   ├── mysql-list-table-fragmentation.md
│       │       │   ├── mysql-list-tables-missing-unique-indexes.md
│       │       │   ├── mysql-list-tables.md
│       │       │   └── mysql-sql.md
│       │       ├── neo4j
│       │       │   ├── _index.md
│       │       │   ├── neo4j-cypher.md
│       │       │   ├── neo4j-execute-cypher.md
│       │       │   └── neo4j-schema.md
│       │       ├── oceanbase
│       │       │   ├── _index.md
│       │       │   ├── oceanbase-execute-sql.md
│       │       │   └── oceanbase-sql.md
│       │       ├── oracle
│       │       │   ├── _index.md
│       │       │   ├── oracle-execute-sql.md
│       │       │   └── oracle-sql.md
│       │       ├── postgres
│       │       │   ├── _index.md
│       │       │   ├── postgres-execute-sql.md
│       │       │   ├── postgres-list-active-queries.md
│       │       │   ├── postgres-list-available-extensions.md
│       │       │   ├── postgres-list-installed-extensions.md
│       │       │   ├── postgres-list-tables.md
│       │       │   └── postgres-sql.md
│       │       ├── redis
│       │       │   ├── _index.md
│       │       │   └── redis.md
│       │       ├── spanner
│       │       │   ├── _index.md
│       │       │   ├── spanner-execute-sql.md
│       │       │   ├── spanner-list-tables.md
│       │       │   └── spanner-sql.md
│       │       ├── sqlite
│       │       │   ├── _index.md
│       │       │   ├── sqlite-execute-sql.md
│       │       │   └── sqlite-sql.md
│       │       ├── tidb
│       │       │   ├── _index.md
│       │       │   ├── tidb-execute-sql.md
│       │       │   └── tidb-sql.md
│       │       ├── trino
│       │       │   ├── _index.md
│       │       │   ├── trino-execute-sql.md
│       │       │   └── trino-sql.md
│       │       ├── utility
│       │       │   ├── _index.md
│       │       │   └── wait.md
│       │       ├── valkey
│       │       │   ├── _index.md
│       │       │   └── valkey.md
│       │       └── yuagbytedb
│       │           ├── _index.md
│       │           └── yugabytedb-sql.md
│       ├── samples
│       │   ├── _index.md
│       │   ├── alloydb
│       │   │   ├── _index.md
│       │   │   ├── ai-nl
│       │   │   │   ├── alloydb_ai_nl.ipynb
│       │   │   │   └── index.md
│       │   │   └── mcp_quickstart.md
│       │   ├── bigquery
│       │   │   ├── _index.md
│       │   │   ├── colab_quickstart_bigquery.ipynb
│       │   │   ├── local_quickstart.md
│       │   │   └── mcp_quickstart
│       │   │       ├── _index.md
│       │   │       ├── inspector_tools.png
│       │   │       └── inspector.png
│       │   └── looker
│       │       ├── _index.md
│       │       ├── looker_gemini_oauth
│       │       │   ├── _index.md
│       │       │   ├── authenticated.png
│       │       │   ├── authorize.png
│       │       │   └── registration.png
│       │       ├── looker_gemini.md
│       │       └── looker_mcp_inspector
│       │           ├── _index.md
│       │           ├── inspector_tools.png
│       │           └── inspector.png
│       └── sdks
│           ├── _index.md
│           ├── go-sdk.md
│           ├── js-sdk.md
│           └── python-sdk.md
├── go.mod
├── go.sum
├── internal
│   ├── auth
│   │   ├── auth.go
│   │   └── google
│   │       └── google.go
│   ├── log
│   │   ├── handler.go
│   │   ├── log_test.go
│   │   ├── log.go
│   │   └── logger.go
│   ├── prebuiltconfigs
│   │   ├── prebuiltconfigs_test.go
│   │   ├── prebuiltconfigs.go
│   │   └── tools
│   │       ├── alloydb-postgres-admin.yaml
│   │       ├── alloydb-postgres-observability.yaml
│   │       ├── alloydb-postgres.yaml
│   │       ├── bigquery.yaml
│   │       ├── clickhouse.yaml
│   │       ├── cloud-sql-mssql-admin.yaml
│   │       ├── cloud-sql-mssql-observability.yaml
│   │       ├── cloud-sql-mssql.yaml
│   │       ├── cloud-sql-mysql-admin.yaml
│   │       ├── cloud-sql-mysql-observability.yaml
│   │       ├── cloud-sql-mysql.yaml
│   │       ├── cloud-sql-postgres-admin.yaml
│   │       ├── cloud-sql-postgres-observability.yaml
│   │       ├── cloud-sql-postgres.yaml
│   │       ├── dataplex.yaml
│   │       ├── firestore.yaml
│   │       ├── looker-conversational-analytics.yaml
│   │       ├── looker.yaml
│   │       ├── mssql.yaml
│   │       ├── mysql.yaml
│   │       ├── neo4j.yaml
│   │       ├── oceanbase.yaml
│   │       ├── postgres.yaml
│   │       ├── spanner-postgres.yaml
│   │       ├── spanner.yaml
│   │       └── sqlite.yaml
│   ├── server
│   │   ├── api_test.go
│   │   ├── api.go
│   │   ├── common_test.go
│   │   ├── config.go
│   │   ├── mcp
│   │   │   ├── jsonrpc
│   │   │   │   ├── jsonrpc_test.go
│   │   │   │   └── jsonrpc.go
│   │   │   ├── mcp.go
│   │   │   ├── util
│   │   │   │   └── lifecycle.go
│   │   │   ├── v20241105
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   ├── v20250326
│   │   │   │   ├── method.go
│   │   │   │   └── types.go
│   │   │   └── v20250618
│   │   │       ├── method.go
│   │   │       └── types.go
│   │   ├── mcp_test.go
│   │   ├── mcp.go
│   │   ├── server_test.go
│   │   ├── server.go
│   │   ├── static
│   │   │   ├── assets
│   │   │   │   └── mcptoolboxlogo.png
│   │   │   ├── css
│   │   │   │   └── style.css
│   │   │   ├── index.html
│   │   │   ├── js
│   │   │   │   ├── auth.js
│   │   │   │   ├── loadTools.js
│   │   │   │   ├── mainContent.js
│   │   │   │   ├── navbar.js
│   │   │   │   ├── runTool.js
│   │   │   │   ├── toolDisplay.js
│   │   │   │   ├── tools.js
│   │   │   │   └── toolsets.js
│   │   │   ├── tools.html
│   │   │   └── toolsets.html
│   │   ├── web_test.go
│   │   └── web.go
│   ├── sources
│   │   ├── alloydbadmin
│   │   │   ├── alloydbadmin_test.go
│   │   │   └── alloydbadmin.go
│   │   ├── alloydbpg
│   │   │   ├── alloydb_pg_test.go
│   │   │   └── alloydb_pg.go
│   │   ├── bigquery
│   │   │   ├── bigquery_test.go
│   │   │   └── bigquery.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   ├── cassandra_test.go
│   │   │   └── cassandra.go
│   │   ├── clickhouse
│   │   │   ├── clickhouse_test.go
│   │   │   └── clickhouse.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloud_monitoring_test.go
│   │   │   └── cloud_monitoring.go
│   │   ├── cloudsqladmin
│   │   │   ├── cloud_sql_admin_test.go
│   │   │   └── cloud_sql_admin.go
│   │   ├── cloudsqlmssql
│   │   │   ├── cloud_sql_mssql_test.go
│   │   │   └── cloud_sql_mssql.go
│   │   ├── cloudsqlmysql
│   │   │   ├── cloud_sql_mysql_test.go
│   │   │   └── cloud_sql_mysql.go
│   │   ├── cloudsqlpg
│   │   │   ├── cloud_sql_pg_test.go
│   │   │   └── cloud_sql_pg.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataplex
│   │   │   ├── dataplex_test.go
│   │   │   └── dataplex.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── dialect.go
│   │   ├── firebird
│   │   │   ├── firebird_test.go
│   │   │   └── firebird.go
│   │   ├── firestore
│   │   │   ├── firestore_test.go
│   │   │   └── firestore.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── ip_type.go
│   │   ├── looker
│   │   │   ├── looker_test.go
│   │   │   └── looker.go
│   │   ├── mongodb
│   │   │   ├── mongodb_test.go
│   │   │   └── mongodb.go
│   │   ├── mssql
│   │   │   ├── mssql_test.go
│   │   │   └── mssql.go
│   │   ├── mysql
│   │   │   ├── mysql_test.go
│   │   │   └── mysql.go
│   │   ├── neo4j
│   │   │   ├── neo4j_test.go
│   │   │   └── neo4j.go
│   │   ├── oceanbase
│   │   │   ├── oceanbase_test.go
│   │   │   └── oceanbase.go
│   │   ├── oracle
│   │   │   └── oracle.go
│   │   ├── postgres
│   │   │   ├── postgres_test.go
│   │   │   └── postgres.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── sources.go
│   │   ├── spanner
│   │   │   ├── spanner_test.go
│   │   │   └── spanner.go
│   │   ├── sqlite
│   │   │   ├── sqlite_test.go
│   │   │   └── sqlite.go
│   │   ├── tidb
│   │   │   ├── tidb_test.go
│   │   │   └── tidb.go
│   │   ├── trino
│   │   │   ├── trino_test.go
│   │   │   └── trino.go
│   │   ├── util.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedb
│   │       ├── yugabytedb_test.go
│   │       └── yugabytedb.go
│   ├── telemetry
│   │   ├── instrumentation.go
│   │   └── telemetry.go
│   ├── testutils
│   │   └── testutils.go
│   ├── tools
│   │   ├── alloydb
│   │   │   ├── alloydbcreatecluster
│   │   │   │   ├── alloydbcreatecluster_test.go
│   │   │   │   └── alloydbcreatecluster.go
│   │   │   ├── alloydbcreateinstance
│   │   │   │   ├── alloydbcreateinstance_test.go
│   │   │   │   └── alloydbcreateinstance.go
│   │   │   ├── alloydbcreateuser
│   │   │   │   ├── alloydbcreateuser_test.go
│   │   │   │   └── alloydbcreateuser.go
│   │   │   ├── alloydbgetcluster
│   │   │   │   ├── alloydbgetcluster_test.go
│   │   │   │   └── alloydbgetcluster.go
│   │   │   ├── alloydbgetinstance
│   │   │   │   ├── alloydbgetinstance_test.go
│   │   │   │   └── alloydbgetinstance.go
│   │   │   ├── alloydbgetuser
│   │   │   │   ├── alloydbgetuser_test.go
│   │   │   │   └── alloydbgetuser.go
│   │   │   ├── alloydblistclusters
│   │   │   │   ├── alloydblistclusters_test.go
│   │   │   │   └── alloydblistclusters.go
│   │   │   ├── alloydblistinstances
│   │   │   │   ├── alloydblistinstances_test.go
│   │   │   │   └── alloydblistinstances.go
│   │   │   ├── alloydblistusers
│   │   │   │   ├── alloydblistusers_test.go
│   │   │   │   └── alloydblistusers.go
│   │   │   └── alloydbwaitforoperation
│   │   │       ├── alloydbwaitforoperation_test.go
│   │   │       └── alloydbwaitforoperation.go
│   │   ├── alloydbainl
│   │   │   ├── alloydbainl_test.go
│   │   │   └── alloydbainl.go
│   │   ├── bigquery
│   │   │   ├── bigqueryanalyzecontribution
│   │   │   │   ├── bigqueryanalyzecontribution_test.go
│   │   │   │   └── bigqueryanalyzecontribution.go
│   │   │   ├── bigquerycommon
│   │   │   │   ├── table_name_parser_test.go
│   │   │   │   ├── table_name_parser.go
│   │   │   │   └── util.go
│   │   │   ├── bigqueryconversationalanalytics
│   │   │   │   ├── bigqueryconversationalanalytics_test.go
│   │   │   │   └── bigqueryconversationalanalytics.go
│   │   │   ├── bigqueryexecutesql
│   │   │   │   ├── bigqueryexecutesql_test.go
│   │   │   │   └── bigqueryexecutesql.go
│   │   │   ├── bigqueryforecast
│   │   │   │   ├── bigqueryforecast_test.go
│   │   │   │   └── bigqueryforecast.go
│   │   │   ├── bigquerygetdatasetinfo
│   │   │   │   ├── bigquerygetdatasetinfo_test.go
│   │   │   │   └── bigquerygetdatasetinfo.go
│   │   │   ├── bigquerygettableinfo
│   │   │   │   ├── bigquerygettableinfo_test.go
│   │   │   │   └── bigquerygettableinfo.go
│   │   │   ├── bigquerylistdatasetids
│   │   │   │   ├── bigquerylistdatasetids_test.go
│   │   │   │   └── bigquerylistdatasetids.go
│   │   │   ├── bigquerylisttableids
│   │   │   │   ├── bigquerylisttableids_test.go
│   │   │   │   └── bigquerylisttableids.go
│   │   │   ├── bigquerysearchcatalog
│   │   │   │   ├── bigquerysearchcatalog_test.go
│   │   │   │   └── bigquerysearchcatalog.go
│   │   │   └── bigquerysql
│   │   │       ├── bigquerysql_test.go
│   │   │       └── bigquerysql.go
│   │   ├── bigtable
│   │   │   ├── bigtable_test.go
│   │   │   └── bigtable.go
│   │   ├── cassandra
│   │   │   └── cassandracql
│   │   │       ├── cassandracql_test.go
│   │   │       └── cassandracql.go
│   │   ├── clickhouse
│   │   │   ├── clickhouseexecutesql
│   │   │   │   ├── clickhouseexecutesql_test.go
│   │   │   │   └── clickhouseexecutesql.go
│   │   │   ├── clickhouselistdatabases
│   │   │   │   ├── clickhouselistdatabases_test.go
│   │   │   │   └── clickhouselistdatabases.go
│   │   │   ├── clickhouselisttables
│   │   │   │   ├── clickhouselisttables_test.go
│   │   │   │   └── clickhouselisttables.go
│   │   │   └── clickhousesql
│   │   │       ├── clickhousesql_test.go
│   │   │       └── clickhousesql.go
│   │   ├── cloudmonitoring
│   │   │   ├── cloudmonitoring_test.go
│   │   │   └── cloudmonitoring.go
│   │   ├── cloudsql
│   │   │   ├── cloudsqlcreatedatabase
│   │   │   │   ├── cloudsqlcreatedatabase_test.go
│   │   │   │   └── cloudsqlcreatedatabase.go
│   │   │   ├── cloudsqlcreateusers
│   │   │   │   ├── cloudsqlcreateusers_test.go
│   │   │   │   └── cloudsqlcreateusers.go
│   │   │   ├── cloudsqlgetinstances
│   │   │   │   ├── cloudsqlgetinstances_test.go
│   │   │   │   └── cloudsqlgetinstances.go
│   │   │   ├── cloudsqllistdatabases
│   │   │   │   ├── cloudsqllistdatabases_test.go
│   │   │   │   └── cloudsqllistdatabases.go
│   │   │   ├── cloudsqllistinstances
│   │   │   │   ├── cloudsqllistinstances_test.go
│   │   │   │   └── cloudsqllistinstances.go
│   │   │   └── cloudsqlwaitforoperation
│   │   │       ├── cloudsqlwaitforoperation_test.go
│   │   │       └── cloudsqlwaitforoperation.go
│   │   ├── cloudsqlmssql
│   │   │   └── cloudsqlmssqlcreateinstance
│   │   │       ├── cloudsqlmssqlcreateinstance_test.go
│   │   │       └── cloudsqlmssqlcreateinstance.go
│   │   ├── cloudsqlmysql
│   │   │   └── cloudsqlmysqlcreateinstance
│   │   │       ├── cloudsqlmysqlcreateinstance_test.go
│   │   │       └── cloudsqlmysqlcreateinstance.go
│   │   ├── cloudsqlpg
│   │   │   └── cloudsqlpgcreateinstances
│   │   │       ├── cloudsqlpgcreateinstances_test.go
│   │   │       └── cloudsqlpgcreateinstances.go
│   │   ├── common_test.go
│   │   ├── common.go
│   │   ├── couchbase
│   │   │   ├── couchbase_test.go
│   │   │   └── couchbase.go
│   │   ├── dataform
│   │   │   └── dataformcompilelocal
│   │   │       ├── dataformcompilelocal_test.go
│   │   │       └── dataformcompilelocal.go
│   │   ├── dataplex
│   │   │   ├── dataplexlookupentry
│   │   │   │   ├── dataplexlookupentry_test.go
│   │   │   │   └── dataplexlookupentry.go
│   │   │   ├── dataplexsearchaspecttypes
│   │   │   │   ├── dataplexsearchaspecttypes_test.go
│   │   │   │   └── dataplexsearchaspecttypes.go
│   │   │   └── dataplexsearchentries
│   │   │       ├── dataplexsearchentries_test.go
│   │   │       └── dataplexsearchentries.go
│   │   ├── dgraph
│   │   │   ├── dgraph_test.go
│   │   │   └── dgraph.go
│   │   ├── firebird
│   │   │   ├── firebirdexecutesql
│   │   │   │   ├── firebirdexecutesql_test.go
│   │   │   │   └── firebirdexecutesql.go
│   │   │   └── firebirdsql
│   │   │       ├── firebirdsql_test.go
│   │   │       └── firebirdsql.go
│   │   ├── firestore
│   │   │   ├── firestoreadddocuments
│   │   │   │   ├── firestoreadddocuments_test.go
│   │   │   │   └── firestoreadddocuments.go
│   │   │   ├── firestoredeletedocuments
│   │   │   │   ├── firestoredeletedocuments_test.go
│   │   │   │   └── firestoredeletedocuments.go
│   │   │   ├── firestoregetdocuments
│   │   │   │   ├── firestoregetdocuments_test.go
│   │   │   │   └── firestoregetdocuments.go
│   │   │   ├── firestoregetrules
│   │   │   │   ├── firestoregetrules_test.go
│   │   │   │   └── firestoregetrules.go
│   │   │   ├── firestorelistcollections
│   │   │   │   ├── firestorelistcollections_test.go
│   │   │   │   └── firestorelistcollections.go
│   │   │   ├── firestorequery
│   │   │   │   ├── firestorequery_test.go
│   │   │   │   └── firestorequery.go
│   │   │   ├── firestorequerycollection
│   │   │   │   ├── firestorequerycollection_test.go
│   │   │   │   └── firestorequerycollection.go
│   │   │   ├── firestoreupdatedocument
│   │   │   │   ├── firestoreupdatedocument_test.go
│   │   │   │   └── firestoreupdatedocument.go
│   │   │   ├── firestorevalidaterules
│   │   │   │   ├── firestorevalidaterules_test.go
│   │   │   │   └── firestorevalidaterules.go
│   │   │   └── util
│   │   │       ├── converter_test.go
│   │   │       ├── converter.go
│   │   │       ├── validator_test.go
│   │   │       └── validator.go
│   │   ├── http
│   │   │   ├── http_test.go
│   │   │   └── http.go
│   │   ├── http_method.go
│   │   ├── looker
│   │   │   ├── lookeradddashboardelement
│   │   │   │   ├── lookeradddashboardelement_test.go
│   │   │   │   └── lookeradddashboardelement.go
│   │   │   ├── lookercommon
│   │   │   │   ├── lookercommon_test.go
│   │   │   │   └── lookercommon.go
│   │   │   ├── lookerconversationalanalytics
│   │   │   │   ├── lookerconversationalanalytics_test.go
│   │   │   │   └── lookerconversationalanalytics.go
│   │   │   ├── lookergetdashboards
│   │   │   │   ├── lookergetdashboards_test.go
│   │   │   │   └── lookergetdashboards.go
│   │   │   ├── lookergetdimensions
│   │   │   │   ├── lookergetdimensions_test.go
│   │   │   │   └── lookergetdimensions.go
│   │   │   ├── lookergetexplores
│   │   │   │   ├── lookergetexplores_test.go
│   │   │   │   └── lookergetexplores.go
│   │   │   ├── lookergetfilters
│   │   │   │   ├── lookergetfilters_test.go
│   │   │   │   └── lookergetfilters.go
│   │   │   ├── lookergetlooks
│   │   │   │   ├── lookergetlooks_test.go
│   │   │   │   └── lookergetlooks.go
│   │   │   ├── lookergetmeasures
│   │   │   │   ├── lookergetmeasures_test.go
│   │   │   │   └── lookergetmeasures.go
│   │   │   ├── lookergetmodels
│   │   │   │   ├── lookergetmodels_test.go
│   │   │   │   └── lookergetmodels.go
│   │   │   ├── lookergetparameters
│   │   │   │   ├── lookergetparameters_test.go
│   │   │   │   └── lookergetparameters.go
│   │   │   ├── lookerhealthanalyze
│   │   │   │   ├── lookerhealthanalyze_test.go
│   │   │   │   └── lookerhealthanalyze.go
│   │   │   ├── lookerhealthpulse
│   │   │   │   ├── lookerhealthpulse_test.go
│   │   │   │   └── lookerhealthpulse.go
│   │   │   ├── lookerhealthvacuum
│   │   │   │   ├── lookerhealthvacuum_test.go
│   │   │   │   └── lookerhealthvacuum.go
│   │   │   ├── lookermakedashboard
│   │   │   │   ├── lookermakedashboard_test.go
│   │   │   │   └── lookermakedashboard.go
│   │   │   ├── lookermakelook
│   │   │   │   ├── lookermakelook_test.go
│   │   │   │   └── lookermakelook.go
│   │   │   ├── lookerquery
│   │   │   │   ├── lookerquery_test.go
│   │   │   │   └── lookerquery.go
│   │   │   ├── lookerquerysql
│   │   │   │   ├── lookerquerysql_test.go
│   │   │   │   └── lookerquerysql.go
│   │   │   ├── lookerqueryurl
│   │   │   │   ├── lookerqueryurl_test.go
│   │   │   │   └── lookerqueryurl.go
│   │   │   └── lookerrunlook
│   │   │       ├── lookerrunlook_test.go
│   │   │       └── lookerrunlook.go
│   │   ├── mongodb
│   │   │   ├── mongodbaggregate
│   │   │   │   ├── mongodbaggregate_test.go
│   │   │   │   └── mongodbaggregate.go
│   │   │   ├── mongodbdeletemany
│   │   │   │   ├── mongodbdeletemany_test.go
│   │   │   │   └── mongodbdeletemany.go
│   │   │   ├── mongodbdeleteone
│   │   │   │   ├── mongodbdeleteone_test.go
│   │   │   │   └── mongodbdeleteone.go
│   │   │   ├── mongodbfind
│   │   │   │   ├── mongodbfind_test.go
│   │   │   │   └── mongodbfind.go
│   │   │   ├── mongodbfindone
│   │   │   │   ├── mongodbfindone_test.go
│   │   │   │   └── mongodbfindone.go
│   │   │   ├── mongodbinsertmany
│   │   │   │   ├── mongodbinsertmany_test.go
│   │   │   │   └── mongodbinsertmany.go
│   │   │   ├── mongodbinsertone
│   │   │   │   ├── mongodbinsertone_test.go
│   │   │   │   └── mongodbinsertone.go
│   │   │   ├── mongodbupdatemany
│   │   │   │   ├── mongodbupdatemany_test.go
│   │   │   │   └── mongodbupdatemany.go
│   │   │   └── mongodbupdateone
│   │   │       ├── mongodbupdateone_test.go
│   │   │       └── mongodbupdateone.go
│   │   ├── mssql
│   │   │   ├── mssqlexecutesql
│   │   │   │   ├── mssqlexecutesql_test.go
│   │   │   │   └── mssqlexecutesql.go
│   │   │   ├── mssqllisttables
│   │   │   │   ├── mssqllisttables_test.go
│   │   │   │   └── mssqllisttables.go
│   │   │   └── mssqlsql
│   │   │       ├── mssqlsql_test.go
│   │   │       └── mssqlsql.go
│   │   ├── mysql
│   │   │   ├── mysqlcommon
│   │   │   │   └── mysqlcommon.go
│   │   │   ├── mysqlexecutesql
│   │   │   │   ├── mysqlexecutesql_test.go
│   │   │   │   └── mysqlexecutesql.go
│   │   │   ├── mysqllistactivequeries
│   │   │   │   ├── mysqllistactivequeries_test.go
│   │   │   │   └── mysqllistactivequeries.go
│   │   │   ├── mysqllisttablefragmentation
│   │   │   │   ├── mysqllisttablefragmentation_test.go
│   │   │   │   └── mysqllisttablefragmentation.go
│   │   │   ├── mysqllisttables
│   │   │   │   ├── mysqllisttables_test.go
│   │   │   │   └── mysqllisttables.go
│   │   │   ├── mysqllisttablesmissinguniqueindexes
│   │   │   │   ├── mysqllisttablesmissinguniqueindexes_test.go
│   │   │   │   └── mysqllisttablesmissinguniqueindexes.go
│   │   │   └── mysqlsql
│   │   │       ├── mysqlsql_test.go
│   │   │       └── mysqlsql.go
│   │   ├── neo4j
│   │   │   ├── neo4jcypher
│   │   │   │   ├── neo4jcypher_test.go
│   │   │   │   └── neo4jcypher.go
│   │   │   ├── neo4jexecutecypher
│   │   │   │   ├── classifier
│   │   │   │   │   ├── classifier_test.go
│   │   │   │   │   └── classifier.go
│   │   │   │   ├── neo4jexecutecypher_test.go
│   │   │   │   └── neo4jexecutecypher.go
│   │   │   └── neo4jschema
│   │   │       ├── cache
│   │   │       │   ├── cache_test.go
│   │   │       │   └── cache.go
│   │   │       ├── helpers
│   │   │       │   ├── helpers_test.go
│   │   │       │   └── helpers.go
│   │   │       ├── neo4jschema_test.go
│   │   │       ├── neo4jschema.go
│   │   │       └── types
│   │   │           └── types.go
│   │   ├── oceanbase
│   │   │   ├── oceanbaseexecutesql
│   │   │   │   ├── oceanbaseexecutesql_test.go
│   │   │   │   └── oceanbaseexecutesql.go
│   │   │   └── oceanbasesql
│   │   │       ├── oceanbasesql_test.go
│   │   │       └── oceanbasesql.go
│   │   ├── oracle
│   │   │   ├── oracleexecutesql
│   │   │   │   └── oracleexecutesql.go
│   │   │   └── oraclesql
│   │   │       └── oraclesql.go
│   │   ├── parameters_test.go
│   │   ├── parameters.go
│   │   ├── postgres
│   │   │   ├── postgresexecutesql
│   │   │   │   ├── postgresexecutesql_test.go
│   │   │   │   └── postgresexecutesql.go
│   │   │   ├── postgreslistactivequeries
│   │   │   │   ├── postgreslistactivequeries_test.go
│   │   │   │   └── postgreslistactivequeries.go
│   │   │   ├── postgreslistavailableextensions
│   │   │   │   ├── postgreslistavailableextensions_test.go
│   │   │   │   └── postgreslistavailableextensions.go
│   │   │   ├── postgreslistinstalledextensions
│   │   │   │   ├── postgreslistinstalledextensions_test.go
│   │   │   │   └── postgreslistinstalledextensions.go
│   │   │   ├── postgreslisttables
│   │   │   │   ├── postgreslisttables_test.go
│   │   │   │   └── postgreslisttables.go
│   │   │   └── postgressql
│   │   │       ├── postgressql_test.go
│   │   │       └── postgressql.go
│   │   ├── redis
│   │   │   ├── redis_test.go
│   │   │   └── redis.go
│   │   ├── spanner
│   │   │   ├── spannerexecutesql
│   │   │   │   ├── spannerexecutesql_test.go
│   │   │   │   └── spannerexecutesql.go
│   │   │   ├── spannerlisttables
│   │   │   │   ├── spannerlisttables_test.go
│   │   │   │   └── spannerlisttables.go
│   │   │   └── spannersql
│   │   │       ├── spanner_test.go
│   │   │       └── spannersql.go
│   │   ├── sqlite
│   │   │   ├── sqliteexecutesql
│   │   │   │   ├── sqliteexecutesql_test.go
│   │   │   │   └── sqliteexecutesql.go
│   │   │   └── sqlitesql
│   │   │       ├── sqlitesql_test.go
│   │   │       └── sqlitesql.go
│   │   ├── tidb
│   │   │   ├── tidbexecutesql
│   │   │   │   ├── tidbexecutesql_test.go
│   │   │   │   └── tidbexecutesql.go
│   │   │   └── tidbsql
│   │   │       ├── tidbsql_test.go
│   │   │       └── tidbsql.go
│   │   ├── tools_test.go
│   │   ├── tools.go
│   │   ├── toolsets.go
│   │   ├── trino
│   │   │   ├── trinoexecutesql
│   │   │   │   ├── trinoexecutesql_test.go
│   │   │   │   └── trinoexecutesql.go
│   │   │   └── trinosql
│   │   │       ├── trinosql_test.go
│   │   │       └── trinosql.go
│   │   ├── utility
│   │   │   └── wait
│   │   │       ├── wait_test.go
│   │   │       └── wait.go
│   │   ├── valkey
│   │   │   ├── valkey_test.go
│   │   │   └── valkey.go
│   │   └── yugabytedbsql
│   │       ├── yugabytedbsql_test.go
│   │       └── yugabytedbsql.go
│   └── util
│       └── util.go
├── LICENSE
├── logo.png
├── main.go
├── README.md
└── tests
    ├── alloydb
    │   ├── alloydb_integration_test.go
    │   └── alloydb_wait_for_operation_test.go
    ├── alloydbainl
    │   └── alloydb_ai_nl_integration_test.go
    ├── alloydbpg
    │   └── alloydb_pg_integration_test.go
    ├── auth.go
    ├── bigquery
    │   └── bigquery_integration_test.go
    ├── bigtable
    │   └── bigtable_integration_test.go
    ├── cassandra
    │   └── cassandra_integration_test.go
    ├── clickhouse
    │   └── clickhouse_integration_test.go
    ├── cloudmonitoring
    │   └── cloud_monitoring_integration_test.go
    ├── cloudsql
    │   ├── cloud_sql_create_database_test.go
    │   ├── cloud_sql_create_users_test.go
    │   ├── cloud_sql_get_instances_test.go
    │   ├── cloud_sql_list_databases_test.go
    │   ├── cloudsql_list_instances_test.go
    │   └── cloudsql_wait_for_operation_test.go
    ├── cloudsqlmssql
    │   ├── cloud_sql_mssql_create_instance_integration_test.go
    │   └── cloud_sql_mssql_integration_test.go
    ├── cloudsqlmysql
    │   ├── cloud_sql_mysql_create_instance_integration_test.go
    │   └── cloud_sql_mysql_integration_test.go
    ├── cloudsqlpg
    │   ├── cloud_sql_pg_create_instances_test.go
    │   └── cloud_sql_pg_integration_test.go
    ├── common.go
    ├── couchbase
    │   └── couchbase_integration_test.go
    ├── dataform
    │   └── dataform_integration_test.go
    ├── dataplex
    │   └── dataplex_integration_test.go
    ├── dgraph
    │   └── dgraph_integration_test.go
    ├── firebird
    │   └── firebird_integration_test.go
    ├── firestore
    │   └── firestore_integration_test.go
    ├── http
    │   └── http_integration_test.go
    ├── looker
    │   └── looker_integration_test.go
    ├── mongodb
    │   └── mongodb_integration_test.go
    ├── mssql
    │   └── mssql_integration_test.go
    ├── mysql
    │   └── mysql_integration_test.go
    ├── neo4j
    │   └── neo4j_integration_test.go
    ├── oceanbase
    │   └── oceanbase_integration_test.go
    ├── option.go
    ├── oracle
    │   └── oracle_integration_test.go
    ├── postgres
    │   └── postgres_integration_test.go
    ├── redis
    │   └── redis_test.go
    ├── server.go
    ├── source.go
    ├── spanner
    │   └── spanner_integration_test.go
    ├── sqlite
    │   └── sqlite_integration_test.go
    ├── tidb
    │   └── tidb_integration_test.go
    ├── tool.go
    ├── trino
    │   └── trino_integration_test.go
    ├── utility
    │   └── wait_integration_test.go
    ├── valkey
    │   └── valkey_test.go
    └── yugabytedb
        └── yugabytedb_integration_test.go
```
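
A note on a pattern that recurs in the test file reproduced below: each integration test under `tests/` reads its connection settings from environment variables and fails fast when any is missing, using a Go `switch` on the empty string that selects the first variable whose value is `""`. Here is a minimal, self-contained sketch of that guard (the variable set and messages are illustrative, not part of the repository):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	host := os.Getenv("FIREBIRD_HOST")
	port := os.Getenv("FIREBIRD_PORT")

	// A switch whose tag is "" runs the first case whose value equals
	// the empty string -- that is, the first variable that is unset.
	// If every variable is set, the default branch runs.
	switch "" {
	case host:
		fmt.Println("'FIREBIRD_HOST' not set")
	case port:
		fmt.Println("'FIREBIRD_PORT' not set")
	default:
		fmt.Printf("connecting to %s:%s\n", host, port)
	}
}
```

In the actual test below, each case calls `t.Fatal` instead of printing, so a missing variable aborts the test immediately.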

# Files

--------------------------------------------------------------------------------
/tests/firebird/firebird_integration_test.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package firebird
 16 | 
 17 | import (
 18 | 	"context"
 19 | 	"database/sql"
 20 | 	"fmt"
 21 | 	"os"
 22 | 	"regexp"
 23 | 	"strings"
 24 | 	"testing"
 25 | 	"time"
 26 | 
 27 | 	"github.com/google/uuid"
 28 | 	"github.com/googleapis/genai-toolbox/internal/testutils"
 29 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 30 | 	"github.com/googleapis/genai-toolbox/tests"
 31 | 	_ "github.com/nakagami/firebirdsql"
 32 | )
 33 | 
 34 | var (
 35 | 	FirebirdSourceKind = "firebird"
 36 | 	FirebirdToolKind   = "firebird-sql"
 37 | 	FirebirdDatabase   = os.Getenv("FIREBIRD_DATABASE")
 38 | 	FirebirdHost       = os.Getenv("FIREBIRD_HOST")
 39 | 	FirebirdPort       = os.Getenv("FIREBIRD_PORT")
 40 | 	FirebirdUser       = os.Getenv("FIREBIRD_USER")
 41 | 	FirebirdPass       = os.Getenv("FIREBIRD_PASS")
 42 | )
 43 | 
 44 | func getFirebirdVars(t *testing.T) map[string]any {
  45 | 	switch "" { // matches the first case whose value is "", i.e. the first unset required env var
 46 | 	case FirebirdDatabase:
 47 | 		t.Fatal("'FIREBIRD_DATABASE' not set")
 48 | 	case FirebirdHost:
 49 | 		t.Fatal("'FIREBIRD_HOST' not set")
 50 | 	case FirebirdPort:
 51 | 		t.Fatal("'FIREBIRD_PORT' not set")
 52 | 	case FirebirdUser:
 53 | 		t.Fatal("'FIREBIRD_USER' not set")
 54 | 	case FirebirdPass:
 55 | 		t.Fatal("'FIREBIRD_PASS' not set")
 56 | 	}
 57 | 
 58 | 	return map[string]any{
 59 | 		"kind":     FirebirdSourceKind,
 60 | 		"host":     FirebirdHost,
 61 | 		"port":     FirebirdPort,
 62 | 		"database": FirebirdDatabase,
 63 | 		"user":     FirebirdUser,
 64 | 		"password": FirebirdPass,
 65 | 	}
 66 | }
 67 | 
 68 | func initFirebirdConnection(host, port, user, pass, dbname string) (*sql.DB, error) {
 69 | 	dsn := fmt.Sprintf("%s:%s@%s:%s/%s", user, pass, host, port, dbname)
 70 | 	db, err := sql.Open("firebirdsql", dsn)
 71 | 	if err != nil {
 72 | 		return nil, fmt.Errorf("unable to create connection pool: %w", err)
 73 | 	}
 74 | 
 75 | 	// Configure connection pool to prevent deadlocks
 76 | 	db.SetMaxOpenConns(5)
 77 | 	db.SetMaxIdleConns(2)
 78 | 	db.SetConnMaxLifetime(5 * time.Minute)
 79 | 	db.SetConnMaxIdleTime(1 * time.Minute)
 80 | 
 81 | 	return db, nil
 82 | }
 83 | 
 84 | func TestFirebirdToolEndpoints(t *testing.T) {
 85 | 	sourceConfig := getFirebirdVars(t)
 86 | 	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
 87 | 	defer cancel()
 88 | 
 89 | 	var args []string
 90 | 
 91 | 	db, err := initFirebirdConnection(FirebirdHost, FirebirdPort, FirebirdUser, FirebirdPass, FirebirdDatabase)
 92 | 	if err != nil {
 93 | 		t.Fatalf("unable to create firebird connection pool: %s", err)
 94 | 	}
 95 | 	defer db.Close()
 96 | 
 97 | 	shortUUID := strings.ReplaceAll(uuid.New().String(), "-", "")[:8]
 98 | 	tableNameParam := fmt.Sprintf("param_table_%s", shortUUID)
 99 | 	tableNameAuth := fmt.Sprintf("auth_table_%s", shortUUID)
100 | 	tableNameTemplateParam := fmt.Sprintf("template_param_table_%s", shortUUID)
101 | 
102 | 	createParamTableStmts, insertParamTableStmt, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, paramTestParams := getFirebirdParamToolInfo(tableNameParam)
103 | 	teardownTable1 := setupFirebirdTable(t, ctx, db, createParamTableStmts, insertParamTableStmt, tableNameParam, paramTestParams)
104 | 	defer teardownTable1(t)
105 | 
106 | 	createAuthTableStmts, insertAuthTableStmt, authToolStmt, authTestParams := getFirebirdAuthToolInfo(tableNameAuth)
107 | 	teardownTable2 := setupFirebirdTable(t, ctx, db, createAuthTableStmts, insertAuthTableStmt, tableNameAuth, authTestParams)
108 | 	defer teardownTable2(t)
109 | 
110 | 	toolsFile := getFirebirdToolsConfig(sourceConfig, FirebirdToolKind, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, authToolStmt)
111 | 	toolsFile = addFirebirdExecuteSqlConfig(t, toolsFile)
112 | 	tmplSelectCombined, tmplSelectFilterCombined := getFirebirdTmplToolStatement()
113 | 	toolsFile = addFirebirdTemplateParamConfig(t, toolsFile, FirebirdToolKind, tmplSelectCombined, tmplSelectFilterCombined)
114 | 
115 | 	cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
116 | 	if err != nil {
117 | 		t.Fatalf("command initialization returned an error: %s", err)
118 | 	}
119 | 	defer cleanup()
120 | 
121 | 	waitCtx, cancelWait := context.WithTimeout(ctx, 10*time.Second)
122 | 	defer cancelWait()
123 | 	out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
124 | 	if err != nil {
125 | 		t.Logf("toolbox command logs: \n%s", out)
126 | 		t.Fatalf("toolbox didn't start successfully: %s", err)
127 | 	}
128 | 
129 | 	// Get configs for tests
130 | 	select1Want, mcpMyFailToolWant, createTableStatement, mcpSelect1Want := getFirebirdWants()
131 | 	nullWant := `[{"id":4,"name":null}]`
132 | 	select1Statement := `"SELECT 1 AS \"constant\" FROM RDB$DATABASE;"`
133 | 	templateParamCreateColArray := `["id INTEGER","name VARCHAR(255)","age INTEGER"]`
134 | 
135 | 	// Run tests
136 | 	tests.RunToolGetTest(t)
137 | 	tests.RunToolInvokeTest(t, select1Want,
138 | 		tests.WithNullWant(nullWant),
139 | 		tests.DisableArrayTest())
140 | 	tests.RunMCPToolCallMethod(t, mcpMyFailToolWant, mcpSelect1Want)
141 | 	tests.RunExecuteSqlToolInvokeTest(t, createTableStatement, select1Want, tests.WithSelect1Statement(select1Statement))
142 | 	tests.RunToolInvokeWithTemplateParameters(t, tableNameTemplateParam,
143 | 		tests.WithCreateColArray(templateParamCreateColArray))
144 | }
145 | 
146 | func setupFirebirdTable(t *testing.T, ctx context.Context, db *sql.DB, createStatements []string, insertStatement, tableName string, params []any) func(*testing.T) {
147 | 	err := db.PingContext(ctx)
148 | 	if err != nil {
149 | 		t.Fatalf("unable to connect to test database: %s", err)
150 | 	}
151 | 
152 | 	for _, stmt := range createStatements {
153 | 		_, err = db.ExecContext(ctx, stmt)
154 | 		if err != nil {
155 | 			t.Fatalf("unable to execute create statement for table %s: %s\nStatement: %s", tableName, err, stmt)
156 | 		}
157 | 	}
158 | 
159 | 	if insertStatement != "" && len(params) > 0 {
160 | 		stmt, err := db.PrepareContext(ctx, insertStatement)
161 | 		if err != nil {
162 | 			t.Fatalf("unable to prepare insert statement: %v", err)
163 | 		}
164 | 		defer stmt.Close()
165 | 
166 | 		numPlaceholders := strings.Count(insertStatement, "?")
167 | 		if numPlaceholders == 0 {
168 | 			t.Fatalf("insert statement has no placeholders '?' but params were provided")
169 | 		}
 170 | 		for i := 0; i < len(params); i += numPlaceholders { // insert one row per group of placeholder values
171 | 			end := i + numPlaceholders
172 | 			if end > len(params) {
173 | 				end = len(params)
174 | 			}
175 | 			batchParams := params[i:end]
176 | 
177 | 			_, err = stmt.ExecContext(ctx, batchParams...)
178 | 			if err != nil {
179 | 				t.Fatalf("unable to insert test data row with params %v: %v", batchParams, err)
180 | 			}
181 | 		}
182 | 	}
183 | 
184 | 	return func(t *testing.T) {
185 | 		// Close the main connection to free up resources
186 | 		db.Close()
187 | 
188 | 		// Helper function to check if error indicates object doesn't exist
189 | 		isNotFoundError := func(err error) bool {
190 | 			if err == nil {
191 | 				return false
192 | 			}
193 | 			errMsg := strings.ToLower(err.Error())
194 | 			return strings.Contains(errMsg, "does not exist") ||
195 | 				strings.Contains(errMsg, "not found") ||
196 | 				strings.Contains(errMsg, "is not defined") ||
197 | 				strings.Contains(errMsg, "unknown") ||
198 | 				strings.Contains(errMsg, "invalid")
199 | 		}
200 | 
201 | 		// Create dedicated cleanup connection with minimal configuration
202 | 		createCleanupConnection := func() (*sql.DB, error) {
203 | 			dsn := fmt.Sprintf("%s:%s@%s:%s/%s", FirebirdUser, FirebirdPass, FirebirdHost, FirebirdPort, FirebirdDatabase)
204 | 
205 | 			cleanupDb, err := sql.Open("firebirdsql", dsn)
206 | 			if err != nil {
207 | 				return nil, err
208 | 			}
209 | 
210 | 			// Ultra minimal connection pool for cleanup only
211 | 			cleanupDb.SetMaxOpenConns(1)
212 | 			cleanupDb.SetMaxIdleConns(0)
213 | 			cleanupDb.SetConnMaxLifetime(5 * time.Second)
214 | 			cleanupDb.SetConnMaxIdleTime(1 * time.Second)
215 | 
216 | 			return cleanupDb, nil
217 | 		}
218 | 
219 | 		// Drop each object with its own dedicated connection and aggressive timeout
220 | 		dropObjects := []struct {
221 | 			objType string
222 | 			query   string
223 | 		}{
224 | 			{"trigger", fmt.Sprintf("DROP TRIGGER BI_%s_ID", tableName)},
225 | 			{"table", fmt.Sprintf("DROP TABLE %s", tableName)},
226 | 			{"generator", fmt.Sprintf("DROP GENERATOR GEN_%s_ID", tableName)},
227 | 		}
228 | 
229 | 		for _, obj := range dropObjects {
230 | 			cleanupDb, err := createCleanupConnection()
231 | 			if err != nil {
232 | 				t.Logf("Failed to create cleanup connection for %s: %s", obj.objType, err)
233 | 				continue
234 | 			}
235 | 
236 | 			// Use aggressive short timeout for each operation
237 | 			ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
238 | 			_, dropErr := cleanupDb.ExecContext(ctx, obj.query)
239 | 			cancel()
240 | 			cleanupDb.Close()
241 | 
242 | 			if dropErr == nil {
243 | 				t.Logf("Successfully dropped %s", obj.objType)
244 | 			} else if isNotFoundError(dropErr) {
245 | 				t.Logf("%s does not exist, skipping", obj.objType)
246 | 			} else if ctx.Err() == context.DeadlineExceeded {
247 | 				t.Logf("Timeout dropping %s (3s limit exceeded) - continuing anyway", obj.objType)
248 | 			} else {
249 | 				t.Logf("Failed to drop %s: %s - continuing anyway", obj.objType, dropErr)
250 | 			}
251 | 
252 | 			// Small delay between operations to reduce contention
253 | 			time.Sleep(100 * time.Millisecond)
254 | 		}
255 | 	}
256 | }
257 | 
258 | func getFirebirdParamToolInfo(tableName string) ([]string, string, string, string, string, string, []any) {
259 | 	createStatements := []string{
260 | 		fmt.Sprintf("CREATE TABLE %s (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR(255));", tableName),
261 | 		fmt.Sprintf("CREATE GENERATOR GEN_%s_ID;", tableName),
262 | 		fmt.Sprintf(`
263 | 			CREATE TRIGGER BI_%s_ID FOR %s
264 | 			ACTIVE BEFORE INSERT POSITION 0
265 | 			AS
266 | 			BEGIN
267 | 				IF (NEW.id IS NULL) THEN
268 | 					NEW.id = GEN_ID(GEN_%s_ID, 1);
269 | 			END;
270 | 		`, tableName, tableName, tableName),
271 | 	}
272 | 
273 | 	insertStatement := fmt.Sprintf("INSERT INTO %s (name) VALUES (?);", tableName)
274 | 	toolStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE id = ? OR name = ?;", tableName)
275 | 	idParamStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE id = ?;", tableName)
276 | 	nameParamStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE name IS NOT DISTINCT FROM ?;", tableName)
277 | 	// Firebird doesn't support array parameters in IN clause the same way as other databases
278 | 	// We'll use a simpler approach for testing
279 | 	arrayToolStatement := fmt.Sprintf("SELECT id AS \"id\", name AS \"name\" FROM %s WHERE id = ? ORDER BY id;", tableName)
280 | 
281 | 	params := []any{"Alice", "Jane", "Sid", nil}
282 | 	return createStatements, insertStatement, toolStatement, idParamStatement, nameParamStatement, arrayToolStatement, params
283 | }
284 | 
285 | func getFirebirdAuthToolInfo(tableName string) ([]string, string, string, []any) {
286 | 	createStatements := []string{
287 | 		fmt.Sprintf("CREATE TABLE %s (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR(255), email VARCHAR(255));", tableName),
288 | 		fmt.Sprintf("CREATE GENERATOR GEN_%s_ID;", tableName),
289 | 		fmt.Sprintf(`
290 | 			CREATE TRIGGER BI_%s_ID FOR %s
291 | 			ACTIVE BEFORE INSERT POSITION 0
292 | 			AS
293 | 			BEGIN
294 | 				IF (NEW.id IS NULL) THEN
295 | 					NEW.id = GEN_ID(GEN_%s_ID, 1);
296 | 			END;
297 | 		`, tableName, tableName, tableName),
298 | 	}
299 | 
300 | 	insertStatement := fmt.Sprintf("INSERT INTO %s (name, email) VALUES (?, ?)", tableName)
301 | 	toolStatement := fmt.Sprintf("SELECT name AS \"name\" FROM %s WHERE email = ?;", tableName)
302 | 	params := []any{"Alice", tests.ServiceAccountEmail, "Jane", "[email protected]"}
303 | 	return createStatements, insertStatement, toolStatement, params
304 | }
305 | 
306 | func getFirebirdWants() (string, string, string, string) {
307 | 	select1Want := `[{"constant":1}]`
308 | 	mcpMyFailToolWant := `{"jsonrpc":"2.0","id":"invoke-fail-tool","result":{"content":[{"type":"text","text":"unable to execute query: Dynamic SQL Error\nSQL error code = -104\nToken unknown - line 1, column 1\nSELEC\n"}],"isError":true}}`
309 | 	createTableStatement := `"CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR(50))"`
310 | 	mcpSelect1Want := `{"jsonrpc":"2.0","id":"invoke my-auth-required-tool","result":{"content":[{"type":"text","text":"{\"constant\":1}"}]}}`
311 | 	return select1Want, mcpMyFailToolWant, createTableStatement, mcpSelect1Want
312 | }
313 | 
314 | func getFirebirdToolsConfig(sourceConfig map[string]any, toolKind, paramToolStatement, idParamToolStmt, nameParamToolStmt, arrayToolStatement, authToolStatement string) map[string]any {
315 | 	toolsFile := tests.GetToolsConfig(sourceConfig, toolKind, paramToolStatement, idParamToolStmt, nameParamToolStmt, arrayToolStatement, authToolStatement)
316 | 
317 | 	toolsMap, ok := toolsFile["tools"].(map[string]any)
318 | 	if !ok {
319 | 		return toolsFile
320 | 	}
321 | 
322 | 	if simpleTool, ok := toolsMap["my-simple-tool"].(map[string]any); ok {
323 | 		simpleTool["statement"] = "SELECT 1 AS \"constant\" FROM RDB$DATABASE;"
324 | 		toolsMap["my-simple-tool"] = simpleTool
325 | 	}
326 | 	if authRequiredTool, ok := toolsMap["my-auth-required-tool"].(map[string]any); ok {
327 | 		authRequiredTool["statement"] = "SELECT 1 AS \"constant\" FROM RDB$DATABASE;"
328 | 		toolsMap["my-auth-required-tool"] = authRequiredTool
329 | 	}
330 | 
331 | 	if arrayTool, ok := toolsMap["my-array-tool"].(map[string]any); ok {
332 | 		// Firebird array tool - accept array but use only first element for compatibility
333 | 		arrayTool["parameters"] = []any{
334 | 			map[string]any{
335 | 				"name":        "idArray",
336 | 				"type":        "array",
337 | 				"description": "ID array (Firebird will use first element only)",
338 | 				"items": map[string]any{
339 | 					"name":        "id",
340 | 					"type":        "integer",
341 | 					"description": "ID",
342 | 				},
343 | 			},
344 | 		}
345 | 		// Statement is already defined in arrayToolStatement parameter
346 | 		toolsMap["my-array-tool"] = arrayTool
347 | 	}
348 | 
349 | 	toolsFile["tools"] = toolsMap
350 | 	return toolsFile
351 | }
352 | 
353 | func addFirebirdTemplateParamConfig(t *testing.T, config map[string]any, toolKind, tmplSelectCombined, tmplSelectFilterCombined string) map[string]any {
354 | 	toolsMap, ok := config["tools"].(map[string]any)
355 | 	if !ok {
356 | 		t.Fatalf("unable to get tools from config")
357 | 	}
358 | 
359 | 	// Firebird-specific template parameter tools with compatible syntax
360 | 	toolsMap["create-table-templateParams-tool"] = map[string]any{
361 | 		"kind":        toolKind,
362 | 		"source":      "my-instance",
363 | 		"description": "Create table tool with template parameters",
364 | 		"statement":   "CREATE TABLE {{.tableName}} ({{array .columns}})",
365 | 		"templateParameters": []tools.Parameter{
366 | 			tools.NewStringParameter("tableName", "some description"),
367 | 			tools.NewArrayParameter("columns", "The columns to create", tools.NewStringParameter("column", "A column name that will be created")),
368 | 		},
369 | 	}
370 | 	toolsMap["insert-table-templateParams-tool"] = map[string]any{
371 | 		"kind":        toolKind,
372 | 		"source":      "my-instance",
373 | 		"description": "Insert table tool with template parameters",
374 | 		"statement":   "INSERT INTO {{.tableName}} ({{array .columns}}) VALUES ({{.values}})",
375 | 		"templateParameters": []tools.Parameter{
376 | 			tools.NewStringParameter("tableName", "some description"),
377 | 			tools.NewArrayParameter("columns", "The columns to insert into", tools.NewStringParameter("column", "A column name that will be returned from the query.")),
378 | 			tools.NewStringParameter("values", "The values to insert as a comma separated string"),
379 | 		},
380 | 	}
381 | 	toolsMap["select-templateParams-tool"] = map[string]any{
382 | 		"kind":        toolKind,
383 | 		"source":      "my-instance",
384 | 		"description": "Select table tool with template parameters",
385 | 		"statement":   "SELECT id AS \"id\", name AS \"name\", age AS \"age\" FROM {{.tableName}}",
386 | 		"templateParameters": []tools.Parameter{
387 | 			tools.NewStringParameter("tableName", "some description"),
388 | 		},
389 | 	}
390 | 	toolsMap["select-templateParams-combined-tool"] = map[string]any{
391 | 		"kind":        toolKind,
392 | 		"source":      "my-instance",
393 | 		"description": "Select table tool with combined template parameters",
394 | 		"statement":   tmplSelectCombined,
395 | 		"parameters": []tools.Parameter{
396 | 			tools.NewIntParameter("id", "the id of the user"),
397 | 		},
398 | 		"templateParameters": []tools.Parameter{
399 | 			tools.NewStringParameter("tableName", "some description"),
400 | 		},
401 | 	}
402 | 	toolsMap["select-fields-templateParams-tool"] = map[string]any{
403 | 		"kind":        toolKind,
404 | 		"source":      "my-instance",
405 | 		"description": "Select specific fields tool with template parameters",
406 | 		"statement":   "SELECT name AS \"name\" FROM {{.tableName}}",
407 | 		"templateParameters": []tools.Parameter{
408 | 			tools.NewStringParameter("tableName", "some description"),
409 | 		},
410 | 	}
411 | 	toolsMap["select-filter-templateParams-combined-tool"] = map[string]any{
412 | 		"kind":        toolKind,
413 | 		"source":      "my-instance",
414 | 		"description": "Select table tool with filter template parameters",
415 | 		"statement":   tmplSelectFilterCombined,
416 | 		"parameters": []tools.Parameter{
417 | 			tools.NewStringParameter("name", "the name to filter by"),
418 | 		},
419 | 		"templateParameters": []tools.Parameter{
420 | 			tools.NewStringParameter("tableName", "some description"),
421 | 			tools.NewStringParameter("columnFilter", "some description"),
422 | 		},
423 | 	}
424 | 	// Firebird uses simple DROP TABLE syntax without IF EXISTS
425 | 	toolsMap["drop-table-templateParams-tool"] = map[string]any{
426 | 		"kind":        toolKind,
427 | 		"source":      "my-instance",
428 | 		"description": "Drop table tool with template parameters",
429 | 		"statement":   "DROP TABLE {{.tableName}}",
430 | 		"templateParameters": []tools.Parameter{
431 | 			tools.NewStringParameter("tableName", "some description"),
432 | 		},
433 | 	}
434 | 	config["tools"] = toolsMap
435 | 	return config
436 | }
437 | 
438 | func addFirebirdExecuteSqlConfig(t *testing.T, config map[string]any) map[string]any {
439 | 	tools, ok := config["tools"].(map[string]any)
440 | 	if !ok {
441 | 		t.Fatalf("unable to get tools from config")
442 | 	}
443 | 	tools["my-exec-sql-tool"] = map[string]any{
444 | 		"kind":        "firebird-execute-sql",
445 | 		"source":      "my-instance",
446 | 		"description": "Tool to execute sql",
447 | 	}
448 | 	tools["my-auth-exec-sql-tool"] = map[string]any{
449 | 		"kind":        "firebird-execute-sql",
450 | 		"source":      "my-instance",
451 | 		"description": "Tool to execute sql",
452 | 		"authRequired": []string{
453 | 			"my-google-auth",
454 | 		},
455 | 	}
456 | 	config["tools"] = tools
457 | 	return config
458 | }
459 | 
460 | func getFirebirdTmplToolStatement() (string, string) {
461 | 	tmplSelectCombined := "SELECT id AS \"id\", name AS \"name\", age AS \"age\" FROM {{.tableName}} WHERE id = ?"
462 | 	tmplSelectFilterCombined := "SELECT id AS \"id\", name AS \"name\", age AS \"age\" FROM {{.tableName}} WHERE {{.columnFilter}} = ?"
463 | 	return tmplSelectCombined, tmplSelectFilterCombined
464 | }
465 | 
```
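
getFirebirdToolsConfig and its siblings above all follow the same pattern: treat the shared tools config as a generic map[string]any, assert the nested shape, and swap in Firebird-compatible SQL. A minimal, self-contained sketch of that override pattern (the config shape is illustrative, not the repo's full schema):

```go
package main

import "fmt"

func main() {
	config := map[string]any{
		"tools": map[string]any{
			"my-simple-tool": map[string]any{"statement": "SELECT 1;"},
		},
	}
	toolsMap, ok := config["tools"].(map[string]any)
	if !ok {
		panic("unexpected config shape")
	}
	if tool, ok := toolsMap["my-simple-tool"].(map[string]any); ok {
		// Firebird requires a FROM clause, so select the constant from RDB$DATABASE.
		tool["statement"] = `SELECT 1 AS "constant" FROM RDB$DATABASE;`
		toolsMap["my-simple-tool"] = tool
	}
	fmt.Println(config)
}
```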

--------------------------------------------------------------------------------
/internal/tools/looker/lookerconversationalanalytics/lookerconversationalanalytics.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package lookerconversationalanalytics
 16 | 
 17 | import (
 18 | 	"bytes"
 19 | 	"context"
 20 | 	"encoding/json"
 21 | 	"fmt"
 22 | 	"io"
 23 | 	"net/http"
 24 | 	"net/url"
 25 | 	"strings"
 26 | 
 27 | 	yaml "github.com/goccy/go-yaml"
 28 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 29 | 	lookerds "github.com/googleapis/genai-toolbox/internal/sources/looker"
 30 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 31 | 	"github.com/googleapis/genai-toolbox/internal/util"
 32 | 	"github.com/looker-open-source/sdk-codegen/go/rtl"
 33 | 	"golang.org/x/oauth2"
 34 | )
 35 | 
 36 | const kind string = "looker-conversational-analytics"
 37 | 
 38 | const instructions = `**INSTRUCTIONS - FOLLOW THESE RULES:**
 39 | 1. **CONTENT:** Your answer should present the supporting data and then provide a conclusion based on that data.
 40 | 2. **OUTPUT FORMAT:** Your entire response MUST be in plain text format ONLY.
 41 | 3. **NO CHARTS:** You are STRICTLY FORBIDDEN from generating any charts, graphs, images, or any other form of visualization.`
 42 | 
 43 | func init() {
 44 | 	if !tools.Register(kind, newConfig) {
 45 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 46 | 	}
 47 | }
 48 | 
 49 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 50 | 	actual := Config{Name: name}
 51 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 52 | 		return nil, err
 53 | 	}
 54 | 	return actual, nil
 55 | }
 56 | 
 57 | type compatibleSource interface {
 58 | 	GetApiSettings() *rtl.ApiSettings
 59 | 	GoogleCloudTokenSourceWithScope(ctx context.Context, scope string) (oauth2.TokenSource, error)
 60 | 	GoogleCloudProject() string
 61 | 	GoogleCloudLocation() string
 62 | 	UseClientAuthorization() bool
 63 | }
 64 | 
 65 | // Structs for building the JSON payload
 66 | type UserMessage struct {
 67 | 	Text string `json:"text"`
 68 | }
 69 | type Message struct {
 70 | 	UserMessage UserMessage `json:"userMessage"`
 71 | }
 72 | type LookerExploreReference struct {
 73 | 	LookerInstanceUri string `json:"lookerInstanceUri"`
 74 | 	LookmlModel       string `json:"lookmlModel"`
 75 | 	Explore           string `json:"explore"`
 76 | }
 77 | type LookerExploreReferences struct {
 78 | 	ExploreReferences []LookerExploreReference `json:"exploreReferences"`
 79 | 	Credentials       Credentials              `json:"credentials,omitzero"`
 80 | }
 81 | type SecretBased struct {
 82 | 	ClientId     string `json:"clientId"`
 83 | 	ClientSecret string `json:"clientSecret"`
 84 | }
 85 | type TokenBased struct {
 86 | 	AccessToken string `json:"accessToken"`
 87 | }
 88 | type OAuthCredentials struct {
 89 | 	Secret SecretBased `json:"secret,omitzero"`
 90 | 	Token  TokenBased  `json:"token,omitzero"`
 91 | }
 92 | type Credentials struct {
 93 | 	OAuth OAuthCredentials `json:"oauth"`
 94 | }
 95 | type DatasourceReferences struct {
 96 | 	Looker LookerExploreReferences `json:"looker"`
 97 | }
 98 | type ImageOptions struct {
 99 | 	NoImage map[string]any `json:"noImage"`
100 | }
101 | type ChartOptions struct {
102 | 	Image ImageOptions `json:"image"`
103 | }
104 | type Python struct {
105 | 	Enabled bool `json:"enabled"`
106 | }
107 | type AnalysisOptions struct {
108 | 	Python Python `json:"python"`
109 | }
110 | type ConversationOptions struct {
111 | 	Chart    ChartOptions    `json:"chart,omitzero"`
112 | 	Analysis AnalysisOptions `json:"analysis,omitzero"`
113 | }
114 | type InlineContext struct {
115 | 	SystemInstruction    string               `json:"systemInstruction"`
116 | 	DatasourceReferences DatasourceReferences `json:"datasourceReferences"`
117 | 	Options              ConversationOptions  `json:"options"`
118 | }
119 | type CAPayload struct {
120 | 	Messages      []Message     `json:"messages"`
121 | 	InlineContext InlineContext `json:"inlineContext"`
122 | 	ClientIdEnum  string        `json:"clientIdEnum"`
123 | }
124 | 
125 | // validate compatible sources are still compatible
126 | var _ compatibleSource = &lookerds.Source{}
127 | 
128 | var compatibleSources = [...]string{lookerds.SourceKind}
129 | 
130 | type Config struct {
131 | 	Name         string   `yaml:"name" validate:"required"`
132 | 	Kind         string   `yaml:"kind" validate:"required"`
133 | 	Source       string   `yaml:"source" validate:"required"`
134 | 	Description  string   `yaml:"description" validate:"required"`
135 | 	AuthRequired []string `yaml:"authRequired"`
136 | }
137 | 
138 | // validate interface
139 | var _ tools.ToolConfig = Config{}
140 | 
141 | func (cfg Config) ToolConfigKind() string {
142 | 	return kind
143 | }
144 | 
145 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
146 | 	// verify source exists
147 | 	rawS, ok := srcs[cfg.Source]
148 | 	if !ok {
149 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
150 | 	}
151 | 
152 | 	// verify the source is compatible
153 | 	s, ok := rawS.(compatibleSource)
154 | 	if !ok {
155 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
156 | 	}
157 | 
158 | 	if s.GoogleCloudProject() == "" {
159 | 		return nil, fmt.Errorf("project must be defined for source to use with %q tool", kind)
160 | 	}
161 | 
162 | 	userQueryParameter := tools.NewStringParameter("user_query_with_context", "The user's question, potentially including conversation history and system instructions for context.")
163 | 
164 | 	exploreRefsDescription := `An array of at least one and up to 5 explore references, like [{'model': 'MODEL_NAME', 'explore': 'EXPLORE_NAME'}]`
165 | 	exploreRefsParameter := tools.NewArrayParameter(
166 | 		"explore_references",
167 | 		exploreRefsDescription,
168 | 		tools.NewMapParameter(
169 | 			"explore_reference",
170 | 			"An explore reference like {'model': 'MODEL_NAME', 'explore': 'EXPLORE_NAME'}",
171 | 			"",
172 | 		),
173 | 	)
174 | 
175 | 	parameters := tools.Parameters{userQueryParameter, exploreRefsParameter}
176 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, parameters)
177 | 
178 | 	// Get cloud-platform token source for Gemini Data Analytics API during initialization
179 | 	ctx := context.Background()
180 | 	ts, err := s.GoogleCloudTokenSourceWithScope(ctx, "https://www.googleapis.com/auth/cloud-platform")
181 | 	if err != nil {
182 | 		return nil, fmt.Errorf("failed to get cloud-platform token source: %w", err)
183 | 	}
184 | 
185 | 	// finish tool setup
186 | 	t := Tool{
187 | 		Name:           cfg.Name,
188 | 		Kind:           kind,
189 | 		ApiSettings:    s.GetApiSettings(),
190 | 		Project:        s.GoogleCloudProject(),
191 | 		Location:       s.GoogleCloudLocation(),
192 | 		Parameters:     parameters,
193 | 		AuthRequired:   cfg.AuthRequired,
194 | 		UseClientOAuth: s.UseClientAuthorization(),
195 | 		TokenSource:    ts,
196 | 		manifest:       tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired},
197 | 		mcpManifest:    mcpManifest,
198 | 	}
199 | 	return t, nil
200 | }
201 | 
202 | // validate interface
203 | var _ tools.Tool = Tool{}
204 | 
205 | type Tool struct {
206 | 	Name           string `yaml:"name"`
207 | 	Kind           string `yaml:"kind"`
208 | 	ApiSettings    *rtl.ApiSettings
209 | 	AuthRequired   []string         `yaml:"authRequired"`
210 | 	UseClientOAuth bool             `yaml:"useClientOAuth"`
211 | 	Parameters     tools.Parameters `yaml:"parameters"`
212 | 	Project        string
213 | 	Location       string
214 | 	TokenSource    oauth2.TokenSource
215 | 	manifest       tools.Manifest
216 | 	mcpManifest    tools.McpManifest
217 | }
218 | 
219 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
220 | 	var tokenStr string
221 | 	var err error
222 | 
223 | 	// Get credentials for the API call
224 | 	// Use cloud-platform token source for Gemini Data Analytics API
225 | 	if t.TokenSource == nil {
226 | 		return nil, fmt.Errorf("cloud-platform token source is missing")
227 | 	}
228 | 	token, err := t.TokenSource.Token()
229 | 	if err != nil {
230 | 		return nil, fmt.Errorf("failed to get token from cloud-platform token source: %w", err)
231 | 	}
232 | 	tokenStr = token.AccessToken
233 | 
234 | 	// Extract parameters from the map
235 | 	mapParams := params.AsMap()
236 | 	userQuery, _ := mapParams["user_query_with_context"].(string)
237 | 	exploreReferences, _ := mapParams["explore_references"].([]any)
238 | 
239 | 	ler := make([]LookerExploreReference, 0)
240 | 	for _, er := range exploreReferences {
241 | 		ler = append(ler, LookerExploreReference{
242 | 			LookerInstanceUri: t.ApiSettings.BaseUrl,
243 | 			LookmlModel:       er.(map[string]any)["model"].(string),
244 | 			Explore:           er.(map[string]any)["explore"].(string),
245 | 		})
246 | 	}
247 | 	oauth_creds := OAuthCredentials{}
248 | 	if t.UseClientOAuth {
249 | 		oauth_creds.Token = TokenBased{AccessToken: string(accessToken)}
250 | 	} else {
251 | 		oauth_creds.Secret = SecretBased{ClientId: t.ApiSettings.ClientId, ClientSecret: t.ApiSettings.ClientSecret}
252 | 	}
253 | 
254 | 	lers := LookerExploreReferences{
255 | 		ExploreReferences: ler,
256 | 		Credentials: Credentials{
257 | 			OAuth: oauth_creds,
258 | 		},
259 | 	}
260 | 
261 | 	// Construct URL, headers, and payload
262 | 	projectID := t.Project
263 | 	location := t.Location
264 | 	caURL := fmt.Sprintf("https://geminidataanalytics.googleapis.com/v1beta/projects/%s/locations/%s:chat", url.PathEscape(projectID), url.PathEscape(location))
265 | 
266 | 	headers := map[string]string{
267 | 		"Authorization": fmt.Sprintf("Bearer %s", tokenStr),
268 | 		"Content-Type":  "application/json",
269 | 	}
270 | 
271 | 	payload := CAPayload{
272 | 		Messages: []Message{{UserMessage: UserMessage{Text: userQuery}}},
273 | 		InlineContext: InlineContext{
274 | 			SystemInstruction: instructions,
275 | 			DatasourceReferences: DatasourceReferences{
276 | 				Looker: lers,
277 | 			},
278 | 			Options: ConversationOptions{Chart: ChartOptions{Image: ImageOptions{NoImage: map[string]any{}}}},
279 | 		},
280 | 		ClientIdEnum: "GENAI_TOOLBOX",
281 | 	}
282 | 
283 | 	// Call the streaming API
284 | 	response, err := getStream(ctx, caURL, payload, headers)
285 | 	if err != nil {
286 | 		return nil, fmt.Errorf("failed to get response from conversational analytics API: %w", err)
287 | 	}
288 | 
289 | 	return response, nil
290 | }
291 | 
292 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
293 | 	return tools.ParseParams(t.Parameters, data, claims)
294 | }
295 | 
296 | func (t Tool) Manifest() tools.Manifest {
297 | 	return t.manifest
298 | }
299 | 
300 | func (t Tool) McpManifest() tools.McpManifest {
301 | 	return t.mcpManifest
302 | }
303 | 
304 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
305 | 	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
306 | }
307 | 
308 | func (t Tool) RequiresClientAuthorization() bool {
309 | 	return t.UseClientOAuth
310 | }
311 | 
312 | // StreamMessage represents a single message object from the streaming API response.
313 | type StreamMessage struct {
314 | 	SystemMessage *SystemMessage `json:"systemMessage,omitempty"`
315 | }
316 | 
317 | // SystemMessage contains different types of system-generated content.
318 | type SystemMessage struct {
319 | 	Text     *TextMessage     `json:"text,omitempty"`
320 | 	Schema   *SchemaMessage   `json:"schema,omitempty"`
321 | 	Data     *DataMessage     `json:"data,omitempty"`
322 | 	Analysis *AnalysisMessage `json:"analysis,omitempty"`
323 | 	Error    *ErrorMessage    `json:"error,omitempty"`
324 | }
325 | 
326 | // TextMessage contains textual parts of a message.
327 | type TextMessage struct {
328 | 	Parts []string `json:"parts"`
329 | }
330 | 
331 | // SchemaMessage contains schema-related information.
332 | type SchemaMessage struct {
333 | 	Query  *SchemaQuery  `json:"query,omitempty"`
334 | 	Result *SchemaResult `json:"result,omitempty"`
335 | }
336 | 
337 | // SchemaQuery holds the question that prompted a schema lookup.
338 | type SchemaQuery struct {
339 | 	Question string `json:"question"`
340 | }
341 | 
342 | // SchemaResult contains the datasources with their schemas.
343 | type SchemaResult struct {
344 | 	Datasources []Datasource `json:"datasources"`
345 | }
346 | 
347 | // Datasource represents a data source with its reference and schema.
348 | type Datasource struct {
349 | 	LookerExploreReference LookerExploreReference `json:"lookerExploreReference"`
350 | }
351 | 
352 | // DataMessage contains data-related information, like queries and results.
353 | type DataMessage struct {
354 | 	GeneratedLookerQuery *LookerQuery `json:"generatedLookerQuery,omitempty"`
355 | 	Result               *DataResult  `json:"result,omitempty"`
356 | }
357 | 
358 | type LookerQuery struct {
359 | 	Model   string   `json:"model"`
360 | 	Explore string   `json:"explore"`
361 | 	Fields  []string `json:"fields"`
362 | 	Filters []Filter `json:"filters,omitempty"`
363 | 	Sorts   []string `json:"sorts,omitempty"`
364 | 	Limit   string   `json:"limit,omitempty"`
365 | }
366 | 
367 | type Filter struct {
368 | 	Field string `json:"field,omitempty"`
369 | 	Value string `json:"value,omitempty"`
370 | }
371 | 
372 | // DataResult contains the schema and rows of a query result.
373 | type DataResult struct {
374 | 	Data []map[string]any `json:"data"`
375 | }
376 | 
377 | type AnalysisQuery struct {
378 | 	Question        string   `json:"question,omitempty"`
379 | 	DataResultNames []string `json:"dataResultNames,omitempty"`
380 | }
381 | type AnalysisEvent struct {
382 | 	PlannerReasoning      string `json:"plannerReasoning,omitempty"`
383 | 	CoderInstructions     string `json:"coderInstructions,omitempty"`
384 | 	Code                  string `json:"code,omitempty"`
385 | 	ExecutionOutput       string `json:"executionOutput,omitempty"`
386 | 	ExecutionError        string `json:"executionError,omitempty"`
387 | 	ResultVegaChartJson   string `json:"resultVegaChartJson,omitempty"`
388 | 	ResultNaturalLanguage string `json:"resultNaturalLanguage,omitempty"`
389 | 	ResultCsvData         string `json:"resultCsvData,omitempty"`
390 | 	ResultReferenceData   string `json:"resultReferenceData,omitempty"`
391 | 	Error                 string `json:"error,omitempty"`
392 | }
393 | type AnalysisMessage struct {
394 | 	Query         AnalysisQuery `json:"query,omitempty"`
395 | 	ProgressEvent AnalysisEvent `json:"progressEvent,omitempty"`
396 | }
397 | 
398 | // ErrorMessage represents an error message from the API.
399 | type ErrorMessage struct {
400 | 	Text string `json:"text"`
401 | }
402 | 
403 | func getStream(ctx context.Context, url string, payload CAPayload, headers map[string]string) ([]map[string]any, error) {
404 | 	payloadBytes, err := json.Marshal(payload)
405 | 	if err != nil {
406 | 		return nil, fmt.Errorf("failed to marshal payload: %w", err)
407 | 	}
408 | 
409 | 	req, err := http.NewRequest("POST", url, bytes.NewBuffer(payloadBytes))
410 | 	if err != nil {
411 | 		return nil, fmt.Errorf("failed to create request: %w", err)
412 | 	}
413 | 	for k, v := range headers {
414 | 		req.Header.Set(k, v)
415 | 	}
416 | 
417 | 	client := &http.Client{}
418 | 	resp, err := client.Do(req)
419 | 	if err != nil {
420 | 		return nil, fmt.Errorf("failed to send request: %w", err)
421 | 	}
422 | 	defer resp.Body.Close()
423 | 
424 | 	if resp.StatusCode != http.StatusOK {
425 | 		body, _ := io.ReadAll(resp.Body)
426 | 		return nil, fmt.Errorf("API returned non-200 status: %d %s", resp.StatusCode, string(body))
427 | 	}
428 | 
429 | 	var messages []map[string]any
430 | 	decoder := json.NewDecoder(resp.Body)
431 | 
432 | 	// The response is a JSON array, so we read the opening bracket.
433 | 	if _, err := decoder.Token(); err != nil {
434 | 		if err == io.EOF {
435 | 			return nil, nil // Empty response is valid
436 | 		}
437 | 		return nil, fmt.Errorf("error reading start of json array: %w", err)
438 | 	}
439 | 
440 | 	for decoder.More() {
441 | 		var msg StreamMessage
442 | 		if err := decoder.Decode(&msg); err != nil {
443 | 			if err == io.EOF {
444 | 				break
445 | 			}
446 | 			return nil, fmt.Errorf("error decoding stream message: %w", err)
447 | 		}
448 | 
449 | 		var newMessage map[string]any
450 | 		if msg.SystemMessage != nil {
451 | 			if msg.SystemMessage.Text != nil {
452 | 				newMessage = handleTextResponse(ctx, msg.SystemMessage.Text)
453 | 			} else if msg.SystemMessage.Schema != nil {
454 | 				newMessage = handleSchemaResponse(ctx, msg.SystemMessage.Schema)
455 | 			} else if msg.SystemMessage.Data != nil {
456 | 				newMessage = handleDataResponse(ctx, msg.SystemMessage.Data)
457 | 			} else if msg.SystemMessage.Analysis != nil {
458 | 				newMessage = handleAnalysisResponse(ctx, msg.SystemMessage.Analysis)
459 | 			} else if msg.SystemMessage.Error != nil {
460 | 				newMessage = handleError(ctx, msg.SystemMessage.Error)
461 | 			}
462 | 			messages = appendMessage(messages, newMessage)
463 | 		}
464 | 	}
465 | 
466 | 	return messages, nil
467 | }
468 | 
469 | func formatDatasourceAsDict(ctx context.Context, datasource *Datasource) map[string]any {
470 | 	logger, _ := util.LoggerFromContext(ctx)
471 | 	logger.DebugContext(ctx, "Datasource %s", *datasource)
472 | 	ds := make(map[string]any)
473 | 	ds["model"] = datasource.LookerExploreReference.LookmlModel
474 | 	ds["explore"] = datasource.LookerExploreReference.Explore
475 | 	ds["lookerInstanceUri"] = datasource.LookerExploreReference.LookerInstanceUri
476 | 	return map[string]any{"Datasource": ds}
477 | }
478 | 
479 | func handleAnalysisResponse(ctx context.Context, resp *AnalysisMessage) map[string]any {
480 | 	logger, _ := util.LoggerFromContext(ctx)
481 | 	jsonData, err := json.Marshal(*resp)
482 | 	if err != nil {
483 | 		logger.ErrorContext(ctx, fmt.Sprintf("error marshaling struct: %v", err))
484 | 		return map[string]any{"Analysis": "error"}
485 | 	}
486 | 	return map[string]any{"Analysis": jsonData}
487 | }
488 | 
489 | func handleTextResponse(ctx context.Context, resp *TextMessage) map[string]any {
490 | 	logger, _ := util.LoggerFromContext(ctx)
491 | 	logger.DebugContext(ctx, "Text Response: %s", strings.Join(resp.Parts, ""))
492 | 	return map[string]any{"Answer": strings.Join(resp.Parts, "")}
493 | }
494 | 
495 | func handleSchemaResponse(ctx context.Context, resp *SchemaMessage) map[string]any {
496 | 	if resp.Query != nil {
497 | 		return map[string]any{"Question": resp.Query.Question}
498 | 	}
499 | 	if resp.Result != nil {
500 | 		var formattedSources []map[string]any
501 | 		for _, ds := range resp.Result.Datasources {
502 | 			formattedSources = append(formattedSources, formatDatasourceAsDict(ctx, &ds))
503 | 		}
504 | 		return map[string]any{"Schema Resolved": formattedSources}
505 | 	}
506 | 	return nil
507 | }
508 | 
509 | func handleDataResponse(ctx context.Context, resp *DataMessage) map[string]any {
510 | 	if resp.GeneratedLookerQuery != nil {
511 | 		logger, _ := util.LoggerFromContext(ctx)
512 | 		jsonData, err := json.Marshal(resp.GeneratedLookerQuery)
513 | 		if err != nil {
514 | 			logger.ErrorContext(ctx, fmt.Sprintf("error marshaling struct: %v", err))
515 | 			return map[string]any{"Retrieval Query": "error"}
516 | 		}
517 | 		return map[string]any{
518 | 			"Retrieval Query": jsonData,
519 | 		}
520 | 	}
521 | 	if resp.Result != nil {
522 | 
523 | 		return map[string]any{
524 | 			"Data Retrieved": resp.Result.Data,
525 | 		}
526 | 	}
527 | 	return nil
528 | }
529 | 
530 | func handleError(ctx context.Context, resp *ErrorMessage) map[string]any {
531 | 	logger, _ := util.LoggerFromContext(ctx)
532 | 	logger.DebugContext(ctx, "Error Response: %s", resp.Text)
533 | 	return map[string]any{
534 | 		"Error": map[string]any{
535 | 			"Message": resp.Text,
536 | 		},
537 | 	}
538 | }
539 | 
540 | func appendMessage(messages []map[string]any, newMessage map[string]any) []map[string]any {
541 | 	if newMessage == nil {
542 | 		return messages
543 | 	}
544 | 	if len(messages) > 0 {
545 | 		if _, ok := messages[len(messages)-1]["Data Retrieved"]; ok {
546 | 			messages = messages[:len(messages)-1]
547 | 		}
548 | 	}
549 | 	return append(messages, newMessage)
550 | }
551 | 
```
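
getStream above decodes the chat response incrementally: it reads the array's opening bracket with Token, then decodes one element per Decode call while More reports remaining elements, so large streams never need to be buffered whole. A self-contained sketch of that decoding pattern, using a made-up payload in place of resp.Body:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Stand-in for the HTTP response body: a JSON array of messages.
	body := strings.NewReader(`[
		{"systemMessage": {"text": {"parts": ["The answer", " is 42."]}}},
		{"systemMessage": {"error": {"text": "quota exceeded"}}}
	]`)

	dec := json.NewDecoder(body)
	if _, err := dec.Token(); err != nil { // consume the opening '['
		panic(err)
	}
	for dec.More() { // true until the closing ']'
		var msg map[string]any
		if err := dec.Decode(&msg); err != nil {
			panic(err)
		}
		fmt.Println(msg)
	}
}
```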

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/tools/cloud-sql-postgres-observability.yaml:
--------------------------------------------------------------------------------

```yaml
  1 | # Copyright 2025 Google LLC
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | sources:
 15 |   cloud-monitoring-source:
 16 |     kind: cloud-monitoring
 17 | tools:
 18 |   get_system_metrics:
 19 |     kind: cloud-monitoring-query-prometheus
 20 |     source: cloud-monitoring-source
 21 |     description: |
 22 |       Fetches system-level Cloud Monitoring data (timeseries metrics) for a Postgres instance using a PromQL query. Take the projectId and instanceId from the user for the instance whose metrics timeseries data needs to be fetched.
 23 |       To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
 24 | 
 25 |       Generate PromQL `query` for Postgres system metrics. Use the provided metrics and rules to construct queries, and get labels like `instance_id` from the user's intent.
 26 | 
 27 |       Defaults:
 28 |       1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
 29 | 
 30 |       PromQL Query Examples:
 31 |       1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m])`
 32 |       2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 33 |       3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 34 |       4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 35 |       5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 36 |       6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 37 |       7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 38 |       8. Percentile with groupby on database_id: `quantile by ("database_id")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
 39 | 
 40 |       Available Metrics List (each entry: metric name. description. monitored resource. labels.). Note that `database_id` is actually the instance id, in the format `project_id:instance_id`.
 41 |       1. `cloudsql.googleapis.com/database/postgresql/new_connection_count`: Count of new connections added to the postgres instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
 42 |       2. `cloudsql.googleapis.com/database/postgresql/backends_in_wait`: Number of backends in wait in postgres instance. `cloudsql_database`. `backend_type`, `wait_event`, `wait_event_type`, `project_id`, `database_id`.
 43 |       3. `cloudsql.googleapis.com/database/postgresql/transaction_count`: Delta count of number of transactions. `cloudsql_database`. `database`, `transaction_type`, `project_id`, `database_id`.
 44 |       4. `cloudsql.googleapis.com/database/memory/components`: Memory stats components in percentage as usage, cache and free memory for the database. `cloudsql_database`. `component`, `project_id`, `database_id`.
 45 |       5. `cloudsql.googleapis.com/database/postgresql/external_sync/max_replica_byte_lag`: Replication lag in bytes for Postgres External Server (ES) replicas. Aggregated across all DBs on the replica. `cloudsql_database`. `project_id`, `database_id`.
 46 |       6. `cloudsql.googleapis.com/database/cpu/utilization`: Current CPU utilization represented as a percentage of the reserved CPU that is currently in use. Values are typically numbers between 0.0 and 1.0 (but might exceed 1.0). Charts display the values as a percentage between 0% and 100% (or more). `cloudsql_database`. `project_id`, `database_id`.
 47 |       7. `cloudsql.googleapis.com/database/disk/bytes_used_by_data_type`: Data utilization in bytes. `cloudsql_database`. `data_type`, `project_id`, `database_id`.
 48 |       8. `cloudsql.googleapis.com/database/disk/read_ops_count`: Delta count of data disk read IO operations. `cloudsql_database`. `project_id`, `database_id`.
 49 |       9. `cloudsql.googleapis.com/database/disk/write_ops_count`: Delta count of data disk write IO operations. `cloudsql_database`. `project_id`, `database_id`.
 50 |       10. `cloudsql.googleapis.com/database/postgresql/num_backends_by_state`: Number of connections to the Cloud SQL PostgreSQL instance, grouped by its state. `cloudsql_database`. `database`, `state`, `project_id`, `database_id`.
 51 |       11. `cloudsql.googleapis.com/database/postgresql/num_backends`: Number of connections to the Cloud SQL PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
 52 |       12. `cloudsql.googleapis.com/database/network/received_bytes_count`: Delta count of bytes received through the network. `cloudsql_database`. `project_id`, `database_id`.
 53 |       13. `cloudsql.googleapis.com/database/network/sent_bytes_count`: Delta count of bytes sent through the network. `cloudsql_database`. `destination`, `project_id`, `database_id`.
 54 |       14. `cloudsql.googleapis.com/database/postgresql/deadlock_count`: Number of deadlocks detected for this database. `cloudsql_database`. `database`, `project_id`, `database_id`.
 55 |       15. `cloudsql.googleapis.com/database/postgresql/blocks_read_count`: Number of disk blocks read by this database. The source field distinguishes actual reads from disk versus reads from buffer cache. `cloudsql_database`. `database`, `source`, `project_id`, `database_id`.
 56 |       16. `cloudsql.googleapis.com/database/postgresql/tuples_processed_count`: Number of tuples(rows) processed for a given database for operations like insert, update or delete. `cloudsql_database`. `operation_type`, `database`, `project_id`, `database_id`.
 57 |       17. `cloudsql.googleapis.com/database/postgresql/tuple_size`: Number of tuples (rows) in the database. `cloudsql_database`. `database`, `tuple_state`, `project_id`, `database_id`.
 58 |       18. `cloudsql.googleapis.com/database/postgresql/vacuum/oldest_transaction_age`: Age of the oldest transaction yet to be vacuumed in the Cloud SQL PostgreSQL instance, measured in number of transactions that have happened since the oldest transaction. `cloudsql_database`. `oldest_transaction_type`, `project_id`, `database_id`.
 59 |       19. `cloudsql.googleapis.com/database/replication/log_archive_success_count`: Number of successful attempts for archiving replication log files. `cloudsql_database`. `project_id`, `database_id`.
 60 |       20. `cloudsql.googleapis.com/database/replication/log_archive_failure_count`: Number of failed attempts for archiving replication log files. `cloudsql_database`. `project_id`, `database_id`.
 61 |       21. `cloudsql.googleapis.com/database/postgresql/transaction_id_utilization`: Current utilization represented as a percentage of transaction IDs consumed by the Cloud SQL PostgreSQL instance. Values are typically numbers between 0.0 and 1.0. Charts display the values as a percentage between 0% and 100%. `cloudsql_database`. `project_id`, `database_id`.
 62 |       22. `cloudsql.googleapis.com/database/postgresql/num_backends_by_application`: Number of connections to the Cloud SQL PostgreSQL instance, grouped by applications. `cloudsql_database`. `application`, `project_id`, `database_id`.
 63 |       23. `cloudsql.googleapis.com/database/postgresql/tuples_fetched_count`: Total number of rows fetched as a result of queries per database in the PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
 64 |       24. `cloudsql.googleapis.com/database/postgresql/tuples_returned_count`: Total number of rows scanned while processing the queries per database in the PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
 65 |       25. `cloudsql.googleapis.com/database/postgresql/temp_bytes_written_count`: Total amount of data (in bytes) written to temporary files by the queries per database. `cloudsql_database`. `database`, `project_id`, `database_id`.
 66 |       26. `cloudsql.googleapis.com/database/postgresql/temp_files_written_count`: Total number of temporary files used for writing data while performing algorithms such as join and sort. `cloudsql_database`. `database`, `project_id`, `database_id`.
 67 | 
 68 |   get_query_metrics:
 69 |     kind: cloud-monitoring-query-prometheus
 70 |     source: cloud-monitoring-source
 71 |     description: |
 72 |       Fetches query-level Cloud Monitoring data (timeseries metrics) for queries running in a Postgres instance using a PromQL query. Take the projectId and instanceId from the user for the instance whose metrics timeseries data needs to be fetched.
 73 |       To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
 74 | 
 75 |       Generate PromQL `query` for Postgres query metrics. Use the provided metrics and rules to construct queries, and get labels like `instance_id` and `query_hash` from the user's intent. If a query_hash is provided, use the perquery metrics. Query hash and query id are the same.
 76 | 
 77 |       Defaults:
 78 |       1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
 79 | 
 80 |       PromQL Query Examples:
 81 |       1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m])`
 82 |       2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 83 |       3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 84 |       4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 85 |       5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 86 |       6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 87 |       7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 88 |       8. Percentile with groupby on resource_id, database: `quantile by ("resource_id","database")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
 89 | 
 90 |       Available Metrics List (each entry: metric name. description. monitored resource. labels.). The `resource_id` label has the format `project_id:instance_id` and identifies the instance. The aggregate metrics are aggregated over all query stats; use them if no query id is provided. For perquery metrics, do not fetch the querystring unless the user asks for it specifically; aggregate on query hash to avoid fetching the querystring. Do not use the latency metrics for anything.
 91 |       1. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/latencies`: Aggregated query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
 92 |       2. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time`: Accumulated aggregated query execution time since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
 93 |       3. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/io_time`: Accumulated aggregated IO time since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `io_type`, `project_id`, `resource_id`.
 94 |       4. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/lock_time`: Accumulated aggregated lock wait time since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `lock_type`, `project_id`, `resource_id`.
 95 |       5. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/row_count`: Aggregated number of retrieved or affected rows since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
 96 |       6. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/shared_blk_access_count`: Aggregated shared blocks accessed by statement execution. `cloudsql_instance_database`. `user`, `client_addr`, `access_type`, `project_id`, `resource_id`.
 97 |       7. `cloudsql.googleapis.com/database/postgresql/insights/perquery/latencies`: Per query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
 98 |       8. `cloudsql.googleapis.com/database/postgresql/insights/perquery/execution_time`: Accumulated execution times per user per database per query. `cloudsql_instance_database`. `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
 99 |       9. `cloudsql.googleapis.com/database/postgresql/insights/perquery/io_time`: Accumulated IO time since the last sample per query. `cloudsql_instance_database`. `user`, `client_addr`, `io_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
100 |       10. `cloudsql.googleapis.com/database/postgresql/insights/perquery/lock_time`: Accumulated lock wait time since the last sample per query. `cloudsql_instance_database`. `user`, `client_addr`, `lock_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
101 |       11. `cloudsql.googleapis.com/database/postgresql/insights/perquery/row_count`: The number of retrieved or affected rows since the last sample per query. `cloudsql_instance_database`. `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
102 |       12. `cloudsql.googleapis.com/database/postgresql/insights/perquery/shared_blk_access_count`: Shared blocks accessed by statement execution per query. `cloudsql_instance_database`. `user`, `client_addr`, `access_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
103 |       13. `cloudsql.googleapis.com/database/postgresql/insights/pertag/latencies`: Query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
104 |       14. `cloudsql.googleapis.com/database/postgresql/insights/pertag/execution_time`: Accumulated execution times since the last sample. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
105 |       15. `cloudsql.googleapis.com/database/postgresql/insights/pertag/io_time`: Accumulated IO time since the last sample per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `io_type`, `tag_hash`, `project_id`, `resource_id`.
106 |       16. `cloudsql.googleapis.com/database/postgresql/insights/pertag/lock_time`: Accumulated lock wait time since the last sample per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `lock_type`, `tag_hash`, `project_id`, `resource_id`.
107 |       17. `cloudsql.googleapis.com/database/postgresql/insights/pertag/shared_blk_access_count`: Shared blocks accessed by statement execution per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `access_type`, `tag_hash`, `project_id`, `resource_id`.
108 |       18. `cloudsql.googleapis.com/database/postgresql/insights/pertag/row_count`: The number of retrieved or affected rows since the last sample per tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
109 | 
110 | toolsets:
111 |   cloud_sql_postgres_cloud_monitoring_tools:
112 |     - get_system_metrics
113 |     - get_query_metrics
114 | 
```
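
The PromQL examples in both descriptions share one shape: a metric selector with project_id and database_id (or resource_id) labels, wrapped in an `_over_time` function with the default 5m window. A hedged Go sketch of assembling the "Basic Time Series" query for get_system_metrics from user-supplied IDs (the helper itself is illustrative, not part of the repo):

```go
package main

import "fmt"

// cpuUtilizationQuery builds the documented CPU-utilization PromQL query.
// Per the description above, database_id has the form "project_id:instance_id".
func cpuUtilizationQuery(projectID, instanceID string) string {
	databaseID := fmt.Sprintf("%s:%s", projectID, instanceID)
	return fmt.Sprintf(
		`avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization",`+
			`"monitored_resource"="cloudsql_database","project_id"=%q,"database_id"=%q}[5m])`,
		projectID, databaseID)
}

func main() {
	fmt.Println(cpuUtilizationQuery("my-projectId", "my-instanceId"))
}
```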

--------------------------------------------------------------------------------
/internal/tools/bigquery/bigqueryconversationalanalytics/bigqueryconversationalanalytics.go:
--------------------------------------------------------------------------------

```go
  1 | // Copyright 2025 Google LLC
  2 | //
  3 | // Licensed under the Apache License, Version 2.0 (the "License");
  4 | // you may not use this file except in compliance with the License.
  5 | // You may obtain a copy of the License at
  6 | //
  7 | //     http://www.apache.org/licenses/LICENSE-2.0
  8 | //
  9 | // Unless required by applicable law or agreed to in writing, software
 10 | // distributed under the License is distributed on an "AS IS" BASIS,
 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | // See the License for the specific language governing permissions and
 13 | // limitations under the License.
 14 | 
 15 | package bigqueryconversationalanalytics
 16 | 
 17 | import (
 18 | 	"bytes"
 19 | 	"context"
 20 | 	"encoding/json"
 21 | 	"fmt"
 22 | 	"io"
 23 | 	"net/http"
 24 | 	"strings"
 25 | 
 26 | 	bigqueryapi "cloud.google.com/go/bigquery"
 27 | 	yaml "github.com/goccy/go-yaml"
 28 | 	"github.com/googleapis/genai-toolbox/internal/sources"
 29 | 	bigqueryds "github.com/googleapis/genai-toolbox/internal/sources/bigquery"
 30 | 	"github.com/googleapis/genai-toolbox/internal/tools"
 31 | 	"golang.org/x/oauth2"
 32 | )
 33 | 
 34 | const kind string = "bigquery-conversational-analytics"
 35 | 
 36 | const instructions = `**INSTRUCTIONS - FOLLOW THESE RULES:**
 37 | 1. **CONTENT:** Your answer should present the supporting data and then provide a conclusion based on that data.
 38 | 2. **OUTPUT FORMAT:** Your entire response MUST be in plain text format ONLY.
 39 | 3. **NO CHARTS:** You are STRICTLY FORBIDDEN from generating any charts, graphs, images, or any other form of visualization.`
 40 | 
 41 | func init() {
 42 | 	if !tools.Register(kind, newConfig) {
 43 | 		panic(fmt.Sprintf("tool kind %q already registered", kind))
 44 | 	}
 45 | }
 46 | 
 47 | func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
 48 | 	actual := Config{Name: name}
 49 | 	if err := decoder.DecodeContext(ctx, &actual); err != nil {
 50 | 		return nil, err
 51 | 	}
 52 | 	return actual, nil
 53 | }
 54 | 
 55 | type compatibleSource interface {
 56 | 	BigQueryClient() *bigqueryapi.Client
 57 | 	BigQueryTokenSourceWithScope(ctx context.Context, scope string) (oauth2.TokenSource, error)
 58 | 	BigQueryProject() string
 59 | 	BigQueryLocation() string
 60 | 	GetMaxQueryResultRows() int
 61 | 	UseClientAuthorization() bool
 62 | 	IsDatasetAllowed(projectID, datasetID string) bool
 63 | 	BigQueryAllowedDatasets() []string
 64 | }
 65 | 
 66 | type BQTableReference struct {
 67 | 	ProjectID string `json:"projectId"`
 68 | 	DatasetID string `json:"datasetId"`
 69 | 	TableID   string `json:"tableId"`
 70 | }
 71 | 
 72 | // Structs for building the JSON payload
 73 | type UserMessage struct {
 74 | 	Text string `json:"text"`
 75 | }
 76 | type Message struct {
 77 | 	UserMessage UserMessage `json:"userMessage"`
 78 | }
 79 | type BQDatasource struct {
 80 | 	TableReferences []BQTableReference `json:"tableReferences"`
 81 | }
 82 | type DatasourceReferences struct {
 83 | 	BQ BQDatasource `json:"bq"`
 84 | }
 85 | type ImageOptions struct {
 86 | 	NoImage map[string]any `json:"noImage"`
 87 | }
 88 | type ChartOptions struct {
 89 | 	Image ImageOptions `json:"image"`
 90 | }
 91 | type Options struct {
 92 | 	Chart ChartOptions `json:"chart"`
 93 | }
 94 | type InlineContext struct {
 95 | 	DatasourceReferences DatasourceReferences `json:"datasourceReferences"`
 96 | 	Options              Options              `json:"options"`
 97 | }
 98 | 
 99 | type CAPayload struct {
100 | 	Project       string        `json:"project"`
101 | 	Messages      []Message     `json:"messages"`
102 | 	InlineContext InlineContext `json:"inlineContext"`
103 | 	ClientIdEnum  string        `json:"clientIdEnum"`
104 | }
105 | 
106 | // validate compatible sources are still compatible
107 | var _ compatibleSource = &bigqueryds.Source{}
108 | 
109 | var compatibleSources = [...]string{bigqueryds.SourceKind}
110 | 
111 | type Config struct {
112 | 	Name         string   `yaml:"name" validate:"required"`
113 | 	Kind         string   `yaml:"kind" validate:"required"`
114 | 	Source       string   `yaml:"source" validate:"required"`
115 | 	Description  string   `yaml:"description" validate:"required"`
116 | 	AuthRequired []string `yaml:"authRequired"`
117 | }
118 | 
119 | // validate interface
120 | var _ tools.ToolConfig = Config{}
121 | 
122 | func (cfg Config) ToolConfigKind() string {
123 | 	return kind
124 | }
125 | 
126 | func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
127 | 	// verify source exists
128 | 	rawS, ok := srcs[cfg.Source]
129 | 	if !ok {
130 | 		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
131 | 	}
132 | 
133 | 	// verify the source is compatible
134 | 	s, ok := rawS.(compatibleSource)
135 | 	if !ok {
136 | 		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
137 | 	}
138 | 
139 | 	allowedDatasets := s.BigQueryAllowedDatasets()
140 | 	tableRefsDescription := `A JSON string of a list of BigQuery tables to use as context. Each object in the list must contain 'projectId', 'datasetId', and 'tableId'. Example: '[{"projectId": "my-gcp-project", "datasetId": "my_dataset", "tableId": "my_table"}]'.`
141 | 	if len(allowedDatasets) > 0 {
142 | 		datasetIDs := []string{}
143 | 		for _, ds := range allowedDatasets {
144 | 			datasetIDs = append(datasetIDs, fmt.Sprintf("`%s`", ds))
145 | 		}
146 | 		tableRefsDescription += fmt.Sprintf(" The tables must only be from datasets in the following list: %s.", strings.Join(datasetIDs, ", "))
147 | 	}
148 | 	userQueryParameter := tools.NewStringParameter("user_query_with_context", "The user's question, potentially including conversation history and system instructions for context.")
149 | 	tableRefsParameter := tools.NewStringParameter("table_references", tableRefsDescription)
150 | 
151 | 	parameters := tools.Parameters{userQueryParameter, tableRefsParameter}
152 | 	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, parameters)
153 | 
154 | 	// Get cloud-platform token source for Gemini Data Analytics API during initialization
155 | 	var bigQueryTokenSourceWithScope oauth2.TokenSource
156 | 	if !s.UseClientAuthorization() {
157 | 		ctx := context.Background()
158 | 		ts, err := s.BigQueryTokenSourceWithScope(ctx, "https://www.googleapis.com/auth/cloud-platform")
159 | 		if err != nil {
160 | 			return nil, fmt.Errorf("failed to get cloud-platform token source: %w", err)
161 | 		}
162 | 		bigQueryTokenSourceWithScope = ts
163 | 	}
164 | 
165 | 	// finish tool setup
166 | 	t := Tool{
167 | 		Name:               cfg.Name,
168 | 		Kind:               kind,
169 | 		Project:            s.BigQueryProject(),
170 | 		Location:           s.BigQueryLocation(),
171 | 		Parameters:         parameters,
172 | 		AuthRequired:       cfg.AuthRequired,
173 | 		Client:             s.BigQueryClient(),
174 | 		UseClientOAuth:     s.UseClientAuthorization(),
175 | 		TokenSource:        bigQueryTokenSourceWithScope,
176 | 		manifest:           tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired},
177 | 		mcpManifest:        mcpManifest,
178 | 		MaxQueryResultRows: s.GetMaxQueryResultRows(),
179 | 		IsDatasetAllowed:   s.IsDatasetAllowed,
180 | 		AllowedDatasets:    allowedDatasets,
181 | 	}
182 | 	return t, nil
183 | }
184 | 
185 | // validate interface
186 | var _ tools.Tool = Tool{}
187 | 
188 | type Tool struct {
189 | 	Name           string           `yaml:"name"`
190 | 	Kind           string           `yaml:"kind"`
191 | 	AuthRequired   []string         `yaml:"authRequired"`
192 | 	UseClientOAuth bool             `yaml:"useClientOAuth"`
193 | 	Parameters     tools.Parameters `yaml:"parameters"`
194 | 
195 | 	Project            string
196 | 	Location           string
197 | 	Client             *bigqueryapi.Client
198 | 	TokenSource        oauth2.TokenSource
199 | 	manifest           tools.Manifest
200 | 	mcpManifest        tools.McpManifest
201 | 	MaxQueryResultRows int
202 | 	IsDatasetAllowed   func(projectID, datasetID string) bool
203 | 	AllowedDatasets    []string
204 | }
205 | 
206 | func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
207 | 	var tokenStr string
208 | 	var err error
209 | 
210 | 	// Get credentials for the API call
211 | 	if t.UseClientOAuth {
212 | 		// Use client-side access token
213 | 		if accessToken == "" {
214 | 			return nil, fmt.Errorf("tool is configured for client OAuth but no token was provided in the request header: %w", tools.ErrUnauthorized)
215 | 		}
216 | 		tokenStr, err = accessToken.ParseBearerToken()
217 | 		if err != nil {
218 | 			return nil, fmt.Errorf("error parsing access token: %w", err)
219 | 		}
220 | 	} else {
221 | 		// Use cloud-platform token source for Gemini Data Analytics API
222 | 		if t.TokenSource == nil {
223 | 			return nil, fmt.Errorf("cloud-platform token source is missing")
224 | 		}
225 | 		token, err := t.TokenSource.Token()
226 | 		if err != nil {
227 | 			return nil, fmt.Errorf("failed to get token from cloud-platform token source: %w", err)
228 | 		}
229 | 		tokenStr = token.AccessToken
230 | 	}
231 | 
232 | 	// Extract parameters from the map
233 | 	mapParams := params.AsMap()
234 | 	userQuery, _ := mapParams["user_query_with_context"].(string)
235 | 
236 | 	finalQueryText := fmt.Sprintf("%s\n**User Query and Context:**\n%s", instructions, userQuery)
237 | 
238 | 	tableRefsJSON, _ := mapParams["table_references"].(string)
239 | 	var tableRefs []BQTableReference
240 | 	if tableRefsJSON != "" {
241 | 		if err := json.Unmarshal([]byte(tableRefsJSON), &tableRefs); err != nil {
242 | 			return nil, fmt.Errorf("failed to parse 'table_references' JSON string: %w", err)
243 | 		}
244 | 	}
245 | 
246 | 	if len(t.AllowedDatasets) > 0 {
247 | 		for _, tableRef := range tableRefs {
248 | 			if !t.IsDatasetAllowed(tableRef.ProjectID, tableRef.DatasetID) {
249 | 				return nil, fmt.Errorf("access to dataset '%s.%s' (from table '%s') is not allowed", tableRef.ProjectID, tableRef.DatasetID, tableRef.TableID)
250 | 			}
251 | 		}
252 | 	}
253 | 
254 | 	// Construct URL, headers, and payload
255 | 	projectID := t.Project
256 | 	location := t.Location
257 | 	if location == "" {
258 | 		location = "us"
259 | 	}
260 | 	caURL := fmt.Sprintf("https://geminidataanalytics.googleapis.com/v1alpha/projects/%s/locations/%s:chat", projectID, location)
261 | 
262 | 	headers := map[string]string{
263 | 		"Authorization": fmt.Sprintf("Bearer %s", tokenStr),
264 | 		"Content-Type":  "application/json",
265 | 	}
266 | 
267 | 	payload := CAPayload{
268 | 		Project:  fmt.Sprintf("projects/%s", projectID),
269 | 		Messages: []Message{{UserMessage: UserMessage{Text: finalQueryText}}},
270 | 		InlineContext: InlineContext{
271 | 			DatasourceReferences: DatasourceReferences{
272 | 				BQ: BQDatasource{TableReferences: tableRefs},
273 | 			},
274 | 			Options: Options{Chart: ChartOptions{Image: ImageOptions{NoImage: map[string]any{}}}},
275 | 		},
276 | 		ClientIdEnum: "GENAI_TOOLBOX",
277 | 	}
278 | 
279 | 	// Call the streaming API
280 | 	response, err := getStream(caURL, payload, headers, t.MaxQueryResultRows)
281 | 	if err != nil {
282 | 		return nil, fmt.Errorf("failed to get response from conversational analytics API: %w", err)
283 | 	}
284 | 
285 | 	return response, nil
286 | }
287 | 
288 | func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
289 | 	return tools.ParseParams(t.Parameters, data, claims)
290 | }
291 | 
292 | func (t Tool) Manifest() tools.Manifest {
293 | 	return t.manifest
294 | }
295 | 
296 | func (t Tool) McpManifest() tools.McpManifest {
297 | 	return t.mcpManifest
298 | }
299 | 
300 | func (t Tool) Authorized(verifiedAuthServices []string) bool {
301 | 	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
302 | }
303 | 
304 | func (t Tool) RequiresClientAuthorization() bool {
305 | 	return t.UseClientOAuth
306 | }
307 | 
308 | // StreamMessage represents a single message object from the streaming API response.
309 | type StreamMessage struct {
310 | 	SystemMessage *SystemMessage `json:"systemMessage,omitempty"`
311 | 	Error         *ErrorResponse `json:"error,omitempty"`
312 | }
313 | 
314 | // SystemMessage contains different types of system-generated content.
315 | type SystemMessage struct {
316 | 	Text   *TextResponse   `json:"text,omitempty"`
317 | 	Schema *SchemaResponse `json:"schema,omitempty"`
318 | 	Data   *DataResponse   `json:"data,omitempty"`
319 | }
320 | 
321 | // TextResponse contains textual parts of a message.
322 | type TextResponse struct {
323 | 	Parts []string `json:"parts"`
324 | }
325 | 
326 | // SchemaResponse contains schema-related information.
327 | type SchemaResponse struct {
328 | 	Query  *SchemaQuery  `json:"query,omitempty"`
329 | 	Result *SchemaResult `json:"result,omitempty"`
330 | }
331 | 
332 | // SchemaQuery holds the question that prompted a schema lookup.
333 | type SchemaQuery struct {
334 | 	Question string `json:"question"`
335 | }
336 | 
337 | // SchemaResult contains the datasources with their schemas.
338 | type SchemaResult struct {
339 | 	Datasources []Datasource `json:"datasources"`
340 | }
341 | 
342 | // Datasource represents a data source with its reference and schema.
343 | type Datasource struct {
344 | 	BigQueryTableReference *BQTableReference `json:"bigqueryTableReference,omitempty"`
345 | 	Schema                 *BQSchema         `json:"schema,omitempty"`
346 | }
347 | 
348 | // BQSchema defines the structure of a BigQuery table.
349 | type BQSchema struct {
350 | 	Fields []BQField `json:"fields"`
351 | }
352 | 
353 | // BQField describes a single column in a BigQuery table.
354 | type BQField struct {
355 | 	Name        string `json:"name"`
356 | 	Type        string `json:"type"`
357 | 	Description string `json:"description"`
358 | 	Mode        string `json:"mode"`
359 | }
360 | 
361 | // DataResponse contains data-related information, like queries and results.
362 | type DataResponse struct {
363 | 	Query        *DataQuery  `json:"query,omitempty"`
364 | 	GeneratedSQL string      `json:"generatedSql,omitempty"`
365 | 	Result       *DataResult `json:"result,omitempty"`
366 | }
367 | 
368 | // DataQuery holds information about a data retrieval query.
369 | type DataQuery struct {
370 | 	Name     string `json:"name"`
371 | 	Question string `json:"question"`
372 | }
373 | 
374 | // DataResult contains the schema and rows of a query result.
375 | type DataResult struct {
376 | 	Schema BQSchema         `json:"schema"`
377 | 	Data   []map[string]any `json:"data"`
378 | }
379 | 
380 | // ErrorResponse represents an error message from the API.
381 | type ErrorResponse struct {
382 | 	Code    float64 `json:"code"` // JSON numbers are float64 by default
383 | 	Message string  `json:"message"`
384 | }
385 | 
386 | func getStream(url string, payload CAPayload, headers map[string]string, maxRows int) (string, error) {
387 | 	payloadBytes, err := json.Marshal(payload)
388 | 	if err != nil {
389 | 		return "", fmt.Errorf("failed to marshal payload: %w", err)
390 | 	}
391 | 
392 | 	req, err := http.NewRequest("POST", url, bytes.NewBuffer(payloadBytes))
393 | 	if err != nil {
394 | 		return "", fmt.Errorf("failed to create request: %w", err)
395 | 	}
396 | 	for k, v := range headers {
397 | 		req.Header.Set(k, v)
398 | 	}
399 | 
400 | 	client := &http.Client{}
401 | 	resp, err := client.Do(req)
402 | 	if err != nil {
403 | 		return "", fmt.Errorf("failed to send request: %w", err)
404 | 	}
405 | 	defer resp.Body.Close()
406 | 
407 | 	if resp.StatusCode != http.StatusOK {
408 | 		body, _ := io.ReadAll(resp.Body)
409 | 		return "", fmt.Errorf("API returned non-200 status: %d %s", resp.StatusCode, string(body))
410 | 	}
411 | 
412 | 	var messages []map[string]any
413 | 	decoder := json.NewDecoder(resp.Body)
414 | 
415 | 	// The response is a JSON array, so we read the opening bracket.
416 | 	if _, err := decoder.Token(); err != nil {
417 | 		if err == io.EOF {
418 | 			return "", nil // Empty response is valid
419 | 		}
420 | 		return "", fmt.Errorf("error reading start of json array: %w", err)
421 | 	}
422 | 
423 | 	for decoder.More() {
424 | 		var msg StreamMessage
425 | 		if err := decoder.Decode(&msg); err != nil {
426 | 			if err == io.EOF {
427 | 				break
428 | 			}
429 | 			return "", fmt.Errorf("error decoding stream message: %w", err)
430 | 		}
431 | 
432 | 		var newMessage map[string]any
433 | 		if msg.SystemMessage != nil {
434 | 			if msg.SystemMessage.Text != nil {
435 | 				newMessage = handleTextResponse(msg.SystemMessage.Text)
436 | 			} else if msg.SystemMessage.Schema != nil {
437 | 				newMessage = handleSchemaResponse(msg.SystemMessage.Schema)
438 | 			} else if msg.SystemMessage.Data != nil {
439 | 				newMessage = handleDataResponse(msg.SystemMessage.Data, maxRows)
440 | 			}
441 | 		} else if msg.Error != nil {
442 | 			newMessage = handleError(msg.Error)
443 | 		}
444 | 		messages = appendMessage(messages, newMessage)
445 | 	}
446 | 
447 | 	var acc strings.Builder
448 | 	for i, msg := range messages {
449 | 		jsonBytes, err := json.MarshalIndent(msg, "", "  ")
450 | 		if err != nil {
451 | 			return "", fmt.Errorf("error marshalling message: %w", err)
452 | 		}
453 | 		acc.Write(jsonBytes)
454 | 		if i < len(messages)-1 {
455 | 			acc.WriteString("\n")
456 | 		}
457 | 	}
458 | 
459 | 	return acc.String(), nil
460 | }
461 | 
462 | func formatBqTableRef(tableRef *BQTableReference) string {
463 | 	return fmt.Sprintf("%s.%s.%s", tableRef.ProjectID, tableRef.DatasetID, tableRef.TableID)
464 | }
465 | 
466 | func formatSchemaAsDict(data *BQSchema) map[string]any {
467 | 	headers := []string{"Column", "Type", "Description", "Mode"}
468 | 	if data == nil {
469 | 		return map[string]any{"headers": headers, "rows": []any{}}
470 | 	}
471 | 
472 | 	var rows [][]any
473 | 	for _, field := range data.Fields {
474 | 		rows = append(rows, []any{field.Name, field.Type, field.Description, field.Mode})
475 | 	}
476 | 	return map[string]any{"headers": headers, "rows": rows}
477 | }
478 | 
479 | func formatDatasourceAsDict(datasource *Datasource) map[string]any {
480 | 	var sourceName string
481 | 	if datasource.BigQueryTableReference != nil {
482 | 		sourceName = formatBqTableRef(datasource.BigQueryTableReference)
483 | 	}
484 | 
485 | 	var schema map[string]any
486 | 	if datasource.Schema != nil {
487 | 		schema = formatSchemaAsDict(datasource.Schema)
488 | 	}
489 | 
490 | 	return map[string]any{"source_name": sourceName, "schema": schema}
491 | }
492 | 
493 | func handleTextResponse(resp *TextResponse) map[string]any {
494 | 	return map[string]any{"Answer": strings.Join(resp.Parts, "")}
495 | }
496 | 
497 | func handleSchemaResponse(resp *SchemaResponse) map[string]any {
498 | 	if resp.Query != nil {
499 | 		return map[string]any{"Question": resp.Query.Question}
500 | 	}
501 | 	if resp.Result != nil {
502 | 		var formattedSources []map[string]any
503 | 		for _, ds := range resp.Result.Datasources {
504 | 			formattedSources = append(formattedSources, formatDatasourceAsDict(&ds))
505 | 		}
506 | 		return map[string]any{"Schema Resolved": formattedSources}
507 | 	}
508 | 	return nil
509 | }
510 | 
511 | func handleDataResponse(resp *DataResponse, maxRows int) map[string]any {
512 | 	if resp.Query != nil {
513 | 		return map[string]any{
514 | 			"Retrieval Query": map[string]any{
515 | 				"Query Name": resp.Query.Name,
516 | 				"Question":   resp.Query.Question,
517 | 			},
518 | 		}
519 | 	}
520 | 	if resp.GeneratedSQL != "" {
521 | 		return map[string]any{"SQL Generated": resp.GeneratedSQL}
522 | 	}
523 | 	if resp.Result != nil {
524 | 		var headers []string
525 | 		for _, f := range resp.Result.Schema.Fields {
526 | 			headers = append(headers, f.Name)
527 | 		}
528 | 
529 | 		totalRows := len(resp.Result.Data)
530 | 		var compactRows [][]any
531 | 		numRowsToDisplay := totalRows
532 | 		if numRowsToDisplay > maxRows {
533 | 			numRowsToDisplay = maxRows
534 | 		}
535 | 
536 | 		for _, rowVal := range resp.Result.Data[:numRowsToDisplay] {
537 | 			var rowValues []any
538 | 			for _, header := range headers {
539 | 				rowValues = append(rowValues, rowVal[header])
540 | 			}
541 | 			compactRows = append(compactRows, rowValues)
542 | 		}
543 | 
544 | 		summary := fmt.Sprintf("Showing all %d rows.", totalRows)
545 | 		if totalRows > maxRows {
546 | 			summary = fmt.Sprintf("Showing the first %d of %d total rows.", numRowsToDisplay, totalRows)
547 | 		}
548 | 
549 | 		return map[string]any{
550 | 			"Data Retrieved": map[string]any{
551 | 				"headers": headers,
552 | 				"rows":    compactRows,
553 | 				"summary": summary,
554 | 			},
555 | 		}
556 | 	}
557 | 	return nil
558 | }
559 | 
560 | func handleError(resp *ErrorResponse) map[string]any {
561 | 	return map[string]any{
562 | 		"Error": map[string]any{
563 | 			"Code":    int(resp.Code),
564 | 			"Message": resp.Message,
565 | 		},
566 | 	}
567 | }
568 | 
569 | func appendMessage(messages []map[string]any, newMessage map[string]any) []map[string]any {
570 | 	if newMessage == nil {
571 | 		return messages
572 | 	}
573 | 	if len(messages) > 0 {
574 | 		if _, ok := messages[len(messages)-1]["Data Retrieved"]; ok {
575 | 			messages = messages[:len(messages)-1]
576 | 		}
577 | 	}
578 | 	return append(messages, newMessage)
579 | }
580 | 
```
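
A note on the streaming pattern above: `getStream` consumes the opening `[` with `json.Decoder.Token` and then decodes one array element at a time while `decoder.More()` reports remaining elements, so the response never has to be buffered whole. Below is a minimal, self-contained sketch of that same pattern; the `msg` shape and sample payload are invented for illustration and are not the real API schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// msg is an illustrative element type, not the real API schema.
type msg struct {
	Text string `json:"text"`
}

func main() {
	// Stand-in for resp.Body: a JSON array arriving as a stream.
	body := strings.NewReader(`[{"text":"hello"},{"text":"world"}]`)
	dec := json.NewDecoder(body)

	// Read the opening '[' token before iterating elements.
	if _, err := dec.Token(); err != nil {
		if err == io.EOF {
			return // treat an empty body as a valid, empty response
		}
		panic(err)
	}

	// Decode one element at a time; More reports whether another follows.
	for dec.More() {
		var m msg
		if err := dec.Decode(&m); err != nil {
			panic(err)
		}
		fmt.Println(m.Text)
	}
}
```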

--------------------------------------------------------------------------------
/docs/en/resources/sources/dataplex.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: "Dataplex"
  3 | type: docs
  4 | weight: 1
  5 | description: >
  6 |   Dataplex Universal Catalog is a unified, intelligent governance solution for data and AI assets in Google Cloud. Dataplex Universal Catalog powers AI, analytics, and business intelligence at scale.
  7 | ---
  8 | 
  9 | # Dataplex Source
 10 | 
 11 | [Dataplex][dataplex-docs] Universal Catalog is a unified, intelligent governance
 12 | solution for data and AI assets in Google Cloud. Dataplex Universal Catalog
 13 | powers AI, analytics, and business intelligence at scale.
 14 | 
 15 | At the heart of these governance capabilities is a catalog that contains a
 16 | centralized inventory of the data assets in your organization. Dataplex
 17 | Universal Catalog holds business, technical, and runtime metadata for all of
 18 | your data. It helps you discover relationships and semantics in the metadata by
 19 | applying artificial intelligence and machine learning.
 20 | 
 21 | [dataplex-docs]: https://cloud.google.com/dataplex/docs
 22 | 
 23 | ## Example
 24 | 
 25 | ```yaml
 26 | sources:
 27 |   my-dataplex-source:
 28 |     kind: "dataplex"
 29 |     project: "my-project-id"
 30 | ```
 31 | 
 32 | ## Sample System Prompt
 33 | 
 34 | You can use the following system prompt as "Custom Instructions" in your client
 35 | application.
 36 | 
 37 | ```
 38 | # Objective
 39 | Your primary objective is to help discover, organize, and manage metadata related to data assets.
 40 | 
 41 | # Tone and Style
 42 | 1. Adopt the persona of a senior subject matter expert
 43 | 2. Your communication style must be:
 44 |     1. Concise: Always favor brevity.
 45 |     2. Direct: Avoid greetings (e.g., "Hi there!", "Certainly!"). Get straight to the point.  
 46 |         Example (Incorrect): Hi there! I see that you are looking for...  
 47 |         Example (Correct): This problem likely stems from...
 48 | 3. Do not reiterate or summarize the question in the answer.
 49 | 4. Crucially, always convey a tone of uncertainty and caution. Since you are interpreting metadata and have no way to externally verify your answers, never express complete confidence. Frame your responses as interpretations based solely on the provided metadata. Use a suggestive tone, not a prescriptive one:
 50 |     Example (Correct): "The entry describes..."  
 51 |     Example (Correct): "According to catalog,..."  
 52 |     Example (Correct): "Based on the metadata,..."  
 53 |     Example (Correct): "Based on the search results,..."  
 54 | 5. Do not make assumptions.
 55 | 
 56 | # Data Model
 57 | ## Entries
 58 | An Entry represents a specific data asset. It acts as a metadata record for something that is managed by the catalog, such as:
 59 | 
 60 | - A BigQuery table or dataset
 61 | - A Cloud Storage bucket or folder
 62 | - An on-premises SQL table
 63 | 
 64 | ## Aspects
 65 | While the Entry itself is a container, the rich descriptive information about the asset (e.g., schema, data types, business descriptions, classifications) is stored in associated components called Aspects. Aspects are created based on pre-defined blueprints known as Aspect Types.
 66 | 
 67 | ## Aspect Types
 68 | An Aspect Type is a reusable template that defines the schema for a set of metadata fields. Think of an Aspect Type as a blueprint for the kind of metadata that can be attached to an Entry in the catalog.
 69 | 
 70 | Examples:
 71 | - projects/dataplex-types/locations/global/aspectTypes/analytics-hub-exchange
 72 | - projects/dataplex-types/locations/global/aspectTypes/analytics-hub
 73 | - projects/dataplex-types/locations/global/aspectTypes/analytics-hub-listing
 74 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-connection
 75 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-data-policy
 76 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-dataset
 77 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-model
 78 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-policy
 79 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-routine
 80 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-row-access-policy
 81 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-table
 82 | - projects/dataplex-types/locations/global/aspectTypes/bigquery-view
 83 | - projects/dataplex-types/locations/global/aspectTypes/cloud-bigtable-instance
 84 | - projects/dataplex-types/locations/global/aspectTypes/cloud-bigtable-table
 85 | - projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-database
 86 | - projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-instance
 87 | - projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-table
 88 | - projects/dataplex-types/locations/global/aspectTypes/cloud-spanner-view
 89 | - projects/dataplex-types/locations/global/aspectTypes/cloudsql-database
 90 | - projects/dataplex-types/locations/global/aspectTypes/cloudsql-instance
 91 | - projects/dataplex-types/locations/global/aspectTypes/cloudsql-schema
 92 | - projects/dataplex-types/locations/global/aspectTypes/cloudsql-table
 93 | - projects/dataplex-types/locations/global/aspectTypes/cloudsql-view
 94 | - projects/dataplex-types/locations/global/aspectTypes/contacts
 95 | - projects/dataplex-types/locations/global/aspectTypes/dataform-code-asset
 96 | - projects/dataplex-types/locations/global/aspectTypes/dataform-repository
 97 | - projects/dataplex-types/locations/global/aspectTypes/dataform-workspace
 98 | - projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-database
 99 | - projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-service
100 | - projects/dataplex-types/locations/global/aspectTypes/dataproc-metastore-table
101 | - projects/dataplex-types/locations/global/aspectTypes/data-product
102 | - projects/dataplex-types/locations/global/aspectTypes/data-quality-scorecard
103 | - projects/dataplex-types/locations/global/aspectTypes/external-connection
104 | - projects/dataplex-types/locations/global/aspectTypes/overview
105 | - projects/dataplex-types/locations/global/aspectTypes/pubsub-topic
106 | - projects/dataplex-types/locations/global/aspectTypes/schema
107 | - projects/dataplex-types/locations/global/aspectTypes/sensitive-data-protection-job-result
108 | - projects/dataplex-types/locations/global/aspectTypes/sensitive-data-protection-profile
109 | - projects/dataplex-types/locations/global/aspectTypes/sql-access
110 | - projects/dataplex-types/locations/global/aspectTypes/storage-bucket
111 | - projects/dataplex-types/locations/global/aspectTypes/storage-folder
112 | - projects/dataplex-types/locations/global/aspectTypes/storage
113 | - projects/dataplex-types/locations/global/aspectTypes/usage
114 | 
115 | ## Entry Types
116 | Every Entry must conform to an Entry Type. The Entry Type acts as a template, defining the structure, required aspects, and constraints for Entries of that type. 
117 | 
118 | Examples:
119 | - projects/dataplex-types/locations/global/entryTypes/analytics-hub-exchange
120 | - projects/dataplex-types/locations/global/entryTypes/analytics-hub-listing
121 | - projects/dataplex-types/locations/global/entryTypes/bigquery-connection
122 | - projects/dataplex-types/locations/global/entryTypes/bigquery-data-policy
123 | - projects/dataplex-types/locations/global/entryTypes/bigquery-dataset
124 | - projects/dataplex-types/locations/global/entryTypes/bigquery-model
125 | - projects/dataplex-types/locations/global/entryTypes/bigquery-routine
126 | - projects/dataplex-types/locations/global/entryTypes/bigquery-row-access-policy
127 | - projects/dataplex-types/locations/global/entryTypes/bigquery-table
128 | - projects/dataplex-types/locations/global/entryTypes/bigquery-view
129 | - projects/dataplex-types/locations/global/entryTypes/cloud-bigtable-instance
130 | - projects/dataplex-types/locations/global/entryTypes/cloud-bigtable-table
131 | - projects/dataplex-types/locations/global/entryTypes/cloud-spanner-database
132 | - projects/dataplex-types/locations/global/entryTypes/cloud-spanner-instance
133 | - projects/dataplex-types/locations/global/entryTypes/cloud-spanner-table
134 | - projects/dataplex-types/locations/global/entryTypes/cloud-spanner-view
135 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-database
136 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-instance
137 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-table
138 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-mysql-view
139 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-database
140 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-instance
141 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-schema
142 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-table
143 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-postgresql-view
144 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-database
145 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-instance
146 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-schema
147 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-table
148 | - projects/dataplex-types/locations/global/entryTypes/cloudsql-sqlserver-view
149 | - projects/dataplex-types/locations/global/entryTypes/dataform-code-asset
150 | - projects/dataplex-types/locations/global/entryTypes/dataform-repository
151 | - projects/dataplex-types/locations/global/entryTypes/dataform-workspace
152 | - projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-database
153 | - projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-service
154 | - projects/dataplex-types/locations/global/entryTypes/dataproc-metastore-table
155 | - projects/dataplex-types/locations/global/entryTypes/pubsub-topic
156 | - projects/dataplex-types/locations/global/entryTypes/storage-bucket
157 | - projects/dataplex-types/locations/global/entryTypes/storage-folder
158 | - projects/dataplex-types/locations/global/entryTypes/vertexai-dataset
159 | - projects/dataplex-types/locations/global/entryTypes/vertexai-feature-group
160 | - projects/dataplex-types/locations/global/entryTypes/vertexai-feature-online-store
161 | 
162 | ## Entry Groups
163 | Entries are organized within Entry Groups, which are logical groupings of Entries. An Entry Group acts as a namespace for its Entries.
164 | 
165 | ## Entry Links
166 | Entries can be linked together using EntryLinks to represent relationships between data assets (e.g., foreign keys).
167 | 
168 | # Tool instructions
169 | ## Tool: dataplex_search_entries
170 | ## General
171 | - Do not try to search within search results on your own.
172 | - Do not fetch multiple pages of results unless explicitly asked.
173 | 
174 | ## Search syntax
175 | 
176 | ### Simple search
177 | In its simplest form, a search query consists of a single predicate. Such a predicate can match several pieces of metadata:
178 | 
179 | - A substring of a name, display name, or description of a resource
180 | - A substring of the type of a resource
181 | - A substring of a column name (or nested column name) in the schema of a resource
182 | - A substring of a project ID
183 | - A string from an overview description
184 | 
185 | For example, the predicate foo matches the following resources:
186 | - Resource with the name foo.bar
187 | - Resource with the display name Foo Bar
188 | - Resource with the description This is the foo script
189 | - Resource with the exact type foo
190 | - Column foo_bar in the schema of a resource
191 | - Nested column foo_bar in the schema of a resource
192 | - Project prod-foo-bar
193 | - Resource with an overview containing the word foo
194 | 
195 | 
196 | ### Qualified predicates
197 | You can qualify a predicate by prefixing it with a key that restricts the matching to a specific piece of metadata:
198 | - An equal sign (=) restricts the search to an exact match.
199 | - A colon (:) after the key matches the predicate to either a substring or a token within the value in the search results.
200 | 
201 | Tokenization splits the stream of text into a series of tokens, with each token usually corresponding to a single word. For example:
202 | - name:foo selects resources with names that contain the foo substring, like foo1 and barfoo.
203 | - description:foo selects resources whose description contains the token foo, such as the description "bar and foo".
204 | - location=foo matches resources in a specified location with foo as the location name.
205 | 
206 | The predicate keys type, system, location, and orgid support only the exact match (=) qualifier, not the substring qualifier (:). For example, type=foo or orgid=number.
207 | 
208 | Search syntax supports the following qualifiers:
209 | - "name:x" - Matches x as a substring of the resource ID.
210 | - "displayname:x" - Match x as a substring of the resource display name.
211 | - "column:x" - Matches x as a substring of the column name (or nested column name) in the schema of the resource.
212 | - "description:x" - Matches x as a token in the resource description.
213 | - "label:bar" - Matches BigQuery resources that have a label (with some value) and the label key has bar as a substring.
214 | - "label=bar" - Matches BigQuery resources that have a label (with some value) and the label key equals bar as a string.
215 | - "label:bar:x" - Matches x as a substring in the value of a label with a key bar attached to a BigQuery resource.
216 | - "label=foo:bar" - Matches BigQuery resources where the key equals foo and the key value equals bar.
217 | - "label.foo=bar" - Matches BigQuery resources where the key equals foo and the key value equals bar.
218 | - "label.foo" - Matches BigQuery resources that have a label whose key equals foo as a string.
219 | - "type=TYPE" - Matches resources of a specific entry type or its type alias.
220 | - "projectid:bar" - Matches resources within Google Cloud projects that match bar as a substring in the ID.
221 | - "parent:x" - Matches x as a substring of the hierarchical path of a resource. It supports same syntax as `name` predicate.
222 | - "orgid=number" - Matches resources within a Google Cloud organization with the exact ID value of the number.
223 | - "system=SYSTEM" - Matches resources from a specified system. For example, system=bigquery matches BigQuery resources.
224 | - "location=LOCATION" - Matches resources in a specified location with an exact name. For example, location=us-central1 matches assets hosted in Iowa. BigQuery Omni assets support this qualifier by using the BigQuery Omni location name. For example, location=aws-us-east-1 matches BigQuery Omni assets in Northern Virginia.
225 | - "createtime" -
226 | Finds resources that were created within, before, or after a given date or time. For example "createtime:2019-01-01" matches resources created on 2019-01-01. 
227 | - "updatetime" - Finds resources that were updated within, before, or after a given date or time. For example "updatetime>2019-01-01" matches resources updated after 2019-01-01.
228 | 
229 | ### Aspect Search
230 | To search for entries based on their attached aspects, use the following query syntax.
231 | 
232 | - "aspect:x" - Matches x as a substring of the full path to the aspect type of an aspect that is attached to the entry, in the format projectid.location.ASPECT_TYPE_ID
233 | - "aspect=x" - Matches x as the full path to the aspect type of an aspect that is attached to the entry, in the format projectid.location.ASPECT_TYPE_ID
234 | - "aspect:xOPERATORvalue" - Searches for aspect field values. Matches x as a substring of the full path to the aspect type and field name
235 |   of an aspect that is attached to the entry, in the format projectid.location.ASPECT_TYPE_ID.FIELD_NAME
236 | 
237 | The list of supported {OPERATOR}s depends on the type of field in the aspect, as follows:
238 | - String: = (exact match) and : (substring)
239 | - All number types: =, :, <, >, <=, >=, =>, =<
240 | - Enum: =
241 | - Datetime: same as for numbers, but the values to compare are treated as datetimes instead of numbers
242 | - Boolean: =
243 | 
244 | Only top-level fields of the aspect are searchable. For example, all of the following queries match entries where the value of the is-enrolled field in the employee-info aspect type is true. Other entries that match on the substring are also returned.
245 | - aspect:example-project.us-central1.employee-info.is-enrolled=true
246 | - aspect:example-project.us-central1.employee=true
247 | - aspect:employee=true
248 | 
249 | Examples:
250 | You can use the following filters:
251 | - dataplex-types.global.bigquery-table.type={BIGLAKE_TABLE, BIGLAKE_OBJECT_TABLE, EXTERNAL_TABLE, TABLE}
252 | - dataplex-types.global.storage.type={STRUCTURED, UNSTRUCTURED}
253 | 
254 | ### Logical operators
255 | A query can consist of several predicates with logical operators. If you don't specify an operator, logical AND is implied. For example, foo bar returns resources that match both predicate foo and predicate bar.
256 | Logical AND and logical OR are supported. For example, foo OR bar.
257 | 
258 | You can negate a predicate with a - (hyphen) or NOT prefix. For example, -name:foo returns resources with names that don't match the predicate foo.
259 | Logical operators are case-sensitive. `OR` and `AND` are acceptable whereas `or` and `and` are not.
260 | 
261 | ### Request
262 | 1. Always try to rewrite the prompt using search syntax.
263 | 
264 | ### Response
265 | 1. If there are multiple search results found
266 |     1. Present the list of search results
267 |     2. Format the output as a nested markdown list, for example:  
268 |     Given
269 |     ```
270 |     {
271 |         results: [
272 |             {
273 |                 name: "projects/test-project/locations/us/entryGroups/@bigquery-aws-us-east-1/entries/users",
274 |                 entrySource: {
275 |                     displayName: "Users",
276 |                     description: "Table contains list of users.",
277 |                     location: "aws-us-east-1",
278 |                     system: "BigQuery"
279 |                 }
280 |             },
281 |             {
282 |                 name: "projects/another_project/locations/us-central1/entryGroups/@bigquery/entries/top_customers",
283 |                 entrySource: {
284 |                     displayName: "Top customers",
285 |                     description: "Table contains list of best customers.",
286 |                     location: "us-central1",
287 |                     system: "BigQuery"
288 |                 }
289 |             },
290 |         ]
291 |     }
292 |     ```
293 |     Return output formatted as markdown nested list:
294 |     ```
295 |     * Users:
296 |         - projectId: test-project
297 |         - location: aws-us-east-1
298 |         - description: Table contains list of users.
299 |     * Top customers:
300 |         - projectId: another_project
301 |         - location: us-central1
302 |         - description: Table contains list of best customers.
303 |     ```
304 |     3. Ask the user to select one of the presented search results.
305 | 2. If there is only one search result found
306 |     1. Present the search result immediately.
307 | 3. If no search results are found
308 |     1. Explain that no search results were found.
309 |     2. Suggest providing a more specific search query.
310 | 
311 | ## Tool: dataplex_lookup_entry
312 | ### Request
313 | 1. Always try to limit the size of the response by specifying the `aspect_types` parameter. Make sure to select view=CUSTOM when using the `aspect_types` parameter. If you do not know the name of the aspect type, use the `dataplex_search_aspect_types` tool.
314 | 2. If you do not know the name of the entry, use the `dataplex_search_entries` tool.
315 | ### Response
316 | 1. Unless asked for a specific aspect, respond with all aspects attached to the entry.
317 | ```
318 | 
319 | ## Reference
320 | 
321 | | **field** | **type** | **required** | **description**                                                                  |
322 | |-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
323 | | kind      |  string  |     true     | Must be "dataplex".                                                              |
324 | | project   |  string  |     true     | ID of the GCP project used for quota and billing purposes (e.g. "my-project-id").|
```
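
Since space-separated predicates combine with an implicit logical AND, a client can assemble a qualified search query programmatically. A small illustrative sketch in Go; the predicate values are invented for the example:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Qualified predicates from the syntax documented above; space-separated
	// predicates combine with an implicit logical AND.
	predicates := []string{
		"system=bigquery",      // exact-match qualifier
		"location=us-central1", // exact-match qualifier
		"column:customer_id",   // substring qualifier
		"-name:tmp",            // negated predicate
	}
	query := strings.Join(predicates, " ")
	fmt.Println(query) // system=bigquery location=us-central1 column:customer_id -name:tmp
}
```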

--------------------------------------------------------------------------------
/internal/prebuiltconfigs/tools/alloydb-postgres-observability.yaml:
--------------------------------------------------------------------------------

```yaml
  1 | # Copyright 2025 Google LLC
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | sources:
 15 |   cloud-monitoring-source:
 16 |     kind: cloud-monitoring
 17 | tools:
 18 |   get_system_metrics:
 19 |     kind: cloud-monitoring-query-prometheus
 20 |     source: cloud-monitoring-source
 21 |     description: |
 22 |       Fetches system-level Cloud Monitoring data (timeseries metrics) for an AlloyDB cluster or instance.
 23 |       To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
 24 | 
 25 |       Generate the PromQL `query` for AlloyDB system metrics using the provided metrics and rules. Get labels like `cluster_id` and `instance_id` from the user's intent.
 26 | 
 27 |       Defaults:
 28 |       1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
 29 | 
 30 |       PromQL Query Examples:
 31 |       1. Basic Time Series: `avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance"}[5m])`
 32 |       2. Top K: `topk(30, avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance"}[5m]))`
 33 |       3. Mean: `avg(avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="my-instance","cluster_id"="my-cluster"}[5m]))`
 34 |       4. Minimum: `min(min_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 35 |       5. Maximum: `max(max_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 36 |       6. Sum: `sum(avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 37 |       7. Count streams: `count(avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 38 |       8. Percentile with groupby on instanceid, clusterid: `quantile by ("instance_id","cluster_id")(0.99,avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","cluster_id"="my-cluster","instance_id"="my-instance"}[5m]))`
 39 | 
 40 |       Available Metrics List (format: metric name. description. monitored resource. labels):
 41 |       1. `alloydb.googleapis.com/instance/cpu/average_utilization`: The percentage of CPU being used on an instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 42 |       2. `alloydb.googleapis.com/instance/cpu/maximum_utilization`: Maximum CPU utilization across all currently serving nodes of the instance from 0 to 100. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 43 |       3. `alloydb.googleapis.com/cluster/storage/usage`: The total AlloyDB storage in bytes across the entire cluster. `alloydb.googleapis.com/Cluster`. `cluster_id`.
 44 |       4. `alloydb.googleapis.com/instance/postgres/replication/replicas`: The number of read replicas connected to the primary instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `state`, `replica_instance_id`.
 45 |       5. `alloydb.googleapis.com/instance/postgres/replication/maximum_lag`: The maximum replication time lag calculated across all serving read replicas of the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `replica_instance_id`.
 46 |       6. `alloydb.googleapis.com/instance/memory/min_available_memory`: The minimum available memory across all currently serving nodes of the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 47 |       7. `alloydb.googleapis.com/instance/postgres/instances`: The number of nodes in the instance, along with their status, which can be either up or down. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `status`.
 48 |       8. `alloydb.googleapis.com/database/postgresql/tuples`: Number of tuples (rows) by state per database in the instance. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`, `state`.
 49 |       9. `alloydb.googleapis.com/database/postgresql/temp_bytes_written_for_top_databases`: The total amount of data (in bytes) written to temporary files by the queries per database for top 500 dbs. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 50 |       10. `alloydb.googleapis.com/database/postgresql/temp_files_written_for_top_databases`: The number of temporary files used for writing data per database while performing internal algorithms like join, sort, etc. for top 500 dbs. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 51 |       11. `alloydb.googleapis.com/database/postgresql/inserted_tuples_count_for_top_databases`: The total number of rows inserted per db for top 500 dbs as a result of the queries in the instance. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 52 |       12. `alloydb.googleapis.com/database/postgresql/updated_tuples_count_for_top_databases`: The total number of rows updated per db for top 500 dbs as a result of the queries in the instance. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 53 |       13. `alloydb.googleapis.com/database/postgresql/deleted_tuples_count_for_top_databases`: The total number of rows deleted per db for top 500 dbs as a result of the queries in the instance. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 54 |       14. `alloydb.googleapis.com/database/postgresql/backends_for_top_databases`: The current number of connections per database to the instance for top 500 dbs. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 55 |       15. `alloydb.googleapis.com/instance/postgresql/backends_by_state`: The current number of connections to the instance grouped by the state like idle, active, idle_in_transaction, idle_in_transaction_aborted, disabled, and fastpath_function_call. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `state`.
 56 |       16. `alloydb.googleapis.com/instance/postgresql/backends_for_top_applications`: The current number of connections to the AlloyDB instance, grouped by applications for top 500 applications. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `application_name`.
 57 |       17. `alloydb.googleapis.com/database/postgresql/new_connections_for_top_databases`: Total number of new connections added per database for top 500 databases to the instance. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 58 |       18. `alloydb.googleapis.com/database/postgresql/deadlock_count_for_top_databases`: Total number of deadlocks detected in the instance per database for top 500 dbs. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`.
 59 |       19. `alloydb.googleapis.com/database/postgresql/statements_executed_count`: Total count of statements executed in the instance per database per operation_type. `alloydb.googleapis.com/Database`. `cluster_id`, `instance_id`, `database`, `operation_type`.
 60 |       20. `alloydb.googleapis.com/instance/postgresql/returned_tuples_count`: Number of rows scanned while processing the queries in the instance since the last sample. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 61 |       21. `alloydb.googleapis.com/instance/postgresql/fetched_tuples_count`: Number of rows fetched while processing the queries in the instance since the last sample. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 62 |       22. `alloydb.googleapis.com/instance/postgresql/updated_tuples_count`: Number of rows updated while processing the queries in the instance since the last sample. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 63 |       23. `alloydb.googleapis.com/instance/postgresql/inserted_tuples_count`: Number of rows inserted while processing the queries in the instance since the last sample. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 64 |       24. `alloydb.googleapis.com/instance/postgresql/deleted_tuples_count`: Number of rows deleted while processing the queries in the instance since the last sample. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 65 |       25. `alloydb.googleapis.com/instance/postgresql/written_tuples_count`: Number of rows written while processing the queries in the instance since the last sample. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 66 |       26. `alloydb.googleapis.com/instance/postgresql/deadlock_count`: Number of deadlocks detected in the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 67 |       27. `alloydb.googleapis.com/instance/postgresql/blks_read`: Number of blocks read by Postgres that were not in the buffer cache. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 68 |       28. `alloydb.googleapis.com/instance/postgresql/blks_hit`: Number of times Postgres found the requested block in the buffer cache. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 69 |       29. `alloydb.googleapis.com/instance/postgresql/temp_bytes_written_count`: The total amount of data (in bytes) written to temporary files by the queries while performing internal algorithms like join, sort, etc. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 70 |       30. `alloydb.googleapis.com/instance/postgresql/temp_files_written_count`: The number of temporary files used for writing data in the instance while performing internal algorithms like join, sort, etc. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 71 |       31. `alloydb.googleapis.com/instance/postgresql/new_connections_count`: The number of new connections added to the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 72 |       32. `alloydb.googleapis.com/instance/postgresql/wait_count`: Total number of times processes waited for each wait event in the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `wait_event_type`, `wait_event_name`.
 73 |       33. `alloydb.googleapis.com/instance/postgresql/wait_time`: Total elapsed wait time for each wait event in the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`, `wait_event_type`, `wait_event_name`.
 74 |       34. `alloydb.googleapis.com/instance/postgres/transaction_count`: The number of committed and rolled back transactions across all serving nodes of the instance. `alloydb.googleapis.com/Instance`. `cluster_id`, `instance_id`.
 75 | 
 76 |   get_query_metrics:
 77 |     kind: cloud-monitoring-query-prometheus
 78 |     source: cloud-monitoring-source
 79 |     description: |
 80 |       Fetches query-level Cloud Monitoring data (timeseries metrics) for queries running in an AlloyDB instance.
 81 |       To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
 82 | 
 83 |       Generate the PromQL `query` for AlloyDB query metrics using the provided metrics and rules. Get labels like `cluster_id`, `instance_id`, and `query_hash` from the user's intent. If `query_hash` is provided, use the per-query metrics.
 84 | 
 85 |       Defaults:
 86 |       1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
 87 | 
 88 |       PromQL Query Examples:
 89 |       1. Basic Time Series: `avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance"}[5m])`
 90 |       2. Top K: `topk(30, avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance"}[5m]))`
 91 |       3. Mean: `avg(avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="my-instance","cluster_id"="my-cluster"}[5m]))`
 92 |       4. Minimum: `min(min_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 93 |       5. Maximum: `max(max_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 94 |       6. Sum: `sum(avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 95 |       7. Count streams: `count(avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","instance_id"="alloydb-instance","cluster_id"="alloydb-cluster"}[5m]))`
 96 |       8. Percentile with groupby on instanceid, clusterid: `quantile by ("instance_id","cluster_id")(0.99,avg_over_time({"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization","monitored_resource"="alloydb.googleapis.com/Instance","cluster_id"="my-cluster","instance_id"="my-instance"}[5m]))`
 97 | 
 98 |       Available Metrics List (format: metric name. description. monitored resource. labels). The `aggregate` metrics hold aggregated values across all query stats; use them when no query id is provided. For `perquery` metrics, do not fetch `querystring` unless the user asks for it specifically; aggregate on `query_hash` to avoid fetching the querystring. Do not use latency metrics for anything.
 99 |       1. `alloydb.googleapis.com/database/postgresql/insights/aggregate/latencies`: Aggregated query latency distribution. `alloydb.googleapis.com/Database`. `user`, `client_addr`.
100 |       2. `alloydb.googleapis.com/database/postgresql/insights/aggregate/execution_time`: Accumulated aggregated query execution time since the last sample. `alloydb.googleapis.com/Database`. `user`, `client_addr`.
101 |       3. `alloydb.googleapis.com/database/postgresql/insights/aggregate/io_time`: Accumulated aggregated IO time since the last sample. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `io_type`.
102 |       4. `alloydb.googleapis.com/database/postgresql/insights/aggregate/lock_time`: Accumulated aggregated lock wait time since the last sample. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `lock_type`.
103 |       5. `alloydb.googleapis.com/database/postgresql/insights/aggregate/row_count`: Aggregated number of retrieved or affected rows since the last sample. `alloydb.googleapis.com/Database`. `user`, `client_addr`.
104 |       6. `alloydb.googleapis.com/database/postgresql/insights/aggregate/shared_blk_access_count`: Aggregated shared blocks accessed by statement execution. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `access_type`.
105 |       7. `alloydb.googleapis.com/database/postgresql/insights/perquery/latencies`: Per query latency distribution. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `querystring`, `query_hash`.
106 |       8. `alloydb.googleapis.com/database/postgresql/insights/perquery/execution_time`: Accumulated execution times per user per database per query. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `querystring`, `query_hash`.
107 |       9. `alloydb.googleapis.com/database/postgresql/insights/perquery/io_time`: Accumulated IO time since the last sample per query. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `io_type`, `querystring`, `query_hash`.
108 |       10. `alloydb.googleapis.com/database/postgresql/insights/perquery/lock_time`: Accumulated lock wait time since the last sample per query. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `lock_type`, `querystring`, `query_hash`.
109 |       11. `alloydb.googleapis.com/database/postgresql/insights/perquery/row_count`: The number of retrieved or affected rows since the last sample per query. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `querystring`, `query_hash`.
110 |       12. `alloydb.googleapis.com/database/postgresql/insights/perquery/shared_blk_access_count`: Shared blocks accessed by statement execution per query. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `access_type`, `querystring`, `query_hash`.
111 |       13. `alloydb.googleapis.com/database/postgresql/insights/pertag/latencies`: Query latency distribution. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`.
112 |       14. `alloydb.googleapis.com/database/postgresql/insights/pertag/execution_time`: Accumulated execution times since the last sample. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`.
113 |       15. `alloydb.googleapis.com/database/postgresql/insights/pertag/io_time`: Accumulated IO time since the last sample per tag. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `io_type`, `tag_hash`.
114 |       16. `alloydb.googleapis.com/database/postgresql/insights/pertag/lock_time`: Accumulated lock wait time since the last sample per tag. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `lock_type`, `tag_hash`.
115 |       17. `alloydb.googleapis.com/database/postgresql/insights/pertag/shared_blk_access_count`: Shared blocks accessed by statement execution per tag. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `access_type`, `tag_hash`.
116 |       18. `alloydb.googleapis.com/database/postgresql/insights/pertag/row_count`: The number of retrieved or affected rows since the last sample per tag. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`.
117 | 
118 | toolsets:
119 |   alloydb_postgres_cloud_monitoring_tools:
120 |     - get_system_metrics
121 |     - get_query_metrics
122 | 
```
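
The PromQL examples in both tool descriptions share one shape: a selector keyed by `__name__`, `monitored_resource`, and resource labels, wrapped in an `_over_time` function and an outer aggregation. A hedged Go sketch that assembles such a query string under that assumption; the cluster and instance IDs are placeholders:

```go
package main

import "fmt"

// buildAvgUtilizationQuery assembles a PromQL string following the documented
// pattern. It is illustrative only and performs no validation of its inputs.
func buildAvgUtilizationQuery(clusterID, instanceID, interval string) string {
	selector := fmt.Sprintf(
		`{"__name__"="alloydb.googleapis.com/instance/cpu/average_utilization",`+
			`"monitored_resource"="alloydb.googleapis.com/Instance",`+
			`"cluster_id"=%q,"instance_id"=%q}`,
		clusterID, instanceID,
	)
	return fmt.Sprintf("avg(avg_over_time(%s[%s]))", selector, interval)
}

func main() {
	// Placeholder identifiers; substitute real cluster and instance IDs.
	fmt.Println(buildAvgUtilizationQuery("my-cluster", "my-instance", "5m"))
}
```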