This is page 16 of 35. Use http://codebase.md/googleapis/genai-toolbox?lines=false&page={x} to view the full context.

# Directory Structure

```
├── .ci │ ├── continuous.release.cloudbuild.yaml │ ├── generate_release_table.sh │ ├── integration.cloudbuild.yaml │ ├── quickstart_test │ │ ├── go.integration.cloudbuild.yaml │ │ ├── js.integration.cloudbuild.yaml │ │ ├── py.integration.cloudbuild.yaml │ │ ├── run_go_tests.sh │ │ ├── run_js_tests.sh │ │ ├── run_py_tests.sh │ │ └── setup_hotels_sample.sql │ ├── test_with_coverage.sh │ └── versioned.release.cloudbuild.yaml ├── .github │ ├── auto-label.yaml │ ├── blunderbuss.yml │ ├── CODEOWNERS │ ├── header-checker-lint.yml │ ├── ISSUE_TEMPLATE │ │ ├── bug_report.yml │ │ ├── config.yml │ │ ├── feature_request.yml │ │ └── question.yml │ ├── label-sync.yml │ ├── labels.yaml │ ├── PULL_REQUEST_TEMPLATE.md │ ├── release-please.yml │ ├── renovate.json5 │ ├── sync-repo-settings.yaml │ └── workflows │ ├── cloud_build_failure_reporter.yml │ ├── deploy_dev_docs.yaml │ ├── deploy_previous_version_docs.yaml │ ├── deploy_versioned_docs.yaml │ ├── docs_deploy.yaml │ ├── docs_preview_clean.yaml │ ├── docs_preview_deploy.yaml │ ├── lint.yaml │ ├── schedule_reporter.yml │ ├── sync-labels.yaml │ └── tests.yaml ├── .gitignore ├── .gitmodules ├── .golangci.yaml ├── .hugo │ ├── archetypes │ │ └── default.md │ ├── assets │ │ ├── icons │ │ │ └── logo.svg │ │ └── scss │ │ ├── _styles_project.scss │ │ └── _variables_project.scss │ ├── go.mod │ ├── go.sum │ ├── hugo.toml │ ├── layouts │ │ ├── _default │ │ │ └── home.releases.releases │ │ ├── index.llms-full.txt │ │ ├── index.llms.txt │ │ ├── partials │ │ │ ├── hooks │ │ │ │ └── head-end.html │ │ │ ├── navbar-version-selector.html │ │ │ ├── page-meta-links.html │ │ │ └── td │ │ │ └── render-heading.html │ │ ├── robot.txt │ │ └── shortcodes │ │ ├── include.html │ │ ├── ipynb.html │ │ └── regionInclude.html │ ├── package-lock.json │ ├── package.json │ └── static │ ├── favicons │ │ ├── 
android-chrome-192x192.png │ │ ├── android-chrome-512x512.png │ │ ├── apple-touch-icon.png │ │ ├── favicon-16x16.png │ │ ├── favicon-32x32.png │ │ └── favicon.ico │ └── js │ └── w3.js ├── CHANGELOG.md ├── cmd │ ├── options_test.go │ ├── options.go │ ├── root_test.go │ ├── root.go │ └── version.txt ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md ├── DEVELOPER.md ├── Dockerfile ├── docs │ └── en │ ├── _index.md │ ├── about │ │ ├── _index.md │ │ └── faq.md │ ├── concepts │ │ ├── _index.md │ │ └── telemetry │ │ ├── index.md │ │ ├── telemetry_flow.png │ │ └── telemetry_traces.png │ ├── getting-started │ │ ├── _index.md │ │ ├── colab_quickstart.ipynb │ │ ├── configure.md │ │ ├── introduction │ │ │ ├── _index.md │ │ │ └── architecture.png │ │ ├── local_quickstart_go.md │ │ ├── local_quickstart_js.md │ │ ├── local_quickstart.md │ │ ├── mcp_quickstart │ │ │ ├── _index.md │ │ │ ├── inspector_tools.png │ │ │ └── inspector.png │ │ └── quickstart │ │ ├── go │ │ │ ├── genAI │ │ │ │ ├── go.mod │ │ │ │ ├── go.sum │ │ │ │ └── quickstart.go │ │ │ ├── genkit │ │ │ │ ├── go.mod │ │ │ │ ├── go.sum │ │ │ │ └── quickstart.go │ │ │ ├── langchain │ │ │ │ ├── go.mod │ │ │ │ ├── go.sum │ │ │ │ └── quickstart.go │ │ │ ├── openAI │ │ │ │ ├── go.mod │ │ │ │ ├── go.sum │ │ │ │ └── quickstart.go │ │ │ └── quickstart_test.go │ │ ├── golden.txt │ │ ├── js │ │ │ ├── genAI │ │ │ │ ├── package-lock.json │ │ │ │ ├── package.json │ │ │ │ └── quickstart.js │ │ │ ├── genkit │ │ │ │ ├── package-lock.json │ │ │ │ ├── package.json │ │ │ │ └── quickstart.js │ │ │ ├── langchain │ │ │ │ ├── package-lock.json │ │ │ │ ├── package.json │ │ │ │ └── quickstart.js │ │ │ ├── llamaindex │ │ │ │ ├── package-lock.json │ │ │ │ ├── package.json │ │ │ │ └── quickstart.js │ │ │ └── quickstart.test.js │ │ ├── python │ │ │ ├── __init__.py │ │ │ ├── adk │ │ │ │ ├── quickstart.py │ │ │ │ └── requirements.txt │ │ │ ├── core │ │ │ │ ├── quickstart.py │ │ │ │ └── requirements.txt │ │ │ ├── langchain │ │ │ │ ├── quickstart.py │ │ │ │ └── 
requirements.txt │ │ │ ├── llamaindex │ │ │ │ ├── quickstart.py │ │ │ │ └── requirements.txt │ │ │ └── quickstart_test.py │ │ └── shared │ │ ├── cloud_setup.md │ │ ├── configure_toolbox.md │ │ └── database_setup.md │ ├── how-to │ │ ├── _index.md │ │ ├── connect_via_geminicli.md │ │ ├── connect_via_mcp.md │ │ ├── connect-ide │ │ │ ├── _index.md │ │ │ ├── alloydb_pg_admin_mcp.md │ │ │ ├── alloydb_pg_mcp.md │ │ │ ├── bigquery_mcp.md │ │ │ ├── cloud_sql_mssql_admin_mcp.md │ │ │ ├── cloud_sql_mssql_mcp.md │ │ │ ├── cloud_sql_mysql_admin_mcp.md │ │ │ ├── cloud_sql_mysql_mcp.md │ │ │ ├── cloud_sql_pg_admin_mcp.md │ │ │ ├── cloud_sql_pg_mcp.md │ │ │ ├── firestore_mcp.md │ │ │ ├── looker_mcp.md │ │ │ ├── mssql_mcp.md │ │ │ ├── mysql_mcp.md │ │ │ ├── neo4j_mcp.md │ │ │ ├── postgres_mcp.md │ │ │ ├── spanner_mcp.md │ │ │ └── sqlite_mcp.md │ │ ├── deploy_docker.md │ │ ├── deploy_gke.md │ │ ├── deploy_toolbox.md │ │ ├── export_telemetry.md │ │ └── toolbox-ui │ │ ├── edit-headers.gif │ │ ├── edit-headers.png │ │ ├── index.md │ │ ├── optional-param-checked.png │ │ ├── optional-param-unchecked.png │ │ ├── run-tool.gif │ │ ├── tools.png │ │ └── toolsets.png │ ├── reference │ │ ├── _index.md │ │ ├── cli.md │ │ └── prebuilt-tools.md │ ├── resources │ │ ├── _index.md │ │ ├── authServices │ │ │ ├── _index.md │ │ │ └── google.md │ │ ├── sources │ │ │ ├── _index.md │ │ │ ├── alloydb-admin.md │ │ │ ├── alloydb-pg.md │ │ │ ├── bigquery.md │ │ │ ├── bigtable.md │ │ │ ├── cassandra.md │ │ │ ├── clickhouse.md │ │ │ ├── cloud-monitoring.md │ │ │ ├── cloud-sql-admin.md │ │ │ ├── cloud-sql-mssql.md │ │ │ ├── cloud-sql-mysql.md │ │ │ ├── cloud-sql-pg.md │ │ │ ├── couchbase.md │ │ │ ├── dataplex.md │ │ │ ├── dgraph.md │ │ │ ├── firebird.md │ │ │ ├── firestore.md │ │ │ ├── http.md │ │ │ ├── looker.md │ │ │ ├── mongodb.md │ │ │ ├── mssql.md │ │ │ ├── mysql.md │ │ │ ├── neo4j.md │ │ │ ├── oceanbase.md │ │ │ ├── oracle.md │ │ │ ├── postgres.md │ │ │ ├── redis.md │ │ │ ├── spanner.md │ │ │ ├── sqlite.md 
│ │ │ ├── tidb.md │ │ │ ├── trino.md │ │ │ ├── valkey.md │ │ │ └── yugabytedb.md │ │ └── tools │ │ ├── _index.md │ │ ├── alloydb │ │ │ ├── _index.md │ │ │ ├── alloydb-create-cluster.md │ │ │ ├── alloydb-create-instance.md │ │ │ ├── alloydb-create-user.md │ │ │ ├── alloydb-get-cluster.md │ │ │ ├── alloydb-get-instance.md │ │ │ ├── alloydb-get-user.md │ │ │ ├── alloydb-list-clusters.md │ │ │ ├── alloydb-list-instances.md │ │ │ ├── alloydb-list-users.md │ │ │ └── alloydb-wait-for-operation.md │ │ ├── alloydbainl │ │ │ ├── _index.md │ │ │ └── alloydb-ai-nl.md │ │ ├── bigquery │ │ │ ├── _index.md │ │ │ ├── bigquery-analyze-contribution.md │ │ │ ├── bigquery-conversational-analytics.md │ │ │ ├── bigquery-execute-sql.md │ │ │ ├── bigquery-forecast.md │ │ │ ├── bigquery-get-dataset-info.md │ │ │ ├── bigquery-get-table-info.md │ │ │ ├── bigquery-list-dataset-ids.md │ │ │ ├── bigquery-list-table-ids.md │ │ │ ├── bigquery-search-catalog.md │ │ │ └── bigquery-sql.md │ │ ├── bigtable │ │ │ ├── _index.md │ │ │ └── bigtable-sql.md │ │ ├── cassandra │ │ │ ├── _index.md │ │ │ └── cassandra-cql.md │ │ ├── clickhouse │ │ │ ├── _index.md │ │ │ ├── clickhouse-execute-sql.md │ │ │ ├── clickhouse-list-databases.md │ │ │ ├── clickhouse-list-tables.md │ │ │ └── clickhouse-sql.md │ │ ├── cloudmonitoring │ │ │ ├── _index.md │ │ │ └── cloud-monitoring-query-prometheus.md │ │ ├── cloudsql │ │ │ ├── _index.md │ │ │ ├── cloudsqlcreatedatabase.md │ │ │ ├── cloudsqlcreateusers.md │ │ │ ├── cloudsqlgetinstances.md │ │ │ ├── cloudsqllistdatabases.md │ │ │ ├── cloudsqllistinstances.md │ │ │ ├── cloudsqlmssqlcreateinstance.md │ │ │ ├── cloudsqlmysqlcreateinstance.md │ │ │ ├── cloudsqlpgcreateinstances.md │ │ │ └── cloudsqlwaitforoperation.md │ │ ├── couchbase │ │ │ ├── _index.md │ │ │ └── couchbase-sql.md │ │ ├── dataform │ │ │ ├── _index.md │ │ │ └── dataform-compile-local.md │ │ ├── dataplex │ │ │ ├── _index.md │ │ │ ├── dataplex-lookup-entry.md │ │ │ ├── dataplex-search-aspect-types.md │ │ │ └── 
dataplex-search-entries.md │ │ ├── dgraph │ │ │ ├── _index.md │ │ │ └── dgraph-dql.md │ │ ├── firebird │ │ │ ├── _index.md │ │ │ ├── firebird-execute-sql.md │ │ │ └── firebird-sql.md │ │ ├── firestore │ │ │ ├── _index.md │ │ │ ├── firestore-add-documents.md │ │ │ ├── firestore-delete-documents.md │ │ │ ├── firestore-get-documents.md │ │ │ ├── firestore-get-rules.md │ │ │ ├── firestore-list-collections.md │ │ │ ├── firestore-query-collection.md │ │ │ ├── firestore-query.md │ │ │ ├── firestore-update-document.md │ │ │ └── firestore-validate-rules.md │ │ ├── http │ │ │ ├── _index.md │ │ │ └── http.md │ │ ├── looker │ │ │ ├── _index.md │ │ │ ├── looker-add-dashboard-element.md │ │ │ ├── looker-conversational-analytics.md │ │ │ ├── looker-create-project-file.md │ │ │ ├── looker-delete-project-file.md │ │ │ ├── looker-dev-mode.md │ │ │ ├── looker-get-dashboards.md │ │ │ ├── looker-get-dimensions.md │ │ │ ├── looker-get-explores.md │ │ │ ├── looker-get-filters.md │ │ │ ├── looker-get-looks.md │ │ │ ├── looker-get-measures.md │ │ │ ├── looker-get-models.md │ │ │ ├── looker-get-parameters.md │ │ │ ├── looker-get-project-file.md │ │ │ ├── looker-get-project-files.md │ │ │ ├── looker-get-projects.md │ │ │ ├── looker-health-analyze.md │ │ │ ├── looker-health-pulse.md │ │ │ ├── looker-health-vacuum.md │ │ │ ├── looker-make-dashboard.md │ │ │ ├── looker-make-look.md │ │ │ ├── looker-query-sql.md │ │ │ ├── looker-query-url.md │ │ │ ├── looker-query.md │ │ │ ├── looker-run-look.md │ │ │ └── looker-update-project-file.md │ │ ├── mongodb │ │ │ ├── _index.md │ │ │ ├── mongodb-aggregate.md │ │ │ ├── mongodb-delete-many.md │ │ │ ├── mongodb-delete-one.md │ │ │ ├── mongodb-find-one.md │ │ │ ├── mongodb-find.md │ │ │ ├── mongodb-insert-many.md │ │ │ ├── mongodb-insert-one.md │ │ │ ├── mongodb-update-many.md │ │ │ └── mongodb-update-one.md │ │ ├── mssql │ │ │ ├── _index.md │ │ │ ├── mssql-execute-sql.md │ │ │ ├── mssql-list-tables.md │ │ │ └── mssql-sql.md │ │ ├── mysql │ │ │ ├── 
_index.md │ │ │ ├── mysql-execute-sql.md │ │ │ ├── mysql-list-active-queries.md │ │ │ ├── mysql-list-table-fragmentation.md │ │ │ ├── mysql-list-tables-missing-unique-indexes.md │ │ │ ├── mysql-list-tables.md │ │ │ └── mysql-sql.md │ │ ├── neo4j │ │ │ ├── _index.md │ │ │ ├── neo4j-cypher.md │ │ │ ├── neo4j-execute-cypher.md │ │ │ └── neo4j-schema.md │ │ ├── oceanbase │ │ │ ├── _index.md │ │ │ ├── oceanbase-execute-sql.md │ │ │ └── oceanbase-sql.md │ │ ├── oracle │ │ │ ├── _index.md │ │ │ ├── oracle-execute-sql.md │ │ │ └── oracle-sql.md │ │ ├── postgres │ │ │ ├── _index.md │ │ │ ├── postgres-execute-sql.md │ │ │ ├── postgres-list-active-queries.md │ │ │ ├── postgres-list-available-extensions.md │ │ │ ├── postgres-list-installed-extensions.md │ │ │ ├── postgres-list-tables.md │ │ │ └── postgres-sql.md │ │ ├── redis │ │ │ ├── _index.md │ │ │ └── redis.md │ │ ├── spanner │ │ │ ├── _index.md │ │ │ ├── spanner-execute-sql.md │ │ │ ├── spanner-list-tables.md │ │ │ └── spanner-sql.md │ │ ├── sqlite │ │ │ ├── _index.md │ │ │ ├── sqlite-execute-sql.md │ │ │ └── sqlite-sql.md │ │ ├── tidb │ │ │ ├── _index.md │ │ │ ├── tidb-execute-sql.md │ │ │ └── tidb-sql.md │ │ ├── trino │ │ │ ├── _index.md │ │ │ ├── trino-execute-sql.md │ │ │ └── trino-sql.md │ │ ├── utility │ │ │ ├── _index.md │ │ │ └── wait.md │ │ ├── valkey │ │ │ ├── _index.md │ │ │ └── valkey.md │ │ └── yuagbytedb │ │ ├── _index.md │ │ └── yugabytedb-sql.md │ ├── samples │ │ ├── _index.md │ │ ├── alloydb │ │ │ ├── _index.md │ │ │ ├── ai-nl │ │ │ │ ├── alloydb_ai_nl.ipynb │ │ │ │ └── index.md │ │ │ └── mcp_quickstart.md │ │ ├── bigquery │ │ │ ├── _index.md │ │ │ ├── colab_quickstart_bigquery.ipynb │ │ │ ├── local_quickstart.md │ │ │ └── mcp_quickstart │ │ │ ├── _index.md │ │ │ ├── inspector_tools.png │ │ │ └── inspector.png │ │ └── looker │ │ ├── _index.md │ │ ├── looker_gemini_oauth │ │ │ ├── _index.md │ │ │ ├── authenticated.png │ │ │ ├── authorize.png │ │ │ └── registration.png │ │ ├── looker_gemini.md │ │ └── 
looker_mcp_inspector │ │ ├── _index.md │ │ ├── inspector_tools.png │ │ └── inspector.png │ └── sdks │ ├── _index.md │ ├── go-sdk.md │ ├── js-sdk.md │ └── python-sdk.md ├── gemini-extension.json ├── go.mod ├── go.sum ├── internal │ ├── auth │ │ ├── auth.go │ │ └── google │ │ └── google.go │ ├── log │ │ ├── handler.go │ │ ├── log_test.go │ │ ├── log.go │ │ └── logger.go │ ├── prebuiltconfigs │ │ ├── prebuiltconfigs_test.go │ │ ├── prebuiltconfigs.go │ │ └── tools │ │ ├── alloydb-postgres-admin.yaml │ │ ├── alloydb-postgres-observability.yaml │ │ ├── alloydb-postgres.yaml │ │ ├── bigquery.yaml │ │ ├── clickhouse.yaml │ │ ├── cloud-sql-mssql-admin.yaml │ │ ├── cloud-sql-mssql-observability.yaml │ │ ├── cloud-sql-mssql.yaml │ │ ├── cloud-sql-mysql-admin.yaml │ │ ├── cloud-sql-mysql-observability.yaml │ │ ├── cloud-sql-mysql.yaml │ │ ├── cloud-sql-postgres-admin.yaml │ │ ├── cloud-sql-postgres-observability.yaml │ │ ├── cloud-sql-postgres.yaml │ │ ├── dataplex.yaml │ │ ├── firestore.yaml │ │ ├── looker-conversational-analytics.yaml │ │ ├── looker.yaml │ │ ├── mssql.yaml │ │ ├── mysql.yaml │ │ ├── neo4j.yaml │ │ ├── oceanbase.yaml │ │ ├── postgres.yaml │ │ ├── spanner-postgres.yaml │ │ ├── spanner.yaml │ │ └── sqlite.yaml │ ├── server │ │ ├── api_test.go │ │ ├── api.go │ │ ├── common_test.go │ │ ├── config.go │ │ ├── mcp │ │ │ ├── jsonrpc │ │ │ │ ├── jsonrpc_test.go │ │ │ │ └── jsonrpc.go │ │ │ ├── mcp.go │ │ │ ├── util │ │ │ │ └── lifecycle.go │ │ │ ├── v20241105 │ │ │ │ ├── method.go │ │ │ │ └── types.go │ │ │ ├── v20250326 │ │ │ │ ├── method.go │ │ │ │ └── types.go │ │ │ └── v20250618 │ │ │ ├── method.go │ │ │ └── types.go │ │ ├── mcp_test.go │ │ ├── mcp.go │ │ ├── server_test.go │ │ ├── server.go │ │ ├── static │ │ │ ├── assets │ │ │ │ └── mcptoolboxlogo.png │ │ │ ├── css │ │ │ │ └── style.css │ │ │ ├── index.html │ │ │ ├── js │ │ │ │ ├── auth.js │ │ │ │ ├── loadTools.js │ │ │ │ ├── mainContent.js │ │ │ │ ├── navbar.js │ │ │ │ ├── runTool.js │ │ │ │ ├── toolDisplay.js 
│ │ │ │ ├── tools.js │ │ │ │ └── toolsets.js │ │ │ ├── tools.html │ │ │ └── toolsets.html │ │ ├── web_test.go │ │ └── web.go │ ├── sources │ │ ├── alloydbadmin │ │ │ ├── alloydbadmin_test.go │ │ │ └── alloydbadmin.go │ │ ├── alloydbpg │ │ │ ├── alloydb_pg_test.go │ │ │ └── alloydb_pg.go │ │ ├── bigquery │ │ │ ├── bigquery_test.go │ │ │ └── bigquery.go │ │ ├── bigtable │ │ │ ├── bigtable_test.go │ │ │ └── bigtable.go │ │ ├── cassandra │ │ │ ├── cassandra_test.go │ │ │ └── cassandra.go │ │ ├── clickhouse │ │ │ ├── clickhouse_test.go │ │ │ └── clickhouse.go │ │ ├── cloudmonitoring │ │ │ ├── cloud_monitoring_test.go │ │ │ └── cloud_monitoring.go │ │ ├── cloudsqladmin │ │ │ ├── cloud_sql_admin_test.go │ │ │ └── cloud_sql_admin.go │ │ ├── cloudsqlmssql │ │ │ ├── cloud_sql_mssql_test.go │ │ │ └── cloud_sql_mssql.go │ │ ├── cloudsqlmysql │ │ │ ├── cloud_sql_mysql_test.go │ │ │ └── cloud_sql_mysql.go │ │ ├── cloudsqlpg │ │ │ ├── cloud_sql_pg_test.go │ │ │ └── cloud_sql_pg.go │ │ ├── couchbase │ │ │ ├── couchbase_test.go │ │ │ └── couchbase.go │ │ ├── dataplex │ │ │ ├── dataplex_test.go │ │ │ └── dataplex.go │ │ ├── dgraph │ │ │ ├── dgraph_test.go │ │ │ └── dgraph.go │ │ ├── dialect.go │ │ ├── firebird │ │ │ ├── firebird_test.go │ │ │ └── firebird.go │ │ ├── firestore │ │ │ ├── firestore_test.go │ │ │ └── firestore.go │ │ ├── http │ │ │ ├── http_test.go │ │ │ └── http.go │ │ ├── ip_type.go │ │ ├── looker │ │ │ ├── looker_test.go │ │ │ └── looker.go │ │ ├── mongodb │ │ │ ├── mongodb_test.go │ │ │ └── mongodb.go │ │ ├── mssql │ │ │ ├── mssql_test.go │ │ │ └── mssql.go │ │ ├── mysql │ │ │ ├── mysql_test.go │ │ │ └── mysql.go │ │ ├── neo4j │ │ │ ├── neo4j_test.go │ │ │ └── neo4j.go │ │ ├── oceanbase │ │ │ ├── oceanbase_test.go │ │ │ └── oceanbase.go │ │ ├── oracle │ │ │ └── oracle.go │ │ ├── postgres │ │ │ ├── postgres_test.go │ │ │ └── postgres.go │ │ ├── redis │ │ │ ├── redis_test.go │ │ │ └── redis.go │ │ ├── sources.go │ │ ├── spanner │ │ │ ├── spanner_test.go │ │ │ └── 
spanner.go │ │ ├── sqlite │ │ │ ├── sqlite_test.go │ │ │ └── sqlite.go │ │ ├── tidb │ │ │ ├── tidb_test.go │ │ │ └── tidb.go │ │ ├── trino │ │ │ ├── trino_test.go │ │ │ └── trino.go │ │ ├── util.go │ │ ├── valkey │ │ │ ├── valkey_test.go │ │ │ └── valkey.go │ │ └── yugabytedb │ │ ├── yugabytedb_test.go │ │ └── yugabytedb.go │ ├── telemetry │ │ ├── instrumentation.go │ │ └── telemetry.go │ ├── testutils │ │ └── testutils.go │ ├── tools │ │ ├── alloydb │ │ │ ├── alloydbcreatecluster │ │ │ │ ├── alloydbcreatecluster_test.go │ │ │ │ └── alloydbcreatecluster.go │ │ │ ├── alloydbcreateinstance │ │ │ │ ├── alloydbcreateinstance_test.go │ │ │ │ └── alloydbcreateinstance.go │ │ │ ├── alloydbcreateuser │ │ │ │ ├── alloydbcreateuser_test.go │ │ │ │ └── alloydbcreateuser.go │ │ │ ├── alloydbgetcluster │ │ │ │ ├── alloydbgetcluster_test.go │ │ │ │ └── alloydbgetcluster.go │ │ │ ├── alloydbgetinstance │ │ │ │ ├── alloydbgetinstance_test.go │ │ │ │ └── alloydbgetinstance.go │ │ │ ├── alloydbgetuser │ │ │ │ ├── alloydbgetuser_test.go │ │ │ │ └── alloydbgetuser.go │ │ │ ├── alloydblistclusters │ │ │ │ ├── alloydblistclusters_test.go │ │ │ │ └── alloydblistclusters.go │ │ │ ├── alloydblistinstances │ │ │ │ ├── alloydblistinstances_test.go │ │ │ │ └── alloydblistinstances.go │ │ │ ├── alloydblistusers │ │ │ │ ├── alloydblistusers_test.go │ │ │ │ └── alloydblistusers.go │ │ │ └── alloydbwaitforoperation │ │ │ ├── alloydbwaitforoperation_test.go │ │ │ └── alloydbwaitforoperation.go │ │ ├── alloydbainl │ │ │ ├── alloydbainl_test.go │ │ │ └── alloydbainl.go │ │ ├── bigquery │ │ │ ├── bigqueryanalyzecontribution │ │ │ │ ├── bigqueryanalyzecontribution_test.go │ │ │ │ └── bigqueryanalyzecontribution.go │ │ │ ├── bigquerycommon │ │ │ │ ├── table_name_parser_test.go │ │ │ │ ├── table_name_parser.go │ │ │ │ └── util.go │ │ │ ├── bigqueryconversationalanalytics │ │ │ │ ├── bigqueryconversationalanalytics_test.go │ │ │ │ └── bigqueryconversationalanalytics.go │ │ │ ├── bigqueryexecutesql │ │ │ 
│ ├── bigqueryexecutesql_test.go │ │ │ │ └── bigqueryexecutesql.go │ │ │ ├── bigqueryforecast │ │ │ │ ├── bigqueryforecast_test.go │ │ │ │ └── bigqueryforecast.go │ │ │ ├── bigquerygetdatasetinfo │ │ │ │ ├── bigquerygetdatasetinfo_test.go │ │ │ │ └── bigquerygetdatasetinfo.go │ │ │ ├── bigquerygettableinfo │ │ │ │ ├── bigquerygettableinfo_test.go │ │ │ │ └── bigquerygettableinfo.go │ │ │ ├── bigquerylistdatasetids │ │ │ │ ├── bigquerylistdatasetids_test.go │ │ │ │ └── bigquerylistdatasetids.go │ │ │ ├── bigquerylisttableids │ │ │ │ ├── bigquerylisttableids_test.go │ │ │ │ └── bigquerylisttableids.go │ │ │ ├── bigquerysearchcatalog │ │ │ │ ├── bigquerysearchcatalog_test.go │ │ │ │ └── bigquerysearchcatalog.go │ │ │ └── bigquerysql │ │ │ ├── bigquerysql_test.go │ │ │ └── bigquerysql.go │ │ ├── bigtable │ │ │ ├── bigtable_test.go │ │ │ └── bigtable.go │ │ ├── cassandra │ │ │ └── cassandracql │ │ │ ├── cassandracql_test.go │ │ │ └── cassandracql.go │ │ ├── clickhouse │ │ │ ├── clickhouseexecutesql │ │ │ │ ├── clickhouseexecutesql_test.go │ │ │ │ └── clickhouseexecutesql.go │ │ │ ├── clickhouselistdatabases │ │ │ │ ├── clickhouselistdatabases_test.go │ │ │ │ └── clickhouselistdatabases.go │ │ │ ├── clickhouselisttables │ │ │ │ ├── clickhouselisttables_test.go │ │ │ │ └── clickhouselisttables.go │ │ │ └── clickhousesql │ │ │ ├── clickhousesql_test.go │ │ │ └── clickhousesql.go │ │ ├── cloudmonitoring │ │ │ ├── cloudmonitoring_test.go │ │ │ └── cloudmonitoring.go │ │ ├── cloudsql │ │ │ ├── cloudsqlcreatedatabase │ │ │ │ ├── cloudsqlcreatedatabase_test.go │ │ │ │ └── cloudsqlcreatedatabase.go │ │ │ ├── cloudsqlcreateusers │ │ │ │ ├── cloudsqlcreateusers_test.go │ │ │ │ └── cloudsqlcreateusers.go │ │ │ ├── cloudsqlgetinstances │ │ │ │ ├── cloudsqlgetinstances_test.go │ │ │ │ └── cloudsqlgetinstances.go │ │ │ ├── cloudsqllistdatabases │ │ │ │ ├── cloudsqllistdatabases_test.go │ │ │ │ └── cloudsqllistdatabases.go │ │ │ ├── cloudsqllistinstances │ │ │ │ ├── 
cloudsqllistinstances_test.go │ │ │ │ └── cloudsqllistinstances.go │ │ │ └── cloudsqlwaitforoperation │ │ │ ├── cloudsqlwaitforoperation_test.go │ │ │ └── cloudsqlwaitforoperation.go │ │ ├── cloudsqlmssql │ │ │ └── cloudsqlmssqlcreateinstance │ │ │ ├── cloudsqlmssqlcreateinstance_test.go │ │ │ └── cloudsqlmssqlcreateinstance.go │ │ ├── cloudsqlmysql │ │ │ └── cloudsqlmysqlcreateinstance │ │ │ ├── cloudsqlmysqlcreateinstance_test.go │ │ │ └── cloudsqlmysqlcreateinstance.go │ │ ├── cloudsqlpg │ │ │ └── cloudsqlpgcreateinstances │ │ │ ├── cloudsqlpgcreateinstances_test.go │ │ │ └── cloudsqlpgcreateinstances.go │ │ ├── common_test.go │ │ ├── common.go │ │ ├── couchbase │ │ │ ├── couchbase_test.go │ │ │ └── couchbase.go │ │ ├── dataform │ │ │ └── dataformcompilelocal │ │ │ ├── dataformcompilelocal_test.go │ │ │ └── dataformcompilelocal.go │ │ ├── dataplex │ │ │ ├── dataplexlookupentry │ │ │ │ ├── dataplexlookupentry_test.go │ │ │ │ └── dataplexlookupentry.go │ │ │ ├── dataplexsearchaspecttypes │ │ │ │ ├── dataplexsearchaspecttypes_test.go │ │ │ │ └── dataplexsearchaspecttypes.go │ │ │ └── dataplexsearchentries │ │ │ ├── dataplexsearchentries_test.go │ │ │ └── dataplexsearchentries.go │ │ ├── dgraph │ │ │ ├── dgraph_test.go │ │ │ └── dgraph.go │ │ ├── firebird │ │ │ ├── firebirdexecutesql │ │ │ │ ├── firebirdexecutesql_test.go │ │ │ │ └── firebirdexecutesql.go │ │ │ └── firebirdsql │ │ │ ├── firebirdsql_test.go │ │ │ └── firebirdsql.go │ │ ├── firestore │ │ │ ├── firestoreadddocuments │ │ │ │ ├── firestoreadddocuments_test.go │ │ │ │ └── firestoreadddocuments.go │ │ │ ├── firestoredeletedocuments │ │ │ │ ├── firestoredeletedocuments_test.go │ │ │ │ └── firestoredeletedocuments.go │ │ │ ├── firestoregetdocuments │ │ │ │ ├── firestoregetdocuments_test.go │ │ │ │ └── firestoregetdocuments.go │ │ │ ├── firestoregetrules │ │ │ │ ├── firestoregetrules_test.go │ │ │ │ └── firestoregetrules.go │ │ │ ├── firestorelistcollections │ │ │ │ ├── firestorelistcollections_test.go │ │ │ 
│ └── firestorelistcollections.go │ │ │ ├── firestorequery │ │ │ │ ├── firestorequery_test.go │ │ │ │ └── firestorequery.go │ │ │ ├── firestorequerycollection │ │ │ │ ├── firestorequerycollection_test.go │ │ │ │ └── firestorequerycollection.go │ │ │ ├── firestoreupdatedocument │ │ │ │ ├── firestoreupdatedocument_test.go │ │ │ │ └── firestoreupdatedocument.go │ │ │ ├── firestorevalidaterules │ │ │ │ ├── firestorevalidaterules_test.go │ │ │ │ └── firestorevalidaterules.go │ │ │ └── util │ │ │ ├── converter_test.go │ │ │ ├── converter.go │ │ │ ├── validator_test.go │ │ │ └── validator.go │ │ ├── http │ │ │ ├── http_test.go │ │ │ └── http.go │ │ ├── http_method.go │ │ ├── looker │ │ │ ├── lookeradddashboardelement │ │ │ │ ├── lookeradddashboardelement_test.go │ │ │ │ └── lookeradddashboardelement.go │ │ │ ├── lookercommon │ │ │ │ ├── lookercommon_test.go │ │ │ │ └── lookercommon.go │ │ │ ├── lookerconversationalanalytics │ │ │ │ ├── lookerconversationalanalytics_test.go │ │ │ │ └── lookerconversationalanalytics.go │ │ │ ├── lookercreateprojectfile │ │ │ │ ├── lookercreateprojectfile_test.go │ │ │ │ └── lookercreateprojectfile.go │ │ │ ├── lookerdeleteprojectfile │ │ │ │ ├── lookerdeleteprojectfile_test.go │ │ │ │ └── lookerdeleteprojectfile.go │ │ │ ├── lookerdevmode │ │ │ │ ├── lookerdevmode_test.go │ │ │ │ └── lookerdevmode.go │ │ │ ├── lookergetdashboards │ │ │ │ ├── lookergetdashboards_test.go │ │ │ │ └── lookergetdashboards.go │ │ │ ├── lookergetdimensions │ │ │ │ ├── lookergetdimensions_test.go │ │ │ │ └── lookergetdimensions.go │ │ │ ├── lookergetexplores │ │ │ │ ├── lookergetexplores_test.go │ │ │ │ └── lookergetexplores.go │ │ │ ├── lookergetfilters │ │ │ │ ├── lookergetfilters_test.go │ │ │ │ └── lookergetfilters.go │ │ │ ├── lookergetlooks │ │ │ │ ├── lookergetlooks_test.go │ │ │ │ └── lookergetlooks.go │ │ │ ├── lookergetmeasures │ │ │ │ ├── lookergetmeasures_test.go │ │ │ │ └── lookergetmeasures.go │ │ │ ├── lookergetmodels │ │ │ │ ├── 
lookergetmodels_test.go │ │ │ │ └── lookergetmodels.go │ │ │ ├── lookergetparameters │ │ │ │ ├── lookergetparameters_test.go │ │ │ │ └── lookergetparameters.go │ │ │ ├── lookergetprojectfile │ │ │ │ ├── lookergetprojectfile_test.go │ │ │ │ └── lookergetprojectfile.go │ │ │ ├── lookergetprojectfiles │ │ │ │ ├── lookergetprojectfiles_test.go │ │ │ │ └── lookergetprojectfiles.go │ │ │ ├── lookergetprojects │ │ │ │ ├── lookergetprojects_test.go │ │ │ │ └── lookergetprojects.go │ │ │ ├── lookerhealthanalyze │ │ │ │ ├── lookerhealthanalyze_test.go │ │ │ │ └── lookerhealthanalyze.go │ │ │ ├── lookerhealthpulse │ │ │ │ ├── lookerhealthpulse_test.go │ │ │ │ └── lookerhealthpulse.go │ │ │ ├── lookerhealthvacuum │ │ │ │ ├── lookerhealthvacuum_test.go │ │ │ │ └── lookerhealthvacuum.go │ │ │ ├── lookermakedashboard │ │ │ │ ├── lookermakedashboard_test.go │ │ │ │ └── lookermakedashboard.go │ │ │ ├── lookermakelook │ │ │ │ ├── lookermakelook_test.go │ │ │ │ └── lookermakelook.go │ │ │ ├── lookerquery │ │ │ │ ├── lookerquery_test.go │ │ │ │ └── lookerquery.go │ │ │ ├── lookerquerysql │ │ │ │ ├── lookerquerysql_test.go │ │ │ │ └── lookerquerysql.go │ │ │ ├── lookerqueryurl │ │ │ │ ├── lookerqueryurl_test.go │ │ │ │ └── lookerqueryurl.go │ │ │ ├── lookerrunlook │ │ │ │ ├── lookerrunlook_test.go │ │ │ │ └── lookerrunlook.go │ │ │ └── lookerupdateprojectfile │ │ │ ├── lookerupdateprojectfile_test.go │ │ │ └── lookerupdateprojectfile.go │ │ ├── mongodb │ │ │ ├── mongodbaggregate │ │ │ │ ├── mongodbaggregate_test.go │ │ │ │ └── mongodbaggregate.go │ │ │ ├── mongodbdeletemany │ │ │ │ ├── mongodbdeletemany_test.go │ │ │ │ └── mongodbdeletemany.go │ │ │ ├── mongodbdeleteone │ │ │ │ ├── mongodbdeleteone_test.go │ │ │ │ └── mongodbdeleteone.go │ │ │ ├── mongodbfind │ │ │ │ ├── mongodbfind_test.go │ │ │ │ └── mongodbfind.go │ │ │ ├── mongodbfindone │ │ │ │ ├── mongodbfindone_test.go │ │ │ │ └── mongodbfindone.go │ │ │ ├── mongodbinsertmany │ │ │ │ ├── mongodbinsertmany_test.go │ │ │ │ └── 
mongodbinsertmany.go │ │ │ ├── mongodbinsertone │ │ │ │ ├── mongodbinsertone_test.go │ │ │ │ └── mongodbinsertone.go │ │ │ ├── mongodbupdatemany │ │ │ │ ├── mongodbupdatemany_test.go │ │ │ │ └── mongodbupdatemany.go │ │ │ └── mongodbupdateone │ │ │ ├── mongodbupdateone_test.go │ │ │ └── mongodbupdateone.go │ │ ├── mssql │ │ │ ├── mssqlexecutesql │ │ │ │ ├── mssqlexecutesql_test.go │ │ │ │ └── mssqlexecutesql.go │ │ │ ├── mssqllisttables │ │ │ │ ├── mssqllisttables_test.go │ │ │ │ └── mssqllisttables.go │ │ │ └── mssqlsql │ │ │ ├── mssqlsql_test.go │ │ │ └── mssqlsql.go │ │ ├── mysql │ │ │ ├── mysqlcommon │ │ │ │ └── mysqlcommon.go │ │ │ ├── mysqlexecutesql │ │ │ │ ├── mysqlexecutesql_test.go │ │ │ │ └── mysqlexecutesql.go │ │ │ ├── mysqllistactivequeries │ │ │ │ ├── mysqllistactivequeries_test.go │ │ │ │ └── mysqllistactivequeries.go │ │ │ ├── mysqllisttablefragmentation │ │ │ │ ├── mysqllisttablefragmentation_test.go │ │ │ │ └── mysqllisttablefragmentation.go │ │ │ ├── mysqllisttables │ │ │ │ ├── mysqllisttables_test.go │ │ │ │ └── mysqllisttables.go │ │ │ ├── mysqllisttablesmissinguniqueindexes │ │ │ │ ├── mysqllisttablesmissinguniqueindexes_test.go │ │ │ │ └── mysqllisttablesmissinguniqueindexes.go │ │ │ └── mysqlsql │ │ │ ├── mysqlsql_test.go │ │ │ └── mysqlsql.go │ │ ├── neo4j │ │ │ ├── neo4jcypher │ │ │ │ ├── neo4jcypher_test.go │ │ │ │ └── neo4jcypher.go │ │ │ ├── neo4jexecutecypher │ │ │ │ ├── classifier │ │ │ │ │ ├── classifier_test.go │ │ │ │ │ └── classifier.go │ │ │ │ ├── neo4jexecutecypher_test.go │ │ │ │ └── neo4jexecutecypher.go │ │ │ └── neo4jschema │ │ │ ├── cache │ │ │ │ ├── cache_test.go │ │ │ │ └── cache.go │ │ │ ├── helpers │ │ │ │ ├── helpers_test.go │ │ │ │ └── helpers.go │ │ │ ├── neo4jschema_test.go │ │ │ ├── neo4jschema.go │ │ │ └── types │ │ │ └── types.go │ │ ├── oceanbase │ │ │ ├── oceanbaseexecutesql │ │ │ │ ├── oceanbaseexecutesql_test.go │ │ │ │ └── oceanbaseexecutesql.go │ │ │ └── oceanbasesql │ │ │ ├── oceanbasesql_test.go │ │ │ 
└── oceanbasesql.go │ │ ├── oracle │ │ │ ├── oracleexecutesql │ │ │ │ └── oracleexecutesql.go │ │ │ └── oraclesql │ │ │ └── oraclesql.go │ │ ├── parameters_test.go │ │ ├── parameters.go │ │ ├── postgres │ │ │ ├── postgresexecutesql │ │ │ │ ├── postgresexecutesql_test.go │ │ │ │ └── postgresexecutesql.go │ │ │ ├── postgreslistactivequeries │ │ │ │ ├── postgreslistactivequeries_test.go │ │ │ │ └── postgreslistactivequeries.go │ │ │ ├── postgreslistavailableextensions │ │ │ │ ├── postgreslistavailableextensions_test.go │ │ │ │ └── postgreslistavailableextensions.go │ │ │ ├── postgreslistinstalledextensions │ │ │ │ ├── postgreslistinstalledextensions_test.go │ │ │ │ └── postgreslistinstalledextensions.go │ │ │ ├── postgreslisttables │ │ │ │ ├── postgreslisttables_test.go │ │ │ │ └── postgreslisttables.go │ │ │ └── postgressql │ │ │ ├── postgressql_test.go │ │ │ └── postgressql.go │ │ ├── redis │ │ │ ├── redis_test.go │ │ │ └── redis.go │ │ ├── spanner │ │ │ ├── spannerexecutesql │ │ │ │ ├── spannerexecutesql_test.go │ │ │ │ └── spannerexecutesql.go │ │ │ ├── spannerlisttables │ │ │ │ ├── spannerlisttables_test.go │ │ │ │ └── spannerlisttables.go │ │ │ └── spannersql │ │ │ ├── spanner_test.go │ │ │ └── spannersql.go │ │ ├── sqlite │ │ │ ├── sqliteexecutesql │ │ │ │ ├── sqliteexecutesql_test.go │ │ │ │ └── sqliteexecutesql.go │ │ │ └── sqlitesql │ │ │ ├── sqlitesql_test.go │ │ │ └── sqlitesql.go │ │ ├── tidb │ │ │ ├── tidbexecutesql │ │ │ │ ├── tidbexecutesql_test.go │ │ │ │ └── tidbexecutesql.go │ │ │ └── tidbsql │ │ │ ├── tidbsql_test.go │ │ │ └── tidbsql.go │ │ ├── tools_test.go │ │ ├── tools.go │ │ ├── toolsets.go │ │ ├── trino │ │ │ ├── trinoexecutesql │ │ │ │ ├── trinoexecutesql_test.go │ │ │ │ └── trinoexecutesql.go │ │ │ └── trinosql │ │ │ ├── trinosql_test.go │ │ │ └── trinosql.go │ │ ├── utility │ │ │ └── wait │ │ │ ├── wait_test.go │ │ │ └── wait.go │ │ ├── valkey │ │ │ ├── valkey_test.go │ │ │ └── valkey.go │ │ └── yugabytedbsql │ │ ├── yugabytedbsql_test.go 
│ │ └── yugabytedbsql.go │ └── util │ └── util.go ├── LICENSE ├── logo.png ├── main.go ├── MCP-TOOLBOX-EXTENSION.md ├── README.md └── tests ├── alloydb │ ├── alloydb_integration_test.go │ └── alloydb_wait_for_operation_test.go ├── alloydbainl │ └── alloydb_ai_nl_integration_test.go ├── alloydbpg │ └── alloydb_pg_integration_test.go ├── auth.go ├── bigquery │ └── bigquery_integration_test.go ├── bigtable │ └── bigtable_integration_test.go ├── cassandra │ └── cassandra_integration_test.go ├── clickhouse │ └── clickhouse_integration_test.go ├── cloudmonitoring │ └── cloud_monitoring_integration_test.go ├── cloudsql │ ├── cloud_sql_create_database_test.go │ ├── cloud_sql_create_users_test.go │ ├── cloud_sql_get_instances_test.go │ ├── cloud_sql_list_databases_test.go │ ├── cloudsql_list_instances_test.go │ └── cloudsql_wait_for_operation_test.go ├── cloudsqlmssql │ ├── cloud_sql_mssql_create_instance_integration_test.go │ └── cloud_sql_mssql_integration_test.go ├── cloudsqlmysql │ ├── cloud_sql_mysql_create_instance_integration_test.go │ └── cloud_sql_mysql_integration_test.go ├── cloudsqlpg │ ├── cloud_sql_pg_create_instances_test.go │ └── cloud_sql_pg_integration_test.go ├── common.go ├── couchbase │ └── couchbase_integration_test.go ├── dataform │ └── dataform_integration_test.go ├── dataplex │ └── dataplex_integration_test.go ├── dgraph │ └── dgraph_integration_test.go ├── firebird │ └── firebird_integration_test.go ├── firestore │ └── firestore_integration_test.go ├── http │ └── http_integration_test.go ├── looker │ └── looker_integration_test.go ├── mongodb │ └── mongodb_integration_test.go ├── mssql │ └── mssql_integration_test.go ├── mysql │ └── mysql_integration_test.go ├── neo4j │ └── neo4j_integration_test.go ├── oceanbase │ └── oceanbase_integration_test.go ├── option.go ├── oracle │ └── oracle_integration_test.go ├── postgres │ └── postgres_integration_test.go ├── redis │ └── redis_test.go ├── server.go ├── source.go ├── spanner │ └── 
spanner_integration_test.go ├── sqlite │ └── sqlite_integration_test.go ├── tidb │ └── tidb_integration_test.go ├── tool.go ├── trino │ └── trino_integration_test.go ├── utility │ └── wait_integration_test.go ├── valkey │ └── valkey_test.go └── yugabytedb └── yugabytedb_integration_test.go ``` # Files -------------------------------------------------------------------------------- /internal/tools/mysql/mysqllisttablesmissinguniqueindexes/mysqllisttablesmissinguniqueindexes.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package mysqllisttablesmissinguniqueindexes

import (
	"context"
	"database/sql"
	"fmt"

	yaml "github.com/goccy/go-yaml"
	"github.com/googleapis/genai-toolbox/internal/sources"
	"github.com/googleapis/genai-toolbox/internal/sources/cloudsqlmysql"
	"github.com/googleapis/genai-toolbox/internal/sources/mysql"
	"github.com/googleapis/genai-toolbox/internal/tools"
	"github.com/googleapis/genai-toolbox/internal/tools/mysql/mysqlcommon"
	"github.com/googleapis/genai-toolbox/internal/util"
)

const kind string = "mysql-list-tables-missing-unique-indexes"

const listTablesMissingUniqueIndexesStatement = `
SELECT
  tab.table_schema AS table_schema,
  tab.table_name AS table_name
FROM information_schema.tables tab
LEFT JOIN information_schema.table_constraints tco
  ON tab.table_schema = tco.table_schema
  AND tab.table_name = tco.table_name
  AND tco.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE tco.constraint_type IS NULL
  AND tab.table_schema NOT IN('mysql', 'information_schema', 'performance_schema', 'sys')
  AND tab.table_type = 'BASE TABLE'
  AND (COALESCE(?, '') = '' OR tab.table_schema = ?)
ORDER BY tab.table_schema, tab.table_name
LIMIT ?;
`

func init() {
	if !tools.Register(kind, newConfig) {
		panic(fmt.Sprintf("tool kind %q already registered", kind))
	}
}

func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
	actual := Config{Name: name}
	if err := decoder.DecodeContext(ctx, &actual); err != nil {
		return nil, err
	}
	return actual, nil
}

type compatibleSource interface {
	MySQLPool() *sql.DB
}

// validate compatible sources are still compatible
var _ compatibleSource = &mysql.Source{}
var _ compatibleSource = &cloudsqlmysql.Source{}

var compatibleSources = [...]string{mysql.SourceKind, cloudsqlmysql.SourceKind}

type Config struct {
	Name         string   `yaml:"name" validate:"required"`
	Kind         string   `yaml:"kind" validate:"required"`
	Source       string   `yaml:"source" validate:"required"`
	Description  string   `yaml:"description" validate:"required"`
	AuthRequired []string `yaml:"authRequired"`
}

// validate interface
var _ tools.ToolConfig = Config{}

func (cfg Config) ToolConfigKind() string {
	return kind
}

func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
	// verify source exists
	rawS, ok := srcs[cfg.Source]
	if !ok {
		return nil, fmt.Errorf("no source named %q configured", cfg.Source)
	}

	// verify the source is compatible
	s, ok := rawS.(compatibleSource)
	if !ok {
		return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
	}

	allParameters := tools.Parameters{
		tools.NewStringParameterWithDefault("table_schema", "", "(Optional) The database where the check is to be performed. Check all tables visible to the current user if not specified"),
		tools.NewIntParameterWithDefault("limit", 50, "(Optional) Max rows to return, default is 50"),
	}
	mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, allParameters)

	// finish tool setup
	t := Tool{
		Name:         cfg.Name,
		Kind:         kind,
		AuthRequired: cfg.AuthRequired,
		Pool:         s.MySQLPool(),
		allParams:    allParameters,
		manifest:     tools.Manifest{Description: cfg.Description, Parameters: allParameters.Manifest(), AuthRequired: cfg.AuthRequired},
		mcpManifest:  mcpManifest,
	}
	return t, nil
}

// validate interface
var _ tools.Tool = Tool{}

type Tool struct {
	Name         string           `yaml:"name"`
	Kind         string           `yaml:"kind"`
	AuthRequired []string         `yaml:"authRequired"`
	allParams    tools.Parameters `yaml:"parameters"`

	Pool        *sql.DB
	manifest    tools.Manifest
	mcpManifest tools.McpManifest
}

func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
	paramsMap := params.AsMap()
	table_schema, ok := paramsMap["table_schema"].(string)
	if !ok {
		return nil, fmt.Errorf("invalid 'table_schema' parameter; expected a string")
	}
	limit, ok := paramsMap["limit"].(int)
	if !ok {
		return nil, fmt.Errorf("invalid 'limit' parameter; expected an integer")
	}

	// Log the query executed for debugging.
	logger, err := util.LoggerFromContext(ctx)
	if err != nil {
		return nil, fmt.Errorf("error getting logger: %s", err)
	}
	logger.DebugContext(ctx, "executing `%s` tool query: %s", kind, listTablesMissingUniqueIndexesStatement)

	results, err := t.Pool.QueryContext(ctx, listTablesMissingUniqueIndexesStatement, table_schema, table_schema, limit)
	if err != nil {
		return nil, fmt.Errorf("unable to execute query: %w", err)
	}
	defer results.Close()

	cols, err := results.Columns()
	if err != nil {
		return nil, fmt.Errorf("unable to retrieve rows column name: %w", err)
	}

	// create an array of values for each column, which can be re-used to scan each row
	rawValues := make([]any, len(cols))
	values := make([]any, len(cols))
	for i := range rawValues {
		values[i] = &rawValues[i]
	}

	colTypes, err := results.ColumnTypes()
	if err != nil {
		return nil, fmt.Errorf("unable to get column types: %w", err)
	}

	var out []any
	for results.Next() {
		err := results.Scan(values...)
		if err != nil {
			return nil, fmt.Errorf("unable to parse row: %w", err)
		}
		vMap := make(map[string]any)
		for i, name := range cols {
			val := rawValues[i]
			if val == nil {
				vMap[name] = nil
				continue
			}
			vMap[name], err = mysqlcommon.ConvertToType(colTypes[i], val)
			if err != nil {
				return nil, fmt.Errorf("errors encountered when converting values: %w", err)
			}
		}
		out = append(out, vMap)
	}
	if err := results.Err(); err != nil {
		return nil, fmt.Errorf("errors encountered during row iteration: %w", err)
	}
	return out, nil
}

func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
	return tools.ParseParams(t.allParams, data, claims)
}

func (t Tool) Manifest() tools.Manifest {
	return t.manifest
}

func (t Tool) McpManifest() tools.McpManifest {
	return t.mcpManifest
}

func (t Tool) Authorized(verifiedAuthServices []string) bool {
	return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

func (t Tool) RequiresClientAuthorization() bool {
	return false
}
```
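
For reference, a minimal `tools.yaml` entry wiring this tool to a MySQL source might look like the sketch below. This is illustrative only: the source name, tool name, and connection values are hypothetical, not taken from this repository.

```yaml
sources:
  my-mysql-source:
    kind: mysql
    host: 127.0.0.1
    port: 3306
    database: mydb
    user: myuser
    password: mypassword

tools:
  list_tables_missing_unique_indexes:
    kind: mysql-list-tables-missing-unique-indexes
    source: my-mysql-source
    description: Lists tables that have neither a primary key nor a unique index.
```

Both `mysql` and `cloud-sql-mysql` sources satisfy the `compatibleSource` interface that `Initialize` checks, so either kind can back this tool.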
-------------------------------------------------------------------------------- /tests/cloudsqlmssql/cloud_sql_mssql_integration_test.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package cloudsqlmssql import ( "context" "database/sql" "fmt" "net/url" "os" "regexp" "slices" "strings" "testing" "time" "cloud.google.com/go/cloudsqlconn" "cloud.google.com/go/cloudsqlconn/sqlserver/mssql" "github.com/google/uuid" "github.com/googleapis/genai-toolbox/internal/testutils" "github.com/googleapis/genai-toolbox/tests" ) var ( CloudSQLMSSQLSourceKind = "cloud-sql-mssql" CloudSQLMSSQLToolKind = "mssql-sql" CloudSQLMSSQLProject = os.Getenv("CLOUD_SQL_MSSQL_PROJECT") CloudSQLMSSQLRegion = os.Getenv("CLOUD_SQL_MSSQL_REGION") CloudSQLMSSQLInstance = os.Getenv("CLOUD_SQL_MSSQL_INSTANCE") CloudSQLMSSQLDatabase = os.Getenv("CLOUD_SQL_MSSQL_DATABASE") CloudSQLMSSQLIp = os.Getenv("CLOUD_SQL_MSSQL_IP") CloudSQLMSSQLUser = os.Getenv("CLOUD_SQL_MSSQL_USER") CloudSQLMSSQLPass = os.Getenv("CLOUD_SQL_MSSQL_PASS") ) func getCloudSQLMSSQLVars(t *testing.T) map[string]any { switch "" { case CloudSQLMSSQLProject: t.Fatal("'CLOUD_SQL_MSSQL_PROJECT' not set") case CloudSQLMSSQLRegion: t.Fatal("'CLOUD_SQL_MSSQL_REGION' not set") case CloudSQLMSSQLInstance: t.Fatal("'CLOUD_SQL_MSSQL_INSTANCE' not set") case CloudSQLMSSQLIp: t.Fatal("'CLOUD_SQL_MSSQL_IP' not set") case 
CloudSQLMSSQLDatabase: t.Fatal("'CLOUD_SQL_MSSQL_DATABASE' not set") case CloudSQLMSSQLUser: t.Fatal("'CLOUD_SQL_MSSQL_USER' not set") case CloudSQLMSSQLPass: t.Fatal("'CLOUD_SQL_MSSQL_PASS' not set") } return map[string]any{ "kind": CloudSQLMSSQLSourceKind, "project": CloudSQLMSSQLProject, "instance": CloudSQLMSSQLInstance, "ipAddress": CloudSQLMSSQLIp, "region": CloudSQLMSSQLRegion, "database": CloudSQLMSSQLDatabase, "user": CloudSQLMSSQLUser, "password": CloudSQLMSSQLPass, } } // Copied over from cloud_sql_mssql.go func initCloudSQLMSSQLConnection(project, region, instance, ipAddress, ipType, user, pass, dbname string) (*sql.DB, error) { // Create dsn query := fmt.Sprintf("database=%s&cloudsql=%s:%s:%s", dbname, project, region, instance) url := &url.URL{ Scheme: "sqlserver", User: url.UserPassword(user, pass), Host: ipAddress, RawQuery: query, } // Get dial options dialOpts, err := tests.GetCloudSQLDialOpts(ipType) if err != nil { return nil, err } // Register sql server driver if !slices.Contains(sql.Drivers(), "cloudsql-sqlserver-driver") { _, err := mssql.RegisterDriver("cloudsql-sqlserver-driver", cloudsqlconn.WithDefaultDialOptions(dialOpts...)) if err != nil { return nil, err } } // Open database connection db, err := sql.Open( "cloudsql-sqlserver-driver", url.String(), ) if err != nil { return nil, err } return db, nil } func TestCloudSQLMSSQLToolEndpoints(t *testing.T) { sourceConfig := getCloudSQLMSSQLVars(t) ctx, cancel := context.WithTimeout(context.Background(), time.Minute) defer cancel() var args []string db, err := initCloudSQLMSSQLConnection(CloudSQLMSSQLProject, CloudSQLMSSQLRegion, CloudSQLMSSQLInstance, CloudSQLMSSQLIp, "public", CloudSQLMSSQLUser, CloudSQLMSSQLPass, CloudSQLMSSQLDatabase) if err != nil { t.Fatalf("unable to create Cloud SQL connection pool: %s", err) } // cleanup test environment tests.CleanupMSSQLTables(t, ctx, db) // create table name with UUID tableNameParam := "param_table_" + strings.ReplaceAll(uuid.New().String(), "-", 
"") tableNameAuth := "auth_table_" + strings.ReplaceAll(uuid.New().String(), "-", "") tableNameTemplateParam := "template_param_table_" + strings.ReplaceAll(uuid.New().String(), "-", "") // set up data for param tool createParamTableStmt, insertParamTableStmt, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, paramTestParams := tests.GetMSSQLParamToolInfo(tableNameParam) teardownTable1 := tests.SetupMsSQLTable(t, ctx, db, createParamTableStmt, insertParamTableStmt, tableNameParam, paramTestParams) defer teardownTable1(t) // set up data for auth tool createAuthTableStmt, insertAuthTableStmt, authToolStmt, authTestParams := tests.GetMSSQLAuthToolInfo(tableNameAuth) teardownTable2 := tests.SetupMsSQLTable(t, ctx, db, createAuthTableStmt, insertAuthTableStmt, tableNameAuth, authTestParams) defer teardownTable2(t) // Write config into a file and pass it to command toolsFile := tests.GetToolsConfig(sourceConfig, CloudSQLMSSQLToolKind, paramToolStmt, idParamToolStmt, nameParamToolStmt, arrayToolStmt, authToolStmt) toolsFile = tests.AddMSSQLExecuteSqlConfig(t, toolsFile) tmplSelectCombined, tmplSelectFilterCombined := tests.GetMSSQLTmplToolStatement() toolsFile = tests.AddTemplateParamConfig(t, toolsFile, CloudSQLMSSQLToolKind, tmplSelectCombined, tmplSelectFilterCombined, "") toolsFile = tests.AddMSSQLPrebuiltToolConfig(t, toolsFile) cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...) 
if err != nil { t.Fatalf("command initialization returned an error: %s", err) } defer cleanup() waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second) defer cancel() out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out) if err != nil { t.Logf("toolbox command logs: \n%s", out) t.Fatalf("toolbox didn't start successfully: %s", err) } // Get configs for tests select1Want, mcpMyFailToolWant, createTableStatement, mcpSelect1Want := tests.GetMSSQLWants() // Run tests tests.RunToolGetTest(t) tests.RunToolInvokeTest(t, select1Want, tests.DisableArrayTest()) tests.RunMCPToolCallMethod(t, mcpMyFailToolWant, mcpSelect1Want) tests.RunExecuteSqlToolInvokeTest(t, createTableStatement, select1Want) tests.RunToolInvokeWithTemplateParameters(t, tableNameTemplateParam) // Run specific MSSQL tool tests tests.RunMSSQLListTablesTest(t, tableNameParam, tableNameAuth) } // Test connection with different IP type func TestCloudSQLMSSQLIpConnection(t *testing.T) { sourceConfig := getCloudSQLMSSQLVars(t) tcs := []struct { name string ipType string }{ { name: "public ip", ipType: "public", }, { name: "private ip", ipType: "private", }, } for _, tc := range tcs { t.Run(tc.name, func(t *testing.T) { sourceConfig["ipType"] = tc.ipType err := tests.RunSourceConnectionTest(t, sourceConfig, CloudSQLMSSQLToolKind) if err != nil { t.Fatalf("Connection test failure: %s", err) } }) } } ``` -------------------------------------------------------------------------------- /tests/alloydb/alloydb_wait_for_operation_test.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package alloydb import ( "bytes" "context" "encoding/json" "fmt" "io" "net/http" "net/http/httptest" "net/url" "reflect" "regexp" "strings" "sync" "testing" "time" "github.com/googleapis/genai-toolbox/internal/testutils" "github.com/googleapis/genai-toolbox/tests" _ "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydbwaitforoperation" ) var ( waitToolKind = "alloydb-wait-for-operation" ) type waitForOperationTransport struct { transport http.RoundTripper url *url.URL } func (t *waitForOperationTransport) RoundTrip(req *http.Request) (*http.Response, error) { if strings.HasPrefix(req.URL.String(), "https://alloydb.googleapis.com") { req.URL.Scheme = t.url.Scheme req.URL.Host = t.url.Host } return t.transport.RoundTrip(req) } type operation struct { Name string `json:"name"` Done bool `json:"done"` Response any `json:"response,omitempty"` Error *struct { Code int `json:"code"` Message string `json:"message"` } `json:"error,omitempty"` } type handler struct { mu sync.Mutex operations map[string]*operation t *testing.T } func (h *handler) ServeHTTP(w http.ResponseWriter, r *http.Request) { h.mu.Lock() defer h.mu.Unlock() if !strings.Contains(r.UserAgent(), "genai-toolbox/") { h.t.Errorf("User-Agent header not found") } // The format is projects/{project}/locations/{location}/operations/{operation} // The tool will call something like /v1/projects/p1/locations/l1/operations/op1 if match, _ := regexp.MatchString("/v1/projects/.*/locations/.*/operations/.*", r.URL.Path); match { parts := regexp.MustCompile("/").Split(r.URL.Path, -1) opName := 
parts[len(parts)-1] op, ok := h.operations[opName] if !ok { http.NotFound(w, r) return } if !op.Done { op.Done = true } w.Header().Set("Content-Type", "application/json") if err := json.NewEncoder(w).Encode(op); err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) } } else { http.NotFound(w, r) } } func TestWaitToolEndpoints(t *testing.T) { h := &handler{ operations: map[string]*operation{ "op1": {Name: "op1", Done: false, Response: "success"}, "op2": {Name: "op2", Done: false, Error: &struct { Code int `json:"code"` Message string `json:"message"` }{Code: 1, Message: "failed"}}, }, t: t, } server := httptest.NewServer(h) defer server.Close() serverURL, err := url.Parse(server.URL) if err != nil { t.Fatalf("failed to parse server URL: %v", err) } originalTransport := http.DefaultClient.Transport if originalTransport == nil { originalTransport = http.DefaultTransport } http.DefaultClient.Transport = &waitForOperationTransport{ transport: originalTransport, url: serverURL, } t.Cleanup(func() { http.DefaultClient.Transport = originalTransport }) ctx, cancel := context.WithTimeout(context.Background(), time.Minute) defer cancel() var args []string toolsFile := getWaitToolsConfig() cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...) 
if err != nil { t.Fatalf("command initialization returned an error: %s", err) } defer cleanup() waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second) defer cancel() out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out) if err != nil { t.Logf("toolbox command logs: \n%s", out) t.Fatalf("toolbox didn't start successfully: %s", err) } tcs := []struct { name string toolName string body string want string expectError bool wantSubstring bool }{ { name: "successful operation", toolName: "wait-for-op1", body: `{"project": "p1", "location": "l1", "operation": "op1"}`, want: `{"name":"op1","done":true,"response":"success"}`, }, { name: "failed operation", toolName: "wait-for-op2", body: `{"project": "p1", "location": "l1", "operation": "op2"}`, expectError: true, }, } for _, tc := range tcs { t.Run(tc.name, func(t *testing.T) { api := fmt.Sprintf("http://127.0.0.1:5000/api/tool/%s/invoke", tc.toolName) req, err := http.NewRequest(http.MethodPost, api, bytes.NewBufferString(tc.body)) if err != nil { t.Fatalf("unable to create request: %s", err) } req.Header.Add("Content-type", "application/json") resp, err := http.DefaultClient.Do(req) if err != nil { t.Fatalf("unable to send request: %s", err) } defer resp.Body.Close() if tc.expectError { if resp.StatusCode == http.StatusOK { t.Fatal("expected error but got status 200") } return } if resp.StatusCode != http.StatusOK { bodyBytes, _ := io.ReadAll(resp.Body) t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes)) } var result struct { Result string `json:"result"` } if err := json.NewDecoder(resp.Body).Decode(&result); err != nil { t.Fatalf("failed to decode response: %v", err) } if tc.wantSubstring { if !bytes.Contains([]byte(result.Result), []byte(tc.want)) { t.Fatalf("unexpected result: got %q, want substring %q", result.Result, tc.want) } return } // The result is a JSON-encoded string, so we need to unmarshal it twice. 
var tempString string if err := json.Unmarshal([]byte(result.Result), &tempString); err != nil { t.Fatalf("failed to unmarshal result string: %v", err) } var got, want map[string]any if err := json.Unmarshal([]byte(tempString), &got); err != nil { t.Fatalf("failed to unmarshal result: %v", err) } if err := json.Unmarshal([]byte(tc.want), &want); err != nil { t.Fatalf("failed to unmarshal want: %v", err) } if !reflect.DeepEqual(got, want) { t.Fatalf("unexpected result: got %+v, want %+v", got, want) } }) } } func getWaitToolsConfig() map[string]any { return map[string]any{ "sources": map[string]any{ "my-alloydb-source": map[string]any{ "kind": "alloydb-admin", }, }, "tools": map[string]any{ "wait-for-op1": map[string]any{ "kind": waitToolKind, "source": "my-alloydb-source", "description": "wait for op1", }, "wait-for-op2": map[string]any{ "kind": waitToolKind, "source": "my-alloydb-source", "description": "wait for op2", }, }, } } ``` -------------------------------------------------------------------------------- /internal/tools/alloydb/alloydbcreateinstance/alloydbcreateinstance.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
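// NOTE: The sketch below is illustrative (all field values are hypothetical,
// not taken from this repository). A typical flow for the tool defined in
// this file looks like:
//
//	request:  {"project": "my-project", "location": "us-central1",
//	           "cluster": "my-cluster", "instance": "replica-pool",
//	           "instanceType": "READ_POOL", "nodeCount": 2}
//	response: a long-running operation; its operation ID can then be polled
//	          with the alloydb-wait-for-operation tool exercised above.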
package alloydbcreateinstance import ( "context" "fmt" yaml "github.com/goccy/go-yaml" "github.com/googleapis/genai-toolbox/internal/sources" alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin" "github.com/googleapis/genai-toolbox/internal/tools" "google.golang.org/api/alloydb/v1" ) const kind string = "alloydb-create-instance" func init() { if !tools.Register(kind, newConfig) { panic(fmt.Sprintf("tool kind %q already registered", kind)) } } func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) { actual := Config{Name: name} if err := decoder.DecodeContext(ctx, &actual); err != nil { return nil, err } return actual, nil } // Configuration for the create-instance tool. type Config struct { Name string `yaml:"name" validate:"required"` Kind string `yaml:"kind" validate:"required"` Source string `yaml:"source" validate:"required"` Description string `yaml:"description"` AuthRequired []string `yaml:"authRequired"` } // validate interface var _ tools.ToolConfig = Config{} // ToolConfigKind returns the kind of the tool. func (cfg Config) ToolConfigKind() string { return kind } // Initialize initializes the tool from the configuration. func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) { rawS, ok := srcs[cfg.Source] if !ok { return nil, fmt.Errorf("source %q not found", cfg.Source) } s, ok := rawS.(*alloydbadmin.Source) if !ok { return nil, fmt.Errorf("invalid source for %q tool: source kind must be `alloydb-admin`", kind) } allParameters := tools.Parameters{ tools.NewStringParameter("project", "The GCP project ID."), tools.NewStringParameter("location", "The location of the cluster (e.g., 'us-central1')."), tools.NewStringParameter("cluster", "The ID of the cluster to create the instance in."), tools.NewStringParameter("instance", "A unique ID for the new AlloyDB instance."), tools.NewStringParameterWithDefault("instanceType", "PRIMARY", "The type of instance to create. 
Valid values are: PRIMARY and READ_POOL. Default is PRIMARY"), tools.NewStringParameterWithRequired("displayName", "An optional, user-friendly name for the instance.", false), tools.NewIntParameterWithDefault("nodeCount", 1, "The number of nodes in the read pool. Required only if instanceType is READ_POOL. Default is 1."), } paramManifest := allParameters.Manifest() description := cfg.Description if description == "" { description = "Creates a new AlloyDB instance (PRIMARY or READ_POOL) within a cluster. This is a long-running operation. This will return operation id to be used by get operations tool. Take all parameters from user in one go." } mcpManifest := tools.GetMcpManifest(cfg.Name, description, cfg.AuthRequired, allParameters) return Tool{ Name: cfg.Name, Kind: kind, Source: s, AllParams: allParameters, manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired}, mcpManifest: mcpManifest, }, nil } // Tool represents the create-instance tool. type Tool struct { Name string `yaml:"name"` Kind string `yaml:"kind"` Description string `yaml:"description"` Source *alloydbadmin.Source AllParams tools.Parameters `yaml:"allParams"` manifest tools.Manifest mcpManifest tools.McpManifest } // Invoke executes the tool's logic. 
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) { paramsMap := params.AsMap() project, ok := paramsMap["project"].(string) if !ok || project == "" { return nil, fmt.Errorf("invalid or missing 'project' parameter; expected a non-empty string") } location, ok := paramsMap["location"].(string) if !ok || location == "" { return nil, fmt.Errorf("invalid or missing 'location' parameter; expected a non-empty string") } cluster, ok := paramsMap["cluster"].(string) if !ok || cluster == "" { return nil, fmt.Errorf("invalid or missing 'cluster' parameter; expected a non-empty string") } instanceID, ok := paramsMap["instance"].(string) if !ok || instanceID == "" { return nil, fmt.Errorf("invalid or missing 'instance' parameter; expected a non-empty string") } instanceType, ok := paramsMap["instanceType"].(string) if !ok || (instanceType != "READ_POOL" && instanceType != "PRIMARY") { return nil, fmt.Errorf("invalid 'instanceType' parameter; expected 'PRIMARY' or 'READ_POOL'") } service, err := t.Source.GetService(ctx, string(accessToken)) if err != nil { return nil, err } urlString := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, cluster) // Build the request body using the type-safe Instance struct. instance := &alloydb.Instance{ InstanceType: instanceType, NetworkConfig: &alloydb.InstanceNetworkConfig{ EnablePublicIp: true, }, DatabaseFlags: map[string]string{ "password.enforce_complexity": "on", }, } if displayName, ok := paramsMap["displayName"].(string); ok && displayName != "" { instance.DisplayName = displayName } if instanceType == "READ_POOL" { nodeCount, ok := paramsMap["nodeCount"].(int) if !ok { return nil, fmt.Errorf("invalid 'nodeCount' parameter; expected an integer for READ_POOL") } instance.ReadPoolConfig = &alloydb.ReadPoolConfig{ NodeCount: int64(nodeCount), } } // The Create API returns a long-running operation. 
resp, err := service.Projects.Locations.Clusters.Instances.Create(urlString, instance).InstanceId(instanceID).Do() if err != nil { return nil, fmt.Errorf("error creating AlloyDB instance: %w", err) } return resp, nil } // ParseParams parses the parameters for the tool. func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) { return tools.ParseParams(t.AllParams, data, claims) } // Manifest returns the tool's manifest. func (t Tool) Manifest() tools.Manifest { return t.manifest } // McpManifest returns the tool's MCP manifest. func (t Tool) McpManifest() tools.McpManifest { return t.mcpManifest } // Authorized checks if the tool is authorized. func (t Tool) Authorized(verifiedAuthServices []string) bool { return true } func (t Tool) RequiresClientAuthorization() bool { return t.Source.UseClientAuthorization() } ``` -------------------------------------------------------------------------------- /internal/tools/firestore/util/converter.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
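// NOTE: Illustrative sketch (the document contents are hypothetical).
// JSONToFirestoreValue below accepts Firestore's typed-value JSON shape,
// where each value is wrapped in a single-key map naming its type, e.g.:
//
//	{
//	  "mapValue": {
//	    "fields": {
//	      "age":     {"integerValue": "42"},
//	      "height":  {"doubleValue": 1.82},
//	      "created": {"timestampValue": "2025-01-01T00:00:00Z"},
//	      "home":    {"geoPointValue": {"latitude": 35.7, "longitude": 139.7}}
//	    }
//	  }
//	}
//
// FirestoreValueToJSON performs the reverse direction, stripping the type
// wrappers and emitting plain JSON values.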
package util import ( "encoding/base64" "fmt" "strconv" "strings" "time" "cloud.google.com/go/firestore" "google.golang.org/genproto/googleapis/type/latlng" ) // JSONToFirestoreValue converts a JSON value with type information to a Firestore-compatible value // The input should be a map with a single key indicating the type (e.g., "stringValue", "integerValue") // If a client is provided, referenceValue types will be converted to *firestore.DocumentRef func JSONToFirestoreValue(value interface{}, client *firestore.Client) (interface{}, error) { if value == nil { return nil, nil } switch v := value.(type) { case map[string]interface{}: // Check for typed values if len(v) == 1 { for key, val := range v { switch key { case "nullValue": return nil, nil case "booleanValue": return val, nil case "stringValue": return val, nil case "integerValue": // Convert to int64 switch num := val.(type) { case float64: return int64(num), nil case int: return int64(num), nil case int64: return num, nil case string: // Parse string representation using strconv for better performance i, err := strconv.ParseInt(strings.TrimSpace(num), 10, 64) if err != nil { return nil, fmt.Errorf("invalid integer value: %v", val) } return i, nil } return nil, fmt.Errorf("invalid integer value: %v", val) case "doubleValue": // Convert to float64 switch num := val.(type) { case float64: return num, nil case int: return float64(num), nil case int64: return float64(num), nil } return nil, fmt.Errorf("invalid double value: %v", val) case "bytesValue": // Decode base64 string to bytes if str, ok := val.(string); ok { return base64.StdEncoding.DecodeString(str) } return nil, fmt.Errorf("bytes value must be a base64 encoded string") case "timestampValue": // Parse timestamp if str, ok := val.(string); ok { t, err := time.Parse(time.RFC3339Nano, str) if err != nil { return nil, fmt.Errorf("invalid timestamp format: %w", err) } return t, nil } return nil, fmt.Errorf("timestamp value must be a string") case 
"geoPointValue": // Convert to LatLng if geoMap, ok := val.(map[string]interface{}); ok { lat, latOk := geoMap["latitude"].(float64) lng, lngOk := geoMap["longitude"].(float64) if latOk && lngOk { return &latlng.LatLng{ Latitude: lat, Longitude: lng, }, nil } } return nil, fmt.Errorf("invalid geopoint value format") case "arrayValue": // Convert array if arrayMap, ok := val.(map[string]interface{}); ok { if values, ok := arrayMap["values"].([]interface{}); ok { result := make([]interface{}, len(values)) for i, item := range values { converted, err := JSONToFirestoreValue(item, client) if err != nil { return nil, fmt.Errorf("array item %d: %w", i, err) } result[i] = converted } return result, nil } } return nil, fmt.Errorf("invalid array value format") case "mapValue": // Convert map if mapMap, ok := val.(map[string]interface{}); ok { if fields, ok := mapMap["fields"].(map[string]interface{}); ok { result := make(map[string]interface{}) for k, v := range fields { converted, err := JSONToFirestoreValue(v, client) if err != nil { return nil, fmt.Errorf("map field %q: %w", k, err) } result[k] = converted } return result, nil } } return nil, fmt.Errorf("invalid map value format") case "referenceValue": // Convert to DocumentRef if client is provided if strVal, ok := val.(string); ok { if client != nil && isValidDocumentPath(strVal) { return client.Doc(strVal), nil } // Return the path as string if no client or invalid path return strVal, nil } return nil, fmt.Errorf("reference value must be a string") default: // If not a typed value, treat as regular map return convertPlainMap(v, client) } } } // Regular map without type annotation return convertPlainMap(v, client) default: // Plain values (for backward compatibility) return value, nil } } // convertPlainMap converts a plain map to Firestore format func convertPlainMap(m map[string]interface{}, client *firestore.Client) (map[string]interface{}, error) { result := make(map[string]interface{}) for k, v := range m { 
converted, err := JSONToFirestoreValue(v, client) if err != nil { return nil, fmt.Errorf("field %q: %w", k, err) } result[k] = converted } return result, nil } // FirestoreValueToJSON converts a Firestore value to a simplified JSON representation // This removes type information and returns plain values func FirestoreValueToJSON(value interface{}) interface{} { if value == nil { return nil } switch v := value.(type) { case time.Time: return v.Format(time.RFC3339Nano) case *latlng.LatLng: return map[string]interface{}{ "latitude": v.Latitude, "longitude": v.Longitude, } case []byte: return base64.StdEncoding.EncodeToString(v) case []interface{}: result := make([]interface{}, len(v)) for i, item := range v { result[i] = FirestoreValueToJSON(item) } return result case map[string]interface{}: result := make(map[string]interface{}) for k, val := range v { result[k] = FirestoreValueToJSON(val) } return result case *firestore.DocumentRef: return v.Path default: return value } } // isValidDocumentPath checks if a string is a valid Firestore document path // Valid paths have an even number of segments (collection/doc/collection/doc...) 
func isValidDocumentPath(path string) bool { if path == "" { return false } // Split the path by '/' and check if it has an even number of segments segments := splitPath(path) return len(segments) > 0 && len(segments)%2 == 0 } // splitPath splits a path by '/' while handling empty segments correctly func splitPath(path string) []string { rawSegments := strings.Split(path, "/") var segments []string for _, s := range rawSegments { if s != "" { segments = append(segments, s) } } return segments } ``` -------------------------------------------------------------------------------- /internal/prebuiltconfigs/prebuiltconfigs_test.go: -------------------------------------------------------------------------------- ```go // Copyright 2024 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package prebuiltconfigs import ( "testing" "github.com/google/go-cmp/cmp" ) var expectedToolSources = []string{ "alloydb-postgres-admin", "alloydb-postgres-observability", "alloydb-postgres", "bigquery", "clickhouse", "cloud-sql-mssql-admin", "cloud-sql-mssql-observability", "cloud-sql-mssql", "cloud-sql-mysql-admin", "cloud-sql-mysql-observability", "cloud-sql-mysql", "cloud-sql-postgres-admin", "cloud-sql-postgres-observability", "cloud-sql-postgres", "dataplex", "firestore", "looker-conversational-analytics", "looker", "mssql", "mysql", "neo4j", "oceanbase", "postgres", "spanner-postgres", "spanner", "sqlite", } func TestGetPrebuiltSources(t *testing.T) { t.Run("Test Get Prebuilt Sources", func(t *testing.T) { sources := GetPrebuiltSources() if diff := cmp.Diff(expectedToolSources, sources); diff != "" { t.Fatalf("incorrect sources parse: diff %v", diff) } }) } func TestLoadPrebuiltToolYAMLs(t *testing.T) { test_name := "test load prebuilt configs" expectedKeys := expectedToolSources t.Run(test_name, func(t *testing.T) { configsMap, keys, err := loadPrebuiltToolYAMLs() if err != nil { t.Fatalf("unexpected error: %s", err) } foundExpectedKeys := make(map[string]bool) if len(expectedKeys) != len(configsMap) { t.Fatalf("Failed to load all prebuilt tools.") } for _, expectedKey := range expectedKeys { _, ok := configsMap[expectedKey] if !ok { t.Fatalf("Prebuilt tools for '%s' was NOT FOUND in the loaded map.", expectedKey) } else { foundExpectedKeys[expectedKey] = true // Mark as found } } t.Log(expectedKeys) t.Log(keys) if diff := cmp.Diff(expectedKeys, keys); diff != "" { t.Fatalf("incorrect sources parse: diff %v", diff) } }) } func TestGetPrebuiltTool(t *testing.T) { alloydb_admin_config, _ := Get("alloydb-postgres-admin") alloydb_observability_config, _ := Get("alloydb-postgres-observability") alloydb_config, _ := Get("alloydb-postgres") bigquery_config, _ := Get("bigquery") clickhouse_config, _ := Get("clickhouse") cloudsqlpg_observability_config, _ := 
Get("cloud-sql-postgres-observability") cloudsqlpg_config, _ := Get("cloud-sql-postgres") cloudsqlpg_admin_config, _ := Get("cloud-sql-postgres-admin") cloudsqlmysql_admin_config, _ := Get("cloud-sql-mysql-admin") cloudsqlmssql_admin_config, _ := Get("cloud-sql-mssql-admin") cloudsqlmysql_observability_config, _ := Get("cloud-sql-mysql-observability") cloudsqlmysql_config, _ := Get("cloud-sql-mysql") cloudsqlmssql_observability_config, _ := Get("cloud-sql-mssql-observability") cloudsqlmssql_config, _ := Get("cloud-sql-mssql") dataplex_config, _ := Get("dataplex") firestoreconfig, _ := Get("firestore") looker_config, _ := Get("looker") lookerca_config, _ := Get("looker-conversational-analytics") mysql_config, _ := Get("mysql") mssql_config, _ := Get("mssql") oceanbase_config, _ := Get("oceanbase") postgresconfig, _ := Get("postgres") spanner_config, _ := Get("spanner") spannerpg_config, _ := Get("spanner-postgres") sqlite_config, _ := Get("sqlite") neo4jconfig, _ := Get("neo4j") if len(alloydb_admin_config) <= 0 { t.Fatalf("unexpected error: could not fetch alloydb admin prebuilt tools yaml") } if len(alloydb_config) <= 0 { t.Fatalf("unexpected error: could not fetch alloydb prebuilt tools yaml") } if len(alloydb_observability_config) <= 0 { t.Fatalf("unexpected error: could not fetch alloydb-observability prebuilt tools yaml") } if len(bigquery_config) <= 0 { t.Fatalf("unexpected error: could not fetch bigquery prebuilt tools yaml") } if len(clickhouse_config) <= 0 { t.Fatalf("unexpected error: could not fetch clickhouse prebuilt tools yaml") } if len(cloudsqlpg_observability_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql pg observability prebuilt tools yaml") } if len(cloudsqlpg_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql pg prebuilt tools yaml") } if len(cloudsqlpg_admin_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql pg admin prebuilt tools yaml") } if len(cloudsqlmysql_admin_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql mysql admin prebuilt tools yaml") } if len(cloudsqlmysql_observability_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql mysql observability prebuilt tools yaml") } if len(cloudsqlmysql_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql mysql prebuilt tools yaml") } if len(cloudsqlmssql_observability_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql mssql observability prebuilt tools yaml") } if len(cloudsqlmssql_admin_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql mssql admin prebuilt tools yaml") } if len(cloudsqlmssql_config) <= 0 { t.Fatalf("unexpected error: could not fetch cloud sql mssql prebuilt tools yaml") } if len(dataplex_config) <= 0 { t.Fatalf("unexpected error: could not fetch dataplex prebuilt tools yaml") } if len(firestoreconfig) <= 0 { t.Fatalf("unexpected error: could not fetch firestore prebuilt tools yaml") } if len(looker_config) <= 0 { t.Fatalf("unexpected error: could not fetch looker prebuilt tools yaml") } if len(lookerca_config) <= 0 { t.Fatalf("unexpected error: could not fetch looker-conversational-analytics prebuilt tools yaml") } if len(mysql_config) <= 0 { t.Fatalf("unexpected error: could not fetch mysql prebuilt tools yaml") } if len(mssql_config) <= 0 { t.Fatalf("unexpected error: could not fetch mssql prebuilt tools yaml") } if len(oceanbase_config) <= 0 { t.Fatalf("unexpected error: could not fetch oceanbase prebuilt tools yaml") } if len(postgresconfig) <= 0 { t.Fatalf("unexpected error: could not fetch postgres prebuilt tools yaml") } if len(spanner_config) <= 0 { t.Fatalf("unexpected error: could not fetch spanner prebuilt tools yaml") } if len(spannerpg_config) <= 0 { t.Fatalf("unexpected error: could not fetch spanner pg prebuilt tools yaml") } if len(sqlite_config) <= 0 { t.Fatalf("unexpected error: could not fetch sqlite prebuilt tools yaml") } if len(neo4jconfig) <= 0 { t.Fatalf("unexpected 
error: could not fetch neo4j prebuilt tools yaml") } } func TestFailGetPrebuiltTool(t *testing.T) { _, err := Get("sql") if err == nil { t.Fatalf("expected an error but got nil.") } } ``` -------------------------------------------------------------------------------- /internal/server/common_test.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package server import ( "context" "fmt" "io" "net/http" "net/http/httptest" "os" "testing" "github.com/go-chi/chi/v5" "github.com/googleapis/genai-toolbox/internal/log" "github.com/googleapis/genai-toolbox/internal/telemetry" "github.com/googleapis/genai-toolbox/internal/tools" ) // fakeVersionString is used as a temporary version string in tests const fakeVersionString = "0.0.0" var _ tools.Tool = &MockTool{} // MockTool is used to mock tools in tests type MockTool struct { Name string Description string Params []tools.Parameter manifest tools.Manifest unauthorized bool requiresClientAuthrorization bool } func (t MockTool) Invoke(context.Context, tools.ParamValues, tools.AccessToken) (any, error) { mock := []any{t.Name} return mock, nil } // claims is a map of user info decoded from an auth token func (t MockTool) ParseParams(data map[string]any, claimsMap map[string]map[string]any) (tools.ParamValues, error) { return tools.ParseParams(t.Params, data, claimsMap) } func (t MockTool) Manifest() tools.Manifest {
pMs := make([]tools.ParameterManifest, 0, len(t.Params)) for _, p := range t.Params { pMs = append(pMs, p.Manifest()) } return tools.Manifest{Description: t.Description, Parameters: pMs} } func (t MockTool) Authorized(verifiedAuthServices []string) bool { // defaulted to true return !t.unauthorized } func (t MockTool) RequiresClientAuthorization() bool { // defaulted to false return t.requiresClientAuthrorization } func (t MockTool) McpManifest() tools.McpManifest { properties := make(map[string]tools.ParameterMcpManifest) required := make([]string, 0) authParams := make(map[string][]string) for _, p := range t.Params { name := p.GetName() paramManifest, authParamList := p.McpManifest() properties[name] = paramManifest required = append(required, name) if len(authParamList) > 0 { authParams[name] = authParamList } } toolsSchema := tools.McpToolsSchema{ Type: "object", Properties: properties, Required: required, } mcpManifest := tools.McpManifest{ Name: t.Name, Description: t.Description, InputSchema: toolsSchema, } if len(authParams) > 0 { mcpManifest.Metadata = map[string]any{ "toolbox/authParams": authParams, } } return mcpManifest } var tool1 = MockTool{ Name: "no_params", Params: []tools.Parameter{}, } var tool2 = MockTool{ Name: "some_params", Params: tools.Parameters{ tools.NewIntParameter("param1", "This is the first parameter."), tools.NewIntParameter("param2", "This is the second parameter."), }, } var tool3 = MockTool{ Name: "array_param", Description: "some description", Params: tools.Parameters{ tools.NewArrayParameter("my_array", "this param is an array of strings", tools.NewStringParameter("my_string", "string item")), }, } var tool4 = MockTool{ Name: "unauthorized_tool", Params: []tools.Parameter{}, unauthorized: true, } var tool5 = MockTool{ Name: "require_client_auth_tool", Params: []tools.Parameter{}, requiresClientAuthrorization: true, } // setUpResources sets up resources to test against func setUpResources(t *testing.T, mockTools []MockTool)
(map[string]tools.Tool, map[string]tools.Toolset) { toolsMap := make(map[string]tools.Tool) var allTools []string for _, tool := range mockTools { tool.manifest = tool.Manifest() toolsMap[tool.Name] = tool allTools = append(allTools, tool.Name) } toolsets := make(map[string]tools.Toolset) for name, l := range map[string][]string{ "": allTools, "tool1_only": {allTools[0]}, "tool2_only": {allTools[1]}, } { tc := tools.ToolsetConfig{Name: name, ToolNames: l} m, err := tc.Initialize(fakeVersionString, toolsMap) if err != nil { t.Fatalf("unable to initialize toolset %q: %s", name, err) } toolsets[name] = m } return toolsMap, toolsets } // setUpServer create a new server with tools and toolsets that are given func setUpServer(t *testing.T, router string, tools map[string]tools.Tool, toolsets map[string]tools.Toolset) (chi.Router, func()) { ctx, cancel := context.WithCancel(context.Background()) testLogger, err := log.NewStdLogger(os.Stdout, os.Stderr, "info") if err != nil { t.Fatalf("unable to initialize logger: %s", err) } otelShutdown, err := telemetry.SetupOTel(ctx, fakeVersionString, "", false, "toolbox") if err != nil { t.Fatalf("unable to setup otel: %s", err) } instrumentation, err := telemetry.CreateTelemetryInstrumentation(fakeVersionString) if err != nil { t.Fatalf("unable to create custom metrics: %s", err) } sseManager := newSseManager(ctx) resourceManager := NewResourceManager(nil, nil, tools, toolsets) server := Server{ version: fakeVersionString, logger: testLogger, instrumentation: instrumentation, sseManager: sseManager, ResourceMgr: resourceManager, } var r chi.Router switch router { case "api": r, err = apiRouter(&server) if err != nil { t.Fatalf("unable to initialize api router: %s", err) } case "mcp": r, err = mcpRouter(&server) if err != nil { t.Fatalf("unable to initialize mcp router: %s", err) } default: t.Fatalf("unknown router") } shutdown := func() { // cancel context cancel() // shutdown otel err := otelShutdown(ctx) if err != nil { 
t.Fatalf("error shutting down OpenTelemetry: %s", err) } } return r, shutdown } func runServer(r chi.Router, tls bool) *httptest.Server { var ts *httptest.Server if tls { ts = httptest.NewTLSServer(r) } else { ts = httptest.NewServer(r) } return ts } func runRequest(ts *httptest.Server, method, path string, body io.Reader, header map[string]string) (*http.Response, []byte, error) { req, err := http.NewRequest(method, ts.URL+path, body) if err != nil { return nil, nil, fmt.Errorf("unable to create request: %w", err) } req.Header.Set("Content-Type", "application/json") for k, v := range header { req.Header.Set(k, v) } resp, err := http.DefaultClient.Do(req) if err != nil { return nil, nil, fmt.Errorf("unable to send request: %w", err) } respBody, err := io.ReadAll(resp.Body) if err != nil { return nil, nil, fmt.Errorf("unable to read request body: %w", err) } defer resp.Body.Close() return resp, respBody, nil } ``` -------------------------------------------------------------------------------- /internal/tools/cloudsqlmssql/cloudsqlmssqlcreateinstance/cloudsqlmssqlcreateinstance.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package cloudsqlmssqlcreateinstance import ( "context" "fmt" "strings" yaml "github.com/goccy/go-yaml" "github.com/googleapis/genai-toolbox/internal/sources" "github.com/googleapis/genai-toolbox/internal/sources/cloudsqladmin" "github.com/googleapis/genai-toolbox/internal/tools" sqladmin "google.golang.org/api/sqladmin/v1" ) const kind string = "cloud-sql-mssql-create-instance" func init() { if !tools.Register(kind, newConfig) { panic(fmt.Sprintf("tool kind %q already registered", kind)) } } func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) { actual := Config{Name: name} if err := decoder.DecodeContext(ctx, &actual); err != nil { return nil, err } return actual, nil } // Config defines the configuration for the create-instances tool. type Config struct { Name string `yaml:"name" validate:"required"` Kind string `yaml:"kind" validate:"required"` Description string `yaml:"description"` Source string `yaml:"source" validate:"required"` AuthRequired []string `yaml:"authRequired"` } // validate interface var _ tools.ToolConfig = Config{} // ToolConfigKind returns the kind of the tool. func (cfg Config) ToolConfigKind() string { return kind } // Initialize initializes the tool from the configuration. func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) { rawS, ok := srcs[cfg.Source] if !ok { return nil, fmt.Errorf("no source named %q configured", cfg.Source) } s, ok := rawS.(*cloudsqladmin.Source) if !ok { return nil, fmt.Errorf("invalid source for %q tool: source kind must be `cloud-sql-admin`", kind) } allParameters := tools.Parameters{ tools.NewStringParameter("project", "The project ID"), tools.NewStringParameter("name", "The name of the instance"), tools.NewStringParameterWithDefault("databaseVersion", "SQLSERVER_2022_STANDARD", "The database version for SQL Server. 
If not specified, defaults to SQLSERVER_2022_STANDARD."), tools.NewStringParameter("rootPassword", "The root password for the instance"), tools.NewStringParameterWithDefault("editionPreset", "Development", "The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`."), } paramManifest := allParameters.Manifest() description := cfg.Description if description == "" { description = "Creates a SQL Server instance using `Production` and `Development` presets. For the `Development` template, it chooses a 2 vCPU, 8 GiB RAM (`db-custom-2-8192`) configuration with Non-HA/zonal availability. For the `Production` template, it chooses a 4 vCPU, 26 GiB RAM (`db-custom-4-26624`) configuration with HA/regional availability. The Enterprise edition is used in both cases. The default database version is `SQLSERVER_2022_STANDARD`. The agent should ask the user if they want to use a different version." } mcpManifest := tools.GetMcpManifest(cfg.Name, description, cfg.AuthRequired, allParameters) return Tool{ Name: cfg.Name, Kind: kind, AuthRequired: cfg.AuthRequired, Source: s, AllParams: allParameters, manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired}, mcpManifest: mcpManifest, }, nil } // Tool represents the create-instances tool. type Tool struct { Name string `yaml:"name"` Kind string `yaml:"kind"` Description string `yaml:"description"` AuthRequired []string `yaml:"authRequired"` Source *cloudsqladmin.Source AllParams tools.Parameters `yaml:"allParams"` manifest tools.Manifest mcpManifest tools.McpManifest } // Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) { paramsMap := params.AsMap() project, ok := paramsMap["project"].(string) if !ok { return nil, fmt.Errorf("error casting 'project' parameter: %s", paramsMap["project"]) } name, ok := paramsMap["name"].(string) if !ok { return nil, fmt.Errorf("error casting 'name' parameter: %s", paramsMap["name"]) } dbVersion, ok := paramsMap["databaseVersion"].(string) if !ok { return nil, fmt.Errorf("error casting 'databaseVersion' parameter: %s", paramsMap["databaseVersion"]) } rootPassword, ok := paramsMap["rootPassword"].(string) if !ok { return nil, fmt.Errorf("error casting 'rootPassword' parameter: %s", paramsMap["rootPassword"]) } editionPreset, ok := paramsMap["editionPreset"].(string) if !ok { return nil, fmt.Errorf("error casting 'editionPreset' parameter: %s", paramsMap["editionPreset"]) } settings := sqladmin.Settings{} switch strings.ToLower(editionPreset) { case "production": settings.AvailabilityType = "REGIONAL" settings.Edition = "ENTERPRISE" settings.Tier = "db-custom-4-26624" settings.DataDiskSizeGb = 250 settings.DataDiskType = "PD_SSD" case "development": settings.AvailabilityType = "ZONAL" settings.Edition = "ENTERPRISE" settings.Tier = "db-custom-2-8192" settings.DataDiskSizeGb = 100 settings.DataDiskType = "PD_SSD" default: return nil, fmt.Errorf("invalid 'editionPreset': %q. Must be either 'Production' or 'Development'", editionPreset) } instance := sqladmin.DatabaseInstance{ Name: name, DatabaseVersion: dbVersion, RootPassword: rootPassword, Settings: &settings, Project: project, } service, err := t.Source.GetService(ctx, string(accessToken)) if err != nil { return nil, err } resp, err := service.Instances.Insert(project, &instance).Do() if err != nil { return nil, fmt.Errorf("error creating instance: %w", err) } return resp, nil } // ParseParams parses the parameters for the tool. 
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) { return tools.ParseParams(t.AllParams, data, claims) } // Manifest returns the tool's manifest. func (t Tool) Manifest() tools.Manifest { return t.manifest } // McpManifest returns the tool's MCP manifest. func (t Tool) McpManifest() tools.McpManifest { return t.mcpManifest } // Authorized checks if the tool is authorized. func (t Tool) Authorized(verifiedAuthServices []string) bool { return true } func (t Tool) RequiresClientAuthorization() bool { return t.Source.UseClientAuthorization() } ``` -------------------------------------------------------------------------------- /internal/tools/alloydb/alloydbcreateuser/alloydbcreateuser.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package alloydbcreateuser import ( "context" "fmt" yaml "github.com/goccy/go-yaml" "github.com/googleapis/genai-toolbox/internal/sources" alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin" "github.com/googleapis/genai-toolbox/internal/tools" "google.golang.org/api/alloydb/v1" ) const kind string = "alloydb-create-user" func init() { if !tools.Register(kind, newConfig) { panic(fmt.Sprintf("tool kind %q already registered", kind)) } } func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) { actual := Config{Name: name} if err := decoder.DecodeContext(ctx, &actual); err != nil { return nil, err } return actual, nil } // Configuration for the create-user tool. type Config struct { Name string `yaml:"name" validate:"required"` Kind string `yaml:"kind" validate:"required"` Source string `yaml:"source" validate:"required"` Description string `yaml:"description"` AuthRequired []string `yaml:"authRequired"` } // validate interface var _ tools.ToolConfig = Config{} // ToolConfigKind returns the kind of the tool. func (cfg Config) ToolConfigKind() string { return kind } // Initialize initializes the tool from the configuration. func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) { rawS, ok := srcs[cfg.Source] if !ok { return nil, fmt.Errorf("source %q not found", cfg.Source) } s, ok := rawS.(*alloydbadmin.Source) if !ok { return nil, fmt.Errorf("invalid source for %q tool: source kind must be `alloydb-admin`", kind) } allParameters := tools.Parameters{ tools.NewStringParameter("project", "The GCP project ID."), tools.NewStringParameter("location", "The location of the cluster (e.g., 'us-central1')."), tools.NewStringParameter("cluster", "The ID of the cluster where the user will be created."), tools.NewStringParameter("user", "The name for the new user. Must be unique within the cluster."), tools.NewStringParameterWithRequired("password", "A secure password for the new user. 
Required only for ALLOYDB_BUILT_IN userType.", false), tools.NewArrayParameterWithDefault("databaseRoles", []any{}, "Optional. A list of database roles to grant to the new user (e.g., ['pg_read_all_data']).", tools.NewStringParameter("role", "A single database role to grant to the user (e.g., 'pg_read_all_data').")), tools.NewStringParameter("userType", "The type of user to create. Valid values are: ALLOYDB_BUILT_IN and ALLOYDB_IAM_USER. ALLOYDB_IAM_USER is recommended."), } paramManifest := allParameters.Manifest() description := cfg.Description if description == "" { description = "Creates a new AlloyDB user within a cluster. Takes the new user's name and a secure password. Optionally, a list of database roles can be assigned. Always ask the user for the type of user to create. ALLOYDB_IAM_USER is recommended." } mcpManifest := tools.GetMcpManifest(cfg.Name, description, cfg.AuthRequired, allParameters) return Tool{ Name: cfg.Name, Kind: kind, Source: s, AllParams: allParameters, manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired}, mcpManifest: mcpManifest, }, nil } // Tool represents the create-user tool. type Tool struct { Name string `yaml:"name"` Kind string `yaml:"kind"` Description string `yaml:"description"` Source *alloydbadmin.Source AllParams tools.Parameters `yaml:"allParams"` manifest tools.Manifest mcpManifest tools.McpManifest } // Invoke executes the tool's logic. 
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) { paramsMap := params.AsMap() project, ok := paramsMap["project"].(string) if !ok || project == "" { return nil, fmt.Errorf("invalid or missing 'project' parameter; expected a non-empty string") } location, ok := paramsMap["location"].(string) if !ok || location == "" { return nil, fmt.Errorf("invalid or missing 'location' parameter; expected a non-empty string") } cluster, ok := paramsMap["cluster"].(string) if !ok || cluster == "" { return nil, fmt.Errorf("invalid or missing 'cluster' parameter; expected a non-empty string") } userID, ok := paramsMap["user"].(string) if !ok || userID == "" { return nil, fmt.Errorf("invalid or missing 'user' parameter; expected a non-empty string") } userType, ok := paramsMap["userType"].(string) if !ok || (userType != "ALLOYDB_BUILT_IN" && userType != "ALLOYDB_IAM_USER") { return nil, fmt.Errorf("invalid or missing 'userType' parameter; expected 'ALLOYDB_BUILT_IN' or 'ALLOYDB_IAM_USER'") } service, err := t.Source.GetService(ctx, string(accessToken)) if err != nil { return nil, err } urlString := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, cluster) // Build the request body using the type-safe User struct. user := &alloydb.User{ UserType: userType, } if userType == "ALLOYDB_BUILT_IN" { password, ok := paramsMap["password"].(string) if !ok || password == "" { return nil, fmt.Errorf("password is required when userType is ALLOYDB_BUILT_IN") } user.Password = password } if dbRolesRaw, ok := paramsMap["databaseRoles"].([]any); ok && len(dbRolesRaw) > 0 { var roles []string for _, r := range dbRolesRaw { if role, ok := r.(string); ok { roles = append(roles, role) } } if len(roles) > 0 { user.DatabaseRoles = roles } } // The Create API returns a long-running operation.
resp, err := service.Projects.Locations.Clusters.Users.Create(urlString, user).UserId(userID).Do() if err != nil { return nil, fmt.Errorf("error creating AlloyDB user: %w", err) } return resp, nil } // ParseParams parses the parameters for the tool. func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) { return tools.ParseParams(t.AllParams, data, claims) } // Manifest returns the tool's manifest. func (t Tool) Manifest() tools.Manifest { return t.manifest } // McpManifest returns the tool's MCP manifest. func (t Tool) McpManifest() tools.McpManifest { return t.mcpManifest } // Authorized checks if the tool is authorized. func (t Tool) Authorized(verifiedAuthServices []string) bool { return true } func (t Tool) RequiresClientAuthorization() bool { return t.Source.UseClientAuthorization() } ``` -------------------------------------------------------------------------------- /internal/tools/mongodb/mongodbfindone/mongodbfindone.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
package mongodbfindone import ( "context" "encoding/json" "fmt" "slices" "github.com/goccy/go-yaml" mongosrc "github.com/googleapis/genai-toolbox/internal/sources/mongodb" "go.mongodb.org/mongo-driver/bson" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" "github.com/googleapis/genai-toolbox/internal/sources" "github.com/googleapis/genai-toolbox/internal/tools" ) const kind string = "mongodb-find-one" func init() { if !tools.Register(kind, newConfig) { panic(fmt.Sprintf("tool kind %q already registered", kind)) } } func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) { actual := Config{Name: name} if err := decoder.DecodeContext(ctx, &actual); err != nil { return nil, err } return actual, nil } type Config struct { Name string `yaml:"name" validate:"required"` Kind string `yaml:"kind" validate:"required"` Source string `yaml:"source" validate:"required"` AuthRequired []string `yaml:"authRequired" validate:"required"` Description string `yaml:"description" validate:"required"` Database string `yaml:"database" validate:"required"` Collection string `yaml:"collection" validate:"required"` FilterPayload string `yaml:"filterPayload" validate:"required"` FilterParams tools.Parameters `yaml:"filterParams" validate:"required"` ProjectPayload string `yaml:"projectPayload"` ProjectParams tools.Parameters `yaml:"projectParams"` SortPayload string `yaml:"sortPayload"` SortParams tools.Parameters `yaml:"sortParams"` } // validate interface var _ tools.ToolConfig = Config{} func (cfg Config) ToolConfigKind() string { return kind } func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) { // verify source exists rawS, ok := srcs[cfg.Source] if !ok { return nil, fmt.Errorf("no source named %q configured", cfg.Source) } // verify the source is compatible s, ok := rawS.(*mongosrc.Source) if !ok { return nil, fmt.Errorf("invalid source for %q tool: source kind must be `mongodb`", 
kind) } // Create a slice for all parameters allParameters := slices.Concat(cfg.FilterParams, cfg.ProjectParams, cfg.SortParams) // Verify no duplicate parameter names err := tools.CheckDuplicateParameters(allParameters) if err != nil { return nil, err } // Create Toolbox manifest paramManifest := allParameters.Manifest() if paramManifest == nil { paramManifest = make([]tools.ParameterManifest, 0) } // Create MCP manifest mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, allParameters) // finish tool setup return Tool{ Name: cfg.Name, Kind: kind, AuthRequired: cfg.AuthRequired, Collection: cfg.Collection, FilterPayload: cfg.FilterPayload, FilterParams: cfg.FilterParams, ProjectPayload: cfg.ProjectPayload, ProjectParams: cfg.ProjectParams, SortPayload: cfg.SortPayload, SortParams: cfg.SortParams, AllParams: allParameters, database: s.Client.Database(cfg.Database), manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired}, mcpManifest: mcpManifest, }, nil } // validate interface var _ tools.Tool = Tool{} type Tool struct { Name string `yaml:"name"` Kind string `yaml:"kind"` AuthRequired []string `yaml:"authRequired"` Description string `yaml:"description"` Collection string `yaml:"collection"` FilterPayload string `yaml:"filterPayload"` FilterParams tools.Parameters `yaml:"filterParams"` ProjectPayload string `yaml:"projectPayload"` ProjectParams tools.Parameters `yaml:"projectParams"` SortPayload string `yaml:"sortPayload"` SortParams tools.Parameters `yaml:"sortParams"` AllParams tools.Parameters `yaml:"allParams"` database *mongo.Database manifest tools.Manifest mcpManifest tools.McpManifest } func getOptions(sortParameters tools.Parameters, projectPayload string, paramsMap map[string]any) (*options.FindOneOptions, error) { opts := options.FindOne() sort := bson.M{} for _, p := range sortParameters { sort[p.GetName()] = paramsMap[p.GetName()] } opts = opts.SetSort(sort) if 
len(projectPayload) == 0 { return opts, nil } result, err := tools.PopulateTemplateWithJSON("MongoDBFindOneProjectString", projectPayload, paramsMap) if err != nil { return nil, fmt.Errorf("error populating project payload: %s", err) } var projection any err = bson.UnmarshalExtJSON([]byte(result), false, &projection) if err != nil { return nil, fmt.Errorf("error unmarshalling projection: %s", err) } opts = opts.SetProjection(projection) return opts, nil } func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) { paramsMap := params.AsMap() filterString, err := tools.PopulateTemplateWithJSON("MongoDBFindOneFilterString", t.FilterPayload, paramsMap) if err != nil { return nil, fmt.Errorf("error populating filter: %s", err) } opts, err := getOptions(t.SortParams, t.ProjectPayload, paramsMap) if err != nil { return nil, fmt.Errorf("error populating options: %s", err) } var filter = bson.D{} err = bson.UnmarshalExtJSON([]byte(filterString), false, &filter) if err != nil { return nil, err } res := t.database.Collection(t.Collection).FindOne(ctx, filter, opts) if res.Err() != nil { return nil, res.Err() } var data any err = res.Decode(&data) if err != nil { return nil, err } var final []any tmp, _ := bson.MarshalExtJSON(data, false, false) var tmp2 any err = json.Unmarshal(tmp, &tmp2) if err != nil { return nil, err } final = append(final, tmp2) return final, err } func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) { return tools.ParseParams(t.AllParams, data, claims) } func (t Tool) Manifest() tools.Manifest { return t.manifest } func (t Tool) McpManifest() tools.McpManifest { return t.mcpManifest } func (t Tool) Authorized(verifiedAuthServices []string) bool { return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices) } func (t Tool) RequiresClientAuthorization() bool { return false } ``` 
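The `Config` struct above maps directly onto the tool's YAML definition. As a rough sketch of how such a tool might be configured — the source name, database, collection, and the `{{json .name}}` template helper used in `filterPayload` are illustrative assumptions here, not taken verbatim from the repository's canonical docs:

```yaml
tools:
  find_user:
    kind: mongodb-find-one
    source: my-mongodb-source   # must reference a source of kind `mongodb`
    description: Find a single user document by name.
    authRequired: []
    database: my_database
    collection: users
    filterPayload: |
      { "name": {{json .name}} }
    filterParams:
      - name: name
        type: string
        description: The user name to match.
    projectPayload: |
      { "_id": 0, "name": 1, "email": 1 }
```

At `Invoke` time, `filterPayload` is populated from `filterParams` via `tools.PopulateTemplateWithJSON`, parsed as extended JSON with `bson.UnmarshalExtJSON`, and passed to `FindOne` together with any projection and sort options built in `getOptions`.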
-------------------------------------------------------------------------------- /internal/tools/clickhouse/clickhousesql/clickhousesql_test.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package clickhouse import ( "testing" "github.com/goccy/go-yaml" "github.com/google/go-cmp/cmp" "github.com/googleapis/genai-toolbox/internal/server" "github.com/googleapis/genai-toolbox/internal/sources" "github.com/googleapis/genai-toolbox/internal/sources/clickhouse" "github.com/googleapis/genai-toolbox/internal/testutils" "github.com/googleapis/genai-toolbox/internal/tools" ) func TestConfigToolConfigKind(t *testing.T) { config := Config{} if config.ToolConfigKind() != sqlKind { t.Errorf("Expected %s, got %s", sqlKind, config.ToolConfigKind()) } } func TestParseFromYamlClickHouseSQL(t *testing.T) { ctx, err := testutils.ContextWithNewLogger() if err != nil { t.Fatalf("unexpected error: %s", err) } tcs := []struct { desc string in string want server.ToolConfigs }{ { desc: "basic example", in: ` tools: example_tool: kind: clickhouse-sql source: my-instance description: some description statement: SELECT 1 `, want: server.ToolConfigs{ "example_tool": Config{ Name: "example_tool", Kind: "clickhouse-sql", Source: "my-instance", Description: "some description", Statement: "SELECT 1", AuthRequired: []string{}, }, }, }, { desc: "with parameters", in: ` tools: 
param_tool: kind: clickhouse-sql source: test-source description: Test ClickHouse tool statement: SELECT * FROM test_table WHERE id = $1 parameters: - name: id type: string description: Test ID `, want: server.ToolConfigs{ "param_tool": Config{ Name: "param_tool", Kind: "clickhouse-sql", Source: "test-source", Description: "Test ClickHouse tool", Statement: "SELECT * FROM test_table WHERE id = $1", Parameters: tools.Parameters{ tools.NewStringParameter("id", "Test ID"), }, AuthRequired: []string{}, }, }, }, } for _, tc := range tcs { t.Run(tc.desc, func(t *testing.T) { got := struct { Tools server.ToolConfigs `yaml:"tools"` }{} err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got) if err != nil { t.Fatalf("unable to unmarshal: %s", err) } if diff := cmp.Diff(tc.want, got.Tools); diff != "" { t.Fatalf("incorrect parse: diff %v", diff) } }) } } func TestSQLConfigInitializeValidSource(t *testing.T) { config := Config{ Name: "test-tool", Kind: sqlKind, Source: "test-clickhouse", Description: "Test tool", Statement: "SELECT 1", Parameters: tools.Parameters{}, } // Create a mock ClickHouse source mockSource := &clickhouse.Source{} sources := map[string]sources.Source{ "test-clickhouse": mockSource, } tool, err := config.Initialize(sources) if err != nil { t.Fatalf("Expected no error, got: %v", err) } clickhouseTool, ok := tool.(Tool) if !ok { t.Fatalf("Expected Tool type, got %T", tool) } if clickhouseTool.Name != "test-tool" { t.Errorf("Expected name 'test-tool', got %s", clickhouseTool.Name) } } func TestSQLConfigInitializeMissingSource(t *testing.T) { config := Config{ Name: "test-tool", Kind: sqlKind, Source: "missing-source", Description: "Test tool", Statement: "SELECT 1", Parameters: tools.Parameters{}, } sources := map[string]sources.Source{} _, err := config.Initialize(sources) if err == nil { t.Fatal("Expected error for missing source, got nil") } expectedErr := `no source named "missing-source" configured` if err.Error() != expectedErr { 
t.Errorf("Expected error %q, got %q", expectedErr, err.Error()) } } // mockIncompatibleSource is a mock source that doesn't implement the compatibleSource interface type mockIncompatibleSource struct{} func (m *mockIncompatibleSource) SourceKind() string { return "mock" } func TestSQLConfigInitializeIncompatibleSource(t *testing.T) { config := Config{ Name: "test-tool", Kind: sqlKind, Source: "incompatible-source", Description: "Test tool", Statement: "SELECT 1", Parameters: tools.Parameters{}, } mockSource := &mockIncompatibleSource{} sources := map[string]sources.Source{ "incompatible-source": mockSource, } _, err := config.Initialize(sources) if err == nil { t.Fatal("Expected error for incompatible source, got nil") } if err.Error() == "" { t.Error("Expected non-empty error message") } } func TestToolManifest(t *testing.T) { tool := Tool{ manifest: tools.Manifest{ Description: "Test description", Parameters: []tools.ParameterManifest{}, }, } manifest := tool.Manifest() if manifest.Description != "Test description" { t.Errorf("Expected description 'Test description', got %s", manifest.Description) } } func TestToolMcpManifest(t *testing.T) { tool := Tool{ mcpManifest: tools.McpManifest{ Name: "test-tool", Description: "Test description", }, } manifest := tool.McpManifest() if manifest.Name != "test-tool" { t.Errorf("Expected name 'test-tool', got %s", manifest.Name) } if manifest.Description != "Test description" { t.Errorf("Expected description 'Test description', got %s", manifest.Description) } } func TestToolAuthorized(t *testing.T) { tests := []struct { name string authRequired []string verifiedAuthServices []string expectedAuthorized bool }{ { name: "no auth required", authRequired: []string{}, verifiedAuthServices: []string{}, expectedAuthorized: true, }, { name: "auth required and verified", authRequired: []string{"google"}, verifiedAuthServices: []string{"google"}, expectedAuthorized: true, }, { name: "auth required but not verified", authRequired: 
[]string{"google"}, verifiedAuthServices: []string{}, expectedAuthorized: false, }, { name: "auth required but different service verified", authRequired: []string{"google"}, verifiedAuthServices: []string{"aws"}, expectedAuthorized: false, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { tool := Tool{ AuthRequired: tt.authRequired, } authorized := tool.Authorized(tt.verifiedAuthServices) if authorized != tt.expectedAuthorized { t.Errorf("Expected authorized %t, got %t", tt.expectedAuthorized, authorized) } }) } } ``` -------------------------------------------------------------------------------- /internal/server/mcp/v20250326/method.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package v20250326 import ( "bytes" "context" "encoding/json" "errors" "fmt" "net/http" "strings" "github.com/googleapis/genai-toolbox/internal/auth" "github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc" "github.com/googleapis/genai-toolbox/internal/tools" "github.com/googleapis/genai-toolbox/internal/util" ) // ProcessMethod returns a response for the request. 
func ProcessMethod(ctx context.Context, id jsonrpc.RequestId, method string, toolset tools.Toolset, tools map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) { switch method { case PING: return pingHandler(id) case TOOLS_LIST: return toolsListHandler(id, toolset, body) case TOOLS_CALL: return toolsCallHandler(ctx, id, tools, authServices, body, header) default: err := fmt.Errorf("invalid method %s", method) return jsonrpc.NewError(id, jsonrpc.METHOD_NOT_FOUND, err.Error(), nil), err } } // pingHandler handles the "ping" method by returning an empty response. func pingHandler(id jsonrpc.RequestId) (any, error) { return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: struct{}{}, }, nil } func toolsListHandler(id jsonrpc.RequestId, toolset tools.Toolset, body []byte) (any, error) { var req ListToolsRequest if err := json.Unmarshal(body, &req); err != nil { err = fmt.Errorf("invalid mcp tools list request: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } result := ListToolsResult{ Tools: toolset.McpManifest, } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: result, }, nil } // toolsCallHandler generates a response for a tools call.
func toolsCallHandler(ctx context.Context, id jsonrpc.RequestId, toolsMap map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) { // retrieve logger from context logger, err := util.LoggerFromContext(ctx) if err != nil { return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } var req CallToolRequest if err = json.Unmarshal(body, &req); err != nil { err = fmt.Errorf("invalid mcp tools call request: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } toolName := req.Params.Name toolArgument := req.Params.Arguments logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName)) tool, ok := toolsMap[toolName] if !ok { err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName) return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err } // Get access token accessToken := tools.AccessToken(header.Get("Authorization")) // Check if this specific tool requires the standard authorization header if tool.RequiresClientAuthorization() { if accessToken == "" { return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, "missing access token in the 'Authorization' header", nil), tools.ErrUnauthorized } } // marshal the arguments, then re-decode them with DecodeJSON to avoid precision loss between floats and ints. aMarshal, err := json.Marshal(toolArgument) if err != nil { err = fmt.Errorf("unable to marshal tools argument: %w", err) return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } var data map[string]any if err = util.DecodeJSON(bytes.NewBuffer(aMarshal), &data); err != nil { err = fmt.Errorf("unable to decode tools argument: %w", err) return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } // Tool authentication // claimsFromAuth maps the name of the authservice to the claims retrieved from it.
claimsFromAuth := make(map[string]map[string]any) // if using stdio, header will be nil and auth will not be supported if header != nil { for _, aS := range authServices { claims, err := aS.GetClaimsFromHeader(ctx, header) if err != nil { logger.DebugContext(ctx, err.Error()) continue } if claims == nil { // authService not present in header continue } claimsFromAuth[aS.GetName()] = claims } } // Tool authorization check verifiedAuthServices := make([]string, len(claimsFromAuth)) i := 0 for k := range claimsFromAuth { verifiedAuthServices[i] = k i++ } // Check if any of the specified auth services is verified isAuthorized := tool.Authorized(verifiedAuthServices) if !isAuthorized { err = fmt.Errorf("unauthorized tool call: please make sure you specify the correct auth headers: %w", tools.ErrUnauthorized) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } logger.DebugContext(ctx, "tool invocation authorized") params, err := tool.ParseParams(data, claimsFromAuth) if err != nil { err = fmt.Errorf("provided parameters were invalid: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err } logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params)) // run tool invocation and generate response. results, err := tool.Invoke(ctx, params, accessToken) if err != nil { errStr := err.Error() // Missing authService tokens.
if errors.Is(err, tools.ErrUnauthorized) { return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } // Upstream auth error if strings.Contains(errStr, "Error 401") || strings.Contains(errStr, "Error 403") { if tool.RequiresClientAuthorization() { // Error with client credentials should pass down to the client return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } // Auth error with ADC should raise internal 500 error return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } text := TextContent{ Type: "text", Text: err.Error(), } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: CallToolResult{Content: []TextContent{text}, IsError: true}, }, nil } content := make([]TextContent, 0) sliceRes, ok := results.([]any) if !ok { sliceRes = []any{results} } for _, d := range sliceRes { text := TextContent{Type: "text"} dM, err := json.Marshal(d) if err != nil { text.Text = fmt.Sprintf("fail to marshal: %s, result: %s", err, d) } else { text.Text = string(dM) } content = append(content, text) } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: CallToolResult{Content: content}, }, nil } ``` -------------------------------------------------------------------------------- /internal/server/mcp/v20250618/method.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
// See the License for the specific language governing permissions and // limitations under the License. package v20250618 import ( "bytes" "context" "encoding/json" "errors" "fmt" "net/http" "strings" "github.com/googleapis/genai-toolbox/internal/auth" "github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc" "github.com/googleapis/genai-toolbox/internal/tools" "github.com/googleapis/genai-toolbox/internal/util" ) // ProcessMethod returns a response for the request. func ProcessMethod(ctx context.Context, id jsonrpc.RequestId, method string, toolset tools.Toolset, tools map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) { switch method { case PING: return pingHandler(id) case TOOLS_LIST: return toolsListHandler(id, toolset, body) case TOOLS_CALL: return toolsCallHandler(ctx, id, tools, authServices, body, header) default: err := fmt.Errorf("invalid method %s", method) return jsonrpc.NewError(id, jsonrpc.METHOD_NOT_FOUND, err.Error(), nil), err } } // pingHandler handles the "ping" method by returning an empty response. func pingHandler(id jsonrpc.RequestId) (any, error) { return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: struct{}{}, }, nil } func toolsListHandler(id jsonrpc.RequestId, toolset tools.Toolset, body []byte) (any, error) { var req ListToolsRequest if err := json.Unmarshal(body, &req); err != nil { err = fmt.Errorf("invalid mcp tools list request: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } result := ListToolsResult{ Tools: toolset.McpManifest, } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: result, }, nil } // toolsCallHandler generates a response for a tools call.
func toolsCallHandler(ctx context.Context, id jsonrpc.RequestId, toolsMap map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) { // retrieve logger from context logger, err := util.LoggerFromContext(ctx) if err != nil { return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } var req CallToolRequest if err = json.Unmarshal(body, &req); err != nil { err = fmt.Errorf("invalid mcp tools call request: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } toolName := req.Params.Name toolArgument := req.Params.Arguments logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName)) tool, ok := toolsMap[toolName] if !ok { err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName) return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err } // Get access token accessToken := tools.AccessToken(header.Get("Authorization")) // Check if this specific tool requires the standard authorization header if tool.RequiresClientAuthorization() { if accessToken == "" { return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, "missing access token in the 'Authorization' header", nil), tools.ErrUnauthorized } } // marshal the arguments, then re-decode them with DecodeJSON to avoid precision loss between floats and ints. aMarshal, err := json.Marshal(toolArgument) if err != nil { err = fmt.Errorf("unable to marshal tools argument: %w", err) return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } var data map[string]any if err = util.DecodeJSON(bytes.NewBuffer(aMarshal), &data); err != nil { err = fmt.Errorf("unable to decode tools argument: %w", err) return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } // Tool authentication // claimsFromAuth maps the name of the authservice to the claims retrieved from it.
claimsFromAuth := make(map[string]map[string]any) // if using stdio, header will be nil and auth will not be supported if header != nil { for _, aS := range authServices { claims, err := aS.GetClaimsFromHeader(ctx, header) if err != nil { logger.DebugContext(ctx, err.Error()) continue } if claims == nil { // authService not present in header continue } claimsFromAuth[aS.GetName()] = claims } } // Tool authorization check verifiedAuthServices := make([]string, len(claimsFromAuth)) i := 0 for k := range claimsFromAuth { verifiedAuthServices[i] = k i++ } // Check if any of the specified auth services is verified isAuthorized := tool.Authorized(verifiedAuthServices) if !isAuthorized { err = fmt.Errorf("unauthorized tool call: please make sure you specify the correct auth headers: %w", tools.ErrUnauthorized) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } logger.DebugContext(ctx, "tool invocation authorized") params, err := tool.ParseParams(data, claimsFromAuth) if err != nil { err = fmt.Errorf("provided parameters were invalid: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err } logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params)) // run tool invocation and generate response. results, err := tool.Invoke(ctx, params, accessToken) if err != nil { errStr := err.Error() // Missing authService tokens.
if errors.Is(err, tools.ErrUnauthorized) { return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } // Upstream auth error if strings.Contains(errStr, "Error 401") || strings.Contains(errStr, "Error 403") { if tool.RequiresClientAuthorization() { // Error with client credentials should pass down to the client return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } // Auth error with ADC should raise internal 500 error return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } text := TextContent{ Type: "text", Text: err.Error(), } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: CallToolResult{Content: []TextContent{text}, IsError: true}, }, nil } content := make([]TextContent, 0) sliceRes, ok := results.([]any) if !ok { sliceRes = []any{results} } for _, d := range sliceRes { text := TextContent{Type: "text"} dM, err := json.Marshal(d) if err != nil { text.Text = fmt.Sprintf("fail to marshal: %s, result: %s", err, d) } else { text.Text = string(dM) } content = append(content, text) } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: CallToolResult{Content: content}, }, nil } ``` -------------------------------------------------------------------------------- /internal/server/mcp/v20241105/method.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
// See the License for the specific language governing permissions and // limitations under the License. package v20241105 import ( "bytes" "context" "encoding/json" "errors" "fmt" "net/http" "strings" "github.com/googleapis/genai-toolbox/internal/auth" "github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc" "github.com/googleapis/genai-toolbox/internal/tools" "github.com/googleapis/genai-toolbox/internal/util" ) // ProcessMethod returns a response for the request. func ProcessMethod(ctx context.Context, id jsonrpc.RequestId, method string, toolset tools.Toolset, tools map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) { switch method { case PING: return pingHandler(id) case TOOLS_LIST: return toolsListHandler(id, toolset, body) case TOOLS_CALL: return toolsCallHandler(ctx, id, tools, authServices, body, header) default: err := fmt.Errorf("invalid method %s", method) return jsonrpc.NewError(id, jsonrpc.METHOD_NOT_FOUND, err.Error(), nil), err } } // pingHandler handles the "ping" method by returning an empty response. func pingHandler(id jsonrpc.RequestId) (any, error) { return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: struct{}{}, }, nil } func toolsListHandler(id jsonrpc.RequestId, toolset tools.Toolset, body []byte) (any, error) { var req ListToolsRequest if err := json.Unmarshal(body, &req); err != nil { err = fmt.Errorf("invalid mcp tools list request: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } result := ListToolsResult{ Tools: toolset.McpManifest, } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: result, }, nil } // toolsCallHandler generates a response for a tools call.
func toolsCallHandler(ctx context.Context, id jsonrpc.RequestId, toolsMap map[string]tools.Tool, authServices map[string]auth.AuthService, body []byte, header http.Header) (any, error) { // retrieve logger from context logger, err := util.LoggerFromContext(ctx) if err != nil { return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } var req CallToolRequest if err = json.Unmarshal(body, &req); err != nil { err = fmt.Errorf("invalid mcp tools call request: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } toolName := req.Params.Name toolArgument := req.Params.Arguments logger.DebugContext(ctx, fmt.Sprintf("tool name: %s", toolName)) tool, ok := toolsMap[toolName] if !ok { err = fmt.Errorf("invalid tool name: tool with name %q does not exist", toolName) return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err } // Get access token accessToken := tools.AccessToken(header.Get("Authorization")) // Check if this specific tool requires the standard authorization header if tool.RequiresClientAuthorization() { if accessToken == "" { return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, "missing access token in the 'Authorization' header", nil), tools.ErrUnauthorized } } // marshal the arguments, then re-decode them with DecodeJSON to avoid precision loss between floats and ints. aMarshal, err := json.Marshal(toolArgument) if err != nil { err = fmt.Errorf("unable to marshal tools argument: %w", err) return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } var data map[string]any if err = util.DecodeJSON(bytes.NewBuffer(aMarshal), &data); err != nil { err = fmt.Errorf("unable to decode tools argument: %w", err) return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } // Tool authentication // claimsFromAuth maps the name of the authservice to the claims retrieved from it.
claimsFromAuth := make(map[string]map[string]any) // if using stdio, header will be nil and auth will not be supported if header != nil { for _, aS := range authServices { claims, err := aS.GetClaimsFromHeader(ctx, header) if err != nil { logger.DebugContext(ctx, err.Error()) continue } if claims == nil { // authService not present in header continue } claimsFromAuth[aS.GetName()] = claims } } // Tool authorization check verifiedAuthServices := make([]string, len(claimsFromAuth)) i := 0 for k := range claimsFromAuth { verifiedAuthServices[i] = k i++ } // Check if any of the specified auth services is verified isAuthorized := tool.Authorized(verifiedAuthServices) if !isAuthorized { err = fmt.Errorf("unauthorized tool call: please make sure you specify the correct auth headers: %w", tools.ErrUnauthorized) return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } logger.DebugContext(ctx, "tool invocation authorized") params, err := tool.ParseParams(data, claimsFromAuth) if err != nil { err = fmt.Errorf("provided parameters were invalid: %w", err) return jsonrpc.NewError(id, jsonrpc.INVALID_PARAMS, err.Error(), nil), err } logger.DebugContext(ctx, fmt.Sprintf("invocation params: %s", params)) // run tool invocation and generate response. results, err := tool.Invoke(ctx, params, accessToken) if err != nil { errStr := err.Error() // Missing authService tokens.
if errors.Is(err, tools.ErrUnauthorized) { return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } // Upstream auth error if strings.Contains(errStr, "Error 401") || strings.Contains(errStr, "Error 403") { if tool.RequiresClientAuthorization() { // Error with client credentials should pass down to the client return jsonrpc.NewError(id, jsonrpc.INVALID_REQUEST, err.Error(), nil), err } // Auth error with ADC should raise internal 500 error return jsonrpc.NewError(id, jsonrpc.INTERNAL_ERROR, err.Error(), nil), err } text := TextContent{ Type: "text", Text: err.Error(), } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: CallToolResult{Content: []TextContent{text}, IsError: true}, }, nil } content := make([]TextContent, 0) sliceRes, ok := results.([]any) if !ok { sliceRes = []any{results} } for _, d := range sliceRes { text := TextContent{Type: "text"} dM, err := json.Marshal(d) if err != nil { text.Text = fmt.Sprintf("fail to marshal: %s, result: %s", err, d) } else { text.Text = string(dM) } content = append(content, text) } return jsonrpc.JSONRPCResponse{ Jsonrpc: jsonrpc.JSONRPC_VERSION, Id: id, Result: CallToolResult{Content: content}, }, nil } ``` -------------------------------------------------------------------------------- /internal/prebuiltconfigs/tools/postgres.yaml: -------------------------------------------------------------------------------- ```yaml # Copyright 2025 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. sources: postgresql-source: kind: postgres host: ${POSTGRES_HOST} port: ${POSTGRES_PORT} database: ${POSTGRES_DATABASE} user: ${POSTGRES_USER} password: ${POSTGRES_PASSWORD} queryParams: ${POSTGRES_QUERY_PARAMS:} tools: execute_sql: kind: postgres-execute-sql source: postgresql-source description: Use this tool to execute SQL. list_tables: kind: postgres-list-tables source: postgresql-source description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas." list_active_queries: kind: postgres-list-active-queries source: postgresql-source description: "List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text." list_available_extensions: kind: postgres-list-available-extensions source: postgresql-source description: "Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description." list_installed_extensions: kind: postgres-list-installed-extensions source: postgresql-source description: "List all installed PostgreSQL extensions with their name, version, schema, owner, and description." list_autovacuum_configurations: kind: postgres-sql source: postgresql-source description: "List PostgreSQL autovacuum-related configurations (name and current setting) from pg_settings." 
statement: | SELECT name, setting FROM pg_settings WHERE category = 'Autovacuum'; list_memory_configurations: kind: postgres-sql source: postgresql-source description: "List PostgreSQL memory-related configurations (name and current setting) from pg_settings." statement: | ( SELECT name, pg_size_pretty((setting::bigint * 1024)::bigint) setting FROM pg_settings WHERE name IN ('work_mem', 'maintenance_work_mem') ) UNION ALL ( SELECT name, pg_size_pretty((((setting::bigint) * 8) * 1024)::bigint) FROM pg_settings WHERE name IN ('shared_buffers', 'wal_buffers', 'effective_cache_size', 'temp_buffers') ) ORDER BY 1 DESC; list_top_bloated_tables: kind: postgres-sql source: postgresql-source description: | List the top tables by dead-tuple (approximate bloat signal), returning schema, table, live/dead tuples, percentage, and last vacuum/analyze times. statement: | SELECT schemaname AS schema_name, relname AS relation_name, n_live_tup AS live_tuples, n_dead_tup AS dead_tuples, TRUNC((n_dead_tup::NUMERIC / NULLIF(n_live_tup + n_dead_tup, 0)) * 100, 2) AS dead_tuple_percentage, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT COALESCE($1::int, 50); parameters: - name: limit description: "The maximum number of results to return." type: integer default: 50 list_replication_slots: kind: postgres-sql source: postgresql-source description: "List key details for all PostgreSQL replication slots (e.g., type, database, active status) and calculates the size of the outstanding WAL that is being prevented from removal by the slot." 
statement: | SELECT slot_name, slot_type, plugin, database, temporary, active, restart_lsn, confirmed_flush_lsn, xmin, catalog_xmin, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal FROM pg_replication_slots; list_invalid_indexes: kind: postgres-sql source: postgresql-source description: "Lists all invalid PostgreSQL indexes which are taking up disk space but are unusable by the query planner. Typically created by failed CREATE INDEX CONCURRENTLY operations." statement: | SELECT nspname AS schema_name, indexrelid::regclass AS index_name, indrelid::regclass AS table_name, pg_size_pretty(pg_total_relation_size(indexrelid)) AS index_size, indisready, indisvalid, pg_get_indexdef(pg_class.oid) AS index_def FROM pg_index JOIN pg_class ON pg_class.oid = pg_index.indexrelid JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE indisvalid = FALSE; get_query_plan: kind: postgres-sql source: postgresql-source description: "Generate a PostgreSQL EXPLAIN plan in JSON format for a single SQL statement, without executing it. This returns the optimizer's estimated plan, costs, and rows (no ANALYZE, no extra options). Use in production safely for plan inspection, regression checks, and query tuning workflows." statement: | EXPLAIN (FORMAT JSON) {{.query}}; templateParameters: - name: query type: string description: "The SQL statement for which you want to generate a plan (omit the EXPLAIN keyword)."
required: true toolsets: postgres_database_tools: - execute_sql - list_tables - list_active_queries - list_available_extensions - list_installed_extensions - list_autovacuum_configurations - list_memory_configurations - list_top_bloated_tables - list_replication_slots - list_invalid_indexes - get_query_plan ``` -------------------------------------------------------------------------------- /docs/en/resources/tools/spanner/spanner-sql.md: -------------------------------------------------------------------------------- ```markdown --- title: "spanner-sql" type: docs weight: 1 description: > A "spanner-sql" tool executes a pre-defined SQL statement against a Google Cloud Spanner database. aliases: - /resources/tools/spanner-sql --- ## About A `spanner-sql` tool executes a pre-defined SQL statement (either `googlesql` or `postgresql`) against a Cloud Spanner database. It's compatible with any of the following sources: - [spanner](../../sources/spanner.md) ### GoogleSQL For the `googlesql` dialect, the specified SQL statement is executed as a [data manipulation language (DML)][gsql-dml] statement, and specified parameters will be inserted according to their name: e.g. `@name`. > **Note:** This tool uses parameterized queries to prevent SQL injections. > Query parameters can be used as substitutes for arbitrary expressions. > Parameters cannot be used as substitutes for identifiers, column names, table > names, or other parts of the query. [gsql-dml]: https://cloud.google.com/spanner/docs/reference/standard-sql/dml-syntax ### PostgreSQL For the `postgresql` dialect, the specified SQL statement is executed as a [prepared statement][pg-prepare], and specified parameters will be inserted according to their position: e.g. `$1` will be the first parameter specified, `$2` will be the second parameter, and so on. [pg-prepare]: https://www.postgresql.org/docs/current/sql-prepare.html ## Example > **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions. > Parameters cannot be used as substitutes for identifiers, column names, table > names, or other parts of the query. {{< tabpane persist="header" >}} {{< tab header="GoogleSQL" lang="yaml" >}} tools: search_flights_by_number: kind: spanner-sql source: my-spanner-instance statement: | SELECT * FROM flights WHERE airline = @airline AND flight_number = @flight_number LIMIT 10 description: | Use this tool to get information for a specific flight. Takes an airline code and flight number and returns info on the flight. Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number. An airline code is a code for an airline service consisting of a two-character airline designator followed by a flight number, which is a 1 to 4 digit number. For example, if given CY 0123, the airline is "CY", and flight_number is "123". Another example is DL 1234: the airline is "DL", and flight_number is "1234". If the tool returns more than one option, choose the date closest to today. Example: {{ "airline": "CY", "flight_number": "888", }} Example: {{ "airline": "DL", "flight_number": "1234", }} parameters: - name: airline type: string description: Airline unique 2 letter identifier - name: flight_number type: string description: 1 to 4 digit number {{< /tab >}} {{< tab header="PostgreSQL" lang="yaml" >}} tools: search_flights_by_number: kind: spanner-sql source: my-spanner-instance statement: | SELECT * FROM flights WHERE airline = $1 AND flight_number = $2 LIMIT 10 description: | Use this tool to get information for a specific flight. Takes an airline code and flight number and returns info on the flight. Do NOT use this tool with a flight id. Do NOT guess an airline code or flight number. An airline code is a code for an airline service consisting of a two-character airline designator followed by a flight number, which is a 1 to 4 digit number.
For example, if given CY 0123, the airline is "CY", and flight_number is "123". Another example is DL 1234: the airline is "DL", and flight_number is "1234". If the tool returns more than one option, choose the date closest to today. Example: {{ "airline": "CY", "flight_number": "888", }} Example: {{ "airline": "DL", "flight_number": "1234", }} parameters: - name: airline type: string description: Airline unique 2 letter identifier - name: flight_number type: string description: 1 to 4 digit number {{< /tab >}} {{< /tabpane >}} ### Example with Template Parameters > **Note:** This tool allows direct modifications to the SQL statement, > including identifiers, column names, and table names. **This makes it more > vulnerable to SQL injections**. Using basic parameters only (see above) is > recommended for performance and safety reasons. For more details, please check > [templateParameters](..#template-parameters). ```yaml tools: list_table: kind: spanner-sql source: my-spanner-instance statement: | SELECT * FROM {{.tableName}}; description: | Use this tool to list all information from a specific table. Example: {{ "tableName": "flights", }} templateParameters: - name: tableName type: string description: Table to select from ``` ## Reference | **field** | **type** | **required** | **description** | |--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------| | kind | string | true | Must be "spanner-sql". | | source | string | true | Name of the source the SQL should execute on. | | description | string | true | Description of the tool that is passed to the LLM. | | statement | string | true | SQL statement to execute. | | parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement.
| | readOnly | bool | false | When set to `true`, the `statement` is run as a read-only transaction. Default: `false`. | | templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing the prepared statement. | ``` -------------------------------------------------------------------------------- /internal/prebuiltconfigs/tools/cloud-sql-postgres.yaml: -------------------------------------------------------------------------------- ```yaml # Copyright 2025 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. sources: cloudsql-pg-source: kind: cloud-sql-postgres project: ${CLOUD_SQL_POSTGRES_PROJECT} region: ${CLOUD_SQL_POSTGRES_REGION} instance: ${CLOUD_SQL_POSTGRES_INSTANCE} database: ${CLOUD_SQL_POSTGRES_DATABASE} user: ${CLOUD_SQL_POSTGRES_USER:} password: ${CLOUD_SQL_POSTGRES_PASSWORD:} ipType: ${CLOUD_SQL_POSTGRES_IP_TYPE:public} tools: execute_sql: kind: postgres-execute-sql source: cloudsql-pg-source description: Use this tool to execute SQL. list_tables: kind: postgres-list-tables source: cloudsql-pg-source description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
list_active_queries: kind: postgres-list-active-queries source: cloudsql-pg-source description: "List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text." list_available_extensions: kind: postgres-list-available-extensions source: cloudsql-pg-source description: "Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description." list_installed_extensions: kind: postgres-list-installed-extensions source: cloudsql-pg-source description: "List all installed PostgreSQL extensions with their name, version, schema, owner, and description." list_autovacuum_configurations: kind: postgres-sql source: cloudsql-pg-source description: "List PostgreSQL autovacuum-related configurations (name and current setting) from pg_settings." statement: | SELECT name, setting FROM pg_settings WHERE category = 'Autovacuum'; list_memory_configurations: kind: postgres-sql source: cloudsql-pg-source description: "List PostgreSQL memory-related configurations (name and current setting) from pg_settings." statement: | ( SELECT name, pg_size_pretty((setting::bigint * 1024)::bigint) setting FROM pg_settings WHERE name IN ('work_mem', 'maintenance_work_mem') ) UNION ALL ( SELECT name, pg_size_pretty((((setting::bigint) * 8) * 1024)::bigint) FROM pg_settings WHERE name IN ('shared_buffers', 'wal_buffers', 'effective_cache_size', 'temp_buffers') ) ORDER BY 1 DESC; list_top_bloated_tables: kind: postgres-sql source: cloudsql-pg-source description: | List the top tables by dead-tuple count (an approximate bloat signal), returning schema, table, live/dead tuples, percentage, and last vacuum/analyze times.
statement: | SELECT schemaname AS schema_name, relname AS relation_name, n_live_tup AS live_tuples, n_dead_tup AS dead_tuples, TRUNC((n_dead_tup::NUMERIC / NULLIF(n_live_tup + n_dead_tup, 0)) * 100, 2) AS dead_tuple_percentage, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT COALESCE($1::int, 50); parameters: - name: limit description: "The maximum number of results to return." type: integer default: 50 list_replication_slots: kind: postgres-sql source: cloudsql-pg-source description: "List key details for all PostgreSQL replication slots (e.g., type, database, active status) and calculate the size of the outstanding WAL that is being prevented from removal by the slot." statement: | SELECT slot_name, slot_type, plugin, database, temporary, active, restart_lsn, confirmed_flush_lsn, xmin, catalog_xmin, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal FROM pg_replication_slots; list_invalid_indexes: kind: postgres-sql source: cloudsql-pg-source description: "Lists all invalid PostgreSQL indexes which are taking up disk space but are unusable by the query planner. Typically created by failed CREATE INDEX CONCURRENTLY operations." statement: | SELECT nspname AS schema_name, indexrelid::regclass AS index_name, indrelid::regclass AS table_name, pg_size_pretty(pg_total_relation_size(indexrelid)) AS index_size, indisready, indisvalid, pg_get_indexdef(pg_class.oid) AS index_def FROM pg_index JOIN pg_class ON pg_class.oid = pg_index.indexrelid JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE indisvalid = FALSE; get_query_plan: kind: postgres-sql source: cloudsql-pg-source description: "Generate a PostgreSQL EXPLAIN plan in JSON format for a single SQL statement, without executing it. This returns the optimizer's estimated plan, costs, and rows (no ANALYZE, no extra options).
Use in production safely for plan inspection, regression checks, and query tuning workflows." statement: | EXPLAIN (FORMAT JSON) {{.query}}; templateParameters: - name: query type: string description: "The SQL statement for which you want to generate a plan (omit the EXPLAIN keyword)." required: true toolsets: cloud_sql_postgres_database_tools: - execute_sql - list_tables - list_active_queries - list_available_extensions - list_installed_extensions - list_autovacuum_configurations - list_memory_configurations - list_top_bloated_tables - list_replication_slots - list_invalid_indexes - get_query_plan ``` -------------------------------------------------------------------------------- /internal/prebuiltconfigs/tools/alloydb-postgres.yaml: -------------------------------------------------------------------------------- ```yaml # Copyright 2025 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. sources: alloydb-pg-source: kind: "alloydb-postgres" project: ${ALLOYDB_POSTGRES_PROJECT} region: ${ALLOYDB_POSTGRES_REGION} cluster: ${ALLOYDB_POSTGRES_CLUSTER} instance: ${ALLOYDB_POSTGRES_INSTANCE} database: ${ALLOYDB_POSTGRES_DATABASE} user: ${ALLOYDB_POSTGRES_USER:} password: ${ALLOYDB_POSTGRES_PASSWORD:} ipType: ${ALLOYDB_POSTGRES_IP_TYPE:public} tools: execute_sql: kind: postgres-execute-sql source: alloydb-pg-source description: Use this tool to execute SQL.
list_tables: kind: postgres-list-tables source: alloydb-pg-source description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas." list_active_queries: kind: postgres-list-active-queries source: alloydb-pg-source description: "List the top N (default 50) currently running queries (state='active') from pg_stat_activity, ordered by longest-running first. Returns pid, user, database, application_name, client_addr, state, wait_event_type/wait_event, backend/xact/query start times, computed query_duration, and the SQL text." list_available_extensions: kind: postgres-list-available-extensions source: alloydb-pg-source description: "Discover all PostgreSQL extensions available for installation on this server, returning name, default_version, and description." list_installed_extensions: kind: postgres-list-installed-extensions source: alloydb-pg-source description: "List all installed PostgreSQL extensions with their name, version, schema, owner, and description." list_autovacuum_configurations: kind: postgres-sql source: alloydb-pg-source description: "List PostgreSQL autovacuum-related configurations (name and current setting) from pg_settings." statement: | SELECT name, setting FROM pg_settings WHERE category = 'Autovacuum'; list_memory_configurations: kind: postgres-sql source: alloydb-pg-source description: "List PostgreSQL memory-related configurations (name and current setting) from pg_settings." 
statement: | ( SELECT name, pg_size_pretty((setting::bigint * 1024)::bigint) setting FROM pg_settings WHERE name IN ('work_mem', 'maintenance_work_mem') ) UNION ALL ( SELECT name, pg_size_pretty((((setting::bigint) * 8) * 1024)::bigint) FROM pg_settings WHERE name IN ('shared_buffers', 'wal_buffers', 'effective_cache_size', 'temp_buffers') ) ORDER BY 1 DESC; list_top_bloated_tables: kind: postgres-sql source: alloydb-pg-source description: | List the top tables by dead-tuple count (an approximate bloat signal), returning schema, table, live/dead tuples, percentage, and last vacuum/analyze times. statement: | SELECT schemaname AS schema_name, relname AS relation_name, n_live_tup AS live_tuples, n_dead_tup AS dead_tuples, TRUNC((n_dead_tup::NUMERIC / NULLIF(n_live_tup + n_dead_tup, 0)) * 100, 2) AS dead_tuple_percentage, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT COALESCE($1::int, 50); parameters: - name: limit description: "The maximum number of results to return." type: integer default: 50 list_replication_slots: kind: postgres-sql source: alloydb-pg-source description: "List key details for all PostgreSQL replication slots (e.g., type, database, active status) and calculate the size of the outstanding WAL that is being prevented from removal by the slot." statement: | SELECT slot_name, slot_type, plugin, database, temporary, active, restart_lsn, confirmed_flush_lsn, xmin, catalog_xmin, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal FROM pg_replication_slots; list_invalid_indexes: kind: postgres-sql source: alloydb-pg-source description: "Lists all invalid PostgreSQL indexes which are taking up disk space but are unusable by the query planner. Typically created by failed CREATE INDEX CONCURRENTLY operations."
statement: | SELECT nspname AS schema_name, indexrelid::regclass AS index_name, indrelid::regclass AS table_name, pg_size_pretty(pg_total_relation_size(indexrelid)) AS index_size, indisready, indisvalid, pg_get_indexdef(pg_class.oid) AS index_def FROM pg_index JOIN pg_class ON pg_class.oid = pg_index.indexrelid JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE indisvalid = FALSE; get_query_plan: kind: postgres-sql source: alloydb-pg-source description: "Generate a PostgreSQL EXPLAIN plan in JSON format for a single SQL statement, without executing it. This returns the optimizer's estimated plan, costs, and rows (no ANALYZE, no extra options). Use in production safely for plan inspection, regression checks, and query tuning workflows." statement: | EXPLAIN (FORMAT JSON) {{.query}}; templateParameters: - name: query type: string description: "The SQL statement for which you want to generate a plan (omit the EXPLAIN keyword)." required: true toolsets: alloydb_postgres_database_tools: - execute_sql - list_tables - list_active_queries - list_available_extensions - list_installed_extensions - list_autovacuum_configurations - list_memory_configurations - list_top_bloated_tables - list_replication_slots - list_invalid_indexes - get_query_plan ``` -------------------------------------------------------------------------------- /internal/tools/firestore/firestoreadddocuments/firestoreadddocuments.go: -------------------------------------------------------------------------------- ```go // Copyright 2025 Google LLC // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and // limitations under the License. package firestoreadddocuments import ( "context" "fmt" firestoreapi "cloud.google.com/go/firestore" yaml "github.com/goccy/go-yaml" "github.com/googleapis/genai-toolbox/internal/sources" firestoreds "github.com/googleapis/genai-toolbox/internal/sources/firestore" "github.com/googleapis/genai-toolbox/internal/tools" "github.com/googleapis/genai-toolbox/internal/tools/firestore/util" ) const kind string = "firestore-add-documents" const collectionPathKey string = "collectionPath" const documentDataKey string = "documentData" const returnDocumentDataKey string = "returnData" func init() { if !tools.Register(kind, newConfig) { panic(fmt.Sprintf("tool kind %q already registered", kind)) } } func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) { actual := Config{Name: name} if err := decoder.DecodeContext(ctx, &actual); err != nil { return nil, err } return actual, nil } type compatibleSource interface { FirestoreClient() *firestoreapi.Client } // validate compatible sources are still compatible var _ compatibleSource = &firestoreds.Source{} var compatibleSources = [...]string{firestoreds.SourceKind} type Config struct { Name string `yaml:"name" validate:"required"` Kind string `yaml:"kind" validate:"required"` Source string `yaml:"source" validate:"required"` Description string `yaml:"description" validate:"required"` AuthRequired []string `yaml:"authRequired"` } // validate interface var _ tools.ToolConfig = Config{} func (cfg Config) ToolConfigKind() string { return kind } func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) { // verify source exists rawS, ok := srcs[cfg.Source] if !ok { return nil, fmt.Errorf("no source named %q configured", cfg.Source) } // verify the source is compatible s, ok := rawS.(compatibleSource) if !ok { return nil, fmt.Errorf("invalid source for %q tool: source 
kind must be one of %q", kind, compatibleSources) } // Create parameters collectionPathParameter := tools.NewStringParameter( collectionPathKey, "The relative path of the collection where the document will be added to (e.g., 'users' or 'users/userId/posts'). Note: This is a relative path, NOT an absolute path like 'projects/{project_id}/databases/{database_id}/documents/...'", ) documentDataParameter := tools.NewMapParameter( documentDataKey, `The document data in Firestore's native JSON format. Each field must be wrapped with a type indicator: - Strings: {"stringValue": "text"} - Integers: {"integerValue": "123"} or {"integerValue": 123} - Doubles: {"doubleValue": 123.45} - Booleans: {"booleanValue": true} - Timestamps: {"timestampValue": "2025-01-07T10:00:00Z"} - GeoPoints: {"geoPointValue": {"latitude": 34.05, "longitude": -118.24}} - Arrays: {"arrayValue": {"values": [{"stringValue": "item1"}, {"integerValue": "2"}]}} - Maps: {"mapValue": {"fields": {"key1": {"stringValue": "value1"}, "key2": {"booleanValue": true}}}} - Null: {"nullValue": null} - Bytes: {"bytesValue": "base64EncodedString"} - References: {"referenceValue": "collection/document"}`, "", // Empty string for generic map that accepts any value type ) returnDataParameter := tools.NewBooleanParameterWithDefault( returnDocumentDataKey, false, "If set to true the output will have the data of the created document. 
This flag if set to false will help avoid overloading the context of the agent.", ) parameters := tools.Parameters{ collectionPathParameter, documentDataParameter, returnDataParameter, } mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, parameters) // finish tool setup t := Tool{ Name: cfg.Name, Kind: kind, Parameters: parameters, AuthRequired: cfg.AuthRequired, Client: s.FirestoreClient(), manifest: tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired}, mcpManifest: mcpManifest, } return t, nil } // validate interface var _ tools.Tool = Tool{} type Tool struct { Name string `yaml:"name"` Kind string `yaml:"kind"` AuthRequired []string `yaml:"authRequired"` Parameters tools.Parameters `yaml:"parameters"` Client *firestoreapi.Client manifest tools.Manifest mcpManifest tools.McpManifest } func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) { mapParams := params.AsMap() // Get collection path collectionPath, ok := mapParams[collectionPathKey].(string) if !ok || collectionPath == "" { return nil, fmt.Errorf("invalid or missing '%s' parameter", collectionPathKey) } // Validate collection path if err := util.ValidateCollectionPath(collectionPath); err != nil { return nil, fmt.Errorf("invalid collection path: %w", err) } // Get document data documentDataRaw, ok := mapParams[documentDataKey] if !ok { return nil, fmt.Errorf("invalid or missing '%s' parameter", documentDataKey) } // Convert the document data from JSON format to Firestore format // The client is passed to handle referenceValue types documentData, err := util.JSONToFirestoreValue(documentDataRaw, t.Client) if err != nil { return nil, fmt.Errorf("failed to convert document data: %w", err) } // Get return document data flag returnData := false if val, ok := mapParams[returnDocumentDataKey].(bool); ok { returnData = val } // Get the collection reference collection := 
t.Client.Collection(collectionPath) // Add the document to the collection docRef, writeResult, err := collection.Add(ctx, documentData) if err != nil { return nil, fmt.Errorf("failed to add document: %w", err) } // Build the response response := map[string]any{ "documentPath": docRef.Path, "createTime": writeResult.UpdateTime.Format("2006-01-02T15:04:05.999999999Z"), } // Add document data if requested if returnData { // Convert the document data back to simple JSON format simplifiedData := util.FirestoreValueToJSON(documentData) response["documentData"] = simplifiedData } return response, nil } func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) { return tools.ParseParams(t.Parameters, data, claims) } func (t Tool) Manifest() tools.Manifest { return t.manifest } func (t Tool) McpManifest() tools.McpManifest { return t.mcpManifest } func (t Tool) Authorized(verifiedAuthServices []string) bool { return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices) } func (t Tool) RequiresClientAuthorization() bool { return false } ``` -------------------------------------------------------------------------------- /MCP-TOOLBOX-EXTENSION.md: -------------------------------------------------------------------------------- ```markdown This document helps you find and install the right Gemini CLI extension to interact with your databases. ## How to Install an Extension To install any of the extensions listed below, use the `gemini extensions install` command followed by the extension's GitHub repository URL. For complete instructions on finding, installing, and managing extensions, please see the [official Gemini CLI extensions documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/extensions/index.md). 
**Example Installation Command:** ```bash gemini extensions install https://github.com/gemini-cli-extensions/EXTENSION_NAME ``` Make sure the user knows: * These commands are not supported from within the CLI * These commands will only be reflected in active CLI sessions on restart * Extensions require Application Default Credentials in your environment. See [Set up ADC for a local development environment](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment) to learn how you can provide either your user credentials or service account credentials to ADC in a local development environment. * Most extensions require you to set environment variables to connect to a database. If there is a link provided for the configuration, fetch the web page and return the configuration. ----- ## Find Your Database Extension Find your database or service in the list below to get the correct installation command. **Note on Observability:** Extensions with `-observability` in their name are designed to help you understand the health and performance of your database instances, often by analyzing metrics and logs. 
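Putting the points above together, a typical first-time setup might look like the following sketch. It uses the `cloud-sql-postgresql` extension purely as an example; the environment variable names follow the prebuilt Cloud SQL for PostgreSQL configuration shipped with MCP Toolbox, and every value shown is a placeholder to replace with your own.

```shell
# Connection settings the cloud-sql-postgresql extension expects.
# Variable names follow the prebuilt cloud-sql-postgres configuration;
# all values below are placeholders.
export CLOUD_SQL_POSTGRES_PROJECT="my-project"
export CLOUD_SQL_POSTGRES_REGION="us-central1"
export CLOUD_SQL_POSTGRES_INSTANCE="my-instance"
export CLOUD_SQL_POSTGRES_DATABASE="postgres"
```

With credentials supplied via `gcloud auth application-default login` and the variables above exported, run `gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql` from a regular terminal (not from within the CLI), then restart the Gemini CLI so the new extension is picked up.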
### Google Cloud Managed Databases #### BigQuery * For data analytics and querying: ```bash gemini extensions install https://github.com/gemini-cli-extensions/bigquery-data-analytics ``` Configuration: https://github.com/gemini-cli-extensions/bigquery-data-analytics/tree/main?tab=readme-ov-file#configuration * For conversational analytics (using natural language): ```bash gemini extensions install https://github.com/gemini-cli-extensions/bigquery-conversational-analytics ``` Configuration: https://github.com/gemini-cli-extensions/bigquery-conversational-analytics/tree/main?tab=readme-ov-file#configuration #### Cloud SQL for MySQL * Main Extension: ```bash gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-mysql ``` Configuration: https://github.com/gemini-cli-extensions/cloud-sql-mysql/tree/main?tab=readme-ov-file#configuration * Observability: ```bash gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-mysql-observability ``` If you are looking for self-hosted MySQL, consider the `mysql` extension. #### Cloud SQL for PostgreSQL * Main Extension: ```bash gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql ``` Configuration: https://github.com/gemini-cli-extensions/cloud-sql-postgresql/tree/main?tab=readme-ov-file#configuration * Observability: ```bash gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql-observability ``` If you are looking for other PostgreSQL options, consider the `postgres` extension for self-hosted instances, or the `alloydb` extension for AlloyDB for PostgreSQL. 
#### Cloud SQL for SQL Server * Main Extension: ```bash gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-sqlserver ``` Configuration: https://github.com/gemini-cli-extensions/cloud-sql-sqlserver/tree/main?tab=readme-ov-file#configuration * Observability: ```bash gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-sqlserver-observability ``` If you are looking for self-hosted SQL Server, consider the `sql-server` extension. #### AlloyDB for PostgreSQL * Main Extension: ```bash gemini extensions install https://github.com/gemini-cli-extensions/alloydb ``` Configuration: https://github.com/gemini-cli-extensions/alloydb/tree/main?tab=readme-ov-file#configuration * Observability: ```bash gemini extensions install https://github.com/gemini-cli-extensions/alloydb-observability ``` If you are looking for other PostgreSQL options, consider the `postgres` extension for self-hosted instances, or the `cloud-sql-postgresql` extension for Cloud SQL for PostgreSQL. 
#### Spanner * For querying Spanner databases: ```bash gemini extensions install https://github.com/gemini-cli-extensions/spanner ``` Configuration: https://github.com/gemini-cli-extensions/spanner/tree/main?tab=readme-ov-file#configuration #### Firestore * For querying Firestore in Native Mode: ```bash gemini extensions install https://github.com/gemini-cli-extensions/firestore-native ``` Configuration: https://github.com/gemini-cli-extensions/firestore-native/tree/main?tab=readme-ov-file#configuration ### Other Google Cloud Data Services #### Dataplex * For interacting with Dataplex data lakes and assets: ```bash gemini extensions install https://github.com/gemini-cli-extensions/dataplex ``` Configuration: https://github.com/gemini-cli-extensions/dataplex/tree/main?tab=readme-ov-file#configuration #### Looker * For querying Looker instances: ```bash gemini extensions install https://github.com/gemini-cli-extensions/looker ``` Configuration: https://github.com/gemini-cli-extensions/looker/tree/main?tab=readme-ov-file#configuration ### Other Database Engines These extensions are for connecting to database instances not managed by Cloud SQL (e.g., self-hosted on-prem, on a VM, or in another cloud). * MySQL: ```bash gemini extensions install https://github.com/gemini-cli-extensions/mysql ``` Configuration: https://github.com/gemini-cli-extensions/mysql/tree/main?tab=readme-ov-file#configuration If you are looking for Google Cloud managed MySQL, consider the `cloud-sql-mysql` extension. * PostgreSQL: ```bash gemini extensions install https://github.com/gemini-cli-extensions/postgres ``` Configuration: https://github.com/gemini-cli-extensions/postgres/tree/main?tab=readme-ov-file#configuration If you are looking for Google Cloud managed PostgreSQL, consider the `cloud-sql-postgresql` or `alloydb` extensions. 
* SQL Server: ```bash gemini extensions install https://github.com/gemini-cli-extensions/sql-server ``` Configuration: https://github.com/gemini-cli-extensions/sql-server/tree/main?tab=readme-ov-file#configuration If you are looking for Google Cloud managed SQL Server, consider the `cloud-sql-sqlserver` extension. ### Custom Tools #### MCP Toolbox * For connecting to MCP Toolbox servers: This extension can be used with any Google Cloud database to build custom tools. For more information, see the [MCP Toolbox documentation](https://googleapis.github.io/genai-toolbox/getting-started/introduction/). ```bash gemini extensions install https://github.com/gemini-cli-extensions/mcp-toolbox ``` Configuration: https://github.com/gemini-cli-extensions/mcp-toolbox/tree/main?tab=readme-ov-file#configuration ```