This is page 4 of 6. Use http://codebase.md/apollographql/apollo-mcp-server?lines=false&page={x} to view the full context.
# Directory Structure
```
├── .cargo
│   └── config.toml
├── .changesets
│   └── README.md
├── .envrc
├── .github
│   ├── CODEOWNERS
│   ├── renovate.json5
│   └── workflows
│       ├── canary-release.yml
│       ├── ci.yml
│       ├── prep-release.yml
│       ├── release-bins.yml
│       ├── release-container.yml
│       ├── sync-develop.yml
│       └── verify-changeset.yml
├── .gitignore
├── .idea
│   └── runConfigurations
│       ├── clippy.xml
│       ├── format___test___clippy.xml
│       ├── format.xml
│       ├── Run_spacedevs.xml
│       └── Test_apollo_mcp_server.xml
├── .vscode
│   ├── extensions.json
│   ├── launch.json
│   ├── settings.json
│   └── tasks.json
├── apollo.config.json
├── Cargo.lock
├── Cargo.toml
├── CHANGELOG_SECTION.md
├── CHANGELOG.md
├── clippy.toml
├── codecov.yml
├── CONTRIBUTING.md
├── crates
│   ├── apollo-mcp-registry
│   │   ├── Cargo.toml
│   │   └── src
│   │       ├── files.rs
│   │       ├── lib.rs
│   │       ├── logging.rs
│   │       ├── platform_api
│   │       │   ├── operation_collections
│   │       │   │   ├── collection_poller.rs
│   │       │   │   ├── error.rs
│   │       │   │   ├── event.rs
│   │       │   │   └── operation_collections.graphql
│   │       │   ├── operation_collections.rs
│   │       │   └── platform-api.graphql
│   │       ├── platform_api.rs
│   │       ├── testdata
│   │       │   ├── minimal_supergraph.graphql
│   │       │   └── supergraph.graphql
│   │       ├── uplink
│   │       │   ├── persisted_queries
│   │       │   │   ├── event.rs
│   │       │   │   ├── manifest_poller.rs
│   │       │   │   ├── manifest.rs
│   │       │   │   └── persisted_queries_manifest_query.graphql
│   │       │   ├── persisted_queries.rs
│   │       │   ├── schema
│   │       │   │   ├── event.rs
│   │       │   │   ├── schema_query.graphql
│   │       │   │   └── schema_stream.rs
│   │       │   ├── schema.rs
│   │       │   ├── snapshots
│   │       │   │   ├── apollo_mcp_registry__uplink__schema__tests__schema_by_url_all_fail@logs.snap
│   │       │   │   ├── apollo_mcp_registry__uplink__schema__tests__schema_by_url_fallback@logs.snap
│   │       │   │   └── apollo_mcp_registry__uplink__schema__tests__schema_by_url@logs.snap
│   │       │   └── uplink.graphql
│   │       └── uplink.rs
│   ├── apollo-mcp-server
│   │   ├── build.rs
│   │   ├── Cargo.toml
│   │   ├── src
│   │   │   ├── auth
│   │   │   │   ├── networked_token_validator.rs
│   │   │   │   ├── protected_resource.rs
│   │   │   │   ├── valid_token.rs
│   │   │   │   └── www_authenticate.rs
│   │   │   ├── auth.rs
│   │   │   ├── config_schema.rs
│   │   │   ├── cors.rs
│   │   │   ├── custom_scalar_map.rs
│   │   │   ├── errors.rs
│   │   │   ├── event.rs
│   │   │   ├── explorer.rs
│   │   │   ├── graphql.rs
│   │   │   ├── headers.rs
│   │   │   ├── health.rs
│   │   │   ├── introspection
│   │   │   │   ├── minify.rs
│   │   │   │   ├── snapshots
│   │   │   │   │   └── apollo_mcp_server__introspection__minify__tests__minify_schema.snap
│   │   │   │   ├── tools
│   │   │   │   │   ├── execute.rs
│   │   │   │   │   ├── introspect.rs
│   │   │   │   │   ├── search.rs
│   │   │   │   │   ├── snapshots
│   │   │   │   │   │   └── apollo_mcp_server__introspection__tools__search__tests__search_tool.snap
│   │   │   │   │   ├── testdata
│   │   │   │   │   │   └── schema.graphql
│   │   │   │   │   └── validate.rs
│   │   │   │   └── tools.rs
│   │   │   ├── introspection.rs
│   │   │   ├── json_schema.rs
│   │   │   ├── lib.rs
│   │   │   ├── main.rs
│   │   │   ├── meter.rs
│   │   │   ├── operations
│   │   │   │   ├── mutation_mode.rs
│   │   │   │   ├── operation_source.rs
│   │   │   │   ├── operation.rs
│   │   │   │   ├── raw_operation.rs
│   │   │   │   ├── schema_walker
│   │   │   │   │   ├── name.rs
│   │   │   │   │   └── type.rs
│   │   │   │   └── schema_walker.rs
│   │   │   ├── operations.rs
│   │   │   ├── runtime
│   │   │   │   ├── config.rs
│   │   │   │   ├── endpoint.rs
│   │   │   │   ├── filtering_exporter.rs
│   │   │   │   ├── graphos.rs
│   │   │   │   ├── introspection.rs
│   │   │   │   ├── logging
│   │   │   │   │   ├── defaults.rs
│   │   │   │   │   ├── log_rotation_kind.rs
│   │   │   │   │   └── parsers.rs
│   │   │   │   ├── logging.rs
│   │   │   │   ├── operation_source.rs
│   │   │   │   ├── overrides.rs
│   │   │   │   ├── schema_source.rs
│   │   │   │   ├── schemas.rs
│   │   │   │   ├── telemetry
│   │   │   │   │   └── sampler.rs
│   │   │   │   └── telemetry.rs
│   │   │   ├── runtime.rs
│   │   │   ├── sanitize.rs
│   │   │   ├── schema_tree_shake.rs
│   │   │   ├── server
│   │   │   │   ├── states
│   │   │   │   │   ├── configuring.rs
│   │   │   │   │   ├── operations_configured.rs
│   │   │   │   │   ├── running.rs
│   │   │   │   │   ├── schema_configured.rs
│   │   │   │   │   └── starting.rs
│   │   │   │   └── states.rs
│   │   │   ├── server.rs
│   │   │   └── telemetry_attributes.rs
│   │   └── telemetry.toml
│   └── apollo-schema-index
│       ├── Cargo.toml
│       └── src
│           ├── error.rs
│           ├── lib.rs
│           ├── path.rs
│           ├── snapshots
│           │   ├── apollo_schema_index__tests__search.snap
│           │   └── apollo_schema_index__traverse__tests__schema_traverse.snap
│           ├── testdata
│           │   └── schema.graphql
│           └── traverse.rs
├── docs
│   └── source
│       ├── _sidebar.yaml
│       ├── auth.mdx
│       ├── best-practices.mdx
│       ├── config-file.mdx
│       ├── cors.mdx
│       ├── custom-scalars.mdx
│       ├── debugging.mdx
│       ├── define-tools.mdx
│       ├── deploy.mdx
│       ├── guides
│       │   └── auth-auth0.mdx
│       ├── health-checks.mdx
│       ├── images
│       │   ├── auth0-permissions-enable.png
│       │   ├── mcp-getstarted-inspector-http.jpg
│       │   └── mcp-getstarted-inspector-stdio.jpg
│       ├── index.mdx
│       ├── licensing.mdx
│       ├── limitations.mdx
│       ├── quickstart.mdx
│       ├── run.mdx
│       └── telemetry.mdx
├── e2e
│   └── mcp-server-tester
│       ├── local-operations
│       │   ├── api.graphql
│       │   ├── config.yaml
│       │   ├── operations
│       │   │   ├── ExploreCelestialBodies.graphql
│       │   │   ├── GetAstronautDetails.graphql
│       │   │   ├── GetAstronautsCurrentlyInSpace.graphql
│       │   │   └── SearchUpcomingLaunches.graphql
│       │   └── tool-tests.yaml
│       ├── pq-manifest
│       │   ├── api.graphql
│       │   ├── apollo.json
│       │   ├── config.yaml
│       │   └── tool-tests.yaml
│       ├── run_tests.sh
│       └── server-config.template.json
├── flake.lock
├── flake.nix
├── graphql
│   ├── TheSpaceDevs
│   │   ├── .vscode
│   │   │   ├── extensions.json
│   │   │   └── tasks.json
│   │   ├── api.graphql
│   │   ├── apollo.config.json
│   │   ├── config.yaml
│   │   ├── operations
│   │   │   ├── ExploreCelestialBodies.graphql
│   │   │   ├── GetAstronautDetails.graphql
│   │   │   ├── GetAstronautsCurrentlyInSpace.graphql
│   │   │   └── SearchUpcomingLaunches.graphql
│   │   ├── persisted_queries
│   │   │   └── apollo.json
│   │   ├── persisted_queries.config.json
│   │   ├── README.md
│   │   └── supergraph.yaml
│   └── weather
│       ├── api.graphql
│       ├── config.yaml
│       ├── operations
│       │   ├── alerts.graphql
│       │   ├── all.graphql
│       │   └── forecast.graphql
│       ├── persisted_queries
│       │   └── apollo.json
│       ├── supergraph.graphql
│       ├── supergraph.yaml
│       └── weather.graphql
├── LICENSE
├── macos-entitlements.plist
├── nix
│   ├── apollo-mcp.nix
│   ├── cargo-zigbuild.patch
│   ├── mcp-server-tools
│   │   ├── default.nix
│   │   ├── node-generated
│   │   │   ├── default.nix
│   │   │   ├── node-env.nix
│   │   │   └── node-packages.nix
│   │   ├── node-mcp-servers.json
│   │   └── README.md
│   └── mcphost.nix
├── README.md
├── rust-toolchain.toml
├── scripts
│   ├── nix
│   │   └── install.sh
│   └── windows
│       └── install.ps1
└── xtask
    ├── Cargo.lock
    ├── Cargo.toml
    └── src
        ├── commands
        │   ├── changeset
        │   │   ├── matching_pull_request.graphql
        │   │   ├── matching_pull_request.rs
        │   │   ├── mod.rs
        │   │   ├── scalars.rs
        │   │   └── snapshots
        │   │       ├── xtask__commands__changeset__tests__it_templatizes_with_multiple_issues_in_title_and_multiple_prs_in_footer.snap
        │   │       ├── xtask__commands__changeset__tests__it_templatizes_with_multiple_issues_in_title.snap
        │   │       ├── xtask__commands__changeset__tests__it_templatizes_with_multiple_prs_in_footer.snap
        │   │       ├── xtask__commands__changeset__tests__it_templatizes_with_neither_issues_or_prs.snap
        │   │       ├── xtask__commands__changeset__tests__it_templatizes_with_prs_in_title_when_empty_issues.snap
        │   │       └── xtask__commands__changeset__tests__it_templatizes_without_prs_in_title_when_issues_present.snap
        │   └── mod.rs
        ├── lib.rs
        └── main.rs
```
# Files
--------------------------------------------------------------------------------
/crates/apollo-schema-index/src/lib.rs:
--------------------------------------------------------------------------------
```rust
//! Library for indexing and searching GraphQL schemas.
//!
//! To build the index, the types in the schema are traversed depth-first, starting with a set of
//! supplied root types (Query, Mutation, Subscription). Each type encountered in the traversal is
//! indexed by:
//!
//! * The type name
//! * The type description and field descriptions
//! * The field names and types
//!
//! Searching for a set of terms returns the top root paths to types matching the search terms.
//! A root path is a path from a root type (Query, Mutation, or Subscription) to the type. This
//! provides not only information about the type itself, but also how to construct a query to
//! retrieve that type.
//!
//! Shorter paths are preferred by a customizable boost factor. If parent types in the path also
//! match the search terms, a customizable portion of their scores are added to the path score.
//! The total number of matching types considered can be customized, as can the maximum number of
//! paths to each type (types may be reachable by more than one path - the shortest paths to root
//! take precedence over longer paths).
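//!
//! A minimal usage sketch (the schema string and memory budget are illustrative
//! assumptions; see the test at the bottom of this file for a runnable variant):
//!
//!     use apollo_compiler::Schema;
//!     use apollo_schema_index::{OperationType, Options, SchemaIndex};
//!     use enumset::EnumSet;
//!
//!     let schema = Schema::parse("type Query { hello: String }", "schema.graphql")
//!         .expect("parse failed")
//!         .validate()
//!         .expect("validation failed");
//!     // Index the Query root with a 15 MB indexing budget.
//!     let index = SchemaIndex::new(&schema, EnumSet::only(OperationType::Query), 15_000_000)
//!         .expect("indexing failed");
//!     // Results are scored paths from a root type to each matching type.
//!     for path in index
//!         .search(vec!["hello".to_string()], Options::default())
//!         .expect("search failed")
//!     {
//!         println!("{path}");
//!     }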
use crate::path::PathNode;
use apollo_compiler::ast::{NamedType, OperationType as AstOperationType};
use apollo_compiler::collections::IndexMap;
use apollo_compiler::schema::ExtendedType;
use apollo_compiler::validation::Valid;
use apollo_compiler::{Name, Schema};
use enumset::{EnumSet, EnumSetType};
use error::{IndexingError, SearchError};
use itertools::Itertools;
use path::Scored;
use std::collections::{HashMap, HashSet, VecDeque};
use std::time::Instant;
use tantivy::collector::TopDocs;
use tantivy::query::{BooleanQuery, Occur, Query, TermQuery};
use tantivy::schema::{Field, IndexRecordOption, TextFieldIndexing, TextOptions, Value};
use tantivy::tokenizer::{Language, LowerCaser, SimpleTokenizer, Stemmer, TextAnalyzer};
use tantivy::{
Index, TantivyDocument, Term,
schema::{STORED, Schema as TantivySchema},
};
use tracing::{Level, debug, error, info, warn};
use traverse::SchemaExt;
pub mod error;
mod path;
mod traverse;
pub const TYPE_NAME_FIELD: &str = "type_name";
pub const DESCRIPTION_FIELD: &str = "description";
pub const FIELDS_FIELD: &str = "fields";
pub const RAW_TYPE_NAME_FIELD: &str = "raw_type_name";
pub const REFERENCING_TYPES_FIELD: &str = "referencing_types";
/// Types of operations to be included in the schema index. Unlike the AST types, these types can
/// be included in an [`EnumSet`](EnumSet).
#[derive(EnumSetType, Debug)]
pub enum OperationType {
Query,
Mutation,
Subscription,
}
impl From<AstOperationType> for OperationType {
fn from(value: AstOperationType) -> Self {
match value {
AstOperationType::Query => OperationType::Query,
AstOperationType::Mutation => OperationType::Mutation,
AstOperationType::Subscription => OperationType::Subscription,
}
}
}
impl From<OperationType> for AstOperationType {
fn from(value: OperationType) -> Self {
match value {
OperationType::Query => AstOperationType::Query,
OperationType::Mutation => AstOperationType::Mutation,
OperationType::Subscription => AstOperationType::Subscription,
}
}
}
pub struct Options {
/// The maximum number of matching schema types to include in the results
pub max_type_matches: usize,
/// The maximum number of paths to root to include for each matching schema type
pub max_paths_per_type: usize,
/// The boost factor applied to shorter paths to root (0.0 for no boost, 1.0 for 100% boost)
pub short_path_boost_factor: f32,
/// The percentage of the score of each parent type added to the overall score of the path
/// to root (0.0 for 0%, 1.0 for 100%)
pub parent_match_boost_factor: f32,
}
impl Default for Options {
fn default() -> Self {
Self {
max_type_matches: 10,
max_paths_per_type: 3,
short_path_boost_factor: 0.5,
parent_match_boost_factor: 0.2,
}
}
}
#[derive(Clone)]
pub struct SchemaIndex {
inner: Index,
text_analyzer: TextAnalyzer,
raw_type_name_field: Field,
type_name_field: Field,
description_field: Field,
fields_field: Field,
referencing_types_field: Field,
}
impl SchemaIndex {
#[tracing::instrument(skip_all, name = "schema_index")]
pub fn new(
schema: &Valid<Schema>,
root_types: EnumSet<OperationType>,
index_memory_bytes: usize,
) -> Result<Self, IndexingError> {
let start_time = Instant::now();
// Register a custom analyzer with English stemming and lowercasing
// TODO: support other languages
let text_analyzer = TextAnalyzer::builder(SimpleTokenizer::default())
.filter(LowerCaser)
.filter(Stemmer::new(Language::English))
.build();
// Create the schema builder and add fields with the custom analyzer
let mut index_schema = TantivySchema::builder();
let type_name_field = index_schema.add_text_field(
TYPE_NAME_FIELD,
TextOptions::default()
.set_indexing_options(TextFieldIndexing::default().set_tokenizer("en_stem"))
.set_stored(),
);
let description_field = index_schema.add_text_field(
DESCRIPTION_FIELD,
TextOptions::default()
.set_indexing_options(TextFieldIndexing::default().set_tokenizer("en_stem"))
.set_stored(),
);
let fields_field = index_schema.add_text_field(
FIELDS_FIELD,
TextOptions::default()
.set_indexing_options(TextFieldIndexing::default().set_tokenizer("en_stem"))
.set_stored(),
);
// The raw type name is indexed as the exact name (no stemming or lowercasing)
let raw_type_name_field = index_schema.add_text_field(
RAW_TYPE_NAME_FIELD,
TextOptions::default()
.set_indexing_options(TextFieldIndexing::default().set_tokenizer("raw"))
.set_stored(),
);
let referencing_types_field = index_schema.add_text_field(REFERENCING_TYPES_FIELD, STORED);
// Create the index
let index_schema = index_schema.build();
let index = Index::create_in_ram(index_schema);
index
.tokenizers()
.register("en_stem", text_analyzer.clone());
// Map every type in the schema to the types referencing it
let mut index_writer = index.writer(index_memory_bytes)?;
let mut type_references: HashMap<String, Vec<String>> = HashMap::default();
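// Each reference is encoded as "Type", "Type#field", or "Type#field#arg1,arg2";
// `search` below splits on '#' to reconstruct these path segments.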
for (extended_type, path) in schema.traverse(root_types) {
let entry = type_references
.entry(extended_type.name().to_string())
.or_default();
if let Some((ref_type, field_name, field_args)) = path.referencing_type() {
if let Some(field_name) = field_name {
entry.push(format!(
"{}#{}{}",
ref_type,
field_name.as_str(),
if field_args.is_empty() {
"".to_string()
} else {
format!("#{}", field_args.iter().join(","))
}
));
} else {
entry.push(ref_type.to_string())
}
}
}
if tracing::enabled!(Level::DEBUG) {
for (type_name, references) in &type_references {
debug!("Type '{}' is referenced by: {:?}", type_name, references);
}
}
// Build an index of each type
for (type_name, references) in &type_references {
let type_name = NamedType::new_unchecked(type_name.as_str());
let extended_type = if let Some(extended_type) = schema.types.get(&type_name) {
extended_type
} else {
// This can never really happen since we got the type name from the schema above
continue;
};
if extended_type.is_built_in() {
continue;
}
// Create a document for each type
let mut doc = TantivyDocument::default();
doc.add_text(type_name_field, extended_type.name());
doc.add_text(raw_type_name_field, extended_type.name());
doc.add_text(
description_field,
extended_type
.description()
.map(|d| d.to_string())
.unwrap_or_default(),
);
for ref_type in references {
doc.add_text(referencing_types_field, ref_type);
}
let fields = match extended_type {
ExtendedType::Object(obj) => obj
.fields
.iter()
.map(|(name, field)| format!("{}: {}", name, field.ty.inner_named_type()))
.collect::<Vec<_>>()
.join(", "),
ExtendedType::Interface(interface) => interface
.fields
.iter()
.map(|(name, field)| format!("{}: {}", name, field.ty.inner_named_type()))
.collect::<Vec<_>>()
.join(", "),
ExtendedType::InputObject(input) => input
.fields
.iter()
.map(|(name, field)| format!("{}: {}", name, field.ty.inner_named_type()))
.collect::<Vec<_>>()
.join(", "),
ExtendedType::Enum(enum_type) => format!(
"{}: {}",
enum_type.name,
enum_type
.values
.iter()
.map(|(name, _)| name.to_string())
.collect::<Vec<_>>()
.join(" | ")
),
_ => String::new(),
};
doc.add_text(fields_field, &fields);
let field_descriptions = match extended_type {
ExtendedType::Enum(enum_type) => enum_type
.values
.iter()
.flat_map(|(_, value)| value.description.as_ref())
.map(|node| node.as_str())
.collect::<Vec<_>>()
.join("\n"),
ExtendedType::Object(obj) => obj
.fields
.iter()
.flat_map(|(_, field)| field.description.as_ref())
.map(|node| node.as_str())
.collect::<Vec<_>>()
.join("\n"),
ExtendedType::Interface(interface) => interface
.fields
.iter()
.flat_map(|(_, field)| field.description.as_ref())
.map(|node| node.as_str())
.collect::<Vec<_>>()
.join("\n"),
ExtendedType::InputObject(input) => input
.fields
.iter()
.flat_map(|(_, field)| field.description.as_ref())
.map(|node| node.as_str())
.collect::<Vec<_>>()
.join("\n"),
_ => String::new(),
};
doc.add_text(description_field, &field_descriptions);
index_writer.add_document(doc)?;
}
index_writer.commit()?;
let elapsed = start_time.elapsed();
info!("Indexed {} types in {:.2?}", type_references.len(), elapsed);
Ok(Self {
inner: index,
text_analyzer,
raw_type_name_field,
type_name_field,
description_field,
fields_field,
referencing_types_field,
})
}
/// Search the schema for a set of terms
pub fn search<I>(
&self,
terms: I,
options: Options,
) -> Result<Vec<Scored<PathNode>>, SearchError>
where
I: IntoIterator<Item = String>,
{
let searcher = self.inner.reader()?.searcher();
let mut root_paths: Vec<Scored<PathNode>> = Default::default();
let mut scores: IndexMap<String, f32> = Default::default();
let query = self.query(terms);
debug!("Index query: {:?}", query);
// Get the top GraphQL schema types matching the search terms
let top_docs = searcher.search(&query, &TopDocs::with_limit(100))?;
// Map each type name to its score
for (score, doc_address) in top_docs {
let doc: TantivyDocument = searcher.doc(doc_address)?;
if let Some(type_name) = doc
.get_first(self.raw_type_name_field)
.and_then(|v| v.as_str())
{
debug!(
"Explanation for {type_name}: {:?}",
query.explain(&searcher, doc_address)?
);
scores.insert(type_name.to_string(), score);
} else {
// This should never happen, since every document we add has this field defined
error!("Doc address {doc_address:?} missing raw type name field");
}
}
// For the top M types, compute the top N root paths to that type
for (type_name, score) in scores.iter().take(options.max_type_matches) {
let mut root_path_score = *score;
// Build up root paths by looking up referencing types
let mut visited = HashSet::new();
let mut queue = VecDeque::new();
let mut root_path_count = 0usize;
// Start with the current type as a Path
queue.push_back(PathNode::new(NamedType::new_unchecked(type_name)));
while let Some(current_path) = queue.pop_front() {
if root_path_count >= options.max_paths_per_type {
break;
}
let current_type = current_path.node_type.to_string();
visited.insert(current_type.clone());
// Create a query to find the document for the current type
let term = Term::from_field_text(self.raw_type_name_field, current_type.as_str());
let type_query = TermQuery::new(term, IndexRecordOption::Basic);
let type_search = searcher.search(&type_query, &TopDocs::with_limit(1))?;
let current_type_doc: Option<TantivyDocument> = type_search
.first()
.and_then(|(_, type_doc_address)| searcher.doc(*type_doc_address).ok());
let referencing_types: Vec<String> = if let Some(type_doc) = current_type_doc {
type_doc
.get_all(self.referencing_types_field)
.filter_map(|v| v.as_str().map(|s| s.to_string()))
.collect()
} else {
// This should never happen since the type was found in the schema traversal
warn!(type_name = current_type, "Type not found");
Vec::new()
};
// The score of each type in the root path contributes to the total score of the path
if let Some(score) = scores.get(&current_type) {
root_path_score += options.parent_match_boost_factor * *score;
}
if referencing_types.is_empty() {
// This is a root type (no referencing types)
let root_path = current_path.clone();
root_paths.push(Scored::new(root_path, root_path_score));
root_path_count += 1;
} else {
// Continue traversing up to a root type
for ref_type in referencing_types {
let (type_name, field_name, field_args) =
if let Some((type_name, field_name)) = ref_type.split_once('#') {
if let Some((field_name, field_args)) = field_name.split_once('#') {
(
type_name.to_string(),
Some(Name::new_unchecked(field_name)),
field_args
.split(',')
.map(|arg| Name::new_unchecked(arg.trim()))
.collect::<Vec<_>>(),
)
} else {
(
type_name.to_string(),
Some(Name::new_unchecked(field_name)),
vec![],
)
}
} else {
(ref_type.clone(), None, vec![])
};
if !visited.contains(&ref_type) {
queue.push_back(current_path.clone().add_parent(
field_name,
field_args,
NamedType::new_unchecked(&type_name),
));
}
}
}
}
}
Ok(self
.boost_shorter_paths(root_paths, options.short_path_boost_factor)
.into_iter()
.sorted_by(|a, b| {
b.score()
.partial_cmp(&a.score())
.unwrap_or(std::cmp::Ordering::Equal)
})
.collect::<Vec<_>>())
}
/// Apply a boost factor to shorter paths
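/// (e.g. with the default factor of 0.5, the shortest path's score is
/// multiplied by 1.5 while the longest path's score is left unchanged)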
fn boost_shorter_paths(
&self,
scored_paths: Vec<Scored<PathNode>>,
boost_factor: f32,
) -> Vec<Scored<PathNode>> {
if scored_paths.is_empty() || boost_factor == 0f32 {
return scored_paths;
}
// Calculate the range of path lengths
let path_lengths: Vec<usize> = scored_paths
.iter()
.map(|scored| scored.inner.len())
.collect();
let min_length = *path_lengths.iter().min().unwrap_or(&1);
let max_length = *path_lengths.iter().max().unwrap_or(&1);
// Only apply boost if there's a range in path lengths
if max_length <= min_length {
return scored_paths;
}
let length_range = (max_length - min_length) as f32;
// Apply normalized boost to each path
scored_paths
.into_iter()
.map(|scored_path| {
let path_length = scored_path.inner.len();
let normalized_length = (path_length - min_length) as f32 / length_range;
// Boost shorter paths: 1.0 for shortest, 0.0 for longest
let length_boost = 1.0 - normalized_length;
let boosted_score = scored_path.score() * (1.0 + boost_factor * length_boost);
Scored::new(scored_path.inner, boosted_score)
})
.collect()
}
/// Create the query used to search for a given set of terms.
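/// Each term is tokenized with the same analyzer used at indexing time
/// (lowercasing plus English stemming), and every token is matched against the
/// type name, description, and fields; at least one token must match.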
fn query<I>(&self, terms: I) -> impl Query
where
I: IntoIterator<Item = String>,
{
let mut text_analyzer = self.text_analyzer.clone();
let mut query = BooleanQuery::new(
terms
.into_iter()
.flat_map(|term| {
let mut terms: Vec<Term> = Vec::new();
let mut token_stream = text_analyzer.token_stream(&term);
token_stream.process(&mut |token| {
terms.push(Term::from_field_text(self.type_name_field, &token.text));
terms.push(Term::from_field_text(self.description_field, &token.text));
terms.push(Term::from_field_text(self.fields_field, &token.text));
});
terms
})
.map(|term| {
(
Occur::Should,
Box::new(TermQuery::new(term, IndexRecordOption::Basic)) as Box<dyn Query>,
)
})
.collect(),
);
query.set_minimum_number_should_match(1);
query
}
}
#[cfg(test)]
mod tests {
use super::*;
use insta::assert_snapshot;
use rstest::{fixture, rstest};
const TEST_SCHEMA: &str = include_str!("testdata/schema.graphql");
#[fixture]
fn schema() -> Valid<Schema> {
Schema::parse(TEST_SCHEMA, "schema.graphql")
.expect("Failed to parse test schema")
.validate()
.expect("Failed to validate test schema")
}
#[rstest]
fn test_search(schema: Valid<Schema>) {
let search = SchemaIndex::new(
&schema,
OperationType::Query | OperationType::Mutation,
15_000_000,
)
.unwrap();
let results = search
.search(vec!["dimensions".to_string()], Options::default())
.unwrap();
assert_snapshot!(
results
.iter()
.take(10)
.map(ToString::to_string)
.collect::<Vec<_>>()
.join("\n")
);
}
}
```
--------------------------------------------------------------------------------
/crates/apollo-mcp-registry/src/uplink.rs:
--------------------------------------------------------------------------------
```rust
use futures::{Stream, StreamExt};
use graphql_client::QueryBody;
use secrecy::ExposeSecret;
pub use secrecy::SecretString;
use std::error::Error as _;
use std::fmt::Debug;
use std::time::Duration;
use tokio::sync::mpsc::channel;
use tokio_stream::wrappers::ReceiverStream;
use tower::BoxError;
use url::Url;
pub mod persisted_queries;
pub mod schema;
const GCP_URL: &str = "https://uplink.api.apollographql.com";
const AWS_URL: &str = "https://aws.uplink.api.apollographql.com";
/// Errors returned by the uplink module
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("http error")]
Http(#[from] reqwest::Error),
#[error("fetch failed from uplink endpoint, and there are no fallback endpoints configured")]
FetchFailedSingle,
#[error("fetch failed from all {url_count} uplink endpoints")]
FetchFailedMultiple { url_count: usize },
#[allow(clippy::enum_variant_names)]
#[error("uplink error: code={code} message={message}")]
UplinkError { code: String, message: String },
#[error("uplink error, the request will not be retried: code={code} message={message}")]
UplinkErrorNoRetry { code: String, message: String },
}
/// Represents a request to Apollo Uplink
#[derive(Debug)]
pub struct UplinkRequest {
pub api_key: String,
pub graph_ref: String,
pub id: Option<String>,
}
/// The response from an Apollo Uplink request
#[derive(Debug)]
pub enum UplinkResponse<Response>
where
Response: Send + Debug + 'static,
{
New {
response: Response,
id: String,
delay: u64,
},
Unchanged {
id: Option<String>,
delay: Option<u64>,
},
Error {
retry_later: bool,
code: String,
message: String,
},
}
/// Endpoint configuration strategies
#[derive(Debug, Clone)]
pub enum Endpoints {
Fallback {
urls: Vec<Url>,
},
#[allow(dead_code)]
RoundRobin {
urls: Vec<Url>,
current: usize,
},
}
impl Default for Endpoints {
#[allow(clippy::expect_used)] // Default URLs are fixed at compile-time
fn default() -> Self {
Self::fallback(
[GCP_URL, AWS_URL]
.iter()
.map(|url| Url::parse(url).expect("default urls must be valid"))
.collect(),
)
}
}
impl Endpoints {
pub fn fallback(urls: Vec<Url>) -> Self {
Endpoints::Fallback { urls }
}
pub fn round_robin(urls: Vec<Url>) -> Self {
Endpoints::RoundRobin { urls, current: 0 }
}
/// Return an iterator of endpoints to check on a poll of uplink.
/// Fallback will always return URLs in the same order.
/// Round-robin will return an iterator that cycles over the URLs, starting at the next URL.
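/// (e.g. with three URLs and `current == 1`, a fully consumed poll yields URLs
/// 2, 3, then 1 and leaves `current == 4`; if only the first URL is tried
/// before a successful fetch, `current` ends at 2, so the next poll starts at
/// URL 3)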
fn iter<'a>(&'a mut self) -> Box<dyn Iterator<Item = &'a Url> + Send + 'a> {
match self {
Endpoints::Fallback { urls } => Box::new(urls.iter()),
Endpoints::RoundRobin { urls, current } => {
// Prevent current from getting large.
*current %= urls.len();
// The iterator cycles, but will skip to the next untried URL and is finally limited by the number of URLs.
// This gives us a sliding window of URLs to try on each poll to uplink.
// The returned iterator will increment current each time it is called.
Box::new(
urls.iter()
.cycle()
.skip(*current)
.inspect(|_| {
*current += 1;
})
.take(urls.len()),
)
}
}
}
pub fn url_count(&self) -> usize {
match self {
Endpoints::Fallback { urls } => urls.len(),
Endpoints::RoundRobin { urls, current: _ } => urls.len(),
}
}
}
/// Configuration for polling Apollo Uplink.
#[derive(Clone, Debug, Default)]
pub struct UplinkConfig {
/// The Apollo key: `<YOUR_GRAPH_API_KEY>`
pub apollo_key: SecretString,
/// The apollo graph reference: `<YOUR_GRAPH_ID>@<VARIANT>`
pub apollo_graph_ref: String,
/// The endpoints polled.
pub endpoints: Option<Endpoints>,
/// The duration between polling
pub poll_interval: Duration,
/// The HTTP client timeout for each poll
pub timeout: Duration,
}
impl UplinkConfig {
/// Mock uplink configuration options for use in tests
/// A nice pattern is to use wiremock to start an uplink mocker and pass the URL here.
pub fn for_tests(uplink_endpoints: Endpoints) -> Self {
Self {
apollo_key: SecretString::from("key"),
apollo_graph_ref: "graph".to_string(),
endpoints: Some(uplink_endpoints),
poll_interval: Duration::from_secs(2),
timeout: Duration::from_secs(5),
}
}
}
/// Regularly fetch from Uplink
/// If URLs are supplied, they will be called round-robin
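///
/// A minimal consumption sketch, mirroring the integration test below
/// (`config` is an assumed, fully populated `UplinkConfig`):
///
///     use futures::StreamExt;
///
///     let mut stream = stream_from_uplink::<schema::SupergraphSdlQuery, String>(config);
///     while let Some(result) = stream.next().await {
///         match result {
///             Ok(sdl) => tracing::info!("received schema ({} bytes)", sdl.len()),
///             Err(err) => tracing::warn!("uplink poll failed: {err}"),
///         }
///     }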
pub fn stream_from_uplink<Query, Response>(
uplink_config: UplinkConfig,
) -> impl Stream<Item = Result<Response, Error>>
where
Query: graphql_client::GraphQLQuery,
<Query as graphql_client::GraphQLQuery>::ResponseData: Into<UplinkResponse<Response>> + Send,
<Query as graphql_client::GraphQLQuery>::Variables: From<UplinkRequest> + Send + Sync,
Response: Send + 'static + Debug,
{
stream_from_uplink_transforming_new_response::<Query, Response, Response>(
uplink_config,
|response| Box::new(Box::pin(async { Ok(response) })),
)
}
/// Like stream_from_uplink, but applies an async transformation function to the
/// result of the HTTP fetch if the response is an UplinkResponse::New. If this
/// function returns Err, we fail over to the next Uplink endpoint, just like if
/// the HTTP fetch itself failed. This serves the use case where an Uplink
/// endpoint's response includes another URL located close to the Uplink
/// endpoint; if that second URL is down, we want to try the next Uplink
/// endpoint rather than fully giving up.
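///
/// A sketch of such a transform, under assumed types (`ManifestQuery`,
/// `ManifestChunkUrl`, `Manifest`, and `fetch_manifest` are hypothetical):
///
///     let stream = stream_from_uplink_transforming_new_response::<ManifestQuery, ManifestChunkUrl, Manifest>(
///         config,
///         // Fetch the URL referenced by the Uplink response; an Err here
///         // triggers failover to the next Uplink endpoint.
///         |url| Box::new(Box::pin(async move { fetch_manifest(url).await })),
///     );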
pub fn stream_from_uplink_transforming_new_response<Query, Response, TransformedResponse>(
mut uplink_config: UplinkConfig,
transform_new_response: impl Fn(
Response,
) -> Box<
dyn Future<Output = Result<TransformedResponse, BoxError>> + Send + Unpin,
> + Send
+ Sync
+ 'static,
) -> impl Stream<Item = Result<TransformedResponse, Error>>
where
Query: graphql_client::GraphQLQuery,
<Query as graphql_client::GraphQLQuery>::ResponseData: Into<UplinkResponse<Response>> + Send,
<Query as graphql_client::GraphQLQuery>::Variables: From<UplinkRequest> + Send + Sync,
Response: Send + 'static + Debug,
TransformedResponse: Send + 'static + Debug,
{
let (sender, receiver) = channel(2);
let client = match reqwest::Client::builder()
.no_gzip()
.timeout(uplink_config.timeout)
.build()
{
Ok(client) => client,
Err(err) => {
tracing::error!("unable to create client to query uplink: {err}", err = err);
return futures::stream::empty().boxed();
}
};
let task = async move {
let mut last_id = None;
let mut endpoints = uplink_config.endpoints.unwrap_or_default();
loop {
let variables = UplinkRequest {
graph_ref: uplink_config.apollo_graph_ref.to_string(),
api_key: uplink_config.apollo_key.expose_secret().to_string(),
id: last_id.clone(),
};
let query_body = Query::build_query(variables.into());
match fetch::<Query, Response, TransformedResponse>(
&client,
&query_body,
&mut endpoints,
&transform_new_response,
)
.await
{
Ok(response) => {
match response {
UplinkResponse::New {
id,
response,
delay,
} => {
last_id = Some(id);
uplink_config.poll_interval = Duration::from_secs(delay);
if let Err(e) = sender.send(Ok(response)).await {
tracing::debug!(
"failed to push to stream. This is likely to be because the server is shutting down: {e}"
);
break;
}
}
UplinkResponse::Unchanged { id, delay } => {
// Preserve behavior for schema uplink errors where id and delay are not reset if they are not provided on error.
if let Some(id) = id {
last_id = Some(id);
}
if let Some(delay) = delay {
uplink_config.poll_interval = Duration::from_secs(delay);
}
}
UplinkResponse::Error {
retry_later,
message,
code,
} => {
let err = if retry_later {
Err(Error::UplinkError { code, message })
} else {
Err(Error::UplinkErrorNoRetry { code, message })
};
if let Err(e) = sender.send(err).await {
tracing::debug!(
"failed to send error to uplink stream. This is likely to be because the server is shutting down: {e}"
);
break;
}
if !retry_later {
break;
}
}
}
}
Err(err) => {
if let Err(e) = sender.send(Err(err)).await {
tracing::debug!(
"failed to send error to uplink stream. This is likely to be because the server is shutting down: {e}"
);
break;
}
}
}
tokio::time::sleep(uplink_config.poll_interval).await;
}
};
// Using tokio::spawn instead of with_current_subscriber to simplify
tokio::task::spawn(task);
ReceiverStream::new(receiver).boxed()
}
async fn fetch<Query, Response, TransformedResponse>(
client: &reqwest::Client,
request_body: &QueryBody<Query::Variables>,
endpoints: &mut Endpoints,
// See stream_from_uplink_transforming_new_response for an explanation of
// this argument.
transform_new_response: &(
impl Fn(
Response,
) -> Box<dyn Future<Output = Result<TransformedResponse, BoxError>> + Send + Unpin>
+ Send
+ Sync
+ 'static
),
) -> Result<UplinkResponse<TransformedResponse>, Error>
where
Query: graphql_client::GraphQLQuery,
<Query as graphql_client::GraphQLQuery>::ResponseData: Into<UplinkResponse<Response>> + Send,
<Query as graphql_client::GraphQLQuery>::Variables: From<UplinkRequest> + Send + Sync,
Response: Send + Debug + 'static,
TransformedResponse: Send + Debug + 'static,
{
for url in endpoints.iter() {
match http_request::<Query>(client, url.as_str(), request_body).await {
Ok(response) => match response.data.map(Into::into) {
None => {}
Some(UplinkResponse::New {
response,
id,
delay,
}) => match transform_new_response(response).await {
Ok(res) => {
return Ok(UplinkResponse::New {
response: res,
id,
delay,
});
}
Err(err) => {
tracing::debug!(
"failed to process results of Uplink response from {url}: {err}. Other endpoints will be tried"
);
continue;
}
},
Some(UplinkResponse::Unchanged { id, delay }) => {
return Ok(UplinkResponse::Unchanged { id, delay });
}
Some(UplinkResponse::Error {
message,
code,
retry_later,
}) => {
return Ok(UplinkResponse::Error {
message,
code,
retry_later,
});
}
},
Err(err) => {
tracing::debug!(
"failed to fetch from Uplink endpoint {url}: {err}. Other endpoints will be tried"
);
}
};
}
let url_count = endpoints.url_count();
if url_count == 1 {
Err(Error::FetchFailedSingle)
} else {
Err(Error::FetchFailedMultiple { url_count })
}
}
async fn http_request<Query>(
client: &reqwest::Client,
url: &str,
request_body: &QueryBody<Query::Variables>,
) -> Result<graphql_client::Response<Query::ResponseData>, reqwest::Error>
where
Query: graphql_client::GraphQLQuery,
{
let res = client
.post(url)
.header("x-apollo-mcp-server-version", env!("CARGO_PKG_VERSION"))
.header("apollographql-client-name", "apollo-mcp-server")
.header("apollographql-client-version", env!("CARGO_PKG_VERSION"))
.json(request_body)
.send()
.await
.inspect_err(|e| {
if let Some(hyper_err) = e.source() &&
let Some(os_err) = hyper_err.source() &&
os_err.to_string().contains("tcp connect error: Cannot assign requested address (os error 99)")
{
tracing::warn!("If your MCP server is executing within a kubernetes pod, this failure may be caused by istio-proxy injection. See https://github.com/apollographql/router/issues/3533 for more details about how to solve this");
}
})?;
tracing::debug!("uplink response {:?}", res);
let response_body: graphql_client::Response<Query::ResponseData> = res.json().await?;
Ok(response_body)
}
#[cfg(test)]
mod test {
use super::*;
use futures::stream::StreamExt;
use secrecy::SecretString;
use std::str::FromStr;
use std::time::Duration;
use url::Url;
#[tokio::test]
async fn test_stream_from_uplink() {
for url in &[GCP_URL, AWS_URL] {
if let (Ok(apollo_key), Ok(apollo_graph_ref)) = (
std::env::var("TEST_APOLLO_KEY"),
std::env::var("TEST_APOLLO_GRAPH_REF"),
) {
let results =
stream_from_uplink::<schema::SupergraphSdlQuery, String>(UplinkConfig {
apollo_key: SecretString::from(apollo_key),
apollo_graph_ref,
endpoints: Some(Endpoints::fallback(vec![
Url::from_str(url).expect("url must be valid"),
])),
poll_interval: Duration::from_secs(1),
timeout: Duration::from_secs(5),
})
.take(1)
.collect::<Vec<_>>()
.await;
let schema = results
.first()
.unwrap_or_else(|| panic!("expected one result from {url}"))
.as_ref()
.unwrap_or_else(|_| panic!("schema should be OK from {url}"));
assert!(schema.contains("type Product"))
}
}
}
#[test]
fn test_uplink_config_for_tests() {
let endpoints = Endpoints::fallback(vec![
Url::from_str("http://test1.example.com").unwrap(),
Url::from_str("http://test2.example.com").unwrap(),
]);
let config = UplinkConfig::for_tests(endpoints.clone());
assert_eq!(config.apollo_key.expose_secret(), "key");
assert_eq!(config.apollo_graph_ref, "graph");
assert_eq!(config.poll_interval, Duration::from_secs(2));
assert_eq!(config.timeout, Duration::from_secs(5));
// Check endpoints
if let Some(Endpoints::Fallback { urls }) = config.endpoints {
assert_eq!(urls.len(), 2);
assert_eq!(urls[0].as_str(), "http://test1.example.com/");
assert_eq!(urls[1].as_str(), "http://test2.example.com/");
} else {
panic!("Expected fallback endpoints");
}
}
#[test]
fn test_endpoints_fallback() {
let urls = vec![
Url::from_str("http://test1.example.com").unwrap(),
Url::from_str("http://test2.example.com").unwrap(),
];
let endpoints = Endpoints::fallback(urls.clone());
if let Endpoints::Fallback {
urls: fallback_urls,
} = endpoints
{
assert_eq!(fallback_urls.len(), 2);
assert_eq!(fallback_urls[0], urls[0]);
assert_eq!(fallback_urls[1], urls[1]);
} else {
panic!("Expected fallback endpoints");
}
}
#[test]
fn test_endpoints_round_robin() {
let urls = vec![
Url::from_str("http://test1.example.com").unwrap(),
Url::from_str("http://test2.example.com").unwrap(),
];
let endpoints = Endpoints::round_robin(urls.clone());
if let Endpoints::RoundRobin {
urls: rr_urls,
current,
} = endpoints
{
assert_eq!(rr_urls.len(), 2);
assert_eq!(rr_urls[0], urls[0]);
assert_eq!(rr_urls[1], urls[1]);
assert_eq!(current, 0);
} else {
panic!("Expected round robin endpoints");
}
}
#[test]
fn test_endpoints_url_count() {
let urls = vec![
Url::from_str("http://test1.example.com").unwrap(),
Url::from_str("http://test2.example.com").unwrap(),
Url::from_str("http://test3.example.com").unwrap(),
];
let fallback = Endpoints::fallback(urls.clone());
assert_eq!(fallback.url_count(), 3);
let round_robin = Endpoints::round_robin(urls);
assert_eq!(round_robin.url_count(), 3);
}
#[test]
fn test_endpoints_iter_fallback() {
let urls = vec![
Url::from_str("http://test1.example.com").unwrap(),
Url::from_str("http://test2.example.com").unwrap(),
];
let mut endpoints = Endpoints::fallback(urls.clone());
{
let iter_urls: Vec<&Url> = endpoints.iter().collect();
assert_eq!(iter_urls.len(), 2);
assert_eq!(iter_urls[0], &urls[0]);
assert_eq!(iter_urls[1], &urls[1]);
}
// Fallback should always return the same order
{
let iter_urls2: Vec<&Url> = endpoints.iter().collect();
assert_eq!(iter_urls2.len(), 2);
assert_eq!(iter_urls2[0], &urls[0]);
assert_eq!(iter_urls2[1], &urls[1]);
}
}
#[test]
fn test_endpoints_iter_round_robin() {
let urls = vec![
Url::from_str("http://test1.example.com").unwrap(),
Url::from_str("http://test2.example.com").unwrap(),
Url::from_str("http://test3.example.com").unwrap(),
];
let mut endpoints = Endpoints::round_robin(urls.clone());
// First iteration should start at index 0
{
let iter_urls1: Vec<&Url> = endpoints.iter().collect();
assert_eq!(iter_urls1.len(), 3);
assert_eq!(iter_urls1[0], &urls[0]);
assert_eq!(iter_urls1[1], &urls[1]);
assert_eq!(iter_urls1[2], &urls[2]);
}
// The first iteration consumed all three URLs, incrementing `current` to 3
// (the inspect closure increments it once per item yielded). On the next
// call, `current` is reduced modulo 3 back to 0, so the second iteration
// starts at index 0 again.
{
let iter_urls2: Vec<&Url> = endpoints.iter().collect();
assert_eq!(iter_urls2.len(), 3);
// After the first iteration consumed 3 items, current should be 3, then 3 % 3 = 0
assert_eq!(iter_urls2[0], &urls[0]);
assert_eq!(iter_urls2[1], &urls[1]);
assert_eq!(iter_urls2[2], &urls[2]);
}
}
#[test]
fn test_endpoints_default() {
let endpoints = Endpoints::default();
assert_eq!(endpoints.url_count(), 2); // GCP_URL and AWS_URL
if let Endpoints::Fallback { urls } = endpoints {
// URLs parsed with trailing slash
assert_eq!(urls[0].as_str(), "https://uplink.api.apollographql.com/");
assert_eq!(
urls[1].as_str(),
"https://aws.uplink.api.apollographql.com/"
);
} else {
panic!("Expected fallback endpoints");
}
}
#[test]
fn test_uplink_config_default() {
let config = UplinkConfig::default();
assert_eq!(config.apollo_key.expose_secret(), "");
assert_eq!(config.apollo_graph_ref, "");
assert!(config.endpoints.is_none());
assert_eq!(config.poll_interval, Duration::from_secs(0));
assert_eq!(config.timeout, Duration::from_secs(0));
}
#[test]
fn test_error_display() {
let error1 = Error::FetchFailedSingle;
assert_eq!(
error1.to_string(),
"fetch failed from uplink endpoint, and there are no fallback endpoints configured"
);
let error2 = Error::FetchFailedMultiple { url_count: 3 };
assert_eq!(
error2.to_string(),
"fetch failed from all 3 uplink endpoints"
);
let error3 = Error::UplinkError {
code: "AUTH_FAILED".to_string(),
message: "Invalid API key".to_string(),
};
assert_eq!(
error3.to_string(),
"uplink error: code=AUTH_FAILED message=Invalid API key"
);
let error4 = Error::UplinkErrorNoRetry {
code: "UNKNOWN_REF".to_string(),
message: "Graph not found".to_string(),
};
assert_eq!(
error4.to_string(),
"uplink error, the request will not be retried: code=UNKNOWN_REF message=Graph not found"
);
}
#[test]
fn test_uplink_request_debug() {
let request = UplinkRequest {
api_key: "test_api_key".to_string(),
graph_ref: "test@main".to_string(),
id: Some("test_id".to_string()),
};
let debug_output = format!("{:?}", request);
assert!(debug_output.contains("test_api_key"));
assert!(debug_output.contains("test@main"));
assert!(debug_output.contains("test_id"));
}
#[test]
fn test_uplink_response_debug() {
let response_new = UplinkResponse::New {
response: "test_response".to_string(),
id: "test_id".to_string(),
delay: 30,
};
let debug_new = format!("{:?}", response_new);
assert!(debug_new.contains("New"));
assert!(debug_new.contains("test_response"));
let response_unchanged = UplinkResponse::<String>::Unchanged {
id: Some("test_id".to_string()),
delay: Some(30),
};
let debug_unchanged = format!("{:?}", response_unchanged);
assert!(debug_unchanged.contains("Unchanged"));
let response_error = UplinkResponse::<String>::Error {
retry_later: true,
code: "RETRY_LATER".to_string(),
message: "Try again".to_string(),
};
let debug_error = format!("{:?}", response_error);
assert!(debug_error.contains("Error"));
assert!(debug_error.contains("retry_later: true"));
}
}
```
--------------------------------------------------------------------------------
/crates/apollo-mcp-registry/src/platform_api/operation_collections/collection_poller.rs:
--------------------------------------------------------------------------------
```rust
use futures::Stream;
use graphql_client::GraphQLQuery;
use reqwest::header::{HeaderMap, HeaderName, HeaderValue};
use secrecy::ExposeSecret;
use std::collections::HashMap;
use std::pin::Pin;
use tokio::sync::mpsc::channel;
use tokio_stream::wrappers::ReceiverStream;
use super::{error::CollectionError, event::CollectionEvent};
use crate::platform_api::PlatformApiConfig;
use operation_collection_default_polling_query::{
OperationCollectionDefaultPollingQueryVariant as PollingDefaultGraphVariant,
OperationCollectionDefaultPollingQueryVariantOnGraphVariantMcpDefaultCollection as PollingDefaultCollection,
};
use operation_collection_default_query::{
OperationCollectionDefaultQueryVariant,
OperationCollectionDefaultQueryVariantOnGraphVariantMcpDefaultCollection as DefaultCollectionResult,
OperationCollectionDefaultQueryVariantOnGraphVariantMcpDefaultCollectionOnOperationCollectionOperations as OperationCollectionDefaultEntry,
};
use operation_collection_entries_query::OperationCollectionEntriesQueryOperationCollectionEntries;
use operation_collection_polling_query::{
OperationCollectionPollingQueryOperationCollection as PollingOperationCollectionResult,
OperationCollectionPollingQueryOperationCollectionOnNotFoundError as PollingNotFoundError,
OperationCollectionPollingQueryOperationCollectionOnPermissionError as PollingPermissionError,
OperationCollectionPollingQueryOperationCollectionOnValidationError as PollingValidationError,
};
use operation_collection_query::{
OperationCollectionQueryOperationCollection as OperationCollectionResult,
OperationCollectionQueryOperationCollectionOnNotFoundError as NotFoundError,
OperationCollectionQueryOperationCollectionOnOperationCollectionOperations as OperationCollectionEntry,
OperationCollectionQueryOperationCollectionOnPermissionError as PermissionError,
OperationCollectionQueryOperationCollectionOnValidationError as ValidationError,
};
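/// Collections with more operations than this are served once but not polled for changes.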
const MAX_COLLECTION_SIZE_FOR_POLLING: usize = 100;
type Timestamp = String;
#[derive(GraphQLQuery)]
#[graphql(
query_path = "src/platform_api/operation_collections/operation_collections.graphql",
schema_path = "src/platform_api/platform-api.graphql",
request_derives = "Debug",
response_derives = "PartialEq, Debug, Deserialize, Clone"
)]
struct OperationCollectionEntriesQuery;
#[derive(GraphQLQuery)]
#[graphql(
query_path = "src/platform_api/operation_collections/operation_collections.graphql",
schema_path = "src/platform_api/platform-api.graphql",
request_derives = "Debug",
response_derives = "PartialEq, Debug, Deserialize"
)]
struct OperationCollectionPollingQuery;
#[derive(GraphQLQuery)]
#[graphql(
query_path = "src/platform_api/operation_collections/operation_collections.graphql",
schema_path = "src/platform_api/platform-api.graphql",
request_derives = "Debug",
response_derives = "PartialEq, Debug, Deserialize"
)]
struct OperationCollectionQuery;
#[derive(GraphQLQuery)]
#[graphql(
query_path = "src/platform_api/operation_collections/operation_collections.graphql",
schema_path = "src/platform_api/platform-api.graphql",
request_derives = "Debug",
response_derives = "PartialEq, Debug, Deserialize"
)]
struct OperationCollectionDefaultQuery;
#[derive(GraphQLQuery)]
#[graphql(
query_path = "src/platform_api/operation_collections/operation_collections.graphql",
schema_path = "src/platform_api/platform-api.graphql",
request_derives = "Debug",
response_derives = "PartialEq, Debug, Deserialize"
)]
struct OperationCollectionDefaultPollingQuery;
async fn handle_poll_result(
previous_updated_at: &mut HashMap<String, OperationData>,
poll: Vec<(String, String)>,
platform_api_config: &PlatformApiConfig,
) -> Result<Option<Vec<OperationData>>, CollectionError> {
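// Diff the polled (id, last_updated_at) pairs against the cache: ids absent
// from the poll are removals; ids with a changed (or unseen) timestamp are
// re-fetched in full below.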
let removed_ids = previous_updated_at.clone();
let removed_ids = removed_ids
.keys()
.filter(|id| poll.iter().all(|(keep_id, _)| keep_id != *id))
.collect::<Vec<_>>();
let changed_ids: Vec<String> = poll
.into_iter()
.filter_map(|(id, last_updated_at)| match previous_updated_at.get(&id) {
Some(previous_operation) if last_updated_at == previous_operation.last_updated_at => {
None
}
_ => Some(id.clone()),
})
.collect();
if changed_ids.is_empty() && removed_ids.is_empty() {
tracing::debug!("no operation changed");
return Ok(None);
}
if !removed_ids.is_empty() {
tracing::info!("removed operation ids: {:?}", removed_ids);
for id in removed_ids {
previous_updated_at.remove(id);
}
}
if !changed_ids.is_empty() {
tracing::debug!("changed operation ids: {:?}", changed_ids);
let full_response = graphql_request::<OperationCollectionEntriesQuery>(
&OperationCollectionEntriesQuery::build_query(
operation_collection_entries_query::Variables {
collection_entry_ids: changed_ids,
},
),
platform_api_config,
)
.await?;
for operation in full_response.operation_collection_entries {
previous_updated_at.insert(
operation.id.clone(),
OperationData::from(&operation).clone(),
);
}
}
Ok(Some(previous_updated_at.clone().into_values().collect()))
}
fn is_collection_error_transient(error: &CollectionError) -> bool {
match error {
CollectionError::Request(req_err) => {
// Check if the underlying reqwest error is transient
req_err.is_connect()
|| req_err.is_timeout()
|| req_err.is_request()
|| req_err.status().is_some_and(|status| {
status.is_server_error() || status == reqwest::StatusCode::TOO_MANY_REQUESTS
})
}
_ => false,
}
}
#[derive(Clone)]
pub struct OperationData {
id: String,
last_updated_at: String,
pub source_text: String,
pub headers: Option<Vec<(String, String)>>,
pub variables: Option<String>,
}
impl From<&OperationCollectionEntry> for OperationData {
fn from(operation: &OperationCollectionEntry) -> Self {
Self {
id: operation.id.clone(),
last_updated_at: operation.last_updated_at.clone(),
source_text: operation
.operation_data
.current_operation_revision
.body
.clone(),
headers: operation
.operation_data
.current_operation_revision
.headers
.as_ref()
.map(|headers| {
headers
.iter()
.map(|h| (h.name.clone(), h.value.clone()))
.collect()
}),
variables: operation
.operation_data
.current_operation_revision
.variables
.clone(),
}
}
}
impl From<&OperationCollectionEntriesQueryOperationCollectionEntries> for OperationData {
fn from(operation: &OperationCollectionEntriesQueryOperationCollectionEntries) -> Self {
Self {
id: operation.id.clone(),
last_updated_at: operation.last_updated_at.clone(),
source_text: operation
.operation_data
.current_operation_revision
.body
.clone(),
headers: operation
.operation_data
.current_operation_revision
.headers
.as_ref()
.map(|headers| {
headers
.iter()
.map(|h| (h.name.clone(), h.value.clone()))
.collect()
}),
variables: operation
.operation_data
.current_operation_revision
.variables
.clone(),
}
}
}
impl From<&OperationCollectionDefaultEntry> for OperationData {
fn from(operation: &OperationCollectionDefaultEntry) -> Self {
Self {
id: operation.id.clone(),
last_updated_at: operation.last_updated_at.clone(),
source_text: operation
.operation_data
.current_operation_revision
.body
.clone(),
headers: operation
.operation_data
.current_operation_revision
.headers
.as_ref()
.map(|headers| {
headers
.iter()
.map(|h| (h.name.clone(), h.value.clone()))
.collect()
}),
variables: operation
.operation_data
.current_operation_revision
.variables
.clone(),
}
}
}
#[derive(Clone, Debug)]
pub enum CollectionSource {
Id(String, PlatformApiConfig),
Default(String, PlatformApiConfig),
}
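/// Send the initial set of operations to the stream, caching each by id.
/// Returns `false` when the receiver is gone or the collection exceeds
/// `MAX_COLLECTION_SIZE_FOR_POLLING`, in which case the caller should not poll.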
async fn write_init_response(
sender: &tokio::sync::mpsc::Sender<CollectionEvent>,
previous_updated_at: &mut HashMap<String, OperationData>,
operations: impl Iterator<Item = OperationData>,
) -> bool {
let operations = operations
.inspect(|operation_data| {
previous_updated_at.insert(operation_data.id.clone(), operation_data.clone());
})
.collect::<Vec<_>>();
let operation_count = operations.len();
if let Err(e) = sender
.send(CollectionEvent::UpdateOperationCollection(operations))
.await
{
tracing::debug!(
"failed to push to stream. This is likely to be because the server is shutting down: {e}"
);
false
} else if operation_count > MAX_COLLECTION_SIZE_FOR_POLLING {
tracing::warn!(
"Operation Collection polling disabled. Collection has {} operations which exceeds the maximum of {}.",
operation_count,
MAX_COLLECTION_SIZE_FOR_POLLING
);
false
} else {
true
}
}
impl CollectionSource {
pub fn into_stream(self) -> Pin<Box<dyn Stream<Item = CollectionEvent> + Send>> {
match self {
CollectionSource::Id(ref id, ref platform_api_config) => {
self.collection_id_stream(id.clone(), platform_api_config.clone())
}
CollectionSource::Default(ref graph_ref, ref platform_api_config) => {
self.default_collection_stream(graph_ref.clone(), platform_api_config.clone())
}
}
}
fn collection_id_stream(
&self,
collection_id: String,
platform_api_config: PlatformApiConfig,
) -> Pin<Box<dyn Stream<Item = CollectionEvent> + Send>> {
let (sender, receiver) = channel(2);
tokio::task::spawn(async move {
let mut previous_updated_at = HashMap::new();
match graphql_request::<OperationCollectionQuery>(
&OperationCollectionQuery::build_query(operation_collection_query::Variables {
operation_collection_id: collection_id.clone(),
}),
&platform_api_config,
)
.await
{
Ok(response) => match response.operation_collection {
OperationCollectionResult::NotFoundError(NotFoundError { message })
| OperationCollectionResult::PermissionError(PermissionError { message })
| OperationCollectionResult::ValidationError(ValidationError { message }) => {
if let Err(e) = sender
.send(CollectionEvent::CollectionError(CollectionError::Response(
message,
)))
.await
{
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
return;
}
}
OperationCollectionResult::OperationCollection(collection) => {
let should_poll = write_init_response(
&sender,
&mut previous_updated_at,
collection.operations.iter().map(OperationData::from),
)
.await;
if !should_poll {
return;
}
}
},
Err(err) => {
if is_collection_error_transient(&err) {
// Log transient errors but don't send CollectionError to prevent server restart
tracing::warn!(
"Failed to fetch initial operation collection (transient error), will retry on next poll in {}s: {}",
platform_api_config.poll_interval.as_secs(),
err
);
} else {
tracing::error!(
"Failed to fetch initial operation collection with permanent error: {err}"
);
if let Err(e) = sender.send(CollectionEvent::CollectionError(err)).await {
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
}
return;
}
}
};
loop {
tokio::time::sleep(platform_api_config.poll_interval).await;
match poll_operation_collection_id(
collection_id.clone(),
&platform_api_config,
&mut previous_updated_at,
)
.await
{
Ok(Some(operations)) => {
let operations_count = operations.len();
if let Err(e) = sender
.send(CollectionEvent::UpdateOperationCollection(operations))
.await
{
tracing::debug!(
"failed to push to stream. This is likely to be because the server is shutting down: {e}"
);
break;
} else if operations_count > MAX_COLLECTION_SIZE_FOR_POLLING {
tracing::warn!(
"Operation Collection polling disabled. Collection has {operations_count} operations which exceeds the maximum of {MAX_COLLECTION_SIZE_FOR_POLLING}."
);
break;
}
}
Ok(None) => {
tracing::debug!("Operation collection unchanged");
}
Err(err) => {
if let Err(e) = sender.send(CollectionEvent::CollectionError(err)).await {
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
break;
}
}
}
}
});
Box::pin(ReceiverStream::new(receiver))
}
pub fn default_collection_stream(
&self,
graph_ref: String,
platform_api_config: PlatformApiConfig,
) -> Pin<Box<dyn Stream<Item = CollectionEvent> + Send>> {
let (sender, receiver) = channel(2);
tokio::task::spawn(async move {
let mut previous_updated_at = HashMap::new();
match graphql_request::<OperationCollectionDefaultQuery>(
&OperationCollectionDefaultQuery::build_query(
operation_collection_default_query::Variables {
graph_ref: graph_ref.clone(),
},
),
&platform_api_config,
)
.await
{
Ok(response) => match response.variant {
Some(OperationCollectionDefaultQueryVariant::GraphVariant(variant)) => {
match variant.mcp_default_collection {
DefaultCollectionResult::OperationCollection(collection) => {
let should_poll = write_init_response(
&sender,
&mut previous_updated_at,
collection.operations.iter().map(OperationData::from),
)
.await;
if !should_poll {
return;
}
}
DefaultCollectionResult::PermissionError(error) => {
if let Err(e) = sender
.send(CollectionEvent::CollectionError(
CollectionError::Response(error.message),
))
.await
{
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
return;
}
}
}
}
Some(OperationCollectionDefaultQueryVariant::InvalidRefFormat(err)) => {
if let Err(e) = sender
.send(CollectionEvent::CollectionError(CollectionError::Response(
err.message,
)))
.await
{
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
return;
}
}
None => {
if let Err(e) = sender
.send(CollectionEvent::CollectionError(CollectionError::Response(
format!("{graph_ref} not found"),
)))
.await
{
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
}
return;
}
},
Err(err) => {
if is_collection_error_transient(&err) {
// Log transient errors but don't send CollectionError to prevent server restart
tracing::warn!(
"Failed to fetch initial operation collection (transient error), will retry on next poll in {}s: {}",
platform_api_config.poll_interval.as_secs(),
err
);
} else {
tracing::error!(
"Failed to fetch initial operation collection with permanent error: {err}"
);
if let Err(e) = sender.send(CollectionEvent::CollectionError(err)).await {
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
}
return;
}
}
};
loop {
tokio::time::sleep(platform_api_config.poll_interval).await;
match poll_operation_collection_default(
graph_ref.clone(),
&platform_api_config,
&mut previous_updated_at,
)
.await
{
Ok(Some(operations)) => {
let operations_count = operations.len();
if let Err(e) = sender
.send(CollectionEvent::UpdateOperationCollection(operations))
.await
{
tracing::debug!(
"failed to push to stream. This is likely to be because the server is shutting down: {e}"
);
break;
} else if operations_count > MAX_COLLECTION_SIZE_FOR_POLLING {
tracing::warn!(
"Operation Collection polling disabled. Collection has {operations_count} operations which exceeds the maximum of {MAX_COLLECTION_SIZE_FOR_POLLING}."
);
break;
}
}
Ok(None) => {
tracing::debug!("Operation collection unchanged");
}
Err(err) => {
if let Err(e) = sender.send(CollectionEvent::CollectionError(err)).await {
tracing::debug!(
"failed to send error to collection stream. This is likely to be because the server is shutting down: {e}"
);
break;
}
}
}
}
});
Box::pin(ReceiverStream::new(receiver))
}
}
async fn poll_operation_collection_id(
collection_id: String,
platform_api_config: &PlatformApiConfig,
previous_updated_at: &mut HashMap<String, OperationData>,
) -> Result<Option<Vec<OperationData>>, CollectionError> {
let response = graphql_request::<OperationCollectionPollingQuery>(
&OperationCollectionPollingQuery::build_query(
operation_collection_polling_query::Variables {
operation_collection_id: collection_id.clone(),
},
),
platform_api_config,
)
.await?;
match response.operation_collection {
PollingOperationCollectionResult::OperationCollection(collection) => {
handle_poll_result(
previous_updated_at,
collection
.operations
.into_iter()
.map(|operation| (operation.id, operation.last_updated_at))
.collect(),
platform_api_config,
)
.await
}
PollingOperationCollectionResult::NotFoundError(PollingNotFoundError { message })
| PollingOperationCollectionResult::PermissionError(PollingPermissionError { message })
| PollingOperationCollectionResult::ValidationError(PollingValidationError { message }) => {
Err(CollectionError::Response(message))
}
}
}
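/// Polls the default MCP collection for a graph ref, returning `Ok(None)` when the collection is unchanged since the previous poll.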
async fn poll_operation_collection_default(
graph_ref: String,
platform_api_config: &PlatformApiConfig,
previous_updated_at: &mut HashMap<String, OperationData>,
) -> Result<Option<Vec<OperationData>>, CollectionError> {
let response = graphql_request::<OperationCollectionDefaultPollingQuery>(
&OperationCollectionDefaultPollingQuery::build_query(
operation_collection_default_polling_query::Variables { graph_ref },
),
platform_api_config,
)
.await?;
match response.variant {
Some(PollingDefaultGraphVariant::GraphVariant(variant)) => {
match variant.mcp_default_collection {
PollingDefaultCollection::OperationCollection(collection) => {
handle_poll_result(
previous_updated_at,
collection
.operations
.into_iter()
.map(|operation| (operation.id, operation.last_updated_at))
.collect(),
platform_api_config,
)
.await
}
PollingDefaultCollection::PermissionError(error) => {
Err(CollectionError::Response(error.message))
}
}
}
Some(PollingDefaultGraphVariant::InvalidRefFormat(err)) => {
Err(CollectionError::Response(err.message))
}
None => Err(CollectionError::Response(
"Default collection not found".to_string(),
)),
}
}
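/// Sends a GraphQL request to the Apollo platform API, attaching client identification headers and the `x-api-key` header, and returns the response's `data` field (erroring if it is missing).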
async fn graphql_request<Query>(
request_body: &graphql_client::QueryBody<Query::Variables>,
platform_api_config: &PlatformApiConfig,
) -> Result<Query::ResponseData, CollectionError>
where
Query: graphql_client::GraphQLQuery,
<Query as graphql_client::GraphQLQuery>::ResponseData: std::fmt::Debug,
{
let res = reqwest::Client::new()
.post(platform_api_config.registry_url.clone())
.headers(HeaderMap::from_iter(vec![
(
HeaderName::from_static("apollographql-client-name"),
HeaderValue::from_static("apollo-mcp-server"),
),
(
HeaderName::from_static("apollographql-client-version"),
HeaderValue::from_static(env!("CARGO_PKG_VERSION")),
),
(
HeaderName::from_static("x-api-key"),
HeaderValue::from_str(platform_api_config.apollo_key.expose_secret())
.map_err(CollectionError::HeaderValue)?,
),
]))
.timeout(platform_api_config.timeout)
.json(request_body)
.send()
.await
.map_err(CollectionError::Request)?;
let response_body: graphql_client::Response<Query::ResponseData> =
res.json().await.map_err(CollectionError::Request)?;
response_body
.data
.ok_or(CollectionError::Response("missing data".to_string()))
}
```
--------------------------------------------------------------------------------
/crates/apollo-mcp-server/src/cors.rs:
--------------------------------------------------------------------------------
```rust
use http::Method;
use regex::Regex;
use schemars::JsonSchema;
use serde::Deserialize;
use tower_http::cors::{AllowOrigin, Any, CorsLayer};
use url::Url;
use crate::errors::ServerError;
/// CORS configuration options
#[derive(Debug, Clone, Deserialize, JsonSchema)]
#[serde(default)]
pub struct CorsConfig {
/// Enable CORS support
pub enabled: bool,
/// List of allowed origins (exact match)
pub origins: Vec<String>,
/// List of origin patterns (regex matching)
pub match_origins: Vec<String>,
/// Allow any origin (use with caution)
pub allow_any_origin: bool,
/// Allow credentials in CORS requests
pub allow_credentials: bool,
/// Allowed HTTP methods
pub allow_methods: Vec<String>,
/// Allowed request headers
pub allow_headers: Vec<String>,
/// Headers exposed to the browser
pub expose_headers: Vec<String>,
/// Max age for preflight cache (in seconds)
pub max_age: Option<u64>,
}
impl Default for CorsConfig {
fn default() -> Self {
Self {
enabled: false,
origins: Vec::new(),
match_origins: Vec::new(),
allow_any_origin: false,
allow_credentials: false,
allow_methods: vec![
"GET".to_string(),
"POST".to_string(),
"DELETE".to_string(), // Clients that no longer need a particular session SHOULD send an HTTP DELETE to explicitly terminate the session
],
allow_headers: vec![
"content-type".to_string(),
"mcp-protocol-version".to_string(), // https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#protocol-version-header
"mcp-session-id".to_string(), // https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#session-management
"traceparent".to_string(), // https://www.w3.org/TR/trace-context/#traceparent-header
"tracestate".to_string(), // https://www.w3.org/TR/trace-context/#tracestate-header
],
expose_headers: vec![
"mcp-session-id".to_string(), // https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#session-management
"traceparent".to_string(), // https://www.w3.org/TR/trace-context/#traceparent-header
"tracestate".to_string(), // https://www.w3.org/TR/trace-context/#tracestate-header
],
max_age: Some(7200), // 2 hours
}
}
}
impl CorsConfig {
/// Build a CorsLayer from this configuration
pub fn build_cors_layer(&self) -> Result<CorsLayer, ServerError> {
if !self.enabled {
return Err(ServerError::Cors("CORS is not enabled".to_string()));
}
// Validate configuration
self.validate()?;
let mut cors = CorsLayer::new();
// Configure origins
if self.allow_any_origin {
cors = cors.allow_origin(Any);
} else {
// Collect all origins (exact and regex patterns)
let mut origin_list = Vec::new();
// Parse exact origins
for origin_str in &self.origins {
let origin = origin_str.parse::<http::HeaderValue>().map_err(|e| {
ServerError::Cors(format!("Invalid origin '{}': {}", origin_str, e))
})?;
origin_list.push(origin);
}
// For regex patterns, we need to use a predicate function
if !self.match_origins.is_empty() {
// Parse regex patterns to validate them
let mut regex_patterns = Vec::new();
for pattern in &self.match_origins {
let regex = Regex::new(pattern).map_err(|e| {
ServerError::Cors(format!("Invalid origin pattern '{}': {}", pattern, e))
})?;
regex_patterns.push(regex);
}
// Use predicate function that combines exact origins and regex patterns
let exact_origins = origin_list;
cors = cors.allow_origin(AllowOrigin::predicate(move |origin, _| {
let origin_str = origin.to_str().unwrap_or("");
// Check exact origins
if exact_origins
.iter()
.any(|exact| exact.as_bytes() == origin.as_bytes())
{
return true;
}
// Check regex patterns
regex_patterns
.iter()
.any(|regex| regex.is_match(origin_str))
}));
} else if !origin_list.is_empty() {
// Only exact origins, no regex
cors = cors.allow_origin(origin_list);
}
}
// Configure credentials
cors = cors.allow_credentials(self.allow_credentials);
// Configure methods
let methods: Result<Vec<Method>, _> = self
.allow_methods
.iter()
.map(|m| m.parse::<Method>())
.collect();
let methods =
methods.map_err(|e| ServerError::Cors(format!("Invalid HTTP method: {}", e)))?;
cors = cors.allow_methods(methods);
// Configure headers
if !self.allow_headers.is_empty() {
let headers: Result<Vec<http::HeaderName>, _> = self
.allow_headers
.iter()
.map(|h| h.parse::<http::HeaderName>())
.collect();
let headers =
headers.map_err(|e| ServerError::Cors(format!("Invalid header name: {}", e)))?;
cors = cors.allow_headers(headers);
}
// Configure exposed headers
if !self.expose_headers.is_empty() {
let headers: Result<Vec<http::HeaderName>, _> = self
.expose_headers
.iter()
.map(|h| h.parse::<http::HeaderName>())
.collect();
let headers = headers
.map_err(|e| ServerError::Cors(format!("Invalid exposed header name: {}", e)))?;
cors = cors.expose_headers(headers);
}
// Configure max age
if let Some(max_age) = self.max_age {
cors = cors.max_age(std::time::Duration::from_secs(max_age));
}
Ok(cors)
}
/// Validate the configuration for consistency
fn validate(&self) -> Result<(), ServerError> {
// Cannot use credentials with any origin
if self.allow_credentials && self.allow_any_origin {
return Err(ServerError::Cors(
"Cannot use allow_credentials with allow_any_origin for security reasons. See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS#requests_with_credentials"
.to_string(),
));
}
// Must have at least some origin configuration if not allowing any origin
if !self.allow_any_origin && self.origins.is_empty() && self.match_origins.is_empty() {
return Err(ServerError::Cors(
"Must specify origins, match_origins, or allow_any_origin when CORS is enabled"
.to_string(),
));
}
// Validate that origin strings are valid URLs
for origin in &self.origins {
Url::parse(origin).map_err(|e| {
ServerError::Cors(format!("Invalid origin URL '{}': {}", origin, e))
})?;
}
// Validate regex patterns
for pattern in &self.match_origins {
Regex::new(pattern).map_err(|e| {
ServerError::Cors(format!("Invalid regex pattern '{}': {}", pattern, e))
})?;
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use axum::{Router, routing::get};
use http::{HeaderValue, Method, Request, StatusCode};
use tower::util::ServiceExt;
#[test]
fn test_default_config() {
let config = CorsConfig::default();
assert!(!config.enabled);
assert!(!config.allow_any_origin);
assert!(!config.allow_credentials);
assert_eq!(
config.allow_methods,
vec!["GET".to_string(), "POST".to_string(), "DELETE".to_string()]
);
assert_eq!(
config.allow_headers,
vec![
"content-type".to_string(),
"mcp-protocol-version".to_string(),
"mcp-session-id".to_string(),
"traceparent".to_string(),
"tracestate".to_string(),
]
);
assert_eq!(
config.expose_headers,
vec![
"mcp-session-id".to_string(),
"traceparent".to_string(),
"tracestate".to_string(),
]
);
assert_eq!(config.max_age, Some(7200));
}
#[test]
fn test_disabled_cors_fails_to_build() {
let config = CorsConfig::default();
assert!(config.build_cors_layer().is_err());
}
#[test]
fn test_allow_any_origin_builds() {
let config = CorsConfig {
enabled: true,
allow_any_origin: true,
..Default::default()
};
assert!(config.build_cors_layer().is_ok());
}
#[test]
fn test_specific_origins_build() {
let config = CorsConfig {
enabled: true,
origins: vec![
"http://localhost:3000".to_string(),
"https://studio.apollographql.com".to_string(),
],
..Default::default()
};
assert!(config.build_cors_layer().is_ok());
}
#[test]
fn test_regex_origins_build() {
let config = CorsConfig {
enabled: true,
match_origins: vec!["^http://localhost:[0-9]+$".to_string()],
..Default::default()
};
assert!(config.build_cors_layer().is_ok());
}
#[test]
fn test_credentials_with_any_origin_fails() {
let config = CorsConfig {
enabled: true,
allow_any_origin: true,
allow_credentials: true,
..Default::default()
};
assert!(config.build_cors_layer().is_err());
}
#[test]
fn test_no_origins_fails() {
let config = CorsConfig {
enabled: true,
allow_any_origin: false,
origins: vec![],
match_origins: vec![],
..Default::default()
};
assert!(config.build_cors_layer().is_err());
}
#[test]
fn test_invalid_origin_fails() {
let config = CorsConfig {
enabled: true,
origins: vec!["not-a-valid-url".to_string()],
..Default::default()
};
assert!(config.build_cors_layer().is_err());
}
#[test]
fn test_invalid_regex_fails() {
let config = CorsConfig {
enabled: true,
match_origins: vec!["[invalid regex".to_string()],
..Default::default()
};
assert!(config.build_cors_layer().is_err());
}
#[test]
fn test_invalid_method_fails() {
let config = CorsConfig {
enabled: true,
origins: vec!["http://localhost:3000".to_string()],
allow_methods: vec!["invalid method with spaces".to_string()],
..Default::default()
};
assert!(config.build_cors_layer().is_err());
}
#[tokio::test]
async fn test_preflight_request_with_exact_origin() {
let config = CorsConfig {
enabled: true,
origins: vec!["http://localhost:3000".to_string()],
max_age: Some(3600),
..Default::default()
};
let app = Router::new().layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "http://localhost:3000")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("http://localhost:3000"))
);
}
#[tokio::test]
async fn test_simple_request_with_exact_origin() {
let config = CorsConfig {
enabled: true,
origins: vec!["http://localhost:3000".to_string()],
..Default::default()
};
let app = Router::new()
.route("/health", get(|| async { "test response" }))
.layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::GET)
.uri("/health")
.header("Origin", "http://localhost:3000")
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("http://localhost:3000"))
);
}
#[tokio::test]
async fn test_preflight_request_with_regex_origin() {
let config = CorsConfig {
enabled: true,
match_origins: vec!["^http://localhost:[0-9]+$".to_string()],
..Default::default()
};
let app = Router::new().layer(config.build_cors_layer().unwrap());
// Test matching port
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "http://localhost:4321")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("http://localhost:4321"))
);
}
#[tokio::test]
async fn test_simple_request_with_regex_origin() {
let config = CorsConfig {
enabled: true,
match_origins: vec!["^https://.*\\.apollographql\\.com$".to_string()],
..Default::default()
};
let app = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "https://www.apollographql.com")
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("https://www.apollographql.com"))
);
}
#[tokio::test]
async fn test_mixed_exact_and_regex_origins() {
let config = CorsConfig {
enabled: true,
origins: vec!["http://localhost:3000".to_string()],
match_origins: vec!["^https://.*\\.apollographql\\.com$".to_string()],
..Default::default()
};
let cors_layer = config.build_cors_layer().unwrap();
// Test exact origin
let app1 = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(cors_layer.clone());
let request1 = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "http://localhost:3000")
.body(axum::body::Body::empty())
.unwrap();
let response1 = app1.oneshot(request1).await.unwrap();
assert_eq!(
response1.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("http://localhost:3000"))
);
// Test regex origin
let app2 = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(cors_layer);
let request2 = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "https://studio.apollographql.com")
.body(axum::body::Body::empty())
.unwrap();
let response2 = app2.oneshot(request2).await.unwrap();
assert_eq!(
response2.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static(
"https://studio.apollographql.com"
))
);
}
#[tokio::test]
async fn test_preflight_request_rejected_origin_exact() {
let config = CorsConfig {
enabled: true,
origins: vec!["https://allowed.com".to_string()],
..Default::default()
};
let app = Router::new().layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "https://blocked.com")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert!(
response
.headers()
.get("access-control-allow-origin")
.is_none()
);
}
#[tokio::test]
async fn test_simple_request_rejected_origin_exact() {
let config = CorsConfig {
enabled: true,
origins: vec!["https://allowed.com".to_string()],
..Default::default()
};
let app = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "https://blocked.com")
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert!(
response
.headers()
.get("access-control-allow-origin")
.is_none()
);
}
#[tokio::test]
async fn test_preflight_request_rejected_origin_regex() {
let config = CorsConfig {
enabled: true,
match_origins: vec!["^https://.*\\.allowed\\.com$".to_string()],
..Default::default()
};
let cors_layer = config.build_cors_layer().unwrap();
let app = Router::new().layer(cors_layer);
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "https://malicious.blocked.com")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert!(
response
.headers()
.get("access-control-allow-origin")
.is_none()
);
}
#[tokio::test]
async fn test_simple_request_rejected_origin_regex() {
let config = CorsConfig {
enabled: true,
match_origins: vec!["^https://.*\\.allowed\\.com$".to_string()],
..Default::default()
};
let cors_layer = config.build_cors_layer().unwrap();
let app = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(cors_layer);
let request = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "https://malicious.blocked.com")
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert!(
response
.headers()
.get("access-control-allow-origin")
.is_none()
);
}
#[tokio::test]
async fn test_preflight_request_any_origin() {
let config = CorsConfig {
enabled: true,
allow_any_origin: true,
..Default::default()
};
let app = Router::new().layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "https://any-domain.com")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("*"))
);
}
#[tokio::test]
async fn test_simple_request_any_origin() {
let config = CorsConfig {
enabled: true,
allow_any_origin: true,
..Default::default()
};
let app = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "https://any-domain.com")
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-origin"),
Some(&HeaderValue::from_static("*"))
);
}
#[tokio::test]
async fn test_non_cors_request() {
let config = CorsConfig {
enabled: true,
origins: vec!["https://allowed.com".to_string()],
..Default::default()
};
let cors_layer = config.build_cors_layer().unwrap();
let app = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(cors_layer);
let request = Request::builder()
.method(Method::GET)
.uri("/test")
// No Origin header
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
// Request should succeed but without CORS headers
assert_eq!(response.status(), StatusCode::OK);
assert!(
response
.headers()
.get("access-control-allow-origin")
.is_none()
);
}
#[tokio::test]
async fn test_multiple_request_headers() {
let config = CorsConfig {
enabled: true,
origins: vec!["https://allowed.com".to_string()],
allow_headers: vec![
"content-type".to_string(),
"authorization".to_string(),
"x-api-key".to_string(),
"x-requested-with".to_string(),
],
..Default::default()
};
let app = Router::new().layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "https://allowed.com")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization,x-api-key,disallowed-header",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
let allow_headers = response
.headers()
.get("access-control-allow-headers")
.unwrap();
let headers_str = allow_headers.to_str().unwrap();
assert!(headers_str.contains("content-type"));
assert!(headers_str.contains("authorization"));
assert!(headers_str.contains("x-api-key"));
assert!(!headers_str.contains("disallowed-header"));
}
#[tokio::test]
async fn test_preflight_request_with_credentials() {
let config = CorsConfig {
enabled: true,
origins: vec!["https://allowed.com".to_string()],
allow_credentials: true,
..Default::default()
};
let app = Router::new().layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::OPTIONS)
.uri("/test")
.header("Origin", "https://allowed.com")
.header("Access-Control-Request-Method", "POST")
.header(
"Access-Control-Request-Headers",
"content-type,authorization",
)
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-credentials"),
Some(&HeaderValue::from_static("true"))
);
}
#[tokio::test]
async fn test_simple_request_with_credentials() {
let config = CorsConfig {
enabled: true,
origins: vec!["https://allowed.com".to_string()],
allow_credentials: true,
..Default::default()
};
let app = Router::new()
.route("/test", get(|| async { "test response" }))
.layer(config.build_cors_layer().unwrap());
let request = Request::builder()
.method(Method::GET)
.uri("/test")
.header("Origin", "https://allowed.com")
.header("Cookie", "sessionid=abc123")
.body(axum::body::Body::empty())
.unwrap();
let response = app.oneshot(request).await.unwrap();
assert_eq!(response.status(), StatusCode::OK);
assert_eq!(
response.headers().get("access-control-allow-credentials"),
Some(&HeaderValue::from_static("true"))
);
}
}
```
--------------------------------------------------------------------------------
/docs/source/config-file.mdx:
--------------------------------------------------------------------------------
```markdown
---
title: Config File Reference
description: Reference guide of configuration options for running Apollo MCP Server.
redirectFrom:
- /apollo-mcp-server/command-reference
---
You can configure Apollo MCP Server using a configuration file. You can also [override configuration options using environment variables](#override-configuration-options-using-environment-variables).
See the [example config file](#example-config-file) for an example.
## Configuration options
All fields are optional.
### Top-level options
| Option | Type | Default | Description |
| :--------------- | :-------------------- | :----------------------- | :--------------------------------------------------------------- |
| `cors` | `Cors` | | CORS configuration |
| `custom_scalars` | `FilePath` | | Path to a [custom scalar map](/apollo-mcp-server/custom-scalars) |
| `endpoint` | `URL` | `http://localhost:4000/` | The target GraphQL endpoint |
| `forward_headers`| `List<string>` | `[]` | Headers to forward from MCP clients to GraphQL API |
| `graphos` | `GraphOS` | | Apollo-specific credential overrides |
| `headers` | `Map<string, string>` | `{}` | Map of hard-coded headers to include in all GraphQL requests |
| `health_check` | `HealthCheck` | | Health check configuration |
| `introspection` | `Introspection` | | Introspection configuration |
| `logging` | `Logging` | | Logging configuration |
| `operations` | `OperationSource` | | Operations configuration |
| `overrides` | `Overrides` | | Overrides for server behavior |
| `schema` | `SchemaSource` | | Schema configuration |
| `transport` | `Transport` | | The type of server transport to use |
| `telemetry` | `Telemetry` | | Configuration to export metrics and traces via OTLP |
### GraphOS
These fields are under the top-level `graphos` key and define your GraphOS graph credentials and endpoints.
| Option | Type | Default | Description |
| :------------------------ | :------- | :------ | :-------------------------------------------------------------------------------------------------------------- |
| `apollo_key` | `string` | | The Apollo GraphOS key. You can also provide this with the `APOLLO_KEY` environment variable |
| `apollo_graph_ref` | `string` | | The Apollo GraphOS graph reference. You can also provide this with the `APOLLO_GRAPH_REF` environment variable |
| `apollo_registry_url` | `URL` | | The URL to use for Apollo's registry |
| `apollo_uplink_endpoints` | `URL` | | List of uplink URL overrides. You can also provide this with the `APOLLO_UPLINK_ENDPOINTS` environment variable |
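For example, a minimal sketch of this section (the graph ref below is a placeholder; the key is usually better supplied through the `APOLLO_KEY` environment variable than checked into a config file):
```yaml title="mcp.yaml"
graphos:
  apollo_graph_ref: my-graph@production
```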
### Static headers
The `headers` option enables you to specify a list of static, hard-coded headers and values. These are included in all GraphQL requests.
```yaml title="mcp.yaml"
headers: { "apollographql-client-name": "my-mcp-server" }
```
To forward dynamic header values from the client, use the [`forward_headers` option](#forwarding-headers) instead.
### Forwarding headers
The `forward_headers` option allows you to forward specific headers from incoming MCP client requests to your GraphQL API.
This is useful for:
- Multi-tenant applications (forwarding tenant IDs)
- A/B testing (forwarding experiment IDs)
- Geo information (forwarding country codes)
- Client identification (forwarding client names)
- Internal instrumentation (forwarding correlation IDs)
Header name matching is case-insensitive, but names must otherwise match the configured list exactly.
Header values are forwarded as-is, exactly as the MCP client provides them.
Hop-by-hop headers (like `connection`, `transfer-encoding`) are automatically blocked for security.
```yaml title="mcp.yaml"
forward_headers:
- x-tenant-id
- x-experiment-id
- x-geo-country
```
<Caution>
Don't use header forwarding to pass through sensitive credentials such as API keys or access tokens.
<br/><br/>
According to [MCP security best practices](https://modelcontextprotocol.io/specification/2025-06-18/basic/security_best_practices#token-passthrough) and the [MCP authorization specification](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#access-token-privilege-restriction), token passthrough introduces serious security risks:
- **Audience confusion**: If the MCP Server accepts tokens not intended for it, it can violate OAuth's trust boundaries.
- **Confused deputy problem**: If an unvalidated token is passed downstream, a downstream API may incorrectly trust it as though it were validated by the MCP Server.
<br/>
Apollo MCP Server supports OAuth 2.1 authentication that follows best practices and aligns with the MCP authorization model. See our [authorization guide](/apollo-mcp-server/auth) for implementation details and how to use the [auth configuration](#auth).
</Caution>
### CORS
These fields are under the top-level `cors` key and configure Cross-Origin Resource Sharing (CORS) for browser-based MCP clients.
| Option | Type | Default | Description |
| :------------------ | :------------- | :---------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------- |
| `enabled` | `bool` | `false` | Enable CORS support |
| `origins` | `List<string>` | `[]` | List of allowed origins (exact matches). Each entry must be a valid URL. To allow any origin, use `allow_any_origin` instead (not recommended in production) |
| `match_origins` | `List<string>` | `[]` | List of regex patterns to match allowed origins (e.g., `"^https://localhost:[0-9]+$"`) |
| `allow_any_origin` | `bool` | `false` | Allow requests from any origin. Cannot be used with `allow_credentials: true` |
| `allow_credentials` | `bool` | `false` | Allow credentials (cookies, authorization headers) in CORS requests |
| `allow_methods` | `List<string>` | `["GET", "POST", "DELETE"]` | List of allowed HTTP methods |
| `allow_headers` | `List<string>` | `["content-type", "mcp-protocol-version", "mcp-session-id", "traceparent", "tracestate"]` | List of allowed request headers |
| `expose_headers` | `List<string>` | `["mcp-session-id", "traceparent", "tracestate"]` | List of response headers exposed to the browser (includes MCP and W3C Trace Context headers) |
| `max_age` | `number` | `7200` | Maximum age (in seconds) for preflight cache |
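For example, a hypothetical configuration that allows Apollo Studio plus any local development port:
```yaml title="mcp.yaml"
cors:
  enabled: true
  origins:
    - https://studio.apollographql.com
  match_origins:
    - "^http://localhost:[0-9]+$"
  max_age: 7200
```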
### Health checks
These fields are under the top-level `health_check` key. Learn more about [health checks](/apollo-mcp-server/health-checks).
| Option | Type | Default | Description |
| :---------------------------- | :--------- | :---------- | :--------------------------------------------------------------------------------- |
| `enabled` | `bool` | `false` | Enable health check endpoints |
| `path` | `string` | `"/health"` | Custom health check endpoint path |
| `readiness` | `object` | | Readiness check configuration |
| `readiness.allowed` | `number` | `100` | Maximum number of rejections allowed in a sampling interval before marking unready |
| `readiness.interval` | `object` | | Readiness check interval configuration |
| `readiness.interval.sampling` | `duration` | `"5s"` | How often to check the rejection count |
| `readiness.interval.unready` | `duration` | `"10s"` | How long to wait before recovering from unready state (default: 2 \* sampling) |
<Note>
Health checks are only available when using the `streamable_http` transport. The health check feature is inspired by Apollo Router's health check implementation.
</Note>
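For example, a minimal sketch that enables health checks (shown with the required `streamable_http` transport; the readiness values are illustrative):
```yaml title="mcp.yaml"
transport:
  type: streamable_http
health_check:
  enabled: true
  readiness:
    allowed: 50
    interval:
      sampling: 5s
```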
### Introspection
These fields are under the top-level `introspection` key. Learn more about the MCP [introspection tools](/apollo-mcp-server/define-tools#introspection-tools).
| Option | Type | Default | Description |
| :-------------------------- | :------- | :--------- | :-------------------------------------------------------------------- |
| `execute` | `object` | | Execution configuration for introspection |
| `execute.enabled` | `bool` | `false` | Enable introspection for execution |
| `introspect` | `object` | | Introspection configuration for allowing clients to run introspection |
| `introspect.enabled` | `bool` | `false` | Enable introspection requests |
| `introspect.minify` | `bool` | `false` | Minify introspection results to reduce context window usage |
| `search` | `object` | | Search tool configuration |
| `search.enabled` | `bool` | `false` | Enable search tool |
| `search.index_memory_bytes` | `number` | `50000000` | Amount of memory used for indexing (in bytes) |
| `search.leaf_depth` | `number` | `1` | Depth of subtype information to include from matching types |
| `search.minify` | `bool` | `false` | Minify search results to reduce context window usage |
| `validate` | `object` | | Validation tool configuration |
| `validate.enabled` | `bool` | `false` | Enable validation tool |
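For example, a sketch that enables all four introspection tools, with minified output to conserve context-window space:
```yaml title="mcp.yaml"
introspection:
  execute:
    enabled: true
  introspect:
    enabled: true
    minify: true
  search:
    enabled: true
  validate:
    enabled: true
```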
### Logging
These fields are under the top-level `logging` key.
| Option | Type | Default | Description |
| :--------- | :-------------------------------------------------- | :--------- | :-------------------------------------------------------------------------------- |
| `level` | `oneOf ["trace", "debug", "info", "warn", "error"]` | `"info"` | The minimum log level to record |
| `path` | `FilePath` | | An output file path for logging. If not provided, logging outputs to stdout/stderr. |
| `rotation` | `oneOf ["minutely", "hourly", "daily", "never"]` | `"hourly"` | The log file rotation interval (if file logging is used) |
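For example, a sketch that writes daily-rotated log files (the path is a placeholder):
```yaml title="mcp.yaml"
logging:
  level: info
  path: /var/log/apollo-mcp-server
  rotation: daily
```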
### Operation source
These fields are under the top-level `operations` key. The available fields depend on the value of the nested `source` key.
The default value for `source` is `"infer"`. Learn more about [defining tools as operations](/apollo-mcp-server/define-tools).
| Source | Option | Type | Default | Description |
| :----------------- | :------- | :--------------- | :------ | :------------------------------------------------------------------------------------------------------------------------------------------------------ |
| GraphOS Collection | `source` | `"collection"` | | Load operations from a GraphOS collection |
| GraphOS Collection | `id` | `string` | | The collection ID to use in GraphOS. Use `default` for the default collection. [Learn more](/apollo-mcp-server/define-tools#from-operation-collection). |
| Introspection | `source` | `"introspect"` | | Load operations by introspecting the schema. Note: You must enable introspection to use this source |
| Local | `source` | `"local"` | | Load operations from local GraphQL files or directories |
| Local | `paths` | `List<FilePath>` | | Paths to GraphQL files or directories to search. Note: These paths are relative to the location from which you are running Apollo MCP Server. |
| Manifest | `source` | `"manifest"` | | Load operations from a persisted queries manifest file |
| Manifest | `path` | `FilePath` | | The path to the persisted query manifest |
| Uplink | `source` | `"uplink"` | | Load operations from an uplink manifest. Note: This source requires an Apollo key and graph reference |
| Infer | `source` | `"infer"` | \* | Infer where to load operations based on other configuration options. |
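For example, to load operations from the default GraphOS collection:
```yaml title="mcp.yaml"
operations:
  source: collection
  id: default
```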
### Overrides
These fields are under the top-level `overrides` key.
| Option | Type | Default | Description |
| :--------------------------- | :---------------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------- |
| `disable_type_description` | `bool` | `false` | Disable type descriptions to save on context-window space |
| `disable_schema_description` | `bool` | `false` | Disable schema descriptions to save on context-window space |
| `enable_explorer` | `bool` | `false` | Expose a tool that returns the URL to open a GraphQL operation in Apollo Explorer. Note: This requires a GraphOS graph reference |
| `mutation_mode` | `oneOf ["none", "explicit", "all"]` | `"none"` | Defines the mutation access level for the MCP server |
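For example, a sketch that trims context-window usage and sets the mutation access level to `explicit`:
```yaml title="mcp.yaml"
overrides:
  disable_type_description: true
  mutation_mode: explicit
```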
### Schema source
These fields are under the top-level `schema` key. The available fields depend on the value of the nested `source` key.
The default value for `source` is `"uplink"`.
| Source | Option | Type | Default | Description |
| :----- | :------- | :--------- | :------ | :---------------------------------------------------------------------------------- |
| Local | `source` | `"local"` | | Load schema from local file |
| Local | `path` | `FilePath` | | Path to the GraphQL schema |
| Uplink | `source` | `"uplink"` | \* | Fetch the schema from uplink. Note: This requires an Apollo key and graph reference |
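For example, to load the schema from a local file (the path is a placeholder):
```yaml title="mcp.yaml"
schema:
  source: local
  path: ./schema.graphql
```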
### Transport
These fields are under the top-level `transport` key and configure how the MCP Server communicates in different environments: stdio, Streamable HTTP, or SSE (deprecated).
```yaml title="mcp.yaml"
transport:
type: stdio
```
The available fields depend on the value of the nested `type` key:
##### stdio (default)
| Option | Value | Default Value | Description |
| :----- | :-------- | :------------ | :-------------------------------------------------------------- |
| `type` | `"stdio"` | \* | Use standard IO for communication between the server and client |
##### Streamable HTTP
| Option | Value | Value Type | Description |
| :-------------- | :-------------------- | :--------- | :------------------------------------------------------------------------ |
| `type` | `"streamable_http"` | | Host the MCP server on the configuration, using streamable HTTP messages. |
| `address` | `127.0.0.1` (default) | `IpAddr` | The IP address to bind to |
| `port` | `8000` (default) | `u16` | The port to bind to |
| `stateful_mode` | `true` (default) | `bool` | Flag to enable or disable stateful mode and session management. |
<Note>
For Apollo MCP Server `≤v1.0.0`, the default `port` value is `5000`. In `v1.1.0`, the default `port` option was changed to `8000` to avoid conflicts with common development tools and services that typically use port 5000 (such as macOS AirPlay, Flask development servers, and other local services).
</Note>
##### SSE (Deprecated, use StreamableHTTP)
| Option | Value | Value Type | Description |
| :-------- | :-------------------- | :--------- | :--------------------------------------------------------------------------------------------------------------- |
| `type` | `"sse"` | | Host the MCP server on the supplied config, using SSE for communication. Deprecated in favor of `StreamableHTTP` |
| `address` | `127.0.0.1` (default) | `IpAddr` | The IP address to bind to |
| `port` | `8000` (default) | `u16` | The port to bind to |
### Auth
These fields are under the top-level `transport` key, nested under the `auth` key. Learn more about [authorization and authentication](/apollo-mcp-server/auth).
| Option | Type | Default | Description |
| :------------------------------- | :------------- | :------ | :------------------------------------------------------------------------------------------------- |
| `servers` | `List<URL>` | | List of upstream delegated OAuth servers (must support OIDC metadata discovery endpoint) |
| `audiences` | `List<string>` | | List of accepted audiences from upstream signed JWTs |
| `resource` | `string` | | The externally available URL pointing to this MCP server. Can be `localhost` when testing locally. |
| `resource_documentation` | `string` | | Optional link to more documentation relating to this MCP server |
| `scopes` | `List<string>` | | List of queryable OAuth scopes from the upstream OAuth servers |
| `disable_auth_token_passthrough` | `bool` | `false` | Optional flag to disable passing validated Authorization header to downstream API |
Below is an example configuration using `StreamableHTTP` transport with authentication:
```yaml title="mcp.yaml"
transport:
type: streamable_http
auth:
# List of upstream delegated OAuth servers
# Note: These need to support the OIDC metadata discovery endpoint
servers:
- https://auth.example.com
# List of accepted audiences from upstream signed JWTs
# See: https://www.ory.sh/docs/hydra/guides/audiences
audiences:
- mcp.example.audience
# The externally available URL pointing to this MCP server. Can be `localhost`
# when testing locally.
# Note: Subpaths must be preserved here as well. So append `/mcp` if using
# Streamable HTTP or `/sse` if using SSE.
resource: https://hosted.mcp.server/mcp
# Optional link to more documentation relating to this MCP server.
resource_documentation: https://info.mcp.server
# List of queryable OAuth scopes from the upstream OAuth servers
scopes:
- read
- mcp
- profile
```
### Telemetry
| Option | Type | Default | Description |
| :-------------- | :---------- | :-------------------------- | :--------------------------------------- |
| `service_name` | `string` | "apollo-mcp-server" | The service name in telemetry data. |
| `version` | `string` | Current crate version | The service version in telemetry data. |
| `exporters` | `Exporters` | `null` (Telemetry disabled) | Configuration for telemetry exporters. |
#### Exporters
| Option | Type | Default | Description |
| :--------- | :---------- | :-------------------------- | :--------------------------------------- |
| `metrics` | `Metrics` | `null` (Metrics disabled) | Configuration for exporting metrics. |
| `tracing` | `Tracing` | `null` (Tracing disabled) | Configuration for exporting traces. |
#### Metrics
| Option | Type | Default | Description |
| :-------------------- | :---------------------- | :-------------------------- | :--------------------------------------------- |
| `otlp` | `OTLP Metric Exporter` | `null` (Exporting disabled) | Configuration for exporting metrics via OTLP. |
| `omitted_attributes` | `List<String>` | | List of attributes to be omitted from metrics. |
#### OTLP Metrics Exporter
| Option | Type | Default | Description |
| :--------- | :--------------------------------------- | :----------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `endpoint` | `URL` | `http://localhost:4317` | URL to export data to. Requires full path. |
| `protocol` | `string` | `grpc` | Protocol for export. Only `grpc` and `http/protobuf` are supported. |
| `temporality` | `MetricTemporality` | `Cumulative` | Optional OTel property controlling how additive quantities are expressed over time: whether reported values incorporate previous measurements (`Cumulative`) or not (`Delta`). |
#### Traces
| Option | Type | Default | Description |
| :-------------------- | :--------------------- | :-------------------------- | :--------------------------------------------- |
| `otlp` | `OTLP Trace Exporter` | `null` (Exporting disabled) | Configuration for exporting traces via OTLP. |
| `sampler` | `SamplerOption` | `ALWAYS_ON` | Configuration to control sampling of traces. |
| `omitted_attributes` | `List<String>` | | List of attributes to be omitted from traces. |
#### OTLP Trace Exporter
| Option | Type | Default | Description |
| :--------- | :-------- | :-------------------------- | :--------------------------------------------------------------- |
| `endpoint` | `URL` | `http://localhost:4317` | URL to export data to. Requires full path. |
| `protocol` | `string` | `grpc` | Protocol for export. `grpc` and `http/protobuf` are supported. |
#### SamplerOption
| Option | Type | Description |
| :----------- | :-------- | :------------------------------------------------------- |
| `always_on` | `string` | All traces will be exported. |
| `always_off` | `string` | Sampling is turned off, no traces will be exported. |
| `0.0-1.0` | `f64` | Percentage of traces to export. |
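Putting these options together, a sketch that exports both metrics and traces to a local OTLP collector (the endpoint and sampler values are illustrative):
```yaml title="mcp.yaml"
telemetry:
  service_name: apollo-mcp-server
  exporters:
    metrics:
      otlp:
        endpoint: http://localhost:4317
        protocol: grpc
    tracing:
      otlp:
        endpoint: http://localhost:4317
        protocol: grpc
      sampler: always_on
```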
## Example config file
The following example file sets your endpoint to `localhost:4001`, configures transport over Streamable HTTP, enables introspection, and provides two local MCP operations for the server to expose.
```yaml config.yaml
endpoint: http://localhost:4001/
transport:
type: streamable_http
introspection:
introspect:
enabled: true
operations:
source: local
paths:
- relative/path/to/your/operations/userDetails.graphql
- relative/path/to/your/operations/listing.graphql
```
## Override configuration options using environment variables
You can override configuration options using environment variables. The environment variable name is the option name prefixed with `APOLLO_MCP_`, using `__` to separate nested option names.
For example, to override the `introspection.execute.enabled` option, you can set the `APOLLO_MCP_INTROSPECTION__EXECUTE__ENABLED` environment variable.
```sh
APOLLO_MCP_INTROSPECTION__EXECUTE__ENABLED="true"
```
For list values, you can set the environment variable to a comma-separated list.
For example, to override the `transport.auth.servers` option, you can set the `APOLLO_MCP_TRANSPORT__AUTH__SERVERS` environment variable to a comma-separated list.
```sh
APOLLO_MCP_TRANSPORT__AUTH__SERVERS='[server_url_1,server_url_2]'
```
```
--------------------------------------------------------------------------------
/xtask/src/commands/changeset/mod.rs:
--------------------------------------------------------------------------------
```rust
/// This module is generated:
/// 1. Only if you actually change the `matching_pull_request.graphql` query.
/// 1. By installing `graphql_client_cli`
///
/// cargo install graphql_client_cli
///
/// 1. By running:
/// 1. A command that downloads the GitHub Schema. It's large so we don't
/// need to check it in. Make sure to download it INTO the
/// `src/commands/changeset/` directory.
///
/// wget https://docs.github.com/public/schema.docs.graphql
///
/// 2. Generate against this downloaded schema. Run this from inside the
/// `src/commands/changeset/` directory.
///
/// graphql-client generate \
/// --schema-path ./schema.docs.graphql \
/// --response-derives='Debug' \
/// --custom-scalars-module='crate::commands::changeset::scalars' \
/// ./matching_pull_request.graphql
///
mod matching_pull_request;
mod scalars;
use std::fmt;
use std::fs;
use std::fs::remove_file;
use std::fs::DirEntry;
use std::path::PathBuf;
use std::str::FromStr;
use ::reqwest::Client;
use anyhow::Result;
use console::style;
use dialoguer::console::Term;
use dialoguer::theme::ColorfulTheme;
use dialoguer::Confirm;
use dialoguer::Editor;
use dialoguer::Input;
use dialoguer::Select;
use itertools::Itertools;
use matching_pull_request::matching_pull_request::ResponseData;
use matching_pull_request::matching_pull_request::Variables;
use matching_pull_request::MatchingPullRequest;
use serde::Serialize;
use tinytemplate::format_unescaped;
use tinytemplate::TinyTemplate;
use xtask::PKG_PROJECT_ROOT;
#[derive(Serialize)]
struct TemplateResource {
number: String,
url: String,
}
#[derive(Serialize)]
struct TemplateContext {
pulls: Vec<TemplateResource>,
issues: Vec<TemplateResource>,
title: String,
body: String,
author: String,
}
const REPO_WITH_OWNER: &str = "apollographql/apollo-mcp-server";
const EXAMPLE_TEMPLATE: &str = "### { title }
{{- if issues -}}
{{- if issues }} {{ endif -}}
{{- for issue in issues -}}
([Issue #{issue.number}]({issue.url}))
{{- if not @last }}, {{ endif -}}
{{- endfor -}}
{{ else -}}
{{- if pulls -}}
{{- if pulls }} - {{ endif -}}
{{- for pull in pulls -}}
@{author} PR #{pull.number}
{{- if not @last }}, {{ endif -}}
{{- endfor -}}
{{- else -}}
{{- endif -}}
{{- endif }}
{body}
";
impl Command {
pub fn run(&self) -> Result<()> {
match self {
Command::Create(command) => command.run(),
Command::Changelog(command) => command.run(),
}
}
}
#[derive(Debug, clap::Subcommand)]
pub enum Command {
/// Add a new changeset
Create(Create),
/// Generate the CHANGELOG.md entry for a release
Changelog(Changelog),
}
#[allow(clippy::derive_ord_xor_partial_ord)]
#[derive(Debug, Copy, Clone, Eq, PartialEq, Hash, Ord)]
enum Classification {
Breaking,
Feature,
Fix,
Configuration,
Maintenance,
Documentation,
Experimental,
}
impl Classification {
/// These "short names" are the prefixes that are used on the files
/// themselves and also for the `--class` flag for the CLI.
fn as_short_name(&self) -> &'static str {
match self {
Classification::Breaking => "breaking",
Classification::Feature => "feat",
Classification::Fix => "fix",
Classification::Configuration => "config",
Classification::Maintenance => "maint",
Classification::Documentation => "docs",
Classification::Experimental => "exp",
}
}
/// Defines the ordering that eventually appears in the emitted CHANGELOG
/// and the order options appear in the TUI.
const ORDERED_ALL: &'static [Self] = &[
Classification::Breaking,
Classification::Feature,
Classification::Fix,
Classification::Configuration,
Classification::Maintenance,
Classification::Documentation,
Classification::Experimental,
];
}
impl std::cmp::PartialOrd for Classification {
fn partial_cmp(&self, other: &Classification) -> Option<std::cmp::Ordering> {
Self::ORDERED_ALL
.iter()
.position(|item| item == self)
.partial_cmp(&Self::ORDERED_ALL.iter().position(|item| item == other))
}
}
type ParseError = &'static str;
impl FromStr for Classification {
type Err = ParseError;
fn from_str(classification: &str) -> Result<Self, Self::Err> {
if classification.starts_with("break") {
return Ok(Classification::Breaking);
}
if classification.starts_with("feat") {
return Ok(Classification::Feature);
}
if classification.starts_with("fix") {
return Ok(Classification::Fix);
}
if classification.starts_with("config") {
return Ok(Classification::Configuration);
}
if classification.starts_with("maint") {
return Ok(Classification::Maintenance);
}
if classification.starts_with("docs") {
return Ok(Classification::Documentation);
}
if classification.starts_with("exp") {
return Ok(Classification::Experimental);
}
Err("unknown classification")
}
}
impl fmt::Display for Classification {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let pretty = match self {
Classification::Breaking => "❗ BREAKING ❗",
Classification::Feature => "🚀 Features",
Classification::Fix => "🐛 Fixes",
Classification::Configuration => "📃 Configuration",
Classification::Maintenance => "🛠 Maintenance",
Classification::Documentation => "📚 Documentation",
Classification::Experimental => "🧪 Experimental",
};
write!(f, "{}", pretty)
}
}
#[derive(Debug, clap::Parser)]
pub struct Create {
/// Use the current branch as the file name
#[clap(short = 'b', long = "with-branch-name")]
with_branch_name: bool,
/// The classification of the changeset
#[clap(short = 'c', long = "class")]
classification: Option<Classification>,
}
#[derive(Debug, clap::Parser)]
pub struct Changelog {
/// The version number of the release
version: String,
}
async fn github_graphql_post_request(
token: &str,
url: &str,
request_body: &graphql_client::QueryBody<Variables>,
) -> Result<graphql_client::Response<ResponseData>, ::reqwest::Error> {
let client = Client::builder().build()?;
let res = client
.post(url)
.header(
"User-Agent",
format!("github {} releasing", REPO_WITH_OWNER),
)
.header("Authorization", format!("Bearer {}", token))
.json(request_body)
.send()
.await?;
let response_body: graphql_client::Response<ResponseData> = res.json().await?;
Ok(response_body)
}
fn get_changesets_dir() -> camino::Utf8PathBuf {
PKG_PROJECT_ROOT.join(".changesets")
}
impl Create {
pub fn run(&self) -> Result<()> {
let changesets_dir_path = get_changesets_dir();
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(async {
let items = Classification::ORDERED_ALL;
let selected_classification: Classification = if self.classification.is_some() {
println!(
"{} {} {}",
style("Using").yellow(),
style(self.classification.unwrap()).cyan(),
style("classification from CLI arguments").yellow()
);
self.classification.unwrap()
} else {
let selection = Select::with_theme(&ColorfulTheme::default())
.with_prompt("What is the classification?")
.items(items)
.interact_on_opt(&Term::stderr())?
.expect("no classification was selected");
items[selection]
};
let gh_cli_path = which::which("gh");
let use_gh_cli = if gh_cli_path.is_err() {
println!("{}", style("If you install and authorize the GitHub CLI, we can use information from the PR!").underlined().on_blue().yellow().bright().bold());
println!(" Find more details at: {}", style("https://cli.github.com/").bold());
false
} else if Confirm::new()
.default(true)
.with_prompt(format!(
"{}",
style("You have the GitHub CLI installed! Can we use it to access the API and pre-populate values for the changelog?").yellow(),
))
.interact()?
{
println!("{}", style("Great! That'll make your life easier.").yellow());
true
} else {
println!("Ok! We won't talk to GitHub, so you'll be on your own.");
false
};
let use_branch_name = if self.with_branch_name {
true
} else {
let selection = Select::with_theme(&ColorfulTheme::default())
.default(0)
.with_prompt("How do you want to name it?")
.items(&["Branch Name", "Random Name"])
.interact_on_opt(&Term::stderr())?
.expect("no naming convention was selected");
// Match if the first index was "Branch Name" (from `items`)
selection == 0
};
// Get the branch name, optionally, using `git rev-parse --abbrev-ref HEAD`.
let branch_name: Option<String> = match std::process::Command::new("git")
.arg("rev-parse")
.arg("--abbrev-ref")
.arg("HEAD")
.output()
{
Ok(output) => {
if output.status.success() {
Some(String::from_utf8(output.stdout).unwrap().trim().to_string())
} else {
None
}
}
Err(e) => panic!("failed to run git rev-parse: {e}"),
};
// If the branch name worked out, we'll use that, otherwise, random.
let initial_text = if use_branch_name && branch_name.is_some() {
let branch_regex = regex::Regex::new(r"[^a-z0-9]")?;
branch_regex.replace_all(
branch_name
.clone()
.unwrap()
.to_lowercase()
.as_str(), "_"
).to_string()
} else {
memorable_wordlist::snake_case(48)
};
let input: String = Input::new()
.with_prompt(format!(
"{} {} {}",
style("Any edits to the slug for the").yellow(),
style(selected_classification.to_string()).cyan(),
style("changeset?").yellow(),
))
.with_initial_text(initial_text)
.interact_text()?;
let new_changeset_path = changesets_dir_path.join(format!(
"{}_{}.md",
selected_classification.as_short_name(),
input
));
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE)?;
let default_context = TemplateContext {
title: String::from("Brief but complete sentence that stands on its own"),
issues: vec!(TemplateResource {
url: format!("https://github.com/{}/issues/ISSUE_NUMBER", REPO_WITH_OWNER),
number: String::from("ISSUE_NUMBER"),
}),
pulls: vec!(TemplateResource {
url: format!("https://github.com/{}/pull/PULL_NUMBER", REPO_WITH_OWNER),
number: String::from("PULL_NUMBER"),
}),
author: String::from("AUTHOR"),
body: String::from("A description of the fix which stands on its own separate from the title. It should embrace the use of Markdown to stylize the commentary so it looks great on the GitHub Releases, when shared on social cards, etc."),
};
let context: TemplateContext = if use_gh_cli && branch_name.is_some() {
match get_token_from_gh_cli(gh_cli_path.unwrap()) {
Err(_) => default_context,
Ok(gh_token) => {
// Good for testing. ;)
// let search = format!("repo:{} is:pr head:{}", REPO_WITH_OWNER, "pubmodmatt/hot_reload_operations");
let search = format!("repo:{} is:open is:pr head:{}", REPO_WITH_OWNER, &branch_name.as_ref().unwrap());
let query = <MatchingPullRequest as graphql_client::GraphQLQuery>::build_query(Variables { search });
let response = github_graphql_post_request(&gh_token, "https://api.github.com/graphql", &query).await?;
// There's only ever one result because the query only asks for the first match (`first: 1`).
let all_prs_info = pr_info_from_response(response.data.expect("no data"));
let pr_info_opt = all_prs_info.first();
match pr_info_opt {
Some(pr_info) => {
let issues = pr_info.closing_issues_references.as_ref().map(|i| {
i.nodes.as_ref().unwrap().iter().map(|j| {
j.as_ref().unwrap()
})
}).unwrap().filter(|p| {
p.repository.name_with_owner == REPO_WITH_OWNER
}).map(|p| {
TemplateResource {
number: p.number.to_string(),
url: p.url.to_string(),
}
});
let pr_body = pr_info.body.clone().replace("\r\n", "\n");
// Remove the trailing part of the checklist from the PR body.
let pr_body_meta_regex = regex::Regex::new(
r"(?m)<!-- start metadata -->\n---",
)?;
// Remove all the "Fixes" references, since we're already going to reference
// those in the course of generating the template.
let pr_body_fixes_regex = regex::Regex::new(
r"(?m)^(- )?Fix(es)? #.*$",
)?;
let index = pr_body_meta_regex.find(&pr_body).map(|mat| mat.start()).unwrap_or(pr_body.len());
// Run the above Regex and trim the blurb.
let clean_pr_body = pr_body_fixes_regex
.replace_all(&pr_body[..index].trim(), "")
.trim()
.to_string();
TemplateContext {
title: pr_info.title.clone(),
issues: issues.collect_vec(),
pulls: vec!(TemplateResource {
number: pr_info.number.to_string(),
url: pr_info.url.to_string(),
}),
body: clean_pr_body,
author: pr_info.author.as_ref().unwrap().login.to_string(),
}
},
None => {
// TODO In a follow-up we should figure out how forks work with the GitHub API.
println!(
"{} {} {} {} {}",
style("The changeset will be").magenta(),
style("generic").red().bold(),
style("as we didn't find any PRs on GitHub for").magenta(),
style(&branch_name.as_ref().unwrap()).green(),
style("! (We don't support forks right now.)")
);
default_context
}
}
}
}
} else {
default_context
};
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt.render("message", &context).unwrap().replace('\r', "");
if new_changeset_path.exists() {
panic!("The desired changeset name already exists and proceeding would clobber it. Edit or delete the existing changeset with the same name.");
}
fs::write(&new_changeset_path, &rendered_template)?;
println!(
"{} {} {} {}",
style("Created new").yellow(),
style(selected_classification.to_string()).cyan(),
style("changeset named").yellow(),
style(&new_changeset_path).cyan(),
);
if Confirm::new()
.default(true)
.with_prompt(format!(
"{} {} {} {}?",
style("Do you want to open").yellow(),
style(&new_changeset_path).cyan(),
style("in").yellow(),
style("$EDITOR").green(),
))
.interact()?
{
if let Some(rv) = Editor::new()
.extension(".md")
.trim_newlines(true)
.edit(&rendered_template)
.unwrap()
{
fs::write(&new_changeset_path, rv)?;
} else {
println!(
"{}",
style("Editing was aborted and changes were not saved.")
.red()
.on_yellow()
);
}
}
println!(
"{}",
style("Be sure to finalize the changeset, commit it and push it to Git.")
.magenta()
);
Ok(())
})
}
}
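/// Fetch a GitHub token by shelling out to `gh auth token`.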
fn get_token_from_gh_cli(gh_cli_path: PathBuf) -> Result<String, &'static str> {
let result = std::process::Command::new(gh_cli_path)
.args(["auth", "token"])
.output()
.expect("this didn't go well");
if !result.status.success() {
Err("We couldn't run `gh auth token`. Perhaps run `gh auth login`.")
} else {
let gh_token_with_nl =
String::from_utf8(result.stdout).expect("token output should be valid UTF-8");
let gh_token = gh_token_with_nl.trim().to_string();
if gh_token.is_empty() {
Err("Doesn't look like you have a valid token. Run `gh auth login`.")
} else {
Ok(gh_token)
}
}
}
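/// Extract pull request info nodes from a GitHub GraphQL search response.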
fn pr_info_from_response(
response_data: ResponseData,
) -> Vec<matching_pull_request::matching_pull_request::PrInfo> {
response_data.search.nodes.map(|node| {
let maybe_prs = node.into_iter().map(|p| {
p.unwrap()
});
maybe_prs.filter_map(|maybe_pr| {
if let matching_pull_request::matching_pull_request::PrSearchResultNodes::PullRequest(info) = maybe_pr {
Some(info)
} else {
None
}
}).collect()
}).unwrap_or_default()
}
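/// Read every entry in the changesets directory and parse it into a `Changeset`.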
fn get_changeset_files() -> Vec<Changeset> {
fs::read_dir(get_changesets_dir())
.unwrap()
.collect::<std::io::Result<Vec<_>>>()
.unwrap()
.iter()
.filter_map(|file_entry| file_entry.try_into().ok())
.collect::<Vec<Changeset>>()
}
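/// Render the sorted changesets into a single Markdown block, emitting each
/// classification heading once per group.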
fn generate_content_from_changeset_files(changelog_entries: &[Changeset]) -> String {
let mut changelog_entries = changelog_entries.to_owned();
changelog_entries.sort();
let mut output: String = String::from("");
// We'll use this to track the classification so each heading is printed only once.
let mut last_kind = None;
for entry in changelog_entries {
// For each classification change, print the heading.
if last_kind.is_none() || Some(entry.classification) != last_kind {
let new_header = format!("## {}\n\n", entry.classification);
output += &*new_header;
}
last_kind = Some(entry.classification);
// Add the entry's content to the block of text!
let entry = format!("{}\n\n", entry.content);
output += &*entry;
}
output
}
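/// Delete the given changeset files, returning `true` only if all deletions succeeded.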
fn remove_changeset_files(changesets: &Vec<Changeset>) -> bool {
let mut failure: bool = false;
for changeset in changesets {
if remove_file(&changeset.path).is_ok() {
println!("Deleted {:?}", changeset.path);
} else {
eprintln!("Could not delete {:?}", changeset.path);
failure = true;
}
}
!failure
}
impl Changelog {
pub fn run(&self) -> Result<()> {
println!("finalizing changelog");
let changelog = std::fs::read_to_string("./CHANGELOG.md")?;
let semver_heading = "This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).";
let new_changelog = slurp_and_remove_changesets();
let update_regex =
regex::Regex::new(format!("(?ms){}\n", regex::escape(semver_heading)).as_str())?;
let updated = update_regex.replace(
&changelog,
format!(
"{}\n\n# [{}] - {}\n\n{}\n",
semver_heading,
self.version.as_str(),
chrono::Utc::now().date_naive(),
&new_changelog,
),
);
std::fs::write("./CHANGELOG.md", updated.to_string())?;
Ok(())
}
}
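/// Collect all changesets into a single changelog block, then delete the source files.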
pub fn slurp_and_remove_changesets() -> String {
let changesets = get_changeset_files();
let content = generate_content_from_changeset_files(&changesets);
remove_changeset_files(&changesets);
content
}
#[allow(clippy::derive_ord_xor_partial_ord)]
#[derive(Clone, Debug, Eq, Ord)]
struct Changeset {
classification: Classification,
content: String,
path: PathBuf,
}
impl std::cmp::PartialEq for Changeset {
fn eq(&self, other: &Self) -> bool {
self.classification == other.classification
}
}
impl std::cmp::PartialOrd for Changeset {
fn partial_cmp(&self, other: &Changeset) -> Option<std::cmp::Ordering> {
self.classification.partial_cmp(&other.classification)
}
}
impl TryFrom<&DirEntry> for Changeset {
type Error = String;
fn try_from(entry: &DirEntry) -> std::result::Result<Self, Self::Error> {
let path = entry.path();
let content = fs::read_to_string(&path).unwrap().trim().to_string();
Ok(Changeset {
classification: entry
.file_name()
.to_string_lossy()
.parse()
.map_err(|e: &str| e.to_string())?,
content,
path,
})
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn it_templatizes_with_multiple_issues_in_title_and_multiple_prs_in_footer() {
let test_context = TemplateContext {
title: String::from("TITLE"),
issues: vec![
TemplateResource {
url: format!(
"https://github.com/{}/issues/ISSUE_NUMBER1",
String::from("REPO_WITH_OWNER")
),
number: String::from("ISSUE_NUMBER1"),
},
TemplateResource {
url: format!(
"https://github.com/{}/issues/ISSUE_NUMBER2",
String::from("REPO_WITH_OWNER")
),
number: String::from("ISSUE_NUMBER2"),
},
],
pulls: vec![
TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER1",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER1"),
},
TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER2",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER2"),
},
],
author: String::from("AUTHOR"),
body: String::from("BODY"),
};
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE).unwrap();
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt
.render("message", &test_context)
.unwrap()
.replace('\r', "");
insta::assert_snapshot!(rendered_template);
}
#[test]
fn it_templatizes_with_multiple_prs_in_footer() {
let test_context = TemplateContext {
title: String::from("TITLE"),
issues: vec![TemplateResource {
url: format!(
"https://github.com/{}/issues/ISSUE_NUMBER",
String::from("REPO_WITH_OWNER")
),
number: String::from("ISSUE_NUMBER"),
}],
pulls: vec![
TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER1",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER1"),
},
TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER2",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER2"),
},
],
author: String::from("AUTHOR"),
body: String::from("BODY"),
};
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE).unwrap();
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt
.render("message", &test_context)
.unwrap()
.replace('\r', "");
insta::assert_snapshot!(rendered_template);
}
#[test]
fn it_templatizes_with_multiple_issues_in_title() {
let test_context = TemplateContext {
title: String::from("TITLE"),
issues: vec![
TemplateResource {
url: format!(
"https://github.com/{}/issues/ISSUE_NUMBER1",
String::from("REPO_WITH_OWNER")
),
number: String::from("ISSUE_NUMBER1"),
},
TemplateResource {
url: format!(
"https://github.com/{}/issues/ISSUE_NUMBER2",
String::from("REPO_WITH_OWNER")
),
number: String::from("ISSUE_NUMBER2"),
},
],
pulls: vec![TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER"),
}],
author: String::from("AUTHOR"),
body: String::from("BODY"),
};
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE).unwrap();
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt
.render("message", &test_context)
.unwrap()
.replace('\r', "");
insta::assert_snapshot!(rendered_template);
}
#[test]
fn it_templatizes_with_prs_in_title_when_empty_issues() {
let test_context = TemplateContext {
title: String::from("TITLE"),
issues: vec![],
pulls: vec![TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER"),
}],
author: String::from("AUTHOR"),
body: String::from("BODY"),
};
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE).unwrap();
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt
.render("message", &test_context)
.unwrap()
.replace('\r', "");
insta::assert_snapshot!(rendered_template);
}
#[test]
fn it_templatizes_without_prs_in_title_when_issues_present() {
let test_context = TemplateContext {
title: String::from("TITLE"),
issues: vec![TemplateResource {
url: format!(
"https://github.com/{}/pull/ISSUE_NUMBER",
String::from("REPO_WITH_OWNER")
),
number: String::from("ISSUE_NUMBER"),
}],
pulls: vec![TemplateResource {
url: format!(
"https://github.com/{}/pull/PULL_NUMBER",
String::from("REPO_WITH_OWNER")
),
number: String::from("PULL_NUMBER"),
}],
author: String::from("AUTHOR"),
body: String::from("BODY"),
};
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE).unwrap();
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt
.render("message", &test_context)
.unwrap()
.replace('\r', "");
insta::assert_snapshot!(rendered_template);
}
#[test]
fn it_templatizes_with_neither_issues_or_prs() {
let test_context = TemplateContext {
title: String::from("TITLE"),
issues: vec![],
pulls: vec![],
author: String::from("AUTHOR"),
body: String::from("BODY"),
};
let mut tt = TinyTemplate::new();
tt.add_template("message", EXAMPLE_TEMPLATE).unwrap();
tt.set_default_formatter(&format_unescaped);
let rendered_template = tt
.render("message", &test_context)
.unwrap()
.replace('\r', "");
insta::assert_snapshot!(rendered_template);
}
}
```
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
```markdown
# Changelog
All notable changes to this project will be documented in this file.
This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
# [1.1.1] - 2025-10-21
## 🐛 Fixes
### fix docker image ignoring port setting - @DaleSeo PR #467
The Docker image had `APOLLO_MCP_TRANSPORT__PORT=8000` baked in as an environment variable in `flake.nix`. Since environment variables take precedence over config file settings (by design in our config loading logic), users were unable to override the port in their `config.yaml` when running the Docker container.
# [1.1.0] - 2025-10-16
## ❗ BREAKING ❗
### Change default port from 5000 to 8000 - @DaleSeo PR #417
The default server port has been changed from `5000` to `8000` to avoid conflicts with common development tools and services that typically use port 5000 (such as macOS AirPlay, Flask development servers, and other local services).
**Migration**: If you were relying on the default port 5000, you can continue using it by explicitly setting the port in your configuration file or command line arguments.
- Before
```yaml
transport:
type: streamable_http
```
- After
```yaml
transport:
type: streamable_http
port: 5000
```
## 🚀 Features
### feat: Add configuration option for metric temporality - @swcollard PR #413
Creates a new configuration option for telemetry to set the Metric temporality to either Cumulative (default) or Delta.
* Cumulative - The metric value will be the overall value since the start of the measurement.
* Delta - The metric will be the difference in the measurement since the last time it was reported.
Some observability vendors require that one is used over the other, so we want to support the configuration in the MCP Server.
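As a sketch only, building on the OTLP metrics example from 0.9.0 below — the `temporality` key name and its placement are assumptions, not confirmed by this entry:
```yaml
telemetry:
  exporters:
    metrics:
      otlp:
        endpoint: "http://localhost:4317"
        protocol: "grpc"
        # Hypothetical key name; cumulative is the default per this entry
        temporality: delta
```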
### Add support for forwarding headers from MCP clients to GraphQL APIs - @DaleSeo PR #428
Adds opt-in support for dynamic header forwarding, which enables metadata for A/B testing, feature flagging, geo information from CDNs, or internal instrumentation to be sent from MCP clients to downstream GraphQL APIs. It automatically blocks hop-by-hop headers according to the guidelines in [RFC 7230, section 6.1](https://datatracker.ietf.org/doc/html/rfc7230#section-6.1), and it only works with the Streamable HTTP transport.
You can configure it using the `forward_headers` setting:
```yaml
forward_headers:
- x-tenant-id
- x-experiment-id
- x-geo-country
```
Please note that this feature is not intended for passing through credentials as documented in the best practices page.
### feat: Add mcp-session-id header to HTTP request trace attributes - @swcollard PR #421
Includes the value of the [Mcp-Session-Id](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#session-management) HTTP header as an attribute of the trace for HTTP requests to the MCP Server
## 🐛 Fixes
### Fix compatibility issue with VSCode/Copilot - @DaleSeo PR #447
This updates Apollo MCP Server’s tool schemas from [Draft 2020-12](https://json-schema.org/draft/2020-12) to [Draft‑07](https://json-schema.org/draft-07), which is more widely supported across different validators. VSCode/Copilot still validate against Draft‑07, so they rejected Apollo MCP Server’s tools. Our JSON schemas don’t rely on newer features, so downgrading improves compatibility across MCP clients with no practical impact.
## 🛠 Maintenance
### Update rmcp sdk to version 0.8.x - @swcollard PR #433
Bumping the Rust MCP SDK version used in this server up to 0.8.x
### chore: Only initialize a single HTTP client for graphql requests - @swcollard PR #412
Currently the MCP Server spins up a new HTTP client every time it wants to make a request to the downstream GraphQL endpoint. This change creates a static reqwest client that gets initialized using LazyLock and reused on each GraphQL request.
This change is based on the suggestion from the reqwest [documentation](https://docs.rs/reqwest/latest/reqwest/struct.Client.html)
> "The Client holds a connection pool internally, so it is advised that you create one and reuse it."
# [1.0.0] - 2025-10-01
# Apollo MCP Server 1.0 Release Notes
Apollo MCP Server 1.0 marks the **General Availability (GA)** milestone, delivering a production-ready Model Context Protocol server that seamlessly bridges GraphQL APIs with AI applications. This release transforms how AI agents interact with GraphQL APIs through standardized MCP tools, enabling natural language access to your GraphQL operations.
## 🎯 GA Highlights
### **Production-Ready MCP Protocol Implementation**
Apollo MCP Server 1.0 provides full compliance with the [MCP specification](https://modelcontextprotocol.io/specification/2025-06-18), enabling AI applications to discover and invoke GraphQL operations through standardized protocols. The server acts as a translation layer, converting GraphQL operations into MCP tools that AI models can execute through natural language requests.
**Key Benefits:**
- **Standardized AI Integration**: No more custom API bridges - use the industry-standard MCP protocol
- **Automatic Tool Discovery**: AI agents automatically discover available GraphQL operations as MCP tools
- **Type-Safe Execution**: All operations are validated against your GraphQL schema before execution
- **Enterprise-Ready**: Full OAuth 2.1 authentication and comprehensive observability
### **🚀 Multi-Transport Architecture**
Flexible communication options for every deployment scenario:
- **stdio**: Perfect for local development and debugging with MCP Inspector
- **Streamable HTTP**: Production-grade transport with load balancer support and concurrent connections
All transports maintain full MCP protocol compliance while optimizing for specific use cases.
### **🔧 Advanced GraphQL Integration**
**Custom Scalar Support**: Seamlessly handle specialized types like `DateTime`, `UUID`, and domain-specific scalars with automatic JSON Schema mapping.
**Mutation Controls**: Fine-grained security controls to prevent unintended data changes (see the sketch after this list):
- `all`: Enable all mutations (default)
- `none`: Disable all mutations for read-only access
- `allowlist`: Only allow specific mutations
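A minimal sketch, using the `mutation_mode` key from the 0.6.0 example configuration later in this changelog:
```yaml
# Read-only access: disable all mutations
mutation_mode: none
```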
### **📊 Flexible Schema & Operation Management**
**Dual Schema Sources:**
- **Local Files**: Direct schema control for development and offline scenarios
- **Apollo GraphOS**: Centralized schema management with automatic updates via uplink integration
**Multiple Operation Sources:**
- **Local Statement Files**: Hot-reloading `.graphql` files for rapid development
- **Persisted Query Manifests**: Security-focused pre-approved operation execution
- **GraphOS Operation Collections**: Centrally managed operations with automatic polling
- **GraphOS Persisted Queries**: Enterprise-grade operation management
### **🤖 AI-Optimized Introspection Tools**
**Core Tools:**
- **`introspect`**: Comprehensive schema exploration with AI-friendly formatting
- **`execute`**: Safe dynamic operation execution with proper error handling
- **`validate`**: Operation validation without execution to prevent side effects
- **`search`**: Semantic schema search to efficiently find relevant types and fields
**AI Optimizations:**
- **Minified Output**: Configurable minification reduces context window usage by 30%+ while preserving essential information
- **Semantic Search**: Natural language schema exploration with ranked results
### **⚙️ Configuration-Driven Architecture**
**YAML Configuration**: Replace complex command-line arguments with structured, version-controllable configuration files.
**Environment Variable Overrides**: Seamless environment-specific customization using the `APOLLO_MCP_` prefix convention.
**Comprehensive Validation**: Clear error messages and sensible defaults for rapid deployment.
### **🔐 Enterprise Security & Observability**
**OAuth 2.1 Authentication**: Production-ready authentication supporting major identity providers:
- Auth0, WorkOS, Keycloak, Okta
- JWT token validation with audience and scope enforcement
- OIDC discovery for automatic provider configuration
**Health Monitoring**: Kubernetes-ready health checks with configurable liveness and readiness probes.
**OpenTelemetry Integration**: Comprehensive observability with traces, metrics, and events:
- Operation-level performance tracking
- Semantic conventions for HTTP servers when using the Streamable HTTP transport
- OTLP export to any OpenTelemetry-compatible collector
- Integration with existing monitoring infrastructure
**CORS Support**: Enable browser-based MCP clients with comprehensive Cross-Origin Resource Sharing support following Apollo Router patterns.
## 🐛 Fixes
### fix: remove verbose logging - @swcollard PR #401
The tracing-subscriber crate we are using to create logs does not have a configuration to exclude the span name and attributes from the log line. This led to rather verbose logs on app startup which would dump the full operation object into the logs before the actual log line.
This change strips the attributes from the top level spans so that we still have telemetry and tracing during this important work the server is doing, but they don't make it into the logs. The relevant details are provided in child spans after the operation has been parsed so we aren't losing any information other than a large json blob in the top level trace of generating Tools from GraphQL Operations.
## 🛠 Maintenance
### deps: update rust to v1.90.0 - @DaleSeo PR #387
Updates the Rust version to v1.90.0
# [0.9.0] - 2025-09-24
## 🚀 Features
### Prototype OpenTelemetry Traces in MCP Server - @swcollard PR #274
Pulls in new crates and SDKs for prototyping instrumenting the Apollo MCP Server with Open Telemetry Traces.
- Adds new Rust crates to support OTel
- Annotates the execute and call_tool functions with the trace macro
- Adds Axum and Tower middleware for OTel tracing
- Refactors logging so that all the tracing subscribers are set together in a single module.
### Add CORS support - @DaleSeo PR #362
This PR implements comprehensive CORS support for Apollo MCP Server to enable web-based MCP clients to connect without CORS errors. The implementation and configuration draw heavily from the Router's approach. Similar to other features like health checks and telemetry, CORS is supported only for the StreamableHttp transport, making it a top-level configuration.
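As a sketch only — the key names are assumptions modeled on the Router's CORS configuration, which this entry says the implementation draws from:
```yaml
cors:
  # Hypothetical keys following the Router's CORS configuration
  origins:
    - https://studio.apollographql.com
```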
### Enhance tool descriptions - @DaleSeo PR #350
This PR enhances the descriptions of the introspect and search tools to offer clearer guidance for AI models on efficient GraphQL schema exploration patterns.
### Telemetry: Trace operations and auth - @swcollard PR #375
- Adds traces for the MCP server generating Tools from Operations and performing authorization
- Includes the HTTP status code to the top level HTTP trace
### Implement metrics for mcp tool and operation counts and durations - @swcollard PR #297
This PR adds metrics to count and measure request duration to events throughout the MCP server
- apollo.mcp.operation.duration
- apollo.mcp.operation.count
- apollo.mcp.tool.duration
- apollo.mcp.tool.count
- apollo.mcp.initialize.count
- apollo.mcp.list_tools.count
- apollo.mcp.get_info.count
### Adding ability to omit attributes for traces and metrics - @alocay PR #358
Adding ability to configure which attributes are omitted from telemetry traces and metrics.
1. Using a Rust build script (`build.rs`) to auto-generate telemetry attribute code based on the data found in `telemetry.toml`.
2. Utilizing an enum for attributes so typos in the config file raise an error.
3. Omitting trace attributes by filtering it out in a custom exporter.
4. Omitting metric attributes by indicating which attributes are allowed via a view.
5. Created `telemetry_attributes.rs` to map the `TelemetryAttribute` enum to an OTEL `Key`.
The `telemetry.toml` file includes attributes (both for metrics and traces) as well as a list of metrics gathered. An example would look like the following:
```
[attributes.apollo.mcp]
my_attribute = "Some attribute info"
[metrics.apollo.mcp]
some.count = "Some metric count info"
```
This would generate a file that looks like the following:
```
/// All TelemetryAttribute values
pub const ALL_ATTRS: &[TelemetryAttribute; 1usize] = &[
TelemetryAttribute::MyAttribute
];
#[derive(Debug, ::serde::Deserialize, ::schemars::JsonSchema, Clone, Eq, PartialEq, Hash, Copy)]
pub enum TelemetryAttribute {
///Some attribute info
#[serde(alias = "my_attribute")]
MyAttribute,
}
impl TelemetryAttribute {
/// Supported telemetry attribute (tags) values
pub const fn as_str(&self) -> &'static str {
match self {
TelemetryAttribute::MyAttribute => "apollo.mcp.my_attribute",
}
}
}
#[derive(Debug, ::serde::Deserialize, ::schemars::JsonSchema, Clone, Eq, PartialEq, Hash, Copy)]
pub enum TelemetryMetric {
///Some metric count info
#[serde(alias = "some.count")]
SomeCount,
}
impl TelemetryMetric {
/// Converts TelemetryMetric to &str
pub const fn as_str(&self) -> &'static str {
match self {
TelemetryMetric::SomeCount => "apollo.mcp.some.count",
}
}
}
```
An example configuration that omits `tool_name` attribute for metrics and `request_id` for tracing would look like the following:
```
telemetry:
exporters:
metrics:
otlp:
endpoint: "http://localhost:4317"
protocol: "grpc"
omitted_attributes:
- tool_name
tracing:
otlp:
endpoint: "http://localhost:4317"
protocol: "grpc"
omitted_attributes:
- request_id
```
### Adding config option for trace sampling - @alocay PR #366
Adding a configuration option to sample traces. The following options can be used:
1. Ratio-based sampling (a ratio >= 1 means always sample)
2. Always on
3. Always off
Defaults to always on if not provided.
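A hedged sketch of what such a configuration might look like — the `sampler` key name and its placement are assumptions, not confirmed by this entry:
```yaml
telemetry:
  exporters:
    tracing:
      otlp:
        endpoint: "http://localhost:4317"
        protocol: "grpc"
      # Hypothetical key: a ratio below 1 samples that fraction of traces,
      # a ratio >= 1 always samples; always_on/always_off are the other modes.
      sampler: 0.25
```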
## 🐛 Fixes
### Update SDL handling in sdl_to_api_schema function - @lennyburdette PR #365
Loads supergraph schemas using a function that supports various features, including Apollo Connectors. Previously, when supergraph loading failed, the schema was loaded as a standard GraphQL schema, which revealed Federation query planning directives when using the `search` and `introspect` tools.
### Include the cargo feature and TraceContextPropagator to send otel headers downstream - @swcollard PR #307
Inside the reqwest middleware, if the global text_map_propagator is not set, it will no-op and not send the traceparent and tracestate headers to the Router. Adding this is needed to correlate traces from the MCP server to the Router or other downstream APIs.
### Add support for deprecated directive - @esilverm PR #367
Includes any existing `@deprecated` directives in the schema in the minified output of builtin tools. Now operations generated via these tools should take into account deprecated fields when being generated.
## 📃 Configuration
### Add basic config file options to otel telemetry - @swcollard PR #330
Adds new configuration options for setting up telemetry beyond the standard OTEL environment variables needed before.
- Renames `trace` to `telemetry`
- Adds OTLP options for metrics and tracing to choose grpc or http upload protocols and to set the endpoints
- This configuration is all optional, so by default nothing will be logged
### Disable statefulness to fix initialize race condition - @swcollard PR #351
We've been seeing errors with state and session handling in the MCP Server, whether from requests being sent before the initialized notification is processed, or from running a fleet of MCP Server pods behind a round-robin load balancer. A new configuration option under the streamable_http transport, `stateful_mode`, allows disabling session handling, which appears to fix the race condition issue.
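For example, a sketch building on the streamable_http examples elsewhere in this changelog:
```yaml
transport:
  type: streamable_http
  # Disable session handling to avoid the initialize race condition
  stateful_mode: false
```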
## 🛠 Maintenance
### Add tests for server event and SupergraphSdlQuery - @DaleSeo PR #347
This PR adds tests for some uncovered parts of the codebase to check the Codecov integration.
### Fix version on mcp server tester - @alocay PR #374
Add a specific version when calling the mcp-server-tester for e2e tests. The current latest (1.4.1) has an issue, so to avoid problems now and in the future, the test script has been updated to invoke the testing tool via a specific version.
# [0.8.0] - 2025-09-12
## 🚀 Features
### feat: Configuration for disabling authorization token passthrough - @swcollard PR #336
A new optional MCP Server configuration parameter, `transport.auth.disable_auth_token_passthrough` (`false` by default), which when set to `true` will no longer pass validated auth tokens through to the GraphQL API.
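A minimal sketch using the path given above, building on the 0.7.0 auth example later in this changelog:
```yaml
transport:
  type: streamable_http
  auth:
    servers:
      - https://auth.example.com
    audiences:
      - mcp.example.audience
    resource: https://hosted.mcp.server/mcp
    # Stop passing validated auth tokens through to the GraphQL API
    disable_auth_token_passthrough: true
```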
## 🛠 Maintenance
### Configure Codecov with coverage targets - @DaleSeo PR #337
This PR adds `codecov.yml` to set up Codecov with specific coverage targets and quality standards. It helps define clear expectations for code quality. It also includes some documentation about code coverage in `CONTRIBUTING.md` and adds the Codecov badge to `README.md`.
### Implement Test Coverage Measurement and Reporting - @DaleSeo PR #335
This PR adds the bare minimum for code coverage reporting using [cargo-llvm-cov](https://crates.io/crates/cargo-llvm-cov) and integrates with [Codecov](https://www.codecov.io/). It adds a new `coverage` job to the CI workflow that generates and uploads coverage reporting in parallel with existing tests. The setup mirrors that of Router, except it uses `nextest` instead of the built-in test runner and CircleCI instead of GitHub Actions.
### chore: update RMCP dependency ([328](https://github.com/apollographql/apollo-mcp-server/issues/328))
Update the RMCP dependency to the latest version, pulling in newer specification changes.
### ci: Pin stable rust version ([Issue #287](https://github.com/apollographql/apollo-mcp-server/issues/287))
Pins the stable version of Rust to the current latest version to ensure backwards compatibility with future versions.
# [0.7.5] - 2025-09-03
## 🐛 Fixes
### fix: Validate ExecutableDocument in validate tool - @swcollard PR #329
Contains fixes for https://github.com/apollographql/apollo-mcp-server/issues/327
The validate tool was parsing the operation passed into it against the schema, but it wasn't performing the validate function on the ExecutableDocument returned by the Parser. This led to cases where missing required arguments were not caught by the tool.
This change also updates the input schema of the execute tool to make it clearer to the LLM that it needs to provide a valid JSON object
## 🛠 Maintenance
### test: adding a basic manual e2e test for mcp server - @alocay PR #320
Adding some basic e2e tests using [mcp-server-tester](https://github.com/steviec/mcp-server-tester). Currently, the tool does not always exit (ctrl+c is sometimes needed), so this should be run manually.
### How to run tests?
Added a script `run_tests.sh` (may need to run `chmod +x` to run it) to run tests. Basic usage found via `./run_tests.sh -h`. The script does the following:
1. Builds test/config yaml paths and verifies the files exist.
2. Checks if release `apollo-mcp-server` binary exists. If not, it builds the binary via `cargo build --release`.
3. Reads in the template file (used by `mcp-server-tester`) and replaces all `<test-dir>` placeholders with the test directory value. Generates this test server config file and places it in a temp location.
4. Invokes the `mcp-server-tester` via `npx`.
5. On script exit the generated config is cleaned up.
### Example run:
To run the tests for `local-operations` simply run `./run_tests.sh local-operations`
### Update snapshot format - @DaleSeo PR #313
Updates all inline snapshots in the codebase to ensure they are consistent with the latest insta format.
### Hardcoded version strings in tests - @DaleSeo PR #305
The GraphQL tests have hardcoded version strings that we need to update manually each time we release a new version. Since this isn't included in the release checklist, it's easy to miss it and only notice the test failures later.
# [0.7.4] - 2025-08-27
## 🐛 Fixes
### fix: Add missing token propagation for execute tool - @DaleSeo PR #298
The execute tool is not forwarding JWT authentication tokens to upstream GraphQL endpoints, causing authentication failures when using this tool with protected APIs. This PR adds missing token propagation for execute tool.
# [0.7.3] - 2025-08-25
## 🐛 Fixes
### fix: generate openAI-compatible json schemas for list types - @DaleSeo PR #272
The MCP server is generating JSON schemas that don't match OpenAI's function calling specification. It puts `oneOf` at the array level instead of using `items` to define the JSON schemas for the GraphQL list types. While some other LLMs are more flexible about this, it technically violates the [JSON Schema specification](https://json-schema.org/understanding-json-schema/reference/array) that OpenAI strictly follows.
This PR updates the list type handling logic to move `oneOf` inside `items` for GraphQL list types.
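A before/after sketch of the generated schema shape for a list type (the member schemas are illustrative):
```yaml
# Before: oneOf at the array level, which strict validators reject
type: array
oneOf:
  - type: string
  - type: "null"
```
```yaml
# After: oneOf nested inside items, per the JSON Schema specification
type: array
items:
  oneOf:
    - type: string
    - type: "null"
```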
# [0.7.2] - 2025-08-19
## 🚀 Features
### Prevent server restarts while polling collections - @DaleSeo PR #261
Right now, the MCP server restarts whenever there's a connectivity issue while polling collections from GraphOS. This causes the entire server to restart instead of handling the error gracefully.
```
Error: Failed to create operation: Error loading collection: error sending request for url (https://graphql.api.apollographql.com/api/graphql)
Caused by:
Error loading collection: error sending request for url (https://graphql.api.apollographql.com/api/graphql)
```
This PR prevents server restarts by distinguishing between transient errors and permanent errors.
## 🐛 Fixes
### Keycloak OIDC discovery URL transformation - @DaleSeo PR #238
The MCP server currently replaces the entire path when building OIDC discovery URLs. This causes authentication failures for identity providers like Keycloak, which have path-based realms in the URL. This PR updates the URL transformation logic to preserve the existing path from the OAuth server URL.
### fix: build error, let expressions unstable in while - @ThoreKoritzius #263
Fixes a build error caused by unstable `let` expressions in a `while` condition.
Replaced the unstable `while let` chain syntax with a stable alternative, ensuring the code compiles on stable Rust without requiring nightly features.
## 🛠 Maintenance
### Address Security Vulnerabilities - @DaleSeo PR #264
This PR addresses the security vulnerabilities and dependency issues tracked in Dependency Dashboard #41 (https://osv.dev/vulnerability/RUSTSEC-2024-0388).
- Replaced the unmaintained `derivative` crate with the `educe` crate.
- Updated the `tantivy` crate.
# [0.7.1] - 2025-08-13
## 🚀 Features
### feat: Pass `remote-mcp` mcp-session-id header along to GraphQL request - @damassi PR #236
This adds support for passing the `mcp-session-id` header through from `remote-mcp` via the MCP client config. This header [originates from the underlying `@modelcontextprotocol/sdk` library](https://github.com/modelcontextprotocol/typescript-sdk/blob/a1608a6513d18eb965266286904760f830de96fe/src/client/streamableHttp.ts#L182), invoked from `remote-mcp`.
With this change it is possible to correlate requests from MCP clients through to the final GraphQL server destination.
## 🐛 Fixes
### fix: Valid token fails validation with multiple audiences - @DaleSeo PR #244
Valid tokens are failing validation with the following error when the JWT tokens contain an audience claim as an array.
```
JSON error: invalid type: sequence, expected a string at line 1 column 97
```
According to [RFC 7519 Section 4.1.3](https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.3), the audience claim can be either a single string or an array of strings. However, our implementation assumes it will always be a string, which is causing this JSON parsing error.
This fix updates the `Claims` struct to use `Vec<String>` instead of `String` for the `aud` field, along with a custom deserializer to handle both string and array formats.
### fix: Add custom deserializer to handle APOLLO_UPLINK_ENDPOINTS environment variable parsing - @swcollard PR #220
The APOLLO_UPLINK_ENDPOINTS environment variable has historically been a comma separated list of URL strings.
The move to yaml configuration allows us to more directly define the endpoints as a Vec.
This fix introduces a custom deserializer for the `apollo_uplink_endpoints` config field that can handle both the environment variable comma separated string, and the yaml-based list.
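Both accepted forms, as a sketch (the endpoint URLs and top-level placement are illustrative):
```yaml
# Comma-separated string, matching the historical environment variable format
apollo_uplink_endpoints: "https://uplink1.example.com/,https://uplink2.example.com/"
---
# Equivalent YAML list form
apollo_uplink_endpoints:
  - https://uplink1.example.com/
  - https://uplink2.example.com/
```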
# [0.7.0] - 2025-08-04
## 🚀 Features
### feat: add mcp auth - @nicholascioli PR #210
The MCP server can now be configured to act as an OAuth 2.1 resource server, following
guidelines from the official MCP specification on Authorization / Authentication (see
[the spec](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization)).
To configure this new feature, a new `auth` section has been added to the SSE and
Streamable HTTP transports. Below is an example configuration using Streamable HTTP:
```yaml
transport:
type: streamable_http
auth:
# List of upstream delegated OAuth servers
# Note: These need to support the OIDC metadata discovery endpoint
servers:
- https://auth.example.com
# List of accepted audiences from upstream signed JWTs
# See: https://www.ory.sh/docs/hydra/guides/audiences
audiences:
- mcp.example.audience
# The externally available URL pointing to this MCP server. Can be `localhost`
# when testing locally.
# Note: Subpaths must be preserved here as well. So append `/mcp` if using
# Streamable HTTP or `/sse` if using SSE.
resource: https://hosted.mcp.server/mcp
# Optional link to more documentation relating to this MCP server.
resource_documentation: https://info.mcp.server
# List of queryable OAuth scopes from the upstream OAuth servers
scopes:
- read
- mcp
- profile
```
## 🐛 Fixes
### Setting input_schema properties to empty when operation has no args ([Issue #136](https://github.com/apollographql/apollo-mcp-server/issues/136)) ([PR #212](https://github.com/apollographql/apollo-mcp-server/pull/212))
To support certain scenarios where a client fails on an omitted `properties` field within `input_schema`, the field is now set to an empty map (`{}`) instead. While a missing `properties` field is allowed by the specification, this will unblock
certain users and allow them to use the MCP server.
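A sketch of the resulting tool input schema for an operation that takes no arguments (shown in YAML for readability):
```yaml
input_schema:
  type: object
  # Previously omitted entirely; now always present, even when empty
  properties: {}
```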
# [0.6.1] - 2025-07-29
## 🐛 Fixes
### Handle headers from config file - @tylerscoville PR #213
Fix an issue where the server crashes when headers are set in the config file
### Handle environment variables when no config file is provided - @DaleSeo PR #211
Fix an issue where the server fails with the message "Missing environment variable: APOLLO_GRAPH_REF," even when the variables are properly set.
## 🚀 Features
### Health Check Support - @DaleSeo PR #209
Health reporting functionality has been added to make the MCP server ready for production deployment with proper health monitoring and Kubernetes integration.
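A hypothetical sketch — the key names here are assumptions, not confirmed by this entry:
```yaml
# Hypothetical keys; enables the health endpoint used by Kubernetes probes
health_check:
  enabled: true
```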
# [0.6.0] - 2025-07-14
## ❗ BREAKING ❗
### Replace CLI flags with a configuration file - @nicholascioli PR #162
All command line arguments are now removed and replaced with equivalent configuration
options. The Apollo MCP server only accepts a single argument, which is a path to a
configuration file. An empty file may be passed, as all options have sane defaults
that follow the previous argument defaults.
All options can be overridden by environment variables. They are of the following
form:
- Prefixed by `APOLLO_MCP_`
- Suffixed by the config equivalent path, with `__` marking nested options.
E.g. The environment variable to change the config option `introspection.execute.enabled`
would be `APOLLO_MCP_INTROSPECTION__EXECUTE__ENABLED`.
Below is a valid configuration file with some options filled out:
```yaml
custom_scalars: /path/to/custom/scalars
endpoint: http://127.0.0.1:4000
graphos:
apollo_key: some.key
apollo_graph_ref: example@graph
headers:
X-Some-Header: example-value
introspection:
execute:
enabled: true
introspect:
enabled: false
logging:
level: info
operations:
source: local
paths:
- /path/to/operation.graphql
- /path/to/other/operation.graphql
overrides:
disable_type_description: false
disable_schema_description: false
enable_explorer: false
mutation_mode: all
schema:
source: local
path: /path/to/schema.graphql
transport:
type: streamable_http
address: 127.0.0.1
port: 5000
```
## 🚀 Features
### Validate tool for verifying graphql queries before executing them - @swcollard PR #203
The introspection options in the MCP server provide introspect, execute, and search tools. The LLM often tries to validate its queries by just executing them. This may not be desired (there might be side effects, for example). This feature adds a `validate` tool so the LLM can validate the operation without actually hitting the GraphQL endpoint. It first validates the syntax of the operation, and then checks it against the introspected schema.
### Minify introspect return value - @pubmodmatt PR #178
The `introspect` and `search` tools now have an option to minify results. Minified GraphQL SDL takes up less space in the context window.
### Add search tool - @pubmodmatt PR #171
A new experimental `search` tool has been added that allows the AI model to specify a set of terms to search for in the GraphQL schema. The top types matching that search are returned, as well as enough information to enable creation of GraphQL operations involving those types.
# [0.5.2] - 2025-07-10
## 🐛 Fixes
### Fix ServerInfo - @pubmodmatt PR #183
The server will now report the correct server name and version to clients, rather than the Rust MCP SDK name and version.
# [0.5.1] - 2025-07-08
## 🐛 Fixes
### Fix an issue with rmcp 0.2.x upgrade - @pubmodmatt PR #181
Fix an issue where the server was unresponsive to external events such as changes to operation collections.
# [0.5.0] - 2025-07-08
## ❗ BREAKING ❗
### Deprecate -u,--uplink argument and use default collection - @Jephuff PR #154
`--uplink` and `-u` are deprecated and will act as an alias for `--uplink-manifest`. If a schema isn't provided, it will get fetched from uplink by default, and `--uplink-manifest` can be used to fetch the persisted queries from uplink.
The server will now default to the default MCP tools from operation collections.
## 🚀 Features
### Add --version argument - @Jephuff PR #154
`apollo-mcp-server --version` will print the version of apollo-mcp-server currently installed
### Support operation variable comments as description overrides - @alocay PR #164
Operation comments for variables will now act as overrides for variable descriptions
### Include operation name with GraphQL requests - @DaleSeo PR #166
Include the operation name with GraphQL requests if it's available.
```diff
{
"query":"query GetAlerts(: String!) { alerts(state: ) { severity description instruction } }",
"variables":{
"state":"CO"
},
"extensions":{
"clientLibrary":{
"name":"mcp",
"version": ...
}
},
+ "operationName":"GetAlerts"
}
```
## 🐛 Fixes
### The execute tool handles invalid operation types - @DaleSeo PR #170
The execute tool returns an invalid parameters error when the operation type does not match the mutation mode.
### Skip unnamed operations and log a warning instead of crashing - @DaleSeo PR #173
Unnamed operations are now skipped with a warning instead of causing the server to crash
### Support retaining argument descriptions from schema for variables - @alocay PR #147
Use descriptions for arguments from schema when building descriptions for operation variables.
### Invalid operation should not crash the MCP Server - @DaleSeo PR #176
Gracefully handle and skip invalid GraphQL operations to prevent MCP server crashes during startup or runtime.
# [0.4.2] - 2025-06-24
## 🚀 Features
### Pass in --collection default to use default collection - @Jephuff PR #151
--collection default will use the configured default collection on the graph variant specified by the --apollo-graph-ref arg
# [0.4.1] - 2025-06-20
## 🐛 Fixes
### Fix tool update on every poll - @Jephuff PR #146
Only update the tool list if an operation was removed, changed, or added.
# [0.4.0] - 2025-06-17
## 🚀 Features
### Add `--collection <COLLECTION_ID>` as another option for operation source - @Jephuff PR #118
Use operation collections as the source of operations for your MCP server. The server will watch for changes and automatically update when you change your operation collection.
### Allow overriding registry endpoints - @Jephuff PR #134
Set APOLLO_UPLINK_ENDPOINTS and APOLLO_REGISTRY_URL to override the endpoints for fetching schemas and operations
### Add client metadata to GraphQL requests - @pubmodmatt PR #137
The MCP Server will now identify itself to Apollo Router through the `ApolloClientMetadata` extension. This allows traffic from MCP to be identified in the router, for example through telemetry.
### Update license to MIT - @kbychu PR #122
The Apollo MCP Server is now licensed under MIT instead of ELv2
## 🐛 Fixes
### Fix GetAstronautsCurrentlyInSpace query - @pubmodmatt PR #114
The `GetAstronautsCurrentlyInSpace` query in the Quickstart documentation was not working.
### Change explorer tool to return URL - @pubmodmatt PR #123
The explorer tool previously opened the GraphQL query directly in the user's browser. Although convenient, this would only work if the MCP Server was hosted on the end user's machine, not remotely. It will now return the URL instead.
### Fix bug in operation directory watching - @pubmodmatt PR #135
Operation directory watching would not trigger an update of operations in some cases.
### fix: handle headers with colons in value - @DaleSeo PR #128
The MCP server won't crash when a header's value contains colons.
## 🛠 Maintenance
### Automate changesets and changelog - @pubmodmatt PR #107
Contributors can now generate a changeset file automatically with:
```console
cargo xtask changeset create
```
This will generate a file in the `.changesets` directory, which can be added to the pull request.
## [0.3.0] - 2025-05-29
### 🚀 Features
- Implement the Streamable HTTP transport. Enable with `--http-port` and/or `--http-address`. (#98)
- Include both the type description and field description in input schema (#100)
- Hide String, ID, Int, Float, and Boolean descriptions in input schema (#100)
- Set the `readOnlyHint` tool annotation for tools based on GraphQL query operations (#103)
### 🐛 Fixes
- Fix error with recursive input types (#100)
## [0.2.1] - 2025-05-27
### 🐛 Fixes
- Reduce the log level of many messages emitted by the server so INFO is less verbose, and add a `--log` option to specify the log level used by the MCP Server (default is INFO) (#82)
- Ignore mutations and subscriptions rather than erroring out (#91)
- Silence `__typename` used in operations errors (#79)
- Fix issues with the `introspect` tool. (#83)
- The tool was not working when there were top-level subscriptions in the schema
- Argument types were not being resolved correctly
- Improvements to operation loading (#80)
- When specifying multiple operation paths, all paths were reloaded when any one changed
- Many redundant events were sent on startup, causing verbose logging about loaded operations
- Better error handling for missing, invalid, or empty operation files
- The `execute` tool did not handle variables correctly (#77 and #93)
- Cycles in schema type definitions would lead to stack overflow (#74)
## [0.2.0] - 2025-05-21
### 🚀 Features
- The `--operations` argument now supports hot reloading and directory paths. If a directory is specified, all .graphql files in the directory will be loaded as operations. The running server will update when files are added to or removed from the directory. (#69)
- Add an optional `--sse-address` argument to set the bind address of the MCP server. Defaults to 127.0.0.1. (#63)
### 🐛 Fixes
- Fixed PowerShell script (#55)
- Log to stdout, not stderr (#59)
- The `--directory` argument is now optional. When using the stdio transport, it is recommended to either set this option or use absolute paths for other arguments. (#64)
### 📚 Documentation
- Fix and simplify the example `rover dev --mcp` commands
## [0.1.0] - 2025-05-15
### 🚀 Features
- Initial release of the Apollo MCP Server
```