# Directory Structure

```
├── .gitignore
├── .pre-commit-config.yaml
├── claude_desktop_config.json
├── claude.png
├── LICENSE
├── llamacloud_mcp
│   ├── __init__.py
│   └── main.py
├── pyproject.toml
├── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
.env
data/
llamacloud-testing-service-account.json
test-credentials.py
test-index.py
```

--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------

```yaml
---
default_language_version:
  python: python3

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: check-byte-order-marker
      - id: check-merge-conflict
      - id: check-toml
      - id: check-yaml
        args: [--allow-multiple-documents]
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: mixed-line-ending
      - id: trailing-whitespace

  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.12.1
    hooks:
      - id: ruff-format
      - id: ruff-check
        args: [--fix, --exit-non-zero-on-fix]

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.15.0
    hooks:
      - id: mypy
        additional_dependencies:
          [
            "types-Deprecated",
            "types-PyYAML",
            "types-botocore",
            "types-aiobotocore",
            "types-protobuf==4.24.0.4",
            "types-redis",
            "types-requests",
            "types-setuptools",
            "types-click",
          ]
        args:
          [
            --disallow-untyped-defs,
            --ignore-missing-imports,
            --python-version=3.11,
          ]

  - repo: https://github.com/codespell-project/codespell
    rev: v2.3.0
    hooks:
      - id: codespell
        additional_dependencies: [tomli]

  - repo: https://github.com/pappasam/toml-sort
    rev: v0.23.1
    hooks:
      - id: toml-sort-fix
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# LlamaIndex MCP demos

`llamacloud-mcp` is a tool that allows you to use LlamaCloud as an MCP server. It can be used to query LlamaCloud indexes and extract data from files.

It allows for:
- specifying one or more indexes to use for context retrieval
- specifying one or more extract agents to use for data extraction
- configuring project and organization IDs
- configuring the transport to use for the MCP server (stdio, sse, streamable-http)

## Getting Started

1. Install [uv](https://docs.astral.sh/uv/getting-started/installation/)
2. Run `uvx llamacloud-mcp@latest --help` to see the available options.
3. Configure your MCP client to use the `llamacloud-mcp` server. You can either launch the server directly with `uvx llamacloud-mcp@latest` or use a `claude_desktop_config.json` file to connect with Claude Desktop.

### Usage

```bash
% uvx llamacloud-mcp@latest --help
Usage: llamacloud-mcp [OPTIONS]

Options:
  --index TEXT                    Index definition in the format
                                  name:description. Can be used multiple
                                  times.
  --extract-agent TEXT            Extract agent definition in the format
                                  name:description. Can be used multiple
                                  times.
  --project-id TEXT               Project ID for LlamaCloud
  --org-id TEXT                   Organization ID for LlamaCloud
  --transport [stdio|sse|streamable-http]
                                  Transport to run the MCP server on. One of
                                  "stdio", "sse", "streamable-http".
  --api-key TEXT                  API key for LlamaCloud
  --help                          Show this message and exit.
```
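
For example, a launch serving one index and one extraction agent over stdio might look like this (the index and agent names here are hypothetical placeholders, shown only to illustrate the `name:description` flag format):

```bash
uvx llamacloud-mcp@latest \
  --index "llama-index-docs:LlamaIndex documentation" \
  --extract-agent "invoice-parser:Extract fields from invoice PDFs" \
  --project-id "<your-project-id>" \
  --api-key "<your-api-key>" \
  --transport stdio
```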

### Configure Claude Desktop

1. Install [Claude Desktop](https://claude.ai/download)
2. In the menu bar, choose `Claude` -> `Settings` -> `Developer` -> `Edit Config`. This will open a config file that you can edit in your preferred text editor.
3. Add the following `"mcpServers"` entry to the config file, where each `--index` is a new index tool that you define, and each `--extract-agent` is an extraction agent tool.
4. You'll want your config to look something like this (make sure to replace the placeholder values with your own project, organization, and API key details):

```json
{
    "mcpServers": {
        "llama_index_docs_server": {
            "command": "uvx",
            "args": [
                "llamacloud-mcp@latest",
                "--index",
                "your-index-name:Description of your index",
                "--index",
                "your-other-index-name:Description of your other index",
                "--extract-agent",
                "extract-agent-name:Description of your extract agent",
                "--project-id",
                "<Your LlamaCloud Project ID>",
                "--org-id",
                "<Your LlamaCloud Org ID>",
                "--api-key",
                "<Your LlamaCloud API Key>"
            ]
        },
        "filesystem": {
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "<your directory you want filesystem tool to have access to>"
            ]
        }
    }
}
```

Make sure to **restart Claude Desktop** after configuring the file.

Now you're ready to query! You should see a tool icon with your server listed underneath the query box in Claude Desktop, like this:

![](./claude.png)

## LlamaCloud as an MCP server from scratch

To provide a local MCP server that can be used by a client like Claude Desktop, you can use `mcp-server.py`. This exposes a tool that uses RAG to give Claude up-to-the-second private information it can use to answer questions. You can provide as many of these tools as you want.

### Set up your LlamaCloud index

1. Get a [LlamaCloud](https://cloud.llamaindex.ai/) account
2. [Create a new index](https://docs.cloud.llamaindex.ai/llamacloud/guides/ui) with any data source you want. In our case we used [Google Drive](https://docs.cloud.llamaindex.ai/llamacloud/integrations/data_sources/google_drive) and provided a subset of the LlamaIndex documentation as a source. You could also upload documents directly to the index if you just want to test it out.
3. Get an API key from the [LlamaCloud UI](https://cloud.llamaindex.ai/)

### Set up your MCP server

1. Clone this repository
2. Create a `.env` file and add two environment variables:
    - `LLAMA_CLOUD_API_KEY` - The API key you got in the previous step
    - `OPENAI_API_KEY` - An OpenAI API key. This is used to power the RAG query. You can use [any other LLM](https://docs.llamaindex.ai/en/stable/understanding/using_llms/using_llms/) if you don't want to use OpenAI.

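The snippets below omit the top of `mcp-server.py`. A plausible header, inferred from the imports in `llamacloud_mcp/main.py` and the `python-dotenv` dependency in `pyproject.toml` (treat this as a sketch, not the exact file contents):

```python
import os

from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

# Load LLAMA_CLOUD_API_KEY and OPENAI_API_KEY from the .env file
load_dotenv()
```
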
Now let's look at the code. First you instantiate an MCP server:

```python
mcp = FastMCP('llama-index-server')
```

Then you define your tool using the `@mcp.tool()` decorator:

```python
@mcp.tool()
def llama_index_documentation(query: str) -> str:
    """Search the llama-index documentation for the given query."""

    index = LlamaCloudIndex(
        name="mcp-demo-2",
        project_name="Rando project",
        organization_id="e793a802-cb91-4e6a-bd49-61d0ba2ac5f9",
        api_key=os.getenv("LLAMA_CLOUD_API_KEY"),
    )

    response = index.as_query_engine().query(
        query + " Be verbose and include code examples."
    )

    return str(response)
```

Here our tool is called `llama_index_documentation`; it instantiates a LlamaCloud index called `mcp-demo-2` and then uses it as a query engine to answer the query, including some extra instructions in the prompt. Instructions for setting up your LlamaCloud index are in the previous section.

Finally, you run the server:

```python
if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Note the `stdio` transport, used for communicating with Claude Desktop.

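If you want the same server reachable over HTTP instead, the transport choices supported by `llamacloud_mcp/main.py` suggest a one-line change (a sketch, assuming the same `FastMCP` instance as above):

```python
if __name__ == "__main__":
    # Serve over HTTP using the streamable-http transport instead of stdio
    mcp.run(transport="streamable-http")
```
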
## LlamaIndex as an MCP client

LlamaIndex also has an MCP client integration, meaning you can turn any MCP server into a set of tools that can be used by an agent. You can see this in `mcp-client.py`, where we use the `BasicMCPClient` to connect to our local MCP server.

For simplicity of the demo, we are using the same MCP server we just set up above. Ordinarily, you would not use MCP to connect LlamaCloud to a LlamaIndex agent; you would use a [QueryEngineTool](https://docs.llamaindex.ai/en/stable/examples/agent/openai_agent_with_query_engine/) and pass it directly to the agent, as sketched below.

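For reference, a minimal sketch of that direct route, reusing the index from the server example above (no MCP involved):

```python
from llama_index.core.tools import QueryEngineTool
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

index = LlamaCloudIndex(name="mcp-demo-2", project_name="Rando project")

# Wrap the index's query engine as a tool and hand it straight to the agent.
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="llama_index_documentation",
    description="Search the llama-index documentation for the given query.",
)
```
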
### Set up your MCP server

To provide a local MCP server that can be used by an HTTP client, we need to slightly modify `mcp-server.py` to use the `run_sse_async` method instead of `run`. You can find this in `mcp-http-server.py`.

```python
mcp = FastMCP('llama-index-server', port=8000)

asyncio.run(mcp.run_sse_async())
```

### Get your tools from the MCP server

```python
mcp_client = BasicMCPClient("http://localhost:8000/sse")
mcp_tool_spec = McpToolSpec(
    client=mcp_client,
    # Optional: Filter the tools by name
    # allowed_tools=["tool1", "tool2"],
)

tools = mcp_tool_spec.to_tool_list()
```

### Create an agent and ask a question

```python
llm = OpenAI(model="gpt-4o-mini")

agent = FunctionAgent(
    tools=tools,
    llm=llm,
    system_prompt="You are an agent that knows how to build agents in LlamaIndex.",
)

async def run_agent():
    response = await agent.run("How do I instantiate an agent in LlamaIndex?")
    print(response)

if __name__ == "__main__":
    asyncio.run(run_agent())
```

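The client snippets above also omit their imports. A plausible header for the client script, assuming current package layouts (`llama-index-tools-mcp` provides the MCP client pieces):

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
```
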
You're all set! You can now use the agent to answer questions from your LlamaCloud index.
187 | 
```

--------------------------------------------------------------------------------
/llamacloud_mcp/__init__.py:
--------------------------------------------------------------------------------

```python
```

--------------------------------------------------------------------------------
/claude_desktop_config.json:
--------------------------------------------------------------------------------

```json
{
    "mcpServers": {
        "llama_index_docs_server": {
            "command": "uvx",
            "args": [
                "llamacloud-mcp@latest",
                "--index",
                "llama-index-docs:LlamaIndex documentation",
                "--extract-agent",
                "llama-index-docs-extract:LlamaIndex documentation extract agent",
                "--project-id",
                "<your-project-id>",
                "--org-id",
                "<your-org-id>",
                "--api-key",
                "<your-api-key>",
                "--transport",
                "stdio"
            ]
        },
        "filesystem": {
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-filesystem",
                "<your directory you want filesystem tool to have access to>"
            ]
        }
    }
}
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[dependency-groups]
dev = [
  "pre-commit>=4.2.0"
]

[project]
name = "llamacloud-mcp"
version = "1.0.0"
description = "Expose LlamaCloud services as MCP tools"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
  "llama-index-indices-managed-llama-cloud>=0.6.9",
  "mcp[cli]>=1.6.0",
  "python-dotenv>=1.1.0",
  "llama-index-tools-mcp>=0.1.0",
  "llama-cloud-services",
  "click"
]
license = "MIT"
authors = [
  {name = "Tuana Celik", email = "[email protected]"},
  {name = "Laurie Voss", email = "[email protected]"},
  {name = "Logan Markewich", email = "[email protected]"}
]
keywords = [
  "mcp",
  "llama",
  "llamacloud",
  "llama-cloud",
  "llama-cloud-services"
]

[project.scripts]
llamacloud-mcp = "llamacloud_mcp.main:main"

[tool.hatch.build.targets.sdist]
include = ["llamacloud_mcp/"]
exclude = ["**/BUILD"]

[tool.hatch.build.targets.wheel]
include = ["llamacloud_mcp/"]
exclude = ["**/BUILD"]
```

--------------------------------------------------------------------------------
/llamacloud_mcp/main.py:
--------------------------------------------------------------------------------

```python
import click
import os

from mcp.server.fastmcp import Context, FastMCP
from llama_cloud_services import LlamaExtract
from llama_index.indices.managed.llama_cloud import LlamaCloudIndex
from typing import Awaitable, Callable, Optional


mcp = FastMCP("llama-index-server")


def make_index_tool(
    index_name: str, project_id: Optional[str], org_id: Optional[str]
) -> Callable[[Context, str], Awaitable[str]]:
    async def tool(ctx: Context, query: str) -> str:
        """Retrieve context for the given query from a LlamaCloud index."""
        try:
            await ctx.info(f"Querying index: {index_name} with query: {query}")
            index = LlamaCloudIndex(
                name=index_name,
                project_id=project_id,
                organization_id=org_id,
            )
            response = await index.as_retriever().aretrieve(query)
            return str(response)
        except Exception as e:
            await ctx.error(f"Error querying index: {str(e)}")
            return f"Error querying index: {str(e)}"

    return tool


def make_extract_tool(
    agent_name: str, project_id: Optional[str], org_id: Optional[str]
) -> Callable[[Context, str], Awaitable[str]]:
    async def tool(ctx: Context, file_path: str) -> str:
        """Extract data using a LlamaExtract Agent from the given file."""
        try:
            await ctx.info(
                f"Extracting data using agent: {agent_name} with file path: {file_path}"
            )
            llama_extract = LlamaExtract(
                organization_id=org_id,
                project_id=project_id,
            )
            extract_agent = llama_extract.get_agent(name=agent_name)
            result = await extract_agent.aextract(file_path)
            return str(result)
        except Exception as e:
            await ctx.error(f"Error extracting data: {str(e)}")
            return f"Error extracting data: {str(e)}"

    return tool


@click.command()
@click.option(
    "--index",
    "indexes",
    multiple=True,
    required=False,
    type=str,
    help="Index definition in the format name:description. Can be used multiple times.",
)
@click.option(
    "--extract-agent",
    "extract_agents",
    multiple=True,
    required=False,
    type=str,
    help="Extract agent definition in the format name:description. Can be used multiple times.",
)
@click.option(
    "--project-id", required=False, type=str, help="Project ID for LlamaCloud"
)
@click.option(
    "--org-id", required=False, type=str, help="Organization ID for LlamaCloud"
)
@click.option(
    "--transport",
    default="stdio",
    type=click.Choice(["stdio", "sse", "streamable-http"]),
    help='Transport to run the MCP server on. One of "stdio", "sse", "streamable-http".',
)
@click.option("--api-key", required=False, type=str, help="API key for LlamaCloud")
def main(
    indexes: Optional[list[str]],
    extract_agents: Optional[list[str]],
    project_id: Optional[str],
    org_id: Optional[str],
    transport: str,
    api_key: Optional[str],
) -> None:
    api_key = api_key or os.getenv("LLAMA_CLOUD_API_KEY")
    if not api_key:
        raise click.BadParameter(
            "API key not found. Please provide an API key or set the LLAMA_CLOUD_API_KEY environment variable."
        )
    os.environ["LLAMA_CLOUD_API_KEY"] = api_key

    # Parse indexes into (name, description) tuples
    index_info = []
    if indexes:
        for idx in indexes:
            if ":" not in idx:
                raise click.BadParameter(
                    f"Index '{idx}' must be in the format name:description"
                )
            name, description = idx.split(":", 1)
            index_info.append((name, description))

    # Parse extract agents into (name, description) tuples if provided
    extract_agent_info = []
    if extract_agents:
        for agent in extract_agents:
            if ":" not in agent:
                raise click.BadParameter(
                    f"Extract agent '{agent}' must be in the format name:description"
                )
            name, description = agent.split(":", 1)
            extract_agent_info.append((name, description))

    # Dynamically register a tool for each index
    for name, description in index_info:
        tool_func = make_index_tool(name, project_id, org_id)
        mcp.tool(name=f"query_{name}", description=description)(tool_func)

    # Dynamically register a tool for each extract agent, if any
    for name, description in extract_agent_info:
        tool_func = make_extract_tool(name, project_id, org_id)
        mcp.tool(name=f"extract_{name}", description=description)(tool_func)

    mcp.run(transport=transport)


if __name__ == "__main__":
    main()
```