This is page 1 of 3. Use http://codebase.md/twelvedata/mcp?lines=true&page={x} to view the full context.
# Directory Structure
```
├── .dockerignore
├── .env.template
├── .gitignore
├── .python-version
├── config
│ └── mcpConfigStdio.json
├── docker-compose.yml
├── Dockerfile
├── example.gif
├── extra
│ ├── commands.txt
│ ├── endpoints_spec_en.csv
│ ├── full_descriptions.json
│ ├── instructions.txt
│ └── openapi_clean.json
├── favicon.ico
├── LICENSE
├── pyproject.toml
├── README.md
├── scripts
│ ├── check_embedings.py
│ ├── generate_docs_embeddings.py
│ ├── generate_endpoints_embeddings.py
│ ├── generate_requests_models.py
│ ├── generate_response_models.py
│ ├── generate_tools.py
│ ├── generate.md
│ ├── patch_vector_in_embeddings.py
│ ├── select_embedding.py
│ ├── split_openapi.py
│ └── split_opnapi_by_groups.py
├── src
│ └── mcp_server_twelve_data
│ ├── __init__.py
│ ├── __main__.py
│ ├── common.py
│ ├── doc_tool_remote.py
│ ├── doc_tool_response.py
│ ├── doc_tool.py
│ ├── key_provider.py
│ ├── prompts.py
│ ├── request_models.py
│ ├── response_models.py
│ ├── server.py
│ ├── tools.py
│ ├── u_tool_remote.py
│ ├── u_tool_response.py
│ └── u_tool.py
├── test
│ ├── __init__.py
│ ├── common.py
│ ├── endpoint_pairs.py
│ ├── test_doc_tool.py
│ ├── test_mcp_main.py
│ ├── test_top_n_filter.py
│ └── test_user_plan.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
1 |
```
--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------
```
1 | .git
2 | data/
3 | publish.md
```
--------------------------------------------------------------------------------
/.env.template:
--------------------------------------------------------------------------------
```
1 | LANCE_DB_ENDPOINTS_PATH=
2 | LANCE_DB_DOCS_PATH=
3 | OPENAI_API_KEY=
4 | TWELVE_DATA_API_KEY=
5 | MCP_URL=
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
1 | dist/
2 | publish.md
3 | .idea
4 | src/mcp_server_twelve_data/__pycache__/
5 | src/mcp_server_twelve_data/requests/__pycache__/
6 | src/mcp_server_twelve_data/responses/__pycache__/
7 | scripts/__pycache__/
8 | test/__pycache__/
9 | /data/
10 | /.env
11 | debug.txt
12 | src/resources/docs.lancedb/
13 | src/resources/endpoints.lancedb/
14 |
15 |
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
1 |
2 | # Twelve Data MCP Server
3 |
4 | ## Overview
5 |
6 | The Twelve Data MCP Server integrates seamlessly with the Twelve Data API to provide access to financial market data. It enables retrieval of historical time series, real-time quotes, and instrument metadata for stocks, forex pairs, and cryptocurrencies.
7 |
8 | > Note: This server is currently in early-stage development; features and tools may evolve alongside updates to the Twelve Data API.
9 |
10 | ## Obtaining Your API Key
11 |
12 | To use Twelve Data MCP Server, you must first obtain an API key from Twelve Data:
13 |
14 | 1. Visit [Twelve Data Sign Up](https://twelvedata.com/register?utm_source=github&utm_medium=repository&utm_campaign=mcp_repo).
15 | 2. Create an account or log in if you already have one.
16 | 3. Navigate to your Dashboard and copy your API key.
17 |
18 | Important: Access to specific endpoints or markets may vary depending on your Twelve Data subscription plan.
19 |
20 | ## U-tool
21 | u-tool is an AI-powered universal router for the Twelve Data API that transforms how you access financial data. Instead of navigating 100+ individual endpoints and complex documentation, simply describe what you need in plain English.
22 |
23 | How it works:
24 | 🧠 Natural Language Processing: Understands your request in conversational English
25 | 🔍 Smart Routing: Uses vector search to find the most relevant endpoints from Twelve Data's entire API catalog
26 | 🎯 Intelligent Selection: Leverages OpenAI GPT-4o to choose the optimal method and generate correct parameters
27 | ⚡ Automatic Execution: Calls the appropriate endpoint and returns formatted results
28 |
29 | What you can ask:
30 | 📈 "Show me Apple stock performance this week"
31 | 📊 "Calculate RSI for Bitcoin with 14-day period"
32 | 💰 "Get Tesla's financial ratios and balance sheet"
33 | 🌍 "Compare EUR/USD exchange rates over 6 months"
34 | 🏦 "Find top-performing tech ETFs"
35 |
36 | Supported data categories:
37 | - Market data & quotes • Technical indicators (100+)
38 | - Fundamental data & financials • Currencies & crypto
39 | - Mutual funds & ETFs • Economic calendars & events
40 |
41 | One tool, entire Twelve Data ecosystem. No API documentation required.
42 |
43 | ## Installation
44 |
45 | ### Using **UV** (recommended)
46 |
47 | Directly run without local installation using [`uvx`](https://docs.astral.sh/uv/guides/tools/):
48 |
49 | ```bash
50 | uvx mcp-server-twelve-data --help
51 | ```
52 |
53 | ### Using **pip**
54 |
55 | Install the server via pip:
56 |
57 | ```bash
58 | pip install mcp-server-twelve-data
59 | python -m mcp_server_twelve_data --help
60 | ```
61 |
62 | ## Configuration
63 |
64 | ### Claude Desktop integration
65 |
66 | Add one of the following snippets to your `claude_desktop_config.json`:
67 | (1) Local stdio server configured with u-tool:
68 | ```json
69 | {
70 |   "mcpServers": {
71 |     "twelvedata": {
72 |       "command": "uvx",
73 |       "args": ["mcp-server-twelve-data@latest", "-k", "YOUR_TWELVE_DATA_API_KEY", "-u", "YOUR_OPENAI_API_KEY"]
74 |     }
75 |   }
76 | }
77 | ```
78 |
79 | (2) Local stdio server with only the 10 most popular endpoints:
80 | ```json
81 | {
82 |   "mcpServers": {
83 |     "twelvedata": {
84 |       "command": "uvx",
85 |       "args": ["mcp-server-twelve-data@latest", "-k", "YOUR_TWELVE_DATA_API_KEY", "-n", "10"]
86 |     }
87 |   }
88 | }
89 | ```
90 |
91 | (3) Twelve Data remote MCP server:
92 |
93 | ```json
94 | {
95 |   "mcpServers": {
96 |     "twelvedata-remote": {
97 |       "command": "npx",
98 |       "args": [
99 |         "mcp-remote", "https://mcp.twelvedata.com/mcp",
100 |         "--header",
101 |         "Authorization:${AUTH_HEADER}",
102 |         "--header",
103 |         "X-OpenAPI-Key:${OPENAI_API_KEY}"
104 |       ],
105 |       "env": {
106 |         "AUTH_HEADER": "apikey YOUR_TWELVE_DATA_API_KEY",
107 |         "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY"
108 |       }
109 |     }
110 |   }
111 | }
112 | ```
113 |
114 | See how easy it is to connect Claude Desktop to Twelve Data MCP Server:
115 |
116 | 
117 |
118 | ### VS Code integration
119 |
120 | #### Automatic setup (with UV)
121 |
122 | [](https://insiders.vscode.dev/redirect/mcp/install?name=twelvedata&config=%7B%22command%22%3A%22uvx%22%2C%22args%22%3A%5B%22mcp-server-twelve-data%22%2C%22-k%22%2C%22YOUR_TWELVE_DATA_API_KEY%22%2C%22-u%22%2C%22YOUR_OPENAI_API_KEY%22%5D%7D)
123 |
124 | #### Manual setup
125 |
126 | For manual configuration, add to your **User Settings (JSON)**:
127 |
128 | ```json
129 | {
130 |   "mcp": {
131 |     "servers": {
132 |       "twelvedata": {
133 |         "command": "uvx",
134 |         "args": [
135 |           "mcp-server-twelve-data",
136 |           "-k", "YOUR_TWELVE_DATA_API_KEY",
137 |           "-u", "YOUR_OPENAI_API_KEY"
138 |         ]
139 |       }
140 |     }
141 |   }
142 | }
143 | ```
144 |
145 | ## Debugging
146 |
147 | Use the MCP Inspector for troubleshooting:
148 |
149 | ```bash
150 | npx @modelcontextprotocol/inspector uvx mcp-server-twelve-data@latest -k YOUR_TWELVE_DATA_API_KEY
151 | ```
152 |
153 | ## Development guide
154 |
155 | 1. **Local testing:** Use the MCP Inspector as described in **Debugging**.
156 | 2. **Claude Desktop:** Update `claude_desktop_config.json` to reference local source paths.
157 |
158 | ## Docker usage
159 |
160 | Build and run the server using Docker:
161 |
162 | ```bash
163 | docker build -t mcp-server-twelve-data .
164 |
165 | docker run --rm mcp-server-twelve-data \
166 |   -k YOUR_TWELVE_DATA_API_KEY \
167 |   -u YOUR_OPENAI_API_KEY \
168 |   -t streamable-http
169 | ```
170 |
171 | ## License
172 |
173 | This MCP server is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
174 |
```
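The u-tool section of the README describes a two-stage pipeline: vector search narrows the catalog to top-N candidate endpoints, then an LLM picks one. The ranking stage can be sketched in plain Python; this is an illustrative toy, not the server's implementation — real embeddings are high-dimensional OpenAI vectors, and the operationIds and 3-d vectors here are made up.

```python
# Hypothetical sketch of the u-tool candidate-ranking stage: score endpoint
# description embeddings against a query embedding by cosine similarity and
# keep the top-N. Toy 3-d vectors stand in for real OpenAI embeddings.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_n(query_vec, endpoint_vecs, n=3):
    # endpoint_vecs: mapping of operationId -> embedding vector
    ranked = sorted(endpoint_vecs,
                    key=lambda k: cosine(query_vec, endpoint_vecs[k]),
                    reverse=True)
    return ranked[:n]

endpoints = {
    "GetTimeSeries": [0.9, 0.1, 0.0],
    "GetQuote": [0.7, 0.3, 0.0],
    "GetTaxInfo": [0.0, 0.2, 0.9],
}
print(top_n([1.0, 0.0, 0.0], endpoints, n=2))  # → ['GetTimeSeries', 'GetQuote']
```

In the real server the surviving candidates are then handed to GPT-4o with function calling, which selects one and fills in its parameters.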
--------------------------------------------------------------------------------
/extra/instructions.txt:
--------------------------------------------------------------------------------
```
1 |
```
--------------------------------------------------------------------------------
/test/__init__.py:
--------------------------------------------------------------------------------
```python
1 |
```
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/__main__.py:
--------------------------------------------------------------------------------
```python
1 | from mcp_server_twelve_data import main
2 |
3 | main()
4 |
```
--------------------------------------------------------------------------------
/config/mcpConfigStdio.json:
--------------------------------------------------------------------------------
```json
1 | {
2 |   "mcpServers": {
3 |     "mcpServerDev": {
4 |       "command": "uvx",
5 |       "args": [
6 |         "mcp-server-twelve-data"
7 |       ],
8 |       "workingDir": ".",
9 |       "env": {
10 |
11 |       }
12 |     }
13 |   }
14 | }
```
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
```yaml
1 | version: '3.9'
2 |
3 | services:
4 |   mcp-server-twelve-data:
5 |     build: .
6 |     container_name: mcp-server-twelve-data
7 |     restart: unless-stopped
8 |     ports:
9 |       - "8000:8000"
10 |
11 |     command: ["-k", "demo", "-t", "streamable-http"]
12 |
13 |     networks:
14 |       backend:
15 |
16 | networks:
17 |   backend:
```
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/doc_tool_response.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Optional, List, Callable, Awaitable
2 | from mcp.server.fastmcp import Context
3 | from pydantic import BaseModel
4 |
5 |
6 | class DocToolResponse(BaseModel):
7 |     query: str
8 |     top_candidates: Optional[List[str]] = None
9 |     result: Optional[str] = None
10 |     error: Optional[str] = None
11 |
12 |
13 | doctool_func_type = Callable[[str, Context], Awaitable[DocToolResponse]]
14 |
```
--------------------------------------------------------------------------------
/extra/commands.txt:
--------------------------------------------------------------------------------
```
1 | {"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{"sampling":{},"roots":{"listChanged":true}},"clientInfo":{"name":"mcp-inspector","version":"0.13.0"}},"jsonrpc":"2.0","id":0}
2 | {"method":"notifications/initialized","jsonrpc":"2.0"}
3 | {"method":"tools/list","params":{"_meta":{"progressToken":1}},"jsonrpc":"2.0","id":1}
4 | {"method":"tools/call","params":{"name":"add","arguments":{"a":1,"b":5},"_meta":{"progressToken":2}},"jsonrpc":"2.0","id":2}
5 |
```
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
1 | # Use official Python 3.13 runtime
2 | FROM python:3.13-slim
3 |
4 | # Set working directory
5 | WORKDIR /app
6 |
7 | # Upgrade pip and install UV for dependency management
8 | RUN pip install --upgrade pip uv
9 |
10 | # Copy project metadata and README for build context
11 | COPY pyproject.toml uv.lock* README.md LICENSE ./
12 |
13 | # Copy source code in src directory
14 | COPY src ./src
15 |
16 | # Install project dependencies and build the package using UV
17 | RUN uv pip install . --system
18 |
19 | # Run the MCP server directly from source
20 | ENTRYPOINT ["python", "-m", "mcp_server_twelve_data"]
21 | CMD ["-k", "demo", "-t", "streamable-http"]
22 |
```
--------------------------------------------------------------------------------
/scripts/generate_response_models.py:
--------------------------------------------------------------------------------
```python
1 | import subprocess
2 | from pathlib import Path
3 |
4 | openapi_path = '../extra/openapi_clean.json'
5 | output_path = '../data/response_models.py'
6 |
7 | cmd = [
8 |     'datamodel-codegen',
9 |     '--input', str(openapi_path),
10 |     '--input-file-type', 'openapi',
11 |     '--output', str(output_path),
12 |     '--output-model-type', 'pydantic_v2.BaseModel',
13 |     '--reuse-model',
14 |     '--use-title-as-name',
15 |     '--disable-timestamp',
16 |     '--field-constraints',
17 |     '--use-double-quotes',
18 | ]
19 |
20 | subprocess.run(cmd, check=True)
21 |
22 | # Append aliases
23 | alias_lines = [
24 |     '',
25 |     '# Aliases for response models',
26 |     'GetMarketMovers200Response = MarketMoversResponseBody',
27 |     'GetTimeSeriesPercent_B200Response = GetTimeSeriesPercentB200Response',
28 |     ''
29 | ]
30 |
31 | with open(output_path, 'a', encoding='utf-8') as f:
32 |     f.write('\n'.join(alias_lines))
33 |
34 | print(f"[SUCCESS] Models generated using CLI and aliases added: {output_path}")
35 |
```
--------------------------------------------------------------------------------
/scripts/select_embedding.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | from dotenv import load_dotenv
3 | from lancedb import connect
4 | from openai import OpenAI
5 | import numpy as np
6 | from numpy.linalg import norm
7 |
8 | load_dotenv('../.env')
9 | client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
10 |
11 | query = "Show me tax information for AAPL."
12 | query_vector = np.array(
13 |     client.embeddings.create(input=query, model="text-embedding-3-large").data[0].embedding
14 | )
15 |
16 | db = connect("../src/mcp_server_twelve_data/resources/endpoints.lancedb")
17 | tbl = db.open_table("endpoints")
18 | df = tbl.to_pandas()
19 |
20 | tax_vector = np.array(df.query("id == 'GetTaxInfo'").iloc[0]["vector"])
21 | balance_vector = np.array(df.query("id == 'GetBalanceSheetConsolidated'").iloc[0]["vector"])
22 |
23 |
24 | def cosine_similarity(a, b):
25 |     return np.dot(a, b) / (norm(a) * norm(b))
26 |
27 |
28 | print(f"GetTaxInfo: {cosine_similarity(query_vector, tax_vector):.4f}")
29 | print(f"GetBalanceSheetConsolidated: {cosine_similarity(query_vector, balance_vector):.4f}")
30 |
```
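The script above ranks endpoints by cosine similarity between embedding vectors. A useful property worth noting is scale invariance: multiplying a vector by a positive constant leaves the score unchanged, so embedding magnitude never affects the ranking. A stdlib-only sketch (toy 3-d vectors in place of real 3072-d embeddings):

```python
# Cosine similarity is scale-invariant: rescaling a vector by a positive
# constant does not change its similarity to any query vector.
from math import sqrt, isclose

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

q = [0.2, 0.5, 0.1]
v = [0.4, 0.9, 0.3]
s1 = cosine_similarity(q, v)
s2 = cosine_similarity(q, [10 * x for x in v])  # same direction, 10x the norm
print(round(s1, 6), round(s2, 6))  # both scores are identical
```

This is why the script can compare raw embedding vectors directly without normalizing them first.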
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/u_tool_response.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Optional, List, Any, Callable, Awaitable
2 | from pydantic import BaseModel, Field
3 |
4 | from mcp.server.fastmcp import Context
5 |
6 |
7 | class UToolResponse(BaseModel):
8 |     """Response object returned by the u-tool."""
9 |
10 |     top_candidates: Optional[List[str]] = Field(
11 |         default=None, description="List of tool operationIds considered by the vector search."
12 |     )
13 |     premium_only_candidates: Optional[List[str]] = Field(
14 |         default=None, description="Relevant tool IDs available only in higher-tier plans"
15 |     )
16 |     selected_tool: Optional[str] = Field(
17 |         default=None, description="Name (operationId) of the tool selected by the LLM."
18 |     )
19 |     param: Optional[dict] = Field(
20 |         default=None, description="Parameters passed to the selected tool."
21 |     )
22 |     response: Optional[Any] = Field(
23 |         default=None, description="Result returned by the selected tool."
24 |     )
25 |     error: Optional[str] = Field(
26 |         default=None, description="Error message, if tool resolution or execution fails."
27 |     )
28 |
29 |
30 | utool_func_type = Callable[[str, Context, Optional[str], Optional[str]], Awaitable[UToolResponse]]
31 |
```
--------------------------------------------------------------------------------
/test/common.py:
--------------------------------------------------------------------------------
```python
1 | import asyncio
2 | import os
3 | import signal
4 | import sys
5 |
6 | import pytest_asyncio
7 | from dotenv import load_dotenv
8 |
9 | sys.unraisablehook = lambda unraisable: None
10 |
11 | dotenv_path = os.path.join(os.path.dirname(__file__), '..', '.env')
12 | load_dotenv(dotenv_path)
13 |
14 | SERVER_URL = os.environ["SERVER_URL"]
15 | MCP_URL = SERVER_URL + '/mcp/'
16 | TD_API_KEY = os.environ["TWELVE_DATA_API_KEY"]
17 | OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
18 |
19 |
20 | @pytest_asyncio.fixture(scope="function")
21 | async def run_server():
22 |     proc = await asyncio.create_subprocess_exec(
23 |         "python", "-m", "mcp_server_twelve_data",
24 |         "-t", "streamable-http",
25 |         "-k", TD_API_KEY,
26 |         "-u", OPENAI_API_KEY,
27 |         stdout=asyncio.subprocess.DEVNULL,
28 |         stderr=asyncio.subprocess.DEVNULL,
29 |     )
30 |
31 |     import httpx
32 |     for _ in range(40):
33 |         try:
34 |             async with httpx.AsyncClient() as client:
35 |                 r = await client.get(f"{SERVER_URL}/health")
36 |                 if r.status_code == 200:
37 |                     break
38 |         except Exception:
39 |             pass
40 |         await asyncio.sleep(1)  # wait between attempts, even on a non-200 response
41 |     else:
42 |         proc.terminate()
43 |         raise RuntimeError("Server did not start")
44 |
45 |     yield
46 |     proc.send_signal(signal.SIGINT)
47 |     await proc.wait()
48 |
```
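The `run_server` fixture above follows a common pattern: poll a health endpoint until it succeeds or the attempt budget runs out, sleeping between attempts. The pattern generalizes to any async readiness probe; this sketch uses a stand-in probe function instead of the HTTP GET against `/health`, so it is self-contained.

```python
# Generic version of the fixture's readiness loop: retry an async probe,
# swallowing connection errors, until it reports healthy or attempts run out.
import asyncio

async def wait_until_ready(probe, attempts=40, delay=0.01):
    for _ in range(attempts):
        try:
            if await probe():
                return True
        except Exception:
            pass  # server not accepting connections yet
        await asyncio.sleep(delay)
    return False

async def demo():
    state = {"calls": 0}

    async def flaky_probe():
        state["calls"] += 1
        return state["calls"] >= 3  # reports "healthy" on the third poll

    return await wait_until_ready(flaky_probe)

print(asyncio.run(demo()))  # → True
```

Returning `False` instead of raising lets the caller decide how to fail, mirroring the fixture's `else:` branch that terminates the subprocess.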
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/prompts.py:
--------------------------------------------------------------------------------
```python
1 |
2 | utool_doc_string = """
3 | A universal tool router for the MCP system, designed for the Twelve Data API.
4 |
5 | This tool accepts a natural language query in English and performs the following:
6 | 1. Uses vector search to retrieve the top-N relevant Twelve Data endpoints.
7 | 2. Sends the query and tool descriptions to OpenAI's gpt-4o with function calling.
8 | 3. The model selects the most appropriate tool and generates the input parameters.
9 | 4. The selected endpoint (tool) is executed and its response is returned.
10 |
11 | Supported endpoint categories (from Twelve Data docs):
12 | - Market & Reference: price, quote, symbol_search, stocks, exchanges, market_state
13 | - Time Series: time_series, eod, splits, dividends, etc.
14 | - Technical Indicators: rsi, macd, ema, bbands, atr, vwap, and 100+ others
15 | - Fundamentals & Reports: earnings, earnings_estimate, income_statement,
16 | balance_sheet, cash_flow, statistics, profile, ipo_calendar, analyst_ratings
17 | - Currency & Crypto: currency_conversion, exchange_rate, price_target
18 | - Mutual Funds / ETFs: funds, mutual_funds/type, mutual_funds/world
19 | - Misc Utilities: logo, calendar endpoints, time_series_calendar, etc.
20 | """
21 |
22 | doctool_doc_string = """
23 | Search Twelve Data documentation and return a Markdown summary of the most relevant sections.
24 | """
```
--------------------------------------------------------------------------------
/test/test_doc_tool.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 |
3 | import pytest
4 | from mcp.client.streamable_http import streamablehttp_client
5 | from mcp import ClientSession
6 |
7 | from test.common import TD_API_KEY, OPENAI_API_KEY, MCP_URL, run_server
8 |
9 |
10 | @pytest.mark.asyncio
11 | @pytest.mark.parametrize("query, expected_title_keyword", [
12 |     ("what does the macd indicator do?", "MACD"),
13 |     ("how to fetch time series data?", "Time Series"),
14 |     ("supported intervals for time_series?", "interval"),
15 | ])
16 | async def test_doc_tool_async(query, expected_title_keyword, run_server):
17 |     headers = {
18 |         "Authorization": f"apikey {TD_API_KEY}",
19 |         "x-openapi-key": OPENAI_API_KEY
20 |     }
21 |
22 |     async with streamablehttp_client(MCP_URL, headers=headers) as (read_stream, write_stream, _):
23 |         async with ClientSession(read_stream, write_stream) as session:
24 |             await session.initialize()
25 |             call_result = await session.call_tool("doc-tool", arguments={"query": query})
26 |             await read_stream.aclose()
27 |             await write_stream.aclose()
28 |
29 |     assert not call_result.isError, f"doc-tool error: {call_result.content}"
30 |     raw = call_result.content[0].text
31 |     payload = json.loads(raw)
32 |
33 |     assert payload["error"] is None
34 |     assert payload["result"] is not None
35 |     assert expected_title_keyword.lower() in payload["result"].lower(), (
36 |         f"Expected '{expected_title_keyword}' in result Markdown:\n{payload['result']}"
37 |     )
38 |     assert len(payload["top_candidates"]) > 0
39 |
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
1 | [project]
2 | name = "mcp-server-twelve-data"
3 | version = "0.2.4"
4 | description = "A Model Context Protocol server providing tools to access Twelve Data."
5 | readme = "README.md"
6 | requires-python = ">=3.13"
7 | authors = [{ name = "Twelve Data, PBC." }]
8 | maintainers = [{ name = "Kopyev Eugene", email = "[email protected]" }]
9 | keywords = ["twelve", "data", "mcp", "llm", "automation"]
10 | license = { text = "MIT" }
11 | classifiers = [
12 | "Development Status :: 4 - Beta",
13 | "Intended Audience :: Developers",
14 | "License :: OSI Approved :: MIT License",
15 | "Programming Language :: Python :: 3",
16 | "Programming Language :: Python :: 3.13",
17 | ]
18 | dependencies = [
19 | "click==8.2.1",
20 | "mcp[cli]>=1.9.4",
21 | "openai>=1.86.0",
22 | "pydantic==2.11.5",
23 | "pylint>=3.3.7",
24 | "pyyaml>=6.0",
25 | ]
26 |
27 | [project.scripts]
28 | mcp-server-twelve-data = "mcp_server_twelve_data:main"
29 |
30 | [build-system]
31 | requires = ["hatchling"]
32 | build-backend = "hatchling.build"
33 |
34 | [tool.uv]
35 | dev-dependencies = [
36 | "pyright>=1.1.389",
37 | "ruff>=0.7.3",
38 | "pytest>=8.0.0",
39 | "datamodel-code-generator>=0.31.2",
40 | "pytest-asyncio>=1.0.0",
41 | "bs4>=0.0.2",
42 | ]
43 |
44 | [project.optional-dependencies]
45 | db = [
46 | "lancedb>=0.23.0",
47 | "pandas>=2.3.1"
48 | ]
49 |
50 | [tool.pytest.ini_options]
51 | testpaths = ["test"]
52 | python_files = "test_*.py"
53 | python_classes = "Test*"
54 | python_functions = "test_*"
55 | asyncio_default_fixture_loop_scope = "function"
56 | addopts = "-s"
57 | log_cli = true
58 | log_cli_level = "INFO"
59 |
60 |
61 | [tool.hatch.build]
62 | exclude = [
63 | "src/resources/*",
64 | "example.gif"
65 | ]
66 |
```
--------------------------------------------------------------------------------
/test/test_top_n_filter.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 |
3 | import pytest
4 | from mcp.client.streamable_http import streamablehttp_client
5 | from mcp import ClientSession
6 |
7 | from test.common import TD_API_KEY, MCP_URL, run_server
8 | from test.endpoint_pairs import pairs
9 |
10 |
11 | @pytest.mark.asyncio
12 | @pytest.mark.parametrize("user_query,expected_op_id", pairs)
13 | async def test_embedding_and_utool_async(user_query, expected_op_id, run_server):
14 |     headers = {"Authorization": f"apikey {TD_API_KEY}"}
15 |
16 |     async with streamablehttp_client(MCP_URL, headers=headers) as (read_stream, write_stream, _):
17 |         async with ClientSession(read_stream, write_stream) as session:
18 |             await session.initialize()
19 |             call_result = await session.call_tool("u-tool", arguments={"query": user_query})
20 |             await read_stream.aclose()
21 |             await write_stream.aclose()
22 |
23 |     assert not call_result.isError, f"u-tool error: {call_result.content}"
24 |     raw = call_result.content[0].text
25 |     payload = json.loads(raw)
26 |     top_cands = payload.get("top_candidates", [])
27 |     error = payload.get("error")
28 |     selected_tool = payload.get("selected_tool")
29 |     response = payload.get("response")
30 |     assert expected_op_id in top_cands, f"{expected_op_id!r} not in {top_cands!r}"
31 |     assert error is None, f"u-tool error: {error}"
32 |     assert selected_tool == expected_op_id, (
33 |         f"selected_tool {payload.get('selected_tool')!r} != expected {expected_op_id!r}"
34 |     )
35 |     assert response is not None
36 |     if "GetTimeSeries" in selected_tool:
37 |         values = response['values']
38 |         assert values is not None and len(values) > 0
39 |
```
--------------------------------------------------------------------------------
/test/test_user_plan.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 |
3 | import pytest
4 | from mcp.client.streamable_http import streamablehttp_client
5 | from mcp import ClientSession
6 |
7 | from test.common import TD_API_KEY, MCP_URL, run_server
8 |
9 |
10 | @pytest.mark.asyncio
11 | @pytest.mark.parametrize("user_query, expected_operation_id", [
12 |     ("Show me market movers", "GetMarketMovers"),
13 |     ("Show me earnings estimates for AAPL", "GetEarningsEstimate"),
14 |     ("Show me price targets for TSLA", "GetPriceTarget"),
15 | ])
16 | async def test_utool_basic_plan_restrictions(user_query, expected_operation_id, run_server):
17 |     """
18 |     Users on Basic plan should be denied access to endpoints that require higher plans.
19 |     Error message must include required operationId.
20 |     """
21 |     headers = {"Authorization": f"apikey {TD_API_KEY}"}
22 |     user_plan = "Basic"
23 |
24 |     async with streamablehttp_client(MCP_URL, headers=headers) as (read_stream, write_stream, _):
25 |         async with ClientSession(read_stream, write_stream) as session:
26 |             await session.initialize()
27 |             result = await session.call_tool("u-tool", arguments={
28 |                 "query": user_query,
29 |                 "plan": user_plan
30 |             })
31 |             await read_stream.aclose()
32 |             await write_stream.aclose()
33 |
34 |     assert not result.isError, f"u-tool error: {result.content}"
35 |     payload = json.loads(result.content[0].text)
36 |     error = payload.get("error")
37 |     selected_tool = payload.get("selected_tool")
38 |     premium_only_candidates = payload.get("premium_only_candidates")
39 |     assert expected_operation_id in premium_only_candidates
40 |     assert selected_tool != expected_operation_id
41 |     assert error is None
42 |
```
--------------------------------------------------------------------------------
/scripts/patch_vector_in_embeddings.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 | import lancedb
3 | import openai
4 | from pathlib import Path
5 |
6 |
7 | def patch_one_vector(
8 |     operation_id: str,
9 |     db_path: str = "../src/mcp_server_twelve_data/resources/endpoints.lancedb",
10 |     table_name: str = "endpoints",
11 |     desc_path: str = "../extra/full_descriptions.json",
12 |     verbose: bool = True
13 | ):
14 |     desc_file = Path(desc_path)
15 |     if not desc_file.exists():
16 |         raise FileNotFoundError(f"{desc_path} not found")
17 |
18 |     with desc_file.open("r", encoding="utf-8") as f:
19 |         full_descriptions = json.load(f)
20 |
21 |     if operation_id not in full_descriptions:
22 |         raise ValueError(f"No description found for operation_id '{operation_id}'")
23 |
24 |     new_description = full_descriptions[operation_id]
25 |
26 |     embedding = openai.OpenAI().embeddings.create(
27 |         model="text-embedding-3-small",
28 |         input=[new_description]
29 |     ).data[0].embedding
30 |
31 |     db = lancedb.connect(db_path)
32 |     table = db.open_table(table_name)
33 |
34 |     matches = table.to_arrow().to_pylist()
35 |     record = next((row for row in matches if row["id"] == operation_id), None)
36 |
37 |     if not record:
38 |         raise ValueError(f"operation_id '{operation_id}' not found in LanceDB")
39 |
40 |     if verbose:
41 |         print(f"Updating vector for operation_id: {operation_id}")
42 |         print(f"Old description:\n{record['description']}\n")
43 |         print(f"New description:\n{new_description}\n")
44 |
45 |     table.update(
46 |         where=f"id == '{operation_id}'",
47 |         values={
48 |             "description": new_description,
49 |             "vector": embedding
50 |         }
51 |     )
52 |
53 |     if verbose:
54 |         print("Update complete.")
55 |
56 |
57 | if __name__ == "__main__":
58 |     patch_one_vector("GetETFsList")
59 |
```
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/doc_tool_remote.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Optional
2 |
3 | import httpx
4 | from mcp.server.fastmcp import FastMCP, Context
5 |
6 | from mcp_server_twelve_data.common import mcp_server_base_url
7 | from mcp_server_twelve_data.doc_tool_response import doctool_func_type, DocToolResponse
8 | from mcp_server_twelve_data.key_provider import extract_open_ai_apikey, extract_twelve_data_apikey
9 | from mcp_server_twelve_data.prompts import doctool_doc_string
10 |
11 |
12 | def register_doc_tool_remote(
13 |     server: FastMCP,
14 |     transport: str,
15 |     open_ai_api_key_from_args: Optional[str],
16 |     twelve_data_apikey: Optional[str],
17 | ) -> doctool_func_type:
18 |
19 |     @server.tool(name="doc-tool")
20 |     async def doc_tool(
21 |         query: str,
22 |         ctx: Context,
23 |     ) -> DocToolResponse:
24 |         o_ai_api_key_to_use, error = extract_open_ai_apikey(
25 |             transport=transport,
26 |             open_ai_api_key=open_ai_api_key_from_args,
27 |             ctx=ctx,
28 |         )
29 |         if error is not None:
30 |             return DocToolResponse(query=query, error=error)
31 |
32 |         td_key_to_use = extract_twelve_data_apikey(
33 |             transport=transport,
34 |             twelve_data_apikey=twelve_data_apikey,
35 |             ctx=ctx,
36 |         )
37 |
38 |         async with httpx.AsyncClient(
39 |             trust_env=False,
40 |             headers={
41 |                 "accept": "application/json",
42 |                 "user-agent": "python-httpx/0.24.0",
43 |                 "x-openapi-key": o_ai_api_key_to_use,
44 |                 "Authorization": f'apikey {td_key_to_use}',
45 |             },
46 |             timeout=30,
47 |         ) as client:
48 |             resp = await client.get(
49 |                 f"{mcp_server_base_url}/doctool",
50 |                 params={
51 |                     "query": query,
52 |                 }
53 |             )
54 |             resp.raise_for_status()
55 |             resp_json = resp.json()
56 |             return DocToolResponse.model_validate(resp_json)
57 |
58 |     doc_tool.__doc__ = doctool_doc_string
59 |     return doc_tool
60 |
```
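The remote doc-tool above resolves API keys per transport via `extract_open_ai_apikey` / `extract_twelve_data_apikey` from `key_provider.py`, which is not included on this page. As a hypothetical illustration of what such header-based extraction could look like (the function name and logic here are assumptions, not the repo's code): in stdio mode the key comes from CLI arguments, while in HTTP mode it is parsed out of the `Authorization: apikey ...` header.

```python
# Hypothetical sketch of transport-aware API-key extraction. Not the actual
# key_provider.py implementation; names and behavior are illustrative only.
def extract_apikey_sketch(transport, key_from_args, headers):
    if transport == "stdio":
        # Local process: the key was passed on the command line (-k / -u).
        return key_from_args
    # HTTP transports: expect "Authorization: apikey <KEY>".
    auth = headers.get("authorization", "")
    prefix = "apikey "
    if auth.lower().startswith(prefix):
        return auth[len(prefix):]
    return None

print(extract_apikey_sketch("streamable-http", None, {"authorization": "apikey SECRET"}))  # → SECRET
```

Whatever the real implementation does, the call sites above show the contract: it returns a usable key (plus an error for the OpenAI variant) given the transport, the CLI argument, and the request context.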
--------------------------------------------------------------------------------
/scripts/generate.md:
--------------------------------------------------------------------------------
```markdown
1 | ## Update utool
2 |
3 | To update the `utool` tool, regenerate all dependent files and embeddings as follows.
4 |
5 | ---
6 |
7 | ### 1. Copy the New OpenAPI Spec
8 |
9 | Copy the updated OpenAPI specification to:
10 |
11 | ```
12 | extra/openapi_clean.json
13 | ```
14 |
15 | ---
16 |
17 | ### 2. Update Endpoint Embeddings
18 |
19 | If **new methods were added**, you must regenerate all embeddings:
20 |
21 | ```bash
22 | python scripts/generate_endpoints_embeddings.py
23 | ```
24 |
25 | This will generate two files:
26 |
27 | - `full_descriptions.json`
28 | - `endpoints.lancedb`
29 |
30 | > ⚠️ This process is time-consuming.
31 |
32 | If you only need to **update one endpoint**, modify or insert the updated description in:
33 |
34 | ```
35 | extra/full_descriptions.json
36 | ```
37 |
38 | Then run:
39 |
40 | ```bash
41 | python scripts/patch_vector_in_embeddings.py
42 | ```
43 |
44 | ---
45 |
46 | ### 3. Generate Request Models
47 |
48 | Run:
49 |
50 | ```bash
51 | python scripts/generate_requests_models.py
52 | ```
53 |
54 | This will create:
55 |
56 | ```
57 | data/requests_models.py
58 | ```
59 |
60 | ---
61 |
62 | ### 4. Generate Response Models
63 |
64 | Run:
65 |
66 | ```bash
67 | python scripts/generate_response_models.py
68 | ```
69 |
70 | This will create:
71 |
72 | ```
73 | data/response_models.py
74 | ```
75 |
76 | ---
77 |
78 | ### 5. Generate Tools
79 |
80 | Run:
81 |
82 | ```bash
83 | python scripts/generate_tools.py
84 | ```
85 |
86 | This will create:
87 |
88 | ```
89 | data/tools.py
90 | ```
91 |
92 | ---
93 |
94 | ### 6. Update Existing Files
95 |
96 | Replace the existing versions with the newly generated files:
97 |
98 | - `requests_models.py`
99 | - `response_models.py`
100 | - `tools.py`
101 |
102 | ---
103 |
104 | ### 7. Run Tests
105 |
106 | Run the following to verify vector search functionality:
107 |
108 | ```bash
109 | pytest test/test_top_n_filter.py
110 | ```
111 |
112 | ---
113 |
114 | ### 8. Fix and Extend Tests
115 |
116 | - Fix any failing tests
117 | - Add new test cases for newly added endpoints
118 |
119 | ---
120 |
121 | ## Update doctool
122 |
123 | To update the `doctool` vector database:
124 |
125 | ---
126 |
127 | ### 1. Generate Documentation Embeddings
128 |
129 | Run:
130 |
131 | ```bash
132 | python scripts/generate_docs_embeddings.py
133 | ```
134 |
135 | This will create:
136 |
137 | ```
138 | docs.lancedb
139 | ```
140 |
141 | ---
142 |
143 | ### 2. Update Existing Files
144 |
145 | Replace any relevant files with the updated ones produced by the script.
146 |
147 | ---
148 |
149 | ### 3. Run doctool Tests
150 |
151 | Run the following to verify `doctool` embeddings:
152 |
153 | ```bash
154 | pytest test/test_doc_tool.py
155 | ```
156 |
157 | ---
```
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/u_tool_remote.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Optional
2 |
3 | import httpx
4 | from mcp.server.fastmcp import FastMCP, Context
5 |
6 | from mcp_server_twelve_data.common import mcp_server_base_url
7 | from mcp_server_twelve_data.key_provider import extract_open_ai_apikey, extract_twelve_data_apikey
8 | from mcp_server_twelve_data.prompts import utool_doc_string
9 | from mcp_server_twelve_data.u_tool_response import UToolResponse, utool_func_type
10 |
11 |
12 | def register_u_tool_remote(
13 | server: FastMCP,
14 | transport: str,
15 | open_ai_api_key_from_args: Optional[str],
16 | twelve_data_apikey: Optional[str],
17 | ) -> utool_func_type:
18 |
19 | @server.tool(name="u-tool")
20 | async def u_tool(
21 | query: str,
22 | ctx: Context,
23 | format: Optional[str] = None,
24 | plan: Optional[str] = None,
25 | ) -> UToolResponse:
26 | o_ai_api_key_to_use, error = extract_open_ai_apikey(
27 | transport=transport,
28 | open_ai_api_key=open_ai_api_key_from_args,
29 | ctx=ctx,
30 | )
31 | if error is not None:
32 | return UToolResponse(error=error)
33 |
34 | td_key_to_use = extract_twelve_data_apikey(
35 | transport=transport,
36 | twelve_data_apikey=twelve_data_apikey,
37 | ctx=ctx,
38 | )
39 |
40 | async with httpx.AsyncClient(
41 | trust_env=False,
42 | headers={
43 | "accept": "application/json",
44 | "user-agent": "python-httpx/0.24.0",
45 | "x-openapi-key": o_ai_api_key_to_use,
46 | "Authorization": f'apikey {td_key_to_use}',
47 | },
48 | timeout=30,
49 | ) as client:
50 | resp = await client.get(
51 | f"{mcp_server_base_url}/utool",
52 | params={
53 | "query": query,
54 | "format": format,
55 | "plan": plan,
56 | }
57 | )
58 | resp.raise_for_status()
59 | resp_json = resp.json()
60 | return UToolResponse.model_validate(resp_json)
61 |
62 | u_tool.__doc__ = utool_doc_string
63 | return u_tool
64 |
```
--------------------------------------------------------------------------------
/scripts/generate_docs_embeddings.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | import uuid
3 | import httpx
4 | import openai
5 | import pandas as pd
6 | import lancedb
7 | from bs4 import BeautifulSoup
8 | from dotenv import load_dotenv
9 | from tqdm import tqdm
10 |
11 | # === CONFIG ===
12 | load_dotenv('../.env')
13 |
14 | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
15 | DB_PATH = os.getenv("LANCEDB_PATH", '../data/docs.lancedb')
16 | OPENAI_MODEL = 'text-embedding-3-large'
17 | DOCS_URL = 'https://twelvedata.com/docs'
18 |
19 | client = openai.OpenAI(api_key=OPENAI_API_KEY)
20 |
21 |
22 | def download_docs(url: str) -> str:
23 | print(f"Downloading documentation from: {url}")
24 | response = httpx.get(url, timeout=10)
25 | response.raise_for_status()
26 | print("HTML download complete.")
27 | return response.text
28 |
29 |
30 | def parse_sections(html: str) -> list[dict]:
31 | soup = BeautifulSoup(html, "html.parser")
32 | sections = soup.select("section[id]")
33 |
34 | records = []
35 | for idx, section in enumerate(sections, start=1):
36 | section_id = section["id"]
37 | title_el = section.find("h2") or section.find("h3") or section.find("h1")
38 | title = title_el.get_text(strip=True) if title_el else section_id
39 | content = section.get_text(separator="\n", strip=True)
40 | print(f"[{idx}/{len(sections)}] Parsed section: {title}")
41 | records.append({
42 | "id": str(uuid.uuid4()),
43 | "section_id": section_id,
44 | "title": title,
45 | "content": content
46 | })
47 | return records
48 |
49 |
50 | def generate_embedding(text: str) -> list[float]:
51 | response = client.embeddings.create(
52 | model=OPENAI_MODEL,
53 | input=[text]
54 | )
55 | return response.data[0].embedding
56 |
57 |
58 | def build_lancedb(records: list[dict], db_path: str):
59 | df = pd.DataFrame(records)
60 |
61 | print(f"Generating embeddings for {len(df)} sections...")
62 | vectors = []
63 | for content in tqdm(df["content"], desc="Embedding"):
64 | vectors.append(generate_embedding(content))
65 | df["vector"] = vectors
66 |
67 | db = lancedb.connect(db_path)
68 | db.create_table("docs", data=df, mode="overwrite")
69 |
70 | print(f"Saved {len(df)} sections to LanceDB at: {db_path}")
71 | print("Section titles:")
72 | for title in df["title"]:
73 | print(f" - {title}")
74 |
75 |
76 | def main():
77 | print("Step 1: Downloading HTML")
78 | html = download_docs(DOCS_URL)
79 |
80 | print("Step 2: Parsing sections")
81 | records = parse_sections(html)
82 |
83 | print("Step 3: Building LanceDB")
84 | build_lancedb(records, DB_PATH)
85 |
86 |
87 | if __name__ == '__main__':
88 | main()
89 |
```
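The section-parsing step above can be illustrated without network access or BeautifulSoup. The sketch below is a stdlib-only approximation of `parse_sections` (regex-based, so it assumes the plain `<section id="...">` markup shown here; the real script should keep using a proper HTML parser):

```python
import re

# Stdlib-only approximation of parse_sections: extract <section id="...">
# blocks with a regex and take the first h1/h2/h3 as the title, falling
# back to the section id. Illustrative only; real HTML needs a parser.
SECTION_RE = re.compile(r'<section id="([^"]+)">(.*?)</section>', re.S)
TITLE_RE = re.compile(r"<h[123][^>]*>(.*?)</h[123]>", re.S)

def parse_sections_simple(html: str) -> list[dict]:
    records = []
    for section_id, body in SECTION_RE.findall(html):
        match = TITLE_RE.search(body)
        title = match.group(1).strip() if match else section_id
        records.append({"section_id": section_id, "title": title})
    return records
```

Feeding it a single section yields one record with the `section_id` and the heading text as `title`, matching the shape the embedding step consumes.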
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/__init__.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Literal, Optional
2 |
3 | import click
4 | import logging
5 | import sys
6 |
7 | from dotenv import load_dotenv
8 |
9 | from .server import serve
10 |
11 |
12 | @click.command()
13 | @click.option("-v", "--verbose", count=True)
14 | @click.option("-t", "--transport", default="stdio", help="stdio, streamable-http")
15 | @click.option(
16 | "-k",
17 | "--twelve-data-apikey",
18 | default=None,
19 | help=(
20 | "This parameter is required for 'stdio' transport. "
21 | "For 'streamable-http', you have three options: "
22 | "1. Use the -k option to set a predefined API key. "
23 | "2. Use the -ua option to retrieve the API key from the Twelve Data server. "
24 | "3. Provide the API key in the 'Authorization' header as: 'apikey <your-apikey>'."
25 | )
26 | )
27 | @click.option(
28 | "-n", "--number-of-tools", default=35,
29 | help="limit number of tools to prevent problems with mcp clients, max n value is 193, default is 35"
30 | )
31 | @click.option(
32 | "-u", "--u-tool-open-ai-api-key", default=None,
33 | help=(
34 | "If set, activates a unified 'u-tool' powered by OpenAI "
35 | "to select and call the appropriate Twelve Data endpoint."
36 | ),
37 | )
38 | @click.option(
39 | "-ua", "--u-tool-oauth2", default=False, is_flag=True,
40 | help=(
41 | "If set, activates the unified 'u-tool' powered by OpenAI, "
42 | "and fetches Twelve Data and OpenAI API keys directly from the Twelve Data server."
43 | )
44 | )
45 | def main(
46 |     verbose: int,
47 | transport: Literal["stdio", "sse", "streamable-http"] = "stdio",
48 | twelve_data_apikey: Optional[str] = None,
49 |     number_of_tools: int = 35,
50 | u_tool_open_ai_api_key: Optional[str] = None,
51 | u_tool_oauth2: bool = False,
52 | ) -> None:
53 | load_dotenv()
54 | logging_level = logging.WARN
55 | if verbose == 1:
56 | logging_level = logging.INFO
57 | elif verbose >= 2:
58 | logging_level = logging.DEBUG
59 |
60 | logging.basicConfig(level=logging_level, stream=sys.stderr)
61 |
62 |     if u_tool_oauth2 and u_tool_open_ai_api_key is not None:
63 |         raise RuntimeError("Set either u_tool_open_ai_api_key or u_tool_oauth2, not both")
64 |     if u_tool_oauth2 and transport != "streamable-http":
65 |         raise RuntimeError("Set transport to streamable-http if you want to use the -ua option")
66 |     if transport == "stdio" and twelve_data_apikey is None:
67 |         raise RuntimeError("Set -k to use the stdio transport")
68 |
69 | serve(
70 | api_base="https://api.twelvedata.com",
71 | transport=transport,
72 | twelve_data_apikey=twelve_data_apikey,
73 | number_of_tools=number_of_tools,
74 | u_tool_open_ai_api_key=u_tool_open_ai_api_key,
75 | u_tool_oauth2=u_tool_oauth2,
76 | )
77 |
78 |
79 | if __name__ == "__main__":
80 | main()
81 |
```
--------------------------------------------------------------------------------
/extra/endpoints_spec_en.csv:
--------------------------------------------------------------------------------
```
1 | Path,Default,Number
2 | /time_series,1,2939
3 | /price,1,949
4 | /quote,1,768
5 | /rsi,1,355
6 | /macd,1,193
7 | /stocks,1,193
8 | /bbands,1,182
9 | /exchange_rate,1,175
10 | /ema,1,173
11 | /statistics,1,146
12 | /profile,1,128
13 | /market_state,1,116
14 | /symbol_search,1,104
15 | /eod,1,96
16 | /sma,1,94
17 | /forex_pairs,1,59
18 | /atr,1,57
19 | /dividends,1,54
20 | /cryptocurrencies,1,48
21 | /earnings,1,43
22 | /currency_conversion,1,41
23 | /exchanges,1,35
24 | /splits,1,32
25 | /earliest_timestamp,1,31
26 | /etfs,1,22
27 | /commodities,1,15
28 | /funds,1,14
29 | /ipo_calendar,1,8
30 | /cryptocurrency_exchanges,1,5
31 | /time_series/cross,1,4
32 | /cross_listings,1,3
33 | /technical_indicators,0,247
34 | /api_usage,0,106
35 | /logo,0,80
36 | /adx,0,52
37 | /vwap,0,42
38 | /stoch,0,41
39 | /income_statement,0,39
40 | /cash_flow,0,34
41 | /balance_sheet,0,32
42 | /recommendations,0,29
43 | /obv,0,25
44 | /cci,0,23
45 | /price_target,0,22
46 | /mfi,0,21
47 | /ma,0,18
48 | /splits_calendar,0,17
49 | /supertrend,0,17
50 | /earnings_calendar,0,16
51 | /willr,0,16
52 | /dividends_calendar,0,15
53 | /earnings_estimate,0,12
54 | /heikinashicandles,0,11
55 | /analyst_ratings/us_equities,0,10
56 | /ichimoku,0,10
57 | /percent_b,0,10
58 | /sar,0,10
59 | /analyst_ratings/light,0,9
60 | /revenue_estimate,0,9
61 | /insider_transactions,0,9
62 | /plus_di,0,9
63 | /institutional_holders,0,8
64 | /market_cap,0,8
65 | /minus_di,0,8
66 | /etfs/list,0,8
67 | /bonds,0,8
68 | /ad,0,8
69 | /growth_estimates,0,7
70 | /mutual_funds/list,0,7
71 | /key_executives,0,7
72 | /stochrsi,0,7
73 | /eps_trend,0,7
74 | /income_statement/consolidated,0,6
75 | /exchange_schedule,0,6
76 | /eps_revisions,0,6
77 | /pivot_points_hl,0,6
78 | /etfs/world,0,6
79 | /fund_holders,0,6
80 | /avg,0,6
81 | /mutual_funds/world,0,5
82 | /etfs/world/summary,0,5
83 | /etfs/family,0,5
84 | /avgprice,0,5
85 | /countries,0,5
86 | /rvol,0,5
87 | /adxr,0,5
88 | /wma,0,5
89 | /mom,0,5
90 | /beta,0,5
91 | /crsi,0,5
92 | /adosc,0,5
93 | /roc,0,5
94 | /mutual_funds/world/performance,0,4
95 | /balance_sheet/consolidated,0,4
96 | /mutual_funds/world/ratings,0,4
97 | /mutual_funds/world/summary,0,4
98 | /etfs/type,0,4
99 | /direct_holders,0,4
100 | /linearreg,0,4
101 | /tema,0,4
102 | /keltner,0,4
103 | /kst,0,4
104 | /mutual_funds/world/composition,0,3
105 | /mutual_funds/world/purchase_info,0,3
106 | /etfs/world/performance,0,3
107 | /edgar_filings/archive,0,3
108 | /ht_trendmode,0,3
109 | /midprice,0,3
110 | /instrument_type,0,3
111 | /trima,0,3
112 | /dema,0,3
113 | /bop,0,3
114 | /mama,0,3
115 | /ppo,0,3
116 | /mutual_funds/world/sustainability,0,2
117 | /etfs/world/composition,0,2
118 | /mutual_funds/world/risk,0,2
119 | /mutual_funds/type,0,2
120 | /mutual_funds/family,0,2
121 | /cash_flow/consolidated,0,2
122 | /wclprice,0,2
123 | /ht_dcperiod,0,2
124 | /medprice,0,2
125 | /typprice,0,2
126 | /ht_trendline,0,2
127 | /linearregangle,0,2
128 | /minus_dm,0,2
129 | /ht_dcphase,0,2
130 | /midpoint,0,2
131 | /ht_phasor,0,2
132 | /ultosc,0,2
133 | /kama,0,2
134 | /trange,0,2
135 | /apo,0,2
136 | /aroon,0,2
137 | /plus_dm,0,2
138 | /dx,0,2
139 | /stddev,0,2
140 | /macdext,0,2
141 | /natr,0,2
142 | /cmo,0,2
143 | /correl,0,2
144 | /max,0,2
145 | /stochf,0,2
146 | /hlc3,0,2
147 |
```
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/key_provider.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Optional, Tuple
2 |
3 | from mcp.server.fastmcp import Context
4 | from mcp.client.streamable_http import RequestContext
5 |
6 |
7 | def extract_open_ai_apikey(
8 |     transport: str,
9 |     open_ai_api_key: Optional[str],
10 |     ctx: Context,
11 | ) -> Tuple[Optional[str], Optional[str]]:
12 |     """Returns an optional key and an optional error message."""
13 | if transport == 'stdio':
14 | if open_ai_api_key is not None:
15 | return (open_ai_api_key, None)
16 | else:
17 |             # This should never happen: stdio mode always supplies the key.
18 |             error = (
19 |                 "Transport is stdio and u_tool_open_ai_api_key is None. "
20 |                 "Something went wrong. Please contact support."
21 | )
22 | return None, error
23 | elif transport == "streamable-http":
24 | if open_ai_api_key is not None:
25 | return open_ai_api_key, None
26 | else:
27 | rc: RequestContext = ctx.request_context
28 | token_from_rc = get_tokens_from_rc(rc=rc)
29 | if token_from_rc.error is not None:
30 | return None, token_from_rc.error
31 | elif token_from_rc.twelve_data_api_key and token_from_rc.open_ai_api_key:
32 | o_ai_api_key_to_use = token_from_rc.open_ai_api_key
33 | return o_ai_api_key_to_use, None
34 | else:
35 |                 return None, "Either the OpenAI API key or the Twelve Data API key was not provided."
36 | else:
37 | return None, "This transport is not supported"
38 |
39 |
40 | def extract_twelve_data_apikey(
41 | transport: str,
42 | twelve_data_apikey: Optional[str],
43 | ctx: Context,
44 | ) -> Optional[str]:
45 | if transport in {'stdio', 'streamable-http'} and twelve_data_apikey:
46 | return twelve_data_apikey
47 | else:
48 | rc: RequestContext = ctx.request_context
49 | tokens = get_tokens_from_rc(rc=rc)
50 | return tokens.twelve_data_api_key
51 |
52 |
53 | class ToolTokens:
54 | def __init__(
55 | self,
56 | twelve_data_api_key: Optional[str] = None,
57 | open_ai_api_key: Optional[str] = None,
58 | error: Optional[str] = None,
59 | ):
60 | self.twelve_data_api_key = twelve_data_api_key
61 | self.open_ai_api_key = open_ai_api_key
62 | self.error = error
63 |
64 |
65 | def get_tokens_from_rc(rc: RequestContext) -> ToolTokens:
66 | if hasattr(rc, "headers"):
67 | headers = rc.headers
68 | elif hasattr(rc, "request"):
69 | headers = rc.request.headers
70 | else:
71 | return ToolTokens(error="Headers were not found in a request context.")
72 | auth_header = headers.get("authorization")
73 | split = auth_header.split(" ") if auth_header else []
74 | if len(split) == 2:
75 | access_token = split[1]
76 | openai_key = headers.get("x-openapi-key")
77 | return ToolTokens(
78 | twelve_data_api_key=access_token,
79 | open_ai_api_key=openai_key,
80 | )
81 | return ToolTokens(error=f"Bad or missing authorization header: {auth_header}")
```
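The header contract enforced by `get_tokens_from_rc` can be sketched in isolation. `parse_keys` below is a hypothetical helper, not part of the module: it operates on a plain dict of lower-cased header names and mirrors the same rule, with the Twelve Data key riding in `Authorization: apikey <key>` and the OpenAI key in `x-openapi-key`:

```python
from typing import Optional, Tuple

# Hypothetical standalone mirror of the get_tokens_from_rc parsing rule.
# Assumes header names are already lower-cased, as Starlette provides them.
def parse_keys(headers: dict) -> Tuple[Optional[str], Optional[str]]:
    auth = headers.get("authorization")
    parts = auth.split(" ") if auth else []
    if len(parts) != 2:  # bad or missing "apikey <key>" pair
        return None, None
    return parts[1], headers.get("x-openapi-key")
```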
--------------------------------------------------------------------------------
/scripts/check_embedings.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | import lancedb
3 | import openai
4 |
5 | # Constants
6 | EMBEDDING_MODEL = "text-embedding-3-small"
7 |
8 | def generate_embedding(text: str) -> list[float]:
9 | client = openai.OpenAI()
10 | response = client.embeddings.create(
11 | model=EMBEDDING_MODEL,
12 | input=[text]
13 | )
14 | return response.data[0].embedding
15 |
16 | def evaluate_queries(query_target_pairs: list[tuple[str, str]]):
17 | db_path = os.getenv('LANCEDB_PATH', '../extra/endpoints.lancedb')
18 | db = lancedb.connect(db_path)
19 | table = db.open_table("endpoints")
20 |
21 | results_summary = []
22 |
23 | for user_query, target_endpoint in query_target_pairs:
24 | query_vec = generate_embedding(user_query)
25 | results = table.search(query_vec).metric("cosine").limit(30).to_list()
26 |
27 | top_endpoint = results[0]['path'] if results else None
28 | target_position = next((i for i, r in enumerate(results) if r['path'] == target_endpoint), None)
29 |
30 | print(f"Query: {user_query}")
31 | print(f"Target endpoint: {target_endpoint}")
32 | print(f"Top 1 endpoint: {top_endpoint}")
33 | print(f"Target position in top 30: {target_position}\n")
34 |
35 | results_summary.append((user_query, target_endpoint, top_endpoint, target_position))
36 |
37 | return results_summary
38 |
39 |
40 | def main():
41 | query_target_pairs = [
42 | ("Show me intraday stock prices for Tesla (TSLA) with 1-minute intervals for the past 3 hours.", "/time_series"),
43 | ("What is the current exchange rate between USD and EUR?", "/price"),
44 | ("Get the RSI indicator for Apple (AAPL) over the last 14 days.", "/rsi"),
45 | ("When did Amazon last split its stock?", "/splits"),
46 | ("Give me daily closing prices for Bitcoin in the past 6 months.", "/time_series"),
47 | ("Show the MACD for Microsoft.", "/macd"),
48 | ("Get Google earnings reports for the last year.", "/earnings"),
49 | ("Fetch dividend history for Johnson & Johnson.", "/dividends"),
50 | ("Give me fundamentals for Netflix including P/E ratio.", "/fundamentals"),
51 | ("What is the latest stock quote for Nvidia?", "/quote"),
52 | ("Retrieve the Bollinger Bands for Apple.", "/bbands"),
53 | ("What is the VWAP for Tesla?", "/vwap"),
54 | ("Get ATR indicator for Amazon.", "/atr"),
55 | ("What is the stochastic oscillator for MSFT?", "/stoch"),
56 | ("Show me the EMA for S&P 500.", "/ema"),
57 | ("Retrieve the ADX indicator for crude oil.", "/adx"),
58 | ("Get the OBV for Bitcoin.", "/obv"),
59 | ("What is the highest stock price of Apple in the last 30 days?", "/max"),
60 | ("Give me the minimum price for TSLA in January 2024.", "/min"),
61 | ("Get the ROC indicator for Ethereum.", "/roc"),
62 | ]
63 |
64 | results = evaluate_queries(query_target_pairs)
65 |
66 | print("\nSummary:")
67 | for row in results:
68 | print(row)
69 |
70 |
71 | if __name__ == "__main__":
72 | main()
73 |
```
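The ranking check inside `evaluate_queries` can be exercised offline. The sketch below mirrors the "where does the target land in the top-k" computation on an already-ranked result list (the paths here are illustrative stand-ins for LanceDB search hits):

```python
from typing import Optional

def target_position(results: list[dict], target: str) -> Optional[int]:
    # Index of the target endpoint in a ranked result list,
    # or None when it falls outside the returned window.
    return next((i for i, r in enumerate(results) if r["path"] == target), None)

ranked = [{"path": "/time_series"}, {"path": "/price"}, {"path": "/rsi"}]
```

A position of 0 means the vector search put the expected endpoint first, which is what the summary at the end of the script is tallying.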
--------------------------------------------------------------------------------
/scripts/generate_tools.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 | import csv
3 | from pathlib import Path
4 |
5 | OPENAPI_PATH = "../extra/openapi_clean.json"
6 | ENDPOINTS_PATH = "../extra/endpoints_spec_en.csv"
7 | OUTPUT_PATH = "../data/tools_autogen.py"
8 |
9 |
10 | def load_csv_paths(path):
11 | with open(path, newline='', encoding='utf-8') as f:
12 | return [row[0] for i, row in enumerate(csv.reader(f)) if i > 0 and row]
13 |
14 |
15 | def load_openapi_spec(path):
16 | with open(path, encoding='utf-8') as f:
17 | return json.load(f)
18 |
19 |
20 | def collect_operations(paths, spec):
21 | ops = []
22 | seen = set()
23 | for path in paths:
24 | path_item = spec.get("paths", {}).get(path)
25 | if not path_item:
26 | continue
27 | for method, details in path_item.items():
28 | op_id = details.get("operationId")
29 | if not op_id or op_id in seen:
30 | continue
31 | seen.add(op_id)
32 | desc = details.get("description", "").strip().replace('"', '\\"').replace('\n', ' ')
33 | ops.append((op_id, desc, path.lstrip('/')))
34 | return ops
35 |
36 |
37 | def generate_code(ops):
38 | def fix_case(name: str) -> str:
39 | return name[0].upper() + name[1:] if name.lower().startswith("advanced") else name
40 |
41 | lines = [
42 | 'from mcp.server import FastMCP',
43 | 'from mcp.server.fastmcp import Context',
44 | ''
45 | ]
46 |
47 | # Import request models
48 | for op, _, _ in ops:
49 | lines.append(f'from .request_models import {fix_case(op)}Request')
50 | lines.append('')
51 |
52 | # Import response models
53 | for op, _, _ in ops:
54 | lines.append(f'from .response_models import {fix_case(op)}200Response')
55 | lines.append('')
56 |
57 | # Register tools
58 | lines.append('def register_all_tools(server: FastMCP, _call_endpoint):')
59 | for op, desc, key in ops:
60 | fixed_op = fix_case(op)
61 | lines += [
62 | f' @server.tool(name="{op}",',
63 | f' description="{desc}")',
64 | f' async def {op}(params: {fixed_op}Request, ctx: Context) -> {fixed_op}200Response:',
65 | f' return await _call_endpoint("{key}", params, {fixed_op}200Response, ctx)',
66 | ''
67 | ]
68 | return '\n'.join(lines)
69 |
70 |
71 | def main():
72 | spec = load_openapi_spec(OPENAPI_PATH)
73 | csv_paths = load_csv_paths(ENDPOINTS_PATH)
74 | all_spec_paths = list(spec.get("paths", {}).keys())
75 | extra_paths = sorted(set(all_spec_paths) - set(csv_paths))
76 | final_paths = csv_paths + extra_paths
77 |
78 | ops = collect_operations(final_paths, spec)
79 | total = len(ops)
80 | from_csv = len([op for op in ops if '/' + op[2] in csv_paths])
81 | from_extra = total - from_csv
82 |
83 | print(f"[INFO] Loaded {len(csv_paths)} paths from CSV.")
84 | print(f"[INFO] Found {len(all_spec_paths)} paths in OpenAPI spec.")
85 | print(f"[INFO] Added {from_extra} additional paths not listed in CSV.")
86 | print(f"[INFO] Generated {total} tools in total.")
87 |
88 | code = '# AUTOGENERATED FILE - DO NOT EDIT MANUALLY\n\n' + generate_code(ops)
89 | Path(OUTPUT_PATH).write_text(code, encoding='utf-8')
90 | print(f"[SUCCESS] File written to: {OUTPUT_PATH}")
91 |
92 |
93 | if __name__ == '__main__':
94 | main()
95 |
```
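On a tiny in-memory spec, `collect_operations` deduplicates by `operationId` and silently skips paths missing from the spec. The block below is a trimmed copy of the function applied to illustrative data (these paths and operationIds are examples, not taken from the real `openapi_clean.json`):

```python
# Trimmed copy of collect_operations from the script above, applied to a
# tiny illustrative spec. Description escaping is omitted for brevity.
def collect_operations(paths, spec):
    ops, seen = [], set()
    for path in paths:
        for _method, details in spec.get("paths", {}).get(path, {}).items():
            op_id = details.get("operationId")
            if not op_id or op_id in seen:
                continue
            seen.add(op_id)
            ops.append((op_id, details.get("description", "").strip(), path.lstrip("/")))
    return ops

spec = {
    "paths": {
        "/price": {"get": {"operationId": "GetPrice", "description": "Latest price."}},
        "/quote": {"get": {"operationId": "GetQuote", "description": "Latest quote."}},
    }
}
ops = collect_operations(["/price", "/quote", "/missing"], spec)
```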
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/common.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | import importlib.util
3 | from pathlib import Path
4 | from typing import Optional, List, Tuple
5 | from starlette.requests import Request
6 | from mcp.client.streamable_http import RequestContext
7 |
8 |
9 | mcp_server_base_url = "https://mcp.twelvedata.com"
10 | spec = importlib.util.find_spec("mcp_server_twelve_data")
11 | MODULE_PATH = Path(spec.origin).resolve()
12 | PACKAGE_ROOT = MODULE_PATH.parent # src/mcp_server_twelve_data
13 |
14 | LANCE_DB_ENDPOINTS_PATH = os.environ.get(
15 | "LANCE_DB_ENDPOINTS_PATH",
16 | str(PACKAGE_ROOT / ".." / "resources" / "endpoints.lancedb")
17 | )
18 |
19 | LANCE_DB_DOCS_PATH = os.environ.get(
20 | "LANCE_DB_DOCS_PATH",
21 | str(PACKAGE_ROOT / ".." / "resources" / "docs.lancedb")
22 | )
23 |
24 |
25 | def vector_db_exists():
26 | return os.path.isdir(LANCE_DB_ENDPOINTS_PATH)
27 |
28 |
29 | def create_dummy_request_context(request: Request) -> RequestContext:
30 | return RequestContext(
31 | client=object(),
32 | headers=dict(request.headers),
33 | session_id="generated-session-id",
34 | session_message=object(),
35 | metadata=object(),
36 | read_stream_writer=object(),
37 | sse_read_timeout=10.0,
38 | )
39 |
40 |
41 | class ToolPlanMap:
42 | def __init__(self, df):
43 | self.df = df
44 | self.plan_to_int = {
45 | 'basic': 0,
46 | 'grow': 1,
47 | 'pro': 2,
48 | 'ultra': 3,
49 | 'enterprise': 4,
50 | }
51 |
52 | def split(self, user_plan: Optional[str], tool_operation_ids: List[str]) -> Tuple[List[str], List[str]]:
53 | if user_plan is None:
54 | # if user plan param was not specified, then we have no restrictions for function calling
55 | return tool_operation_ids, []
56 | user_plan_key = user_plan.lower()
57 | user_plan_int = self.plan_to_int.get(user_plan_key)
58 | if user_plan_int is None:
59 | raise ValueError(f"Wrong user_plan: '{user_plan}'")
60 |
61 | tools_df = self.df[self.df["id"].isin(tool_operation_ids)]
62 |
63 | candidates = []
64 | premium_only_candidates = []
65 |
66 | for _, row in tools_df.iterrows():
67 | tool_id = row["id"]
68 | tool_plan_raw = row["x-starting-plan"]
69 | if tool_plan_raw is None:
70 | tool_plan_raw = 'basic'
71 |
72 | tool_plan_key = tool_plan_raw.lower()
73 | tool_plan_int = self.plan_to_int.get(tool_plan_key)
74 | if tool_plan_int is None:
75 | raise ValueError(f"Wrong tool_starting_plan: '{tool_plan_key}'")
76 |
77 | if user_plan_int >= tool_plan_int:
78 | candidates.append(tool_id)
79 | else:
80 | premium_only_candidates.append(tool_id)
81 |
82 | return candidates, premium_only_candidates
83 |
84 |
85 | def build_openai_tools_subset(tool_list):
86 | def expand_parameters(params):
87 | if (
88 | "properties" in params and
89 | "params" in params["properties"] and
90 | "$ref" in params["properties"]["params"] and
91 | "$defs" in params
92 | ):
93 | ref_path = params["properties"]["params"]["$ref"]
94 | ref_name = ref_path.split("/")[-1]
95 | schema = params["$defs"].get(ref_name, {})
96 | return {
97 | "type": "object",
98 | "properties": {
99 | "params": {
100 | "type": "object",
101 | "properties": schema.get("properties", {}),
102 | "required": schema.get("required", []),
103 | "description": schema.get("description", "")
104 | }
105 | },
106 | "required": ["params"]
107 | }
108 | else:
109 | return params
110 |
111 | tools = []
112 | for tool in tool_list:
113 | expanded_parameters = expand_parameters(tool.parameters)
114 | tools.append({
115 | "type": "function",
116 | "function": {
117 | "name": tool.name,
118 | "description": tool.description or "No description provided.",
119 | "parameters": expanded_parameters
120 | }
121 | })
122 |     return tools
123 |
```
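The plan-gating rule in `ToolPlanMap.split` can be shown without pandas. The sketch below uses plain dicts in place of the DataFrame rows (the tool ids and plans are illustrative):

```python
# Plan gating as in ToolPlanMap.split, with dicts standing in for the
# DataFrame. A tool is callable when the user's plan rank is at least the
# tool's "x-starting-plan" rank; a missing plan defaults to "basic".
PLAN_RANK = {"basic": 0, "grow": 1, "pro": 2, "ultra": 3, "enterprise": 4}

ROWS = [
    {"id": "GetPrice", "x-starting-plan": "basic"},
    {"id": "GetIncomeStatement", "x-starting-plan": "pro"},
    {"id": "GetLogo", "x-starting-plan": None},
]

def split(user_plan, tool_ids):
    if user_plan is None:
        return list(tool_ids), []  # no plan given: no restrictions
    user_rank = PLAN_RANK[user_plan.lower()]
    allowed, premium_only = [], []
    for row in ROWS:
        if row["id"] not in tool_ids:
            continue
        plan = (row["x-starting-plan"] or "basic").lower()
        bucket = allowed if user_rank >= PLAN_RANK[plan] else premium_only
        bucket.append(row["id"])
    return allowed, premium_only
```

Tools that land in the second list are the "premium only" candidates the u-tool reports back instead of calling.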
--------------------------------------------------------------------------------
/test/test_mcp_main.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 | import os
3 | import signal
4 |
5 | import httpx
6 | import pytest
7 | import asyncio
8 | import urllib.parse
9 |
10 |
11 | import pytest_asyncio
12 | from dotenv import load_dotenv
13 | from mcp import stdio_client, ClientSession, StdioServerParameters
14 |
15 | dotenv_path = os.path.join(os.path.dirname(__file__), '..', '.env')
16 | load_dotenv(dotenv_path)
17 | server_url = os.environ['SERVER_URL']
18 | td_api_key = os.environ['TWELVE_DATA_API_KEY']
19 | OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
20 |
21 |
22 | @pytest_asyncio.fixture
23 | def run_server_factory():
24 | async def _start_server(*args):
25 | proc = await asyncio.create_subprocess_exec(
26 | "python", "-m", "mcp_server_twelve_data",
27 | *args,
28 | # stdout=asyncio.subprocess.DEVNULL,
29 | # stderr=asyncio.subprocess.DEVNULL,
30 | )
31 |
32 |         # healthcheck: poll /health until the server answers
33 |         for _ in range(30):
34 |             try:
35 |                 async with httpx.AsyncClient() as client:
36 |                     r = await client.get(f"{server_url}/health")
37 |                     if r.status_code == 200:
38 |                         break
39 |             except Exception:
40 |                 pass
41 |             await asyncio.sleep(1)
42 |         else:
43 |             proc.terminate()
44 |             raise RuntimeError("Server did not start")
45 | async def stop():
46 | proc.send_signal(signal.SIGINT)
47 | await proc.wait()
48 |
49 | return stop
50 |
51 | return _start_server
52 |
53 |
54 | @pytest.mark.asyncio
55 | async def test_call_utool(run_server_factory):
56 | stop_server = await run_server_factory(
57 | "-t", "streamable-http",
58 | "-k", td_api_key,
59 | "-u", OPENAI_API_KEY,
60 | )
61 | try:
62 | async with httpx.AsyncClient() as client:
63 | response = await client.get(
64 | f"{server_url}/utool?query={urllib.parse.quote('show me RSI for AAPL')}",
65 | timeout=30,
66 | )
67 | assert response.status_code == 200
68 | data = response.json()
69 | response = data.get("response")
70 | assert response
71 | assert "values" in response
72 | assert len(response["values"]) > 0
73 | finally:
74 | await stop_server()
75 |
76 |
77 | @pytest.mark.asyncio
78 | async def test_call_utool_both_keys_in_header(run_server_factory):
79 | stop_server = await run_server_factory(
80 | "-t", "streamable-http", "-ua"
81 | )
82 |
83 | try:
84 | async with httpx.AsyncClient() as client:
85 | response = await client.get(
86 | f"{server_url}/utool?query={urllib.parse.quote('show me RSI for AAPL')}",
87 | timeout=30,
88 | headers={
89 | 'Authorization': f'apikey {td_api_key}',
90 | 'X-OpenAPI-Key': OPENAI_API_KEY,
91 | }
92 | )
93 | assert response.status_code == 200
94 | data = response.json()
95 | response = data.get("response")
96 | assert response
97 | assert "values" in response
98 | assert len(response["values"]) > 0
99 | finally:
100 | await stop_server()
101 |
102 |
103 | @pytest.mark.asyncio
104 | async def test_call_utool_stdio():
105 | server_params = StdioServerParameters(
106 | command="python",
107 | args=[
108 | "-m", "mcp_server_twelve_data",
109 | "-t", "stdio",
110 | "-k", td_api_key,
111 | "-u", OPENAI_API_KEY
112 | ],
113 | )
114 |
115 | async with stdio_client(server_params) as (reader, writer):
116 | async with ClientSession(reader, writer) as session:
117 | await session.initialize()
118 | result = await session.call_tool("u-tool", arguments={"query": "show me RSI for AAPL"})
119 | data = json.loads(result.content[0].text)
120 | response = data.get("response")
121 | assert response
122 | assert "values" in response
123 | assert len(response["values"]) > 0
124 |
125 |
126 | @pytest.mark.asyncio
127 | async def test_call_time_series_stdio():
128 | server_params = StdioServerParameters(
129 | command="python",
130 | args=[
131 | "-m", "mcp_server_twelve_data",
132 | "-t", "stdio",
133 | "-k", td_api_key,
134 | ],
135 | )
136 |
137 | async with stdio_client(server_params) as (reader, writer):
138 | async with ClientSession(reader, writer) as session:
139 | await session.initialize()
140 | arguments = {
141 | "params": {
142 | "symbol": "AAPL",
143 | "interval": "1day",
144 | }
145 | }
146 |
147 | result = await session.call_tool("GetTimeSeries", arguments=arguments)
148 | data = json.loads(result.content[0].text)
149 |
150 | assert "values" in data
151 | assert len(data["values"]) > 0
152 |
```
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/doc_tool.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Optional, Literal, cast
2 |
3 | import openai
4 | from openai.types.chat import ChatCompletionSystemMessageParam
5 | from starlette.requests import Request
6 | from starlette.responses import JSONResponse
7 |
8 | from mcp.server.fastmcp import FastMCP, Context
9 | from mcp_server_twelve_data.common import (
10 | create_dummy_request_context, LANCE_DB_DOCS_PATH,
11 | )
12 | from mcp_server_twelve_data.doc_tool_response import DocToolResponse, doctool_func_type
13 | from mcp_server_twelve_data.key_provider import extract_open_ai_apikey
14 | from mcp_server_twelve_data.prompts import doctool_doc_string
15 |
16 |
17 | def register_doc_tool(
18 | server: FastMCP,
19 | open_ai_api_key_from_args: Optional[str],
20 | transport: Literal["stdio", "sse", "streamable-http"],
21 | ) -> doctool_func_type:
22 | embedding_model = "text-embedding-3-large"
23 | llm_model = "gpt-4.1-mini"
24 | # llm_model = "gpt-4o-mini"
25 | # llm_model = "gpt-4.1-nano"
26 |
27 | db_path = LANCE_DB_DOCS_PATH
28 | top_k = 15
29 |
30 | import lancedb
31 | db = lancedb.connect(db_path)
32 | table = db.open_table("docs")
33 |
34 | @server.tool(name="doc-tool")
35 | async def doc_tool(query: str, ctx: Context) -> DocToolResponse:
36 | openai_key, error = extract_open_ai_apikey(
37 | transport=transport,
38 | open_ai_api_key=open_ai_api_key_from_args,
39 | ctx=ctx,
40 | )
41 | if error is not None:
42 | return DocToolResponse(query=query, error=error)
43 |
44 | client = openai.OpenAI(api_key=openai_key)
45 |
46 | try:
47 | embedding = client.embeddings.create(
48 | model=embedding_model,
49 | input=[query],
50 | ).data[0].embedding
51 |
52 | results = table.search(embedding).metric("cosine").limit(top_k).to_list()
53 | matches = [r["title"] for r in results]
54 | combined_text = "\n\n---\n\n".join([r["content"] for r in results])
55 |
56 | except Exception as e:
57 | return DocToolResponse(query=query, top_candidates=[], error=f"Vector search failed: {e}")
58 |
59 | try:
60 | prompt = (
61 | "You are a documentation assistant. Given a user query and relevant documentation sections, "
62 | "generate a helpful, accurate, and Markdown-formatted answer.\n\n"
63 | "Use:\n"
64 | "- Headings\n"
65 | "- Bullet points\n"
66 | "- Short paragraphs\n"
67 | "- Code blocks if applicable\n\n"
68 | "Do not repeat the full documentation — summarize only what's relevant to the query.\n\n"
69 | "If the user asks how to perform an action "
70 | "(e.g., 'how to get', 'ways to retrieve', 'methods for', etc.), "
71 | "and there are multiple suitable API endpoints, provide "
72 | "a list of the most relevant ones with a brief description of each.\n"
73 | "Highlight when to use which endpoint and what kind of data they return."
74 | )
75 |
76 | llm_response = client.chat.completions.create(
77 | model=llm_model,
78 | messages=[
79 | cast(ChatCompletionSystemMessageParam, {"role": "system", "content": prompt}),
80 | cast(ChatCompletionSystemMessageParam, {"role": "user", "content": f"User query:\n{query}"}),
81 | cast(ChatCompletionSystemMessageParam,
82 | {"role": "user", "content": f"Documentation:\n{combined_text}"}),
83 | ],
84 | temperature=0.2,
85 | )
86 |
87 | markdown = llm_response.choices[0].message.content.strip()
88 | return DocToolResponse(
89 | query=query,
90 | top_candidates=matches,
91 | result=markdown,
92 | )
93 |
94 | except Exception as e:
95 | return DocToolResponse(query=query, top_candidates=matches, error=f"LLM summarization failed: {e}")
96 |
97 | doc_tool.__doc__ = doctool_doc_string
98 | return doc_tool
99 |
100 |
101 | def register_http_doctool(
102 | transport: str,
103 | server: FastMCP,
104 | doc_tool,
105 | ):
106 | if transport == "streamable-http":
107 | @server.custom_route("/doctool", ["GET"])
108 | async def doc_tool_http(request: Request):
109 | query = request.query_params.get("query")
110 | if not query:
111 | return JSONResponse({"error": "Missing 'query' query parameter"}, status_code=400)
112 |
113 | ctx = Context(request_context=create_dummy_request_context(request))
114 | result = await doc_tool(query=query, ctx=ctx)
115 | return JSONResponse(content=result.model_dump())
116 |
```
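As a minimal sketch of the retrieval step in `doc_tool` above, a hypothetical `build_context` helper (the real code does this inline on LanceDB rows) shows how candidate titles and the joined documentation context are assembled:

```python
# Hypothetical helper mirroring the inline context assembly in doc_tool:
# collect candidate titles and join section bodies with the same separator.
def build_context(results: list[dict]) -> tuple[list[str], str]:
    matches = [r["title"] for r in results]
    combined = "\n\n---\n\n".join(r["content"] for r in results)
    return matches, combined

rows = [
    {"title": "Quote endpoint", "content": "GET /quote returns the latest price."},
    {"title": "Time series", "content": "GET /time_series returns OHLC bars."},
]
titles, context = build_context(rows)
```

The `---` separator keeps the sections visually distinct in the prompt sent to the LLM.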
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/server.py:
--------------------------------------------------------------------------------
```python
1 | from typing import Type, TypeVar, Literal, Optional
2 | import httpx
3 | from pydantic import BaseModel
4 | from mcp.server.fastmcp import FastMCP, Context
5 | from starlette.exceptions import HTTPException
6 | from starlette.requests import Request
7 | from starlette.responses import JSONResponse
8 | import re
9 |
10 | from .common import vector_db_exists
11 | from .doc_tool import register_doc_tool, register_http_doctool
12 | from .doc_tool_remote import register_doc_tool_remote
13 | from .key_provider import extract_twelve_data_apikey
14 | from .tools import register_all_tools
15 | from .u_tool import register_u_tool, register_http_utool
16 | from .u_tool_remote import register_u_tool_remote
17 |
18 |
19 | def serve(
20 | api_base: str,
21 | transport: Literal["stdio", "sse", "streamable-http"],
22 | twelve_data_apikey: Optional[str],
23 | number_of_tools: int,
24 | u_tool_open_ai_api_key: Optional[str],
25 | u_tool_oauth2: bool
26 | ) -> None:
27 | server = FastMCP(
28 | "mcp-twelve-data",
29 | host="0.0.0.0",
30 |         port=8000,
31 | )
32 |
33 | P = TypeVar('P', bound=BaseModel)
34 | R = TypeVar('R', bound=BaseModel)
35 |
36 | def resolve_path_params(endpoint: str, params_dict: dict) -> str:
37 | def replacer(match):
38 | key = match.group(1)
39 | if key not in params_dict:
40 | raise ValueError(f"Missing path parameter: {key}")
41 | return str(params_dict.pop(key))
42 | return re.sub(r"{(\w+)}", replacer, endpoint)
43 |
44 | async def _call_endpoint(
45 | endpoint: str,
46 | params: P,
47 | response_model: Type[R],
48 | ctx: Context
49 | ) -> R:
50 | params.apikey = extract_twelve_data_apikey(
51 | twelve_data_apikey=twelve_data_apikey,
52 | transport=transport,
53 | ctx=ctx
54 | )
55 |
56 | params_dict = params.model_dump(exclude_none=True)
57 | resolved_endpoint = resolve_path_params(endpoint, params_dict)
58 |
59 | async with httpx.AsyncClient(
60 | trust_env=False,
61 | headers={
62 | "accept": "application/json",
63 | "user-agent": "python-httpx/0.24.0"
64 | },
65 | ) as client:
66 | resp = await client.get(
67 | f"{api_base}/{resolved_endpoint}",
68 | params=params_dict
69 | )
70 | resp.raise_for_status()
71 | resp_json = resp.json()
72 |
73 | if isinstance(resp_json, dict):
74 | status = resp_json.get("status")
75 | if status == "error":
76 | code = resp_json.get('code')
77 | raise HTTPException(
78 | status_code=code,
79 | detail=f"Failed to perform request,"
80 | f" code = {code}, message = {resp_json.get('message')}"
81 | )
82 |
83 | return response_model.model_validate(resp_json)
84 |
85 | if u_tool_oauth2 or u_tool_open_ai_api_key is not None:
86 |         # the large vector DB is not published; without it, the server falls back to remote mode
87 | if vector_db_exists():
88 | register_all_tools(server=server, _call_endpoint=_call_endpoint)
89 | u_tool = register_u_tool(
90 | server=server,
91 | open_ai_api_key_from_args=u_tool_open_ai_api_key,
92 | transport=transport
93 | )
94 | doc_tool = register_doc_tool(
95 | server=server,
96 | open_ai_api_key_from_args=u_tool_open_ai_api_key,
97 | transport=transport
98 | )
99 | else:
100 | u_tool = register_u_tool_remote(
101 | server=server,
102 | twelve_data_apikey=twelve_data_apikey,
103 | open_ai_api_key_from_args=u_tool_open_ai_api_key,
104 | transport=transport,
105 | )
106 | doc_tool = register_doc_tool_remote(
107 | server=server,
108 | twelve_data_apikey=twelve_data_apikey,
109 | open_ai_api_key_from_args=u_tool_open_ai_api_key,
110 | transport=transport,
111 | )
112 | register_http_utool(
113 | transport=transport,
114 | u_tool=u_tool,
115 | server=server,
116 | )
117 | register_http_doctool(
118 | transport=transport,
119 | server=server,
120 | doc_tool=doc_tool,
121 | )
122 |
123 | else:
124 | register_all_tools(server=server, _call_endpoint=_call_endpoint)
125 | all_tools = server._tool_manager._tools
126 | server._tool_manager._tools = dict(list(all_tools.items())[:number_of_tools])
127 |
128 | @server.custom_route("/health", ["GET"])
129 | async def health(_: Request):
130 | return JSONResponse({"status": "ok"})
131 |
132 | server.run(transport=transport)
133 |
```
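The `resolve_path_params` helper in server.py substitutes `{placeholder}` segments and pops the consumed keys out of the parameter dict, so a path parameter is never also sent as a query parameter. A standalone copy demonstrates the behavior:

```python
import re

# Standalone copy of resolve_path_params from server.py: fills {key}
# placeholders from params_dict and removes the keys it consumes.
def resolve_path_params(endpoint: str, params_dict: dict) -> str:
    def replacer(match):
        key = match.group(1)
        if key not in params_dict:
            raise ValueError(f"Missing path parameter: {key}")
        return str(params_dict.pop(key))
    return re.sub(r"{(\w+)}", replacer, endpoint)

params = {"market": "stocks", "apikey": "demo"}
path = resolve_path_params("market_movers/{market}", params)
# "market" was popped; only "apikey" remains for the query string
```

A missing placeholder key raises `ValueError` rather than producing a malformed URL.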
--------------------------------------------------------------------------------
/scripts/generate_endpoints_embeddings.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | import json
3 | from typing import cast
4 |
5 | import yaml
6 | import openai
7 | import lancedb
8 | from dotenv import load_dotenv
9 | from openai.types.chat import ChatCompletionSystemMessageParam, ChatCompletionUserMessageParam
10 |
11 |
12 | # === CONFIG ===
13 | load_dotenv('../.env')
14 |
15 | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
16 | OPENAI_MODEL = "gpt-4o-mini"
17 | OPENAI_MODEL_EMBEDDINGS = "text-embedding-3-large"
18 | spec_path = os.getenv('OPENAPI_SPEC', '../extra/openapi_clean.json')
19 | db_path = os.getenv('LANCEDB_PATH', '../data/endpoints.lancedb')
20 | desc_path = os.getenv('DESC_JSON_PATH', '../extra/full_descriptions.json')
21 |
22 |
23 | def load_spec(path: str) -> dict:
24 | with open(path, 'r', encoding='utf-8') as f:
25 | return yaml.safe_load(f) if path.lower().endswith(('.yaml', '.yml')) else json.load(f)
26 |
27 |
28 | def extract_endpoints(spec: dict) -> list[dict]:
29 | paths = spec.get('paths', {})
30 | components = spec.get('components', {})
31 |
32 | def resolve_ref(obj):
33 | if isinstance(obj, dict):
34 | if '$ref' in obj:
35 | ref_path = obj['$ref'].lstrip('#/').split('/')
36 | resolved = spec
37 | for part in ref_path:
38 | resolved = resolved.get(part, {})
39 | return resolve_ref(resolved)
40 | else:
41 | return {k: resolve_ref(v) for k, v in obj.items()}
42 | elif isinstance(obj, list):
43 | return [resolve_ref(item) for item in obj]
44 | return obj
45 |
46 | endpoints = []
47 | for path, methods in paths.items():
48 | for method, op in methods.items():
49 | if not isinstance(op, dict):
50 | continue
51 |
52 | parameters = op.get('parameters', [])
53 | request_body = op.get('requestBody', {})
54 | responses = []
55 |
56 | for code, raw_resp in op.get('responses', {}).items():
57 | resolved_resp = resolve_ref(raw_resp)
58 | content = resolved_resp.get('content', {})
59 | resolved_content = {}
60 |
61 | for mime_type, mime_obj in content.items():
62 | schema = mime_obj.get('schema', {})
63 | resolved_schema = resolve_ref(schema)
64 | resolved_content[mime_type] = {
65 | 'schema': resolved_schema
66 | }
67 |
68 | responses.append({
69 | 'code': code,
70 | 'description': resolved_resp.get('description', ''),
71 | 'content': resolved_content
72 | })
73 |
74 | endpoints.append({
75 | 'path': path,
76 | 'method': method.upper(),
77 | 'summary': op.get('summary', ''),
78 | 'description': op.get('description', ''),
79 | 'parameters': parameters,
80 | 'requestBody': request_body,
81 | 'responses': responses,
82 | 'operationId': op.get('operationId', f'{method}_{path}'),
83 | 'x-starting-plan': op.get('x-starting-plan', None),
84 | })
85 |
86 | return endpoints
87 |
88 |
89 | def generate_llm_description(info: dict) -> str:
90 | prompt = (
91 | "You are an OpenAPI endpoint explainer. Your goal is to produce a clear, concise, and "
92 | "natural-language explanation of the given API endpoint based on its metadata. "
93 | "This description will be embedded into a vector space for solving a top-N retrieval task. "
94 | "Given a user query, the system will compare it semantically to these embeddings to find "
95 | "the most relevant endpoints. Therefore, the output must reflect both the purpose of the "
96 | "endpoint and its parameter semantics using natural language.\n\n"
97 | "Please summarize the endpoint's purpose, its key input parameters and their roles, and "
98 | "what the endpoint returns. You may include short usage context or constraints to help clarify its behavior. "
99 | "Do not echo raw JSON. Avoid listing all optional or less relevant fields unless necessary for understanding.\n"
100 | "Instead of showing URL-style query examples, include two or three natural-language questions "
101 | "a user might ask that this endpoint could satisfy. These examples will help optimize the embedding "
102 | "for semantic search over user queries."
103 | )
104 | client = openai.OpenAI()
105 | messages = [
106 | cast(ChatCompletionSystemMessageParam, {"role": "system", "content": prompt}),
107 | cast(ChatCompletionUserMessageParam, {"role": "user", "content": json.dumps(info, indent=2)})
108 | ]
109 |
110 | response = client.chat.completions.create(
111 | model=OPENAI_MODEL,
112 | messages=messages,
113 | temperature=0.3
114 | )
115 |
116 | return response.choices[0].message.content.strip()
117 |
118 |
119 | def generate_embedding(text: str) -> list[float]:
120 | response = openai.OpenAI().embeddings.create(
121 | model=OPENAI_MODEL_EMBEDDINGS,
122 | input=[text]
123 | )
124 | return response.data[0].embedding
125 |
126 |
127 | def load_existing_descriptions(path: str) -> dict:
128 | if os.path.exists(path):
129 | with open(path, 'r', encoding='utf-8') as f:
130 | return json.load(f)
131 | return {}
132 |
133 |
134 | def save_descriptions(path: str, data: dict):
135 | with open(path, 'w', encoding='utf-8') as f:
136 | json.dump(data, f, indent=2, ensure_ascii=False)
137 |
138 |
139 | def main():
140 | spec = load_spec(spec_path)
141 | endpoints = extract_endpoints(spec)
142 |
143 | full_descriptions = load_existing_descriptions(desc_path)
144 | records = []
145 |
146 | for info in endpoints:
147 | try:
148 | operation_id = info.get('operationId', f"{info['method']}_{info['path']}")
149 | if operation_id in full_descriptions:
150 | description = full_descriptions[operation_id]
151 | else:
152 | description = generate_llm_description(info)
153 | full_descriptions[operation_id] = description
154 | save_descriptions(desc_path, full_descriptions) # Save on each iteration
155 |
156 | print(f"\n--- LLM Description for {info['method']} {info['path']} ---\n{description}\n")
157 | vector = generate_embedding(description)
158 | records.append({
159 | 'id': operation_id,
160 | 'vector': vector,
161 | 'path': info['path'],
162 | 'method': info['method'],
163 | 'summary': info['summary'],
164 | 'x-starting-plan': info.get('x-starting-plan', None),
165 | })
166 | except Exception as e:
167 | print(f"Error processing {info['method']} {info['path']}: {e}")
168 |
169 | db = lancedb.connect(db_path)
170 | db.create_table(name='endpoints', data=records, mode='overwrite')
171 |
172 | save_descriptions(desc_path, full_descriptions)
173 | print(f"Indexed {len(records)} endpoints into '{db_path}' and saved LLM descriptions to '{desc_path}'.")
174 |
175 |
176 | if __name__ == '__main__':
177 | main()
178 |
```
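The `resolve_ref` closure in `extract_endpoints` inlines `$ref` pointers by walking the fragment path through the spec. A self-contained sketch against a tiny in-memory spec (a hypothetical `Quote` schema used only for illustration):

```python
# Sketch of the $ref resolution used in extract_endpoints, applied to a
# minimal spec: "#/components/schemas/Quote" is replaced by its definition.
spec = {
    "components": {"schemas": {"Quote": {"type": "object"}}},
}

def resolve_ref(obj, spec=spec):
    if isinstance(obj, dict):
        if "$ref" in obj:
            resolved = spec
            for part in obj["$ref"].lstrip("#/").split("/"):
                resolved = resolved.get(part, {})
            return resolve_ref(resolved, spec)
        return {k: resolve_ref(v, spec) for k, v in obj.items()}
    if isinstance(obj, list):
        return [resolve_ref(item, spec) for item in obj]
    return obj

out = resolve_ref({"schema": {"$ref": "#/components/schemas/Quote"}})
```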
--------------------------------------------------------------------------------
/scripts/split_opnapi_by_groups.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | import yaml
3 | import json
4 | import re
5 | from collections import defaultdict
6 |
7 | input_path = "../data/openapi01.06.2025.json"
8 | output_dir = "../data"
9 |
10 | GROUPS = {
11 | "reference_data": [
12 | "/stocks",
13 | "/forex_pairs",
14 | "/cryptocurrencies",
15 | "/funds",
16 | "/bonds",
17 | "/etfs",
18 | "/commodities",
19 | "/cross_listings",
20 | "/exchanges",
21 | "/exchange_schedule",
22 | "/cryptocurrency_exchanges",
23 | "/market_state",
24 | "/instrument_type",
25 | "/countries",
26 | "/earliest_timestamp",
27 | "/symbol_search",
28 | "/intervals"
29 | ],
30 | "core_data": [
31 | "/time_series",
32 | "/time_series/cross",
33 | "/exchange_rate",
34 | "/currency_conversion",
35 | "/quote",
36 | "/price",
37 | "/eod",
38 | "/market_movers/{market}"
39 | ],
40 | "mutual_funds": [
41 | "/mutual_funds/list",
42 | "/mutual_funds/family",
43 | "/mutual_funds/type",
44 | "/mutual_funds/world",
45 | "/mutual_funds/world/summary",
46 | "/mutual_funds/world/performance",
47 | "/mutual_funds/world/risk",
48 | "/mutual_funds/world/ratings",
49 | "/mutual_funds/world/composition",
50 | "/mutual_funds/world/purchase_info",
51 | "/mutual_funds/world/sustainability"
52 | ],
53 | "etfs": [
54 | "/etfs/list",
55 | "/etfs/family",
56 | "/etfs/type",
57 | "/etfs/world",
58 | "/etfs/world/summary",
59 | "/etfs/world/performance",
60 | "/etfs/world/risk",
61 | "/etfs/world/composition"
62 | ],
63 | "fundamentals": [
64 | "/balance_sheet",
65 | "/balance_sheet/consolidated",
66 | "/cash_flow",
67 | "/cash_flow/consolidated",
68 | "/dividends",
69 | "/dividends_calendar",
70 | "/earnings",
71 | "/income_statement",
72 | "/income_statement/consolidated",
73 | "/ipo_calendar",
74 | "/key_executives",
75 | "/last_change/{endpoint}",
76 | "/logo",
77 | "/market_cap",
78 | "/profile",
79 | "/splits",
80 | "/splits_calendar",
81 | "/statistics"
82 | ],
83 | "analysis": [
84 | "/analyst_ratings/light",
85 | "/analyst_ratings/us_equities",
86 | "/earnings_estimate",
87 | "/revenue_estimate",
88 | "/eps_trend",
89 | "/eps_revisions",
90 | "/growth_estimates",
91 | "/price_target",
92 | "/recommendations",
93 | "/earnings_calendar"
94 | ],
95 | "regulatory": [
96 | "/tax_info",
97 | "/edgar_filings/archive",
98 | "/insider_transactions",
99 | "/direct_holders",
100 | "/fund_holders",
101 | "/institutional_holders",
102 | "/sanctions/{source}"
103 | ]
104 | }
105 |
106 | mirrors = [
107 | "https://api-reference-data.twelvedata.com",
108 | "https://api-time-series.twelvedata.com",
109 | "https://api-mutual-funds.twelvedata.com",
110 | "https://api-etfs.twelvedata.com",
111 | "https://api-fundamental.twelvedata.com",
112 | "https://api-analysis.twelvedata.com",
113 | "https://api-regulator.twelvedata.com",
114 | ]
115 |
116 |
117 | def load_spec(path):
118 | with open(path, 'r', encoding='utf-8') as f:
119 | if path.lower().endswith(('.yaml', '.yml')):
120 | return yaml.safe_load(f)
121 | return json.load(f)
122 |
123 |
124 | def dump_spec(spec, path):
125 | with open(path, 'w', encoding='utf-8') as f:
126 | if path.lower().endswith(('.yaml', '.yml')):
127 | yaml.safe_dump(spec, f, sort_keys=False, allow_unicode=True)
128 | else:
129 | json.dump(spec, f, ensure_ascii=False, indent=2)
130 |
131 |
132 | def find_refs(obj):
133 | refs = set()
134 | if isinstance(obj, dict):
135 | for k, v in obj.items():
136 | if k == '$ref' and isinstance(v, str):
137 | refs.add(v)
138 | else:
139 | refs |= find_refs(v)
140 | elif isinstance(obj, list):
141 | for item in obj:
142 | refs |= find_refs(item)
143 | return refs
144 |
145 |
146 | def prune_components(full_components, used_refs):
147 | pattern = re.compile(r'^#/components/([^/]+)/(.+)$')
148 | used = defaultdict(set)
149 | for ref in used_refs:
150 | m = pattern.match(ref)
151 | if m:
152 | comp_type, comp_name = m.group(1), m.group(2)
153 | used[comp_type].add(comp_name)
154 | changed = True
155 | while changed:
156 | changed = False
157 | for comp_type, names in list(used.items()):
158 | for name in list(names):
159 | definition = full_components.get(comp_type, {}).get(name)
160 | if definition:
161 | for r in find_refs(definition):
162 | m2 = pattern.match(r)
163 | if m2:
164 | ct, cn = m2.group(1), m2.group(2)
165 | if cn not in used[ct]:
166 | used[ct].add(cn)
167 | changed = True
168 | pruned = {}
169 | for comp_type, defs in full_components.items():
170 | if comp_type in used:
171 | kept = {n: defs[n] for n in defs if n in used[comp_type]}
172 | if kept:
173 | pruned[comp_type] = kept
174 | return pruned
175 |
176 |
177 | def trim_fields(obj):
178 | if isinstance(obj, dict):
179 | for k, v in obj.items():
180 | if k == "description" and isinstance(v, str):
181 | if len(v) > 300:
182 | obj[k] = v[:300]
183 | elif k == "example" and isinstance(v, str):
184 | if len(v) > 700:
185 | obj[k] = v[:700]
186 | else:
187 | trim_fields(v)
188 | elif isinstance(obj, list):
189 | for item in obj:
190 | trim_fields(item)
191 |
192 |
193 | def add_empty_properties(obj):
194 | if isinstance(obj, dict):
195 | for k, v in obj.items():
196 | if k == "schema" and isinstance(v, dict):
197 | if "properties" not in v:
198 | v["properties"] = {}
199 | add_empty_properties(v)
200 | else:
201 | add_empty_properties(v)
202 | elif isinstance(obj, list):
203 | for item in obj:
204 | add_empty_properties(item)
205 |
206 |
207 | def filter_paths(all_paths, allowed_list):
208 | return [path for path in all_paths if path in allowed_list]
209 |
210 |
211 | def main():
212 | os.makedirs(output_dir, exist_ok=True)
213 | spec = load_spec(input_path)
214 | all_paths = set(spec.get('paths', {}).keys())
215 |
216 | for idx, (group_name, group_paths) in enumerate(GROUPS.items()):
217 | group_allowed = filter_paths(all_paths, group_paths)
218 | if not group_allowed:
219 | continue
220 |
221 | new_spec = {
222 | 'openapi': spec.get('openapi'),
223 | 'info': spec.get('info'),
224 | 'servers': [{'url': mirrors[idx]}] if idx < len(mirrors) else spec.get('servers', []),
225 | 'paths': {k: spec['paths'][k] for k in group_allowed}
226 | }
227 |
228 | add_empty_properties(new_spec)
229 | trim_fields(new_spec)
230 | used_refs = find_refs(new_spec['paths'])
231 | pruned = prune_components(spec.get('components', {}), used_refs)
232 | if pruned:
233 | new_spec['components'] = pruned
234 |
235 | out_file = os.path.join(output_dir, f"{os.path.splitext(os.path.basename(input_path))[0]}_{group_name}{os.path.splitext(input_path)[1]}")
236 | dump_spec(new_spec, out_file)
237 | print(f"{group_name}: {len(new_spec['paths'])} paths -> {out_file}")
238 |
239 |
240 | if __name__ == "__main__":
241 | main()
242 |
```
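The pruning logic above hinges on `find_refs`, which collects every `$ref` string reachable from the kept paths; `prune_components` then keeps only those components (plus anything they reference transitively). A standalone copy of `find_refs` on a toy fragment:

```python
# Standalone copy of find_refs from the script: walks dicts and lists,
# collecting every "$ref" string so pruning can keep reachable components.
def find_refs(obj):
    refs = set()
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == '$ref' and isinstance(v, str):
                refs.add(v)
            else:
                refs |= find_refs(v)
    elif isinstance(obj, list):
        for item in obj:
            refs |= find_refs(item)
    return refs

paths = {
    "/quote": {"get": {"responses": {
        "200": {"$ref": "#/components/responses/Ok"},
        "default": {"content": [{"$ref": "#/components/schemas/Error"}]},
    }}}
}
refs = find_refs(paths)
```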
--------------------------------------------------------------------------------
/src/mcp_server_twelve_data/u_tool.py:
--------------------------------------------------------------------------------
```python
1 | from starlette.requests import Request
2 |
3 | import openai
4 | import json
5 |
6 | from mcp.server.fastmcp import FastMCP, Context
7 | from pydantic import BaseModel
8 | from typing import Optional, List, cast, Literal
9 | from openai.types.chat import ChatCompletionSystemMessageParam
10 | from starlette.responses import JSONResponse
11 |
12 | from mcp_server_twelve_data.common import create_dummy_request_context, ToolPlanMap, \
13 | build_openai_tools_subset, LANCE_DB_ENDPOINTS_PATH
14 | from mcp_server_twelve_data.key_provider import extract_open_ai_apikey
15 | from mcp_server_twelve_data.prompts import utool_doc_string
16 | from mcp_server_twelve_data.u_tool_response import UToolResponse, utool_func_type
17 |
18 |
19 | def get_md_response(
20 | client: openai.OpenAI,
21 | llm_model: str,
22 | query: str,
23 | result: BaseModel
24 | ) -> str:
25 | prompt = """
26 | You are a Markdown report generator.
27 |
28 | Your task is to generate a clear, well-structured and readable response in Markdown format based on:
29 | 1. A user query
30 | 2. A JSON object containing the data relevant to the query
31 |
32 | Instructions:
33 | - Do NOT include raw JSON.
34 | - Instead, extract relevant information and present it using Markdown structure: headings, bullet points, tables,
35 | bold/italic text, etc.
36 | - Be concise, accurate, and helpful.
37 | - If the data is insufficient to fully answer the query, say so clearly.
38 |
39 | Respond only with Markdown. Do not explain or include extra commentary outside of the Markdown response.
40 | """
41 |
42 | llm_response = client.chat.completions.create(
43 | model=llm_model,
44 | messages=[
45 | cast(ChatCompletionSystemMessageParam, {"role": "system", "content": prompt}),
46 | cast(ChatCompletionSystemMessageParam, {"role": "user", "content": f"User query:\n{query}"}),
47 | cast(ChatCompletionSystemMessageParam, {"role": "user", "content": f"Data:\n{result.model_dump_json()}"}),
48 | ],
49 | temperature=0,
50 | )
51 |
52 | return llm_response.choices[0].message.content.strip()
53 |
54 |
55 | def register_u_tool(
56 | server: FastMCP,
57 | open_ai_api_key_from_args: Optional[str],
58 | transport: Literal["stdio", "sse", "streamable-http"],
59 | ) -> utool_func_type:
60 | # llm_model = "gpt-4o" # Input $2.5, Output $10
61 | # llm_model = "gpt-4-turbo" # Input $10.00, Output $30
62 | llm_model = "gpt-4o-mini" # Input $0.15, Output $0.60
63 | # llm_model = "gpt-4.1-nano" # Input $0.10, Output $0.40
64 |
65 | embedding_model = "text-embedding-3-large"
66 | top_n = 30
67 |
68 | all_tools = server._tool_manager._tools
69 | server._tool_manager._tools = {} # leave only u-tool
70 |
71 | import lancedb
72 | db = lancedb.connect(LANCE_DB_ENDPOINTS_PATH)
73 | table = db.open_table("endpoints")
74 | table_df = table.to_pandas()
75 | tool_plan_map = ToolPlanMap(table_df)
76 |
77 | @server.tool(name="u-tool")
78 | async def u_tool(
79 | query: str,
80 | ctx: Context,
81 | format: Optional[str] = None,
82 | plan: Optional[str] = None,
83 | ) -> UToolResponse:
84 | o_ai_api_key_to_use, error = extract_open_ai_apikey(
85 | transport=transport,
86 | open_ai_api_key=open_ai_api_key_from_args,
87 | ctx=ctx,
88 | )
89 | if error is not None:
90 | return UToolResponse(error=error)
91 |
92 | client = openai.OpenAI(api_key=o_ai_api_key_to_use)
93 | all_candidate_ids: List[str]
94 |
95 | try:
96 | embedding = client.embeddings.create(
97 | model=embedding_model,
98 | input=[query]
99 | ).data[0].embedding
100 |
101 | results = table.search(embedding).metric("cosine").limit(top_n).to_list() # type: ignore[attr-defined]
102 | all_candidate_ids = [r["id"] for r in results]
103 | if "GetTimeSeries" not in all_candidate_ids:
104 | all_candidate_ids.append('GetTimeSeries')
105 |
106 | candidates, premium_only_candidates = tool_plan_map.split(
107 | user_plan=plan, tool_operation_ids=all_candidate_ids
108 | )
109 |
110 | except Exception as e:
111 | return UToolResponse(error=f"Embedding or vector search failed: {e}")
112 |
113 | filtered_tools = [tool for tool in all_tools.values() if tool.name in candidates] # type: ignore
114 | openai_tools = build_openai_tools_subset(filtered_tools)
115 |
116 | prompt = (
117 | "You are a function-calling assistant. Based on the user query, "
118 | "you must select the most appropriate function from the provided tools and return "
119 | "a valid tool call with all required parameters. "
120 | "Before the function call, provide a brief plain-text explanation (1–2 sentences) of "
121 | "why you chose that function, based on the user's intent and tool descriptions."
122 | )
123 |
124 | try:
125 | llm_response = client.chat.completions.create(
126 | model=llm_model,
127 | messages=[
128 | cast(ChatCompletionSystemMessageParam, {"role": "system", "content": prompt}),
129 | cast(ChatCompletionSystemMessageParam, {"role": "user", "content": query}),
130 | ],
131 | tools=openai_tools,
132 | tool_choice="required",
133 | temperature=0,
134 | )
135 |
136 | call = llm_response.choices[0].message.tool_calls[0]
137 | name = call.function.name
138 | arguments = json.loads(call.function.arguments)
139 |             # all tools expect a single "params" object with nested attributes, but the LLM sometimes flattens it
140 | if "params" not in arguments:
141 | arguments = {"params": arguments}
142 |
143 | except Exception as e:
144 | return UToolResponse(
145 | top_candidates=candidates,
146 | premium_only_candidates=premium_only_candidates,
147 | error=f"LLM did not return valid tool call: {e}",
148 | )
149 |
150 | tool = all_tools.get(name)
151 | if not tool:
152 | return UToolResponse(
153 | top_candidates=candidates,
154 | premium_only_candidates=premium_only_candidates,
155 | selected_tool=name,
156 | param=arguments,
157 | error=f"Tool '{name}' not found in MCP",
158 | )
159 |
160 | try:
161 | params_type = tool.fn_metadata.arg_model.model_fields["params"].annotation
162 | arguments['params'] = params_type(**arguments['params'])
163 | arguments['ctx'] = ctx
164 |
165 | result = await tool.fn(**arguments)
166 |
167 | if format == "md":
168 | result = get_md_response(
169 | client=client,
170 | llm_model=llm_model,
171 | query=query,
172 | result=result,
173 | )
174 |
175 | return UToolResponse(
176 | top_candidates=candidates,
177 | premium_only_candidates=premium_only_candidates,
178 | selected_tool=name,
179 | param=arguments,
180 | response=result,
181 | )
182 | except Exception as e:
183 | return UToolResponse(
184 | top_candidates=candidates,
185 | premium_only_candidates=premium_only_candidates,
186 | selected_tool=name,
187 | param=arguments,
188 | error=str(e),
189 | )
190 | u_tool.__doc__ = utool_doc_string
191 | return u_tool
192 |
193 |
194 | def register_http_utool(
195 | transport: str,
196 | server: FastMCP,
197 | u_tool,
198 | ):
199 | if transport == "streamable-http":
200 | @server.custom_route("/utool", ["GET"])
201 | async def u_tool_http(request: Request):
202 | query = request.query_params.get("query")
203 | format_param = request.query_params.get("format", default="json").lower()
204 | user_plan_param = request.query_params.get("plan", None)
205 | if not query:
206 | return JSONResponse({"error": "Missing 'query' query parameter"}, status_code=400)
207 |
208 | request_context = create_dummy_request_context(request)
209 | ctx = Context(request_context=request_context)
210 | result = await u_tool(
211 | query=query, ctx=ctx,
212 | format=format_param,
213 | plan=user_plan_param
214 | )
215 |
216 | return JSONResponse(content=result.model_dump(mode="json"))
217 |
```
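One subtle step in `u_tool` is normalizing the LLM's tool-call arguments: every registered tool expects a single `params` object, but the model sometimes returns a flattened dict. A hypothetical `normalize_arguments` helper (the real code does this inline after `json.loads`) captures the rule:

```python
import json

# Sketch of the argument normalization in u_tool: wrap a flattened
# argument dict into the {"params": ...} shape the tools expect.
def normalize_arguments(raw: str) -> dict:
    arguments = json.loads(raw)
    if "params" not in arguments:
        arguments = {"params": arguments}
    return arguments

flat = normalize_arguments('{"symbol": "AAPL", "interval": "1day"}')
nested = normalize_arguments('{"params": {"symbol": "AAPL"}}')
```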
--------------------------------------------------------------------------------
/scripts/split_openapi.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | import yaml
3 | import json
4 | import re
5 | from collections import defaultdict
6 |
7 | input_path = "../data/openapi01.06.2025.json"
8 | output_dir = "../data"
9 |
10 | mirrors = [
11 | "https://api-reference-data.twelvedata.com",
12 | "https://api-time-series.twelvedata.com",
13 | "https://api-mutual-funds.twelvedata.com",
14 | "https://api-etfs.twelvedata.com",
15 | "https://api-fundamental.twelvedata.com",
16 | "https://api-analysis.twelvedata.com",
17 | "https://api-regulator.twelvedata.com",
18 | "https://api-ti-overlap-studies.twelvedata.com",
19 | "https://api-ti-volume-indicators.twelvedata.com",
20 | "https://api-ti-price-transform.twelvedata.com",
21 | "https://api-ti-cycle-indicators.twelvedata.com",
22 | "https://api-ti-statistics-functions.twelvedata.com"
23 | ]
24 |
25 | allowed_endpoints = [
26 | # Reference Data
27 | "/stocks",
28 | "/forex_pairs",
29 | "/cryptocurrencies",
30 | "/funds",
31 | "/bonds",
32 | "/etfs",
33 | "/commodities",
34 | "/cross_listings",
35 | "/exchanges",
36 | "/exchange_schedule",
37 | "/cryptocurrency_exchanges",
38 | "/market_state",
39 | "/instrument_type",
40 | "/countries",
41 | "/earliest_timestamp",
42 | "/symbol_search",
43 | "/intervals",
44 |
45 | # Core Data
46 | "/time_series",
47 | "/time_series/cross",
48 | "/exchange_rate",
49 | "/currency_conversion",
50 | "/quote",
51 | "/price",
52 | "/eod",
53 | "/market_movers/{market}",
54 |
55 | # Mutual Funds
56 | "/mutual_funds/list",
57 | "/mutual_funds/family",
58 | "/mutual_funds/type",
59 | "/mutual_funds/world",
60 | "/mutual_funds/world/summary",
61 | "/mutual_funds/world/performance",
62 | "/mutual_funds/world/risk",
63 | "/mutual_funds/world/ratings",
64 | "/mutual_funds/world/composition",
65 | "/mutual_funds/world/purchase_info",
66 | "/mutual_funds/world/sustainability",
67 |
68 | # ETFs
69 | "/etfs/list",
70 | "/etfs/family",
71 | "/etfs/type",
72 | "/etfs/world",
73 | "/etfs/world/summary",
74 | "/etfs/world/performance",
75 | "/etfs/world/risk",
76 | "/etfs/world/composition",
77 |
78 | # Fundamentals
79 | "/balance_sheet",
80 | "/balance_sheet/consolidated",
81 | "/cash_flow",
82 | "/cash_flow/consolidated",
83 | "/dividends",
84 | "/dividends_calendar",
85 | "/earnings",
86 | "/income_statement",
87 | "/income_statement/consolidated",
88 | "/ipo_calendar",
89 | "/key_executives",
90 | "/last_change/{endpoint}",
91 | "/logo",
92 | "/market_cap",
93 | "/profile",
94 | "/splits",
95 | "/splits_calendar",
96 | "/statistics",
97 |
98 | # Analysis
99 | "/analyst_ratings/light",
100 | "/analyst_ratings/us_equities",
101 | "/earnings_estimate",
102 | "/revenue_estimate",
103 | "/eps_trend",
104 | "/eps_revisions",
105 | "/growth_estimates",
106 | "/price_target",
107 | "/recommendations",
108 | "/earnings_calendar",
109 |
110 | # Regulatory
111 | "/tax_info",
112 | "/edgar_filings/archive",
113 | "/insider_transactions",
114 | "/direct_holders",
115 | "/fund_holders",
116 | "/institutional_holders",
117 | "/sanctions/{source}",
118 | ]
119 |
120 | added_endpoints = []
121 |
122 |
123 | def load_spec(path):
124 | with open(path, 'r', encoding='utf-8') as f:
125 | if path.lower().endswith(('.yaml', '.yml')):
126 | return yaml.safe_load(f)
127 | return json.load(f)
128 |
129 |
130 | def dump_spec(spec, path):
131 | with open(path, 'w', encoding='utf-8') as f:
132 | if path.lower().endswith(('.yaml', '.yml')):
133 | yaml.safe_dump(spec, f, sort_keys=False, allow_unicode=True)
134 | else:
135 | json.dump(spec, f, ensure_ascii=False, indent=2)
136 |
137 |
138 | def find_refs(obj):
139 | refs = set()
140 | if isinstance(obj, dict):
141 | for k, v in obj.items():
142 | if k == '$ref' and isinstance(v, str):
143 | refs.add(v)
144 | else:
145 | refs |= find_refs(v)
146 | elif isinstance(obj, list):
147 | for item in obj:
148 | refs |= find_refs(item)
149 | return refs
150 |
151 |
152 | def split_paths(keys, chunk_size=25):
153 | for i in range(0, len(keys), chunk_size):
154 | yield keys[i:i + chunk_size]
155 |
156 |
157 | def filter_paths(all_paths, allowed_list):
158 | """
159 | Returns a list of only those paths from all_paths that are present in allowed_list.
160 | Additionally, prints paths that are in allowed_list but not found in all_paths.
161 | """
162 | f_paths = [path for path in all_paths if path in allowed_list]
163 | missing_paths = [path for path in allowed_list if path not in f_paths]
164 | if missing_paths:
165 | print("Paths in allowed_list but not found in all_paths:", missing_paths)
166 | return f_paths
167 |
168 |
169 | def prune_components(full_components, used_refs):
170 | pattern = re.compile(r'^#/components/([^/]+)/(.+)$')
171 | used = defaultdict(set)
172 |
173 | # Mark direct references
174 | for ref in used_refs:
175 | m = pattern.match(ref)
176 | if m:
177 | comp_type, comp_name = m.group(1), m.group(2)
178 | used[comp_type].add(comp_name)
179 |
180 | # Recursively include nested references
181 | changed = True
182 | while changed:
183 | changed = False
184 | for comp_type, names in list(used.items()):
185 | for name in list(names):
186 | definition = full_components.get(comp_type, {}).get(name)
187 | if definition:
188 | for r in find_refs(definition):
189 | m2 = pattern.match(r)
190 | if m2:
191 | ct, cn = m2.group(1), m2.group(2)
192 | if cn not in used[ct]:
193 | used[ct].add(cn)
194 | changed = True
195 |
196 | # Assemble a limited set
197 | pruned = {}
198 | for comp_type, defs in full_components.items():
199 | if comp_type in used:
200 | kept = {n: defs[n] for n in defs if n in used[comp_type]}
201 | if kept:
202 | pruned[comp_type] = kept
203 | return pruned
204 |
205 |
206 | def trim_fields(obj):
207 | """
208 | Recursively trims string fields:
209 | - 'description' to 300 characters
210 | - 'example' to 700 characters
211 | """
212 | if isinstance(obj, dict):
213 | for k, v in obj.items():
214 | if k == "description" and isinstance(v, str):
215 | if len(v) > 300:
216 | obj[k] = v[:300]
217 | elif k == "example" and isinstance(v, str):
218 | if len(v) > 700:
219 | obj[k] = v[:700]
220 | else:
221 | trim_fields(v)
222 | elif isinstance(obj, list):
223 | for item in obj:
224 | trim_fields(item)
225 |
226 |
227 | def add_empty_properties(obj):
228 | """
229 | Recursively searches for all occurrences of the 'schema' key.
230 | If its value is a dict and this dict does not have 'properties',
231 | adds 'properties': {}.
232 | """
233 | if isinstance(obj, dict):
234 | for k, v in obj.items():
235 | if k == "schema" and isinstance(v, dict):
236 | if "properties" not in v:
237 | v["properties"] = {}
238 | # Continue deep traversal within the found schema
239 | add_empty_properties(v)
240 | else:
241 | add_empty_properties(v)
242 | elif isinstance(obj, list):
243 | for item in obj:
244 | add_empty_properties(item)
245 |
246 |
247 | def main():
248 | os.makedirs(output_dir, exist_ok=True)
249 |
250 | spec = load_spec(input_path)
251 | all_paths = list(spec.get('paths', {}).keys())
252 | f_paths = filter_paths(all_paths, allowed_endpoints)
253 | chunks = list(split_paths(f_paths))
254 |
255 | for idx, paths_chunk in enumerate(chunks, start=1):
256 | # Build a new part of the specification
257 | new_spec = {
258 | 'openapi': spec.get('openapi'),
259 | 'info': spec.get('info'),
260 | 'servers': (
261 | [{'url': mirrors[idx - 1]}]
262 | if idx - 1 < len(mirrors)
263 | else spec.get('servers', [])
264 | ),
265 | 'paths': {k: spec['paths'][k] for k in paths_chunk}
266 | }
267 |
268 | # Trim long fields and add missing 'properties'
269 | add_empty_properties(new_spec)
270 | trim_fields(new_spec)
271 |
272 | # Prune components
273 | used_refs = find_refs(new_spec['paths'])
274 | pruned = prune_components(spec.get('components', {}), used_refs)
275 | if pruned:
276 | new_spec['components'] = pruned
277 |
278 | # Calculate metrics and save the file
279 | new_paths_count = len(new_spec['paths'])
280 | new_components_count = sum(
281 | len(v) for v in new_spec.get('components', {}).values()
282 | )
283 | out_file = os.path.join(
284 | output_dir,
285 | f"{os.path.splitext(os.path.basename(input_path))[0]}"
286 | f"_part{idx}{os.path.splitext(input_path)[1]}"
287 | )
288 | dump_spec(new_spec, out_file)
289 | print(
290 | f"Part {idx}: {new_paths_count} paths, "
291 | f"{new_components_count} components -> {out_file}"
292 | )
293 |
294 |
295 | if __name__ == "__main__":
296 | main()
297 |
```
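The component-pruning step above hinges on `find_refs` collecting every `$ref` string in a chunk's paths before `prune_components` chases them transitively. A minimal standalone sketch (same recursive walk as in the script, run against a small hypothetical spec fragment) illustrates what it collects:

```python
def find_refs(obj):
    # Recursively gather every "$ref" string from nested dicts and lists.
    refs = set()
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == '$ref' and isinstance(v, str):
                refs.add(v)
            else:
                refs |= find_refs(v)
    elif isinstance(obj, list):
        for item in obj:
            refs |= find_refs(item)
    return refs


# Hypothetical fragment shaped like an OpenAPI operation.
fragment = {
    "responses": {"200": {"content": {"application/json": {
        "schema": {"$ref": "#/components/schemas/Quote"}}}}},
    "parameters": [{"schema": {"$ref": "#/components/schemas/Interval"}}],
}
print(sorted(find_refs(fragment)))
# → ['#/components/schemas/Interval', '#/components/schemas/Quote']
```

`prune_components` then keeps only those schemas (plus anything they reference in turn), which is what keeps each 25-path part small enough to ship on its own.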
--------------------------------------------------------------------------------
/scripts/generate_requests_models.py:
--------------------------------------------------------------------------------
```python
1 | import json
2 | from pathlib import Path
3 | import keyword
4 | from typing import Any, List, Optional
5 |
6 | OPENAPI_PATH = "../extra/openapi_clean.json"
7 | REQUESTS_FILE = "../data/request_models.py"
8 |
9 | PRIMITIVES = {
10 | "string": "str",
11 | "integer": "int",
12 | "number": "float",
13 | "boolean": "bool",
14 | "object": "dict",
15 | "array": "list",
16 | }
17 |
18 |
19 | def canonical_class_name(opid: str, suffix: str) -> str:
20 | if not opid:
21 | return ""
22 | return opid[0].upper() + opid[1:] + suffix
23 |
24 |
25 | def safe_field_name(name: str) -> str:
26 | # Append underscore if name is a Python keyword
27 | if keyword.iskeyword(name):
28 | return name + "_"
29 | return name
30 |
31 |
32 | def python_type(schema: dict, components: dict) -> str:
33 | # Resolve $ref to the corresponding model class name
34 | if "$ref" in schema:
35 | ref_name = schema["$ref"].split("/")[-1]
36 | return canonical_class_name(ref_name, "")
37 | # Handle allOf by delegating to the first subschema
38 | if "allOf" in schema:
39 |         # Only the first subschema determines the Python type
40 |         return python_type(schema["allOf"][0], components)
41 | t = schema.get("type", "string")
42 | if t == "array":
43 | # Construct type for lists recursively
44 | return f"list[{python_type(schema.get('items', {}), components)}]"
45 | return PRIMITIVES.get(t, "Any")
46 |
47 |
48 | def resolve_schema(schema: dict, components: dict) -> dict:
49 | # Fully resolve $ref and allOf compositions into a merged schema
50 | if "$ref" in schema:
51 | ref = schema["$ref"].split("/")[-1]
52 | return resolve_schema(components.get(ref, {}), components)
53 | if "allOf" in schema:
54 | merged = {"properties": {}, "required": [], "description": ""}
55 | for subschema in schema["allOf"]:
56 | sub = resolve_schema(subschema, components)
57 | merged["properties"].update(sub.get("properties", {}))
58 | merged["required"].extend(sub.get("required", []))
59 | if sub.get("description"):
60 | merged["description"] += sub["description"] + "\n"
61 | merged["required"] = list(set(merged["required"]))
62 | merged["description"] = merged["description"].strip() or None
63 | return merged
64 | return schema
65 |
66 |
67 | def collect_examples(param: dict, sch: dict) -> List[Any]:
68 | # Collect all examples from parameter, schema, and enums without deduplication
69 | examples: List[Any] = []
70 | if "example" in param:
71 | examples.append(param["example"])
72 | if "examples" in param:
73 | exs = param["examples"]
74 | if isinstance(exs, dict):
75 | for v in exs.values():
76 | examples.append(v["value"] if isinstance(v, dict) and "value" in v else v)
77 | elif isinstance(exs, list):
78 | examples.extend(exs)
79 | if "example" in sch:
80 | examples.append(sch["example"])
81 | if "examples" in sch:
82 | exs = sch["examples"]
83 | if isinstance(exs, dict):
84 | for v in exs.values():
85 | examples.append(v["value"] if isinstance(v, dict) and "value" in v else v)
86 | elif isinstance(exs, list):
87 | examples.extend(exs)
88 | # Include enum values as examples if present
89 | if "enum" in sch and isinstance(sch["enum"], list):
90 | examples.extend(sch["enum"])
91 | return [e for e in examples if e is not None]
92 |
93 |
94 | def gen_field(name: str, typ: str, required: bool, desc: Optional[str],
95 | examples: List[Any], default: Any) -> str:
96 | name = safe_field_name(name)
97 | # Wrap in Optional[...] if default is None and field is not required
98 | if default is None and not required:
99 | typ = f"Optional[{typ}]"
100 | args: List[str] = []
101 | if required:
102 | args.append("...")
103 | else:
104 | args.append(f"default={repr(default)}")
105 | if desc:
106 | args.append(f"description={repr(desc)}")
107 | if examples:
108 | args.append(f"examples={repr(examples)}")
109 | return f" {name}: {typ} = Field({', '.join(args)})"
110 |
111 |
112 | def gen_class(name: str, props: dict, desc: Optional[str]) -> str:
113 | lines = [f"class {name}(BaseModel):"]
114 | if desc:
115 | # Add class docstring if description is present
116 | lines.append(f' """{desc.replace(chr(34)*3, "")}"""')
117 | if not props:
118 | lines.append(" pass")
119 | else:
120 | for pname, fdict in props.items():
121 | lines.append(gen_field(
122 | pname,
123 | fdict["type"],
124 | fdict["required"],
125 | fdict["description"],
126 | fdict["examples"],
127 | fdict["default"]
128 | ))
129 | return "\n".join(lines)
130 |
131 |
132 | def main():
133 | # Load the OpenAPI specification
134 | with open(OPENAPI_PATH, "r", encoding="utf-8") as f:
135 | spec = json.load(f)
136 |
137 | components = spec.get("components", {}).get("schemas", {})
138 | request_models: List[str] = []
139 | request_names: set = set()
140 |
141 | for path, methods in spec.get("paths", {}).items():
142 | for http_method, op in methods.items():
143 | opid = op.get("operationId")
144 | if not opid:
145 | continue
146 | class_name = canonical_class_name(opid, "Request")
147 |
148 | # Collect parameters from path, query, header, etc.
149 | props: dict = {}
150 | for param in op.get("parameters", []):
151 | name = param["name"]
152 | sch = param.get("schema", {"type": "string"})
153 | typ = python_type(sch, components)
154 | required = param.get("required", False)
155 | desc = param.get("description") or sch.get("description")
156 | examples = collect_examples(param, sch)
157 | default = sch.get("default", None)
158 | props[name] = {
159 | "type": typ,
160 | "required": required,
161 | "description": desc,
162 | "examples": examples,
163 | "default": default,
164 | }
165 |
166 | # Collect JSON body properties
167 | body = op.get("requestBody", {}) \
168 | .get("content", {}) \
169 | .get("application/json", {}) \
170 | .get("schema")
171 | if body:
172 | body_sch = resolve_schema(body, components)
173 | for name, sch in body_sch.get("properties", {}).items():
174 | typ = python_type(sch, components)
175 | required = name in body_sch.get("required", [])
176 | desc = sch.get("description")
177 | examples = collect_examples({}, sch)
178 | default = sch.get("default", None)
179 | props[name] = {
180 | "type": typ,
181 | "required": required,
182 | "description": desc,
183 | "examples": examples,
184 | "default": default,
185 | }
186 |
187 | if "outputsize" not in props:
188 | props["outputsize"] = {
189 | "type": "int",
190 | "required": False,
191 | "description": (
192 | "Number of data points to retrieve. Supports values in the range from `1` to `5000`. "
193 | "Default `10` when no date parameters are set, otherwise set to maximum"
194 | ),
195 | "examples": [10],
196 | "default": 10,
197 | }
198 | else:
199 | props["outputsize"]["default"] = 10
200 |             props["outputsize"]["description"] = (
201 |                 props["outputsize"]["description"] or ""
202 |             ).replace('Default `30`', 'Default `10`')
203 | props["outputsize"]["examples"] = [10]
204 |
205 | # Add apikey with default="demo"
206 | props["apikey"] = {
207 | "type": "str",
208 | "required": False,
209 | "description": "API key",
210 | "examples": ["demo"],
211 | "default": "demo",
212 | }
213 |
214 | if "interval" in props:
215 | props["interval"]["required"] = False
216 | props["interval"]["default"] = "1day"
217 |
218 | # Append plan availability to the description if x-starting-plan is present
219 | starting_plan = op.get("x-starting-plan")
220 | description = op.get("description", "")
221 | if starting_plan:
222 | addon = f" Available starting from the `{starting_plan}` plan."
223 | description = (description or "") + addon
224 |
225 | code = gen_class(class_name, props, description)
226 |
227 | if class_name not in request_names:
228 | request_models.append(code)
229 | request_names.add(class_name)
230 |
231 | # Write all generated models to the target file
232 | header = (
233 | "from pydantic import BaseModel, Field\n"
234 | "from typing import Any, List, Optional\n\n"
235 | )
236 | Path(REQUESTS_FILE).write_text(header + "\n\n".join(request_models), encoding="utf-8")
237 | print(f"Generated request models: {REQUESTS_FILE}")
238 |
239 |
240 | if __name__ == "__main__":
241 | main()
242 |
```
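The generated class and field names come from the two small helpers at the top of the script: `canonical_class_name` upper-cases the operationId's first letter and appends a suffix, while `safe_field_name` escapes Python keywords with a trailing underscore. A quick standalone sketch (same logic, hypothetical inputs) shows both:

```python
import keyword


def canonical_class_name(opid: str, suffix: str) -> str:
    # "getTimeSeries" + "Request" -> "GetTimeSeriesRequest"
    if not opid:
        return ""
    return opid[0].upper() + opid[1:] + suffix


def safe_field_name(name: str) -> str:
    # Reserved words like "from" become "from_" so the model still compiles.
    return name + "_" if keyword.iskeyword(name) else name


print(canonical_class_name("getTimeSeries", "Request"))  # → GetTimeSeriesRequest
print(safe_field_name("from"))                           # → from_
print(safe_field_name("symbol"))                         # → symbol
```

Keyword escaping matters here because OpenAPI query parameters such as `from` are legal JSON names but invalid Python identifiers.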
--------------------------------------------------------------------------------
/test/endpoint_pairs.py:
--------------------------------------------------------------------------------
```python
1 | pairs_ = [
2 | # ('Show me batches for AAPL.', 'advanced'), # skipped-error,
3 | ('Tell me the last update time for Apple’s income statement?', 'GetLastChanges'),
4 | ]
5 |
6 | pairs = [
7 | ('Give me the accumulation/distribution indicator for AAPL.', 'GetTimeSeriesAd'),
8 | ('Show me add for AAPL.', 'GetTimeSeriesAdd'),
9 | ('Show me adosc for AAPL.', 'GetTimeSeriesAdOsc'),
10 | ('Show me adx for AAPL.', 'GetTimeSeriesAdx'),
11 | ("Show me the Average Directional Movement Index Rating (ADXR) time series for AAPL.", "GetTimeSeriesAdxr"),
12 | ('Show me analyst ratings - light for AAPL.', 'GetAnalystRatingsLight'),
13 | ('Show me analyst ratings - us equities for AAPL.', 'GetAnalystRatingsUsEquities'),
14 |     ('How many API requests have I made in the last minute?', 'GetApiUsage'),
15 | ('Show me apo for AAPL.', 'GetTimeSeriesApo'),
16 | ('Show me aroon for AAPL.', 'GetTimeSeriesAroon'),
17 |
18 | ('Show me aroonosc for AAPL.', 'GetTimeSeriesAroonOsc'),
19 | ('Show me atr for AAPL.', 'GetTimeSeriesAtr'),
20 | ('Show me avg for AAPL.', 'GetTimeSeriesAvg'),
21 | ('Show me avgprice for AAPL.', 'GetTimeSeriesAvgPrice'),
22 | ('Show me balance sheet for AAPL.', 'GetBalanceSheet'),
23 | ('Show me balance sheet consolidated for AAPL.', 'GetBalanceSheetConsolidated'),
24 | # ('Show me batches for AAPL.', 'advanced'), # skipped-error,
25 | ('Show me bbands for AAPL.', 'GetTimeSeriesBBands'),
26 | ('Show me beta for AAPL.', 'GetTimeSeriesBeta'),
27 | ('Show me bonds list for AAPL.', 'GetBonds'),
28 |
29 | ('Show me bop for AAPL.', 'GetTimeSeriesBop'),
30 | ('Show me cash flow for AAPL.', 'GetCashFlow'),
31 | ('Show me cash flow consolidated for AAPL.', 'GetCashFlowConsolidated'),
32 | ('Show me cci for AAPL.', 'GetTimeSeriesCci'),
33 | ('Show me ceil for AAPL.', 'GetTimeSeriesCeil'),
34 | ('Show me cmo for AAPL.', 'GetTimeSeriesCmo'),
35 | ('Show me commodities list for AAPL.', 'GetCommodities'),
36 | ('Show me coppock for AAPL.', 'GetTimeSeriesCoppock'),
37 | ('Show me correl for AAPL.', 'GetTimeSeriesCorrel'),
38 | ('Show me countries list for AAPL.', 'GetCountries'),
39 |
40 | ('Show me cross listings for AAPL.', 'GetCrossListings'),
41 | ('Show me crsi for AAPL.', 'GetTimeSeriesCrsi'),
42 | ('Show me cryptocurrencies list for BTC/USD.', 'GetCryptocurrencies'),
43 | ('Show me cryptocurrency exchanges', 'GetCryptocurrencyExchanges'),
44 | ('Show me currency conversion for EUR/USD.', 'GetCurrencyConversion'),
45 | ('Show me dema for AAPL.', 'GetTimeSeriesDema'),
46 | ('Show me direct holders for AAPL.', 'GetDirectHolders'),
47 | ('Calculate DIV indicator for AAPL.', 'GetTimeSeriesDiv'),
48 | ('Show me dividends for AAPL.', 'GetDividends'),
49 | ('Show me dividends calendar for AAPL.', 'GetDividendsCalendar'),
50 |
51 | ('Show me dpo for AAPL.', 'GetTimeSeriesDpo'),
52 | ('Show me dx for AAPL.', 'GetTimeSeriesDx'),
53 | ('Show me earliest timestamp for AAPL.', 'GetEarliestTimestamp'),
54 | ('Show me earnings for AAPL.', 'GetEarnings'),
55 | ('Show me earnings calendar for China for 2024 year.', 'GetEarningsCalendar'),
56 | ('Show me earnings estimate for AAPL.', 'GetEarningsEstimate'),
57 | ('Show me edgar filings archive for AAPL.', 'GetEdgarFilingsArchive'),
58 | ('Show me ema for AAPL.', 'GetTimeSeriesEma'),
59 | ('Show me end of day price for AAPL.', 'GetEod'),
60 | ('Show me eps revisions for AAPL.', 'GetEpsRevisions'),
61 |
62 | ('Show me eps trend for AAPL.', 'GetEpsTrend'),
63 | ('Show me ETFs for SPY on NYSE.', 'GetEtf'),
64 | ('Show me ETFs in the same family as SPY.', 'GetETFsFamily'),
65 | ("Show me the full list of bond-type exchange-traded funds issued by BlackRock investment company.", "GetETFsList"),
66 | ('Show me ETF types available in the United States.', 'GetETFsType'),
67 | ("Give me a complete ETF analysis report for IVV, with all metrics like performance,"
68 | " summary, volatility, sector weights and country allocations.", "GetETFsWorld"),
69 | ("Show me the portfolio composition of ETF IVV.", "GetETFsWorldComposition"),
70 | ("Show me performance for the iShares Core S&P 500 ETF.", "GetETFsWorldPerformance"),
71 | ("Show me the risk metrics for the iShares Core S&P 500 ETF.", "GetETFsWorldRisk"),
72 | ('Show me a summary for the SPY ETF.', 'GetETFsWorldSummary'),
73 |
74 | ('Show me the exchange rate from USD to EUR.', 'GetExchangeRate'),
75 | ('Show me exchange schedule for AAPL.', 'GetExchangeSchedule'),
76 | ('Show me the list of available exchanges.', 'GetExchanges'),
77 | ('Show me exp for AAPL.', 'GetTimeSeriesExp'),
78 | ('Show me floor for AAPL.', 'GetTimeSeriesFloor'),
79 | ('Show me all available forex trading pairs.', 'GetForexPairs'),
80 | ('Show me fund holders for AAPL.', 'GetFundHolders'),
81 | ('Show me funds list for AAPL.', 'GetFunds'),
82 | ('Show me growth estimates for AAPL.', 'GetGrowthEstimates'),
83 | ('Show me heikinashicandles for AAPL.', 'GetTimeSeriesHeikinashiCandles'),
84 |
85 | ('Show me hlc3 for AAPL.', 'GetTimeSeriesHlc3'),
86 | ('Show me ht_dcperiod for AAPL.', 'GetTimeSeriesHtDcPeriod'),
87 | ('Show me ht_dcphase for AAPL.', 'GetTimeSeriesHtDcPhase'),
88 | ('Show me ht_phasor for AAPL.', 'GetTimeSeriesHtPhasor'),
89 | ('Show me ht_sine for AAPL.', 'GetTimeSeriesHtSine'),
90 | ('Show me ht_trendline for AAPL.', 'GetTimeSeriesHtTrendline'),
91 | ('Show me ht_trendmode for AAPL.', 'GetTimeSeriesHtTrendMode'),
92 | ('Show me ichimoku for AAPL.', 'GetTimeSeriesIchimoku'),
93 | ('Show me income statement for AAPL.', 'GetIncomeStatement'),
94 | ('Show me income statement consolidated for AAPL.', 'GetIncomeStatementConsolidated'),
95 |
96 | ('Show me insider transactions for AAPL.', 'GetInsiderTransactions'),
97 | ('Show me institutional holders for AAPL.', 'GetInstitutionalHolders'),
98 | ('What types of instruments are available through the API?', 'GetInstrumentType'),
99 | ('Show me the list of available time intervals.', 'GetIntervals'),
100 | ('Show me the IPO calendar for upcoming companies.', 'GetIpoCalendar'),
101 | ('Show me kama for AAPL.', 'GetTimeSeriesKama'),
102 | ('Show me keltner for AAPL.', 'GetTimeSeriesKeltner'),
103 | ('Show me key executives for AAPL.', 'GetKeyExecutives'),
104 | ('Show me kst for AAPL.', 'GetTimeSeriesKst'),
105 | ('Tell me the last update time for Apple’s income statement?', 'GetLastChanges'),
106 |
107 | ('Show me linearreg for AAPL.', 'GetTimeSeriesLinearReg'),
108 | ('Show me linearregangle for AAPL.', 'GetTimeSeriesLinearRegAngle'),
109 | ('Show me linearregintercept for AAPL.', 'GetTimeSeriesLinearRegIntercept'),
110 | ('Show me linearregslope for AAPL.', 'GetTimeSeriesLinearRegSlope'),
111 | ('Show me ln for AAPL.', 'GetTimeSeriesLn'),
112 | ('Show me log10 for AAPL.', 'GetTimeSeriesLog10'),
113 | ('Show me logo for AAPL.', 'GetLogo'),
114 | ('Show me ma for AAPL.', 'GetTimeSeriesMa'),
115 | ('Show me macd for AAPL.', 'GetTimeSeriesMacd'),
116 | ('Show me macd slope for AAPL.', 'GetTimeSeriesMacdSlope'),
117 |
118 | ('Show me macdext for AAPL.', 'GetTimeSeriesMacdExt'),
119 | ('Show me mama for AAPL.', 'GetTimeSeriesMama'),
120 | ('Show me market capitalization for AAPL.', 'GetMarketCap'),
121 | ("Show me the top market movers in the US stock market.", "GetMarketMovers"),
122 | ("Is the NASDAQ market currently open?", "GetMarketState"),
123 | ('Show me max for AAPL.', 'GetTimeSeriesMax'),
124 | ('Show me maxindex for AAPL.', 'GetTimeSeriesMaxIndex'),
125 | ('Show me mcginley_dynamic for AAPL.', 'GetTimeSeriesMcGinleyDynamic'),
126 | ('Show me medprice for AAPL.', 'GetTimeSeriesMedPrice'),
127 |
128 | ('Show me mfi for AAPL.', 'GetTimeSeriesMfi'),
129 | ("Show me the MIDPOINT indicator for AAPL", 'GetTimeSeriesMidPoint'),
130 | ('Show me midprice for AAPL.', 'GetTimeSeriesMidPrice'),
131 | ('Show me min for AAPL.', 'GetTimeSeriesMin'),
132 | ('Show me minimal price index for AAPL.', 'GetTimeSeriesMinIndex'),
133 | ('Show me minmax for AAPL.', 'GetTimeSeriesMinMax'),
134 | ('Show me minmaxindex for AAPL.', 'GetTimeSeriesMinMaxIndex'),
135 | ('Show me minus_di for AAPL.', 'GetTimeSeriesMinusDI'),
136 | ('Show me minus_dm for AAPL.', 'GetTimeSeriesMinusDM'),
137 | ('Show me mom for AAPL.', 'GetTimeSeriesMom'),
138 |
139 | ('Show me mult for AAPL.', 'GetTimeSeriesMult'),
140 |     ('Show me mutual funds family list.', 'GetMutualFundsFamily'),
141 |     ('Show me mutual funds list.', 'GetMutualFundsList'),
142 |     ('Show me mutual funds type list.', 'GetMutualFundsType'),
143 | ('Show me all data for mutual fund VTSMX.', 'GetMutualFundsWorld'),
144 | ('Show me composition for mutual fund VTSMX.', 'GetMutualFundsWorldComposition'),
145 | ('Show me performance for mutual fund VTSMX.', 'GetMutualFundsWorldPerformance'),
146 | ('Show me purchase info for mutual fund VTSMX.', 'GetMutualFundsWorldPurchaseInfo'),
147 | ('Show me ratings for mutual fund VTSMX.', 'GetMutualFundsWorldRatings'),
148 | ('Show me risk for mutual fund VTSMX.', 'GetMutualFundsWorldRisk'),
149 |
150 | ('Show me summary for mutual fund VTSMX.', 'GetMutualFundsWorldSummary'),
151 | ('Show me sustainability for mutual fund VTSMX.', 'GetMutualFundsWorldSustainability'),
152 | ('Show me natr indicator for AAPL.', 'GetTimeSeriesNatr'),
153 | ('Show me obv indicator for AAPL.', 'GetTimeSeriesObv'),
154 | ('Show me percent B indicator for AAPL.', 'GetTimeSeriesPercent_B'),
155 | ('Show me pivot points HL for AAPL.', 'GetTimeSeriesPivotPointsHL'),
156 | ('Show me plus DI indicator for AAPL.', 'GetTimeSeriesPlusDI'),
157 | ('Show me plus DM indicator for AAPL.', 'GetTimeSeriesPlusDM'),
158 | ('Show me PPO indicator for AAPL.', 'GetTimeSeriesPpo'),
159 | ('Show me real-time price for AAPL.', 'GetPrice'),
160 |
161 | ('Show me price target for AAPL.', 'GetPriceTarget'),
162 | ('Show me company profile for AAPL.', 'GetProfile'),
163 | ('Show me real-time quote for AAPL.', 'GetQuote'),
164 | ('Show me analyst recommendations for AAPL.', 'GetRecommendations'),
165 | ('Show me revenue estimate for AAPL.', 'GetRevenueEstimate'),
166 | ('Show me ROC indicator for AAPL.', 'GetTimeSeriesRoc'),
167 | ('Show me ROCP indicator for AAPL.', 'GetTimeSeriesRocp'),
168 | ('Show me ROCR indicator for AAPL.', 'GetTimeSeriesRocr'),
169 | ('Show me ROCR100 indicator for AAPL.', 'GetTimeSeriesRocr100'),
170 | ('Show me RSI indicator for AAPL.', 'GetTimeSeriesRsi'),
171 |
172 | ('Show me RVOL indicator for AAPL.', 'GetTimeSeriesRvol'),
173 | ('List all entities sanctioned by OFAC.', 'GetSourceSanctionedEntities'),
174 | ('Show me SAR indicator for AAPL.', 'GetTimeSeriesSar'),
175 | ('Show me extended SAR indicator for AAPL.', 'GetTimeSeriesSarExt'),
176 | ('Show me SMA indicator for AAPL.', 'GetTimeSeriesSma'),
177 | ('Show me stock splits for AAPL.', 'GetSplits'),
178 | ('Show me splits calendar for AAPL.', 'GetSplitsCalendar'),
179 | ('Show me SQRT indicator for AAPL.', 'GetTimeSeriesSqrt'),
180 | ('Show me statistics for AAPL.', 'GetStatistics'),
181 | ('Show me standard deviation for AAPL.', 'GetTimeSeriesStdDev'),
182 |
183 | ('Show me stoch for AAPL.', 'GetTimeSeriesStoch'),
184 | ('Show me stochf for AAPL.', 'GetTimeSeriesStochF'),
185 | ('Show me stochrsi for AAPL.', 'GetTimeSeriesStochRsi'),
186 | ('Show me stocks list for AAPL.', 'GetStocks'),
187 | ('Show me sub for AAPL.', 'GetTimeSeriesSub'),
188 | ('Show me sum for AAPL.', 'GetTimeSeriesSum'),
189 | ('Show me supertrend for AAPL.', 'GetTimeSeriesSuperTrend'),
190 | ('Show me supertrend heikinashicandles for AAPL.', 'GetTimeSeriesSuperTrendHeikinAshiCandles'),
191 | ('Show me symbol search for AAPL.', 'GetSymbolSearch'),
192 | ('Show me t3ma for AAPL.', 'GetTimeSeriesT3ma'),
193 |
194 | ('Show me tax information for AAPL.', 'GetTaxInfo'),
195 | ('Show me technical indicators interface for AAPL.', 'GetTechnicalIndicators'),
196 | ('Show me tema for AAPL.', 'GetTimeSeriesTema'),
197 | ('Show me time series for AAPL.', 'GetTimeSeries'),
198 | ('Get cross rate time series for USD/BTC', 'GetTimeSeriesCross'),
199 | ('Show me trange for AAPL.', 'GetTimeSeriesTRange'),
200 | ('Show me trima for AAPL.', 'GetTimeSeriesTrima'),
201 | ('Show me tsf for AAPL.', 'GetTimeSeriesTsf'),
202 | ('Show me typprice for AAPL.', 'GetTimeSeriesTypPrice'),
203 | ('Show me ultosc for AAPL.', 'GetTimeSeriesUltOsc'),
204 |
205 | ('Show me var for AAPL.', 'GetTimeSeriesVar'),
206 | ('Show me vwap for AAPL.', 'GetTimeSeriesVwap'),
207 | ('Show me wclprice for AAPL.', 'GetTimeSeriesWclPrice'),
208 | ('Show me willr for AAPL.', 'GetTimeSeriesWillR'),
209 | ('Show me wma for AAPL.', 'GetTimeSeriesWma'),
210 | ]
211 |
```