This is page 1 of 2. Use http://codebase.md/ai-zerolab/mcp-toolbox?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .github
│   ├── actions
│   │   └── setup-python-env
│   │       └── action.yml
│   └── workflows
│       ├── main.yml
│       ├── on-release-main.yml
│       └── validate-codecov-config.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .vscode
│   └── settings.json
├── codecov.yaml
├── CONTRIBUTING.md
├── Dockerfile
├── docs
│   ├── index.md
│   └── modules.md
├── generate_config_template.py
├── LICENSE
├── llms.txt
├── Makefile
├── mcp_toolbox
│   ├── __init__.py
│   ├── app.py
│   ├── audio
│   │   ├── __init__.py
│   │   └── tools.py
│   ├── cli.py
│   ├── command_line
│   │   ├── __init__.py
│   │   └── tools.py
│   ├── config.py
│   ├── enhance
│   │   ├── __init__.py
│   │   ├── memory.py
│   │   └── tools.py
│   ├── figma
│   │   ├── __init__.py
│   │   └── tools.py
│   ├── file_ops
│   │   ├── __init__.py
│   │   └── tools.py
│   ├── flux
│   │   ├── __init__.py
│   │   ├── api.py
│   │   └── tools.py
│   ├── log.py
│   ├── markitdown
│   │   ├── __init__.py
│   │   └── tools.py
│   ├── web
│   │   ├── __init__.py
│   │   └── tools.py
│   └── xiaoyuzhoufm
│       ├── __init__.py
│       └── tools.py
├── mkdocs.yml
├── pyproject.toml
├── pytest.ini
├── README.md
├── smithery.yaml
├── tests
│   ├── audio
│   │   └── test_audio_tools.py
│   ├── command_line
│   │   └── test_command_line_tools.py
│   ├── enhance
│   │   ├── test_enhance_tools.py
│   │   └── test_memory.py
│   ├── figma
│   │   └── test_figma_tools.py
│   ├── file_ops
│   │   └── test_file_ops_tools.py
│   ├── flux
│   │   └── test_flux_tools.py
│   ├── markitdown
│   │   └── test_markitdown_tools.py
│   ├── mock
│   │   └── figma
│   │       ├── delete_comment.json
│   │       ├── get_comments.json
│   │       ├── get_component.json
│   │       ├── get_file_components.json
│   │       ├── get_file_nodes.json
│   │       ├── get_file_styles.json
│   │       ├── get_file.json
│   │       ├── get_image_fills.json
│   │       ├── get_image.json
│   │       ├── get_project_files.json
│   │       ├── get_style.json
│   │       ├── get_team_component_sets.json
│   │       ├── get_team_components.json
│   │       ├── get_team_projects.json
│   │       ├── get_team_styles.json
│   │       └── post_comment.json
│   ├── web
│   │   └── test_web_tools.py
│   └── xiaoyuzhoufm
│       └── test_xiaoyuzhoufm_tools.py
├── tox.ini
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | repos:
 2 |   - repo: https://github.com/pre-commit/pre-commit-hooks
 3 |     rev: "v5.0.0"
 4 |     hooks:
 5 |       - id: check-case-conflict
 6 |       - id: check-merge-conflict
 7 |       - id: check-toml
 8 |       - id: check-yaml
 9 |       - id: check-json
10 |         exclude: ^.devcontainer/devcontainer.json
11 |       - id: pretty-format-json
12 |         exclude: ^.devcontainer/devcontainer.json
13 |         args: [--autofix]
14 |       - id: end-of-file-fixer
15 |       - id: trailing-whitespace
16 | 
17 |   - repo: https://github.com/executablebooks/mdformat
18 |     rev: 0.7.22
19 |     hooks:
20 |       - id: mdformat
21 |         additional_dependencies:
22 |           [mdformat-gfm, mdformat-frontmatter, mdformat-footnote]
23 | 
24 |   - repo: https://github.com/astral-sh/ruff-pre-commit
25 |     rev: "v0.11.8"
26 |     hooks:
27 |       - id: ruff
28 |         args: [--exit-non-zero-on-fix]
29 |       - id: ruff-format
30 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
  1 | docs/source
  2 | 
  3 | # From https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore
  4 | 
  5 | # Byte-compiled / optimized / DLL files
  6 | __pycache__/
  7 | *.py[cod]
  8 | *$py.class
  9 | 
 10 | # C extensions
 11 | *.so
 12 | 
 13 | # Distribution / packaging
 14 | .Python
 15 | build/
 16 | develop-eggs/
 17 | dist/
 18 | downloads/
 19 | eggs/
 20 | .eggs/
 21 | lib/
 22 | lib64/
 23 | parts/
 24 | sdist/
 25 | var/
 26 | wheels/
 27 | share/python-wheels/
 28 | *.egg-info/
 29 | .installed.cfg
 30 | *.egg
 31 | MANIFEST
 32 | 
 33 | # PyInstaller
 34 | #  Usually these files are written by a python script from a template
 35 | #  before PyInstaller builds the exe, so as to inject date/other infos into it.
 36 | *.manifest
 37 | *.spec
 38 | 
 39 | # Installer logs
 40 | pip-log.txt
 41 | pip-delete-this-directory.txt
 42 | 
 43 | # Unit test / coverage reports
 44 | htmlcov/
 45 | .tox/
 46 | .nox/
 47 | .coverage
 48 | .coverage.*
 49 | .cache
 50 | nosetests.xml
 51 | coverage.xml
 52 | *.cover
 53 | *.py,cover
 54 | .hypothesis/
 55 | .pytest_cache/
 56 | cover/
 57 | 
 58 | # Translations
 59 | *.mo
 60 | *.pot
 61 | 
 62 | # Django stuff:
 63 | *.log
 64 | local_settings.py
 65 | db.sqlite3
 66 | db.sqlite3-journal
 67 | 
 68 | # Flask stuff:
 69 | instance/
 70 | .webassets-cache
 71 | 
 72 | # Scrapy stuff:
 73 | .scrapy
 74 | 
 75 | # Sphinx documentation
 76 | docs/_build/
 77 | 
 78 | # PyBuilder
 79 | .pybuilder/
 80 | target/
 81 | 
 82 | # Jupyter Notebook
 83 | .ipynb_checkpoints
 84 | 
 85 | # IPython
 86 | profile_default/
 87 | ipython_config.py
 88 | 
 89 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
 90 | __pypackages__/
 91 | 
 92 | # Celery stuff
 93 | celerybeat-schedule
 94 | celerybeat.pid
 95 | 
 96 | # SageMath parsed files
 97 | *.sage.py
 98 | 
 99 | # Environments
100 | .env
101 | .venv
102 | env/
103 | venv/
104 | ENV/
105 | env.bak/
106 | venv.bak/
107 | 
108 | # Spyder project settings
109 | .spyderproject
110 | .spyproject
111 | 
112 | # Rope project settings
113 | .ropeproject
114 | 
115 | # mkdocs documentation
116 | /site
117 | 
118 | # mypy
119 | .mypy_cache/
120 | .dmypy.json
121 | dmypy.json
122 | 
123 | # Pyre type checker
124 | .pyre/
125 | 
126 | # pytype static type analyzer
127 | .pytype/
128 | 
129 | # Cython debug symbols
130 | cython_debug/
131 | 
132 | # Vscode config files
133 | # .vscode/
134 | 
135 | # PyCharm
136 | #  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
137 | #  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
138 | #  and can be added to the global gitignore or merged into this file.  For a more nuclear
139 | #  option (not recommended) you can uncomment the following to ignore the entire idea folder.
140 | #.idea/
141 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # mcp-toolbox
  2 | 
  3 | [![Release](https://img.shields.io/github/v/release/ai-zerolab/mcp-toolbox)](https://img.shields.io/github/v/release/ai-zerolab/mcp-toolbox)
  4 | [![Build status](https://img.shields.io/github/actions/workflow/status/ai-zerolab/mcp-toolbox/main.yml?branch=main)](https://github.com/ai-zerolab/mcp-toolbox/actions/workflows/main.yml?query=branch%3Amain)
  5 | [![codecov](https://codecov.io/gh/ai-zerolab/mcp-toolbox/branch/main/graph/badge.svg)](https://codecov.io/gh/ai-zerolab/mcp-toolbox)
  6 | [![Commit activity](https://img.shields.io/github/commit-activity/m/ai-zerolab/mcp-toolbox)](https://img.shields.io/github/commit-activity/m/ai-zerolab/mcp-toolbox)
  7 | [![License](https://img.shields.io/github/license/ai-zerolab/mcp-toolbox)](https://img.shields.io/github/license/ai-zerolab/mcp-toolbox)
  8 | 
  9 | A comprehensive toolkit for enhancing LLM capabilities through the Model Context Protocol (MCP). This package provides a collection of tools that allow LLMs to interact with external services and APIs, extending their functionality beyond text generation.
 10 | 
 11 | - **GitHub repository**: <https://github.com/ai-zerolab/mcp-toolbox/>
 12 | - **Documentation** (WIP): <https://ai-zerolab.github.io/mcp-toolbox/>
 13 | 
 14 | ## Features
 15 | 
 16 | > \*nix is our main target, but Windows should work too.
 17 | 
 18 | - **Command Line Execution**: Execute any command line instruction through LLM
 19 | - **Figma Integration**: Access Figma files, components, styles, and more
 20 | - **Extensible Architecture**: Easily add new API integrations
 21 | - **MCP Protocol Support**: Compatible with Claude Desktop and other MCP-enabled LLMs
 22 | - **Comprehensive Testing**: Well-tested codebase with high test coverage
 23 | 
 24 | ## Installation
 25 | 
 26 | ### Using uv (Recommended)
 27 | 
 28 | We recommend using [uv](https://github.com/astral-sh/uv) to manage your environment.
 29 | 
 30 | ```bash
 31 | # Install uv
 32 | curl -LsSf https://astral.sh/uv/install.sh | sh  # For macOS/Linux
 33 | # or
 34 | powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"  # For Windows
 35 | ```
 36 | 
 37 | Then you can use `uvx "mcp-toolbox@latest" stdio` as the command to run the latest version of the MCP server. **Audio and memory tools are not included in the default installation**; you can include them by installing the `all` extra:
 38 | 
 39 | > [audio] for audio tools, [memory] for memory tools, [all] for all tools
 40 | 
 41 | ```bash
 42 | uvx "mcp-toolbox[all]@latest" stdio
 43 | ```
 44 | 
 45 | ### Installing via Smithery
 46 | 
 47 | To install Toolbox for LLM Enhancement for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@ai-zerolab/mcp-toolbox):
 48 | 
 49 | ```bash
 50 | npx -y @smithery/cli install @ai-zerolab/mcp-toolbox --client claude
 51 | ```
 52 | 
 53 | ### Using pip
 54 | 
 55 | ```bash
 56 | pip install "mcp-toolbox[all]"
 57 | ```
 58 | 
 59 | You can then use `mcp-toolbox stdio` as the command to run the MCP server.
 60 | 
 61 | ## Configuration
 62 | 
 63 | ### Environment Variables
 64 | 
 65 | The following environment variables can be configured:
 66 | 
 67 | - `FIGMA_API_KEY`: API key for Figma integration
 68 | - `TAVILY_API_KEY`: API key for Tavily integration
 69 | - `DUCKDUCKGO_API_KEY`: API key for DuckDuckGo integration
 70 | - `BFL_API_KEY`: API key for Flux image generation API
 71 | 
 72 | ### Memory Storage
 73 | 
 74 | Memory tools store data in the following locations:
 75 | 
 76 | - **macOS**: `~/Documents/zerolab/mcp-toolbox/memory` (syncs across devices via iCloud)
 77 | - **Other platforms**: `~/.zerolab/mcp-toolbox/memory`
 78 | 
 79 | ### Full Configuration
 80 | 
 81 | To use mcp-toolbox with Claude Desktop/Cline/Cursor/..., add the following to your configuration file:
 82 | 
 83 | ```json
 84 | {
 85 |   "mcpServers": {
 86 |     "zerolab-toolbox": {
 87 |       "command": "uvx",
 88 |       "args": ["--prerelease=allow", "mcp-toolbox@latest", "stdio"],
 89 |       "env": {
 90 |         "FIGMA_API_KEY": "your-figma-api-key",
 91 |         "TAVILY_API_KEY": "your-tavily-api-key",
 92 |         "DUCKDUCKGO_API_KEY": "your-duckduckgo-api-key",
 93 |         "BFL_API_KEY": "your-bfl-api-key"
 94 |       }
 95 |     }
 96 |   }
 97 | }
 98 | ```
 99 | 
100 | For full features:
101 | 
102 | ```json
103 | {
104 |   "mcpServers": {
105 |     "zerolab-toolbox": {
106 |       "command": "uvx",
107 |       "args": [
108 |         "--prerelease=allow",
109 |         "--python=3.12",
110 |         "mcp-toolbox[all]@latest",
111 |         "stdio"
112 |       ],
113 |       "env": {
114 |         "FIGMA_API_KEY": "your-figma-api-key",
115 |         "TAVILY_API_KEY": "your-tavily-api-key",
116 |         "DUCKDUCKGO_API_KEY": "your-duckduckgo-api-key",
117 |         "BFL_API_KEY": "your-bfl-api-key"
118 |       }
119 |     }
120 |   }
121 | }
122 | ```
123 | 
124 | You can generate a debug configuration template using:
125 | 
126 | ```bash
127 | uv run generate_config_template.py
128 | ```
129 | 
130 | ## Available Tools
131 | 
132 | ### Command Line Tools
133 | 
134 | | Tool              | Description                        |
135 | | ----------------- | ---------------------------------- |
136 | | `execute_command` | Execute a command line instruction |
137 | 
138 | ### File Operations Tools
139 | 
140 | | Tool                 | Description                                         |
141 | | -------------------- | --------------------------------------------------- |
142 | | `read_file_content`  | Read content from a file                            |
143 | | `write_file_content` | Write content to a file                             |
144 | | `replace_in_file`    | Replace content in a file using regular expressions |
145 | | `list_directory`     | List directory contents with detailed information   |
146 | 
147 | ### Figma Tools
148 | 
149 | | Tool                            | Description                              |
150 | | ------------------------------- | ---------------------------------------- |
151 | | `figma_get_file`                | Get a Figma file by key                  |
152 | | `figma_get_file_nodes`          | Get specific nodes from a Figma file     |
153 | | `figma_get_image`               | Get images for nodes in a Figma file     |
154 | | `figma_get_image_fills`         | Get URLs for images used in a Figma file |
155 | | `figma_get_comments`            | Get comments on a Figma file             |
156 | | `figma_post_comment`            | Post a comment on a Figma file           |
157 | | `figma_delete_comment`          | Delete a comment from a Figma file       |
158 | | `figma_get_team_projects`       | Get projects for a team                  |
159 | | `figma_get_project_files`       | Get files for a project                  |
160 | | `figma_get_team_components`     | Get components for a team                |
161 | | `figma_get_file_components`     | Get components from a file               |
162 | | `figma_get_component`           | Get a component by key                   |
163 | | `figma_get_team_component_sets` | Get component sets for a team            |
164 | | `figma_get_team_styles`         | Get styles for a team                    |
165 | | `figma_get_file_styles`         | Get styles from a file                   |
166 | | `figma_get_style`               | Get a style by key                       |
167 | 
168 | ### XiaoyuZhouFM Tools
169 | 
170 | | Tool                    | Description                                                                                |
171 | | ----------------------- | ------------------------------------------------------------------------------------------ |
172 | | `xiaoyuzhoufm_download` | Download a podcast episode from XiaoyuZhouFM with optional automatic m4a to mp3 conversion |
173 | 
174 | ### Audio Tools
175 | 
176 | | Tool               | Description                                                      |
177 | | ------------------ | ---------------------------------------------------------------- |
178 | | `get_audio_length` | Get the length of an audio file in seconds                       |
179 | | `get_audio_text`   | Get transcribed text from a specific time range in an audio file |
180 | 
181 | ### Memory Tools
182 | 
183 | | Tool             | Description                                                             |
184 | | ---------------- | ----------------------------------------------------------------------- |
185 | | `think`          | Use the tool to think about something and append the thought to the log |
186 | | `get_session_id` | Get the current session ID                                              |
187 | | `remember`       | Store a memory (brief and detail) in the memory database                |
188 | | `recall`         | Query memories from the database with semantic search                   |
189 | | `forget`         | Clear all memories in the memory database                               |
190 | 
191 | ### Markitdown Tools
192 | 
193 | | Tool                       | Description                                   |
194 | | -------------------------- | --------------------------------------------- |
195 | | `convert_file_to_markdown` | Convert any file to Markdown using MarkItDown |
196 | | `convert_url_to_markdown`  | Convert a URL to Markdown using MarkItDown    |
197 | 
198 | ### Web Tools
199 | 
200 | | Tool                     | Description                                        |
201 | | ------------------------ | -------------------------------------------------- |
202 | | `get_html`               | Get HTML content from a URL                        |
203 | | `save_html`              | Save HTML from a URL to a file                     |
204 | | `search_with_tavily`     | Search the web using Tavily (requires API key)     |
205 | | `search_with_duckduckgo` | Search the web using DuckDuckGo (requires API key) |
206 | 
207 | ### Flux Image Generation Tools
208 | 
209 | | Tool                  | Description                                                |
210 | | --------------------- | ---------------------------------------------------------- |
211 | | `flux_generate_image` | Generate an image using the Flux API and save it to a file |
212 | 
213 | ## Usage Examples
214 | 
215 | ### Running the MCP Server
216 | 
217 | ```bash
218 | # Run with stdio transport (default)
219 | mcp-toolbox stdio
220 | 
221 | # Run with SSE transport
222 | mcp-toolbox sse --host localhost --port 9871
223 | ```
224 | 
225 | ### Using with Claude Desktop
226 | 
227 | 1. Configure Claude Desktop as shown in the Configuration section
228 | 1. Start Claude Desktop
229 | 1. Ask Claude to interact with Figma files:
230 |    - "Can you get information about this Figma file: 12345abcde?"
231 |    - "Show me the components in this Figma file: 12345abcde"
232 |    - "Get the comments from this Figma file: 12345abcde"
233 | 1. Ask Claude to execute command line instructions:
234 |    - "What files are in the current directory?"
235 |    - "What's the current system time?"
236 |    - "Show me the contents of a specific file."
237 | 1. Ask Claude to download podcasts from XiaoyuZhouFM:
238 |    - "Download this podcast episode: https://www.xiaoyuzhoufm.com/episode/67c3d80fb0167b8db9e3ec0f"
239 |    - "Download and convert to MP3 this podcast: https://www.xiaoyuzhoufm.com/episode/67c3d80fb0167b8db9e3ec0f"
240 | 1. Ask Claude to work with audio files:
241 |    - "What's the length of this audio file: audio.m4a?"
242 |    - "Transcribe the audio from 60 to 90 seconds in audio.m4a"
243 |    - "Get the text from 2:30 to 3:00 in the audio file"
244 | 1. Ask Claude to convert files or URLs to Markdown:
245 |    - "Convert this file to Markdown: document.docx"
246 |    - "Convert this webpage to Markdown: https://example.com"
247 | 1. Ask Claude to work with web content:
248 |    - "Get the HTML content from https://example.com"
249 |    - "Save the HTML from https://example.com to a file"
250 |    - "Search the web for 'artificial intelligence news'"
251 | 1. Ask Claude to generate images with Flux:
252 |    - "Generate an image of a beautiful sunset over mountains"
253 |    - "Create an image of a futuristic city and save it to my desktop"
254 |    - "Generate a portrait of a cat in a space suit"
255 | 1. Ask Claude to use memory tools:
256 |    - "Remember this important fact: The capital of France is Paris"
257 |    - "What's my current session ID?"
258 |    - "Recall any information about France"
259 |    - "Think about the implications of climate change"
260 |    - "Forget all stored memories"
261 | 
262 | ## Development
263 | 
264 | ### Local Setup
265 | 
266 | Fork the repository and clone it to your local machine.
267 | 
268 | ```bash
269 | # Install in development mode
270 | make install
271 | # Activate a virtual environment
272 | source .venv/bin/activate  # For macOS/Linux
273 | # or
274 | .venv\Scripts\activate  # For Windows
275 | ```
276 | 
277 | ### Running Tests
278 | 
279 | ```bash
280 | make test
281 | ```
282 | 
283 | ### Running Checks
284 | 
285 | ```bash
286 | make check
287 | ```
288 | 
289 | ### Building Documentation
290 | 
291 | ```bash
292 | make docs
293 | ```
294 | 
295 | ## Adding New Tools
296 | 
297 | To add a new API integration:
298 | 
299 | 1. Update `config.py` with any required API keys
300 | 1. Create a new module in `mcp_toolbox/`
301 | 1. Implement your API client and tools
302 | 1. Add tests for your new functionality
303 | 1. Update the README.md with new environment variables and tools
304 | 
305 | See the [development guide](llms.txt) for more detailed instructions.
306 | 
307 | ## Contributing
308 | 
309 | Contributions are welcome! Please feel free to submit a Pull Request.
310 | 
311 | 1. Fork the repository
312 | 1. Create a feature branch (`git checkout -b feature/amazing-feature`)
313 | 1. Commit your changes (`git commit -m 'Add some amazing feature'`)
314 | 1. Push to the branch (`git push origin feature/amazing-feature`)
315 | 1. Open a Pull Request
316 | 
317 | ## License
318 | 
319 | This project is licensed under the terms of the license included in the repository.
320 | 
```

--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Contributing to `mcp-toolbox`
  2 | 
  3 | Contributions are welcome, and they are greatly appreciated!
  4 | Every little bit helps, and credit will always be given.
  5 | 
  6 | You can contribute in many ways:
  7 | 
  8 | # Types of Contributions
  9 | 
 10 | ## Report Bugs
 11 | 
 12 | Report bugs at https://github.com/ai-zerolab/mcp-toolbox/issues
 13 | 
 14 | If you are reporting a bug, please include:
 15 | 
 16 | - Your operating system name and version.
 17 | - Any details about your local setup that might be helpful in troubleshooting.
 18 | - Detailed steps to reproduce the bug.
 19 | 
 20 | ## Fix Bugs
 21 | 
 22 | Look through the GitHub issues for bugs.
 23 | Anything tagged with "bug" and "help wanted" is open to whoever wants to implement a fix for it.
 24 | 
 25 | ## Implement Features
 26 | 
 27 | Look through the GitHub issues for features.
 28 | Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it.
 29 | 
 30 | ## Write Documentation
 31 | 
 32 | mcp-toolbox could always use more documentation, whether as part of the official docs, in docstrings, or even on the web in blog posts, articles, and such.
 33 | 
 34 | ## Submit Feedback
 35 | 
 36 | The best way to send feedback is to file an issue at https://github.com/ai-zerolab/mcp-toolbox/issues.
 37 | 
 38 | If you are proposing a new feature:
 39 | 
 40 | - Explain in detail how it would work.
 41 | - Keep the scope as narrow as possible, to make it easier to implement.
 42 | - Remember that this is a volunteer-driven project, and that contributions
 43 |   are welcome :)
 44 | 
 45 | # Get Started!
 46 | 
 47 | ## Installing locally
 48 | 
 49 | Ready to contribute? Here's how to set up `mcp-toolbox` for local development.
 50 | Please note this documentation assumes you already have `uv` and `Git` installed and ready to go.
 51 | 
 52 | 1. Fork the `mcp-toolbox` repo on GitHub.
 53 | 1. Clone your fork locally:
 54 | 
 55 | ```bash
 56 | cd <directory_in_which_repo_should_be_created>
 57 | git clone git@github.com:YOUR_NAME/mcp-toolbox.git
 58 | ```
 59 | 
 60 | 3. Now we need to install the environment. Navigate into the directory
 61 | 
 62 | ```bash
 63 | cd mcp-toolbox
 64 | ```
 65 | 
 66 | Then, install and activate the environment with:
 67 | 
 68 | ```bash
 69 | make install
 70 | ```
 71 | 
 72 | 4. Install pre-commit to run linters/formatters at commit time:
 73 | 
 74 | ```bash
 75 | uv run pre-commit install
 76 | ```
 77 | 
 78 | 5. Create a branch for local development:
 79 | 
 80 | ```bash
 81 | git checkout -b name-of-your-bugfix-or-feature
 82 | ```
 83 | 
 84 | Now you can make your changes locally.
 85 | 
 86 | Don't forget to add test cases for your added functionality to the `tests` directory.
 87 | 
 88 | ## After making your changes
 89 | 
 90 | When you're done making changes, check that your changes pass the formatting tests.
 91 | 
 92 | ```bash
 93 | make check
 94 | ```
 95 | 
 96 | Now, validate that all unit tests are passing:
 97 | 
 98 | ```bash
 99 | make test
100 | ```
101 | 
102 | Before raising a pull request you should also run tox. This will run the tests across different versions of Python:
103 | 
104 | ```bash
105 | tox
106 | ```
107 | 
108 | This requires you to have multiple versions of Python installed.
109 | This step is also triggered in the CI/CD pipeline, so you could also choose to skip this step locally.
110 | 
111 | ## Commit your changes and push your branch to GitHub:
112 | 
113 | ```bash
114 | git add .
115 | git commit -m "Your detailed description of your changes."
116 | git push origin name-of-your-bugfix-or-feature
117 | ```
118 | 
119 | Submit a pull request through the GitHub website.
120 | 
121 | # Pull Request Guidelines
122 | 
123 | Before you submit a pull request, check that it meets these guidelines:
124 | 
125 | 1. The pull request should include tests.
126 | 1. If the pull request adds functionality, the docs should be updated.
127 |    Put your new functionality into a function with a docstring, and add the feature to the list in `README.md`.
128 | 
```

--------------------------------------------------------------------------------
/docs/modules.md:
--------------------------------------------------------------------------------

```markdown
1 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/figma/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/markitdown/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/web/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/delete_comment.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "success": true
3 | }
4 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/audio/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """Audio processing tools."""
2 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/flux/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """Flux API image generation module."""
2 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/command_line/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """Command line tools for MCP-Toolbox."""
2 | 
```

--------------------------------------------------------------------------------
/pytest.ini:
--------------------------------------------------------------------------------

```
1 | # pytest.ini
2 | [pytest]
3 | asyncio_mode = auto
4 | 
```
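
With `asyncio_mode = auto`, pytest-asyncio collects and runs `async def` test functions without requiring an explicit `@pytest.mark.asyncio` marker on each one. A minimal sketch of such a test (a hypothetical example, not part of the suite):

```python
# Minimal sketch: under asyncio_mode = auto, this async test runs without any marker.
import asyncio


async def test_sleep_returns_none():
    assert await asyncio.sleep(0) is None
```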

--------------------------------------------------------------------------------
/mcp_toolbox/xiaoyuzhoufm/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """XiaoyuZhouFM podcast crawler module."""
2 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/enhance/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """LLM Enhancement tools for MCP-Toolbox."""
2 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/file_ops/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """File operations tools for MCP-Toolbox."""
2 | 
```

--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python"
3 | }
4 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_image.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "err": null,
3 |   "images": {
4 |     "2:0": "https://example.com/images/frame1.png",
5 |     "3:0": "https://example.com/images/text1.png"
6 |   }
7 | }
8 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_image_fills.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "meta": {
3 |     "images": {
4 |       "image1": "https://example.com/images/image1.png",
5 |       "image2": "https://example.com/images/image2.png"
6 |     }
7 |   }
8 | }
9 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/log.py:
--------------------------------------------------------------------------------

```python
 1 | import os
 2 | 
 3 | USER_DEFINED_LOG_LEVEL = os.getenv("MCP_TOOLBOX_LOG_LEVEL", "INFO")
 4 | 
 5 | os.environ["LOGURU_LEVEL"] = USER_DEFINED_LOG_LEVEL
 6 | 
 7 | from loguru import logger  # noqa: E402
 8 | 
 9 | __all__ = ["logger"]
10 | 
```
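
The module above copies `MCP_TOOLBOX_LOG_LEVEL` into `LOGURU_LEVEL` before loguru is imported, so the level only takes effect if it is set before the first import of `mcp_toolbox.log`. A minimal sketch of that behavior (the `DEBUG` value is just an example):

```python
# Minimal sketch: set the level before importing mcp_toolbox.log,
# because loguru reads LOGURU_LEVEL once at import time.
import os

os.environ["MCP_TOOLBOX_LOG_LEVEL"] = "DEBUG"  # example value

from mcp_toolbox.log import logger  # sets LOGURU_LEVEL, then imports loguru

logger.debug("visible because the level was set before the loguru import")
```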

--------------------------------------------------------------------------------
/tests/mock/figma/get_team_projects.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "projects": [
 3 |     {
 4 |       "created_at": "2023-01-01T00:00:00Z",
 5 |       "id": "project1",
 6 |       "name": "Project 1"
 7 |     },
 8 |     {
 9 |       "created_at": "2023-01-02T00:00:00Z",
10 |       "id": "project2",
11 |       "name": "Project 2"
12 |     }
13 |   ]
14 | }
15 | 
```

--------------------------------------------------------------------------------
/codecov.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | coverage:
 2 |   range: 70..100
 3 |   round: down
 4 |   precision: 1
 5 |   status:
 6 |     project:
 7 |       default:
 8 |         target: 90%
 9 |         threshold: 0.5%
10 |     patch:
11 |       default:
12 |         target: auto
13 |         threshold: 0%
14 |         informational: true
15 | codecov:
16 |  token: f927bff4-d404-4986-8c11-624eadda8431
17 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/post_comment.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "client_meta": {
 3 |     "node_id": "2:0",
 4 |     "x": 300,
 5 |     "y": 400
 6 |   },
 7 |   "created_at": "2023-01-04T00:00:00Z",
 8 |   "id": "comment3",
 9 |   "message": "Test comment",
10 |   "order_id": 3,
11 |   "resolved_at": null,
12 |   "user": {
13 |     "handle": "user1",
14 |     "id": "user1",
15 |     "img_url": "https://example.com/user1.png"
16 |   }
17 | }
18 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/cli.py:
--------------------------------------------------------------------------------

```python
 1 | import typer
 2 | 
 3 | from mcp_toolbox.app import mcp
 4 | 
 5 | app = typer.Typer()
 6 | 
 7 | 
 8 | @app.command()
 9 | def stdio():
10 |     mcp.run(transport="stdio")
11 | 
12 | 
13 | @app.command()
14 | def sse(
15 |     host: str = "localhost",
16 |     port: int = 9871,
17 | ):
18 |     mcp.settings.host = host
19 |     mcp.settings.port = port
20 |     mcp.run(transport="sse")
21 | 
22 | 
23 | if __name__ == "__main__":
24 |     app(["stdio"])
25 | 
```
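
Because `app` is a Typer application, the `stdio` and `sse` commands can also be invoked programmatically with an argument list, exactly as the `__main__` block above does with `["stdio"]`. A minimal sketch (host and port are example values):

```python
# Minimal sketch: run the SSE transport programmatically,
# equivalent to `mcp-toolbox sse --host 0.0.0.0 --port 9871` from a shell.
from mcp_toolbox.cli import app

app(["sse", "--host", "0.0.0.0", "--port", "9871"])
```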

--------------------------------------------------------------------------------
/.github/workflows/validate-codecov-config.yml:
--------------------------------------------------------------------------------

```yaml
 1 | name: validate-codecov-config
 2 | 
 3 | on:
 4 |   pull_request:
 5 |     paths: [codecov.yaml]
 6 |   push:
 7 |     branches: [main]
 8 | 
 9 | jobs:
10 |   validate-codecov-config:
11 |     runs-on: ubuntu-22.04
12 |     steps:
13 |       - uses: actions/checkout@v4
14 |       - name: Validate codecov configuration
15 |         run: curl -sSL --fail-with-body --data-binary @codecov.yaml https://codecov.io/validate
16 | 
```

--------------------------------------------------------------------------------
/tox.ini:
--------------------------------------------------------------------------------

```
 1 | [tox]
 2 | skipsdist = true
 3 | envlist = py310, py311, py312, py313
 4 | 
 5 | [gh-actions]
 6 | python =
 7 |     3.10: py310
 8 |     3.11: py311
 9 |     3.12: py312
10 |     3.13: py313
11 | 
12 | [testenv]
13 | passenv = PYTHON_VERSION
14 | allowlist_externals = uv
15 | commands =
16 |     uv sync --python {envpython}
17 |     uv run python -m pytest --doctest-modules tests --cov --cov-config=pyproject.toml --cov-report=xml
18 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_project_files.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "files": [
 3 |     {
 4 |       "key": "file1",
 5 |       "last_modified": "2023-01-01T00:00:00Z",
 6 |       "name": "File 1",
 7 |       "thumbnail_url": "https://example.com/thumbnails/file1.png"
 8 |     },
 9 |     {
10 |       "key": "file2",
11 |       "last_modified": "2023-01-02T00:00:00Z",
12 |       "name": "File 2",
13 |       "thumbnail_url": "https://example.com/thumbnails/file2.png"
14 |     }
15 |   ]
16 | }
17 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_file_styles.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "meta": {
 3 |     "styles": {
 4 |       "style1": {
 5 |         "description": "Primary brand color",
 6 |         "key": "style1",
 7 |         "name": "Primary Color",
 8 |         "remote": false,
 9 |         "style_type": "FILL"
10 |       },
11 |       "style2": {
12 |         "description": "Main heading style",
13 |         "key": "style2",
14 |         "name": "Heading 1",
15 |         "remote": false,
16 |         "style_type": "TEXT"
17 |       }
18 |     }
19 |   }
20 | }
21 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Install uv
 2 | FROM python:3.12-slim
 3 | 
 4 | # Install tini
 5 | RUN apt-get update && \
 6 |     apt-get install -y --no-install-recommends tini && \
 7 |     rm -rf /var/lib/apt/lists/*
 8 | 
 9 | COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
10 | 
11 | # Change the working directory to the `app` directory
12 | WORKDIR /app
13 | 
14 | # Copy the lockfile and `pyproject.toml` into the image
15 | COPY uv.lock /app/uv.lock
16 | COPY pyproject.toml /app/pyproject.toml
17 | 
18 | # Install dependencies
19 | RUN uv sync --frozen --no-install-project
20 | 
21 | # Copy the project into the image
22 | COPY . /app
23 | 
24 | # Sync the project
25 | RUN uv sync --frozen
26 | 
27 | # Run the server
28 | ENTRYPOINT ["tini", "--", "uv", "run", "mcp-toolbox"]
29 | CMD ["stdio"]
30 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_style.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "containing_file": {
 3 |     "key": "file1",
 4 |     "name": "UI Styles"
 5 |   },
 6 |   "created_at": "2023-01-01T00:00:00Z",
 7 |   "description": "Primary brand color",
 8 |   "key": "style1",
 9 |   "name": "Primary Color",
10 |   "sort_position": "1",
11 |   "style_properties": {
12 |     "fills": [
13 |       {
14 |         "color": {
15 |           "a": 1,
16 |           "b": 0.8,
17 |           "g": 0.4,
18 |           "r": 0.2
19 |         },
20 |         "type": "SOLID"
21 |       }
22 |     ]
23 |   },
24 |   "style_type": "FILL",
25 |   "thumbnail_url": "https://example.com/thumbnails/style1.png",
26 |   "updated_at": "2023-01-02T00:00:00Z",
27 |   "user": {
28 |     "handle": "user1",
29 |     "id": "user1",
30 |     "img_url": "https://example.com/user1.png"
31 |   }
32 | }
33 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required: []
 9 |     properties:
10 |       figmaApiKey:
11 |         type: string
12 |         default: ""
13 |         description: Optional API key for Figma integration.
14 |   commandFunction:
15 |     # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
16 |     |-
17 |     (config) => ({ command: 'uv', args: ['run', '--prerelease=allow', 'mcp-toolbox@latest', 'stdio'], env: { FIGMA_API_KEY: config.figmaApiKey } })
18 |   exampleConfig:
19 |     figmaApiKey: your-figma-api-key
20 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_file_components.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "meta": {
 3 |     "components": {
 4 |       "component1": {
 5 |         "containing_frame": {
 6 |           "name": "Components",
 7 |           "node_id": "4:0",
 8 |           "page_id": "1:0",
 9 |           "page_name": "Page 1"
10 |         },
11 |         "description": "Standard button component",
12 |         "key": "component1",
13 |         "name": "Button",
14 |         "remote": false
15 |       },
16 |       "component2": {
17 |         "containing_frame": {
18 |           "name": "Components",
19 |           "node_id": "4:0",
20 |           "page_id": "1:0",
21 |           "page_name": "Page 1"
22 |         },
23 |         "description": "Standard input field component",
24 |         "key": "component2",
25 |         "name": "Input Field",
26 |         "remote": false
27 |       }
28 |     }
29 |   }
30 | }
31 | 
```

--------------------------------------------------------------------------------
/.github/actions/setup-python-env/action.yml:
--------------------------------------------------------------------------------

```yaml
 1 | name: "Setup Python Environment"
 2 | description: "Set up Python environment for the given Python version"
 3 | 
 4 | inputs:
 5 |   python-version:
 6 |     description: "Python version to use"
 7 |     required: true
 8 |     default: "3.12"
 9 |   uv-version:
10 |     description: "uv version to use"
11 |     required: true
12 |     default: "0.6.2"
13 | 
14 | runs:
15 |   using: "composite"
16 |   steps:
17 |     - uses: actions/setup-python@v5
18 |       with:
19 |         python-version: ${{ inputs.python-version }}
20 | 
21 |     - name: Install uv
22 |       uses: astral-sh/setup-uv@v2
23 |       with:
24 |         version: ${{ inputs.uv-version }}
25 |         enable-cache: 'true'
26 |         cache-suffix: ${{ matrix.python-version }}
27 | 
28 |     - name: Install Python dependencies
29 |       run: uv sync --frozen
30 |       shell: bash
31 | 
```

--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------

```markdown
1 | # mcp-toolbox
2 | 
3 | [![Release](https://img.shields.io/github/v/release/ai-zerolab/mcp-toolbox)](https://img.shields.io/github/v/release/ai-zerolab/mcp-toolbox)
4 | [![Build status](https://img.shields.io/github/actions/workflow/status/ai-zerolab/mcp-toolbox/main.yml?branch=main)](https://github.com/ai-zerolab/mcp-toolbox/actions/workflows/main.yml?query=branch%3Amain)
5 | [![Commit activity](https://img.shields.io/github/commit-activity/m/ai-zerolab/mcp-toolbox)](https://img.shields.io/github/commit-activity/m/ai-zerolab/mcp-toolbox)
6 | [![License](https://img.shields.io/github/license/ai-zerolab/mcp-toolbox)](https://img.shields.io/github/license/ai-zerolab/mcp-toolbox)
7 | 
8 | Maintenance of a set of tools to enhance LLMs through the MCP protocol.
9 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_component.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "component_property_definitions": {
 3 |     "size": {
 4 |       "defaultValue": "medium",
 5 |       "type": "VARIANT"
 6 |     },
 7 |     "variant": {
 8 |       "defaultValue": "primary",
 9 |       "type": "VARIANT"
10 |     }
11 |   },
12 |   "component_set_id": "component_set1",
13 |   "containing_file": {
14 |     "key": "file1",
15 |     "name": "UI Components"
16 |   },
17 |   "containing_frame": {
18 |     "name": "Components",
19 |     "node_id": "4:0",
20 |     "page_id": "1:0",
21 |     "page_name": "Page 1"
22 |   },
23 |   "created_at": "2023-01-01T00:00:00Z",
24 |   "description": "Standard button component",
25 |   "key": "component1",
26 |   "name": "Button",
27 |   "thumbnail_url": "https://example.com/thumbnails/component1.png",
28 |   "updated_at": "2023-01-02T00:00:00Z",
29 |   "user": {
30 |     "handle": "user1",
31 |     "id": "user1",
32 |     "img_url": "https://example.com/user1.png"
33 |   }
34 | }
35 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_comments.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "comments": [
 3 |     {
 4 |       "client_meta": {
 5 |         "node_id": "2:0",
 6 |         "x": 100,
 7 |         "y": 200
 8 |       },
 9 |       "created_at": "2023-01-01T00:00:00Z",
10 |       "id": "comment1",
11 |       "message": "This is a comment",
12 |       "order_id": 1,
13 |       "resolved_at": null,
14 |       "user": {
15 |         "handle": "user1",
16 |         "id": "user1",
17 |         "img_url": "https://example.com/user1.png"
18 |       }
19 |     },
20 |     {
21 |       "client_meta": {
22 |         "node_id": "3:0",
23 |         "x": 150,
24 |         "y": 250
25 |       },
26 |       "created_at": "2023-01-02T00:00:00Z",
27 |       "id": "comment2",
28 |       "message": "Another comment",
29 |       "order_id": 2,
30 |       "resolved_at": "2023-01-03T00:00:00Z",
31 |       "user": {
32 |         "handle": "user2",
33 |         "id": "user2",
34 |         "img_url": "https://example.com/user2.png"
35 |       }
36 |     }
37 |   ]
38 | }
39 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/app.py:
--------------------------------------------------------------------------------

```python
 1 | from mcp.server.fastmcp import FastMCP
 2 | 
 3 | from mcp_toolbox.config import Config
 4 | from mcp_toolbox.log import logger
 5 | 
 6 | mcp = FastMCP("mcp-toolbox")
 7 | config = Config()
 8 | 
 9 | 
10 | # Import tools to register them with the MCP server
11 | if config.enable_commond_tools:
12 |     import mcp_toolbox.command_line.tools
13 | if config.enable_file_ops_tools:
14 |     import mcp_toolbox.file_ops.tools
15 | if config.enable_audio_tools:
16 |     try:
17 |         import mcp_toolbox.audio.tools
18 |     except ImportError:
19 |         logger.error(
20 |             "Audio tools is not available. Please install the required dependencies. e.g. `pip install mcp-toolbox[audio]`"
21 |         )
22 | if config.enabel_enhance_tools:
23 |     import mcp_toolbox.enhance.tools
24 | if config.figma_api_key:
25 |     import mcp_toolbox.figma.tools
26 | if config.bfl_api_key:
27 |     import mcp_toolbox.flux.tools
28 | import mcp_toolbox.markitdown.tools  # noqa: E402
29 | import mcp_toolbox.web.tools  # noqa: E402
30 | import mcp_toolbox.xiaoyuzhoufm.tools  # noqa: E402, F401
31 | 
32 | # TODO: Add prompt for toolbox's tools
33 | 
```
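
Because the tool modules are imported conditionally when `mcp_toolbox.app` is first imported, whole tool groups can be toggled through the `Config` environment variables, provided they are set before that import. A minimal sketch (the variable names, including their spelling, follow the field names in `config.py`):

```python
# Minimal sketch: disable command-line tools for this process.
# The env vars must be set before mcp_toolbox.app is imported for the first time.
import os

os.environ["ENABLE_COMMOND_TOOLS"] = "false"  # spelling matches the Config field name
os.environ["ENABLE_FILE_OPS_TOOLS"] = "true"

from mcp_toolbox.app import mcp  # command-line tools are skipped during registration

mcp.run(transport="stdio")
```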

--------------------------------------------------------------------------------
/tests/mock/figma/get_team_styles.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "styles": [
 3 |     {
 4 |       "containing_file": {
 5 |         "key": "file1",
 6 |         "name": "UI Styles"
 7 |       },
 8 |       "created_at": "2023-01-01T00:00:00Z",
 9 |       "description": "Primary brand color",
10 |       "key": "style1",
11 |       "name": "Primary Color",
12 |       "sort_position": "1",
13 |       "style_type": "FILL",
14 |       "thumbnail_url": "https://example.com/thumbnails/style1.png",
15 |       "updated_at": "2023-01-02T00:00:00Z",
16 |       "user": {
17 |         "handle": "user1",
18 |         "id": "user1",
19 |         "img_url": "https://example.com/user1.png"
20 |       }
21 |     },
22 |     {
23 |       "containing_file": {
24 |         "key": "file1",
25 |         "name": "UI Styles"
26 |       },
27 |       "created_at": "2023-01-03T00:00:00Z",
28 |       "description": "Main heading style",
29 |       "key": "style2",
30 |       "name": "Heading 1",
31 |       "sort_position": "2",
32 |       "style_type": "TEXT",
33 |       "thumbnail_url": "https://example.com/thumbnails/style2.png",
34 |       "updated_at": "2023-01-04T00:00:00Z",
35 |       "user": {
36 |         "handle": "user2",
37 |         "id": "user2",
38 |         "img_url": "https://example.com/user2.png"
39 |       }
40 |     }
41 |   ]
42 | }
43 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/config.py:
--------------------------------------------------------------------------------

```python
 1 | import platform
 2 | from pathlib import Path
 3 | 
 4 | from pydantic_settings import BaseSettings
 5 | 
 6 | 
 7 | class Config(BaseSettings):
 8 |     figma_api_key: str | None = None
 9 |     tavily_api_key: str | None = None
10 |     duckduckgo_api_key: str | None = None
11 |     bfl_api_key: str | None = None
12 | 
13 |     enable_commond_tools: bool = True
14 |     enable_file_ops_tools: bool = True
15 |     enable_audio_tools: bool = True
16 |     enabel_enhance_tools: bool = True
17 |     tool_home: str = Path("~/.zerolab/mcp-toolbox").expanduser().as_posix()
18 | 
19 |     @property
20 |     def cache_dir(self) -> str:
21 |         return (Path(self.tool_home) / "cache").expanduser().resolve().absolute().as_posix()
22 | 
23 |     @property
24 |     def memory_file(self) -> str:
25 |         # Use Documents folder for macOS to enable sync across multiple Mac devices
26 |         if platform.system() == "Darwin":  # macOS
27 |             documents_path = Path("~/Documents/zerolab/mcp-toolbox").expanduser()
28 |             documents_path.mkdir(parents=True, exist_ok=True)
29 |             return (documents_path / "memory").resolve().absolute().as_posix()
30 |         else:
31 |             # Default behavior for other operating systems
32 |             return (Path(self.tool_home) / "memory").expanduser().resolve().absolute().as_posix()
33 | 
34 | 
35 | if __name__ == "__main__":
36 |     print(Config())
37 | 
```
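
Since `Config` is a pydantic-settings `BaseSettings`, each field can be populated from an environment variable of the same name, matched case-insensitively. A minimal sketch of that mapping (the values are placeholders):

```python
# Minimal sketch: environment variables populate Config fields by name.
import os

from mcp_toolbox.config import Config

os.environ["FIGMA_API_KEY"] = "your-figma-api-key"  # placeholder
os.environ["ENABLE_AUDIO_TOOLS"] = "false"

config = Config()
print(config.figma_api_key)       # "your-figma-api-key"
print(config.enable_audio_tools)  # False
print(config.cache_dir)           # <tool_home>/cache, e.g. ~/.zerolab/mcp-toolbox/cache
```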

--------------------------------------------------------------------------------
/generate_config_template.py:
--------------------------------------------------------------------------------

```python
 1 | import json
 2 | import shutil
 3 | import sys
 4 | from pathlib import Path
 5 | 
 6 | from mcp_toolbox.config import Config
 7 | 
 8 | 
 9 | def get_endpoint_path() -> str:
10 |     """
11 |     Find the path to the mcp-toolbox script.
12 |     Similar to the 'which' command in Unix-like systems.
13 | 
14 |     Returns:
15 |         str: The full path to the mcp-toolbox script
16 |     """
17 |     # First try using shutil.which to find the script in PATH
18 |     script_path = shutil.which("mcp-toolbox")
19 |     if script_path:
20 |         return script_path
21 | 
22 |     # If not found in PATH, try to find it in the current Python environment
23 |     # This handles cases where the script is installed but not in PATH
24 |     bin_dir = Path(sys.executable).parent
25 |     possible_paths = [
26 |         bin_dir / "mcp-toolbox",
27 |         bin_dir / "mcp-toolbox.exe",  # For Windows
28 |     ]
29 | 
30 |     for path in possible_paths:
31 |         if path.exists():
32 |             return str(path)
33 | 
34 |     # If we can't find it, return the script name and hope it's in PATH when executed
35 |     return "mcp-toolbox"
36 | 
37 | 
38 | if __name__ == "__main__":
39 |     endpoint_path = get_endpoint_path()
40 | 
41 |     mcp_config = {
42 |         "command": endpoint_path,
43 |         "args": ["stdio"],
44 |         "env": {field.upper(): "" for field in Config.model_fields},
45 |     }
46 | 
47 |     mcp_item = {
48 |         "zerolab-toolbox-dev": mcp_config,
49 |     }
50 | 
51 |     print(json.dumps(mcp_item, indent=4))
52 | 
```
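
Running this script prints an MCP server entry that can be pasted into a client configuration file, as described in the README. `get_endpoint_path` can also be reused on its own; a minimal sketch (the path in the comment is only an example):

```python
# Minimal sketch: resolve the mcp-toolbox entry point the same way the script does.
from generate_config_template import get_endpoint_path

print(get_endpoint_path())  # e.g. /path/to/.venv/bin/mcp-toolbox, or "mcp-toolbox" as a fallback
```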

--------------------------------------------------------------------------------
/tests/mock/figma/get_team_components.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "components": [
 3 |     {
 4 |       "containing_file": {
 5 |         "key": "file1",
 6 |         "name": "UI Components"
 7 |       },
 8 |       "containing_frame": {
 9 |         "name": "Components",
10 |         "node_id": "4:0",
11 |         "page_id": "1:0",
12 |         "page_name": "Page 1"
13 |       },
14 |       "created_at": "2023-01-01T00:00:00Z",
15 |       "description": "Standard button component",
16 |       "key": "component1",
17 |       "name": "Button",
18 |       "thumbnail_url": "https://example.com/thumbnails/component1.png",
19 |       "updated_at": "2023-01-02T00:00:00Z",
20 |       "user": {
21 |         "handle": "user1",
22 |         "id": "user1",
23 |         "img_url": "https://example.com/user1.png"
24 |       }
25 |     },
26 |     {
27 |       "containing_file": {
28 |         "key": "file1",
29 |         "name": "UI Components"
30 |       },
31 |       "containing_frame": {
32 |         "name": "Components",
33 |         "node_id": "4:0",
34 |         "page_id": "1:0",
35 |         "page_name": "Page 1"
36 |       },
37 |       "created_at": "2023-01-03T00:00:00Z",
38 |       "description": "Standard input field component",
39 |       "key": "component2",
40 |       "name": "Input Field",
41 |       "thumbnail_url": "https://example.com/thumbnails/component2.png",
42 |       "updated_at": "2023-01-04T00:00:00Z",
43 |       "user": {
44 |         "handle": "user2",
45 |         "id": "user2",
46 |         "img_url": "https://example.com/user2.png"
47 |       }
48 |     }
49 |   ]
50 | }
51 | 
```

--------------------------------------------------------------------------------
/mkdocs.yml:
--------------------------------------------------------------------------------

```yaml
 1 | site_name: mcp-toolbox
 2 | repo_url: https://github.com/ai-zerolab/mcp-toolbox
 3 | site_url: https://ai-zerolab.github.io/mcp-toolbox
 4 | site_description: Maintenance of a set of tools to enhance LLMs through the MCP protocol.
 5 | site_author: ai-zerolab
 6 | edit_uri: edit/main/docs/
 7 | repo_name: ai-zerolab/mcp-toolbox
 8 | copyright: Maintained by <a href="https://ai-zerolab.com">ai-zerolab</a>.
 9 | 
10 | nav:
11 |   - Home: index.md
12 |   - Modules: modules.md
13 | plugins:
14 |   - search
15 |   - mkdocstrings:
16 |       handlers:
17 |         python:
18 |           paths: ["mcp_toolbox"]
19 | theme:
20 |   name: material
21 |   feature:
22 |     tabs: true
23 |   palette:
24 |     - media: "(prefers-color-scheme: light)"
25 |       scheme: default
26 |       primary: white
27 |       accent: deep orange
28 |       toggle:
29 |         icon: material/brightness-7
30 |         name: Switch to dark mode
31 |     - media: "(prefers-color-scheme: dark)"
32 |       scheme: slate
33 |       primary: black
34 |       accent: deep orange
35 |       toggle:
36 |         icon: material/brightness-4
37 |         name: Switch to light mode
38 |   icon:
39 |     repo: fontawesome/brands/github
40 | 
41 | extra:
42 |   social:
43 |     - icon: fontawesome/brands/github
44 |       link: https://github.com/ai-zerolab/mcp-toolbox
45 |     - icon: fontawesome/brands/python
46 |       link: https://pypi.org/project/mcp-toolbox
47 | 
48 | markdown_extensions:
49 |   - toc:
50 |       permalink: true
51 |   - pymdownx.arithmatex:
52 |       generic: true
53 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_team_component_sets.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "component_sets": [
 3 |     {
 4 |       "containing_file": {
 5 |         "key": "file1",
 6 |         "name": "UI Components"
 7 |       },
 8 |       "containing_frame": {
 9 |         "name": "Components",
10 |         "node_id": "4:0",
11 |         "page_id": "1:0",
12 |         "page_name": "Page 1"
13 |       },
14 |       "created_at": "2023-01-01T00:00:00Z",
15 |       "description": "Button component set with variants",
16 |       "key": "component_set1",
17 |       "name": "Button",
18 |       "thumbnail_url": "https://example.com/thumbnails/component_set1.png",
19 |       "updated_at": "2023-01-02T00:00:00Z",
20 |       "user": {
21 |         "handle": "user1",
22 |         "id": "user1",
23 |         "img_url": "https://example.com/user1.png"
24 |       }
25 |     },
26 |     {
27 |       "containing_file": {
28 |         "key": "file1",
29 |         "name": "UI Components"
30 |       },
31 |       "containing_frame": {
32 |         "name": "Components",
33 |         "node_id": "4:0",
34 |         "page_id": "1:0",
35 |         "page_name": "Page 1"
36 |       },
37 |       "created_at": "2023-01-03T00:00:00Z",
38 |       "description": "Input field component set with variants",
39 |       "key": "component_set2",
40 |       "name": "Input Field",
41 |       "thumbnail_url": "https://example.com/thumbnails/component_set2.png",
42 |       "updated_at": "2023-01-04T00:00:00Z",
43 |       "user": {
44 |         "handle": "user2",
45 |         "id": "user2",
46 |         "img_url": "https://example.com/user2.png"
47 |       }
48 |     }
49 |   ]
50 | }
51 | 
```

--------------------------------------------------------------------------------
/.github/workflows/main.yml:
--------------------------------------------------------------------------------

```yaml
 1 | name: Main
 2 | 
 3 | on:
 4 |   push:
 5 |     branches:
 6 |       - main
 7 |   pull_request:
 8 |     types: [opened, synchronize, reopened, ready_for_review]
 9 | 
10 | jobs:
11 |   quality:
12 |     runs-on: ubuntu-latest
13 |     steps:
14 |       - name: Check out
15 |         uses: actions/checkout@v4
16 | 
17 |       - uses: actions/cache@v4
18 |         with:
19 |           path: ~/.cache/pre-commit
20 |           key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}
21 | 
22 |       - name: Set up the environment
23 |         uses: ./.github/actions/setup-python-env
24 | 
25 |       - name: Run checks
26 |         run: make check
27 | 
28 |   tests-and-type-check:
29 |     runs-on: ubuntu-latest
30 |     strategy:
31 |       matrix:
32 |         python-version: ["3.10", "3.11", "3.12", "3.13"]
33 |       fail-fast: false
34 |     defaults:
35 |       run:
36 |         shell: bash
37 |     steps:
38 |       - name: Check out
39 |         uses: actions/checkout@v4
40 | 
41 |       - name: Set up the environment
42 |         uses: ./.github/actions/setup-python-env
43 |         with:
44 |           python-version: ${{ matrix.python-version }}
45 | 
46 |       - name: Run tests
47 |         run: uv run python -m pytest tests --cov --cov-config=pyproject.toml --cov-report=xml
48 | 
49 |       - name: Upload coverage reports to Codecov with GitHub Action on Python 3.11
50 |         uses: codecov/codecov-action@v4
51 |         if: ${{ matrix.python-version == '3.11' }}
52 | 
53 |   check-docs:
54 |     runs-on: ubuntu-latest
55 |     steps:
56 |       - name: Check out
57 |         uses: actions/checkout@v4
58 | 
59 |       - name: Set up the environment
60 |         uses: ./.github/actions/setup-python-env
61 | 
62 |       - name: Check if documentation can be built
63 |         run: uv run mkdocs build -s
64 | 
```

--------------------------------------------------------------------------------
/tests/mock/figma/get_file.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "components": {},
 3 |   "document": {
 4 |     "children": [
 5 |       {
 6 |         "children": [
 7 |           {
 8 |             "absoluteBoundingBox": {
 9 |               "height": 100,
10 |               "width": 100,
11 |               "x": 0,
12 |               "y": 0
13 |             },
14 |             "background": [
15 |               {
16 |                 "blendMode": "NORMAL",
17 |                 "color": {
18 |                   "a": 1,
19 |                   "b": 1,
20 |                   "g": 1,
21 |                   "r": 1
22 |                 },
23 |                 "type": "SOLID"
24 |               }
25 |             ],
26 |             "backgroundColor": {
27 |               "a": 1,
28 |               "b": 1,
29 |               "g": 1,
30 |               "r": 1
31 |             },
32 |             "blendMode": "PASS_THROUGH",
33 |             "children": [],
34 |             "clipsContent": true,
35 |             "constraints": {
36 |               "horizontal": "LEFT",
37 |               "vertical": "TOP"
38 |             },
39 |             "effects": [],
40 |             "fills": [
41 |               {
42 |                 "blendMode": "NORMAL",
43 |                 "color": {
44 |                   "a": 1,
45 |                   "b": 1,
46 |                   "g": 1,
47 |                   "r": 1
48 |                 },
49 |                 "type": "SOLID"
50 |               }
51 |             ],
52 |             "id": "2:0",
53 |             "name": "Frame 1",
54 |             "strokeAlign": "INSIDE",
55 |             "strokeWeight": 1,
56 |             "strokes": [],
57 |             "type": "FRAME"
58 |           }
59 |         ],
60 |         "id": "1:0",
61 |         "name": "Page 1",
62 |         "type": "CANVAS"
63 |       }
64 |     ],
65 |     "id": "0:0",
66 |     "name": "Document",
67 |     "type": "DOCUMENT"
68 |   },
69 |   "editorType": "figma",
70 |   "lastModified": "2023-01-01T00:00:00Z",
71 |   "name": "Test File",
72 |   "role": "owner",
73 |   "schemaVersion": 0,
74 |   "styles": {},
75 |   "thumbnailUrl": "https://example.com/thumbnail.png",
76 |   "version": "123"
77 | }
78 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/markitdown/tools.py:
--------------------------------------------------------------------------------

```python
 1 | from pathlib import Path
 2 | from typing import Annotated, Any
 3 | 
 4 | from markitdown import MarkItDown
 5 | from pydantic import Field
 6 | 
 7 | from mcp_toolbox.app import mcp
 8 | 
 9 | md = MarkItDown(enable_builtins=True, enable_plugins=True)
10 | 
11 | 
12 | @mcp.tool(
13 |     description="Convert any file to Markdown, using MarkItDown.",
14 | )
15 | async def convert_file_to_markdown(
16 |     input_file: Annotated[str, Field(description="The input file to convert")],
17 |     output_file: Annotated[str, Field(description="The output Markdown file")],
18 | ) -> dict[str, Any]:
19 |     """Convert any file to Markdown
20 | 
21 |     Args:
22 |         input_file: The input file to convert
23 |         output_file: The output Markdown file
24 |     """
25 |     input_file: Path = Path(input_file).expanduser().resolve().absolute()
26 |     output_file: Path = Path(output_file).expanduser().resolve().absolute()
27 | 
28 |     if not input_file.is_file():
29 |         return {
30 |             "error": f"Input file not found: {input_file.as_posix()}",
31 |             "success": False,
32 |         }
33 | 
34 |     output_file.parent.mkdir(parents=True, exist_ok=True)
35 | 
36 |     c = md.convert(input_file.as_posix()).text_content
37 |     output_file.write_text(c)
38 | 
39 |     return {
40 |         "success": True,
41 |         "input_file": input_file.as_posix(),
42 |         "output_file": output_file.as_posix(),
43 |     }
44 | 
45 | 
46 | @mcp.tool(
47 |     description="Convert a URL to Markdown, using MarkItDown.",
48 | )
49 | async def convert_url_to_markdown(
50 |     url: Annotated[str, Field(description="The URL to convert")],
51 |     output_file: Annotated[str, Field(description="The output Markdown file")],
52 | ) -> dict[str, Any]:
53 |     """Convert a URL to Markdown
54 | 
55 |     Args:
56 |         url: The URL to convert
57 |         output_file: The output Markdown file
58 |     """
59 |     output_file: Path = Path(output_file).expanduser().resolve().absolute()
60 | 
61 |     output_file.parent.mkdir(parents=True, exist_ok=True)
62 | 
63 |     c = md.convert_url(url).text_content
64 |     output_file.write_text(c)
65 | 
66 |     return {
67 |         "success": True,
68 |         "url": url,
69 |         "output_file": output_file.as_posix(),
70 |     }
71 | 
```
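
For reference, a minimal sketch of invoking these tools directly as plain async functions (outside an MCP client), assuming MarkItDown's optional dependencies for the given input format are installed; the file paths and URL below are placeholders:

```python
# Illustrative only: paths and URL are placeholders, not part of the repository.
import asyncio

from mcp_toolbox.markitdown.tools import convert_file_to_markdown, convert_url_to_markdown


async def main() -> None:
    # Convert a local file (any format MarkItDown supports) to Markdown.
    print(await convert_file_to_markdown("report.pdf", "report.md"))
    # Convert a web page to Markdown.
    print(await convert_url_to_markdown("https://example.com", "example.md"))


asyncio.run(main())
```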

--------------------------------------------------------------------------------
/tests/mock/figma/get_file_nodes.json:
--------------------------------------------------------------------------------

```json
  1 | {
  2 |   "nodes": {
  3 |     "2:0": {
  4 |       "document": {
  5 |         "absoluteBoundingBox": {
  6 |           "height": 100,
  7 |           "width": 100,
  8 |           "x": 0,
  9 |           "y": 0
 10 |         },
 11 |         "background": [
 12 |           {
 13 |             "blendMode": "NORMAL",
 14 |             "color": {
 15 |               "a": 1,
 16 |               "b": 1,
 17 |               "g": 1,
 18 |               "r": 1
 19 |             },
 20 |             "type": "SOLID"
 21 |           }
 22 |         ],
 23 |         "backgroundColor": {
 24 |           "a": 1,
 25 |           "b": 1,
 26 |           "g": 1,
 27 |           "r": 1
 28 |         },
 29 |         "blendMode": "PASS_THROUGH",
 30 |         "children": [],
 31 |         "clipsContent": true,
 32 |         "constraints": {
 33 |           "horizontal": "LEFT",
 34 |           "vertical": "TOP"
 35 |         },
 36 |         "effects": [],
 37 |         "fills": [
 38 |           {
 39 |             "blendMode": "NORMAL",
 40 |             "color": {
 41 |               "a": 1,
 42 |               "b": 1,
 43 |               "g": 1,
 44 |               "r": 1
 45 |             },
 46 |             "type": "SOLID"
 47 |           }
 48 |         ],
 49 |         "id": "2:0",
 50 |         "name": "Frame 1",
 51 |         "strokeAlign": "INSIDE",
 52 |         "strokeWeight": 1,
 53 |         "strokes": [],
 54 |         "type": "FRAME"
 55 |       }
 56 |     },
 57 |     "3:0": {
 58 |       "document": {
 59 |         "absoluteBoundingBox": {
 60 |           "height": 20,
 61 |           "width": 80,
 62 |           "x": 10,
 63 |           "y": 10
 64 |         },
 65 |         "blendMode": "PASS_THROUGH",
 66 |         "characters": "Hello World",
 67 |         "constraints": {
 68 |           "horizontal": "LEFT",
 69 |           "vertical": "TOP"
 70 |         },
 71 |         "effects": [],
 72 |         "fills": [
 73 |           {
 74 |             "blendMode": "NORMAL",
 75 |             "color": {
 76 |               "a": 1,
 77 |               "b": 0,
 78 |               "g": 0,
 79 |               "r": 0
 80 |             },
 81 |             "type": "SOLID"
 82 |           }
 83 |         ],
 84 |         "id": "3:0",
 85 |         "name": "Text Layer",
 86 |         "strokeAlign": "INSIDE",
 87 |         "strokeWeight": 1,
 88 |         "strokes": [],
 89 |         "style": {
 90 |           "fontFamily": "Roboto",
 91 |           "fontPostScriptName": "Roboto-Regular",
 92 |           "fontSize": 14,
 93 |           "fontWeight": 400,
 94 |           "letterSpacing": 0,
 95 |           "lineHeightPercent": 100,
 96 |           "lineHeightPx": 16.4,
 97 |           "textAlignHorizontal": "LEFT",
 98 |           "textAlignVertical": "TOP"
 99 |         },
100 |         "type": "TEXT"
101 |       }
102 |     }
103 |   }
104 | }
105 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/command_line/tools.py:
--------------------------------------------------------------------------------

```python
 1 | """Command line execution tools for MCP-Toolbox."""
 2 | 
 3 | import asyncio
 4 | import contextlib
 5 | import os
 6 | from pathlib import Path
 7 | from typing import Annotated, Any
 8 | 
 9 | from pydantic import Field
10 | 
11 | from mcp_toolbox.app import mcp
12 | 
13 | 
14 | @mcp.tool(description="Execute a command line instruction.")
15 | async def execute_command(
16 |     command: Annotated[list[str], Field(description="The command to execute as a list of strings")],
17 |     timeout_seconds: Annotated[int, Field(default=30, description="Maximum execution time in seconds")] = 30,
18 |     working_dir: Annotated[str | None, Field(default=None, description="Directory to execute the command in")] = None,
19 | ) -> dict[str, Any]:
20 |     """Execute a command line instruction."""
21 |     if not command:
22 |         return {
23 |             "error": "Command cannot be empty",
24 |             "stdout": "",
25 |             "stderr": "",
26 |             "return_code": 1,
27 |         }
28 | 
29 |     try:
30 |         # Expand user home directory in working_dir if provided
31 |         expanded_working_dir = Path(working_dir).expanduser() if working_dir else working_dir
32 | 
33 |         # Create subprocess with current environment
34 |         process = await asyncio.create_subprocess_exec(
35 |             *command,
36 |             stdout=asyncio.subprocess.PIPE,
37 |             stderr=asyncio.subprocess.PIPE,
38 |             env=os.environ,
39 |             cwd=expanded_working_dir,
40 |         )
41 | 
42 |         try:
43 |             # Wait for the process with timeout
44 |             stdout, stderr = await asyncio.wait_for(process.communicate(), timeout=timeout_seconds)
45 | 
46 |             # Decode output
47 |             stdout_str = stdout.decode("utf-8", errors="replace") if stdout else ""
48 |             stderr_str = stderr.decode("utf-8", errors="replace") if stderr else ""
49 | 
50 |             return {
51 |                 "stdout": stdout_str,
52 |                 "stderr": stderr_str,
53 |                 "return_code": process.returncode,
54 |             }
55 | 
56 |         except asyncio.TimeoutError:
57 |             # Kill the process if it times out
58 |             with contextlib.suppress(ProcessLookupError):
59 |                 process.kill()
60 | 
61 |             return {
62 |                 "error": f"Command execution timed out after {timeout_seconds} seconds",
63 |                 "stdout": "",
64 |                 "stderr": "",
65 |                 "return_code": 124,  # Standard timeout return code
66 |             }
67 | 
68 |     except Exception as e:
69 |         return {
70 |             "error": f"Failed to execute command: {e!s}",
71 |             "stdout": "",
72 |             "stderr": "",
73 |             "return_code": 1,
74 |         }
75 | 
```
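
A minimal sketch of calling `execute_command` directly, e.g. from a script or a test; the command, timeout, and working directory are placeholders:

```python
# Illustrative only: the command, timeout, and working directory are placeholders.
import asyncio

from mcp_toolbox.command_line.tools import execute_command


async def main() -> None:
    result = await execute_command(["echo", "hello"], timeout_seconds=5, working_dir="~")
    print(result["return_code"], result["stdout"].strip())


asyncio.run(main())
```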

--------------------------------------------------------------------------------
/.github/workflows/on-release-main.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: release-main
  2 | 
  3 | permissions:
  4 |   contents: write
  5 |   packages: write
  6 | 
  7 | on:
  8 |   release:
  9 |     types: [published]
 10 | 
 11 | jobs:
 12 |   set-version:
 13 |     runs-on: ubuntu-24.04
 14 |     steps:
 15 |       - uses: actions/checkout@v4
 16 | 
 17 |       - name: Export tag
 18 |         id: vars
 19 |         run: echo tag=${GITHUB_REF#refs/*/} >> $GITHUB_OUTPUT
 20 |         if: ${{ github.event_name == 'release' }}
 21 | 
 22 |       - name: Update project version
 23 |         run: |
 24 |           sed -i "s/^version = \".*\"/version = \"$RELEASE_VERSION\"/" pyproject.toml
 25 |         env:
 26 |           RELEASE_VERSION: ${{ steps.vars.outputs.tag }}
 27 |         if: ${{ github.event_name == 'release' }}
 28 | 
 29 |       - name: Upload updated pyproject.toml
 30 |         uses: actions/upload-artifact@v4
 31 |         with:
 32 |           name: pyproject-toml
 33 |           path: pyproject.toml
 34 | 
 35 |   publish:
 36 |     runs-on: ubuntu-latest
 37 |     needs: [set-version]
 38 |     steps:
 39 |       - name: Check out
 40 |         uses: actions/checkout@v4
 41 | 
 42 |       - name: Set up the environment
 43 |         uses: ./.github/actions/setup-python-env
 44 | 
 45 |       - name: Download updated pyproject.toml
 46 |         uses: actions/download-artifact@v4
 47 |         with:
 48 |           name: pyproject-toml
 49 | 
 50 |       - name: Build package
 51 |         run: uv build
 52 | 
 53 |       - name: Publish package
 54 |         run: uv publish
 55 |         env:
 56 |           UV_PUBLISH_TOKEN: ${{ secrets.PYPI_TOKEN }}
 57 | 
 58 |       - name: Upload dists to release
 59 |         uses: svenstaro/upload-release-action@v2
 60 |         with:
 61 |             repo_token: ${{ secrets.GITHUB_TOKEN }}
 62 |             file: dist/*
 63 |             file_glob: true
 64 |             tag: ${{ github.ref }}
 65 |             overwrite: true
 66 | 
 67 | 
 68 |   push-image:
 69 |     runs-on: ubuntu-latest
 70 |     needs: [set-version]
 71 |     steps:
 72 |       - uses: actions/checkout@v4
 73 |       - name: Export tag
 74 |         id: vars
 75 |         run: echo tag=${GITHUB_REF#refs/*/} >> $GITHUB_OUTPUT
 76 |         if: ${{ github.event_name == 'release' }}
 77 |       - name: Set up QEMU
 78 |         uses: docker/setup-qemu-action@v3
 79 |       - name: Set up Docker Buildx
 80 |         uses: docker/setup-buildx-action@v3
 81 |       - name: Login to Github Container Registry
 82 |         uses: docker/login-action@v3
 83 |         with:
 84 |           registry: ghcr.io
 85 |           username: ai-zerolab
 86 |           password: ${{ secrets.GITHUB_TOKEN }}
 87 |       - name: Build and push image
 88 |         id: docker_build_publish
 89 |         uses: docker/build-push-action@v5
 90 |         with:
 91 |             context: .
 92 |             platforms: linux/amd64,linux/arm64/v8
 93 |             cache-from: type=gha
 94 |             cache-to: type=gha,mode=max
 95 |             file: ./Dockerfile
 96 |             push: true
 97 |             tags: |
 98 |               ghcr.io/ai-zerolab/mcp-toolbox:${{ steps.vars.outputs.tag }}
 99 |               ghcr.io/ai-zerolab/mcp-toolbox:latest
100 | 
101 |   deploy-docs:
102 |     needs: publish
103 |     runs-on: ubuntu-latest
104 |     steps:
105 |       - name: Check out
106 |         uses: actions/checkout@v4
107 | 
108 |       - name: Set up the environment
109 |         uses: ./.github/actions/setup-python-env
110 | 
111 |       - name: Deploy documentation
112 |         run: uv run mkdocs gh-deploy --force
113 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/enhance/tools.py:
--------------------------------------------------------------------------------

```python
 1 | from typing import Annotated
 2 | 
 3 | from pydantic import Field
 4 | 
 5 | from mcp_toolbox.app import mcp
 6 | from mcp_toolbox.log import logger
 7 | 
 8 | 
 9 | @mcp.tool(
10 |     description="Use the tool to think about something. It will not obtain new information or change the database, but just append the thought to the log. Use it when complex reasoning or some cache memory is needed."
11 | )
12 | async def think(
13 |     thought: Annotated[str, Field(description="A thought to think about.")],
14 | ) -> dict[str, str]:
15 |     """
16 |     see: https://www.anthropic.com/engineering/claude-think-tool
17 |     """
18 | 
19 |     return {
20 |         "thought": thought,
21 |     }
22 | 
23 | 
24 | try:
25 |     from mcp_toolbox.enhance.memory import LocalMemory, get_current_session_memory
26 | except ImportError:
27 |     logger.error(
28 |         "Memory tools are not available. Please install the required dependencies. e.g. `pip install mcp-toolbox[enhance]`"
29 |     )
30 | else:
31 | 
32 |     @mcp.tool(description="Get the current session id.")
33 |     def get_session_id() -> dict[str, str]:
34 |         memory: LocalMemory = get_current_session_memory()
35 |         return {"session_id": memory.session_id}
36 | 
37 |     @mcp.tool(description="Store a memory in the memory database.")
38 |     def remember(
39 |         brief: Annotated[str, Field(description="The brief information of the memory.")],
40 |         detail: Annotated[str, Field(description="The detailed information of the brief text.")],
41 |     ) -> dict[str, str]:
42 |         memory: LocalMemory = get_current_session_memory()
43 |         memory.store(brief, detail)
44 |         return {
45 |             "session_id": memory.session_id,
46 |             "brief": brief,
47 |             "detail": detail,
48 |         }
49 | 
50 |     @mcp.tool(description="Query a memory from the memory database.")
51 |     def recall(
52 |         query: Annotated[str, Field(description="The query to search in the memory database.")],
53 |         top_k: Annotated[
54 |             int,
55 |             Field(
56 |                 description="The maximum number of results to return. Default to 5.",
57 |                 default=5,
58 |             ),
59 |         ] = 5,
60 |         cross_session: Annotated[
61 |             bool,
62 |             Field(
63 |                 description="Whether to search across all sessions. Default to True.",
64 |                 default=True,
65 |             ),
66 |         ] = True,
67 |         session_id: Annotated[
68 |             str | None,
69 |             Field(
70 |                 description="The session id of the memory. If not provided, the current session id will be used.",
71 |                 default=None,
72 |             ),
73 |         ] = None,
74 |     ) -> list[dict[str, str]]:
75 |         if session_id:
76 |             memory = LocalMemory.use_session(session_id)
77 |         else:
78 |             memory: LocalMemory = get_current_session_memory()
79 |         results = memory.query(query, top_k=top_k, cross_session=cross_session)
80 |         return [r.model_dump(exclude_none=True) for r in results]
81 | 
82 |     @mcp.tool(description="Clear all memories in the memory database.")
83 |     def forget() -> dict[str, str]:
84 |         memory: LocalMemory = get_current_session_memory()
85 |         memory.clear()
86 |         return {"message": "All memories are cleared."}
87 | 
```
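
A minimal sketch of the memory tools, assuming the optional dependencies (installed via the `memory` extra) are available so the conditional registration above succeeds; the stored text and query are placeholders:

```python
# Illustrative only: requires the optional memory dependencies; the stored
# brief/detail text and the query are placeholders.
from mcp_toolbox.enhance.tools import get_session_id, recall, remember

print(get_session_id())
remember(brief="project style", detail="The project uses ruff with a 120-character line length.")
print(recall("code style", top_k=3))
```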

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
  1 | [project]
  2 | name = "mcp-toolbox"
  3 | version = "0.0.0.dev"
  4 | description = "Maintenance of a set of tools to enhance LLM through MCP protocols."
  5 | authors = [{ name = "ai-zerolab", email = "[email protected]" }]
  6 | readme = "README.md"
  7 | keywords = ['MCP', "Model Context Protocol", "LLM"]
  8 | requires-python = ">=3.10"
  9 | classifiers = [
 10 |     "Intended Audience :: Developers",
 11 |     "Programming Language :: Python",
 12 |     "Programming Language :: Python :: 3",
 13 |     "Programming Language :: Python :: 3.10",
 14 |     "Programming Language :: Python :: 3.11",
 15 |     "Programming Language :: Python :: 3.12",
 16 |     "Programming Language :: Python :: 3.13",
 17 |     "Topic :: Software Development :: Libraries :: Python Modules",
 18 | ]
 19 | dependencies = [
 20 |     "anyio>=4.8.0",
 21 |     "duckduckgo-search>=7.5.2",
 22 |     "httpx>=0.28.1",
 23 |     "loguru>=0.7.3",
 24 |     "markitdown[all]~=0.1.0a1",
 25 |     "mcp[cli]>=1.3.0",
 26 |     "numpy>=2.1.3",
 27 |     "pillow>=11.1.0",
 28 |     "pydantic>=2.10.6",
 29 |     "pydantic-settings[toml]>=2.8.0",
 30 |     "tavily-python>=0.5.1",
 31 |     "typer>=0.15.2",
 32 | ]
 33 | 
 34 | [project.urls]
 35 | Homepage = "https://ai-zerolab.github.io/mcp-toolbox/"
 36 | Repository = "https://github.com/ai-zerolab/mcp-toolbox"
 37 | Documentation = "https://ai-zerolab.github.io/mcp-toolbox/"
 38 | 
 39 | [dependency-groups]
 40 | 
 41 | dev = [
 42 |     "pytest>=7.2.0",
 43 |     "pre-commit>=2.20.0",
 44 |     "tox-uv>=1.11.3",
 45 |     "deptry>=0.22.0",
 46 |     "pytest-cov>=4.0.0",
 47 |     "ruff>=0.9.2",
 48 |     "mkdocs>=1.4.2",
 49 |     "mkdocs-material>=8.5.10",
 50 |     "mkdocstrings[python]>=0.26.1",
 51 |     "pytest-asyncio>=0.25.3",
 52 |     "mcp-toolbox[all]",
 53 | ]
 54 | [project.optional-dependencies]
 55 | audio = ["openai-whisper>=20240930 ; python_version <= '3.12'"]
 56 | memory = ["fastembed>=0.6.0", "portalocker>=3.1.1"]
 57 | all = ["mcp-toolbox[audio, memory]"]
 58 | 
 59 | [project.scripts]
 60 | mcp-toolbox = "mcp_toolbox.cli:app"
 61 | 
 62 | [build-system]
 63 | requires = ["hatchling"]
 64 | build-backend = "hatchling.build"
 65 | 
 66 | [tool.setuptools]
 67 | py-modules = ["mcp_toolbox"]
 68 | 
 69 | 
 70 | [tool.pytest.ini_options]
 71 | testpaths = ["tests"]
 72 | 
 73 | [tool.ruff]
 74 | target-version = "py310"
 75 | line-length = 120
 76 | fix = true
 77 | 
 78 | [tool.ruff.lint]
 79 | select = [
 80 |     # flake8-2020
 81 |     "YTT",
 82 |     # flake8-bandit
 83 |     "S",
 84 |     # flake8-bugbear
 85 |     "B",
 86 |     # flake8-builtins
 87 |     "A",
 88 |     # flake8-comprehensions
 89 |     "C4",
 90 |     # flake8-debugger
 91 |     "T10",
 92 |     # flake8-simplify
 93 |     "SIM",
 94 |     # isort
 95 |     "I",
 96 |     # mccabe
 97 |     "C90",
 98 |     # pycodestyle
 99 |     "E",
100 |     "W",
101 |     # pyflakes
102 |     "F",
103 |     # pygrep-hooks
104 |     "PGH",
105 |     # pyupgrade
106 |     "UP",
107 |     # ruff
108 |     "RUF",
109 |     # tryceratops
110 |     "TRY",
111 | ]
112 | ignore = [
113 |     # LineTooLong
114 |     "E501",
115 |     # DoNotAssignLambda
116 |     "E731",
117 |     # raise-vanilla-args
118 |     "TRY003",
119 |     # try-consider-else
120 |     "TRY300",
121 |     # raise-within-try
122 |     "TRY301",
123 | ]
124 | 
125 | [tool.ruff.lint.per-file-ignores]
126 | "tests/*" = ["S101", "C901", "F841", "S108", "F821"]
127 | "mcp_toolbox/flux/api.py" = ["C901", "SIM102"]
128 | 
129 | [tool.ruff.format]
130 | preview = true
131 | 
132 | [tool.coverage.report]
133 | skip_empty = true
134 | 
135 | [tool.coverage.run]
136 | branch = true
137 | source = ["mcp_toolbox"]
138 | omit = ["mcp_toolbox/flux/api.py"]
139 | 
140 | 
141 | [tool.deptry]
142 | exclude = ["mcp_toolbox/app.py", ".venv", "tests"]
143 | 
144 | [tool.deptry.per_rule_ignores]
145 | DEP002 = ["mcp", "mcp-toolbox"]
146 | 
```

--------------------------------------------------------------------------------
/tests/audio/test_audio_tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for audio tools."""
  2 | 
  3 | from unittest.mock import MagicMock, patch
  4 | 
  5 | import pytest
  6 | 
  7 | try:
  8 |     from mcp_toolbox.audio.tools import get_audio_length, get_audio_text
  9 | except ImportError:
 10 |     pytest.skip("Audio tools are not available.", allow_module_level=True)
 11 | 
 12 | 
 13 | @pytest.fixture
 14 | def mock_whisper():
 15 |     """Mock whisper module."""
 16 |     with patch("mcp_toolbox.audio.tools.whisper") as mock_whisper:
 17 |         # Mock load_audio to return a numpy array of a specific length
 18 |         mock_audio = MagicMock()
 19 |         mock_audio.__len__.return_value = 16000 * 60  # 60 seconds of audio at 16kHz
 20 |         mock_whisper.load_audio.return_value = mock_audio
 21 | 
 22 |         # Mock the model
 23 |         mock_model = MagicMock()
 24 |         mock_model.detect_language.return_value = (None, {"en": 0.9, "zh": 0.1})
 25 |         mock_model.transcribe.return_value = {"text": "Successfully transcribed audio"}
 26 |         mock_whisper.load_model.return_value = mock_model
 27 | 
 28 |         yield mock_whisper
 29 | 
 30 | 
 31 | @pytest.fixture
 32 | def mock_os_path_exists():
 33 |     """Mock os.path.exists to return True."""
 34 |     with patch("os.path.exists", return_value=True):
 35 |         yield
 36 | 
 37 | 
 38 | @pytest.mark.asyncio
 39 | async def test_get_audio_length(mock_whisper, mock_os_path_exists):
 40 |     """Test get_audio_length function."""
 41 |     result = await get_audio_length("test.m4a")
 42 | 
 43 |     # Check that the function returns the expected values
 44 |     assert "duration_seconds" in result
 45 |     assert "formatted_duration" in result
 46 |     assert "message" in result
 47 |     assert result["duration_seconds"] == 60.0
 48 |     assert result["formatted_duration"] == "0:01:00"
 49 |     assert "60.00 seconds" in result["message"]
 50 | 
 51 |     # Check that whisper.load_audio was called with the correct arguments
 52 |     mock_whisper.load_audio.assert_called_once_with("test.m4a")
 53 | 
 54 | 
 55 | @pytest.mark.asyncio
 56 | async def test_get_audio_length_file_not_found():
 57 |     """Test get_audio_length function with a non-existent file."""
 58 |     with patch("os.path.exists", return_value=False):
 59 |         result = await get_audio_length("nonexistent.m4a")
 60 | 
 61 |     # Check that the function returns an error
 62 |     assert "error" in result
 63 |     assert "message" in result
 64 |     assert "not found" in result["error"]
 65 | 
 66 | 
 67 | @pytest.mark.asyncio
 68 | async def test_get_audio_text(mock_whisper, mock_os_path_exists):
 69 |     """Test get_audio_text function."""
 70 |     # Set up global variables in the module
 71 |     with patch("mcp_toolbox.audio.tools._detected_language", "en"):
 72 |         result = await get_audio_text("test.m4a", 10.0, 20.0, "base")
 73 | 
 74 |     # Check that the function returns the expected values
 75 |     assert "text" in result
 76 |     assert "start_time" in result
 77 |     assert "end_time" in result
 78 |     assert "time_range" in result
 79 |     assert "language" in result
 80 |     assert "message" in result
 81 |     assert result["text"] == "Successfully transcribed audio"
 82 |     assert result["start_time"] == 10.0
 83 |     assert result["end_time"] == 20.0
 84 |     assert result["time_range"] == "0:00:10 - 0:00:20"
 85 |     assert "Successfully transcribed audio" in result["message"]
 86 | 
 87 |     # Check that whisper.load_model and transcribe were called
 88 |     mock_whisper.load_model.assert_called()
 89 |     mock_whisper.load_model().transcribe.assert_called()
 90 | 
 91 | 
 92 | @pytest.mark.asyncio
 93 | async def test_get_audio_text_file_not_found():
 94 |     """Test get_audio_text function with a non-existent file."""
 95 |     with patch("os.path.exists", return_value=False):
 96 |         result = await get_audio_text("nonexistent.m4a", 10.0, 20.0)
 97 | 
 98 |     # Check that the function returns an error
 99 |     assert "error" in result
100 |     assert "message" in result
101 |     assert "not found" in result["error"]
102 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/flux/tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Flux API image generation tools."""
  2 | 
  3 | from pathlib import Path
  4 | from typing import Annotated, Any
  5 | 
  6 | from loguru import logger
  7 | from pydantic import Field
  8 | 
  9 | from mcp_toolbox.app import mcp
 10 | from mcp_toolbox.config import Config
 11 | from mcp_toolbox.flux.api import ApiException, ImageRequest
 12 | 
 13 | 
 14 | @mcp.tool(description="Generate an image using the Flux API and save it to a local file.")
 15 | async def flux_generate_image(
 16 |     prompt: Annotated[str, Field(description="The text prompt for image generation")],
 17 |     output_dir: Annotated[str, Field(description="The directory to save the image")],
 18 |     model_name: Annotated[str, Field(default="flux.1.1-pro", description="The model version to use")] = "flux.1.1-pro",
 19 |     width: Annotated[int | None, Field(default=None, description="Width of the image in pixels")] = None,
 20 |     height: Annotated[int | None, Field(default=None, description="Height of the image in pixels")] = None,
 21 |     seed: Annotated[int | None, Field(default=None, description="Seed for reproducibility")] = None,
 22 | ) -> dict[str, Any]:
 23 |     """Generate an image using the Flux API and save it to a local file.
 24 | 
 25 |     Args:
 26 |         prompt: The text prompt for image generation
 27 |         output_dir: The directory to save the image
 28 |         model_name: The model version to use (default: flux.1.1-pro)
 29 |         width: Width of the image in pixels (must be a multiple of 32, between 256 and 1440)
 30 |         height: Height of the image in pixels (must be a multiple of 32, between 256 and 1440)
 31 |         seed: Optional seed for reproducibility
 32 | 
 33 |     Returns:
 34 |         A dictionary containing information about the generated image
 35 |     """
 36 |     config = Config()
 37 | 
 38 |     if not config.bfl_api_key:
 39 |         return {
 40 |             "success": False,
 41 |             "error": "BFL_API_KEY not provided. Set BFL_API_KEY environment variable.",
 42 |         }
 43 | 
 44 |     try:
 45 |         # Create output directory if it doesn't exist
 46 |         output_path = Path(output_dir).expanduser().resolve()
 47 |         output_path.mkdir(parents=True, exist_ok=True)
 48 | 
 49 |         # Generate a filename based on the prompt
 50 |         filename = "_".join(prompt.split()[:5]).lower()
 51 |         filename = "".join(c if c.isalnum() or c == "_" else "_" for c in filename)
 52 |         if len(filename) > 50:
 53 |             filename = filename[:50]
 54 | 
 55 |         # Full path for the image (extension will be added by the save method)
 56 |         image_path = output_path / filename
 57 | 
 58 |         logger.info(f"Generating image with prompt: {prompt}")
 59 | 
 60 |         # Create image request
 61 |         image_request = ImageRequest(
 62 |             prompt=prompt,
 63 |             name=model_name,
 64 |             width=width,
 65 |             height=height,
 66 |             seed=seed,
 67 |             api_key=config.bfl_api_key,
 68 |             validate=True,
 69 |         )
 70 | 
 71 |         # Request and save the image
 72 |         logger.info("Requesting image from Flux API...")
 73 |         await image_request.request()
 74 | 
 75 |         logger.info("Waiting for image generation to complete...")
 76 |         await image_request.retrieve()
 77 | 
 78 |         logger.info("Saving image to disk...")
 79 |         saved_path = await image_request.save(str(image_path))
 80 | 
 81 |         # Get the image URL
 82 |         image_url = await image_request.get_url()
 83 | 
 84 |         return {
 85 |             "success": True,
 86 |             "prompt": prompt,
 87 |             "model": model_name,
 88 |             "image_path": saved_path,
 89 |             "image_url": image_url,
 90 |             "message": f"Successfully generated and saved image to {saved_path}",
 91 |         }
 92 | 
 93 |     except ApiException as e:
 94 |         return {
 95 |             "success": False,
 96 |             "error": f"API error: {e}",
 97 |             "message": f"Failed to generate image: {e}",
 98 |         }
 99 |     except ValueError as e:
100 |         return {
101 |             "success": False,
102 |             "error": str(e),
103 |             "message": f"Invalid parameters: {e}",
104 |         }
105 |     except Exception as e:
106 |         return {
107 |             "success": False,
108 |             "error": str(e),
109 |             "message": f"Failed to generate image: {e}",
110 |         }
111 | 
```
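
A minimal invocation sketch; it assumes `BFL_API_KEY` is set in the environment and performs a real API call, so the prompt, output directory, and dimensions are placeholders:

```python
# Illustrative only: requires BFL_API_KEY and network access; prompt,
# output directory, and dimensions are placeholders.
import asyncio

from mcp_toolbox.flux.tools import flux_generate_image


async def main() -> None:
    result = await flux_generate_image(
        prompt="a lighthouse at dusk",
        output_dir="~/Pictures/flux",
        width=1024,
        height=768,
    )
    print(result)


asyncio.run(main())
```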

--------------------------------------------------------------------------------
/tests/flux/test_flux_tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for Flux API tools."""
  2 | 
  3 | from unittest.mock import AsyncMock, MagicMock, patch
  4 | 
  5 | import pytest
  6 | 
  7 | from mcp_toolbox.flux.api import ApiException
  8 | from mcp_toolbox.flux.tools import flux_generate_image
  9 | 
 10 | 
 11 | @pytest.fixture
 12 | def mock_config():
 13 |     """Mock Config with BFL_API_KEY."""
 14 |     with patch("mcp_toolbox.flux.tools.Config") as mock_config:
 15 |         config_instance = MagicMock()
 16 |         config_instance.bfl_api_key = "test_api_key"
 17 |         mock_config.return_value = config_instance
 18 |         yield mock_config
 19 | 
 20 | 
 21 | @pytest.fixture
 22 | def mock_image_request():
 23 |     """Mock ImageRequest class."""
 24 |     with patch("mcp_toolbox.flux.tools.ImageRequest") as mock_class:
 25 |         instance = AsyncMock()
 26 |         instance.request = AsyncMock()
 27 |         instance.retrieve = AsyncMock(return_value={"sample": "https://example.com/image.png"})
 28 |         instance.get_url = AsyncMock(return_value="https://example.com/image.png")
 29 |         instance.save = AsyncMock(return_value="/path/to/saved/image.png")
 30 |         mock_class.return_value = instance
 31 |         yield mock_class, instance
 32 | 
 33 | 
 34 | @pytest.mark.asyncio
 35 | async def test_flux_generate_image_success(mock_config, mock_image_request):
 36 |     """Test successful image generation."""
 37 |     mock_class, mock_instance = mock_image_request
 38 | 
 39 |     result = await flux_generate_image(
 40 |         prompt="a beautiful landscape",
 41 |         output_dir="/tmp/images",
 42 |         model_name="flux.1.1-pro",
 43 |         width=512,
 44 |         height=512,
 45 |         seed=42,
 46 |     )
 47 | 
 48 |     # Check that ImageRequest was created with correct parameters
 49 |     mock_class.assert_called_once_with(
 50 |         prompt="a beautiful landscape",
 51 |         name="flux.1.1-pro",
 52 |         width=512,
 53 |         height=512,
 54 |         seed=42,
 55 |         api_key="test_api_key",
 56 |         validate=True,
 57 |     )
 58 | 
 59 |     # Check that methods were called
 60 |     mock_instance.request.assert_called_once()
 61 |     mock_instance.retrieve.assert_called_once()
 62 |     mock_instance.save.assert_called_once()
 63 |     mock_instance.get_url.assert_called_once()
 64 | 
 65 |     # Check result
 66 |     assert result["success"] is True
 67 |     assert result["prompt"] == "a beautiful landscape"
 68 |     assert result["model"] == "flux.1.1-pro"
 69 |     assert result["image_path"] == "/path/to/saved/image.png"
 70 |     assert result["image_url"] == "https://example.com/image.png"
 71 |     assert "Successfully generated" in result["message"]
 72 | 
 73 | 
 74 | @pytest.mark.asyncio
 75 | async def test_flux_generate_image_no_api_key():
 76 |     """Test image generation with no API key."""
 77 |     with patch("mcp_toolbox.flux.tools.Config") as mock_config:
 78 |         config_instance = MagicMock()
 79 |         config_instance.bfl_api_key = None
 80 |         mock_config.return_value = config_instance
 81 | 
 82 |         result = await flux_generate_image(
 83 |             prompt="a beautiful landscape",
 84 |             output_dir="/tmp/images",
 85 |         )
 86 | 
 87 |         assert result["success"] is False
 88 |         assert "BFL_API_KEY not provided" in result["error"]
 89 | 
 90 | 
 91 | @pytest.mark.asyncio
 92 | async def test_flux_generate_image_api_exception(mock_config):
 93 |     """Test image generation with API exception."""
 94 |     with patch("mcp_toolbox.flux.tools.ImageRequest") as mock_class:
 95 |         instance = AsyncMock()
 96 |         instance.request = AsyncMock(side_effect=ApiException(400, "Invalid request"))
 97 |         mock_class.return_value = instance
 98 | 
 99 |         result = await flux_generate_image(
100 |             prompt="a beautiful landscape",
101 |             output_dir="/tmp/images",
102 |         )
103 | 
104 |         assert result["success"] is False
105 |         assert "API error" in result["error"]
106 | 
107 | 
108 | @pytest.mark.asyncio
109 | async def test_flux_generate_image_value_error(mock_config):
110 |     """Test image generation with value error."""
111 |     with patch("mcp_toolbox.flux.tools.ImageRequest") as mock_class:
112 |         instance = AsyncMock()
113 |         instance.request = AsyncMock(side_effect=ValueError("Invalid width"))
114 |         mock_class.return_value = instance
115 | 
116 |         result = await flux_generate_image(
117 |             prompt="a beautiful landscape",
118 |             output_dir="/tmp/images",
119 |             width=123,  # Not a multiple of 32
120 |         )
121 | 
122 |         assert result["success"] is False
123 |         assert "Invalid parameters" in result["message"]
124 | 
```

--------------------------------------------------------------------------------
/tests/markitdown/test_markitdown_tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for Markitdown tools."""
  2 | 
  3 | from unittest.mock import MagicMock, patch
  4 | 
  5 | import pytest
  6 | 
  7 | from mcp_toolbox.markitdown.tools import (
  8 |     convert_file_to_markdown,
  9 |     convert_url_to_markdown,
 10 |     md,
 11 | )
 12 | 
 13 | 
 14 | # Mock for MarkItDown.convert method
 15 | @pytest.fixture
 16 | def mock_markitdown_convert():
 17 |     """Mock for MarkItDown.convert method."""
 18 |     with patch.object(md, "convert") as mock_convert:
 19 |         # Set up the mock to return a result with text_content
 20 |         mock_result = MagicMock()
 21 |         mock_result.text_content = "# Converted Markdown\n\nThis is converted content."
 22 |         mock_convert.return_value = mock_result
 23 |         yield mock_convert
 24 | 
 25 | 
 26 | @pytest.fixture
 27 | def mock_markitdown_convert_url():
 28 |     """Mock for MarkItDown.convert method."""
 29 |     with patch.object(md, "convert_url") as mock_convert:
 30 |         # Set up the mock to return a result with text_content
 31 |         mock_result = MagicMock()
 32 |         mock_result.text_content = "# Converted Markdown\n\nThis is converted content."
 33 |         mock_convert.return_value = mock_result
 34 |         yield mock_convert
 35 | 
 36 | 
 37 | # Test convert_file_to_markdown function
 38 | @pytest.mark.asyncio
 39 | async def test_convert_file_to_markdown_success(mock_markitdown_convert):
 40 |     """Test successful file conversion."""
 41 |     # Mock file operations
 42 |     with (
 43 |         patch("pathlib.Path.is_file", return_value=True),
 44 |         patch("pathlib.Path.write_text") as mock_write_text,
 45 |         patch("pathlib.Path.mkdir") as mock_mkdir,
 46 |     ):
 47 |         # Call the function
 48 |         result = await convert_file_to_markdown("input.txt", "output.md")
 49 | 
 50 |         # Verify the output file was written with the converted content
 51 |         mock_write_text.assert_called_once_with("# Converted Markdown\n\nThis is converted content.")
 52 | 
 53 |         # Verify the output directory was created
 54 |         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)
 55 | 
 56 |         # Verify the result is as expected
 57 |         assert result["success"] is True
 58 |         assert "input.txt" in result["input_file"]
 59 |         assert "output.md" in result["output_file"]
 60 | 
 61 | 
 62 | @pytest.mark.asyncio
 63 | async def test_convert_file_to_markdown_file_not_found():
 64 |     """Test file conversion when input file doesn't exist."""
 65 |     # Mock file operations
 66 |     with patch("pathlib.Path.is_file", return_value=False):
 67 |         # Call the function
 68 |         result = await convert_file_to_markdown("nonexistent.txt", "output.md")
 69 | 
 70 |         # Verify the result is as expected
 71 |         assert result["success"] is False
 72 |         assert "not found" in result["error"]
 73 | 
 74 | 
 75 | @pytest.mark.asyncio
 76 | async def test_convert_file_to_markdown_exception(mock_markitdown_convert):
 77 |     """Test file conversion when an exception occurs."""
 78 |     # Set up the mock to raise an exception
 79 |     mock_markitdown_convert.side_effect = Exception("Conversion error")
 80 | 
 81 |     # Mock file operations
 82 |     with (
 83 |         patch("pathlib.Path.is_file", return_value=True),
 84 |         patch("pathlib.Path.read_text", return_value="Original content"),
 85 |         patch("pathlib.Path.mkdir"),
 86 |     ):
 87 |         # Call the function and expect an exception
 88 |         with pytest.raises(Exception) as excinfo:
 89 |             await convert_file_to_markdown("input.txt", "output.md")
 90 | 
 91 |         # Verify the exception message
 92 |         assert "Conversion error" in str(excinfo.value)
 93 | 
 94 | 
 95 | @pytest.mark.asyncio
 96 | async def test_convert_url_to_markdown_success(mock_markitdown_convert_url):
 97 |     """Test successful file conversion."""
 98 |     # Mock file operations
 99 |     with (
100 |         patch("pathlib.Path.write_text") as mock_write_text,
101 |         patch("pathlib.Path.mkdir") as mock_mkdir,
102 |     ):
103 |         # Call the function
104 |         result = await convert_url_to_markdown("https://example.com", "output.md")
105 | 
106 |         # Verify the output file was written with the converted content
107 |         mock_write_text.assert_called_once_with("# Converted Markdown\n\nThis is converted content.")
108 | 
109 |         # Verify the output directory was created
110 |         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)
111 | 
112 |         # Verify the result is as expected
113 |         assert result["success"] is True
114 |         assert "https://example.com" in result["url"]
115 |         assert "output.md" in result["output_file"]
116 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/web/tools.py:
--------------------------------------------------------------------------------

```python
  1 | import functools
  2 | from pathlib import Path
  3 | from typing import Annotated, Any, Literal
  4 | 
  5 | import anyio
  6 | from duckduckgo_search import DDGS
  7 | from httpx import AsyncClient
  8 | from pydantic import Field
  9 | from tavily import AsyncTavilyClient
 10 | 
 11 | from mcp_toolbox.app import mcp
 12 | from mcp_toolbox.config import Config
 13 | 
 14 | client = AsyncClient(
 15 |     follow_redirects=True,
 16 | )
 17 | 
 18 | 
 19 | async def get_http_content(
 20 |     url: Annotated[str, Field(description="The URL to request")],
 21 |     method: Annotated[str, Field(default="GET", description="HTTP method to use")] = "GET",
 22 |     headers: Annotated[dict[str, str] | None, Field(default=None, description="Optional HTTP headers")] = None,
 23 |     params: Annotated[dict[str, str] | None, Field(default=None, description="Optional query parameters")] = None,
 24 |     data: Annotated[dict[str, str] | None, Field(default=None, description="Optional request body data")] = None,
 25 |     timeout: Annotated[int, Field(default=60, description="Request timeout in seconds")] = 60,
 26 | ) -> str:
 27 |     response = await client.request(
 28 |         method,
 29 |         url,
 30 |         headers=headers,
 31 |         params=params,
 32 |         data=data,
 33 |         timeout=timeout,
 34 |     )
 35 |     response.raise_for_status()
 36 |     return response.text
 37 | 
 38 | 
 39 | @mcp.tool(
 40 |     description="Save HTML from a URL.",
 41 | )
 42 | async def save_html(
 43 |     url: Annotated[str, Field(description="The URL to save")],
 44 |     output_path: Annotated[str, Field(description="The path to save the HTML")],
 45 | ) -> dict[str, Any]:
 46 |     output_path: Path = Path(output_path).expanduser().resolve().absolute()
 47 | 
 48 |     output_path.parent.mkdir(parents=True, exist_ok=True)
 49 |     try:
 50 |         content = await get_http_content(url)
 51 |     except Exception as e:
 52 |         return {
 53 |             "success": False,
 54 |             "error": f"Failed to save HTML: {e!s}",
 55 |         }
 56 | 
 57 |     try:
 58 |         output_path.write_text(content)
 59 |         return {
 60 |             "success": True,
 61 |             "url": url,
 62 |             "output_path": output_path.as_posix(),
 63 |         }
 64 |     except Exception as e:
 65 |         return {
 66 |             "success": False,
 67 |             "error": f"Failed to save HTML: {e!s}",
 68 |         }
 69 | 
 70 | 
 71 | @mcp.tool(
 72 |     description="Get HTML from a URL.",
 73 | )
 74 | async def get_html(url: Annotated[str, Field(description="The URL to get")]) -> dict[str, Any]:
 75 |     try:
 76 |         content = await get_http_content(url)
 77 |         return {
 78 |             "success": True,
 79 |             "url": url,
 80 |             "content": content,
 81 |         }
 82 |     except Exception as e:
 83 |         return {
 84 |             "success": False,
 85 |             "error": f"Failed to get HTML: {e!s}",
 86 |         }
 87 | 
 88 | 
 89 | if Config().tavily_api_key:
 90 | 
 91 |     @mcp.tool(
 92 |         description="Search with Tavily.",
 93 |     )
 94 |     async def search_with_tavily(
 95 |         query: Annotated[str, Field(description="The search query")],
 96 |         search_deep: Annotated[
 97 |             Literal["basic", "advanced"], Field(default="basic", description="The search depth")
 98 |         ] = "basic",
 99 |         topic: Annotated[Literal["general", "news"], Field(default="general", description="The topic")] = "general",
100 |         time_range: Annotated[
101 |             Literal["day", "week", "month", "year", "d", "w", "m", "y"] | None,
102 |             Field(default=None, description="The time range"),
103 |         ] = None,
104 |     ) -> list[dict[str, Any]] | dict[str, Any]:
105 |         client = AsyncTavilyClient(Config().tavily_api_key)
106 |         results = await client.search(query, search_depth=search_deep, topic=topic, time_range=time_range)
107 |         if not results["results"]:
108 |             return {
109 |                 "success": False,
110 |                 "error": "No search results found.",
111 |             }
112 |         return results["results"]
113 | 
114 | 
115 | if Config().duckduckgo_api_key:
116 | 
117 |     @mcp.tool(
118 |         description="Search with DuckDuckGo.",
119 |     )
120 |     async def search_with_duckduckgo(
121 |         query: Annotated[str, Field(description="The search query")],
122 |         max_results: Annotated[int, Field(default=10, description="The maximum number of results")] = 10,
123 |     ) -> list[dict[str, Any]] | dict[str, Any]:
124 |         ddg = DDGS(Config().duckduckgo_api_key)
125 |         search = functools.partial(ddg.text, max_results=max_results)
126 |         results = await anyio.to_thread.run_sync(search, query)
127 |         if len(results) == 0:
128 |             return {
129 |                 "success": False,
130 |                 "error": "No search results found.",
131 |             }
132 |         return results
133 | 
```
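
A minimal sketch of the plain HTTP tools (`get_html` and `save_html`); the URL and output path are placeholders. The Tavily and DuckDuckGo search tools are only registered when the corresponding API keys are configured:

```python
# Illustrative only: URL and output path are placeholders.
import asyncio

from mcp_toolbox.web.tools import get_html, save_html


async def main() -> None:
    page = await get_html("https://example.com")
    if page["success"]:
        print(page["content"][:200])
    print(await save_html("https://example.com", "~/Downloads/example.html"))


asyncio.run(main())
```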

--------------------------------------------------------------------------------
/mcp_toolbox/xiaoyuzhoufm/tools.py:
--------------------------------------------------------------------------------

```python
  1 | """XiaoyuZhouFM podcast crawler tools."""
  2 | 
  3 | import os
  4 | import re
  5 | from typing import Annotated, Any
  6 | 
  7 | import httpx
  8 | from loguru import logger
  9 | from pydantic import Field
 10 | 
 11 | from mcp_toolbox.app import mcp
 12 | from mcp_toolbox.config import Config
 13 | 
 14 | 
 15 | class XiaoyuZhouFMCrawler:
 16 |     """XiaoyuZhouFM podcast crawler."""
 17 | 
 18 |     def __init__(self):
 19 |         """Initialize the crawler."""
 20 |         self.config = Config()
 21 | 
 22 |     async def extract_audio_url(self, url: str) -> str:
 23 |         """Extract audio URL from XiaoyuZhouFM episode page.
 24 | 
 25 |         Args:
 26 |             url: The XiaoyuZhouFM episode URL
 27 | 
 28 |         Returns:
 29 |             The audio URL
 30 | 
 31 |         Raises:
 32 |             ValueError: If the audio URL cannot be found
 33 |         """
 34 |         async with httpx.AsyncClient() as client:
 35 |             try:
 36 |                 response = await client.get(url)
 37 |                 response.raise_for_status()
 38 |                 html_content = response.text
 39 | 
 40 |                 # Use regex to find the og:audio meta tag
 41 |                 pattern = r'<meta\s+property="og:audio"\s+content="([^"]+)"'
 42 |                 match = re.search(pattern, html_content)
 43 | 
 44 |                 if not match:
 45 |                     raise ValueError("Could not find audio URL in the page")
 46 | 
 47 |                 audio_url = match.group(1)
 48 |                 return audio_url
 49 | 
 50 |             except httpx.HTTPStatusError as e:
 51 |                 raise ValueError(f"HTTP error: {e.response.status_code} - {e.response.reason_phrase}") from e
 52 |             except httpx.RequestError as e:
 53 |                 raise ValueError(f"Request error: {e}") from e
 54 | 
 55 |     async def download_audio(self, audio_url: str, output_path: str) -> str:
 56 |         """Download audio file from URL.
 57 | 
 58 |         Args:
 59 |             audio_url: The audio file URL
 60 |             output_path: The path to save the audio file
 61 | 
 62 |         Returns:
 63 |             The path to the downloaded file
 64 | 
 65 |         Raises:
 66 |             ValueError: If the download fails
 67 |         """
 68 |         # Create directory if it doesn't exist
 69 |         output_dir = os.path.dirname(output_path)
 70 |         if output_dir:
 71 |             os.makedirs(output_dir, exist_ok=True)
 72 | 
 73 |         async with httpx.AsyncClient() as client:
 74 |             try:
 75 |                 logger.info(f"Downloading audio from {audio_url}")
 76 |                 response = await client.get(audio_url)
 77 |                 response.raise_for_status()
 78 | 
 79 |                 with open(output_path, "wb") as f:
 80 |                     f.write(response.content)
 81 | 
 82 |                 logger.info(f"Audio saved to {output_path}")
 83 |                 return output_path
 84 | 
 85 |             except httpx.HTTPStatusError as e:
 86 |                 raise ValueError(f"HTTP error: {e.response.status_code} - {e.response.reason_phrase}") from e
 87 |             except httpx.RequestError as e:
 88 |                 raise ValueError(f"Request error: {e}") from e
 89 |             except OSError as e:
 90 |                 raise ValueError(f"IO error: {e}") from e
 91 | 
 92 | 
 93 | # Initialize crawler
 94 | crawler = XiaoyuZhouFMCrawler()
 95 | 
 96 | 
 97 | @mcp.tool(description="Crawl and download a podcast episode from XiaoyuZhouFM.")
 98 | async def xiaoyuzhoufm_download(
 99 |     xiaoyuzhoufm_url: Annotated[str, Field(description="The URL of the XiaoyuZhouFM episode")],
100 |     output_dir: Annotated[str, Field(description="The directory to save the audio file")],
101 | ) -> dict[str, Any]:
102 |     """Crawl and download a podcast episode from XiaoyuZhouFM.
103 | 
104 |     Args:
105 |         xiaoyuzhoufm_url: The URL of the XiaoyuZhouFM episode
106 |         output_dir: The directory to save the audio file
107 | 
108 |     Returns:
109 |         A dictionary containing the audio URL and the path to the downloaded file
110 |     """
111 |     try:
112 |         # Validate URL
113 |         if not xiaoyuzhoufm_url.startswith("https://www.xiaoyuzhoufm.com/episode/"):
114 |             raise ValueError("Invalid XiaoyuZhouFM URL. URL should start with 'https://www.xiaoyuzhoufm.com/episode/'")
115 | 
116 |         # Extract episode ID from URL
117 |         episode_id = xiaoyuzhoufm_url.split("/")[-1]
118 |         if not episode_id:
119 |             episode_id = "episode"
120 | 
121 |         # Extract audio URL
122 |         audio_url = await crawler.extract_audio_url(xiaoyuzhoufm_url)
123 | 
124 |         # Determine file extension from audio URL
125 |         file_extension = "m4a"
126 |         if "." in audio_url.split("/")[-1]:
127 |             file_extension = audio_url.split("/")[-1].split(".")[-1]
128 | 
129 |         # Create output path with episode ID as filename
130 |         output_path = os.path.join(output_dir, f"{episode_id}.{file_extension}")
131 | 
132 |         # Download audio
133 |         downloaded_path = await crawler.download_audio(audio_url, output_path)
134 | 
135 |         return {
136 |             "audio_url": audio_url,
137 |             "downloaded_path": downloaded_path,
138 |             "message": f"Successfully downloaded podcast to {downloaded_path}",
139 |         }
140 |     except Exception as e:
141 |         return {
142 |             "error": str(e),
143 |             "message": f"Failed to download podcast: {e!s}",
144 |         }
145 | 
```
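
A minimal invocation sketch; the episode URL and output directory are placeholders, and running it downloads a real audio file over the network:

```python
# Illustrative only: episode URL and output directory are placeholders.
import asyncio

from mcp_toolbox.xiaoyuzhoufm.tools import xiaoyuzhoufm_download


async def main() -> None:
    result = await xiaoyuzhoufm_download(
        "https://www.xiaoyuzhoufm.com/episode/<episode-id>",
        "/tmp/podcasts",
    )
    print(result)


asyncio.run(main())
```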

--------------------------------------------------------------------------------
/tests/xiaoyuzhoufm/test_xiaoyuzhoufm_tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for XiaoyuZhouFM tools."""
  2 | 
  3 | import json
  4 | import os
  5 | from pathlib import Path
  6 | from unittest.mock import AsyncMock, MagicMock, patch
  7 | 
  8 | import pytest
  9 | 
 10 | from mcp_toolbox.xiaoyuzhoufm.tools import XiaoyuZhouFMCrawler, xiaoyuzhoufm_download
 11 | 
 12 | 
 13 | # Helper function to load mock data
 14 | def load_mock_data(filename):
 15 |     mock_dir = Path(__file__).parent.parent / "mock" / "xiaoyuzhoufm"
 16 |     mock_dir.mkdir(parents=True, exist_ok=True)
 17 |     file_path = mock_dir / filename
 18 | 
 19 |     if not file_path.exists():
 20 |         # Create empty mock data if it doesn't exist
 21 |         mock_data = {"mock": "data"}
 22 |         with open(file_path, "w") as f:
 23 |             json.dump(mock_data, f)
 24 | 
 25 |     with open(file_path) as f:
 26 |         return json.load(f)
 27 | 
 28 | 
 29 | # Mock HTML content with audio URL
 30 | MOCK_HTML_CONTENT = """
 31 | <!DOCTYPE html>
 32 | <html>
 33 | <head>
 34 |     <meta property="og:audio" content="https://media.example.com/podcasts/episode123.m4a">
 35 |     <title>Test Episode</title>
 36 | </head>
 37 | <body>
 38 |     <h1>Test Episode</h1>
 39 | </body>
 40 | </html>
 41 | """
 42 | 
 43 | 
 44 | # Test XiaoyuZhouFMCrawler.extract_audio_url method
 45 | @pytest.mark.asyncio
 46 | async def test_extract_audio_url():
 47 |     # Create a mock response
 48 |     mock_response = MagicMock()
 49 |     mock_response.text = MOCK_HTML_CONTENT
 50 |     mock_response.raise_for_status = MagicMock()  # Changed from AsyncMock to MagicMock
 51 | 
 52 |     # Create a mock client
 53 |     mock_client = AsyncMock()
 54 |     mock_client.__aenter__.return_value.get.return_value = mock_response
 55 | 
 56 |     # Patch httpx.AsyncClient
 57 |     with patch("httpx.AsyncClient", return_value=mock_client):
 58 |         crawler = XiaoyuZhouFMCrawler()
 59 |         audio_url = await crawler.extract_audio_url("https://www.xiaoyuzhoufm.com/episode/test")
 60 | 
 61 |     # Assert the audio URL is extracted correctly
 62 |     assert audio_url == "https://media.example.com/podcasts/episode123.m4a"
 63 | 
 64 | 
 65 | # Test XiaoyuZhouFMCrawler.download_audio method
 66 | @pytest.mark.asyncio
 67 | async def test_download_audio(tmp_path):
 68 |     # Create a mock response with binary content
 69 |     mock_response = MagicMock()
 70 |     mock_response.content = b"test audio content"
 71 |     mock_response.raise_for_status = MagicMock()  # Changed from AsyncMock to MagicMock
 72 | 
 73 |     # Create a mock client
 74 |     mock_client = AsyncMock()
 75 |     mock_client.__aenter__.return_value.get.return_value = mock_response
 76 | 
 77 |     # Patch httpx.AsyncClient
 78 |     with patch("httpx.AsyncClient", return_value=mock_client):
 79 |         crawler = XiaoyuZhouFMCrawler()
 80 |         output_path = str(tmp_path / "test_audio.m4a")
 81 |         downloaded_path = await crawler.download_audio("https://media.example.com/podcasts/episode123.m4a", output_path)
 82 | 
 83 |     # Assert the file was downloaded correctly
 84 |     assert downloaded_path == output_path
 85 |     assert os.path.exists(output_path)
 86 |     with open(output_path, "rb") as f:
 87 |         content = f.read()
 88 |         assert content == b"test audio content"
 89 | 
 90 | 
 91 | # Test xiaoyuzhoufm_download tool
 92 | @pytest.mark.asyncio
 93 | async def test_xiaoyuzhoufm_download():
 94 |     # Mock the crawler methods
 95 |     with (
 96 |         patch("mcp_toolbox.xiaoyuzhoufm.tools.crawler.extract_audio_url") as mock_extract,
 97 |         patch("mcp_toolbox.xiaoyuzhoufm.tools.crawler.download_audio") as mock_download,
 98 |     ):
 99 |         # Set up the mocks
100 |         mock_extract.return_value = "https://media.example.com/podcasts/episode123.m4a"
101 |         mock_download.return_value = "/tmp/test/test.m4a"
102 | 
103 |         # Call the tool
104 |         result = await xiaoyuzhoufm_download("https://www.xiaoyuzhoufm.com/episode/test", "/tmp/test")
105 | 
106 |         # Assert the result
107 |         assert result["audio_url"] == "https://media.example.com/podcasts/episode123.m4a"
108 |         assert result["downloaded_path"] == "/tmp/test/test.m4a"
109 |         assert "Successfully downloaded" in result["message"]
110 | 
111 |         # Verify the mocks were called correctly
112 |         mock_extract.assert_called_once_with("https://www.xiaoyuzhoufm.com/episode/test")
113 |         # The output path should be constructed from the output_dir and episode ID
114 |         mock_download.assert_called_once_with("https://media.example.com/podcasts/episode123.m4a", "/tmp/test/test.m4a")
115 | 
116 | 
117 | # Test xiaoyuzhoufm_download tool with invalid URL
118 | @pytest.mark.asyncio
119 | async def test_xiaoyuzhoufm_download_invalid_url():
120 |     # Call the tool with an invalid URL
121 |     result = await xiaoyuzhoufm_download("https://invalid-url.com", "/tmp/test")
122 | 
123 |     # Assert the result contains an error
124 |     assert "error" in result
125 |     assert "Invalid XiaoyuZhouFM URL" in result["message"]
126 | 
127 | 
128 | # Test xiaoyuzhoufm_download tool with extraction error
129 | @pytest.mark.asyncio
130 | async def test_xiaoyuzhoufm_download_extraction_error():
131 |     # Mock the crawler methods to raise an error
132 |     with patch("mcp_toolbox.xiaoyuzhoufm.tools.crawler.extract_audio_url") as mock_extract:
133 |         # Set up the mock to raise an error
134 |         mock_extract.side_effect = ValueError("Could not find audio URL")
135 | 
136 |         # Call the tool
137 |         result = await xiaoyuzhoufm_download("https://www.xiaoyuzhoufm.com/episode/test", "/tmp/test")
138 | 
139 |         # Assert the result contains an error
140 |         assert "error" in result
141 |         assert "Could not find audio URL" in result["error"]
142 |         assert "Failed to download podcast" in result["message"]
143 | 
```

--------------------------------------------------------------------------------
/tests/enhance/test_memory.py:
--------------------------------------------------------------------------------

```python
  1 | import pytest
  2 | 
  3 | from mcp_toolbox.enhance.memory import LocalMemory, MemoryModel
  4 | 
  5 | 
  6 | @pytest.fixture
  7 | def memory_file(tmp_path):
  8 |     return tmp_path / "test-memory"
  9 | 
 10 | 
 11 | @pytest.fixture
 12 | def local_memory(memory_file):
 13 |     memory = LocalMemory("test-session", memory_file)
 14 |     # Ensure the file is empty at the start of each test
 15 |     memory.clear()
 16 |     return memory
 17 | 
 18 | 
 19 | def test_memory_basic(local_memory: LocalMemory):
 20 |     """Test basic memory operations"""
 21 |     assert local_memory.session_id == "test-session"
 22 | 
 23 |     # Store and query
 24 |     memory_model = local_memory.store("test-brief", "test-detail")
 25 |     assert isinstance(memory_model, MemoryModel)
 26 |     assert memory_model.session_id == "test-session"
 27 |     assert memory_model.brief == "test-brief"
 28 |     assert memory_model.detail == "test-detail"
 29 |     assert memory_model.embedding is not None
 30 | 
 31 |     # Query
 32 |     results = local_memory.query("test-brief")
 33 |     assert len(results) == 1
 34 |     assert results[0].brief == "test-brief"
 35 |     assert results[0].detail == "test-detail"
 36 |     assert results[0].session_id == "test-session"
 37 | 
 38 | 
 39 | def test_memory_cross_session(memory_file):
 40 |     """Test cross-session memory operations"""
 41 |     # Create two memory instances with different session IDs
 42 |     memory1 = LocalMemory("session-1", memory_file)
 43 |     memory1.clear()  # Start with a clean file
 44 | 
 45 |     # Store a memory in session 1
 46 |     memory1.store("brief-1", "detail-1")
 47 | 
 48 |     # Create a second memory instance with a different session ID
 49 |     memory2 = LocalMemory("session-2", memory_file)
 50 | 
 51 |     # Store a memory in session 2
 52 |     memory2.store("brief-2", "detail-2")
 53 | 
 54 |     # Refresh memory1 to see both entries
 55 |     memory1.current_memory = memory1._load()
 56 | 
 57 |     # Query with cross_session=True (default)
 58 |     results1 = memory1.query("brief", top_k=5, refresh=True)
 59 |     assert len(results1) == 2, f"Expected 2 results, got {len(results1)}: {results1}"
 60 | 
 61 |     # Query with cross_session=False
 62 |     results2 = memory1.query("brief", top_k=5, cross_session=False)
 63 |     assert len(results2) == 1, f"Expected 1 result, got {len(results2)}: {results2}"
 64 |     assert results2[0].session_id == "session-1"
 65 | 
 66 |     results3 = memory2.query("brief", top_k=5, cross_session=False)
 67 |     assert len(results3) == 1, f"Expected 1 result, got {len(results3)}: {results3}"
 68 |     assert results3[0].session_id == "session-2"
 69 | 
 70 | 
 71 | def test_memory_clear(memory_file):
 72 |     """Test clearing memory"""
 73 |     # Create a new memory instance
 74 |     memory = LocalMemory("test-session", memory_file)
 75 |     memory.clear()  # Start with a clean file
 76 | 
 77 |     # Store some memories
 78 |     memory.store("brief-1", "detail-1")
 79 |     memory.store("brief-2", "detail-2")
 80 | 
 81 |     # Verify memories are stored
 82 |     results = memory.query("brief", top_k=5)
 83 |     assert len(results) == 2, f"Expected 2 results, got {len(results)}: {results}"
 84 | 
 85 |     # Clear memories
 86 |     memory.clear()
 87 | 
 88 |     # Verify memories are cleared
 89 |     results = memory.query("brief", top_k=5)
 90 |     assert len(results) == 0, f"Expected 0 results, got {len(results)}: {results}"
 91 | 
 92 | 
 93 | def test_memory_empty_file(memory_file):
 94 |     """Test handling of empty memory file"""
 95 |     # Create a new memory instance with a non-existent file
 96 |     memory = LocalMemory("test-session", memory_file)
 97 |     memory.clear()  # Start with a clean file
 98 | 
 99 |     # Query should return empty list
100 |     results = memory.query("test")
101 |     assert len(results) == 0
102 | 
103 |     # Store should work even with empty file
104 |     memory.store("test-brief", "test-detail")
105 |     results = memory.query("test")
106 |     assert len(results) == 1
107 | 
108 | 
109 | def test_memory_top_k(memory_file):
110 |     """Test top_k parameter in query"""
111 |     # Create a new memory instance
112 |     memory = LocalMemory("test-session", memory_file)
113 |     memory.clear()  # Start with a clean file
114 | 
115 |     # Store multiple memories with distinct embeddings
116 |     memory.store("apple", "A fruit")
117 |     memory.store("banana", "A yellow fruit")
118 |     memory.store("orange", "A citrus fruit")
119 |     memory.store("grape", "A small fruit")
120 | 
121 |     # Query with different top_k values
122 |     results1 = memory.query("fruit", top_k=2)
123 |     assert len(results1) == 2, f"Expected 2 results, got {len(results1)}: {results1}"
124 | 
125 |     results2 = memory.query("fruit", top_k=4)
126 |     assert len(results2) == 4, f"Expected 4 results, got {len(results2)}: {results2}"
127 | 
128 |     # Query with top_k larger than available results
129 |     results3 = memory.query("fruit", top_k=10)
130 |     assert len(results3) == 4, f"Expected 4 results, got {len(results3)}: {results3}"
131 | 
132 | 
133 | def test_memory_refresh(memory_file):
134 |     """Test refresh parameter in query"""
135 |     # Create two memory instances with the same session ID and file
136 |     memory1 = LocalMemory("same-session", memory_file)
137 |     memory1.clear()  # Start with a clean file
138 | 
139 |     memory2 = LocalMemory("same-session", memory_file)
140 | 
141 |     # Store a memory using the first instance
142 |     memory1.store("test-brief", "test-detail")
143 | 
144 |     # Query using the second instance without refresh
145 |     results1 = memory2.query("test", refresh=False)
146 |     assert len(results1) == 0, f"Expected 0 results, got {len(results1)}: {results1}"
147 | 
148 |     # Query using the second instance with refresh
149 |     results2 = memory2.query("test", refresh=True)
150 |     assert len(results2) == 1, f"Expected 1 result, got {len(results2)}: {results2}"
151 | 
```

--------------------------------------------------------------------------------
/tests/command_line/test_command_line_tools.py:
--------------------------------------------------------------------------------

```python
  1 | import asyncio
  2 | from pathlib import Path
  3 | from unittest.mock import AsyncMock, MagicMock, patch
  4 | 
  5 | import pytest
  6 | 
  7 | from mcp_toolbox.command_line.tools import execute_command
  8 | 
  9 | 
 10 | # Mock for asyncio.create_subprocess_exec
 11 | class MockProcess:
 12 |     def __init__(self, stdout=b"", stderr=b"", returncode=0):
 13 |         self.stdout = stdout
 14 |         self.stderr = stderr
 15 |         self.returncode = returncode
 16 |         self.communicate = AsyncMock(return_value=(stdout, stderr))
 17 |         self.kill = MagicMock()
 18 | 
 19 | 
 20 | @pytest.mark.asyncio
 21 | async def test_execute_command_success():
 22 |     """Test successful command execution."""
 23 |     # Mock process with successful execution
 24 |     mock_process = MockProcess(stdout=b"test output", stderr=b"", returncode=0)
 25 | 
 26 |     with patch("asyncio.create_subprocess_exec", return_value=mock_process) as mock_exec:
 27 |         result = await execute_command(["echo", "test"])
 28 | 
 29 |         # Verify subprocess was called with correct arguments
 30 |         mock_exec.assert_called_once()
 31 | 
 32 |         # Verify the result contains expected fields
 33 |         assert "stdout" in result
 34 |         assert "stderr" in result
 35 |         assert "return_code" in result
 36 |         assert result["stdout"] == "test output"
 37 |         assert result["stderr"] == ""
 38 |         assert result["return_code"] == 0
 39 | 
 40 | 
 41 | @pytest.mark.asyncio
 42 | async def test_execute_command_error():
 43 |     """Test command execution with error."""
 44 |     # Mock process with error
 45 |     mock_process = MockProcess(stdout=b"", stderr=b"error message", returncode=1)
 46 | 
 47 |     with patch("asyncio.create_subprocess_exec", return_value=mock_process) as mock_exec:
 48 |         result = await execute_command(["invalid_command"])
 49 | 
 50 |         # Verify subprocess was called
 51 |         mock_exec.assert_called_once()
 52 | 
 53 |         # Verify the result contains expected fields
 54 |         assert "stdout" in result
 55 |         assert "stderr" in result
 56 |         assert "return_code" in result
 57 |         assert result["stdout"] == ""
 58 |         assert result["stderr"] == "error message"
 59 |         assert result["return_code"] == 1
 60 | 
 61 | 
 62 | @pytest.mark.asyncio
 63 | async def test_execute_command_timeout():
 64 |     """Test command execution timeout."""
 65 |     # Mock process that will time out
 66 |     mock_process = MockProcess()
 67 |     mock_process.communicate = AsyncMock(side_effect=asyncio.TimeoutError())
 68 | 
 69 |     with patch("asyncio.create_subprocess_exec", return_value=mock_process) as mock_exec:
 70 |         result = await execute_command(["sleep", "100"], timeout_seconds=1)
 71 | 
 72 |         # Verify subprocess was called
 73 |         mock_exec.assert_called_once()
 74 | 
 75 |         # Verify process was killed
 76 |         mock_process.kill.assert_called_once()
 77 | 
 78 |         # Verify the result contains expected fields
 79 |         assert "error" in result
 80 |         assert "timed out" in result["error"]
 81 |         assert result["return_code"] == 124  # Timeout return code
 82 | 
 83 | 
 84 | @pytest.mark.asyncio
 85 | async def test_execute_command_exception():
 86 |     """Test exception during command execution."""
 87 |     with patch("asyncio.create_subprocess_exec", side_effect=Exception("Test exception")) as mock_exec:
 88 |         result = await execute_command(["echo", "test"])
 89 | 
 90 |         # Verify subprocess was called
 91 |         mock_exec.assert_called_once()
 92 | 
 93 |         # Verify the result contains expected fields
 94 |         assert "error" in result
 95 |         assert "Failed to execute command" in result["error"]
 96 |         assert result["return_code"] == 1
 97 | 
 98 | 
 99 | @pytest.mark.asyncio
100 | async def test_execute_command_empty():
101 |     """Test execution with empty command."""
102 |     result = await execute_command([])
103 | 
104 |     # Verify the result contains expected fields
105 |     assert "error" in result
106 |     assert "Command cannot be empty" in result["error"]
107 |     assert result["return_code"] == 1
108 | 
109 | 
110 | @pytest.mark.asyncio
111 | async def test_execute_command_with_working_dir():
112 |     """Test command execution with working directory."""
113 |     # Mock process with successful execution
114 |     mock_process = MockProcess(stdout=b"test output", stderr=b"", returncode=0)
115 |     test_dir = "/test_dir"  # Using a non-tmp directory for testing
116 | 
117 |     with patch("asyncio.create_subprocess_exec", return_value=mock_process) as mock_exec:
118 |         result = await execute_command(["echo", "test"], working_dir=test_dir)
119 | 
120 |         # Verify subprocess was called with correct arguments
121 |         mock_exec.assert_called_once()
122 |         _, kwargs = mock_exec.call_args
123 |         assert kwargs["cwd"] == Path(test_dir)
124 | 
125 |         # Verify the result contains expected fields
126 |         assert result["return_code"] == 0
127 | 
128 | 
129 | @pytest.mark.asyncio
130 | async def test_execute_command_with_tilde_in_working_dir():
131 |     """Test command execution with tilde in working directory."""
132 |     # Mock process with successful execution
133 |     mock_process = MockProcess(stdout=b"test output", stderr=b"", returncode=0)
134 |     test_dir = "~/test_dir"  # Using tilde in path
135 | 
136 |     with (
137 |         patch("asyncio.create_subprocess_exec", return_value=mock_process) as mock_exec,
138 |         patch("pathlib.Path.expanduser", return_value=Path("/home/user/test_dir")) as mock_expanduser,
139 |     ):
140 |         result = await execute_command(["echo", "test"], working_dir=test_dir)
141 | 
142 |         # Verify expanduser was called
143 |         mock_expanduser.assert_called_once()
144 | 
145 |         # Verify subprocess was called with correct arguments
146 |         mock_exec.assert_called_once()
147 |         _, kwargs = mock_exec.call_args
148 |         assert kwargs["cwd"] == Path("/home/user/test_dir")
149 | 
150 |         # Verify the result contains expected fields
151 |         assert result["return_code"] == 0
152 | 
```
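
The tests above drive `execute_command` entirely through mocks. A small sketch of calling it against the real subprocess machinery is shown below (not a repository file; the command and working directory are arbitrary examples, and it assumes `timeout_seconds` and `working_dir` can be combined, since each appears separately in the tests).

```python
# Standalone usage sketch, not a repository file.
import asyncio

from mcp_toolbox.command_line.tools import execute_command


async def main() -> None:
    result = await execute_command(["echo", "hello"], timeout_seconds=5, working_dir="~")
    if "error" in result:
        print("command failed:", result["error"])
    else:
        print(result["return_code"], result["stdout"].strip())


asyncio.run(main())
```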

--------------------------------------------------------------------------------
/mcp_toolbox/enhance/memory.py:
--------------------------------------------------------------------------------

```python
  1 | from __future__ import annotations
  2 | 
  3 | import uuid
  4 | from functools import cache
  5 | from os import PathLike
  6 | from pathlib import Path
  7 | from typing import Annotated
  8 | 
  9 | import numpy as np
 10 | import portalocker
 11 | from fastembed import TextEmbedding
 12 | from pydantic import BaseModel, Field
 13 | 
 14 | from mcp_toolbox.config import Config
 15 | from mcp_toolbox.log import logger
 16 | 
 17 | embedding_model = TextEmbedding()
 18 | logger.info("The model BAAI/bge-small-en-v1.5 is ready to use.")
 19 | 
 20 | 
 21 | def embed_text(text: str) -> list[float]:
 22 |     return next(iter(embedding_model.embed([text])))
 23 | 
 24 | 
 25 | class MemoryModel(BaseModel):
 26 |     session_id: Annotated[str, Field(description="The session id of the memory")]
 27 |     brief: Annotated[str, Field(description="The brief information of the memory")]
 28 |     detail: Annotated[str, Field(description="The detailed information of the brief text")]
 29 |     embedding: Annotated[list[float] | None, Field(description="The embedding of the brief text")] = None
 30 | 
 31 | 
 32 | @cache
 33 | def get_current_session_memory() -> LocalMemory:
 34 |     return LocalMemory.new_session()
 35 | 
 36 | 
 37 | class LocalMemory:
 38 |     @classmethod
 39 |     def new_session(cls) -> LocalMemory:
 40 |         return cls.use_session(uuid.uuid4().hex)
 41 | 
 42 |     @classmethod
 43 |     def use_session(cls, session_id: str) -> LocalMemory:
 44 |         config = Config()
 45 |         return cls(session_id, config.memory_file)
 46 | 
 47 |     def __init__(self, session_id: str, memory_file: PathLike):
 48 |         self.session_id = session_id
 49 |         self.memory_file = Path(memory_file)
 50 | 
 51 |         self.memory_file.parent.mkdir(parents=True, exist_ok=True)
 52 |         self.memory_file.touch(exist_ok=True)
 53 |         self.current_memory: np.ndarray = self._load()
 54 | 
 55 |     def _load(self) -> np.ndarray:
 56 |         if not self.memory_file.exists():
 57 |             return np.empty((0, 4), dtype=object)
 58 | 
 59 |         try:
 60 |             with portalocker.Lock(self.memory_file, "rb") as f:
 61 |                 memory = np.load(f, allow_pickle=True)
 62 |         except Exception as e:
 63 |             logger.warning(f"Error loading memory: {e}")
 64 |             memory = np.empty((0, 4), dtype=object)
 65 | 
 66 |         return memory
 67 | 
 68 |     def store(self, brief: str, detail: str) -> MemoryModel:
 69 |         try:
 70 |             # Keep the file locked during the entire operation
 71 |             with portalocker.Lock(self.memory_file, "rb+") as f:
 72 |                 try:
 73 |                     # Load existing memory
 74 |                     current_memory = np.load(f, allow_pickle=True)
 75 |                 except (ValueError, EOFError):
 76 |                     # File is empty or not a valid numpy array
 77 |                     current_memory = np.empty((0, 4), dtype=object)
 78 | 
 79 |                 embedding = embed_text(brief)
 80 | 
 81 |                 # Append the new entry
 82 |                 if current_memory.size == 0:
 83 |                     # Initialize with first entry including all 4 fields
 84 |                     updated_memory = np.array([[self.session_id, brief, detail, embedding]], dtype=object)
 85 |                 else:
 86 |                     updated_memory = np.append(
 87 |                         current_memory,
 88 |                         np.array([[self.session_id, brief, detail, embedding]], dtype=object),
 89 |                         axis=0,
 90 |                     )
 91 | 
 92 |                 # Save the updated memory
 93 |                 f.seek(0)
 94 |                 f.truncate()
 95 |                 np.save(f, updated_memory)
 96 |         except Exception as e:
 97 |             logger.warning(f"Error storing memory: {e}")
 98 |             raise
 99 | 
100 |         self.current_memory = self._load()
101 | 
102 |         return MemoryModel(
103 |             session_id=self.session_id,
104 |             brief=brief,
105 |             detail=detail,
106 |             embedding=embedding,
107 |         )
108 | 
109 |     def query(
110 |         self,
111 |         query: str,
112 |         top_k: int = 3,
113 |         cross_session: bool = True,
114 |         refresh: bool = False,
115 |     ) -> list[MemoryModel]:
116 |         if refresh:
117 |             self.current_memory = self._load()
118 |         embedding = embed_text(query)
119 | 
120 |         # Check if memory is empty
121 |         if self.current_memory.size == 0:
122 |             return []
123 | 
124 |         # Filter by session if cross_session is False
125 |         if not cross_session:
126 |             # Create a mask for entries from the current session
127 |             session_mask = self.current_memory[:, 0] == self.session_id
128 |             if not any(session_mask):
129 |                 return []  # No entries for current session
130 | 
131 |             # Filter memory to only include current session
132 |             filtered_memory = self.current_memory[session_mask]
133 | 
134 |             # Calculate similarity between query embedding and each stored embedding
135 |             similarity = np.array([np.dot(stored_embedding, embedding) for stored_embedding in filtered_memory[:, 3]])
136 |             top_k_idx = np.argsort(similarity)[-min(top_k, len(similarity)) :]
137 | 
138 |             return [
139 |                 MemoryModel(
140 |                     session_id=filtered_memory[idx, 0],
141 |                     brief=filtered_memory[idx, 1],
142 |                     detail=filtered_memory[idx, 2],
143 |                 )
144 |                 for idx in top_k_idx
145 |             ]
146 |         else:
147 |             # Calculate similarity between query embedding and each stored embedding
148 |             similarity = np.array([
149 |                 np.dot(stored_embedding, embedding) for stored_embedding in self.current_memory[:, 3]
150 |             ])
151 |             top_k_idx = np.argsort(similarity)[-min(top_k, len(similarity)) :]
152 | 
153 |             return [
154 |                 MemoryModel(
155 |                     session_id=self.current_memory[idx, 0],
156 |                     brief=self.current_memory[idx, 1],
157 |                     detail=self.current_memory[idx, 2],
158 |                 )
159 |                 for idx in top_k_idx
160 |             ]
161 | 
162 |     def clear(self):
163 |         # Create an empty memory array
164 |         empty_memory = np.empty((0, 4), dtype=object)
165 | 
166 |         # Update the file with the empty array
167 |         with portalocker.Lock(self.memory_file, "wb") as f:
168 |             np.save(f, empty_memory)
169 | 
170 |         # Update the current memory
171 |         self.current_memory = empty_memory
172 | 
```
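
`LocalMemory` can also be used directly, outside the MCP tools. The sketch below (not a repository file) stores two entries in a throwaway memory file and queries them back; the file path and the stored strings are assumptions for the demo, and the fastembed model is downloaded on first import.

```python
# Illustrative usage sketch, not a repository file.
from pathlib import Path

from mcp_toolbox.enhance.memory import LocalMemory

memory = LocalMemory("demo-session", Path("/tmp/demo-memory"))
memory.clear()  # start from an empty store

memory.store("project tooling", "dependencies are managed with uv")
memory.store("testing", "pytest with pytest-asyncio drives the async tool tests")

# query() embeds the text and returns the top_k most similar MemoryModel entries.
for entry in memory.query("which tool manages dependencies?", top_k=1):
    print(f"{entry.brief}: {entry.detail}")
```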

--------------------------------------------------------------------------------
/tests/enhance/test_enhance_tools.py:
--------------------------------------------------------------------------------

```python
  1 | from unittest.mock import patch
  2 | 
  3 | import pytest
  4 | 
  5 | from mcp_toolbox.enhance.memory import LocalMemory, MemoryModel
  6 | from mcp_toolbox.enhance.tools import forget, get_session_id, recall, remember, think
  7 | 
  8 | 
  9 | @pytest.mark.asyncio
 10 | async def test_think_returns_dict():
 11 |     """Test that the think function returns a dictionary."""
 12 |     result = await think("Test thought")
 13 |     assert isinstance(result, dict), "think() should return a dictionary"
 14 | 
 15 | 
 16 | @pytest.mark.asyncio
 17 | async def test_think_returns_correct_thought():
 18 |     """Test that the returned dictionary contains the input thought."""
 19 |     test_thought = "This is a test thought"
 20 |     result = await think(test_thought)
 21 |     assert result == {"thought": test_thought}, "think() should return a dictionary with the input thought"
 22 | 
 23 | 
 24 | @pytest.mark.asyncio
 25 | async def test_think_with_different_thought_types():
 26 |     """Test think() with various types of thoughts."""
 27 |     test_cases = [
 28 |         "Simple string thought",
 29 |         "Thought with special characters: !@#$%^&*()",
 30 |         "Thought with numbers: 12345",
 31 |         "Thought with unicode: こんにちは 世界",
 32 |         "",  # Empty string
 33 |     ]
 34 | 
 35 |     for test_thought in test_cases:
 36 |         result = await think(test_thought)
 37 |         assert result == {"thought": test_thought}, f"Failed for thought: {test_thought}"
 38 | 
 39 | 
 40 | @pytest.fixture
 41 | def memory_file(tmp_path):
 42 |     return tmp_path / "test-memory"
 43 | 
 44 | 
 45 | @pytest.fixture
 46 | def mock_memory(memory_file):
 47 |     memory = LocalMemory("test-session", memory_file)
 48 |     memory.clear()  # Start with a clean file
 49 |     return memory
 50 | 
 51 | 
 52 | @patch("mcp_toolbox.enhance.tools.get_current_session_memory")
 53 | def test_get_session_id(mock_get_memory, mock_memory):
 54 |     """Test that get_session_id returns the correct session ID."""
 55 |     mock_get_memory.return_value = mock_memory
 56 | 
 57 |     result = get_session_id()
 58 | 
 59 |     assert result == {"session_id": "test-session"}
 60 |     mock_get_memory.assert_called_once()
 61 | 
 62 | 
 63 | @patch("mcp_toolbox.enhance.tools.get_current_session_memory")
 64 | def test_remember(mock_get_memory, mock_memory):
 65 |     """Test that remember stores a memory and returns the correct data."""
 66 |     mock_get_memory.return_value = mock_memory
 67 | 
 68 |     result = remember("test-brief", "test-detail")
 69 | 
 70 |     assert result == {
 71 |         "session_id": "test-session",
 72 |         "brief": "test-brief",
 73 |         "detail": "test-detail",
 74 |     }
 75 |     mock_get_memory.assert_called_once()
 76 | 
 77 |     # Verify the memory was stored
 78 |     memories = mock_memory.query("test-brief")
 79 |     assert len(memories) == 1
 80 |     assert memories[0].brief == "test-brief"
 81 |     assert memories[0].detail == "test-detail"
 82 | 
 83 | 
 84 | @patch("mcp_toolbox.enhance.tools.get_current_session_memory")
 85 | def test_recall_current_session(mock_get_memory, mock_memory):
 86 |     """Test that recall retrieves memories from the current session."""
 87 |     mock_get_memory.return_value = mock_memory
 88 | 
 89 |     # Store some memories
 90 |     mock_memory.store("brief-1", "detail-1")
 91 |     mock_memory.store("brief-2", "detail-2")
 92 | 
 93 |     # Recall with default parameters (current session)
 94 |     result = recall("brief")
 95 | 
 96 |     assert len(result) == 2
 97 |     assert all(isinstance(item, dict) for item in result)
 98 |     assert all("session_id" in item and "brief" in item and "detail" in item for item in result)
 99 |     mock_get_memory.assert_called()
100 | 
101 | 
102 | @patch("mcp_toolbox.enhance.tools.LocalMemory.use_session")
103 | def test_recall_specific_session(mock_use_session, mock_memory):
104 |     """Test that recall retrieves memories from a specific session."""
105 |     mock_use_session.return_value = mock_memory
106 | 
107 |     # Store some memories
108 |     mock_memory.store("brief-1", "detail-1")
109 | 
110 |     # Recall with specific session ID
111 |     result = recall("brief", session_id="specific-session")
112 | 
113 |     assert len(result) == 1
114 |     mock_use_session.assert_called_once_with("specific-session")
115 | 
116 | 
117 | @patch("mcp_toolbox.enhance.tools.get_current_session_memory")
118 | def test_recall_cross_session(mock_get_memory, mock_memory):
119 |     """Test that recall retrieves memories across sessions when cross_session is True."""
120 |     mock_get_memory.return_value = mock_memory
121 | 
122 |     # Mock the query method to simulate cross-session behavior
123 |     original_query = mock_memory.query
124 | 
125 |     def mock_query(query_text, top_k=3, cross_session=True):
126 |         if cross_session:
127 |             return [
128 |                 MemoryModel(session_id="session-1", brief="brief-1", detail="detail-1"),
129 |                 MemoryModel(session_id="session-2", brief="brief-2", detail="detail-2"),
130 |             ]
131 |         else:
132 |             return [MemoryModel(session_id="test-session", brief="brief-1", detail="detail-1")]
133 | 
134 |     mock_memory.query = mock_query
135 | 
136 |     # Recall with cross_session=True
137 |     result = recall("brief", cross_session=True)
138 | 
139 |     assert len(result) == 2
140 |     assert result[0]["session_id"] == "session-1"
141 |     assert result[1]["session_id"] == "session-2"
142 | 
143 |     # Recall with cross_session=False
144 |     result = recall("brief", cross_session=False)
145 | 
146 |     assert len(result) == 1
147 |     assert result[0]["session_id"] == "test-session"
148 | 
149 |     # Restore original query method
150 |     mock_memory.query = original_query
151 | 
152 | 
153 | @patch("mcp_toolbox.enhance.tools.get_current_session_memory")
154 | def test_recall_top_k(mock_get_memory, mock_memory):
155 |     """Test that recall respects the top_k parameter."""
156 |     mock_get_memory.return_value = mock_memory
157 | 
158 |     # Store multiple memories
159 |     for i in range(10):
160 |         mock_memory.store(f"brief-{i}", f"detail-{i}")
161 | 
162 |     # Recall with top_k=3
163 |     result = recall("brief", top_k=3)
164 | 
165 |     assert len(result) <= 3
166 |     mock_get_memory.assert_called()
167 | 
168 | 
169 | @patch("mcp_toolbox.enhance.tools.get_current_session_memory")
170 | def test_forget(mock_get_memory, mock_memory):
171 |     """Test that forget clears all memories."""
172 |     mock_get_memory.return_value = mock_memory
173 | 
174 |     # Store some memories
175 |     mock_memory.store("brief-1", "detail-1")
176 |     mock_memory.store("brief-2", "detail-2")
177 | 
178 |     # Verify memories are stored
179 |     assert len(mock_memory.query("brief")) == 2
180 | 
181 |     # Forget all memories
182 |     result = forget()
183 | 
184 |     assert result == {"message": "All memories are cleared."}
185 |     mock_get_memory.assert_called()
186 | 
187 |     # Verify memories are cleared
188 |     assert len(mock_memory.query("brief")) == 0
189 | 
```

--------------------------------------------------------------------------------
/mcp_toolbox/audio/tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Audio processing tools for transcription and analysis."""
  2 | 
  3 | import datetime
  4 | import os
  5 | from pathlib import Path
  6 | from typing import Annotated, Any
  7 | 
  8 | import whisper
  9 | from loguru import logger
 10 | from pydantic import Field
 11 | 
 12 | from mcp_toolbox.app import mcp
 13 | 
 14 | # Global variables to cache model and audio data
 15 | _model = None
 16 | _model_name = None
 17 | _audio = None
 18 | _audio_path = None
 19 | _detected_language = None
 20 | 
 21 | 
 22 | def load_model(model_name="base"):
 23 |     """
 24 |     Load and cache the Whisper model.
 25 | 
 26 |     Args:
 27 |         model_name: The name of the Whisper model to load (tiny, base, small, medium, large)
 28 | 
 29 |     Returns:
 30 |         The loaded Whisper model
 31 |     """
 32 |     global _model, _model_name
 33 | 
 34 |     # Load model if not loaded or if model name has changed
 35 |     if _model is None or _model_name != model_name:
 36 |         logger.info(f"Loading Whisper model: {model_name}")
 37 |         _model = whisper.load_model(model_name)
 38 |         _model_name = model_name
 39 | 
 40 |     return _model
 41 | 
 42 | 
 43 | def load_audio(audio_path, model_name="base"):
 44 |     """
 45 |     Load and cache the audio file.
 46 | 
 47 |     Args:
 48 |         audio_path: The path to the audio file
 49 |         model_name: The name of the Whisper model to use for language detection
 50 | 
 51 |     Returns:
 52 |         The loaded audio data
 53 |     """
 54 |     global _audio, _audio_path, _detected_language, _model
 55 | 
 56 |     # Ensure model is loaded
 57 |     model = load_model(model_name)
 58 | 
 59 |     # Only reload if it's a different file or not loaded yet
 60 |     audio_path = Path(audio_path).expanduser().resolve().absolute().as_posix()
 61 |     if _audio is None or _audio_path != audio_path:
 62 |         logger.info(f"Loading audio: {audio_path}")
 63 |         _audio = whisper.load_audio(audio_path)
 64 |         _audio_path = audio_path
 65 | 
 66 |         # Get audio duration in seconds
 67 |         audio_duration = len(_audio) / 16000  # Whisper uses 16kHz audio
 68 |         logger.info(f"Audio duration: {datetime.timedelta(seconds=int(audio_duration))!s}")
 69 | 
 70 |         # Detect language from the first chunk
 71 |         chunk_samples = 30 * 16000  # Use 30 seconds for language detection
 72 |         first_chunk = whisper.pad_or_trim(_audio[:chunk_samples])
 73 |         mel = whisper.log_mel_spectrogram(first_chunk).to(model.device)
 74 |         _, probs = model.detect_language(mel)
 75 |         _detected_language = max(probs, key=probs.get)
 76 |         logger.info(f"Detected language: {_detected_language}")
 77 | 
 78 |     return _audio
 79 | 
 80 | 
 81 | @mcp.tool(description="Get the length of an audio file in seconds.")
 82 | async def get_audio_length(
 83 |     audio_path: Annotated[str, Field(description="The path to the audio file")],
 84 | ) -> dict[str, Any]:
 85 |     """Get the length of an audio file in seconds.
 86 | 
 87 |     Args:
 88 |         audio_path: The path to the audio file
 89 | 
 90 |     Returns:
 91 |         A dictionary containing the audio length in seconds and formatted time
 92 |     """
 93 |     try:
 94 |         if not os.path.exists(audio_path):
 95 |             raise ValueError(f"Audio file not found: {audio_path}")
 96 | 
 97 |         # Load audio
 98 |         audio = whisper.load_audio(audio_path)
 99 | 
100 |         # Calculate duration
101 |         audio_duration_seconds = len(audio) / 16000  # Whisper uses 16kHz audio
102 |         formatted_duration = str(datetime.timedelta(seconds=int(audio_duration_seconds)))
103 | 
104 |         return {
105 |             "duration_seconds": audio_duration_seconds,
106 |             "formatted_duration": formatted_duration,
107 |             "message": f"Audio length: {formatted_duration} ({audio_duration_seconds:.2f} seconds)",
108 |         }
109 |     except Exception as e:
110 |         return {
111 |             "error": str(e),
112 |             "message": f"Failed to get audio length: {e!s}",
113 |         }
114 | 
115 | 
116 | @mcp.tool(description="Get transcribed text from a specific time range in an audio file.")
117 | async def get_audio_text(
118 |     audio_path: Annotated[str, Field(description="The path to the audio file")],
119 |     start_time: Annotated[float, Field(description="Start time in seconds")],
120 |     end_time: Annotated[float, Field(description="End time in seconds")],
121 |     model_name: Annotated[
122 |         str, Field(default="base", description="Whisper model name: tiny, base, small, medium, large")
123 |     ] = "base",
124 | ) -> dict[str, Any]:
125 |     """Extract and transcribe text from a specific time range in an audio file.
126 | 
127 |     Args:
128 |         audio_path: The path to the audio file
129 |         start_time: Start time in seconds
130 |         end_time: End time in seconds
131 |         model_name: Whisper model name (tiny, base, small, medium, large);
132 |             the initial prompt is derived automatically from the detected language
133 | 
134 |     Returns:
135 |         A dictionary containing the transcribed text and time range
136 |     """
137 |     try:
138 |         if not os.path.exists(audio_path):
139 |             raise ValueError(f"Audio file not found: {audio_path}")
140 | 
141 |         # Load audio to detect language if not already loaded
142 |         _ = load_audio(audio_path, model_name)
143 |         if _detected_language == "zh":
144 |             initial_prompt = "以下是普通话的句子"
145 |         elif _detected_language == "en":
146 |             initial_prompt = "The following is English speech"
147 |         else:
148 |             initial_prompt = ""
149 | 
150 |         # Load model and audio (uses cached versions if already loaded)
151 |         model = load_model(model_name)
152 |         audio = load_audio(audio_path, model_name)
153 | 
154 |         # Convert times to sample indices
155 |         sample_rate = 16000  # Whisper uses 16kHz audio
156 |         start_sample = int(start_time * sample_rate)
157 |         end_sample = int(end_time * sample_rate)
158 | 
159 |         # Ensure indices are within bounds
160 |         audio_length = len(audio)
161 |         start_sample = max(0, min(start_sample, audio_length - 1))
162 |         end_sample = max(start_sample, min(end_sample, audio_length))
163 | 
164 |         # Extract the requested audio segment
165 |         segment = audio[start_sample:end_sample]
166 | 
167 |         # If segment is too short, pad it
168 |         if len(segment) < 0.5 * sample_rate:  # Less than 0.5 seconds
169 |             logger.warning("Audio segment is very short, results may be poor")
170 |             segment = whisper.pad_or_trim(segment, int(0.5 * sample_rate))
171 | 
172 |         # Transcribe the segment
173 |         result = model.transcribe(
174 |             segment,
175 |             language=_detected_language,
176 |             initial_prompt=initial_prompt,
177 |             verbose=False,
178 |         )
179 | 
180 |         # Format time range for display
181 |         start_formatted = str(datetime.timedelta(seconds=int(start_time)))
182 |         end_formatted = str(datetime.timedelta(seconds=int(end_time)))
183 | 
184 |         # Extract and return the text
185 |         transcribed_text = result["text"].strip()
186 | 
187 |         return {
188 |             "text": transcribed_text,
189 |             "start_time": start_time,
190 |             "end_time": end_time,
191 |             "time_range": f"{start_formatted} - {end_formatted}",
192 |             "language": _detected_language,
193 |             "message": "Successfully transcribed audio",
194 |         }
195 |     except Exception as e:
196 |         return {
197 |             "error": str(e),
198 |             "message": f"Failed to transcribe audio: {e!s}",
199 |         }
200 | 
```
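
A minimal sketch of calling the audio tools directly follows (not a repository file). The audio path is a placeholder, and Whisper downloads the model weights on first use.

```python
# Standalone usage sketch, not a repository file; the audio path is a placeholder.
import asyncio

from mcp_toolbox.audio.tools import get_audio_length, get_audio_text


async def main() -> None:
    path = "/path/to/episode.m4a"  # placeholder path

    info = await get_audio_length(path)
    print(info["message"])

    if "error" not in info:
        # Transcribe the first 30 seconds with the default "base" model.
        result = await get_audio_text(path, start_time=0.0, end_time=30.0)
        print(result.get("text") or result.get("error"))


asyncio.run(main())
```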

--------------------------------------------------------------------------------
/mcp_toolbox/flux/api.py:
--------------------------------------------------------------------------------

```python
  1 | import asyncio
  2 | import io
  3 | import os
  4 | from pathlib import Path
  5 | 
  6 | import httpx
  7 | from PIL import Image
  8 | 
  9 | API_URL = "https://api.bfl.ml"
 10 | API_ENDPOINTS = {
 11 |     "flux.1-pro": "flux-pro",
 12 |     "flux.1-dev": "flux-dev",
 13 |     "flux.1.1-pro": "flux-pro-1.1",
 14 | }
 15 | 
 16 | 
 17 | class ApiException(Exception):
 18 |     def __init__(self, status_code: int, detail: str | list[dict] | None = None):
 19 |         super().__init__()
 20 |         self.detail = detail
 21 |         self.status_code = status_code
 22 | 
 23 |     def __str__(self) -> str:
 24 |         return self.__repr__()
 25 | 
 26 |     def __repr__(self) -> str:
 27 |         if self.detail is None:
 28 |             message = None
 29 |         elif isinstance(self.detail, str):
 30 |             message = self.detail
 31 |         else:
 32 |             message = "[" + ",".join(d["msg"] for d in self.detail) + "]"
 33 |         return f"ApiException({self.status_code=}, {message=}, detail={self.detail})"
 34 | 
 35 | 
 36 | class ImageRequest:
 37 |     def __init__(
 38 |         self,
 39 |         # api inputs
 40 |         prompt: str,
 41 |         name: str = "flux.1.1-pro",
 42 |         width: int | None = None,
 43 |         height: int | None = None,
 44 |         num_steps: int | None = None,
 45 |         prompt_upsampling: bool | None = None,
 46 |         seed: int | None = None,
 47 |         guidance: float | None = None,
 48 |         interval: float | None = None,
 49 |         safety_tolerance: int | None = None,
 50 |         # behavior of this class
 51 |         validate: bool = True,
 52 |         api_key: str | None = None,
 53 |     ):
 54 |         """
 55 |         Manages an image generation request to the API.
 56 | 
 57 |         All parameters not specified will use the API defaults.
 58 | 
 59 |         Args:
 60 |             prompt: Text prompt for image generation.
 61 |             width: Width of the generated image in pixels. Must be a multiple of 32.
 62 |             height: Height of the generated image in pixels. Must be a multiple of 32.
 63 |             name: Which model version to use
 64 |             num_steps: Number of steps for the image generation process.
 65 |             prompt_upsampling: Whether to perform upsampling on the prompt.
 66 |             seed: Optional seed for reproducibility.
 67 |             guidance: Guidance scale for image generation.
 68 |             safety_tolerance: Tolerance level for input and output moderation.
 69 |                  Between 0 and 6, 0 being most strict, 6 being least strict.
 70 |             validate: Run input validation
 71 |             api_key: Your API key if not provided by the environment
 72 | 
 73 |         Raises:
 74 |             ValueError: For invalid input, when `validate`
 75 |             ApiException: For errors raised from the API
 76 |         """
 77 |         if validate:
 78 |             if name not in API_ENDPOINTS:
 79 |                 raise ValueError(f"Invalid model {name}")
 80 |             elif width is not None and width % 32 != 0:
 81 |                 raise ValueError(f"width must be divisible by 32, got {width}")
 82 |             elif width is not None and not (256 <= width <= 1440):
 83 |                 raise ValueError(f"width must be between 256 and 1440, got {width}")
 84 |             elif height is not None and height % 32 != 0:
 85 |                 raise ValueError(f"height must be divisible by 32, got {height}")
 86 |             elif height is not None and not (256 <= height <= 1440):
 87 |                 raise ValueError(f"height must be between 256 and 1440, got {height}")
 88 |             elif num_steps is not None and not (1 <= num_steps <= 50):
 89 |                 raise ValueError(f"steps must be between 1 and 50, got {num_steps}")
 90 |             elif guidance is not None and not (1.5 <= guidance <= 5.0):
 91 |                 raise ValueError(f"guidance must be between 1.5 and 5, got {guidance}")
 92 |             elif interval is not None and not (1.0 <= interval <= 4.0):
 93 |                 raise ValueError(f"interval must be between 1 and 4, got {interval}")
 94 |             elif safety_tolerance is not None and not (0 <= safety_tolerance <= 6.0):
 95 |                 raise ValueError(f"safety_tolerance must be between 0 and 6, got {safety_tolerance}")
 96 | 
 97 |             if name == "flux.1-dev":
 98 |                 if interval is not None:
 99 |                     raise ValueError("Interval is not supported for flux.1-dev")
100 |             if name == "flux.1.1-pro":
101 |                 if interval is not None or num_steps is not None or guidance is not None:
102 |                     raise ValueError("Interval, num_steps and guidance are not supported for flux.1.1-pro")
103 | 
104 |         self.name = name
105 |         self.request_json = {
106 |             "prompt": prompt,
107 |             "width": width,
108 |             "height": height,
109 |             "steps": num_steps,
110 |             "prompt_upsampling": prompt_upsampling,
111 |             "seed": seed,
112 |             "guidance": guidance,
113 |             "interval": interval,
114 |             "safety_tolerance": safety_tolerance,
115 |         }
116 |         self.request_json = {key: value for key, value in self.request_json.items() if value is not None}
117 | 
118 |         self.request_id: str | None = None
119 |         self.result: dict | None = None
120 |         self._image_bytes: bytes | None = None
121 |         self._url: str | None = None
122 |         if api_key is None:
123 |             self.api_key = os.environ.get("BFL_API_KEY")
124 |         else:
125 |             self.api_key = api_key
126 | 
127 |     async def request(self):
128 |         """
129 |         Request to generate the image.
130 |         """
131 |         if self.request_id is not None:
132 |             return
133 |         async with httpx.AsyncClient() as client:
134 |             response = await client.post(
135 |                 f"{API_URL}/v1/{API_ENDPOINTS[self.name]}",
136 |                 headers={
137 |                     "accept": "application/json",
138 |                     "x-key": self.api_key,
139 |                     "Content-Type": "application/json",
140 |                 },
141 |                 json=self.request_json,
142 |             )
143 |             result = response.json()
144 |             if response.status_code != 200:
145 |                 raise ApiException(status_code=response.status_code, detail=result.get("detail"))
146 |             self.request_id = result["id"]
147 | 
148 |     async def retrieve(self) -> dict:
149 |         """
150 |         Wait for the generation to finish and retrieve response.
151 |         """
152 |         if self.request_id is None:
153 |             await self.request()
154 |         while self.result is None:
155 |             async with httpx.AsyncClient() as client:
156 |                 response = await client.get(
157 |                     f"{API_URL}/v1/get_result",
158 |                     headers={
159 |                         "accept": "application/json",
160 |                         "x-key": self.api_key,
161 |                     },
162 |                     params={
163 |                         "id": self.request_id,
164 |                     },
165 |                 )
166 |                 result = response.json()
167 |                 if "status" not in result:
168 |                     raise ApiException(status_code=response.status_code, detail=result.get("detail"))
169 |                 elif result["status"] == "Ready":
170 |                     self.result = result["result"]
171 |                 elif result["status"] == "Pending":
172 |                     await asyncio.sleep(0.5)
173 |                 else:
174 |                     raise ApiException(status_code=200, detail=f"API returned status '{result['status']}'")
175 |         return self.result
176 | 
177 |     async def get_bytes(self) -> bytes:
178 |         """
179 |         Generated image as bytes.
180 |         """
181 |         if self._image_bytes is None:
182 |             url = await self.get_url()
183 |             async with httpx.AsyncClient() as client:
184 |                 response = await client.get(url)
185 |                 if response.status_code == 200:
186 |                     self._image_bytes = response.content
187 |                 else:
188 |                     raise ApiException(status_code=response.status_code)
189 |         return self._image_bytes
190 | 
191 |     async def get_url(self) -> str:
192 |         """
193 |         Public url to retrieve the image from
194 |         """
195 |         if self._url is None:
196 |             result = await self.retrieve()
197 |             self._url = result["sample"]
198 |         return self._url
199 | 
200 |     async def get_image(self) -> Image.Image:
201 |         """
202 |         Load the image as a PIL Image
203 |         """
204 |         bytes_data = await self.get_bytes()
205 |         return Image.open(io.BytesIO(bytes_data))
206 | 
207 |     async def save(self, path: str) -> str:
208 |         """
209 |         Save the generated image to a local path
210 | 
211 |         Args:
212 |             path: The path to save the image to
213 | 
214 |         Returns:
215 |             The full path where the image was saved
216 |         """
217 |         url = await self.get_url()
218 |         suffix = Path(url).suffix
219 |         if not path.endswith(suffix):
220 |             path = path + suffix
221 |         Path(path).resolve().parent.mkdir(parents=True, exist_ok=True)
222 |         bytes_data = await self.get_bytes()
223 |         with open(path, "wb") as file:
224 |             file.write(bytes_data)
225 |         return path
226 | 
```
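
An illustrative sketch of generating and saving an image with `ImageRequest` is shown below (not a repository file). It assumes `BFL_API_KEY` is set in the environment, which the class falls back to when no `api_key` is passed; the prompt, dimensions, and output path are example values.

```python
# Illustrative usage sketch, not a repository file; assumes BFL_API_KEY is exported.
import asyncio

from mcp_toolbox.flux.api import ImageRequest


async def main() -> None:
    request = ImageRequest(
        prompt="a lighthouse at dusk, watercolor",  # example prompt, not from the repo
        name="flux.1.1-pro",
        width=1024,  # must be a multiple of 32 within 256-1440
        height=768,
    )
    # save() appends the suffix of the result URL and creates parent directories.
    saved_path = await request.save("/tmp/flux-demo")
    print("saved to", saved_path)


asyncio.run(main())
```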

--------------------------------------------------------------------------------
/tests/web/test_web_tools.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for web tools."""
  2 | 
  3 | from unittest.mock import AsyncMock, MagicMock, patch
  4 | 
  5 | import pytest
  6 | from httpx import HTTPStatusError, RequestError, Response
  7 | 
  8 | from mcp_toolbox.web.tools import (
  9 |     get_html,
 10 |     get_http_content,
 11 |     save_html,
 12 | )
 13 | 
 14 | # Check if optional dependencies are available
 15 | try:
 16 |     from mcp_toolbox.web.tools import search_with_tavily
 17 | 
 18 |     TAVILY_AVAILABLE = True
 19 | except ImportError:
 20 |     TAVILY_AVAILABLE = False
 21 | 
 22 | try:
 23 |     from mcp_toolbox.web.tools import search_with_duckduckgo
 24 | 
 25 |     DUCKDUCKGO_AVAILABLE = True
 26 | except ImportError:
 27 |     DUCKDUCKGO_AVAILABLE = False
 28 | 
 29 | 
 30 | # Mock HTML content for testing
 31 | MOCK_HTML_CONTENT = """
 32 | <!DOCTYPE html>
 33 | <html>
 34 | <head>
 35 |     <title>Test Page</title>
 36 | </head>
 37 | <body>
 38 |     <h1>Hello World</h1>
 39 |     <p>This is a test page.</p>
 40 | </body>
 41 | </html>
 42 | """
 43 | 
 44 | 
 45 | # Helper function to create a mock response
 46 | def create_mock_response(status_code=200, content=MOCK_HTML_CONTENT):
 47 |     mock_response = MagicMock(spec=Response)
 48 |     mock_response.status_code = status_code
 49 |     mock_response.text = content
 50 |     mock_response.raise_for_status = MagicMock()
 51 |     if status_code >= 400:
 52 |         mock_response.raise_for_status.side_effect = HTTPStatusError(
 53 |             "HTTP Error", request=MagicMock(), response=mock_response
 54 |         )
 55 |     return mock_response
 56 | 
 57 | 
 58 | # Test get_http_content function
 59 | @pytest.mark.asyncio
 60 | async def test_get_http_content_success():
 61 |     """Test successful HTTP request."""
 62 |     # Create a mock response
 63 |     mock_response = create_mock_response()
 64 | 
 65 |     # Create a mock client
 66 |     mock_client = AsyncMock()
 67 |     mock_client.request.return_value = mock_response
 68 | 
 69 |     # Patch the client
 70 |     with patch("mcp_toolbox.web.tools.client", mock_client):
 71 |         # Call the function
 72 |         result = await get_http_content("https://example.com")
 73 | 
 74 |         # Verify the client was called with the correct arguments
 75 |         mock_client.request.assert_called_once_with(
 76 |             "GET",
 77 |             "https://example.com",
 78 |             headers=None,
 79 |             params=None,
 80 |             data=None,
 81 |             timeout=60,
 82 |         )
 83 | 
 84 |         # Verify the result is as expected
 85 |         assert result == MOCK_HTML_CONTENT
 86 | 
 87 | 
 88 | @pytest.mark.asyncio
 89 | async def test_get_http_content_error():
 90 |     """Test HTTP request with error."""
 91 |     # Create a mock response with error status
 92 |     mock_response = create_mock_response(status_code=404)
 93 | 
 94 |     # Create a mock client
 95 |     mock_client = AsyncMock()
 96 |     mock_client.request.return_value = mock_response
 97 | 
 98 |     # Patch the client and expect an exception
 99 |     with patch("mcp_toolbox.web.tools.client", mock_client), pytest.raises(HTTPStatusError):
100 |         await get_http_content("https://example.com")
101 | 
102 | 
103 | @pytest.mark.asyncio
104 | async def test_get_http_content_request_error():
105 |     """Test HTTP request with request error."""
106 |     # Create a mock client that raises a RequestError
107 |     mock_client = AsyncMock()
108 |     mock_client.request.side_effect = RequestError("Connection error", request=MagicMock())
109 | 
110 |     # Patch the client and expect an exception
111 |     with patch("mcp_toolbox.web.tools.client", mock_client), pytest.raises(RequestError):
112 |         await get_http_content("https://example.com")
113 | 
114 | 
115 | # Test save_html tool
116 | @pytest.mark.asyncio
117 | async def test_save_html_success():
118 |     """Test successful saving of HTML."""
119 |     # Mock get_http_content to return HTML content
120 |     with (
121 |         patch("mcp_toolbox.web.tools.get_http_content", return_value=MOCK_HTML_CONTENT),
122 |         patch("pathlib.Path.write_text") as mock_write_text,
123 |         patch("pathlib.Path.mkdir") as mock_mkdir,
124 |     ):
125 |         # Call the function
126 |         result = await save_html("https://example.com", "/tmp/test.html")
127 | 
128 |         # Verify the result is as expected
129 |         assert result["success"] is True
130 |         assert result["url"] == "https://example.com"
131 |         assert "/tmp/test.html" in result["output_path"]
132 | 
133 |         # Verify the file operations were called
134 |         mock_mkdir.assert_called_once_with(parents=True, exist_ok=True)
135 |         mock_write_text.assert_called_once_with(MOCK_HTML_CONTENT)
136 | 
137 | 
138 | @pytest.mark.asyncio
139 | async def test_save_html_network_error():
140 |     """Test saving HTML with network error."""
141 |     # Mock get_http_content to raise an exception
142 |     with patch(
143 |         "mcp_toolbox.web.tools.get_http_content",
144 |         side_effect=Exception("Network error"),
145 |     ):
146 |         # Call the function
147 |         result = await save_html("https://example.com", "/tmp/test.html")
148 | 
149 |         # Verify the result is as expected
150 |         assert result["success"] is False
151 |         assert "error" in result
152 |         assert "Network error" in result["error"]
153 | 
154 | 
155 | @pytest.mark.asyncio
156 | async def test_save_html_write_error():
157 |     """Test saving HTML with file write error."""
158 |     # Mock get_http_content to return HTML content
159 |     # Mock write_text to raise an exception
160 |     with (
161 |         patch("mcp_toolbox.web.tools.get_http_content", return_value=MOCK_HTML_CONTENT),
162 |         patch("pathlib.Path.write_text", side_effect=Exception("Write error")),
163 |         patch("pathlib.Path.mkdir"),
164 |     ):
165 |         # Call the function
166 |         result = await save_html("https://example.com", "/tmp/test.html")
167 | 
168 |         # Verify the result is as expected
169 |         assert result["success"] is False
170 |         assert "error" in result
171 |         assert "Write error" in result["error"]
172 | 
173 | 
174 | # Test get_html tool
175 | @pytest.mark.asyncio
176 | async def test_get_html_success():
177 |     """Test successful retrieval of HTML."""
178 |     # Mock get_http_content to return HTML content
179 |     with patch("mcp_toolbox.web.tools.get_http_content", return_value=MOCK_HTML_CONTENT):
180 |         # Call the function
181 |         result = await get_html("https://example.com")
182 | 
183 |         # Verify the result is as expected
184 |         assert result["success"] is True
185 |         assert result["url"] == "https://example.com"
186 |         assert result["content"] == MOCK_HTML_CONTENT
187 | 
188 | 
189 | @pytest.mark.asyncio
190 | async def test_get_html_error():
191 |     """Test retrieval of HTML with error."""
192 |     # Mock get_http_content to raise an exception
193 |     with patch(
194 |         "mcp_toolbox.web.tools.get_http_content",
195 |         side_effect=Exception("Network error"),
196 |     ):
197 |         # Call the function
198 |         result = await get_html("https://example.com")
199 | 
200 |         # Verify the result is as expected
201 |         assert result["success"] is False
202 |         assert "error" in result
203 |         assert "Network error" in result["error"]
204 | 
205 | 
206 | # Test search_with_tavily tool if available
207 | if TAVILY_AVAILABLE:
208 | 
209 |     @pytest.mark.asyncio
210 |     async def test_search_with_tavily_success():
211 |         """Test successful search with Tavily."""
212 |         # Mock search results
213 |         mock_results = [
214 |             {"title": "Result 1", "url": "https://example.com/1", "content": "Content 1"},
215 |             {"title": "Result 2", "url": "https://example.com/2", "content": "Content 2"},
216 |         ]
217 | 
218 |         # Mock the Tavily client
219 |         mock_client = AsyncMock()
220 |         mock_client.search.return_value = {"results": mock_results}
221 | 
222 |         # Patch the AsyncTavilyClient
223 |         with patch("mcp_toolbox.web.tools.AsyncTavilyClient", return_value=mock_client):
224 |             # Call the function
225 |             results = await search_with_tavily("test query")
226 | 
227 |             # Verify the client was called with the correct arguments
228 |             mock_client.search.assert_called_once_with(
229 |                 "test query", search_depth="basic", topic="general", time_range=None
230 |             )
231 | 
232 |             # Verify the results are as expected
233 |             assert results == mock_results
234 | 
235 |     @pytest.mark.asyncio
236 |     async def test_search_with_tavily_no_results():
237 |         """Test search with Tavily with no results."""
238 |         # Mock empty search results
239 |         mock_results = {"results": []}
240 | 
241 |         # Mock the Tavily client
242 |         mock_client = AsyncMock()
243 |         mock_client.search.return_value = mock_results
244 | 
245 |         # Patch the AsyncTavilyClient
246 |         with patch("mcp_toolbox.web.tools.AsyncTavilyClient", return_value=mock_client):
247 |             # Call the function
248 |             result = await search_with_tavily("test query")
249 | 
250 |             # Verify the result is as expected
251 |             assert result["success"] is False
252 |             assert "error" in result
253 |             assert "No search results found" in result["error"]
254 | 
255 | 
256 | # Test search_with_duckduckgo tool if available
257 | if DUCKDUCKGO_AVAILABLE:
258 | 
259 |     @pytest.mark.asyncio
260 |     async def test_search_with_duckduckgo_success():
261 |         """Test successful search with DuckDuckGo."""
262 |         # Mock search results
263 |         mock_results = [
264 |             {"title": "Result 1", "href": "https://example.com/1", "body": "Content 1"},
265 |             {"title": "Result 2", "href": "https://example.com/2", "body": "Content 2"},
266 |         ]
267 | 
268 |         # Mock the DDGS instance
269 |         mock_ddgs = MagicMock()
270 |         mock_ddgs.text.return_value = mock_results
271 | 
272 |         # Patch the DDGS class and anyio.to_thread.run_sync
273 |         with (
274 |             patch("mcp_toolbox.web.tools.DDGS", return_value=mock_ddgs),
275 |             patch("mcp_toolbox.web.tools.anyio.to_thread.run_sync", return_value=mock_results),
276 |         ):
277 |             # Call the function
278 |             results = await search_with_duckduckgo("test query")
279 | 
280 |             # Verify the results are as expected
281 |             assert results == mock_results
282 | 
283 |     @pytest.mark.asyncio
284 |     async def test_search_with_duckduckgo_no_results():
285 |         """Test search with DuckDuckGo with no results."""
286 |         # Mock empty search results
287 |         mock_results = []
288 | 
289 |         # Mock the DDGS instance
290 |         mock_ddgs = MagicMock()
291 |         mock_ddgs.text.return_value = mock_results
292 | 
293 |         # Patch the DDGS class and anyio.to_thread.run_sync
294 |         with (
295 |             patch("mcp_toolbox.web.tools.DDGS", return_value=mock_ddgs),
296 |             patch("mcp_toolbox.web.tools.anyio.to_thread.run_sync", return_value=mock_results),
297 |         ):
298 |             # Call the function
299 |             result = await search_with_duckduckgo("test query")
300 | 
301 |             # Verify the result is as expected
302 |             assert result["success"] is False
303 |             assert "error" in result
304 |             assert "No search results found" in result["error"]
305 | 
```

--------------------------------------------------------------------------------
/llms.txt:
--------------------------------------------------------------------------------

```
  1 | # MCP-Toolbox Development Guide for LLMs
  2 | 
  3 | This guide is designed to help you (an LLM) effectively contribute to the mcp-toolbox project. It provides essential information about the project structure, development workflow, and best practices.
  4 | 
  5 | ## Project Overview
  6 | 
  7 | MCP-Toolbox is a Python package that provides tools for enhancing LLMs through the Model Context Protocol (MCP). The project implements various API integrations as MCP tools, allowing LLMs to interact with external services.
  8 | 
  9 | ### Key Components
 10 | 
 11 | - **mcp_toolbox/app.py**: Initializes the FastMCP server (see the sketch after this list)
 12 | - **mcp_toolbox/cli.py**: Command-line interface for running the MCP server
 13 | - **mcp_toolbox/config.py**: Configuration management using Pydantic
 14 | - **mcp_toolbox/figma/**: Figma API integration tools
 15 | - **tests/**: Test files for the project
 16 | 
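As a quick orientation, `mcp_toolbox/app.py` essentially creates the shared `FastMCP` instance that the tool modules import and register against. A minimal sketch, assuming the standard MCP SDK import path; the real file may differ in details such as the server name:

```python
# Minimal sketch of the shared server object (assumed layout; check app.py for specifics).
from mcp.server.fastmcp import FastMCP

# Tool modules do `from mcp_toolbox.app import mcp` and register tools via @mcp.tool(...).
mcp = FastMCP("mcp-toolbox")
```
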
 17 | ## Environment Setup
 18 | 
 19 | Always help the user set up a proper development environment using `uv`. This is the preferred package manager for this project.
 20 | 
 21 | ### Setting Up the Environment
 22 | 
 23 | ```bash
 24 | # Install uv if not already installed
 25 | curl -LsSf https://astral.sh/uv/install.sh | sh  # For macOS/Linux
 26 | # or
 27 | powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"  # For Windows
 28 | 
 29 | # Clone the repository (if not already done)
 30 | git clone https://github.com/username/mcp-toolbox.git
 31 | cd mcp-toolbox
 32 | 
 33 | # Create and activate a virtual environment
 34 | uv venv
 35 | source .venv/bin/activate  # For macOS/Linux
 36 | # or
 37 | .venv\Scripts\activate  # For Windows
 38 | 
 39 | # Install the package in development mode
 40 | uv pip install -e .
 41 | 
 42 | # Install development dependencies
 43 | uv pip install -e ".[dev]"
 44 | ```
 45 | 
 46 | ## GitHub Workflow
 47 | 
 48 | Always follow proper GitHub workflow when making changes:
 49 | 
 50 | 1. **Create a branch with a descriptive name**:
 51 |    ```bash
 52 |    # Assume the user already has their own fork
 53 |    git checkout -b feature/add-spotify-integration
 54 |    ```
 55 | 
 56 | 2. **Make your changes**: Implement the requested features or fixes
 57 | 
 58 | 3. **Run tests and checks**:
 59 |    ```bash
 60 |    make check  # Run linting and formatting
 61 |    make test   # Run tests
 62 |    ```
 63 | 
 64 | 4. **Commit your changes with descriptive messages**:
 65 |    ```bash
 66 |    git add .
 67 |    git commit -m "feat: add Spotify API integration"
 68 |    ```
 69 | 
 70 | 5. **Push your changes**:
 71 |    ```bash
 72 |    git push origin feature/add-spotify-integration
 73 |    ```
 74 | 
 75 | 6. **Create a pull request**: Guide the user to create a PR from their branch to the main repository (see the example below)
 76 | 
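For step 6, if the GitHub CLI (`gh`) is installed and authenticated, the PR can also be opened from the terminal; this is an optional convenience and the web UI works just as well:

```bash
# Optional: open the PR with the GitHub CLI (assumes gh is installed and authenticated).
gh pr create \
  --title "feat: add Spotify API integration" \
  --body "Adds Spotify search, track, and artist tools as MCP tools"
```
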
 77 | ## Adding New Tools
 78 | 
 79 | When adding new API integrations or tools, follow this pattern. Here's an example of adding a Spotify API integration:
 80 | 
 81 | ### 1. Update Config Class
 82 | 
 83 | First, update the `config.py` file to include the new API key:
 84 | 
 85 | ```python
 86 | class Config(BaseSettings):
 87 |     figma_api_key: str | None = None
 88 |     spotify_client_id: str | None = None
 89 |     spotify_client_secret: str | None = None
 90 | 
 91 |     cache_dir: str = (HOME / "cache").expanduser().resolve().absolute().as_posix()
 92 | ```
 93 | 
 94 | ### 2. Create Module Structure
 95 | 
 96 | Create a new module for the integration:
 97 | 
 98 | ```bash
 99 | mkdir -p mcp_toolbox/spotify
100 | touch mcp_toolbox/spotify/__init__.py
101 | touch mcp_toolbox/spotify/tools.py
102 | ```
103 | 
104 | ### 3. Implement API Client and Tools
105 | 
106 | In `mcp_toolbox/spotify/tools.py`:
107 | 
108 | ```python
109 | import json
110 | from typing import Any, List, Dict, Optional
111 | 
112 | import httpx
113 | from pydantic import BaseModel
114 | 
115 | from mcp_toolbox.app import mcp
116 | from mcp_toolbox.config import Config
117 | 
118 | 
119 | class SpotifyApiClient:
120 |     BASE_URL = "https://api.spotify.com/v1"
121 | 
122 |     def __init__(self):
123 |         self.config = Config()
124 |         self.access_token = None
125 | 
126 |     async def get_access_token(self) -> str:
127 |         """Get or refresh the Spotify access token."""
128 |         if not self.config.spotify_client_id or not self.config.spotify_client_secret:
129 |             raise ValueError(
130 |                 "Spotify credentials not provided. Set SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET environment variables."
131 |             )
132 | 
133 |         auth_url = "https://accounts.spotify.com/api/token"
134 | 
135 |         async with httpx.AsyncClient() as client:
136 |             response = await client.post(
137 |                 auth_url,
138 |                 data={"grant_type": "client_credentials"},
139 |                 auth=(self.config.spotify_client_id, self.config.spotify_client_secret),
140 |             )
141 |             response.raise_for_status()
142 |             data = response.json()
143 |             self.access_token = data["access_token"]
144 |             return self.access_token
145 | 
146 |     async def make_request(self, path: str, method: str = "GET", params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
147 |         """Make a request to the Spotify API."""
148 |         token = await self.get_access_token()
149 | 
150 |         async with httpx.AsyncClient() as client:
151 |             headers = {"Authorization": f"Bearer {token}"}
152 |             url = f"{self.BASE_URL}{path}"
153 | 
154 |             try:
155 |                 if method == "GET":
156 |                     response = await client.get(url, headers=headers, params=params)
157 |                 else:
158 |                     raise ValueError(f"Unsupported HTTP method: {method}")
159 | 
160 |                 response.raise_for_status()
161 |                 return response.json()
162 |             except httpx.HTTPStatusError as e:
163 |                 spotify_error = e.response.json() if e.response.content else {"status": e.response.status_code, "error": str(e)}
164 |                 raise ValueError(f"Spotify API error: {spotify_error}") from e
165 |             except httpx.RequestError as e:
166 |                 raise ValueError(f"Request error: {e!s}") from e
167 | 
168 | 
169 | # Initialize API client
170 | api_client = SpotifyApiClient()
171 | 
172 | 
173 | # Tool implementations
174 | @mcp.tool(
175 |     description="Search for tracks on Spotify. Args: query (required, The search query), limit (optional, Maximum number of results to return)"
176 | )
177 | async def spotify_search_tracks(query: str, limit: int = 10) -> Dict[str, Any]:
178 |     """Search for tracks on Spotify.
179 | 
180 |     Args:
181 |         query: The search query
182 |         limit: Maximum number of results to return (default: 10)
183 |     """
184 |     params = {"q": query, "type": "track", "limit": limit}
185 |     return await api_client.make_request("/search", params=params)
186 | 
187 | 
188 | @mcp.tool(
189 |     description="Get details about a specific track. Args: track_id (required, The Spotify ID of the track)"
190 | )
191 | async def spotify_get_track(track_id: str) -> Dict[str, Any]:
192 |     """Get details about a specific track.
193 | 
194 |     Args:
195 |         track_id: The Spotify ID of the track
196 |     """
197 |     return await api_client.make_request(f"/tracks/{track_id}")
198 | 
199 | 
200 | @mcp.tool(
201 |     description="Get an artist's top tracks. Args: artist_id (required, The Spotify ID of the artist), market (optional, An ISO 3166-1 alpha-2 country code)"
202 | )
203 | async def spotify_get_artist_top_tracks(artist_id: str, market: str = "US") -> Dict[str, Any]:
204 |     """Get an artist's top tracks.
205 | 
206 |     Args:
207 |         artist_id: The Spotify ID of the artist
208 |         market: An ISO 3166-1 alpha-2 country code (default: US)
209 |     """
210 |     params = {"market": market}
211 |     return await api_client.make_request(f"/artists/{artist_id}/top-tracks", params=params)
212 | ```
213 | 
214 | ### 4. Create Tests
215 | 
216 | Create test files for your new tools:
217 | 
218 | ```bash
219 | mkdir -p tests/spotify
220 | touch tests/spotify/test_tools.py
221 | mkdir -p tests/mock/spotify
222 | ```
223 | 
224 | ### 5. Update README
225 | 
226 | Always update the README.md when adding new environment variables or tools:
227 | 
228 | ```markdown
229 | ## Environment Variables
230 | 
231 | The following environment variables can be configured:
232 | 
233 | - `FIGMA_API_KEY`: API key for Figma integration
234 | - `SPOTIFY_CLIENT_ID`: Client ID for Spotify API
235 | - `SPOTIFY_CLIENT_SECRET`: Client Secret for Spotify API
236 | ```
237 | 
238 | ## Error Handling Best Practices
239 | 
240 | When implementing tools, follow these error handling best practices:
241 | 
242 | 1. **Graceful Degradation**: If one API key is missing, other tools should still work
243 |    ```python
244 |    async def get_access_token(self) -> str:
245 |        if not self.config.spotify_client_id or not self.config.spotify_client_secret:
246 |            raise ValueError(
247 |                "Spotify credentials not provided. Set SPOTIFY_CLIENT_ID and SPOTIFY_CLIENT_SECRET environment variables."
248 |            )
249 |        # Rest of the method...
250 |    ```
251 | 
252 | 2. **Descriptive Error Messages**: Provide clear error messages that help users understand what went wrong
253 |    ```python
254 |    except httpx.HTTPStatusError as e:
255 |        spotify_error = e.response.json() if e.response.content else {"status": e.response.status_code, "error": str(e)}
256 |        raise ValueError(f"Spotify API error: {spotify_error}") from e
257 |    ```
258 | 
259 | 3. **Proper Exception Handling**: Catch specific exceptions and handle them appropriately
260 | 
261 | 4. **Fallbacks**: Implement fallback mechanisms when possible (see the sketch after this list)
262 | 
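For items 3 and 4, here is a minimal, self-contained sketch of the fallback pattern; `primary_search` and `fallback_search` are placeholder names, not functions from this project:

```python
import anyio


async def primary_search(query: str) -> list[dict]:
    # Placeholder for a provider that may be unavailable (e.g. a missing API key).
    raise ValueError("Primary provider not configured")


async def fallback_search(query: str) -> list[dict]:
    # Placeholder for a provider that needs no credentials.
    return [{"title": "Example", "url": "https://example.com", "content": query}]


async def search_with_fallback(query: str) -> list[dict]:
    """Try the primary provider first; fall back to the secondary one on a known error."""
    try:
        return await primary_search(query)
    except ValueError:
        # Catch the specific exception the client raises rather than a bare Exception.
        return await fallback_search(query)


if __name__ == "__main__":
    print(anyio.run(search_with_fallback, "test query"))
```
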
263 | ## Testing
264 | 
265 | Always write tests for new functionality:
266 | 
267 | ```python
268 | import json
269 | from pathlib import Path
270 | from unittest.mock import patch
271 | 
272 | import pytest
273 | 
274 | from mcp_toolbox.spotify.tools import (
275 |     SpotifyApiClient,
276 |     spotify_search_tracks,
277 |     spotify_get_track,
278 |     spotify_get_artist_top_tracks,
279 | )
280 | 
281 | 
282 | # Helper function to load mock data
283 | def load_mock_data(filename):
284 |     mock_dir = Path(__file__).parent.parent / "mock" / "spotify"
285 |     file_path = mock_dir / filename
286 | 
287 |     if not file_path.exists():
288 |         # Create empty mock data if it doesn't exist
289 |         mock_data = {"mock": "data"}
290 |         with open(file_path, "w") as f:
291 |             json.dump(mock_data, f)
292 | 
293 |     with open(file_path) as f:
294 |         return json.load(f)
295 | 
296 | 
297 | # Patch the SpotifyApiClient.make_request method
298 | @pytest.fixture
299 | def mock_make_request():
300 |     with patch.object(SpotifyApiClient, "make_request") as mock:
301 |         def side_effect(path, method="GET", params=None):
302 |             if path == "/search":
303 |                 return load_mock_data("search_tracks.json")
304 |             elif path.startswith("/tracks/"):
305 |                 return load_mock_data("get_track.json")
306 |             elif path.endswith("/top-tracks"):
307 |                 return load_mock_data("get_artist_top_tracks.json")
308 |             return {"mock": "data"}
309 | 
310 |         mock.side_effect = side_effect
311 |         yield mock
312 | 
313 | 
314 | # Test spotify_search_tracks function
315 | @pytest.mark.asyncio
316 | async def test_spotify_search_tracks(mock_make_request):
317 |     result = await spotify_search_tracks("test query")
318 |     mock_make_request.assert_called_once()
319 |     assert mock_make_request.call_args[0][0] == "/search"
320 | 
321 | 
322 | # Test spotify_get_track function
323 | @pytest.mark.asyncio
324 | async def test_spotify_get_track(mock_make_request):
325 |     result = await spotify_get_track("track_id")
326 |     mock_make_request.assert_called_once()
327 |     assert mock_make_request.call_args[0][0] == "/tracks/track_id"
328 | 
329 | 
330 | # Test spotify_get_artist_top_tracks function
331 | @pytest.mark.asyncio
332 | async def test_spotify_get_artist_top_tracks(mock_make_request):
333 |     result = await spotify_get_artist_top_tracks("artist_id")
334 |     mock_make_request.assert_called_once()
335 |     assert mock_make_request.call_args[0][0] == "/artists/artist_id/top-tracks"
336 | ```
337 | 
338 | ## Documentation
339 | 
340 | When adding new tools, make sure to:
341 | 
342 | 1. Add clear docstrings to all functions and classes
343 | 2. Include detailed argument descriptions in the `@mcp.tool` decorator
344 | 3. Add type hints to all functions and methods
345 | 4. Update the README.md with new environment variables and tools
346 | 
347 | ## Final Checklist
348 | 
349 | Before submitting your changes:
350 | 
351 | 1. ✅ Run `make check` to ensure code quality
352 | 2. ✅ Run `make test` to ensure all tests pass
353 | 3. ✅ Update documentation if needed
354 | 4. ✅ Add new environment variables to README.md
355 | 5. ✅ Follow proper Git workflow (branch, commit, push)
356 | 
```