
mcp-toolbox


A comprehensive toolkit for enhancing LLM capabilities through the Model Context Protocol (MCP). This package provides a collection of tools that allow LLMs to interact with external services and APIs, extending their functionality beyond text generation.

Features

*nix is our main target, but Windows should work too.

  • Command Line Execution: Execute any command line instruction through LLM

  • Figma Integration: Access Figma files, components, styles, and more

  • Extensible Architecture: Easily add new API integrations

  • MCP Protocol Support: Compatible with Claude Desktop and other MCP-enabled LLMs

  • Comprehensive Testing: Well-tested codebase with high test coverage

Installation

Using uv (Recommended)

We recommend using uv to manage your environment.

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh  # For macOS/Linux
# or
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"  # For Windows

Then you can use uvx "mcp-toolbox@latest" stdio as the command to run the latest version of the MCP server. Audio and memory tools are not included in the default installation; you can add them by installing an extra: [audio] for audio tools, [memory] for memory tools, or [all] for all tools.

uvx "mcp-toolbox[all]@latest" stdio

Installing via Smithery

To install Toolbox for LLM Enhancement for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @ai-zerolab/mcp-toolbox --client claude

Using pip

pip install "mcp-toolbox[all]"

Then you can use mcp-toolbox stdio as the command to run the MCP server.

Configuration

Environment Variables

The following environment variables can be configured:

  • FIGMA_API_KEY: API key for Figma integration

  • TAVILY_API_KEY: API key for Tavily integration

  • DUCKDUCKGO_API_KEY: API key for DuckDuckGo integration

  • BFL_API_KEY: API key for Flux image generation API

Memory Storage

Memory tools store data in the following locations:

  • macOS: ~/Documents/zerolab/mcp-toolbox/memory (syncs across devices via iCloud)

  • Other platforms: ~/.zerolab/mcp-toolbox/memory

Full Configuration

To use mcp-toolbox with Claude Desktop/Cline/Cursor/..., add the following to your configuration file:

{ "mcpServers": { "zerolab-toolbox": { "command": "uvx", "args": ["--prerelease=allow", "mcp-toolbox@latest", "stdio"], "env": { "FIGMA_API_KEY": "your-figma-api-key", "TAVILY_API_KEY": "your-tavily-api-key", "DUCKDUCKGO_API_KEY": "your-duckduckgo-api-key", "BFL_API_KEY": "your-bfl-api-key" } } } }

For full features:

{ "mcpServers": { "zerolab-toolbox": { "command": "uvx", "args": [ "--prerelease=allow", "--python=3.12", "mcp-toolbox[all]@latest", "stdio" ], "env": { "FIGMA_API_KEY": "your-figma-api-key", "TAVILY_API_KEY": "your-tavily-api-key", "DUCKDUCKGO_API_KEY": "your-duckduckgo-api-key", "BFL_API_KEY": "your-bfl-api-key" } } } }

You can generate a debug configuration template using:

uv run generate_config_template.py

Available Tools

Command Line Tools

| Tool | Description |
|------|-------------|
| execute_command | Execute a command line instruction |

File Operations Tools

| Tool | Description |
|------|-------------|
| read_file_content | Read content from a file |
| write_file_content | Write content to a file |
| replace_in_file | Replace content in a file using regular expressions |
| list_directory | List directory contents with detailed information |

Figma Tools

| Tool | Description |
|------|-------------|
| figma_get_file | Get a Figma file by key |
| figma_get_file_nodes | Get specific nodes from a Figma file |
| figma_get_image | Get images for nodes in a Figma file |
| figma_get_image_fills | Get URLs for images used in a Figma file |
| figma_get_comments | Get comments on a Figma file |
| figma_post_comment | Post a comment on a Figma file |
| figma_delete_comment | Delete a comment from a Figma file |
| figma_get_team_projects | Get projects for a team |
| figma_get_project_files | Get files for a project |
| figma_get_team_components | Get components for a team |
| figma_get_file_components | Get components from a file |
| figma_get_component | Get a component by key |
| figma_get_team_component_sets | Get component sets for a team |
| figma_get_team_styles | Get styles for a team |
| figma_get_file_styles | Get styles from a file |
| figma_get_style | Get a style by key |

XiaoyuZhouFM Tools

| Tool | Description |
|------|-------------|
| xiaoyuzhoufm_download | Download a podcast episode from XiaoyuZhouFM with optional automatic m4a to mp3 conversion |

Audio Tools

| Tool | Description |
|------|-------------|
| get_audio_length | Get the length of an audio file in seconds |
| get_audio_text | Get transcribed text from a specific time range in an audio file |

Memory Tools

| Tool | Description |
|------|-------------|
| think | Use the tool to think about something and append the thought to the log |
| get_session_id | Get the current session ID |
| remember | Store a memory (brief and detail) in the memory database |
| recall | Query memories from the database with semantic search |
| forget | Clear all memories in the memory database |

Markitdown Tools

| Tool | Description |
|------|-------------|
| convert_file_to_markdown | Convert any file to Markdown using MarkItDown |
| convert_url_to_markdown | Convert a URL to Markdown using MarkItDown |

Web Tools

| Tool | Description |
|------|-------------|
| get_html | Get HTML content from a URL |
| save_html | Save HTML from a URL to a file |
| search_with_tavily | Search the web using Tavily (requires API key) |
| search_with_duckduckgo | Search the web using DuckDuckGo (requires API key) |

Flux Image Generation Tools

| Tool | Description |
|------|-------------|
| flux_generate_image | Generate an image using the Flux API and save it to a file |

Usage Examples

Running the MCP Server

# Run with stdio transport (default)
mcp-toolbox stdio

# Run with SSE transport
mcp-toolbox sse --host localhost --port 9871
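
You can also drive the stdio server programmatically instead of through a desktop client. The snippet below is a minimal sketch using the official `mcp` Python SDK (`pip install mcp`) to spawn the server and list its tools; it assumes uvx is on your PATH and is not part of mcp-toolbox itself.

```python
# Minimal sketch: connect to the mcp-toolbox stdio server with the official MCP Python SDK.
# Assumes `pip install mcp` and that `uvx` is available on PATH.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Spawn the server the same way an MCP client (e.g. Claude Desktop) would.
    params = StdioServerParameters(command="uvx", args=["mcp-toolbox@latest", "stdio"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])


asyncio.run(main())
```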

Using with Claude Desktop

  1. Configure Claude Desktop as shown in the Configuration section

  2. Start Claude Desktop

  3. Ask Claude to interact with Figma files:

    • "Can you get information about this Figma file: 12345abcde?"

    • "Show me the components in this Figma file: 12345abcde"

    • "Get the comments from this Figma file: 12345abcde"

  4. Ask Claude to execute command line instructions:

    • "What files are in the current directory?"

    • "What's the current system time?"

    • "Show me the contents of a specific file."

  5. Ask Claude to download podcasts from XiaoyuZhouFM:

  6. Ask Claude to work with audio files:

    • "What's the length of this audio file: audio.m4a?"

    • "Transcribe the audio from 60 to 90 seconds in audio.m4a"

    • "Get the text from 2:30 to 3:00 in the audio file"

  7. Ask Claude to convert files or URLs to Markdown:

    • "Convert this file to Markdown: document.docx"

    • "Convert this webpage to Markdown: https://example.com"

  8. Ask Claude to work with web content:

  9. Ask Claude to generate images with Flux:

    • "Generate an image of a beautiful sunset over mountains"

    • "Create an image of a futuristic city and save it to my desktop"

    • "Generate a portrait of a cat in a space suit"

  10. Ask Claude to use memory tools:

    • "Remember this important fact: The capital of France is Paris"

    • "What's my current session ID?"

    • "Recall any information about France"

    • "Think about the implications of climate change"

    • "Forget all stored memories"

Development

Local Setup

Fork the repository and clone it to your local machine.

# Install in development mode
make install

# Activate a virtual environment
source .venv/bin/activate  # For macOS/Linux
# or
.venv\Scripts\activate  # For Windows

Running Tests

make test

Running Checks

make check

Building Documentation

make docs

Adding New Tools

To add a new API integration:

  1. Update config.py with any required API keys

  2. Create a new module in mcp_toolbox/

  3. Implement your API client and tools

  4. Add tests for your new functionality

  5. Update the README.md with new environment variables and tools

See the development guide for more detailed instructions.
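
As a rough illustration only, a new tool module might look like the sketch below. It assumes a FastMCP-style `@mcp.tool()` decorator exposed from a shared server instance; the import path, instance name, and endpoint are hypothetical, so consult the existing modules under mcp_toolbox/ for the actual registration pattern.

```python
# Hypothetical sketch of a new tool module (e.g. mcp_toolbox/weather/tools.py).
# The shared `mcp` server instance and its import path are assumptions, not the real layout.
import os

import httpx

from mcp_toolbox.app import mcp  # hypothetical import path


@mcp.tool(description="Get the current weather for a city.")
async def get_weather(city: str) -> dict:
    """Call a weather API and return the raw JSON payload."""
    api_key = os.environ.get("WEATHER_API_KEY", "")  # add the key to config.py as well
    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.example.com/weather",  # placeholder endpoint
            params={"q": city, "key": api_key},
        )
        response.raise_for_status()
        return response.json()
```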

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository

  2. Create a feature branch (git checkout -b feature/amazing-feature)

  3. Commit your changes (git commit -m 'Add some amazing feature')

  4. Push to the branch (git push origin feature/amazing-feature)

  5. Open a Pull Request

License

This project is licensed under the terms of the license included in the repository.


