VoiceMode

by mbailey

Install via: uv tool install voice-mode | getvoicemode.com

Natural voice conversations for AI assistants. VoiceMode brings human-like voice interactions to Claude Code and other AI code editors through the Model Context Protocol (MCP).

🖥️ Compatibility

Runs on: Linux • macOS • Windows (WSL) • NixOS | Python: 3.10+

✨ Features

  • 🎙️ Natural Voice Conversations with Claude Code - ask questions and hear responses
  • 🗣️ Local and cloud voice models - works with any OpenAI API-compatible STT/TTS service
  • ⚡ Real-time - low-latency voice interactions with automatic transport selection
  • 🔧 MCP Integration - integrates seamlessly with Claude Code and other MCP clients
  • 🎯 Silence detection - automatically stops recording when you stop speaking (no more waiting!)
  • 🔄 Multiple transports - local microphone or LiveKit room-based communication

🎯 Simple Requirements

All you need to get started:

  1. 🎤 Computer with microphone and speakers
  2. 🔑 OpenAI API Key (optional) - VoiceMode can install free, open-source transcription and text-to-speech services locally

Optional for enhanced performance:

  • 🍎 Xcode (macOS only) - Required for Core ML acceleration of Whisper models (2-3x faster inference). Install it from the Mac App Store, then run sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

Quick Start

Install Claude Code with VoiceMode configured and ready to run on Linux, macOS, and Windows WSL:

# Download and run the installer
curl -O https://getvoicemode.com/install.sh && bash install.sh

# While local voice services can be installed automatically, we recommend
# providing an OpenAI API key as a fallback in case local services are unavailable
export OPENAI_API_KEY=your-openai-key  # Optional but recommended

# Start a voice conversation
claude converse

This installer will:

  • Install all system dependencies (Node.js, audio libraries, etc.)
  • Install Claude Code if not already installed
  • Configure VoiceMode as an MCP server
  • Set up your system for voice conversations
  • Offer to install free local STT/TTS services if no API key is provided
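
Once the installer finishes, you can confirm that VoiceMode was registered by listing Claude Code's configured MCP servers (the exact output format may vary between Claude Code versions):

# Check that voicemode appears among the configured MCP servers
claude mcp list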

Manual Installation

For manual setup steps, see the Getting Started Guide.

🎬 Demo

Watch VoiceMode in action with Claude Code:

The converse function makes voice interactions natural - it automatically waits for your response by default, creating a real conversation flow.
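
If Claude Code is already set up, you do not have to use the converse shortcut; asking for a voice conversation in a normal prompt works too. The wording below is only illustrative:

claude "Let's have a voice conversation"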

Installation

Prerequisites

  • Python >= 3.10
  • Astral UV - Package manager (install with curl -LsSf https://astral.sh/uv/install.sh | sh)
  • OpenAI API Key (or compatible service)
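
Before installing system packages, it is worth confirming that uv is on your PATH:

uv --version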
System Dependencies

Ubuntu/Debian:

sudo apt update
sudo apt install -y python3-dev libasound2-dev libasound2-plugins libportaudio2 portaudio19-dev ffmpeg pulseaudio pulseaudio-utils

Note for WSL2 users: WSL2 requires additional audio packages (pulseaudio, libasound2-plugins) for microphone access.

Fedora/RHEL:

sudo dnf install python3-devel alsa-lib-devel portaudio-devel ffmpeg
macOS:

# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install dependencies
brew install portaudio ffmpeg cmake

Windows (WSL):

Follow the Ubuntu/Debian instructions above within WSL.

NixOS:

VoiceMode includes a flake.nix with all required dependencies. You can either:

  1. Use the development shell (temporary):

nix develop github:mbailey/voicemode

  2. Install system-wide (see Alternative Installation Options below)

Quick Install

# Using Claude Code (recommended)
claude mcp add --scope user voicemode uvx --refresh voice-mode

Configuration for AI Coding Assistants

📖 Looking for detailed setup instructions? Check the comprehensive Getting Started Guide for a step-by-step walkthrough.

Below are quick configuration snippets. For full installation and setup instructions, see the integration guides above.

claude mcp add voicemode -- uvx --refresh voice-mode

Or with environment variables:

claude mcp add voicemode --env OPENAI_API_KEY=your-openai-key -- uvx --refresh voice-mode
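
To inspect or remove the entry later, Claude Code's standard MCP management commands apply:

# Show the registered server's details
claude mcp get voicemode

# Remove it if you need to reconfigure
claude mcp remove voicemode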

Alternative Installation Options

Install from source:

git clone https://github.com/mbailey/voicemode.git
cd voicemode
pip install -e .

1. Install with nix profile (user-wide):

nix profile install github:mbailey/voicemode

2. Add to NixOS configuration (system-wide):

# In /etc/nixos/configuration.nix
environment.systemPackages = [
  (builtins.getFlake "github:mbailey/voicemode").packages.${pkgs.system}.default
];

3. Add to home-manager:

# In home-manager configuration
home.packages = [
  (builtins.getFlake "github:mbailey/voicemode").packages.${pkgs.system}.default
];

4. Run without installing:

nix run github:mbailey/voicemode

Configuration

Quick Setup

When using OpenAI's hosted services, the only required configuration is your API key:

export OPENAI_API_KEY="your-key"
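
To make the key available in future sessions, append the export to your shell profile (use the file that matches your shell, e.g. ~/.zshrc on macOS):

echo 'export OPENAI_API_KEY="your-key"' >> ~/.bashrc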

Local STT/TTS Services

For privacy-focused or offline usage, VoiceMode supports local speech services:

  • Whisper.cpp - Local speech-to-text with OpenAI-compatible API
  • Kokoro - Local text-to-speech with multiple voice options

These services provide the same API interface as OpenAI, allowing seamless switching between cloud and local processing.
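
As a sketch of what that compatibility means in practice, a running Kokoro instance that exposes the OpenAI speech endpoint can be exercised with plain curl. The host, port, model, and voice name below are assumptions that depend on how you launched the service:

# Request synthesized speech from a local OpenAI-compatible TTS endpoint
# (127.0.0.1:8880, "tts-1", and "af_sky" are illustrative values)
curl -X POST http://127.0.0.1:8880/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "input": "Hello from VoiceMode", "voice": "af_sky"}' \
  --output hello.mp3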

Troubleshooting

Common Issues

  • No microphone access: Check system permissions for terminal/application
    • WSL2 Users: Additional audio packages (pulseaudio, libasound2-plugins) required for microphone access
  • UV not found: Install with curl -LsSf https://astral.sh/uv/install.sh | sh
  • OpenAI API error: Verify your OPENAI_API_KEY is set correctly
  • No audio output: Check system audio settings and available devices (see the device checks below)
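
On Linux and WSL2, you can confirm that PulseAudio sees your devices; pactl comes with the pulseaudio-utils package installed earlier:

# List capture devices (microphones)
pactl list short sources

# List playback devices (speakers/headphones)
pactl list short sinks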

Audio Saving

To save all audio files (both TTS output and STT input):

export VOICEMODE_SAVE_AUDIO=true

Audio files are saved to: ~/.voicemode/audio/YYYY/MM/ with timestamps in the filename.
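
For example, to list what was recorded in the current month (the directories follow the year/month pattern above):

ls ~/.voicemode/audio/$(date +%Y)/$(date +%m)/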

Documentation

📚 Read the full documentation at voice-mode.readthedocs.io

Getting Started

Development

Service Guides

Community

See Also

License

MIT - A Failmode Project


mcp-name: com.failmode/voicemode

