🚀 Gemini MCP Tool - Windows Fixed Version
Latest Version v1.0.21 - Fixed cross-terminal compatibility issues and fetch-chunk format errors
A Windows-compatible Model Context Protocol (MCP) server that enables AI assistants to interact with Google's Gemini CLI. This is a fixed version specifically designed to work seamlessly on Windows environments with PowerShell support.
Note: This is an enhanced version of the original gemini-mcp-tool with Windows-specific fixes and improvements.
🆕 Latest Updates (v1.0.21)
- 🔧 Fixed Cross-Terminal Compatibility - Resolved Node.js path not found issues in different terminal environments
- 📦 Fixed fetch-chunk Format Error - Fixed MCP protocol format mismatch in chunked responses
- 🛡️ Enhanced PATH Environment Variable Handling - Automatically adds common Node.js installation paths
- ✅ Full Compatibility with All Terminals - Supports PowerShell, CMD, VS Code Terminal, Trae AI, CherryStudio, etc.
- 🚀 Improved Error Handling - Better error messages and debug output
v1.0.3 Updates
- 🆕 PowerShell Path Parameter Support - Added an optional `powershellPath` parameter that lets users customize the PowerShell executable path
- ✅ Fixed PowerShell Execution Error - Resolved the `spawn powershell.exe ENOENT` issue
- ✅ Improved Windows Compatibility - Automatic detection of available PowerShell versions
- ✅ Fixed Undefined Variable Error - Fixed the `args` variable issue in the `executeCommandWithPipedInput` function
- ✅ Enhanced Error Handling - Better error messages and debug output
- ✅ Backward Compatibility - Existing configurations require no modification; the default detection logic is used automatically
✨ Features
- 🪟 Windows Compatible: Full PowerShell support with Windows-specific path handling
- 📊 Large Context Window: Leverage Gemini's massive token window for analyzing entire codebases
- 📁 File Analysis: Analyze files using `@filename` syntax
- 🔒 Sandbox Mode: Safe code execution environment
- 🔗 MCP Integration: Seamless integration with MCP-compatible AI assistants (Trae AI, Claude Desktop)
- ⚡ NPX Ready: Easy installation and usage with NPX
- 🔧 Environment Variable Support: Flexible API key configuration
This Windows-fixed version resolves:
- PowerShell parameter passing issues
- Character encoding problems with Chinese/Unicode text
- Command line argument escaping on Windows
- Environment variable handling
📋 Prerequisites
Before using this tool, ensure you have:
- Node.js (v16.0.0 or higher)
- Google Gemini CLI installed and configured
- API Key: Get your API key from Google AI Studio
📦 Installation
Quick Start with NPX (Recommended)
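A minimal sketch of the NPX invocation. The npm package name for this fork is not stated here, so `gemini-mcp-tool-windows` below is a placeholder; substitute the actual published name.

```bash
# Run the MCP server directly via NPX (no install step required).
# NOTE: "gemini-mcp-tool-windows" is a placeholder package name.
npx -y gemini-mcp-tool-windows
```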
Global Installation
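If you prefer a global install, something like the following should work (same placeholder package name as above):

```bash
npm install -g gemini-mcp-tool-windows
```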
Updating Existing Installation
If you previously installed an older version:
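A hedged example of pulling the latest release over an existing global install (placeholder package name as above):

```bash
npm install -g gemini-mcp-tool-windows@latest
```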
⚙️ MCP Client Configuration
Claude Code (One-Line Setup)
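A sketch of the one-line setup, assuming the Claude Code CLI's `claude mcp add` command and the placeholder package name used above (verify the exact syntax against your Claude Code version):

```bash
claude mcp add gemini-cli -- npx -y gemini-mcp-tool-windows
```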
Verify Installation: Type `/mcp` inside Claude Code to verify that the gemini-cli MCP server is active.
Alternative: Import from Claude Desktop
If you already have it configured in Claude Desktop:
- Add to your Claude Desktop config (see below)
- Import to Claude Code:
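If your Claude Code version supports importing Claude Desktop configuration, the import typically looks like this (a hedged example, not project-specific guidance):

```bash
claude mcp add-from-claude-desktop
```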
Trae AI (Recommended)
- Open: `%APPDATA%\Trae\User\mcp.json`
- Add this configuration:
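A sketch of the MCP entry, assuming the placeholder package name from the Installation section and the standard `mcpServers` configuration schema:

```json
{
  "mcpServers": {
    "gemini-cli": {
      "command": "npx",
      "args": ["-y", "gemini-mcp-tool-windows"],
      "env": {
        "GEMINI_API_KEY": "YOUR_ACTUAL_API_KEY_HERE"
      }
    }
  }
}
```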
Claude Desktop
- Open: `%APPDATA%\Claude\claude_desktop_config.json`
- Add this configuration:
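The Claude Desktop entry follows the same shape; a sketch under the same assumptions as the Trae AI example above:

```json
{
  "mcpServers": {
    "gemini-cli": {
      "command": "npx",
      "args": ["-y", "gemini-mcp-tool-windows"],
      "env": {
        "GEMINI_API_KEY": "YOUR_ACTUAL_API_KEY_HERE"
      }
    }
  }
}
```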
🔑 API Key Configuration
Option 1: MCP Configuration (Recommended)
Replace `YOUR_ACTUAL_API_KEY_HERE` in the configuration above with your actual API key.
Option 2: Environment Variable
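Alternatively, set the key in your shell before starting the MCP client. A hedged Windows example, assuming the server reads `GEMINI_API_KEY` as in the configuration above:

```powershell
# PowerShell: set for the current session
$env:GEMINI_API_KEY = "YOUR_ACTUAL_API_KEY_HERE"

# Persist for the current user (takes effect in new terminals)
setx GEMINI_API_KEY "YOUR_ACTUAL_API_KEY_HERE"

# CMD equivalent for the current session:
#   set GEMINI_API_KEY=YOUR_ACTUAL_API_KEY_HERE
```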
Configuration File Locations
Claude Desktop:
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/claude/claude_desktop_config.json`
Trae AI:
- Windows: `%APPDATA%\Trae\User\mcp.json`
🛠️ Available Tools
This MCP server provides the following tools for AI assistants:
1. ask-gemini
Interact with Google Gemini for analysis and questions.
Parameters:
- `prompt` (required): The analysis request. Use `@` syntax for file references
- `model` (optional): Gemini model to use (default: `gemini-2.5-pro`)
- `sandbox` (optional): Enable sandbox mode for safe code execution
- `changeMode` (optional): Enable structured change mode
- `chunkIndex` (optional): Chunk index for continuation
- `chunkCacheKey` (optional): Cache key for continuation
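For illustration, a hedged example of the arguments an MCP client might send when calling `ask-gemini` (parameter names as listed above; the values are made up):

```json
{
  "name": "ask-gemini",
  "arguments": {
    "prompt": "@src/main.js Explain what this file does",
    "model": "gemini-2.5-pro",
    "sandbox": false
  }
}
```

The other tools below take their arguments in the same named-parameter form.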
2. brainstorm
Generate creative ideas using various brainstorming frameworks.
Parameters:
- `prompt` (required): Brainstorming challenge or question
- `model` (optional): Gemini model to use
- `methodology` (optional): Framework (`divergent`, `convergent`, `scamper`, `design-thinking`, `lateral`, `auto`)
- `domain` (optional): Domain context (`software`, `business`, `creative`, etc.)
- `constraints` (optional): Known limitations or requirements
- `existingContext` (optional): Background information
- `ideaCount` (optional): Number of ideas to generate (default: 12)
- `includeAnalysis` (optional): Include feasibility analysis (default: true)
3. fetch-chunk
Retrieve cached chunks from changeMode responses.
Parameters:
- `cacheKey` (required): Cache key from the initial response
- `chunkIndex` (required): Chunk index to retrieve (1-based)
4. timeout-test
Test timeout prevention mechanisms.
Parameters:
- `duration` (required): Duration in milliseconds (minimum: 10ms)
5. ping
Test connection to the server.
Parameters:
- `prompt` (optional): Message to echo back
6. Help
Display help information about available tools.
🎯 Usage Examples
Once configured, you can use the following tools through your MCP client:
Natural Language Examples
With File References (using @ syntax):
- "ask gemini to analyze @src/main.js and explain what it does"
- "use gemini to summarize @. the current directory"
- "analyze @package.json and tell me about dependencies"
General Questions (without files):
- "ask gemini to search for the latest tech news"
- "use gemini to explain div centering"
- "ask gemini about best practices for React development related to @file_im_confused_about"
- "use gemini to explain index.html"
- "understand the massive project using gemini"
- "ask gemini to search for latest news"
Using Gemini CLI's Sandbox Mode (-s): The sandbox mode allows you to safely test code changes, run scripts, or execute potentially risky operations in an isolated environment.
- "use gemini sandbox to create and run a Python script that processes data"
- "ask gemini to safely test @script.py and explain what it does"
- "use gemini sandbox to install numpy and create a data visualization"
- "test this code safely: Create a script that makes HTTP requests to an API"
Slash Commands (for Claude Code Users)
You can use these commands directly in Claude Code's interface (compatibility with other clients has not been tested):
- /analyze: Analyzes files or directories using Gemini, or asks general questions
  - `prompt` (required): The analysis prompt. Use @ syntax to include files (e.g., `/analyze prompt:@src/ summarize this directory`) or ask general questions (e.g., `/analyze prompt:Please use a web search to find the latest news stories`)
- /sandbox: Safely tests code or scripts in Gemini's sandbox environment
  - `prompt` (required): Code testing request (e.g., `/sandbox prompt:Create and run a Python script that processes CSV data` or `/sandbox prompt:@script.py Test this script safely`)
- /help: Displays the Gemini CLI help information
- /ping: Tests the connection to the server
  - `message` (optional): A message to echo back
Available Tools
- ask-gemini: Send prompts to Gemini
- analyze-file: Analyze specific files using `@filename` syntax
- sandbox-mode: Execute code in a safe environment
🔧 Windows-Specific Fixes
This version includes the following Windows-specific improvements:
- PowerShell Parameter Handling: Fixed argument passing to avoid parameter splitting
- Character Encoding: Proper UTF-8 handling for Chinese and Unicode characters
- Quote Escaping: Correct escaping of quotes in command arguments
- Environment Variables: Improved `.env` file loading and environment variable handling
- Path Resolution: Windows-compatible path handling
- Error Handling: Better error messages for Windows environments
- Dependency Management: Simplified dependency structure
🧪 Testing Installation
1. Test Gemini CLI
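For example (assuming the Gemini CLI is on your PATH; flags may differ between CLI versions):

```bash
gemini --version
gemini -p "Hello from the Gemini CLI"
```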
2. Test MCP Tool
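A quick smoke test of the server itself; it should start and wait for MCP messages on stdin (placeholder package name again; press Ctrl+C to exit):

```bash
npx -y gemini-mcp-tool-windows
```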
3. Test MCP Integration
- Restart your MCP client (Trae AI, Claude Desktop)
- Try asking: "Use gemini to explain what MCP is"
- Check for successful responses
🐛 Troubleshooting
Common Issues
"Command not found: gemini"
"API key not found"
"Permission denied"
For detailed troubleshooting, see INSTALL-GUIDE.md.
🤝 Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Test on Windows environments
- Submit a pull request
📄 License
MIT License - see LICENSE file for details.
🙏 Acknowledgments
- Original project: jamubc/gemini-mcp-tool
- Google Gemini CLI team
- Model Context Protocol (MCP) community
📞 Support
If you encounter any issues or have questions:
- Check the Issues page
- Create a new issue with detailed information about your problem
- Include your Windows version, Node.js version, and error messages
Made with ❤️ for Windows developers
Note: This is a Windows-optimized fork of the original gemini-mcp-tool. For other platforms, consider using the original version.