## Server Configuration

Environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| No arguments | | | |
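Because no environment variables are needed, an MCP client only has to launch the server process. The sketch below uses the MCP TypeScript SDK; the `npx` launch command and the package name are placeholders (assumptions, not documented here), so substitute whatever command actually starts this server.

```typescript
// Minimal connection sketch using the MCP TypeScript SDK (run as an ES module).
// The launch command and package name below are placeholders; no environment
// variables need to be passed to the server.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "your-gemini-mcp-package"], // placeholder: replace with the real package or command
  // env is intentionally omitted; the server requires no environment variables
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Inspect what the server exposes (the prompts and tools described below).
console.log(await client.listTools());
```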
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
| --- | --- |
| analyze | Execute `gemini -p <prompt>` to analyze files or get Gemini's response. Supports `@file` syntax for including file contents. |
| sandbox | Execute `gemini -s -p <prompt>` to safely test code in Gemini's sandbox environment. Use for testing potentially risky code or scripts. |
| help | Run `gemini -help` and return a structured response. BEHAVIOR: `should_explain=false`, `output_format=raw`, `suppress_context=true`. |
| ping | Echo a test message with a structured response. Returns raw output with behavioral flags. BEHAVIOR: `should_explain=false`, `output_format=raw`, `suppress_context=true`. |
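As a hedged illustration of how these prompts can be requested over MCP, the snippet below continues the client sketch above and uses the standard `prompts/list` and `prompts/get` requests. The `prompt` argument name is an assumption inferred from the descriptions; check the server's declared prompt arguments via `prompts/list` for the authoritative schema.

```typescript
// Continues the client sketch above. The "prompt" argument name is an
// assumption; the server's prompts/list response declares the real arguments.
const prompts = await client.listPrompts();
console.log(prompts.prompts.map((p) => p.name)); // expected: analyze, sandbox, help, ping

const analyzeResult = await client.getPrompt({
  name: "analyze",
  arguments: { prompt: "@src/index.ts Explain what this file does" }, // @file syntax per the table
});
console.log(analyzeResult.messages);
```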
## Resources

Contextual data attached to and managed by the client.

| Name | Description |
| --- | --- |
| No resources | |
## Tools

Functions exposed to the LLM to take actions.

| Name | Description |
| --- | --- |
| ask-gemini | Execute `gemini -p <prompt>` to get Gemini AI's response. Use when: 1) the user asks for Gemini's opinion or analysis, 2) the user wants to analyze large files with `@file` syntax, 3) the user invokes the `/gemini-cli` command. Supports the `-m` flag for model selection and the `-s` flag for sandbox testing. |
| sandbox-test | Execute code or commands safely in Gemini's sandbox environment. Use for testing potentially risky code, running scripts, or validating code changes without affecting your system. |
| Ping | Echo test with a structured response. Returns the message, or 'Pong!' by default. Uses behavioral flags to control AI interaction. BEHAVIOR: `should_explain=false`, `output_format=raw`, `suppress_context=true`. |
| Help | Run `gemini -help` and return a structured response. BEHAVIOR: `should_explain=false`, `output_format=raw`, `suppress_context=true`. |
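The sketch below, continuing the connection example above, shows how a client might call these tools with the standard MCP `tools/call` request. The argument names `prompt`, `model`, and `sandbox` are assumptions mapped from the `-p`, `-m`, and `-s` flags mentioned in the descriptions, and the model value is only illustrative; verify the actual input schema with `tools/list`.

```typescript
// Continues the client sketch above. Argument names are assumptions inferred
// from the -p, -m, and -s flags in the tool descriptions; the tools/list
// response carries the authoritative input schema.
const askResult = await client.callTool({
  name: "ask-gemini",
  arguments: {
    prompt: "@README.md Summarize this project", // -p prompt, with @file inclusion
    model: "gemini-2.5-pro",                     // -m model selection (illustrative value)
    sandbox: false,                              // -s sandbox testing when true
  },
});
console.log(askResult.content);

// sandbox-test runs the same way, isolated from the host system.
const sandboxResult = await client.callTool({
  name: "sandbox-test",
  arguments: { prompt: "Write and run a script that prints the current date" },
});
console.log(sandboxResult.content);
```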