Server Configuration
The server is configured through the following environment variables.
Name | Required | Description | Default |
---|---|---|---|
LLM_PROVIDER | No | LLM provider to use. Options: OPEN_AI, ANTHROPIC, or GEMINI | OPEN_AI |
OPENAI_API_KEY | No | Your OpenAI API key (used when LLM_PROVIDER is OPEN_AI) | |
OPENAI_MODEL | No | Override the default OpenAI model | gpt-4o |
ANTHROPIC_API_KEY | No | Your Anthropic API key (used when LLM_PROVIDER is ANTHROPIC) | |
ANTHROPIC_MODEL | No | Override the default Anthropic model | claude-3-opus-20240307 |
GEMINI_API_KEY | No | Your Gemini API key (used when LLM_PROVIDER is GEMINI) | |
GEMINI_MODEL | No | Override the default Gemini model | gemini-1.5-pro |
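
To illustrate how these variables interact, here is a minimal sketch of how the provider, model, and API key could be resolved at startup. It is not the server's actual implementation; the defaults mirror the table above, and the error messages are illustrative.

```typescript
// Illustrative only: one way the environment variables above might be resolved.
// Defaults are taken from the configuration table; the real server code may differ.
type Provider = "OPEN_AI" | "ANTHROPIC" | "GEMINI";

interface LlmConfig {
  provider: Provider;
  model: string;
  apiKey: string;
}

const DEFAULT_MODELS: Record<Provider, string> = {
  OPEN_AI: "gpt-4o",
  ANTHROPIC: "claude-3-opus-20240307",
  GEMINI: "gemini-1.5-pro",
};

function resolveLlmConfig(env: NodeJS.ProcessEnv = process.env): LlmConfig {
  const provider = (env.LLM_PROVIDER ?? "OPEN_AI") as Provider;
  if (!(provider in DEFAULT_MODELS)) {
    throw new Error(`Unknown LLM_PROVIDER: ${provider}`);
  }

  // Each provider reads its own *_API_KEY and optional *_MODEL override.
  const settings: Record<Provider, { key?: string; model?: string }> = {
    OPEN_AI: { key: env.OPENAI_API_KEY, model: env.OPENAI_MODEL },
    ANTHROPIC: { key: env.ANTHROPIC_API_KEY, model: env.ANTHROPIC_MODEL },
    GEMINI: { key: env.GEMINI_API_KEY, model: env.GEMINI_MODEL },
  };

  const { key, model } = settings[provider];
  if (!key) {
    // In this sketch, only the key for the selected provider is checked.
    throw new Error(`Missing API key for LLM_PROVIDER=${provider}`);
  }

  return { provider, model: model ?? DEFAULT_MODELS[provider], apiKey: key };
}
```

With no variables set, this sketch falls back to OPEN_AI with gpt-4o, matching the defaults in the table.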
Schema
Prompts
Interactive templates invoked by user choice
This server does not define any prompts.
Resources
Contextual data attached and managed by the client
This server does not define any resources.
Tools
Functions exposed to the LLM to take actions
Name | Description |
---|---|
analyze_repo | Flattens a repository into a textual representation to give a high-level overview of its organization, directory structure, and file contents, without performing a detailed review. Use it before code_review to understand the codebase structure first, or when a full code review is not needed. |
code_review | Performs an in-depth review of a repository or specific files and returns structured results: the issues found, their severity, recommended fixes, and the overall strengths of the codebase. Use it when you need actionable feedback on code quality, security, performance, and maintainability, or when evaluating a codebase for potential problems. |
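
For context, the sketch below shows how an MCP client might invoke these tools with the TypeScript SDK. The server launch command and the tool argument name (repoPath) are assumptions made for illustration; consult the server's tool input schemas for the actual parameters.

```typescript
// Illustrative MCP client calling the tools above.
// The launch command and the repoPath argument are assumptions; check the
// server's tool input schemas for the real parameter names.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["path/to/server.js"], // placeholder: however this server is started
  });

  const client = new Client(
    { name: "example-client", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // High-level overview of the repository structure.
  const overview = await client.callTool({
    name: "analyze_repo",
    arguments: { repoPath: "/path/to/repo" }, // assumed argument name
  });
  console.log(overview);

  // Detailed review with issues, severities, and recommendations.
  const review = await client.callTool({
    name: "code_review",
    arguments: { repoPath: "/path/to/repo" }, // assumed argument name
  });
  console.log(review);

  await client.close();
}

main().catch(console.error);
```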