gemini_generateContent
Generate complete text responses using Google Gemini models. Process single-turn prompts with optional control over token limits, creativity, and safety settings for tailored content creation.
Instructions
Generates non-streaming text content using a specified Google Gemini model. This tool takes a text prompt and returns the complete generated response from the model. It's suitable for single-turn generation tasks where the full response is needed at once. Optional parameters allow control over generation (temperature, max tokens, etc.) and safety settings.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| generationConfig | No | Optional configuration for controlling the generation process. | |
| modelName | No | Optional. The name of the Gemini model to use (e.g., 'gemini-1.5-flash'). If omitted, the server's default model (from the GOOGLE_GEMINI_MODEL env var) is used. | |
| prompt | Yes | Required. The text prompt to send to the Gemini model for content generation. | |
| safetySettings | No | Optional. A list of safety settings to apply, overriding the model's default safety settings. Each setting specifies a harm category and a blocking threshold. | |
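The parameters above can be combined into a single arguments object. Below is a minimal sketch of such a payload in Python; the prompt text, model name, and configuration values are illustrative, and the exact mechanism for sending it depends on your MCP client.

```python
# Example arguments payload for the gemini_generateContent tool.
# All values besides the field names are illustrative choices.
arguments = {
    "prompt": "Summarize the benefits of unit testing in two sentences.",
    "modelName": "gemini-1.5-flash",  # optional; server falls back to GOOGLE_GEMINI_MODEL
    "generationConfig": {
        "temperature": 0.2,        # low value -> more deterministic output
        "maxOutputTokens": 256,    # cap on generated tokens
        "stopSequences": ["\n\n"], # stop at the first blank line
    },
    "safetySettings": [
        {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "threshold": "BLOCK_MEDIUM_AND_ABOVE",
        }
    ],
}
```

Only `prompt` is required; every other key can be omitted to accept the server's defaults.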
Input Schema (JSON Schema)
{
"$schema": "http://json-schema.org/draft-07/schema#",
"additionalProperties": false,
"properties": {
"generationConfig": {
"additionalProperties": false,
"description": "Optional configuration for controlling the generation process.",
"properties": {
"maxOutputTokens": {
"description": "Maximum number of tokens to generate in the response.",
"minimum": 1,
"type": "integer"
},
"stopSequences": {
"description": "Sequences where the API will stop generating further tokens.",
"items": {
"type": "string"
},
"type": "array"
},
"temperature": {
"description": "Controls randomness. Lower values (~0.2) make output more deterministic, higher values (~0.8) make it more creative. Default varies by model.",
"maximum": 1,
"minimum": 0,
"type": "number"
},
"topK": {
"description": "Top-k sampling parameter. The model considers the k most probable tokens. Default varies by model.",
"minimum": 1,
"type": "integer"
},
"topP": {
"description": "Nucleus sampling parameter. The model considers only tokens with probability mass summing to this value. Default varies by model.",
"maximum": 1,
"minimum": 0,
"type": "number"
}
},
"type": "object"
},
"modelName": {
"description": "Optional. The name of the Gemini model to use (e.g., 'gemini-1.5-flash'). If omitted, the server's default model (from GOOGLE_GEMINI_MODEL env var) will be used.",
"minLength": 1,
"type": "string"
},
"prompt": {
"description": "Required. The text prompt to send to the Gemini model for content generation.",
"minLength": 1,
"type": "string"
},
"safetySettings": {
"description": "Optional. A list of safety settings to apply, overriding default model safety settings. Each setting specifies a harm category and a blocking threshold.",
"items": {
"additionalProperties": false,
"description": "Setting for controlling content safety for a specific harm category.",
"properties": {
"category": {
"description": "Category of harmful content to apply safety settings for.",
"enum": [
"HARM_CATEGORY_UNSPECIFIED",
"HARM_CATEGORY_HATE_SPEECH",
"HARM_CATEGORY_SEXUALLY_EXPLICIT",
"HARM_CATEGORY_HARASSMENT",
"HARM_CATEGORY_DANGEROUS_CONTENT"
],
"type": "string"
},
"threshold": {
"description": "Threshold for blocking harmful content. Higher thresholds block more content.",
"enum": [
"HARM_BLOCK_THRESHOLD_UNSPECIFIED",
"BLOCK_LOW_AND_ABOVE",
"BLOCK_MEDIUM_AND_ABOVE",
"BLOCK_ONLY_HIGH",
"BLOCK_NONE"
],
"type": "string"
}
},
"required": [
"category",
"threshold"
],
"type": "object"
},
"type": "array"
}
},
"required": [
"prompt"
],
"type": "object"
}
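The numeric and string constraints in the schema above can be checked client-side before sending a request. The following is a hand-rolled sketch using only the standard library; the function name is hypothetical, and in practice a schema validator (such as the third-party `jsonschema` package) could enforce the schema directly.

```python
def validate_arguments(args: dict) -> list[str]:
    """Collect constraint violations for a gemini_generateContent payload.

    Mirrors the key constraints from the tool's JSON Schema:
    a required non-empty prompt, temperature/topP in [0, 1],
    and topK/maxOutputTokens as integers >= 1.
    """
    errors = []

    prompt = args.get("prompt")
    if not isinstance(prompt, str) or len(prompt) < 1:
        errors.append("prompt: required non-empty string")

    gc = args.get("generationConfig", {})
    temperature = gc.get("temperature")
    if temperature is not None and not (0 <= temperature <= 1):
        errors.append("generationConfig.temperature: must be in [0, 1]")

    top_p = gc.get("topP")
    if top_p is not None and not (0 <= top_p <= 1):
        errors.append("generationConfig.topP: must be in [0, 1]")

    top_k = gc.get("topK")
    if top_k is not None and (not isinstance(top_k, int) or top_k < 1):
        errors.append("generationConfig.topK: must be an integer >= 1")

    max_tokens = gc.get("maxOutputTokens")
    if max_tokens is not None and (not isinstance(max_tokens, int) or max_tokens < 1):
        errors.append("generationConfig.maxOutputTokens: must be an integer >= 1")

    return errors
```

An empty result list means the payload satisfies these checks; each string in a non-empty result names the offending field.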