Server Configuration

Describes the environment variables used to configure the server; all are optional.

Name                    | Required | Description                                                      | Default
PROMPT_PATH             | No       | Path to your custom prompt templates directory                   |
GOOGLE_API_KEY          | No       | Your Google API key                                              |
OPENAI_API_KEY          | No       | Your OpenAI API key for AI-powered processing and transcription  |
CCORE_URL_ENGINE        | No       | Force URL engine selection (auto, simple, firecrawl, jina)       | auto
CCORE_DOCUMENT_ENGINE   | No       | Force document engine selection (auto, simple, docling)          | auto
CCORE_AUDIO_CONCURRENCY | No       | Number of concurrent audio transcriptions (1-10)                 | 3
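
As a quick illustration, these variables can be exported in the shell that launches the server; the values below are placeholders chosen for this sketch, and only the variables you actually need have to be set.

export OPENAI_API_KEY="sk-..."          # placeholder key; enables AI-powered processing and transcription
export CCORE_URL_ENGINE="firecrawl"     # override the default "auto" URL engine selection
export CCORE_AUDIO_CONCURRENCY="5"      # allow up to 5 concurrent audio transcriptions (default is 3)

Any variable left unset falls back to the default shown in the table above.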

Tools

Functions exposed to the LLM to take actions

extract_content

Extract content from a URL or file using Content Core's auto engine.

Args:
    url: Optional URL to extract content from
    file_path: Optional file path to extract content from

Returns:
    JSON object containing extracted content and metadata

Raises:
    ValueError: If neither or both url and file_path are provided
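
For orientation, an MCP client invokes this tool with a standard JSON-RPC tools/call request over the server's transport. The message below is a sketch: the id and example URL are illustrative, the usual initialize handshake is omitted, and exactly one of url or file_path should appear in arguments.

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "extract_content",
    "arguments": { "url": "https://example.com/article" }
  }
}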

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lfnovo/content-core'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.