Enterprise Code Search MCP Server

Server Configuration

Describes the environment variables used to configure the server (all are optional).

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| BATCH_SIZE | No | Batch size for indexing | |
| CHROMA_HOST | No | The host for ChromaDB | |
| CHROMA_PORT | No | The port for ChromaDB | |
| OLLAMA_HOST | No | The host URL for the Ollama service | |
| COMPANY_NAME | No | Your company name | |
| OLLAMA_MODEL | No | The Ollama model to use for embeddings | |
| OPENAI_MODEL | No | The OpenAI model to use for embeddings | |
| MAX_FILE_SIZE | No | Maximum file size in KB | |
| MAX_CHUNK_SIZE | No | Maximum chunk size in characters | |
| OPENAI_API_KEY | No | Your OpenAI API key | |
| CHROMA_SERVER_HOST | No | ChromaDB server host, for restricting access | |
| EMBEDDING_PROVIDER | No | The embedding provider to use (`ollama` or `openai`) | |
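
As a rough sketch, an MCP client can supply these variables when launching the server over stdio. The example below uses the official TypeScript SDK; the launch command (`node dist/index.js`) is an assumption about the server's entry point, and every value shown is illustrative rather than a documented default.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical launch command; adjust to the server's actual entry point.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"],
  env: {
    // Illustrative values only; none of these are documented defaults.
    EMBEDDING_PROVIDER: "ollama",
    OLLAMA_HOST: "http://localhost:11434",
    OLLAMA_MODEL: "nomic-embed-text",
    CHROMA_HOST: "localhost",
    CHROMA_PORT: "8000",
    COMPANY_NAME: "Acme Corp",
    MAX_FILE_SIZE: "500",   // KB
    MAX_CHUNK_SIZE: "1500", // characters
    BATCH_SIZE: "50",
  },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);
```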

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

| Name | Description |
| --- | --- |
| index_local_project | Index a local project directory into the vector database |
| search_codebase | Search the indexed codebase using semantic search |
| list_indexed_projects | List all projects currently indexed |
| get_embedding_provider_info | Get information about the current embedding provider |
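Continuing the sketch above, a connected client could discover and invoke these tools as shown below. The argument shapes (`path`, `query`) are guesses, not documented schemas; inspect each tool's input schema from `listTools()` before relying on them.

```typescript
// Discover the tools the server exposes.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Hypothetical argument shape: index a local project directory.
await client.callTool({
  name: "index_local_project",
  arguments: { path: "/home/me/projects/payments-service" },
});

// Hypothetical argument shape: semantic search over the indexed code.
const result = await client.callTool({
  name: "search_codebase",
  arguments: { query: "retry logic for failed payment webhooks" },
});
console.log(result.content);
```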

MCP directory API

We provide all of the information about MCP servers in the directory via our MCP API. For example:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/damian-pramparo/semantic-context-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.