Qdrant MCP Server
A Model Context Protocol (MCP) server that provides semantic code search capabilities using Qdrant vector database and OpenAI embeddings.
Features
Semantic Code Search - Find code by meaning, not just keywords
Fast Indexing - Efficient incremental indexing of large codebases
MCP Integration - Works seamlessly with Claude and other MCP clients
Background Monitoring - Automatic reindexing of changed files
Smart Filtering - Respects .gitignore and custom patterns
Persistent Storage - Embeddings stored in Qdrant for fast retrieval
Installation
Prerequisites
Node.js 18+
Python 3.8+
Docker (for Qdrant) or Qdrant Cloud account
OpenAI API key
Quick Start
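A minimal sketch of getting up and running, assuming a local Docker Qdrant. The Docker command uses the official Qdrant image; the package and binary name qdrant-mcp-server is a placeholder for this project's actual name:

```bash
# Start a local Qdrant instance (official image, default port 6333)
docker run -p 6333:6333 qdrant/qdrant

# Install the server and index a project
# ("qdrant-mcp-server" is a placeholder name)
npm install -g qdrant-mcp-server
qdrant-mcp-server index ./my-project
```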
Configuration
Environment Variables
Create a .env file in your project root:
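A sketch of the expected contents. OPENAI_API_KEY and QDRANT_URL are referenced elsewhere in this README; QDRANT_API_KEY is an assumption, needed only for Qdrant Cloud:

```bash
# OpenAI key used to generate embeddings
OPENAI_API_KEY=sk-...
# Local Docker instance; use your cluster URL for Qdrant Cloud
QDRANT_URL=http://localhost:6333
# Only required for Qdrant Cloud (variable name is an assumption)
QDRANT_API_KEY=
```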
MCP Configuration
Add to your Claude Desktop config (~/.claude/config.json):
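The mcpServers key is the standard MCP client configuration shape; the server name and command below are illustrative placeholders:

```json
{
  "mcpServers": {
    "qdrant-code-search": {
      "command": "qdrant-mcp-server",
      "env": {
        "OPENAI_API_KEY": "sk-...",
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```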
Usage
Command Line Interface
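Typical invocations might look like the following. The subcommand names are assumptions; --batch-size and --delay are the flags referenced under Troubleshooting below:

```bash
# Index a codebase (subcommand names are illustrative)
qdrant-mcp-server index ./src --batch-size 10

# One-off semantic search from the terminal
qdrant-mcp-server search "where are user permissions checked?"

# Watch for file changes and reindex in the background
qdrant-mcp-server watch ./src
```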
In Claude
Once configured, you can use natural language queries:
"Find all authentication code"
"Show me files that handle user permissions"
"What code is similar to the PaymentService class?"
"Find all API endpoints related to users"
"Show me error handling patterns in the codebase"
Programmatic Usage
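A sketch of what programmatic use might look like. The CodeSearchClient class, its options, and the result shape are assumptions for illustration, not this package's documented API:

```typescript
// Hypothetical client API -- names are illustrative assumptions.
import { CodeSearchClient } from "qdrant-mcp-server";

const client = new CodeSearchClient({
  qdrantUrl: process.env.QDRANT_URL ?? "http://localhost:6333",
  openaiApiKey: process.env.OPENAI_API_KEY!,
});

// Index a directory, then run a meaning-based query against it
await client.index("./src");
const results = await client.search("user permission checks", { limit: 5 });

for (const r of results) {
  console.log(`${r.file}:${r.line} score=${r.score.toFixed(2)}`);
}
```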
Architecture
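At a high level (inferred from the feature list above), indexing and querying flow like this:

```
index:  source files -> chunk -> OpenAI embeddings -> Qdrant collection
query:  MCP client (Claude) -> MCP server -> embed query -> Qdrant search -> ranked snippets
```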
Advanced Configuration
Custom File Processors
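The interface below is a hypothetical illustration of what a custom processor could look like; it is not this package's actual extension API:

```typescript
// Hypothetical extension point -- illustrative only.
interface FileProcessor {
  // Glob patterns this processor handles, e.g. ["**/*.sql"]
  patterns: string[];
  // Split a file into chunks suitable for embedding
  chunk(path: string, content: string): { text: string; startLine: number }[];
}
```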
Embedding Models
Multiple embedding providers are supported.
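A sketch of how provider selection might be configured; the variable and model names below are assumptions, not documented settings (text-embedding-3-small is a current OpenAI embedding model):

```bash
# Hypothetical settings -- names are assumptions
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small
```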
Performance Optimization
Batch Processing
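Embeddings are requested in batches; smaller batches reduce rate-limit pressure at the cost of throughput. Using the --batch-size and --delay flags referenced under Troubleshooting (the subcommand name is illustrative):

```bash
qdrant-mcp-server index ./src --batch-size 5 --delay 1000
```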
Incremental Indexing
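A common way to implement incremental indexing is to hash file contents and skip unchanged files. The sketch below shows the general idea; it is not necessarily how this project implements it:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Skip files whose content hash is unchanged since the last run.
function needsReindex(path: string, lastHashes: Map<string, string>): boolean {
  const hash = createHash("sha256").update(readFileSync(path)).digest("hex");
  if (lastHashes.get(path) === hash) return false; // unchanged -- skip
  lastHashes.set(path, hash);
  return true;
}
```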
Cost Estimation
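As a rough worked example: at the time of writing, OpenAI's text-embedding-3-small costs about $0.02 per million tokens (check current pricing). A 10,000-file codebase averaging 500 tokens per file is about 5M tokens, so a full index costs on the order of $0.10; subsequent incremental runs only pay for changed files.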
Monitoring
Web UI (Coming Soon)
Logs
Metrics
Files indexed
Tokens processed
Search queries per minute
Average response time
Cache hit rate
Troubleshooting
Common Issues
"Connection refused" error
Ensure Qdrant is running:
```bash
docker ps
```
Check that QDRANT_URL is correct
Verify firewall settings
"Rate limit exceeded" error
Reduce the batch size:
```bash
--batch-size 5
```
Add a delay between requests:
```bash
--delay 1000
```
Switch to an OpenAI tier with higher rate limits
"Out of memory" error
Process fewer files at once
Increase Node.js memory:
```bash
NODE_OPTIONS="--max-old-space-size=4096"
```
Use streaming mode for large files
Debug Mode
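Assuming the server follows the common Node.js DEBUG convention (the namespace below is an assumption):

```bash
DEBUG=qdrant-mcp:* qdrant-mcp-server index ./src
```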
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Development Setup
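A typical Node.js setup flow; the repository URL is a placeholder and the npm scripts are assumptions:

```bash
# Repository URL omitted here -- use the project's actual repo
git clone <repository-url>
cd qdrant-mcp-server
npm install
npm test
```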
License
MIT License - see LICENSE for details.
Acknowledgments
Built for the Model Context Protocol
Powered by Qdrant vector database
Embeddings by OpenAI
Originally developed for KinDash
Support
Email: support@kindash.app
Discord: Join our community
Issues: GitHub Issues
Docs: Full Documentation