Pinecone MCP Server
A Model Context Protocol server for Pinecone vector database operations.
This MCP server provides programmatic access to Pinecone vector database operations, enabling AI assistants to perform semantic search, manage vectors, and interact with your knowledge base through standardized MCP tools.
Features
Tools
🔍 query_vectors
Perform semantic search on your Pinecone database
Input: Text query, optional top_k and include_metadata parameters
Output: JSON response with matching vectors and similarity scores
Use case: Find relevant documents based on natural language queries
➕ upsert_vectors
Add new documents to your vector database
Input: Array of texts, optional metadata and IDs
Output: Confirmation of successful vector insertion
Use case: Index new documents or update existing knowledge base
🗑️ delete_vectors
Remove vectors from your database
Input: Array of vector IDs or delete_all flag
Output: Confirmation of deletion operation
Use case: Clean up outdated information or reset database
📊 get_index_stats
Monitor your Pinecone database
Input: None
Output: Index statistics including vector count and configuration
Use case: Track database usage and performance
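As a concrete illustration of the parameters listed above, a query_vectors call from an MCP client might carry arguments like the following (the query text and values are hypothetical; the name/arguments envelope is the standard MCP tool-call shape):

```json
{
  "name": "query_vectors",
  "arguments": {
    "query": "how do I rotate my API key?",
    "top_k": 3,
    "include_metadata": true
  }
}
```

The response is a JSON payload of matching vectors with similarity scores, as described under query_vectors above.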
Quick Start
Prerequisites
Node.js 18+
Pinecone account and API key
OpenAI API key (for embeddings)
Installation
Clone and install dependencies:
Build the server:
Configure environment variables:
Run the server:
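Taken together, the steps above might look like this. The repository URL and the build/index.js entry point are placeholders, and the environment variable names are assumptions; the npm commands are the ones referenced under Troubleshooting.

```shell
# Clone and install dependencies (repository URL is a placeholder)
git clone https://github.com/your-org/pinecone-mcp.git
cd pinecone-mcp
npm install

# Build the server
npm run build

# Configure environment variables (variable names are assumptions)
export PINECONE_API_KEY="your-pinecone-key"
export OPENAI_API_KEY="your-openai-key"

# Run the server (entry point path is an assumption)
node build/index.js
```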
MCP Configuration
For Claude Desktop
Add to your MCP configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%/Claude/claude_desktop_config.json
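A minimal entry might look like the sketch below. The "pinecone" server name, the install path, and the environment variable names are assumptions; adjust them to your setup.

```json
{
  "mcpServers": {
    "pinecone": {
      "command": "node",
      "args": ["/path/to/pinecone-mcp/build/index.js"],
      "env": {
        "PINECONE_API_KEY": "your-pinecone-key",
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
```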
For Cline (VSCode)
Add to: %APPDATA%/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
Docker Deployment
Build Docker Image
Run Container
Docker Compose
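A Compose file for this setup could be sketched as follows. The service name and image details are assumptions; MCP servers typically communicate over stdio, hence stdin_open and tty.

```yaml
services:
  pinecone-mcp:
    build: .
    environment:
      - PINECONE_API_KEY=${PINECONE_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    stdin_open: true   # MCP servers speak over stdio
    tty: true
```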
Development
Project Structure
Development Commands
Adding New Tools
Define the tool's schema in the ListToolsRequestSchema handler
Implement the tool's logic in the CallToolRequestSchema handler
Update this README with documentation for the new tool
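The first two steps can be sketched in plain TypeScript. This is a simplified stand-in for the MCP SDK's request handlers, not the SDK API itself: the tools array plays the role of the ListToolsRequestSchema handler's response, and callTool plays the role of the CallToolRequestSchema handler's dispatch. The count_namespaces tool is hypothetical.

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Step 1: declare the tool's schema alongside the existing tools
// (in the real server, this lives in the ListToolsRequestSchema handler).
const tools: ToolDef[] = [
  { name: "query_vectors", description: "Semantic search over the index" },
  { name: "get_index_stats", description: "Index statistics" },
  { name: "count_namespaces", description: "Hypothetical new tool" },
];

// Step 2: route calls to the new tool's logic
// (in the real server, this lives in the CallToolRequestSchema handler).
function callTool(name: string): string {
  switch (name) {
    case "count_namespaces":
      return JSON.stringify({ namespaces: 0 }); // placeholder logic
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```

Step 3 is documentation only: add the new tool's input/output/use-case entry to the Features section above.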
API Keys Setup
Pinecone
Sign up at pinecone.io
Create a new project and index
Copy your API key from the dashboard
OpenAI
Sign up at platform.openai.com
Navigate to API Keys section
Create a new secret key
Troubleshooting
Common Issues
"Cannot find module" errors:
Ensure all dependencies are installed:
npm install
Check that the build completed successfully:
npm run build
Pinecone connection issues:
Verify API key is correct and has proper permissions
Check that your index exists and is accessible
Ensure your Pinecone environment/region is correct
OpenAI API errors:
Confirm API key is valid and has credits
Check rate limits and usage quotas
Verify the model name is correct (text-embedding-ada-002)
Debugging
Use the MCP Inspector for debugging:
This provides a web interface to test your MCP server interactively.
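Assuming the build output lands in build/index.js (adjust the path to your layout), the Inspector can be launched against the server with:

```shell
npx @modelcontextprotocol/inspector node build/index.js
```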
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
License
MIT License - see LICENSE file for details
Support
For issues and questions:
Open an issue on GitHub
Check the MCP documentation: https://modelcontextprotocol.io
Review Pinecone documentation: https://docs.pinecone.io