# MCP Server
A Model Context Protocol (MCP) server implementation with LLM integration and chat memory capabilities.
## Features
- **MCP Server**: Full Model Context Protocol server implementation
- **LLM Integration**: Support for OpenAI and Anthropic models
- **Chat Memory**: Persistent conversation storage and retrieval
- **Tool System**: Extensible tool framework for various operations
## Installation
- Clone this repository
- Install dependencies, either directly or using the development environment (see the sketch below)
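A sketch of both paths, assuming a standard pip-installable layout (the repository URL and the `dev` extra are placeholders):

```bash
# Clone the repository (placeholder URL)
git clone https://github.com/<your-org>/mcp-server.git
cd mcp-server

# Install runtime dependencies
pip install -e .

# Or install with development tools (assumes a "dev" extra is defined)
pip install -e ".[dev]"
```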
## Configuration
Set up your API keys as environment variables:
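`OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are the names conventionally read by the OpenAI and Anthropic SDKs; adjust if this server expects different ones:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```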
Or create a `.env` file:
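Using the same assumed variable names:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```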
## Usage

### Running the MCP Server
Start the server from the command line, or run it directly with Python:
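A minimal sketch; the console-script name `mcp-server` is a placeholder, and the direct invocation assumes `mcp.py` is the entry module:

```bash
# Via a console script (hypothetical name)
mcp-server

# Or directly with Python
python mcp.py
```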
## Available Tools
The server provides the following tools:
### Echo Tool
Simple echo functionality for testing.
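A hypothetical invocation; the tool and argument names below are assumptions, not taken from the actual schema:

```json
{
  "name": "echo",
  "arguments": { "message": "Hello, MCP!" }
}
```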
### Chat Memory Tools
#### Store Memory
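Stores a message in the conversation history. A sketch of a request payload, with assumed tool and field names:

```json
{
  "name": "store_memory",
  "arguments": {
    "conversation_id": "demo-1",
    "role": "user",
    "content": "My favorite color is green."
  }
}
```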
#### Get Memory
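Retrieves stored messages for a conversation, again with assumed names:

```json
{
  "name": "get_memory",
  "arguments": { "conversation_id": "demo-1" }
}
```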
### LLM Chat Tool
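Sends a prompt to one of the configured LLM providers. A sketch with assumed tool and parameter names (the model value is taken from the supported list below):

```json
{
  "name": "llm_chat",
  "arguments": {
    "model": "gpt-4o",
    "prompt": "Summarize the Model Context Protocol in one sentence."
  }
}
```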
#### Supported Models
**OpenAI Models:**
- gpt-3.5-turbo
- gpt-4
- gpt-4-turbo
- gpt-4o
**Anthropic Models:**
- claude-3-haiku-20240307
- claude-3-sonnet-20240229
- claude-3-opus-20240229
## Development
### Running Tests
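Assuming the project uses pytest (the test runner is not specified here):

```bash
pytest
```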
### Code Formatting
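Assuming Black as the formatter (a common choice; substitute the project's actual tool):

```bash
black .
```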
### Type Checking
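Assuming mypy:

```bash
mypy .
```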
## Architecture

### Components
- `mcp.py`: Main MCP server implementation and tool registration
- `llmintegrationsystem.py`: LLM provider integration and chat completions
- `chatmemorysystem.py`: Persistent conversation storage with SQLite
### Database Schema
The chat memory system uses SQLite with two main tables:

- `memories`: Individual conversation messages and metadata
- `conversation_summaries`: Conversation overviews and statistics
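An illustrative sketch of what that schema might look like; only the two table names come from this README, and every column is an assumption about the implementation:

```python
import sqlite3

# Hypothetical schema -- the table names match the README,
# but the columns are illustrative only.
conn = sqlite3.connect("chat_memory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS memories (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation_id TEXT NOT NULL,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS conversation_summaries (
    conversation_id TEXT PRIMARY KEY,
    summary TEXT,
    message_count INTEGER DEFAULT 0,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")
conn.close()
```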
## Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
## License
MIT License - see LICENSE file for details.
## Troubleshooting

### Common Issues
**API Key Errors**: Ensure your API keys are properly set in environment variables.

**Database Permissions**: The server creates a `chat_memory.db` file in the current directory; ensure it has write permissions.

**Port Conflicts**: The MCP server communicates over stdio by default, so no port configuration is needed.
### Logging
Enable debug logging:
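If the server uses Python's standard logging module (an assumption), debug output can be enabled before startup:

```python
import logging

# Route DEBUG-level messages from all loggers to stderr.
logging.basicConfig(level=logging.DEBUG)
```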