MCPeasy
The easiest way to set up and self-host your own multi-MCP server, with streamable HTTP transport, multi-client support, and key management
A production-grade multi-tenant MCP server that provides different tools and configurations to different clients using API key-based routing.
Architecture
FastMCP 2.6: Core MCP implementation following https://gofastmcp.com/llms-full.txt
FastAPI: Web framework with API key-based URL routing
PostgreSQL: Multi-tenant data storage with SQLAlchemy
Streamable HTTP: All subservers provide streamable transport
Multi-tenancy: Clients can have multiple API keys with tool-specific configurations
Key Features
Multi-tenant design: Clients manage multiple rotatable API keys
Per-tool configuration: Each client can configure tools differently (e.g., custom email addresses)
Dynamic tool sets: Different clients get different tool combinations
Tool auto-discovery: Modular tool system with automatic registration
Custom tools support: Add organization-specific tools in your fork with namespaced directories
Per-resource configuration: Each client can access different resources with custom settings
Dynamic resource sets: Different clients get different resource combinations
Resource auto-discovery: Modular resource system with automatic registration
Enhanced tool responses: Multiple content types (text, JSON, markdown, file) for optimal LLM integration
Environment-based discovery: Simple environment variable configuration for tool/resource enablement
Shared infrastructure: Database, logging, and configuration shared across servers
Admin interface: Web-based client and API key management with CORE/CUSTOM tool source badges
Production ready: Built for Fly deployment with Neon database
High performance: Background task processing, request timeouts, configuration caching, and optimized database connections
Quick Start
Setup environment: copy .env.example and adjust values as needed
Start all services with Docker Compose (recommended)
Access the services (a minimal command sketch follows below)
That's it! Docker Compose handles all dependencies, database setup, and migrations automatically.
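A minimal command sequence for the steps above (a sketch, assuming a standard Docker Compose setup; service names may differ in your checkout):

```
# copy the example environment file and adjust values as needed
cp .env.example .env

# build and start all services (app, database, automatic migrations)
docker compose up --build
```

Once up, the admin UI is on localhost:3000, the MCP server on localhost:8000, and Adminer on localhost:8080 (see the sections below).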
API Endpoints
GET /health - Health check
GET /admin - Admin login page
GET /admin/clients - Client management dashboard
POST /admin/clients - Create new client
GET /admin/clients/{id}/keys - Manage API keys for client
POST /admin/clients/{id}/keys - Generate new API key
GET /admin/clients/{id}/tools - Configure tools for client
GET|POST /mcp/{api_key} - MCP endpoint (streamable)
Client & API Key Management
Creating Clients
Visit / (localhost:3000 in development) and log in with the superadmin password
Create a client with a name and description
Generate API keys for the client
Configure tools and resources with their settings per client
Managing API Keys
Multiple keys per client: Production, staging, development keys
Key rotation: Generate new keys without losing configuration
Expiry management: Set expiration dates for keys
Secure deletion: Deactivate compromised keys immediately
Tool Configuration
Each client must explicitly configure tools to access them:
Simple tools: echo, get_weather - click "Add" to enable (no configuration needed)
Configurable tools: send_email - click "Configure" to set the from address and SMTP settings
Per-client settings: Same tool, different configuration per client
Strict access control: Only configured tools are visible and callable
Available Tools
Namespaced Tool System: All tools are organized in namespaces for better organization and conflict avoidance:
Core Tools (namespace: core/):
core/echo - Simple echo tool for testing (no configuration needed)
core/weather - Weather information (no configuration needed)
core/send_email - Send emails (requires: from_email, optional: smtp_server)
core/datetime - Date and time utilities
core/scrape - Web scraping functionality
core/youtube_lookup - YouTube video information
Custom Tools (namespace: e.g. myorg/):
myorg/send_invoice - A custom tool would live here
Custom tools can be added in organization-specific namespaces
Each deployment can control which tools are available via environment configuration
Custom tools show with purple "CUSTOM" badges in the admin UI vs blue "CORE" badges
Tool Discovery:
Use TOOLS=__all__ to automatically discover and enable all available tools
Or specify exact tools: TOOLS='core/echo,core/weather,myorg/send_invoice'
Directory structure: src/tools/{namespace}/{tool_name}/tool.py
Tool Call Tracking
All tool executions are automatically tracked in the database for monitoring and auditing:
Complete tracking: Input arguments, output data, execution time, and errors
Per-client logging: Track usage patterns by client and API key
Performance monitoring: Execution time tracking in milliseconds
Error logging: Failed tool calls with detailed error messages
Automatic: No configuration needed - all tool calls are logged transparently
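As an illustration, a query like the following could surface slow tools from the tracking table (a hypothetical sketch: the model lives in src/models/tool_call.py, and the class and field names used here are assumptions):

```python
# Hypothetical sketch: ToolCall, tool_name, and duration_ms are assumed names;
# check src/models/tool_call.py for the real model.
from sqlalchemy import func, select

from src.models.tool_call import ToolCall

async def slowest_tools(session, limit: int = 10):
    """Average execution time per tool, slowest first."""
    stmt = (
        select(ToolCall.tool_name, func.avg(ToolCall.duration_ms).label("avg_ms"))
        .group_by(ToolCall.tool_name)
        .order_by(func.avg(ToolCall.duration_ms).desc())
        .limit(limit)
    )
    result = await session.execute(stmt)
    return result.all()
```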
Resource Configuration
Each client must explicitly configure resources to access them:
Simple resources: click "Add" to enable with default settings
Configurable resources: click "Configure" to set category filters, article limits, search permissions
Per-client settings: Same resource, different configuration per client (e.g., different category access)
Strict access control: Only configured resources are visible and accessible
Available Resources
Namespaced Resource System: Resources follow the same namespacing pattern as tools for better organization:
Custom Resources (namespace: e.g. myorg/):
myorg/knowledge - Example of a namespaced resource
Custom resources can be added in organization-specific namespaces
Each deployment can control which resources are available via environment configuration
Resource Discovery:
Use RESOURCES=__all__ to automatically discover and enable all available resources
Or specify exact resources: RESOURCES='myorg/product_catalog'
Directory structure: src/resources/{namespace}/{resource_name}/resource.py
Resource Auto-Seeding
Resources can automatically seed initial data when their table is empty, perfect for:
Reference data: Countries, categories, product catalogs
Demo content: Sample articles, documentation
Initial configuration: Default settings, presets
How It Works:
Resource checks if its table is empty on first initialization
If empty, loads seed data from configured source (CSV/JSON file or URL)
Inserts data into database with proper field mapping
Only runs once - subsequent startups skip seeding
Setup Example:
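A minimal sketch of what the setup might look like (seed_source is the seeding hook mentioned below; the class shape and file name are illustrative assumptions):

```python
class ProductCatalogResource:  # would subclass this repo's resource base class
    namespace = "myorg"
    name = "product_catalog"

    # When the backing table is empty on first startup, rows are seeded
    # from this source (CSV/JSON file or URL); later startups skip it.
    seed_source = "seeds/products.csv"
```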
Supported Formats:
CSV: Column names match model fields, empty strings become NULL
JSON: Array of objects with field names as keys
Remote URLs: Fetch seed data from CDNs or APIs
Example Seed Files:
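For instance (file name and fields invented for illustration), a seeds/products.csv where the trailing empty value becomes NULL:

```
sku,name,price
A-100,Widget,9.99
A-101,Gadget,
```

and the equivalent JSON array of objects:

```json
[
  {"sku": "A-100", "name": "Widget", "price": 9.99},
  {"sku": "A-101", "name": "Gadget", "price": null}
]
```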
Configuration
Environment Variables
See .env.example for the full list
Multi-Tenant Architecture
The system uses three main entities:
Clients: Organizations or users (e.g., "ACME Corp") with UUID identifiers
API Keys: Multiple rotatable keys per client
Tool Configurations: Per-client tool settings stored as JSON with strict access control
Resource Configurations: Per-client resource settings stored as JSON with strict access control
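In SQLAlchemy terms, the relationships could be sketched roughly like this (illustrative only; the real models live in src/models/client.py and src/models/configuration.py and will differ in detail):

```python
# Rough sketch of the entity relationships; names and columns are assumptions.
import uuid

from sqlalchemy import JSON, ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Client(Base):
    __tablename__ = "clients"
    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    name: Mapped[str] = mapped_column(String(255))

class APIKey(Base):
    __tablename__ = "api_keys"
    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    client_id: Mapped[uuid.UUID] = mapped_column(ForeignKey("clients.id"))
    key: Mapped[str] = mapped_column(String(64), unique=True)  # rotatable per client

class ToolConfiguration(Base):
    __tablename__ = "tool_configurations"
    id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    client_id: Mapped[uuid.UUID] = mapped_column(ForeignKey("clients.id"))
    tool_name: Mapped[str] = mapped_column(String(255))  # e.g. "core/send_email"
    settings: Mapped[dict] = mapped_column(JSON)  # per-client settings blob
```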
Custom Tools Development
MCPeasy supports adding organization-specific tools using a simplified namespaced directory structure. When forking this repository, you can add your custom tools directly without worrying about merge conflicts.
Quick Custom Tool Setup
Fork the repository: Create your own fork of mcpeasy
Create namespace directory: mkdir -p src/tools/yourorg
Add your tool: Create src/tools/yourorg/yourtool/tool.py with your tool implementation
Auto-discovery: The tool is automatically discovered as yourorg/yourtool
Configure environment: Use TOOLS=__all__ to enable all tools automatically, or specify exact tools: TOOLS='core/echo,yourorg/yourtool'
Enable for clients: Use the admin UI to configure tools per client
Stay updated: Pull upstream changes from the mcpeasy main branch when needed
Directory Structure
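Based on the src/tools/{namespace}/{tool_name}/tool.py convention above, a fork might look like this (tool names illustrative):

```
src/tools/
├── core/
│   ├── echo/tool.py
│   └── send_email/tool.py
└── yourorg/
    └── yourtool/
        └── tool.py
```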
Enhanced Tool Response Types
Custom tools support multiple content types for optimal LLM integration:
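As a sketch (the exact return convention is defined by this repo's tool base class; the shapes below are assumptions — see the templates/ directory for the authoritative examples):

```python
# Hypothetical sketch: the {"type": ..., "content": ...} return shape is assumed.
class ReportTool:  # in the real repo this would subclass the tool base class
    async def execute(self, arguments: dict, config: dict):
        data = {"rows": 42, "status": "ok"}
        # A tool can answer with different content types depending on what the
        # caller asked for: plain text, structured JSON, or markdown.
        if arguments.get("format") == "markdown":
            return {"type": "markdown", "content": f"**Rows:** {data['rows']}"}
        if arguments.get("format") == "json":
            return {"type": "json", "content": data}
        return {"type": "text", "content": f"Report complete: {data['rows']} rows"}
```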
Running Synchronous Code in Tools
If your custom tool needs to run synchronous (blocking) code, use asyncio.to_thread() to avoid blocking the async event loop:
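A minimal sketch (the tool class shape and return format follow this repo's conventions only loosely; asyncio.to_thread itself is standard library):

```python
import asyncio

import requests  # a typical synchronous, blocking HTTP library

class LegacyLookupTool:  # in the real repo this would subclass the tool base class
    async def execute(self, arguments: dict, config: dict):
        # requests.get is blocking; run it on a worker thread so the event
        # loop stays free to serve other clients' tool calls.
        response = await asyncio.to_thread(
            requests.get, arguments["url"], timeout=30
        )
        return {"type": "text", "content": response.text}
```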
Important: Never use blocking operations directly in the execute() method, as they will block the entire event loop and affect other tool executions.
Custom Resources Development
MCPeasy supports adding organization-specific resources with automatic data seeding capabilities. Just like with tools, add your custom resources directly to your fork.
Quick Custom Resource Setup
In your fork: Navigate to your mcpeasy fork
Create namespace directory: mkdir -p src/resources/yourorg
Add your resource: Create src/resources/yourorg/yourresource/resource.py with your implementation
Auto-discovery: The resource is automatically discovered as yourorg/yourresource
Configure environment: Use RESOURCES=__all__ to enable all resources automatically, or specify exact resources: RESOURCES='knowledge,yourorg/yourresource'
Optional seeding: Add seed_source and a seeds/ directory for initial data
Enable for clients: Use the admin UI to configure resources per client
Directory Structure
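Following the src/resources/{namespace}/{resource_name}/resource.py convention, with an optional seeds/ directory (exact placement of seeds/ is an assumption):

```
src/resources/
└── yourorg/
    └── yourresource/
        ├── resource.py
        └── seeds/
            └── yourresource.csv
```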
Custom Resource with Auto-Seeding
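A fuller sketch combining the pieces above (class, method, and model names are assumptions; the templates/ directory has the canonical version):

```python
# Sketch only: base class, read() hook, and KnowledgeArticle are assumed names.
from sqlalchemy import select

from src.models.knowledge import KnowledgeArticle  # hypothetical model import

class YourResource:  # in the real repo this would subclass the resource base class
    namespace = "yourorg"
    name = "yourresource"

    # Seed the backing table from this file (CSV or JSON, local path or URL)
    # the first time it is found empty; later startups skip seeding.
    seed_source = "seeds/yourresource.csv"

    async def read(self, session, config: dict):
        """Return rows, honoring per-client settings such as category filters."""
        stmt = select(KnowledgeArticle)
        if categories := config.get("categories"):
            stmt = stmt.where(KnowledgeArticle.category.in_(categories))
        result = await session.execute(stmt)
        return result.scalars().all()
```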
Templates and Documentation
Templates: Complete tool/resource templates in the templates/ directory with auto-seeding examples
Best practices: Examples show proper dependency management, configuration, and data seeding
Namespace organization: Clean separation between core and custom tools/resources
Environment variable discovery: Simple TOOLS and RESOURCES configuration
Seed data examples: CSV and JSON seed file templates included
Development
Docker Development (Recommended)
Live reload on both frontend and backend
Database Inspector (Adminer)
When running with Docker Compose, Adminer provides a lightweight web interface to inspect your PostgreSQL database:
URL: http://localhost:8080
Login credentials:
Server: db
Username: postgres
Password: postgres
Database: mcp
Features:
Browse all tables (clients, api_keys, tool_configurations, resource_configurations, tool_calls)
View table data and relationships
Run SQL queries
Export data
Monitor database schema changes
Analyze tool usage patterns and performance metrics
Local Development
Dependencies: Managed with uv
Code structure: Modular design with SQLAlchemy models, session auth, admin UI
Database: PostgreSQL with async SQLAlchemy and Alembic migrations
Authentication: Session-based admin authentication with secure cookies
Migrations: Automatic database migrations with Alembic
Testing: Run the development server with auto-reload
Testing MCP Endpoints
Using MCP Inspector (Recommended)
Get token URL: From admin dashboard, copy the MCP URL for your token
Install inspector: npx @modelcontextprotocol/inspector
Open inspector: Visit http://localhost:6274 in a browser (include proxy auth if needed, following the instructions printed at inspector launch)
Add server: Enter your MCP URL: http://localhost:8000/mcp/{token}
Configure tools and resources: In admin interface, add/configure tools and resources for your client
Test functionality: Click on configured tools and resources to test them (unconfigured items won't appear)
✅ Verified Working: The MCP Inspector successfully connects and displays only configured tools and resources!
Manual Testing
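Without the inspector, you can exercise the streamable HTTP endpoint directly. A minimal sketch using the MCP JSON-RPC initialize call (the protocol version string and response handling are simplified assumptions):

```python
import requests

# Substitute a real API key from the admin dashboard.
MCP_URL = "http://localhost:8000/mcp/{your_api_key}"

response = requests.post(
    MCP_URL,
    headers={
        "Content-Type": "application/json",
        # Streamable HTTP servers may answer with JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
    },
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "manual-test", "version": "0.1"},
        },
    },
)
print(response.status_code, response.text)
```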
Database Migrations
The system uses Alembic for database migrations with automatic execution on Docker startup for the best developer experience.
Migration Workflow (Simplified)
Available Migration Commands
The ./migrate.sh script provides all migration functionality.
How It Works
Development: Use ./migrate.sh create "message" to generate migration files
Automatic Application: Migrations run automatically when Docker containers start
No Manual Steps: The Docker containers handle alembic upgrade head on startup
Database Dependency: Docker waits for the database health check before running migrations
Volume Mounting: Migration files are immediately available in containers via volume mounts
Model Organization
Models are organized in separate files by domain:
src/models/base.py - SQLAlchemy Base class
src/models/client.py - Client and APIKey models
src/models/configuration.py - Tool and Resource configurations
src/models/knowledge.py - Knowledge base models
src/models/tool_call.py - Tool call tracking and auditing
Migration Workflow
Make model changes in the appropriate model files
Generate migration: The system auto-detects changes and creates migration files
Review migration: Check the generated SQL in src/migrations/versions/
Deploy: Migrations run automatically on startup in production
Production Migration Behavior
✅ Automatic execution: Migrations run on app startup
✅ Safe rollouts: Failed migrations prevent app startup
✅ Version tracking: Database tracks current migration state
✅ Idempotent: Safe to run multiple times
Performance & Scalability
The system is optimized for production workloads with several performance enhancements:
Queue-based execution: Bounded concurrency with configurable worker pools prevents server overload
Fair scheduling: FIFO queue ensures all clients get served during traffic bursts
Background processing: Tool call logging moved to background tasks for faster response times
Extended timeouts: 3-minute timeouts support long-running tools (configurable)
Configuration caching: 5-minute TTL cache reduces database queries for configuration lookups
Connection pooling: Optimized PostgreSQL connection management with pre-ping validation
Multi-worker setup: 2 workers optimized for Fly.io deployment with automatic recycling
Queue monitoring: Real-time queue metrics available at the /metrics/queue endpoint
Queue Configuration
Control tool execution concurrency and queue behavior:
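Concretely, this is environment-variable driven; the variable names below are hypothetical placeholders (check .env.example for the real ones):

```
# Hypothetical names — see .env.example for the actual variables
TOOL_QUEUE_MAX_WORKERS=10     # bounded concurrency for tool execution
TOOL_QUEUE_MAX_SIZE=100       # pending calls before clients see backpressure
TOOL_TIMEOUT_SECONDS=180      # matches the 3-minute default noted above
```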
Queue Monitoring
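The /metrics/queue endpoint mentioned above can be polled for real-time metrics; a sketch (the shape of the returned JSON is an assumption — inspect a live response for the actual fields):

```python
import requests

# Poll the queue metrics endpoint; response fields are not documented here.
metrics = requests.get("http://localhost:8000/metrics/queue").json()
print(metrics)  # e.g. queue depth, active workers, wait times
```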
Deployment
Platform: Fly.io is the recommended deployment target. NB! If your MCP client runs inside Cloudflare Workers, set force_https = false in your fly.toml; otherwise you may hit endless redirect loops on the MCP client side (see the fragment below)
Database: Any PostgreSQL will do; tested on Neon PostgreSQL with automatic migrations
Environment: Production-ready with proper error handling and migration safety
Workers: 2 Uvicorn workers with 1000 request recycling for optimal memory management
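For the Cloudflare Workers case above, the relevant fly.toml fragment looks roughly like this (internal_port is an assumption to match the local setup):

```toml
# fly.toml — disable the HTTPS redirect so Workers-based MCP clients
# don't loop on redirects
[http_service]
  internal_port = 8000  # assumption: match your app's listen port
  force_https = false
```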