# n8n MCP Server
A Model Context Protocol (MCP) server that allows AI assistants to interact with n8n workflows through natural language.
## Overview

This MCP server provides tools and resources for AI assistants to manage n8n workflows and executions. It allows assistants to:

- List, create, update, and delete workflows
- Activate and deactivate workflows
- Execute workflows and monitor their status
- Access workflow information and execution statistics
## Installation

### Prerequisites

- Node.js 18 or later
- n8n instance with API access enabled
### Install from npm
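Assuming the package is published under the name `n8n-mcp-server` (check the repository's `package.json` for the actual name), a global install looks like:

```shell
# Install globally so the server's CLI entry point is on your PATH
npm install -g n8n-mcp-server
```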
### Install from source
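A typical from-source setup looks like the following; the repository URL is left as a placeholder, so substitute the actual one:

```shell
# Clone the repository (replace <repository-url> with the actual URL)
git clone <repository-url> n8n-mcp-server
cd n8n-mcp-server

# Install dependencies and compile TypeScript into build/
npm install
npm run build
```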
### Docker Installation
You can also run the server using Docker:
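For example, assuming the repository provides a Dockerfile (the image name here is an illustrative choice):

```shell
# Build a local image from the repository, then run it with your .env configuration
docker build -t n8n-mcp-server .
docker run --env-file .env n8n-mcp-server
```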
## Configuration

Create a `.env` file in the directory where you'll run the server, using `.env.example` as a template:
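For example:

```shell
# Copy the template, then edit it with your values
cp .env.example .env
```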
Configure the following environment variables:
| Variable | Description | Example |
|----------|-------------|---------|
| `N8N_API_URL` | Full URL of the n8n API, including `/api/v1` | `http://localhost:5678/api/v1` |
| `N8N_API_KEY` | API key for authenticating with n8n | `your_api_key` |
| `N8N_WEBHOOK_USERNAME` | Username for webhook authentication (if using webhooks) | `username` |
| `N8N_WEBHOOK_PASSWORD` | Password for webhook authentication | `password` |
| `DEBUG` | Enable debug logging (optional) | `true` or `false` |
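A filled-in `.env` might look like this (all values are placeholders; the `DEBUG` variable name is inferred from the debug-logging description above):

```ini
N8N_API_URL=http://localhost:5678/api/v1
N8N_API_KEY=your_api_key_here
N8N_WEBHOOK_USERNAME=your_webhook_user
N8N_WEBHOOK_PASSWORD=your_webhook_password
DEBUG=false
```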
### Generating an n8n API Key

1. Open your n8n instance in a browser
2. Go to Settings > API > API Keys
3. Create a new API key with appropriate permissions
4. Copy the key to your `.env` file
## Usage

### Running the Server
From the installation directory:
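Assuming you built from source into `build/`, you can start the server directly with Node:

```shell
node build/index.js
```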
Or if installed globally:
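If the npm package exposes a binary named `n8n-mcp-server` (an assumption; check the package's `bin` entry), simply run:

```shell
n8n-mcp-server
```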
### Integrating with AI Assistants

After building the server (`npm run build`), you need to configure your AI assistant (such as VS Code with the Claude extension, or the Claude Desktop app) to run it. This typically involves editing a JSON configuration file.
#### Example Configuration (e.g., in VS Code)
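A sketch of such a configuration, using the common `mcpServers` layout; the server key (`n8n`), the path, and the values are illustrative:

```json
{
  "mcpServers": {
    "n8n": {
      "command": "node",
      "args": ["/path/to/your/cloned/n8n-mcp-server/build/index.js"],
      "env": {
        "N8N_API_URL": "http://localhost:5678/api/v1",
        "N8N_API_KEY": "your_api_key_here"
      }
    }
  }
}
```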
**Key Points:**

- Replace `/path/to/your/cloned/n8n-mcp-server/` with the actual absolute path where you cloned and built the repository.
- Use the correct path separator for your operating system (forward slashes `/` for macOS/Linux, double backslashes `\\` for Windows).
- Ensure you provide the correct `N8N_API_URL` (including `/api/v1`) and `N8N_API_KEY`.
- The server needs to be built (`npm run build`) before the assistant can run the `build/index.js` file.
## Available Tools

The server provides the following tools:
### Using Webhooks

This MCP server supports executing workflows through n8n webhooks. To use this functionality:

1. Create a webhook-triggered workflow in n8n.
2. Set up Basic Authentication on your webhook node.
3. Use the `run_webhook` tool to trigger the workflow, passing just the workflow name.
Example:
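An assistant might invoke the tool with arguments shaped roughly like this (the `workflowName` parameter name is illustrative; the tool expects the workflow name):

```json
{
  "tool": "run_webhook",
  "arguments": {
    "workflowName": "my-webhook-workflow"
  }
}
```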
The webhook authentication is handled automatically using the `N8N_WEBHOOK_USERNAME` and `N8N_WEBHOOK_PASSWORD` environment variables.
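In effect, this amounts to a standard HTTP Basic `Authorization` header built from those two variables. A minimal sketch (not the server's actual code) of how such a header is derived:

```javascript
// Sketch: derive a Basic Auth header from webhook credentials,
// as read from N8N_WEBHOOK_USERNAME / N8N_WEBHOOK_PASSWORD.
function basicAuthHeader(user, pass) {
  // Basic auth is "Basic " + base64("user:pass")
  return "Basic " + Buffer.from(`${user}:${pass}`).toString("base64");
}

// Example with placeholder credentials:
console.log(basicAuthHeader("user", "pass")); // "Basic dXNlcjpwYXNz"
```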
### Workflow Management

- `workflow_list`: List all workflows
- `workflow_get`: Get details of a specific workflow
- `workflow_create`: Create a new workflow
- `workflow_update`: Update an existing workflow
- `workflow_delete`: Delete a workflow
- `workflow_activate`: Activate a workflow
- `workflow_deactivate`: Deactivate a workflow
### Execution Management

- `execution_run`: Execute a workflow via the API
- `run_webhook`: Execute a workflow via a webhook
- `execution_get`: Get details of a specific execution
- `execution_list`: List executions for a workflow
- `execution_stop`: Stop a running execution
## Resources

The server provides the following resources:

- `n8n://workflows/list`: List of all workflows
- `n8n://workflow/{id}`: Details of a specific workflow
- `n8n://executions/{workflowId}`: List of executions for a workflow
- `n8n://execution/{id}`: Details of a specific execution
## Development

### Building
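Compile the TypeScript sources (output lands in `build/`):

```shell
npm run build
```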
### Running in Development Mode
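Assuming the package defines a `dev` script (check `package.json`):

```shell
npm run dev
```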
### Testing
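Assuming a standard npm `test` script:

```shell
npm test
```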
### Linting
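Assuming a `lint` script is defined in `package.json`:

```shell
npm run lint
```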
## License
MIT