Searchcraft MCP Server

The Searchcraft MCP Server provides a suite of tools for managing your Searchcraft cluster's Documents, Indexes, Federations, Access Keys, and Analytics. It enables MCP clients, such as Claude Desktop, to be prompted in plain English to perform administrative actions such as setting up search indexes, creating access keys, ingesting documents, viewing analytics, searching indexes, and more.

Available Tools

The Searchcraft MCP Server provides two main categories of tools:

Engine API Tools

These tools provide direct access to your Searchcraft cluster's core functionality for managing indexes, documents, federations, authentication, and search operations.

Index Management
Tool Name | Description
create_index | Create a new index with the specified schema. This will empty the index if it already exists.
delete_index | Delete an index and all its documents permanently.
get_all_index_stats | Get document counts and statistics for all indexes.
get_index_schema | Get the schema definition for a specific index.
get_index_stats | Get statistics and metadata for a specific index (document count, etc.).
list_all_indexes | Get a list of all indexes in the Searchcraft instance.
patch_index | Make partial configuration changes to an index schema (search_fields, weight_multipliers, etc.).
update_index | Replace the entire contents of an existing index with a new schema definition.
Document Management
Tool Name | Description
add_documents | Add one or multiple documents to an index. Documents should be provided as an array of JSON objects.
delete_all_documents | Delete all documents from an index. The index will continue to exist after all documents are deleted.
delete_document_by_id | Delete a single document from an index by its internal Searchcraft ID (_id).
delete_documents_by_field | Delete one or several documents from an index by field term match (e.g., {id: 'xyz'} or {title: 'foo'}).
delete_documents_by_query | Delete one or several documents from an index by query match.
get_document_by_id | Get a single document from an index by its internal Searchcraft ID (_id).
Federation Management
Tool Name | Description
create_federation | Create or update a federation with the specified configuration.
delete_federation | Delete a federation permanently.
get_federation_details | Get detailed information for a specific federation.
get_federation_stats | Get document counts per index for a federation as well as the total document count.
get_organization_federations | Get a list of all federations for a specific organization.
list_all_federations | Get a list of all federations in the Searchcraft instance.
update_federation | Replace the current federation entity with an updated one.
Authentication & Key Management
Tool Name | Description
create_key | Create a new authentication key with specified permissions and access controls.
delete_all_keys | Delete all authentication keys on the Searchcraft cluster. Use with extreme caution!
delete_key | Delete a specific authentication key permanently.
get_application_keys | Get a list of all authentication keys associated with a specific application.
get_federation_keys | Get a list of all authentication keys associated with a specific federation.
get_key_details | Get detailed information for a specific authentication key.
get_organization_keys | Get a list of all authentication keys associated with a specific organization.
list_all_keys | Get a list of all authentication keys on the Searchcraft cluster.
update_key | Update an existing authentication key with new configuration.
Stopwords Management
Tool Name | Description
add_stopwords | Add custom stopwords to an index. These are added on top of the default language-specific dictionary.
delete_all_stopwords | Delete all custom stopwords from an index. This only affects custom stopwords, not the default language dictionary.
delete_stopwords | Delete specific custom stopwords from an index. This only affects custom stopwords, not the default language dictionary.
get_index_stopwords | Get all stopwords for an index, including both the default language dictionary and custom stopwords.
Synonyms Management
Tool Name | Description
add_synonyms | Add synonyms to an index. Synonyms only work with fuzzy queries, not exact match queries.
delete_all_synonyms | Delete all synonyms from an index.
delete_synonyms | Delete specific synonyms from an index by their keys.
get_index_synonyms | Get all synonyms defined for an index.
Search & Analytics
Tool Name | Description
get_measure_conversion | Get measurement conversion data with optional filtering and aggregation parameters. Requires ClickHouse if running locally.
get_measure_summary | Get measurement summary data with optional filtering and aggregation parameters. Requires ClickHouse if running locally.
get_search_results | Perform a search query using the Searchcraft API with support for fuzzy/exact matching, facets, and date ranges.
get_prelim_search_data | Get schema fields and facet information for a search index to understand available fields for constructing queries.
get_searchcraft_status | Get the current status of the Searchcraft search service.
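
Under the hood, an MCP client invokes any of these tools with a standard JSON-RPC tools/call message (in day-to-day use, Claude composes these calls for you from plain-English prompts). The sketch below shows roughly what a get_search_results call could look like; the argument names (index, query, mode) are illustrative assumptions, not the tool's confirmed input schema — use tools/list or get_prelim_search_data to see the actual fields.

// Sketch only: argument names below are assumptions, not the confirmed schema.
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_search_results",
    "arguments": {
      "index": "products",          // assumed: index to search
      "query": "espresso machine",  // assumed: search terms
      "mode": "fuzzy"               // assumed: fuzzy vs. exact matching
    }
  }
}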

Import Tools

These tools provide workflows for importing JSON data and automatically generating Searchcraft schemas. Perfect for quickly setting up new indexes from existing data sources.

Tool Name | Description
analyze_json_from_file | Read JSON data from a local file and analyze its structure to understand field types and patterns for Searchcraft index schema generation.
analyze_json_from_url | Fetch JSON data from a URL and analyze its structure to understand field types and patterns for Searchcraft index schema generation.
generate_searchcraft_schema | Generate a complete Searchcraft index schema from analyzed JSON structure, with customizable options for search fields, weights, and other index settings.
create_index_from_json | Complete workflow to create a Searchcraft index from JSON data. Fetches JSON from a URL or file, analyzes the structure, generates a schema, and creates the index in one step.
Import Tools Workflow

The import tools are designed to work together in a streamlined workflow:

  1. Analyze → Use analyze_json_from_file or analyze_json_from_url to examine your JSON data structure
  2. Generate → Use generate_searchcraft_schema to create a customized Searchcraft schema from the analysis
  3. Create → Use the Engine API create_index tool to create the index with your generated schema
  4. Import → Use add_documents to populate your new index with data

Or use the all-in-one approach:

  • One-Step → Use create_index_from_json to analyze, generate schema, and create the index all in one command
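
In an MCP client, the one-step path is a single tools/call; in practice you would simply ask Claude in plain English (for example, "Create an index from the JSON at this URL") and it issues the call for you. A rough sketch follows, with argument names (url, index_name) as illustrative assumptions rather than the tool's confirmed input schema:

// Sketch only: argument names are assumptions; check the tool's input schema via tools/list.
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "create_index_from_json",
    "arguments": {
      "url": "https://example.com/products.json",  // assumed: JSON source to analyze
      "index_name": "products"                     // assumed: name for the new index
    }
  }
}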

Getting Started

Environment Variables

Create a .env file at the project's root and fill in the values:

# Server Config
USER_AGENT=searchcraft-mcp-server/<project-version>
DEBUG=true
PORT=3100

# Searchcraft Config
ENDPOINT_URL= # The endpoint url of your Searchcraft Cluster
ADMIN_KEY= # The admin key (super user key) of your Searchcraft Cluster

.env sample

Installation & Setup

Make sure your environment has the correct version of node selected.

nvm use

Install dependencies with yarn

yarn

Build the server

yarn build

This creates two server versions:

  • dist/server.js - HTTP server for testing and remote deployment
  • dist/stdio-server.js - stdio server for Claude Desktop

Usage

Option 1: Claude Desktop (stdio)

For local use with Claude Desktop, use the stdio version, which provides better performance and reliability.

claude_desktop_config.json

{ "mcpServers": { "searchcraft": { "command": "node", "args": [ "/path/to/searchcraft-mcp-server/dist/stdio-server.js" ] } } }

The Claude Desktop config file can be found at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

If the file doesn't exist, create it.
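
The server reads its Searchcraft settings from environment variables (see the .env section above). If you prefer not to rely on a .env file in the project root, Claude Desktop can pass them directly via an env block — a sketch, assuming the stdio server picks up ENDPOINT_URL and ADMIN_KEY from its process environment (values are placeholders):

{
  "mcpServers": {
    "searchcraft": {
      "command": "node",
      "args": [
        "/path/to/searchcraft-mcp-server/dist/stdio-server.js"
      ],
      "env": {
        "ENDPOINT_URL": "https://your-cluster.searchcraft.io",
        "ADMIN_KEY": "your_admin_key_here"
      }
    }
  }
}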

Option 2: Claude Code

For use with Claude Code, use the CLI to configure the MCP server:

Basic setup:

# Add the Searchcraft MCP server to Claude Code
claude mcp add searchcraft -- node /path/to/searchcraft-mcp-server/dist/stdio-server.js

With environment variables:

# Add with your Searchcraft cluster configuration
claude mcp add searchcraft \
  --env ENDPOINT_URL=https://your-cluster.searchcraft.io \
  --env ADMIN_KEY=your_admin_key_here \
  -- node /path/to/searchcraft-mcp-server/dist/stdio-server.js

Configuration scopes:

  • --scope local (default): Available only to you in the current project
  • --scope project: Shared with team via .mcp.json file (recommended for teams)
  • --scope user: Available to you across all projects
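
With --scope project, Claude Code records the server in a .mcp.json file at the repository root so teammates share the same configuration. A minimal sketch of what that file might contain (the exact shape Claude Code writes may differ; the path is a placeholder):

{
  "mcpServers": {
    "searchcraft": {
      "command": "node",
      "args": [
        "/path/to/searchcraft-mcp-server/dist/stdio-server.js"
      ]
    }
  }
}

Note that committing secrets such as ADMIN_KEY to a shared .mcp.json is usually undesirable; keep them in the server's .env or add them via --env under local scope instead.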

Managing servers:

# List configured servers
claude mcp list

# Check server status (from within a Claude Code session)
/mcp

# Remove server
claude mcp remove searchcraft

Option 3: Open WebUI (via Pipelines)

Open WebUI supports MCP servers through its Pipelines framework. This requires creating a custom pipeline that bridges your MCP server to Open WebUI.

Step 1: Start the Searchcraft MCP HTTP server

yarn start # Starts HTTP server on port 3100
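
Before wiring up the pipeline, you can confirm the server is reachable (the same health endpoint is used in the Manual Testing section below):

# Should return a success response if the HTTP server is running
curl http://localhost:3100/health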

Step 2: Create an MCP Pipeline for Open WebUI

Create a file called searchcraft_mcp_pipeline.py:

""" title: Searchcraft MCP Pipeline author: Searchcraft Team version: 1.0.0 license: Apache-2.0 description: A pipeline that integrates Searchcraft MCP server with Open WebUI requirements: requests """ import requests import json from typing import List, Union, Generator, Iterator from pydantic import BaseModel class Pipeline: class Valves(BaseModel): MCP_SERVER_URL: str = "http://localhost:3100/mcp" ENDPOINT_URL: str = "" ADMIN_KEY: str = "" def __init__(self): self.name = "Searchcraft MCP Pipeline" self.valves = self.Valves() async def on_startup(self): print(f"on_startup:{__name__}") async def on_shutdown(self): print(f"on_shutdown:{__name__}") def pipe( self, user_message: str, model_id: str, messages: List[dict], body: dict ) -> Union[str, Generator, Iterator]: # This pipeline acts as a bridge between Open WebUI and your MCP server # You can customize this to handle specific Searchcraft operations # Example: If user mentions search operations, route to MCP server if any(keyword in user_message.lower() for keyword in ['search', 'index', 'document', 'searchcraft']): try: # Initialize MCP session init_payload = { "jsonrpc": "2.0", "id": 1, "method": "initialize", "params": { "protocolVersion": "2025-06-18", "capabilities": {}, "clientInfo": {"name": "open-webui-pipeline", "version": "1.0.0"} } } response = requests.post(self.valves.MCP_SERVER_URL, json=init_payload) if response.status_code == 200: # Add context about available Searchcraft tools enhanced_message = f""" {user_message} [Available Searchcraft MCP Tools: create_index, delete_index, add_documents, get_search_results, list_all_indexes, get_index_stats, create_key, delete_key, and 20+ more tools for managing Searchcraft clusters] """ return enhanced_message except Exception as e: print(f"MCP connection error: {e}") return user_message

Step 3: Install the Pipeline in Open WebUI

  1. Via Admin Panel:
    • Go to Admin Settings → Pipelines
    • Click "Add Pipeline"
    • Paste the pipeline code above
    • Configure the valves with your Searchcraft settings:
      • MCP_SERVER_URL: http://localhost:3100/mcp
      • ENDPOINT_URL: Your Searchcraft cluster URL
      • ADMIN_KEY: Your Searchcraft admin key
  2. Via Docker Environment:
    # Save the pipeline to a file and mount it
    docker run -d -p 3000:8080 \
      -v open-webui:/app/backend/data \
      -v ./searchcraft_mcp_pipeline.py:/app/backend/data/pipelines/searchcraft_mcp_pipeline.py \
      --name open-webui \
      ghcr.io/open-webui/open-webui:main

Step 4: Configure Open WebUI to use Pipelines

  1. Start Open WebUI with Pipelines support:
    # Using Docker Compose (recommended)
    services:
      openwebui:
        image: ghcr.io/open-webui/open-webui:main
        ports:
          - "3000:8080"
        volumes:
          - open-webui:/app/backend/data
        environment:
          - OPENAI_API_BASE_URL=http://pipelines:9099
          - OPENAI_API_KEY=0p3n-w3bu!
      pipelines:
        image: ghcr.io/open-webui/pipelines:main
        volumes:
          - pipelines:/app/pipelines
        environment:
          - PIPELINES_API_KEY=0p3n-w3bu!
  2. In Open WebUI Settings → Connections:
    • Set OpenAI API URL to your Pipelines instance
    • Enable the Searchcraft MCP Pipeline

Option 4: HTTP Server (for testing/remote deployment)

Start the HTTP server for testing, debugging, or remote deployment:

yarn start # Starts HTTP server on port 3100

To use the HTTP server with Claude Desktop, you'll need mcp-remote:

claude_desktop_config.json

{ "mcpServers": { "searchcraft": { "command": "npx", "args": [ "mcp-remote", "http://localhost:3100/mcp" ] } } }

Available Scripts

# Development
yarn dev          # Watch HTTP server
yarn dev:stdio    # Watch stdio server

# Production
yarn start        # Start HTTP server
yarn start:stdio  # Start stdio server

# Testing
yarn inspect      # Launch MCP inspector
yarn claude-logs  # View Claude Desktop logs

stdio vs HTTP: Which to Choose?

Feature | stdio (Recommended) | HTTP
Performance | ✅ Direct IPC, lower latency | ⚠️ HTTP overhead
Security | ✅ No exposed ports | ⚠️ Network port required
Simplicity | ✅ No port management | ⚠️ Port conflicts possible
Claude Desktop | ✅ Native support | ⚠️ Requires mcp-remote
Claude Code | ✅ Native support | ✅ Native support
Open WebUI | ❌ Not supported | ✅ Via Pipelines framework
Remote Access | ❌ Local only | ✅ Can deploy remotely
Testing | ⚠️ Requires MCP tools | ✅ Easy with curl/Postman
Multiple Clients | ❌ One client at a time | ✅ Multiple concurrent clients

Use stdio when:

  • Using Claude Desktop or Claude Code locally
  • You want the best performance
  • You prefer simplicity

Use HTTP when:

  • You need remote access
  • You want easy testing/debugging
  • You need multiple concurrent clients
  • You're deploying to a server
  • Using Open WebUI or other web-based interfaces

Debugging

Claude Desktop Logs

To view Claude Desktop's logs for debugging MCP connections:

yarn claude-logs

Testing with MCP Inspector

The MCP Inspector allows you to test your server tools interactively.

For stdio server (recommended):

yarn inspect
  • Choose Transport Type: stdio
  • Command: node dist/stdio-server.js

For HTTP server:

yarn start    # Start HTTP server first
yarn inspect
  • Choose Transport Type: Streamable HTTP
  • URL: http://localhost:3100/mcp

Manual Testing

Test HTTP server:

# Health check
curl http://localhost:3100/health

# Test MCP endpoint
curl -X POST http://localhost:3100/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}'

Test stdio server:

echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}' | node dist/stdio-server.js
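
To go a step further and list the available tools over stdio, you can pipe a full handshake (initialize, the initialized notification, then tools/list). A sketch — messages are newline-delimited JSON, and the server's exact response formatting may vary:

# Sketch: initialize, send the initialized notification, then request the tool list
printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-06-18","capabilities":{},"clientInfo":{"name":"test","version":"1.0.0"}}}' \
  '{"jsonrpc":"2.0","method":"notifications/initialized"}' \
  '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}' \
  | node dist/stdio-server.js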

Resources

Issues and Feature Requests

Visit https://github.com/searchcraft-inc/searchcraft-issues

License

Licensed under the Apache 2.0 License.
