The Web Research Assistant is a comprehensive MCP server offering 13 specialized tools for technical web research, package management, content extraction, and infrastructure monitoring.
**Search & Discovery:**

- **Web Search:** Federated search across multiple engines via SearXNG for general queries and recent information
- **Technical Content Search:** Find code examples, tutorials, and technical articles with time filtering
- **Image Search:** High-quality royalty-free stock photos, illustrations, and vectors via Pixabay
- **Package Management:** Search and retrieve detailed metadata (license, downloads, versions) across npm, PyPI, crates.io, and Go registries

**Content Extraction & Analysis:**

- **Web Crawling:** Extract full, cleaned page content from URLs with configurable character limits
- **Structured Data Extraction:** Extract tables, lists, fields (via CSS selectors), or JSON-LD from web pages into clean JSON
- **API Documentation:** Dynamically discover and crawl official API documentation with examples and explanations

**Development Tools:**

- **GitHub Analysis:** Repository health metrics, stars, forks, issues, commits, and development activity
- **Error Resolution:** Find Stack Overflow and GitHub solutions for stack traces and error messages with automatic language/framework detection
- **Technology Comparison:** Side-by-side comparison of 2-5 technologies with popularity metrics and performance insights
- **Changelog Access:** Release notes and breaking change detection for safe package upgrades

**Infrastructure Monitoring:**

- **Service Status Checks:** Instant health checks for 25+ popular services (Stripe, AWS, GitHub, OpenAI, etc.)
**Key Features:** All tools require a `reasoning` parameter for usage analytics and include automatic response size limiting (8 KB default), comprehensive error handling, and dynamic discovery for maximum flexibility.
Requires a local Docker instance to run SearXNG for federated search capabilities.
# Web Research Assistant MCP Server
Comprehensive Model Context Protocol (MCP) server that provides web research and discovery capabilities. Includes 13 tools for searching, crawling, and analyzing web content, powered by your local Docker SearXNG instance, the crawl4ai project, and the Pixabay API:

- `web_search` — federated search across multiple engines via SearXNG
- `search_examples` — find code examples, tutorials, and articles (defaults to recent content)
- `search_images` — find high-quality stock photos, illustrations, and vectors via Pixabay
- `crawl_url` — full page content extraction with advanced crawling
- `package_info` — detailed package metadata from npm, PyPI, crates.io, Go
- `package_search` — discover packages by keywords and functionality
- `github_repo` — repository health metrics and development activity
- `translate_error` — find solutions for error messages and stack traces from Stack Overflow (auto-detects CORS, fetch, and web errors)
- `api_docs` — auto-discover and crawl official API documentation with examples (works for any API - no hardcoded URLs)
- `extract_data` — extract structured data (tables, lists, fields, JSON-LD) from web pages with automatic detection
- `compare_tech` — compare technologies side-by-side with NPM downloads, GitHub stars, and aspect analysis (React vs Vue, PostgreSQL vs MongoDB, etc.)
- `get_changelog` — NEW! Get release notes and changelogs with breaking change detection (upgrade safely from version X to Y)
- `check_service_status` — NEW! Instant health checks for 25+ services (Stripe, AWS, GitHub, OpenAI, etc.) - "Is it down or just me?"
All tools feature comprehensive error handling, response size limits, usage tracking, and clear documentation for optimal AI agent integration.
## Quick Start
1. **Set up SearXNG** (5 minutes):

```bash
# Using Docker (recommended)
docker run -d -p 2288:8080 searxng/searxng:latest
```

Then configure search engines - see SEARXNG_SETUP.md for optimized settings.
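Before moving on, it's worth confirming the instance responds; a minimal smoke test, matching the port mapping above:

```bash
# Expect an HTML search page back from the local instance
curl -s "http://localhost:2288/search?q=hello" | head -n 5

# If you enable the JSON output format in SearXNG's settings.yml
# (an assumption about your config), you can also check:
curl -s "http://localhost:2288/search?q=hello&format=json" | head -c 200
```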
2. **Install the MCP server:**

```bash
uvx web-research-assistant
# or: pip install web-research-assistant
```

3. **Configure Claude Desktop** - add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "web-research-assistant": {
      "command": "uvx",
      "args": ["web-research-assistant"]
    }
  }
}
```

4. **Restart Claude Desktop** and start researching!
⚠️ For best results: Configure SearXNG with GitHub, Stack Overflow, and other code-focused search engines. See SEARXNG_SETUP.md for the recommended configuration.
## Prerequisites

### Required
- Python 3.10+
- A running SearXNG instance on `http://localhost:2288` (📖 see SEARXNG_SETUP.md)
⚠️ **IMPORTANT:** For best results, enable these search engines in SearXNG:

- GitHub, Stack Overflow, GitLab (for code search - critical!)
- DuckDuckGo, Brave (for web search)
- MDN, Wikipedia (for documentation)
- Reddit, HackerNews (for tutorials and discussions)
See SEARXNG_SETUP.md for the full optimized configuration
### Optional

- Pixabay API key for image search - Get free key
- Playwright browsers for advanced crawling (auto-installed with `crawl4ai-setup`)
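If the browsers were not picked up automatically, crawl4ai's setup helper can be run by hand - a sketch assuming crawl4ai is installed alongside the server:

```bash
pip install crawl4ai   # skip if already present as a dependency
crawl4ai-setup         # downloads the Playwright browser binaries
```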
### Developer Setup (if running from source)

You can also use `pip install -r requirements.txt` if you prefer pip over uv.
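The uv-based path isn't spelled out above; a plausible sketch, assuming you have a local clone and uv installed:

```bash
uv sync                        # create a venv and install dependencies
uv run web-research-assistant  # run from source (assumes this console script name)
```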
## Installation

### Option 1: Using uvx (Recommended - No installation needed!)

This runs the server directly from PyPI without installing it globally.
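The same command as in the Quick Start:

```bash
uvx web-research-assistant
```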
### Option 2: Install with pip
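As shown in the Quick Start comment:

```bash
pip install web-research-assistant
```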
### Option 3: Install with uv
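The exact uv command isn't shown here; a likely form, using uv's standard interfaces:

```bash
uv pip install web-research-assistant
# or, to install it as a standalone tool:
uv tool install web-research-assistant
```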
By default the server communicates over stdio, which makes it easy to wire into Claude Desktop or any other MCP host.
## MCP Client Configuration

### Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
#### Option 1: Using uvx (Recommended - No installation needed!)
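This is the Quick Start configuration again:

```json
{
  "mcpServers": {
    "web-research-assistant": {
      "command": "uvx",
      "args": ["web-research-assistant"]
    }
  }
}
```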
#### Option 2: Using installed package
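If the package is installed with pip, the console script can be called directly - a sketch assuming the entry point is named `web-research-assistant`:

```json
{
  "mcpServers": {
    "web-research-assistant": {
      "command": "web-research-assistant"
    }
  }
}
```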
### OpenCode

Add to `~/.config/opencode/opencode.json`:
#### Using uvx (Recommended)

#### Using installed package

### Development (Running from source)

For Claude Desktop:

For OpenCode:
Restart your MCP client afterwards. The MCP tools will be available immediately.
## Tool behavior
| Tool | When to use | Arguments |
| --- | --- | --- |
| `web_search` | Use first to gather recent information and URLs from SearXNG. Returns 1–10 ranked snippets with clickable URLs. | A search query (required), `reasoning` (required), an optional category (falls back to the configured default), and an optional result count (defaults to 5). |
| `search_examples` | Find code examples, tutorials, and technical articles. Optimized for technical content with optional time filtering. Perfect for learning APIs or finding usage patterns. | A query (required, e.g., "Python async examples"), `reasoning` (required), a content type (code/articles/both, defaults to both), a time filter (day/week/month/year/all, defaults to all), and an optional result count (defaults to 5). |
| `search_images` | Find high-quality royalty-free stock images from Pixabay. Returns photos, illustrations, or vectors. Requires the Pixabay API key environment variable. | A query (required, e.g., "mountain landscape"), `reasoning` (required), an image type (all/photo/illustration/vector, defaults to all), an orientation (all/horizontal/vertical, defaults to all), and an optional result count (defaults to 10). |
| `crawl_url` | Call immediately after search when you need the actual article body for quoting, summarizing, or extracting data. | A URL (required), `reasoning` (required), and an optional character limit (defaults to 8000 characters). |
| `package_info` | Look up specific npm, PyPI, crates.io, or Go package metadata including version, downloads, license, and dependencies. Use when you know the package name. | A package name (required), `reasoning` (required), and a registry (npm/pypi/crates/go, defaults to npm). |
| `package_search` | Search for packages by keywords or functionality (e.g., "web framework", "json parser"). Use when you need to find packages that solve a specific problem. | Search terms (required), `reasoning` (required), a registry (npm/pypi/crates/go, defaults to npm), and an optional result count (defaults to 5). |
| `github_repo` | Get GitHub repository health metrics including stars, forks, issues, recent commits, and project details. Use when evaluating open source projects. | A repository (required, owner/repo or full URL), `reasoning` (required), and an optional flag (defaults to true). |
| `translate_error` | Find Stack Overflow solutions for error messages and stack traces. Auto-detects language/framework, extracts key terms (CORS, map, undefined, etc.), filters irrelevant results, and prioritizes Stack Overflow solutions. Handles web-specific errors (CORS, fetch). | The error text (required stack trace or error message), `reasoning` (required), an optional language (auto-detected), an optional framework (auto-detected), and an optional result count (defaults to 5). |
| `api_docs` | Auto-discover and crawl official API documentation. Dynamically finds docs URLs using patterns (docs.{api}.com, {api}.com/docs, etc.), searches for specific topics, crawls pages, and extracts overview, parameters, examples, and related links. Works for ANY API - no hardcoded URLs. Perfect for API integration and learning. | An API name (required, e.g., "stripe", "react"), a topic (required, e.g., "create customer", "hooks"), `reasoning` (required), and an optional page limit (defaults to 2 pages). |
| `extract_data` | Extract structured data from HTML pages. Supports tables, lists, fields (via CSS selectors), JSON-LD, and auto-detection. Returns clean JSON output. More efficient than parsing full page text. Perfect for scraping pricing tables, package specs, release notes, or any structured content. | A URL (required), `reasoning` (required), an extraction mode (table/list/fields/json-ld/auto, defaults to auto), optional CSS selectors for fields mode, and an optional item limit (defaults to 100). |
| `compare_tech` | Compare 2-5 technologies side-by-side. Auto-detects category (framework/database/language) and gathers data from NPM, GitHub, and web search. Returns structured comparison with popularity metrics (downloads, stars), performance insights, and best-use summaries. Fast parallel processing (3-4s). | A list of 2-5 names (required), `reasoning` (required), an optional category (auto-detected if not provided), optional aspects (auto-selected by category), and an optional limit (defaults to 3). |
| `get_changelog` | NEW! Get release notes and changelogs for package upgrades. Fetches GitHub releases, highlights breaking changes, and provides upgrade recommendations. Answers "What changed in version X → Y?" and "Are there breaking changes?" Perfect for planning dependency updates. | A package name (required), `reasoning` (required), an optional registry (npm/pypi/auto, defaults to auto), and an optional release count (defaults to 5). |
| `check_service_status` | NEW! Instantly check if external services are experiencing issues. Covers 25+ popular services (Stripe, AWS, GitHub, OpenAI, Vercel, etc.). Returns operational status, current incidents, and component health. Critical for production debugging - know immediately if the issue is external. Response time < 2s. | A service name (required, e.g., "stripe", "aws") and `reasoning` (required). |
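For orientation, this is roughly what a client-side invocation looks like over MCP's standard `tools/call` method. Argument names other than `reasoning` are assumptions, since only `reasoning` is named in this README:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": {
      "query": "python async web frameworks",
      "reasoning": "gathering candidate frameworks for a comparison"
    }
  }
}
```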
Results are automatically trimmed (default 8 KB) so they stay well within MCP response expectations. If truncation happens, the text ends with a note reminding the model that more detail is available on request.
## Configuration
Environment variables let you adapt the server without touching code:
- SearXNG endpoint queried by `web_search` (default: `http://localhost:2288`)
- Search category used when none is provided
- Default number of search hits (default: 5)
- Hard cap on hits per request
- Default character budget for `crawl_url` (default: 8000 characters)
- Overall response size limit applied to every tool reply (default: 8 KB)
- User-Agent header for outbound HTTP calls
- API key for Pixabay image search (empty by default)
- Location for usage analytics data (default: `~/.config/web-research-assistant/usage.json`)
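Environment variables can be supplied through the MCP host configuration; Claude Desktop accepts an `env` block per server. The variable name below is hypothetical - check the project source for the exact names:

```json
{
  "mcpServers": {
    "web-research-assistant": {
      "command": "uvx",
      "args": ["web-research-assistant"],
      "env": {
        "PIXABAY_API_KEY": "your-key-here"
      }
    }
  }
}
```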
## Development

The codebase is intentionally modular: each module is well under 400 lines, making it easy to understand and extend.
## Usage Analytics

All tools automatically track usage metrics, including:

- Tool invocation counts and success rates
- Response times and performance trends
- Common use case patterns (via the `reasoning` parameter)
- Error frequencies and types
Analytics data is stored in `~/.config/web-research-assistant/usage.json` and can be analyzed to optimize tool usage and identify patterns. Each tool requires a `reasoning` parameter that helps categorize why tools are being used, enabling better analytics and insights.
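To inspect the collected data without assuming its schema, a minimal sketch:

```python
import json
from pathlib import Path

# Default analytics location from the section above
path = Path.home() / ".config" / "web-research-assistant" / "usage.json"
data = json.loads(path.read_text())

# Pretty-print the top-level structure to see what is tracked
print(json.dumps(data, indent=2)[:1000])
```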
**Note:** As of the latest update, the `reasoning` parameter is required for all tools (previously optional with defaults). This ensures meaningful analytics data collection.
## Documentation

Comprehensive documentation is available in the `docs/` directory:

- **Project Status** - current status, metrics, roadmap
- **API Docs Implementation** - NEW tool documentation
- **Error Translator Design** - error translator details
- **Tool Ideas Ranked** - prioritization and progress
- **SearXNG Configuration** - recommended setup
- **Quick Start Examples** - usage examples

See the docs README for a complete index.