tavily-mcp-python

by MertAtesmen

tavily-crawl

Start a structured web crawl from a base URL, using a tree-like approach to follow internal links. Control depth, breadth, and focus on specific site sections, domains, or content types for targeted data extraction.

Instructions

A powerful web crawler that initiates a structured web crawl starting from a specified base URL. The crawler expands from that point like a tree, following internal links across pages. You can control how deep and wide it goes, and guide it to focus on specific sections of the site.
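For orientation, here is a minimal sketch of calling this tool from the official MCP Python client SDK over stdio. The launch command ("tavily-mcp-python") and the TAVILY_API_KEY environment variable are assumptions inferred from the server name, not documented here; adjust them to match how you actually run the server.

# Minimal sketch: call tavily-crawl through the MCP Python client SDK over stdio.
# The launch command and environment variable below are assumptions; adjust them
# to match how tavily-mcp-python is installed and configured on your machine.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(
        command="tavily-mcp-python",  # assumed launch command
        env={"TAVILY_API_KEY": os.environ["TAVILY_API_KEY"]},
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "tavily-crawl",
                {
                    "url": "https://docs.example.com",
                    "instructions": "Collect the API reference pages",
                    "max_depth": 2,
                },
            )
            print(result.content)

asyncio.run(main())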

Input Schema

Name             Required  Default   Description
allow_external   No        false     Whether to allow following links that go to external domains
categories       No        --        Filter URLs using predefined categories (Careers, Blog, Documentation, About, Pricing, Community, Developers, Contact, Media)
extract_depth    No        basic     Advanced extraction retrieves more data, including tables and embedded content, with higher success, but may increase latency
format           No        markdown  Format of the extracted page content: markdown returns Markdown; text returns plain text and may increase latency
instructions     Yes       --        Natural language instructions for the crawler
limit            No        50        Total number of links the crawler will process before stopping
max_breadth      No        20        Max number of links to follow per level of the tree (i.e., per page)
max_depth        No        1         Max depth of the crawl; defines how far from the base URL the crawler can explore
select_domains   No        --        Regex patterns restricting the crawl to specific domains or subdomains (e.g., ^docs\.example\.com$)
select_paths     No        --        Regex patterns selecting only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)
url              Yes       --        Root URL to begin the crawl
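To make the parameters concrete, here is an illustrative argument payload written as a Python dict. The URL, patterns, and instructions are invented for the example; only url and instructions are required.

# Illustrative arguments for tavily-crawl; every value here is an example, not a
# recommendation. Only "url" and "instructions" are required; omitted fields use
# the defaults listed above.
crawl_args = {
    "url": "https://example.com",                  # root URL to begin the crawl
    "instructions": "Find the REST API reference and changelog pages",
    "max_depth": 2,                                # follow links up to two levels from the root
    "max_breadth": 10,                             # at most 10 links per page
    "limit": 40,                                   # stop after 40 processed links
    "select_paths": [r"/docs/.*", r"/api/v1.*"],   # only documentation and API paths
    "select_domains": [r"^docs\.example\.com$"],   # stay on the docs subdomain
    "extract_depth": "advanced",                   # also pull tables and embedded content
    "format": "markdown",                          # return page content as Markdown
}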

Input Schema (JSON Schema)

{ "properties": { "allow_external": { "default": false, "description": "Whether to allow following links that go to external domains", "title": "Allow External", "type": "boolean" }, "categories": { "description": "Filter URLs using predefined categories like documentation, blog, api, etc", "items": { "enum": [ "Careers", "Blog", "Documentation", "About", "Pricing", "Community", "Developers", "Contact", "Media" ], "type": "string" }, "title": "Categories", "type": "array" }, "extract_depth": { "default": "basic", "description": "Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency", "enum": [ "basic", "advanced" ], "title": "Extract Depth", "type": "string" }, "format": { "default": "markdown", "description": "The format of the extracted web page content. markdown returns content in markdown format. text returns plain text and may increase latency.", "enum": [ "markdown", "text" ], "title": "Format", "type": "string" }, "instructions": { "description": "Natural language instructions for the crawler", "title": "Instructions", "type": "string" }, "limit": { "default": 50, "description": "Total number of links the crawler will process before stopping", "minimum": 1, "title": "Limit", "type": "integer" }, "max_breadth": { "default": 20, "description": "Max number of links to follow per level of the tree (i.e., per page)", "minimum": 1, "title": "Max Breadth", "type": "integer" }, "max_depth": { "default": 1, "description": "Max depth of the crawl. Defines how far from the base URL the crawler can explore.", "minimum": 1, "title": "Max Depth", "type": "integer" }, "select_domains": { "description": "Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\\.example\\.com$)", "items": { "type": "string" }, "title": "Select Domains", "type": "array" }, "select_paths": { "description": "Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)", "items": { "type": "string" }, "title": "Select Paths", "type": "array" }, "url": { "description": "Root URL to begin the crawl", "title": "Url", "type": "string" } }, "required": [ "url", "instructions" ], "type": "object" }

MCP directory API

We provide all the information about MCP servers through our MCP directory API. For example, to fetch this server's entry:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MertAtesmen/tavily-mcp-python'
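The same entry can be fetched from Python. Below is a minimal sketch using the requests library; it assumes the endpoint is publicly readable, since authentication requirements are not covered here.

# Minimal sketch: fetch this server's MCP directory entry in Python.
# Assumes the endpoint is publicly readable; add headers if your account requires them.
import requests

resp = requests.get(
    "https://glama.ai/api/mcp/v1/servers/MertAtesmen/tavily-mcp-python",
    timeout=30,
)
resp.raise_for_status()
print(resp.json())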

If you have feedback or need assistance with the MCP directory API, please join our Discord server.