Build a Full Site Inventory
Crawl your entire website to build a complete inventory of every page, its status, and its metadata.
Map pages, links, and errors
Walk a website from any URL, collecting per-page titles, descriptions, heading counts, word counts, response times, and link counts. Ideal for site audits, content inventories, broken-link detection, and SEO analysis.
The `crawl_site` skill crawls pages from a starting URL and returns per-page metadata: title, description, heading counts, link counts, response time, and crawl diagnostics.
Call the skill directly over HTTP:

```shell
curl -H "Authorization: Bearer $TOOLROUTER_API_KEY" \
  -d '{
    "tool": "site-crawler",
    "skill": "crawl_site",
    "input": {
      "url": "https://example.com",
      "max_pages": 5
    }
  }' \
  https://api.toolrouter.com/v1/tools/call
```

Or register the tool with Claude as an MCP server:

```shell
claude mcp add --transport stdio \
  --env TOOLROUTER_API_KEY=YOUR_API_KEY \
  toolrouter -- npx -y toolrouter-mcp
```

A minimal call with an empty input object:

```shell
curl -H "Authorization: Bearer $TOOLROUTER_API_KEY" \
  -d '{"tool":"site-crawler","skill":"crawl_site","input":{}}' \
  https://api.toolrouter.com/v1/tools/call
```
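For scripted use, the same call can be made from Python's standard library. This is a minimal sketch assuming the endpoint and request shape shown in the curl examples above; the response schema is not documented here, so the code simply decodes the body as JSON.

```python
import json
import os
import urllib.request

API_URL = "https://api.toolrouter.com/v1/tools/call"

def build_payload(url, max_pages=5):
    """Request body for the crawl_site skill, matching the curl example."""
    return {
        "tool": "site-crawler",
        "skill": "crawl_site",
        "input": {"url": url, "max_pages": max_pages},
    }

def crawl_site(url, max_pages=5):
    """POST the payload and return the decoded JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(url, max_pages)).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + os.environ["TOOLROUTER_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Set `TOOLROUTER_API_KEY` in the environment before calling `crawl_site`.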
Crawl your website to discover all pages returning 404 errors and the internal links pointing to them.
Run a full-spectrum website audit combining SEO analysis, performance testing, site crawling, and visual documentation in one workflow.
Prepare for website migration by crawling the current site, documenting SEO baselines, benchmarking performance, and recording DNS configuration.
Crawl a web application, analyze HTTP security, test for injections, and document findings with screenshots.
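Once crawl results are in hand, the 404 report described in the use cases above reduces to a filter and a group-by. This sketch assumes a response containing a list of per-page records with `url`, `status`, and outgoing internal `links` fields; those field names are illustrative, not documented.

```python
from collections import defaultdict

def broken_link_report(pages):
    """Map each URL that returned 404 to the internal pages linking to it.

    `pages` is assumed to be a list of dicts with "url", "status",
    and "links" (outgoing internal links) keys.
    """
    statuses = {p["url"]: p["status"] for p in pages}
    report = defaultdict(list)
    for page in pages:
        for target in page.get("links", []):
            if statuses.get(target) == 404:
                report[target].append(page["url"])
    return dict(report)

# Hand-made crawl output for illustration:
pages = [
    {"url": "/", "status": 200, "links": ["/about", "/gone"]},
    {"url": "/about", "status": 200, "links": ["/gone"]},
    {"url": "/gone", "status": 404, "links": []},
]
print(broken_link_report(pages))  # {'/gone': ['/', '/about']}
```

Grouping by the broken target (rather than by the linking page) gives you one fix list per dead URL, which is usually how redirects get written.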
How does the crawler work?
It walks a website from a starting URL and gathers data page by page, so it works for both single-page and full-site audits.

What does it collect for each page?
It collects titles, descriptions, heading counts, word counts, response times, and link counts.

Can I use it for broken-link detection and SEO analysis?
Yes. Those are two of the most practical uses for the crawler.

Do I have to start the crawl from the homepage?
No. You can start from any URL and let the crawl expand from there.
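The "start anywhere and expand" behavior described above is, in essence, a breadth-first walk with a page budget. This sketch simulates it over an in-memory link graph; the graph and the `max_pages` cap mirror the skill's input, but the real crawler fetches pages over HTTP rather than reading a dict.

```python
from collections import deque

def crawl(start, link_graph, max_pages=5):
    """Breadth-first walk from `start`, visiting at most `max_pages` pages."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)          # here the real crawler would fetch the page
        for nxt in link_graph.get(url, []):
            if nxt not in seen:    # never enqueue a page twice
                seen.add(nxt)
                queue.append(nxt)
    return order

site = {
    "/docs": ["/docs/api", "/docs/guide"],
    "/docs/api": ["/docs"],
    "/docs/guide": ["/docs/faq"],
}
print(crawl("/docs", site, max_pages=3))  # ['/docs', '/docs/api', '/docs/guide']
```

Because the walk is seeded by a single URL and follows links outward, starting from a deep page simply crawls that page's neighborhood first.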