# Stealth Scraper

Scrape bot-protected websites.

Extract data from bot-protected websites (Cloudflare, Akamai, DataDome, PerimeterX) using residential proxies and geo-targeted IPs. Scrape a single protected page or crawl an entire bot-protected site when standard scraping fails.

## stealth_scrape

Scrape a single bot-protected web page using enhanced residential proxies, geo-targeted IPs, and extended rendering wait times. Bypasses Cloudflare, Akamai, DataDome, and similar anti-bot systems.
```shell
curl -H "Authorization: Bearer $TOOLROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "stealth-scraper",
    "skill": "stealth_scrape",
    "input": {
      "url": "https://example.com/protected-page"
    }
  }' \
  https://api.toolrouter.com/v1/tools/call
```

## stealth_crawl

Recursively crawl a bot-protected website using enhanced proxies on every page. Bypasses anti-bot systems across the entire crawl, with geo-targeted IPs and extended rendering.
```shell
curl -H "Authorization: Bearer $TOOLROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "stealth-scraper",
    "skill": "stealth_crawl",
    "input": {
      "url": "https://example.com/blog",
      "limit": 20
    }
  }' \
  https://api.toolrouter.com/v1/tools/call
```

## Quick Start
Add the ToolRouter MCP server to Claude Code:

```shell
claude mcp add --transport stdio \
  --env TOOLROUTER_API_KEY=YOUR_API_KEY \
  toolrouter -- npx -y toolrouter-mcp
```

Or call the API directly:

```shell
curl -H "Authorization: Bearer $TOOLROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"tool":"stealth-scraper","skill":"stealth_scrape","input":{"url":"https://example.com/protected-page"}}' \
  https://api.toolrouter.com/v1/tools/call
```

## Frequently Asked Questions
**When should I use this instead of a normal scraper?**

Use `stealth_scrape` when a site throws 403s, 429s, CAPTCHA pages, or bot protection from Cloudflare, Akamai, DataDome, or PerimeterX.
**Can it scrape just one page or an entire site?**

`stealth_scrape` handles a single protected page, while `stealth_crawl` recursively crawls a whole site with a page limit and depth cap.
**Can I match the browser to the site region or language?**

Yes. Set `country` for geo-targeted proxies and `languages` for locale headers. You can also use `mobile`, `waitFor`, and `onlyMainContent` when the site needs it.
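As a sketch, the locale options above can be combined in one `input` object. The parameter names (`country`, `languages`, `mobile`, `waitFor`, `onlyMainContent`) come from this FAQ; the URL and values are illustrative placeholders, so check the skill reference for exact types and defaults.

```shell
# Hypothetical payload: scrape a German page through a DE proxy,
# with a German locale, a mobile viewport, and a 3s render wait.
PAYLOAD='{"tool":"stealth-scraper","skill":"stealth_scrape","input":{"url":"https://example.com/de/news","country":"DE","languages":["de-DE"],"mobile":true,"waitFor":3000,"onlyMainContent":true}}'
echo "$PAYLOAD"

# Send it with:
#   curl -H "Authorization: Bearer $TOOLROUTER_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD" https://api.toolrouter.com/v1/tools/call
```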
**What output formats can I get?**

`stealth_scrape` can return formats including markdown, HTML, raw HTML, links, and screenshots, along with the extracted page metadata.
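A request asking for several of those formats at once might look like the sketch below. The `formats` field name and its values are an assumption based on this FAQ, not a confirmed schema, so verify them against the skill reference before relying on them.

```shell
# Hypothetical payload: request markdown, links, and a screenshot
# in one stealth_scrape call ("formats" field name is assumed).
PAYLOAD='{"tool":"stealth-scraper","skill":"stealth_scrape","input":{"url":"https://example.com/protected-page","formats":["markdown","links","screenshot"]}}'
echo "$PAYLOAD"
```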