Stealth Scraper extracts data from bot-protected websites that block standard scraping tools. When you're getting 403 errors, CAPTCHA walls, or blocks from Cloudflare, Akamai, DataDome, or PerimeterX, this tool gets through using residential proxies and geo-targeted IPs.
It's the fallback for when normal web scraping fails. Use it on single pages or crawl entire protected sites. If a site loads fine in your browser but blocks automated requests, this is the tool for the job.
What you can do
- Scrape a single bot-protected page using residential proxies and extended rendering wait times
- Crawl an entire bot-protected site recursively
- Target geo-specific content using country-level proxy selection
- Match locale with language headers for localized content
- Control rendering wait time for heavy single-page applications
- Return content as markdown, HTML, or raw HTML (a sketch of a typical call follows this list)
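Below is a minimal sketch of what a single-page call exercising these options might look like. Only `stealth_scrape`, `country`, and `waitFor` appear in this doc; the wrapper function, the `languages` and `formats` parameter names, the endpoint URL, and the millisecond unit for `waitFor` are illustrative assumptions.

```typescript
// Illustrative sketch only: `stealth_scrape`, `country`, and `waitFor`
// come from this doc; `languages`, `formats`, the endpoint URL, and the
// millisecond unit for `waitFor` are assumptions.
interface StealthScrapeOptions {
  url: string;
  country?: string;                                  // two-letter proxy geo code, e.g. "gb"
  languages?: string[];                              // Accept-Language values (assumed name)
  waitFor?: number;                                  // extra render wait, assumed milliseconds
  formats?: Array<"markdown" | "html" | "rawHtml">;  // assumed enum values
}

async function stealthScrape(opts: StealthScrapeOptions): Promise<string> {
  // Placeholder transport; the real tool is invoked through its own runtime.
  const res = await fetch("https://api.example.com/v1/stealth_scrape", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(opts),
  });
  if (!res.ok) throw new Error(`stealth_scrape failed: HTTP ${res.status}`);
  return res.text();
}

// Example: a UK-localized pricing page returned as markdown.
const pricing = await stealthScrape({
  url: "https://example.co.uk/pricing",
  country: "gb",
  languages: ["en-GB"],
  waitFor: 5000,
  formats: ["markdown"],
});
console.log(pricing.slice(0, 500));
```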
Who it's for
Developers and researchers who need data from sites that actively block scrapers. Data teams collecting pricing, product, or content data from protected sources. Anyone whose standard scraping workflow is hitting bot detection walls.
How to use it
- Try the regular Web Scraper tool first; use Stealth Scraper only when you get 403, 429, or CAPTCHA responses (a fallback sketch follows this list)
- Use stealth_scrape for a single page — it automatically applies residential proxies and a 3-second render wait
- Set country to match the target site's region (e.g. "gb" for UK sites) for better proxy matching
- Increase waitFor for heavy SPAs that need extra time to render their content
- Use stealth_crawl for recursive crawling of a protected site; note that it's slower and asynchronous (~120s). See the polling sketch after this list.
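The try-then-escalate pattern from the first bullet might look like the sketch below. Both scrapers are passed in as plain functions, so nothing here assumes either tool's real signature; how 403/429/CAPTCHA blocks surface in errors is also an assumption.

```typescript
type Scraper = (url: string) => Promise<string>;

// Run the regular scraper first and escalate to stealth only on the
// bot-detection signals named in this doc (403, 429, CAPTCHA walls).
// The error-message matching below is an assumed convention.
async function scrapeWithFallback(
  url: string,
  regular: Scraper,
  stealth: Scraper,
): Promise<string> {
  try {
    return await regular(url);
  } catch (err) {
    if (/403|429|captcha/i.test(String(err))) {
      return stealth(url); // bot wall detected: retry through Stealth Scraper
    }
    throw err; // any other failure is not bot detection; surface it
  }
}
```

This keeps the cheaper tool on the happy path and pays the stealth overhead only when a block is actually observed.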
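Since stealth_crawl is asynchronous (~120s), a caller typically waits for completion. The job-handle and status shape in this polling sketch are assumptions, as the doc doesn't specify the async interface.

```typescript
// Assumed result shape for an async stealth_crawl job; the real API may differ.
interface CrawlJob {
  status: "running" | "completed" | "failed";
  pages?: { url: string; markdown: string }[];
}

// Poll a status callback until the crawl finishes or a deadline passes.
// The ~120s figure from this doc motivates the generous default timeout.
async function waitForCrawl(
  checkStatus: () => Promise<CrawlJob>,
  pollEveryMs = 15_000,
  timeoutMs = 5 * 60_000,
): Promise<CrawlJob> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const job = await checkStatus();
    if (job.status !== "running") return job;
    await new Promise((resolve) => setTimeout(resolve, pollEveryMs));
  }
  throw new Error("stealth_crawl did not finish before the timeout");
}
```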
Getting started
No setup is required: the tool uses a shared proxy pool by default. To get dedicated proxies, connect your own account.
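If you do connect your own account, the credential would be supplied out of band; this doc doesn't name a setting, so the environment variable below is purely hypothetical.

```typescript
// Hypothetical: choose dedicated proxies when an account credential is
// present, otherwise fall back to the shared pool. The variable name
// STEALTH_PROXY_API_KEY is an assumption, not a documented setting.
const proxyApiKey = process.env.STEALTH_PROXY_API_KEY;
console.log(
  proxyApiKey
    ? "Using dedicated proxies from your connected account"
    : "Using the shared proxy pool (no setup required)",
);
```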