
Stealth Scraper

Scrape bot-protected websites

Stealth Scraper extracts data from bot-protected websites that block standard scraping tools. When you're getting 403 errors, CAPTCHA walls, or blocks from Cloudflare, Akamai, DataDome, or PerimeterX — this tool gets through using residential proxies and geo-targeted IPs.

It's the fallback for when normal web scraping fails. Use it on single pages or crawl entire protected sites. If a site loads fine in your browser but blocks automated requests, this is the tool for the job.

What you can do

  • Scrape a single bot-protected page using residential proxies and extended rendering wait times
  • Crawl an entire bot-protected site recursively
  • Target geo-specific content using country-level proxy selection
  • Match locale with language headers for localized content
  • Control rendering wait time for heavy single-page applications
  • Return content as markdown, HTML, or raw HTML

Who it's for

Developers and researchers who need data from sites that actively block scrapers. Data teams collecting pricing, product, or content data from protected sources. Anyone whose standard scraping workflow is hitting bot detection walls.

How to use it

  1. Try the regular Web Scraper tool first — use Stealth Scraper only when you get 403, 429, or CAPTCHA responses
  2. Use stealth_scrape for a single page — it automatically applies residential proxies and a 3-second render wait
  3. Set country to match the target site's region (e.g. "gb" for UK sites) for better proxy matching
  4. Increase waitFor for heavy SPAs that need extra time to render their content
  5. Use stealth_crawl for recursive crawling of a protected site — note it's slower and async (~120s)
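The fallback flow above can be sketched in Python. This is a minimal sketch, not the tool's documented API: the decision logic mirrors step 1, and the payload field names (`country`, `waitFor`) come from steps 3 and 4, but the overall request shape is an assumption.

```python
# Sketch of the fallback flow described above. Field names (country,
# waitFor) match parameters mentioned on this page; the request shape
# itself is an assumption, not a documented schema.

BLOCK_STATUSES = {403, 429}

def should_use_stealth(status_code: int, body: str = "") -> bool:
    """Step 1: fall back to Stealth Scraper only on block responses
    or CAPTCHA pages, after the regular Web Scraper has been tried."""
    return status_code in BLOCK_STATUSES or "captcha" in body.lower()

def build_stealth_scrape_request(url: str, country: str = "us",
                                 wait_for_ms: int = 3000) -> dict:
    """Steps 2-4: assemble a stealth_scrape payload (shape assumed)."""
    return {
        "url": url,
        "country": country,      # e.g. "gb" to match UK-targeted sites
        "waitFor": wait_for_ms,  # raise for heavy SPAs; 3 s default wait
    }
```

For example, a 403 from a UK retail site would trigger `build_stealth_scrape_request(url, country="gb")`.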

Getting started

No setup required — the tool uses a shared proxy pool by default, or connect your own account for dedicated proxies.

Stealth Scrape Page

Scrape a single bot-protected web page using enhanced residential proxies, geo-targeted IPs, and extended rendering wait times. Bypasses Cloudflare, Akamai, DataDome, and similar anti-bot systems.

Returns: Scraped page content from bot-protected sites in the requested formats, with metadata

Stealth Crawl Site

Recursively crawl a bot-protected website using enhanced proxies on every page. Bypasses anti-bot systems across the entire crawl, with geo-targeted IPs and extended rendering.

Returns: Array of scraped pages from bot-protected sites with content, metadata, and crawl status
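A crawl request can be sketched the same way. The field names `limit` and `maxDepth` are assumptions based on the page-limit and depth-cap behavior this page describes, not a documented schema; keeping both small matters because the crawl runs asynchronously and is slow (~120 s).

```python
# Hypothetical stealth_crawl payload builder. Field names (limit,
# maxDepth) are assumed from the page-limit and depth-cap behavior
# described on this page, not taken from a documented schema.

def build_stealth_crawl_request(url: str, limit: int = 10,
                                max_depth: int = 2,
                                country: str = "us") -> dict:
    """Keep caps small: the crawl is async and slow (~120 s)."""
    if limit < 1 or max_depth < 1:
        raise ValueError("limit and max_depth must be >= 1")
    return {
        "url": url,
        "limit": limit,         # cap on total pages scraped
        "maxDepth": max_depth,  # cap on link depth from the start URL
        "country": country,     # proxies applied to every page crawled
    }
```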

v0.01 (2026-03-23)
  • Initial release with stealth_scrape and stealth_crawl skills

Stealth Scraper Use Cases (3)

Browse all 3 Stealth Scraper guides →

Scrape JavaScript-Rendered Pages

Extract content from single-page applications and JavaScript-rendered sites that return blank pages to standard scrapers.
See every Stealth Scraper use case (Claude, ChatGPT, Copilot, OpenClaw guides) →

Related Tools

Web Search: web, news, images & maps in one tool

Frequently Asked Questions

When should I use this instead of a normal scraper?

Use `stealth_scrape` when a site throws 403s, 429s, CAPTCHA pages, or bot protection from Cloudflare, Akamai, DataDome, or PerimeterX.

Can it scrape just one page or an entire site?

`stealth_scrape` handles a single protected page, while `stealth_crawl` recursively crawls a whole site with a page limit and depth cap.

Can I match the browser to the site region or language?

Yes. Set `country` for geo-targeted proxies and `languages` for locale headers. You can also use `mobile`, `waitFor`, and `onlyMainContent` when the site needs it.
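These options can be combined in one request fragment. The parameter names follow the FAQ above; the dict shape and value formats are assumptions for illustration.

```python
# Illustrative geo/locale options for a stealth_scrape request.
# Parameter names come from the FAQ above; the exact dict shape
# and value formats are assumptions.

def locale_options(country: str, languages: list[str],
                   mobile: bool = False,
                   only_main_content: bool = True) -> dict:
    return {
        "country": country,        # proxy exit country, e.g. "de"
        "languages": languages,    # locale headers, e.g. ["de-DE", "de"]
        "mobile": mobile,          # emulate a mobile browser if needed
        "onlyMainContent": only_main_content,  # drop nav/footer chrome
    }
```

For a German storefront you might pass `locale_options("de", ["de-DE", "de"])` so both the proxy region and the language headers match the target audience.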

What output formats can I get?

`stealth_scrape` can return formats like markdown, HTML, raw HTML, links, and screenshots, along with the extracted page metadata.
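The formats named above can be requested together. The exact format keys (`"markdown"`, `"html"`, `"rawHtml"`, `"links"`, `"screenshot"`) are assumed spellings, so this sketch validates against a guessed set rather than a documented one.

```python
# Request fragment selecting output formats. The format identifiers
# below are assumptions about the exact keys, based on the formats
# listed in the FAQ above.

SUPPORTED_FORMATS = {"markdown", "html", "rawHtml", "links", "screenshot"}

def formats_option(*formats: str) -> dict:
    """Build the formats fragment, rejecting unknown format names."""
    unknown = set(formats) - SUPPORTED_FORMATS
    if unknown:
        raise ValueError(f"unsupported formats: {sorted(unknown)}")
    return {"formats": list(formats)}
```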