Block bots per URL
Pick URL patterns (e.g. /blog/*, /research/*) and the bots to stop: GPTBot, ClaudeBot, PerplexityBot, Bytespider, Google-Extended, or all AI bots as a group. Matched requests get HTTP 403; everything else passes through untouched.
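To make the matching concrete, here is a minimal sketch of how such a rule could be evaluated inside a Cloudflare Worker. The rule shape, glob-to-regex translation, and bot list are illustrative assumptions, not the deployed implementation:

```ts
// Illustrative sketch only: how a per-URL block rule could be evaluated.
// The rule shape and glob syntax are assumptions, not the shipped Worker.

const BLOCK_RULE = {
  patterns: ["/blog/*", "/research/*"],
  bots: ["GPTBot", "ClaudeBot", "PerplexityBot", "Bytespider", "Google-Extended"],
};

// Translate a simple glob like "/blog/*" into an anchored RegExp.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+?^${}()|[\]\\]/g, "\\$&") // escape regex specials except *
    .replace(/\*/g, ".*");
  return new RegExp(`^${escaped}$`);
}

function shouldBlock(request: Request): boolean {
  const path = new URL(request.url).pathname;
  const ua = request.headers.get("User-Agent") ?? "";
  const pathMatches = BLOCK_RULE.patterns.some((p) => globToRegExp(p).test(path));
  const botMatches = BLOCK_RULE.bots.some((bot) => ua.includes(bot));
  return pathMatches && botMatches;
}

export default {
  async fetch(request: Request): Promise<Response> {
    if (shouldBlock(request)) {
      return new Response("Forbidden", { status: 403 }); // matched: 403
    }
    return fetch(request); // everything else passes through untouched
  },
};
```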
Stop AI crawlers from taking your content for free. Block them per URL, or charge them per request with the x402 standard — enforced by a Cloudflare Worker at the edge, no script tag. Included on every paid plan from Indexed up.
Per-URL control over which AI bots get in — and on what terms.
Authorize once and we deploy a Worker onto your domain. It extends the same Worker our auto-resolve uses — one edge layer, nothing for you to host or maintain.
In the dashboard, pick URL patterns, choose an action (allow, block, or charge), and the bot scope. Unmatched URLs default to allow — you only gate what you choose to gate.
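Conceptually, each rule pairs URL patterns with an action and a bot scope. A hypothetical shape in TypeScript (the field names are illustrative, not the dashboard's actual schema):

```ts
// Hypothetical rule shape; field names are illustrative, not the real schema.
type BotRule = {
  patterns: string[];                   // e.g. ["/blog/*", "/research/*"]
  action: "allow" | "block" | "charge";
  bots: string[] | "all-ai";            // specific user-agents or every AI bot
  priceUsd?: number;                    // only meaningful when action is "charge"
};

// Unmatched URLs default to allow, so an empty rule set gates nothing.
const rules: BotRule[] = [
  { patterns: ["/blog/*"], action: "charge", bots: "all-ai", priceUsd: 0.01 },
  { patterns: ["/drafts/*"], action: "block", bots: ["GPTBot", "ClaudeBot"] },
];
```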
The activity feed shows every bot request and the action taken in real time. Adjust prices or patterns whenever you want — changes redeploy to the edge automatically.
Nothing extra. It used to be a $19/mo add-on, but it is now bundled free into every paid plan from Indexed ($5/mo) up — Featured, Established, Foundation and Enterprise included. The Free (Visible) tier does not include it. There is no separate subscription.
AI crawlers do not execute JavaScript, so a client-side script can never intercept them. A Cloudflare Worker runs at the edge — before the request reaches your origin — which is the only place you can reliably see a bot and decide to block, charge, or allow it. No plugin to install, no code on your server.
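In Worker terms, the decision happens in the fetch handler before anything is forwarded. A simplified sketch, assuming a plain user-agent check stands in for the real rule matching:

```ts
// Sketch of the edge decision point. The user-agent list and hard-coded
// action stand in for the product's real rule matching; the shape of the
// handler is the point: blocked or charged requests never reach the origin.

const AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Bytespider", "Google-Extended"];

function matchedAction(request: Request): "allow" | "block" | "charge" {
  const ua = request.headers.get("User-Agent") ?? "";
  if (!AI_BOTS.some((b) => ua.includes(b))) return "allow"; // not an AI bot
  return "block"; // real per-URL rule lookup would decide this
}

export default {
  async fetch(request: Request): Promise<Response> {
    switch (matchedAction(request)) {
      case "block":
        return new Response("Forbidden", { status: 403 });
      case "charge":
        return new Response("Payment Required", { status: 402 });
      default:
        return fetch(request); // allowed traffic is forwarded to the origin
    }
  },
};
```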
For a "charge" rule, the Worker returns HTTP 402 Payment Required with the price you set (system-recommended by page length, or your override). Bots that implement the x402 payment standard can settle the micropayment and get the content; bots that do not respect 402 simply do not get the page. You set prices per rule — they apply universally across the bots that rule targets.
GPTBot (OpenAI), ClaudeBot (Anthropic), PerplexityBot, Bytespider (TikTok/ByteDance), Google-Extended, or "all AI" as a group. Identification is layered: a Web Bot Auth signature is trusted first, then reverse DNS, then the user-agent string — so a spoofed user-agent alone does not slip past a block rule.
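A sketch of the layered check, with stated assumptions: the Web Bot Auth step is reduced to header presence (real verification validates the HTTP Message Signature against the bot's published key), reverse DNS is done over DNS-over-HTTPS since Workers cannot issue raw DNS queries, and the operator hostname suffixes are illustrative:

```ts
// Sketch of layered bot identification. Assumptions: signature check is
// presence-only, reverse DNS is IPv4-only via DoH, suffixes are examples.

const BOT_RDNS: Record<string, string> = {
  GPTBot: ".openai.com",
  ClaudeBot: ".anthropic.com",
  PerplexityBot: ".perplexity.ai",
};

type BotId = { name: string; verified: "signature" | "rdns" | "ua-only" };

// PTR lookup over DNS-over-HTTPS.
async function reverseDns(ip: string): Promise<string | null> {
  const ptrName = ip.split(".").reverse().join(".") + ".in-addr.arpa";
  const res = await fetch(
    `https://cloudflare-dns.com/dns-query?name=${ptrName}&type=PTR`,
    { headers: { Accept: "application/dns-json" } },
  );
  const data: { Answer?: { data: string }[] } = await res.json();
  return data.Answer?.[0]?.data ?? null;
}

async function identifyBot(request: Request): Promise<BotId | null> {
  const ua = request.headers.get("User-Agent") ?? "";
  const claimed = Object.keys(BOT_RDNS).find((b) => ua.includes(b));
  if (!claimed) return null;

  // Layer 1: Web Bot Auth signature (RFC 9421 headers) is trusted first.
  if (request.headers.get("Signature") && request.headers.get("Signature-Input")) {
    return { name: claimed, verified: "signature" };
  }

  // Layer 2: reverse DNS of the connecting IP must land in the operator's
  // domain (forward confirmation of the PTR result is elided here).
  const ip = request.headers.get("CF-Connecting-IP");
  if (ip && ip.includes(".")) {
    const host = await reverseDns(ip);
    if (host?.replace(/\.$/, "").endsWith(BOT_RDNS[claimed])) {
      return { name: claimed, verified: "rdns" };
    }
  }

  // Layer 3: user-agent only. Weakest signal, but still enough to apply a
  // block rule, so spoofing a UA does not evade a block.
  return { name: claimed, verified: "ua-only" };
}
```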
No. Rules target AI crawlers specifically, and anything not matched defaults to allow. Classic search-indexing bots like Googlebot are left alone unless you explicitly add a rule for them. You stay in control of exactly which user-agents and URL patterns are gated.
Blocking is the simplest mode and a perfectly good default — set the rule action to "block" and matched AI bots get a 403. Charging is opt-in per rule for when you would rather monetize access than refuse it outright.