Cloudflare changes the rules for AI crawlers with a new standard policy that blocks content scraping without permission.
Cloudflare now blocks, by default, AI crawlers that collect content without explicit permission. Website administrators can decide which AI companies get access and for what purposes their content may be used, such as training, search, or other applications.
Publishers under Pressure
According to Cloudflare, this is necessary to restore balance on the internet. AI systems often use large amounts of online content to generate answers without sending traffic back to the original source.
Consider Google Gemini, which summarizes information from websites directly in search results without permission, so users no longer click through to the original sites. This threatens publishers' revenue models, and more broadly the creation of new human-written articles.
Existing Feature
The new policy builds on a feature that Cloudflare has offered since September 2024, allowing administrators to block AI crawlers with a single click. More than a million customers already use it. From now on, this block applies by default to new websites that sign up with Cloudflare. Website owners can still choose to allow crawlers.
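To illustrate the idea of a default block with per-site opt-in, here is a minimal sketch. This is not Cloudflare's actual implementation; the crawler names and the policy structure are illustrative assumptions only.

```python
# Illustrative sketch of a "block by default, opt in per crawler" policy.
# Crawler tokens and the policy shape are hypothetical, not Cloudflare's API.

# Known AI crawler user-agent tokens (illustrative subset).
AI_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot", "Google-Extended"}

def is_allowed(user_agent: str, site_allowlist: set[str]) -> bool:
    """Block known AI crawlers by default; pass only those the
    site owner has explicitly opted in. Other visitors pass through."""
    for bot in AI_CRAWLERS:
        if bot.lower() in user_agent.lower():
            return bot in site_allowlist  # explicit opt-in required
    return True  # regular browsers and unknown agents are unaffected

# A site that has opted in to exactly one crawler:
allowlist = {"GPTBot"}
print(is_allowed("Mozilla/5.0 (compatible; GPTBot/1.0)", allowlist))    # True
print(is_allowed("Mozilla/5.0 (compatible; ClaudeBot/1.0)", allowlist)) # False
print(is_allowed("Mozilla/5.0 Firefox/128.0", allowlist))               # True
```

The key design point is the default: a crawler that is not on the allowlist is refused, rather than the older model where everything was allowed unless explicitly blocked.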
Several major international media companies and technology platforms support Cloudflare’s new approach. Among them, Condé Nast, Dotdash Meredith, Gannett Media (USA Today), and Pinterest endorse the idea that AI platforms should fairly compensate publishers for using their content. Reddit and Ziff Davis emphasize the need for more transparency and control over who crawls content and for what purpose.
Cloudflare states that the new policy not only protects publishers but also helps AI companies that want to collect content legally and transparently. The company is working on a new protocol that lets AI bots identify themselves reliably and lets website administrators more easily control which bots visit their site. This should help prevent copyright disputes after the fact.
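The article does not specify how the identification protocol works, but the general idea of verifiable bot identity can be sketched as follows. This is a simplified assumption-laden illustration: real proposals in this space use public-key signatures over HTTP messages, while this sketch uses a shared-secret HMAC to stay within the standard library. The bot name and key are hypothetical.

```python
import hashlib
import hmac

# Simplified illustration of verifiable bot identity: a bot signs each
# request with a key, and the site checks the signature against the key
# registered for that bot name. An unverifiable claim of identity fails.
# Bot names and keys below are hypothetical.

REGISTERED_BOTS = {"ExampleAIBot": b"shared-secret-registered-with-site"}

def sign_request(bot_name: str, key: bytes, path: str) -> str:
    """The bot's side: sign the request it is about to make."""
    msg = f"{bot_name}:{path}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(bot_name: str, path: str, signature: str) -> bool:
    """The site's side: check the claimed identity against the
    signature, using the key on file for that bot."""
    key = REGISTERED_BOTS.get(bot_name)
    if key is None:
        return False  # unknown bot: treat as unverified
    expected = hmac.new(key, f"{bot_name}:{path}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

sig = sign_request("ExampleAIBot",
                   b"shared-secret-registered-with-site", "/article/1")
print(verify_request("ExampleAIBot", "/article/1", sig))  # True
print(verify_request("SpoofedBot", "/article/1", sig))    # False
```

The point of such a scheme is that a crawler can no longer simply claim a trusted name in its user-agent string: identity is backed by a signature the site can check.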