
On July 14, 2025, Cloudflare’s widely used 1.1.1.1 DNS resolver went offline for about an hour, disrupting internet access for users across the globe. Starting at 21:52 UTC and ending at 22:54 UTC, DNS queries using Cloudflare’s resolver simply failed to go through. The company has confirmed the outage was caused by a misconfiguration introduced over a month earlier, not a cyberattack.
The problem stemmed from a flawed configuration change made on June 6. That change was intended for a non-production service tied to Cloudflare’s Data Localization Suite, but it accidentally included the IP address prefixes used by the 1.1.1.1 resolver. The mistake sat dormant until a follow-up update on July 14 triggered a global refresh of Cloudflare’s network configuration. That’s when the trouble started.
As a result, Cloudflare began withdrawing the 1.1.1.1-related prefixes from its global data centers, making the DNS resolver unreachable. Since DNS resolution underpins most online activity, many users found that nearly every internet service broke at once. Queries over UDP, TCP, and DNS-over-TLS all failed. Only DNS-over-HTTPS (DoH) continued to operate normally for most users, because DoH clients connect to the hostname cloudflare-dns.com, which resolves to IP addresses outside the withdrawn prefixes.
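To make that distinction concrete, here is a minimal sketch, assuming the third-party dnspython and requests packages and using example.com purely as a stand-in query, of a classic UDP lookup against 1.1.1.1 next to a DoH lookup through cloudflare-dns.com. During the outage the first path failed while the second generally kept working.

```python
# Sketch: contrast a plain UDP DNS query to 1.1.1.1 with a DoH query
# to cloudflare-dns.com. Requires: pip install dnspython requests
import dns.resolver  # dnspython
import requests


def plain_udp_lookup(name: str) -> list[str]:
    """Query 1.1.1.1 directly over UDP port 53 (the path that failed)."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.1"]  # address in the withdrawn prefix
    answer = resolver.resolve(name, "A", lifetime=3.0)
    return [rr.to_text() for rr in answer]


def doh_lookup(name: str) -> list[str]:
    """Query via DNS-over-HTTPS using the cloudflare-dns.com hostname,
    which resolves to addresses outside the withdrawn prefixes."""
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=3.0,
    )
    resp.raise_for_status()
    return [a["data"] for a in resp.json().get("Answer", [])]


if __name__ == "__main__":
    for fn in (plain_udp_lookup, doh_lookup):
        try:
            print(fn.__name__, "->", fn("example.com"))
        except Exception as exc:
            print(fn.__name__, "failed:", exc)
```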
Adding to the confusion, Tata Communications briefly began advertising the 1.1.1.0/24 prefix via BGP, creating the appearance of a hijack. Cloudflare has stated clearly that this advertisement was unrelated to the incident; it simply became visible once Cloudflare withdrew its own routes.
Cloudflare detected the issue internally at 22:01 UTC and began rolling out a fix by 22:20. While BGP routes returned quickly, about 23 percent of Cloudflare’s edge servers had already reconfigured themselves in response to the earlier withdrawal, requiring a progressive rollout of corrected configurations. By 22:54 UTC, services were mostly restored and traffic had returned to expected levels.
According to Cloudflare, the root of the issue lies in their current use of both legacy and modern systems to manage service topologies. The older system still relies on manually managed IP-to-datacenter mappings, which don’t support staged deployments or automatic rollback. This incident was made worse because the change didn’t go through a progressive rollout with health monitoring before hitting every location.
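Cloudflare has not published the internals of the newer system, so the following is only a rough Python sketch of what a staged rollout with health checks and automatic rollback looks like in general; the stage names, helper functions, and data-center labels are all hypothetical.

```python
# Hypothetical sketch of a progressive rollout with health checks and
# automatic rollback; illustrative only, not Cloudflare's actual tooling.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Stage:
    name: str
    locations: list[str]  # e.g. a canary site, then a region, then global


def progressive_rollout(
    stages: list[Stage],
    apply_config: Callable[[str], None],     # push the change to one location
    rollback_config: Callable[[str], None],  # restore the previous config
    healthy: Callable[[str], bool],          # e.g. resolver still answers queries
) -> bool:
    """Apply a change stage by stage, rolling everything back on failure."""
    applied: list[str] = []
    for stage in stages:
        for location in stage.locations:
            apply_config(location)
            applied.append(location)
            if not healthy(location):
                # Health check failed: undo every location touched so far
                # instead of letting the bad config reach the whole fleet.
                for loc in reversed(applied):
                    rollback_config(loc)
                return False
    return True


if __name__ == "__main__":
    # Stub usage: the third data center fails its health check.
    stages = [
        Stage("canary", ["dc-test"]),
        Stage("region", ["dc-1", "dc-2"]),
        Stage("global", ["dc-3", "dc-4"]),
    ]
    ok = progressive_rollout(
        stages,
        apply_config=lambda loc: print("apply", loc),
        rollback_config=lambda loc: print("rollback", loc),
        healthy=lambda loc: loc != "dc-2",
    )
    print("rollout succeeded:", ok)
```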
In response, Cloudflare plans to speed up the retirement of these legacy components. They’ll move entirely to a system that allows gradual rollouts, better documentation, and improved test coverage. The goal is to avoid another global mess like this one by catching issues before they reach production.
Cloudflare issued an apology and stressed that the incident was entirely their fault. While many users may not know or care about BGP or service topologies, the reality is that a single misconfiguration in the wrong place can ripple across the internet.
Although the company’s transparency is commendable, this event reminds us just how fragile even “simple” infrastructure like DNS can be when it’s centralized. For privacy-focused users who rely on 1.1.1.1, this is likely to raise questions about fallback options and resilience planning.
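One simple resilience pattern is to try more than one public resolver in order. The sketch below, again assuming the dnspython package and with the fallback addresses chosen purely as illustrations, shows the idea.

```python
# Sketch: try a list of public resolvers in order and return the first
# answer; the specific fallback addresses are illustrative choices.
import dns.resolver  # pip install dnspython

FALLBACK_RESOLVERS = ["1.1.1.1", "9.9.9.9", "8.8.8.8"]


def resilient_lookup(name: str, record_type: str = "A") -> list[str]:
    """Resolve a name, falling back to the next resolver on any failure."""
    last_error = None
    for server in FALLBACK_RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answer = resolver.resolve(name, record_type, lifetime=2.0)
            return [rr.to_text() for rr in answer]
        except Exception as exc:  # timeout, SERVFAIL, unreachable server, etc.
            last_error = exc
    raise RuntimeError(f"all resolvers failed: {last_error}")


if __name__ == "__main__":
    print(resilient_lookup("example.com"))
```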
Cloudflare’s free resolver service is once again up and running. It can be used by setting 1.1.1.1 as the DNS resolver on any device, or by visiting 1.1.1.1 in a browser for setup instructions.