When a site loads for you but not your friend, a single-region ping can't tell which one of you is seeing the truth. isitdown.io runs every website check from 4 parallel regions — US East (Virginia), US West (Oregon), Europe (London), and Asia (Singapore) — in the same window. This is what that buys you.
The problem with single-region pings
A ping or curl from one machine tells you one thing: whether that specific machine can reach the target over that specific network path right now. It does not distinguish:
- "Server is down for the world" from
- "Server is down for your continent" from
- "Server is down for your ISP" from
- "Server is down only for you."
Most outages in practice are not global. They are:
- Regional CDN issues — a Cloudflare or Fastly edge dropping traffic from one geography.
- DNS propagation delays — a deploy moved the site to new infrastructure and some resolvers still answer with the old IP.
- BGP routing flaps — a transit provider has a bad route for a few minutes.
- IP or ASN bans — the target blocked your IP, your ISP, or your VPN provider.
A single-region check can't see any of these patterns. A multi-region check makes them obvious.
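The difference is easy to sketch. Instead of one GET from one machine, fire the same GET from several vantage points concurrently. A minimal sketch in Python, assuming illustrative region labels (a real multi-region checker runs each probe from a machine physically located in that region; this stand-in just runs threads on one host):

```python
import concurrent.futures
import urllib.error
import urllib.request

# Illustrative region labels, not real vantage points.
REGIONS = ["us-east", "us-west", "eu-west", "ap-southeast"]

def probe(url, timeout=10):
    """One HTTP GET probe: returns (ok, status code or error name)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return True, resp.status
    except urllib.error.HTTPError as e:
        return False, e.code            # got a response, but an error status
    except Exception as e:
        return False, type(e).__name__  # DNS failure, timeout, reset, ...

def check_all_regions(url, probe_fn=probe):
    """Run one probe per region in parallel and collect the results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(REGIONS)) as pool:
        futures = {region: pool.submit(probe_fn, url) for region in REGIONS}
        return {region: f.result() for region, f in futures.items()}
```

Because the probes run concurrently, the total wait is bounded by one timeout rather than the sum of all four.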
What 4 parallel regions reveal
With 4 regions probing in parallel, every outage falls into one of five bins:
| Fail count | Interpretation |
|---|---|
| 0 / 4 | Target is globally healthy. If you can't reach it, the issue is local. |
| 1 / 4 | Regional — one geography has issues. Usually a CDN edge or transit route. |
| 2 / 4 | Partial outage. Often a CDN-provider problem affecting specific regions. |
| 3 / 4 | Mostly down. The one region still passing is often a CDN edge serving stale cached content. |
| 4 / 4 | Confirmed global outage. Every probe path failed. |
That's a four-bit diagnostic before you've even looked at the HTTP status code.
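The binning above is trivial to mechanize. A minimal sketch, with bin labels and a function name that are mine rather than the service's:

```python
BINS = {
    0: "globally healthy",
    1: "regional issue",
    2: "partial outage",
    3: "mostly down",
    4: "confirmed global outage",
}

def classify(results):
    """Map 4 per-region pass/fail results to one of the five bins.

    results: dict of region -> bool (True means the probe succeeded).
    Returns (fail_count, label).
    """
    if len(results) != 4:
        raise ValueError("expected exactly 4 regions")
    fails = sum(1 for ok in results.values() if not ok)
    return fails, BINS[fails]
```

For example, one failing region out of four classifies as (1, "regional issue").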
Patterns worth knowing
The regional-edge outage
A CDN drops traffic from one of its POPs. Probes from that region fail; every other region is green. If you're a user inside that region, the site is "down" for you, but users elsewhere see it working and report it as fine. Without multi-region data, you have no way to reconcile the two reports.
The deploy gone wrong
A provider rolls out a bad config to US East only. Probes from us-east-1 fail; us-west-2, eu-west-2, and ap-southeast-1 pass. The customer can reload their own status page from Europe, see it work, and assume users are lying. Multi-region data kills that assumption.
DNS propagation during a CDN migration
A site moves from its legacy CDN to a new one. For the first 20 minutes, probes from some regions resolve to the old IP (which has been decommissioned) and probes from others resolve to the new IP. The failure pattern migrates across regions as TTLs expire. Hard to diagnose without direct probes in each region.
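One way to surface that migrating pattern is to group regions by the A record each region's resolver returned: more than one group means propagation is still in flight. A sketch, with region names and IPs that are purely illustrative:

```python
from collections import defaultdict

def dns_split(answers):
    """answers: region -> IP string the region's resolver returned.

    Returns IP -> sorted list of regions seeing that answer. More than
    one key means the DNS change has not fully propagated yet.
    """
    groups = defaultdict(list)
    for region, ip in answers.items():
        groups[ip].append(region)
    return {ip: sorted(regions) for ip, regions in groups.items()}
```

Mid-migration you might see the old IP still answered in one region and the new IP everywhere else; once TTLs expire, the dict collapses to a single key.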
How isitdown.io does it
Every /check/<domain> request kicks off four parallel HTTP(S) probes:
- Region list: US East (Virginia), US West (Oregon), Europe (London), Asia (Singapore).
- Method: HTTP GET, follows redirects once, reads response headers.
- Timeout: 10 seconds per region (worst-case total wait is 10s, not 40s — they run in parallel).
- Phase timings: DNS lookup, TCP connect, TLS handshake, first byte — captured from the socket lifecycle.
- Retries: capped at 3 attempts per region for transient errors (EAI_AGAIN, ECONNRESET).
- Cadence: background monitors re-probe popular targets every 5 minutes.
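The retry policy in that list (retry only errors that are plausibly transient, cap attempts at 3) can be sketched as a wrapper around any probe function. The wrapper and its return shape are illustrative, not isitdown.io's implementation:

```python
TRANSIENT = {"EAI_AGAIN", "ECONNRESET"}  # error names from the list above

def probe_with_retries(probe_fn, url, max_attempts=3):
    """Call probe_fn(url) up to max_attempts times, retrying only
    transient errors. Returns (ok, detail, attempts_used)."""
    for attempt in range(1, max_attempts + 1):
        ok, detail = probe_fn(url)
        if ok or detail not in TRANSIENT:
            break  # success, or a hard failure not worth retrying
    return ok, detail, attempt
```

A hard failure like a 404 or a refused connection returns after one attempt; only the named transient errors burn the retry budget.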
The per-region result includes the HTTP status code, the response time, and (when available) the phase timings — so you can tell slow DNS from slow TLS from slow first byte. See the global snapshot at isitdown.io/status.
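Given timestamps captured at each socket milestone, the per-phase breakdown is just pairwise differences. The milestone names below are assumptions for illustration, not the service's actual field names:

```python
def phase_durations(ts):
    """ts: milestone -> monotonic timestamp in seconds.

    Returns the duration of each phase, so a slow DNS lookup is
    distinguishable from a slow TLS handshake or a slow first byte.
    """
    order = ["start", "dns_done", "tcp_connected", "tls_done", "first_byte"]
    return {
        f"{a} -> {b}": round(ts[b] - ts[a], 6)
        for a, b in zip(order, order[1:])
    }
```

A probe where "tls_done -> first_byte" dominates points at a slow origin, not a network problem.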
Direct probes vs crowd-sourced reports
Multi-region direct probes and crowd-sourced reports measure different things:
| | Direct probes | Crowd reports |
|---|---|---|
| What they measure | Network reachability right now | User-observed symptoms |
| Latency | Sub-second | Minutes |
| False positives | Rare | Common (trending topics, unrelated issues) |
| Coverage | Limited to probed regions | Broad but biased toward popular services |
| Useful for | Confirming an outage, diagnosing scope | Gauging user-visible impact |
Both have their place. If you're a user trying to figure out "is the site actually down," direct probes answer faster and more reliably. If you're gauging user sentiment or UX-level issues that don't show up in HTTP responses, crowd data helps.
Try it
Check any site's live 4-region status: isitdown.io
See the global fleet snapshot: isitdown.io/status
Related: Is it down for everyone or just me? · What 503 Service Unavailable means