Cloudflare Error 520 — "Web server is returning an unknown error" — is the catch-all Cloudflare uses when the origin server did respond, but the response wasn't HTTP that Cloudflare could parse. It's the most ambiguous of Cloudflare's 5xx family, because the only definitive thing it tells you is "we tried, your origin didn't speak the language we expected." Five concrete behaviors at the origin produce this error, and the fix depends entirely on which one is in play.
The literal definition ¶
Cloudflare's documentation describes 520 as: "An empty, unknown, or unexpected response was received from the origin server." That description is broader than it sounds. The TCP connection succeeded — Cloudflare reached the origin's IP on the right port. TLS, if used between Cloudflare and origin, completed. But what came back over that connection wasn't a well-formed HTTP response, so Cloudflare can't proxy it to the user.
This puts 520 in a different bucket from the rest of Cloudflare's 5xx range. 521 / 522 / 523 are "we couldn't even talk to your origin." 524 is "origin held the connection but never finished the response in time." 520 is "we talked, you said something, we don't know what it was."
520 vs the rest of Cloudflare's 5xx ¶
| Code | What Cloudflare saw | Where to look first |
|---|---|---|
| 520 | Origin replied with something Cloudflare can't parse — empty, malformed, or truncated. | Origin application logs at the moment of the error. |
| 521 Web Server Is Down | TCP connection refused — origin actively rejected the connection (closed port, firewall block, application down). | Origin uptime + firewall rules + Cloudflare IP allowlist. |
| 522 Connection Timed Out | TCP connection never completed — Cloudflare's SYN went out, the origin never SYN-ACK'd within the window. | Origin overload (SYN backlog exhausted) or a routing/firewall layer silently dropping packets. |
| 523 Origin Is Unreachable | Cloudflare couldn't route to the origin's IP at all — DNS issue at Cloudflare's resolver, or origin's IP isn't routable from Cloudflare's network. | DNS records for the origin host, network ACLs. |
| 524 A Timeout Occurred | TCP connection completed, request was sent, origin held the connection but didn't finish the HTTP response within Cloudflare's 100-second window. | Slow upstream — same shape as 504. See 504 reference. |
Quick mental shortcut: 521-523 are connection failures; 524 is slowness; 520 is "the connection worked, but the response was bad."
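The table collapses into a small lookup if you want it in alerting or runbook tooling. The one-line summaries below are paraphrases of the table above, not official Cloudflare text:

```python
# Cloudflare 5xx triage map, paraphrased from the comparison table.
CF_5XX = {
    520: "origin replied, response unparseable -> check origin app logs",
    521: "TCP connection refused -> origin down, or firewall blocking Cloudflare",
    522: "TCP never completed -> origin overload, or silent packet drop",
    523: "origin unroutable -> check DNS records and network ACLs",
    524: "origin held connection but never finished -> slow upstream, same shape as 504",
}

print(CF_5XX[520])
```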
Five real-world causes of 520 ¶
1. The origin closed the connection mid-response
The application started writing a response: it sent the headers and began streaming the body, and then the process crashed or the upstream proxy killed the worker. Cloudflare gets headers but no body, or a body that ends mid-stream; the HTTP framing is broken. Cloudflare can't proxy half a response, so it returns 520. Common after worker timeouts inside the origin (PHP-FPM or Gunicorn worker timeouts, uncaught Node exceptions in stream handlers).
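This failure shape can be reproduced locally. A minimal sketch, using a toy origin on a loopback socket (host, port, and byte counts are made up for illustration): the server advertises a Content-Length, sends part of the body, and closes, which is exactly the kind of response a strict proxy refuses to forward.

```python
# Toy origin that dies mid-body: headers promise 1000 bytes, it sends 20.
import socket
import threading

def broken_origin(server_sock):
    conn, _ = server_sock.accept()
    conn.recv(1024)  # read (and ignore) the request
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: 1000\r\n"
        b"\r\n"
        b"only twenty bytes..."  # "worker" crashes here, 20 of 1000 bytes sent
    )
    conn.close()  # connection drops before the body is complete

server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=broken_origin, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: example.test\r\n\r\n")
raw = b""
while chunk := client.recv(4096):
    raw += chunk
client.close()

head, _, body = raw.partition(b"\r\n\r\n")
cl_line = [l for l in head.split(b"\r\n") if l.lower().startswith(b"content-length")][0]
declared = int(cl_line.split(b":")[1])
print(f"declared {declared} bytes, received {len(body)}")
```

A proxy in Cloudflare's position sees the same mismatch between the declared Content-Length and the bytes that actually arrived, and has nothing well-formed to pass along.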
2. The origin returned a response with no headers
Some misconfigured nginx setups, custom HTTP servers, or proxy chains return responses where the HTTP status line is present but the headers are empty or invalid. Cloudflare expects at least a Content-Type or Content-Length to make the response usable; when neither is present and the body is also empty, Cloudflare flags 520. This often happens when an origin nginx is configured with a bare `return 200;` (no body) and aggressive header filtering strips what headers remain.
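The exact rules Cloudflare's parser applies are not public, but a sketch of the minimal framing checks any strict proxy runs before forwarding looks something like this (the function name and the specific checks are illustrative):

```python
def framing_problems(raw: bytes) -> list:
    """Reasons a strict proxy might refuse to forward this raw response."""
    if not raw:
        return ["empty response"]
    problems = []
    head, sep, _body = raw.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    if not lines[0].startswith(b"HTTP/"):
        problems.append("missing or invalid status line")
    if not sep:
        problems.append("header block never terminated")
    if not lines[1:]:
        problems.append("no headers at all")
    return problems

# Status line present, but zero headers: the cause-2 shape.
print(framing_problems(b"HTTP/1.1 200 OK\r\n\r\n"))
# No HTTP framing at all, e.g. a raw error dump from a crashed process.
print(framing_problems(b"<html>oops</html>"))
```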
3. The response exceeded Cloudflare's size limit
Cloudflare's Free / Pro / Business plans cap response sizes (the cap varies by plan; historically around 100MB for Business, less for Free). A response larger than the cap is truncated, and Cloudflare returns 520 rather than serving truncated bytes that would corrupt the user's download. Legitimate large-file serving should bypass Cloudflare's cache, or use Workers or R2, which are designed for large objects.
4. The origin sent invalid HTTP headers
Headers with non-ASCII bytes, CRLF injection, duplicated single-value headers, or values longer than the parser's limits. Some application frameworks let user-supplied data leak into headers (Set-Cookie values from URL parameters, X-Custom from form input) and the resulting bytes break HTTP parsing. Cloudflare is strict about HTTP grammar and will 520 rather than forward malformed headers downstream.
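A minimal sketch of the defensive fix for this cause, assuming the leak is user input flowing into a response header value (the function name is hypothetical; real frameworks often do some of this for you, but not all of them do):

```python
def safe_header_value(user_input: str) -> str:
    """Sanitize a user-supplied string before it becomes a header value."""
    # Drop CR/LF so an injected "\r\nSet-Cookie: ..." can't split into a
    # second header, and force ASCII since header values must stay in range.
    cleaned = user_input.replace("\r", "").replace("\n", "")
    return cleaned.encode("ascii", "ignore").decode("ascii")

# A malicious redirect target trying to smuggle in a second header:
evil = "https://example.test/\r\nSet-Cookie: session=stolen"
print(safe_header_value(evil))  # injection collapses into a harmless value
```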
5. TLS handshake to origin succeeded but session terminated abnormally
Less common, but real: the origin's TLS layer sent a fatal alert in the middle of the response (a cert validation failure on a re-handshake, or session ticket rotation gone wrong), or the TCP connection was reset by an intermediate device (a corporate proxy or DDoS-mitigation appliance) that doesn't understand Cloudflare's traffic pattern. Cloudflare gets a partial response or none at all, and returns 520.
Reading the cf-ray header ¶
Cloudflare 520 pages include a Ray ID at the bottom — a string like 8a7d5e2fbcd1a1c3-DFW. The trailing three letters are the Cloudflare data center that handled the request (DFW = Dallas-Fort Worth, LHR = London Heathrow, NRT = Tokyo Narita, etc.). Two useful facts:
- The Ray ID is what you'd give Cloudflare support if you have a Pro / Business / Enterprise account. They can look up the exact upstream interaction in their logs.
- If the trailing data-center code keeps changing across requests, the issue is region-independent (almost certainly origin). If it's the same data center every time, you might be seeing a regional Cloudflare-side issue and bypass via DNS would help diagnose.
You can also see the Ray ID in response headers without hitting the error page: /headers on isitdown.io dumps every response header, including cf-ray, for any URL.
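Extracting the data-center code from a Ray ID is just string slicing on the last hyphen; the sample value below is made up for illustration:

```python
def ray_colo(ray_id: str) -> str:
    """Return the trailing data-center code from a cf-ray value."""
    return ray_id.rsplit("-", 1)[-1]

print(ray_colo("8a7d5e2fbcd1a1c3-DFW"))  # -> DFW
```

If you log cf-ray values alongside your own errors, this is enough to spot whether failing requests cluster on one data center or spread across all of them.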
Diagnosing 520 as the site owner ¶
- Pull the origin's access logs at the moment of the error. If the origin's logs show a request that was received and "completed" with status 200 but Cloudflare returned 520, you have a body-truncation or framing issue (cause 1, 2, or 5). If the origin logs show no request at all, Cloudflare reached a different IP than the one your application is listening on (a DNS/routing mismatch, or an intermediate device intercepting the connection as in cause 5).
- Bypass Cloudflare and hit the origin directly. If your DNS at Cloudflare points at your real origin IP, you can hit that IP while keeping the Host header and SNI correct: `curl -k --resolve example.com:443:1.2.3.4 https://example.com/`. If the response is fine direct-to-origin, something between curl's tolerance and Cloudflare's stricter expectations is in play. If the same response looks broken even in plain curl, your origin is producing the malformed output.
- Check the response size and headers manually. `curl -i` (include headers) reveals empty header sections, suspicious Content-Length values, and oddly encoded data. The response that breaks Cloudflare almost always looks slightly off in plain curl too.
- Look at the time-of-error pattern. 520s on every request mean a structural problem (causes 1-2). 520s only on specific URLs mean an endpoint-specific bug (causes 3-4 — often request-data-driven). 520s on a random ~1% of requests usually mean a worker process is occasionally crashing and Cloudflare catches the in-flight requests.
- Consider the Cloudflare features in the path. If you have Workers, Page Rules with response transformations, or HTML rocket-loader / minification enabled, those can interact with malformed responses in surprising ways. Temporarily disabling them isolates whether the 520 originates upstream or in Cloudflare's processing layer.
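The time-of-error triage above can be sketched as a quick log-bucketing pass. The log sample and its (path, status) shape are assumptions for illustration; adapt the parsing to whatever your access-log format actually is:

```python
# Bucket 520s by path to tell structural, endpoint-specific, and random
# failures apart. A path stuck at ~100% is an endpoint-specific bug; a
# uniform low rate across all paths points at an occasionally crashing worker.
from collections import Counter

sample = [  # (path, status) pairs pulled from an access-log slice
    ("/api/export", 520), ("/", 200), ("/api/export", 520),
    ("/", 200), ("/about", 200), ("/api/export", 520),
]

total = Counter(path for path, _ in sample)
errors = Counter(path for path, status in sample if status == 520)

for path in sorted(total):
    rate = errors[path] / total[path]
    print(f"{path}: {errors[path]}/{total[path]} returned 520 ({rate:.0%})")
```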
Diagnosing 520 as a user ¶
520 is a server-side problem. There's nothing client-side that produces it. But two things you can do:
- Refresh after a few seconds. If the origin is in a crash-loop, your refresh might catch the moment the new worker is healthy. If the issue is structural, the 520 will persist.
- Note the Ray ID. If you're reporting the issue to the site operator, the Ray ID is the single most useful piece of information you can include — they can look it up if they have a paid Cloudflare plan.
FAQ ¶
Why does Cloudflare not just pass through whatever the origin sent?
Forwarding a malformed HTTP response would corrupt downstream parsers — including the user's browser, which would render whatever fragment came through and produce confusing visual artifacts or runtime errors. Returning a clean 520 with the Cloudflare error page is a better user experience than passing the broken response through. The cost is that the operator has to debug "what does Cloudflare consider unparseable" rather than seeing the raw upstream output.
Can a 520 be caused by Cloudflare itself, not the origin?
Rarely. Cloudflare's edge processing layer can produce 520 if a Worker script throws or returns malformed data, or if a Cloudflare-side feature corrupts the response stream. But unmodified Cloudflare proxying without Workers / Page Rules almost never produces 520 from its own behavior — the verdict really does mean "what came back from upstream wasn't usable."
How is 520 different from a regular 502?
A regular 502 (covered in the 502 reference) is upstream-agnostic: any reverse proxy can return it for any malformed-upstream-response shape. Cloudflare 520 is the same idea but more specific — it's Cloudflare's branded version, and the Ray ID + Cloudflare-specific debugging tools (real-time logs, Workers debugging, Page Rules audit) are available to dig in. If you see a generic 502 page versus the Cloudflare-branded 520 page, you can tell which proxy is reporting the failure.
Can switching DNS away from Cloudflare make the error go away?
It will hide the error page (because Cloudflare won't be in the path), but the underlying origin issue is still there — direct visitors will hit whatever the malformed origin response was. The right move is to fix the origin, not bypass the proxy that's flagging the problem honestly.