AI provider status

Is your AI stack down?

Live status of the major AI APIs from two angles: what the provider says on its own status page, and what a real HTTP probe to the API host returns from each of our four monitoring regions. The combination catches the common failure mode where the official status page lags real impact by 30+ minutes during major incidents.

1 provider tracked · 1 operational · 0 degraded · 0 outage · 0 pending
OpenAI
api.openai.com · Atlassian Statuspage
OPERATIONAL

ChatGPT, GPT-4o, GPT-4o-mini, the Realtime API, and the Assistants API. Powers Cursor, Windsurf, Notion AI, and many of the AI-flavored consumer apps shipped after November 2022.

Fastest region: no probe yet · checked 30s ago
All Systems Operational

How we measure

Two probe layers, both unauthenticated and free to run. We poll each provider's official Atlassian Statuspage feed (e.g. status.openai.com/api/v2/status.json) every 5 minutes for the publisher's own classification. Separately, we run HTTP HEAD requests against the API host (api.openai.com) from US East, US West, Europe, and Asia in parallel; that round trip is the latency you see on each card. When the status page says green but our HTTP probes are timing out, that's the gap this page exists to catch, so we surface both signals side by side and let you spot the divergence yourself.
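A minimal sketch of the two layers in TypeScript, assuming Node 18+ (global fetch and AbortSignal.timeout). The region labels and the gap check are illustrative: the real probes run from servers in each region rather than one process, and this is not the production monitor.

```ts
// Sketch of both probe layers, assuming Node 18+ (global fetch,
// AbortSignal.timeout). Region labels are illustrative; the real
// probes run from servers in each region, not from one process.

const STATUSPAGE_URL = "https://status.openai.com/api/v2/status.json";
const API_HOST = "https://api.openai.com";
const REGIONS = ["us-east", "us-west", "europe", "asia"];

// Layer 1: the publisher's own classification from its Atlassian
// Statuspage feed. "indicator" is one of none/minor/major/critical.
async function fetchStatuspage(): Promise<string> {
  const res = await fetch(STATUSPAGE_URL);
  const body = (await res.json()) as { status: { indicator: string } };
  return body.status.indicator;
}

// Layer 2: an unauthenticated HEAD request against the API host.
// Any HTTP response, even a 401, proves the host is up and serving;
// only a network error or timeout counts as a failed probe.
async function probeHost(timeoutMs = 5000): Promise<{ ok: boolean; ms: number }> {
  const start = Date.now();
  try {
    await fetch(API_HOST, { method: "HEAD", signal: AbortSignal.timeout(timeoutMs) });
    return { ok: true, ms: Date.now() - start };
  } catch {
    return { ok: false, ms: Date.now() - start };
  }
}

async function check(): Promise<void> {
  const [indicator, probes] = await Promise.all([
    fetchStatuspage(),
    Promise.all(REGIONS.map(() => probeHost())), // four probes in parallel
  ]);

  for (const [i, p] of probes.entries()) {
    console.log(`${REGIONS[i]}: ${p.ok ? `${p.ms}ms` : "FAIL"}`);
  }

  // The gap this page exists to catch: the status page says green
  // while real probes are failing.
  if (indicator === "none" && probes.some((p) => !p.ok)) {
    console.log("gap: status page reports operational but probes are failing");
  } else {
    console.log(`statuspage indicator: ${indicator}`);
  }
}

check();
```

Run it on a 5-minute timer (cron, setInterval, or a scheduler) to match the polling cadence described above.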

What this doesn't catch: model-level health (e.g. "GPT-4o is degraded but GPT-4o-mini is fine"), authenticated-API-only failures (we don't run paid requests), or rate-limit-induced 429s for specific accounts. Each provider's own status page handles those better than we could; we link to it from every card.