Previous incidents

February 2025
Feb 26, 2025
1 incident

Signups and credential creation are not working

Degraded

Resolved Feb 26 at 09:08pm PST

Root Cause Analysis (RCA) for the Incident – Timeline in PT

TL;DR

A recent security fix by Supabase impacted database projects using pg_net 0.8.0, causing failures in the POST /credential endpoint and new user signups.
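
For reference (not part of the RCA itself), a minimal sketch of how an affected project could confirm which pg_net version its database is running; the connection string is a placeholder.

```ts
// check-pg-net.ts: minimal sketch for confirming the installed pg_net version
// on a Postgres project. The connection string is a placeholder.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // pg_extension lists every installed extension and its version.
  const { rows } = await client.query(
    "select extname, extversion from pg_extension where extname = 'pg_net'"
  );
  console.log(rows); // e.g. [ { extname: 'pg_net', extversion: '0.8.0' } ]

  await client.end();
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```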

Timeline

Feb 26, 3:00 AM PT: Failures in the POST /credential endpoint and new user signups begin.

3:26 AM PT: On-call engineer observes a surge in errors related to POST /credential, including an unusual PostgresError.

3:32 AM PT: Team...

Feb 22, 2025
1 incident

AssemblyAI transcriber calls are facing degradation

Degraded

Resolved Feb 22 at 06:17am PST

It is resolved now. It was due to an account-related problem, which has been fixed. We will be taking steps to make sure it doesn't happen again.

Feb 21, 2025
1 incident

API returning 413 (payload too large) due to networking misconfiguration

Resolved Feb 21 at 11:24am PST

TL;DR

A change in the cluster-router networking filter caused an increase in 413 (request entity too large) errors. API requests to POST /call, /assistant, and /file were impacted.
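
For illustration only (not part of the fix), a hedged sketch of how an API client could detect 413 responses from POST /file explicitly instead of treating them as generic failures; the base URL, auth header, and client-side size guardrail are assumptions.

```ts
// upload-with-413-check.ts: illustrative sketch that surfaces 413 (request
// entity too large) responses explicitly. Base URL, auth, and size limit are
// assumptions.
import { readFile } from "node:fs/promises";

const MAX_UPLOAD_BYTES = 20 * 1024 * 1024; // assumed client-side guardrail

async function uploadFile(path: string): Promise<void> {
  const body = await readFile(path);
  if (body.byteLength > MAX_UPLOAD_BYTES) {
    throw new Error(`file is ${body.byteLength} bytes, over the assumed limit`);
  }

  const form = new FormData();
  form.append("file", new Blob([body]), path);

  const res = await fetch("https://api.vapi.ai/file", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.VAPI_API_KEY}` },
    body: form,
  });

  if (res.status === 413) {
    // The edge/router rejected the payload before it reached the API.
    throw new Error("413 request entity too large: reduce the payload size");
  }
  if (!res.ok) {
    throw new Error(`upload failed: ${res.status} ${await res.text()}`);
  }
}
```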

Timeline

  1. February 20th 9:54pm PST: A change to the cluster-router is released and traffic is cut over to prod1.
  2. 10:19pm PST: 413 responses from Cloudflare begin appearing at an increased rate in Datadog logs.
  3. February 21st ~8:50am: Users in Discord flag requests failing with 413 errors.
  4. ...

Feb 20, 2025
2 incidents

Deepgram is failing to send transcription intermittently

Degraded

Resolved Feb 21 at 12:57am PST

Deepgram has resolved the incident on their side. Back to normal.
https://status.deepgram.com/incidents/wr5whbzk45mg

ElevenLabs rate limiting and high latency

Degraded

Resolved Feb 20 at 09:11am PST

ElevenLabs has confirmed that the problem has been fixed. No failures in the last 10 minutes. Resolving the incident.
Here is the ElevenLabs report on the incident: https://status.elevenlabs.io/incidents/01JMJ4B025B83H28C3K81B1YS4

Feb 19, 2025
1 incident

ElevenLabs Rate Limiting

Resolved Feb 19 at 11:43am PST

ElevenLabs is imposing rate limits that will impact Vapi users who have it configured as their voice provider. We are working to resolve this issue; in the meantime, users can restore service by switching to Cartesia or using their own ElevenLabs API key.
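
As a rough illustration of that workaround (not an exact API reference), a sketch of pointing an assistant's voice at Cartesia via a PATCH to /assistant/:id; the request body shape and voiceId below are assumptions and should be checked against the Vapi API docs.

```ts
// switch-voice-provider.ts: rough sketch of the workaround, pointing an
// assistant's voice at Cartesia instead of ElevenLabs. The request body shape
// and voiceId are assumptions; verify against the Vapi API reference.
async function switchToCartesia(assistantId: string): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      voice: { provider: "cartesia", voiceId: "example-voice-id" }, // assumed shape
    }),
  });
  if (!res.ok) {
    throw new Error(`PATCH /assistant failed: ${res.status}`);
  }
}
```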

January 2025
Jan 30, 2025
1 incident

API is degraded

Degraded

Resolved Jan 30 at 03:44am PST

TL;DR

The API experienced intermittent downtime due to saturated database connections, followed by call failures caused by the database running out of memory. A forced deployment using direct connections, along with capacity adjustments, restored service.
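
For illustration, a hedged sketch of the kind of pool settings involved: capping connections and failing fast when the pool is saturated, so the database degrades visibly rather than silently choking the API. The numbers are placeholders, not the values used in the fix.

```ts
// db-pool.ts: illustrative sketch of a bounded Postgres pool that fails fast
// when connections are exhausted. The numbers are placeholders.
import { Pool } from "pg";

export const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                        // hard cap on open connections per pod
  connectionTimeoutMillis: 2_000, // give up quickly instead of queueing forever
  idleTimeoutMillis: 30_000,      // release idle connections back to the server
});

// Lightweight health check suitable for a readiness probe.
export async function dbHealthy(): Promise<boolean> {
  try {
    await pool.query("select 1");
    return true;
  } catch {
    return false;
  }
}
```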

Timeline

2:09AM: Alerts triggered for API unavailability (503 errors) and frequent pod crashes.
2:40AM: A switch to a backup deployment showed temporary stability, but pods continued to restart and out-of-memory errors began appearing.
3:27AM...

Jan 29, 2025
1 incident

API is down

Downtime

Resolved Jan 29 at 09:24am PST

TL;DR

A failed deployment by Supabase of their connection pooler, Supavisor, in one region caused all database connections to fail. Since API pods rely on a successful database health check at startup, none could start properly. The workaround was to bypass the pooler and connect directly to the database, restoring service.
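
To make the workaround concrete, a hedged sketch of swapping the pooled connection string for a direct one; the hostnames and ports follow common Supabase defaults (pooler on 6543, Postgres on 5432) and are placeholders, not our actual configuration.

```ts
// direct-connection.ts: sketch of the workaround, bypassing the connection
// pooler and connecting straight to Postgres. Hostnames and ports are
// placeholders based on common Supabase defaults.
import { Client } from "pg";

// Normally the API connects through the pooler, e.g.
//   postgres://user:pass@<project>.pooler.supabase.com:6543/postgres
// The workaround points at the database itself on its direct port:
const DIRECT_URL =
  process.env.DIRECT_DATABASE_URL ??
  "postgres://user:pass@db.<project>.supabase.co:5432/postgres";

export async function connectDirect(): Promise<Client> {
  const client = new Client({ connectionString: DIRECT_URL });
  await client.connect(); // succeeds even while the pooler deployment is down
  return client;
}
```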

Timeline

8:08am PST, Jan 29: Monitoring detects Postgres errors.
8:13am: The provider’s status page reports a failed connection pooler deployment. (Due to subscri...

Jan 21, 2025
1 incident

Updates to DB are failing

Degraded

Resolved Jan 21 at 05:23am PST

TL;DR

A configuration error caused the production database to switch to read-only mode, blocking write operations and eventually leading to an API outage. Restarting the database restored service.
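
For reference, a hedged sketch of inspecting and clearing the read-only setting involved; the RCA does not state whether the flag was flipped at the session, database, or cluster level, so the statements below are examples only.

```ts
// read-only-check.ts: illustrative sketch for inspecting and clearing Postgres
// read-only settings. Treat these statements as examples, not the exact fix.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Default read-only mode for new transactions in this session/database.
  const { rows } = await client.query("show default_transaction_read_only");
  console.log("default_transaction_read_only =", rows[0].default_transaction_read_only);

  // Clear a database-level override (requires sufficient privileges).
  await client.query(
    "alter database postgres set default_transaction_read_only = off"
  );

  await client.end();
}

main().catch(console.error);
```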

Timeline

5:03:04am: A SQL client connected to the production database via the connection pooler, which inadvertently set the database to read-only.
5:05am: Write operations began failing.
5:18am: The API went down due to accumulated errors.
~5:23am: The team initiated a database restart.
5:...

Jan 13, 2025
1 incident

Calls not connecting for `weekly` channel

Degraded

Resolved Jan 13 at 08:49am PST

TL;DR: Scaler failed and we didn't have enough workers

Root Cause

During a weekly deployment, Redis IP addresses changed. This prevented our scaling system from finding the queue, leaving us stuck at a fixed number of workers instead of scaling up as needed. We resolved the issue by temporarily moving traffic to our daily environment.
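
As an illustration of the failure mode (not the actual scaler code), a hedged sketch of a scaler reading queue depth from Redis by DNS name rather than a hard-coded IP, so an IP change during a redeploy does not leave it unable to find the queue; the host, queue name, and per-worker capacity are placeholders.

```ts
// queue-depth.ts: illustrative sketch, not the actual scaler. Reads queue depth
// from Redis by DNS name rather than a hard-coded IP so an IP change during a
// redeploy does not blind the scaler. Host and queue name are placeholders.
import Redis from "ioredis";

const redis = new Redis({
  host: process.env.REDIS_HOST ?? "redis.internal", // DNS name, not a fixed IP
  port: 6379,
});

export async function desiredWorkers(): Promise<number> {
  const pending = await redis.llen("call-queue"); // placeholder queue name
  const perWorker = 10; // assumed capacity per worker
  return Math.max(1, Math.ceil(pending / perWorker));
}
```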

Timeline

Jan 11, 5:12 PM: Deploy started
Jan 13, 6:00 AM: Calls started failing due to scaling issues
Jan 13, 8:45 AM: Resolved by moving traffic to daily
Ja...

December 2024
Dec 11, 2024
1 incident

OpenAI API is degraded

Downtime

Resolved Dec 11 at 08:00pm PST

Resolved: https://status.openai.com/
