Previous incidents
Elevated errors in connecting calls.
Resolved Oct 28, 2025 at 11:01pm UTC
We experienced a spike in call connection errors between 3:40 and 3:58. The issue has since been resolved.
API + DB Degradation
Resolved Oct 22, 2025 at 7:35pm UTC
We saw increased latency and request timeouts caused by API and database degradation. We worked with our DB provider to resolve this and applied a change, then monitored to confirm improvement.
After a database restart, metrics returned to normal and the issue has been resolved.
Elevated errors in api
Resolved Oct 21, 2025 at 4:54pm UTC
We experienced elevated errors in our API (for phone call create) at 9:35 AM PT for a few minutes. This has been resolved.
Increased 5XXs for a minute
Resolved Oct 15, 2025 at 6:31pm UTC
A restart of our database endpoint led to a brief spike in 500s on API endpoints.
Worker died errors on calls.
Resolved Oct 14, 2025 at 9:00am UTC
We saw a brief spike of call.in-progress.error-vapifault-worker-died errors on the daily cluster, caused by a new daily deployment. We have rolled it back.
Degradation for inbound Twilio calls
Resolved Oct 13, 2025 at 8:11pm UTC
The issue with Twilio inbound calls failing on daily has been resolved. The root cause was connection timeouts in a new egress proxy service.
Call logs not visible on dashboard
Resolved Oct 13, 2025 at 7:45am UTC
We detected that call logs were not appearing in the dashboard for some time. This was caused by an error while attaching a database partition and has now been resolved. Any call logs that were missing will be populated shortly.
Increased Latency in Vapi Web Call
Resolved Oct 4, 2025 at 11:22pm UTC
After further investigation with our WebRTC provider, this does not appear to be a platform issue. We will follow up with impacted users directly.
Elevated API errors between 9:30 and 11:30 AM PT on Sept 30
Resolved Sep 30, 2025 at 7:11pm UTC
We experienced intermittent spikes in 5xx errors on our APIs in the weekly cluster. The root cause was identified, and a fix has already been implemented.
During this period, both inbound and outbound calls may have been affected, as they rely on the APIs for data, resulting in potential service degradation.
Degradation in connecting SIP calls.
Resolved Sep 12, 2025 at 10:47pm UTC
Services are restored.
High error rates connecting to our assistant DB. Calls and API are affected.
Resolved Sep 10, 2025 at 7:27pm UTC
Problem has been resolved now. All services are healthy again.
Call Logs Since 09/05 00:00 UTC not loading
Resolved Sep 5, 2025 at 1:43am UTC
We have fixed the issue. Call logs are now returned correctly by the API.
Cartesia Voices Degraded
Resolved Sep 4, 2025 at 8:55pm UTC
Cartesia has resolved the issue and is fully operational.
Elevated cases of assistant not responding in calls on daily cluster
Resolved Sep 3, 2025 at 10:06am UTC
We have identified the root cause. The problem has been fixed by rolling back a recent deployment on daily.
Call Transfers Degradation on Vapi Phone Number
Resolved Sep 2, 2025 at 7:24pm UTC
We have scaled up our telephony infrastructure resources and bumped our rate limits. We haven't seen any more issues in the last 20 minutes, and call transfers are working as expected now. We are closely monitoring.
Deepgram Aura-2 TTS Performance Degradation
Resolved Aug 28, 2025 at 7:04pm UTC
Deepgram is investigating an issue where a subset of requests may return elevated rates of 5XX errors or experience significantly higher time to first byte.
Aura-2 TTS Performance Degradation
Resolved Aug 27, 2025 at 9:00pm UTC
This incident has been resolved.
Intermittent issues with the dashboard loading
Resolved Aug 26, 2025 at 12:04am UTC
The issue was pinpointed and the offending change was reverted quickly.
Elevated error rates on ElevenLabs Voice requests
Resolved Aug 6, 2025 at 4:14am UTC
ElevenLabs released a fix and is fully operational now. Error rates have returned to normal levels, and we will continue to monitor.
For impacted users, we recommend configuring a Vapi voice fallback plan to fail over automatically in the future:
https://docs.vapi.ai/voice-fallback-plan
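As an illustrative sketch only (field names should be checked against the linked docs, and the provider/voice ID values below are placeholders, not real identifiers), a fallback plan lives inside the assistant's voice configuration and lists backup voices to try in order:

```json
{
  "voice": {
    "provider": "11labs",
    "voiceId": "primary-voice-id",
    "fallbackPlan": {
      "voices": [
        { "provider": "cartesia", "voiceId": "backup-voice-id" }
      ]
    }
  }
}
```

With a plan like this in place, a provider outage such as the one above degrades to the backup voice instead of failing the call.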
API and calls were momentarily disrupted
Resolved Aug 5, 2025 at 7:12pm UTC
IR August 4th: Call Degradation due to Pod Evictions
TL;DR
On August 4th, aggressive pod consolidation by Karpenter caused Redis pods to be evicted and restarted. This led to API pod failures, which triggered a failover to an outdated networking component and resulted in a total of 393 dropped calls.
Timeline (PST)
August 4th
- 11:02-11:27 AM - Core team identifies Karpenter pods in CrashLoopBackOff (OOMKi...
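Evictions like the ones described above can typically be prevented by opting stateful pods out of Karpenter's voluntary disruption. The manifest below is a hypothetical sketch, not our actual configuration; the annotation name varies by Karpenter version, so verify it against the Karpenter docs for your release:

```yaml
# Illustrative only: mark Redis pods so Karpenter skips them during consolidation.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis          # hypothetical workload name
spec:
  serviceName: redis
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
      annotations:
        # Karpenter honors this pod annotation and will not voluntarily disrupt
        # the pod (older Karpenter versions use karpenter.sh/do-not-evict).
        karpenter.sh/do-not-disrupt: "true"
    spec:
      containers:
        - name: redis
          image: redis:7
```

The trade-off is that such nodes cannot be consolidated until the pods are removed, so the annotation is best reserved for genuinely disruption-sensitive workloads like Redis.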