Previous incidents
End of call reports + transcript showing duplicate messages
Resolved Dec 24, 2025 at 7:21am UTC
We have reverted the change and tested to confirm the issue is resolved.
Calls failing due to worker unavailability
Resolved Dec 23, 2025 at 11:18pm UTC
[IR] Dec 17th — Call Worker Degradation — Object Storage Upload Errors
Summary
On December 17, 2025, at 10:25 AM PST, we observed degradation in our Call Worker service. The issue was caused by Call Workers becoming blocked while uploading call recordings to a downstream object storage provider that was experiencing an outage of its own. The incident was fully resolved by 11:02 AM PST, once the downstream provider recovered and Call Workers returned to normal operation.
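One common mitigation for this failure mode is to bound the upload with a timeout so a worker cannot block indefinitely on a degraded storage provider. A minimal sketch (all names here are illustrative, not Vapi's actual code):

```typescript
// Sketch: bound a recording upload with a timeout so a Call Worker fails
// fast on a degraded object storage provider instead of blocking.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("upload timed out")), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

async function uploadRecording(
  upload: () => Promise<void>,
): Promise<"uploaded" | "queued"> {
  try {
    await withTimeout(upload(), 5_000); // fail fast instead of pinning the worker
    return "uploaded";
  } catch {
    // Hand off to a background retry queue so the worker stays available
    // to serve new calls; the recording is retried later.
    return "queued";
  }
}
```

With a hung or failing upload, `uploadRecording` returns `"queued"` instead of tying up the worker for the duration of the provider outage.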
Timeline (PST)
- **10:25 ...
Deepgram Transcriber Degraded
Resolved Dec 10, 2025 at 8:18pm UTC
Deepgram has resolved the issue, and we're seeing calls go through fine. We will continue to monitor closely. We highly recommend setting up transcriber fallbacks to avoid call failures in situations like this.
https://docs.vapi.ai/api-reference/assistants/create#request.body.transcriber.DeepgramTranscriber.fallbackPlan
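A fallback plan is configured on the assistant's `transcriber` in the create request, per the API reference linked above. A minimal sketch; the specific provider and model values below are illustrative assumptions, not recommendations:

```typescript
// Sketch of an assistant `transcriber` config with a fallback plan.
// Shape follows the linked API reference; the provider/model values
// shown here are assumptions for illustration only.
const transcriber = {
  provider: "deepgram",
  model: "nova-2", // assumed primary model
  fallbackPlan: {
    // Fallback transcribers are tried in order if the primary fails.
    transcribers: [
      { provider: "11labs" }, // assumed fallback provider id
    ],
  },
};

// This object would be included in the assistant create request body.
console.log(transcriber.fallbackPlan.transcribers.length);
```

If the primary transcriber degrades mid-call, the call continues on the next entry in `fallbackPlan.transcribers` rather than failing outright.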
ElevenLabs Transcriber Performance Degraded
Resolved Dec 6, 2025 at 12:26am UTC
The issue has been resolved and we will continue monitoring the situation. We recommend setting up transcriber fallbacks to avoid any failed calls in such situations - https://docs.vapi.ai/api-reference/assistants/create#request.body.transcriber.ElevenLabsTranscriber.fallbackPlan
Dashboard unavailable due to Cloudflare Issues
Resolved Dec 5, 2025 at 9:27am UTC
The Vapi dashboard is now available after Cloudflare applied a fix. We will continue to monitor to ensure there are no further disruptions.
Calls are impacted.
Resolved Dec 4, 2025 at 9:24pm UTC
The system has recovered, and we are now closely monitoring for further failures.
Call Logs after Nov 22 not available
Resolved Nov 29, 2025 at 6:40pm UTC
We identified the issue as a misconfiguration in the read-API endpoint. The fix has been applied, and all call logs should now display correctly. No data was lost.
Call Logs after 4 PM PT not loading
Resolved Nov 20, 2025 at 2:40am UTC
Affected call logs have been successfully restored. We will be providing a detailed RCA soon.
API Degradation
Resolved Nov 18, 2025 at 5:00pm UTC
Cloudflare has resolved their issues and our services are restored.
Gladia concurrency limit affected
Resolved Nov 17, 2025 at 3:00pm UTC
We have temporarily increased our concurrency limits with the provider and are working on a long-term solution.
Call concurrency limit affected
Resolved Nov 16, 2025 at 4:57pm UTC
The concurrency limit has been reset and jobs are processing normally. The issue was resolved as of 9:05 PT.
Deepgram STT degradation
Resolved Nov 13, 2025 at 11:26pm UTC
Our STT provider has made a fix on their end and is reporting improvement. We are continuing to monitor while we push out an improvement of our own: https://status.deepgram.com/incidents/vgsyqxkc67by.
Calls to OpenAI provider are affected
Resolved Nov 13, 2025 at 10:51am UTC
The issue was mitigated as of 2:50 AM PT.
SIP calls are degraded
Resolved Nov 13, 2025 at 2:05am UTC
Nov 7th 2025 SIP service degradation
Summary
On Friday, November 7th, 2025, one of our SIP gateways experienced a failure, disrupting inbound and outbound Vapi SIP calls between 10:30 AM and 12:15 PM PST.
Context
All Vapi SIP calls go through our SIP infrastructure, which handles SIP trunking, authentication, and registration. When an inbound SIP call arrives, the SIP SBC authenticates and validates it, making a webhook call to our API server for call registr...
Degradation in SIP calls
Resolved Nov 8, 2025 at 8:51am UTC
We are working on an RCA for the SIP degradation and will share it by November 12th.
Elevated errors in connecting calls.
Resolved Oct 28, 2025 at 11:01pm UTC
We experienced a spike in call connection errors between 3:40 and 3:58. The issue has since been resolved.
API + DB Degradation
Resolved Oct 22, 2025 at 7:35pm UTC
We saw increased latency and request timeouts due to API and DB degradation. We worked with our DB provider to resolve this and made a change, then monitored to ensure improvement.
There was a DB restart, and things are looking normal now. The issues have been resolved.
Elevated errors in api
Resolved Oct 21, 2025 at 4:54pm UTC
We experienced elevated errors in our API (for phone call creation) at 9:35 AM PT for a few minutes. This has been resolved.
Increased 5XXs for a minute
Resolved Oct 15, 2025 at 6:31pm UTC
We had a restart on our database endpoint, leading to a brief spike in 500s on API endpoints.
Worker died errors on calls.
Resolved Oct 14, 2025 at 9:00am UTC
We had a small blip on the Daily channel with call.in-progress.error-vapifault-worker-died errors due to a new Daily deployment. We have rolled it back.
Degradation for inbound twilio calls
Resolved Oct 13, 2025 at 8:11pm UTC
The issue with Twilio inbound calls failing on Daily has been resolved. The root cause was connection timeouts in a new egress proxy service.
Call logs not visible on dashboard
Resolved Oct 13, 2025 at 7:45am UTC
We detected call logs not appearing in the dashboard for some time. This was due to an error while attaching a partition and has now been resolved. Any missing call logs will be populated soon.
Increased Latency in Vapi Web Call
Resolved Oct 4, 2025 at 11:22pm UTC
After further investigation with our WebRTC provider, this does not seem to be a platform issue. We will follow up with impacted users directly.