# support
d
Endpointing lately is insane! Turn latency: 3307ms (transcriber: 710ms, endpointing: 2091ms, kb: N/A, model: 177ms, voice: 291ms)
v
Hey there! I see you're experiencing some high endpointing latency values; those numbers are definitely on the high side. Based on the metrics you shared, endpointing is taking 2-2.6 seconds, which is driving your overall turn latency up. I can help explain what might be causing this and how to address it.

## What causes high endpointing latency?

According to the [information I found](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1331035264751112202), high endpointing latency can be caused by several factors:

1. **Endpointing model issues** - The model might be taking longer than usual to identify when you've stopped speaking
2. **Incorrect configuration settings** - Your `startSpeakingPlan.waitSeconds` or Deepgram endpointing values might need adjustment
3. **Speech model processing delays** - Sometimes speech models can't produce expected results in the required time
4. **Network issues** - Random latency bumps due to networking problems
5. **Resource exhaustion** - Backend capacity issues or rate limiting

## How to fix it

Here are some troubleshooting steps you can take:

1. **Adjust your endpointing settings**:
   ```json
   "startSpeakingPlan": {
     "waitSeconds": 0.8,
     "smartEndpointingEnabled": true
   }
   ```
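The snippet above can be packaged into an assistant-update payload. Here's a minimal sketch, assuming Vapi's camelCase field names as shown; the endpoint path and model choice are illustrative, so verify them against your API version before use:

```python
# Sketch: build the startSpeakingPlan fragment for a Vapi assistant update.
# Field names (startSpeakingPlan, waitSeconds, smartEndpointingEnabled) are
# taken from the config above; confirm against the current API reference.

def build_start_speaking_plan(wait_seconds: float = 0.8,
                              smart_endpointing: bool = True) -> dict:
    """Return the startSpeakingPlan portion of an assistant-update payload."""
    if not 0.0 <= wait_seconds <= 5.0:
        raise ValueError("waitSeconds should be a small non-negative delay")
    return {
        "startSpeakingPlan": {
            "waitSeconds": wait_seconds,
            "smartEndpointingEnabled": smart_endpointing,
        }
    }

payload = build_start_speaking_plan()
# You would then send this as part of a PATCH to your assistant, e.g.:
#   PATCH https://api.vapi.ai/assistant/{assistant_id}   (path assumed)
#   Authorization: Bearer <VAPI_API_KEY>
```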
2. **Modify Deepgram's endpointing value** - Try setting it to 300ms instead of the default 10ms, which can be too aggressive. [Several users](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1344177033625997322) have reported success with this change.
3. **Check your network connectivity** - Ensure optimal routing to the nearest data center.
4. **Review your Vapi configuration** - Check settings like `silenceTimeoutSeconds`, `responseDelaySeconds`, and `llmRequestDelaySeconds`.

Would you like me to provide more specific guidance on implementing any of these fixes? I'd be happy to help you troubleshoot further!
s
probably related to known issues with transcriber ... vapi is already fixing it ... @Silverstar307 posted a possible workaround at https://discord.com/channels/1211482211119796234/1357701801964736704/1357712404720648283
@Intelli says: The workaround doesn't work ... sorry
s
Use Vapi smart endpointing; it will fix it.
thanks @Sven
s
try changing to google or openai ... save and then change back to deepgram ... that worked for me ... kinda ... the agent's responses are still wacky but the language stays changed
@Arvind Agents working reliably again ... thanks for your effort
a
Thank you for reaching out to us. We've received your message and will get back to you as soon as possible.
Hi, checking in to see if this is resolved for you?