Big delay between "Transcriber output" and "Model request started"
# support
m
For some reason, I got a ~1.5-second delay between "Transcriber output" and "Model request started". Typically it takes around 50-100 ms, so this is very strange.

VAPI Assistant: https://dashboard.vapi.ai/assistants/ee6c7605-b8a2-480c-9956-7aad6bd82b70
Call: https://dashboard.vapi.ai/calls/57bae01a-852f-484b-8807-12e68e1b1478

"Turn Latency" shows the following:

[INFO] Turn Latency: 2510ms (Endpointing 1503ms, Model 220ms, Voice: 785ms)

We can see that "Endpointing" took 1.5 s. I'd like to ask what exactly "Endpointing" means in these stats. Could you also split these 3 stats into 6 ("internal" and "external")? That would make it much easier to tell whether a problem is on our side or not.

https://cdn.discordapp.com/attachments/1253731185084071998/1253731185423552652/Transcriber_-_Model_request_latency.png?ex=6676eba7&is=66759a27&hm=6c2604dbde2d1f3f0a428cddb357ca48646e7476692ed33c7f7fd1be3141142f&
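As a quick sanity check on that log line, the three components should roughly sum to the reported total. A minimal Python sketch that parses the line quoted above (the regex is illustrative, not anything Vapi ships):

```python
import re

log = "[INFO] Turn Latency: 2510ms (Endpointing 1503ms, Model 220ms, Voice: 785ms)"

# Pull out the total and each component from the log line.
total_ms = int(re.search(r"Turn Latency: (\d+)ms", log).group(1))
components = {
    name: int(ms)
    for name, ms in re.findall(r"(Endpointing|Model|Voice):? (\d+)ms", log)
}

print(components)               # {'Endpointing': 1503, 'Model': 220, 'Voice': 785}
print(sum(components.values())) # 2508 -- within a couple of ms of the 2510 total
```

So endpointing alone accounts for roughly 60% of the turn latency in this call, which is why the breakdown question matters.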
s
Can you try once again?
n
It's 1500 ms because the transcriber didn't output punctuation. We thought the user might be mid-sentence and decided to hold.
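The hold behavior described here could be sketched as a punctuation-based endpointing heuristic: wait only briefly when the transcript ends in terminal punctuation, and hold much longer when it doesn't. This is a hypothetical illustration of the idea, not Vapi's actual implementation; the function name and thresholds are assumptions:

```python
# Illustrative punctuation-based endpointing heuristic (names and
# thresholds are assumptions, not Vapi's real code).
TERMINAL_PUNCTUATION = {".", "!", "?"}
SHORT_HOLD_MS = 100   # transcript looks like a finished sentence
LONG_HOLD_MS = 1500   # no punctuation: user may be mid-sentence, so hold

def endpoint_hold_ms(transcript: str) -> int:
    """How long to wait before treating the user's turn as finished."""
    text = transcript.rstrip()
    if text and text[-1] in TERMINAL_PUNCTUATION:
        return SHORT_HOLD_MS
    return LONG_HOLD_MS

print(endpoint_hold_ms("How are you today?"))  # 100
print(endpoint_hold_ms("So I was thinking"))   # 1500
```

Under a heuristic like this, a transcriber that omits the final punctuation would trigger the long hold, matching the 1500 ms "Endpointing" figure in the call above.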
m
Basically, I mute my microphone while testing, so there were no noises or sounds. It's quite strange that it took so long after I finished my phrase. Is it expected that this happens? Does it happen frequently? If so, there could be significant delays that affect the user experience. Anyway, I'll write back if I encounter the same issue.