Transcriber + Model output not working in real calls
# support
n
I've been testing my assistant with the web call functionality and everything works correctly, as you can see in this call
d24c445e-60a3-4ea3-8a49-64af21426a8b
. However, when I call the same assistant using a real phone, the transcriber output and the model output seem completely garbled, and I'm not sure why. Everything should be the same, but it's not. Here's an example call ID for that:
ed53ce6f-413e-42dc-b9b3-d28ffed2ca3f
. I expect it to work the same as the web call, but I'm just getting random output like this: https://cdn.discordapp.com/attachments/1351305768657489993/1351305768955150427/Screenshot_2025-03-17_at_5.26.32_PM.png?ex=67d9e511&is=67d89391&hm=342bf0a2f50545b0b9ef904e175f017670a1488ba3d629ea632cd1694eae1228&
u
@neelmehta247 I have a similar issue. I wonder if they're related. Are your audio transcripts disjointed?
n
Yeah, the transcript doesn't line up with what the audio is saying.
u
s
@neelmehta247 @~Stu~ CoreAspect.Agency looking into this.
Can you try another call with the Deepgram transcriber instead, setting the Deepgram endpointing to 300ms? Also, ensure that the `Start speaking` plan has a `wait seconds` value set to 0.6, with `smartEndpointing` set to either `true` or `LiveKit`, depending on your choice. Meanwhile, I'll look into this Google transcriber issue and whether others are seeing the same thing, or if it's on us.
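The suggested settings can be sketched as an assistant-configuration patch. This is only a sketch: the exact field names (`transcriber.endpointing`, `startSpeakingPlan.waitSeconds`, `smartEndpointingEnabled`) are assumptions modeled on Vapi-style assistant configs, not confirmed by this thread, so check them against the API reference before use.

```python
# Sketch of the suggested configuration as a plain dict.
# Field names below are assumptions based on Vapi-style assistant
# configs; verify against the official API reference before using.
def suggested_assistant_patch(use_livekit: bool = False) -> dict:
    """Build the Deepgram transcriber / start-speaking settings suggested above."""
    return {
        "transcriber": {
            "provider": "deepgram",  # switch from Google to Deepgram
            "endpointing": 300,      # Deepgram endpointing, in milliseconds
        },
        "startSpeakingPlan": {
            "waitSeconds": 0.6,      # wait 0.6s before the assistant speaks
            # hypothetical field: boolean smart endpointing, or the
            # LiveKit-based endpointing variant, per your choice
            "smartEndpointingEnabled": "livekit" if use_livekit else True,
        },
    }

patch = suggested_assistant_patch()
print(patch["transcriber"]["endpointing"])  # → 300
```

You would then send this dict as the body of a PATCH request to the assistant-update endpoint.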
n
That seems to work. However, we can't use Deepgram, since I need multilingual support, which only Google seems to offer right now.
Also, even the AI transcript seemed to be incorrect previously.
@Shubham Bajaj any updates?
s
@neelmehta247 @~Stu~ CoreAspect.Agency Hey guys, the issue is on our end, and the team is working on it. It will be fixed by the end of the day. Until then, to use `smartEndpointingPlan`, please switch to the Daily channel for your organization.