Multiple replies speaking over each other.
# support
n
v
Hi @Nick | CFive, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
---
**While I generate an answer, here are some sources that might be helpful**:
* [Get Assistant]()
* [List Assistants]()
* [Create Assistant]()
* [Update Assistant]()
---
### Answer
It sounds like you might be encountering issues with overlapping voice responses due to the LLM being triggered multiple times. One way to mitigate this is to adjust the interruptibility and response-delay settings in your assistant configuration. To handle interruptions more smoothly, you can set the `numWordsToInterruptAssistant` parameter, which controls how many words the user must say before the assistant stops speaking. Additionally, adjusting the `responseDelaySeconds` parameter, which controls the delay before the assistant responds after the user has finished speaking, might help prevent overlapping speech. Here are the relevant settings you might consider adjusting:

- `numWordsToInterruptAssistant` to define how many words trigger an interruption.
- `responseDelaySeconds` to set a delay before the assistant responds, giving the user more time to finish speaking.

These configurations can be adjusted in the assistant's settings through API calls or dashboard configuration. For more detailed guidance on managing these settings, refer to the sections on managing assistant behavior in the Vapi documentation [(1)](), [(2)](), [(3)](), [(4)](). If issues persist, reviewing the specific implementation details and the sequence in which the LLM is triggered might provide further insight into preventing simultaneous voice responses.
n
already tried that
v
Got it. Tagging @User for additional help.
s
Is it happening like all the time?
n
no sir
sus call id 54ade894-5a55-4786-88db-468c3bcff401
the first bit is fine. But then it has like three different audio responses on top of each other
i think its when it switches to Model request started (gpt-4o-2024-05-13, openai)
Actually no, i dont think it is. i removed all fallback models and without switching to gpt-4o-2024-05-13 it still overlapped the speech. 300a611e-0938-4fed-bd79-04f89b9cb7e8
i have been playing around with responseDelaySeconds and llmRequestDelaySeconds as i think this is the issue? For this call 6c7e41d0-d03a-4919-8508-c1ab5d9d70cd I put responseDelaySeconds to 0.05 and llmRequestDelaySeconds to 0.15. It was working fine until the transcriber output 'yes' like twice really fast, meaning the model request started twice, generating two outputs? making the speech overlap?? https://cdn.discordapp.com/attachments/1242805073860624484/1243015824034369596/Screenshot_2024-05-23_at_11.38.45_AM.png?ex=664ff032&is=664e9eb2&hm=d4c677114bb4a0a345b65064f5a868f28abb64e69f25c6af25fa432fee55829c&
s
I will report it to the team.
n
Thanks @Sahil, am i able to get an eta on this? this is really halting my business atm
s
Will update you once it is fixed.
n
Thanks! to add onto this
i removed the "assistantOverrides": { "variableValues": and it seems to be happening way less. I also increased the llm and response delays
still an issue
hope that helps though
i have removed knowledge base
and it happens even less
s
Are you still facing this issue? We pushed a couple of fixes.
n
well i removed the knowledge base and increased the response time
as well
but definitely seen a change
s
So, you are saying this happens when you add a KB?
n
still happening. no kb
25cc329e-7703-4763-bdee-15b494461cf2
45 seconds in
s
@Nick | CFive Can you join the Vapi Office Hour and ask this question? https://discord.gg/KnK58ue6?event=1240493862867304479
n
@Sahil The guy from the office hour hasn't gotten back to me
and it just happened again
365c122c-a298-4fb4-a486-14cad2251c6b
please man
l
yeah we are still getting this issue f5378179-c5d5-4079-9b6f-74add62010c9
v
Got it. Tagging @User for additional help.
s
I am really sorry about it. Let me check with the team.
I think I have discovered the issue. Instead of using "numFastTurns": 1, always use "numFastTurns": 0 or do not include this parameter.
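Following that suggestion, the safer configuration either omits `numFastTurns` entirely or pins it to 0. A minimal fragment of the assistant config (the second field is illustrative context from earlier in this thread, not required):

```json
{
  "numFastTurns": 0,
  "responseDelaySeconds": 0.4
}
```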
I don't see any issue with your call_id. Can you please provide the timestamp as well, and describe in a little more detail what issue you are getting?
l
```json
{
  "role": "bot",
  "time": 1717383392489,
  "source": "",
  "endTime": 1717383410529,
  "message": "I understand. Sometimes it takes a bit. I have under Damn thoughts. Matthew do sometimes you have all about any timing other properties. Right, you? Might do be you considering have any selling other properties. You might be considering selling",
  "secondsFromStart": 35.35
}
```
- two responses got merged into one here
s
This is a different issue could you please create a new support ticket for it?
l
oh, sure.
m
Via API, do you recommend using "numFastTurns": 0 just to avoid the problem?
s
Yeah.