How to increase input volume
# support
I have a problem: the client's speech is heard very poorly, and the AI either does not recognize it correctly or does not hear it at all. This analysis is based on 500+ calls. So, first question: how can I increase the volume of the sound that the AI receives from the client? Second question: how can I improve the sensitivity of capturing the client's speech?
@San Sanych I'm sorry to hear that you're experiencing issues with audio quality and speech recognition. To address your concerns, we can look into adjusting the audio input settings so the AI receives a clearer signal from the client's speech.

To increase the volume of the sound the AI receives, check the input gain settings on the device that captures the client's voice. If the device allows software-level adjustments, you can raise the input volume or gain so the AI receives a louder audio signal.

Improving the sensitivity of capturing the client's speech could involve a few steps. Consider noise-cancellation technologies or high-quality microphones that are better at isolating the speaker's voice from background noise. Additionally, ensuring that the client speaks at an appropriate distance from the microphone can also help.

If you have access to specific settings within the VAPI system for adjusting audio input levels or sensitivity, I would recommend exploring those options. However, if you're looking for a solution within the VAPI system and need further assistance with technical settings or configurations, please ask support for more info. Would you like me to assist you with anything else regarding this issue?
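As a minimal illustration of the software-level gain adjustment mentioned above (this is a generic sketch, not a Vapi API; the function name and PCM format are assumptions), boosting 16-bit PCM audio looks like multiplying each sample by a gain factor and clipping to the valid range:

```python
import struct

def apply_gain(pcm_bytes: bytes, gain: float) -> bytes:
    """Multiply each 16-bit little-endian PCM sample by `gain`, clipping to [-32768, 32767]."""
    n = len(pcm_bytes) // 2
    samples = struct.unpack(f"<{n}h", pcm_bytes)
    boosted = (max(-32768, min(32767, int(s * gain))) for s in samples)
    return struct.pack(f"<{n}h", *boosted)

# Example: a quiet signal boosted 4x
quiet = struct.pack("<4h", 1000, -1000, 500, -500)
loud = apply_gain(quiet, 4.0)
print(struct.unpack("<4h", loud))  # (4000, -4000, 2000, -2000)
```

Note that gain alone cannot recover speech that is clipped or buried in noise at capture time, which is why microphone quality and placement matter as well.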
You can continue this conversation by mentioning me in the message.
Hey @San Sanych, can you share a few call IDs where the transcription is really poor, along with timestamps and what you were expecting instead?
80660d77-50eb-4306-84b2-3bb09cd53e1c
06:50:58:198 [LOG] Voice cached: Я пытаюсь связаться с Коваленко Татьяна Георгиевна ("I am trying to reach Kovalenko Tatyana Georgievna")
06:50:58:288 [CHECKPOINT] Assistant speech started
06:51:01:393 [CHECKPOINT] Assistant speech stopped
06:51:02:803 [CHECKPOINT] User speech possibly starting
06:51:02:863 [CHECKPOINT] User speech started
06:51:02:964 [CHECKPOINT] User speech possibly stopping
06:51:03:483 [CHECKPOINT] User speech stopped
06:51:07:345 [LOG] Voice input: Давайте продолжим, если у вас нет дополнительных вопросов. ("Let's continue if you have no further questions.")
That voice input is the idle message. I can clearly hear the voice and the phrases the client said, and I think the transcription service can hear them too, because when the client repeated the same words at the same volume, the AI heard and processed them. @Shubham Bajaj
And what about this? Why did the AI fail to recognize the word "Москва" ("Moscow") two times? It got it once, but then for no reason it ignored the word and didn't use it.
Hey @San Sanych, this looks like it's related to the transcriber. If it is, shall I close this ticket?
Yes, I guess. I would even say the problem is that your system ignores words with a transcription confidence below 50%, so the AI never receives them. We, as AI developers, need the ability to configure these parameters so that we have full control over the process; this is very, very important. As we have seen, it transcribes the words correctly but does not pass them on, and the AI simply decides to ask questions as if there were silence, although this should work differently.
@Shubham Bajaj
Noted, I'll ask the team what can be done about this.
Thanks, I'll wait for a reply; it's very important for me.