Multi-Language Assistant
# support
g
Hello, I'm trying to build an assistant that can detect and speak multiple languages, but it's completely unreliable. Sometimes it detects the language change and adapts the transcription; other times it still tries to transcribe in English, I suppose, which is why the responses get cut off and aren't written down.

Example unsuccessful Call ID: 65d43a94-2120-465d-ab58-ff970cdb141c
Example successful Call ID: 96d67f72-3a62-4972-b52a-7a68158d2dca
Assistant ID: d7a62991-8b13-4fb2-b9a5-9d0e49843918

I set the transcriber to Deepgram Nova 2 with code switching enabled, and it should understand Italian. What am I missing here? What's the way to solve it? A better prompt?
v
It sounds like you're facing some challenges with language detection and transcription reliability in your assistant using Deepgram Nova 2. Here are a few suggestions to help improve the situation:

1. **Check Language Support**: Ensure that the code-switching feature is correctly configured. Currently, Deepgram's Nova 2 supports code-switching primarily between English and Spanish. If you're trying to switch between English and Italian, this might be causing issues, since Italian isn't supported for code-switching.

2. **Configuration Review**: Double-check your transcriber settings. Make sure that the model is set to `nova-2` and that the language is explicitly set to Italian when you expect to receive Italian input. Here's an example configuration (a sketch applying it via the API follows after this list):
   ```json
   {
     "model": "nova-2",
     "provider": "deepgram",
     "language": "it",
     "codeSwitchingEnabled": true
   }
   ```
3. **Endpointing Adjustments**: Adjust the endpointing settings to ensure that the audio isn't cut off prematurely. You can increase the endpointing time to allow for longer phrases to be captured. For example:
   ```json
   {
     "endpointing": 300
   }
   ```
4. **Keyword Boosting**: If there are specific terms or phrases that are frequently misrecognized, consider using keyword boosting to enhance their recognition. This can help the model adapt better to the vocabulary used in your conversations (see the sketch after this list).

5. **Testing and Iteration**: Use the successful and unsuccessful call IDs to analyze the differences in the audio input and settings. This can provide insights into what might be causing the inconsistencies.

6. **User Input**: Encourage users to articulate their language preferences clearly, especially in multilingual contexts. This can help the assistant better understand when to switch languages.
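For reference, here's a minimal sketch of how the snippets from points 2 and 3 could be applied together to the existing assistant. It assumes Vapi's standard `PATCH https://api.vapi.ai/assistant/{id}` endpoint and that `endpointing` (in milliseconds) sits on the Deepgram transcriber object, so verify both against the current API reference before relying on it.

```typescript
// Sketch only: applies the transcriber settings from points 2 and 3 to an
// existing assistant. Assumes Vapi's REST endpoint PATCH /assistant/{id} and
// that `endpointing` (ms) is a field on the Deepgram transcriber object —
// verify both against the current API reference.
const VAPI_API_KEY = process.env.VAPI_API_KEY!; // your private API key (assumed env var)
const assistantId = "d7a62991-8b13-4fb2-b9a5-9d0e49843918"; // assistant ID from the question

async function updateTranscriber(): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      transcriber: {
        provider: "deepgram",
        model: "nova-2",
        language: "it",              // force Italian when Italian input is expected
        codeSwitchingEnabled: true,  // as suggested above; EN<->IT coverage is limited
        endpointing: 300,            // wait longer before finalizing a segment
      },
    }),
  });
  if (!res.ok) {
    throw new Error(`Update failed: ${res.status} ${await res.text()}`);
  }
  console.log("Transcriber updated:", await res.json());
}

updateTranscriber().catch(console.error);
```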
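And a rough illustration of point 4. Whether Vapi forwards Deepgram's keyword boosting as a `keywords` array on the transcriber (using Deepgram's `term:boost` syntax) is an assumption here, and the terms themselves are made-up examples:

```typescript
// Sketch only: keyword boosting for terms the model keeps misrecognizing.
// Deepgram's keyword boosting uses "term:boost" strings; whether Vapi exposes
// a `keywords` array on the Deepgram transcriber is an assumption, and the
// terms below are made-up examples — check the current transcriber schema.
const transcriberWithKeywords = {
  provider: "deepgram",
  model: "nova-2",
  language: "it",
  codeSwitchingEnabled: true,
  keywords: ["preventivo:2", "fattura:2", "Giorgio:3"], // hypothetical domain terms
};

console.log(JSON.stringify(transcriberWithKeywords, null, 2));
```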
If you continue to experience issues, it might be helpful to review the logs for any patterns or errors related to language recognition. Let me know if you need further assistance or clarification on any of these points! 😊

Sources:
- [Best Practices for Improving Language Detection](https://docs.vapi.ai/quickstart/phone/outbound)
- [Code Switching in Deepgram Nova 2](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1308120552140964034)
- [Troubleshooting Steps for Voice AI Assistant](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1328595920899674152)
s
@giorgior27 Let me investigate these call IDs and get back to you with my findings and conclusions.
g
The problem, as far as I can tell, is that sometimes it isn't able to "switch" the transcriber language.
b
Same thing happening to me with German. Sometimes it picks up the language switch, but most often it does not.
k
For an immediate resolution of this issue, use squads instead, because I believe your code-switching use case is quite advanced for what's supported right now.
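A rough sketch of what that squad-based setup could look like: one assistant per language, with explicit transfer destinations between them instead of relying on in-call code switching. The member/destination shape follows my reading of Vapi's squad API, and the IDs, names, and messages are placeholders, so double-check against the docs.

```typescript
// Sketch only: a squad with one assistant per language and explicit transfer
// destinations, instead of relying on in-call code switching. The member /
// assistantDestinations shape follows my reading of Vapi's squad API, and the
// IDs, names, and messages are placeholders — verify against the docs.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;

async function createLanguageSquad(): Promise<void> {
  const res = await fetch("https://api.vapi.ai/squad", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "multilingual-support",
      members: [
        {
          assistantId: "<english-assistant-id>", // placeholder
          assistantDestinations: [
            {
              type: "assistant",
              assistantName: "italian-assistant", // placeholder
              description: "Transfer when the caller speaks Italian",
              message: "One moment, transferring you to our Italian assistant.",
            },
          ],
        },
        {
          assistantId: "<italian-assistant-id>", // placeholder
          assistantDestinations: [
            {
              type: "assistant",
              assistantName: "english-assistant", // placeholder
              description: "Transfer when the caller switches back to English",
              message: "One moment, transferring you to our English assistant.",
            },
          ],
        },
      ],
    }),
  });
  if (!res.ok) {
    throw new Error(`Squad creation failed: ${res.status} ${await res.text()}`);
  }
  console.log("Squad created:", await res.json());
}

createLanguageSquad().catch(console.error);
```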