Adding extra voices to my SDK
# support
RayRay020:
Hello, I've added my ElevenLabs API key to Vapi and successfully configured the `burt` voice, which works as expected for the support AI. However, when I try to add other voices from the ElevenLabs library (e.g., `adam`, `dorothy`, `chris`, `michael`), they do not work. Here is the relevant part of my configuration:

```ts
public readonly voices = {
  'burt': 'burt',       // works
  'adam': 'adam',       // doesn't work
  'dorothy': 'dorothy', // doesn't work
  'chris': 'chris',     // doesn't work
  'michael': 'michael'  // doesn't work
} as const;
```

```ts
return {
  transcriber: { provider: 'deepgram', model: 'nova-2', language: 'en-US' },
  model: { provider: 'openai', model: 'gpt-3.5-turbo', messages },
  voice: { provider: '11labs', voiceId: selectedVoice },
  name: this.config.value.assistantName
};
```

I've tried adding more voice names from the Vapi voice library, but they don't seem to work. The `burt` voice was already assigned to the support AI, and it functions correctly. How can I add and configure additional voices? Could you clarify:

- How do I add and configure more voices from the ElevenLabs library?
- Are there specific steps, naming conventions, or prerequisites for the `voiceId` values?
- Is there documentation or an example that outlines how to map ElevenLabs voices to the `voiceId` correctly?
Vapi bot:
@RayRay020 Hello! It's great to hear that you've successfully configured the `burt` voice with Vapi. To add and configure additional voices from the ElevenLabs library, you'll need to ensure that each `voiceId` you're using matches the voice names provided by ElevenLabs exactly. Here are the steps you should follow:

1. **Sync your cloned voices**: If you haven't already, sync your cloned voices with Vapi by selecting 11labs as your voice provider and clicking "Sync with 11labs."
2. **Search and use your cloned voice**: After syncing, search for the additional voices you want to use within the voice library. Ensure that the `voiceId` values in your configuration match the names of the voices exactly as they appear in the voice library.
3. **Naming conventions and prerequisites**: The `voiceId` values should correspond to the exact names of the voices as provided by ElevenLabs. If there are any discrepancies in the naming, the voices will not work.
4. **Documentation and examples**: Documentation from ElevenLabs or Vapi should outline the correct mapping of `voiceId` values. If you cannot find this information, you may need to reach out to ElevenLabs or Vapi support for detailed documentation or examples.

If you've followed these steps and the additional voices are still not working, there might be an issue with the integration or a step that has been missed. In that case, please ask support for more info, as they can provide specific guidance tailored to your setup. Would you like me to assist you with anything else at this moment?
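Putting the steps above together with the configuration from the question, the assistant payload looks roughly like this. This is a minimal sketch: the transcriber and model settings are copied from the thread, while `messages` and the assistant name are stand-ins for the poster's own values.

```typescript
// Sketch of the assistant config from this thread. `selectedVoice` must be a
// value the voice provider accepts; "burt" is the one that worked here.
const selectedVoice = "burt";

const assistantConfig = {
  transcriber: { provider: "deepgram", model: "nova-2", language: "en-US" },
  model: { provider: "openai", model: "gpt-3.5-turbo", messages: [] as unknown[] },
  // Per the steps above, voiceId must match what the voice library expects.
  voice: { provider: "11labs", voiceId: selectedVoice },
  name: "support AI", // stand-in for this.config.value.assistantName
};
```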
Got it. Tagging @User for additional help.
RayRay020:
https://dashboard.vapi.ai/library/voice — so, for example, I am using `Mady` exactly as it appears in the library, but my web SDK still doesn't play the voice; it only logs the transcript.

```ts
public readonly voices = {
  'burt': 'burt',       // works
  'Mady': 'Mady',       // doesn't work
  'dorothy': 'dorothy', // doesn't work
  'chris': 'chris',     // doesn't work
  'michael': 'michael'  // doesn't work
} as const;
```
Never mind, I had to use the voice ID from ElevenLabs directly. It was hard to find.
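For anyone hitting the same wall: the fix was to map each friendly name to the ElevenLabs voice ID (copied from the voice's page in the ElevenLabs dashboard), not the display name. A sketch of the corrected map follows; the ID strings below are placeholders, not real voice IDs.

```typescript
// Friendly label -> ElevenLabs voice ID. The IDs here are PLACEHOLDERS;
// copy the real "Voice ID" for each voice from the ElevenLabs dashboard.
const voices = {
  burt: "PLACEHOLDER_BURT_VOICE_ID",
  Mady: "PLACEHOLDER_MADY_VOICE_ID",
} as const;

type VoiceName = keyof typeof voices;

// Build the voice section of the assistant config from a friendly name.
function buildVoice(name: VoiceName) {
  return { provider: "11labs", voiceId: voices[name] };
}
```

With this in place, `buildVoice('Mady')` passes the provider's actual voice ID to Vapi instead of the library display name.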
solved