GHP
03/13/2025, 2:11 PM

Vapi Support Bot
03/13/2025, 2:13 PM
It sounds like you're having trouble with the `modelOutputInMessagesEnabled` feature. This is actually a known issue that others have experienced as well.
When you enable `modelOutputInMessagesEnabled`, the transcript should ideally show the exact output from the LLM rather than what's transcribed from the TTS audio. However, as you've noticed, sometimes the transcript still reflects the transcriber's output instead of the model's output.
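If you want to confirm what your assistant currently has configured, a quick read-back like the one below can help. This is a minimal sketch: `your-assistant-id` and `your-private-key` are placeholders, and it assumes you have `jq` installed for readability.

```bash
# Fetch the assistant and print the current value of the flag.
# Placeholders: your-assistant-id, your-private-key.
curl -s https://api.vapi.ai/assistant/your-assistant-id \
  -H "Authorization: Bearer your-private-key" \
  | jq '.modelOutputInMessagesEnabled'
```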
To address this issue, you might need to:
1. Verify your assistant configuration is correctly set up:

```bash
curl -X PATCH https://api.vapi.ai/assistant/your-assistant-id \
  -H "Authorization: Bearer your-private-key" \
  -H "Content-Type: application/json" \
  -d '{
    "modelOutputInMessagesEnabled": true
  }'
```
2. Consider configuring a custom transcriber with settings that might help with accuracy, following the shape in the custom transcriber docs linked below (a verification sketch follows this list):

```json
{
  "transcriber": {
    "provider": "custom-transcriber",
    "server": {
      "url": "https://your-custom-transcriber-url"
    }
  }
}
```
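Once either change is in place, one way to verify it is to pull a finished call and compare the stored bot messages against what you heard. Treat this as a sketch: the `.artifact.messages` path and the `"bot"` role are assumptions about the call payload, so inspect your actual response and adjust the `jq` filter if needed.

```bash
# Fetch a completed call and print the assistant-side messages.
# If modelOutputInMessagesEnabled is taking effect, these should match
# the LLM's raw output rather than the TTS re-transcription.
# NOTE: ".artifact.messages" and role "bot" are assumptions; adjust
# the jq path to match the response you actually get back.
curl -s https://api.vapi.ai/call/your-call-id \
  -H "Authorization: Bearer your-private-key" \
  | jq '.artifact.messages[] | select(.role == "bot") | .message'
```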
The discrepancy you're seeing (between "Voice input: Oui bien sûr..." and "Oui bien sûr...") is exactly the type of issue that has been reported by other users when the system isn't properly using the LLM output for the transcript.
Let me know if you need more specific guidance on implementing these solutions!
Sources:
- [Discord Thread about modelOutputInMessagesEnabled issues](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1343304419210694656)
- [Custom Transcriber Documentation](https://docs.vapi.ai/documentation/advanced-concepts/customization/custom-transcriber)
- [Assistant API Reference](https://docs.vapi.ai/api-reference/assistants/create-assistant)

TheNewPenguin
03/15/2025, 11:55 AM

Kings Big
03/15/2025, 6:55 PM