modelOutputInMessagesEnabled
# support
g
For all my calls and assistants (created via API), even when I enable modelOutputInMessagesEnabled, the transcript does not use the LLM output but a transcription of what the TTS says. Example: call 1a670edf-1e66-476e-97df-a0940d32961f. At the end of the conversation, the LLM output was "Voice input: Oui bien sûr, c'est le zéro six cinquante deux quarante-huit trente-neuf vingt et un." but the transcript was "Oui bien sûr, c'est le zéro 6 cinquante-deux quarante-huit trente-neuf vingt et un.", which shows the transcript is not the LLM output but a speech-to-text result.
v
Hi there! 👋 I understand your frustration with the `modelOutputInMessagesEnabled` feature. This is actually a known issue that others have experienced as well. When you enable `modelOutputInMessagesEnabled`, the transcript should ideally show the exact output from the LLM rather than what's transcribed from the TTS audio. However, as you've noticed, sometimes the transcript still reflects the transcriber's output instead of the model's output. To address this issue, you might need to:

1. Verify your assistant configuration is correctly set up (and confirm the flag stuck, as shown in the check after this list):
```bash
curl -X PATCH https://api.vapi.ai/assistant/your-assistant-id \
  -H "Authorization: Bearer your-private-key" \
  -H "Content-Type: application/json" \
  -d '{
    "modelOutputInMessagesEnabled": true
  }'
```
2. Consider configuring a custom transcriber with specific settings that might help with accuracy:
```json
{
  "transcriber": {
    "provider": "custom-llm",
    "model": "your-llm-model-here",
    "url": "https://your-custom-llm-url"
  }
}
```
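As a quick sanity check after step 1, you can read the assistant back and confirm the flag was actually persisted. This is a minimal sketch, assuming the standard Retrieve Assistant endpoint (GET /assistant/{id}) returns the assistant JSON with the flag at the top level, and that `jq` is installed; substitute your own IDs and private key:

```bash
# Read the assistant back and print the flag that was just patched.
# Assumes the Retrieve Assistant endpoint returns the full assistant JSON
# and that `jq` is available for filtering.
curl -s https://api.vapi.ai/assistant/your-assistant-id \
  -H "Authorization: Bearer your-private-key" \
| jq '.modelOutputInMessagesEnabled'
# Expected output when the PATCH above took effect:
# true
```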
The discrepancy you're seeing (between "Voice input: Oui bien sûr..." and "Oui bien sûr...") is exactly the type of issue that has been reported by other users when the system isn't properly using the LLM output for the transcript (see the sketch after the sources below for a way to inspect this on a specific call). Let me know if you need more specific guidance on implementing these solutions!

Sources:
- [Discord Thread about modelOutputInMessagesEnabled issues](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1343304419210694656)
- [Custom Transcriber Documentation](https://docs.vapi.ai/documentation/advanced-concepts/customization/custom-transcriber)
- [Assistant API Reference](https://docs.vapi.ai/api-reference/assistants/create-assistant)
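To see the discrepancy directly for a given call, you can pull the call record and compare the stored bot turns against the overall transcript. This is a minimal sketch, assuming the Get Call endpoint (GET /call/{id}) and that the response exposes the conversation messages and transcript under an artifact field; the exact field names and roles are assumptions, so adjust the `jq` filters to match what your response actually contains:

```bash
# Fetch a finished call and compare what was stored for the bot turns
# against the overall transcript. The call ID below is the one from the
# original report; swap in your own call ID and private key.
CALL_ID="1a670edf-1e66-476e-97df-a0940d32961f"

curl -s "https://api.vapi.ai/call/${CALL_ID}" \
  -H "Authorization: Bearer your-private-key" \
  > call.json

# Bot/assistant turns as recorded in the call artifact (assumed field names).
jq '.artifact.messages[]? | select(.role == "bot") | .message' call.json

# Full transcript string stored for the call (assumed field name).
jq '.artifact.transcript' call.json
```

If the bot turns match the LLM output ("...zéro six...") but the transcript still shows the speech-to-text version ("...zéro 6..."), that confirms the flag is not being applied to the transcript for this call.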
t
Hey there. Yes, my assistant is correctly configured. I am currently using Whisper. Happy to share an example assistant so you can check its configuration yourself and tell me if something is wrong.
Here is an assistant ID: be332009-3de9-4d3e-a90e-180d783c2e93
k
@GHP, for me to continue debugging, can you keep your assistant as-is and try one more call?