how reliable is modelOutputInMessagesEnabled now
# support
u
This says `beta`, and we wonder how reliable it is to use now. We tested a few months back and the conversation messages were quite a mess, so I am following up. We need to use this because we are using a custom LLM, and we need to rely on what the assistant actually said (from the LLM) rather than on what is transcribed from STT (STT is sometimes off, especially for numbers and names). Thanks. https://cdn.discordapp.com/attachments/1306400837160931399/1306400837341548684/Screenshot_2024-11-13_at_6.27.55_PM.png?ex=67368817&is=67353697&hm=2d22f7bd1e09e2e2943b01013dbca91ba62389b31ff434153f03be757165dad2&
s
Hey @8Ds you can use it without worry, `modelOutputInMessagesEnabled` is highly stable now and is being used by other users as well. If you face any issue, do let me know and I will help you get unblocked ASAP.
@8Ds do let me know how it goes.
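If it helps, here is a minimal sketch of turning the flag on via the API (assuming the standard Vapi REST endpoint at api.vapi.ai; the assistant ID and key below are placeholders, so adjust to your setup):
```typescript
// Minimal sketch: enabling modelOutputInMessagesEnabled on an existing assistant.
// Assumes the Vapi REST API at api.vapi.ai; verify the endpoint against current docs.
const VAPI_API_KEY = process.env.VAPI_API_KEY!; // your private key (placeholder)
const ASSISTANT_ID = "your-assistant-id";       // placeholder

async function enableModelOutputInMessages(): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    // With this flag on, assistant turns in the messages context come from the
    // model output rather than from the transcriber.
    body: JSON.stringify({ modelOutputInMessagesEnabled: true }),
  });
  if (!res.ok) throw new Error(`PATCH failed: ${res.status}`);
  console.log("assistant updated:", await res.json());
}

enableModelOutputInMessages().catch(console.error);
```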
u
okay thanks
hey @Shubham Bajaj I tested with this call 25083bc1-5df2-41b0-858c-cd28d6cbb7ae with `modelOutputInMessagesEnabled` enabled; however, it's not working as expected. The name is still transcribed rather than being what we sent to Vapi. Is there anything wrong? thanks
s
Hey @8Ds if you look at the messages context, the model output is being used instead of the transcriber output:
```
"AI: Hello, I'm Jennifer with the lemon law office  of the consumer protection law group. Please know that this call is recorded for quality assurance.  I speaking with Dinju?\nUser: Yes.\nAI: Great, thanks for taking my call today.  Please note there might be a slight delay in my responses as I will be taking notes  to connect you with the right person from our team.  <flush/> We understand you have  20 22 vehicle you told us about in the form you completed.  In order to make sure your vehicle qualifies for a lemon law case, I need to ask you a few  simple questions regarding your vehicle. Do you have a few minutes to chat?\nUser: Yes. I do. Yeah. What's my name? Again?\nAI: Your name is Dingyu Zhong.  Thank you for confirming that you have time to chat.   You told us you have 1 repair, is that true?\nUser: Yes. I do. I do have 1 repair. You are right.\nAI: Thank you for confirming that.   I see you own a twenty twenty two vehicle. Could you please tell me the make and model of your vehicle?\nUser: I have a, uh, Honda Civic.\nAI: Thank you for sharing that.    What is the current mileage on your Honda Civic? I'm still here when you're ready to continue.\n"
```
@8Ds what issue did you find here?
u
I don't understand, where do I get the above? BTW, we are using custom-llm, and I am referring to the payload we receive from Vapi, expecting the assistant message to be what we sent. As an example, we sent `dingyu` as the name, and as you can see, the name returned in the payload was `Dingju`.
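For context, this is roughly how our endpoint sees the payload (a minimal sketch, assuming Vapi forwards OpenAI-style chat completion bodies to the custom-llm URL; the route and response shape here are illustrative, not exact):
```typescript
import express from "express";

// Minimal sketch of a custom-llm endpoint, assuming Vapi forwards
// OpenAI-style chat completion bodies ({ model, messages, ... }).
const app = express();
app.use(express.json());

app.post("/chat/completions", (req, res) => {
  const messages: { role: string; content: string }[] = req.body.messages ?? [];

  // The assistant turns here should match what our model actually produced
  // (e.g. "dingyu"), not the STT transcription (e.g. "Dingju").
  const lastAssistant = [...messages].reverse().find((m) => m.role === "assistant");
  console.log("last assistant turn as seen by Vapi:", lastAssistant?.content);

  // ...generate and return the next completion here...
  res.json({
    choices: [{ message: { role: "assistant", content: "..." } }],
  });
});

app.listen(3000);
```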
s
Hey, you're using a static first message, which is why it went through the transcription service:
```
šŸ”µ 18:46:42:141 Say Request Saying 172... (say: Hello, I'm Jennifer with the lemon law office of the consumer protection law group. Please know that this call is recorded for quality assurance. Am I speaking with dingyu?)
šŸ”µ 18:46:50:209 assistant Final Transcript : I speaking with Dinju?: 0.84781903
```
@8Ds do let me know which other parts you're still having doubts about.
u
Are you saying that for content generated by the LLM, the assistant's final message will be based on the LLM output instead of the transcription, is that right?
s
No, I am saying that for the first message you're using a static/fixed message instead of having the model generate it. Because of this, the first message was appended to the messages context via transcription, since the model was never used to generate that response. Does that help @8Ds?
u
yes, thanks
And my follow-up question: for custom-llm generated messages, will those be based on the transcription or not?
s
If `modelOutputInMessagesEnabled` is set to true, then they won't be based on transcription. For the first message, set the first message mode to `assistant-speaks-first-with-model-generated-message`.
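Putting both together (same assumed REST shape as above; `firstMessageMode` is my reading of the mode field mentioned, so verify against the current Vapi docs):
```typescript
// Minimal sketch combining both settings discussed above. Assumes the same
// Vapi REST API shape as the earlier sketch; "firstMessageMode" is an
// assumed field name -- check the current Vapi docs before relying on it.
async function applyFix(assistantId: string, apiKey: string): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // Assistant turns in the messages context come from model output:
      modelOutputInMessagesEnabled: true,
      // First message is generated by the model instead of a static string,
      // so it is no longer appended via transcription:
      firstMessageMode: "assistant-speaks-first-with-model-generated-message",
    }),
  });
  if (!res.ok) throw new Error(`PATCH failed: ${res.status}`);
}
```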
@8Ds do let me know if I can close this ticket.
u
will check