Transient assistant doesn't follow the steps of my...
# support
s
Hey, I created an assistant on Vapi's website with a particular script and it was working perfectly. But when I try to create a transient assistant via a POST request, the assistant ends up skipping steps in the script. Any idea as to why this could happen? I'm using Llama 3 70B as the model, but I'm not doing any function calling for this script.
v
Hi @Saien | Exousia AI, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
**While I generate an answer, here are some sources that might be helpful**:
* [Create Phone Call]()
* [Create Assistant]()
* [Get Call]()
* [Web Calling]()
### Answer

It sounds like there may be an issue with how the transient assistant is being set up through your POST request. When creating a transient assistant this way, it's crucial that the assistant's configuration matches exactly what your script needs: the language model, transcriber, voice options, and any specific behaviors must be set up precisely as intended.

Since you mentioned using Llama 3 70B, which does not directly correspond to the models mentioned in the Vapi documentation, this might be contributing to the issue. The Vapi documentation primarily references models from OpenAI (e.g., GPT-4) and other specific providers for transcription and voice rendering [(1)]().

Note also that any slight deviation in configuration between the persistent assistant created on the Vapi website and the transient assistant created via the API could lead to differences in behavior. Ensure all parameters related to the model, transcriber, voice, and script-handling settings are correctly replicated in the body of your POST request [(2)]() [(1)]().

If you've confirmed that all settings are correctly replicated and the issue persists, it may be related to how the transient assistant is processing the script, or to the interaction between the chosen model and the specific demands of your script. Since transient assistants are created on the fly, double-check that any dynamic elements of your script are correctly handled in each request.

If none of this resolves the issue, review the detailed API references and examples in the Vapi documentation to ensure the POST request for creating a transient assistant is properly formulated. Since the problem is quite specific and may involve subtle script and configuration nuances, examining the examples and guidelines closely might offer insights into what is going wrong.
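[Editor's note] The "replicate the dashboard configuration in the request body" advice above can be sketched as a small helper that builds the POST payload. This is a hypothetical sketch, not a verified Vapi call: the field names (`assistant`, `model`, `transcriber`, `firstMessage`) follow the general shape of Vapi's Create Phone Call docs, but the provider/model strings and all IDs below are placeholder assumptions you would replace with the values from your dashboard assistant.

```python
import json


def build_transient_call_body(script_prompt: str, first_message: str) -> dict:
    """Build a POST body that inlines the full assistant config (a
    'transient' assistant) instead of referencing an assistantId.

    All IDs and provider/model names here are placeholder assumptions --
    copy the exact values from your dashboard assistant so the two match.
    """
    return {
        "phoneNumberId": "YOUR_PHONE_NUMBER_ID",      # placeholder
        "customer": {"number": "+15555550123"},       # placeholder
        "assistant": {
            # Transient assistant: the whole config travels in the request.
            "model": {
                "provider": "groq",                   # assumption: Llama 3 70B host
                "model": "llama3-70b-8192",           # assumption: exact model string
                "messages": [
                    # The system prompt must be byte-for-byte the same
                    # script you saved on the dashboard assistant.
                    {"role": "system", "content": script_prompt},
                ],
            },
            "transcriber": {
                "provider": "deepgram",
                "model": "nova-2",                    # assumption: match dashboard value
                "language": "en",
            },
            "voice": {"provider": "11labs", "voiceId": "YOUR_VOICE_ID"},
            "firstMessage": first_message,
        },
    }


body = build_transient_call_body("Follow the script step by step...", "Hello!")
print(json.dumps(body, indent=2))
```

Serializing the body and diffing it against a GET of the dashboard assistant is a quick way to spot the one mismatched field that changes behavior.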
Got it. Tagging @User for additional help.
s
call_id pls
s
23a5d31d-84bd-4127-ba33-e1521b77e243
s
Can you share a web call id as well?
s
Here is the web call ID so you can compare and contrast: 09872e2c-56df-4706-b631-76c984fe78c9
You'll notice that in the first call I sent to you, the AI completely ignored the first part of the script and jumped to the second part.
s
This is really strange, let me check with my internal team.
s
Okay thank you 👍
s
It is mostly due to bad transcription; the quality varies a lot between phone and web calls. I would suggest you use nova-2-phonecall if your bot's language is English.
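[Editor's note] The transcriber advice above can be sketched as a small selector. This is a hypothetical helper, assuming the Vapi `transcriber` block accepts a Deepgram model name; `nova-2-phonecall` and `nova-2-general` are real Deepgram model identifiers, but the channel/language fallback logic here is an illustrative assumption, not Vapi policy.

```python
def transcriber_config(channel: str, language: str = "en") -> dict:
    """Pick a Deepgram transcriber model per call channel.

    nova-2-phonecall is tuned for telephony audio; for web calls or
    non-English bots, fall back to the general model (an assumption
    based on the advice in this thread).
    """
    if channel == "phone" and language == "en":
        model = "nova-2-phonecall"
    else:
        model = "nova-2-general"
    return {"provider": "deepgram", "model": model, "language": language}


# Phone call in English gets the telephony-tuned model...
print(transcriber_config("phone"))
# ...while a web call or a French bot gets the general model.
print(transcriber_config("web"))
print(transcriber_config("phone", "fr"))
```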
s
Okay and if my bot is in French, what would you recommend?
Also, relating to what you said: is the dialogue shown in the transcript the same text that is fed to the LLM to generate the AI response? Because sometimes the AI's responses don't make sense to me given what the transcript captured from my user input.
s
Nova-2 general and 11Labs.
It is fed exactly as we receive it from Deepgram.
s
okay thank you