### Answer
It sounds like there may be an issue with how the transient assistant is configured in your POST request. When creating a transient assistant via the API, the configuration must match exactly what your script needs: the language model, transcriber, voice options, and any script-specific behaviors. You mentioned using "llama 3 70 b" as the model, which does not directly correspond to any model identifier in the Vapi documentation, so this alone could be contributing to the issue. The documentation primarily references models from OpenAI (e.g., GPT-4) and other specific providers for transcription and voice rendering [(1)].
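As a first check, it can help to see the transient assistant spelled out as an explicit request body. The sketch below is illustrative, not taken from your setup: it assumes Vapi's `POST /call` endpoint accepts an inline `assistant` object for an outbound call, and that your Llama 3 70B model is served through a provider Vapi supports (the `groq` provider and `llama3-70b-8192` identifier are assumptions). Verify every field name and value against the current API reference.

```python
import requests

VAPI_API_KEY = "YOUR_VAPI_API_KEY"  # placeholder; use your private API key

# Hypothetical transient-assistant payload. Field names follow the patterns in
# Vapi's docs, but double-check each one against the current API reference.
payload = {
    "phoneNumberId": "YOUR_PHONE_NUMBER_ID",  # placeholder
    "customer": {"number": "+15555550123"},   # placeholder
    "assistant": {  # inline (transient) assistant instead of an assistantId
        "model": {
            "provider": "groq",          # assumption: Llama 3 70B via Groq
            "model": "llama3-70b-8192",  # assumption: exact ID may differ
            "messages": [
                {"role": "system", "content": "Your script's system prompt here."}
            ],
        },
        "transcriber": {"provider": "deepgram", "model": "nova-2"},   # example values
        "voice": {"provider": "11labs", "voiceId": "YOUR_VOICE_ID"},  # example values
    },
}

response = requests.post(
    "https://api.vapi.ai/call",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```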
It's also worth noting that even a slight deviation between the configuration of the persistent assistant created on the Vapi website and the transient assistant created via the API can lead to differences in behavior. Make sure every parameter related to the model, transcriber, voice, and script-handling settings is replicated exactly in the POST request body when creating the transient assistant [(2)] [(1)].
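One low-effort way to rule out configuration drift, assuming your API key can read the dashboard-created assistant, is to fetch its stored definition and reuse the relevant fields verbatim as the transient payload. The `GET /assistant/{id}` endpoint and field names below are assumptions based on Vapi's published API shape; confirm them against the current reference.

```python
import requests

VAPI_API_KEY = "YOUR_VAPI_API_KEY"             # placeholder
PERSISTENT_ASSISTANT_ID = "YOUR_ASSISTANT_ID"  # placeholder

headers = {"Authorization": f"Bearer {VAPI_API_KEY}"}

# Fetch the dashboard-created (persistent) assistant's stored configuration.
resp = requests.get(
    f"https://api.vapi.ai/assistant/{PERSISTENT_ASSISTANT_ID}",
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
persistent = resp.json()

# Reuse the same model/transcriber/voice blocks in the transient assistant, so
# the only differences left to debug are the ones you introduce deliberately.
transient_assistant = {
    key: persistent[key]
    for key in ("model", "transcriber", "voice", "firstMessage")
    if key in persistent
}
print(transient_assistant)
```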
If you've confirmed that all settings are replicated correctly and the issue persists, it may come down to how the transient assistant processes the script, or to the interaction between the chosen model and your script's specific demands. Since transient assistants are created on the fly, double-check that any dynamic elements of your script are substituted correctly in each request, as in the sketch below.
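For instance, if the system prompt contains per-call values (caller name, order number, etc.), they have to be interpolated before each POST, since a transient assistant has no stored state to fall back on. A minimal sketch, assuming plain string substitution on your side rather than any Vapi templating feature; all names and values here are hypothetical:

```python
# Hypothetical per-call values; in practice these come from your own data source.
call_context = {"customer_name": "Alex", "order_id": "A-1042"}

SCRIPT_TEMPLATE = (
    "You are a support agent for Acme Co. "
    "Greet the caller by name: {customer_name}. "
    "Their open order is {order_id}. Follow the script exactly."
)

# Build the transient assistant's system message fresh for every request,
# so each call gets the correct dynamic values baked into the prompt.
system_prompt = SCRIPT_TEMPLATE.format(**call_context)

assistant_model = {
    "provider": "groq",          # assumption, as in the earlier sketch
    "model": "llama3-70b-8192",  # assumption: verify the exact identifier
    "messages": [{"role": "system", "content": system_prompt}],
}
```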
If none of this resolves the issue, it might be beneficial to review the detailed API references and examples in the Vapi documentation.