weird output from the ai
# support
t
Using GPT-3.5, the AI says my instructions out loud... because they come right after the interaction... How do I avoid that, even though they're between ()? Thanks
v
Hi @TheSnowguru, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [List Assistants]()
* [Delete Assistant]()
* [Web]()
* [Outbound Calling]()
-----------------------------
### Answer

To mitigate unexpected outputs or "hallucinations" from GPT-3.5, especially when it seems to be prematurely triggering functions or misinterpreting instructions, consider refining your prompt. Discussions on a Discord forum suggest that careful prompt engineering can significantly reduce such occurrences. Specifically, adjusting the instructions or the context provided to the model can guide it toward the desired behavior without prematurely assuming or acting on instructions [(1)]().

Additionally, if GPT-3.5's latency or response style is not a critical concern for your use case, experimenting with GPT-4 may reduce undesired early function triggers, as it handles context and instructions more reliably. If you are sticking with GPT-3.5, fine-tuning the model on examples that closely match your interaction scenarios could also yield better results. Remember that the way instructions are structured, and the specificity of the context provided to the model, play crucial roles in how effectively it interprets and acts on those instructions.
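One way to apply this is to move the parenthetical instructions out of the conversational text and into the system message, paired with an explicit rule that such text must never be spoken. Below is a minimal sketch using the OpenAI Python SDK (v1); the prompt wording, model choice, and temperature here are illustrative assumptions, not Vapi's actual configuration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Keep private instructions in the system message instead of inline in the
# dialogue, and state the rule up front so the model doesn't read them aloud.
system_prompt = (
    "You are a phone assistant. Text wrapped in parentheses in this prompt "
    "is an internal instruction for you; NEVER speak it to the caller.\n"
    "(After collecting the caller's name, ask for their order number.)"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi, I need help with my order."},
    ],
    temperature=0.2,  # a lower temperature tends to reduce stray output
)
print(response.choices[0].message.content)
```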
Got it. Tagging @User for additional help.
t
Maybe if I put a warning at the top of the prompt, like: Warning (do not say whatever is in the parentheses during the call, those are just instructions for you)?
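That's roughly the right idea. As a sketch, a warning header at the top of the prompt might look like this (the exact wording is hypothetical, not a tested Vapi prompt):

```
WARNING: Text in parentheses anywhere below is an internal instruction
for you, the assistant. Never read parenthetical text aloud to the caller.

Greet the caller and ask how you can help.
(If they mention a refund, collect their order number first.)
```

Placing the rule at the very top, before any instructions it governs, tends to work better than embedding it mid-prompt, though with GPT-3.5 it may still leak occasionally.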
s
GPT-3.5 hallucinates a lot; please use SOTA models.