AI hallucinates and triggers the function 9/10 times
# support
Is this a GPT-3.5 problem? Is 3.5 simply too dumb for functions? It ALWAYS assumes a booking time and immediately triggers the function, even though the time is NOT referred to until the end of the conversation. Does anyone have any idea if this hallucination can be limited on 3.5? GPT-4's delay is too high for it to be viable in this use case.
Hi @Huza, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
**While I generate an answer, here are some sources that might be helpful:**
* [Function Calling]()
* [Delete Assistant]()
* [Vapi iOS SDK]()
* [Get Assistant]()
### Answer
The challenge you're facing with premature function triggering and hallucination is not uncommon in conversational AI, including in scenarios using GPT-3.5. While GPT-3.5 is a powerful model, it is not specifically tuned to avoid presumptions or "hallucinations" during interactions without careful prompt tuning or additional logic layers.

One way to mitigate such issues is more granular control over the conversation flow and logic: customizing the prompts or adjusting the parameters sent to the model to guide its responses more effectively, and incorporating a layer of logic that evaluates whether triggering a function is actually warranted by the conversation context before doing so. Vapi provides capabilities that can be leveraged to adjust the AI's behavior in response to user input, such as modifying the system prompts or using intermediary custom ML models to better manage conversational dynamics [(1)]().

While the documentation provided does not directly address limitations or enhancements specific to GPT-3.5 for reducing hallucinations, Vapi's architecture allows a high degree of customization and optimization, possibly including mechanisms to better manage premature function triggering.

For applications where GPT-4's latency is a concern, it may be worth exploring a hybrid approach: using GPT-3.5 for parts of the conversation that require faster response times, and GPT-4 or other mechanisms where accuracy and context understanding are more critical, within the constraints of Vapi's platform capabilities.

Lastly, experimenting with different configurations and closely monitoring the assistant's performance in various scenarios could provide insights into optimizing its behavior for your specific use case.
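The gating idea above can be sketched in a few lines. This is an illustrative guard layer, not a Vapi or OpenAI API: the names (`guard_tool_call`, `user_gave_time`, `book_appointment`) and the message/tool-call shapes are assumptions for the sketch. Before executing a tool call the model emitted, it checks whether any *user* message actually contains a plausible date/time; if not, the call is likely hallucinated and the assistant re-prompts instead.

```python
import re

# Rough heuristic for "the user actually mentioned a time": clock times
# with am/pm, or common relative/day words. Purely illustrative.
TIME_HINT = re.compile(
    r"\b(\d{1,2}(:\d{2})?\s*(a\.?m\.?|p\.?m\.?)|tomorrow|today|tonight"
    r"|monday|tuesday|wednesday|thursday|friday|saturday|sunday)\b",
    re.IGNORECASE,
)

def user_gave_time(messages):
    """True if any user-authored message contains a plausible date/time."""
    return any(
        m["role"] == "user" and TIME_HINT.search(m["content"])
        for m in messages
    )

def guard_tool_call(messages, tool_call):
    """Execute the booking only if the user grounded the time; else ask."""
    if tool_call["name"] == "book_appointment" and not user_gave_time(messages):
        return {"action": "reprompt",
                "say": "What date and time would you like to book?"}
    return {"action": "execute", "args": tool_call["arguments"]}
```

The point of the design is that the model never gets the final say on whether the function fires; a deterministic check between the model and the function absorbs the hallucination.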
Got it. Tagging @User for additional help.
For me, it's all in the prompt. I've had good success with 3.5 through rigorous prompt tweaking.
It nearly ALWAYS assumes "tomorrow at 3 pm"
Paste in your prompt
I don't know if it is retaining information somehow
I've rigorously tested it; it isn't anywhere
Paste your prompt, I'll help
will dm you!
What’s the function?
I literally hate GPT-3.5, it hallucinates a lot. I would suggest you use GPT-4 and tell me whether it hallucinates or not. But since I think you need GPT-3.5, why don't you fine-tune it and give it a try? You can get very good results with it.
A basic appointment booking function that takes date and time as a single parameter. The hallucination occurs when it auto-assumes a date and time and triggers the function immediately, since it then has all the necessary parameters.
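For reference, a tool definition along these lines (a sketch in OpenAI function-calling format; the thread doesn't show the actual schema, so the name and description here are assumptions) is prone to exactly this failure. A single required free-text parameter means that as soon as the model guesses a value, it considers the call complete:

```python
# Sketch of the kind of schema described above; not the actual config.
# One required free-text "datetime" parameter invites the model to
# invent a value. The description tries to forbid guessing explicitly.
book_appointment = {
    "name": "book_appointment",
    "description": (
        "Book an appointment. Only call this after the user has "
        "explicitly stated a date and time; never guess or assume one."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "datetime": {
                "type": "string",
                "description": "Date and time exactly as the user stated it.",
            },
        },
        "required": ["datetime"],
    },
}
```

Instruction-only fixes like this description help but are not reliable on their own with 3.5, which is why a guard layer outside the model is worth pairing with it.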
Starting to really hate it too, but GPT-4's delay is genuinely very unnatural for a conversation. Will try fine-tuning!
Let me know the results of it.
Continuing on the topic, I have also felt that the calls don't seem stateless. Is there information that gets carried over across multiple call sessions? Is there leaking/memory sharing if the phone call is from the same phone number?
i.e., does the previous call transcript get into the new call session?
this, literally
As I stated, somehow the AI is ALWAYS hallucinating the SAME time on the same day
and it's so specific: "tomorrow at 3:00 PM"
I do think there is SOME sort of memory sharing going on, because there is no reason otherwise why the hallucination would be so consistent in its value. And yes, the prompt has been very carefully probed to ensure there is no possible chance of any influence from within the prompt.
To the point where I even removed the term "tomorrow" and all numbers, time or not, from the script
You've probably already checked this, but in your custom function setup, do you have anything in the description like "Input the time in HH:MM format. E.g.: Tomorrow at 3:00PM = 15:00"?
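This is worth spelling out for anyone else debugging the same thing: a concrete example value in a parameter description tends to leak into the model's output whenever the user hasn't supplied one. Both snippets below are illustrative, not the thread's actual config:

```python
# Anti-pattern (illustrative): the example time in the description is
# exactly the kind of value the model will fall back on when the user
# hasn't stated one.
risky_param = {
    "time": {
        "type": "string",
        "description": "Input the time in HH:MM format. Eg: Tomorrow at 3:00PM = 15:00",
    }
}

# Safer: describe the expected format without any concrete example value.
safe_param = {
    "time": {
        "type": "string",
        "description": "Time in 24-hour HH:MM format, exactly as stated by the user.",
    }
}
```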
have absolutely checked it hahahah
haven't left any place where I could look for anything
the first few hours were spent genuinely thinking I MUST have inserted it SOMEWHERE for it to use such a consistent example
isn't the case unfortunately