Avoiding Hallucinations
# support
a
What are some proven ways to avoid hallucinations with Vapi assistants. Eg saying parts of the prompt out loud, skipping some steps of the task, not following the steps of the task accurately as stated in the prompt. What techniques are used in production environments to avoid this kind of hallucinations because from what I’ve built and tested so far, deploying what I have to production is just a disaster waiting to happen. So any help would be greatly appreciated
c
Better prompting usually. Trying different models etc.
a
Alright gotcha, by better prompting you mean better formatted prompts or wording and do you please have an example any chance
a
There's alot of stuff that could cause Hallucination. Like Chowderr said it's Mostly your Prompting and your model choice. But It could also be stuff like Knowledge Bases and Assistant Configuration
What's your current set up ?
a
I used the UI to make the assistant with default settings the only thing I changed was the prompt, first message, the model to gpt4o-mini and temperature to 0.1. For the prompt I used the prompting guide in the vapi docs
I was thinking of leveraging the custom LLM feature to create my own backend logic, for creating a more structured pathway
s
can you share the call id with epected vs actual response?
a
i have been thinkering with the prompts and i have found better success with the flow of the assistant
if i come across any unexpected results i will let you know
8 Views