assistant calling functions too early
# support
m
This was one of the things that worked well, but now it doesn't work at all. I haven't changed my providers or my prompt, but the assistant simply calls the functions when it's not supposed to. In the prompt it's strictly guided on how and when to call every function, but at the moment it's calling them too early. Like instead of collecting all the needed information, it asks one question and runs the function straight away. Very frustrating. call id: 5ff6608c-d654-45c6-84c0-f7a052ee33a4
v
Hi @Marco, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Function Calling]()
* [Web Calling]()
* [Prompting Guide]()
* [Tools Calling]()
-----------------------------
### Answer
It sounds like there may be an issue with how the assistant's functions are being triggered, unrelated to changes in your prompt or provider configuration. This could be due to an internal error or a misinterpretation of the assistant's logic concerning function calls.

Although I don't have direct access to troubleshoot specific call IDs or backend logs, I recommend reviewing the assistant configuration, particularly how the functions are scripted to be triggered within the conversation flow. Ensure there are no conditional logic errors that might cause a function to trigger prematurely.

If the issue persists without any clear misconfiguration, it may be worth contacting Vapi's support directly for a deeper investigation into the specific call instance and configuration, as the problem might relate to backend processes not covered in the accessible documentation. For more detailed guidance on managing function calls, you might find it helpful to revisit the official documentation on function calling [(1)]().
Got it. Tagging @User for additional help.
s
I know this is not the solution but can you try GPT-4o for now and test it out once?
m
Works fine with 4o, and actually the other problem that I had with the chopping is also fixed if I use 4o. Thanks for the idea. Now, I can't use 4o all the time 'cause it's a lot more expensive, but at least we know that the problem is with the 3.5, right?
4o is so much better, god damn. If the problem with the 3.5 is not resolved, I guess I can start using the 4o. It's so much more accurate for 2 cents. What would you do @Sahil?
s
Well, personally, if it is just 2 cents, then I would go for it.
m
You can also try fine-tuning 3.5 on your functions and it'll be better at calling them than 4o.
m
Hey Mason. Thanks for the insight. Can you explain a bit further?
hey @Mason | Building KOI 😆 🤔
m
Yooo sorry I missed that dawg
I got you one min driving
m
no worries, thanks man
m
Alright I’m settled lol, what’s your question?
m
Can you give a lil more detail on this?
Here are the docs; you'll see a dropdown called Function calling
It's normal JSONL-formatted training data, but you can also include functions and how they're called.
Once you make the training data in something like VS Code (easier than it sounds), you can upload it to the OpenAI fine-tuning playground. It'll cost like $0.40 to train, and then it'll be an expert on your data.
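For anyone reading along, here is a rough sketch of what one of those JSONL training lines and the upload step could look like in Python. The `book_appointment` function, the sample dialogue, and the file names are invented for illustration, and the exact fine-tuning schema for function calls may differ from this, so check the current OpenAI fine-tuning docs before relying on it:

```python
# Sketch only: the function name, dialogue, and file paths below are made-up
# examples; verify the current schema in the OpenAI fine-tuning docs.
import json
from openai import OpenAI  # openai v1.x SDK; expects OPENAI_API_KEY in the environment

# One training example: a conversation where the assistant only calls the
# function AFTER it has collected everything it needs.
example = {
    "messages": [
        {"role": "system", "content": "Collect the date AND time before booking."},
        {"role": "user", "content": "I'd like to book an appointment."},
        {"role": "assistant", "content": "Sure! What date and time work for you?"},
        {"role": "user", "content": "Friday at 3pm."},
        {
            "role": "assistant",
            "content": None,
            "function_call": {  # hypothetical function, shown for illustration
                "name": "book_appointment",
                "arguments": json.dumps({"date": "Friday", "time": "15:00"}),
            },
        },
    ],
    "functions": [
        {
            "name": "book_appointment",
            "description": "Book an appointment once date and time are known.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {"type": "string"},
                    "time": {"type": "string"},
                },
                "required": ["date", "time"],
            },
        }
    ],
}

# JSONL = one JSON object per line; in practice you'd write many of these,
# since the fine-tuning API expects more than a single example.
with open("training.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

client = OpenAI()

# Upload the file and start a gpt-3.5-turbo fine-tuning job.
uploaded = client.files.create(file=open("training.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print(job.id)
```

The idea is that each training example shows the full information-gathering exchange before the `function_call`, which is exactly the step 3.5 was skipping when it fired the function too early.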
I'm actually building a web app where you just put in your OpenAI API key and you can generate training data based on your use case and train the model with a few clicks.