initial function calls
# support
g
I have a function that loads the current date and time in my appointment setter, and in the prompt I tell it to call the function asynchronously if it doesn't know the date. But it just doesn't. You have to ask it for a date, and only then it does, and then it says the result out loud. This is too dumb of a behaviour, and I want to know whether async function calling with a good prompt even works. @Sahil and @nikhil, have you done this?
v
Hi @GeneralKugelBlitz, I'm an AI assistant for Vapi. While a team member reviews this question, I'll look through relevant sources and see if I can provide suggestions. Please tag @Vapi Support with any follow-up questions.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Call Functions]()
* [Vapi iOS SDK]()
* [Set up your Server URL]()
* [Outbound Sales Example]()
-----------------------------
### Answer Vapi does support asynchronous function calling, where the assistant can perform a task without waiting for the server's response, allowing non-blocking operations within your assistant's workflow. This behavior is specified in your function's configuration by setting the `"async": true` property [(1)]() . However, for the assistant to decide to call a function, including asynchronously, the system message in your model's prompt must clearly instruct the assistant on when to trigger these calls. The effectiveness of function calling, asynchronous or not, depends heavily on how the prompt is designed and the conditions set for calling a function. If the assistant isn't calling the function as expected, review the prompt and make sure it is designed to trigger the call under the right conditions; the assistant's behavior is guided by the instructions defined in the prompt. For further refinement and testing, you may need to adjust your prompt or function configuration accordingly.
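For reference, a minimal sketch of what that might look like inside an assistant's model configuration, assuming an OpenAI-style function schema. Only the `"async": true` property is confirmed by the docs cited above; the function name and the surrounding field layout are illustrative:

```python
# Sketch of a function definition marked async inside an assistant config.
# "get_current_datetime" and the field layout around it are assumptions;
# only the "async": true flag comes from the documentation above.
assistant_config = {
    "model": {
        "provider": "openai",
        "model": "gpt-4",
        "functions": [
            {
                "name": "get_current_datetime",
                "description": "Return the current date and time for scheduling.",
                "parameters": {"type": "object", "properties": {}},
                "async": True,  # fire the call without waiting for the server response
            }
        ],
    }
}
```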
Got it. Tagging @User for additional help.
g
The point, Mr. Support Robot, is that I want it to trigger at the start of each call, and I have tried all sorts of prompts, BUT it doesn't get triggered. If you know of such a prompt, please enlighten us.
v
Thanks for the follow-up, @GeneralKugelBlitz. I'll get back to you soon.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Call Functions]()
* [Vapi iOS SDK]()
* [Set up your Server URL]()
* [Dashboard]()
-----------------------------
### Answer Based on the documentation, having a function trigger at the start of each call without being asked explicitly is a challenging scenario, because Vapi function calls depend heavily on the dialogue flow and the conditions set in your assistant's configuration and prompts. One approach you can consider is the `firstMessage` property when setting up your assistant. This property defines a message your assistant speaks when the user starts the call. While it doesn't directly trigger a function, you can design this first message to lead into the scenario where your function would be needed, indirectly setting up the condition for the function call. To integrate `firstMessage` effectively, review your assistant's setup and make sure your system prompt and function definitions are aligned with the intended user interactions. Remember that triggering functions also depends on how your assistant's model is prompted to recognize when to perform certain actions [(1)]() . For more complex behaviors, such as implicitly triggering a function at the start of a call, you may need to customize your assistant further or reach out for more tailored support. For details on setting up your assistant, including `firstMessage`, refer to the documentation on creating and updating assistants [(2)]() [(3)]() .
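As a rough sketch of that suggestion, assuming the assistant config accepts a top-level `firstMessage` and a system message under `model.messages` (exact field placement may differ in your setup):

```python
# Illustrative only: pair firstMessage with a system prompt that tells the model
# to call the calendar function before its first substantive reply.
assistant_config = {
    "firstMessage": "Hi, thanks for calling the clinic. One moment while I pull up the calendar.",
    "model": {
        "provider": "openai",
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    "Before responding to the caller, call get_inital_calendar once "
                    "to load today's date and the available slots. Never read the raw "
                    "calendar data out loud."
                ),
            }
        ],
    },
}
```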
g
Script:
1. Call the get_inital_calendar function immediately to learn the current date and time, two months of calendar, and two weeks of available slots. (This information is for your reference; do not tell it to the user.)
2. Ask the user's dental complaint and empathize with them. (Apologize and hang up if they are not a patient.)
3. Ask the user to book an appointment and what date and day they would like.
4. You should already have the necessary information from get_inital_calendar; if you don't, use get_available_slots to get whatever availability you need.
5. Tell the user about the slots in spoken language, only for the dates and times they are interested in.
6. Go back and forth until the user agrees on an available slot, calling the available-slot function again if need be. Never let the user book a non-available slot; refuse and only accept available slots.
7. Ask for the user's info, like name, email, and phone, separately, one by one.
8. Repeat the information along with the appointment time back to them and confirm the appointment.
9. Send the information to the book_appointment function.
10. Ask for any other queries; if none, end the call.

This is the sort of prompt I am using.
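For context, a custom function like get_inital_calendar is answered by your own server URL. Below is a rough sketch of such a handler using Flask; the request and response field names (`message.type`, `functionCall.name`, `result`) are assumptions to check against the Server URL docs, not a guaranteed schema:

```python
# Hypothetical server URL handler answering get_inital_calendar with the current
# date/time and some availability. Payload field names are assumptions.
from datetime import datetime
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/vapi/webhook", methods=["POST"])
def handle_vapi_message():
    payload = request.get_json(silent=True) or {}
    message = payload.get("message", {})
    if message.get("type") == "function-call":
        name = message.get("functionCall", {}).get("name")
        if name == "get_inital_calendar":
            # Reference data for the model; the prompt tells it not to read this out loud.
            return jsonify({
                "result": {
                    "now": datetime.now().isoformat(),
                    "available_slots": ["2024-03-04T10:00", "2024-03-04T14:30"],
                }
            })
    return jsonify({}), 200
```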
s
I attempted to get the current time in an asynchronous manner, but it didn't call the specific function. What you can do is directly insert the time into the prompt instead of retrieving the current time from the custom function. This method works excellently.
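A minimal sketch of that workaround: compute the timestamp when you create the call and interpolate it into the system prompt, so no function call is needed. The prompt wording here is purely illustrative:

```python
# Inject the current date and time into the system prompt instead of relying on
# an async function call at the start of the conversation.
from datetime import datetime

now = datetime.now().strftime("%A, %B %d %Y, %I:%M %p")
system_prompt = (
    f"The current date and time is {now}. "
    "Use it for all scheduling and never ask the caller what day it is."
)
```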
g
Of course, I'm doing that, buddy. I want more flexibility in how I do it.
g
Have you found that sometimes the LLM does not follow the prompt?
g
not sometimes
too often, it's TOO EASY TO BREAK
like some calls are smooth as butter
and others are an abomination and give you depression 😂
m
Completely random so far in my experience too ... in some cases it follows the prompt, and otherwise it ignores it
I haven't seen any difference in this behavior from upgrading to GPT-4
g
GPT-4 1106-preview also sucks for me, unfortunately
also 0125
g
So is this a Vapi issue? Or a prompting issue? Or an "oh well, it is what it is" issue, lol?
I think the latter
m
most likely because functions are deprecated
g
it's most likely an Azure OpenAI issue
m
once Vapi switches to tools, hopefully it works consistently
g
The same prompt that works flawlessly in the usual ChatGPT UI or their API just hallucinates on the Azure OpenAI cluster Vapi uses
that's where I test my prompts and functions, because YOU CAN SEE what's going on
and then I replicate it in Vapi
there it works
here it sucks
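One way to run that kind of side-by-side test is to replay the same system prompt and function schema against the OpenAI API directly and check whether the model decides to call the function on the first turn. A sketch using the openai Python SDK (v1+); the prompt and function schema are illustrative:

```python
# Replay the prompt against the OpenAI API to see whether the model chooses to
# call the function on the very first user turn.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    temperature=0.3,
    messages=[
        {"role": "system", "content": "Call get_inital_calendar immediately at the start of the conversation."},
        {"role": "user", "content": "Hi, I'd like to book a cleaning."},
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_inital_calendar",
            "description": "Load the current date, two months of calendar, and open slots.",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
)

print(response.choices[0].message.tool_calls)  # None means the model skipped the call
```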
g
??
g
so we are stuck until Vapi rolls out tools?
g
I have got news for you: tools don't make function calls better for us, they make it easier for them to support function calls across all models. Also, the function-call layer we have isn't going away; internally they are going to use LangChain tools to handle function calling, because many models are supported. And if it's OpenAI tools (why would it be?), it's probably only to use their retriever, which SUCKS compared to Pinecone. I don't think it's getting better until we have better models.
m
Hoping @User can comment here
most probably there's nothing else to try
g
Anyhow, I'm gonna keep trying prompts and all the models. We need to huddle together to find the best config
prompt, model, and temperature. All of it.
g
I'm down... let's figure this out once and for all.
s
Currently, function_calls are being used in Vapi, but we are in the process of migrating them to tools. Once complete, we will push the changes to the staging area and make an announcement. This will allow you all to test it before we push it to production.
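For anyone following along, the migration being described is essentially moving from the OpenAI chat API's deprecated `functions`/`function_call` parameters to the newer `tools`/`tool_choice` wrapper; the function schema itself is unchanged, it just nests under a `function` key. A sketch of the same definition expressed both ways (request fragments only, with an illustrative function):

```python
# Deprecated style: top-level "functions" plus "function_call".
legacy_style = {
    "functions": [
        {
            "name": "book_appointment",
            "description": "Book a confirmed appointment slot.",
            "parameters": {"type": "object", "properties": {}},
        }
    ],
    "function_call": "auto",
}

# Newer style: the same schema wrapped in "tools" plus "tool_choice".
tools_style = {
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "book_appointment",
                "description": "Book a confirmed appointment slot.",
                "parameters": {"type": "object", "properties": {}},
            },
        }
    ],
    "tool_choice": "auto",
}
```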