function call testing
# support
s
Hi! I'm trying to set up a custom webhook URL function call, but I found that the "Test Endpoint" button triggers a POST request that doesn't include the function name. This is contrary to the sample repos, where the endpoint expects the function name to be given and then calls the correct function. Is this just a bug in the "Test Endpoint" functionality, or should I set up one endpoint per function call? Thank you! Also, is there a way to test how the LLM would construct its response given the function call results? https://cdn.discordapp.com/attachments/1251895098556547215/1251895098766004224/image.png?ex=66703daa&is=666eec2a&hm=68989cd334a367e6757a8e3672728be43a7e9791cedcb7350ecc05c1b125a323& https://cdn.discordapp.com/attachments/1251895098556547215/1251895099042959422/image.png?ex=66703daa&is=666eec2a&hm=d748895559306278fc4d514a3dbef321d09fad7490cc17ddd920b7eb31bcb40a&
v
@skeang Hello! It sounds like you're encountering an issue with the "Test Endpoint" button not behaving as expected. Typically, the function name should be included in the POST request when testing an endpoint. This could be a bug or a misconfiguration in the testing setup.

Regarding setting up your webhook, it's common to have one endpoint that can handle multiple functions by parsing the incoming data to determine which function to call. However, if you prefer a more straightforward approach or if your use case requires it, you can set up one endpoint per function call.

As for testing how the LLM (Large Language Model) would construct its response given the function call results, you would typically simulate the function call and feed the results to the LLM in a controlled environment. This way, you can see how the LLM processes the input and constructs a response.

If you need further assistance with setting up your webhook or testing the LLM response, please let me know, and I'll be happy to help!
You can continue this conversation by mentioning me in the message.
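For reference, a minimal sketch of the single-endpoint approach described above: one webhook that reads the function name from the incoming POST body and dispatches to the matching handler. The payload shape, the `/webhook` route, and the `create_booking` handler are illustrative assumptions rather than Vapi's documented schema; log the actual request body you receive and adjust the field lookups accordingly.

```python
# Single webhook endpoint that dispatches on the function name in the payload.
# The payload shape ({"message": {"functionCall": {"name": ..., "parameters": ...}}})
# is an assumption for illustration -- inspect the real request body and adapt.
from flask import Flask, request, jsonify

app = Flask(__name__)

def create_booking(params):
    # Hypothetical handler -- replace with your own logic.
    return {"status": "booked", "day": params.get("day")}

HANDLERS = {"CreateBooking": create_booking}

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    call = body.get("message", {}).get("functionCall", {})  # assumed field names
    name = call.get("name")
    handler = HANDLERS.get(name)
    if handler is None:
        return jsonify({"error": f"unknown function: {name}"}), 400
    return jsonify({"result": handler(call.get("parameters", {}))})

if __name__ == "__main__":
    app.run(port=8000)
```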
s
@User Could you elaborate more on how to "simulate the function call and feed the results to the LLM in a controlled environment"? I have made some test inbound calls to an Assistant backed by OpenAI; am I able to see the OpenAI requests?
j
I think the Vapi bot is confusing your situation. I'd suggest using custom tools over custom functions for more customisability. The tool calling docs should help you. Also be sure to watch the YouTube video in the docs 👍
s
Thank you @Jotham, I'll have a look at custom tools again! But this would be my next question anyway: is there a way to see/log the OpenAI requests, or to construct one from the test call transcripts? I'm trying to find a way to test
custom tool response => LLM response
without making a phone call every time?
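One way to approximate "custom tool response => LLM response" without a phone call is to replay the exchange through the OpenAI Chat Completions API directly: hand-construct (or copy from a call transcript) the assistant's tool call, append the tool result your webhook would return, and see what the model says. This only approximates what Vapi actually sends the model (the real system prompt and message history will differ), and the CreateBooking name, arguments, and result below are made-up examples.

```python
# Replay a tool result through the OpenAI API to preview the LLM's wording,
# without placing a call. Names, arguments, and results below are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "system", "content": "You are a voice assistant that books appointments."},
    {"role": "user", "content": "Book me in for Tuesday. I'm Sam, sam@example.com."},
    # The assistant's tool call, reconstructed by hand or from a call transcript.
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "CreateBooking",
                "arguments": '{"name": "Sam", "email": "sam@example.com", "day": "Tuesday"}',
            },
        }],
    },
    # The result your webhook/tool would have returned.
    {"role": "tool", "tool_call_id": "call_1", "content": '{"status": "booked", "day": "Tuesday"}'},
]

resp = client.chat.completions.create(model="gpt-4o", messages=messages)
print(resp.choices[0].message.content)  # how the model verbalises the tool result
```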
j
Sorry, I'm not sure what you mean exactly. You can configure the final output made by the assistant when the tool request is complete by changing the "request-complete" message; this is shown in the docs. You might be able to create the tool within the OpenAI Assistants API for testing purposes, but I think this would be more work than it's worth. I'd just keep making test calls until you're happy with your output.
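For context, the tool-level messages mentioned above (including "request-complete") are configured on the tool itself. A rough sketch of what that block could look like is below; the exact field names are assumptions based on the Vapi tool docs and should be verified there.

```python
# Sketch of tool-level messages, including the "request-complete" message that
# is spoken once the tool call has returned. Field names are assumptions --
# check the Vapi tool docs before relying on them.
tool_messages = [
    {"type": "request-start", "content": "Let me check that for you..."},
    {"type": "request-complete", "content": "All done, your booking is confirmed."},
    {"type": "request-failed", "content": "Sorry, I couldn't complete that booking."},
]
```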
s
Ah I'll have a thorough read through the docs, thank you!
One last thing: is it a bug that the request triggered by "Test Endpoint" is missing the function name?
j
I'm away from my PC so I can't test atm, but it will be interesting to see later
c
lol not sure what you mean - if you created this correctly by naming it, that is the name. For example:
CreateBooking
Creates booking for user
- Name
- Email
- Day
Then those parameters show up at the bottom - add your webhook and send the data. Also, your call log should give you information as well.
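As a rough illustration of the CreateBooking example above: the tool's name, description, and parameters map to an OpenAI-style function definition, with the webhook attached as the server URL. The top-level layout is an assumption to be checked against the Vapi dashboard/docs, and the URL is a placeholder.

```python
# Illustrative CreateBooking tool definition (name, description, parameters)
# with a placeholder webhook URL. Top-level field names are assumptions.
create_booking_tool = {
    "type": "function",
    "function": {
        "name": "CreateBooking",
        "description": "Creates a booking for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "name":  {"type": "string", "description": "Caller's name"},
                "email": {"type": "string", "description": "Caller's email"},
                "day":   {"type": "string", "description": "Requested day"},
            },
            "required": ["name", "email", "day"],
        },
    },
    "server": {"url": "https://example.com/webhook"},  # placeholder webhook endpoint
}
```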
s
@Jotham @Chowderr Thanks for helping!