DoubleA
09/09/2024, 11:59 AM
a9a2c044-f9fc-4884-8c2a-d2bb88480a6c
After a couple of other tries, my function got called by VAPI successfully, but it takes a lot of time for VAPI to trigger it.
Call ID with a function call and a super long delay: a8c91fbe-62ef-452e-baa0-fedb9371b8c0
In the logs I also see multiple entries of "Model called tool: sendSMS()", but only in the last call did my backend actually get called.
What could be the reason for this?
Thank you for the help 👍

Vapi
09/09/2024, 12:00 PM
sendSMS().
4. Throttling: If there's a rate limit on your backend or within VAPI, and it's being exceeded, subsequent calls might be queued and delayed.
5. Caching Issues: Sometimes, caching mechanisms can cause delays if they're not properly invalidated or updated.
6. VAPI Internal Queuing: VAPI might be queuing requests internally if it's processing a high volume of calls, leading to delays.
To further investigate and resolve this issue, I would recommend checking the following:
- Review your backend server logs to see if there are any noticeable patterns or errors that correspond with the delays.
- Check the network latency between VAPI and your backend.
- Ensure that your backend is optimized to handle requests efficiently and can scale to meet demand (see the handler sketch after this message).
- Contact VAPI support to inquire if there were any known issues or maintenance activities during the times you experienced delays.
- If possible, monitor the API calls in real-time to see if the delay is consistent or varies.
If you need further assistance or if the problem persists, please don't hesitate to reach out to VAPI support for more detailed analysis and support.
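The handler sketch referenced in the checklist above could look roughly like the following: a minimal tool-call endpoint that acknowledges Vapi quickly and logs its own handling time. It assumes the request body carries the calls under message.toolCallList and that Vapi expects a { results: [{ toolCallId, result }] } response; those field names, the route, and the sendSmsSomehow helper are assumptions to check against the current Vapi docs, not details confirmed in this thread.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Endpoint configured as the tool's server.url (route name is illustrative).
app.post("/vapi/tool-calls", async (req, res) => {
  const started = Date.now();
  // Assumed payload shape: { message: { toolCallList: [{ id, function: { name, arguments } }] } }
  const toolCalls = req.body?.message?.toolCallList ?? [];

  const results = await Promise.all(
    toolCalls.map(async (call: any) => {
      if (call?.function?.name === "sendSMS") {
        await sendSmsSomehow(call.function.arguments); // hypothetical helper
        return { toolCallId: call.id, result: "SMS sent" };
      }
      return { toolCallId: call.id, result: "Unknown tool" };
    })
  );

  // Log handling time so slow responses on the backend side are easy to spot.
  console.log(`tool-calls handled in ${Date.now() - started} ms`);
  res.json({ results });
});

// Placeholder for the real SMS integration.
async function sendSmsSomehow(args: unknown): Promise<void> {
  console.log("sendSMS called with", args);
}

app.listen(3000, () => console.log("listening on :3000"));
```

Timing the handler this way helps separate delays caused by the backend itself from delays that happen before Vapi ever reaches it.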
Vapi
09/09/2024, 12:00 PM

DoubleA
09/09/2024, 12:02 PM

Vapi
09/09/2024, 12:02 PM

DoubleA
09/09/2024, 1:29 PM

DoubleA
09/09/2024, 4:25 PM

Shubham Bajaj
09/09/2024, 9:19 PM

DoubleA
09/10/2024, 7:14 AM
Regarding the "Model called tool: sendSMS()" entries: it looks like the model does recognize that it should call sendSMS (multiple times), so my understanding is that VAPI gets this information from OpenAI but does not call the provided server.url.
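For context on where server.url comes from, this is roughly what attaching a server URL to the sendSMS tool looks like when the tool is created through the Vapi API. The /tool endpoint, the field names, and the placeholder URL and key are assumptions based on the general shape of the API, not details taken from this thread.

```typescript
async function createSendSmsTool() {
  const response = await fetch("https://api.vapi.ai/tool", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      type: "function",
      function: {
        name: "sendSMS",
        description: "Send an SMS to the caller",
        parameters: {
          type: "object",
          properties: {
            to: { type: "string", description: "Recipient phone number" },
            body: { type: "string", description: "Message text" },
          },
          required: ["to", "body"],
        },
      },
      // The URL Vapi should POST to when the model invokes sendSMS.
      server: { url: "https://example.com/vapi/tool-calls" },
    }),
  });

  const tool = await response.json();
  console.log("created tool id:", tool.id);
  return tool;
}
```

If the model logs "Model called tool: sendSMS()" but the endpoint at server.url never receives a request, the problem sits on the Vapi side of this hop rather than in the backend itself, which matches the observation above.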
DoubleA
09/10/2024, 7:24 AM

John George
09/10/2024, 9:34 AM

DoubleA
09/10/2024, 10:45 AM
It works when I use the toolIds array. In that case it works very fast, as it should!
It doesn't work when I add the same information as a transient tool in the create call request.
Another very important thing to note: when using toolIds instead of a transient tool, the AI works much, much better, like night and day, in terms of the quality of its answers.
As a side note, I remember that I used gpt-4o-mini with toolIds (but I was asked by VAPI to switch to a transient tool), and when I started using the transient tool, the AI started to talk in different languages and behaved like crazy, so I had to switch to gpt-4o, but it still wasn't perfect. Now, when using gpt-4o and toolIds, it works perfectly.
I might need to use a transient tool at some point in the future for some very specific cases. I would appreciate it if the VAPI team could take a look at this and fix it 👍.
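To make the comparison above concrete, here is a rough sketch of the two setups in a create-call request: referencing a pre-created tool by ID via toolIds versus defining the same tool transiently inline. The endpoint, the nesting of the fields, and all IDs and numbers are illustrative assumptions to verify against the Vapi docs.

```typescript
// Variant A: reference a pre-created tool via toolIds (the setup reported to work well).
const withToolIds = {
  phoneNumberId: "YOUR_PHONE_NUMBER_ID",
  customer: { number: "+15551234567" },
  assistant: {
    model: {
      provider: "openai",
      model: "gpt-4o",
      toolIds: ["SENDSMS_TOOL_ID"],
    },
  },
};

// Variant B: define the same tool transiently, inline in the create-call request
// (the setup reported to cause delays and weaker answers).
const withTransientTool = {
  phoneNumberId: "YOUR_PHONE_NUMBER_ID",
  customer: { number: "+15551234567" },
  assistant: {
    model: {
      provider: "openai",
      model: "gpt-4o",
      tools: [
        {
          type: "function",
          function: {
            name: "sendSMS",
            description: "Send an SMS to the caller",
            parameters: {
              type: "object",
              properties: {
                to: { type: "string" },
                body: { type: "string" },
              },
              required: ["to", "body"],
            },
          },
          server: { url: "https://example.com/vapi/tool-calls" },
        },
      ],
    },
  },
};

async function createCall(payload: unknown) {
  const res = await fetch("https://api.vapi.ai/call", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  return res.json();
}

// Usage: await createCall(withToolIds); // or createCall(withTransientTool);
```

Both payloads describe the same sendSMS tool; only where it is defined differs, which is the variable the messages above are isolating.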
Shubham Bajaj
09/10/2024, 9:08 PM

Shubham Bajaj
09/10/2024, 9:09 PM

Shubham Bajaj
09/10/2024, 9:09 PM

DoubleA
09/11/2024, 5:39 AM

DoubleA
09/11/2024, 5:40 AM

DoubleA
09/11/2024, 5:47 AM

Shubham Bajaj
09/11/2024, 7:34 PM