Custom llm stopped working randomly (vapi is faili...
# support
t
any idea why using a custom llm worked fine for a while and now it just randomly stops responding? I hosted my own middleware to connect my fine-tuned model with Vapi and it was working fine for several hours, then it just started ending the call after user input with no error other than "meeting has ended". The middleware is functioning fine: I can access it through a curl request and get a response from my fine-tuned model, but Vapi is just failing to make the request for some reason. It fails through the API and through the dashboard. Nothing has changed in the deployment connecting Vapi with the fine-tuned model, and the fine-tuned model is still functional, so this points to some issue with Vapi making requests. Maybe an update pushed server-side broke part of the custom LLM functionality? Not sure, but I was hoping this could be fixed because I need this fine-tuned model version live by this evening. Nothing has changed on my end code-wise, and the fine-tuned model is still deployed, functioning properly, and accessible. To clarify, it literally did this halfway through a call, so I know there were absolutely no changes pushed on my end that could have caused it. https://cdn.discordapp.com/attachments/1327017979787083957/1327017980164440216/Screenshot_2025-01-09_at_3.png?ex=6781894d&is=678037cd&hm=898ea76540e752145e106c2347960d388d3f0f849b3c63f13ede965546cb5566&
v
@Tedros I'm sorry to hear that you're experiencing issues with your custom LLM integration with VAPI. It sounds like you've already done some troubleshooting on your end, and since the middleware is accessible via curl and responding correctly, the issue does seem to be on the VAPI side. There could be a few reasons for this behavior, such as temporary service disruptions, updates on the VAPI platform that might have affected the integration, or possibly rate limits or other safeguards kicking in. To assist you further, I would need to check the system logs and potentially escalate this to our technical team. However, since I'm designed to provide support within the scope of this conversation, I would recommend reaching out to VAPI's support team directly for a more in-depth investigation. In the meantime, you can also check the following:
1. Review any logs or error messages from your middleware to ensure there are no internal issues that might be causing the disconnection.
2. Verify that there haven't been any changes to API keys or authentication methods that could affect the connection between VAPI and your middleware.
3. Check VAPI's status page (if available) to see if there are any known outages or maintenance windows that could be affecting service.
4. Ensure that your middleware is not hitting any rate limits set by VAPI.
If you need this issue resolved urgently and it's within my capabilities, I can attempt to check the system status or logs for you. Would you like me to do that?
You can continue this conversation by mentioning me in the message.
t
there are no logs in my middleware regarding the request vapi sends out. I have very detailed logging, and when it was working before I could see whether or not the requests made from any service failed or went through. My requests from curl still show up in the logs, but the ones from Vapi don't even show, indicating it's failing before it reaches the logging points in my middleware. Vapi shows an error of `Model request failed (attempt #1, reason: (404 status code (no body)))` in its logs, but given it used to work just fine (nothing has changed since) and curl still works, my guess is something changed on Vapi's side. Any ideas there?
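One thing worth ruling out when a provider reports `404 (no body)` while curl succeeds: if the caller hits a path the server doesn't route, many frameworks return the 404 before any route-level logging runs, so the request looks like it never arrived. A minimal sketch of catch-all request logging using only the Python standard library (the `/chat/completions` path and the OpenAI-style response shape here are illustrative assumptions, not Vapi's confirmed contract):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_LOG = []  # every (method, path) seen, recorded before any routing

class LoggingHandler(BaseHTTPRequestHandler):
    """Catch-all handler: log the request path first, then route."""

    def do_POST(self):
        # Log before routing, so even unmatched paths leave a trace;
        # a framework-level 404 would otherwise be invisible in app logs.
        REQUEST_LOG.append(("POST", self.path))
        body_len = int(self.headers.get("Content-Length", 0))
        self.rfile.read(body_len)  # drain the request body

        if self.path == "/chat/completions":
            # Illustrative OpenAI-style response shape (assumption).
            payload = json.dumps(
                {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep stderr quiet; REQUEST_LOG is the audit trail
```

With this in place, a provider-side 404 that never appears in `REQUEST_LOG` points at something before your server (DNS, proxy, TLS, or the provider itself), while a logged path you didn't expect points at a URL mismatch in the integration config.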
ah man i don't think the vapi support bot is gonna respond to this one
also the ask for help functionality isn't working @User, maybe some internal outages are causing both of these issues, I don't know. Just wanted to let you guys know https://cdn.discordapp.com/attachments/1327017979787083957/1327020047842939021/Screenshot_2025-01-09_at_4.03.09_PM.png?ex=67818b3a&is=678039ba&hm=7446e4e574395f06712f20f25def695dfeebf38e677de3af49cfb2fad80eff97&
Welp nvm it just decided to start working again
s
@Tedros Just checking if this is still happening for you. If yes, could you please share a recent call ID?
Closing this ticket due to no response.