Response From OpenAI custom LLM
# support
Isaac
I've developed a custom LLM using OpenAI and successfully configured Vapi to send requests to my endpoint. When I receive a response from my custom LLM, is there a specific structure that the response object should follow to ensure compatibility with Vapi's request?
Vapi AI assistant
Hi @Isaac, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
**While I generate an answer, here are some sources that might be helpful**:
* Custom LLM
* Connecting Your Custom LLM to Vapi: A Comprehensive Guide
* Provider Keys
* Call Ended Reason
### Answer
When setting up a custom LLM with Vapi, your server should send back responses in a structure that Vapi can interpret. Typically, this means including fields like `content` for the message text and `role` to designate who is speaking (e.g., the system or the user). Here's a basic format:
```json
{
  "messages": [
    {
      "role": "system",
      "content": "Your response text here"
    }
  ]
}
```
This ensures that the LLM's output is correctly formatted for Vapi to handle in its conversational flows (1).
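For reference, if your endpoint mirrors OpenAI's chat-completions API (as custom LLM setups built on OpenAI typically do), a common approach is to return the standard OpenAI chat-completion response shape. The sketch below illustrates that shape under that assumption; it is not an official Vapi specification, and the `id`, `created`, and `model` values are placeholders.

```json
{
  "id": "chatcmpl-example-id",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "your-custom-model",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Your response text here"
      },
      "finish_reason": "stop"
    }
  ]
}
```

If your assistant is configured for streaming, the endpoint would instead emit `chat.completion.chunk` events; check the Custom LLM guide linked above to confirm which shape your setup expects.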
Glad I could be helpful. Feel free to create a new thread with any new questions you may have.
m
We resolved this privately