Devon Humes
05/14/2025, 9:47 PM
Here is an example JSON response from my custom LLM.
```json
{
  "id": "chatcmpl-BXECyIN6BeM1HQKfpuI3KlhVCCaRI",
  "choices": [
    {
      "index": 0,
      "message": {
        "content": "I am designed to assist with a wide range of tasks, including answering questions, providing information, solving problems, and helping with decision-making. I can also interact with external tools to perform specific actions like creating tickets, fetching user information, and more. My goal is to be a helpful and efficient assistant for whatever you need.",
        "role": "assistant"
      }
    }
  ],
  "created": 1747259140,
  "model": "a6312579-10d5-4ff2-a576-1f44390c748d",
  "service_tier": "default",
  "system_fingerprint": "fp_55d88aaf2f",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 66,
    "prompt_tokens": 421,
    "total_tokens": 487
  }
}
```
Vapi Support Bot
05/14/2025, 9:49 PM
Vapi expects streaming responses as Server-Sent Events, with each chunk shaped like this:

```
data: {"choices":[{"delta":{"content":"Your response text here."},"index":0}]}
data: [DONE]
```
Make sure your server sends each chunk prefixed with `data: ` and ends the stream with `data: [DONE]`. This should allow Vapi to parse and display your responses correctly ([documentation](https://docs.vapi.ai/customization/custom-llm/using-your-server)).
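To make that concrete, here is a minimal sketch of an OpenAI-style `/chat/completions` endpoint that emits chunks in that format. It assumes a Flask server; the framework choice and the hard-coded chunks are illustrative, not Vapi requirements beyond the `data: ` framing and `data: [DONE]` terminator:

```python
import json

from flask import Flask, Response

app = Flask(__name__)

@app.post("/chat/completions")
def chat_completions():
    def generate():
        # These chunks are hard-coded for illustration; a real server
        # would forward deltas from the underlying model.
        for piece in ["Your ", "response ", "text ", "here."]:
            chunk = {"choices": [{"delta": {"content": piece}, "index": 0}]}
            # Each SSE event is prefixed with "data: " and terminated
            # by a blank line.
            yield f"data: {json.dumps(chunk)}\n\n"
        # Signal end of stream exactly as Vapi expects.
        yield "data: [DONE]\n\n"

    return Response(generate(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=8000)
```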
Sources:
- [Vapi Custom LLM Integration Documentation](https://docs.vapi.ai/customization/custom-llm/using-your-server)
- [Vapi Discord Troubleshooting Discussion](https://discord.com/channels/1211482211119796234/1328028159949537341)

Devon Humes
05/15/2025, 7:53 PM
The problem was the `Accept: text/event-stream` header, which we were using to determine what to send back. When I forced it to stream events, it works.
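For anyone hitting the same pitfall: branching on the `Accept` header breaks when the caller omits `text/event-stream`. A sketch of a more defensive check, again assuming Flask; `should_stream` is a hypothetical helper, and the fallback on the OpenAI-style `stream` body flag is an assumption about what the caller sends:

```python
from flask import request

def should_stream() -> bool:
    # Fragile on its own: the client may not send this header at all.
    accept_ok = "text/event-stream" in request.headers.get("Accept", "")
    # OpenAI-compatible request bodies usually carry a "stream" flag
    # (assumption: the caller populates it). Always streaming is the
    # simplest fix, as described above.
    body = request.get_json(silent=True) or {}
    return accept_ok or bool(body.get("stream"))
```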
Vapi
05/17/2025, 5:52 AM