Vapi Support
05/25/2024, 11:07 AM
2. **Response Format**: Make sure your server's responses include the fields Vapi expects: `id`, `object`, `created`, `model`, and `messages`.
3. **Logs and Error Messages**: Review the detailed logs from both your custom LLM server and Vapi's call logs. Look for specific error messages or indicators at the point where the LLM fails; these can show whether the issue lies in data formatting, response timing, or connection handling (see the logging sketch after this list).
4. **Server Configuration**: Double-check your server's endpoint configuration against the specifications Vapi requires for streaming compatibility, including the appropriate headers, response format, and handling of continuous data streams (a streaming sketch follows the response example below).
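For item 3, here is a minimal sketch of request logging on the custom LLM side, assuming a Python/FastAPI server (Vapi does not require this framework; the logger name and handler are illustrative):

```python
import logging
import time

from fastapi import FastAPI, Request

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("custom-llm")

app = FastAPI()

@app.middleware("http")
async def log_requests(request: Request, call_next):
    # Log every incoming request from Vapi so a failure can be
    # correlated with Vapi's call logs by path, status, and timing.
    start = time.time()
    logger.info("incoming %s %s", request.method, request.url.path)
    response = await call_next(request)
    elapsed_ms = (time.time() - start) * 1000
    logger.info("responded %s in %.0f ms", response.status_code, elapsed_ms)
    return response
```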
Here’s a basic example of how your server might format a response to Vapi:
```json
{
  "id": "42",
  "object": "chat.completion",
  "created": 1716627600,
  "model": "your_model_id",
  "messages": [
    {"role": "assistant", "content": "Response text here"}
  ]
}
```
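For item 4, a sketch of what a streaming-compatible endpoint might look like. This assumes FastAPI again, and assumes Vapi consumes OpenAI-style `chat.completion.chunk` events delivered as `data:` lines ending with `data: [DONE]`; the route path and chunk schema here are assumptions to verify against Vapi's current docs:

```python
import json
import time

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/chat/completions")  # illustrative path; match the URL you configured in Vapi
async def chat_completions(request: Request):
    body = await request.json()
    tokens = ["Response ", "text ", "here"]  # stand-in for real model output

    def event_stream():
        # Emit one OpenAI-style chunk per token as server-sent events.
        for token in tokens:
            chunk = {
                "id": "42",
                "object": "chat.completion.chunk",
                "created": int(time.time()),
                "model": body.get("model", "your_model_id"),
                "choices": [{"index": 0, "delta": {"content": token}}],
            }
            yield f"data: {json.dumps(chunk)}\n\n"
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```

The `text/event-stream` media type and the trailing `[DONE]` sentinel follow the common OpenAI-compatible streaming convention; if Vapi reports the stream as malformed, compare your chunk shape against a known-working OpenAI response.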