Response Body Structure for Custom LLM
# support
h
I set up my custom-llm server, which handles the Vapi client request and returns the following response body structure:

```python
{'id': 'chatcmpl-f248ab7f-e884-46d7-9ba9-a117cc300959', 'object': 'chat.completion', 'created': 1732291256, 'model': 'gpt-3.5-turbo-0613', 'system_fingerprint': None, 'choices': [{'index': 0, 'delta': {'content': 'Example Response'}, 'logprobs': None, 'finish_reason': 'stop'}]}
```

I used this as a reference: https://github.com/VapiAI/server-example-javascript-bun/blob/main/src/api/custom-llm/basic.ts

This resource from the Vapi docs also isn't helpful, unfortunately: https://docs.vapi.ai/customization/custom-llm/using-your-server

Please show me the response structure that the Vapi client expects from the custom-llm server.
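For context, the `delta` field in that payload is the shape OpenAI uses for *streaming* chunks, and OpenAI-compatible streaming responses are normally delivered as server-sent events (`data:` lines terminated by `data: [DONE]`) rather than as a single JSON body. Below is a minimal sketch of such an endpoint in Python; the Flask framework, the `/chat/completions` route, the port, and the canned reply are all assumptions for illustration, not confirmed Vapi requirements:

```python
# Minimal sketch: streams OpenAI-style chat-completion chunks as
# server-sent events. Route, port, and reply text are assumptions.
import json
import time
import uuid

from flask import Flask, Response

app = Flask(__name__)

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    def generate():
        chunk = {
            "id": f"chatcmpl-{uuid.uuid4()}",
            "object": "chat.completion.chunk",
            "created": int(time.time()),
            "model": "gpt-3.5-turbo-0613",
            "choices": [
                {
                    "index": 0,
                    "delta": {"content": "Example Response"},
                    "finish_reason": "stop",
                }
            ],
        }
        # Each chunk goes out as a `data:` line; the stream ends with [DONE].
        yield f"data: {json.dumps(chunk)}\n\n"
        yield "data: [DONE]\n\n"

    return Response(generate(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=5000)
```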
v
@HC Certainly! The Vapi client expects the response from a custom-llm server to be JSON that includes the following key elements:

- `id`: A unique identifier for the response.
- `choices`: An array of choice objects, where each choice object contains:
  - `message`: The actual text response generated by your model.

Here is an example of the expected response structure:
```json
{
  "id": "unique-response-id",
  "choices": [
    {
      "message": "Your generated response text goes here."
    }
  ]
}
```
Make sure your server returns the response in this format so that the Vapi client can process it correctly. If you have any further questions or need assistance with the integration, please feel free to ask.
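As an illustration only, a minimal Flask handler that returns exactly the structure described in this reply might look like the following (the route name is hypothetical, and the sketch simply mirrors the JSON above):

```python
# Illustration of the simplified structure described above.
# The route name is hypothetical; the payload mirrors the reply verbatim.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    return jsonify(
        {
            "id": "unique-response-id",
            "choices": [{"message": "Your generated response text goes here."}],
        }
    )

if __name__ == "__main__":
    app.run(port=5000)
```

Returning a Python dict through `jsonify` (or `json.dumps`) also guarantees valid JSON with double-quoted keys, rather than a Python `repr` with single quotes.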
You can continue this conversation by mentioning me in the message.
h
I just tried this as the response: `{'id': 'chatcmpl-d70007b9-6050-4527-910a-33ae1165df44', 'choices': [{'message': 'Test'}]}` but the agent is still not answering @User
v
@HC I'm sorry to hear that the agent is not responding as expected. There may be an issue with the integration or with the format of the response. The structure you've provided looks correct, provided the JSON is properly formatted (with double quotes instead of single quotes). Here's the corrected JSON response format:
```json
{
  "id": "chatcmpl-d70007b9-6050-4527-910a-33ae1165df44",
  "choices": [
    {
      "message": "Test"
    }
  ]
}
```
If the agent is still not answering after using the correct JSON format, there might be other issues at play, such as:

1. Network issues between the Vapi client and your custom-llm server.
2. Configuration issues on the Vapi client side.
3. Errors in the custom-llm server that are not related to the response structure.

To troubleshoot further, you may want to:

- Check the server logs for any errors or exceptions.
- Ensure that the Vapi client is correctly configured to communicate with your custom-llm server.
- Verify that the network allows requests to be sent and received between the Vapi client and your server (see the sketch below for a quick direct test).

If you continue to experience issues, please provide more details or check the server logs for any additional information that could help identify the problem. If necessary, you can also reach out to Vapi support for more assistance.
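One quick way to rule out the first and third items above is to hit the endpoint directly and inspect the raw status, headers, and body. A small sketch using `requests`, where the URL and request payload are assumptions about a local development setup:

```python
# Quick smoke test: POST a minimal OpenAI-style request to the local
# custom-llm server and print the raw response. The URL and payload
# are assumptions about a local development setup.
import requests

resp = requests.post(
    "http://localhost:5000/chat/completions",
    json={
        "model": "gpt-3.5-turbo-0613",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
    timeout=10,
)

print(resp.status_code, resp.headers.get("Content-Type"))
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```

If this prints a valid body but the agent still stays silent, the problem is more likely on the Vapi configuration side than in the server itself.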
h
How can I reach out to Vapi support?
v
@HC You can reach out to Vapi support by emailing their support team at [support@vapi.ai](mailto:support@vapi.ai). Feel free to send them a detailed description of the issue you're facing, and they should be able to assist you further. If you need any more help, just let me know!
s
Hey @HC, apologies for the delayed response. Here's a short documentation guide on using custom-llm, put together just for you: https://vapi.notion.site/Custom-LLM-Server-Guide-14d008c8393f807f86dbf61dcb0767ee?pvs=4. Do let me know your feedback on this.