Custom-llm output structure
# support
Hey all, we're prototyping with VAPI and have an in-between server doing transformations on the text before we hit the voice generation step. Everything seems to be plugged together correctly, but we're not getting any voice response when testing on the dashboard. Is this the proper structure for the custom server approach to return in response to a POST?
```python
{
    "id": "42",
    "object": "chat.completion",
    "created": time.time(),
    "model": request_data.get("model"),
    "messages": [{"role": "assistant", "content": resp_content}],
}
```
Thanks
Note: this is non-streaming right now, so the assumption is that the text just gets passed straight to VAPI for voice generation.
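For reference, the standard OpenAI chat completion shape puts the assistant reply inside a `choices` array as a `message` object rather than in a top-level `messages` list. I'm not certain that's exactly what VAPI expects from a custom LLM, but here's a minimal sketch of that shape (`build_response` is just an illustrative name):

```python
import time

def build_response(request_data: dict, resp_content: str) -> dict:
    # Standard OpenAI-style non-streaming chat completion response.
    # The reply lives in choices[0].message, not in a top-level
    # "messages" list. That VAPI requires exactly this shape is an
    # assumption based on the OpenAI-compatible format.
    return {
        "id": "42",
        "object": "chat.completion",
        "created": int(time.time()),  # OpenAI uses an integer Unix timestamp
        "model": request_data.get("model"),
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": resp_content},
                "finish_reason": "stop",
            }
        ],
    }
```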