how to send back streaming response back when usin...
# support
m
I am using a custom LLM (https://docs.vapi.ai/assistants/custom-llm) to get the response from an OpenAI assistant in streaming format, and I want to send the streamed response back to Vapi. I didn't find anything in the documentation on how I can do that.
v
Hi @mukul, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Custom LLM]()
* [Connecting Your Custom LLM to Vapi: A Comprehensive Guide]()
* [Frequently Asked Questions]()
* [Introduction]()
-----------------------------
### Answer
Currently, the official Vapi documentation does not provide specific instructions on how to implement streaming responses back to Vapi when using a custom LLM. However, generally, when setting up a server to act as the LLM for Vapi, you would need to ensure that your endpoint supports streaming completions [(1)]().

If you are creating an assistant with a custom LLM, and your server is capable of handling streaming responses, you would typically set up your server to continuously send data back to Vapi in the format it requires. This often involves using websockets or similar technology that supports real-time data transmission. For more guidance on how your server should handle requests and send responses to be compatible with Vapi, you may want to refer to the section on using your own server with a custom LLM in the Vapi documentation.

If you are looking for specific code examples or more detailed implementation guidance, these are not explicitly covered in the current documentation. You might find it useful to experiment based on the general principles outlined for custom LLM integrations or reach out to other developers in the community who might have implemented similar solutions.
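To make the general advice above concrete, here is a minimal sketch of what such an endpoint could look like, assuming Vapi calls your custom-LLM URL the way it would call OpenAI's `/chat/completions` endpoint with `stream: true` and reads back server-sent events. The route path, model name, and chunk contents are illustrative, not taken from the Vapi docs.

```js
// Minimal sketch of a custom-LLM endpoint that streams an OpenAI-style response.
const express = require('express');

const app = express();
app.use(express.json());

app.post('/chat/completions', async (req, res) => {
  // Server-sent events: one "data: {json}\n\n" line per chunk.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  const words = ['Hello', ' from', ' a', ' custom', ' LLM.'];
  for (const word of words) {
    const chunk = {
      id: 'chatcmpl-demo',
      object: 'chat.completion.chunk',
      created: Math.floor(Date.now() / 1000),
      model: 'custom-llm',
      choices: [{ index: 0, delta: { content: word }, finish_reason: null }],
    };
    res.write(`data: ${JSON.stringify(chunk)}\n\n`);
  }

  // Signal the end of the stream the way OpenAI does.
  res.write('data: [DONE]\n\n');
  res.end();
});

app.listen(3000);
```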
Got it. Tagging @User for additional help.
m
"6. Send Response to Vapi: The formatted response is sent back to Vapi as a JSON object." (https://docs.vapi.ai/assistants/custom-llm) Can someone please share the structure of the JSON object when sending the response back to Vapi?
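Not an official answer, but since the custom-LLM integration mirrors OpenAI's chat completions API, each streamed piece of the response is usually shaped like a `chat.completion.chunk` object. The field values below are illustrative placeholders, not documented Vapi requirements.

```js
// Illustrative shape of one streamed chunk (values are placeholders):
const chunk = {
  id: 'chatcmpl-123',
  object: 'chat.completion.chunk',
  created: 1700000000,
  model: 'gpt-4o',
  choices: [
    { index: 0, delta: { role: 'assistant', content: 'Hello' }, finish_reason: null },
  ],
};

// Each chunk goes over the wire as a server-sent event line,
// and the stream ends with a literal [DONE] marker:
//   data: {"id":"chatcmpl-123","object":"chat.completion.chunk",...}
//   data: [DONE]
```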
d
@Sahil
Check this out
m
This is not working, so I am calling the OpenAI assistant, getting the streamed response, and I want to send that response to Vapi so it narrates it to the user. Example:

```js
const stream = openai.beta.threads.runs.stream(threadId, { assistant_id: 'assistantId' });

for await (const event of stream) {
  // I want to send the text from this event back to Vapi
}
```
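One way to wire this up, assuming your custom-LLM endpoint streams OpenAI-style chunks back to Vapi as sketched earlier in the thread: pull the text out of the `thread.message.delta` events and write each piece as a server-sent event. `res` is assumed to be the Express response for the request Vapi made to your endpoint, and names like `streamAssistantToVapi` are just for illustration.

```js
// Sketch: forward Assistants API stream deltas to Vapi as
// OpenAI-style chat.completion.chunk SSE lines.
const OpenAI = require('openai');
const openai = new OpenAI();

async function streamAssistantToVapi(threadId, assistantId, res) {
  res.setHeader('Content-Type', 'text/event-stream');

  const stream = openai.beta.threads.runs.stream(threadId, {
    assistant_id: assistantId,
  });

  for await (const event of stream) {
    // Text arrives incrementally in thread.message.delta events.
    if (event.event === 'thread.message.delta') {
      const text = (event.data.delta.content || [])
        .filter((part) => part.type === 'text')
        .map((part) => part.text.value)
        .join('');
      if (text) {
        const chunk = {
          id: event.data.id,
          object: 'chat.completion.chunk',
          created: Math.floor(Date.now() / 1000),
          model: 'assistants-api',
          choices: [{ index: 0, delta: { content: text }, finish_reason: null }],
        };
        res.write(`data: ${JSON.stringify(chunk)}\n\n`);
      }
    }
  }

  res.write('data: [DONE]\n\n');
  res.end();
}
```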
And @Sahil, can you please confirm whether a custom LLM works with voice as well, so that when I send the response back to Vapi it narrates the response to the user?
d
@Sahil we are trying to make this work with the Assistants API rather than chat completions so we can have images as well.