Urgent – Send Real-Time Message to Be Spoken by Assistant
# support
s
Hi team, I am using the web call feature and passing a custom-llm URL in the assistantOptions. My custom LLM endpoint has various stages of validating the user request, planning solutions, judging, etc., so it takes time to respond to the user after the whole process completes. I want to keep the user informed about the ongoing background processes as they occur. *For Example*: step 1 of my custom-llm endpoint is validating the user request, so when it reaches this step I want the assistant to tell the user on the ongoing call "validating your request", and likewise for all the other steps. This is similar to a WebSocket connection. Is there any feature available in Vapi for this requirement? If yes, can you share the required endpoint? Thanks
Any views on this @Vapi @Shubham Bajaj?
Hi @Vapi, would appreciate quick support on this
I can see live call control for phone calls:
```bash
curl -X POST 'https://aws-us-west-2-production1-phone-call-websocket.vapi.ai/7420f27a-30fd-4f49-a995-5549ae7cc00d/control' \
  -H 'content-type: application/json' \
  --data-raw '{
  "type": "say",
  "content": "Welcome to Vapi, this message was injected during the call.",
  "endCallAfterSpoken": false
}'
```
Is this available for web calls also? Please let me know soon as I am in a bit of a hurry here @Vapi @User
Hi team, any help on this @Vapi?
Hey @Shubham Bajaj, can you please look into this?
k
Vapi's custom LLM integration allows for streaming responses, but I don't see a built-in feature specifically for sending non-content status updates like "validating your request" while your custom LLM is processing. The custom-llm integration follows OpenAI's streaming protocol, where text content is streamed, not status metadata.

That protocol includes both content chunks and a final end signal (`data: [DONE]`). You're asking whether you can keep sending content updates while holding back the end signal until all processing is complete. Yes: you can send multiple content updates through the stream and only send the end signal once everything is done.

For content that should be spoken to the user, send it in the OpenAI format:

`data: {"choices":[{"delta":{"content":"Your actual content here","role":"assistant"}}]}`

Keep sending content chunks as needed, and finish with `data: [DONE]` once all processing is complete.
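To make that concrete, here is a minimal sketch of such a custom-LLM server (not an official Vapi example; the endpoint path, stage names, and sleeps are placeholders standing in for your real pipeline). It streams interim status updates as OpenAI-style chunks and only emits `[DONE]` after the final answer:

```python
import json
import time

from flask import Flask, Response

app = Flask(__name__)

def sse_chunk(text: str) -> str:
    # Wrap text in the OpenAI streaming chat-completion chunk format.
    payload = {"choices": [{"delta": {"content": text, "role": "assistant"}}]}
    return f"data: {json.dumps(payload)}\n\n"

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    def generate():
        # Interim status updates, spoken while the real work happens.
        yield sse_chunk("Validating your request. ")
        time.sleep(2)  # stand-in for the validation stage

        yield sse_chunk("Planning a solution. ")
        time.sleep(2)  # stand-in for the planning stage

        # Final answer once all stages complete, then the end signal.
        yield sse_chunk("Here is the result of your request.")
        yield "data: [DONE]\n\n"

    return Response(generate(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=8000)
```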
s
@User You mean to use something like a WebSocket connection for the custom LLM URL, which will continuously keep sending data to the Vapi SDK in the mentioned OpenAI format while the HTTP connection is still alive? Will this work, and will the assistant keep speaking whatever I keep sending?
k
Your assistant will keep speaking whatever you send as part of the content until you send the `[DONE]` or exit signal.
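If you want to verify that behavior locally before wiring it into Vapi, here is a hypothetical test client (assuming the sketch server above is running on port 8000) that reads the SSE stream and prints each delta as it arrives:

```python
import json

import requests

with requests.post(
    "http://localhost:8000/chat/completions",
    json={"messages": [{"role": "user", "content": "hello"}]},
    stream=True,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        # SSE lines look like "data: {...}"; skip keep-alives and blanks.
        if not line or not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end-of-stream signal; nothing more will be spoken
        delta = json.loads(data)["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)
```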
s
Please don't close this ticket, I will implement this and get back on it.
Hi @Shubham Bajaj, I noticed that the assistant only speaks after the HTTP connection is closed, regardless of whether we have sent `data: [DONE]` or not. It only responds once the connection is closed. Even if I keep sending data (keeping the connection alive) via the eventBus, it won't speak until the HTTP request is complete. To clarify: does the custom LLM URL in Vapi support WebSocket connections, or does it only support HTTP requests? If it only supports HTTP, is there any way to make the assistant speak without closing the HTTP connection? It would be great if you could help me out here.
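One thing worth ruling out here, as an assumption on my side rather than anything confirmed in this thread: some WSGI servers and reverse proxies buffer streamed responses, which would produce exactly this symptom of nothing being spoken until the request completes. A variant of the earlier sketch that asks common intermediaries not to buffer the stream:

```python
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    def generate():
        # One interim chunk plus the end signal, just to test flushing behavior.
        yield 'data: {"choices":[{"delta":{"content":"Validating your request. ","role":"assistant"}}]}\n\n'
        yield "data: [DONE]\n\n"

    return Response(
        stream_with_context(generate()),
        mimetype="text/event-stream",
        headers={
            "Cache-Control": "no-cache",  # keep intermediaries from caching the stream
            "X-Accel-Buffering": "no",    # ask nginx not to buffer the response
        },
    )

if __name__ == "__main__":
    app.run(port=8000)
```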
k
Hi, checking if this is resolved for you?
s
Hi, this is not solved, but you can close this as I am following a different approach now.
k
Marking this ticket as Solved ✅