Custom LLM integration issue
# support
m
Hi, I implemented a custom LLM backend using LangGraph. I'm supplying content back in the response for Vapi to consume, and I can see in my logs that the content is filled with some sort of response, but Vapi doesn't seem to pick it up, so it's just silence after the first intro message. Can someone help me? Literally this is my last blocker before getting an end-to-end solution in place! Thanks. Call ID: 25de6020-2b3d-4278-96ca-6fa32606e434
k
If you're using tools, make sure you're returning properly structured JSON (with toolCallId and result); for the LLM responses themselves, stream SSE with the right headers and format (Content-Type: text/event-stream, each message prefixed by data:). Otherwise Vapi will stay silent after the intro.
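For the tool-call case, the response body should look roughly like this (a sketch based on Vapi's server-tool docs; the ID and result here are made up):
```python
# Sketch of a tool-call response body for Vapi (hypothetical values):
tool_response = {
    "results": [
        {
            "toolCallId": "call_abc123",       # echo the ID Vapi sent in the request
            "result": "42 degrees and sunny",  # plain-string result of the tool
        }
    ]
}
```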
m
Thanks man, however I'm not using any tools or the workflow builder; that's all managed in the backend within the graph nodes. OK, so you're saying to implement a server-sent events endpoint for the chat completion for the custom LLM in the model? https://cdn.discordapp.com/attachments/1373571346667143271/1373576278128721930/image.png?ex=682aea18&is=68299898&hm=0165f05dcdb45e3251d9a3f1134d2837167a7adb303ab2a6700f13cc3f052455&
k
Your Vapi setup points to a custom LLM endpoint, so to avoid silence after the intro, your /chat/completions route must stream responses using Server-Sent Events (SSE) with the proper headers and a data: format ending in [DONE]. https://docs.vapi.ai/customization/custom-llm/using-your-server
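Something like this, roughly (a minimal FastAPI sketch, not your exact setup; swap the placeholder token loop for your LangGraph graph's streamed output):
```python
# Minimal SSE /chat/completions sketch (assumes FastAPI; the token loop
# is a stand-in for your LangGraph invocation).
import json
import time

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()  # Vapi sends an OpenAI-style request body

    async def event_stream():
        # Replace this loop with tokens streamed from your graph nodes.
        for token in ["Hello", " from", " the", " graph"]:
            chunk = {
                "id": "chatcmpl-custom",
                "object": "chat.completion.chunk",
                "created": int(time.time()),
                "model": body.get("model", "custom-llm"),
                "choices": [
                    {"index": 0, "delta": {"content": token}, "finish_reason": None}
                ],
            }
            yield f"data: {json.dumps(chunk)}\n\n"
        # The terminator is what tells Vapi the turn is finished.
        yield "data: [DONE]\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")
```
Each data: line is one OpenAI-style completion chunk; without the final data: [DONE], Vapi keeps waiting and you get exactly that silence.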
m
Ok that worked!!!! Please update your docs and the YouTube video to explicitly state that for a custom LLM you need to use SSE endpoints.
p
It is already mentioned in the documentation.