Custom LLM
# support
k
v
You can continue this conversation by mentioning me in the message.
@KillerBoss Hello! I'd be happy to assist you with implementing the Custom LLM in VAPI. It sounds like VAPI is able to reach your server, but there seems to be an issue with the response being delivered back to the call. Let's go through a few steps to troubleshoot this:

1. **Check Response Format**: Ensure that the response from your server is in the format VAPI expects. It should typically be a JSON object with the appropriate fields filled in (a sketch of this shape follows below).
2. **Response Timing**: Verify that your server responds within the expected time frame; delays could cause timeouts.
3. **Logs and Errors**: Review your server logs for any errors when processing the request or generating the response.
4. **Network Issues**: Make sure no network issues are preventing the response from being sent back to VAPI.
5. **VAPI Configuration**: Double-check your VAPI configuration in the dashboard to ensure the Custom LLM is set up correctly.

If you've already checked these and the issue persists, could you provide more details about the response your server is sending? Any specific error messages or logs would also be very helpful.
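For reference, a custom-LLM endpoint is generally expected to speak the OpenAI Chat Completions format. Below is a minimal Node.js sketch of that response shape; the field values are placeholders, and exactly which fields VAPI requires (and whether it accepts a non-streaming reply like this at all) is an assumption here, not something confirmed in this thread:

```js
// Hypothetical sketch of an OpenAI-style chat completion response body.
// Field names follow the OpenAI Chat Completions spec; values are
// illustrative only.
const payload = {
  id: 'chatcmpl-example',                  // any unique id
  object: 'chat.completion',
  created: Math.floor(Date.now() / 1000),  // unix timestamp in seconds
  model: 'custom-llm',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Hello! How can I help?' },
      finish_reason: 'stop'
    }
  ]
};
```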
k
What is the correct format that VAPI expects?
There's no documentation about it.
v
Got it. Tagging @User for additional help.
k
We've had our own OpenAI implementation for 1+ years. We only want to use our assistants.
@User
Update??
@User
s
Did you get a chance to check out this repository? https://github.com/VapiAI/advanced-concepts-custom-llm

https://www.youtube.com/watch?v=-1xWhYmOT0A

k
Hi, I did. I have no knowledge of Python, but on another site I saw a Node.js version, and its response was like mine.
@Sahil
Other people have asked about the same issue and gotten no response.
Look
@Sahil
s
We don't have any support service on Stack Overflow.
Can you send me your latest call_id?
Let me provide you with the exact error message.
k
Ok, hold on
@Sahil
{
  id: 'bc30f3ef-9237-4182-8d9e-0ebf97b6d0af',
  orgId: 'ed638ec9-e74d-4c6d-aa5a-544b29c60f68',
  createdAt: '2024-09-24T08:44:11.220Z',
  updatedAt: '2024-09-24T08:44:11.220Z',
  type: 'webCall',
  monitor: {
    listenUrl: 'wss://aws-us-west-2-production2-phone-call-websocket.vapi.ai/bc30f3ef-9237-4182-8d9e-0ebf97b6d0af/listen',
    controlUrl: 'https://aws-us-west-2-production2-phone-call-websocket.vapi.ai/bc30f3ef-9237-4182-8d9e-0ebf97b6d0af/control'
  },
  webCallUrl: 'https://vapi.daily.co/a1w0aCsuEDB1lnWQAiKA',
  status: 'queued',
  assistantId: '4040490d-6a2d-490a-a632-bd2105d0045e'
}
s
🔵 08:44:23:740 LLM Stream Complete. Cost: 0 (+0). Prompt: 39 (+39). Completion: 0 (+0)
We didn't receive any response.
🔵 08:44:23:467 CustomLLMRequest Messages: [ { "role": "assistant", "content": "Hola, gracias por llamar a Florida Find Cars. Nombre es Pamela." }, { "role": "user", "content": "Buenos días, quiero comprar un vehículo." } ]
Can you please use the Python library that was sent to you?
I don't know how to use Python at all.
Are you blocking ngrok?
Maybe it's not the expected content type, "application/json".
Our software isn't written in Python. I want this to work with Node.js.
I will continue checking. Thanks
s
Can you enable streaming
and respond back with (`data: <your streaming data>`)?
Check the format.
k
Ready. Thanks, buddy.

res.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  Connection: 'keep-alive'
});
res.write(`data: ${JSON.stringify(payload)}\n\n`);
res.end();
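Put together, a minimal self-contained sketch of this working pattern might look like the following. It assumes an Express server; the endpoint path, model name, and reply text are illustrative, and the chunk shape follows the OpenAI streaming ("chat.completion.chunk") convention rather than anything VAPI-specific confirmed in this thread:

```js
// Minimal Express endpoint that streams an OpenAI-style chunk back to
// the caller as server-sent events. Assumes `npm install express`.
const express = require('express');
const app = express();
app.use(express.json());

// VAPI posts the conversation here; the path is illustrative.
app.post('/chat/completions', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });

  // One OpenAI-style streaming chunk carrying the assistant's reply.
  const chunk = {
    id: 'chatcmpl-example',
    object: 'chat.completion.chunk',
    created: Math.floor(Date.now() / 1000),
    model: 'custom-llm',
    choices: [
      { index: 0, delta: { content: 'Hola, gracias por llamar.' }, finish_reason: null }
    ]
  };
  res.write(`data: ${JSON.stringify(chunk)}\n\n`);

  // Close the stream the way the OpenAI API does.
  const done = {
    ...chunk,
    choices: [{ index: 0, delta: {}, finish_reason: 'stop' }]
  };
  res.write(`data: ${JSON.stringify(done)}\n\n`);
  res.write('data: [DONE]\n\n');
  res.end();
});

app.listen(3000);
```

The key difference from a plain JSON reply is the `Content-Type: text/event-stream` header and the `data: ...\n\n` framing, which is exactly what resolved the issue above.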
s
It is working, right?
k
Yes @Sahil
Streaming is the key.