OpenAI fine-tuned custom LLM without using my own ...
# support
call id: e95c8aff-1bd3-4166-be44-71d3f8dc77c0

As seen in the attached picture, I'm getting a custom LLM pipeline error. I tried to use OpenAI's chat completions endpoint, but it didn't work. I want to use the OpenAI server directly, not my own. I've set the custom LLM URL to https://api.openai.com/v1, but it fails. However, it works if I use my own server. These are the logs when I use Replit as an intermediate:

```
INFO:werkzeug:172.31.196.105 - - [20/Sep/2024 18:23:18] "GET / HTTP/1.1" 404 -
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:werkzeug:172.31.196.105 - - [20/Sep/2024 18:23:54] "POST /chat/completions HTTP/1.1" 200 -
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:werkzeug:172.31.196.105 - - [20/Sep/2024 18:24:13] "POST /chat/completions HTTP/1.1" 200 -
```

The code for a server that uses a custom LLM with the OpenAI API can be found here: https://replit.com/@jannismoore/Vapi-Custom-LLM-Conversational-Pathways?v=1

Please help me; I have already read the Vapi docs, which is why I am here. I also spoke to @nikhil, who said it would be useful if I provided the info above. Thanks guys! Under pressure from a client atm 😭

https://cdn.discordapp.com/attachments/1286756005363056681/1286756005631361094/image.png?ex=66ef106a&is=66edbeea&hm=1d7de20241fdf2098d90f0f67533e23662886505359a48c6a8cf26798fc8d91c&
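For reference, an intermediate server of this shape can be inferred from the werkzeug and httpx lines in the logs above. A minimal Flask sketch of what it might look like follows; this is not the actual Replit code, and the route handler name and default model are assumptions:

```python
# Minimal sketch of an OpenAI-compatible intermediate "custom LLM" server,
# inferred from the werkzeug/httpx log lines above. Not the actual Replit
# code; the default model below is a placeholder.
import os

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    body = request.get_json()
    # Forward the OpenAI-compatible request that Vapi sends on to OpenAI itself.
    completion = client.chat.completions.create(
        model=body.get("model", "gpt-4o-mini"),  # placeholder default
        messages=body["messages"],
    )
    return jsonify(completion.model_dump())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```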
Let's hop on a quick call
Check out this repository.
Also, if you are using an OpenAI fine-tuned model, why don't you put the OpenAI URL in directly?
"url": "https://custom-llm-oasis.replit.app",
remove it and put openai chat completeion api url
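For example, the model block of the assistant config might end up looking something like this; this is a sketch, where the field names follow Vapi's custom-llm provider convention as I understand it, and the fine-tuned model ID is a placeholder:

```json
"model": {
  "provider": "custom-llm",
  "url": "https://api.openai.com/v1",
  "model": "ft:gpt-4o-mini-2024-07-18:your-org::placeholder"
}
```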