An error occurred in this call: pipeline-error-cus...
# support
Rick S.
Hi there, I'm getting the error "An error occurred in this call: pipeline-error-custom-llm-llm-failed" when I try to use a fine-tuned model from OpenAI. The hint under the "Custom LLM URL" field in the assistant shows the message: "Vapi will use https://api.openai.com/v1/chat/completions for completions." Checking the logs, they say the API key is incorrect. However, when I test it with Postman (same API key, custom model, and endpoint) it works fine (see attached images). Any help? Many thanks in advance.
https://cdn.discordapp.com/attachments/1265349204524400671/1265349204687982673/Captura_de_pantalla_2024-07-23_a_las_17.47.02.png?ex=66a12fc7&is=669fde47&hm=8eb623ade171f17e550630a00b370e9d031d6b9fe5ce27dad2ce77c7f5728865& https://cdn.discordapp.com/attachments/1265349204524400671/1265349205191557272/Captura_de_pantalla_2024-07-23_a_las_18.13.44.png?ex=66a12fc7&is=669fde47&hm=ac13908c7581dbb094d30e6558b0fe4f88dbf704e393b97b882d0169a2a2089c&
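For reference, the Postman check described above can be reproduced in code; a minimal sketch that calls OpenAI's chat completions endpoint directly with the same key and fine-tuned model. The model ID here is a placeholder, and `OPENAI_API_KEY` is assumed to be set in the environment:

```typescript
// Reproduce the Postman test: call OpenAI's chat completions endpoint directly
// with the same API key and fine-tuned model. If this succeeds, the key and
// model are valid and the failure is on the Vapi configuration side.
// FINE_TUNED_MODEL is a placeholder; substitute your own ft:... model ID.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY!;
const FINE_TUNED_MODEL = "ft:gpt-3.5-turbo-1106:your-org:your-suffix:abc123";

async function testFineTunedModel(): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: FINE_TUNED_MODEL,
      messages: [{ role: "user", content: "Hello" }],
    }),
  });
  if (!res.ok) throw new Error(`OpenAI error ${res.status}: ${await res.text()}`);
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

testFineTunedModel().catch(console.error);
```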
Vapi Support Bot
@Rick S. Hello! It sounds like you're experiencing an issue where the API key is not recognized correctly by Vapi when using a fine-tuned model, despite it working fine with Postman. This could be due to a configuration issue within Vapi. To assist you better, I'll need to check a few details:

1. Can you confirm that the API key you are using has the necessary permissions to access the fine-tuned model?
2. Have you made sure that the API key is entered in Vapi's settings without any leading or trailing whitespace?
3. Is the fine-tuned model ID specified correctly in Vapi's configuration?

Once you've confirmed these details, if the issue persists we may need to update the model configuration on Vapi's end to ensure it's aligned with the correct API key and model settings. If you need assistance with this process, or if there's anything specific you'd like me to do, please let me know!
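Points 1 and 3 of that checklist can be verified programmatically; a minimal sketch, assuming the key is in `OPENAI_API_KEY`, that lists the models visible to the key and confirms the fine-tuned model ID appears among them (`FINE_TUNED_MODEL` is a placeholder):

```typescript
// Verify the API key can see the fine-tuned model by listing available models
// (GET /v1/models). If the ft:... ID is missing from the list, this key does
// not have access to the fine-tuned model.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY!;
const FINE_TUNED_MODEL = "ft:gpt-3.5-turbo-1106:your-org:your-suffix:abc123";

async function checkModelAccess(): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${OPENAI_API_KEY}` },
  });
  const { data } = (await res.json()) as { data: { id: string }[] };
  const found = data.some((m) => m.id === FINE_TUNED_MODEL);
  console.log(found ? "Key can access the model." : "Model not visible to this key.");
}

checkModelAccess().catch(console.error);
```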
Rick S.
> Can you confirm that the API key you are using has the necessary permissions to access the fine-tuned model?
Yes, it's fine.
> Have you made sure that the API key is entered in Vapi's settings without any leading or trailing whitespace?
Yes, it's correct.
> Is the fine-tuned model ID specified correctly in Vapi's configuration?
Yes, it is.
Shubham Bajaj
@Rick S. You're setting the API key incorrectly:

1. Set https://api.openai.com/v1 as the Custom LLM URL.
2. Assign the fine-tuned model's key to the OpenAI key.
3. Update the model to your fine-tuned model.
4. Execute a PATCH request to the /assistant endpoint and ensure that model.metadataSendMode is set to off (see the sketch below).
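A minimal sketch of step 4, assuming the Vapi REST base URL is https://api.vapi.ai, authentication is a Bearer Vapi API key, and `ASSISTANT_ID` is a placeholder for your assistant's ID (none of which are shown in the thread); the body uses the fields Rick posts below:

```typescript
// Step 4: PATCH the assistant so Vapi metadata is not sent to the custom LLM.
// Assumptions: https://api.vapi.ai base URL, Bearer-token auth with a Vapi
// API key, and ASSISTANT_ID as a placeholder for the real assistant ID.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "YOUR_ASSISTANT_ID";

async function patchAssistant(): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: {
        provider: "custom-llm",
        model: "ft:gpt-3.5-turbo-1106:deliverlogic:test4:9mT9Dcob",
        url: "https://api.openai.com/v1",
        metadataSendMode: "off", // step 4: stop sending Vapi metadata
      },
    }),
  });
  if (!res.ok) throw new Error(`PATCH failed: ${res.status} ${await res.text()}`);
  console.log("Assistant updated:", await res.json());
}

patchAssistant().catch(console.error);
```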
Rick S.
Hi @Shubham Bajaj, I did the PATCH on the assistant. Here is the request body:

```json
{
  "model": {
    "provider": "custom-llm",
    "model": "ft:gpt-3.5-turbo-1106:deliverlogic:test4:9mT9Dcob",
    "url": "https://api.openai.com/v1",
    "metadataSendMode": "off"
  }
}
```

but I'm still getting the error:

> Model request failed (attempt #1, reason: (Incorrect API key provided: no-custo**************ided. You can find your API key at https://platform.openai.com/account/api-keys.))

Why?
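The masked value in that error (no-custo…ided) looks like a placeholder string rather than a real key, which would suggest the custom LLM key was never attached to the outgoing request. One way to check what the assistant actually has stored is to read it back; a minimal sketch, assuming Vapi also exposes a GET /assistant/{id} endpoint mirroring the PATCH above, with `ASSISTANT_ID` again a placeholder:

```typescript
// Read back the assistant to inspect its stored model configuration.
// Assumption: a GET /assistant/{id} endpoint on https://api.vapi.ai
// mirroring the PATCH used above; ASSISTANT_ID is a placeholder.
const VAPI_API_KEY = process.env.VAPI_API_KEY!;
const ASSISTANT_ID = "YOUR_ASSISTANT_ID";

async function getAssistant(): Promise<void> {
  const res = await fetch(`https://api.vapi.ai/assistant/${ASSISTANT_ID}`, {
    headers: { Authorization: `Bearer ${VAPI_API_KEY}` },
  });
  const assistant = await res.json();
  // Confirm the fields from the PATCH survived; the provider key itself
  // should never be echoed back, only how it is referenced.
  console.log(JSON.stringify(assistant.model, null, 2));
}

getAssistant().catch(console.error);
```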
Shubham Bajaj
Where have you passed your API key in the Vapi dashboard?
Also, if possible, can you share a Loom video of your setup (with the key hidden)?