Call hangs up after first message
# support
After the first message of the outbound call, I get this error:

`An error occurred in this call: pipeline-error-together-ai-llm-failed`

Call ID: 259f3b6d-de71-45d2-8f96-fd5cddcee94f

It works on the Web app.
Hi @Ethan Tan, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Call Ended Reason]()
* [Create Phone Call]()
* [List Calls]()
* [Get Call]()
-----------------------------
### Answer
The error `pipeline-error-together-ai-llm-failed` indicates a failure within the AI processing pipeline at the Large Language Model (LLM) stage, in this case involving Together AI. This error can occur for several reasons, such as issues with the LLM integration or configuration, exceeding your subscription or quota limit with the LLM provider, or other underlying problems with the LLM service itself [(1)]().

Since the issue doesn't occur on the Web app, the problem is likely specific to how the outbound call setup interacts with the LLM, or to the environment where the call is initiated. Based on the available information, you can check the following:

- Ensure the LLM provider (Together AI) settings and keys are correctly configured for outbound calls.
- Verify whether there are any limitations or issues on the Together AI side that could affect the call flow.
- Test with a different LLM provider or settings as a process of elimination to pinpoint the cause.

Specific steps to resolve this error aren't detailed in the provided documents, so these suggestions are based on general troubleshooting practices for such integration issues.
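As one way to act on the first suggestion, here is a minimal sketch of switching the assistant's model via Vapi's REST API. The endpoint path, the field names (`model.provider`, `model.model`), the `together-ai` provider identifier, and the example model ID are all assumptions based on Vapi's assistant configuration schema; verify them against the current API reference before use.

```javascript
// Hypothetical sketch: update a Vapi assistant to a Together AI model
// that supports JSON mode / function calling. Field names, the provider
// string, and the endpoint path are assumptions -- verify against Vapi's docs.
const VAPI_API_KEY = process.env.VAPI_API_KEY;
const assistantId = "YOUR_ASSISTANT_ID"; // placeholder

// Request body: only the model block is updated.
const body = {
  model: {
    provider: "together-ai", // assumed provider identifier
    model: "mistralai/Mixtral-8x7B-Instruct-v0.1", // example; confirm tool support
  },
};

async function updateAssistantModel() {
  // PATCH the assistant with the new model configuration.
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Update failed: ${res.status}`);
  return res.json();
}
```

After updating, place a fresh outbound test call to confirm the pipeline error no longer fires after the first message.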
```json
{
  "error": {
    "message": "meta-llama/Llama-3-70b-chat-hf is not supported for JSON mode/function calling",
    "type": "invalid_request_error",
    "param": null,
    "code": "constraints_model"
  },
  "code": "constraints_model",
  "param": null,
  "type": "invalid_request_error",
  "message": "400 meta-llama/Llama-3-70b-chat-hf is not supported for JSON mode/function calling",
  "stack": "Error: 400 meta-llama/Llama-3-70b-chat-hf is not supported for JSON mode/function calling\n    at APIError.generate (/app/node_modules/openai/error.js:45:20)\n    at OpenAI.makeStatusError (/app/node_modules/openai/core.js:263:33)\n    at OpenAI.makeRequest (/app/node_modules/openai/core.js:306:30)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
}
```
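This payload pinpoints the root cause: the selected Together AI model rejects JSON mode / function calling with a 400 `constraints_model` error, so the call fails as soon as the pipeline needs structured output. A small sketch of how client code might recognize this specific error shape before retrying with a different model (the helper name is hypothetical):

```javascript
// Hypothetical helper: detect Together AI's "constraints_model" rejection,
// which signals the chosen model does not support JSON mode / function calling.
function isUnsupportedModelError(err) {
  // OpenAI-SDK-style errors carry the API body under `error`; fall back
  // to the object itself for flat payloads.
  const body = err && err.error ? err.error : err;
  return Boolean(
    body &&
      body.type === "invalid_request_error" &&
      body.code === "constraints_model"
  );
}

// Payload shaped like the error above:
const togetherError = {
  error: {
    message:
      "meta-llama/Llama-3-70b-chat-hf is not supported for JSON mode/function calling",
    type: "invalid_request_error",
    param: null,
    code: "constraints_model",
  },
};

console.log(isUnsupportedModelError(togetherError)); // true
```

When this check fires, the fix is configuration, not retrying: switch the assistant to a model that supports function calling, or disable tool use for this assistant.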
Thank you
Glad I could be helpful. Feel free to create a new thread with any new questions you may have.