Cerebras Pipeline Error
# support
h
I once again got: An error occurred in this call: pipeline-error-cerebras-llm-failed I thought this was fixed?
v
Hi there! 👋 I understand you're encountering the "pipeline-error-cerebras-llm-failed" error again, which is certainly frustrating. Based on the information I have, this error is still being worked on.

According to our records, this error indicates a failure within the processing pipeline specifically related to the Cerebras LLM. A ticket has been opened for this issue, and the estimated resolution timeframe is "most likely by the start of next week" [as noted in a Discord thread](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342548300561121330). The team is aware that this error has occurred multiple times and is causing pipeline failures. They're actively working on a fix, and there have been some updates to improve error handling for Cerebras call termination scenarios.

Is there anything else I can help you with regarding this issue or any other questions you have?

Sources:
- [Discord Thread](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342548300561121330)
- [Vapi Documentation: Call Ended Reason](https://docs.vapi.ai/calls/call-ended-reason)
- [Vapi Changelog](https://docs.vapi.ai/changelog/2025/1/11)
k
Hey! To help track down this issue, could you share:
- The call ID

This would really help us figure out what went wrong!
h
103fca72-6255-47ad-90c2-6c166536f50f
k
This appears to be a model-specific issue with how the Cerebras implementation of Llama-3.3-70b handles function calling, rather than an issue with your specific function definition or configuration. The Cerebras Llama-3.3-70b model failed to generate proper JSON-formatted tool calls. Instead of producing the expected JSON structure for the tool call, it attempted to use a direct function-call syntax:
```json
"failed_generation": "book_zoom(name=Hugo, time=2 PM, email=hugopodw@gmail.com, iana_timezone=America/New_York)"
```
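For contrast, a well-formed tool call in the OpenAI-style schema (the structure tool-calling pipelines generally expect) would look roughly like the sketch below. The `id` value is made up for illustration; the field layout follows the OpenAI chat-completions tool-call format, with `arguments` as a JSON-encoded string:

```json
{
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "book_zoom",
        "arguments": "{\"name\": \"Hugo\", \"time\": \"2 PM\", \"email\": \"hugopodw@gmail.com\", \"iana_timezone\": \"America/New_York\"}"
      }
    }
  ]
}
```

The failure above shows the model emitting the call as plain Python-like text instead of this structured JSON, which is why the pipeline cannot parse it.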
The most immediate solution is to switch to a different model provider, such as OpenAI or Anthropic, that properly supports the expected JSON format for function/tool calls.
You can also try reporting this issue to Cerebras, as it appears to be a limitation or bug in their implementation of function calling in the Llama-3.3-70b model.
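As a rough sketch, switching the provider would mean updating the `model` section of your assistant configuration to something like the following. The exact field values here (`"openai"` / `"gpt-4o"`) are illustrative; check Vapi's assistant API reference for the providers and model names currently supported:

```json
{
  "model": {
    "provider": "openai",
    "model": "gpt-4o"
  }
}
```

Your existing tool/function definitions should not need to change; only the provider and model fields do.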
h
got it thanks
k
Marking this ticket as Solved ✅