Cerebras Enterprise API Problem
# support
d
Hey, we are trying to debug what is happening. We have a Cerebras Enterprise account using our own API. An error occurred in this call: pipeline-error-cerebras-llm-failed. We get this but don't really understand what the issue is.
v
Hi there! 👋 I understand you're encountering a "pipeline-error-cerebras-llm-failed" error with your Cerebras Enterprise API. While I don't have information about your specific account setup or the exact cause of your particular error, I can help with general debugging approaches and potential causes for this type of error.

### Potential Causes

1. **Service Issues**: This could be due to intermittent issues with the Cerebras LLM service itself, including service outages or network problems.
2. **Tool Integration Problems**: If you're using tools with your LLM, there might be issues with how tool results are formatted and sequenced. According to [similar errors with other LLMs](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1331340140064407573), problems can occur when tool use and tool result blocks aren't correctly formatted.
3. **Regional Restrictions**: The Cerebras model might not be available in your region, similar to issues seen with [other LLMs](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1333867863462903868).
4. **Token Limits**: Your request might be exceeding token limits, or there might be incorrect token calculations.
5. **API Key Issues**: There could be problems with your Cerebras API key, such as it being invalid or exceeding usage limits.
6. **Concurrency Issues**: The error might be due to unhandled parallel calls or multiple tool calls without responses.
7. **Incorrect Configuration**: The `assistantOverrides.model.provider` field might be incorrectly specified.

### Debugging Steps

Here are some general debugging steps you can take:

1. **Check Logs**: Use the `GET Logs` API endpoint to retrieve logs related to the API call that resulted in the error. Look for any specific error messages or stack traces.
2. **Review Configuration**: Ensure that your API keys and other configurations are correct.
3. **Check Network Connectivity**: Verify that your API calls have proper network access.
4. **Update Packages**: Ensure that all packages are up to date.
5. **Use Analytics**: The `POST Create Analytics Queries` API endpoint can provide insights into the performance and behavior of the API.
6. **Check Tool Formatting**: If you're using tools, ensure that tool use and tool result blocks are correctly formatted and sequenced.
7. **Enterprise Support**: If you have an Enterprise plan, you can reach out to the 24/7 support with a dedicated solutions engineer.

Would you be able to share more details about your implementation or any specific error messages you're seeing? This would help me provide more targeted assistance.

Sources:
- [Discord conversation about pipeline errors](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1331340140064407573)
- [Discord conversation about regional restrictions](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1333867863462903868)
- [Documentation on call ended reasons](https://docs.vapi.ai/calls/call-ended-reason)
- [Documentation on Enterprise support](https://docs.vapi.ai/enterprise)
- [API reference for logs](https://docs.vapi.ai/api-reference/tools/list)
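The log-retrieval step above can be sketched like this. A minimal example assuming a Vapi-style `GET /logs` endpoint at `https://api.vapi.ai` with Bearer-token auth and a `callId` query filter; the host, path, and parameter names are assumptions, not confirmed in this thread, so check the linked API reference before using them:

```python
import urllib.request

API_KEY = "YOUR_VAPI_API_KEY"  # assumption: Bearer-token auth
BASE_URL = "https://api.vapi.ai"  # assumption: default API host
CALL_ID = "cc04fad8-d7a7-4b7b-9e9e-496fa55f2e53"  # call ID from this thread

def build_logs_request(call_id: str) -> urllib.request.Request:
    """Build a GET /logs request filtered to one call.

    The `callId` filter name is an assumption for illustration.
    """
    url = f"{BASE_URL}/logs?callId={call_id}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {API_KEY}"},
        method="GET",
    )

req = build_logs_request(CALL_ID)
# urllib.request.urlopen(req) would actually fetch the logs;
# here we only show the request that would be sent.
print(req.get_method(), req.full_url)
```

Inspecting the returned log entries for the failing request's error message or stack trace is usually the fastest way to narrow down a `pipeline-error-*` failure.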
d
cc04fad8-d7a7-4b7b-9e9e-496fa55f2e53 this is the call log @Shubham Bajaj
10:29:53:326 [LOG] Model request started (attempt #1, llama-3.3-70b, cerebras)
10:29:53:532 [WARN] Model request failed (attempt #1, reason: (400 status code (no body)))
getting this error
@Shubham Bajaj
k
looking into it.
You're encountering a 400 Bad Request error when making a model request to the Cerebras LLM provider using Llama-3.3-70b, but the API isn't returning any specific details about what's wrong with your request. This makes it difficult to diagnose the exact issue causing the failure.

Based on my analysis of the logs, the most likely issues are:

1. The conversation history is exceeding the model's maximum context window
2. There may be formatting issues in the message structure
3. There might be content in the messages that triggers the model's content filters

The most likely issue is that your conversation history is too long for the Llama-3.3-70b model on Cerebras. Looking at your logs, there are over 100 messages in the conversation history, which is likely exceeding the model's token limit.

Solution:
- Trim the conversation history to include only the most recent and relevant messages
- Consider keeping only the last 10-20 messages, focusing on the most recent interactions
- You can implement a sliding window approach to automatically remove older messages
If the issue persists with Cerebras, you might want to try an alternative model provider.
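The sliding-window trimming suggested above can be sketched as follows. This is a minimal illustration, not a Vapi or Cerebras API: the message shape (`role`/`content` dicts) and the 20-message window are assumptions, and counting messages is only a rough proxy for counting tokens:

```python
def trim_history(messages, max_messages=20):
    """Keep the system prompt (if any) plus the most recent messages.

    A simple sliding window: drops the oldest non-system messages so
    the request stays under the model's context limit. max_messages
    is an illustrative default, not a documented Cerebras limit.
    """
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    return system + rest[-max_messages:]

# Example: a 100+ message history like the one in the failing call
history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(120):
    role = "user" if i % 2 == 0 else "assistant"
    history.append({"role": role, "content": f"message {i}"})

trimmed = trim_history(history, max_messages=20)
print(len(trimmed))  # 21: the system prompt plus the last 20 turns
```

For production use you would trim by estimated token count rather than message count, since message lengths vary widely.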
d
Thanks