Hi there! 👋 I understand you're running into an error with your message content despite being under the documented character limit.
I can't provide the exact character limit, but I can explain why you might be experiencing this issue and what the "pipeline-no-available-llm-model" error means.
According to the information I found, while the documentation states a 1,000,000 character limit for message content, there are several factors that might be causing your error:
1. **Payload Size Limits**: Vapi rejects payloads exceeding 100KB, which is far below the documented character limit. Some users have reported issues with payloads smaller than 100KB, with one user finding a limit of approximately 64KB. These errors can manifest as HTTP 413 (Payload Too Large) responses [(Source)](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342521496920395878).
2. **Token Limits**: The error might be caused by exceeding the model's token limit, even when the character count is within the documented limit [(Source)](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1323288623629471866).
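If you want to check whether the payload size limit is the culprit, you can measure the serialized request body before sending it. Here is a minimal sketch; the payload shape is a simplified stand-in for an actual Vapi request, and the exact enforcement threshold is an assumption based on the reports above:

```python
import json

# Hypothetical payload mirroring the shape of a chat request body
payload = {
    "messages": [
        {"role": "user", "content": "x" * 120_000},  # oversized message content
    ]
}

# Serialize as it would go over the wire and measure *bytes*, not characters:
# multi-byte UTF-8 characters make the byte size exceed the character count.
body = json.dumps(payload).encode("utf-8")
size_bytes = len(body)

PAYLOAD_LIMIT = 100 * 1024  # assumed 100KB limit; some users report hitting ~64KB
if size_bytes > PAYLOAD_LIMIT:
    print(f"Payload is {size_bytes} bytes; expect an HTTP 413 above {PAYLOAD_LIMIT}")
```

Checking the byte size client-side lets you fail fast with a clear message instead of waiting for a 413 from the API.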
As for the "pipeline-no-available-llm-model" error specifically, it means that "No suitable LLM was available to process the request" [(Source)](https://docs.vapi.ai/calls/call-ended-reason). This could be due to:
- LLM failures due to service outages, network issues, or problems with specific requests
- Incorrectly configured LLM keys or exceeding usage limits
- Token limits being exceeded
- The project not having access to the specified model
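To sanity-check the token-limit cause before digging into configuration, a rough rule of thumb is about 4 characters per token for English text. This heuristic is an approximation, not the model's actual tokenizer (use the provider's tokenizer, e.g. tiktoken for OpenAI models, for an exact count):

```python
def rough_token_estimate(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text.
    Use the model's actual tokenizer for an exact count."""
    return max(1, len(text) // 4)

# A 1,000,000-character message would be on the order of 250,000 tokens,
# far beyond most models' context windows.
print(rough_token_estimate("x" * 1_000_000))
```

If the estimate already exceeds your model's context window, the character limit is not the binding constraint.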
To resolve this issue, you might want to try:
- Further reducing the size of your content
- Checking that your LLM key is correctly configured and has not exceeded any usage limits
- Ensuring your requests are within the token limits of the model you're using
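If you need to shrink content to fit, truncate by UTF-8 bytes rather than characters so the result matches what the server measures. A small helper sketch (the 60,000-byte target is an assumption chosen to stay under the ~64KB floor some users reported):

```python
def truncate_to_bytes(text: str, max_bytes: int) -> str:
    """Trim text so its UTF-8 encoding fits within max_bytes,
    without splitting a multi-byte character at the cut point."""
    encoded = text.encode("utf-8")
    if len(encoded) <= max_bytes:
        return text
    # errors="ignore" silently drops a partial trailing character, if any.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

# Keep well under the reported ~64KB floor:
safe_content = truncate_to_bytes("x" * 200_000, 60_000)
```

Truncating at a byte boundary is important because slicing the string by character count can still produce an over-limit byte payload when the text contains emoji or accented characters.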
Is there anything specific about your implementation that you'd like me to help with?