pipeline-no-available-llm-model
# support
a
The docs say message content has a 1,000,000-character limit, but I am getting an error on content that is 92,262 characters. When I decrease the size of the content, the error goes away. What is the actual character limit for message content?
v
Hi there! 👋 I understand you're running into an error with your message content despite being under the documented character limit. I can't provide the exact character limit, but I can explain why you might be experiencing this issue and what the "pipeline-no-available-llm-model" error means.

According to the information I found, while the documentation states a 1,000,000-character limit for message content, there are several factors that might be causing your error:

1. **Payload Size Limits**: Vapi rejects payloads exceeding 100KB, which is significantly less than the documented character limit. Some users have reported issues with payloads smaller than 100KB, with one user finding a limit of approximately 64KB. These errors can manifest as HTTP 413 errors (Payload Too Large) [(Source)](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342521496920395878).
2. **Token Limits**: The error might be related to exceeding token limits, even if the character count is within the documented limit [(Source)](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1323288623629471866).

As for the "pipeline-no-available-llm-model" error specifically, it means that "No suitable LLM was available to process the request" [(Source)](https://docs.vapi.ai/calls/call-ended-reason). This could be due to:

- LLM failures due to service outages, network issues, or problems with specific requests
- Incorrectly configured LLM keys or exceeded usage limits
- Token limits being exceeded
- The project not having access to the specified model

To resolve this issue, you might want to try:

- Further reducing the size of your content
- Checking that your LLM key is correctly configured and has not exceeded any usage limits
- Ensuring your requests are within the token limits of the model you're using

Is there anything specific about your implementation that you'd like me to help with?
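Since the effective limit appears to be a payload byte size rather than a raw character count, one quick sanity check is to measure the serialized size of your request before sending it. Below is a minimal sketch, assuming a hypothetical message payload shape; the field names and the ~64KB threshold are illustrative assumptions, not Vapi's exact schema or a confirmed limit:

```python
import json

# Hypothetical payload; "role"/"content" are illustrative field names,
# not necessarily Vapi's exact request schema.
payload = {
    "role": "system",
    "content": "..."  # your long message content goes here
}

# Character count and serialized byte size can differ substantially:
# JSON escaping and multi-byte UTF-8 characters inflate the wire size.
char_count = len(payload["content"])
byte_size = len(json.dumps(payload).encode("utf-8"))

print(f"characters:    {char_count}")
print(f"payload bytes: {byte_size}")

# Conservative check against the smaller limit some users have reported (~64KB),
# since the documented 100KB cap may not be the effective one.
SOFT_LIMIT_BYTES = 64 * 1024
if byte_size > SOFT_LIMIT_BYTES:
    print("Payload may be too large; consider trimming or splitting the content.")
```

If the serialized size lands above that range, it could explain why content well under 1,000,000 characters still fails.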
a
@Shubham Bajaj can you lmk?
k
Hey! To help track down this issue, could you share:

- The call ID

This would really help us figure out what went wrong!