Get this error sometimes when using claude as the ...
# support
r
An error occurred in this call: call.in-progress.error-vapifault-anthropic-400-bad-request-validation-failed
Can I get some insights into this? Call IDs:
859612e7-b467-4d29-94e7-6d2d12798459
73e2eb0a-7d3b-4adf-ae07-39b7f15114a5
Call log warns: Model request failed (attempt #1, reason: (messages.10.content.0.tool_result.content.0.type: Field required))
(Read the thread below for more context; the args aren't getting passed to my server for some reason...)
v
Hi there! šŸ‘‹ I'm unable to provide specific insights about your particular calls based just on the IDs, but I can definitely help explain the error message and warning you're seeing, and provide some troubleshooting guidance.

The error message `call.in-progress.error-vapifault-anthropic-400-bad-request-validation-failed` indicates that there's a validation issue with an API request made to Anthropic (Claude) during a call that was already in progress within Vapi.

Looking at your call log warning, `Model request failed (attempt #1, reason: (messages.10.content.0.tool_result.content.0.type: Field required))` specifically points to a missing `type` field in your tool configuration. According to the information I found, this error occurs when the `type` field is either missing or has an invalid value within the `tools` array in your configuration. Each tool in the `tools` array must have a specified `type` value, which should be one of: `dtmf`, `endCall`, `transferCall`, `voicemail`, `output`, `make`, `ghl`, `bash`, `computer`, `textEditor`, or `function`.

Here are some common issues when using Claude as the processing model in Vapi:

1. **Tool Calling Issues**: Claude models can have issues with tool calling within Vapi, especially compared to models like GPT-4o. The error you're seeing suggests a problem with the tool schema or format compatibility between Anthropic and Vapi.
2. **Formatting and Sequencing Issues**: Incorrect formatting or sequencing of tool use and tool result blocks can lead to errors like the one you're experiencing.
3. **Missing Required Fields**: As your error indicates, missing required fields (like the `type` field) in tool configurations can cause validation failures.

For troubleshooting, I recommend:

1. Check your tool configuration and ensure each tool has a valid `type` field specified.
2. Verify that your tool properties are properly formatted (they need to be alphanumeric).
3. Ensure correct formatting and sequencing of tool use and tool result blocks.

Here's an example of how a correctly configured tool might look:
```json
{
    "type": "voicemail",
    "async": true,
    "function": {
        "name": "AnsweringMachineDetection",
        "strict": false,
        "description": "Detects if a machine answered the phone."
    },
    "messages": [
        {
            "type": "request-complete",
            "content": "Your voicemail message here",
            "endCallAfterSpokenEnabled": true
        }
    ]
}
```
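If it helps, here's a rough pre-flight check you could run over your `tools` array before sending it. This is a hypothetical helper, not part of any Vapi SDK; the valid type values are the ones listed above:

```python
# Hypothetical pre-flight check for a Vapi-style `tools` array
# (illustrative only; not a Vapi API).
VALID_TYPES = {
    "dtmf", "endCall", "transferCall", "voicemail", "output",
    "make", "ghl", "bash", "computer", "textEditor", "function",
}

def validate_tools(tools):
    """Return a list of problems found in a tools array."""
    problems = []
    for i, tool in enumerate(tools):
        tool_type = tool.get("type")
        if tool_type is None:
            problems.append(f"tools[{i}]: missing required 'type' field")
        elif tool_type not in VALID_TYPES:
            problems.append(f"tools[{i}]: invalid type {tool_type!r}")
    return problems

print(validate_tools([{"type": "voicemail"}, {"name": "no_type"}]))
# -> ["tools[1]: missing required 'type' field"]
```

Running a check like this before updating your assistant would surface the missing-`type` validation error locally instead of mid-call.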
Is there anything specific about your tool configuration that you'd like me to help with?

Sources:
- [What does the error message call.in-progress.error-vapifault-anthropic-400-bad-request-validation-failed mean when using Claude in Vapi?](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1332510949940133960)
- [What does the call log warning "Model request failed (attempt #1, reason: (messages.10.content.0.tool_result.content.0.type: Field required))" mean in Vapi?](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1333880245681393756)
- [Common issues when using Claude as the processing model in Vapi](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1339304698846380156)
- [Troubleshooting Anthropic 400 Bad Request errors in Vapi](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1340022199687577680)
r
Looking at my local logs, it doesn't look as if the arguments are getting passed into the tool call:
```python
{'type': 'function', 'id': 'toolu_01KfCPfGYLHuk4CjVsWHwCec', 'function': {'name': 'create_service_appointment_local_randy', 'arguments': {}}}
```
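One quick way to catch this locally before dispatching to your server, sketched as a hypothetical guard (not a Vapi API; the parsing handles both dict and JSON-string `arguments`, since providers encode them differently):

```python
import json

# Hypothetical guard: flag tool calls whose arguments arrived empty.
# `tool_call` mirrors the log line above.
def has_arguments(tool_call):
    args = tool_call.get("function", {}).get("arguments")
    # Some providers send arguments as a JSON-encoded string.
    if isinstance(args, str):
        try:
            args = json.loads(args)
        except json.JSONDecodeError:
            return False
    return bool(args)

tool_call = {
    "type": "function",
    "id": "toolu_01KfCPfGYLHuk4CjVsWHwCec",
    "function": {"name": "create_service_appointment_local_randy",
                 "arguments": {}},
}
print(has_arguments(tool_call))  # -> False: arguments were dropped
```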
```
An error occurred in this call: call.in-progress.error-vapifault-google-400-bad-request-validation-failed
```
Debugging with other LLMs... what's going on??
Call Id: 472cf4e4-547e-47d6-bff8-aa1d58b646d5
The calls are just ending mid conversation
Is there a limit for the arguments passed? For my tools with lower number of arguments, they are getting called correctly.
When you guys get to this thread, here are some call IDs you should check out (arguments not getting passed into my tool...):
fd2fe0ab-432b-483b-883a-ff7f9def4b88
"An error occurred in this call: call.in-progress.error-vapifault-anthropic-400-bad-request-validation-failed"
cd89c158-43f7-4c44-ba86-1e47eec5de87
k
looking into it.
Call ID: fd2fe0ab-432b-483b-883a-ff7f9def4b88
🟔 23:56:58:338 Anthropic stream terminated early: reached max_tokens limit. Response may be incomplete.
For this call ID, I don't see the argument being passed into the tool; instead, it has the warning message above.

Call ID: cd89c158-43f7-4c44-ba86-1e47eec5de87
The tool result data sent to Anthropic's API is missing the required "type": "text" field in the content structure, causing a 400 error when the API rejects the improperly formatted message.

I have informed the team about the Anthropic tool call result error, and they will be working on it. It will be fixed.
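For anyone hitting the same 400: the shape Anthropic's Messages API expects for a tool result looks roughly like the sketch below. The key point is that each block inside the `tool_result`'s `content` list must carry its own `type` field (the `tool_use_id` is taken from the log above; the text is illustrative):

```python
# Sketch of a correctly shaped tool-result message for Anthropic's
# Messages API; every content block carries an explicit "type".
tool_result_message = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_01KfCPfGYLHuk4CjVsWHwCec",
            "content": [
                # This inner "type": "text" is the field the error
                # (messages.N.content.0.tool_result.content.0.type:
                # Field required) says is missing.
                {"type": "text", "text": "Appointment created."},
            ],
        }
    ],
}
print(tool_result_message["content"][0]["content"][0]["type"])  # -> text
```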
r
🟔 23:56:58:338 Anthropic stream terminated early: reached max_tokens limit. Response may be incomplete.
For this warning, can you quantify the max tokens allowed per request, and is it possible to increase it? We're able to make the same request with OpenAI, but we don't get limited on tokens. The reason we don't want to use OpenAI models is because the responses are significantly worse. Or do we have to use Anthropic via custom LLM?
@Shubham Bajaj
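(For reference, when calling Anthropic's Messages API directly, e.g. through a custom LLM integration, `max_tokens` is a per-request cap on the response length. The payload sketch below is an assumption for illustration: the model name and limit are placeholders, and any account-level token limits on the platform side would apply on top of this.)

```python
# Sketch: per-request max_tokens in an Anthropic Messages API payload.
# Model name and value are illustrative assumptions, not Vapi settings.
request_payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 4096,  # caps the *response*; raise it if output truncates
    "messages": [
        {"role": "user", "content": "Book a service appointment."}
    ],
}
print(request_payload["max_tokens"])  # -> 4096
```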
k
Hey Randy, the issue is on us. I just got confirmation, and the team is working with the Anthropic team to increase our token limit. It's more of a contracts/agreement-side thing.
r
Thanks sm!