Randy
03/31/2025, 11:04 PM

Vapi Support Bot
03/31/2025, 11:06 PM
This error points to the `type` field in your tool configuration. Based on the information I found, it occurs when the `type` field is either missing or has an invalid value within the `tools` array in your configuration.
Each tool in the `tools` array must have a specified `type` value, which should be one of: `dtmf`, `endCall`, `transferCall`, `voicemail`, `output`, `make`, `ghl`, `bash`, `computer`, `textEditor`, or `function`.
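For illustration only, here is a minimal sketch of where the `type` field sits on each entry of the `tools` array (the tool choices and the function name are placeholders, not taken from your configuration; a fuller single-tool example follows below):

```json
{
  "tools": [
    {
      "type": "endCall"
    },
    {
      "type": "function",
      "function": {
        "name": "examplePlaceholderTool"
      }
    }
  ]
}
```

An entry that omits `type` at this level, or uses a value outside the list above, is what triggers this validation error.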
Here are some common issues when using Claude as the processing model in Vapi:
1. **Tool Calling Issues**: Claude models can have issues with tool calling within Vapi, especially compared to models like GPT-4o. The error you're seeing suggests a problem with the tool schema or format compatibility between Anthropic and Vapi.
2. **Formatting and Sequencing Issues**: Incorrect formatting or sequencing of tool use and tool result blocks can lead to errors like the one you're experiencing.
3. **Missing Required Fields**: As your error indicates, missing required fields (like the `type` field) in tool configurations can cause validation failures.
For troubleshooting, I recommend:
1. Check your tool configuration and ensure each tool has a valid `type` field specified.
2. Verify that your tool properties are properly formatted (they need to be alphanumeric).

Vapi Support Bot
03/31/2025, 11:06 PM
```json
{
  "type": "voicemail",
  "async": true,
  "function": {
    "name": "AnsweringMachineDetection",
    "strict": false,
    "description": "Detects if a machine answered the phone."
  },
  "messages": [
    {
      "type": "request-complete",
      "content": "Your voicemail message here",
      "endCallAfterSpokenEnabled": true
    }
  ]
}
```
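Along the same lines, here is a hedged sketch of a `function`-type tool; the function name, parameter, and description below are placeholders rather than anything from your setup, and the property names follow the alphanumeric formatting noted above:

```json
{
  "type": "function",
  "async": false,
  "function": {
    "name": "bookAppointment",
    "description": "Placeholder function for illustration only.",
    "parameters": {
      "type": "object",
      "properties": {
        "appointmentDate": {
          "type": "string",
          "description": "Requested appointment date in ISO 8601 format."
        }
      },
      "required": ["appointmentDate"]
    }
  }
}
```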
Is there anything specific about your tool configuration that you'd like me to help with?
Sources:
- [What does the error message call.in-progress.error-vapifault-anthropic-400-bad-request-validation-failed mean when using Claude in Vapi?](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1332510949940133960)
- [What does the call log warning "Model request failed (attempt #1, reason: (messages.10.content.0.tool_result.content.0.type: Field required))" mean in Vapi?](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1333880245681393756)
- [Common issues when using Claude as the processing model in Vapi](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1339304698846380156)
- [Troubleshooting Anthropic 400 Bad Request errors in Vapi](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1340022199687577680)

Randy
03/31/2025, 11:32 PM
{'type': 'function', 'id': 'toolu_01KfCPfGYLHuk4CjVsWHwCec', 'function': {'name': 'create_service_appointment_local_randy', 'arguments': {}}}

Randy
03/31/2025, 11:41 PM
An error occurred in this call: call.in-progress.error-vapifault-google-400-bad-request-validation-failed

Randy
03/31/2025, 11:41 PM

Randy
03/31/2025, 11:42 PM

Randy
03/31/2025, 11:42 PM

Randy
04/01/2025, 12:03 AM

Randy
04/01/2025, 12:05 AM

Kyle Brunker
04/01/2025, 5:07 PM

Kyle Brunker
04/01/2025, 5:25 PM

Randy
04/01/2025, 7:43 PM
:yellow_circle: 23:56:58:338 Anthropic stream terminated early: reached max_tokens limit. Response may be incomplete.
For this warning, can you quantify the max tokens allowed per request, and is it possible to increase it? We're able to make the same request with OpenAI without hitting a token limit. The reason we don't want to use OpenAI models is that the responses are significantly worse. Or do we have to use Anthropic via a custom LLM?
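As a hedged illustration of where such a cap is usually set (not an official answer): to my understanding, Vapi's assistant `model` block accepts a `maxTokens` setting, so something along these lines may be worth trying; the field name, model ID, and value below are assumptions to verify against the current Vapi API docs.

```json
{
  "model": {
    "provider": "anthropic",
    "model": "claude-3-5-sonnet-20240620",
    "maxTokens": 4000
  }
}
```

If the warning persists even with a higher `maxTokens`, the remaining ceiling may be the provider-side output limit for the chosen Claude model rather than a Vapi setting.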
Randy
04/01/2025, 7:43 PM

Kyle Brunker
04/01/2025, 10:27 PM

Randy
04/01/2025, 10:37 PM