Gemini 2.0 Flash tool call issue (custom LLM)
# support
h
When I'm using Gemini, I encounter issues when I make a tool call. I have no issues with Groq or OpenAI using the same code. (Using their OpenAI compatibility URLs)
v
Gemini 2.0 Flash enforces stricter payload requirements than Groq or OpenAI. For tool calls, ensure that STRING parameters use only the 'enum' format and that your response is a stringified JSON including the tool call ID, with no line breaks. For example:
```json
{
  "parameters": {
    "insuredEmail": "user@example.com" // must be an enum value if STRING type
  },
  "tool_call_id": "abc123"
}
```
Double-check that your payload structure matches Gemini's expectations to avoid errors like "Invalid JSON payload received..." ([documentation](https://discord.com/channels/1211482211119796234/1344710846865084616)).

Sources:
- [Gemini 2.0 Flash tool call implementation](https://discord.com/channels/1211482211119796234/1344710846865084616)
- [Gemini 2.0 Flash tool call troubleshooting](https://discord.com/channels/1211482211119796234/1332510949940133960)
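A minimal sketch of serializing a tool-call response as the single-line stringified JSON described above. The field names (`tool_call_id`, `result`) and the helper itself are illustrative assumptions based on the snippet, not a confirmed Vapi or Gemini schema:

```python
import json

def build_tool_response(tool_call_id: str, result: dict) -> str:
    """Serialize a tool-call result as compact, single-line JSON.

    Gemini rejects payloads containing embedded line breaks, so the
    response is serialized without whitespace. Field names here are
    assumptions for illustration and may differ in your integration.
    """
    payload = {"tool_call_id": tool_call_id, "result": result}
    # separators=(",", ":") produces compact output with no line breaks
    return json.dumps(payload, separators=(",", ":"))

response = build_tool_response("abc123", {"insuredEmail": "user@example.com"})
print(response)
```

The key point is the compact `json.dumps` call: pretty-printed JSON (with `indent=`) introduces the line breaks that trigger the "Invalid JSON payload received..." error.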
h
I don't understand what this documentation is saying
@User
a
Hey, could you please schedule a brief call at your convenience so we can discuss this matter in detail? Kindly use the following link to book a suitable time: .
h
Bro, the next available call is in June
a
We recommend using OpenAI or Anthropic for more stable tool calls
h
So does that mean Gemini tool calling is not compatible with Vapi?
OpenAI and Anthropic have extremely high latency
a
For latency issues, can you send some call ids so we can look into it?
h
It's not a Vapi-side latency issue; OpenAI simply has slow TTFT (time to first token)
This can be closed
a
sg.