Ben
01/16/2025, 8:09 PM
{
  "name": "call_PH8mMMjyeQt0ZSfe",
  "role": "tool_call_result",
  "time": 1737057643352,
  "result": "An unexpected error occurred.",
  "toolCallId": "call_PH8mMMjyeQt0ZSfe",
  "secondsFromStart": 24.426
}
And the client received the same.
I get this with gpt-4o-realtime-preview-2024-12-17 but not with "gpt-3.5-turbo". I appreciate that the Realtime API is in preview/beta and so not production-ready, but it would be good to know what is happening here. Maybe it doesn't support tool calling?
Vapi
01/16/2025, 8:10 PM
Vapi
01/16/2025, 8:10 PM
Ben
01/16/2025, 8:13 PM
{
  "role": "tool_calls",
  "time": 1737057643348,
  "message": "",
  "toolCalls": [
    {
      "id": "call_5LnMhRC3IgMAVgV1",
      "type": "function",
      "function": {
        "name": "retrieve_and_generate",
        "arguments": "{\"query\": \"経費精算 締め切りはいつ\"}"
      }
    }
  ],
  "secondsFromStart": 24.422
}
This worked and was received by the API, which performs RAG and returns a JSON-serialised response in the correct format described here: https://docs.vapi.ai/tools-calling
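For illustration, server-side handling along these lines would produce the response format from those docs (a minimal sketch only; handle_tool_call and the retrieve_and_generate stub are illustrative names, and just the toolCallId / result keys come from the docs and this thread):

import json


def retrieve_and_generate(query: str) -> dict:
    # Placeholder for the real RAG pipeline (not shown in this thread).
    return {"answer": f"(RAG answer for: {query})"}


def handle_tool_call(tool_call: dict) -> dict:
    """Build one entry of the "results" array from a toolCalls item
    shaped like the tool_calls log message above."""
    args = json.loads(tool_call["function"]["arguments"])
    answer = retrieve_and_generate(args["query"])
    return {
        "toolCallId": tool_call["id"],
        # "result" must be a string, so serialise structured data.
        "result": json.dumps(answer, ensure_ascii=False),
    }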
Ben
01/16/2025, 8:14 PM
Shubham Bajaj
01/17/2025, 8:27 PM
The tool call result should be returned in this format:
{
  "results": [
    {
      "toolCallId": "X",
      "result": "Y"
    }
  ]
}
If you want to store the result in message history, you can stringify the response in the format below, and the message content will be voiced out.
{
  "results": [
    {
      "toolCallId": "<insert-your-tool-call-id-here>",
      "result": "<insert-tool-call-result-here>",
      "message": {
        "type": "request-complete",
        "role": "assistant",
        "content": "<insert-the-content-here>"
      }
    }
  ]
}
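For reference, a small helper that produces either variant might look like this (a sketch based only on the two payload shapes above; the function name, parameter names, and defaults are illustrative):

from typing import Optional


def build_tool_result(tool_call_id: str, result: str,
                      spoken: Optional[str] = None) -> dict:
    """Wrap a tool result in the response shape shown above.

    When `spoken` is provided it is attached as a request-complete
    message; per the explanation above, that content is stored in
    message history and voiced out.
    """
    entry = {"toolCallId": tool_call_id, "result": result}
    if spoken is not None:
        entry["message"] = {
            "type": "request-complete",
            "role": "assistant",
            "content": spoken,
        }
    return {"results": [entry]}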
Ben
01/20/2025, 11:22 AM
Ben
01/20/2025, 1:41 PM
{
  "results": [
    {
      "toolCallId": "<insert-your-tool-call-id-here>",
      "result": "<JSON stringified response data here>",
      "message": {
        "type": "request-complete",
        "role": "assistant",
        "content": ""
      }
    }
  ]
}
Because, if we omit the message field, I think you are saying that it will try to voice out the result string instead.
Shubham Bajaj
01/22/2025, 4:03 PM
Ben
01/28/2025, 12:52 PM
{
  "name": "call_xlnxf7fo9OLkVAhK",
  "role": "tool_call_result",
  "time": 1738068305471,
  "result": "An unexpected error occurred.",
  "toolCallId": "call_xlnxf7fo9OLkVAhK",
  "secondsFromStart": 18.751
}
The output of the model has the JSON in the format described, with a 'result' field which is a JSON-serialised string, so the format matches the one you describe, as far as I can see.
I wonder if it could be related to certain tokens not being parsable by VAPI/OpenAI even after setting result=json.dumps(content, ensure_ascii=False) - but the unexpected error is hidden from me. I presume that on your side you might be able to see which error specifically is occurring.
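For reference, the only difference between the two serialisation modes is how non-ASCII characters (such as the Japanese query earlier in the thread) are encoded; both produce valid JSON strings, and the sample content below is illustrative:

import json

content = {"answer": "経費精算の締め切りは毎月25日です"}  # illustrative RAG output

# What the server currently sends: characters kept as-is.
print(json.dumps(content, ensure_ascii=False))
# {"answer": "経費精算の締め切りは毎月25日です"}

# Default mode escapes to \uXXXX sequences: plain-ASCII output, worth
# comparing if a character-handling problem is suspected.
print(json.dumps(content))
# {"answer": "\u7d4c\u8cbb\u7cbe\u7b97\u306e..."}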
The other thing I notice is that sometimes VAPI sends this kind of message in the assistant conversation update:
{
  "role": "tool",
  "tool_call_id": "call_Sp57yvMnrz0v9aGj",
  "content": "Tool Result Still Pending But Proceed Further If Possible."
},
The bot then seems to use that as a cue to re-call the tool, resulting in a spam of tool calls (?) - at least we see several tool calls occurring when we get unexpected errors.
More generally, if the tool call is synchronous and has not yet been responded to, I do not understand why we would be prompting the bot to "Proceed Further" --> that seems like a bug?
How can I tell VAPI to block until the call returns? I've already set the tool as synchronous with a generous server timeout, but that doesn't seem to affect this message when using the Realtime API.
Shubham Bajaj
01/29/2025, 7:50 AM
Shubham Bajaj
01/29/2025, 7:50 AM
Shubham Bajaj
01/29/2025, 7:55 AM
Ensure that only one retrieve_and_generate is called at a time.
That way, you avoid collisions and “unexpected error” conditions that come from unhandled parallel calls.
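One way to approximate this on the webhook side is to serialise the RAG calls behind a lock; a minimal sketch (nothing Vapi-specific, and retrieve_and_generate again stands in for the real pipeline):

import threading

_rag_lock = threading.Lock()


def retrieve_and_generate(query: str) -> str:
    # Placeholder for the real RAG pipeline.
    return f"(RAG answer for: {query})"


def retrieve_and_generate_serialised(query: str) -> str:
    """Run the RAG tool strictly one call at a time, so overlapping
    tool calls from the assistant cannot collide with each other."""
    with _rag_lock:
        return retrieve_and_generate(query)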
Ben
01/29/2025, 12:36 PM
Shubham Bajaj
01/29/2025, 2:15 PM