We're having an issue where our AI receptionist is not speaking the pricing result, even though everything appears to be configured properly.
The get_pricing tool call executes correctly and returns a valid response:
```json
{
  "match_found": "true",
  "price_string": "$150.00"
}
```
The tool is defined with this request-complete message block:
```json
{
  "type": "request-complete",
  "content": "The price is [price_string]. Would you like to go ahead and schedule the appointment?",
  "conditions": [
    {
      "param": "match_found",
      "value": "true",
      "operator": "eq"
    }
  ]
}
```
The assistant prompt clearly states:
"Do not speak immediately after tool calls; let the tool response speak instead."
The tool is not async, and blocking is set to true on the request-start message.
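For context, here is roughly how the tool's messages array is wired up. Only the request-complete block is copied verbatim from our config; the request-start content below is just a placeholder, not our actual wording:

```json
{
  "messages": [
    {
      "type": "request-start",
      "blocking": true,
      "content": "One moment while I check that for you."
    },
    {
      "type": "request-complete",
      "content": "The price is [price_string]. Would you like to go ahead and schedule the appointment?",
      "conditions": [
        {
          "param": "match_found",
          "value": "true",
          "operator": "eq"
        }
      ]
    }
  ]
}
```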
What's Going Wrong
After calling the tool, the assistant still speaks manually:
"Let me check the pricing for you. 1 moment, please."
This overrides or suppresses the request-complete message, even though the match was successful and tool messages are defined.
Test Info
Tool ID: eefd335d-9ba9-44db-9193-b5010f758543
Call ID: 1a7dfa3f-cac7-4069-9e66-eaf9a36ee725
Tool Call ID: call_DQ7c1KRVOZrUzBMRKoTL6EMQ
Assistant ID: 8de39e0c-9a38-4350-9475-036d9e91f178
Request Timestamp: 2025-05-14 18:10:29 UTC
Note: We've tested this with both:
A local Flask API
And a Render-hosted Flask endpoint
Both return exactly the same result and behavior: a pricing match is found, but the response is not spoken by the assistant.
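For reference, the endpoint itself is just a simple lookup that returns the two fields shown at the top of this post. The sketch below is a simplified stand-in for our handler, not the production code: the route name, the price table, and the way the tool-call arguments are unpacked from the request body are all placeholders.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder price table; the real endpoint looks prices up elsewhere.
PRICES = {
    "standard_cleaning": "$150.00",
}


@app.route("/get_pricing", methods=["POST"])
def get_pricing():
    # Simplified: the real handler unpacks the tool-call arguments from Vapi's webhook payload.
    payload = request.get_json(silent=True) or {}
    service = payload.get("service", "")

    price = PRICES.get(service)
    if price is None:
        # No match, so the request-complete condition (match_found == "true") won't fire.
        return jsonify({"match_found": "false", "price_string": ""})

    # Match found: same response shape as the example at the top of this post.
    return jsonify({"match_found": "true", "price_string": price})


if __name__ == "__main__":
    app.run(port=5000)
```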
Request
Could you please help confirm:
Is this a known issue where request-complete gets ignored if the assistant speaks afterward?
Should we be changing something in the tool definition or prompt to force the tool message?
Can you verify whether the request-complete message is actually getting queued and spoken by Vapi?
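In case it helps with that last point, this is how we have been pulling the call record ourselves to search for the request-complete content, assuming the GET /call/{id} endpoint; VAPI_API_KEY is a placeholder environment variable name, and the exact field layout of the returned call object may differ from what is shown here:

```python
import json
import os

import requests

CALL_ID = "1a7dfa3f-cac7-4069-9e66-eaf9a36ee725"

resp = requests.get(
    f"https://api.vapi.ai/call/{CALL_ID}",
    headers={"Authorization": f"Bearer {os.environ['VAPI_API_KEY']}"},
    timeout=30,
)
resp.raise_for_status()
call = resp.json()

# Dump the conversation messages and search for "The price is" to see whether
# the request-complete content was ever queued or spoken.
messages = call.get("messages") or call.get("artifact", {}).get("messages", [])
print(json.dumps(messages, indent=2))
```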
Everything on our end appears to be wired up correctly, but we're still stuck at this last mile.