Should I consider VAPI tool calls with `async=True...
Or is there a way to start a long-running tool call in the background and call back with the result later? Essentially I'm looking for something similar to `client_tool_result` in the ElevenLabs conversational agent ([docs](https://elevenlabs.io/docs/conversational-ai/api-reference/conversational-ai/websocket#send)).
Yes, Vapi supports running long-running tool calls in the background and receiving the results via a callback later. This is similar to the `client_tool_result` mechanism in the ElevenLabs conversational agent that you mentioned. How it works: define your tool as asynchronous with `async: true`, and when processing completes, send a message back to Vapi.

### 1. Set the Tool as Asynchronous

When defining your tool, set `async: true`:
```javascript
{
  type: "function",
  function: {
    name: "yourLongRunningTool",
    description: "A tool that runs a long process",
    parameters: {
      // your parameters here
    }
  },
  async: true
}
```
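In practice the tool definition sits inside an assistant's model configuration. A minimal sketch, assuming the usual `model.tools` shape from Vapi's assistant API (the provider, model name, and tool name here are illustrative):

```javascript
// Embedding the async tool in an assistant's model config.
// The provider/model values and tool name are placeholders.
const assistant = {
  model: {
    provider: "openai",
    model: "gpt-4o",
    tools: [
      {
        type: "function",
        async: true, // Vapi won't block the conversation on this tool
        function: {
          name: "yourLongRunningTool",
          description: "A tool that runs a long process",
          parameters: { type: "object", properties: {} },
        },
      },
    ],
  },
};
```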
### 2. Background Processing Flow

1. **Initial Tool Call**: When the LLM calls the async tool, Vapi sends the request to your webhook.
2. **Non-Blocking Response**: Your webhook returns immediately with an acknowledgment.
3. **Background Processing**: Your service continues processing the request in the background.
4. **Background Message**: When processing completes, you send a background message to Vapi.

### 3. Using Background Messages

The key mechanism is using background messages to inject the tool result back into the conversation:
```bash
# Send the result back as a background message via the call's control URL
curl -X POST 'https://aws-us-west-2-production1-phone-call-websocket.vapi.ai/7420f27a-30fd-4f49-a995-5549ae7cc00d/control' \
  -H 'content-type: application/json' \
  --data-raw '{
    "type": "add-message",
    "message": {
      "role": "system",
      "content": "New message added to conversation"
    },
    "triggerResponseEnabled": true
  }'
```
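Putting the flow together, here is a minimal Node sketch: acknowledge the tool call immediately, finish the work in the background, then inject the result with an `add-message` control request. `longRunningJob`, the ack/payload field names, and the control URL are illustrative placeholders, not Vapi's exact contract:

```javascript
// Sketch of the async-tool flow described above.

function buildAck(toolCall) {
  // Immediate webhook response; for async tools this is just an ack.
  return {
    results: [{ toolCallId: toolCall.id, result: "Working on it..." }],
  };
}

function longRunningJob(args) {
  // Stand-in for a slow API call or batch job.
  return new Promise((resolve) => setTimeout(() => resolve("done"), 10));
}

async function handleToolCall(toolCall, controlUrl) {
  // Non-blocking response: build the ack to return right away...
  const ack = buildAck(toolCall);

  // ...while the real work continues in the background; when it
  // finishes, push the result into the live call via the control URL.
  longRunningJob(toolCall.arguments).then((result) =>
    fetch(controlUrl, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        type: "add-message",
        message: { role: "system", content: `Tool result: ${result}` },
        triggerResponseEnabled: true,
      }),
    })
  );

  return ack; // returned to Vapi immediately
}
```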
Learn more about the Background Message API here: https://docs.vapi.ai/calls/call-features#2-add-message-to-conversation

### 4. Control Over Response Generation

An important advantage of this approach is that you control whether a new LLM response is generated:

1. **Silent Result**: Add the tool result to the conversation history without triggering a new response.
2. **Generate Response**: Request that the LLM generate a new response based on the tool result.
3. **System Message Context**: Send the tool result as part of a system message for context.

### 5. Use Cases

This pattern is ideal for:

- API calls that take a long time to complete
- Data processing tasks that shouldn't block the conversation
- Notifications that should be delivered when ready, regardless of conversation state
- Scheduled reports or delayed information

This gives you maximum flexibility: the conversation remains fluid and responsive, while still incorporating the results of long-running operations when they become available.
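The silent and response-generating modes differ only in `triggerResponseEnabled`. A small sketch (the field names follow the `add-message` example above; the builder function and its result strings are illustrative):

```javascript
// Build an add-message payload; triggerResponse controls whether the
// LLM speaks in response to the injected tool result.
function toolResultPayload(content, { triggerResponse = false } = {}) {
  return {
    type: "add-message",
    message: { role: "system", content },
    triggerResponseEnabled: triggerResponse,
  };
}

// Silent: append the result to history without a new assistant turn.
const silent = toolResultPayload("Report ready: processing finished.");

// Spoken: let the LLM generate a response based on the new context.
const spoken = toolResultPayload("Report ready.", { triggerResponse: true });
```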