JimKanepele
03/16/2025, 7:09 PM
client_tool_result in ElevenLabs conversational agent ([docs](https://elevenlabs.io/docs/conversational-ai/api-reference/conversational-ai/websocket#send))

Shubham Bajaj
03/17/2025, 1:22 PM
### 1. Async Tool Configuration
```javascript
{
  type: "function",
  function: {
    name: "yourLongRunningTool",
    description: "A tool that runs a long process",
    parameters: {
      // your parameters here
    }
  },
  async: true
}
```
### 2. Background Processing Flow
1. **Initial Tool Call**: When the LLM calls this async tool, Vapi sends the request to your webhook
2. **Non-Blocking Response**: Your webhook can return immediately with an acknowledgment
3. **Background Processing**: Your service continues processing the request in the background (see the sketch after this list)
4. **Background Message**: When processing completes, you send a background message to Vapi
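As a concrete illustration of steps 2 and 3, here is a minimal sketch assuming an Express-style webhook server; the route path, acknowledgment body, and `runLongTask` helper are placeholders rather than Vapi's exact payload shapes:
```javascript
// Minimal sketch: acknowledge the async tool call immediately, then keep
// working after the response has been sent (Express assumed; payload shapes
// are placeholders, not Vapi's exact schema).
const express = require("express");
const app = express();
app.use(express.json());

// Stand-in for the real long-running work (e.g. a slow third-party API call).
async function runLongTask(payload) {
  await new Promise((resolve) => setTimeout(resolve, 30_000)); // simulate 30s of work
  return { status: "done", input: payload };
}

app.post("/tool-webhook", (req, res) => {
  // Step 2: non-blocking response, acknowledge right away.
  res.json({ result: "Processing started" });

  // Step 3: background processing continues after the response is sent.
  setImmediate(async () => {
    const result = await runLongTask(req.body);
    console.log("Tool finished:", result);
    // Step 4: send the result back via the control URL
    // (see the add-message example in the next section).
  });
});

app.listen(3000);
```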
### 3. Using Background Messages
The key mechanism here is using background messages to inject the tool result back into the conversation:
```bash
# Send the result back as a background message
curl -X POST 'https://aws-us-west-2-production1-phone-call-websocket.vapi.ai/7420f27a-30fd-4f49-a995-5549ae7cc00d/control' \
  -H 'content-type: application/json' \
  --data-raw '{
    "type": "add-message",
    "message": {
      "role": "system",
      "content": "New message added to conversation"
    },
    "triggerResponseEnabled": true
  }'
```
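If the background job runs in Node, the same request can be made with `fetch` once the work finishes; a sketch assuming Node 18+ (global `fetch`) and that the call's control URL was captured when the call started (the function name and message content are illustrative):
```javascript
// Sketch: post the finished tool result back into the conversation.
// Assumes Node 18+ (global fetch); controlUrl is the call's control URL,
// stored when the call started.
async function postToolResult(controlUrl, result) {
  await fetch(controlUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      type: "add-message",
      message: {
        role: "system",
        content: `Long-running tool finished with result: ${JSON.stringify(result)}`,
      },
      // Set to false if the result should be added silently (see section 4).
      triggerResponseEnabled: true,
    }),
  });
}
```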
Learn more about the Background Message API here: https://docs.vapi.ai/calls/call-features#2-add-message-to-conversation
### 4. Control Over Response Generation
An important advantage of this approach is that you control whether a new LLM response is generated (see the sketch after this list):
1. **Silent Result**: You can add the tool result to the conversation history without triggering a new response
2. **Generate Response**: You can request that the LLM generates a new response based on the tool result
3. **System Message Context**: The tool result can be sent as part of a system message for context
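For illustration, options 1 and 2 differ only in the `triggerResponseEnabled` flag on the same `add-message` payload (the message content below is a placeholder):
```javascript
// Option 1: silent result, appended to the conversation history, no new LLM turn.
const silentResult = {
  type: "add-message",
  message: { role: "system", content: "Background report finished; data is ready." },
  triggerResponseEnabled: false,
};

// Option 2: generate response, the LLM produces a new turn based on the injected result.
const spokenResult = {
  type: "add-message",
  message: { role: "system", content: "Background report finished; data is ready." },
  triggerResponseEnabled: true,
};
```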
### 5. Use Cases
This pattern is ideal for:
- API calls that take a long time to complete
- Data processing tasks that shouldn't block the conversation
- Notifications that should be delivered when ready, regardless of conversation state
- Scheduled reports or delayed information
This approach gives you maximum flexibility: the conversation stays fluid and responsive while still incorporating the results of long-running operations as they become available.