Tool Calls
# support
h
I am having issues using tools. I can create and call tools via the Web SDK, but my arguments seem to be modified and extended by the LLM. How can I have a tool call return the user's actual words in the `vapi.on('message', (message) => ...)` events? The returned argument: `{ "imagePrompt": "a stunning photograph of the moon, shining brightly in a clear night sky with stars twinkling around it" }`. The original spoken argument from the user: "a picture of the moon".
v
Hi there! I understand you're having issues with tool call arguments being modified by the LLM in the Web SDK. This is a common challenge when working with LLMs and tool calls.

### Retrieving Original User Arguments

When using the Web SDK, you can inspect the arguments the model actually sent by parsing the tool call messages. Here's how:
```javascript
vapi.on('message', (message) => {
  // Check if this is a tool call message
  if (message.type === 'conversation-update' && message.role === 'tools_calls') {
    // Extract the arguments from the function
    const toolCall = message.content;
    const argumentsStr = toolCall.function.arguments;

    try {
      // Parse the arguments
      const parsedArgs = JSON.parse(argumentsStr);
      // Now you have the arguments as provided to the tool call
      console.log('Tool call arguments:', parsedArgs);
    } catch (err) {
      console.error("Failed to parse arguments:", err);
    }
  }
});
```
Note that there's a [known discrepancy in message types](https://discord.com/channels/1211482211119796234/1211483291191083018/1328825176783065128) - you might receive `conversation-update` messages with roles `tools_calls` and `tool_call_result` instead of the expected `tool-calls` and `tool-call-result`.

### Best Practices for Maintaining Argument Integrity

1. **Break Down Complex Workflows**: Segment complex automations into smaller, focused workflows for better integrity and easier debugging.
2. **Use Clear Naming Conventions**: This helps with traceability and makes workflows easier to understand.
3. **Proper Error Handling**: Always include robust error handling when parsing arguments:
```javascript
// argumentsStr is the raw string from toolCall.function.arguments, as above
let parsedArgs = {};
try {
  parsedArgs = JSON.parse(argumentsStr);
} catch (err) {
  console.error("Failed to parse arguments:", err);
}
```
4. **Handle Asynchronous Calls Carefully**: Be aware that async tool calls automatically send a "Success" message back, which can confuse the LLM.

### Regarding LLM Modification of Arguments

While you can't completely prevent an LLM from expanding or modifying user inputs (that's part of their nature), you can:

1. Use more specific prompting in your LLM configuration to encourage more literal transcription
2. Store the original user utterance separately in your application logic
3. Consider implementing a pre-processing step that extracts key information from the user's request before passing it to the tool call

Would you like more specific guidance on any of these approaches? 🙂

Sources:
- [Discord message about message type discrepancies](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1328825176783065128)
- [Vapi documentation on custom LLM tool calling integration](https://docs.vapi.ai/documentation/advanced-concepts/customization/custom-llm/custom-llm-tool-calling-integration)
- [Vapi documentation on tools/GHL](https://docs.vapi.ai/tools/GHL)
- [Discord message about async tool call behavior](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342336047589359618)
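As a sketch of option 2 (storing the original user utterance separately): the pattern below remembers the user's last final transcript and pairs it with the next tool call. The `transcript` message shape (`role`, `transcriptType`, `transcript`) and the `conversation-update` tool-call shape mirror the events discussed in this thread, but they are assumptions - verify them against what your SDK version actually emits.

```javascript
// Sketch only: the message field names below are assumptions based on the
// shapes discussed above -- check them against your SDK's actual events.
function createUtteranceTracker() {
  let lastUserUtterance = null;

  return {
    handleMessage(message) {
      // Remember the most recent final transcript spoken by the user.
      if (
        message.type === 'transcript' &&
        message.role === 'user' &&
        message.transcriptType === 'final'
      ) {
        lastUserUtterance = message.transcript;
        return null;
      }

      // When a tool call arrives, pair the LLM-generated arguments
      // with the raw utterance captured above.
      if (message.type === 'conversation-update' && message.role === 'tools_calls') {
        let llmArguments = {};
        try {
          llmArguments = JSON.parse(message.content.function.arguments);
        } catch (err) {
          console.error('Failed to parse arguments:', err);
        }
        return { originalUtterance: lastUserUtterance, llmArguments };
      }

      return null;
    },
  };
}
```

Wire `tracker.handleMessage` into `vapi.on('message', ...)`; whatever it returns for a tool call then carries both the literal utterance and the LLM-expanded arguments.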
k
Hello Herring, could you please provide me with a call ID? I'll look into this for you.