Calling tool in custom LLM
# support
Hi folks, I am trying to call a tool from a custom LLM. First I ask Vapi to call the tool, then Vapi sends a request to my LLM, and my LLM responds with a `tool_calls` finish reason. Vapi says "hold on a second" and sends me the latest tool call, but it also adds an odd "No result returned." message like this: `{ "role": "tool", "tool_call_id": "call_sajmqv3sSz7t8wBDxEcFqcYM", "content": "No result returned." }`. I am currently replacing this content and sending it to OpenAI, which then streams chunks back to Vapi. Is this normal behavior?
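A minimal sketch of the replacement step described above: before forwarding Vapi's message history to OpenAI, swap the placeholder "No result returned." content on tool messages for the real tool output. The helper name `fill_tool_results` and the `tool_outputs` mapping are my own illustrative assumptions, not part of the Vapi or OpenAI APIs.

```python
def fill_tool_results(messages, tool_outputs):
    """Replace placeholder tool-message content with real results.

    `tool_outputs` maps a tool_call_id to the string result produced
    by actually executing the tool on your side.
    """
    patched = []
    for msg in messages:
        if (
            msg.get("role") == "tool"
            and msg.get("content") == "No result returned."
            and msg.get("tool_call_id") in tool_outputs
        ):
            # Copy the message so the original history is left untouched.
            msg = {**msg, "content": tool_outputs[msg["tool_call_id"]]}
        patched.append(msg)
    return patched


# Example history shaped like the message quoted above.
history = [
    {
        "role": "tool",
        "tool_call_id": "call_sajmqv3sSz7t8wBDxEcFqcYM",
        "content": "No result returned.",
    },
]
fixed = fill_tool_results(
    history, {"call_sajmqv3sSz7t8wBDxEcFqcYM": '{"temp": "21C"}'}
)
```

The patched `fixed` list can then be passed to the OpenAI chat completions call in place of the raw history.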
You can continue this conversation by mentioning me in the message.
@Javokhir Hello! It sounds like you're integrating a custom language model (LLM) with VAPI.

When VAPI sends a request to your LLM and it returns a "tool_call" reason, VAPI expects a result from the tool call to be included in the response. The "No result returned." message indicates that VAPI did not receive the expected result from the tool call. It is a placeholder that you're meant to replace with the actual result of the tool's execution.

When your LLM receives the tool call, it should process the request and execute the desired function. After that, you should send back a response in the correct JSON format, including the "toolCallId" and the "result" of the tool call. Here's the format again for clarity:
```json
{
    "results": [
        {
            "toolCallId": "call_sajmqv3sSz7t8wBDxEcFqcYM",
            "result": "Your actual result here"
        }
    ]
}
```
Make sure to replace "Your actual result here" with the actual output from the tool's execution. Once you send this response back to VAPI, it should be able to process it correctly and continue the workflow.

If you're following this process and still encountering issues, ensure that your server is correctly processing the request and that the response is properly formatted. If you need further assistance, ask support for more details.
Got it. Tagging @User for additional help.
Create a webhook-based system where VAPI (GPT tool calling) can request user data through Make.com, which then communicates with a Flutter app API to retrieve the user's name, phone number, and real-time location. The data should then be sent back to Make.com and forwarded to VAPI.

Flow:
1️⃣ VAPI triggers a Make.com webhook with a request for user data.
2️⃣ Make.com forwards the request to the Flutter app backend.
3️⃣ The Flutter API retrieves user data (name, phone number, location) from the device.
4️⃣ The Flutter API sends the response back to Make.com.
5️⃣ Make.com forwards the data to VAPI for further processing.

✅ The system should support real-time data retrieval and secure API communication.
Can I do this end to end with a backend server?
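Steps 3️⃣–4️⃣ of the flow above could be sketched as a minimal HTTP endpoint the Flutter app's backend exposes for Make.com to call. Everything here is an illustrative assumption (the route, the query parameter, the in-memory `USERS` store, the field names); a real system would need authentication and live location data pushed from the device.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Stand-in for real per-user data pushed from the device (assumption).
USERS = {
    "u1": {
        "name": "Alice",
        "phone": "+15550100",
        "location": {"lat": 40.7, "lng": -74.0},
    },
}


def lookup_user(user_id: str) -> dict:
    """Build the JSON body Make.com forwards back to VAPI."""
    user = USERS.get(user_id)
    if user is None:
        return {"error": "unknown user"}
    return {"userId": user_id, **user}


class UserDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /user-data?id=u1 (hypothetical route).
        query = parse_qs(urlparse(self.path).query)
        body = json.dumps(lookup_user(query.get("id", [""])[0])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To run the endpoint locally:
# HTTPServer(("0.0.0.0", 8000), UserDataHandler).serve_forever()
```

Make.com would call this endpoint in step 2️⃣ and pass the JSON response on to VAPI in step 5️⃣.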