How to get tool response context to model
# support
d
We're using a Custom LLM model with tool calls. The system is working and tool calls are being correctly run. However, after a tool run, the tool call and response are not passed to our LLM. This makes the LLM lose context on what has happened. Is there a way for this data to get passed back to our LLM?
v
To keep your Custom LLM aware of tool call results, append a `"function"` role message with the tool's name and result to your LLM's messages array after each tool call. For example:

```js
const functionMessage = {
  role: "function",
  name: toolCallInfo.name,
  content: JSON.stringify(result)
};
// Add this to your messages before the next LLM call
```

This way, your LLM receives the tool response as context for future replies. You need to implement this step in your integration; Vapi does not do it automatically.

Source: [Vapi tool calling integration guide](https://docs.vapi.ai/customization/tool-calling-integration#how-it-works)
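A minimal sketch of the step described above, assuming an OpenAI-compatible messages array. The helper name and the sample tool result are illustrative, not part of any Vapi API; note that newer OpenAI-style APIs use a `"tool"` role message with a `tool_call_id` instead of the legacy `"function"` role.

```javascript
// Append a tool result to the conversation history before the next
// completion request. Helper name and sample data are hypothetical.
function appendToolResult(messages, toolName, result) {
  return [
    ...messages,
    {
      role: "function",               // or "tool" + tool_call_id on newer APIs
      name: toolName,
      content: JSON.stringify(result) // tool output serialized as a string
    }
  ];
}

// Hypothetical usage:
const history = [{ role: "user", content: "Create a ticket for my broken car." }];
const updated = appendToolResult(history, "Ticket_Creator", { ID: "SNOW-84" });
console.log(updated.length);     // 2
console.log(updated[1].content); // '{"ID":"SNOW-84"}'
```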
d
This was not helpful because VAPI is tracking the context. I reviewed the docs and didn't find anything indicating that VAPI will return the tool call and response to us in follow-up messages.
k
Include the tool call response as a function role message in the conversation history sent to your custom LLM after each tool execution.
d
I'm sorry! I think there is a misunderstanding. We are not executing the tool calls as part of the response in our own system; we are executing them through VAPI. The flow is:

1. VAPI requests the assistant configuration through a callback, which returns the VAPI configuration with tool calls.
2. VAPI calls our custom LLM to get the next response.
3. Our custom LLM returns a tool call to VAPI.
4. VAPI calls our tool call URL to get the tool call response.
5. VAPI calls our custom LLM to get the next response. This is the step that is missing the tool call response.

The solutions provided above appear to expect us to execute tool calls in our own system, which we have elected not to do. We expected that tool call responses executed through VAPI would show up in subsequent messages in our system. Does that make sense?
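For reference, here is a sketch of what an OpenAI-compatible messages array would need to contain by step 5 so the custom LLM has the tool context. The shape of the assistant `tool_calls` entry and the `"tool"` entry follows the standard OpenAI chat-completions format; all ids, names, and strings here are made up for the example, not taken from Vapi's actual payload.

```javascript
// Illustrative request body contents for step 5 of the flow above.
// The final "tool" entry is the piece the thread reports as missing.
const messages = [
  { role: "system", content: "You are a support assistant." },
  { role: "user", content: "My car will not start." },
  {
    role: "assistant",
    content: null,
    tool_calls: [
      {
        id: "call_abc123", // made-up id for illustration
        type: "function",
        function: {
          name: "Ticket_Creator",
          arguments: JSON.stringify({ title: "Car not starting" })
        }
      }
    ]
  },
  {
    role: "tool",                 // the tool result entry in question
    tool_call_id: "call_abc123",  // must match the assistant's tool call id
    content: JSON.stringify({ ID: "SNOW-84" })
  }
];
console.log(messages[3].role); // "tool"
```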
An example call ID is:
7701049e-7426-4867-82b6-132103a5764b
a
With a custom LLM you will need to maintain the whole context yourself, so I am not sure where it is going wrong.
d
When we use the custom LLM, VAPI returns the list of messages to us with each subsequent call. This list includes the system, user, and assistant messages, but it doesn't include the tool calls or tool responses. Since our API is OpenAI-compatible, it does not store messages on its own; applications are supposed to track that. I would expect VAPI to return that information, especially since it is already tracking the context for system, user, and assistant messages.
a
We do send those details as well
You can check the logs ^^
d
It says "No results returned", but we are returning results. I've tested this extensively. The results are a JSON-encoded string. Could that be the problem, i.e. are they not in an expected format? I just ran a test, and here is the result we returned to VAPI for the API call:
```json
[
  {
    "name": "Ticket_Creator",
    "toolCallId": "call_SCoUOCdQ1CFHoSaWoledhepI",
    "result": [
      {
        "Updated on": "2025-05-21T12:34:39.745Z",
        "Description": "The car is currently not working. The engine light is on, and there is a significant amount of oil leaking under the car. Immediate attention is needed to diagnose and repair the issue.",
        "Created on": "2025-05-21T12:34:39.745Z",
        "Updated by": "elroy@elementum.com",
        "Title": "Car not working with engine light on and oil leak",
        "ID": "SNOW-84",
        "Created by": "elroy@elementum.com",
        "id": "SNOW-84",
        "App Version": 1
      }
    ],
    "error": null
  }
]
```
VAPI still says "No results returned." Here is the call ID: 25cd0a59-7619-46fe-849b-66de2241d72f
Sorry for the confusion! I was using the wrong property name when I submitted the above response: I was using `responses` instead of `results`. It works as expected.
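A sketch of the fix described above: the tool server's response body must use the key `results` rather than `responses`. The helper name is hypothetical, and the exact wrapper shape beyond the `results`/`toolCallId`/`result` fields seen in this thread is an assumption.

```javascript
// Build a tool-call response body in the shape that worked above:
// a "results" array whose entries echo the toolCallId and carry the
// result as a JSON-encoded string. Helper name is hypothetical.
function buildToolResponse(toolCallId, result) {
  return {
    results: [                          // must be "results", not "responses"
      {
        toolCallId,                     // echo the id Vapi sent
        result: JSON.stringify(result), // JSON-encoded string result
        error: null
      }
    ]
  };
}

const body = buildToolResponse("call_SCoUOCdQ1CFHoSaWoledhepI", { ID: "SNOW-84" });
console.log(Object.keys(body)); // [ 'results' ]
```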
a
Glad you were able to get it resolved! Please reach out to us if you have any other questions regarding this issue.