Where is the serverUrl passed to Custom LLM?
# support
e
Hello - I set up an assistant from the dashboard using a Custom LLM URL, and set up a Tool with a Server URL as in the image. My understanding is that I need to capture the tool call from the request that Vapi sends to the Custom LLM, handle the tool call, pass the result to the LLM, and then send the response back to Vapi. However, I don't see the Tool's Server URL being passed to the Custom LLM. Could I get some assistance please? @Sahil Thank you
https://cdn.discordapp.com/attachments/1252261538258092124/1252261538924990474/Screenshot_2024-06-17_at_14.56.09.png?ex=667192f0&is=66704170&hm=8822f57319845efa4f44bd2de8a1c509b3a2062650ac45b0edafcb94fa86f5cb&
@Ethan Tan Hello! I'm glad to assist you with your setup. If you've configured a tool with a server URL and you're expecting Vapi to send requests to your Custom LLM, it's important to ensure that the tool is being called correctly within your assistant's logic. The server URL you set up for the tool should be invoked by Vapi when the tool is used. To troubleshoot this, please check the following:
1. Ensure that the tool is correctly configured in the dashboard with the correct server URL.
2. Verify that the assistant's logic is designed to call the tool when needed. This involves using the `toolIds` or `tools` parameters to reference the tool within your assistant's code.
3. Make sure that the secret (if you've set one) is being sent correctly as a header called `x-vapi-secret` in the requests to your server. This is crucial for your server to authenticate requests from Vapi.
If you've checked all of the above and the issue persists, it could be helpful to look at the network requests being made to see if the tool's server URL is being called at all. If you're not seeing the expected requests, there might be a configuration issue on the Vapi side. If you need further assistance, please provide more details about how you're attempting to call the tool, and I'll do my best to help you resolve the issue. If the problem is beyond the scope of what we can resolve here, you may need to reach out to Vapi support for more in-depth troubleshooting.
You can continue this conversation by mentioning me in the message.
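For reference, a minimal sketch of point 3 above - checking the `x-vapi-secret` header on a tool server. This assumes Flask; the `/tool` route and the response body shape are placeholders for illustration, not Vapi's documented schema:

```python
# Minimal sketch of verifying the x-vapi-secret header on a tool server.
# Assumes Flask; /tool is a placeholder route and the response shape is
# illustrative rather than Vapi's documented format.
import os
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
EXPECTED_SECRET = os.environ["VAPI_SECRET"]  # the secret set in the dashboard

@app.post("/tool")
def tool():
    # Vapi sends the configured secret as the x-vapi-secret header.
    if request.headers.get("x-vapi-secret") != EXPECTED_SECRET:
        abort(401)
    payload = request.get_json()
    # ... run the actual tool logic on payload here ...
    return jsonify({"results": [{"toolCallId": "...", "result": "..."}]})
```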
s
You don't need to handle the function call, but if you want to, you can. You can refer to this: https://dump.sahilsuman.me/streaming-custom-llm-vapi.txt
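For anyone who can't open the link, here's a minimal sketch in the same spirit, assuming Flask and the openai v1 SDK (the `/chat/completions` route is an assumption, following the OpenAI-compatible path a custom LLM endpoint typically exposes): Vapi POSTs an OpenAI-style request, and the endpoint streams OpenAI's chunks back as server-sent events.

```python
# Sketch of a streaming custom LLM endpoint: forward Vapi's OpenAI-style
# request to OpenAI and stream the completion chunks back as SSE.
from flask import Flask, Response, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/chat/completions")
def chat_completions():
    body = request.get_json()
    kwargs = {"model": "gpt-4o", "messages": body["messages"], "stream": True}
    if body.get("tools"):  # forward the tool definitions if Vapi sent any
        kwargs["tools"] = body["tools"]

    def generate():
        for chunk in client.chat.completions.create(**kwargs):
            # Each chunk is re-serialized in OpenAI's streaming chunk format.
            yield f"data: {chunk.model_dump_json()}\n\n"
        yield "data: [DONE]\n\n"

    return Response(generate(), mimetype="text/event-stream")
```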
e
Thanks @Sahil, could you clarify what you mean?
1. I'm using a Custom LLM (it's not a fine-tuned model, it's gpt-4o but with some custom processing), and I've set up a Tool in the Vapi dashboard with a Make.com URL
2. Vapi sends the model request to my Custom LLM endpoint
3. I need to make a chat completion call to OpenAI including the Tool URL, but I don't see it passed from Vapi in (2)
If I don't handle the tool call, how would it work? Thanks
s
Let's do one thing let's hop on a call and we can discuss it.
e
thanks Sahil, booked the next availability but it's only on Friday
@Sahil Q: How am I supposed to handle the request-start message when I use a custom LLM? How do I get Vapi to say it while it's calling a tool? Thank you
https://cdn.discordapp.com/attachments/1252261538258092124/1252928286166614046/Screenshot_2024-06-19_at_11.09.07.png?ex=6673ffe5&is=6672ae65&hm=bb8d8e99cad7966da12de4995cba7f0aa5f678b62f4f497aaaf01b631a19b464&
s
Simply pass it directly in the Vapi assistant configuration: take whatever configuration you get during the custom LLM call and just add the tools section to it.
Like that
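The screenshot isn't preserved here, but based on Sahil's description the assistant config would look roughly like this: a `custom-llm` model block with a `tools` array added to it. The URLs and the function schema below are placeholders:

```json
{
  "model": {
    "provider": "custom-llm",
    "url": "https://your-custom-llm.example.com",
    "model": "gpt-4o",
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "parameters": {
            "type": "object",
            "properties": { "location": { "type": "string" } }
          }
        },
        "server": { "url": "https://hook.make.com/your-scenario" }
      }
    ]
  }
}
```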
e
thanks, do I understand this correctly please:
1. Call Vapi with assistant config including tool config (as in your screenshot)
2. Vapi calls Custom LLM endpoint with model params + tool config
3. Custom LLM endpoint calls OpenAI with tool config + chat message
4. Custom LLM receives tool call + parameters from OpenAI
5. Custom LLM endpoint sends tool call + parameters to Vapi
6. Vapi speaks 'request-start' message
7. Custom LLM endpoint calls Tool endpoint and retrieves result
8. Custom LLM endpoint sends tool result to OpenAI
9. Custom LLM endpoint receives chat response from OpenAI
10. Custom LLM endpoint streams chat response to Vapi
11. Vapi speaks response
@Sahil
s
Yup.
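For concreteness, here's a minimal sketch (non-streaming, for brevity) of steps 3-4 and 7-9 from the list above: the custom LLM endpoint lets OpenAI pick a tool, calls the tool URL itself, and feeds the result back for a final answer. It assumes the openai v1 SDK and the requests library; `TOOL_URL` stands in for the Make.com webhook from the dashboard:

```python
# Sketch of handling the tool call inside the custom LLM endpoint.
import json
import requests
from openai import OpenAI

client = OpenAI()
TOOL_URL = "https://hook.make.com/your-scenario"  # placeholder webhook

def run_with_tools(messages, tools):
    # Steps 3-4: let OpenAI decide whether to call a tool.
    first = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    choice = first.choices[0]
    if choice.finish_reason != "tool_calls":
        return choice.message.content

    # Step 7: call the tool endpoint with the model's arguments.
    call = choice.message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = requests.post(TOOL_URL, json=args, timeout=30).json()

    # Step 8: hand the tool result back to OpenAI.
    followup = messages + [
        choice.message,
        {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
    ]
    # Step 9: get the final chat response to stream back to Vapi.
    second = client.chat.completions.create(model="gpt-4o", messages=followup)
    return second.choices[0].message.content
```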
e
thanks @Sahil - what does step (5), 'Custom LLM endpoint sends tool call + parameters to Vapi', look like please? E.g. if OpenAI returns:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699896916,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\n\"location\": \"Boston, MA\"\n}"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 82,
    "completion_tokens": 17,
    "total_tokens": 99
  }
}
```
What should be sent to Vapi to trigger the request-start message? thank you
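My reading of the thread (worth confirming with the Vapi team) is that Vapi consumes the same streaming chunks OpenAI emits, so step 5 is just forwarding the `tool_calls` delta chunk unchanged over SSE - the streaming counterpart of the non-streaming response above. Something like:

```
data: {"id": "chatcmpl-abc123", "object": "chat.completion.chunk", "created": 1699896916, "model": "gpt-3.5-turbo-0125", "choices": [{"index": 0, "delta": {"role": "assistant", "content": null, "tool_calls": [{"index": 0, "id": "call_abc123", "type": "function", "function": {"name": "get_current_weather", "arguments": "{\"location\": \"Boston, MA\"}"}}]}, "logprobs": null, "finish_reason": null}]}
```

Seeing that chunk is what would let Vapi recognize the tool call and speak the tool's configured request-start message (step 6).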
s
We have a call, right? We can discuss that over it.
e
@Sahil yes we do, but I'd like to solve this earlier please, if you know about it - our call is on Friday and I have a developer working on this now
s
Check DM
e
Hey Sahil, I’ve already booked the next available slot, which is Friday
s
Basically, just forget that you have any tools or anything. Simply create an assistant in the usual way and pass along all the parameters such as temperature, tools, system prompt, and other things that you receive from Vapi. Now, whenever there's a tool call, Vapi will automatically send a tool request to the server URL that you specified when creating the tool in Vapi.
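In other words, the pass-through mode: relative to the earlier streaming sketch, the endpoint just forwards everything Vapi sent and never acts on `tool_calls` itself. A hedged variant of the request-building lines (the parameter names are the standard OpenAI ones; exactly which of them Vapi forwards is an assumption):

```python
# Forward whatever Vapi sent (tools, temperature, etc.) untouched and never
# intercept tool_calls; Vapi then calls the tool's server URL by itself.
PASSTHROUGH_KEYS = ("messages", "tools", "temperature", "max_tokens")
kwargs = {k: body[k] for k in PASSTHROUGH_KEYS if body.get(k) is not None}
stream = client.chat.completions.create(model="gpt-4o", stream=True, **kwargs)
```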
e
I thought I have to handle the tool call in my custom LLM, not Vapi? See #7 and #8 below:
```
thanks, do I understand this correctly please:

1. Call Vapi with assistant config including tool config (as in your screenshot)
2. Vapi calls Custom LLM endpoint with model params + tool config
3. Custom LLM endpoint calls OpenAI with tool config + chat message
4. Custom LLM receives tool call + parameters from OpenAI
5. Custom LLM endpoint sends tool call + parameters to Vapi
6. Vapi speaks 'request-start' message
7. Custom LLM endpoint calls Tool endpoint and retrieves result
8. Custom LLM endpoint sends tool result to OpenAI
9. Custom LLM endpoint receives chat response from OpenAI
10. Custom LLM endpoint streams chat response to Vapi
11. Vapi speaks response
```
s
You can handle it as well, but we have pre-built features, like speaking messages such as 'wait a second' while the tool runs. That's why I'm suggesting you do it through Vapi. If you still want to handle it yourself, you can.
e
I'd like to do it the easiest way possible. How do I choose whether Vapi makes the tool call, or I make the tool call?
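(Piecing together Sahil's answers above - an inference, not an official statement: if your custom LLM endpoint streams the `tool_calls` chunk back to Vapi and does nothing else with it, Vapi makes the tool call against the tool's server URL and speaks the configured request-start message; if your endpoint consumes the `tool_calls` itself, as in the earlier sketch, and only streams final text back, Vapi never sees a tool call and you've handled it yourself.)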