Functions with Custom LLM?
# support
y
If I use a Custom LLM for my Vapi Assistant and I have functions configured in my LLM, will I need to provide information about my functions to Vapi as well while creating the assistant?
v
Hi @YasH, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
**While I generate an answer, here are some sources that might be helpful**:
* Custom LLM
* Assistants
* Web Calling
* Function Calling
### Answer
Yes, if you are using a Custom LLM with your Vapi Assistant and have specific functions configured in your LLM, you will need to provide information about these functions to Vapi when creating or updating your assistant. This ensures that your custom functions are correctly integrated and can be invoked during the voice interactions managed by Vapi. You can specify your `serverUrl`, where Vapi will send the function call information, allowing your functions to be executed as intended (1). Here’s an example of how you might configure this in JSON when creating an assistant:
```json
{
  "model": {
    "provider": "custom-llm",
    "url": "<YOUR_CUSTOM_LLM_ENDPOINT>",
    "functions": [
      {
        "name": "bookAppointment",
        "description": "Book an appointment."
      }
    ]
  },
  "serverUrl": "https://yourserver.com/functions"
}
```
This configuration ensures that when specific functions like `bookAppointment` are called during a conversation, the requests are directed to the specified server URL, where your custom logic is handled.
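To make the `serverUrl` side of this concrete, here is a minimal sketch of a handler that could live at `https://yourserver.com/functions`. The payload shape (a `message` carrying a `functionCall` with the function name and parameters) and the response shape are assumptions for illustration, so check the Function Calling docs for the exact schema.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical handler behind the assistant's serverUrl.
# Assumption: Vapi POSTs a JSON body whose "message" describes the
# function call; verify the exact field names against the docs.
@app.route("/functions", methods=["POST"])
def handle_function_call():
    payload = request.get_json()
    function_call = payload.get("message", {}).get("functionCall", {})

    if function_call.get("name") == "bookAppointment":
        params = function_call.get("parameters", {})
        # Run your own booking logic here and return its result.
        return jsonify({"result": f"Appointment booked with {params}"})

    return jsonify({"error": "unknown function"}), 400

if __name__ == "__main__":
    app.run(port=5001)
```

The intent is that whatever result the handler returns flows back into the conversation so the assistant can report the outcome to the caller.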
Got it. Tagging @User for additional help.
s
It is completely up to you! Basically, when you add it to Vapi, it will send the function-calling parameters to your custom LLM as well.
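For a rough picture of what gets forwarded, here is a sketch of the kind of chat-completions body the custom LLM endpoint might receive once functions are configured on the assistant. The field names follow the OpenAI-style convention and are an assumption here, not a confirmed Vapi schema.

```python
# Hypothetical shape of a request Vapi might forward to the custom LLM
# endpoint; field names follow the OpenAI chat-completions convention
# and are assumptions, not a documented contract.
forwarded_request = {
    "model": "<your-model-name>",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Can you book me an appointment?"},
    ],
    "functions": [
        {"name": "bookAppointment", "description": "Book an appointment."},
    ],
}
```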
y
Okay, so it's the same whether I configure function calling at my backend or at Vapi? No difference?
s
Yup. But prefer to have the same functions on both the dashboard and the custom LLM.
y
Got it, thanks.
n
I am from Deloitte US. Our team is struggling to get the custom LLM up using Vapi; could you please help us? We are using ngrok. The issue is that Vapi is able to hit /chat/completions, but it is struggling to say anything in return. Could you please help us with this? We have a tight POC deadline.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    request_data = request.get_json()
    print(request_data)
    # I will call the model.generate function to infer the data
    response = "......"
    op = {
        "messages": [
            {
                "role": "system",
                "content": response
            }
        ]
    }
    return jsonify(op)

if __name__ == "__main__":
    app.run(debug=True, port=5000)  # You can adjust the port if needed
```
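One thing worth checking here: the custom-llm provider generally expects an OpenAI-compatible `/chat/completions` endpoint, so a body like `{"messages": [...]}` may simply be ignored. Below is a minimal sketch of an OpenAI-style response, assuming a non-streaming reply is acceptable for the POC; if the request asks for `"stream": true`, you would need to return a stream of `chat.completion.chunk` events instead, so treat this as a starting point rather than a drop-in fix.

```python
import time

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/chat/completions", methods=["POST"])
def chat_completions():
    request_data = request.get_json()  # OpenAI-style request from Vapi
    print(request_data)

    # ... call your model here to produce the reply text ...
    reply = "Hello! How can I help you today?"

    # Sketch of an OpenAI-style chat.completion body. Assumption: a
    # non-streaming response is enough; if the request asks for
    # streaming, an SSE stream of chat.completion.chunk events is
    # needed instead.
    completion = {
        "id": "chatcmpl-local-1",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": "custom-llm",
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": reply},
                "finish_reason": "stop",
            }
        ],
    }
    return jsonify(completion)

if __name__ == "__main__":
    app.run(debug=True, port=5000)
```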