Assistant doesn't know today's date & time
# support
b
I'm having issues getting the assistant to know today's date and time. For example, when a caller is trying to book a meeting for tomorrow, it says "okay great, what time?", gets a time, and proceeds to book a meeting. The problem is, the booking is almost always in the past. We inject the current date and time into the system prompt at the start of the call, but it isn't helping.
v
Hi @Brady, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
-----------------------------
**While I generate an answer, here are some sources that might be helpful**:
* [Function Calling]()
* [Outbound Sales Example]()
* [Update Assistant]()
* [List Assistants]()
-----------------------------
### Answer
Sorry, I wasn't able to find any information about your question.
Got it. Tagging @User for additional help.
s
The LLM doesn't know today's date; you need to tell it that in the prompt. Please check out the community videos at https://docs.vapi.ai/community/videos
b
It is in the prompt.
s
Which LLM model are you using?
b
Chat GPT 3.5 1106
s
USE GPT 4
or fine tune your model.
b
The latency is too long
s
Do this, it will give you a far better result.
GPT 3.5 is really dumb if you don't fine-tune it properly
b
Any support docs on that? Not really sure what that means.
Are you referring to fine-tuning the prompt?
s
I am talking about the model.
@Mason would be able to help you out with it.
b
That's definitely over my head.
Any alternate providers you'd recommend that don't have such high latency?
s
Try different LLM models.
b
What do I need to use for the model to use Llama 3? This is giving me a 400 Bad Request error.
The API docs don't have it as an available option.
@Mason would love a crash course on fine-tuning.
m
Yeah I’d give it a function for it and fine tune a model on it
Thinking about making a video on it, I’ll be pivoting from fine tuning as a business soon
b
So just give it a function to get the current date at the start of the call? Not really sure how I'd fine-tune the model to understand the current date. I already put it in the prompt in 3 different places and it still doesn't know... lol
The second I switch to GPT-4 or Llama 3 it works perfectly.
But the latency almost doubles.
m
Yeah so I'd make a function for pulling the current date and time in a certain area, and then fine-tune a 3.5 model on how to call that function
b
So... something like this?
```json
{
  "messages": [
    {
      "role": "system",
      "content": "You call the function currentDate at the start of each conversation so that you are aware of time"
    },
    {
      "role": "user",
      "content": "What is todays date and time"
    },
    {
      "role": "assistant",
      "content": "It is five eighteen pee em on April twenty sixth two thousand and twenty four in the mountain standard timezone"
    }
  ]
}
```
m
Negative ghost rider
One sec
```json
{
  "messages": [
    {
      "role": "user",
      "content": "What is the weather in San Francisco?"
    },
    {
      "role": "assistant",
      "function_call": {
        "name": "get_current_weather",
        "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"
      }
    }
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and country, eg. San Francisco, USA"
          },
          "format": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ]
          }
        },
        "required": [
          "location",
          "format"
        ]
      }
    }
  ]
}
```
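For reference, here's roughly how that same format could be adapted to the date/time use case above. The get_current_datetime function name, its timezone parameter, and the phrasing are placeholders for illustration, not something Vapi or OpenAI ships; each training example sits on a single line of the JSONL file.

```json
{
  "messages": [
    {
      "role": "user",
      "content": "I'd like to book a meeting for tomorrow afternoon."
    },
    {
      "role": "assistant",
      "function_call": {
        "name": "get_current_datetime",
        "arguments": "{\"timezone\": \"America/Denver\"}"
      }
    }
  ],
  "functions": [
    {
      "name": "get_current_datetime",
      "description": "Get the current date and time for a timezone",
      "parameters": {
        "type": "object",
        "properties": {
          "timezone": {
            "type": "string",
            "description": "IANA timezone, e.g. America/Denver"
          }
        },
        "required": [
          "timezone"
        ]
      }
    }
  ]
}
```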
b
Oh, so train the function into the GPT rather than in VAPI?
m
On that doc there’s a tab called fine tuning examples and under that there’s a tab called function calling
Exactly
b
Okay, that's simple enough.
m
Yeah so give it the function call and then train it on examples on how it should use it
b
Any free ways to get current time?
m
You can code it yeah
b
Without having it call my server (Make)?
Okay, i'll play around with it.
m
I'd host the API on something like Google Cloud
Or Vercel's serverless functions
In my fine-tuning video I'll make getting the current time the example
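As a rough sketch of what that serverless function could look like, assuming Google Cloud Functions with the functions-framework package and Python 3.9+ (the function name, query parameter, and response fields are all made up for illustration):

```python
# Minimal HTTP function that returns the current date/time for a timezone.
# Deployable to Google Cloud Functions; adapt the wrapper for other hosts.
import functions_framework
from datetime import datetime
from zoneinfo import ZoneInfo


@functions_framework.http
def current_datetime(request):
    # Timezone arrives as a query parameter, e.g. ?tz=America/Denver
    tz = request.args.get("tz", "America/Denver")
    now = datetime.now(ZoneInfo(tz))
    # Return both a machine-readable and a speakable form for the assistant
    return {
        "iso": now.isoformat(),
        "spoken": now.strftime("%A, %B %d, %Y at %I:%M %p %Z"),
    }
```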
b
It's easier for me to just have it call Make.
How do you use the fine-tuned model in the Vapi API now?
m
OpenAI will provide you a model name / assistant name, you just use that as the model
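Assuming the fine-tuned model ID OpenAI returns looks like the usual ft:... format, the assistant's model block in Vapi would simply reference it, along these lines (a sketch, so verify the exact field names against the Vapi API reference):

```json
{
  "model": {
    "provider": "openai",
    "model": "ft:gpt-3.5-turbo-0125:your-org::abc123",
    "messages": [
      {
        "role": "system",
        "content": "You are a scheduling assistant. Always fetch the current date and time before booking."
      }
    ]
  }
}
```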
b
Okay, that's what I was hoping.
m
You may be able to use that api with make
Not sure but it’s free
b
Make has a function for current date and time already built in. I just wanted a way for it to not have to call make at all and just get it on its own.
m
Ah yeah realistically you’re going to have to make your own serverless function
b
What would be really cool is if someone made a Make template that fine-tuned GPT-3.5 for you... there's an idea ;)
m
You have to make training data
Other than that, OpenAI's fine-tuning platform would be easier than making it on Make
b
So essentially fine-tuning the GPT is just like hard-coding a prompt (system, user, function, or assistant) into the GPT itself so it doesn't have to be added to the prompt when using the GPT, right?
m
Kind of, it’s more like training it on cause vs effect logic
If this then this
And it’ll pick up on the patterns
b
Okay, so then how would it pick up time better by fine tuning it versus me just putting it in the prompt itself?
m
It would pick up the function better
Better understanding of when and where to fetch the time
b
Right, but in the prompt I'm not having it do a function, i'm just straight up telling it.
m
You don’t tell it about functions in a prompt though
b
How else would it know when to perform the function?
m
I'm saying you'll get better reliability by making it a function; fine-tuning it will then make it an expert on calling that function
When you make an assistant with functions you define to it what functions it has and how to use them
b
For Vapi's sake, if I don't call the function out in the prompt, it pretty much doesn't do shit... lol.
m
3.5 0125 is bad at function calling without fine tuning
b
It pretty much calls the function right every time, it just doesn't know the time. lol
m
Can you do me a favor
b
Sure
m
How many tokens is your prompt
b
m
Not bad. That prompt to get date and time requires a good amount of reasoning
That's why GPT-4 and up can probably do it
If you want it to work on 3.5 you're going to have to fine-tune the reasoning for your use case
If you make only this your prompt, does it work?
b
This is all really rough btw, just in process of refining it
m
A lot of reasoning for 3.5 without fine tuning
b
Right, with GPT-4 a single sentence basically gets it done.
But this is still a full second faster when actually working with them on the phone
m
If you want to keep using 3.5 it would be worth it to fine tune it
b
I'm reading the docs and it looks like I somehow need to feed it JSONL with my data for fine-tuning, not super sure how to actually do that part.
m
Do you have VS code
b
no
I'm not a coder by any means
Can I do it through Postman?
lol
Found this article for Postman. I will mess with this tonight. https://help.landbot.io/article/pfgnzf593y-fine-tune-gpt-3-with-postman
Which model is smartest to fine-tune?
m
0125
b
Any particular reason?
Thanks for the help BTW, you've saved me hours of googling.
m
If I get some time today I’ll try and help you make a json dataset
No worries man
It's the latest 3.5 model: higher accuracy, more bug fixes, and better function calling.
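For reference, kicking off the fine-tune is just two API calls: upload the JSONL file, then create the job on 0125. A minimal sketch with the official OpenAI Python SDK (the same endpoints can be hit from Postman):

```python
# Upload training data and start a fine-tuning job on gpt-3.5-turbo-0125.
# Assumes OPENAI_API_KEY is set in the environment and training.jsonl exists.
from openai import OpenAI

client = OpenAI()

# 1. Upload the JSONL training file
training_file = client.files.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start the fine-tuning job against the 0125 base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-0125",
)

# Poll this job until it finishes; the result includes the model name to use
print(job.id, job.status)
```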
b
So it probably doesn't help that I've been testing on 3.5-turbo-1106
My dumb ass assumed the bigger the number, the newer the model.
m
Hahaha fair man
It happens
Try 0125
Might be an easy fix
b
Fuck, that fixed it
Still wouldn't mind training it on my mortgage data tho. I've got a shit ton of data I've compiled for my chatbot that would make it way smarter.
m
Hell yeah brother
Yeah I mean fine-tuning is very underrated, great performance, cheap to fine-tune
Better than GPT-4 output with good data, 90% cheaper
b
Once I get my Postman setup to easily fine tune then it should be pretty easy
m
Yeah you’ll still need to make the jsonl data though
b
That's pretty easy, there's a free website online that converts CSV to JSONL.
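If the website route ever falls short, the conversion is only a few lines of Python; this sketch assumes a CSV with hypothetical question and answer columns and writes one chat-format example per line:

```python
# Convert a CSV of Q&A pairs into chat-format JSONL for fine-tuning.
# The "question" and "answer" column names are placeholders for whatever the CSV uses.
import csv
import json

with open("mortgage_faq.csv", newline="", encoding="utf-8") as src, \
        open("training.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        example = {
            "messages": [
                {"role": "system", "content": "You are a helpful mortgage assistant."},
                {"role": "user", "content": row["question"]},
                {"role": "assistant", "content": row["answer"]},
            ]
        }
        dst.write(json.dumps(example) + "\n")
```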
m
Nice
Glad that fixed it boss
b
Is there a way to have categories and subcategories in the fine-tuning?
Thanks again, big help
m
If you fine-tune through the API you can assign weights to each line of JSONL; other than that, just make it one big JSONL file of all of that data
Anytime man
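For what it's worth, the per-message weighting mentioned above looks roughly like this in the training data: a weight key on assistant messages, 0 to exclude one from training and 1 to include it. Double-check the exact behavior against OpenAI's current fine-tuning docs.

```json
{
  "messages": [
    {"role": "system", "content": "You are a helpful mortgage assistant."},
    {"role": "user", "content": "Can you book me for tomorrow?"},
    {"role": "assistant", "content": "Sure, what time works?", "weight": 0},
    {"role": "user", "content": "Actually, what's today's date first?"},
    {"role": "assistant", "content": "Let me check the current date and time for you.", "weight": 1}
  ]
}
```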
b
Okay, I see that wandb in the OpenAI files, I'll read that.
Hey Mason, one last question. If I fine-tune it on just gpt-3.5-turbo, will it default to 0125, and will it allow me to skip having to re-fine-tune it when/if a newer 3.5 model comes out?
Only reason I ask is the article you posted said this:
m
Honestly not too sure
It only costs like $0.30 to fine-tune it; I'd just fine-tune it on 0125 and if anything else comes out, worry about it later
I doubt there will be new versions of 3.5
b
I'm building something that I'll be deploying to 100 companies through the High Level CRM, so any change I make after I deploy it will be a PITA
m
It’ll be as easy as replacing the assistant you’re referencing in your calls
b
Got it, thanks again
Would love to connect with you over a Zoom call and see what you've got cooking, if you're ever open to it
m
Yeah man, we're probably pivoting soon depending on how Y Combinator goes; if it goes bad, we'll open source all of our strategies and what we're building and pivot
Currently building a payment infrastructure for AI agents in case ovEngine doesn’t work out
Will be a while till we announce it though
Could definitely hop on a zoom some day
b
Inbox is open
a
Has this issue been resolved?
s
@Mason Thanks man for crash course 🫡
h
Stupid question. What does fine tuning actually mean? Is it just giving it examples in the prompt on how to respond?
s
The issue here is that Brady can't opt for the SOTA model as it has high latency. We need to fix certain things to make it work correctly with the GPT 3.5 model, as it doesn't handle function calling properly. Fine-tuning would solve this issue.
b
@Sahil So far i haven't had any issues with function calling. Now that my time thing is sorted it works like a champ
s
Good to hear man!
e
Used a webhook in Make.com and responded with the "Now" variable. Super simple.
b
I deliver the current date and time with the payload when starting the assistant via the API.
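That can look roughly like the payload below when creating a call through Vapi's API; the assistantOverrides / variableValues field names follow the dynamic-variables mechanism linked a few messages down, so treat this as a sketch and verify against the docs (the IDs and phone number are placeholders).

```json
{
  "assistantId": "YOUR_ASSISTANT_ID",
  "assistantOverrides": {
    "variableValues": {
      "now": "Friday, April 26, 2024, 5:18 PM MDT"
    }
  },
  "customer": {
    "number": "+15555550123"
  }
}
```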
w
I am having this issue too, but I am new to this. I have been reading this but I am still confused. Is there a function ready for date & time? And can I just call that function for the AI to know the current date & time?
@Henryk can you please help? Thank you
h
In your prompt, use the {{now}} variable; this gives the current time. https://docs.vapi.ai/assistants/dynamic-variables
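A minimal version of that in the system prompt might read something like the snippet below; this is just a sketch of the idea, with formatting and timezone options covered in the doc above.

```
Today's date and time is {{now}}.
When a caller asks to book "tomorrow" or gives a relative date, resolve it against {{now}} before booking.
```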
w
@Henryk I put this in the prompt "If customers ask about the date and time, use {{date}} for current date and {{time}} for current time." but it does not seem to work.
w
thank you. I am gonna try it now.
h
Did it work?
w
It worked well. Only one small issue with the formatDate: I put the America/Chicago timezone but Make.com keeps giving me different hours. Not sure why.
Thank you, Henryk