Ability for external service to push information t...
# support
z
I'm trying to build an audio-only personal trainer AI that can guide a user through their weightlifting workout, and I'm unsure how to do the following: for weightlifting, a user has a defined rest time between sets. I'd like the AI to have the ability to set a timer and then notify the user when it's time to begin the next set. I don't see a way to send messages to the LLM or voice output queue from an external source, which I think is required here. Any suggestions?
s
Hey @zbeaver4, another user here. I had a similar question a few days ago. The conclusion I came to was that you'd need to implement a custom LLM server to inject messages into the context. Ref to docs: https://docs.vapi.ai/custom_llm So you'd set up an intermediate server between Vapi and your preferred LLM provider, inject the messages there, then send the response on to Vapi. It'd be really nice if the Vapi devs gave us some way to do this through their platform, as I imagine the above implementation would introduce additional lag which I'm unsure how to engineer around at the moment.
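For concreteness, here's a rough sketch of what that intermediate server could look like, assuming the custom LLM option just needs an OpenAI-compatible `/chat/completions` endpoint (the injected message, model, and port are placeholders, and this skips streaming, which the real integration probably needs, so check the docs):

```typescript
// Minimal OpenAI-compatible proxy that injects an extra message before
// forwarding to the real provider. Sketch only: the injected content and
// endpoint details are placeholders, not confirmed Vapi specifics.
import express from "express";

const app = express();
app.use(express.json());

app.post("/chat/completions", async (req, res) => {
  const body = req.body;

  // This is where you'd conditionally inject your own message,
  // e.g. only when a rest timer has fired.
  const messages = [
    ...body.messages,
    { role: "system", content: "The rest timer has finished. Tell the user to start the next set." },
  ];

  // Forward to the upstream provider (OpenAI shown as an example).
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ ...body, messages }),
  });

  // Relay the provider's response back to Vapi unchanged.
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000, () => console.log("custom LLM proxy listening on :3000"));
```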
That, or in your case, silent text messages via the client SDKs might be a good fit, since I imagine you're envisioning this as an app of some sort: https://docs.vapi.ai/text_message
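Something like this is roughly what I have in mind on the client; the `send` / `add-message` payload shape is my assumption from those docs, so double-check it:

```typescript
// Sketch of injecting a message mid-call from the client.
// Assumes the @vapi-ai/web SDK and its "add-message" payload shape;
// verify against https://docs.vapi.ai/text_message.
import Vapi from "@vapi-ai/web";

const vapi = new Vapi("YOUR_PUBLIC_KEY"); // placeholder key
vapi.start("YOUR_ASSISTANT_ID");          // placeholder assistant id

// Later, e.g. when a local rest timer fires:
vapi.send({
  type: "add-message",
  message: {
    role: "system",
    content: "Rest period is over. Prompt the user to begin their next set.",
  },
});
```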
z
Thanks! Through silent text messages, can you elicit a response from the model to be spoken to the user? I don’t think system messages will do that. Would you have to fake a user message?
m
You'd do this with custom webhooks. Run an edge function or a server that the bot calls when X happens; the edge function is the smart part, and you can write it to do whatever you want. I'd play with the prompt so that your edge function runs the timer (it could even keep track of more if you want) and sends a message back, along the lines of "do not respond until you receive this message: RaNdOMMessgagEKey283". I struggle with the coding there. I still have to actually read the documentation, but if you're using Twilio, you could also use TwiML verbs to output whatever you need based on the webhooks you build. Or I have no idea what I'm talking about. 🙃 Someone back me up or set me straight.
n
Yes, exactly. You could use custom LLMs or functions and just run the timeouts on your side. At some point we'll have our own language like Twilio's TwiML, since that lets you inject control variables, but for now you'll need to work around it.
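Roughly speaking, the server-URL side of "functions with timeouts on your side" could look like the sketch below; the webhook payload field names here are a guess, so check the server-URL docs before relying on them:

```typescript
// Sketch of a Server URL handler for a "setTimer" function call.
// Field names (message.type, functionCall, result) are assumptions
// based on Vapi's server-URL docs; the function name is hypothetical.
import express from "express";

const app = express();
app.use(express.json());

app.post("/vapi/webhook", (req, res) => {
  const msg = req.body?.message;

  if (msg?.type === "function-call" && msg.functionCall?.name === "setTimer") {
    const seconds = Number(msg.functionCall.parameters?.seconds ?? 60);

    // Kick off the rest timer on our side; what happens when it fires
    // (e.g. pushing a message back into the call) is handled elsewhere.
    setTimeout(() => {
      console.log(`rest timer of ${seconds}s finished`);
    }, seconds * 1000);

    // Acknowledge the function call immediately.
    return res.json({ result: `Timer set for ${seconds} seconds.` });
  }

  // Ignore other webhook events (transcripts, status updates, etc.).
  res.sendStatus(200);
});

app.listen(3001, () => console.log("server URL handler listening on :3001"));
```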
z
@nikhil https://docs.vapi.ai/clients mentions message events being sent to the client "for receiving messages sent to the Server URL locally on the client, so you can show live transcriptions and use function calls to perform actions on the client." So I'm thinking I could set up a server URL that responds to the "set timer" function call from the LLM (which is instructed in the system message to set a timer between each workout set), but then listen for that function-call message on the client side, set the timer locally, and send a message to the LLM from the client indicating that the timer has gone off (rough sketch below). Wdyt? This seems like it would bypass the need for a custom LLM server. If it's reasonable, my only question would be whether it's necessary to set up a server URL for function calling if I only plan to respond to function calls on the client side.
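Sketch of the client side of that idea, assuming the Web SDK surfaces function calls through its `message` event and accepts `add-message` (both guesses from the client docs, and the "setTimer" function name is just my example):

```typescript
// Client-side sketch: listen for the "setTimer" function call, run the
// timer locally, then inject a message when it fires. Event and payload
// shapes are assumptions based on https://docs.vapi.ai/clients.
import Vapi from "@vapi-ai/web";

const vapi = new Vapi("YOUR_PUBLIC_KEY"); // placeholder
vapi.start("YOUR_ASSISTANT_ID");          // placeholder

vapi.on("message", (msg: any) => {
  if (msg.type === "function-call" && msg.functionCall?.name === "setTimer") {
    const seconds = Number(msg.functionCall.parameters?.seconds ?? 60);

    setTimeout(() => {
      // Tell the model the rest period is over so it speaks up.
      vapi.send({
        type: "add-message",
        message: {
          role: "system",
          content: "The rest timer has finished. Tell the user to start their next set.",
        },
      });
    }, seconds * 1000);
  }
});
```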
n
It's a good question. The client side can't respond to function calls yet; it can only receive them as events. The server side has to actually respond (or you mark the function as async).
z
^ Yes, I've been marking the function as async and then sending any follow-up from the client side as a system message to hack around this.
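For completeness, here's roughly what that function definition looks like with the async flag set, so Vapi doesn't wait on a server response. The `async` flag is the part nikhil mentioned; the function and parameter names are just my own example:

```typescript
// Sketch of the assistant's function definition with the call marked async.
// Field names follow the usual OpenAI-style function schema; the async flag
// is per the discussion above.
const setTimerFunction = {
  name: "setTimer",
  async: true, // don't block the call waiting for a function result
  description: "Start the rest timer between weightlifting sets.",
  parameters: {
    type: "object",
    properties: {
      seconds: {
        type: "number",
        description: "Rest duration in seconds before the next set.",
      },
    },
    required: ["seconds"],
  },
};
```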