Silent messages to assistant running an outbound c...
# support
Maybe I'm missing something obvious here, but I'm trying to figure out how I could silently send messages to an agent that is currently on an outbound call. As I understand function calling, the agent acts on information it receives within the conversation, but what I'd like is the ability to inject tokens into the conversation that moderate the assistant's behaviour, triggered not by the LLM itself but by another model that is monitoring the conversation.

Specifically, what I have in mind is a special [SILENCE] token that the language model would understand as an input in its own right. I had a conversation with the bot where I expected it to keep talking, but it didn't because it received no input from me. In normal conversation, one party's silence is often a cue for the other to continue, and I want some way to control this.

Alternatively, could y'all implement some controls around this? If we had control over when a silence token gets injected (e.g. inject the token after 400ms of silence), we could tell the language model in the prompt how to respond to that token at various points in the conversation. In some cases you might want the model to respond with silence, since the user is likely still talking. Or, for a more graceful interruption, the model could generate a passive filler like "right.. right SO" while the user is talking, and once the user stops, a silence token could be injected so the model knows to start its next message.

Maybe I just need to fine-tune/hack my own transcriber model for this? I don't know. It also feels like this could be a pretty buggy feature without fine-tuning the language models.
Apologies, I'm starting to see how one could do this by running one's own custom LLM server.
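For what it's worth, here's a rough sketch of the injection logic I mean, assuming a custom OpenAI-style chat server sits between the call and the model. The `SILENCE_TOKEN` string, the 400ms threshold, and the function name are all placeholders I made up, not anything the platform provides:

```python
SILENCE_TOKEN = "[SILENCE]"     # hypothetical special token the prompt would explain
SILENCE_THRESHOLD_MS = 400      # assumed threshold; would need tuning per use case

def maybe_inject_silence(messages, silence_ms):
    """Return the chat history, appending a [SILENCE] user turn when the
    caller has been quiet longer than the threshold. Pure function so the
    proxy can call it on every tick without mutating shared state."""
    if silence_ms < SILENCE_THRESHOLD_MS:
        return messages
    # Don't stack duplicate silence turns back to back.
    if messages and messages[-1].get("content") == SILENCE_TOKEN:
        return messages
    return messages + [{"role": "user", "content": SILENCE_TOKEN}]

history = [{"role": "assistant", "content": "Are you still there?"}]
maybe_inject_silence(history, 120)   # below threshold: history unchanged
maybe_inject_silence(history, 650)   # appends a [SILENCE] user turn
```

The monitoring model (or a simple VAD timer) would call this before each forward to the LLM, and the system prompt would describe how to react to the token at different stages of the call.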