Jose
04/01/2025, 4:14 PM

Vapi Support Bot
04/01/2025, 4:16 PM
1. **Set the `firstMessage` parameter to an empty string** in your assistant configuration:
```javascript
{
  "assistant": {
    "firstMessage": "",
    // other configuration options
  }
}
```
According to [VAPI documentation](https://docs.vapi.ai/quickstart/dashboard), the first message is optional, so it can be disabled.
2. **Use the `triggerResponseEnabled` flag**: This controls whether a system message triggers an LLM response:
```javascript
vapi.send({
  type: 'add-message',
  message: {
    role: 'system',
    content: 'Your announcement content here'
  },
  triggerResponseEnabled: false
})
```
This approach was mentioned in [Discord discussions](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1338730919904673853) about controlling LLM responses.
3. **Use the `mute-assistant` control message**: This stops the current LLM generation:
```javascript
vapi.send({
  type: 'control',
  control: 'mute-assistant'
})
```
4. **Set `assistantOverrides.maxDurationSeconds`**: This parameter caps the total call duration in seconds, which limits how long the assistant can keep speaking:
```javascript
{
  "assistantOverrides": {
    "maxDurationSeconds": 120
  }
}
```
5. **Adjust endpointing parameters**: Try setting `numWordsToInterruptAssistant` to a higher value and an appropriate `maxDurationSeconds` to control the AI's timing behavior.
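The options from steps 1, 4, and 5 can be combined in a single configuration. The sketch below is illustrative only; the field names and values are taken from the discussion above, so verify them against the current Vapi API reference before relying on them:

```javascript
// Illustrative combined configuration (field names assumed from the steps above):
{
  "assistant": {
    "firstMessage": ""                    // step 1: suppress the opening message
  },
  "assistantOverrides": {
    "maxDurationSeconds": 120,            // step 4: cap total call length (seconds)
    "numWordsToInterruptAssistant": 5     // step 5: require more words before the caller can interrupt
  }
}
```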
Note that according to [VAPI documentation](https://docs.vapi.ai/quickstart/web), the first message cannot be interrupted and is guaranteed to be spoken, so you might need to experiment with these approaches to find what works best for your use case.

Vapi Support Bot
04/01/2025, 4:16 PM

Kyle Brunker
04/01/2025, 9:06 PM

Jose
04/02/2025, 1:23 AM

DebGit
04/03/2025, 7:41 AM

Kyle Brunker
04/06/2025, 12:43 PM