hxrdxk — 05/07/2025, 1:05 PM

Vapi Support Bot — 05/07/2025, 1:07 PM
`firstMessageInterruptionsEnable: true` isn't working as expected. Try using the `mute-assistant` control message instead:

```javascript
vapi.send({
  type: 'control',
  control: 'mute-assistant'
});
```

This will stop the current LLM generation regardless of whether it's the first message or not.
Sources:
- [Vapi Documentation](https://docs.vapi.ai/quickstart/dashboard)
- [Discord Discussion](https://discord.com/channels/1211482211119796234/1338730919904673853)
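For reference, the control payload the bot suggests can be built by a small helper (a sketch; `muteAssistantMessage` is a hypothetical name, not part of the Vapi SDK):

```javascript
// Hypothetical helper: builds the control payload the support bot
// suggests sending via vapi.send() to stop the assistant mid-generation.
function muteAssistantMessage() {
  return {
    type: 'control',
    control: 'mute-assistant'
  };
}

// Usage sketch (assumes an initialized Vapi client):
// vapi.send(muteAssistantMessage());
```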
KINGS BIG — 05/08/2025, 1:29 AM

hxrdxk — 05/12/2025, 12:41 PM

yogeshverma0020 — 05/23/2025, 12:41 AM

KINGS BIG — 05/23/2025, 4:21 AM

hxrdxk — 05/23/2025, 1:05 PM

Vapi Support Bot — 05/23/2025, 1:06 PM
Set `firstMessage`
to an empty string and place your initial greeting or instruction in the system prompt. For example:
```js
const assistantConfig = {
  firstMessage: "",
  model: {
    provider: "openai",
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: "Hi there! How can I help you today?"
      }
    ]
  },
  // ...other config
};
```
This ensures the assistant starts the conversation and its first message can be interrupted if needed.
Note: There is no documented `assistant-speaks-first-with-model-generated-message` flag; use the system prompt as shown above.
Source:
- [Vapi Web SDK Documentation](https://docs.vapi.ai/sdk/web)
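The bot's config suggestion above can be wrapped in a small factory for reuse (a sketch; `buildAssistantConfig` is a hypothetical helper, not part of the Vapi Web SDK):

```javascript
// Hypothetical helper: builds an assistant config where the greeting
// lives in the system prompt and firstMessage is left empty, per the
// bot's suggestion above.
function buildAssistantConfig(greeting) {
  return {
    firstMessage: "",  // empty so no fixed, uninterruptible first message plays
    model: {
      provider: "openai",
      model: "gpt-4",
      messages: [
        { role: "system", content: greeting }
      ]
    }
  };
}

// Usage sketch (assumes an initialized Vapi Web SDK client):
// vapi.start(buildAssistantConfig("Hi there! How can I help you today?"));
```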