Stop the waffling.
# support
u
hey! how are you ensuring the assistant doesn't speak too much? When testing I find they just keep waffling on and it doesn't make for natural conversation. Like I do a cold call asking if now is a good time to speak, they say yes it is, but then expand on that more than sounds natural. Is there a setting I'm missing or a prompt I need to add?
v
Hi there! 👋 I understand your frustration with the assistant talking too much during conversations. I wasn't able to see your current settings, but I can definitely help with ways to prevent that "waffling" and create more natural-sounding conversations. Here are several approaches you can use to control your assistant's verbosity:

### System Prompts

Add specific instructions in your system prompt like:

- "Keep all your responses short and simple"
- "This is a voice conversation, so keep your responses short, like in a real conversation. Don't ramble for too long"
- "Use casual language, phrases like 'Umm...', 'Well...', and 'I mean' are preferred"

According to the [documentation](https://docs.vapi.ai/quickstart/phone/outbound), these instructions help create more natural-sounding conversations.

### Adjust These Key Settings

1. **`numWordsToInterruptAssistant`**: Controls how many words the user must speak before the AI can interrupt (try 3-5 words)
2. **`maxDurationSeconds`**: Defines the maximum time the AI waits for a response (try 5-10 seconds)
3. **`maxTokens`**: Limits the number of tokens the assistant generates (default is 250, but you can lower this)

For cold calls specifically, you might want to configure:

- **`startSpeakingPlan.waitSeconds`**: Adds a delay before the assistant starts speaking
- **`assistantOverrides.maxDurationSeconds`**: Limits the assistant's speaking time

### Voice Activity Detection

According to the [speech configuration docs](https://docs.vapi.ai/customization/speech-configuration), you can adjust:

- **`voiceSeconds`**: Controls how long the customer needs to be speaking before the assistant stops
- **`backoffSeconds`**: Controls how long the assistant waits before starting to talk again after being interrupted

(See the config sketch at the end of this message for where these fields sit on the assistant object.)

### Custom Endpointing Rules

You can implement custom endpointing rules to pause the assistant for user input, which helps create more natural back-and-forth conversation.

Is there a specific aspect of these settings you'd like me to explain in more detail?

Sources:
- [Quickstart Phone Outbound](https://docs.vapi.ai/quickstart/phone/outbound)
- [Speech Configuration](https://docs.vapi.ai/customization/speech-configuration)
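For reference, here's a minimal curl sketch of where these knobs sit on the assistant object, based on the docs linked above. The assistant ID, API key, model provider, and values are placeholders; it uses the newer `stopSpeakingPlan.numWords` field, which plays the same role as the older top-level `numWordsToInterruptAssistant` setting, so adjust to whichever form your assistant uses and merge with your existing config:

```bash
# Sketch only: placeholder ID, key, provider, and values.
curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": {
      "provider": "openai",
      "model": "gpt-4o",
      "maxTokens": 150,
      "messages": [
        {
          "role": "system",
          "content": "This is a voice conversation, so keep your responses short, like in a real conversation. Do not ramble for too long."
        }
      ]
    },
    "startSpeakingPlan": {
      "waitSeconds": 0.6
    },
    "stopSpeakingPlan": {
      "numWords": 3,
      "voiceSeconds": 0.3,
      "backoffSeconds": 1.0
    }
  }'
```

Note that nested objects you send (like `model`) can replace what's already there, so include your existing system messages and any other fields you want to keep in the payload.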
v
Hey! To help track down this issue, could you share:

- The call ID
- When exactly this happened (the timestamp)
- What response you expected to get
- What response you actually got instead

This would really help us figure out what went wrong!
u
yes thank you
call ID: acda4cfb-1efe-4f33-8942-03be7538b18e
timestamp: at 2:44 I say "yeah great question" about the pricing, and before I could continue, the assistant interrupts and continues talking about pricing, which isn't really realistic.
v
You need to update your assistant's transcriber endpointing to 300 ms and the start speaking plan's wait seconds to 0.6, and enable the smart endpointing plan. Try 2-3 test calls; it should then behave the way you expect. Here is the curl request to update your assistant's transcriber and start speaking plan config. Make sure to include your existing config values in the payload so nothing you still need gets dropped.
```bash
# First, fetch the current configuration (optional, but useful as a backup
# and to see which transcriber/plan fields you already have set)
curl -X GET "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" > current_config.json

# Update the transcriber endpointing and the start speaking plan.
# Note: a nested object you send (like "transcriber") can replace the whole
# existing object, so include any fields you want to keep (model, language, etc.).
curl -X PATCH "https://api.vapi.ai/assistant/YOUR_ASSISTANT_ID" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "transcriber": {
      "provider": "deepgram",
      "endpointing": 300
    },
    "startSpeakingPlan": {
      "waitSeconds": 0.6,
      "smartEndpointingPlan": {
        "provider": "livekit"
      }
    }
  }'
```
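Roughly speaking: `endpointing: 300` has Deepgram wait about 300 ms of silence before finalizing a transcript, `waitSeconds: 0.6` gives the assistant a short pause before it starts talking, and the LiveKit smart endpointing plan tries to judge whether you've actually finished your thought (for example after a lead-in like "yeah great question") instead of jumping in on the first pause. If it still interrupts, nudge `waitSeconds` up in small steps.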