Robert Simons — 04/10/2025, 11:28 PM

Vapi Support Bot — 04/10/2025, 11:30 PM
- Review the silenceTimeoutSeconds, responseDelaySeconds, and llmRequestDelaySeconds settings, as these [directly impact response times](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1332286268805087272).
- **Chunking Configuration**: The chunkPlan.enabled setting and parameters like minCharacters and punctuationBoundaries affect text processing speed. You might want to adjust these:
```json
{
  "voice": {
    "chunkPlan": {
      "enabled": true,
      "minCharacters": 30,
      "punctuationBoundaries": [".", "!", "?", ";"]
    }
  }
}
```
(Try adjusting minCharacters anywhere between 1 and 80.)
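As a rough illustration of how minCharacters and punctuationBoundaries interact, here is a minimal chunking sketch; the function name and logic are assumptions for illustration, not Vapi's actual implementation:

```python
def chunk_text(text, min_characters=30, boundaries=(".", "!", "?", ";")):
    """Split text at punctuation boundaries, but only emit a chunk once it
    has reached min_characters (illustrative sketch, not Vapi's code)."""
    chunks, current = [], ""
    for ch in text:
        current += ch
        # Only cut at a boundary if the pending chunk is long enough.
        if ch in boundaries and len(current) >= min_characters:
            chunks.append(current.strip())
            current = ""
    if current.strip():
        chunks.append(current.strip())
    return chunks

# A lower minCharacters lets the first chunk (and thus the first TTS audio) start sooner:
print(chunk_text("Hi there! I can help with that.", min_characters=5))
# → ['Hi there!', 'I can help with that.']
print(chunk_text("Hi there! I can help with that.", min_characters=30))
# → ['Hi there! I can help with that.']
```

The trade-off this models: a lower minimum means smaller chunks reach the voice provider sooner, at the cost of less natural prosody across chunk boundaries.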
Praveen — 04/11/2025, 7:42 PM
```json
{
  "startSpeakingPlan": {
    "waitSeconds": 0.6,
    "smartEndpointingPlan": {
      "provider": "livekit",
      "waitFunction": "50 + 400 * x"
    }
  }
}
```
Praveen — 04/11/2025, 7:48 PM
The wait function has the form baseValue + scalarValue * x, where:
- baseValue is the minimum wait time in milliseconds
- scalarValue determines how much the prediction affects the wait
- x is the prediction value (0-1) indicating if you're still likely to speak
The default function in Vapi is 70 + 4000 * x, which gives a range from 70 ms (minimum wait, x = 0) to 4070 ms (maximum wait, x = 1).
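Evaluating the formula at a few prediction values makes these ranges concrete. A small sketch, assuming the wait function is evaluated as plain arithmetic, using the Vapi default 70 + 4000 * x and the custom 50 + 400 * x from the config above:

```python
def wait_ms(base_value, scalar_value, x):
    """Endpointing wait time in milliseconds: baseValue + scalarValue * x,
    where x in [0, 1] is the predicted likelihood the caller keeps speaking."""
    return base_value + scalar_value * x

print(wait_ms(70, 4000, 0.0))  # 70.0   — Vapi default, caller almost certainly done
print(wait_ms(70, 4000, 1.0))  # 4070.0 — Vapi default, caller likely mid-thought
print(wait_ms(50, 400, 0.5))   # 250.0  — the custom "50 + 400 * x" at x = 0.5
```

Shrinking scalarValue (4000 → 400) is what makes the custom function so much snappier: even when the model thinks you will keep talking, the bot waits at most 450 ms instead of 4070 ms.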