Decrease endpointing
# support
r
How do I decrease my endpointing? I have two assistants that have the same settings, but one has significantly higher endpointing.

Call ID: dfcc23b8-8940-45c2-8814-70c14a211506

```
06:07:56:502 [INFO] Turn latency: 3376ms (transcriber: 0ms, endpointing: 1245ms, kb: N/A, model: 1477ms, voice: 624ms)
06:08:08:894 [INFO] Turn latency: 5135ms (transcriber: 403ms, endpointing: 3426ms, kb: N/A, model: 473ms, voice: 803ms)
06:08:20:827 [INFO] Turn latency: 3780ms (transcriber: 678ms, endpointing: 1756ms, kb: N/A, model: 695ms, voice: 608ms)
06:08:31:295 [INFO] Turn latency: 2932ms (transcriber: 0ms, endpointing: 1579ms, kb: N/A, model: 651ms, voice: 700ms)
06:08:40:864 [INFO] Turn latency: 4257ms (transcriber: 714ms, endpointing: 2491ms, kb: N/A, model: 595ms, voice: 440ms)
06:08:48:394 [INFO] Turn latency: 3268ms (transcriber: 594ms, endpointing: 1690ms, kb: N/A, model: 517ms, voice: 466ms)
06:09:01:698 [INFO] Turn latency: 4577ms (transcriber: 0ms, endpointing: 2871ms, kb: N/A, model: 721ms, voice: 981ms)
```
v
Hi there! 👋 I notice you're experiencing different endpointing behavior between two assistants despite having the same settings. Looking at your logs, I can see some significant endpointing times (ranging from 1245ms to 3426ms). While I can't see your current configuration details, I can definitely help you with ways to decrease endpointing in Vapi! Here are several methods you can try:

1. **Disable Smart Endpointing**: Set `smartEndpointingEnabled` to `false` in your assistant configuration [via the UPDATE /assistant/:id API](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1332286268805087272).

2. **Adjust Deepgram's Endpointing Settings**: Try setting Deepgram's endpointing to 300ms [in your configuration](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1332286268805087272).

3. **Configure startSpeakingPlan and Transcriber Settings**: Update your configuration with these settings:
```json
"startSpeakingPlan": {
  "waitSeconds": 0.8,
  "smartEndpointingEnabled": false
},
"transcriber": {
  "model": "nova-2-phonecall",
  "provider": "deepgram",
  "endpointing": 300
}
```
[as suggested in this thread](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1329414710583627796).

4. **Adjust numWordsToInterruptAssistant and maxDurationSeconds**: Try updating `numWordsToInterruptAssistant` to 5 and `maxDurationSeconds` to 10 in your settings [via the VAPI dashboard](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1334027136305991750).

5. **Check for Hidden Configuration Differences**: Even though your assistants have "the same settings," there might be underlying default values that differ. Use the `GET /assistant/:id` endpoint to fetch the full configuration of both assistants and compare them thoroughly.

Would you like me to provide more specific guidance on implementing any of these solutions?
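For step 5, a small script can make the comparison systematic. This is a hedged sketch: the `GET /assistant/:id` endpoint is mentioned above, but the base URL and auth header shown are assumptions about the Vapi API, and `diff_configs` is a hypothetical helper, not part of any SDK.

```python
# Sketch: fetch both assistant configs and diff them to surface hidden
# differences. The base URL and Bearer-auth header are assumptions.
import json
import urllib.request


def fetch_assistant(assistant_id: str, api_key: str) -> dict:
    """Fetch one assistant's full configuration (network call, sketch only)."""
    req = urllib.request.Request(
        f"https://api.vapi.ai/assistant/{assistant_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def diff_configs(a: dict, b: dict, prefix: str = "") -> list:
    """Return dotted paths where two configs disagree, including keys
    present in only one of them (where hidden defaults may lurk)."""
    diffs = []
    for key in sorted(set(a) | set(b)):
        path = f"{prefix}{key}"
        if key not in a or key not in b:
            diffs.append(path)
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            diffs.extend(diff_configs(a[key], b[key], path + "."))
        elif a[key] != b[key]:
            diffs.append(path)
    return diffs


# Local illustration (no network): one assistant sets endpointing explicitly,
# the other silently falls back to a default.
fast = {"transcriber": {"provider": "deepgram", "endpointing": 300}}
slow = {"transcriber": {"provider": "deepgram"}}
print(diff_configs(fast, slow))  # ['transcriber.endpointing']
```

Running `diff_configs` on the two fetched configs would pinpoint exactly which nested field differs between the fast and slow assistant.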
k
checking if this is resolved/solved for you?
r
endpointing is still really high
k
The assistant's delayed response is due to endpointing, which determines how long to wait before recognizing that the user has finished speaking. - **Endpointing Plan Configuration**: Set the smart endpointing plan provider to `livekit` and update the wait function within the Start Speaking Plan (for English transcription). - **Wait Function Optimization**: Iterative testing with real users is necessary to determine the optimal wait function values, as the suggested values may not suit every scenario.
```json
"startSpeakingPlan": {
  "waitSeconds": 0.6,
  "smartEndpointingPlan": {
    "provider": "livekit",
    "waitFunction": "50 + 200 * x"
  }
}
```
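To get a feel for tuning the wait function, it can help to evaluate it over its input range. A minimal sketch, assuming `x` is the endpointing model's estimate (0 to 1) related to whether the user has finished speaking and that the expression yields a wait time in milliseconds; verify both assumptions against real calls before relying on them.

```python
# Sketch: evaluate the example wait function "50 + 200 * x".
# Assumption: x in [0, 1] comes from the livekit endpointing model and the
# result is a wait time in milliseconds; the semantics are not confirmed here.
def wait_ms(x: float) -> float:
    """Wait time implied by the expression 50 + 200 * x."""
    return 50 + 200 * x


for x in (0.0, 0.5, 1.0):
    print(f"x={x:.1f} -> wait {wait_ms(x):.0f} ms")
# x=0.0 -> wait 50 ms
# x=0.5 -> wait 150 ms
# x=1.0 -> wait 250 ms
```

With these coefficients the wait stays between 50ms and 250ms, well below the 1245ms–3426ms endpointing times in the logs above; iterate on the constants with test calls.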
k
@Robert Simons
k
a gentle reminder to continue this thread.