Start Speaking Plan settings in Dashboard have no ...
# support
e
When I set Wait Seconds = 5 seconds, or Smart Endpointing = true, the AI still interrupts quickly. For example:
Me: Well, I...
AI (after 1 second): Yes, go on
Smart Endpointing should detect that this is obviously not the end of speech.
https://cdn.discordapp.com/attachments/1305550609306484736/1305550610384556114/Screenshot_2024-11-11_at_15.09.38.png?ex=67337041&is=67321ec1&hm=be4971ae1c6c5062a71f220f405e5fd905441de177b8c0481cacbb14d1525cbf&
s
Hey @Ethan Tan, can you share the call ID?
e
Hi @Shubham Bajaj, yes, here's an example: 6c2eb1a8-656c-4a72-8247-2db9d704d927. I'd like it to wait much longer... like up to 30 seconds rather than a few seconds, if it detects that the user has not finished speaking.
s
Hey @Ethan Tan, so your use case is: during the call, when a question is asked, let the user speak and wait around 30 seconds before sending the request to the LLM.
```json
{
  "startSpeakingPlan": {
    "customEndpointingRules": [
      {
        "type": "assistant",
        "regex": "^Hello\\? What would you love to create\\?$",
        "regexOptions": [{ "type": "ignore-case", "enabled": true }],
        "timeoutSeconds": 30
      }
    ]
  }
}
```
You need to use custom endpointing rules here; can you try this? The assistant will now wait for 30 seconds after asking "Hello? What would you love to create?" to the user.
@Ethan Tan let me know how it goes.
e
Hi @Shubham Bajaj, if I do this, won't it wait 30 seconds every time? What I want is for the assistant to wait up to 30s depending on what it detects, e.g. if it detects the user is not done speaking. This is what I believed Smart Endpointing was for; however, it seems it either doesn't detect it, or does detect it but doesn't wait.
s
Hey @Ethan Tan, Smart Endpointing is for detecting when the user is done speaking, whereas the wait is about holding the bot back before it takes the user's response and generates its next response. The detection/analysis is done using prompting, and you can use `waitSeconds` to wait for 30 seconds before analysing or detecting what the user has spoken. Does this work for you?
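For reference, here's a minimal sketch of how those two settings sit together in the startSpeakingPlan (the field names and values are illustrative, based on the dashboard settings mentioned above, so double-check them against the current API):
```jsonc
{
  "startSpeakingPlan": {
    // Assumed field: fixed delay before the assistant starts its reply.
    // 5 is the value tried above; the API may cap how high this can go.
    "waitSeconds": 5,

    // Assumed field: let the Vapi model judge whether the user is done speaking.
    "smartEndpointingEnabled": true
  }
}
```
Note that a fixed `waitSeconds` applies to every turn, so on its own it won't give a content-dependent 30-second wait.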
e
Hi @Shubham Bajaj, apologies for the delay in responding. What I'm looking for is for it to intelligently know when to give the user more time to respond. Basically, be more human and don't interrupt when the person is thinking. For example, if the user says "Well I think.." and they pause, I want it to wait up to 30 seconds; whereas if they say "Hello, I'm good." then it should respond normally. It sounds like there is no way to do this, is that correct? I was hoping Smart Endpointing would do this, but it responds within 1 second even when the user says "Well I think.."
s
@Ethan Tan for this you need to play around with custom endpointing rules and plan according to your prompt or call flow, i.e. when the user will speak normally and when they will need more time to speak. Using custom endpointing rules you can get it done. Do let me know if you require more help.
e
Hi @Shubham Bajaj, could you say more please? Custom endpointing is different from smart endpointing, is that right? Are there any docs related to this?
s
Smart Endpointing uses a Vapi model to figure out whether the user has stopped speaking. Custom Endpointing is your own set of regex rules for identifying when the user or assistant has stopped speaking.
Example: https://discord.com/channels/1211482211119796234/1305550609306484736/1306181633950617652
@Ethan Tan as of now there is no documentation available, but you can create a #1211483291191083018 ticket for it, and by tomorrow I can create the documentation.
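As a rough illustration (an untested sketch; the "customer" rule type, the regex, and the timeout are assumptions modelled on the assistant rule shared earlier), you could add a customer-side rule so the assistant waits longer when the caller's last words look like a mid-sentence pause:
```jsonc
{
  "startSpeakingPlan": {
    "customEndpointingRules": [
      {
        // Assumed rule type: match against the customer's transcript
        // rather than the assistant's last message.
        "type": "customer",

        // Illustrative regex: trailing filler words such as "well", "um",
        // or "I think" suggest the caller is still mid-thought.
        "regex": "(well|um|uh|so|i think)[,.]*\\s*$",
        "regexOptions": [{ "type": "ignore-case", "enabled": true }],

        // Illustrative value: give the caller extra time before the bot replies.
        "timeoutSeconds": 10
      }
    ]
  }
}
```
Of course this only matches literal text, so it's a heuristic rather than a real understanding of whether the caller has finished.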
e
@Shubham Bajaj I see, but isn't regex about matching characters? We can't use that to decide when a person is done speaking. We would have to use an LLM's reasoning, i.e. Smart Endpointing; however, right now it doesn't seem to do anything. It still speaks when the user is pausing in the middle of their sentence.
s
Yeah, regex is about matching characters; you can use it to wait for X seconds after the assistant has finished speaking or is done with a particular turn.