andromate12357892
Vapi Support
05/21/2024, 6:22 PM
To reduce response latency, you can adjust the responseDelaySeconds and llmRequestDelaySeconds settings in your Vapi configuration. These settings control the delay before the assistant starts speaking and the time it waits after punctuation before sending a request to the Large Language Model (LLM), respectively.
1. **Response Delay Seconds (responseDelaySeconds)**: This parameter sets the minimum number of seconds to wait after user speech before the assistant begins speaking. Reducing this value can help decrease the latency between the user's message and the assistant's response.
2. **LLM Request Delay Seconds (llmRequestDelaySeconds)**: This setting controls the minimum number of seconds to wait after punctuation before sending a request to the LLM. Optimizing this value can help in faster processing and response generation by the assistant.
Here is an example of how you might adjust these settings when creating or updating an assistant. Both values are reduced from their defaults for quicker responses and faster LLM requests:
```json
{
  "responseDelaySeconds": 0.2,
  "llmRequestDelaySeconds": 0.05
}
```
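As a rough sketch, these values could also be assembled programmatically before sending them to Vapi's assistant-update endpoint. The helper below is hypothetical (only the two field names come from the example above), and the HTTP call itself is left as a comment rather than assumed:

```python
import json

def build_latency_payload(response_delay: float = 0.2,
                          llm_request_delay: float = 0.05) -> str:
    """Return a JSON body tightening both latency-related delays.

    Field names match the Vapi settings discussed above; the defaults
    here are the reduced example values, not Vapi's own defaults.
    """
    payload = {
        "responseDelaySeconds": response_delay,
        "llmRequestDelaySeconds": llm_request_delay,
    }
    return json.dumps(payload)

body = build_latency_payload()
print(body)
# This body could then be sent to the assistant-update endpoint, e.g.
# with an HTTP PATCH request carrying your Vapi API key in the
# Authorization header (endpoint details depend on the Vapi API docs).
```

This keeps the latency tuning in one place, so experimenting with different delay values only requires changing the two arguments.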
By fine-tuning these settings, you can enhance the responsiveness of your AI assistant, thereby improving the user experience by reducing the noticeable latency after the first message. Adjust these parameters based on your specific latency requirements and the complexity of the assistant's tasks.
Sahil
05/21/2024, 6:25 PM