Speed of speech
# support
markcbrem
Just curious how you're approaching the LLM-to-TTS part, and most importantly, how you keep latency low. Do you work with the first chunks as the LLM streams them in, send those to the TTS provider, and then play the resulting audio simultaneously/asynchronously? That's the best we can do as far as I understand it.. 🙂
*pardon me, title: speed of speech intended
v
@markcbrem You're absolutely right! The approach you've described is indeed one of the best practices for achieving low latency in a voice pipeline that combines a large language model (LLM) with text-to-speech (TTS). Here's how it typically works:

1. **Streaming LLM Output**: As the LLM generates output, it does so in chunks or streams. This allows portions of the generated text to be available immediately, rather than waiting for the entire output to be completed.
2. **Chunked TTS Processing**: These chunks of text are sent to the TTS provider as they become available. The TTS provider, like PlayHT, is designed to handle these chunks efficiently.
3. **Asynchronous Playback**: The TTS provider starts converting text into speech while subsequent chunks are still being generated by the LLM. The resulting speech is streamed to the end user asynchronously.
4. **Low Latency**: By processing and streaming text and speech simultaneously, the system minimizes the delay between the user's request and the audible response, creating a more conversational and responsive experience.

This method leverages the streaming capabilities of both the LLM and TTS systems so the user isn't kept waiting for the entire process to complete before hearing a response. It's a smart way to handle real-time interactions in voice applications.
markcbrem
OK, but is this how you actually do it?
v
Got it. Tagging @User for additional help.