Workflow latency seems higher than expected compar...
# support
j
How do I lower latency in workflows? Using Deepgram for TTS and OpenAI for STT, on Vapi.
v
To lower workflow latency, minimize custom logic and external API calls within your workflow nodes, and ensure each provider (STT, LLM, TTS) is configured for real-time streaming. For example, if you have custom function calls in your workflow, try to batch or defer non-essential operations:
```json
{
  "workflow": {
    "nodes": [
      {
        "type": "start",
        "actions": [
          // Only essential, fast actions here
        ]
      }
    ]
  }
}
```
Keep your workflow as streamlined as possible to reduce processing time at each step ([Vapi Quickstart](https://docs.vapi.ai/quickstart)).
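To illustrate the "batch or defer non-essential operations" idea, here is a minimal sketch (plain Python, not the Vapi SDK; `log_analytics` is a hypothetical stand-in for any slow, non-essential side effect) showing how much turn latency you save by firing such work in the background instead of awaiting it on the reply path:

```python
import asyncio
import time

async def log_analytics(event: str) -> None:
    """Stand-in for a non-essential side effect (analytics, CRM sync, ...)."""
    await asyncio.sleep(0.5)  # simulated slow external call

async def handle_turn_blocking(user_text: str) -> str:
    await log_analytics(user_text)  # blocks the reply by ~500 ms
    return f"reply to: {user_text}"

async def handle_turn_deferred(user_text: str) -> str:
    asyncio.ensure_future(log_analytics(user_text))  # fire-and-forget
    return f"reply to: {user_text}"

async def main() -> tuple[float, float]:
    t0 = time.perf_counter()
    await handle_turn_blocking("hello")
    blocking = time.perf_counter() - t0

    t0 = time.perf_counter()
    await handle_turn_deferred("hello")
    deferred = time.perf_counter() - t0

    await asyncio.sleep(0.6)  # let the deferred task finish before exiting
    return blocking, deferred

blocking_s, deferred_s = asyncio.run(main())
print(f"blocking: {blocking_s:.2f}s  deferred: {deferred_s:.3f}s")
```

The deferred version returns in well under a millisecond because the slow call no longer sits between the user's utterance and the reply.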
v
You could also switch to TTS, LLM, and STT providers with better latency.
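If you want numbers before switching, a small harness like this can compare candidates (a sketch: the two provider functions are stubs that simulate different time-to-first-byte; in practice you would replace them with real SDK calls against your own keys):

```python
import time
from typing import Callable, Dict

def measure_latency(call: Callable[[], None], runs: int = 3) -> float:
    """Average wall-clock time of `call` over several runs."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        call()
        samples.append(time.perf_counter() - t0)
    return sum(samples) / len(samples)

# Stubs standing in for real TTS/STT/LLM requests; swap in actual SDK calls.
def provider_a() -> None:
    time.sleep(0.12)  # pretend ~120 ms time-to-first-byte

def provider_b() -> None:
    time.sleep(0.05)  # pretend ~50 ms time-to-first-byte

candidates: Dict[str, Callable[[], None]] = {
    "provider_a": provider_a,
    "provider_b": provider_b,
}
results = {name: measure_latency(fn) for name, fn in candidates.items()}
fastest = min(results, key=results.get)
print(fastest, results)
```

Averaging over a few runs matters because single requests to hosted providers vary a lot; measure at the same time of day you expect call traffic.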
a
Workflows seem unusable.
It's super slow in its native GUI.
k
I completely agree. idk why it's still not in beta.
Not only is it slow at responding, but sometimes it doesn't respond at all. I've also had cases where it started giving an answer from a previous node.
v
Could you provide more feedback about workflows? We would like to test certain areas and pain points to improve the overall experience. Any and all feedback is welcome here
k
Yes I can give you all the feedback possible because I need to get this working as soon as possible.
It seems like the "Condition" takes too long to verify whether it can move to the next node or generate a response within the current node (avg 4~8 seconds). Sometimes this validation process seems to get stuck and never responds at all; I need to say something again to get it unstuck. I've also had an issue where it moves on to a new node, and once it starts saying something for the new node, it cuts off and starts saying a response that was generated for the previous node.
v
Thank you for the feedback. When we perform tests on workflows this weekend, we will specifically look into the issues you have highlighted and find a solution to improve the experience.
Just a heads up, we are still investigating. Thank you for your patience
For the case where it responded from a different node, do you happen to have a call log or ID for that? We were not able to reproduce it in our testing. Thank you for providing all the feedback; I have taken note of it for testing, as it has highlighted some issues in workflows. If you have any Loom recordings of that random node response issue, that would be incredibly helpful.