Marco Falcone (01/24/2025, 9:54 AM)

Vapi (01/24/2025, 9:57 AM)

Vapi (01/24/2025, 9:57 AM)

Marco Falcone (01/24/2025, 10:00 AM)

Vapi (01/24/2025, 10:00 AM)
The relevant parameter is `squad.members.assistant.model.maxTokens`. It defines the maximum number of tokens the assistant will generate in each turn of the conversation; the default is 250 tokens.
If you want to accommodate longer text generations, raise the `maxTokens` parameter to a higher value, within the constraints of what the TTS system can handle.
To set this in the assistant configuration, access the assistant's settings through your development environment or API configuration panel and update the `maxTokens` value accordingly.
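As a minimal sketch of that update, the payload below raises the per-turn limit to 500 tokens. The `model.maxTokens` field path follows the `squad.members.assistant.model.maxTokens` naming used in this thread; the endpoint shape in the comment and the value 500 are illustrative assumptions, so verify both against the Vapi API reference.

```python
import json

# Hypothetical assistant-update payload: raise the per-turn generation
# limit from the 250-token default to 500 tokens. Field path follows
# the `model.maxTokens` naming from this thread.
payload = {"model": {"maxTokens": 500}}

# Sent as the body of an assistant update request, roughly:
#   curl -X PATCH https://api.vapi.ai/assistant/<ASSISTANT_ID> \
#     -H "Authorization: Bearer $VAPI_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d '{"model": {"maxTokens": 500}}'
body = json.dumps(payload)
print(body)
```

Keep the value within what the TTS system can handle per turn, as noted above.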
Would you like to proceed with updating this configuration, or do you need assistance with another setting?

Marco Falcone (01/24/2025, 10:04 AM)

Vapi (01/24/2025, 10:04 AM)

Marco Falcone (01/24/2025, 10:04 AM)

Vapi (01/24/2025, 10:04 AM)

Marco Falcone (01/24/2025, 10:06 AM)

Vapi (01/24/2025, 10:06 AM)

Shubham Bajaj (01/24/2025, 9:56 PM)

Marco Falcone (01/25/2025, 9:21 AM)

Shubham Bajaj (01/27/2025, 6:54 AM)

Marco Falcone (01/27/2025, 12:23 PM)

Marco Falcone (01/27/2025, 12:24 PM)

Shubham Bajaj (01/27/2025, 1:03 PM)
Marco Falcone (01/27/2025, 1:09 PM)
Set `optimizeStreamingLatency` to 1 and increased `chunkPlan.minCharacters` to 60. This seems to improve the output generation at the cost of a minimum initial latency.
Shubham Bajaj (01/27/2025, 1:12 PM)
`chunkPlan.minCharacters` is used for voice output.
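A hedged sketch of the two voice settings discussed in this thread, expressed as a configuration payload. The values (1 and 60) come from Marco's message; placing both fields under a `voice` object is an assumption about the config layout, so check the exact paths in the Vapi documentation.

```python
import json

# Illustrative voice configuration combining the two settings from the
# thread: optimizeStreamingLatency=1 favors faster TTS streaming, and
# chunkPlan.minCharacters=60 makes the voice pipeline buffer at least
# 60 characters before synthesizing a chunk (smoother output at the
# cost of some initial latency). Field placement is an assumption.
payload = {
    "voice": {
        "optimizeStreamingLatency": 1,
        "chunkPlan": {"minCharacters": 60},
    }
}

print(json.dumps(payload, indent=2))
```

Since `chunkPlan.minCharacters` only affects voice output, tuning it changes how text is chunked for TTS, not how much text the model generates; that remains governed by `maxTokens`.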