# Approximate LLM tokens consumed per minute
How many tokens does the AI agent approximately consume per minute if I use custom LLM?
@EthanTing The number of tokens an AI agent consumes per minute when using a custom Large Language Model (LLM) depends on several factors, including the complexity of the tasks, the verbosity of the responses, and the frequency of the requests. As a rough rule of thumb, one token corresponds to about 4 characters of English text, or roughly 0.75 words. So if an AI agent generates about 150 words per minute, that would be approximately 200 tokens per minute of output. Keep in mind this is a very general estimate, and it only counts generated output; the prompt tokens sent with each request (system instructions, conversation history, context) often add significantly to the total. For a more accurate assessment, you would need to analyze the specific usage patterns of your custom LLM. If you have more detailed information about your use case, I can provide a more precise estimate.
You can continue this conversation by mentioning me in the message.
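If it helps, here is a minimal sketch of that rule of thumb in Python. It assumes the common heuristic of roughly 0.75 words per token (about 4 characters per token); the real ratio depends on the tokenizer and language, so treat the result as a ballpark, not a billing figure:

```python
# Heuristic: ~0.75 English words per token (~4 characters per token).
# Actual ratios vary by tokenizer and language.
WORDS_PER_TOKEN = 0.75

def estimate_tokens_per_minute(words_per_minute: float) -> float:
    """Convert an output rate in words/minute to an approximate tokens/minute."""
    return words_per_minute / WORDS_PER_TOKEN

# 150 words/min of generated output -> ~200 tokens/min
print(estimate_tokens_per_minute(150))
```

For a tighter estimate you would count the actual tokens in your prompts and responses with your model's own tokenizer rather than relying on the word-count heuristic.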
It depends on the call; you can get an approximate figure based on your own conversations.