The extra prompts cost too much.
# support
t
During a 4-minute conversation, my extra prompts used a combined total of 2,900 tokens, costing $0.03. In contrast, my main prompt used 14,000 tokens and only cost $0.01. It is surprising that the three extra prompts, with a total of 2,900 tokens, ended up costing more than the main prompt.
v
@Tarik It seems there might be a bit of confusion regarding the costs associated with tokens and voice calls. VAPI charges for voice calls are based on the duration of the call, at a rate of $0.05 per minute, prorated to the second. The token usage you're referring to is likely associated with a different aspect of the service, such as transcription or text-to-speech, which may have separate pricing based on the number of tokens used. If you're seeing a discrepancy in costs between prompts based on token usage, it could be due to the specific pricing structure of the providers for those services. VAPI itself charges only for the call duration, and any additional costs for tokens would be at-cost from the providers. If you need a detailed breakdown of the costs or have further questions about the pricing structure, please let me know, and I can assist you further or direct you to support for more detailed information.
You can continue this conversation by mentioning me in the message.
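A quick sketch of the call pricing described above: $0.05 per minute, prorated to the second (the rate comes from the message above; token charges are billed separately, at provider cost, and are not part of this formula).

```python
# VAPI call pricing as described above: $0.05/minute, prorated to the second.
# Token usage (LLM, transcription, TTS) is billed separately by the providers.
PER_MINUTE = 0.05

def call_charge(duration_seconds: float) -> float:
    """Charge for one call, prorated to the second."""
    return PER_MINUTE * duration_seconds / 60

print(f"4-minute call:  ${call_charge(240):.2f}")   # $0.20
print(f"90-second call: ${call_charge(90):.4f}")    # $0.0750
```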
Got it. Tagging @User for additional help.
j
Interesting, I wonder if it still uses GPT-4 Turbo
s
Can you tell me your assistant id?
t
@Sahil
97006188-c883-469c-829f-5ac2cd67a091
?
@User
@Sahil
s
Will check it tomorrow.
Can you send me the latest call_id as well?
or this call id
t
97006188-c883-469c-829f-5ac2cd67a091
This is the call id from my screenshot. The problem happened on this call id.
The latest call id is from another assistant where I don’t use summaries etc.
s
It is because charges differ between model input and output tokens.
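A minimal sketch of why a smaller prompt can cost more: providers typically charge far more per output token than per input token, so a short prompt that generates a lot of output (e.g. summaries) can out-cost a large, mostly-input prompt. The rates and token splits below are hypothetical, for illustration only; real rates depend on the model and provider.

```python
# Hypothetical per-token rates (USD per 1M tokens), for illustration only.
# Output tokens are usually priced much higher than input tokens.
INPUT_RATE = 0.70 / 1_000_000
OUTPUT_RATE = 10.00 / 1_000_000

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM call, billed separately for input and output tokens."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A large, mostly-input main prompt...
main = call_cost(input_tokens=13_800, output_tokens=200)
# ...versus small extra prompts that generate more output (e.g. summaries).
extras = call_cost(input_tokens=500, output_tokens=2_400)

print(f"main:   ${main:.4f}")    # $0.0117
print(f"extras: ${extras:.4f}")  # $0.0244 -- more, despite far fewer tokens
```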
t
I'm using Groq