@Fanfantulipan
Hello! I'm glad to assist you with the pricing calculation for your calls using VAPI and OpenAI's GPT-4 model.
Based on the numbers you've provided, let's break down the LLM (Large Language Model) cost for a 10-minute call:
For the first 10-minute call:
- Prompt tokens: 104,069
- Completion tokens: 2,325
- Cost per 1M prompt tokens: $2.50
- Cost per 1M completion tokens: $10.00
The cost for prompt tokens is calculated as follows:
104,069 tokens / 1,000,000 tokens * $2.50 = $0.2601725
The cost for completion tokens is calculated as follows:
2,325 tokens / 1,000,000 tokens * $10.00 = $0.02325
Adding both costs together gives us:
$0.2601725 + $0.02325 = $0.2834225 (rounded to $0.28)
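The same arithmetic can be sketched as a small Python helper. This is just an illustration of the calculation above, not an official calculator; the function name and the default rates ($2.50 / $10.00 per 1M tokens) are assumptions taken from the figures in this thread:

```python
def llm_cost(prompt_tokens, completion_tokens,
             prompt_rate=2.50, completion_rate=10.00):
    """Estimate LLM cost in USD.

    Rates are expressed per 1,000,000 tokens, matching the
    per-1M pricing quoted above (illustrative defaults).
    """
    prompt_cost = prompt_tokens / 1_000_000 * prompt_rate
    completion_cost = completion_tokens / 1_000_000 * completion_rate
    return prompt_cost + completion_cost

# Figures from the first 10-minute call:
total = llm_cost(104_069, 2_325)
print(f"${total:.7f} (about ${total:.2f})")
```

Running it with the first call's token counts reproduces the ~$0.28 figure, so you can plug in the second call's counts to check that bill the same way.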
For the second 10-minute call, you would perform the same calculation with the respective number of tokens used in that call.
Now, regarding the discrepancy you've noticed, there could be several reasons for this, including but not limited to:
- Rounding differences in the billing system.
- Additional fees (for example, platform or telephony charges) that may not be itemized in the per-call breakdown.
- A delay in the reporting system that might not have updated the usage in real-time.
Unfortunately, I don't have access to your exact billing details, so I can't verify the charge directly. That said, the calculation method you've used appears correct. For a detailed breakdown, or to resolve the discrepancy itself, I'd recommend contacting OpenAI's support with the specifics of your billing query.
If you need further assistance or have any other questions, feel free to ask. If your query is resolved, please let me know, and I can close this support ticket for you.