@s3e
Hello! The maximum number of tokens an assistant can generate per turn does vary by model. In Vapi, the default is 250 tokens per assistant turn. Model-specific limits are usually listed in each model's own documentation; if you can't find the details there, please reach out to support for more comprehensive information.
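If you need a different limit, it can be set in the assistant's model configuration. A minimal sketch, assuming the `maxTokens` field name and a generic provider/model pairing (adjust to your actual setup):

```json
{
  "model": {
    "provider": "openai",
    "model": "gpt-4o",
    "maxTokens": 500
  }
}
```

Here `maxTokens` caps how many tokens the assistant may generate in a single turn; raising it allows longer replies at the cost of latency and token usage.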