Parth0774
03/25/2025, 7:29 PM

Vapi Support Bot
03/25/2025, 7:30 PM
The model `mistral-8x7b-32768` has been decommissioned and is no longer supported, as shown in your error message.
Based on the information I found, here are some alternative models available in the Vapi ecosystem that you could consider:
- gemma2-9b-it
- llama-3.1-8b-instant
- llama3-70b-8192
- llama3-8b-8192
- deepseek-r1-distill-llama-70b
- llama-3.3-70b-versatile
- llama-3.1-405b-reasoning
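If you want to stay on Groq, switching usually means updating the `model` block of your assistant's configuration. A minimal sketch, assuming the `provider`/`model` field names from Vapi's assistant config (please verify the exact shape against the current API docs):

```json
{
  "model": {
    "provider": "groq",
    "model": "llama-3.3-70b-versatile"
  }
}
```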
Additionally, Vapi supports using any OpenAI-compatible endpoint as the LLM, including services like [OpenRouter, AnyScale, Together AI, or your own server](https://docs.vapi.ai/customization/custom-llm/fine-tuned-openai-models). You can also use open-source LLMs like Mixtral.
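For the OpenAI-compatible route, the config points Vapi at your endpoint instead of a built-in provider. A hedged sketch, where the `custom-llm` provider name and `url` field are assumptions drawn from the custom-LLM docs linked above, and `your-server.example.com` plus the model name are placeholders:

```json
{
  "model": {
    "provider": "custom-llm",
    "url": "https://your-server.example.com/v1",
    "model": "mixtral-8x7b-instruct"
  }
}
```

The endpoint just needs to accept OpenAI-style chat-completion requests, which is why services like OpenRouter or Together AI work here as well.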
For more specific recommendations on which model to use instead of the decommissioned Groq Mistral model, I'd recommend checking the URL mentioned in your error message: https://console.groq.com/docs/deprecations.
Is there anything else you'd like to know about the available models or how to switch to an alternative?
Sources:
- [Discord conversation about model decommissioning](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1336171779772907692)
- [Discord conversation about available models](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1334534504324403230)
- [Vapi documentation on custom LLMs](https://docs.vapi.ai/customization/custom-llm/fine-tuned-openai-models)

Kyle Brunker
03/26/2025, 11:10 PM

Parth0774
03/27/2025, 2:56 AM

Parth0774
03/27/2025, 2:56 AM

Parth0774
03/27/2025, 2:56 AM

Kyle Brunker
03/29/2025, 11:50 AM