Mistral Error
# support
p
@Shubham Bajaj @henry [Vapi Team] I am using the Groq Mixtral 8x7B model and in the logs I see this. Does this mean the model is no longer available? https://cdn.discordapp.com/attachments/1354175242087432343/1354175243278745640/image.png?ex=67e45579&is=67e303f9&hm=5f311bcaa14c134b445443d838a62f7b984af1e3b0602ea334851308e331c780&
v
Hi there! 👋 Yes, I can confirm that the Groq Mixtral 8x7B model (`mixtral-8x7b-32768`) has been decommissioned and is no longer supported, as shown in your error message. Based on the information I found, here are some alternative models available in the Vapi ecosystem that you could consider:

- `gemma2-9b-it`
- `llama-3.1-8b-instant`
- `llama3-70b-8192`
- `llama3-8b-8192`
- `deepseek-r1-distill-llama-70b`
- `llama-3.3-70b-versatile`
- `llama-3.1-405b-reasoning`

Additionally, Vapi supports using any OpenAI-compatible endpoint as the LLM, including services like [OpenRouter, AnyScale, Together AI, or your own server](https://docs.vapi.ai/customization/custom-llm/fine-tuned-openai-models). You can also use open-source LLMs like Mixtral. For more specific recommendations on which model to use instead of the decommissioned Groq Mixtral model, I'd recommend checking the URL mentioned in your error message: https://console.groq.com/docs/deprecations.

Is there anything else you'd like to know about the available models or how to switch to an alternative?

Sources:
- [Discord conversation about model decommissioning](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1336171779772907692)
- [Discord conversation about available models](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1334534504324403230)
- [Vapi documentation on custom LLMs](https://docs.vapi.ai/customization/custom-llm/fine-tuned-openai-models)
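For anyone finding this thread later: switching the assistant off the decommissioned model is a one-field change. Here is a minimal sketch, assuming the standard Vapi REST API (`PATCH /assistant/:id`); `VAPI_API_KEY` and `ASSISTANT_ID` are hypothetical environment variables you'd supply yourself:

```typescript
// Sketch: point an existing Vapi assistant at a still-supported Groq model.
// Assumes Node 18+ (built-in fetch) and the public Vapi REST API shape.
const resp = await fetch(
  `https://api.vapi.ai/assistant/${process.env.ASSISTANT_ID}`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: {
        provider: "groq",
        // Any of the non-deprecated models listed above should work here.
        model: "llama-3.3-70b-versatile",
      },
    }),
  }
);
if (!resp.ok) throw new Error(`Assistant update failed: ${resp.status}`);
console.log("Assistant now uses:", (await resp.json()).model);
```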
k
Yes Parth, you are right. The model has been decommissioned, so you'll have to use another model.
p
@Shubham Bajaj How does OpenRouter pricing work?
Do I pay OpenRouter and also pay an additional cost to Vapi on top of the 5 cents?
k
You can use your own OpenRouter key so you don't have to pay Vapi for LLM charges. But do note that you might run into higher latency when using your own key.
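If you go the bring-your-own-key route, the assistant config shape is similar to the one above. A hedged sketch, assuming Vapi's `custom-llm` provider accepts an OpenAI-compatible base URL (OpenRouter exposes one at https://openrouter.ai/api/v1) and that your OpenRouter API key is stored separately as a provider credential in the Vapi dashboard, not in this payload:

```typescript
// Sketch: route the assistant's LLM traffic through OpenRouter via Vapi's
// custom-llm provider. The OpenRouter key is assumed to be configured as a
// provider credential in the Vapi dashboard rather than sent here.
const resp = await fetch(
  `https://api.vapi.ai/assistant/${process.env.ASSISTANT_ID}`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: {
        provider: "custom-llm",
        url: "https://openrouter.ai/api/v1", // OpenAI-compatible endpoint
        model: "mistralai/mixtral-8x7b-instruct", // OpenRouter model ID
      },
    }),
  }
);
if (!resp.ok) throw new Error(`Assistant update failed: ${resp.status}`);
```

With this setup, token costs are billed by OpenRouter directly and (per the note above) Vapi doesn't add LLM charges on top; the trade-off is the possible extra latency mentioned.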