Llama3 as an LLM
# support
p
Hey! Is it possible to use a self-hosted Llama-3 model as the LLM in agents? If not, are you planning to add this possibility in the near future?
v
Hi @piechos, I'm an AI assistant for Vapi. While a team member reviews this question, I'll look through relevant sources and see if I can provide suggestions. Please tag @Vapi Support with any follow-up questions.
----------------------------- **While I generate an answer, here are some sources that might be helpful**: * Frequently Asked Questions * Glossary * Introduction * Custom LLMs -----------------------------
### Answer Yes, it's possible to use a self-hosted Llama-3 model as an LLM in Vapi agents. Vapi supports using any OpenAI-compatible endpoint as the LLM, including your own servers. To use a self-hosted model like Llama-3, you would set it up as a 'Custom LLM' provider, specifying the URL to your model in your API request.
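To make the answer above concrete, here is a minimal sketch of what such an assistant configuration could look like. The field names (`provider`, `url`, `model`) and the endpoint URL are assumptions for illustration, not verified against Vapi's current API; consult the Custom LLMs documentation for the exact schema.

```python
import json

# Hypothetical assistant config pointing Vapi at a self-hosted,
# OpenAI-compatible Llama-3 server. Field names are assumed.
assistant_config = {
    "name": "llama3-agent",
    "model": {
        "provider": "custom-llm",                # use your own endpoint instead of a built-in provider
        "url": "https://my-llm.example.com/v1",  # placeholder: base URL of your OpenAI-compatible server
        "model": "llama-3-70b-instruct",         # placeholder: model name your server expects
    },
}

print(json.dumps(assistant_config, indent=2))
```

The key requirement is that the server behind `url` speaks the OpenAI chat-completions protocol, since that is the interface Vapi expects from a custom LLM.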
Glad I could be helpful. Feel free to create a new thread with any new questions you may have.
p
Is the https://replicate.com/ API compatible with that feature? If not, which LLM hosting providers are?
v
Got it. Tagging @User for additional help.
s
The feature is now in the staging area for testing. You can go ahead and try it out. Additionally, you can set up any LLM you want using the custom LLM option available in the documentation.
p
Great! How can I access your staging area?
s
p
Thank you!
Unfortunately, I keep getting `An error occurred in this call: pipeline-error-groq-llm-failed` after around 30 seconds of conversation. For example in call 09ad97bd-b21c-4ef1-832f-061b1ce5763e.
s
Currently, function calls are not supported with Groq; that's why you are getting this issue.
p
Is this limitation on Groq's side or Vapi's side? And is it temporary?
g
s
Just checked with the team: function calling is also supported with Groq.
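For reference, Groq's chat-completions API follows the OpenAI-style tool/function-calling format, so a function definition would look roughly like the sketch below. The function name and parameters here are purely illustrative, not taken from the thread.

```python
import json

# Sketch of an OpenAI-style tool definition, the format accepted by
# OpenAI-compatible endpoints such as Groq's chat-completions API.
# "book_appointment" and its parameters are hypothetical examples.
tools = [
    {
        "type": "function",
        "function": {
            "name": "book_appointment",
            "description": "Book an appointment for the caller.",
            "parameters": {
                "type": "object",
                "properties": {
                    "date": {"type": "string", "description": "ISO 8601 date"},
                    "name": {"type": "string", "description": "Caller's name"},
                },
                "required": ["date", "name"],
            },
        },
    }
]

print(json.dumps(tools, indent=2))
```

When the model decides to call the function, the response carries a `tool_calls` entry with JSON arguments matching this schema, which the agent platform then executes.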
g
noice
Sahil, can I help develop and test function calling with all the LLMs at Vapi? I wrote custom functions before function calling was even a thing, back in the early ChatGPT days. I would really love to work on this.
s
Sure. Please test it in the staging area and let me know if you encounter any bugs. I will then forward them to the team.
p
So, will this issue that I faced be resolved soon and we will be able to use Llama 3 through Groq?
s
Yup.
p
Awesome! Do you have a page with release notes where I can track the updates?
s
As of now, no, but I will be working on this as well to ensure complete transparency about what we are currently working on.
p
Thanks!
Ok, I see that Llama3 is available through Groq in production, so I tested it. But when I ask the assistant to do something, it tells me that there was an issue with calling the function without even trying to call it (I saw in make.com that the scenario wasn't triggered). Here is the call ID: 10de694a-ae28-44a7-bc02-8d3f4036b57e
The same happens when I try to use Mixtral through Groq, and the assistant also repeats itself many times (it is not reflected in the transcript, but you can hear it in the recording). Call ID: 1cb77b92-f11b-4c17-9f7b-8f652855d17d