VAPI as RAG
# support
hugopdvn
Hi, I would like to use VAPI with an LLM connected to a vector database such as Pinecone. Is it possible?
Vapi AI assistant
Hi @hugopdvn, I'm an AI assistant for Vapi. I'll analyze available sources and get back to you shortly.
**While I generate an answer, here are some sources that might be helpful:**
* [Launch Screen Assets]()
* [Glossary]()
* [Vapi Web SDK]()
* [Introduction]()
### Answer

Yes, it's possible to use Vapi with a custom LLM connected to a vector database like Pinecone. Vapi supports using any OpenAI-compatible endpoint as the LLM, which gives you the flexibility to connect to various services, or to your own server that interfaces with Pinecone or a similar vector database [(1)]() [(2)]().

You can configure Vapi to use a custom LLM by specifying the provider and model in your assistant's configuration, allowing integration with a setup that includes a vector database. This is particularly useful for applications that need retrieval-augmented search alongside conversational AI. For detailed steps on using your own server or another LLM provider, see the [Custom LLM]() documentation.
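To make that concrete, here is a minimal sketch of such an OpenAI-compatible server: FastAPI plus the OpenAI and Pinecone Python clients, retrieving context from a Pinecone index before calling the upstream model. The index name `support-docs`, the model names, and the environment variables are all hypothetical placeholders, and the response is non-streaming for brevity.

```python
import os

from fastapi import FastAPI, Request
from openai import OpenAI
from pinecone import Pinecone

app = FastAPI()
llm = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("support-docs")  # hypothetical index name


def retrieve_context(query: str, top_k: int = 3) -> str:
    """Embed the query and return the text of the nearest Pinecone matches."""
    embedding = llm.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    results = index.query(vector=embedding, top_k=top_k, include_metadata=True)
    return "\n\n".join(
        (match.metadata or {}).get("text", "") for match in results.matches
    )


@app.post("/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()
    messages = body["messages"]
    # Ground the reply: prepend retrieved chunks as a system message.
    context = retrieve_context(messages[-1]["content"])
    augmented = [
        {"role": "system", "content": f"Answer using this context:\n{context}"}
    ] + messages
    completion = llm.chat.completions.create(model="gpt-4o-mini", messages=augmented)
    return completion.model_dump()  # OpenAI-shaped JSON body
```

You would then point the assistant's model configuration at this server, roughly along these lines (check the Custom LLM docs for the exact fields, and note that a production endpoint may need to support streaming/SSE):

```json
{
  "model": {
    "provider": "custom-llm",
    "model": "gpt-4o-mini",
    "url": "https://your-server.example.com"
  }
}
```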
hugopdvn
dude i think you just made my day
ty
c
Which video addresses the vector database?
hugopdvn
this one:

https://www.youtube.com/watch?v=9MD1VM7038Q
