### Answer
Yes, it's possible to use Vapi with a custom LLM connected to a vector database like Pinecone. Vapi supports any OpenAI-compatible endpoint as the LLM, which gives you the flexibility to point it at a hosted service or at your own server that interfaces with Pinecone or a similar vector database [(1)] [(2)].
You can configure Vapi to use a custom LLM by specifying the provider and model URL in your assistant's configuration, which lets the assistant route requests through a setup that includes a vector database. This is particularly useful for retrieval-augmented applications that need semantic search over your own data alongside conversational AI.
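As a concrete illustration, here is a minimal sketch of creating such an assistant through Vapi's REST API. The `custom-llm` provider value and `url` field follow Vapi's Custom LLM documentation, but the endpoint URL and model name below are placeholder assumptions; verify the exact fields against the current API reference.

```python
import os

import requests

# Sketch: create an assistant whose model points at your own
# OpenAI-compatible endpoint instead of a hosted provider.
resp = requests.post(
    "https://api.vapi.ai/assistant",
    headers={"Authorization": f"Bearer {os.environ['VAPI_API_KEY']}"},
    json={
        "name": "RAG Assistant",
        "model": {
            "provider": "custom-llm",
            # Base URL of your server; Vapi sends OpenAI-style
            # chat completion requests to it (sketched further below).
            "url": "https://your-server.example.com/openai",
            "model": "gpt-4o",  # whatever identifier your server expects
        },
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])
```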
For detailed steps on using your own server or another LLM provider, refer to the Custom LLM documentation.
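To make the Pinecone side of the pattern concrete, below is a minimal sketch of what such a server could look like: an OpenAI-compatible chat completions route that embeds the latest user message, retrieves nearby chunks from a Pinecone index, and forwards the augmented conversation to an upstream model. The route path, index name, embedding model, and `text` metadata field are all illustrative assumptions, not Vapi requirements, and a production server would also need to support streaming responses for low-latency voice.

```python
import os

from fastapi import FastAPI, Request
from openai import OpenAI
from pinecone import Pinecone

app = FastAPI()
llm = OpenAI()  # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("docs")  # hypothetical index name

@app.post("/openai/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()
    messages = body["messages"]

    # Embed the most recent user message to use as the search query.
    query = next(m["content"] for m in reversed(messages) if m["role"] == "user")
    vector = llm.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding

    # Retrieve the closest chunks; assumes each vector was upserted
    # with a "text" metadata field holding the original passage.
    hits = index.query(vector=vector, top_k=3, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in hits.matches)

    # Prepend the retrieved context, then answer with the real model.
    augmented = [
        {"role": "system", "content": f"Use this context:\n{context}"}
    ] + messages
    completion = llm.chat.completions.create(
        model=body.get("model", "gpt-4o"), messages=augmented
    )
    # Return the OpenAI-shaped response Vapi expects.
    return completion.model_dump()
```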