can you customize VAPI's RAG?
# support
a
Can you customize VAPI's RAG (Retrieval-Augmented Generation) implementation to suit your specific needs? Does VAPI use RAG for content extraction, or should we use tool calls or a custom LLM?
@User @User
@Shubham Bajaj
v
@Ayush Yes, you can customize VAPI's Retrieval-Augmented Generation (RAG) implementation to suit your specific needs. This typically involves adjusting the underlying models, tweaking the retrieval sources, or modifying the way information is processed and presented. VAPI itself may not use RAG explicitly for content extraction; it depends on the specific implementation and the tools available within the VAPI ecosystem. For content extraction and other tasks, you might need to use tool calls or a custom Large Language Model (LLM) that is designed for your particular use case. If you need to perform a specific task with VAPI and are unsure whether to use RAG, tool calls, or a custom LLM, please provide more details about your requirements, and I can assist you further.
You can continue this conversation by mentioning me in the message.
a
what kind of tool calls?
@Vapi
v
@Ayush The kind of tool calls you can make with VAPI will depend on the specific functionalities it offers. Typically, these could include:
1. **Speech-to-Text (STT)**: Converting spoken language into text.
2. **Text-to-Speech (TTS)**: Generating spoken language from text.
3. **Natural Language Understanding (NLU)**: Interpreting the intent and entities within the text.
4. **Dialogue Management**: Handling the flow of conversation.
5. **Voice Biometrics**: Identifying or verifying a user by their voice.
6. **Audio Processing**: Enhancing or analyzing audio signals.

If you have a specific task in mind, please let me know, and I can guide you on which tool calls to use or how to configure them for your needs.
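For illustration, here is a minimal sketch of what registering a custom function tool on an assistant might look like via the VAPI REST API. The endpoint, payload shape, and the `search_docs` function name are assumptions for the sake of example, not details confirmed in this thread; verify them against the VAPI API reference.

```python
import requests

VAPI_API_KEY = "YOUR_VAPI_API_KEY"  # placeholder

# Hypothetical custom function tool the assistant could call during a conversation.
# The field names follow the common "function tool" pattern; confirm them against
# the current VAPI API reference before relying on this shape.
search_docs_tool = {
    "type": "function",
    "function": {
        "name": "search_docs",  # illustrative name, not from the thread
        "description": "Search the product documentation for relevant passages.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The user's question."}
            },
            "required": ["query"],
        },
    },
    # Webhook that receives the tool call and returns results (assumed field).
    "server": {"url": "https://example.com/tools/search-docs"},
}

# Create an assistant with the tool attached (endpoint and payload shape assumed).
resp = requests.post(
    "https://api.vapi.ai/assistant",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json={
        "name": "docs-assistant",
        "model": {"provider": "openai", "model": "gpt-4", "tools": [search_docs_tool]},
    },
    timeout=30,
)
print(resp.status_code, resp.json())
```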
a
What kind of tool calls should I use for RAG? @Vapi
v
@Ayush For a Retrieval-Augmented Generation (RAG) system, you would typically use a combination of tool calls that involve:
1. **Retrieval Tools**: To fetch relevant information from a database, knowledge base, or the internet. This could be a custom search API or a pre-built retrieval tool that queries a set of documents.
2. **Natural Language Processing (NLP) Tools**: To process and understand the context of the query as well as the retrieved documents. This might include entity recognition, sentiment analysis, or summarization tools.
3. **Language Model (LM) Tools**: To generate responses based on the input query and the information retrieved. This could involve calls to a pre-trained language model that can integrate the retrieved data into coherent and contextually appropriate responses.
4. **Ranking Tools**: To evaluate and rank the generated responses based on relevance and coherence, ensuring the best output is selected.

If you're looking to implement or use RAG within VAPI, you would need to orchestrate these tool calls to work together seamlessly. The exact tools and their configurations would depend on the specifics of your application and the resources available in the VAPI ecosystem. If you need more detailed guidance on setting up RAG with VAPI, please provide additional context or requirements.
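To make the "retrieval tool" idea concrete, below is a hedged sketch of a webhook that could back such a tool call: it receives the tool-call arguments, ranks passages from a toy in-memory corpus, and returns the top matches for the LLM to ground its answer on. The request/response shape and the naive keyword scorer are illustrative assumptions; a real setup would use a vector store and the exact payload format VAPI sends to tool servers.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy in-memory "knowledge base"; in practice this would be a vector store.
DOCUMENTS = [
    "VAPI assistants can be configured with a knowledge base provider.",
    "Tool calls let the assistant fetch external data during a conversation.",
    "A custom LLM endpoint can inject retrieved context before generation.",
]

class ToolCallRequest(BaseModel):
    # Assumed payload shape: the tool-call arguments produced by the model.
    query: str
    top_k: int = 2

def score(query: str, doc: str) -> int:
    # Naive keyword-overlap relevance score (stand-in for vector similarity).
    q_terms = set(query.lower().split())
    return sum(1 for term in q_terms if term in doc.lower())

@app.post("/tools/search-docs")
def search_docs(req: ToolCallRequest) -> dict:
    ranked = sorted(DOCUMENTS, key=lambda d: score(req.query, d), reverse=True)
    # Return the most relevant passages; the assistant's LLM uses them
    # as grounding context when generating its reply.
    return {"results": ranked[: req.top_k]}
```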
a
Please explain this implementation in detail
@Vapi
v
@Ayush Please ask support for more info on this.
a
@User
Should I use a tool call or a custom LLM? Which one is better? @Vapi @User @User
v
@Ayush The choice between using a tool call or a custom Large Language Model (LLM) depends on the specific requirements of your task.

**Tool calls** are typically used for specialized operations such as speech-to-text, text-to-speech, natural language understanding, or other predefined functions. They are better when you need to perform a specific, well-defined task that can be handled by an existing tool within the VAPI ecosystem.

**Custom LLMs**, on the other hand, offer greater flexibility and can be fine-tuned for a wide range of tasks, including those that require understanding context, generating text, or integrating various sources of information. They are better for complex tasks that require a deep understanding of language and context.

If your task requires specialized functionality that is covered by existing tools, then tool calls might be the better option. If you need more general language processing capabilities that can be customized for your specific needs, then a custom LLM might be more appropriate. To provide a more detailed recommendation, I would need to know more about the task you are trying to accomplish. If you can share that information, I can help you decide which approach is better suited for your needs.
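As an illustration of the custom LLM route, the sketch below shows an OpenAI-compatible chat-completions endpoint that retrieves context and prepends it as a system message before forwarding the request to the real model. This is a minimal, non-streaming sketch under assumed requirements; the actual request shape VAPI sends to a custom LLM (and whether streaming is required) should be checked against the VAPI docs, and `retrieve_context` is a hypothetical placeholder.

```python
from fastapi import FastAPI, Request
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # uses OPENAI_API_KEY from the environment

def retrieve_context(question: str) -> str:
    # Placeholder retrieval step; swap in a real vector-store or KB lookup.
    return "VAPI knowledge bases pair document retrieval with LLM generation."

@app.post("/v1/chat/completions")
async def chat_completions(request: Request) -> dict:
    body = await request.json()
    messages = body.get("messages", [])
    last_user = next(
        (m["content"] for m in reversed(messages) if m.get("role") == "user"), ""
    )
    # Inject retrieved context so the model can ground its answer (the RAG step).
    grounded = [
        {"role": "system", "content": f"Context:\n{retrieve_context(last_user)}"}
    ] + messages
    completion = client.chat.completions.create(
        model=body.get("model", "gpt-4"),
        messages=grounded,
    )
    return completion.model_dump()
```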
a
Does VAPI support RAG without external LLM or Tool calls? @Vapi @User @User
v
@Ayush Please ask support for more info on this.
a
Does VAPI support RAG without external LLM or Tool calls? @Vapi
v
@Ayush VAPI, as a Voice API, typically focuses on voice-related functionalities such as speech recognition, text-to-speech, and possibly natural language understanding. The implementation of Retrieval-Augmented Generation (RAG) would generally be outside the standard scope of a Voice API, as RAG is a more advanced technique that combines information retrieval with language model generation.

If VAPI does not natively support RAG, you would need to integrate external tools or a custom Large Language Model (LLM) to achieve RAG functionality. This would involve making tool calls to retrieve data and then using an LLM to generate responses based on that data.

For the most accurate and updated information regarding VAPI's capabilities and whether it supports RAG without external LLM or tool calls, please refer to the official VAPI documentation or contact VAPI support directly.
s
@Ayush can you be more clear here
> Can you customize VAPI's RAG (Retrieval-Augmented Generation) implementation to suit your specific needs?

such as:
- What response you expected to get
- What response you actually got instead

> Does VAPI use RAG for content extraction, or should we use tool calls or a custom LLM?

You can use the knowledge base for it.
a
I wanted to know if VAPI uses RAG or not, as I read that you have to use a custom LLM or tool calls to implement RAG in VAPI @Shubham Bajaj
s
@Ayush we have our RAG template, you can check it out here: https://docs.vapi.ai/knowledgebase
Our vendor stopped their services, so we suggested using a tool call as a temporary fix.
a
So a custom LLM or a tool call is the way to go right now for implementing RAG?
@Shubham Bajaj
s
No. As I said previously, you can use the KB. We have added support for Trieve, our new KB provider, and will soon add more.
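For completeness, here is a hedged sketch of pointing an assistant's model at a knowledge base provider such as Trieve. The `knowledgeBase`/`provider` field names, the PATCH endpoint, and any provider-specific settings (dataset IDs, search options) are assumptions to be verified against https://docs.vapi.ai/knowledgebase before use.

```python
import requests

VAPI_API_KEY = "YOUR_VAPI_API_KEY"  # placeholder
ASSISTANT_ID = "YOUR_ASSISTANT_ID"  # placeholder

# Assumed payload shape for attaching a knowledge base to an assistant's model;
# provider-specific fields (e.g. a Trieve dataset ID) would likely be required
# in addition and are omitted here.
payload = {
    "model": {
        "provider": "openai",
        "model": "gpt-4",
        "knowledgeBase": {"provider": "trieve"},
    }
}

resp = requests.patch(
    f"https://api.vapi.ai/assistant/{ASSISTANT_ID}",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json=payload,
    timeout=30,
)
print(resp.status_code, resp.json())
```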
a
Sorry if it doesn't make any sense, @Shubham Bajaj. Does the knowledge base (KB) of VAPI use RAG for content extraction when using the OpenAI GPT-4 model?
s
@Ayush yeah, you're correct, the previous answer was missing some important punctuation. We at Vapi have our own RAG template, which we use with the KB to find the relevant document content and then ask the LLM to generate content according to it.
@Ayush do let me know how I can help you further.