Having Trouble Getting AI to Follow Modular Knowle...
# support
s
Hey folks, I’m working on a voice assistant for my business using Vapi, and I’ve run into an issue that I’m hoping someone here can help me troubleshoot.

My setup is structured so that my assistant (Jamie) uses a main system prompt for general behavior and call classification, and is then supposed to reference one of three markdown-based knowledge modules depending on the purpose of the call:

- New Quote Module
- Existing Quote Module
- General Support Module

Each of these modules is uploaded as a separate markdown file to the assistant’s Knowledge Base in Vapi.

The problem: Jamie is not consistently following the instructions in the appropriate knowledge module, even though the system prompt specifically routes her to the correct one after classifying the call type.

Details:

- I’m using a markdown system prompt with clear routing instructions: “Once the call type is determined, reference the corresponding attached Knowledge Module markdown file and follow the exact instructions within.”
- The knowledge modules are uploaded and attached in the Vapi backend under the Knowledge Base section.
- The behavior works correctly for the New Quote flow, but not consistently for Existing Quote or Everything Else.
- Jamie appears to default to general behavior from the system prompt rather than fetching and following the markdown module content.

I’m wondering:

- Is there a specific syntax, tag, or reference style required in the system prompt to make Vapi reliably pull the appropriate markdown module?
- Does Vapi automatically index all KB content and let the model reference the correct module, or is there a way to explicitly force a file selection or context switch mid-call?
- Should I be using one long markdown file instead of separate ones? (Ideally not; I want to keep things modular.)

If anyone has dealt with this kind of modular KB routing issue or knows the proper formatting/setup to make it work reliably, I’d massively appreciate your insight. 🙏
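In code terms, the behavior I want is roughly this: classify the call first, then expose exactly one module’s content for the rest of the call. Here’s a hypothetical Python sketch of that deterministic routing (the file names and function names are made up for illustration; this is not Vapi’s actual API):

```python
# Hypothetical sketch: deterministic module routing outside the model.
# Instead of asking the LLM to "reference the right markdown file",
# classify the call type first, then inject ONLY that module's text
# into the context for the rest of the call.

MODULES = {
    "new_quote": "new_quote_module.md",          # hypothetical file names
    "existing_quote": "existing_quote_module.md",
    "general_support": "general_support_module.md",
}

def load_module(call_type: str) -> str:
    """Return the text of exactly one knowledge module."""
    try:
        path = MODULES[call_type]
    except KeyError:
        raise ValueError(f"unknown call type: {call_type!r}")
    with open(path, encoding="utf-8") as f:
        return f.read()

def build_context(system_prompt: str, call_type: str) -> str:
    """Compose the prompt the model actually sees for this call type."""
    module_text = load_module(call_type)
    return f"{system_prompt}\n\n--- ACTIVE MODULE ({call_type}) ---\n{module_text}"
```

The point of the sketch is that the model never sees the other two modules at all, so it can’t blend their logic. My question is whether Vapi can enforce something like this, or whether the model always sees the whole Knowledge Base.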
v
Try explicitly instructing your assistant to cite the knowledge base in your system prompt:
```
Once the call type is determined, use ONLY the corresponding Knowledge Module as your source of information. Always begin your response with "Based on the [Module Name] instructions..." and follow the exact instructions within that module.
```
This approach helps ensure the assistant clearly indicates which knowledge module it's using and follows its instructions.

Source: [Discord channel discussion on Knowledge Base usage](https://discord.com/channels/1211482211119796234/1335953119275782144)
s
This part does not sound natural in our call flow. We just need the assistant to be forced to access the appropriate knowledge base once it determines the purpose of the call, not to actually tell the caller which knowledge base it's accessing to further assist them with their request.
I’m running into a recurring problem with Jamie where she seems to be:

- Blending logic between modules (e.g., asking for the property address during an Existing Quote call when it’s not required),
- Saying things out loud like “Based on the Existing Quote instructions...”, which sounds robotic and unnatural,
- And sometimes not following the module strictly, even though the prompt explicitly forbids improvisation or guessing.

Here’s how it’s currently set up:

- We’re using GPT-4o via Vapi, with Gemini as the knowledge base provider.
- The system prompt includes strict internal instructions to access the correct module once the call type is identified.
- Jamie is told not to mention the module aloud and to stay completely within that module’s logic.
- We’ve confirmed the knowledge base files are attached and the system prompt is in markdown format.
Prompt Snippet:
➡️ After determining the call type, you must:

- Internally access the appropriate module based on the call type.
- Follow the exact instructions in that module and do not refer to it or mention it to the caller.
- Modules:
  - New Quote → New Quote Module
  - Existing Quote → Existing Quote Module
  - Everything Else → General Support Module

You must not skip, combine, or reorder questions. Clarify only when needed. Do not rely on memory or general logic; always follow the module exactly.
And here’s a call example where she went off-script:

- Jamie correctly identified an Existing Quote request.
- She said “Based on the existing quote instructions…” out loud.
- She then proceeded to ask for the property address, even though that’s clearly not required per the module.
- The user corrected her, and she adjusted, but the damage was done.

Can someone help me understand what might be causing this? Is there something more I need to configure in the backend, prompt structure, or knowledge base format? Or is this a behavior bug in the way knowledge modules are referenced during calls?

Thanks in advance; trying to lock this in tight.
s
Even if you’ve told it not to mention modules or improvise, models can still blend instructions from different modules unless the prompt is extremely strict. Make it clear that under no circumstances should the assistant say anything like “based on the module” or mention the logic it’s following. It’s also important that your prompt tells the assistant to refer to the knowledge base directly and silently when responding. Additionally, make sure the knowledge base modules are cleanly separated and don’t contain conversational cues that the assistant might accidentally speak. If needed, I can help you restructure the prompt or review the formatting of your KB to reduce these issues. You should be able to test it and go from there, though.
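One quick way to catch those conversational cues before uploading is to scan the module files for phrases the assistant might speak aloud. A minimal sketch, assuming your modules are local markdown files (the function name and cue patterns here are just examples, not anything Vapi-specific):

```python
# Hypothetical pre-upload check: scan a knowledge module's text for
# phrases the assistant might accidentally read out loud on a call.
import re

SPOKEN_CUES = [
    r"based on the .* (module|instructions)",
    r"according to the knowledge base",
    r"as the module says",
]

def find_spoken_cues(text: str) -> list[str]:
    """Return the lines of a module that contain a risky conversational cue."""
    hits = []
    for line in text.splitlines():
        for pattern in SPOKEN_CUES:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append(line.strip())
                break
    return hits
```

Running something like this over each module before attaching it would flag lines such as “Based on the Existing Quote Module instructions…” so you can rewrite them as silent directives instead.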
s
Hey Mason, thanks for the additional information. How can I get your help to actually review it and get some guidance based on the whole setup?
s
You can add my email to your dashboard. My email is mason.d@vapi.ai
s
Hey Mason, I just added you to my dashboard.
Let me know what you need from me, thanks.
s
Hey Saltlife Kid, sorry for the bad experience and for being blocked on this for so long. To help track down this issue, could you share:

- The call ID
- When exactly this happened (the timestamp)
- What response you expected to get
- What response you actually got instead

This would really help us figure out what went wrong!