Squad transfer and first prompt issues
# support
j
I started my question here but was asked to start a new support chat - https://discord.com/channels/1211482211119796234/1278372669276028949. I learned from the other chat that knowledge base documents are causing issues with the transfer of calls within a squad. I can confirm this was an issue of mine, and it was fixed once I removed the files. As the files are important to the call, I have my team working on an external knowledge base, as was suggested. My question to start off with is help with the prompt in maintaining a fluid call. I'm using "firstMessageMode": "assistant-speaks-first-with-model-generated-message" (which only started working once I removed the knowledge base files) for the start of each new assistant, but it is not picking up the full conversation from the previous assistant. It also gets stuck every now and then and introduces itself again at the start, so any suggestion of what to include in the prompt to help with that would be good. Just looking for general help, for starters, in making the transition from assistant to assistant fluid and retaining the knowledge so that it doesn't ask a question it should already know the answer to. An example: the user mentions the type of AC units they have to the first assistant, the call transfers to the AC assistant, and then the second assistant asks what type of unit they have. That first question would normally get skipped if that information had been given to that specific assistant. So I'm assuming that "assistant-speaks-first-with-model-generated-message" does not retain the information from the previous assistant and is crafting the first message from the prompt of the current assistant only?
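For concreteness, a minimal sketch of where that setting could sit on a squad member's assistant, written as a TypeScript object. Only the firstMessageMode value comes from the message above; the assistant name, model fields, and system-prompt wording are assumptions for illustration, not a verified Vapi schema.

```typescript
// Minimal sketch (assumed field layout) of a second squad member configured so
// its model-generated first message leans on the prior conversation instead of
// reintroducing itself. Only "firstMessageMode" is taken from the discussion.
const acSpecialistAssistant = {
  name: "AC Specialist", // hypothetical second assistant in the squad
  firstMessageMode: "assistant-speaks-first-with-model-generated-message",
  model: {
    provider: "openai", // assumed; OpenAI is mentioned later in the thread
    model: "gpt-4o",    // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "You are the AC cleaning specialist. The call has already been handled " +
          "by another assistant. Do NOT reintroduce yourself and do NOT re-ask " +
          "questions the caller has already answered (e.g. their AC unit type); " +
          "acknowledge what is already known and continue from there.",
      },
    ],
  },
};
```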
@Shubham Bajaj tagging you here as you asked me to create a new ticket.
@ACME Mike as the other ticket was closed, I thought I would bring you along for the ride. Your suggestion did cross my mind, but the answer was the same as yours: get rid of the knowledge base files. Once I did, it all started to work as it should.
@Shubham Bajaj can you please explain why knowledge base files are causing issues in squads, with regard to transfers? Is there a simpler alternative fix than a Pinecone external KB? It seems a little overkill for 5 KB files.
s
Can you send one of the latest call_ids?
s
Can you share the call ID so I can suggest an alternative?
j
Here are a few call IDs to look over:
- da936c6c-7cba-4047-862a-b1d0a40180e5 - with error
- 6f59e02c-a93b-4a12-93c4-8f83eb942015 - an error
- d767da9d-73f9-48ed-ab6c-eb58b48b1c78 - an example where the AI-speaks-first message doesn't have context of the call before it
- fa74294d-5f1d-400d-92d0-e957190c88f3 - an error where speak-first wasn't working when KB files were attached
- 89f10704-c63f-49fb-a9a3-d52009f2e2d4 - an error with the transfer and speak-first with KB files

I spent about 3 hours talking with the AI, so there are a lot of calls. I can record more with a specific setup.
- a2becc63-df32-422b-ba64-e46acef523c2 - an error where the AI said a weird "You have all" every time it transferred
- b2027c7b-e236-43ae-b1cb-1343e849ff4b - a log of it all working, but not providing correct information as there was no KB attached

I have also noticed that the transcript gets more accurate as the call progresses; the first couple of turns aren't always 100%.
I have just completed this test. In the first assistant there is one KB file (just one; when I add the FAQ file, that seems to cause an issue) and the transfer seems OK. In the second assistant, when using AI-speaks-first with a model-generated response, I'm getting mixed quality in the response. Here is an example where it just lost the flow: dfa879f7-e377-48cd-b5ab-868b818d6780. Is it possible that prompting could help with this? If so, any suggestions?
My biggest concern is consistency.
- In this trial I removed the transfer message of "Sure" and got gibberish: 8fa745de-55a8-4ad1-9c9d-15216de81d0b
- In this version I put the transfer message back in, as I find the transfer works better with one (not sure why), and it came back with a basic, non-contextual intro message: c03dc525-9119-47f8-96b2-5f43424ff0b3
- In this version the transfer message was in and the first message was good, but there was a HUGE lag in producing the first message: c2b414b4-1edc-4bf1-afbc-a1a50d54883d
- In this version there was no transfer message. It worked, but there was a long lag before the second assistant started speaking, plus the AI had two other moments where it just spaced out with a long lag: e362c5ad-1b9a-4a43-b1b2-68445c299199
- In this one I tested a different LLM. I'm finding that on different days they perform differently, so it's hard to know which one to stand behind. Here with OpenAI the lag was reduced and the call flowed better: 3ca2f8b0-ed31-4d24-8d97-dfed10295f44
@Shubham Bajaj @Sahil Sorry for dumping a lot of information on you. With the time differences we aren't able to interact live, so I want to give you enough context that you can provide me with thorough feedback without us needing to go back and forth with many questions.
Hi @Shubham Bajaj @Sahil I've done more extensive testing and the issues I've been experiencing are getting better with prompt changes and the removal of the FAQ doc. Not sure why that document was causing issues. I have a more specific question regarding "firstMessageMode": "assistant-speaks-first-with-model-generated-message": what memory or prompt is it working from? I have an issue, shown in the call ID below, where the AI asks a question I had already answered. The answer was given to the first assistant, and I'm wondering what knowledge is included in the "model-generated-message"? c15ac996-c243-44af-adac-6f6e4a92916d
s
The LLM uses the complete context (chat history) to generate the response.
@jason is the rest of your squad issue resolved?
j
@Shubham Bajaj I'm having issues with the transfers. The AI keeps referencing the transfer all of a sudden, as in "Let me transfer you to... this XYZ specialist." It didn't this morning, but then it started saying this regardless of the prompt. It weirdly feels like the AI has learnt a bad habit all of a sudden. The transfer message is set to none, but the AI is contextually aware of the transfer and keeps referencing it. Also, you mention that the AI has complete context; my issue then is consistency. I can say the same thing 3 times, with 1-2 of the 3 times getting an answer based on the complete context, and the other time it basically asks a question it should already know the answer to. Hope that makes sense.
s
Call ID for this?
j
- d452ef35-b377-4132-8639-ef6e1fbacbcb
- 6309c560-281d-439d-a500-507eaca6d965
- bf3a28f3-0a2c-48a8-b966-47569e4894d1 - this one says "let me connect you" in the log but not in the transcript, which is weird
- 575b5808-0854-458d-a397-c3e5025943bc - same with this one: shows in the log, not the transcript
I just did another test with and without the one small knowledge base file, and it worked perfectly without the file: 8c1b6697-eaab-4417-b4cb-a9289ee151f9. A second later, with the file added back, it didn't work: 1ec3b0f9-3ceb-48ee-abb9-0661f3563123. This was an issue I was having a few days ago, but when I stopped using the FAQ file it seemed to work. It would seem that any attached files are causing an issue again. Consistency is becoming a huge issue at the moment. I spent half a day talking with the AI with the files attached and it worked fine; then all of a sudden it stopped working and is now causing an issue again.
a
Hey @jason - I've had similar trouble. My current workaround (which isn't perfect) was to create a new squad member, "knowledge", and I stuck all the KB content right into the system prompt on the dashboard. I had to simplify the info and reduce it a lot, but it works.
s
Hey @jason, it's a KB issue. You can integrate with an external KB such as Astra, or another of your choice, to make it work.
j
@Shubham Bajaj can you explain why the KB is causing an issue? Why does the KB only cause an issue with squads and not with a single assistant? I just need to understand the why before I invest more time and money in an external KB setup. Thanks for your help Shubham.
s
What's happening is that when some context is missing, the KB is getting invoked and causing the issue. Using an external tool you can eliminate this, and I suggest trying it out with a small example first.
j
If context is missing and the KB is needed, how does having the files hosted outside of VAPI change the way the AI functions? How do you suggest setting up a call to external KB files? As in, how do you suggest this integration would function? Can you outline a suggested flow for this connection? I'm not very technical, so I need to understand how to explain this process, and how it's different, to my team. Also, seeing as this is a significant flaw that everyone using squads will need to work around, I would suggest you (VAPI) create some kind of video or KB article for future people to reference. 👍
c
@ACME Mike I can't find your previous ticket - but could you share the information you know? Same thing, correct? Also @mindofman
j
@Chowderr I have tested with and without KB files attached to the assistants. If there are KB files associated with any of the assistants in the squad, the AI gets confused: it doesn't transfer correctly based on the squad setup and also goes off prompt. To make the squad work as it should, I've had to embed the KB into the prompt, which is not pretty. Do you have any recommendations for a simple external KB setup, just so the assistant can reference 1-3 documents?
c
I also commented on Skool - which is why I tagged these 2 amazing guys.
j
So in simple terms we need to create a tool that references an external file or URL for this knowledge.
OK, I was looking through some tool videos and using Make... Are you suggesting that all I'm looking to do is have a huge import of KB data via a "result" to the "toolCallId"? This is my issue with an external KB: you potentially need some level of filtering of the data so as not to pull in a large chunk. In my scenario I have an AC cleaning company. They have, say, 3 different cleaning packages based on the user's needs. When the KB is internal, I don't need to chunk this out or rationalize to the AI why it needs the different pieces of the KB. Whereas if it is external, so that it doesn't import the information for all 3 cleaning packages, I would somehow need the AI in VAPI to work out which package it needs and then go pull all the inclusions. It now becomes a clunky experience, because the user might ask what other packages would be good, and you then need to work out what the AI needs to find from the KB again. OR... am I overthinking it, and is one large chunk of info at the start of the call, as soon as the AI knows it wants to clean the AC, enough for the AI to continue with the information it needs?
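To make that flow concrete, here is a rough sketch of the kind of endpoint being described: the assistant calls a custom tool, the server picks out the relevant KB section (e.g. by package name), and sends it back under the toolCallId. Only the "results"/"toolCallId"/"result" names come from the discussion above; the incoming payload shape, the route, and the KB content are assumptions for illustration.

```typescript
import express from "express";

// Hypothetical KB content keyed by package name.
const KB_SECTIONS: Record<string, string> = {
  basic: "Basic clean: filters and coils, about 60 minutes...",
  deep: "Deep clean: full disassembly and sanitising treatment...",
  premium: "Premium: deep clean plus ducting inspection...",
};

const app = express();
app.use(express.json());

app.post("/kb-lookup", (req, res) => {
  // Assumed payload shape: a tool call with an id and a free-text "query" argument.
  const toolCall = req.body?.message?.toolCalls?.[0] ?? {};
  const toolCallId = toolCall.id ?? "unknown";
  const rawArgs = toolCall.function?.arguments ?? {};
  const args = typeof rawArgs === "string" ? JSON.parse(rawArgs) : rawArgs;
  const query: string = String(args.query ?? "");

  // Naive filtering so only the package the model asked about is returned,
  // falling back to the whole (small) KB if nothing matches.
  const matched = Object.entries(KB_SECTIONS)
    .filter(([name]) => query.toLowerCase().includes(name))
    .map(([, text]) => text);
  const result = (matched.length ? matched : Object.values(KB_SECTIONS)).join("\n\n");

  // Reply with the looked-up text tied back to the tool call.
  res.json({ results: [{ toolCallId, result }] });
});

app.listen(3000, () => console.log("KB tool endpoint listening on :3000"));
```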
a
Yes - @jason found my ticket related to his issue, then he tagged me in this one. We've definitely been struggling with similar issues. KB created in dashboard + Squads = trouble.
s
We are working on solving this fundamentally.
Use external tools: instruct the AI to call the tool if it feels some knowledge is missing or it can't answer the user's query from the system_prompt. You can use any vector DB.
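A sketch of how such a tool might be described to the assistant, assuming an OpenAI-style function definition is accepted; the tool name, description, parameters, and URL are all made up for illustration and would point at something like the /kb-lookup handler sketched earlier.

```typescript
// Hypothetical function-tool definition shared by the squad's assistants.
// The description is what tells the model when to reach for external knowledge
// instead of guessing or re-asking the caller.
const kbLookupTool = {
  type: "function",
  function: {
    name: "kb_lookup",
    description:
      "Look up company knowledge (cleaning packages, pricing, FAQs). Call this " +
      "whenever the caller asks something you cannot answer from the system prompt.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "What you need to know, in plain language.",
        },
      },
      required: ["query"],
    },
  },
  // Assumed: the platform forwards the tool call to your own endpoint.
  server: { url: "https://example.com/kb-lookup" },
};
```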
a
@jason the question is how deep are you into understanding RAG? Because your chunk size and top-k settings when calling the vector DB are what would prevent that from happening - not Make, or your KB formatting, or anything else.
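To make "chunk size" and "top-k" concrete, a self-contained sketch of the retrieval step. It uses a naive keyword-overlap score as a stand-in for real embeddings from a vector DB, and the numbers are only illustrative.

```typescript
// Illustrative only: fixed-size chunking plus top-k selection. A real setup
// would embed the chunks in a vector DB; keyword overlap here is a stand-in.
const CHUNK_SIZE = 500; // characters per chunk; tune so one package fits in one chunk
const TOP_K = 2;        // how many chunks get returned to the model

// Split the KB document into fixed-size chunks.
function chunk(text: string, size: number = CHUNK_SIZE): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

// Stand-in relevance score: how many query words appear in the chunk.
function score(query: string, chunkText: string): number {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  const haystack = chunkText.toLowerCase();
  return words.filter((w) => haystack.includes(w)).length;
}

// Return only the k best-matching chunks instead of the whole KB.
function topK(query: string, chunks: string[], k: number = TOP_K): string[] {
  return [...chunks]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k);
}

// Usage: only the chunks about the deep-clean package come back, not all three packages.
const kbText = "..."; // placeholder for the full KB document text
console.log(topK("what does the deep clean package include", chunk(kbText)));
```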