@Abishek
For Large Language Models (LLMs) like ChatGPT, the amount of text the model can process in a single interaction is limited by its context window, which is typically a few thousand tokens (with one token roughly equivalent to a word). So while the LLM has been trained on a vast corpus, during an interaction it can only attend to, and generate responses from, the text inside that window.
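As a rough sketch of what that limit means in practice, the snippet below trims input text to fit a fixed token budget using the crude one-token-per-word heuristic mentioned above. The 4096-token window size and the reserve left for the model's reply are illustrative assumptions; real services count tokens with their own tokenizer, not by splitting on whitespace.

```python
def fit_to_window(text: str, window_tokens: int = 4096,
                  reserve_for_reply: int = 512) -> str:
    """Keep only as many words as the token budget allows.

    Assumes one token is roughly one word (a simplification);
    window_tokens and reserve_for_reply are hypothetical values.
    """
    budget = window_tokens - reserve_for_reply
    words = text.split()
    if len(words) <= budget:
        return text
    return " ".join(words[:budget])

long_text = "word " * 10_000          # far more than the window allows
trimmed = fit_to_window(long_text)
print(len(trimmed.split()))           # 3584 words kept (4096 - 512)
```

Anything beyond the budget is simply dropped before the request is sent, which is essentially what happens to text that does not fit in the context window.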
As for a knowledge base: if you are integrating an LLM with a knowledge base for question answering, the size of the knowledge base itself isn't limited by the LLM. What matters is how the knowledge base is queried at run time and how the retrieved information is fitted into the model's context window, since only the retrieved excerpts, not the whole knowledge base, are presented to the model.
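A minimal sketch of that query-then-present pattern, under stated assumptions: here passages are scored by naive keyword overlap with the question and the best matches are pasted into the prompt. The sample passages, the overlap scoring, and the top-3 cutoff are all illustrative; production systems typically use embedding-based similarity search instead.

```python
def retrieve(question: str, passages: list[str], top_k: int = 3) -> list[str]:
    """Rank passages by how many question words they share (toy scorer)."""
    q_words = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Place only the retrieved excerpts, not the whole KB, in the prompt."""
    context = "\n".join(retrieve(question, passages))
    return f"Context:\n{context}\n\nQuestion: {question}"

# Hypothetical knowledge base; it could hold millions of passages,
# because only the retrieved few ever reach the model.
kb = [
    "The context window limits how many tokens the model sees at once.",
    "Bananas are a good source of potassium.",
    "Knowledge bases can hold millions of documents; only retrieved text is sent.",
]
prompt = build_prompt("How large can a knowledge base be?", kb)
```

The LLM never sees `kb` directly; it only sees whatever `build_prompt` managed to fit into the context window, which is why the knowledge base itself can be arbitrarily large.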
If you're asking about a specific product like OpenAI's ChatGPT or another LLM-based service, the limits would be defined by the particular implementation of that service. For precise limits, you would need to refer to the documentation or support resources provided by the service provider. If you have a specific provider in mind, I can try to give you more detailed information.