>>100349622
AI does not 'remember' the way you imagine it to, like a human who is constantly absorbing information around them so that something they saw yesterday can be referenced today. The "memory" of new things that makes it look like it learns is really just a function of however much context the model has available to it at any one time, which you can think of as its short-term memory. The fundamental unit of an LLM is a token, which you can roughly sum up as 1 word = 1 token (even though that's not quite right - tokenizers split text into subword pieces, so a common word may be one token while a rare word gets split into several).
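You can poke at the word-vs-token thing yourself with OpenAI's published tokenizer library. Quick sketch, assuming you have tiktoken installed (pip install tiktoken) - the example strings are just illustrative:
[code]
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by the GPT-3.5/4-era models

for text in ["hello world", "antidisestablishmentarianism", "vtuber"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    # common words are often a single token; rarer ones get chopped into subword pieces
    print(f"{text!r} -> {len(ids)} tokens: {pieces}")
[/code]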
At its heart, an LLM is a glorified, very efficient autocomplete - a text prediction engine. It computes a probability distribution over the next token based on all the text fed in so far, picks one, and spits it out. There is no internal self-loop of consciousness: the process of generating the next token is itself its 'thinking'. So obviously there is no continuous process either, where it constantly evaluates information behind the scenes, critically considers input, and figures out what to retain and what to discard.
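That loop is literally the whole mechanism. A minimal sketch of it, assuming torch and the Hugging Face transformers library are installed and using GPT-2 purely because it's small and public (real chat models do the same thing with sampling and more parameters on top):
[code]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):
        logits = model(ids).logits                          # a score for every token in the vocab
        next_id = logits[0, -1].argmax()                    # greedy: take the single most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)   # append it and go again

print(tok.decode(ids[0]))
[/code]
There is no hidden deliberation step in there - predict one token, append, repeat until done.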
The limited ability of AI to remember effectively is what hamstrings it today; models promise up to 32K of context but degrade dramatically in effectiveness well before hitting that limit. Many people blame this on the limitations of the transformer architecture and its attention mechanism, and think this is a cul-de-sac in AI advancement that only feels good right now because it works well enough on very small contexts: attention compares every token against every other token, so as context grows it becomes quadratically more expensive to consider large amounts of data.
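The quadratic blow-up is easy to see on the back of an envelope: standard attention builds an L x L score matrix per head per layer. The layer/head counts below are made-up round numbers, not any particular model:
[code]
layers, heads = 32, 32  # illustrative only

for ctx_len in [2_000, 8_000, 32_000, 128_000]:
    scores = ctx_len * ctx_len          # pairwise attention scores per head
    total = scores * heads * layers     # per forward pass over the full context
    print(f"{ctx_len:>7} tokens -> {total:.2e} attention scores")
[/code]
Quadruple the context and the attention work goes up 16x, which is why long-context claims and long-context quality are two different things.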
Current LLMs in practice are 'frozen', stuck with nothing but short-term memory, and don't learn anything permanently until their next training cycle - and it's an extremely distractible short-term memory at that. You can test ChatGPT yourself: ask it for descriptions of vtubers in a completely blank, incognito conversation and it will give you accurate output. Turn off incognito and you will start getting incorrect answers to the same questions, as your earlier questions about cooking or electric circuits screw with its output and 'hard-coded knowledge' even though they're completely unrelated.
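ChatGPT's memory feature is closed so you can't inspect it directly, but you can reproduce a toy version of that contamination with a small local model: the same question preceded by unrelated junk shifts the next-token probabilities, because everything sitting in the context window feeds the prediction. GPT-2 and the specific strings here are just for illustration:
[code]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

question = "The tallest mountain in the world is"
junk = "Whisk the eggs, add the flour, then solder the resistor to the board. "

def top_tokens(prompt, k=5):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        probs = model(ids).logits[0, -1].softmax(dim=-1)
    top = probs.topk(k)
    return [(tok.decode([int(i)]), round(p.item(), 3)) for i, p in zip(top.indices, top.values)]

print("clean prompt:    ", top_tokens(question))
print("with junk prefix:", top_tokens(junk * 5 + question))
[/code]
The top candidates and their probabilities move around just because irrelevant text is sitting upstream in the context - same idea as your cooking questions bleeding into vtuber descriptions.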