>>34819679
Also, if you want more information, it should be noted there's a thing called "catastrophic interference." You see, neural networks, like all machines, have a limited number of parameters they can learn with. Once they learn too much, they begin to overwrite the earliest things they learned. If these machines have a "short-term memory," and I'm assuming they do, it gets filled up when they start to read their own text and repeat it. Once they repeat it, they learn it even more strongly. Hence, a positive feedback loop (which is, incidentally, the most destructive type of feedback loop).
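To see what I mean by interference, here's a toy sketch (entirely my own illustration, not how any real chatbot is built): a single-weight model trained on one task, then trained on a conflicting task with nothing protecting the old weight, ends up forgetting the first task completely.

```python
# Toy illustration of catastrophic interference: sequential training on two
# conflicting "tasks" with shared parameters and no replay of the old data.

def train(w, data, lr=0.1, steps=200):
    # Plain gradient descent on squared error for the model y = w * x.
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

task_a = [(1.0, 2.0)]   # task A wants w = 2
task_b = [(1.0, -1.0)]  # task B wants w = -1

w = train(0.0, task_a)                     # learn task A: w converges to ~2
error_a_before = (w * 1.0 - 2.0) ** 2      # task-A error: near zero

w = train(w, task_b)                       # learn task B, no task-A replay
error_a_after = (w * 1.0 - 2.0) ** 2       # task-A error: now huge (~9)

print(error_a_before, error_a_after)
```

The same mechanism scales up: nothing in plain backprop protects old knowledge when new gradients point the other way.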
The correct fix is to literally blacklist the AI's own chat output from training (I can't understand WHY they would train on it without a competitive learning setup, which I can tell just from reading the output they DON'T have), and to write in a detector for repeating text with a negative backpropagation signal (so the model becomes less likely to repeat the pattern).
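The "make repeats less likely" part already exists as a sampling-time trick in some text generators: dampen the scores of tokens the model has already emitted. Here's a minimal sketch (the function name and numbers are mine, hypothetical, not any specific library's API):

```python
# Sketch of a repetition penalty applied to raw token scores (logits) before
# sampling: already-generated tokens get their scores pushed down, so exact
# repetition becomes less probable. Assumes logits is a plain list of floats
# indexed by token id.

def penalize_repeats(logits, generated_ids, penalty=1.3):
    adjusted = list(logits)
    for tok in set(generated_ids):
        if adjusted[tok] > 0:
            adjusted[tok] /= penalty   # shrink positive scores
        else:
            adjusted[tok] *= penalty   # push negative scores further down
    return adjusted

logits = [2.0, 0.5, -1.0]
# Suppose tokens 0 and 2 were already generated:
print(penalize_repeats(logits, [0, 2]))  # token 0 shrinks, token 2 drops further
```

Note this is a decoding-time patch, not the training-time negative backpropagation I'm describing, but it's the cheap version of the same idea.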
There's no reason an AI that already "learned" to produce that text should be re-learning the exact same text again. If the programmers want the AI to have a short-term memory, they should include a parallel system that works solely on symbols, with a competitive fitness check.