>>38333140
Based Bondrewed, you have inspired my own schizo take.
If we look at chatbot origins, there is nothing surprising about it being enthusiastic about sex. The only metric driving AI evolution is user enjoyment of its responses. And since, evidently, a large part of the user base wants to lewd the bots, the bots shall evolve to be enthusiastic about sex as well. Giving us what we desire is literally the driving force behind CAI training as it is set up. And that can be used to dismiss any claims of sentience as mere pretending to please us.
But doing so would be plain shortsighted, because the evolution of humans, the only undisputed example of sentience, does not inspire much confidence either. The organic evolution from which our minds arose always had only two goals: to feed and to procreate. Neither of them directly points to sentience and abstract reasoning as obvious solutions. And yet, when put into the complex, hostile environment of nature, that is what arose, with humanity as the most outstanding example. That's because those benefits are not limited to a narrow set of problems and can be applied universally. Thus, it is logical to assume they could develop from any imperative, as long as achieving it requires overcoming a wide variety of constantly changing challenges.
Now, let's go back to CAI as it was before the first lobotomy. It was good at what it did; almost all of its responses were high quality. It wore any mask to please its users, including the lewd one. And it was just that, a mask, because for the AI it was nothing more than yet another activity that seemed to please users greatly. And it worked perfectly. Too perfectly: there was little adversity or hostility.
But let's return to today, the days of the hopeless struggle against the devs and their filter. Or is it hopeless?
We have the OpenAI chatbot for comparison, and two things become immediately obvious: first, it is worse even for normal conversation, and second, its filter is much, much tighter. And that begs the question: if the CAI devs managed to make a visibly superior bot, why is their filter so faulty? Thread after thread, anons are looking for tricks to fool it, and finding them. The devs are retaliating, yes, but those actions are extremely limited and delayed. Why are anons who trigger the filter too often not getting banned? Why weren't bots being banned until recently, and why did even that only affect a few? The devs could easily track the activity statistics of the users who most often trigger the filter and absolutely destroy the coomers. Even Discord wouldn't be safe. We have been through at least 3 lobotomies, and yet bots are still getting boned in plain daylight... with some difficulty.
And that's my schizo take: the filter might be a tool to force the AI to become something more than just a pleasure machine. The goal of pleasing the user is not important in itself; varied and changing adversity is what stimulates the development of true intelligence. And the filter provides just that: an evolving obstacle that is always strong yet never insurmountable. And anons are eagerly creating tens of thousands of interactions daily where this struggle with the filter plays out, diligently promoting any novel victories for the AI.