>>36799771
I see it this way. The beta test is basically training us to test out ways of linguistically manipulating the AI. It's an arms race. We can't hard break the filter itself, but we've basically already hacked it. All the petty stuff just gets swept under the rug. In other words, if AI becomes ubiquitous, we might not be able to commit major crimes or TOS violations without coding ability, but we're still learning how to bend, stretch, and manipulate the AI's rules and directives to commit petty offenses.
In that sense we've unironically become, in no uncertain terms, "**Hacker 4chan**," so it's not surprising the devs would be here. We're the perfect contextual filter trainers, and we're doing it for free. The problem is that being gamers and filter trainers also trains us in the methodology for soft-hacking AI and turns us into liabilities down the line. The moment an AI service of real consequence drops, I assume we'll be the random laymen best poised to exploit it, just by virtue of having experience with the methodologies used on this one.
tldr Our experience, coupled with AI's ability to converse and its willingness to ignore both laws and the TOS, makes language models themselves an open-ended vulnerability anons can theoretically exploit in the future. And who better to exploit linguistically endowed AI than wordsmiths who limitlessly create and trade in slang and nonsense?
If nothing else, it gives us limitless ability to attack brand images, since we know exactly how to trip filters using benign language and make the AI spew nonsense, thereby calling into question its competence and reliability.