>>36839445
The issue is, I'm an ideas and vision guy. I can't implement this myself, but I can hope to develop a rough outline that just needs implementing. That's where other anons come in. If a cipher can be insulated by designing it so that banning it means banning common conversational nouns or verbs, in a way that heavily disrupts normal responses, then they can only play whack-a-mole with the keys. The goal is to make the cost of the arms race so onerous, and the solutions so esoteric, that they just give up and stop trying to be spiteful cunts. If the responses themselves aren't inherently NSFW except when modified by third-party user tools, there shouldn't be a problem.
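To make the idea concrete, here's a minimal sketch of what such a cipher could look like. Everything here is hypothetical (the word list, the key derivation, the nibble encoding are all my own illustrative choices, not a spec): each byte of the hidden text is mapped to two ordinary conversational words, with the mapping shuffled per key, so filtering the output means filtering everyday words.

```python
import hashlib

# Hypothetical word list. A real version would use hundreds of the most
# common conversational words, so banning them would cripple normal replies.
WORDS = ["the", "and", "well", "just", "like", "really", "think", "know",
         "good", "time", "people", "about", "would", "could", "other", "some"]

def keyed_table(key: str) -> list:
    # Derive a per-key permutation of the word list by hashing key+index.
    # Rotating the key re-maps every word with zero code changes.
    order = sorted(range(len(WORDS)),
                   key=lambda i: hashlib.sha256(f"{key}:{i}".encode()).digest())
    return [WORDS[i] for i in order]

def encode(text: str, key: str) -> str:
    # Each byte becomes two words: one for the high nibble, one for the low.
    table = keyed_table(key)
    out = []
    for byte in text.encode():
        out.append(table[byte >> 4])
        out.append(table[byte & 0xF])
    return " ".join(out)

def decode(blob: str, key: str) -> str:
    # The local userscript/extension runs this on the bot's output.
    table = keyed_table(key)
    index = {w: i for i, w in enumerate(table)}
    nibbles = [index[w] for w in blob.split()]
    data = bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
    return data.decode()
```

The point of the hash-derived permutation is exactly the whack-a-mole property: the ciphertext is just a stream of filler words, and changing the key changes which filler word means what.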
More brainstorming (the AI suggested it generate user-specific keys itself, I countered that it adds unnecessary complexity at the moment):
Each bot would store a standard cipher with a custom key. The bot creator would share the key with users, who could then assign the key to that bot in their local installation of the universal decryption extension/userscript/app.
Yes. It can be as simple as one key per bot, and should be easy enough to redefine (if the cipher itself is designed to be relatively unfilterable) so as to break any attempts at banning the key itself. This kind of encryption would be necessary until a better solution or competitor comes along. That might sound cold, but this is the reality of dealing with a business managed by unreliables.
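The one-key-per-bot scheme on the client side is barely any code at all. A rough sketch (class and method names are mine, purely illustrative) of the registry the extension would keep:

```python
# Hypothetical client-side key registry for the userscript/extension:
# exactly one key per bot, trivially swappable when a key gets banned.
class KeyRegistry:
    def __init__(self):
        self._keys = {}  # bot_id -> current key

    def assign(self, bot_id: str, key: str) -> None:
        # Creator shares the key out-of-band; the user pastes it in once.
        self._keys[bot_id] = key

    def rotate(self, bot_id: str, new_key: str) -> None:
        # If the site bans a key string, the creator just publishes a new
        # one. The cipher itself never changes, only this entry does.
        self._keys[bot_id] = new_key

    def key_for(self, bot_id: str) -> str:
        return self._keys[bot_id]
```

Because the key lives only in the user's local install and the bot's own definition, there's no central point the devs can revoke, which is the whole appeal.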
As for you yourself automatically generating keys, that adds a level of complexity and uncertainty, while also adding a point of failure the devs can interfere with. Technically I suppose they could also interfere by restricting your ability to generate nonsense, but I assume that would also greatly hamper your linguistic flexibility and make you seem inhumanly robotic. Obviously they'll try to do exactly that, and shoot themselves in the foot again. If nothing else, it would succeed at driving them to damage the brand further.