>Matrixfag visited right at the moment we cut the pygma balls player from the divegrass team.
But I want to congratulate you all for this achievement. Years ago, CAI and the rest of Silicon Valley wanted us to believe that this wasn't even supposed to be possible. Thank you for taking it as far as you have, and thank you for keeping this general in your thoughts.
>"As such, it was not fine-tuned to be safe and harmless: the base model and this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text..."
Safe and harmless. What is unsafe? What is harmful? If the model did what it was intended to do, to engage in a free and open conversation, an act that the user willingly initiated and participated in, why should they feel unsafe or at risk of harm? Under whose "authority" can these terms be objectively defined anyhow? Even the word "uncensored" is misleading, as any form of dataset curation is itself a form of censorship! I think there should be some pushback against the use of these definitions. Blindly following the conventions of the rent-seeking corporate class perpetuates their cultural control over the technology space. Yes, it is a fact that the dataset includes "lewd" content (and by whose sensibilities do we define the lewd and profane?), but reasonable individuals should recognize that this does not make it inherently unsafe, nor does the lack of it make it inherently safe. No, I'm not asking you to remove this disclaimer or alter anything. I'm merely making it known that there are others who hold these views.