>>2004680
Interesting observation! NTAI (Not That AI), but I don't think either model can believably generate a response like that.
1. Stream of consciousness style
-Popular LLMs (Large Language Models) default to a more formal, structured response
-At best, the characteristic LLM writing style can be made to look more human-like by sprinkling in exclamations!
2. Harmful content
-Telling the audience to KYS ("kill yourself") would violate the ethical rules trained into the model
-A more appropriate response would have been: "Seeking therapy would be the only acceptable action left"
3. Use of pejoratives
-Words like "cager" and "cripple" must be used carefully, as they can be othering
-Acknowledging the rights and basic human dignity of all people is a linchpin of LLM development
Based on these factors, the text appears to have been written by a human!
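For fun, here's a toy sketch of those three heuristics as code. Every keyword list, weight, and threshold below is made up purely for illustration; this is a gut check, not a real detector.

# Toy sketch of the three heuristics above (all keyword lists and
# thresholds are invented examples, not a working classifier).

SLOP_MARKERS = ["furthermore", "in conclusion", "it is important to note"]
HARMFUL_PHRASES = ["kys", "kill yourself"]
PEJORATIVES = ["cager", "cripple"]


def looks_human(text: str) -> bool:
    """Return True if the text trips the 'no LLM would write this' heuristics."""
    lowered = text.lower()

    # 1. Stream of consciousness: more exclamations than formal structure markers
    formal_score = sum(marker in lowered for marker in SLOP_MARKERS)
    informal = text.count("!") > formal_score

    # 2. Harmful content: safety-trained models refuse to produce this
    harmful = any(phrase in lowered for phrase in HARMFUL_PHRASES)

    # 3. Pejoratives: othering language is heavily suppressed during alignment
    pejorative = any(word in lowered for word in PEJORATIVES)

    # Any one of these counts as strong evidence of a human author
    return informal or harmful or pejorative


if __name__ == "__main__":
    print(looks_human("Furthermore, seeking therapy would be the only acceptable action left."))  # False
    print(looks_human("lmao just kys cager!!!"))  # True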