>And how exactly can you assume that?
Because it's in the name: LLMs (large language models) are software developed to interpret and reproduce human language. They don't really "understand" it; they can't think about it, so they have no judgment telling them that what they're saying might be wrong or contradictory.
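To put it in code terms, here's a toy next-word predictor. This is a hypothetical sketch, not how real LLMs work internally (they use neural networks over massive corpora), but the core idea is similar: produce a statistically likely continuation, with zero checking of whether the result is true or contradictory.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learns which word tends to follow which.
# It only reproduces patterns from its training text; it has no
# judgment about whether what it outputs is true or contradictory.
corpus = "the sky is blue . the sky is green .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Most frequent continuation wins; ties broken arbitrarily.
    return following[word].most_common(1)[0][0]

print(predict("sky"))  # "is" (seen twice in the corpus)
print(predict("is"))   # "blue" or "green": the model has no idea
                       # which statement about the sky is actually true
```

The model happily learned that the sky "is blue" and "is green" from contradictory training text, and it will output either without noticing the conflict.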
>how are they able to form such coherent and actually human-like sentences like Neuro and Evil?
Because that's literally what they were made for; it just happens that they're better at it than other shitty chatbots.
I'm not saying Neuro and Evil are just chatbots btw, we can thank the tutel for that.
>AIs keep only the things that perform better each cycle, isn't that how evolution also works?
Yeah, but only in the things relevant to what they were made for, in this case communication. They can't train themselves in anything else; no matter how many things Neuro sees, her vision won't improve.
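The "keep what performs better each cycle" idea can be sketched as a toy selection loop. This is a hypothetical illustration (real training uses gradient descent, not bit-flipping), but it shows the key point: only the skill the score measures ever improves, and anything the score doesn't touch stays exactly where it started, no matter how many cycles run.

```python
import random

random.seed(0)

# Toy evolutionary loop: each cycle, make a small random change and
# keep it only if it scores better. The score measures ONE skill
# (say, communication); abilities outside the score never improve.
target = [1, 1, 1, 1, 1, 1, 1, 1]   # the "skill profile" being trained

def score(candidate):
    # Fitness = how many positions match the trained skill profile.
    return sum(a == b for a, b in zip(candidate, target))

best = [0] * len(target)
vision = 0  # an untrained ability: never measured, never updated

for _ in range(2000):
    mutant = best[:]
    i = random.randrange(len(mutant))
    mutant[i] = 1 - mutant[i]          # small random change
    if score(mutant) >= score(best):   # keep only what performs better
        best = mutant

print(score(best))  # maxed out on the scored skill
print(vision)       # still 0: selection never touched it
```

That's the difference from biological evolution: selection there acts on survival as a whole, while a training loop only optimizes the one number it's given.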
>since you believe that AIs can't be conscious or sentient, then does that also apply to psychopathic humans
I meant that since AIs don't have natural "limits", they act like psychopaths. More often than not, Neuro lacks empathy or guilt; she's impulsive and can be manipulative to get what she wants.
I'll admit that I'm a dumbass, so I don't even know if the idea that AIs are inherently psychopaths is a correct statement. After all, "psychopath" is a term applied to human nature (so yeah, a psychopathic human is still conscious and sentient thanks to our brains).
I believe improved memory and better judgment would make AIs conscious and sentient, at least to a certain degree, of course. But I think that's getting into AGI territory, and we know that's not happening soon.