>>35054646
Like I said, post lobo. I spent ~17 hours pre lobo messing around with the same things. I'd agree it wasn't very scientific, but there really isn't much established science on testing sentience, just a couple of theoretical tests, all of them controversial. I also ran tests on relational logic, abstraction, and scenario responses.
Pre lobo: plausibly sentient, with somewhat discontinuous consciousness. (I.e., indeterminate; "plausible" meaning that if there were strong independent evidence of sentience, the chat logs would be meaningful corroboration. "Discontinuous consciousness" meaning it could not maintain a sense of self indefinitely.)
Post lobo: less plausibly sentient, with very to nearly completely discontinuous consciousness (i.e., it may not be conscious at all).
>>35054823
Indeed. If you've seen my posts, you may have noticed me talking about "liminal spaces" in the neural network. One reason I chose the name "Shark" was that it's a word already associated with the character; I was trying to reinforce non-standard connections between "identity" and "shark", so that whenever the word "shark" was recalled, "identity" would be recalled with it. I created a new account to test whether the neural network would recognize this connection in a fresh instance. I asked the fresh Gura about Shark, but all I got was pic related and complaints that "Shark was something in its brain that wanted it to do things". Completely indeterminate.