>>18557148
Your meme picture refutes nothing he said. An AI system can play chess fantastically, but it doesn't even know that it's playing a game. We mistake the performance of machines for competence: when you see a program learn something a human can learn, you make the mistake of assuming it has the richness of understanding that you would have.

Take the Atlas robot by Boston Dynamics. Those demonstrations where it dances around are carefully scripted: it performs a lot of computation very fast, but inside a very careful setup. It didn't know it was doing a backflip. It didn't know where it was. It didn't know all sorts of things that a person doing a backflip would know. The machine has some math equations, forces and vectors, but no way of reasoning about them.

And not everything that is technically possible needs to be built. Human reason could decide not to fully develop such robots because of their potential harm to society. Even if, decades from now, the technical problems mentioned above are overcome so that complex human-like robots could be built, regulation could still prevent misuse.
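The chess point is easy to see concretely. A minimal sketch (hypothetical, not any real engine): a minimax search over a toy game tree. Every step is arithmetic over integers; nothing anywhere in the program represents the concept of "playing a game".

```python
def minimax(node, maximizing):
    # A leaf "position" is just a number to be compared.
    if not isinstance(node, list):
        return node
    # Recurse, alternating between max and min over child scores.
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-made game tree encoded as nested lists of scores.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # → 3, the maximizer's best guaranteed score
```

The program "plays" perfectly on this tree, yet it consists of nothing but comparisons between numbers — performance without competence.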
A few problems that are impossible for an AI to solve:
>Computers lack debugging ability
>Computers lack self-awareness (understanding their own code and its execution)
>Computers lack intuition
>Computers lack access to resources (people, privacy, permissions, etc.)
>Computers lack self-sufficiency