This is a fascinating topic and a thought-provoking paper. From the beginning, I could not stop thinking of the ramifications. How would an intelligent robot behave if it were not able to recognize what is part of it and what is not? The author does not assume that the robot itself is conscious, but claims that questions related to artificial consciousness are inevitable in the future. He concludes that robots may engage with and offer explanations of consciousness similar to those of humans (the robot's own hard problem), arriving at dualist and physicalist positions that emerge from ordinary algorithmic procedures. The approach is similar to Turing's imitation game, which makes no assumption about the intelligence of the machine but instead analyzes the machine's behavior and its interaction with the environment.
Throughout the paper, consciousness is never assumed, although a few minimal and reasonable assumptions are made (nothing beyond the current scope of modern artificial intelligence (AI)). The proposed approach requires only that the robot process information and act as if it believed or experienced something in order to count as believing or experiencing it. I personally think the arguments get more complicated than necessary to derive the author's claim. I also feel the paper falls short of its goal, perhaps because this over-elaboration of the argument with examples distracted the author from the main line of discussion. The examples do help, however, to make the case of the robot "feeling" its internal states while trying to distinguish real-world phenomena from an illusion (for example, a vision after a technical malfunction). They also aid in understanding the arguments, so overall they seem appropriate, even if a bit convoluted (the long list of restrictions).
The formal approach in the "Demonstration" section is quite clever. The corollary on how the robot may adopt a reductionist versus a dualist position is the most illuminating part. At the end, the author offers a list of possible objections with accompanying responses. The objections look quite weak to me, though I may already be convinced, and some (for example, objection 2, on the identification of all identity properties) seem particular to the author's approach rather than aimed at the core argument. I would have liked to see more heavyweight objections (such as objection 6, concerning why the robot's creator could not solve its problem), perhaps related to a consciousness version of Searle's Chinese room and other common objections in the field of AI.
I recommend this excellent paper on a thrilling topic.