Given all of the above, only now can we get to the central question of this article: Could an AI ever become self-aware? The potentially dull answer is: We don’t know. We can’t define consciousness outside of human experience, we don’t know what causes consciousness in humans, and we don’t know how to “make” consciousness in anything else. As New Scientist outlines, the most we could do is possibly recognize the existence of consciousness by generating lists of “indicator properties” that, taken together, supposedly point to evidence of consciousness.
We say “evidence” as though human software engineers don’t know what they’re making, and that’s partially true. As the University of Michigan-Dearborn points out, the inner workings of AI are often a mystery even to its developers. Developers simply code parameters, let their creation go, and then observe what it does and how it evolves. Self-awareness, aka consciousness, is an emergent byproduct of a system, not something we can plan from the get-go, at least not yet. This is another reason why AI developers restlessly tinker with systems to produce different outcomes.
On that note, researcher Michael Timothy Bennett, writing in The Conversation, argues that consciousness requires frames of reference between itself and other things, as well as an awareness of cause and effect and their relationship to intention. This, some think, can be coaxed out of an AI using verbal prompts. At present, this might be how we uncork the genie’s bottle of AI consciousness.