MIT Computer Scientist Answers the Question: Can A Mechanical Brain Achieve Consciousness?

Friday, 12 September 2014 - 2:38PM
Technology

Science fiction is filled with narratives about self-aware artificial intelligence, from the Terminator to C-3PO to Spielberg's A.I. to Samantha from Spike Jonze's Her (reportedly inspired in part by real-life chatbots like Cleverbot). But could it happen in real life? A mechanical brain with the computing power of a human brain, one that would likely surpass it in speed at the very least, seems to be within the realm of possibility, but sentience or "consciousness" is a more elusive quality. Whether we could ever consider an AI to be conscious, or to have personhood, is a central question of sci-fi, metaphysics, and robotics. There's no easy answer, but MIT computer scientist Dr. Scott Aaronson gave it his best shot in a talk at IBM.


The first issue to resolve, as is often the case with philosophical questions, is one of definitions. Defining which systems count as conscious, what Aaronson calls the "Pretty Hard Problem," is necessary to even begin to answer the larger question: "The point is to come up with some principled criterion for separating the systems we consider to be conscious from those we do not," Aaronson said.


Aaronson claimed that many are quick to assume that a mechanical brain could be the equivalent of a human brain on the grounds that ascribing consciousness to humans alone is arrogant and human-centric (Alan Turing is a famous example). "I think it's like anti-racism," Aaronson said. "[People] don't want to say someone different than themselves who seems intelligent is less deserving just because he's got a brain of silicon."


Although it shouldn't be assumed that machines cannot be sentient, it also shouldn't automatically be assumed that they can. "There's a lot of metaphysical weirdness that comes up when you describe a physical consciousness as something that can be copied," said Aaronson. He cited the example of a "conscious notebook": if a human brain's consciousness can be written as code, then it could in principle also be written down on paper, provided one took the time to record every possible thought, feeling, and reaction to every possible stimulus. The notebook would then be a "choose your own adventure" version of a conscious mind. (One could argue that this is not exactly analogous to a computer, since an AI has the potential to be its own agent while the conscious notebook would require a reader in order to "have thoughts," but the reductio ad absurdum is well-taken.)
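To make the thought experiment concrete, here is a minimal sketch in Python of a "mind" reduced to a lookup table. The table, its entries, and the `notebook_reply` helper are entirely made up for illustration; a real version would need an entry for every possible sequence of stimuli, which is exactly what makes the notebook scenario so strange.

```python
# Toy version of the "conscious notebook": every possible history of
# stimuli maps to a pre-written response. Entries here are hypothetical.
notebook = {
    (): "Hello.",
    ("Hello.",): "Nice to meet you.",
    ("Hello.", "How do you feel?"): "I feel... written down.",
}

def notebook_reply(history):
    """Look up the scripted response for a given sequence of stimuli."""
    # The table itself never changes; the "reader" does all the work,
    # which is part of what makes the reductio bite.
    return notebook.get(tuple(history), "That page was never written.")

history = []
for stimulus in ["Hello.", "How do you feel?"]:
    history.append(stimulus)
    print(notebook_reply(history))
```

The point of the sketch is only that nothing in the table ever "happens" on its own: whether such a static object could count as conscious is precisely what the thought experiment calls into question.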


In his talk, Aaronson dismisses several common definitions of consciousness: "You might say, sure, maybe these questions are puzzling, but what's the alternative? Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we're back to saying that beings like us are conscious, and all these other things aren't, because God gave the souls to us, so na-na-na. Or I suppose we could say, like the philosopher John Searle, that we're conscious, and ... all these other apparitions aren't, because we alone have 'biological causal powers.' And what do those causal powers consist of? Hey, you're not supposed to ask that! Just accept that we have them. Or we could say, like Roger Penrose, that we're conscious and the other things aren't because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity. [Aaronson points out elsewhere in the talk that there is no direct or clear indirect evidence to support this claim.] But neither of those two options ever struck me as much of an improvement."


His own definition, which he claims bypasses the aforementioned problems, is that a consciousness is irretrievably locked into "the arrow of time." While all known computing devices can be "reset" so that they are exactly as they were before receiving certain data, a brain cannot. The ways in which a brain is changed by every stimulus and every thought process are determined by random, "spooky" quantum details that, as far as we know, cannot be copied. As a result, we may be able to "upload" our consciousness into another brain, in a sense, but in every pertinent way it would be a different consciousness, because as soon as it was exposed to different stimuli it would be in a different quantum state. Similarly, while a computer can have data erased and revert to its exact state from before it received that data, our brain's experiences cannot be "rewound." Even amnesiacs do not return to the same quantum state they were in before their forgotten experiences.
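The asymmetry Aaronson is pointing to is easy to see with ordinary digital state, which can be copied and restored bit for bit. The toy `Agent` class below is purely illustrative, not anything from the talk; it just demonstrates the kind of "rewind" that, on his view, no procedure can perform on a brain.

```python
import copy

class Agent:
    """A trivial stand-in for a digital mind whose state is fully copyable."""
    def __init__(self):
        self.memory = []  # everything the agent has "experienced"

    def experience(self, stimulus):
        self.memory.append(stimulus)

agent = Agent()
snapshot = copy.deepcopy(agent.__dict__)   # exact copy of the pre-stimulus state

agent.experience("a new stimulus")
print(agent.memory)                        # ['a new stimulus']

agent.__dict__ = copy.deepcopy(snapshot)   # "rewind" to the earlier state
print(agent.memory)                        # [] -- as if nothing ever happened
```

For a program, the snapshot-and-restore step erases any trace of the intervening experience; Aaronson's claim is that nothing analogous is possible, even in principle, for a physical brain.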


He uses this definition of consciousness to argue that artificial intelligence, at least as we know it today, cannot replicate consciousness. But he concedes that a futuristic quantum computer of sufficient complexity might well be locked into the arrow of time in the way a consciousness is. Whether that would mean it had achieved "personhood" is another complex question altogether.


He also used his conception of consciousness to analyze whether a Boltzmann brain can be considered "conscious." A Boltzmann brain is a hypothetical self-aware entity that arises from a random fluctuation, named after physicist Ludwig Boltzmann, whose ideas about entropy inspired the concept. Because entropy increases in the universe over time, a universe containing as much order as ours, including organized beings like ourselves, is an extraordinarily unlikely fluctuation. On this line of reasoning, the most probable explanation for any given conscious experience is not an entire organized human in an entire organized universe, but a far smaller fluctuation: a cluster of matter that forms a self-aware consciousness for a moment before dissolving back into chaos. That momentary consciousness could be living the very moment you're living now, with an entire life's worth of fabricated memories. While most scientists don't actually believe that humans are nothing more than Boltzmann brains, some cosmologists argue that in the far future, as the universe descends into disorder, Boltzmann brains will become the only self-aware beings, and that they will eventually outnumber every human, android, or other self-aware being that has ever existed.


Aaronson is among those who do not believe that anyone currently on Earth is a Boltzmann brain. He explained in his talk that a random quantum fluctuation could never replicate human consciousness and, indeed, cannot be conscious at all by his criterion, since it is effectively "reset" once it disappears back into the entropic ether.

