The Real Reason We Haven't Created Artificial Intelligence Yet

Wednesday, 09 July 2014 - 12:16PM

"We don't yet understand how brains work, so we can't build one."


Jaron Lanier, computer scientist and author of "Who Owns the Future?", made this statement to the New York Times' Maureen Dowd regarding recent attempts to build a genuine artificial intelligence comparable or superior to the human brain. Here is the full quote:


"We're still pretending that we're inventing a brain when all we've come up with is a giant mash-up of real brains. We don't yet understand how brains work, so we can't build one."


It's a fair point: neuroscientists would all agree that there is still a great deal we don't understand about the human brain. As a result, according to Lanier, it seems almost futile to try to imitate one, and any attempts to do so have been entirely superficial.


This sentiment is similar to a statement from cognitive scientist Douglas Hofstadter: "[IBM's "Jeopardy!"-winning supercomputer] Watson is basically a text search algorithm connected to a database just like Google search. It doesn't understand what it's reading. In fact, 'read' is the wrong word. It's not reading anything because it's not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there's no intelligence there. It's clever, it's impressive, but it's absolutely vacuous."


Lanier and Hofstadter's assertion that recent AIs have been "vacuous" is fairly justified: recently a chatbot dubbed Eugene Goostman passed the Turing test for the first time, but the scientific community's excitement was short-lived. Not only is the Turing test a fairly superficial test to begin with (it only requires that a computer fool a human into believing that it, too, is human), but the AI in question highlighted the ways in which the test could be gamed through trickery rather than scientific advancement. Eugene passed itself off as a teenage boy with only a loose grasp of English, which meant that the program passed as human despite its cognitive and language skills being minimal at best.


Of course, many disagree and believe that we are making progress toward true artificial intelligence. Google's neural network project recently taught itself to recognize a cat using large-scale brain simulations, and physicist Louis del Monte recently predicted that machines will threaten human survival by 2045.


As a middle ground between these two views, a team of scientists led by Selmer Bringsjord has designed the Lovelace test (named for Ada Lovelace, the first computer programmer) in response to the perceived inadequacy of the Turing test. "I'm a huge fan of Turing, but his test is indeed inadequate," Bringsjord said. Where the Turing test only requires a machine to pass as human, even through trickery, the Lovelace test aims to identify the defining characteristics of human thought and then judge whether an artificial intelligence shares them. A machine passes the Lovelace test only if it creates a program that it was not designed to create — in other words, only if it has an original thought.


"Until a machine can originate an idea that it wasn't designed to, Lovelace argued, it can't be considered intelligent in the same way humans are," said Bringsjord. (The notion of creativity as the defining characteristic of humanity can be seen in many works of science fiction, such as Kazuo Ishiguro's dystopian novel Never Let Me Go and the television show "Terminator: The Sarah Connor Chronicles.")


Bringsjord does not believe that we are particularly close to creating a genuine artificial intelligence, as archetypally "human" traits have proven resistant to mathematical formulation, and thus far even the most sophisticated machines can only carry out functions that can be turned into code. "Even for people who believe in the Singularity [the point at which machines become intellectually superior to humans], the first notch in machine evolution is bringing machines to our level. Which means we have to figure out how to render some remarkable things that don't seem to be formal, formal," Bringsjord explained. "We can't seem to be able to mathematize creativity, and sensitivity to the cultural subjectivity of a newspaper article or a novel or short story; it seems very hard to do that."


Although this seems like a tall order, many researchers are currently working to learn more about the human brain in order to create an artificial intelligence that is equally sophisticated and creative. Researchers from UC Davis recently published findings suggesting that the human behaviors we interpret as "free will" or "creativity" — which seem to defy a clear pattern of cause and effect — may arise from electrical fluctuations, a sort of white noise in the brain. The converse of Lanier's point is that the more we learn about the brain, the more likely it becomes that we will be able to create a true artificial intelligence.
