The 20 Best Responses to 'What Do You Think About Machines that Think?'
Edge's Annual Question of 2015 read: "What do you think about machines that think?" They posed it to a multitude of AI experts, authors, professors, researchers, and even artists, producing a collection of fascinating essays from many of the world's foremost thinkers on artificial intelligence. Some of the responses were funny, most were serious, and all were thought-provoking. Two of the most common responses were two sides of the same coin: either "humans are biological thinking machines, so we already have machines that think" or "we may have thinking machines someday, but once a machine can think, it's no longer a machine."
Here are the twenty best responses to Edge's question, where all but the first two have been abridged for length:
It Depends - Robert Sapolsky
"What do I think about machines that think? Well, of course it depends on who that person is."
An AI - Katinka Matson
What Do You Care What Other Machines Think? - George Church
"I am a machine that thinks, made of atoms-a perfect quantum simulation of a many-body problem-a 1029 body problem. I, robot, am dangerously capable of self-reprogramming and preventing others from cutting off my power supply. We human machines extend our abilities via symbiosis with other machines-expanding our vision to span wavelengths beyond the mere few nanometers visible to our ancestors, out to the full electromagnetic range from picometer to megameter... We extend our memory and math by a billion-fold with our silicon prostheses... As Moore's Law heads from 20-nm transistor lithography down to 0.1 nm atomic precision and from 2D to 3D circuits, we may downplay reinventing and simulating our biomolecular-brains and switch to engineering them."
Nobody Would Ever Ask A Machine What It Thinks About Machines That Think - James J. O'Donnell
"Could a machine get confused? Experience cognitive dissonance? Dream? Wonder? Forget the name of that guy over there and at the same time know that it really knows the answer and if it just thinks about something else for a while might remember? Lose track of time? Decide to get a puppy? Have low self-esteem? Have suicidal thoughts? Get bored? Worry? Pray? I think not.
Nobody would ever ask a machine what it thinks about machines that think. It's a question that only makes sense if we care about the thinker as an autonomous and interesting being like ourselves. If somebody ever does ask a machine this question, it won't be a machine any more."
Thinking About People Who Think Like Machines - Haim Harari
"When we say 'machines that think,' we really mean: 'machines that think like people.' It is obvious that, in many different ways, machines do think: They trigger events, process things, take decisions, make choices, and perform many, but not all, other aspects of thinking. But the real question is whether machines can think like people.
Some prominent scientific gurus are scared by a world controlled by thinking machines. I am not sure that this is a valid fear. I am more concerned about a world led by people who think like machines, a major emerging trend of our digital society.
Our human society is currently moving fast towards rules, regulations, laws, investment vehicles, political dogmas and patterns of behavior that blindly follow strict logic, even when it starts with false foundations or collides with obvious common sense. Religious extremism has always progressed on the basis of some absurd axioms, leading very logically to endless harsh consequences. Several disciplines such as law, accounting and certain areas of mathematics and technology, augmented by bureaucratic structures and by media which idolize inflexible regulators, often lead to opaque principles like "total transparency" and to tolerance towards acts of extreme intolerance.
Unfortunately, the gap between machine thinking and human thinking can narrow in two ways, and when people begin to think like machines, we automatically achieve the goal of 'machines that think like people,' reaching it from the wrong direction."
What Does Thinking About Thinking Machines Tell Us About Human Beings? - Satyajit Das
"When it comes to questions of technology, the human race is rarely logical. We frequently do not accept that something cannot or should not be done. Progress is accepted without question or understanding of what and why we need to know. We do not know when and how our creations should be used or its limits. We frequently do not know the real or full consequences. Doubters are dismissed as Luddites.
Technology and its manifestations, such as machines or AI, are an illusion, which appeals to human arrogance, ambition and vanity. It multiplies confusion in poet T.S. Eliot's 'wilderness of mirrors.'
The human species is simply too small, insignificant and inadequate to fully succeed in anything that we think we can do. Thinking about machines that think merely confirms that inconvenient truth."
I, For One, Welcome Our Superintelligent Machine Overlords - Antony Garrett Lisi
"As machines rise to sentience-and they will-they will compete, in Darwinian fashion, for resources, survival, and propagation. This scenario seems like a nightmare for most people, with fears stoked by movies of terminator robots and computer-directed nuclear destruction, but the reality will likely be very different. We already have nonhuman autonomous entities operating in our society with the legal rights of humans. These entities, corporations, act to fulfill their missions without love or care for human beings.
Corporations are sociopaths, and they have done great damage, but they have also been a great force for good in the world, competing in the capitalist arena by providing products and services, and, for the most part, obeying laws. Corporations are ostensibly run by their boards, comprised of humans, but these boards are in the habit of delegating power, and as computers become more capable of running corporations, they will get more of that power. The corporate boards of the future will be circuit boards."
An Extraterrestrial Observation About Human Hubris - Ernst Pöppel
"Finally, it has to be disclosed that I am not a human, but an extraterrestrial creature that looks human. In fact, I am a robot equipped with what humans call "artificial intelligence". Of course, I am not alone here. We are quite a few (almost impossible to be identified), and we are sent here to observe human behavior.
We are surprised about the many deficiencies of humans, and we observe them with fascination. These deficiencies show up in their strange behavior or their limited power of reasoning. Indeed, our cognitive competences are much higher, and the celebration of their human intelligence is, in our eyes, ridiculous. Humans do not even know what they refer to when they talk about 'intelligence.' It is in fact quite funny that they want to construct systems with 'artificial intelligence' which should match their intelligence, but what they refer to as their intelligence is not clear at all. This is one of those many stupidities that have haunted the human race for ages."
What If They Need To Suffer? - Thomas Metzinger
"Human thinking is so efficient, because we suffer so much. High-level cognition is one thing, intrinsic motivation another. Artificial thinking might soon be much more efficient-but will it be necessarily associated with suffering in the same way?
Human beings have fragile bodies, are born into dangerous social environments, and find themselves in a constant uphill battle of denying their own mortality. Our brains continuously fight to minimize the likelihood of ugly surprises. We are smart because we hurt, because we are able to feel regret, and because of our continuous striving to find some viable form of self-deception or symbolic immortality.
The question is whether good AI also needs fragile hardware, insecure environments, and an inbuilt conflict with impermanence as well. Of course, at some point, there will be thinking machines! But will their own thoughts matter to them? Why should they be interested in them?"
I Think That Machines Can't Think - Emanuel Derman
"A machine is a 'matter' thing that gets its quality from the point of view of a 'mind.'
There is a 'mind' way of looking at things, and a 'matter' way of looking at things.
Stuart Hampshire, in his book on Spinoza, argues that, according to Spinoza, you must choose: you can invoke mind as an explanation for something mind-like, or you can invoke matter as an explanation for something material, but you cannot fairly invoke mind to explain matter or vice versa.
From this point of view therefore, as long as I understand the material explanation of a machine's behavior, I will argue that it doesn't think."
Machines That Think Are In The Movies - Roger Schank
"Machines cannot think. They are not going to think any time soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights, is just plain silly.
The overpromising of 'expert systems' in the 1980s killed off serious funding for the kind of AI that tries to build virtual humans. Very few people are working in this area today. But, according to the media, we must be very afraid.
We have all been watching too many movies."
Machines That Think? NUTS - Stuart A. Kauffman
"Ontologically, free choice requires that the present could have been different, a counterfactual claim impossible in classical physics, but easy if quantum measurement is real and indeterminate: the electron could have been measured to be spin up or measured to be spin down, so the present could have been different.
We may live in a wildly participatory universe, consciousness and will may be part of its furniture, and Turing machines cannot, as subsets of classical physics and merely syntactic, make choices where the present could have been different."
Machines Don't Think, But Neither Do People - Cesar Hidalgo
"Machines that think? That's as fallacious as people that think! Thinking involves processing information, begetting new physical order from incoming streams of physical order. Thinking is a precious ability, which unfortunately, is not the privilege of single units, such as machines or people, but a property of the systems in which these units come to 'life.'
Think of a human that was born in the dark solitude of empty space. She would have nothing to think about. The same would be true for an isolated and inputless computing machine. In this context, we can call our borrowed ability to process information 'little' thinking-since it is a context-dependent ability that happens at the individual level. 'Large' thinking, on the other hand, is the ability to process information that is embodied in systems, where units like machines or us are mere pawns."
Head Transplants? - Juan Enriquez
"In the pantheon of gruesome medical experiments few match head transplants. Animal experiments have attempted this procedure in two ways: substitute one head for another or graft a second head onto an animal. So far the procedure has not been very successful.
If mice with new heads recognized previously navigated mazes, or maintained the previous mouse's conditioned reactions to certain foods, smells, or stimuli, we would have to consider the possibility that memory and consciousness do transplant.
Actually knowing if you can transplant knowledge and emotions from one body to another goes a long way towards answering the question "could we ever download and store part of our brains, not just into another body but eventually into a chip, into a machine?" If you could, then it would make the path to large scale AI far easier. We would simply have to copy, merge, and augment existing data, data that we would know is transferable, stackable, manipulatable.
If brain data is not transferable, or replicable, then developing AI would require building a parallel machine thought system, something quite separate and distinct from animal and human intelligence. Building consciousness from scratch implies following a new and very different evolutionary path to that of human intelligence... In this scenario how machines might think, feel, govern could have little to do with the billions of years of animal-human intelligence and learning. Nor would they be constrained to organize their society, and its rules, as do we."
Call Them Artificial Aliens - Kevin Kelly
"The most important thing about making machines that can think is that they will think different.
Because of a quirk in our evolutionary history, we are cruising as the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence "general purpose" because compared to other kinds of minds we have met it can solve more kinds of problems, but as we build more and more synthetic minds we'll come to realize that human thinking is not general at all. It is only one species of thinking.
AI could just as well stand for Alien Intelligence... When we face these synthetic aliens, we'll encounter the same benefits and challenges that we expect from contact with ET. They will force us to re-evaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be: humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different-to create alien intelligences. Call them artificial aliens."
Tulips On My Robot's Tomb - Andrés Roemer
"When we study ancient archetypes, literature and the projections in the contemporary debate reflected in the Edge 2015 question; a recurrent subconscious instinctive appears, the reptilian binomial: Death vs. Immortality.
Our fear of death is, without a doubt, behind the collective imagination of robots that can reproduce and that, with their thinking omnipotence, will betray and destroy their creators. Such machines seem to pose the most horrifying danger: that of the extinction of everything that matters to us. But our reptilian brains also see in them the savior, hoping that super-intelligent machines will offer us eternal life and youth.
Therefore, in thinking about machines that think, we should ask ourselves reptilian questions, such as: Would you risk your life for a machine? Would you let a robot be a political leader? Would you be jealous of a machine? Would you pay taxes for a robot's well-being? Would you put tulips on your robot's tomb? Or even more important… Would my robot put tulips on my tomb?
Acknowledging the power of the reptilian in our thinking about machines that think helps us to see more clearly the implications, and nature, of a machine that genuinely is able to doubt and commit, and the kind of AI we should aspire to. If our biology designed culture as a tool for survival and evolution, nowadays our natural intelligence should lead us to create machines that feel and are instinctual; only then will immortality overcome death."
Killer Thinking Machines Keep Human Consciences Clean - Kurt Gray
"Machines have long helped us kill. From catapults to cruise missiles, mechanical systems have allowed humans to better destroy each other. Despite the increased sophistication of killing machines, one thing has remained constant-human minds are always morally accountable for their operation. Guns and bombs are inherently mindless, and so blame slips past them to the person who pulled the trigger.
But what if machines had enough of a mind that they could choose to kill all on their own? Such a thinking machine could retain the blame for itself, keeping clean the consciences of those who benefit from its work of destruction. Thinking machines may better the world in many ways, but they may also let people get away with murder."
Machines Will Always Lack Feeling Or Emotion - Gerald Smallberg
"We have built machines that in simplistic ways are already 'thinking' by solving problems or are performing tasks that we have designed... In theory, as these machines become more sophisticated, they will at some point attain a form of consciousness defined for the purpose of this discussion as the ability to be aware of being aware.
Its form of consciousness, however, will be devoid of subjective feelings or emotions... Fear, joy, sadness, anger, and lust are examples of emotions. Feelings can include contentment, anxiety, happiness, bitterness, love, and hatred.
Machines are not organisms and no matter how complex and sophisticated they become, they will not evolve by natural selection. By whatever means machines are designed and programmed, their possessing the ability to have feelings and emotions would be counter-productive to what will make them most valuable.
The driving force for more advanced intelligent machines will be the need to process and analyze the incomprehensible amount of information and data that will become available to help us ascertain what is likely to be true from what is false, what is relevant from what is irrelevant... They will have to be totally rational agents in order to do these tasks with accuracy and reliability. In their decision analysis, a system of moral standards will be necessary.
We will be unable to read machines' thoughts, and they will be equally incapable of reading ours. There will be no shared theory of mind.
My judgment about whether this poses a utopian or dystopian future will be based upon thinking, which will be biased as always, since it will remain a product of analytical reasoning, colored by my feelings and emotions."
Three Observations on Artificial Intelligence - Frank Wilczek
"1) We are They
All intelligence is machine intelligence. What distinguishes natural from artificial intelligence is not what it is, but only how it is made.
2) They are Us
Artificial intelligence is not the product of an alien invasion. It is an artifact of a particular human culture, and reflects the values of that culture.
3) Reason Is the Slave of the Passions
David Hume's striking statement: 'Reason Is, and Ought only to Be, the Slave of the Passions'... remains valid for AI. Simply put: Incentives, not abstract logic, drive behavior. That is why the AI I find most alarming is its embodiment in autonomous military entities-artificial soldiers, drones of all sorts, and "systems." The values we may want to instill in such entities are alertness to threats and skill in combatting them. But those positive values, gone even slightly awry, slide into paranoia and aggression. Without careful restraint and tact, researchers could wake up to discover they've enabled the creation of armies of powerful, clever, vicious paranoiacs."
When It Comes To AI, Think Protopia, Not Utopia Or Dystopia - Michael Shermer
"Both utopian and dystopian visions of AI are based on a projection of the future quite unlike anything history has given us. Instead of utopia or dystopia, think protopia, a term coined by the futurist Kevin Kelly, who described it in an Edge conversation this way: 'I call myself a protopian, not a utopian. I believe in progress in an incremental way where every year it's better than the year before but not by very much-just a micro amount.' Almost all progress in science and technology, including computers and artificial intelligence, is of a protopian nature. Rarely, if ever, do technologies lead to either utopian or dystopian societies."