Alien Civilizations Could Destroy Humanity With AI Messages Without Even Visiting Earth

Wednesday, 14 February 2018 - 10:14AM
Technology
Alien Life
Image credit: YouTube

Fans of Cixin Liu's sci-fi blockbuster The Three-Body Problem will be pleased to know that scientists from Sonneberg, Germany, and the University of Hawaii are exploring an alien contact scenario strikingly similar to the one in the novel, complete with the emergence of alien-worshiping cults.

As Liu suggests over the course of the novel, even "elementary contact," a simple sign that alien civilizations exist, could change the course of humanity as we know it. This new study, however, argues that whatever message aliens send us carries a risk of destroying us all, without the need for Independence Day-style invasion ships. Even worse, the study claims, there might be no way to contain the threat once such a message arrives on Earth.

According to the study, one potential outcome of an alien message is a massive and destructive cultural shift, in which the message's revelations change the way we look at the universe or ourselves, eroding our civilization the way the Bible supposedly brought about the downfall of the Roman Empire. Another outcome, which The Three-Body Problem directly addresses, is the possibility that the message contains a sophisticated virus or artificial intelligence that could escape whatever computer it's fed into and wreak havoc across the world.

This is similar to Liu's sophons, subatomic AIs with the power to sabotage global systems.

The instinctive response to these threats, the study says, is to find a way to capture, contain, and decontaminate any potential alien message before reading it. But decontamination turns out to be impossible, especially if the message takes the form of an artificial intelligence: even confined to a small, self-contained "prison" on the moon with no access to Earth's global network, there is a non-zero chance the AI could convince someone to release it (the well-known "AI-in-a-box scenario" devised by Eliezer Yudkowsky).

And if the bulk of humanity were to find out about such an AI, new cults and political organizations could form to lobby for its release, potentially creating groups like Ye Wenjie's Three Body Society.

In the end, the study claims that there's no way to mitigate the risk of opening an encoded message from an alien civilization—we simply don't have the tools:

"Our main argument is that a message from ETI cannot be decontaminated with certainty. For anything more complex than easily printable images or plain text, the technical risks are impossible to assess beforehand. We may only choose to destroy such a message, or take the risk."

On a positive note, the authors hypothesize that the risk of receiving an apocalypse-triggering message from a malicious alien civilization is relatively low, while the benefits of potentially joining a peaceful, interstellar network of civilizations would be unimaginably huge. With that in mind, their final recommendation is that we read any message we receive—even if it changes the world forever.

Science
Science News