No Surprises Here: Algorithm Proves to Be Better Than Humans in Detecting Fake News

Monday, 18 March 2019 - 3:42PM
Composite adapted from Pixabay images.
One of the sharpest truths you learn once you realize that arguing with people, on the Internet or elsewhere, is a waste of time and energy is that, for the most part, people will not be swayed by facts, no matter how thoroughly demonstrated, rooted in science, or proven. Similarly, people will believe all kinds of things that aren't true, despite all evidence to the contrary. For some, it seems the more outlandish the claim, the more dedicated they are not only to believing it but to spreading it. Nowhere is this gullibility better evidenced than in the online proliferation of dezinformatsiya (disinformation), also known as "Fake News," by Russian provocateurs seeking to deepen social divisions in the United States before, during, and after the 2016 American presidential election. Fortunately for our republic, this type of enemy propaganda can be detected by special types of artificial intelligence (AI) that can analyze text, vet sources, and otherwise probe the articles, memes, claims, and other sources of boldfaced BS that people consume over the course of their participation in the 24/7 news cycle.


Now, a group of researchers at the University of Michigan has designed an algorithm that not only detects fake news but does so better than humans do. The researchers, led by U-Michigan engineering professor Rada Mihalcea, used a linguistic analysis approach to train their AI to detect disinformation. Training samples were created by writers who took real news stories and produced fake versions of them, leaving some of the text intact while inventing facts and attempting to mimic the voice of the original article; these were then fed to the AI along with real, unchanged news stories. Mihalcea's team found that their algorithm correctly identified fake news stories 76% of the time, compared with a human rate of 70%. Their results are described in a paper entitled "Automatic Detection of Fake News," which will be presented at the 27th International Conference on Computational Linguistics, to be held in Santa Fe, New Mexico in August of this year.
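For readers curious what "training an AI on linguistic analysis" looks like in practice, here is a minimal, hypothetical sketch in Python. It is not the Michigan team's code: it stands in a generic TF-IDF plus logistic regression pipeline from scikit-learn for the richer linguistic features a published detector would use, and the example articles and labels are invented.

```python
# Minimal sketch of a text-based fake-news classifier.
# NOT the Michigan team's implementation; it only illustrates the general idea
# of training a classifier on paired real/fabricated articles.
# Assumes scikit-learn is installed; the toy articles below are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: real stories paired with fabricated rewrites (label 1 = fake).
articles = [
    "The city council approved the transit budget after a public hearing.",
    "The city council secretly approved a budget to microchip all commuters.",
    "Researchers published a peer-reviewed study on regional crop yields.",
    "Researchers admit crops are being destroyed by a secret weather weapon.",
]
labels = [0, 1, 0, 1]

# Word-level TF-IDF stands in for richer linguistic cues (syntax, readability,
# punctuation) that research systems typically extract.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(articles, labels)

# Score an unseen claim: the output is the model's estimated probability that it is fake.
claim = ["Officials unveil plan to replace all teachers with robots overnight."]
print(model.predict_proba(claim)[0][1])
```

A real system trained this way would, of course, need thousands of labeled examples and more carefully engineered features, but the basic loop of "collect real and fabricated stories, extract textual features, fit a classifier, score new articles" is the same.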


It could be argued that volunteers asked to judge the veracity of a sample text in an experimental setting would likely perform better than people do in the wild, skewing the human baseline high regardless of their biases. But whatever humans' actual ability to tell reality from fantasy on social media, the importance of the Michigan research shouldn't be underestimated. "You can imagine any number of applications for this on the front or back end of a news or social media site," Mihalcea said in an interview with U-Michigan's news site. "It could provide users with an estimate of the trustworthiness of individual stories or a whole news site. Or it could be a first line of defense on the back end of a news site, flagging suspicious stories for further review. A 76 percent success rate leaves a fairly large margin of error, but it can still provide valuable insight when it's used alongside humans."


Perhaps one day we'll be able to use these algorithms to choose presidential candidates.