How Advanced AI Chatbots Will Become the Next Propaganda Threat

Thursday, 18 October 2018 - 12:25PM
Composite from Pexels.
Artificial intelligence is getting better at driving cars and recognizing photos, but one thing AI still struggles with is being a convincing conversationalist. Chatbots remain a clunky novelty to most people, though some have become convincing enough to be asked out on dates. As AI learns to mimic and respond to human speech, however, its applications will reach far beyond digital assistants: according to the Computational Propaganda Project, chatbots are going to be recruited to spread propaganda.

Last August, a study confirmed that a network of automated Twitter accounts run by Russian agents was behind a slew of anti-vaccination messaging on social media, though decreasing vaccinations may not have been their ultimate goal. Instead, the network may have been an exercise in sowing distraction, distrust, and dissent among the public. The Twitter 'bots' (which were more or less megaphones for pre-programmed messages) couldn't hold up to much scrutiny, since their Tweet history quickly showed that all their activity was bent toward spewing out the same kinds of messages. Still, their ability to like, retweet, and reply to others' Tweets meant they could abuse Twitter's algorithm to boost posts that served their purposes, giving the illusion that niche causes (like the anti-vaccination movement) had a bigger following than they really did.
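To see why that kind of account falls apart under scrutiny, here is a rough sketch of the idea (not taken from the study; the similarity measure, the threshold, and the example posts are invented for illustration): an account whose history is essentially the same message repeated over and over can be flagged with a simple text-similarity check.

```python
# Illustrative sketch only: flag accounts whose recent posts are near-duplicates,
# the telltale sign of a pre-programmed "megaphone" account described above.
# The similarity measure, threshold, and sample posts are arbitrary choices.

from difflib import SequenceMatcher
from itertools import combinations


def repetitiveness(posts: list[str]) -> float:
    """Average pairwise text similarity (0 = all different, 1 = identical)."""
    if len(posts) < 2:
        return 0.0
    scores = [
        SequenceMatcher(None, a.lower(), b.lower()).ratio()
        for a, b in combinations(posts, 2)
    ]
    return sum(scores) / len(scores)


def looks_like_megaphone(posts: list[str], threshold: float = 0.8) -> bool:
    """Flag an account whose history is little more than one message repeated."""
    return repetitiveness(posts) >= threshold


if __name__ == "__main__":
    history = [
        "The flu shot is a government plot! Wake up! #truth",
        "The flu shot is a government plot!! Wake up #truth",
        "The flu shot is a government plot. Wake up, people! #truth",
    ]
    print(looks_like_megaphone(history))  # True: essentially one message repeated
```

Real bot-hunting tools weigh many more signals (posting cadence, account age, shared infrastructure), but the point stands: a scripted account leaves an obvious statistical fingerprint, which is exactly what more conversational bots would erase.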

According to Lisa-Maria Neudert at Oxford University's Computational Propaganda Project, that's only the beginning. Savvy propagandists may soon be able to train AI with the same tools developers use now, including APIs run by companies like Google. "In a few years, conversational bots might seek out susceptible users and approach them over private chat channels," Neudert wrote in Technology Review this past August. "They'll eloquently navigate conversations and analyze a user's data to deliver customized propaganda. Bots will point people toward extremist viewpoints and counter arguments in a conversational manner."

In time, Neudert says, automated social media bots will go beyond just fooling people with natural speech: "Rather than broadcasting propaganda to everyone, these bots will direct their activity at influential people or political dissidents. They'll attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive."

None of these tactics are new, but taking humans out of the loop shows how AI could start ruining social media platforms by destroying everyone's credibility. Soon, bots may talk and act just like humans, and not even the Turing test will be able to weed them out.
