OpenAI's Text Generator Writes So Well That Its Creators Won't Release It, Fearing a Flood of Disinformation

Tuesday, 19 February 2019 - 11:46AM
Composite adapted from Pixabay

OpenAI, the artificial intelligence research group co-founded by Elon Musk, whose website touts a vision of "discovering and enacting the path to safe artificial general intelligence," may have just made the world a little less safe by creating a text-generating system designed to write convincing prose that accurately mimics the style and voice of a written prompt. According to OpenAI, the system, named GPT-2, was trained on a dataset of 8 million webpages and "generates synthetic text samples in response to the model being primed with an arbitrary input. The model is chameleon-like - it adapts to the style and content of the conditioning text."
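
For readers who want to experiment, the smaller released model is the natural starting point. Below is a minimal sketch (ours, not OpenAI's) of priming it with an arbitrary prompt via the open-source Hugging Face transformers library; the prompt, output length, and sampling settings are illustrative assumptions, not OpenAI's exact setup:

```python
# A minimal sketch, not OpenAI's code: priming the smaller released
# GPT-2 model with an arbitrary prompt, using the open-source
# Hugging Face "transformers" library (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small public model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Miley Cyrus was caught shoplifting from Abercrombie and Fitch"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps generation varied while
# pruning low-probability tokens. Settings here are illustrative.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```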


Though the company posted examples using human-supplied text, there's nothing suggesting it wouldn't work with an AI-supplied sample. One wonders if, left to its own devices, GPT-2 couldn't write a novel... or if it would spin out of control like a Google AI dream. With those and other concerns in mind, OpenAI, which CNN notes usually makes its research open to the public, is opting to keep GPT-2 under lock and key. "Due to our concerns about malicious applications of the technology," reads a post preceding a sampling of GPT-2's work, "we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper." 


Addressing the potential hazards posed by technology like GPT-2, OpenAI's researchers advised exercising caution:

"Today, malicious actors - some of which are political in nature - have already begun to target the shared online commons, using things like 'robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed.' We should consider how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures."


Fortunately, the company reports that the system has certain limitations due to the samples used for its training. Even an AI can't possibly know everything about everything – at least not yet – so GPT-2 is prone to errors in developing text around topics for which it has inadequate information. When it does have adequate information, it can still take a few tries before it produces something coherent. "When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on)," the researchers write, "it seems to be capable of generating reasonable samples about 50% of the time." 
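
That workflow of taking "a few tries" amounts to sampling repeatedly from the same prompt and letting a human pick the keepers. A hypothetical helper along those lines (the function name, the n_tries default, and the sampling settings are our assumptions, reusing the model and tokenizer from the sketch above) might look like:

```python
# A hypothetical helper, not OpenAI's code: draw several candidate
# continuations from one prompt so a human can sift out the coherent
# ones, mirroring the multiple-tries workflow described above.
def sample_candidates(model, tokenizer, prompt, n_tries=25, max_length=150):
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    candidates = []
    for _ in range(n_tries):
        out = model.generate(
            input_ids,
            max_length=max_length,
            do_sample=True,
            top_k=40,
            pad_token_id=tokenizer.eos_token_id,
        )
        candidates.append(tokenizer.decode(out[0], skip_special_tokens=True))
    return candidates  # human review picks the "reasonable" samples
```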


When the AI is reasonable, its output is nothing short of amazing: crisp, flawless text that accurately mimics prompts across a wide variety of styles, ranging from fictional prose "inspired" by J.R.R. Tolkien to journalistic copy that wouldn't be out of place on Fox or CNN, were it not for its objectivity.


In one example, the human-supplied prompt "Miley Cyrus was caught shoplifting from Abercrombie and Fitch on Hollywood Boulevard today" generated an AI-imagined news report that began with these two AP-worthy sentences on its second try:


"The 19-year-old singer was caught on camera being escorted out of the store by security guards.

The singer was wearing a black hoodie with the label 'Blurred Lines' on the front and 'Fashion Police' on the back."


In a slightly more macabre example, GPT-2 generated an imaginary speech by John F. Kennedy, who, the system was told, was elected president after his brain was "rebuilt from his remains and installed in the control center of a state-of-the-art humanoid robot." After 25 tries, the system came up with a populist polemic that could easily have been torn from the pages of a history book or a White House transcript. It's as genuinely meaningless as any speech given by a politician in the last 50 years, dripping with the zealous language of American exceptionalism and political mysticism. In short, it's perfect for the 21st century:


"It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams. In the months and years to come, there will be many battles in which we will have to be strong and we must give all of our energy, not to repel invaders, but rather to resist aggression and to win the freedom and the equality for all of our people. The destiny of the human race hangs in the balance; we cannot afford for it to slip away. Now and in the years to come, the challenge before us is to work out how we achieve our ultimate destiny. If we fail to do so, we are doomed."


As news of OpenAI's breakthrough made its way across the Internet, another detail surfaced: a tweet by Elon Musk indicating his distance from the company. "I've not been involved closely with OpenAI for over a year & don't have mgmt or board oversight," Musk tweeted, later adding, "Also, Tesla was competing for some of same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms."


As of press time, Musk was still listed on OpenAI's "about" page.
