An AI that was deemed “too dangerous” to be released to the world has been rolled out.
GPT-2 is an AI that can take text in any style and create convincing new material.
For example, when it was fed George Orwell’s classic novel 1984 it produced a plausible science fiction story set in China.
While makers OpenAI, which was co-founded by Elon Musk, have released all of their other algorithms for public use, they said they would never release GPT-2 because of the danger of it being misused.
It would be too easy for it to completely flood the internet with “fake news” content.
It would be impossible for anyone to tell what was real and what was fake any more.
Researcher Jeremy Howard says that if the code was released it could “totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter”.
OpenAI agreed with him. They said in February: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
But they’ve now gone back on that decision and released a much more powerful version of the code.
However, they say this model of the AI is only slightly more convincing to humans than the previous version, even though it has already done things they never actually taught it to do, such as translating from one language to another.
They believe that it’s unlikely to be used by email fraudsters and other low-tier criminals.
Nevertheless, they admit that the algorithm’s output could be useful to people trying to spread white supremacy, Marxism, jihadist Islamism, and anarchism online.
OpenAI have also developed a tool that can spot AI-generated text 95% of the time. But they say it should be used in conjunction with basic human judgment and public education.
By Michael Moran