Forget fake news for a moment.
What’s replacing it? Artificial intelligence can now generate a convincing video of a celebrity or public figure. The most notorious of these, known as deepfakes, superimpose a celebrity into an adult movie. A programmer gathers existing video and audio of a known figure; then the AI takes over and creates a brand-new version.
The same techniques could soon produce fake videos showing President Trump speaking at an event, a world leader declaring war, or a politician making false claims.
In one recent example, Alec Baldwin’s Trump impersonation on Saturday Night Live was processed with machine learning to produce a version that appears to show the real President Trump making the same quips. It’s not quite convincing yet, but you can see how it might evolve.
Last summer, a team of researchers at the University of Washington showed how AI could create a lifelike digital avatar of President Obama. They trained on 14 hours of footage to create the new video, mostly by matching the president’s mouth movements to new audio.
“It’s difficult to assess the national security risk or potential for disruption that is presented by the threat of AI-built fake videos,” says Michael Fauscette, chief research officer at G2 Crowd, a business software firm. According to Fauscette, fake videos will be used initially for coercion, public embarrassment, and for manipulating the voting public.
Andrew Keen, an entrepreneur and author of “How to Fix the Future,” says one of the scary things about AI-generated videos is that we won’t know the difference. They will look and sound authentic. In the example of President Obama, the average person would never know it’s fake. (At least with many fake news articles, it’s easier to sense when sources and facts seem invented.)
Fake videos will also be harder to verify, says Darren Campo, an adjunct professor at NYU’s Stern School of Business. “We’re already at a point where the content of certain video or streams is entirely controlled by programmers with political agendas,” he says. It will become harder and harder to use countermeasure AI routines to spot fake videos.
One shift, says Keen, is that major publishers and social media networks like Facebook and Twitter will be held more accountable for failing to vet fake videos. Facebook in particular, with its vast economic resources, could employ an army of AI specialists who run algorithms to verify a video. For example, if President Trump appears in a video announcing a state of emergency, AI routines could check the video against live sources, against other instances of the same video appearing on the Internet, and against official White House sites.
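The verification workflow described above can be sketched in a few lines of code. This is a minimal illustration, not any platform's actual system: the `fingerprint` and `verify_video` helpers are hypothetical, and a real pipeline would use perceptual hashing (which survives re-encoding) rather than an exact byte-level hash.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Hash the raw video data. A real system would use a perceptual
    hash, since re-encoding a video changes its exact bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

def verify_video(video_bytes: bytes,
                 official_fingerprints: set,
                 appears_on_official_site: bool) -> str:
    """Combine the checks described in the article into a simple verdict."""
    fp = fingerprint(video_bytes)
    if fp in official_fingerprints:
        return "verified"          # exact match to a known official release
    if appears_on_official_site:
        return "likely-authentic"  # corroborated by an official source
    return "unverified"            # no corroboration; flag for human review

# Example: a clip matching a known official release passes; an unknown
# clip with no corroboration is flagged.
official = {fingerprint(b"state-of-emergency-address")}
print(verify_video(b"state-of-emergency-address", official, False))  # verified
print(verify_video(b"unknown-clip", official, False))                # unverified
```

In practice the hard part is populating the set of trusted fingerprints and deciding how much weight to give each signal, which is why the article's experts expect armies of specialists rather than a single algorithm.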
Keen says there will be many thorny legal issues, since duped politicians and celebrities will hire video forensic experts to find out who created and hosted the AI-created videos.
Fortunately, all of the experts say that even as fake videos become more convincing, their impact will be limited.
“Large news organizations hold that editorial integrity and the public trust are key to maintaining their business over the long term,” says Campo. “Even if news organizations fail, we don’t know that circular reporting of fake news could incite, say, a nuclear war. Nuclear powers with state-controlled news such as China and North Korea have not initiated global catastrophes. This is evidence that it takes more than a news report to activate a nuclear deployment.”