“Deepfakes” are here, now what?

In a 2018 video, Barack Obama looked into the camera and warned:

“We’re entering an era in which our enemies can make it look like anyone is saying anything, at any point in time. Even if they would never say those things.”

The video looks and sounds like Obama. But Obama never said those words.

The video is actually a deepfake: a photo, video or audio clip manipulated using AI to depict a person saying something that they have never said, or doing something they have never done.

The Obama deepfake was a project by filmmaker Jordan Peele and BuzzFeed CEO Jonah Peretti, intended to warn the public about misinformation online. Using free tools (and the help of editing experts), they superimposed Peele’s voice and mouth over an existing video of Obama.

This kind of technology has long been available to Hollywood filmmakers. But in the last two years, it has taken a giant leap forward in accessibility and sophistication.

Deepfakes gained mass notoriety in 2018, with a wave of manipulated videos that used AI to put celebrities’ faces onto porn actors’ bodies. The term deepfake itself comes from the handle of a Reddit user — Deepfakes — who made these kinds of videos and started the /r/deepfakes subreddit to share them.

The rise of deepfake porn prompted decisive responses from some platforms, several of which classified it as non-consensual pornography. The /r/deepfakes subreddit was banned in February 2018 for this reason.

But the name deepfake stuck, likely because it makes intuitive sense: ‘deep’ refers to the ‘deep learning’ techniques used to create the media, and ‘fake’ to its artificial nature.

The technology is not only getting more accessible; its applications are also expanding in multiple directions, including full-body deepfakes, real-time impersonations, and the seamless removal of elements from videos. Concern is growing worldwide about the negative impacts that deepfakes could have on individuals, communities, and democracies.

The potential for harm is real. But Sam Gregory, Programme Director at the human rights organization WITNESS, says that instead of letting fear paralyze us, we need to focus on finding solutions. He published an extensive survey of solutions to malicious usage of deepfakes and synthetic media, based on conversations with experts in the field.

In the category of technical solutions, many platforms, researchers and startups are exploring the use of AI to detect and eliminate deepfakes. There are also innovations in video forensics aimed at tracking the authenticity and provenance of images and videos, such as ProofMode and TruePic, tools that help journalists and individuals validate and self-authenticate media.
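To make the provenance idea concrete, here is a minimal sketch of self-authenticating media in Python. This is an illustration of the general technique (hash the media at capture time, sign the hash, verify later), not the actual implementation or API of ProofMode or TruePic; the key and function names are assumptions for the example.

```python
import hashlib
import hmac

# Hypothetical per-device signing key -- real provenance tools use
# hardware-backed keys and richer metadata, not a shared constant.
SECRET_KEY = b"device-secret-key"

def sign_media(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Return an HMAC-SHA256 signature over the media's SHA-256 digest."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Check that the media still matches the signature made at capture time."""
    expected = sign_media(media_bytes, key)
    return hmac.compare_digest(expected, signature)

original = b"\x89PNG...raw image bytes..."
sig = sign_media(original)

print(verify_media(original, sig))            # True: untouched media
print(verify_media(original + b"edit", sig))  # False: any alteration breaks it
```

The point of the sketch is Gregory’s own caveat in miniature: verification only works for people who hold the key and captured the media through the tool, which is exactly why he warns about who gets excluded if such tools become obligatory.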

While Gregory believes technical solutions are important, he says that they can’t solve the problem alone. “It is vital to ask what communities might be excluded from technical solutions, and who has control over the data,” he says. “If tools for tracking provenance become obligatory, they could be weaponized against individuals who can’t access them or choose to remain anonymous.”

Digital literacy is a critical solution that Gregory says is underexplored: “How do you get people to ask questions when an image looks flawless?” He says it’s especially pressing to upskill people who work with vulnerable groups and whose work could be negatively affected by deepfake technology, such as journalists and human rights advocates.

Many governments are grappling with how best to deal with online misinformation. But some activists and scholars caution against an outright ban of deepfake technology. They worry that if a law gives government officials the power to decide what is true or false, there is a risk that it might be used to censor unpopular or dissenting views.

Gregory also says civil society should develop a position on what role commercial platforms should play. “In many ways, platforms have the largest opportunity to detect deepfakes because they will have the largest body of training data. We should be clear now as civil society about what we want them to detect, and how we want them to inform the public, governments and key watchdog institutions.”

Overall, Gregory cautions us to acknowledge the risks but resist the hype.

“It’s good to not be apocalyptic about it, but to use this moment to have a rational discussion,” he says. “The greatest harm of deepfakes may be to make people question everything.”

How can deepfakes be prevented from eroding trust online?

  1. Media

    “The greatest harm of deepfakes may be to make people question everything.”
    I think this is so important, and feeds into a conversation we've been having for a few years now on information silos and echo chambers. People are, unwittingly, creating spaces where they only see affirming news and content that is tailored to their worldview. This happens everywhere.
    In a world where we are faced now more than ever with the power of fake news, what happens when these silos become filled with deepfakes? With messages that don't even come from within the silo/echo chamber?
    There's an interesting (and disturbing) conundrum here. We worry about hacking systems and grids, of course. But all it takes to hack a human being is to elicit a little emotional response, a little anger or shock. Deepfakes make this seem even more complicated...
