“Deepfakes” are here, now what?

In a 2018 video, Barack Obama looked into the camera and warned:

“We’re entering an era in which our enemies can make it look like anyone is saying anything, at any point in time. Even if they would never say those things.”

The video looks and sounds like Obama. But Obama never said those words.

The video is actually a deepfake: a photo, video or audio clip manipulated using AI to depict a person saying something that they have never said, or doing something they have never done.

The Obama deepfake was a project by filmmaker Jordan Peele and BuzzFeed CEO Jonah Peretti, intended to warn the public about misinformation online. Using free tools (and the help of editing experts), they superimposed Peele’s voice and mouth over an existing video of Obama.

This kind of technology has long been available to Hollywood filmmakers. But in the last two years, it has taken a giant leap forward in accessibility and sophistication.

Deepfakes gained mass notoriety in 2018, with a wave of manipulated videos that used AI to put celebrities’ faces onto porn actors’ bodies. The term deepfake itself comes from the handle of a Reddit user — Deepfakes — who made these kinds of videos and started the /r/deepfakes subreddit to share them.

The rise of deepfake porn prompted decisive responses from some platforms, several of which classified it as non-consensual pornography. The /r/deepfakes subreddit was banned in February 2018 for this reason.

But the name deepfake stuck, possibly because it makes intuitive sense: ‘deep’ refers to the ‘deep learning’ techniques used to create the media, and ‘fake’ to its artificial nature.

The technology is not only becoming more accessible; its applications are also expanding in multiple directions, including full-body deepfakes, real-time impersonations, and the seamless removal of elements from videos. Concern is growing worldwide about the negative impacts deepfakes could have on individuals, communities, and democracies.

The potential for harm is real. But Sam Gregory, Program Director at the human rights organization WITNESS, says that instead of letting fear paralyze us, we need to focus on finding solutions. He published an extensive survey of solutions to the malicious use of deepfakes and synthetic media, based on conversations with experts in the field.

In the category of technical solutions, many platforms, researchers and startups are exploring the use of AI to detect and eliminate deepfakes. There are also innovations in media forensics that aim to improve our ability to track the authenticity and provenance of images and videos. Tools such as ProofMode and TruePic, for example, aim to help journalists and individuals validate and self-authenticate the media they capture.
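To make the provenance idea concrete, here is a minimal sketch of capture-time signing, the general principle such tools build on. It is not how ProofMode or TruePic are actually implemented, and the file name, key handling, and function names are illustrative: hash the file the moment it is captured, sign the hash with a device key, and let anyone holding the matching public key confirm later that the file is byte-for-byte unchanged.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256(path):
    """Fingerprint a media file in chunks, so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Stand-in for a freshly captured video file.
with open("interview.mp4", "wb") as f:
    f.write(b"...raw video bytes...")

# At capture time: the camera app signs the file's fingerprint with a device key.
device_key = Ed25519PrivateKey.generate()
proof = device_key.sign(sha256("interview.mp4"))

# Later: anyone with the device's public key can check the file is unmodified.
def is_authentic(path, proof, public_key):
    try:
        public_key.verify(proof, sha256(path))
        return True
    except InvalidSignature:
        return False

print(is_authentic("interview.mp4", proof, device_key.public_key()))  # True
```

Even a sketch this small shows why a single altered frame breaks verification: any change to the bytes changes the hash, so the original signature no longer matches.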

While Gregory believes technical solutions are important, he says that they can’t solve the problem alone. “It is vital to ask what communities might be excluded from technical solutions, and who has control over the data,” he says. “If tools for tracking provenance become obligatory, they could be weaponized against individuals who can’t access them or choose to remain anonymous.”

Digital literacy is a critical solution that Gregory says is underexplored: “How do you get people to ask questions when an image looks flawless?” He says it is especially pressing to upskill people who work with vulnerable groups or whose work could be negatively affected by deepfake technology, such as journalists and human rights advocates.

Many governments are grappling with how best to deal with online misinformation. But some activists and scholars caution against an outright ban of deepfake technology. They worry that if a law gives government officials the power to decide what is true or false, there is a risk that it might be used to censor unpopular or dissenting views.

Gregory also says civil society should develop a position on what role commercial platforms should play. “In many ways, platforms have the largest opportunity to detect deepfakes because they will have the largest body of training data. We should be clear now as civil society about what we want them to detect, and how we want them to inform the public, governments and key watchdog institutions.”

Overall, Gregory cautions us to acknowledge the risks but resist the hype.

“It’s good to not be apocalyptic about it, but to use this moment to have a rational discussion,” he says. “The greatest harm of deepfakes may be to make people question everything.”

How can deepfakes be prevented from eroding trust online?

  1. Anonymous

    When you cannot trust what is online, it is time to return to weekly print news magazines and newspapers. They are traceable to their source.

  2. Anonymous

    Believe nothing, trust no-one under any circumstances. It's the only foolproof way to exist in a technological age.

  3. Anonymous

    Put the original on a blockchain like Theta; then anyone could look up the original and verify or nullify it. [A sketch of this idea follows the comments.]

  4. vivian

    Recently I saw a lovely YouTube video where the president, Kim Jong-un, Putin, Justin Trudeau, and other world leaders all sing "Imagine" by John Lennon. It was a lovely message many people would like our world leaders to be sending. This was a great example of deepfake imagery.

    Like all things related to the media, any solution risks being a double-edged sword. We already have the start of an 'information crisis', brought to life during the current presidency, where anything he doesn't like is dismissed as 'fake news'. Meanwhile, those who support him are fed information that confirms their views, so they can write off any other point of view as part of a conspiracy. And, truly, it is very difficult to know, on either side of the table, whether the information we have is correct.

    It seems that a cross-platform series of programs needs to be implemented, but not controlled by developers such as Microsoft; rather, something more like an antivirus program that can independently analyze content for indicators of deepfakes. Yes, simple to say, difficult to do.

    In the end, employing these strategies has to be at the discretion of the individual, not controlled by political, corporate or economic powers.

  5. Anonymous

    Actually, I love what these algorithms could do, once sophisticated, to the global media empires and the superpower governments that rule the world today. The end of their more than 100-year hegemony over mankind is close at hand. Finally people will stop following media manipulations, because they will know it can all be lies; they will believe only what they see for themselves or hear from their close relatives, like 200 years ago, and all the media will be kicked back to the place it belongs: entertainment, not governance and reality-replacement. Terrorists, who today work in close liaison with the media, will lose much of their power, because no one will believe anymore that an act of terrorism has happened; everyone will know it can easily be faked.

    These algorithms can have only fringe influence over business affairs, because business is not about power; it is about work and product, and that is made by the people for the people, not for a few usurpers pretending to be people-chosen rulers. The same goes for people's security: police and military use their own dedicated means of communication, and whatever the public media says or shows cannot overpower their private sources of information. These algorithms also have nothing to do with encrypted communications, which will remain as reliable as before. It is only about no longer believing any image on the screen or any voice from the speakers. So I hail these inventions. By the way, it has always been easy to fake text information, and still we know how to separate trustworthy sources from pure fiction. How is that?

  6. Anonymous

    I agree with Anonymous in questioning whether software can be used to detect faked audio and video. Can video and audio editing software leave a detectable audit trail which can be published with the media it was used on?

  7. Anonymous

    I've believed for decades that this would happen. I'd like to know if faked audio and video recordings are detectable with software tools. If so, how easily can that be done?

  8. Media

    “The greatest harm of deepfakes may be to make people question everything.”
    I think this is so important, and it feeds into a conversation we've been having for a few years now about information silos and echo chambers. People are, unwittingly, creating spaces where they only see affirming news and content tailored to their worldview. This happens everywhere.
    In a world where we are faced now more than ever with the power of fake news, what happens when these silos become filled with deepfakes? With messages that don't even come from within the silo/echo chamber?
    There's an interesting (and disturbing) conundrum here. We worry about hacking systems and grids, of course. But all it takes to hack a human being is to elicit a little emotional response, a little anger or shock. Deepfakes make this seem even more complicated...
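Comment 3's blockchain suggestion reduces to the same primitive as provenance tracking: an append-only public registry of fingerprints recorded when the original is published. The sketch below is a hypothetical illustration, not a real blockchain integration; a plain dictionary stands in for the ledger (Theta or any other chain would replace it), and it deliberately ignores open questions such as who may register media and what to conclude about files that were never registered.

```python
import hashlib

# Stand-in for an append-only public ledger: fingerprint -> publication record.
registry = {}

def register(data: bytes, record: str):
    """At publication time, record the original's fingerprint on the ledger."""
    registry[hashlib.sha256(data).hexdigest()] = record

def look_up(data: bytes) -> str:
    """Anyone holding a copy can recompute its fingerprint and check the ledger."""
    return registry.get(hashlib.sha256(data).hexdigest(),
                        "no registered original matches this file")

original = b"...original video bytes..."
register(original, "published by the original source, 2019-08-01")

print(look_up(original))                # matches the registered original
print(look_up(original + b"tampered"))  # any edit changes the fingerprint
```

The lookup can only ever say that a file matches a registered original; as several commenters note, the harder problem is what the absence of a match should mean for media that was never registered in the first place.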