
Deborah Johnson, professor emerita at the University of Virginia, discusses the danger deepfakes pose to institutions and how society can defend against disinformation


A video circulated around the internet showing former President Barack Obama calling President Donald Trump a “dipshit” and making references to the movies “Black Panther” and “Get Out.”

“Now, you see, I would never say these things. At least not in a public address, but someone else would. Someone like Jordan Peele,” Obama appeared to say in the video. “This is a dangerous time. Moving forward, we need to be more vigilant with what we trust from the internet.”

This video was a “deepfake,” a video created using synthetic media technologies, a type of artificial intelligence, to show people saying and doing things they never did. The video of Obama was created, said Deborah Johnson, professor emerita at the University of Virginia, by taking a picture of the former president and superimposing it on a video of Peele talking. The AI then made Obama’s face match Peele’s movements, making it look like Obama was talking.
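For readers curious about the mechanics Johnson describes, the following is a deliberately simplified sketch of the face-superimposition idea, not an actual deepfake pipeline. Real deepfakes rely on deep generative models trained on many frames of footage; this example only detects a face in a still photo and pastes it over the face in a single video frame using OpenCV. The file names are hypothetical placeholders.

```python
# A simplified illustration of face superimposition, assuming opencv-python
# is installed. Real deepfake systems use trained generative models to match
# expressions and lighting; this sketch only detects and pastes a rectangular
# face region. File names below are hypothetical.
import cv2

# Haar cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return the (x, y, w, h) box of the first detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

source = cv2.imread("target_photo.jpg")   # still photo of the face to insert
frame = cv2.imread("driving_frame.jpg")   # one frame of the "driving" video

src, dst = first_face(source), first_face(frame)
if src is not None and dst is not None:
    sx, sy, sw, sh = src
    dx, dy, dw, dh = dst
    # Resize the source face to the destination region and paste it in.
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    frame[dy:dy + dh, dx:dx + dw] = face
    cv2.imwrite("composite_frame.jpg", frame)
```

A real system would repeat this for every frame and use a neural network to blend expression, pose and lighting, which is what makes convincing deepfakes possible and so hard to spot.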

Johnson recently retired as the Anne Shirley Carter Olsson Professor of Applied Ethics in the University of Virginia’s Department of Engineering and Society. She authored one of the first textbooks on computer ethics in 1985. She talked at 10:45 a.m. EDT Thursday, July 23, on the CHQ Assembly Video Platform as the fourth part of Week Four’s theme, “The Ethics of Tech: Scientific, Corporate and Personal Responsibility.” Johnson discussed multiple examples of deepfakes, the potential harm they can cause to individuals and institutions, and their applications in entertainment and education, as well as how society must take action to defend against this kind of disinformation.

Johnson is a philosopher who examines the ethical implications of digital technologies. She went into the field because she “saw that they were having this enormous effect on our world.”

An example she shared was a video in which Adele’s face was doctored by AI to make it appear she was talking about other deepfakes, such as ones impersonating Kim Kardashian and Arnold Schwarzenegger. The video ended with her saying she was not Adele, but misinformation expert Claire Wardle. Another example of a deepfake was of Mark Zuckerberg, CEO of Facebook, talking about how one man can control billions of people’s data.

Johnson said that the most frequent use of deepfakes is in pornography, mainly taking the face of a female celebrity and superimposing it on a pornographic video. She also said that the technology is available online, making the process easy enough that a person does not have to be an expert to create a deepfake.

Deepfakes are not solely used for misinformation and pornography, Johnson said, but also for entertainment and education. An example of a deepfake used to entertain is a video of actor Burt Reynolds as James Bond. Reynolds was considered for the role in the 1970s, and the creator of the deepfake wanted to see what that scenario would have looked like.

Johnson said a handful of museums are also using deepfakes to recreate historic events and to show videos of dead artists talking about their work. One possibility for deepfakes, she said, is translating speeches by political leaders into multiple languages while making it seem like the person is speaking in the listener’s language.

“Traditionally, we have relied upon our senses, especially hearing and seeing, to tell us what’s real and true,” Johnson said. “If I see someone directly doing something or saying something, I believe that the person said that and did that.”

Some people may argue that technology has mediated information for hundreds of years, with inventions like the printing press, photographs and radio, but Johnson said deepfakes provide “a much greater opportunity for mischief.”

“I think the most worrisome thing about deepfakes is this idea that’s referred to as amplification,” Johnson said. “It’s the idea that you’ve got this deepfake that can spread across the globe very quickly and very broadly. So it has much more power than a single person telling you a lie.”

She said there are three categories of harm caused by deepfakes: harm to viewers, harm to reputations and harm to institutions.

When people make deepfakes and generate other fake news, Johnson said they are intentionally misleading the public for their individual goals.

“When someone gives us information that’s false, they’re undermining that autonomy; in a kind of classic term, they’re using us as a means to their end,” Johnson said. “They’re manipulating us to do their work and not allowing us to think the way we think and vote the way we want to vote.”

Reputation is especially important in elections, when Johnson said “you win or lose because of your reputation.” She said that while deepfakes may seem like a clear case of defamation, courts do not usually entertain defamation claims around political speech, because they do not want to interfere with the election process.

“They have this rule about not interfering with (political) speech, not because they think it’s not harmful — they know it is harmful. But they are worried about how to do it,” Johnson said. “They’re worried that it would be too hard and too political to try to draw the line between what is considered a lie and what is considered the truth.”

Deepfakes and other forms of disinformation harm institutions such as the electoral process, and Johnson said if people do not trust the process, they do not trust the outcome. 

“It hurts everyone. It hurts the winners as well as the losers,” Johnson said.

She said remedies to the problems caused by deepfakes exist, such as educating the public on media literacy and teaching people how to spot videos that spread disinformation. Two other strategies are protecting individuals targeted by deepfakes, such as dedicating part of a political staff to keeping track of information on the internet, and creating technology that can determine whether something is true or false.

The last way of defending against deepfakes is stopping them from spreading. California, for example, bans the use of deepfakes in elections, but Johnson said she is not sure how much the ban is enforced, and some people may believe banning deepfakes interferes with free speech. An alternative, Johnson said, is for social media sites to label such videos as questionable or as not containing valid information.

“We have a set of technologies that are capable of a good deal of mischief, to put it mildly,” Johnson said. “And not only should we be aware of them, we should all be trying to figure out how to get the benefits of these synthetic media technologies, without letting the technology undermine the integrity of our aural and visual experience.”

The lecture then transitioned to a Q-and-A session with Emily Morris, vice president of marketing and communications and chief brand officer at Chautauqua Institution. Morris asked how quickly detection technology is developing compared to deepfake technology.

Johnson said the pace is hard to evaluate, but detection technology is advancing. She said another detection strategy is having the creators of the technology behind deepfakes tell the public where to look for flaws in a video.

“I always think about this as kind of an arms race. You have the new technology and they counter,” Johnson said. “Then they change the way they do it and then you have to get a new counter.”

Morris asked what’s next on Johnson’s agenda.

Johnson said she has recently started working on accountability in AI, noting that even the creators of an artificial intelligence sometimes do not understand how the algorithm arrived at a decision.

“Who is then responsible for decisions that are made by artificial intelligence? I’ve always, for a long time, thought we’re being set up to accept that nobody is responsible, which I don’t accept,” Johnson said. “I think it’s the people who design the artificial intelligence systems and test them, and then put them out there for use, knowing that they don’t really understand how they work.”

Tags: AI, Anne Shirley Carter Olsson Professor of Applied Ethics, artificial intelligence, Chautauqua Institution, Chautauqua Lecture Series, Deborah Johnson, deepfakes, University of Virginia

The author Nick Danlag

This is Nick Danlag’s second season at the Daily reporting the morning lecture recap. He worked remotely last year but loved waking up each day in Las Vegas to learn more about Chautauqua through his reporting. From Mount Laurel, New Jersey, Nick earned a creative writing degree from Eckerd College in St. Petersburg, Florida. As editor-in-chief of his student newspaper, The Current, he loved helping the staff develop their voices.

1 Comment

  1. This is one way Chautauqua is at its best: bring up a troubling situation, have a panel of people who are deeply involved with understanding and responding to it constructively, and then invite people of all backgrounds to probe it, express their views and, at the end, celebrate the depth and sincerity of what came out of the experience. Deborah Johnson’s presentation and dialogue are an excellent example of that process. Well done all!
