What Is Deepfake Porn and Why Is It Thriving in the Age of AI?
In this Q&A, doctoral candidate Sophie Maddocks addresses the growing problem of image-based sexual abuse.
With rapid advances in AI, the public is increasingly aware that what you see on your screen may not be real. ChatGPT will write everything from a school essay to a silly poem. DALL-E can create images of people and places that don’t exist. Stable Diffusion or Midjourney can create a fake beer commercial — or even a pornographic video with the faces of real people who have never met.
So-called “deepfake porn” is becoming increasingly common, with deepfake creators taking paid requests for porn featuring a person of the buyer’s choice and a plethora of fake not-safe-for-work videos floating around sites dedicated to deepfakes.
The livestreaming site Twitch recently released a statement against deepfake porn after a slew of deepfakes targeting popular female Twitch streamers began to circulate. Last month, the FBI issued a warning about “online sextortion scams,” in which scammers use content from a victim’s social media to create deepfakes and then demand payment in order to not share them.
Annenberg School for Communication doctoral candidate Sophie Maddocks studies image-based sexual abuse, like leaked nude photos and videos and AI-generated porn.
In this Q&A, we talk to Maddocks about the rise of deepfake porn, who is being targeted, and how governments and companies are (or are not) addressing it.
What is deepfake porn? How popular is it?
Deepfakes are visual content created using AI technology, which anyone can access through apps and websites. The technology can use deep learning algorithms that are trained to remove clothes from images of women and replace them with images of naked body parts. Although they could also “strip” men, these algorithms are typically trained on images of women.
Deepfakes really caught the public’s attention in 2017. Two years later, in 2019, research from Deeptrace Labs found 14,678 deepfake videos online; 96% were pornographic, and at the time all of them featured women.
The rise of AI porn adds another layer of complexity to this, with new technologies like Stable Diffusion creating fake porn images. These synthetic sexual images are AI-generated, in that they do not depict real events, but the models that create them are trained on images of real people, many of which are shared non-consensually. In online spaces, it is difficult to disentangle consensually distributed images from non-consensually distributed ones.
Creating fake erotic images is not inherently bad; online spaces can be a great way to explore and enjoy your sexuality. However, when fake nude images of people are created and distributed without their consent, it becomes deeply harmful.
What level of technical knowledge is required to make these images?
Anyone can create their own deepfake porn images, regardless of their skill level, using websites with deepfake generators.
The media often uses the phrase “revenge porn.” Are these terms interchangeable?
“Revenge porn” is defined as the non-consensual creation or distribution of explicit images. Although journalists often use that term, it is universally rejected by survivors, activists, and other experts, who prefer terms such as “image-based sexual abuse.” When it is produced without the consent of the person depicted, deepfake porn is a form of image-based sexual abuse.
Are the majority of deepfakes made by people who know the person whose likeness they are using?
Many examples of deepfake porn target celebrities and women with high public profiles, presumably made by people who do not know them personally. We can see from the people they target that these pornographic deepfakes often seek to silence and shame women by spreading disinformation about them.
Are there any patterns in the demographics of who is targeted?
Broadly speaking, minoritized women and femmes are more likely to experience image-based sexual abuse, as are single people and adolescents. LGBTQ populations are also at increased risk of harassment. More research is needed to understand how this harm affects other minority groups, including trans people and sex workers, who anecdotally appear to be at increased risk.
Have any artificial intelligence companies addressed their role in deepfake creation?
Stable Diffusion, an AI text-to-image model, has made generating AI porn more difficult through an update that attempts to censor nudity. In 2018, Reddit banned certain groups’ not-safe-for-work (NSFW) use of AI to generate porn.
Are there any laws regarding fake porn in the United States (or elsewhere)?
In the UK, a law has been passed that covers sharing (not creating) deepfake porn. In the US, only four states have deepfake laws — New York, Virginia, Georgia, and California. Some laws addressing image-based sexual abuse are expansive enough to include deepfake porn.
Are researchers or activists proposing ways to combat deepfake porn?
It is difficult to envisage solutions that address deepfake porn without challenging the broader cultural norms that fetishize women’s non-consent. The rise of online misogyny, in which some men perceive themselves as victims of increasing efforts toward gender equality, creates the conditions for deepfake porn to proliferate as a form of punishment targeting women who speak out.
How do you see your research on this topic evolving in the future?
I’m increasingly concerned with how the threat of being “exposed” through image-based sexual abuse is affecting adolescent girls’ and femmes’ daily interactions online. I am eager to understand the impacts of the near-constant state of potential exposure that many adolescents find themselves in.