
Deepfake Adult Content Is a Serious and Terrifying Issue


9m read
·Nov 4, 2024

As of 2019, 96% of deepfakes on the internet were sexual in nature, and virtually all of those depicted non-consenting women. With the release of AI tools like DALL-E and Midjourney, making these deepfakes has become easier than ever, and the repercussions for the women involved are far more devastating. Recently, a teacher in a small town in the United States was fired after her likeness appeared in an adult video. Parents of her students found the video and made it clear they didn't want this woman teaching their kids. She was immediately dismissed from her position, but she never actually filmed an explicit video. Generative AI created a likeness of her face and superimposed it onto the body of an adult film actress.

She pleaded her innocence, but the parents of the students couldn't wrap their heads around how a video like this could be faked. They refused to believe her, and honestly, it's hard to blame them. We've all seen just how good generative AI can be. This incident and many others just like it prove how dangerous AI adult content is, and if left unchecked, it could be so, so much worse. The truth is, the technology itself isn't the problem; it's the way people are using it and the lack of regulations surrounding its use.

Tech has given us amazing things, from the connectivity of social media to giving everyday people like you and me the ability to invest in art through the sponsor of today's video, Masterworks. Masterworks is an award-winning fintech company in New York City that allows everyday investors with little capital to invest like billionaires and reap the potential benefits. By allowing ordinary people to invest in shares of contemporary art from legends like Picasso, Basquiat, and Man Ray, Masterworks has sold over $45 million worth of artworks and distributed the net proceeds to investors.

Why invest in art, though? Art has outpaced the S&P 500 by a stunning 131 percent over the past 26 years. Even as the banking crisis continues, Masterworks has sold two more pieces in just the last month. Outlets like CNBC, CNN, and The New York Times have taken notice, and over 700,000 people have signed up so far. Demand is currently so high that art can sell out in minutes, but subscribers of this channel can claim a free, no-obligation account using the link in the description below.

Back to our story: at first glance, AI pornography might seem harmless. If we can generate other forms of content without human actors, why not this one? True, it might reduce work in the field, but it could also create far more troubling problems for the industry. If generative AI were used only to create entirely artificial people, it wouldn't be so bad. The problem is that it has mainly been used in deepfakes designed to convince viewers that the person they're watching is a specific real person, someone who never consented to be in the video.

Speaking of consent: by convincingly portraying women in suggestive situations, perpetrators depict sexual acts or behaviors without the victim's permission, and that, by definition, is sexual assault. But does using generative AI to produce these videos cause any actual harm? Beyond being defined as assault, being portrayed in these videos carries numerous consequences for the victims. This is what it looks like to see yourself naked against your will, spread all over the internet.

QTCinderella is a Twitch streamer who built a massive following for her gaming, baking, and lifestyle content. She also created the Streamer Awards to honor her fellow content creators, one of whom was Brandon Ewing, AKA Atrioc. In January 2023, Atrioc was live-streaming when his viewers spotted a tab open in his browser for a deepfake website. After the stream was screenshotted and posted on Reddit, users found that the site featured deepfake videos of streamers like QTCinderella performing explicit sexual acts. QTCinderella began getting harassed with these images and videos, and after seeing them, she said, “The amount of body dysmorphia I've experienced seeing those photos has ruined me.”

It's not as simple as just being violated; it's so much more than that. For months afterward, QTCinderella was constantly confronted with reminders of these images and videos. Some horrible people even sent the photos to her 17-year-old cousin. This isn't a one-off case; perpetrators of deepfakes are known to send these videos to victims' family members, especially if they don't like what the victim is doing publicly.

The founder of Not Your Porn, a group dedicated to removing non-consensual porn from the internet, was targeted by internet trolls using AI-generated videos depicting her in explicit acts. Then somebody sent these videos to her family members. Just imagine how terrible that must feel for her and her relatives. The sad truth is that even when a victim can discredit the videos, the harm may already be done. A deepfake can derail someone's career at a pivotal moment. QTCinderella was able to get back on her feet and retain her following, but the school teacher who lost her livelihood wasn't so lucky.

Imagine someone running for office and leading in the polls, only to be targeted with a deepfake video 24 hours before election night. Imagine how much damage could be done before their team could prove that the video was doctored. Unfortunately, there is very little legislation on deepfakes, and so far only three US states have passed laws to address them directly. Even with these laws, the technology makes it difficult to track down the people who create them. And because most of them post on their own websites rather than social media, there are no regulations or content-moderation limits on what they can share.

Since tracking and prosecuting the individuals who make this kind of content is so challenging, the onus should be on the companies that make these tools to prevent them from being used for harm. And in fairness, some of them are trying. Platforms like DALL-E and Midjourney have taken steps to prevent people from creating the likeness of a living person. Reddit is also working to improve its AI detection systems and has already made considerable strides in prohibiting this content on its platform.

These efforts are important, but I'm not sure they'll completely eliminate the threat of deepfakes. More generative AI tools are coming on the scene, and each will require new moderation efforts. Eventually, some of these platforms won't care, especially if that gives them an edge over well-established platforms. And then there's the sheer influx of uploaded content: in 2022, Pornhub received over 2 million video uploads, and that number will likely increase with new AI tools that can generate content without ever needing a physical camera.

How can any moderation system keep up with that insane volume? The worst thing about these deepfakes is that victims can't just log off the internet, either. Almost all of our livelihoods depend on the internet, so logging off would put them at an enormous disadvantage in their careers and personal lives. Expecting anyone to leave the internet to protect themselves isn't a reasonable ask. The onus isn't on the victim to change; it's on the platforms and the government to create tools that prevent these things from happening so easily. If all the women being harassed went offline, the trolls would win, and this tactic of theirs would be incredibly successful: they could effectively silence critics and whoever else they felt like attacking.
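No human team can review millions of uploads, which is why platforms lean on automated matching: once a victim reports an image, a perceptual hash of it can flag re-uploads even after resizing or re-encoding. The sketch below is a minimal illustration of that idea in Python; the tiny 64-pixel "images", the average-hash scheme, and the distance threshold are all invented for demonstration, and real systems (PhotoDNA, for instance) are proprietary and far more robust.

```python
# Minimal sketch of perceptual-hash matching against a blocklist of
# reported images. Illustrative only: real platforms use far more
# robust hashes and operate on actual image data.

def average_hash(pixels):
    """64-bit hash of a 64-pixel image: bit i is 1 if pixel i is above the mean."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_abuse(pixels, blocklist, threshold=6):
    """Flag an upload if its hash is within `threshold` bits of any reported hash."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= threshold for known in blocklist)

# A reported image and a slightly altered re-upload hash near-identically,
# so the copy is caught even though the raw bytes differ.
original = [i % 17 for i in range(64)]
reencoded = [p + (1 if i % 9 == 0 else 0) for i, p in enumerate(original)]
blocklist = {average_hash(original)}
print(is_known_abuse(reencoded, blocklist))  # True: the edit stays within the threshold
```

The design point is that exact-byte matching is trivially defeated by re-encoding, while a perceptual hash changes only a few bits under small edits, so a small Hamming-distance threshold catches near-duplicates at upload time.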

There's another problem with generative AI tools producing so much adult content: it introduces strong biases into the algorithms about how women should be presented. Many women have reported that they're often oversexualized when they try to create an image of themselves using AI tools. These biases come from the source of the AI's training data: the internet. Although nudes and explicit images have been filtered out of some generative AI platforms' training sets, the biases still persist. These platforms have to do more than just let the open internet train their AI if they want to keep the overt sexualization of women from being their default output.

Deepfakes may be making headlines now, but the truth is they've been around in spirit for a very long time. Before generative AI, people used tools like Photoshop and video editing software to superimpose celebrities' heads onto the bodies of adult film actors. Generally, these doctored videos weren't convincing, but things are very different now. With AI, we're creeping dangerously close to a point where we can no longer discern the real from the fake.

The French postmodern philosopher Jean Baudrillard warned of a moment when we can no longer distinguish between reality and simulation. Humans use technology to navigate a complex reality: we invented maps to guide us through an intricate mass of land, and eventually we created mass media to understand the world around us and simplify its complexity. But there will come a point where we lose track of reality: the point when we spend more time looking at a simulation of the world on our phones than participating in the real world around us. We're almost there now.

With generative AI, our connection to reality weakens even further. Because technology can convincingly replicate reality on our devices, we're less inclined to go outside and see what's real for ourselves. This inability to distinguish the real from the simulated is what Baudrillard called hyperreality, a state that leaves us vulnerable to malicious manipulation: from deepfakes getting people fired to propaganda leading to the loss of millions of lives.

You might remember that a couple of years ago, there were numerous PSAs, often from celebrities, warning us to keep an eye out for deepfakes. They were annoying, but they ultimately succeeded in making the public hyper-aware of fake videos. Not so with deepfake adult content, though. Maybe that's because the PSAs didn't mention pornography; they focused instead on fake speeches by presidents and famous people. Or maybe it's because those who consume this content don't care whether it's real or fake; they're okay with the illusion.

One thing is true, though: if the general public were trained to recognize deepfake pornography, the potential for harm would be limited. By being more critical as information consumers and reporting these harmful videos when we see them, we might be able to curb the effects of this dangerous new medium. It's not like we're strangers to being critical of what we see and read online. When Wikipedia was first introduced, the idea that it could be a legitimate source of information was laughable; it was mocked on sitcoms and late-night television as a symbol of the absurdity of believing what you read on the internet. That perception changed with time, deservedly so for Wikipedia, but for a while we kept a healthy skepticism towards user-generated internet platforms.

The question is: can we be critical and discerning towards deepfakes while acknowledging that some content is real? Will we lose track of what is simulation and what is reality and simply distrust whatever we see online? Or worse, will manipulators succeed in making deepfake-inflicted suffering an everyday occurrence that we accept as the cost of existing online? And is there any hope of regulation stopping the constant assault of generative AI on our well-being?

One thing has become clear since ChatGPT and DALL-E started making headlines last year: AI will inevitably replace a lot of what humans currently do. These tools can already convincingly replicate human communication and design, and our inability to distinguish between human output and AI output has created a laundry list of problems that will be challenging to address. Businesses are already using ChatGPT's writing capabilities in their marketing and sales departments, and it's even possible that we'll be watching AI-written TV shows soon. For writers, that's your dream job and your day job both vanishing overnight.

And then there are all the opportunities for deception using AI. Imagine the phishing scams we'll see in a year, when scammers can easily fabricate video and audio of anyone you know. People with ill intent can create content that causes others real harm, and right now that harm is AI-generated adult videos inflicting pain on women. If we're unable to tell what's real and what's an AI-generated fake, humanity has a tough road ahead, and I'm not sure any of us are ready for it.
