
The Incoming Wave of Deep Fakes | Chris Olson


7m read · Nov 7, 2024

So, you know, I've been talking to some lawmakers in Washington about such things, about the protection of digital identity. One of the notions I've been toying with—maybe you can tell me what you think about this—is that the production of a deep fake, the theft of someone's digital identity to be used to impersonate them, should be a crime that's equivalent in severity to kidnapping. That's what it looks like to me. You know, 'cause if I can use my daughter's voice in a real-time conversation to scam you out of your life savings, it's really not much different than me holding her at gunpoint and forcing her to do the same thing.

I don't know if you've given any consideration to the severity of the crime, or even its classification, but the theft of digital identity looks to me very much like kidnapping. What do you think about that?

Yeah, for me, I would simplify it a little bit. Using Section 230 or the First Amendment to claim that the use of our personal identity online is somehow protected, when it's being used to commit a crime, doesn't make sense. We want to simplify this first; we don't necessarily need a broad-based rule on identity before we simply state that if someone is using your identity to commit a crime, it's a crime, and it's going to be prosecuted if you're caught and detected.

This then goes back to actually catching and detecting it. Because that approach uses the pre-existing legal framework, it doesn't require much of a move. But I'm concerned that criminals will just be able to circumvent it as the technology develops. That's why I was thinking about something deeper and more universal. I know it's harder to implement legislatively, but that was the thinking behind it.

For us, there is a path that follows how that content is brought to the device. I think understanding that mechanism, how the content is brought forward, rather than looking at the content itself, is crucial. I'll give you an example of what's happening in political advertising as we speak.

Understanding the pathway for how that content is delivered is ultimately how we get back to the criminal or the entity that's using that to perpetrate the crime. The actual creation of the content is incredibly difficult to stop. It's when it moves out to our devices that it becomes something we need to really pay attention to.

In political advertising, up to October of this past year, our customers asked us to flag the presence of AI source code. The idea was that they didn't want to be caught holding the bag as the server of AI-generated political content, right? Because it just looks bad in the news: someone is letting advertisers use AI, and it's going to wind up being disinformation or some form of deepfake. By October, we had essentially stopped using that policy, because greater than 50% of the content we were scanning had some form of AI.

The AI may have been used just to make the sun a little more yellow or the ocean a little more blue. But when you use that as a flag to understand what's being delivered, once you get over 50%, you're looking at more than you're not looking at. That's not a good automated method for executing on digital safety.
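To put some numbers on that point, here is a minimal illustrative sketch in Python. It is not the actual scanning system described above; the `has_ai_markers` field and the 60% rate are invented for illustration. It shows why a binary "contains AI" flag stops working as an automated filter once it fires on the majority of scanned content:

```python
# Illustrative sketch, not the real scanning pipeline: why a binary
# "contains AI" flag loses value as an automated safety filter once
# most scanned content carries the marker.

def review_queue(items, flag):
    """Return the subset of items the automated flag sends to human review."""
    return [item for item in items if flag(item)]

# Hypothetical scan results: each item records whether any AI tooling was
# detected in its source, even for cosmetic edits like color grading.
scanned = [{"id": i, "has_ai_markers": i % 10 < 6} for i in range(1_000)]

flagged = review_queue(scanned, lambda item: item["has_ai_markers"])
rate = len(flagged) / len(scanned)

print(f"flag rate: {rate:.0%}")  # 60% in this synthetic example
if rate > 0.5:
    # The flag now fires on the majority class: you are reviewing more
    # content than you skip, so it no longer separates risky deepfakes
    # from routine cosmetic edits, and a different signal is needed.
    print("flag has lost its discriminating power as a safety filter")
```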

As we've moved forward, we have a reasonably sophisticated model to detect deepfakes. It's very much still in test mode, but it's starting to pay some dividends. Unquestionably, what we see is that using the idea of deepfakes to create fear is significantly more widespread than the actual use of deepfakes in political advertising.

We're not seeing a lot of deepfakes being served in information, and certainly not on the paid content side. But fear of what's being delivered to the consumer is becoming a mainstream conversation.

Well, wasn't there some insistence from the White House itself in the last couple of weeks that some of the claims the Republicans were making with regard to Biden were a consequence of deepfake audio? Not video, I don't think, but audio, if I've got that right. Does that story ring a bell?

I think where we are at this stage of the technology, it's very likely there is plenty of deepfake audio happening around the candidates, whether that's Donald Trump, Joe Biden, or even local political campaigns. It's really that straightforward.

I think on the video side, there are going to be people working on it left and right. But the idea of using deepfakes as a weapon to sow confusion and doubt among the populace is going to be dramatically more valuable than actually using them to move society.

Oh, so you do think that even if the technology develops to the point where it's easy to use, it'll be the weaponization of the doubt that's sown by the fact that such things exist?

We've been watching this for a very long time, and our perspective comes at it from digital crime and safety in content.

Safety in content typically means things like: don't run adult content in front of children; don't serve weapons content in New York State, because they're not going to like that; don't show a couple walking down the beach in Saudi Arabia, because their Ministry of Media is going to be very unhappy with the digital company bringing that kind of content in.

Safe content is in the eye of the beholder: drugs and alcohol, right, or targeting the wrong kinds of people. We look at this through the lens of how to find and remove things from the ecosystem. If we continue down the path we're on today, most people won't trust what they see. So while we're discussing education, people are going to evolve on their own to a point where so much of the information being fed to them is simply disbelieved, because it's going to be safer not to go down that path.

I'm wondering if live events, for example, are going to become extremely compelling and popular once again, because they'll be the only events you'll actually be able to trust.

I think it's also critical that we find a way to get a handle on the anti-news and get back to entities that promote trust in journalism. That is a very meaningful conversation, something we need to try to get back to.

It's much less expensive to use automation to create something that keeps people clicking. That's a terrible relationship with the digital ecosystem; it's not good for people to have that in their hands.

Given where digital crime is today, if you're a senior citizen, your relationship with the internet is often net negative. You may want to stick to calling your kids over Voice over IP, where you can see their faces; there are lots of ways to do that with video calling. But for other things on the internet, including something as simple as email, engaging may be more dangerous than any benefit you're going to get back.

As we move closer to that moment in time, this is where we all need to step up and focus on digital safety, focus on the consumer. I think corporations are going to have to engage on that.

Okay, let me ask you a question about that because one of the things I've been thinking about is that a big part of this problem is that way too much of what you can do on the net is free.

Now, the problem with free: let's take Twitter, for example. If it's free, then it's 20% psychopaths and 30% bots, because there's no barrier to entry. So maybe there's a rule like this: wherever the discourse is free, the psychopaths and the exploiters will eventually come to dominate, and maybe quite rapidly.

The psychopaths and the exploiters, because there's no barrier to entry and no consequence for misbehavior. We're putting together a social media platform at the moment as part of an online university, and our subscription price will be somewhere between $30 and $50 a month, which is not inexpensive, although compared to going to university it's virtually free.

We've been somewhat concerned about that, because it's comparatively expensive for a social media network. But possibly the advantage is that it will keep the criminal players to a minimum, because it seems to me that as you increase the cost of accessing people, you decrease criminals' ability to do the kind of low-cost, wide-net, multi-person targeting that costs them no money.

So do you have thoughts about the fact that so much of this online pathology is proliferating because when we have free access to a service, so to speak, the criminals also have free access to us? Am I barking up the wrong tree, or does that seem viable?

No, I think it's going to go two ways. One, you will find safety in how much money you spend, and that's already true. Where there are paywalls, even within large news sites, the deeper you go behind the paywall, the higher the cost to reach the consumer, not just for the consumer but for advertisers and other content producers as well.

And the higher that cost, the lower the criminal activity, because it's more expensive for them to do business. That is true; that's been true throughout.

I think the other requirement, because we're so acclimated to having free content, is that the entire supply chain is going to have to engage. Think through who is responsible for the last mile of content that reaches our devices inside our homes. Is it the big telcos? Is it the companies giving us Wi-Fi and bringing data into our houses? Right now they're holding their hands up and saying it's not our job to understand what happens to you on your device.

If anything, there's a data requirement that says we're not allowed to know, or not allowed to keep track of, where you go and what comes onto your device. But there's a big difference between monitoring where we go online and monitoring what is delivered onto our devices, and that distinction is missing from the conversation.

Privacy is critically important, and privacy is about how we engage in our activities on the internet. The other side of that is what happens after the data about us is collected, and that piece is not necessarily private. What is delivered to us should not be broadcast, but someone needs to understand, and have some control over, what is actually brought in based on the data that's collected.

That is a whole-of-society issue, meaning all of the companies, all of the entities that are part of this ultimate transaction to put that piece of content on our phones, our laptops, and our TVs need to get involved in better protecting people.
