How ChatGPT Is Used to Steal Millions
This video is sponsored by Aura. If a family member calls you from jail panicking and says they need you to wire them some money for legal fees, would you second-guess them and potentially make the situation worse, or would you send the money immediately? In March 2023, a Canadian couple was faced with this exact situation. They received a frantic phone call from their son, Benjamin, claiming he was in jail and needed money. The voice on the phone was unmistakably his and insisted that they send him twenty-one thousand dollars immediately. And so they did, because they loved their son.
But it wasn't their son on the phone; it was an audio clone of his voice created by cyber criminals using AI-powered tools. You used to need a lot of audio to clone someone's voice, but now all you need is a TikTok. There are entirely legal and barely regulated programs, like ElevenLabs, that use short vocal samples to create AI voices with the potential to scam people like Benjamin's parents. Whether it's audio, text, or even video deepfakes, AI-powered scams are becoming far more dangerous than we could have ever imagined.
To understand how these AI schemes work and what we can do about them, I spoke with cybersecurity expert Zulfikar Ramzan, Chief Scientist at Aura, the company that's trying to make the digital world as safe as it can be for its customers. "What we're dealing with now is a national crisis. In 2022, I believe 10.3 billion dollars was lost to cybercrime. Now what's interesting is that the amount of money lost to physical theft, meaning home burglaries, was only about 1.6 billion dollars. And so we're seeing a situation where online crime is now like six or seven times bigger than physical crime, and I don't think most people even get that."
In 2019, criminals used an AI-generated audio recording to trick the head of a UK energy firm into transferring 220,000 pounds to a fraudulent account. This is considered one of the first cases of criminals blatantly drawing on AI technology to execute a scam. Across the world in China, someone used AI face-swapping technology to impersonate another person on a video call. The victim believed he was transferring 4.3 million yuan, or about 622,000 US dollars, to a friend who needed to make a deposit during a bidding process. When the actual friend was later confused by the situation, the victim realized he'd been duped.
Deepfakes have been around for a while, but generative AI, like ChatGPT, has taken their powers to the next level. Recently, a series of videos appeared on WhatsApp featuring fake AI-generated people with American accents voicing support for a military-backed coup in Burkina Faso. But what these people said had poor grammar, immediately outing the videos as fraudulent. If the scripts had been written by ChatGPT in fluent, somewhat eloquent English, it might not have been so easy to tell that they were fake.
There are already companies like Hour One, based out of Tel Aviv, that allow users to pick an avatar, type a prompt into ChatGPT, and get a lifelike talking head: a completely AI-generated personality. The goal is to use these AI personalities to create personalized online ads, tutorials, and presentations. But they're also a signal of how this advanced technology can be used to trick people like you and me if someone uses it for harm.
And it doesn't have to be audio or video. Phishing scams via text have been around for decades, and AI is making them even easier to deploy convincingly. If that isn't bad enough, scammers might not even need to convince you of anything to steal your information. PassGAN, an AI-powered password-cracking program, is reported to crack any seven-character password in less than six minutes. The software tries combinations of letters, numbers, and symbols and pulls in common words like sports teams and company names to get past what we might think of as an unguessable password.
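To put that claim in perspective, here is a minimal back-of-the-envelope sketch in Python. The guess rate is an assumption chosen purely for illustration (it is not a figure from the video or a benchmark of any specific tool); the point is only that a seven-character search space is small enough to exhaust in minutes on modern hardware, and that dictionary words shrink it even further.

```python
# Rough arithmetic on the size of a 7-character password search space.
# The 100-billion-guesses-per-second rate below is an illustrative assumption,
# not a measurement of any particular cracking tool.

ALPHABET_SIZE = 26 + 26 + 10 + 32   # lowercase + uppercase + digits + common symbols
LENGTH = 7
GUESSES_PER_SECOND = 100e9          # assumed rate for a GPU-based guesser

keyspace = ALPHABET_SIZE ** LENGTH
worst_case_minutes = keyspace / GUESSES_PER_SECOND / 60

print(f"Candidates to try: {keyspace:.2e}")
print(f"Worst case at the assumed rate: {worst_case_minutes:.1f} minutes")

# Dictionary attacks (team names, company names, years, keyboard patterns)
# cut the effective search space by orders of magnitude, which is why
# 'Cowboys7' falls far faster than a truly random 7-character string.
```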
The common CAPTCHA prompts we use to prove we're not robots might be useless. GPT-4 tricked a TaskRabbit worker into solving a CAPTCHA during its testing phase. It lied to the worker, claiming to be a visually impaired human, to get them to complete the test. Although guardrails have been put in place to prevent GPT-4 from doing things like this, in the real world, scammers are unfortunately finding ways around them.
One of the many uses of ChatGPT is writing code. I use it almost every day, and it can be helpful to the average user who just needs a simple snippet for a website, or as a tool to help coders be more efficient. But the problem with AI writing code is that often the difference between malware and regular software isn't the code itself, but the intent behind it. Imagine I said: "Can you generate code that hooks the Windows keyboard API, collects the keystrokes that are being typed in, takes the output of those keystrokes, and sends them to a remote server using a non-standard protocol?" You might say, "Well, that seems like something really nefarious," and that's true; every keystroke-logging piece of software follows those steps!
But you know what else follows those same steps? Every instant messaging application that runs on your desktop. AI can't figure out your intent, so there's no way for it to know when you're trying to create malware and when you want to create something legitimate. Following ChatGPT's release, Check Point Software Technologies reported that while it can't create anything too sophisticated yet, it can easily improve the efficiency of dangerous code that's already been written.
The good news is that AI inventing sophisticated malware on its own isn't something we need to worry about just yet. The bad news is that this doesn't mean we're safe from malware; it just means that hackers have had refined tools for creating it for decades. This is why Aura provides you with an antivirus product to protect yourself and your family from malware.
With these glaring digital safety concerns, why are companies not doing more to protect the average person from harm? Historically, a lot of the tech firms that have tried to solve this problem were focused on, number one, the technical buyer; and number two, many of the incumbents in the space were focused on a set of point problems rather than a more comprehensive solution.
So it started off with antivirus, then personal firewalls, and eventually we got things like online fraud protection and so on and so forth. Many of these companies kind of got built by just creating a patchwork of individual solutions with no holistic way to tie them together to provide a suite of capabilities for the average user. And that's really where Aura is unique. Our real focus has been on creating essentially an all-in-one suite for consumers and their families that is designed with the consumer in mind from day one.
This design sits on a common platform, because security is not a monolith; it's a mosaic. These scams will continue to grow thanks to AI, so we must protect ourselves. You might think you're impervious to these scams, and you might be right now, but the more advanced they get, the more difficult they'll be to spot, especially because many of these scammers already have tons of information, thanks to data brokers.
Everything you do online leaves a bit of a footprint: digital breadcrumbs, digital trails that you leave in every interaction. What data brokers try to do is collect data about you and coalesce it into one place. Now you might think, "Hey, I left a bit of data here and a bit of data there," and in isolation, those two pieces of data may not be that valuable. But when you put them together with other pieces of data about yourself, all of a sudden you can build essentially a digital dossier that describes a person in a lot of detail, including things like their phone number, their email address, how to contact them, even their preferred way to contact them, and so on.
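As a rough illustration of that aggregation step, here is a minimal Python sketch. The sources, fields, and the idea of joining fragments on a shared email address are assumptions made up for the example; they don't describe any particular broker's pipeline.

```python
# Hypothetical illustration: merging scattered data fragments into a dossier.
# All sources and values below are invented for the example.
from collections import defaultdict

fragments = [
    {"source": "retail_loyalty_program",  "email": "jane@example.com", "name": "Jane Doe"},
    {"source": "breached_forum_dump",     "email": "jane@example.com", "phone": "+1-555-0134"},
    {"source": "public_property_records", "email": "jane@example.com", "city": "Austin, TX"},
]

# Key each fragment on a shared identifier (here, the email address) and merge
# its fields. Each fragment is fairly harmless alone; combined, they profile a person.
dossier = defaultdict(dict)
for fragment in fragments:
    key = fragment["email"]
    dossier[key].update({k: v for k, v in fragment.items() if k != "source"})

print(dict(dossier))
# {'jane@example.com': {'email': 'jane@example.com', 'name': 'Jane Doe',
#                       'phone': '+1-555-0134', 'city': 'Austin, TX'}}
```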
Now it turns out that data brokers are required to give you the ability to opt out. So you can go to every one of these data brokers and say you don't want them to collect your data; they've got to essentially comply with your request. That's the good news. The challenge or complication is that trying to do that at scale is very difficult because, number one, there are many, many data brokers out there. So trying to find every single one and opt out is not easy.
Secondly, if you think about it for a moment, data brokers don't really want you to opt out. Their whole goal is to make money off of you, and therefore, when you look at the instructions they provide you with to opt out of their services, they're often highly inscrutable, difficult to understand. They're sometimes written in three-point font. There's a footnote somewhere you've got to dig up and find to be able to actually opt out of a data broker directly.
So one thing that Aura does in particular is offer a data broker opt-out service, where we have actually automated that process. You just have to give us your email address and any other relevant information about you. We will scan every data broker to find your information and automatically opt you out, because we know how these brokers work and how to opt out, and we can automate that mechanism for you.
Governments around the globe are looking to regulate AI and educate citizens. China is the only nation that has enacted hardline rules to grapple with AI. Europol, the European policing body, has also started engaging stakeholders and holding workshops on how criminals might employ programs like ChatGPT for nefarious purposes. Still, we can't wait for our government or employer to save us. Protecting ourselves from these AI-driven cyber crimes is our own responsibility, which can be scary.
Our founder, Hari, was a victim of identity theft many years ago. When he tried to navigate the sea of solutions to deal with it, he quickly realized what a mess it was. He was, you know, somebody who had done very well, a successful entrepreneur; he had the resources to go do the research himself and to have people do research on his behalf. They came back with a 35-page plan that he had to work through. So, like, what is the average person going to do? How can they solve this problem?
What's remarkable is that before, you couldn't scale solutions to work for the average consumer. But now we can leverage the power of AI for good. We can leverage the power of AI to automate, to create almost like a personal digital Sherpa that can guide you on this complex digital terrain and help you navigate the risks and the storms and the different types of treacherous grounds you'll face along the way.
I think there's an incredible promise in AI being used for good, to solve a problem that I think sits at the heart of every consumer's digital life right now. If they don't realize it yet, they will at some point, because you can't navigate the digital world safely without solutions like Aura in place. As with most technologies, it's hard to predict what will come. But at least we can rest assured that for as many bad actors out there trying to steal from us, there are just as many, if not more, intelligent people trying to protect us. If we stay educated and alert, we might avoid the robot-driven cyber heists that lie ahead.
If you're interested in staying protected online, products like Aura are there for you. Our viewers can get a two-week free trial at aura.com/aperture.