The Dark Side of the Latest Tech
In 2010, around 40,000 people died from drug overdoses in the United States. Quantifying the importance and meaning of an individual human life in a single statistic is impossible, but that number might already seem high, especially if you knew one of those people. But it gets worse—so much worse. In 2021, more than 100,000 people died from drug overdoses, and that number has continued to climb each year.
Pinpointing an exact reason behind these rising numbers is complicated, but it all starts with these five words: fentanyl and the dark web. According to a UCLA study, fentanyl, a synthetic opioid that's 50 times stronger than heroin, was responsible for around 10% of drug overdose deaths in 2010. The first spike in fentanyl overdose deaths came in 2015. The problem remained localized in the eastern United States for several years, until 2019, when it spread across the country. Suddenly, everyone was exposed, and death rates skyrocketed.
As of 2021, a staggering 66% of drug overdoses were a result of fentanyl, and that number keeps rising. Now you might say, "What's the problem? If people take drugs, they should be willing to face the consequences." While you may be right, the problem is that most people who are dying from fentanyl overdoses aren't even aware they're consuming it. And it's all because of one group of people—the fentanyl kings of the dark web, criminal gangs who have figured out how to manufacture and distribute fentanyl illegally.
Because fentanyl is much cheaper to produce than other drugs, they've begun lacing those other drugs with it and selling them to unsuspecting users. This is the true cause of the fentanyl crisis. Between 2010 and 2021, the amount of drugs used didn't double, nor did the number of users. In fact, drug use hasn't increased very much; it has just become much more lethal.
There is a solution, but it's a rather radical one that requires an open mind to even consider: what if we controlled the supply itself instead of trying to control the suppliers? If you can't beat them, join them, if you will. We'll come back to that. First, a quick word on another thing that's hard to understand: the biggest issue with AI is how difficult it is to grasp what's actually going on inside it. It feels like we're at the mercy of a few Silicon Valley companies to explain to us how this thing they've created works, and you can bet they'll only explain it in a way that favors them.
But this doesn't have to be the case. Thanks to brilliant.org, the sponsor of today's episode, I recently took Brilliant's "How Large Language Models Work" course, and it taught me everything about how generative AI tools like ChatGPT and Google's Gemini work. Although it's a very complex subject, Brilliant made it super easy for me to understand thanks to the fun and interactive features the course has. Brilliant offers thousands of other courses on different concepts like logic, math, and computer science. Each course is designed by a team of award-winning teachers, researchers, and industry experts, so you can rest assured knowing you're getting the most up-to-date and industry-relevant information.
Brilliant's courses are broken down into small pieces, which makes it super easy to learn using their app wherever you are. If you're interested in trying out the course on AI or any of the other thousands of lessons Brilliant has to offer, you can do so completely for free for 30 days by going to brilliant.org/aperture or clicking the link at the top of the description, which also gives you a 20% discount on an annual premium subscription.
Back to our story. The approach isn't all that crazy when you think about how the opioid crisis exploded. Gone are the days of drugs being distributed on street corners or by corrupt doctors. Many now come from a much more sinister place: the dark web. Dealers no longer need a back alley to sell drugs, and buyers don't need to steal from homes or doctors' offices to get pills; they can sit at their computer or on their phone, purchase their drug of choice, and have it delivered to their home.
Using special browsers and software that conceal their IP addresses, users are hard, if not nearly impossible, to trace on the dark web. This makes it the perfect place for secret shops to sell illegal substances. Unsurprisingly, this poses a massive challenge for law enforcement as it tries to crack down on the illegal drug trade. The dark web leads investigators and police on a never-ending game of whack-a-mole as they try to stamp out anonymous markets, only to see new ones pop up.
One piece of software, called Tor, routes user data through servers worldwide, disguising its origin. This means that communications between buyer and seller end up scrambled, for lack of a better term. Add crypto as the means of payment, and you can see why it's almost impossible to regulate our way out of this mess. All of this has helped fuel the fentanyl crisis. The dark web offers an accessible vehicle for buying drugs and opens sellers up to a whole market that wasn't there before.
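To make that "scrambled" part concrete, here's a minimal Python sketch of the onion-layering idea behind Tor-style routing. To be clear, this is a conceptual toy under simplifying assumptions, not Tor's actual protocol: real Tor negotiates keys per circuit, uses fixed-size cells, and much more. The relay names and message here are made up.

```python
# A toy illustration of onion routing: the sender wraps a message in one
# encryption layer per relay, and each relay peels exactly one layer.
from cryptography.fernet import Fernet

# Each relay on the path holds its own symmetric key.
relay_keys = {name: Fernet(Fernet.generate_key())
              for name in ("guard", "middle", "exit")}

def wrap(message: bytes) -> bytes:
    """Sender encrypts in reverse path order: exit first, guard last."""
    for name in ("exit", "middle", "guard"):
        message = relay_keys[name].encrypt(message)
    return message

def route(onion: bytes) -> bytes:
    """Each relay removes one layer; only the exit sees the plaintext."""
    for name in ("guard", "middle", "exit"):
        onion = relay_keys[name].decrypt(onion)
    return onion

packet = wrap(b"meet at the usual market")
assert route(packet) == b"meet at the usual market"
```

The design consequence is the whole point: the first relay knows who sent the packet but not what it says or where it ends up, while the last relay sees the message but not who sent it. No single server ever holds both ends of the conversation.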
The people who are curious but not about to commit a crime out in the open can now do it from their couch. Yes, the dark web has benefits; whistleblowers who work in government or other industries can alert journalists to important facts without being identified. People in authoritarian regimes might use the dark web to avoid detection or having their internet tracked. But the same features that make the dark web appealing for people who are trying to stay safe allow drug traffickers to avoid police surveillance.
By now, the dark web is so global that even municipal police officers have to look overseas to track down shipments and transactions of illegal drugs. If one site gets shut down, another inevitably takes its place. But before a drug even makes it onto the dark web, dealers have to decide to use it. Fentanyl doesn't end up in shipments of cocaine by mistake, so why lace it in if the chance of a fatality is so high?
Some dealers think they can mitigate the risk by measuring fentanyl carefully. If dosed correctly, fentanyl can help create a stream of return customers because it's so addictive. This explains why it's so often found in cocaine or in counterfeit Xanax and Adderall; it's more addictive than those drugs themselves, so it keeps people coming back for more. Then there's the capitalist argument: it's cheaper than the other opioids. A tiny amount of fentanyl can mimic the high of much larger doses of other opioids like heroin or prescription painkillers.
Dealers will use simple binding agents and a small amount of fentanyl when they make counterfeit opioid pills because it helps their bottom line. The potency of fentanyl also makes it a really easy drug to traffic. A formerly incarcerated dealer claimed that he could make over ten times as many counterfeit pills when he used fentanyl instead of other drugs. And even though fentanyl is heavily regulated in hospitals and other medical settings, it's not regulated at all on the internet, much less the dark web.
So small mistakes by uneducated, malicious, or ignorant dealers can be fatal. The big question is, can we stop it? Of course! To do that, we need to know who the people behind these dark web operations are. Just as the physical drug trade had infamous lords like Pablo Escobar and El Chapo, there are kingpins running the dark web trade as well. It all started with a site called Silk Road, the first large-scale dark web drug market, which began in 2011. It received a lot of media attention, and eventually, law enforcement was able to step in and shut it down in 2013.
But more sites took its place. AlphaBay popped up and was shut down in 2017, though it has recently been clawing its way back to the top. Dream Market was also a popular market until it was shuttered in 2019, and its inner workings can tell us a lot about how these sites function. Dream Market had around 100,000 listings and was, at the time, the biggest dark web shopping center for drugs. More than half its listings were for illicit substances, although it also sold things like designer clothes, counterfeit money, and stolen online banking information.
The shopping experience is similar to what most of us experience when we need a new yoga mat or that random piece that broke off your fridge. Buyers can go to any of these sites, search the inventory, and pay in crypto, and the drugs will arrive on their doorstep. And like the typical online shopping experience, buyers can leave reviews for vendors and products. Let's say you're buying a new TV; you might go online and read some reviews, look for the best deal, or see which retailers are the most reputable for what you're looking for.
There, the worst-case scenario is that you buy a shoddy TV. On dark web drug markets, the worst-case scenario is that you buy drugs laced with something like fentanyl that could kill you. The reviews might read like those for some mundane product on Amazon, but they are as consequential as it gets. One review on Dream Market for 100 mg of the drug carfentanil said, "It took forever to find a new carfent supplier. Finally found a good vendor. It's great." It reads innocently enough, but carfentanil is a synthetic opioid used to sedate large animals, and it's even more potent than fentanyl; 100 mg is enough to kill dozens of people.
Despite all the difficulty of taking down sites and the kingpins who operate them, there have been some high-profile success stories in curbing the dark web drug market. A 40-year-old Indian-British man named Banmeet Singh ran an international drug ring from his home in the UK. He was charged with conspiracy to commit money laundering and to distribute and possess with intent to distribute controlled substances in April 2024. He admitted that from 2012 to 2017, he was active on the dark web drug market, distributing heroin, cocaine, fentanyl, ecstasy, LSD, ketamine, Xanax, and plenty of other drugs. He had to surrender more than 4,000 Bitcoin, valued at more than $245 million, and was given the same threat-priority level as El Chapo.
Then there was Operation SpecTor, a US Department of Justice enforcement operation that concluded in May 2023 and targeted opioid and fentanyl trafficking on the dark web. The operation resulted in 288 arrests, the most ever for this type of operation, and the seizure of 117 firearms and 850 kg of drugs, including 64 kg of fentanyl or fentanyl-laced narcotics. On top of all that, they seized $53 million in cash and crypto. This international effort spanned three continents to disrupt fentanyl and opioid trafficking and was accompanied by a public awareness campaign promoting resources for those struggling with substance abuse.
The operation's message to criminals on the dark web was, "You can try to hide in the furthest reaches of the internet, but the Justice Department will find you and hold you accountable for your crimes." It also reaffirmed the Drug Enforcement Administration's commitment to shutting down the fentanyl and opioid supply chain from beginning to end. They may be committed now, but the dark web keeps growing and growing, and inevitably, another kingpin will pop up. No matter how successful they've been in the past, authorities will just need to start over again.
So is it more effective to try to shut it all down, or to control the supply instead? In the 1920s, the US banned alcohol. People didn't stop drinking; they went underground, and illegal traffickers operated much like those on the dark web do now. Illegal activity increased, there was no regulation, and alcohol that was much stronger and more dangerous than what had previously been for sale became widely available. Are we in a similar era with the drug trade? And if so, is it time to rethink our approach? The goal, by whatever means necessary, is to save lives.
The internet is dead, and we are the killers. Truth doesn't really exist online anymore. Bots have swamped social media with misinformation, and the web pages we surf today are almost entirely generated by AI. Even YouTube is flooded with channels completely run by AI, with zero creative input from humans. Every single day, less and less of the content we consume is created by humans, and what's even worse is that all of it is being recycled back into a system we have very little control over, leaving the internet a shell of what it once was.
Scrolling through social media has become an isolated, empty, and fruitless activity. But how exactly did we get here, and is it too late to revive the internet? The internet started as a weapon of war—not an explicit military weapon, but a product of the Cold War technological battle between the United States and the Soviet Union. That rivalry pushed American researchers to develop a way of sharing data and programs across a globally dispersed network. The work began in the early 1960s, and the earliest version of the internet connected two universities: UCLA and Stanford. Simple messages could be sent between them, but it was nothing like what we have today.
Throughout the 1970s, though, that network continued spreading via phone lines. In 1983, there were only about 500 computer hosts connected; within five years, that number reached 100,000 as the network rolled out to more and more countries. Then everything changed with the creation of the World Wide Web. The World Wide Web was a simple concept: a network of hypertext documents accessed with a program called a browser. By the mid-1990s, it had moved from scientists and governments into the realm of public consumption.
The birth of the internet was celebrated as a revolutionary force for the free transfer and democratization of information. Power could be put back in the hands of the people. The borders of regulation and nationalism would be transcended, creating a new age of society. You no longer needed a media outlet or a political platform to air your views; everyone could become their own publisher, something that wasn't possible before.
There was only one problem: greed. As years turned to decades, the internet followed a familiar path, the same one almost every important technological innovation has followed before and since. Dozens of internet service providers had emerged, but they slowly merged into just a few big companies, leaving consumers with no real choice. Just as with radio a century earlier, thousands of different companies were slowly bought out or priced out of the market.
This was true for both the infrastructure and the web real estate. By 2001, when the dot-com bubble popped, the top ten websites accounted for 31% of page views in the United States. By 2006, that number had climbed to 40%, and by 2010, to 75%. At the same time, iPhones and smart devices changed our relationship with the internet. Browsing the World Wide Web became less common as people shifted to semi-closed applications; in 2010, these applications accounted for less than a quarter of internet traffic. Today, it's 90%.
Keeping users inside the confines of applications made advertising more effective, creating an entirely new ecosystem. Apple was on its way to becoming one of the biggest companies in the world thanks to this closed system. Of the tech companies that survived the dot-com crash, five went on to dominate: Google, Amazon, Apple, Facebook, and Microsoft. The big five are now virtually impossible to avoid on the internet, whether through advertising, e-commerce, or the cloud computing hosts that websites depend on.
This tech dominance has permanently changed our society, and as these industry giants compete in a technological arms race for the next big weapon to sell consumers—artificial intelligence—the last vestiges of a utopian internet could be lost. Connection and truth: nearly 5.2 billion people have access to the internet, and the average user spends six hours a day online. It's by far the world's biggest market, and unfortunately, the most valuable commodity in that market is your attention. Views, likes, and reviews are worth their weight in gold, and this has led to a huge, shady marketplace for all kinds of engagement.
Money has turned the internet into a tiered system ripe for exploitation. Creating a successful channel or social media profile can be a lucrative long-term investment. With a few clicks and dollars, anyone can buy thousands of Spotify streams, Instagram likes, or YouTube views. How can companies offer these services? Well, employing thousands of people to click on posts manually would be far too expensive. Instead, they use bots—automated, unattended software.
When set on a target, they artificially engage with a post, boost it in the rankings, and attract more attention. Websites do their best to scrub bots out, but it's almost impossible to remove them all. Over the last six years, Meta deactivated over 27 billion fake accounts. Some believe it's already too late to take action against these bots. They think we may have reached a dark tipping point.
First posted in 2019, the dead internet theory proposes that we're living in a dystopian ghost town populated by simulated actors. According to the theory, something changed around 2016; the internet had become so saturated with automated content that having a genuine human-to-human interaction was rare, if not impossible. Surfing the web was like driving alone and only passing automated vehicles on the road.
The theory's original poster admitted that it sounded conspiratorial, but they were certain of a feeling that something was distinctly lacking. There was an emptiness online that hadn't been there in the early 2000s. Images, conversations, and memes all began to resemble one another, while forums that were once exciting now felt sanitized. All of this, supposedly, was carefully curated to keep control of culture and political power, and most of all, to make sure the general population stays in line.
The dead internet theory remained on the fringes until 2022, when a grand unveiling brought the issue of bots to global attention. On November 30, 2022, the world was introduced to ChatGPT, built on a large language model: a system that processes gigantic data sets and forms relationships between the ideas in them. It is the most advanced form of artificial intelligence ever to exist. For decades, computers have been much more powerful than humans at specific tasks, like chess or mathematical calculation.
The question that divided computer scientists and philosophers was whether computers could ever transition into a kind of general intelligence. Mathematician Alan Turing asked this question in 1950, when he developed the Turing Test as a way to determine whether a machine was thinking. The test was simple: for a computer to pass, it had to fool a human into thinking they were talking with another human. Last year, GPT-4 became one of the first systems that researchers claimed had passed the Turing Test.
We had officially brought a machine to life, and suddenly the dead internet theory became not just a conspiracy but a likely future. And if we can't distinguish between humans and AI anymore, who can we trust? Consider this: 49.6% of all internet traffic in 2023 came from bots. Even more daunting is that things are likely to get much worse now that search engines have started to integrate AI technology, like ChatGPT, into their services.
The flaws have never been more clear. Tech giants are rushing products to market to compete with each other for supremacy, and in the process, there have been some disastrous results. Google's entry into the world of AI, Gemini, told users that rocks are a source of vitamins, that astronauts who landed on the moon encountered cats, and that backpacks are just as effective as parachutes when jumping out of a plane. These hallucinations happen because the models compress gigantic amounts of information and then try to piece it back together, using an algorithm that identifies the important components and interpolates the missing pieces.
Anything creative is usually generated using a composite or altered version of work that has already been created by humans. Artwork and music online are being used as training data for artificial intelligence, leading many to argue that creative workers are being exploited. Hundreds of artists, backed by the nonprofit Artist Rights Alliance, have signed an open letter seeking to prevent AI from sabotaging and undermining artists.
On top of that, these models don't know how to judge whether or not a source is trustworthy. Instead, they gravitate towards popular responses, which can range from reliable academic journals to a meme made by a 17-year-old to gain Reddit karma. One individual created a fake website designed purely to trap crawlers—these are automated programs commonly used by search engines to systematically download the contents of web pages.
They realized that the GPT bot was collecting around 3 million pages from their site every day. This was a small window into the quality of data being scraped by AI chatbots and informing their responses, which isn't exactly a positive sign. What's even worse is that because so much of the internet is already generated by AI, the next generation of AI will use content created by previous generations as training data, and with each passing generation, the work these AI tools produce will become less and less human.
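For context on what a crawler actually does, here's a minimal sketch using only Python's standard library: fetch a page, pull out its links, and queue them for fetching. The seed URL is a placeholder, and real crawlers add politeness rules (robots.txt, rate limits), large-scale deduplication, and far more robust parsing.

```python
# A minimal breadth-first web crawler: download a page, extract links,
# queue anything new, repeat until a page limit is reached.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed: str, limit: int = 10) -> set[str]:
    seen, queue = set(), [seed]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # skip unreachable pages and unsupported schemes
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the current page and queue them.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

print(crawl("https://example.com"))
```

A "trap" site like the one described exploits exactly this loop: it serves endless machine-generated links, so a naive crawler never runs out of pages to fetch.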
Search results will be filled with AI-generated content, and finding trustworthy sources will become even more difficult. The word "robot" was first used in a 1920s science fiction play called "Rossum's Universal Robots." It comes from the Czech word "robota," which loosely translates to "forced labor" or "servitude." This was a good description of mechanical robots used in production; they were simply tools, like a hammer or a calculator.
As robots moved into the digital world, though, the relationship between master and servant has blurred. Online media is already largely controlled by artificial intelligence. Powerful automated programs determine whether content succeeds or fails. Advertisers pay millions for professionals to crack the code of mysterious algorithms that are constantly changing.
These programs are essentially black boxes, their inner workings not fully known even to those who design them. Left at the mercy of pattern-recognizing supercomputers, we have shaped our behavior around them. Social media sites have shifted from authentic social engagement to consumption-driven models. As long as they can keep you online, it's good for business.
Their algorithms process unimaginably large data sets to build a profile of you as a consumer, categorizing your online behavior to serve personalized ads; all it takes is a single click to fill your feed with a certain subject, pulling you in to consume more. The system rewards the copycats and imitators who study successful content, and they have no choice: to survive online, creators, influencers, and marketers must constantly seek the algorithm's approval.
In the end, even the humans who create are pushed toward generic, unoriginal content. There's a reason people complain that the algorithm “MrBeast-ifies” everything: creators see how well Jimmy Donaldson does and ask, why shouldn't they copy him? As Friedrich Nietzsche argued, people tend to follow the herd for protection, afraid to show authenticity and confront truth. With this comes nihilism: the herd drifts aimlessly, without meaning or certainty. That drift is reflected in the algorithm, and the content we consume and perpetuate leaves us in an infinite cycle of emptiness.
Our aesthetics and values are now curated by non-human forces. So is our culture still ours? Are humans the ones giving the orders, or has technology become our master? The commercialization of the internet, its concentration in the hands of a few companies, and the creation of quick but dangerously inaccurate generative tools have given us a technology that manipulates our deepest weaknesses. The emergence of bot armies offers a shortcut for business development.
AI offers a shortcut to creativity, churning out low-quality content at scale. Powerful algorithms have fueled our addiction to short-form content. So has the internet become a carefully constructed reality we've all been presented with? Not to the extent that the dead internet theory describes, but as artificial intelligence spreads through the internet, our sense of reality is being seriously shaken.
It's a wake-up call to change course before it's too late. To keep the internet alive, we need the courage to break away from the herd, acknowledge that the current systems aren't working, and accept that humanity is more important than profit. If you like the videos we make and want to support us in making bigger and better projects, we just updated our Patreon.
Everyone who joins will get free access to our updated Discord server where you can connect with the Aperture community. Patrons can choose from different tiers with perks like discounts on all the merch we have and will create in the future, private Discord channels for voting power on video topics and idea pitching, shout-outs in video descriptions, and more perks to come that will be decided by you guys, the patrons.
If you don't have the means, then please don't feel obligated in any way. Subscribing and watching the videos is more than enough support, but if you do have the means and want to support, then Patreon is the best way to do so. The link is in the description.
Thanks for watching. Imagine you just spent $4,000 on an Apple Vision Pro. You excitedly bring it home and set it down on your coffee table. As you open the premium-feeling Apple packaging, the smell of fresh plastic and metal fills you with a familiar joy. You strap on the headset and enter an intoxicating setup mode that immediately introduces you to the feeling of spatial computing.
Your body feels like it's really in that world, and although it's just a bunch of bland menus, you're pumped. You enter the main UI, free from the tutorial's shackles; the virtual world is your oyster. Now it's time to populate your home screen with apps. You start with applications for work, such as email and word processing. This is a serious, grown-up VR/AR headset, after all.
But now it's time to play. You start watching your favorite YouTube channel—probably Aperture, of course—and the viewing experience is impressive, maybe even better than your large TV. Your new device is brimming with potential, but the real world and its responsibilities call for your attention. Luckily, this headset was built with your needs in the outside world in mind. You can see the real-life space around you; in a way, though, what you're actually seeing is a camera feed meant to capture the environment around you as accurately as possible.
It may not be perfect, but hey, it's pretty cool. But now your spouse is home. You try to interact with them, but that proves frustrating. You take the headset off to talk, and as soon as you feel it's socially acceptable to return to your virtual world, you put the headset back on. You notice your neck is getting tired, sore even; the strap is comfortable but can only do so much to offset the device's weight.
In the first week, you use the Apple Vision Pro for several hours a day. You take it with you on a flight, use it to work in a coffee shop, and once, you even dare to watch a movie on the subway. After about two weeks, you pick up your headset, put it on, and realize that the novelty has faded. It's still impressive, don't get me wrong, but the weight and the time it takes to set up start to become an inconvenience.
You're traveling to see your mom, and you know things will be busy, so you decide to just skip the headset and stick to your phone and laptop. You'll be back to play with it once you return, you tell yourself. Two months later, you're only using it on weekends to watch a movie, and you're wondering if it was a good investment. Someone asks you if you recommend getting one; you pause, which tells them everything they need to know.
The Apple Vision Pro is Apple's latest attempt to take a promising technology that has so far lacked good execution and deliver it in a way that only Apple can. It falls neatly in line with the AirPods and the iPad, which improved on Bluetooth headphones and tablets, and with Apple's walled garden, which made them best in class—at least for those with other Apple products. Apple planned to do the same with the Apple Vision Pro in the VR/AR space.
It has a sleek design made mostly of metal and glass, which other headsets lack. Unlike most other headsets, it also emphasizes work over play, and of course, it integrates perfectly with other Apple devices. Because it so closely follows the template of Apple's previous products, it wouldn't be surprising to assume the Apple Vision Pro would be an overwhelming success. But that hasn't been the case.
Demand for these $3,500 headsets has been low, and production has even slowed due to the sluggish sales. Tech reviews have dried up, and it looks as if people have moved on from the Vision Pro remarkably quickly. So the question is: why does nobody want the Apple Vision Pro? Right off the bat, we need to address the elephant in the room—the price.
At $3,500, this is seven times the price of its closest competitor, the Meta Quest 3. It's more expensive than a base model of the latest iPhone, iPad, and MacBook Air combined. Most people aren't willing to spend that much money on a first-generation product that doesn't even have a native Netflix app. That's why many tech reviewers believe that this first generation was created for developers to make apps for the platform—not really for the general audience.
Beyond the price, other factors are also at play here. Unlike other VR headsets, Apple wants this to be a regular part of your life. The marketing suggests it'll fit into your lifestyle like the iPhone in your pocket. But like other VR headsets, there's one significant barrier it can't get around—the barrier it puts up between you and the real world.
VR can be an incredible experience; it can convince your body that a fake world is real. Anyone who has looked down into a canyon with their headset on will be very familiar with how it can trick the senses. But to get to that world, you must put on the headset and leave this one. The idea of entirely ditching reality as a daily activity goes against our basic sense of responsibility to ourselves and others. The device creates a barrier between you and the rest of the world, which is filled with people who demand your attention.
Of course, you have real-life human needs you can't fulfill in a virtual world. Apple's other innovative devices took over many parts of our lives. The smartphone constantly distracts us from our work and our play; the iPad does the same thing, but bigger, and admittedly with some added benefits. They made tasks that were otherwise difficult simpler and more intuitive. The Apple Pencil and illustration apps have more recently made tablet-based illustration feel so much like the real thing, taking away many of the previous inconveniences of digital drawing.
But there is one line these distracting devices didn't cross: they seamlessly slid into your existing life. When Steve Jobs announced the iPhone on stage, he talked about merging three things we use every day into one device: all the songs you could ever want, a mobile phone, and the internet. People already loved and couldn't do without these three things separately; the iPhone simply combined them into one device.
On the other hand, the Vision Pro asks you to do something you've never done before: leave this world and enter a new one. That's something most people just aren't ready to do yet. Apple clearly anticipated this problem and tried to get around it with features other VR/AR headsets largely don't have. It has cameras facing inward and outward, so you can see a digital representation of the outside world, and others can see a digital representation of your eyes on the headset's front display.
The headset also creates an avatar of you for FaceTime calls, meant to reasonably convince others that it's actually you. So far, neither of these features works well; both still lie very much in the uncanny valley. While these attempts to fit the Vision Pro into your life are impressive, there's one big problem: a replica of reality, no matter how accurate, will never be as satisfying as the real thing.
Even impressively reduced, the stuttering and latency of the simulated world are still frustrations you've never had to deal with in the real world. When your phone lags, you can easily look away from it for a few seconds and then continue. With the Vision Pro, you're trapped in the virtual world; you can either stare at nothing for a few seconds or go through the hassle of taking off the headset.
And even if you do take it off, there's no way of knowing whether it's fixed itself without putting the headset back on. One of the biggest reasons we're addicted to our smartphones is how easy they are to get into: one swipe, and you're on TikTok, ready to consume content. With something like the Vision Pro, it takes a few minutes to get the headset, put it on, and strap it into place before you can get going.
Once the novelty of the product wears off, that setup time is what prevents you from using it as often as you might. With the Vision Pro, we're confronted with a world of simulation in which we risk losing track of our tangible existence. The headset gives us a stark choice between a simulated world with simulated communication and the real world. With this comparison so obvious, even this fairly accomplished fake world feels inferior and, on some level, not suitable for humans.
We have a perfectly functional 3D world in front of us that costs nothing. Why have we let tech convince us that reality isn't good enough, that our vision isn't enough, and we need to upgrade it to Vision Pro? The Vision Pro acts as a hub between our various work tasks and the media we consume for amusement. It works well in this way, but does it work better than our hub called reality?
A hub where you can seamlessly shift focus from your computer to other human beings; a hub that doesn't require a battery pack clipped to your belt like you're an overeager production assistant. As much as the headset tries to simulate your real-world experience, it inevitably comes up short.
How we navigate our world is very complex, and we're rarely aware of our habits, big and small. Is big tech up to this challenge? And what kind of effect will adapting to Vision Pro have on how we engage with the world? Given how similar the simulation is to real life, it could alter how we use our eyes in the real world, which may not be good.
The Vision Pro asks us to go deeper into the online world of profiles, where our sense of identity becomes tied to them to an almost literal extent. Who we are outside of the simulation starts to become less and less significant, and that comes with significant consequences for our well-being. In-person contact is more important than we give it credit for; we are hardwired to connect with others, and when we do, our brains reward us for it.
Any form of in-person contact releases a suite of feel-good chemicals like dopamine and serotonin. The brain also unleashes our bonding hormone, oxytocin, which helps with depression and even boosts your immune system. Oxytocin makes you more empathetic, generous, collaborative, nurturing, and grateful. This helps tremendously in relationships and makes all of us better-functioning members of society.
Screens have already separated us so much; a wide adoption of frequent VR use would probably worsen our isolation problem. When we put on a VR headset, we're confronted with a separation from others; we are social animals, and we don't get a truly social experience from these virtual interactions. We get a simulation, and that'll never truly feel right, even at its best, and Vision Pro is far from that.
But there are times during our day when we want to wall ourselves off in our work, and sometimes in our play. That's where there may be a place for the Apple Vision Pro: not around others, but when we could use a wall, when we really need our own world for a bit. Then again, you can't plan around real-world interruptions, and that frustration of reality crashing in may leave VR a permanently niche format.
Even in the video game world, VR has failed to take off. The nuisance of cutting yourself off from the world, combined with cost considerations, has relegated VR to a small chunk of the market. Even at more affordable prices, the Meta Quest line of VR headsets hasn't really found mass appeal. What's on offer is still very impressive; shooting a bow and arrow in a world that's convincing to the body is hard to beat.
But even in a format that is, by definition, a simulation, this appears to be too much of a bother for most gamers, and motion sickness doesn't help either. Defenders of the Apple Vision Pro often point out that we have a history of scoffing at groundbreaking tech before it takes off and becomes an indispensable part of our lives. When the iPad was first introduced, many of us mocked it and questioned why anybody would need what is essentially a bigger smartphone.
But it's been widely adopted and loved by many. While that example is true, the notion of tech fatalism, the idea that bold new tech will inevitably win us over, is misguided. Tech isn't going to take off just because it's bold or made by a reliably successful brand. Failures happen, and we increasingly see the negative consequences of big tech's influence on our lives.
In cases where consumers have a choice, they're more likely than ever to say no to tech distractions. Tech has waged a war on reality, making our material existence worse while drawing us into a simulation. But you can only push people so far before they recognize that what's happening isn't good for them. Eventually, it becomes all too clear that you're not just being sold a product; you're being sold a false set of values.
In a way, the Apple Vision Pro's main competitor is reality itself, which offers zero latency at an unbeatable price. Apple has its work cut out for it. Which would you pick: reality's vision, or Apple's Vision Pro?
Around one in five people worldwide will develop cancer in their lifetime, and roughly one in nine men and one in twelve women will die from the disease. Put another way, for every six people who die around the world, one dies from cancer. Cancer is one of the scariest things in the world, and rightfully so. It's said that if you live long enough, cancer will inevitably come for you.
It's no surprise that our society has been trying to find a cure for cancer basically since we learned what it was. With artificial intelligence, we stand a better chance than ever in this war against cancer. This is how AI is transforming healthcare. In a recent study, researchers from the University of Toronto and Insilico Medicine used a computer program called AlphaFold, along with Insilico's Pharma.AI platform, to speed up the design and synthesis of a drug that could potentially treat hepatocellular carcinoma, the most common type of primary liver cancer.
The AI tools found a new target for attacking the cancer and a molecule that would stick to that target. That molecule could be included in a new cancer treatment drug. The researchers completed all of this in just 30 days. Now imagine what they could do with more time and more powerful AI tools.
We're still at the early stages with tools like AlphaFold, and the problem is that cancer isn't just one thing—it's not just one disease like the flu; it's a multitude of diseases requiring different treatment plans and cures. But scientists have realized that right now, our best chance at fighting cancer is being able to diagnose it as early as possible. This is where we can see just how incredible artificial intelligence can be.
Last year, I went to Japan, and while in Tokyo, I visited a pastry shop with a seemingly endless supply of pastries. There were pies, cakes, sandwiches, croissants—anything you could ever want. When I had picked up everything I wanted, I went up to the register. What I thought was a cash register fired up green lights that scanned all my pastries. The screen then displayed everything I'd bought and how much I owed.
This was one of the more subtle tech things I had seen in Japan, and so I didn't think anything of it. A couple of months ago, I read an article titled "The AI Pastry Scanner That's Fighting Cancer," and that's when it struck me. That wasn't just a cash register; it was an AI used to recognize all the different types of pastries I bought. And now that same AI that was designed for bakeries is being used to diagnose cancer.
A doctor at Kyoto's Louis Pasteur Center for Medical Research discovered that some cancer cells look almost exactly like donuts under a microscope, so he contacted the team that created the AI bakery scanner, and Cyto-AiSCAN was born. Today, the cancer-diagnosing AI can identify cancerous urinary cells with up to 99% accuracy. The same technology is also being used to differentiate pills and to spot problems in mechanical engineering.
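To give a flavor of how one pattern-recognition system can serve both bakeries and labs, here's a toy nearest-neighbor classifier over made-up shape features. To be clear, this is not how BakeryScan-style systems or Cyto-AiSCAN actually work (their methods are proprietary and far more sophisticated); it only illustrates that a classifier doesn't care whether its categories are croissants or cells.

```python
# A toy nearest-neighbor classifier: label a new sample by its closest
# known example. The "images" here are invented feature vectors of the
# form (roundness, hole_ratio).
import math

training = [
    ((0.90, 0.80), "donut"),       # round, big hole
    ((0.85, 0.75), "donut"),
    ((0.30, 0.00), "croissant"),   # crescent, no hole
    ((0.35, 0.05), "croissant"),
]

def classify(features):
    """Return the label of the nearest training example."""
    _, label = min(training, key=lambda item: math.dist(item[0], features))
    return label

# A ring-shaped cell lands near the donut examples, so it gets that label;
# swap the label strings for "benign"/"malignant" and the machinery is
# identical.
print(classify((0.88, 0.70)))  # -> "donut"
```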
Artificial intelligence has taken over our world in the past couple of years. ChatGPT reached a million users in five days, and its parent company, OpenAI, has just taken the world by storm again with the release of Sora, its text-to-video AI platform. Sora has even the best video creators in the world worried.
However, what we don't often hear about, because it's not as exciting, is what AI is doing in the world of healthcare. According to the Harvard School of Public Health, using AI to make diagnoses can reduce treatment costs by up to 50% while improving health outcomes by 40%. So you're getting much better care at half the price. In one study, US, German, and French researchers used AI to scan more than 100,000 images to identify skin cancer, and it got better results than the 58 international dermatologists who were given a similar test.
Several studies have also shown that AI is already better at spotting malignant tumors than the best radiologists in the world. When you consider that artificial intelligence is only going to get better, you can see why those in this field of research are excited. In many areas across the globe, there aren't enough physicians to meet the population's needs. Researchers in the field of medical technology believe that AI can be used to fill this gap.
Imagine a world where anyone with just an internet connection can access health information quickly and conveniently. And no, I'm not talking about Googling your symptoms only to be told you have cancer. Google researchers have developed an experimental diagnostic AI called AMIE (Articulate Medical Intelligence Explorer) that aims to replicate the feeling of talking to your doctor through a large language model.
You provide your symptoms through a text chat interface, and AMIE asks you questions and gives you treatment recommendations based on your answers, just like a human doctor would. The researchers behind AMIE claim that it outperforms clinicians in diagnostic accuracy and performance. If you understand how LLMs work, this isn't surprising: we've already talked about how good AI is at identifying patterns in medical imaging, and it's the same thing here, just with disease symptoms.
Right now, AMIE is still experimental and has limitations, but it gives us a glimpse into the future. What is available right now, though, is AI transforming administrative tasks in healthcare. When we think of healthcare, we often think of talking to a doctor, getting surgery, and grabbing meds from the pharmacy, but we rarely think about all the administrative work that goes on behind the scenes to make the experience of going to the hospital as seamless as possible.
Today, the average nurse in the United States spends 25% of their work time on regulatory and administrative activities. Through research, the main thing I've realized is that AI isn't here to replace doctors and other medical health professionals—at least not in the near future. Right now, the most impactful thing AI can do is to help with things like these administrative tasks—the more mundane and monotonous side of healthcare that we don't really like to think about—to allow human physicians the time and mental capacity to perform more complex tasks.
One of the best examples of this is the sponsor of today's video, MedBright AI. MedBright AI is a publicly listed company that has created MedMatrix, an AI-powered data analytics platform that helps to align the resources of clinics with the needs of patients in a way that improves both patient and physician satisfaction. The platform has this great tool called AI Resource Matcher that basically acts as a virtual assistant to the front desk in a clinic.
This tool analyzes all the patient needs and handles everything from scheduling visits to matching patients to the appropriate resource they need within the clinic. This helps to improve the clinic's on-time performance and overall efficiency. MedMatrix also features other tools like a Claim Optimizer that analyzes the top reasons for claim denials, a reporter that provides data reporting with a complete dashboard of the clinic's operations, and a Revenue Enhancer that helps the clinic find opportunities for revenue growth based on the current patient base and revenue model.
Tools like this are amazing for outpatient clinics, enabling them to manage their patients better and generate more revenue, which in turn allows them to provide even better patient care. To check out the tool and everything else MedBright has to offer, click the link at the top of the description. One of the biggest challenges the healthcare industry faces is us—the patients.
You see, there's only so much a doctor can do with the limited time they see you in the hospital. The bulk of the work that goes into keeping you healthy has to be done by you. Research has shown that the more patients proactively take care of themselves, the better the outcome of their treatment. The problem, though, is that many people don't have the required knowledge or willpower to follow through with the plans and make the behavioral changes necessary to improve their health.
Thankfully, this is another area in which AI can help. Machine learning can be used to personalize care to a level that would be impossible for a human physician. A machine will also be there 24/7 and can implement things like message alerts and timely checkups to ensure patients are sticking to their treatment plans. Right now, ChatGPT is being used to help patients with diabetes better understand their diagnosis and treatment options.
Recent research has also found that it can help them monitor their symptoms and adherence to treatment, provide feedback and encouragement, and answer their questions. ChatGPT and similar tools can also rewrite a treatment plan prescribed by medical professionals into different reading levels, and possibly different languages. This lowers the barrier to understanding what needs to be done and empowers everyone, regardless of education level, to take better control of their health.
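As a hedged sketch of that idea, here's roughly what such a rewrite might look like using the OpenAI Python SDK. The model name, the plan text, and the prompt wording are all placeholders, and any chat-capable LLM API would work the same way; this is an illustration, not a clinical tool.

```python
# Hypothetical sketch: ask an LLM to rewrite a treatment plan at a
# simpler reading level without changing the medical content.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

plan = (
    "Take lisinopril 10 mg orally once daily. Monitor blood pressure "
    "twice weekly and report readings above 140/90 to your physician."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {"role": "system",
         "content": "Rewrite medical instructions at a 6th-grade reading "
                    "level. Do not add or remove any medical content."},
        {"role": "user", "content": plan},
    ],
)
print(response.choices[0].message.content)
```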
There is a concern with using AI in healthcare, though, and that's the data. Who gets access to your healthcare data? How much more information do we want to give Silicon Valley tech companies about ourselves and our health? The infamous Golden State Killer was caught decades after his crimes through an open-source DNA database. He never submitted his DNA, but investigators linked crime scene samples to the DNA of his extended family.
There have been other cases where criminals were caught in a similar way, and you might say this is a good thing, and it is! But where do we draw the line? Today, it's used to catch criminals who got away; tomorrow, it might be used to catch peaceful protesters whose only crime is disagreeing with the government of the day. There's also the issue of insurance companies using this health information to inflate premiums for people predisposed to certain illnesses.
We would have to fix these issues before we can dream of a world where AI and doctors work hand in glove to save lives. And to be honest, that future is incredible to envision. Even if a world where you're attended to entirely by an AI doctor is still a fair distance away, or might never come, the world today is already filled with amazing tools that are moving the needle on medical care.
At least for now, there are some things humans will always be better at than AI—things like dealing with complex or sensitive situations such as mental health issues, chronic illnesses, and end-of-life care. AI also still has to clear the hurdle of trust, because if the public doesn't trust AI, it won't be able to help them with their health effectively. Current research on the use of AI in healthcare shows mixed results. On the one hand, some studies show that people do trust AI for things like diagnosis, treatment, and monitoring, with around 80% of Americans willing to use these tools to help manage their health.
On the other hand, 60% of people in the same study were uncomfortable with healthcare providers relying on AI for medical care. So there's still some work to be done in building public trust. That's what companies like MedBright hope to achieve with their AI tool: to build trust in technology by making the hospital and human clinicians easier to access and better equipped to do their jobs.
Artificial intelligence, machine learning, algorithms—whatever you decide to call these tools, they are transforming our world at a much faster pace than we ever could have imagined. We talk a lot about the dangers of AI, but the truth is that in the right hands, artificial intelligence can do incredible things. Who knows? Maybe one day it'll find the cure for cancer itself. And even if it doesn't, it'll at least help us find it far sooner than we could on our own.
What if you were able to have your loved ones live on with you long after they're gone? To hear their voice, experience their laugh, get their advice, and tell inside jokes that only the two of you know? If someone told you they could make that happen, would you take them up on that offer?
In 2017, John Meyer, the CEO of an artificial intelligence company called Forever Voices, did just that. He developed a bot version of his father, who had recently passed away. He could chat with his dad whenever he wanted, engage with him, and for a moment escape the pain of his being gone. Since then, the AI market for bots based on real people, influencers, and celebrities has exploded.
Companies have been built and rebuilt to capitalize on the AI craze, but none has more potential for influence than this one: Meta. So when Meta introduced its new AI features, tech reporters and regular users alike leaned in. Meta's new features include customized stickers, image editing, and an AI assistant. And one development in particular has thrown everyone for a loop: a new cast of AI bots.
These bots aren't your run-of-the-mill AI bots, though; each one has a unique backstory and expertise in a particular niche. They have profiles on Instagram and Facebook, and most importantly, they're voiced by cultural icons and influencers like Tom Brady, Naomi Osaka, Kendall Jenner, MrBeast, and Paris Hilton. But confusingly, the characters are distinct from their instantly recognizable celebrity voices. You're not chatting sports with Tom Brady, but rather with a guy named Bru who just so happens to look and sound exactly like Tom Brady.
You can talk Dungeons and Dragons with the Dungeon Master, voiced by Snoop Dogg, or look for advice from Kendall Jenner's AI, Billie, your "no-BS, ride-or-die companion." Some of these characters, like Jenner's, make sense; others leave you wondering what the connection even is. For example, Paris Hilton's is a crime-solving detective—what's the connection there?
Ironically, these bots were unveiled at Meta's annual product showcase, Connect, at the same time the actors' union, the Screen Actors Guild, was on strike, partially over demands around limiting AI-generated content that threatens to put actors out of work. So how did Meta get a bunch of non-actor celebrities to give away their likeness? Well, they didn't give it away at all; they were reportedly paid up to $5 million each for six hours of work and endless usage of their face and voice.
Meta's deep pockets and cutting-edge AI technology, called Llama, positioned the company perfectly to take on such a high-profile AI project. Unfortunately, a lot of the new bots have been widely panned as creepy and confusing. Chatting with AI Tom Brady, or Bru, might be a fun novelty at first, but it quickly devolves into a far less interesting conversation about football than you'd expect with the actual Tom Brady. Novelty, it turns out, wears off pretty quickly.
So why is Meta taking such a big chance on this new chat bot program that seems doomed to fail from day one? Well, just like many others, it's trying to win the artificial intelligence market. There's never been a more exciting and competitive time for AI, and Meta is trying to do things a little differently than its main competitors, like OpenAI.
Llama, its homegrown tech, is open-source, which means Meta gives developers around the globe access to its software. This stands in stark contrast to the technology behind ChatGPT, which OpenAI keeps under wraps. Meta compares this strategy to Linux, the open-source alternative to Windows that, through the '90s and 2000s, made its way into corporate servers worldwide and became a key component of the modern market. Meta hopes Llama will have the same effect: by making the technology open-source, it lets third parties make improvements that could boost efficiency and ultimately make the AI software cheaper for Meta to run.
And what better way to keep its software relevant than creating a pop culture moment using Snoop Dogg or Paris Hilton AI bots? Ultimately, the idea isn't that original; it's the same concept used by another company called Replika, which creates chatbots and lets users design and interact with their own AI companions. This time, it's just with famous people.
Meta CEO Mark Zuckerberg's vision for these bots isn't just to have a famous face to look at; he bills them as different AIs for different things. He wants the AI bots to help users not only decide what to have for lunch or what to wear to a wedding, but also to create travel itineraries or execute recipe ideas with experts like TV host Padma Lakshmi and chef Roy Choi.
The goal, which may or may not have been reached, is to normalize these chat bots by making them feel both familiar and distinct. In that vein, the celebrity strategy makes sense—seeing a celebrity's face is more enticing than just a randomly generated AI face that we don't recognize but might vaguely look like our mail carrier.
Also, as a society, we've proven our collective obsession with and trust in celebrities. We consider them credible on a particular topic because if they've achieved this level of success, then they must somewhat know what they're talking about, right? This kind of aspirational appeal brings out strong emotions in users looking to emulate a celebrity's lifestyle or attributes. Meta hopes that giving unlimited access to that celebrity at our fingertips will make users feel like they're getting closer and closer to the life they want to lead.
But the difference here is that as much as we might admire Tom Brady for his talented mental and physical capabilities, we're not actually getting those capabilities through his AI. We're just getting what Brew, who happens to look and sound like Tom Brady, can scrape from the internet. The ultimate goal here might not be to make us believe we're talking to Naomi Osaka about tennis but to keep us engaged with her, so we spend more time on our Meta app of choice.
The goal is also to get you to give Meta as much data about your personal life as possible. The more you talk to Bru, the more you reveal about yourself. Meta can then use this information to sell you even more personalized ads, and worse, it can sell that data to brokers who sell it on to other companies that want to sell you stuff. Meanwhile, Meta is watching its young users, the very people it's trying to retain, leave in a mass exodus for trendier apps like TikTok. To keep up with other AI companies like OpenAI, Google, and Microsoft, Meta needs to hold onto as much of its influential audience as possible.
And no one is more influential on the future of technology than young people. But the reality of these new AI celebrities is that, unlike a conversation with a real-life celebrity or hero, you'll probably leave disappointed. So far, the chats seem awkward and read more like lines dreamed up by a Facebook executive trying to talk to a Gen Zer than an authentic exchange. And you're not even chatting with the celebrity avatar the entire time; you're mostly texting with them, punctuated by the occasional video where you might, for a second, feel like you and Snoop Dogg are BFFs.
These chatbots, like others, present a larger issue: misinformation. Chatbots easily generate false or misleading information, a phenomenon called "hallucination." That's because generative AI like Llama relies on algorithms that analyze how humans string words together on the internet. Chatbots learn how to talk, and what to talk about, by analyzing massive amounts of digital text. They're guessing the next word in a sequence of words, like a mega-powerful autocomplete tool.
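Here's a tiny illustration of that autocomplete idea: a bigram model that counts which word follows which in a toy corpus, then samples accordingly. Real LLMs use deep neural networks over tokens and vastly more data, but the predict-the-next-word core is the same; the corpus and output here are made up.

```python
# A toy next-word predictor: count word-pair frequencies, then generate
# text by repeatedly sampling a likely next word.
import random
from collections import Counter, defaultdict

corpus = (
    "the dog chased the cat . the cat chased the mouse . "
    "the mouse ran into the house . the dog ran into the yard ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break
        # Sample the next word in proportion to how often it was seen.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog chased the mouse ran into the house"
```

Notice that the model has no idea whether its output is true; it only knows what tends to follow what. That is the root of hallucination.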
And because chatbots are just scraping the internet to figure out which words to say next, they are susceptible to the same false information we are. The difference is that when we do a simple search, we can usually tell a trustworthy source from a misleading one. Chatbots, at least for now, often don't have that skill. Our discernment as real-life human beings also comes into play when we're talking to a run-of-the-mill chatbot like ChatGPT.
We know it's not a real person; it doesn't have a face or a voice that tries to create some kind of identity. The new Meta AI chatbots are the opposite. The goal of using celebrities is to trick the part of our brain that wants to identify the chatbot as what it is: software. Software with the face and likeness of Paris Hilton doesn't really feel like software.
These Meta celebrity chatbots are attempting to break down a critical boundary between the real and artificial worlds by trying to convince us—successfully or not—that we're talking not just to real people but to some of the most recognizable people in the world. They are companions who reel us into conversation. Meta wants us to feel connected to these chatbots, not just because they have information but because we can relate to them. And if we relate to them, we're more likely to stay logged into the app.
Reportedly, if you say goodbye to some of these Meta AI chatbots, they politely try to get you to stay, like a best friend begging you to stay at the party for just a few more minutes. Meta is betting that we will form relationships with the chatbot characters, but that's not necessarily a good thing. Parasocial relationships are non-reciprocal connections that often form between a fan and a celebrity, or in this case, an AI that looks like a celebrity.
But for some people, these bonds can feel real and lead to emotional turmoil. The movie "Her" showed that a relationship between a person searching for connection and an AI that provides it isn't to be taken lightly. And while "Her" might have seemed like a fun idea for a movie at the time, it's now the world we're quickly approaching.
A 2021 study from the US Bureau of Labor Statistics found that people spend less than an hour a day socializing, even with members of their own households. In contrast, we spend about three hours a day engaging with media like television or social media. The amount of time we spend online makes it easy to form parasocial relationships with celebrities and influencers. Online, we feel like we know them, but usually we don't.
The relationship exists only for us, not them. These kinds of relationships can lead to materialism or even parasocial breakups, which can have lasting emotional damage—just like a real-life heartbreak. By feeling so close to celebrities online, we fall into an illusion of intimacy. That delusion goes even further when you've got AI bots that look and sound like famous people.
Because while you might obsess over your favorite influencer's outfits or what they eat for dinner, the fact that they never talk back to you is a constant reminder that you aren't actually in their life. But if their face was on your phone, talking back to you in their voice—even if the thoughts weren't their own—wouldn't that complicate the emotions you have toward them?
Users of these Meta bots might think they're getting more deeply involved with their favorite celebrities, but they're not. What will really push these parasocial relationships over the edge is when celebrities decide to create full AI versions of themselves. For now, Meta has limited the actual celebrity to a very small portion of these chat bots, but what if everything they said back to you was actually based on their real personality? That would probably be more enticing.
Of course, there are real concerns about creating full AI versions of celebrities. It might help them better interact with fans—positive or negative, depending on which famous person you ask—but it could also lead to the creation of videos of famous people saying or doing something terrible. If the internet was suddenly filled with AI versions of our most famous people, how would we deduce what's real and what isn't?
Many studios have been resistant to the striking actors' demands because they, just like the actors, know there's enormous potential in AI versions of performers. And now they don't need to look any further than these Meta celebrity chatbots to see what that path might look like. If six hours of work from Tom Brady can create video that realistic, then there's seemingly no limit to how the technology could be used—for better or for worse.
Some, trying to get ahead of it, have expressed concerns. The singer Grimes said she would split the royalties with anyone who successfully used her voice in an AI-generated song. Caryn Marjorie, a 23-year-old influencer, created a virtual version of herself as a romantic companion for any fans willing to pay. As for Meta, time will tell the fate of its new AI chatbots.
Will the novelty wear off? Will people get sick of boring conversations with someone they expect to be anything but boring? Is a chat with fake Snoop Dogg about Dungeons and Dragons really more exciting than talking to real humans about it? Probably not. But even if these new AI tools seem lackluster, they're certainly a sign of what's to come: a world in which we are potentially more connected to AI than our own family; a world in which celebrities become far more accessible to ordinary people than we could have ever dreamed.
Do we want that world? Meta sure does.