Algorithms are Destroying Society
In 2013, Eric Loomis was pulled over by the police for driving a car that had been used in a shooting. A shooting, mind you, that he wasn't involved in at all. After getting arrested and taken to court, he pleaded guilty to attempting to flee an officer and no contest to operating a vehicle without the owner's permission. His crimes didn't mandate prison time, yet he was given an 11-year sentence, with six of those years to be served behind bars and the remaining five under extended supervision. Not because of the decision of a judge or jury of his peers, but because an algorithm said so.
The judge in charge of Mr. Loomis's case determined that he had a high risk of recidivism through the use of the Correctional Offender Management Profiling for Alternative Sanctions risk-assessment algorithm, or COMPAS. Without questioning the algorithm's output, the judge denied Loomis probation and incarcerated him for a crime that usually wouldn't carry any time at all. What has society become if we can leave the fate of a person's life in the hands of an algorithm? When we take the recommendation of a machine as truth, even when it seems so unreasonable and inhumane?
Even more disturbing is the fact that the general public doesn't know how COMPAS works. The engineers behind it have refused to disclose how it makes its recommendations, and no existing law obliges them to. Yet we're all supposed to simply trust and adhere to everything it says. Reading about the story, a few important questions come to mind: How much do algorithms control our lives, and ultimately, can we trust them?
It's been roughly 10 years since Eric Loomis's sentencing, and algorithms are now woven far more deeply into our daily lives. From the time you wake up to the time you go to bed, you're constantly interacting with dozens, maybe even hundreds of algorithms. Let's say you wake up, tap open your screen, and do a quick search for a place near you to eat breakfast. In that one act, you've triggered Google's complex algorithm, which matches your keywords to websites and blog posts to show you the answers most relevant to you.
When you click on a website, an algorithm is used to serve you ads on the side of the page. Those ads might be products you've searched for before, stores near your location, or even something you've only spoken to someone about. You then try to message a friend to join you for your meal. When you open any social media app today, your feed no longer simply displays the most recent posts by the people you follow. Instead, what you see is best exemplified by TikTok's For You page: complex mathematical equations behind the scenes decide which posts are most relevant to you based on your viewing history on the platform.
YouTube, Twitter, Facebook, and most notoriously TikTok all use these recommendation systems to get you to interact with the content their machines think is right for you. And it's not just social media. Netflix emails you recommendations for movies to watch based on what you've already seen. Amazon suggests products based on what you previously bought. And probably the most sinister of all, Tinder recommends the person you're supposed to spend the rest of your life with, or at least that night.
These might seem like trivial matters, but it's more than that. Algorithms are also used to determine who needs more health care, and when you have your day in court, a computer program can decide whether you'll spend the next decade of your life behind bars for a crime that usually doesn't carry any time. One of the most dangerous things about algorithms is the data used to power them, because the more data you feed into an algorithm, the better its results.
So, where do companies get this data? From their users, like you and me. Most of the time, giving out this information is harmless, but a lot of the time, these companies sell your information to data brokers, who then sell that data to other companies that want to sell you stuff. That's why you keep getting targeted ads from random companies you've never heard of before. What's worse is that these data brokers are often targeted by nefarious actors who steal all the information they hold in data breaches. According to a report from the Identity Theft Resource Center, there were 68% more breaches in 2021 than in 2020, and that number keeps climbing.
A few months ago, my friend got a message from Google telling him that some of his passwords were found in a data breach at a company he had never heard of before. Right after, he started getting personalized email ads from scam companies. This is how scammers are able to figure out your phone number, name, and even your address. The good news is that you can get these data brokers to delete the information they have about you. Sadly, doing it manually could take years. This is why I love using the sponsor of today's video, Incogni. All you have to do is create an account and grant them the right to work for you, and that's it.
Incogni will reach out to data brokers on your behalf to request that all your personal data be deleted, and it will deal with any objections from their end. To get started, sign up using the link in the description. The first 100 people to use code "APERTURE" with the link below will get 20% off Incogni. It's completely risk-free for 30 days, so I encourage every one of you to at least give it a try, and if you're not happy, you'll get a full refund. But I can assure you, when you see just how many data brokers have your information, you'll definitely want to keep your subscription.
Back to our story. I'm not saying that all algorithms are bad and we should get rid of them. An algorithm is probably the reason you're watching this video in the first place. I'm saying we, as a society, need to make some changes to the way we currently interact with and use these systems. One of the scariest things about algorithms is that they're built and altered in a black box with little oversight. The engineers behind them determine what we see and don't see. They classify, sort, order, and rank, and we don't get to know how or why. Even the government doesn't get to know how and why, and if it did, would it understand?
The engineers themselves often don't know why an algorithm behaves the way it does. They use AI and machine learning, which can make the outcomes hard to predict; the systems become a mystery even to their makers. When companies like Google or Facebook are challenged about their platforms after something terrible happens, they hide behind the mythos of the algorithm: their cold, unbiased systems. They suggest that any errors are human, not machine. It's exactly this notion of the algorithm that is potentially dangerous. We think of these systems as pillars of objectivity, incapable of the kinds of biases that corrupt human society. But are they genuinely unbiased? Are they pure instruments of rationality?
As much as big tech companies would like you to believe they are, the sad truth is they are not. When engineers choose how to classify and sort, they're using pre-existing classifications that are already filled with bias, and their methods of sorting enforce biases that can have real negative consequences. In 2019, an algorithm used on more than 200 million patients in U.S. hospitals to determine who would need more care was found to discriminate against black patients anyway, even though race wasn't among its inputs. The machine determined that they required less care than white patients. How did this happen if race wasn't even an input, you might ask? While race itself wasn't in the equation, previous health care spending was a determining factor in deciding whether someone would need more care. Because black patients have historically spent less on health care, the algorithm concluded they required less care, an incorrect blanket conclusion for situations that should be evaluated case by case.
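To make the mechanism concrete, here is a minimal sketch in Python. The numbers and the model are entirely made up and are not the actual hospital algorithm; the point is only to show how a proxy variable like past spending can smuggle bias into a system that never sees race at all.

```python
# Minimal sketch with hypothetical data (not the real hospital model):
# an algorithm that predicts "who needs more care" from past spending will
# under-serve a group that has historically spent less, even when true need is equal.
import random

random.seed(0)

def simulate_patient(group):
    """True medical need is identical across groups; access to care (and thus spending) is not."""
    need = random.gauss(50, 10)                       # same need distribution for both groups
    access = 1.0 if group == "A" else 0.7             # group B has historically spent less
    past_spending = need * access + random.gauss(0, 5)
    return need, past_spending

patients = [("A", *simulate_patient("A")) for _ in range(5000)] + \
           [("B", *simulate_patient("B")) for _ in range(5000)]

# The "algorithm": flag the top half of patients by past spending for extra care.
threshold = sorted(p[2] for p in patients)[len(patients) // 2]

for g in ("A", "B"):
    flagged = [p for p in patients if p[0] == g and p[2] >= threshold]
    print(f"group {g}: {100 * len(flagged) / 5000:.1f}% flagged for extra care")
# Despite identical need, group B is flagged far less often: the bias rides in on the proxy.
```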
Although the racial bias was unintended, it still occurred as a result of the engineers' design choices. It's because of issues like these that we can't hide behind the myth of the infallible machine. Biases like these will exist in machines as long as humans are the ones building them. And there is one bias that exists in almost every algorithm we use today, with far-reaching consequences. Meta, Twitter, Google, Amazon, Netflix, Tinder: most tech companies, and the platforms they offer you and me as services, are designed for one thing and one thing alone: profit.
These platforms generate revenue primarily by selling ads, and to generate more ad revenue, they try to keep you on their platforms longer, because the longer you're there, the more ads you'll see and the more money they make. Take YouTube, for example. Three main things make any video successful on the platform: click-through rate, watch time, and session time. So all YouTube cares about is whether you can get people to start watching your video and keep them watching for as long as possible, so it can serve them more ads.
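Here's a rough sketch, in Python, of what ranking purely on those three signals looks like. This is an illustration with invented numbers, not YouTube's actual (and non-public) system, but it shows why content optimized for engagement floats to the top regardless of quality.

```python
# Illustrative sketch only: rank videos purely on click-through rate, watch time,
# and session time. Nothing in this score measures accuracy, quality, or well-being.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    click_through_rate: float     # fraction of impressions that become clicks
    avg_watch_minutes: float      # how long viewers stay on this video
    session_minutes_after: float  # how long viewers keep watching anything afterwards

def engagement_score(v: Video) -> float:
    # Expected minutes of ad-served attention per impression shown.
    return v.click_through_rate * (v.avg_watch_minutes + v.session_minutes_after)

candidates = [
    Video("Calm, factual explainer", 0.04, 6.0, 10.0),
    Video("Clickbait conspiracy thumbnail", 0.12, 8.0, 25.0),
]

for v in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(v):6.2f}  {v.title}")
# The outrage-bait video wins the ranking, which is exactly the structural problem
# described above: the metric rewards attention, not truth.
```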
For the most part, this works as it's supposed to, and people get served content they enjoy but would never have found on their own. As with everything in life, though, there are downsides. People have learned to game the system by using clickbait to lure viewers in and then pushing conspiracy theories that keep people glued to their screens, whether the information is factual or not. YouTube's algorithm has also been accused of having a radicalizing effect on its viewers: moderate content can lead to recommendations of more extreme content, which leads people down the notorious rabbit hole. You can start by watching videos about jogging, and YouTube will keep recommending videos that push you slightly further, until one day you wake up and you're watching videos about running an ultramarathon.
Facebook's algorithm shows you more content from friends whose posts you've liked or read in the past. This process slowly funnels you into a bubble where you're mostly reading opinions you already hold, reinforcing them in your mind. The goal of this approach is, of course, to keep you on the platform longer with views you agree with. The consequence, though, is that many harmful beliefs are cemented in users' heads instead of being challenged. The more you think about the algorithms of social media, the more they start to seem like programs for creating social problems for the sake of profit.
So, if that's the case, are all algorithms just evil piles of code determined to doom us all? Maybe, but maybe not. They have extraordinary benefits to offer when used correctly. A data set of 678 nuns from the Nun Study, a research project started in 1986 on the development of dementia and Alzheimer's, showed something very peculiar. Researchers tried to see whether they could spot any patterns in the data suggesting a relationship between something in a person's early life and the onset of these diseases later on, but to no avail. The team also had access to the letters the nuns had written decades prior, when they were entering the sisterhood around ages 19 and 20.
From these letters, an algorithm was able to predict with incredible accuracy which nuns would go on to develop dementia in their elderly years. This is what algorithms are great at: comparing data sets and finding the tiny patterns that humans are likely to miss. They're sensitive to variations in data and can surface patterns that lead to reliable predictions of possible outcomes. Today, algorithms are used to estimate the likelihood of developing breast cancer and to build better models for tackling climate change. But the machine isn't great on its own. Every potential positive here only works with a human behind it.
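For a sense of how that kind of prediction works in general, here's a minimal Python sketch. The essays, the features, and the labels are all hypothetical; this is not the Nun Study's actual model, only an illustration of turning text into measurable signals that a classifier could then learn from.

```python
# Minimal sketch of the general technique (hypothetical data, not the Nun Study's model):
# convert each early-life essay into crude linguistic features, then let a simple
# classifier look for regularities a human reader would likely miss.
import re

def linguistic_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vocabulary_richness": len(set(words)) / max(len(words), 1),
    }

# Toy examples: (essay excerpt, developed dementia decades later)
essays = [
    ("I was born in 1913. I like to read. God is good.", True),
    ("After my father's unexpected death, which shook our small farming "
     "community, I resolved to devote whatever talents I possess to teaching.", False),
]

for text, outcome in essays:
    print(linguistic_features(text), "-> dementia later:", outcome)
# With hundreds of essays, a model fitted to features like these can separate the
# groups; the signal lives in subtle regularities of language, exactly the kind of
# pattern algorithms excel at surfacing.
```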
Algorithms can act as the first layer of screening for breast cancer, but a human has to act as the necessary second layer that verifies the results. Using an algorithm to determine an appropriate jail sentence might one day make sense, but only if a human decides whether the generated output is sensible. One of the main problems with Eric Loomis's case is that the judge didn't question the algorithm's recommendation. He simply accepted the supposed objectivity of the machine and sent a man to prison for a crime that didn't warrant it.
As it stands now, we just seem to be part of this enormous social experiment being run by tech gurus, and every year or so, another social experiment is added to the mix with its own unique set of social consequences. More recently, we're discovering what a rapid stream of bite-size videos does to teenagers or what a completely user-generated game does to tweens. So far, this video has been pretty hard on the big tech companies, but I think it's also really important to acknowledge that they are trying to address some of these issues with algorithms.
YouTube, for example, has changed its algorithm to include quality and authority as measures for determining whether a video is recommended. Facebook has limited its ad-targeting options to try to avoid another Cambridge Analytica scandal, where user data was distributed without consent for political purposes. Are these adjustments to the algorithm helping? Yes, but not as much as necessary. More importantly, these efforts point to two things. One, human intervention in algorithms is not only necessary but needs a much stronger presence. Two, tinkering with the algorithm is probably not going to resolve the consequences of its most significant bias: profit-seeking.
Keeping people on a platform is always going to be easier with content that sparks the most outrage. That's not always the case, of course. There is great content on YouTube and earnest viewers like you watching this video right now. But for every creator seeking to share legitimate information, there seem to be several others blatantly exploiting the algorithm for a quick buck. How can we take these platforms back from them? The sad truth is we can't. The algorithms need to change; they need to put human welfare above profits.
To make that world possible, we need to stop designing machines that take advantage of our psychological weaknesses. We need to be more critical of the algorithm. We need to dismantle the notion that the algorithm is all-knowing, objective, and rational. The black boxes need to be opened up, and our blind trust in these systems needs to be challenged at every turn. To paraphrase Tristan Harris, co-founder of the Center for Humane Technology: we're all watching for the moment when technology will overpower human strength and intelligence, but there's a much earlier moment, when technology overwhelms human weaknesses. That point is being crossed right now, and it's reducing our attention spans, ruining our relationships, and destroying our communities. It's downgrading humans.