The Dark Side of Social Media
Zimmermann's tics are loud and distinctive, ones that he shared in his videos with millions of followers, and they were identical to those of the new-style patients suddenly appearing all over the world.
After making this connection, researchers found that all the patients who suddenly claimed to have tics were also fans of Zimmermann. When Dr. Kirsten Müller-Vahl confronted her distressed patients and told them that none of them actually had Tourette's, most of them recovered immediately. But despite their recoveries, this case presented researchers with an unprecedented psychological mystery, showing how imagined symptoms can spread purely through social media videos.
While these teenagers didn't suffer from Zimmermann's condition, something triggered their minds to believe that they did. All of a sudden, they simultaneously and independently developed these TikTok tics. With TikTok becoming one of the most used social media apps today, it's becoming even more important to consider: could TikTok be causing a mass psychosis?
Before you answer that, I want to take a moment to thank the sponsor for today's video, Masterworks. For all the progress the human race has made on this planet, we still haven't figured out how to tell the future. As a result, we're burdened with the worries of what it might be. To help future generations, we rely on art to pass down our history and information.
Thanks to Masterworks, we can also rely on it to invest in the future. Looking at the last two years, it's obvious that no matter how much we think we know about the future, we can never accurately predict it. Even tech companies that alter the way we live on a daily basis have been wrecked by volatility in the last year, with experts like JP Morgan's CEO Jamie Dimon predicting that stocks could drop yet another 20%.
Mass psychogenic illness, or MPI, also known as mass sociogenic illness, is a real occurrence where a group of people starts feeling real physical symptoms at the same time, even though there's no physical or environmental reason for them to be sick. The dancing plagues of the Middle Ages were probably the most bizarre form of MPI. In July 1518, residents of the city of Strasbourg were struck by an uncontrollable urge to dance. It started with one woman stepping into the street and dancing for nearly a week before she was joined by three dozen others who also seemed to have the same uncontrollable urge. The town then hired musicians to provide backing music, which only worsened the situation as more people joined in. It wasn't long before the marathon started to take its toll. By August, the dancing plague had 400 people in its clutches, with 100 of them dancing themselves to death.
More recent but less dramatic cases include a boarding school in Mexico, where one student developed leg pain and paralysis, and soon hundreds of schoolmates began experiencing the same symptoms. In East Africa, three girls who started laughing uncontrollably managed to infect over 100 other students, forcing their school to close down. The triggers for mass psychogenic illnesses cannot be entirely isolated, but the illnesses spread easily among people who share the same anxieties, fears, and sense of community. This is what makes the TikTok tics an even more intriguing case study.
TikTok rapidly gained popularity during the pandemic as a trendy dance video and crazy challenge app. But since then, it has established itself as the world's most popular app, with nearly three times as many users as Twitter. A recent study even found that it has dethroned Google as the most popular domain in 2021, with most of its visitors using the app as a primary search and discovery platform. With over 1 billion active monthly users who spend an average of 95 minutes a day using the app, it has become more important than ever to understand what kind of effects TikTok has on our brains.
While the effects of social media on our mental state have been a topic of debate for the past decade, the emergence of TikTok is different. It allows users to watch an unlimited stream of new content, observe trends rise and fall daily, and find something new with each swipe. Because the videos are often extremely short, the user can quickly decide whether to continue watching or move on to something more interesting. This constant stream of information can narrow and exhaust our attention span over time, limit our concentration, and affect our short-term memory.
TikTok places constant pressure on creators to deliver content in 60 seconds or less, making anything longer feel like a waste of our time. Some users have reported that they no longer have the patience to watch 10-minute videos on YouTube, even when the topic interests them. This reduction in our attention spans can bring a variety of risks: poor academic performance, communication struggles, social isolation, relationship difficulties, stress, and anxiety. A recent study conducted by Curtin University in Australia has shown that heavy use of social media can lead to problematic mental health consequences, especially for people with lower attention spans, which brings us back to the TikTok tics.
This curious Tourette's case was the first documented form of social media-induced sociogenic illness. In addition to the teenagers in Germany, about 50 patients across the globe presented the same symptoms, demonstrating the domino effect of our social media landscape. Constant exposure can lead to low attention spans in those who never had the issue before, which can lead to psychological illnesses, which can in turn spread imagined conditions around the world in a way we never even dreamt of before the age of the internet.
According to TikTok, videos tagged with the hashtag Tourette's have been viewed more than 5 billion times. The unexpected appeal of these videos among teenagers can be attributed to a need to stand out or be different. But the truth is, videos like these also provide a sense of community, acceptance, sympathy, and validation, all of which seem to be present in patients suffering from MPI. Even though Zimmermann's intention was to show his followers what it's like living with Tourette's, his videos also normalized violating social conventions and proved to young, impressionable viewers that the more disruptive you are, the more viral you'll go.
This is the basic concept behind TikTok. The whole idea is to promote videos that can go viral in an instant and push young content creators to produce similar content. That's why the first thing you see when you open the app is the For You page, where short videos carefully selected to grab your attention are displayed. It's why the videos are on autoplay and shown in an endless scroll, with no indication of your progress or of how long the experience has lasted. All these features are intentionally designed to grab and keep the attention of young users for as long as possible and to urge them to create similar content if they want that attention for themselves.
In a recent interview, comedian Andrew Schulz talked about how TikTok's algorithm promotes useless content in the West but shows entirely different content centered around innovation, architecture, and science in China. Like Schulz, many argue that TikTok is intentionally making people dumb by manipulating user behavior and pushing mindless content on its impressionable young viewers. To that point, TikTok trends have included things like the Blackout Challenge, where children were dared to hold their breath until they fainted; the Penny Challenge, which encouraged kids to slide pennies behind partially plugged-in phone chargers, risking sparks and fires; and the Tooth File Challenge, where young users caused permanent damage to their teeth just for some of that desired attention.
The more these videos went viral, the more content creators saw them as an opportunity to gain traction on the platform. This in itself feeds the vicious cycle of harmful content that leads to illnesses like the TikTok tics and other mental disorders. Today, TikTok is under scrutiny in a number of U.S. states seeking to determine its influence on young users' mental health. According to Dr. David Barnhart, a clinical mental health counselor at Behavioral Sciences of Alabama, all social media platforms impact how a person views themselves. But because of TikTok's rapid-fire influx of content, users are exposed to dozens of videos within minutes, which makes the effect much more devastating than on other platforms.
Users can easily become addicted to the app and subject themselves to constant stimulation as a result. This constant stimulation increases stress and anxiety levels, especially with the countless videos that fuel body dissatisfaction, appearance-related anxiety, and much more. Mental health professionals have reported seeing a number of younger patients who spend considerable time on TikTok claiming to have severe mental health conditions such as schizophrenia and bipolar disorder. The main issue with such claims is that more exposure to targeted content can influence teenagers to misdiagnose themselves with a mental illness without consulting a professional, only because they can relate to some of the symptoms of a TikTok influencer they follow.
Just like the TikTok tics case, young adults who self-diagnose can also do so from a desire to feel like they're part of a community or to rebel against social constructs, and in the process, they can genuinely believe that they suffer from a specific illness, even when they don't. The flip side to all of this is that there are several positives that can come from highlighting mental health issues online. The sense of community, which can be harmful in some cases, can also be helpful in normalizing these conditions and sending the message that people aren't alone. Young people suffering from their own issues can come together and support each other with helpful tips on how to deal with depression, anxiety, and other hurdles in their daily lives.
Videos discussing mental health get millions of views on TikTok every day and draw attention to symptoms that some viewers may not have realized were an issue. This can spur people into action and encourage them to seek help. As with all things, there are positives and negatives, which is why experts don't believe that deleting TikTok is the answer. Instead, regulating and monitoring the time spent on the app is key; otherwise, the social media-fueled mental health crisis is only going to get worse.
The age of the internet has brought with it a whole new dimension of concerns, but TikTok itself has changed the online world. It's a cultural phenomenon with a superior algorithm that is unmatched by its social media rivals. It's built to ruthlessly and aggressively collect your data and constantly feed you content that's for you: content that could change the direction of your life, warp your perceptions of the world around you, and even cause mass psychogenic illnesses worldwide.
MPIs have existed for hundreds of years, and yet a lot of the reasons why they happen remain a mystery. But what we can learn from the TikTok tics case is that we're now entering a new era of social media-induced sociogenic psychosis, and apps like TikTok have more control and power over our youth than we thought. The best predictor of future behavior is previous behavior, and based on what we already know, we've most likely not seen the last of social media-induced psychological illnesses.
For now at least, you've had the patience to watch this entire video. Thanks for not swiping away to check out the latest escalator dance clips or watch a guy deep-cleaning a horse's hoof. We've talked extensively about the dangers of TikTok, but what if I told you that Snapchat was way more dangerous? That while TikTok's influence is more subtle and psychological, Snapchat puts young people at immediate, sometimes life-threatening risk.
On the 2nd of October 2016, five men broke into Kim Kardashian West's apartment in Paris and robbed her at gunpoint. The masked men made off with around $10 million worth of jewelry. Luckily, Kim was left unharmed. In an interview with Vice, Yunice Abbas, a member of the infamous "grandpa robbers" gang who allegedly robbed the reality star, said they used Snapchat to figure out everything they needed to know about the operation: where she was, how much jewelry she had with her, and the fact that she was alone in her apartment at the time. Kim had posted a Snapchat story about how she was home but everyone else was going out, informing the robbers of the perfect opportunity to strike.
Since that incident, other criminals have taken to Snapchat to find and rob people in a similar fashion. Now, you might think you're more careful and don't give away personal information in your Snapchat stories, but even little things, like an address on a package you're opening on screen or the number plate of your car, can give criminals a lot of information about you. This is a big problem on every social media platform, but the unique challenge with Snapchat is that it sells you a false sense of security with its supposed privacy features.
If you believe that everything you share on the app is temporary, you’re more likely to share things without giving them much thought, and this is what leads to many of the issues plaguing the app today. To understand how Snapchat's false sense of security negatively affects its users, we have to go back to the very beginning. Reggie Brown was allegedly talking about sexting with some friends when he came up with the idea of a social media app that allowed users to share pictures that disappeared after a few seconds.
For obvious reasons, Brown would then share this idea with his friends Evan Spiegel and Bobby Murphy, and the three of them came together to form Peekaboo. After some shady maneuvers to kick Brown out of the company, Spiegel and Murphy relaunched the app as Snapchat in July 2011. Making the case for Snapchat in the company's first blog post, Spiegel said, “Snapchat isn’t about capturing the traditional Kodak moment; it’s about communicating with the full range of human emotion, not just what appears to be pretty or perfect.”
Like when I think I'm good at imitating the face of a star-nosed mole, or if I want to show my friend the girl I have a crush on. It would be awkward if that got around. And when I'm away at college and miss my mom or my friends. But in actuality, everyone knew what Snapchat was for, at least the very core of it. And the company wasn’t hiding it. Have a look at Snapchat on the Wayback Machine and you’ll see that the company used young scantily dressed models to market the app; it was a subtle nod to the kinds of images the company felt people would use the app for the most.
And people did. The ephemeral nature of the app meant that users could do whatever they wanted without fear that their digital footprint would someday come back to haunt them. But that couldn't be further from the truth. While Snapchat notifies the other party whenever someone takes a screenshot or screen recording of a message, people have found many easy ways to circumvent this for nefarious purposes. This has led to some horrific experiences for users who thought they were sharing media that would disappear, only to find it all over the internet weeks later with no way of taking it down.
There were hundreds if not thousands of sites all over the internet dedicated to sharing revenge porn explicitly sourced from Snapchat: messages, pictures, and videos that senders were convinced would disappear, plastered all over the internet. "My life has just gone through a downward spiral. I'm homeless because of this. I lost my family." These were the words of a victim in the trial of Kevin Bollaert, a man who ran one of the infamous sites where these snaps were re-uploaded. You might think that none of this is Snapchat's fault; after all, adults need to be responsible for their own actions and decisions, and to an extent, you would be right.
But the sad truth is that these issues don't only affect adults. According to DataReportal, 20% of Snapchat's users, around 123 million people, are between the ages of 13 and 17, all of whom are exposed to the same dangers of disappearing messages and the vulnerability they present. The Times in the UK published an investigation into Snapchat that uncovered thousands of cases of pedophiles using the app to request inappropriate pictures from children and trying to groom young teenagers. Teenagers themselves were also found using the platform to share child sexual abuse material.
The self-destructing nature of Snapchat's messages makes it difficult to track the extent of the harm the app has caused and is still causing. The situation has become so bad that police in the UK now investigate around three new child sexual exploitation cases facilitated on Snapchat every day. It's for these reasons that the investigation labeled the app a safe haven for child abuse, which is honestly one of the worst reputations a social media platform could ever have. For an app that allows anyone 13 and above to have an account, Snapchat needs to put far more measures in place to safeguard its own users, and sadly, not much is being done to that effect.
And as if those issues weren't bad enough, in 2017, Snapchat released a feature called Snap Map that allows users to share their live location with friends on the app. On its own, this is already an alarming feature; it saves criminals the trouble of trying to decipher your location through all your posts. All they have to do is make an account, add you as a friend, and once you accept, that’s it—they've got everything they need to cause terrible damage.
Let's say you're careful not to add people you don't know in real life as friends: Snap Map still gives stalkers the perfect platform to find and follow their victims around. And if shows like "You" have taught us anything, it's that stalkers are usually closer to you than you think and way more dangerous than you can imagine. Child molesters, human traffickers, groomers: think about the fact that all these dangerous people can find the location of young kids in just a few clicks. It's mind-boggling when you consider just how risky it is.
So far, we've talked about the unique dangers of using Snapchat, but let's not forget that the app also suffers from the same problems as most other social media apps, things like cyberbullying. The anonymity of the internet has created a safe space for cyberbullies, hate speech, and just vile comments in general. The ephemeral nature of Snapchat messages makes that problem even worse. People can send the most horrid messages without fear, because they know that once the message is read, it leaves no trace. This gives the receiver a memory they can't run away from, and the sender faces no repercussions because there's no evidence unless the victim screenshots it.
Then there's addiction. Figures from the Healthy Journal show that teenagers have an average daily screen time of 8 and a half hours; that's more than an entire adult workday spent in front of a screen. To be fair, more of that time is spent on TikTok, thanks to its fast-paced content and impressively accurate algorithm, than on other social media platforms like Instagram and even YouTube. But Snapchat has a trick up its sleeve that keeps its users coming back for more: streaks.
The record for the longest-running snap streak is currently held by Hannah and Lauren, best friends who have been sending each other a picture or video every single day since the feature was first released on April 6th, 2015. If somehow you don't know what they are, snap streaks form when you and a friend send each other a picture or a video within every 24-hour window for at least three consecutive days. A streak is represented by a fire emoji alongside the number of days you've snapped each other, and it creates a huge incentive to use the app at least once every 24 hours to keep it going.
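To make the mechanic concrete, here's a minimal sketch in Python of how streak logic like this could work. The function names, the day-by-day model, and the threshold encoding are assumptions based only on the rules just described, not Snapchat's actual implementation.

```python
STREAK_THRESHOLD = 3  # streak badge appears after 3 consecutive days of snapping

def advance_day(streak_days: int, a_snapped: bool, b_snapped: bool) -> int:
    """Advance the counter by one day (hypothetical logic): both friends
    must have sent a snap within that day's 24-hour window, or the
    streak resets to zero."""
    return streak_days + 1 if (a_snapped and b_snapped) else 0

def streak_badge(streak_days: int) -> str:
    # The fire emoji and day count only display once the threshold is met.
    return f"🔥 {streak_days}" if streak_days >= STREAK_THRESHOLD else ""

# Three straight days of mutual snaps starts a streak; one missed day wipes it out.
streak = 0
for a_snapped, b_snapped in [(True, True), (True, True), (True, True), (True, False)]:
    streak = advance_day(streak, a_snapped, b_snapped)
    print(streak, streak_badge(streak))
```

Notice that the reset-to-zero branch is the entire psychological hook: a single missed day erases weeks of accumulated count, which is exactly what pressures users to open the app every 24 hours.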
Snapchat uses tricks like this all over the app, like friend emojis, which place little emojis next to your friends' names to indicate how often the two of you interact. Like this one, when you and a person have been each other's best friend for two months in a row; or this one, when they're one of your best friends; and this one, when your snap streak is about to end, so you remember to send that snap. Features like these create a fear of missing out in users' hearts, which encourages them to stay on the app for as long as possible, even when it might not be healthy to do so.
When it's not addiction, there's also the problem of perfectionism. Remember what Spiegel said in that first blog post about Snapchat? “It’s about communicating with the full range of human emotion, not just what appears to be pretty or perfect.” Perhaps this was the company's original plan, but that quickly changed once they launched filters. Research has shown that people who use filters more frequently often experience increased feelings of dissatisfaction with their real selves. Humans are social animals, and we've always lived and survived in groups, and as a result, we've always compared ourselves to others.
Back when we lived in small hunter-gatherer communities, it was fine because there weren't many people to compare yourself to, and the need to compare wasn't constantly in your face. Sadly, with the rise of globalization and things like television and magazines becoming more widespread, people started comparing themselves to their favorite celebrities, which in itself was already bad enough. But what happens when people start comparing themselves not to other people, but to a digitally altered version of themselves—brighter skin, whiter teeth, more symmetrical features, accentuated cheekbones and jawline? Snapchat made all these available to users at the snap of a finger, pun intended.
The result? A 2021 study carried out by University of London researchers on 175 women and non-binary people between the ages of 18 and 30 found that 94% of participants felt pressure to look a certain way, with over half of them saying the pressure was intense, all because of their use of filters. We're witnessing a generation of people who are no longer satisfied with how they look in real life. Thanks to how much better filters can make them look, young people are considering plastic surgery more than ever in order to look like their filtered selves.
It's gotten so common that cosmetic doctor Tijion Esho coined the term "Snapchat dysmorphia" to describe the phenomenon. Although this video has been pretty critical of the platform, the truth is that Snapchat isn't all terrible; in fact, more than anything, it's a fun messaging app. It helps bridge the gap between Android and iPhone users by providing a universal messaging platform that both the green and blue sides of the world can use. It's also a great way to keep in touch with your friends and family, especially for people who don't see each other often.
With snaps, you can easily share bits about your day without overthinking it, because you know the pictures aren't going to live forever. Snapchat is actually the closest digital messaging platform we have to real-world communication. Our interactions with each other and the world around us are fleeting; we may remember the most important conversations, but most of our everyday interactions just fade away like a snap after 24 hours. Even features like Snap Map allow close friends and family to check up on the whereabouts of one another without any effort; in the case of an emergency, it can be life-saving by giving the authorities immediate access to the location of the victim.
What people need to remember, though, is that Snapchat is not impervious to the problems that affect every other social media platform or even the internet itself. Just like you would on any other platform, be conscious of what you share on Snap, and understand that nothing is ever a complete secret. Before you hit that send button, keep in mind that there's a possibility that whatever you send will get out one day, and as a result, you need to be careful with what you share and, more importantly, who you share it with.
Don't be fooled by the false sense of impermanence the app gives you. Also, remember to take breaks, and don't let features like streaks and friend emojis keep you on the app longer than necessary, because overstimulation could ruin your life. Click the video on your screen right now to find out how and why.
With the cost of living soaring, an economic crisis and wages that can't be stretched far enough to provide an enjoyable life, people are worried about their finances now more than ever. The sad truth is that skimping and saving can only take you so far; as a result, around 46% of Gen Z and 37% of Millennials are working multiple jobs to make ends meet. We’re in an era of the side hustle, and thanks to the internet, you can now easily monetize your hobbies and expertise to help provide that supplemental income.
Some people play video games online and others sell handmade jewelry on Etsy; the online marketplace is thriving. So many sites allow you to sell your goods and services directly to consumers wherever they are in the world. But there's another way to make money online, and one that doesn't require you to spend hours making a bracelet or mastering a video game. The only thing you need is fans. For someone who's been laid off, a young person who's been kicked out of the house, or anyone who needs money immediately, signing up for this platform is very enticing.
They tell you all you need to do is post a few flirty photos and you'll be making loads of money in no time, all from the comfort of your bedroom. But they don’t tell you who the people lurking behind the screen are, or about the system of oppression and the long-term pain for short-term financial gain. This is the dark side of OnlyFans.
In 2016, British businessman Tim Stokely, alongside his older brother Thomas, created a website to help adult entertainers receive payments directly from their followers. Tim noticed that adult performers were promoting and selling their services on social media platforms like Instagram. But as those platforms started cracking down on nudity, performers found it challenging to promote their services, and trying to send content individually to each follower took a lot of work. OnlyFans solved the problem by creating a subscription service where fans could pay to access their favorite performers' content.
In the beginning, only a few creators were frequent users, primarily performers who had already made a name for themselves in adult entertainment. Though initially small, the platform was a precursor for change in the industry; shifting their career online meant that a sex worker now had the option to control the means of production. No more going to set and working under the constraints of directors and producers that might not have always had their best interests in mind. They could now work from home and be self-employed.
They could have complete control over their image and working conditions. In 2018, American Leonid Radvinsky, a veteran of the online adult entertainment scene, bought a 75% share of OnlyFans. He had made millions by creating websites that claimed to sell stolen or hacked passwords to porn sites; in reality, he was getting paid by those sites for directing online traffic their way. After buying OnlyFans, Radvinsky revamped its business model and saw incredible growth.
Then the pandemic happened, and the popularity of OnlyFans increased exponentially, making it the household name it is today. With people out of work and in a vulnerable financial situation, OnlyFans provided an easy revenue stream—a job they could work from home. On the other side of the screen, boredom, loneliness, and isolation drove more people to seek out the services of sex workers online, creating more demand than ever before. Suddenly this small website was a smash hit, and on Twitter and other major media outlets, articles circulated about how much money some women were making—and not just women; men too can earn well on the site.
A lot of people started wondering: should I start an OnlyFans of my own? For new creators, there's the impression that OnlyFans can pull you out of crippling debt or buy you a new house, or that it's easy money: just take off your clothes and pose under flattering light, maybe take a video or two, and once you hit upload, start counting those checks. The reality is that for most people, that's not going to happen. Nearly 75% of all the revenue on OnlyFans goes to the top 10% of creators, and these people are mostly career sex workers or actual celebrities who had devoted followings before their pivot to OnlyFans.
Bella Thorne and Cardi B are some of the most traditionally famous posters on OnlyFans; at one point, Thorne was making $1 million a month on the site while Cardi B raked in close to $10 million. And these are women who don't post nude content, nor do they have to work nearly as hard as the 90% of creators who struggle for a share of the remaining 25% of revenue on the platform. Many of these creators have to work seven-day weeks promoting themselves, marketing, corresponding with brands, and speaking with their subscribers just to earn a livable wage, and many don't even earn enough to make OnlyFans a full-time gig. The average creator only makes $180 a month.
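To put those figures side by side, here's a quick back-of-the-envelope sketch in Python. The creator count and total revenue are invented purely for illustration; only the 75%-to-top-10% split, the $180 average, and the platform's 20% commission (discussed below) come from this video.

```python
PLATFORM_CUT = 0.20  # OnlyFans keeps 20% of earnings (per the figure cited below)

def creator_net(gross: float) -> float:
    """Creator take-home after the platform's commission."""
    return gross * (1 - PLATFORM_CUT)

# To take home the $180/month the average creator reportedly earns,
# you need $225/month in gross subscriptions, e.g. 45 subscribers at $5 each.
print(f"gross needed: ${180 / (1 - PLATFORM_CUT):.0f}")

# Illustrating the concentration with invented totals:
creators = 1_000_000           # hypothetical creator count
monthly_revenue = 100_000_000  # hypothetical gross revenue, in dollars
avg_top = creator_net(0.75 * monthly_revenue) / (0.10 * creators)
avg_rest = creator_net(0.25 * monthly_revenue) / (0.90 * creators)
print(f"avg top-10% creator: ${avg_top:,.0f}/mo; everyone else: ${avg_rest:,.0f}/mo")
```

Whatever totals you plug in, the shape stays the same: the top decile averages 27 times what everyone else does, because three quarters of the pie is split among a tenth of the people.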
Now, OnlyFans has empowered sex workers, primarily those who have worked in the industry for decades, but the way the media covers the success of this minority suggests that anyone can make money this way. It's the classic survivorship bias: we listen to the ones who are thriving so much that we think they're the norm, without realizing that they're the outliers and that the reality for the average creator isn't that glamorous. We're selling a dream to young men and women that sex work is easy money, and as a result, many people join the site and immediately start posting intimate and vulnerable photos without realizing the immense pressure and psychological effects that selling sex can have on a person.
The hard truth is that this industry isn't for everyone, yet the narrative around OnlyFans deceives people into thinking it is for everyone. At the end of the day, OnlyFans takes a 20% cut of your earnings and 0% of your trauma. The platform creates a direct link between your sexuality and your profit. It’s easy to obsess over the statistics and analytics of your channel, but here you're not judging your work or skill; you're judging your body. And it's not just you judging it; putting yourself online like that means your body is out there for public scrutiny.
And if there's one thing the internet loves to do, it's scrutinize. Before you know it, you start feeling less than adequate. Maybe you’d make more money if you had a bigger chest, lost weight, or dyed your hair a different color. Trying to make it on OnlyFans as an inexperienced sex worker changes your relationship with yourself and your sexuality, and it might seem worth it. Stories about people buying houses and paying off debt with money earned on OnlyFans are casual conversation.
The suggestion to start an OnlyFans channel is a flippant response to those with money struggles, when in reality the average creator makes pennies, and many would have been better off using their time to work a minimum wage job. Imagine going through all that only to not make the money you hoped for, and you’d be left feeling like you sold out, yet you never even got paid. Still, for people in desperate situations, even a tiny amount of money could be a good supplement to their income, and the promise of OnlyFans preys on those in vulnerable positions.
The site is notoriously awful at preventing underage users from signing up and takes no responsibility for hosting them. Teenagers can use a fake ID or a relative's passport to circumvent the safeguards currently in place to protect young people. Many minors are posting on the site, and OnlyFans hasn’t taken firm action to deter them. Images of minors are paid for by people much older than them with more money and power. They see a young person in a position to be taken advantage of— a child they can control through money.
This platform allows minors to be roped into exploitative relationships from a young age. Underage creators are vulnerable to the same risks adults are when posting on the site, yet they lack the maturity to understand the consequences. Everything you post on the internet is permanent. Images posted online are downloaded and reposted without consent, and even if one day you choose to stop posting or leave OnlyFans, your content doesn't just cease to exist. Could this negatively impact future relationships or how your family and friends view you? Of course.
The stigma that sex workers face is harsh and demeaning, and some people think they can treat sex workers as less than human, that they deserve any negative consequence that comes their way. Stalking, rape threats, doxing, death threats, harassment, hacking: it's hard to trust that even your so-called fans have your best interests in mind. OnlyFans stars have been the victims of break-ins and blackmail at the hands of their patrons. In one case, a stalker repeatedly broke in and hid in his favorite OnlyFans model's attic. Just do a quick Google search and you'll see countless women's accounts of negative experiences, and to be honest, there's not much the site can do about it.
Also, when money is exchanged, the boundaries between the creator and the fan are blurred. Unhealthy parasocial relationships form whereby the fans think they own a portion of their favorite creator. Voyeurism plays a role too, as some fans insert themselves into made-up fantasies or relationships. It reminds me of the parallels between OnlyFans and online dating: you seek someone to love or give you attention, yet you want to control the interaction altogether.
More of us are conducting all aspects of our lives from the comfort of our home, so we miss out on meaningful human connection and spontaneous interactions with others. This generation is plagued by increased rates of social isolation and loneliness. Young people are looking for intimacy through their phones and screens, but these aren't substitutes for real connections. I talk more about this intersection of love and the internet in the video, “The Dark Side of Online Dating.”
From a subscriber's point of view, the state of OnlyFans is also dark. People are often quick to blame the women who create content for these platforms, but statistically, the average user is a middle-aged married man. What attracts this demographic to the site? It could be loneliness, dissatisfaction in the marriage, or even a particular fetish, so they look online for what they can't have in person or what they feel they lack in their relationships.
Let me clarify that two consenting adults exchanging money for sexual pleasure isn't inherently wrong, but when the dynamic replaces in-person sexual intimacy, OnlyFans users can teeter into porn addiction. Irritability, social isolation, anxiety, and depression are all side effects of too much porn. You might display more violent or aggressive sexual behavior, mirroring fantasies or fetishes you see online—things that a partner might not be comfortable with. It skews how you view your sexual partners and potential partners in person, flattening them to characters on a screen. There’s something about intimacy that can only be achieved with another person.
When your sexuality revolves around people on your computer and the relationships you uphold through your OnlyFans subscriptions, intimacy's indescribable, intangible magic is lost. Sex work is literally the oldest profession in the world, and it's not going anywhere. Technology will continue to develop, with OnlyFans being this generation's preferred form of online adult content. So how can we engage with a platform ethically and humanely?
Well, first off, self-awareness is key on the part of both the creator and the user. Posting or consuming content without considering your "why" could cause misalignment between your values and actions. Just continue to ask yourself questions: Do I feel safe and comfortable showing this part of myself online? Is my use of OnlyFans interfering with relationships in my personal life? How does this platform, and the way it's promoted, affect creators? How do my ethics align with all this? Be brutally honest with your answers. Your honesty sheds light on the dark side of OnlyFans.
You become an informed consumer or creator, making the site more hospitable for everyone. Almost half of the world's population uses one of Meta's services every month. Facebook and Instagram combined hold over 75% of the social media market share, and WhatsApp has become the world's default instant messaging app. This is the story of how Facebook took over the world.
In the early days of Facebook, it was reported that Zuckerberg ended meetings by shouting “domination,” and it's safe to say that he's achieved it. To understand how, we need to rewind nearly 20 years. Remember that old New Yorker cartoon? “On the internet, nobody knows you're a dog.” But in the real world, some of the internet's most influential and successful forces have their own version of that line: “On the internet, nobody knows you're a teenager.”
It's 2005. You're in high school, starting your last year of college, or maybe you’re a young parent trying to stay in touch with friends when you hear about a new website called Facebook. Or maybe you’re just 5 years old, so you sign up. At the time, you had no idea you were looking at a website that would change the world as we know it.
Facebook, of course, didn't invent social networking. That started in 1997 with SixDegrees.com, the first website to feature profiles, friend lists, and the ability to browse other people's connections. Then in 1999, LiveJournal came onto the scene as a way to keep in touch with friends via blogging. By 2007, it had 14 million users and was sold to a large Russian media company. Then there was Friendster in 2002 and MySpace in 2003. By 2005, MySpace was the dominant social network in the United States.
But neither MySpace nor any of us had any idea what was coming. In one of Harvard's dorm rooms, a young Mark Zuckerberg was working on a social networking site that would soon overthrow every competitor in the market. One thing working in Facebook's favor was timing: thanks to the rising availability of broadband, more people were on the internet in the mid-2000s than ever before. And these previous social networks helped Facebook compile a long list of technical and business mistakes to avoid.
But it certainly wasn’t all luck. Zuckerberg and his co-founders built Facebook in a controlled, methodical way. It started at Harvard, then slowly expanded to other universities, high schools, and corporations. It wasn't until September of 2006, after two years of limited availability, that Facebook opened its platform to anyone 13 and over. This slow growth allowed time to perfect the technology and allowed its founders to hire intelligent engineers who constantly added new features.
Facebook quickly gained around 12 million users, and by 2008, just two years after its public release, 100 million people were using Facebook. That same year, Sheryl Sandberg joined the company as Chief Operating Officer, arriving from Google after earlier serving as chief of staff to the Treasury Secretary in the Clinton administration. Sandberg was viewed as the adult in the room alongside Zuckerberg, and from there, things really took off.
By December 2009, Facebook had become the most popular social platform in the world. When the movie “The Social Network” came out in 2010, Facebook's supremacy was officially solidified in Hollywood history. But that wasn’t enough for Zuckerberg, who was looking for domination.
Much of Facebook’s success is thanks to its unique growth team. The company isn’t just worried about getting new people to join but is concerned with monthly active users, which indicates how often people return to the site and spend time on it. It might seem obvious these days, but in 2007, when Zuckerberg was just 23, he created a growth team that used data to generate engagement.
At the time, other companies largely considered growth to be the responsibility of the PR and marketing departments, whereas Facebook prioritized data and engineering. In the early days, people left the site because they couldn't find their friends fast enough, so the growth team created the "People You May Know" function, which allowed Facebook to access your contacts and suggest friends immediately. As you might expect, this led to some privacy issues, like patients of the same psychiatrist being recommended to each other as friends.
This certainly wouldn’t be Facebook's last dance with privacy concerns either. And that wasn’t the only tool in the growth team’s arsenal that came with controversy. Tristan Harris, a computer scientist and former Google employee, co-founded the Center for Humane Technology to push back against the addictive elements of technology. All tech companies, Facebook included, have relied on people’s weaknesses and addictive tendencies to gain time and attention.
The perfect example is the now ubiquitous “like” button. This kept people returning to the site for the dopamine hit they’d feel when someone liked their post. As Harris put it, the like button—and everything that came with it—essentially turned our smartphones into slot machines. And that's what the company wanted, because the more addictive the platform is to its users, the more opportunity for revenue—because, well, ads.
Facebook's advertising model didn't start off how we know it today. In the early college campus days, the site sold what it called "Flyers": ads students could buy to promote parties and other campus activities. As the company grew, businesses flocked to Facebook to advertise because they could directly target audiences by college, degree type, preferred courses, age, gender, and interests. This made Facebook's advertising far more effective than traditional print or TV advertising, and by the end of 2007, 100,000 companies had signed up for Facebook's Business Pages, promoting themselves through advertising.
Like Google Ads, Facebook's advertising strategy reinvented the model altogether by linking ads to specific, targeted users. This idea seems so normal to us now, when we merely talk about a toaster oven and it suddenly appears in our Instagram feed, but it was revolutionary then and continues to be insanely profitable for Facebook.
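Since the whole pitch was audience filtering, here's a toy sketch in Python of what targeting on dimensions like those listed above (college, age, gender, interests) amounts to. Every field, value, and function here is invented for illustration; this is not Facebook's actual ad API.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    age: int
    gender: str
    college: str
    interests: set[str] = field(default_factory=set)

def matches(user: User, t: dict) -> bool:
    """Hypothetical audience filter: the user must fall inside the age range,
    match a targeted gender, and share at least one targeted interest."""
    return (t["min_age"] <= user.age <= t["max_age"]
            and user.gender in t["genders"]
            and bool(user.interests & t["interests"]))

users = [
    User(20, "female", "Harvard", {"music", "photography"}),
    User(45, "male", "none", {"golf"}),
]
campaign = {"min_age": 18, "max_age": 24,
            "genders": {"female", "male"},
            "interests": {"photography", "music"}}

# Only the 20-year-old student lands in the advertiser's audience.
print([u.college for u in users if matches(u, campaign)])
```

The precision is the product: instead of buying a page in a newspaper that everyone skims, an advertiser buys exactly the users whose declared attributes match the campaign.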
But some might argue that the ultimate key to Facebook's status as a global behemoth is its consistent acquisition of companies it sees as competitors or additions to its master plan, aptly called the "Copy, Acquire, Kill" method.
It started in 2012 when Facebook bought Instagram for $1 billion, and two years later, grabbed the global messaging app WhatsApp for $19 billion and virtual reality company Oculus for $2 billion. When companies like Snapchat wouldn’t sell to them, Facebook simply copied and integrated the app’s features into their own app. If this sounds familiar, it’s because it’s the same thing they’re trying to do to Twitter by introducing Threads.
Facebook’s rise has been meteoric, but it came with a rocky and bumpy ride. The first sign of danger for Facebook came in 2013 when its content moderation strategy—or lack thereof—unraveled. It was found to be experimenting with its users by showing certain content to influence people’s moods. Eventually, it issued an apology, but that was small potatoes compared to what would come.
The idea of fake news didn't really exist before 2015, but at the start of the 2016 U.S. election, when a study found that 63% of American Facebook users got their news from the platform, the company knew it had to get ahead of the potential for misinformation. It introduced a new feature that allowed users to flag articles as false news and rolled out a program for journalists to try to favor hard-hitting journalism. Sadly, these measures did next to nothing to stop the spread of misinformation in the 2016 election. More people engaged with fake news stories than real ones, and it was all because of the way the algorithm was designed.
Fake news is sensational, purposely designed to cause outrage and fear. But this also means it's more likely to be clicked on and commented on, and the algorithm prioritizes exactly that, surfacing and sharing the articles that get the most interaction. Publicly, Zuckerberg insisted that Facebook wasn't an influence in the election, but behind the scenes, it provided Congress with information proving that a Russian-based organization had run 3,000 ads between 2015 and 2016 in possible connection with election interference. The ads, covering topics from race to gun rights, reached 10 million U.S. citizens.
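As a toy illustration of that ranking dynamic, here's a minimal engagement-weighted feed sketch in Python. The weights and the scoring formula are invented; this reflects only the general principle described above (interaction-heavy posts get surfaced first), not Facebook's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    comments: int
    shares: int

def engagement_score(p: Post) -> int:
    # Hypothetical weights: comments and shares count more than clicks,
    # since they are stronger signals of interaction.
    return p.clicks + 3 * p.comments + 5 * p.shares

feed = [
    Post("Measured policy analysis", clicks=900, comments=40, shares=20),
    Post("Outrageous fake claim!!!", clicks=700, comments=400, shares=300),
]

# Ranking purely by interaction puts the outrage bait on top,
# even though the sober story drew more clicks.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post.title)
```

Nothing in the score asks whether a story is true; a ranking objective built only on interaction will mechanically reward whatever provokes the most reaction.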
Facebook knew it was poisoning people's brains and making tons of money off of it, but it would never admit to doing so publicly. I mean, who would? Perhaps that's because in 2017, riding the wave of fake news controversies, Facebook posted a $3 billion profit in a single quarter, a 76% increase over the year before. Why would they admit to anything when it wasn't hurting their earnings? If anything, the spread of misinformation increased engagement, which increased ad revenue.
If fake news wouldn't stop the rocket ship that was Facebook, perhaps privacy violations would. In March 2018, a story broke in the New York Times and the Guardian that the personal data of up to 87 million people had been scraped by a Facebook-adjacent app called “This Is Your Digital Life.” People passed on personal information via the app, and the British consulting firm Cambridge Analytica used the info to advise right-leaning groups like the campaigns of Donald Trump and Vote Leave, a pro-Brexit group.
This data leak brought a magnifying glass to Facebook's privacy issues like we've never seen before. As a result, the company's stock tumbled, wiping out around $70 billion in value. In response, Zuckerberg suspended Cambridge Analytica and certain apps and testified before the United States Congress. The scandal ultimately ended with Facebook paying a $663,000 fine in the UK and was seen as a turning point in the conversation around social media platforms and their access to our information.
The following year, the U.S. Federal Trade Commission fined Facebook $5 billion over user privacy violations—a record-breaking fine for a tech company, but still a small price to pay for a company that makes upwards of $100 billion yearly. Since then, Facebook’s reputation has continued to plummet. In 2020, a data scientist said that the company failed to stop political manipulation by foreign governments.
In 2021, Facebook was discovered to have been the central planning platform for the January 6th riot at the U.S. Capitol. And later that year, Frances Haugen, a former employee, testified that Facebook and the companies it owned knew they were causing harm to people but continued to put profit over the welfare of their users. Only one other industry shows the same disregard for the welfare of its customers, continuing to sell them things it knows are harmful, and it also calls its customers "users."
Needless to say, while still raking in the cash, Facebook needed a facelift. “It is time for us to adopt a new company brand to encompass everything that we do, to reflect who we are and what we hope to build. I am proud to announce that starting today, our company is now Meta.” Enter Meta. Facebook announced that the parent company of the social platform and all the other companies that it had acquired would be changed to Meta.
The new name could potentially leave behind the controversies plaguing Facebook, because the reality is that Facebook just isn't what it used to be. A 2018 study found that only 51% of people aged 13 to 17 used Facebook, a significant drop, as Facebook lost out to Snapchat, Instagram, YouTube, and then TikTok. Under its new parent company, Meta, Facebook has continued to flourish in emerging markets, allowing its influence to grow globally.
One of the ways it does this is by enabling easy communication in parts of the world where other channels are inaccessible. For example, in much of Africa, Facebook is the internet. It’s free on many African telecom networks, and users don’t need phone credit to log on. Only 8% of African households have a computer, so internet access via mobile phone is critical. Facebook’s Free Basics provides internet service that gives users credit-free access to the platform and works on low-cost mobile phones.
In further expansion plans, Meta is developing satellites that can beam internet access to remote areas, mainly so residents can use Facebook. Some see this as digital colonialism—a way of turning people in the global South into consumers of Western corporate content. It’s a valid argument, but saying that Facebook is the sole culprit of this practice is naive. One can visit almost any country on the planet to order McDonald's or Starbucks.
Uber currently has cars with zero-star safety ratings on the roads of major African cities like Lagos, just to increase profits. As crucial as these emerging markets are for the future of Facebook's world takeover, Meta's plans are even more far-reaching, and they need to be, because between its 2021 peak and late 2022, more than 70% of Meta's stock value eroded. This was partially due to Apple's introduction of an app tracking transparency feature, which allowed people to stop apps like Facebook and Instagram from tracking their data.
This meant less targeted advertising, which meant less money for Meta. Of course, the rise of TikTok can't be underestimated either. So Meta is making some big bets to become even more powerful than it already is. The first could pay off: one of the keys to Facebook and Meta's domination has been eliminating competition, and one of the main thorns in Facebook's side has always been Twitter.
Zuckerberg saw an opening when users soured on the app after Elon Musk bought it and upended some publicly favorable company policies. To an overwhelming response, Meta launched Threads to compete with Twitter, and within two hours, the app gained 2 million users. By the next day, over 30 million people had signed up, and it has gone down in history as the most rapidly downloaded app ever. And while Meta's leadership was pleasantly surprised by the turnout, they had planned for it, arranging for some of the most followed people on Twitter, like Ellen, Bill Gates, and Oprah, to convert to Threads.
In an odd turn of events, users seemed so soured by one tech magnate that they embraced another. For many, Zuckerberg feels like the lesser of two evils; the devil you know is better than the devil you don't. Threads has been losing ground in recent weeks, although Zuckerberg insists that they're doing basic work on the app to make it function better, and once it's ready for a jolt, Meta will throw more weight behind it.
But Meta's takeover isn't dependent on a Twitter lookalike. Meta was born from the idea of creating the metaverse: a digital world accessed through virtual reality, where you can socialize, work, shop, and more. Facebook Spaces, released as a precursor to the metaverse, was a VR app that allowed participants to hang out with their friends in a virtual space while wearing their VR headsets. And who distributes most of the VR headsets in the world? Meta.
Meta has purchased seven of the most successful VR development studios in the world and has one of Earth’s largest VR content catalogs. In fact, it owns so many VR-related companies that in 2022, the FTC, an agency that protects the rights of U.S. consumers, blocked Meta from buying yet another popular VR studio, signaling that Meta’s copy-acquire-kill plan might be losing one of its legs. Regardless, Zuckerberg is staking Meta's future on the metaverse and the digital worlds it will contain.
He’s spoken about a future where users adopt avatars to work in virtual boardrooms, attend digital events with friends, and shop in digital stores. The final component to such immense power coming together is, unsurprisingly, artificial intelligence. Like almost every other tech company, Meta has increased its focus on AI. However, Meta has differentiated itself by pledging its AI will be open source. Open source AI means that the company's code will be free to developers and software enthusiasts worldwide.
Companies like Google and the not-so-aptly-named OpenAI have set limits on who can access their latest technology and control what can be done with it. Zuckerberg insists that making the AI code open source will allow people to scrutinize and improve upon it. As with any AI, there is concern that Meta's could be used for evil.
I made a video on how AI could be a key ingredient in generating more spam, scams, and disinformation, but Meta says that releasing the technology to the public can strengthen its ability to fight against these abuses. Nick Clegg, Meta's president of global affairs, said, "It's not sustainable to keep this important technology in the hands of a few corporations," which, coming from an executive at Meta, is a bit ironic. Because even if Meta shares its AI code, it still controls so many of us in ways we'll never be able to surmount.
Copy-acquire-kill has been a mainstay of Meta's strategy. Facebook itself was ultimately just a copy of previous social networks, albeit a better one, and it eventually killed them. Then, over time, the company made huge acquisitions like Instagram and WhatsApp to balloon its influence and copied other successful giants like Twitter and Snapchat. What if Meta gets all the power? Then that's called a monopoly.
And if you've ever played the board game, you'd know it's not fun being on the losing side, and every player except the one holding the monopoly is on the losing side. If companies can't compete with Facebook and the docket of other entities Meta owns, then Meta gets all the opportunity. This ultimately could limit innovation, and we're already seeing it play out. Recently, there's been suspicion that Meta is selling VR headsets dirt cheap and at a loss to drive out competitors. This is called predatory pricing, and while it's illegal, it can be complicated to prove because low prices are generally seen as good for consumers. Right now, Meta is too young to be called a monopoly, but if the metaverse comes to fruition and Meta owns every entity in it, this predatory behavior could become the norm.
Consumers, us—the people who have been clicking, sharing, liking, and friending for all these years—might find it’s too late to turn back. But what's the alternative? Leave Threads for Twitter, Facebook for Snapchat, Instagram for TikTok? Well, certainly not the last one because TikTok is way more dangerous than you think, and the video on your screen right now tells you exactly why.
Once upon a time, there was a wild pig and a sea cow. The two were best friends who enjoyed racing against each other. One day, the sea cow got injured and couldn't race any longer, so the wild pig carried him down to the sea, where they could race forever: one on land and the other in the water. If you were born into a hunter-gatherer community in the Philippines, you would have grown up listening to this story.
And indeed, no matter where you grew up in the world, most of us heard stories that echoed sentiments like this. While they may seem like mere fables on the surface, there’s a lot to learn from them—things like friendship, cooperation, and equality. In the past, stories like these permeated our culture from childhood to old age. But the world has changed a lot since our hunter-gatherer days. Stories that teach us about our sense of community are now limited to children's fables and no longer circulate through our culture as we get older.
In the past, the job of passing on necessary life skills, history, and information was a collective effort. Today, all of that power has been given to commercial media. In the words of George Gerbner, "commercial media has eclipsed religion, art, oral traditions, and the family as the great storytelling engine of our time." And whoever tells the stories of a culture gets to govern human behavior.
And therein lies the biggest problem with commercial storytelling: Twitter, YouTube, TikTok, Facebook—all the different news apps and websites. How many times do we check the news on our phones every day? In the past, it took weeks, months, or even years to hear bad news from the other side of the world. But today, we have everything at our fingertips: wars, riots, chaos, scandals. The news feels inescapable; it’s like we’re trapped in a constant reel of negative information on every platform and from every news outlet.
If you strip it down to its roots, the message behind it all is always the same—one that plays on our emotions and instills fear in our hearts, warning us against a world filled with people who want to hurt us, ideologies that threaten ours, and unexpected events that are meant to keep us on high alert. But is the world really as bad as mass media wants us to believe, or are we suffering from mean world syndrome?
In the 1970s, Dr. George Gerbner coined the term mean world syndrome while researching the effects of violent content on our view of the world. His findings showed that a heavy diet of violence—whether through entertainment or the news—can lead to a cognitive bias that makes us perceive the world as more dangerous than it actually is. What is most interesting about Gerbner's research is that it doesn’t matter whether we know the content we’re consuming is factual—like a news report—or fictional, like a movie. The effect is the same.
When we’re constantly bombarded with negative information, we begin to develop a worldview that is highly skeptical, suspicious, and pessimistic. As part of the study, Gerbner estimated that the average American child would have watched over 8,000 murders on television before the age of 12. Consider that Gerbner conducted his research in the 1970s, when the media's influence and reach were substantially smaller, and you can imagine just how much worse it has gotten. How many murders, both real and fictional, do you think a child today will have read, seen, or heard about in the media before the age of 12—8,000 or 8 million?
If that was the only problem with the media, then perhaps it wouldn't be that horrible after all. If bad things are happening, they need to be reported, right? Well, yes, but Gerbner said something while testifying before a U.S. Congressional subcommittee in 1981 that will send chills down your spine: “Fearful people are more dependent, more easily manipulated and controlled, more susceptible to deceptively simple, strong, tough, and hardline measures.”
Could it be that the media is designed to serve people the worst news to instill fear in us so we can be more easily controlled by the powers that be? This point becomes even more plausible when you consider the fact that 90% of the media in the United States is controlled by just six corporations. This means that roughly 232 media executives are calling the shots on the vast majority of the news being presented to Americans, which is then passed on across the globe.
Let’s say the situation isn’t as sinister as that and we aren’t being subversively controlled by some criminal masterminds. At the very least, CNN, Fox, and all the other outlets want one thing: our attention. And some of them will do anything to get it. People are more likely to pay attention to and remember negatives. Media outlets know this, which is why you’ll find more negative news than positive news in your feeds. It’s polarizing, engaging, and keeps us glued to our screens, which in turn results in more revenue for advertisers who are literally paying for our attention.
And once we start paying attention, the algorithms of social media take over, and all of a sudden we’re constantly being fed news that confirms our beliefs and further solidifies our already skewed worldviews. It’s no secret that controversial content—the content that triggers an emotional response—is the content that performs best, gets shared most, and circulates longest. So whether we like it or not, we become bombarded with an endless scroll of polarizing content that only manages to make us even more skeptical about the world around us and suspicious of anyone who does not happen to share the same exact beliefs.
This kind of reporting and these stories that we propagate throughout our society end up dividing us instead of bringing us together like the stories of old did. The sad reality is that whether the world is getting worse or not, the media will almost always make us think that it is, simply because it’s good for business. Why? Picture the pitch meeting: “So we’ve got our deadly disease—now we just have to blame it on something that’s in every household, something that people are already a little bit afraid of.”
The truth, which should be an unbiased representation of facts, is no longer at the core of news reporting. The story has become much more important, and stories that elicit negative emotions often get more eyeballs, reactions, and ad revenue. As a result, the problems that are constantly depicted in movies, news outlets, and on social media are relentlessly overstated to the point where we might feel it’s even hopeless to do anything about them.
What’s worse is that this constant exposure to negative information, relentlessly pushed on us by these obsessive algorithms, can confuse the brain until it becomes almost impossible to differentiate between exciting fact and thrilling fiction. A study conducted by three MIT scholars in 2018 found that false news spreads on Twitter substantially faster, farther, and deeper than the truth. The research also found that this misinformation wasn’t spread by bots, but by actual human users like you and me retweeting it.
Just like these algorithms, our brains recognize that the most polarizing information, whether true or not, is the information that will go viral and elicit the most emotional response from the public. So we hit share or quote in hopes of landing that viral tweet, without first verifying whether the information we're spreading is accurate. Another reason the world seems substantially worse in the media than it does right in front of us is that the news reports on things that did happen, not things that didn’t.
We don’t hear about wars that never started due to successful peace talks or shootings that were prevented through proper policing. We barely hear when unemployment rates go down and when the economy is experiencing a turn for the better, because again, it's just not as exciting as bad news. Sadly, as long as terrible things keep happening on the face of this planet, there will always be enough negative reports to fill the news—especially with smartphones now allowing people to become amateur reporters and crime investigators.
Mean world syndrome speaks directly to our most innate fears, which trigger our fight-or-flight instinct. When we watch a reporter covering a war zone, a shooting in a residential area, or a terrorist threat, our body is flooded with hormones and chemicals designed to keep us on full alert and save us from the mean, bad world. While these survival mechanisms were essential in our hunter-gatherer days, today all they manage to do is produce anxiety, stress, and even trauma.
But the world isn't as bad as we think it is—only the stories are. This is why, to combat mean world syndrome, we have to take back control of how we’re thinking, feeling, and reacting to the constant stream of negative or violent news being depicted all around us. The truth is that the world today is much better than it has ever been. Don’t get me wrong: humanity is far from perfect. There are still conflicts in many places around the world, human rights issues we need to tackle, climate change problems we need to fix.
But the world has never been as good as it currently is, at least for most of us. Advancements in healthcare and technology have increased our lifespans, decreased mortality rates, and improved our living standards. We haven't witnessed a world war in decades. We've grown more tolerant of each other and more accepting of our differences. Violence has been steadily declining since 1946. There have been fewer famine deaths in the past decade than at any other time in human history, and extreme poverty has been declining literally by the second.
Yes, we face harsh realities at both a personal and a global scale every single day, but when tragedy, crime, and war are presented as the norm and not the outliers, it's only natural for us to feel angry and afraid. We have to choose our information sources carefully and not let the obsessive algorithms of social media dominate our perception of the world. We have to be conscious of our approach to news and entertainment and challenge the way we think.
The next time you're scrolling through your feed and find a disturbing news report, ask yourself, “Is this fact or fiction? What real evidence is there of this occurrence? What's the context, or am I just being manipulated so that I'll develop certain feelings of fear and suspicion?” If you find your social media platform serving you the same kind of content, be conscious of this and make sure you diversify your news feed to include positivity to balance out the negativity.
At the end of the day, we’re a storytelling species. And if we’ve learned anything from our history, it’s that the narrative we share with one another is the most important thing. Just like in our hunter-gatherer days, the tales we’re telling now will have a great influence in shaping our culture and our people. It might be time that we go back to telling stories like the wild pig and the sea cow. Maybe if we do, we can cultivate the values that truly make us human—like caring for one another, being compassionate, and giving people the benefit of the doubt.
The world is not as mean as the media wants you to believe. It's time we stop letting them convince us that it is. TikTok is far more dangerous than we thought. In the past two years, at least 15 kids aged 12 or younger across the globe, from Milwaukee to Sicily, have died after attempting what seemed to them like a harmless challenge they found on the world's most popular app, TikTok. The Blackout Challenge encourages users to record themselves holding their breath until they pass out, so viewers can essentially watch as they regain consciousness.
This is obviously dangerous for anyone on the planet, but in the hands of children, the trend had far more devastating consequences. The NyQuil Challenge, the Penny Challenge, the Milk Crate Challenge—time after time, we’ve seen TikTok spring up one dangerous viral trend after another, all of which put users, especially younger users, in potentially serious danger.
In a previous Aperture episode, we looked at the mental health effects of TikTok, but it seems as though the self-inflicted risks directly linked to the app are more far-reaching and have greater consequences than we could have ever imagined—from health and safety to privacy and security.
Given the long list of problems that TikTok seems to pose to the general public, what do we do? What can we do? Can we fix TikTok, or should we just ban the app entirely? If you’ve ever opened TikTok, you’ll see its genius at first glance—an endless stream of content from strangers all across the globe, curated specifically for you by one of the most refined algorithms humanity has ever created. You get a rush of dopamine from everything from dancing dogs to amateur chefs making elaborate meals.
Before we look at just how dangerous TikTok has become, I think it's important to mention that it's definitely not all bad. TikTok's rise to the top was greatly accelerated during the pandemic, when everyone was forced to stay indoors without physical access to friends, family, or community. Young people flocked to TikTok, doing silly dances and challenges to cure their boredom and help them feel part of a community again. They found solace in the app, from doctors explaining the COVID symptoms people were experiencing to random comments saving people from potentially life-threatening conditions.
Whether you got bullied at school or were living in an unhappy home, you could log onto the app and see other people in the same exact situation as you, offering help, support, and a sense of belonging. Sadly, the reality is that, as with most things, there’s far more than meets the eye. Just spend a few minutes on the app, and you can instantly see just how difficult—almost impossible—it is to leave.
Today, people spend more time on TikTok than on any other social media platform, with a global average of 96 minutes per day. And for some, the number is far greater than that. For anyone, that's way too much time on a single app. But it's even worse when you realize that around half of TikTok's users are young people, many of whom are below the age of 13.
Creators and users who try to destigmatize mental health on the app see the benefits of more people learning and talking about these issues. But isn't that only the case if users are getting the right information? One of the greatest problems on TikTok is that just about anyone can buy scrubs on Amazon and claim to be a doctor, spreading misinformation with ease. There's no formal fact-checking, so often we just hear what we want to hear.
If we have a symptom we’re worried about, these videos could have the WebMD effect of making us think we’re dying when we’re really just having mild allergies. When you think about it like that, you realize that the videos pushed to us by TikTok can actually worsen what we’re feeling. If we’re anxious about a relationship or a meeting at work and we’re continually getting shown videos of other anxious people, is that going to help us feel better or make us spiral?
And all of this is without mentioning the things that plague every social media app, like cyberbullying, social exclusion, and the temptation to compare ourselves to others. We get addicted to scrolling and posting and scrolling and posting until we’re convinced that we’re just not as good as everyone else. The scariest thing about it is that no one is immune to the grasp of the algorithm—not even those who should be better informed.
One doctor, Brian Boxer Wachler, grew an impressive TikTok following by offering medical advice and reacting to other health-related videos. Knowing his audience, he made his videos engaging and slangy to be more relatable. Despite being middle-aged, he became so obsessed with growing his following that his family had to stage an intervention to help him curb his addiction and approach his channel as a healthy distraction rather than an obsession.
“It was unbelievable when you wake up and you see your video got hundreds of thousands or millions of views. It’s just this rush. Those are real effects. I mean, those are the same effects that people get with drugs.” In late 2022, he released a book detailing his experiences with social media addiction. Creators have to produce new videos all the time to maintain view counts, and the pressure, and the burnout that comes with it, are real.
The algorithm doesn't care about your age, education level, or IQ score. In that sense, TikTok is a great democratizer of our time—but one with debilitating side effects. Seeing all the dangers of TikTok, the big question remains: Is this fixable? Is there a way to combat all those dopamine hits we feel every time we open the app? The problem is that the app wants us to be addicted. That’s how it keeps users, increases downloads, and stays relevant.
We can put restrictions on our screen time or make our phones require breaks from the app as much as we want, but if the app doesn’t change, chances are that we won’t either. And for most people, that’s okay. They might obsess over the latest TikTok dance or unfairly compare themselves to a beauty influencer, but they’ll be fine; their lives won’t be dramatically altered.
But for kids, the potential damage is much higher. TikTok took down 41 million underage accounts in the first half of 2022 alone, but that’s a fool’s errand; those users can just sign up again with a different account. TikTok’s army of 40,000 global moderators reviews potentially harmful videos, but catching everything is an impossible task. Over a billion videos are viewed on the app every single day. That means each moderator would have to review 25,000 videos a day—and even if the videos average out to around a minute each, that’s still roughly 416 hours of content to watch in a 24-hour day. It’s literally impossible.
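If you want to sanity-check that arithmetic, here's a minimal back-of-the-envelope sketch. The figures are the round numbers quoted above—daily views standing in for the review queue, a one-minute average video length—not official TikTok statistics:

```python
# Back-of-the-envelope moderation math, using the transcript's round numbers.
# Assumptions: ~1 billion videos viewed per day stands in for the review
# queue, 40,000 moderators, average video length of one minute.

videos_per_day = 1_000_000_000
moderators = 40_000
avg_minutes_per_video = 1

videos_per_mod = videos_per_day / moderators                 # 25,000 videos each
hours_per_mod = videos_per_mod * avg_minutes_per_video / 60  # ~416.7 hours

print(f"{videos_per_mod:,.0f} videos per moderator per day")
print(f"{hours_per_mod:,.1f} hours of footage per moderator per day")
# ~416.7 hours of footage in a 24-hour day: literally impossible.
```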
So how do we protect kids? There’s no effective way to block underage users from social media platforms because it’s impossible to verify their age. But what if it wasn’t? In 2021, TikTok met with providers of facial age estimation software, which can distinguish between a child and a teen and can work without directly identifying an individual or storing any data. This could be a game changer for an app that’s trying to be safer for children.
But unfortunately, child safety isn’t the only scandal swirling around TikTok. Facial recognition technology on an app that’s been accused of spying on its users and sharing data with the government wouldn’t be a great look. We can’t really talk about TikTok in today’s world without talking about privacy. Most of us know—and passively accept—that our data is being stored, seen, and used in some way when we surf the internet. It’s the classic “accept all cookies” click, made without thinking.
But TikTok and its parent company, ByteDance, take it to a new level. An internal investigation found that a group of ByteDance employees had surveilled several U.S. journalists who covered the app, in an attempt to track down potential anonymous sources. In 2020, an iPhone security update caught TikTok reading users’ clipboards every few keystrokes while they were on the app. But here’s the thing: ByteDance isn’t the first company to be accused of this.
Uber and Facebook have been known to track the location of journalists who report on their apps, just like TikTok employees were found to be doing. And in more than 124,000 documents leaked to the press in 2022, spanning 2014 to 2017, Uber was shown to be doing everything it could to bypass regulations across the globe. Meta, Twitter, Google, and even Apple all collect and use our personal data in some way, so what’s the big issue with ByteDance?
Geopolitics. ByteDance is a Chinese company. TikTok says that it doesn’t share user data with the Chinese government, but politicians, journalists, and other critics are quick to call its bluff. China already has a track record of stealing massive quantities of data on American citizens and foreign governments. But do TikTok and its billion-plus user profiles offer a more direct line? If the security concerns turn out to be true, we could see widespread fraud, hacking, or influence operations launched through the platform.
United States lawmakers have issued warnings about the app, and the White House has signed executive orders to address the potential security risks it poses. Calls from inside the U.S. Congress have, in a feat of modern-day politics, brought Republicans and Democrats together against a common enemy. Closed-door talks between TikTok executives and the Committee on Foreign Investment in the United States have been going on for years. There’s a security contract in negotiation with the Treasury Department on how TikTok will handle Americans’ user data—all of this in an effort to fix things, curb the threat, and limit our exposure as users.
But will it work? Many think that contracts, talks, and hearings won’t get the job done; that fixing just isn’t an option. The calls to ban TikTok in the United States—and many other countries—grow by the day. The U.S. military banned the app in 2019. Now TikTok is off-limits on all government devices, and there’s a bill sitting in Congress to prohibit it completely.
Take a second to think about what that would look like: the most popular app in the world, unavailable in the United States. Would there be revolts, cheers, confusion, a restructuring of society as we know it? To understand what life post-TikTok could be like, look no further than China’s neighbor, India. The South Asian country banned TikTok in 2020 after a geopolitical dispute with China. Of course, there were consequences: people employed by TikTok in India lost their jobs, and influencers who had amassed followings on the app lost their income. But it wasn’t all bad in India—in fact, it was surprisingly good for many of its citizens.
Replacement apps developed in India are trying to fill the hole that TikTok's departure created, with the goal of focusing on the needs of their users rather than doing whatever it takes to fulfill business objectives. If India pulls this off, will other countries follow suit? For now, we can’t really say for sure.
As for U.S. national security, there’s no smoking gun—no evidence of an urgent threat. This raises the question: is all the discussion by politicians and regulators really about a unique national security issue, or is it a way for them to talk about larger issues like privacy, disinformation, and content moderation that bolster their own political fortunes? Whatever the ulterior motives, the platform’s potential for danger seems to be enough to at least keep conversations about partial or full bans of the app going.
And you don’t even need to look through a globalized lens to understand that there are harmful aspects to TikTok. Remember the kids who suffered a terrible fate attempting the Blackout Challenge? They were never supposed to see it in the first place. The reality is that technology is always one step ahead of us—our governments and our ability to maintain our mental health. And as long as there’s money to be made, it won’t slow down.
Because users aren’t customers; we’re the product, something to be sold and analyzed in the name of financial gain. If measures like enforcing age restrictions to make an app safer aren’t in the interest of a company’s bottom line, why would they ever enact them?
So can we fix TikTok, or should we just get rid of it? The app is making small efforts to fix itself, adding new features like one that tells users why the algorithm recommends certain videos. But the algorithm is designed to keep people watching for as long as possible, so it promotes the most extreme, most controversial, most eye-catching videos. And even if you know why you’re being shown a video, will it really change the way the video makes you feel or act?
TikTok could try a route like the one YouTube took with YouTube Kids to protect its youngest users: a version of the platform designed for children 12 and under that hosts age-appropriate videos and enforces screen-time limits. As for our privacy and security, perhaps it’s up to the governments of the world to place restrictions on what information about their citizens a foreign body can access. Or perhaps a full ban of the world’s most popular app is in our future—to protect us, our mental health, and the safety of our homes.
Those 96 minutes that we spend, on average, every day on TikTok would have to go towards something else. I’d encourage us all to spend that time on something more productive, because the reality is that a world without TikTok will still have other platforms that embrace the blistering pace and addictive nature of short-form content—things being lit on fire, flash mobs performing goofy choreographed dances, people eating too much pizza on camera.
In fact, right now there are already YouTube Shorts and Instagram and Facebook Reels, so we’d better watch out. In 2021, riding the meme-stock trading frenzy powered by its WallStreetBets forum, Reddit announced that it had raised more than $400 million from Fidelity in its latest funding round and planned to raise up to $700 million in total. That was Reddit at its peak, valued at around $10 billion. Today, just two years later, its valuation has dropped to around $5.5 billion. How did a Silicon Valley unicorn slice its value in half in just two years?