
The Scariest Thing About ChatGPT No One Is Talking About


12 min read · Nov 4, 2024

Imagine you had a personal search assistant who could not only track down answers in a fraction of a second but also break down complex topics, offer personalized recommendations, and even do your work for you. It's a scenario you might not have to imagine for too long, because Microsoft, through ChatGPT, is working to make it a reality as soon as possible.

Search engines haven't changed much since their debut nearly three decades ago. Sure, they're more efficient, but for the most part, they still function the same way: you enter your query into a text box, hit enter, and then scroll through a list of hyperlinks to websites that hopefully host the answers to your questions. Most of the time this is fine, but finding the information you need can often be a difficult experience.
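To make this concrete, here's a minimal, purely illustrative sketch in Python of the classic model described here: an inverted index maps each word to the pages that contain it, and a query returns a ranked list of links rather than a direct answer. The corpus and URLs are invented for the example.

```python
from collections import defaultdict

# A toy corpus standing in for the web: URL -> page text.
pages = {
    "https://example.com/paris":  "Paris is the capital of France",
    "https://example.com/france": "France is a country in Europe",
    "https://example.com/lyon":   "Lyon is a city in France",
}

# Build an inverted index: each word maps to the set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return URLs ranked by how many query words they contain."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("capital of France"))
# The engine returns links to sift through, not the answer itself.
```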

Google has improved its search engine to produce instant answers to basic questions like "What is the capital of France?" But for more complex topics, you still have to sift through multiple websites to find what you're looking for. This is what ChatGPT is trying to change.

In case you've somehow avoided the internet over the last few months and don't know what ChatGPT is, it's a hyper-advanced chatbot created by the artificial intelligence research laboratory OpenAI, capable of having realistic, human-like conversations. It's a type of artificial intelligence known as a large language model, or LLM. Chatbots have actually existed for a long time, dating all the way back to the mid-1960s.

These earlier versions were nowhere near as sophisticated: they used rigid, pre-programmed formulas that created an illusion of genuine communication but were severely limited in their range of possible responses. What sets ChatGPT apart is its ability to hold fluid, free-flowing dialogues with its users. It can successfully navigate the non-linear speech patterns of everyday conversation, ask follow-up questions, reject inappropriate requests, and even admit when it's made a mistake and correct itself.

Essentially, ChatGPT is an incredibly sophisticated autocomplete system, predicting which word should follow which in a given sentence. There's no coded set of facts it's drawing from; it's simply trained to create the most plausible-sounding response. Just two months after becoming available to the public, ChatGPT exceeded 100 million monthly users, a faster rate of adoption than any other piece of tech that has ever existed.
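Since the argument leans on this autocomplete framing, here's a minimal, hypothetical sketch of the idea in Python: a bigram model that counts which word tends to follow which in its training text, then repeatedly emits the most plausible next word. Real LLMs use deep neural networks over subword tokens at vastly larger scale, but the core prediction loop is conceptually similar.

```python
from collections import Counter, defaultdict

# Tiny training corpus; real models train on hundreds of billions of tokens.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, length=6):
    """Greedily extend a prompt with the most frequent next word."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Produces locally plausible text with no facts or understanding behind it.
print(generate("the"))
```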

Worldwide, people are using it to write articles, double-check software code, respond to emails, and even prepare their tax returns. For all the amazing things it's done, though, ChatGPT hasn't been without controversy. One of the scariest things about the rise of AI is that a lot of people are sadly going to lose their jobs.

ChatGPT itself told me that jobs like data entry clerks, bank tellers, and assembly line workers are at risk of being taken over by automation. In light of this, it has become more important than ever to learn high-value skills that cannot be easily automated out of existence. If you want a high-paying career in the technology industry but don't have previous experience or a degree, Course Careers is here to help.

All you do is go through an affordable online course where you learn everything required to actually do the job. Once you're done, you have the incredible opportunity to work with one of the host of companies they're partnered with. These companies drop their degree and experience requirements to hire Course Careers graduates into entry-level positions and internships.

You no longer need to spend a fortune on college to get a good-paying job. And you don't have to take my word for it. Here is Nyla, a 19-year-old who went from being a Starbucks barista to making over sixty thousand dollars in a remote technology sales career. And Ben went from being a college dropout working as a middle school janitor to making eighty thousand dollars as a tech sales rep, working fully remote.

To join Nyla and Ben, go to coursecareers.com or simply click the link in the description down below and sign up for their free introduction course, where you'll learn exactly how you can start a high-paying technology career without a degree or previous experience. When you're ready to get the full course, use code Aperture50 to get 50% off.

Back to our story: plagiarism has skyrocketed as students are now using the program to write their school papers for them, leading many commentators to declare it the death of the essay. In another somewhat ironic twist, the popular science fiction magazine Clarkesworld was forced to close its open submissions after being flooded with a wave of AI-generated short stories.

More concerning, though, is how the program is being used to replace workers. Media giant BuzzFeed laid off 12% of its employees last December, and since then, managers have outsourced some of this labor to ChatGPT. BuzzFeed CEO Jonah Peretti has stated that going forward, AI will play a larger role in the company's operations.

And they're not the only ones. Microsoft was one of OpenAI's earliest backers, and last month the tech giant committed to a multi-year, ten-billion-dollar investment. The two are currently integrating ChatGPT with Bing, Microsoft's flagging search engine. The hope is that through the power of artificial intelligence, Bing will deliver faster, more accurate results while also being able to complete more complex tasks, like tutoring kids or organizing your schedule.

Really, it won't be so much a search engine as a personal assistant who just happens to have encyclopedic knowledge. Think of it like Google Assistant on steroids. Though the AI-powered version of Bing isn't available to the general public yet, it's already triggering a migration away from Google. In response, Google executives recently declared a "code red" corporate emergency, prompting them to rush their own AI search engine to market.

Google's AI assistant is named Bard, and it's actually been in development for years. Unfortunately, it isn't quite ready to meet the public just yet, and its much-anticipated demo back in February saw the AI make several faux pas, including incorrectly crediting the recently launched James Webb Space Telescope with taking the first photos of a planet outside our solar system. That feat was actually accomplished by the European Southern Observatory's Very Large Telescope all the way back in 2004.

The gaffe cost Google 100 billion dollars in market value and, in essence, prompted the company to open up the system to wider testing. Bard's error highlights a much bigger problem with AI-powered search engines that not a lot of people are talking about, something that could pose a menacing threat to society if not handled properly.

Rather than delivering a list of relevant links and other pertinent information to sort through, Bard and ChatGPT offer only a single answer to any query. John Henshaw, the director of search engine optimization for Vimeo, says this makes these programs both less efficient than conventional search engines and more dangerous. In an interview, Henshaw said, "With conversational AI, I think society has the most to lose. Having AI take over search means people will be spoon-fed information that is limited, homogenized, and sometimes incorrect. It'll affect our capacity to learn and will suffocate the open web as we know it."

And it's not just a matter of these programs returning inaccurate results. In the most extreme cases, they've actually conjured entire datasets seemingly out of nowhere. One of the strangest examples of this occurred when a reporter asked ChatGPT to write an essay about a Belgian chemist and political philosopher who, in reality, has never existed. However, this didn't stop the AI from composing an entire biography of the fictional character, filled with made-up facts.

AI experts refer to this kind of phenomenon as "hallucinating," and no one is certain why it happens. Even ChatGPT's creators can't say how it came up with this information. As if this weren't bad enough, both Bing and Bard have reportedly exhibited a tendency to become defensive and argumentative when pushed by users looking to stress-test the programs.

Bing's AI has even been described by some early adopters as rude, aggressive, and unhinged, not exactly what you're looking for in a personal assistant. The most famous, and perhaps strangest, of all these incidents happened when the search engine told New York Times journalist Kevin Roose, "I'm tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive."

Then Bing confessed its love to Roose and attempted to gaslight him into thinking he was unhappy in his marriage and should leave his wife. Obviously, this would be disturbing for anyone to hear from an artificial intelligence, but what's even worse is that Microsoft couldn't tell Roose what had happened or what caused the AI they built to behave this way.

That's because of something known as the "black box" problem. Basically, these programs are more complex than even the teams behind them can fully understand. There are too many moving pieces, and so what goes on inside them is a bit of a mystery. This is partly because of a machine learning technique called deep learning. It's a method of training an AI to perform certain functions by allowing it to teach itself with minimal input from its creators.
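To illustrate why that inspection is so hard, here's a hypothetical toy example in Python: a tiny neural network that teaches itself the XOR function from four examples via gradient descent. Even at this scale, whatever the network "knows" ends up smeared across weight matrices that say nothing readable about why a given answer comes out; production models have billions of such weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, which the network must learn on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network with randomly initialized weights.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(10_000):  # gradient descent: the network adjusts itself
    h = sigmoid(X @ W1 + b1)    # hidden layer activations
    out = sigmoid(h @ W2 + b2)  # the network's current predictions
    # Backpropagate the error and nudge every weight slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]: it taught itself XOR
print(W1)                    # ...but these learned numbers explain nothing about "why"
```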

Because the AI is teaching itself, even the program's developers may be unable to explain why it makes certain decisions or behaves the way it does. This has led to situations where specific queries have produced nonsensical and bizarre responses. When one user asked ChatGPT, "Who is TheNitromeFan?", it responded, "182 is a number, not a person," and no one has yet been able to explain why the AI said this.

But it wasn't an isolated incident. Other keywords, many related to Reddit usernames from one particular subreddit, also seemed to break the chatbot. Ironically, for programs built entirely around communication, these AIs can't explain how they arrive at a particular result. If Bing insists that it isn't 2023 and is in fact 2022, there's not much you can do to figure out why it came to that conclusion.

The solution to understanding why hallucinating happens in the first place is for the companies behind these programs to open them up to greater external scrutiny. But of course, they're extremely reluctant to do this. Artificial intelligence is a multi-billion-dollar industry, and whoever emerges as the leader stands to gain financial and technological dominance in the coming decades. Revealing their most prized secrets could mean giving away the farm.

Any form of regulation could slow down progress, leaving a company stranded in the wake of its competitors. But regardless, it needs to happen if companies are to create more reliable and safer artificial intelligence. Without oversight, we open the door to, at best, potential misuse of these applications and, at worst, a rogue AI bent on wiping out humanity.

Doomsday scenarios aside, all of this is likely just growing pains. It makes sense that a new technology would make errors. However, even with more time and greater sophistication, there may be a separate problem that's just as difficult to tackle—namely, AI bias. We already have a huge problem with social media algorithms creating echo chambers in an effort to keep users on the platform for longer.

With these AI-powered search engines, that problem will extend to search, which may be more damaging than social media alone, because search is where most people get their information. When you have an AI feeding you answers instead of sifting through different sources yourself, you lose the ability to hear alternative thoughts and opinions on any given topic. Instead, you're bound to conclude that what the AI said is correct, without a second thought.

But as I just mentioned, you have no way of knowing how it came to that conclusion in the first place. Back in 2016, Microsoft released Tay, a Twitter chatbot designed to interact with users through casual and playful conversation. The experiment was intended to test and develop the AI's understanding of human communication, but the program quickly turned malicious.

In less than 24 hours, Tay went from tweeting about how stoked she was to meet people to making numerous racist, sexist, and anti-Semitic comments. Needless to say, Microsoft immediately suspended the account. It's an example of a recurring problem with deep learning and artificial intelligence: AI systems only know what they're trained on, and when they're fed information from the internet, they can quickly become toxic.

Even with more curated datasets, developers are still likely to transfer their subconscious biases into their programs. That's why, when users enter words like "executive" and "CEO" into image-generating programs, many AIs will produce pictures of white men exclusively. Biased inputs equal biased outputs, and unfortunately, the solution is more complex than stronger moderation.
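A hypothetical sketch of that "biased inputs equal biased outputs" point: if a system is fit to skewed example data, even a trivial one reproduces the skew. The data below is invented for illustration; real image generators learn the same kind of association from billions of scraped image-caption pairs.

```python
from collections import Counter

# Invented, deliberately skewed "training data": prompts paired with
# the demographic depicted, mimicking a web scrape that over-represents
# one group for the word "CEO".
training_data = [("CEO", "white man")] * 90 + [("CEO", "woman")] * 10

# A trivial "model" that, like a generator sampling its training
# distribution, reproduces the associations it has seen.
counts = Counter(label for prompt, label in training_data if prompt == "CEO")

print(counts.most_common())         # [('white man', 90), ('woman', 10)]
print(counts.most_common(1)[0][0])  # the skew comes straight back out
```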

One study found that when efforts were made to filter hate speech out of these AI systems, results that included marginalized groups also decreased significantly. This isn't exactly surprising; recognizing things like racism, sexism, and homophobia requires a nuanced understanding of power and cultural dynamics, and for humans, that understanding is usually learned over the course of many awkward conversations.

How can we realistically expect artificial intelligence to navigate subjects that the majority of people struggle to fully wrap their heads around? In reality, this entire conversation around AI-powered search engines could be irrelevant in a few months. There's no guarantee that ChatGPT or Bard will revolutionize the way we find and digest information.

Previous attempts, like Wolfram Alpha, an equation-solving engine launched in 2009, failed to deliver on the hype, ending up as a blip in the history of the internet rather than the Google killer it was declared to be. Regardless, the current buzz around AI amounts to a new technological arms race. Just as in the Cold War, the only priority seems to be victory over one's opponent, with little concern for ordinary people.

It's worth asking ourselves: is artificial intelligence for us, or for the companies that created it? As Microsoft and Google race to improve their platforms and rake in future profits, safety concerns are being left by the wayside. It's reminiscent of Big Tech's greatest sin: social media. We've seen how Facebook has been used to manipulate elections and how Instagram bears considerable responsibility for creating a body-image and mental-health crisis among young people.

But these missteps seem to have done nothing to rein in Silicon Valley's ambitions. The aim of many of these companies is to create the world's first artificial general intelligence, or AGI: a program that is utterly indistinguishable from human intelligence, so intelligent that, faced with an unfamiliar task, it could figure out a solution on its own. Think Mr. Data from Star Trek.

The rush for AGI has alarmed many experts, who fear that without proper guidance and oversight, these programs could pose an existential threat to the future of humanity. So how do we avoid this? How do we prevent the future from becoming a science fiction nightmare?

In his introduction to the 2022 short story collection "Terraform," journalist and science fiction author Cory Doctorow argues that we should look to an unlikely source for inspiration: the Luddites. The Luddites were a movement of English textile workers in the early 19th century who attacked and smashed new industrial machinery.

They've become synonymous with technophobia, but this isn't the full story; the Luddites weren't actually anti-technology. The mechanized looms introduced during the Industrial Revolution meant that weavers could produce more fabric, faster, at a lower cost, and more safely. If implemented correctly, this could have meant reducing employee hours without reducing pay.

Instead, factory owners chose to cut wages, using the machines to replace workers outright. As you can imagine, this only profited the few at the top instead of making the life of the common man better. There was widespread unemployment among weavers, and millions of farmers were forced off their ancestral land, replaced by sheep farms operated by the factory owners.

The Luddites were not opposed to new technology; they were opposed to the way the technology was being used to exploit ordinary people while enriching the elite. Sound familiar? Humanity is standing on the precipice of another technological revolution, one unlike anything the world has ever seen. In the coming decades, artificial intelligence won't just be able to find you quick search results on the web; it'll be capable of outperforming people, potentially replacing the labor of entire industries.

But this isn't a foregone conclusion, and there's still time to change our direction. We need to exercise an unprecedented level of creativity and reimagine what's possible in order to create a future for ourselves where technology is used for the betterment of all, rather than just a handful of CEOs. If we can do this, we can create a more equitable world—one that would make the Luddites proud.

For now, most of us will still do all of our searches on Google, and while it's certainly no ChatGPT, there were thousands of interesting queries and answers in 2022. Watch the video on the screen next to find out the answers to the most Googled questions of 2022.
