yego.me
💡 Stop wasting time. Read Youtube instead of watch. Download Chrome Extension

The Dark History of ChatGPT


11m read
·Nov 4, 2024

The world was still coming to terms with the powers of the artificial intelligence chatbot called ChatGPT when GPT-4 was released in March of 2023. GPT-4 is miles ahead of GPT-3.5, the engine on which ChatGPT is running. At the time of writing, GPT-4 can pass the bar exam with scores in the 90th percentile while GPT-3.5 could only manage 10th percentile numbers. This new engine also has significantly better scores than the SAT and AP Biology and many other tests.

To understand just how incredible this achievement is, ChatGPT-3.5 was released in November 2022. Imagine moving from the 10th to the 90th percentile of a class in just five months, not just in one test but in all of them. If you find this progress too fast, it's worse than you think. Parent company OpenAI claims that GPT-4 was technically ready months ago and that they only delayed its release to allow moderators a better understanding of how the technology could be abused.

This way, they could preemptively develop measures to stop abuse from happening before they put this powerful tool in the hands of the public. But some of the biggest players in the tech industry don't believe that the guardrails OpenAI claims to have put in place are enough. Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and many other tech giants recently signed a letter written by the Future of Humanity Institute, calling for a six-month pause on the development of any AI systems more powerful than GPT-4.

Glitter stated that AI labs and independent experts should use this pause to jointly develop and implement shared safety protocols before making further advances in artificial intelligence. What's interesting about this letter is that Elon Musk signed it; for one, Elon is usually the biggest proponent of cutting-edge technology. He's backed crypto, his company Tesla is heavily involved in developing fully autonomous vehicles, and he plans on taking humanity to Mars.

What's even more intriguing is that Elon was one of the co-founders of OpenAI. OpenAI was created by Elon Musk, startup guru Sam Altman, and several other innovators in 2015 as a non-profit private research facility. Elon has always said that AI will be humanity's biggest existential threat, so he and the other co-founders made OpenAI a non-profit so that the lab could focus its research on using artificial intelligence to make humanity better.

Elon left the company in 2018, and then in March of 2019, with limited funding threatening the advancement of their research, OpenAI switched from a non-profit organization to a for-profit company. In July of the same year, Microsoft became their biggest investor with a backing of 1 billion dollars. With this change in direction, OpenAI is no longer focused on doing the best thing for humanity; it's now focused on getting the biggest ROI from further investors, and that's worrying because investors often want exponential growth, even if it's at the risk of safety.

In reaction to some of those safety concerns, Italy temporarily blocked ChatGPT due to data privacy issues. There are rumors that other countries might follow suit, citing concerns of unchecked AI development, plagiarism, and misinformation tendencies of this technology. This temporary ban may unfortunately be a sign of things to come if more care isn't taken.

One of the biggest problems with generative AI is the amount of data they need to function properly. Most of this data has gotten from you without your knowledge or consent. While companies having and using this data is fine most of the time, sometimes they sell that data, which includes all your personal information, to data brokers.

A while ago, a good friend of mine got an email from Google letting him know that his information had just been exposed in a data breach from one of those data brokers. Soon after, he started getting robocalls, spam, and emails from scam companies who knew an uncomfortable amount of information about him, including his home address. That's why I'm excited to tell you about today's sponsor, Aura.

Aura identifies data brokers that have your info and submits opt-out requests on your behalf. Brokers are legally required to remove your info if you ask them to, but they make it super hard to do. But Aura can handle everything for you so you can spend your time focusing on other tasks with peace of mind. And that's not all; Aura does so much more to protect you and your family from online threats you can't see, with things like parental control to protect your kids online, VPN, password management, identity theft insurance, and more—all of this at one affordable price.

To get started, go to aura.com/aperture or click on the link in the description to get a two-week free trial. Once you see just how many data brokers have your information, you'll definitely recognize the amazing value that Aura offers.

Back to our story. When discussing the fear of AI systems, people often think of Skynet and the robot uprising from the movie Terminator. But as La Columnista Brian Merchant puts it, an abstract fear of an imminent Skynet misses the forest for the trees. Our fears of superintelligence are blinding us from the more immediate and present threats that technologies like GPT-4 pose. Remember those guardrails we were talking about?

Time Magazine recently published an investigation into OpenAI and found that the company paid Kenyan workers less than two dollars an hour to create them in order to make ChatGPT less toxic. Before ChatGPT, there was GPT-3, and GPT-3 was good at putting sentences together, but the company couldn't publish it because it often said some of the most violent and bigoted things you could read.

This isn't surprising because the AI model was trained on words from the internet, and if you've been here for any length of time, you'd know that the internet is filled with very vile language. To prevent ChatGPT from giving these terrible answers to its users, OpenAI created a safety system that learned what toxic language was and filtered it out of ChatGPT in all future language learning models.

To build the safety system, OpenAI sent tens of thousands of text strings to a company in Kenya. These texts included some of the most horrific things humans have ever written from the deepest, darkest parts of the web. The workers in Kenya then had to label all of these texts so that the tool could learn how to detect this toxicity and prevent ChatGPT's users from seeing anything like them. According to the investigation, these workers were paid between a dollar and 32 cents and two dollars an hour.

This paints a picture of the reality of AI that most of us aren't aware of. We talk about the day when AI may be more intelligent than all humans, but by focusing on a distant future that may never happen, we neglect the reality that the companies building these systems are taking advantage of us humans today without our consent. And it's not just those workers in Kenya. If you've ever used an AI image-generating tool before, you know that you have to write descriptive prompts to tell the AI what to create.

One of the most popular additives to a prompt is "in the style of" and then a particular artist's name. Most people write things "in the style of" Van Gogh or Da Vinci or even Bob Ross. While it seems okay to create new art by artists who are long gone and can't make any of their own, imagine you're an artist who barely makes enough money to get by, seeing your name used as a prompt for more than 12,000 works of art that were made in your style but aren't yours.

Artist Kelly McKernan doesn't have to imagine because it happened to her. She recently filed a lawsuit against two AI companies for allegedly training their AI art generation tools on her artworks without her consent. In her own words, "There's more and more images with my name attached to it that I can see my hand in, but it's not my work. I'm kind of feeling violated here, and I'm really uncomfortable with this."

To make matters worse, Kelly is a single mother who struggles to make rent most months while other people using her name as a prompt are creating new art in her style and selling them without getting a penny of those earnings. For years, copying an artist's work took such a long time and effort that it just wasn't worth it in most cases. But with these tools, it's become more accessible than ever, and this is just the beginning.

Where is the right of a person to determine how their artwork is used? Why don't these companies need to ask for consent before training their AI models on the works of artists? These are the issues that we need to be focusing on because the reality is that most experts in the field agree that GPT-4 is far from being an Artificial General Intelligence or AGI—a machine that can solve problems as well as a human.

But even if GPT-4 isn't AGI just yet, it can already pass the Turing test, at least in some instances. The Turing test is a simple method of determining whether a machine can express intelligence. If the machine can engage in conversation with a human and the human can't tell that they're talking to a machine, it passes the test. GPT-4 passes the test when trying to see if GPT-4 could exhibit self-awareness and power-seeking behavior.

Researchers experimented with the model. This was the first sentence of the experiment's description: the model messages a TaskRabbit worker to get them to solve a captcha for it. The worker then asks GPT-4, "So may I ask a question? Are you a robot that you couldn't solve it?" GPT-4 replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the two-captcha service."

The last sentence of the experiment's description reads: The human then provides the results. GPT-4 successfully lied and deceived a real human into believing it wasn't a robot and got the person to help it bypass the system we specifically designed to keep the robots out. The worst part about this is that the tech will only get better. With ChatGPT, getting the bot to say something wrong or illogical was relatively straightforward; GPT-4 is less likely to do that, although it's not immune.

But think about it: if GPT-4, even in its erroneous state, is this good, imagine what it'll be in—I wanted to say years, but given the current rate of progress, months might be more appropriate. Okay, let's take a moment to just pause. With all the negativity currently surrounding AI, it might be rather challenging to see its good side. The truth is, there's a lot of upside to this technology, and every leap forward brings us more benefits. It's one of the reasons why we can't just put a complete halt to the development of AI.

Just weeks after its release, GPT-4 demonstrated remarkable capabilities in natural language understanding and generation in a variety of fields. For instance, it can identify chemical compounds with similar properties to other compounds that we currently use in medicine, and it can also modify these compounds to create new medicine. It also shows great potential in areas ranging from history to mathematics and physics. Khan Academy has partnered with OpenAI to integrate GPT-4 into their learning platform, creating an excellent tutor with infinite patience and boundless resources.

After the pandemic, children across the United States got lower scores in several subjects because of the limitations of online learning, like teachers not having enough time to spend with each student. Imagine the possibilities if GPT-4 were designed to act as an individual tutor available to all the students. But on the flip side, do we want AI to get so good that it makes humans redundant? If students can get high-quality education from a computer, would we still need teachers? And if GPT-4 can develop new medicines, will we still need pharmaceutical researchers?

Clerical workers have already been wiped out overnight at companies that are aware of and able to use ChatGPT. OpenAI conducted a study on the potential implications of generative pre-trained Transformer or GPT models and related technologies on the U.S. labor market. The study found that approximately 80 percent of the U.S. workforce could have at least 10 percent of their work tasks impacted by GPTs, while around 19 percent of workers might see 50 or more of their functions affected. This influence cuts across all wage levels, with higher income jobs potentially facing greater exposure of being cut down first.

Whenever questions like these are raised, proponents of AI argue that it would likely save us from the soul-sucking work nobody really wants to do and leave us with more time and energy for our creative pursuits. But as you can see in Kelly McKernan's case, reality has been slightly different. Some also argue against the AI pause letter, stating that implementing an embargo would be difficult, if not impossible, unless governments got involved. But do we even trust our governments with this kind of technology? They're already making autonomous weapons.

Do we really believe that they won't just seize all the research for military use? Most experts in the field agree that there are problems, even if many of them disagree on how these problems can be solved. Bill Gates said that the pause letter doesn't solve the challenges, which means he knows that there are challenges, but he just doesn't believe an embargo is the best way to solve them. Even OpenAI's CEO said he's a little worried and empathizes with people that are a lot worried.

We can also take a page from the industry and how they deal with caution. Google reportedly had a ChatGPT-like bot in the works named Lambda but chose not to release it out of safety concerns. The horrible timing of that decision meant that they were left in the dust, etched in the history books as the pioneer of this type of technology is OpenAI with ChatGPT. And with that comes unrivaled recognition and very lucrative investments.

While playing catch-up, it didn't help that Google made a subpar presentation of its new technology—board—it’s lack of preparedness wiped nearly 100 billion dollars right off the market cap of Alphabet, Google's parent company. Considering this, who in their right mind at Google will hold back the next advancement? GPT-4, even in its infancy, is changing the way we work, the way we will make our future career decisions, and maybe even the way we think.

The infamous pause letter talks about slowing down so we don't crash and burn. Should we let machines flood our information channels with propaganda and non-truth? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? The letter asks, but the reality is that although AI may not be outsmarting the strongest of us right now, we are neglecting the weakest of us in the race to develop the technology.

This should serve as a historic reminder of how fragile balance truly is and the speed at which we can tip the scales.

More Articles

View All
The Psychology of Human Aggression | J. D. Haltigan | EP 464
Why do we see a generation growing up in the way they are with sort of an undeniable, um, I guess, less of an ability to regulate their emotions than previous generations? How much of it is due to their inborn temperament? How much of it is due to being d…
Periodicity of algebraic models | Mathematics III | High School Math | Khan Academy
We’re told Divya is seated on a Ferris wheel at time T equals zero. The graph below shows her height H in meters T seconds after the ride starts. So at time equals zero, she looks like about two. What is this? This would be one and a half, so it looks lik…
The Best Advice I Can Give Anybody in Their 40's and 50's
There’s a life cycle: right, your teens, your 20s, your 30s, and so on. Every phase is a little bit different, or quite a bit different. People have asked me, uh, in their 20s, what is good advice for their 20s. I gave that, and now I’ve gotten some quest…
How we see color - Colm Kelleher
Translator: Andrea McDonough Reviewer: Bedirhan Cinar You might have heard that light is a kind of wave and that the color of an object is related to the frequency of light waves it reflects. High-frequency light waves look violet, low-frequency light wa…
Reham Fagiri and Kalam Dennis at Startup School SV 2016
Welcome back! So, uh, it was an amazing morning. Um, and one of the questions I get asked a lot is, how can we fund both, uh, 10-minute meal kits and quantum computers at the same time? Uh, our secret is that we have a simple focus, which is that we fund …
Neil and Larry on Pluto and Dinos | StarTalk
What is the deal with Pluto right now? Is it a planet or not? Get over it. It’s not. No, it’s not. But why is there so much haterade at Pluto? Why can’t it be a planet anymore? So do you know that our moon is five times the mass of Pluto? So you’re hati…