
This is How The World Ends


53m read
·Nov 4, 2024

First, you have to know what happens when an atomic bomb explodes. We hope the day never comes, but it pays to be ready. The story starts in 1947, when an international group of researchers known as the Chicago Atomic Scientists began publishing a magazine titled the Bulletin of the Atomic Scientists. Many of these scientists had previously worked on the Manhattan Project, the World War II research and development undertaking tasked with producing the world's first nuclear weapons. That project succeeded, producing the two atomic bombs known as Fat Man and Little Boy.

The cover of the first edition of the bulletin included a clock in the background with the minute hand at 7 minutes before midnight. To many people, this may just seem like another normal clock, not representing anything special, but this clock is quite the opposite. This isn't any normal clock; this clock represents utter anthropogenic global catastrophe, which might be confusing to hear. So, let me word it like this: the clock represents total obliteration by the hands of mankind.

Since its first appearance in 1947, the Bulletin has periodically updated the minute hand of the clock. The hand can move both forward and backward, and it does so according to many factors: nuclear tension, climate change, and more. In 1947, the clock stood at 7 minutes to midnight; in 1963, 12 minutes; in 1991, 17 minutes. But in 2018, we aren't 20 minutes away; we aren't 10 minutes away; we aren't even 5 minutes away. We're 2 minutes away: 2 minutes from midnight, 2 minutes from complete obliteration.

Today, when discussing the destructive power of nuclear weapons, asteroids, or any large-scale detonation, we tend to measure them by how much TNT would be needed to produce an equivalent explosion. For example, Little Boy and Fat Man, the bombs dropped on Hiroshima and Nagasaki during World War II, were at the time the most powerful and most destructive weapons ever made. Fat Man was a plutonium bomb and had a blast yield of over 21 kilotons, over 21,000 tons of TNT. Little Boy, on the other hand, was a uranium bomb and had a blast yield of over 15 kilotons.

However, these explosions didn't come close to their full potential. See, Fat Man was packed with about 6.2 kg of plutonium, but when it detonated over the city of Nagasaki, only about 1 kg of that material actually fissioned, just a little over 16%. Not bad. Little Boy, on the other hand, was a lot worse. It contained over 64 kg of enriched uranium; however, when it was dropped over Hiroshima, less than 1 kg of that uranium actually fissioned, a little over 1%. Even though neither of these bombs lived up to its full potential, the fissioning of less than 2 kg of plutonium and uranium was enough to kill over 200,000 people.
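Those efficiency percentages are simple ratios of mass fissioned to mass loaded. A quick sketch of the arithmetic, using commonly cited historical estimates (about 6.2 kg of plutonium in Fat Man and 64 kg of enriched uranium in Little Boy; these are approximations, not exact declassified figures):

```python
# Rough fission efficiency of the two WWII bombs, using commonly
# cited historical estimates rather than exact declassified values.
def fission_fraction(fissioned_kg, total_kg):
    """Fraction of the fissile core that actually underwent fission."""
    return fissioned_kg / total_kg

fat_man = fission_fraction(1.0, 6.2)      # ~1 kg of a ~6.2 kg plutonium core
little_boy = fission_fraction(0.9, 64.0)  # <1 kg of a ~64 kg uranium core

print(f"Fat Man:    {fat_man:.1%}")    # roughly 16%
print(f"Little Boy: {little_boy:.1%}") # roughly 1.4%
```

The striking point is how inefficient both designs were: even at a few percent efficiency, the energy released leveled two cities.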

People used to live in these cities, and then, in the blink of an eye, they became ghost towns. That was 1945, and since then, over 125,000 nuclear weapons have been built, many of them far more powerful than Fat Man and Little Boy combined. In 1961, the Soviet Union detonated a hydrogen bomb over the remote Arctic archipelago of Novaya Zemlya. This bomb had a yield of not 15,000 tons, not 21,000 tons, not even 100,000 tons: it had a yield of over 50 million tons of TNT.

The bomb has gone by many names: Project 7000, product code 202, RDS-220. But most people know it as the Tsar Bomba. When the Tsar Bomba was detonated, the mushroom cloud from the explosion rose out of the layer of the atmosphere we live in and stretched far into the mesosphere. The cloud was over 64 km high, roughly seven Mount Everests stacked on top of each other. The explosion could be seen from over 300 km away, and the shock wave from the blast broke window panes nearly 1,000 km from the explosion.

The shock wave circled the planet three times before it finally died out, and this wasn't even the most powerful version. The bomb was initially planned as a 100 megaton device, which amounts to 100 million tons of TNT. That design was eventually decided against, as a full 100 megaton explosion could easily have sent the world into a global nuclear winter. The fact that such a bomb was designed and nearly became a reality is truly terrifying.

Superimposing a 100 megaton explosion over popular cities around the globe really puts the destruction into perspective. If just 10 of these bombs were detonated over the world's most populated cities, they could cause almost as many deaths in a single day as every war of the 20th century combined. From just 10 bombs.

Today, however, in 2018, there are an estimated 15,000 nuclear weapons in the world on standby, waiting to be launched at any moment. And if or when these bombs are used, it won't be just one; it won't be 10; it will be all of them. See, if two countries with access to nuclear weapons end up in a conflict, and that conflict escalates to the point where these nukes are used, there's no reason for either side to hold back. Because if one side fires a nuke at the other, why would the other side send only one back?

There isn't some written rule that says you have to play fair; quite the opposite. Studies have been conducted on the idea of a regional nuclear war between Pakistan and India. Not only would the death toll in that region be staggering; the effects would reach every last person on the planet. These countries have about 200 nukes combined. If all of these weapons were used and aimed at the right cities, you could likely see upwards of 100 million casualties in those two countries alone.

The fallout from these weapons, that is, the radiation spread through the air after a bomb goes off, would be enough to finish off most of the population of those countries, if anyone were even alive after the initial blasts. But that's only the beginning of the problem. The smoke and dust from these bombs would spread much further than just India and Pakistan. Nuclear fallout would make its way into the atmosphere and end up coating the planet in a layer of smoke and dust.

This would ultimately block out most of the sun's rays that reach the planet, causing the failure of the plants and crops that humans and all other life need to survive. Black rain would fall from the sky, rain polluted with radiation and dust. The fallout from these bombs would cause even more casualties than the initial explosions themselves. And all that from only 200 nukes. Compare that 200 to the total number of nuclear weapons in the world today, and you'll find it's barely over 1%.

1% of the entire nuclear arsenal of the planet, in the perfect circumstances, is enough to cause over 1 billion deaths. In 2008, at Oxford University, the Future of Humanity Institute conducted a survey among participants at the university, asking them to make their best guess at the chance of possible extinction-level events before 2100.

The results of this survey covered the threat of nuclear Armageddon, placing the odds of at least 1 billion people killed by nuclear weapons by 2100 at 10%, and the odds of complete human extinction by nuclear weapons at 1%. 1% doesn't seem that high, a one in 100 chance, but think about this: the chance of you dying in an airplane accident is 1 in 11 million. The chance of you dying from a lightning strike is 1 in 84,000. By these estimates, humanity being wiped out by nuclear weapons is more likely than you dying from almost anything other than heart disease and cancer.
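To make that comparison concrete, here is the arithmetic laid out explicitly, using the odds exactly as quoted in the survey discussion above (taken at face value, not independently verified):

```python
# Compare the survey's 1% nuclear-extinction estimate with the
# everyday odds quoted in the text (figures taken as given).
odds = {
    "airplane accident":            1 / 11_000_000,
    "lightning strike":             1 / 84_000,
    "nuclear extinction (by 2100)": 1 / 100,
}

# List events from most to least likely.
for event, p in sorted(odds.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{event}: 1 in {round(1 / p):,}")

# How many times likelier is the 1% estimate than death by lightning?
print(round(odds["nuclear extinction (by 2100)"] / odds["lightning strike"]))  # 840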

This 1% assumes the worst scenario possible. The conflict you hear about more often involves the United States and Russia, which together own over 90% of the nukes that exist today. If for some reason these two countries went to war with each other, the results definitely wouldn't be good. Should the United States and Russia enter a full-scale nuclear war, with both sides throwing everything they have at each other, it may as well be the end of civilization as you know it. See, the goal of a war of that magnitude isn't for either side to win; it's to make sure the other side loses.

And the best way to make sure the other side loses is not to wait to see them standing next to you on the podium, but to take them out of the picture completely, and that's exactly what a war like this would entail. On top of the massive number of nukes dropped on major cities across both countries, agricultural centers, hospitals, schools, and many other facilities would be destroyed to ensure that each side endures the most suffering.

Both sides know that this isn't a game with a pause button; once you pick up the controller, you can't put it back down. Once a nuclear warhead is confirmed to have been launched at either side, the idea of a nuclear holocaust becomes a reality. Today, the Doomsday Clock is the closest to midnight it has ever been, and should we leave the issues at hand unchecked, the clock will inevitably keep ticking forward.

In Reasons and Persons, philosopher Derek Parfit posed the following question. Compare three outcomes: one, peace; two, a nuclear war that kills 99% of the world's population; three, a nuclear war that kills 100%. Two is worse than one, and three is worse than two, but which of the two differences is greater? If we find ourselves in a full-scale nuclear war, is it even worth trying to survive? Because in the end, if you survive, no one will be there waiting for you.

The global nuclear winter that would follow would leave almost every major city uninhabitable for years to come. The 1983 movie WarGames is set in America in the midst of the Cold War and explores possible outcomes of nuclear wars of various magnitudes. In it, there is a quote: "The only winning move is not to play." And I find that it really fits the situation. The only way to survive and come out on top at the end of the day is not to play the game in the first place.

On the 5th of February, 1958, a Mark 15 thermonuclear bomb was loaded onto a B-47 aircraft stationed at Homestead Air Force Base in Southern Florida. The plane was to take part in an extended training mission meant to simulate an attack on the Soviet Union. Over the course of 9 hours, it took a winding, circuitous route across the United States, flying over the Gulf of Mexico up to Chicago, then back down to its target in Radford, Virginia.

The intent was both to exhaust the crew, as a real mission to Moscow would, and to assess the bomber's ability to perform aerial maneuvers with heavy weapons aboard. The 3,400 kg nuclear bomb served as test cargo. This might seem incredibly reckless to us today, but during the height of the Cold War, training exercises like this were fairly routine. In fact, just 2 years later, the United States would implement Operation Chrome Dome, a defense strategy in which planes armed with nuclear weapons were kept in the skies at all times, just in case war with the USSR ever broke out.

Once above its target in Radford, the B-47 sent an electronic beam to a station on the ground, simulating a strike, after which it recorded the mission as a success. Unfortunately, on their way back to Florida, there was a bit of a mix-up. As the crew celebrated, a group of F-86 fighter jets was flying out of Charleston Air Force Base in South Carolina. The jets were directed to intercept the bomber as part of a separate training mission, but what the pilots didn't know was that there were actually two B-47s in the air above Tybee Island, off the coast of the state of Georgia.

The Charleston squadron spotted its target, and one of the intercepting jets locked onto what its pilot believed to be the sole bomber. A second aircraft appeared above him, and the two collided. This was the plane with the bomb. The F-86 was destroyed, and its pilot was forced to eject. The B-47, meanwhile, managed to stay airborne, though just barely. It had been heavily damaged and was losing altitude. Deciding it was better to drop the cargo than risk a crash landing, the bomber's pilot jettisoned the nuclear payload, releasing a bomb with an explosive yield equivalent to 1.7 megatons of TNT.

For some perspective, that's around 113 times the strength of the bomb dropped on Hiroshima. The blast radius from a weapon of this magnitude would have encompassed an area of around 620 km², just slightly smaller than New York City. After being deployed, the bomb fell for over 9,000 m before eventually crashing into Wassaw Sound. Luckily, it didn't detonate. If it had, thousands would have died instantly in the city of Savannah, less than a 30-minute drive from the crash site, which would likely have become uninhabitable due to radioactive fallout.
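The "113 times Hiroshima" figure is just the ratio of the two yields once both are expressed in the same unit. A minimal check, using the yields as stated here (1.7 megatons for the Mark 15, roughly 15 kilotons for Little Boy):

```python
# Ratio of the Tybee Island bomb's yield to the Hiroshima bomb's,
# with both expressed in tons of TNT (yields as cited in the text).
mark15_yield_tons = 1.7e6  # 1.7 megatons
little_boy_tons   = 15e3   # ~15 kilotons

ratio = mark15_yield_tons / little_boy_tons
print(f"~{ratio:.0f}x the Hiroshima bomb")  # ~113x
```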

A recovery mission was immediately organized; a joint Air Force and Navy operation searched this area for 10 weeks, but the weapon was never located. Experts believed it had sunk beneath several meters of silt on impact, effectively rendering it invisible to sonar. Subsequent recovery attempts have produced few results, and to this day, a bomb capable of vaporizing an entire city is buried somewhere just off the U.S. coast.

The story of the Tybee Island bomb is terrifying in its own right, but you want to know the worst part? It's not the only time this has happened. A similar thing occurred in 1965 when a training exercise being held on the USS Ticonderoga went horribly wrong. An attack jet carrying a live 1 megaton nuclear bomb was being rolled into one of the ship's plane elevators when it began to tilt. Before anyone knew what was going on, the jet rolled off the side of the aircraft carrier and fell into the Pacific Ocean, taking the pilot with it. The plane, pilot, and bomb were never recovered.

In modern history, there have been 32 reported Broken Arrow events, which are accidents involving nuclear weapons. These range from crashes and unintended detonations to fires, accidental launches, and potential meltdowns. In six of these incidents, the weapon was never recovered. Besides the six missing bombs, a number of nuclear-tipped torpedoes, primarily Soviet, have also disappeared or are otherwise unrecoverable. While only possessing a fraction of the destructive capability of a bomb or missile, these weapons still contain highly radioactive materials and could potentially leak contaminants into the surrounding ocean, killing sea life.

You might be wondering how military officials and world leaders could simply abandon missing nuclear weapons. Shouldn't governments be scrambling to recover these devices? Unfortunately, it's not that simple. All six of the missing bombs were lost at sea, whether in Wassaw Sound or deep in the Pacific. Simply figuring out the exact location for even one of these weapons would likely require hundreds of people searching for months, if not years. Looking for something you dropped in the ocean is like looking for a needle in a haystack, only the haystack is hundreds of kilometers wide, shifts constantly, and requires the use of advanced technology to search it. There are no points of reference or any kind of markers to guide you.

Worse than that, stuff that falls into the ocean tends to move around. Currents and tides can carry even the heaviest objects kilometers before they finally come to rest on the sea floor. And all of that is just to find the bomb; recovery is a whole different story. According to experts, digging up these weapons could prove more dangerous than simply letting them stay where they are.

Take the case of the Tybee Island bomb. Today, there have been no unusual spikes in radiation detected in the area. This means, in all likelihood, the materials inside the bomb are still contained. Any attempt to retrieve it would risk potentially spilling toxic plutonium and uranium into the ocean. So, even if you found it, it would be too dangerous to dig up.

So, it's safer to leave it alone. But that's only assuming the device is disabled. It's normal protocol to remove the triggering mechanism from a nuclear weapon when it's being used in a training mission, and the Air Force maintains that this was the case in the Tybee Island incident. However, former Assistant Secretary of Defense W.J. Howard described the bomb in a 1966 Congressional testimony as "a complete weapon"—a bomb with a nuclear capsule. The fact remains that no one is really sure whether or not the bomb was disabled.

There could potentially be an active thermonuclear weapon sitting just off of America's eastern seaboard. If you're freaking out about it, that's understandable. The idea that there are missing weapons capable of killing millions of people hiding somewhere beneath the ocean is chilling to say the least. But I'm afraid I have some more bad news because here's the thing: these are just the bombs we know about.

The vast majority of Broken Arrow incidents have been recorded by the United States, but there are eight other states known or widely believed to possess nuclear weapons: Russia, China, the United Kingdom, France, India, Pakistan, Israel, and North Korea. While the U.S. has historically been at least semi-transparent about its nuclear mess-ups, these other nations have been much more secretive. We know very little, if anything at all, about their nuclear programs and almost nothing of any mishaps.

It's likely that we don't even have a full account of every Broken Arrow incident. Russia, in particular, has a tremendously checkered past when it comes to nuclear technology. Aside from the obvious example of Chernobyl, there's also the case of the Tsar Bomba, the largest nuclear weapon ever detonated. This device had a 50 megaton yield, producing shock waves that were felt around the globe and a mushroom cloud roughly seven times the height of Mount Everest.

The USSR maintained the largest stockpile ever held by a single nation, with an estimated 45,000 weapons at its peak. Given the size of this arsenal and the historic recklessness of the Soviet military when it came to nuclear technology, it's not a stretch to assume they probably lost a bomb or two. Other countries, such as China and North Korea, are even more secretive about their nuclear weapons programs. Both nations are currently looking to expand their arsenals, with the latter rumored to be on the cusp of resuming testing.

Again, when it comes to the U.S., figures could very well be underreported for reasons we might not be aware of. In the case of the USS Ticonderoga bomb, it took the U.S. Navy 15 years to admit the incident even happened. Even worse, at the time, they claimed the bomb had been lost over 800 km away from land, but it was later discovered it had disappeared just 109 km from Japan's Ryukyu island chain.

It's possible, then, that there could be dozens of lost nuclear devices lurking in unknown locations around the world, their presence known only to a handful of top officials. The danger these weapons pose shouldn't be understated. According to a declassified 1947 study from the Los Alamos laboratory, somewhere between 10 and 100 nuclear detonations could be enough to end society.

We know we've lost six, but how many are we truly missing? Granted, it's nearly impossible to imagine a scenario where all of these weapons would explode simultaneously, but even if one were to detonate, the scale of devastation, suffering, and environmental catastrophe would be unimaginable. It would be a disaster unparalleled in history.

The sad reality is that through sheer recklessness, our leaders have opened the door to the possibility of causing ourselves irreparable harm, both to humanity and the planet. The majority of the world's nuclear stockpiles were built during the Cold War, as the United States and the Soviet Union sought to intimidate one another and flex their military might. This arms race eventually culminated in a strategy of mutually assured destruction, otherwise known as MAD—a concept that saw both nations amass thousands of nuclear weapons as a kind of safeguard against any potential conflict.

The thinking was that if both sides had enough firepower to wipe the other off the face of the planet, then neither would pull the trigger. But humans are clumsy; we make mistakes. Even when every possible precaution is taken, accidents still happen. Just look at the Titanic, the space shuttle disasters, or Fukushima. Nothing is a better testament to this than the nuclear strategies adopted by governments around the world.

Operation Chrome Dome, for instance, resulted in five separate crashes: a program meant to serve as a guarantee against nuclear disaster nearly led to tragedy itself. The most infamous of these is the 1961 Goldsboro B-52 crash, now the subject of innumerable history specials and YouTube videos. This incident has become something of a legend, not for what happened, but for what didn't happen.

Perhaps the most alarming Broken Arrow incident ever recorded, the Goldsboro crash occurred when a B-52 bomber flying over Goldsboro, North Carolina, suffered a mechanical failure. The aircraft's right wing sprung a fuel leak, and the plane was ordered to make an emergency landing just 24 km from its base. The bomber began breaking up and exploded in midair, sending fiery debris hurtling towards Earth along with two 3.8 megaton thermonuclear bombs. Each of these devices was more than twice as powerful as the Tybee Island bomb and contained more firepower than the combined destructive force of every man-made explosion from the beginning of time to the end of World War II.

Luckily, neither bomb detonated. One landed safely, slowly brought to the ground after its parachute successfully deployed. The other, more disastrously, crashed with full force, digging itself 55 m down into the Earth. While pieces of this device were recovered, the bomb's plutonium core still remains buried. The U.S. military initially attempted to retrieve it but abandoned the operation when it proved impossible to reach and simply purchased the land instead.

Ironically, it was the bomb that landed by parachute that posed the bigger risk. A later examination revealed that three of its four arming mechanisms had activated after separation. Literally the only thing that prevented the weapon from going off was the failure of two wires to cross. All of this raises the question: is it worth it? The idea that we could somehow ensure our survival by constructing hordes of weapons capable of annihilating the planet seems like a bad idea from the start when you consider that any one of these devices could be lost or detonate accidentally.

Such a strategy appears downright irresponsible. Yes, MAD might prevent the world powers from pressing the proverbial button that leads to nuclear war, but what about all the broken arrows? Is it really worth the gamble, or are we just two crossed wires away from, well, mutually assured destruction?

Up until I was like 15, the way I found new music was through friends or songs I’d hear in the background of my favorite TV shows or movies. This can be a really slow process if you, like me, have a somewhat unconventional taste in music, and so it was no surprise that I would only add a few new songs to my playlist every few months. As of recent years, that's definitely changed.

You see, Spotify has been able to identify my tastes remarkably well. With its Discover Weekly and year-end playlists, Spotify seems to know what I like better than some of my closest friends. It follows a similar trend of surprising improvements in the fields of natural language processing and machine learning. So, when did Spotify and other apps get this good, and what does it mean for the future of technology? Advances like these seem to be occurring at a surprising rate, or at least that's how it feels.

After all, technology is progressing exponentially, and we humans are ill-equipped when it comes to visualizing or imagining such growth. We simply never evolved to do so. Animals, predators, and prey all move at a relatively constant rate; they don't keep accelerating. Technological progress, however, does. As it turns out, this exponential growth means we might be stepping into some very uncharted territory in the near future.

If technology continues to get better and better at its current pace, we will soon reach a stage where it not only matches but surpasses the intelligence of a human. Couple that with an ability to learn and an incentive to survive, and well, we don't know what will happen next. This is the technological singularity. Borrowed from astrophysics, the term "singularity" refers to a tipping point beyond which all laws that are currently known simply fall apart.

Like how the laws of physics fall apart beyond the singularity of a black hole, a technological singularity is a similar tipping point, one where technological progress is so overwhelming that we will no longer be in control of it or of the things it leads to. Around the turn of the 20th century, Karl Landsteiner, an Austrian biologist, noted that when a person received a transfusion of incompatible blood, the foreign blood tended to clump up in the blood vessels, which can cause shock and ultimately death.

This and years of research that followed led him to discover blood groups, for which he was awarded the Nobel Prize in 1930. Today he is remembered as the father of blood transfusion medicine, and we have him to thank for being able to donate and receive blood safely. When there is a technological singularity, scientists predict computers will be able to make life-changing, Nobel Prize-winning discoveries just like this every five seconds.

That may seem like an incredible future or potentially life-threatening one, and that's exactly why the prospect of a technological singularity is so complicated. On one hand, it may seem like rapidly progressing technology can eventually enslave humanity, but it also has an immense potential to improve human life, and this potential is the reason why it is being developed so rapidly.

There are enormous incentives to devote even more resources to the development of artificial intelligence, economic and otherwise. For example, it can help companies curate products each customer is more likely to buy. It can predict when demand is going to be low to prevent waste. It can also conduct research faster than any human ever has.

These innovations could lead to other, less inspiring changes in human society too. After all, if scientific research can be done by a computer, what use is there for researchers anymore? If cars can drive themselves, nanobots can repair organs, and 3D printers can literally print bridges, are all jobs simply going to be replaced? Well, in its current state, the technology we have is only good enough to replace repetitive labor, such as attaching a car door to its chassis. For most things more complicated, we still need human intervention.

But it's not about now anyways; it's about the future. And without thorough consideration, we may be headed for unemployment the likes of which humanity has never seen. And if recent events haven't made it clear, it's not just about the economy or salaries, but also about the meaning that most of us tend to derive from our work. You know, not doing anything, as it turns out, is really, really boring.

Okay, sometimes not doing anything is nice. But we've established that technological progress is not slowing down anytime soon. What happens when computers replace not only our labor but also our intellect? What happens when they can mimic intelligence and learn on their own? All of this could lead to a scenario where technology is not so friendly to us and, instead of just replacing us, decides to do away with us completely.

And in such a situation, without much preparation, we would be completely powerless. To hypothesize what would happen to our species during such an event, scientists decided to look at what history tells us about how a more intelligent species, such as humans, treats its less intelligent counterparts. Take monkeys: the same monkeys that we caged up, killed, and ran any and all tests on without ethical qualms until very recently.

Yes, those monkeys! Sam Harris provides an analogy in this regard to help us visualize how we might be treated based on our own past behavior. He draws on the relationship we humans have with ants: we don't hate them; we don't go out of our way to harm them. In fact, sometimes we take pains not to harm them; we step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, we annihilate them without a thought.

This rather troubling thought has a lot of people concerned about the way in which we should be progressing toward the singularity. The best thing we could do as we head into the singularity is to ensure that AI develops into an ethically sound ecosystem instead of using it to spy, scam, and steal from people, which in reality is what it is currently being used to do.

Then there are also concerns about defining when the singularity has been reached. What is consciousness? How do we know when machines have it? What is intelligence? What is of value to us? What is art, and what is not? All these questions need to be answered for us to know when machines are indeed more intelligent.

This could trigger a modern Renaissance, one that is not simply technological but also philosophical, in that it forces us to try to define the human experience like never before. It can also help us reflect on what we, as the most intelligent species, have done to our planet and its other inhabitants.

But how near is all of this? Ray Kurzweil, renowned inventor and futurist, has said we may reach the singularity by 2045. Ray attributes this oddly specific date to what he calls the price-performance of computation per constant dollar. Look, I'll explain: he plotted these numbers from 1980 through 2050, and in both 1981 and 2015, the actual figures were roughly where his curve predicted them to be.
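Whatever the exact date, Kurzweil's reasoning rests on compounding: anything that doubles on a fixed cadence grows explosively. A toy illustration (the two-year doubling period and the 40-year horizon are illustrative assumptions, not Kurzweil's actual parameters):

```python
# Toy model of exponential price-performance growth: steady doubling
# on a fixed cadence. The two-year period is an illustrative,
# Moore's-law-style assumption, not Kurzweil's exact figure.
def growth_factor(years, doubling_period_years=2.0):
    """Multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# Over 40 years at this cadence, compute-per-dollar improves by:
print(f"{growth_factor(40):,.0f}x")  # 2**20 = 1,048,576x
```

Linear intuition would guess "20 times better after 40 years"; the exponential answer is over a million, which is exactly the gap in human intuition the previous section describes.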

Others are skeptical of such claims. Most notable amongst them is Noam Chomsky, widely regarded as the father of modern linguistics and one of the most cited scholars alive. What makes this perspective interesting is that he perhaps possesses a deeper understanding of language than most of us. This is a very important part of creating a generally intelligent machine since understanding how we communicate with other humans will help us communicate with other potentially conscious machines.

His perspective is that we are nowhere near where we need to be in our understanding of the cognitive processes that go on consciously and subconsciously to be able to mimic them. Can we even define a theory of what it means to be smart, he asks? He is certainly right about the complexity of human language and the nuances of what we say versus what we mean. It's why things like irony, sarcasm, and rhetorical questions are still unsolved puzzles in the world of AI research.

But do we really need to be able to mimic the entire process if we can simply reproduce the effect? What if we are simply able to reprogram some aspects of learning and let computational ability take care of the rest? Max Tegmark, an American-Swedish physicist from MIT, is interested in investigating the risk of extinction from artificial general intelligence.

He likes to use the analogy of when man first discovered fire—it was a wonderful discovery that has paved the way for modern life, but it hasn't always been safe. It's caused a lot of death, pain, and suffering in the process. But we are where we are because we were able to learn from our mistakes and devise things like fire escapes and fire extinguishers.

AI might be the same at the start, but the one difference here is that we only have one shot—it's all or nothing. If AI lights a fire, we may never get the chance to put it out and learn for next time. But there are critics who doubt that this is how the future will actually play out. Unlike Chomsky, they don't doubt the exponential progress or its ability to mimic human-like cognition in the future; instead, they doubt that the future will be so aggressively against our survival.

The technological singularity may well be coming, they argue, but we shouldn't fear it; instead, we should embrace the progress it can bring. Such is the perspective of Garry Kasparov, widely considered the best chess player of all time. He was on the other side of the board when IBM's Deep Blue finally beat humanity's best at a game it invented. Garry believes that instead of seeing this as a man-versus-machine contest, we should take it as an opportunity to realize the potential of augmentation.

That's definitely more comfortable to think about, considering we've been augmenting ourselves with technology for decades now: first with things like calculators, and more recently with mobile phones. A recent Apple keynote mentioned that Apple's CPUs have gotten over 100 times faster in recent years, and its GPUs over a thousand times faster.

The singularity doesn't have to be an apocalyptic mess where every moving piece of metal is trying to kill us. It doesn't have to be a deceptive reality where no one and nothing can be trusted. It can simply be a reality where we're free to explore other dimensions of the human experience and engage in our creative pursuits—not to put food on the table, but simply for the sake of them.

In both of the major criticisms against the idea of a technological singularity, neither party actually denies that it's coming. They're either saying it's not coming anytime soon or that it's not going to be as bad as we think it is when it does. Or, you could argue that the technological singularity is nearer than we think, and it's going to be much, much worse than we anticipate.

If so, does it really matter how we prepare or what policies we come up with? Whatever the answer, one thing remains the same: it's coming. And if I were you, I'd get ready. It could be terrible, and it could be great; it's not clear right now. But one thing is for sure: we will not control it.

Go is arguably the most complex board game in existence. Its goal is simple: surround more territory than your opponent. The game has been played by humans for the past 2,500 years and is thought to be the oldest board game still being played today. But it's no longer only humans playing: in 2016, Google DeepMind's AlphaGo beat 18-time world champion Lee Sedol in four out of five games.

Now, normally, a computer beating a human at a game like chess or checkers wouldn't be that impressive, but Go is different. Go cannot be solved by brute force. Go cannot be predicted. There are over 10^170 possible board positions in Go. To put that into perspective, there are only about 10^80 atoms in the observable universe.

AlphaGo was trained using data from real human Go games; it ran through millions of games and learned the techniques used and even made up new ones that no one had ever seen. This is very impressive alone; however, what many people don't know is that only a year after AlphaGo's victory over Lee Sedol, a brand new AI called AlphaGo Zero beat the original AlphaGo—not in four out of five games, not in five out of five games, not in 10 out of 10 games, but beat AlphaGo 100 to zero—100 games in a row.

The most impressive part? It learned how to play with zero human interaction. This approach is more powerful than any previous version. Why? Because it isn't restricted by human knowledge: no data was given, no historical games were provided. With just the bare-bones rules, AlphaGo Zero surpassed the previous AlphaGo in only 40 days of learning.

In only 40 days, it surpassed over 2,500 years of strategy and knowledge. It only played against itself and is now regarded as the best Go player in the world, even though it isn’t human. But wait! If this AI learned how to play without any human interaction and made up strategies of its own, and then beat us with those strategies, then that means there is more non-human knowledge about Go than there is human.

And if we continue to develop artificial intelligence, then that means there's going to be more and more non-human intelligence. Eventually, there's going to be a point where we represent the minority of intelligence, maybe even a very minuscule amount. That's fine, though; we can just turn it off, right? It's a thought, but think: when modern-day humans began to take over the planet, why didn't the chimps and the Neanderthals turn us off?

If this artificial intelligence becomes superintelligent and learns through and is connected to the internet, well, we can't just shut down the entire internet. There's no off switch. So what happens if we end up stuck with AI that is constantly and exponentially getting smarter than we are? What if it gets to a point where us humans get in the way, and the AI hits the off switch on humanity?

When people think of AI, they tend to think of superintelligent AI—AI that serves the human race but could also end us at a moment's notice. But is this really going to happen? Pop culture, mostly movies, tends to depict AI not as benevolent creations but rather as robots with malicious intent. Except for TARS in Interstellar—he's pretty cool! Yeah, you can use him to find your way back into the ship after I blow you out the airlock.

There's a lot more to AI than you might think; there are many more types, serving different purposes. Artificial narrow intelligence, also known as weak AI, is the only form of artificial intelligence that humanity has created so far. That makes it sound kind of bad, but trust me, it does a really good job at what it's supposed to do.

Narrow AI is AI that is created for the sole purpose of handling one task. It's the kind of AI that AlphaGo is. Narrow AI is good at speech and image recognition and also at playing games like chess or Go or even pretty complex games like Dota 2. At The International 2017 world championship, OpenAI's artificial intelligence destroyed pro player Dendi 2-0.

But much like AlphaGo, it wasn't taught how to play the game. It played out millions of years worth of one versus one matches against itself and learned on its own. It started out barely knowing how to walk, and eventually, as time went on, it surpassed human-level skill. If you use Spotify, you'll see that it creates daily mixes for you based on the music you listen to.

Amazon learns from you, teaching itself your buying habits to suggest new products to you—something practically impossible to do manually. It can even predict when demand is going to be low, to prevent waste. So how is any of that possible? Through something called machine learning.

Machine learning is the science of trying to get computers to learn and think like we humans do. Machine learning is essentially the same way that babies learn. We start off as small, screaming sacks of meat, but over time we improve our learning. We take in more data and more information from observations and interactions, and most of the time, we end up pretty smart.

The most popular technique out there to make a computer mimic a human brain is known as a neural network. Our brains are pretty good at solving problems, but each neuron in your brain is only responsible for solving a very minuscule part of any problem. Think of it like an assembly line, where each neuron in your brain has a certain job to do in order to completely solve a problem.

Let's work through a simple example of a neural network. To say that someone is alive, they must either have a pulse or be breathing, but not necessarily both at the same time. The yellow dot represents a neuron in a neural network. It functions much like a neuron in your brain: it takes in information and gives an output.

If this neuron takes in information saying, "Hey, this person has a pulse and is breathing," it processes that information and says, "Okay, this person is alive." It learns to recognize situations where the person would be declared alive—when they're breathing or have a pulse—and situations where they'd be declared dead—when neither is true.

That's essentially a barebones explanation of how it works. Of course, no neural network is really this simple; many have millions of parameters and are much more complex than just this one-layer network. The world is full of sounds and visuals and just data in general, and we take in all of this to form our view of reality.
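The alive-or-not rule above can be sketched as a single artificial neuron. This is a minimal illustration, not the exact network from the text; the weights and threshold below are values I've chosen so the neuron implements the "pulse OR breathing" rule:

```python
# A single artificial neuron: weighted sum of inputs compared to a threshold.
# Weights and threshold here are illustrative, picked to realize an OR rule.

def neuron(has_pulse: bool, is_breathing: bool) -> bool:
    """Fires (returns True, i.e. 'alive') when the weighted input sum
    exceeds the threshold."""
    weights = [1.0, 1.0]           # both inputs matter equally
    threshold = 0.5                # either input alone is enough to fire
    activation = weights[0] * has_pulse + weights[1] * is_breathing
    return activation > threshold

print(neuron(True, False))   # pulse only      -> True
print(neuron(False, True))   # breathing only  -> True
print(neuron(False, False))  # neither         -> False
```

Training a real network means adjusting those weights automatically from examples rather than setting them by hand; that is the part the "learning" in machine learning refers to.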

However, as more and more complex topics show up with more and more data, it becomes harder and harder for humans to do this analysis on their own. This is where machine learning comes in handy. Machines can not only analyze data given to them but also learn from it and adapt their own view of it.

Let's go back to AlphaGo Zero. In only 40 days, it surpassed thousands of years of strategy and knowledge and even made up some of its own strategies. But how did it do all of this so quickly? Biological neurons in your brain fire at about 200 Hz; that's proved to be fine for us, but modern transistors operate at over 2 GHz—seven orders of magnitude faster.

Signals in your brain travel along what are known as axons at about 100 m/s, which is pretty fast and gives us pretty good reaction times, but it's only about a third the speed of sound. Computers, however, can transmit information at the speed of light, or 300 million m/s. So there's quite a big difference between our brain's capabilities and a computer's.
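The comparisons above are simple ratios. As a back-of-the-envelope check, using the figures quoted in the text:

```python
# Rough speed comparisons between biological and electronic hardware,
# using the approximate figures from the text.

neuron_hz = 200        # ~firing rate of a biological neuron (Hz)
transistor_hz = 2e9    # ~clock speed of a modern chip (2 GHz)
axon_speed = 100       # ~signal speed along an axon (m/s)
light_speed = 3e8      # speed of light (m/s)

print(transistor_hz / neuron_hz)  # 10,000,000x: seven orders of magnitude
print(light_speed / axon_speed)   # 3,000,000x faster signal transmission
```

These are order-of-magnitude sketches, of course; real brains gain back a lot of that gap through massive parallelism.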

In just one week, a computer can do 20,000 years' worth of human-level research or simulations or anything it is trained to do. A brain has to fit inside your head; there's a limit to how much space it can take up. But a computer could fill an entire room or even an entire building.

Now, obviously, weak AI doesn't require an entire server room to run. Like we saw with OpenAI, it only took a USB stick. But for more intelligent AI, it may require much more power. Artificial general intelligence, or AGI, is AI with more than a single purpose. AGI is almost at or equal to human-level intelligence, and this is where we’re trying to get to.

But there's a problem. The more we search into it, the harder and harder it seems to be able to achieve. Think about how you perceive things. When someone asks you a complicated question, you have to sort through a ton of unrelated thoughts and observations and articulate a concise response to that question.

This isn't exactly the easiest thing for a computer to achieve. See, humans aren't able to process information at the speed of light like computers can, but we can plan things. We can think of smart ways to solve problems without having to brute-force through every option. Getting a computer to human-level thinking is hard.

We humans can create things; we invent things. We create societies and play games and laugh. These are all very hard things to teach a computer. How can you teach a computer to create something that doesn't exist or hasn't even been thought of, and what would be its incentive to do so?

I believe AGI, or strong AI, is the most important artificial intelligence to be created, and here is why: machine learning is exponential, meaning that it starts off rather slow, but there's a certain tipping point where things start to speed up drastically. The difference between weak AI and strong AI is millions of times larger than the difference between strong AI and superintelligent AI.

Once we have the artificial general intelligence that can function like a human being— for the most part, at least—it may help us reach superintelligence level in only a few months or perhaps even weeks. But here comes another big problem. See, many people tend to see intelligence on a graph like this: we have maybe an ant here, a mouse at about here, the average human here, and maybe Einstein right here, just above.

If you asked most people where a superintelligent AI would lie on this graph, most would probably put it somewhere around here. But this just isn't the case. Although AI might not be at human-level intelligence yet, it will be one day, and it won't stop at human level; it'll most likely just zoom past and continue getting more and more advanced until eventually the graph looks something like this.

This is what is known as the technological singularity, where artificial intelligence becomes so advanced to the point where there's an extreme explosion of new knowledge and information—some that might not even be able to be understood by humans. If we make a superintelligent AI, that AI would be able to improve upon itself and, in turn, get smarter in a shorter amount of time, which means that this new and improved AI could do the same thing.

It continues to repeat this process, doing it faster and faster each time. The first recreation may take a month; the second a week; the third a day. And this keeps going until it's billions of times smarter than all of humanity.
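The month-week-day pattern described above is a geometric series: if each redesign cycle is a fixed fraction of the previous one, the total time stays bounded even as the number of cycles grows. The 30-day first cycle and 5x per-cycle speedup below are assumptions made for this sketch, not figures from the text:

```python
# Toy model of recursive self-improvement: each redesign cycle completes
# `speedup` times faster than the one before it (a geometric series).

def total_improvement_time(first_cycle_days: float, speedup: float, cycles: int) -> float:
    """Sum the durations of successive self-improvement cycles."""
    total = 0.0
    cycle = first_cycle_days
    for _ in range(cycles):
        total += cycle
        cycle /= speedup  # each redesign finishes faster than the last
    return total

# With a 30-day first cycle and a 5x speedup per cycle, even 100 cycles
# take only about 37.5 days in total (the series converges to 30 * 5/4):
print(round(total_improvement_time(30, 5, 100), 2))  # -> 37.5
```

The point of the sketch: the limiting factor isn't time at all, since nearly all of it is spent on the first few cycles; after that, each generation arrives almost immediately.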

We share about 96% of our DNA with chimps. We are 96% chimp. But in that little 4%, we went from extremely hairy, mediocre primates to a species that has left the planet. We're a species that plans to colonize Mars—a species that made missiles and millions of inventions—all compacted down into that 4% DNA difference.

The number of genetic differences between a human and a chimp is ten times smaller than the genetic difference between a mouse and a rat. And yet, despite being only 4% different from us, their entire fate seems to lie in our hands. If we want, for some reason, to put a McDonald's in a chimpanzee habitat, we just do it. We don't ask.

There are 5.5 quadrillion ants on the Earth. Ants outnumber us 1 million to one, and yet we actually live like they don't exist at all. If we see one crawling on a table, we just smush it and move on with our day. What happens when the day comes where AI becomes the humans in the situation and we become the ants?

Superintelligent AI is different from software we know today. We tend to think our software is something that we program and the computer will follow our rules, but with highly advanced AI and through machine learning, the AI teaches itself how and what life is, as well as how to improve it. After a while, there is no need for any human interaction; perhaps humans would even slow it down as opposed to speeding it up.

We have to be careful about how we make it. As Sam Harris has put it: "The first thing you want to do is not give it access to the internet. You're just going to cage this thing, right? Because you don't want it to get out. But you want to tempt it; you want to see if it's trying to get out."

So, this is called a honeypot strategy, where you tempt it to make certain moves in the direction of acquiring more power. For example, if we're somehow able to give a superintelligent AI orders and it follows those orders, it may just take the quickest and easiest route to solve them.

Just because we make a superintelligent AI doesn't mean that it's going to be wise. See, there's a difference between intelligence and wisdom. Intelligence is more about making mistakes and acquiring knowledge and being able to solve problems through that. Wisdom, on the other hand, is about applying the correct knowledge in the most efficient way. Wisdom is being able to see beyond the intelligence gained and being able to apply that to other things in hopefully a productive way.

If we give AI an order to solve world hunger, well, the easiest way to solve world hunger is just to kill all life on the planet, and nothing would ever be hungry again. But obviously, that isn't what we want. We would have to somehow teach the AI to have human-like values, a moral code to follow, and somehow work around that in a way we'd want it to be like a superintelligent human cyborg.

You may not even realize it, but the majority of humans are already cyborgs of a sort. We're becoming less biological and more technical, offloading parts of our minds onto non-biological devices. How? Well, many of you are watching this video on one right now. Your phone has become an extension of yourself; it can answer any question you could ever ask at a moment's notice.

There's only one problem, though: your inputs are just too slow. If we were able to create a high-bandwidth link between the brain and the internet, we would be literally connected to everyone and everything on the entire planet. This is what Neuralink is aiming to do—create what is known as a neural lace.

Your brain has two big systems: your limbic system and your cortex, and these two are in a relationship with each other. Your limbic system is responsible for your basic emotions, your survival instinct; your cortex is responsible for your problem-solving skills, your critical thinking. Neuralink is aiming to create a third layer to this.

The AI would be like a third wheel in this relationship but would increase our capabilities by multiple orders of magnitude. We all would have eidetic memories—picture-perfect memory. We would have access to all the information available to the world and be able to access it instantly.

With the addition of this third layer, people may eventually realize that they like this newfound knowledge and way of living more than the alternative, more than living without this newfound AI that has been installed into them. Eventually, people may decide to ditch their human body—their biological self—in favor of the artificial world.

This, of course, is a long way away, but it doesn't mean we shouldn't think about it. There are, of course, many problems and counterarguments that come up, and I'll address some of them now. For starters, how do we program something like consciousness? We don't even know what it is yet.

How do we program something like a limbic system into a machine—something that has a sense of fear, perhaps even a fear of death? Can a machine or an AI really love someone or show that emotion? Because if it can love, it can also hate, which could raise big problems for us in the future. This is a fear of many people when the idea of superintelligent AI is brought up.

The biggest and most depressing question is, will it be pleasant, or will it be hateful? Could it even be either of those things in the first place? And more importantly, is it even necessary? Sure, a superintelligent AI would have all the information of the world at its disposal, but then again, so do you and I—so do all the people in the world with very radical views.

Who's to say that a superintelligent AI wouldn't adopt these views as its own and then decide to execute on those instead of what we had planned? Sure, it's possible that a superintelligent AI could improve upon itself using knowledge it gained, but then again, it isn't pulling random improvements out of thin air. It's not just going to unite quantum mechanics and general relativity with raw mathematics.

What if some of its upgrades needed experiments to be run? What if it needs more information? Is it just going to make it up? This leads into what if the AI learned to lie and then decides to lie to us about its accomplishments for its own selfish reasons?

It's also possible that the progress of computing will slow down, and it looks like it already is. Look at airplanes: in the past 100 years, we went from this to this. But like many technologies, there's a threshold that is hard to get past. If we do get past that barrier, though, the upsides could be tremendous.

It's almost obvious that superintelligent AI has the capability of making humanity billions of times better than it is today. But on the flip side, it could also be used as a weapon. Just think, like I said before, if this thing is running for a week and has access to all of the world's information, it could make over 20,000 years of technological progress.

You let this thing run for six months, and you could potentially have 500,000 years, or even more, of technological progress. Imagine if this got into the wrong hands and imagine the repercussions it would have.

When you Google something, not only are you giving out data and telling a potential artificial intelligence what you're thinking, you're also telling it how you're thinking. We feed it all of our questions, all of our answers. If we were going to try and program a limbic system into an AI, we'd be teaching an artificial intelligence what we're afraid of.

Out of all the data in the world, over 90% of it has been created in the past three years. AI learned how to recognize faces; it learned how to recognize voices; it learns languages and then will eventually translate between them all seamlessly. This knowledge and information train could continue until it has gone through 100% of this data.

This is why artificial intelligence is such an important topic. Superintelligent AI is the last invention that humanity will ever make. Once it's invented, it can't be uninvented. The technology and developments that could possibly turn the human species into immortal, god-like figures is coincidentally the same technology that could also cause the downfall of humanity.

Many people are resistant to AI because they fear it may have some of the negative traits that we humans do. It may be greedy; it may lie; it may be selfish; it may be rebellious, it may be short-tempered. The power of superintelligent AI is there, patiently waiting to be found. But the question is, do we actually want to find it?

New York City is one of the United States' most recognizable cities, and in September 2021, one of its many artistic landmarks was repurposed: the Metronome near Union Square. If you've ever walked by it or seen it online, you'll probably notice two things. One: a giant piece of artwork by Kristin Jones and Andrew Ginzel that is meant to convey instancy and infinity, transience and permanence, all at once.

While the pulsing nature of the artwork is supposed to embody the city's energy, elements such as the massive piece of bedrock, symbolizing millennia of geological history, and the rippling centerpiece all come together to help the viewer visualize one thing: time. The other thing you might notice is the nearly 60-foot-long display of digits.

This digital facet of the artwork is what allows it to be reprogrammed for such an occasion. Previously, it displayed the time of day and the number of hours, minutes, and seconds remaining in it. It is all fittingly titled The Passage. But that September, the artwork embarked on a new mission: the numbers changed to display not the regular time, but the time Earth has left, at the current rate of emissions, before we burn through our carbon budget and cross the 1.5°C threshold outlined by the IPCC in 2018.

Now, it all sounds like things people have heard before—scientists come up with numbers, they urge how important it is, and then people move on with their days. But this is different. Depleting that budget entirely would mean the devastation of our planet, and it will have been by our own hands.

If the Earth's temperature rises by just 1.5°C, we will feel the consequences: extreme heat waves, fires across the world, droughts in places that shouldn't have them, and less and less of the one resource on Earth we all need: water. Concern about climate change is certainly nothing new; in fact, it's been with us not just for the past few decades, but for centuries.

British archaeologist Ian Morris spoke at the World Economic Forum about how civilizations of the past collapsed. He drew on modern scientific methods, excavation practices, and billions of artifacts to dissect the collapse of previous civilizations, and he concluded that all major collapses tend to share five common factors, time and time again.

Firstly, they tend to have massive uncontrolled population movements that overwhelm societal infrastructures—pretty much overpopulation. Secondly, they have major pandemics and diseases, which, because of the population movements, spread and merge faster. Then there's state failure and increased warfare. This, in turn, leads to economic collapse.

And then there's one final piece of the puzzle: climate change. Civilizations rarely collapse because of one thing; these five factors often co-occur, and it's their combined effect that brings a civilization down. It's easy to see how they connect, because all of them are playing out right now around the world: climate change is displacing millions of people, a pandemic is ravaging the world as we speak, and bad governance around these issues is making it all worse.

Population growth, admittedly, is not out of control; it is, in fact, slowing steadily. All in all, though, we seem well on our way to collapse. Ian Morris says there is some hope, however. In his study of past civilizations, he notes that quite a few actually survived these five horsemen of the apocalypse and rebuilt themselves afterward. He credits their survival to economic growth.

Now, I'm really not about to debate capitalism versus other economic models in the world, because, well, I don't want to start a war. In his book about the future of human civilization, Homo Deus, historian Yuval Noah Harari notes that although we experience occasional economic crises and international wars in the long run, capitalism has not only managed to prevail but also to overcome famine, plague, and war.

In fact, in modern times we've experienced so much economic growth that today more people die of eating too much than of eating too little. It turns out we sometimes have a little too much stuff, and that stuff isn't produced out of nothing. Beyond the dollar value we pay for things, there's a far greater cost to abundance: an ecological cost—the overuse of unsustainable natural resources, water pollution, soil pollution, loss of biodiversity.

In recent times, this has meant that we are closer to ecological collapse than we have ever been. In the past, there was no global governing body like the United Nations to oversee and recommend actions to reduce our environmental footprint. But back then, the footprint itself was always very localized: people hardly traveled as far as we do today, and commerce was nowhere near as extensive.

Sure, a few centuries ago, there were no solar panels or recycled plastic, but there also weren't fuel-hungry airliners, either. We also weren't siphoning oil and fossil fuels out of the ground. The scale of the industrial world seems truly unique to our time and civilization.

There are, of course, other aspects to our civilization that make it more unique and, by extension, more complicated. Total nuclear obliteration is still a non-zero possibility. And as low as that number might be, the simple possibility of something like that happening is terrifying. Both natural pandemics and bioweapons are also a threat. In fact, both are quite common throughout the history of collapsed civilizations.

Until very recently, more soldiers died from disease than from combat itself. Modern advances, while allowing us to cure far more diseases than before, have also opened a Pandora's box of future threats. There's obviously the threat of runaway superintelligence as well. Such a technological singularity encompasses the threat of nanotechnology and its rising incorporation into everything from manufacturing to medicine.

All of these factors understandably complicate modern infrastructure, and the anthropologist Joseph Tainter suggests that this rising complexity is what could ultimately be our society's downfall. He argues that societies emerge as problem-solving collectives—that's what they're there for. But eventually, the complexity and intricate structures required to solve problems hit a point of diminishing returns, and civilizations collapse under their own weight.

Tied to this idea is energy return on investment, or EROI for short. Simply put, it's the ratio of the energy a source delivers to the energy needed to produce or extract it. Fossil fuels have historically had excellent EROIs, but as we burn through tons and tons of non-renewable fossil fuels, those EROIs are steadily declining.

The EROI of petroleum, for example, has fallen roughly tenfold in the past century because, well, we've already extracted the easiest reserves. And despite being more in line with our goals for a greener future, renewable energy sources as they stand are quite hard to develop, manufacture, and implement—which doesn't help their EROIs either.
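EROI itself is just a ratio. Here is a minimal sketch; the 100:1 and 10:1 figures are commonly quoted illustrations for early versus modern oil wells, not measured values from the text:

```python
# Energy return on investment (EROI): energy delivered per unit of
# energy invested in producing or extracting it.

def eroi(energy_delivered: float, energy_invested: float) -> float:
    """EROI ratio; values above 1 mean a net energy gain."""
    return energy_delivered / energy_invested

# Illustrative figures: early 20th-century oil wells are often quoted
# near 100:1, modern ones closer to 10:1 (the ~tenfold decline above).
print(eroi(100, 1))  # -> 100.0
print(eroi(10, 1))   # -> 10.0
```

When the ratio approaches 1, a source delivers no net energy to society at all, which is why a declining EROI matters even while reserves remain.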

These are all factors that are truly unique to our time, and they sound pretty complex. However, there is one factor that civilizations in the past have certainly never had before: social media. It's unique in the sense that there's never been a platform that allows us to connect with so many people so easily. It's a society within our society—one we certainly didn't evolve to live in. It was just a byproduct of other advances.

Of course, social media has led to some wonderful things. Like every other element I just talked about, it has allowed lost families to reconnect and allowed fundraisers to reach significantly larger audiences. It's given a platform to the ordinary person.

But the problem with social media, well, it's much more nuanced. Much like the cost of biological advancements, there’s the threat of virality in social media too—only it's much, much worse. If falsehood spreads six times faster than truth on Twitter, then there's only ever going to be one winner in the battle of ideas.

The virality of ideas, or bad ideas I should say, is a newfound threat to civilization. It means the ripple effect of a bad idea is now much more likely to escape the geographic confines in which it took place and affect the rest of the world. The sharing of misinformation on social media has led to levels of polarization that have never been seen before. Lack of tolerance seems to be at an all-time high.

You see, one of the psychological side effects of prolonged social media use is broken concentration. You could argue that's less of a side effect and more of an objective. Regardless, people are constantly glancing at their phones, no matter what they're doing, and it's turning them into short-sighted individuals preoccupied with the next burst of dopamine.

That is exactly the problem we're facing with climate change and ecological collapse. We now have a world that is unwilling to look beyond the next presidency, or the next election, or the next generation or two. Social media, in this case, is as much a cause as it is an apt symbol of that short-termism.

But the fact of the matter is, we are slowly but surely walking toward the collapse of civilization as we know it. Now, while an asteroid strike or an alien invasion is a more spectacular possibility, neither is nearly as likely as, say, slow ecological collapse. As the popular saying goes, civilization may end "not with a bang, but with a whimper."

The moral imperative to do something really presents itself when you think about the number of lives that will eventually be affected by climate change, ecological collapse, or whatever else might cause societal collapse. The number of people presently alive pales in comparison to the number of people who could live on this very planet in the future—at least for as long as the sun allows.

That number skyrockets once you factor in the possibility that, given enough time, civilization could become multiplanetary. But that’s the thing: enough time. We barely have enough time to think about the policies of today, let alone tomorrow.

Think about this: in 1984, a man named Danny Hillis proposed the idea of a 10,000-year clock with a single purpose: to encourage long-term thinking. As the name suggests, the clock is designed to keep ticking, without human intervention, for 10,000 years. When construction is complete, much like the Metronome near Union Square, this clock will remind us of time, albeit in a different way.

This clock is not counting down from anything, but by its sheer presence, its sheer scale, it serves to remind us that the world will go on long after we've passed, and as such, we can still have an impact. Who knows? Maybe under thousands of layers of sediment and fossilized remains there lies buried another civilization, one exactly like ours, at this exact stage of its journey. Their time had come. The countdown in Union Square, reading seven years and 36 days, ticks away second by second to remind us that ours will too.

Have you ever stood on a mountaintop, gazed up from the bottom of a roaring waterfall, or sat in a field staring at the stars above? Did it inspire in you a feeling of insignificance? Where do you go to seek out those humbling yet peaceful moments when you come face to face with a world larger than yourself?

Nature is a place of spirituality for many, yet because of the climate crisis, nature as we know it is dying out: wildfires, algae blooms, mass extinction— we're watching life on Earth disappear right before our eyes. It's not just that our homes are threatened, although that is true. The loss of nature makes us feel like we're losing something—something essential to our humanity.

Is there a correlation between our suffering planet and our spiritual despair? What if I told you that our very idea of nature is responsible for environmental destruction? And what if upholding this concept of nature is exactly what's causing us to feel disconnected from our physical world? Maybe living in a world without nature is the very thing needed to save our planet.

The concept of nature has been ingrained in our culture with the help of art and literature. Artists and writers interpret the landscapes they deem beautiful throughout their work. The inspiring beauty of the Simplon Pass in the Swiss Alps whispered to the poet William Wordsworth, who wrote that its "black drizzling crags" and "unfettered clouds" were "all like workings of one mind."

Through the environment, the poet encountered God. Rural Massachusetts had a similar effect on Henry David Thoreau. In 1862, he published an essay in The Atlantic entitled "Walking," talking about the benefits of spending time outdoors. "I think that I cannot preserve my health and spirits unless I spend four hours a day sauntering through the woods, over the hills and fields—absolutely free from all worldly engagements."

He encouraged people to take up the same habit, especially those who spend their days in the city, hunched over a desk, or like many of us, in front of a computer screen. "You must be sure your walk takes place in dense forests or untilled fields," because in Thoreau's words, "roads are made for horses and men of business; we go to Nature to find tranquility and a much-needed break from our hectic lives."

But where exactly is this nature we go to? And is it such a bad thing if nature, as you know it, doesn't exist? These are frightening questions to confront. Think about your daily routine. You wake up, check your email or socials, and drink your coffee or tea or water. In place of a commute, you might pull on a pair of sweatpants with an old t-shirt and log on to your morning Zoom meetings.

You bemoan the endless hours you spend scrolling through social media. On most nights, you grab takeout, unable to find the time to cook. You go to bed tired and wake up the following day to do it all over again. Amidst your daily routine, do you ever encounter the sublime? You probably don't have time, and even if you did, you wouldn't know where to find it in your everyday surroundings.

Those soaring and soul-feeding emotions are reserved for the family trip to the Grand Canyon or your weekend hike in the Redwood Forest. In the meantime, though, we push those emotions aside. You might finally feel at home in the world when you're standing in front of the expansive ocean. It's difficult to harness that feeling while you're waiting for the bus in the middle of a blizzard.

We use the natural world for recreation. Let's consider that word for a second: recreation, literally re-creation. Nature is where we go when we're broken to become whole again. But the benefits of nature are exclusive, reserved for those who can afford them. Many people can't take a vacation from their jobs or don't have the means to get out of the city for the weekend.

There's a conception that nature is a common good: the stream belongs to no one; the fall foliage of the forest is for everyone to enjoy. In practice, though, only those with money and enough free time can enjoy the wilderness. If nature doesn't exist, and if for many people it's inaccessible, where do we turn to find inner peace in our otherwise chaotic world?

Let's back up for a second. When I say nature doesn't exist, I mean that the concept of nature was invented by people. This might seem counterintuitive. We think of nature as the only place left on Earth untouched by human hands, but maybe that's by design. Let me explain.

Our idea of nature exists inside marked boundaries, in places blocked off from where we decided society shouldn't be. In that way, nature is just as constructed as towns and urban centers. When early colonists came to the Americas, they faced a daunting challenge: they were determined to transform a land of dense forests, impenetrable mountains, and roaring rivers into a functioning place for their society.

They needed to harvest and pillage the land for resources to build homes, roads, and farms to sustain the newly arrived European populations. There was also a demand for goods native to these lands back home—things like sugar, coffee, and furs were imported and exported, creating a global system of trade. The task at hand must have felt insurmountable.

It's hard for us to truly grasp the awe early settlers felt coming across this vast expanse of untouched and seemingly empty wilderness, with their ambitious goals in mind. Only the land wasn't exactly untouched, and it was definitely far from empty. Indigenous peoples had cared for and tended the land for thousands of years before Europeans arrived, not to mention the countless species of birds, plants, fish, and other creatures that composed a complex, harmonious system of life.

Early colonists understood wilderness as an obstacle to their goal of establishing an orderly civilization in the New World. Their idea of society didn't account for the vibrant life already thriving for millennia before their arrival. The invention of nature didn't happen consciously. The ideals and goals early settlers imported from their homes overseas required that the vast wilderness of the new continent be tamed. Their philosophy was directly at odds with indigenous teachings, which relied on a holistic understanding of people's place in relation to their environment.

As indigenous cultures were systematically destroyed, this wisdom was buried, because it threatened the project of building a society that would more closely resemble European cities. From the perspective of early colonists, society is built and maintained by people, whereas nature encompasses everything outside the human domain. This idea still informs how you understand the world today.

When you're at home, at work, at the store, you feel as if you're in society. You are contained by the infrastructure built by people, such as roads and buildings. When you're camping in the woods, you feel like you're in nature; the trees exist for their own sake. You turn to nature to feel free from the confines of civilization. Nature even feels like a unified and coherent place to most of us.

We say things like, "I think I'll spend this weekend out in nature," or "I hear British Columbia has beautiful nature," in conversation without a second thought. This phrasing flattens any distinctions in a landscape. The Amazon rainforest and the Great Barrier Reef are both considered nature. That label ignores the particular ecology of each environment on Earth.

The word "nature" strips uniqueness from the world. Using it causes us to overlook the nuances and details inherent in life on Earth. Each tree becomes the same as the others; once you've seen one squirrel, you've seen them all. When we're this flippant about nature in our language, we further entrench our misguided beliefs.

Our idea of nature only implies a vast elsewhere, free from our everyday baggage—nature, in this context, simply refers to where society isn't: national parks, camping grounds, and large expanses of private property. These areas exemplify how borders have been drawn to contain a place to be called nature that is separated from society. Nature is the place we've created to escape what we consider our greatest invention: the society we live in every day.

We’re alienated from our need to connect to the Earth on a daily basis because we don’t believe true nature exists where we spend most of our lives. We have this idea that our real home exists elsewhere. Think of Thoreau needing to divorce himself from society in order to find what he called spiritual fulfillment on his daily walk.

If we can only connect with the most precious parts of ourselves when we're in a faraway nature, what happens to our spirit in the day-to-day? Why have we created a society we wish to escape? Can't there be a way to live without needing to turn to nature to do our true soul searching?

The consequence of upholding nature is that the landscape where we actually live—the majority of our lives—becomes cheap in comparison. We rarely consider that the world directly outside our front door is connected to the global system of life in the same way as a lake or a forest. There are ecosystems within our cities and suburbs, and they're beautiful too. The best part is we are an integral species within these webs of life.

In the early 1970s, environmentalist Peter Berg was thinking a lot about this. To Berg and his friends, it became clear that people should take cues from their local ecosystems and consider their place within them. He advocated for bioregionalism, a philosophy suggesting that political, cultural, and economic systems are more sustainable and just when they're organized around the ecology of the immediate physical environment.

Under this view, a boundary between nature and society is redundant. Bioregionalism relies on the interconnectedness of all systems of life, those created by humans and those that exist outside the human domain. Human activity should be constructed in harmony with the systems of the Earth, and acknowledging and interacting with your immediate environment will make you more at home in the world.

You could try your best to eat local and pay attention to the growing seasons where you live. You'll notice the food tastes better because you're eating it when the harvests are most nourishing. As the days get shorter in the winter, maybe in the long evenings, you'll rest. It could be that the Earth is giving you more darkness because you need it. Or on your daily walk to school or work, what plants do you notice?

You might not know the names of them, but how do they change throughout the seasons? Noticing is the gateway to curiosity. When you pay attention to what kinds of birds are fluttering outside your window or the squirrels hopping from branch to branch, it's difficult to miss the abundant life surrounding you every day.

Noticing isn't easy; it takes practice to break the habit of not noticing. We're so used to walking with our heads down, eyes on the sidewalk; it's common to take what surrounds us every day for granted. It can also take time to notice. Think about the yearly cycle of a tree. From one day to the next, you can't expect to see the leaves turn and fall or the buds emerge and blossom.

But if you observe the same tree every day over the course of weeks, months, and years, you will witness its full cycle revolve in front of your eyes. Directing your attention towards your bioregion can help combat loneliness and social alienation. When you have a relationship to the land where you live and work, you are part of something greater than yourself.

We need to treat our homes with the same reverence we grant most of the beautiful and pristine landscapes on Earth. By doing this, we'll become better stewards of the Earth because we will learn to foster a deep connection to the land without seeking it elsewhere.

In other words, nature is wherever you can find it.

In June 2019, Kirsten Müller-Vahl, a psychiatrist at Hannover Medical School and head of its Tourette's outpatient department, noticed unusual symptoms in a new set of patients. To begin with, all of them were teenagers, and they were suffering from sudden and uncontrollable tics, even though none of them had any history of the condition. They were all shouting different kinds of obscenities.

Müller-Vahl consulted her tight-knit group of global Tourette's researchers and found that her newest patients were not unique. A shift in patients and symptoms seemed to be happening all over the world, and what was even more surprising was that it was happening at the same time. But what really puzzled Müller-Vahl was that most of the patients were repeatedly shouting the same phrase: "You are ugly."

As it turned out, this phrase was the key to understanding the strange spike in cases. Four months before the mysterious global outbreak, a 20-year-old German with Tourette's named Jan Zimmermann launched a YouTube channel and a TikTok page detailing what it's like to live with his condition. He immediately became a social media sensation, gathering more than 2 million subscribers on YouTube and millions of views on TikTok, where he shows his viewers how his condition can force him to blurt out obscene words or experience uncontrollable tics and convulsions.

Zimmermann had a tendency to blurt out the phrase "You are ugly," the same one shared by the new patients suddenly appearing all over the world. After making this connection, researchers found that all the patients who suddenly claimed to have tics were also fans of Zimmermann. When Müller-Vahl confronted her distressed patients and told them that none of
