
The future of the mind: Exploring machine consciousness | Dr. Susan Schneider


3m read · Nov 3, 2024


Consciousness is the felt quality of experience. When you see the rich hues of a sunset, or you smell the aroma of your morning coffee, you're having conscious experience. Whenever you're awake, and even when you're dreaming, you are conscious.

So consciousness is the most immediate aspect of your mental life. It's what makes life wonderful at times, and it's also what makes life so difficult and painful at other times. No one fully understands why we're conscious.

In neuroscience, there's a lot of disagreement about the actual neural basis of consciousness in the brain. In philosophy, there is something called the hard problem of consciousness, which is due to the philosopher David Chalmers. The hard problem of consciousness asks, why must we be conscious?

Given that the brain is an information-processing engine, why does it need to feel like anything to be us from the inside? The hard problem of consciousness, though, isn't quite the issue we want to get at when we're asking whether machines are conscious.

The problem of AI consciousness simply asks: could the AIs that we humans develop one day, or even AIs that we can only imagine in our mind's eye through thought experiments, be conscious beings? Could it feel like something to be them? The problem of AI consciousness is different from the hard problem of consciousness.

In the case of the hard problem, it's a given that we're conscious beings. We're assuming that we're conscious, and we're asking, why must it be the case? The problem of AI consciousness, in contrast, asks whether machines could be conscious at all.

So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn't be surprising if, within the next 30 to 80 years, we start developing very sophisticated general intelligences. They may not be precisely like humans. They may not be as smart as us. But they may be sentient beings.

If they're conscious beings, we need ways of determining whether that's the case. It would be awful if, for example, we sent them to fight our wars, forced them to clean our houses, made them essentially a slave class. We don't want to make that mistake. We want to be sensitive to those issues.

So we have to develop ways to determine whether artificial intelligence is conscious or not. It's also extremely important because as we try to develop general intelligences, we want to understand the overall impact that consciousness has on an intelligent system.

Would the spark of consciousness, for instance, make a machine safer and more empathetic? Or would it be adding something like volatility? Would we be, in effect, creating emotional teenagers that can't handle the tasks that we give them?

So in order for us to understand whether machines are conscious, we have to be ready to hit the ground running and actually devise tests for conscious machines. In my book, I talk about the possibility of consciousness engineering.

So suppose we figure out ways to devise consciousness in machines. It may be the case that we want to deliberately make sure that certain machines are not conscious. So for example, consider a machine that we would send to dismantle a nuclear reactor.

We would, quite possibly, be sending it to its death. Or consider a machine that we'd send to a war zone. Would we really want to send conscious machines in those circumstances? Would it be ethical?

You might say, well, maybe we can tweak their minds so they enjoy what they're doing or don't mind the sacrifice. But that gets into some really deep-seated engineering issues that are actually ethical in nature, issues that go back to Brave New World, for example, where humans were genetically engineered and took a drug called soma so that they would want to live the lives that they were given.

So we have to really think about the right approach. It may be the case that we deliberately devise machines that are not conscious for certain tasks. On the other hand, should we actually be capable of ma...
