
It Would Not Be Cool If AI Were Conscious — It Would Be Dumb | Daniel Dennett


3m read · Nov 3, 2024


I think a lot of people just assume that the way to make AIs more intelligent is to make them more human. But I think that's a very dubious assumption. We're much better off with tools than with colleagues. We can make tools that are smart as the dickens and use them and understand what their limitations are without giving them ulterior motives, purposes, a drive to exist and to compete and to beat the others. Those are features that don't play any crucial role in the competences of artificial intelligence.

So for heaven's sake, don't bother putting them in. Leave all that out, and what we have is very smart “thingies” that we can treat like slaves, and it's quite all right to treat them as slaves because they don't have feelings; they're not conscious. You can turn them off; you can tear them apart the same way you can with an automobile, and that's the way we should keep it.

Now that we're in the age of intelligent design—lots of intelligent designers around—a lot of them are intelligent enough to realize that Orgel's Second Rule is true: "Evolution is cleverer than you are." That's Francis Crick’s famous quip. And so what they're doing is harnessing evolutionary processes to do the heavy lifting without human help.

So we have all these deep learning systems, and they come in varieties. There are Bayesian networks, reinforcement learning of various sorts, deep learning neural networks… And what these computer systems have in common is that they are competent without comprehension. Google Translate doesn't know what it's talking about when it translates a bit of Turkish into a bit of English. It doesn't have to. It's not as good as the translation that a bilingual can do, but it's good enough for most purposes.

And what's happening in many fields in this new wave of AI is the creation of systems, black boxes, where you know that the probability of getting the right answer is very high; they are extremely good, they're better than human beings at churning through the data and coming up with the right answer. But they don't understand how they do it. Nobody understands in detail how they do it, and nobody has to.

So we've created entities that are as inscrutable to us as a bird or a mammal considered as a collection of cells is inscrutable; there's still a lot we don't understand about what makes them tick. But these entities, instead of being excellent flyers or fish catchers or whatever, are excellent pattern detectors, excellent statistical analysts, and we can use these products, these intellectual products, without knowing quite how they're generated but knowing we have good, responsible reasons for believing that they will generate the truth most of the time.

No existing computer system, no matter how good it is at answering questions like Watson on Jeopardy or categorizing pictures, for instance, no such system is conscious today—not close. And although I think it's possible in principle to make a conscious android, a conscious robot, I don't think it's desirable; I don't think there would be great benefits to doing this; and there would be some significant harms and dangers, too.

You could, at tremendous expense, but you'd have to have, in fact, quite a revolution in computer design, which would take you right down to the very base of the hardware...
