
Elementary, Watson: The Rise of the Anthropomorphic Machine | Big Think


3m read
·Nov 4, 2024


So I've been asked periodically, for a couple of decades, whether I think artificial intelligence is possible. I taught the artificial intelligence course at Columbia University, and I've always been fascinated by the concept of intelligence. It's a subjective word. I've always been very skeptical, and only now have I become a believer.

Now, this is subjective; it's sort of an aesthetic thing. But my opinion is that IBM's Watson computer's ability to answer questions qualifies, in my subjective view, as intelligence. I spent six years in graduate school working on two things. One is machine learning, which is the core of prediction: learning from data how to predict. That's also known as predictive modeling.

And the other is natural language processing or computational linguistics. Working with human language, because that really ties into the way we think and what we're capable of doing, and does turn out to be extremely hard for computers to do. Now, playing the TV quiz show Jeopardy means you're answering questions—quiz show questions.

The questions on that game show are really complex grammatically. And it turns out that in order to answer them, Watson looks at huge amounts of text, for example, a snapshot of all the English-language Wikipedia articles. And it has to process text not only to understand the question it's trying to answer but to retrieve the answers themselves.

Now, at the core of this, it turns out it's using predictive modeling. It's not predicting the future; it's predicting the answer to the question. It's the same in that it's inferring an unknown, even though someone else may already know the answer, so there's nothing futuristic about it. But will this turn out to be the answer to the question?

The core technology is the same. In both cases, it's learning from examples. In the case of Watson playing the TV show Jeopardy, it takes hundreds of thousands of previous Jeopardy questions from the TV show, which has gone on for decades, and learns from them. And what it's learning to do is predict: is this candidate answer to this question likely to be the correct answer?

So, it's gonna come up with a whole bunch of candidate answers—hundreds of candidate answers—for the one question at hand at any given point in time. And then, amongst all these candidate answers, it's going to score each one. How likely is it to be the right answer?

And, of course, the one that gets the highest score, the highest vote of confidence, is ultimately the one answer it's going to give. It's correct, I believe, about 90 to 92 percent of the time that it actually buzzes in to answer the question.
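The generate-candidates, score-each, pick-the-highest loop described above can be sketched in a few lines. This is a toy illustration of the general pattern, not IBM's actual Watson pipeline; the question, the candidate list, and the scoring function are all hypothetical stand-ins.

```python
# Toy sketch of candidate generation, confidence scoring, and buzz-in.
# In the real system, candidates come from searching large text corpora
# and the scorer is a model trained on past Jeopardy questions; here
# both are hard-coded stand-ins to show the control flow only.

def generate_candidates(question):
    # Watson produces hundreds of candidates per question; we use three.
    return ["Paris", "London", "Rome"]

def score(question, candidate):
    # Stand-in for a learned predictive model that estimates how likely
    # this candidate is to be the correct answer (0.0 to 1.0).
    return 0.9 if candidate == "Paris" else 0.1

def answer(question, buzz_threshold=0.5):
    # Score every candidate and keep the one with the highest confidence.
    scored = [(score(question, c), c) for c in generate_candidates(question)]
    best_score, best = max(scored)
    # Only "buzz in" when confidence clears the threshold; else stay silent.
    return best if best_score >= buzz_threshold else None

print(answer("This city is the capital of France."))  # prints: Paris
```

The threshold is what lets the system abstain: the roughly 90 percent accuracy quoted above is conditional on the system choosing to buzz in at all.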

You can go on YouTube and watch the episode where they aired the competition between IBM's computer Watson and the two all-time human champions of Jeopardy. And it just rattles off one answer after another. And it doesn't matter how many years you've spent looking at this field; in fact, maybe the more years you've studied the ability or inability of computers to work with human language, the more impressive it is.

It just keeps rattling off one answer after another. I never thought that, in my lifetime, I would have cause to experience that the way I did, which was, "Wow, that's anthropomorphic. This computer seems like a person in that very specific skill set. That's incredible. I'm gonna call that intelligent."
