More Compute Power Doesn’t Produce AGI
The artificial general intelligence crowd gets this completely wrong too: just add more compute power and you'll get intelligence. But we don't really know what it is, underneath, that makes us creative and allows us to come up with good explanations.
People talk a lot about GPT-3, the text-matching engine that OpenAI put out, which is a very impressive piece of software. But they say, "Hey, I can use GPT-3 to generate great tweets." Well, that's because, first, you, as a human, are selecting which tweets, out of all the garbage it generates, are good. Second, it's using some combination of plagiarism and synonym matching and so on to come up with plausible-sounding stuff.
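To make that first point concrete, here's a minimal sketch of that curation loop, assuming the pre-1.0 openai Python client from the GPT-3 era; the model name, prompt, and sample count are illustrative assumptions, not details from the discussion:

import openai  # pre-1.0 openai client, as shipped in the GPT-3 era

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_candidates(prompt: str, n: int = 10) -> list[str]:
    # Sample n independent completions at high temperature; most will
    # be filler, and a few will happen to sound clever.
    response = openai.Completion.create(
        model="text-davinci-003",  # hypothetical choice of GPT-3 model
        prompt=prompt,
        n=n,
        max_tokens=60,
        temperature=0.9,
    )
    return [choice.text.strip() for choice in response.choices]

candidates = generate_candidates("Write a pithy tweet about creativity:")
for i, tweet in enumerate(candidates):
    print(f"[{i}] {tweet}")

# The "intelligence" in the tweet that finally gets posted comes from
# the human who reads all ten candidates and picks the good one.

The point of the sketch is that the selection step, not the sampling step, is where the judgment lives.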
But the easiest way to see that what it's generating doesn't actually make any sense is to ask it a follow-up question. Take a GPT-3-generated output and ask it why: why is that the case? Or make a prediction based on it and watch it completely fall apart, because there's no underlying explanation. It's parroting; it's brilliant Bayesian reasoning. It's reading from what it already sees out there, generated by humans on the web.
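Here's the same kind of hedged sketch for that follow-up probe, again assuming the pre-1.0 openai client; the prompts are made up for illustration:

import openai  # pre-1.0 openai client, same assumption as above

def complete(prompt: str) -> str:
    # One plain completion call; no memory, no model of the world.
    response = openai.Completion.create(
        model="text-davinci-003",  # hypothetical choice of GPT-3 model
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

claim = complete("State one confident prediction about the economy next year:")
probe = complete(
    f"Claim: {claim}\n"
    "Question: Why is that the case? Explain the underlying mechanism.\n"
    "Answer:"
)
print("CLAIM:", claim)
print("WHY:  ", probe)

# There is deliberately no pass/fail check here: the "why" answer is
# just more pattern-matched text, and a human reader can usually see
# that it doesn't actually support the claim it's supposed to explain.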
But it doesn't have an underlying model of reality that can explain the seen in terms of the unseen. I think that's critical. That is what humans do uniquely; no other creature, no other computer, no other intelligence, biological or artificial, that we have ever encountered does it. And not only do we do it uniquely, but if we were to meet an alien species that also had the power to generate these good explanations, there is no explanation they could generate that we could not understand.
We are maximally capable of understanding. There is no concept out there that is possible in this physical reality that a human being, given sufficient time, resources, and education, could not understand.