The intelligence explosion: Nick Bostrom on the future of AI
- In this century, probably, we will be building this hugely consequential thing, which is the first general intelligence that will be smarter than humans. This involves an enormous responsibility. This is maybe the most important thing that our species will ever have done on this planet: giving birth to this new level of intellect.
I'm Nick Bostrom. I am a professor at Oxford University, where I run the Future of Humanity Institute, with the unusual mandate of trying to think carefully about the really big-picture questions for humanity and the future of Earth-originating intelligent life. AI has been a big focus of mine since my teenage years. It always seemed to me that if you look around and ask what accounts for why the world, our human world, is the way it is, a lot of it is because we humans have made it so. We have invented all kinds of technologies.
And so all these things, whether it's jet planes or art or political systems, have come into the world through the birth canal of the human brain. That immediately made it plausible to me that if you could change that channel by creating artificial brains, then you would change the thing that is changing the world.
(intense music)
I think we have this notion of what's smart and what's dumb, whereas there is actually a huge amount of space above us, between our level of intelligence and God's. And once you go a little bit beyond human, you get this feedback loop, where the brains doing the AI research will become AIs themselves. Therefore, I think there is a significant chance that we'll have an intelligence explosion, so that within a short period of time, we go from something that was only moderately affecting the world to something that completely transforms it. All the things that we could imagine human intelligence being useful for, which is pretty much everything, artificial intelligence could be useful for as well if it just became more advanced.
Whether it's diseases or pollution or poverty, we would have vastly better tools for dealing with them if we had superintelligence. It could help us develop better clean energy technologies or medicines. So it does look to me like all the plausible paths to a really great future involve the development of machine superintelligence at some point.
There are, I think, existential risks connected with the transition to the machine intelligence era, the most obvious being the possibility of unaligned superintelligence that then overrides Earth's human civilization with its own value structures. Another big class of failures would be if this technology were used for destructive purposes.
Then I think there is a third dimension that has received less attention so far, which is how good the outcome is for the AIs themselves. If we're going to construct digital minds that are maybe conscious or have moral status of various degrees, then how can we ensure that they are treated well?
If you think about it, most of us would acknowledge that various non-human animals have degrees of moral status, even something as simple as a humble lab mouse. At that point, it becomes an active question whether we have obligations to the AIs: not just to make sure we don't misuse AIs against one another, or to protect ourselves from the AIs, but also to make sure we do what we ought to do with respect to the AIs themselves.
And if we succeed at that and things go well, then we can imagine living lives way beyond anything that is possible now. This is why there has been so much interest in AI in recent years: it does look like it could be the thing on which the whole future depends.
So on the one hand, from this slightly abstract point of view, it does look like we might develop greater-than-human AI in the not-too-distant future, and it could change everything. On the other hand, it seems rather incredible that this world we've known for our whole lives could plausibly change so radically within our lifetime.
And we become, I don't know, some sort of semi-immortal uploaded creatures with Jupiter-sized minds. Like, really? I actually take that seriously, even though it seems to go against day-to-day lived experience. So keeping both of those in mind creates this interesting tension between two different ways of thinking about the world.
I think rather than just eliminating one of them, we should keep them both there and struggle with that tension.
(intense music)