Artificial Intelligence Is the New Science of Human Consciousness | Joscha Bach | Big Think
I think right now everybody is already perceiving that this is the decade of AI, and nothing drives the digitization of the world like artificial intelligence. Historically, artificial intelligence has always been the pioneer battalion of computer science: when something was new and untested, it was done in the field of AI, because it was seen as something that requires intelligence in some way, a new way of modeling things.
Intelligence can be understood, to a very large degree, as the ability to model new systems, to model new problems. And so it’s natural that even narrow AI is about making models of the world. For instance, our current generation of deep-learning systems is already modeling things. These systems don’t yet model things with the same power as human minds do: they are mostly classifiers, not simulators of complete worlds.
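To make that distinction concrete, here is a purely illustrative Python sketch; the names and the “dynamics” are invented for this example and don’t correspond to any real system. A classifier answers a single question about the present input, while a simulator carries a state forward and can answer “what happens next?”

```python
# Toy contrast between a classifier and a world simulator.
# All names and numbers here are made up for illustration.

from dataclasses import dataclass


def classify(observation: list[float]) -> str:
    """A classifier maps a single observation to a label and stops there."""
    return "cat" if sum(observation) > 0 else "dog"


@dataclass
class Simulator:
    """A simulator carries a state forward, so it can answer 'what happens next?'."""
    state: float

    def step(self, action: float) -> float:
        # Made-up dynamics: the point is that the model predicts the next state,
        # not just a label for the current input.
        self.state = 0.9 * self.state + action
        return self.state


print(classify([0.2, 0.4]))                # -> "cat"
sim = Simulator(state=1.0)
print([sim.step(0.0) for _ in range(3)])   # the state rolls forward: ~0.9, 0.81, 0.729
```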
But they’re slowly getting there, and by making these models, we are, of course, digitizing things. We are making things accessible in data domains. We are making these models accessible to each other by computers and by AI systems. And AI systems provide extensions to all our minds. Already now, Google is something like my exo-cortex.
It’s something that allows me to access vast resources of information that get integrated into the way I think and that extend my abilities. If I forget how to use a certain command in a programming language, it’s there at my fingertips, and I rely on this entirely, like every other programmer on this planet. This is something that is incredibly powerful and that was not possible when we started out programming, when we had to store everything in our own brains.
I think consciousness is a very difficult concept to understand, because we mostly know it by reference: we can point at it, but it’s very hard for us to understand what it actually is. At this point, the best model I’ve come up with for what we mean by consciousness is that it is a model of a model of a model.
That is, our neocortex makes a model of our interactions with the environment. And part of our neocortex makes a model of that model; that is, it tries to find out how we interact with the environment, so we can take this into account when we interact with the environment. And then you have a model of this model of our model, which means we have something that represents the features of that model, and we call this the self.
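As a rough illustration of that nesting (and only as an illustration; the class names and attributes are hypothetical), one could sketch it in Python like this:

```python
class WorldModel:
    """First level: a model of our interactions with the environment."""

    def __init__(self):
        self.interactions = []            # e.g. percepts and actions

    def observe(self, event):
        self.interactions.append(event)


class ModelOfModel:
    """Second level: a model of how the first-level model is being used."""

    def __init__(self, world_model):
        self.world_model = world_model

    def how_we_interact(self):
        # Summarizes regularities in the first-level model.
        return f"{len(self.world_model.interactions)} interactions tracked"


class SelfModel:
    """Third level: represents features of the second-level model; 'the self'."""

    def __init__(self, meta_model):
        self.meta_model = meta_model

    def describe(self):
        return "a system with " + self.meta_model.how_we_interact()


world = WorldModel()
world.observe("saw a cup")
world.observe("reached for it")
print(SelfModel(ModelOfModel(world)).describe())   # -> "a system with 2 interactions tracked"
```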
And the self is integrated with something like an attentional protocol. So we have a model of the things that we attended to, the things that we became aware of while we process things and interact with the environment. And this protocol, this memory of what we attended to, is what we typically associate with consciousness.
So in some sense, we are not actually conscious in the here and now, because that’s not really possible for a process that needs to do many things over time: retrieve items from memory, process them, and do something with them. Consciousness is actually a memory. It’s a construct that is reinvented in our brain several times a minute.
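Read as a sketch rather than a claim about neural mechanisms, the idea might look like this in Python: processing writes entries into a protocol, and the “conscious” part is a narrative reconstructed from that protocol after the fact. The data structures and the time window are assumptions made only for this illustration.

```python
# A hedged sketch of the 'attentional protocol' idea.

import time
from collections import deque

protocol = deque(maxlen=100)     # the attentional protocol: a memory of attended items


def attend(item):
    """Processing happens first; the protocol only records that it happened."""
    protocol.append((time.time(), item))


def reconstruct_recent_experience(window_seconds=10.0):
    """Rebuild a 'just now' narrative from the protocol, the way the text suggests
    the brain reinvents consciousness several times a minute."""
    now = time.time()
    return [item for t, item in protocol if now - t <= window_seconds]


attend("a sound")
attend("a thought about the sound")
print(reconstruct_recent_experience())   # -> ['a sound', 'a thought about the sound']
```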
And when we think about being conscious of something, it means that we have a model of that thing that makes it operable, that we can use. You are not really aware of what the world is like. The world out there is some weird quantum graph. It’s something that we cannot possibly really understand—first of all because we as observers cannot really measure it.
We don’t have access to the full state vector of the universe. What we get access to is a few bits that our senses can measure in the environment. And from these bits, our brain tries to derive a function that allows us to predict the next observable bits. So in some sense, all these concepts that we have in our mind, all these experiences that we have, sounds, people, ideas, and so on, are not features of the world out there.
There are no sounds in the world out there, no colors, and so on. These are all features of our mental representations. They’re used to predict the next set of bits.
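Here is a minimal sketch of what “deriving a function that predicts the next observable bits” could mean, using a tiny order-1 Markov predictor over a bit stream. This illustrates the general idea only and is not a claim about how the brain actually does it.

```python
# A toy 'predict the next bit' model learned from a stream of observed bits.

from collections import Counter, defaultdict

# counts[prev_bit][next_bit] = how often next_bit followed prev_bit
counts = defaultdict(Counter)


def update(prev_bit, next_bit):
    """Learn from one observed transition."""
    counts[prev_bit][next_bit] += 1


def predict(prev_bit):
    """Predict the most likely next bit given the last bit seen."""
    if not counts[prev_bit]:
        return 0                           # arbitrary default before any data
    return counts[prev_bit].most_common(1)[0][0]


stream = [0, 1, 0, 1, 0, 1, 0, 1]          # the few bits our 'senses' measured
for a, b in zip(stream, stream[1:]):
    update(a, b)

print(predict(0))   # -> 1: after a 0, the derived function expects a 1
print(predict(1))   # -> 0: after a 1, it expects a 0
```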