Perception: Chaos and Order | Dr. Karl Friston | EP 298
Okay, when you make progress towards a valued goal, let's say we inhabit a shared narrative and we're making progress towards our mutual stated goal. When we see ourselves making progress, we get a bit of a dopamine hit. Could you say that the fundamental reason for the positively rewarding effect of that movement forward is that as I move forward towards a goal, I decrease the entropy that still remains between me and the goal? Is even that reward, is even that movement forward readable as an entropy reduction? I mean, it's almost written into the mathematical meaning of the word.
So, if entropy just is uncertainty, then as I get closer to resolving that uncertainty—getting my fruit juice, pleasing my wife, or, you know, being able to watch the news—if it's an epistemic reward, it just is expected surprise; entropy just is expected surprise. The closer you get, the less uncertain you are, and all the evidence suggests it is exactly as you say: it's dopamine.
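For readers who want the formal version of what is being gestured at here, a minimal sketch in standard information-theoretic notation (the notation is mine, not quoted from the conversation): surprise is the negative log probability of an outcome, entropy is expected surprise, and variational free energy is an upper bound on surprise.

```latex
% Minimal sketch; notation is an editorial assumption, not from the conversation.
\begin{align}
  \text{surprise}(o)         &= -\ln p(o) \\
  \text{entropy } H[p]       &= \mathbb{E}_{p(o)}\big[-\ln p(o)\big]
                                \quad\text{(expected surprise)} \\
  \text{free energy } F(q,o) &= \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s\mid o)\big]}_{\ge\, 0}
                                \;-\; \ln p(o) \;\ge\; -\ln p(o)
\end{align}
```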
[Music]
Hello everyone, thank you for tuning in to watch and listen. I have the great privilege today of being able to talk with Dr. Karl Friston. As a signal addition, let's say, to the recent conversation I had with Andrew Huberman, Dr. Karl Friston is arguably the world's most renowned neuroscientist, a professor at University College London. He is one of the world's leading authorities on brain imaging. Ninety percent of the work published in fields employing such imaging relies on methods he pioneered.
Dr. Friston is also well known for his work on many of the topics we will discuss today—work I find even more exciting, at least conceptually speaking, than his work on brain imaging. We will discuss the ideas that concepts and percepts, categories—that's another way of thinking about it—bind free energy or entropy, the idea of computation, especially the kind of computation that approximates brain function as hierarchical, the theory of predictive coding, and active inference.
Welcome, Dr. Friston. It's very good of you to agree to talk to me on this podcast. I'm really looking forward to it.
That's a great pleasure to be here. Thank you.
So let me start maybe by helping people understand this idea of hierarchical computation and the binding of entropy, and so if you could walk through that briefly, then I'll ask some questions if that seems appropriate?
Yeah, sure. The binding of free energy and entropy—that sounds delightfully Freudian—and I don't mean that in a sort of disparaging sense. I think that some of the truisms and the insights of that era have now proved themselves in modern formulations of computation, information processing, and sense-making in the brain.
One nice link there is to think of free energy as surprise. So, one way of looking at the way we make sense of our world—bringing explanations, concepts, categories, notions to the table that provide the best explanation for the myriad of sensations to which we are exposed—is to see that process as a process of minimizing surprise.
So binding free energy, I think, can be read very simply as minimizing surprise. But, of course, to be surprised you have to have something you predicted; you have to have a violation of predictions. So immediately you're in the game now of predictive processing—predicting what would I see if the world out there was like this—and then using the ensuing prediction errors to adjust your beliefs and update your beliefs in the service of minimizing those prediction errors or minimizing that surprise or minimizing that free energy.
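To make that loop concrete, here is a minimal toy sketch of prediction-error minimization. The linear generative model, the learning rate, and the variable names are illustrative assumptions of mine, not Friston's actual models.

```python
# Toy illustration of the predictive-processing loop described above
# (illustrative assumptions only; not Friston's actual model).
# A belief about a hidden cause is adjusted by gradient descent on the
# squared prediction error, a simple stand-in for free energy.

def generative_model(mu):
    """Prediction: what sensation would I expect if the hidden cause were mu?"""
    return 2.0 * mu  # toy linear mapping from cause to sensation

def update_belief(mu, observation, learning_rate=0.05, steps=200):
    """Adjust the belief mu to reduce prediction error (i.e. surprise)."""
    for _ in range(steps):
        prediction = generative_model(mu)
        error = observation - prediction        # prediction error
        mu += learning_rate * 2.0 * error       # gradient step on 0.5 * error**2
    return mu

belief = 0.0        # initial guess about the hidden cause
sensation = 3.0     # observed datum
belief = update_belief(belief, sensation)
print(belief, generative_model(belief))  # belief -> ~1.5, prediction -> ~3.0
```

The "adjust your beliefs" step in the quote corresponds to the gradient step on the belief; in a hierarchical model the same error-driven update would run at every level.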
And you artfully introduce the notion of hierarchy, you know, in that question, which I think speaks to another fundamental point that in making sense of the world, in making those good predictions, we have to have an internal model—sometimes called a world model—a model that can generate what I would have seen if this was the state of affairs out there.
And that notion of a generative model I think is quite key and holds the attribute of hierarchy simply in the sense that we live in a deeply structured world. Very dy...