What your designs say about you - Sebastian Deterding
We are today talking about moral persuasion: What is moral and immoral in trying to change people's behaviors by using technology and using design? And I don't know what you expect, but when I was thinking about that issue, I realized early on that what I'm not able to give you are answers. I'm not able to tell you what is moral or immoral, because we're living in a pluralist society. My values can be radically different from your values, which means that what I consider moral or immoral based on them might not necessarily be what you consider moral or immoral.
But I also realized that there is one thing I could give you, and that is what the guy behind me, Socrates, gave the world: questions. What I can do, and what I would like to do with you, is give you, like that initial question, a set of questions to figure out for yourselves, layer by layer, like peeling an onion, getting at the core of what you believe is moral or immoral persuasion.
And I'd like to do that with a couple of examples of technologies where people have used game elements to get people to do things. So here's the first, very simple, very obvious question I'd like to give you: What are your intentions if you're designing something? And obviously, intentions are not the only thing, so here is another example of one of these applications: There are a couple of these kinds of eco-dashboards right now, dashboards built into cars that try to motivate you to drive more fuel-efficiently. This one is Nissan's, where your driving behavior is compared with the driving behavior of other people, so you can compete for who drives the most fuel-efficiently.
And these things are very effective, it turns out. So effective that they motivate people to engage in unsafe driving behaviors, like not stopping at a red light, because that way you'd have to stop and restart the engine, and that would use quite some fuel, wouldn't it? So despite this being a very well-intended application, obviously there is a side effect to it.
And here's another example of one of these side effects: Commendable, a site that allows parents to give their kids little badges for doing the things that parents want their kids to do, like tying their shoes. And at first that sounds very nice, very benign, well-intended. But it turns out, if you look into research on people's mindsets, that caring about outcomes, caring about public recognition, caring about these kinds of public tokens of recognition is not necessarily very helpful for your long-term psychological well-being. It's better if you care about learning something. It's better if you care about yourself than about how you appear in front of other people.
So that kind of motivational tool, in and of itself, has a long-term side effect, in that every time we use a technology that uses something like public recognition or status, we're actually positively endorsing this as a good and normal thing to care about, possibly having a detrimental effect on the long-term psychological well-being of ourselves as a culture.
So that's a second, very obvious question: What are the effects of what you're doing? The effects you're having with the device, like less fuel used, as well as the effects of the actual tools you're using to get people to do things, like public recognition. Now, is that all, intention and effect? Well, there are some technologies which obviously combine both good long-term and short-term effects and a positive intention, like Fred Stutzman's Freedom, where the whole point of the application is, well, we're usually so bombarded with calls and requests by other people.
With this device, you can shut off the Internet connectivity of your PC of choice for a preset amount of time to actually get some work done. And I think most of us will agree that that's something well-intended, and it also has good consequences. In the words of Michel Foucault, it is a "technology of the self": a technology that empowers individuals to determine their own life course, to shape themselves.
But the problem is, as Foucault points out, that every technology of the self has a technology of domination as its flip side. As you see in today's modern liberal democracies, society and the state not only allow us to determine ourselves, to shape ourselves, they also demand it of us. They demand that we optimize ourselves, that we control ourselves, that we manage ourselves continuously, because that is the only way in which such a liberal society works. These technologies want us to stay in the game that society has devised for us. They want us to fit in even better. They want us to optimize ourselves to fit in.
Now, I don't say that is necessarily a bad thing. I just think that this example points us to a general realization: no matter what technology or design you look at, even something we consider as well-intended and as good in its effects as Fred Stutzman's Freedom, it comes with certain values embedded in it. And we can question those values. We can ask: Is it a good thing that all of us continuously optimize ourselves to fit better into that society?
Or, to give you another example: What about a piece of persuasive technology that convinces Muslim women to wear their headscarves? Is that a good or a bad technology in its intentions or in its effects? Well, that basically depends on the kind of values you bring to bear to make these kinds of judgments. So that's a third question: What values do you use to judge?
And speaking of values: I've noticed that in discussions about moral persuasion online, and when I'm talking with people, more often than not there is a weird bias. And that bias is that we're asking: Is this or that still ethical? Is it still permissible? We're asking things like: Is this Oxfam donation form, where the regular monthly donation is the preset default, and people, maybe without intending it, are in that way nudged or encouraged into giving a regular donation instead of a one-time donation, is that still permissible? Is it still ethical?
We're fishing at the low end. But in fact, that question, "Is it still ethical?", is just one way of looking at ethics. Because if you look at the beginnings of ethics in Western culture, you see a very different idea of what ethics also could be. For Aristotle, ethics was not about the question: Is that still good, or is it bad? Ethics was about the question of how to live life well. And he put that in the word "arete," which we translate from the Ancient Greek as "virtue," but which really means excellence. It means living up to your own full potential as a human being.
And that is an idea that, I think, Paul Richard Buchanan put nicely in a recent essay when he said, "Products are vivid arguments about how we should live our lives." Our designs are not ethical or unethical only in that they use ethical or unethical means of persuading us. They have a moral component just in the kind of vision and the aspiration of the good life that they present to us.
And if you look at the designed environment around us with that kind of lens, asking, "What is the vision of the good life that our products and designs present to us?", then you often get the shivers, because of how little we expect of each other, and of how little we actually seem to expect of our lives and of what the good life looks like.
So that's the fourth question I'd like to leave you with: What vision of the good life do your designs convey? And speaking of design, you'll notice that I've already broadened the discussion, because it's not just persuasive technology that we're talking about here; it's any piece of design that we put out into the world. I don't know whether you know the great communication researcher Paul Watzlawick, who back in the '60s made the argument that we cannot not communicate.
Right? Even if we choose to be silent, we chose to be silent; we're communicating something by choosing to be silent. And in the same way that we cannot not communicate, we cannot not persuade. Whatever we do or refrain from doing, whatever we put out there as a piece of design into the world, has a persuasive component. It tries to affect people. It puts a certain vision of the good life out there in front of us. Which is what Peter-Paul Verbeek, the Dutch philosopher of technology, says: no matter whether we as designers intend it or not, we materialize morality.
We make certain things harder and easier to do. We organize the existence of people. We put a certain vision of what good or bad or normal or usual is in front of people by everything we put out there in the world. Even something as innocuous as a set of school chairs is a persuasive technology because it presents and materializes a certain vision of the good life. A good life in which teaching and learning and listening is about one person teaching, the others listening, in which learning is done while sitting, in which you learn for yourself, in which you're not supposed to change these rules because the chairs are fixed to the ground.
And even something as innocuous as a single designer chair, like this one by Arne Jacobsen, is a persuasive technology, because, again, it communicates an idea of the good life: a good life, a life that you as a designer consent to by saying, "In a good life, goods are produced as sustainably or unsustainably as this chair. Workers are treated as well or as badly as the workers who built that chair."
A good life is a life where design is important, because somebody obviously took the time and spent the money on that kind of well-designed chair; where tradition is important, because this is a traditional classic and someone cared about that; and where there is something like conspicuous consumption, where it is okay and normal to spend a humongous amount of money on such a chair to signal to other people what your social status is.
So these are the kinds of layers, the kinds of questions I wanted to lead you through today: What are the intentions you bring to bear when you're designing something? What are the effects, intended and unintended, that you're having? What are the values you're using to judge those? What are the virtues, the aspirations, you're actually expressing? And how does that apply not just to persuasive technology, but to everything you design?
Do we stop there? I don't think so. I think that all of these things are eventually informed by the core of all of this, and this is nothing but life itself. Why, when the question of what the good life is informs everything that we design, should we stop at design and not ask ourselves: How does it apply to our own life? "Why should the lamp or the house be an art object, but not our life?" as Michel Foucault puts it.
Just to give you a practical example of Buster Benson: This is Buster setting up a pull-up machine at the office of his new startup, Habit Labs, where they're trying to build applications like Health Month for people. And why is he building a thing like this? Well, here is the set of axioms that Habit Labs, Buster's startup, put up for themselves on how they wanted to work together as a team when building these applications: a set of moral principles they set themselves for working together.
And one of them is: "We take care of our own health and manage our own burnout." Because ultimately, how can you ask yourselves, and how can you find an answer to, what vision of the good life you want to convey and create with your designs, without asking the question: What vision of the good life do you yourself want to live?
And with that, I thank you. [Applause]