The unexpected impact of AI on animals | Peter Singer
There are quite a few things that concern me about AI. It clearly has both positive and negative aspects.
"Okay, Google, what's the weather?"
"Right now in Orlando, it's 86."
There are a lot of concerns, and one of them is about the impact of AI on animals. When you look at past technologies, they have always been used to the disadvantage of animals in various ways. We invented the wheel, a great invention of course; wheels help us move around, but that means we've tied horses and oxen and various other animals, effectively enslaved them, to pull the carts we've made with wheels.
Similarly with AI, we are already using it on animals in a variety of ways. Factory farms are starting to use AI to run their operations and to remove humans even further from the animals. In New Zealand, there are feral possums that were imported from Australia for their fur, but they're damaging to New Zealand's native forests, which never had possums, and drones are being used to kill them.
And in general, when you look at statements of AI ethics, they tend to say that AI must be used for human benefit, but I don't think that's enough. We share this planet with other species who are capable of feeling pain and whose interests must be counted. So I think that statements of AI ethics ought instead to talk about AI being used for the benefit of all sentient beings.
There are other, broader concerns that are somewhat more philosophical. One is about whether AI could become more intelligent than us: a superintelligent artificial general intelligence. We've already created vast numbers of conscious beings. We're creating animals all the time, and we vary their natures by breeding.
I don't see any reason, in principle, why you couldn't get something similar happening in something that isn't a carbon-based life form but is made of silicon chips. If AI becomes conscious, if we develop an artificial intelligence that is itself a conscious, sentient being, how can we tell whether it's merely mimicking consciousness or whether it's genuinely conscious? And what would its moral status be?
Would its moral status be similar to that of humans? Would it be more like that of animals, or would it still be a tool we could use as we pleased? The question then is: will we treat them as we have treated the other non-human conscious beings we've created, whom we have mostly exploited for our own purposes? Just as I believe governments should set standards for animal welfare, and should not permit animals to be treated the way they are now treated in factory farms,
so I would think governments will need to set standards for the treatment of sentient, conscious AI. And then there are reasonable concerns about whether we will be able to control it. The Oxford philosopher Nick Bostrom has a fable about a group of sparrows who think it would be terrific to have an owl to help them with some of their labor; owls, after all, are much bigger and stronger than they are.
And so, they think about getting an owl egg, hatching it, and then training the owl to do what they want. And there's one wise old sparrow who says, "Well, before we actually hatch this egg, shouldn't we make sure that we can train the owl to do what we want?"
And the other sparrows say, "Oh, no, it's gonna be so wonderful, so let's keep going." The point of the fable, of course, is that owls eat sparrows, and once you have hatched an owl, the sparrows are not gonna be able to control it.
So, is a superintelligent AI going to be to us what the owl would have been to the sparrows?