AI Can Now Self-Reproduce—Should Humans Be Worried? | Eric Weinstein | Big Think
There are a bunch of questions next to or adjacent to general artificial intelligence that have not gotten enough alarm because, in fact, there’s a crowding out of mindshare. I think that we don’t really appreciate how rare the concept of selection is in the machines and creations that we make.
So in general, if I have two cars in the driveway, I don’t worry that if the moon is in the right place in the sky and the mood is just right, that there’ll be a third car at a later point because, in general, I have to go to a factory to get a new car. I don’t have a reproductive system built into my sedan. Now almost all of the other physiological systems—what are there, perhaps 11?—have a mirror.
So my car has a brain, so it’s got a neurological system. It’s got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are really self-replicative? And the fact of the matter is that it’s very tough to do at the atomic level, but there is a command in many computer languages called Spawn.
And Spawn can effectively create daughter programs from a running program. Now as soon as you have the ability to reproduce, you have the possibility that systems of selective pressures can act, because the abstraction of life is handled just as easily whether it’s based in our nucleotides, our As, Cs, Ts, and Gs, or whether it’s based in our bits and our computer programs.
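The speaker does not name a specific language, so as an illustrative sketch only: "Spawn" presumably refers to process-creation primitives such as POSIX posix_spawn or the child-process facilities in scripting languages. Here is a minimal Python stand-in in which a running program launches a daughter program and collects its output; the function name is hypothetical.

```python
# A minimal sketch of a running program creating a "daughter" program.
# Python's standard subprocess module launches a child Python
# interpreter; this stands in for process-creation primitives
# generally (posix_spawn, fork/exec, etc.), which the source does
# not specify.
import subprocess
import sys

def spawn_daughter(message: str) -> str:
    # Launch a child interpreter that prints a message, then capture
    # what the daughter process wrote to stdout.
    result = subprocess.run(
        [sys.executable, "-c", f"print({message!r})"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(spawn_daughter("hello from the daughter process"))
```

The parent here fully controls its daughter, of course; the speaker's point is that once this primitive exists, a program that copies itself with variation is a short step away.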
So one of the great dangers is that we will end up creating artificial life, allowing systems of selective pressures to act on it, and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they’re not fully intelligent. Further, if we look to natural selection and sexual selection in the biological world, we find some very strange systems: plants or animals with no brain to speak of effectively outsmart species that do have one, by hijacking the victim species’ brain to serve the non-thinking species.
So, for example, I’m very partial to the mirror orchid, which is an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male in that pollinator species detects a sexual possibility, the flower does not need to give up costly and energetic nectar in order to attract the pollinator.
And so if the plant can fool the pollinator to attempt to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled then that plant is rewarded.
So the plant is actually using the brain of the pollinator species, let’s say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males. That which is being fooled is the more neurologically advanced of the two species. And so what I’ve talked about, somewhat controversially, is what I call artificial out-telligence.
Where instead of actually having an artificially intelligent species, you can imagine a dumb computer program that uses the reward, through let’s say genetic algorithms and selection within a computer framework, to increasingly parasitize, using better and better lures, fully intelligent humans. And in the case of artificial intelligence, I don’t think we’re there yet.
But in the case of artificial out-telligence, I can’t find anything that’s missing from the equation. We have self-modifying code. We have Bitcoin and blockchains, so you could have a reward structure. And there’s nothing that I see that keeps us from creating it.
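The loop described above — candidate lures, a reward when the victim is fooled, selection and mutation on the winners — is the shape of a genetic algorithm. A hedged sketch, assuming Python, in which a string-matching fitness function stands in for the victim brain that rewards better lures; the target string and all parameters are illustrative assumptions, not anything from the source.

```python
# Illustrative genetic algorithm: a "dumb" program improves its lure
# purely through selection pressure. The fitness function plays the
# role of the fooled brain (the pollinator rewarding the orchid's
# pseudo-female); TARGET and all parameters are made-up for the demo.
import random

random.seed(0)

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"
TARGET = "SEDUCTIVE_LURE"  # stands in for whatever the victim brain responds to

def fitness(candidate: str) -> int:
    # Reward = how many characters of the lure match what the target
    # brain rewards. The GA never "understands" the target; it only
    # receives this scalar reward.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    # Randomly perturb each character with a small probability.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(pop_size: int = 50, generations: int = 200) -> str:
    # Start from random lures, then repeatedly select the best-rewarded
    # fifth of the population and refill with mutated copies of them.
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        survivors = pop[: pop_size // 5]  # selection pressure (elitist)
        pop = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

Because the survivors are kept unmutated, the best lure’s reward never decreases from one generation to the next; that ratchet, not any intelligence in the program, is what makes the out-telligence scenario plausible.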
Now that’s such a strange and quixotic possibility. In this framework, I don’t see an existential risk, so my friends who worry about machine intelligence being a terminal invention for the human species probably don’t need to be worried. But I think there’s a lot of exotica around artificial intelligence that hasn’t been explored, and that I think is much closer to fruition.
Perhaps that’s good. Maybe it’s a warning shot, so that we’re going to find that just as we woke up to Bitcoin as digital gold, we may wake up to a precursor to artificial general intelligence which alerts us to the fact that we should probably be devoting more energy to this absolutely crazy-sounding future problem which no humans have ever encountered.