Should we be worried about AI turning into a "Machine God"?
Scenario 4 is AGI: artificial general intelligence. This is the idea that a machine will be smarter than a human at almost all tasks, and it's the explicit goal of OpenAI: they want to build AGI, though there's a lot of debate about what that actually means. Once we have a machine that's smarter than a human and can do any human's job, can it create an AI smarter than itself? If so, we get artificial superintelligence (ASI), and humans become obsolete overnight.
There are people genuinely worried about this, and I think it's worth spending a little time being slightly worried too, because other people are. But that scenario tends to take agency away from us; it's something that happens to us. I think it's more important to worry about what I call scenarios 2 and 3: continued linear or exponential growth.
We don't know how good AI is going to get, but it's very likely to keep improving in the near term. Now is a good time to figure out how to use AI to support the things that make you human or that you're good at, and which things you might want to hand off more to the AI as it gets better.