The biggest A.I. risks: Superintelligence and the elite silos | Ben Goertzel | Big Think
We can have no guarantee that a superintelligent AI is going to do what we want. Once we're creating something ten, a hundred, a thousand, a million times more intelligent than we are, it would be insane to think that we could rigorously control what it does. It may discover aspects of the universe that we don't even imagine at this point.
However, my best intuition and educated guess is that, much like raising a human child, if we raise the young AGI in a way that's imbued with compassion, love, and understanding, and if we raise the young AGI to fully understand human values and human culture, then we're maximizing the odds that as this AGI gets beyond our rigorous control, at least its own self-modification and evolution are imbued with human values and culture and with compassion and connection.
So I would rather have an AGI that understood human values and culture become superintelligent than one that doesn't even understand what we're about. And I would rather have an AGI that was doing good works like advancing science and medicine and doing elder care and education become superintelligent than an AGI that was being, for example, a spy system, a killer drone coordination system, or an advertising agency.
So even though we don't have a full guarantee, I think we can do things that commonsensically will bias the odds in a positive way. Now, in terms of nearer-term risks regarding AI, I think we now have a somewhat unpleasant situation where much of the world's data, including personal data about all of us and our bodies and our minds and our relationships and our tastes, along with much of the world's AI firepower, is held by a few large corporations, which are acting in close concert with a few large governments.
In China, the connection between big tech and the government apparatus is very clear, but we have very close connections between big tech and government in the U.S. as well. I mean, there was a big noise about Amazon's new offices; well, 25,000 Amazon employees are going into Crystal City, Virginia, right next door to the Pentagon. There could be a nice big data pipe there if they want. Anyone can Google Eric Schmidt and the NSA as well.
So there are a few big companies with close government connections hoarding everyone's data, developing AI processing power, and hiring most of the AI PhDs, and it's not hard to see that this can bring up some ethical issues in the near term, even before we get to superhuman superintelligences potentially turning the universe into paper clips.
And decentralization of AI can serve to counteract these nearer-term risks in a pretty palpable way. So as a very concrete example, one of our largest AI development offices for SingularityNET, and for Hanson Robotics, the robotics company I'm also involved with, is in Addis Ababa, Ethiopia. We have 25 AI developers and 40 or 50 interns there.
I mean, these young Ethiopians aren't going to get a job with Google, Facebook, Tencent, or Baidu, except in the very rare cases when they manage to get a work visa to one of the countries those companies are based in. And many of the AI applications of acute interest in those places, say, AI for analyzing agriculture and preventing agricultural disease, or AI for credit scoring for the unbanked to enable microfinance, problems of specific interest in sub-Saharan Africa, don't get a heck of a lot of attention these days.
AI wizardry from young developers there doesn't have a heck of a lot of market these days, so you've got both a lot of the market and a lot of the developer community shut out by the siloing of AI inside a few large tech companies and military organizations. This is a humanitarian and ethical problem, because there's a lot of value being left on the table and a lot of value not being delivered. But it could also become a different sort of crisis: if you have a whole bunch of brilliant young hackers throughout the developing world who aren't able to fully enter the world economy, there are a lot of less pleasant things than working for Google or Tencent that these young hackers could choose to spend their time on.
So I think getting the whole world fully pulled into the AI economy, with developers able to monetize their code and application developers having an easy way to apply AI to the problems of local interest to them, is ethically positive right now, both in terms of doing good and in terms of diverting effort away from people doing bad things out of frustration.