01/16/2015
The AI Threat: Is It Us or Them?
So some folks who should know are worried that artificial intelligence (AI) could become a danger to humankind (see previous post, “Is Artificial Intelligence a Threat?” --January 11).
We might respond, in the immortal words of Mad magazine’s Alfred E. Neuman: “What, me worry?”
en.wikipedia.org/wiki/Alfred_E._Neuman
Should we be concerned about AI in computers and networks joining forces and rising up like the Terminator to squash us like bugs, or could we maybe just relax with Alfred?
www.tcm.com/mediaroom/video/188651/Terminator-The-Original-Trailer-.html
Yeah, maybe we should worry a little. And not because there is anything intrinsically sinister about AI technology. After all, technology by itself has been proven totally harmless, hasn't it? (Just ask the National Rifle Association.) Seriously, I would worry not so much because of AI itself, but because WE design this stuff.
Think about this: AI was developed by the same animals that created and then obliterated a series of powerful bipedal civilizations over the past 5,000 years, invented dozens of powerful new fields of knowledge and figured out how to use pretty much all of them to slaughter one another, flew to the Moon and then basically dropped the interplanetary travel ball, and endlessly revere money and power over knowledge and well-being.
So people of influence are now debating the threat of AI. This is a good thing. Let us by all means adopt and implement Asimov's Three Laws of Robotics (now actually four). Put safeguards in place. But understand that for each possible threat we identify and parse out, determining methods for controlling each threat, there are inevitably unstated issues, oblique connections of probability, that can and will put some or all of us in harm's way.
en.wikipedia.org/wiki/Three_Laws_of_Robotics
The nuclear arms race sped along its predictable pathway to MAD (no, not the vaunted magazine, but "mutually assured destruction"). This late-20th-century concept, far more mad than the magazine ever could have been, was to keep so many weapons ready to use that any exchange of nuclear warheads between two adversaries would generate a response devastating both nations, making such use unthinkable. It worked, sort of. On the surface it seems to have been good. Or was it? All those warheads must now be decommissioned; meanwhile they sit deteriorating in less-than-secure locations, unstable where they are, and vulnerable to diversion to nefarious ends by people possibly even more insane than the ones who cooked up the MAD idea in the first place.
en.wikipedia.org/wiki/Mutual_assured_destruction
Science fiction in the nuclear age has explored technology gone awry in myriad ways, including my own novel Micro. In all of these stories we pay a heavy price for less-than-perfect thinking, poor projections into the future, and unknown and unintended consequences.
We plan for what we know, what we see in front of us, and what we imagine the future to be. We can neither anticipate nor mitigate what we cannot imagine, so we typically fail to identify and neutralize every issue that might arise. In Terminator, the development of a centralized "safe" system to ensure peace for the human race proved to be its undoing when that system was taken over by, yep, AI.
So the concern with AI is that, as we visualize a utopian existence but implement it imperfectly, we may inadvertently ensure that AI will one day become sentient, and finally sweep us away as outmoded hosts of inferior intelligence. Logically this is entirely possible, and some futurists would argue it is even desirable on some level.
Should we then become latter-day Luddites and smash all systems beyond some arbitrary level of artificial intellect, or blindly trust that all will work out well?
www.smithsonianmag.com/history/what-the-luddites-really-fought-against-264412/?no-ist
Perhaps an answer lies in something closer to the Russian-proverb-inspired concept behind controlling and finally decommissioning many of those nuclear warheads: trust but verify.
en.wikipedia.org/wiki/Trust,_but_verify
Test each step toward artificial intelligence with rigorous methods, checked many times, in many ways, by all parties. Take the steps we need to move ahead as a species, expanding our capabilities and making life more enjoyable and healthier for all of us, while making sure that we don't get stupid with technology. Done right, it is possible for us to believe in and move toward the future without giving it away.
Alfred E. Neuman is the fictitious mascot and cover boy of Mad magazine. The face had drifted through American iconography for decades before being claimed by Mad editor Harvey Kurtzman and later named by the magazine's second editor, Al Feldstein. He also briefly appeared in the animated TV series Mad.