Are Machines Too Dumb to Take Over the World? Part I: The Duh! Factor

Existential Angst: Nuclear War, Donald Trump, or Artificial Intelligence?

Apart from worldwide nuclear war (unlikely), or Donald Trump grabbing dictatorial powers (not quite as unlikely), my greatest worry is the possibility of Artificial Intelligence (AI) taking over the world—or at least enough of it to doom humanity as we know it.*

Likely? Experts have views as divergent as the sides that disputed whether the notorious DRESS was black and blue or white and gold. More seriously, people way smarter than me (and perhaps you) have made predictions ranging from “AI threatens the elimination of humankind” to “AI is the greatest tool for the betterment of humankind that has ever existed.”

(The remainder of this post addresses machine intelligence, which is really a sub-category of AI—but since most people treat AI as equivalent to machine intelligence, I use the terms interchangeably unless specified otherwise.)

Ultimately AI may be a greater threat than Climate Change.** I know Green New Dealers don’t want to hear it, but consider: there have been drastic changes in climate in the geological record—and life, including humans, adapted. Recent Ice Ages are notable examples.  (This is NOT to defend inaction on Climate Change! Especially because the changes we are imposing on the planet, unlike most previous climate shifts, are so devastatingly swift.)

Super-AI, on the other hand, will be utterly unprecedented, and its advent, unlike Climate Change, could come swiftly and with little warning—especially if we continue to pooh-pooh it as an illusory bogeyman.

(For some premature pooh-poohing, see my Will They Be Coming for Us?  For some subsequent unpooh-poohing, see Superintelligence: Proceed with Caution. In the latter, the take-off point is Nick Bostrom’s Superintelligence, recommended reading for anyone interested in a very deep, very thorough treatment of the subject. Believe me, Nick Bostrom has thought of every angle that 99% of us have ever thought of.)

The GOOD news is that the solutions, if they exist, are much cheaper than coping with Global Warming. The number of scientists required to head off AI Armageddon is relatively small, and the required resources are tiny compared with such budgetary monstrosities as the U.S. Defense budget. But the cost is non-negligible, and who’s going to pay for it?

AI Takeover – A Chimera?

A great many very smart people—Elon Musk and Stephen Hawking, to name two—believe that AI poses an existential threat not merely to civilization but to the human race itself. Superintelligent machines would have little need for people, and could find us so contrary to their agendas that extermination might be called for on the grounds of efficiency.

On the other hand, a large and skeptical group of very smart people believes that super-AI is not worth bothering about—at least for the time being, when we have so many more immediate dangers to deal with (e.g. nuclear war, climate change, Trump). There’s a third group that wants to promote the development of superintelligent machines as quickly as possible, because the benefits so heavily outweigh the risks. That’s the really scary group.

The threat of AI domination hangs on two conditions, both of which would have to occur for world takeover:

(1) Machines get a lot smarter than we are, not only in the narrow domains where they already excel or are soon to excel, but across many domains in combination, with what we can call “general intelligence.”

(2) Machines, given the capability, will want to take over. This need not stem from malevolent intent. It could even arise from benign intent—benign by machine standards. They might consider human civilization worth saving, while also valuing a stable, flourishing biosphere. These values might even have been instilled in them by their human creators. If these two values are paramount, the machines might quickly conclude that imposing radical birth control is the highest priority.

You can see where this could have most unpleasant consequences for us—the quickest path to minimizing births would involve forced sterilization and abortions on a massive scale.

The next highest priority might be pushing human technological development back 50 or 100 years, or more, to lower the per capita impact of humans on the natural world.

(Bostrom identifies such scenarios as arising from the ambiguities of instrumentality—where AI finds perverse solutions [instruments] to achieve seemingly benevolent ends.)

If both of these conditions come true, we’re cruisin’ for an existential bruisin’. 

BUT just maybe . . .
AIs are not so smart after all.

There’s some consolation in recent research suggesting that the capabilities of AI are neither what enthusiasts celebrate nor what doomsayers fear. AIs can even get tripped up in the very domains where we are inclined to acknowledge machine superiority.

[FYI: links below are to New Scientist articles, and if you are not a subscriber you might hit a paywall. I’ve tried to convey the key points.]

For example, we know that AIs are great at math, and recently we have heard of an AI from Google’s DeepMind that has already proved 1,200 math theorems.

On the other hand, in New Scientist we also read that “DeepMind’s AI fails maths exam.” That is, it received a D- grade on an exam given to 16-year-olds in the UK.

What explains this seeming paradox, of AI brilliance on the one hand, and AI stupidity on the other?

An answer is suggested in a post to come (Are Machines Too Dumb etc., Part II), but for now, here are two examples from the Brits’ exam:

(1) Given the problem 1+1+1+1+1+1, DeepMind correctly calculated an answer of 6. (Why 16-year-old students were given such a question may go part of the way to explain Brexit.) Add another +1, however, and it failed to get 7 (the article doesn’t tell us what answer it did give, perhaps to save Google some embarrassment—we all know how sensitive they are at Google).

(2) It correctly answered 68 to the question “calculate 17 x 4.” But when the period (full stop) was removed, it calculated 69. Go figure.***
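We don’t know what actually went wrong inside DeepMind’s model, but here is a minimal sketch, in Python, of one plausible failure mode: a system that has learned surface patterns of characters rather than arithmetic itself. Everything in the toy below (the “learned” lookup table, the stand-in wrong answer) is hypothetical and deliberately crude.

```python
# Toy sketch of punctuation brittleness -- hypothetical, NOT DeepMind's code.
# Pretend the model "learned" answers keyed to the exact character strings
# it saw during training, full stop included.
learned = {
    "17 x 4.": 68,  # every training example happened to end with a full stop
}

def answer(question: str) -> int:
    # A real neural network interpolates rather than doing a lookup, but the
    # failure mode is analogous: an unfamiliar surface form, even one
    # character off, can send it down the wrong path.
    return learned.get(question, 69)  # 69 stands in for "confident nonsense"

print(answer("17 x 4."))  # 68 -- matches a pattern seen in training
print(answer("17 x 4"))   # 69 -- one missing character, wrong answer
```

To a human the two questions are identical; to a character-level model they are two different input sequences, and nothing forces it to treat them the same way.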

Are these minor glitches, or are they clues to deeper deficiencies in AI comprehension? This question is important, because machines often arrive at correct answers along paths that their programmers do not fully understand. In addition, when scientists can reverse-engineer an AI’s “thought processes,” they are finding that the machines have come up with novel lines of thinking that humans never discovered on their own—and might never have.

When world (human) chess champion Garry Kasparov lost to Deep Blue in 1997, he observed that his opponent seemed to have shown some original thought. The world’s best human Go players are regularly getting crushed by AIs (e.g. AlphaGo, AlphaZero, “Master”) that introduce innovations that at first glance stump the experts—innovations the machines discover by playing against themselves.

Combine the mysteriousness of inscrutable and novel machine thought processes with a lack of common sense (whaddya mean, 17 x 4. = 68, but 17 x 4 = 69?) and you can contemplate brilliant machines being led astray by things as simple as missing punctuation marks. Looking on, we would not know why; worse, the machines themselves would not be aware of any error. They would not smack themselves upside their digital heads and say, “Well, duh!”

Of course computer scientists will solve the punctuation problem, but that’s a trivial example: a relatively simple “known unknown” for which a solution will be found. The broader problem is when “unknown unknowns” come into play—unanticipated by both humans and machines.

Will machines ever have “common sense”?

“Deep Learning” employs artificial neural networks, typically trained on graphics processing units (GPUs), to let machines emulate the way humans learn and decide things. (More on this in Robots Get a D+ at Tesla, where we see an example of machine ineptitude, and Elon Musk admits that “humans are underrated.”) Give these advanced machines enough data and a problem to solve, and they quickly find solutions by contriving their own sub-routines without human assistance.
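To make “give these machines data and a problem” concrete, here is a minimal sketch of the technique in Python with NumPy. It is an illustration only: the layer sizes, learning rate, and step count are arbitrary choices of mine, not anyone’s production settings. A tiny two-layer network teaches itself the XOR function from four examples, and no human writes the decision rule.

```python
# Minimal "deep learning" sketch: a two-layer neural network learns XOR
# by adjusting its own weights from examples. Pure NumPy, no GPU -- just
# enough to show the shape of the technique.
import numpy as np

rng = np.random.default_rng(0)

# Training data: four inputs and the answers we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR

# Two layers of weights, initialized randomly.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current guess.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: nudge every weight in the direction that shrinks the error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

# Should print values close to 0, 1, 1, 0 -- a rule the network found itself.
print(out.round(2))
```

Nothing in the code mentions XOR by name; the network contrives its own internal sub-routine for it, which is precisely why the resulting “thought process” can be hard for humans to inspect.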

As the example of AlphaGo illustrates, machines have proven capable not only of learning much on their own, but also of what appears to be imagination and originality.

Still, when we find machines blundering on simple arithmetic problems, we can pat ourselves on the back and say, “I told you so—computers just don’t have common sense. Never will.”

If by common sense we mean the capability to avoid doing really stupid stuff, where does it come from? What’s it good for?

Some hints to come in “Are Machines Too Dumb to Take over the World? Part II.”

——————– footnotes —————–

* I say “humanity as we know it” to leave open the possibility that some humans will merge with machines and enhance themselves biologically to become demigods. The humans that do it will be the ones with the resources to make it happen, i.e. the wealthy, whose desire for world domination is a given—they’re already halfway there.

** For the (probably flawed) idea that the danger of Climate Change is overblown, see my earlier post, Bjorn Lomborg Runs the Numbers.

*** It’s possible that DeepMind threw out the “69” as a joke, knowing that  humans are obsessed with sex, and enjoy ribald humor—the 16-year-old Brits also taking the test would get a kick out of it. Oh, that naughty DeepMind!
