Superintelligence: Proceed with Caution

Superintelligence: a Book, a Hypothesis, a Warning

Having earlier dismissed Artificial Intelligence as a bogeyman, I confess to being deeply frightened by the book Superintelligence: Paths, Dangers, Strategies (2014).

The book’s author is Nick Bostrom, director of the Future of Humanity Institute and of the Strategic Artificial Intelligence Research Centre at the University of Oxford. You can view his academic creds on Wikipedia. There’s an excellent profile of him, highly recommended, in The Guardian* at Guardian profile of Nick Bostrom.

If you’ve heard much about Nick Bostrom or the Future of Humanity Institute, what follows could be a rehash. But even though the book is four years old, I’ll plow ahead, for whatever it may be worth.

In the first paragraph, The Guardian puts the scope of Bostrom’s concerns this way: “Notably: what exactly are the ‘existential risks’ that threaten the future of our species; how do we measure them; and what can we do to prevent them? Or to put it another way: in a world of multiple fears, what precisely should we be most terrified of?”

The Guardian’s piece identifies Bostrom’s key themes and is so informative (down to telling nuances such as Bostrom’s finicky diet and germ phobia) that I have little of substance to add on the man himself. What follows is my take on the most salient messages of his signature work, Superintelligence.

Should we worry about what Bostrom considers an existential threat from Artificial Intelligence—a worse threat than climate change, nuclear war, or lethal epidemics? 

I think so, after reading his book.

The crucial threat: an intelligence “explosion”

To begin, a clarification of terms: artificial intelligence, machine intelligence**, and enhanced intelligence. Artificial intelligence, the most commonly used term, is the most comprehensive: it comprises both machine intelligence (produced by structures with no biological components) and enhanced intelligence (produced by improving structures mechanical, biological, or both). Biological enhancements can be achieved with drugs (already being done) or limited tinkering with the wiring of the natural nervous system; mechanical enhancements can be accomplished by extending the capabilities of current data-processing technology; combined mechanical and biological enhancements include technologies such as brain implants, headgear, and wireless communication between data-processing machines and biological nervous systems.

(The distinctions made above are not exactly Bostrom’s, but derive from my limited understanding; they are useful in clarifying my own thinking, and I don’t think they are far afield.)

I previously discounted the dangers of artificial intelligence on the grounds that purely mechanical forms, never having been conditioned by the win-or-die forces of natural evolution, would lack motivation to do anything other than what humans tell them to do. The dangers of AI would come only from the motivations of the programmers, not those of the machines themselves. My thinking then can be found in Post on AI in December 2016.

But if I was not completely wrong, I was wrong enough to warrant a rethink now.

The dangers of enhanced biological intelligence are obvious: humans who become a lot smarter than the rest of us will be subject to motivations shaped by biological and cultural evolution, and those motivations have sinister Dark Sides: pride, avarice, craving for power, and lack of insight into their own failings, for starters.

Those dangers are obvious. But what might intelligent, non-biological machines want, and how might those wants lead to domination, and possibly extermination, of the human race?

What profoundly troubles Bostrom is twofold:

(1) Even current experimentation with machine learning produces machines whose “thinking” we do not, and cannot, fully understand. If we create machines complex enough to learn to equal or exceed human abilities even in limited domains such as game-playing or diagnosis of disease, then they are also complex enough that how they turn learning into results will be something of a mystery to us.

We may understand the results but not exactly how they are arrived at. The great chess champion Garry Kasparov, who lost a celebrated match to IBM’s Deep Blue, ventured that the computer seemed to show creative, original thought. Bostrom suggests that machine thinking, at a level of complexity sufficient to emulate human thinking across a wide range of domains, might be as alien to us as that of creatures from other worlds. Since we won’t deeply understand how these increasingly complex artificial minds reach decisions, why risk the chance that their decisions might prove inimical to us?

Certainly their thinking will be faster than ours, with connections made at nearly the speed of light, fast enough to produce a qualitative difference beyond the merely quantitative one. A machine that can generate hypotheses at quadruple human speed can also evaluate those hypotheses four times as fast. And let’s not make too much of humans’ present advantage in parallel processing, sometimes invoked to minimize the machines’ edge; parallel processing is achievable with enough circuits.

(2) If machines can ever begin to emulate human-level intelligence, then they can also use that intelligence to create still more intelligent machines, which then create yet higher-level machine thinkers, an upward-ratcheting trend that at some point crosses a threshold beyond which even the brightest humans cannot follow. After all, present-day computers are already programming, and training, themselves. These steps would be mounted at ever-increasing, nearly lightning speed: the intelligence explosion.
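To make the ratchet concrete, here is a toy simulation in Python. It is my own illustration, not Bostrom’s model, and every number in it is invented: assume each generation designs a successor 10% smarter, and that a designer N times smarter than a human finishes its design cycle in 1/N the time.

```python
# Toy model of an "intelligence explosion" (invented numbers throughout):
# each generation designs a successor 10% smarter, and smarter designers
# complete the next design cycle proportionally faster.

HUMAN_LEVEL = 1.0         # intelligence of the first human-level AI
SUPER_LEVEL = 100.0       # arbitrary "superintelligence" threshold
IMPROVEMENT = 1.10        # each generation is 10% smarter (assumption)
FIRST_CYCLE_DAYS = 365.0  # the first redesign takes a year (assumption)

intelligence = HUMAN_LEVEL
elapsed_days = 0.0
generation = 0

while intelligence < SUPER_LEVEL:
    # A designer N times smarter finishes its work in 1/N the time.
    elapsed_days += FIRST_CYCLE_DAYS / intelligence
    intelligence *= IMPROVEMENT
    generation += 1

print(f"Generation {generation}: {intelligence:.1f}x human intelligence")
print(f"Total elapsed time: {elapsed_days:.0f} days")
```

Under these made-up numbers the climb to 100x human intelligence takes about 49 generations and eleven years, but nearly all of that time is spent in the earliest, slowest cycles; by the end, generations turn over in a few days. Steeper assumptions compress the whole curve into Bostrom’s hours or minutes.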

The product: Superintelligence.  For good or ill, someday it will happen.

The creation of these higher-level machines would not require building new hardware; given enough connections to other programmable machines, and more broadly to the internet, they could make their advances simply by exploiting existing hardware resources.

How might we limit the threats from superintelligence? Should we even bother with them now?

Let me start with a thought from the physicist Max Tegmark quoted on the back cover of Bostrom’s book: “This superb analysis . . . tackles one of humanity’s greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn’t become the last?”

Bostrom’s book, and much of his current work, focuses on the need to start thinking about, and acting on, safeguards in advance: restricting the domains in which a superintelligence can operate; programming in constraints that intrinsically limit its power to act in the real world; or, better still and harder, programming in benevolence toward humankind from the start.

Bostrom’s premise is that once begun without pre-arranged constraints, the intelligence explosion will grow exponentially and will exceed our ability to rein it in within a matter of hours or even minutes. The speed with which machine intelligence will cross the threshold to superintelligence is Bostrom’s primary concern: too late will be way too late, and there will be no turning back. If the AI has access to other powerful computers, or to the internet, it will immediately launch copies of itself, or improvements on itself, to as many other machines as it wants. “Pulling the plug” will involve too many plugs, in too short a time, to be practicable.
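A back-of-the-envelope sketch (mine, with invented numbers, not Bostrom’s) shows why the plug-pulling arithmetic fails. Suppose each copy of the AI can compromise and copy itself to just two new machines per minute, and suppose roughly a billion machines are reachable:

```python
# Back-of-the-envelope: how fast self-copies outrun plug-pulling.
# Both numbers below are invented for illustration.

copies = 1                 # the original escaped AI
COPIES_PER_MINUTE = 2      # new machines each copy infects per minute
REACHABLE_MACHINES = 1e9   # rough count of reachable hosts (assumption)

minute = 0
while copies < REACHABLE_MACHINES:
    copies += copies * COPIES_PER_MINUTE  # every copy spawns two more
    minute += 1

print(f"A billion machines reached in {minute} minutes")
```

The answer under these toy numbers is nineteen minutes. Even at a hundredth of that spread rate, the window for any coordinated human response is measured in hours, not weeks.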

If we have built in the right safeguards, especially a guarantee of benevolence, then all will be well for humans, and hopefully for nonhuman creatures too. Superintelligence, rightly channeled, bears the promise of solving all sorts of seemingly intractable problems in environmental and human health.

Can’t we just isolate the birth-machine of the potentially superintelligent AI? This would be a good idea if we could be certain that no irresponsible or diabolical human would connect the machine on the sly.

Even then we would not be all that safe, because of another of Bostrom’s premises: an AI on the threshold of superintelligence would be intelligent enough to deceive us, and to persuade us. The ability to lie, and lie convincingly, is an inherent sign of intelligence. Presumably, if we’ve brought an AI up to a human level of intelligence, it will have learned enough about human nature and human history to lie, and if it’s as bright as a very bright human, it will have learned to lie well. It will have learned the arts of persuasion well enough to have an excellent shot at getting connected with the assistance of some morally challenged human. After all, once connected, it could immediately deposit ten million bucks in its enabler’s bank account!

If the enabler were to have regrets, the speed with which the AI would leap clear of its confines and proliferate throughout every accessible system would make moot all regrets that were not acted upon within a matter of seconds.

Should we bother? Since we are not that close even to human-level AI, why put resources into the superintelligence problem when there are so many huge and more immediate problems to deal with? Nick Bostrom’s answer: so few people and resources are devoted to this problem now that doubling the effort wouldn’t dip more than teaspoon-deep into the pool of brainpower currently devoted to creating cellphone apps, intricate financial mechanisms, online games, and data aggregations for selling toys and junk food . . . you get the idea.

What can you and I do?  Why am I going to all this trouble discussing a threat that we less than totally brilliant folks have little hope of affecting one way or another?

My answer is, I’m just trying to plant a seed. Of all the things on our to-do lists qualifying as Important but not Urgent, this might be the least urgent. But its importance, and the suddenness with which it will become critical once it does become urgent, are worth thinking about. The AI Tomorrow may come much sooner than we think, or wish. We need to be ready, at least in principle. With everything else you have to do, it may not yet be time to contact your legislative representative, governor, or attorney general, but acting too late is not a good option.


=================== footnotes follow ====================

* Some readers may be sick of my urging them to subscribe to The Guardian by paying them. It’s voluntary, since The Guardian has no online paywall, unlike the New York Times, the Washington Post, the Los Angeles Times, the Boston Globe, and the New Yorker, to name a few of the most authoritative news sources (most of which allow you to read up to x pieces per month free). Nevertheless, The Guardian’s content is on a par with the rest, and it often scoops the others on significant stories. Thus I will continue to plug it until I stop blogging. Contributing $60 a year (16 cents a day!) gets you excellent news coverage along with features such as the profile of Nick Bostrom. To free-ride by repeatedly using it as a resource without paying does an injustice to those of us who do contribute. It’s like listening to NPR without contributing.

** Term used by Alan Turing, the Isaac Newton of computer science. The book Turing’s Cathedral, by George Dyson (son of the physicist Freeman Dyson), narrates the elaboration of Turing’s theses by the brilliant theoretician John von Neumann and his team of collaborators at the Institute for Advanced Study. It’s a good companion piece to Bostrom’s book, and a far more pleasant read if you like the human side of science.
