National Combustion, Part 2: Artificial Intelligence and the Collapse of the State

The fundamental equation:
Political instability + Artificial Intelligence → Collapse of the State

The forces of history, combined with the ways artificial intelligence multiplies the power of technology, are already at work undermining the polity of the United States. The recent Republican meltdown in the U.S. House of Representatives is a foreshadowing of what is likely to come.

National Combustion, Part 1 (linked below) drew on social scientist Peter Turchin’s historical framework to make sense of how we came to this fraught moment, when it seems quite possible the United States might slide into civil war. Yes, many of the January 6 insurrectionists are in jail and Donald Trump’s national con job is fraying. Yet the factors going into Turchin’s model of “political disintegration,” with abundant historical antecedents, remain the same today as on January 6, 2021. It’s no great mystery that we are in a highly unstable political situation; it matches a pattern that has been repeated time and time again in human history. So much for American exceptionalism.

U.S. society is already crossed by multiple fault lines besides that of the Big Lie that Donald Trump won the 2020 presidential election. Guns, voting rights, reproductive rights, minority rights, workers’ rights, the distribution of wealth, public health, immigration, affirmative action, educational freedoms, the content of school and library books, historical analysis challenging the status quo, white nationalism—these fault lines, already under tremendous stress, could split open as a result of a precipitating event, sudden or prolonged. Another disputed national election; a political assassination; a nationwide or even international cyberattack; a Waco-like siege of an anti-government enclave; a takedown of the grid by actors unknown; a spate of terrorist attacks; a pandemic; a depression—any of these could unleash the partisans itching to fight a civil war against the federal government.

There are several paths an attempt to initiate a civil war could take. None of them seems likely to end in overthrowing the federal government—barring help from within the government itself—without artificial intelligence.

With artificial intelligence, however, the game changes in favor of widespread disruption, disunion, and a period of anarchy, leading either to authoritarian rule (to preserve the state by force) or to a breakup of the country into territories reflecting the existing red/blue tribal divisions. (Far less likely is a continuing period of anarchy, with the country splitting into tribes fighting each other over diminishing resources; but I wouldn’t rule it out. Some folks have been stockpiling food, guns, ammunition, and explosives for just such a situation.)

Either outcome would bring about an economic calamity. That’s the one certainty, and it implies another: external agents—in particular the Russian and Chinese governments—would be active in the information/misinformation/disinformation environment from the beginning.

The historical context: the world according to Turchin

For Turchin’s thesis, it is best to read the book, End Times (a title chosen by the publisher, not Turchin). If you don’t have time for that, check out this review by Paul Rosenberg in Salon. For a somewhat different slant on Turchin, you can go to my post, National Combustion, Part 1: Political Disintegration and the Potential for Civil War. Both pieces have links to other sources. The most important link in Rosenberg’s is probably https://peterturchin.com/structural-demographic-theory/—a rather dry academic analysis of one of the pillars on which Turchin’s theory stands.

On YouTube, you can find plenty of interviews with Turchin. One of my favorites is Understanding societal collapse with complexity scientist Peter Turchin (see the end of this post), because the interviewer challenges Turchin on a few points, making for a livelier discussion than you find elsewhere.

Turchin draws parallels between current conditions of instability in the U.S. and similar conditions in hundreds of past societies, the majority of which came to undemocratic ends.

What Turchin left out is the impact of artificial intelligence and its capacity to multiply other forces of technology that are already dangerous in themselves, making the collapse of the state more likely.

Vulnerability of the state to cyber warfare

Is speaking of the collapse of the state hyperbole? Not if one refers to the views of AI superstar Mustafa Suleyman. Suleyman is a cofounder of DeepMind, the AI lab whose AlphaGo program defeated a world champion at the game of Go, along the way demonstrating original tactics that lay outside the scope of the data it was trained on. More significantly, DeepMind’s AlphaFold program worked out the 3D structures of 200 million proteins, when determining the structure of even one had been a devilishly challenging, time-consuming problem for human researchers. These programs were capable of problem-solving not just by speeding up known methods, but by creating new methods and new knowledge.

Most of what follows stems from Suleyman’s recent book, The Coming Wave. I’m drawing on his work because it provides a clear historical perspective, primarily about technology’s impact on human societies from stone tools to AI. I emphasize clear because while there are elements of that history discussed in other sources—many of them coming from brilliant minds, and not necessarily in agreement—his is the easiest to follow, and it’s all in one place with a smooth, logical narrative flow.

An example of that vulnerability: the 2017 WannaCry ransomware attack on the British National Health Service (NHS), which temporarily crippled services across much of the system. Speaking of the attack, Suleyman says the following in his book about the vulnerability of our institutions:

It’s tempting to argue cyberattacks are far less effective than we might have imagined, given the speed at which critical systems recovered from attacks like WannaCry. With the coming wave that assumption is a serious mistake. Such attacks demonstrate that there are those who would use cutting-edge technologies to degrade and disable key state functions. They show that the core institutions of modern life are vulnerable. A lone individual and a private company (Microsoft) patched up the systemic weakness. This attack did not respect national boundaries. Government’s role in handling the crisis was limited. . . . 

Today’s cyberattacks are not the real threat; they are the canary in a coal mine of a new age of vulnerability and instability degrading the nation-state’s role as the sole arbiter of security.

Suleyman goes on to elaborate on risks to the state posed by AI. To quote again:

AI-enhanced weapons will improve on themselves in real time. WannaCry’s impact ended up being far more limited than it could have been. . . . AI transforms this kind of attack. AI cyberweapons will continuously probe networks, adapting themselves autonomously to find and exploit weaknesses.

Artificial intelligence as force multiplier

In the long run AI may doom humankind altogether (a subject for a later post)—be it 5 years, 10 years, 50 years hence, or never, depending on which expert you ask. But of more urgent concern is what damage artificial intelligence can do in the next few years in the hands of people with the mentality of the Visigoths and Huns who plundered Rome 1,600 years ago. Or of lone wolves with no purpose but taking revenge on enemies real or imagined.

AI, in combination with cheap modern weaponry (a Switchblade drone can go for as little as $1,000), is a force multiplier unlike anything the ancients could have thought of. Heck, unlike anything military strategists could have thought of just 70 years ago.

Don’t take my word for it. Consider what Lindsay Clark reported on msn.com five months ago: “Proliferation of AI weapons among non-state actors ‘could be impossible to stop.’”

Or consider a presentation at the 2022 World Economic Forum: “Why we need to regulate non-state use of arms.” I quote from that presentation:

Open-source artificial intelligence (AI) capabilities and lightweight, low-power onboard processing make it possible to create “home-made” autonomous weapons by converting civilian devices, such as camera-equipped quadcopters.

Scared yet? Certainly U.S. senators (belatedly) are:

Both Democrat and Republican senators have sounded the alarm on the potential malevolent use of artificial intelligence (AI). The bipartisan alarm was raised during a hearing by the Senate Judiciary Committee on Tuesday (July 25, [2023]). 

The risk grows with every passing day—not only because of the leaps in AI’s capability on the leading edge of research, but also because the fruits of that research are becoming cheaper, more abundant, and more readily available.

Bad actors buying drones is one thing. But, says Suleyman, black-market production and sale of weaponry such as drones will be facilitated by 3-D printing:

Technologies like 3-D printing and advanced mobile communications will reduce the cost of tactical drones to a few thousand dollars, putting them within reach of everyone from amateur enthusiasts to paramilitaries to lone psychopaths.

AI is not just a physical force multiplier. It is also an information multiplier, in two ways: (1) machines can take in, process, store, and output volumes of information vastly larger than human brains can handle, at vastly greater speeds; (2) machines can create new knowledge, such as AlphaFold generating the structures of 200 million proteins, as mentioned above.

It follows that AI is also a disinformation multiplier. Read Thor Benson’s piece in Wired, ‘This Disinformation Is Just for You,’ to see how a handful of people armed with advanced generative AI could flood the internet not simply with general-purpose lies, but with a limitless variety of lies targeted at groups and individuals, tailored to profiles deduced from their online activities.

“Deepfakes” may become the most potent weapon of disinformation campaigns, adding highly realistic video and audio to text, or replacing text altogether. Artificial intelligence is constantly enhancing the realism of deepfakes, making it ever harder to distinguish fakes from reality. Imagine a deepfake of President Biden admitting to pressuring Ukrainian law enforcement to lay off Hunter Biden, or a deepfake of special counsel Jack Smith shutting down the prosecution of Donald Trump for lack of evidence. Imagine a fake of National Guard troops gunning down pro-Trump demonstrators. Or, taking the opposite tack, a deepfake of Donald Trump, eyes downcast, tears trickling down his cheeks, confessing to having lost the 2020 election and then bilking his followers of millions of dollars.

Such is the speed with which AI is being developed that within a year it may be difficult even for an expert to detect deepfakes without labeling.

How effective would labeling be? An article from NiemanLab discusses several ways to label AI content. You’ll notice an example in the section on watermarks—something that might easily escape attention in a 10-second clip, or go unread even once it is noticed: if it’s an action video, you’ll be more likely to focus on the subject’s eyes and mouth than on the little script below his chin.

Then there’s the challenge of passing labeling laws and ensuring their enforcement, especially when some free speech advocates raise a ruckus.

Even with labeling, mistrust of all news sources would grow beyond its already high level if people find they cannot believe even their own eyes and ears to tell them the truth.

Asymmetric forces and the vulnerability of the nation-state:
Mustafa Suleyman fears an unraveling

Suleyman is as much a humanist as he is a technophile, having begun his higher education as a student of philosophy, and his ideas reflect an expansive view of history and human pre-history. The Coming Wave is one of those books where new insights jump out at you on nearly every page. It covers far more territory than I can even summarize here.

But apropos of the narrow argument I am making about the risks to the United States, he raises two main points:

First: cheap, abundant technology gives the forces of disruption asymmetric advantages over established institutions.

The proliferation of cheap and powerful weaponry—both mechanical and informational—means single individuals and small groups pose game-changing threats not only to vulnerable groups such as racial minorities, but also to the general public, law enforcement, and the top levels of governments themselves via asymmetric warfare. An attack may begin as a widespread disinformation blitz, followed by surgical strikes on political leaders, infrastructure (the fragile electricity grid being a prime target), government offices, data centers, financial institutions, hospitals, shopping centers, places of worship, conference centers, airports, food storage facilities, etc. As few as 50 people performing coordinated attacks on multiple targets could throw the country into chaos. Terrorism on a huge scale could follow. Artificial intelligence in the wrong hands will make massive social disruption feasible.

Synthetic biology may be the single most horrific threat at present. Early in the book, Suleyman recounts the alarm he felt during a seminar on technology risks when one presenter showed how the price of DNA synthesizers had dropped to the point where, in principle, a single individual could create “novel pathogens more transmissible and lethal than anything found in nature. . . .  If needed, someone could supplement homemade experiments with DNA ordered online and assembled at home. The apocalypse, mail ordered.”

The presenter of this apocalyptic scenario in the seminar, says Suleyman, was “a respected professor with more than two decades of experience.”

AI assists in research, understanding, and use of synthetic biology, making it all the easier for hackers to acquire expertise outside of traditional channels and unleash sophisticated bioweapons upon an unsuspecting populace.

Unthinkable? Production of nuclear weapons was unthinkable just 100 years ago. It so happens that the resources and equipment required to make nuclear bombs lie out of reach of the non-state actors we know of; but within their reach are AR-15s, attack drones, quadcopters, and 3-D printed explosives, to name some tools of asymmetric warfare that are not quite as scary as a synthetic plague.

Second: only the nation-state has the resources to contain the dangers of these seismic technologies, and the nation-state is under assault.

Once again, to quote Suleyman directly:

The foundation of our present political order—and the most important actor in the containment of technologies—is the nation-state. Already rocked by crises, it will be further weakened by a series of shocks amplified by the [new and radical technological] wave: the potential for new forms of violence, a flood of misinformation, disappearing jobs, and the prospect of catastrophic accidents.

Suleyman warns of “tectonic shifts in power both centralizing and decentralizing at the same time” that may lead on the one hand in the direction of authoritarianism, and on the other to “empower groups and movements to live outside traditional social structures.” (It’s left to your imagination to conceive of what that last phrase is meant to imply.)

Lastly: the risks of “pessimism aversion”

Remember the scenario of a single individual creating a pandemic worse than any to date, using synthetic biology with affordable DNA synthesizers? At that presentation, Suleyman was shocked by the complacency of the other attendees. Here’s how he summed it up:

The collective response in the seminar was more than just dismissive. People simply refused to accept the presenter’s vision. No one wanted to confront the implications of the hard facts and cold probabilities they had heard. I stayed silent, frankly shaken.

This incident exemplified what Suleyman calls “pessimism aversion”—an unwillingness to grapple with a terrifying reality, even on the part of those tasked with addressing technology risks. He encounters it repeatedly, to his dismay. You can’t solve a problem without acknowledging that it is a problem. Even if you acknowledge it is a problem, you can’t solve it without digging into it, and these challenges are so enormous that even experts shy away from them—shutting their eyes rather than looking into the face of the tiger just inches away.

This post is asking you to look. What people outside the high-tech circle might do about it will be the subject of a post to follow.

===================================================

Below: a good video interview with Peter Turchin.

