National Combustion, Part 2: Artificial Intelligence and the Collapse of the State

The fundamental equation:
Political instability + Artificial Intelligence -> Collapse of the State

Forces of history, combined with the ways artificial intelligence multiplies the power of technology, are already acting to undermine the polity of the United States. The recent Republican meltdown in the U.S. House of Representatives is a foreshadowing of what is likely to come.

National Combustion, Part 1 (link in next paragraph), drew on social scientist Peter Turchin’s historical framework to make sense of how we came to this fraught moment, when it seems quite possible the United States might slide into civil war. Yes, many of the January 6 insurrectionists are in jail and Donald Trump’s national con-job is fraying. Yet the factors going into Turchin’s model of “political disintegration,” with abundant historical antecedents, remain the same today as on January 6th, 2021. It’s no great mystery that we are in a highly unstable political situation; it matches a pattern that has been repeated time and time again in human history. So much for American exceptionalism.

U.S. society is already crossed by multiple fault lines besides that of the Big Lie that Donald Trump won the 2020 Presidential election. On guns, voting rights, reproductive rights, minority rights, workers’ rights, the distribution of wealth, public health, immigration, affirmative action, educational freedoms, content of school and library books, historical analysis challenging the status quo, white nationalism—these fault lines, already under tremendous stress, could split open as a result of a precipitating event, sudden or prolonged. Another disputed national election; a political assassination; a nationwide or even international cyberattack; a Waco-like siege of an anti-government enclave; a takedown of the grid by actors unknown; a spate of terrorist attacks; a pandemic; a depression—any of them could unleash the partisans itching to fight a civil war against the federal government.

Continue reading “National Combustion, Part 2: Artificial Intelligence and the Collapse of the State”

Another Weapon in the Radicals’ Arsenal: Deepfakes

Deepfakes: when you can’t believe your eyes, what can you believe?

Recently I sent out a link to an article on deepfakes that appeared in Reuters (not paywalled): https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30/.

Here’s another perspective from Jim Puzzanghera, published in the paywalled Boston Globe; the content is nearly identical but adds a couple of political points. From the Globe:

There are very few rules right now and few, if any, are likely coming. Democrats in Congress have introduced legislation mandating the disclosure of AI in political ads, but no Republicans have signed on.

and . . .

On June 22, the Federal Election Commission deadlocked along party lines on a petition by the consumer advocacy group Public Citizen to consider rules banning AI deepfake campaign ads. All three Republicans opposed the move, with GOP commissioner Allen Dickerson saying the agency lacked the authority.

and . . . from Republican strategist Eric Wilson (not to be confused with Rick Wilson, the true conservative and co-founder of the anti-Trump Lincoln Project), who maintains that regulation isn’t needed right now:

“I want to tamp down the moral panic because this is something that happens with any new technology. You go back to TV debates and people were worried about what that would do for voters,” he said. “We’re having conversations about it, but no one’s sitting around and having struggle sessions around artificial intelligence on our side. . . .

We are unlikely to see professional campaigns use generative AI for disinformation purposes. That’s not to say that malign actors like nation states aren’t going to try it.

Yeah. What malign nation states could Wilson possibly be referring to? Maybe states like Russia and China that are already hard at work confusing the American public with fake news and outright untruths? Who are already busy trying to undermine trust in our institutions, particularly the federal government? Who are hoping for an America increasingly fragmented into warring tribes? Whose activities support the agenda of the Radical Right? Those nation states?

Continue reading “Another Weapon in the Radicals’ Arsenal: Deepfakes”

Fake Fears, Legit Fears . . . and Fears of the Undefinable

Happy? Thanksgiving?

Yes, it’s still a beautiful world in many respects. So as we head into the holidays with visions of impeachments dancing in our heads, let us rejoice that: we are not in a nuclear war; Donald Trump has not assumed dictatorial powers; William Barr is about to resign in disgrace;* Adam Schiff has not been assassinated (as of this writing); Russia has not annexed the whole of Ukraine; New York City is still above sea level; more than a dozen elephants remain in the wild; Ruth Bader Ginsburg lives on; and Artificial Intelligence has still not determined that it’s worth taking over this messy, irrational, bigotry-infested world.

You have much to be thankful for. You can be thankful that, despite much Fox News/National Enquirer-generated fake news, we do not have on our southern border hordes of raping, thieving, murderous people itching to invade the U.S. and take away our jobs; Ukraine is not hacking our elections although Russia has and is; a non-negligible number of Americans actually understand the value of the rule of law; wind turbines do not cause cancer; the mainstream media are not Enemies of the People; vaccines do not cause autism; Hillary Clinton is not running a child sex ring; a majority of Americans actually do believe that guns kill people; George Soros has no plan to undermine the American political system.

Continue reading “Fake Fears, Legit Fears . . . and Fears of the Undefinable”

Are Machines Too Dumb to Take Over the World? Part III: Yes.

“Human intelligence is underrated”

Longtime readers of this blog who may have tired of my ruminations about AI imposing absolute reign over humanity should be overjoyed to hear that I am dropping the apocalyptic Artificial Intelligence thread for the foreseeable future.

That’s because this article in New Scientist has put my fears (mostly) to rest. One of the pioneers of Deep Learning, Yoshua Bengio, says “[the machines] don’t even have the intelligence of a 6-month-old.” He is even quoted as saying “AIs are really dumb”—essentially answering my very question. Thanks, Yoshua!

Bengio expresses himself in deceptively simple language, but that’s an exercise in humility, because . . .

Bengio is a recipient of the A.M. Turing Award, the “Nobel Prize of computing,” which gives his opinions great authority. He’s one of the originators of “deep learning,” the approach in which many-layered neural networks, running on powerful hardware, train themselves on data to solve problems. Bengio’s high standing is enough to persuade me not to worry to excess until a contradictory view from an equally qualified AI expert comes out. Most of those sounding alarms about AI Apocalypse are not computer scientists, no matter how smart they are. Elon Musk, for example, discovered that robots in his Tesla factory were making stupid mistakes, and concluded that human intelligence is underrated.

Continue reading “Are Machines Too Dumb to Take Over the World? Part III: Yes.”

Are Machines Too Dumb to Take Over the World? Part II: the Common Sense Factor

Common sense and competence

In Part I of this series, we saw examples of how machines, putatively endowed with “Artificial Intelligence,” commit laughably stupid mistakes doing grade-school arithmetic. See Dumb machines Part I

You’d think that if machines can make such stupid blunders in a domain where they are alleged to have superhuman powers—a simple task compared with, say, getting your kid to school when the bus has broken down and your car is in the shop—then they could never be expected to achieve a level of competence across many domains sufficient for world domination.

Possibly machines are not capable of the “common sense” that is vital to real, complicated life, where we range across many domains, often nearly simultaneously.

A trivial example from Part I: asked for the product of “17 x 4.” (trailing period included), the machine correctly calculates 68. But asked for “17 x 4” without the period, it calculates 69. Stupid, right? A human looks at the discrepancy and says aha! It’s the missing period that threw it off. Getting the correct answer in both cases would require knowing something about punctuation. The period is not a mathematical object, it’s a grammatical object. Spotting the difference requires bridging from math to grammar—another common sense activity we do without consciously missing a beat.
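To make the point concrete, here is a minimal sketch in Python of how a single grammatical character can derail a purely “mathematical” parser. It is illustrative only: it is not the system from Part I, and both functions are invented for the example. The common-sense fix, stripping sentence punctuation before doing the arithmetic, is exactly the bridge from grammar to math described above.

```python
# Illustrative sketch only -- not the system discussed in Part I.
def naive_product(prompt: str) -> int:
    """Treats the prompt as pure math; a stray period breaks it."""
    left, right = prompt.split("x")
    return int(left.strip()) * int(right.strip())

def sensible_product(prompt: str) -> int:
    """Common-sense version: strip grammatical punctuation before computing."""
    cleaned = prompt.strip().rstrip(".!?")
    left, right = cleaned.split("x")
    return int(left.strip()) * int(right.strip())

print(naive_product("17 x 4"))       # 68
# naive_product("17 x 4.")           # ValueError: "4." is not an integer
print(sensible_product("17 x 4."))   # 68 -- the period is treated as grammar, not math
```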

Continue reading “Are Machines Too Dumb to Take Over the World? Part II: the Common Sense Factor”

Are Machines Too Dumb to Take Over the World? Part I: the Duh! Factor

Existential Angst: Nuclear War, Donald Trump, or Artificial Intelligence?

Apart from worldwide nuclear war (unlikely), or Donald Trump grabbing dictatorial powers (not quite as unlikely), my greatest worry is the possibility of Artificial Intelligence (AI) taking over the world—or at least enough of it to doom humanity as we know it.*

Likely? Experts have views as divergent as the sides that disputed whether the notorious DRESS was black and blue or white and gold.  More seriously, people way smarter than me (and perhaps you) have made predictions ranging from AI threatens the elimination of humankind, to AI is the greatest tool for the betterment of humankind that has ever existed. 

(The remainder of this post addresses machine intelligence, which is really a sub-category of AI—but since most people assume AI is equivalent to machine intelligence, I use the terms interchangeably unless specified otherwise.)

Ultimately AI may be a greater threat than Climate Change.** I know Green New Dealers don’t want to hear it, but consider: there have been drastic changes in climate in the geological record—and life, including humans, adapted. Recent Ice Ages are notable examples.  (This is NOT to defend inaction on Climate Change! Especially because the changes we are imposing on the planet, unlike most previous climate shifts, are so devastatingly swift.)

Super-AI, on the other hand, will be utterly unprecedented, and its advent, unlike Climate Change, could come swiftly and with little warning—especially if we continue to pooh-pooh it as an illusory bogeyman.

Continue reading “Are Machines Too Dumb to Take Over the World? Part I: the Duh! Factor”

Robots Get D+ at Tesla: Automation Gone Too Far?

Elon Musk: “Humans are underrated.” Future of human workers looking up for now

In Quartz (May 1st), Helen and Dave Edwards report on the downside of automation on the production ramp of Tesla’s Model 3. “Over-automation” is the culprit behind weekly production of roughly 2,000 vehicles against a target of 5,000, according to a report written by Toni Sacconaghi and Max Warburton. Tesla’s robotic underperformance echoes results from automation at Fiat, Volkswagen, and GM.

Tesla owner, founder, and prime mover Elon Musk tweeted that “humans are underrated.”  Musk is taking time off from planning an invasion of Mars to get the factory back on track (presumably with the help of humans).

Check it out at Robots underperform at Tesla, and why

and Musk admits complacency to CBS News

How robots screw up . . . but won’t continue to do so

Sacconaghi and Warburton observed: “In final assembly, robots can apply torque consistently—but they don’t detect and account for threads that aren’t straight, bolts that don’t quite fit. . . .” (See more in the block quote in the Quartz article, where the authors get in a jibe at Tesla’s quality deficiencies.)
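For a sense of what “detect and account for” would mean in practice, here is a minimal sketch of the kind of torque-versus-angle check the analysts say was missing. It is written in plain Python rather than any real controller language, and the thresholds and readings are invented for illustration; this is not Tesla’s software.

```python
# Illustrative sketch only -- not Tesla's actual software.
# A cross-threaded bolt typically shows high torque long before it has turned
# far enough to be seated, so a simple torque-vs-angle rule can flag the joint
# for human inspection instead of blindly applying the target torque.
def looks_cross_threaded(samples, seat_angle_deg=720.0, max_early_torque_nm=5.0):
    """samples: list of (angle_deg, torque_nm) readings taken during rundown."""
    for angle, torque in samples:
        if angle < seat_angle_deg and torque > max_early_torque_nm:
            return True   # heavy resistance far too early: likely a bad thread
    return False

good_joint = [(180, 1.0), (540, 2.0), (720, 9.0)]   # torque rises only at seating
bad_joint  = [(90, 6.5), (180, 8.0)]                # high torque almost immediately
print(looks_cross_threaded(good_joint))  # False
print(looks_cross_threaded(bad_joint))   # True
```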

Continue reading “Robots Get D+ at Tesla: Automation Gone Too Far?”

Robots Coming for Our Jobs? – Not So Fast

Reassuring News on Automation and Employment?

A recent study led by Melanie Arntz, acting head of the labor markets research department at the Center for European Economic Research,* addressed the specter of massive unemployment due to automation. It concluded that the risk of robots taking our jobs has been exaggerated. Looking forward 10-20 years, it revises downward the estimates of job losses in the U.S. from 38% to 9%. As we know, doomsayers (such as I) have forecast job losses more like 50% by 2040.

Here’s a link to the study, where you can download a free .pdf: Revisiting the Risk of Automation

The paper, released in July 2017, is chock-full of jargon and hairy statistical equations, but the thrust of it is commonsensical: scary scenarios of massive job losses** fail to take into account what the authors call “the substantial heterogeneity of tasks within occupations” [emphasis mine] “as well as the adaptability of jobs in the digital transformation.” (I take this language from the abstract, which nicely encapsulates the study and findings in the nine pages that follow.)

These findings stem from an approach that distinguishes between work analyzed at the level of whole occupations and work analyzed at the level of individual jobs and the tasks they actually comprise.
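A toy calculation shows why the two approaches diverge. The numbers below are invented for illustration and are not the study’s data or method: the point is only that flagging whole occupations as automatable sweeps in many individual jobs whose own task mix is mostly non-automatable.

```python
from collections import defaultdict

# Toy data, not from the Arntz study: each entry is (occupation, share of that
# particular job's tasks that are automatable). Jobs within the same
# occupation can have very different task mixes.
jobs = [
    ("bookkeeper", 0.9),
    ("bookkeeper", 0.9),
    ("bookkeeper", 0.4),  # same occupation, but mostly client-facing tasks
    ("driver", 0.9),
    ("driver", 0.6),
]
THRESHOLD = 0.7  # "high risk" above this share of automatable tasks

# Occupation-level: average the task shares per occupation, then flag every
# job in any occupation whose average crosses the threshold.
by_occupation = defaultdict(list)
for occupation, share in jobs:
    by_occupation[occupation].append(share)
risky_occupations = {occ for occ, shares in by_occupation.items()
                     if sum(shares) / len(shares) > THRESHOLD}
occupation_level = sum(occ in risky_occupations for occ, _ in jobs) / len(jobs)

# Job-level: judge each job by its own task mix.
job_level = sum(share > THRESHOLD for _, share in jobs) / len(jobs)

print(f"occupation-level estimate of jobs at risk: {occupation_level:.0%}")  # 100%
print(f"job-level estimate of jobs at risk: {job_level:.0%}")                # 60%
```

In spirit, though not in these particular numbers, that is the gap between the 38% and 9% figures above.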

Continue reading “Robots Coming for Our Jobs? – Not So Fast”

“They don’t understand how it works.” Information Technology and the Queasy Underbelly of Democracy

Politicians low on the tech learning curve

Alexander Nix, CEO of Cambridge Analytica and chief architect of the Trump-assisting “defeat crooked Hillary” campaign, commenting on his testimony before the U.S. House Intelligence Committee, said, “They’re politicians, they’re not technical. They don’t understand how it works.”

The exploits of Cambridge Analytica in suppressing votes and unleashing torrents of misinformation and flat-out falsities upon the data rivers of social media got (as usual, excellent) coverage by The Guardian in this piece dated March 21, 2018: Cambridge Analytica’s Assault on Decency. See it for more on Nix, the Facebook data breaches, and the “crooked Hillary” campaign.

This echoes a theme emerging from previous U.S. Congressional hearings dealing with social media: politicians are way out of their depth in advanced information technology. As Nix says, they simply do not understand how it works.

Continue reading ““They don’t understand how it works.” Information Technology and the Queasy Underbelly of Democracy”

Superintelligence: Proceed with Caution

Superintelligence: a Book, an Hypothesis, a Warning

After having earlier dismissed Artificial Intelligence as a bogeyman, I confess to being deeply frightened by the book Superintelligence: Paths, Dangers, Strategies (2014).

The book’s author is Nick Bostrom, director of the Future of Humanity Institute and director of the Strategic Artificial Intelligence Research Centre at the University of Oxford.  You can view his academic creds on Wikipedia. There’s an excellent profile of him—highly recommended—in The Guardian* at Guardian profile of Nick Bostrom

If you’ve heard much about Nick Bostrom or the Future of Humanity Institute, what follows could be a rehash. But I’ll plow ahead even though this book is four years old, for what it may be worth.

In the first paragraph, The Guardian puts the scope of Bostrom’s concerns this way: “Notably: what exactly are the ‘existential risks’ that threaten the future of our species; how do we measure them; and what can we do to prevent them? Or to put it another way: in a world of multiple fears, what precisely should we be most terrified of?”

The Guardian’s piece identifies Bostrom’s key themes, and is so informative (including telling nuances such as Bostrom’s finicky diet and germ phobia) that I have little of substance to add on the man himself, but the following is my take on the most salient messages from his signature work, Superintelligence.

Continue reading “Superintelligence: Proceed with Caution”