Robots Coming for Our Jobs? – Not So Fast

Reassuring News on Automation and Employment?

A recent study led by Melanie Arntz, acting head of the labor markets research department at the Center for European Economic Research,* addresses the specter of massive unemployment due to automation. It concludes that the risk of robots taking our jobs has been exaggerated. Looking forward 10-20 years, it revises the estimate of job losses in the U.S. downward, from 38% to 9%. As we know, doomsayers (such as I) have forecast job losses more like 50% by 2040.

Here’s a link to the study, where you can download a free .pdf: Revisiting the Risk of Automation

The paper, released in July 2017, is chock-full of jargon and hairy statistical equations, but the thrust of it is commonsensical: scary scenarios of massive job losses** fail to take into account what the authors call “the substantial heterogeneity of tasks within occupations” [emphasis mine] “as well as the adaptability of jobs in the digital transformation.” (I take this language from the abstract, which nicely encapsulates the study and findings in the nine pages that follow.)

These findings stem from an approach that distinguishes between occupation-level work and job-level work.

Here’s an example quoted from the study: “Consider Numerical and Material Recording Clerks (ISCO08=43), for which our occupation-level estimates suggest a high (74.4%) risk of automation. According to the data, many clerks of this profession specialize in niches that involve non-automatable tasks such as presenting, planning or problem solving. Taking the large and heterogeneous range of their tasks into account suggests that only 18.2% of them actually face a high risk of automation.”

So here’s an occupation (numerical recording) that superficially appears quite automatable (after all, it’s only numbers), but in practice what the clerks actually do requires communication skills, judgment, imagination, and flexibility at a level we cannot expect of machines. Number crunching by machines, as fast and accurate as it is, occurs within a workplace where human interactions put numbers in context. It is one “job” within a nexus of human jobs that give value to the automated task.
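To make the distinction concrete, here is a toy sketch in Python. All the numbers are invented for illustration (they are not from the paper): an occupation-level estimate stamps every worker with the occupation’s single risk score, while a job-level estimate looks at each worker’s own mix of tasks.

    # Toy illustration of occupation-level vs. job-level automation risk.
    # All figures below are made up; they are NOT from Arntz et al.

    # Occupation-level view: one risk score for the whole occupation.
    OCCUPATION_RISK = 0.744
    HIGH_RISK = 0.70  # the usual "high risk" threshold in this literature

    # Job-level view: each worker's own (hypothetical) share of time
    # spent on automatable routine tasks.
    routine_shares = [0.95, 0.40, 0.30, 0.85, 0.20, 0.55, 0.25, 0.10, 0.90, 0.35]

    # Occupation-level: every worker inherits the occupation's score.
    occ_level = len(routine_shares) if OCCUPATION_RISK >= HIGH_RISK else 0

    # Job-level: only workers whose own task mix crosses the threshold.
    job_level = sum(1 for share in routine_shares if share >= HIGH_RISK)

    print(f"Occupation-level: {occ_level}/10 workers at high risk")  # 10/10
    print(f"Job-level:        {job_level}/10 workers at high risk")  # 3/10

The same ten workers, the same occupation: whether you count tasks or occupations changes the headline number dramatically, which is the paper’s central point.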

The future through rose-colored glasses – dimming

This paper by Arntz and her collaborators sounds heartening. But it sounds against a drumbeat of expert expectations and studies that have produced less rosy forecasts on the human-jobs-versus-automation front, forecasts with heavy political implications. The Brookings Institution surveyed several studies on the topic, and its report ends with a warning that even “small increases in unemployment or underemployment have an outsized political impact.” Destabilization of employment may lead to authoritarian governments. See the Brookings article on the impact of automation on employment.

What the Arntz-led report seems to lack is an appreciation of the accelerated pace of developments in machine learning. In “Deep Learning,” the form of Artificial Intelligence that most resembles human intellectual development, machines are given not hard-coded instructions but a set of training data and a goal. They are then set free to “think” for themselves—and in an increasing number of realms, their thoughts outstrip human abilities.

Perhaps the best publicly known example of deep learning is AlphaGo, the system that won its first game of Go—a more difficult game than chess—against the world champion, after having observed thousands of hours of human play. Those thousands of hours (absorbed in a matter of minutes) were the training data. (AlphaGo then played against itself countless times.) Given the goal to win, AlphaGo’s victory was inevitable, if not in that first game, then soon thereafter in subsequent games, once enough data were amassed.

Data + Goal + Complex Networks -> Deep Learning. (“Complex networks” refers to both hardware and software. Search “deep learning” on the web for how it is implemented. And be afraid.)
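As a minimal sketch of that recipe (my own toy example, not code from AlphaGo or any system discussed here): a few lines of Python with NumPy can learn the XOR function from four data points and a goal, with no hand-coded rules about XOR anywhere in the program.

    # Data + Goal + Network, in miniature: a tiny net learns XOR.
    import numpy as np

    rng = np.random.default_rng(0)

    # Data: inputs and desired outputs (the "training data").
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Network: one hidden layer of 4 units (a "complex network" in miniature).
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: compute the network's current answers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Goal: shrink the error between the answers and the targets.
        grad_out = (out - y) * out * (1 - out)   # backprop through loss + sigmoid
        grad_h = (grad_out @ W2.T) * h * (1 - h)

        # Update: nudge the weights toward the goal.
        lr = 1.0
        W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
        W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

    print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]

Nobody tells the network what XOR means; it is given examples and a goal, and the weights drift until the answers come out right. Scale the data, the goal, and the network up by many orders of magnitude and you have the systems discussed above.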

Expert observers were puzzled by some of AlphaGo’s moves, which seemed contrary to best strategy as humans saw it.*** This rings a familiar chord for those who follow the progress of artificial intelligence: the inscrutability of AI “thought.” Here’s a chilling observation by David Gunning of the U.S. Defense Advanced Research Projects Agency (DARPA): “These things think in a very foreign way. They use bizarre mathematical logic that is very alien to us.” (Quoted in the 14 April edition of New Scientist.) I apologize for playing yet another variation on this theme, twice raised in previous posts, but it doesn’t hurt to keep driving the point home: the smarter machines get, the less we understand them.

The automated “deep learners” find efficient and often novel solutions, and build further on those by programming themselves. This ability points to leaps forward in machine capabilities in qualitative as well as quantitative terms (we knew they were fast, but now we’re also finding out that they are agile, and growing ever more so).

Human aids to machine learning just keep coming

Besides machines teaching themselves, human developers enhance deep learning in (at least) two ways:

(1) Adding datasets. Given the volume of Big Data available over the internet, a “deep learner” can instantaneously mine petabytes of information in almost every sphere of knowledge, and the trainers (both human and machine) can furnish datasets that cannot be found on the web, many of them drawn from high-tech research.

(2) Adding (human-made) software. The astonishing versatility of Alexa owes much to a rich buffet of apps, added to by the day. As PC Magazine puts it: “Think of Alexa as the cloud-based brain [emphasis mine] behind the Echo. . . . It gets smarter and more powerful as Amazon adds features. . . .” Amazon’s opening up Alexa to all developers has resulted not only in features but in skills that extend Alexa’s range of competencies. Examples of skills are scheduling, filtering email, editing Google documents, and much more. In March 2017, according to PC Magazine, there were already 10,000 skills. There is even an Alexa “skill” for finding skills. (For the curious, a sketch of what a skill’s code looks like follows below.)
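Here is a minimal sketch of a custom skill’s backend, built with Amazon’s ask-sdk-core package for Python. The skill itself and the intent name (“FindMeetingIntent”) are hypothetical, invented to illustrate the scheduling example above.

    # A minimal custom-skill backend using Amazon's ask-sdk-core for Python.
    # "FindMeetingIntent" is a hypothetical intent for a toy scheduling skill.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_request_type, is_intent_name


    class LaunchHandler(AbstractRequestHandler):
        """Responds when the user opens the skill."""
        def can_handle(self, handler_input):
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            speech = "Welcome. Ask me when your next meeting is."
            return handler_input.response_builder.speak(speech).response


    class FindMeetingHandler(AbstractRequestHandler):
        """Handles the scheduling question itself."""
        def can_handle(self, handler_input):
            return is_intent_name("FindMeetingIntent")(handler_input)

        def handle(self, handler_input):
            # A real skill would query a calendar API here; this reply is canned.
            speech = "Your next meeting is at ten o'clock tomorrow."
            return handler_input.response_builder.speak(speech).response


    sb = SkillBuilder()
    sb.add_request_handler(LaunchHandler())
    sb.add_request_handler(FindMeetingHandler())
    handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda

Each of those 10,000 skills is, roughly, a bundle of handlers like these: small pieces of human-made software snapped onto the cloud-based brain.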

This is not even to discuss other advanced and now familiar machine competencies, such as facial recognition and translation between natural languages. Machines are still imperfect at these tasks, but then there are plenty of people with “face blindness” in the world, and natural-language translation by humans is seldom 100% fluent (as a few awkward English expressions in the paper by Arntz et al. testify).

The skills and competencies described above are arguably only “deep” within narrow domains (playing Go or Jeopardy) and shallow outside them. But at what point will these machines understand what they’re doing well enough to master domains where self-awareness and a theory of mind are required to take over jobs held by humans—let’s say, the task of “presenting,” cited by Arntz in the case of the numerical recording clerks? That point, I suspect, is not far off.

The future through smoky glasses – job displacement slow at first, then accelerating

For the foregoing reasons, I don’t share the optimists’ confidence that robots will fail to take over a third or more of human jobs within 10-20 years, or that the displacement of humans by robots will be more than made up for by growing numbers of new, higher-skill jobs.

Ironically, one of the tightest employment bottlenecks at present—particularly in the U.S.—is in the very tech fields where many of the brightest minds are directly or indirectly involved with the development of Artificial Intelligence.  In other countries where STEM education is strongly bolstered by state and private investment, that bottleneck is opening up.  The effect will be to ratchet up the advances in AI to the point where robots will be—without any evil intent on their part—shouldering us aside in the world of work.

Recommended reading: Martin Ford’s Rise of the Robots. It’s a bit dated, but his overall thesis still holds: he takes the blinders off those who think that the Industrial Revolution’s creation of many new jobs will be paralleled in our time by the I.T. Revolution. He, like many other observers of disruption, says “This time is different.” Yow.

=================== footnotes ===================

 * Center for European Economic Research

 ** Scary, that is, if you think employment is a good thing

 *** For more on AlphaGo’s learning curve, see deepmind.com analysis of AlphaGo’s learning
