Robots Get D+ at Tesla: Automation Gone Too Far?

Elon Musk: “Humans are underrated.” Future of human workers looking up for now

In Quartz (May 1st), Helen and Dave Edwards report on the downside of automation in the production ramp of Tesla’s Model 3. “Over-automation” is blamed for production running at roughly 2,000 vehicles per week against a target of 5,000. Such was the conclusion of a report written by Toni Sacconaghi and Max Warburton. Tesla’s robotic underperformance echoes results from automation at Fiat, Volkswagen, and GM.

Tesla owner, founder, and prime mover Elon Musk tweeted that “humans are underrated.”  Musk is taking time off from planning an invasion of Mars to get the factory back on track (presumably with the help of humans).

Check it out at Robots underperform at Tesla, and why

and Musk admits complacency to CBS News

How robots screw up . . . but won’t continue to do so

Sacconaghi and Warburton observed: “In final assembly, robots can apply torque consistently—but they don’t detect and account for threads that aren’t straight, bolts that don’t quite fit. . . .” (See more in the block quote in the Quartz article, where the authors get in a jibe at Tesla’s quality deficiencies.)

If you’ve been reading my posts on Artificial Intelligence—often referred to as Machine Intelligence—you are familiar with my doubts about claims that machines will occupy less than half of the labor force by 2050. The reason for my doubts, as previously stated, is that developments in AI are proceeding at an exponential rate.

Suppose there’s a 15% improvement in machine capabilities per year; then in 30 years machines will be 1.15^30 ≈ 66 times as capable. This is not to say that there would be a steady progression of 15% per year—there will be periods of slower growth interrupted by sudden leaps. If one of those leaps rises to Superintelligence,* then we’re in an entirely new—maybe disastrous—ball game. At a slower rate of 10% per year, 30 years yields only a 17-fold improvement. In either case, I’d be pretty sure that come 2050, factory robots will make adjustments on their own when threads aren’t straight and bolts don’t fit.
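For the arithmetic-minded, here is a minimal Python sketch of that compound-growth calculation (the 15% and 10% rates are, as noted above, rough assumptions of mine, not measured figures):

```python
# Sanity check of the compound-growth arithmetic in the paragraph above.
# The annual rates are illustrative assumptions, not data.
for annual_rate in (0.15, 0.10):
    improvement = (1 + annual_rate) ** 30
    print(f"{annual_rate:.0%} per year for 30 years -> {improvement:.0f}-fold")

# Output:
# 15% per year for 30 years -> 66-fold
# 10% per year for 30 years -> 17-fold
```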

Why “leaps”? GPUs, for starters.

I admit that my forecast of 15% per year improvement in machine capabilities is pulled out of thin air, based on what I’ve been following in AI developments—reaching such an average depends on the aforementioned leaps, analogous to the “punctuated equilibrium” theory of evolution.

There’s a hardware advance (an evolutionary leap) that partially explains the mushrooming of computer capabilities: the development of Graphics Processing Units (GPUs), which in AI work have superseded the Central Processing Units (CPUs) at the heart of your “common” personal computer.

GPUs are capable of parallel processing, as opposed to the serial processing of a CPU. A CPU performs a single computation at a time very rapidly (serial processing, also referred to as sequential processing), whereas a GPU can perform many computations at once, although each computation is done more slowly than in a serial process. There’s a vivid demonstration of the difference in a video using the example of painting a face.

What this means is that the use of GPUs within neural networks enables Deep Learning** at a scale comparable to the parallel processing of multiple inputs done by a human brain.
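For readers who prefer code to video, here is a rough sketch of the serial-versus-parallel contrast. NumPy’s vectorized operations run on the CPU, not a GPU, so this is only an analogy (and it assumes NumPy is installed): one instruction applied across a whole array at once, versus a loop handling one element at a time.

```python
import time
import numpy as np

N = 10_000_000

# Serial style: one computation at a time, like a single core
# stepping through a loop.
data = list(range(N))
start = time.perf_counter()
doubled_loop = [x * 2.0 for x in data]
print(f"one-at-a-time loop:    {time.perf_counter() - start:.2f} s")

# Parallel style: the same operation issued across the whole array
# at once -- an analogy for a GPU applying one instruction to
# thousands of elements simultaneously.
arr = np.arange(N, dtype=np.float64)
start = time.perf_counter()
doubled_vec = arr * 2.0
print(f"whole-array operation: {time.perf_counter() - start:.2f} s")
```

On a typical machine the whole-array version finishes many times faster, even though both compute exactly the same ten million products.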

A kind of “leap” in GPU capability was recently reported at Nvidia’s 2018 GPU Technology Conference by Jensen Huang, the company’s CEO. He reported a 25-fold speedup in GPUs in a matter of just five years. He proposes that GPUs need a “law of their own,” since they “benefit from advances on multiple fronts: architecture, interconnects, memory technology, algorithms, and more.” As he summed it up: “The innovation isn’t just about chips, it’s about the whole stack.”

See: Huang on GPU progress
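A quick back-of-the-envelope check on that figure: a 25-fold speedup over five years implies the compound annual rate computed below.

```python
# Implied compound annual rate from Huang's reported figure:
# a 25-fold GPU speedup over five years.
overall_speedup = 25
years = 5
annual = overall_speedup ** (1 / years)
print(f"~{annual:.2f}x per year")  # ~1.90x: nearly doubling each year
```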

The synergy between the elements of Jensen Huang’s “stack” multiplies the progress obtained by any one element (say, interconnects). Teamwork and collaboration on all these fronts are facilitated at light speed by telecommunications, wherein a breakthrough in one area often immediately spawns breakthroughs in other areas.

Is there a ceiling to AI?

Artificial Intelligence skeptics have long pointed to the handicap that computers based on CPUs are under in comparison to human brains—the latter, obviously, do a humongous number of things all at once, albeit each thing is done more slowly than a computer could do it.

Take the example of a dog running across your lawn pursued by your neighbor; the dog is barking and the neighbor is shouting. As you observe this, there are multiple demands on your brain occurring simultaneously: (1) the basic task of taking in sight and sound; (2) perceiving that there’s an object moving against a stationary background; (3) recognizing the dog—not just any dog, but out of all possible dogs your neighbor’s dog; (4) recognizing a person—and not just any person, but a person you recognize as your neighbor; (5) tracking the speeds of each and projecting their future positions; (6) hearing and interpreting the “speech” of both dog and human—is the dog playful or aggressive, the human laughing or angry, or both? (7) interpreting the meaning of the scene, such as: the dog could be, as you have seen him do before, chasing a cat—which may further cause you to look for the cat, presenting a whole new visual task, while you simultaneously watch dog and person. . . . Just the task of seeing the dog as separate from the background at all demands massive parallel processing.

For a brief explication of the many-layered task of visual processing, watch this video produced by Khan Academy (note that it is part of a short but fascinating series on vision, listed in the left sidebar, starting with “The structure of the eye” and ending with the video linked here):
Feature detection and parallel processing from Khan Academy

So just observing this simple scene and figuring out what’s going on requires the processing of innumerable inputs. I say “innumerable” (not literally) because I’ve listed only the top-level items—the things of which you are actually conscious—while vastly more parallel inputs feed in from lower levels.

GPUs within many-layered networks promise to help push through the serial-processing bottleneck and emulate the parallel processing of a human brain, while processing each individual signal hundreds of times faster than a human brain can.

Perhaps the most consequential roadblock to a machine’s useful witnessing of the neighbor’s-dog scene is the extraction of meaning—it’s not just a dog and a person, it’s those particular beings expressing particular moods, of which there is a history that comes to you almost instantaneously, enabling conjectures as to their purposes and what the outcome of the chase may be (the dog quits at the edge of the yard, or plunges through shrubbery into yet another neighbor’s yard, or slows down at the call of its owner). So far, there’s no AI in existence that I know of capable of even coming close to achieving step 7 of the scene described above.

The greatest challenge to the Artificial Intelligence(s) of the future: making sense of data without rules

So far, the signal achievements of AI have all come where there are rules of behavior or operation, and especially within the boundaries of whatever data we give “intelligent” machines access to. We have AIs that diagnose illness and interpret X-rays with accuracy exceeding that of humans; AIs that can beat the best humans at chess and Go; AIs that can translate natural languages; AIs that can identify individual faces in a crowd. These capabilities are amazing, but they fall within narrow domains, both as to the data they work with and the sense they make of it. An AI can pick out a face in a crowd (not with 100% accuracy, by the way), but what are the other people in the crowd doing? What’s their purpose, singly or jointly? Are they holding signs, and what is the content of the signs? Are they chanting? What’s their mood? Many of these questions can be answered by an adult human of moderate intelligence. Perhaps more importantly, a person would be aware of the significance of these questions even without being able to answer them, and aware of how their answers, taken together, might add up to a good intuition of what the crowd was all about—and of the role of the person with the identified face in it.

The problem for humanity is that, with the increasing complexity of neural networks and Deep Learning, it becomes harder and harder for us to understand the AI “mentality.” In past posts, I have repeatedly touched on the difficulty even the brainiest programmers face in comprehending how AIs come to decisions—most famously in the case of AlphaGo, which beat the world Go champion using successful strategies that puzzled expert commentators.

The danger: AI thinking may not be predictable

At some point we may find that a highly intelligent AI, fed data drawn from the internet on every possible subject, decides that the rules we have given it add up to some overarching system of rules that it understands and we cannot—or that, if we did understand it, we would have chosen to modify in a more human-friendly direction (“I didn’t really mean that”—oops, too late). The example of the space program is often invoked by those concerned with conflict between the purposes of AI and our own. The AI, knowing that Earth will become uninhabitable in a few billion years, even for machines, and knowing how long it will take to reach other worlds hospitable to colonization, could decide that the space program overrides all other priorities and that the maximum of resources should be put into it. Given that priority, what would be the point of keeping humans around?

The space program is one obvious example, but who knows what an AI capable of unlimited complexity of thought might conclude?  Perhaps the first AIs themselves will not be able to anticipate what their successors—which they will build at an accelerated rate—will come to think, value, and act upon.

We should not fear that machines will wipe out humanity altogether. Self-interest on the machines’ part dictates keeping humans and other biological life at least minimally viable, for research and observation (one does shudder to think of what kinds of experiments machines will run on us; will they conduct experiments in pain tolerance?). Machines will reckon that there’s some chance, however small, that they may encounter biological beings on other worlds whose powers far exceed humans’—enough, possibly, to thwart the spread of machine intelligence—and they will want to be prepared for that.
