A recent article in The Economist examined automation’s capacity to displace workers. If techno-optimists are to be believed, automation is a net win: They say more jobs will actually be created by robots than destroyed by them. But that reasoning doesn’t satisfy the pessimists, who point to the expanding array of tasks carried out by machines as evidence that robots will eventually be doing most people’s jobs. SI interviewed Saeculum Research Founder and President Neil Howe to find out where he weighs in on the debate—and to learn how this technology will impact the nation’s economic and regulatory future.
SI: Neil, we’ve heard a lot about robotics and how they’re transforming the workforce. What’s happening in this sector?
NH: Companies are spending more on this technology than ever. In 2000, total global spending on robotics was just above $7 billion. Certainly impressive, but these numbers pale in comparison to what’s happening today. This year, global robotics spending will reach roughly $27 billion. And the future looks even brighter: By 2025, companies and consumers worldwide could very well be spending $67 billion annually on robotics technology.
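Those spending figures imply a remarkably steady growth curve. As a back-of-the-envelope check (a sketch assuming "this year" refers to 2015, which the interview does not state), the compound annual growth rate works out to roughly the same pace in both periods:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# Figures from the interview; the 2015 midpoint is an assumption.
past = cagr(7e9, 27e9, 15)     # 2000 -> 2015, ~9.4% per year
future = cagr(27e9, 67e9, 10)  # 2015 -> 2025, ~9.5% per year

print(f"2000-2015: {past:.1%} per year")
print(f"2015-2025: {future:.1%} per year")
```

In other words, the $67 billion projection simply extends the historical growth rate rather than assuming an acceleration.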
SI: Where is all of this money going?
NH: First and foremost, robots have become a fixture of industrial manufacturing. For decades, robots have performed functions like welding and material handling that were once the exclusive domain of assembly-line workers. Today, these systems are more cost-efficient than ever. Plummeting prices for sensors and microchips mean that even small companies can afford their own machines—and that the technology is no longer limited to sprawling automotive plants with their FANUC industrial robots.
Another robot-heavy sector is the military. Unmanned vehicles, long used in the armed forces, have taken on a magnified role in military operations thanks to their capacity to cut down troop casualties. Just last year, the Department of Defense allotted nearly $24 billion over a five-year period to drones and other unmanned vehicles. The military is also using robots to carry heavy loads, rescue fallen soldiers, and perform non-combat tasks such as clearing forest for new training grounds.
Just as impressive—if not more so—have been innovations in the commercial sector. In medicine, for example, surgeons increasingly operate alongside robot assistants such as da Vinci, a human-guided machine that allows for precise, minimally invasive surgery. Meanwhile, in construction, machines like Gramazio & Kohler’s R-O-B bricklaying robot can use preloaded algorithms to build stable, stylish building facades. Companies are also turning to robots to streamline their internal operations. Last year, Amazon deployed more than 15,000 Kiva robots at its warehouses to save employees from walking up to 15 miles in a single shift. Though acquiring Kiva Systems cost the company $775 million, Amazon has the massive scale to make the investment worth it.
SI: What makes these robots different from those of, say, 10 years ago?
NH: In short, they’re smarter. Today’s robots are nothing like the early, mindless machines that performed a single rote function for hours on end. These robots come equipped with “smart” sensors and algorithms that allow them to mirror—even improve upon—human decision-making. For example, LettuceBot, an agricultural trimming machine, is smart enough to categorize plants in a split second, allowing it to trim away only the unnecessary parts without damaging the head of lettuce. Machine intelligence isn’t just revolutionizing the crop fields, either. At business processing company Xchanging, a robot called Henry completes data entry and clerical work in a fraction of the time of a human operator.
Look no further than Google’s strides on its self-driving car for an example of how far machine intelligence has come. Through sophisticated software algorithms, Google’s cars are able to classify objects around them by type—other cars, pedestrians, cyclists—and then predict what those objects are likely to do in any given situation. Google cars understand, for instance, that arm signals are a cyclist’s way of signaling an upcoming turn. And the project isn’t some pipe dream: The company hopes to have a consumer product ready for market within four years.
Smart robots have the capacity to transform even the highest-touch sectors. UC Berkeley researchers, for instance, are training robots to identify and remove cancerous tissue completely autonomously. In the future, a robot may very well be able to take on a surgeon’s duties entirely—even for the most delicate procedures.
SI: Just how humanlike are these machines becoming?
NH: Very. It’s astounding how humanlike some of today’s more advanced models are. For instance, Baxter, a product made by Rethink Robotics, can be taught new tasks through demonstration. A human operator simply manipulates the robot’s arms in a particular sequence—for example, reaching them out to grab a package and then extending them to a high shelf—and Baxter will “know” how to do the task.
Even more impressive—or eerie, depending on how you look at it—is Baxter’s ability to convey emotion. The robot comes with a monitor that displays a humanlike “face,” which changes based on the task at hand. If Baxter is focused on completing a job, the face will convey concentration. On the other hand, if Baxter doesn’t understand something, its face will take on a quizzical look that signals to nearby humans that it needs help.
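The demonstration-based teaching described above boils down to recording a sequence of poses and replaying it. Here is a minimal sketch of that record-and-replay idea; the class, method names, and pose format are invented for illustration and are not Baxter’s actual programming interface:

```python
class DemoLearner:
    """Toy learning-by-demonstration: store the arm poses a human
    physically guides the robot through, then repeat the sequence.
    All names and the pose format are hypothetical."""

    def __init__(self):
        self.trajectory = []

    def record(self, pose):
        """Called repeatedly while the operator moves the arm by hand."""
        self.trajectory.append(pose)

    def replay(self):
        """Return the stored sequence of poses to execute the task."""
        return list(self.trajectory)

robot = DemoLearner()
robot.record({"joint_angles": (0.0, 0.5, 1.2), "gripper": "open"})
robot.record({"joint_angles": (0.3, 0.9, 0.4), "gripper": "closed"})
print(robot.replay())
```

Real systems generalize beyond raw replay—smoothing trajectories and adapting to object positions—but the core workflow is the same: demonstrate once, then execute.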
SI: When you say “robot,” how broad is that term? Does it have to have two arms and two legs? Does it need to be a physical thing at all?
NH: Good question. Smart robotics is part of a far broader movement toward artificial intelligence (AI) in general. As we’ve discussed before (see: “The Age of Artificial Intelligence”), AI refers to both machines and software that simulate human intuition—meaning that robotics is only half of the equation. Perhaps less, considering that, unlike robots, software is relatively cheap to implement and costs comparatively little to maintain.
It’s not just machines like Baxter that should raise eyebrows—it’s also software products like the ones offered by Nuance, which listen to medical reports and match the physician’s dictation with the appropriate Medicare reimbursement code.
SI: It sounds like AI at large could do a wide variety of jobs. What kinds of work will be most affected by machine intelligence?
NH: It’s my belief—and I’m not alone—that middle-skill, middle-wage jobs have the most to lose. In The Lights in the Tunnel, Martin Ford breaks down all jobs into three distinct classifications: “hardware” jobs, “software” jobs, and “interface” jobs. Hardware jobs require at least some investment in robots to automate—these are low-skill, low-wage jobs such as those of assembly-line workers and automotive mechanics. Software jobs require only algorithms to automate—clerical work, for example—and we associate them with higher-skilled, higher-wage knowledge workers. And interface jobs rely on a blend of technology and human labor, such as a loan officer who has to collect, collate, and fax myriad paperwork.
Out of these positions, the ones with the highest potential for automation are not hardware jobs, but software jobs—precisely the type of work long thought to be immune to automation. As cognitive scientist Steven Pinker famously observed, “The main lesson of 35 years of AI research is that the hard problems are easy and the easy problems are hard.” We humans hold the conscious mind in very high esteem. But for all its capabilities, the work of the conscious mind has proved the easiest to simulate. We’ve taught a computer to play chess better than the most brilliant and experienced human player, after all. But we still can’t get a robot to walk—something most of us do without thinking—with anything close to the grace of an ordinary human being.
If a knowledge job consists of applying a set of rules, however complicated, an algorithm can learn to do it better than you can. To borrow again from Pinker, it’s the stock analysts and radiologists whose jobs are most at risk. The gardeners and cooks are safe for the time being.
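The point about rule-following jobs can be made concrete with a trivial example. The rules below are invented for illustration, but they show why a job that reduces to a written-down decision procedure—like a loan officer’s eligibility check—is so easy to hand to a program:

```python
def loan_decision(income, debt, credit_score):
    """Toy rule-based eligibility check of the kind interface jobs
    encode. All thresholds are hypothetical."""
    if credit_score < 600:
        return "deny"                 # fails the credit floor
    if debt / income > 0.4:
        return "deny"                 # debt-to-income ratio too high
    return "approve"

print(loan_decision(income=60_000, debt=12_000, credit_score=700))
```

Once the rules are explicit, the program applies them instantly, consistently, and at zero marginal cost—which is exactly what makes such work automatable, however skilled the human who used to do it.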
SI: Which employment categories are the most difficult to automate?
NH: One could argue that interface jobs are the most resistant to automation, because they would be clunky to outsource completely to a machine. In fact, though, these jobs represent much of the work most susceptible to automation—or already automated to a degree. These are the cashiers, loan officers, and travel agents whose jobs are largely cognitive and routine. Today, we have self-service kiosks that let shoppers check themselves out, algorithms that determine whether someone is eligible for a loan, and price-comparison websites where travelers can find the best deal on airfare and hotels. Generation X consumers in particular, who have always enjoyed cutting out the middleman wherever possible, are likely driving demand for these automated interfaces.
The toughest jobs to automate actually fall on the low-skill hardware end of the spectrum—for two reasons. The first is the enormous monetary investment needed to implement the types of machines that can do these tasks. The second is the sheer skill required of such machines: Far more difficult to automate than cognitive processes are the basic functions that we all take for granted every day—grasping, talking, recognizing faces, the list goes on and on.
SI: What do all of these gains in machine intelligence mean for human workers themselves? Are robots truly replacing people’s jobs?
NH: It depends on whom you ask. First, let me say that this isn’t the first time the issue of job displacement through technology has been raised. During the early Industrial Revolution, many factory workers—notably the Luddites in the 1810s—feared that industrial weaving machines would make their hand-weaving skills obsolete.
The Luddites, in fact, were the first “techno-pessimists,” a term describing those who say that algorithms and intelligent machines simply take away jobs. These individuals cite a popular Oxford Martin report, penned by economists Carl Frey and Michael Osborne in 2013, which predicts that 47 percent of U.S. jobs could be replaced or significantly affected by machines within a decade or two. Similarly, in last year’s Pew Future of the Internet report, nearly half of surveyed experts believed that advances in technology would eliminate more jobs than they create in the coming years.
Some experts are definitely more pessimistic than others. Robert Cannon, for example, predicts that “everything that can be automated will be automated,” and that the only thing holding the machines back is their high cost of implementation in certain professions. Martin Ford says that the economy is a zero-sum game, where every job gained by a machine means one lost by a human.
SI: Does everyone share this mindset?
NH: Certainly not. On the other side of the debate are the “techno-optimists,” who say that instead of replacing their jobs, AI systems are working with people, augmenting their capabilities and enhancing their productivity. They believe algorithmic programs cut out knowledge workers’ rote activities and allow them to focus on aspects of their job that only a human can do.
Just as the Internet has helped knowledge workers, the burgeoning field of “cobotics” is beginning to help manual laborers. So-called cobots come equipped with sensors and safety mechanisms that shut the system down if a human comes too close—enabling machines and humans to work side by side without the risk of accidents.
This pro-tech group tends to be heavy on Silicon Valley personalities like Erik Brynjolfsson and Andrew McAfee. Also in their camp is Tyler Cowen, who, in Average Is Over, describes the emerging sport of “freestyle” chess, in which each “player” consists of a team of humans combined with their computer software. Cowen himself says he often sets two algorithm-based chess computers to play against each other—only he joins in on the action, teaming up with one of the machines. In a passage with symbolic implications, Cowen states that man-plus-machine is able to outwit just the machine four out of five times.
SI: So let’s talk about timing. When is automation going to affect workers on a large scale? Or is this already happening?
NH: That’s the mystery. On the one hand, one can point to a depressed labor market with many people out of the workforce throughout the high-income world—which suggests that machines may already be putting people out of work. But on the other hand, we see no evidence of this in the productivity numbers.
If all of these machines are taking jobs away from people because they’re cheaper and more productive, that should manifest itself as a real boost in total output per hour of human labor. But we haven’t seen any of that. Since 2005, growth in U.S. output per worker hour has dropped to a historic low—a meager 1.4 percent per year. Since 2010 it has sunk to just 0.4 percent per year, and over the last three years it has actually been negative.
SI: How do you explain this complete disconnect between automation and productivity?
NH: The question of why advances in computing power at large haven’t boosted productivity is a vexing one to many economists. All the way back in 1987, economist Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.”
One way out is simply to say that we’re still at an early stage of this technological transition. Economists Paul David and Gavin Wright argue that general-purpose technologies—foundational discoveries that affect an entire economy—are particularly slow to pay off. The famous example is electricity, which did not produce major productivity gains until the 1920s, decades after its commercial introduction. Techno-optimists Brynjolfsson and McAfee echo this sentiment in their glowing account of the AI boom, The Second Machine Age.
But some disagree with this line of reasoning. First of all, we began seeing the emergence of revolutionary computing and AI technology in the 1990s. By this logic, we should be seeing some measure of improvement by now—and productivity certainly shouldn’t be slowing down.
Second, as economist Larry Summers has pointed out, if a workplace AI revolution is happening at all, we should either be experiencing huge demand for employment (if we’re introducing these machines alongside humans), or we should be seeing productivity gains (if they’re putting people out of work). But under no scenario should we be experiencing both weak employment and decelerating productivity.
SI: Are there any other explanations for the productivity paradox?
NH: Yes. One argument particularly popular among techno-optimists is that our numbers are wrong. The Silicon Valley crowd believes that our ways of measuring productivity are antiquated, and that the government has no way of taking into account technological advances such as the free Internet and GPS that vastly improve our quality of life. Some say we understate dollar gains in output; others say we overstate inflation.
I don’t particularly buy this reasoning. In reality, we’ve gotten far better at measuring productivity over the years than some think. For example, the Bureau of Labor Statistics continually refines its productivity measures using quality-adjusted price data that account for improvements in goods and services (such as the growth in computing speed). I think that the Internet, while it certainly improves quality of life, doesn’t do all that much to boost productivity in ways that the BLS does not measure.
If techno-optimists feel that technology is helping our quality of life, then why, according to Pew, do vastly fewer Americans identify as middle class today—and more see themselves as lower class—than in 2008? Go survey families who have been making an inflation-adjusted $50,000 over the past decade, and ask them whether their standard of living has actually been rising.
SI: OK, so what do you think gives rise to the disconnect?
NH: There is another explanation: Baumol’s cost disease, which holds that the sectors of the economy employing a growing share of the workforce are the ones with the slowest productivity growth.
By this line of reasoning, the tech sector, the one most amenable to productivity gains, employs a shrinking share of workers as it advances because fewer people are needed to do the same job. Meanwhile, the sectors most resistant to automation employ more and more people. Case in point: How do you measure, much less improve, the labor productivity of a social service worker who helps disadvantaged youth?
It follows that, while robotics may very well be revolutionizing the tech sector, this impact only affects a small—and shrinking—share of workers.
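The composition effect behind this argument can be illustrated with a toy calculation (all numbers invented): aggregate productivity growth is roughly the employment-weighted average of sector growth rates, so shifting workers from a fast sector into a slow one drags the aggregate down even if neither sector’s own growth rate changes.

```python
def aggregate_growth(sectors):
    """Employment-share-weighted average of sector productivity growth.
    sectors: list of (employment_share, growth_rate) pairs."""
    return sum(share * growth for share, growth in sectors)

# Hypothetical two-sector economy: (employment share, productivity growth)
before = [(0.30, 0.04), (0.70, 0.005)]  # tech 30%, services 70%
after  = [(0.10, 0.04), (0.90, 0.005)]  # workers shift into services

print(f"before the shift: {aggregate_growth(before):.2%}")
print(f"after the shift:  {aggregate_growth(after):.2%}")
```

In this sketch, aggregate growth falls from 1.55 percent to 0.85 percent purely because of where the workers are, which is exactly Baumol’s point.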
SI: Would you say that you align more with the techno-optimists or the techno-pessimists?
NH: On the one hand, I part ways with the optimists in that I believe the statistics. I do think AI will usher in a major economic transition, though it will be a slower process than most people realize. Why? Because the low-hanging fruit has already been plucked: the jobs that were easy to automate have been automated. Replacing service-oriented tasks will take far longer.
I don’t, on the other hand, have much patience for the argument that having machines create products at low cost is a net loss for our society. It raises some tough questions about how to properly train and otherwise invest in our workers—but the ability to produce more with less labor is an unambiguous gain because it makes the pie bigger.
SI: Overall, you sound very optimistic about how this will shake out. Are there any areas of concern going forward that we haven’t discussed?
NH: There is: the way in which intelligent machines in the digital age may be helping to push the entire economy toward increased market concentration. As we have discussed elsewhere (see: “Are Conglomerates Back in Fashion?”), industries are increasingly being dominated by fewer and fewer companies. We have noticed a number of indicators pointing in this direction—the rising GDP share of Fortune 500 firms, the declining number of public companies, and falling rates of startup creation, to name a few.
To be sure, some forces having nothing to do with technology are driving this trend, such as the confluence of regulation and lobbying that preferentially benefits large companies. But the tech revolution is also pushing us in this direction by creating firms with near-infinite economies of scale. Our old idea of a firm held that there was a size at which a company was most efficient, beyond which it was infeasible to grow. Now, firms become ever more efficient the larger they get.
I’d argue that this is what we’re beginning to see today with tech giants like Google, Amazon, and Facebook. As automation and machine intelligence progress, it will be precisely these companies that reap the most benefits.
SI: What are some symptoms of this shift toward conglomerates?
NH: Anticompetitive business practices—buyouts, predatory pricing, product tie-ins and crossovers, companies opting to sit on innovation rather than foster it, you name it.
We could even see a scenario in which the tech sector’s growth turns into stagnation. The sector-leading companies could afford to buy out any competition—either to slap their name on innovative products and services produced by the smaller company, or simply to let a competitor die a slow death—instead of investing in innovation of their own. (See: “Where Have All The New Businesses Gone?”)
I can’t help but think of Facebook’s acquisition of mobile messenger WhatsApp as a prime example of this scenario. Though undoubtedly Facebook has plans for its $22 billion prize, including making its own Messenger app resemble WhatsApp, the company is in no rush to combine the two services.
I predict that, for this very reason, U.S. antitrust enforcement will be one of the largest growth areas in the years ahead. Right now, tech companies’ biggest antitrust headache is playing out overseas, where European officials have been cracking down on companies such as Google, Apple, Amazon, and Microsoft—but the battle could spill onto domestic soil before long, and the tech sector, which has until now enjoyed free rein, could be in trouble.
SI: Interesting. How long do you think it will be before these predictions about the automated future come to life?
NH: It’s difficult to predict the pace of this transition. If history tells us anything, it’s that our predictions about the speed of tech innovation have repeatedly been far too sanguine.
In 1957, for example, AI pioneer Herbert Simon predicted that machines would soon be able to do any work that a human could do. Among his prophecies, he asserted that within 10 years, a computer would be the world’s best chess player. In reality, Deep Blue wouldn’t defeat world champion Garry Kasparov until 1997—40 years after the prediction.
Given historical precedent, it’s easy to call into question today’s glowing predictions about the machines of tomorrow—as well as reports that nearly half of the workforce will be automated within two decades.
SI: So where is this all going? Where do observers believe that automation will lead us as a society?
NH: There are certainly some fantastic predictions out there. In Ray Kurzweil’s The Singularity Is Near, he describes how exponential gains in AI and robotics technology will lead to an event he calls the Singularity, when the speed at which intelligent machines analyze and improve themselves accelerates exponentially—and escapes the ability of humans to control or comprehend. At that point, he argues, people will have the capacity to upload their brains onto machines, effectively merging their “wetware” with the “software” and “hardware” the robots are creating.
The interesting part about Kurzweil’s prediction is that he believes the Singularity will occur in the year 2045. That’s right—precisely when some analysts believe machines will be taking full control of the workforce, computers will be advancing faster than we could ever keep up with. In other words, you’d better upload your consciousness to the machines in time, or you will be forever lost.
As you can see, the stakes are high: this scenario would mean the end of human history as we know it. Some, like Kurzweil and Jeremy Rifkin, believe it will all be for the best. In their utopian future, machines make sure that society runs perfectly and precisely, and humans enjoy the benefit of evolving into higher forms.
The other camp is much grimmer. Dystopian depictions of the future resemble a Terminator movie, in which humans take one final stand against intelligent machines with the entire fate of the world at stake. One particularly striking scenario involves self-replicating nanorobots one day taking over the planet, turning all matter on Earth—including humans—into grey goo in the process.
SI: So should we be happy or not about what the future holds?
NH: I choose to be positive in my outlook on this new era we’re heading toward.
First of all, I don’t buy into end-of-the-world scenarios. Machines won’t gain sentience and rule over humans. In fact, I think AI’s greatest potential is as an agent of human will—so it’s more useful to fear the intentions of the human behind the machine than the machine itself.
Second, if you have a job that can be done by a machine, it probably ought to be—and you will become a more whole person when you learn to do something that engages your full human potential.
Ironically, in our future world increasingly reliant on tech, most of us will work non-tech jobs. Machines will do all of the analytics, leaving us to integrate emotion, experience, and wisdom into our work—and we’ll be better for it.