There may be no more important question for the future of the US economy than whether the ongoing advances in information technology and artificial intelligence will eventually (and this "eventually" is central to their argument) translate into substantial productivity gains. Erik Brynjolfsson, Daniel Rock, and Chad Syverson make the case for optimism in "Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics" (NBER Working Paper 24001, November 2017). The paper isn't freely available online, but many readers will have access to NBER working papers through their library. The essay will eventually be part of a conference volume on The Economics of Artificial Intelligence.

Brynjolfsson, Rock, and Syverson are making several intertwined arguments. One is that various aspects of machine learning and artificial intelligence have been crossing important thresholds in the last few years, and will cross more in the next few. Thus, even though we tend to think of the "computer age" as having already been in place for a few decades, there is a meaningful sense in which we are about to enter another chapter. The other argument is that when a technological disruption cuts across many parts of the economy--that is, when it is a "general purpose technology" as opposed to a more focused innovation--it often takes a substantial period of time before producers and consumers fully change and adjust. In turn, this means a substantial period of time before the new technology has a meaningful effect on measured economic growth.
As one example of a new threshold in machine learning, consider image recognition. On various standardized tests for image recognition, the error rate for humans is about 5%. In just the last few years, the error rate for image-recognition algorithms has dropped below the human level--and of course the algorithms are likely to keep improving.
There is, of course, a wide array of similar examples. The authors cite one study in which an artificial intelligence system did as well as a panel of board-certified dermatologists in diagnosing skin cancer. Driverless vehicles are creeping into use. Anyone who uses translation software or software that relies on voice recognition can attest to how much better it has become in the last few years.
The authors also point to an article from the Journal of Economic Perspectives in 2015, in which Gill Pratt pointed out the potentially enormous advantages of artificial intelligence in sharing knowledge and skills. For example, translation software can be updated and improved based on how everyone uses it, not just one user. They write about Pratt's essay:

[Artificial intelligence] machines have a new capability that no biological species has: the ability to share knowledge and skills almost instantaneously with others. Specifically, the rise of cloud computing has made it significantly easier to scale up new ideas at much lower cost than before. This is an especially important development for advancing the economic impact of machine learning because it enables cloud robotics: the sharing of knowledge among robots. Once a new skill is learned by a machine in one location, it can be replicated to other machines via digital networks. Data as well as skills can be shared, increasing the amount of data that any given machine learner can use.

However, new technologies like web-based technology, accurate vision, drawing inferences, and communicating lessons don't spread immediately. The authors offer the homely example of the retail industry. The idea or invention of online sales became practical back in the second half of the 1990s. But many of the companies founded for online sales during the dot-com boom of the late 1990s failed, and the sector of retail that expanded most after about 2000 was warehouse stores and supercenters, not online sales. Now, two decades later, online sales have almost reached 10% of total retail.
Why does it take so long? The theme that Brynjolfsson, Rock, and Syverson emphasize is that a revolution in online sales needs more than an idea. It needs innovations in warehouses, distribution, and the financial security of online commerce. It needs producers to think in terms of how they will produce, package, and ship for online sales. It needs consumers to buy into the process. It takes time. 
The notion that general purpose inventions which cut across many industries take time to manifest their productivity gains, because of the need for complementary inventions, turns out to be a pattern that has occurred before.
For economists, the canonical comment on this process in the last few decades is due to Robert Solow (Nobel laureate, 1987), who wrote in a 1987 essay: "You can see the computer age everywhere but in the productivity statistics" ("We'd better watch out," New York Times Book Review, July 12, 1987, quotation from p. 36). After all, IBM had been producing functional computers in substantial quantities since the 1950s, but the US productivity growth rate had been slow since the early 1970s. When the personal computer revolution, the internet, and the surge of productivity in computer chip manufacturing all hit in force in the 1990s, productivity did rise for a time. Brynjolfsson, Rock, and Syverson write:

"For example, it wasn't until the late 1980s, more than 25 years after the invention of the integrated circuit, that the computer capital stock reached its long-run plateau at about 5 percent (at historical cost) of total nonresidential equipment capital. It was at only half that level 10 years prior. Thus, when Solow pointed out his now eponymous paradox, the computers were finally just then getting to the point where they really could be seen everywhere."

Going back in history, my favorite example of the lag it takes for inventions to diffuse broadly comes from the invention of the dynamo for generating electricity, a story first told by economic historian Paul David back in a 1991 essay. David points out that large dynamos for generating electricity existed in the 1870s. However, it wasn't until the Paris World Fair of 1900 that electricity was used to illuminate the public spaces of a city. And it's not until the 1920s that innovations based on electricity make a large contribution to US productivity growth.
Why did it take so long for electricity to spread? Shifting production away from being powered by waterwheels to electricity was a long process, which involved rethinking, reorganizing, and relocating factories. Products that made use of electricity, like dishwashers, radios, and other home appliances, could not be developed fully or marketed successfully until people had access to electricity in their homes. Large economic and social adjustments take time.

When it comes to machine learning, artificial intelligence, and economic growth, it's plausible to believe that we are closer to the front end of our economic transition than we are to the middle or the end. Some of the more likely near-term consequences mentioned by Brynjolfsson, Rock, and Syverson include a likely upheaval in the call center industry, which employs more than 200,000 US workers, or how automated driverless vehicles (interconnected, sharing information, and learning from each other) will directly alter one-tenth or more of US jobs. My suspicion is that the changes across products and industries will be deeper and more sweeping than I can readily imagine.

Of course, the transition to the artificial intelligence economy will have some bumps and some pain, as did the transitions to electrification and the automobile. But the rest of the world is moving ahead. And history teaches that countries which stay near the technology frontier, and face the needed social adjustments and tradeoffs along the way, tend to be far happier with the choice in the long run than countries which hold back.
