Everyone agrees that it would be troubling news if America’s rate of innovation were to decrease. But we can’t seem to agree at all about whether this is actually happening.
We care about innovation so much not simply because we like new stuff, although we certainly do. As the novelist William Makepeace Thackeray observed, “Novelty has charms that our mind can hardly withstand.” Some of us can hardly withstand the allure of new gadgets; others are charmed by the latest fashion styles or places to see and be seen. From an economist’s perspective, satisfying these desires is great—taking care of consumer demand is usually seen as a good thing. But innovation is also the most important force that makes our society wealthier.
Paul Krugman speaks for many, if not most, economists when he says, “Productivity isn’t everything, but in the long run it is almost everything.” Why? Because, he explains, “A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker”—in other words, the number of hours of labor it takes to produce everything, from automobiles to zippers, that we produce. Most countries don’t have extensive mineral wealth or oil reserves, and thus can’t get rich by exporting them.* So the only viable way for societies to become wealthier, and to improve the standard of living available to their people, is for their companies and workers to keep getting more output from the same inputs: in other words, more goods and services from the same number of people.
Economist Bob Gordon, one of the most thoughtful, thorough, and widely respected researchers of productivity and economic growth, recently completed a major study of how the American standard of living has changed over the past 150 years. His work left him convinced that innovation is slowing down.
Gordon emphasizes—as do we—the role of new technologies in driving economic growth. And like us, he’s impressed by the productive power unleashed by the steam engine and the other technologies of the Industrial Revolution. According to Gordon, the Industrial Revolution was the first truly significant event in the economic history of the world. As he writes, “there was almost no economic growth for four centuries and probably for the previous millennium” prior to 1750, or roughly when the Industrial Revolution started. As we saw in the first chapter, human population growth and social development were very nearly flat until the steam engine came along. Unsurprisingly, it turns out that economic growth was, too.
As Gordon shows, however, once this growth got started it stayed on a sharp upward trajectory for two hundred years. This was due not only to the original Industrial Revolution but also to a second one, which was equally reliant on technological innovation. Three novelties were central here: electricity, the internal combustion engine, and indoor plumbing with running water, all of which came onto the scene between 1870 and 1900.
Gordon also writes that “it is useful to think of the innovative process as a series of discrete inventions followed by incremental improvements which ultimately tap the full potential of the initial invention.”
This seems sensible enough. An invention like the steam engine or the computer comes along, and we reap economic benefits from it. Those benefits start small while the technology is immature and not widely used, grow to be quite big as the general purpose technology (GPT) improves and propagates, then taper off as the improvement, and especially the propagation, dies down. When multiple GPTs appear at the same time, or in a steady sequence, we sustain high rates of growth over a long period. But if there’s a big gap between major innovations, economic growth will eventually peter out. We’ll call this the ‘innovation-as-fruit’ view of things, in honor of Tyler Cowen’s imagery of all the low-hanging fruit being picked. In this perspective, coming up with an innovation is like growing fruit, and exploiting it is like eating the fruit over time.
Another school of thought, though, holds that the true work of innovation is not coming up with something big and new, but instead recombining things that already exist. And the more closely we look at how major steps forward in our knowledge and ability to accomplish things have actually occurred, the more this recombinant view makes sense. For example, it’s exactly how at least one Nobel Prize–winning innovation came about.
Kary Mullis won the 1993 Nobel Prize in Chemistry for the development of the polymerase chain reaction (PCR), a now ubiquitous technique for replicating DNA sequences. When the idea first came to him on a nighttime drive in California, though, he almost dismissed it out of hand. As he recounted in his Nobel Prize acceptance speech, “Somehow, I thought, it had to be an illusion. . . . It was too easy. . . . There was not a single unknown in the scheme. Every step involved had been done already.”
Facebook has built on the Web infrastructure by allowing people to digitize their social network and put media online without having to learn HTML. Whether or not this was an intellectually profound combination of technological capabilities, it was a popular and economically significant one—by July 2013, the company was valued at over $60 billion. When photo sharing became one of the most popular activities on Facebook, Kevin Systrom and Mike Krieger decided to build a smartphone application that mimicked this capability, combining it with the option to modify a photo’s appearance with digital filters. This seems like a minor innovation, especially since Facebook had already enabled mobile photo sharing in 2010, when Systrom and Krieger started their project. However, the application they built, called Instagram, attracted more than 30 million users by the spring of 2012, users who had collectively uploaded more than 100 million photographs. Facebook acquired Instagram for approximately $1 billion in April 2012.
This progression drives home the point that digital innovation is recombinant innovation in its purest form. Each development becomes a building block for future innovations. Progress doesn’t run out; it accumulates. And the digital world doesn’t respect any boundaries. It extends into the physical one, leading to cars and planes that drive themselves, printers that make parts, and so on. Moore’s Law makes computing devices and sensors exponentially cheaper over time, enabling them to be built economically into more and more gear, from doorknobs to greeting cards. Digitization makes available massive bodies of data relevant to almost any situation, and this information can be infinitely reproduced and reused because it is non-rival. As a result of these two forces, the number of potentially valuable building blocks is exploding around the world, and the possibilities are multiplying as never before. We’ll call this the ‘innovation-as-building-block’ view of the world; it’s the one held by Arthur, Romer, and the two of us. From this perspective, unlike in the innovation-as-fruit view, building blocks don’t ever get eaten or otherwise used up. In fact, they increase the opportunities for future recombinations.
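The claim that possibilities multiply as building blocks accumulate can be made concrete with a little combinatorics: the number of possible pairings of n blocks grows quadratically, and the number of possible recombinations (any subset of two or more blocks) grows exponentially. A minimal sketch in Python; the function names are ours, and the counts are simple arithmetic rather than figures from the book:

```python
from math import comb

def pairings(n: int) -> int:
    # ways to choose 2 building blocks out of n: "n choose 2"
    return comb(n, 2)

def recombinations(n: int) -> int:
    # all subsets of n blocks, minus the empty set and the n singletons,
    # i.e. every combination of two or more blocks
    return 2**n - 1 - n

for n in (10, 20, 40):
    print(n, pairings(n), recombinations(n))
```

The point of the sketch is only that the second column grows much faster than linearly, and the third faster still: each new block that digitization adds enlarges the space of possible combinations far more than the block itself.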
Gordon asks the provocative question, “Is growth over?” We’ll respond: “Not a chance. It’s just being held back by our inability to process all the new ideas fast enough.”
* In reality, many of the countries that do have large amounts of mineral and commodity wealth are crippled by the twin terrors of the “resource curse”: low growth rates and lots of poverty.
Adapted from the book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee. Excerpted by arrangement from W. W. Norton & Company. Copyright 2014.
Erik Brynjolfsson is the director of the MIT Center for Digital Business and one of the most cited scholars in information systems and economics. Andrew McAfee is a principal research scientist at the MIT Center for Digital Business and the author of Enterprise 2.0.