Evidence of technological change is all around us—smartphones, self-driving cars, amazing drug discoveries, and even drone warfare. With all of this novelty, many futurists and other pundits breathlessly proclaim that technological change is speeding up. In fact, some go so far as to claim that the pace of innovation is not only accelerating but accelerating exponentially (which, anyone with a rudimentary understanding of exponents can see, is utter nonsense). Peter Diamandis, author of Abundance: The Future Is Better Than You Think, argues that “Within a generation, we will be able to provide goods and services, once reserved for the wealthy few, to any and all who need them. Or desire them.”
But is the rate of technological change really getting faster? Other commentators, including some notable academic economists, actually think the opposite—that we have run out of the “easy” technological advances and new breakthroughs will take much more work.
Questions about the rate of technological change may seem trivial—will I get one hoverboard or two?—but getting a handle on an answer is critical because in economic terms, technological change equals economic growth. And growth has powerful implications for the future of national competitiveness and economic health. We will not reach Diamandis’s predicted utopia, for example, without world productivity exceeding recent historical rates eightfold.
Why technological change is tough to measure
So how can we know whether the future holds untold wonders of economic efficiency—or a stagnant morass of decline? Unfortunately, despite its fundamental importance, technological change can be tricky to measure and even more difficult to predict. Economists disagree (as do business people) on whether growth will be off the charts on the high side, or off the charts on the low side. Analysts even disagree on what has happened in the past. Why is it so hard to get a handle on technological change?
First of all, when we think of technological change we tend to be biased toward the present, so it can be hard to zoom out and look objectively at all the changes that have happened along the way. There almost always appears to be more shiny new stuff now than there was years ago, and it is hard to tell whether that is because it is true, or because newness loses its luster over time and we remember the recent changes more easily. How many people remember the amazing innovations of the 1960s (audio cassettes, the BASIC programming language, the computer mouse, and the first integrated-circuit computers, to name a few)—innovations so exciting that they helped launch a stock market “tronics” boom and bust in electronics?
Second, each type of technological change is different—we can’t have the same industrial revolution twice. The mechanical revolution of the late 1800s was very different from the electro-mechanical revolution after WWII, which was very different from today’s IT revolution. This lack of comparability across eras makes technological change hard to identify, particularly in the moment, and hard to measure accurately over time.
Third, while some innovations improve quantity, others only affect quality. Inventions like computer-controlled machine tools increase output and decrease costs in straightforward ways. But other inventions, like cell phones, change the quality of their role in our lives. Economics attempts to quantify this quality, of course, either through price changes or through some measurable product difference (like gas mileage). But quality resists quantification for two reasons. First, it is based on our subjective perceptions and preferences: how can you quantify the comfort of a modern car compared to a Model T? Second, quality concerns our relationship with the world and our power over it—that is, it changes what we can do and how we can do it. It is possible to quantify our changing abilities on some level, such as the amount of effort or time needed to achieve desired ends. A cell phone might save you a half hour per day, for example. But our actions vary in ways other than pure utility of effort, and utility measures still miss important components of our subjective experience. (The movie Point Break comes to mind.)
Finally, the overall process of change can be complex. While the effects of some technological changes are immediate and straightforward—the opening of the Panama Canal, made possible by the development of steam shovels, for example—other effects take years or decades to work through the economy and be adopted by businesses and consumers. Electric cars were invented a century ago but were superseded by fossil-fuel vehicles until recently, when environmental concerns and better battery technology enabled them to gain a modest market share. Furthermore, discoveries and inventions interact and build on other discoveries and inventions, only a fraction of which trickle down to new uses and cost reductions. One new discovery may turn out to be the keystone that brings together many previous discoveries into a practical new invention.
Techniques for measuring technological change
Despite these challenges, plenty of attempts have been made to lay the tangled brambles of our technological history out into a discernible trend.
Economists usually start by measuring productivity, defined as output per hour of work. This is generally the clearest and most common way of measuring technological progress. The metric can be useful, but only if we recognize how extreme a simplification it is. First of all, productivity is measured in prices, and prices are a product of both supply and demand. An increase in demand can therefore raise prices and make it appear that output increased, despite no actual change in productivity. In addition, national productivity is a messy aggregate of many different industries with wildly varying changes in output: manufacturing productivity may be trending in the opposite direction from productivity in construction or human services.
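The deflation problem above can be sketched in a few lines. All numbers here are hypothetical, chosen only to illustrate how an unadjusted price rise masquerades as a productivity gain:

```python
# A minimal sketch (hypothetical numbers) of the deflation problem:
# labor productivity is REAL output per hour, so nominal output must be
# deflated by a price index before dividing by hours worked.

def productivity(nominal_output, price_index, hours_worked):
    """Real output per hour worked."""
    return (nominal_output / price_index) / hours_worked

# Year 1: $100M of output at baseline prices (index = 1.0), 1M hours worked.
year1 = productivity(100e6, 1.00, 1e6)

# Year 2: demand pushes prices up 10%; real activity and hours are unchanged.
naive = 110e6 / 1e6                        # nominal output per hour: a phantom 10% "gain"
deflated = productivity(110e6, 1.10, 1e6)  # deflated: essentially no change from year 1
```

This is why statistical agencies report “real” productivity using price deflators—though the choice of deflator itself embeds assumptions about quality, which is exactly the problem discussed earlier.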
Furthermore, this method of measuring technological change achieves its clarity by skipping over any questions about how or why the output increased. We know where we are, but we have no idea how we got here. And since we do not know how we got here, it is hard to figure out where we are going.
Other attempts to quantify technological change offer more by way of explanation but are less useful in terms of linking technological change with economic growth. In part, this is because they examine only a limited slice of the economy.
We can, for example, view technological progress in terms of energy usage. If we use energy for useful things, and we use more of it per person, then additional energy use can be assumed to correlate with economic growth and be called technological progress. The obvious problem is that this ignores efficiency. If I drive and you take the high-speed train, am I more technologically advanced than you because I used more energy—even though you got there first?
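The car-versus-train thought experiment makes the flaw concrete. The energy intensities below are assumed round numbers, not measured data; the point is only the direction of the comparison:

```python
# Hypothetical energy intensities, chosen only to illustrate why
# "more energy used" is a poor proxy for "more technologically advanced".

trip_km = 500

car_mj_per_km = 2.5    # assumed intensity of a solo car trip (MJ per km)
train_mj_per_km = 0.5  # assumed per-passenger intensity of a high-speed train

car_energy = trip_km * car_mj_per_km      # 1250 MJ
train_energy = trip_km * train_mj_per_km  # 250 MJ

# An energy-use metric scores the driver five times more "advanced",
# even though the train passenger arrived first on a fifth of the energy.
ratio = car_energy / train_energy  # 5.0
```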
Another way to understand technological progress is by measuring computational processing power. Processing power is relatively easy to quantify, and has direct application to a broad range of new technologies. However, the relationship between processing power (itself increasing exponentially thanks to Moore’s law) and economic value is far from linear. There may be points at which additional processing power adds little value; there may be other points where a bit more processing power adds enormous value. Worse, processing power may be applicable for measuring technological growth in areas such as information technology, but there are many technologies, such as automobiles, where processing power has little direct correlation with productivity increases in the real world.
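The mismatch between exponential capability and nonlinear value can be sketched with a toy model. The threshold value function below is entirely hypothetical; it stands in for the idea that extra processing power matters only once it crosses a level that enables a new application:

```python
# A toy model: processing power doubles every two years (Moore's law),
# while economic value (hypothetically) jumps only when power crosses a
# threshold that enables some new application.

def processing_power(years, doubling_period=2.0):
    """Relative processing power after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def economic_value(power, threshold=1000):
    """Hypothetical step-shaped value: nothing until the enabling threshold is crossed."""
    return 1.0 if power >= threshold else 0.0

# Power grows smoothly and exponentially; value sits flat, then jumps.
for year in (0, 10, 20, 30):
    p = processing_power(year)
    print(year, p, economic_value(p))
```

Under these assumptions, twenty years of doubling (a thousandfold increase) produce no measured value until the threshold year, then all of it at once—which is why processing power and economic growth can decouple so badly.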
Another method is to examine patents. Intended to provide monetary incentives for useful new inventions, new patents should in theory represent useful new technologies, so we should be able to estimate the speed of technological progress from the rate of new patents. However, patents vary widely in both quality and potential impact: patents for microchips or new types of batteries may lead to fundamental changes in the economy, while other patents may have no impact at all. Moreover, the propensity to patent varies with business strategy; companies may, for example, feel compelled to engage in heavy defensive patenting.
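A patent-based rate of progress is usually backed out of raw counts as a compound annual growth rate. The counts below are made-up placeholders, not real filing statistics; the caveat from the text still applies—this arithmetic weights a microchip patent and a trivial one equally:

```python
# Hypothetical annual patent totals (placeholder numbers, not real data),
# used only to show how a headline "rate of progress" is derived from counts.

counts = {2010: 220_000, 2014: 300_000}  # assumed totals for two years

years = 2014 - 2010
# Compound annual growth rate between the two observations.
growth = (counts[2014] / counts[2010]) ** (1 / years) - 1
```

The resulting single percentage hides all of the variation in patent quality, which is precisely why counts alone overstate how much we know about the pace of innovation.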
While these alternative methods of measuring technological change are revealing in their own way, they fail to encapsulate the entire economy in a satisfactory way. And they come to obviously different conclusions: thanks to semiconductor companies, processing power for computing continues to increase exponentially, while patents are increasing at a much slower rate. Energy use of various types typically grows exponentially at first but does not sustain the trend. GDP is increasing, but the numbers have been lackluster recently.
Can we do a better job of figuring out what constitutes technological progress and how to measure it? Again, such a question may seem esoteric, but in fact is central to understanding the performance of the 21st century innovation economy. With the U.S. and other advanced economies increasingly being driven by innovation, it is disturbing that we simply have no real measures to tell if the innovation economy is performing well or not. We can, however, do a very good job of measuring how many bushels of corn we produced, or how many houses were built.
All metrics have a purpose, and the effective use of metrics requires an understanding of what that purpose is. My next post will take a broad look at how technology has changed our lives and our economy, and think a bit about how we can simplify that change into a useful metric.
(Note: More details added under the paragraph on measuring quality, May 2, 2014.)
(Artist credit to James R. Powers.)