The Arab-Israeli War of October 1973, known as the Yom Kippur War, led the oil-producing Arab states to impose an oil embargo against Israel’s Western allies, the United States above all. The spike in inflation that began in 1973 followed the oil embargo and its soaring energy costs.
The spike also came on the heels of the momentous decision by Nixon to take the United States off the gold standard in 1971. Because either of these events was on its own sufficiently disruptive to shake the equilibrium of prices, many argued both at the time and since that the Great Inflation of the 1970s was the product not of inept policy by central bankers following economic orthodoxies but rather the result of global forces beyond the control of bankers and economic policy makers. The critique of the policy makers—voiced particularly by Milton Friedman, the University of Chicago economist who was the scourge of easy-money bankers everywhere and always—was that the government had created too much money.
According to accepted economic theory, inflation was first and foremost a product of the money supply, which the government controlled. Even if foreign crises precipitated bad economic policy decisions, it was those decisions that were to blame. Regardless of cause, however, the inflation of the 1970s returned economic policy to a level of prominence last seen during the Great Depression, when many of the indicators that would then be used as reference points in the 1970s were invented.
No one questioned that there was high inflation in the 1970s, but many wondered both what was generating it and how high it actually was. The first question was one of economic theory; the second, one of how the statistic called "inflation" was calculated. And if those issues weren't intractable enough, there was an additional twist: throughout this period, the agency responsible for calculating prices, the Bureau of Labor Statistics, was engaged in a multiyear internal debate about whether it was overstating inflation. The result, ultimately, was a new formula that showed, not surprisingly, that inflation wasn't quite as high as had been thought, nor as severe as people experienced in their everyday lives.
There were two forces at work: one was what the proverbial man on the street experienced when he bought groceries, or a car, or filled his tank with gas. The other was what the consumer price index said each month. The former was the lived experience of prices; the latter was a statistic, an indicator, that we call "inflation."
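The gap between those two forces comes down to how the statistic is built. A cost-of-living index of the kind described here is, in its basic form, a fixed-basket calculation: price a representative basket of goods in a base period, then reprice the identical basket later. Here is a minimal sketch of that idea in Python, using a Laspeyres-type formula; the item names, prices, and quantities are purely illustrative, not the bureau's actual data or methodology:

```python
# Illustrative sketch of a fixed-basket (Laspeyres-type) price index.
# Quantities are frozen at the base period; only prices change, so the
# index answers: "what does the SAME basket cost now versus then?"
# All items, prices, and quantities below are hypothetical.

def fixed_basket_index(base_prices, current_prices, base_quantities):
    """Return a price index (base period = 100) for a fixed basket."""
    base_cost = sum(base_prices[item] * qty
                    for item, qty in base_quantities.items())
    current_cost = sum(current_prices[item] * qty
                       for item, qty in base_quantities.items())
    return 100.0 * current_cost / base_cost

# Hypothetical weekly basket for one household.
quantities  = {"bread": 5, "milk": 7, "gasoline": 10}
prices_1972 = {"bread": 0.25, "milk": 0.30, "gasoline": 0.36}
prices_1974 = {"bread": 0.34, "milk": 0.39, "gasoline": 0.53}

index = fixed_basket_index(prices_1972, prices_1974, quantities)
# An index of 140 would mean the same basket costs 40% more than in the base year.
```

The design choice embedded in the formula is exactly what the disputes in this chapter turn on: because the basket is fixed, the index cannot see households substituting cheaper goods when prices rise, which is one reason critics argued such indices could overstate (or, depending on the basket, understate) the cost of living people actually experienced.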
Like the other leading metrics, "inflation" was a product of the early twentieth century. It emerged somewhat earlier than national accounts and just a tad after unemployment statistics came to the fore in the days of Ethelbert Stewart. The modern concept of inflation was an outgrowth of government efforts to measure prices, which stemmed from the same Progressive impulses to assess whether the industrial system was allowing most citizens to meet their basic needs. There had been a few initial efforts to measure prices in the late nineteenth century, and the BLS had done some preliminary surveys of prices in a few cities in 1907 and again in 1912. Then in 1916 the reformist-minded BLS commissioner Royal Meeker authorized a survey of the expenditures of more than two thousand families in the District of Columbia in order to answer a simple question: "What does it cost the American family to live?" That, in turn, led to the first official "cost of living" index, published in 1918.
Refinements to the index, however, were slow to evolve. The one group adamant about better gauges was organized labor. Unions had started to demand that any wage agreements be pegged to the cost of living, arguing that a “living wage” was, by definition, a wage that one could live on. The only effective way to demand a certain wage that would meet basic needs was to have a neutral party compile an index that determined what those needs cost. Hence, the work of the BLS, and, hence as well, the start of decades of dispute about whether the consumer price index did, in fact, accurately report what those costs were.
By the mid-1930s, the BLS was still using the basic methodology of 1917, with quarterly surveys of a basket of consumer goods taken in cities across the country. A comprehensive survey of family expenditures between 1934 and 1936 covered 12,903 white families and 1,566 African-American families in forty-two cities. The New Deal Works Progress Administration also conducted surveys. The demand for these metrics intensified during the war, in light of government price controls and wage freezes. After the tribulations of the Depression and the wage controls demanded by the war, labor leaders believed that the American working class was bearing the brunt of economic hardship. George Meany of the American Federation of Labor, who would continue that harangue over the coming decades, charged that the BLS and the government, in league with big business, systematically and purposely underestimated the cost of living. Meany accused the Roosevelt administration of manipulating the numbers. Having failed to keep down the cost of living, FDR (in Meany’s eyes) had an interest in snookering the American people and convincing them that the cost of living was less than they thought. “We are led,” declared the implacable labor leader, “to the inescapable conclusion that the bureau has become identified with an effort to freeze wages, to the extent that it is no longer a free agency of statistical research.”
Thus began a long and tortured legacy of how citizens relate to official government statistics. No number has been the subject of greater controversy and antagonism than the consumer price index (CPI), which is the source of the official inflation statistic. The CPI was the direct successor to prewar cost of living indices, and was christened in 1945 as the "consumer price index for moderate income families in large cities." Ever since, it has engendered dark conspiracy theories about government officials purposely understating the cost of living in order to pay citizens lower Social Security benefits and allowing corporations to underpay workers. By the early twenty-first century, the CPI affected the government benefits of nearly eighty million people. And given that cost-of-living adjustments in wages and benefits are often pegged to inflation, the CPI may be the leading indicator that most directly impacts our everyday lives.
It was never intended to carry such weight. Speaking in 1952, the deputy commissioner of the BLS responded to the criticisms of the agency's work and placed those in the context of a vastly changed landscape. People were beginning to use indicators in ways that few professional statisticians or economists had anticipated, and as we saw, the sudden ubiquity of these numbers and the way they were being disseminated in popular culture transformed what had been modest indices for use by government or academics into social, political, and cultural touchstones. He warned that the statistical profession was "scarcely prepared, and certainly not organized, to meet the serious responsibilities placed upon us by the new use of statistics." If both government and private compilers were to retain credibility, they had to be rigorous about methods and responsive to critics.
Excerpted from THE LEADING INDICATORS: A Short History of the Numbers That Rule Our World by Zachary Karabell. Copyright © 2014 by Zachary Karabell. Reprinted by permission of Simon & Schuster, Inc. All Rights Reserved.
Zachary Karabell is an author, money manager, commentator, and head of global strategy at Envestnet. He is also the president of River Twice Research, where he analyzes economic and political trends. Educated at Columbia, Oxford, and Harvard, Karabell has written eleven previous books. He is a regular commentator on CNBC, MSNBC, and CNN. He writes the weekly “Edgy Optimist” column for Reuters and The Atlantic, and is a contributor to The Daily Beast, TIME, The Wall Street Journal, The New Republic, The New York Times, and Foreign Affairs.