Digging into Capital and Labor Income Shares

A wide array of evidence suggests that if you split all income in an economy into either labor or capital income, the labor share has fallen in the last couple of decades. For example, here's a previous post referring to comments from the 2013 Economic Report of the President:

"The “labor share” is the fraction of income that is paid to workers in wages, bonuses, and other compensation. … The labor share in the United States was remarkably stable in the post-war period until the early 2000s. Since then, it has dropped 5 percentage points. … The decline in the labor share is widespread across industries and across countries. An examination of the United States shows that the labor share has declined since 2000 in every major private industry except construction, although about half of the decline is attributable to manufacturing. Moreover, for 22 other developed economies (weighted by their GDP converted to dollars at current exchange rates), the labor share fell from 72 percent in 1980 to 60 percent in 2005."

Or here's a post with comments from the Global Wage Report 2012/13 by the International Labour Organization:

"The OECD has observed, for example, that over the period from 1990 to 2009 the share of labour compensation in national income declined in 26 out of 30 developed economies for which data were available, and calculated that the median labour share of national income across these countries fell considerably from 66.1 per cent to 61.7 per cent."

Other posts on this subject have referred to a report from the Congressional Budget Office, reports from research by Federal Reserve economists, and a 2013 paper in the Brookings Papers on Economic Activity. While the overall fact pattern of a falling labor share seems well-established, digging down into what it actually means reveals some complexities that are often not much discussed. Matthew Rognlie considers some of these issues in "Deciphering the fall and rise in the net capital share," presented today at the Brookings Papers on Economic Activity spring conference. Here are some of my own thoughts about the issues.

1) The movement between capital and labor shares isn't about inequality of incomes.

All wages and compensation are counted as "labor income." Perhaps there are some underlying common reasons why the rising share of capital income and greater inequality of labor income are happening at the same time, but they are not the same thing.

2) Should self-employment income be considered as labor or capital?

If you own a business, then part of your income from that business is a return to your labor, while another part is a return to the risk-taking of business ownership, and should conceptually be considered a return to capital. At certain times, this distinction can make a big difference. Rognlie refers to a debate among economists back in the 1950s, after several decades of a falling number of farmers. Most of the farmers were self-employed, and thus their income was categorized as capital income. As more workers moved from farm to non-farm employment, their income then became treated as labor income. There are a variety of ways to split up self-employment income into labor and capital components, none of them fully satisfactory. However, there hasn't been a big upward trend in self-employment income in recent decades, so such measurement choices are not going to be much help in explaining the rise in share of income going to capital.

3) Should the focus be on net capital or gross capital? 

The difference between "gross" and "net" is that "net" takes depreciation of past capital into account. Rognlie explains why the difference matters using the example of an industry where output is produced by short-lived software: the producer spends a lot on capital every year, but almost all of that spending replaces the obsolete software that depreciated the previous year.

For instance, in an industry where most of the output is produced by short-lived software, the gross capital share will be high, evincing the centrality of capital’s direct role in production. At the same time, the net capital share may be low, indicating that the returns from production ultimately go more to software engineers than capitalists—whose return from production is offset by a loss from capital that rapidly becomes obsolete. Both measures are important: indeed, a rise in the gross capital share in a particular industry is particularly salient to an employee whose job has been replaced by software, and it may proxy for an underlying shift in distribution within aggregate labor income—for instance, from travel agents to software engineers. The massive reallocation of gross income in manufacturing from labor to capital, documented by Elsby et al. (2013), has certainly come as unwelcome news to manufacturing workers. But when considering the ultimate breakdown of income between labor and capital, particularly in the context of concern about inequality in the aggregate economy, the net measure is likely more relevant. This point is accepted by Piketty (2014), who uses net measures; the general rationale for excluding depreciation is pithily summarized by Baker (2010), who remarks that “you can’t eat depreciation.”
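The wedge between the two measures is just arithmetic. Here is a toy calculation with invented numbers (not from Rognlie's paper) showing how heavy depreciation can make the gross capital share look large while the net capital share stays small:

```python
# Toy example (made-up numbers): how depreciation drives a wedge
# between the gross and net capital share.

gross_output = 100.0         # total value added
labor_income = 60.0          # wages and other compensation
gross_capital_income = 40.0  # capital income before depreciation
depreciation = 30.0          # capital used up, e.g. fast-obsolescing software

gross_capital_share = gross_capital_income / gross_output

# Net measures subtract depreciation from both capital income and output.
net_output = gross_output - depreciation
net_capital_income = gross_capital_income - depreciation
net_capital_share = net_capital_income / net_output

print(f"gross capital share: {gross_capital_share:.2f}")  # 0.40
print(f"net capital share:   {net_capital_share:.2f}")    # 0.14
```

With these invented numbers, capital claims 40 percent of gross income but only about 14 percent of income net of depreciation: most of the "return to capital" is just replacing worn-out software.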

Here are two figures, with the first one showing the capital share based on gross capital, and the second showing the capital share based on net capital. The capital share is clearly rising in recent years with gross capital, as shown in the first figure. With net capital, the rise is more modest: it looks more as if there was a drop-off in capital shares in the 1970s which has since been reversed. Again, the underlying difference here refers to changes over time in how quickly capital investment is wearing out, and to changes in what share of current capital investment is replacing old depreciated capital vs. adding to the overall capital stock.

It's worth noting that the conclusion that the rising capital share mostly goes away if you look at net capital rather than gross capital is not at this stage a consensus finding. For an argument that the capital share still rises with a net measure, though by a more modest amount, you can check Loukas Karabarbounis and Brent Neiman, "Capital Depreciation and Labor Shares Around the World: Measurement and Implications," Technical Report, National Bureau of Economic Research, 2014.

4) Is the rise in capital share mainly housing? 

Perhaps the most striking finding from Rognlie's analysis is that all of the rise in the capital share can be accounted for by a rise in housing values. Here's a figure illustrating the point. The top yellow line shows the share of capital income using a "net" measure. The red line shows the measure of capital income with housing, the blue line, subtracted out.

Many noneconomists don't think about owning a house as a form of capital ownership. But from the viewpoint of economic statistics, a homeowner is someone with a piece of capital–a house–that is providing a service. Of course, homeowners do not go through the formality of paying rent to themselves. Rognlie explains it this way:

Income from housing is unlike most other forms of capital income recorded in the national accounts: in countries where homeownership is dominant, most output in the housing sector is recorded as imputed rent paid by homeowners to themselves. … Indeed, imputed rents from owner-occupied housing should arguably be treated as a form of mixed income akin to self-employment income: in part, they reflect labor by the homeowners themselves. … [H]ousing has a pivotal role in the modern story of income distribution. Since housing has relatively broad ownership, it does not conform to the traditional story of labor versus capital, nor can its growth be easily explained with many of the stories commonly proposed for the income split elsewhere in the economy—the bargaining power of labor, the growing role of technology, and so on.

The importance of housing in looking at movements of capital and labor income is not a new insight. For example, Odran Bonnet, Pierre-Henri Bono, Guillaume Camille Chapelle, and Étienne Wasmer discuss the point in this readable June 30, 2014, note about their own research: "Capital is not back: A comment on Thomas Piketty's 'Capital in the 21st Century.'"

When thinking about the long-term evolution of capital and labor income, it becomes important to remember that capital income can mean different things at different times, and land and housing are part of capital, too. It's easier to provide an economic justification for capital income being paid to those who invest in a productive, job-creating, profit-making firm than it is to justify capital income being paid to a lord from the 19th century who inherited large amounts of land and receives capital income from the rent paid by tenant-farmers. The capital income received by the owner of a factory with a huge and costly physical plant is also not identical in economic meaning to the capital income received by the owner of a firm where the capital depreciates to near-zero almost every year and the value of the firm is based in intellectual property. The rise in capital income as a result of a long-term rise in land and housing prices across the high-income countries is a phenomenon that isn't easily crammed into the usual disputes over whether capital owners are exploiting wage-earners.

China\’s Consumption Transition

A standard pattern of long-term economic development is that a country goes through a period of higher savings and investment, along with correspondingly lower consumption levels. After the growth spurt, consumption levels rise again. For illustration, here's a figure from a chapter by Jutta Bolt, Marcel Timmer, and Jan Luiten van Zanden appearing in the OECD report from last fall, How Was Life? Global Well-Being Since 1820, which can be read online here.

As the figure shows, countries like Japan and Korea had substantial drops in their ratio of consumption to GDP, reflecting rises in saving and investment, but then consumption as a share of GDP rose again. Technological leaders like the US and German economies show much more stability in their consumption/GDP ratio looking back a couple of centuries. Poorer countries back in 1950 like China and Ethiopia have high consumption/GDP ratios, although it's intriguing that Ethiopia is showing a drop in consumption/GDP–and thus a rise in investment–over the last decade or so.

In this pattern, consider the peculiarity of China's consumption patterns. It's of course not unexpected that China would have a high consumption/GDP ratio back around 1950. It's not surprising that the various requirements for forced saving under the Mao regime pushed the consumption/GDP ratio down, often at extremely high human cost. It's not surprising that as China's economy liberalized in the 1980s, consumption/GDP fell and investment rose. But what is quite shocking is that as China's economy has grown rapidly, the consumption level has not kept up, and so the consumption/GDP ratio just keeps falling. China's rates of saving and investment are extraordinarily high.

The figure above looks at a broad measure of consumption that includes both household consumption and consumption done directly by government. In China, most of the decline in consumption is traceable to a fall in the household portion of consumption. Here are a couple of graphs with data going back to 1980, generated using data from the World Development Indicators from the World Bank. The first one shows the fall in household consumption as a share of GDP over time. The second shows government final consumption expenditure as a share of GDP, and it has not moved much over time, even as China's economy has grown explosively.

Here are a few thoughts on these patterns:

1) For perhaps a decade or so, there has been a strong argument that the next stage for China's economic growth would involve "rebalancing" away from an economy that leans so extraordinarily heavily on saving and investment and toward an economy more driven by consumption (see also here and here).

2) One channel through which a consumption rise could happen is through government spending on health, education, and assistance for the poor. However, while government spending on consumption has been rising in absolute levels, it has not increased as a share of GDP.

3) The other obvious channel through which consumption could rise is through higher wages and consumption levels by China\’s households. Again, while household consumption has been rising in absolute levels, it has not been keeping up with growth of GDP, and thus has been falling as a share of GDP.

4) China's economic resurgence is so unprecedented that making predictions is especially uncertain. The optimistic prediction would be that China's economy smoothly rebalances away from investment and toward consumption. The pessimistic prediction is that the extraordinary fall in consumption/GDP ratios in China is sending us an important message.

In well-functioning economies, there is a connection where as firms grow and receive higher revenues and profits, those funds are then cycled back to the broader population in the form of higher wages, as well as higher returns that flow into savings accounts and retirement funds for future consumption. Of course, this process by which firms cycle revenues back to households is always a source of controversy. Various laws and institutions will shape the forms in which money flows back from the firm sector to the household sector, and the level of inequality of incomes that result. But in China\’s economy, this process of funds flowing from firms back to households seems not to be working very well.

The very low rates of consumption/GDP in China, and the corresponding high levels of saving and investment, are driving the amount of credit in China\’s economy sky-high (as illustrated here and here). China continues to have possibilities for rapid economic growth in the decades to come. But in the short-term or the middle-term, it also increasingly appears that because of the lack of rebalancing to consumption, China\’s economy is experiencing a credit and investment bubble in a number of sectors that will not end well.

Randomness is Lumpy: Pareidolia

"Pareidolia" refers to the common human practice of looking at randomness and seeing patterns. Some standard examples are when you see a basketball player make several shots in a row and interpret that as a "hot hand," not just the kind of streak that will happen every now and then among thousands of basketball players taking shots where each shot has a roughly 50:50 chance of going in. Or when you see a stock market adviser have several above-average years in a row and interpret that as evidence that future returns are likely to follow the same pattern, rather than as the kind of random streak that will happen every now and then when there are thousands of stock market advisers, each with a roughly 50:50 chance of an above-average performance in any given year.

How good are you at perceiving randomness? Here's an example from Steven Pinker's 2011 book The Better Angels of Our Nature. This example and others were discussed in an article by Aatish Bhatia, "Empirical Zeal: What Does Randomness Look Like," in the December 21, 2012, issue of Wired magazine.

Consider the two panels with a bunch of points. The points on one panel are distributed randomly, but not on the other. Which is which?

The most common answer is to say that the pattern on the right is random. The pattern on the left seems to have certain gaps and clusters and curves, which you can imagine as having some underlying meaning. But given the lead-in of the discussion here, you may be unsurprised to find that the random distribution is the one on the left. The distribution on the right is actually a representation of the pattern of glow-worms on a cave ceiling. The glow-worms compete for food and thus avoid being too close to each other. The greater evenness of the spacing is actually a giveaway that some underlying process is at work. Randomness is lumpy.
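You can see the same effect numerically rather than visually. In this sketch (my own illustration, not from Pinker or Bhatia), 100 uniformly random points in a square almost always include a pair that lands very close together, while 100 points on an even grid, standing in for the glow-worm pattern, never do:

```python
import math
import random

random.seed(0)
n = 100

# 100 uniformly random points in the unit square.
random_pts = [(random.random(), random.random()) for _ in range(n)]

# 100 points on an even 10x10 grid: the "too even to be random" pattern.
grid_pts = [(i / 10 + 0.05, j / 10 + 0.05) for i in range(10) for j in range(10)]

def min_pair_distance(points):
    """Smallest distance between any pair of points."""
    return min(
        math.dist(p, q)
        for idx, p in enumerate(points)
        for q in points[idx + 1:]
    )

# Random points routinely land near each other; the grid's minimum is 0.1.
print(min_pair_distance(random_pts))  # almost always far below 0.1
print(min_pair_distance(grid_pts))    # exactly 0.1
```

The tight pairs among the random points are the "lumps": nothing is pushing those points together, but nothing is pushing them apart either, the way competition for food pushes the glow-worms apart.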

This may seem counterintuitive. After all, "random" refers to an equal probability of outcomes occurring–like where points occur in these panels. But an equal probability of something happening does not mean an equally spread out set of outcomes.

As an example, imagine that you flip a coin twice. On average, you expect to get one head and one tail. Now repeat this experiment of two coin flips 100 times. If every single time out of 100 you got one head and one tail, you could be extremely confident that you were not seeing a random outcome. After all, random chance suggests that one-quarter of the time you would expect to see two heads and one-quarter of the time you would expect to see two tails. In other words, if you don't see lumpy clusters, the odds are good that you aren't seeing randomness.

Separating what is random from what is an underlying pattern is of course the central task in figuring out what is happening in any complex system: the weather, outbreaks of disease, the path of an economy. Beware the dreaded phrase, "It can't be just a coincidence." Sometimes, it can. Many people have a degree of pareidolia, and they will tend to assume that clusters must have an explanation other than randomness. A compelling reason for a course or two in statistics is to help people harness and shape their intuitions about what constitutes evidence of randomness or pattern.

Data Movement Mushrooms

I live my life a long hike away from the technological frontier. But a couple of white papers published by Cisco offer some glimpses of where information technology is headed. The two reports are "The Zettabyte Era: Trends and Analysis" (June 10, 2014) and "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2014–2019" (February 3, 2015).

As a starting point, Internet traffic can be measured in terms of the number of "bytes" of information transmitted. The current measures of annual global Internet traffic are in exabytes, where the prefix "exa-" means 10 raised to the 18th power. According to Cisco, by the end of next year, annual IP (Internet protocol) traffic will reach 1,000 exabytes, which is called a zettabyte–that is, 10 raised to the 21st power. (In case you're wondering what label comes next, a yottabyte is 10 raised to the 24th power.)

What does that volume of traffic mean in more concrete terms? Cisco writes:

To appreciate the magnitude of IP [Internet protocol] traffic volumes, it helps to put the numbers in more familiar terms:
● By 2018, the gigabyte equivalent of all movies ever made will cross the global Internet every 3 minutes.
● Globally, IP traffic will reach 400 terabits per second (Tbps) in 2018, the equivalent of 148 million people streaming Internet HD video simultaneously, all day, every day.
● Global IP traffic in 2018 will be equivalent to 395 billion DVDs per year, 33 billion DVDs per month, or 45 million DVDs per hour. 
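It's worth a back-of-the-envelope check on one of those comparisons. Using Cisco's own figures of 400 Tbps and 148 million simultaneous viewers, the implied per-stream bitrate works out to roughly 2.7 Mbps, a plausible rate for HD streaming (the arithmetic below is mine, not Cisco's):

```python
# Back-of-the-envelope check on Cisco's comparison: 400 Tbps shared
# among 148 million simultaneous streams implies what bitrate each?

total_tbps = 400          # global IP traffic forecast for 2018
streams = 148e6           # simultaneous HD video streams

bits_per_second_each = total_tbps * 1e12 / streams
print(f"{bits_per_second_each / 1e6:.1f} Mbps per stream")  # 2.7 Mbps

# And the unit ladder from the text: 1,000 exabytes make a zettabyte.
exabyte = 10**18
zettabyte = 10**21
print(1000 * exabyte == zettabyte)  # True
```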

Personal computers have generated the majority of Internet traffic in the past, but the explosive growth of smartphones and tablets means that in the next few years, personal computers will account for less than half of Internet traffic. In this illustrative table, M2M refers to "machine-to-machine" Internet traffic, which sometimes goes under the name IoE, for "Internet of Everything."

The reports are full of detail about connection speeds, ways in which the Internet is being used, and breakdowns across devices and regions. One figure offered a thought-provoking comparison of how much Internet traffic is generated by higher-end devices, compared to a basic-feature mobile phone. For example, a smartphone generates 37 times the data volume of a regular phone, and a tablet almost 100 times the data volume. Moving video is very data-intensive: "As in the case of mobile networks, video devices can have a multiplier effect on traffic. An Internet-enabled HD television that draws 50 minutes of content per day from the Internet would generate as much Internet traffic as an entire household today."

Although smartphones and tablets are currently the main devices altering how people generate internet traffic, the reports also include some predictions about what could come next. One big change could come through the wearable Internet. Cisco writes:

An important factor contributing to the growing adoption of IoE [Internet of Everything] is the emergence of wearable devices, a category with high growth potential. Wearable devices, as the name suggests, are devices that can be worn on a person and have the capability to connect and communicate to the network either directly through embedded cellular connectivity or through another device (primarily a smartphone) using Wi-Fi, Bluetooth, or another technology. These devices come in various shapes and forms, ranging from smart watches, smart glasses, heads-up displays (HUDs), health and fitness trackers, health monitors, wearable scanners and navigation devices, smart clothing, et al. The growth in these devices has been fueled by enhancements in technology that have supported compression of computing and other electronics (making the devices light enough to be worn). These advances are being combined with fashion to match personal styles … By 2019, we estimate that there will be 578 million wearable devices globally, growing fivefold from 109 million in 2014 … 

Another big change in Internet traffic will come from M2M or machine-to-machine connections. Cisco writes:

M2M connections—such as home and office security and automation, smart metering and utilities, maintenance, building automation, automotive, healthcare and consumer electronics, and more—are being used across a broad spectrum of industries, as well as in the consumer segment. As real-time information monitoring helps companies deploy new video-based security systems, while also helping hospitals and healthcare professionals remotely monitor the progress of their patients, bandwidth-intensive M2M connections are becoming more prevalent. Globally, M2M connections will grow from 495 million in 2014 to more than 3 billion by 2019… —a sevenfold growth.

Cisco also argues that its forecasts of future Internet traffic should be viewed as conservative. For example, "cloud gaming" refers to the situation where the graphics for high-powered games are done on a remote server and then transferred to the user: "If cloud gaming takes hold, gaming could quickly become one of the largest Internet traffic categories." Another possible game-changer for Internet traffic would be if people begin to get a large share of their television by "unicast," a separate stream coming to each viewer, rather than by broadcast, which "carries one stream to many viewers." A final change would be a large move by consumers to "ultra-high-definition" television.

It's intriguing to speculate about what very high levels of connectedness could mean. Easy access to movies and games is only the beginning, of course. High levels of connectivity may influence where people choose to live and work. It may mean that extraordinary levels of variety and customization become available. In a few years, downloading a file or movie from the cloud seems likely to be just as fast as downloading it from your personal computer or smartphone right now. The coming technology means smart clothes. It may mean that buildings–homes, office buildings, and factories–operate almost like a single machine. It has applications for monitoring of health and delivery of health care, transportation of goods and people, variable pricing, and the widespread availability of tracking what people watch, hear, and do. In some very practical ways, our interactions with the world around us are going to feel quite different in just a few years.

When a Summer Job Could Pay the Tuition

When I was graduating from high school in 1978, a number of my friends went to the hometown University of Minnesota. At the time, it was possible to pay tuition and a substantial share of living expenses with the earnings from a full-time job in the summer and a part-time job during the school year. Given the trends in costs of higher education and the path of the minimum wage since then, this is no longer true.

Here's an illustration of the point with the University of Minnesota, with its current enrollment of about 41,000 undergraduates, as an example. (The figure is taken from a presentation by David Ernst, who among his other responsibilities is Executive Director of the Open Textbook Network, which provides links to about 170 free and open-license textbooks in a variety of subjects.)

Just to put this in perspective, say that a full-time student works 40 hours per week for 12 weeks of summer vacation, and then 10 hours per week for 30 weeks during the school year–while taking a break during vacations and finals. That schedule would total 780 hours per year. Back in the late 1970s, even being paid the minimum wage, this work schedule easily covered tuition. By the early 1990s, it no longer covered tuition. According to the OECD, the average annual hours worked by a US worker was 1,788 in 2013. At the minimum wage, that's now just enough to cover tuition–although it doesn't leave much space for being a full-time student.
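The arithmetic behind that schedule is easy to lay out. This sketch uses the federal minimum wage rates ($2.65 in 1978, $7.25 in 2013); the tuition figures themselves are in the figure rather than the text, so the code only computes the earnings side of the comparison:

```python
# The work schedule from the text: full-time in summer, part-time in term.
summer_hours = 40 * 12        # 40 hours/week for 12 weeks of summer
school_year_hours = 10 * 30   # 10 hours/week for 30 weeks of classes
total_hours = summer_hours + school_year_hours
print(total_hours)  # 780

# Earnings at the federal minimum wage ($2.65 in 1978, $7.25 in 2013).
print(f"1978 schedule: ${total_hours * 2.65:,.0f}")  # $2,067
print(f"2013 schedule: ${total_hours * 7.25:,.0f}")  # $5,655

# The OECD average work year cited in the text, at the 2013 minimum wage:
print(f"Full 1,788-hour year, 2013: ${1788 * 7.25:,.0f}")  # $12,963
```

Set against in-state tuition then and now, the $2,067 easily covered a late-1970s tuition bill, while even a full average work year at the 2013 minimum wage only just reaches a typical flagship-university tuition.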

Remembering Murray Weidenbaum: 1927-2014

Murray Weidenbaum died last May at the age of 87. During Weidenbaum's career, his jobs and affiliations included: the New York State Department of Labor; the U.S. Bureau of the Budget; Ph.D. study at Princeton; jobs at General Dynamics and Boeing; the University of Washington; the Stanford Research Institute; director of a Presidential Committee on the Economics of Defense and Disarmament (we're now up to the 1960s); a NASA Economic Research Program based at Washington University in St. Louis, which by the end of his career had turned into a University Professorship; Assistant Secretary of the U.S. Treasury for Economic Policy; Chairman of President Reagan's Council of Economic Advisers; positions on corporate boards (Centerre Bank, Hill and Knowlton, May Department Stores, Tesoro Petroleum, Beatrice Foods, the Harbour Group); affiliations with the American Enterprise Institute and the Center for Strategic and International Studies; and blue-ribbon commissions on everything from trade deficits to terrorism.

An economist focused on defense spending and the costs of regulation, Weidenbaum was perhaps best known for the line: "Don't just stand there, undo something." David R. Henderson offers some memories of Weidenbaum in "A Feel for Economics: Murray Weidenbaum, 1927–2014," appearing in the Winter 2014-15 issue of Regulation magazine.

"Murray was the ultimate economist, as two stories about him bear out: Although he was Jewish, his family celebrated Christmas, and he and his wife encouraged their son and two daughters to believe in Santa Claus. In time, his older daughter reached the age when she began to doubt Santa Claus's existence and suspected that her parents were the real source of Christmas gifts. But on Christmas morning, she opened a gift and found an expensive item that she had wanted. “There must be a Santa Claus,” she said, excitedly. “Dad's too cheap to spend that much money.” Murray delighted in telling that story.

"The second story is about traveling light. While working at CSAB [the Center for the Study of American Business at Washington University], I was getting ready to go to a conference. One thing that always happened at conferences, before the Internet achieved its prominence, was that one returned home with copies of various papers. If you collected enough such papers, it was hard to take just a carry-on, and you had to check a bag. What to do? I told Murray that my packing for the conference included old underwear that I didn't mind losing. On the trip, I would throw away the underwear instead of bringing it home, creating space for papers in my carry-on. As far as I knew, I was the only person who did this—until Murray told me (eyes twinkling) that he often did that very thing."

Fear of Cheap Foreign Labor in the Long Depression: 1873-1879

The US economy was in a continuous recession for 65 months, from October 1873 to March 1879. Historians call it the "Long Depression," because even the Great Depression from 1929 to 1933 saw "only" 43 consecutive months of economic decline. For comparison, the more recent Great Recession lasted 18 months.

Samuel Bernstein offered one of the classic descriptions of the Long Depression in his 1956 article, "American Labor in the Long Depression, 1873-1878" (Science & Society, Winter 1956, 20:1, pp. 59-83, available through JSTOR). Precise government statistics are not available for this time period, of course, but estimates of the unemployment rate for the later part of this period often exceeded 20%, and some exceeded 30%. For those with a job, real wages fell by half. Even those real wages were often paid in the form of company scrip, which could only be used at the company store, and was worth substantially less than cash.

Bernstein quotes from a Bulletin of the American Iron and Steel Association in the first quarter of 1874, when the Long Depression had barely begun. The report stated "that the manufacturing industries of the country are rapidly sinking; and the conclusion is equally inevitable that all branches of business will soon collapse under the dead weight of the paralysis which has seized manufacturers and driven the labor classes into idleness, unless means are devised to stimulate and encourage productive enterprises." Output fell sharply. Here's Bernstein:

"From 1873 to 1878 production fell precipitously. Mills either closed or ran part time. At the end of the first year of the crisis the Bureau of Statistics in Pennsylvania reported: 'Probably never in the history of the country has there been a time, when so many of the working classes, skilled and unskilled, have been moving from place to place seeking employment that was not to be had–never certainly for so long a time.' Estimates on the production of pig-iron, coal, on cotton consumption, railroad revenues, imports of merchandise and bank clearings showed a reduction of 32 percent between 1873 and 1878. Its magnitude was second only to that between 1929 and 1932, namely 55 percent."

And what was the cause of this collapse? At least one writer back in October 1879, writing for the Atlantic Monthly, believed that globalization and competition from China, India, and Brazil were to blame. An author identified as W.G.M. wrote an essay called "Foreign Trade No Cure for Hard Times," which through the magic of the web can be read online here. W.G.M. argued:

"We read in a London paper that the Chinese government have purchased machinery, and engaged experienced engineers and spinners in Germany to establish cotton mills in China, so as to free that country from dependence upon English and Russian imports. Though China is somewhat tardy in her action, we may be certain that she is thorough. … More than this, the time is not far distant when the textiles from the Chinese machine looms, iron and steel and cutlery from the Chinese furnaces, forges and workshops, with everything that machinery and cheap labor can produce, will crowd every market. The four hundred millions of China, with the two hundred and fifty millions of India,–the crowded and pauperized populations of Asia,–will offer the cup of cheap machine labor, filled to the brim, to our lips, and force us to drink it to the dregs, if we do not learn wisdom. It is in Asia, if anywhere, that the world is to find its workshop. There are the masses, and the conditions, necessary to develop the power of cheapness to perfection, and they will be used. For years we have been doing our utmost to teach the Chinese shoemaking, spinning and weaving, engine driving, machine building, and other arts, in California, Massachusetts, and other States; and we may be sure they will make good use of their knowledge; for there is no people on earth with more patient skill and better adapted to the use of machinery than the Chinese. What the Chinese government is doing for China, Dom Pedro is doing for Brazil [this would be Dom Pedro II, the last ruler of the Empire of Brazil], though in a different form."

It gives me a smile to think that the dangers of global competition from China, India, and Brazil were being stated so eloquently back in 1879!
From a modern point of view, W.G.M. wasn\’t quite thinking clearly. The essay argues that increased exports alone won\’t be nearly enough to help the US economy recover, which seems clearly correct, if a bit of a straw-man argument. The essay also implies that the causes of the Long Depression could be found in an effort to cut costs for the purpose of raising exports, though it recognizes at least that the 1870s were a time of enormous structural change in the US economy.

A readable overview of the 1873-1878 period is available here. If you had to describe the causes of the Long Depression in modern terms, you might call it a combination of a tech boom-and-bust cycle, an industrial transformation, a banking crisis, and a euro-style problem of currency arrangements that were not serving the economy well.

The tech boom of that time was the railroad mania, which led to a cycle of overbuilding and then bust, which in turn dragged down other manufacturing industries. By the later 1870s, about half of all the railroad track in the country was owned by those who had acquired it through bankruptcy proceedings. At the same time, the transportation network established by rail was feeding the growth of much larger companies that were finding cost savings by investing in equipment and economies of scale. At least some of the unemployment was what we would today call \”technological unemployment\”: labor displaced by rapid technological change that cannot soon find alternative places of employment. International trade and big business were often conducted on a gold standard, but the government continued to circulate large amounts of the \”greenback\” paper currency that had been introduced during the Civil War. As firms and consumers went broke, and currency values fluctuated against gold, there were banking crises and times when financial payments could not be made.

Thus, W.G.M. was correct in that 1879 essay to perceive a transformation of American production. I wonder how the 1879 argument would have differed if the writer had been able to see how little progress the economies of China and India had made even 100 years later, in 1979! For me, an ongoing lesson is that when economic times are rough, blaming other countries is always an easy temptation.

Homage: I ran across a mention of the 1879 Atlantic Monthly article in the overview written by Prakash Loungani for the March 2015 issue of Finance & Development, and tracked down the original.

The Rare Earths Shortage: A Crisis with a Supply and Demand Answer

Rare earths had a moment in the media spotlight in late 2010 and early 2011. The story goes that in September 2010, a Chinese and a Japanese ship collided in waters claimed by Japan. Japan detained the captain of the Chinese ship. China appears to have responded (although China denied this) by placing an embargo on sales of rare earths to Japan. It came to public attention that over the previous two decades, rare earth mining operations had often been opening in China but closing in the rest of the world, and more than 95% of rare earth production was happening in China.

A number of chin-stroking analyses and warnings followed. As one example, Katherine Bourzac wrote an article in the April 2011 issue of Technology Review called \”The Rare-Earth Crisis,\” subtitled \”Today’s electric cars and wind turbines rely on a few elements that are mined almost entirely in China. Demand for these materials may soon exceed supply. Will this be China’s next great economic advantage?\” Here\’s a sample of Bourzac\’s argument:

But even without Chinese restrictions and with the revival of the California mine, worldwide supplies of some rare earths could soon fall short of demand. Of particular concern are neodymium and dysprosium, which are used to make magnets that help generate torque in the motors of electric and hybrid cars and convert torque into electricity in large wind turbines. In a report released last December, the U.S. Department of Energy estimated that widespread use of electric-drive vehicles and offshore wind farms could cause shortages of these metals by 2015.

What would happen then is anyone’s guess. There are no practical alternatives to these metals in many critical applications requiring strong permanent magnets—materials that retain a magnetic field without the need for a power source to induce magnetism by passing an electric current through them. Most everyday magnets, including those that hold notes on the fridge, are permanent magnets. But they aren’t very strong, while those made from rare earths are tremendously so. Alloys of neodymium with iron and boron are four to five times as strong by weight as permanent magnets made from any other material. That’s one reason rare-earth magnets are found in nearly every hybrid and electric car on the road. The motor of Toyota’s Prius, for example, uses about a kilogram of rare earths. Offshore wind turbines can require hundreds of kilograms each.

New mining activity, not only at Mountain Pass but also in Australia and elsewhere, will increase supplies—but not enough to meet demand for certain critical metals, particularly dysprosium, in the next few years. … Because rare earths make such excellent magnets, researchers have put little effort since the early 1980s into improving them or developing other materials that could do the job. Few scientists and engineers outside China work on rare-earth metals and magnet alternatives. Inventing substitutes and getting them into motors will take years, first to develop the scientific expertise and then to build a manufacturing infrastructure. …  Few experts express optimism that there will be enough rare-earth materials to sustain significant growth of clean energy technologies like electric cars and wind power, which need every possible cost and efficiency advantage to compete.

So what in fact happened after the rare earths crisis of 2010-11? For the basic storyline, here\’s a price chart for rare earths–which are usually mined together and then separated into constituent parts–in dollars per ton, from the industry research and consulting company TRU Group.

Clearly, the story is that prices spiked, but it was not a lasting crisis. Instead, it\’s one more example of free market forces at work. Eugene Gholz tells the story in \”Rare Earth Elements and National Security,\” an October 2014 report written for the Council on Foreign Relations. 
As background, according to the US Geological Survey, \”The rare earths are a relatively abundant group of 17 elements composed of scandium, yttrium, and the lanthanides [which are the elements with atomic numbers 57 to 71]. The elements range in crustal abundance from cerium, the 25th most abundant element of the 78 common elements in the Earth\’s crust at 60 parts per million, to thulium and lutetium, the least abundant rare-earth elements at about 0.5 part per million.\” The USGS reports that rare earths are heavily used as catalysts, while other main uses are in metallurgical applications and alloys, permanent magnets, and glass polishing.
In late 2010 and early 2011, prices for rare earths spiked in large part as a response to the worrisome news stories. Gholz explains that spot prices for rare earth elements rose \”especially as
downstream users—companies that incorporate REEs [rare earth elements] into other products—filled inventories to protect themselves from future disruptions. Speculators also bought the stocks of many small mining companies that promised to develop new sources of rare earths around the world. But once buyers realized that actual supply to consumers around the globe was not that tight, prices plunged.\” Gholz\’s story of what happened reads like a supply and demand primer: supply rises, demand falls, substitutes and alternatives found.
When prices rise and it appears that a dominant producer might be unreliable, entry to the market occurs:

Despite its relatively small size, the rare earths market managed to attract plenty of interest outside China prior to the 2010 supply scares. Motivated by expected increases in demand, investors in the United States, Japan, and Australia were already opening rare earth mines and building new processing capabilities by 2010, and other investors were moving ahead on mines around the world in places like Canada, South Africa, and Kazakhstan. The major investments made by Molycorp in the United States and Lynas in Australia and Malaysia started delivering non-Chinese rare earths to global markets by 2013. When rare earth prices surged in 2010, even more potential entrants swarmed. Hundreds of companies around the world started raising money for new mining projects. Rhodia, long established as a leading rare earths processor in Europe (physically in France though now owned by Belgian chemical company Solvay), ramped up its use of its existing plant capacity and accelerated plans to recycle rare earths, effectively creating a new source of supply to the global market. These new, non-Chinese sources hold the potential to profoundly change market dynamics. Although Chinese producers will still contribute a substantial majority of supply, competition from the rest of the world will moderate Chinese pricing power and feed high-priority end uses even in the event of a cutoff of all Chinese exports.

The high prices also encourage those who demand the material to find substitutes and alternatives:

An embargo or other supply disruption makes users think hard about an input that may have been relatively cheap before, meaning that the users had previously focused their attention on maximizing efficient use of other, more costly inputs. The new attention to the disrupted input can yield “low-hanging fruit” adjustments. For example, at the time of China’s 2010 export embargo to Japan, the largest-volume use of rare earths was in gasoline refining. But gasoline refining still works without rare earth catalysts, just slightly less efficiently; in fact, at the peak of the 2011 rare-earths price bubble (well after the embargo crisis), some refiners stopped using the rare earth catalysts to save input costs. …

The magnet market also adapted through “demand destruction.” Companies such as Hitachi Metals that make rare earth magnets (now including in North Carolina) found ways to make equivalent magnets using smaller amounts of rare earths in the alloys. Some users remembered that they did not need the high performance of specialized rare earth magnets; they were merely using them because, at least until the 2010 episode, they were relatively inexpensive and convenient. When the price rose following China’s alleged embargo, users turned to simpler (and less material-intensive) rare earth magnets or even to magnets that included no rare earths at all. Such adjustments take a little time, thought, and design effort, but their availability means that supply interruptions often have a less dramatic effect than one might expect, based on precrisis demand.

Although China tried to put export controls on rare earths during the 2000s, the controls were often circumvented:

Comparing official Chinese export statistics to statistics on downstream rare-earth oxide consumption in countries like Japan reveals that probably as much as 20,000–30,000 tons of rare earth oxides were smuggled out of China each year in the late 2000s, roughly 15 to 30 percent of official production, depending on the year.

New technologies emerge:

Far from Chinese technical dominance, the striking feature of recent developments in rare earth markets has been the continuation of U.S., European, and Japanese technological leadership. Molycorp’s reopened mine and separation facility use a suite of new technologies that have increased the purity of extracted rare earth products, substantially reduced the environmental impact of the mining and chemical processing, and drastically lowered the cost of American production compared to the Mountain Pass operations that shut down in 2002. Japanese companies are leading the way with new, low-dysprosium magnet technologies, and Rhodia in Europe has made tremendous
progress in developing viable rare-earth recycling operations. In the current market, China looks like a technical laggard—for example, using old, environmentally destructive extraction technologies— rather than a technical leader.

Overall, Gholz writes: \”Future crises are unlikely to seem so perfectly orchestrated to make the United States and its allies vulnerable: the materials in question may be more prosaic or the country where supplies are concentrated may loom less ominously than China. But even in the apparently most-dangerous case of rare earth elements, the problem rapidly faded—and not primarily due to government action.\”

The Economics of Media Bias

Here are four basic questions about media bias:

\”First, is media news reporting actually slanted? …

Second, if reporting is biased, what is the reason? Is such bias driven by the
supply-side, as when reporting reflects the prejudices of an outlet’s owners or journalists? …

Third, what is the effect of media competition on accuracy and bias?  …

Finally, does media reporting actually matter for individual understanding and
action? Does it affect knowledge? Does it influence participation in the political
process? Does it influence how people vote?\”

The questions are posed by Andrei Shleifer in his paper on \”Matthew Gentzkow, Winner of the 2014
Clark Medal,\” in the Winter 2015 issue of the Journal of Economic Perspectives. As background, the Clark medal is given by the American Economic Association each year \”to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge.\” Shleifer is describing the academic work for which Gentzkow won the award last year. Shleifer argues: \”In a very short decade, economic research has obtained fairly clear answers to at least some of these questions.\”

(Full disclosure: My paid job has been Managing Editor of the JEP since the first issue in 1987. All papers in JEP from the first issue to the most recent are freely available on-line courtesy of the American Economic Association. Shleifer was Editor of JEP, and thus my boss, from 2003-2008.)

On the first question of the existence of media bias, how does one go beyond anecdotes about how different newspapers or TV channels covered certain stories and come up with a defensible quantitative way of detecting media bias? The modern approach has been to use text analysis. For example, have a computer search a dataset of all speeches given in Congress during the year 2005 and identify phrases that are much more commonly used by Republicans or by Democrats. In 2005, for instance, Democrats were much more likely to refer to the \”war in Iraq\” while Republicans were more likely to refer to the \”war on terror.\” Then search the text used by media outlets, and see whether they are more likely to be using Republican phrases, Democratic phrases, or an even mixture of the two.
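To make the logic concrete, here is a minimal sketch of a phrase-counting slant score in the spirit of this approach. It is not Gentzkow and Shapiro's actual method or data: the phrase lists and sample text below are invented for illustration, and the real work involves statistically selecting diagnostic phrases from the Congressional Record.

```python
# Hypothetical phrase lists; a real analysis would select these
# statistically from congressional speeches rather than by hand.
REPUBLICAN_PHRASES = ["war on terror", "death tax", "personal accounts"]
DEMOCRATIC_PHRASES = ["war in iraq", "estate tax", "private accounts"]

def slant_score(text):
    """Return a score in [-1, 1]: positive means the text leans toward
    Republican phrase usage, negative toward Democratic usage."""
    text = text.lower()
    r = sum(text.count(p) for p in REPUBLICAN_PHRASES)
    d = sum(text.count(p) for p in DEMOCRATIC_PHRASES)
    if r + d == 0:
        return 0.0  # no diagnostic phrases found
    return (r - d) / (r + d)

# Made-up sample: one Republican phrase, two Democratic phrases.
sample = ("Lawmakers debated the war on terror, while critics said the "
          "war in iraq and the war in iraq dominated the session.")
print(slant_score(sample))  # -0.3333... (leans Democratic in phrase usage)
```

Run over many articles from one outlet, an average of such scores places the outlet on a partisan scale anchored by the speech patterns of members of Congress.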

Gentzkow didn\’t invent this approach to measuring media bias. For earlier work on the subject in the research literature, a starting point would be the article by Tim Groseclose and Jeffrey Milyo, “A Measure of Media Bias,\” in the Quarterly Journal of Economics in 2005 (120:4, pp. 1191–1237). But in work with co-author Jesse Shapiro, Gentzkow applied the approach to newspapers across the US and was thus able to provide hard evidence that many newspapers indeed exhibit partisan bias in how they report the news.

Does the bias of newspapers reflect their owners, or their customers? Here\’s how Shleifer describes it:

\”Gentzkow and Shapiro then collected data on the use of these highly diagnostic phrases in US daily newspapers and used these data to place news outlets on the ideological spectrum comparable to members of Congress. In addition to this large methodological advance in how to measure partisan newspaper slant, the paper used detailed information on newspaper circulation and voting patterns across space to estimate a model of the demand for slant and to show that—consistent with the theory—consumers gravitate to like-minded sources, giving the newspapers an incentive to tailor their content to their readers. They also show that newspapers respond to that incentive and that variation in reader ideology explains a large portion of the variation in slant across US daily newspapers. … [A]fter controlling for a newspaper’s audience, the identity of its owner does not affect its slant. Two newspapers with the same owner look no more similar in their slant than newspapers with different owners. Ownership regulation in the US and elsewhere is based on the premise that a news outlet’s owner determines how it spins the news. Gentzkow and Shapiro produced the first large-scale test of this hypothesis, which showed that, contrary to the conventional wisdom and regulatory stance, demand is much more influential in shaping content than supply as proxied by ownership.\”

Does more competition in the media tend to increase or diminish this bias? This question is tough to answer, but in a different paper, Gentzkow and Shapiro look at the closely related topic of how people of different political beliefs use the Internet. Specifically, do people tend to cluster at the websites that match their ideology, or do they surf around? Shleifer describes the result:

One might worry that the increase in choice among news suppliers as a result of the Internet would allow news consumers to self-segregate, reading only news that confirms their preconceptions. Gentzkow and Shapiro test this claim using data from a panel of Internet users for which they have a survey-based measure of political ideology and tracking data on online news consumption. They find that ideological segregation is surprisingly low online. The average conservative’s news outlet on the Internet is about as conservative as usatoday.com; the average liberal’s is as liberal as cnn.com. Strikingly, the Internet is less ideologically segregated than US residential geography: two people using the same news website are less likely to have an ideology in common than two people living in the same zip code.
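One common way to quantify this kind of segregation is an isolation-index-style measure: how much more exposed the average conservative is than the average liberal to conservative audiences. The sketch below is a simplified illustration of that idea, not Gentzkow and Shapiro's actual computation, and the visit data is invented.

```python
from collections import Counter

# Made-up browsing data: (user ideology, news site visited).
visits = [
    ("con", "siteA"), ("con", "siteA"), ("con", "siteB"),
    ("lib", "siteB"), ("lib", "siteB"), ("lib", "siteA"),
]

# Conservative share of each site's visits.
total = Counter(site for _, site in visits)
cons = Counter(site for ideo, site in visits if ideo == "con")
share = {site: cons[site] / total[site] for site in total}

def avg_exposure(ideology):
    """Average conservative-audience share of the sites this group visits."""
    exposures = [share[site] for ideo, site in visits if ideo == ideology]
    return sum(exposures) / len(exposures)

# Isolation index: 0 means conservatives and liberals see identical
# audiences; 1 means complete segregation.
isolation = avg_exposure("con") - avg_exposure("lib")
print(round(isolation, 3))  # 0.111 for this toy data
```

On this toy data the index is low because both groups mix across both sites; the empirical finding described above is that real online news consumption looks more like this mixed pattern than like complete self-segregation.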

Finally, is it the case that people just choose the media outlets that reflect their bias, in which case the media bias doesn\’t affect their opinions or their voting patterns? Or is there reason to believe that the extent of media bias does affect opinions and voting patterns?

In one study, Gentzkow looked at historical data on how television coverage spread across the United States, and what changes in voting patterns followed. As Shleifer writes: \”He estimates a huge negative effect: the availability of television accounts for between one-quarter and one-half of the total decline in voter turnout since the 1950s. Matt argues that a principal reason for this is substitution in media consumption away from newspapers, which provide more political coverage and thus stimulate more interest in voting.\”

In a different study, Gentzkow and co-authors look at the patterns of newspapers being born and dying from 1869 to 2004, and compare this with voting patterns. Shleifer writes:

They find that newspapers have a large effect in raising voter turnout, especially in the period before the introduction of broadcast media. However, the political affiliation of entering newspapers does not affect the partisan composition of an area’s vote. The latter result contrasts with another important finding, by DellaVigna and Kaplan (2007), that the entry of Fox News does sway some voters toward voting Republican. An interpretation consistent with these findings is that newspapers motivate but don’t persuade, while television does the opposite.

Research on media bias and its political effects is certainly not settled, but for what it\’s worth, I\’d sum up the existing evidence in this way. There\’s lots of political bias in the media, mainly because media outlets are trying to attract customers with similar bias. But in the world of the Internet, at least, people of all beliefs do surf readily between news websites with different kinds of bias. The growth of television to some extent displaced the role of newspapers and lowered voter turnout. For the future, a central question is whether a population that gets its news from a mixture of websites and social media becomes better-informed or more willing to vote, or whether it instead becomes expert at selfies, cat videos, World of Goo, Candy Crush, Angry Birds, and the celebrity-du-jour.

US Dependency Ratios, Looking Ahead

In the lingo of demographers and economists, the \”dependency ratio\” refers to the fact that the working-age population from ages 18-64 produces most of the output in any economy, but a certain amount of the consumption is done by those under 18 and those 65 and over. Thus, there is an \”old-age dependency ratio,\” which is the population 65 and older divided by the population from 18-64; a \”youth dependency ratio,\” which is the under-18 population divided by the population from 18-64; and a \”total dependency ratio,\” which is the sum of the under-18 and 65-and-over populations, divided by the 18-64 population.
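The arithmetic behind these definitions is simple enough to show directly. The population counts below are made-up round numbers chosen only to be roughly US-like in scale; they are not from the Census report.

```python
def dependency_ratios(under_18, working_18_64, age_65_plus):
    """Compute the three dependency ratios defined above.
    Inputs are population counts in any consistent unit (e.g., millions)."""
    youth = under_18 / working_18_64
    old_age = age_65_plus / working_18_64
    total = (under_18 + age_65_plus) / working_18_64
    return youth, old_age, total

# Illustrative counts in millions (invented for this example).
youth, old_age, total = dependency_ratios(
    under_18=70, working_18_64=198, age_65_plus=46)
print(f"youth: {youth:.0%}, old-age: {old_age:.0%}, total: {total:.0%}")
# -> youth: 35%, old-age: 23%, total: 59%
```

Note that the total dependency ratio is just the sum of the youth and old-age ratios, since they share the same 18-64 denominator.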

Sandra L. Colby and Jennifer M. Ortman from the US Census Bureau offer some projections about dependency ratios in the March 2015 report \”Projections of the Size and Composition of the U.S. Population: 2014 to 2060\” (P25-1143).

As the figure shows, the youth dependency ratio is expected to hover around 35%–in fact, to decline a bit–in the decades to come. However, the old-age dependency ratio is on the rise. It\’s now about 23%, but by 2035 it will be up to about 38%. Taking the two ratios together, the under-18 population plus the 65-and-over population is now about 60% of the size of the 18-64 population, but the ratio is headed for about 75% in the next two decades.

It\’s worth emphasizing that the old-age dependency ratio for the next couple of decades in the figure can be estimated with a pretty high degree of accuracy. After all, anyone who is going to be 21 or older in 2035 has already been born. Large fluctuations in death rates or immigration rates are the only factors that could move the old-age dependency ratio substantially.

The report also includes a breakdown of the growth of population by age that helps to clarify what is happening behind these ratios. By 2040, the under-18 population is projected to rise by a total of 5%; the 18-44 population by 12%; the 45-64 population by 10%; and the 65 and older population by 78%.

Most of the rise in the old-age dependency ratio happens by the early 2030s. Thus, one can think about the next two decades as a time of transition: transition in public policies affecting the elderly like Social Security and Medicare; transition in work patterns as we seek to encourage at least some of the elderly to stay in the workforce longer; transition in how we think about the design of public services and facilities everywhere from hotel rooms to park trails for a population with a larger share of the elderly; and transition in how we start building systems that can support families and communities in providing assistance and care for the elderly who need it.