How Economic Statistics Are Being Transformed

Economic statistics are invisible infrastructure, supporting better decisions by government, business, and individuals. But the fundamental survey-based methods of US government statistics have substantially eroded, because people and firms have become less willing to fill out surveys in a timely and accurate way. There are active discussions underway about how to replace or supplement existing statistics with either administrative data from government programs or private-sector data. But these approaches have problems of their own.

For a big-picture overview of these issues, a useful starting point is the three-paper "Symposium on the Provision of Public Data" in the Winter 2019 issue of the Journal of Economic Perspectives.

But if you want to get down and dirty with the details of what changes to government statistics are being researched and considered, you will want to turn to the papers from the most recent Conference on Research in Income and Wealth, held March 16-17 in Washington, DC. The CRIW, which is administered by the National Bureau of Economic Research, has been holding conferences since 1939 with a mixture of researchers from government, academia, business, and nonprofits to talk about issues of economic measurement. Sixteen of the papers from the conference, most also including presentation slides, are available at the website.

In the Winter 2019 JEP, Hughes-Cromwick and Coronado point out that the combined annual budget for the 13 major US government statistical agencies is a little over $2 billion. For comparison, the "government-data-intensive" sector of the economy, which includes firms that rely heavily on government data like "investment analysts, database aggregator firms, market researchers, benchmarkers, and others," now has annual revenues in the range of $300 billion or more. They also offer concrete examples of how firms in just a few industries–automobiles, energy, and financial services–use government data as a basis for their own additional calculations for a very wide range of business decisions.

Rockoff points out that the main government statistical series–inflation, unemployment, and GDP–all emerged out of historical situations in which it became important for politicians to have an idea of what was actually going on. For example, early US government efforts at measuring inflation emerged from public controversy over the extent of price change during the Civil War and the early 1890s. Early measurement of GDP and national income emerged from disputes over the extent of inequality in the opening decades of the 20th century. Some of the earliest US unemployment statistics were collected in Massachusetts in the aftermath of the Panic of 1873 and the depression that followed. As he points out, the ongoing development of these statistics was then shaped by changes in price, output, and unemployment during World Wars I and II and the Great Depression.

This interaction between US government policy and statistics goes back to the origins of the US Census in 1790, when James Madison (then a member of the House of Representatives) argued that the Census should do more than just count heads, but should collect other economic data as well. In the Congressional debate, Madison said:

If gentlemen have any doubts with respect to its utility, I cannot satisfy them in a better manner, than by referring them to the debates which took place upon the bills, intended, collaterally, to benefit the agricultural, commercial, and manufacturing parts of the community. Did they not wish then to know the relative proportion of each, and the exact number of every division, in order that they might rest their arguments on facts, instead of assertions and conjectures?

In my own view, Madison's plaintive cry "in order that they might rest their arguments on facts" doesn't apply only or even mainly to Congress. Public economic statistics are a way for all citizens to keep tabs on their society and their government, too.

In the JEP, Jarmin points out that survey-based methods of collecting government data have been seeing lower response rates. This pattern applies to the main government surveys of households, including the Current Population Survey, the Survey of Income and Program Participation (SIPP), the Consumer Expenditure Survey, the National Health Interview Survey, and the General Social Survey. Similar concerns apply to surveys of businesses: the Monthly Retail Trade Survey, the Quarterly Services Survey, and the Current Establishment Survey. Surveys have the considerable advantage of being nationally representative, but they also have the disadvantage that you are relying on what people tell you, rather than on what actually happened. For example, if you compare actual payments from the food stamp program to what people report on surveys, you find that many people are receiving assistance from food stamps but not reporting it (or underreporting it) on the survey. Moreover, surveys are costly to carry out.

Can survey-based data be replaced by some combination of administrative data from government programs, private-sector data (which could perhaps be automatically submitted by firms), and "big data" automatically collected from websites? Sure, up to a point.

For example, people's income and work can be examined by looking at income tax data and Social Security payroll data. A private company called NPD collects point-of-sale data directly from retailers; could the government tap into this data, or perhaps contract with NPD to collect the data the government desires, rather than doing its own separate survey on retail sales? Instead of collecting price data from stores for the measure of inflation, might it be possible to use automated data from price scanners in stores, or even to scrape the data from websites that advertise prices for certain goods?

The papers presented at the CRIW conference discuss many specific proposals along these lines. Many are promising, and none are easy. For example, using administrative data from the IRS or Social Security raises concerns about privacy, along with practical concerns about linking together data from very different computer systems. Is data collected by a private firm likely to be nationally representative? If the US government relies on a private firm for key data, how does the government make sure that the data isn't disclosed in advance, and what happens to the data if the firm doesn't perform well or goes out of business?

The idea of using data from barcodes to get a detailed view of sales and prices is definitely intriguing. But barcodes often change, which makes them complex to work with. As Ehrlich, Haltiwanger, Jarmin, Johnson, and Shapiro point out in their paper for the CRIW conference:

Roughly speaking, if a good defined at the barcode or SKU level is sold today, there is only a fifty percent chance it will be sold a year from today. This turnover of goods is one of the greatest challenges of using the raw item-level data for measurement, but also is an enormous opportunity. When new goods replace old goods there is frequently both a change in price and quality. Appropriately identifying whether changing item-level prices imply changes in the cost of living or instead reflect changes in product quality is a core issue for measuring activity and inflation. The statistical agencies currently perform these judgments using a combination of direct comparisons of substitutes, adjustments, and hedonics that are very hard to scale.
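The matched-model problem the quote describes can be illustrated with a toy example: a standard price index compares only the items sold in both periods, so heavy product turnover shrinks the usable sample and silently drops whatever price or quality change came with the replacement goods. This is a simplified sketch with invented data, not the statistical agencies' actual procedure.

```python
# Toy matched-model price index: only items present in both periods
# are compared, so product turnover shrinks the usable sample.
# All data are invented for illustration.

def matched_model_index(prices_t0, prices_t1):
    """Ratio of average prices over items sold in both periods."""
    matched = set(prices_t0) & set(prices_t1)
    if not matched:
        raise ValueError("no matched items")
    # Simple (unweighted) ratio of averages over the matched set
    p0 = sum(prices_t0[i] for i in matched) / len(matched)
    p1 = sum(prices_t1[i] for i in matched) / len(matched)
    return p1 / p0, len(matched)

# Year 0: four barcodes on the shelf
year0 = {"A": 2.00, "B": 3.00, "C": 5.00, "D": 4.00}
# Year 1: C and D have been replaced by new items E and F --
# roughly the ~50 percent annual turnover the paper describes
year1 = {"A": 2.10, "B": 3.15, "E": 6.00, "F": 4.50}

index, n_matched = matched_model_index(year0, year1)
print(f"matched items: {n_matched} of {len(year1)}")  # 2 of 4
print(f"measured inflation: {index - 1:.1%}")         # 5.0%
```

With half the items turning over, the index rests on only two of the four goods sold in year 1, and it says nothing about whether the new items E and F represent a price increase, a quality improvement, or both; that is the judgment call the quote says is hard to scale.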

Moreover, if government statistics are emerging from an array of different sources and evolving over time, how does one figure out whether changes in unemployment, inflation, and GDP are a result of actual changes in the economy, or just changes in how the variables are being measured? How does one balance the desire for accurate and detailed measurement, which often takes time, with a desire for continual immediate updates to the data?

Overall, it seems to me that one can discern a shadowy pattern emerging. There will be highly detailed, representative, and costly government statistics published at longer intervals–maybe a year or five years or even 10 years apart. These will often rely on nationally representative surveys. But in between, for single-month or three-month periods, the updates to these figures will rely more heavily on extrapolations from the administrative and private sources that are available. We will know that these updates are not necessarily representative and are subject to later corrections. The short-term updates may not always be fully transparent, because of concerns over privacy from both firms and individuals, but for the short term they will be a reasonable way to proceed.

The dream is that it becomes possible to develop better statistics with costs remaining the same or even falling. But to my mind, some additional investment in government statistics would be an inexpensive way of supporting the decisions of firms and policymakers, and of providing accountability to citizens.

Here's a list of the papers (and presentation slides) available at the conference website:

Why Call it "Socialism"?

I've been coming around to the belief that most modern arguments over "socialism" are a waste of time, because the content of the term has become so nebulous. When you drill down a bit, a lot of "socialists" are really just saying that they would like to have government play a more active role in providing various benefits to workers and the poor, along with additional environmental protection.

Here is some evidence on how Americans perceive "socialism" from a couple of Gallup polls, one published in May 2019 and one in October 2018. The May 2019 survey found that compared to 70 years ago, not long after World War II, more Americans both favor and oppose socialism; it's the undecideds that have declined.

But when people say they are in favor of "socialism" or opposed to it, what do they mean? The same survey found that when asked a question about markets vs. government, there were heavy majorities in favor of the free market being primarily responsible for technological innovation, the distribution of wealth, and the economy overall, and modest majorities in favor of the free market taking the lead in higher education and healthcare. Government leadership was preferred in online privacy and environmental protection.

There are apparently a reasonable number of people who think socialism is a good thing but would prefer to see free markets be primarily responsible in many areas. Clearly, this form of socialism isn't about government control of the means of production. The October 2018 Gallup survey asked more directly what people primarily meant by socialism, and compared the answers to a poll from the aftermath of World War II.

Seventy years ago, the most common answer for a person's understanding of the term "socialism" was government economic control, but that answer has fallen from 34% to 17% over time. Now, the most common answer for one's understanding of socialism is that it's about "Equality – equal standing for everybody, all equal in rights, equal in distribution." As the Gallup folks point out, this is a broad category: "The broad group of responses defining socialism as dealing with 'equality' are quite varied — ranging from views that socialism means controls on incomes and wealth, to a more general conception of equality of opportunity, or equal status as citizens." The share of those who define "socialism" as "Benefits and services – social services free, medicine for all" has also risen substantially. There are also 6% who think that "socialism" means "Talking to people, being social, social media, getting along with people."

The October 2018 survey also asked whether the US already has socialism. Just after World War II, when the US economy had experienced extreme government control over the economy and most people defined "socialism" in those terms, 43% said that the US already had socialism. Now, the share of those who believe we already have socialism has dropped to 38%. One suspects that most of those who think we have socialism are not happy about it, and that a substantial share of those who think we don't have socialism wish it were otherwise. Clearly, they are operating from rather different visions of what is meant by "socialism."

There's no denying that the word "socialism" adds a little extra kick to many conversations. Among the self-professed admirers, "socialism" is sometimes pronounced with an air of defiance, as if the speaker were imagining Eugene Debs, five times the Socialist Party candidate for President, voting for himself from a jail cell in the 1920 election. In other cases, "socialism" is pronounced with an air of smiling devotion in the face of expected doubters, reminiscent of the very nice Jehovah's Witnesses or Mormons who occasionally knock on my door. In still other cases, "socialism" is pronounced like a middle-schooler saying a naughty word, wondering or hoping that poking around with the term will push someone's buttons, so the speaker can mock them for being uncool. And "socialism" is sometimes tossed out with a world-weary tone, in a spirit of I-know-the-problems-but-what-can-I-say.

My own sense is that the terminology of "socialism" has become muddled enough that it's not useful in most arguments. For example, say that we're talking about steps to improve the environment, or to increase government spending to help workers. One could, of course, have an argument over whether the countries that have bragged most loudly about being "socialist" have a good record in protecting worker rights or the poor or the environment. One side could yelp about Sweden and the flaws that arise in a market-centric economy; the other side could squawk about the Soviet Union or Venezuela and the flaws of a government-centric economy. (As I've argued in the past, I view the Scandinavian countries–and they view themselves–as a variation of capitalism rather than as socialism.)

While those conversations wander along well-trodden paths, they don't have much to say about–for example–how or whether the earned income tax credit should be expanded, whether the government should assist with job search, whether the minimum wage should rise in certain areas, how a carbon tax would affect emissions, how to increase productivity growth, or how to address the long-run fiscal problems of Social Security. Bringing emotion-laden and ill-defined terms like "socialism" into these kinds of specific policy conversations just derails them.

Thus, my modest proposal is that unless someone wants to advocate government ownership of the means of production, it's more productive to drop "socialism" from the conversation. Instead, talk about the specific issue and the mixture of market and government actions that might address it, based on whatever evidence is available on costs and benefits.

Why Did the US Labor Share of Income Fall So Quickly?

The share of US national income going to labor sagged through the second half of the twentieth century, but then plunged starting around 2000. The McKinsey Global Institute takes "A new look at the declining labor share of income in the United States" in a report by James Manyika, Jan Mischke, Jacques Bughin, Jonathan Woetzel, Mekala Krishnan, and Samuel Cudre (May 2019).

Here's a figure showing the basic background. From 1947 to 2000, the labor share of income fell from 65.4% to 62.3%. There already seemed to be a pattern of decline in the 1980s and 1990s in particular, which was reversed for a short time at the tail end of the dot-com boom. But since 2000, the labor share has sunk, reaching 56.7% in 2016.
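To see how sharp the break around 2000 was, it helps to annualize the decline using the figures just quoted; the arithmetic below is a simple back-of-the-envelope calculation based on those numbers.

```python
# Annualized pace of decline in the US labor share, in percentage
# points per year, using the figures quoted above.
pre = (65.4 - 62.3) / (2000 - 1947)   # 1947-2000
post = (62.3 - 56.7) / (2016 - 2000)  # 2000-2016

print(f"1947-2000: {pre:.2f} pp per year")   # about 0.06
print(f"2000-2016: {post:.2f} pp per year")  # 0.35
print(f"speed-up: roughly {post / pre:.0f}x")
```

In other words, the post-2000 decline ran roughly six times faster than the average pace over the previous half-century, which is what makes "plunged" the right word.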

Why did this happen? The MGI analysis looks at 12 different sectors of the economy and how different possible explanations played out in these sectors. As the authors point out, these sectors all experienced a decline in labor share, but the sectors as a whole make up less than half of total US output, and tend to be "more globalized, more digitized, and more capital-intensive than the overall economy." Thus, factors that might affect the labor share, like globalization, growth of the digital economy, and substituting capital for labor, are likely to play a bigger role in these sectors than in the rest of the economy.

But here's their ranking of the five main causes of the decline in labor's share of income:

We find that the main drivers for the decline in the labor share of income since 1999 are as follows, starting with the most important: supercycles and boom-bust (33 percent), rising depreciation and shift to IPP capital (26 percent), superstar effects and consolidation (18 percent), capital substitution and technology (12 percent), and globalization and labor bargaining power (11 percent).
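Since the labor share fell by roughly 5.6 percentage points between 2000 and 2016 (62.3% to 56.7%), the quoted percentages can be translated into rough percentage-point contributions. This is purely illustrative arithmetic on the numbers above; MGI's own decomposition is built sector by sector, not by this simple proration.

```python
# Converting MGI's percent-of-decline factors into rough
# percentage-point contributions to the ~5.6 pp fall in labor share.
# Purely illustrative arithmetic based on the figures quoted above.
total_drop_pp = 62.3 - 56.7  # about 5.6 percentage points
factors = {
    "supercycles and boom-bust": 33,
    "depreciation / shift to IPP capital": 26,
    "superstar effects and consolidation": 18,
    "capital substitution and technology": 12,
    "globalization and labor bargaining power": 11,
}
for name, share in factors.items():
    print(f"{name}: {share / 100 * total_drop_pp:.1f} pp")
print(f"shares sum to {sum(factors.values())} percent")  # 100
```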

The list is intriguing and a little surprising, because some of the factors most commonly discussed as potential causes of the decline in labor share–globalization, capital substituting for labor, and "superstar" firms emerging from industry consolidation–play relatively smaller roles. One possible interpretation is that the sharp drop in labor income from 1999-2016 was a little deceptive, because it was based in part on cyclical factors, while a number of the factors underlying a longer-term decline in labor share continue to operate.

On the topic of supercycles and boom-bust, the MGI report says:

"Even after adjusting for depreciation, we estimate that supercycles and boom-bust effects—particularly in extractive industries and real estate—account for one-third of the surge in gross capital share since the turn of the millennium. In two sectors, mining and quarrying and coke and refined petroleum, capital share increases were led by increased returns on invested capital and higher profit margins during a sharp and prolonged rise in prices of metals, fuels, and other commodities fed by China's economic expansion in the 2000s. … Housing-related industries also contributed. The capital-intensive real estate sector grew in importance in terms of gross value added during the bubble, leading to a substantial mix effect raising the capital share of income."

On the topic of rising depreciation and shift to intangible capital, they write: 

Higher depreciation is the second-largest contributor to the increase in gross capital share, accounting for roughly one-fourth (26 percent) of the total. Depreciation matters for labor share analyses, because the baseline, GDP, is a "gross" metric before depreciation; if more capital is consumed during the production process, there is less net margin to be distributed to labor or capital. This fact, which receives little attention in the literature, is particularly visible in manufacturing, the public sector, primary industries, and infrastructure services. One reason depreciation has become such a large factor in driving up the capital share is the increase in the share of intellectual property products capital—software, databases, and research and development—which depreciates faster than traditional capital investments such as buildings. The increase has been substantial: the share of IPP capital rose from 5.5 percent of total net capital stock for the total economy in 1998 to 7.3 percent in 2016, an increase of almost 33 percent.
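The accounting point here, that GDP is gross of depreciation, can be made concrete with invented numbers: hold the split of net income between labor and capital fixed, and raising depreciation alone pulls down labor's measured share of gross income.

```python
# Why depreciation matters for the labor share: GDP is a gross
# measure, so capital "consumed" in production sits in the
# denominator but is available to neither labor nor net capital.
# All numbers are invented for illustration.

def labor_share_of_gross(labor, net_capital, depreciation):
    gross_value_added = labor + net_capital + depreciation
    return labor / gross_value_added

# Identical split of NET income (70 labor / 20 capital) in both cases;
# only depreciation differs.
before = labor_share_of_gross(labor=70, net_capital=20, depreciation=10)
after = labor_share_of_gross(labor=70, net_capital=20, depreciation=20)

print(f"labor share, low depreciation:  {before:.1%}")  # 70.0%
print(f"labor share, high depreciation: {after:.1%}")   # 63.6%
```

Nothing about the bargain between workers and owners changed in this toy example; the measured labor share of gross income fell purely because more capital was being used up.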

On the topic of superstar effects and consolidation:

We estimate that superstar effects contribute about one-fifth of the capital share increase. We base this estimate on analyzing which industries actually saw an increase in ROIC [return on invested capital] as a direct driver of capital share increases and where the increase goes hand-in-hand with (and may partially result from) rising consolidation or the rise of superstar firms. Such patterns seemed particularly pronounced in several industries, and for each of them, superstar effects were marked as a "highly relevant" or "relevant" driver. Telecommunications, media, and broadcasting, for instance, experienced significant rises in returns. The transportation and storage industry went through another round of airline consolidation and recovered from the crisis to ROIC levels that are high by historical standards. The pharmaceutical and chemicals as well as information and computer services sectors are also known for superstar effects. Finally, wholesale and retail as well as refining also went through a spurt in returns and consolidation.

On the topic of capital substitution and technology:

The fourth-most important factor driving the increase in capital share of income appears to relate to a substitution of capital for labor, through factors including decreasing technology prices and better capabilities of machines. We estimate that this effect accounts for 12 percent of the increase in capital share in the industries we analyzed. 

On the topic of globalization and labor bargaining power:

One of the most discussed reasons for labor’s declining share of income—the weakening of labor bargaining power under pressure from globalization—is, in our analysis, not as important as other factors for the total economy in the time frame we focus on. It explains 11 percent of the overall decline. … Globalization and labor bargaining power did have a very large and visible impact in a few of our selected sectors. A prime example is automobile manufacturing, where declining union coverage and falling wages as production shifted to the southern United States and Mexico increased the capital share. … To a lesser extent, the computer and electronics sector, which contributed 9 percent to the total increase in capital share, was also affected by growing globalization of supply chains and offshoring, although other factors played an important role for this sector. Finally, the numerous remaining smaller manufacturing industries not among the 12 sectors we extensively analyze have probably been affected by globalization, and more specifically by the rise of China as a major trade hub since its accession to the World Trade Organization in 2001.

You can read the report for the sector-by-sector analysis, and consider how the factors might affect the sectors of the economy outside the focus of the report. But again, because the sectors on which they focus are "more globalized, more digitized, and more capital-intensive than the overall economy," globalization, growth of the digital economy, and substituting capital for labor are less likely to be major factors in the rest of the economy than in these sectors.

What if the Argument Was About Whether to Remove Congestion Pricing?

When economists talk about "congestion pricing"–the idea of charging tolls during rush-hour periods to reduce congestion–it ends up sounding to a lot of people like an unpleasant combination of tangible costs and nonexistent benefits. But what if we turned the question upside down? Instead of thinking about adding congestion tolls, what if we were having an argument about removing them?
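For readers who want the mechanics, the textbook logic of a congestion toll can be sketched with a linear model: each driver weighs only their own travel cost, not the delay they impose on everyone else, so unpriced traffic exceeds the efficient level, and the optimal toll charges exactly that gap. The parameter values below are invented for illustration.

```python
# Textbook congestion model with linear demand and congestion costs.
# Willingness to pay:      P(q)  = a - b*q
# Average cost per driver: AC(q) = c + d*q   (time cost rises with traffic)
# Total cost is q*AC(q), so marginal social cost: MC(q) = c + 2*d*q.
# All parameter values are invented for illustration.
a, b = 20.0, 1.0   # demand intercept and slope
c, d = 2.0, 0.5    # free-flow cost and congestion slope

# Unpriced equilibrium: drivers enter until P(q) = AC(q)
q_eq = (a - c) / (b + d)
# Efficient traffic level: P(q) = MC(q)
q_opt = (a - c) / (b + 2 * d)
# Optimal toll = gap between marginal and average cost at q_opt = d*q_opt
toll = d * q_opt

print(f"unpriced traffic:  {q_eq:.1f}")   # 12.0
print(f"efficient traffic: {q_opt:.1f}")  # 9.0
print(f"optimal toll:      {toll:.2f}")   # 4.50
```

The toll internalizes the congestion externality; the revenue it raises is then available for the kind of redistribution Manville discusses below.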

Michael Manville offers an interesting speculation along these lines in "Longer View: The Fairness of Congestion Pricing: The choice between congestion pricing fairness and efficiency is a false one," in the Spring 2019 issue of Transfers magazine. He writes:

"Suppose we had a world where all freeways were priced, and where we used the revenue to ease pricing's burden on the poor. Now suppose someone wanted to change this state of affairs, and make all roads free. Would we consider this proposal fair? The poorest people, who don't drive, would gain nothing. The poor who drive would save some money, but affluent drivers would save more. Congestion would increase, and so would pollution. The pollution would disproportionately burden low-income people. With priced roads, poor drivers were protected by payments from the toll revenue. With pricing gone, the revenue would disappear as well, and so would compensation for people who suffered congestion's costs.

"This proposal, in short, would reduce both efficiency and equity. It would harm the vulnerable, reward the affluent, damage the environment, and make a functioning public service faulty and unreliable. Most people would view the idea with skepticism — the same way they might view a proposal to abolish water meters. Today, however, this situation is not a proposal but our status quo, and so it is a departure from this scenario, not its introduction, that arouses our suspicion. We have so normalized the current condition of our transportation system that we unthinkingly consider it fair and functional. It is neither. Our system is an embarrassment to efficiency and an affront to equity."

It's an interesting question. If the question were about whether to eliminate an existing congestion tax, would people prefer a return to congestion, along with the loss of revenue that could, say, support an improved mass transit system?

One of the concerns often expressed about congestion pricing is that it would be a burden on the poor. Manville offers some reflections on that theme as well:

Few equity agendas in other areas of social policy, after all, demand that all goods be free. Almost no one, for example, suggests that all food be free because some people are poor. Society instead identifies poor people and helps them buy food. So why should all roads be free because some drivers are poor? Most drivers aren’t poor, many poor people (including the poorest) don’t drive, and most driving is done by the middle and upper classes. It is entirely possible to price our roads while maintaining a commitment to economic fairness.

Free roads are not a good way to help poor people. Virtually every fairness-based criticism of priced roads — they help the rich more than the poor, they prevent some people from traveling, they actively harm the poor — also applies to free roads. … There is nothing intrinsically unfair about pricing roads, or intrinsically fair about leaving them free. And people who worry about harms to the poor when roads are priced, but not when roads are free, may be worried more about the prices than the poor.

For some of Manville's research on the issue of congestion pricing and the poor, see Manville, M., & Goldman, E. (2018). "Would Congestion Pricing Harm the Poor? Do Free Roads Help the Poor?" Journal of Planning Education and Research, 38(3), 329–344.

For some of my previous attempts to explain the case for congestion pricing, see:

Time for Fiscal Rules?

Should governments set rules to constrain the size of government borrowing on an annual basis, or the government debt accumulated over time? Pierre Yared discusses the question in "Rising Government Debt: Causes and Solutions for a Decades-Old Trend," in the Spring 2019 issue of the Journal of Economic Perspectives.

There's really no economic case to be made for the plain-vanilla rule that national governments should balance their budget every year. During a recession, for example, tax revenues will fall as income falls, and government spending on programs like unemployment insurance, Medicaid, and food stamps will rise. If, in the face of these forces, the government wanted to keep a balanced budget during a recession, it would need to find ways to raise tax revenues and cut other spending even while the economy is weak. A more sensible strategy is to find ways for these fiscal "automatic stabilizers" to function more strongly.

But the foolishness of a simplistic rule to balance the budget every year doesn't mean that no rules at all can work. As Yared writes (citations omitted): "Thus, governments across the world have adopted fiscal rules—such as mandated deficit, spending, or revenue limits—to curtail future increases in government debt. In 2015, 92 countries had fiscal rules in place, a dramatic increase from 1990, when only seven countries had them."

The form of these rules varies across countries. A basic lesson seems to be that all fiscal rules are imperfect, and can be gamed or avoided if a government wishes to do so, but also that well-designed rules–even with looseness and imperfections–do offer some constraints and limits that can hold down the amount of government borrowing.

Yared mentions an IMF study by Luc Eyraud, Xavier Debrun, Andrew Hodge, Victor Duarte Lledo, and Catherine A. Pattillo called "Second-Generation Fiscal Rules: Balancing Simplicity, Flexibility, and Enforceability" (IMF Staff Discussion Note SDN/18/04, April 13, 2018). They sum up the situation with fiscal rules in this way:

By improving fiscal performance, well-designed rules help build and preserve fiscal space while allowing its sensible use. Good rules encourage building buffers in good times and allow fiscal policy to support the economy in bad times. This implies letting automatic stabilizers operate symmetrically over the cycle and including escape clauses that allow discretionary fiscal support when needed. By supporting a credible commitment to fiscal sustainability, rules can also create space in the budget for financing growth-enhancing reforms and inclusive policies. 

To be effective, fiscal rules should have three main properties—simplicity, flexibility, and enforceability. These three properties are very difficult to achieve simultaneously, and past reforms have struggled to find the right balance. In the past decade, “second-generation” reforms have expanded the flexibility provisions (for example, with new escape clauses) and improved enforceability (by introducing independent fiscal councils, broader sanctions, and correction mechanisms). However, these innovations as well as the incremental nature of the reforms have made the systems of rules more complicated to operate, while compliance has not improved. … 

This paper presents new evidence that well-designed rules are indeed effective in constraining excessive deficits. Country experiences show that successful rules generally have broad institutional coverage, are tightly linked to fiscal sustainability objectives, are easy to understand and monitor, and support countercyclical fiscal policy. Supporting institutions, like fiscal councils, are also important. In contrast, rules that are poorly designed and do not align well with country circumstances can be counterproductive. Novel empirical research finds that fiscal rules can reduce the deficit bias even when they are not complied with.

In his essay in the JEP, Yared offers some more detailed insights. In some ways, the key issue isn't the fiscal rule you set, but rather what consequences arise if the rule is broken. Here's Yared:

There are several issues to take into account when considering punishments for breaking fiscal rules. First, whether or not rules have been broken might be unclear. There are numerous examples of how governments can use creative accounting to circumvent rules. Frankel and Schreger (2013) describe how euro-area governments use overoptimistic growth forecasts to comply with fiscal rules. Many US states compensate government employees with future pension payments, which increases off-balance-sheet entitlement liabilities not subject to fiscal rules (Bouton, Lizzeri, and Persico 2016). In 2016, President Dilma Rousseff of Brazil was impeached for illegally using state-run banks to pay government expenses and bypass the fiscal responsibility law (Leahy 2016). Given this transparency problem, many countries have established independent fiscal councils to assess and monitor compliance with fiscal rules (Debrun et al. 2013).

A second issue to consider is the credibility of punishments. As an example, the Excessive Deficit Procedure against France and Germany in 2003 was stalled by disagreement between the European Commission and the European Council; consequently, French and German deficits persisted without penalty  …

A third issue is the response of the private sector to the violation of rules, which can also serve as a form of punishment. For example, Eyraud, Debrun, Hodge, Lledó, and Pattillo (2018) [in the IMF study mentioned above] find that the violation of fiscal rules is associated with a significant increase in interest rate spreads for sovereign borrowing. Such an increase in financing costs immediately penalizes a government for breaching a rule. …

Many governments’ fiscal rules feature an escape clause that allows violating the rule under exceptional circumstances (Lledó et al. 2017). Triggering an escape clause typically involves a review process, which culminates in a final decision by an independent fiscal council, a legislature, or citizens via a referendum. In Switzerland, for example, the government can deviate from a fiscal rule with a legislative supermajority in the cases of natural disaster, severe recession, or changes in accounting method. The cost of triggering an escape clause deters governments from using them too frequently. Moreover, because these costs largely involve a facilitation of information gathering to promote efficient fiscal policy, escape clauses are useful even in the presence of perfect rule enforcement.

Again, a theme that emerges is that a government serious about a fiscal rule will want to set up procedures to be followed when that rule is broken. In turn, those procedures should be high-profile, at least in a publicity sense, so that the decision to break the fiscal rule must be explained, justified, and evaluated by an independent commission. 

Another issue Yared mentions is that fiscal rules come in different types: instrument-based rules that focus on specific categories of spending or taxes, and overall target-based rules. He writes:

In practice, fiscal rules can constrain different instruments of policy, such as specific categories of government spending or tax rates. Different instruments may call for different thresholds … For instance, due to volatile geopolitical conditions, military spending needs may be less forecastable than other spending needs, and may thus demand more flexibility. Capital spending is another category where allowing increased flexibility may be optimal, as the benefits of capital spending accrue well into the future and are thus subject to a less-severe present bias. Thus, many countries have “golden rules,” which limit spending net of a government’s capital expenditure. … Overall, the evidence suggests that rules that distinguish across categories are indeed associated with better fiscal and macroeconomic outcomes (for discussion, see Eyraud, Lledó, Dudine, and Peralta 2018). Moreover, it can be optimal to set multiple layers of rules, for example specifying a fiscal threshold for individual categories of taxes and spending as well as on the total level of taxes and spending in the form of a (forecasted) deficit rule.  

Ultimately, Yared argues for the benefits of a hybrid rule, "which allows for an instrument threshold that is relaxed whenever a target threshold is satisfied." 
In short, practical fiscal rules are quite possible, at least according to the 90-plus countries that have them. And research suggests that such rules do constrain government borrowing, even given that they will be broken from time to time. But simple-minded fiscal rules like the US government "debt ceiling" are essentially pointless, except for connoisseurs of short-term political drama. Meaningful fiscal rules will not be simple, and will need to pay detailed attention not just to the overall goal, but to the practical issues of how much flexibility should surround the goal and what consequences will result when government borrowing breaks through even a flexible rule. 

Origins of "Microeconomics" and "Macroeconomics"

Economists have written about topics that we would now classify under the headings of "microeconomics" or "macroeconomics" for centuries. But the terms themselves are much more recent, emerging only in the early 1940s. For background, I turn to the entry on "Microeconomics" by Hal R. Varian published in The New Palgrave: A Dictionary of Economics, dating back to the first edition in 1987.

The use of "micro-" and "macro-" seems to date back to the work of Ragnar Frisch in 1933, but he referred to micro-dynamics and macro-dynamics. As Varian writes:

Frisch used the words ‘micro-dynamic’ and ‘macro-dynamic’, albeit in a way closely related to the current usage of the terms ‘microeconomic’ and ‘macroeconomic’: 

"The micro-dynamic analysis is an analysis by which we try to explain in some detail the behaviour of a certain section of the huge economic mechanism, taking for granted that certain general parameters are given … The macrodynamic analysis, on the other hand, tries to give an account of the whole economic system taken in its entirety (Frisch 1933)." 

Elsewhere Frisch gives a more explicit definition of these terms that is closely akin to the modern usage of micro and macroeconomics: ‘Microdynamics is concerned with particular markets, enterprises, etc., while macro-dynamics relate to the economic system as a whole’ …. 

John Maynard Keynes does not seem to have used the micro- and macro- language. But Varian quotes a passage from the General Theory in 1936 to show that Keynes was quite aware of the distinction. Keynes wrote:

The division of Economics between the Theory of Value and Distribution on the one hand and the Theory of Money on the other hand is, I think, a false division. The right dichotomy is, I suggest, between the Theory of the Individual Industry or Firm and of the rewards and the distribution of a given quantity of resources on the one hand, and the Theory of Output and Employment as a whole on the other hand [emphasis in the original]. 

Varian points to a somewhat obscure economist, P. de Wolff, as the first to use "microeconomic" and "macroeconomic," in 1941. Varian writes:

The earliest published reference that explicitly uses the term ‘microeconomics’ that I have been able to locate is de Wolff (1941). De Wolff, a colleague of Tinbergen at the Netherlands Statistical Institute, was well aware of the macrodynamic modelling efforts of Frisch, and may have been inspired to extend Frisch’s use of ‘micro-dynamics’ to the more general expression of ‘microeconomics’. De Wolff’s note is concerned with what we now call the ‘aggregation problem’ – how to move from the theory of the individual consuming unit to the behaviour of aggregate consumption. … He [de Wolff] is quite clear about the distinction between micro- and macroeconomics: 

"The concept of income elasticity of demand has been used with two entirely different meanings: a micro- and macro-economic one. The micro-economic interpretation refers to the relation between income and outlay on a certain commodity for a single person or family. The macro-economic interpretation is derived from the corresponding relation between total income and total outlay for a large group of persons or families (social strata, nations, etc.) [emphasis in original]."

In Varian's telling, the terms micro- and macroeconomics start popping up in academic journals and even some lesser-used textbooks in the 1940s, are in widespread use by the mid-1950s, and first appear in Paul Samuelson's canonical intro economics textbook in the 1958 edition.

Strengthening Automatic Stabilizers

For economists, "automatic stabilizers" refers to how tax and spending policies adjust during economic upturns and downturns without any additional legislation--and do so in a way that tends to stabilize the economy. For example, in an economic downturn, a standard macroeconomic prescription is to stimulate the economy with lower taxes and higher spending. But in an economic downturn, taxes fall to some extent automatically, as a result of lower incomes. Government spending rises to some extent automatically, as a result of more people becoming eligible for unemployment insurance, Medicaid, food stamps, and so on. Thus, even before the government undertakes additional discretionary stimulus legislation, the automatic stabilizers are kicking in.
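The tax side of this mechanism can be sketched with a toy example. The two-bracket schedule below is hypothetical, not any actual US tax code: with progressive rates, a fall in pre-tax income reduces tax revenue more than proportionally, so after-tax income falls by less than pre-tax income does.

```python
def tax_owed(income):
    # Hypothetical progressive schedule: 10% on income up to $50,000,
    # 30% on income above that. (Illustrative only.)
    lower = min(income, 50_000)
    upper = max(income - 50_000, 0)
    return 0.10 * lower + 0.30 * upper

boom, bust = 80_000, 72_000          # a 10% fall in pre-tax income
t_boom, t_bust = tax_owed(boom), tax_owed(bust)

# Tax revenue falls by about 17%, more than the 10% fall in income,
# so after-tax income falls by only about 8.5%: an automatic stabilizer.
print(f"revenue drop: {(t_boom - t_bust) / t_boom:.1%}")
print(f"after-tax income drop: "
      f"{((boom - t_boom) - (bust - t_bust)) / (boom - t_boom):.1%}")
```

The same logic runs in reverse in a boom, when revenue rises faster than income and cools demand.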

Might it be possible to redesign the automatic stabilizers of tax and spending policy in advance so that they would offer a quicker and stronger counterbalance when (not if) the next recession comes? The question is especially important because in past recessions, the Federal Reserve often cut the policy interest rate (the "federal funds" interest rate) by about five percentage points. But interest rates are lower around the world for a variety of reasons, and the federal funds interest rate is now at 2.5%. So when the next recession comes, monetary policy will be limited in how much it can reduce interest rates before those rates hit zero percent, and will instead need to rely on nontraditional monetary policy tools like quantitative easing, forward guidance, and perhaps even experiments with a negative policy interest rate.

Heather Boushey, Ryan Nunn, and Jay Shambaugh have edited a collection of eight essays under the title Recession Ready: Fiscal Policies to Stabilize the American Economy (May 2019, Hamilton Project at the Brookings Institution and Washington Center for Equitable Growth).

In one of the essays, Louise Sheiner and Michael Ng look at US experience with fiscal policy during recessions in recent decades, and find that it has consistently had the effect of counterbalancing economic fluctuations. They write: "Fiscal policy has been strongly countercyclical over the past four decades, with the degree of cyclicality somewhat stronger in the past 20 years than the previous 20. Automatic stabilizers, mostly through the tax system and unemployment insurance, provide roughly half the stabilization, with discretionary fiscal policy in the form of enacted tax cuts and increased spending accounting for the other half."

Automatic stabilizers are important in part because the adjustments can happen fairly quickly. In contrast, when the discretionary Obama stimulus package–American Recovery and Reinvestment Act of 2009–was signed into law in February 2009, the Great Recession had started 14 months earlier.
In another essay, Claudia Sahm proposes that along with the already-existing built-in shifts in taxes and spending, fiscal stabilizers could be designed to kick in automatically when a recession starts. In particular, she proposes that the trigger for such actions could be when "the three-month moving average of the national unemployment rate has exceeded its minimum during the preceding 12 months by at least 0.5 percentage points. … The Sahm rule calls each of the last five recessions within 4 to 5 months of its actual start. … The Sahm rule would not have generated any incorrect signals in the last 50 years."
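The trigger is mechanical enough to sketch in a few lines of code. This is a simplified reading of the rule as quoted above (real implementations would use the official seasonally adjusted monthly unemployment series):

```python
def sahm_trigger(unemployment):
    """unemployment: monthly unemployment rates in percent, oldest first.
    Fires when the 3-month moving average exceeds its minimum over the
    preceding 12 months by at least 0.5 percentage points."""
    # 3-month moving average of the unemployment rate
    ma = [sum(unemployment[i - 2:i + 1]) / 3 for i in range(2, len(unemployment))]
    if len(ma) < 13:
        return False  # not enough history for a 12-month lookback
    return ma[-1] - min(ma[-13:-1]) >= 0.5

# A flat labor market does not trigger; a half-point rise does.
print(sahm_trigger([4.0] * 18))                    # False
print(sahm_trigger([4.0] * 15 + [4.3, 4.6, 4.9]))  # True
```

The moving average is what keeps one noisy month from firing the trigger, while the 12-month lookback lets the rule work at any level of unemployment.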
Sahm argues that when this trigger is hit, the federal government should have legislation in place that would immediately make a direct payment--which could be repeated a year later if the recession persists. She makes the case for a total payment of about 0.7% of GDP (given current GDP of around $20 trillion, this would be $140 billion). She writes: "All adults would receive the same base payment, and in addition, parents of minor dependents would receive one half the base payment per dependent." This isn't cheap! But a lasting and persistent recession is considerably more expensive. 
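The back-of-the-envelope arithmetic behind the $140 billion figure is just the essay's two numbers multiplied together:

```python
gdp = 20e12    # roughly $20 trillion of current US GDP, as in the essay
share = 0.007  # Sahm's proposed total payment of about 0.7% of GDP

total_payment = gdp * share
print(f"${total_payment / 1e9:.0f} billion")  # $140 billion
```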
Other chapters of the book focus on a number of other proposals, which include: 
  • "[T]ransfer federal funds to state governments during periods of economic weakness by automatically increasing the federal share of expenditures under Medicaid and the Children’s Health Insurance Program"
  • "[C]reating a transportation infrastructure spending plan that would be automatically triggered during a recession"
  • Publicize availability of unemployment benefits when the unemployment rate starts rising, and extend the length of unemployment insurance payments at this time
  • Expand Temporary Assistance for Needy Families to include subsidized jobs in recessions
  • An automatic rise of 15% in Supplemental Nutrition Assistance Program (SNAP) benefits during recessions

The list isn't exhaustive, of course. For example, one policy used during the Great Recession was to have a temporary cut in the payroll taxes that workers pay to support Social Security and Medicare. For most workers, these taxes are larger than their income taxes. And there is a quick and easy way to get this money to people, just by reducing what is withheld from paychecks. 

The broader issue here, of course, is not the details of specific actions, some of which are more attractive to me than others. It's whether we seize the opportunity now to reduce the sting of the next recession.

For estimates of automatic stabilizers in the past, see "The Size of Automatic Stabilizers in the US Budget" (November 23, 2015).

Here's a table of contents for the book edited by Boushey, Nunn, and Shambaugh:

Daniel Hamermesh: How Do People Spend Time?

For economists, the idea of "spending" time isn't a metaphor. You can spend any resource, not just money. Among all the inequalities in our world, it remains true that every person is allocated precisely the same 24 hours in each day. In "Escaping the Rat Race: Why We Are Always Running Out of Time," the Knowledge@Wharton website interviews Daniel Hamermesh, focusing on themes from his just-published book Spending Time: The Most Valuable Resource.

The introductory material at the start quotes William Penn, who apparently once said, “Time is what we want most, but what we use worst.” Here are some comments from Hamermesh:

Time for the Rich, Time for the Poor

The rich, of course, work more than the others. They should. There’s a bigger incentive to work more. But even if they don’t work, they use their time differently. A rich person does much less TV watching — over an hour less a day than a poor person. They sleep less. They do more museum-going, more theater. Anything that takes money, the rich will do more of. Things that take a lot of time and little money, the rich do less of. …

I think complaining is the American national pastime, not baseball. But the thing is, those who are complaining about the time as being scarce are the rich. People who are poor complain about not having enough money. I’m sympathetic to that. They’re stuck. The rich — if you want to stop complaining, give up some money. Don’t work so hard. Walk to work. Sleep more. Take it easy. I have no sympathy for people who say they’re too rushed for time. It’s their own darn fault.

Time Spent Working Across Countries

Americans are the champions of work among rich countries. We work on average eight hours more per week in a typical week than Germans do, six hours more than the French do. It used to be quite a bit different. Forty years ago, we worked about average for rich countries. Today, even the Japanese work less than we do. The reason is very simple: We take very short vacations, if we take any. Other countries get four, five, six weeks. That’s the major difference. …

What’s most interesting about when we work is you compare America to western European countries, and it’s hard to find a shop open on a Sunday in western Europe. Here, we’re open all the time. Americans work more at night than anybody else. It’s not just that we work more; we also work a lot more at night, a lot more in the evenings, and a heck of a lot more on Sundays and Saturdays than people in other rich countries. We’re working all the time and more. …

It’s a rat race. If I don’t work on a Sunday and other people do, I’m not going to get ahead. Therefore, I have no incentive to get off that gerbil tube, get out of it and try to behave in a more rational way. …  The only way it’s going to be solved is if somehow some external force, which in the U.S. and other rich countries is the government, imposes a mandate that forces us to behave differently. No individual can do it. …

We have to force ourselves, as a collective, as a polity, to change our behavior. Pass legislation to do it. Every other rich country did that between 1979 and 2000. We think the Japanese are workaholics. They're not workaholics. Compared to us, they work less than we do, yet 40 years ago they worked a heck of a lot more. They chose to cut back. … It's going to be a heck of a lot of trouble to change the rules so that people are mandated to take four weeks of vacation or to take a few more paid holidays. Other countries have done it. It didn't just happen from the day the countries were born. They chose to do it. It's a political issue, like the most important things in life. 

Time and Technology, Money Chasing Hours

Time is an economic factor; economics is about scarcity more than anything else. Because our incomes keep on going up, whereas time doesn’t go up very much, time is the increasingly important scarce factor.  …

There's no question technology has made us better off. Think about going to a museum. When I went to the Museum of Science and Industry in Chicago as a kid, you'd pull levers. You did a few things. These days, it's all incredibly immersive. Great technology. But you can't go to the museum in any less time. You can't cut back on sleep. A few things are easier to do more quickly because of technology: cooking, cleaning, washing. I don't know if you're old enough to remember the semi-automatic washing machine with a wringer. Tremendous improvements in the things you do with the house. Technology has made life better, but it hasn't saved us much time. … So, we are better off, but it's not that we're going to have more time; we're going to have less time. But we have more money chasing the same number of hours.

For a longer, more in-depth, and wide-ranging discussion of these subjects, listen to the hour-long EconTalk episode in which Russ Roberts interviews Daniel Hamermesh (March 25, 2019). 

Time for a Return of Large Corporation Research Labs?

It often takes a number of intermediate steps to move from a scientific discovery to a consumer product. A few decades ago, many larger and even mid-sized corporations spent a lot of money on research and development laboratories, which focused on all of these steps. Some of these corporate laboratories, like those at AT&T, Du Pont, IBM, and Xerox, were nationally and globally famous. But the R&D ecosystem has shifted, and firms are now much more likely to rely on outside research done by universities or small start-up firms. These issues are discussed in "The changing structure of American innovation: Cautionary remarks for economic growth," by Ashish Arora, Sharon Belenzon, Andrea Patacconi, and Jungkyu Suh, presented at the conference on "Innovation Policy and the Economy 2019," held on April 16, 2019, hosted by the National Bureau of Economic Research, and sponsored by the Ewing Marion Kauffman Foundation.

On the importance of corporate laboratories to earlier decades of US productivity growth, the authors note:

From the early years of the twentieth century up to the early 1980s, large corporate labs such as AT&T's Bell Labs, Xerox's Palo Alto Research Center, IBM's Watson Labs, and DuPont's Purity Hall were responsible for some of the most consequential inventions of the century such as the transistor, cellular communication, graphical user interface, optical fibers, and a host of synthetic materials such as nylon, neoprene, and cellophane.

But starting in the 1980s, firms began to rely more on universities and on start-ups to do their R&D. Here's one of many examples, the closing of the main DuPont research laboratory: 

A more recent example is DuPont's closing of its Central Research & Development lab in 2016. Established in 1903, DuPont Central R&D served as a premier lab on par with the top academic chemistry departments. In the 1960s, the central R&D unit published more articles in the Journal of the American Chemical Society than MIT and Caltech combined. However, in the 1990s, DuPont's attitude toward research changed as the company started emphasizing the business potential of research projects. After a gradual decline in scientific publications, the company's management closed the Experimental Station as a central research facility for the firm after pressure from activist investors in 2016.

The pattern shows up in broader trends. The authors write that "the number of publications per firm fell at a rate of 20% per decade from 1980 to 2006 for R&D performing American listed firms." Business-based R&D as a share of total R&D peaked back in the 1990s, and has been falling since then. The share of business R&D which is "research," as opposed to "development," has been falling, too. 
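Compounded over the 1980-2006 window, a 20% per-decade decline is substantial. A rough back-of-envelope reading of the authors' figure:

```python
decline_per_decade = 0.20
decades = (2006 - 1980) / 10          # 2.6 decades

remaining = (1 - decline_per_decade) ** decades
print(f"cumulative decline: {1 - remaining:.0%}")  # about 44%
```

That is, publications per firm at the end of the period were a bit over half their 1980 level.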
The authors tell the story of how so much research was based in corporations, or shared by corporations and universities, for the first six or seven decades of the 20th century, and how the shift to a greater share of research happening in universities took place. One big change was the Bayh-Dole Act of 1980 (citations omitted):

Perhaps the most widely commented on reform of this era is the Bayh-Dole Patent and Trademark Amendments Act of 1980, which allowed the results of federally funded university research to be owned and exclusively licensed by universities. Since the postwar period, the federal government had been funding more than half of all research conducted in universities and owned the rights to the fruits of such research, totaling in 28,000 patents. However, only a few of these inventions would actually make it into the market. Bayh-Dole was meant to induce industry to develop these underutilized resources by transferring property rights to the universities, which were now able to independently license at the going market rate.

As universities took on more research, corporations backed off. Here are a couple of examples: 

In 1979, GE's corporate research laboratory employed 1,649 doctorates and 15,555 supporting staff, while IBM employed 1,900 staff and 1,300 doctorate holders. The comparable figures in 1998 for GE was 475 PhDs supported by 880 professional staff, and 1,200 doctorate holders for IBM. Indeed, firms whose sales grew by 100% or higher between 1980 and 1990 published 20.6 fewer scientific articles per year. This contrast between sales growth and publications drop persists into the next two decades: firms that doubled in sales between 1990 and 2000 published 12.0 fewer articles. Publications dropped by 13.3 for such fast growth firms between 2000 and 2010.

A common pattern seems to be that the number of researchers and scientific papers is falling at a number of firms, but the number of patents at these same firms has been steadily rising.  Firms are putting less emphasis on the research, and more on development that can turn into well-defined intellectual property. This pattern seems to hold (mostly) across big information technology and computer firms. The pharmaceutical and biotech firms offer an exception of an industry that has continued to publish research–probably because published research is important in regulatory approval for many of their products. 

Overall, the new innovation ecosystem exhibits a deepening division of labor between universities that specialize in basic research, small start-ups converting promising new findings into inventions, and larger, more established firms specializing in product development and commercialization. Indeed, in a survey of over 6,000 manufacturing- and service-sector firms in the U.S. … 49% of the innovating firms between 2007 and 2009 reported that their most important new product originated from an external source.

But in this new ecosystem of innovation, has something been lost? The authors argue that businesses' outsourcing of R&D has contributed to the sustained sluggish pace of US productivity growth. They write: 

Spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. Corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. Large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and their research is directed toward solving specific practical problems, which makes it more likely for them to produce commercial applications. University research has tended, more so than corporate research, to be curiosity-driven rather than mission-focused. It has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful. This is not to deny the important contributions that universities and small firms make to American innovation. Rather, our point is that large corporate labs may have distinct capabilities, which have proved to be difficult to replace. Further, large corporate labs may also generate significant positive spillovers, in particular by spurring high-quality scientific entrepreneurship.

It's not clear how to encourage a resurgence of corporate research labs. Companies and their investors seem happy with the current division of R&D labor. But from a broader social perspective, the growing separation of companies from the research on which they rely suggests that the gap between scientific research and consumer products is growing, along with the possibility that economically valuable innovations are falling into that gap and never coming into existence.


Those interested in this argument might also want to check "The decline of science in corporate R&D," written by Ashish Arora, Sharon Belenzon, and Andrea Patacconi, published in the Strategic Management Journal (2018, vol. 39, pp. 3–32).

For those with an interest in the broader subject of US innovation policy, here\’s the full list of papers presented at the April 2019 NBER conference:

Does the Federal Reserve Talk Too Much?

For a long time, the Federal Reserve (and other central banks) carried out monetary policy with little or no explanation. The idea was that the market would figure it out. But in the last few decades, there has been an explosion of communication and transparency from the Fed (and other central banks), consisting both of official statements and an array of public speeches and articles by central bank officials. On one side, a greater awareness has grown up that economic activity isn't just influenced by what the central bank did in the past, but also by what it is expected to do in the future. But does this "open mouth" approach clarify and strengthen monetary policy, or just muddle it?

Kevin L. Kliesen, Brian Levine, and Christopher J. Waller present some evidence on the changes in Fed communication and the results in "Gauging Market Responses to Monetary Policy Communication," published in the Federal Reserve Bank of St. Louis Review (Second Quarter 2019, pp. 69-92). They start by describing the old ways, by quoting an exchange between John Maynard Keynes and Bank of England Deputy Governor Sir Ernest Harvey on December 5, 1929:

KEYNES: Arising from Professor Gregory's questions, is it a practice of the Bank of England never to explain what its policy is?
HARVEY: Well, I think it has been our practice to leave our actions to explain our policy.
KEYNES: Or the reasons for its policy?
HARVEY: It is a dangerous thing to start to give reasons.
KEYNES: Or to defend itself against criticism?
HARVEY: As regards criticism, I am afraid, though the Committee may not all agree, we do not admit there is a need for defence; to defend ourselves is somewhat akin to a lady starting to defend her virtue.

From 1967 to 1992, the Federal Open Market Committee released a public statement 90 days after its meetings. The FOMC then started, sometimes, releasing statements right after meetings. Here's a figure showing how the length of these statements has expanded over time, with the shaded area showing the period of "unconventional monetary policy" during and after the Great Recession.

As one example,

[F]ollowing the August 9, 2011, meeting, the policy statement stated the following:

"The Committee currently anticipates that economic conditions—including low rates of resource utilization and a subdued outlook for inflation over the medium run—are likely to warrant exceptionally low levels for the federal funds rate at least through mid-2013."

In this case, the FOMC's intent was to signal to the public that its policy rate would remain low for a long time in order to spur the economy's recovery.

Here's a count of the annual "remarks" (speeches, interviews, testimony) by presidents of the regional Federal Reserve banks, members of the Board of Governors, and the chair of the Fed:

Here are some comments about Fed communication that seem to me worth passing along:

"Speeches have become important communication events. Chairman Greenspan's new economy speech in 1995 and his "irrational exuberance" speech in 1996 were among his more notable speeches. Chairman Ben Bernanke also gave notable speeches during his tenure. Two that stand out are his "Deflation: Making Sure 'It' Doesn't Happen Here" speech in 2002 and his global saving glut speech in 2005. …

One of the key communication innovations during the Bernanke tenure was the public release of individual FOMC participants' expectations of the future level of the federal funds rate. Once a quarter, with the release of the SEP [Summary of Economic Projections], each FOMC participant—anonymously—indicates their preference for the level of the federal funds rate at the end of the current year, at the end of the next two to three years, and over the "longer run." These projections are often termed the FOMC "dot plots." According to the survey, both academics and those in the private sector found the dot plots of limited use as an instrument of Fed communication (more "useless" than "useful"). One-third of the respondents found the dot plots "useful or extremely useful," 29 percent found them "somewhat useful," and 38 percent found them "useless or not very useful." …

We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. Perhaps not surprisingly, we find that the largest financial market reactions tend to be associated with communication by Fed Chairs rather than by other Fed governors and Reserve Bank presidents and with FOMC meeting statements rather than FOMC minutes.

It's probably impossible for a 21st century central bank to operate with what used to be an unofficial motto attributed to the long-ago Bank of England: "Never explain, never apologize." Just for purposes of political legitimacy, and for maintaining the independence of the central bank, a greater degree of transparency and explanation is needed. But if the choice is between the risk of instability from financial markets making predictions in a situation of very little central bank disclosure, or the risk of instability from financial markets making predictions in a situation with the current level of central bank disclosure, the current level seems preferable. The authors write:

The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. The reason is that most central bankers believe that clear and concise communication of monetary policy helps achieve their goals.