It’s well-known that those who major in economics earn more, on average, than those who major in other social sciences or in the humanities. But why? One possibility is that those who are more interested in earning income or in industries that tend to pay more will be more likely to become economics majors. Thus, the higher pay of economics majors might just reflect this sorting of pre-existing preferences, not any causal effect of majoring in economics.
In the ideal world of social scientists, one way to determine whether majoring in economics causes higher incomes would be to experiment with a large group of students: randomly assign them to majors and then track their life outcomes. For good and obvious reasons, that experiment is impractical. However, with a dollop of research creativity, it is possible to look for a “natural experiment” where real-world experience approximates this ideal social science experiment. Zachary Bleemer and Aashish Mehta find a way to do this in “Will Studying Economics Make You Rich? A Regression Discontinuity Analysis of the Returns to College Major” (American Economic Journal: Applied Economics 2022, 14:2, 1–22).
The authors focus on a specific rule at the University of California-Santa Cruz: If a student doesn’t have a grade point average of at least 2.8 in the two introductory economics courses, that student (usually) cannot go on to major in economics. We can reasonably assume (and this assumption can be checked) that students who are just a little above the 2.8 cut-off are in many ways quite similar to those just below the 2.8 cut-off, except for a few points awarded (or not awarded) on a midterm or a final exam. Thus, the authors focus not on comparing all econ majors to all non-econ majors, but instead on comparing those slightly above the grade cut-off to those slightly below it.
This methodology is called “regression discontinuity.” Basically, it looks at those just above or below a certain cutoff, which might be a grade, an income level, a qualification for a certain program, or any number of things. Comparing those just above and just below the cutoff is close to a hypothetical randomized experiment: that is, you have a group of very similar people, and because some are a tick above and others are a tick below the cutoff, some are eligible for a certain program and some are not. At least as a first approximation, subject to later in-depth checking, it’s reasonable to view later differences between those just above and just below the cutoff as a causal effect (a minimal illustrative sketch of the method follows the excerpt below). The authors write:
Students who just met the GPA [grade point average] threshold were 36 percentage points more likely to declare the economics major than those who just failed to meet it. Most of these students would have otherwise earned degrees in other social sciences. Students just above the threshold who majored in economics were surprisingly representative of UCSC economics majors on observables; for example, their average SAT score was at the forty-first percentile of economics majors.
Comparing the major choices and average wages of above- and below-threshold students shows that majoring in economics caused a $22,000 (46 percent) increase in the annual early-career wages of barely above-threshold students. It did so without otherwise impacting their educational investment—as measured by course-adjusted average grades and weekly hours spent studying—or outcomes like degree attainment and graduate school enrollment. The effect is nearly identical for male and female students, may be larger for underrepresented minority students, and appears to grow as workers age (between ages 23 and 28). About half of the wage effect can be explained by the effect of majoring in economics on students’ industry of employment: relative to students who did not qualify for the major, economics majors became more interested in business and finance careers and were more likely to find employment in higher-wage economics-related industries like finance, insurance, and real estate (FIRE) and accounting.
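As a concrete illustration of the method, here is a minimal sketch of a sharp regression discontinuity estimate on simulated data. The cutoff, sample size, and effect size below are invented for illustration; they are not taken from the Bleemer-Mehta study.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: intro-course GPA is the "running variable," and eligibility
# for the major jumps at a 2.8 cutoff. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 5000
gpa = rng.uniform(2.0, 3.6, n)
above = (gpa >= 2.8).astype(float)
# Log wages rise smoothly with GPA, plus a discrete jump at the cutoff
log_wage = 10.0 + 0.3 * (gpa - 2.8) + 0.25 * above + rng.normal(0, 0.5, n)

# Sharp RD: local linear regression within a bandwidth around the cutoff,
# allowing different slopes on either side of 2.8.
bandwidth = 0.3
window = np.abs(gpa - 2.8) <= bandwidth
X = sm.add_constant(np.column_stack([
    above[window],                         # eligibility indicator (the RD estimate)
    gpa[window] - 2.8,                     # centered running variable
    (gpa[window] - 2.8) * above[window],   # slope change at the cutoff
]))
fit = sm.OLS(log_wage[window], X).fit()
print("estimated jump in log wages at the cutoff:", round(fit.params[1], 3))
```

In practice, researchers also vary the bandwidth and check that other characteristics of students do not jump at the threshold, which is the kind of checking the authors describe.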
For comparison, the raw income difference between economics majors as a group and non-econ majors is not a lot larger than the gap estimated at the threshold: “Forty-year-old US workers with undergraduate degrees in economics earned median wages of $90,000 in 2018. By comparison, those who had majored in other social sciences earned median wages of $65,000, and college graduates with any major other than economics earned $66,000.”
Thus, this finding suggests that a large part of the income difference between econ majors and other majors is a causal effect of the economics major. In addition, much of the difference arises because those who are just above the 2.8 cut-off are much more likely to end up in certain categories of high-paying jobs than those who were just below the cut-off. The data in this study can’t answer the underlying question of why this might be so. Do economics majors have an academic background that makes them more suited for such jobs? Do economics majors have a culture that is more supportive of looking for such jobs? Do employers in these sectors have a preference for economics majors over other majors? Whatever the underlying reason, becoming an economics major does seem to have a substantial causal effect on later career choices and income.
It’s not common. But every now and then, a prominent economist will make a strong near-term prediction that flies in the face of both mainstream wisdom and their known political loyalties. Lawrence Summers did that back in March 2021. As background, Summers was Secretary of the Treasury for a couple of years during the Clinton administration and head of the National Economic Council for a couple of years in the Obama administration. Moreover, he is someone who has been arguing for some years that the US economy needs a bigger boost in demand from the federal government to counter the forces of “secular stagnation” (for discussion, see here and here). Thus, you might assume that Summers would have been a supporter of the American Rescue Plan Act of 2021, backed by the Biden administration and signed into law in March 2021.
Instead, Summers almost immediately warned that this rescue package, when added to the previous pandemic relief legislation, would increase demand in the economy in a way that would be likely to set off a wave of inflation. There was some historical irony here. Back during the early years of the Obama administration, when the US economy was struggling to rebound from the Great Recession, Summers was a prominent supporter of increased federal spending–often warning that the problem was likely to be doing too little rather than doing too much. Some opponents of the 2009 legislation predicted that it would cause inflation, but it didn’t happen. Now in March 2021, Summers was on the “it’s too big and will cause inflation” side of the fence.
Jump forward a year to March 2022, and Summers looks prescient. Yes, one can always argue that someone made a correct prediction, but that they did so for the wrong reasons, and events just evolved so that their prediction luckily turned out to be true. And one can also argue that all those making the wrong prediction about inflation back in spring 2021 were actually correct, but events just evolved in an unexpected way so that just by bad luck they turned out to be wrong. But maybe, just maybe, those who were wrong have something to learn from those who were right.
Here are some of Summers’s comments, from a recent conversation with Ezra Klein:

I’m probably as apprehensive about the prospects for a soft landing of the U.S. economy as I have been any time in the last year. Probably actually a bit more apprehensive. In a way, the situation continues to resemble the 1970s, Ezra. In the late ’60s and in the early ’70s, we made mistakes of excessive demand expansion that created an inflationary environment.
And then we caught really terrible luck with bad supply shocks from OPEC, bad supply shocks from elsewhere. And it all added up to a macroeconomic mess. And in many ways, that’s the right analogy for now. Just as L.B.J.’s guns and butter created excessive and dangerous inflationary pressure, the macroeconomic overexpansion of 2021 created those problems, and then layered on with something entirely separate, in terms of the further supply shocks we’ve seen in oil and in food.
And so now I think we’ve got a real problem of high underlying inflation that I don’t think will come down to anything like acceptable levels of its own accord. And so very difficult dilemmas as to whether to accept economic restraint or to live with high and quite possibly accelerating inflation. So I don’t envy the tasks that the Fed has before it. …
On how short-term stimulus can be a bad idea if it leads to long-term costs
I share completely the emotional feelings that you describe around the benefits of a strong economy. But I think it’s very important not to be shortsighted and to recognize that what we care about is not just the level of employment this year, but the level of employment averaged over the next 10 years. That we care not just about wages and opportunities this year, but we care about wages and opportunities over the long-term.
And the doctor who prescribes you painkillers that make you feel good to which you become addicted is generous and compassionate, but ultimately is very damaging to you. And while the example is a bit melodramatic, the pursuit of excessively expansionary policies that ultimately lead to inflation, which reduces people’s purchasing power, and the need for sharply contractionary policies, which hurt the biggest victims, the most disadvantaged in the society, that’s not doing the people we care most about any favor. It’s, in fact, hurting them.
The excessively inflationary policies of the 1970s were, in a political sense, what brought Ronald Reagan and brought Margaret Thatcher to power. So I share your desires. I think the purpose of all of this is to help people who would otherwise have difficulty. That is what it’s all about in terms of making economic policy. But if you don’t respect the basic constraints of situations, you find yourself doing things that are counterproductive and that in the long-run prove to be harmful.
You raise an interesting example when you talk about wage increases. If you look at the rate of wage increases, percentage wage increases each year for the American economy, and then you look at the increase in the purchasing power of workers each year, what you find is that as wage increases go up, the growth of purchasing power increases until you get to 4 percent or 5 percent. And when wage increases start getting above 4 percent or 5 percent, then you start having serious inflation problems and actually the purchasing power of workers is going down.
So my disagreement with policies that were pursued last year had nothing to do with ends. I completely shared the end. I did not care about inflation for its own sake. But what I did care about was real wage growth over time, average levels of employment and opportunity over time, and a sense of social trust that would permit progressive policies.
And I thought those vital ends were being compromised by those with good intentions but a reluctance to do calculations. And I have to say that the early evidence at this point — and it gives me no pleasure to say this — but the evidence at this moment in terms of what’s happened to real wages, in terms of what’s happened to concerns about recession, in terms of what political prognosticators are saying, suggests that those fears may, to an important extent, have been justified.
What does the Federal Reserve need to do?
I don’t think we’re going to avoid and bring down the rate of inflation until we get to positive real interest rates. And I don’t think we’re going to get to positive real interest rates without, over the next couple of years, getting interest rates north of 4 percent. What happens to real interest rates depends both on what the Fed does and on what happens to inflation.
My sense of this is that given the likely paths of inflation, we’re likely to have a need for nominal interest rates, basic Fed interest rates, to rise to the 4 percent to 5 percent range over the next couple of years. If they don’t do that, I think we’ll get higher inflation. And then over time, it will be necessary for them to get to still higher levels and cause even greater dislocations.
One of the frustrations of the modern US economy is that it costs money to make payments. Credit cards have fees. Banks have fees. Electronic payment systems have fees. Paying by cash doesn’t have an explicit fee, but it does have the implicit costs and risks of walking around carrying cash–and a growing number of merchants don’t take cash. One estimate is that US consumers pay 2.3% of GDP just for payment services, and while that is relatively high, payment services are more than 1% of GDP in many countries.
Brazil decided on an alternative approach. The basic idea is that the Brazil Central Bank (BCB) has set up a platform for payment service providers. The central bank sets up the “application programming interfaces,” which essentially means that there is a common technology through which payment service providers can access the system. The central bank owns and operates the platform itself, and all large banks and other payment service providers are required to participate. If two parties are using different service providers, they can still readily make a transaction over the shared platform. In the 15 months after the system’s launch in November 2020, usage went from zero to 67% of Brazil’s adult population. Angelo Duarte, Jon Frost, Leonardo Gambacorta, Priscilla Koo Wilkens and Hyun Song Shin tell the story in “Central banks, the monetary system and public payment infrastructures: lessons from Brazil’s Pix” (Bank for International Settlements, BIS Bulletin #52, March 23, 2022). Here’s a description:
The BCB [Brazil Central Bank] decided in 2018 to launch an instant payment scheme developed, managed, operated and owned by the central bank. Pix was launched in November 2020. The goals are to enhance efficiency and competition, encourage the digitalisation of the payment market, promote financial inclusion and fill gaps in currently available payment instruments. The BCB plays two roles in Pix: it operates the system and it sets the overall rulebook. As a system operator, the BCB fully developed the infrastructure and operates the platform as a public good. As rulebook owner, the BCB sets the rules and technical specifications (eg APIs) in line with its legal mandate for retail payments. This promotes a standardised, competitive, inclusive, safe and open environment, improving the overall payment experience for end-users.
Since its launch, Pix has seen remarkable growth. By end-February 2022 (15 months after launch), 114 million individuals, or 67% of the Brazilian adult population, had either made or received a Pix transaction. Moreover, 9.1 million companies have signed up – fully 60% of firms with a relationship in the national financial system. Over 12.4 billion transactions were settled, for a total value of BRL [Brazilian reais] 6.7 trillion (USD 1.2 trillion) (Graph 1, left-hand panel). Pix transactions have surpassed many instruments previously available – eg pre-paid cards – and have reached the level of credit and debit cards (Graph 1, right-hand side). Pix partly substituted for other digital payment instruments, such as bank transfers. Yet notably, the total level of digital transactions rose substantially. Using accounts from banks and non-bank fintech providers, more individuals entered the digital payment system. Indeed, Pix transfers were made by 50 million individuals (30% of the adult population) who had not made any account-to-account transfers in the 12 months prior to the launch of Pix. Thus, Pix helped to expand the universe of digital payment users.
Credit transfers between individuals have been the main use case for Pix since its launch. Indeed, adoption by individuals is very straightforward. Individuals can obtain a Pix key and quick response (QR) code to initiate transfers to friends and family, or for small daily transactions. In line with its strategic agenda for financial inclusion, the BCB decided to make Pix transfers free of charge for individuals. PSPs pay a low fee (BRL 0.01 per 10 transactions) to the BCB so that the BCB can recover the cost of running the system.
To ensure access and integrity, PSPs must digitally verify the identity of users. With the existing interface and know-your-customer (KYC) processes provided by their bank or non-bank PSP, users can have an “alias” – such as a phone number, email address or other key – which forms the basis of digital identification. The most common such aliases are randomly generated keys (eg QR codes), but phone numbers, email addresses and tax IDs are also used … The ease of use for individuals and the multiplicity of use cases may be one reason why actual use has increased quite rapidly …
Brazil’s Pix system is still quite new, and one suspects there will be growing pains: security breaches, fraud, efforts by private actors to use the system to reduce competition, and so on. But the experiment is deeply interesting: it may teach lessons about how a country can expand access to electronic transactions while dramatically reducing the fees imposed by current payment systems.
Is the ongoing inflation in the US economy more likely to be a temporary burst that will fade away, or a longer-term pattern that will require a policy reaction? One way to get some insight into this question is to look at whether many prices are rising briskly or only a few. If inflation is being driven up by just a few prices–say, energy prices–then an overall inflationary momentum has not yet spread to the rest of the economy. However, if a wide array of prices have been rising, then a broader inflationary momentum becomes a more plausible story. Alexander L. Wolman of the Richmond Fed carries out the calculations in “Relative Price Changes Are Unlikely to Account for Recent High Inflation” (March 2022, Economic Brief 22-10).
Wolman breaks up the economy into roughly 300 consumption categories. For each category, there is data on the changes in prices as well as quantity of the good consumed. Thus, one can look at whether overall inflation is emerging from relatively few or many categories of goods–and how this pattern has changed in the last year or so.
As one example, he calculates the share of expenditures, using the 300+ categories of consumption, that accounts for 50% of inflation: that is, does a relatively small or large share of expenditures account for half of the inflation? Each dot in the figure below is monthly data going back to 1995. As you can see on the vertical axis, monthly inflation rates were low during much of this time, often in the range of 0.2% or less. The horizontal axis shows that the share of expenditures accounting for half of this low inflation rate was also often quite small, often in the range of 2-4% of all expenditures.
However, the colored dots show more recent months, from the start of the inflation in March 2021 (yellow dot) up through January 2022 (purple dot). As you can see on the vertical axis, the monthly inflation rate has been substantially higher during this time, often at about 0.6%. But look at how the colored dots spread out horizontally. Back in March 2021 (yellow) and April 2021 (green), half of inflation was coming from about 3% of expenditures, a common earlier pattern. But by December 2021 (orange) and January 2022 (purple), half of the higher inflation rate was coming from about 12% of expenditures. In other words, price changes across a much wider swath of the economy were driving inflation–a sign that inflation had become more entrenched.
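To make the calculation concrete, here is a minimal sketch of this kind of concentration measure on simulated category data. The weights and price changes are made up, and the exact procedure may differ in detail from Wolman’s.

```python
import numpy as np

# Simulated expenditure shares and monthly price changes for 300 categories;
# these are illustrative numbers, not the actual consumption data Wolman uses.
rng = np.random.default_rng(2)
n_categories = 300
weights = rng.dirichlet(np.ones(n_categories))          # expenditure shares, sum to 1
price_changes = rng.normal(0.002, 0.01, n_categories)   # monthly price changes

# Each category's contribution to overall (expenditure-weighted) inflation
contributions = weights * price_changes
inflation = contributions.sum()

# Sort categories from largest to smallest contribution, find the smallest set
# that accounts for half of total inflation, and report the share of spending
# that set represents.
order = np.argsort(contributions)[::-1]
cumulative = np.cumsum(contributions[order])
k = int(np.argmax(cumulative >= 0.5 * inflation)) + 1
share_of_spending = weights[order][:k].sum()
print(f"monthly inflation: {inflation:.3%}")
print(f"share of spending accounting for half of it: {share_of_spending:.1%}")
```

A low share of spending means inflation is concentrated in a few categories; a rising share is the broadening pattern the figure shows.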
Here’s a different calculation reaching a similar conclusion: in any given month, what share of price changes are above the overall inflation rate? Again, you can see that in March 2021 (yellow) and April 2021 (green), a smaller share of price increases were above the overall inflation rate, but by January 2022, almost half of prices were rising faster than the average rate.
The message I would take away from these calculations is that it was plausible in the summer of 2021 to think that inflation was being driven mainly by a few price changes, perhaps due to short-term factors and supply disruptions, and that it would fade. But the growing number of consumption categories with substantial price increases makes that interpretation less plausible. In addition, this data doesn’t take into account the more recent disruptions and price hikes associated with the Russian invasion of Ukraine and the economic sanctions that have followed.
The Russian invasion of Ukraine has led me to read more about foreign affairs and international relations than usual, which in turn reminded me of Arthur M. Schlesinger’s comments about the quality of writing in the US State Department. Schlesinger was a Harvard history professor who became a “special assistant” to President John F. Kennedy. In 1965, he published a memoir titled A Thousand Days: John F. Kennedy in the White House. Here’s his discussion of the internal battles over the quality of writing emanating from the US Department of State (pp. 418-419). I’m especially fond of a few of his comments:
“The writing of lucid and forceful English is not too arcane an art.”
“At the very least, each message should be (a) in English, (b) clear and trenchant in its style, (c) logical in its structure and (d) devoid of gobbledygook. The State Department draft on the Academy failed each one of these tests (including, in my view, the first).”
Here’s the fuller passage:
After the Bay of Pigs, the State Department sent over a document entitled “The Communist Totalitarian Government of Cuba as a Source of International Tension in the Americas,” which it had approved for distribution to NATO, CENTO, SEATO, the OAS and the free governments of Latin America and eventually for public release. In addition to the usual defects of Foggy Bottom prose, the paper was filled with bad spelling and grammar. Moreover, the narrative, which mysteriously stopped at the beginning of April 1961, contained a self-righteous condemnation of Castro’s interventionist activities in the Caribbean that an unfriendly critic, alas! could have applied, without changing a word, to more recent actions by the United States. I responded on behalf of the White House:
It is our feeling here that the paper should not be disseminated in its present form. …
Presumably the document is designed to impress, not an audience which is already passionately anti–Castro, but an audience which has not yet finally made up its mind on the gravity of the problem. Such an audience is going to be persuaded, not by rhetoric, but by evidence. Every effort to heighten the evidence by rhetoric only impairs the persuasive power of the document. Observe the title: ‘The Communist Totalitarian Government of Cuba’… This title presupposes the conclusion which the paper seeks to establish. Why not call it ‘The Castro Regime in Cuba’ and let the reader draw his own conclusions from the evidence? And why call it both ‘Communist’ and ‘totalitarian’? All Communist governments are totalitarian. The paper, in our view, should be understated rather than overstated; it should eschew cold war jargon; the argument should be carried by facts, not exhortations. The writing is below the level we would hope for in papers for dissemination to other countries. The writing of lucid and forceful English is not too arcane an art.
The President himself, with his sensitive ear for style, led the fight for literacy in the Department; and he had the vigorous support of some State Department officials, notably George Ball, Harriman and William R. Tyler. But the effort to liberate the State Department from automatic writing had little success. As late as 1963, the Department could submit as a draft of a presidential message on the National Academy of Foreign Affairs a text which provoked this resigned White House comment:
This is only the latest and worst of a long number of drafts sent here for Presidential signature. Most of the time it does not matter, I suppose, if the prose is tired, the thought banal and the syntax bureaucratic; and, occasionally when it does matter, State’s drafts are very good. But sometimes, as in this case, they are not.
A message to Congress is a fairly important form of Presidential communication. The President does not send so many — nor of those he does send, does State draft so many — that each one cannot receive due care and attention. My own old-fashioned belief is that every Presidential message should be a model of grace, lucidity and taste in expression. At the very least, each message should be (a) in English, (b) clear and trenchant in its style, (c) logical in its structure and (d) devoid of gobbledygook. The State Department draft on the Academy failed each one of these tests (including, in my view, the first).
Would it not be possible for someone in the Department with at least minimal sensibility to take a look at pieces of paper designed for Presidential signature before they are sent to the White House?
It was a vain fight; the plague of gobbledygook was hard to shake off. I note words like “minimal” (at least not “optimal”) and “pieces of paper” in my own lament.
The policy challenge of climate change is not likely to have a single magic bullet answer, but rather will require a cluster of answers. In addition, it seems extremely unlikely to me that world energy production is going to decline any time soon, especially given the fact that only about 16% of the global population lives in “high-income” countries as defined by the World Bank, and the other 84% are strongly desirous of a standard of living that will require higher energy consumption. Thus, the question is how to produce the quantity of energy that will be demanded in the future in a way that is cost-competitive and also imposes lower environmental costs (by which I include the costs of conventional air pollution from burning fossil fuels as well as issues of carbon emissions and climate change). It is fundamentally a question to be addressed by the cluster of technologies that affect energy production, storage, and consumption.
In a recent essay, David Hémous examines how innovation responds to climate policy. He writes:

How responsive is innovation to climate policy? … We will see that innovation is key to ensure economic growth while preserving the environment, but innovation is not a silver bullet. First, there is not a single innovation that will reduce our dependence on fossil fuels; instead, we will need many (sometimes incremental) innovations in energy-saving technologies and in clean energy. Second, innovation is not manna from heaven, and it is not even necessarily clean. Instead, the direction of innovation responds to incentives and to policies, which means that policies should be designed taking their induced effect on clean (or dirty) innovation into account.
Hémous points out that per capita emissions of carbon dioxide have been shrinking for the world as a whole, and also for many advanced economies. However, substantial differences in per capita carbon emissions remain even among high-income countries:
“[T]he correlation between income and emissions is far from perfect. Switzerland is below the world average, France is close to it (and below it once we take changes in land use into account), but clearly both countries are relatively rich. Such cross-country differences reflect differences in technologies (how electric power is produced, how well buildings are insulated, etc.) and consumption choices (what type of cars are popular, how far people live from work, etc.).”
The key driving force here is that the energy needed to produce a given amount of GDP–commonly known as “energy intensity”–has been falling all around the world. A substantial part of the reason is that as economies evolve, they shift from energy-hungry production of goods to less energy-intensive production of services and knowledge. The growth of their GDP is less about “more” and more about “better.”
Hémous describes his own research focused on automotive engines. He looks at “clean” innovations that offer alternatives to the fossil fuel engine; “grey” innovations that make fossil fuel engines more efficient–but for that reason can also lead to a “rebound effect” of additional driving; and “dirty” innovations that tend to lead to higher emissions. Looking at “3,412 international firms over the period 1986–2005. We find that a 10% increase in fuel prices leads to 8.5% more clean innovations and 8.3% less purely dirty innovations 2 years later, with no statistically significant effect on grey innovations.”
The lesson here is a general one: the direction of technological change is not fully predetermined by the discoveries of scientists and engineers, but instead responds to economic incentives. Development of new pharmaceuticals, to choose another example, responds to the size of the perceived market–which may be quite different from the potential health benefits. Some technologies may be better at replacing labor and displacing jobs; other technologies may be better at complementing labor and raising wages. When it comes to energy, some technologies will be greener than others, and the incentives for a greener path can come in the form of price signals, regulations, and support for research and development.
Most people don’t have a clear intuitive grip on numbers that are either very small or very large. Lotteries, which combine an unthinkably low chance of winning with an unthinkably large prize, are a vivid example, but there are many others. In a big and diverse world, this very human inability to get a clear intuitive grip on small and large numbers means that we misperceive the world around us. Here’s a vivid example from the British public opinion and data company YouGov (by Taylor Orth, “From millionaires to Muslims, small subgroups of the population seem much larger to many Americans,” March 15, 2022).
This survey asked Americans to estimate the size of various groups in the population. The blue dots show the survey estimates; the orange dots show the actual value based on data from the US Census Bureau, the Bureau of Labor Statistics, and similar sources. For example, the first row shows that the actual share of Americans who had over $1 million in income last year is close to 0% (actually, about 0.1%). But according to the survey, Americans think that about 20% of Americans had over $1 million in income last year. Conversely, the bottom row says that Americans believe about 65% of the adult population has a high school degree, when the actual share is more like 89%.
The short article with the survey mentions some of the hypotheses here. For example, one lesson is that the framing of a survey will shape the results. This survey doesn’t require percentages to add up to 100% for any given category. For example, the survey responses suggest that 30% of the US population is in New York City, another 30% is in Texas, and an additional 32% is in California–which suggests that the entire rest of the US has only 8% of the total population. Similarly, the survey suggests that 27% of the population is Muslim, 30% of the population is Jewish, 33% are atheists, and 58% are Christian–which adds up to way more than 100%. If the survey were structured in a way that required the categories of where you live or religious identification to add up to 100%, the answers would presumably look different.
One possible explanation for these patterns is that some groups are highly salient for survey respondents: perhaps people have a few vivid examples of people in a certain category, or a strong emotional charge about people in a certain category, and thus are likely to inflate the numbers for that group. I suspect this answer has some truth. But it’s worth noting that similar divergences hold for categories like being left-handed, which wouldn’t seem to have the same issues of vividness and emotional charge.
The article suggests that what is happening here is “uncertainty-based rescaling,” which basically means that when people are uncertain, they tend to answer surveys by making the small percentages larger and the large percentages smaller.
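Here is a toy version of that idea, a sketch rather than the specific model behind the article: an uncertain respondent reports a weighted average of the true share and 50 percent, which inflates small shares and shrinks large ones.

```python
# Toy "uncertainty-based rescaling": the weight on the 50% midpoint is an
# invented parameter, not an estimate from the YouGov article.
def rescaled_estimate(true_share: float, uncertainty: float = 0.4) -> float:
    return (1 - uncertainty) * true_share + uncertainty * 0.5

for true_share in [0.01, 0.10, 0.50, 0.89]:
    print(f"true share {true_share:.0%} -> reported roughly {rescaled_estimate(true_share):.0%}")
```

Even this crude compression reproduces the qualitative pattern in the survey: a tiny group gets reported at around 20 percent, while a true share of 89 percent gets pulled down toward 70 percent.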
It’s also possible to ask whether any of this matters. There are social science studies where some groups are given information before being asked their opinions, while other groups are not, and providing information often doesn’t have much effect on the opinions people express. In the Summer 2020 issue of the Journal of Economic Perspectives, Brendan Nyhan wrote “Facts and Myths about Misperceptions,” which digs into these issues. For many people, how they feel about, say, transgender rights will not change if they are informed that transgender people are 1% rather than 21% of the population.
But as an old-fashioned 20th century kind of person, I do think these wide divergences between reality and perception matter. When beliefs about actual real-world quantities become unmoored from reality, it gets harder even to distinguish the bigger issues from the smaller ones, much less to think about the tradeoffs of alternative policy choices.
When inflation first started kicking up its heels last summer, there was a dispute over whether it was likely to be temporary or permanent. The argument for “temporary” went along the following lines: the underlying causes of the inflation were a mixture of factors like supply chain disruptions, the fact that people shifted to buying goods rather than services during the pandemic recession, and federal government overspending earlier in 2021. Inflation has been stuck at low levels for several decades now, and as these underlying factors fade, this temporary blip won’t alter people’s long-run expectations about future inflation. The argument for “permanent” sounded like this: the supply chain and spending pressures would fade only gradually. In the meantime, rising inflation could become embedded in the expectations of firms as they set prices and workers as they looked for wage increases. In this way, the surge in inflation could become self-sustaining.
Of course, that argument happened before Russia decided to invade Ukraine. The supply chain disruptions that we were concerned about in summer 2021 were epitomized by lines of cargo ships waiting to be unloaded in west coast ports. But the current supply chain disruptions are about the waves of sanctions being imposed on Russia, the loss of agricultural output, spikes in energy prices, and a COVID outbreak in China that threatens supply chains there. So what’s the current thinking? The stable of macroeconomists at the Peterson Institute for International Economics has been posting a series of working papers and essays on the prospects for future inflation. The full range of views is represented, just in time for the Federal Open Market Committee meeting happening today and tomorrow (March 15-16).
In a Policy Brief making the more optimistic case, David Reifschneider and David Wilcox spell out an overall macroeconomic model, and along the way offer an interesting calculation about the connection between short-term inflation changes and what happens in the longer term. On the horizontal axis of this graph, each point represents a 20-year period. The question is: during this 20-year period, if there was a one-time jump in inflation, did it portend a higher inflation rate for the longer term? Back in the 1960s and 1970s, a short-term movement in inflation pretty much always translated one-to-one into a longer-term move. But in the last 20 years, short-term shocks to inflation have typically faded away quickly, having little or no longer-term persistence.
As you look at this figure, an obvious question is whether 2021 looks more like the late 1960s–that is, a run-up to lasting and higher inflation–or whether it looks more like an unclassifiable pandemic blip. I should add that the authors note:
The statistical analysis in this Policy Brief was conducted before Russia invaded Ukraine. As a result of the war, the inflation situation will probably get worse during the next few months before it gets better, and could do so in dramatic manner if Russian energy exports are banned altogether. Nonetheless, if the key considerations identified in this Policy Brief remain in place, and if monetary policymakers respond to evolving circumstances in a sensible manner, the inflation picture should look considerably better in the next one to three years.
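For the curious, here is a rough sketch of how a rolling persistence calculation of this general kind can be run. The data are simulated, and the one-year-ahead regression below is much simpler than the authors’ actual long-horizon specification.

```python
import numpy as np
import pandas as pd

# Simulated annual inflation: persistent before 1990, mean-reverting afterward.
# These dynamics are invented to illustrate the rolling calculation.
rng = np.random.default_rng(3)
years = np.arange(1960, 2022)
inflation = np.zeros(len(years))
for i in range(1, len(years)):
    rho = 0.9 if years[i] < 1990 else 0.2
    inflation[i] = rho * inflation[i - 1] + rng.normal(0, 1)

# For each 20-year window, regress next year's inflation on this year's inflation
# and record the slope, a crude measure of how much a jump in inflation persists.
window = 20
slopes = {}
for start in range(len(years) - window):
    x = inflation[start:start + window - 1]
    y = inflation[start + 1:start + window]
    slopes[years[start + window - 1]] = np.polyfit(x, y, 1)[0]
print(pd.Series(slopes, name="persistence").round(2).tail())
```

In this simulation, the estimated slope is high in the early windows and much lower in the recent ones, which is the shape of the pattern described above.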
For a gloomier view, Olivier Blanchard responded in “Why I worry about inflation, interest rates, and unemployment” (March 14). Blanchard points out that when inflation rises, the Fed typically raises interest rates. This graph shows the inflation rate in red. The blue line shows the real policy interest rate–that is, the federal funds interest rate set by the Federal Reserve minus the rate of inflation. Because it’s a real interest rate, the spikes in inflation in the 1970s and more recently push the real interest rate down into negative territory. You can see that back in the late 1970s and early 1980s, as the real policy interest rate rose, inflation came down–albeit at the cost of a severe double-dip recession in 1980 and again in 1981-82.
In short, Blanchard argues that the Federal Reserve is “behind the curve” as it was in February 1975 when inflation had already hit double-digits and would do so again in the later part of that stagflationary decade.
Of course, historical comparisons always require some interpretation. Given that short-term inflationary blips have tended to fade away for a few decades now, why won’t it happen this time? Blanchard makes a case that this time is different:
The issue, however, is how much the past few decades, characterized by stable inflation and nothing like COVID-19 or war shocks, are a reliable guide to the future. There are good reasons to doubt it. What I believe is central here is salience: When movements in prices are limited, when nominal wages rarely lag substantially behind prices, people may not focus on catching up and may not take variations in inflation into account. But when inflation is suddenly much higher, both issues become salient, and workers and firms start paying attention and caring. I find the notion that workers will want to be compensated for the loss of real wages last year, and may be able to obtain such wage increases in a very tight labor market, highly plausible, and I read some of the movement in wages as reflecting such catchup.
Jason Furman, in his contribution, also expects inflationary pressure to persist:

Will the statistical relationships of the 25 years leading up to the pandemic reassert themselves in 2022? They may. But it would not be my central case: Short-run inflation expectations and wage- and price-setting behavior indicate a degree of inflation inertia that is closer to what was experienced in the 1960s–1980s than the low inflation of recent decades. Moreover, counting on inflation to fall could lead to policy errors that could actually prevent it from happening. … Over the last two years, productivity rose at an annual rate of 2.3 percent, only slightly above trend, while compensation per hour rose at 7.0 percent per year, far above the trend rate … As a result, unit labor costs are up 4.7 percent per year, a rate that is consistent with a similar rate of inflation if the labor share is unchanged. … Given that wages can be sticky and are adjusted only periodically, it is likely that much of the big price increases will show up in wages going forward. It is likely that nominal wage growth over the next year or two will be at least 5.5 percent, given the combination of catch-up for past price increases, staggered wage setting, and tight labor markets. A reasonable forecast is that annual productivity growth will be about 1.5 percent. …
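A bit of arithmetic helps to see the logic of that passage: growth in unit labor costs is approximately wage growth minus productivity growth, so 7.0 − 2.3 ≈ 4.7 percent looking backward, while the forecast of 5.5 percent wage growth and 1.5 percent productivity growth implies unit labor cost growth, and thus inflation if the labor share stays unchanged, of roughly 4 percent going forward.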
Furman also suggests that it may be time for the Fed to change its inflation target from 2% to 3%.
Inflation coming under control need not be strictly defined as 2 percent inflation or 2 percent average inflation. Stabilizing inflation at 3 percent would itself be an accomplishment. If inflation does settle there, it would be very painful to bring it down much more: With a flat Phillips curve, doing so would likely require a recession. The ideal outcome could be inflation settling at 3 percent—and the target resetting in the Fed’s next framework review—a measure that could improve macroeconomic stability by giving the Fed more scope to cut nominal interest rates to combat future recessions.
How high will the Fed need to raise the federal funds interest rate, the “policy” interest rate that it targets? The current target rate for this interest rate is near-zero: specifically, in the range of 0-0.25%. Karen Dynan (who was on the staff of the Fed for 17 years) discusses “What is needed to tame US inflation?” (March 10, 2022). She argues that the Fed needs to raise its policy interest rate dramatically and to do so soon, because it’s better to risk a recession now than to risk an even bigger recession from not acting quickly enough. She writes:
To keep inflation expectations anchored (or reanchor them) and restore slack, the Fed will need to tighten policy considerably, moving from its very accommodative current stance to a neutral stance and perhaps beyond. Doing so will entail both reductions in the size of its balance sheet and significant increases in the federal funds rate. If the equilibrium real funds rate is 0.5 percent, as currently implied by Fed projections, and expected inflation is just 2 percent, the funds rate would need to reach 2.5 percent to achieve a neutral stance. Because relevant inflation expectations are probably higher and a tighter-than-neutral stance may be needed, the Fed should move toward a federal funds rate of 3 percent or higher over the coming year. Such an increase would create a material risk of a sharp slowdown in economic activity—but not tightening policy significantly now would increase the chance that inflation stays high, which would require even tighter policy later.
In response to the 2020 pandemic-induced recession, the Fed quickly adopted an ultra-loose policy stance, dropping short-term interest rates to zero and buying bonds to pull down long-term rates. In retrospect, it should have begun returning to a more neutral policy stance after the passage of the American Rescue Plan, in March 2021. But neither the Fed nor most private forecasters began to predict an overheating economy until much later in the year. As the magnitude and persistence of the rise in inflation became apparent in late 2021, the Fed appropriately signaled a tightening of policy, beginning with a tapering of its bond purchases. These purchases will have ended by March 16, when the Fed is expected to announce the first increase in its short-term policy rates. The Fed should project a steady rise in policy rates to a neutral level, just over 2 percent, by January 2023. It should also announce that it will soon start to allow some of the bonds it purchased to run off its balance sheet as they mature. These runoffs should increase over the summer and reach a peak of $100 billion per month by the fall. That would be twice as fast as the reduction in bond holdings after the Fed’s last bout of bond buying, and it would hasten a return to neutral conditions for longer-term interest rates. …
If PCE inflation settles in much above 2 percent by the end of 2023, the big question will be whether the Fed needs to slow the economy further, risking a recession, to get all the way back to 2 percent. As some economists have argued, the evidence of the past 25 years shows that an inflation rate of 2 percent is too low for many reasons, all of which lead to higher unemployment rates than necessary. It would be a mistake to cause or even risk a recession to get inflation down to a level that is too low. … The Fed should take this opportunity to raise its inflation target to 3 percent.
On Considering the Model and the Estimation Method Separately
You did mention the difference-in-difference work, so let me focus on what I’ve actually written about. I think this is generally a good lesson that I showed you can use traditional regression methods. In particular, you can expand the usual two-way fixed effects estimator. Actually, it’s not expanding the estimator, but expanding the model. My interpretation of the recent criticism of two-way fixed effects is that it’s not a criticism of the estimator but a criticism of the model. The model assumes a single treatment effect, regardless of how many cohorts there are and how long the time period is for the intervention. And I simply noted that if you set things up properly, you can apply regression methods to estimate a much more flexible model, and this largely overcomes the criticisms of the simple two way fixed effects analysis.
So, what I tried to emphasize with my students is that it’s very important to keep separate the notion of a model and an estimation method. And I sometimes forget myself. I will say things like OLS [ordinary least squares] model, but OLS is not a model. It’s an estimation method which we can apply to various kinds of models. It’s up to us to be creative and use the tools that we have so that we apply those methods to models that don’t make strong assumptions. I hope that this idea bridges again a lot of my research, which is pretty simple. It’s trying to find simpler ways to do more flexible analysis, at the point that it gets really hard.
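To make the distinction concrete, here is a minimal sketch on a simulated staggered-adoption panel: first the familiar two-way fixed effects regression that imposes a single treatment effect, then a more flexible regression that lets the effect vary with time since adoption. The flexible version is an event-study variant in the spirit of the point above, not the exact specification discussed in the interview, and all variable names and numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# A toy staggered-adoption panel; cohorts, effect sizes, and noise are invented.
rng = np.random.default_rng(1)
rows = []
for unit in range(200):
    cohort = rng.choice([0, 2013, 2016])              # 0 = never treated
    for year in range(2010, 2020):
        treated = int(cohort != 0 and year >= cohort)
        effect = 0.5 * (year - cohort + 1) if treated else 0.0   # grows after adoption
        y = 0.01 * unit + 0.2 * (year - 2010) + effect + rng.normal(0, 1)
        rows.append((unit, year, cohort, treated, y))
df = pd.DataFrame(rows, columns=["unit", "year", "cohort", "treated", "y"])

# (1) The restrictive model: unit and year fixed effects plus one common effect.
twfe = smf.ols("y ~ treated + C(unit) + C(year)", data=df).fit()
print("single-effect estimate:", round(twfe.params["treated"], 2))

# (2) A more flexible regression: let the effect vary with years since adoption.
df["event_time"] = np.where(df["treated"] == 1, df["year"] - df["cohort"], -1)
flex = smf.ols("y ~ C(event_time, Treatment(reference=-1)) + C(unit) + C(year)",
               data=df).fit()
```

Both regressions are estimated by ordinary least squares; what changes is the model, which is exactly the distinction being drawn here between the estimation method and the model.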
On the Temptations of Simulations and Machine Learning
I was in the middle of doing some simulations for some recent nonlinear difference-in-differences methods that I’ve been working on. But then I was thinking, as I was doing the simulations and changing the parameters of the simulations, am I doing this to learn about how this estimator compares with other estimators, or am I trying to rig it so that my estimator looks the best? So, I was really just making a statement. Like you know, it’s human nature to want yours to be the best, right? One uses the machine to learn about that, and I’m partly making a statement. I’m trying to be as objective as I can by showing cases where the methods I’m proposing work better but also being upfront about cases where other methods will work better. …
When we publish papers, the best way to get your work published is to show that it works better than existing methods. Since the people writing the theory and deriving the methods are the same ones doing the simulations, it will probably be better if there’s some disconnection there. … I’ve always thought that we should have more competitions, such as blind competitions where people who participate don’t know what the truth is. They apply their favorite method across a bunch of different scenarios, so we can evaluate how the different methods do. I’m guessing that machine learning will come out pretty well with that, but that’s an impression. I’m not convinced that somebody using basic methods who has good intuition and is creative can’t do as well. …
I think the work on applying machine learning methods to causal inference has guaranteed that it will have a long history in econometrics and other fields that use data analysis. When I took visits to campuses, Amazon, Google, they’re using machine learning methods quite a bit. That’s no secret. These companies are in the business of earning profits, and they’re not going to employ methods that somehow aren’t working for them. So, I think the market is certainly speaking on that. For prediction purposes, they seem to work very well.
On Simplicity and Credibility of Methods
It’s interesting that if you look at the literature on intervention analysis and difference-in-difference, in some ways we’re trying to go back to simpler things. So, if you were to compare today with twenty years ago and see what econometrics people are doing, it seems to me that structural methods may be more out of favor now than they were fifteen years ago with this re-emergence of difference-in-difference. It seems that we are always looking for natural experiments and interventions to learn things about policy. … So, I wonder if our reaction to these complications in the real world is leading us to simplify the econometrics. Or, at least we are going to only believe analyses that have some clear way to identify the causal effect of an intervention rather than our relatively simple economic models.
For gains in computing power, it’s of course well-known that productivity growth has taken off in the last 60 years or so in what is often referred to as “Moore’s law,” the roughly accurate empirical prediction made back in the 1960s that the number of components packed on a computer chip would double about every two years, implying a sharp fall in computing costs and a correspondingly sharp rise in the uses of this technology.
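As a back-of-the-envelope check on what that prediction implies: doubling every two years for 60 years is about 30 doublings, and 2^30 is roughly a billion-fold improvement. The even larger factors that Nordhaus reports below reflect a longer comparison period, reaching all the way back to manual computation.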
The first calculator to enjoy large sales was the “arithmometer,” designed and built by Thomas de Colmar, patented in 1820. This device used levers rather than keys to enter numbers, slowing data entry. It could perform all four arithmetic operations, although the techniques are today somewhat mysterious. The device was as big as an upright piano, unwieldy, and used largely for number crunching by insurance companies and scientists. Contemporaneous records indicate that 500 were produced by 1865, so although it is often called a “commercial success,” it was probably unprofitable.
According to the calculations from Nordhaus, “there has been a phenomenal increase in computer power over the twentieth century. Depending upon the standard used, computer performance has improved since manual computing by a factor between 1.7 trillion and 76 trillion.”
Nordhaus writes: “This finding implies that the growth in the frontier volume of lighting has been underestimated by a factor of between nine hundred and sixteen hundred since the beginning of the industrial age.”
Back in 1997, one might have assumed that the rise of the compact fluorescent bulb was the apotheosis of gains in lighting technology. But LED lighting was already on the way. Indeed, Roland Haitz proposed what has come to be called “Haitz’s Law” back in 2000, “which predicts that for every 10 years, the cost per lumen falls by a factor of 10 and the amount of light generated per LED package increases by a factor of 20.” Since then, the gains in cost and quality of LED lighting have largely driven the coil-shaped compact fluorescent light bulbs out of the market, and the efficiency gains in lighting that can be customized and programmed for desired uses have continued to march ahead.
The productivity gains in production of nails are not nearly as large as in computing or lighting, but from a certain perspective they are just as remarkable. After all, the methods of producing computing power or lighting would look like magic two centuries ago, but a modern nail would be readily recognizable to those using nails 200-300 years ago. If the product remains essentially the same, how much room can there be for productivity gains?
The changes can be real and substantial. As Sichel explains, hand-forged nails were common from Roman times up to the 1820s. There was then a shift in the 19th century to cut nails, “made by a bladed machine that cuts nails from thin strips of iron or steel,” which were produced with water power, then steam, and then electricity. By the 1880s these nails had shifted from iron to steel. At about this time, there was a shift to wire nails, “made by cutting each nail from a coil of drawn wire, sharpening a tip, and adding a head,” which were much lighter and thus changed the cost-effectiveness of shipping nails over longer distances.
Sichel collects a wide range of data on nails, and on the transitions between different kinds of nails over time, and suggests that the real price of nails didn’t change much during the 1700s, but then started a substantial decline, falling by roughly a factor of 10 from about 1800 up through the 1930s.
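Spread over roughly 130 years, that factor-of-ten decline works out to an average price drop on the order of 2 percent per year, sustained decade after decade.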
After that, the rise in the price of US nails represents a different story: imported nails took over the low-price end of the US nail market starting in the 1950s, while US nail producers focused instead on higher-priced nails for specialized uses–which led to the higher prices for US-produced nails in the figure.
The change here is dramatic. Nails used to be precious. In the 1700s, abandoned buildings were sometimes burned down to facilitate recovering the nails that had been used in their construction. Circa 1810, according to Sichel’s calculations, nails were about 0.4% of US GDP: “To put this share into perspective, in 2019 household purchases of personal computers and peripheral equipment amounted to roughly 0.3 percent of GDP and household purchases of air travel amounted to about 0.5 percent. That is, back in the 1700s and early 1800s, nails were about as important in the economy as computers or air travel purchased by consumers are today.”
Of course, the changes with nails did not happen in a vacuum, but instead were closely related to other technology-related changes in materials, energy sources, machines used in manufacturing, the skills of workers, and so on. This interdependence with other technological changes also holds true for the productivity gains and cost decreases in computing and lighting. Indeed, Sichel points out that even though the price of nails themselves stopped declining, the price of an installed nail dropped dramatically in recent decades with the invention of the pneumatic nail gun.