Some Economics of Tobacco Regulation

Cigarette smoking in the United States is implicated in about 480,000 deaths each year–about one in five deaths. Cigarette smokers on average lose about 10 years of life expectancy. According to a US Surgeon General report in 2020:

Tobacco use remains the number one cause of preventable disease, disability, and death in the United States. Approximately 34 million American adults currently smoke cigarettes, with most of them smoking daily. Nearly all adult smokers have been smoking since adolescence. More than two-thirds of smokers say they want to quit, and every day thousands try to quit. But because the nicotine in cigarettes is highly addictive, it takes most smokers multiple attempts to quit for good.

Philip DeCicca, Donald Kenkel, and Michael F. Lovenheim summarize the evidence on “The Economics of Tobacco Regulation: A Comprehensive Review” (Journal of Economic Literature, September 2022, 883-970). Of course, I can’t hope to do justice to their work in a blog post, but here are some of the points that caught my eye.

1. US efforts at smoking regulation changed dramatically beginning in the late 1990s, with an enormous jump in cigarette taxes and smoking restrictions.

For example, here’s a figure showing the combined federal and state tax rate on cigarettes as a percent of the price (solid line) and as price-per-pack (dashed line). In both cases, a sharp rise is apparent from roughly 1996 up through 2008.

In addition, smoking bans have risen substantially.

Governments around the world have implemented smoking bans sporadically over the past five decades, but they have become much more prevalent over the past two decades. … [W]orkplace, bar, and restaurant smoke-free indoor air laws became increasingly common. As of 2000, no state had yet passed a comprehensive ban on smoking in these areas, although some states had more targeted bans. From 2000–2009, the fraction of the US population covered by smoke-free worksite laws increased from 3 percent to 54 percent, and the fraction covered by smoke-free restaurant laws increased from 13 percent to 63 percent … Since the turn of the century, the increased taxation and regulation of cigarettes and tobacco is unprecedented and dramatic.

2. Given that tobacco usage is being discouraged in a number of different ways, all at the same time, it’s difficult for researchers to sort out the individual effects of, say, cigarette taxes vs. workplace smoking bans vs. government-mandated health warnings vs. changing levels of social approval.

3. My previous understanding of the conventional wisdom was that demand for cigarettes from adult smokers was relatively inelastic, while demand from younger smokers was relatively elastic. The underlying belief was that adult smokers, as a group, have a longer-lasting tobacco habit and more income, so it is harder for them to shake the habit, while the tobacco use of younger smokers is more malleable. This conventional wisdom may need some adjustments.

The consensus from the last comprehensive review of the research that was conducted 20 years ago (Chaloupka and Warner 2000) indicates that adult cigarette demand is inelastic.
More recent research from a time period of much higher cigarette taxes and lower smoking rates supports this consensus, however, there is also evidence that traditional
methods of estimating cigarette price responsiveness overstate price elasticities of
demand. As well, more recent research casts doubt on the prior consensus that youth
smoking demand is more price-elastic than adult demand; the most credible studies on
youth smoking indicate little relationship between smoking initiation and cigarette
taxes. The inelastic nature of cigarette demand suggests cigarette excise taxes are an efficient revenue-generating instrument.

To put it another way, higher cigarette taxes do a decent job of collecting revenue, but they don’t do much to discourage smoking.
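To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (the price, the tax levels, and the elasticities) are hypothetical and are not taken from the review, and the sketch assumes the tax is fully passed through to the price; the point is only to show why inelastic demand makes a cigarette tax a decent revenue instrument but a weak deterrent.

```python
# Hypothetical illustration: with inelastic demand, a tax-driven price
# increase reduces smoking only a little, so tax revenue rises.

def new_quantity(q0, price_change_pct, elasticity):
    """Approximate quantity after a percentage price change, given a
    constant price elasticity of demand."""
    return q0 * (1 + elasticity * price_change_pct)

packs_sold = 100.0   # initial quantity, as an index (so changes read as percent)
price = 8.00         # initial price per pack, dollars (hypothetical)
tax = 2.00           # initial tax per pack, dollars (hypothetical)
new_tax = 3.00       # the tax rises by $1

price_change_pct = (new_tax - tax) / price   # +12.5%, assuming full pass-through

for elasticity in (-0.4, -1.2):              # inelastic vs. elastic demand
    q1 = new_quantity(packs_sold, price_change_pct, elasticity)
    print(f"elasticity {elasticity}: smoking falls {packs_sold - q1:.1f}%, "
          f"tax revenue goes from {tax * packs_sold:.0f} to {new_tax * q1:.0f}")
```

With the inelastic value, smoking falls only about 5 percent while revenue rises by more than 40 percent; with the elastic value, smoking falls three times as much and the revenue gain is smaller.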

4. If cigarette taxes are really about revenue collection, because they don’t do much to discourage smoking, then it becomes especially relevant that low-income people tend to smoke more, and thus end up paying more in cigarette taxes. This figure shows cigarette use by income group; the next figure shows cigarette taxes paid by income group.

5. Broadly speaking, there are two economic justifications for cigarette taxes. One is what economists call “externalities,” which are the costs that cigarette smokers impose on others, including secondhand smoke and higher health care costs that are shared across public and private health insurance plans with non-smokers. The other is “internalities,” which are the costs that smokers who would like to quit, but find themselves trapped by nicotine addiction, impose on themselves. The authors write:

However, evidence on the magnitude of the externalities created by smoking does not necessarily support current tax levels. Behavioral welfare economics research suggests that the internalities of smoking provide a potentially stronger rationale for higher taxes and stronger regulations. But the empirical evidence on the magnitudes of the internalities from smoking is surprisingly thin.

6. Finally, in reading the article, I find myself wondering if the US is, to some extent, substituting marijuana for tobacco cigarettes. As the authors point out, the few detailed research studies on this subject have not found such a link. However, in a big-picture sense, the trendline for cigarette use is decidedly down over time, while the trendline for marijuana use is up. A recent Gallup poll reports that “[m]ore people in the U.S. are now smoking marijuana than cigarettes.” Evidence from the National Survey on Drug Use and Health more-or-less backs up that claim:

Among people aged 12 or older in 2020, 20.7 percent (or 57.3 million people) used tobacco products or used an e-cigarette or other vaping device to vape nicotine in the
past month. … In 2020, marijuana was the most commonly used illicit drug, with 17.9 percent of people aged 12 or older (or 49.6 million people) using it in the past year. The
percentage was highest among young adults aged 18 to 25 (34.5 percent or 11.6 million people), followed by adults aged 26 or older (16.3 percent or 35.5 million
people), then by adolescents aged 12 to 17 (10.1 percent or 2.5 million people).

However, about one-fifth of the tobacco product users didn’t smoke cigarettes, and with that adjustment, cigarette smoking would be a little below total marijuana use.

For those who would like more on smoking-related issues, here are some earlier posts:

Some Human Capital Controversies and Meditations

The idea that job skills and experience can be viewed as an investment in future production–with the costs occurring in the present and a stream of payoffs going into the future–is an idea that goes back a long way. For example, when Adam Smith was writing in The Wealth of Nations back in 1776 about types of “fixed capital” in an economy (in Book II, Ch. 1), he offers four categories: “machines and instruments of trade,” “profitable buildings,” “improvements of land,” and a fourth category made up

… of the acquired and useful abilities of all the inhabitants or members of the society. The acquisition of such talents, by the maintenance of the acquirer during his education, study, or apprenticeship, always costs a real expence, which is a capital fixed and realized, as it were, in his person. Those talents, as they make a part of his fortune, so do they likewise of that of the society to which he belongs. The improved dexterity of a workman may be considered in the same light as a machine or instrument of trade which facilitates and abridges labour, and which, though it costs a certain expence, repays that expence with a profit.

The concept wasn’t new to Adam Smith, and wasn’t neglected by other economists of that time, either. For example, B.F. Kiker wrote about “The Historical Roots of the Concept of Human Capital” back in the October 1966 issue of the Journal of Political Economy, offering numerous examples of discussions of what we now call “human capital” from the late 17th century up through Adam Smith and on to modern times. As Kiker pointed out:

Several motives for treating human beings as capital and valuing them in money terms have been found: (1) to demonstrate the power of a nation; (2) to determine the economic effects of education, health investment, and migration; (3) to propose tax schemes believed to be more equitable than existing ones; (4) to determine the total cost of war; (5) to awaken the public to the need for life and health conservation and the significance of the economic life of an individual to his family and country; and (6) to aid courts and compensation boards in making fair decisions in cases dealing with compensation for personal injury and death.

However, the terminology of “human capital” has been the source of various overlapping controversies. Putting a monetary value on people is in some cases necessary for certain practical kinds of decision-making. For example, a court making a decision about damages in a wrongful death case cannot just throw up its hands and say that “putting a monetary value on a person is impossible,” nor can it reasonably say that a human life is so precious that it is worth, say, the entire GDP of a country. But in making these practical decisions, it’s important to remember that estimating the economic value produced by a person is not intended to swallow up and include all the ways in which people matter. When a more formal economic analysis of “human capital” was taking off in the early 1960s, Theodore W. Schultz delivered a Presidential Address to the American Economic Association on this subject (“Investment in Human Capital,” American Economic Review, March 1961, pp. 1-17). He sought to explain the concerns over the terminology of “human capital” in this way:

Economists have long known that people are an important part of the wealth of nations. Measured by what labor contributes to output, the productive capacity of human beings is now vastly larger than all other forms of wealth taken together. What economists have not stressed is the simple truth that people invest in themselves and that these investments are very large. Although economists are seldom timid in entering on abstract analysis and are often proud of being impractical, they have not been bold in coming to grips with this form of investment. Whenever they come even close, they proceed gingerly as if they were
stepping into deep water. No doubt there are reasons for being wary. Deep-seated moral and philosophical issues are ever present. Free men are first and foremost the end to be served by economic endeavor; they are not property or marketable assets. And not least, it has been all too convenient in marginal productivity analysis to treat labor as if it were
a unique bundle of innate abilities that are wholly free of capital.

The mere thought of investment in human beings is offensive to some among us. Our values and beliefs inhibit us from looking upon human beings as capital goods, except in slavery, and this we abhor. We are not unaffected by the long struggle to rid society of indentured service and to evolve political and legal institutions to keep men free from bondage. These are achievements that we prize highly. Hence, to treat human beings as wealth that can be augmented by investment runs counter to deeply held values. It seems to reduce man once again to a mere material component, to something akin to property. And for man to look upon himself as a capital good, even if it did not impair his
freedom, may seem to debase him. No less a person than J. S. Mill at one time insisted that the people of a country should not be looked upon as wealth because wealth existed only for the sake of people [15]. But surely Mill was wrong; there is nothing in the concept of human wealth contrary to his idea that it exists only for the advantage of people. By investing in themselves, people can enlarge the range of choice available to them. It is one way free men can enhance their welfare. …

Yet the main stream of thought has held that it is neither appropriate nor practical to apply the concept of capital to human beings. Marshall [11], whose great prestige goes far to explain why this view was accepted, held that while human beings are incontestably capital from an abstract and mathematical point of view, it would be out of touch with the market place to treat them as capital in practical analyses. Investment in human beings has accordingly seldom been incorporated in the formal core of economics, even though many economists, including Marshall, have seen its relevance at one point or another in what they have written.

The failure to treat human resources explicitly as a form of capital, as a produced means of production, as the product of investment, has fostered the retention of the classical notion of labor as a capacity to do manual work requiring little knowledge and skill, a capacity with which, according to this notion, laborers are endowed about equally. This notion of labor was wrong in the classical period and it is patently wrong now. Counting individuals who can and want to work and treating such a count as a measure of the quantity of an economic factor is no more meaningful than it would be to count the number of all manner of machines to determine their economic importance either as a stock of capital or as a flow of productive services.

Laborers have become capitalists not from a diffusion of the ownership of corporation stocks, as folklore would have it, but from the acquisition of knowledge and skill that have economic value [9]. This knowledge and skill are in great part the product of investment and, combined with other human investment, predominantly account for the productive superiority of the technically advanced countries. To omit them in studying economic growth is like trying to explain Soviet ideology without Marx.

As Schultz mentions, it has been a convenient analytical device for a long time now to divide up the factors of production into “capital” and “labor,” with capital owned by investors. The idea of “human capital” scrambles these simple categories and leads to various logical fallacies: for example, if capitalists own capital, and “human capital” exists, then aren’t economists claiming that capitalists own humans, too? Such strained verbal connections miss the basic reality that what differentiates high-income and low-income countries is not primarily the quantity of physical capital investment, but rather the skills and capabilities of the workers. If you wish to explain growth of economic production or differences in production around the world, the question of how these skills and capabilities can be enhanced, and how they affect economic production, is not avoidable.

The Summer 2022 issue of the Journal of Economic Perspectives (where I work as Managing Editor) includes a short two-paper symposium on human capital. Katharine G. Abraham and Justine Mallatt discuss the methods of “Measuring Human Capital” (pp. 103-30). As Kiker noted back in his 1966 essay:

Basically, two methods have been used to estimate the value of human beings: the cost-of-production and the capitalized-earnings procedures. The former procedure consists of estimating the real costs (usually net of maintenance) incurred in “producing” a human being; the latter consists of estimating the present value of an individual’s future income stream (either net or gross of maintenance).

Abraham and Mallatt discuss the current efforts to carry out these kinds of calculations. Both approaches present considerable difficulties. For example, it’s possible to add up spending per student on education, adjust for inflation, make some assumptions about how the human capital created by education depreciates over time, and build up a cost-based approach to estimating human capital. But obvious questions arise about how to adjust for the quality of education received, or whether to include other aspects of human capital, including physical health or on-the-job training.

Similarly, it’s possible to start with the idea that workers with more education, on average, get paid more, and then work backwards into an extended calculation of what their earlier education must have been worth, if it resulted in this higher pay level. Doing this calculation across generations with very different education and work outcomes places a lot of stress on the available data and requires copious assumptions on issues like how to treat those who may still be completing their education, along with estimates of future growth in wages and how best to discount these future values back to a single present value. It turns out that estimates of human capital based on the income received from education are often 10 times larger than estimates of human capital based on the costs of providing that education–which suggests that further research clarifying the parade of underlying assumptions is needed.
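To see why the two approaches in Kiker’s description can diverge, here is a minimal sketch with purely hypothetical spending, earnings, and discount-rate figures (none of them drawn from Abraham and Mallatt): the cost-based estimate simply adds up education spending, while the income-based estimate discounts a stream of future earnings gains back to a present value.

```python
# Hypothetical comparison of cost-based vs. income-based human capital estimates.

def present_value(annual_amount, years, discount_rate):
    """Present value of a constant annual amount received for `years` years."""
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(1, years + 1))

# Cost-based: add up spending on schooling (hypothetical figures)
spending_per_year = 15_000
years_of_schooling = 16
cost_based = spending_per_year * years_of_schooling

# Income-based: discount the earnings premium attributed to that schooling
earnings_premium = 25_000    # extra earnings per year (hypothetical)
working_years = 40
discount_rate = 0.03
income_based = present_value(earnings_premium, working_years, discount_rate)

print(f"cost-based estimate:   ${cost_based:,.0f}")
print(f"income-based estimate: ${income_based:,.0f}")
```

Even with these made-up numbers, the income-based figure comes out well over twice as large as the cost-based one, and the gap is highly sensitive to the assumed earnings premium and discount rate.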

In the other JEP paper, David J. Deming discusses the state of the evidence on “Four Facts about Human Capital” (pp. 75-102). He writes:

This paper synthesizes what we have learned about human capital … into four stylized facts. First, human capital explains a substantial share of the variation in labor earnings within and across countries. Second, human capital investments have high economic returns throughout childhood and young adulthood. Third, the technology for producing foundational skills such as numeracy and literacy is well understood, and resources are the main constraint. Fourth, higher-order skills such as problem-solving and teamwork are increasingly economically valuable, and the technology for producing them is not well understood. We have made substantial progress toward validating the empirical predictions of human capital theory. We know how to improve foundational skills like numeracy and literacy, and we know that investment in these skills pays off in adulthood.
However, we have made much less progress on understanding the human capital
production function itself. While we know that higher-order skills “matter” and are an important element of human capital, we do not know why.

In my own mind, the essays make a compelling case that understanding human capital is of central importance for understanding many aspects of the economy. Maybe a different label than “human capital” would be more rhetorically pleasing: I certainly would not claim that the economics profession has been especially graceful or sensitive in its framing and nomenclature. But it is a striking fact that no country with broadly high levels of educational human capital is also a low-income country. When I think about the importance of human capital for entrepreneurialism and innovation that can address major problems, I sometimes find myself saying that “in the long run, a country’s economic future is all about its human capital.”

Financial Services: Share of Profits

Like most economists, I’m not allergic to the idea of profits. I view them as a signal that, in a certain area of an economy, the willingness of people to buy a certain product exceeds the costs of producing that product–thus, profits are a way to encourage additional production in that area. Still, it’s provocative even to me to observe that about one-quarter of all corporate profits go to the financial services sector. Here’s a figure from Paul W. Wilson in his article, “Turbulent Years for U.S. Banks: 2000-20” (Review: Federal Reserve Bank of St. Louis,
Third Quarter 2022, pp. 189-209).

If you want to see the underlying data for this calculation, it’s available from the US Bureau of Economic Analysis at “Table 6.16D. Corporate Profits by Industry.” Just to be clear, financial services includes a lot more than just banks: it’s “credit intermediation and related activities; securities, commodity contracts, and other financial investments and related activities; insurance carriers and related activities; funds, trusts, and other financial vehicles; and bank and other holding companies.”

What makes this interesting is that financial services, as a share of the value-added in GDP, have been about 7-8% of GDP in recent decades, according to the US Bureau of Economic Analysis (see the “Value added by Industry as a Percentage of Gross Domestic Product” table, line 55). Thus, the question is why financial services account for 7-8% of GDP, but about 25-30% of all corporate profits.

One possibility, as noted at the start, is that the profits are sending a signal that the US economy would benefit from a dramatic expansion of financial services. While I’m sure that financial services could benefit from innovation and entry, like other sectors, it’s not at all clear to me that, as a society, we are crying out for a much greater quantity of financial services.

Another possibility is that there is limited competition in financial services, perhaps due to difficulties of entry and costs of regulation, which leads to higher profits for incumbents. In the specific area of banking, Wilson notes:

The number of Federal Deposit Insurance Corporation (FDIC) insured commercial banks and savings institutions fell from 10,222 at the end of the fourth quarter of 1999 to 5,002 at the end of the fourth quarter of 2020. During the same period, among the 5,220 banks that disappeared, 571 exited the industry because of failures or assisted mergers, while the creation of new banks slowed. From 2000 through 2007, 1,153 new bank charters were issued, and in 2008 and 2009, 90 and 24 new charters were issued, respectively. But from 2010 through 2020, only 48 new commercial bank charters were issued. The decline in
the number of institutions since 2000 continues a long-term reduction in the number of banks operating in the United States since the mid-1980s.

A lack of competition in the financial sector essentially implies that when financial transactions are involved–whether corporate or household–this sector is able to carve itself a bigger slice of the pie than would be likely if there were more competitors. A more subtle possibility is that at least some of the profits being attributed to the financial sector were actually created in other sectors of the economy, but through various accounting transactions these profits are being reported instead in the financial sector.

A final possibility is that the way in which GDP measures the value-added of the financial services sector may tend to understate the size of the sector. After all, it’s hard to separate out what the financial sector produces into changes in the quantity of services produced, changes in the quality of those services (as financial technology evolves), and the price of each service.

I don’t have evidence to back up my suspicions here. But when the share of profits for a sector is much larger than the value-added of that sector, on a sustained basis over several decades, something is out of whack.

Work From Home: Disrupting the Unwritten Labor Market

In simple descriptions of the labor market, people work for pay. This isn’t wrong, but it is incomplete. A job also involves a wide array of other costs and understandings. For example, commuting to work is a cost, and in many jobs so is dressing for work. Jobs can have greater or lesser flexibility about when they start in the day, whether you can extend a lunch break when needed, or when you are done. Some workplaces may have a subsidized lunchroom, or just some occasional free doughnuts. For a given employer, co-workers can be pleasant or alienating, and customers can be businesslike or obstreperous. Some jobs can expose workers to additional health risks. Managers can encourage input and offer what flexibility they can, or they can be mini-tyrants.

In short, a job is not just hours-for-pay, but is also a set of background characteristics and rules. In a happy workplace, both workers and employers will go beyond the minimum necessary courtesies and try to help each other out; conversely, an unhappy workplace is a sort of cold war of animosities and provocations.

In almost every job, the COVID pandemic changed a substantial number of non-pay characteristics. Working from home, for example, is a shorthand for changes in commuting costs, flexibility, level of managerial oversight, the co-workers, benefits of what used to be available near the on-site location, and so on. Those who couldn’t work from home found, as a result of the pandemic, that the health risks and day-to-day patterns of in-person work had changed. Unemployment rates have come back down in the last couple of years, but the wide array of disrupted non-pay job-related arrangements is still working its way toward new agreements and understandings.

Cevat Giray Aksoy, Jose Maria Barrero, Nicholas Bloom, Steven J. Davis, Mathias Dolls, and Pablo Zarate look at some aspects of this transition in “Working From Home Around the World,” written for the most recent Brookings Papers on Economic Activity (Fall 2022, paper, comments, and video available online). Much of their paper looks at surveys on the prevalence of work-from-home across a number of countries, and the hopes and expectations about what future work-from-home patterns will look like. From the abstract:

The pandemic triggered a large, lasting shift to work from home (WFH). To study this
shift, we survey full-time workers who finished primary school in 27 countries as of mid 2021 and early 2022. Our cross-country comparisons control for age, gender, education, and industry and treat the U.S. mean as the baseline. We find, first, that WFH averages 1.5 days per week in our sample, ranging widely across countries. Second, employers plan an average of 0.7 WFH days per week after the pandemic, but workers want 1.7 days. Third, employees value the option to WFH 2-3 days per week at 5 percent of pay, on average, with higher valuations for women, people with children and those with longer commutes. Fourth, most employees were favorably surprised by their WFH productivity during the pandemic. Fifth, looking across individuals, employer plans for WFH levels after the pandemic rise strongly with WFH productivity surprises during the pandemic. Sixth, looking across countries, planned WFH levels rise with the cumulative stringency of government-mandated lockdowns during the pandemic.

These results suggest to me that a collision of expectations about work-from-home is on its way. Workers would like a little more work-from-home than they experienced in the pandemic; employers would like substantially less. One can imagine a future where some companies offer more work-from-home, and attract workers who place a high value on this aspect of the job, and other companies don’t. Workers would then need to reshuffle between these companies, and the market would then sort out how they compete against each other. Some employers had good experience with unexpectedly high work-from-home productivity during the pandemic; some did not. Some organizations and workers have made substantial investments in facilitating work-from-home; others did not.

The authors also point out, without taking a strong position, some of the broader issues raised by a shift to work-from-home. For example, it’s one thing for an experienced worker with a well-defined job to work from home, but for young adults just entering the workforce and looking for on-the-job training and support, it may look quite different. There is a well-supported belief that physical interactions in the workplace are often important for innovation and the spread of knowledge. Interactions in the virtual workplace have become much easier, but in terms of supporting innovation, will they be able to substitute for a decline in live in-person interactions?

For cities, perhaps the biggest short-term effect of a rise in work-from-home is less urban activity and a lower tax base from commercial real estate and sales taxes.
As an earlier example, the waves of suburbanization after World War II imposed high costs on cities, and on those who were not able for whatever reason to relocate to the suburbs. But economic forces can also cut in unexpected directions. As the authors write: “If older and richer workers decamp for suburbs, exurbs and amenity-rich consumer cities, the resulting fall in urban land rents will make it easier for young workers to live in and benefit from the networking opportunities offered by major cities.” With this kind of change, cities and those who run them would need to rethink the sources of their economic vitality and attractiveness, along with the mix of taxes and services they currently provide.

Job Vacancies and Core Inflation

Perhaps the key question about the higher level of US inflation is whether it is likely to be transitory, and thus to fade away on its own, or whether it is likely to be permanent unless aggressive policy steps are taken–like additional increases in interest rates by the Federal Reserve.

The case that the current inflation is likely to fade away on its own often argues that inflation has been caused by a confluence of events that probably will not last: people spending money in certain limited sectors of the economy when pandemic restrictions shut down other parts of the economy, supply chain hiccups that occurred partly as a result, and the spike in energy prices. The hope or expectation is that as these factors fade, inflation will fade, too.

The argument that inflation is likely to be permanent is linked to the “Great Resignation” in US labor markets. Firms are trying to hire. Job vacancy rates are sky-high. The result is that upward pressures on wages are likely to keep inflation high, unless or until the Fed unleashes additional interest rate increases that are likely to bring on a recession in the next year or two. Let’s walk through some of the evidence on job vacancies and inflation, and then address the obvious concerns in “blaming” higher wages for causing inflation.

As a starter, here’s the basic numerical count of job vacancies since 2000. Notice that the number was rising even before the pandemic recession in 2020. Indeed, if you squint really hard, you can imagine a more-or-less straight line from the vacancy trends before the pandemic to vacancy numbers that are not too far from current levels.

But from the standpoint of inflation, what really matters is job vacancies relative to the number of unemployed people. The concerns over wage inflation arise when there are lots of vacancies and a relatively small number of unemployed people seeking those jobs. Here’s the ratio of unemployed people to job vacancies from the Job Openings and Labor Turnover Survey (JOLTS) conducted by the US Bureau of Labor Statistics. Again, notice that this ratio was already trending lower before the pandemic. Also, notice that when this ratio falls below 1, the number of unemployed people is lower than the number of job vacancies.
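For readers who want the arithmetic spelled out, here is an illustrative calculation of that ratio. The counts below are made-up placeholders of roughly the right order of magnitude, not actual JOLTS figures; the real series is published by the US Bureau of Labor Statistics.

```python
# Illustrative only: placeholder counts, not actual JOLTS data.

unemployed = 6.0e6       # unemployed workers (hypothetical)
job_openings = 10.0e6    # job vacancies (hypothetical)

ratio = unemployed / job_openings
print(f"unemployed workers per job opening: {ratio:.2f}")
# A ratio below 1 means there are more open jobs than unemployed workers,
# which is the situation associated with upward pressure on wages.
```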

The “Beveridge curve” is another way to look at this data. This curve graphs the unemployment rate on the horizontal axis and the job openings rate on the vertical axis. The usual pattern here is a generally downward-sloping relationship, which is intuitively sensible: that is, times of higher unemployment also tend to be times with lower rates of job openings. Indeed, you can see in the bottom set of colored lines, all jumbled together, that the economy in the first decade of the 2000s was basically moving back and forth along a Beveridge curve.

But then the Beveridge curve shifts up during the period from July 2009 to February 2020: that is, for any given level of unemployment rate, the job vacancy rate was higher than before. And since the pandemic, the Beveridge curve has bumped up still higher to the black line: again, for any given level of unemployment rate, the rate of job vacancies is higher than it used to be. To put it another way, employers are seeking to fill more jobs now than they were at previous times when unemployment rates were roughly this low.

In short, no matter how you look at it, job vacancies are really high right now. I don’t think this phenomenon is yet well-understood. But workers are quitting jobs at high rates, often to take an alternative preferred job, and the employment/population ratio for US adults has not yet returned to pre-pandemic levels.

These job vacancy patterns set the stage for the research paper by Laurence Ball, Daniel Leigh, and Prachi Mishra, “Understanding U.S. Inflation During the COVID Era,” just published as part of the Brookings Papers on Economic Activity (Fall 2022, text as well as video of presentations and comments available). The research paper is full of models and calculations, and will not be especially accessible to the uninitiated. But the basic idea is straightforward.

The authors look at the recent rise in inflation. They find that when one looks at the overall inflation rate, the changes are indeed often due to factors like supply chain issues, energy price spikes, and the like. But if you focus only on “core” inflation, with changes in food and energy prices stripped out, then they find that the ratio of job vacancies to the unemployment rate is the major driving force. In other words, yes, some of the inflation of the last couple of years is temporary, but not all of it. In addition, they find that breaking the back of this underlying core inflation is likely to require additional interest rate increases by the Fed and a higher unemployment rate–which sounds to me like the kind of multidimensional economic downturn in product and labor markets that is called a “recession.”

One common response to discussions of how higher wages can drive up inflation is to accuse the teller of the tale of just being opposed to higher wages. This response has a nice “gotcha” zippiness. One can readily imagine a group of sympathetic listeners nodding vigorously and yelling “you tell ’em” and “preach it” and the like. But in an analytical sense, the response is confused. Over time, the only way to have sustainable broad-based higher wages is if they are built on a foundation of sustainable broad-based higher labor productivity. Perhaps the workplace-related innovations unleashed by the pandemic experience will lead to a surge of productivity, but at least so far, that isn’t apparent in the data. Otherwise, higher wages can unleash what used to be called the wage-price spiral: that is, higher wages driving prices up, and higher prices pushing wages up again. The appropriate social goal is to have higher wages that also have greater buying power, not higher wages that are just perpetually chasing higher price levels.

The Ball, Leigh, and Mishra paper is of course just one set of estimates, albeit from a well-regarded set of economists and published in a high-profile publication outlet. But they are not alone in suggesting that the pattern of very high job vacancies given the level of unemployment may be the key factor driving core inflation higher.

Interview with Stephanie Schmitt-Grohé: Inflation and Other Topics

David A. Price interviews Stephanie Schmitt-Grohé in the most recent issue of Econ Focus (Federal Reserve Bank of Richmond, Third Quarter 2022, pp. 24-28).

On the problem of using surprise inflation to finance government debt:

When Martín [Uribe] and I got interested in the topic of price stability, there was an influential paper on optimal monetary and fiscal policy that concluded that when you have a change in the fiscal deficit or government spending, responding by adjusting distortionary taxes — say, labor income taxes — is not good from a welfare point of view. What you can do instead, the argument went, is to have surprise inflation. So if you get, say, an increase in government spending, and you need to finance that, then if nobody’s expecting inflation, you can just have a one-year surprise inflation. And that literature concluded it was, in fact, the best thing to do: Keep tax rates steady and finance surprises to the budget with surprise inflation.

Martín and I wondered what would happen to this result if one were to introduce sticky prices — the idea that prices are costly to change — into the situation. Our contribution was to show in a quantitative model that the tradeoff between surprise inflation and tax smoothing was largely resolved in favor of price stability. With price stickiness, volatile inflation is welfare-reducing. It sort of overturned the previous result. …

One issue that I think has been coming back a little bit is how is the United States going to finance a massive fiscal deficit that created the big stack of debt? Are we going to use surprise inflation? Here our research would say no, it’s not optimal to do that.

On what historical experience has to say about temporary and permanent inflation:

We find ourselves a little bit in an unprecedented situation. Inflation has gone up rapidly. And so we [Schmitt-Grohé and co-author Martín Uribe] were thinking about this pretty unusual development for the postwar period.

We wanted to answer the question that I think everybody is interested in: Is this inflation hike temporary or permanent? Our idea was that during the postwar period — since 1955, say — the only big inflation was the inflation of the 1970s. And that was an inflation that built up slowly and then was ended also relatively slowly — quicker than it built up, but relatively slowly — by Paul Volcker in the 1980s. So we said, since the current inflation is unprecedented in the postwar period, what will we see if we just go further back in history?

Because we wanted to go back in history, we used the database of Òscar Jordà, Moritz Schularick, and Alan Taylor, which goes back to 1870. We saw that the macroeconomic stability that we had in the postwar era was special, at least compared to what we see since 1870. There were many more episodes of high and variable inflation. So we just asked if we give the purely statistical model a longer memory by allowing it to go back in time, how would it interpret the current increase in inflation?

We found that if we estimate the model since 1955, which is what most people do when they talk about cyclical fluctuations — actually, many people only start in the 1990s or look at the last 30 or 40 years, the so-called Great Moderation period — the model is led to interpret the entire current increase in inflation as permanent. But if the model is given the chance to look back further in time, where we had more episodes of a short-lived and large inflation spike, the interpretation is that only 1 or 2 percent of the current increase in inflation is of a more permanent nature.

An example to look at is the Spanish Influenza of 1918 in the United States. That was also a period of an inflation spike, but inflation had started already a year or two before the influenza pandemic. There were similarities to now, namely a pandemic and high inflation. There was a small increase in the permanent component of inflation during the years around the influenza pandemic, but the majority of it was transitory.

 On the multiple benefits of starting your post-PhD economics career as a research economist at the Federal Reserve:

I would say four things were great about the job. At the beginning, you have almost all of your time for research. So you come out of graduate school, you have all the papers of your dissertation, and you’re trying to polish them to send to journals. The Fed gives you the time to do that. I would say you have more time to do that if you work in the research department at the Fed than if you start teaching at a university because you have to make one or two course preps, which takes time. So that was one great thing.

A second great thing is they used to hire — probably this is still true — something like 20 or 30 Ph.D.s a year out of top graduate schools. And they were more or less all in macroeconomics. If you go to a university, most likely you have, at most, two or three junior colleagues in your field. But at the Fed, you had a large cohort of them with whom you could interact and talk at lunch — there was a culture of going for lunch together in the Fed’s cafeteria — so it was stimulating in that way.

Another thing that was great was that you had to do a little bit of policy work. The Board of Governors wants to learn what the research staff thinks about the economic issues of the moment and what economic policy would be the correct one. Once or twice a year, you had to write a memo that you would read aloud in the FOMC briefing, so your audience was Alan Greenspan and the other governors. So you got to work on interesting issues and you got an understanding of what the relevant questions are. The process gave you a pipeline of research questions that you could work on later.

Lastly, because the Board is such a big institution, it runs a pretty large program of workshops with outside speakers. Almost too many speakers came through — more than one per week. You got exposed to all the major figures in your field because they came to give a workshop or they came to visit the Fed for one or two days.

Higher Education and Critical Thinking

What and how much do students actually learn in college? How much does what they learn help them later in life? One slice of evidence on these very large questions comes from “Does Higher Education Teach Students to Think Critically?” edited by Dirk Van Damme and Doris Zahner, and just published by the OECD.

The primary focus of the study is on a test called the CLA+, which stands for Collegiate Learning Assessment. The test “is an assessment of higher education students’ generic skills, specifically critical thinking, problem solving and written communication.” I’ve many times heard colleges and universities offer a comment to incoming students like: “Don’t worry too much about your specific major. What’s important is that you will learn how to think.” The CLA+ test can be viewed as an attempt to measure these “how to think” skills. The test is a proprietary instrument created by a US-based non-profit, the Council for Aid to Education. The test has been extensively used in US higher education for the last decade or so, although a main focus of this report is the effort to develop an international version of the test.

Here’s the broad pattern for US colleges and universities. It’s not an official random sample, but it does include hundreds of thousands of students at hundreds of institutions, so it’s likely to have some useful information content.

The chart gives me a decidedly half-full, half-empty feeling. On the positive side, the share of college students at the lowest levels of “emerging” and “developing” declines between the entering (blue bars) and exiting (gray bars) students. On the negative side, the average gains are not as large as one might prefer to see for four years of college education; for example, 58% of the entering students are in the lowest two categories, and 47% of the exiting students are still in those lowest two categories. The report describes the general pattern in this way:

Overall, it is encouraging to see that during their time in a higher education programme, students improved their critical thinking skills. However, given the importance that most higher education programmes attach to promoting critical thinking skills, the learning gain is smaller than could be expected. If universities really want to foster 21st-century skills such as critical thinking, they need to upscale their efforts. While universities produce graduates who can be considered, on average, as proficient in critical thinking, the distribution of achievement is quite wide, with one-fifth of students performing at the lowest level. With half of exiting students performing at the two lowest levels, it is difficult to claim that a university qualification reliably signals a level of critical thinking skills expected by the global market place.

In addition, comparing US students who enter college and university with those who graduate will not capture the experience of those who don’t complete a degree.

However, despite the upward trend in college enrolment over the last two decades, college graduation rates remain relatively low within the United States. According to the National Center of Education Statistics (Hussar et al., 2020[7]), as of spring 2020, nearly 40% of students who began seeking a bachelor’s degree at a four-year institution in 2012 have yet to receive their degree. Furthermore, year-to-year retention rates vary considerably across institutions. Between 2017 and 2018, although highly selective institutions had high student retention rates of 97%, less selective and open-admissions schools retained a substantially smaller percentage (62%) of their students during this same period (Hussar et al., 2020[7]). Contrary to an oft-perpetuated notion that student retention is a “first-year” problem, student attrition remains a risk for students at all class levels, with approximately one-third of college dropouts having obtained at least three-quarters of the credits required for graduation (Mabel and Britton, 2018[8]). Although many students cite non-academic reasons such as financial difficulties, health or family obligations as the primary causes for dropping out or deferring their college education (Astin and Oseguera, 2012[9]), academic failure is also a significant factor contributing to lack of persistence and decreased retention of students in higher education.

Thus, as the report notes:

The analysis cannot positively confirm that the learning gain is caused by the teaching and learning experience within university programmes. It is possible that, for example, selection effects (selective drop-out), general maturing of the student population or effects of learning outside university contribute to the average learning gain.

With higher education, as with many other areas of public policy, it feels to me as if we often have argument-by-anecdote. One side points out some individual success stories; the other points out contrasting examples of failures. The CLA+ is just one test (although a well-used test that, as the OECD notes, is highly regarded both in the US and internationally). But this evidence is at best only mildly reassuring that the average college student is making strong progress in the general goal of “learning how to think.”

US Unions and Alternative Voices for Labor

In polling data, Americans offer strong support for labor unions. But in practice, the share of US workers belonging to a labor union has been falling for decades. How will these competing tensions work themselves out?

A report by an interdisciplinary group of union-friendly labor market researchers under the auspices of the Worker Empowerment Research Network explores the current state of “U.S. Workers’ Organizing Efforts and Collective Actions” (June 2022). Specifically, the report is co-authored by Thomas A. Kochan, Janice R. Fine, Kate Bronfenbrenner, Suresh Naidu, Jacob Barnes, Yaminette Diaz-Linhart, Johnnie Kallas, Jeonghun Kim, Arrow Minster, Di Tong, Phela Townsend, and Danielle Twiss.

Here’s Gallup survey data on the share of Americans who say that they approve of unions. It has often hovered near 60%; the most recent survey (just available a week ago, and not shown in this graph) puts support at 71%.

In addition, a substantial share of US workers report that they would like to have more say in their workplace.

However, the share of US workers belonging to a union peaked back in the late 1940s and early 1950s at about one-third of the labor force. Here’s the share of private-sector US workers belonging to a union since the early 1970s:

There is an array of possible explanations for this gap between apparent support for what unions can provide and actual membership in unions. Those who support unions often emphasize that US labor law can make it hard to set up and win a union election in a given workplace. By international standards, the legal rules for setting up a union are objectively harder in the US than in most other high-income countries. However, it also seems true that many American workers who support unions in the abstract are much less supportive of unions in their own workplace, and often seem to mistrust whether an official union will focus and make progress on the day-to-day issues that matter to them as workers.

Given the steady downward trend of unionization rates over the decades, it seems unlikely that the current US legal framework for unionization will lead to a renaissance of American unions any time soon. Indeed, the deeper issue here may be that the US legal framework for unionization is establishment-based–that is, it is primarily focused on workers who share a common location voting for a union. For a company with many different locations–a current example is Starbucks–each location needs to vote for a union. Moreover, existing labor law about unionization is not focused on groups of workers: say, gig workers, part-time workers, workers who hold domestic jobs, adjunct professors and graduate students, farmworkers, mid-level managers, and others. The report notes:

A number of unions and other worker advocates argue that attempting to organize large, multilocation employers one location at a time is not a viable way to engage them in dialogue and/or negotiations on workers’ issues of concern. This has led to a range of different protests, mobilizing efforts, and political campaigns for new regulations aimed at
gaining a voice in decision-making and governance processes at the corporate level, where key employment and labor strategies and decisions are made. To date, very few of these efforts have been successful in bringing worker and corporate management representatives to a table for dialogue. What steps, via public policy, private actions, and/or dialogue across business and worker representatives at the firm, sector, or
national levels might explore ways to foster some form(s) of engagement?

When the report refers to a range of efforts, what sorts of things do they have in mind? Here are a few examples of institutions that seek to express concerns of labor, but do not involve a standard US-style dues-paying union based at a certain location:

Worker centers are “community-based mediating institutions that provide support to and organize among communities of low-wage workers. … Although worker centers were being founded throughout the 1980s and 1990s, their numbers began to increase substantially in the late 1990s. By 2005, there were at least 135 active worker centers in the U.S., up from roughly 30 in 1992. As of late 2018, there were at least 234 active worker centers in the U.S., and we have identified 12 new centers that have emerged since then.”

A prominent example of a focused campaign that relies on political mobilization is Fight
for $15. Started by a group of fast-food workers in New York City in 2012 with extensive financial and organizational support from SEIU and Change to Win, Fight for $15 now operates in over 300 cities and six continents. The campaign has spread beyond fast food
to include other low-wage workers such as home care workers, airport workers, and adjunct professors. It relies on citywide and regional organizing committees that mobilize brief strikes in order to create political leverage and change the narrative on low-wage work.

Founded by Sara Horowitz in 1995, Freelancers Union is one of the longest-standing worker advocacy organizations that does not seek to achieve formal collective bargaining rights. A multi-occupational professional association promoting the interests of
independent workers through policy advocacy, benefits provision, resources, and community building, Freelancers Union has more than 500,000 members
nationwide. In recent years, Freelancers Union has pursued a variety of policy advocacy campaigns for independent contractors, including the 2017 enactment of the
Freelance Isn’t Free Act in New York City protecting independent contractors from nonpayment, and the inclusion of self-employed people in pandemic unemployment assistance benefits authorized in the CARES Act of 2020.

Coworker.org, founded in 2013, is a peer-based digital platform that provides online resources to workers engaging in workplace petition campaigns and other power-building strategies. Coworker.org’s petition site empowers workers to exercise their voice and push for better working conditions, as well as to bring greater public awareness to issues and challenges within specific worker communities. Coworker.org supports the collection of signatures among employees in organizations and also provides resources such as training, funds, and communication spaces that aim to help workers maintain large decentralized networks in workplaces. According to Coworker.org co-executive director Michelle Miller, the first two months of the pandemic saw a large increase in worker activities on the site. While dedicated to serving all types of workers, the past and current organizing activities on Coworker.org have mostly taken place in the low-wage service and retail sector and in the tech sector. Over 700 campaigns were listed on its site as this report was being prepared. Petitions target a range of issues, including wages and benefits, health and safety, the coronavirus, hiring and firing, paid sick leave, scheduling, dress code, staffing levels, discrimination and workplace harassment, training and development, and parental leave.

Accordingly, organizing among gig workers has risen, as these workers push for better
working conditions. Rideshare Drivers United, Gig Workers Rising, Gig Workers Collective, New York Taxi Workers Alliance, We Drive Progress, and Mobile Workers Alliance are examples of the worker advocacy organizations that are forming among workers who provide services for app-based platforms such as food delivery and ride-hailing.

The report offers a number of other examples of these kinds of organizations, as well as examples of situations in which substantial groups of workers carried out a “sickout” or a protest to make their point, outside of the standard union framework. As mentioned already, “very few of these efforts have been successful in bringing worker and corporate management representatives to a table for dialogue,” at least so far.

“A Shy Guest at the Feast of the World’s Culture”

As the managing editor of an academic journal, inhabiting the broader universe of academia, I am continually stunned by how much people know about their given subject of choice. Robert E. Lucas captured some of that feeling in an interview back in 1998, when he was asked about whether it was important for economists to also be competent historians. Lucas replied:

No. It is important that some economists be competent historians, just as it is important that some economists be competent mathematicians, competent sociologists, and so on. But there is neither a need nor a possibility for everyone to be good at everything. Like Stephen Dedalus, none of us will ever be more than a shy guest at the feast of the world’s culture.

(The quotation is from Brian Snowdon and Howard R. Vane, “Transforming macroeconomics:
an interview with Robert E. Lucas Jr.,” Journal of Economic Methodology, 1998, 5:1, 115-146, with the comment on p. 121.)

Of course, Stephen Dedalus is the protagonist of the 1916 James Joyce novel, A Portrait of the Artist as a Young Man. Here’s the original “shy guest” comment from Joyce:

The pages of his timeworn Horace never felt cold to the touch even when his own fingers were cold; they were human pages and fifty years before they had been turned by the human fingers of John Duncan Inverarity and by his brother, William Malcolm Inverarity. Yes, those were noble names on the dusky flyleaf and, even for so poor a Latinist as he, the dusky verses were as fragrant as though they had lain all those years in myrtle and lavender and vervain; but yet it wounded him to think that he would never be but a shy guest at the feast of the world’s culture and that the monkish learning, in terms of which he was striving to forge out an esthetic philosophy, was held no higher by the age he lived in than the subtle and curious jargons of heraldry and falconry.

I’ve been Managing Editor here at the Journal of Economic Perspectives for 36 years now. I think I’m good at the job. But I’m still a shy guest at the feast.

Robert Nozick: Entitled by Education?

I ran across a headline a few weeks ago in a US Census Bureau press release: “Teachers Are Among Most Educated, Yet Their Pay Lags.” The article itself is perfectly sensible, just offering statistics to point out that teachers often have lower pay than others who have BA degrees. But in seeing the headline, I was reminded of a feeling, sometimes expressed explicitly, that seems to me fairly common among those who work in the world of education: Many of us were above-average in the classroom, but our pay often does not reflect that. I’ve heard PhD economists point out, for example, that they had higher grades than many of those who went on to become lawyers and doctors, but the lawyers and doctors often make higher salaries.

The philosopher Robert Nozick once described this frame of mind in a 1998 essay called “Why Do Intellectuals Oppose Capitalism?” He wrote:

Intellectuals now expect to be the most highly valued people in a society, those with the most prestige and power, those with the greatest rewards. Intellectuals feel entitled to this. … Intellectuals feel they are the most valuable people, the ones with the highest merit, and that society should reward people in accordance with their value and merit. But a capitalist society does not satisfy the principle of distribution `to each according to his merit or value.’ Apart from the gifts, inheritances, and gambling winnings that occur in a free society, the market distributes to those who satisfy the perceived market-expressed demands of others, and how much it so distributes depends on how much is demanded and how great the alternative supply is. Unsuccessful businessmen and workers do not have the same animus against the capitalist system as do the wordsmith intellectuals. Only the sense of unrecognized superiority, of entitlement betrayed, produces that animus.

In fairness, most academics accepted the economic limits of their profession with their eyes open. Personally, I have known for a long time that I won’t ever get a payout from stock options or an annual bonus for an especially good year. On the other side, my economics journal isn’t likely to go out of business in a down year, either. I have steady and secure work that uses my skills and interests. I have perhaps one meeting every two or three weeks, and the rest of the time, I spend editing, thinking, reading, and writing. I’m responsible for schedules being met over time, but my organization is a small one, and my day-to-day work is self-directed. I don’t have to monitor people working for me, and I don’t have a boss monitoring me each day, or each hour. I’m generally happy with my work/life tradeoffs.

In addition, I know perfectly well that one’s wages are heavily determined by supply and demand for one’s skills, not by test scores (and a good thing, too). But I admit that sometimes, when I see an old friend, or meet a new acquaintance who has made more lucrative choices, and I hear about their patterns of consumption, I wonder just a little about the paths I have not followed. One of the great benefits of a free market society is the freedom to make choices about one’s work. But such choices also bring ineluctable tradeoffs.