Some Economics of Diapers

I remember family holidays when our children were little, where the first step was to grab a large suitcase and start packing in the diapers. But for a number of low-income US families, affording the diapers is a challenge to their limited resources. Jennifer Randles discusses “Fixing a Leaky U.S. Social Safety Net: Diapers, Policy, and Low-Income Families” (RSF: The Russell Sage Foundation Journal of the Social Sciences, August 2022, 8:5, 166-183). She writes (citations omitted):

Diaper need—lacking enough diapers to keep an infant dry, comfortable, and healthy—affects one in three mothers in the United States, where almost half of infants and toddlers live in low-income families. Diaper need … exacerbates food insecurity, can cause parents to miss work or school, and is predictive of maternal depression and anxiety. When associated with infrequent diaper changes, it can lead to diaper dermatitis (rash) and urinary tract and skin infections.

Infants in the United States will typically use more than six thousand diapers, costing at least $1,500, before they are toilet trained. Cloth diapers are not a viable alternative for most low-income parents given high start-up and cleaning costs and childcare requirements for disposables. Many low-income parents must therefore devise coping strategies, such as asking family or friends for diapers or diaper money; leaving children in used diapers for longer; and diapering children in clothes and towels.

Low-income parents also turn to diaper banks, which collect donations and purchase bulk inventory for distribution to those in need and usually provide a supplemental supply of twenty to fifty diapers per child per month. In 2016, the nation’s more than three hundred diaper banks distributed fifty-two million diapers to more than 277,000 children, meeting only 4 percent of the estimated need. Many of those who seek diaper assistance live in households with employed adults who have missed work because of diaper need. …

As a highly visible and costly item that must be procured frequently according to norms of proper parenting, diapers are part of negotiations about paternal responsibility and access to children. Unemployed nonresidential fathers give more in-kind support, such as diapers, than formal child support; the provision of diapers can be a form of or precursor to greater father involvement, especially among fathers who are disconnected from the labor market and adopt nonfinancial ideas of provisioning.

As Randles points out, some existing aid programs for poor families, like the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) or food stamps, explicitly don’t cover diapers. She writes: “The $75 average monthly diaper bill for one infant would alone account for 8 to 40 percent of the average state TANF [Temporary Assistance to Needy Families] benefit.” In addition, “[s]ince the onset of the COVID-19 pandemic, disposable diaper costs have increased 10 percent due to higher demand and input material costs, supply-chain disruptions, and shipping cost surges.”
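Randles’s percentages can be run backwards to see the range of state benefit levels they imply. Here is a quick sketch; the 8 and 40 percent endpoints and the $75 bill are the only inputs from the article, and the implied dollar figures are derived, not quoted:

```python
# Back-of-the-envelope check of Randles's figure: a $75 monthly diaper bill
# equal to 8-40 percent of a state's average TANF benefit implies average
# monthly benefits ranging from roughly $188 to $938 across states.
DIAPER_BILL = 75.0  # average monthly diaper cost for one infant, per Randles

# benefit = bill / share, evaluated at both ends of the quoted range
implied_benefits = {share: DIAPER_BILL / share for share in (0.08, 0.40)}

for share, benefit in sorted(implied_benefits.items()):
    print(f"{share:.0%} of benefit -> implied monthly TANF benefit ${benefit:,.0f}")
```

The spread itself is telling: the same fixed expense is a minor line item in the most generous states and a large fraction of the entire cash benefit in the least generous ones.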

Diapers are taken for granted as parents’ responsibility, but politically deemed a discretionary expense. This is misaligned with how mothers understood their infants’ specific basic needs, which for most came down to milk and diapers. Food stamps and WIC offered support for one. No comparable acknowledgment or assistance for the other meant that mothers struggled even more to access a necessity policy did not officially recognize their children have.

Several bills have been introduced in Congress in the last few years to provide diaper-focused assistance, but apparently none has made it out of committee. There are a number of state-level programs as well. Some are linked to the level of sales tax imposed on diapers. Some seek to build up the supply at diaper banks. As another example, California gives vouchers for diapers to TANF recipients who have children under the age of three.

US Household Wealth: 1989-2019

Total household wealth is equal to the value of assets, including both financial assets and housing, minus the value of debts. The Congressional Budget Office has just published “Trends in the Distribution of Family Wealth, 1989 to 2019” (September 2022). Here are a few of the themes that caught my eye.

In 2019, total family wealth in the United States—that is, the sum of all families’ assets minus their total debt—was $115 trillion. That amount is three times total real family wealth in 1989. Measured as a percentage of the nation’s gross domestic product, total family wealth increased from about 380 percent to about 540 percent over the 30-year period from 1989 to 2019, CBO estimates. … From 1989 to 2019, the total wealth held by families in the top 10 percent of the wealth distribution increased from $24.3 trillion to $82.4 trillion (or by 240 percent), the wealth held by families in the 51st to 90th percentiles increased from $12.7 trillion to $30.2 trillion (or by 137 percent), and the wealth held by families in the bottom half of the distribution increased from $1.4 trillion to $2.3 trillion (or by 65 percent).
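The growth rates and shares in the CBO passage can be checked directly from the quoted dollar totals. A quick sketch (small discrepancies with the quoted 240/137/65 percent figures reflect rounding in the published totals):

```python
# Recomputing the CBO growth figures from the quoted totals
# (wealth in trillions of dollars, 1989 and 2019).
TOTAL_2019 = 115.0
wealth = {
    "top 10 percent":        (24.3, 82.4),
    "51st-90th percentiles": (12.7, 30.2),
    "bottom 50 percent":     (1.4, 2.3),
}

# percent growth 1989-2019, and each group's share of the 2019 total
growth = {g: (w2019 / w1989 - 1) * 100 for g, (w1989, w2019) in wealth.items()}
share_2019 = {g: w2019 / TOTAL_2019 * 100 for g, (_, w2019) in wealth.items()}

for group in wealth:
    print(f"{group}: +{growth[group]:.0f}%, {share_2019[group]:.0f}% of 2019 total")
```

The same figures also show the concentration directly: the top 10 percent held roughly 72 percent of the $115 trillion total in 2019, while the bottom half held about 2 percent.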

There are several points worth pausing over here. First, the ratio of wealth/GDP fluctuated but over the long term stayed around 360% of GDP from the 1950s up to the early 1990s. Indeed, I remember being taught in the 1980s that, for quick-and-dirty calculations, wealth/GDP could be considered a constant. But since then the wealth/GDP ratio has taken off, not just in the US but worldwide. Part of the reason is the run-up in stock market prices; part is the run-up in housing prices. One of the major questions for financial markets is whether this higher wealth/GDP ratio will persist: in particular, to what extent was it the result of the gradually lower interest rates since the 1990s that have helped drive up asset prices, and will a reversion to interest rates more in line with historical levels lead asset prices to slump in a lasting way?

Second, the growth in wealth has not been equal: households in the upper part of the wealth distribution now hold a greater share of wealth than in the past. The CBO points out that differences in wealth are correlated with many factors, like age, marriage, and education. But while these factors can help to explain differences in wealth at a point in time, it’s not clear to me that changes in these factors can explain the growing inequality of wealth. Instead, my own sense is that the growing inequality of wealth is a version of a “Matthew effect,” as economists sometimes say. In the New Testament, Matthew 13:12 reads (in the New King James version): “For whoever has, to him more will be given, and he will have abundance; but whoever does not have, even what he has will be taken away from him.” In the context of wealth, those who were already somewhat invested in the stock market and in housing by, say, the mid-1990s have benefited from the asset boom in those areas; those who were not already invested in those areas had less chance for pre-existing wealth to grow.

Third, it’s worth remembering that for many people, especially young and middle-aged adults, their major wealth is in their own skills and training–their “human capital”–that allows them to earn higher wages. As an example, imagine a newly minted lawyer or doctor, who may have large student debts and not yet have had a chance to accumulate much financial wealth, but whose skills and credentials mean that personal wealth, broadly understood to include the human capital that will generate decades of future income, is already quite high.
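The lawyer-or-doctor example can be made concrete with a standard present-value calculation. The numbers below (a $100,000 annual earnings premium, 40 working years, a 3 percent discount rate, $200,000 of student debt) are purely illustrative assumptions, not figures from the CBO report:

```python
# Illustrative only: the present value of a new professional's future
# earnings premium can dwarf both student debt and current financial assets.
def present_value(annual_income: float, years: int, rate: float) -> float:
    """Discounted value today of a constant annual income stream."""
    return sum(annual_income / (1 + rate) ** t for t in range(1, years + 1))

human_capital = present_value(100_000, years=40, rate=0.03)
student_debt = 200_000  # hypothetical

print(f"human capital ~ ${human_capital:,.0f}")
print(f"net wealth incl. human capital ~ ${human_capital - student_debt:,.0f}")
```

Under these assumptions the human capital comes to roughly $2.3 million, which is why a heavily indebted new doctor can be wealthy in the broad sense while having negative financial net worth.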

Finally, the pattern of wealth accumulation over the life cycle appears to be shifting. In this graph, notice that those born in the 1940s have substantially more wealth when they reach their 60s than did the previous generation, born in the 1930s. However, the generation born in the 1950s is on a lower trajectory: that is, their median wealth in their late 50s is less than what had been accumulated by the generation born in the 1940s at the same age. As you work down to more recent generations, each line is below that for the previous generation: that is, each generation is accumulating less wealth than the previous generation did at the same age.

The CBO writes: “However, for cohorts born since the 1950s, median wealth as a percentage of median income was lower than that measure was for the preceding cohort at the same age, and median debt as a percentage of median assets was higher.”

The CBO report also offers some updates through the first quarter of 2022, at which time total wealth and the stock market were holding up pretty well through the pandemic recession. But since April, US stock markets are down about 20%, and the totals and distributions above would need to be adjusted accordingly.

Telemedicine Arrives

Back in 2015, the American College of Physicians officially endorsed telemedicine — that is, providing health care services to a patient who is not in the same location as the provider. Nonetheless, many health care providers continued to rely on in-person visits for nearly all of their patients, with occasional telephone follow-ups. When the pandemic hit, it induced the birth of telemedicine as a widespread practice.

When I’ve chatted with doctors and other health care providers about the change, I’ve heard mainly two reactions: 1) a degree of surprise that telemedicine was working well for patients; and 2) a comment along the lines that “we were always willing to do it, but what changed is that insurance was willing to reimburse for it.” It’s of course not surprising that insurance reimbursement rules will drive the manner and type of health care that’s provided, but it’s always worth remembering.

Evidence on telemedicine is now becoming available. Kathleen Fear, Carly Hochreiter, and Michael J. Hasselberg describe some results from the University of Rochester Medical Center in “Busting Three Myths About the Impact of Telemedicine Parity” (NEJM Catalyst, October 2022, vol. 3, #10; a subscription or library access is needed). The U-Rochester med center is good-sized: six full-service hospitals and nine urgent care centers, along with various specialty care hospitals and a network of primary care providers. Before the pandemic, it was handling about two million outpatient visits per year.

During the pandemic, telemedicine at U-Rochester spiked from essentially nothing to 80% of patient contacts, and now seems to have settled back to about 20%.

Here is how the authors summarize the experience:

Three beliefs — that telemedicine will reduce access for the most vulnerable patients; that reimbursement parity will encourage overuse of telemedicine; and that telemedicine is an ineffective way to care for patients — have for years formed the backbone of opposition to the widespread adoption of telemedicine. However, during the Covid-19 pandemic, institutions quickly pivoted to telemedicine at scale. Given this rapid move, the University of Rochester Medical Center (URMC) had a natural opportunity to test the assumptions that have shaped prior discussions. Using data collected from this large academic medical center, UR Health Lab explored whether vulnerable patients were less likely to access care via telemedicine than other patients; whether providers increased virtual visit volumes at the expense of in-person visits; and whether the care provided via telemedicine was lower quality or had unintended negative costs or consequences for patients. The analysis showed that there is no support for these three common notions about telemedicine.

At URMC, the most vulnerable patients had the highest uptake of telemedicine; not only did they complete a disproportionate share of telemedicine visits, but they also did so with lower no-show and cancellation rates. It is clear that at URMC, telemedicine makes medical care more accessible to patients who previously have experienced substantial barriers to care. Importantly, this access does not come at the expense of effectiveness. Providers do not order excessive amounts of additional testing to make up for the limitations of virtual visits. Patients do not end up in the ER or the hospital because their needs are not met during a telemedicine visit, and they also do not end up requiring additional in-person follow-up visits to supplement their telemedicine visit. As the pandemic continues to slow down, payers may start to resist long-term telemedicine coverage based on previous assumptions. However, the experience at URMC shows that telemedicine is a critical tool for closing care gaps for the most vulnerable patient populations without lowering the quality of care delivered or increasing short-term or long-term costs.

The authors are careful to point out that a substantial part of health care does need to be delivered in person–a point with which it would be hard to disagree. But this evidence also strongly suggests that telemedicine was dramatically underused before the pandemic. It raises broader questions as to whether there are other ways that the provision of health care is stuck in its ways, unwilling or unable to adopt promising innovations in a timely manner.

Some Economics of Tobacco Regulation

Cigarette smoking in the United States is implicated in about 480,000 deaths each year–about one in five deaths. Cigarette smokers on average lose about 10 years of life expectancy. According to a US Surgeon General report in 2020:

Tobacco use remains the number one cause of preventable disease, disability, and death in the United States. Approximately 34 million American adults currently smoke cigarettes, with most of them smoking daily. Nearly all adult smokers have been smoking since adolescence. More than two-thirds of smokers say they want to quit, and every day thousands try to quit. But because the nicotine in cigarettes is highly addictive, it takes most smokers multiple attempts to quit for good.

Philip DeCicca, Donald Kenkel, and Michael F. Lovenheim summarize the evidence on “The Economics of Tobacco Regulation: A Comprehensive Review” (Journal of Economic Literature, September 2022, 883-970). Of course, I can’t hope to do justice to their work in a blog post, but here are some of the points that caught my eye.

1. US smoking regulation changed dramatically starting in the late 1990s, with an enormous jump in cigarette taxes and smoking restrictions.

For example, here’s a figure showing the combined federal and state tax rate on cigarettes as a percent of the price (solid line) and as price-per-pack (dashed line). In both cases, a sharp rise is apparent from roughly 1996 up through 2008.

In addition, smoking bans have risen substantially.

Governments around the world have implemented smoking bans sporadically over the past five decades, but they have become much more prevalent over the past two decades. … [W]orkplace, bar, and restaurant smoke-free indoor air laws became increasingly common. As of 2000, no state had yet passed a comprehensive ban on smoking in these areas, although some states had more targeted bans. From 2000–2009, the fraction of the US population covered by smoke-free worksite laws increased from 3 percent to 54 percent, and the fraction covered by smoke-free restaurant laws increased from 13 percent to 63 percent … Since the turn of the century, the increased taxation and regulation of cigarettes and tobacco is unprecedented and dramatic.

2. Given that tobacco usage is being discouraged in a number of different ways, all at the same time, it’s difficult for researchers to sort out the individual effects of, say, cigarette taxes vs. workplace smoking bans vs. government-mandated health warnings vs. changing levels of social approval.

3. My previous understanding of the conventional wisdom was that demand for cigarettes from adult smokers was relatively inelastic, while demand from younger smokers was relatively elastic. The underlying belief was that (as a group) adult smokers have had a more long-lasting tobacco habit and have more income, so it was harder for them to shake their tobacco habit, while the tobacco usage of younger smokers is more malleable. This conventional wisdom may need some adjustments.

The consensus from the last comprehensive review of the research that was conducted 20 years ago (Chaloupka and Warner 2000) indicates that adult cigarette demand is inelastic. More recent research from a time period of much higher cigarette taxes and lower smoking rates supports this consensus; however, there is also evidence that traditional methods of estimating cigarette price responsiveness overstate price elasticities of demand. As well, more recent research casts doubt on the prior consensus that youth smoking demand is more price-elastic than adult demand; the most credible studies on youth smoking indicate little relationship between smoking initiation and cigarette taxes. The inelastic nature of cigarette demand suggests cigarette excise taxes are an efficient revenue-generating instrument.

To put it another way, higher cigarette taxes do a decent job of collecting revenue, but they don’t do much to discourage smoking.
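For readers who want the mechanics of that claim, here is a minimal sketch. The elasticity of -0.4 is an illustrative inelastic value, not an estimate from the survey article:

```python
# Why inelastic demand makes cigarette taxes good revenue raisers:
# with a price elasticity of -0.4, a tax-driven 10 percent price increase
# cuts quantity only 4 percent, so total spending (the tax base) rises
# while smoking falls relatively little.
elasticity = -0.4          # assumed, for illustration
price_increase = 0.10      # 10 percent price rise from a higher tax

quantity_change = elasticity * price_increase                 # -4 percent
revenue_change = (1 + price_increase) * (1 + quantity_change) - 1

print(f"quantity falls {abs(quantity_change):.0%}")
print(f"spending (tax base) rises {revenue_change:.1%}")
```

If demand were elastic instead (say an elasticity of -1.5), the same calculation would show spending falling when the tax rose, and the tax would be a poor revenue instrument but a better deterrent.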

4. If cigarette taxes are really about revenue collection, because they don’t do much to discourage smoking, then it becomes especially relevant that low-income people tend to smoke more, and thus end up paying more in cigarette taxes. This figure shows cigarette use by income group; the next figure shows cigarette taxes paid by income group.

5. Broadly speaking, there are two economic justifications for cigarette taxes. One is what economists call “externalities,” which are the costs that cigarette smokers impose on others, in ways including secondhand smoke and higher health care costs that are shared with non-smokers across public and private health insurance plans. The other is “internalities,” which are the costs that smokers who would like to quit, but find themselves trapped by nicotine addiction, impose on themselves. The authors write:

However, evidence on the magnitude of the externalities created by smoking does not necessarily support current tax levels. Behavioral welfare economics research suggests that the internalities of smoking provide a potentially stronger rationale for higher taxes and stronger regulations. But the empirical evidence on the magnitudes of the internalities from smoking is surprisingly thin.

6. Finally, in reading the article, I find myself wondering if the US is, to some extent, substituting marijuana for tobacco cigarettes. As the authors point out, the few detailed research studies on this subject have not found such a link. However, in a big-picture sense, the trendline for cigarette use is decidedly down over time, while the trendline for marijuana use is up. A recent Gallup poll reports that “[m]ore people in the U.S. are now smoking marijuana than cigarettes.” Evidence from the National Survey on Drug Use and Health more-or-less backs up that claim:

Among people aged 12 or older in 2020, 20.7 percent (or 57.3 million people) used tobacco products or used an e-cigarette or other vaping device to vape nicotine in the past month. … In 2020, marijuana was the most commonly used illicit drug, with 17.9 percent of people aged 12 or older (or 49.6 million people) using it in the past year. The percentage was highest among young adults aged 18 to 25 (34.5 percent or 11.6 million people), followed by adults aged 26 or older (16.3 percent or 35.5 million people), then by adolescents aged 12 to 17 (10.1 percent or 2.5 million people).

However, about one-fifth of the tobacco product users didn’t smoke cigarettes, and with that adjustment, cigarette smoking would be a little below total marijuana use.
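That adjustment is simple arithmetic on the NSDUH figures quoted above, though note the comparison is imperfect: the tobacco figure is past-month use while the marijuana figure is past-year use:

```python
# Adjusting the NSDUH 2020 figures: of the 57.3 million past-month tobacco
# or nicotine-vaping users, about one-fifth did not smoke cigarettes,
# which puts cigarette smokers slightly below the 49.6 million past-year
# marijuana users.
tobacco_or_vaping = 57.3   # millions, past-month use
marijuana = 49.6           # millions, past-year use

cigarette_smokers = tobacco_or_vaping * (1 - 1 / 5)

print(f"estimated cigarette smokers: {cigarette_smokers:.1f} million")
print(f"marijuana users: {marijuana:.1f} million")
```

The estimate lands around 46 million cigarette smokers, a few million below the marijuana total, which is consistent with the Gallup claim.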


Some Human Capital Controversies and Meditations

The idea that job skills and experience can be viewed as an investment in future production–with the costs occurring in the present and a stream of payoffs going into the future–goes back a long way. For example, when Adam Smith was writing in The Wealth of Nations back in 1776 about types of “fixed capital” in an economy (in Book II, Ch. 1), he offers four categories: “machines and instruments of trade,” “profitable buildings,” “improvements of land,” and a fourth category made up

… of the acquired and useful abilities of all the inhabitants or members of the society. The acquisition of such talents, by the maintenance of the acquirer during his education, study, or apprenticeship, always costs a real expence, which is a capital fixed and realized, as it were, in his person. Those talents, as they make a part of his fortune, so do they likewise of that of the society to which he belongs. The improved dexterity of a workman may be considered in the same light as a machine or instrument of trade which facilitates and abridges labour, and which, though it costs a certain expence, repays that expence with a profit.

The concept wasn’t new to Adam Smith, and it wasn’t neglected by other economists of that time, either. For example, B.F. Kiker wrote about “The Historical Roots of the Concept of Human Capital” back in the October 1966 issue of the Journal of Political Economy, offering numerous examples of discussions of what we now call “human capital” from the late 17th century through Adam Smith and on to modern times. As Kiker pointed out:

Several motives for treating human beings as capital and valuing them in money terms have been found: (1) to demonstrate the power of a nation; (2) to determine the economic effects of education, health investment, and migration; (3) to propose tax schemes believed to be more equitable than existing ones; (4) to determine the total cost of war; (5) to awaken the public to the need for life and health conservation and the significance of the economic life of an individual to his family and country; and (6) to aid courts and compensation boards in making fair decisions in cases dealing with compensation for personal injury and death.

However, the terminology of “human capital” has been the source of various overlapping controversies. Putting a monetary value on people is in some cases necessary for certain practical kinds of decision-making. For example, a court making a decision about damages in a wrongful death case cannot just throw up its hands and say that “putting a monetary value on a person is impossible,” nor can it reasonably say that a human life is so precious that it is worth, say, the entire GDP of a country. But in making these practical decisions, it’s important to remember that estimating the economic value produced by a person is not intended to swallow up and include all the ways in which people matter. When a more formal economic analysis of “human capital” was taking off in the early 1960s, Theodore W. Schultz delivered a Presidential Address to the American Economic Association on this subject (“Investment in Human Capital,” American Economic Review, March 1961, pp. 1-17). He sought to explain the concerns over the terminology of “human capital” in this way:

Economists have long known that people are an important part of the wealth of nations. Measured by what labor contributes to output, the productive capacity of human beings is now vastly larger than all other forms of wealth taken together. What economists have not stressed is the simple truth that people invest in themselves and that these investments are very large. Although economists are seldom timid in entering on abstract analysis and are often proud of being impractical, they have not been bold in coming to grips with this form of investment. Whenever they come even close, they proceed gingerly as if they were stepping into deep water. No doubt there are reasons for being wary. Deep-seated moral and philosophical issues are ever present. Free men are first and foremost the end to be served by economic endeavor; they are not property or marketable assets. And not least, it has been all too convenient in marginal productivity analysis to treat labor as if it were a unique bundle of innate abilities that are wholly free of capital.

The mere thought of investment in human beings is offensive to some among us. Our values and beliefs inhibit us from looking upon human beings as capital goods, except in slavery, and this we abhor. We are not unaffected by the long struggle to rid society of indentured service and to evolve political and legal institutions to keep men free from bondage. These are achievements that we prize highly. Hence, to treat human beings as wealth that can be augmented by investment runs counter to deeply held values. It seems to reduce man once again to a mere material component, to something akin to property. And for man to look upon himself as a capital good, even if it did not impair his freedom, may seem to debase him. No less a person than J. S. Mill at one time insisted that the people of a country should not be looked upon as wealth because wealth existed only for the sake of people [15]. But surely Mill was wrong; there is nothing in the concept of human wealth contrary to his idea that it exists only for the advantage of people. By investing in themselves, people can enlarge the range of choice available to them. It is one way free men can enhance their welfare. …

Yet the main stream of thought has held that it is neither appropriate nor practical to apply the concept of capital to human beings. Marshall [11], whose great prestige goes far to explain why this view was accepted, held that while human beings are incontestably capital from an abstract and mathematical point of view, it would be out of touch with the market place to treat them as capital in practical analyses. Investment in human beings has accordingly seldom been incorporated in the formal core of economics, even though many economists, including Marshall, have seen its relevance at one point or another in what they have written.

The failure to treat human resources explicitly as a form of capital, as a produced means of production, as the product of investment, has fostered the retention of the classical notion of labor as a capacity to do manual work requiring little knowledge and skill, a capacity with which, according to this notion, laborers are endowed about equally. This notion of labor was wrong in the classical period and it is patently wrong now. Counting individuals who can and want to work and treating such a count as a measure of the quantity of an economic factor is no more meaningful than it would be to count the number of all manner of machines to determine their economic importance either as a stock of capital or as a flow of productive services.

Laborers have become capitalists not from a diffusion of the ownership of corporation stocks, as folklore would have it, but from the acquisition of knowledge and skill that have economic value [9]. This knowledge and skill are in great part the product of investment and, combined with other human investment, predominantly account for the productive superiority of the technically advanced countries. To omit them in studying economic growth is like trying to explain Soviet ideology without Marx.

As Schultz mentions, it has been a convenient analytical device for a long time now to divide up the factors of production into “capital” and “labor,” with capital owned by investors. The idea of “human capital” scrambles these simple categories and leads to various logical fallacies: for example, if capitalists own capital, and “human capital” exists, then aren’t economists claiming that capitalists own humans, too? Such strained verbal connections miss the basic reality that what differentiates high-income and low-income countries is not primarily the quantity of physical capital investment, but rather the skills and capabilities of the workers. If you wish to explain growth of economic production or differences in production around the world, discussing how these skills and capabilities can be enhanced and how they affect economic production is not an avoidable question.

The Summer 2022 issue of the Journal of Economic Perspectives (where I work as Managing Editor) includes a short two-paper symposium on human capital. Katharine G. Abraham and Justine Mallatt discuss the methods of “Measuring Human Capital” (pp. 103-30). As Kiker noted back in his 1966 essay:

Basically, two methods have been used to estimate the value of human beings: the cost-of-production and the capitalized-earnings procedures. The former procedure consists of estimating the real costs (usually net of maintenance) incurred in “producing” a human being; the latter consists of estimating the present value of an individual’s future income stream (either net or gross of maintenance).

Abraham and Mallatt discuss the current efforts to carry out these kinds of calculations. Both approaches present considerable difficulties. For example, it’s possible to add up spending per student on education, adjust for inflation, make some assumptions about how the human capital created by education depreciates over time, and build up a cost-based estimate of human capital. But obvious questions arise about how to adjust for the quality of the education received, or whether to include other aspects of human capital, like physical health or on-the-job training.

Similarly, it’s possible to start with the idea that workers with more education, on average, get paid more, and then work backwards into an extended calculation of what their earlier education must have been worth, if it resulted in this higher pay level. Doing this calculation across generations with very different education and work outcomes places a lot of stress on the available data and requires copious assumptions on issues like how to treat those who may still be completing their education, along with estimates of future growth in wages and how best to discount these future values back to a single present value. It turns out that estimates of human capital based on the income received from education are often 10 times larger than estimates based on the costs of providing that education–which suggests that further research clarifying the parade of underlying assumptions is needed.
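The two valuation methods Kiker named can be contrasted with toy numbers. Everything below is hypothetical and was chosen to land near the roughly tenfold gap the text reports, not taken from Abraham and Mallatt:

```python
# A toy contrast of the two valuation methods for human capital.
# Cost-of-production: sum what was spent producing the education.
# Capitalized earnings: discount the earnings premium the education yields.
def capitalized_earnings(premium: float, years: int, rate: float) -> float:
    """Present value of a constant annual earnings premium."""
    return sum(premium / (1 + rate) ** t for t in range(1, years + 1))

cost_based = 15_000 * 13        # hypothetical: $15k per year of K-12 schooling
income_based = capitalized_earnings(premium=80_000, years=45, rate=0.03)

print(f"cost-based estimate:   ${cost_based:,.0f}")
print(f"income-based estimate: ${income_based:,.0f}")
print(f"ratio: {income_based / cost_based:.0f}x")
```

Even in this crude sketch, the gap between the two estimates is driven almost entirely by the assumed premium and discount rate, which is exactly why the underlying assumptions deserve scrutiny.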

In the other JEP paper, David J. Deming discusses the state of the evidence in “Four Facts about Human Capital” (pp. 75-102). He writes:

This paper synthesizes what we have learned about human capital … into four stylized facts. First, human capital explains a substantial share of the variation in labor earnings within and across countries. Second, human capital investments have high economic returns throughout childhood and young adulthood. Third, the technology for producing foundational skills such as numeracy and literacy is well understood, and resources are the main constraint. Fourth, higher-order skills such as problem-solving and teamwork are increasingly economically valuable, and the technology for producing them is not well understood. We have made substantial progress toward validating the empirical predictions of human capital theory. We know how to improve foundational skills like numeracy and literacy, and we know that investment in these skills pays off in adulthood. However, we have made much less progress on understanding the human capital production function itself. While we know that higher-order skills “matter” and are an important element of human capital, we do not know why.

In my own mind, the essays make a compelling case that understanding human capital is of central importance for understanding many aspects of the economy. Maybe a different label than “human capital” would be more rhetorically pleasing: I certainly would not claim that the economics profession has been especially graceful or sensitive in its framing and nomenclature. But it is a striking fact that no country with broadly high levels of educational human capital is also a low-income country. When I think about the importance of human capital for the entrepreneurialism and innovation that can address major problems, I sometimes find myself saying that “in the long run, a country’s economic future is all about its human capital.”

Financial Services: Share of Profits

Like most economists, I’m not allergic to the idea of profits. I view them as a signal that, in a certain area of an economy, the willingness of people to buy a certain product exceeds the costs of producing that product–thus, profits are a way to encourage additional production in that area. Still, it’s provocative even to me to observe that about one-quarter of all corporate profits go to the financial services sector. Here’s a figure from Paul W. Wilson in his article, “Turbulent Years for U.S. Banks: 2000-20” (Review: Federal Reserve Bank of St. Louis, Third Quarter 2022, pp. 189-209).

If you want to see the underlying data for this calculation, it’s available from the US Bureau of Economic Analysis at “Table 6.16D. Corporate Profits by Industry.” Just to be clear, financial services includes a lot more than just banks: it’s “credit intermediation and related activities; securities, commodity contracts, and other financial investments and related activities; insurance carriers and related activities; funds, trusts, and other financial vehicles; and bank and other holding companies.”

What makes this interesting is that financial services, as a share of the value-added in GDP, have been about 7-8% of GDP in recent decades, according to the US Bureau of Economic Analysis (see the “Value added by Industry as a Percentage of Gross Domestic Product” table, line 55). Thus, the question is why financial services account for 7-8% of GDP, but about 25-30% of all corporate profits.
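The back-of-the-envelope arithmetic here is simple enough to spell out; the shares below are rough midpoints of the ranges just cited, used only for illustration:

```python
# Rough comparison of the financial sector's share of corporate profits
# with its share of GDP value added, using midpoints of the cited ranges.
value_added_share = 0.075  # roughly 7-8% of GDP (BEA value-added table)
profit_share = 0.275       # roughly 25-30% of corporate profits

disproportion = profit_share / value_added_share
print(f"Profit share is about {disproportion:.1f}x the value-added share")
```

That is, financial services capture a share of profits several times larger than their share of value added, which is the puzzle the rest of this section tries to explain.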

One possibility, as noted at the start, is that the profits are sending a signal that the US economy would benefit from a dramatic expansion of financial services. While I’m sure that financial services could benefit from innovation and entry, like other sectors, it’s not at all clear to me that, as a society, we are crying out for a much greater quantity of financial services.

Another possibility is that there is limited competition in financial services, perhaps due to difficulties of entry and costs of regulation, which leads to higher profits for incumbents. In the specific area of banking, Wilson notes:

The number of Federal Deposit Insurance Corporation (FDIC) insured commercial banks and savings institutions fell from 10,222 at the end of the fourth quarter of 1999 to 5,002 at the end of the fourth quarter of 2020. During the same period, among the 5,220 banks that disappeared, 571 exited the industry because of failures or assisted mergers, while the creation of new banks slowed. From 2000 through 2007, 1,153 new bank charters were issued, and in 2008 and 2009, 90 and 24 new charters were issued, respectively. But from 2010 through 2020, only 48 new commercial bank charters were issued. The decline in the number of institutions since 2000 continues a long-term reduction in the number of banks operating in the United States since the mid-1980s.

A lack of competition in the financial sector essentially implies that when financial transactions are involved–whether corporate or household–this sector is able to carve itself a bigger slice of the pie than would be likely if there were more competitors. A more subtle possibility is that at least some of the profits being attributed to the financial sector were actually created in other sectors of the economy, but through various accounting transactions these profits are being reported instead in the financial sector.

A final possibility is that the way in which GDP measures the value-added of the financial services sector may tend to understate the size of the sector. After all, it’s hard to separate what the financial sector produces into changes in the quantity of services produced, changes in the quality of those services (as financial technology evolves), and the price of each service.

I don’t have evidence to back up my suspicions here. But when the share of profits for a sector is much larger than the value-added of that sector, on a sustained basis over several decades, something is out of whack.

Work From Home: Disrupting the Unwritten Labor Market

In simple descriptions of the labor market, people work for pay. This isn’t wrong, but it is incomplete. A job also involves a wide array of other costs and understandings. For example, commuting to work is a cost, and in many jobs so is dressing for work. Jobs can have greater or lesser flexibility about when they start in the day, whether you can extend a lunch break when needed, or when you are done. Some workplaces may have a subsidized lunchroom, or just some occasional free doughnuts. For a given employer, co-workers can be pleasant or alienating, and customers can be businesslike or obstreperous. Some jobs can expose workers to additional health risks. Managers can encourage input and offer what flexibility they can, or they can be mini-tyrants.

In short, a job is not just hours-for-pay, but is also a set of background characteristics and rules. In a happy workplace, both workers and employers will go beyond the minimum necessary courtesies and try to help each other out; conversely, an unhappy workplace is a sort of cold war of animosities and provocations.

In almost every job, the COVID pandemic changed a substantial number of non-pay characteristics. Working from home, for example, is a shorthand for changes in commuting costs, flexibility, level of managerial oversight, the co-workers, benefits of what used to be available near the on-site location, and so on. Those who couldn’t work from home found, as a result of the pandemic, that the health risks and day-to-day patterns of in-person work had changed. Unemployment rates have come back down in the last couple of years, but the wide array of disrupted non-pay job-related arrangements is still working its way to new agreements and arrangements.

Cevat Giray Aksoy, Jose Maria Barrero, Nicholas Bloom, Steven J. Davis, Mathias Dolls, and Pablo Zarate look at some aspects of this transition in “Working From Home Around the World,” written for the most recent Brookings Papers on Economic Activity (Fall 2022; paper, comments, and video available online). Much of their paper looks at surveys on the prevalence of work-from-home across a number of countries, and the hopes and expectations about what future work-from-home patterns will look like. From the abstract:

The pandemic triggered a large, lasting shift to work from home (WFH). To study this shift, we survey full-time workers who finished primary school in 27 countries as of mid 2021 and early 2022. Our cross-country comparisons control for age, gender, education, and industry and treat the U.S. mean as the baseline. We find, first, that WFH averages 1.5 days per week in our sample, ranging widely across countries. Second, employers plan an average of 0.7 WFH days per week after the pandemic, but workers want 1.7 days. Third, employees value the option to WFH 2-3 days per week at 5 percent of pay, on average, with higher valuations for women, people with children and those with longer commutes. Fourth, most employees were favorably surprised by their WFH productivity during the pandemic. Fifth, looking across individuals, employer plans for WFH levels after the pandemic rise strongly with WFH productivity surprises during the pandemic. Sixth, looking across countries, planned WFH levels rise with the cumulative stringency of government-mandated lockdowns during the pandemic.
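As a quick worked example of the abstract’s third finding, here is what valuing the WFH option at 5 percent of pay implies in dollar terms; the salary figure is a hypothetical assumption, not from the paper:

```python
# The paper reports employees value the option to WFH 2-3 days per week
# at about 5% of pay on average. For a hypothetical $60,000 salary:
salary = 60_000
wfh_valuation_share = 0.05  # average valuation reported in the abstract

implied_value = salary * wfh_valuation_share
print(f"Implied annual value of the WFH option: ${implied_value:,.0f}")
```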

These results suggest to me that a collision of expectations about work-from-home is on its way. Workers would like a little more work-from-home than they experienced in the pandemic; employers would like substantially less. One can imagine a future where some companies offer more work-from-home, and attract workers who place a high value on this aspect of the job, and other companies don’t. Workers would then need to reshuffle between these companies, and the market would then sort out how they compete against each other. Some employers had good experiences with unexpectedly high work-from-home productivity during the pandemic; some did not. Some organizations and workers have made substantial investments in facilitating work-from-home; others did not.

The authors also point out, without taking a strong position, some of the broader issues raised by a shift to work-from-home. For example, it’s one thing for an experienced worker with a well-defined job to work from home, but for young adults just entering the workforce and looking for on-the-job training and support, it may look quite different. There is a well-supported belief that physical interactions in the workplace are often important for innovation and the spread of knowledge. Interactions in the virtual workplace have become much easier, but in terms of supporting innovation, will they be able to substitute for a decline in live in-person interactions?

For cities, perhaps the biggest short-term effect of a rise in work-from-home is less urban activity and a lower tax base from commercial real estate and sales taxes.
As an earlier example, the waves of suburbanization after World War II imposed high costs on cities, and on those who were not able, for whatever reason, to relocate to the suburbs. But economic forces can also cut in unexpected directions. As the authors write: “If older and richer workers decamp for suburbs, exurbs and amenity-rich consumer cities, the resulting fall in urban land rents will make it easier for young workers to live in and benefit from the networking opportunities offered by major cities.” With this kind of change, cities and those who run them would need to rethink the sources of their economic vitality and attractiveness, along with the mix of taxes and services they currently provide.

Job Vacancies and Core Inflation

Perhaps the key question about the higher level of US inflation is whether it is likely to be transitory, and thus to fade away on its own, or whether it is likely to be permanent unless aggressive policy steps are taken–like additional increases in interest rates by the Federal Reserve.

The case that the current inflation is likely to fade away on its own often argues that inflation has been caused by a confluence of events that probably will not last: people spending money in certain limited sectors of the economy when pandemic restrictions shut down other parts of the economy, supply chain hiccups that occurred partly as a result, and the spike in energy prices. The hope or expectation is that as these factors fade, inflation will fade, too.

The argument that inflation is likely to be permanent is linked to the “Great Resignation” in US labor markets. Firms are trying to hire. Job vacancy rates are sky-high. The result is that upward pressures on wages are likely to keep inflation high, unless or until the Fed unleashes additional interest rate increases that are likely to bring on a recession in the next year or two. Let’s walk through some of the evidence on job vacancies and inflation, and then address the obvious concerns in “blaming” higher wages for causing inflation.

As a starter, here’s the basic numerical count of job vacancies since 2000. Notice that the number was rising even before the pandemic recession in 2020. Indeed, if you squint really hard, you can imagine a more-or-less straight line from the vacancy trends before the pandemic to vacancy numbers that are not too far from current levels.

But from the standpoint of inflation, what really matters is job vacancies relative to the number of unemployed people. The concerns over wage inflation arise when there are lots of vacancies and a relatively small number of unemployed people seeking those jobs. Here’s the ratio of unemployed people to job vacancies from the Job Openings and Labor Turnover (JOLTS) survey by the US Bureau of Labor Statistics. Again, notice that this ratio was already trending lower before the pandemic. Also, notice that when this ratio falls below 1, the number of unemployed people is lower than the number of job vacancies.
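The arithmetic behind this ratio is straightforward; the counts below are made-up round numbers for illustration, not actual JOLTS figures:

```python
# Illustration of the unemployed-to-vacancies ratio: a value below 1
# means there are more job openings than people looking for work.
# These counts are hypothetical, not actual JOLTS data.
unemployed = 6_000_000
vacancies = 10_000_000

ratio = unemployed / vacancies
print(f"Unemployed per vacancy: {ratio:.2f}")
print("More openings than job seekers" if ratio < 1
      else "More job seekers than openings")
```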

The “Beveridge curve” is another way to look at this data. This curve graphs the unemployment rate on the horizontal axis and the job openings rate on the vertical axis. The usual pattern here is a generally downward-sloping relationship, which is intuitively sensible: that is, times of higher unemployment also tend to be times with lower rates of job openings. Indeed, as you can see in the bottom set of colored lines, all jumbled together, the economy in the first decade of the 2000s was basically moving back and forth along a Beveridge curve.

But then the Beveridge curve shifts up during the period from July 2009 to February 2020: that is, for any given level of unemployment rate, the job vacancy rate was higher than before. And since the pandemic, the Beveridge curve has bumped up still higher to the black line: again, for any given level of unemployment rate, the rate of job vacancies is higher than it used to be. To put it another way, employers are seeking to fill more jobs now than they were at previous times when unemployment rates were roughly this low.

In short, no matter how you look at it, job vacancies are really high right now. I don’t think this phenomenon is yet well-understood. But workers are quitting jobs at high rates, often to take an alternative preferred job, and the employment/population ratio for US adults has not yet returned to pre-pandemic levels.

These job vacancy patterns set the stage for the research paper by Laurence Ball, Daniel Leigh, and Prachi Mishra, “Understanding U.S. Inflation During the COVID Era,” just published as part of the Brookings Papers on Economic Activity (Fall 2022; text as well as video of presentations and comments available). The research paper is full of models and calculations, and will not be especially accessible to the uninitiated. But the basic idea is straightforward.

The authors look at the recent rise in inflation. They find that when one looks at the overall inflation rate, the changes are indeed often due to factors like supply chain issues, energy price spikes, and the like. But if you focus only on “core” inflation, with changes in food and energy prices stripped out, then they find that the ratio of job vacancies to the unemployment rate is the major driving force. In other words, yes, some of the inflation of the last couple of years is temporary, but not all of it. In addition, they find that breaking the back of this underlying core inflation is likely to require additional interest rate increases by the Fed and a higher unemployment rate–which sounds to me like the kind of multidimensional economic downturn in product and labor markets that is called a “recession.”

One common response to discussions of how higher wages can drive up inflation is to accuse the teller of the tale of just being opposed to higher wages. This response has a nice “gotcha” zippiness. One can readily imagine a group of sympathetic listeners nodding vigorously and yelling “you tell ’em” and “preach it” and the like. But in an analytical sense, the response is confused. Over time, the only way to have sustainable broad-based higher wages is if they are built on a foundation of sustainable broad-based higher labor productivity. Perhaps the workplace-related innovations unleashed by the pandemic experience will lead to a surge of productivity, but at least so far, that isn’t apparent in the data. Otherwise, higher wages can unleash what used to be called the wage-price spiral: that is, higher wages driving prices up, and higher price levels pushing wages up in turn. The appropriate social goal is to have higher wages that also have greater buying power, not higher wages that are just perpetually chasing higher price levels.
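The wage-price spiral logic can be illustrated with a toy feedback loop; the pass-through coefficient and initial wage push below are arbitrary assumptions, not estimates from any of the papers discussed here:

```python
# Toy wage-price spiral: higher wages push up prices, and higher prices
# push up the next round of wage demands. With full pass-through and no
# productivity growth, nominal wages climb but buying power goes nowhere.
# All coefficients are arbitrary illustrative assumptions.

wage, price = 100.0, 100.0
wage_inflation = 0.05  # initial 5% wage push
passthrough = 1.0      # full pass-through in both directions

for year in range(10):
    price_inflation = passthrough * wage_inflation  # firms pass wage costs into prices
    wage *= 1 + wage_inflation
    price *= 1 + price_inflation
    wage_inflation = passthrough * price_inflation  # workers chase the new price level

real_wage = wage / price * 100
print(f"Nominal wage index after 10 rounds: {wage:.1f}")
print(f"Real wage index after 10 rounds: {real_wage:.1f}")
```

After ten rounds the nominal wage index is up by roughly 60 percent, while the real wage index is unchanged: the arithmetic version of wages perpetually chasing prices. Dialing `passthrough` below 1 makes the spiral damp out instead.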

The Ball, Leigh, and Mishra paper is of course just one set of estimates, albeit from a well-regarded set of economists and published in a high-profile publication outlet. But they are not alone in suggesting that the pattern of very high job vacancies given the level of unemployment may be the key factor driving core inflation higher.

Interview with Stephanie Schmitt-Grohé: Inflation and Other Topics

David A. Price interviews Stephanie Schmitt-Grohé in the most recent issue of Econ Focus (Federal Reserve Bank of Richmond, Third Quarter 2022, pp. 24-28).

On the problem of using surprise inflation to finance government debt:

When Martín [Uribe] and I got interested in the topic of price stability, there was an influential paper on optimal monetary and fiscal policy that concluded that when you have a change in the fiscal deficit or government spending, responding by adjusting distortionary taxes — say, labor income taxes — is not good from a welfare point of view. What you can do instead, the argument went, is to have surprise inflation. So if you get, say, an increase in government spending, and you need to finance that, then if nobody’s expecting inflation, you can just have a one-year surprise inflation. And that literature concluded it was, in fact, the best thing to do: Keep tax rates steady and finance surprises to the budget with surprise inflation.

Martín and I wondered what would happen to this result if one were to introduce sticky prices — the idea that prices are costly to change — into the situation. Our contribution was to show in a quantitative model that the tradeoff between surprise inflation and tax smoothing was largely resolved in favor of price stability. With price stickiness, volatile inflation is welfare-reducing. It sort of overturned the previous result. …

One issue that I think has been coming back a little bit is how is the United States going to finance a massive fiscal deficit that created the big stack of debt? Are we going to use surprise inflation? Here our research would say no, it’s not optimal to do that.

On what historical experience has to say about temporary and permanent inflation:

We find ourselves a little bit in an unprecedented situation. Inflation has gone up rapidly. And so we [Schmitt-Grohé and co-author Martín Uribe] were thinking about this pretty unusual development for the postwar period.

We wanted to answer the question that I think everybody is interested in: Is this inflation hike temporary or permanent? Our idea was that during the postwar period — since 1955, say — the only big inflation was the inflation of the 1970s. And that was an inflation that built up slowly and then was ended also relatively slowly — quicker than it built up, but relatively slowly — by Paul Volcker in the 1980s. So we said, since the current inflation is unprecedented in the postwar period, what will we see if we just go further back in history?

Because we wanted to go back in history, we used the database of Òscar Jordà, Moritz Schularick, and Alan Taylor, which goes back to 1870. We saw that the macroeconomic stability that we had in the postwar era was special, at least compared to what we see since 1870. There were many more episodes of high and variable inflation. So we just asked if we give the purely statistical model a longer memory by allowing it to go back in time, how would it interpret the current increase in inflation?

We found that if we estimate the model since 1955, which is what most people do when they talk about cyclical fluctuations — actually, many people only start in the 1990s or look at the last 30 or 40 years, the so-called Great Moderation period — the model is led to interpret the entire current increase in inflation as permanent. But if the model is given the chance to look back further in time, where we had more episodes of a short-lived and large inflation spike, the interpretation is that only 1 or 2 percent of the current increase in inflation is of a more permanent nature.

An example to look at is the Spanish Influenza of 1918 in the United States. That was also a period of an inflation spike, but inflation had started already a year or two before the influenza pandemic. There were similarities to now, namely a pandemic and high inflation. There was a small increase in the permanent component of inflation during the years around the influenza pandemic, but the majority of it was transitory.

 On the multiple benefits of starting your post-PhD economics career as a research economist at the Federal Reserve:

I would say four things were great about the job. At the beginning, you have almost all of your time for research. So you come out of graduate school, you have all the papers of your dissertation, and you’re trying to polish them to send to journals. The Fed gives you the time to do that. I would say you have more time to do that if you work in the research department at the Fed than if you start teaching at a university because you have to make one or two course preps, which takes time. So that was one great thing.

A second great thing is they used to hire — probably this is still true — something like 20 or 30 Ph.D.s a year out of top graduate schools. And they were more or less all in macroeconomics. If you go to a university, most likely you have, at most, two or three junior colleagues in your field. But at the Fed, you had a large cohort of them with whom you could interact and talk at lunch — there was a culture of going for lunch together in the Fed’s cafeteria — so it was stimulating in that way.

Another thing that was great was that you had to do a little bit of policy work. The Board of Governors wants to learn what the research staff thinks about the economic issues of the moment and what economic policy would be the correct one. Once or twice a year, you had to write a memo that you would read aloud in the FOMC briefing, so your audience was Alan Greenspan and the other governors. So you got to work on interesting issues and you got an understanding of what the relevant questions are. The process gave you a pipeline of research questions that you could work on later.

Lastly, because the Board is such a big institution, it runs a pretty large program of workshops with outside speakers. Almost too many speakers came through — more than one per week. You got exposed to all the major figures in your field because they came to give a workshop or they came to visit the Fed for one or two days.

Higher Education and Critical Thinking

What and how much do students actually learn in college? How much does what they learn help them later in life? One slice of evidence on these very large questions comes from “Does Higher Education Teach Students to Think Critically?” edited by Dirk Van Damme and Doris Zahner, and just published by the OECD.

The primary focus of the study is on a test called the CLA+, which stands for Collegiate Learning Assessment. The test “is an assessment of higher education students’ generic skills, specifically critical thinking, problem solving and written communication.” I’ve many times heard colleges and universities offer a comment to incoming students like: “Don’t worry too much about your specific major. What’s important is that you will learn how to think.” The CLA+ test can be viewed as an attempt to measure these “how to think” skills. The test is a proprietary instrument created by a US-based non-profit, the Council for Aid to Education. The test has been extensively used in US higher education for the last decade or so, although a main focus of this report is the effort to develop an international version of the test.

Here’s the broad pattern for US colleges and universities. It’s not an official random sample, but it does include hundreds of thousands of students at hundreds of institutions, so it’s likely to have some useful information content.

The chart gives me a decidedly half-full, half-empty feeling. On the positive side, the share of college students at the lowest levels of “emerging” and “developing” decline between the entering (blue bars) and exiting (gray bars) students. On the negative side, the average gains are not as large as one might prefer to see for four years of college education; for example, 58% of the entering students are in the lowest two categories, and 47% of the exiting students are still in those lowest two categories. The report describes the general pattern in this way:

Overall, it is encouraging to see that during their time in a higher education programme, students improved their critical thinking skills. However, given the importance that most higher education programmes attach to promoting critical thinking skills, the learning gain is smaller than could be expected. If universities really want to foster 21st-century skills such as critical thinking, they need to upscale their efforts. While universities produce graduates who can be considered, on average, as proficient in critical thinking, the distribution of achievement is quite wide, with one-fifth of students performing at the lowest level. With half of exiting students performing at the two lowest levels, it is difficult to claim that a university qualification reliably signals a level of critical thinking skills expected by the global market place.

In addition, comparing US students who enter college and university with those who graduate will not capture the experience of those who don’t complete a degree.

However, despite the upward trend in college enrolment over the last two decades, college graduation rates remain relatively low within the United States. According to the National Center of Education Statistics (Hussar et al., 2020[7]), as of spring 2020, nearly 40% of students who began seeking a bachelor’s degree at a four-year institution in 2012 have yet to receive their degree. Furthermore, year-to-year retention rates vary considerably across institutions. Between 2017 and 2018, although highly selective institutions had high student retention rates of 97%, less selective and open-admissions schools retained a substantially smaller percentage (62%) of their students during this same period (Hussar et al., 2020[7]). Contrary to an oft-perpetuated notion that student retention is a “first-year” problem, student attrition remains a risk for students at all class levels, with approximately one-third of college dropouts having obtained at least three-quarters of the credits required for graduation (Mabel and Britton, 2018[8]). Although many students cite non-academic reasons such as financial difficulties, health or family obligations as the primary causes for dropping out or deferring their college education (Astin and Oseguera, 2012[9]), academic failure is also a significant factor contributing to lack of persistence and decreased retention of students in higher education.

Thus, as the report notes:

The analysis cannot positively confirm that the learning gain is caused by the teaching and learning experience within university programmes. It is possible that, for example, selection effects (selective drop-out), general maturing of the student population or effects of learning outside university contribute to the average learning gain.

With higher education, as with many other areas of public policy, it feels to me as if we often have argument-by-anecdote. One side points out some individual success stories; the other points out contrasting examples of failures. The CLA+ is just one test (although a well-used test that, as the OECD notes, is highly regarded both in the US and internationally). But this evidence is at best only mildly reassuring that the average college student is making strong progress in the general goal of “learning how to think.”