Africa: Too Many Currencies?

It is commonplace to observe that the enormous US domestic market benefits from having a single currency, rather than, say, 50 state-level currencies. Indeed, the case for a single currency across a broad market was compelling enough to persuade 19 of the 27 countries in the European Union to trade in their historically separate currencies for the euro. In contrast, the nations of Africa have 42 separate currencies.

There is a newly founded African Continental Free Trade Area, seeking to reduce barriers to trade and investment across the countries of Africa. It could offer a real boost to productivity and growth across the continent: a June 2022 World Bank study estimates that it could “bring income gains of 9 percent by 2035 and reduce extreme poverty by 50 million.” But for trade to work, it has to overcome the problems of 42 currencies. Chris Wellisz provides an overview in “Freeing Foreign Exchange in Africa” (Finance & Development, September 2022). He writes:

Making payments from one African country to another isn’t easy. Just ask Nana Yaw Owusu Banahene, who lives in Ghana and recently paid a lawyer in nearby Nigeria for his services. “It took two weeks for the guy to receive the money,” Owusu Banahene says. The cost of the $100 transaction? Almost $40. “Using the banking system is a very difficult process,” he says.

His experience is a small example of a much bigger problem for Africa’s economic development—the expense and difficulty of making payments across borders. It is one reason trade among Africa’s 55 countries amounts to only about 15 percent of their total imports and exports. By contrast, an estimated 60 percent of Asian trade takes place within the continent. In the European Union, the proportion is roughly 70 percent.

What are the options here? In theory, it would be possible for countries across Africa to unite with a single currency of their own. In practice, this seems pretty unlikely. At present, there are 14 countries in Africa that use the “CFA franc” as their currency: six in central Africa (Cameroon, the Central African Republic, Chad, the Congo, Equatorial Guinea and Gabon) and eight in west Africa (Benin, Burkina Faso, Côte d’Ivoire, Guinea Bissau, Mali, Niger, Senegal and Togo). Indeed, there are technically two different CFA francs, one for each of these regions, but their exchange rate in terms of euros is always the same. Together, these countries account for about one-eighth of Africa’s GDP.

Current perceptions of the CFA franc are, at best, only partially favorable. It has provided monetary stability, but at times the exchange rate value of the currency has been so high that it strangled exports from these countries. It’s also a legacy of French colonialism. The current plan seems to be that the west African version of the CFA franc will be phased out in favor of a shared currency called the “eco,” which may be more widely used across other nations of west Africa. But the potential transition is scheduled for a few years away, and it’s unclear (at least to me) whether the countries using the central African version of the CFA franc will join in. There’s a lot of talk about “taking back control of the currency,” but the current proposals for the “eco” would continue to have a fixed exchange rate with the euro.

In short, the existing currency unions in Africa are being sharply questioned and seem to be in transition. An even broader currency union isn’t on the table. And frankly, it’s not obvious that a broader currency for Africa is a good idea at this moment in time. A shared currency across a geographic area works best when the economy of that area is already somewhat united by flows of goods and services, finances and people, and shared government programs. Obviously, the question of whether, say, Greece should share a currency with Germany has posed real problems for the euro.

So the current plan, as Wellisz describes it, is to create a Pan-African Payment and Settlement System (PAPSS):

The system aims to link African central banks, commercial banks, and fintechs into a network that would enable quick and inexpensive transactions among any of the continent’s 42 currencies. … PAPSS aims to solve such problems by settling transactions in local African currencies, obviating the need to convert them into dollars or euros before swapping them for another African currency. In essence, PAPSS would eliminate costly overseas intermediaries. The system aims to complete transactions in less than two minutes at a low though unspecified cost.

The careful reader will note that this description makes heavy use of “aims to.” PAPSS was apparently formally launched in January 2022, but had not cleared any commercial transactions through this summer. The success of Africa’s efforts to promote trade across the continent may well depend on whether PAPSS or a similar arrangement can succeed.
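
To see why skipping the hard-currency leg matters, here is a toy sketch of the two payment routes. The fees and spreads are hypothetical placeholders chosen to echo the $100-costs-$40 anecdote above, not actual PAPSS or correspondent-banking figures:

```python
# Toy comparison: routing a $100-equivalent payment through a hard-currency leg
# versus settling directly in local currencies (all rates hypothetical).

amount = 100.0

# Route 1: local currency -> USD -> other local currency, via correspondent banks.
fx_spread = 0.05          # spread lost on each currency conversion (hypothetical)
correspondent_fee = 15.0  # flat fee per overseas intermediary (hypothetical)
via_dollar = amount * (1 - fx_spread) ** 2 - 2 * correspondent_fee

# Route 2: direct local-currency settlement through a shared clearing system.
direct_spread = 0.02      # single, thinner conversion spread (hypothetical)
direct = amount * (1 - direct_spread)

print(f"Received via dollar leg:   {via_dollar:.2f}")  # ~60.25
print(f"Received via direct route: {direct:.2f}")      # 98.00
```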

For broader discussion of issues facing the economies of nations across Africa, a useful starting point is the five-paper symposium in the Winter 2022 issue of the Journal of Economic Perspectives (where I work as Managing Editor). As always, the articles are all freely available online.

Monetary Tightening: Previous US Episodes

As the Federal Reserve raises US interest rates in an effort to quell inflation, are there any lessons to be learned from similar previous episodes? The most recent Global Financial Stability Report from the IMF offers some discussion.

Here’s a figure showing the patterns of past monetary tightening back to 1960. The figure is a little busy, so here’s what you’re looking at. The shaded areas show when recessions occurred, with the horizontal black lines near the bottom of the figure showing the total decline during each recession. The yellow line is the rate of inflation. The red line shows the federal funds interest rate, which is the policy rate targeted by the Fed: it appears as a solid red line during periods when the Fed is raising interest rates, but as a dashed red line at other times. The blue line shows the yields on 10-year US Treasury bonds, which can be viewed as one way of measuring market interest rates.

What might we learn from these patterns? In some cases, monetary tightening and higher interest rates were followed by no recession at all (1965, 1984, and 1994) or by rather small recessions (1970 and 2001). What are the chances that we sidestep a substantial recession this time? The IMF offers a mixed view. On one side:

In terms of inflation levels, the current period resembles more closely the 1970s and early 1980s, when recessions following tightening cycles were characterized by high inflation and low growth (so-called stagflation). In those episodes, a substantial rise in the policy rate was necessary to tame inflation, followed by significant economic downturns.

On the other side:

While the current inflationary environment may be reminiscent of the 1970s or early 1980s, the nature of the COVID-19 shock is unprecedented. Moreover, the policy framework today is also very different. The Federal Reserve benefits from inflation-fighting credibility built over the past several decades, helping long-term inflation expectations remain much better anchored. That said, financial vulnerabilities have emerged in some sectors in the wake of the COVID pandemic, and financial market volatility has notably risen after having remained relatively compressed over the preceding protracted period of low rates. The financial and regulatory architecture, however, has evolved considerably since the global financial crisis, and policymakers today have at their disposal a number of risk management tools that could be used to deal with the potential adverse systemic fallout from a disorderly tightening in financial conditions.

Finally, I’d just add that in past cycles of monetary tightening, the federal funds interest rate (red line) was consistently raised to a point where it was at or above the inflation rate (yellow line). In other words, the real interest rate (nominal interest rate minus inflation rate) was positive. However, in the current situation, the federal funds interest rate remains well below the inflation rate (red line below yellow line on the far right of the figure). The real federal funds interest rate has been negative most of the time since 2008. In addition, market interest rates, as proxied by the 10-year Treasury bond yield, have historically been (mostly) at or above the inflation rate, but even after recent increases, they remain well below the current inflation rate.
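
A quick sketch of that real-rate arithmetic (the rates below are illustrative round numbers, not actual Fed or CPI data):

```python
# Real interest rate = nominal interest rate - inflation rate (simple approximation).
def real_rate(nominal_pct: float, inflation_pct: float) -> float:
    return nominal_pct - inflation_pct

# Stylized past tightening cycle: policy rate pushed above inflation.
print(real_rate(nominal_pct=10.0, inflation_pct=8.0))  # +2.0 -> positive real rate

# Stylized current situation: policy rate still below inflation.
print(real_rate(nominal_pct=3.0, inflation_pct=8.0))   # -5.0 -> negative real rate
```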

Thus, the policy question for the present is whether making the policy interest rate less negative in real terms, in a situation where real market interest rates are also negative, will be enough to bring down inflation. A key element of this question is whether the inflation rate is being partly driven by temporary factors–price increases linked to the Russia-Ukraine war, as well as ongoing supply chain problems and other disruptions from the pandemic.

It seems to me that the Fed is (in effect) hoping that some of the existing inflation will fade on its own, and that its step-by-step increases in interest rates will be sufficient to knock out any remaining inflation. In this scenario, the current progression of monetary tightening might be followed by lower inflation with no recession or only a modest recession. On the other side, it’s possible for inflation to begin from one-time causes, but then develop its own ongoing momentum. If the existing inflation doesn’t fade on its own, then the Fed seems committed to an ongoing succession of higher and higher interest rates, and a significant recession sometime next year becomes more likely.

Ethanol Controversies, Redux

From the standpoint of producing carbon-neutral energy, using ethanol to supplement gasoline might seem like a no-brainer. Ethanol comes from crops like corn, which collect carbon from the atmosphere as they grow. When the ethanol is burned, it does release carbon into the atmosphere–but then that carbon can again be collected by the next corn crop. But of course, nothing is that simple. Corn needs to be grown, typically using machinery and fertilizer, and then processed into ethanol, all of which require energy inputs. If growing additional corn for ethanol requires cultivation of additional land, plowing and preparing that land will release substantial amounts of carbon dioxide. Moreover, the use of corn for ethanol drives up demand for corn and keeps the price of corn higher than it would otherwise be, which is a political selling point for US farmers, but can put stress on the diets of the poor–especially in developing economies.
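
To make that accounting concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder, not an estimate from the studies discussed below; the point is only the structure of the calculation:

```python
# Toy lifecycle carbon accounting for corn ethanol (illustrative numbers only).
# Net emissions = combustion - crop uptake + farming/processing inputs + land-use change.

combustion = 100.0       # CO2 released when the ethanol is burned (arbitrary units)
crop_uptake = -100.0     # CO2 captured as the next corn crop grows
farm_inputs = 30.0       # machinery, fertilizer, and processing energy (hypothetical)
land_use_change = 25.0   # carbon released by plowing new land, amortized (hypothetical)

net = combustion + crop_uptake + farm_inputs + land_use_change
print(f"Net lifecycle emissions: {net} units")  # positive, despite the offsetting uptake
```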

These issues have bubbled up again with the disruption of agricultural markets due to Russia’s invasion of Ukraine, combined with the ongoing global supply chain disruptions, but the topic isn’t new. Back in 2011, I amused myself for a few months at this blog by posting examples of international organizations that had come out against subsidizing biofuels like ethanol. For example, in a June 2011 post, “Everyone Hates Biofuels,” I pointed out a report in which 10 international agencies made an unambiguous proposal that high-income countries drop their subsidies for biofuels. I followed up with “The Committee on World Food Security Hates Biofuels” in August 2011 and “More on Hating Biofuels: The National Research Council” in October 2011. For a couple of additional whacks at the piñata, see “Biofuels and Hunger in Low-Income Countries” in January 2013 and “Against Biofuel Subsidies” in June 2015.

For an update, Dan Charles focuses on the environmental issues in a readable essay, “How green are biofuels? Scientists are at loggerheads” (Knowable Magazine, October 6, 2022). Here’s a figure showing the rise in US ethanol production, which was launched to a higher level starting in 2005 by a mixture of government requirements and subsidies.

Charles emphasizes some useful points about where the argument over biofuels currently stands. One key issue dividing supporters and opponents is the extent to which corn grown for ethanol affects land use. On one side, there is a detailed and comprehensive model of land use from the Global Trade Analysis Project at Purdue University, or GTAP-BIO, which broadly suggests that the rise of ethanol hasn’t caused much additional land to come under cultivation but instead has been enabled by higher crop productivity on existing land.

On the other side, the crops for ethanol have to come from somewhere. Charles writes:

 Ethanol factories now consume about 130 million metric tons of corn every year. It’s about a third of the country’s total corn harvest, and growing that corn requires more than 100,000 square kilometers of land. In addition, more than 4 million metric tons of soybean oil is turned into diesel fuel annually, and that number is growing fast.

Charles notes that the amount of US farmland under cultivation had been gradually declining from the 1980s up until the passage of the Renewable Fuel Standard in 2007. At this point, the decline in cropland stopped–because of the rise in corn and soybean production for biofuels. “Without the ethanol boom, the pre-2007 trend in land use would have continued. More land — 5 million acres — would have remained in grass between 2008 and 2016, rather than being converted to grow crops.” Thus, it can simultaneously be true that crops for ethanol have not dramatically expanded land under cultivation, and also that without crops for ethanol, there would be substantially less land under cultivation.

If we work from the political assumption that US corn and soybean farmers are a potent and focused special interest, and that their Congressional representatives will be able to block the withdrawal of ethanol requirements and subsidies for the indefinite future, what are the possible next steps here? There seem to be two answers on which supporters and opponents of ethanol can more-or-less agree.

One approach “is figuring out ways to measure those environmental benefits and pay landowners for them, just as they get paid for growing corn. To some extent, the US Department of Agriculture does this already, with programs that pay farmers to preserve areas of grassland or forest. Such initiatives are set to expand; the Inflation Reduction Act, which Congress passed in August, gives them an extra $18 billion in funding.”

The other approach is technological. For some years now, there has been discussion of “cellulosic biomass,” which involves getting ethanol from prairie grasses. This transmutation is possible in a laboratory, at high cost, but it seems far from being a commercial proposition. But if the process were commercially viable: “The grass could be harvested, leaving the roots to grow undisturbed, building up carbon-rich organic matter in the soil and avoiding most of the environmental damage that results from converting land into cornfields.”

Ultimately, the idea of using corn and soybeans as a primary energy source doesn’t seem like sensible policy, whether the goals are environmental or to ensure affordable global food supplies. Charles writes:

[T]he world’s croplands, which have claimed vast ecosystems, cover less than half an acre per person on the planet. Producing enough biofuel to power one typical passenger car, meanwhile, requires more than 1.2 acres. (Photovoltaic solar arrays produce many times more usable energy per acre of land than biofuels, and can also be located in dry areas that can’t grow food.)
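
The acreage comparison in that passage reduces to one line of arithmetic, using the figures from the quote:

```python
# Land arithmetic from the quoted figures.
cropland_per_person = 0.5  # world cropland per person, in acres ("less than half an acre")
biofuel_per_car = 1.2      # acres needed to fuel one typical passenger car

# One car's biofuel claims the cropland "share" of more than two people.
print(biofuel_per_car / cropland_per_person)  # 2.4
```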

A Nobel for Insights About Banks and Financial Crises: Bernanke, Diamond, and Dybvig

As time goes by, I find it increasingly hard to explain to the non-econ world just what happened back in September 2008, when during a period of perhaps 2-3 weeks there seemed to me a meaningful chance (and by “meaningful,” I mean large enough to keep me awake at night) that the US banking and financial sector would melt down in a way that would have led not just to the substantial recession that did occur, but to something much worse. But in one of those odd quirks of history, the Federal Reserve at that time was being chaired by a former academic economist named Ben Bernanke who was actually a recognized expert in the subject of banking and financial collapses, based on research that he and others like Douglas Diamond and Philip Dybvig had done back in the 1980s and 1990s. The Great Recession from 2007-2009 was very bad, and it could have been so much worse.

The Royal Swedish Academy of Sciences has now awarded what is formally known as the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2022” to Bernanke, Diamond, and Dybvig “for research on banks and financial crises.” As usual, the prize committee has published two explanatory pieces for those who want to know what the fuss is all about. There’s a shorter “popular science background” article called “The laureates explained the central role of banks in financial crises,” and a longer and more technical (that is, about 70 pages) “scientific background” article titled “Financial Intermediation and the Economy.” Earlier narratives tended to describe the “Great” character of the Depression as the result of a long list of poor decisions and things that went wrong. Bernanke made a strong case that the length and depth of the Great Depression was intimately tied to one main cause: the crisis of the banking system at the time. The prize committee writes:

Prior to Bernanke’s study, the general perception was that the banking crisis was a consequence of a declining economy, rather than a cause of it. Instead, Bernanke established that bank collapses were decisive for the recession developing into deep and prolonged depression. Once a bank goes bankrupt, the relationship between the bank and its borrowers is cut; this relationship contains knowledge capital that is necessary for the bank to manage its lending efficiently. The bank knows its borrowers, it has detailed information about what borrowers have used the money for and what requirements are needed to ensure the loan will be repaid. Building up such knowledge capital takes a long while, and it cannot simply be transferred to other lenders when a bank fails. Repairing a failed banking system can therefore take many years, during which time the economy functions very poorly. Bernanke demonstrated that the economy did not start to recover until the state finally implemented powerful measures to prevent additional bank panics.

At about the same time in the early 1980s, Diamond and Dybvig were developing a theoretical model of banking and the financial sector that addressed these issues. They began with the basic idea of a bank as a “financial intermediary”–that is, some economic agents (people and firms) have savings they would prefer not to spend in the present, while others would like to borrow and spend now, and then repay later. Banks serve as the intermediaries between these groups.

But an immediate challenge arises here. What if a substantial number of savers decide to withdraw their money from these theoretical banks, perhaps because the savers don’t trust that their savings are safe at the bank? The theoretical bank will not have the money for everyone: after all, that money has already been loaned out to people who borrowed to purchase homes and cars, and to businesses that borrowed to make investments. Thus, there is a possibility of a “bank run,” where people rush to the bank and try to withdraw their funds right now–and in doing so, they make it impossible for the bank to continue functioning.
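
A toy simulation makes the fragility concrete. This is only a stylized sketch of the sequential-service logic, not the actual Diamond-Dybvig model, and the balance-sheet numbers are invented:

```python
# Toy bank run: deposits are mostly tied up in illiquid loans, so the bank can
# honor withdrawals only until its cash reserve runs out.

deposits = 100.0
reserves = 10.0              # cash on hand (hypothetical)
loans = deposits - reserves  # illiquid: already lent out for homes, cars, investment

def run_bank(withdrawal_demand: float) -> str:
    if withdrawal_demand <= reserves:
        return "Bank pays everyone who shows up; no run develops."
    # Fire-selling illiquid loans recovers only a fraction of their value (hypothetical haircut).
    fire_sale_value = loans * 0.5
    if withdrawal_demand <= reserves + fire_sale_value:
        return "Bank survives, but only by dumping loans at a loss."
    return "Bank fails: even selling everything can't cover the withdrawals."

print(run_bank(8.0))   # a normal day
print(run_bank(40.0))  # nervous depositors
print(run_bank(70.0))  # full-blown run
```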

The bank run is of course a staple of stories and accounts of the time before the 1930s: two of the best-known movie examples are Jimmy Stewart’s bank run in It’s a Wonderful Life and the scene in Mary Poppins where the little boy Michael tries to withdraw his tuppence from the bank run by Dick Van Dyke. However, if the government provides deposit insurance, then there is no reason for bank runs. From this perspective, it isn’t a coincidence that when federal bank deposit insurance was enacted in 1933, the role of the fragile banking system in propagating the Depression faded away, and the worst of the Great Depression ended.

But a follow-up tradeoff emerges here. If there is bank deposit insurance, then savers no longer need to worry about whether the bank is making sensible decisions about lending, or whether the bankers are chasing higher risks and perhaps higher payoffs. Thus, Diamond in particular emphasized that deposit insurance needs to be paired with government oversight of banks to make sure that they are not taking undue risks. This pairing of deposit insurance and financial oversight of banks functioned to keep the US banking and financial system stable for a number of decades.

But there were warning signs that not all the problems had been addressed. Perhaps most notably, during the savings and loan crisis of the later 1980s, a number of S&Ls found themselves in a bad financial situation: they had made long-term loans for home mortgages over the previous decades at relatively low interest rates, but the high rates of inflation in the 1970s had pushed up interest rates. Under law, the S&Ls were limited in the interest rates they could pay, so savers instead wanted to move their funds to money market accounts, which didn’t have the same restrictions. This was a modern kind of bank run: funds being withdrawn from one part of the financial sector and moved to another. A number of the S&Ls were faced with bankruptcy as their deposits diminished, and some of the managers chose the strategy of making risky loans at high interest rates to try to regain solvency. Some politicians pressured financial regulators not to crack down on their local financial institutions. Ultimately, the federal government ended up needing to pay more than $150 billion to protect depositors who had kept funds in these institutions. For more background, readers could start with the three-paper symposium in the Fall 1989 issue of the Journal of Economic Perspectives.

With the Great Recession of 2007-9, a somewhat similar problem arose again. A substantial and growing part of the US financial sector had moved outside banks, into what is often called the “shadow banking” sector. If you wanted a loan for a house, you could get it through a non-bank institution, and behind the scenes, mortgage loans were repackaged and sold to investors like pension funds or insurance companies. Businesses found other ways to borrow as well, by using bond markets as well as their own versions of loans that were repackaged and resold. These shadow banking financial institutions were not under the rules of federal deposit insurance or the prudential scrutiny of federal bank regulators. Risky loans were made. And when the shadow banking institutions went sideways, the actual banking system was threatened as well. It was extraordinarily important to have a knowledge base, and people in charge who understood that knowledge base, to recognize that the dire problems of financial institutions in 2007 and 2008 should not just be viewed as an outcome of mortgage sector problems, but that these issues could become a cause of additional and deeper problems.

These underlying problems persist today. Every new innovation in the broader financial sector–the shadow-banking sector–raises the issue of possible financial runs from one sector to another, along with questions of how government might create (or not!) some combination of safety guarantees and a regulatory apparatus. The Nobel prize committee wrote:

Banks and bank-like institutions have existed for thousands of years. Today they are active in every country around the world. Banks obviously perform important functions, but they have also been at the epicenter of some of history’s most devastating economic crises such as the Great Depression. Nevertheless, it was not until the work of this year’s laureates, Ben S. Bernanke, Douglas W. Diamond, and Philip H. Dybvig, that we had a comprehensive theory of why banks exist in the form we observe, what role they play in the economy, why they are fragile, and an empirical account of how devastating and long-lasting the consequences of massive bank failures can be. …

The research from the 1980s for which this year’s Prize in Economic Sciences is awarded obviously does not provide us with final policy recommendations. Deposit insurance does not always work as intended. It can lead to perverse incentives for banks and their owners to gamble to take the profit if things go well and let taxpayers pay the bill if not. Runs on new financial intermediaries, engaging in profitable maturity transformation like banks, but operating outside of bank regulation, were arguably key for the financial crisis 2007–2009 leading to the Great Recession. When central banks act as lenders of last resort, this can lead to large and unintended wealth redistribution and have negative moral hazard effects on banks who may increase reckless lending, potentially leading to future crises.

How to regulate the financial market so that it can perform its important function of channeling savings to productive investments, without from time to time causing financial crises, is a question that is actively debated to this day. The same is true about what policies are most effective in preventing a threatening crisis from developing. However, based on the foundational work of the laureates and all research that has followed, society is now better equipped to handle financial crises.

Why Do So Many Interventions Help Women, but Not Men?

Richard V. Reeves points out a disconcerting finding: in studies of interventions that seek to boost the life prospects of the disadvantaged, when positive effects are found, the benefits tend to accrue to women, not men. He discusses the findings in “Why Men Are Hard to Help,” appearing in the most recent issue of National Affairs. The essay is adapted from his recent book: Of Boys and Men: Why the Modern Male Is Struggling, Why It Matters, and What to Do about It. Some examples:

Thanks to a group of anonymous benefactors, students educated in the city’s K-12 school system receive paid tuition at almost any college in the state. Other cities have similar initiatives, but the Kalamazoo Promise is unusually generous. It’s also one of the few programs of its kind to have been robustly evaluated — in this case by Timothy Bartik, Brad Hershbein, and Marta Lachowska of the Upjohn Institute. They found that the Kalamazoo Promise made a major difference in the lives of its beneficiaries — more so than other, similar programs made in theirs. But the average impact disguises a stark gender divide. According to the evaluation team, women in the program “experience very large gains,” including an increase of 45% in college-completion rates, while “men seem to experience zero benefit.” The cost-benefit analysis showed an overall gain of $69,000 per female participant — a return on investment of at least 12% — compared to an overall loss of $21,000 for each male participant. In short, for men, the program was both costly and ineffective.

One of the other studies that jumped off my desk in considering this evidence was an evaluation of a mentoring and support program called “Stay the Course” at Tarrant County College, a two-year community college in Fort Worth, Texas. Community colleges are a cornerstone of the American education system, serving around 7.7 million students — largely from middle- and lower-class families. But there is a completion crisis in the sector: Only about half the students who enroll end up with a qualification (or transfer to a four-year college) within three years of enrolling. Many of these schools produce more dropouts than diplomas. The good news is that there are programs, like Stay the Course, that can boost the chances of a student succeeding. The bad news is that, as the Fort Worth pilot shows, they might not work for men, who are most at risk of dropping out in the first place. Among women, the Fort Worth initiative tripled associate-degree completion. This is a huge finding: That kind of effect is rare in any social-policy intervention. But as with free college in Kalamazoo, the program had no impact on college completion rates for men.

But Stay the Course and the Kalamazoo Promise are just two among dozens of initiatives in education that seem not to benefit boys or men. An evaluation of three preschool programs — Abecedarian, Perry, and the Early Training Project — for example, showed “substantial” long-term benefits for girls but “no significant long-term benefits for boys.” Project READS, a North Carolina summer reading program, boosted literacy scores “significantly” for third-grade girls — giving them the equivalent of a six-week acceleration in learning — but there was a “negative and insignificant reading score effect” for boys. …

Students who attended their first-choice high school in Charlotte, North Carolina, after taking part in a school-choice lottery earned higher GPAs, took more Advanced Placement classes, and were more likely to go on to enroll in college than their peers — but the overall gains were “driven entirely by girls.” A new mentoring program for high-school seniors in New Hampshire almost doubled the number of girls enrolling in a four-year college, but it had “no average effect” for boys. Urban boarding schools in Baltimore and Washington, D.C., boosted academic performance among low-income black students, but only female ones. College scholarship programs in Arkansas and Georgia increased the number of women earning a degree but had “muted” effects on white men and “mixed and noisy” results for black and Hispanic men.

And so on, and so on, for studies of the effects of wage subsidies, worker training, and other areas. Reeves notes that a number of studies of such programs point out the gap between outcomes for boys and girls, or men and women, and then note (as academic research papers love to do) that it deserves further study. But those further studies–much less proposals for policies that would have improved outcomes for men–don’t seem to happen.

Thus, Reeves, like the rest of us, ends up falling back on explanations that have a plausible ring, but aren’t exactly the result of gold-standard cause-and-effect social science research. He writes: “The problem is not that men have fewer opportunities; it’s that they are not seizing them. The challenge seems to be a general decline in agency, ambition, and motivation.”

Reeves also notes: “[W]here there is a difference by gender, it is essentially always in favor of girls and women. The only real exception to this rule is in some vocational programs or institutions, which do seem to benefit men more than women — one among many reasons we need more of them.” Perhaps such programs speak more clearly to those with lower agency, ambition, and motivation?

If women had dramatically lower rates of college attendance, it would be viewed as a national problem. Indeed, it was viewed that way. As Reeves notes:

In 1972, Congress passed Title IX — a landmark statute to promote gender equality in higher education. Quite rightly, too: At the time, there was a 13 percentage-point gap in the proportion of bachelor’s degrees going to men compared to women. Just a decade later, the gap had closed. By 2019, the gender gap in bachelor’s degrees was 15 points — wider than it had been in 1972, but in the opposite direction. Today, women far outperform men in the American education system. … In the United States, for example, the 2020 drop in college enrollment was seven times greater for male students than for female students. At the same time, male students struggled more than female students with online learning.

Societies with a substantial proportion of disgruntled and flailing young men will suffer from an array of other related problems.

How Much of the Gasoline Price is Crude Oil?

The US Energy Information Administration gives this answer for August 2022:

I was buying gas recently, and because I was also picking up snacks, I went into the gas station to pay the cashier rather than swiping my credit card at the pump. I said something low-key conversational about “price of gas sure is up” and he visibly winced. So I added something like: “Of course, the gas station makes, what, about 5 cents when you sell a gallon of gas?” He visibly relaxed and even gave me a rueful smile. It made me suspect that the customer service people behind the counter, everywhere, have probably been taking a lot of face-to-face direct heat for higher prices.

Some Economics of Diapers

I remember family holidays when our children were little, where the first step was to grab a large suitcase and start packing in the diapers. But for a number of low-income US families, affording the diapers is a challenge to their limited resources. Jennifer Randles discusses “Fixing a Leaky U.S. Social Safety Net: Diapers, Policy, and Low-Income Families” (RSF: The Russell Sage Foundation Journal of the Social Sciences, August 2022, 8:5, 166-183). She writes (citations omitted):

Diaper need—lacking enough diapers to keep an infant dry, comfortable, and healthy—affects one in three mothers in the United States, where almost half of infants and toddlers live in low-income families. Diaper need … exacerbates food insecurity, can cause parents to miss work or school, and is predictive of maternal depression and anxiety. When associated with infrequent diaper changes, it can lead to diaper dermatitis (rash) and urinary tract and skin infections.

Infants in the United States will typically use more than six thousand diapers, costing at least $1,500, before they are toilet trained. Cloth diapers are not a viable alternative for most low-income parents given high start-up and cleaning costs and childcare requirements for disposables. Many low-income parents must therefore devise coping strategies, such as asking family or friends for diapers or diaper money; leaving children in used diapers for longer; and diapering children in clothes and towels.

Low-income parents also turn to diaper banks, which collect donations and purchase bulk inventory for distribution to those in need and usually provide a supplemental supply of twenty to fifty diapers per child per month. In 2016, the nation’s more than three hundred diaper banks distributed fifty-two million diapers to more than 277,000 children, meeting only 4 percent of the estimated need. Many of those who seek diaper assistance live in households with employed adults who have missed work because of diaper need. …

As a highly visible and costly item that must be procured frequently according to norms of proper parenting, diapers are part of negotiations about paternal responsibility and access to children. Unemployed nonresidential fathers give more in-kind support, such as diapers, than formal child support; the provision of diapers can be a form of or precursor to greater father involvement, especially among fathers who are disconnected from the labor market and adopt nonfinancial ideas of provisioning.

As Randles points out, some existing aid programs for poor families, like the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) or food stamps, explicitly don’t cover diapers. She writes: “The $75 average monthly diaper bill for one infant would alone account for 8 to 40 percent of the average state TANF [Temporary Assistance to Needy Families] benefit.” In addition, “[s]ince the onset of the COVID-19 pandemic, disposable diaper costs have increased 10 percent due to higher demand and input material costs, supply-chain disruptions, and shipping cost surges.”

Diapers are taken for granted as parents’ responsibility, but politically deemed a discretionary expense. This is misaligned with how mothers understood their infants’ specific basic needs, which for most came down to milk and diapers. Food stamps and WIC offered support for one. No comparable acknowledgment or assistance for the other meant that mothers struggled even more to access a necessity that policy did not officially recognize their children have.
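
The quoted figures imply some simple arithmetic, both for the per-diaper cost and for the range of state TANF benefits behind that 8-to-40 percent spread:

```python
# Per-diaper cost implied by Randles's lifetime figures.
print("Cost per diaper:", 1500 / 6000)  # $0.25, using 6,000 diapers at $1,500

# Implied monthly TANF benefit range: a $75 diaper bill equals
# 8 to 40 percent of the average state benefit.
diaper_bill = 75.0
implied_high_benefit = diaper_bill / 0.08  # state where diapers are 8% of the benefit
implied_low_benefit = diaper_bill / 0.40   # state where diapers are 40% of the benefit

print(f"Implied benefit range: ${implied_low_benefit:.2f} to ${implied_high_benefit:.2f} per month")
# -> $187.50 to $937.50, a rough sense of how much state TANF generosity varies
```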

Several bills have been introduced in Congress in the last few years to provide diaper-focused assistance, but apparently none has made it out of committee. There are a number of state-level programs as well. Some are linked to the level of sales tax imposed on diapers. Some seek to build up the supply at diaper banks. As another example, California gives vouchers for diapers to TANF recipients who have children under the age of three.

US Household Wealth: 1989-2019

Total household wealth is equal to the value of assets, including both financial assets and housing, minus the value of debts. The Congressional Budget Office has just published “Trends in the Distribution of Family Wealth, 1989 to 2019” (September 2022). Here are a few of the themes that caught my eye.

In 2019, total family wealth in the United States—that is, the sum of all families’ assets minus their total debt—was $115 trillion. That amount is three times total real family wealth in 1989. Measured as a percentage of the nation’s gross domestic product, total family wealth increased from about 380 percent to about 540 percent over the 30-year period from 1989 to 2019, CBO estimates. … From 1989 to 2019, the total wealth held by families in the top 10 percent of the wealth distribution increased from $24.3 trillion to $82.4 trillion (or by 240 percent), the wealth held by families in the 51st to 90th percentiles increased from $12.7 trillion to $30.2 trillion (or by 137 percent), and the wealth held by families in the bottom half of the distribution increased from $1.4 trillion to $2.3 trillion (or by 65 percent).
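
The growth percentages follow directly from the quoted dollar figures; a quick check:

```python
# Growth in wealth by group, 1989-2019, from the quoted CBO figures (in $ trillions).
groups = {
    "top 10 percent": (24.3, 82.4),
    "51st-90th percentiles": (12.7, 30.2),
    "bottom 50 percent": (1.4, 2.3),
}

for name, (w1989, w2019) in groups.items():
    growth = (w2019 / w1989 - 1) * 100
    print(f"{name}: {growth:.0f}% growth")
# -> roughly 239%, 138%, and 64%, close to the CBO's rounded 240/137/65
# (the small gaps reflect rounding in the quoted dollar amounts)
```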

There are several points worth pausing over here. First, the wealth/GDP ratio fluctuated but over the long term stayed around 360% from the 1950s up to the early 1990s. Indeed, I remember being taught in the 1980s that, for quick-and-dirty calculations, wealth/GDP could be considered a constant. But since then the wealth/GDP ratio has taken off, not just in the US but worldwide. Part of the reason is the run-up in stock market prices; part is the run-up in housing prices. One of the major questions for financial markets is whether this higher wealth/GDP ratio will persist: in particular, to what extent was it the result of gradually lower interest rates since the 1990s that have helped drive up asset prices, and will a reversion to interest rates more in line with historical levels lead asset prices to slump in a lasting way?

Second, the growth in wealth has not been equal: households in the upper part of the wealth distribution now hold a greater share of wealth than in the past. The CBO points out that differences in wealth are correlated with many factors, like age, marriage, and education. But while these factors can help to explain differences in wealth at a point in time, it’s not clear to me that changes in these factors can explain the growing inequality of wealth. Instead, my own sense is that the growing inequality of wealth is a version of a “Matthew effect,” as economists sometimes say. In the New Testament, Matthew 13:12 reads (in the New King James version): “For whoever has, to him more will be given, and he will have abundance; but whoever does not have, even what he has will be taken away from him.” In the context of wealth, those who were already somewhat invested in the stock market and in housing by, say, the mid-1990s have benefited from the asset boom in those areas; those who were not already invested in those areas had less chance for pre-existing wealth to grow.

Third, it’s worth remembering that for many people, especially young and middle-aged adults, their major wealth is in their own skills and training–their “human capital”–which allows them to earn higher wages. As an example, imagine a newly minted lawyer or doctor, who may have large student debts and not yet have had a chance to accumulate much financial wealth, but whose skills and credentials mean that their personal wealth, broadly understood to include the human capital that will generate decades of future income, is already quite high.
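
That point can be made concrete with a present-value calculation. The earnings, career length, discount rate, and debt below are all hypothetical round numbers, not drawn from the CBO report:

```python
# Present value of future earnings: a rough proxy for human capital.
def present_value(annual_earnings: float, years: int, discount_rate: float) -> float:
    return sum(annual_earnings / (1 + discount_rate) ** t for t in range(1, years + 1))

# Hypothetical newly minted professional: $150k/year for 35 years, 4% discount rate.
pv = present_value(annual_earnings=150_000, years=35, discount_rate=0.04)
student_debt = 200_000  # hypothetical

print(f"Human capital: ${pv:,.0f}; net of debt: ${pv - student_debt:,.0f}")
# -> roughly $2.8 million of human capital, dwarfing the debt
```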

Finally, the pattern of wealth accumulation over the life cycle appears to be shifting. In this graph, notice that those born in the 1940s have substantially more wealth when they reach their 60s than does the previous generation of those born in the 1930s. However, the generation born in the 1950s is on a lower trajectory: that is, their median wealth in their late 50s is less than what has been accumulated by the generation born in the 1940s. As you work down to more recent generations, each line is below that for the previous generation: that is, each generation is accumulating less wealth than the previous generation did at the same age.

The CBO writes: “However, for cohorts born since the 1950s, median wealth as a percentage of median income was lower than that measure was for the preceding cohort at the same age, and median debt as a percentage of median assets was higher.”

The CBO report also offers some updates through the first quarter of 2022, at which time total wealth and the stock market were holding up pretty well through the pandemic recession. But since April, US stock markets are down about 20%, and the totals and distributions above would need to be adjusted accordingly.

Telemedicine Arrives

Back in 2015, the American College of Physicians officially endorsed telemedicine — that is, providing health care services to a patient who is not in the same location as the provider. Nonetheless, many health care providers continued to rely on in-person visits for nearly all of their patients, with occasional telephone follow-ups. When the pandemic hit, it induced the birth of telemedicine as a widespread practice.

When I’ve chatted with doctors and other health care providers about the change, I’ve heard mainly two reactions: 1) a degree of surprise that telemedicine was working well for patients; and 2) a comment along the lines that “we were always willing to do it, but what changed is that insurance was willing to reimburse for it.” It’s of course not surprising that insurance reimbursement rules will drive the manner and type of health care that’s provided, but it’s always worth remembering.

Evidence on telemedicine is now becoming available. Kathleen Fear, Carly Hochreiter, and Michael J. Hasselberg describe some results from the University of Rochester Medical Center in “Busting Three Myths About the Impact of Telemedicine Parity” (NEJM Catalyst, October 2022, vol. 3, #10; a subscription or library access is needed). The U-Rochester medical center is good-sized: six full-service hospitals and nine urgent care centers, along with various specialty care hospitals and a network of primary care providers. Before the pandemic, it was handling about two million outpatient visits per year.

During the pandemic, telemedicine at U-Rochester spiked from essentially nothing to 80% of patient contacts, and now seems to have settled back to about 20%.

Here is how the authors summarize the experience:

Three beliefs — that telemedicine will reduce access for the most vulnerable patients; that reimbursement parity will encourage overuse of telemedicine; and that telemedicine is an ineffective way to care for patients — have for years formed the backbone of opposition to the widespread adoption of telemedicine. However, during the Covid-19 pandemic, institutions quickly pivoted to telemedicine at scale. Given this rapid move, the University of Rochester Medical Center (URMC) had a natural opportunity to test the assumptions that have shaped prior discussions. Using data collected from this large academic medical center, UR Health Lab explored whether vulnerable patients were less likely to access care via telemedicine than other patients; whether providers increased virtual visit volumes at the expense of in-person visits; and whether the care provided via telemedicine was lower quality or had unintended negative costs or consequences for patients. The analysis showed that there is no support for these three common notions about telemedicine.

At URMC, the most vulnerable patients had the highest uptake of telemedicine; not only did they complete a disproportionate share of telemedicine visits, but they also did so with lower no-show and cancellation rates. It is clear that at URMC, telemedicine makes medical care more accessible to patients who previously have experienced substantial barriers to care. Importantly, this access does not come at the expense of effectiveness. Providers do not order excessive amounts of additional testing to make up for the limitations of virtual visits. Patients do not end up in the ER or the hospital because their needs are not met during a telemedicine visit, and they also do not end up requiring additional in-person follow-up visits to supplement their telemedicine visit. As the pandemic continues to slow down, payers may start to resist long-term telemedicine coverage based on previous assumptions. However, the experience at URMC shows that telemedicine is a critical tool for closing care gaps for the most vulnerable patient populations without lowering the quality of care delivered or increasing short-term or long-term costs.

The authors are careful to point out that a substantial part of health care does need to be delivered in person–a point with which it would be hard to disagree. But this evidence also strongly suggests that telemedicine was dramatically underused before the pandemic. It raises broader questions as to whether there are other ways that the provision of health care is stuck in its ways, unwilling or unable to adopt promising innovations in a timely manner.

Some Economics of Tobacco Regulation

Cigarette smoking in the United States is implicated in about 480,000 deaths each year–about one in five deaths. Cigarette smokers on average lose about 10 years of life expectancy. According to a US Surgeon General report in 2020:

Tobacco use remains the number one cause of preventable disease, disability, and death in the United States. Approximately 34 million American adults currently smoke cigarettes, with most of them smoking daily. Nearly all adult smokers have been smoking since adolescence. More than two-thirds of smokers say they want to quit, and every day thousands try to quit. But because the nicotine in cigarettes is highly addictive, it takes most smokers multiple attempts to quit for good.

Philip DeCicca, Donald Kenkel, and Michael F. Lovenheim summarize the evidence on “The Economics of Tobacco Regulation: A Comprehensive Review” (Journal of Economic Literature, September 2022, 883-970). Of course, I can’t hope to do justice to their work in a blog post, but here are some of the points that caught my eye.

1. US efforts at smoking regulation changed dramatically starting in the late 1990s, with an enormous jump in cigarette taxes and smoking restrictions.

For example, here’s a figure showing the combined federal and state tax rate on cigarettes as a percent of the price (solid line) and as price-per-pack (dashed line). In both cases, a sharp rise is apparent from roughly 1996 up through 2008.

In addition, smoking bans have risen substantially.

Governments around the world have implemented smoking bans sporadically over the past five decades, but they have become much more prevalent over the past two decades. … [W]orkplace, bar, and restaurant smoke-free indoor air laws became increasingly common. As of 2000, no state had yet passed a comprehensive ban on smoking in these areas, although some states had more targeted bans. From 2000–2009, the fraction of the US population covered by smoke-free worksite laws increased from 3 percent to 54 percent, and the fraction covered by smoke-free restaurant laws increased from 13 percent to 63 percent … Since the turn of the century, the increased taxation and regulation of cigarettes and tobacco is unprecedented and dramatic.

2. Given that tobacco usage is being discouraged in a number of different ways, all at the same time, it’s difficult for researchers to sort out the individual effects of, say, cigarette taxes vs. workplace smoking bans vs. government-mandated health warnings vs. changing levels of social approval.

3. My previous understanding of the conventional wisdom was that demand for cigarettes from adult smokers was relatively inelastic, while demand from younger smokers was relatively elastic. The underlying belief was that (as a group) adult smokers have had a more long-lasting tobacco habit and have more income, so it was harder for them to shake their tobacco habit, while the tobacco usage of younger smokers is more malleable. This conventional wisdom may need some adjustments.

The consensus from the last comprehensive review of the research that was conducted 20 years ago (Chaloupka and Warner 2000) indicates that adult cigarette demand is inelastic. More recent research from a time period of much higher cigarette taxes and lower smoking rates supports this consensus; however, there is also evidence that traditional methods of estimating cigarette price responsiveness overstate price elasticities of demand. As well, more recent research casts doubt on the prior consensus that youth smoking demand is more price-elastic than adult demand; the most credible studies on youth smoking indicate little relationship between smoking initiation and cigarette taxes. The inelastic nature of cigarette demand suggests cigarette excise taxes are an efficient revenue-generating instrument.

To put it another way, higher cigarette taxes do a decent job of collecting revenue, but they don’t do much to discourage smoking.
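
A worked example shows why inelastic demand makes cigarette taxes good at raising revenue but weak at deterring smoking. The elasticity value here is illustrative, not an estimate from the review:

```python
# Price elasticity of demand: percent change in quantity per percent change in price.
# "Inelastic" means the elasticity is between -1 and 0.
elasticity = -0.4      # hypothetical, in the inelastic range
price_increase = 0.10  # a tax pushes the price up 10%

quantity_change = elasticity * price_increase  # -4%: smoking falls only a little
spending_change = (1 + price_increase) * (1 + quantity_change) - 1

print(f"Quantity falls {-quantity_change:.0%}, total spending rises {spending_change:.1%}")
# -> quantity down 4%, spending (and so the tax base) up about 5.6%
```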

4. If cigarette taxes are really about revenue collection, because they don’t do much to discourage smoking, then it becomes especially relevant that low-income people tend to smoke more, and thus end up paying more in cigarette taxes. This figure shows cigarette use by income group; the next figure shows cigarette taxes paid by income group.

5. Broadly speaking, there are two economic justifications for cigarette taxes. One is what economists call “externalities,” which are the costs that cigarette smokers impose on others in ways including secondhand smoke and higher health care costs that are shared across public and private health insurance plans with non-smokers. The other is “internalities,” which are the costs that smokers who would like to quit, but find themselves trapped by nicotine addiction, impose on themselves. The authors write:

However, evidence on the magnitude of the externalities created by smoking does not necessarily support current tax levels. Behavioral welfare economics research suggests that the internalities of smoking provide a potentially stronger rationale for higher taxes and stronger regulations. But the empirical evidence on the magnitudes of the internalities from smoking is surprisingly thin.

6. Finally, in reading the article, I find myself wondering if the US is, to some extent, substituting marijuana for tobacco cigarettes. As the authors point out, the few detailed research studies on this subject have not found such a link. However, in a big-picture sense, the trendline for cigarette use is decidedly down over time, while the trendline for marijuana use is up. A recent Gallup poll reports that “[m]ore people in the U.S. are now smoking marijuana than cigarettes.” Evidence from the National Survey on Drug Use and Health doesn’t quite back up that claim at face value:

Among people aged 12 or older in 2020, 20.7 percent (or 57.3 million people) used tobacco products or used an e-cigarette or other vaping device to vape nicotine in the past month. … In 2020, marijuana was the most commonly used illicit drug, with 17.9 percent of people aged 12 or older (or 49.6 million people) using it in the past year. The percentage was highest among young adults aged 18 to 25 (34.5 percent or 11.6 million people), followed by adults aged 26 or older (16.3 percent or 35.5 million people), then by adolescents aged 12 to 17 (10.1 percent or 2.5 million people).

However, about one-fifth of the tobacco product users didn’t smoke cigarettes, and with that adjustment, cigarette smoking would be a little below total marijuana use.
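
That back-of-the-envelope adjustment, using the survey figures quoted above and treating “about one-fifth” as given:

```python
# Adjusting past-month tobacco/nicotine use for users who don't smoke cigarettes.
tobacco_or_vaping = 20.7    # % of people 12+, past month (quoted NSDUH figure)
non_cigarette_share = 0.20  # "about one-fifth" of those users don't smoke cigarettes

cigarette_smokers = tobacco_or_vaping * (1 - non_cigarette_share)
marijuana_past_year = 17.9  # % of people 12+, past year (quoted NSDUH figure)

print(f"Adjusted cigarette smoking: {cigarette_smokers:.1f}% vs marijuana: {marijuana_past_year}%")
# -> 16.6% vs 17.9%: cigarette smoking a little below total marijuana use
# (note the time windows differ: past month for tobacco, past year for marijuana)
```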

For those who would like more on smoking-related issues, here are some earlier posts: