Dementia: A Public Policy Challenge

As the US population ages, the number of people with dementia keeps rising. It’s a problem from hell and a huge social challenge. A committee convened by the National Academies of Sciences, Engineering, and Medicine offers an overview in Reducing the Impact of Dementia in America: A Decadal Survey of the Behavioral and Social Sciences (September 2021, available with free registration). The committee writes (references and citations omitted): “More than 6 million people in the United States are currently living with Alzheimer’s disease, a number that will rise to nearly 14 million by 2060 if current demographic trends continue. It is estimated that approximately one-third of older Americans have Alzheimer’s or another dementia at death …”

Here, I’ll focus on the economic side of the issues and set aside the direct costs and reduced quality of life for the person with dementia, although the personal side affects my own extended family, along with so many others. The core of the economic problem is that people with dementia need care. The NAS report notes (again, citations omitted):

The primary economic costs of dementia to persons living with dementia and their families are (1) medical and long-term care costs, and (2) the value of unpaid caregiving provided by family (most commonly) and friends. Most estimates of these costs in the literature draw on such nationally representative data sources as the Health and Retirement Study, the Medicare Current Beneficiary Survey, and Medicare claims data. An estimate of annual per-person costs for 2019, which includes health care and the value of unpaid care provided to persons with Alzheimer’s disease, is approximately $81,000 ($31,000 is the value of the unpaid care). This estimate is about four times higher than the costs of the same care provided to similarly aged persons without the disease. …

Residential care is very expensive. Estimates of the typical costs of long-term care range from $52,624 per year for a home health aide to $90,000 for a semiprivate room in a nursing home and up to $102,000 for a private room. Medicaid, which covers long-term care for low-income individuals and those who become poor as a result of paying for health care and long-term care, is the largest public payer for long-term care, covering 62 percent of nursing home residents, and one-quarter of adults with dementia who live in the community are covered by Medicaid over the course of a year.

When aggregated to the U.S. population, the costs are estimated to have exceeded $500 billion in 2019 and are projected to increase to about $1.5 trillion by 2050. Unaccounted for in these estimates are other economic costs, such as the impact on caregivers’ wages and future employability; when included, these costs increase estimates of unpaid caregiver costs by as much as 20 percent. Moreover, these costs may be underestimated because the physical and mental strain associated with unpaid caregiving likely translates to other costs, such as for caregivers’ own health care. … Other costs unaccounted for include financial harm to persons living with dementia and their families. Cognitive impairment may lead to financial decision-making errors, including payment delinquency and susceptibility to financial exploitation, starting years before diagnosis. Financial harm to individuals living with dementia may also have long-term implications for the surviving spouse.
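As a rough back-of-the-envelope check, the per-person and prevalence figures quoted above roughly reproduce the aggregate estimate. The sketch below uses only those quoted numbers and holds per-person cost fixed, which is a strong simplifying assumption; the report's $1.5 trillion projection presumably also builds in rising per-person costs.

```python
# Back-of-the-envelope check on the aggregate dementia cost estimates,
# using only the per-person and prevalence figures quoted in the report.
# Illustrative arithmetic, not a reconstruction of the report's model.

per_person_annual_cost = 81_000   # 2019 estimate: health care + unpaid care
unpaid_care_value = 31_000        # portion of the above that is unpaid care

prevalence_2019 = 6_000_000       # "more than 6 million" currently
prevalence_2060 = 14_000_000      # "nearly 14 million" projected

total_2019 = per_person_annual_cost * prevalence_2019
print(f"Implied 2019 aggregate: ${total_2019 / 1e9:.0f} billion")  # ~ $486 billion

# Holding per-person cost fixed (a strong simplifying assumption),
# scaling by projected prevalence gives a mid-century figure:
total_midcentury = per_person_annual_cost * prevalence_2060
print(f"Implied aggregate at 14M cases: ${total_midcentury / 1e9:.0f} billion")
```

The first figure lands in the neighborhood of the report's "$500 billion in 2019"; the gap between the second figure and the $1.5 trillion projection suggests the projection assumes per-person costs will rise as well.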

What might be done? One can try to think about ways of providing the needed services less expensively, but without compromising quality. One can think about steps that might reduce the incidence of dementia. One can hope for a cure. All of these seem worth trying; none at present seems especially promising.

The idea of less expensive and higher-quality care is of course enticing. Perhaps it could be delivered by facilities designed for dementia patients that free up the time of human staff to provide care by handing off other tasks, like cleaning and cooking, to lower-cost automation. But I’m not aware of any big success stories along these lines.

There is strong evidence that being in better health overall reduces one’s chance of dementia. As the report notes: “For example, robust evidence suggests that people who take such common-sense measures as eating a healthy diet, exercising regularly, maintaining a healthy weight, and reducing cardiovascular risk have a lower risk of dementia.” Of course, a step-increase in healthy behaviors would have many other benefits as well, but I’m unaware of any big success stories that would dramatically improve health in this way beyond current levels.

Will technology ride to the rescue? Maybe. The FDA has just approved aducanumab, the first new drug for treating Alzheimer’s disease in nearly two decades. With wider use, we’ll see how well it works, and perhaps develop something better. But new technologies come at a cost, too. The NAS report describes the issue this way:

First, more than 130 innovative treatments for Alzheimer’s disease and related dementias are being investigated in clinical trials, and some may turn out to slow or halt disease progression and reduce costs. A simulation study found that a hypothetical treatment innovation that delayed the onset of Alzheimer’s disease by 5 years would reduce the population with the disease by 41 percent in 2050, which would reduce annual costs by $640 billion. However, novel treatments, which would likely have high prices, could exacerbate the overall economic impact of the disease. …

The recent approval by the U.S. Food and Drug Administration (FDA) of the first new drug in decades that is intended to treat Alzheimer’s disease, aducanumab, is likely to have substantial impact on the cost picture. … The manufacturer of aducanumab initially estimated that 1 to 2 million persons would currently be eligible to receive the medication, although that number may change depending on eligibility guidelines. Using the manufacturer’s estimated cost of $56,000 per patient per year, the total cost just for the drug could range from $56 billion to as much as $112 billion. Whatever number of people ultimately receive the drug, such estimates do not include the costs of infusion, monitoring and treating adverse effects, and additional pre-administration testing. The magnitude of ancillary costs is not yet established, but observers have suggested that they could add tens of thousands in costs per eligible patient. To put the cost of the drug alone into perspective, the total 2021 National Institutes of Health budget is $43 billion and the total 2021 Medicare budget is $688 billion.
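The headline range for the drug alone is simple arithmetic on the manufacturer's figures quoted above; a minimal sketch, using only those numbers:

```python
# Aducanumab cost range implied by the manufacturer's figures quoted above.
# Excludes infusion, monitoring, and testing costs, which the report says
# are not yet established.

price_per_patient = 56_000                       # estimated annual cost per patient
eligible_low, eligible_high = 1_000_000, 2_000_000  # manufacturer's eligibility estimate

low = price_per_patient * eligible_low
high = price_per_patient * eligible_high
print(f"Drug cost alone: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B per year")

# For scale, the comparisons the report draws:
nih_budget_2021 = 43e9
medicare_budget_2021 = 688e9
print(f"Low end is {low / nih_budget_2021:.1f}x the 2021 NIH budget")
print(f"High end is {high / medicare_budget_2021:.0%} of the 2021 Medicare budget")
```

Even the low end exceeds the entire NIH budget, which is the report's point about why pricing, not just efficacy, will determine the drug's net impact.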

It’s past time for an Operation Warp Speed aimed at dementia, which would guarantee that the government would purchase a certain quantity of the drug in exchange for meeting certain health and cost-per-patient targets. But barring salvation via technology, the question of how society will treat its dementia patients–especially those who do not have family caregivers or financial resources–is looming over our health care policy debates.

Fall 2021 Journal of Economic Perspectives Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Fall 2021 issue, which in the Taylor household is known as issue #138. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

_____________________________________

Symposium on Criminal Justice

“The Economics of Policing and Public Safety,” by Emily Owens and Bocar Ba

The efficiency of any police action depends on the relative magnitude of its crime-reducing benefits and legitimacy costs. Policing strategies that are socially efficient at the city level may be harmful at the local level, because the distribution of direct costs and benefits of police actions that reduce victimization is not the same as the distribution of indirect benefits of feeling safe. In the United States, the local misallocation of police resources is disproportionately borne by Black and Hispanic individuals. Despite the complexity of this particular problem, the incentives facing both police departments and police officers tend to be structured as if the goals of policing were simple—to reduce crime by as much as possible. Formal data collection on the crime-reducing benefits of policing, and not the legitimacy costs, produces further incentives to provide more engagement than may be efficient in any specific encounter, at both the officer and departmental level. There is currently little evidence as to what screening, training, or monitoring strategies are most effective at encouraging individual officers to balance the crime-reducing benefits and legitimacy costs of their actions.

Full-Text Access | Supplementary Materials

“Next-Generation Policing Research: Three Propositions,” by Monica C. Bell

The Black Lives Matter movement has operated alongside a growing recognition among social scientists that policing research has been limited in its scope and outmoded in its assumptions about the nature of public safety. This essay argues that social science research on policing should reorient its conception of the field of policing, along with how the study of crime rates and police departments fit into this field. New public safety research should broaden its outcomes of interest, its objects of inquiry, and its engagement with structural racism. In this way, next-generation research on policing and public safety can respond to the deficiencies of the past and remain relevant as debates over transforming American policing continue.

Full-Text Access | Supplementary Materials

“The US Pretrial System: Balancing Individual Rights and Public Interests,” by Will Dobbie and Crystal S. Yang

In this article, we review a growing empirical literature on the effectiveness and fairness of the US pretrial system and discuss its policy implications. Despite the importance of this stage of the criminal legal process, researchers have only recently begun to explore how the pretrial system balances individual rights and public interests. We describe the empirical challenges that have prevented progress in this area and how recent work has made use of new data sources and quasi-experimental approaches to credibly estimate both the individual harms (such as loss of employment or government assistance) and public benefits (such as preventing non-appearance at court and new crimes) of cash bail and pretrial detention. These new data and approaches show that the current pretrial system imposes substantial short- and long-term economic harms on detained defendants in terms of lost earnings and government assistance, while providing little in the way of decreased criminal activity for the public interest. Non-appearances at court do significantly decrease for detained defendants, but the magnitudes cannot justify the economic harms to individuals observed in the data. A second set of studies shows that the costs of cash bail and pretrial detention are disproportionately borne by Black and Hispanic individuals, giving rise to large and unfair racial differences in cash bail and detention that cannot be explained by underlying differences in pretrial misconduct risk. We then turn to policy implications and describe areas of future work that would enable a deeper understanding of what drives these undesirable outcomes.

Full-Text Access | Supplementary Materials

“Fragile Algorithms and Fallible Decision-Makers: Lessons from the Justice System,” by Jens Ludwig and Sendhil Mullainathan

Algorithms (in some form) are already widely used in the criminal justice system. We draw lessons from this experience for what is to come for the rest of society as machine learning diffuses. We find economists and other social scientists have a key role to play in shaping the impact of algorithms, in part through improving the tools used to build them.

Full-Text Access | Supplementary Materials

“Inside the Box: Safety, Health, and Isolation in Prison,” by Bruce Western

A large social science research literature examines the effects of prisons on crime and socioeconomic inequality, but the penal institution itself is often a black box overlooked in the analysis of its effects. This paper examines prisons and their role in rehabilitative programs and as venues for violence, health and healthcare, and extreme isolation through solitary confinement. Research shows that incarcerated people are participating less today than in the 1980s in prison programs, and they face high risks of violence, disease, and isolation. Prison conditions suggest the mechanisms that impair adjustment to community life after release, provide a more complete account of the costs of incarceration, and indicate the performance of prisons as moral institutions that bear a responsibility for humane and decent treatment.

Full-Text Access | Supplementary Materials

Symposium on Geographic Disparities in Health

“Rising Geographic Disparities in US Mortality,” by Benjamin K. Couillard, Christopher L. Foote, Kavish Gandhi, Ellen Meara and Jonathan Skinner

The twenty-first century has been a period of rising inequality in both income and health. In this paper, we find that geographic inequality in mortality for midlife Americans increased by about 70 percent between 1992 and 2016. This was not simply because states like New York or California benefited from having a high fraction of college-educated residents who enjoyed the largest health gains during the last several decades. Nor was higher dispersion in mortality caused entirely by the increasing importance of “deaths of despair,” or by rising spatial income inequality during the same period. Instead, over time, state-level mortality has become increasingly correlated with state-level income; in 1992, income explained only 3 percent of mortality inequality, but by 2016, state-level income explained 58 percent. These mortality patterns are consistent with the view that high-income states in 1992 were better able to enact public health strategies and adopt behaviors that, over the next quarter-century, resulted in pronounced relative declines in mortality. The substantial longevity gains in high-income states led to greater cross-state inequality in mortality.

Full-Text Access | Supplementary Materials

“The Causal Effects of Place on Health and Longevity,” by Tatyana Deryugina and David Molitor

Life expectancy varies substantially across local regions within a country, raising conjectures that place of residence affects health. However, population sorting and other confounders make it difficult to disentangle the effects of place on health from other geographic differences in life expectancy. Recent studies have overcome such challenges to demonstrate that place of residence substantially influences health and mortality. Whether policies that encourage people to move to places that are better for their health or that improve areas that are detrimental to health are desirable depends on the mechanisms behind place effects, yet these mechanisms remain poorly understood.

Full-Text Access | Supplementary Materials

Articles

“When Innovation Goes Wrong: Technological Regress and the Opioid Epidemic,” by David M. Cutler and Edward L. Glaeser

The fourfold increase in opioid deaths between 2000 and 2017 rivals even the COVID-19 pandemic as a health crisis for America. Why did it happen? Measures of demand for pain relief—physical pain and despair—are high and in many cases rising, but their increase was nowhere near as large as the increase in deaths. The primary shift is in supply, primarily of new forms of allegedly safer narcotics. These new pain relievers flowed in greater volume to areas with more physical pain and mental health impairment, but since their apparent safety was an illusion, opioid deaths followed. By the end of the 2000s, restrictions on legal opioids led to further supply-side innovations, which created the burgeoning illegal market that accounts for the bulk of opioid deaths today. Because opioid use is easier to start than end, America’s opioid epidemic is likely to persist for some time.

Full-Text Access | Supplementary Materials

“Neighborhoods Matter: Assessing the Evidence for Place Effects,” by Eric Chyn and Lawrence F. Katz

How does one’s place of residence affect individual behavior and long-run outcomes? Understanding neighborhood and place effects has been a leading question for social scientists during the past half-century. Recent empirical studies using experimental and quasi-experimental research designs have generated new insights on the importance of residential neighborhoods in childhood and adulthood. This paper summarizes the recent neighborhood effects literature and interprets the findings. Childhood neighborhoods affect long-run economic and educational outcomes in a manner consistent with exposure models of neighborhood effects. For adults, neighborhood environments matter for their health and well-being but have more ambiguous impacts on labor market outcomes. We discuss the evidence on the mechanisms behind the observed patterns and conclude by highlighting directions for future research.

Full-Text Access | Supplementary Materials

“College Majors, Occupations, and the Gender Wage Gap,” by Carolyn M. Sloane, Erik G. Hurst and Dan A. Black

The paper assesses gender differences in pre-labor market specialization among the college-educated and highlights how those differences have evolved over time. Women choose majors with lower potential earnings (based on male wages associated with those majors) and subsequently sort into occupations with lower potential earnings given their major choice. These differences have narrowed over time, but recent cohorts of women still choose majors and occupations with lower potential earnings. Differences in undergraduate major choice explain a substantive portion of gender wage gaps for the college-educated above and beyond simply controlling for occupation. Collectively, our results highlight the importance of understanding gender differences in the mapping between college major and occupational sorting when studying the evolution of gender differences in labor market outcomes over time.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

Taxing Global Companies: The Imperfect Choices

The argument over taxing corporate profits may sound familiar, because claims that corporations aren’t paying their fair share of taxes have been going on for decades. But the circumstances of large corporations have changed considerably in at least two ways: 1) they have become much more likely to cross national boundaries in their inputs, production, customers, and owners; and 2) they depend more on intellectual property, which means that profits are coming from something that isn’t physical in nature.

Alan Auerbach lays out the issues that arise, and runs through the solutions that have been tried, in “The Taxation of Business Income in the Global Economy,” delivered as the 2021 Martin S. Feldstein lecture at the National Bureau of Economic Research (NBER Reporter, September 2021). Here’s Auerbach on how the nature of the largest US corporations has changed (footnotes omitted):

Fifty years ago, the top five companies by market capitalization were IBM, General Motors, AT&T, Standard Oil of New Jersey (Esso, the predecessor of today’s ExxonMobil), and Eastman Kodak. … These were companies that “made things” in identifiable locations, to a large extent in the United States. If we shift to today, we see another five familiar names, all giant companies: Apple, Microsoft, Amazon, Alphabet (Google’s parent), and Facebook. These companies are worldwide multinationals, relying very heavily on the use of intellectual property in the goods and services they provide …

In the last half century, the share of intellectual property measured in US nonfinancial corporate assets more than doubled, according to the Fed’s Financial Accounts of the United States.  That’s probably a conservative estimate, because the measurement of intellectual property is a fairly narrow one here. The share of before-tax US corporate profits coming from overseas operations nearly quintupled, according to data from the Bureau of Economic Analysis. US companies have become much more multinational in character, not just selling things abroad, but making them abroad as well. And the share of cross-border equity ownership has steadily increased, to the point that foreign individuals and companies account for a significant fraction of US companies’ share ownership.

When production, customers, and owners are distributed around the world, when the “product” can be a service supplied digitally, and when the ultimate source of profit is closely tied to intellectual property, taxing big global firms becomes complex. The idea of corporate profits is itself far from crystal-clear in a world of high-powered accountants and finance, operating against a backdrop of differing national tax codes. Every country would prefer to draw up the rules so that it attracts companies that it can tax. Companies that are operating across national borders in multiple ways have the ability to shift between countries, and to claim that profits are actually being earned in one place rather than another. What might be done? Auerbach discusses the possibilities, which I summarize here.

  1. Make corporate taxation rules that seek to block firms from avoiding taxes. This has been the main strategy for some years, and the whole point of the earlier discussion about the changing nature of large firms is that it can be hard to make this work in the modern economy.
  2. “Patent boxes” are a way of encouraging firms with intellectual property to claim residence in your country, so that your country can tax the company. But to attract the company, the “patent box” approach often promises lower corporate taxes. As Auerbach writes: “One problem with patent boxes is that, in a sense, they deal with tax competition by simply giving up.”
  3. Tax global companies based on their users, not their profits. For example, European countries where a company like Google or Facebook has essentially zero employees might seek to tax these companies because people in the country use services from these firms. The idea is that some of the profits of the company trace back to users from a certain country, so the country should be able to tax the firm’s profits. But of course, drawing a straight line from users to profits for these companies is an enormous oversimplification. Their profits are based on many factors, including advertising revenues, operating costs, and intellectual property. The US is typically not pleased if foreign countries try to tax US-based firms.
  4. Use a “destination-based tax,” which means taxing firms in proportion to where their sales are. This approach requires some detailed analysis, and Auerbach runs through variations of a destination-based tax like the Residual Profit Allocation by Income (RPAI) and a Destination-Based Cash Flow Tax (DBCFT). Given international cooperation, this approach seems broadly workable. But notice that it is a fundamental shift from the idea of taxing corporate profits to an idea of taxing corporate sales: that is, two firms with equal sales would pay equal taxes, even if one earned positive profits and one lost money. When people argue over whether corporations pay their fair share, I’m not sure if this approach accords with their intuition.
  5. Scramble all of these approaches together, don’t worry too much about the contradictions, and do a bit of each. As Auerbach points out, this was essentially the approach of the 2017 Tax Cuts and Jobs Act.
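The contrast in point 4 between taxing profits and taxing sales can be made concrete with a stylized sketch. The firms, tax rates, and numbers below are hypothetical illustrations, not features of any actual RPAI or DBCFT proposal:

```python
# Stylized comparison: profit-based vs. destination-based (sales-based) taxation.
# Two hypothetical firms with identical domestic sales but different profitability.

def profit_tax(profit, rate=0.21):
    """Traditional corporate income tax: levied on profits, if positive."""
    return max(profit, 0) * rate

def destination_tax(domestic_sales, rate=0.03):
    """Simplified destination-based levy: proportional to where sales occur,
    regardless of profitability. The rate is arbitrary; actual DBCFT/RPAI
    designs are considerably more elaborate."""
    return domestic_sales * rate

firm_a = {"domestic_sales": 100e6, "profit": 20e6}   # profitable
firm_b = {"domestic_sales": 100e6, "profit": -5e6}   # loss-making

for name, firm in [("A (profitable)", firm_a), ("B (loss-making)", firm_b)]:
    print(name,
          f"profit tax: ${profit_tax(firm['profit']) / 1e6:.1f}M,",
          f"destination tax: ${destination_tax(firm['domestic_sales']) / 1e6:.1f}M")

# Under the profit tax the two firms pay very different amounts;
# under the destination tax, equal sales mean equal tax.
```

This is the intuition gap Auerbach's discussion points to: a destination-based system taxes where customers are, so a loss-making firm with large sales pays the same as a profitable one, which may not match what people mean by corporations paying their "fair share."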

The current international negotiations over global corporate taxation basically have two “pillars,” as they are often called. One “pillar” would take a certain share of the profits of big digital services companies and let other countries split up that tax revenue. Given that these big companies are mostly American firms, the rest of the world likes this idea, but it’s not clear that the US Senate will approve. The other “pillar” would be a minimum 15% tax rate on profits of large companies. The problem with this approach is that it essentially ignores the underlying issues. As Auerbach writes:

But beyond the immediate hurdles facing adoption, there is also a more fundamental, longer-term challenge arising from the attempt to preserve a tax system based on concepts that don’t really work anymore, that are ill-defined and endogenous: corporate residence and the location of production and profits (something that tax authorities have taken to referring to as the location of value creation).  Because it relies on these ill-defined concepts, the two-pillar system is not going to be sustainable unless countries adopt and adhere to similar rules that lessen incentives for companies to shift production, profits, and residence.

Auerbach is a supporter of a destination-based tax approach, and thinks that economic and political forces will tend to push in that direction over time. Maybe he’s right. But an alternative possibility is that the nature of corporations keeps evolving, the importance of intangibles like intellectual property keeps growing, and the long-term argument over what it means for corporations to pay their fair share keeps sounding much the same, even though the underlying conditions keep changing.

Biden Appointments: Is Lack of Personnel A Policy?

One of the complaints fairly leveled against President Donald Trump was that, in a pure numerical sense and setting policy disagreements aside, his administration did a poor job of appointments. There are about 4,000 jobs in the federal government that require presidential appointment, with about 1,200 of those jobs requiring Senate confirmation. The Trump administration was slow to fill these jobs. For example, the Economist magazine noticed the slow pace of Trump administration appointments back in July 2017 with an article titled, “Donald Trump’s missing government: Presidential lethargy, not Democratic obstinacy, is to blame.” By the end of the Trump administration, about one-third of the roughly 750 positions requiring Senate confirmation had never been filled.

How is the Biden administration doing? The nonprofit Partnership for Public Service, together with the Washington Post, maintains an ongoing tracker of appointments to about 800 of the more prominent positions that need Senate confirmation. Here’s their figure showing confirmed appointments. So far, the Biden administration is quite similar to the Trump administration, both lagging well behind the two preceding administrations.

I’ll confess that this total number of required appointees seems crazy to me. My suspicion is that bill after bill specified a need for certain positions to be presidentially appointed, without anyone really keeping track of the general total. It would probably be sensible to cut back the number substantially. The only way this total number of appointments can possibly work is to have a president with an exceptionally deep bench of advisers, who in turn can draw on deep networks of their own. Thus, one might be unsurprised that political novice President Trump lacked the network to find a large number of nominees, while expecting that nominees would be easier to find for a Biden administration. But it hasn’t worked that way.

In the early 1980s during the Reagan administration, it was common to hear the slogan “personnel is policy.” Conversely, a lack of personnel will inhibit policy. When problems come along–from pandemics to trade, from supply chain shutdowns to foreign policy–and there is no political appointee in place, then the regular government workers in that department are left to muddle through as best they can, without a lot of direction and with strong incentives not to do anything innovative or creative that might either attract blame or end up contradicting whatever official policy line ultimately emerges. A lack of personnel is a kind of policy, too.

The Cellar-Dwelling Performance of US Ports

America’s supply chain woes are vividly illustrated by the photographs of container ships lined up outside the ports of Los Angeles and Long Beach, waiting to unload. But this outcome shouldn’t have been a major surprise. The World Bank and IHS Markit published The Container Port Performance Index 2020 back in May 2021 (available with free registration here). It is the first effort to offer a systematic ranking of the performance of 351 ports around the world. In general, US ports do very poorly. As the report notes:

The CPPI 2020 was constructed based on two different methodological approaches, or what have been termed the administrative approach: a pragmatic methodology reflecting expert knowledge and judgment, and the statistical approach, using factor analysis (FA). … The top ranked container ports in the CPPI 2020 are Yokohama port (Japan), in first place, followed by King Abdullah port (Saudi Arabia) in second place. These two ports occupy the same two positions irrespective of the methodology. The top 50 ranked ports are dominated by ports in East Asia, with ports in the Middle East and North Africa region, such as King Abdullah port, Salalah in Oman (ranked 6th and 9th respectively), Khalifa port in Abu Dhabi (ranked 26th and 22nd respectively), and Tanger Med (ranked at 27th and 15th respectively) as the notable exceptions. Algeciras is the highest ranked port in Europe (ranked 10th and 32nd respectively), followed by Aarhus (ranked 44th and 43rd respectively). Colombo is the top-ranked port in South Asia (ranked 17th and 33rd respectively). Lazaro Cardenas is the highest ranked port in Latin America (ranked 25th and 23rd respectively), with Halifax the highest ranked port in North America (ranked 39th and 25th respectively).

The reader will notice that no US ports are named in the top 50, and the top-ranked North American port is located in Canada. Here, I’ll just give the factor analysis statistical ranking of some major US ports, which essentially comes down to a measure of how long it takes ships to unload–adjusting for the size and type of ship. However, the scores based on expert knowledge and judgment are similar. Out of the 351 ports ranked around the world, the main west coast US ports are Los Angeles (#328), Oakland (#332), Long Beach (#333), and Tacoma (#335). The main US east coast ports are New York and New Jersey (#89), Savannah (#279), Virginia (#85), and Charleston (#95). The main US Gulf of Mexico port is Houston (#266).

Inefficient ports matter. As the report notes:

Maritime transport is the backbone of globalized trade and the manufacturing supply chain, with more than four-fifths of global merchandise trade (by volume) carried by sea. The maritime sector offers the most economical, energy efficient, and reliable mode of transportation over long distances. A significant and growing portion of that volume, accounting for approximately 35 percent of total volumes and over 60 percent of commercial value, is carried by containers. …

Unfortunately, ports and terminals, particularly for containers, can often be sources of shipment delays, supply chain disruption, additional costs, and reduced competitiveness. Poorly performing ports are characterized by limitations in spatial and operating efficiency, limitations in maritime and landside access, inadequate oversight, and poor coordination between the public agencies involved, resulting in a lack of predictability and reliability. Poor performance can also have an impact far beyond the hinterland of a port: Container shipping services are operated on fixed schedules with vessel turnaround at each of the ports of call on the route planned within the allocated time for port stay. Poor performance at one port on the route could disrupt the entire schedule. The result far too often is that instead of facilitating trade, the port increases the cost of imports and exports, reduces the competitiveness of its host country …

While much of the discussion of US ports focuses on international trade, it’s worth noting that the issues affect within-US trade as well. There are substantial US flows of goods that could be shipped up and down the east coast, the west coast, or in and out of the Gulf of Mexico. But much of that ocean-based shipping doesn’t happen because of costly and inefficient ports. The result is that those goods end up being shipped overland by truck and rail.

The World Bank/IHS Markit report just offers a ranking: it doesn’t make any effort to sort out underlying explanations for the poor performance of US ports. I haven’t made a deep study of this subject, but there are plausibly three main contributing factors.

First, the benefits of more efficient ports are spread across the logistics system, with lower costs for all the carriers hooked into ports and ultimately lower costs for producers and consumers. But those potential benefits are somewhat invisible. Those who run a port will capture only a small share of those benefits if they go to the time and trouble of upgrading its capacity and running it as efficiently as possible, so they have only mild incentives to make such an effort.

Second, the Jones Act is a century-old law requiring that “water transportation of cargo between U.S. ports is limited to ships that are U.S.-owned, U.S.-crewed, U.S.-registered, and U.S.-built.” The goal of the law was to protect US shipbuilders from foreign competition. The result has been that it costs far more to make ships in the US than anywhere else, and the ships that are U.S.-owned, -crewed, and -registered have much higher shipping costs. In other words, it’s not just in its ports where the efficiency of US shipping has fallen far behind.

Finally, the International Longshore and Warehouse Union is the famously militant union representing port workers on the west coast. The union has done a fabulous job of negotiating high pay and benefits for its workers, and in that sense, I say more power to it. But the union has also been able to pass these higher costs along the supply chain, while making it harder to upgrade the efficiency of the west coast US ports in particular.

Most of the largest US ports have dramatic room for improvement. The pandemic-related supply crunch has brought the issue to the surface, but it was a largely ignored issue that existed before the pandemic, and unless some dramatic and permanent changes are made, it will persist as the pandemic wanes, too.

How Spending on Food Changed in the Pandemic

Spending on food is divided into two main categories in the government statistics: “food at home” and “food away from home.” Unsurprisingly, the pandemic caused “food at home” to rise and “food away from home” to fall. But at least to me, the shift was less dramatic than I might have expected, and “food away from home” remains quite high. Eliana Zeballos and Wilson Sinclair of the US Department of Agriculture discuss the patterns in “Food Spending by U.S. Consumers Fell Almost 8 Percent in 2020” (Amber Waves, October 4, 2021).

Here’s the split between food at home and food away from home over time. The sharp pandemic-related movements in 2020 are obvious. But I remember being surprised when the food away from home share began to exceed the food at home share in the years before the pandemic.

Here’s the spending in terms of dollars. The 8% drop in total food spending in 2020 shows that the rise in the dollar value of food at home was smaller than the drop in food away from home.
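The arithmetic behind that claim can be sketched in a few lines. The dollar figures below are hypothetical placeholders, not the USDA’s actual numbers; the point is only the accounting identity, which holds for any values: when total spending falls while one category rises, the other category’s drop must exceed that rise by exactly the total decline.

```python
# Hypothetical illustration of the 2020 food-spending arithmetic.
# The dollar figures are made up; see the USDA Amber Waves article for real data.
at_home_2019, away_2019 = 800.0, 1000.0   # billions, hypothetical
total_2019 = at_home_2019 + away_2019

total_2020 = total_2019 * (1 - 0.08)      # total food spending fell 8%
at_home_2020 = at_home_2019 * 1.05        # at-home spending rose (hypothetical 5%)
away_2020 = total_2020 - at_home_2020     # away-from-home is what remains

rise_at_home = at_home_2020 - at_home_2019
drop_away = away_2019 - away_2020

# The away-from-home drop exceeds the at-home rise by exactly the total decline.
assert abs((drop_away - rise_at_home) - (total_2019 - total_2020)) < 1e-9
print(f"at-home rose by {rise_at_home:.0f}, away-from-home fell by {drop_away:.0f}")
```

Whatever the specific numbers, an 8% fall in the total guarantees that the away-from-home decline was larger in dollar terms than the at-home gain.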

Of course, the food away from home category is actually a bundle of goods and services: that is, it combines food with shopping, preparation, service, and clean-up. A shift to food at home is also a shift to providing many of those complementary services yourself.

For my own family, we became more likely during the worst of the pandemic to prepare meals at home that used more costly ingredients, like cooking the steak or making the cocktails at home, given that we weren’t going out to eat. We ate our share of spaghetti during the pandemic, but we have also added to our repertoire of high-end meals that required unique ingredients, shopping, and prep time. Also, the pandemic strengthened our incentives to identify the local restaurants with especially good take-out options. I suspect that some of those changes will be persistent.

Distributional Effects of Tax Expenditures

The idea of a “tax expenditure” dates back to Stanley Surrey, who was Assistant Secretary of the US Treasury for Tax Policy back in the 1960s. He pointed out that certain government programs could be implemented either through direct spending or through the tax code: for example, the government could either cut checks to subsidize research and development by firms, or it could offer them a tax credit for doing so. The first list of tax expenditures was produced in 1967; it’s been an official part of the federal budget since 1974.

Just to be clear, calling something a “tax expenditure” is not meant to carry a positive or a negative connotation. There can be good and sensible reasons why some policies operate through the tax code and others do not. But there had been concern that enacting a policy through the tax code could make it appear “free,” because after all no direct spending was involved. Instead, it seemed useful to at least be aware of the dollar magnitude of tax expenditures, as a starting point for a reasonable discussion of whether they were sensible policy.

The Congressional Budget Office has just published The Distribution of Major Tax Expenditures in 2019 (October 2021). This report evaluates tax expenditures along just one dimension: who receives the benefits by income group? Here’s the list of the major tax expenditures by dollar amount:

To give a sense of perspective, here’s a graph showing the total of these major tax expenditures relative to some other main categories of spending and taxation. The total for tax expenditures is higher than for federal discretionary spending, and almost as much as revenues from the federal income tax.

As you look through the list of major tax expenditures, you may notice that the distributional effects of these provisions are likely to be greater either at the top or at the bottom of the income distribution. If you look at the top couple of provisions, which involve employer-provided health and retirement benefits where the value is excluded from taxable income, these provisions will always tend to be more valuable for those with higher incomes, who would have been paying higher tax rates, than for those with lower incomes, who would have been paying lower tax rates. On the other side, the Child Tax Credit, the Earned Income Tax Credit, and the Premium Tax Credit all are focused on those with relatively low income levels and phase out as incomes rise.
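The mechanics of why an exclusion favors higher earners reduce to one line of arithmetic: the tax saved equals the excluded amount times the taxpayer’s marginal rate. A minimal sketch, using a hypothetical $10,000 employer-paid health premium and a few illustrative marginal rates (not a full model of the actual bracket schedule):

```python
# Why an income exclusion is worth more to higher earners:
# tax saved = excluded amount x marginal tax rate.
def tax_saved(excluded_amount, marginal_rate):
    return excluded_amount * marginal_rate

premium = 10_000  # hypothetical employer-paid health premium, excluded from income
for rate in (0.12, 0.24, 0.37):  # illustrative marginal rates
    print(f"marginal rate {rate:.0%}: exclusion worth ${tax_saved(premium, rate):,.0f}")
# The identical exclusion saves $3,700 for a 37%-bracket worker
# but only $1,200 for a 12%-bracket worker.
```

The refundable credits work in the opposite direction: their dollar value does not scale with the marginal rate, and they phase out as income rises.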

This figure shows the distribution of tax expenditures by income quintile. The darkest bars refer to the top income quintile; the lightest bars to the lowest income quintile.

The result of this pattern is that if you look at these major tax expenditures in dollar amount, most of the benefits flow to the top quintile of income–and indeed to the upper part of the top quintile.

However, if you look at these tax expenditures as a share of income, it turns out that they are more important to those with lower income levels.

Each of these tax expenditure provisions is worthy of individual examination. In my own mind, for example, employer-provided health insurance should be taxed as income, because the benefit is directly received in that year. However, I don’t have a problem in general with excluding payments to retirement accounts from taxation, as long as that income is taxed when it is actually received in the future. But the main theme here is just to bring these tax expenditures, and their distributional consequences, out into the light.

The Prospects for Services-Led Development

The path of economic development for countries around the world has followed a similar pattern of sectoral shifts: from agriculture to manufacturing to services. This pattern was followed (at different times) both by today’s high-income countries, including the United States, the countries of western Europe, Japan, and South Korea, and also by rising economies like China.

But the nature of manufacturing has been changing. Across many industries, the importance of low-cost labor has declined, while the cost of machine-driven manufacturing and robotics has been falling even as its capabilities keep rising. Dani Rodrik has called this the problem of “premature de-industrialization”: the potential for development through a movement from low-wage manufacturing to higher-wage manufacturing has diminished, even though a number of countries still very much need a path to economic development. Moreover, services have been growing faster than manufacturing for the world economy as a whole.

Can service industries offer an alternative pathway to economic development? Gaurav Nayyar, Mary Hallward-Driemeier, and Elwyn Davies tackle this question in their World Bank monograph At Your Service? The Promise of Services-Led Development (September 2021). Here’s how they set up the discussion:

Evidence suggests that manufacturing-led development in the past delivered the twin gains of productivity growth and large-scale job creation for the relatively unskilled. Underlying these were economies of scale, access to international markets, innovation, and supply chain linkages with other sectors, combined with the ability to leverage relatively unskilled labor with capital. Although services are labor intensive, they often require simultaneous production and consumption that precludes accessing larger markets. Their more limited ability to use capital to improve labor productivity also limits both scale economies and incentives to innovate. Conventional wisdom is therefore pessimistic about the prospects for services-led development.

This book seeks to test that conventional wisdom. To that end, there are two guiding questions. The first concerns whether the services sector has the potential to expand opportunities for poor people within LMICs [low- and middle-income economies] and whether these jobs can raise their productivity over time. … The second question is the extent to which the services sector can help lower-income countries catch up with the productivity and wealth of higher-income countries.

The authors make a case that, in fact, a process of growth in jobs and productivity has already been underway in many parts of the world for a few decades now. The exceptions are in the East Asia and Pacific region, where manufacturing has led the way in jobs and productivity growth. But in the rest of the developing world, services jobs and productivity growth have been faster. Here are some overall figures, showing that for the low- and middle-income countries of the world, it has been services, not manufacturing, that have been offsetting the decline in share of agricultural jobs and output.

The authors write:

First, industry has played a special, dominant role in East Asia (and to a smaller extent in Eastern Europe), whereas LMICs in other regions, on average, have not benefited as much from industry as a central driver of their development. Second, it is not the case that industry inherently outperforms services. For many LMICs, therefore, the choice between manufacturing- and services-led development is not of dire importance. The data show that services can deliver productivity growth—in several cases, growth that is higher than that of industry. What matters for the longer-term potential of services-led development is whether the features of industrialization that have enabled scale, innovation, and spillovers along with job creation for unskilled labor—as in East Asia—are increasingly shared by the services sector. … It is not necessarily the production of “goods” or “services” per se that matters but how these are produced.

A key insight here is that the nature of production in services is fundamentally changing, thanks in large part to the widespread arrival of information and communications technologies. These technologies allow a number of services to take advantage of economies of scale: for example, think of the issues involved in scheduling and managing trucking or courier services. Along with logistics, many service industries do have productivity spillovers to other areas: examples the authors mention include telecommunications, finance, education, and health care. Many services have become storable: the work can be done and saved online for later use. Services can be standardized and codified: think of uses like on-line banking or computer programming or long-distance examination of medical images. Services can be traded internationally, so developing economies have ways to tap into the buying power of high-income countries in global markets. The interaction of these activities, especially as new information technologies continue to develop and to spread, drives new innovations and economies of scale.

In short, the old-fashioned idea of services in low- and middle-income countries often involves small-scale and individual tasks, not ongoing salaried jobs. The new idea of services holds much more potential for economies of scale, innovation, and cross-sector spillovers, and thus much greater potential for jobs and growth. One possibility mentioned is international medical tourism: “Skill-intensive social services (health and education) are not amenable to international specialization and will continue to need a substantial domestic presence owing to a significant face-to-face component. However, they also benefit from exporting opportunities, such as through health tourism. Costa Rica, India, Jordan, Malaysia, Mexico, Turkey, and Thailand have emerged as destinations for world-class health care at lower prices.”

Of course, none of this should be taken to mean that economic development is now an easy task. The report goes into some detail about different categories of service industries and their various strengths. It also discusses a policy agenda based on the “four T’s” of trade, technology, training, and targeting. But the report suggests that developing countries need not focus their efforts on low-wage manufacturing as the primary and necessary first step to economic development, and that other pathways are possible.

Was Vaccine Development an Example of Successful “Industrial Policy”?

Was the development of COVID-19 vaccines under Operation Warp Speed a successful example of “industrial policy”? On one side, it was a policy, and it led to a product, so that seems like an example. But in a broader sense, the production of the vaccine was a narrow effort. It did not transform any particular US industry, much less the broader industrial base as a whole. So perhaps Operation Warp Speed was a success, but without falling into the “industrial policy” category?

Scott Lincicome and Huan Zhu make a case for the skeptical answer in “Questioning Industrial Policy: Why Government Manufacturing Plans Are Ineffective and Unnecessary” (Cato Institute Working Paper #63, June 16, 2021). They write (footnotes omitted):

Finally, the COVID-19 vaccines developed under “Operation Warp Speed” have been heralded as a triumph of American industrial policy, but the first vaccine to market (Pfizer/BioNTech) disproves the assertion. BioNTech was a German company that had been working on mRNA vaccines for years and began its collaboration with Pfizer (based on an earlier working relationship) months before the U.S. government began OWS [Operation Warp Speed] in May 2020 or contracted with the companies for a vaccine in July of that same year. (Management actually predicted in April 2020 that distribution of finished doses would occur in late 2020.) The companies famously refused government funds for R&D, testing and production – efforts that instead leveraged Pfizer’s substantial pre-existing U.S. manufacturing capacity, as well as multinational research teams, global capital markets and supply chains, and a logistics and transportation infrastructure that had developed over decades. In fact, the Trump administration’s contract with Pfizer was for finished, FDA-approved vaccine doses only and expressly excluded from government reach essentially all stages of vaccine development (i.e., “activities that Pfizer and BioNTech have been performing and will continue to perform without use of Government funding”). There is even some evidence that OWS’ allocation of vaccine materials to participating companies (some of which still have not produced an approved vaccine) may have impeded non-participant Pfizer’s ability to meet its initial production targets and expand production after the vaccine was approved.

Surely, some state support (e.g., support for mRNA research and a large vaccine purchase commitment) was involved both before and during the pandemic, but it all lacked the necessary commercial, strategic, or nationalist elements of “industrial policy.” In fact, mRNA visionary Katalin Karikó actually left her government-supported position at the University of Pennsylvania “because she was failing in the competition to win research grants” and thus “moved to the BioNTech company, where she not only created the Pfizer vaccine but also spurred Moderna to competitive imitation.” The NIH grant supporting her early work actually came through her colleague, Drew Weissman, and “had no direct connection to mRNA research.” Other efforts, such as Moderna’s mRNA vaccine, had more state support, but the BioNTech/Pfizer vaccine shows that it was not a necessary condition for producing a wildly successful COVID-19 vaccine.

Indeed, Lincicome and Zhu argue that the lobbying interactions of some vaccine contractors with the US government may have hindered the vaccine development process. They write:

Most recently, a New York Times investigation into Maryland vaccine manufacturer Emergent Biosolutions – a “longtime government contractor that has spent much of the last two decades cornering a lucrative market in federal spending on biodefense” – found that the company invested heavily in lobbying while ignoring various safety and manufacturing best practices; had effectively “captured” the government agency, the Biomedical Advanced Research and Development Authority, authorized to disburse and monitor pandemic-related contracts; yet, despite repeated contracting failures, was rewarded with a $628 million contract to manufacture Covid-19 vaccines. Emergent’s actions ultimately imperiled millions of doses of Johnson & Johnson vaccines and weakened the Strategic National Stockpile by monopolizing its “half-billion-dollar annual budget throughout most of the last decade, leaving the federal government with less money to buy supplies needed in a pandemic.”

One might of course object that Lincicome and Zhu have an overly narrow definition of “industrial policy,” but perhaps the broader lesson is that “industrial policy” means different things to different people. If “industrial policy” were limited to support for research and development, workforce training, and perhaps occasional government commitments to purchase successful innovations, my sense is that few free-market economists would object. In emergencies like a pandemic, many predominantly free-market economists would be willing to support government steps to prioritize key inputs in supply chains, as well. But of course, this kind of “industrial policy” is a long way from widespread government industrial planning, tariffs against imported goods, and subsidies or even government ownership of favored industries.

A further difficulty is that political discussions of industrial policy can become quite vague. The proponents of industrial policy often focus on issues of concern–say, the loss of well-paid manufacturing jobs–but they are fuzzier on holding themselves accountable for policies that will address the problem. Lincicome and Zhu quote Mancur Olson (from a 1986 book) on this issue: “Those publications that I happen to have seen advocating industrial policy are also relatively vague. Some are so vague that they invite the reaction that industrial policy is neither a good idea nor a bad idea, but no idea at all; that it is the grin without the cat.”

Some Economics of Sawdust

Cambridge University Press has published a 35th anniversary edition of The Economist’s View of the World and the Quest for Well-Being by Steven E. Rhoads. The book offers a sympathetic verbal (that is, no graphs or math) explanation of basic concepts in microeconomics: for example, the opening chapters are “Opportunity Costs,” “Marginalism,” and “Economic Incentives.”

Rhoads is an economically-minded political scientist. This book is not at all an attack on economics: indeed, I think it has sometimes been used as a textbook for a nonmathematical introduction to economics, both at the undergraduate level and in master’s degree programs in areas like public administration. I suspect that the book does a good job of building bridges with those who are skeptical or hostile to what they perceive as the field of economics, because Rhoads is quick to emphasize that economic efficiency and growth are not the only ingredients of human well-being, and that fairness and equality should also play a role. My only real quibble with the book, and it’s a small one, is that Rhoads seems in some places to think this insight will be news to economists, while my own experience is that economists have been emphasizing for decades how equality and fairness may in some contexts have tradeoffs with efficiency and growth, while in other cases they may complement each other.

The discussion throughout is based on solid explanations and a wealth of interesting examples. To provide a flavor, here’s one example from the introduction of an economic story about sawdust. The story works on several levels: as a basic story about supply and demand, a story about the intricacies of economic interconnectedness, and a parable about the perils of economic central planning. Rhoads writes (footnotes omitted):

In 2008 the price of milk was much higher than usual. An economist asked a dairy farmer, how come? The farmer said his inputs were much more expensive. He used sawdust to bed his cows more comfortably. (Within two years it had gone up by a factor of four for some uses.) They produced more milk when they were more often off their feet. The reason for the increase in the price of sawdust was the sharp downturn in the production of new housing. Since construction of new houses was down, there was less sawdust.

So, imagine you are a politician or a planner trying to satisfy citizens complaining about the high price of milk. … But another problem citizens were complaining about was homelessness and the price of affordable housing. Would you realize that using more sawdust to produce milk would increase the price of housing? Probably not. But it would increase housing costs, because sawdust is also the principal component in particleboard, which is used widely in the building industry. It is cheaper than substitutes such as lumber and plywood. You probably wouldn’t know that.

Many of your constituents also love gardening, and they would not be happy if the sawdust they use to make their mulch became more expensive because some of it was being siphoned off to help “higher-priority” users. Sawdust is also used in the production of charcoal briquettes and as part of a mix to make a lightweight material for dashboards. It would take a planner a lot of time to decide on the fair and efficient allocation of sawdust. …

Of course, no politician or planner would have time to worry about sawdust. If there were no entrepreneurs or markets, sawdust would probably be thrown away or used only for mulch; no one would know that the waste product had these other uses. Even if they eventually figured it out, how would they decide which usage was the most important and how much should go into it and how much for the second most important usage?

The lowly sawdust example shows that there is a “dense interconnection” of different kinds of scarce resources. No planner could sort out everything efficiently. This is an important reason why we need markets.
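The core supply-and-demand mechanics of the sawdust story can be sketched with a toy linear model. Every parameter below is hypothetical, chosen only to illustrate the direction of the effect: a housing slump shifts the sawdust supply curve left (less is supplied at every price), and the equilibrium price rises.

```python
# A minimal linear supply-and-demand sketch of the sawdust story.
# Demand: Qd = a - b*P.  Supply: Qs = c + d*P.
# Setting Qd = Qs gives the equilibrium price P* = (a - c) / (b + d).
def equilibrium_price(a, b, c, d):
    return (a - c) / (b + d)

a, b, d = 100.0, 2.0, 3.0                       # hypothetical demand/supply slopes
p_before = equilibrium_price(a, b, c=20.0, d=d)  # normal homebuilding: ample sawdust
p_after = equilibrium_price(a, b, c=5.0, d=d)    # housing slump: supply shifts left

print(f"price before slump: {p_before:.1f}, after: {p_after:.1f}")
assert p_after > p_before  # a leftward supply shift raises the equilibrium price
```

The model is deliberately crude, but it captures the chain Rhoads describes: a shock in one market (housing) propagates through an input market (sawdust) into an apparently unrelated output market (milk).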

The Agricultural Marketing Research Center (a group of universities operating under a grant from the US Department of Agriculture) wrote a few years ago about the many industrial uses of sawdust, as well as the rising importance of sawdust, and wood waste in general, as a source of renewable biomass energy. Here is the AgMRC on uses of sawdust:

Shavings and sawdust may be reground into wood flours, or the wood flour may be recovered as sized “dust” materials that have been screened and separated. Wood flour has major industrial markets in industrial fillers, binders and extenders in industrial products like epoxy resins, fertilizers, adhesives, absorbent materials, felt roofing, inert explosive components, ceramics, floor tiles, cleaning products, wood fillers, caulks and putties, soil extenders and a vast array of plastics. Some wood flours like mesquite may be used in edible flavorings for human or pet consumption.
 
Shavings and sawdust can be marketed for use in molded or laminated composite wood products (e.g., toilet seats, countertops) in automotive materials and in oil and water isolation and solidification products for the environmental control industry. Other uses include fillers, bulk shavings, sawdust, hog fuel (dried bark shavings), meat-smoking chips, barbeque cooking fuels and composite fireplace logs. Landscaping applications include playground “footing,” equestrian arena and other “wood edge footing” (safety margin and walkway material) and some exhibit and tradeshow applications. A few manufacturers are using post-consumer plastic waste mixed with a sawdust extender to make high-value extruded composite decking lumber and similar products for the home improvement market. Currently, a primary use of baled dry shavings is for equine and livestock bedding or small pet bedding applications.

However, the AgMRC emphasizes a future role for sawdust and wood waste products in biomass energy production. I once had a conversation with a professor of forestry who pointed out that the “carbon cycle” in burning fossil fuels and eventually having that carbon return to the form of oil or coal or natural gas was measured in millions of years, while the carbon-cycle in burning wood products and then having that carbon reabsorbed into trees was a matter of years and decades. The AgMRC writes:

At this time and in the near future, wood wastes are and probably will be the most commonly used biomass fuel for heat and power. The most economic sources of wood fuels are usually wood residues from manufacturers (mill residues), discarded wood products or woody yard trimmings diverted from landfills, and non-hazardous wood debris from construction and demolition activities. A significant environmental benefit of using these materials for generating electricity is that their energy value is utilized while landfill disposal is avoided. As long as clean-burning combustion technologies are employed, carbon emissions to the atmosphere can be minimized. 

Recent studies indicate that quantities of available (presently unused) mill and urban wood residues exceed 39 million dry tons per year in the United States. This is enough material to supply more than 7,500 MW, doubling the existing U.S. bio-power capacity in the United States. To illustrate this point, this amount of power could supply the yearly electricity demand of the residential customers in all six New England states.
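A rough back-of-envelope check suggests the report’s 39-million-ton/7,500 MW pairing is internally plausible. Every parameter below is my own assumption, not taken from the AgMRC report: roughly 17 GJ of heat per dry ton of wood, about 25% heat-to-electricity efficiency for a biomass plant, and about a 70% capacity factor.

```python
# Back-of-envelope check of the "39 million dry tons -> ~7,500 MW" claim.
# All parameters are assumptions for illustration, not from the AgMRC report.
dry_tons_per_year = 39e6
heat_gj_per_ton = 17.0      # assumed heat content of dry wood, GJ per dry ton
plant_efficiency = 0.25     # assumed heat-to-electricity conversion efficiency
capacity_factor = 0.70      # assumed fraction of the year at full output
seconds_per_year = 365.25 * 24 * 3600

electric_joules = dry_tons_per_year * heat_gj_per_ton * 1e9 * plant_efficiency
avg_watts = electric_joules / seconds_per_year    # average electric output
capacity_mw = avg_watts / capacity_factor / 1e6   # implied nameplate capacity

print(f"implied capacity: about {capacity_mw:,.0f} MW")
```

Under these assumed values the implied capacity lands in the same neighborhood as the report’s 7,500 MW figure, so the quoted numbers hang together arithmetically.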

Moreover, there is agricultural innovation in the growth of “wood grass” species that will probably affect the sawdust industry as well.

The use of crop residues, livestock manures and short-rotation-intensive-culture (SRIC) plantings of fast-growing “wood grass” tree species as fuel resources can improve the economics of farming while solving some of the most intractable environmental problems in agriculture today. In SRIC systems, “wood grass” species are cultivated and then chipped on-site for use in energy production (by combustion) or wood-product manufacturing (composites). The advent of energy crops for power production is a new agricultural market. However, these crops provide soil conservation and nutrient management benefits for the land and may be compatible with government conservation set-aside incentive programs. Increased woody-biomass utilization will impact other groups including architectural and engineering firms, consultants, and processing and handling equipment vendors.

Centralized government decision-making can be a useful method of production when it is focused on a specific goal: create a specific new vaccine or fighter jet, or provide electricity. But the production and use of sawdust are multifarious. Detailed and granular knowledge of people who are deeply involved at every stage of the process, and who have personal incentives to make it work smoothly, is needed to know what is possible now and what innovations might be made. In these settings of what Rhoads calls “dense interconnection,” the decentralized decision-making of markets coordinated by individual incentives and a price mechanism can be remarkably effective.