The Super-rich and How to Tax Them

How might one define the super-rich, and how might the government tax them? Florian Scheuer tackles these questions in “Taxing the superrich: Challenges of a fair tax system” (UBS Center Public Paper #9, November 2020). Also available at the website is a one-hour video webinar by Scheuer on the subject. Those who want a more detailed technical overview might turn to the article by Scheuer and Joel Slemrod, “Taxation and the Superrich,” in the 2020 Annual Review of Economics (vol. 12, pp. 189-211, subscription required).

When discussing the superrich in a US context, there are two common starting points. One is to focus on the Forbes 400, an annual list of the 400 wealthiest Americans. Another is to focus on the very top of the income distribution–that is, not just the top 1%, but the top 0.1% or even the top 0.01%.

On the subject of the Forbes 400, Scheuer writes: “The cutoff to make it into the Forbes 400 in 2018 was a net worth of $2.1 billion, and the average wealth in this group was $7.2 billion. The share of aggregate U.S. wealth owned by the Forbes 400 has increased from less than 1% in 1982 to more than 3% in 2018.” It’s worth pausing over that number for a moment: the share of total US wealth held by the top 400 has tripled since 1982. Scheuer also points out that one can distinguish whether those in the top 400 inherited their wealth or accumulated it themselves. Back in 1982, 44% of the top 400 had accumulated it themselves, while in 2018, 69% had done so.

Of course, wealth is not the same as income. For example, when the value of your home rises, you have greater wealth, even if your annual income hasn’t changed. Similarly, when the price of stock in Amazon or Microsoft changes, so does the wealth of Jeff Bezos and Bill Gates (#1 and #2 on the Forbes wealth list), even if their annual income is unchanged.

The IRS used to (up to 2014) release data on the “Fortunate 400” top income-earners in a given year; in 2014, the cutoff for making this list was $124 million in income for that year. Another approach is to look at the top of the income distribution: the top 0.01% represents the 12,000 or so households with the highest income in the previous year.

There are basically four ways to tax the super-rich: the income tax, the capital gains tax, the estate tax, or a wealth tax.

In the 2020 tax code, the top income tax bracket is 37%: for example, if you are married filing jointly, you pay a tax rate of 37% on income above $622,500. (This is oversimplified, because there are phase-outs of various tax provisions and surtaxes on investment income that can lead to a marginal tax rate that is a few percentage points higher.) But one obvious possibility for taxing the superrich would be to add additional tax brackets that kick in at higher income levels, like $1 million or $10 million in annual income.
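To see the mechanics, here is a minimal sketch of a bracket-based income tax with hypothetical add-on brackets at those levels. The rates, the rounded threshold, and the single lower bracket are simplifying assumptions for illustration, not the actual US tax schedule.

```python
# Minimal sketch of a marginal (bracket-based) income tax with hypothetical
# add-on brackets for very high incomes. All thresholds and rates below are
# illustrative assumptions, not the actual US tax schedule.

def tax_owed(income, brackets):
    """brackets: list of (lower_threshold, marginal_rate), sorted by threshold."""
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

# Stylized current schedule: a single lower bracket, then 37% above ~$622,000.
current = [(0, 0.24), (622_000, 0.37)]
# Hypothetical add-on brackets at $1 million and $10 million of annual income.
proposed = current + [(1_000_000, 0.45), (10_000_000, 0.55)]

for income in (800_000, 5_000_000, 50_000_000):
    print(f"income {income:>12,}: current {tax_owed(income, current):>13,.0f}, "
          f"with add-on brackets {tax_owed(income, proposed):>13,.0f}")
```

Because the tax is marginal, the higher hypothetical rates apply only to the slice of income above each new threshold, so the add-on brackets mainly affect the very highest incomes.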

The difficulty with this straightforward approach is what Scheuer refers to as the “plasticity” of income, that is, “the ease with which higher-taxed income can be converted into lower-taxed income.” Scheuer writes:

Plasticity is an issue when different kinds of income are subject to different effective tax rates. By far the most important aspect of plasticity, with implications both for understanding the effective tax burden on the superrich and for measuring the extent of their income and therefore income inequality, concerns capital gains.

To put this in concrete terms, if you look at the wealthiest Americans like Jeff Bezos or Bill Gates, their wealth doesn’t rise over time because they save a lot out of the high wages they are paid each year; instead, it’s because the stock price of Amazon or Microsoft rises. They only pay tax on that gain if they sell stock, realizing a capital gain at that time. Thus, if you want to tax the super-rich, taxing their annual income will miss the point. You need to think about how to tax the accumulation of their wealth.

In the US, capital gains enjoy several tax advantages relative to regular income. The tax rate on capital gains is 20%, instead of the 37% (plus add-ons) top income tax rate. In addition, you can let a capital gain build up for years or decades before you realize the gain and owe the tax; thus, along with the lower tax rate, there is a benefit from being able to defer the tax. Finally, if someone who has experienced a capital gain over time dies and then leaves that asset to their heirs, the capital gain for that asset during their lifetime is not taxed at all. Instead, the heir who receives the asset can “step up” the basis, meaning that the value for purposes of calculating a capital gain for the heir starts from the value at the time the asset was received by the heir. Taken together, the “plasticity” of being able to gain wealth through a capital gain, rather than through annual income, is a core problem of taxing the superrich. Scheuer explains:

Most countries’ tax systems treat capital gains favorably relative to ordinary labor income (Switzerland being an extreme case where most capital gains are untaxed). Realized capital gains represent a very high fraction of the reported income of the superrich. For example, realized capital gains represented 60% of total gross income for the 400 highest-income Americans in tax year 2014. … For tax year 2016, those earning more than $10 million report net capital gains corresponding to 46% of their total income, whereas capital gains are a negligible fraction of income for those earning less than $200 k.
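To put rough numbers on the lower rate, the deferral benefit, and the step-up in basis described above, here is a back-of-the-envelope sketch. The starting value, the 7% return, the 30-year horizon, and the tax rates are all assumed for illustration; this is not a model of any particular taxpayer.

```python
# Back-of-the-envelope sketch of why the tax treatment of capital gains matters.
# All values, returns, horizons, and rates are illustrative assumptions.

initial = 1_000_000      # starting asset value
annual_return = 0.07     # assumed pre-tax annual return
years = 30
ordinary_rate = 0.37     # stylized top ordinary-income tax rate
cap_gains_rate = 0.20    # stylized long-term capital gains rate

# Case 1: the gain is taxed every year at the ordinary rate (no deferral).
wealth_taxed_annually = initial * (1 + annual_return * (1 - ordinary_rate)) ** years

# Case 2: the gain compounds untaxed; capital gains tax is paid once at sale.
final_value = initial * (1 + annual_return) ** years
wealth_deferred = final_value - cap_gains_rate * (final_value - initial)

# Case 3: the asset is held until death and passed to an heir, whose basis
# "steps up" to the current value, so the lifetime gain is never taxed as income.
wealth_step_up = final_value

print(f"taxed annually as ordinary income:  {wealth_taxed_annually:>13,.0f}")
print(f"capital gains tax deferred to sale: {wealth_deferred:>13,.0f}")
print(f"step-up in basis at death:          {wealth_step_up:>13,.0f}")
```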

There are other ways to tax capital gains. For example, one of Joe Biden’s campaign promises was to tax capital gains at the same rate as personal income for anyone receiving more than $1 million in income in that year. Before getting into some of the reasons for the current treatment, it’s worth noting that essentially every high-income country taxes capital gains at a lower rate. Scheuer writes:

Five OECD countries levy no tax on shareholders based on capital gains (Switzerland being a prominent example). Of those that do, all tax is on realization rather than on accrual. Five more countries apply no tax after the end of a holding period test, while four others apply a more favorable rate afterwards. The tax rate varies widely with the highest as of 2016 being Finland, at 34%. With a few exceptions, the accrued gains on assets in a decedent’s estate escape income taxation entirely, because the heir can treat the basis for tax purposes as the value upon inheritance.

Why are capital gains taxed at a lower rate, all around the world? Why are they taxed only when those gains are realized, perhaps after years or decades, rather than taxed as the gains happen? One reason is that there is an annual corporate tax, so income earned by the corporation is already being taxed. Or if the capital gain is being realized on a gain in property values, there were also property taxes paid over time. In general, many countries want to have a substantial share of patient investors, who are willing to hold assets for a sustained time. Trying to tax capital gains as they accrue, rather than when they are realized, would also raise practical questions–for example, it might require people to sell some of their assets to pay their annual taxes.

Scheuer runs through a variety of different ways of taxing capital gains, and you can consider the alternatives. But again, there are reasons why no country has pursued taxing capital gains as they accrue, rather than as they are realized, and why no country taxes such gains as ordinary income–and in fact why some countries don’t tax them at all.

Another alternative is to tax wealth directly. I’ve written about a wealth tax before, and don’t have a lot to add here. Scheuer offers the reminder that Donald Trump was an advocate of a large but one-time wealth tax on high net-worth individuals back in 1999, when he ran for president on the Reform Party ticket, as a way to pay off the national debt. Here, I’ll just offer a reminder that a wealth tax is based on total wealth, not on gains. Thus, if there is an annual wealth tax of, say, 3%, and your wealth was earning a return of 3% per year, the wealth tax means you are now earning a return of zero. If there is a year when the stock market drops, and the returns for that year are negative, you still owe the wealth tax.
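The arithmetic in that last point is easy to make explicit: a wealth tax is owed on the stock of wealth regardless of that year's return, so its bite relative to the return can be large. The tax rate and returns below are assumptions for illustration.

```python
# Stylized example: an annual wealth tax is owed on the stock of wealth,
# whatever that year's return turns out to be. Numbers are assumptions.

wealth = 100_000_000
wealth_tax_rate = 0.03

for pre_tax_return in (0.07, 0.03, -0.10):
    gain = wealth * pre_tax_return
    tax = wealth * wealth_tax_rate          # owed even when the return is negative
    after_tax_return = (gain - tax) / wealth
    print(f"pre-tax return {pre_tax_return:+.0%} -> return after wealth tax {after_tax_return:+.1%}")
```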

About 30 years ago, 12 high-income countries had wealth taxes, but the total is now down to three. The general consensus was that the trouble of trying to value wealth each year for tax purposes (and just consider for a moment how the superrich might shuffle their assets into other forms to avoid such a tax) just wasn’t worth the relatively modest total amounts being collected. The one country that continues to collect a substantial amount through its wealth tax is Switzerland–but remember, Switzerland doesn’t have any tax at all on capital gains. Scheuer writes:

So far, the Swiss case is the only modern example for a wealth tax in an OECD country that has been able to generate sizeable and stable revenues in the long run. It enjoys broad support, as evidenced by the fact that it keeps being reaffirmed by citizens in Switzerland’s direct democracy, where most tax decisions must be put directly to voters. However, its design and the role it plays in the overall tax system are quite different from current proposals in the United States. In particular, it is not geared towards a major redistribution of wealth, and indeed wealth concentration in Switzerland remains high in international comparison.

A final option, which is not a focus of Scheuer’s discussion, would be to resuscitate the estate tax: that is, instead of taxing the superrich during their lives, tax the accumulated value of their assets at death. For an example of a proposal along these lines, William G. Gale, Christopher Pulliam, John Sabelhaus, and Isabel V. Sawhill offer a short report, “Taxing wealth transfers through an expanded estate tax” (Brookings Institution, August 4, 2020). They point out, for example, that back in 2001 estates of more than $675,000 were subject to the estate tax; now, it applies only to estates above $11.5 million. Maybe $675,000 was on the low side, but an exemption of $11.5 million is pretty high–only about 0.2% of estates are subject to the estate tax at all. They calculate that rolling back the estate tax rules to 2004–which was hardly a time of confiscatory taxation–could raise about $100 billion per year in revenue.

Taking all this together, it seems to me that a middle-of-the-road answer on how to raise taxes on the superrich would focus in part on the estate tax, and in part on the capital gains tax–and perhaps in particular on limiting the ability to pass wealth between generations in a way that avoids capital gains taxes.

Public-Private Partnerships: The Importance of Contract Design

Most transportation infrastructure in most countries is funded by government. But in a public-private partnership, a private company puts up at least some of the money to build the project in exchange for being able to earn a return from that project in the future–typically through some combination of tolls or other charges to those using the infrastructure.
For cash-strapped governments, a public-private partnership can sound enticing. Reduced need for public spending up front! Those who pay in the future will be users of the system after it is built! But unsurprisingly, whether a PPP is a good deal for the public turns out to depend on the details of the contractual arrangement. Eduardo Engel, Ronald Fischer, and Alexander Galetovic provide a readable overview of what we know in “Public–Private Partnerships: Some Lessons After 30 Years” (Regulation, Fall 2020, pp. 30-35). The subheading on the article reads: “The savings policymakers usually claim for these projects are illusory, but well-designed contracts can deliver public benefits.”
As the authors note: “[I]nvestment in PPPs over the last 30 years has been substantial, adding €203 billion of infrastructure spending in Europe and $535 billion of spending in developing countries. Most investments are in roads, seaports, and airports, but in some countries investment via PPPs has been significant in other types of infrastructure, such as hospitals and schools. In comparison, PPP investments in the United States have been small.”
To understand the economic perspective here, consider the following question: If a PPP is such a good deal that businesses are bidding against each other for the contract, then maybe it would make sense for the government to spend the money up front, via deficit spending if needed, and then have the government collect the tolls or other revenues in the future. As the authors point out, the real economic gains from a PPP (if any) don’t come from the private partner being willing to invest some cash up front. Instead, the gains come from incentives in the contract that would cause a private firm to build or maintain or run the infrastructure in a more efficient or higher-quality way than if the government just took it over.
For example, an accumulation of evidence suggests that the private firm in a long-term PPP may do a better job of ongoing maintenance than a government agency running the same project. As the authors write:

Many governments do not perform regular, continuous maintenance because building new infrastructure or repairing severely deteriorated projects is politically more attractive. … Moreover, the annual logic of public budgets makes it difficult to set aside funds for future maintenance at the time the project is built. Indeed, a study suggests that one-third of expenditures on new infrastructure should be allocated to maintaining existing projects. The cost of poor maintenance under traditional provision can be high. Not only is the quality of service poor, but the cost of intermittent maintenance, which often involves costly rehabilitation, has been estimated to lie between 1.5 and 3 times the cost of continuous maintenance. We estimate that maintenance savings are somewhere between 10% and 16% of initial investment.

To put it another way, if a PPP has a contract where government inspectors will be checking to make sure that regular maintenance is done–and paid for–by the private firm, that maintenance is more likely to happen than via direct government spending, where other items will always seem to have a higher priority than regular maintenance. 
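To get a feel for the magnitudes in the quoted passage, here is a quick illustrative calculation. The project size and the continuous-maintenance budget are arbitrary assumptions; the 1.5x-3x cost multiple and the 10%-16% savings range are the figures quoted by the authors.

```python
# Illustrative arithmetic on the maintenance figures quoted above.
# The project size and continuous-maintenance budget are arbitrary assumptions.

initial_investment = 1_000_000_000      # hypothetical project cost
continuous_budget = 100_000_000         # assumed lifetime cost of continuous maintenance

# Intermittent maintenance is quoted as costing 1.5 to 3 times continuous maintenance.
for multiple in (1.5, 3.0):
    extra = continuous_budget * multiple - continuous_budget
    print(f"intermittent maintenance at {multiple}x costs {extra / 1e6:,.0f}M more than continuous")

# The authors' estimated PPP maintenance savings, applied to the same project.
for share in (0.10, 0.16):
    print(f"maintenance savings of {share:.0%} of the initial investment: "
          f"{initial_investment * share / 1e6:,.0f}M")
```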
On the other side, lots can go wrong when negotiating a PPP contract. One of the major issues is that a firm may win the contract under the bright lights of transparency with a lowball bid, and then almost immediately initiate backroom discussions of why the contract now needs to be renegotiated for higher payments. The authors write: “When Mexico privatized highways in the late 1980s, Mexican taxpayers incurred costs of more than $13 billion following renegotiation of the initial contracts. In Chile, 47 out of 50 PPP concessions awarded by the Ministry of Public Works between 1992 and 2005 had been renegotiated by 2006, and one of every four dollars invested had been obtained through renegotiation.” However, contractual terms can be redrawn to reduce this risk. As the authors write:

To do so, the contract should limit the present value of a concessionaire’s compensation during the life of the contract to the amount determined by the original bid (the so-called “sanctity of the bid” principle). Moreover, any works added to the original project should be auctioned off to the lowest bidder and the concessionaire should be excluded from bidding. To ensure the sanctity of the bid, renegotiations should be reviewed by an independent panel and all contract modifications should be easily accessible to the public via the internet so that an informed public can question the reasons for renegotiations and the amounts involved.

Another issue arises when a private company is going to be allowed to impose tolls or user charges in the future. It has been a common practice that the right to charge is granted for a fixed time period (and if the firm doesn’t collect as much as expected, it then tries to renegotiate). This can lock in large payments to the private firm over a long period. The authors note:

Portugal received €20 billion in PPP investments in roads, hospitals, and other projects between 1995 and 2014. Of this amount, 94% was spent in highways that used “shadow tolls” that the government paid to the concessionaire per user. Government-guaranteed minimum revenue from the tolls amounted to 1% of the country’s gross domestic product annually over the period 2014–2020, though it will fall to an estimated 0.5% of GDP by 2030.

An alternative is for the contract to specify how much the private firm will collect over time, and when that amount has been collected, ownership of the project reverts to the government. Indeed, the government can even decide, if it wishes to do so, to pay the private firm the amount it was due to receive in advance, and then take over the project sooner.
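Engel, Fischer, and Galetovic have long argued for flexible-term contracts of this kind, often called present-value-of-revenue concessions. Here is a minimal sketch of how the term of such a concession adjusts to demand; the bid amount, toll revenues, and discount rate are made-up assumptions.

```python
# Minimal sketch of a flexible-term ("present value of revenue") concession:
# the concession ends once the discounted toll revenue collected reaches the
# amount fixed by the winning bid. All numbers here are made-up assumptions.

def concession_length(bid_pv, annual_revenue, discount_rate, max_years=50):
    """Return the year in which cumulative discounted revenue reaches the bid."""
    collected_pv = 0.0
    for year in range(1, max_years + 1):
        collected_pv += annual_revenue / (1 + discount_rate) ** year
        if collected_pv >= bid_pv:
            return year
    return None  # bid not recovered within max_years

bid_pv = 800_000_000        # present value of toll revenue fixed by the winning bid
discount_rate = 0.05

# Strong traffic ends the concession early; weak traffic simply extends it.
for annual_revenue in (90_000_000, 60_000_000, 45_000_000):
    years = concession_length(bid_pv, annual_revenue, discount_rate)
    print(f"annual toll revenue {annual_revenue / 1e6:.0f}M -> concession runs {years} years")
```

One attraction of this design is that a shortfall in traffic lengthens the concession automatically rather than becoming an occasion for renegotiation.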
In short, with any PPP, it’s worth remembering that the private partner isn’t making investments to help the government “save money,” but rather because the firm expects to earn a profit from doing so. If the contract is poorly designed, the firm will quite likely take every opportunity to renegotiate it upward. The best reason for a PPP is that, if the contract is well-designed, it provides a way to reduce the government’s tendency to skimp on routine maintenance. In many settings, it can work out better for society if government takes the role of active monitoring and oversight of the private provision of certain services, rather than having government try to monitor its own actions in providing those services.

Extraordinary Spending in the 2020 Campaign

When you want information about campaign spending, the place to turn is the Open Secrets website run by the Center for Responsive Politics. It will take a few weeks longer to get a final tally on campaign spending for the 2020 elections, but some patterns are already pretty clear, as they point out in “2020 election to cost $14 billion, blowing away spending records” (October 28, 2020). They write:

The 2020 election is more than twice as expensive as the runner up, the 2016 election. In fact, this year’s election will see more spending than the previous two presidential election cycles combined.

The massive numbers are headlined by unprecedented spending in the presidential contest, which is expected to see $6.6 billion in total spending alone. That’s up from around $2.4 billion in the 2016 race.

Democratic presidential nominee Joe Biden will be the first candidate in history to raise $1 billion from donors. His campaign brought in a record-breaking $938 million through Oct. 14, riding Democrats’ enthusiasm to defeat Trump. President Donald Trump raised $596 million, which would be a strong fundraising effort if not for Biden’s immense haul. …

Spending by deep-pocketed national groups also is driving the total cost of election higher. In the month of October alone, outside spending by super PACs and other big-money groups totaled nearly $1.2 billion. These groups are spending far more to boost Biden than help Trump, further aiding the Democrats cash-flush campaign. …

Democratic candidates and groups have spent $6.9 billion, compared to $3.8 billion for Republicans. Democrats’ spending falls to $5.5 billion when excluding spending by billionaire presidential candidates Michael Bloomberg and Tom Steyer.

The article and website are full of interesting illustrative figures. For example, here’s total spending for the last few presidential years, divided into spending on the presidential and congressional campaigns.

Here’s campaign spending broken down by the sources of funds.

And here’s campaign spending from different industries, divided into Ds and Rs.

Finally, there’s an irony worth noting here. It’s more common for Democrats than for Republicans to express concern over high levels of campaign spending, and to suggest ways of limiting it. But if Joe Biden ends up prevailing when all the votes are counted, as seems likely, the enormous edge in campaign spending for Biden and Democratic candidates overall may well have made the difference in the key states Biden needed to win.

The Work/Family Balance for College-Graduate Women: From a Century Ago to the COVID Era

Claudia Goldin delivered the 2020 Martin Feldstein Lecture at the National Bureau of Economic Research on the topic “Journey across a Century of Women” (NBER Reporter, October 2020). Much of her talk is focused on the changing work/family balance for college-graduate women over time.

Five distinct groups of women can be discerned across the past 120 years, according to their changing aspirations and achievements. Group One graduated from college between 1900 and 1919 and achieved “Career or Family.” Group Two was a transition generation between Group One, which had few children, and Group Three, which had many. It achieved “Job then Family.” Group Three, the subject of Betty Friedan’s The Feminine Mystique, graduated from college between 1946 and 1965 and achieved “Family then Job.” Group Four, my generation, graduated between 1966 and 1979 and attempted “Career then Family.” Group Five continues to today and desires “Career and Family.”

College-graduate women in Group One aspired to “Family or Career.” Few managed both. In fact, they split into two groups: 50 percent never bore a child, 32 percent never married. In the portion of Group One that had a family, just a small fraction ever worked for pay. More Group Two college women aspired to careers, but the Great Depression intervened, and this transitional generation got a job then family instead. As America was swept away in a tide of early marriages and a subsequent baby boom, Group Three college women shifted to planning for a family then a job. Just 9 percent of the group never married, and 18 percent never bore a child. Even though their labor force participation rates were low when they were young, they rose greatly — to 73 percent — when they and their children were older. But by the time these women entered the workplace, it was too late for them to develop their jobs into full-fledged careers.

“Career then Family” became a goal for many in Group Four. This group, aided by the Pill, delayed marriage and children to obtain more education and a promising professional trajectory. Consequently, the group had high employment rates when young. But the delay in having children led 27 percent to never have children. Now, for Group Five the goal is career and family, and although they are delaying marriage and childbirth even more than Group Four, just 21 percent don’t have children.

There’s much more detail in the talk itself, but I was especially struck by this figure showing the evolution of attitudes about whether pre-school children are likely to suffer if the mother works.

Moving into modern times, Goldin has been making the case for a few years now that a fundamental issue affecting the work-life balance for college-graduate women is “large nonlinearities and convexities in pay.” This is economist-talk for saying that many high-paying jobs require that the worker be available or at least potentially on call much more than 40 hours per week. Goldin writes:

 Many jobs, especially the higher-earning ones, pay far more on an hourly basis when the work is long, on-call, rush, evening, weekend, and unpredictable. And these time commitments interfere with family responsibilities. … For many highly educated couples with children, she’s a professional who is also on-call at home. He’s a professional who is also on-call at the office. In consequence, he earns more than she does. That gives rise to a gender gap in earnings. It also produces couple inequity. If the flexible job were more productive, the difference would be smaller, and family equity would be cheaper to purchase. Couples would acquire it and reduce both the family and the aggregate gender gap. They would also enhance couple equity.

In normal times, Goldin’s usual recommendation is that organizations should seek to reorganize work to create more “temporal flexibility,” so that tasks can be handed off rather than relying on individuals to march through extra-long workweeks, and some professions have made steps along these lines. She writes:

Clients could be handed off with no loss of information. Successfully deployed IT could be used to pass information with little loss in fidelity. Teams of substitutes, not teams of complements, could be created, as they have been in pediatrics, anesthesiology, veterinary medicine, personal banking, trust and estate law, software engineering, and primary care.

But we aren’t in normal times, what Goldin calls the BCE period (that is, Before the Corona Era). Instead, we have been living through the DC (During Corona) period and hoping to reach the AC (After Corona) period. Goldin looks at time-use data to show that the hours needed for child care increased dramatically when the pandemic hit in March and April 2020. But at that time, it was common for both parents to be at home, and so the extra child-care burden was distributed somewhat evenly. But looking ahead, one likely outcome is that the college-educated male workers will be called back to their jobs that require especially long hours, while the college-educated female workers will then become responsible for a rising share of the greatly extended child care hours: “If history is any guide, men will go back to work full time and revert to their BCE childcare levels. Women will take up the slack and do a greater share of the total.”

Goldin offers a thought experiment for how government might respond in the COVID era with a Civilian College Corps:  

When public and free elementary schools spread in the United States in the 19th century, and when they expanded during the high school movement of the early 20th century, a coordinated equilibrium was provided by good governments. Good government today could do the same thing. We need to find safe ways to have classes for children — for their futures and for their parents’. …

Today, many of the unemployed are highly educated recent college graduates and gap-year college students with little to do. They could be harnessed in a new Works Progress Administration manner and put to work educating children, especially those from lower-income families. They could free parents, especially women, to return to work. I’ll repurpose a name and call them the “Civilian College Corps” — a new CCC.

Some of the Corps’ educational work could be done remotely. The Corps could support beleaguered parents too exhausted to correct their children’s essays and too confused to help their children with algebra. Other Corps members could be in the classroom, helping districts cope with having fewer teachers because some older teachers don’t want to return to a school building. The Corps could employ those without jobs, meaning, and direction and give them something worthy to do: educate the next generation and help women go back to work full time, either in their homes or on-site. 

 

Some Economics of Collectibles

Collectibles is the broad name given to ownership of goods including fine art, antiques, watches and jewelry, wines, classic cars, luxury handbags, and even musical instruments. Credit Suisse gives some insights into the market for collectibles among the very rich in “Collectibles: An integral part of wealth” (October 2020, available at the website of the Credit Suisse Research Institute). The report has separate chapters on the collectibles mentioned above. Here, I’ll just focus on some “Collectibles Facts and Figures” from an opening chapter by Nannette Hechler-Fayd’herbe and Adriano Picinati di Torcello.

They carry out a small-scale survey of 55 ultra-high-net-worth individuals, defined here as households with net worth of more than $30 million, and then do some extrapolating:

However, from a collectibles standpoint, individuals or families with wealth above USD 30 million are a more relevant segment to consider. Based on the Credit Suisse Global Wealth Report 2019, we estimate that people with net worth exceeding USD 30 million accounted for USD 26.3 trillion of global wealth prior to the outbreak of the COVID-19 pandemic. … Conservatively estimated, an approximate share of 3%–6% in collectibles would bring the value of collectibles owned by private individuals to around USD 1.2 trillion.
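The extrapolation is straightforward arithmetic on the quoted figures, as a quick check shows:

```python
# Quick check of the extrapolation quoted above.
total_wealth_above_30m = 26.3e12        # USD, wealth held by those above USD 30 million
for share in (0.03, 0.06):
    value = total_wealth_above_30m * share
    print(f"{share:.0%} held in collectibles -> about {value / 1e12:.1f} trillion USD")
# The resulting range of roughly USD 0.8-1.6 trillion brackets the report's
# "around USD 1.2 trillion" estimate.
```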

They point out that “the collectibles market has been deeply disrupted by the COVID-19 outbreak. For example, major events like the Pebble Beach Concours d’Elegance for classic cars or the Art Basel Hong Kong for fine art, as well as high profile marquee auctions like Sotheby’s Asia Week Series in New York have been cancelled or postponed.” In general, prices in collectibles markets seem to be pro-cyclical–that is, they boom when times are good but suffer when times are bad. For example, here’s a graph comparing an Art Market Confidence Indicator to a Purchasing Managers Index (PMI), which is often used as a leading indicator in studies of business cycle movements.

What about longer-term returns on collectibles? The authors write: 

In Figure 6, for illustrative purposes, we compare the historical evolution of the Sotheby’s Mei Moses index for fine arts with the Liv-ex Fine Wine 100 Index, the HAGI Top 100 Index for classic cars, the AMR indices for watches and jewelry, and a luxury handbag index, all rebased to 100 in 2010. Over the last ten years, most collectible categories have gained in value, but with substantial differences from one to the other. On aggregate, wines and fine art have returned the least. Watches and jewelry have been effective stores of value, with cumulative 10-year returns between 27% and 61%. Classic cars were by far the best-performing collectibles category (see Figure 6). Naturally, this trend is time- and index-dependent, and other periods will show different developments. Nevertheless, it does give an indication of how the last decade panned out for various categories of collectibles.

Or here is a more detailed table of returns to various collectibles over differing time frames. The table shows the returns and the standard deviation of the returns, to give a sense of how some of the prices in the collectibles markets can be quite volatile. 
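As a concrete illustration of the two statistics such a table reports, here is a small sketch that computes the cumulative return, the average annual return, and the standard deviation of annual returns from a hypothetical index rebased to 100 in 2010. The index values below are made up, not drawn from any actual collectibles index.

```python
# Sketch of the return and volatility statistics reported for collectibles
# indices, computed from a hypothetical index rebased to 100 in 2010.
# The index levels below are made up for illustration.

index = [100, 108, 115, 127, 140, 138, 152, 171, 165, 180, 193]   # annual levels

annual_returns = [index[t] / index[t - 1] - 1 for t in range(1, len(index))]
cumulative_return = index[-1] / index[0] - 1
mean_return = sum(annual_returns) / len(annual_returns)
variance = sum((r - mean_return) ** 2 for r in annual_returns) / (len(annual_returns) - 1)
volatility = variance ** 0.5      # sample standard deviation of annual returns

print(f"cumulative 10-year return: {cumulative_return:.0%}")
print(f"average annual return:     {mean_return:.1%}")
print(f"volatility (std. dev.):    {volatility:.1%}")
```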

The topic of collectibles poses both analytical and public policy challenges. On the analytical side, the authors of the Credit Suisse report are quick to caution that indices of asset market prices pose all kinds of issues. They write: 

Compared to liquid financial asset indices, however, collectibles indices face considerable challenges as to how relevant they are as a reflection of financial performance and risk. Besides being mostly non-investable, collectibles indices are made up of assets that, in contrast to financial assets, are non-fungible, unique/heterogeneous, item-by-item, and not particularly liquid, as transactions only occur from time to time. Additionally, while public auction data are available, private sales data are not, which means that a significant number of transactions are unavailable for observation and inclusion. In turn, transactions included in public auctions are typically subject to significant selection bias. Hence the construction of these indices nearly always means compromising on at least one of the above-mentioned aspects, such that the indices can only be taken as rough indications of performance, volatility and correlations.

On the public policy side, collectibles pose some hard questions if one is thinking about taxes based on wealth. To what extent should measures of wealth created for purposes of thinking about inequality or a possible wealth tax include collectibles? If you don’t include the $1.2 trillion in wealth from collectibles and the return on those investments, it seems an obvious loophole to be exploited by those seeking to avoid wealth taxes. But placing a clear value on the changes in the value of collectibles from year to year–say, for purposes of a wealth tax–is very hard. Many collectibles also have a value in terms of being used or appreciated for themselves, which goes beyond their market value. When it comes to fine art or antiques, one might argue that incentives to gather collectibles have social value, because such items are a preservation of the past that often end up in museums, eventually.

For those who want more, Benjamin J. Burton and Joyce P. Jacobsen wrote an article some years back on “Measuring Returns on Investments in Collectibles” in the Fall 1999 Journal of Economic Perspectives (13:4, pp. 193-212). The specific examples in that article are of course somewhat outdated, but the underlying issues remain much the same.

Revisiting March 2020: What Were Epidemiologists Thinking about Masks?

A basic part of building credibility is not to flip-flop on your advice. But early in the pandemic, public health authorities at first advised against wearing masks, and then shifted over to recommending but not requiring masks. Now masks are required in many states in public and indoor settings. What were public health authorities and epidemiologists thinking when they were recommending against masks?

As one example of such a recommendation, the World Health Organization published on January 29, 2020, “Advice on the use of masks in the community, during home care and in health care settings in the context of the novel coronavirus (2019-nCoV) outbreak: interim guidance.” Here’s a taste of the recommendations:

Wearing a medical mask is one of the prevention measures to limit spread of certain respiratory diseases, including 2019- nCoV, in affected areas. However, the use of a mask alone is insufficient to provide the adequate level of protection and other equally relevant measures should be adopted. If masks are to be used, this measure must be combined with hand hygiene and other IPC measures to prevent the human-to-human transmission of 2019-nCov. … Wearing medical masks when not indicated may cause unnecessary cost, procurement burden and create a false sense of security that can lead to neglecting other essential measures such as hand hygiene practices. Furthermore, using a mask incorrectly may hamper its effectiveness to reduce the risk of transmission. … [In a community setting,] a medical mask is not required, as no evidence is available on its usefulness to protect non-sick persons. … Cloth (e.g. cotton or gauze) masks are not recommended under any circumstance.

Early recommendations from US health officials also tended to downplay the usefulness of masks. Jack Brewster offers a review of the timeline in Forbes:

February 27
One day after the Centers for Disease Control confirmed the first possible instance of Covid-19 “community spread,” CDC Director Robert Redfield is asked at a hearing on Capitol Hill whether healthy people should wear a face covering and responds, “No.”

February 29

On the same day public health officials announce the first death in the United States from Covid-19, U.S. Surgeon General Jerome Adams orders Americans to “STOP BUYING MASKS!” in an all-caps message on Twitter, claiming they are “NOT effective in preventing [the] general public from catching coronavirus” and will deplete mask supplies for healthcare providers.

March 8
During an interview with 60 Minutes—an interview Trump and his allies cite as an example of when the doctor was wrong—Fauci says “there’s no reason to be walking around with a mask,” though adds he’s not “against masks,” but worried about health care providers and sick people “needing them,” and says masks can lead to “unintended consequences” such as people touching their face when they fiddle with their mask.
By late March and early April, US public health authorities were beginning to recommend wearing masks in settings with other people. But the World Health Organization in its updated April 6 guidance continued to express uncertainty about average people wearing masks in public, emphasizing that medical masks should be reserved for health care workers and that “As described above, the wide use of masks by healthy people in the community setting is not supported by current evidence and carries uncertainties and critical risks.” However, by June 5 the WHO had altered its guidance, now saying:

The use of masks is part of a comprehensive package of the prevention and control measures that can limit the spread of certain respiratory viral diseases, including COVID-19. Masks can be used either for protection of healthy persons (worn to protect oneself when in contact with an infected individual) or for source control (worn by an infected individual to prevent onward transmission). However, the use of a mask alone is insufficient to provide an adequate level of protection or source control, and other personal and community level measures should also be adopted to suppress transmission of respiratory viruses. Whether or not masks are used, compliance with hand hygiene, physical distancing and other infection prevention and control (IPC) measures are critical to prevent human-to-human transmission of COVID-19. … the use of non-medical masks, made of woven fabrics such as cloth, and/or non-woven fabrics, should only be considered for source control (used by infected persons) in community settings and not for prevention.

Toward the end of August, I tried to summarize the state of the research evidence about masks. Here, I just want to note that when there is such a dramatic switch in public health recommendations, from saying that masks in general aren’t helpful all the way to requiring them in a few months, it raises an obvious question: What were they thinking?

In the most recent issue of the Journal of Economic Perspectives, epidemiologist Eleanor J. Murray offers an honest and open answer to the question (Fall 2020, “Epidemiology’s Time of Need: COVID-19 Calls for Epidemic-Related Economics”). Rather than try to summarize her nuanced view, I’ll just quote from her paper:

In January 2020, there was strong evidence supporting the use of personal protective equipment, including face masks, in high-risk settings such as health care facilities for the prevention of respiratory infections. However, the existing epidemiologic literature on the use of face masks by the general public for control of respiratory infections was extremely limited and showed mixed results (Brosseau 2020; Brosseau and Sietsema 2020; Chu et al. 2020). For example, one meta-analysis found that mask use in health care approximately halved the risk of influenza infection (Saunders-Hastings et al. 2017), and a randomized trial of non-pharmaceutical interventions in the home found an approximately 20 percent reduction in influenza infection for households using both face masks and hand sanitizer compared to hand sanitizer alone (Larson et al. 2010). In contrast, several randomized trials of households limited to face mask use alone had found no reduction in influenza transmission (Aiello et al. 2010; Canini et al. 2010; Cowling et al. 2008).

Lacking clear information on the benefits of community-level face mask use, epidemiologists in early 2020 engaged in internal discussion about the potential harms and benefits of this intervention, considering aspects such as the limited existing research, the limited supply and interrupted supply chains of masks, what was known at the time about the epidemiology of SARS-CoV-2 transmission, and concerns around the potential for “risk compensation” if people who were wearing masks then engaged in fewer other preventive measures (Bamber and Christmas 2020; Brosseau 2020; Brosseau and Sietsema 2020; Cheng 2020; Javid, Weekes, and Matheson 2020; King 2020). Based on these discussions, many applied epidemiologists, including those at the World Health Organization and Centers for Disease Control, initially advised against the use of face masks by the general public. Instead, they stressed the importance of hygiene and distancing-based interventions, such as hand-washing, social distancing, and quarantine.

Over time, however, new information emerged. First, it became clear that at least some subset of Americans would be amenable to wearing masks. Second, we learned that SARS-CoV-2 could be transmitted by individuals who were not (yet) symptomatic (Gandhi, Yokoe, and Havlir 2020). Finally, as the availability and use of both fabric and surgical masks increased, it became clear that even when individuals wearing masks did increase their risk behaviors (by, for example, joining protests), the evidence did not suggest that transmission in these settings was any higher than if attendees had been unmasked (Dave et al. 2020). Together, these observations have shifted most applied epidemiologists and public health officials towards encouraging the use of face masks by all individuals (Greenhalgh et al. 2020; Roderick et al. 2020).

However, this recommendation does not mean that the academic epidemiology of face mask usage by the general public during respiratory outbreaks has necessarily advanced much beyond what we knew in January 2020, and many academic epidemiologists remain agnostic about the value of face masks. In fact, if anything, it may be fair to say that academic epidemiologists have fewer answers about the science of face masks than we did 10 months ago—simply because we now have more questions. 

Previous research on face mask usage in respiratory outbreaks focused chiefly on evaluating either N95 masks or surgical masks, both of which are subject to regulatory standards. In contrast, many of the face masks used by the general public during the COVID-19 pandemic are made from fabric, both commercially and homemade, and the filtration efficacy of these masks is both unknown and potentially highly variable (Aydin et al. 2020; Davies et al. 2013; Tcharkhtchi et al. 2020). In addition, previous studies of face mask usage typically assumed individuals had been provided with training and guidance on how to appropriately don, doff, and wear face masks to maximize their benefits. In reality, adherence both in terms of frequency and correctness of face mask use is extremely variable among the general public. Despite this, existing attempts to model the population impacts of community-level face mask use have typically assumed perfect adherence and correct usage (Ferguson et al. 2020). Academic epidemiologists likely will be investigating and debating these topics for many years to come, both to fully characterize the causal effect of community level mask-wearing strategies and to explore the actual risks and benefits that result from these (Bundgaard et al. 2020; Doung-ngern et al. 2020).

In short, the current recommendation of experts is to wear masks–even cloth masks–in many settings, not so much to protect yourself as to protect others. Lynn Peeples offers a nice readable overview of the existing evidence base in favor of wearing masks in Nature (October 6, “Face masks: what the data say. The science supports that face coverings are saving lives during the coronavirus pandemic, and yet the debate trundles on. How much evidence is enough?”). But Peeples is a fair-minded presenter of the evidence, and so she also includes statements like:

“There’s a lot of information out there, but it’s confusing to put all the lines of evidence together,” says Angela Rasmussen, a virologist at Columbia University’s Mailman School of Public Health in New York City. “When it comes down to it, we still don’t know a lot.” …

Human behaviour is core to how well masks work in the real world. “I don’t want someone who is infected in a crowded area being confident while wearing one of these cloth coverings,” says Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota in Minneapolis. … For now, Osterholm, in Minnesota, wears a mask. Yet he laments the “lack of scientific rigour” that has so far been brought to the topic. “We criticize people all the time in the science world for making statements without any data,” he says. “We’re doing a lot of the same thing here.”

I wear a mask in all the recommended settings, because wearing a mask seems very likely to be better than not wearing one. But I’m also aware that countries of the European Union have recently seen a huge growth in COVID-19 cases, from well below the US level to well above it, despite widespread mask-wearing in many countries. As the WHO kept saying as it kept revising its guidance about masks, masks aren’t enough—or at least certainly not the fabric or gauze masks that most of the public wear. They need to be combined with social distancing, hand-washing, self-quarantining when possibly or actually infected, and similar steps. If wearing a mask gives people a sense that they are incapable of transmitting the disease themselves or invulnerable to the disease when transmitted by others, then when the epidemiologists finish their studies of masks five or ten years down the road, they may find that the potential benefits of mask-wearing were offset when people reduced their other efforts to minimize the spread of COVID-19.

I understand that opinions evolve. But at least in theory, public health experts have been holding conferences and publishing learned reports about pandemics for years. Wearing masks is not a high-tech intervention. It seems like the kind of issue where one might have expected public health experts to work out a recommendation in advance, rather than making a parade of their indecisiveness–and then criticizing those who are skeptical about playing follow-the-leader as they switch recommendations.

Revisiting March 2020: What Was Wrong With the Very High Early Death Estimates?

The “curse of knowledge” refers to a well-known behavioral bias: when you know something, it’s hard to remember what it was like not to know it. One standard example is a teacher who knows the subject extremely well, but fails to communicate with students because the teacher can no longer remember what it was like not to know the subject. At this point, we all know that the COVID-19 pandemic has caused more than 200,000 deaths in the United States alone. Along with the health costs, we have experienced the businesses closing, the lockdowns affecting schools and churches, and the many disruptions of everyday life. It’s hard to set aside our own “curse of knowledge” and think back to those long-ago moments of February and early March 2020, when most of us didn’t know what to make of the pandemic.
In the Fall 2020 issue of the Journal of Economic Perspectives, we have a two-paper Symposium on Economics and Epidemiology–one paper by a group of economists aimed at introducing the basic epidemiology of infectious diseases, and the other by an epidemiologist looking back from the perspective of that field. Today and for the next couple of days, I’ll draw on these articles to take a look back at what we were thinking before we knew what we know now. Some lessons for when (not if) the next pandemic arrives will naturally suggest themselves. The two JEP papers are the paper by Avery and co-authors introducing epidemiology models to economists and Eleanor J. Murray’s “Epidemiology’s Time of Need: COVID-19 Calls for Epidemic-Related Economics.”

As Avery and co-authors point out at the start of their paper, there was a strong shift in attitudes about the COVID-19 pandemic in mid-March 2020. In the US, for example, they note:

"[O]n March 11, President Donald Trump reassured the American people that for “[t]he vast majority of Americans, the risk is very, very low.” Just five days later, the Trump administration recommended that “all Americans, including the young and healthy, work to engage in schooling from home when possible. Avoid gathering in groups of more than 10 people. Avoid discretionary travel. And avoid eating and drinking at bars, restaurants, and public food courts” …
A similar shift happened in the United Kingdom. What changed? On March 16, a prominent group of epidemiologists at Imperial College London published a study with estimates that the death toll of the pandemic could reach 2.2 million in the United States and 500,000 in the United Kingdom. The specific study was “Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand,” written by Neil M. Ferguson and a long list of co-authors “on behalf of the Imperial College COVID-19 Response Team.” The first figure in the report looked like this:
The report also placed a heavy emphasis on how hospitals and health care providers could be overwhelmed by this projected death toll. The report is very clear in noting that this prediction is based on the absence of any policy or individual changes: 

In the (unlikely) absence of any control measures or spontaneous changes in individual behaviour, we would expect a peak in mortality (daily deaths) to occur after approximately 3 months (Figure 1A). In such scenarios … we predict 81% of the GB and US populations would be infected over the course of the epidemic.

This forecast took on particular force. It was often criticized for being a naive estimate, because it assumed no public or individual response. The problem is that those who made this criticism apparently had not looked at the report. Most of the report is a discussion of strategies for suppression or mitigation of the virus, with a detailed look at the possible effects of “non-pharmaceutical interventions” including case isolation, voluntary quarantine, social distancing of those 70 and older, social distancing of the entire population, and closure of schools and universities. There are charts showing the effects of different combinations of these strategies: for example, one chart suggests that with these kinds of measures in place, the overall death toll could be reduced 50-fold, from over 500,000 in the UK to less than 10,000, with similar estimates for the US.

In short, the Imperial College study from back in March 2020 was often dramatically mischaracterized. Yes, the estimates of 2.2 million US deaths and 510,000 UK deaths are in the report, but they assume zero response by the public sector or by individuals. The report was not expecting or predicting a response of zero! Instead, the report was trying to show how a variety of interventions could affect the death toll. To put it another way, the report was trying to show the dangers of inaction and the benefits of action.
But there’s a powerful irony here. Although many commenters were quick to note that the top of the bell-shaped curve of the figure above was based on a lack of public or private response, many of the same commenters seemed to accept the overall shape of the bell-shaped curve–and its implicit prediction that the novel coronavirus would pass through the population and be done in a few months, say by early July. For example, knowledgeable observers like former Fed chairman Ben Bernanke were saying in late March that the pandemic was like a giant snowstorm–something that would dramatically slow economic activity for a time but then would pass.
The emphasis back in March 2020 was that having everyone stay at home as much as possible for a couple of weeks would buy time so the health care system wouldn’t be overloaded. The original relief bills passed by Congress envisioned financial support for workers and firms that would only need to last for a couple of months. For example, back in March 2020, pretty much no one was talking about how we might still be keeping K-12 schools, colleges and universities, or churches closed in late 2020. We weren’t talking about waves of COVID-19 that could last for a year or more.
The Imperial College report back in March 2020 actually spelled out this kind of scenario. For example, it considered what would happen if the non-pharmaceutical interventions to mitigate the pandemic were switched on and off over time, depending on the capacity of the health care system. The idea was that when infections surged, the mitigation measures would be tightened, and at other times they would be eased back. The resulting simulation was that the pandemic would come in a series of waves over time, as the March 2020 report showed in figures like this one:

Notice that in this “adaptive triggering of suppression strategies,” the pandemic is having a wave about every three months through 2021.
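For readers curious where recurring waves like these come from, here is a toy SIR-style simulation (not the Imperial College model; every parameter below is a made-up assumption) in which transmission is suppressed whenever current infections cross an upper trigger and restored once they fall back below a lower one:

```python
# Toy SIR-style simulation of "adaptive triggering": transmission is cut when
# current infections exceed an upper trigger and restored when they fall below
# a lower one. All parameters are made-up assumptions, not the Imperial College model.

N = 1_000_000                              # population
beta_open, beta_suppressed = 0.30, 0.06    # daily transmission rates (open vs. suppressed)
gamma = 0.10                               # daily recovery rate (~10-day infectious period)
trigger_on, trigger_off = 5_000, 1_000     # infection counts that switch suppression on/off

S, I, R = N - 100.0, 100.0, 0.0
suppressed = False
lift_days = []

for day in range(730):                     # simulate two years, one step per day
    if not suppressed and I > trigger_on:
        suppressed = True
    elif suppressed and I < trigger_off:
        suppressed = False
        lift_days.append(day)              # record each time restrictions are lifted
    beta = beta_suppressed if suppressed else beta_open
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print("days on which suppression was lifted:", lift_days)
print(f"share of the population ever infected after two years: {R / N:.0%}")
```

In this toy setup, on-and-off suppression keeps cumulative infections far below the uncontrolled outcome, but the restrictions keep cycling for as long as most of the population remains susceptible, which is the pattern of repeated waves the report's figure describes.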

In taking a deeper look into the epidemiology models, the Avery et al. team note that this assumption of a bell-shaped curve of infections has been standard for a long time–but it is a pattern that applies to epidemics that are not effectively controlled. In addition, Murray points out in her essay that the problem at the beginning of any epidemic is that the data are very limited and the characteristics of the disease (methods of transmission, health effects, and so on) are never fully known. She writes:

But to put these concerns in real-world context, no infectious disease modeler expects to be able to accurately forecast the future based on sparse data from early in a pandemic. Even “nowcasting,” the task of modeling the current number of true infections, is extremely challenging, especially early in a pandemic. Asking an infectious disease modeler to predict the exact trajectory of an outbreak is akin to asking an economist to select stocks for your portfolio or a climate scientist to predict the best day in 2022 for an outdoor wedding. These tasks, while of interest to many people, are not generally within the purview of scientists. Instead, the goal of both mechanistic and phenomenological models in epidemiology is to forecast a range of possible futures, given a specified set of assumptions. …

In the case of the Imperial College model, two of the key assumptions which defined their original model were that the government would not respond to the COVID-19 pandemic with any interventions and that the general public would not respond to the pandemic with any changes to their own behavior. These assumptions are clearly unrealistic. However, by making these strong assumptions, the Imperial College model was able to provide epidemiologists and public health practitioners with a rapid estimate of the worst-case scenario: if SARS-CoV-2 was allowed to run unchecked through the population, what is the maximum amount of death that we might expect over the course of the outbreak until it burned out via herd immunity? The answer––510,000 deaths in the United Kingdom and 2,200,000 in the United States (Walker et al. 2020)—rightly spurred both governments and individuals to action.

What lessons does this experience hold for the next pandemic? One obvious lesson is to go beyond the big death estimate in the headlines (2.2 million will die!) or the simple criticism on social media (the estimate ignores public and private actions!) and look at the actual underlying report and arguments. At a deeper level, there seems to be a tradeoff to face: a country can have an uncontrolled pandemic with high health costs that passes through the population fairly quickly and is finished, or it can have a semi-controlled pandemic with a lower death rate that lasts for a much longer period. Be wary if you see a bell-shaped prediction of how a pandemic will shoot high and then fall to zero, and remember that this pattern really only holds if the pandemic is uncontrolled.

Public health authorities should be very hesitant before making statements that seem to imply that a pandemic will be over fairly soon. If actions are taken and the pandemic is not over soon, people will remember the earlier statements, and the public health authorities will lose some of the credibility that makes their recommendations persuasive. Yes, it would have been a lot harder back in March 2020 to say that the short-term shutdown was really just to avoid a catastrophic outcome and reduce the death toll, but that the pandemic was likely to go on for months and additional shutdowns would probably be needed over time. But that explanation would have been an honest reading of the March 2020 Imperial College report.
Similarly, public policy actions to help workers and firms at the start of the pandemic shouldn’t be based on an implicit assumption that the pandemic will be a short-term event. By all means, act quickly to offer some short-term assistance. But bear in mind that you need to think about what will be sustainable for perhaps a year or more into the future, not just the next few months.

A final lesson is that while financial support for those affected by a pandemic and various non-pharmaceutical interventions all have their place, government may also have an important role to play in facilitating and smoothing transitions. It's of course hard to know how the actions of people, firms, and other institutions might have changed with different advice and policy. But one can at least imagine a situation where national groups were convened back in April to start thinking about best-practice recommendations for workplaces, schools and universities, health care, retail, restaurants, theaters, and other sectors of the economy. Perhaps a vision could have developed of the kinds of adjustments that might be sustainable over time in these various environments, along with investments like making sure more people (adults and children) had reliable at-home web connections or that delivery services could be substantially expanded or even subsidized. Additional planning and resources spent early in the pandemic to facilitate shifts and transitions might already have been paying big dividends by late summer. But we would have needed to accept, back in March 2020, that the effects of a lasting pandemic were not likely to involve a bad few months and a quick snap-back to the period before the pandemic, but instead would involve lasting changes for the medium-term or even the long-term as we learned how to live with the pandemic for a sustained time.

Entering the Labor Market in a Recession: Some Findings and Advice

Trying to get a job during a period of recession and high unemployment is hard for everyone, but for those just entering the labor market for the first time it can leave scars on their lifetime earnings and even their personal health. Till von Wachter discusses the evidence from past recessions, and offers some advice for those currently living through this experience, in "The Persistent Effects of Initial Labor Market Conditions for Young Adults and Their Sources" (Journal of Economic Perspectives, Fall 2020, 34:4, pp. 168-94). In describing the longer-term effects of entering the labor market in a recession, he writes:

Over the last 15 years, an increasing number of studies have analyzed the short- and long-term effects on individuals entering the labor market in a recession, and this article will take stock of the core empirical methods and findings from this literature. On average, individuals entering the labor market in a typical recession (a 3–4 point rise in unemployment rates) experience a reduction in earnings of about 10–15 percent initially—somewhat smaller for college graduates, somewhat larger for high school graduates, and a particularly large reduction for nonwhites. Estimates for college graduates suggest that during recessions, workers tend to start jobs at less prestigious occupations and smaller- and lower-paying firms. For some groups, such as PhD economists and possibly MBA graduates, an initial occupation choice permanently affects career outcomes. An early-career economic shock has the potential to be disruptive beyond strictly economic outcomes, too. An increasing number of studies document that adverse labor market entry has effects on health and other outcomes like marriage, divorce, and women’s fertility and can affect socio-economic outcomes, health, and mortality in middle age. …

As he digs down into the evidence and models, a basic insight here is that early jobs often set the stage for later jobs. Many workers in their 20s switch jobs a number of times before settling into what feels like a good fit for at least the medium term. But when dealing with limited options for employment, finding that employer who is a good fit for the long term is harder, and switching between jobs may be harder, too. Von Wachter offers some advice (his fourth point gets a quick arithmetic illustration after the list):

The crisis in the labor market triggered by the COVID-19 pandemic has given this line of research increased urgency and has made it relevant to the 4 million or so young individuals graduating from college or high school in the summer of 2020.  Some useful lessons emerge from the research reviewed here:
  1. Your first job out of school may not be what you had expected, but that’s OK. Being flexible in your choice of, say, occupation or where you live will give you more options.
  2. Your career will take longer to develop than that of luckier peers. Do what you can to avoid being locked into that first job by continuing to accumulate general skills and looking for opportunities to move to other jobs.
  3. If things are going slow, remember, it is hard for everyone. At the same time, all findings discussed here are for averages and do not necessarily apply to you—you have agency in shaping your life and career. 
  4. You may need to save a higher percentage of income early in life to meet long-term wealth goals.
  5. Your desired patterns of marriage and fertility may take more effort to achieve. 
  6. Take particular care to develop and maintain a healthy lifestyle and be kind to yourself, in part because it will help you weather difficult initial labor market conditions. 
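
The fourth point is worth putting a number on. Here is a back-of-the-envelope sketch in Python of my own (not from von Wachter's article), under entirely illustrative assumptions: a recession entrant earns 10 percent less than a luckier peer for the first ten years of a 40-year career, the gap then closes, both save 10 percent of pay after year ten, and savings earn a 4 percent real return.

def final_wealth(earnings, saving_rates, real_return=0.04):
    """Accumulate retirement savings with compound interest over a working life."""
    wealth = 0.0
    for pay, rate in zip(earnings, saving_rates):
        wealth = wealth * (1 + real_return) + pay * rate
    return wealth

years = 40
peer_pay = [50_000] * years                      # luckier peer: flat real earnings
entrant_pay = [45_000] * 10 + [50_000] * 30      # 10% lower earnings for the first decade

target = final_wealth(peer_pay, [0.10] * years)  # peer saves 10% of pay throughout

# Search for the early-career saving rate that lets the recession entrant
# reach the same nest egg, given a 10% saving rate after year ten.
rate = 0.10
while final_wealth(entrant_pay, [rate] * 10 + [0.10] * 30) < target:
    rate += 0.001

print(f"Peer's retirement wealth: ${target:,.0f}")
print(f"Saving rate the recession entrant needs in the first decade: {rate:.1%}")

Under these assumptions, catching up takes roughly an extra percentage point of saving (about 11 percent rather than 10 percent of a smaller paycheck) during the lean decade; larger or longer-lasting earnings losses, or a desire to catch up sooner, would push the required rate higher.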

(N.B. I should add that I'm the Managing Editor of the Journal of Economic Perspectives, and thus in all likelihood personally biased in thinking that the articles therein are of considerable interest.)

Fall 2020 Journal of Economic Perspectives Online

I am now in my 34th year as Managing Editor of the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Fall 2020 issue, which in the Taylor household is known as issue #134. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

___________________________

Symposium: How Much Income and Wealth Inequality? 
\”The Rise of Income and Wealth Inequality in America: Evidence from Distributional Macroeconomic Accounts,\” by \”Emmanuel Saez and Gabriel Zucman
This paper studies inequality in America through the lens of distributional macroeconomic accounts—comprehensive distributions of the aggregate amount of income and wealth recorded in the official macroeconomic accounts of the United States. We use these distributional macroeconomic accounts to quantify the rise of income and wealth concentration since the late 1970s, the change in tax progressivity, and the direct redistributive effects of government intervention in the economy. Between 1978 and 2018, the share of pre-tax income earned by the top 1 percent rose from 10 percent to about 19 percent, and the share of wealth owned by the top 0.1 percent rose from 7 percent to about 18 percent. In 2018, the tax system was regressive at the top-end; the top 400 wealthiest Americans paid a lower average tax rate than the macroeconomic tax rate of 29 percent. We confront our methods and findings with those of other studies, pinpoint the areas where more research is needed, and describe how additional data collection could improve inequality measurement.
Full-Text Access | Supplementary Materials

\”Business Incomes at the Top,\” by Wojciech Kopczuk and Eric Zwick
Business income constitutes a large and increasing share of income and wealth at the top of the distribution. We discuss how tax policy treats and shapes how businesses are organized and how they distribute economic gains to owners, with the focus on closely held and pass-through firms. These considerations influence whether and how labor and capital income is observed in economic data and feed into research controversies regarding the measurement of inequality and the progressivity of the tax code. We discuss the importance of these issues in the United States and highlight that limited evidence from other countries suggests that they are likely to be important elsewhere.
Full-Text Access | Supplementary Materials

\”Growing Income Inequality in the United States and Other Advanced Economies,\” by Florian Hoffmann, David S. Lee and Thomas Lemieux
This paper studies the contribution of both labor and non-labor income in the growth in income inequality in the United States and large European economies. The paper first shows that the capital to labor income ratio disproportionately increased among high-earnings individuals, further contributing to the growth in overall income inequality. That said, the magnitude of this effect is modest, and the predominant driver of the growth in income inequality in recent decades is the growth in labor earnings inequality. Far more important than the distinction between total income and labor income is the way in which educational factors account for the growth in US labor and capital income inequality. Growing income gaps among different education groups as well as composition effects linked to a growing fraction of highly educated workers have been driving these effects, with a noticeable role of occupational and locational factors for women. Findings for large European economies indicate that inequality has been growing fast in Germany, Italy, and the United Kingdom, though not in France. Capital income and education don't play as much of a role in these countries as in the United States.
Full-Text Access | Supplementary Materials

Symposium: Economics and Epidemiology

\”An Economist\’s Guide to Epidemiology Models of Infectious Disease,\” by Christopher Avery, William Bossert, Adam Clark, Glenn Ellison and Sara Fisher Ellison
We describe the structure and use of epidemiology models of disease transmission, with an emphasis on the susceptible/infected/recovered (SIR) model. We discuss high-profile forecasts of cases and deaths that have been based on these models, what went wrong with the early forecasts, and how they have adapted to the current COVID pandemic. We also offer three distinct areas where economists would be well positioned to contribute to or inform this epidemiology literature: modeling heterogeneity of susceptible populations in various dimensions, accommodating endogeneity of the parameters governing disease spread, and helping to understand the importance of political economy issues in disease suppression.
Full-Text Access | Supplementary Materials

\”Epidemiology\’s Time of Need: COVID-19 Calls for Epidemic-Related Economics,\” by Eleanor J. Murray
The COVID-19 pandemic has catapulted scientific conversations and scientific divisions into the public consciousness. Epidemiology and economics have long operated in distinct silos, but the COVID-19 pandemic presents a complex and cross-disciplinary problem that impacts all facets of society. Many economists have recognized this and want to engage in efforts to mitigate and control the pandemic, but others seem more interested in attacking epidemiology than attacking the virus. As an epidemiologist, I call upon economists to join with us in combating COVID-19 and in preventing future pandemics. In this essay, I attempt to provide some insight for economists into how epidemiology works, where it doesn't work, and the much-needed answers that economists can help us obtain. I hope this will spur economists towards an epidemic-related economics that can provide a blueprint for a healthy economy and population.
Full-Text Access | Supplementary Materials

Articles

\”A 30-Year Perspective on Property Derivatives: What Can Be Done to Tame Property Price Risk?\” by Frank J. Fabozzi, Robert J. Shiller and Radu S. Tunaru
The housing sector is the largest spot market in the world without a developed derivative contract to serve the risk management needs of market participants. This paper describes the evolution within a wider economic context of property derivatives in the United States and worldwide. We review various economic arguments presented in the literature to highlight the advantages of these financial instruments to society. The paper also provides a critical perspective on the principal obstacles hindering the development of property derivatives based on real estate prices—especially housing prices—and what can be done to overcome these difficulties. The issues discussed can serve as a guide for designing property derivatives capable of hedging real estate risk that has resurfaced time and time again in financial crises.
Full-Text Access | Supplementary Materials

\”Welfare Analysis Meets Causal Inference,\” by Amy Finkelstein and Nathaniel Hendren
We describe a framework for empirical welfare analysis that uses the causal estimates of a policy's impact on net government spending. This framework provides guidance for which causal effects are (and are not) needed for empirical welfare analysis of public policies. The key ingredient is the construction of each policy's marginal value of public funds (MVPF). The MVPF is the ratio of beneficiaries' willingness to pay for the policy to the net cost to the government. We discuss how the MVPF relates to "traditional" welfare analysis tools such as the marginal excess burden and marginal cost of public funds. We show how the MVPF can be used in practice by applying it to several canonical empirical applications from public finance, labor, development, trade, and industrial organization.
Full-Text Access | Supplementary Materials

\”The Persistent Effects of Initial Labor Market Conditions for Young Adults and Their Sources,\” by Till von Wachter
Unlucky young workers entering the labor market in recessions suffer a range of medium- to long-term consequences. This paper summarizes the findings of the growing empirical literature on this subject and uses it to assess economic models of career development. The literature finds large initial effects on earnings, labor supply, and wages that tend to fade after ten to fifteen years in the labor market, and that are accompanied by changes in occupation, job mobility, and employer characteristics. Adverse initial labor market entry also has persistent effects on a range of social outcomes, including timing and completed fertility, marriage and divorce, criminal activities, attitudes, and risky alcohol consumption. There is also evidence that early exposure to a depressed labor market lowers health and raises mortality in middle age, patterns accompanied by a reopening of earnings gaps.
Full-Text Access | Supplementary Materials

\”Retrospectives: Regulating Banks versus Managing Liquidity: Jeremy Bentham and Henry Thornton in 1802,\” by John Berdell and Thomas Mondschean
At nearly the same moment, Jeremy Bentham and Henry Thornton adopted diametrically opposed approaches to stabilizing the financial system. Henry Thornton eloquently defended the Bank of England's actions as the lender of last resort and saw its discretionary management of liquidity as the key stabilizer of the credit system. In contrast, Jeremy Bentham advocated the imposition of strict bank regulations and examinations, without which, he predicted, Britain would soon experience a systemic crisis—which he called "universal bankruptcy." There are strong parallels but also dramatic differences with our recent attempts to reduce systemic risk within financial systems. The Basel III bank regulatory framework effectively intertwines Bentham's and Thornton's diametrically opposed approaches to stabilizing banks. Yet Bentham's and Thornton's concerns regarding the stability of the wider financial system remain alive today due to financial innovation and the politics of responding to financial crises.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

The Dam Removal Problem

One of the many highlights of my fatherhood experience was when one of my children was assigned a paper on the Hoover Dam. This meant that for a couple of weeks I could ask during family dinnertime conversations: "How is the dam paper going? Is the dam first draft ready yet? What are you learning from your dam school project?" Years later, my children still wince about the experience. Well, the dam topic is back again.

Researchers at Resources for the Future have been looking at the issue of old and outdated dams. The National Inventory of Dams (NID) from the US Army Corps of Engineers counts 91,500 dams in the United States with an average age of 57 years. Some of the dams continue to serve useful purposes: carbon-free power generation, flood protection, drinking water supply, irrigation, creating bodies of water for recreation, and so on. But many others are both no longer useful and "high-hazard," meaning that there is risk to human life if/when they fail. For these dams, removal will often be a better option. Margaret A. Walls and Vincent Gonzales provide a readable overview of the work in "Dismantling Dams Can Help Address US Infrastructure Problems" (Resources, October 2020). For more details, their longer report is "Dams and Dam Removals in the United States" (RFF Report 20-12, October 2020).

The National Inventory of Dams tries to include all dams in the US that meet "at least one of the following criteria:

  1. High hazard potential classification – loss of human life is likely if the dam fails,
  2. Significant hazard potential classification – no probable loss of human life but can cause economic loss, environmental damage, disruption of lifeline facilities, or impact other concerns,
  3. Equal or exceed 25 feet in height and exceed 15 acre-feet in storage,
  4. Equal or exceed 50 acre-feet storage and exceed 6 feet in height."

These are the relatively big dams. A fuller count of smaller dams from a couple of decades ago suggests that there might be 2.5 million dams in the US as a whole. 

The Stanford National Performance of Dams Program also provides useful background information. As their 2018 report on "Dam Failures in the U.S." points out, there was a major surge in dam-building after World War II, which then slowed to nearly a halt in the last couple of decades.

The Walls and Gonzales report digs into the data from the National Inventory of Dams and reports: 

[A] hazard rating is a classification that conveys the consequences should the dam fail or be operated improperly and release water. It is highly dependent on the location of the dam—that is, whether it is located near heavily populated areas—and the size of the reservoir. Seventeen percent of dams in the NID have a high hazard rating, 12 percent are rated as a significant hazard, 65 percent are considered a low hazard, and 5 percent have an undetermined hazard rating. Of the high-hazard dams, private entities own the largest share, at 43 percent. Comparing this with private ownership of dams as a whole, however, which is 62 percent, private entities appear to have fewer of the high-hazard dams than do other owner types. Local governments, for example, own 20 percent of all dams but 29 percent of high-hazard dams, and the federal government owns only 4 percent of all dams but 9 percent of high-hazard dams.
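
One way to read those ownership figures is to compare each owner type's share of high-hazard dams to its share of all dams; a ratio above one means that owner type is over-represented among high-hazard dams. The quick Python calculation below simply restates the numbers in the quoted passage.

# Ratios implied by the NID ownership shares quoted above:
# a value above 1 means the owner type holds a disproportionate share of high-hazard dams.
shares = {                       # (share of all dams, share of high-hazard dams)
    "private":            (0.62, 0.43),
    "local government":   (0.20, 0.29),
    "federal government": (0.04, 0.09),
}
for owner, (all_share, high_hazard_share) in shares.items():
    print(f"{owner:>18}: {high_hazard_share / all_share:.2f}x their share of all dams")

By that yardstick, private owners come in below one while local and especially federal owners come in well above it, which is the comparison the quoted passage is drawing in words.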

As they point out, removing dams can both reduce the risk of a collapse and offer other potential benefits:

Removing an obsolete or deteriorating dam can often be a better option than repairing it. In many cases, removal is less costly than repair, and if the dam no longer provides services of sufficient value, spending money on repairs makes little sense.

Removing a dam can have many environmental benefits. Dam removal restores a river’s natural function, improving water quality and conditions for aquatic habitat by increasing flows and reducing water temperatures, and provides passage to and from the ocean for anadromous fish species such as salmon. … 

The removals have ranged from very small dams, such as the 81 dams removed from the Cleveland National Forest in Southern California, to large dams with removal costs in the tens of millions of dollars, such as the 106-foot-tall San Clemente Dam in California and the Elwha and Glines Canyon Dams in Washington. On the East Coast, removals of dams such as the Edwards Dam on the Kennebec River in Maine and the Embrey Dam on Virginia’s Rappahannock River have benefited oceangoing species such as American shad, alewife, blueback herring, and American eel (a catadromous species that lives in freshwater and returns to the ocean to breed).

Dam removal also can create new river recreation opportunities by providing unimpeded boat passage and restoring whitewater conditions. The removal of three dams on the Cuyahoga River in Ohio was motivated by concerns over water quality, but removing the dams actually spurred growth in the local outdoor recreation economy by producing Class 5 rapids in downtown Cuyahoga Falls. The final dam slated to come down on the Cuyahoga River, the 60-foot-tall Gorge Dam, is expected to reveal a buried 200-foot natural waterfall.

The removal of certain kinds of dams can improve river safety. Low-head (or “run-of-the-river”) dams, which lie across the width of a river or stream and typically form only a minimal reservoir, create underwater circulating hydraulics that have caused hundreds of drowning deaths. After six deaths in one summer at low-head dams in Iowa, the state launched the Water Trails and Low-head Dam Mitigation Program, which focuses on removing and reengineering low-head dams around the state while providing canoe and kayak trails to enhance river recreation.

Despite these success stories, as of January 2020, only an estimated 1,700 dams have been removed in the United States. Numbers are on the rise—nearly half of the removals have taken place in the last ten years—but are low relative to the total number of dams. Moreover, a mere five states account for half of all removals: Pennsylvania (343), California (173), Wisconsin (141), Michigan (94), and Ohio (82).

There are lots of reasons why outdated and high-hazard dams aren't removed. There's an up-front cost. Those who would benefit from the dam removal may not even know that they would do so. The regulators who in theory are overseeing the dams are stretched thin. Margaret Walls discusses these issues and how they have been overcome in various states in "Aligning Dam Removal and Dam Safety: Comparing Policies and Institutions across States" (RFF Report 20-13, October 2020).

Older, outdated, and high-hazard dams are a case where the most useful infrastructure spending can involve removal, not rebuilding.