Are Theme Park Rides a Giffen Good?

It is canonical among economists that as the price of a good rises, the quantity demanded of that good falls (other factors held equal). The underlying logic is that a higher price for a certain good has two effects. The “substitution effect” is that a higher price for a certain good causes potential buyers to shift at least some of their buying to substitutes. The “income effect” is that if the price of a certain good rises, the overall buying power of one’s income is reduced, which tends to lead to a reduction in consumption of all “normal goods” that a person buys. Indeed, the economic definition of a “normal good” is a good where more is consumed as income rises: this is in contrast to an “inferior good,” where less is consumed as income rises. A standard example is that steak may be a “normal good,” while hamburger is an “inferior good:” as income rises, a typical household consumes more steak and less hamburger.

But there is a well-recognized theoretical exception to this rule, known as a “Giffen good.” Imagine a very “inferior” good. When the price of that good rises, the substitution effect provides an incentive to shift to other substitute goods. However, for an inferior good, the income effect cuts the other way: when the higher price reduces buying power, people with effectively lower incomes buy more of the good. If there aren’t many (or any) possible substitutes for the good, and the inferior good effect is strong, a higher price can lead to people buying more of the good.
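In textbook notation, these two effects combine in the standard Slutsky equation, where x(p, m) is demand at price p and income m, and h is the compensated (substitution-only) demand. This is standard theory, not anything specific to the studies discussed below:

```latex
% Slutsky decomposition of a price change (requires amsmath)
\[
\underbrace{\frac{\partial x}{\partial p}}_{\text{total effect}}
  = \underbrace{\frac{\partial h}{\partial p}}_{\text{substitution effect, } \le\, 0}
  - \underbrace{x\,\frac{\partial x}{\partial m}}_{\text{income effect}}
\]
% For a normal good, dx/dm > 0, so both terms push demand down when price rises.
% A Giffen good is so strongly inferior (dx/dm < 0) that the income term
% outweighs the substitution term, leaving dx/dp > 0.
```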

As a real-world example, consider the situation of a low-income household where a single food item plays a large role in their near-subsistence diet: the example I grew up hearing in classrooms was the role of potatoes in the diet of poor people in Ireland in the 19th century. Say the price of potatoes rises, because of some external factor like a bad harvest, but that even after this price increase, potatoes remain the cheapest source of calories in a basic diet. In this situation, the poor do not have a cheaper food product to which they can switch. Moreover, a price increase for a staple food product that is a big part of overall household spending means that the buying power of family income is diminished, and potatoes are an “inferior good.” Thus, poor people in this situation react to a higher price of potatoes by purchasing a greater quantity.

This situation of a Giffen good–a food staple that does not have easy substitutes, but is an “inferior good” making up a large share of purchases of low-income households–is obviously rare. Francis Ysidro Edgeworth described Giffen goods in this way in 1914: “Only a very clever man would discover that exceptional case; only a very foolish man would take it as the basis of a rule for general practice.” Indeed, I’m not aware of clear empirical evidence that the Giffen good argument applied to low-income Irish households eating potatoes in the 19th century. But there is some modern evidence that for low-income households in a particular location in China, rice was a Giffen good.

Garth Heutel offers the possibility of a completely different example in “Theme park rides are Giffen goods” (Southern Economic Journal, published online September 17, 2024). His argument requires a shift of perspective and some fancy footwork. He focuses on a particular context, in which people pay an admission fee to enter an amusement park, but then do not pay an additional fee for the rides. In this context, given that you have already paid for admission, the “price” of a ride is the time that you wait in line. In addition, you go into the park knowing that there will be lines, especially at the most popular rides. Thus, Heutel argues, a “higher price” in this context is a situation in which, after entering the park, you find that the lines for a given ride are longer than you expected.

Here’s a simplified example from Heutel to provide some intuition:

Consider a theme park with just two rides: a high-demand ride with a long wait, like a roller coaster, and a low-demand ride with a short wait, like a carousel. A guest has made a touring plan based on the expected wait times for the rides – 15 min for the carousel and 120 min for the roller coaster – and on the total time spent in the park of 7 h. Given that budget constraint, the guest chooses three rides on the roller coaster (6 h) and four rides on the carousel (1 h). …

When the guest arrives in the park, she finds an unexpected increase in the wait time for the carousel, increased from 15 to 30 min. How does she change her behavior? One option … would be to maintain her three rides on the roller coaster, leaving her just enough time (1 h) for two rides on the carousel instead of four. This would reduce the total number of rides from seven to five. An alternate option … is to ride one fewer ride on the roller coaster (two instead of three), leaving her an extra 2 h to ride the carousel, which would allow her six rides instead of the original four on the carousel. This would increase the total number of rides from seven to eight. This option amounts to Giffen behavior in her demand for the carousel; the price (wait time) increased, and her quantity demanded increased. The Giffen option is more likely to be chosen if the guest has some minimum total number of rides that she would like to achieve, similar to subsistence demand.
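To make the arithmetic concrete, here is a minimal Python sketch of the two-ride example. The objective, maximize roller coaster rides subject to a minimum of seven total rides, is my stand-in for the “subsistence demand” idea in the quotation, not Heutel’s actual model:

```python
# A guest allocates a 7-hour time budget between two rides. She prefers the
# roller coaster but insists on some minimum total number of rides
# (an assumed "subsistence" target, for illustration only).
BUDGET = 7 * 60          # minutes in the park
COASTER_WAIT = 120       # minutes per roller coaster ride
MIN_TOTAL_RIDES = 7      # assumed subsistence target

def best_plan(carousel_wait):
    """Most coaster rides that still hit the minimum total within the budget."""
    for coaster in range(BUDGET // COASTER_WAIT, -1, -1):  # prefer coasters
        carousel = (BUDGET - coaster * COASTER_WAIT) // carousel_wait
        if coaster + carousel >= MIN_TOTAL_RIDES:
            return coaster, carousel
    return None  # target unreachable in the time available

print(best_plan(carousel_wait=15))  # (3, 4): the original touring plan
print(best_plan(carousel_wait=30))  # (2, 6): carousel demand rises with its "price"
```

Doubling the carousel’s wait from 15 to 30 minutes moves the chosen plan from three coasters and four carousels to two coasters and six carousels: quantity demanded of the carousel rises with its price.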

Heutel offers data and theory to argue that this simplified example is a plausible representation of reality. I can’t do justice to the complexities of the analysis here, but here’s an overview:

I use a unique proprietary data set, which contains observations from several hundred guests of several major theme parks in California and Florida. Each guest has at least one “touring plan,” which is an itinerary of planned rides that the guest would like to do in the park. For each ride in the plan, I observe the ex-ante expected wait time, the actual wait time once the guest arrives at the ride, and the decision over whether the guest rode it. The key empirical test is the effect that the actual wait time, or the deviation between the actual and expected wait times, has on the probability of riding the ride. … The law of demand says that a higher wait time should cause a decrease in the probability of riding; Giffen behavior suggests the opposite. …

I find statistically significant evidence of Giffen behavior among theme park guests. On average, a 10-min increase in the difference between the actual wait time of a ride and its ex-ante expected wait time increases the probability of riding it by about three to five percentage points. This relationship holds under a variety of specifications and different controls, including controlling for weather, for overall park wait times, and for a set of user, ride, date, and time-of-day fixed effects. While true on average across all rides, I find that the effect predominantly arises from the rides that are the least desirable, that is, not the headliner rides like roller coasters. These rides are more likely to be inferior goods and thus more likely to exhibit Giffen behavior. I show that the Giffen effect is larger in theme parks with a smaller number of substitute rides, consistent with the theory.
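For readers who want a feel for what such a test looks like, here is a hedged sketch of a linear probability model with fixed effects, run on synthetic data. The column names, the data-generating process, and the built-in coefficient are all invented for illustration; Heutel’s actual data are proprietary:

```python
# Hypothetical sketch of the key regression: does the gap between actual and
# expected wait predict the probability of riding? Synthetic data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "wait_gap": rng.normal(0, 15, n),    # actual minus expected wait, in minutes
    "ride_id": rng.integers(0, 20, n),   # which ride (for ride fixed effects)
    "user_id": rng.integers(0, 100, n),  # which guest (for guest fixed effects)
})
# Build Giffen-style behavior into the fake data: each extra minute of
# unexpected wait raises the probability of riding by 0.4 percentage points.
p_ride = (0.5 + 0.004 * df["wait_gap"]).clip(0.01, 0.99)
df["rode"] = (rng.random(n) < p_ride).astype(int)

# Linear probability model with ride and guest fixed effects
model = smf.ols("rode ~ wait_gap + C(ride_id) + C(user_id)", data=df).fit()
print(10 * model.params["wait_gap"])  # effect of a 10-minute longer wait (~0.04)
```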

For me, it requires some mental exertion to think about the “subsistence” problem of those attending theme parks, in terms of the limited time they have in their day, as compared to the “subsistence” problem of impoverished Chinese people with a diet heavily dependent on rice. But of course, the very different context is what makes the example worth passing along.

Fall 2024 Journal of Economic Perspectives Free Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided back in 2011–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Fall 2024 issue, which in the Taylor household is known as issue #150. Below that are abstracts and direct links for all of the papers. I plan to blog more specifically about some of the papers in the next few weeks, as well.

_____

Symposium on Industrial Policy

“Export-Led Industrial Policy for Developing Countries: Is There a Way to Pick Winners?” by Tristan Reed

     Industrial policy prioritizes growth in specific sectors. Yet there is little agreement about how to target sectors in practice, and many argue that governments cannot pick winners. This essay observes that governments can and do identify tradable sectors where public inputs accelerate growth and generate economic benefits. These strategic sectors are: (1) those that are relatively more productive, and (2) those that are relatively less productive but require technology like the country’s existing technology and have rapidly growing markets and limited international competition. Since developing countries are productive in fewer sectors and have less technology, targeting can be more valuable for them. Export promotion agencies are institutions that have demonstrated effectiveness in coordinating public inputs to grow these sectors. Compared to protectionism, this alternative approach to ‘industrial policy’ is cheaper, less susceptible to capture by unproductive firms, and permissible under the rules of international trade agreements. Many countries’ development strategies adopt this approach.

“The Political Economy of Industrial Policy,” by Réka Juhász and Nathan Lane

We examine the ways in which political realities shape industrial policy through the lens of modern political economy. We consider two broad “governance constraints”: (1) the political forces that shape how industrial policy is chosen and (2) the ways in which state capacity affects implementation. The framework of modern political economy suggests that government failure is not a necessary feature of industrial policy; rather, it is more likely to fail when countries pursue industrial policies beyond their governance capacity constraints. As such, our political economy of industrial policy is not fatalist. Instead, it enables policymakers to constructively confront challenges.

“Industrial Policy: Lessons from Shipbuilding,” by Panle Jia Barwick, Myrto Kalouptsidi, and Nahim Bin Zahur

     Industrial policy has been used throughout history in some form or other by most countries. Yet, it remains one of the most contentious issues among policy makers and economists alike. In part, this is because the empirical evidence on whether and how it should be implemented remains slim. Scant data on government subsidies, conflicting theoretical arguments, and the need to account for governments’ short and long-run objectives, render research particularly challenging. In this article, we outline a theory-based empirical methodology that relies on estimating an industry equilibrium model to measure hidden subsidies, assess their welfare consequences for the domestic and global economy, as well as evaluate the effectiveness of different policy designs. We illustrate this approach using the global shipbuilding industry as a prototypical example of an industry targeted by industrial policy, especially in periods of heavy industrialization. Just in the past century, Europe, followed by Japan, then South Korea, and more recently China, developed national shipbuilding programs to propel their firms to global leaders. Success has been mixed across programs, certainly by welfare metrics, and sometimes even by growth metrics. We use our methodology on China to dissect the impact of such programs, what made them more or less successful, and how we can justify why governments have chosen shipbuilding as a target.

“Semiconductors and Modern Industrial Policy,” by Chad P. Bown and Dan Wang

     Semiconductors have emerged as a headline in the resurgence of modern industrial policy. This paper explores the political economic history of the sector, the changing nature of the semiconductor supply chain, and the new sources of concern that have motivated the most recent turn to government intervention. It also explores details of that turn to industrial policy by the United States, China, Japan, Europe, South Korea, and Taiwan. Modern industrial policy for semiconductors has included not only subsidies for manufacturing, but also new import tariffs, export controls, foreign investment screening, and antitrust actions.

“Alexander Hamilton’s Report on Manufactures and Industrial Policy,” by Richard Sylla

     Hamilton’s 1791 state paper on manufactures is a forward-looking argument for US industrialization supported by public policies designed to encourage it. Conventional wisdom circa 1790, along with static considerations of comparative advantage indicated that the United States should stick to farming, export its agricultural surpluses, and import European manufactures. Mercantilist trade policies of the major European empires, however, were barriers to US exports. Hamilton therefore contended that US manufacturing using the latest machine technologies would alleviate the effects of European trade restrictions by creating domestic demand for agricultural surpluses. His report specifies industries worthy of support, and policy measures to encourage their development. During the century that followed, US governments adopted nearly all of Hamilton’s recommendations. These measures contributed to an average annual rate of growth of industrial output of 5 percent during that century, helping the United States to become the world’s leading manufacturing nation.

Symposium on Behavioral Incentive Compatibility

“Evaluating Behavioral Incentive Compatibility: Insights from Experiments,” by David Danz, Lise Vesterlund, and Alistair J. Wilson

     Incentive compatibility is core to mechanism design. The success of auctions, matching algorithms, and voting systems all hinge on the ability to select incentives that make it in the individual’s interest to reveal their type. But how do we test whether a mechanism that is designed to be incentive compatible is actually so in practice, particularly when faced with boundedly rational agents with nonstandard preferences? We review the many experimental tests that have been designed to assess behavioral incentive compatibility, separating them into two categories: indirect tests that evaluate behavior within the mechanism, and direct tests that assess how participants respond to the mechanism’s incentives. Using belief elicitation as a running example, we show that the most popular elicitations are not behaviorally incentive compatible. In fact, the incentives used under these elicitations discourage rather than encourage truthful revelation.

“Behavioral Incentive Compatibility and Empirically Informed Welfare Analysis: An Introductory Guide,” by Alex Rees-Jones

     A growing body of research conducts welfare analysis that assumes behavioral incentive compatibility—that is, that behavior is governed by pursuit of incentives conditional on modeled imperfections in decision-making. In this article, I present several successful examples of studies that apply this approach and I use them to illustrate guidance for pursuing this type of analysis.

“Designing Simple Mechanisms,” by Shengwu Li

     It matters whether real-world mechanisms are simple. If participants cannot see that a mechanism is incentive-compatible, they may refuse to participate or may behave in ways that undermine the mechanism. There are several ways to formalize what it means for a mechanism to be “simple.” This essay explains three of them, and suggests directions for future research.

Articles

“The Problem of Good Conduct among Financial Advisers,” by Mark Egan, Gregor Matvos, and Amit Seru

     Households in the United States often rely on financial advisers for investment and savings decisions, yet there is a widespread perception that many advisers are dishonest. This distrust is not unwarranted: approximately one in fifteen advisers has a history of serious misconduct, with this rate rising to one in six in certain regions and firms. We explore the economic foundations of the financial advisory industry and demonstrate how heterogeneity in household financial sophistication and conflicts of interest allow poor financial advice to persist. Using data on the universe of financial advisers and the Survey of Consumer Finances, we document who uses financial advisers and the prevalence of misconduct in the industry. Our findings suggest that a lack of financial sophistication is a key friction, making enhanced disclosure a potentially effective policy response. Supporting this, we show through a difference-in-differences approach that “naming and shaming” firms with high misconduct rates was associated with a 10 percent reduction in misconduct.

“Retrospectives: Friedman and Schwartz, Disaggregated,” by Jennifer Burns

       What was the contribution of Anna Schwartz to the landmark book she co-authored with Milton Friedman, A Monetary History of the United States, 1867–1960? A close examination of archival evidence suggests three primary contributions Schwartz made to the work, and to Friedman’s career more generally. The first was meeting the classic challenge of quantitative economic history: going into the field to locate and collect archival data that had been assembled for purposes unrelated to economic research, and deciding how best to use that data. Second, Schwartz had a decades-long role as technical sounding board and shaper of the statistical approach taken in the book. Schwartz’s third and arguably greatest contribution was to transform A Monetary History of the United States into a compelling narrative argument that made an impact far beyond the economics profession. Together, these findings show Schwartz to be a scholar who made significant and lasting contributions to monetary economics, economic history, and the broader field of economics.

“Recommendations for Further Reading,” by Timothy Taylor

Housing: Supply Chasing Demand

To explain the high and rising price of housing, the standard economic intuition is that growth in supply isn’t keeping up with growth in demand. One piece of evidence supporting this intuition is the “vacancy rate”–that is, what share of owner-occupied housing or rental housing is empty?

Here are a couple of figures from the ever-useful FRED website managed by the St. Louis Fed, with the first showing national vacancy rates for owner-occupied housing and the second showing national vacancy rates for rental housing. As you can see, vacancy rates spiked higher during the Great Recession of 2007-09, but now are near historically low levels for the last half-century.

When people say there is a “housing shortage,” what they are often referring to are estimates that if there had been sufficient housing construction in the past to bring the vacancy rate up to historically average levels, then housing would not feel so costly and unaffordable. David Wessel offers a crisp overview of existing estimates in “Where do the estimates of a ‘housing shortage’ come from?” (Hutchins Center at the Brookings Institution, October 21, 2024).

Estimates of the size of the housing shortage will differ according to various factors: the underlying assumption about a “normal” vacancy rate; whether the calculations are being made at a national-, state-, or census-district level; whether the estimates include only metropolitan areas or also rural areas; and so on. Here’s a set of estimates on the size of the housing shortage from the National Association of Home Builders and others. The first three estimates are based on the “vacancy” method. The last estimate, from a study commissioned by the National Association of Realtors, is based on the fact that about 1.5 million homes were added to the total housing stock annually from 1968-2000, but only about 1.225 million homes have been added each year since then.
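As a back-of-the-envelope check on that last construction-gap estimate (taking 2024 as the end year, which is my assumption):

```python
# Rough construction-gap arithmetic behind the NAR-style estimate:
# ~1.5 million homes added per year 1968-2000, ~1.225 million since.
pre_2000_rate = 1.5e6      # homes added per year, 1968-2000
post_2000_rate = 1.225e6   # homes added per year, 2000 onward
years_since = 2024 - 2000  # end year is an assumption for illustration
gap = (pre_2000_rate - post_2000_rate) * years_since
print(f"Implied cumulative shortfall: {gap / 1e6:.1f} million homes")  # ~6.6
```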

The constraints on home-building, whether owner-occupied or rental, typically happen at the state and especially the local level. I won’t try here to delve into the range of proposals in metropolitan areas in the US and around the world for building more housing. But I will note that when vacancy rates are very low, subsidizing home-buyers or renters for their housing costs will only drive up the price. One way or another, making housing affordable needs a supply-side solution.

How Much Money from a Confiscatory Tax on the Uber-Wealthy?

Every now and then, I hear proposals to fund some program by taxing the wealthy–or even just confiscating wealth above a certain level. For the purposes of this post, I’m not interested in the question of whether this tax would be a good idea. (Shocking disclosure: I don’t think it’s a good idea.) I just want to know how much money is out there at the top of the US wealth distribution. Here’s a figure from the ever-useful FRED website maintained by the Federal Reserve Bank of St. Louis.

The top line shows total net worth for the top 1% of the wealth distribution, which totals about $44 trillion. The blue line focuses on the top 0.1% of the wealth distribution, which totals about $20 trillion. For perspective, the US has something over 120 million households. Thus, the top 1% is 1.2 million households, which have average wealth of $36 million. The top 0.1% is 120,000 households, which have average net worth of $166 million. There are about 800 US households with more than $1 billion in net worth–something less than .001% of the US population–and they hold a combined $5.7 trillion in net worth.
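The per-household averages follow directly from the totals:

```python
# Averages implied by the FRED totals quoted above (~120 million US households)
households = 120e6
print(44e12 / (0.01 * households))   # top 1%:   ~$36.7 million per household
print(20e12 / (0.001 * households))  # top 0.1%: ~$166.7 million per household
print(5.7e12 / 800)                  # billionaires: ~$7.1 billion per household
```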

Obviously, the total wealth held by the very wealthy adds up to a really large chunk of change. Just for the sake of illustration, say that we could scoop up all $44 trillion in wealth of the top 1%, or all $20 trillion of the wealth of the top 0.1%, and go on a spree.

(Of course, even proposing such a step is wildly impractical. It’s worth emphasizing that taxing wealth is a tricky business, because the wealth takes the form of ownership of corporate stock or companies, or in some cases land, real estate, and natural resources. Thus, somehow requiring that trillions in wealth be paid to the government would require that these assets be sold–but if wealth is being taxed at very high rates, it’s not clear who would buy them. There’s a reason why countries that tax wealth typically do so at rates of 1% per year, or less, and there’s a reason why most of the European countries that experimented with wealth taxes for a time have repealed them, as discussed here and here.)

Remember, we can only do this once: when the wealth has been taken, it’s not there to take again, and the incentives to build up such wealth (in a form that could be taxed) would be greatly reduced. But to put the amount in perspective, here are some potential uses for spending the wealth of the top 0.1%.

We could take $20 trillion and give all 330 million Americans a check for $60,000. Again, we could do this once.

We could take $20 trillion and pay off about half of the accumulated US federal debt.

We could take $20 trillion and pay off most of the expected shortfall for Social Security over the next 75 years.

Of course, one can add a number of other programs and priorities here, from education to the electricity grid and everything else. My point is that even the combined wealth of the very wealthy is not an infinite amount. The US GDP in 2024 will be about $28 trillion, so $20 trillion is about 8-9 months of US economic output. The federal budget for 2024 is nearly $7 trillion, so even $20 trillion is about three years of federal spending. Of course, if the $20 trillion in wealth was taxed at 1% per year, it would be about $200 billion per year–a real chunk of money, but a little more than the US budget deficit for any given month.
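For anyone who wants to check the arithmetic:

```python
# Putting $20 trillion of top-0.1% wealth in perspective (figures from the text)
wealth = 20e12
print(wealth / 330e6)       # ~$60,600: a one-time check per American
print(12 * wealth / 28e12)  # ~8.6 months of 2024 US GDP
print(wealth / 7e12)        # ~2.9 years of federal spending
print(0.01 * wealth / 1e9)  # $200 billion/year from a 1%-per-year wealth tax
```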

Just to be clear, I’m not arguing here against (or for) having those with very high wealth (or very high income) pay more in taxes. I’m pointing out that in the context of the broader US economy, the potential revenue available from taxing only billionaires is not huge, and even the revenue from taxing hundred-millionaires has its limits. It’s not a bottomless purse that can be tossed, over and over, at every political want.

Afterthought: There is an arithmetically illiterate comment, which sometimes makes the social media rounds, that because Jeff Bezos (or Elon Musk or Bill Gates) has a net worth of say, $100 billion and there are 7.5 billion people in the world, then that rich person could just give everyone in the world $1 billion, and still have $92.5 billion left over. I will leave it to the reader to locate the error in this calculation.

A Benefits Cliff in Washington, DC

Government attempts to assist households with low incomes face an inevitable practical problem. As income for a household rises, it will be necessary to phase out the government assistance. But what happens if–at least over a certain range of incomes–a previously low-income household that increases its earnings discovers that its government benefits are being decreased by the same amount? In effect, this household faces an effective tax rate of 100%, because any increase in earnings from a job is being 100% offset by a decline in the value of government assistance.
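Here is a minimal sketch of the effective-tax-rate arithmetic, with invented numbers rather than figures from the study discussed below:

```python
# Effective marginal tax rate (EMTR): the share of an extra dollar of earnings
# lost to higher taxes plus phased-out benefits. Numbers below are invented.
def emtr(extra_earnings, extra_taxes, benefits_lost):
    return (extra_taxes + benefits_lost) / extra_earnings

# Earn $10,000 more, pay $2,000 more in taxes, lose $8,000 in benefits:
print(emtr(10_000, 2_000, 8_000))  # -> 1.0: a 100% effective tax rate
```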

Maybe this scenario sounds hypothetical and extreme, but Elias Ilin and Alvaro Sanchez of the Federal Reserve Bank of Atlanta point out that it holds true for low-income households in Washington, DC (“Mitigating Benefits Cliffs for Low-Income Families: District of Columbia Career Mobility Action Plan as a Case Study,” Community and Economic Development Discussion Paper No. 23-1, September 2023).

This table looks at the situation of a single parent with two children. If the household earns $11,000, it is eligible for the benefits shown in the first column. If the household earns $65,000, then its eligibility for most of these benefits is phased out, as shown in the second column, and the taxes owed by the family rise as well. For those who don’t live and breathe these acronyms: TANF is Temporary Assistance for Needy Families (commonly called “welfare”), EITC is Earned Income Tax Credit, CTC is Child Tax Credit, SNAP is Supplemental Nutrition Assistance Program (often called “food stamps”), WIC is Special Supplemental Nutrition Program for Women, Infants, and Children, FRSP is Family Re-Housing Stabilization Program, LIHEAP is Low Income Home Energy Assistance Program, CCDF is Child Care and Development Fund, CHIP is Children’s Health Insurance Program, and ACA Premium Subsidy refers to the Affordable Care Act passed in 2010.

In passing, it seems appropriate to note that the administrative burden that this group of programs places on low-income households is quite real. Of course, as income levels or life circumstances change, support from these programs shifts as well.

But the main point here is that as earnings for this D.C. household rise from $11,000 to $65,000, the decline in benefits and rise in taxes almost completely offset the higher earnings. As the authors write:

[B]etween $11,000 and $65,000 our hypothetical family experiences no overall financial gain from an increase in earnings. … [A]n increase in income from $11,000 to $65,000 results in a complete or partial loss of most of the public assistance programs and tax credits. Paired with an increase in tax liability, these losses fully offset income gains. … We observe that at certain levels of employment income within the $11,000 to $65,000 range the family’s net resources dip. It means that the combined loss of public assistance programs outweighs the gain in income, meaning the family faces benefits cliffs. The first dip occurs at $22,000 when the family loses access to SNAP. A second benefits cliff occurs at $27,000, where the family loses TANF. That is followed by several small benefits cliffs that occur due to the loss of school meals, WIC, federal and state EITCs, Medicaid for Adults, and Medicaid for Children/CHIP. Finally, at $61,000 the last and the largest benefits cliff occurs, which entails a loss of the CCDF childcare subsidy.

The authors call this a “benefit cliff.” I have sometimes called it a “poverty trap” (for example, here and here), because of the work disincentives it provides to poor and near-poor households. There’s no simple way to address this situation. Cutting benefits to low-income households has an obvious downside for those families. Phasing out the benefits more slowly, as income rises, will mean providing benefits to more households and will cost substantially more. Ultimately, I think our society ends up relying on the fact that many low-income households would actually like to be self-supporting, to work, and to avoid or minimize their use of government assistance. But for other low-income households, the work disincentives of the poverty trap will bite.

The Fourth Industrial Revolution and the Future of Work

I occasionally pause for a moment to remember that just a few years ago, before we were concerned that AI would take all the jobs, we were worried that new robotics technologies would take all the jobs. And before that, we were worried that automation would take all the jobs. Can we learn something from previous industrial revolutions about how the current one might play out?

Arthur H. Goldsmith delivered the Presidential Address to the Southern Economic Association in 2022 on “The 4th Industrial Revolution and the Future of Work: Reasons to Worry and Policies to Consider.” The written version (co-authored with James F. Casey) is now posted online prior to publication in the Southern Economic Journal.

What were the first three industrial revolutions? As Goldsmith tells it:

The First IR emerged in England around 1765 with the advent of the steam engine to mechanize production, especially in the agriculture, mining, and transportation sectors. … The Second IR arrived in 1865, roughly a 100 years after the first. New sources of power—electricity and the internal combustion engine—were the signature technological advancements of the Second IR. … The Third IR—the Digital Revolution—begins around 1970, once again about a century after the preceding IR. Digitization entails representing information in bits, and the 3rd IR is about how a new collection of machines, and advances in coding (i.e., HTML) are used to—store, transfer, and analyze data (Goldfarb & Tucker, 2019). Introduction of the personal computer, super computers, and the internet of things enabled automation of industrial processes, space exploration, and dramatic advances in telecommunications, and science through research and development, such as the Human Genome Project.

So what’s the Fourth Industrial Revolution? Goldsmith argues that the technological changes of the last decade or two are not just a continuation of the Digital Revolution, but qualify as a separate Industrial Revolution:

Building on the era of digitization a dazzling array of new technologies have emerged in recent years including—3-D printing, nanotechnology, artificial intelligence, machine learning, quantum computing, big data, and cloud storage. Simultaneously, there have been vast improvements in existing technologies such as industrial robots, vision systems, sensors, and algorithms. The 4th IR is the result of integrating these technologies in creative and productive ways with robotics and artificial intelligence at the center of the transformation. Perhaps the most visible form of this evolution is Generative AI, which combines machine learning and artificial intelligence—guided by the architecture of the human brain—to learn about countless relations and patterns through exposure to vast amounts of data. This technology is then capable of producing data including text, images, audio, and code. …

The velocity, scope, and systems (i.e., production, management, and governance) impact of the 4th IR is striking. Robot density, the number of industrial bots per 10,000 workers, a standard barometer of manufacturing automation doubled globally between 2017 and 2022 (Heer, 2021, 2024)—an extraordinary pace—while in the U.S. robot density more than tripled between 1995 and 2017 (Bharadwaj & Dvorkin, 2019) and rose another 12% between 2020 and 2022. Likewise, the velocity of artificial intelligence activity in recent years is impressive. Corporate artificial intelligence investment spending in the U.S. rose by 423%, from 13 to 68 billion, between 2015 and 2020 (Statista Research Team, 2022), and global growth in artificial intelligence investment advanced at an even greater rate (Thormundsson, 2023) from 13 billion in 2015 to 92 billion in 2022.

The reader will note that by Goldsmith’s accounting, the US and global economy is really just at the start of this fourth Industrial Revolution. Thus, discussions about what effects it will have are necessarily speculative. As he writes: “An overarching question is will automation reduce the number of jobs or will mechanization generate so many new positions [that] employment, on net, rises?”

I confess that I am skeptical of phrasing the issue in terms of total jobs. An economist pointed out to me long ago that the single biggest determinant of the number of jobs in any economy is the population of the country. Thus, it seems to me as if the key issues are not about raw number of jobs, but about whether this industrial revolution will lead either to historically high levels of persistent unemployment, or to a pattern with usual levels of unemployment, but a higher share of workers in lower-wage jobs. Goldsmith argues the point this way:

This is the fundamental conceptual difference between the 4th IR and prior industrial revolutions during which advances in technology were considered skill-neutral—they improved the productivity and workplace outcomes for workers with different levels of formal education. The impact of this difference, skill-biased versus skill neutral, is profound, since it will undermine the job prospects and earnings for those with modest formal educational attainment—persons in the middle class—while advancing the economic situation for individuals who possess high levels of formal education.

At least to me, it isn’t obvious that the 4th Industrial Revolution is different in this way. I’ve been reading for decades that the 3rd Industrial Revolution involved “skill-biased” technical change, and in this way helped to generate growing inequality of incomes since about 1980. In addition, the very limited evidence now available on effects of artificial intelligence tools in the workplace suggests that they can be especially valuable to lower-skill workers, rather than higher-skill workers. The underlying reason is that AI tools in effect can make pre-existing expertise more available to everyone, which is a bigger boost for those with less experience or lower skill.

But while it isn’t (yet) clear to me that AI will lead to bigger technological displacement of workers than previous industrial revolutions, it nonetheless remains true that–in a healthy economy which is ever-shifting and ever-evolving–some workers will find that the skills and experience they developed in an existing job are no longer valued as highly in the market. In response to this ongoing issue, I’ve argued in the past for “active labor market policies” (for example, here and here). The US currently emphasizes “passive” labor market policies like paying unemployment insurance or providing safety net support to low-income households, while “active” labor market policies would expand the government role in job search and training.

In particular, Goldsmith emphasized a potential role for the federal government as a coordinator and accreditation mechanism for “certificate programs.” As Goldsmith points out, private firms like Google have taken some prominent steps in this direction already:

Google rolled out the Google Career Certificate Program—a skill development initiative—in 2020 (Google, 2021; Hess, 2020). The Program offers Certificates in: IT Support, Data Analytics, Project Management, UX Design, and Android Development. The curriculum for each certificate was developed by Google and is taught by Google employees, solely online, using the learning platform Coursera. These certificate programs are self-paced, designed to be completed in 3–6 months, and admission does not require a college degree. It costs $49 a month to use the Coursera platform, and Google has funded 100,000 need-based scholarships for eligible applicants. In addition, Google awarded $10 million in grants to three nonprofits that partner with Google to provide workforce development to targeted groups including women, veterans, and underrepresented groups. The Google Career Certificates Employer Consortium (Google, 2024) includes over 150 U.S. companies including Deloitte, Target, and Verizon who consider Google Career Certificate graduates for entry-level jobs, which typically require a four-year college degree. Moreover, certificate holders have access to an exclusive job platform where they are fast-tracked when applying for jobs with Consortium employers. This initiative could be scaled up by the inclusion of additional firms and the government independently developing, or codeveloping, additional certificate programs, and leading the delivery.

To me, a main challenge for certificate programs is one of focus: each one needs to be laser-focused on specific skills, and not get loaded up with a bunch of other topics and skills that might be nice, but should be separated off into a different certificate. This kind of focus helps to keep down costs of the program and the time needed to qualify for the certificate, and thus will encourage workers to see it as a viable option.

A Federal Guarantee of Paid Vacation?

The rules governing the US labor market are just different from those in other high-income countries in some ways. One difference involves paid time off. Here’s a figure from Betsey Stevenson, “A federal guarantee for earned paid time off” (Hamilton Project at the Brookings Institution, October 2024).

You will notice that in this comparison across various high-income countries, the US does not have a bar at all. Hmmm.

There are various ways to respond to this figure. One is to point out that the rules sketched here apply to full-time, full-year workers, and often only apply after the worker has been with an employer for several years. Stevenson writes:

These estimates report the statutory required number of paid leave days based on full-time, full-year workers. Many countries require a waiting period before paid leave becomes available. Canada and Japan have fewer days required for recent employees with the full amount of statutory leave being granted after several years of tenure with an employer. Only countries that statutorily require public holidays to be paid are listed as requiring paid holidays. However, in many countries that do not mandate pay for public holidays, workers often receive holidays off with pay or are compensated with another day off by custom or collective bargaining agreements.

Another way to respond is to wonder how these rules and laws about paid vacation and paid holidays interact with other kinds of paid leave. Stevenson notes that if we put together a graph with those kinds of leave, the US would still be the country with no bar at all on the graph: “In addition to these requirements for annual leave as part of employees’ compensation packages, most advanced economies also require additional amounts of paid sick leave. In addition, all advanced economies have national programs ensuring that people have access to paid family and medical leave …”

Stevenson sketches how a US policy along these lines might work: in her version, paid time off would accrue based on hours worked:

I propose that earned time off should accrue at a rate of one hour per 50 hours worked (2 percent of hours worked per week) in the first two years of the policy, increasing to one hour per 25 hours worked (4 percent of hours worked per week) after two years. In the first two years, workers must be able to accrue up to 40 hours a year; after two years, they must be able to accrue up to 80 hours a year. The reason for capping the earned leave is so that employers can simply offer full-time, full-year employees 80 hours a year (40 in the first two years), without needing to count hours. It is an administratively easy option.
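As I read the quoted proposal, the accrual schedule works out as follows; treat this as my paraphrase rather than legislative text:

```python
# Sketch of Stevenson's accrual schedule as quoted above: 1 hour of paid leave
# per 50 hours worked (capped at 40/year) in the policy's first two years,
# then 1 hour per 25 worked (capped at 80/year).
def annual_paid_leave(hours_worked: float, policy_year: int) -> float:
    if policy_year <= 2:
        return min(hours_worked / 50, 40.0)
    return min(hours_worked / 25, 80.0)

print(annual_paid_leave(2000, 1))  # full-time worker, year 1 -> 40 hours
print(annual_paid_leave(2000, 3))  # full-time worker, year 3 -> 80 hours
```

The caps are what let an employer skip the hour-counting entirely and simply grant a full-time, full-year employee 40 hours (later 80 hours) per year.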

Those interested in sorting through specific questions about such a policy will find a lot of answers in Stevenson’s article. Here, I’ll just note that there are roughly a jillion questions one might ask about a paid leave policy: for example, it’s not clear how it works for workers who change jobs frequently. Some high-income countries have developed an “insider/outsider” dynamic, where “insider” workers who have an attachment to a certain employer receive lots of benefits, but firms have an incentive to figure out ways of hiring that won’t make workers eligible for benefits, and a group of “outsider” workers develops without access to the benefits. I would worry that a paid leave policy will not actually be a guarantee for all workers, but instead will tend to leave out younger workers, as well as those with lower incomes and less connection to the labor force. In addition, labor market policies that work for much smaller countries (in the graph above, Iceland and Luxembourg each have a national population of less than 1 million) or countries with a much stronger tradition of unionization may not be well-suited for the enormous and widely varied US economy. US states, the “laboratories of democracy,” are already experimenting with various kinds of paid leave policies.

Of course, one answer to any practical questions about paid leave is that other countries have already in fact been doing it. But in addition, building political support for a paid leave policy in the United States is also a cultural issue. The typical American worker puts in more hours per year than workers in most other high-income countries; indeed, many American workers don’t take all of the paid vacation to which they are currently entitled. Stevenson argues:

Americans work more hours per year than workers in any other advanced country. Many Americans view hard work and long hours as a path to career advancement, personal fulfillment, and higher incomes. In a competitive job market, some Americans fear that taking too much time off could jeopardize their job security. They may worry about losing wages, falling behind, being replaced, or missing out on promotions and career advancement opportunities. Many employees have high workloads and believe that taking time off would lead to an unmanageable backlog of work. And, certainly many workers work long hours to make ends meet. In the U.S. there is both a stigma against taking time off from work and no federal right to do so. … A national policy guaranteeing paid earned time off would shift attitudes toward time off from work and make earned paid time off a basic right for workers. In America, if you work hard and play by the rules, you should be able to afford to take a day off and not lose pay.

The National Childcare Program During World War II

The United States has had a nationwide childcare program at one time in its history: a temporary program during World War II. Tim Sablik of the Federal Reserve Bank of Richmond tells the story and summarizes some economic research on the topic in “When Uncle Sam Watched Rosie’s Kids: To support women working on the homefront in World War II, the U.S. government funded a temporary nationwide child care program” (Econ Focus: Federal Reserve Bank of Richmond, Fourth Quarter 2024).

As Sablik reminds us, only about 10% of married women reported working outside the home in the 1920s. A number of female-dominated professions like teaching had “marriage bars”–that is, a woman was barred from continuing in the job if she got married, on the basis that married women should be focused on raising children. But World War II changed the terms of the social debate. As Sablik writes:

Once the United States entered the war in late 1941, the country needed to mobilize both the personnel and the materials to fight a war on two fronts. While American men reported to training camps and shipped off overseas, government officials called upon women to support the production of tanks, planes, ships, munitions, and other supplies at home. According to a 1953 report from the U.S. Department of Labor’s Women’s Bureau, nearly half of all single women were already in the workforce prior to the war. But the labor force participation rate for married women was much lower — around 15 percent. For policymakers hoping to ramp up war production, the report’s authors observed, “Married women constituted the country’s greatest labor reserve.”

Many of these married women were also mothers, so bringing them into the workforce meant grappling with the issue of child care. During a 1943 hearing before the Senate Committee on Education and Labor, witnesses shared stories of children locked in cars or chained to trailers while mothers were at work. Factories reported an increase in absenteeism on Saturdays when schools were closed. Others expressed concerns about rising juvenile delinquency among school-age children left to their own devices after school and during the summer.

The legislative path seems to have worked like this. In 1940, Congress passed the National Defense Housing Act, often called the Lanham Act, aimed at building more housing. But by 1941, Congress had expanded the law so that its funding could support “any facility necessary for carrying on community life substantially expanded by the national-defense program.” In 1942, the House Committee on Public Buildings and Grounds agreed, without public debate or legislation, that the funding could also be used for child care. By 1943, Lanham Act funding was available for 1,150 nurseries. At the peak in 1944, there were 3,100 centers with about 130,000 children enrolled across the country.

Sablik draws on research from Chris Herbst for some descriptive details:

Lanham nurseries provided care for children from ages 2 to 5, while child care centers looked after school-age children before and after school and during the summer. Consistent with the Children’s Bureau’s recommendations, few if any Lanham facilities provided care for children under the age of 2, despite expressed demand from working mothers with young children. According to Herbst, it was typical for preschool children to spend 12 hours per day at the nurseries. When school was in session, older children might spend a few hours before and after school. The availability of care also varied according to local need. In communities with factories operating 24 hours per day, centers were open at night.

To get the program up and running quickly, FWA [Federal Works Agency] administrators rented and reused existing buildings and relied on schoolteachers for staff. Federal agencies created a training program for Lanham teachers and volunteers, and some cities partnered with local universities to create their own training. Federal guidelines recommended keeping classrooms small, with a 10:1 student-to-teacher ratio, and Herbst found that most centers followed this recommendation. Students were served lunch, a snack, and even dinner in cases where centers were open late. That said, quality varied, as the FWA left operations largely up to the discretion of local administrators. In his article, Herbst cited the example of a center in Baltimore that had 80 children in one room with one bathroom, and those children had to cross a highway to reach the playground.

It’s not clear in a statistical sense how much this national child care effort actually increased the labor force participation of women. There was no rule limiting the centers to working mothers. The centers were typically established in places where the share of mothers who were working was already quite high. Three days after the Japanese surrender in August 1945, the program administrators announced that the program would be wound down. The expectation was that women would leave the workforce, freeing up the jobs for returning soldiers.

However, follow-up research “found lasting positive effects on children who grew up in areas with Lanham centers, including generally improved outcomes in high school and higher earnings in adulthood.” Given the extreme disruptions of civilian life during World War II, and many changes in the United States since then (like smaller average family size and higher education and incomes for parents), it would be unwise to extrapolate too readily from this earlier program. But the outcomes are nonetheless interesting.

For those who want a taste of the academic research on outcomes for children, useful starting points are:

  • Derrington, Taletha M., Alison Huang, and Joseph P. Ferrie. “Life Course Effects of the Lanham Preschools: What the First Government Preschool Effort Can Tell Us About Universal Early Care and Education Today.” National Bureau of Economic Research Working Paper No. 29271, September 2021. (Article available with subscription.)
  • Ferrie, Joseph P., Claudia Goldin, and Claudia Olivetti. “Mobilizing the Manpower of Mothers: Childcare under the Lanham Act during WWII.” National Bureau of Economic Research Working Paper No. 32755, July 2024. (Article available with subscription.)
  • Herbst, Chris M. “Universal Child Care, Maternal Employment, and Children’s Long-Run Outcomes: Evidence from the US Lanham Act of 1940.” Journal of Labor Economics, April 2017, vol. 35, no. 2, pp. 519-564. (Article available with subscription.)

Larry Summers on the Economics of AI

Joe Walker serves as interlocutor in “Larry Summers — AGI and the Next Industrial Revolution” (The Joe Walker Podcast, October 22, 2024). Here are a couple of points that caught my eye, but there is much more in the interview itself.

Here’s Summers on the long-term increase in economic output over time and the interrelationship with technology:

[T]he more I study history, the more I am struck that the major inflection points in history have to do with technology. I did a calculation not long ago, and I calculated that while only 7% of the people who’ve ever lived are alive right now, two-thirds of the GDP that’s ever been produced by human beings was produced during my lifetime. And on reasonable projections, there could be three times as much produced in the next 50 years as there has been through all of human history to this point. … Of course, I think that this [AI] technology potentially has implications greater than any past technology, because fire doesn’t make more fire, electricity doesn’t make more electricity. But AI has the capacity to be self-improving.

There’s an interesting dynamic between strong technological advance in a given sector and the share of that sector in the economy. Imagine that it becomes much cheaper to make something, so that its price is falling sharply. The quantity demanded of the good increases, at least up to a point. In the context of the economy as a whole, the size of a given sector is determined by the quantity it produces multiplied by the price. If price keeps falling, and quantity demanded doesn’t keep rising as quickly, then the share of a high-productivity sector in the economy will decline. Similarly, as AI technologies plummet in price, it’s at least possible that the output share of AI technologies in the economy will decline as well. Here’s Summers:

[S]ectors where there’s activities where … there is sufficiently rapid growth almost always see very rapidly falling prices. And unless there’s highly elastic demand for them, that means they become a smaller and smaller share of the total economy. So we saw super rapid growth in agriculture, but because people only wanted so much food, the consequence of that was that it became a declining share of the economy. And so even if it had fast or accelerating growth that had less and less of an impact on total GDP growth. In some ways we’re seeing the same thing happen in the manufacturing sector where the share of GDP that is manufacturing is declining. But that’s not a consequence of manufacturing’s failure. It’s a consequence of manufacturing’s success. 

A classic example was provided by the Yale economist Bill Nordhaus with respect to illumination. The illumination sector has made vast progress, 8, 10 per cent a year for many decades. But the consequence of that has been that on the one hand, there’s night little league games played all the time in a way that was not the case when I was a kid. On the other hand, candlemaking was a significant sector of the economy in the 19th century, and nobody thinks of the illumination sector as being an important sector of the economy [today]. So I think it’s almost inevitable that whatever the residuum of activities that inherently involve the passage of time and inherently involve human interaction, it will always be the case that 20 minutes of intimacy between two individuals takes 20 minutes.

And so that type of activity will inevitably become a larger and larger share by value of the economy. And then when the productivity growth of the overall economy is a weighted average of the growth individual sectors, the sectors where there’s the most rapid growth will come over time to get less and less weight.
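A toy calculation (mine, not Summers’) shows how a sector’s nominal share can shrink even as its output booms, so long as its price falls faster than quantity demanded rises:

```python
# If a sector's price halves each year while quantity grows only 30%,
# its share of nominal GDP steadily shrinks. All numbers invented.
price, quantity, other_gdp = 100.0, 1.0, 1000.0
for year in range(5):
    share = price * quantity / (price * quantity + other_gdp)
    print(f"year {year}: nominal share = {share:.1%}")
    price *= 0.5      # rapid technical progress drives the price down
    quantity *= 1.3   # demand grows, but inelastically
```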

To put it another way, the economic issues about AI do not involve the capabilities of the technology in splendid isolation; instead, it’s how AI technology interacts with workers and consumers, with production and consumption of goods and services. Some tasks that workers currently do will be replaced, but possibilities for brand-new goods and services, as well as improvements in existing ones, will be created. I do not pretend to know how it will all work out in the decades to come, but I do know that in the globalized world economy, the AI cat is already out of the bag. Paul Romer (Nobel ’18) offered a pithy aphorism a few years ago: “Everyone wants progress. Nobody wants change.” Alternatively, one might say that some folks are fearful or hesitant about change until or unless society or government has full control over the direction of change and complete knowledge of its future effects–in which case, of course, it barely qualifies as “change” at all.

A Prescription for Fixing the US Healthcare System

Among the major issues not being discussed in the US presidential campaign are those facing the US healthcare system. The two main concerns are well-known.

One is high cost. The US economy spent about $12,500 per person on health care in 2022, according to the OECD. The second- and third-highest countries, Switzerland and Germany, spend about $8,000 per person on health care. Canada is at about $6,300 per person, about half the US level. The United Kingdom is even lower at $5,600 per person. I’m not in favor of cutting US health care spending by half or more! But high and rising health care costs for government programs like Medicare and Medicaid are part of what makes forecasts for the US budget deficit so dire. And for those of us who get our health insurance through our employers, the high and rising cost of health insurance makes it harder to get increases in our paychecks, as well.

The other main concern is the number of people who do not have access to health insurance. Census Bureau statistics suggest that 11% of working-age Americans (ages 19-64) and about 6% of children did not have health insurance in 2023. Many of these households fall through the cracks of the current system: they don’t have jobs that provide health insurance, they have enough income that they may not qualify for Medicaid, but they don’t have enough income that paying for health insurance looks affordable to them. Up to about half of the uninsured are actually eligible for health insurance at zero cost to them, whether private or public, but lack of knowledge and the administrative burdens of applying are too much for them.

So what might be done? The Summer 2024 Journal of Policy Analysis and Management has a useful back-and-forth that identifies some possibilities, issues, and tradeoffs. On one side, Liran Einav and Amy Finkelstein summarize their arguments in a book they published in 2023, We’ve Got You Covered: Rebooting American Health Care. But redesigning the US health insurance system involves some big leaps, and as they acknowledge, their plan may be politically impractical. Thus, Jason Furman discusses the possibilities of more incremental–but potentially still important–health insurance reform.

The Einav and Finkelstein plan focuses on the idea of giving all Americans access to a basic level of health care at no charge to them. They argue that when other countries have included out-of-pocket cost-sharing for patients–say, co-pays, co-insurance, or deductibles–those countries also end up having copious and often complex exceptions: say, for pregnant women, veterans, the unemployed, those with lower income levels, and so on and so on. Rather than create what can easily become an administrative swamp, they would drop cost-sharing for this basic level of care entirely. They argue that “cost-sharing in universal coverage is on a collision course with itself.”

What would be included in this basic level of care? Einav and Finkelstein get a little fuzzy here, and start talking about “gray areas.”

Basic coverage must cover all essential medical care, including primary and preventive care, specialist care, and hospital care—both emergency and non-emergency. Much of what this means is obvious. Flu shots and appendectomies are in. Purely cosmetic plastic surgery is out. But there is also a large gray area of specific types of care where there are cases that can be made both for exclusion and for inclusion in basic care. Infertility treatment, dental care, vision care, physiotherapy, treatment of erectile dysfunction, various forms of long-term care—the list goes on and on. We deliberately do not weigh in here, other than to say that the starting point must be to define a budget for basic care—how much taxpayer money we are willing to devote to health care. Only then can we have a meaningful discussion about these gray area decisions. … [M]ost countries have a formal process for considering whether to cover new treatments under universal health care. We will need one too.

In addition to the question of what will be covered, there is also a question of how it will be covered. The social contract is about providing essential medical care, not providing a high-end experience. There are many non-medical aspects of care that may be desirable without being essential. The ability to see the doctor of your choice at your preferred timing and location, for example, or semi-private hospital rooms. This would be substantially limited under basic coverage. Basic coverage would likewise involve longer wait times for non-urgent care than what people with private health insurance or Medicare are currently accustomed to. Wait times would be closer to those experienced by Medicaid patients, or by veterans who receive their medical care through the Veterans’ Administration (VA).

Thus, the Einav-Finkelstein vision is that everyone would receive basic care through the same system, but they estimate that perhaps two-thirds of Americans would also carry supplemental insurance on top of that. To put it another way, employers who now provide health insurance could pay part of the premium to the government to cover basic health care, and the rest of the premium would be converted into top-up insurance.

They argue that we could “fulfill our social contract without tackling the other multi-trillion-dollar elephant in the room: the problem of high and often inefficient healthcare spending. … Which is a relief, since we don’t (yet) have the silver bullet for dramatically lowering healthcare spending while fulfilling the dictate to “do no harm” to the patient. Nor, we hasten to add, does anyone else. Despite what you may have heard on TV. It’s indisputable that there is a lot of waste in U.S. health care. But the old adage about advertising is also true: half of spending is wasted, we just don’t know which half.”

Jason Furman was a top economic adviser to President Obama, and thus a supporter of the Patient Protection and Affordable Care Act of 2010, which reduced the number of those without health insurance by about 22 million, at an annual cost of more than $100 billion. But one political advantage of the legislation was that for many (not all!) people who already had private or government health insurance, their avenues to health care were not much changed.

As Furman points out, it’s easier to generalize about “basic care” than to define it in detail. It’s hard to imagine a politically practical “basic care” system that includes less than Medicaid–and Medicaid already pays such low amounts that many health care providers refuse to take additional patients. How “basic” could “basic” be? And are Americans willing to tolerate “basic”? Furman notes:

As Einav and Finkelstein discuss at length, much of what is provided by the health system is “amenities,” which cost money and resources but do not contribute to better health outcomes. This distinction between the primary purpose and the amenities is rarely made in other spheres. For example, imagine a management consultant studying the $150 billion annually spent on hotel rooms in the United States. They might conclude that about $125 billion of that sum was wasted because hostels could have provided the same shelter, with a bed, access to toilet, and showers, at a much lower cost. But this recommendation would miss the point.

Furman argues for cost-sharing on health care expenses, on the grounds that people need to have some connection to what they are actually paying for health care; if they don’t, they aren’t likely to think about tradeoffs. He writes: “The financing of healthcare is already very opaque with a typical family of four spending about $32,000 annually but possibly only noticing the about $3,000 they pay out of pocket or maybe also the about $6,000 they contribute to the premium for their plan. The rest of the money is in the form of foregone wages (the incidence of the employer contribution for health insurance) and taxes for healthcare.”
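
Furman’s round numbers make the opacity easy to quantify. Here is a quick back-of-the-envelope calculation; the dollar figures come from the quote above, while the split into “visible” and “hidden” is my own framing:

```python
# Splitting Furman's illustrative figures for a typical family of four
# (dollar amounts per year, taken from the quote above).
total_spending = 32_000   # total health spending attributable to the family
out_of_pocket = 3_000     # co-pays and other costs the family sees directly
premium_share = 6_000     # the family's visible contribution to the premium

visible = out_of_pocket + premium_share
hidden = total_spending - visible   # foregone wages and taxes

print(f"visible to the family: ${visible:,} ({visible / total_spending:.0%})")
print(f"hidden in wages and taxes: ${hidden:,} ({hidden / total_spending:.0%})")
# visible to the family: $9,000 (28%)
# hidden in wages and taxes: $23,000 (72%)
```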

He points out that cost-sharing for health care expenses in some form is common across other countries. Indeed, the existing level of cost-sharing on health care, as a share of household consumption spending, is not that different in the US than in many other countries.

Furman writes:

One thing I learned from working on the ACA [Affordable Care Act] was that no one had all or even most of the answers, especially when it comes to delivery system reform … The answer is to take more seriously how to put in place systems and processes that can discover better answers over time, not simply assume that one knows them in advance—let alone knowing whether they will be politically or socially sustainable. …

But it is also wrong to ignore the fallibility of government or the people that implement its policies either. Medicare is a poorly designed insurance plan that would not even qualify as insurance under the ACA mandate because of its unlimited cost sharing (despite having first dollar coverage for many services), as a result it is basically unusable as a sole insurance plan—with 90% of beneficiaries supplementing it with something else. It took the federal government decades to add a prescription drug benefit to Medicare, an omission that would have driven any private insurer out of business. And even when government plans have come in under cost, like the prescription drug benefit, a big part was because of innovations that were unanticipated or underestimated by the creators of these plans, like tiered formularies for prescription drugs.

I do not know the answer, but it should involve some of what is best about markets while remedying what is worst about them … It also needs to do what is best about the government while building in a process of innovation and change, something like the Center of Medicare and Medicaid Innovation Center. And the most vexing issues in healthcare are how to balance its cost against the many other desires and priorities people have—so a mechanism that makes costs and tradeoffs more transparent is essential to ensuring the competition and innovation process will lead to better results over time.

I don’t have a one-size-fits-all answer for how to fix the US healthcare system either. But I do think it’s important that people have a better sense of what health insurance actually costs. One proposal with cost estimates from the Congressional Budget Office would be to look at the range of employer-provided health insurance across employers, and figure out the median amount provided, which CBO estimates at “$8,900 a year for individual coverage and $21,600 a year for family coverage.” That median amount would continue to be excluded from taxation. But for health insurance plans costing more than this amount, the additional amount would be counted as income to the worker. The CBO estimates that this would raise more than $100 billion per year by 2027.
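
For concreteness, here is a minimal sketch of the cap-and-tax arithmetic. The two cap figures are the CBO-estimated medians quoted above; the function name and the example premiums are my own hypothetical illustration, not language from the CBO proposal.

```python
# A sketch of the capped exclusion: premiums up to the median stay
# untaxed, and anything above the cap is added to taxable income.
# (Cap values are the CBO-estimated medians quoted above; the rest
# is my own illustration.)

CAPS = {"individual": 8_900, "family": 21_600}  # median premiums, $/year

def taxable_excess(premium: float, coverage: str) -> float:
    """Portion of an employer-paid premium counted as taxable income."""
    return max(0.0, premium - CAPS[coverage])

print(taxable_excess(8_000, "individual"))   # 0.0 (below the cap: untaxed)
print(taxable_excess(25_000, "family"))      # 3400.0 (excess over $21,600)
```

Roughly speaking, the revenue in the CBO estimate would then come from taxing that excess at each worker’s marginal rate, summed across workers with above-median plans.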