The New AI Technologies: How Large a Productivity Gain?

The new artificial intelligence technologies are getting a lot of buzz. How are they likely to be used, and how will they affect productivity? It seems to me obviously too early to know, but just the right time to start thinking concretely about plausible outcomes. In that spirit, Martin Neil Baily, Erik Brynjolfsson, and Anton Korinek discuss “Machines of mind: The case for an AI-powered productivity boom” (Brookings Institution, May 10, 2023).

The authors focus on what they call “foundation models,” which are “vast systems based on deep neural networks that have been trained on massive amounts of data and can then be adapted to perform a wide range of different tasks.” Examples include “large language models” like ChatGPT (from OpenAI), Bard (from Google), and Claude (from Anthropic). “But generative AI is not limited to text: in recent years, we have also seen generative AI systems that can create images, such as Midjourney, Stable Diffusion, or DALL-E, and more recently multi-modal systems that combine text, images, video, audio and even robotic functions.”

Evidence is accumulating about how these technologies will affect actual jobs. The unifying theme here is saving time: that is, just as I save time when I can download articles while sitting at my desk, rather than walking through library stacks and making photocopies, lots of existing jobs can be done more quickly with the new technologies. Some examples:

There is an emerging literature that estimates the productivity effects of AI on specific occupations or tasks. Kalliamvakou (2022) finds that software engineers can code up to twice as fast using a tool called Codex, based on the previous version of the large language model GPT-3. That’s a transformative effect. Noy and Zhang (2023) find that many writing tasks can also be completed twice as fast and Korinek (2023) estimates, based on 25 use cases for language models, that economists can be 10-20% more productive using large language models.

But can these gains in specific tasks translate into significant gains in a real-world setting? The answer appears to be yes. Brynjolfsson, Li, and Raymond (2023) show that call center operators became 14% more productive when they used the technology, with gains of over 30% for the least experienced workers. What’s more, customer sentiment was higher when interacting with operators using generative AI as an aid, and perhaps as a result, employee attrition was lower. The system appears to create value by capturing and conveying some of the tacit organizational knowledge about how to solve problems and please customers that previously was learned only via on-the-job experience.

It’s easy enough to run across other examples. Two MIT researchers, Shakked Noy and Whitney Zhang, have a working paper up called “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence” (MIT working paper, March 2, 2023).

We examine the productivity effects of a generative artificial intelligence technology—the assistive chatbot ChatGPT—in the context of mid-level professional writing tasks. In a preregistered online experiment, we assign occupation-specific, incentivized writing tasks to 444 college-educated professionals, and randomly expose half of them to ChatGPT. Our results show that ChatGPT substantially raises average productivity: time taken decreases by 0.8 SDs and output quality rises by 0.4 SDs. Inequality between workers decreases, as ChatGPT compresses the productivity distribution by benefiting low-ability workers more. ChatGPT mostly substitutes for worker effort rather than complementing worker skills, and restructures tasks towards idea-generation and editing and away from rough-drafting. Exposure to ChatGPT increases job satisfaction and self-efficacy and heightens both concern and excitement about automation technologies.

There have been studies for a few years now suggesting that AI technologies can help doctors make more accurate diagnoses. A recent study along these lines that caught my eye is “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum,” by John W. Ayers, Adam Poliak, Mark Dredze, et al. (JAMA, April 28, 2023). From the abstract:

In this cross-sectional study of 195 randomly drawn patient questions from a social media forum, a team of licensed health care professionals compared physicians’ and chatbots’ responses to patients’ questions asked publicly on a public social media forum. The chatbot responses were preferred over physician responses and rated significantly higher for both quality and empathy.

AI systems are also pushing research forward more rapidly. Here’s an article from Steven Rosenbush in the Wall Street Journal (“Biologists Say Deep Learning Is Revolutionizing Pace of Innovation,” March 22, 2023).

A milestone in computational biology was announced last July, when Alphabet Inc.’s DeepMind Technologies subsidiary announced that its AlphaFold2 AI system had been used to predict the three-dimensional structure of nearly all proteins known to science, essentially solving a problem that researchers had been trying to crack for the past 50 years. On March 16, Facebook-parent Meta Platforms Inc. said its research arm, Meta AI, had used its new AI-based computer program known as ESMFold to create a public atlas of 617 million predicted proteins. Like OpenAI’s ChatGPT, the Meta tool employs a large language model, which can predict text from a few letters or words.

What about overall effects? Tyna Eloundou (who works for OpenAI), Sam Manning, Pamela Mishkin, and Daniel Rock have a working paper called “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” (arXiv working papers, March 23, 2023). They write:

We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. …
The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks.

In a similar spirit, a much-cited report from two Goldman Sachs analysts, Joseph Briggs and Devesh Kodnani, considers “The Potentially Large Effects of Artificial Intelligence on Economic Growth” (March 26, 2023, not directly at the Goldman Sachs site but available on the web if you hunt for it). They write:

We estimate that generative AI could raise annual US labor productivity growth by just under 1½pp over a 10-year period following widespread adoption, although the boost to labor productivity growth could be much smaller or larger depending on the difficulty level of tasks AI will be able to perform and how many jobs are ultimately automated. The boost to global labor productivity could also be economically significant, and we estimate that AI could eventually increase annual global GDP by 7%. Although the impact of AI will ultimately depend on its capability and adoption timeline, this estimate highlights the enormous economic potential of generative AI if it delivers on its promise.

Again, it seems to me too early to trust any specific estimates here. But several themes of this line of research seem especially salient to me.

First, these are practical discussions of how the new technologies can help workers in various jobs. Thus, they help us stop thinking about the new AI technologies as the embodiment of bad science fiction movies (and in fairness, a few good science fiction movies, too!), and instead to think about practical realities. These technologies are not about being taken over by sentient robots. They are about humans being able to do their work more quickly.

Second, an interesting theme of many of these studies is that the new tools tend to help the lesser-skilled workers in any occupation by more. The better-skilled workers have often already developed their own shortcuts, information sources, and methods, and are drawing on their greater mental database of past experiences. The AI tools often help other workers catch up.

Third, we apparently are doomed to replay, one more time, one of the long-standing public dramas of new technologies: that there is only a fixed amount of work to do, and if existing workers can do it faster, then the available jobs will shrink dramatically, leading to mass poverty. This fear has manifested many times in the past. Some of the examples I’ve collected over time include: worries from the US Secretary of Labor about automation and job loss in 1927; fear of robotics and automation in 1940; the US government commission on the dangers of automation and job loss in 1964; and Nobel laureate Wassily Leontief’s prediction in the early 1980s that automation would lead to mass unemployment. A few years back I linked to an essay by Leslie Willcocks called “Robo-Apocalypse Cancelled,” going through reasons why predictions of a technology-driven economic disaster never quite seem to happen.

But big picture, think about all the technological changes of the last two decades–heck, over the past two centuries. Surely, if technological advances and automation were likely to lead to mass unemployment, we would already have arrived at a world where only 10% or fewer of adults have jobs? But instead, needing many fewer workers for jobs like growing wheat, lighting streetlights, filling out accounting ledgers by hand, operating telephone switchboards, making a ton of steel, and so on, has opened the way for new occupations to arise. I see no compelling reason why this time and this technology should be different.

The IRS Audit Algorithm and Racial Effects

The Internal Revenue Service gets something north of 100 million individual tax returns each year. So how does the IRS decide how to deploy its 6,500 auditors? It counts on computer programs to flag returns that seem more likely to be understating income. For example, a highly-paid two-earner couple might have income well into the mid-six-figures, but if what’s on the tax form matches what their employers and financial institutions reported, there’s not likely to be much gain in auditing them (at least not without some additional information). Nowhere on the tax form is the race of a taxpayer specified, and thus it is impossible for the computer algorithm that decides who gets audited to take race into account in any explicit way. Nonetheless, it appears that the algorithm audits Black taxpayers substantially more often than white taxpayers.

Hadi Elzayn, Evelyn Smith, Thomas Hertz, Arun Ramesh, Robin Fisher, Daniel E. Ho, and Jacob Goldin dig into the evidence in “Measuring and Mitigating Racial Disparities in Tax Audits” (Stanford Institute for Economic Policy Research, January 2023). They write: “Despite race-blind audit selection, we find that Black taxpayers are audited at 2.9 to 4.7 times the rate of non-Black taxpayers.” The research result has gotten considerable press coverage, like the recent “I.R.S. Acknowledges Black Americans Face More Audit Scrutiny” in the New York Times (May 15, 2023).

The method behind the study is interesting. Given that tax return and audit data doesn’t include race, on what basis can the researchers reach this conclusion? They infer race from data on names and where people live. The authors write:

Through a unique partnership with the Treasury Department, we investigate these questions using comprehensive microdata on approximately 148 million tax returns and 780,000 audits. … To address the problem of missing race, we use Bayesian Improved First Name and Surname Geocoding (BIFSG), imputing race based on full name and census block groups (Imai and Khanna, 2016; Voicu, 2018). We then propose and implement a novel approach for bounding the true audit disparity by race from the (imperfectly measured) BIFSG proxy. By individually matching a subset of the tax data to self-identified race data from other administrative sources, we provide evidence that the assumptions underlying our bounding approach are satisfied in practice.
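To make the imputation step concrete, here is a deliberately simplified sketch of the Bayesian updating idea behind BIFSG, with just two categories and invented probabilities; the actual method also incorporates first names and draws on detailed Census tables rather than these made-up numbers:

```python
# Minimal sketch of the Bayesian updating behind BIFSG-style imputation.
# All probabilities here are invented for illustration; the real method
# uses Census tables for surnames, first names, and block groups.

p_surname_given_race = {"WASHINGTON": {"Black": 0.90, "non-Black": 0.10}}
p_race_given_block = {"block_A": {"Black": 0.25, "non-Black": 0.75}}

def bifsg_posterior(surname: str, block: str) -> dict:
    """Posterior P(race | surname, block) via Bayes' rule, then normalize."""
    joint = {
        race: p_surname_given_race[surname][race] * p_race_given_block[block][race]
        for race in ("Black", "non-Black")
    }
    total = sum(joint.values())
    return {race: round(p / total, 3) for race, p in joint.items()}

print(bifsg_posterior("WASHINGTON", "block_A"))
# {'Black': 0.75, 'non-Black': 0.25} with these made-up inputs
```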

When the researchers dig down into the data, they find that the difference in audits by race arises almost entirely in one category: audit rates for the working poor who are claiming the Earned Income Tax Credit. They write: “Black taxpayers claiming the EITC are between 2.9 and 4.4 times as likely to be audited as non-Black EITC claimants. … We find that the disparity cannot be fully explained by racial differences in income, family size, or household structure, and that the observed audit disparity remains large after conditioning on these characteristics. For example, among unmarried men with children, Black EITC claimants are audited at more than twice the rate of their non-Black counterparts.”

The EITC audits are almost all “correspondence audits,” which means that the taxpayer gets a letter from the IRS with some questions, and if you don’t write back with acceptable answers, your tax credit is denied.

Like many economists, I’m a fan of the Earned Income Tax Credit (as explained here). But I’ve also recognized that it has a long-standing problem: about one-fifth of the payments have often gone to those who did not qualify for them (as explained here). This problem arises from a combination of factors, ranging from complexity and uncertainty over whether households actually qualify to outright fraud (as discussed here). But again, it’s not obvious why these factors should affect blacks more than others.

The authors don’t have a definitive answer to this question, but they try to gain some insight by tinkering with the IRS algorithm that determines who gets audited, and then exploring how the mixture of audits would have shifted as a result. They show how “seemingly technocratic choices about algorithmic design” can lead to different results.

For example, it turns out that the IRS audit algorithm is calibrated (in part) to minimize the “no-change rate”–that is, the chance that an audit will not lead to any change in the amount of tax owed. This may seem reasonable enough, but consider two possible audits: one audit has a 95% chance of leading to a small change in taxes owed of less than $500. The other audit has a 10% chance of a large change in taxes owed of more than $10,000. Focusing on the larger payoffs will bring in more money, even though most of those audits produce no change at all. As the authors write: “[T]he taxpayers with the highest under-reported taxes tend to be non-Black, but the available data allow the classifier model to assign the highest probabilities of underreporting to more Black than non-Black taxpayers.”
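A quick expected-value calculation, using the numbers from this example, shows why minimizing the no-change rate and maximizing revenue can pull in different directions:

```python
# The two hypothetical audits from the example above, compared by
# expected revenue rather than by "no-change rate"
small_sure = 0.95 * 500         # <= $475 expected from the near-certain small audit
large_longshot = 0.10 * 10_000  # $1,000 expected from the 10% / $10,000 audit

print(f"small-but-likely audit:   at most ${small_sure:,.0f} expected")
print(f"large-but-unlikely audit: about ${large_longshot:,.0f} expected")
# The second audit fails 90% of the time, so it looks bad on the
# no-change metric, yet it brings in roughly twice the expected revenue.
```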

As another example, it seems that the algorithm emphasizes the possibility of “over-claiming of refundable tax credits rather than total under-reporting due to any error on the return.” One can imagine a possible political motive for this emphasis on over-claiming rather than under-reporting, but it’s not a way to collect more revenue.

Finally, these “correspondence audits” of the working poor who receive the Earned Income Tax Credit are relatively easy to automate: the algorithm flags them and the letters go out. But when most of us think about audits, what we have in mind is a detailed look at the finances of high-income folks, perhaps especially those who own complex businesses or have complex financial arrangements. Given existing economic inequalities by race, such audits would focus less on blacks. And plausible estimates suggest that audits focused in this way could raise $100 billion per year, just through enforcement of existing tax laws. But it takes highly-skilled tax professionals to carry out such audits, and the IRS has a tough time holding on to people with the necessary skills and training.

Robert E. Lucas Jr., 1937-2023

Robert E. Lucas Jr. (Nobel 1995) has died. I will not try here to provide an overview of his work. For those who are interested in more detail, here are a few starting points.

Lucas was awarded the Nobel prize “for having developed and applied the hypothesis of rational expectations, and thereby having transformed macroeconomic analysis and deepened our understanding of economic policy.” V.V. Chari provides an overview of that work in “Nobel Laureate Robert E. Lucas, Jr.: Architect of Modern Macroeconomics,” in the Winter 1998 issue of the  Journal of Economic Perspectives. I wrote a post on this blog about a year ago on the 50th anniversary of one of his most prominent papers, the 1972 “Expectations and the Neutrality of Money.”

In the late 1980s, Lucas began to focus more of his attention on issues of long-run growth. In what I think was his first prominent paper on the subject, he famously wrote (“On the Mechanics of Economic Development,” Journal of Monetary Economics, 1988, pp. 3-42):

Is there some action a government of India could take that would lead the Indian economy to grow like Indonesia’s or Egypt’s? If so, what, exactly? If not, what is it about the ‘nature of India’ that makes it so? The consequences for human welfare involved in questions like these are simply staggering: Once one starts to think about them, it is hard to think about anything else.

In the Winter 2000 issue of the Journal of Economic Perspectives, Lucas applied some of these growth model ideas in “Some Macroeconomics for the 21st Century,”  offering a long-run prediction that the world economy would become both much richer and much more equal over time, as countries that have been laggards in growth took advantage of possibilities for catch-up growth.

What I want to emphasize here is that Lucas, among his other talents, was a gifted writer and expositor. This gift wasn’t always readily apparent, because his research papers often intertwined verbal and algebraic exposition in a way that could be inaccessible to the uninitiated. Here are three examples that come immediately to mind.

One example is tacked up to the bulletin board outside my office. It’s from an essay on economic growth that Lucas wrote for the 2003 Annual Report of the Federal Reserve Bank of Minneapolis:

Of the tendencies that are harmful to sound economics, the most seductive, and in my opinion the most poisonous, is to focus on questions of distribution. In this very minute, a child is being born to an American family and another child, equally valued by God, is being born to a family in India. The resources of all kinds that will be at the disposal of this new American will be on the order of 15 times the resources available to his Indian brother. This seems to us a terrible wrong, justifying direct corrective action, and perhaps some actions of this kind can and should be taken. But of the vast increase in the well-being of hundreds of millions of people that has occurred in the 200-year course of the industrial revolution to date, virtually none of it can be attributed to the direct redistribution of resources from rich to poor. The potential for improving the lives of poor people by finding different ways of distributing current production is nothing compared to the apparently limitless potential of increasing production.

Whether you agree with the sentiment or not (personally, I’m about 85% agreement on this one), it’s a strong piece of prose writing.

Here’s another example from his 2000 JEP essay on economic growth. This is Lucas describing a model in words–specifically, describing how he sees the pattern of economic growth across countries as a kind of horse race with rules of its own:

We begin, then, with an image of the world economy of 1800 as consisting of a number of very poor, stagnant economies, equal in population and in income. Now imagine all of these economies lined up in a row, each behind the kind of mechanical starting gate used at the race track. In the race to industrialize that I am about to describe, though, the gates do not open all at once, the way they do at the track. Instead, at any date t a few of the gates that have not yet opened are selected by some random device. When the bell rings, these gates open and some of the economies that had been stagnant are released and begin to grow. The rest must wait their chances at the next date, t + 1. In any year after 1800, then, the world economy consists of those countries that have not begun to grow, stagnating at the $600 income level, and those countries that began to grow at some date in the past and have been growing ever since. …


The exact construction … is based on two assumptions. … The first is that the first economy to begin to industrialize—think of the United Kingdom, where the industrial revolution began—simply grew at the constant rate α from 1800 on. I chose the value α = .02 which … implies a per capita income for the United Kingdom of $33,000 (in 1985 U.S. dollars) by the year 2000. There is not much economics in the model, I agree, but we can go back to Solow (1956) and to the many subsequent contributions to the theory of growth for an understanding of the conditions under which per capita income in a country will grow at a constant rate. In any case, it is an empirically decent description of what actually happened.


So much for the leading economy. The second assumption is that an economy that begins to grow at any date after 1800 grows at a rate equal to α = .02, the growth rate of the leader, plus a term that is proportional to the percentage income gap between itself and the leader. The later a country starts to grow, the larger is this initial income gap, so a later start implies faster initial growth. But a country growing faster than the leader closes the income gap, which by my assumption reduces its growth rate toward .02. Thus, a late entrant to the industrial revolution will eventually have essentially the same income level as the leader, but will never surpass the leader’s level.

At least for me, this description of a racetrack, with the leader getting an early start and others having the ability to draw upon catch-up growth (because they can rely on skills and knowledge already invented) is a powerful way to describe an underlying algebraic model that illuminates overall patterns of long-run growth. The prose here isn’t flashy, but it is succinct and crystalline. Based on this model, Lucas wrote: “I think the restoration of inter-society income equality will be one of the major economic events of the century to come. Of course, this does not entail the undoing of the industrial revolution. In 1800 all societies were equally poor and stagnant. If by 2100 we are all equally rich and growing, this will not mean that we haven’t got anywhere!”
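For readers who like to see the gears turn, here is a minimal simulation of the racetrack mechanics Lucas describes. The catch-up coefficient and the yearly probability of a gate opening are invented for illustration, since the passage quoted above reports only the $600 starting level and α = .02:

```python
import random

random.seed(0)
ALPHA = 0.02     # leader's constant growth rate, from the quoted passage
THETA = 0.025    # catch-up coefficient (hypothetical; not given in the excerpt)
Y0 = 600.0       # common 1800 income level, in 1985 US dollars
P_START = 0.01   # hypothetical yearly chance that a stagnant economy's gate opens

incomes = {"leader": Y0}  # the United Kingdom, growing from 1800 on
stagnant = [f"economy_{i}" for i in range(20)]

for year in range(1800, 2001):
    # a random few of the still-closed gates open this year
    for name in list(stagnant):
        if random.random() < P_START:
            stagnant.remove(name)
            incomes[name] = Y0
    leader = incomes["leader"]
    for name, y in incomes.items():
        # growth = alpha plus a term proportional to the percentage income gap;
        # late starters grow faster at first, then converge toward alpha
        gap = (leader - y) / leader
        incomes[name] = y * (1 + ALPHA + THETA * gap)

print(f"leader income in 2000: ${incomes['leader']:,.0f}")
# roughly $32,000 with discrete compounding; Lucas reports $33,000
```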

Finally, here’s an example from the short “banquet speech” that Lucas gave in accepting the Nobel prize, with a deathbed thought. Here’s the speech in full:

Your Majesties, Ladies and Gentlemen,

As you all know, Alfred Nobel did not choose to establish a prize in Economics. This prize was established in the 1960s, as a memorial, through the generosity of the Bank of Sweden. Generosity and, I would say, wisdom, as the establishment of a Nobel Prize in Economics has had a very beneficial effect on my profession, encouraging us to focus on basic questions and scientific method. It is as if by recognizing Economics as a science, the Bank of Sweden and the Nobel Foundation have helped us to become one, to come close to realizing our scientific potential. Now in 1995 this great honour is given to an economist who maintains that central banks should focus exclusively on the control of inflation, that they must be resolute in resisting the temptation to pursue other objectives, no matter how worthwhile these objectives may be. It would be understandable if people at the Bank of Sweden were now thinking: “Why don’t we tell this man to take his theories to the Bundesbank, and see how many kronor he can get for them over there?”

But this is no occasion for ill-feeling. It is not the time to criticize central bankers or anyone else. When Voltaire was dying, in his eighties, a priest in attendance called upon him to renounce the devil. Voltaire considered his advice, but decided not to follow it. “This is no time,” he said, “to be making new enemies”. In this same spirit, I offer my thanks and good wishes to the Bank of Sweden, to the Nobel Committee, and to everyone involved in this wonderful occasion.

Measuring How a Higher Minimum Wage Affects Employment

When it comes to measuring how a minimum wage affects employment, the simple answers are wrong. For example, one simple approach is to look at employment before and after a minimum wage increase, but if a recession occurs along the way, surely one would not want to attribute the resulting job changes to the minimum wage increase. Also, it’s understandably easier for politicians to pass minimum wage increases when the economy is growing, but you wouldn’t want to mix up an overall climate of economic growth with the effects of a minimum wage increase, either.

Ideally, you would come up with several other and more plausible methods for isolating the effects of a minimum wage increase, so that you could compare results across these methods. What I want to do here is to describe first how such a study might be done, without revealing the results here at the start. After all, you aren’t the sort of person who would judge the methods of a study by whether the results confirmed your previous biases, are you? No, I’m sure you’re not. Instead, you’re the kind of person who first considers whether the method of a study makes sense, and then, to the extent that it does, puts a corresponding degree of belief in the results.

The city of Minneapolis voted in 2017 to phase in an increase in the city-wide minimum wage, starting in 2018. It would reach $15/hour for large firms (employment of 100 or more) by July 2022 and $15/hour for small firms by July 2024. The city also commissioned a series of studies on the effects of this change, to be overseen by the Federal Reserve Bank of Minneapolis. The most recent of these reports is “Economic Impact Evaluation of the City of Minneapolis’s Minimum Wage Ordinance,” with Loukas Karabarbounis, Jeremy Lise, and Anusha Nath as the primary investigators (Federal Reserve Bank of Minneapolis, May 1, 2023).

The researchers have access to non-public data from the state government agency that runs the unemployment insurance program, because when employers pay their unemployment insurance premiums, they need to file forms each quarter giving total compensation and total hours worked for each employee–which means you can easily calculate the average wage per hour. The researchers merge this with data from the Quarterly Census of Employment and Wages, carried out by the US Bureau of Labor Statistics. This data includes the industry, the location by city and zip code, and whether a given business establishment at a certain location is part of a firm that has establishments at other locations, too.

The authors of the Minneapolis Fed study apply two main methods for thinking about the question of the effects of a minimum wage that is being phased higher over time. I’ll try to offer a quick-and-dirty intuitive summary here.

First, they use what are called “synthetic control methods,” which look at changes over time. Specifically, they look at 36 US cities similar in size to Minneapolis, but which were not seeing a rise in their minimum wage during this time. They average together data from these cities, putting different weights on different cities, so that the weighted average of these other cities tracks the data from Minneapolis pretty well in the years leading up to 2017. The hypothesis behind this approach is that because jobs and wages in the weighted average of other cities tracked Minneapolis pretty well up to the passage of the minimum wage, they should have kept doing so–unless something changed.

As another example of this synthetic control method, they take the same approach but instead look only at cities inside Minnesota. Again, they weight the data from these cities so that in the lead-up to the higher minimum wage, it tracks the Minneapolis data. Again, the hypothesis is that a divergence after 2018 or so can be attributed to the higher minimum wage.
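For the curious, here is a minimal sketch of the weighting step behind a synthetic control, using invented quarterly employment data rather than anything from the Minneapolis Fed study:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Invented pre-treatment data: 16 quarters (2014-2017) of a low-wage
# employment index for 36 comparison cities and for Minneapolis.
T_pre, n_donors = 16, 36
donors = 100 + rng.normal(0, 1, size=(T_pre, n_donors)).cumsum(axis=0)
minneapolis = donors[:, :5].mean(axis=1) + rng.normal(0, 0.3, size=T_pre)

def pre_period_misfit(w):
    # squared distance between Minneapolis and the weighted donor average
    return np.sum((minneapolis - donors @ w) ** 2)

# Synthetic-control weights: nonnegative and summing to one
res = minimize(
    pre_period_misfit,
    x0=np.full(n_donors, 1 / n_donors),
    method="SLSQP",
    bounds=[(0, 1)] * n_donors,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
)
weights = res.x

# Post-2018, `donors @ weights` serves as the counterfactual Minneapolis;
# any gap that opens up afterward is attributed to the minimum wage increase.
print("three largest weights:", np.sort(weights)[-3:].round(3))
```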

This synthetic control approach has been used in other studies, but I think it’s fair to say that it has less plausibility for this study in recent years. Compared to other cities around the country, as well as cities within Minnesota, Minneapolis experienced rioting after the murder of George Floyd in May 2020. In addition, the pandemic recession may well have affected different cities in different ways: in particular, the effects in bigger cities like Minneapolis might be different from the effects in smaller cities.

But I know you’re not the kind of person who would be happy with the results of any single methodology, right? You are the kind of person who wants several different methods, so that you can compare between them.

A second approach uses “cross section” methods which do comparisons across firms and workers. In one cross-section approach, the authors look at “establishment effects” within the city of Minneapolis. Remember, they have data on industry, location, average wages, and hours worked for firms in Minneapolis. They write:

Consider a full-service restaurant, Restaurant A. It is located on the fictitious Plain Street and pays all of its workers at least 16 dollars per hour in 2017. This restaurant is not directly exposed to the increase in minimum wage, because all its workers are already earning a wage above 15 dollars. Next, consider another full-service restaurant on Plain Street, Restaurant B, which pays all its workers in 2017 an hourly wage of 7.5 dollars. Restaurant B is highly exposed to the minimum wage increase, because to continue to operate using the same workforce, it needs to increase wages for all workers.

Thus, they can compare establishments in the same industry and the same neighborhood, some of which are more affected by the higher minimum wage than others. You can compare whether the establishments that are more affected by the minimum wage adjust hours or employment by larger amounts.

However, an obvious concern here is that perhaps some firms that were previously paying low wages go under, but their workers are absorbed by firms in the same industry and neighborhood paying higher wages. Or if workers found jobs just outside the Minneapolis city boundaries, it might look as if jobs were lost, when the jobs actually just moved a few miles. Such effects need to be taken into account.

Thus, a different cross-section approach looks at comparisons across workers. Consider workers in Minneapolis, some working for establishments that previously paid lower wages, and some working for establishments that did not. The researchers can then track what happens to those workers. They can also take into account the effects of being in a certain industry: say, that restaurants took a bigger hit during the pandemic. Thus, the researchers can see what happened to the workers who seemed most likely to be affected by a higher minimum wage.

One final step before describing the results. Some industries have a lot of low-wage workers, so one would expect these industries to be more exposed to the effects of a higher minimum wage. Thus, the researchers do these comparisons in two ways: looking at the effects across all jobs, and looking at the effects just in the most exposed industries. In Minneapolis, the six industries where more than 30% of the workers had been making less than the new minimum wage were: “retail trade (44); administration and support (56); health care and social assistance (62); arts, entertainment, and recreation (71); accommodation and food services (72); and other services (81), which consists of repair and maintenance shops, personal and laundry services, and various civic, professional, and religious organizations.”

The report’s summary table shows four categories of outcomes in the first column. The second column shows an average effect for all jobs, and also just for the industries whose share of lower-wage workers made them “most exposed” to an increase. The third column, “Time Series,” is the average of results from the two synthetic control methods. The fourth column, “Cross Section,” is results from the comparisons across establishments and across workers with different exposures to a higher minimum wage. The final column averages the results from the two previous columns.

The overall results are similar to those from a number of other studies. As one might expect, the effects are much smaller for the average of all jobs than for the most exposed industries. A minimum wage tends to raise wages (for those who still have jobs), but leads to a decline in jobs and a decline in hours worked. For the average job, the higher wages and lower hours worked pretty much balance each other out, so total earnings don’t change much. For the jobs most exposed to a minimum wage increase, the drop in hours exceeds the rise in wages, so total wage earnings decline.

My bottom line here is not to cheerlead for or against higher minimum wages. I’m trying to make the point that serious studies using a variety of methods show genuine tradeoffs–especially for industries that tend to pay lower wages and for those working in such industries. A serious discussion of minimum wages won’t ignore such tradeoffs or try to sweep them aside with assertions of how things “should” be. Specifically, a substantially higher minimum wage in a city will discourage some industries more than others, and in this way affect the mix of goods and services available nearby to residents of the city, and will have heavier effects on the hours and jobs of workers in those industries, too.

Gender, Academia, Economics

When reading about controversial topics in economics (and I assume in other fields), it’s common to have the uncomfortable feeling that if you know what the authors argued in a previous paper, you will also know what they are arguing in the current paper. One interpretation of this pattern is that authors have biases that influence their results. If you think this may be a problem, one way to push back is through an “adversarial collaboration,” which means that authors who have previously found different results agree to publish a paper together–and not in some pro-and-con format of disagreements, but actually to write the paper such that they can all agree with it.

Stephen J. Ceci, Shulamit Kahn, and Wendy M. Williams rise to the challenge in “Exploring Gender Bias in Six Key Domains of Academic Science: An Adversarial Collaboration” (Psychological Science in the Public Interest, published online April 26, 2023). In this area of research, previous work by Ceci and Williams tends to find little evidence of gender bias in academia, while Kahn has published a number of studies suggesting bias in economics, in particular. Here’s a sense of their process:

This article represents more than 4.5 years of effort by its three authors. By the time readers finish it, some may assume that the authors were in agreement about the nature and prevalence of gender bias from the start. However, this is definitely not the case. Rather, we are collegial adversaries who, during the 4.5 years that we worked on this article, continually challenged each other, modified or deleted text that we disagreed with, and often pushed the article in different directions. Although the three of us have exchanged hundreds of emails and participated in many Zoom sessions, Kahn has never met Ceci and Williams in person.

Here’s the broad takeaway of their results from the abstract:

We synthesized the vast, contradictory scholarly literature on gender bias in academic science from 2000 to 2020. In the most prestigious journals and media outlets, which influence many people’s opinions about sexism, bias is frequently portrayed as an omnipresent factor limiting women’s progress in the tenure-track academy. … We evaluated the empirical evidence for gender bias in six key contexts in the tenure-track academy: (a) tenure-track hiring, (b) grant funding, (c) teaching ratings, (d) journal acceptances, (e) salaries, and (f) recommendation letters. We also explored the gender gap in a seventh area, journal productivity, because it can moderate bias in other contexts. We focused on these specific domains, in which sexism has most often been alleged to be pervasive, because they represent important types of evaluation, and the extensive research corpus within these domains provides sufficient quantitative data for comprehensive analysis. Contrary to the omnipresent claims of sexism in these domains appearing in top journals and the media, our findings show that tenure-track women are at parity with tenure-track men in three domains (grant funding, journal acceptances, and recommendation letters) and are advantaged over men in a fourth domain (hiring). For teaching ratings and salaries, we found evidence of bias against women; although gender gaps in salary were much smaller than often claimed, they were nevertheless concerning.

But what if I’m interested less in the overall picture of academia as a whole than in my specific field of economics? The study is broadly focused, but on those occasions when it singles out economics, it’s typically because economics looks worse with regard to gender bias.

For example, one way to look at bias in tenure-track hiring is to compare the share of women getting PhDs in a given field to the share of women who are hired as assistant professors in that field and the share of women who become tenured professors in that field. In many of the fields they study, the share of women becoming tenured professors is higher than the share of women who received doctorates in the field. Economics is the exception:

Thus, these cohort analyses offer little support for the claim of widespread gender discrimination in tenure-track hiring in GEMP [the authors’ acronym for the mathematics-intensive fields of geoscience, engineering, economics, mathematics, computer science, and physical science], even before 2000. Economics is the exception …[T]he percentage of women among tenure-track assistant professors (within 7 years after obtaining their PhDs) was similar to the percentage of women among PhDs only through 2004; for the next eight PhD cohorts, however, the percentage of female assistant professors stagnated, despite growth of newly minted female PhDs.

Here’s a description of another study about hiring, in which economics was different:

Williams and Ceci (2015) studied a stratified national sample of 872 faculty from two GEMP fields (engineering and economics) and two LPS fields (biology and psychology) to determine preferences for identically qualified men and women possessing outstanding credentials. …[I]n the authors’ main experiment (N = 363), faculty expressed a significant preference for hiring women. This pro-female preference was similar across fields, types of institution, and gender and rank of faculty. The only group in which the preference did not appear was male economists, who showed no gender preference.

“No preference” is not an obviously bad thing. But it does make economics stand out. There is some evidence that has been interpreted to argue that women in economics need higher-quality research to publish their papers. This evidence is not based on a lower rate of acceptance of research by women at academic journals, but on the later pattern of citations of that work.

Card et al. (2020) argued that although there was no gender difference between acceptance rates [at economic journals] in their study, even after controlling for factors such as numbers of past publications, this might not ensure that the quality of men’s and women’s accepted articles are similar. Instead, they argued that only subsequent citations to the accepted articles signal quality. Analyzing citations, Card et al. found that in economics, accepted women’s articles had higher subsequent citations, and from this they concluded that bias against women exists, despite no gender differences in acceptance rates. There might indeed be some gender bias in economics publication evaluation, particularly given that there is evidence from another study showing that the quality of writing in published articles in economics by women was higher than in articles by men, whereas the time until women’s articles were finally accepted was longer, suggesting bias (Hengel, 2022).

But this type of evidence is a bit slippery. As the essay points out, if women’s published research being cited more often is evidence of bias in economics, does that mean that if research by women was cited less often (as is true in some other fields and journals), this would prove that research by women was of lower quality? There are a variety of forms of bias that might have an effect here: bias by journal reviewers in what gets published, bias by readers in what they choose to study, and bias by future authors in what they choose to cite. There is also a possibility that women in economics tend to favor quality over quantity in their research choices. Disentangling these possibilities won’t be easy.

Other research suggests that men have an overall productivity advantage in economics. The authors describe another study this way:

In social sciences besides psychology, two fields stand out as opposites. Political science had a large female advantage in annualized publications and a 3.3-ppt female advantage in total impact (Huang et al., 2020). … In contrast, Huang et al. found that in economics, men had a 28% productivity advantage in early careers and a 50% advantage in midcareers. Ceci et al. found that the productivity gap among economists increased from 1990–1995 to 2005–2008 (from 22% to 52%), a concerning trend.

A few years back, the Journal of Economic Perspectives, where I work as Managing Editor, published a three-paper symposium on the situation of women in economics. To me, the articles make a prima facie case that economics has problems in this area, even if some of its problems, like bias against women in student ratings of teachers, seem shared across fields.

But I also think that some of the vexed relationship between gender and economics starts a lot sooner than academic hiring. If you go back to high school AP exams, the standard pattern is that females are more likely to take the exams and to do well on them. Economics is an exception. Males are more likely to take the AP micro and macro economics exams, and also to get higher scores on them. In undergraduate programs, about one-third of economics majors are women–a percentage that hasn’t changed much in recent decades–even though women are an overall majority of college students. In thinking about the pipeline for future professors of economics, it might prove useful to think about reasons behind these earlier gender differences.

Replace Federal Taxes with a National Sales Tax?

The idea of using a national retail sales tax to replace pretty much all of the federal tax structure–that is, to substitute for the federal individual income tax, corporate income tax, payroll tax, and estate and gift tax–has some elements of broad appeal. There’s an old Greek legend of the “Gordian knot,” where whoever could untangle the knot would become a great conqueror. Alexander the Great reputedly just cut the Gordian knot with a sword. The modern tax code may be a Gordian knot for our own time, where untangling it is impossible but cutting through it can achieve the goal. As an individual or someone running a business, imagine never filling out a tax form again.

Thus, a Fair Tax proposal has been introduced in every Congress since 1999 to replace other federal taxes with a national sales tax. It’s not likely to pass, now or in the future, but the thought experiment is intriguing. For those of us who prefer to root around in the policy details rather than to make sweeping gestures with swords, what are the tradeoffs and issues here? William G. Gale and Kyle Pomerleau discuss “Deconstructing the Fair Tax” (Tax Notes Federal, March 27, 2023, p. 2169+).

How high would the national sales tax rate need to be?

Of course, the answer to this question depends in part on what is counted as “sales.” The Fair Tax takes a broad perspective here: for example, it includes pretty much all goods and services, including rent on housing, purchasing a newly built home, health care spending, and even interest payments and fees for credit cards and mortgage debt (which are viewed as a kind of “service” payment).

There’s also a bit of terminology here that needs exploring. Say that something costs $100, and there is a 30% tax added, so when you get to the cash register, you pay $130. Most people would think of that tax rate as 30%. But if you wanted to make the tax rate sound lower, you might instead calculate the tax rate as 30/130–that is, the tax divided by the total price after tax, not the price before tax. That’s called the “tax-inclusive” rate, and your added 30% has suddenly become a 23% tax rate. The supporters of the Fair Tax promise a 23% tax-inclusive rate, which is the same as a 30% tax-exclusive rate.
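The arithmetic is simple enough to put in a few lines, using the $100 price and $30 tax from the example above:

```python
def tax_exclusive_rate(tax: float, pre_tax_price: float) -> float:
    """The conventional quote: tax as a share of the pre-tax price."""
    return tax / pre_tax_price

def tax_inclusive_rate(tax: float, pre_tax_price: float) -> float:
    """The Fair Tax quote: tax as a share of the total paid at the register."""
    return tax / (pre_tax_price + tax)

tax, price = 30.0, 100.0   # $30 of tax on a $100 item
print(f"tax-exclusive rate: {tax_exclusive_rate(tax, price):.0%}")  # 30%
print(f"tax-inclusive rate: {tax_inclusive_rate(tax, price):.0%}")  # 23%
```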

But Gale and Pomerleau point out that the 23% rate is likely to be a considerable understatement, as well. When they dig into the underlying calculations, they find, for example, that the authors of the Fair Tax are assuming that a positive rate of inflation will push up collections from the Fair Tax over time, but also that government spending will not experience any inflation rate at all. Obviously, this kind of assumption (and there are others like it) makes it easier for the 23% tax-inclusive rate to cover future spending.

What if inflation affects both taxes and government spending in the same way? Then Gale and Pomerleau calculate that maintaining current spending levels would require a tax-exclusive rate of 39% (which is a tax-inclusive rate of 28%).

But the Fair Tax supporters also assume that there would not be a black-market economy where these sales taxes are often evaded, understated, and avoided. If you build in an assumption that 17% of a national retail sales tax will be avoided or evaded–which matches the assumption of current levels of avoidance and evasion–then the necessary tax rate to maintain current spending levels would be a tax-exclusive rate of 51% (or a tax-inclusive rate of 34%).

If you think that future Congresses would be likely to rule that certain items should not be included in the national sales tax, then the tax rate on everything that remains included would need to be higher still. In short, the national sales tax doesn’t come cheaply. And remember that the national sales tax would be on top of any state and local sales taxes.

What about progressivity and taxing those with high incomes?

An obvious concern with a national retail sales tax is that everyone pays the same tax rate, no matter their income. There’s a bit of a political challenge here for those who would like those with high incomes to pay more in taxes, but who tend to overstate their case by saying that those with high incomes pay little or nothing in the current tax system. If you believe that, then a national sales tax would raise taxes on the rich! But those with high incomes do in fact pay more in taxes (whether they should pay still more is a question I leave in abeyance here). Thus, a national sales tax by itself would raise taxes on those with lower incomes and reduce taxes on those with higher incomes.

To their credit, the supporters of the Fair Tax recognize this issue and offer a suggested fix, called a “family consumption allowance.” It works similarly to a universal basic income, but the idea is only to offset the national retail sales tax, not to provide enough to live on. Every household would get a monthly check from the government. The amount of the check would be determined by multiplying the poverty-level income for that household by the tax-inclusive tax rate.

How much might this payment be? For a family of three, the US poverty line is $24,860 in 2023. Multiplied by 23%, and then divided into 12 monthly checks, this works out to $477 per month. Again, this is not intended to be enough to live on. The Fair Tax proposal doesn’t make any changes to existing programs to support the poor. It’s just intended to offset the national sales taxes paid on poverty-level income.
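Here is the arithmetic behind that monthly check, using the poverty line and tax-inclusive rate from the text:

```python
POVERTY_LINE_FAMILY_OF_3 = 24_860  # 2023 US poverty line, from the text
TAX_INCLUSIVE_RATE = 0.23          # the Fair Tax's quoted rate

annual_allowance = POVERTY_LINE_FAMILY_OF_3 * TAX_INCLUSIVE_RATE
monthly_check = annual_allowance / 12
print(f"monthly check: ${monthly_check:,.2f}")
# prints $476.48, close to the roughly $477 per month cited above
```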

This part of the proposal would make the national retail sales tax less of a burden to the poor than to the non-poor. But it would not come close to making the overall federal tax rate as progressive as it currently is.

What about interactions with other tax issues?

A national retail sales tax would have unpredictable interactions with a number of other policies. For example, 42 states (and a number of cities) have income taxes. Under current law, these taxes can piggyback on the federal income tax, and make a few idiosyncratic changes. But if no federal income tax existed, a lot of people and companies would presumably get pretty grumpy about retaining the state and local income taxes. In addition, income taxes are used for a number of public policies. Getting rid of the income tax means getting rid of all tax deductions, including mortgage interest. It means getting rid of income tax breaks for, say, buying an electric vehicle.

Of course, government could still provide subsidies in these areas. But instead of providing the subsidies in the form of tax breaks, which have the effect of spending but don’t look like spending, it might need to do so by actually cutting checks for such subsidies.

There are also a number of administrative issues in shifting to the collection of a national retail sales tax that I’ll skip over here, but they aren’t trivial.

There’s an old joke among economists about value-added taxes, which function as a sort of national sales tax. The joke goes: “America has not enacted a value-added tax because the Democrats fear that it’s not progressive and the Republicans fear that it would become a money machine for the government to raise taxes. However, America will enact a value-added tax as soon as the Republicans realize that it’s not progressive and the Democrats recognize that it can be a money machine for raising taxes.”

Transferring all of the US tax burden to a national retail sales tax seems an unwise idea, and one where the marketing of a 23% rate overpromises what it can deliver. The proposal for a national sales tax is interesting, in part, because it potentially opens the door to some ideas that have had very little traction in national-level American politics, like a universal income payment and a national value-added tax.

Marx on Economics: “Its True Ideal is the Ascetic but Rapacious Skinflint and the Ascetic but Productive Slave”

May 5 is the birthday of Karl Marx. I published a version of this post five years ago, on the occasion of the 200th anniversary of Marx’s birth. This is a very slightly updated version.

______________

Here’s a characteristic little taste of Karl Marx’s writing that I ran across the other day. It’s from Economic and Philosophical Manuscripts, a set of essays written in 1844, not necessarily intended for publication themselves, but an early attempt at sorting through ideas and themes later developed in Capital. This is from the Third Manuscript on “Private Property and Labor.” Marx wrote (what follows was all part of one paragraph, and I’ve inserted the paragraph breaks for ease of blog-post reading):

“Political economy, this science of wealth, is therefore at the same time the science of denial, of starvation, of saving, and it actually goes so far as to save man the need for fresh air or physical exercise. This science of the marvels of industry is at the same time the science of asceticism, and its true ideal is the ascetic but rapacious skinflint and the ascetic but productive slave. 

“Its moral ideal is the worker who puts a part of his wages into savings, and it has even discovered a servile art which can dignify this charming little notion and present a sentimental version of it on the stage. It is therefore – for all its worldly and debauched appearance – a truly moral science, the most moral science of all. Self-denial, the denial of life and of all human needs, is its principal doctrine. 

“The less you eat, drink, buy books, go to the theatre, go dancing, go drinking, think, love, theorize, sing, paint, fence, etc., the more you save and the greater will become that treasure which neither moths nor maggots can consume – your capital. The less you are, the less you give expression to your life, the more you have, the greater is your alienated life and the more you store up of your estranged life. 

“Everything which the political economist takes from you in terms of life and humanity, he restores to you in the form of money and wealth, and everything which you are unable to do, your money can do for you: it can eat, drink, go dancing, go to the theatre, it can appropriate art, learning, historical curiosities, political power, it can travel, it is capable of doing all those things for you; it can buy everything: it is genuine wealth, genuine ability. But for all that, it only likes to create itself, to buy itself, for after all everything else is its servant. And when I have the master I have the servant, and I have no need of his servant. 

“So all passions and all activity are lost in greed. The worker is only permitted to have enough for him to live, and he is only permitted to live in order to have.”

The quotation has the tone of prophetic certainty that is so enticing in Marx. You can almost hear someone preaching at you from behind a lectern, voice rising and falling, waving their arms and pointing for emphasis. You may want to punch your fist up in the air while reading it.

But for any economist, the specific ideas here are ostentatiously incorrect. For example, the statement that “the true ideal is the ascetic but rapacious skinflint and the ascetic but productive slave” is profoundly wrong. Capitalism is not built on misers and workaholics, and the US economy is not built on asceticism and self-denial (!). Instead, economics is about the interactions that arise when people in their role as consumers are searching around to buy the products they prefer, when people in their role as workers are thinking about how to acquire skills and contribute to production, when people in their role as managers and entrepreneurs are thinking about how to produce and innovate, and yes, when people in their role as savers and investors direct the flow of capital to provide security for their families and eventual retirement for themselves.

Moreover, economists tend to argue that we all wear many hats: not just consumer, worker, and saver, but also spouse, parent, child, community member, church member, cultural participant, book club member, hobbyist, vacationer, and many others. As to  Marx’s list of activities that economic forces are supposedly discouraging–“eat, drink, buy books, go to the theatre, go dancing, go drinking, think, love, theorize, sing, paint, fence”–explicit economic activity certainly interacts with these activities, but it does not particularly seek to limit them.

Marx is openly disbelieving that political economy can be detoxified. He views  descriptions of buying and selling as a cover story for oppression; moreover, it’s a kind of oppression that takes over participants, separating people from their true selves.  He wrote elsewhere that the division of labor itself–that is, the idea of people having jobs–is a form of enslavement. In the passage above, money becomes the master, with people as the servants. Again, these Marxist views seem to me categorically wrong as a description of the subject of economics.

But as a description of how people can feel in a world of choices and scarcity, Marx seems to me to be touching on some deeper truths, even if his tone feels off-kilter to me: he is using drums and trumpet blasts to play a theme that would play better on string instruments in a minor key. Marx’s words echo with the doleful insight that many people do indeed live through weeks, months, and longer when their job feels like a burden that they cannot put down. Many people do wish that they could spend their time in other ways. Many people would like to have more consumption in various forms. Many people worry about having enough money in the bank to cover an emergency, or enough for retirement.  These economic pressures and worries and fears can shape what kind of people we are and how we act, sometimes in unpleasant ways.

But when Marx’s viewpoint focuses only on the burdens and pressures of economic life, it has little to say about more positive aspects. Yes, it’s fun to “eat, drink, buy books, go to the theatre, go dancing, go drinking, think, love, theorize, sing, paint, fence, etc.” as Marx writes. But it’s also rewarding to do a good day’s work, to have camaraderie at work, to build up skills and a higher level of responsibility, to save up some money, to support one’s family, to support a local business, to buy gifts for a friend or a treat for oneself, and generally to have some sense of responsibility and ownership and control over one’s economic life.

Of course, it would be silly to get dewy-eyed while romanticizing some potentially positive aspects of economic life. But frankly, it’s also silly when Marx describes economic interactions as if they were a Gothic horror story. Contra Marx, our economic worries don’t arise because money is our master and jobs are enslavement. Instead, it’s all just tradeoffs, just reality, just various aspects of the human condition.

We should all know enough history to have an idea of what “masters” and “enslavement” really mean, and working at a US job in the modern economy doesn’t qualify.  For those of us living in the United States 200 years after Marx was born, it’s worth keeping the perspective that the economic stresses in our lives are first-world problems.

Spring 2023 Journal of Economic Perspectives Free Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Spring 2023 issue, which in the Taylor household is known as issue #144. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.

________________

Symposium on Spatial and Urban Economics

“Economic Activity across Space: A Supply and Demand Approach,” by Treb Allen and Costas Arkolakis

What do recent advances in economic geography teach us about the spatial distribution of economic activity? We show that the equilibrium distribution of economic activity can be determined simply by the intersection of labor supply and demand curves. We discuss how to estimate these curves and highlight the importance of global geography—the connections between locations through the trading network—in determining how various policy relevant changes to geography shape the spatial economy.

Full-Text Access | Supplementary Materials

“Neighborhood Change, Gentrification, and the Urbanization of College Graduates,” by Victor Couture and Jessie Handbury

We study changing trends in within-city sorting by education over the last 40 years. We show that neighborhoods closest to the centers of large US cities rose from having the lowest levels of college attainment in 1980 to the highest in 2017. We discuss the determinants of changes in sorting patterns, focusing on the role of transportation technology and income growth. We outline various consequences of the recent urbanization of college graduates on neighborhood amenities, house prices, and segregation. We highlight the tendency of college graduates to cluster into select central neighborhoods, likely limiting opportunities for interactions across educational lines.

Full-Text Access | Supplementary Materials

“Constraints on City and Neighborhood Growth: The Central Role of Housing Supply,” by Nathaniel Baum-Snow

The US urban population increased by almost 50 percent between 1980 and 2020, with this growth heavily concentrated in the Sun Belt and at the fringes of metropolitan areas. This paper considers the role of housing supply in shaping the growth of cities and neighborhoods. Housing supply constraints have meant that demand growth has increasingly manifested as price growth rather than as increases in housing units or population in larger and denser metropolitan areas and neighborhoods. New housing is provided at increasingly higher cost in areas that have higher intensity of existing development and more restrictive regulatory environments. Both forces have strengthened over time, making quantity supplied less responsive to growing demand, driving housing price growth in many areas, and pushing housing quantity growth further out into urban fringes. As a result of such pressures on the cost of new construction, the United States has recently experienced more rapid price growth and a declining influence of new construction on the housing stock.

Full-Text Access | Supplementary Materials

“Quantitative Urban Models: From Theory to Data,” by Stephen J. Redding

Economic activity is highly unevenly distributed within cities, as reflected in the concentration of economic functions in specific locations, such as finance in the Square Mile in London. The extent to which this concentration reflects natural advantages versus agglomeration forces is central to a range of public policy issues, including the impact of local taxation and transport infrastructure improvements. This paper reviews recent quantitative urban models, which incorporate both differences in natural advantages and agglomeration forces, and can be taken directly to observed data on cities. We show that these models can be used to estimate the strength of agglomeration forces and evaluate the impact of transportation infrastructure improvements on welfare and the spatial distribution of economic activity.

Full-Text Access | Supplementary Materials

Symposium on Universal Health Insurance

“Achieving Universal Health Insurance Coverage in the United States: Addressing Market Failures or Providing a Social Floor?” by Katherine Baicker, Amitabh Chandra and Mark Shepard

The United States spends substantially more on health care than most developed countries, yet leaves a greater share of the population uninsured. We argue that incremental insurance expansions focused on addressing market failures will propagate inefficiencies and will fail to facilitate the active policy decisions needed to achieve socially optimal coverage. By instead defining a basic bundle of services that is publicly financed for all, while allowing individuals to purchase additional coverage, policymakers could both expand coverage and maintain incentives for innovation, ensuring universal access to innovative care in an affordable system.

Full-Text Access | Supplementary Materials

“The Prices in the Crises: What We Are Learning from 20 Years of Health Insurance in Low- and Middle-Income Countries,” by Jishnu Das and Quy-Toan Do

Governments in many low- and middle-income countries are developing health insurance products as a complement to tax-funded, subsidized provision of healthcare through publicly-operated facilities. We discuss two rationales for this transition. First, health insurance would boost fiscal revenues for healthcare, as post-treatment out-of-pocket payments to providers would be replaced by pre-treatment insurance premia to health ministries. Second, increased patient choice and carefully designed physician reimbursements would increase quality in the healthcare sector. Our essay shows that, at best, these objectives have only been partially met. Despite evidence that health insurance has provided financial protection, consumers are not willing to pay for unsubsidized premia. Health outcomes have not improved despite an increase in utilization. We argue that this is not because there was no room to improve the quality of care but because behavioral responses among healthcare providers have systematically undermined the objectives of these insurance schemes.

Full-Text Access | Supplementary Materials

Symposium on the Economics of Mental Health

“America’s Continuing Struggle with Mental Illnesses: Economic Considerations,” by Richard G. Frank and Sherry A. Glied

Mental illnesses affect roughly 20 percent of the US population. Like other health conditions, mental illnesses impose costs on individuals; they also generate costs that extend to family members and the larger society. Care for mental illnesses has evolved quite differently from the rest of the health care sector. While medical care in general has seen major advances in the technology of treatment, this has not been the case to the same extent for the treatment of mental illnesses. Relative to other illnesses, the cost of care for mental illnesses has grown more slowly and the social cost of illness has grown more rapidly. In this essay we offer evidence about the forces underpinning these patterns and emphasize the challenges stemming from the heterogeneity of mental illnesses. We examine institutions and rationing mechanisms that affect the ability to make appropriate matches between clinical problems and treatments. We conclude with a review of implications for policy and economic research.

Full-Text Access | Supplementary Materials

“Depression and Loneliness among the Elderly in Low- and Middle-Income Countries,” by Abhijit Banerjee, Esther Duflo, Erin Grela, Madeline McKelway, Frank Schilbach, Garima Sharma and Girija Vaidyanathan

We combine data from longitudinal surveys in seven low- and middle-income countries (plus the United States for comparison) to document that depressive symptoms among those aged 55 and above are prevalent in those countries and, unlike in the United States, increase sharply with age. Depressive symptoms in one survey wave are associated with a greater decline in ability to carry out basic daily activities and a higher probability of death in the next wave. Using additional data from a panel survey we conducted in Tamil Nadu with a focus on elderly living alone, we document that social isolation, poverty, and physical health challenges are strongly correlated with depression. We discuss potential policy interventions in these three domains, including some results from our randomized control trials in the Tamil Nadu sample.

Full-Text Access | Supplementary Materials

Articles

“An Introductory Guide to Event Study Models,” by Douglas L. Miller

The event study model is a powerful econometric tool used for the purpose of estimating dynamic treatment effects. One of its most appealing features is that it provides a built-in graphical summary of results, which can reveal rich patterns of behavior. Another value of the picture is the estimated pre-event pseudo-“effects”, which provide a type of placebo test. In this essay I aim to provide a framework for a shared understanding of these models. There are several (sometimes subtle) decisions and choices faced by users of these models, and I offer guidance for these decisions.

Full-Text Access | Supplementary Materials
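Because event-study models come up so often in applied work, a small illustration may help. The sketch below is my own, not from the Miller paper: it simulates a panel in which half the units are treated in 2015, builds event-time dummies with period k = -1 omitted as the reference, and runs the two-way fixed effects regression whose pre-event coefficients serve as the placebo check mentioned in the abstract. All variable names and numbers are invented.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: 200 units over 2010-2020; units 0-99 are treated in 2015.
rng = np.random.default_rng(0)
df = pd.DataFrame([(i, t) for i in range(200) for t in range(2010, 2021)],
                  columns=["unit", "year"])
df["treated"] = df["unit"] < 100
df["rel_time"] = np.where(df["treated"], df["year"] - 2015, np.nan)
# True effect: the outcome jumps by 1.0 for treated units from 2015 onward.
df["y"] = rng.normal(size=len(df)) + np.where(df["treated"] & (df["year"] >= 2015), 1.0, 0.0)

def ev(k):  # formula-safe names for the event-time dummies
    return f"ev_m{-k}" if k < 0 else f"ev_p{k}"

for k in range(-5, 6):
    if k != -1:  # k = -1 is the omitted reference period
        df[ev(k)] = (df["rel_time"] == k).astype(float)

terms = " + ".join(ev(k) for k in range(-5, 6) if k != -1)
fit = smf.ols(f"y ~ {terms} + C(unit) + C(year)", data=df).fit()
# Pre-event coefficients should hover near zero (no pre-trend was simulated);
# post-event coefficients should hover near the true effect of 1.0.
print(fit.params.filter(like="ev_"))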

“Retrospectives: Edgar Sydenstricker: Household Equivalence Scales and the Causes of Pellagra,” by Philip Clarke and Guido Erreygers

In the early part of the 20th century the disease pellagra, now almost unknown, affected and killed thousands of people in the United States. Some claimed it was an infection, while others maintained it was due to a dietary deficiency. The economist Edgar Sydenstricker (1881–1936), who was a member of a US Public Health Service team examining the disease, argued it was critical to understand how pellagra varied by levels of income. Collecting survey data, he realized equivalence scales were needed to adjust household incomes. His research demonstrated that there was a strong negative correlation between the incidence of pellagra and equivalized household income. Further analysis of the dietary differences between households suggested that a dietary deficiency associated with a restricted availability of animal protein food was the cause of pellagra. This was confirmed more than a decade later when a deficiency of vitamin B-3 was identified as the cause.

Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

Full-Text Access | Supplementary Materials

The AT&T Merger with Time Warner: Another Follow-up

Back in 2018, AT&T merged with Time Warner. The merger was challenged on antitrust grounds by the US Department of Justice, the case went to court, and the government lost–which means the merger was allowed to proceed. Now, a few years later, we can see what happened.

As I pointed out in a post a few weeks ago, when big mergers are litigated, there is often a clash between dire predictions from one side of what will happen if the merger is allowed to proceed, and arguments from the other side asserting the enormous social gains that are sure to follow if it is. In the case of AT&T/Time Warner, the original idea behind the merger was to combine the AT&T broadband and wireless networks (including ownership of DirecTV) with the content of Time Warner (which was made up of Warner Bros., HBO and Turner Broadcasting). At a deeper level, the idea was that AT&T would be able to use its vast customer database to see how people were actually interacting with shows–and thus guide the content providers to more popular fare.

On the other side, the antitrust concern here is about a “vertical” merger. Most people are used to the idea of a “horizontal” merger, in which two firms are selling similar products, so if the two firms merge–and there is not an array of other competitors also in the market–the merged firm could have power to raise prices. But in a vertical merger, one firm (in this case, Time Warner) is providing an input to another firm (in this case, AT&T). The antitrust concern is that this kind of merger can lead to less competition if one firm locks up access to a key input. The concerns over reduced competition are harder to articulate in a vertical merger: indeed, the AT&T/Time Warner merger was the first vertical merger litigated by antitrust authorities in the last 40 years.

The AT&T merger turns out to be another case where neither the rosy predictions of its supporters nor the gloom-and-doom of its opponents quite came to pass. The merger was announced in October 2016. The court decision allowing the merger happened in June 2018. In May 2021, AT&T announced that it was selling off Time Warner, a deal that closed in April 2022.

The merger clearly failed in business terms: for a discussion of the strategy and culture clashes between the firms, the New York Times ran a story last November with the title, “Was This $100 Billion Deal the Worst Merger Ever?” But of course, the job of the antitrust authorities is not to second-guess whether a merger is a good business idea, but only to make sure consumers and competition are protected. Given that the merger evaporated in three years, it’s hard to make a case that it allowed AT&T to rake in higher profits in a way that injured consumers or competition.

The case is interesting to economists in part because litigation over vertical mergers is so rare, and in part because high-powered economists consulted for both sides and testified before the court. From the side favoring AT&T and the merger, Dennis W. Carlton, Georgi V. Giozov, Mark A. Israel, and Allan L. Shampine have written “A Retrospective Analysis of the AT&T/Time Warner Merger” (Journal of Law and Economics, November 2022, pp. S461–S497). From the side supporting the US government and opposing the merger, Carl Shapiro has written “Vertical Mergers and Input Foreclosure: Lessons from the AT&T/Time Warner Case” (Review of Industrial Organization, 2021, pp. 303–341).

Shapiro explains the government’s case that the AT&T merger with Time Warner would reduce content available to other “multichannel video program distributors” like Comcast, Dish Network, Sony’s PlayStation Vue, and others. Carlton and his co-authors make the case that competition would not be injured. The arguments get into the details of how well what Shapiro calls a “raising rivals’ costs” model–and what the Carlton group calls a “bargaining leverage over rivals” model–captures the interactions in this kind of market, and how the vertically merged firm might lead to higher costs for competitors, which in turn could be passed along to consumers.
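As a rough illustration of that bargaining logic, consider a toy Nash-bargaining calculation. The numbers below are invented for illustration and come from neither paper; the point is only that when the content owner’s fallback position improves after the merger, the fee it can negotiate from a rival distributor rises–which is exactly the sense in which rivals’ costs go up.

# A toy Nash-bargaining sketch of the "bargaining leverage" story.
# All numbers are invented; neither paper uses these figures.
value_to_rival = 10.0     # rival distributor's profit from carrying the content

# With equal bargaining weights, the negotiated fee gives the content owner
# its disagreement payoff plus half the gains from trade.
def nash_fee(value, disagreement):
    return disagreement + 0.5 * (value - disagreement)

# Pre-merger: if talks fail, the stand-alone content owner earns nothing extra.
print(nash_fee(value_to_rival, 0.0))   # 5.0

# Post-merger: if talks fail, some of the rival's subscribers defect to the
# integrated distributor, so walking away is now worth something.
diverted_profit = 4.0                  # assumed recaptured downstream profit
print(nash_fee(value_to_rival, diverted_profit))   # 7.0: a higher fee

In this toy setup, the rival’s input cost rises from 5 to 7 not through any refusal to deal, but simply because failing to agree has become less painful for the integrated firm.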

I won’t relitigate the arguments and evidence here, except to note that the details end up mattering a lot. For example, if AT&T ended up negotiating with other firms over distributing the Time Warner content, is it better to model these negotiations as one-shot or multi-stage? How might negotiations with one firm affect outcomes with other firms? In what ways did evidence from previous mergers apply here–in particular, from when Comcast purchased NBCUniversal in 2011, a deal that included the NBC network along with Universal Pictures and cable channels such as Syfy, CNBC and MSNBC?

As one example of the kinds of issues that arise: in the Comcast/NBCUniversal case, which involved a consent decree but did not go to litigation, the US Department of Justice required that the newly merged firm agree to binding arbitration. In the AT&T case, the rule would have been that if some outside distributor wanted access to the Time Warner content, AT&T would not be allowed simply to deny that content while a negotiation was underway, but instead would need to provide the content and submit to binding arbitration over the price. Presumably, this provision makes it harder for the vertically merged firm to exploit its control of the key input. Given that the US Department of Justice had required this form of binding arbitration in the previous case, the government was then in the awkward position of arguing that it was not a sufficient safeguard in the AT&T case.

What are the main takeaways from all of this? Vertical mergers are hard for the antitrust authorities to challenge, and losing this high-profile case makes them harder. Modeling how a merger will come out is an inexact science: this holds whether the modeling is done by the firms involved or by those supporting or opposing the deal. Antitrust is about preserving competition, not about preventing firms from making deals that turn out to be unwise in a business sense. But at a broad level, antitrust also seeks to shape where businesses look for the opportunity to make profits. It seeks to push firms away from looking for profits by squeezing consumers, and instead toward looking for profits by providing more desirable products and services. The next time a vertical merger is challenged, it is likely to involve the details of a different industry in a different time and place, and the antitrust arguments for and against may play out differently, too.

Some Snapshots about Electrify Everything

One of the big-picture strategies for reducing carbon emissions into the atmosphere goes under the shorthand “electrify everything.” The basic idea is that if sufficient electricity could be generated in carbon-free ways, it could replace fossil fuels in many uses–not just in generating electricity, but also as fuel for transportation, for home heating, for industrial uses, and so on. What would be required over the next few decades to make this happen? For a useful starting point on the basics, a group of authors at the Hamilton Project at the Brookings Institution have written a background paper, “Ten Economic Facts about Electricity and the Clean Energy Transition” (April 27, 2023). Here are a few of their facts that caught my eye.

The composition of sources for electricity generation has changed substantially in the last decade.

The big shift, as the graph shows, is a decline in the share of electricity produced from coal and a rise in the share produced from natural gas. An optimist will note that electricity from natural gas emits only about half the carbon of a coal-fired plant; a pessimist will note that electricity from natural gas still produces about half the carbon of a coal-fired plant. Toward the bottom of the figure, you can also see the substantial rises in electricity generated from wind power and from grid-scale solar.

Is the existing growth rate of wind and solar sufficient for these sources of energy to serve as the basis for an “electrify everything” scenario?

The answer is “no.” As the graph above shows, it will take decades at the current pace for solar and wind to provide sufficient electricity to dominate the electricity grid. But in the US economy, only about 27% of greenhouse gas emissions come from generating electricity. A similar share of carbon emissions comes from transportation, and a similar share again from industrial uses, with the rest of emissions coming from commercial/residential real estate and from agriculture. For the “electrify everything” agenda to work (even with whatever energy conservation efforts are possible), the US electricity grid will need to be dramatically larger than it currently is–and the expansion of wind and solar would need to be much larger still. Of course, an upturn in nuclear power would reduce the need for expanding wind and solar, as well as providing baseload power for nights when the wind isn’t blowing.
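To see why the grid must grow so much, here is a back-of-the-envelope calculation. The 27% electricity share is from the report; the transportation and industry shares below are my own stand-in numbers for “a similar share,” so treat the multiplier as a rough order of magnitude, not a forecast.

# Rough sizing of an "electrify everything" grid from emissions shares.
share_electricity = 0.27   # US emissions from generating electricity (cited above)
share_transport   = 0.28   # assumed: "a similar share" from transportation
share_industry    = 0.23   # assumed: "a similar share again" from industry

# Crude proxy: treat each sector's emissions share as proportional to the
# electricity it would demand once electrified (ignoring efficiency gains).
multiplier = (share_electricity + share_transport + share_industry) / share_electricity
print(round(multiplier, 1))   # ~2.9: the grid roughly triples, before any demand growth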

Solar has become the cheapest way to generate electricity–at least on sunny days in the right locations.

The authors of the report note:

Solar is, on a levelized cost (meaning apples-to-apples) basis, the cheapest source of new energy today and is likely to play a central role in the buildout of a zero-carbon grid. It is cost-effective to have more solar generation because the production costs are lower: although costs can vary substantially based on a variety of factors, the average cost to produce one megawatt-hour of utility-scale solar energy ranges between $28 and $41 compared to between $45 and $74 and between $65 and $152 for the equivalent amount of energy from natural gas and coal, respectively (Lazard 2021). As a result, new construction of solar far outpaces new natural gas plants and there is no new coal under construction in the United States (EIA n.d.e). The growth of solar will need precise planning to fully exploit the benefits of its lower cost while accommodating its intermittency.

The quotation points to several major issues with solar and wind power. One is that they are intermittent sources of energy, so backup power is needed for, say, frigid windless winter nights in Minnesota, where I live. Another is that solar and wind power are somewhat location-specific: that is, you can build a natural gas-burning electricity plant pretty much anywhere, but the cost-effectiveness of solar and wind depends on whether they are located in sunny or windy locations.
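For readers unfamiliar with the “levelized cost” yardstick in the quotation, here is a minimal sketch of the calculation. Every input is an illustrative assumption of mine, not a figure from Lazard; the point is only to show how discounting both costs and output puts up-front capital and fuel-free future generation on a comparable per-megawatt-hour footing.

# Minimal levelized cost of energy (LCOE): discounted lifetime costs divided
# by discounted lifetime output. All inputs below are invented assumptions.
def lcoe(capex, annual_opex, annual_mwh, years=25, rate=0.06):
    discount = [(1 + rate) ** -t for t in range(1, years + 1)]
    total_cost = capex + sum(annual_opex * d for d in discount)
    total_mwh = sum(annual_mwh * d for d in discount)
    return total_cost / total_mwh

# Hypothetical 100 MW solar farm at a 25% capacity factor:
annual_mwh = 100 * 8760 * 0.25   # megawatt-hours generated per year
print(round(lcoe(capex=90e6, annual_opex=1.5e6, annual_mwh=annual_mwh), 1))
# ~39.0 dollars per MWh, inside the $28-41 solar range quoted above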

The problem of intermittent power from solar and wind isn’t just day-to-day–it’s also an annual fluctuation.

The dashed blue line in the figure shows month-to-month consumption of electricity in 2022, compared to January. Notice that demand for electricity drops off a little in January and February, then rises in the summer with the demand for air-conditioning, and then drops off again in the fall. This consumption pattern is more or less matched by fluctuations in solar power, which is at its lowest in the shorter, colder days of January, and much higher by mid-summer. (It’s important to be clear that the graph shows percentage changes compared to January, not quantities. Thus, the graph does not say that solar produces a high enough quantity to cover all consumption!) Wind power drops off in the summer. Different parts of the country are more likely to use either electricity from natural gas or from coal, which helps to explain the seasonal movements in these electricity sources.

The “electrify everything” agenda requires a dramatic increase in mining and processing of minerals, for uses like solar panels, wind turbines, and batteries for electric vehicles.

The report notes: “A typical electric car uses more than five times more minerals than an internal combustion engine-powered car, which raises issues around where US companies source those minerals. Similarly, relative to energy produced by coal and natural gas, wind and solar energy are also mineral intensive, relying on significant quantities of zinc and silicon …”

The “electrify everything” agenda doesn’t just require generating the electricity; it also requires transmitting it across potentially long distances from the solar panels or the windmills to the users.

To make the “electrify everything” agenda work, you need to imagine a dramatically larger quantity of electricity than is currently produced being moved on average longer distances–from south to north, from windy to not windy. This step involves not just a financial cost and a need for minerals and materials to expand the grid, but also plans and political power to build new right-of-way for these many new electricity lines all around the country. Sure, some of these can be expansions of transmission lines along existing right-of-way. But remember that the new solar and wind generation will be at the locations naturally best-suited to generate power, which in many cases is not going to be where existing electricity-generating capacity is located. So dramatic expansions of right-of-way are going to be needed, too.

The “electrify everything” agenda isn’t impossible. But it’s daunting. And a number of those who favor the agenda in theory are not eager to support many of the actual specific steps needed to make it happen. It won’t be possible to dramatically expand generating capacity, mineral production, and transmission lines without some noticeable tradeoffs.