Biggest Economic Applications of Generative AI: McKinsey

What are likely to be the biggest economic applications of the current wave of artificial intelligence technologies? The McKinsey Global Institute takes a shot at answering the question in “The economic potential of generative AI” (June 2023). Their estimate is that these technologies could add $2.6 trillion to $4.4 trillion annually to the global economy; for perspective, total world GDP is about $100 trillion.

As a starting point, what is “generative AI” and why is it distinctive?

For the purposes of this report, we define generative AI as applications typically built using foundation models. These models contain expansive artificial neural networks inspired by the billions of neurons connected in the human brain. Foundation models are part of what is called deep learning, a term that alludes to the many deep layers within neural networks. Deep learning has powered many of the recent advances in AI, but the foundation models powering generative AI applications are a step change evolution within deep learning. Unlike previous deep learning models, they can process extremely large and varied sets of unstructured data and perform more than one task. Foundation models have enabled new capabilities and vastly improved existing ones across a broad range of modalities, including images, video, audio, and computer code. AI trained on these models can perform several functions; it can classify, edit, summarize, answer questions, and draft new content, among other tasks. …

The rush to throw money at all things generative AI reflects how quickly its capabilities have developed. ChatGPT was released in November 2022. Four months later, OpenAI released a new large language model, or LLM, called GPT-4 with markedly improved capabilities. Similarly, by May 2023, Anthropic’s generative AI, Claude, was able to process 100,000 tokens of text, equal to about 75,000 words in a minute—the length of the average novel—compared with roughly 9,000 tokens when it was introduced in March 2023. And in May 2023, Google announced several new features powered by generative AI, including Search Generative Experience and a new LLM called PaLM 2 that will power its Bard chatbot, among other Google products.

In what particular areas of the economy are these tools likely to be especially useful? McKinsey suggests that four areas will account for about three-quarters of the productivity gains:

About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D. Across 16 business functions, we examined 63 use cases in which the technology can address specific business challenges in ways that produce one or more measurable outcomes. Examples include generative AI’s ability to support interactions with customers, generate creative content for marketing and sales, and draft computer code based on natural-language prompts, among many other tasks. …

Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities.
Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today. … The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.

The McKinsey report goes into much more detail. But in broader terms, the discussion makes me think about how people have interacted with computer technology over time. For example, it used to be that only a limited number of high-caste workers could access the computers. When I was in high school in the 1970s, there were only a couple of terminals for the entire class to use–and we felt pretty up-to-date to have those. The personal computer revolution of the 1980s and 1990s, and then the smartphone revolution of the last two decades, have democratized access to the technology. But the specific manner of using the technology still mostly involved using a more-or-less intuitive piece of software or an app. We are now moving into a space where it will be possible to use the technology with “natural language”–not just for searching on a map, but for a range of less structured tasks.

I don’t have a well-developed sense of how this will ultimately affect the tasks we usually do at work, along with wages and inequality. I suspect it will touch us all in various ways. But in particular, those in customer operations, marketing and sales, software engineering, and R&D should have eyes wide open to the evolving possibilities.

Harry Markowitz (1927-2023): Bringing Finance Into Economics

Harry Markowitz has died. He won the Nobel prize in economics in 1990, jointly with Merton Miller and William Sharpe “for their pioneering work in the theory of financial economics.”

I won’t try here to review how Markowitz’s work underlies the capital asset pricing model (“cap-M,” as economists pronounce it) that is now a fundamental part of finance and business school classes. Some readable and accessible starting points for the details are Markowitz’s Nobel lecture and an overview article by Hal Varian, “A Portfolio of Nobel Laureates: Markowitz, Miller and Sharpe,” in the Winter 1993 issue of the Journal of Economic Perspectives.

For me, the most fundamental part of Markowitz’s work was that he, as much as anyone, brought finance under the umbrella of what was understood to be “economics.” Markowitz told a story to illustrate this point in a number of interviews: here’s the version from the Journal of Financial Planning in May 2010. To set the stage, it’s 1955, Markowitz has been working at the Rand Corporation in California, but he needs to fly back to Chicago for the oral defense of his doctoral dissertation. Markowitz says:

“I remember landing at Midway Airport thinking, ‘Well, I know this field cold. Not even Milton Friedman will give me a hard time.’ And, five minutes into the session, he says, ‘Harry, I read your dissertation. I don’t see any problems with the math, but this is not a dissertation in economics. We can’t give you a Ph.D. in economics for a dissertation that isn’t about economics.’ And for most of the rest of the hour and a half, he was explaining why I wasn’t going to get a Ph.D. At one point, he said, ‘Harry, you have a problem. It’s not economics. It’s not mathematics. It’s not business administration.’ And the head of my committee, Jacob Marschak, shook his head, and said, ‘It’s not literature.’

“So we went on with that for a while and then they sent me out in the hall. About five minutes later Marschak came out and said, ‘Congratulations, Dr. Markowitz.’ So, Friedman was pulling my leg. At the time, my palms were sweating, but as it turned out, he was pulling my leg …”

There’s a minor dispute over whether Friedman was serious: it’s not clear to me that he was just goofing. Yes, Friedman wasn’t ultimately willing to block the dissertation. But in a later interview, while Friedman did not recall the episode, he said: “What he [Markowitz] did was a mathematical exercise, not an exercise in economics.”

But in his Nobel acceptance lecture back in 1990, Markowitz ended by telling a shorter version of this story, and then said: “As to the merits of his [Milton Friedman’s] arguments, at this point I am quite willing to concede: at the time I defended my dissertation, portfolio theory was not part of Economics. But now it is.”

In Varian’s JEP article, he quotes a 1990 comment from Robert Merton (who would share the Nobel prize in 1997 with Myron Scholes for work on how to value derivative financial instruments): “As recently as a generation ago, finance theory was still little more than a collection of anecdotes, rules of thumb, and manipulations of accounting data. The most sophisticated tool of analysis was discounted value and the central intellectual controversy centered on whether to use present value or internal rate of return to rank corporate investments.” As much as anyone, Markowitz brought to economics and finance the ideas that financial choices involved systematic tradeoffs between risk and return, and that risks needed to be assessed in the context of an overall portfolio rather than one investment at a time. Such ideas and their implications were radical at the time; now, they are commonplace.

Nation’s Report Card: Test Scores Sink for 13 Year-Olds

The National Assessment of Educational Progress (NAEP) is sometimes called “the nation’s report card.” It tests a nationally representative group of students at ages 9, 13, and 17. At an individual school, it’s possible for tests to get easier over time, or for grade inflation to result in higher letter grades for the same test scores. But the NAEP test is written so that the scores are directly comparable from year to year.

The test results offered some modest reasons for encouragement in the early 2000s. But the 2023 scores for 13 year-olds are bleak. The press release with the “highlights” is titled: “Scores decline again for 13-year-old students in reading and mathematics.” Here are the reading and math scores since 1971:

The test was previously given in 2020, so it’s plausible to read the scores as the result of many students losing at least a year of in-classroom education. A cynic would point out that scores were already falling from 2012 to 2020. The 2023 reading scores are essentially back to 1971 levels, and the math scores have fallen nearly that much. Scores fell for pretty much every group: by gender, race/ethnicity, public or private school, region of the country, and so on.

This is a survey, not an academic study, so it doesn’t seek to pin down causes and effects in a rigorous statistical way. But the survey data has some clues.

Absences are up. “[T]here were increases in the percentages of 13-year-old students who reported missing 3 or 4 days and students who reported missing 5 or more days in the last month. The percentage of students who reported missing 5 or more days doubled from 5 percent in 2020 to 10 percent in 2023.”

The share of students who read for fun is falling. “In 2023, fourteen percent of students reported reading for fun almost every day. This percentage was 3 percentage points lower than 2020, and 13 percentage points lower than 2012. Overall, the percentage of 13-year-old students who reported reading for fun almost every day was lower in 2023 than in all previous assessment years.”

Fewer students are taking algebra by age 13. “Compared to 2020, there were no significant changes in the percentages of students by type of mathematics taken during the 2022–23 school year. Compared to 2012, however, the percentage of 13-year-old students in 2023 who reported they were taking regular mathematics increased from 28 to 42 percent, while the percentage of students taking pre-algebra decreased from 29 to 22 percent, and the percentage of students taking algebra dropped from 34 to 24 percent.” 

I’ve got no magic wand for improving school performance, but there are some tips from the research literature. For example, international comparisons with countries that have high-performing K-12 education systems suggest the importance of 1) comprehensive and competitive exit exams for high school graduates; 2) competition between schools; and 3) teachers drawn from the top one-third of all college graduates. Lessons drawn from US charter schools that have success with low-income and minority students suggest the importance of five factors: 1) intensive teacher observation and training, 2) data-driven instruction, 3) increased instructional time, 4) intensive tutoring, and 5) a culture of high expectations. Notice that neither list emphasizes traditional resource inputs like per-pupil spending and student-teacher ratios. In particular, the availability of small-group tutoring so that students don’t fall behind seems supported by a variety of studies.

Forty years ago in 1983, there was a National Commission on Excellence in Education which issued a report called “A Nation At Risk: Imperative For Educational Reform.” It caused a lot of hand-wringing at the time. Here are the often-quoted opening paragraphs from the 1983 report:

Our Nation is at risk. Our once unchallenged preeminence in commerce, industry, science, and technological innovation is being overtaken by competitors throughout the world. This report is concerned with only one of the many causes and dimensions of the problem, but it is the one that undergirds American prosperity, security, and civility. We report to the American people that while we can take justifiable pride in what our schools and colleges have historically accomplished and contributed to the United States and the well-being of its people, the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people. What was unimaginable a generation ago has begun to occur– others are matching and surpassing our educational attainments.

If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves. We have even squandered the gains in student achievement made in the wake of the Sputnik challenge. Moreover, we have dismantled essential support systems which helped make those gains possible. We have, in effect, been committing an act of unthinking, unilateral educational disarmament.

As the 1983 report points out, the US has been losing its lead in education. Historically, the US was one of the first countries to have near-universal elementary education, and then one of the first countries to have a mass expansion of high school education, and then to have a mass expansion of college education. As other countries caught up in K-12 education, it used to be common, a few decades ago, to argue that the large share of US students going on to higher education helped to make up for the relatively poor performance of the US K-12 schools.

But now other countries have caught up, both in K-12 education and in the share of population going on to higher education. Other countries like Germany also have aggressive and wide-ranging apprenticeship programs for high school students to build skills and get connected to work opportunities.

The ongoing performance of the US K-12 system has been a slow motion crisis for decades. The drop-off of test scores among current 13 year-olds in the aftermath of the pandemic is another crisis. As far as I know, there isn’t any evidence that just giving existing teachers a raise, without more sweeping and fundamental changes, will make any difference. But we know that, on average, those who try to navigate 21st-century labor markets with education levels little different from the 1970s are facing a lifelong disadvantage.

Helping the Neighborhood by Charging for Parking

When economists look at a resource (including their own work lives!), they contemplate whether that resource is being used in the most effective way. Donald Shoup has been writing for years about the resource of the “curb lane”–that is, where drivers often park their cars. His ongoing theme is that free (or underpriced) curbside parking often seems attractive to both drivers and nearby stores and businesses, but when the predictable tradeoff of the seemingly “free” resource is that drivers are cruising up and down streets and can’t find a place to park near where they want to go, the attraction of “free” is diminished.

In a recent essay, Shoup explores the idea of “Parking Benefit Districts” (Journal of Planning Education and Research, published online March 2023). He points out some of the tradeoffs of “free” parking, like the wasted time and energy of drivers as they cruise the streets, criss-crossing with pedestrians and bicyclists, looking for a spot. His common recommendation is that when parking is scarce, parking meters should use prices that vary by location and time of day, in such a way that if you are looking for street parking, there will usually be an available space within a block or two–if you are willing to pay the price, of course. In some cases, the appropriate answer may be to reduce curb parking and free up space for bus and bike lanes, for outdoor dining, or for additional trees and less cluttered sidewalks.

Shoup writes:

Transportation planners have neglected curb parking because nothing is moving, and land-use planners have neglected it because it is in the roadway. No one seems to know how to solve the curb parking problem, except for followers of Nobel laureate William Vickrey who proposed that cities should set the prices for curb spaces to “keep the amount of parking down sufficiently so there will almost always be space available for those willing to pay the fee” (Vickrey 1954). Prices can vary by place and time of day to leave one or two open curb spaces on every block. Where all but one or two curb spaces on a block are occupied, the parking is both well used and readily available. …

Market prices for curb parking exemplify what Jaime Lerner (2013) called urban acupuncture: a simple touch at a critical point (in this case, the curb lane) can benefit the whole city. In another medical metaphor, streets are a city’s blood vessels, and overcrowded free curb parking is like plaque on the vessel walls, leading to a stroke. Market prices for curb parking prevent this urban plaque.

The biggest problem with charging for curb parking is politics. Cities charge for public services like water and electricity to recover the capital and operating costs of providing them, but curb parking doesn’t have any obvious capital or operating cost to recover. Unmoored from the need to recover any costs in the city’s budget, curb parking prices are purely political (Manville and Pinsky 2021). How can cities create political support for paid parking?

The idea of a “parking benefit district” is that the money spent on parking in any given neighborhood should go to that neighborhood, to be used for local purposes like picking up garbage, planting trees, cleaning up graffiti, fixing sidewalks, and so on. As Shoup writes:

The goal is not to persuade drivers they should pay for curb parking. The goal is to convince stakeholders they should charge for curb parking. Anyone who does not store a car on the street may begin to see free curb parking the way landlords see rent control. Free curb parking is rent control, for cars. If people want better public services more than they want free curb parking, the curb lane can benefit everyone, not just drivers who store their cars on the street.

Shoup offers a brief discussion of cities that have enacted a version of parking benefit districts for certain areas of the city.

Shoup offers a thought experiment of how a parking benefit district might work in New York City. He writes: “Because New York does not charge drivers for parking in 97 percent of its three million curb spaces, it offers a titanic subsidy for cars. If the city charged only $5.50 per curb space per day, it would earn $6 billion a year, about the same as the $6.1 billion farebox revenue from all New York City public transit in 2019 …” He points out that on the Upper West Side of Manhattan, off-street parking costs from $35 to $147 per day. If parking meters for this area collected the low-end number here–$35/day–it could provide more than $100 million per year for street-level neighborhood improvements.
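Shoup’s headline number is easy to verify with back-of-the-envelope arithmetic. The space count and daily rate come from his essay; the calculation itself is my own sketch:

```python
# Back-of-the-envelope check of Shoup's New York City figures.
curb_spaces = 3_000_000    # total NYC curb parking spaces, per Shoup
rate_per_day = 5.50        # hypothetical charge per space per day, in dollars

annual_revenue = curb_spaces * rate_per_day * 365
print(f"Annual revenue at $5.50/day: ${annual_revenue / 1e9:.1f} billion")
# -> about $6.0 billion, comparable to the $6.1 billion 2019 farebox revenue
```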


Why are the Recent US Bank Runs So Much Faster?

When Silicon Valley Bank was shut down in March 2023, 25% of its deposits had been withdrawn on March 9, and another 62% of deposits was expected to be withdrawn on March 10. The bank was on track to lose almost 90% of its deposits in two days! There haven’t been a lot of US bank runs in the last few decades, but the recent ones have been fast. Jonathan Rose discusses “Understanding the Speed and Size of Bank Runs in Historical Comparison” (Economic Synopses, Federal Reserve Bank of St. Louis, 2023, No. 12).
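The arithmetic behind that “almost 90 percent” figure is simple to check (my own sketch, using the two shares reported above):

```python
# Quick check of the two-day run on Silicon Valley Bank's deposits.
withdrawn_mar9 = 0.25     # share of deposits withdrawn on March 9
expected_mar10 = 0.62     # share expected to be withdrawn on March 10

two_day_total = withdrawn_mar9 + expected_mar10
print(f"Expected two-day loss: {two_day_total:.0%} of deposits")
# -> 87%, i.e. almost 90% in two days
```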

Here’s a table from Rose to help frame the topic:

The column showing “deposit insurance coverage” is meaningful because if deposits were already covered by insurance, there is no reason for them to be withdrawn. In retrospect, the bank runs of 2008–at Wachovia and Washington Mutual–stand out on this list. At those institutions, most of the deposits were already insured, and the decline in deposits during the run was relatively small–and also took a few weeks.

In the recent examples, including Silicon Valley Bank, the share of deposits covered by deposit insurance was much lower. Instead, these banks sought to attract large deposits from particular groups, like venture-capital backed firms (Silicon Valley Bank), crypto investors (Silvergate and Signature), private equity firms (Signature and Silicon Valley Bank), and high net worth individuals (Signature and First Republic). A bank where most of the deposits are not covered by deposit insurance is more vulnerable to runs.

In addition, it has been possible for decades now to withdraw funds from a bank very quickly: as Rose points out, the Continental Illinois bank run back in 1984 was described at the time as a “lightning fast electronic run” by big corporations pulling out their deposits. What seems different now is that a run can happen with a large number of mid-sized and smaller investors with similar concerns (like VC-backed firms or crypto investors), who are connected with each other via social media networks.

The underlying lesson here is an old one, but in a new context. Banks will seek out ways to make money, including by specializing in providing services to specific groups like VC-backed firms, crypto investors, private equity, and so on. There’s nothing wrong with a bank trying to attract customers! But when those specialized groups also provide substantial deposits to the bank, going beyond the previous deposit insurance limits, then there are risks of bank runs that cause shivers through the entire banking system. At a minimum, the bank regulators need to be aware of these risks and adapt the deposit insurance premiums for such banks accordingly. The more aggressive step would be to force banks to separate their provision of specialized financial services into a company separate from the actual “bank.”

Looking Back at the Supply Chain Woes

During the supply chain woes that plagued the global economy in 2020 and 2021, the Federal Reserve Bank of New York put together what it called the Global Supply Chain Pressure Index (GSCPI). The latest version of the index both highlights the size of those disruptions and also suggests these pressures have now faded away.

Here’s the index from 1997 up through May 2023. The spikes in supply chain pressure in spring and summer of 2020, and then again in late 2021 and into 2022, are apparent. The decline to usual levels in the last few months is also clear.


What is actually being measured here? The New York Fed explains:

The GSCPI integrates a number of commonly used metrics with the aim of providing a comprehensive summary of potential supply chain disruptions. Global transportation costs are measured by employing data from the Baltic Dry Index (BDI) and the Harpex index, as well as airfreight cost indices from the U.S. Bureau of Labor Statistics. The GSCPI also uses several supply chain-related components from Purchasing Managers’ Index (PMI) surveys, focusing on manufacturing firms across seven interconnected economies: China, the euro area, Japan, South Korea, Taiwan, the United Kingdom, and the United States.

The Fed stirs together these ingredients and gets a distribution of scores. The vertical axis of the index is measured in standard deviations, which may not be intuitive for some readers. Here’s my off-the-cuff effort at explaining. Imagine a bell-shaped curve showing a distribution of some variable: that is, most of the scores are centered around the middle, with many fewer scores on the left or right tails. For a particular version of this bell-shaped curve known as a normal distribution, about two-thirds of the scores are within one standard deviation (plus or minus) of the average score; 95% of the scores are within two standard deviations; and 99.7% of the scores are within three standard deviations.

The scores on this Global Supply Chain Pressure Index are not a neat bell-shaped “normal” curve. But it’s still true that when a score gets to be three or (gulp!) four standard deviations from the mean, it’s extremely rare. In other words, those supply chain disruptions from 2020 through early 2022 were really, truly outside the experience of the previous quarter-century.
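For readers who want the numbers behind that claim: under a normal distribution, the probability of landing within k standard deviations of the mean is erf(k/√2), so the tail shares can be computed directly. (This is a sketch of the benchmark case; recall that the actual index is not exactly normal.)

```python
import math

def share_within(k: float) -> float:
    """P(|Z| <= k) for a standard normal variable, via erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3, 4):
    outside = 1 - share_within(k)
    print(f"{k} standard deviations: {share_within(k):.2%} within, "
          f"roughly 1 in {1 / outside:,.0f} readings beyond")
```

A three-sigma reading would be roughly a 1-in-370 event under normality, and a four-sigma reading rarer than 1 in 15,000, which is why the 2020-2022 spikes stand out so sharply against a quarter-century of data.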


Update on US Budget Deficits (If Anyone Cares)

The standard economic advice about big budget deficits is that they can make sense when an economy is experiencing a recession and high unemployment rates, to jump-start the economy. However, when an economy is not in recession and the unemployment rate is low, then big budget deficits tend to fuel inflation in the present and an unpleasant burden of payments in the future. The Congressional Budget Office regularly publishes updates on the US budget situation. Here are some graphics from its May 2023 report (“An Update to the Budget Outlook: 2023 to 2033”).

Here’s the pattern of budget deficits in the last 50 years. The black line shows the deficit. It’s notable that the deficits accompanying the Great Recession from 2008-2009 were by far the largest over this period, until being outstripped by the deficits accompanying the COVID pandemic. In the graph, the light purple areas show net interest outlays of the government: thus, you can see that interest payments were quite high (as a share of GDP) in the 1980s and 1990s, much lower in the early 2000s, but are now projected to rise again in the next few years. The dark purple area is the “primary” budget deficit, which is based on spending and taxes after stripping out the interest payments on past borrowing.

This pattern of substantially higher annual budget deficits will drive up the total federal debt. As the figure shows, the federal debt/GDP ratio is on its way to rising past heights previously scaled only by the debt financing needed to fight World War II. But unlike the end of World War II, there is no dramatic fall in federal debt on the horizon.

An obvious question is the extent to which these deficits reflect either higher spending or lower taxes. As the figure shows, federal taxes are slightly above their 50-year average, while federal spending–after spiking in response to the pandemic–is projected to remain substantially above its 50-year average.

I’d emphasize two lessons here:

1) The federal government spending reaction to the COVID pandemic turned out to be excessive. It’s hard to put one’s mind back in the mindset of March 2020. The sense was that something needed to be done, and it seemed nearly impossible to do too much, and heartless to do too little. Some economists were re-fighting battles of 2008, when they felt that fiscal stimulus was too low, and were determined not to make that mistake again. But as it turned out, the actual recession as a result of the pandemic was only two months long. In comparison, the recession from December 2007 to June 2009 was 18 months long. The result was three major stimulus programs: expanded unemployment insurance, direct household payments, and the “Paycheck Protection Program” aimed at supporting businesses. The emphasis in spring 2020 was on getting money out the door quickly, not on targeting the aid to those in need. Thus, it’s perhaps not a shock that the Associated Press is now reporting that $400 billion or more was stolen, wasted, or misspent. Adding these funds to the US economy, especially at a time when the ability of the economy to supply goods and services was still constrained by the pandemic, played a big role in setting off the inflation that started in 2021. It would be nice to think that lessons have been learned about how to react more effectively in the future.

2) The higher spending in the pandemic seems to have reset US federal spending to a higher level. Of course, the historical averages for federal spending and taxes shown above are not a natural law, like the boiling point of water. One can certainly make a case that as America’s elderly population rises, spending on programs like Social Security and Medicare will necessarily rise as well–and it may not be good policy to try to trim other federal spending in response to an aging US population. But that said, substantially higher spending and much-the-same level of taxes is not a recommended formula for a time when there isn’t a recession, unemployment rates are low, and inflation is above desired levels. This isn’t just a US pattern. It’s why publications like the Economist magazine are running headlines like: “Fiscal policy in the rich world is mind-bogglingly reckless: High inflation and low unemployment require tighter budgets not looser ones” (June 14, 2023). But the constituency for lower budget deficits is always a small one, until the harms become undeniably large.

Remote Sales Tax on E-Commerce Purchases From Other States

When I was first looking at issues of state-level sales taxes, many years ago, the issue was when buyers went outside a jurisdiction with a sales tax: for example, people buying cars in New Hampshire (with no sales tax) to avoid the Massachusetts sales tax. But the online economy has brought a much bigger version of this problem. If a business has no physical presence in a state, but sells online to people in that state, does it owe state sales tax? In the case of South Dakota v. Wayfair (2018), the US Supreme Court held that states could apply their sales tax to out-of-state online sellers.

The decision seems reasonable to me, but it opened a new can of worms. It meant that an online seller now was supposed to comply with different sales tax rules across the United States. The Government Accountability Office (GAO) reviews the issues in “Remote Sales Tax: Federal Legislation Could Resolve Some Uncertainties and Improve Overall System” (November 2022).

The GAO report notes that online sales have been a rising share of retail sales overall.

The challenge for sellers, faced with the Supreme Court decision, is how to comply with the tax rules of states. It’s worth remembering that most online sellers are not Amazon–which is sometimes called a “market facilitator.” According to GAO, most businesses with remote online sales are pretty small: 85% have fewer than 10 employees and 74% have fewer than five employees.

Many states have sales taxes, and many have local-level sales taxes as well. GAO writes:

Of the 45 states with a statewide sales tax, 37 also have local sales taxes. In addition, while Alaska does not have a statewide sales tax, it does have local sales taxes. Local sales tax authority varies widely. In some states, only selected jurisdictions may impose a sales tax, while in others a broad range of jurisdictions—such as counties, municipalities, and various local authorities—may opt, either by ordinance or local referendum, to impose a sales tax. Tax policy specialists have estimated that approximately 30,000 local jurisdictions in the U.S. have the authority to impose sales taxes and that between 10,000 and 12,000 do impose sales taxes. While technically imposed on the purchaser, both state and local sales taxes are usually accompanied by a collection requirement—sellers are required to collect the tax at the time of purchase.

For sellers, knowing the tax rates for the different jurisdictions is just the start. Many states have thresholds to exempt smaller firms from the sales taxes, sometimes defined in terms of the number of transactions and sometimes in terms of the value of sales. GAO writes:

[A]s of September 2022, 22 states and the District of Columbia had adopted economic nexus threshold values of $100,000 in sales or 200 transactions into the state each year. Three large-population and large-Gross Domestic Product states (California, New York, and Texas) adopted higher monetary thresholds of $500,000. More recently, some states (including Florida, Kansas, and Missouri) adopted monetary thresholds without an accompanying transactional threshold. Other states (including Iowa and Maine) eliminated previously-established transactional thresholds in favor of monetary-only thresholds. Some states also raised or lowered their previously-established monetary thresholds, including Tennessee, which moved from $500,000 to $100,000.

The timing of these thresholds is calculated in various ways: for example, sometimes the previous calendar year, sometimes the previous 12 months from the current date, sometimes the previous four quarters. In addition, some states exempt certain items from the sales tax. If an out-of-state firm exceeds the thresholds, it typically needs to register with the tax authorities of each relevant state.
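To make the compliance burden concrete, here is a minimal sketch of the nexus test a seller would have to run for each state. The thresholds below are illustrative only, based on the common “$100,000 in sales or 200 transactions” pattern and the variants GAO describes (monetary-only thresholds, and the higher $500,000 thresholds in some large states); actual state rules, and the measurement windows they use, vary.

```python
# Illustrative sketch of an economic-nexus check, based on the threshold
# patterns described in the GAO report. Real state rules differ in amounts,
# measurement windows, and whether a transaction count applies at all.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NexusRule:
    state: str
    sales_threshold: float        # dollars of sales into the state
    txn_threshold: Optional[int]  # None if the state has no transaction test

def has_nexus(rule: NexusRule, sales: float, transactions: int) -> bool:
    """A seller must register if it meets EITHER threshold (where both exist)."""
    if sales >= rule.sales_threshold:
        return True
    if rule.txn_threshold is not None and transactions >= rule.txn_threshold:
        return True
    return False

# Hypothetical rules following the patterns in the report:
common = NexusRule("South Dakota", 100_000, 200)      # $100k or 200 transactions
monetary_only = NexusRule("Florida", 100_000, None)   # monetary threshold only
large_state = NexusRule("California", 500_000, None)  # higher monetary threshold

print(has_nexus(common, 50_000, 250))         # True: transaction count trips nexus
print(has_nexus(monetary_only, 50_000, 250))  # False: no transaction test
print(has_nexus(large_state, 120_000, 30))    # False: below the $500k threshold
```

The point of the sketch is that the same sales activity can create a registration obligation in one state and not in another, and a remote seller has to repeat this calculation, with the correct local parameters and time window, for every state it ships into.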

Imagine a firm located in a state that doesn’t have a sales tax: now, it needs to be set up to deal with 45 different state sales taxes.

There are a number of uncertainties here. For example, is a firm making out-of-state sales subject to being audited separately by every state where it has customers? The Supreme Court decision focused on state sales tax, but can states also collect local sales taxes? As GAO notes, some states have a substantial number of taxing authorities:

According to our review of state documentation and other third-party legal analysis, Alabama has more than 300 local tax authorities, Alaska has more than 100, Colorado has 70, and Louisiana has 64. Each of these states has a centralized system to streamline registration and filing for remote businesses.

This fragmentation of sales tax rules across states and localities just can’t be the best way to collect sales tax on online purchases. The GAO estimates that states are currently collecting about $30 billion a year on sales tax from out-of-state online sellers, so these tax liabilities are not going away. But if one believes, as I do, that online sales are a boon to consumers (greater choice, often lower prices, reduced time in transit), then it makes sense to think about a simpler system.

Broadly speaking, there are two options here. One is to establish a set of state-level guidelines for describing their remote sales taxes–rates, thresholds, exemptions, everything. This would at least make it easier for the sellers. The more sweeping choice would be to have an interstate collaborative mechanism: if states want to collect from online sellers in other states, they would register with the interstate organization. In taking this step, the state would submit a comprehensive form that covered all aspects of its sales taxes. Private companies that sell accounting and tax software would be able to access this combined data. The idea is that an online seller would have one place to turn for all the needed information, and complying with the information available from that centralized site would fulfill its legal obligations as a taxpayer.

The task of coordinating such a collaborative mechanism isn’t a simple one. The federal government has limited power to tell the states what to do in this area. However, setting up this kind of mechanism isn’t an especially partisan issue: every state can retain its own tax rules. Thus, it’s really a test of the ability of state-level governments to solve a nuts-and-bolts practical problem.

What Permits are Needed for New Electricity Transmission Lines?

Imagine that you want to build a new electricity transmission line, so that the power generated by new solar or wind projects can be transmitted over the grid to where it is needed. The Permitting Institute provides the following helpful figure to show the process, although you may need to expand the figure on your screen or go to the original source to read the steps. As noted in small type at the bottom, the figure only includes federal requirements, so state and local permitting issues would come in addition to these. If you consider yourself a supporter of trying to electrify multiple sectors of the economy–electricity, transportation, heating and cooling of buildings, manufacturing processes–with a major expansion of solar and wind power, the electrical grid will need to perhaps triple in size in the next few decades. From that standpoint, the process shown in the figure is a problem.

Antitrust: Dilatancy Before the Earthquake?

Many years ago, I learned that “dilatancy” is a stage in the build-up before an earthquake, when rocks have been pushed together as tightly as possible under the pressures of shifting tectonic plates. Antitrust regulation may be experiencing its own dilatancy before an earthquake.

The underlying problem here is that the actual antitrust laws (the Sherman and Clayton antitrust acts) lay out general guidelines, but not in a level of detail that necessarily offers clear guidance in specific cases. Thus, since 1968 the federal antitrust authorities at the Federal Trade Commission and the US Department of Justice have spelled out more detailed guidelines for which mergers will be deemed acceptable and which ones may be challenged by the regulators. However, the FTC and DoJ don’t have a blank slate for writing these guidelines; instead, the guidelines are strongly shaped by past court decisions, which in turn are influenced by legal and economic arguments. Thus, the guidelines are updated from time to time, maybe every 10-20 years or so.

The guidelines now come in two parts: the Horizontal Merger Guidelines published in 2010 look at mergers happening between two firms in the same industry. The Vertical Merger Guidelines published in 2020 look at mergers happening between two firms where one is “upstream” and another is “downstream” in the same supply chain.

However, the FTC antitrust regulators under Lina M. Khan withdrew support for the vertical merger guidelines in 2021, although the US DoJ antitrust regulators did not officially do so. Then in January 2022, the FTC and DoJ antitrust regulators announced plans to “modernize” both sets of antitrust guidelines. But here we are, 18 months later, without a concrete proposal for these new guidelines. The talk is that the current antitrust regulators would like to shift the existing guidelines quite substantially, but because the existing guidelines are built on decades of actual court precedents, they are struggling with how to phrase the rules they prefer in a way that has a chance of passing judicial muster.

In broad terms, what’s at stake here? Timothy J. Muris was chair of the Federal Trade Commission from 2001-2004, and thus writes as (on the whole) a defender of the existing approach in “Neo-Brandeisian Antitrust: Repeating History’s Mistakes” (AEI Economic Policy Working Paper Series, January 30, 2023). Here are a few of his points that struck me in particular.

1) Beware of oversimplified histories of how antitrust regulation has developed.

A standard and often-repeated story is that the antitrust regulators used to be “tough,” which was good, but then a bunch of free-market ideologues (many based at the University of Chicago) ensorcelled the courts and academia and made antitrust “weak,” which was bad. According to this story, it’s time to cast off the ideological blinders and go back to the good old days.

This story has a certain rhetorical appeal, but even a cursory appreciation of US history suggests that it has some severe holes. Back in the 1950s, when giant firms like General Motors, Ford, US Steel, Exxon, AT&T, General Electric, and DuPont were dominating markets across the US, this is supposed to be the timeframe of exceptionally aggressive antitrust enforcement? Sure doesn’t look like it. I wrote about a more nuanced history of the evolution of antitrust doctrine at the start of this year in “Complexifying Antitrust.”

Muris adds the additional useful point that while University of Chicago scholars were certainly involved in critiquing the prevailing antitrust in the 1960s, they were neither the first to do so, nor were they the ones who led the way in actually formulating the new rules. As Muris writes: “The Chicago scholars, like the revolutionaries of 1776, agreed on what they opposed, but not on what the world post-revolution should look like …”

I guess the names of antitrust critics who predated the University of Chicago critique won’t mean much unless you read the Muris paper about their arguments, but for the sake of naming some names, the earlier critics of antitrust in the 1950s and 1960s include Fred Rowe (Yale), Morris Adelman (MIT), Donald Turner (Harvard), Robert Pitofsky (NYU), Milton Handler (Columbia), and Thomas Kauper (Michigan), along with American Bar Association reports published in 1956–among others. Muris also points out that the leading legal treatise on antitrust doctrine, starting in the late 1960s, was written by Philip Areeda of Harvard, who was later joined as a co-author by Herbert Hovenkamp of the University of Pennsylvania; through multiple editions it remains what Muris calls “by far the most influential source on antitrust law for courts, scholars, and practitioners alike.” Former Supreme Court Justice Stephen Breyer, appointed by Bill Clinton in 1994 and usually regarded as leaning toward the left side of the court, was a prominent advocate of the antitrust approach that has been conventional for the last half-century or so.

2) Will we see the return of Robinson-Patman antitrust arguments?

Lina Khan at the FTC has repeatedly praised the Robinson-Patman Act of 1936. In contrast, as Muris notes: “Virtually all antitrust enforcers, commentators, and practitioners have condemned this statute for over 50 years.”

For some insight into the issues here, it’s useful to consider the A&P grocery chain, which started in the mid-19th century and became the largest retail chain store in the US for over 40 years. Muris quotes Marc Levinson, a biographer of the company, with this reminder:

By 1929, when it became the first retailer ever to sell $1 billion of merchandise in a single year, A&P owned nearly 16,000 grocery stores, 70 factories, and more than 100 warehouses. It was the country’s largest coffee importer, the largest butter buyer, and the second-largest baker. Its sales were more than twice those of any other retailer.

How did A&P get so big? It set up its own purchasing and distribution network, buying directly from farmers and cutting out wholesalers and middlemen. It bought in large and predictable volumes, and thus got good prices as a buyer–which were passed along to consumers. It set up the most efficient distribution system of its time: for example, the same trucks that delivered bread from A&P-owned bakeries to A&P stores were also used by the company for other distributions. It used its massive sales data to reduce the share of unsold products and spoilage. It also customized products for varying tastes across the country: for example, Philadelphians like their butter with less salt and a lighter color than do most New England markets.

In short, A&P expanded its markets by selling high-quality groceries at lower prices. Of course, it was cordially hated by both the wholesalers/middlemen and the smaller stores it drove out of business. In general, A&P was the leader of a rise in chain stores during the 1920s and 1930s, driving out smaller single-store businesses. Many states imposed special taxes to limit chain stores; some states proposed laws to block them altogether. Indeed, references to “monopoly” or “anticompetitive behavior” in this time frame often refer to bigger stores attracting customers with low prices, high quality, and improved selection.

As Muris notes, the Robinson-Patman Act of 1936 was originally titled the Wholesale Grocer’s Protection Act; in fact it was drafted by lawyers for the Wholesale Grocers Association, which represented the wholesalers and smaller retailers who were losing market share to the chain stores. Patman famously said: “Chain stores are out. There is no place for chain stores in the American economic picture.”

In 1944, the federal government indicted A&P and its executives for violating the Sherman antitrust act. The company was essentially convicted of expanding its sales through more efficient methods of operating and lower prices, and thus of causing harm to competitors. The government also advanced a theory of “predatory pricing,” which was that although the A&P prices had been lower for decades, this was all part of a longer-run plot where–after driving out competitors–A&P would then be able to charge much higher prices for decades into the future. To block the theoretical scenario of higher future prices, it was necessary to prevent actual current prices from being so low.

As innumerable commentators pointed out, then and now, consumers were clearly better-off for decades as a result of A&P. But that was no defense under the antitrust doctrine of the time. Indeed, under the antitrust rules that prevailed up through the 1960s, a company that used efficiency gains to compete with lower prices would often deny doing so. Muris writes:

Lawyers arguing for mergers certainly performed handstands 60 years ago to avoid claiming their mergers reduced costs and prices. They feared they would be accused of planning to lower prices, therefore taking market share from competitors, harming rivals, and thus committing the paramount sin of the era: increasing concentration. Both the harm to rivals and the increased concentration could themselves have been enough to jeopardize a merger, and they thus prompted such vigorous denials from the merging parties. Will such arguments be the new requirement for merging firms, de jure or de facto?

But under pressure from a wide array of critics, the idea that efficiency and price-cutting should be considered anti-competitive was fading by the early 1960s. As Muris notes:

In 1977, this growing criticism led the DOJ to publish a major attack on Robinson-Patman, finding the act “protectionist” with a “deleterious impact on competition” and, ultimately, on consumers. The FTC, which had issued nearly 1,400 Robinson-Patman complaints over the preceding four decades, was reaching the same conclusion: The agency dramatically slowed enforcement in the 1970s and all but ended it thereafter.

3) Protecting competitors or consumers?

The official mission of the Federal Trade Commission is “Protecting America’s Consumers,” which may seem straightforward. But in a prominent essay about Amazon back in 2017, Lina Khan argued for the possibility that while Amazon might seem to be benefiting consumers in the short run, it might be setting consumers up to be worse off in the long run. This is the Robinson-Patman logic resurrected and brought up to the present in a different context.

Indeed, the underlying logic of the Robinson-Patman arguments as antitrust legislation and practice evolved into the 1950s and 1960s was that a reduction in the number of competitors in a market was an antitrust violation–even if competition in that market still seemed quite robust.

The Brown Shoe case is one of many prominent examples. Two shoe companies, Brown Shoe and GR Kinney, wished to merge. Muris describes the state of competition in the shoe market at the time:

Brown Shoe was the third-largest retailer nationwide, and it made about 4 percent of all shoes. Kinney was the eighth-largest retailer and manufacturer, although it accounted for less than 2 percent of retail sales and was the manufacturer of only 0.5 percent of all shoes. Moreover, manufacturing overall was not concentrated, as the four largest firms made 23 percent of the country’s shoes and the 24 largest firms accounted for only 35 percent of all shoes manufactured. At retail, the two combined for 2.3 percent of all stores selling shoes.

Notice that both companies made shoes and also had retail outlets. The idea was that the retail outlets could offer shoes from both companies. However, the government antitrust regulators argued that having one company producing 4.5% of shoes was excessive concentration. Moreover, they argued that greater efficiency from the merger would allow the prices for shoes to be cut–which would disadvantage other shoe companies.

Again, this old-style argument seems to support a version of competition in which no firms lose out–and especially that firms do not lose out because they are less efficient or have higher prices. The old-style theory is that consumers benefit from having lots of places to shop, but that consumers do not benefit if many of them choose to shop at places with lower prices, and thus drive some firms out of business.

4) How does resurrecting these older and discredited theories of antitrust relate to the modern economy?

Many of the current issues in antitrust are about digital companies: Amazon, Google, Facebook, Netflix, Apple, and others. Other topics are about large retailers like WalMart, Target, and Costco. Still other topics are about mergers in local areas: for example, if a small metro area has only two hospitals, and they propose a merger, how will that affect both prices to consumers and wages for health care workers in that area? Another set of topics involves how to make sure that when drug patents expire, generic drugs have a fair opportunity to compete. Another topic is about tech companies that pile up a “thicket” of patents, with new patents continually replacing those that expire, as a way of holding off new competitors.

None of these issues require returning to the old antitrust argument that passing along efficiency gains to consumers in the form of lower prices should be prosecuted by antitrust authorities. None of them imply that the goal of antitrust should be to protect competitors, rather than consumers. If the antitrust powers-that-be at the Federal Trade Commission and the US Department of Justice try to revise the existing merger guidelines back to the 1940s, 1950s, and 1960s, it will be a seismic shock to this body of law. Such an effort would almost certainly be blocked by the courts. Perhaps the cynical prediction is that the new horizontal and vertical merger guidelines, if and when they emerge, will involve a blast of old-style populist rhetoric but relatively few major substantive changes.

For those interested in more discussion of antitrust policy, and especially how in certain areas a more activist antitrust policy might help consumers and workers without a need to return to Robinson-Patman, some useful starting points on this blog include: