Update on US Budget Deficits (If Anyone Cares)

The standard economic advice about big budget deficits is that they can make sense to jump-start an economy experiencing a recession and high unemployment. However, when an economy is not in recession and unemployment rates are low, big budget deficits tend to fuel inflation in the present and an unpleasant burden of payments in the future. The Congressional Budget Office regularly publishes updates on the US budget situation. Here are some graphics from its May 2023 report (“An Update to the Budget Outlook: 2023 to 2033”).

Here’s the pattern of budget deficits in the last 50 years. The black line shows the deficit. It’s notable that the deficits accompanying the Great Recession from 2008-2009 were by far the largest over this period, until being outstripped by the deficits accompanying the COVID pandemic. In the graph, the light purple areas show net interest outlays of the government: thus, you can see that interest payments were quite high (as a share of GDP) in the 1980s and 1990s, much lower in the early 2000s, but are now projected to rise again in the next few years. The dark purple area is the “primary” budget deficit, which is based on spending and taxes after stripping out the interest payments on past borrowing.

This pattern of substantially higher annual budget deficits will drive up the total federal debt. As the figure shows, the federal debt/GDP ratio is on its way to rising past the heights previously scaled only by the debt financing of World War II. But unlike the end of World War II, there is no dramatic fall in federal debt on the horizon.
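For readers who like to see the arithmetic behind that trajectory, debt dynamics follow a standard accounting identity: next year’s debt/GDP ratio equals this year’s ratio scaled by (1 + interest rate)/(1 + GDP growth rate), plus the primary deficit as a share of GDP. Here is a minimal sketch in Python; the interest-rate, growth, and primary-deficit values are illustrative assumptions, not CBO projections.

```python
# Stylized debt/GDP dynamics: d' = d * (1 + r) / (1 + g) + p,
# where r is the nominal interest rate on the debt, g is nominal GDP growth,
# and p is the primary deficit as a share of GDP.
# Parameter values below are illustrative assumptions, not CBO projections.

def project_debt_ratio(d0: float, r: float, g: float, p: float, years: int) -> float:
    """Iterate the debt-ratio identity forward for a number of years."""
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + p
    return d

# Start near a debt ratio of 1.0 (100% of GDP) and run 10 years.
print(round(project_debt_ratio(d0=1.0, r=0.035, g=0.04, p=0.03, years=10), 3))
# -> 1.247: a persistent 3%-of-GDP primary deficit pushes the ratio up by
# roughly a quarter of GDP in a decade, even with growth slightly above
# the interest rate.
```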

An obvious question is the extent to which these deficits reflect either higher spending or lower taxes. As the figure shows, federal taxes are slightly above their 50-year average, while federal spending–after spiking in response to the pandemic–is projected to remain substantially above its 50-year average.

I’d emphasize two lessons here:

1) The federal government’s spending reaction to the COVID pandemic turned out to be excessive. It’s hard to put oneself back in the mindset of March 2020. The sense was that something needed to be done, and it was nearly impossible (and heartless) to do too little. Some economists were re-fighting battles of 2008, when they felt that fiscal stimulus was too low, and were determined not to make that mistake again. But as it turned out, the actual recession resulting from the pandemic was only two months long. In comparison, the recession from December 2007 to June 2009 was 18 months long. The result was three major stimulus programs: expanded unemployment insurance, direct household payments, and the “Paycheck Protection Program” aimed at supporting businesses. The emphasis in spring 2020 was on getting money out the door quickly, not on targeting the aid to those in need. Thus, it’s perhaps not a shock that the Associated Press is now reporting that $400 billion or more was stolen, wasted, or misspent. Adding these funds to the US economy, especially at a time when the ability of the economy to supply goods and services was still constrained by the pandemic, played a big role in setting off the inflation that started in 2021. It would be nice to think that lessons have been learned about how to react more effectively in the future.

2) The higher spending in the pandemic seems to have reset US federal spending to a higher level. Of course, the historical averages for federal spending and taxes shown above are not a natural law, like the boiling point of water. One can certainly make a case that as America’s elderly population rises, spending on programs like Social Security and Medicare will necessarily rise as well–and it may not be good policy to try to trim other federal spending in response to an aging US population. But that said, substantially higher spending and much-the-same level of taxes is not a recommended formula for a time when there isn’t a recession, unemployment rates are low, and inflation is above desired levels. This isn’t just a US pattern. It’s why publications like the Economist magazine are running headlines like: “Fiscal policy in the rich world is mind-bogglingly reckless: High inflation and low unemployment require tighter budgets not looser ones” (June 14, 2023). But the constituency for lower budget deficits is always a small one, until the harms become undeniably large.

Remote Sales Tax on E-Commerce Purchases From Other States

When I was first looking at issues of state-level sales taxes, many years ago, the main issue was buyers going outside a jurisdiction with a sales tax: for example, people buying cars in New Hampshire (which has no sales tax) to avoid the Massachusetts sales tax. But the online economy has brought a much bigger version of this problem. If a business has no physical presence in a state, but sells online to people in that state, does it owe state sales tax? In the case of South Dakota v. Wayfair (2018), the US Supreme Court held that states could apply their sales tax to out-of-state online sellers.

The decision seems reasonable to me, but it opened a new can of worms. It meant that an online seller now was supposed to comply with different sales tax rules across the United States. The Government Accountability Office (GAO) reviews the issues in “Remote Sales Tax: Federal Legislation Could Resolve Some Uncertainties and Improve Overall System” (November 2022).

The GAO report notes that online sales have been a rising share of retail sales overall.

The challenge for sellers, faced with the Supreme Court decision, is how to comply with the tax rules of the various states. It’s worth remembering that most online sellers are not Amazon–which is sometimes called a “market facilitator.” According to GAO, most businesses with remote online sales are pretty small: 85% have fewer than 10 employees and 74% have fewer than five employees.

Many states have sales taxes, and many of them have local-level sales taxes as well. GAO writes:

Of the 45 states with a statewide sales tax, 37 also have local sales taxes. In addition, while Alaska does not have a statewide sales tax, it does have local sales taxes. Local sales tax authority varies widely. In some states, only selected jurisdictions may impose a sales tax, while in others a broad range of jurisdictions—such as counties, municipalities, and various local authorities—may opt, either by ordinance or local referendum, to impose a sales tax. Tax policy specialists have estimated that approximately 30,000 local jurisdictions in the U.S. have the authority to impose sales taxes and that between 10,000 and 12,000 do impose sales taxes. While technically imposed on the purchaser, both state and local sales taxes are usually accompanied by a collection requirement—sellers are required to collect the tax at the time of purchase.

For sellers, knowing the tax rates for the different jurisdictions is just the start. Many states have thresholds to exempt smaller firms from the sales taxes, sometimes defined in terms of the number of transactions and sometimes in terms of the value of sales. GAO writes:

[A]s of September 2022, 22 states and the District of Columbia had adopted economic nexus threshold values of $100,000 in sales or 200 transactions into the state each year. Three large-population and large-Gross Domestic Product states (California, New York, and Texas) adopted higher monetary thresholds of $500,000. More recently, some states (including Florida, Kansas, and Missouri) adopted monetary thresholds without an accompanying transactional threshold. Other states (including Iowa and Maine) eliminated previously-established transactional thresholds in favor of monetary-only thresholds. Some states also raised or lowered their previously-established monetary thresholds, including Tennessee, which moved from $500,000 to $100,000.

The time window for these thresholds is calculated in various ways: for example, sometimes the previous calendar year, sometimes the previous 12 months from the current date, sometimes the previous four quarters. In addition, some states exempt certain items from the sales tax. If an out-of-state firm exceeds the thresholds, it typically needs to register with the tax authorities of each relevant state.
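To make the compliance problem concrete, here is a minimal sketch in Python of the threshold-checking logic that a remote seller (or its tax software) has to run state by state. The dollar and transaction thresholds follow the GAO examples quoted above, but the rule table and its structure are simplified illustrations, not a complete or current compendium of state law.

```python
# Hypothetical sketch of economic-nexus checking, based on the GAO examples
# quoted above. The rules here are illustrative, not a current tax table.

from dataclasses import dataclass
from typing import Optional

@dataclass
class NexusRule:
    sales_threshold: float        # annual dollar sales into the state
    txn_threshold: Optional[int]  # annual transaction count, if the state has one

# Illustrative rules drawn from the GAO description (as of September 2022).
RULES = {
    "SD": NexusRule(100_000, 200),   # $100,000 in sales OR 200 transactions
    "CA": NexusRule(500_000, None),  # higher, monetary-only threshold
    "TX": NexusRule(500_000, None),
    "FL": NexusRule(100_000, None),  # monetary threshold, no transaction test
}

def has_nexus(state: str, annual_sales: float, annual_txns: int) -> bool:
    """Return True if a remote seller must register and collect in `state`."""
    rule = RULES[state]
    if annual_sales >= rule.sales_threshold:
        return True
    return rule.txn_threshold is not None and annual_txns >= rule.txn_threshold

# A small seller with many low-value orders: 300 orders worth $15,000 total.
print(has_nexus("SD", 15_000, 300))  # True  -- the transaction count triggers nexus
print(has_nexus("CA", 15_000, 300))  # False -- monetary-only threshold not met
```

Notice how the same seller can owe registration in one state and not another on identical sales: that asymmetry, multiplied across 45 states and thousands of local jurisdictions, is the compliance burden at issue.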

Imagine a firm located in a state that doesn’t have a sales tax: now, it needs to be set up to deal with 45 different state sales taxes.

There are a number of uncertainties here. For example, is a firm making out-of-state sales subject to being audited separately by every state where it has customers? The Supreme Court decision focused on state sales tax, but can states also collect local sales taxes? As GAO notes, some states have a substantial number of taxing authorities:

According to our review of state documentation and other third-party legal analysis, Alabama has more than 300 local tax authorities, Alaska has more than 100, Colorado has 70, and Louisiana has 64. Each of these states has a centralized system to streamline registration and filing for remote businesses.

This fragmentation of sales tax rules across states and localities just can’t be the best way to collect sales tax on online purchases. The GAO estimates that states are currently collecting about $30 billion a year in sales tax from out-of-state online sellers, so these tax liabilities are not going away. But if one believes, as I do, that online sales are a boon to consumers (greater choice, often lower prices, reduced time in transit), then it makes sense to think about a simpler system.

Broadly speaking, there are two options here. One is to establish a set of state-level guidelines for describing their remote sales taxes–rates, thresholds, exemptions, everything. This would at least make it easier for the sellers. The more sweeping choice would be to have an interstate collaborative mechanism: if states want to collect from online sellers in other states, they would register with the interstate organization. In taking this step, the state would submit a comprehensive form that covered all aspects of its sales taxes. Private companies that sell accounting and tax software would be able to access this combined data. The idea is that an online seller would have one place to turn for all the needed information, and complying with the information available from that centralized site would fulfill its legal obligations as a taxpayer.

The task of coordinating such a collaborative mechanism isn’t a simple one. The federal government has limited power to tell the states what to do in this area. However, the question of setting up this kind of mechanism isn’t an especially partisan question: every state can retain its own tax rules. Thus, it’s really a test of the ability of state-level governments to solve a nuts-and-bolts practical problem.

What Permits are Needed for New Electricity Transmission Lines?

Imagine that you want to build a new electricity transmission line, so that the power generated by new solar or wind projects can be transmitted over the grid to where it is needed. The Permitting Institute provides the following helpful figure to show the process, although you may need to expand the figure on your screen or go to the original source to read the steps. As noted in small type at the bottom, the figure only includes federal requirements, so state and local permitting issues would be in addition to these. If you support trying to electrify multiple sectors of the economy–electricity, transportation, heating and cooling of buildings, manufacturing processes–with a major expansion of solar and wind power, then the electrical grid will perhaps need to triple in size in the next few decades. From that standpoint, the process in the figure is a problem.

Antitrust: Dilatancy Before the Earthquake?

Many years ago, I learned that “dilatancy” is a stage in the build-up before an earthquake, when rocks have been pushed together as tightly as possible under the pressures of shifting tectonic plates. Antitrust regulation may be experiencing its own dilatancy before an earthquake.

The underlying problem here is that the actual antitrust laws (the Sherman and Clayton antitrust acts) lay out general guidelines, but not in a level of detail that necessarily offers clear guidance in specific cases. Thus, since 1968 the federal antitrust authorities at the Federal Trade Commission and the US Department of Justice have spelled out more detailed guidelines for which mergers will be deemed acceptable and which may be challenged by the regulators. However, FTC and DoJ don’t have a blank slate for writing these guidelines; instead, the guidelines are strongly shaped by past court decisions, which in turn are influenced by legal and economic arguments. Thus, the guidelines are updated from time to time, maybe every 10-20 years or so.

The guidelines now come in two parts: the Horizontal Merger Guidelines published in 2010 look at mergers happening between two firms in the same industry. The Vertical Merger Guidelines published in 2020 look at mergers happening between two firms where one is “upstream” and another is “downstream” in the same supply chain.

However, the FTC antitrust regulators under Lina M. Khan withdrew support for the vertical merger guidelines in 2021, although the US DoJ antitrust regulators did not officially do so. Then in January 2022, the FTC and DoJ antitrust regulators announced plans to “modernize” both sets of antitrust guidelines. But here we are, 18 months later, without a concrete proposal for these new guidelines. The talk is that the current antitrust regulators would like to shift the existing guidelines quite substantially, but because the existing guidelines are built on decades of actual court precedents, they are struggling with how to phrase the rules they prefer in a way that has a chance of passing judicial muster.

In broad terms, what’s at stake here? Timothy J. Muris was chair of the Federal Trade Commission from 2001-2004, and thus writes as (on the whole) a defender of the existing approach in “Neo-Brandeisian Antitrust: Repeating History’s Mistakes” (AEI Economic Policy Working Paper Series, January 30, 2023). Here are a few of his points that struck me in particular.

1) Beware of oversimplified histories of how antitrust regulation has developed.

A standard and often-repeated story is that the antitrust regulators used to be “tough,” which was good, but then a bunch of free-market ideologues (many based at the University of Chicago) ensorcelled the courts and academia and made antitrust “weak,” which was bad. According to this story, it’s time to cast off the ideological blinders and go back to the good old days.

This story has a certain rhetorical appeal, but even a cursory appreciation of US history suggests that it has some severe holes. Back in the 1950s, when giant firms like General Motors, Ford, US Steel, Exxon, AT&T, General Electric, and DuPont were dominating markets across the US, this is supposed to be the timeframe of exceptionally aggressive antitrust enforcement? Sure doesn’t look like it. I wrote about a more nuanced history of the evolution of antitrust doctrine at the start of this year in “Complexifying Antitrust.”

Muris adds the useful point that while University of Chicago scholars were certainly involved in critiquing the prevailing antitrust doctrine in the 1960s, they were neither the first to do so, nor the ones who led the way in actually formulating the new rules. As Muris writes: “The Chicago scholars, like the revolutionaries of 1776, agreed on what they opposed, but not on what the world post-revolution should look like …”

I guess the names of antitrust critics who predated the University of Chicago critique won’t mean much, unless you read the Muris paper about their arguments, but for the sake of naming some names, the earlier critics of antitrust in the 1950s and 1960s include Fred Rowe (Yale), Morris Adelman (MIT), Donald Turner (Harvard), Robert Pitofsky (NYU), Milton Handler (Columbia), Thomas Kauper (Michigan), and the American Bar Association, which published critical reports in 1956–among others. Muris also points out that the leading legal treatise on antitrust doctrine starting in the late 1960s was written by Philip Areeda of Harvard, later joined as a co-author by Herbert Hovenkamp of the University of Pennsylvania; through multiple editions, it remains what Muris calls “by far the most influential source on antitrust law for courts, scholars, and practitioners alike.” Former Supreme Court Justice Stephen Breyer, appointed by Bill Clinton in 1994 and usually regarded as leaning toward the left side of the court, was a prominent advocate of the antitrust approach that has been conventional for the last half-century or so.

2) Will we see the return of Robinson-Patman antitrust arguments?

Lina Khan at the FTC has repeatedly praised the Robinson-Patman Act of 1936. In contrast, as Muris notes: “Virtually all antitrust enforcers, commentators, and practitioners have condemned this statute for over 50 years.”

For some insight into the issues here, it’s useful to consider the A&P grocery chain, which started in the mid-19th century and was the largest retail chain store in the US for over 40 years. Muris quotes Marc Levinson, a biographer of the company, with this reminder:

By 1929, when it became the first retailer ever to sell $1 billion of merchandise in a single year, A&P owned nearly 16,000 grocery stores, 70 factories, and more than 100 warehouses. It was the country’s largest coffee importer, the largest butter buyer, and the second-largest baker. Its sales were more than twice those of any other retailer.

How did A&P get so big? It set up its own purchasing and distribution network, buying directly from farmers and cutting out wholesalers and middlemen. It bought in large and predictable volumes, and thus got good prices as a buyer–which were passed along to consumers. It set up the most efficient distribution system of its time: for example, the same trucks that delivered bread from A&P-owned bakeries to A&P stores were also used by the company for other distributions. It used its massive sales data to reduce the share of unsold products and spoilage. It also customized products for varying tastes across the country: for example, Philadelphians like their butter with less salt and a lighter color than do most New England markets.

In short, A&P expanded its markets by selling high-quality groceries at lower prices. Of course, it was cordially hated both by the wholesalers/middlemen and by the smaller stores it drove out of business. In general, A&P was the leader of a rise in chain stores during the 1920s and 1930s, driving out smaller single-store businesses. Many states imposed special taxes to limit chain stores; some states proposed laws to block them altogether. Indeed, references to “monopoly” or “anticompetitive behavior” in this time frame often refer to bigger stores attracting customers with low prices, high quality, and improved selection.

As Muris notes, the Robinson-Patman Act of 1936 was originally titled the Wholesale Grocer’s Protection Act; in fact it was drafted by lawyers for the Wholesale Grocers Association, which represented the wholesalers and smaller retailers who were losing market share to the chain stores. Patman famously said: “Chain stores are out. There is no place for chain stores in the American economic picture.”

In 1944, the federal government indicted A&P and its executives for violating the Sherman antitrust act. The company was essentially convicted of expanding its sales through more efficient methods of operating and lower prices, and thus of causing harm to competitors. The government also advanced a theory of “predatory pricing,” which was that although the A&P prices had been lower for decades, this was all part of a longer-run plot in which–after driving out competitors–A&P would then be able to charge much higher prices for decades into the future. To block the theoretical scenario of higher future prices, it was necessary to prevent actual current prices from being so low.

As innumerable commentators pointed out, then and now, consumers were clearly better-off for decades as a result of A&P. But that was no defense under the antitrust doctrine of the time. Indeed, under the antitrust rules that prevailed up through the 1960s, a company that used efficiency gains to compete with lower prices would often deny doing so. Muris writes:

Lawyers arguing for mergers certainly performed handstands 60 years ago to avoid claiming their mergers reduced costs and prices. They feared they would be accused of planning to lower prices, therefore taking market share from competitors, harming rivals, and thus committing the paramount sin of the era: increasing concentration. Both the harm to rivals and the increased concentration could themselves have been enough to jeopardize a merger, and they thus prompted such vigorous denials from the merging parties. Will such arguments be the new requirement for merging firms, de jure or de facto?

But under pressure from a wide array of critics, the idea that efficiency and price-cutting should be considered anti-competitive was fading by the early 1960s. As Muris notes:

In 1977, this growing criticism led the DOJ to publish a major attack on Robinson-Patman, finding the act “protectionist” with a “deleterious impact on competition” and, ultimately, on consumers. The FTC, which had issued nearly 1,400 Robinson-Patman complaints over the preceding four decades, was reaching the same conclusion: The agency dramatically slowed enforcement in the 1970s and all but ended it thereafter.

3) Protecting competitors or consumers?

The official mission of the Federal Trade Commission is “Protecting America’s Consumers,” which may seem straightforward. But in a prominent essay about Amazon back in 2017, Lina Khan argued for the possibility that while Amazon might seem to be benefiting consumers in the short run, it might be setting consumers up to be worse off in the long run. This is the Robinson-Patman logic resurrected and brought up to the present in a different context.

Indeed, the underlying logic of the Robinson-Patman arguments as antitrust legislation and practice evolved into the 1950s and 1960s was that a reduction in the number of competitors in a market was an antitrust violation–even if competition in that market still seemed quite robust.

The Brown Shoe case is one of many prominent examples. Two shoe companies, Brown Shoe and GR Kinney, wished to merge. Muris describes the state of competition in the shoe market at the time:

Brown Shoe was the third-largest retailer nationwide, and it made about 4 percent of all shoes. Kinney was the eighth-largest retailer and manufacturer, although it accounted for less than 2 percent of retail sales and was the manufacturer of only 0.5 percent of all shoes. Moreover, manufacturing overall was not concentrated, as the four largest firms made 23 percent of the country’s shoes and the 24 largest firms accounted for only 35 percent of all shoes manufactured. At retail, the two combined for 2.3 percent of all stores selling shoes.

Notice that both companies made shoes and also had retail outlets. The idea was that the retail outlets could offer shoes from both companies. However, the government antitrust regulators argued that having one company producing 4.5% of shoes was excessive concentration. Moreover, it argued that greater efficiency from the merger would allow the prices for shoes to be cut–which would disadvantage other shoe companies.

Again, this old-style argument seems to support a version of competition in which no firms lose out–and especially that firms do not lose out because they are less efficient or have higher prices. The old-style theory is that consumers benefit from having lots of places to shop, but that consumers do not benefit if many of them choose to shop at places with lower prices, and thus drive some firms out of business.

4) How does resurrecting these older and discredited theories of antitrust relate to the modern economy?

Many of the current issues in antitrust are about digital companies: Amazon, Google, Facebook, Netflix, Apple, and others. Other topics are about large retailers like WalMart, Target, and Costco. Still other topics are about mergers in local areas: for example, if a small metro area has only two hospitals, and they propose a merger, how will that affect both prices to consumers and wages for health care workers in that area? Another set of topics involves how to make sure that when drug patents expire, generic drugs have a fair opportunity to compete. Another topic is about tech companies that pile up a “thicket” of patents, with new patents continually replacing those that expire, as a way of holding off new competitors.

None of these issues require returning to the old antitrust argument that passing along efficiency gains to consumers in the form of lower prices should be prosecuted by antitrust authorities. None of them imply that the goal of antitrust should be to protect competitors, rather than consumers. If the antitrust powers-that-be at the Federal Trade Commission and the US Department of Justice try to revise the existing merger guidelines back to the 1940s, 1950s, and 1960s, it will be a seismic shock to this body of law. Such an effort would almost certainly be blocked by the courts. Perhaps the cynical prediction is that the new horizontal and vertical merger guidelines, if and when they emerge, will involve a blast of old-style populist rhetoric but relatively few major substantive changes.

For those interested in more discussion of antitrust policy, and especially how in certain areas a more activist antitrust policy might help consumers and workers without a need to return to Robinson-Patman, some useful starting points on this blog include:

Was Bailing out the Silicon Valley Bank Depositors the Right Decision?

When Silicon Valley Bank failed in March 2023, the actual legal rule was that the Federal Deposit Insurance Corporation (FDIC) insured bank deposits only up to $250,000. This is plenty for just about every household. But a number of businesses had much larger sums on deposit at the bank, and when these businesses became concerned that the bank wasn’t financially secure, they started pulling out those deposits–and this bank run then caused the bank to be shut down. For a more detailed discussion of these events, see my earlier post “An Autopsy of Silicon Valley Bank from the Federal Reserve.”

The federal bank regulators were concerned that other firms were showing signs of pulling out deposits from other banks, and they announced that to stabilize the US banking system, they would guarantee all deposits–even those above the $250,000 limit. Was that decision appropriate? Raghuram Rajan and Luigi Zingales make the case against in “Riskless Capitalism” (Finance & Development, June 2023). They write:

Did uninsured depositors in the failed Silicon Valley Bank (SVB) need to be saved? The argument is that even though everyone knew that deposits over $250,000 were uninsured, if uninsured depositors had not been made whole, panic would have coursed through the banking system. Large depositors’ withdrawals from other banks would have compromised financial stability.

Perhaps! But if large depositors are always protected in the name of financial stability, why aren’t they at least charged the insurance fee that burdens the insured deposits? There are many low-cost ways for corporate treasurers to mitigate the risk of having money in a transaction account at a bank. They can keep only the amount needed to meet payroll and other immediate transactions in a demand deposit (checking) account and put additional soon-to-be needed cash in liquid money market funds. Yet too many firms did not practice elementary risk management. Streaming device maker Roku had more than $450 million in deposits at SVB, according to Reuters. While shareholders in SVB were deservedly wiped out and management let go, large depositors enjoyed riskless capitalism as the government changed the rules to benefit them.

A haircut could have been imposed on SVB’s large depositors. Based on past interventions by the Federal Deposit Insurance Corp (FDIC) this would have cost uninsured depositors about 10 percent of their balances. A few red-faced corporate treasurers would have justifiably lost their jobs. And if there were signs of contagion to other banks, the government could have announced a blanket implicit guarantee for all deposits, as US Treasury Secretary Janet Yellen eventually did. But the FDIC would have saved $20 billion and retained the principle that at least some of those who took risks paid the consequences. SVB would then be seen as capitalism penalizing the incompetent, rather than as an aberration—setting a precedent that will likely engender more attempts at riskless capitalism.

More generally, as the Federal Reserve’s own investigation put it, SVB failed “because of a textbook case of mismanagement by the bank.” If so, flighty uninsured demand deposits can be a feature, not a bug, in the system. If uninsured depositors pay attention, they can shut down incompetent or greedy bank management quickly, saving the taxpayer immense sums. If they are anesthetized because regulators invoke the tired argument that “this is not the time to worry about moral hazard,” uninsured depositors will not pay attention in the future.

The government decision was made after immense lobbying, including many cries for help from venture capitalists. David Sacks, of Craft Ventures, tweeted, “I’m asking for banking regulators to ensure the integrity of the system. Either deposits in the U.S. are safe or they’re not.” 

It’s important to remember that the choices here were not all-or-nothing. The federal regulators could have saved $20 billion by imposing losses of 10%–and provided some useful incentives for large bank depositors to pay attention to their corporate cash, as well. In addition, the payments that banks make for deposit insurance could be scaled so that banks with a greater share of very large deposits would pay more.

But the final paragraph I quoted from Rajan and Zingales crystallizes some of the key issues. Rules aren’t supposed to be changed after the fact to benefit businesses. Venture capitalists are supposed to know something about finance. When they start saying that “either deposits are safe or they’re not,” they seem to be stating that they were unaware that deposit insurance, by law, only went up to $250,000.

It’s perhaps useful to consider a hypothetical scenario: Say that a venture capital fund has done all of its due diligence, and determines that investing $50 million in a certain company is a good idea. In this hypothetical, the money is being sent to the company in an armored car, when it is suddenly hit by a passing disintegration ray from an alien spaceship. The money is gone–but it’s a sunk cost unrelated to the business prospects of the company. If it was previously worthwhile to invest $50 million in this company, it’s still worth making the same investment. It’s not a surprise that the venture capitalists wanted to rewrite the rules to avoid any losses at all in the Silicon Valley Bank debacle. But venture capitalists like to pride themselves on providing useful oversight for the firms in which they invest, and in the basic task of managing corporate cash jammed into large accounts at Silicon Valley Bank, they fell down badly on the oversight they claim to provide.

Interview with Daron Acemoglu: Tilting the Benefits of Technology to Workers

David A. Price serves as interlocutor in an interview: “Daron Acemoglu: On Henry Ford, making AI worker-friendly, and how democracy improves economic growth” (Econ Focus, Federal Reserve Bank of Richmond, Second Quarter 2023, pp. 22-26). The preface to the interview offers this summary: “Today, Acemoglu says hurray for economic growth — but is also concerned that choices made by policymakers and companies are channeling the gains from that growth away from workers. And as he sees things, the powerful AI technologies that have come to the fore in the past several years, embedded in products such as ChatGPT, should be regulated with the economic interests of workers in mind.” Here are a few of Acemoglu’s comments that caught my eye:

What type of AI do we want? What are the technologies of the future that would be most beneficial to society, particularly workers? I cannot imagine any technology that would be harmful to workers for a long period of time and yet would be beneficial for society. And therefore, my view is that right now we are going in the wrong direction in the AI community. We are going in the wrong direction in the tech community, because there is no regard paid to what these technologies are doing to workers’ jobs, democracy, mental health, all sorts of issues. So we really need to ask, can we redirect these technologies? …

[O]f course workers need to adapt as well. And I think workers who have skills or choose to specialize in things that one way or another are going to be done by machines are not going to do well. So I think social skills, social communication, teamwork, adaptability, and creativity are going to be rewarded by the labor market. The way that machines augment humans, humans should also augment machines.

But make no mistake, it’s not just those skills. Today, and I believe in the next 10 years, the United States economy is going to need a huge number of carpenters, electricians, plumbers, lots of people who do very valuable, very meaningful skill-requiring, expertise-requiring combinations of manual and cognitive work. It’s a mistake for us to think everything is going to be digital. And it could be very beneficial for us if we tried to make new machines, including AI, in such a way that they complement electricians, plumbers, carpenters. I think that complementarity is really critical. …

 If you want to think about workers benefiting, you have to think about what new tasks they can perform. And the key thing about electrical machinery — and the Ford factory in the early 20th century is a great exemplar of this — is that it generated a whole series of new tasks.

With the introduction of electrical machinery, production became more complex. So you needed workers to attend to the machinery and then you needed a lot of supporting occupations: maintenance, design, repair, and a whole slew of engineering tasks as well as many other white-collar occupations. So what really was beneficial both from the point of view of the workers and from the point of view of productivity wasn’t the fact that those factories were substituting electrical power for some other kind of power. They were completely reorganizing work in a way that made it more complex and thus created more gainful activities for workers.

Not everything was rosy. It was hard work. Compared to today, workers were worn out. They found it very difficult to keep up with the pace. It was still much noisier than the kind of factories that we would see later. And Henry Ford himself, especially later in his career, became zealous for anti-union activity. So it’s not like saying Ford was a visionary in every dimension. But Ford exemplified a new type of industrialization, which created new tasks and thus opportunities for workers.

I am perhaps less optimistic than Acemoglu about the ability of economists and social scientists to predict the current direction and effects of new technologies, and to propose ways of redirecting these technologies. Even if such analysis can be carried out in broadly persuasive ways, I am downright skeptical of the ability of the political system to implement such policies. Moreover, while the US and perhaps a few other countries are debating about what technology might become, other countries around the world will not be waiting for the results of this contemplative process, but will be moving ahead on the cutting edge of these technologies.

That said, it’s interesting to contemplate what kinds of technologies are encouraged by present economic and institutional arrangements. Technology often chases market size. Thus, investments in health care technologies that might be desirable to consumers in high-income countries will tend to be larger than those that could save lives in low-income countries. In addition, a health care technology aimed at a new market of consumers with health insurance may be a more attractive investment than a technology which, say, cuts an existing expense by 10%. Similarly, investments in agricultural technology that affect crops and farmers in high-income countries are likely to be larger than those that would improve the situation of crops and farmers in low-income countries. As Acemoglu suggests, business executives in high-income countries may be more likely to prioritize technologies that can replace workers, rather than technologies that empower workers. Venture capitalists may be more likely to support digital companies that can start up with relatively few employees, rather than supporting companies in industries that would require building factories and hiring more workers. A common criticism is that government tends to fund research projects that are pretty likely to show a positive result, and thus tends to emphasize research that offers predictable but modest gains, rather than research that offers unpredictable but sometimes much higher gains. There’s a lot of useful thinking to be done about the underlying incentives built into the existing ecosystem of technological investment.

In 2019, Acemoglu and Pascual Restrepo wrote “Automation and New Tasks: How Technology Displaces and Reinstates Labor” in the Spring issue of the Journal of Economic Perspectives. Interested readers might turn there for more detail. From the abstract of that article:

We present a framework for understanding the effects of automation and other types of technological changes on labor demand, and use it to interpret changes in US employment over the recent past. At the center of our framework is the allocation of tasks to capital and labor—the task content of production. Automation, which enables capital to replace labor in tasks it was previously engaged in, shifts the task content of production against labor because of a displacement effect. As a result, automation always reduces the labor share in value added and may reduce labor demand even as it raises productivity. The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage. The introduction of new tasks changes the task content of production in favor of labor because of a reinstatement effect, and always raises the labor share and labor demand. We show how the role of changes in the task content of production—due to automation and new tasks—can be inferred from industry level data. Our empirical decomposition suggests that the slower growth of employment over the last three decades is accounted for by an acceleration in the displacement effect, especially in manufacturing, a weaker reinstatement effect, and slower growth of productivity than in previous decades.
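To see the mechanics of the displacement and reinstatement effects in that abstract, here is a deliberately stylized numerical sketch in Python–a toy version of the task-based logic, not the authors’ full model. It assumes tasks lie on an interval [N − 1, N], tasks below a threshold I are automated, and the task aggregation is symmetric, so labor’s share of value added is simply the measure of tasks still assigned to labor.

```python
# Toy sketch of the displacement/reinstatement logic described in the
# abstract above -- a simplification, not Acemoglu and Restrepo's full model.
# Tasks lie on the interval [N - 1, N]; tasks up to threshold I are automated
# (performed by capital), the rest by labor. With symmetric task aggregation,
# labor's share of value added is the measure of tasks left to labor, N - I.

def labor_share(N: float, I: float) -> float:
    """Labor share of value added in the toy task model."""
    assert N - 1 <= I <= N, "automation threshold must lie inside the task range"
    return N - I

# Baseline: tasks on [0, 1], the bottom 30% automated.
base = labor_share(N=1.0, I=0.3)         # 0.70

# Displacement effect: automation expands (I rises), so the labor share falls.
displaced = labor_share(N=1.0, I=0.45)   # 0.55

# Reinstatement effect: new labor-intensive tasks are created (N rises), so
# the labor share recovers even though the automated range is unchanged.
reinstated = labor_share(N=1.15, I=0.45) # 0.70

print(base, displaced, reinstated)
```

The point of the toy version is the race the abstract describes: whether the labor share falls or recovers depends on whether the automation threshold I or the task frontier N is advancing faster.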

Tocqueville on Self-Interest Well Understood

One of the repelling magnets of the subject matter of economics, at least for many encountering it for the first time, is the assumption that people are self-interested. In the telling of this assumption, what listeners seem to hear is that economists believe that people are always selfish. Any distinctions are lost: for example, the notion that self-interest can be viewed as a working assumption rather than as a claim about the essential nature of people; or that self-interest might be important in certain settings, while being fully compatible with altruism in other settings; or that self-interest can easily co-exist with many forms of cooperation; or that self-interest can be viewed as another way of saying “freedom to make your own choices for the reasons you see fit”; or that self-interest fully understood is not a license to disregard the interests and desires of others.

Alexis de Tocqueville wrote about the distinctively American relationship with the idea of self-interest, which he just calls “interest,” in the second volume of Democracy in America, published in 1840. In particular, I’m thinking of Chapter VIII in volume II, section II, titled: “The Americans Combat Individualism By The Principle Of Interest Rightly Understood.”

Tocqueville writes about how, in earlier times, wealthy and powerful individuals often liked to talk about how they were guided by virtue, and by the greater public good, rather than by self-interest. A common implication was that ordinary people should do what they were told, and follow the path allotted to them, because this virtuous path led to the common good. But Americans were different, Tocqueville argued. They instead were excited about the idea that pursuing self-interest (well-understood!) was the most useful way for a society to pursue the common good. Here is a part of Tocqueville’s meditation on the subject:

[T]he inhabitants of the United States almost always manage to combine their own advantage with that of their fellow-citizens … In the United States hardly anybody talks of the beauty of virtue; but they maintain that virtue is useful, and prove it every day. The American moralists do not profess that men ought to sacrifice themselves for their fellow-creatures because it is noble to make such sacrifices; but they boldly aver that such sacrifices are as necessary to him who imposes them upon himself as to him for whose sake they are made. They have found out that in their country and their age man is brought home to himself by an irresistible force; and losing all hope of stopping that force, they turn all their thoughts to the direction of it. They therefore do not deny that every man may follow his own interest; but they endeavor to prove that it is the interest of every man to be virtuous. I shall not here enter into the reasons they allege, which would divert me from my subject: suffice it to say that they have convinced their fellow-countrymen.

Montaigne said long ago: “Were I not to follow the straight road for its straightness, I should follow it for having found by experience that in the end it is commonly the happiest and most useful track.” The doctrine of interest rightly understood is not, then, new, but amongst the Americans of our time it finds universal acceptance: it has become popular there; you may trace it at the bottom of all their actions, you will remark it in all they say. It is as often to be met with on the lips of the poor man as of the rich. In Europe the principle of interest is much grosser than it is in America, but at the same time it is less common, and especially it is less avowed; amongst us, men still constantly feign great abnegation which they no longer feel. The Americans, on the contrary, are fond of explaining almost all the actions of their lives by the principle of interest rightly understood; they show with complacency how an enlightened regard for themselves constantly prompts them to assist each other, and inclines them willingly to sacrifice a portion of their time and property to the welfare of the State. In this respect I think they frequently fail to do themselves justice; for in the United States, as well as elsewhere, people are sometimes seen to give way to those disinterested and spontaneous impulses which are natural to man; but the Americans seldom allow that they yield to emotions of this kind; they are more anxious to do honor to their philosophy than to themselves. …

The principle of interest rightly understood is not a lofty one, but it is clear and sure. It does not aim at mighty objects, but it attains without excessive exertion all those at which it aims. As it lies within the reach of all capacities, everyone can without difficulty apprehend and retain it. By its admirable conformity to human weaknesses, it easily obtains great dominion; nor is that dominion precarious, since the principle checks one personal interest by another, and uses, to direct the passions, the very same instrument which excites them. The principle of interest rightly understood produces no great acts of self-sacrifice, but it suggests daily small acts of self-denial. By itself it cannot suffice to make a man virtuous, but it disciplines a number of citizens in habits of regularity, temperance, moderation, foresight, self-command; and, if it does not lead men straight to virtue by the will, it gradually draws them in that direction by their habits. If the principle of interest rightly understood were to sway the whole moral world, extraordinary virtues would doubtless be more rare; but I think that gross depravity would then also be less common. … I am not afraid to say that the principle of interest, rightly understood, appears to me the best suited of all philosophical theories to the wants of the men of our time, and that I regard it as their chief remaining security against themselves. Towards it, therefore, the minds of the moralists of our age should turn; even should they judge it to be incomplete, it must nevertheless be adopted as necessary.

I do not think upon the whole that there is more egotism amongst us than in America; the only difference is, that there it is enlightened—here it is not. Every American will sacrifice a portion of his private interests to preserve the rest; we would fain preserve the whole, and oftentimes the whole is lost. … No power upon earth can prevent the increasing equality of conditions from inclining the human mind to seek out what is useful, or from leading every member of the community to be wrapped up in himself. It must therefore be expected that personal interest will become more than ever the principal, if not the sole, spring of men’s actions; but it remains to be seen how each man will understand his personal interest.

There’s a lot to chew on here (as is so often true with Tocqueville). I might emphasize the insight that self-interest well-understood may help to build “habits of regularity, temperance, moderation, foresight, self-command,” and the insight that how people learn to understand their self-interest may be of considerable importance. But perhaps the biggest question is this: if people are not to act in their work and civic lives in the ways that they personally perceive as in their own self-interest (well-understood!), then who instead gets to decide how people should act?

How PowerPoint (and Other Slide Presentations) Can Inhibit Thinking

Twenty years ago, Edward Tufte published The Cognitive Style of PowerPoint: Pitching Out Corrupts Within, an essay that still speaks to many of us who have sat through presentations where bullet points are read aloud to us, one by one by one, and where the speaker feels a need to race through the last two-dozen slides in the final five minutes of allotted time. Tufte studied thousands of slides from actual presentations, and offers detailed and precise analysis of concrete examples. But for a sense of his overall argument, I’ll quote only some of his broader themes:

The fans of PowerPoint are presenters, rarely audience members. Slideware helps speakers to outline their talks, to retrieve and show diverse visual materials, and to communicate slides in talks, printed reports, and internet. And also to replace serious analysis with chartjunk, over-produced layouts, cheerleader logotypes and branding, and corny clipart. That is, PowerPointPhluff.

PP convenience for the speaker can be costly to both content and audience. These costs result from the cognitive style characteristic of the standard default PP presentation: foreshortening of evidence and thought, low spatial resolution, a deeply hierarchical single-path structure as the model for organizing every type of content, breaking up narrative and data into slides and minimal fragments, rapid temporal sequencing of thin information rather than focused spatial analysis, conspicuous decoration and Phluff, a preoccupation with format not content, an attitude of commercialism that turns everything into a sales pitch. …

Many true statements are too long to fit on a PP slide, but this does not mean we should abbreviate the truth to make the words fit. It means we should find a better way to make presentations. With so little information per slide, many, many slides are needed. Audiences consequently endure a relentless sequentiality, one damn slide after another. When information is stacked in time, it is difficult to understand context and evaluate relationships. Visual reasoning usually works more effectively when the relevant information is shown adjacent in space within our eyespan. This is especially the case for statistical data, where the fundamental analytical act is to make comparisons …

In day-to-day practice, PowerPoint templates may improve 10% or 20% of all presentations by organizing inept, extremely disorganized speakers, at a cost of detectable intellectual damage to 80%. For statistical data, the damage levels approach dementia. Since about 10¹⁰ to 10¹¹ PP slides (many using the templates) are made each year, that is a lot of harm to communication with colleagues. Or at least a big waste of time. The damage is mitigated since meetings relying on the PP cognitive style may not matter all that much. By playing around with Phluff rather than providing information, PowerPoint allows speakers to pretend that they are giving a real talk, and audiences to pretend that they are listening.

Tufte is of course a genius at thinking about effective graphical presentation of data. Many of us are not going to live up to his standard. But many of us can do better, too. As he points out, a good image that presents a set of data relationships can convey a great deal, and breaking those messages into bullet-points can obscure so much.

Tufte quotes from perhaps the classic PowerPoint satire of all time, Peter Norvig’s “Gettysburg Powerpoint Presentation.” Notice that it manages to include six slides for a two-minute presentation. Before the more famous part of the text, Lincoln would begin:

Good morning. Just a second while I get this connection to work. Do I press this button here? Function-F7? No, that’s not right. Hmmm. Maybe I’ll have to reboot. Hold on a minute. Um, my name is Abe Lincoln and I’m your president. While we’re waiting, I want to thank Judge David Wills, chairman of the committee supervising the dedication of the Gettysburg cemetery. It’s great to be here, Dave, and you and the committee are doing a great job. Gee, sometimes this new technology does have glitches, but we couldn’t live without it, could we? Oh – is it ready? OK, here we go:

The speech that follows would be accompanied by six slides, which are perhaps not in the most useful order, but hey, the slides show professionalism and really help out the audience, right?

In a similar vein, Gokul Rajaram recently posted an anecdote about his experience in creating a set of slides for Eric Schmidt at Google:

In 2006, I helped Eric Schmidt [CEO of Google at the time] create a deck outlining Google’s strategy, for a presentation Eric was delivering to the company. It taught me a profound lesson on how to present.

When I showed up to my first meeting with Eric, he asked me to visit with every product team at Google, chat with them to figure out what they were working on, and then summarize it on one slide (for each team).

Easy enough, I thought. I would use 3-5 bullet points per slide.

“But”, Eric said, “I want no words on any slide”.

My well-laid plans disintegrated in an instant. How was I supposed to convey the key messages from each team, without WORDS?

Eric must have seen the panic on my face, and kindly gave me a hint. “Put the text in speaker notes”.

“But what goes on the slides, Eric?” I continued panicking.

That classic, gentle “Eric smile” fluttered on his face. “Why, images, of course!”

“You mean, you want each slide to just be comprised of images?”

“You got it. And use the title wisely. 7-8 words max. Let’s meet in a week to review progress.”

Rajaram suggests several lessons from his experience. For example, one of them is “The larger the audience, the fewer the words on the slide.” But my point here is not to reify Schmidt’s approach to slide-decks, or Tufte’s for that matter. (Tufte is a believer in detailed paper handouts to accompany slides, a practice that might have been workable in 2003 when he wrote his essay, but is so countercultural in 2023 as to be from a different era.) I just think we would all be better off with slide presentations that have fewer bullet points, fewer pages jam-packed with words, and fewer detailed numerical tables that can’t be read by anyone more than 30 feet away. Presentations impose costs of time and attention on others. In successful presentations, your attention is attracted, rather than taxed, and the entire time feels well-spent.

Shifting US Population Pyramids

A “population pyramid” is a graph that shows the number of people of each age group, divided into male and female. Because the oldest age groups at the top of the figure have small populations, the graph narrows toward a point at the top. But looking across population pyramids for different years, you can see the movements of larger and smaller generations as they age. Here are population pyramids for 2000, 2010, and 2020 from the US Census Bureau (“Age Profiles of Smaller Geographies Don’t Always Mirror the National Trend,” by Laura Blakeslee, Megan Rabe, Zoe Caplan and Andrew Roberts, May 25, 2023).

The authors write:

The pyramid was larger in 2020 than it was in either 2010 or 2000. This reflects the growth in the U.S. population: 331.4 million people in 2020, up 22.7 million (7.4%) from the 308.7 million in 2010. Between 2000 and 2010, the population grew by 27.3 million (9.7%) from 281.4 million people. The U.S. population also aged since 2000. The baby boom cohort moved up the pyramid, from 36-to-54-year-olds in 2000 to 46-to-64-year-olds in 2010 and 56-to-74-year-olds in 2020. The millennials were mostly in their teens and 20s in 2010 but young adults (in their 20s and 30s) a decade later. At the same time, the base of the pyramid representing children under age 5 got smaller in 2020, reflecting a recent decrease in the number of births in the United States.

Population pyramid diagrams have been around a long time. The authors also include one from a Bureau of the Census report in 1900. Again, each bar represents five years of age. You can see that the number of people in the 85-90 age group almost disappears in this diagram, and the bars for those 90 and over do disappear. You can also see that this is a true “pyramid,” in the sense that the younger the age group, the bigger it is. Of course, this is the sign of a growing population.
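For readers who want to draw a pyramid like these for their own data, here is a minimal sketch using Python’s matplotlib; the population numbers in it are invented for illustration, not Census figures. The trick is simply to plot male bars as negative values so they extend to the left, then relabel the axis with absolute values.

```python
# Minimal population-pyramid sketch in Python/matplotlib.
# The numbers below are made-up illustrative values, not Census data.
import matplotlib.pyplot as plt
import numpy as np

age_groups = ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29"]
male = np.array([9.5, 10.0, 10.4, 10.6, 10.9, 11.2])    # millions (illustrative)
female = np.array([9.1, 9.6, 10.0, 10.2, 10.6, 11.0])

y = np.arange(len(age_groups))
fig, ax = plt.subplots()
ax.barh(y, -male, color="steelblue", label="Male")      # males extend left
ax.barh(y, female, color="salmon", label="Female")      # females extend right
ax.set_yticks(y)
ax.set_yticklabels(age_groups)
ax.set_xlabel("Population (millions)")

# Relabel the x-axis with absolute values, hiding the sign of the male bars.
ticks = np.arange(-12, 13, 4)
ax.set_xticks(ticks)
ax.set_xticklabels([f"{abs(t)}" for t in ticks])
ax.legend()
plt.show()
```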

US Spending on Mental Health: Why No Increase?

US spending on health care as a whole has famously expanded as a share of the US economy over time, from 5% of US GDP in 1960 to about 20% at present. However, total US spending on treatment services for mental health has remained at about 1% of GDP since 1975. Moreover, the share of Americans receiving mental health treatment has increased over time. These patterns can be explained by shifts in how mental health treatment is delivered, as Richard G. Frank and Sherry A. Glied discuss in “America’s Continuing Struggle with Mental Illnesses: Economic Considerations” (Journal of Economic Perspectives, 2023, 37:2, pp. 153-78). (Full disclosure: I’m the Managing Editor of JEP, and have been so for 37 years now.)

There are four main reasons for the difference in spending growth between mental health care and general medical care.

First, the main driver of cost growth in the general health care sector has been technological change, particularly through the introduction of capital-intensive devices and procedures (Chernew and Newhouse 2011). In contrast, the technology of treatment in mental health continues to rely on labor and prescription drugs. Newer treatments for mental health conditions have typically offered few gains in efficacy, although they have generated improvements in treatment adherence and outcomes by reducing side effects and increasing the tolerability of treatments (Insel 2022). While psychopharmacology experienced considerable innovation prior to 2000, relatively few new classes of drugs for treating mental illnesses have been introduced since then. …

Second, over the past 50 years there has been dramatic, cost-reducing substitution for the human and institutional inputs that were previously used to provide mental health care. In 1975, 63 percent of mental health care spending was for institutional care in hospitals and nursing homes; today, 31 percent of expenditures occur in these costly settings (SAMHSA 2014; 2016). Treatment with prescription drugs has taken a central position in treatment of mental illnesses, often substituting for costlier psychotherapy for the most prevalent mental health conditions, depression and anxiety. … The cost of psychotherapy itself has also dropped sharply because the mental health sector has been far more accommodating of diverse types of health care providers than has general health care. Psychotherapy provision has shifted from treatment by psychiatrists and PhD-level psychologists to treatment by social workers, counselors, and MA-level psychologists. … Today over 90 percent of psychotherapists are trained below the doctoral level, a far higher share than in the 1970s and 1980s. The shift towards lower cost professionals with less extensive training has driven the costs of psychotherapy down, without any documented evidence of a reduction in quality—although no recent studies have directly compared the quality of services delivered by those with varied professional training …

Third, a much larger share of mental health care (just under two-thirds) is paid for by public funds (about one-third is paid by Medicaid) than is the case for general health care, and a much larger share—20 percent—is paid for by programs under fixed budgets. Public programs generally pay lower prices. …

Finally, mental health spending appears to be growing much more slowly than general health spending, in part because of a change in classification. In the 1970s and 1980s, when institutional treatment of those with serious mental illness accounted for a much larger share of mental health spending than it does today, all the expenses of institutional treatment—including the costs of whatever limited clinical treatment was provided as well as the costs of institutional room and board, often of poor quality—were counted as part of mental health spending. Today, the costs of housing and food for people with serious mental illness, who are not typically institutionalized, are no longer counted as part of mental health treatment spending.

The authors also point out that support and services for mentally ill people end up being provided in a range of non-health-care contexts. They draw on a wide array of evidence across studies and programs to compile the following table.

As the authors point out, it seems plausible that the US is investing too little in care for those who have serious mental illnesses, but too much for those with milder concerns. They write:

Current policy choices have led to a misallocation of resources in the delivery of clinical services. Too few people with treatable mental health conditions, including those with serious illness, obtain care that could help them. This situation may arise, in part, because the decisions of people suffering from mental illness to seek care may not accurately reflect the likely value of such care to themselves and to others, as well as because of underinvestment in treatment capacity for the most serious conditions. At the same time, moral hazard associated with insurance coverage of mental health services may lead to overuse (or inappropriate use) of some services within this category, either to address problems of living that cause relatively little impairment or because the quality and nature of treatments are so variable. Both overuse and underuse reflect the fundamental difficulty of matching people and treatments in the face of great heterogeneity and uncertain diagnosis.