An Autopsy of Silicon Valley Bank from the Federal Reserve

When discussing issues like “why does deposit insurance exist” and “why does the government regularly look over the financial accounts of banks,” people like me often end up describing bank runs. However, our examples tend to be old ones, often drawn from movies like “Mary Poppins,” “It’s a Wonderful Life,” or old westerns. Our “modern” examples tend to be the Northern Rock bank run in the UK back in 2007 or the runs triggered by the failure of the Home State Savings Bank of Ohio back in 1985.

But now, and for years to come, we have the example of the bank run that happened earlier this year at Silicon Valley Bank. The Federal Reserve has just published its “Review of the Federal Reserve’s Supervision and Regulation of Silicon Valley Bank” (April 2023). The report is both useful and unsatisfying.

The unsatisfactory part is that the bank run at SVB happened on March 9. But as the report explicitly states: “The report does not review the events that occurred after March 8, 2023, including the closure of SVB on March 10, 2023, by the California Department of Financial Protection and Innovation (CDFPI), and the actions on March 12, 2023, by the U.S. Department of the Treasury, the Board of Governors of the Federal Reserve System (Board), and the Federal Deposit Insurance Corporation.”

Thus, those looking for a discussion of how the SVB situation affected other banks, or how SVB contributed to the giant Credit Suisse bank being shut down and taken over by UBS in Switzerland, or how the run on SVB actually unfolded on March 9–well, you need to look somewhere else or wait for a follow-up report. The same goes for those interested in whether it was a useful policy step or an overreaction for the Fed to declare that all bank deposits would be protected, even those that far exceeded the $250,000 legal limit.

In short, this report is about the lead-up to the SVB debacle, not what followed. But within that limit, there are many details of interest.

The report tells the basic story. SVB was a bank where more than half the deposits were from small new companies, mostly backed by venture capital, not from typical households with a bank account. These companies were holding large amounts at SVB, often measured in millions of dollars.

SVB took a substantial share of these funds and invested them in long-term debt, with a focus on US Treasury bonds or mortgage-backed securities also guaranteed by the federal government. With these kinds of investments, there is essentially zero risk of default. But there is a different risk: if you lock yourself into long-term debt that pays low interest rates, you are going to have a problem when interest rates go up. Say that you buy a 10-year bond with a face value of $100 that pays 1% interest. After interest rates go up, there are now 10-year bonds with a face value of $100 that pay 3% interest. When the bonds that pay a higher interest rate become available, no one is going to want to pay as much for the earlier bonds that have only a 1% interest rate. But SVB did not take steps to hedge against that risk of higher interest rates.
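To make the arithmetic concrete, here is a minimal sketch in Python (using the illustrative numbers above, not SVB’s actual portfolio) of how a rise in market yields cuts the resale price of a low-coupon bond:

    # A bond's price is the present value of its coupons plus its face
    # value, discounted at the current market yield.
    def bond_price(face, coupon_rate, market_yield, years):
        coupon = face * coupon_rate
        pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
        pv_face = face / (1 + market_yield) ** years
        return pv_coupons + pv_face

    # A 10-year, $100 face-value bond paying 1%, before and after
    # market yields rise to 3%.
    print(round(bond_price(100, 0.01, 0.01, 10), 2))  # 100.0: at issue, price equals face
    print(round(bond_price(100, 0.01, 0.03, 10), 2))  # 82.94: the bond now sells at a loss

A bank that is forced to sell at that lower price in order to meet withdrawals locks in the loss, which is roughly the position SVB found itself in.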

Usually, if a bunch of depositors want to take money out of a bank, the bank can sell some of the bonds it is holding, if needed, and give the depositors their money. But when the value of those bonds has declined, a bank that starts selling its low-interest bonds may not raise enough money to pay off the depositors. Remember, these venture-capital-backed firms were holding millions of dollars each at SVB, far above the amount protected by federal deposit insurance. The word went out that it might be safer to withdraw their funds, and the withdrawals happened in a rush. As the report notes:

Uninsured depositors interpreted SVBFG’s announcements on March 8 as a signal that [the firm] was in financial distress and began withdrawing deposits on March 9, when SVB experienced a total deposit outflow of over $40 billion. This run on deposits at SVB appears to have been fueled by social media and SVB’s concentrated network of venture capital investors and technology firms that withdrew their deposits in a coordinated manner with unprecedented speed. On the evening of March 9 and into the morning of March 10, SVB communicated to supervisors that the firm expected an additional over $100 billion in outflows during the day on March 10. SVB did not have enough cash or collateral to meet the extraordinary and rapid outflows. … This deposit outflow was remarkable in terms of scale and scope and represented roughly 85 percent of the bank’s deposit base. By comparison, estimates suggest that the failure of Wachovia in 2008 included about $10 billion in outflows over 8 days, while the failure of Washington Mutual in 2008 included $19 billion over 16 days.

Again, this is a report more about what happened than about policy steps. But part of what happened is that the bank management didn’t handle the risk of higher interest rates and the bank regulators didn’t intervene in time. The report hands out some blame on these issues, too.

For corporate management:

The full board of directors did not receive adequate information from management about risks at Silicon Valley Bank and did not hold management accountable for effectively managing the firm’s risks. The bank failed its own internal liquidity stress tests and did not have workable plans to access liquidity in times of stress. Silicon Valley Bank managed interest rate risks with a focus on short-run profits and protection from potential rate decreases, and removed interest rate hedges, rather than managing long-run risks and the risk of rising rates. In both cases, the bank changed its own risk-management assumptions to reduce how these risks were measured rather than fully addressing the underlying risks.

For the bank regulators, an issue was that large banks are held to tighter standards than small banks. But what about a very rapidly expanding bank? Although the bank supervisors expressed substantial concern, there was also a sense that the bank should be allowed a little time to adjust to the stricter standards applied to larger banks. This decision obviously doesn’t look good in retrospect.

While the firm was growing rapidly from $71 billion to over $211 billion in assets from 2019 to 2021, it was not subject to heightened supervisory or regulatory standards. The Federal Reserve did not appreciate the seriousness of critical deficiencies in the firm’s governance, liquidity, and interest rate risk management. … As Silicon Valley Bank continued to grow and faced heightened standards in 2021, the regulations provided for a long transition period for Silicon Valley Bank to meet those higher standards and supervisors did not want to appear to pull forward large bank standards to smaller banks in light of policymaker directives. This transition meant that the new supervisory team needed considerable time to make its initial assessments. After these initial assessments, liquidity ratings remained satisfactory despite fundamental weaknesses in risk management and mounting evidence of a deteriorating position. The combination of internal liquidity stress testing shortfalls, persistent and increasingly significant deposit outflows, and material balance sheet restructuring plans likely warranted a stronger supervisory message in 2022. With regard to interest rate risk management, supervisors identified interest rate risk deficiencies in the 2020, 2021, and 2022 Capital, Asset Quality, Management, Earnings, Liquidity, and Sensitivity to Market Risk (CAMELS) exams but did not issue supervisory findings. The supervisory team issued a supervisory finding in November 2022 and planned to downgrade the firm’s rating related to interest rate risk, but the firm failed before that downgrade was finalized.

In the broader legislative context, the Economic Growth, Regulatory Relief, and Consumer Protection Act (EGRRCPA) of 2018 was based on the idea that the giant “global systemically important banks” needed closer regulation, but other banks were fine with lower levels of regulation.

Over the same period that Silicon Valley Bank was growing rapidly in size and complexity, the Federal Reserve shifted its regulatory and supervisory policies due to a combination of external statutory changes and internal policy choices. In 2019, following the passage of EGRRCPA, the Federal Reserve revised its framework for supervision and regulation, maintaining the enhanced prudential standards (EPS) applicable to the eight global systemically important banks, known as G-SIBs, but tailoring requirements for other large banks. For Silicon Valley Bank, this resulted in lower supervisory and regulatory requirements, including lower capital and liquidity requirements. While higher supervisory and regulatory requirements may not have prevented the firm’s failure, they would likely have bolstered the resilience of Silicon Valley Bank.

I would only add that players in financial markets, including banks, investors, and regulators, have a tendency to relax about risks that haven’t occurred for a while. By early 2022, interest rates had been rock-bottom low since late in 2008–for more than a decade. Some economic reports suggested that low interest rates would persist long into the future. Those who paid to hedge against the risks of higher interest rates had been doing so, without any apparent need, for more than a decade. At some point, the risks posed by higher interest rates just weren’t taken as seriously as they should have been: not by banks, not by investors, and not by the Fed.

But as IMF Managing Director Kristalina Georgieva said a few weeks ago: “There is simply no way that interest rates would go up so much after being low for so long and there would be no vulnerabilities. Something is going to go boom.” My guess is that we have not yet seen the last of things that go boom.

The Rise of Sports Gambling

Gambling has been on the rise across American society for a half-century now, and Victor Matheson provides an overview of the latest frontier in “Sports Gambling” (Milken Institute Review, Second Quarter 2023, pp. 12-21). He begins with some broader context:

The past 60 years have witnessed a massive transformation of the gambling landscape in the United States. In the early 1960s, the only legal casinos in the country operated in Nevada, no states ran lotteries, and essentially all sports bets were either made informally among friends or through illegal bookies. These days, 45 state governments sell $100 billion in lottery tickets each year, with multistate lotto games like Powerball and Mega Millions occasionally offering jackpots exceeding $1 billion. Nearly 1,000 casinos and card rooms operate across 41 states, generating over $50 billion in net gaming revenue. And nowhere has the gambling industry changed more rapidly than in sports betting, where nationwide expansion has led to an increase in legal wagering from just under $5 billion in 2017 to nearly $100 billion in 2022.

Matheson runs through a number of the most prominent US examples of how gambling on sports led to bribery of players and officials to alter the outcome of games. But for big-revenue sports, a huge share of the money comes from television and media rights in general–and gambling helps to drive that interest. People watch games they would not otherwise watch, and watch until the bitter end, if they have a bet placed on the game or the point spread.

In 2018, the Supreme Court decision in Murphy v. National Collegiate Athletic Association held that the Professional and Amateur Sports Protection Act of 1992 was unconstitutional. Under that law, the federal government could prevent a state from legalizing sports gambling. Just five years later, 28 states and the District of Columbia have legalized sports gambling, and more states are on the way. At least so far, however, California, Texas, and Florida are hold-outs against the general trend.

But not all sports gambling is created equal. In some states, the gambler needs to actually show up in person at a casino or other designated location to place a bet. In other states, you can also bet through your computer or through smartphone apps. Matheson writes:

There is a world of difference in impact between mobile gambling and in-person wagering. In-person-only states typically averaged total betting of less than $100 per person in 2022, while the comparable figure in states with legal mobile betting is in the neighborhood of $1,000 per resident. In New York, for example, the sports handle (the total amount wagered on sport competitions) rose from $21 million in December 2021 to $1.7 billion in January 2022 after the state’s first mobile gambling apps went live.

Ironically, in New Jersey, which led the fight to overturn PASPA in order to revive the Atlantic City casinos, only $900 million of the $11 billion of sports betting that took place in the state in 2022 was wagered at its casinos. Even in Nevada, known for its casino pleasure palaces, two-thirds of all sports betting has moved out of the gaming rooms to mobile apps since the latter debuted in 2020.

The most common argument in favor of sports gambling is that having it above-ground and legal is better than having it be hidden away. The argument has some truth in it. Matheson writes:

[O]rganized sports’ dim views on gambling evolved for multiple reasons. … [G]ambling-related corruption in countries with legal sports gambling — notably, the UK — didn’t appear to be markedly worse than in the U.S. In fact, a reasonable case can be made that legalizing gambling actually reduces match-fixing by bringing betting out of the shadows. In a legal setting, leagues can partner with sportsbooks (companies that organize sports gambling) to identify athletes engaging in gambling. In addition, leagues can work with sportsbooks to identify patterns of suspicious betting — tracking that would be nearly impossible if the wagers were underground.

As someone who is overly interested in a wide variety of sports, seeing odds and how they evolve is one more interesting prism through which to view games. Moreover, I have my own preferences for blowing modest amounts of money on entertainment that would not necessarily be approved by others, so I try not to be obnoxious about how others spend their entertainment budget.

On the other side, I’ve been known to rail against pretty much all forms of legalized gambling, as essentially a tax on mathematical illiteracy. In the case of sports gambling, Matheson cites estimates that every dollar bet with sports books results in a loss of 7.7 cents. When you bet again and again, on average you lose again and again. Matheson writes: “Between the court’s decision in June 2018 and the end of 2022, sports gamblers made over $190 billion in legal wagers generating $14 billion in gross revenues for the sportsbooks (that is, the amount taken in from bets minus payoffs to winners). This represents nearly a 20-fold increase in under five years.”
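To see what a 7.7 percent take means for someone who bets repeatedly, here is a minimal back-of-the-envelope sketch (assuming the bettor re-wagers the whole bankroll each time and each dollar bet loses 7.7 cents on average):

    # Expected bankroll after repeatedly re-wagering the whole stake,
    # when each dollar bet loses 7.7 cents on average.
    take = 0.077
    start = 100.0
    for bets in (1, 10, 25, 50):
        expected = start * (1 - take) ** bets
        print(f"after {bets:>2} bets: ${expected:.2f}")
    # after  1 bets: $92.30 ... after 50 bets: $1.82

On average, a $100 bankroll cycled through fifty bets dwindles to about $2: the house take compounds like interest, only in reverse.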

The enormous rise in legalized gambling across many settings is typically described as fun and games, with just a soupçon of adults-only wickedness for added attractiveness. But the general pattern across gambling industries is that the bulk of revenue doesn’t come from people who bet, say, $20 on the Super Bowl. Instead, it comes from a relatively small share of the population (maybe 5-10%) that bets enough to cause real tradeoffs, sometimes severe tradeoffs, in their day-to-day economic lives. In the case of sports gambling, young men in particular are disproportionately vulnerable to the “all the cool dudes do it” vibe. The rise in gambling revenue across US society is built on such vulnerabilities.

Rising Interest Payments on the National Debt

What happens when you borrow a lot of money, and then interest rates go up? Your interest payments can take a big jump, too. That seems to be happening with interest payments on the national debt. All numbers and projections that follow are from the Historical Tables volume of the proposed FY 2024 US budget produced by the US Office of Management and Budget. This volume provides actual overview data on government spending and taxes through 2022, and then provides estimates for 2023-2028.

For a previous example, the US government borrowed large amounts at a time of high interest rates in the 1980s. As a result, net interest payments by the federal government doubled from 1.5% of GDP in 1977 to 3.0% of GDP by 1985. In very round numbers, federal spending has been around 20% of GDP for some decades now. Thus, 3% of GDP is about 15% of the total federal budget.

We are now repeating that experience. Net interest payments had fallen as low as 1.3% of GDP in 2009, when the Federal Reserve chopped its policy interest rate (the “federal funds” rate) to near-zero. Interest payments were 1.3% of GDP in 2016 and 1.6% of GDP in 2021. But federal borrowing was high and interest rates started rising. Interest payments are projected to hit 2.9% of GDP by 2024, and 3% of GDP a few years later.

Just in case you are the kind of person who doesn’t think in terms of percentages of GDP, interest payments are essentially going to double from $299 billion in 2021 to $583 billion in 2024.

Let’s put 3% of GDP in perspective relative to other federal taxes and spending. A few years from now, 3% of GDP will be net federal interest payments. It will also be more-or-less equal to the sum of all federal corporate income taxes (2.5% of GDP) and all federal excise taxes (like those on gasoline, alcohol, and tobacco). In other words, all the money raised by those taxes will effectively be going out the door to pay interest on past borrowing. Or to look at it another way, 3% of GDP is projected national defense spending for FY 2028; by then, spending on interest payments to cover past federal borrowing will equal national defense spending.
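For anyone who wants to check the budget arithmetic, here is a quick sketch using the round numbers above (not OMB’s exact figures):

    # Rough budget-share arithmetic using the round numbers cited above.
    interest_share_gdp = 0.03    # net interest projected at roughly 3% of GDP
    spending_share_gdp = 0.20    # total federal spending, roughly 20% of GDP
    print(interest_share_gdp / spending_share_gdp)  # 0.15: about 15% of the budget

    # The dollar figures: $299 billion (2021) to $583 billion (2024).
    print(583 / 299)             # ~1.95: essentially a doubling in three years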

I’m a believer in the standard wisdom that when confronted by the Great Recession of 2008-9 or by the pandemic recession of 2020, it makes sense for the federal government to borrow more. It cushions the immediate blow, and helps the economy recover.

But another part of the standard wisdom, not so easily followed, is that when not facing a recessionary threat, the government should scale back on its borrowing–not to balance the budget, but to get on a trajectory where the debt is growing more slowly than the growth of the economy, and thus the debt/GDP ratio is falling. In addition, the government needs to be aware of the risks that rising interest rates will drive up borrowing costs. Future budget decisions will be constrained as a result.

Back in 1693, in his comedy “The Old Bachelor,” William Congreve coined the line: “Married in haste, we may repent in leisure.” As we face a future of much higher interest payments, we will have leisure to repent some of the earlier federal borrowing, as well.

Using Self-Declared Value for Taxation

There are a number of assets for which the value for purposes of taxation may be uncertain or difficult to verify. For example, I know generally what my house is worth, but for purposes of property taxation, whether that estimate is, say, 20% high or low makes a meaningful difference to my tax bill.

At various times and places, a proposed solution has been that the owner of the asset declares a certain value for purposes of taxation–and if that valuation seems too low, the government reserves the right to buy that asset at the owner’s declared value. Marco A. Haan, Pim Heijnen, Lambert Schoonbeek, and Linda A. Toolsema explored the incentives of this approach about ten years ago in “Sound taxation? On the use of self-declared value” (European Economic Review, 2012, 56: 205-215). They begin with some historical examples:

In the 16th century, the Kingdom of Denmark controlled both sides of the Sound (Øresund), an important waterway situated between present-day Denmark and Sweden. All foreign ships passing through this strait had to make a stop in Helsingør (known in English as Elsinore, the stage for Shakespeare’s Hamlet) and pay taxes to the Danish Crown, which varied between some 1% and 5% of the value of the cargo. These taxes are often referred to as the Sound Dues. Although obviously unfamiliar with the concept of incentive compatibility, the Danish Crown was fully aware that such a tax would give skippers a strong incentive to cheat and declare a value much lower than the true one. It came up with an intriguing solution. The Crown reserved the right to purchase the cargo at the value declared by the skipper. Thus, a Skipper who declared a value that was too low ran the risk of losing his cargo at a price below market value. But a Skipper who declared a value that was too high ran the risk of paying too much in taxes.

This mechanism is clearly ingenious, but it does raise a number of questions. First, what is the optimal confiscation strategy for the tax authority? Obviously, it cannot be part of a Nash equilibrium to never purchase the cargo. Then, the threat of confiscation would simply be empty. Second, does this mechanism induce truth-telling? That is, does it give skippers the incentive to always declare the true value of their cargo? Third, does it allow the tax authority to effectively raise the tax rate that it desires? Put differently, did this mechanism really amount to sound taxation or was there something rotten in the state of Denmark? …

There are numerous instances where a similar tax has either been proposed or implemented. In 1891, New Zealand passed a Land and Income Tax act, based on a ‘‘self-assessment with the shrewd device of making Government’s purchase at the tax value an effective check on the owner’s assessment’’… Dr. Sun Yat-sen, the first provisional president when the Republic of China was founded in 1912, proposed a land tax using the exact same mechanism: landowners are taxed according to their declared value of the land, and the government can also buy the land at the same price. Anyone buying a house in southern Europe may face property taxes based on self-declared value. Other examples include land tax in India around 1900 as well as in present-day Taiwan, taxes on works of art leaving Mussolini’s Italy, and British taxes on imported American Jerome clocks …

(I had to look up “Jerome clocks.” It refers to clocks made by the American clockmaker Chauncey Jerome (1793-1868), who revolutionized the clock-making business by using metal rather than wood for the inner workings, thus allowing the firm to sell millions of long-lasting and inexpensive clocks around the world.)

The paper by Haan, Heijnen, Schoonbeek, and Toolsema applies the tools of game theory to describe various outcomes. In intuitive terms, start by imagining that the King can seize and resell the cargo with zero cost of effort or transaction. However, a King who seizes and resells the cargo does not receive tax payments! The Skipper, knowing this, will have an incentive to understate the value of the cargo slightly, knowing that the King has a preference against the seize-and-resell option.

If the King does face transaction costs of seizing and reselling, the Skipper knows that the King will be even less eager to choose the seize-and-resell option, and has an incentive to understate the value of the cargo by an additional amount.

However, the King must sometimes choose the seize-and-resell option, just to keep the skippers’ declarations within a reasonable distance of actual values. One can then imagine a King who looks at the declared values received from Skippers, and thinks about whether it is more useful to seize a certain percentage of ship cargoes at random, or whether certain kinds of cargoes might be more prone to cheating–or more likely to cover the costs of the seize-and-resell option.
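The flavor of the Skipper’s incentive can be captured in a toy calculation (my own simplification, not the formal game in the paper). Suppose the cargo is truly worth 100, the tax rate is 5%, and the King confiscates with a probability that rises the further the declared value falls below his rough estimate of the truth:

    # Toy version of the self-declared-value mechanism (a simplification,
    # not the formal game in Haan, Heijnen, Schoonbeek, and Toolsema).
    # The Skipper's cargo is worth v; he declares d and pays tax t*d.
    # The King buys the cargo at the declared price d with a probability
    # that rises as d falls below the King's estimate of the true value.
    def expected_payoff(v, d, t, king_estimate, max_seize_prob):
        shortfall = max(0.0, 1.0 - d / king_estimate)
        p = min(max_seize_prob, max_seize_prob * shortfall / 0.5)
        if_kept = v - t * d       # pay the tax, keep cargo worth v
        if_seized = d - t * d     # assumed: pay the tax, receive d for the cargo
        return (1 - p) * if_kept + p * if_seized

    v, t = 100.0, 0.05
    best = max(range(50, 121), key=lambda d: expected_payoff(v, d, t, 100.0, 0.3))
    print(best)  # 96: the payoff-maximizing declaration sits slightly below 100

With these assumed numbers, the best declaration comes out a little below the true value, matching the intuition above: a probabilistic seize-and-resell threat deters big lies but leaves room for modest understatement.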

As the authors point out, this problem is quite similar to the question of how a government chooses whom to audit for income tax purposes: some degree of randomness seems desirable, but also some focus on those where the returns from doing the audit are likely to be larger.

Another aspect of the situation that occurs to me is that government may act for reasons other than tax revenue. For example, say that the King was an enemy of certain Skippers for political reasons, or perhaps an enemy of some customers who were originally scheduled to receive the cargo. The King might want to give the Skipper a reputation as “that captain who never actually manages to deliver the cargo,” or delay and disrupt the ability of the final customer to get their delivery. (The power to audit tax returns has sometimes been wielded as a political weapon in this way, too.) If you are a Skipper in this situation, you may find yourself overstating the value of the cargo, knowing that you will pay higher taxes, but also viewing that as a cost to be paid for dissuading the King from choosing the seize-and-resell option.

Reference Class Forecasting: An Origin Story from Daniel Kahneman

Some people seem to be better at forecasting future events than others. One technique used by the superior forecasters is called “reference class forecasting.” Daniel Kahneman tells Joseph Walker the origin story of the concept in an interview on the Jolly Swagman podcast (“#143: Dyads, And Other Mysteries — Daniel Kahneman,” April 14, 2023, audio and transcript available). The hour-plus interview is quite interesting throughout: Kahneman must be one of the most interesting social scientists alive, and Walker does a nice job of drawing him out. On the topic of reference class forecasting, Kahneman says:

Well, first let’s define our terms, what the reference class is. I don’t know a better way of doing this than telling the origin story of that idea in my experience, which is that, 50 years ago approximately, I was engaged in writing a textbook with a bunch of people at Hebrew University, a textbook for high school teaching of judgement and decision making. We were doing quite well, we thought we were making good progress. It occurred to me one day to ask the group how long it would take us to finish our job. There’s a correct way of asking those questions. You have to be very specific and define exactly what you mean. In this case I said, “Hand in a completed textbook to the Ministry of Education — when will that happen?” And we all did this. Another thing I did correctly, I asked everybody to do that independently, write their answer on a slip of paper, and we all did. And we were all between a year and a half and two and a half years.

But one of us was an expert on curriculum. And I asked him, “You know about other groups that are doing what we are doing. How did they fare? Can you imagine them at the state that we are at? How long did it take them to submit their book?” And he thought for a while, and in my story he blushed, but he stammered and he said, “You know, in the first place they didn’t all have a book at the end. About 40%, I would say, never finished. And those that finished…” He said, “I can’t think of any that finished in less than eight years — seven, eight years. Not many persisted more than ten.”

Now, it’s very clear when you have that story, that you have the same individual with two completely different views of the problem. And one is thinking about the problem as you normally do — thinking only of your problem. And the other is thinking of the problem as an instance of a class of similar problems.

In the context of planning, this is called reference class planning. That is, you find projects that are similar and you do the statistics of those projects, and it’s absolutely clear. It was evident to us at the time, but idiotically, I didn’t act on it. That was the correct answer, that we were 40% likely not to succeed. Because I also asked a friend, the curriculum expert, I asked “When you compare us to the others, how do we compare?” He said, “We are slightly below average.” So the chances of success were clearly very limited.

So that’s reference class forecasting. Now, how do you pick a reference class? In this case it was pretty obvious. I mean, we were engaged in creating a new curriculum. In other cases, when you are predicting the sales of the book or the success of the film, what is the reference class?

So if it’s a director and he’s had several films, is the reference class his films or similar films, same genre or whatever? And there isn’t a single answer, the answer is actually… You were asking how do you choose a reference class, my advice would be… And today I’m not the expert on that. The expert on that is Bent Flyvbjerg at Oxford. And I think what he probably would tell you is, “Pick more than one reference class to which this problem belongs.” Look at the statistics of all of them and if they are discrepant, you need to do some more thinking. If they all tend to agree, then you probably have got it more or less right.

The idea of reference class forecasting is applicable in many contexts: business plans, public policy implementation, outcomes of personal decisions, and others. It’s a way of getting outside your own head–outside the plans you can write down on a calendar or a spreadsheet about what you hope will happen. In thinking about the appropriate reference class and the experience of others with similar situations, you may also find that there are ways to improve your chances of success.
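To make that advice concrete, here is a minimal sketch (with invented numbers) of comparing several candidate reference classes before settling on a forecast:

    # Invented reference classes for a project forecast: each entry is
    # (share of past projects that finished, median years to completion).
    reference_classes = {
        "curriculum projects": (0.60, 8.0),
        "textbook-writing teams": (0.70, 6.5),
        "education-ministry submissions": (0.65, 7.0),
    }
    inside_view_years = 2.0   # the team's own estimate

    print(f"inside view: {inside_view_years} years")
    for name, (finish_rate, median_years) in reference_classes.items():
        print(f"{name}: {finish_rate:.0%} finished, median {median_years} years")

    # The classes roughly agree, so the outside view is probably close.
    # A 2-year inside-view estimate against 6.5-8 year medians is a red
    # flag to revise the plan, not a reason to split the difference.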

Why Have Overdraft Fees Declined?

Before overdraft fees arrived in the 1990s, if you wrote a check that exceeded the amount in your checking account at your bank, the check would “bounce”–and the payment would not be made. If the payment was for something like your heating bill or your mortgage, you might get charged late fees or penalties as a result. But banks began offering “overdraft protection,” where in exchange for a fee, they would go ahead and make the payment.

These overdraft fees often cost about $35 per transaction. Some banks charged such fees each day until you deposited more money in the account. The fees added up. Pre-pandemic estimates were that banks were getting $15-30 billion per year from such fees. Some large banks were reporting more than $1 billion per year in overdraft fees. There were stories of certain smaller banks, making losses in other areas, where the overdraft fees were more than 100% of their profits. A few years back, a bank executive for a mid-sized bank in my home state of Minnesota reportedly named his boat “Overdraft.”

However, it now seems as if these lucrative overdraft fees are declining substantially, and even going away entirely at some banks. Given that big banks are not known for their altruism in cutting fees, there is a minor mystery as to why this is happening. For background, useful starting points are Aaron Klein, “Getting Over Overdraft” (Milken Institute Review, posted October 31, 2022) and Michelle Clark Neely, “Is the Era of Overdraft Fees Over?” (Regional Economist, Federal Reserve Bank of St. Louis, March 8, 2023).

Before sketching the recent trends and possible explanations, it’s useful to be clear on the economic nature of overdraft fees. For a person who overdraws their checking account, the bank is effectively making a short-term loan. This loan was often very short-term, like just a day or two before another paycheck was direct-deposited into the account. Indeed, there have been many cases where an employee had good reason to believe that their paycheck had already been deposited, and then wrote a check, only to find that their personal check was subtracted from their bank account before the paycheck was added–thus triggering the overdraft fee. The $35 “overdraft” fee was, in effect, an interest rate charged by the bank for lending relatively small amounts for very short times.
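To see why a flat fee on a tiny, days-long loan amounts to an enormous interest rate, here is a quick sketch with illustrative numbers (not figures from Klein or Neely):

    # Implied annualized rate of a flat overdraft fee, treated as
    # interest on a short-term loan (illustrative numbers).
    fee = 35.0         # dollars charged per overdraft
    overdraft = 50.0   # size of the effective loan
    days = 3           # days until the next paycheck covers the shortfall

    simple_apr = (fee / overdraft) * (365 / days)
    print(f"{simple_apr:.0%}")  # 8517%: a 70% charge for a 3-day loan, annualized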

As you would expect, the people most exposed to overdraft fees were those with bank accounts hovering around zero: thus, overdraft fees were mostly paid by those with lower income levels. Neely of the St. Louis Fed writes:

A relatively small proportion of bank customers account for the lion’s share of overdraft fees. According to the Consumer Financial Protection Bureau (CFPB), people who frequently overdraft their accounts represented just 9% of bank customers but generated almost 80% of overdraft and nonsufficient funds (NSF) fees in 2017. Consulting firm Oliver Wyman estimates that customers who heavily use overdraft services generate, on average, more than $700 in profit for the bank per year on a basic bank account; customers who don’t use overdraft services produce an average of $57 in profit for the bank per year.

Thus, the overall result of overdraft fees was that a substantial share of profits for many US banks depended on charging high fees to predominantly low-income borrowers for extremely short-term loans–which doesn’t sound like the basis for a healthy banking industry.

Overdraft fees have not gone away by any means. But according to both Klein and Neely, they have dropped by billions of dollars since their peak back in 2019. Klein reports: “The largest banks are planning to cut overdraft fees by about half from 2019 levels.” However, no new regulations or legislation about the fees have been enacted. So why the decline?

One theory is that this is an example where legislators and regulators made sufficiently hostile growling noises about these fees that banks were motivated to back off.

The difficulty with this explanation lies in believing that profit-seeking banks were willing to give up billions of dollars per year in revenue in response to these kinds of warning rumbles–without actual rules or legislation being passed. Klein writes:

Congress and regulators did put pressure on banks to change their ways. Sen. Chris Van Hollen (D-MD) prodded the Comptroller of the Currency, the agency that regulates national banks, about overdrafts. Sen. Elizabeth Warren (D-MA) confronted JP Morgan Chase CEO Jamie Dimon, pointedly asking why his institution earns seven times as much in overdraft revenue as comparably sized Citibank. Rep. Carolyn Maloney (D-NY) repeatedly introduced legislation that would force sweeping changes to overdraft policy, although it never came close to enactment. Meanwhile, the Consumer Financial Protection Bureau published research highlighting the overdraft bonanza’s magnitude and who’s paying for it.

Again, the difficulty with this theory is believing that after about three decades of lucrative overdraft fees, banks decided to drop them dramatically over some bad publicity. A complementary theory is that the ongoing evolution of financial markets has increased competition. Neely writes:

Competition—from other banks and nonbank providers such as fintech firms—arguably has affected overdraft practices more than anything else. The growth of online and mobile banking has given customers more choices, and those wishing to avoid overdraft fees are voting with their feet by switching to competitors. In addition to introducing low or no overdraft charges, some fintech firms have created financial management tools for account holders. The digital banking platform Dave, for example, created a bank account in partnership with Evolve Bank & Trust that has no minimum balance or overdraft fees, real-time spend alerts and early access to paycheck deposits.

The difficulty with this theory is that some of the new fintech entrants are still quite small, and it seems unlikely that they are putting a lot of competitive pressure on the fees charged by, say, Citibank or Bank of America.

Of course, one can blend these theories together. Perhaps the warnings of legislators and regulators caused a few banks to shift their practices, and some new competitors entered the market, and the pressures of competition snowballed in a way that pushed other banks to follow along. But at least to me, the reasons behind the abrupt decline in overdraft fees remain something of a mystery. Neely describes the major changes that are taking place like this:

Banks have modified their overdraft programs in response to these factors, with large banks taking the lead. These changes include:

  • Lowering the overdraft fee (from $35 to $10, for example)
  • Increasing the trigger value (charging a fee only when the overdraft exceeds a given amount)
  • Curing (adding a grace period to allow customers to cover the shortfall)
  • Reducing the daily maximum number of overdraft fees charged (the average is four to eight)
  • No longer charging fees to cover transactions that overdraw linked accounts
  • Giving customers early access to their direct deposits …

The research arm of Pew Charitable Trusts estimates that these changes will save customers of large and regional banks more than $4 billion a year in overdraft fees; the CFPB projects half those savings will come from three institutions: JPMorgan Chase, Bank of America and Wells Fargo. Citigroup, the nation’s third-largest bank, has eliminated overdraft fees altogether. By August 2022, 13 of the country’s 20 largest banks had stopped charging NSF fees as part of their overdraft programs, and four more were scheduled to do so by the end of 2022. This represents a dramatic change from a year earlier, when 18 of the 20 largest U.S. banks charged NSF fees. …

Several banks have taken a further step, turning account deficits into small-dollar loans that will cost the customer less than a series of overdrafts. A negative balance can be turned into an installment loan with a fixed rate or a charge based on the amount borrowed, rather than on the number of transactions that overdrew the account. Customers then make regular payments on these loans, allowing them both to avoid overdrafts and build credit. …

As of January 2023, six of the eight largest U.S. banks, ranked by number of branches, offered small-dollar loans. In addition to substituting for overdrafts, small-dollar loans are being touted by consumer groups as less costly alternatives to payday loans, auto-title loans and rent-to-own agreements. Pew estimates that small-dollar loans at four of these large banks—Bank of America, Huntington Bank, U.S. Bank and Wells Fargo—are priced at least 15 times lower than the average payday loan.

Klein describes these kinds of changes as well, and suggests that some additional regulations may be useful. For example, bank regulators might take a much closer look at financial institutions that make an outsized share of their profits from collecting high overdraft fees from low-income households–and ask if such a bank is fundamentally stable. Or banks could be required to post the additions to a checking account in a given day before they post the subtractions–rather than the reverse.

The news about declining overdraft fees is generally welcome, but it does raise the question of how banks might try to make up the revenue they are losing. As Neely writes: “The speed at which banks continue modifying overdraft services will depend largely on whether they are able to compensate elsewhere for lower overdraft fee revenue.”

After that Big Merger, What Happened?

There are often substantial controversies over whether a merger should be allowed to happen, but then relatively little follow-up after the event. Of course, if a merger was blocked, then it’s hard to know whether it would have led to good or bad outcomes. But when a controversial merger is allowed, it’s fairly straightforward to see if the negative predictions actually happened.

Brian Albrecht, Dirk Auer, Eric Fruits, and Geoffrey A. Manne take on this task in “Doomsday Mergers: A Retrospective Study of False Alarms” (International Center for Law and Economics, March 22, 2023). Here’s their summary:

Amazon-Whole Foods

The first merger we look at is Amazon’s purchase of Whole Foods [in 2017]. Critics at the time claimed the deal would reinforce Amazon’s dominance of online retail and enable it to crush competitors in physical retail. As now-FTC Chair Lina Khan put it: “Buying Whole Foods will enable Amazon to leverage and amplify the extraordinary power it enjoys in online markets and delivery, making an even greater share of commerce part of its fief.”

These claims turned out to be a bust. … [S]everal large retailers have grown faster than Amazon; Whole Foods’ market share has barely budged; and several new players have entered the online retail space. Moreover, the Amazon-Whole Foods deal appears to have delivered lower grocery prices and increased convenience to consumers.

Beer-Industry Consolidation

… ABI’s acquisition of SABMiller in 2016, … critics claimed would increase the price of beer and decimate the burgeoning craft-beer segment. Instead, the concentration of the beer industry decreased after the mergers, prices did not increase on average, and the craft-beer segment thrived. …

Bayer-Monsanto

… Bayer’s acquisition of Monsanto [in 2018] … was met with stern rebukes from policymakers and academics. Critics argued that the merger would raise the price of key seeds, such as corn, soy, and cotton. Perhaps more fundamentally, the deal’s opponents argued it would further concentrate the agri-food industry, forcing farmers to deal with only a handful of seed providers. … Fast forward to today, and these fears appear overblown. Seed prices have remained roughly constant … and there is little evidence that the life of farmers and rural communities has been significantly affected by the merger. …

Google-Fitbit

[Before] Google’s acquisition of Fitbit [in 2019]… The deal’s opponents claimed the merger would reinforce Google’s position in the ad industry and prevent new entry; harm user privacy by enabling Google to integrate Fitbit health data into its other ad services (or sell this data to health insurers); and crush burgeoning rivals in the wearable-device industry. … [A]vailable evidence suggests the exact opposite has occurred: Google’s share of the online-advertising industry has declined, as has Fitbit’s position in the wearable-devices segment. Likewise, Google does not use data from Fitbit in its advertising platform; not even in the United States, where it remains free to do so. Meanwhile, the merger enabled Google’s entry into the smartwatch market as an upstart competitor against the market leader, Apple. …

Facebook-Instagram

Facebook’s acquisition of Instagram provides a different perspective. At the time, basically no one worried about it from an antitrust perspective and many pundits lambasted the purchase as a poor business decision. It is only in retrospect that people have started to see it as the merger that got away and evidence of the problems with allegedly weak enforcement. … Even in retrospect, however, it is far from obvious that the acquisition was anticompetitive. Immediately upon purchase, Facebook was able to bring Instagram’s photo-editing features to a much larger audience, generating value for users. Only later did Instagram turn into the social-media giant that we know today. The recent rise of TikTok casts further doubt on claims regarding the supposed market dominance of a combined Facebook and Instagram. A merger that benefited consumers without generating impenetrable market dominance hardly seems like overwhelming proof of the failures of enforcement.

Ticketmaster-Live Nation

…[P]eople have complained about Ticketmaster being a monopolist ever since it came to prominence. Yet, there was little outrage at the merger with Live Nation [in 2010]. … Ticketmaster’s market share appears to have fallen following the merger with Live Nation. There is thus little sense that the deal harmed consumers. So why the disconnect between longstanding frustration and antitrust enforcement? The agencies have seen the beneficial effects of mergers in a difficult multi-sided market between fans, venues, and artists. After investigation, the agencies found that the merger was primarily a vertical one between a ticketing website (Ticketmaster) and a concert promoter (Live Nation), which could be pro-competitive for the overall multi-sided market. The DOJ placed behavioral remedies in place and allowed the merger.

The article goes through these examples in some detail. I don’t mean to endorse all of their interpretations of events and outcomes, but they do show pretty clearly that dire predictions about the ill effects of mergers need to be taken with a few spoonfuls of salt. I would only add that from the perspective of investors and shareholders, the promised benefits of mergers–those “synergies” that are conjured up every time a few consultants and investment bankers get in a conference room together–need to be taken with a few spoonfuls of salt, too.

Two Choices for Selling Tickets

I am by no means an expert on things related to Taylor Swift or Bruce Springsteen, but even I am aware that ticket sales for their recent concert tours have been controversial. From an economic viewpoint, there are really two choices for pricing tickets for the show of a megastar who can sell out large arenas. Tyler Smith offers a short interview with Eric Budish on this topic in “Market design and live events” (March 20, 2023, AEA Research blog).

The tickets to high-profile concerts may seem disconcertingly expensive. But from an economics point of view, if the arenas are selling out at their official price and there is an active resale market at a substantially higher price, then the tickets are “underpriced”–in the specific sense that others are willing to pay more than their face value. In the old pre-internet days, this willingness to pay more than the face value of tickets could show up in other ways as well, like getting in line hours or days early, so that obtaining tickets involved paying with time as well as money.

Budish argues that there has been an equilibrium shift. In the old days, while there was some resale of tickets for big shows, it was relatively limited. Thus, the performers were willing to sell tickets at a face value that was probably somewhat lower than they could have gotten: they wanted a sold-out show and a long-term relationship with fans (and perhaps to encourage sales of recordings and merchandise), so they were willing to take a lower ticket price. But when ticket sales went online, the situation changed. Budish says:

I think there was an old equilibrium, and it’s old in the sense that it describes the ticket market probably for the whole 20th century. In this pre-internet equilibrium, tickets were often underpriced and often sold out quite quickly. Many tickets were purchased by speculative resellers, but the large majority of tickets were purchased by ordinary fans. The best numbers come from a great research paper by Phil Leslie and Alan Sorensen. They estimate that in the pre-internet era of the ticket market, it may have been about 5 percent of tickets that went through the secondary market. The internet really transformed the tickets market starting in the first decade of the 21st century. The internet changed the technology of both the primary market and the secondary market. 

The internet technology made it a lot easier to amass large numbers of tickets for events all across the country, really all across the world. There were economies of scale in the amassing of large quantities of tickets, whereas in the good old days ticket resellers had to stand in line. But there is just a lack of scale economies to that. Whereas if you can use an algorithm to scoop up underpriced tickets in the primary market, you can do it at substantial scale. The resale of tickets also changed with the internet because now most of the secondary market goes through internet platforms like StubHub and SeatGeek and TickPick. The secondary market with the rise of the internet is also a lot easier to use.

I think that there was this kind of uncomfortable tension in the pre-internet era when it was a little weird that artists were often pricing their tickets significantly below the market-clearing price. But you can kind of make up stories for why it might be in an artist’s long-run interest to do that. Most of the value from the underpricing is at least accruing to the intended recipients of the underpricing, i.e., to the fans. But the internet kind of broke all that with the rise of algorithms—bots as they are sometimes described—in the ticket market. And in a very low friction, globalized resale market, a lot more of the rents are just accruing to brokers. Ticketmaster estimated that it’s 20 percent. They’ve described to me in a meeting that in the right tail, in some events, it might be as much as 90 percent. 

Basically, if the resale market was not too extensive, performers were willing to charge less for tickets than the maximum they could get. Fans who saw the show benefited. But when performers recognized that they were selling tickets to brokers, who were then marking them up substantially and reselling them, the benefit of lower-priced ticket sales was going to the brokers and not the fans. Ticket prices for many prominent shows skyrocketed.

There are two ways to approach this situation. On one side, accept that brokers and the ticket resale market are going to be active. As a result, many of those attending any particular show will have purchased above-face-value tickets. In this situation, performers will raise ticket prices substantially so that a larger share of the higher prices comes to them, rather than to the brokers. Some performers will hold an online auction of their own to sell tickets, as Ticketmaster is essentially doing for some events already.

The other approach is to try to choke off the role of ticket brokers and the resale market in some way. As a simple example, selling tickets only in blocks of four or eight is a step in this direction. But these kinds of rules just lead to an arms race between the programmers, where ticket brokers have an incentive to write programs which can purchase MANY blocks of four or eight tickets for resale.

What seems like the ultimate way to turn off brokers is that when you buy a ticket, your name is attached to it. Think about airline tickets, for example. Your name is on the ticket, and a broker can’t just buy large blocks of plane tickets for popular flights and then resell them on the internet. In practice, it might work this way: You buy tickets in advance. But when you show up at the event, you need to show the credit card that was used to buy the ticket and a photo ID. The ticket-taker has a hand-held machine that scans your credit card, and then prints out a slip of paper with your seats on it.
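As a toy sketch of the idea (an illustration of the concept, not any actual vendor’s system), the seller could store a fingerprint of the purchasing card with the order, and the gate scanner could recompute it at entry:

    import hashlib

    # Toy name-on-the-ticket check (an illustration, not a real system).
    # At purchase, the seller stores a hash of the buyer's card number.
    def fingerprint(card_number: str) -> str:
        return hashlib.sha256(card_number.encode()).hexdigest()

    order = {"seats": "Sec 112, Row F, Seats 7-8",
             "card_fp": fingerprint("4111111111111111")}

    # The gate scanner recomputes the fingerprint from the card presented;
    # a resold ticket fails the match.
    def admit(presented_card: str, order: dict) -> bool:
        return fingerprint(presented_card) == order["card_fp"]

    print(admit("4111111111111111", order))  # True: the original buyer
    print(admit("5500005555555559", order))  # False: anyone else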

One can imagine various creative ways to get around this arrangement, but it comes pretty close to turning off the resale market. The only way to exchange your ticket is to sell it back to the original seller, who is the only one that can link a ticket to a personal credit card. Budish says:

In an ideal world, I think there’s essentially two economically logical ways to sell tickets. One is to sell tickets at a market-clearing price. The auctions were an effort to do that. The other way to do it is to set a below market-clearing price, but turn off the rent-seeking. The way to do that is essentially to put names on tickets, which makes tickets nontransferable or harder to transfer—in the same way that if I buy an airline ticket and my plans change, I can’t resell that ticket on eBay or StubHub. One of the policy goals ought to be if artists want to sell their tickets at a market-clearing price, great. That’s sort of standard Econ 101. And that’s already quite feasible with the current market structure. I think the other thing that policy might have a role in facilitating is enabling artists to have a sincere option to set a below-market price and turn off the resale market if they want. There have been some efforts. In the United States the euphemism is “paperless ticketing.”

In other words, there is a technical fix for performers who say that they would like to sell tickets at lower prices and shut out the ticket brokers. You can even combine this system with a lottery approach: that is, everyone who wants tickets registers at the website before a certain time, specifies what tickets they are willing to buy, and random chance determines who gets seats. Of course, the real problem is that we all want both lower-priced tickets and the ability to get the seats we want–and for high-profile events, that’s not possible for everyone.
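Here is a minimal sketch of such a lottery (my own illustration, with made-up registrants and seat counts): everyone registers in advance, lists the ticket tiers they would accept, and a random draw allocates seats at face value, each ticket issued in the winner’s name:

    import random

    # Toy ticket lottery (an illustration, not Budish's actual design).
    inventory = {"floor": 2, "lower bowl": 3, "upper deck": 5}  # seats left per tier
    registrants = {
        "ana": ["floor", "lower bowl"],
        "ben": ["floor"],
        "chris": ["upper deck"],
        "dana": ["lower bowl", "upper deck"],
        "eli": ["floor", "upper deck"],
    }

    random.seed(7)               # reproducible draw
    order = list(registrants)
    random.shuffle(order)        # random priority order

    allocation = {}
    for person in order:
        for tier in registrants[person]:   # first acceptable tier with stock left
            if inventory[tier] > 0:
                inventory[tier] -= 1
                allocation[person] = tier  # ticket issued in this person's name
                break

    print(allocation)  # face-value allocation; no resale step exists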

Carbon Taxes or Green Energy Subsidies?

For politicians, the choice between handing out energy subsidies and imposing additional taxes is pretty clear.

With subsidies, companies and firms and research institutions get checks and tax breaks. There may be ribbon-cutting publicity opportunities at the start, and even if only one project out of every five or ten is actually successful, that’s still plenty of places to go and hold a self-congratulatory political event a few years down the road. Meanwhile, the costs of these subsidies–the opportunity cost of other priorities that don’t get funded, along with higher budget deficits or taxes in the future–are largely invisible. The tradeoffs that happen in the background when, say, the subsidies for one form of non-carbon energy like wind or solar just lead to less use of another form of non-carbon energy like hydropower, or where supporting technology from a politically favored firm crowds out better technology from an unfavored firm, won’t be noticed either.

With carbon taxes, on the other hand, consumers/voters everywhere will see the effect on gasoline and home energy prices. The costs are up-front. Meanwhile, the altered incentives that lead to innovation and growth of efforts to produce cleaner energy and to conserve on fossil fuels are largely invisible. The effects happen flexibly all over the economy, but the opportunities for ribbon-cutting ceremonies and political favoritism are scarce. The benefits of additional revenue for other priorities like (choose your priority here) a permanent child tax credit, saving Social Security, rebuilding defense stocks after what has been sent to help Ukraine, cutting other taxes in an offsetting way, or reducing the budget deficit are likely to seem indirect and disconnected from the revenue gains.

But in this case, at least, the politically easy choice also imposes high costs. John Bistline, Neil Mehrotra, and Catherine Wolfram discuss the “Economic Implications of the Climate Provisions of the Inflation Reduction Act” (Brookings Papers on Economic Activity, Spring 2023, conference version of the paper), a law which is actually much more about clean energy subsidies than inflation-fighting. Their essay goes through the law in some detail: specific provisions, costs, distributional and macroeconomic effects. (Content warning: It’s a research paper, so some of the chunks that describe mathematical modeling choices won’t be easy reading for the uninitiated.) Here, I focus on just one section of the paper, where they compare the subsidy-driven approach to reducing carbon emissions to a carbon tax approach that would have similar reductions in emissions over the same timeframe.

The authors point out various differences between how subsidies for non-carbon energy and a carbon tax affect the goal of reducing carbon emissions.

For example, carbon taxes do a better job of encouraging conservation than clean energy subsidies. “Relative to a carbon tax, subsidies encourage electricity consumption and discourage conservation. If household and industrial demand for electricity is sensitive to price, a carbon tax would have a relatively large effect on electricity consumed and hence emissions. By contrast, a subsidy policy–by encouraging electricity consumption–would partially undo the switch from fossil to clean energy by raising overall electricity consumption …”

The higher price of carbon emissions from a carbon tax focuses specifically on fossil fuel emissions. However, “Under IRA, clean energy that displaces zero-carbon energy such as hydropower is subsidized at the same rate as clean energy that displaces the dirtiest resources.”

A carbon tax is also automatically scalable–that is, those who emit more carbon bear a higher cost. However, the subsidies of the IRA are based on production, not use: “Other provisions of IRA subsidize the energy-using or energy-producing asset, irrespective of how much it is operated. The investment tax credit for zero-carbon electricity subsidizes the construction of the facility rather than its operation. … Similarly, the electric vehicle tax credits subsidize vehicle purchases without regard to how much they are driven. Electric vehicles that are used as second cars and driven less will offset fewer emissions than vehicles that replace a household’s only car.”

Fixed tax credits and production subsidies are relatively inflexible as technology and market conditions change: “Carbon pricing enables households and businesses to select their preferred approaches to lower emissions, which can help to reduce costs and account for other welfare-relevant considerations that vary across individuals and firms.”

Our model abstracts from many dimensions of difference between subsidies and carbon pricing. One important difference is that pricing carbon, depending on how it is implemented, could generate revenue for the government. These revenues could be used to offset other distortionary taxes (Barron et al., 2018; Goulder, 1995), address equity concerns (Goulder et al., 2019), or be directed toward other policy objectives. A subsidy-based approach costs the government the subsidy amounts and imposes the marginal cost of raising government funds on the economy.

The IRA includes various components like requiring firms producing non-carbon energy to have certain percentages of “domestic content,” meaning that they are required to buy certain inputs from American suppliers even if the price is higher than it would otherwise be. There are also rules about labor practices and wages in order to qualify for the subsidies–again, even if these rules make production of the non-carbon energy more expensive.

When the authors run some of these differences through their model of energy and the economy, they find that reducing carbon emissions through the subsidies of the Inflation Reduction Act costs $83 per metric ton of reduced carbon emissions, while getting the same reduction through a carbon tax would cost about $12-15 per metric ton. This large gap is probably an underestimate, because the model they are using focuses on differences in incentives for energy conservation and the efficiency of investment in new technology, but doesn’t include factors like the additional revenue raised by a carbon tax or the costs of added-on rules like “domestic content.”
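
To put those per-ton figures in perspective, a quick bit of arithmetic (the abatement total below is my own hypothetical; only the per-ton costs come from the paper):

```python
# Scale up the paper's per-ton cost estimates to a hypothetical abatement total.
tons = 1.0e9                            # assumed: one billion metric tons of CO2 abated
subsidy_cost = 83.0                     # $/ton under the IRA subsidies (from the paper)
tax_cost_lo, tax_cost_hi = 12.0, 15.0   # $/ton under an equivalent carbon tax

print(f"IRA subsidies: ${tons * subsidy_cost / 1e9:,.0f} billion")
print(f"Carbon tax:    ${tons * tax_cost_lo / 1e9:,.0f} to ${tons * tax_cost_hi / 1e9:,.0f} billion")
print(f"Ratio:         {subsidy_cost / tax_cost_hi:.1f}x to {subsidy_cost / tax_cost_lo:.1f}x")
```

Same emissions outcome, but roughly five to seven times the resource cost.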

I’m fully aware that, for the US political system, a carbon tax is a heavy lift. But for those who dislike the idea, consider the alternative: paintballing government subsidies at various aspects of non-carbon energy, with all their inflexible rules and administrative requirements. By comparison, the carbon tax begins to look pretty good.


Industrial Policy Lessons from South Korea

When economists or policymakers talk about industrial policy, Korea usually enters the conversation. There’s no question that Korea has had remarkable economic success in recent decades. There’s no question that various government policies have contributed to that success. But there is continuing controversy over what broader conclusions to draw from this experience. Shahid Yusuf offers useful background by laying out what actually happened in Korea’s development experience in “Could Innovation and Productivity Drive Growth in African Countries? Lessons from Korea” (Center for Global Development, Working Paper 635, March 2023).

As a starting point, here’s a graph from the World Development Indicators database maintained by the World Bank. It shows per capita GDP for Japan and South Korea in inflation-adjusted dollars. In 1990, Japan’s per capita GDP was triple that of South Korea. By 2021, Japan’s lead was down to just 7%. I try to adhere to the creed of never making predictions, especially about the future. But it is not at all inconceivable that a few years down the road, Korea will surpass Japan in per capita GDP.
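
For readers who want to check or update the comparison, here’s a minimal sketch of how one might pull the underlying series in Python, assuming the pandas-datareader package; NY.GDP.PCAP.KD is the WDI code for GDP per capita in constant US dollars, and the country labels are those the World Bank returns:

```python
# Fetch real per capita GDP for Japan and South Korea from the World Bank's
# World Development Indicators via pandas-datareader.
from pandas_datareader import wb

df = wb.download(indicator="NY.GDP.PCAP.KD",
                 country=["JPN", "KOR"], start=1990, end=2021)
gdp = df["NY.GDP.PCAP.KD"].unstack(level="country")

# Japan's per capita GDP as a multiple of Korea's at the endpoints
for year in ("1990", "2021"):
    ratio = gdp.loc[year, "Japan"] / gdp.loc[year, "Korea, Rep."]
    print(f"{year}: Japan/Korea ratio = {ratio:.2f}")
```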

What policy lessons can be learned from Korea’s remarkable growth trajectory? Yusuf hits the key themes here, but I’ll quote some of his comments in an order of my own.

As a starting point, remember Korea’s situation after World War II and the Korean War that followed from 1950-53. As Yusuf notes, Korea had an egalitarian redistribution of land after World War II: “The distribution of land formerly owned by the Japanese to Korean farmers in 1949 may have contributed to the egalitarian distribution of income and political stability that buttressed Korea’s later development. This redistribution was undertaken by the U.S. Military Government.” Moreover, South Korea had an external enemy–North Korea–to unite against: “One should not overlook the threat South Korea faced from its neighbor to the North, a threat that drove the government to accelerate industrialization. Industrial diversification and deepening enabled Korea to meet more of its defense requirements and neutralize the pressure from its hostile neighbor.”

In the 1960s, Korea’s government and business leaders determined to begin a drive to industrialization:

“The unwavering commitment of the political leadership and the business elite, starting with President Park Chung Hee in the mid 1960s and sustained by his successors, to a relatively inclusive, export-led industrial strategy entailing systematic diversification into more complex manufactures, is arguably the most frequently retailed. The strategy itself was choreographed and implemented by Korea’s economic bureaucracy headed by the Economic Planning Board (EPB) in consultation with the leading business groups. The Five-Year Economic Development and Science and Technology Comprehensive Plans spelled out the government’s vision and objectives. Presidential focus on industrial and export outcomes with reference to assigned targets and the attention given to cross sectoral coordination by the EPB mandarins, minimized failures that can short circuit linkage effects and stymie industrial change (Rodrik 1996).”

It’s useful to emphasize several themes about this broad policy approach.

1) There is a built-in emphasis on export-led development. This is a useful policy guideline, because a government can do a lot of things to favor its companies in their domestic sales, but expanding market share in international markets suggests genuine growth of competitiveness. Korean companies that couldn’t meet their targets in international markets found that their government subsidies were reduced or eliminated.

2) Korea’s development strategy was very broad-based, going way beyond offering subsidies and cheap credit to certain industries. For example, World Bank data says that back in 1970, about 15% of the over-25 population in Korea had completed “upper secondary” education (basically the equivalent of high school). By 1990 the proportion was about half, and now it’s about three-quarters. “Education especially in STEM disciplines and the development of industrial skills was also a priority from the very outset. Thousands of Korean students went abroad to study and links with foreign universities also facilitated a sharing of information on curricula and teaching modalities. Industrial diversification could not have succeeded had the supply of human capital and workforce skills fallen short.”

This graphic shows a range of policies: interventionist support for manufacturing, but also development of human capital (including heavy use of vocational training, science and technology education, and overseas training and education), strong support for research and development, and policies to bring technology from around the world and diffuse it into Korean firms.

3) Korea followed a step-by-step approach to development, with each step building on the previous one.

Korea’s manufacturing in the 1960s revolved around light, labor intensive activities such as garments, footwear, toys, food products, and light consumer electricals. The kind that were the norm in other low income economies. But starting in the early 1970s, Korea initiated a structural break and launched its heavy and chemical industry promotion plan (HCIPP) so as to diversify into more complex and technology intensive products. It constructed a state-owned iron and steel complex at Pohang financed in part by Japanese grants, a machinery production complex at Changwon, a petrochemical complex at Wulsan, an electronics complex at Kumi, and a major shipbuilding yard at Ulsan.

The development of individual industries followed a step-by-step approach. For example, Korea’s shipbuilding industry started with domestically produced steel, cheap local labor, and subsidized government loans. But then:

[L]eading shipyards such as Hyundai Heavy Industries sought foreign assistance on such areas as ship designs, operating instructions, the design of dockyards, and production processes. They hired European engineers and technicians to assist with the running of the shipyard and training of the workforce and adopted quality control measures modeled on the best practices of leading competitors. This plus the recruitment of newly minted engineers from Korean universities—with some having received advanced training overseas—aided in the rapid upgrading of the workforce and allowed the shipyards to largely dispense with foreign assistance by the late 1980s. A more capable, tech savvy workforce plus learning by doing delivered substantial gains in productivity as well as in quality (Kim and Seo 2009). These have continued into the present day with Korean firms among the frontrunners in the production of smartphones, autos, consumer durables, and engineering equipment.


Design capabilities took longer to acquire—close to fifteen years. For some years ship designs were acquired from overseas and technology licensed to build new types of higher value ships such as LNG carriers. This dependence gradually tapered once in-house research and testing facilities had matured. During the latter half of the 1980s, Korean shipbuilders were responsible for advances in protective coatings, welding techniques and in core technologies related to ship propulsion, engine performance, and hull design to minimize pressure and friction drag.

4) Korea’s government invested heavily in infrastructure: seaports, airports, roads, rail. As a more recent example, World Bank data shows that the share of Koreans using the internet went from 7% in 1998 to 73% just six years later in 2004–and the share has been above 90% and rising since 2016. A US-based techie friend of mine said to me: “We have the fastest home internet access available where we live, which means it’s almost as good as the crappy internet access in Seoul.”

5) Continual technological advance was seen as essential. Korea spends more of its GDP on R&D than just about any other country.

Technology was seen as essential to the success of industrialization and export competitiveness. MOST (Ministry of Science and Technology) and KIST (Korea Institute of Science and Technology) were established in order to promote technology transfer and absorption by Korea’s nascent manufacturing sector. Incremental institutional additions continued through the 1970s and the 1980s with the creation of the Korea Advanced Institute of Science (now KAIST, the leading S&T university), as well as a flock of specialized government research institutes (GRIs), many located in the Daedeok Science Town, which later morphed into the Daedeok Science Valley, housing public and private research entities employing thousands of highly trained professionals.


6) Korea had some advantages from its global neighborhood: “East Asian neighborhood effects that conferred reputational advantages and attracted the attention of foreign buyers and investors …”

How does one sum up this policy approach? It’s true that Korea’s government subsidized certain industries. As Yusuf writes: “The state created a controlled environment in which competition among Korean producers was encouraged but the market was protected by tariff barriers, and by restrictions on the entry and exit of firms. Moreover, Korean exporters were assisted by export subsidies, tax benefits and subsidized financing.”

But the Korean government also did a good job of choosing the industries to be subsidized, in a way that didn’t just pump money into existing production or doomed sectors. It emphasized broad education and improving the skills of its workforce. It emphasized modern infrastructure. It emphasized moving up the technological frontier, by importing expertise when needed, but also by taking many steps to develop its own technology. Also, Korea embraced the continual evolution of technological expertise–and the continual disruption that it brings.

South Korea’s economy faces ongoing challenges moving forward, like other economies around the world. For South Korea, a big shift is that much of its economic growth has been based on large industrial conglomerates (“chaebol”) doing various kinds of increasingly sophisticated manufacturing. But for the world economy as a whole, with robotics and automation on the rise, economic growth is increasingly in the services side of production–innovation, design, and support services like finance and legal–not in the making of physical objects. Thus, Korea’s current government plans emphasize continued technological skills and sophistication, but also application of that technology in small- and medium-size enterprises, as well as in the services sector.

There’s an ongoing argument about whether the specific subsidies to industry were more important than the more general improvements in human capital, infrastructure, and support for high levels of investment and technological growth. I won’t try to untangle that knot. But it seems clear that the industrial subsidies by themselves, without the array of broad supporting policies, would not have had the same success.