Has China’s Industrial Policy Worked? Interview with Lee Branstetter

I frequently read or hear a claim that the United States needs a more aggressive industrial policy to keep pace with China’s industrial policy. On its face, the claim is a curious one. After all, China’s “industrial policy” didn’t work very well from the 1950s up through the 1970s. It was only when China’s industrial policy started involving considerably less government control over the economy in the early 1980s that China’s impressive surge of economic growth began. There are a bunch of different ways to compare per capita GDP across countries (depending on the exchange rate between currencies that seems most relevant), but on World Bank estimates, China’s per capita GDP is still only about 20-25% of the US level.

Thus, from an overall perspective, China’s economic success looks a lot less like a carefully directed move forward and more like a bunch of political leaders trying to scramble up to the front of the economic parade so that they can claim to be leading it. And while I’m willing to take my economic lessons from wherever they emerge, the question of what lessons the US economy should be learning from an economy that has grown over four decades to reach maybe one-quarter of US per capita GDP is less than obvious.

But what if we dig down into the details of what China’s government has actually done to favor certain industries, and what lessons might be learned? Chad Bown interviews Lee Branstetter on this subject in “Is China’s industrial policy working?” (Trade Talks podcast, April 23, 2023, audio and transcript available).

Their discussion starts off with a quick overview of why even many market-oriented economists think it’s appropriate and productive for the government to play a role in supporting research and development (through some combination of intellectual property laws, subsidies, and tax breaks), as well as education and infrastructure. Thus, the focus here is on “industrial policy” aimed not at the overall economic climate, but at support for specific chosen industries. In addition, as Branstetter points out, China’s economic policy through the 1980s and 1990s was mostly about less government intervention, or about supportive conditions for extremely broad areas of the economy–like manufacturing. As Bown describes this time: “China reformed and became more market oriented. The government is still giving out a lot of subsidies, but in terms of industrial policy, those subsidies were not precisely targeted. And I suppose when you are subsidizing everything, you are preferencing and targeting nothing.”

But then China’s leadership becomes more involved in targeting specific industries. Branstetter describes the kind of research he has been doing in this way:

In 2007, China made it mandatory for companies listed on a Chinese stock exchange to disclose in their annual reports the subsidies that they received from the Chinese government. What that means is that starting in 2007, at least for China’s listed companies, we can get pretty rich firm-level data on the subsidies they received, and we can actually use that data along with all the other data disclosed by these firms to their investors to try and get a sense of how subsidies are correlated with firm characteristics, especially productivity. If the focus of Chinese industrial policy after the mid-2000s was really to strengthen the innovative capacity of these national champions, then it should be making firms more productive.

It’s then possible to look at whether China is targeting firms that already have high productivity, and trying to push them ahead, or whether it is targeting firms with lower productivity to help them catch up. In either case, one can look at the connection from subsidies to later productivity gains. It turns out that subsidies were mainly going to lower-productivity firms, and didn’t seem to help their productivity. Instead, firms were being chosen for subsidies because they were large. Branstetter says:

We find that the Chinese government is not giving subsidies to initially more productive firms. If anything, the statistical association is actually negative. The Chinese government is, on average, giving more subsidies to less productive firms. …

Chinese firms’ annual reports do often include language that describes what particular subsidies were for. But if we focus on that subset of subsidies that are meant to promote research and development, or the subset of subsidies that are meant to support upgrading of equipment, even for these specific subsidies, we find no relationship with productivity. It’s not the case that firms that are more productive are more likely to receive these subsidies in the first instance. And it’s not the case that firms that receive these subsidies become more productive later. …

Firms that are larger, as measured by total assets or employment, appear to be somewhat more likely to receive these subsidies. As we dug into this data, it became increasingly clear to us that the subsidies provided to Chinese firms had lots of objectives, many of which were not connected to productivity. We see significant quantities of subsidies going into declining industries like mining. We see significant subsidies that appear to be designed to support employment in large firms. … It’s understandable why the government would pursue this objective, but it’s also crystal clear that the pursuit of this objective directly undermines the pursuit of turning the already more productive firms into super innovators. The money that’s given to prop up failing firms is money that cannot be given to support the technology leaders of the future, and the more you do the former, the less resources you have to do the latter.
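The firm-level test Branstetter describes can be sketched as a simple regression of subsidies on prior productivity and firm size. Here is a minimal illustration in Python with made-up numbers: this is not the paper’s data or code, and the coefficient values are assumptions chosen only to mimic the qualitative finding (subsidies flow to larger, less productive firms).

```python
import numpy as np

# Hypothetical illustration: regress firm subsidies on prior productivity
# and size to see which kinds of firms the subsidies flow to.
rng = np.random.default_rng(0)
n = 500

productivity = rng.normal(0.0, 1.0, n)   # firm TFP, standardized (simulated)
size = rng.normal(0.0, 1.0, n)           # log total assets, standardized (simulated)

# Assumed data-generating process: subsidies load positively on size and
# negatively on productivity, plus noise.
subsidy = 0.5 * size - 0.2 * productivity + rng.normal(0.0, 1.0, n)

# OLS: subsidy ~ const + productivity + size
X = np.column_stack([np.ones(n), productivity, size])
beta, *_ = np.linalg.lstsq(X, subsidy, rcond=None)

print(f"coef on productivity: {beta[1]:+.2f}")  # negative in this simulation
print(f"coef on size:         {beta[2]:+.2f}")  # positive in this simulation
```

A negative coefficient on productivity and a positive one on size would match the pattern the interview describes: larger firms are favored, and more productive firms are not.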

In 2015, China announced a “Made in China 2025” policy, which clearly seeks to shift the production of a wide range of technologies from other countries to China. It turns out that this group of subsidy-receiving firms is more likely to have patents–which sounds as if industrial policy may be at work. But at least for the first few years of the program, the firms getting subsidies don’t seem to improve productivity and don’t seem to be taking out new patents at a faster rate. As Branstetter says:

And given how easy it is for firms to get patents in China, given the strong incentives they have with subsidies available at the local government level and elsewhere to take out these patent applications, it’s really surprising that we just don’t see any impact that these subsidies are making these firms more innovative or more productive. … When we try to evaluate the net benefits, if any, of Made in China 2025, it’s hard, at least on the basis of our analysis, to come to a very positive conclusion. Resources have been expended, but the desired innovative outcomes have not yet emerged.

These kinds of arguments may seem counterintuitive, because the conviction that China’s industrial policy is a sweeping success has become so embedded. For those who would like to dig in more, the study on subsidies is available as NBER working paper #30699, the study on Made in China 2025 as NBER working paper #30676, and a draft of an overall essay on China’s industrial policy is also at the NBER website.

The fundamental point, of course, is that industrial policy can’t be evaluated by pointing to a few companies that are success stories and that also received subsidies. Conversely, industrial policy also can’t be evaluated by looking at a few spectacular failures of companies that got subsidies, either. You need to look at the full array of companies receiving subsidies, see what mixture of political and economic factors guided that choice, and then look at later outcomes for the group as a whole.

Cryptocurrencies: The Classic Problems of Inside and Outside Money

Cryptocurrency is new in some ways, but in other ways, if it’s going to be “money” then it will share some of the problems of money in the past. Daniel Sanches discussed the traditional division between “outside” and “inside” money, and the inherent flaws of each one, in “New Moneys in the Digital Era” (Economic Insights: Federal Reserve Bank of Philadelphia, Q2 2023, pp. 2-10).

“Outside money” refers to money that is either not backed by anything (“fiat money”) or backed by something that is not a liability for anyone in the private sector, as in the case of money backed by government-held supplies of gold. “Inside money” is created “when two private parties engage in a transaction that involves the issuance of a liquid debt claim (that is, a claim that can circulate as a medium of exchange).” For example, when the two parties are me and my bank, and I deposit money in the bank, then we have created inside money that can be used to make purchases. Sanches writes:

To sum up, in the modern monetary system, central banks control the amount of outside money created in the economy, and private financial firms issue inside money to facilitate private transactions. Inside money is usually a promise to pay outside money, and each dollar of outside money is backing several dollars of inside money.

The classic problem with outside money arises when the quantity available of such money doesn’t adjust to economic conditions. For money to work well as a medium of exchange, it needs to have a (roughly) fixed value–that is, not much inflation or deflation. But there are historical examples (under the gold standard, during the Great Depression) where the US economy slowed dramatically because the supply of outside money was limited or declined.

Some major cryptocurrencies like Bitcoin have an outside money problem. The quantity of these currencies in circulation is governed by the software behind the currencies themselves (which runs the “blockchain”). As a result, the quantity of these currencies can’t adjust to demand. When demand goes up or down, the price of Bitcoin also zooms up or down. If you are serious about having money that has a more-or-less stable value, then this inability to adjust to demand makes Bitcoin unsuitable as money. Sanches writes: “Based on the accumulated experience and the theoretical research in monetary economics, it is hard to believe that any existing cryptocurrency will soon emerge as a sound monetary system, as opposed to a speculative investment vehicle.”

Some cryptocurrencies instead have tried to be inside money. For example, a “stablecoin” is a cryptocurrency backed by US dollar-denominated financial assets. The quantity of this currency can rise if the market so desires–just buy more financial assets to back the new currency. It can also contract if the market so desires. However, there is no regulation or guarantee of exactly what assets are being used to back these stablecoins. Are the assets something very safe and easy to sell, but with a low rate of return, like US Treasury bonds, or something riskier which pays a higher return? Sanches writes:

In reality, it is not clear what types of assets stablecoin issuers hold as collateral for their tokens. Many stablecoin issuers claim that their tokens are fully backed by U.S. dollars, but the issuers do not specify the types of dollar assets it holds, so it is not clear whether all dollar assets backing stablecoins are safe assets, such as bank deposits and government bonds, or risky assets, such as commercial paper. No regulatory mechanism verifies the types of assets and corresponding balances in custodial accounts.

As a result of this kind of uncertainty, the classic issue for inside-money stablecoins is bank runs. What happens if investors fear that the financial backing of the stablecoin is insufficient, and start pulling out their money? Sanches tells the recent story:

The recent run on two major stablecoin issuers demonstrates the problems associated with creating inside money outside of the regulated financial system. TerraUSD is a stablecoin hosted by the Terra Network and created by South Korea’s Terraform Labs. Investors were attracted to TerraUSD because they could earn returns of nearly 20 percent annually by lending their TerraUSD holdings via Anchor Protocol, a decentralized bank for crypto investors. Until May 2022, TerraUSD’s value remained very close to $1, as intended by its issuer, and it was the third-largest stablecoin, with a market capitalization of $18 billion. But on May 9, its value declined suddenly to 90 cents following large withdrawals from Anchor Protocol. As in a typical bank run, the initial withdrawals on May 9 led to further withdrawals, and within a few days TerraUSD was trading at approximately 20 cents. TerraUSD has not recovered from that crisis and, as of the writing of this article, was trading at roughly 2 cents.

If you are a short-term investor looking for newfangled financial assets that might make big moves up or down–so that you can capitalize on these changes–crypto makes sense. But if these kinds of financial instruments are going to eventually become a form of money that is in widespread use, these classic problems need to be addressed. Sanches sums up this way:

It is likely that in the not-so-distant future, our money will be entirely digital, and cryptocurrencies will likely play an important role in this new monetary system. However, this transition will inevitably be slow and bumpy, requiring both experimentation and prudence. At the very least, a stable cryptocurrency standard requires that unbacked digital tokens—which are, despite their novelty, just another outside money—acquire the properties of an elastic currency, as defined in this article. And without government regulation, such as a requirement that stablecoins be fully backed by short-term government bonds, digital tokens that take the form of demand deposits via currency pegs—which, despite their novelty, are just another inside money—are likely to suffer runs.

Craiutu on the Courage and Nonconformism of Moderation

Aurelian Craiutu is a professor of political science at Indiana University, who has spent a good chunk of his career thinking about what “moderation” means–from the perspective of someone who grew up in communist Romania during the rule of Nicolae Ceaușescu. Geoff Kabaservice talked with him for a podcast in April 2022. I’ll quote here from the transcript (“Why the leading challengers to liberalism and moderation come from the West, with Aurelian Craiutu,” Niskanen Center, April 29, 2022).

Craiutu’s comments resonated with me in part because of his clear-eyed view that moderation is a “difficult virtue that requires a great dose of courage, nonconformism, and risk.” My sense is that when you get together a room of people who share a common view, a dynamic can emerge in which people compete to show their degree of allegiance to the shared view, and in doing so, some of the people will stake out ever-more-extreme positions. In such a setting, being a moderate isn’t easy. Here are some comments from Craiutu:

I don’t want to identify moderation with centrism. The way in which I think about moderation is that it can be found on both sides of the political spectrum. There are moderates on the left, in the center, and on the right. It’s not necessary to be a centrist in order to be a moderate. So that’s something that I think can be demonstrated by looking at thinkers in the past, politicians and agendas. …

I’ve always been fascinated by moderate thinkers who are concerned with maintaining the balance of the ship. Keeping the ship on an even keel is, I think, one of the best definitions of what political moderation is all about — hence the image of the trimmer. The trimmer is the person who trims the sails in order to prevent the ship from capsizing.

There is no algorithm, there is no science that could explain what to do, when to act, when not to act. You have to have political judgment. You have to have political flair. You have to be like a tightrope walker. And in this regard, I think it’s one of the riskiest things to try to act as a moderate when passions run high, when reason is overcome by passion and most people just want to shout and express their dismay, their concerns and so forth, without concern for political moderation. It’s a virtue, as a title of my book says, a virtue only for courageous minds. It’s a paradox. The image of moderation is that of a weak virtue. And I think, and we can talk at length about this, that it is a difficult virtue that requires a great dose of courage, nonconformism, and risk. …

It’s one of the most difficult concepts to define because moderation constitutes an archipelago. There is political moderation, so we look at the institutional aspects of moderation: What are the institutions and mechanisms that limit power, that prevent power from being abused? And we know what those are: checks and balances, constitutionalism, freedoms, freedom of the press — a very important freedom — freedom of association, constitutionalism, bicameralism. And there are others: federalism maybe, decentralization, subsidiarity. All of those constitute what I would say is the institutional archipelago of moderation.

But there’s also, when we talk about moderation, a host of ideas related to its ethical part. What does it mean to be a moderate? Well, there are lots of things here that can be said. One thing that I would emphasize is that to be a moderate is the opposite of being a fanatic. A fanatic is someone who doesn’t put things in perspective; that subsumes everything under one category, one principle; that is ready to sacrifice everything for the pursuit of that single value, be that liberty, equality, pro-life, pro-choice, low taxes, you name it. So that’s one.

There is also implied in the ethical component of moderation a good dose of skepticism and awareness of one’s fallibility — which is a form of modesty, if you wish, and a form of humility. Moderates are people who tend to be modest and display a good dose of humility, understanding very well that they may be wrong, that they may have only a portion of the truth. …

And there is also the third aspect of moderation, which is religious moderation. Now that’s a topic that I have not written about, and I’ve thought a little bit about it, but there is a whole continent of religious moderation that I think needs to be rediscovered today. To be religious and to be moderate are two different things, but they’re not incompatible — on the contrary. Reinhold Niebuhr is one of the thinkers that comes to mind here. He was able to combine both political moderation and religious moderation. But there are others as well. …

I’ll give you an example, a concrete example here. … Raymond Aron was a great French political thinker, sociologist, and journalist who at some points in his career acted as a trimmer. For example, in 1968 he criticized the university system for being sclerotic. The university was then, as it probably is still now, very anchored in old practices that didn’t serve the student needs. And he thought that professors should be more available to students, they should put less emphasis on exams and more on engaging with students. So he was for reform in the system.

On the other hand, he was vehemently against the students’ revolt in ‘68 because he thought that they were interested in carnival rather than real reform. So he was a trimmer. To the students, he talked the language of the university administrators, arguing for finding a modus vivendi between their claims and the university’s needs and constraints. And to the university administrators, he spoke the language of the students, pushing for reform against the sclerotic practices and habits of the professoriate. So I think that it’s possible to be a trimmer, and a principled one. It doesn’t mean that everyone who claims to do some trimming will be successful in avoiding the charge of opportunism. But Aron, for example, was a successful one.


Calvin Coolidge on Voter Evaluations of Entertainment and Accomplishment

On this July 4, I find myself thinking about the long run-up to the national elections that will happen in November 2024. As trial balloons are floated, exploratory committees are formed, and candidacies are announced, I’m reminded of the long-ago warning from Calvin Coolidge that voters should not reward candidates merely for what seems like an agreeable and/or entertaining personality, but should instead focus on character, ability, and experience.

The background here is that Warren G. Harding was elected President of the United States in the 1920 election, with Calvin Coolidge as his Vice President. When Harding died in 1923, Coolidge succeeded to the presidency, and then won the 1924 election, before deciding against running again in 1928. From 1930-31, Coolidge wrote a series of short “Dispatches” published in newspapers around the country under the headline “Calvin Coolidge Says.” Here’s his dispatch for October 8, 1930, in the run-up to Election Day that year:

If self-government is to continue to be a success the voters must take their duties seriously. As the relations of the government in both our political and economic life become increasingly intricate, the necessary qualifications for discharging the functions of high office must be correspondingly raised. Administration and legislation are becoming more and more an exact science. It is no longer possible to expect the best results from men and women without previous training in public activities.

For important political service the three qualifications necessary are character, ability and experience. Some of our voters are not giving sufficient consideration to these requirements. They are often supporting candidates whose greatest appeal is that they are good fellows. An agreeable personality is a fine quality, but it is not enough to administer a great office. It is vain to support office seekers who smile, if it results in electing officeholders who are not competent.

The government cannot be run successfully by substituting the power of entertainment for the power of accomplishment. The essential quality for the voters to require in their choice of candidates is capacity for public service.

When I hear people talk about their voting choices these days, they often emphasize either the extent to which they agree with the candidate on certain issues, or disagree with the opposing candidate, or whether they feel as if their preferred candidate is “on their side” in some sense. These sorts of issues matter, of course. But the combination of “character, ability, and experience” matter, too. I have a strong preference for candidates who do the hard work of learning subjects in depth, reading the background materials personally, listening to a broad array of their constituents, running a competent group of staffers, negotiating with those who disagree, and putting in the time to craft the details of legislation that can actually work. By comparison, whether the candidate can hire public relations staff to write up some good zingers for social media consumption doesn’t have much to do with the actual practice of public service.

Biggest Economic Applications of Generative AI: McKinsey

What are likely to be the biggest economic applications of the current wave of artificial intelligence technologies? The McKinsey Global Institute takes a shot at answering the question in “The economic potential of generative AI” (June 2023). Their estimate is that these technologies could add $2.6 trillion to $4.4 trillion annually to the global economy; for perspective, total world GDP is about $100 trillion.
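As a quick back-of-the-envelope check on those magnitudes (my own arithmetic, not a calculation from the report):

```python
# McKinsey's estimated annual value added by generative AI,
# expressed as a share of world GDP (roughly $100 trillion).
low, high = 2.6, 4.4    # trillions of dollars per year
world_gdp = 100.0       # trillions of dollars

print(f"{low / world_gdp:.1%} to {high / world_gdp:.1%} of world GDP")
```

In other words, the estimate amounts to roughly 2.6 to 4.4 percent of world output each year.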

As a starting point, what is “generative AI” and why is it distinctive?

For the purposes of this report, we define generative AI as applications typically built using foundation models. These models contain expansive artificial neural networks inspired by the billions of neurons connected in the human brain. Foundation models are part of what is called deep learning, a term that alludes to the many deep layers within neural networks. Deep learning has powered many of the recent advances in AI, but the foundation models powering generative AI applications are a step change evolution within deep learning. Unlike previous deep learning models, they can process extremely large and varied sets of unstructured data and perform more than one task. Foundation models have enabled new capabilities and vastly improved existing ones across a broad range of modalities, including images, video, audio, and computer code. AI trained on these models can perform several functions; it can classify, edit, summarize, answer questions, and draft new content, among other tasks. …

The rush to throw money at all things generative AI reflects how quickly its capabilities have developed. ChatGPT was released in November 2022. Four months later, OpenAI released a new large language model, or LLM, called GPT-4 with markedly improved capabilities. Similarly, by May 2023, Anthropic’s generative AI, Claude, was able to process 100,000 tokens of text, equal to about 75,000 words in a minute—the length of the average novel—compared with roughly 9,000 tokens when it was introduced in March 2023. And in May 2023, Google announced several new features powered by generative AI, including Search Generative Experience and a new LLM called PaLM 2 that will power its Bard chatbot, among other Google products.

In what particular areas of the economy are these tools likely to be especially useful? McKinsey suggests that four areas will account for about three-quarters of the productivity gains:

About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D. Across 16 business functions, we examined 63 use cases in which the technology can address specific business challenges in ways that produce one or more measurable outcomes. Examples include generative AI’s ability to support interactions with customers, generate creative content for marketing and sales, and draft computer code based on natural-language prompts, among many other tasks. …

Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities. Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today. … The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.

The McKinsey report goes into much more detail. But in broader terms, the discussion makes me think about how people have interacted with computer technology over time. For example, it used to be that only a limited number of high-caste workers could access the computers. When I was in high school in the 1970s, there were only a couple of terminals for the entire class to use–and we felt pretty up-to-date to have those. The personal computer revolution of the 1980s and 1990s, and then the smartphone revolution of the last two decades, have democratized access to the technology. But the specific manner of using the technology still mostly involved using a more-or-less intuitive piece of software or an app. We are now moving into a space where it will be possible to use the technology with “natural language”–not just for searching on a map, but for a range of less structured tasks.

I don’t have a well-developed sense of how this will ultimately affect the tasks we usually do at work, along with wages and inequality. I suspect it will touch us all in various ways. But in particular, those in customer operations, marketing and sales, software engineering, and R&D should have eyes wide open to the evolving possibilities.

Harry Markowitz (1927-2023): Bringing Finance Into Economics

Harry Markowitz has died. He won the Nobel prize in economics in 1990, jointly with Merton Miller and William Sharpe “for their pioneering work in the theory of financial economics.”

I won’t try here to review how Markowitz’s work underlies the capital asset pricing model (“cap-M,” as economists pronounce it) that is now a fundamental part of finance and business school classes. Some readable and accessible starting points for the details are Markowitz’s Nobel lecture and an overview article by Hal Varian, “A Portfolio of Nobel Laureates: Markowitz, Miller and Sharpe,” in the Winter 1993 issue of the Journal of Economic Perspectives.

For me, the most fundamental part of Markowitz’s work was that he, as much as anyone, brought finance under the umbrella of what was understood to be “economics.” Markowitz told a story to illustrate this point in a number of interviews: here’s the version from the Journal of Financial Planning in May 2010. To set the stage, it’s 1955, Markowitz has been working at the Rand Corporation in California, but he needs to fly back to Chicago for the oral defense of his doctoral dissertation. Markowitz says:

“I remember landing at Midway Airport thinking, ‘Well, I know this field cold. Not even Milton Friedman will give me a hard time.’ And, five minutes into the session, he says, ‘Harry, I read your dissertation. I don’t see any problems with the math, but this is not a dissertation in economics. We can’t give you a Ph.D. in economics for a dissertation that isn’t about economics.’ And for most of the rest of the hour and a half, he was explaining why I wasn’t going to get a Ph.D. At one point, he said, ‘Harry, you have a problem. It’s not economics. It’s not mathematics. It’s not business administration.’ And the head of my committee, Jacob Marschak, shook his head, and said, ‘It’s not literature.’

“So we went on with that for a while and then they sent me out in the hall. About five minutes later Marschak came out and said, ‘Congratulations, Dr. Markowitz.’ So, Friedman was pulling my leg. At the time, my palms were sweating, but as it turned out, he was pulling my leg …”

There’s a minor dispute over whether Friedman was indeed serious; it’s not clear to me that he was just goofing. Yes, Friedman wasn’t ultimately willing to block the dissertation. However, in a later interview, Friedman did not recall the episode but said: “What he [Markowitz] did was a mathematical exercise, not an exercise in economics.”

But in his Nobel acceptance lecture back in 1990, Markowitz ended by telling a shorter version of this story, and then said: “As to the merits of his [Milton Friedman’s] arguments, at this point I am quite willing to concede: at the time I defended my dissertation, portfolio theory was not part of Economics. But now it is.”

In Varian’s JEP article, he quotes a 1990 comment from Robert Merton (who would share the Nobel prize in 1997 with Myron Scholes for work on how to value derivative financial instruments): “As recently as a generation ago, finance theory was still little more than a collection of anecdotes, rules of thumb, and manipulations of accounting data. The most sophisticated tool of analysis was discounted value and the central intellectual controversy centered on whether to use present value or internal rate of return to rank corporate investments.” As much as anyone, Markowitz brought to economics and finance the ideas that financial choices involved systematic tradeoffs between risk and return, and that risks needed to be assessed in the context of an overall portfolio rather than one investment at a time. Such ideas and their implications were radical at the time; now, they are commonplace.
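That portfolio-level view of risk can be shown in a few lines. In mean-variance terms, a portfolio’s variance is w′Σw, so it depends on the covariances between assets, not just on each asset’s own volatility. Here is a minimal sketch with illustrative numbers of my own choosing (not drawn from Markowitz or Varian):

```python
import numpy as np

# Two hypothetical assets with identical expected returns and volatilities.
mu = np.array([0.08, 0.08])      # expected returns
sigma = np.array([0.20, 0.20])   # standard deviations
rho = 0.2                        # assumed correlation between the assets

# Covariance matrix
cov = np.array([
    [sigma[0]**2,               rho * sigma[0] * sigma[1]],
    [rho * sigma[0] * sigma[1], sigma[1]**2],
])

w = np.array([0.5, 0.5])         # equal-weight portfolio

port_return = w @ mu             # same expected return as either asset alone
port_sd = np.sqrt(w @ cov @ w)   # below 0.20: diversification at work

print(f"portfolio return: {port_return:.2%}")
print(f"portfolio sd:     {port_sd:.2%}  (each asset alone: 20.00%)")
```

The point of the exercise: the portfolio keeps the 8 percent expected return but has a standard deviation below 20 percent, because the assets are imperfectly correlated. That is the risk-in-context insight that was radical in 1955.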

Nation’s Report Card: Test Scores Sink for 13 Year-Olds

The National Assessment of Educational Progress (NAEP) is sometimes called “the nation’s report card.” It tests a nationally representative group of students at ages 9, 13, and 17. At an individual school, it’s possible for tests to get easier over time, or for grade inflation to result in higher letter grades for the same test scores. But the NAEP test is written so that the scores are directly comparable from year to year.

The test results offered some modest reasons for encouragement in the early 2000s. But the 2023 scores for 13 year-olds are bleak. The press release with the “highlights” is titled: “Scores decline again for 13-year-old students in reading and mathematics.” Here are the reading and math scores since 1971:

The test was previously given in 2020, so it’s plausible to read the scores as the result of many students losing at least a year of in-classroom education. A cynic would point out that scores were already falling from 2012 to 2020. The 2023 reading scores are essentially back to 1971 levels, and the math scores have fallen nearly that much. Scores fell for pretty much every group: by gender, race/ethnicity, public or private school, region of the country, and so on.

This is a survey, not an academic study, so it doesn’t seek to pin down causes and effects in a rigorous statistical way. But the survey data has some clues.

Absences are up. “[T]here were increases in the percentages of 13-year-old students who reported missing 3 or 4 days and students who reported missing 5 or more days in the last month. The percentage of students who reported missing 5 or more days doubled from 5 percent in 2020 to 10 percent in 2023.”

The share of students who read for fun is falling. “In 2023, fourteen percent of students reported reading for fun almost every day. This percentage was 3 percentage points lower than 2020, and 13 percentage points lower than 2012. Overall, the percentage of 13-year-old students who reported reading for fun almost every day was lower in 2023 than in all previous assessment years.”

Fewer students are taking algebra by age 13. “Compared to 2020, there were no significant changes in the percentages of students by type of mathematics taken during the 2022–23 school year. Compared to 2012, however, the percentage of 13-year-old students in 2023 who reported they were taking regular mathematics increased from 28 to 42 percent, while the percentage of students taking pre-algebra decreased from 29 to 22 percent, and the percentage of students taking algebra dropped from 34 to 24 percent.” 

I’ve got no magic wand for improving school performance, but there are some tips from the research literature. For example, international comparisons with countries that have high-performing K-12 education systems suggest the importance of 1) comprehensive and competitive exit exams for high school graduates; 2) competition between schools; and 3) teachers drawn from the top one-third of all college graduates. Lessons drawn from US charter schools that have success with low-income and minority students suggest the importance of five factors: 1) intensive teacher observation and training, 2) data-driven instruction, 3) increased instructional time, 4) intensive tutoring, and 5) a culture of high expectations. Notice that these lessons are less about traditional resource inputs like per-pupil spending and student-teacher ratios. In particular, the availability of small-group tutoring so that students don’t fall behind seems supported by a variety of studies.

Forty years ago in 1983, the National Commission on Excellence in Education issued a report called “A Nation At Risk: The Imperative For Educational Reform.” It caused a lot of hand-wringing at the time. Here are the often-quoted opening paragraphs from the 1983 report:

Our Nation is at risk. Our once unchallenged preeminence in commerce, industry, science, and technological innovation is being overtaken by competitors throughout the world. This report is concerned with only one of the many causes and dimensions of the problem, but it is the one that undergirds American prosperity, security, and civility. We report to the American people that while we can take justifiable pride in what our schools and colleges have historically accomplished and contributed to the United States and the well-being of its people, the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people. What was unimaginable a generation ago has begun to occur–others are matching and surpassing our educational attainments.

If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves. We have even squandered the gains in student achievement made in the wake of the Sputnik challenge. Moreover, we have dismantled essential support systems which helped make those gains possible. We have, in effect, been committing an act of unthinking, unilateral educational disarmament.

As the 1983 report points out, the US has been losing its lead in education. Historically, the US was one of the first countries to have near-universal elementary education, then one of the first countries to have a mass expansion of high school education, and then of college education. As other countries caught up in K-12 education, it used to be common, a few decades ago, to argue that the large share of US students going on to higher education helped to make up for the relatively poor performance of the US K-12 schools.

But now other countries have caught up, both in K-12 education and in the share of population going on to higher education. Other countries like Germany also have aggressive and wide-ranging apprenticeship programs for high school students to build skills and get connected to work opportunities.

The ongoing performance of the US K-12 system has been a slow-motion crisis for decades. The drop-off of test scores among current 13 year-olds in the aftermath of the pandemic is another crisis. As far as I know, there isn’t any evidence that just giving existing teachers a raise, without more sweeping and fundamental changes, will make any difference. But we know that, on average, those who try to navigate 21st-century labor markets with education levels little different from the 1970s are facing a lifelong disadvantage.

Helping the Neighborhood by Charging for Parking

When economists look at a resource (including their own work lives!), they contemplate whether that resource is being used in the most effective way. Donald Shoup has been writing for years about the resource of the “curb lane”–that is, where drivers often park their cars. His ongoing theme is that free (or underpriced) curbside parking often seems attractive to both drivers and nearby stores and businesses, but when the predictable result of the seemingly “free” resource is drivers cruising up and down the streets, unable to find a place to park near where they want to go, the attraction of “free” is diminished.

In a recent essay, Shoup explores the idea of “Parking Benefit Districts” (Journal of Planning Education and Research, published online March 2023). He points out some of the tradeoffs of “free” parking, like the wasted time and energy of drivers as they cruise the streets, criss-crossing with pedestrians and bicyclists, looking for a spot. His common recommendation is that when parking is scarce, parking meters should use prices that vary by location and time of day, in such a way that if you are looking for street parking, there will usually be an available space within a block or two–if you are willing to pay the price, of course. In some cases, the appropriate answer may be to reduce curb parking and free up space for bus and bike lanes, for outdoor dining, or for additional trees and less cluttered sidewalks.

Shoup writes:

Transportation planners have neglected curb parking because nothing is moving, and land-use planners have neglected it because it is in the roadway. No one seems to know how to solve the curb parking problem, except for followers of Nobel laureate William Vickrey who proposed that cities should set the prices for curb spaces to “keep the amount of parking down sufficiently so there will almost always be space available for those willing to pay the fee” (Vickrey 1954). Prices can vary by place and time of day to leave one or two open curb spaces on every block. Where all but one or two curb spaces on a block are occupied, the parking is both well used and readily available. …

Market prices for curb parking exemplify what Jaime Lerner (2013) called urban acupuncture: a simple touch at a critical point (in this case, the curb lane) can benefit the whole city. In another medical metaphor, streets are a city’s blood vessels, and overcrowded free curb parking is like plaque on the vessel walls, leading to a stroke. Market prices for curb parking prevent this urban plaque.

The biggest problem with charging for curb parking is politics. Cities charge for public services like water and electricity to recover the capital and operating costs of providing them, but curb parking doesn’t have any obvious capital or operating cost to recover. Unmoored from the need to recover any costs in the city’s budget, curb parking prices are purely political (Manville and Pinsky 2021). How can cities create political support for paid parking?

The idea of a “parking benefit district” is that the money spent on parking in any given neighborhood should go to that neighborhood, to be used for local purposes like picking up garbage, planting trees, cleaning up graffiti, fixing sidewalks, and so on. As Shoup writes:

The goal is not to persuade drivers they should pay for curb parking. The goal is to convince stakeholders they should charge for curb parking. Anyone who does not store a car on the street may begin to see free curb parking the way landlords see rent control. Free curb parking is rent control, for cars. If people want better public services more than they want free curb parking, the curb lane can benefit everyone, not just drivers who store their cars on the street.

Shoup also briefly discusses cities that have enacted a version of parking benefit districts for certain areas of the city.

Shoup offers a thought experiment of how a parking benefit district might work in New York City. He writes: “Because New York does not charge drivers for parking in 97 percent of its three million curb spaces, it offers a titanic subsidy for cars. If the city charged only $5.50 per curb space per day, it would earn $6 billion a year, about the same as the $6.1 billion farebox revenue from all New York City public transit in 2019 …” He points out that on the Upper West Side of Manhattan, off-street parking costs from $35 to $147 per day. If parking meters for this area collected the low-end number here–$35/day–it could provide more than $100 million per year for street-level neighborhood improvements.
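Shoup’s back-of-the-envelope revenue figure can be reproduced directly from the numbers quoted above (three million curb spaces at $5.50 per space per day); here is a quick sketch of the arithmetic:

```python
# Shoup's New York thought experiment, using the figures quoted above.
curb_spaces = 3_000_000      # roughly three million NYC curb spaces
price_per_day = 5.50         # dollars per space per day
annual_revenue = curb_spaces * price_per_day * 365

print(f"${annual_revenue / 1e9:.1f} billion per year")  # → $6.0 billion per year
```

The product works out to just over $6 billion per year, which matches Shoup’s comparison to the $6.1 billion in 2019 farebox revenue from all New York City public transit.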

For some previous posts on the tradeoffs of offering “free” parking, see:

Why are the Recent US Bank Runs So Much Faster?

When Silicon Valley Bank was shut down in March 2023, 25% of its deposits had been withdrawn on March 9, and the expectation was that another 62% of deposits would be withdrawn on March 10. The bank was on track to lose almost 90% of its deposits in two days! There haven’t been a lot of US bank runs in the last few decades, but the recent ones have been fast. Jonathan Rose discusses “Understanding the Speed and Size of Bank Runs in Historical Comparison” (Economic Synopses: Federal Reserve Bank of St. Louis, 2023: #12).
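The “almost 90% in two days” figure is just the sum of the two daily outflows described above; a trivial check:

```python
# Deposit outflows at Silicon Valley Bank, per the figures above.
withdrawn_mar_9 = 0.25    # share of deposits withdrawn on March 9
expected_mar_10 = 0.62    # share expected to be withdrawn on March 10
two_day_run = withdrawn_mar_9 + expected_mar_10

print(f"{two_day_run:.0%} of deposits gone in two days")  # → 87% of deposits gone in two days
```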

Here’s a table from Rose to help frame the topic:

The column showing “deposit insurance coverage” is meaningful because if deposits were already covered by insurance, there is no reason for them to be withdrawn. In retrospect, the bank runs of 2008–at Wachovia and Washington Mutual–stand out on this list. At those institutions, most of the deposits were already insured, and the decline in deposits during the run was relatively small–and also took a few weeks.

In the recent examples, including Silicon Valley Bank, the share of deposits covered by deposit insurance was much lower. Instead, these banks sought to attract large deposits from particular groups, like venture-capital-backed firms (Silicon Valley Bank), crypto investors (Silvergate and Signature), private equity firms (Signature and Silicon Valley Bank), and high net worth individuals (Signature and First Republic). A bank where most of the deposits are not covered by deposit insurance is more vulnerable to runs.

In addition, it has been possible for decades now to withdraw funds from a bank very quickly: as Rose points out, the Continental Illinois bank run back in 1984 was described at the time as a “lightning fast electronic run” by big corporations pulling out their deposits. What seems different now is that a run can happen with a large number of mid-sized and smaller investors with similar concerns (like VC-backed firms or crypto investors), who are connected with each other via social media networks.

The underlying lesson here is an old one, but in a new context. Banks will seek out ways to make money, including by specializing in providing services to specific groups like VC-backed firms, crypto investors, private equity, and so on. There’s nothing wrong with a bank trying to attract customers! But when those specialized groups also provide substantial deposits to the bank, going beyond the usual deposit insurance limits, then there are risks of bank runs that send shivers through the entire banking system. At a minimum, bank regulators need to be aware of these risks and adapt the deposit insurance premiums for such banks accordingly. The more aggressive step would be to force banks to separate their provision of specialized financial services into a company separate from the actual “bank.”

Looking Back at the Supply Chain Woes

During the supply chain woes that plagued the global economy in 2020 and 2021, the Federal Reserve Bank of New York put together what it called the Global Supply Chain Pressure Index (GSCPI). The latest version of the index both highlights the size of those disruptions and also suggests these pressures have now faded away.

Here’s the index from 1997 up through May 2023. The spikes in supply chain pressure in spring and summer of 2020, and then again in late 2021 and into 2022, are apparent. The decline to usual levels in the last few months is also clear.


What is actually being measured here? The New York Fed explains:

The GSCPI integrates a number of commonly used metrics with the aim of providing a comprehensive summary of potential supply chain disruptions. Global transportation costs are measured by employing data from the Baltic Dry Index (BDI) and the Harpex index, as well as airfreight cost indices from the U.S. Bureau of Labor Statistics. The GSCPI also uses several supply chain-related components from Purchasing Managers’ Index (PMI) surveys, focusing on manufacturing firms across seven interconnected economies: China, the euro area, Japan, South Korea, Taiwan, the United Kingdom, and the United States.

The Fed stirs together these ingredients and gets a distribution of scores. The vertical axis of the scale is measured by standard deviations, which may not be intuitive for some readers. Here’s my off-the-cuff effort at explaining. Imagine a bell-shaped curve showing a distribution of some variable: that is, most of the scores are centered around the middle, with many fewer scores on the left or right tails. For a particular version of this bell-shaped curve known as a normal distribution, about two-thirds of the scores are within one standard deviation (plus or minus) of the average score; 95% of the scores are within two standard deviations; and 99.7% of the scores are within three standard deviations.
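For readers who want to verify the “two-thirds / 95% / 99.7%” figures, they follow directly from the normal distribution’s error function; here is a minimal sketch in Python (using only the standard library):

```python
import math

def share_within(k):
    # Share of a normal distribution lying within k standard deviations
    # of the mean: erf(k / sqrt(2)).
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3, 4):
    print(f"within {k} sd: {share_within(k):.2%}")
# → within 1 sd: 68.27%
#   within 2 sd: 95.45%
#   within 3 sd: 99.73%
#   within 4 sd: 99.99%
```

A reading four standard deviations from the mean lies beyond roughly 99.99% of a normal distribution, which is why the pandemic-era spikes in the index stand out so sharply.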

The scores on this Global Supply Chain Pressure Index are not a neat bell-shaped “normal” curve. But it’s still true that when a score gets to be three or (gulp!) four standard deviations from the mean, it’s extremely rare. In other words, those supply chain disruptions from 2020 through early 2022 were really, truly outside the experience of the previous quarter-century.

For some earlier posts on these supply chain issues, see: