Each fall and spring, the Brookings Institution holds a conference with a set of papers from prominent economists on leading policy topics, with comments and discussion. The Fall 2024 conference happened last Thursday and Friday. You can go to the website and spend hours watching the whole thing, or you can pick and choose among the topics, and read over some of the conference discussion papers and comments. If you want to know why in Washington policy circles it seems as if “everyone knows” some economic fact or insight, the source of that common knowledge can, reasonably often, be traced back to papers presented at BPEA. Here’s the agenda, with some links:
“Challenges Around the Fed’s Monetary Policy Framework and Its Implementation,” by William English and Brian Sack
“Considerations for a Post-Pandemic Monetary Policy Framework,” by Charles Evans
“Did the Federal Reserve’s 2020 Policy Framework Limit Its Response to Inflation? Evidence and Implications for the Framework Review,” by Christina Romer and David Romer
Economists commonly think about technology as an idea, but in one way or another, technology interacts with physical forms–and these physical forms affect how the technology is applied and what its social effects are. In his da Vinci Medal Address, Donald MacKenzie considers some implications of this idea in “Material Political Economy” (Technology and Culture, July 2024).
One classic example he mentions is the conflict that arose some centuries ago, in feudal times, over the technology of how grain should be milled. Feudal lords typically preferred a centralized system, in which commoners brought the grain to a watermill or windmill owned by the lord, and paid the lord for milling the grain. However, many commoners would have preferred to avoid the payment to the lord, and instead to mill the grain by hand. In turn, feudal lords sometimes sought to destroy handmills, where they could do so. In this setting, the technology choice is obviously not just about efficiency in an abstract sense, but about the interaction of efficiency with preexisting social structure.
As a modern example, I was intrigued by MacKenzie’s discussion of ultrafast high-frequency trading (HFT) for financial firms. He points out that when these firms were being established in the US in the 1990s, “HFT firms were sometimes excluded from trading or faced material barriers that protected incumbents’ slower systems.” But his focus is on more recent developments.
Just how fast is “ultrafast”? Each year, the European futures exchange Eurex publishes data from which we can infer the response times of the fastest HFT algorithms. Eurex’s 2023 measurements suggest a state-of-the-art response time (to a packet of market data that triggers a trading system’s action) of 8 nanoseconds, or billionths of a second. In a nanosecond, the fastest physically possible signal, light in a vacuum, travels only around 30 cm, or roughly a foot. That is not simply a helpful yardstick of HFT’s speed: getting messages to travel as close as possible to the speed of light in a vacuum is an important practical concern in HFT. Fiber-optic cable, for example, is not fast enough, because the refractive index of the glass at the core of such a cable slows laser-light signals to around two-thirds of light’s speed in a vacuum. Where possible, therefore, HFT firms send trading data and orders by microwave, millimeter-wave, or laser-light signals transmitted through the atmosphere, where they travel almost as fast as in a vacuum …
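To put those speeds in perspective, here’s a quick back-of-the-envelope calculation. The Chicago-to-New Jersey distance below is my own rough approximation of the geodesic, not a figure from MacKenzie’s essay:

```python
# Back-of-the-envelope one-way signal times between the Chicago and
# northern New Jersey data centers. The ~1,180 km geodesic is approximate.
C_VACUUM_KM_PER_S = 299_792.458   # speed of light in a vacuum
GEODESIC_KM = 1_180

media = {
    "microwave through air (~1.00c)": 1.00,
    "fiber-optic cable (~0.67c)": 0.67,
}

for name, fraction in media.items():
    millis = GEODESIC_KM / (C_VACUUM_KM_PER_S * fraction) * 1e3
    print(f"{name}: {millis:.2f} ms one way")

# The nanosecond yardstick: how far light in a vacuum travels in 1 ns
print(f"light in 1 ns: {C_VACUUM_KM_PER_S * 1e3 * 1e-9 * 100:.0f} cm")
```

The roughly two extra milliseconds that fiber costs relative to a straight-line atmospheric link may sound trivial, but it is an eternity compared with the 8-nanosecond response times Eurex measures.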
Thus, high-frequency trading is not an abstract technological innovation, but something embodied in the world of material, distance, light, and microwaves. MacKenzie writes:
HFT programmers cannot afford to consider the computer an abstract machine, as possibly presented during their college education. It must be seen as an ensemble of metal, semiconductors, and plastic through which signals pass, and ensuring that they do so as quickly as possible is an all-pervasive concern. For example, the preferred programming language in HFT is C++, which allows “close-to-the-metal” programming, not having to operate through layers of abstraction as with other languages. Since around 2010, furthermore, a conventional computer system, even if programmed in C++, is in many markets not fast enough for HFT. Trading algorithms are directly programmed into the hardware of the silicon chips known as field-programmable gate arrays (FPGAs) … There have been repeated rumors of firms moving beyond FPGAs to fully bespoke integrated circuits …
One result is what MacKenzie calls “speed races.” In the HFT universe, algorithms are programmed to react very quickly to new information. But when the fastest algorithms place orders and react first, and prices change, then the slightly slower algorithms realize that they are reacting to “stale” information. They frantically try to cancel orders, while faster algorithms try to take advantage of their “stale” bids.
Either the bids for or offers of the underlying shares placed by market-making algorithms immediately become “stale,” as market participants describe them: if, for example, the price of the future has fallen, buying the shares at the preexisting bid price is likely to incur a loss. So market-making algorithms rush to cancel those stale bids as quickly as possible, while liquidity-taking algorithms race to execute against the stale bids before they are canceled. The difference between winning and losing those speed races, the Eurex data suggest, is now measured in billionths of a second.
These kinds of “speed races” are happening every minute. Another physical manifestation of this technology involves the towers and locations for transmitting signals from Chicago-based markets to New Jersey-based high-frequency traders.
The need for ultrafast speed makes very specific physical locations exceptionally valuable, and those who own or control them can therefore exact rent. The fiber-optic cables or wireless links that transmit data from one financial trading computer data center to another have to follow as closely as possible the geodesic, the shortest path on the surface of the earth between the two data centers. In 2010, computer scientist Alex Pilosov led the building of the first microwave link for HFT between Chicago, where futures are traded, and northern New Jersey, the site of the data centers where U.S. shares are traded. Pilosov kept a low profile in this work, to avoid alerting potential competitors, but around a year later the owners of the attractively located microwave towers where he had leased space told him that others were also trying to place antennae on those towers. He says, “I was like, ‘Well, I’ll tell you what’s going on but you have to promise me that you have to charge them three times what you’re charging me. And I promise you that they will pay.’ And that’s what happened. That happened.” Similarly, Mike Persico, who built both millimeter-wave and atmospheric-laser links at the New Jersey share-trading data centers, reports that the owners of a tall building close to the relevant geodesics where this equipment could be placed suddenly possessed a very valuable resource. “Sometimes,” says Persico, “these landlords end up with the equivalent of a Willy Wonka golden ticket, because when they purchase[d] these properties, this was the furthest thing from their mind, and all of a sudden . . . it becomes very lucrative.”
Even for an economist, it’s possible to doubt whether the resources invested in ultrafast trading are improving the economy for the average worker or consumer. But I’ll also add that the development of new technologies often takes circuitous routes through different applications, and I wouldn’t be surprised to find that ultrafast communication, over time and as the price falls, turns out to have uses as yet undreamed of.
Brand-name drugs in the United States cost more than in other countries. The primary reason is that the US has a sort-of-constrained but pretty free market in setting drug prices, while in other countries it is common for the government or national health service to give pharmaceutical companies what amounts to a take-it-or-leave-it offer: charge a lower price, or the drug won’t be prescribed in this country.
U.S. prices for brand-name originator drugs were 422 percent of prices in comparison countries, while U.S. unbranded generics, which we found account for 90 percent of U.S. prescription volume, were on average cheaper at 67 percent of prices in comparison countries, where on average only 41 percent of prescription volume is for unbranded generics. U.S. prices for brand-name drugs remained 308 percent of prices in other countries even after adjustments to account for rebates paid by drug companies to U.S. payers and their pharmacy benefit managers.
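As a side calculation, the gap between the 422 percent and 308 percent figures implies roughly how large those rebates are. A minimal sketch, using the report’s ratios and an illustrative $100 comparison-country price:

```python
# Implied average rebate on US brand-name drugs, inferred from the
# two reported ratios: 422% of comparison prices at list, 308% net.
comparison_price = 100.0                 # illustrative normalization
us_list_price = 4.22 * comparison_price
us_net_price = 3.08 * comparison_price

implied_rebate_share = 1 - us_net_price / us_list_price
print(f"implied average rebate: {implied_rebate_share:.0%} of US list prices")
# -> roughly 27% of the US list price flows back as rebates
```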
Here’s a figure summarizing the results by country. The numbers show US drug prices as a percentage of prices in the other country: overall, for brand-name drugs, and for generics.
For US health markets, there’s room for exasperation here. It’s a good thing to have incentives for developing new and better drugs. The health benefits can be extraordinary. But we live in a world where the US market is providing the incentives for development of new drugs, while governments around the world are, if not quite “free”-riding on those drug-development incentives, riding at a much reduced rate. This doesn’t seem right or fair (any more than having many countries rely on US military spending for their security needs is right or fair). But the solution of having the US government control prices of brand-name drugs, which would greatly reduce the incentive to develop new and improved drugs, is worse than unfair–it would be outright harmful.
Also, it’s worth remembering that the 90% of actual US prescriptions filled with generic drugs represent one area where US consumers are paying less for health care.
When most people think of “experiments,” they think of test tubes and telescopes, of Petri dishes and Bunsen burners. But the physical apparatus is not central to what an “experiment” means. Instead, what matters is the ability to specify different conditions–and then to observe how the differences in the underlying conditions alter the outcomes. When “experiments” are understood in this broader way, their range of application expands.
For example, back in 1881 when Louis Pasteur tested his vaccine for sheep anthrax, he gave the vaccine to half of a flock of sheep, exposed the entire flock to anthrax, and showed that those with the vaccine survived. More recently, the “Green Revolution” in agricultural technology was essentially a set of experiments, systematically breeding plant varieties and then looking at the outcomes in terms of yield, water use, pest resistance, and the like.
This understanding of “experiment” can be applied in economics, as well. John A. List explains in “Field Experiments: Here Today Gone Tomorrow?” (American Economist, published online August 6, 2024). By “field experiments,” List is seeking to differentiate his topic from “lab experiments,” which for economists refers to experiments carried out in a classroom context, often with students as the subjects, and to focus instead on experiments that involve people in the “field”–that is, in the context of their actual economic activities, including work, selling and buying, charitable giving, and the like. As List points out, these kinds of economic experiments have been going on for decades, with government agencies among the early practitioners.
In Europe, the early social experiments in the late 1960s included electricity pricing schemes in Great Britain. In the US, social experiments can be traced to Heather Ross, an MIT economics doctoral candidate working at the Brookings Institution. The first wave of such experiments in the United States began in earnest in the late 1960s and included government agencies’ attempts to evaluate programs by deliberate variations in agency policies. Such large-scale social experiments included employment programs, electricity pricing, job training programs, and housing allowances. While this early wave of social experiments tended to focus on testing new programs, since the early 1980s major social experiments have examined various reforms that test incremental changes to existing programs. These experiments have had an important influence on policy, as they were recognized as contributing to the Family Support Act of 1988, which overhauled the AFDC program.
Again, the key to an “experimental” approach is to have control over the different conditions, and that control is more clear-cut in a test tube than in, say, people being charged for electricity in different ways. But as the advocates of field experiments point out, the approach doesn’t require that individual people be identical, nor that social interactions be like chemical reactions. If people are randomly divided into groups of sufficient size, then the groups will be broadly quite similar in underlying characteristics. This assumption of randomness needs to be questioned and checked, of course, and it turns out that some of the early experiments were not always truly random.
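For intuition, here’s a minimal simulation of why random assignment works (my own sketch, not an example from List’s paper): split a heterogeneous population at random, and the two groups’ average background characteristics converge as group size grows.

```python
import random

# Sketch: random assignment balances a background characteristic
# (here, a skewed "baseline income") across two groups as n grows.
random.seed(0)

for n in (20, 200, 2_000, 20_000):
    incomes = [random.lognormvariate(10, 0.8) for _ in range(n)]
    random.shuffle(incomes)                     # the random assignment step
    treatment, control = incomes[: n // 2], incomes[n // 2:]
    gap = abs(sum(treatment) / len(treatment) - sum(control) / len(control))
    overall_mean = sum(incomes) / n
    print(f"n = {n:>6}: treatment/control mean gap = {gap / overall_mean:.1%}")
```

With 20 subjects the two groups can differ noticeably by sheer luck; with 20,000 the gap shrinks to a fraction of a percent, which is why sample size matters so much for these designs.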
In addition, there is a question of “scalability”: whether findings from a field experiment can be scaled up to a real-world program. As List has pointed out in earlier work, there is often a “voltage effect,” where a study finds an effect that doesn’t generalize to a broader population. A common issue here is that the details of the experiment may be hard to replicate at scale. As an example, List discusses an “A/B experiment” in which certain children got an intervention to prepare them for kindergarten, and others did not. He writes:
In the A/B experimental test of an early childhood program summarized … the program is found to triple Kindergarten Readiness: from 17% to 51%! One might view this result as extraordinary, and immediately want to scale the program. To understand why that choice is not prudent, consider what exactly we have learned from this research. If it is a typical social science experiment, then it has likely been conducted as an efficacy test: the “best-case” test of the program is arm B versus the control, arm A. To understand why more information is necessary, we must consider the incentives that the researchers faced. Those incentives are set up to create a petri dish that provides results that gives the intervention its “best shot,” or likewise the largest treatment effects. In this manner, we are answering the wrong question if we are attempting to provide policy advice. We are asking: can this idea work in the petri dish under the best-case situation rather than will this idea work at scale? This is the wrong question. We must not only do the efficacy test but also relevant tests of scale within the original discovery process. The economics of many situations demand such an approach.
List suggests that experiments shouldn’t be designed with the “petri dish” mindset, but instead can be designed to think in advance about “what constraints will the idea face at scale, what key factors can impact scaling.” No one said that experimental design was easy. List suggests a set of criteria for better and worse design–with the final column referring to studies that will inevitably bomb.
List suggests that when economists of the future approach a question like the productivity gains of a pin factory (Adam Smith’s famous example), they will do so with an experimental mindset, systematically varying the conditions to understand the outcomes. He writes:
In the past few decades, there is perhaps no empirical innovation that has changed economics more than field experiments. Via controlling the assignment mechanism, the experimenter sheds light on both the “effects of causes” and the “causes of effects”. Yet, the scientific insights do not end there. With some imagination and theoretical guidance, the experimenter can generate data that permits an informed prediction of whether the causal impacts of treatments implemented in one environment transfer to other environments, be them spatially, temporally, or scale differentiated. When these dual goals are achieved, the power of the experimental approach is unleashed.
The nature of globalization is clearly shifting, but it’s not clear to me that the overall level is diminishing. It does seem true that the level of goods moving across international borders is rising much more slowly–or even not at all. However, the level of services being performed across international borders is rising substantially, and movements of information, data, and people are on the rise as well. But as countries around the world contemplate the possibility of disengaging from the global economy, it’s useful to ask what the consequences of such a decision would be.
In particular, does greater openness to trade improve economic growth? What evidence could you offer to make the point, one way or another? Douglas A. Irwin describes the kind of studies that have been used to address this question in the last two decades or so in “Does Trade Reform Promote Economic Growth? A Review of Recent Evidence” (World Bank Research Observer, published online April 25, 2024).
As late as the 1990s, Irwin explains, the most prominent evidence for a connection between open trade and growth was based on a method where the researcher chooses an outcome variable, like per capita GDP, and then a way of measuring the openness of the economy (explicit barriers to trade like tariffs, or more subtle barriers like a government-controlled exchange rate). The researcher then calculates whether there is a correlation between the barriers to trade and per capita income. Of course, you can also add in some other possible explanatory variables, and control for them statistically.
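In code, that older approach amounts to a cross-country regression along these lines (a sketch with simulated data; the variable names are illustrative stand-ins, not Irwin’s):

```python
import numpy as np

# Sketch of a 1990s-style cross-country growth regression:
# regress (log) per capita income on an openness measure plus controls.
rng = np.random.default_rng(0)
n = 120                                  # number of countries

tariff_rate = rng.uniform(0.0, 0.4, n)   # stand-in for "barriers to trade"
schooling = rng.normal(8.0, 2.0, n)      # a control variable
log_gdp_pc = 9.0 - 2.0 * tariff_rate + 0.1 * schooling + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), tariff_rate, schooling])
beta, *_ = np.linalg.lstsq(X, log_gdp_pc, rcond=None)
print(f"estimated tariff coefficient: {beta[1]:.2f}")
# The catch: tariff levels are not randomly assigned across countries,
# so this coefficient mixes trade policy with everything else that
# differs between open and closed economies.
```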
This approach has some obvious problems. Correlation isn’t causation, as every econometrics course teaches. The countries that decided to open up their trade probably differed in systematic ways from the countries that didn’t do so–in particular, countries that opened up their trade probably perceived fairly near-term benefits from doing so, and also carried out a group of complementary reforms, while countries that did not open up their trade did not perceive such benefits and didn’t carry out potentially complementary reforms. When researchers looked at alternative ways of measuring openness to trade, and alternative ways of taking other possible factors into account, they often found that the results changed, as well. Also, at some baseline level, if you think about all the factors that affect growth–human capital, physical capital, technology, a rule of law, culture, infrastructure, natural resources, geographic location, and others–it’s not obvious that having government change the trade rules, by itself, should have much of an effect.
Recognizing this problem, Irwin describes how researchers tried some other approaches: in particular, instead of comparing across countries, these studies often look at growth within a particular country. For example, a “synthetic control” approach looks at a country that enacted a trade reform. It then seeks a group of countries that had very similar growth patterns in the years leading up to the trade reform (and are often geographically similar to the original country), but did not carry out a trade reform. The hypothesis here is that since these countries had evolved similarly in the past, then in the absence of a trade reform, they should have continued to evolve similarly in the future. If there’s a break in the similarity at the time of the trade reform, that means something. Other studies look at firm-level data in a country: do the industries affected by greater openness to trade (and in particular, from additional foreign competition and an improved ability to buy inputs to production) do better than other industries less affected? Finally, there are detailed country-by-country case studies of trade reform.
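For readers who like to see the mechanics, the synthetic control logic can be sketched in a few lines (simulated data and a bare-bones weighting step; real applications match on many more pre-treatment characteristics):

```python
import numpy as np
from scipy.optimize import minimize

# Bare-bones synthetic control: choose nonnegative donor-country weights
# summing to one that reproduce the reform country's pre-reform path.
rng = np.random.default_rng(1)
T_pre, T_post, n_donors = 10, 5, 6

# Simulated (log) GDP paths for donors, and a treated country built from
# them plus a post-reform boost of 3 percentage points per year.
donors = rng.normal(0.02, 0.01, (T_pre + T_post, n_donors)).cumsum(axis=0)
treated = donors @ np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])
treated[T_pre:] += 0.03 * np.arange(1, T_post + 1)

def pre_reform_gap(w):
    return np.sum((treated[:T_pre] - donors[:T_pre] @ w) ** 2)

result = minimize(
    pre_reform_gap,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
)
synthetic = donors @ result.x
print("post-reform gap, treated minus synthetic:",
      np.round(treated[T_pre:] - synthetic[T_pre:], 3))
# The gap should recover the built-in reform effect of ~0.03, 0.06, ...
```

The design choice that does the work here is the constraint that weights be nonnegative and sum to one: the synthetic country is an interpolation of real donor countries, never an extrapolation beyond them.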
Economists tend to mistrust any single measure of a phenomenon, but if a variety of research methods using a variety of data sources find similar results, then one’s confidence in the finding is higher. Looking at the combined results of these methods, Irwin summarizes:
The findings from recent research, however, have been remarkably consistent. For developing countries that are behind the technological frontier and have significant import restrictions, there appears to be a measurable economic payoff from more liberal trade policies. … [A] variety of studies using different measures of policy have found that economic growth is roughly 1.0–1.5 percentage points higher for countries that undertake trade reforms. Several studies suggest that this gain cumulated to about 10–20 percent higher income after a decade. The effect is heterogeneous across countries, because countries differ in the extent of their reforms and the context in which reform took place. Understanding that heterogeneity, which is sometimes attributed to labor market rigidities, financial frictions, or service-sector inputs, merits further research. At a microeconomic level, the gains in industry productivity from reducing tariffs on imported intermediate goods are even more sharply identified. They show up time and again in country after country.
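The arithmetic in that passage hangs together: an extra 1.0–1.5 percentage points of annual growth, compounded for a decade, does land in the 10–20 percent range.

```python
# Compounding check on Irwin's numbers: 1.0-1.5 percentage points of
# extra annual growth, sustained for a decade.
for extra in (0.010, 0.015):
    print(f"{extra:.1%}/year for 10 years -> "
          f"{(1 + extra) ** 10 - 1:.1%} higher income")
# -> about 10.5% and 16.1%, consistent with the 10-20 percent range cited
```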
Again, these results about openness to trade are focused on “developing countries that are behind the technological frontier and have significant import restrictions,” and thus less relevant to a country like the United States, which is close to the technological frontier in many areas, has only moderate import restrictions, and has a huge internal market besides. But if you are someone who worries about the power of large US corporations, and a resulting lack of competition, then pressuring these firms to face competitors from around the world whenever possible seems like a plausible response.
Student loan debt took off around the year 2000. Adam Looney and Constantine Yannelis tell the story in “What Went Wrong with Federal Student Loans?” (Journal of Economic Perspectives, Summer 2024, pp. 209-236). They point out: “Between 2000 and 2020, the total number of Americans owing federal student loans more than doubled from 21 million to 45 million, and the total amount they owed more than quadrupled from $387 billion to $1.8 trillion …”
Looney and Yannelis point out that student loan boom-and-bust cycles have happened several times before in the US, as rules for student loans are loosened and tightened. For example, there was a wave of student lending, followed by a “crisis” and a tightening of the rules, in the 1950s and again in the 1980s. The stated goal of looser rules for student lending is to expand educational opportunities, but a common pattern is a dramatic expansion of lending to higher-risk students, who then use the loans to attend lower-quality schools–where the chance of graduating with a useful degree can be low.
Consider two figures. The top figure shows the result of dividing higher education institutions into five groups, according to how likely students from that institution are to repay their loans. The vertical axis shows enrollment, with enrollment at each of the five groups set equal to 1 in the year 2000, so the lines trace changes over time; the different colors show institutions according to their student loan repayment rates. As you can see, total enrollment at the institutions where students are most likely to repay has grown over time (the line with the red x’s), but slowly. Total enrollment at the institutions where students are least likely to repay has risen more quickly–in particular, it zooms higher after about 2000 and peaks around 2010.
The bottom panel shows the change in enrollment for first-generation college students since 2000, measured in millions of students. First-generation students can be viewed as a category that, on average, comes from families with lower financial resources and perhaps also less of a social support system for attending higher education. The slim red area at the bottom shows the very small rise in enrollments of first-generation college students at institutions with the highest student loan repayment rates. The bulk of the rise in enrollments of first-generation college students was at institutions with the lowest rates of student loan repayment.
Looney and Yannelis summarize this pattern:
Indeed, in 2011, the average enrollment-weighted degree completion rate at institutions in the lowest-repayment rate quintile illustrated in Figure 3 was 23 percent, the average post-enrollment earnings were $27,760, and the average student loan default rate was 20 percent. In contrast, at institutions in the highest repayment rate quintile, 73 percent graduated, their average earnings was $48,375, and the default rate was 3 percent. What kinds of schools are these? In the lowest repayment quintile, the largest institutions are the University of Phoenix (at the time, the largest online for-profit institution); Kaplan University and Ashford University (which previously were large online for-profit institutions, but have since been acquired by Purdue University and University of Arizona, respectively, and are now operated as the online offerings of those public universities), and two large community college systems operating around Houston, Texas—the Houston Community College System and Lone Star College. The largest institutions in the highest repayment rate quintile are large public institutions: Texas A&M, Pennsylvania State University, University of Texas at Austin, Michigan State University, and University of Minnesota Twin Cities. Note that while these institutions are prestigious, they are also not highly selective, with acceptance rates between 31 percent and 75 percent. Across a range of student loan, educational, and labor-market outcomes, the pattern is the same—institutions offering the highest-quality educations and with the best outcomes expanded enrollment the least, whereas the lowest-performing institutions expanded the most.
The enrollment patterns shown in the figure imply that the share of students taking out student loans rose until about 2010 and then declined, and that average borrowing per student followed the same pattern; Looney and Yannelis present evidence that this is the case. They describe in some detail the loosening and tightening of student lending rules over time.
With that fact base in mind, the public policy questions here become sharper. Many people will have a reflexive positive reaction to the idea of expanding student loans. But if the expansion of student loans goes to students from backgrounds that are more disadvantaged–in terms of academic preparation and family finances–and those students then attend lower-quality schools where the degree-completion rate is 23%, then a policy of expanding student loans might still make sense on balance, but it will look considerably less attractive.
From 2000-2010 in particular, my sense is that a substantial number of disadvantaged and at-risk students (and remember, while we’re talking about adults here, they are often very young adults) got poor advice and made a poor decision. They were encouraged (by teachers, families, counselors, politicians, institutions, society) to borrow heavily to attend institutions that, on average, were not going to pay off for them. The goal of student loans, of course, is not just to get students to begin a first year of college, but to have them complete a degree. Perhaps there needs to be an alternative and complementary social goal: getting those institutions that tend to have higher graduation rates and higher salaries after graduation to expand their enrollments.
I wrote a decade ago about the Double Irish Dutch Sandwich, a strategy for corporations to evade taxes that was widespread and large-scale enough to come to the attention of the International Monetary Fund. But due to various changes in national and international tax agreements, the strategy seems to have faded substantially. Ana Maria Santacreu and Samuel Moore of the St. Louis Fed provide some background in “Unpacking Discrepancies in American and Irish Royalty Reporting” (August 08, 2024).
For those who don’t keep up to speed on the details of international tax evasion schemes, Santacreu and Moore describe how the Double Irish Dutch Sandwich works:
The Double Irish with a Dutch Sandwich tax scheme, as illustrated in the third figure, involved a complex arrangement between a U.S. parent company (USP) and three foreign subsidiaries. The first Irish subsidiary (I1) was incorporated in Ireland but managed from Bermuda, allowing it to avoid both Irish and U.S. taxes. The second Irish subsidiary (I2) was incorporated and managed in Ireland. The purpose of I2 is to control foreign distribution and collect income. A Dutch subsidiary (N) acted as an intermediary between I2 and I1 to avoid Irish taxes.
The scheme worked by exploiting specific provisions in Irish and U.S. tax laws. A USP would transfer intellectual property ownership to I1. Then, I2 would sublicense the intellectual property from I1 and pay royalties. These royalties would flow from I2 to N, and then from N to I1, taking advantage of European Union tax regulations. This structure allowed profits to be shifted to tax havens like Bermuda, effectively minimizing tax liabilities for the entire corporate structure.
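A stylized numerical example may help fix ideas. All of the numbers below are hypothetical, and the real arrangements were far more intricate:

```python
# Stylized Double Irish with a Dutch Sandwich, hypothetical numbers.
# I2 (Irish-incorporated, Irish-managed) collects foreign sales revenue
# and strips most of it out as royalties, routed through N (the Dutch
# conduit, which avoids Irish withholding tax) to I1 (Irish-incorporated
# but Bermuda-managed, so taxed at Bermuda's ~0% rate).
foreign_revenue = 1000.0
royalty_rate = 0.90           # share of revenue paid out as royalties
irish_tax_rate = 0.125
bermuda_tax_rate = 0.0

royalties = foreign_revenue * royalty_rate      # I2 -> N -> I1
irish_profit = foreign_revenue - royalties      # what stays taxable in Ireland
total_tax = irish_profit * irish_tax_rate + royalties * bermuda_tax_rate

print(f"total tax: {total_tax:.1f} on revenue of {foreign_revenue:.0f}")
print(f"effective rate: {total_tax / foreign_revenue:.1%}")
# Far below Ireland's 12.5% headline rate, let alone the US rate.
```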
However, a combination of Irish tax reforms in 2015 and changes in the US Tax Cuts and Jobs Act of 2017 made this strategy ineffective: “Consequently, Irish companies began paying royalties directly to American parent companies instead of routing them through tax havens.”
The shifts in royalty payments tell the story. This figure shows royalty payments by Irish companies to the US, which doubled from 2019 to 2021.
Conversely, royalty payments from Ireland and from the Netherlands to Bermuda, well-known for its near-zero corporate taxes, went way down–showing a decline in corporate profits being routed through Bermuda.
I’m sure that international tax lawyers around the world are devising new tax evasion strategies even as I write these words. But it’s worth remembering that there are two large costs at play here. The obvious loss is to government revenue, but the more subtle and still very real loss is the diversion of high-powered talent from what could have been gains in efficiency and productivity to focus instead on corporate reorganizations and tax evasion games.
Total enrollment in degree-granting US institutions of higher education took a big jump in the first decade of the 21st century, but has levelled out or even declined a bit since then. According to the National Center for Education Statistics, total enrollment went from 15.3 million in 2000 to 21.0 million in 2010–an increase of more than one-third in just 10 years. But by 2020, total enrollment was down to 19.0 million. Enrollment fell a little more in the pandemic, and has since bounced back a little bit. But the NCES projections are that total enrollment will reach 20.1 million by 2030–that is, enrollments in US higher education are projected to be lower in 2030 than in 2010.
(In passing, I’ll add that about 60% of these higher education enrollments are full-time students, with the other 40% being part-time. I’ll also add that the sharp increase in enrollments from 2000-2010 is part of what drove the rise in student loan defaults–although that’s a different story.)
Here, I want to focus in particular on enrollments of foreign students in US institutions of higher education. There are sometimes comments (hopes?) that a larger number of international students could help to offset the decline in total enrollments. But while the source of international students is shifting, there doesn’t seem to be reason to believe that a new and additional surge of international students will be filling the seats and dormitories of US colleges and universities.
Again, based on data from the NCES, the total number of foreign students roughly doubled from the 2000-2001 to the 2015-16 academic year, rising from 547,000 to 1,043,000. Since then, the total dipped during the pandemic, but by 2022-23 it was basically back to the 2015-16 level, at 1,057,000.
The source countries that send college and university students to the US are shifting. For example, in 2015-16, 3.4% of the international students came from countries in Africa. That’s now up to 4.7%, with most of the increase accounted for by students from Nigeria, in particular.
But in the big picture, the action is mostly about countries in Asia, which accounted for about 71% of all foreign students in the US in 2022-23: for comparison, total foreign students from Europe and from Latin America are both less than 10% of total foreign students in the US, and the Middle East/North Africa region is less than 5% of the US total (with students from Saudi Arabia accounting for about one-third of the students from this region).
Within the broader category of foreign students from countries in Asia, China and India dominate. In 2022-23, students from China were 27.4% of all foreign students at US colleges and universities, while students from India were 25.4%. But back in 2015-16, a full 31.5% of all foreign students were from China, while only 15.9% were from India. Thus, the share of foreign students from China has been falling, while the share from India has been rising.
In addition, looking back a little further to 2000-01, the shares of foreign students from Japan, South Korea, and Taiwan were 8.5%, 8.3%, and 5.2%, respectively. But by 2022-23, those shares had fallen to 1.5%, 4.1%, and 2.1%, respectively.
In the international market for higher education, universities and colleges and technology institutes have been springing up around the world in the last few decades. If the US higher education sector is to maintain its current numbers of foreign students, much less to increase those numbers substantially, it is going to need to draw from where the people are–which means China and India, and increasingly, countries of Africa.
Over time, a rising US standard of living is driven by productivity growth–which is why the productivity slowdown of the last two decades is so troubling. Michael Peters succinctly describes the problem in “America Must Rediscover Its Dynamism” (Finance & Development, September 2024). He writes:
The US economy has a multitrillion-dollar problem. It’s the dramatic slowdown in productivity growth over the past couple of decades. Between 1947 and 2005, labor productivity in the US grew at an average annual rate of 2.3 percent. But after 2005, the rate fell to 1.3 percent. Such seemingly small differences have astonishingly large consequences: if economic output for each hour worked had kept expanding at 2.3 percent between 2005 and 2018, the American economy would have produced $11 trillion more in goods and services than it did, according to the US Bureau of Labor Statistics. This is part of a broad-based trend across advanced economies. Productivity growth in Europe has been even slower than in the US. As a consequence, Europe has fallen significantly behind the US in terms of GDP per capita. Productivity is a key driver of economic expansion.
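The “trillions lost” arithmetic is straightforward to reproduce in rough form (the $13 trillion base below is my illustrative stand-in for 2005 US output, not the BLS’s actual output-per-hour calculation):

```python
# Rough version of the "trillions lost" arithmetic: output growing at
# 2.3%/year vs 1.3%/year from a common 2005 base, with the annual
# shortfalls summed through 2018. The $13 trillion base is illustrative.
base_output = 13.0    # trillions of dollars, approximate 2005 US GDP
shortfall = sum(
    base_output * (1.023 ** t - 1.013 ** t)
    for t in range(1, 14)     # the 13 years from 2006 through 2018
)
print(f"cumulative shortfall: ${shortfall:.1f} trillion")
# Same order of magnitude as the $11 trillion figure the BLS reaches
# with the actual productivity and hours data.
```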
What are the main drivers of this problem? Peters argues that advances in information technology are linked to economies of scale: that is, big companies are best-positioned to take advantage of new information technology, which makes dynamic entry from small- and medium-sized firms difficult. As a result, the main productivity gains from information technology are accruing primarily to big firms, rather than diffusing through the economy. Peters writes:
In discussing the productivity dynamics of the 1980s and 1990s, the advent of IT is the elephant in the room. Could the availability of such technologies have caused the decline in dynamism and the peculiar boom-bust shape of productivity growth? Two recent papers argue that the answer is yes and that economies of scale play an important role. French economist Philippe Aghion and his research collaborators (2023) posit that advanced IT makes it easier for businesses to scale their operations across multiple product markets. The London School of Economics’ Maarten De Ridder (2024) argues that IT allows enterprises to reduce their marginal costs of production at the expense of higher fixed costs.
What these explanations have in common is that the adoption of such technologies is particularly valuable for productive companies. This implies that such businesses took advantage of IT developments in the late 1980s and early 1990s, and the economy experienced an initial productivity boom. More surprisingly, the researchers argue that the existence of these megabusinesses can have dynamic costs in the long run. If new businesses (such as a new IT start-up) expect that they will have a hard time competing with existing enterprises that produce at scale (such as Amazon, Microsoft, or Google), their incentives to enter the market shrink. As a result, overall growth and creative destruction can decline, and incumbent companies benefit by charging higher markups. …
A separate strand of research suggests that the process of knowledge diffusion among businesses has changed in fundamental ways. In particular, the argument goes, in recent decades technologically lagging companies had a harder time adopting technologies of competitors at the productivity frontier. This change could be technological in nature: companies such as Google or Apple may be so technologically advanced that adoption simply becomes impossible for smaller rivals. At the same time, it could also have legal origins, as large businesses increasingly engage in defensive patenting to protect their technological lead by creating a dense, overlapping thicket of patents. Consistent with this hypothesis, Ufuk Akcigit and Sina Ates (2023) document a substantial rise in the concentration of patenting among superstar firms and estimate that changes in technological adoption can explain why dynamism has declined, why incumbent enterprises enjoy noncompetitive rents, and why productivity growth has fallen.
A dynamic in which the advances in productivity are being captured by industry leaders, rather than diffusing across the economy, has been noted on this blog a number of times in the past (for example, here and here). Technology diffusion is hard (as an historical example, here is an earlier post on the diffusion of the fork as a dining utensil). The report just released by the European Commission, written by Mario Draghi, about “The future of European competitiveness,” emphasizes how productivity growth has lagged for small- and medium-sized firms in Europe.
There are other potential culprits for the productivity slowdown. A related idea is that economies may have become slower in reallocating resources away from lower-productivity firms and sectors toward higher-productivity areas (see here and here). As Peters also notes, a variety of growth models suggest that lower population growth may also reduce economic dynamism and productivity growth. A recent study from the McKinsey Global Institute argues that the lower levels of productivity growth can be largely traced to lower levels of investment in tangible capital.
Are there any encouraging signs? In the same September 2024 issue of F&D, Nicholas Bloom argues that “Working from Home is Powering Productivity.” He lists various reasons why productivity can be higher for those who work at home at least a couple of days each week:
1) avoiding a commute to and from work means that the day’s work happens in less time;
2) employment of those with a disability is up sharply since the pandemic, especially in work-from-home occupations;
3) employment of prime-age women is also up since the pandemic, perhaps in part because occasional work-from-home makes it easier for a family to manage child-care responsibilities;
4) work-from-home involves more intensive use of residential space–now serving partly as work space–and less intensive use of commercial office space, which can then be reallocated to other uses;
5) traffic moves a little faster with the rise of work-from-home, and even a few minutes saved on the average trip adds up to substantial time savings when summed across all commuters, and reduces pollution as well;
6) there is a positive feedback loop between more people working from home and innovations in the software tools and business practices that reduce the costs and increase the benefits of work-from-home.
I don’t expect work-from-home to solve the productivity woes for the US and Europe. Indeed, a number of the gains that Bloom cites, like reduced commuting, are both very real and not captured very well in economic statistics about output and hours worked. But there is at least some preliminary evidence that the economic dislocations of the pandemic have also led to an upsurge in new US firms, at least some of which are surely taking advantage of work-from-home and information technology gains.
In 1950, the General Electric Research Laboratory in Schenectady, New York, assembled a consortium of chemists, physicists, and engineers to form Project Superpressure, an effort to synthesize diamonds in the lab. … To support its research, General Electric commissioned a hydraulic press that cost $125,000, stood two storeys high, and was capable of pressing 1,000 tons. Four years of intense experimentation followed, during which Project Superpressure exhausted all of its original research budget and two additional funding allocations. On the third occasion that the project manager asked for more money, General Electric’s research managers, having seen no tangible results and skeptical of future success, almost unanimously voted to discontinue their support. Guy Suits, General Electric’s director of research, overruled them and approved the funds. …
On 16 December 1954 chemist and Project Superpressure team member Howard Tracy Hall placed two diamond seed crystals into a graphite tube with iron sulfide, capped with tantalum disks. The tube’s graphite would act both as a carbon source and, when electric current was applied to the tantalum disks, a resistance heater. He then placed this cylindrical device into a pressure chamber of his own design. Hall’s pressure chamber, now known as a belt press, consisted of two opposing tapered anvils that compressed the reaction cell from above and below, with the sides supported by prestressed steel bands.
Hall describes how when he envisioned this device, his colleagues ‘felt negatively about it’. His proposal to build a prototype, which would have cost General Electric less than $1,000, was rejected and he was refused time in the machine shop to build it. ‘I fretted about this for a time’, he wrote, ‘and then decided on a sub-rosa solution. Friends in the machine shop agreed to build the Belt, unofficially, on slack time. This took several months. Ordinarily, it would have taken only a week.’
A practicing Mormon, whose church and large family left him little time to socialize with his colleagues, Hall attributed the refusal to build his design and other slights to religious prejudice. When his prototype belt apparatus proved capable of attaining high pressures and high temperatures, Hall requested that its critical components be reconstructed in carboloy (cobalt-cemented tungsten carbide). Once again, his request was refused and it was not until his former supervisor intervened that he obtained permission to buy the carbide components.
To further compound Hall’s sense of injustice, demand for Project Superpressure’s thousand-ton press was so high that Hall’s improved pressure chamber was ‘relegated’ (in his words) to an ‘ancient’ press, dating from the turn of the twentieth century, that was only capable of pressing 400 tons and still ran on water pressure. Hall would later describe how this press ‘leaked so badly that rubber footwear, mop, and bucket were standard accessory equipment’.
Using this antique press, Hall managed to compress his pressure chamber to ten gigapascals (about 100,000 times the pressure of the atmosphere) and to heat the reaction chamber to 1,600°C. The experiment ran for 38 minutes. Hall had created diamonds:
I broke open a sample cell after removing it from the Belt. It cleaved near a tantalum disk used to bring in current for resistance heating. My hands began to tremble; my heart beat rapidly; my knees weakened and no longer gave support. My eyes had caught the flashing light from dozens of tiny triangular faces of octahedral crystals that were stuck to the tantalum and I knew that diamonds had finally been made by man.
General Electric reproduced Hall’s results 20 times over the next two weeks and on 15 February 1955 announced to the rest of the world that it had created the first diamonds in the lab. In its press release, it implied that the diamonds had been created in their new thousand-ton press. Hall’s reward was a modest salary increase – from $10,000 to $11,000 per year – and a ten-dollar savings bond. He resigned from General Electric to become a full professor at Brigham Young University. …
Hall, meanwhile, was forbidden from disclosing details about the belt press he invented, or using it to pursue further research into high-pressure chemistry, because of a secrecy order imposed by the United States Department of Commerce. During the Second World War, the supply of diamonds had been a source of anxiety for both the Allied and Axis powers. The United States, which did not have a domestic supply of diamonds, was dependent on the De Beers cartel for the diamonds necessary for its industrial production, and the business model of De Beers was to drive up prices by artificially restricting the world’s supply.