Why Aren’t More Patents Leading to More Productivity?

The number of US patents granted has been rising rapidly. However, US productivity has not been rising. Why aren’t more patents leading to more productivity?

Aakash Kalyani of the St. Louis Federal Reserve discusses research that suggests an intriguing possibility in “The Innovation Puzzle: Patents and Productivity Growth” (Economic Synopses: Federal Reserve Bank of St. Louis, March 29, 2024). The underlying research paper, “The Creativity Decline: Evidence from US Patents,” is available here.

There’s a new wave of research in economics that finds ways to use text as data, and Kalyani’s research offers an interesting example. The idea is that some patents are more creative, while other patents have more of a me-too flavor. Kalyani suggests that one way to distinguish them is that more creative patents will be more likely to use new terms. As the research paper described it:

A patent describes in detail the working or features of an invention, and to do so uses a range of technical terminology. To construct my measure, I decompose the text of each patent (beginning in 1930) into one-word (unigrams), two-word (bigrams) and three-word (trigrams) combinations, and subsequently remove those that are commonly-used in everyday English language to obtain a list of technical terms. I then classify these technical terms into ones that were previously unused in the five years before the patent was filed. This process yields the share of new technical terms in a patent that is my measure of patent creativity. For the baseline version, I consider the measure with bigrams, and I show that my empirical results are unchanged for a measure that uses all three–unigrams, bigrams and trigrams.

An intuitive example is that “cloud computing” is first used in a patent application in 2007. Thus, patents using the term “cloud computing” in 2007 would be counted as “creative,” but patents after 2007 would not be counted as creative because they used “cloud computing”–they would only be counted as creative if they used some new term.
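For intuition about the mechanics, here is a minimal sketch of how a bigram-based creativity share could be computed on a toy corpus. The stop-word list, the function names, and the lookback shortcut are mine for illustration; Kalyani’s actual measure strips commonly used everyday-English terms and applies a five-year window over the full patent corpus.

```python
# Minimal sketch of a bigram-based "creativity" share, assuming a toy corpus.
# The real measure removes everyday-English terms and uses a five-year
# lookback window; both are approximated very crudely here.

COMMON_WORDS = {"the", "a", "an", "of", "and", "to", "for", "in", "is", "with"}

def bigrams(text):
    """Lowercase the text and return its two-word combinations,
    dropping any bigram that contains an everyday-English word."""
    words = [w.strip(".,;:()") for w in text.lower().split()]
    pairs = zip(words, words[1:])
    return {" ".join(p) for p in pairs if not (set(p) & COMMON_WORDS)}

def creativity_share(patent_text, prior_patent_texts):
    """Share of a patent's technical bigrams that never appeared in the
    prior patents (a stand-in for the five-year lookback window)."""
    terms = bigrams(patent_text)
    prior = set().union(*(bigrams(t) for t in prior_patent_texts)) if prior_patent_texts else set()
    if not terms:
        return 0.0
    return len(terms - prior) / len(terms)

# Toy example: "cloud computing" is new relative to the first prior patent,
# so the 2007-style patent scores higher than a later me-too patent would.
prior = ["a method for distributed data storage on networked servers"]
print(creativity_share("a system for cloud computing resource allocation", prior))
print(creativity_share("a system for cloud computing resource allocation",
                       prior + ["an improved cloud computing platform"]))
```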

Of course, one can think of various things that might be right or wrong with defining “creative” patents in this way, but if we just go with the distinction, what patterns do we see? Here’s a figure from Kalyani. Three points seem worth emphasizing here: 1) The total number of patents per capita gradually declines from the 1930s up through the 1980s, and productivity generally declines at the same time. 2) The number of patents starts rising quickly in the 1980s, but productivity does not rise at the same rate. 3) Up to the 1990s, the number of “creative” patents shown by the green dashed line follows a general trend similar to total patents; starting around 2000, creative patents drop off sharply.

One issue with text-based evidence is that one can imagine reasons why the use of language might change: for example, perhaps the US Patent Office became more likely to grant patents if the applications avoided new terms and used pre-existing language. But one would of course have to back up that alternative hypothesis! Kalyani looks at other patterns as well and finds that “creative patents” seem to matter. He writes:

Through examination of top scoring creative patents, I observe that creative terminology in these patents captures the description of new products, processes and features. I undertake a series of validation exercises to further bolster this observation. First, I show that top scoring creative patents tend to cite recent academic research rather than previously filed patents. Second, top scoring patents also score higher on ex-post measures of patent quality. These patents receive more citations and higher valuation. Third, I show that creative patents are costlier investments for a firm, and that a creative patent is associated with about 24.9% higher R&D expenditure than a derivative patent. These findings together suggest that creative patents are costly investments that tend to originate from recent academic research and generate higher ex-post value and follow-on innovation than derivative patents.

What factors might explain the co-existence of a decline in “creative” patents and a sharp rise in more derivative “me-too” patents? One factor may be that the age of inventors is rising.

I use patents matched to inventors to show that inventors are about three-times (3.2x) as likely to file a creative patent on entry compared to later on in their career. This number falls to 1.52x for the second patent, 1.25x for the third patent, and so on. In the aggregate, I find that percentage of patents filed by first time inventors have dropped from about 50% in 1980 to 27% in 2016. This drop in share of patents by new inventors reflects the overall demographic shifts in the US. The share of 20-35 year olds in the US workforce dropped from about 47.6% to 28.5% during the same time period.

Another reason for the sharp rise of derivative me-too patents may involve business strategies. At least to me, it seems possible that firms in some industries are using “patent thickets”–the name given to a group of patents on a very similar subject. It’s hard for other firms to enter a market if they need to worry about not just one patent, but many. In addition, if a firm keeps adding new patents to the thicket as older patents expire, this barrier to entry from new firms may not go away. In this scenario, patents have become less of a marker showing genuinely new technological improvements that boost productivity.

Follow R-Star?

For economists, r* refers to the “natural rate of interest” that emerges from economic theory. It’s the “Goldilocks” interest rate that is not too high and not too low: that is, the interest rate that would occur “naturally” in the economy when the economy is at potential output and inflation is stable. A “tight” or restrictive monetary policy would involve the central bank setting interest rates above r*; conversely, a “loose” or stimulative monetary policy would involve the central bank setting interest rates below r*.

It would obviously be useful to have clear estimates of r*. Do such estimates exist? Gianluca Benigno, Boris Hofmann, Galo Nuño, and Damiano Sandri investigate in “Quo vadis, r*? The natural rate of interest after the pandemic” (BIS Quarterly Review: Bank for International Settlements, March 2024, pp. 17-30). For those whose Latin is rusty or nonexistent, like me, a modern translation of “quo vadis” would be “where are you going,” while an older translation would be “whither goest thou.” The authors write:

[We are] assessing the natural rate of interest, commonly known as r*, in the post-pandemic era. The natural rate refers to the short-term real interest rate that would prevail in the absence of business cycle shocks, with output at potential, saving equating investment and stable inflation. Hence, the natural rate serves as a yardstick for where real policy interest rates are headed. It is also a benchmark for assessing the monetary policy stance “looking through” business cycle fluctuations. … Together with the long-run inflation rate, defined by the central bank inflation target, it pins down the long-run level of the nominal policy rate.

The challenge is that it’s not obvious how to estimate r*. After all, historical data tells us what interest rates were as the economy and monetary policy fluctuated, but for r*, you need an estimate of what the interest rate would have been if the economy had remained at potential GDP, with low unemployment and low inflation. It’s also quite possible that r* shifts over time, which makes estimating it even harder. Moreover, in a globalized capital market, r* will be affected by global factors, not just factors within the domestic economy. The BIS authors describe the main factors that would influence the natural rate of interest:

The natural rate is commonly thought to be determined by real forces that structurally affect the balance between actual and potential output, or equivalently between saving and investment. Specifically, factors that increase saving or decrease investment lower the natural rate. These include potential growth, demographic trends, inequality, shifts in savers’ and investors’ risk aversion and fiscal policy. Lower potential growth lowers investment by reducing the marginal return on capital and increases saving by lowering expected income. Longer life expectancy raises saving as households need to support a longer retirement. A lower dependency ratio – reflecting a higher share of working age people in the population – increases saving as those in the workforce typically save more than the young and the elderly. Higher inequality raises saving as richer households save a larger share of their income. Higher risk aversion induces higher saving, in particular in safe assets, and at the same time lowers investment. Finally, persistent fiscal deficits reduce aggregate saving. In a globalised world economy, with free capital flows, the same considerations apply but at the global level.

For example, a common set of beliefs about r* in the last decade or so is that there has been a “global savings glut”–and a higher supply of savings will tend to drive down natural interest rates. The global savings glut comes partly from very high saving in countries like China, partly from greater inequality of income and wealth because those with higher incomes and wealth tend to save more, and from other factors as well.

Economists seeking to estimate r* usually build a model of the economy. They set up the model so that it does a reasonably good job of following what happened in the actual economy. Then, when the economy is out of balance, the model can be used to project what the interest rate would be if the economy moves back into balance. (In a very broad sense, this is like looking at a single market that has been shocked by events–like the crop harvest problems in the cacao market that have driven up chocolate prices–and projecting the price to which cacao will return when the shock is over.)
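The estimates discussed below come from full models, but the underlying idea, filtering out cyclical noise to recover a slow-moving “balanced” rate, can be illustrated crudely. The sketch below, on made-up data, treats a long moving average of the ex-post real policy rate as a naive stand-in for r*; it is not any of the approaches the BIS authors actually use.

```python
import numpy as np
import pandas as pd

# Crude illustration only: a long moving average of the ex-post real policy
# rate as a naive stand-in for a slow-moving r*. The serious estimates
# (semi-structural, VAR, DSGE, term-structure) are full models; this just
# shows the idea of smoothing through business cycles. Data are made up.

rng = np.random.default_rng(0)
quarters = pd.period_range("1990Q1", "2023Q4", freq="Q")

# Made-up data: a slowly declining "true" natural rate plus large cyclical swings.
true_r_star = np.linspace(2.5, 0.5, len(quarters))
cycle = 1.5 * np.sin(np.arange(len(quarters)) / 8) + rng.normal(0, 0.5, len(quarters))
real_rate = pd.Series(true_r_star + cycle, index=quarters, name="real_rate")

# A 40-quarter (10-year) centered moving average smooths through the cycles.
naive_r_star = real_rate.rolling(window=40, center=True, min_periods=20).mean()

print(pd.DataFrame({"real_rate": real_rate, "naive_r_star": naive_r_star}).tail(8).round(2))
```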

The BIS economists discuss estimates of r* based on several modeling approaches: a “semi-structural” model, a “vector autoregressive” model, a “dynamic stochastic general equilibrium” model, a model that looks at differences between shorter-term and long-term interest rates and how they evolve over time, and plain old surveys of key market participants. I will not try to explain the differences across these models here: suffice it to say that they are built on differing theoretical perspectives. Here’s a set of estimates of r* for the US dollar and for the euro:

As you can see, estimates of the natural rate of interest have declined over time. For the US, estimates before the Great Recession of 2008-09 were in the range of 2-3%. Since then, estimates were often 1% or lower–with a noticeable upward movement in the last year or so. The estimates for the euro area have a broadly similar movement from higher to lower, but a number of the estimates suggest that the natural interest rate in euro markets involved a negative interest rate for substantial parts of the last few years, a policy recommendation that raises some complications of its own.

For present purposes, the main concern here is whether these estimates of r* can offer practical guidance on whether, say, the Fed should be raising or lowering interest rates. It seems dubious. It’s not just that the range of estimates for the US market is wide, which it is, but also that each of these individual estimates is not precise, either, but represents a range of uncertainty. Moreover, the basic theory of the natural rate of interest r* suggests that it should be independent of monetary policy, because it represents the interest rate for an economy in balance. But is it really just a coincidence that estimates of r* plunged after the Great Recession, when monetary authorities were cutting interest rates? Were central banks cutting interest rates because r* had fallen, or do the estimates of r* from the economic models reflect to some extent that central banks had cut rates? It’s not easy to know.

The BIS authors argue: “The uncertainty surrounding r* suggests that it is a blurry guidepost for assessing the monetary policy stance and hence the tightness of monetary policy, in particular at the current juncture. In this context, it appears advisable to guide policy decisions based more firmly on observed inflation rather than on highly uncertain estimates of the natural rate.”

Productivity Syndrome and the Investment Prescription

Economic productivity is about growing the size of the pie. I sometimes point out that no matter what your goal–spending increases, tax cuts, greater support for the poor, environmental protection–that goal is easier when the economic pie is growing. When the economic pie isn’t growing, after all, then all priorities have to pit potential winners against potential losers in a zero-sum game.

Thus, a global slowdown in productivity is bad news all around. For context and policy advice, the McKinsey Global Institute has published “Investing in productivity growth” (March 24, 2024, by Jan Mischke, Chris Bradley, Marc Canal, Olivia White, Sven Smit, and Denitsa Georgieva). As they point out: “Advanced-economy productivity growth has slowed by about one percentage point since the global financial crisis (GFC).”

In a given year, 1 percent isn’t much, but remember that it’s a cumulative effect. If productivity growth had been 1% higher since, say, the end of the Great Recession in 2009, then over those 15 years the US economy would already be about 15% larger. In 2024, US GDP is $28 trillion, so 15% larger would have meant an additional $4.2 trillion. As the McKinsey folks note: “Today the world needs productivity growth more than ever. It is the only way to raise living standards amid aging, the energy transition, supply chain reconfiguration, and inflated global balance sheets.”
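Here is the back-of-the-envelope arithmetic behind that claim, with the compounding made explicit (the text rounds to the simpler 15% figure):

```python
# Back-of-the-envelope version of the claim in the text.
years = 15
extra_growth = 0.01            # one percentage point of extra annual productivity growth
gdp_2024_trillions = 28.0

cumulative_gain = (1 + extra_growth) ** years - 1   # about 16% with compounding, ~15% as a round number
print(f"Cumulative gain with compounding: {cumulative_gain:.1%}")
print(f"Extra GDP at a rough 15%: ${0.15 * gdp_2024_trillions:.1f} trillion")
print(f"Extra GDP with compounding: ${cumulative_gain * gdp_2024_trillions:.1f} trillion")
```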

The report offers some interesting context on global productivity. In this figure, the horizontal axis shows the level of productivity for various countries and regions, so that the lower-productivity areas like China and India are on the left, while the high-productivity areas like North America are on the right. As you can see, there’s a general pattern that lower-income places have the potential to grow at faster rates. In part, this is because lower-income places can take advantage of technologies that are already developed and sell to higher-income countries. In part, it’s essentially a matter of arithmetic: when you start very small, doubling in size is easier than when you start very large. The “productivity frontier” is really a thought experiment, suggesting that certain areas of the world, like sub-Saharan Africa, Latin America, and Western Europe may have potential for substantially more rapid productivity growth.

When it comes to China and India, I’m often asked about whether their pattern of growth is about to level off and top out. It might! In China, in particular, the current government seems to have decided that economic growth is less important than other priorities like military power and social control. But there is no law of economics which says that these countries have topped out.

This figure shows some notable historical growth experiences. On the far left, all countries start at a situation where their per capita GDP was roughly $2,800–which the graph sets equal to 100. As you can see, growth in China and India is really just following a path already blazed by South Korea, and before that Japan, as well as Malaysia and Thailand. Given the basic ingredients for productivity growth–the average worker is gaining in education and skills, the average worker has more capital equipment to work with, technology is improving, and there are incentives for firms to improve and innovate–growth in India and China could potentially still have decades to run.

What about productivity in high-income countries, like the United States? The McKinsey report suggests several main reasons why growth has slowed down. Two of the reasons are that factors driving growth in the early 2000s have shifted.

While many drivers affect productivity growth, two stand out for explaining the performance of advanced economies in recent years. First, manufacturing experienced waves of productivity advances fueled by the effects of Moore’s law and a burst of offshoring and restructuring. (Moore’s law, which holds that the number of transistors in a microchip doubles every two years, signals more broadly that computers become more powerful and efficient while coming down in cost.) These waves yielded productivity gains before the GFC [global financial crisis] but petered out over time. The second major factor is a secular decline in investment across multiple sectors … These two trends explain the slump in advanced economies almost entirely. Digitization was much discussed as the main candidate to rev up productivity again, but its impact failed to spread beyond the information and communications technology (ICT) sector.

The main prescription for additional economic growth from the McKinsey analysis is to raise the level of investment: to be clear, this advice is meant to include both investment in actual physical capital as well as investment in “intangible” capital that leads to gains in knowledge, management, and skills. The report notes:

The slump in capital investment slowed productivity growth beyond manufacturing by 0.5 percentage point in the United States, 0.3 point in our Western European sample economies, and 0.2 point in Japan … This decline spanned almost all sectors: in the United States, the only exceptions were mining and agriculture; in Europe, only mining, construction, and finance and insurance generally remained stable, while real estate accelerated.

More specifically, slowing growth in tangible capital (for example, machines, equipment, and buildings) explains almost 90 percent of the drop in the United States and 100 percent in Europe. From 1997 to 2019, gross fixed capital formation in tangibles fell from 22 to 14 percent of gross value added in the United States and from 25 to 17 percent in Europe. Intangible capital growth (for example, R&D and software) was more resilient but could not make up for falling investment in the material world. Gross fixed capital formation in intangibles increased from 12 to 16 percent in the United States and from 10 to 12 percent in Europe. Investment in intangibles is needed to boost corporate performance and labor productivity, but it may face barriers (skills needed to scale up, limited collateralization and recovery value), and the productivity benefits can take longer to materialize.

Economic growth doesn’t happen purely from the invention of technology: instead, it happens when that technology moves into widespread use. There’s a gap between the invention and the application, sometimes called the “valley of death,” because moving from the conceptual idea to the practical application can be so hard. “Investment” is how an economy bridges the gap. The McKinsey writers note: “Post-GFC investment declined sharply and persistently, failing to generate anything to take their place. But today, directed investment in areas such as digitization, automation, and artificial intelligence could fuel new waves of productivity growth.” I’m a little less certain than they are about the directions of future growth: for example, I think genetics and material science may have big roles to play as well. But without a rise in investment, we aren’t even going to know what we’re missing.

If Not Unemployment, How To Measure the Labor Market?

Economic statistics are all useful, and all imperfect. They must be consumed with care. Consider the unemployment rate, a headline indicator of the US labor market. The US unemployment rate has been below 4% since December 2021. As you can see from the graph, which shows the unemployment rate going back to 1948, there was a sustained period back in the early 1950s, and then another in the late 1960s, when the unemployment rate was this low for this long. But for the half-century from 1970 up through 2020, the US economy could only dream of an unemployment rate below 4% for 15 consecutive months.

Does it make sense to interpret this unemployment rate as a sign of a historically wonderful US labor market? Or does it make more sense to think about whether, for one reason or another, the unemployment rate at present isn’t capturing the essence of what’s happening in the US labor market?

In January 2024, the Hutchins Center on Fiscal and Monetary Policy at Brookings brought together 40 labor market economists to talk about this issue and others. Louise Sheiner, David Wessel, and Elijah Asdourian wrote an overview paper describing the discussion in “The U.S. Labor Market Post-Covid: What’s Changed, and What Hasn’t?” The first question they tackle is “What is the best measure of labor market slack?”

As Sheiner, Wessel and Asdourian write: “Many economists are no longer confident about the adequacy of the unemployment rate as the only important measure. Although unemployment in 2023 was at about the same level as it was in 2019, other measures of slack suggested that the labor market was much tighter.” Specifically, one of these “other measures” is the number of job vacancies (also called the number of job openings) divided by the number of unemployed–sometimes abbreviated as V/U.
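The measure itself is just a ratio of two published series; with illustrative, made-up monthly levels:

```python
# The V/U ratio is simply job openings divided by unemployed persons.
# Illustrative (made-up) monthly levels, not actual published figures:
job_openings = 10.0e6      # e.g., total openings from a survey like JOLTS
unemployed = 6.0e6         # e.g., unemployed persons from the Current Population Survey

v_u = job_openings / unemployed
print(f"V/U = {v_u:.2f} vacancies per unemployed person")   # about 1.7 with these numbers
```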

As you can see, at the worst of the Great Recession back in 2009, as well as in the pandemic recession, there were about 0.2 job vacancies for each unemployed person. Just before the pandemic recession, there were about 1.2 job vacancies for every unemployed person. Just after the pandemic recession, the number of job vacancies per unemployed person spiked as high as 2.0, before dropping back to 1.4.


This figure helps explain why a sub-4% unemployment rate before the pandemic isn’t the same as a sub-4% unemployment rate after the pandemic–that is, there are notably more job vacancies per unemployed person after the pandemic. The post-pandemic spike in the V/U ratio may also help to explain why inflation started rising when it did, just as the drop in V/U may help to explain the easing of inflationary pressures in the last year or so.

But there are also doubts about just what is being captured by the “job vacancy” measure. This is based on data about job openings posted by employers. But in the age of the internet job search, as it has become cheaper for firms to post job openings, perhaps firms have become more likely to post openings. The pandemic-related shifts in employment, especially for firms now willing to hire remote workers, may have changed the underlying meaning of “job vacancy” statistics as well. Total job openings, in any event, seem to be trending up.

The discussion among the labor economists made similar points:

Other participants argued against relying on V/U, largely because of skepticism about the reliability of the vacancy measure. Julia Coronado of MacroPolicy Perspectives and others pointed out that the recent rise in V/U is hard to separate from the upward trend in vacancies that began around 2008. Erica Groshen, former commissioner of the Bureau of Labor Statistics, said that vacancies are increasing across the board because digital technology makes vacancies much easier to post. “When I applied to colleges, my high school told us, ‘You can apply to five colleges,’” she remarked. “…My kids were told 12 colleges, because it was electronic, and I think the next generation is being told something like 20.” Without accounting for the long-term increase in vacancies, V/U’s detractors argued that the data as is could not inform the ongoing conversation about labor market tightness.

Yet another way to look at the labor market is based on how people leave their jobs. Basically, there are two broad reasons for leaving a job: a voluntary separation called a “quit” (blue line) or an involuntary separation called a “layoff/discharge” (red line). As the figure shows, layoffs rise during recessions, and spiked in the pandemic. However, the number of quits was rising before the pandemic, and after the pandemic spiked to new highs. This is sometimes called the “Great Resignation”–that is, people choosing to quit jobs.

The higher number of quits suggests another pattern for the modern US labor market. Many of us are used to a mental model where workers move to being unemployed, and then back to being employed. But what about people who quit for a new job, and thus don’t go through a spell of unemployment? Or people who leave the labor market for a time and then re-enter, but are not counted as “unemployed” in the meantime? Statistics from the Current Population Survey let you look at flows into jobs, and the statistics suggest: 1) the number of people who move from being employed at one place to employed somewhere else is on the rise (top figure); 2) the number moving from unemployed to employed is about the same (second figure); and 3) the number moving from out-of-the-labor-force to employed has risen a bit.

Some of these patterns blend together. The US economy seems to be exhibiting more people who already have jobs shifting to jobs with other employers. Faced with that situation, a rational response for employers is to post more job openings. In some cases, the firm may not feel it is necessary to hire in the near-term, but they want to have an available pool of applicants if they lose workers, and if the right candidate comes along, they are willing to hire.

Taken together, these statistics suggest that the US economy is indeed performing well in terms of the availability of jobs. But they also suggest that a lot of workers are looking for something different, or better, or higher-paying in a way that will help to offset the accumulated inflation of the last few years and the higher interest rates that they are facing for consumer and home loans.

What Should Intro Econ Include?

For many college students, and high school students as well, a single introductory economics course is the only course in the field they are ever going to take. This is not their fault! People are allowed to be interested in subjects other than economics! Perhaps alternative interests should even be encouraged! But for those of us who inhabit econo-land, it raises a real question: If we only get one crack at many students, maybe for a single academic quarter or semester, what content is it most important to teach?

The Journal of Economic Education has just published a six-paper symposium on the topic “What should go into the only economics course students will ever take?”, edited by Avi J. Cohen, Wendy Stock and Scott Wolla.

An essay by Wendy Stock, “Who does (and does not) take introductory economics?”, sets the stage. From the abstract:

Among students who began college in 2012, 74 percent never took economics, up from 62 percent in 2004. Fifteen percent of beginning college students in 2012 took some economics, and 12 percent were one-and-done students. About half of introductory economics students never took another economics class, and only about 2 percent majored in economics. The characteristics of one-and-done and some economics students are generally similar and closer to one another than to students with no economics

In his paper, Avi Cohen makes the case for a “literacy-targeted” principles of economics course: “The LT approach argues that it is far more valuable for students to learn and be able to apply a few core economic concepts well than to be exposed to a wide range of concepts and techniques that the majority of students are unlikely to use again.”

Apparently, there was an American Economic Association Committee on the teaching of undergraduate economics back in 1950. Cohen writes:

Eighty-five members from 50 educational institutions met between 1944 and 1950 and produced a 230-page special issue of the American Economic Review (AER) in 1950. Two recommendations in the report “Elementary Courses in Economics” (Hewitt et al. 1950 , 52–71) were:

The number of objectives and the content of the elementary course should be reduced….[T]he content of the elementary course has expanded beyond all possibility of adequate comprehension and assimilation by a student in one year of three class hours a week (56, italics in original).

Students should receive more training in the use of analytical tools.…[T]he typical course in elementary economics tends to concentrate attention on the elucidation of economic principles, rather than on training the student to make effective use of the principles he has learned. Examination questions test the student’s ability to explain, rather than his ability to use principles (59, italics in original).

This concern, that the intro course tries to cover too much and ends up with the typical student able to do too little, has been a regular critique of intro econ ever since, as Cohen describes with a brisk review of commentary on the intro class since 1950. A comment from George Stigler in 1963 has often been quoted:

The watered-down encyclopedia which constitutes the present course in beginning college economics does not teach the student how to think on economic questions. The brief exposure to each of a vast array of techniques and problems leaves with the student no basic economic logic with which to analyze the economic questions he will face as a citizen. The student will memorize a few facts, diagrams, and policy recommendations, and ten years later will be as untutored in economics as the day he entered the class (657). An introductory-terminal course in economics makes its greatest contribution to the education of students if it concentrates upon a few subjects which are developed in sufficient detail and applied to a sufficient variety of actual economic problems to cause the student to absorb the basic logic of the approach (658, emphasis added).

I’ve taught the intro econ course with some success and been involved in the writing of several principles textbooks, so I’ve watched the evolution of these arguments over the years with interest. Perhaps the fundamental problem, as Cohen describes, is that many econ departments want to have a single principles of economics class, they want that class to count toward the economics major, and they want that class to prepare students for the courses that follow: especially intermediate micro and intermediate macro. Departments have some confidence that the existing principles of economics textbooks and classes more-or-less accomplish this goal. The incentives of departments to adjust the existing courses–and then perhaps also need to adjust the intermediate courses–are low.

Given these realities, any substantial rethinking of the existing intro course is going to face an uphill battle for widespread acceptance. Some of the subjects that could be cut from standard intro courses, Cohen suggests, include cost curves, comparisons of imperfectly competitive industries, formulas for elasticities (beyond, for example, % change quantity/% change price), details of national income accounting, and formulas for fiscal and money multipliers (beyond, for example, 1/% leakages from circular flow). Moreover, other papers in the JEE symposium emphasize how different types of pedagogy, generally aimed at getting away from the exclusive use of classroom lectures, multiple-choice exams, and a heavy emphasis on graphs, can help the intro course evolve. I have no strong objections to this approach, but at the end of the day, I think its ultimate destination is a better-taught version of the existing course.

Over the years, my own thoughts along these lines have been running more toward an intro course that dramatically de-emphasizes the textbook, but does not eliminate it, because a textbook is a useful tool for basic terminology and graphs: opportunity cost, supply and demand, perfect and imperfect competition, externalities and public goods, fiscal and monetary policy, comparative advantage, trade balances, and others.

But when it comes to examples, it seems peculiar and anachronistic to me to rely overmuch on textbooks in the internet age. An intro course needs to provide conceptual guidance and curate examples, of course. But the web is full of real-world examples of economic reasoning and data: indeed, many of the links at this website go to such articles. If the goal is economic literacy and functionality for students, pointing introductory students at, say, the websites of the Bureau of Labor Statistics and the Bureau of Economic Analysis, the Congressional Budget Office, the Energy Information Administration, the Social Security Administration, the World Development Indicators, and others seems to me a useful starting point. It seems to me quite possible to develop a set of exercises and readings where students could even choose among different questions and exercises–and discuss what they found with each other.

In short, pick a slightly shorter list of concepts and tools that you want introductory students to have, with a textbook to explain them, but for examples and illustrations, give the students both questions to answer and a list of web addresses.

Of course, jumping straight into real-world events, without underlying disciplinary structure, isn’t a fair intro to the subject. But focusing only on disciplinary structure, and treating the intro course as just a prelude to the rest of the economics major, isn’t going to be productive for the half of intro econ students who won’t ever take another economics course, and isn’t going to be attractive for the roughly three-quarters of college students who never take an intro course. When I ask people who had that single long-ago intro econ course what they remember today, they often shrug at me, grin ruefully, and say something about “there were a whole bunch of graphs.” We can do better.

Some Facts about US Rental Housing

About one-third of US households rent, rather than own. The renters are disproportionately lower-income, less-educated, and younger. The public policy concern, of course, is not really about, say, a group of college students or recent graduates sharing a rental, but instead about low-income adults and parents with families for whom rent may absorb a very large share of income. Lauren Bauer, Eloise Burtis, Wendy Edelberg, Sofoklis Goulas, Noadia Steinmetz-Silber, and Sarah Wang from the Hamilton Project at the Brookings Institution present some basic facts in “Ten economic facts about rental housing” (March 2024).

When you think about rental housing, do you think about apartment buildings with dozens of units, or about smaller-scale rentals? It can come as a surprise to recognize that, in the US housing market, landlords offering just one rental unit in a building account for nearly as many total rental units as buildings that include 50 or more units. When thinking about rental housing policy, it’s important to remember that it’s not just about big investors with large numbers of rentals, but also about the incentives that apply to the very large number of smaller units.

By the standards of the last two decades, the number of vacant rental units is low, although new construction of rental units did seem to be turning up a bit through much of 2022 and 2023.

The price of new rentals spiked during the pandemic, as shown by the lines in three shades of green, which track different surveys of new rental prices. The price of existing rentals didn’t rise during the pandemic, in part because the federal government passed rules making evictions from rental apartments essentially illegal for a time. But as the figure shows, the higher prices of new rentals started feeding through into the price of existing rentals in 2022 and 2023.

For households in the middle fifth of the income distribution, rent is typically about 28% of income. For those in the bottom fifth of the income distribution–who have less income to begin with–rent in recent years has been 34-36% of total income. Federal housing support is not especially high. The Hamilton Project authors explain:

Figure 8 shows annual federal outlays for housing assistance per potentially eligible household (defined as a household with income below 200 percent of the poverty threshold for a family of four) between 2005 and 2022. Annual housing assistance per household is very low relative to average housing costs. In this period, annual federal housing assistance doubled from $475 (in 2022 dollars) per qualifying household per year ($23 billion in total) to $941 per qualifying household per year ($49 billion in total). Meanwhile, the median asking rent per month in the U.S. in November 2023 was $1,967 (Redfin 2023).

In many cities, the waiting lists for those who are eligible to receive housing vouchers can be lengthy–measured in years.

Like a lot of economists, I’m ambivalent about providing specific vouchers for housing, rather than providing those with low incomes with additional cash so that they can make life tradeoffs as they see best. But the US political system has preferred to earmark support for different areas–Medicaid for health care, food stamps for food, housing vouchers for rental housing–while keeping cash payments to the poor relatively low, and often linked to work.

For some previous posts on US rental housing, see:


Non-Fungible Tokens: What Are They and How Much Should I Care?

I’ve had a hard time believing that non-fungible tokens (NFTs) matter in an important way, but enough people seem to be paying attention to them that I feel some need to do so as well. Roman Kräussl and Alessandro Tugnetti provide a useful overview of the state of play in “Non-Fungible Tokens (NFTs): A Review of Pricing Determinants, Applications and Opportunities” (Journal of Economic Surveys, April 2024, pp. 555-574).

A non-fungible token is a digital asset with a key characteristic: The ownership of the digital item is provable and traceable on a blockchain. However, it is not a digital currency like Bitcoin or Ethereum. Instead, as Kräussl and Tugnetti point out, an NFT can come in five different forms: Gaming, Collectibles, Utility, Art, and Metaverse. Here are their descriptions:

In the realm of gaming, NFTs represent assets that can be utilized within video games, with their elements stored on the blockchain. This offers a significant departure from traditional video games, as players gain real ownership of in-game assets through the purchase and sale of NFTs. Gaming NFTs have demonstrated a remarkable ability to engage active users, resulting in the highest participation rates compared to other categories. This high level of user involvement translates to continuous exchanges between players, making the gaming sector highly liquid. … Examples of popular gaming NFTs include Axie Infinity, NBA Top Shot, and CryptoKitties.

Not much unlike physical collectibles, NFT collectibles are released in collections, or series, which represent variations of the same image, video, or other media. The characters in the Cryptopunks project, for instance, differ from each other in certain attributes that also make the price vary: man/woman, human/alien/monkey, and presence or absence of accessories. NFT collectibles record the highest level of transactions though the number of active wallets is much lower than that of gaming NFTs. … [T]his concentration of the market is due to a few large-value transactions. Nadini et al. (2021) show that the top 10% of buyer–seller pairs contribute 90% to the total number of NFT transactions. Examples of NFT collectibles are CryptoPunks, the Bored Ape Yacht Club (BAYC), and Azuki.

NFT utilities, the third main group, are assets that provide utility in the real or digital world through the blockchain. In other words, utility tokens give their holder consumptive rights to access a product or service (Howell et al., 2020) so that their use is not directly related to the need to collect or play with the token of interest. In particular, because these tokens serve as the means of payment on a platform or offer access to the firm’s services, they possess utility features (Gryglewicz et al., 2021). Utility NFTs comprise different categories: finance, health, supply chain, or digital ID. The most popular NFT utility projects are VeeFriends (which grant access to the VeeCon, a multi-day event exclusively for VeeFriends NFT holders), Ethereum Name Service (ENS, where users can purchase and manage domain names for their digital assets), and Nouns.

Art NFTs can be defined by exclusion from the previous sectors. Art NFTs are assets with an artistic function that have not been released in series (as could happen for collectibles) and that cannot be used within any type of video game hosted on the blockchain. This type of token has brought many innovations to the art market, especially due to the easing of barriers to entry this opaque market. Everyone can create and sell their works on different platforms in a much shorter time than on the traditional art market, with an average time between purchase and resale in art NFTs of just 33 days versus the average resale period on the traditional art market of 25–30 years (McAndrew, 2023). Furthermore, art NFTs have addressed issues that have affected the traditional art market for decades, such as provenance, title, authenticity, and a fairer distribution of income. The creation of communities by the artists themselves via social networks, such as Twitter gravitating around their NFTs collections, have allowed for a much deeper involvement of buyers. … Main examples of art NFTs are ArtBlocks, that is, tokens representing generative art through an algorithm, SuperRare, and The Currency by artist Damien Hirst which are 10,000 NFTs corresponding to 10,000 physical artworks stored in a physical vault.

The fifth main group, Web3 or Metaverse, can be defined as an extension and grouping of the previous ones. The Metaverse is a virtual universe accessible through a computer screen, laptop, virtual reality (VR), or any other digital system. Users who access this world can create their virtual avatar and interact with the surrounding reality, including other users. They can purchase virtual plots of land within the Metaverse to create their own organizations and host events. In many cases, firms have established virtual businesses and created a space where they can offer goods and services, promote their products and organizations, and hold virtual events (Goldberg et al., 2021). Some examples are the game developer company Atari in Decentraland, Adidas in The Sandbox, and Cryptovoxels.

The list helps to clarify for me why NFTs don’t play much role in my life. I’m not a gamer. I don’t play in the metaverse. I don’t do collectibles. When we get art, it’s to hang on the wall. I suppose at some point a utility NFT might work as a form of membership in an organization that matters to me. For now, annual membership cards for certain museums and standard online ticketing seem to be working for me just fine. I have no desire to diversify my assets into NFTs for either financial or aesthetic reasons.

But as I am continually reminded in the modern world, my tastes are not universally held. In the meantime, the NFT market seems to have turnover of a few hundred billion dollars per year: apparently, much of this is related to gaming or collections.

A Market Failure Case for Place-Based Policy

Economists have traditionally focused on policies aimed directly at low-income people, rather than at low-income places. For example, programs like welfare payments, food stamps, Medicaid, and Supplemental Security Income are based on individuals. But there has been a push in the last few years for consideration of “place-based policies,” which apply different rules for taxes, government benefits, finance, or regulation within certain geographic boundaries. The sense behind advocacy of place-based policies is that individual-based policies are all very well, but when certain places within metropolitan areas or certain regions within countries have been lagging for decades, perhaps supplementing them with other approaches may be useful.

Anthony Venables digs into these issues in “The case for place-based policy” (Centre for Economic Policy Research Policy Insight 128, February 2024). Venables starts by describing how a model of unfettered free markets would predict that economically distressed areas can bounce back, and how the forces in that model don’t seem strong enough.

For example, the standard market-oriented story is that if an area lags badly in economic terms for a time–say, in terms of business formation, employment opportunities, and growth–then several effects should occur. At least some people will migrate out of that area to find jobs elsewhere, which will in a mechanical sense reduce unemployment in that area. In addition, real estate in that area should diminish in value: as a result, firms should begin to see that area as a less-expensive spot to relocate, and people should begin to see that area as a less-expensive spot to live. Over some (not very clear) period of time, the local economy of the distressed area should rebalance itself.

Venables emphasizes several problems with this vision:

1) Not everyone will find it easy to migrate to another area of the city or the country. In fact, those who find it easiest to migrate will be those who have good job opportunities elsewhere–or more generally, those who have a personal and economic network elsewhere on which they can draw. Some people will also just have a degree of drive and determination that manifests itself in moving. Thus, out-migration from a distressed economic area means that the area will lose many of those that, for purposes of future economic development, it would prefer to keep.

2) While some prices will adjust when an area becomes economically distressed, not all of them will. For example, a minimum wage may apply across a given area, or various goods and services may have a similar cost across areas, or interest rates will tend to rise and fall across areas. Indeed, other than real estate costs, it’s not clear the extent to which costs will be lower for firms or households in an economically distressed area.

3) The movements of firms and households in response to these price changes may not be large, either. For a firm, the potential benefits of a cheaper location in an economically depressed area need to be weighed against the benefits of locating in an economically more vibrant area where the pool of workers, suppliers, and ideas is likely to be deeper. For a household, a lower cost of housing is a nice thing in isolation, but living surrounded by other people attracted by the lower cost of housing may have tradeoffs concerning the qualities of the neighborhoods, parks, schools, and so on. Venables calls this a “low-level spatial equilibrium”: “Firms don’t want to move [to the economically distressed area] because other firms have not moved, or because workers do not have appropriate skills. Workers don’t want to acquire particular skills, as they do not see job opportunities arising from them, and so on in a vicious circle.”

Of course, none of this is to say that all economically depressed areas are doomed forever. Some areas do reinvent their local or regional economies. But when it works, it often takes a substantial time; and many times, it doesn’t seem to work at all.

One can concede and appreciate the reasons why certain places seem stuck in a “low-level spatial equilibrium,” but lack confidence in the ability of government to engineer a solution. A few tax breaks aren’t likely to cut it. An “all-of-the-above” approach that tries to address all of the concerns of firms and households about moving to an economically depressed area might work in some cases, but there aren’t any guarantees. One can imagine an “on-the-edges” approach that tries to at least shrink the economically distressed area around its geographic edges. In this essay, Venables doesn’t have much to offer here other than a very high-level discussion of “clear objectives,” encouraging “complementarities,” considering “alternative scenarios,” and the like.

At a baseline level, one can imagine governments making an attempt to relocate a substantial portion of their own operations and employees to distressed areas. If such relocation runs into problems–say, a lack of transportation infrastructure to get to the jobs, or concerns about the safety of walking, parking, or receiving deliveries in the neighborhood–then that helps the government understand what needs fixing for private employers and households to be willing to relocate as well.

For additional discussion on place-based policies, I’ve posted on the subject before:

Also, the Summer 2020 issue of the Journal of Economic Perspectives (where I work as Managing Editor) had a two-paper “Symposium on Place-based Policies”:

“Using Place-Based Jobs Policies to Help Distressed Communities,” by Timothy J. Bartik

Place-based jobs policies seek to create jobs in particular local labor markets. Such policies include business incentives provided by state and local governments, which cost almost 50 billion USD annually. The most persuasive rationale for these policies is that they can advance equity and efficiency by increasing long-term employment rates in distressed local labor markets. However, current incentives are not targeted at distressed areas. Furthermore, incentives have high costs per job created. Lower costs can be achieved by public services to business, such as manufacturing extension, customized job training, and infrastructure. Reforms to place-based jobs policies should focus on greater targeting of distressed areas and using more cost-effective policies. Such reforms could be achieved by state and local governments acting in their residents’ interests or could be encouraged by federal interventions to cap incentives and provide aid to distressed areas.

“Place-Based Policies and Spatial Disparities across European Cities,” by Maximilian v. Ehrlich and Henry G. Overman
Spatial disparities in income levels and worklessness in the European Union are profound, persistent and may be widening. We describe disparities across metropolitan regions and discuss theories and empirical evidence that help us understand what causes these disparities. Increases in the productivity benefits of cities, the clustering of highly educated workers and increases in their wage premium all play a role. Europe has a long-standing tradition of using capital subsidies, enterprise zones, transport investments and other place-based policies to address these disparities. The evidence suggests these policies may have partially offset increasing disparities but are not sufficient to fully offset the economic forces at work.

Improving Regulatory Targeting: The OSHA Example

Many government agencies with enforcement power face a common problem: they only have the resources to visit or audit a tiny fraction of the possibilities, so they need to pick and choose their targets. How should they make that choice?

Consider the Occupational Safety and Health Administration, which is responsible for monitoring and passing rules about workplace safety. OSHA has jurisdiction over about 8 million workplaces, but (in cooperation with state-level agencies) it has resources to actually visit less than 1% of that number. How to choose which ones? Matthew S. Johnson, David I. Levine, and Michael W. Toffel discuss their research on this topic in “Making Workplaces Safer Through Machine Learning” (Regulatory Review, Penn Program on Regulation, February 26, 2024; for the underlying research paper, see “Improving Regulatory Effectiveness Through Better Targeting: Evidence from OSHA,” published in the American Economic Journal: Applied Economics, October 2023, 15:4, pp. 30-67; for an ungated preprint version, see here).

One insight is that it’s useful for regulatory purposes if the inspection process has a degree of randomness, because then firms need to be just a little on their toes. As it turns out, the randomness in the largest OSHA inspection program from 1999-2014, called Site-Specific Targeting, also allows researchers to look at workplace safety records in the aftermath of an OSHA inspection. The idea was to develop a list of firms that had the highest injury rates two years ago, and then randomly select a group of them for visits. It’s then possible to compare the aftermath of an OSHA regulatory visit for firms that (randomly) got one to the firms (remember, with similar high injury rates) that didn’t get one. The authors write: “We find that randomly assigned OSHA inspections reduced serious injuries at inspected establishments by an average of 9 percent, which equates to 2.4 fewer injuries, over the five-year post-period. Each inspection thus yields a social benefit of roughly $125,000, which is roughly 35 times OSHA’s cost of conducting an inspection.”

But might it be possible, holding fixed the limited resources of OSHA, to do better? For example, what if instead of looking at injury rates from two years ago, one looked at the average injury rate over the previous four years–to single out firms with sustained higher rates of workplace injury? Or what if one used a machine-learning model to predict which firms are likely to have the most injuries, or which firms could have the biggest safety gains, and focused on those firms? The authors write:

We find that OSHA could have averted many more injuries had it targeted inspections using any of these alternative criteria. If OSHA had assigned to those establishments with the highest historical injuries the same number of inspections that it assigned in the SST program, it would have averted 1.9 times as many injuries as the SST program actually did. If OSHA had instead assigned the same number of inspections to those establishments with the highest predicted injuries or to those with the highest estimated treatment effects, it would have averted 2.1 or 2.2 times as many injuries as the SST program, respectively.
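As a rough illustration of what that comparison of targeting rules involves, here is a sketch on synthetic data: hold the inspection budget fixed, score establishments by injuries two years ago, by a four-year average, or by a machine-learning prediction, and count how many of next year's injuries occur at the establishments each rule selects. The data-generating process and the gradient-boosting model below are stand-ins of my own, not the authors' data or estimates.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative sketch only (synthetic data, not the authors' model): hold the
# inspection budget fixed and compare how many future injuries occur at the
# establishments each targeting rule would have selected.

rng = np.random.default_rng(42)
n, budget = 6000, 300

risk = rng.gamma(shape=2.0, scale=1.0, size=n)       # latent injury propensity
hist = rng.poisson(np.outer(risk, np.ones(4)))       # injuries in each of the last 4 years
size = rng.lognormal(4.0, 1.0, size=n)               # employment, a crude extra covariate
future = rng.poisson(risk)                           # next year's injuries

# Train the prediction model on one cohort, evaluate targeting rules on another.
train, test = np.arange(n) < n // 2, np.arange(n) >= n // 2
X = np.column_stack([hist, size])
model = GradientBoostingRegressor(random_state=0).fit(X[train], future[train])

def covered(score):
    """Future injuries at the top-`budget` test establishments under a scoring rule."""
    idx = np.argsort(score)[::-1][:budget]
    return future[test][idx].sum()

rules = {
    "injuries two years ago": hist[test, 1],
    "four-year average":      hist[test].mean(axis=1),
    "ML-predicted injuries":  model.predict(X[test]),
}
for name, score in rules.items():
    print(f"{name:>24}: {covered(score)} future injuries at inspected establishments")
```

In the actual research, of course, the prediction model is trained on earlier cohorts and the comparison rests on the estimated causal effect of inspections; the sketch only conveys the structure of the targeting exercise.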

A few thoughts here:

1) I was surprised that the simple rule of looking back over four years of injury rates, rather than just looking at injury rates from two years ago, had such substantial gains. The reason is that injury rates in any given year can bounce around a lot. For example, imagine a firm that has one bad episode every 20 years, but quickly corrects the situation. In that bad year, it could turn up on the OSHA high-priority list–but the OSHA inspection won’t do much. A firm that is poorly ranked for accidents over four years is more likely to have a real problem.

2) Going from that simple change in the inspection rule–looking at four years of injury rates–to a more sophisticated and hard-to-explain machine learning approach yields only modest additional gains. It might be that the machine learning analysis is useful for showing whether large gains are possible through better regulatory targeting, and if so, then regulators might wish to figure out a way to get most of those gains using a simple rule that they can explain, rather than black-box machine-learning rules they can’t easily explain.

3) One concern is that these new methods of targeting would leave out the randomization factor: firms would be able to predict that they were more likely to receive a visit from OSHA. It’s not clear that this is a terrible thing: firms which have poor workplace safety records over a period of several years should be concerned about a visit from regulators. But it may be wise to keep a random element in who gets visited.

Finally, it feels to me as if regulators, who are always under political pressure, sometimes see their role as akin to law enforcement: that is, they have an incentive to show that they are going after those who are provably in the wrong. But as this OSHA example shows, going after employers who had a really bad workplace event two years ago may not lead to as big a gain in workplace safety as going after employers who have worse records over a sustained time.

I wrote last year about a similar issue that arises in IRS audits. It turns out that when the IRS is deciding who to audit, it puts a lot of weight on whether it will be easy to prove wrongdoing. Thus, it tends to do a lot of auditing of low-income folks receiving the Earned Income Tax Credit, where the computers show that it should be straightforward to prove wrongdoing. But of course, there isn’t a lot of money to be gained from auditing those with low incomes. Consider the situation where the IRS audits 10 people who all had more than $10 million in income last year. Perhaps nine of those audits find nothing wrong, but the 10th results in collecting an extra $500,000. If the IRS auditors are focused on a high conviction rate, they make one choice; if they are focused on a strategy which brings in the most revenue, they will chase bigger fish.
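The expected-value arithmetic in that hypothetical, spelled out:

```python
# The expected-revenue arithmetic in the hypothetical above.
audits = 10
recovered = [0.0] * 9 + [500_000.0]     # nine audits find nothing; one recovers $500,000

hit_rate = sum(1 for r in recovered if r > 0) / audits
expected_revenue_per_audit = sum(recovered) / audits

print(f"'Conviction' rate: {hit_rate:.0%}")                               # 10%
print(f"Expected revenue per audit: ${expected_revenue_per_audit:,.0f}")  # $50,000
```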

My point is not that the choice of regulatory priorities should be turned over to machine learning! Instead, the point is that machine learning tools can help evaluate whether the existing rules are being set appropriately, and how well those rules work relative to alternatives.

The Generic Drugs Antitrust Case

Imagine that in the market for generic drugs, a group of companies forms a cartel to raise prices on the products controlled by their group, while other companies are not involved. What pattern might you expect to see for the prices of drugs controlled by the cartel, compared with the prices of drugs outside the cartel’s control? Amanda Starc and Thomas G. Wollmann carry out this analysis in “Does Entry Remedy Collusion? Evidence from the Generic Prescription Drug Cartel” (NBER Working Paper 29886, April 2023).

In a figure from their paper, the blue line shows prices of generic drugs whose supply was controlled by the firms in the cartel, while the black line shows prices of generic drugs whose supply was not controlled by the cartel. As you can see, price changes for these two groups of generic drugs track each other closely before 2013. But after 2013, prices for the group of drugs not controlled by the cartel continue on their downward trajectory, while prices for the group of drugs controlled by the cartel suddenly rise and then remain at a higher level.
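
As a rough sketch of the comparison behind a figure like this (not Starc and Wollmann’s actual code or data), one could index average prices for the two groups of drugs to a common pre-2013 level and plot them around the break:

```python
# Illustrative sketch: the price series here are invented, not Starc and
# Wollmann's data, and the split into "cartel" vs. "non-cartel" drugs is
# assumed to be known from the legal record.
import pandas as pd
import matplotlib.pyplot as plt

prices = pd.DataFrame({
    "year":       [2010, 2011, 2012, 2013, 2014, 2015],
    "cartel":     [1.00, 0.93, 0.88, 1.30, 1.45, 1.42],  # hypothetical average price index
    "non_cartel": [1.00, 0.94, 0.89, 0.85, 0.81, 0.78],  # hypothetical average price index
})

# Index both series to their 2012 (pre-cartel) level so the divergence is visible.
base = prices.loc[prices["year"] == 2012, ["cartel", "non_cartel"]].iloc[0]
prices["cartel_idx"] = prices["cartel"] / base["cartel"]
prices["non_cartel_idx"] = prices["non_cartel"] / base["non_cartel"]

plt.plot(prices["year"], prices["cartel_idx"], label="Drugs supplied by cartel firms")
plt.plot(prices["year"], prices["non_cartel_idx"], label="Drugs supplied by other firms")
plt.axvline(2013, linestyle="--", color="gray")  # year the alleged coordination began
plt.ylabel("Price index (2012 = 1)")
plt.legend()
plt.show()
```

Any real version of this exercise would, of course, need drug-level price data and the list of products attributed to the cartel firms.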

Of course, one graph doesn’t prove that a cartel was actually formed or was successful in raising prices. It’s theoretically possible that a sudden surge of increased demand or reduced supply caused prices for all the drugs controlled by the supposed cartel to leap up in this way at just the time that an employee at Teva Pharmaceuticals started coordinating efforts across a number of firms to keep prices high. But as circumstantial evidence goes, it does raise one’s eyebrows.

Some antitrust cases are resolved all at once, with a well-publicized court finding or a legal settlement. But in other cases, the resolution trickles out over time in a series of announcements, one company at a time. That’s what seems to be happening in the ongoing antitrust case about the prices of a number of generic drugs. Last summer, Teva Pharmaceuticals and Glenmark Pharmaceuticals became the sixth and seventh companies to announce consent agreements with the antitrust authorities at the US Department of Justice. Teva, which is especially central to this case, agreed to a criminal penalty of $225 million to settle the case, along with divesting a certain cholesterol drug and other penalties.

What exactly did Teva do? It’s hard to know what happened behind the scenes, and part of the reason that a company signs a consent decree is to avoid acknowledging the full extent of what happened. But we at least know the accusations that were laid out in US District Court in 2019.

The complaint starts out by alleging that there has been a long-standing pattern in the generic drug industry of firms agreeing (at least tacitly) to divide up the market and not to compete too hard with each other. I can’t speak to the truth of this allegation, but the evidence above shows that prices of generic drugs were falling steadily up through 2013. Thus, the heart of the case is not the allegations about a long-standing lack of competition, but the events that started in 2013. Here, I’ll quote the allegations of the complaint about the actions of Nisha Patel at Teva Pharmaceuticals in 2013 (starting around p. 158 of the complaint):

565. In April 2013, Teva took a major step toward implementing more significant price increases by hiring Defendant Nisha Patel as its Director of Strategic Customer Marketing. In that position, her job responsibilities included, among other things: (1) serving as the interface between the marketing (pricing) department and the sales force teams to develop customer programs; (2) establishing pricing strategies for new product launches and in-line product opportunities; and (3) overseeing the customer bid process and product pricing administration at Teva.

566. Most importantly, she was responsible for – in her own words – “product selection, price increase implementation, and other price optimization activities for a product portfolio of over 1,000 products.” In that role, Patel had 9-10 direct reports in the pricing department at Teva. One of Patel’s primary job goals was to effectuate price increases. This was a significant factor in her performance evaluations and bonus calculations and, as discussed more fully below, Patel was rewarded handsomely by Teva for doing it.

567. Prior to joining Teva, Defendant Patel had worked for eight years at a large drug wholesaler, ABC, working her way up to Director of Global Generic Sourcing. During her time at ABC, Patel had routine interaction with representatives from every major generic drug manufacturer, and developed and maintained relationships with many of the most important sales and marketing executives at Teva’s competitors.

568. Teva hired Defendant Patel specifically to identify potential generic drugs for which Teva could raise prices, and then utilize her relationships to effectuate those price increases. …

571. When she joined Teva, Defendant Patel’s highest priority was identifying drugs where Teva could effectively raise price without competition. On May 1, 2013, Defendant Patel began creating an initial spreadsheet with a list of “Price Increase Candidates.” As part of her process of identifying candidates for price increases, Patel started to look very closely at Teva’s relationships with its competitors, and also her own relationships with individuals at those competitors. In a separate tab of the same “Price Increase Candidates” spreadsheet, Patel began ranking Teva’s “Quality of Competition” by assigning companies into several categories, including “Strong Leader/Follower,” “Lag Follower,” “Borderline” and “Stallers.”

572. Patel understood – and stressed internally at Teva – that “price increases tend to stick and markets settle quickly when suppliers increase within a short time frame.” Thus, it was very important for Patel to identify those competitors who were willing to share information about their price increases in advance, so that Teva would be prepared to follow quickly. Conversely, it was important for Patel to be able to inform Teva’s competitors of Teva’s increase plans so those competitors could also follow quickly. Either way, significant coordination would be required for price increases to be successful – and quality competitors were those who were more willing to coordinate.

573. As she was creating the list, Defendant Patel was talking to competitors to determine their willingness to increase prices and, therefore, where they should be ranked on the scale. …

574. It is important to note that Defendant Patel had several different ways of communicating with competitors. Throughout this Complaint, you will see references to various phone calls and text messages that she was exchanging with competitors. But she also communicated with competitors in various other ways, including but not limited to instant messaging through social media platforms such as Linkedin and Facebook; encrypted messaging through platforms like WhatsApp; and in-person communications. Although the Plaintiff States have been able to obtain some of these communications, many of them have been destroyed by Patel.

575. Through her communications with her competitors, Defendant Patel learned more about their planned price increases and entered into agreements for Teva to follow them. …

576. By May 6, 2013, Patel had completed her initial ranking of fifty-six (56) different manufacturers in the generic drug market by their “quality.” Defendant Patel defined “quality” by her assessment of the “strength” of a competitor as a leader or follower for price increases. Ranking was done numerically, from a +3 ranking for the “highest quality” competitor to a -3 ranking for the “lowest quality” competitor. …

577. Defendant Patel created a formula, which heavily weighted those numerical ratings assigned to each competitor based on their “quality,” combined with a numerical score based on the number of competitors in the market and certain other factors including whether Teva would be leading or following the price increase. According to her formula, the best possible candidate for a price increase (aside from a drug where Teva was exclusive) would be a drug where there was only one other competitor in the market, which would be leading an increase, and where the competitor was the highest “quality.” Conversely, a Teva price increase in drug market with several “low quality” competitors would not be a good candidate due to the potential that low quality competitors might not follow Teva’s price increase and instead use the opportunity to steal Teva’s market share.

578. Notably, the companies with the highest rankings at this time were companies with whom Patel and other executives within Teva had significant relationships.
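
Paragraph 577 describes Patel’s scoring formula only qualitatively. Purely to make the structure concrete (every weight and number below is invented for illustration, not taken from the complaint), a rule of that general shape might look like this:

```python
# Illustrative only: the complaint describes the formula qualitatively, so the
# weights and scores below are invented for this sketch, not Teva's actual rule.

def price_increase_score(competitor_quality, n_competitors, teva_leads):
    """Score a drug as a price-increase candidate.

    competitor_quality: -3 (lowest) to +3 (highest) "quality" rating of the
        other firms in the market, i.e., their willingness to follow an increase.
    n_competitors: number of other firms selling the drug.
    teva_leads: True if Teva would lead the increase, False if it would follow.
    """
    score = 3.0 * competitor_quality          # heavily weight competitor "quality"
    score += 2.0 / max(n_competitors, 1)      # fewer competitors -> higher score
    score += 1.0 if not teva_leads else 0.0   # following a rival's increase is safer
    return score

# Best case in the complaint's terms: one "highest quality" competitor leading.
print(price_increase_score(competitor_quality=3, n_competitors=1, teva_leads=False))
# Worst case: several "low quality" competitors who might undercut instead.
print(price_increase_score(competitor_quality=-3, n_competitors=5, teva_leads=True))
```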

The legal complaint runs to several hundred pages, documenting contacts between firms and agreements to raise prices or not to underbid on contracts. Taking it all into account, the complaint alleges:

At the zenith of this collusive activity involving Teva, during a 19-month period beginning in July 2013 and continuing through January 2015, Teva significantly raised prices on approximately 112 different generic drugs. Of those 112 different drugs, Teva colluded with its “High Quality” competitors on at least 86 of them (the others were largely in markets where Teva was exclusive). The size of the price increases varied, but a number of them were well over 1,000%.

Again, it’s worth remembering that these allegations are only one side of the case. But when it comes to the communications between Teva and other generic drug firms from 2013 to 2015, the plaintiffs have many of the actual messages. This doesn’t look like a relatively subtle anticompetition case, like the one about how Amazon charges fees to firms selling on its website. It sure looks like good old-fashioned price fixing.

The final obvious question is: When prices for one group of generic drugs rose so substantially, why didn’t other manufacturers of generic drugs from outside the Teva-organized network enter the market? In the research mentioned above, Starc and Wollmann find that some entry does occur. But entry isn’t simple. For example, because of the regulatory approval process in the market for generic drugs (even though the drugs are chemically identical!), it takes 2-4 years for a manufacturer of generic drugs to start producing a new product. Also, if a potential entrant gears up and invests to manufacture a new drug, the existing firms could then cut their prices, so that the funds spent on entering the market don’t pay off. Entering a new market is considerably easier in an economics textbook model than in the real world.
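
A stylized calculation helps show why entry may not arrive quickly even when cartel prices jump. All of the numbers below are assumptions for illustration: a multiyear approval lag, an up-front entry cost, and some probability that incumbents respond to entry by cutting prices.

```python
# Stylized entry decision; every number here is an assumption for illustration.

entry_cost = 5_000_000        # up-front investment to get a generic approved
approval_lag_years = 3        # regulatory delay before any revenue arrives
discount_rate = 0.10

profit_if_prices_stay_high = 2_000_000   # annual profit if the cartel keeps prices up
profit_if_price_war = 200_000            # annual profit if incumbents cut prices on entry
prob_price_war = 0.6                     # entrant's guess that incumbents retaliate
horizon_years = 10                       # years of production after approval

expected_annual_profit = (prob_price_war * profit_if_price_war
                          + (1 - prob_price_war) * profit_if_prices_stay_high)

# Discount profits that only start flowing after the approval lag.
npv = -entry_cost + sum(
    expected_annual_profit / (1 + discount_rate) ** (approval_lag_years + t)
    for t in range(1, horizon_years + 1)
)
print(f"Expected NPV of entry: ${npv:,.0f}")
# With these particular numbers the NPV comes out negative once the entrant
# expects incumbents to cut prices, which is one reason entry can lag well
# behind the price increases.
```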