50 Years Ago: When the US Encouraged Coal Use

Coal is the dirtiest of the fossil fuels, both for its contribution to standard pollutants like particulates and sulfur, and because it emits more carbon per unit of energy produced than natural gas or petroleum. Thus, it’s good environmental news that, in the last couple of decades, US coal has declined to just 9% of total US primary energy consumption. The US Energy Information Administration reports: “In terms of coal’s total primary energy content, annual U.S. coal consumption peaked in 2005 at about 22.80 quads and production peaked in 1998 at about 24.05 quads.”

(For the curious, “primary” energy consumption refers to the original source of the energy. “Electricity” is not included, because electricity needs to be generated from something else like a natural gas power plant or a solar panel–electricity is not a primary source of energy by itself.)

(For those still more curious, NGPL refers to “Natural Gas Plant Liquids,” which are hydrocarbons like propane, which are separated from natural gas at processing plants.)

(For the additionally curious, the “renewable” energy category here includes hydropower, wind, solar, and biofuels like ethanol and wood. Of the 9% of total US energy consumption that traces to renewable energy in 2023, about three-fifths is biomass, like ethanol and wood. Not quite one-third of the 9% of US energy consumption from renewable energy in 2023 traces to wind and solar.)

But there was a time, a half-century ago, when promoting coal use was a primary energy policy for the US government. Karen Clay, Akshaya Jha, Joshua Lewis, and Edson Severnini provide the background as part of their overall history in “Carbon Rollercoaster: A Historical Analysis of Decarbonization in the United States,” in the Summer 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

If you flash back a half-century, you may remember that in 1973, the members of OPEC, the Organization of the Petroleum Exporting Countries, embargoed oil exports to the United States and other countries that had supported Israel during the Yom Kippur War. As Clay, Jha, Lewis, and Severnini write: “The real price of imported oil rose dramatically, from $10.67 per barrel in 1972 (in 2007 US dollars) to $36.05 in 1974 (Seiferlein 2007, p. 171). Turbulence in the Middle East kept prices high. Unrest in Iran and the Iran-Iraq War caused further disruption, driving oil prices to $62.71 per barrel in 1980.”

In response, one policy goal of the time was to shift US energy use away from oil. The authors report:

Various regulations passed during and after the crisis reinforced the continued use of coal in electricity and other sectors. The first major piece of legislation was the Energy Supply and Environmental Coordination Act of 1974, which required that, if feasible, electric power plants burning oil and natural gas would have to convert to coal (Meltz 1975). This law was then largely superseded by the Fuel Use Act of 1978. Edward Lublin, Acting Deputy Assistant General Counsel for Coal Regulations in the Department of Energy, wrote: “The Fuel Use Act prohibits new facilities and allows DOE to prohibit existing facilities, from using petroleum or natural gas as a primary energy source unless DOE determines to grant to such facility an exemption from the Fuel Use Act’s prohibitions (Lublin 1981, p. 355).” This pro-coal legislation was often justified in terms of energy independence, given the abundant US reserves of coal. The legislation covered both electric utilities and major industrial fuel-burning installations …

The Three Mile Island nuclear power plant meltdown happened in March 1979. Thus, an additional policy goal at this time was to shift away from nuclear. The authors write:

After Three Mile Island, no new nuclear power plant construction was authorized until 2012. Because nuclear plants displaced coal-fired electricity generation—one gigawatt-hour of nuclear generation resulted in a roughly 0.8 gigawatt-hour decrease in coal-fired generation historically (Adler, Jha, and Severnini 2020)—the nuclear upheaval kept coal consumption higher than it would otherwise have been.

One additional step was that the anti-pollution efforts of the original Clean Air Act had the useful effect of reducing “conventional” pollutants like ozone, particulate matter, carbon monoxide, and others. However, reducing carbon emissions was not yet on the policy agenda, and reducing these other pollutants came with a tradeoff: coal was burned at lower efficiency, which meant that more carbon was emitted.

[E]fforts to cut local air pollution often increased carbon emissions. The 1970 Clean Air Act and subsequent amendments in 1977 coincided with less efficiency in converting coal to electricity sold and higher carbon emissions … The aggregate implications of this shift from 1970 to 1990 are meaningful: annual total carbon emissions in 1990 from coal-fired generation was 1,607 million tons, but would have been 1,415 million tons if the same amount of coal-fired electricity had been generated at 1970 levels of carbon emissions per gigawatt-hour. Similarly, the aggregate kilowatt hours of electricity sold per ton of coal burned decreased from 2,529 in 1970 to 2,065 in 1990. Thus, regulation increased coal consumption and carbon emissions.
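Just to make the counterfactual arithmetic concrete, here is a minimal sketch in Python, my own illustration using only the figures quoted above, not the authors’ code:

```python
# Counterfactual from the quoted passage: what would 1990 carbon emissions
# from coal-fired generation have been at 1970 levels of carbon per GWh?
# The four figures below are the ones quoted; the ratios are derived from them.

emissions_1990_actual = 1607        # million tons of carbon, actual 1990
emissions_1990_at_1970_rate = 1415  # million tons, counterfactual at 1970 intensity

# Implied rise in carbon emitted per gigawatt-hour of coal-fired generation:
intensity_rise = emissions_1990_actual / emissions_1990_at_1970_rate - 1
print(f"Carbon per GWh rose roughly {intensity_rise:.0%}")  # about 14%

# Efficiency of converting coal to electricity sold, as quoted:
kwh_sold_per_ton_1970 = 2529
kwh_sold_per_ton_1990 = 2065
coal_per_kwh_rise = kwh_sold_per_ton_1970 / kwh_sold_per_ton_1990 - 1
print(f"Coal burned per kWh sold rose roughly {coal_per_kwh_rise:.0%}")  # about 22%
```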

Putting all of these together, “By 2005, coal consumption was five times what it had been in 1960.”

One of my complaints about the world, which I’m confident will never really be addressed, is that those who advocated for policies that turned out to have undesirable tradeoffs pretty much never acknowledge that reality. The US economy doesn’t really start getting off coal until the fracking revolution greatly expanded the supply of natural gas (as shown by the light blue area in the figure above). But what if it had been possible to move to natural gas sooner? Or consider France, which reacted to the OPEC oil embargo of 1973 by building nuclear power plants, which means that France’s carbon emissions have been quite low since then. Perhaps the worst thing about the US stepping away from nuclear is that several decades went by without intensive research on how to make the technology safer. What if solar and wind technology could have been accelerated as well? The carbon from the additional coal that was burned from, say, 1975 to 2005 is still in the atmosphere now, and will remain there for a very long time.

Recent Trends in US Antitrust Enforcement

The Biden administration appointed people to key antitrust positions in the Federal Trade Commission and the Antitrust Division at the US Department of Justice who, in general, promised to make antitrust regulation tougher. I’ve written here before about questions of doctrine: that is, how should antitrust cases be evaluated? But there’s also a more basic question: what changes in mergers and enforcement have actually happened?

Under the Hart-Scott-Rodino Act of 1976, all proposed mergers and acquisitions above a minimum threshold size must be reported to the US government, which gives the antitrust authorities a chance to look them over in advance. In 2024, the minimum threshold size over which a transaction needed to be reported in advance was $119.5 million. Each year, the Federal Trade Commission and the Antitrust Division of the US Department of Justice report on the transactions from the previous year, as well as what enforcement actions were taken. The most recent is the 47th Hart-Scott-Rodino Annual Report (FY 2024). Thus, the report is a chance to see what the newly aggressive antitrust administrators of the Biden administration did through 2024.

As a starting point, here is the number of proposed mergers reported under the Hart-Scott-Rodino rules. Obviously, there’s a fair amount of year-to-year variation: for example, the low level in 2020 is probably attributable in part to the disruptions of the pandemic. The higher levels in 2021 and 2022 were partly a bounceback from the pandemic year, but there was also some talk in the financial press that firms were trying to complete transactions before the new Biden antitrust warriors wrote a new set of merger guidelines. The drop in the last two years is essentially back to pre-pandemic levels. Part of the issue here is that at least some merger transactions are financed by debt, which is less enticing when interest rates go up. Overall, it seems fair to say that the number of proposed mergers in 2024 was near-average for the previous decade.

Was there an effect on the average size of mergers? Here, the pattern is clearer: the share of proposed mergers with a value of more than $1 billion has risen substantially in the last few years.

When the FTC and the DOJ are notified of a potential merger, they can either allow it to proceed without challenge or they can put in a “second request” that expresses concern and asks for more information. This “second request” percentage is always pretty low. After all, the presumption of antitrust authorities is that they are not second-guessing whether a proposed deal will be a money-maker, but only whether it poses a risk of reduced competition. Also, the antitrust authorities have limited budgetary resources and need to pick and choose. That said, the share of transactions getting a “second request” had been relatively low under the Biden antitrust team, although in 2024 it rose back to a level that had been common during the first Trump administration.

Ultimately, after these second requests, “The [Federal Trade] Commission took enforcement action against 18 transactions: 12 that the parties abandoned or restructured as a result of antitrust concerns raised during the investigation; and six that resulted in the Commission initiating administrative or federal court litigation. The [Antitrust] Division took enforcement action against 14 transactions: 12 that the parties abandoned in the face of questions from the Division; and two that were restructured after the Division raised concerns about the threat they posed to competition.”

Of course, it’s always hard to draw linkages from enforcement efforts to outcomes, in antitrust as in other areas. The tough talk from the Biden antitrust enforcers, their doctrinal arguments over what antitrust enforcement should be, and the specific cases where they brought enforcement actions surely shaped the types of mergers that firms were willing to propose. But with such effects duly noted, it’s hard to look at the raw number of proposed mergers, proposed large mergers, and enforcement efforts and interpret it as a sharp break from past antitrust practice.

For those who want more on antitrust doctrine, I’ve commented on this blog from time to time about the Biden antitrust team, the new merger guidelines, some current antitrust cases, and the historical changes in merger law over time. Some of these posts include:

Also, the Winter 2025 issue of the Journal of Economic Perspectives (where I work as Managing Editor) has a three-paper symposium on “The 2023 Merger Guidelines and Beyond.”

Some Trends in Global Debt from the IMF

The IMF has updated its Global Debt Database, and Vitor Gaspar, Carlos Eduardo Goncalves, and Marcos Poplawski-Ribeiro point out a few big-picture changes in a short article, “Global Debt Remains Above 235% of World GDP” (IMF Blog, September 17, 2025).

Here’s an overall view of global debt since 1950, measured as a share of global GDP:

The big-picture patterns here are intriguing. From 1950 up to about 1980, global debt remains at roughly 100% of global GDP. However, during this time the share of public debt (yellow bars) is falling, while the share of private debt (blue bars) is rising. In 1950, public debt is substantially larger than private debt; by 1980, private debt is substantially larger than public debt–and has remained larger ever since.

But in the 1980s, global debt as a share of GDP starts rising. Comparing the early 1990s to the present, corporate debt as a share of global GDP hasn’t risen much, household debt has risen moderately, and government debt has risen by a lot–although it has dropped a bit in the last few years as pandemic-related spending has diminished.

In one way, rising debt is not a surprise. Countries at a low level of economic development often have little debt, because their banking and financial sectors are also underdeveloped. Such economies lack a well-developed channel through which savings by households and firms can become loanable funds for others in the economy.

But at some point, for any organization or household, rising levels of debt become a worry. It’s thought-provoking to me that the corporate sector, where outside investors in corporate stocks and bonds are monitoring company financial records, hasn’t seen much of a rise in debt. Instead, the rising debt levels are traceable to households and government.

Here’s another figure from the IMF authors, focused on changes in 2024 for the US, China, and advanced and other economies around the world. In the US, public debt rose in 2024, but private debt dropped–in part because many US corporations have high profits and thus can reduce their borrowing, and perhaps also in part because rising public debt is leading to higher interest rates in a way that leads to “crowding out” of private borrowers.

But when it comes to higher debt levels in 2024, the obvious “winner” is China, with dramatic rises in both public and private debt. Indeed, given that the banking and financial system in China is heavily controlled and backstopped by its government, even the private debt listed here is in some sense “public.” Many of the causes behind China’s economic growth involve real changes, like a better-skilled workforce, improved infrastructure, capital investment, and better technology. But at least one of the causes has also involved turning the debt spigots wide open, especially through local government lending to companies. Debt can be a facilitator of growth, but excessive debt can also cripple growth. China seemed to be attempting to address its pre-existing debt problem with additional debt–a policy approach that rarely ends well.

Economics of Trade Sanctions

US foreign policy (along with that of the European Union and the United Nations) has been increasingly characterized by the use (or threat) of trade sanctions. What do we know about how such sanctions work? Gabriel Felbermayr, T. Clifton Morgan, Constantinos Syropoulos, and Yoto V. Yotov review the evidence in “Economic Sanctions: Stylized Facts and Quantitative Evidence” (Annual Review of Economics, 2025, 17: 175-195). They write:

According to the newest version of the Global Sanctions Data Base (GSDB; Yalcin et al. 2024), the number of sanction programs in place globally has shot up from about 200 10 years ago to about 600 in 2023. What is more, about 12% of all existing country pairs and 27% of world trade are currently affected by some type of sanction. In their various forms, sanctions are the leading geoeconomic tool aiming to coerce foreign governments into actions that they would not undertake otherwise. …

[S]anction processes are much better understood today than they were 30 years ago. The political science community has come to accept what economists already knew (i.e., that sanctions bring substantial economic effects), and economists have come to accept what political scientists have long understood (i.e., that substantial economic costs do not always bring changes in policy). Over that same period of time, the use of sanctions has dramatically increased, and they have come to affect many more bilateral economic relationships. This is a puzzling phenomenon: If sanctions are costly and frequently fail to deliver the desired policy objectives, why have they become so endemic?

Their discussion of this question offers various hypotheses about bargaining and negotiating. But I found one of the insights especially persuasive: “[T]he primary effects on [sanction] targets are negative, large, often long-lasting, and very heterogeneous; in contrast, the corresponding effects on senders tend to be small and short-lived.” In other words, sanctions have at least a chance of producing substantial pain, at little cost to those doing the sanctioning.

For more details, a useful starting point is a two-paper symposium on “Trade Sanctions and International Relations” in the Winter 2023 issue of the Journal of Economic Perspectives (where I work as Managing Editor):

Better Permitting and More Building: Possible?

It seems natural enough, at least based on US experience, to believe that building and permitting are in a natural opposition: that is, stronger permitting means less building. Zachary Liscow has been looking for a way out of this opposition. He spells out some of his thoughts in “Reforming Permitting to Build Infrastructure” (Hutchins Center on Fiscal & Monetary Policy at Brookings, September 2025).

I confess that I was drawn to the paper, in part, by footnote 1 after the first sentence. (Have I ever written a sentence more geeky than that? Probably.) It reads: “This report builds on Zachary Liscow, ‘Getting Infrastructure Built: The Law and Economics of Permitting,’ Journal of Economic Perspectives 39, no. 1 (2025): 151–80.” As the Managing Editor of JEP, I recommend the earlier paper as well.

In particular, Liscow is concerned that the US needs more infrastructure, especially for energy and transportation, and that the existing system of permitting has evolved in such a way that it can allow a well-funded and/or noisy minority to have effective power to slow and to block the needed infrastructure. Liscow’s central idea is to make permitting work better, in particular by thinking of ways that it might better allow open consideration and evaluation of environmental and other issues–and also allow for adjustments to the original plans. However, once this improved permitting process has occurred, the follow-up part of Liscow’s proposal is that judges would be considerably more hesitant to intervene in the decision of the permitting process, whether that intervention would allow or block the proposed construction.

In this paper, Liscow offers a four-part plan to reform the permitting requirements created by the National Environmental Policy Act (NEPA). I’ll quote here from the summary of the paper at the Brookings website:

  1. Shifting legal power: Judicial oversight should be curtailed to reduce excessive litigation. This includes reforming the “hard look” standard courts use to assess agency actions, limiting the range of alternatives agencies must consider, shortening statutes of limitations for lawsuits, restricting standing to sue, and limiting the scope of judicial injunctions.
  2. Facilitating popular decision-making and negotiated agreements: Since the permitting process often pits government agencies against fragmented community opposition, we need new tools to foster popular decision-making and negotiated agreements that would bindingly preclude litigation. These could include mechanisms like local legislative approval, compensation through community benefit agreements, or more experimental models in which the government would designate a representative set of interest groups to negotiate on the public’s behalf.
  3. Strengthening state capacity: A well-functioning permitting regime requires well-resourced institutions. Agencies should be able to expand staffing, collect better data, coordinate their efforts, and make greater use of categorical exclusions and expedited reviews—especially for critical clean energy and transit projects.
  4. Improving public participation: A more democratic and equitable permitting process requires early and broad-based outreach. Such outreach should include not only the most vocal opponents but also previously marginalized groups and those who would stand to benefit from planned development. Experimentation with new models to ensure diverse stakeholder involvement should be encouraged.

The summary continues: “Taken together, these reforms comprise a ‘green bargain,’ speeding construction and lowering costs, allowing the construction of the infrastructure needed for the green transition, and empowering the broader public—especially lower-income communities most hurt from failing infrastructure—over narrow interests.”

It’s of course uncertain whether Liscow’s proposals will work. Would the increases in public/popular input and decision-making end up creating an even bigger obstruction to building infrastructure? Would judges actually back off, if the earlier process had taken place? What are the chances of a substantial increase in state and federal capability to oversee these kinds of changes? But it also seems clear to me that the current permitting system isn’t working well. Liscow’s proposed course of action seems better than at least one alternative, which would involve a dramatic reduction or outright scrapping of permit requirements.

Predistribution, Not Redistribution, in the Nordic Countries

Maybe it’s just because I live in Minnesota, a state where the differences between immigrants from Sweden, Norway, and Finland are still apparent in the names of towns and the surnames of people. But when I run into people who would prefer that the US distribution of income be more equal, they often point to the economies of northern Europe as a real-world example of what they have in mind.

How do these countries do it? Magne Mogstad, Kjell G. Salvanes, and Gaute Torsvik explore the evidence in “Income Inequality in the Nordic Countries: Myths, Facts, and Lessons” (Journal of Economic Literature 2025, 63:3, 791–839).

In thinking about why greater equality of income prevails in the Nordic countries, it’s useful to divide the possible reasons into redistribution and predistribution. An example of redistribution after income is received would be public policy decisions like higher marginal tax rates on the well-off, or greater support for those less well-off, or some combination of the two. In contrast, predistribution involves affecting what income is received in the first place, before taxes and transfer payments. Examples might include minimum wage laws, greater worker representation (through unions or other mechanisms), or rules that affect the ability of top executives to be paid in the form of bonuses and stock options. Thus, Mogstad, Salvanes, and Torsvik write:

We argue that the contemporary Nordic model is built on four principal pillars: (i) significant public investment in family policies, education, and health services; (ii) coordinated wage setting within and across industries; (iii) substantial expenditure on social insurance to safeguard against income losses due to unemployment, disability, and illness; and (iv) high and progressive taxation of labor income, complemented by subsidies for services that support employment. …

A key finding is that a more equal predistribution of earnings, rather than income redistribution, is the main reason for the lower income inequality in the Nordic countries compared to the United States and the United Kingdom. While the direct effects of taxes and transfers contribute to the relatively low income inequality in the Nordic countries, the key factor is that the distribution of pretax market income, particularly labor earnings, is much more equal in the Nordics than in the United States and the United Kingdom. Another key finding is that equality in hourly pay, not work hours, is the primary explanation for why the Nordic countries have much lower inequality in labor earnings than the United States and the United Kingdom. … Quantitatively, the compression of hourly wages matters the most, explaining a large majority of the difference in earnings inequality between the Nordic countries and the United States and the United Kingdom.
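For readers who want the accounting behind this kind of decomposition, here is a sketch of the standard identity (my own illustration, not necessarily the authors’ exact method). Since annual earnings are the product of the hourly wage and hours worked, taking logs gives

\[
\ln E_i = \ln w_i + \ln h_i,
\qquad
\operatorname{Var}(\ln E) = \operatorname{Var}(\ln w) + \operatorname{Var}(\ln h) + 2\operatorname{Cov}(\ln w, \ln h).
\]

The finding that compression of hourly wages “matters the most” amounts to saying that the Var(ln w) term accounts for most of the cross-country difference in earnings inequality.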

The authors go through possible alternative reasons for why the four elements of the Nordic model might lead to greater equality. For example, “Nordic governments spend heavily on children and families through heavily subsidized day care, education, and health programs. Although these programs are typically universal, they could help equalize the distribution of skills and human capital if the take-up or the positive effects of the program are concentrated among children from poor or disadvantaged families. We argue that most of the available evidence suggests that this is not the key explanation for income equality in the Nordics. A substantial body of research evaluating the causal effects of day care, education, and health policies in the Nordics suggests that these policies have a relatively modest impact on inequality in skills, educational attainment, and labor market outcomes.”

It’s important here to be clear on how the minds of economists operate. The authors are not arguing in an overall sense either for or against these universal social programs. They are only making the very specific argument that the evidence about the effects of these programs does not support the claim that they are a primary cause of the greater income equality that exists in the Nordic countries. For the authors, the key difference is that those with higher education and skills are paid a substantially higher premium in the US and UK economies than in the Nordic economies, compared with those who have lower levels of education and skills.

US Income Inequality Before Taxes and (Many) Transfers: Census Data

Each year, the US Census Bureau publishes three overview reports to update the annual data on income, poverty rates, and health insurance. Here, I focus on some figures from Income in the United States: 2024, by Melissa Kollar and Zach Scherer (September 2025, P60-286). Here’s a figure showing several measures of pre-tax income inequality.

It’s perhaps useful for most readers to start at the bottom. US households as a group are divided into five parts, or quintiles. The share of income going to the top quintile has been rising for decades, with an especially sharp jump in the 1990s. The middle panel offers several ratios comparing income at the 90th, 50th, and 10th percentiles. The 90th percentile is rising substantially compared to the 10th percentile, and rising but less so compared to the 50th percentile. The ratio of the 50th to the 10th percentile hasn’t moved much. Both of these panels suggest a rise in pre-tax income concentrated at the top of the income distribution.

The top panel shows the “Gini coefficient,” which will be less intuitively clear to many readers. I offered my own explanation of it here. But basically, it is a way of measuring the extent to which an income distribution departs from perfect equality of incomes. On this scale, perfect equality of incomes has a Gini coefficient of zero, while perfect inequality of incomes–that is, all income going to one person–has a Gini coefficient of 1. Again, this measure shows a steady rise over time.
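For concreteness, here is a minimal sketch of how these measures can be computed, using my own simulated data rather than the Census Bureau’s actual estimation procedure:

```python
import numpy as np

# Toy income sample -- purely illustrative, not Census data.
rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=11, sigma=0.8, size=10_000)

# Percentile ratios like those in the report's middle panel.
p10, p50, p90 = np.percentile(incomes, [10, 50, 90])
print(f"90/10 ratio: {p90 / p10:.2f}")
print(f"90/50 ratio: {p90 / p50:.2f}")
print(f"50/10 ratio: {p50 / p10:.2f}")

def gini(x):
    """Gini coefficient: 0 = perfect equality; approaches 1 as all
    income is concentrated in a single household."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    # Standard closed form based on the sorted values.
    return (2 * np.sum(ranks * x)) / (n * np.sum(x)) - (n + 1) / n

print(f"Gini: {gini(incomes):.3f}")
print(gini([10, 10, 10, 10]))  # 0.0 -- perfect equality
```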

A few thoughts here:

1) This is a measure of inequality in money income. It includes wages and salaries, rent payments, interest and dividends, as well as government cash payments like Social Security and cash forms of public assistance. It does not include capital gains. It does not include after-tax effects, including both taxes paid and assistance to those with low incomes that happens through tax credits like the Earned Income Tax Credit. It also does not include the value of non-cash assistance programs like food stamps or Medicaid. But it seems safe to say that the rising inequality at the top of the distribution is being primarily driven by wage and salary payments at the top of the income distribution–which is a pattern of interest all by itself.

2) The steady rise in income inequality over time suggests that the driving factor is not something short-term, like the policies of a given president. I won’t try to run through a list of possible candidates for causal factors here. But it does seem worth noting some data on education and income also just released by the Census Bureau. This figure only goes back two decades. As the Census Bureau notes:

Overall, the income gap between householders with a bachelor’s degree or higher and those with a high school degree but no college widened during the 20-year period. In 2004, households headed by those with at least a bachelor’s degree had about twice as much income as those headed by someone with a high school degree but no college. By 2024, householders with a bachelor’s degree or higher had median household income 2.3 times higher than those with a high school degree. The makeup of educational attainment groups also changed over time. In recent decades, growth in the population with a bachelor’s degree or higher has been concentrated among racial and ethnic groups with historically low attainment. This growth has also disproportionately come from increasing educational attainment among women. 

One of the driving forces behind rising inequality over time lies in the race between demand for skilled labor and supply of skilled labor. If the supply isn’t keeping up with demand, then the wage gap between the two groups will tend to rise. This is surely not the entire story of rising pre-tax, pre-transfer US income inequality, but it’s a part.

EU Productivity and Lack of Integration

Economic growth and productivity growth across the nations of the European Union have been lagging the US economy. What are the reasons, and what might be done? A group of essays in the June 2025 issue of Finance & Development offers some insights. A common theme is that EU economic integration has not proceeded as planned. As a result, EU firms are selling into smaller national markets, rather than a continent-wide market, and their incentives to attract finance and to invest in economies of scale and new technologies are accordingly reduced.

For example, here is Alfred Kammer in “Europe’s Integration Imperative”:

The EU has made significant progress freeing up trade between its member states, but plenty of obstacles remain. High trade barriers within Europe are equivalent to an ad valorem cost of 44 percent for manufactured goods and 110 percent for services, IMF research shows (2024). These costs are borne by EU consumers and companies in the form of less competition, higher prices, and lower productivity.

The EU is also a long way from capital market integration, with cross-border flows frustrated by persistent fragmentation along national lines. The total market capitalization of the bloc’s stock exchanges was about $12 trillion in 2024, or 60 percent of the GDP of the participating countries. By comparison, the two largest stock exchanges in the US had a combined market capitalization of $60 trillion, or over 200 percent of domestic GDP. Limited EU-level harmonization in important areas, such as securities law, hampers growth by preventing capital from flowing to where it’s most productive.

This is one reason Europe has fallen behind in the adoption of productivity-enhancing technologies and its productivity levels are low. Today, the EU’s total factor productivity is about 20 percent below the US level. Lower productivity means lower incomes. Even in the EU’s largest advanced economies, per capita income is about 30 percent lower than the US average (see Chart 1). 

Kammer points out:

Not only do Europe’s leading companies lag their US competitors, but they are falling further behind over time. This is true across all sectors, but especially for tech. While the productivity of US-listed tech firms has increased by about 40 percent over the past two decades, European tech firms have seen almost no improvement. One reason could be that US firms are simply trying harder: They have tripled their research and development spending to 12 percent of sales revenue, three times European companies’ ratio, which has languished at an average of 4 percent in recent decades.

The future would look brighter if Europe could hope for young high-growth firms to reduce the innovation and productivity deficit. Alas, the EU has few such companies. And they have a substantially smaller economic footprint than those in the US, where younger firms account for a far larger share of employment. In other words, the EU has too many small, old, and low-growth companies. About a fifth of European employees work in microfirms with 10 people or fewer, about double the US figure. And while the average European firm that has been in business 25 years or more employs about 10 workers, comparable US companies employ 70 (Chart 2).

Issues for the EU may be especially acute for young tech companies. As Kammer points out, banks are typically the primary source of capital for EU companies, and banks typically want to lend to companies with collateral–not a company based on a few patents and an idea: “[T]here is a troubling trend of innovative European firms taking their talents to more dynamic markets elsewhere, with future “unicorn” companies valued at more than $1 billion leaving the EU for the US at a rate that is 120 times faster than the other way around, according to research by Ricardo Reis, of the London School of Economics.”

Other essays in the issue focus on what would be needed for an EU savings and investment union that could support innovative new companies, as well as essays with more details on Germany, Poland, Greece, and Spain. But for now, in an essay that offers qualified optimism about the future for innovative EU firms, Alessandro Merli begins:

“The US innovates, China replicates, Europe regulates” is how critics summarize the continent’s approach to innovation. Exhibit A of the European Union’s regulatory overreach is the now infamous Artificial Intelligence Act, which governs AI—even though the region has not yet produced a single major player. Productivity in US technology firms has surged nearly 40 percent since 2005 while stagnating among European companies, according to IMF research. US research and development spending as a share of sales is more than double what it is in Europe. No European company ranks among the 10 largest tech companies by market share. 

Interview with Dean Karlan: US Government Foreign Aid

Back in November 2022, Dean Karlan took the job as the first “chief economist” for USAID. In that position, he had a staff of about 30 whose task was to figure out the benefits and costs of different aid programs, with the goal over time of refocusing aid on problems and situations where the payoff was highest. In February 2025, Karlan resigned his position at USAID when he felt that political opposition made it impossible to do the job for which he had signed on.

Santi Ruiz interviews Karlan in “How to Fix Foreign Aid: USAID’s former Chief Economist reflects on DOGE” (Statecraft, July 31, 2025). Here are a few of the points that caught my eye:

On the role of the Chief Economist at USAID

There had never been an Office of the Chief Economist before. In a sense, I was running a startup, within a 13,000-employee agency that had fairly baked-in, decentralized processes for doing things. … [T]he reality is, we were running a consulting unit within USAID, trying to advise others on how to use evidence more effectively in order to maximize impact for every dollar spent. We were able to make some institutional changes, focused on basically a two-pronged strategy. One, what are the institutional enablers — the rules and the processes for how things get done — that are changeable? And two, let’s get our hands dirty working with the budget holders who say, ‘I would love to use the evidence that’s out there, please help guide us to be more effective with what we’re doing.’ There were a lot of willing and eager people within USAID.

On the challenge of Congressional earmarking

[T]he number that I heard is that something in the ballpark of 150-170% of USAID funds were earmarked. … Congress double-dips, in a sense: we have two different demands. You must spend money on these two things. If the same dollar can satisfy both, that was completely legitimate. There was no hiding of that fact. It’s all public record, and it all comes from congressional acts that create these earmarks. … There’s an earmark for Development Innovation Ventures (DIV) to do research, and an earmark for education. If DIV is going to fund an evaluation of something in the education space, there’s a possibility that that can satisfy a dual earmark requirement. That’s the kind of thing that would happen. One is an earmark for a process: “Do really careful, rigorous evaluations of interventions, so that we learn more about what works and what doesn’t.” And another is, “Here’s money that has to be spent on education.” That would be an example of a double dip on an earmark.

How the Department of Government Efficiency (DOGE) intervention operated

There was not really any looking at any of the impact of anything. That was never in the cards. There was a 90-day review that was supposed to be done, but there were no questions asked, there was no data being collected. There was nothing whatsoever being looked at that had anything to do with, “Was this award actually accomplishing what it set out to accomplish?” There was no process in which they made those kinds of evaluations on what’s actually working. You can see this very clearly when you think about what their bean counter was at DOGE: the spending that they cut. … Throughout the entire government, that bean counter never once said, “benefits foregone.” It was always just “lowered spending.” Some of that probably did actually have a net loss, maybe it was $100 million spent on something that only created $10 million of benefits to Americans. That’s a $90 million gain. But it was recorded as $100 million. And the point is, they never once looked at what benefits were being generated from the spending. What was being asked, within USAID, had nothing to do with what was actually being accomplished by any of the money that was being spent. It was never even asked.

Francisco Flores also interviewed Karlan for the Economics that Really Matters (ETRM) website in “ETRM Interview Series – Dean Karlan,” focused on the future of research in development economics and on advice to young researchers. Here’s Karlan on the topic of broad political support for foreign aid:

[H]onestly, I’m not convinced that a lack of evidence is the main reason [foreign] aid isn’t more supported. It’s a bit of an oversimplification to say, “People don’t see the benefits, so they don’t support it.” There are many things governments do that only benefit a small segment of the population—like specific research initiatives or industry subsidies—and yet we still do them. If our standard were that every policy has to directly benefit 51% of people to be justified, we’d hardly get anything done. So, I don’t think that’s a fair criticism of foreign aid.

Also, the best evidence we can provide is about whether aid is effective—not whether it tangibly benefits, say, a middle-income family in Kansas. Sometimes there are material connections—like if USAID buys wheat from Kansas and a local farmer benefits—but those are exceptions. Most aid programs don’t have a direct economic payoff for Americans. Instead, the benefit is about soft power, about global leadership, and most importantly, about doing the right thing.

And that moral stand—that’s something a lot of Americans already live by. Most Americans donate to charity. Most care about others. We talk about ourselves as a generous, giving nation. So what’s wrong with living up to that identity as a country? Why shouldn’t our foreign policy reflect those values? … So I don’t think we need to show a financial return on foreign aid to justify it. And I don’t think a lack of direct benefit to Americans is the reason it sometimes loses support. 

What Do Managers Do?

Economists have been thinking for a long time about the operation of buying and selling in markets. However, they have traditionally spent less time studying what happens inside a firm–a setting in which the forces of supply and demand are replaced by managerial decision-making. Anyone who has had both a good boss and a bad boss knows that it makes a difference, but how and why? Alan Benson and Kathryn Shaw tackle the research on this question in “What Do Managers Do? An Economist’s Perspective” (Annual Review of Economics, 2025, 17: 635-664). They write:

Economic activity requires motivating and coordinating individuals to work toward a common goal. These aims are the purview of managers. What, however, do managers actually do? We outline three defining principles of economic research on managers—technological determinism, skill distinction, and managerial self-interest—and relate them to the set of skills reported by managers on LinkedIn. We highlight “managers of people” and “managers of projects” as a useful distinction for categorizing theoretical, empirical, and descriptive accounts of managers. In light of our three principles, we review research on how managers can create value—namely, by hiring, retaining, training, monitoring, evaluating, allocating, and supervising. We propose that managers apply these skills in different proportions depending on the production technology in which they are embedded …

Empirical studies in this literature often involve finding data from within companies. For example, consider a company with a group of middle managers, all at the same level in the hierarchy, who oversee groups of workers. Moreover, say that workers sometimes are shifted from one manager to another, as business needs evolve. It may become apparent that most workers perform with higher productivity under some managers than others. What are some of the main themes that emerge from this research?

In hiring decisions, the evidence suggests that few managers are good at screening potential workers. A fairly robust literature finds that more productive workers are hired by a process that involves some mixture of highly structured interviews (so that answers are more comparable across applicants), specific testing, or direct observation of the person doing the job, when that is possible. But managers do a better job of hiring if they have incentives to overcome the biases that lead them to prefer hiring from their own friend-groups, social groups, or ethnic groups.

In retention of existing workers: “Perhaps the clearest evidence linking people skills and retention is provided by Hoffman & Tadelis (2021). Using data from a large high-tech firm, they find that survey-measured people management skills are highly correlated with greater subordinate retention: Replacing a manager at the 10th percentile in measured people skills with one at the 90th percentile corresponds to a 60% reduction in overall turnover and to declines in turnover among workers estimated to be high performers.” Retention is often easier when a worker and manager share a characteristic: for example, female managers are generally better at retaining female workers. There is also evidence that managers who are encouraged to focus on retention can often improve on this dimension.

In training and mentoring: “Sandvik et al. (2020) provide one of the most comprehensive recent field studies of how managers create value through training. They examine sales agents whose productivity may be tracked by revenue per call. Managers are responsible for improving sales agents’ performance through formal training, probationary screening, and ongoing feedback. Importantly, managers can encourage development by managing workplace knowledge flows, including by setting up policies that encourage peer learning from the best performers.” When it comes to mentoring, the approach that produces more productive workers seems to be regular, mandatory, and broad-based mentoring, rather than selecting a smaller number of people for mentoring.

In the area of motivation: “For instance, Lazear et al. (2015) estimate a two-way fixed effect model in the context of supervisors of workers doing routine tasks. They find that the difference in productivity under a 90th-percentile manager and a 10th-percentile manager is equivalent to the productivity from an additional worker. Benson et al. (2019) estimate manager value added from the manager fixed effects in a regression with salesperson productivity. They find large differences in the productivity of sales workers under different managers: A worker under a 75th-percentile manager has nearly five times the sales of one under a 25th-percentile manager, which is approximately half the raw sales gap between workers at these quartiles.” Some of these differences in managerial ability seem traceable to differences in the “prosocial” skills of managers.
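For readers who want the mechanics, here is a two-way fixed-effects setup along these general lines (a sketch of the standard specification, with notation of my own choosing, not necessarily the exact equations in the cited papers):

\[
y_{it} = \alpha_i + \theta_{m(i,t)} + X_{it}'\beta + \varepsilon_{it},
\]

where y_it is the productivity of worker i in period t, α_i is a worker fixed effect, θ_m(i,t) is the fixed effect of the manager supervising worker i at time t, and X_it holds controls. The movement of workers across managers, as described above, is what separately identifies the θ terms; the 90th-versus-10th-percentile comparisons quoted here are differences in these estimated manager effects.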

In the area of evaluating and monitoring, the research takes a certain need to limit cheating and shirking for granted, but focuses more broadly on how a manager can improve productivity. A fair process of evaluation can help by “providing workers with greater autonomy, enablement, and incentives for reaching prespecified outcomes, except in situations where a manager’s monitoring and supervision are required to check moral hazard. Much of what economists refer to as monitoring also falls under what practitioners refer to as performance management, highlighting contemporary organizations’ emphasis on using evaluations for the dual purpose of evaluation and professional development (i.e., identifying and training high-ability workers).”

In the area of allocating, economists are familiar with the idea of “good matches” between workers and jobs that happen through markets, but managers often face the challenge of matching existing workers within the firm to the tasks that need to be done. For example: “Using data featuring manager job rotations at a large multinational company, Minni (2023) finds that good managers, defined as those revealed to be good by quick subsequent promotion, more actively move their subordinates both laterally and vertically and enhance their productivity and future advancement. Adhvaryu et al. (2022a), using data from an Indian garment plant, find that the most attentive managers enhance productivity by reassigning workers in response to particulate matter pollution.”

Economists probably still focus more on buying and selling within markets than on what happens inside firms, but digging into the inner workings of firms is becoming more common. This makes sense. Herbert Simon (Nobel 1978) wrote an essay on “Organizations and Markets” for the Journal of Economic Perspectives (where I work as Managing Editor) back in 1991, in which he argued for the importance of looking inside the organizations of firms with a (to me) memorable metaphor. Simon wrote:

A mythical visitor from Mars, not having been apprised of the centrality of markets and contracts, might find the new institutional economics rather astonishing. Suppose that it (the visitor—I’ll avoid the question of its sex) approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible.

No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within the green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of “large green areas interconnected by red lines.” It would not likely speak of “a network of red lines connecting green spots.” …

When our visitor came to know that the green masses were organizations and the red lines connecting them were market transactions, it might be surprised to hear the structure called a market economy. “Wouldn’t ‘organizational economy’ be the more appropriate term?” it might ask. The choice of name may matter a great deal. The name can affect the order in which we describe its institutions, and the order of description can affect the theory. In particular, it may strongly affect our choice of the variables that are important enough to be included in a first-order theory of the phenomena.