Carbon Capture and Storage: Where It Stands

The group of technologies that go under the name of “carbon capture and storage” involves capturing carbon dioxide and storing it, either where it is emitted or by pulling it directly out of the air. Oddly enough, some interest groups who express the strongest concerns about the risks of climate change often make ambivalent or even negative comments about this technology, I suspect because they fear that a belief that it may soon be cheap and easy to pull carbon out of the air would diminish support for other methods of reducing carbon emissions. But that’s a political judgement, not an environmental one. The more serious forecasts for how to reduce carbon in the atmosphere instead emphasize that a number of changes are likely to be useful, including a limited role for carbon capture and storage where it can be cost-effective.

For those looking to get up to speed, the Congressional Budget Office provides an overview of US efforts in “Carbon Capture and Storage in the United States” (December 2023), while for an international perspective, the Global CCS Institute has published its annual “Global Status of CCS Report 2023.”

Carbon capture and storage projects can be broadly divided into two categories: in the first, some carbon is captured where it is being emitted at electricity-generating plants and industrial facilities. The other category, direct air capture, would just pull carbon out of the air.

The CBO report notes that the US now has 15 experimental carbon capture and storage projects in operation. All of them capture carbon at plants where it is being emitted, and they typically provide the carbon dioxide to oil companies that, in turn, inject it into oil wells to push more oil to the surface. The costs of reducing carbon emissions by this method vary considerably: as the figure shows, it’s a relatively cheap method when applied to natural gas processing or to production of ethanol or ammonia.

However, the industries where carbon capture and storage is relatively cheap are also relatively small in size. Looking ahead, the question is whether the technology develops to a point where it can be applied to the larger sectors, like industrial processes and electric power generation.

What about the idea of “direct air capture” of carbon? For the near-term and probably also the medium-term, this technology isn’t nearly ready for prime-time. The CBO writes:

The cost to capture CO2 is greater using DAC [direct air capture] than using CCS [carbon capture and storage] because the concentration of carbon dioxide is much lower (more diluted) in the atmosphere than in energy- or industrial-sector emissions. In addition, the pressure of the exhaust gas from those sectors is often higher than the pressure of the atmosphere, making CO2 in that exhaust easier to capture. According to the International Energy Agency, the cost to capture a metric ton of CO2 using DAC ranges from $135 to $345, compared with $15 to $120 for CCS in various industrial settings. Because DAC is a more experimental process than CCS, estimates of its costs are more uncertain. Other analysts estimate that costs are likely to be much higher—$600 to $1,000 per metric ton, or more—over the next decade.

The annual report from the Global CCS Institute provides a detailed overview of carbon capture and storage projects around the world, which are expanding rapidly, along with related issues like transportation of carbon dioxide, availability of financing, and government permitting processes. I was interested to notice that the US is encouraging some of the world’s largest “direct air capture” efforts. The report notes:

The US: Construction commenced on the first large-scale DAC [direct air capture] project, STRATOS, and operations are planned to start in 2025. The project aims to capture up to 500,000 tonnes of CO2 per year. … One DAC [direct air capture] project is in construction and two more are being developed in the US. DAC projects are supported by the billions of dollars flowing from the US government into research and development, through both the DAC Regional Hubs funding opportunity and the DOE Carbon Negative Shot, which aims to reduce carbon removal costs to $100/net tonne. In August, the US Department of Energy announced up to $1.2 billion funding to advance the development of two commercial-scale direct air capture facilities in Texas and Louisiana.

As noted before, I’m skeptical about the prospects for direct air capture in the near or medium term. But one practical advantage of this approach is that facilities for direct air capture could be located in places where storage of carbon dioxide would be especially safe and cheap–thus minimizing the need for transportation.

The Merger Surge of 2021 and 2022

The Hart-Scott-Rodino Antitrust Improvements Act of 1976 requires that all mergers above a certain size–now $101 million–must be reported to the federal government before they occur. This gives the authorities at the Federal Trade Commission and the Antitrust Division at the US Department of Justice a chance to challenge mergers, if and when warranted. The law also requires an annual report on the state of antitrust enforcement, and the report for fiscal 2022 has just been published.

One part of each annual report just provides a count of the mergers that were reported, the size of the mergers, and how many were challenged. Here’s a figure from the Hart-Scott-Rodino Annual Report Fiscal Year 2022, showing the boom in mergers in 2021 and 2022.

Here’s the size distribution of the reported transactions: 611 of the deals were more than $1 billion and another 643 were between $500 million and $1 billion. (The total here differs slightly from the figure above, because in a few cases, both parties to a merger file a report, and the double-reported mergers are not included in this table.)

Of the 3,152 mergers reported to the antitrust authorities in 2022, how many were challenged? Given the tone of arguments about antitrust, you might assume the number is pretty high. But as the report notes:

During fiscal year 2022, the Commission brought 24 merger enforcement challenges: eleven in which it issued final consent orders after a public comment period; seven in which the transaction was abandoned or restructured as a result of antitrust concerns raised during the investigation; and six in which the Commission initiated administrative or federal court litigation. The 24 merger enforcement challenges the Commission brought in fiscal year 2022 is the second highest figure in the last ten years.

This relatively low number seems reasonable to me, in the sense that companies proposing a merger know that the antitrust authorities will be taking a look, and so become less likely to propose egregiously anticompetitive mergers. Also, the job of the antitrust regulators is not to second-guess whether a proposed merger is a wise business decision, but only to assess whether it will have anti-competitive effects.

Each year, the report also does a bit of gentle bragging about the cases it has won–either in court, or as part of a consent decree (that is, the firms agreed to certain changes or to selling off certain parts of the firm before the merger), or because a merger was dropped when it was challenged. For example:

In January 2022, the Commission issued an administrative complaint and authorized staff to seek a preliminary injunction to prevent Lockheed Martin’s proposed acquisition of Aerojet. The complaint alleged that this proposed vertical merger would likely allow Lockheed to harm rival defense contractors by cutting them off from Aerojet’s critical components needed to build competing missiles. Shortly after the Commission filed its complaint, the parties abandoned the transaction. This lawsuit represented the first time in decades that the Commission had sought to outright block a defense industry transaction.

In February 2022, the two largest healthcare systems in Rhode Island, Lifespan and Care New England Health System, called off their merger after the FTC, in conjunction with the Rhode Island Attorney General, sought to block the merger. On the same day in June 2022, the Commission voted to block two proposed hospital mergers: HCA’s acquisition of Steward Health Care System and RWJBarnabas’s acquisition of Saint Peter’s Healthcare System. Both of these acquisitions were later abandoned. The Commission will continue to identify and aggressively challenge hospital mergers that threaten access to critical healthcare services. …

One of the Division’s most notable successes was its efforts to block Penguin Random House’s proposed purchase of a major publishing rival, Simon & Schuster. The merger, if completed, would have eliminated competition that had led to higher advances, better services, and more favorable contract terms for authors trying to sell their work. The merger also jeopardized the breadth, depth, and diversity of written work by authors. The Division filed suit to block the merger in November 2021; after a thirteen-day trial in August 2022, the U.S. District Court for the District of Columbia found that the proposed acquisition violated Section 7 of the Clayton Act based on the harm it would cause to a specific class of workers, in this case authors.

The government antitrust authorities at the US Department of Justice and the Federal Trade Commission have also just released the final version of the Merger Guidelines. As the website notes, the Guidelines “describe factors and frameworks the Agencies often utilize when reviewing mergers and acquisitions,” and “are a non-binding statement that provides transparency on aspects of the deliberations the Agencies undertake in individual cases under the antitrust laws.” The draft version of the guidelines released last summer engendered considerable controversy, as I noted here, here, and here. Between the ongoing wave of merger activity and the more pugnacious stance of the current antitrust authorities as expressed in the guidelines, antitrust topics are going to make some news in 2024.

When Fewer People Answer Surveys, What Should Government Statisticians Do?

Back in 1790, when Congress was arguing about the process for the first Census, one argument was that the Census should limit itself to counting heads, for purposes of determining how many representatives each state should receive. But James Madison argued that it was important to seize the opportunity of the Census to gather additional information, as he put it, “in order that they might rest their arguments on facts, instead of assertions and conjectures.”

I suppose there was a time, pre-internet, when being surveyed by a government agency or a political campaign might have felt like the exciting part of your day. Those days are long behind us, and as people become less willing to answer survey questions, the database for understanding our society and policy choices diminishes.

Here’s a figure from the US Census Bureau about the decline in response rates to surveys over just the last decade. For the uninitiated, ATUS is the American Time Use Survey; CE is the Consumer Expenditures Survey, which can be carried out either by people keeping a diary or by an interviewer; CPI-Housing is a survey for measuring what people spend on housing for inclusion in the Consumer Price Index; CPS is the Current Population Survey, used for many purposes, including measurement of employment; and TPOPS is the Telephone Point of Purchase Survey used to gather consumption data.

The obvious problem is that lower response rates are probably not randomly distributed across the population: as a result, lower responses mean that the numbers will include a potentially changing degree of bias over time.

What might be done? As one step, the Census Bureau is experimenting with survey methods that can be conducted via the internet, rather than via interviewers. But an array of other approaches are under consideration, including use of “administrative” data that the government collects for an array of other purposes: Social Security, K-12 attendance and grades, unemployment insurance data, Medicare and Medicaid data, and others. There are also efforts to see if private-sector sources of data should be incorporated. Of course, new data sources may not be directly comparable to earlier data, which raises additional issues. But government spending on data collection is about 0.18% of the federal budget: to be clear, that’s not 18%, but less than one-fifth of 1 percent. We could jack that spending all the way up to a hefty, say, 0.25% of total spending, and it would still be indistinguishable from rounding error in the federal budget.

Interview with Angus Deaton: Critiques of Cosmopolitan Prioritarianism and Randomized Control Trials

David A. Price of the Richmond Fed carries out an interview titled “Angus Deaton: On deaths of despair, randomized controlled trials, and winning the Nobel Prize” (Econ Focus: Federal Reserve Bank of Richmond, Fourth Quarter 2023, pp. 18-22). Here are a few of Deaton’s comments that caught my eye:

On his shift from “cosmopolitan prioritarianism” to “domestic prioritarianism”

If you try to find out what an economist believes philosophically, they will say it’s utilitarianism. … And so there’s a widespread belief in economics that poorer people deserved our attention more than less poor people, because an extra dollar given to someone who is really poor would do more good than an extra dollar going to someone who already had plenty. Philosophers nowadays call that “prioritarianism,” meaning people who have the lowest level of well-being are the ones who deserve the most at the margin.

The other dimension is “cosmopolitan,” meaning you apply this idea across the whole world, without paying attention to national boundaries. Many do seem to embrace cosmopolitan prioritarianism, in which you metaphorically line up everybody in the world from worst off to best off and you prioritize the people at the bottom — without regard to where they are.

I certainly believed this for a long time, and I spent many years consulting for the World Bank where this view was strongly held. But I now think it’s wrong for a number of reasons … One of them is that national boundaries really do matter. … We accept obligations for other people in our country, which we don’t accept for other people outside the country. So whether we like it or not, we’re locked in this tangle or this system of reciprocal obligation … Our fellow countrymen, whether we care for them because we feel like them or not, we have a responsibility for in terms of our taxes and welfare systems, such as they are, and so on. So that’s part of it.

The other part is my suspicion, and this is deeply controversial, that some of the poorest people in America are every bit as poor in terms of overall well-being as the people in Africa or India or wherever the aid agencies like to hold up in front of us. And again, that’s not just money. It’s living in a functional society with societal supports. For instance, if you read some of the ethnographic literature about the Mississippi Delta, there are horrible things going on there in people’s lives. I don’t know how to estimate those in terms of numbers, because we don’t have very good tools for that. But I do challenge the idea that there’s no global poverty in America. So I am increasingly drawn to a form of domestic prioritarianism in which I worry a lot about others in my country who have the least.

On his doubts about the research methodology of randomized control experiments:

In the old days, we used to say here’s a regression and here’s a bunch of regression diseases. There’s a bunch of randomized controlled experiment diseases, too, which can get in the way.

People seem to think if you randomize — if you have two groups picked at random and one gets the treatment and one doesn’t — they say the only difference between the two groups is the treatment. But it’s dead wrong. When I used to teach this class, I would say, if I pick one of you at random with my eyes shut, and I pick another one with my eyes shut, does that make you identical? Of course not. You could argue that’s a large-sample or small-sample thing: If you pick a million people at random, then on average, they’re going to be the same in the two groups. And that’s true. But we don’t know how big it really has to be. And a lot of the experiments are pretty small. So it could be that the two groups you’re looking at are different at random but still different.

The other thing is that randomization can’t control for things that are the same in the two groups. That’s the external validity issue. One of my co-authors in the field of randomized controlled trials, the philosopher Nancy Cartwright, has an example that I like to give. There is famous work that Ed Miguel and Michael Kremer did on worms and deworming. They gave deworming pills in Kenya, and the kids who got the deworming pills did much better in school. Nancy lives in Oxford, and she said, “I have my granddaughter living with me and she’s not doing very well in school, so now I know what I should do, which is I should give her deworming pills, right?” But somewhere between Kenya and Oxford, the pills stop working.

So then, why and where? Of course, what’s on the line is there has to be worms or there has to be lack of sanitation or people are not wearing shoes or something, which is never in the experiment, because everybody in the experiment doesn’t have shoes. Or everybody in the experiment is walking around in an unsanitary field or something, and that’s not what you get in Oxford, so it’s not going to work there. But you have to know what these conditions are if you’re actually going to use those results. So sometimes these little experiments are not much more than anecdotes. You don’t really know what to take away from them. To paraphrase Bertrand Russell, you need a deeper view of the structure of reality.

Is Healthcare Spending Leveling Out at Last?

Is the rise in US health care spending slowing down? For perspective, health care spending as a share of GDP had been rising steadily over the half-century prior to 2010: 5% of GDP in 1960, 6.9% in 1970, 8.9% in 1980, 12.1% in 1990, 13.3% in 2000, and 17.2% in 2010. US health care spending then reached a peak of 19.5% of GDP in 2020.

However, 2020 was the first year of the pandemic. The most recent figures on US health care spending show that it was 17.3% of GDP in 2022, similar to the average of the pre-pandemic years from 2016-2019.

Understanding health care spending matters. After all, back in 1960 health care spending was one-twentieth of the US economy; now, it’s closer to one-fifth. For workers whose employers pay for their health insurance, it has been common over the years to see pay “raises” in the form of more costly health insurance coverage, rather than equivalently higher take-home pay. For government, the soaring costs of health care programs like Medicaid and Medicare are a substantial part of what is driving higher long-run budget deficits.

For an overview of the just-released data on US health care spending in 2022, the baseline starting point is “National Health Care Spending In 2022: Growth Similar To Prepandemic Rates,” by Micah Hartman, Anne B. Martin, Lekha Whittle, Aaron Catlin, and The National Health Expenditure Accounts Team (Health Affairs, published online December 13, 2023, forthcoming in January 2024 issue).

The authors point out that in 2022, unlike in so many earlier years, inflation in other goods and services was faster than inflation in health care, which explains the lower ratio of health care spending to GDP in 2022.

In 2022, nominal GDP growth was largely driven by rapid economywide inflation (unlike in 2021), as the GDP price index increased 7.1 percent (the fastest rate since 1981), after growth of 4.6 percent in 2021. Medical price inflation, in contrast, increased only 3.2 percent in 2022 after even slower growth of 1.5 percent in 2021. Inflation in the medical sector might not follow the patterns of the overall economy, as prices for some goods and services that are predominantly paid for by insurance (such as Medicare, Medicaid, and private health insurance) tend to be set in advance through legislation, regulation, or contractual agreements.

They also point out that the share of Americans with some form of health insurance reached an all-time high in 2022: “The insured share of the population reached a historic high of 92.0 percent in 2022 as enrollment in private health insurance increased at a faster rate relative to 2021 and Medicaid enrollment continued to experience strong growth.”

Will US health care spending as a share of GDP tend to remain where it is in 2022, or perhaps even decline a bit? In late October, the Economist magazine pointed out that the rise in health care spending seemed to be slowing down, not just in the US, but in high-income countries across the world. The article put forward various hypotheses: supply-side technology changes and government pressures for lower prices. For example:

The nature of technological innovation in health care may now be changing. One possibility is that there has been a generalised slowdown in treatments that represent medical breakthroughs and are costly, such as dialysis. But this is difficult to square with a fairly healthy pipeline of drugs coming to market. Another possibility, which is perhaps more plausible, is that the type of advancements has changed, involving a shift from whizzy curative treatments to less glamorous preventive ones. There is decent evidence that the increased use of aspirin, a very low-cost preventative treatment, in the 1990s has cut American spending on the treatment of cardiovascular diseases today. …

Demand-side factors may also be keeping health-care spending in check. In America the Affordable Care Act (ACA)—which was introduced in 2010, at about the time costs tailed off—tightened up the ways in which the government reimburses companies that provide treatment. The ACA also made it more difficult for doctors to prescribe unnecessary treatments (seven expensive scans, perhaps, instead of one cheap one) in order to make more money.

On the other side, the semi-official government predictions suggest that the rise in health care spending as a share of GDP is still proceeding. In a Health Affairs article back in June 2023, a team of government health care economists provided “National Health Expenditure Projections, 2022–31: Growth To Stabilize Once The COVID-19 Public Health Emergency Ends,” by Sean P. Keehan, Jacqueline A. Fiore, John A. Poisal, Gigi A. Cuckler, Andrea M. Sisko, Sheila D. Smith, Andrew J. Madison, and Kathryn E. Rennie.

These authors had health care spending data through 2021, and they forecast final 2022 spending quite accurately. However, their projection is for US health care spending as a share of GDP to rise gradually through the rest of the 2020s, reaching 19.6% of GDP by the end of their projection window in 2031. One of the big drivers of the shift is the rise in the number of elderly and very elderly Americans. This forecast suggests that the rise in US healthcare spending as a share of GDP paused in the 2010s but will resume in the 2020s.

But although the demographic patterns of an aging America are pretty much set in stone for the next couple of decades, the patterns of health care spending can be altered. As one example, I’ve argued in the past that although finding ways to support people in managing their chronic conditions (like high blood pressure and diabetes) has traditionally been outside what is regarded as “health care spending,” it could have large payoffs in terms of improved health and reduced need for expensive episodes of hospitalization (for discussion, see here, here, and here).

Caring about the Distant Future and Past: Social Discount Rates

When thinking about converting values from the distant past or the distant future to the present, the actual values themselves are often considerably less important than what is called the “discount rate”–that is, the annual percentage rate at which you do the conversion from past or future to the present. An example of converting future values into “what it would be worth today” is the debate over the costs that climate change may impose decades or a century into the future. An example of converting a past value into “what it would be worth today” is the effort to put a value on the costs imposed on American slaves before the US Civil War.

In general, the problem is that applying a percentage growth rate over long periods of time can lead to counterintuitive conclusions. As one example, consider the old story that the Dutch purchased Manhattan from the local Native American tribe for $24. Here, let’s set aside the details about the actual transaction, and whether it was in the form of guilders or trade goods. (For details, this useful article argues that the purchase price was equal to about 3-4 months wages for a skilled artisan in Holland, or what would have been paid for 30 beaver skins.)

For my purpose, the key question here is: “What is the $24 (or whatever the relevant amount was) that was paid 400 years ago worth in today’s currency?” For simplicity and clarity, let’s say that the Dutch paid an amount that we will just set equal to 100.

Well, if we apply an interest rate of 0% over the intervening 400 years, that amount of 100 paid for Manhattan would still be 100 today.

If we apply an interest rate of 1% per year over the intervening 400 years, that amount of 100 paid for Manhattan would be (about) 5,300 today.

If we apply an interest rate of 3% per year over the 400 years, that amount of 100 paid for Manhattan would be (about) 13.6 million today.

And if we apply an interest rate of 6% per year over 400 years, that amount of 100 paid for Manhattan would be about 1.3 trillion today.

This example should illustrate that in thinking about converting the amount paid for the island of Manhattan 400 years ago to a modern value, the actual amount paid is almost irrelevant to the conversation. The original amount paid 400 years ago could be one-tenth as much or 10 times as much, but what really makes the difference in the conversion to a modern-day value is the annual interest rate you choose to apply. In addition, the example emphasizes that relatively small differences in interest rates, even just a percent or two per year, can lead to really dramatic differences in value when compounded over long periods of time.
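A few lines of Python can reproduce this compounding arithmetic (a sketch; 100 is just the normalized starting amount used above, and the rates are the illustrative ones from the example):

```python
# Compound a starting amount of 100 over 400 years at several annual rates.
def compound(principal, rate, years):
    return principal * (1 + rate) ** years

for rate in (0.00, 0.01, 0.03, 0.06):
    # Yields roughly 100; 5,300; 13.6 million; 1.3 trillion.
    print(f"{rate:.0%} per year -> {compound(100, rate, 400):,.0f}")
```

Running this makes the point vividly: moving the annual rate from 1% to 6% changes the answer by eight orders of magnitude.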

Consider a more hot-button example: What is the value of the wages not paid to American slaves from 1776 to 1860, converted to a modern value? Thomas Cramer carried out a set of calculations under various scenarios. In one of these scenarios, as a commenter notes, projecting his estimate of historical costs forward to 2009 at an annual rate of 3% leads to a total value of $14.2 trillion; however, projecting the same costs forward to 2009 at an annual rate of 6% leads to a total value of $7 quadrillion.

For perspective, US GDP is now about $26 trillion. A proposal to pay slavery reparations of $14 trillion is thus at least at the edge of conceivable, if spread out over time. However, a cost of $7 quadrillion for slavery reparations would require paying the equivalent of all of current US GDP for the next 260 years, which is not conceivable in practical terms.

Again, the choice of how to translate a value from the distant past into present-value terms is not primarily about the historical cost, but about the interest rate that one chooses to adjust the past value to the present. As Cramer wrote in his 2015 article: “Debt estimates over long periods of time are extremely sensitive to the choice of interest rate … Thus, settling on an interest rate would likely represent the most important topic of political negotiations in any reparations debate.”

The example of translating a future cost into a present cost which comes up most often these days relates to climate change issues. Back in 2006, the very eminent Nicholas Stern published what became known as the “Stern review” on The Economics of Climate Change. Soon after, the also very eminent William Nordhaus published a review of the book. Stern and Nordhaus are both prominent economists arguing that the risks of climate change are large and real and need policy action. However, they differ in the discount rate they would apply to future harms: Stern argued for a discount rate of 1.4%, while Nordhaus argued for a rate of around 4.5%.

The percentage difference may seem small, but remember that these are annual percentage rates, so when projected out over long periods of time, they make a huge difference.

If you are calculating how much to pay in the present to avoid a cost of $1 that will be incurred 100 years from now, and you apply an annual rate of 1.4%, the answer is about 25 cents. That is, $0.25 × (1 + 0.014)^100 ≈ $1.

However, if you apply an annual rate of 4.5%, then the answer of how much to pay in the present to avoid a cost of $1 that will be incurred 100 years from now is about 1.2 cents. That is, $0.012 × (1 + 0.045)^100 ≈ $1.

The Trump administration proposed applying a discount rate of 7%. With that discount rate, the answer of how much to pay in the present to avoid a cost of $1 that will be incurred 100 years from now is about one-tenth of a cent. That is, $0.001 × (1 + 0.07)^100 ≈ $1.

Notice that even if there is no difference at all in the estimates of the future costs of climate change a century from now, the choice of discount rates leads to dramatic differences in how much we should be willing to spend in the present to prevent those costs. The 1.4% rate implies spending 20 times as much as the 4.5% rate. The 7% rate suggests spending close to nothing.
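The three present-value calculations above can be sketched the same way (again, only the rates already quoted; nothing here is beyond simple discounting arithmetic):

```python
# Present value today of avoiding a $1 cost incurred 100 years from now,
# at each of the three discount rates discussed above.
def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

for rate in (0.014, 0.045, 0.07):
    # Roughly $0.25, $0.012, and $0.001 respectively.
    print(f"discount rate {rate:.1%} -> pay about ${present_value(1.0, rate, 100):.4f} today")
```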

The Environmental Protection Agency has recently been updating its estimates of the “social cost of carbon”–that is, the cost of emitting a ton of carbon into the atmosphere when all future environmental costs are taken into account. As I wrote in an earlier post: “The current EPA estimate of the social cost of carbon is $190 per metric ton, using a 2% discount rate. But to give a sense of how much the discount rate matters, the cost estimate would be $120/ton with all the same underlying estimates and a discount rate of 2.5%, but $340/ton with all the same estimates and a discount rate of 1.5%.” Notice how much difference moving the discount rate by just half-a-percent makes!

To repeat the earlier lessons, the choice of how to translate a value from the distant future into present-value terms is not primarily about the estimates of future cost, but about the interest rate that one chooses to adjust the future values back to the present. Even small differences in the interest rate that is used will lead to dramatic differences in the policy prescriptions.

So what discount rate is the correct one to choose? This question is both of central importance and brutally hard. As a useful starting point, the most recent Annual Review of Financial Economics has a four-paper symposium, along with an introduction, on the issues of choosing an appropriate discount rate. The papers are:

I can’t hope to review the papers here. Instead, I’ll just offer a list of themes and questions that seek to highlight some of the issues involved.

1) The choice of an appropriate social discount rate is a bear-trap of complexities. It will depend on interest rates, technological change, judgements about risk, choices about intergenerational fairness, and other factors. The choice might reasonably differ across specific issues.

2) Those who are focused on costs imposed in the past often prefer to apply a high discount rate–so that when projecting costs from the past up to the present, these past costs will look larger. Conversely, those who are concerned about taking steps in the present to avoid costs that may arise in the future tend to prefer a low discount rate–so that the costs from the future will look larger in the present, as well. Taking these together, a common implication is that the present generation should bear both high costs for long-ago past injustices and also high costs for long-distant future benefits.

3) Thinking about questions of fairness in the present–what do those who have more money and wealth owe to those with less money and wealth–is difficult. Questions of fairness and equity become even harder when they reach across multiple generations. Almost none of the people who would be affected by climate change in 100 years are even born yet. Those who suffered deep and grave injustices a century or more ago are dead and gone.

4) A common belief is that social costs should be imposed more heavily, where possible, on the better-off. Thus, an issue with fairness choices across multiple generations is that, for the last couple of centuries, incomes have been rising in high-income countries. Given the long-term real growth rate of 2% for the US economy, the average person in the present has about 7 times as much income as someone from 100 years ago–and the present also has a much longer life expectancy and access to a much wider array of technologies. It’s likely that people who are living 100 years from now will have much higher incomes, life expectancies, and technologies as well. Thus, imposing costs on the present generation for the benefit of those living 100 years from now will, in all likelihood, be a transfer from the relatively poor to the relatively rich.
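The “about 7 times” figure is just compound growth; here is a one-line check, assuming a steady 2% real growth rate:

```python
# How 2% annual real growth compounds over a century: a back-of-the-envelope
# check of the "about 7 times" income multiple in the text.
growth_rate = 0.02
years = 100
multiple = (1 + growth_rate) ** years
print(f"After {years} years at {growth_rate:.0%} growth: {multiple:.1f}x income")
```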

5) Basing the discount rate on interest rates might work fairly well when costs are, say, a decade or two in the past or a decade or two in the future–but it works much less well when the past or future events are many decades or centuries away. Lucas writes in her paper in the symposium:

For policies with long-term impacts, intergenerational concerns become paramount, projections of cash flows and discount rates become highly uncertain, and present value calculations are an intrinsically unreliable measure of value. No approach to discount rate selection can overcome those problems; alternative decision criteria need to be established. However, most government investments involve much shorter horizons, and the adoption of standard approaches to risk adjustment could significantly improve social welfare.

6) In thinking about how to project future costs back to what we should be willing to spend in the present, it seems important to remember that climate change is not the only issue here. Public resources are limited, and a standard guideline is to spend where the social rate of return, broadly understood, is highest. For example, how much should society be willing to spend now on young children to improve their chances of being healthy and productive adults some decades in the future? How much should we be willing to spend now to reduce the risks of future pandemics (naturally occurring or human-created) or nuclear wars that might arise in future decades? How much should society be willing to spend now to reduce the risks from solar storms, a supervolcano, or the Earth being hit by a wayward comet some decades in the future?

7) I have occasionally seen interest groups who support aggressive spending against climate change argue that the idea of a discount rate is more-or-less just an excuse for inaction, and the appropriate discount rate for the future is zero percent. Frankly, I’m not sure these folks have thought through the implications of their position. A discount rate of zero implies that all future costs should be weighted as if they are occurring in the present. But is it really worth spending an equal amount to save the life of an existing person as it is to spend that amount to save the projected life of someone living 50 or 100 or 1000 years from now? We can have reasonable arguments about what discount rate makes sense in what context, but a zero discount rate that wipes out all distinctions between present, near future, and distant future is not sensible.

8) For a sense of the difficulties that arise if you wipe out the distinctions between present and future costs, consider the court case back in 2004 about storage of nuclear waste. Government regulators argued that they had projected 10,000 years ahead, and the risks of storing nuclear waste over that time interval were acceptable. The court ruled that looking “only” 10,000 years ahead wasn’t enough. But even a tiny positive discount rate will compound, over 10,000 years or more, to a point where the present value of taking action is essentially zero. The court was, in effect, arguing for a discount rate of zero percent. (Viscusi discusses this issue in the symposium.)
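The compounding point is easy to verify: even a discount rate of a tenth of one percent makes a cost 10,000 years out almost vanish in present value. A sketch with illustrative numbers:

```python
# Present value of a $1 million cost 10,000 years out, at tiny discount rates.
# Illustrative only; the point is the order of magnitude, not the exact figure.
def present_value(future_cost, rate, years):
    return future_cost / (1 + rate) ** years

for rate in (0.001, 0.01):
    pv = present_value(1_000_000, rate, 10_000)
    print(f"discount rate {rate:.1%}: present value ${pv:.2f}")
```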

9) The expected path of future technology should affect one’s willingness to spend in the present as well. Imagine that we knew there was a high chance that new technology would provide batteries for electric cars in the next 5-10 years that used many fewer scarce resources, as well as being longer-lasting and easier to recharge. In that scenario, pushing everyone to buy today’s technology would be wasteful. Indeed, as Viscusi points out, if one assumes a discount rate of zero for the future, it can, perhaps counterintuitively, make sense to defer immediate spending on future problems, and instead use the better technology that will emerge.

10) For the record, none of this discussion is intended to express an opinion on appropriate current policy for climate change. However, I will say that, in my own mind, some of the strongest arguments for taking action to reduce fossil fuel emissions have to do with immediate and near-term costs of air pollution on human health. As the environmental economists say, there could be a “double dividend” from reducing the use of fossil fuels, in both immediate and long-term gains.

11) In thinking about how to project past values to the present, it also seems important to remember that the injustice under discussion is never the only one. If one thinks about past American injustices, for example, certainly slavery would loom large. But treatment of, say, Native Americans would also loom large, as would discriminatory events and practices against other ethnic groups and against women. One might then need to add other economic, social, legal, and even foreign policy injustices. What the present owes to the past is a wide-ranging topic. It’s fine to argue that one might apply different interest rates to different examples.

12) My own bias, for what it’s worth, is that achieving some understanding of the past is enormously important. But I’m mostly not a fan of taking long-ago past events and turning them into present-day conflicts. There are a lot of places around the world where some groups of people continue to bear deep and even violent grudges against other groups of people for injustices that occurred between these groups literally centuries ago. My preference is for current generations to be given a clean slate. In effect, I would argue for a very low or perhaps even a negative interest rate to be applied to the past.

13) For the record, my unwillingness to project costs of the distant past into the present doesn’t mean that one should ignore problems of the present! For example, some prominent proposals that the US should pay black Americans some form of reparations for slavery have shifted away from basing their argument on costs from the past, projected up to the present. Instead, their argument for reparations is based on the current black-white wealth gap, or the current socioeconomic gap between families of black and white children. This approach has the advantage of avoiding the social discount rate altogether. It also shifts the discussion, in a subtle way, from being directly about “reparations for slavery” to focus instead on current inequalities.

14) There is a way for current generations to spend money, and then have future generations repay the loans. As a thought experiment, imagine a form of deficit spending in which no repayments are made for, say, 20 or 50 or 100 years. (The realistic version of this borrowing is that we just keep rolling over debts and interest as they grow, decade after decade.) Will the generations that eventually face repaying these debts be glad that we borrowed this money, or not? If the borrowed funds are spent on infrastructure, human capital, and long-term environmental preservation, and future generations are better-off than we are today, then such borrowing will look like a wise choice. But from the standpoint of 20 or 50 or 100 years into the future, our announced intentions with regard to the future will not matter much. Those generations will be able to judge our decisions by the actual consequences.

The Learning of US 15 Year-Olds in International Context: 2022 PISA Scores

The Program for International Student Assessment, commonly known as PISA, tests 15 year-olds (that is, mostly 10th-graders in the US) on reading, mathematics, and science literacy. It is done about every three years, and coordinated across 81 countries by the OECD. It’s high-level data for cross-country comparisons of what students know. The 2022 results are out. Here are some headline measures of US results in international context.

The blue dots and lines show actual US results and scores. The black line shows a trend-line for US results based on those scores. The orange line is the average for 23 OECD countries (that is, typically other high-income countries around the world).

As you can see, US scores are a mixed bag: pretty much flat in reading, trending down in math, and trending up in science. Compared to the other OECD nations, the US scores are higher in reading, lower in math, and used to be lower but now are higher in science. The international comparisons also suggest a substantial drop-off in scores for other countries across the board, which seems to have started before the pandemic, and which allows the US scores in reading and science to look better. I do not know of a good theory as to why schools across the rest of the high-income countries have declined in performance.

The two volumes of reports and underlying data, which look in more detail at issues like the distribution of top-level and bottom-level performers across countries, along with socioeconomic and gender differences, are available at https://www.oecd.org/publication/pisa-2022-results/index#pisa2022results. Here’s one of many interesting figures. The horizontal axis measures per capita GDP (converted at purchasing power parity exchange rates, which adjust for differences in price levels across countries), while the vertical axis measures mathematics scores.

In this type of figure, what’s interesting is to look at countries that are substantially above or below the curve–thus, performing substantially better or worse than one might expect given their per capita GDP. The US is near the middle of the graph, underperforming the OECD average and lower than one would expect based on its per capita GDP. Those who are significantly outperforming what one would expect, based on per capita GDP, include a number of east Asian nations, ranging from Vietnam (on the left) to Japan and Korea (closer to the center) and Chinese Taipei, Macao, Hong Kong, and Singapore. Other out-performers in the middle of the figure include a cluster of eastern and central European countries, like Estonia, Latvia, Poland, Slovenia, and the Czech Republic.
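For readers curious about the mechanics, the over/under-performance comparison amounts to fitting a curve of scores against (log) per capita GDP and looking at the residuals. Here is a minimal sketch with hypothetical data, not the actual PISA numbers:

```python
import math

# Hypothetical data, for illustration only: country -> (GDP per capita, math score).
data = {
    "A": (12_000, 470), "B": (35_000, 480), "C": (50_000, 505),
    "D": (65_000, 475), "E": (45_000, 530),
}

# Ordinary least squares of score on log(GDP per capita).
xs = [math.log(gdp) for gdp, _ in data.values()]
ys = [score for _, score in data.values()]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

# A country's residual: how far above or below the fitted curve it sits.
for name, (gdp, score) in data.items():
    residual = score - (intercept + slope * math.log(gdp))
    print(f"{name}: residual {residual:+.1f}")
```

Countries with large positive residuals are the “outperformers” discussed above.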

Financial Literacy: Still Low

There’s literacy and numeracy, and somewhere in the intersection of the two lies “financial literacy.” If you mess up your monthly budget in a mild way, and end up eating beans and rice for a week, perhaps no great harm is done. But the consequences can be more extreme: if you don’t have the money to get your car fixed, so you lose your job; or you can’t pay the electric bill, and your power gets turned off; or you run up debt on your credit cards, and the payments start casting a shadow over your days. In practical terms, “financial literacy” means avoiding the financial choices that you will later regret.

In the Fall 2023 issue of the Journal of Economic Perspectives (where I work as Managing Editor), Annamaria Lusardi and Olivia S. Mitchell discuss “The Importance of Financial Literacy: Opening a New Field.” Lusardi and Jialu L. Streeter offer some additional evidence in “Financial literacy and financial well-being: Evidence from the US” in the Journal of Financial Literacy and Well-Being (published online October 5, 2023).

Surveys to measure financial literacy come in a variety of sizes, but the “Big Three” questions offer a well-tested if very basic approach. Here’s your very own financial literacy quiz:

Q1. “Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow?”

a) More than $102 (correct)
b) Exactly $102
c) Less than $102
d) Do not know
e) Refuse to answer

Q2. “Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After 1 year, how much would you be able to buy with the money in this account?”

a) More than today
b) Exactly the same as today
c) Less than today (correct)
d) Do not know
e) Refuse to answer

Q3: “Do you think that the following statement is true or false? ‘Buying a single company stock usually provides a safer return than a stock mutual fund.’”

a) True
b) False (correct)
c) Do not know
d) Refuse to answer

The first question checks understanding of interest rates; the second question is about inflation; and the third question is about diversification. Their recent survey results for the US population are 69% correct and 15% “don’t know” for the first question; 53% correct and 23% “don’t know” for the second question; and 41% correct and 45% “don’t know” for the third question. Overall, 28% of the respondents got all three questions correct, while 52% answered at least one of the three questions with “don’t know.”
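For anyone who wants to double-check the answers, the arithmetic behind the first two questions takes only a couple of lines:

```python
# Q1: $100 at 2% compound interest for 5 years.
balance = 100 * 1.02 ** 5
print(f"Q1: ${balance:.2f}")  # compounding pushes this above $102

# Q2: 1% interest against 2% inflation erodes real purchasing power.
real_factor = 1.01 / 1.02
print(f"Q2: purchasing power after 1 year = {real_factor:.4f} of today's")
```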

Lusardi and Streeter go through these results in more detail, with breakdowns by groups and questions. But perhaps the key points are that the results of the “Big Three” questions are a fair representation of the state of financial literacy revealed by longer and more detailed surveys, and that when it comes to financial choices like credit card debt, student loans, borrowing money for a house or car, having a nest egg for flexibility in dealing with unexpected expenses, making contributions to a retirement account, and many others, these kinds of results do not fill one with confidence.

My own belief is that pretty much everyone can grasp these kinds of concepts, so that they could answer the questions correctly even if some may have a harder time than others in putting the knowledge into practice. In the last decade or so, financial literacy courses have become more widespread in high schools: 30 states now require a financial literacy course for high school graduation. Such courses are also becoming more common at the college level, sometimes just for general knowledge, sometimes as part of a program for those looking at jobs where becoming a “certified financial planner” is useful. The workplace is another obvious nexus for offering such courses: any workplace that can offer access to physical fitness programs should be able to find ways to offer a basic financial literacy course as well.

Lessons from Fighting 100 Inflations Since the 1970s

Inflation rates have come down since their peak in mid-2022. Does the Federal Reserve need to continue its inflation-fighting ways, keeping interest rates high? Anil Ari, Carlos Mulas-Granados, Victor Mylonas, Lev Ratnovski, and Wei Zhao of the IMF look to historical and international experience in “One Hundred Inflation Shocks: Seven Stylized Facts” (September 2023, WP/23/190).

As background, here’s a standard measure of inflation from the US Bureau of Labor Statistics, showing monthly data on the rise in the Consumer Price Index over the previous 12 months. You can see the peak at above 8% in mid-2022, and the more recent decline to the range of 3-4%.

The IMF authors are careful to note that they are just looking at patterns from past episodes: that is, they are not claiming that history will necessarily repeat itself. They describe the underlying data this way:

This paper identifies over 100 inflation shock episodes in advanced and emerging economies between 1970 and today. Over half of these episodes are linked to the 1973 and 1979 oil crises—large commodities-related, terms-of-trade and supply-side shocks—making them particularly insightful for today’s policy debates. The remaining inflation shocks in our sample have various origins, including demand surges and/or sizeable exchange rate depreciations.

Here are their seven lessons.

Fact 1: Inflation is persistent, especially after a terms-of-trade shock
We start the analysis by documenting how long it took to resolve inflation shocks historically, i.e., to bring inflation back to within 1 percentage point of its pre-shock rate. The results … caution against anticipating speedy disinflation. Only in under 60 percent of episodes in the full sample (64 out of 111) was inflation resolved within 5 years after a shock. Even then, disinflation took on average over 3 years.

By this metric, the lower rate of inflation in the last year or so has been relatively rapid. The current inflation rate of 3-4% is about 1-2 percentage points above the pre-shock inflation rates that had been running in the range of 2-3%.
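As I read it, the paper’s resolution criterion can be sketched as a simple function: an episode counts as resolved if inflation returns to within 1 percentage point of its pre-shock rate within some window. The episode below is hypothetical:

```python
# A sketch of the resolution criterion as described in the paper: inflation is
# "resolved" if it returns to within 1 percentage point of its pre-shock rate.
def resolved_within(pre_shock, post_shock_rates, window=5, tolerance=1.0):
    """Return the first year (1-indexed) that inflation is back within
    `tolerance` points of `pre_shock`, or None if not within `window` years."""
    for year, rate in enumerate(post_shock_rates[:window], start=1):
        if abs(rate - pre_shock) <= tolerance:
            return year
    return None

# Hypothetical episode: 2.5% pre-shock inflation, spiking to 9% then falling.
print(resolved_within(2.5, [9.0, 6.5, 4.0, 3.2, 2.8]))  # resolved in year 4
```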

Fact 2. Most unresolved inflation episodes involved “premature celebrations”

In about 90 percent of unresolved episodes (42 out of 47 in the full sample, and 28 out of 32 during the 1973–79 oil crises), inflation declined materially within the first three years after the initial shock, but then either plateaued at an elevated level or re-accelerated. One possible explanation for “premature celebrations” relates to base effects. As the factors behind the initial inflation shock recede (e.g., energy prices revert, alleviating the terms-of-trade shock), headline inflation may decline temporarily despite sticky underlying inflation. Another possible explanation relates to inconsistent policy settings, such as premature policy easing in response to declining inflation …

Fact 3: Countries that resolved inflation had tighter monetary policy

The IMF authors emphasize here: “The key finding is that the successful resolution of inflation shocks was associated with more substantial monetary policy tightening.”

Fact 4. Countries that resolved inflation implemented restrictive policies more consistently over time

In addition to tighter macroeconomic policies per se, the policy stance in countries that resolved inflation was maintained more consistently over time …

Fact 5. Countries that resolved inflation contained nominal exchange rate depreciation

Our data demonstrate that countries which successfully resolved inflation better maintained nominal exchange rate (ER) stability …

Exchange rate issues are less important for the enormous US economy, with its relatively low ratio of trade to GDP compared to many other countries.

Fact 6. Countries that resolved inflation had lower nominal wage growth

We also analyze labor market outcomes, specifically wages. Due to lack of historical data on wages, and for this result only, we use the full sample of episodes during 1973–2014. We document that, in countries which resolved inflation, nominal wage growth moderated after the inflation shock.

This finding will be unhappy news for wage-earners. After a burst of inflation, we would all like it if our wages would also rise, so that we can catch up with the higher price levels. But lower overall wage growth is what tends to keep inflation tamped down.

Fact 7. Countries that resolved inflation experienced lower growth in the short term but not over the 5-year horizon

The pandemic recession is its own distinctive economic event, so I wouldn’t want to overinterpret these historical patterns in terms of policy advice. But some warnings here are worth noting. Have a consistent monetary policy. Beware of “premature celebrations” that inflation is over. Recognize that even if fighting inflation involved short-term losses in economic output, such losses are typically not lasting or permanent.

Whither the UK Economy?

In my experience, discussions of the UK economy almost immediately jump to the “Brexit” question, or whether it was wise for the United Kingdom to leave the European Union. But the Brexit vote was in 2016, and problems with the UK economy are apparent in the data well before that vote. Ending Stagnation: A New Economic Strategy for Britain (December 2023, Resolution Foundation & Centre for Economic Performance) is one of those reports that draws on multiple study groups and background papers, in an attempt to build some consensus on an underlying narrative.

We [that is, the United Kingdom] were catching up with more-productive countries like France, Germany and the US during the 1990s and early 2000s. But that came to an end in the mid-2000s and our relative performance has been declining ever since, reflecting a productivity slowdown far surpassing those seen in similar economies. Labour productivity grew by just 0.4 per cent a year in the UK in the 12 years following the financial crisis, half the rate of the 25 richest OECD countries (0.9 per cent). The UK’s productivity gap with France, Germany and the US has doubled since 2008 to 18 per cent, costing us £3,400 in lost output per person. …

Weak productivity growth has fed directly into flatlining wages and sluggish income growth: real wages grew by an average of 33 per cent a decade from 1970 to 2007, but this fell to below zero in the 2010s. In mid-2023 wages were back where they were during the financial crisis. 15 years of lost wage growth has cost the average worker £10,700 a year. …

While Britons have been living with stagnant wages for the last 15 years, high inequality has been a problem for more than twice as long. Having surged during the 1980s, and remained consistently high ever since, income inequality in the UK is higher than any other large European country. … This toxic combination is a disaster for low-to-middle income Britain and younger generations. We might like to think of ourselves as a country on a par with the likes of France and Germany, but we need to recognise that, except for those at the top, this is simply no longer true when it comes to living standards.
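A quick check of the compounding in the quoted growth rates, 0.4% a year for the UK versus 0.9% for the 25 richest OECD countries, over the 12 post-crisis years:

```python
# Compounding the quoted post-2008 productivity growth rates over 12 years.
uk = 1.004 ** 12
oecd = 1.009 ** 12
print(f"UK cumulative productivity growth: {uk - 1:.1%}")
print(f"OECD-25 cumulative productivity growth: {oecd - 1:.1%}")
```

Small annual differences open into a substantial cumulative gap, which is the report’s core point.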

Indeed, one can make a plausible case that the UK combination of stagnant growth and high inequality fed the political pressures for the Brexit vote. Here, I won’t work through the details of the nearly 300-page report, but instead offer some of the figures that caught my eye.

UK productivity fell behind France and Germany in the 1980s, but while those countries are now close to US levels (albeit based on fewer hours worked per person), the UK economy has not been closing the gap with the US.

Average weekly earnings in the UK are below the peak they reached in 2008.

The gap in average incomes between London and the other nations/regions of the UK has been rising.

Slower growth means fewer resources for many purposes. For example, waiting lists for the National Health Service have more than doubled since 2014–a change that started well before the pandemic.

Rates of fixed investment in the UK are at the bottom end of the range for advanced economies.

Other advanced countries are gradually expanding the area of their “built-up land,” but not the UK.

The grim news in the report goes on and on: shortcomings of public infrastructure, energy costs, pensions, housing prices, support for the working poor, and so on. The report discusses in some detail how Brexit isn’t helping the UK economy, either, but the country’s economic issues clearly run a lot deeper. One source of comfort for the UK: Sure, we’re not keeping up with Germany and France, but Italy is worse-off!