Advice from Charlie Munger

The legendary investor Warren Buffett each year publishes a letter to the shareholders of Berkshire Hathaway, both reporting returns from the previous year and also offering reflections on past investment decisions and philosophies. In this year’s just-published letter, Buffett quotes some advice from his long-time partner Charlie Munger. Buffett writes:

Charlie and I think pretty much alike. But what it takes me a page to explain, he sums up in a sentence. His version, moreover, is always more clearly reasoned and also more artfully – some might add bluntly – stated. Here are a few of his thoughts, many lifted from a very recent podcast:

  • The world is full of foolish gamblers, and they will not do as well as the patient investor.
  • If you don’t see the world the way it is, it’s like judging something through a distorted lens.
  • All I want to know is where I’m going to die, so I’ll never go there. And a related thought: Early on, write your desired obituary – and then behave accordingly.
  • If you don’t care whether you are rational or not, you won’t work on it. Then you will stay irrational and get lousy results.
  • Patience can be learned. Having a long attention span and the ability to concentrate on one thing for a long time is a huge advantage.
  • You can learn a lot from dead people. Read of the deceased you admire and detest.
  • Don’t bail away in a sinking boat if you can swim to one that is seaworthy.
  • A great company keeps working after you are not; a mediocre company won’t do that.
  • Warren and I don’t focus on the froth of the market. We seek out good long-term investments and stubbornly hold them for a long time.
  • Ben Graham said, “Day to day, the stock market is a voting machine; in the long term it’s a weighing machine.” If you keep making something more valuable, then some wise person is going to notice it and start buying.
  • There is no such thing as a 100% sure thing when investing. Thus, the use of leverage is dangerous. A string of wonderful numbers times zero will always equal zero. Don’t count on getting rich twice.
  • You don’t, however, need to own a lot of things in order to get rich.
  • You have to keep learning if you want to become a great investor. When the world changes, you must change.
  • Warren and I hated railroad stocks for decades, but the world changed and finally the country had four huge railroads of vital importance to the American economy. We were slow to recognize the change, but better late than never.
  • Finally, I will add two short sentences by Charlie that have been his decision-clinchers for decades: “Warren, think more about it. You’re smart and I’m right.”

And so it goes. I never have a phone call with Charlie without learning something. And, while he makes me think, he also makes me laugh.

I will add to Charlie’s list a rule of my own: Find a very smart high-grade partner – preferably slightly older than you – and then listen very carefully to what he says.

Clean Energy Industrial Policy: The US and EU Clash

The Inflation Reduction Act, signed into law by President Biden in August 2022, is actually a mixture of tax, healthcare, and clean energy policies. Here, I’ll focus on the last category. It represents a belief that industrial policy can work when it comes to clean energy: that is, large subsidies targeted at a specific industry can both accelerate the development of a new and healthy sector of the US economy and reduce carbon emissions. David Kleimann, Niclas Poitiers, André Sapir, Simone Tagliapietra, Nicolas Véron, Reinhilde Veugelers, and Jeromin Zettelmeyer compare the US policy to pre-existing European policies in “How Europe should answer the US Inflation Reduction Act” (Bruegel, February 2023). Here are some takeaways.

The clean energy subsidies enacted by the Inflation Reduction Act will catch the US up to the level of subsidies that are already available across EU countries in some areas, but not others.

The authors divide up the new US clean energy subsidies into three categories. First, there is a tax credit of up to $7500 for consumer purchases of electric cars. However, this tax break is hedged around with requirements about how much of the car must be made in the US, as well as limits on the income of those receiving the tax credit. Second, there are subsidies for producers of “batteries, wind turbine parts and solar technology components, as well as for critical materials like aluminum, cobalt and graphite.” As one example, a “mid-sized 75kWh battery for an EV would receive $3,375 in subsidies, equivalent to roughly 30 percent of its 2022 price.” Third, there are subsidies for producers of carbon-neutral electricity. This includes solar and wind power, but also hydrogen, “clean fuels (such as renewable natural gas),” and nuclear power.
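A quick back-of-the-envelope check on that battery example, using only the numbers quoted above (a minimal sketch; the implied per-kWh figures are my arithmetic, not numbers from the Bruegel paper):

```python
# Back-of-the-envelope check on the Bruegel battery example,
# using only the numbers quoted above.
battery_kwh = 75          # mid-sized EV battery pack
subsidy_total = 3_375     # dollars of production subsidy, per Bruegel
subsidy_share = 0.30      # roughly 30 percent of the 2022 price

subsidy_per_kwh = subsidy_total / battery_kwh             # -> $45/kWh
implied_pack_price = subsidy_total / subsidy_share        # -> $11,250
implied_price_per_kwh = implied_pack_price / battery_kwh  # -> $150/kWh

print(f"Subsidy per kWh: ${subsidy_per_kwh:.0f}")
print(f"Implied 2022 pack price: ${implied_pack_price:,.0f} (${implied_price_per_kwh:.0f}/kWh)")
```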

There are lots of details surrounding these rules, and I won’t try to do justice to them here. The authors cite overall estimates from the Congressional Budget Office that the cost will be $400 billion over 10 years–but they also warn that this cost estimate is based on underlying estimates about the extent to which people and firms will take advantage of these subsidies. Comparisons between the US and the different subsidies across EU countries are also necessarily imprecise. But the authors offer this chart:

In other words, the new US subsidies for electric cars and clean-tech manufacturing are similar to what already prevails in the European Union. The new US subsidies for renewable energy remain MUCH lower than similar subsidies in the EU.

One key difference between the US clean energy subsidies and the European approach is that the US approach includes “local content” requirements, which among other issues violate the fair trade rules that the US has long advocated for the World Trade Organization.

“Local content” requirements are politically popular everywhere: after all, they restrict tax breaks to domestic producers. That’s also why such rules are generally prohibited by World Trade Organization agreements. But first under President Trump, and now under President Biden, the US is showing that in decisions about tariffs and subsidies, it feels comfortable flouting those rules. The authors describe the specific local content rules in the Inflation Reduction Act like this:

The $7500 consumer tax credit applies only to electric cars with ‘final assembly’ in North America (the US, Canada or Mexico). In addition, half of the tax credit is linked to the origin of batteries and the other half to that of raw materials used in the electric cars. To obtain either half, a minimum share of the value of battery components (presently 50 percent) or critical minerals (presently 40 percent) needs to come from the US or countries with which the US has a free trade agreement (presently 20 countries). These thresholds will increase by about 10 percentage points per year. In addition, from 2024 and 2025, any use of batteries and critical minerals from China, Russia, Iran and North Korea will make a vehicle ineligible for the tax credit.

Renewable energy producers are eligible for a ‘bonus’ subsidy linked to LCRs [local content rules]. If the steel and iron used in an energy production facility is 100% US-produced and manufactured products meet a minimum local-content share, the subsidy increases by 10 percent, with the required local-content share rising over time. A similar bonus scheme conditional on local-content shares applies to investment subsidies for energy producers.
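To see how the consumer-credit mechanics quoted above might play out over time, here is a minimal sketch. The eligibility function, the assumed caps on the rising thresholds, and the even $3,750 split of the $7,500 credit are simplifications built only from the figures in the quoted passage; the actual statutory schedule differs in its details.

```python
# Hypothetical sketch of the two-part $7,500 consumer EV credit described
# above: each half requires a minimum local-content share that starts at
# 50% (battery components) or 40% (critical minerals) and rises by roughly
# 10 percentage points per year. The caps below are assumptions.

def ev_credit(year, battery_share, minerals_share, final_assembly_na=True):
    """Rough eligibility check. Ignores income caps and the 2024/2025
    exclusions for batteries and minerals from China, Russia, Iran,
    and North Korea described in the quoted passage."""
    if not final_assembly_na:
        return 0
    battery_threshold = min(0.50 + 0.10 * (year - 2023), 1.00)   # cap assumed
    minerals_threshold = min(0.40 + 0.10 * (year - 2023), 0.80)  # cap assumed
    credit = 0
    if battery_share >= battery_threshold:
        credit += 3_750   # the battery-components half
    if minerals_share >= minerals_threshold:
        credit += 3_750   # the critical-minerals half
    return credit

print(ev_credit(2023, battery_share=0.55, minerals_share=0.45))  # 7500
print(ev_credit(2025, battery_share=0.55, minerals_share=0.45))  # 0: thresholds rose
```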

Local content rules are also a bit paradoxical. Presumably the reason such rules are needed is that, without them, a substantial share of the clean energy subsidies would flow to producers in other countries, because those producers would be providing products with the combination of price and quantity preferred by US consumers and firms. The Inflation Reduction Act is thus based on a claim that clean energy subsidies are badly needed for environmental reasons–but also that clean energy goals aren’t quite important enough to justify importing needed goods.

Will the clean energy industrial policy work?

The authors of this paper assert with some confidence that the US and EU industrial policies with regard to clean energy will work: that is, they will both build up local producers of clean energy–presumably to a point where they no longer need to rely on government subsidies–and also will reduce carbon emissions.

The future is of course unpredictable, but I am dubious that the US clean energy industrial policy subsidies are likely to be very effective. First, the US clean energy subsidies are an all-carrot, no-stick policy. They hand out subsidies, but do not impose, say, additional limits or costs on carbon emissions. Second, the US industrial policies are focused on current tech, not future tech. As the Bruegel authors write: “in the clean-tech area, the IRA [Inflation Reduction Act] focuses mostly on mass deployment of current generation technologies, whereas EU level support tends to be more focused on innovation and early-stage deployment of new technologies.” Third, tying the subsidies to local content rules will be a disadvantage for US producers in the clean energy arena, compared to producers in the European Union and other places who do not need to follow such rules.

Finally, government industrial policy to advance technology tends to work best when it is tied to concrete goals. For example, the incentives to produce COVID vaccines were linked to the vaccines actually being produced. In South Korea’s successful industrialization strategy several decades ago, government subsidies were linked to whether the firm was successfully exporting to the rest of the world–and the subsidies were cut off if the target level of exports wasn’t reached. But when industrial subsidies are just handed out, it’s a fairly common pattern for people and firms to soak up the subsidies, without much changing. Those who follow these issues will remember prominent examples like Solyndra, the solar energy company that burned through about a half-billion dollars in federal loan guarantees about a decade ago, or going back further, the “synfuels” subsidies that failed to deliver fuel alternatives back in the 1980s. I hope that I’m wrong about this, and that this time around, US industrial policy aimed at clean energy will be a big success. But I’m not optimistic.

How WWII Reduced US Productivity

There’s a conventional story that World War II was a boost for the US economy, both providing a burst of aggregate demand that ended the Great Depression, and also establishing the basis for several decades of postwar prosperity in US manufacturing. Alexander J. Field begs to differ. He lays out his case in “The decline of US manufacturing productivity between 1941 and 1948” (Economic History Review, published online January 16, 2023).

Of course, total manufacturing output did rise substantially during World War II, but “productivity” for economists doesn’t refer to total output. Instead, productivity refers to output per hour worked–or more generally, output relative to given inputs of labor and capital. With that in mind, you can get some of the flavor of the larger perspective of Field’s argument by taking a look at this data on the US manufacturing sector from 1929 to 1948. As you see, everything is expressed relative to 1929: that is, the level of all the variables is set at 100 for 1929. The first column is labor productivity, or output per hour. The second is “total factor productivity,” a more complex (although by no means truly difficult) calculation that is output per combined inputs of labor and capital. The third column is total manufacturing output; the fourth column is hours worked in the manufacturing sector, and the last column is stocks of capital in the manufacturing sector.
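To spell out the two productivity measures before turning to the table: labor productivity is just the output index divided by the hours index, while total factor productivity divides output by a weighted combination of labor and capital inputs. Here is a minimal sketch; the Cobb-Douglas aggregator and the 0.7 labor weight are illustrative assumptions, not Field’s exact method.

```python
# Illustrative calculations on 1929 = 100 index numbers. The Cobb-Douglas
# aggregator and the 0.7 labor weight are assumptions for illustration;
# Field's own TFP calculation may weight the inputs differently.

def labor_productivity(output_idx, hours_idx):
    """Output per hour worked, as an index (1929 = 100)."""
    return 100 * output_idx / hours_idx

def tfp(output_idx, hours_idx, capital_idx, labor_share=0.7):
    """Total factor productivity: output per combined unit of labor and capital."""
    combined = (hours_idx ** labor_share) * (capital_idx ** (1 - labor_share))
    return 100 * output_idx / combined

# Rough 1933 values implied by the text: output halved, hours down 40%,
# capital stock little changed. (Illustrative, not Field's actual table.)
print(labor_productivity(50, 60))  # ~83: output per hour falls substantially
print(tfp(50, 60, 95))             # ~73: TFP falls as well
```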

Before looking at the World War II years, just run your eyes over the 1930s for a moment to get a sense of how this works–and perhaps to reset your perceptions of what happened during this time. Manufacturing output falls by half during the Great Depression from 1929 to 1933. The stock of capital equipment doesn’t fall by much during this time: after all, the equipment may have been used less or left idle, and some of it would wear out, but the equipment itself was mostly still there. Hours worked in manufacturing falls 40% from 1929 to 1933, which is less than the fall in output. Thus, output per hour worked, as shown in the first column, falls substantially from 1929 to 1933.

There is a common perception that the Great Depression lasted throughout the 1930s, before the economy was jolted out of the Depression by World War II spending. The table shows that this perception is untrue. Manufacturing output doubles from 1933 to 1937. The economy is then jolted by a sharp increase in interest rates by the Federal Reserve, leading to a steep recession in 1937-38, followed by a doubling of manufacturing output from 1938-1941. Readers will remember that while there is certainly fear of war in the late 1930s and early 1940s, the bombing of Pearl Harbor and the actual entry of the US into World War II doesn’t happen until December 1941.

As the US scrambled to reshape its economy to a wartime footing, output rises substantially. But there are also substantial jumps in labor and capital inputs. Thus, both labor productivity (output per hour worked of labor) and total factor productivity (output per unit of inputs including labor and capital) start to decline. Field describes the underlying process this way:

The [productivity] declines in 1942 reflect, above all else, the chaotic conditions associated with the changes in the product mix. Productivity took a huge hit as machinery to produce peacetime products made way for newly designed machine tools, and labour and management struggled to become proficient as they moved from making goods in which they had a great deal of experience to those in which they had little. Shortages, hoarding of inputs, and production intermittency plagued the war effort. The positive effects of learning by doing are evident in the change in both labour productivity and TFP growth between 1942 and 1943. They were nevertheless insufficient to compensate for the sharp drop during the previous year. Productivity resumed an accelerated decline in 1944, as a secondary round of major product changes kicked in, and was even more negative in 1945, due in part to the disruptions associated with demobilization. Partial recovery between 1945 and 1948 still left the TFP level in US manufacturing substantially below where it had been in 1941.

There is a conventional narrative about World War II that it was at least “good for the economy,” but that seems imprecisely put. It’s true that the US economy, with its high level of technical skills and extraordinary flexibility, was very good indeed for winning the war–which was clearly the highest priority at that time. But the war led to multiple dramatic disruptions in the US economy: restructuring to wartime production in multiple ways and times, labor shortages, supply shortages, and then a dramatic restructuring back to a peacetime economy.

Field offers a reminder that much of the capital investment made during World War II was useless at the end of the war.

With the temporary exception of B-29 bombers, most of the aircraft produced during the war were, at its conclusion, deemed surplus: obsolete or unneeded. Tens of thousands were flown to boneyards in Arizona: air bases such as Kingman and Davis-Monthan. … Some aircraft were flown directly from the factory gate to Arizona for disassembly and recycling. Many aircraft operating overseas were never repatriated. It was simply not worth the cost in fuel and manpower to fly them back to the United States so they could be scrapped. Similar fates befell Liberty ships (scrapped and recycled for the steel), tanks, and other military equipment including field artillery. …

There was indeed a huge investment in plant and equipment by the federal government. But the mass production techniques that made volume production of tanks and aircraft possible in the United States relied overwhelmingly on single- or special-purpose machine tools, and most of these tools and related jigs and frames were scrapped with reconversion. The United States did use multipurpose machine tools, which could more easily be repurposed, but this was principally in the shops producing machine tools. Already in 1944, the country confronted serious surplus and scrappage issues. By early 1945 disposal agencies had surplus inventories of roughly $2 billion – equivalent to the entire cost of the Manhattan Project. By V-J Day that had risen to US $4 billion, and ultimately to a peak of US $14.4 billion in mid-1946. …

Both public and private capital accumulation in areas not militarily prioritized had been repressed. Wartime priorities starved the economy of government investment in streets and highways, bridges and tunnels, water and sewage systems, hydro power, and other infrastructure that had played such an important role in the growth of productivity and potential output across the Depression years. These categories of government capital complementary to private capital grew at a combined rate of 0.15 per cent per year between 1941 and 1948, as opposed to 4.17 per cent per year between 1929 and 1941.

Portions of the private economy not deemed critical to the war effort also subsisted on a thin gruel of new physical capital. Trade, transportation, and manufacturing not directly related to the war are cases in point. Private nonfarm housing starts, which had recovered to 619,500 in 1941, still 34 per cent below the 1925 peak (937,000), plunged to 138,700 in 1944, barely above the 1933 trough of 93,000. All ‘nonessential’ construction in the country was restricted beginning on 9 October 1941, almost two months before the Japanese attack.

Of course, the US economy suffered a terrible loss of workers as a result of World War II. “As for labour, the immediate post-war impact of the war on potential hours was clearly negative: 407,000 mostly prime-age males never returned. Most would have been alive in the absence of the war. … There were another 607,000 military casualties. The 50 per cent wartime rise in female labour-force participation largely dissipated during the immediate post-war period.”

What about new technologies developed during World War II? As Field points out, the ability to run extraordinary assembly lines for making planes and ships was not a useful technology after the war ended. More broadly, he argues that World War II was more about taking advantage of technologies that had been developed earlier, not about the invention of technologies that would have lasting benefits in peacetime.

What of more general scientific and technological advance? Kelly, Papanikolaou, Seru, and Taddy digitized almost the entire corpus of US patent filings between 1840 and 2010, and analysing word counts, identified breakthrough patents: those that were novel at the time and influential afterwards. Such patents had low backward similarity and high forward similarity scores … Their time series of such patents shows a peak in the 1930s, particularly the first half of the decade, and a noticeable trough during the war years …

Much of what occurred during the war represented the exploitation of a preexisting knowledge base. In 1945, Vannevar Bush published Science, the endless frontier, a work often viewed as distilling the lessons and achievements of the war into an actionable blueprint for post-war science and technology policy. However, as David Mowery noted, ‘the Bush report consistently took the position that the remarkable technological achievements of World War II represented a depletion of the reservoir of basic scientific knowledge’ …

It is of course impossible to re-run history and see what would have happened if the economic trends of the late 1930s had continued without the interruptions, costs, and material and human losses of World War II. But it is surely possible that the US would have had a stronger economy in 1950 if it had not suffered losses of a million killed and wounded, had not needed to shift its output first to a wartime and then to a peacetime footing, and had been able to pursue non-war science and technology. Field goes so far as to speculate: “From a long-run perspective, the war can be seen, ironically, as the beginning of the end of US world economic dominance in manufacturing.”

Given this kind of evidence, why is the belief in the economic benefits of World War II so deeply rooted? Field quotes Elizabeth Samet’s argument that there is “pernicious American sentimentality about nation” and that “we search for a redemptive ending to every tragedy.” There is surely some truth in this, but perhaps a simpler truth is that the losses and costs of World War II are too staggering to contemplate.

Qualms about Linking Executive Pay to Social Goals

Should the pay of top executives be linked not just to the performance of the company in the stock market or other quantitative/financial goals, but also to whether the company meets environmental, social, and governance (“ESG”) goals? Lucian A. Bebchuk and Roberto Tallarita raise some doubts in “The Perils and Questionable Promise of ESG-Based Compensation” (Journal of Corporation Law, Fall 2022).

Bebchuk and Tallarita focus on the actual behavior of the 97 US companies in the S&P 100–which together represent over half the total value of the US stock market. They write:

We found that slightly more than half (52.6%) of these companies included some ESG metrics in their 2020 CEO compensation packages. These metrics focus chiefly on employee composition and employee treatment, as well as customers and the environment, but also, to a much smaller extent, communities and suppliers. ESG metrics are mostly used as performance goals for determining annual cash bonuses. However, most companies do not disclose the weight of ESG goals for overall CEO pay, and those that do disclose it (27.4% of the companies with ESG metrics) assign a very modest weight to ESG factors (between less than 1% to 12.5%, with most companies assigning a weight between 1.5% and 3%).
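To put those disclosed weights in dollar terms, here is a minimal sketch; the size of the pay package and the 2 percent weight are hypothetical round numbers, not figures from the study.

```python
# Hypothetical illustration of how little pay rides on the ESG metrics
# described above. The pay package and the 2% weight are round-number
# assumptions, not data from the Bebchuk-Tallarita study.
total_ceo_pay = 20_000_000   # hypothetical annual CEO pay package
esg_weight = 0.02            # within the 1.5%-3% range most disclosing firms use

esg_linked = total_ceo_pay * esg_weight
print(f"Pay riding on ESG goals: ${esg_linked:,.0f}")                         # $400,000
print(f"Pay riding on everything else: ${total_ceo_pay - esg_linked:,.0f}")   # $19,600,000
```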

It is notable to me that when you hear a large company announce that it has tied executive bonuses to environmental, social, and governance goals, those goals typically determine only 1.5-3% of the CEO’s overall pay. Bebchuk and Tallarita look at what specific goals are mentioned in corporate reports:

Despite the potential richness and intricacy of a company’s stakeholders and their interests, ESG metrics used in the real world are inevitably limited and narrow. … Most companies use metrics linked to employee composition and employee treatment, and many use metrics connected to consumer welfare and environmental issues (especially carbon emissions and climate change). Very few companies, however, consider their impact on local communities, and only two companies use metrics linked to supplier interests.

Furthermore, with respect to each of these groups or interests, ESG metrics focus on a narrow subset of dimensions that are relevant for stakeholders. … [F]or each stakeholder group or interest, companies choose to give weight to specific dimensions that represent only part of what stakeholders care about. With respect to employees, for example, most companies choose goals related to inclusion or diversity, and many focus on work accidents and illness, but none incentivizes its CEO to increase salaries or benefits or to improve job security. With respect to community, many companies focus on trust and reputation, but almost none chooses incentives linked to reducing local unemployment or to distributing free products or services to disadvantaged residents. …

[S]takeholder welfare is multi-dimensional. However, some of these dimensions are easier to pin down and measure, while others, equally important, are difficult to measure. Consider, for example, the welfare of employees. Employees are interested in receiving a good salary, avoiding accidents and illnesses, and keeping their job: these goals are relatively easier to measure and assess. However, employees are also interested in being treated fairly, developing good professional relationships with supervisors and peers, growing professionally, and other factors that are very hard to measure. …

The narrowness of ESG metrics is an empirical fact and also a theoretical necessity. No compensation package could exhaustively identify and incentivize goals that address all of the interests and needs of all individuals and groups affected by a company’s activities. The very act of identifying a measurable goal and designing a metric to assess the achievement of that goal requires the choice of some specific dimension and measure and, therefore, the rejection of other potential dimensions and measures. Business leaders have embraced stakeholderism by promising win-win scenarios in which companies deliver value to shareholders and all stakeholders. The reality, however, is that companies choose only a few groups of core stakeholders and focus on a limited number of aspects of their welfare.

One response to these sorts of concerns is the “at least” defense. At least the firms are making a public statement in response to broader concerns. At least some firms may be trying a little harder along these lines. At least some broader social concerns might be addressed. Maybe the glass is only 1.5% to 3% full, but at least it’s not 100% empty. At least it’s a start.

However, a primary concern over executive pay for some decades now has been about overly cozy relationships between corporate executives and boards of directors, in which top executives get excessive pay over what their performance would actually deserve. Linking pay to stock prices or other financial performance is clearly not a perfect measure, but it’s at least an anchor for executive pay. If executives were to get a significant share of their pay based on the announced commitments they make about ESG concerns, or about subjective judgements on the extent to which they have achieved these goals, then it again becomes possible for cozy relationships between board members and executives to lead to higher pay: “Sure, the company had a dramatic decline in production and sales last year, but as a result, we also reduced carbon emissions, so the CEO gets a raise.” Moreover, if executives have incentives to re-jigger corporate resources toward more measurable social goals, but potentially at the expense of less measurable goals, there is no guarantee that the overall goals of making corporations more socially conscious will be met.

A similar set of themes arises in concerns over “Diversity Washing,” which refers to companies that make public announcements about their commitment to diversity but don’t actually do much about it. Andrew C. Baker, David F. Larcker, Charles McClure, Durgesh Saraph and Edward M. Watts discuss the issue in a European Corporate Governance Institute (ECGI) Finance Working Paper (No. 868/2023, January 2023).

The authors have detailed data on the gender and racial diversity of employees for over 5,000 US public companies. They also have company statements filed with the Securities and Exchange Commission that discuss diversity, equity, and inclusion policies (specifically, annual reports, current reports, and proxy statements). In addition, they have measures of firm misconduct related to diversity issues, as well as the rankings that firms have received for their diversity programs. Thus, they can look at whether firms that talk a good game about diversity actually walk the walk, and also whether firms that talk a good game get rewarded for what they say, rather than what they do.

It turns out that there is an overall positive correlation between the talking and the doing, but the correlation is a mild one, because of the firms that they call “diversity washers.” They write:

We provide large-sample evidence showing many firms have significant discrepancies between their disclosed commitments to diversity and their actual hiring practices. Consistent with such firms making misleading commitments to DEI, we find diversity washers have less workplace diversity, experience future outflows of diverse employees, and are subject to higher diversity-related fines. Despite these negative DEI outcomes, we show diversity washers receive higher ESG scores from commercial rating organizations and attract more investment from ESG-focused institutional investors, suggesting these disclosures mislead outside stakeholders and investors.

My focus here has been on practical problems that arise when corporations seek to put a priority on ESG goals. But at a deeper level, I have qualms over whether it is a good idea for corporations to have such goals. Different institutions are good for different purposes. We don’t expect hospitals to educate fourth-graders, we don’t expect universities to produce smartphones, and we don’t expect churches to install dishwashers. It seems to me quite possible to support the idea that corporations should be focused on earning profits, and also to support both government and nongovernment efforts to define and pursue environmental and social goals.

Lead: A Global Health Problem

One of the major US public health victories in recent decades was to get the lead out of gasoline and paint, and also to (mostly) shift away from lead in pipes that carried water–such that when lead is detected in the water supply in places like Flint, Michigan, or Jackson, Mississippi, it’s rightfully a scandal, and typically linked to the use of badly outdated pipes. But dealing with sources of lead exposure around the world is very much a slow-motion work in progress. Rachel Silverman Bonnifield and Rory Todd discuss the issue in “Opportunities for the G7 to Address the Global Crisis of Lead Poisoning in the 21st Century: A Rapid Stocktaking Report” (Center for Global Development, 2023).

They set the stage in this way (footnotes and references to figures omitted):

Lead poisoning is responsible for an estimated 900,000 deaths per year, more than from malaria (620,000) and nearly as many as from HIV/AIDS (954,000). It affects almost every system of the body, including the gastrointestinal tract, the kidneys, and the reproductive organs, but has particularly adverse effects on cardiovascular health. According to the World Health Organization (WHO), it is responsible for nearly half of all global deaths from known chemical exposures.

Despite this massive burden, the greater part of the harm caused by lead may come not through its effects on physical health, but its effect on neurological development in young children. The cognitive effects of lead poisoning on brain development are permanent, and most severe when lead exposure occurs between the prenatal period to the age of around 6 or 7. Even low-level lead exposure at this age has been conclusively shown to cause lifelong detriments to cognitive ability; though evidence is less definitive, there is also a very strong and compelling literature which links lead exposure to anti-social/violent behavior, attention deficits, and various mental disorders. An estimated 800 million children—nearly one in three globally, an estimated 99 percent of whom live in low- and middle-income countries (LMICs)—have blood lead levels (BLL) above 5 micrograms per deciliter (μg/dL), which the WHO uses as a threshold for recommending clinical intervention to mitigate neurotoxic effects. Effects on cognitive development have been demonstrated in BLLs significantly below this …

What are some of the main vectors through which people, especially in low- and middle-income countries, are exposed to lead? The first one discussed is recycling of lead-acid batteries:

While lead has a number of industrial applications, at least 80 percent now goes into the production of lead-acid batteries. … Manual destruction of batteries without protective equipment, uncontrolled smelting, and dumping of waste into waterways and soils are common. Studies show high blood lead levels in children living near lead battery manufacturing and recycling facilities and in workers, and high levels of airborne lead in battery facilities and acute exposure to workers and their families.

The market price of lead has roughly doubled in the last 20 years, as has global mining of lead (which is typically accompanied by extraction of zinc, but also other metals). There are some recent graphic examples. A lead mine in Zambia “led to universal lead poisoning among 90,000 local children.” A gold-mining operation in Nigeria “led to the deaths of more than 400 children from acute lead poisoning in the space of six months in 2010, as a result of workers grinding ores within villages.” Such lead pollution often continues to affect the local environment for decades after the mine is closed.

Perhaps the most unexpected source of lead poisoning is via lead that is added to spices, which are then shipped around the world.

[A]n increasing body of evidence points to lead-adulterated spices as a significant driver of widespread lead poisoning, particularly in South and Central Asia. For turmeric and other spices with bright yellow/orange colors, lead chromate is typically added during the polishing stage to increase pigmentation and reduce polishing time (which also increases the weight of the final product); the bright pigmentation characteristic of lead chromate is considered a sign of high quality, and adulteration therefore allows producers to command a higher price point for their products. Lead may also be inadvertently introduced in smaller concentrations to a broader range of spices—for example oregano, thyme, ginger, or paprika—via contaminated soil, airborne pollution, or cross-contamination at a factory, though this is likely to be a relatively small part of the overall problem.

Via global supply chains, contaminated spices can drive lead poisoning far beyond their countries of origin … In the US, where roughly 95 percent of spices are imported, a Consumer Reports investigation found detectable levels of lead or other heavy metals in one third of sampled spices. In New York City, investigations of elevated blood lead levels frequently identify lead adulteration in spices purchased abroad as a likely source, with the highest concentrations of lead found in spices from the countries Georgia, Bangladesh, Pakistan, Nepal, and Morocco …

Lead in paint is an ongoing issue as well.

Despite their danger being established for decades, lead paints remain legal in the majority of countries, and are widely used for residential coatings and decorative purposes in most LMICs [low- and middle-income countries], and for industrial purposes in many high-income countries. Lead additives are primarily in solvent-based paints, and may be added to paint to improve durability, drying capacity, and corrosion prevention, as well as in the form of pigments—especially lead chromate—to enhance color. It can cause occupational exposure as workers inhale dust during manufacture, application, and removal, or exposing their families through take-home contamination. Children are exposed primarily through ingestion of chips and dust, which can occur throughout the life cycle, but may be exacerbated as paint ages as well as during application and removal. Lead paint is an avoidable source of exposure, and there are safe and cost-effective alternatives to lead additives …

Cookware can be another source of lead exposure, especially when lead-based glazes are used on ceramics.

Lead-glazed ceramics are popular in central Mexico, where they are primarily produced by indigenous communities, and are commonly used in restaurants for cooking and serving. They have been identified as a primary cause of elevated blood lead levels in the country, where 22 percent of children aged 1 to 4 years have blood lead levels above 5 μg/dL. But they are also used elsewhere in Latin America, North Africa, and South Asia, and may be a significant source of exposure. … More recently, aluminum pots and other cookware produced from scrap metal—used by poor families in LMICs across all regions—have been found to frequently contain lead and other heavy metals …

There are other examples. The cosmetic called Kohl was traditionally made with lead. “While safe, lead-free substitutes exist, traditional leaded Kohl—with up to 98 percent lead content—is still common across the world, and frequently found in G7 member states … Lead is also found in other consumer goods, particularly toys and jewelry. Use of lead in toys is typically to add pigment/color, including via lead paint on surfaces and lead pigment in crayons, sidewalk chalk, and other art supplies …”

The immediate health effects of lead exposure are grim. The long-term effects on child development are worse.

Hammers and Nails: Central Banks and Inequality

There’s an old saying that “if your only tool is a hammer, then every problem looks like a nail.” It’s about the temptation to use the tool you have on every problem that comes up–whether the tool you have is actually appropriate for the problem. Hammers work well with nails: they aren’t so helpful with screws, or when trying to water the flowers.

The situation with central banks and economic inequality is a little different. In this case, some of those who are focused on inequality have wondered whether the central bank might offer an appropriate hammer for this particular nail. But in the just-published Winter 2023 issue of the Journal of Economic Perspectives, Alisdair McKay and Christian K. Wolf explain why this is a case of the tool not fitting the problem in “Monetary Policy and Inequality.”

Economic inequality exists for many reasons, of course. The question is whether or how actions by central banks–say, decisions to raise or lower interest rates–might change the level of economic inequality. Perhaps the main complication in the analysis here is that monetary policy will affect different groups in different ways. For example, if you have borrowed money with an adjustable-rate mortgage, you might benefit from lower interest rates. If you haven’t borrowed money, but instead were hoping to receive interest payments on past savings, then lower interest rates will hurt you. If you are unemployed or low-paid, then to the extent that lower interest rates can stimulate the economy and lead to more jobs and higher wages, you are better off. Lower interest rates also tend to encourage investors to scale back on investments that are linked to interest rates (like corporate bonds) and instead to shift over to stock markets. If you own stocks, you benefit from this effect; otherwise, not so much.

Thus, the challenge is to look at how different groups, defined by age and income, are likely to be affected by monetary policy–and in turn how that affects the extent of inequality. After working their way through the evidence, they argue:

On the one hand, the incidence of the individual channels of monetary policy transmission to households is quite uneven. For example, mortgage payments and stocks have much stronger effects at the top of the wealth distribution, while other debt services and labor income have stronger effects at the lower end. On the other hand, once aggregated across all channels, the overall consumption changes are much more evenly distributed. … While there are some differences across groups, we view them overall as relatively modest. … The key takeaway is that … expansionary monetary policy roughly scales up everyone’s consumption by the same amount as the aggregate, leaving each household’s share of total consumption approximately unchanged.

The “dual mandate” of the Federal Reserve’s monetary policy is to worry about output and jobs when those seem at risk, and to worry about inflation when it seems to be rising. If the Fed were to add inequality as another objective, then the central bank would have to address the question of whether it should, in some situations, allow either more inflation or higher unemployment in pursuit of fighting inequality. But the McKay and Wolf essay suggests that monetary policy does not lead to substantial shifts in inequality in the first place. Thus, when it comes to the nail of economic inequality, monetary policy is not the hammer you are looking for.

Clean Energy: What’s Your Bad Movie Scenario?

Thanks to the always thought-provoking Marginal Revolution website, I ran across an article about how to solve all of America’s energy issues with one giant mega-project: tap into the geothermal energy that is sizzling away under the Yellowstone “caldera”–which is the area left behind after a volcanic eruption. Or in the case of the Yellowstone caldera, it’s thought to be the aftermath of three gigantic eruptions that happened over the last 2 million years or so, with the most recent eruption perhaps 700,000 years ago. Thomas F. Arciuolo and Miad Faezipour describe their proposal in “Yellowstone Caldera Volcanic Power Generation Facility: A new engineering approach for harvesting emission-free green volcanic energy on a national scale” (Renewable Energy, October 2022, pages 415-425). The abstract sums it up:

The USA is confronted with three epic-size problems: (1) the need for production of energy on a scale that meets the current and future needs of the nation, (2) the need to confront the climate crisis head-on by only producing renewable, green energy, that is 100% emission-free, and (3) the need to forever forestall the eruption of the Yellowstone Supervolcano. This paper offers both a provable practical, novel solution, and a thought experiment, to simultaneously solve all of the above stated problems. Through a new copper-based engineering approach on an unprecedented scale, this paper proposes a safe means to draw up the mighty energy reserve of the Yellowstone Supervolcano from within the Earth, to superheat steam for spinning turbines at sufficient speed and on a sufficient scale, in order to power the entire USA. The proposed, single, multi-redundant facility utilizes the star topology in a grid array pattern to accomplish this. Over time, bleed-off of sufficient energy could potentially forestall this Supervolcano from ever erupting again. 

When I have mentioned this article to people in the last week or so, the usual response is that they start chuckling, and then say something like: “I’m pretty sure I’ve seen that movie!” You know, pretty much any movie where science or industry tinkers with planet earth and a volcanic eruption results.

But of course, the fact that there is a possible disaster scenario, suitable for special effects, doesn’t mean that an idea is a bad one. Indeed, it seems to me that pretty much all the clean energy scenarios can be turned into bad movies.

For example, if the US had followed the example of France and gone on a binge of building nuclear power plants back in the 1970s, maybe 80% of US electricity today would be coming from non-carbon sources, as it does in France. But nuclear power plants everywhere are another bad movie scenario, right?

There are proposals for “geoengineering” by putting particulates into the atmosphere in a way that would offset the effects of carbon emissions. But the possible unintended consequences of such a policy are yet another bad movie scenario.

There are proposals for full “electrification” of the US economy, run largely by renewable energy sources like wind and solar. But if such policies are going to replace fossil fuels, they will face movie scenarios of their own. Existing solar- and wind-power equipment will need to be multiplied many times over, and will take up vast swaths of land when installed. We will need to greatly expand and update the electrical grid. We will need to invent methods of mass storage of electricity for times that are dark or windless. The two possible storage technologies at present seem to be giant battery farms or giant facilities for hydrogen storage–neither of which is currently viable at large scale and both of which have safety hazards of their own. Many of these steps will require dramatic expansions of mining for raw materials from copper to lithium, energy-intensive manufacturing, and methods to deal with the resulting waste products. Movie scripts ranging from the heart-tugging to the catastrophic can be written about these policies, too.

And of course, not taking any of these steps runs the risk of its own bad movie scenario. In the short and medium run, burning fossil fuels for energy adds to conventional air pollution, one of the major health risks worldwide, and sometimes causes localized environmental disasters. Then, in the long run, the true pessimists about the risks of climate change turn out to have been right all along, and the earth experiences devastating shifts in sea levels, weather patterns, and temperature.

When weighed against the costs and tradeoffs of other potential disaster scenarios, the idea of tapping into the geothermal energy around the Yellowstone caldera starts to sound more plausible. Perhaps more to the point, there is no gentle transition to non-carbon energy sources, where it all happens with some tax credits for solar panels and electric cars and a dose of back-slapping good feeling. If the shift to noncarbon energy is going to happen within a few decades, it will involve truly dramatic changes in energy production and transmission, changes that have their own costs and environmental risks, both in the US and around the world. You have to choose which bad-movie scenario you prefer.

Mergers and Enforcement in 2021: Hart-Scott-Rodino

The first step in US antitrust enforcement is the requirement, under the Hart-Scott-Rodino Antitrust Improvements Act of 1976, that all mergers above a certain size–now $92 million–must be reported to the federal government before they occur. This gives the authorities at the Federal Trade Commission and the Antitrust Division at the US Department of Justice a chance to challenge mergers before they occur. How is that working out? The Hart-Scott-Rodino law also requires an annual report on the state of antitrust in the previous year, and the report for fiscal 2021 has just been published.

Here’s the headline graph showing the number of mergers reported to the federal government for each year in the last decade.

There was an enormous merger boom in 2021. But by the middle of 2022, when the stock market was flattening and dipping and interest rates were rising, mergers started slowing down.

In a market-oriented economy, it makes some sense that a lot of mergers should be allowed to proceed. Of course, private firms will sometimes make mistakes in merger decisions, just as they sometimes do in investment decisions, new product decisions, hiring and firing, and so on. But the firms and their managers are the ones closest to the ground with detailed information. There’s no reason to think that the government will be in a better position to figure out if a certain deal will improve a company’s efficiency or productivity. But if the merger threatens to injure consumers by limiting competition, antitrust authorities may have a role to play.

So out of the 3,520 mergers reported in 2021, how many would you guess were challenged by the antitrust authorities? The Federal Trade Commission challenged 18: five settled by consent orders (that is, the companies proceeded after adjusting the deal); seven in which the transaction was abandoned or restructured; and six that led to litigation. The Antitrust Division at the US Department of Justice challenged another 14 mergers: two led to lawsuits; nine to consent decrees; and three in which the transaction was restructured without a formal consent decree.
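The arithmetic on those counts is simple (a trivial check; the totals come straight from the report’s figures above):

```python
# Share of reported mergers challenged in fiscal 2021, from the counts above.
reported = 3_520
challenged = 18 + 14   # FTC challenges plus DOJ Antitrust Division challenges
print(f"{challenged} of {reported:,} mergers = {challenged / reported:.1%}")  # 0.9%
```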

Overall, 32 of the 3,520 reported mergers–less than 1%–were challenged, during a giant boom year for mergers. Even for someone like me, who believes that companies should often be allowed to proceed and to make mistakes, it’s not a big number.

Some of the mergers that were blocked seem like relatively straightforward cases. For example, Aon plc was blocked from acquiring Willis Towers Watson plc, which would have combined two of the three largest insurance brokers in the world. CoStar was blocked from acquiring RentPath; the two firms run major websites that match renters with apartments. Some hospitals in Memphis were blocked from merging.

But some of the more interesting cases in 2021 were situations in which, rather than two well-established firms merging, the case involved the antitrust authorities seeking to preserve possibilities for future competition. For example, Visa had proposed buying a company called Plaid. The antitrust authorities argued that Visa is effectively a monopolist in online debit card services, and while Plaid is currently a small firm, it has some possibility of becoming a future competitor. In another case:

Illumina’s $7.1 billion proposed acquisition of Grail, a maker of non-invasive, early detection liquid biopsy that screens for multiple types of cancer using DNA sequencing. Illumina was the only provider of DNA sequencing that is a viable option for these multi-cancer early detection (MCED) tests. The complaint alleged that the proposed merger would likely harm innovation in the market for MCED tests.

These antitrust efforts, which turn on possibilities of future competition or of harm to future innovation (after all, perhaps the combined company would have the resources to make a stronger innovative effort?), are a gray area in the law, but an area that the current antitrust authorities seem eager to pursue.

In the last week, the Federal Trade Commission lost a case to block Facebook from buying a company called Within, which is a virtual reality fitness startup. The argument from the FTC was that this merger could inhibit future competition in the market for virtual reality fitness apps. I have no strong opinion on the legalities of the ruling. I’ve read that this case was viewed as a borderline call, even within the FTC. But I will note that if, out of thousands of mergers per year, the antitrust authorities choose to focus their limited efforts and resources on competition within the market for virtual reality fitness apps, then they seem to be implicitly saying that anti-competitive concerns for the US economy as a whole are not especially severe.

Trade Sanctions: How Well Do They Work?

For a country with a large economy like the United States, trade sanctions have obvious attractions. They offer what looks like a muscular pursuit of foreign policy goals–that is, more than giving worthy speeches or recalling ambassadors from foreign embassies. They don’t involve declaring war or sending troops to fight. They can seem cost-free, in the sense that declaring that certain kinds of trade will be blocked doesn’t involve a direct budgetary cost. And trade sanctions appeal to protectionists, who view trade with suspicion anyway.

So what is the available evidence on trade sanctions and how well they work? Two papers take up this issue in the just-published Winter 2023 issue of the Journal of Economic Perspectives. T. Clifton Morgan, Constantinos Syropoulos and Yoto V. Yotov provide an overview in “Economic Sanctions: Evolution, Consequences, and Challenges,” while Marco Cipriani, Linda S. Goldberg and Gabriele La Spada focus on financial sanctions and the workings of the SWIFT system in “Financial Sanctions, SWIFT, and the Architecture of the International Payment System.” (Full disclosure: I’ve been the Managing Editor of JEP for 37 years, and thus am probably predisposed to think the articles are of interest. But no money is being made here. These articles, like all JEP articles back to the first issue, are freely available online courtesy of the American Economic Association.)

Syropoulos, Yotov, and a group of co-authors have created the Global Sanctions Data Base, which provides systematic data about 1,325 sanction cases during the period 1950–2022. A figure from the JEP paper shows the sharp rise in sanctions in the last couple of decades. In particular, financial and travel sanctions have risen especially quickly.

How well do the sanctions work? Here, Morgan, Syropoulos, and Yotov point out an interesting disjunction in how economists and political scientists answer this question. They write: “[E]conomists have tended to interpret ‘effectiveness’ in terms of the economic damage that sanctions cause, while political scientists have considered sanctions ‘effective’ only if they achieve their declared political objectives.” For example, economists tend to judge the recent sanctions on Russia in terms of how much they hurt Russia’s economy, while political scientists tend to judge sanctions in terms of whether they lead Russia to stop its war in Ukraine. In addition, the success rate of trade sanctions is doubtless linked to factors like whether they are internationally coordinated, how well they are targeted, and whether they are reinforced with other policies. Or perhaps sanctions in the past have only been adopted when they seemed especially likely to work, which means that their past performance cannot be casually extrapolated to future scenarios.

Morgan, Syropoulos, and Yotov work through these various issues and offer this thought: “[E]ven in their worst light, sanctions have been shown to be effective in a modest fraction of cases. Even a 25 percent success rate for sanctions may be considerably higher than doing nothing, and the costs may be substantially lower than other alternatives, like overt military interventions. Perhaps the ‘sanctions glass’ should be viewed as one-quarter full, not three-quarters empty.”

What about financial sanctions? In particular, the policy decision to cut off Russia from the SWIFT system received considerable attention, although even among economists, a fair number could not tell you in any detail–at least pre-2022–just what the SWIFT system actually did. Cipriani, Goldberg and La Spada discuss the history of financial sanctions, with some prominent examples of how they have worked or not in the past, and with some emphasis on the Society for Worldwide Interbank Financial Telecommunication, more commonly known as SWIFT.

When two banks need to transfer funds inside a country, they can do so through the central bank for that country–which is why the Federal Reserve is sometimes called the “bank for banks.” But most central banks (Switzerland is an exception) don’t facilitate transactions between domestic and foreign banks. Instead, there is a network of “correspondent” banks that operate in more than one country. If a Russian bank wants to transfer or receive funds with a bank in another country, it must typically operate through one of these correspondent banks, and messages must be sent back and forth between the banks to carry out the transaction.

Back in the 1950s, 1960s, and 1970s, those messages were typically sent by Telex (for those of you old enough to remember that term). But Telex messages had no special formatting and were comparatively high-cost. Thus, banks and bank regulators around the world set up SWIFT as a nonprofit financial institution based in Belgium–and thus directly under financial laws of Belgium and the European Union. The idea was to have a set of computer protocols for a wide array of financial transactions. The correspondent banks were still needed, but messages about international financial transactions could be sent quickly and efficiently. Here’s a figure showing the number of financial institutions connected to SWIFT and the number of messages sent over the system.

The key point to remember here is that SWIFT doesn’t actually move any money or hold any money. It is just a messaging system between banks. In addition, the computer protocols that SWIFT has created for financial transactions are in the public domain. If a financial institution wants to use those protocols to move money, but to send the messages outside of the SWIFT system, it is quite possible to do so. Thus, cutting off Russia’s access to SWIFT surely raises the cost to Russia of making international financial transactions, but it does not block the transactions themselves.
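To illustrate that separation of messaging from settlement, here is a toy sketch. The message fields, bank names, and functions are invented for illustration; they are not the real SWIFT or ISO 20022 formats.

```python
# Toy illustration of the architectural point above: the messaging layer
# (SWIFT's role) is separate from settlement (done on correspondent banks'
# books). All names, fields, and bank codes here are invented for
# illustration; this is not the real SWIFT or ISO 20022 format.

def send_instruction(network, sender, receiver, amount, currency):
    """The network only relays the instruction; no money moves here."""
    message = {"from": sender, "to": receiver,
               "amount": amount, "currency": currency}
    network.append(message)   # messaging is cheap, fast, and replaceable
    return message

def settle(ledger, message):
    """Settlement happens on a correspondent bank's books, not on the network."""
    ledger[message["from"]] -= message["amount"]
    ledger[message["to"]] += message["amount"]

network = []   # could equally be SWIFT, SPFS, CIPS, or even telex
ledger = {"BANK-A": 1_000_000, "BANK-B": 500_000}   # made-up accounts

msg = send_instruction(network, "BANK-A", "BANK-B", 250_000, "EUR")
settle(ledger, msg)
print(ledger)   # funds moved regardless of which network carried the message
```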

The SWIFT system is far and away the dominant system for sending messages between international financial institutions. But countries around the world have noticed that SWIFT is being used for trade sanctions–against Iran for a period of time in 2012, against North Korea in 2017, and now against Russia–and some of them are setting up alternative systems using the SWIFT protocols, systems that don’t do much now but could be ramped up if needed. As Cipriani, Goldberg, and La Spada point out, Russia has set up such a system:

Russia developed its own financial messaging system, SPFS (System for Transfer of Financial Messages). SPFS can transmit messages in the SWIFT format, and more broadly messages based on the ISO 20022 standard, as well as free-format messages. More than 400 banks have already connected to SPFS, most of them Russian or from former Soviet Republics. A few banks from Germany, Switzerland, France, Japan, Sweden, Turkey, and Cuba are also connected. By April 2022, the number of countries with financial institutions using SPFS had grown from 12 to 52, at which point the Central Bank of Russia decided not to publish the names of SPFS users. Due to its limited scale, SPFS mainly processes financial messages within Russia; in 2021, roughly 20 percent of all Russian domestic transfers were done through SPFS, with the Russian central bank aiming to increase this share to 30 percent by 2023 (Shagina 2021).

And so has China:

In 2015, the People’s Bank of China launched the Chinese Cross-Border Interbank Payment System (CIPS) with the purpose of supporting the use of the renminbi in international trade and international financial markets. In contrast to SWIFT, … CIPS is not only a messaging system but also offers payment clearing and settlement services for cross-border payments in renminbi. … [A]t the end of January 2022, there were 1,280 participants from 103 countries. Among the direct participants, eleven are foreign banks, including large banks from the United States and other developed countries. The system is overseen and backed by the People’s Bank of China. Similarly to Russia’s SPFS, CIPS uses the SWIFT industry standard for syntax in financial messages. Indirect participants can obtain services provided by CIPS through direct participants.

India is developing an interbank messaging system based on the SWIFT protocols as well, and there are reports in the business press of plans for Russia, China, and India to merge these systems. At present, however, SWIFT remains far and away the dominant system for sending international interbank messages.

Economic sanctions can impose costs. But whether they can achieve the desired political goals is a more complicated issue. The answer seems to be “sometimes,” but it depends heavily on specific circumstances and on the possibilities for evading the sanctions.

Hard and Soft Landings: The Federal Reserve’s Record

When the Federal Reserve raises interest rates to fight inflation, a “hard landing” refers to the possibility that inflation is reduced at the cost of a significant recession, while a “soft landing” refers to the possibility that inflation is reduced with only a minor recession–or perhaps even no recession at all. Perhaps the canonical example of a hard landing happened in the late 1970s and early 1980s, when the Fed under chair Paul Volcker broke the back of the inflation of the 1970s by raising interest rates, but at the cost of back-to-back recessions in 1980 and 1981-82.

What is the historical record of the Federal Reserve in raising interest rates and managing a soft landing? Alan S. Blinder tackles that question in the just-published Winter 2023 issue of the Journal of Economic Perspectives in “Landings, Soft and Hard: The Federal Reserve, 1965–2022.” (Full disclosure: I’ve been the Managing Editor of JEP for 36 years, so I am perhaps predisposed to find the articles of interest.)

From Blinder’s paper, here’s a figure showing the federal funds interest rate over time. There are some challenges in interpreting the figure when there are jagged jumps up and down in a short time, but Blinder argues that it is fair to read the historical record as involving 11 episodes since 1965 in which the Fed raised interest rates substantially.

What jumps out from the figure is that there are a number of times when the Fed raised interest rates and either the increase was not followed by a recession (1, 6, and 8), or the recession was very short (9), or the recession that followed was not caused by the higher interest rates (10 and 11). The Fed raising interest rates to nip inflation in the bud in 1994 (episode 8) is perhaps the best-known example of a landing so soft that a recession didn’t even occur. As another example, the Fed was gradually raising interest rates in the lead-up to the pandemic (episode 11), but the pandemic recession was clearly not caused by higher interest rates!

Here’s a table showing Blinder’s evaluation of the type of landing that followed each of the 11 episodes of monetary tightening. When he asks “was it a landing?”, he is raising the possibility that the higher interest rates didn’t actually bring inflation down at that time. For “would have been soft” (episode 7), Blinder argues that the Fed might have pulled off a soft landing with its interest rate increase in 1988-89, except for Iraq’s invasion of Kuwait in August 1990.

The Fed has raised its key policy interest rate (the “federal funds rate”) from near-zero in March 2022 to about 4.6%–with talk of additional increases to come. Based on the historical record, what insights are possible about whether a hard or soft landing is likely?

  1. There are clearly examples where higher interest rates from the Fed, in the service of fighting inflation, were followed by recessions.
  2. The Fed has faced two main challenges in the last few years: the pandemic recession, where the policy response led to a huge burst of disposable income along with supply chain problems, and then the Russian invasion of Ukraine in early 2022, which caused a new burst of higher prices for energy and food along with additional supply chain disruptions. History doesn’t offer do-overs. But it’s at least possible that the inflation which started in 2021 might have faded on its own if it had not been reinforced by the Russian invasion.
  3. One can argue that the last three recessions were not caused by higher Fed interest rates, but instead by the pandemic (2020), the implosion of financial instruments related to the housing price bubble (2007-2009), and the end of the dot-com boom in stock prices and investment levels (2001). Thus, perhaps the key question about the risks of a recession in 2023 may be less about interest rate policy and more about whether the US or world economy experiences a severe negative shock this year.
  4. Several of the Fed’s interest rate increases over time can be thought of as readjusting back to a more reasonable long-run level. For example, the rising Fed interest rates pre-pandemic were in some ways just based on a belief that the rate shouldn’t and couldn’t stay near zero percent forever. Or going back to Blinder’s episode 6, this was a time of chaotic shifts after a severe recession, with a plummeting price of oil and falling inflation, and the Fed at that time seemed to believe that it had gone a little too far in cutting interest rates, so it adjusted back. Part of the Fed increasing interest rates in the last year or so is surely a belief that although it made sense to take the policy interest rate down to zero percent in the pandemic recession, the rate shouldn’t and couldn’t stay there forever.
  5. Macroeconomics is hard because the key factors driving the economy shift over time. It’s not obvious, for example, that the same lessons which applied to the stagflationary period of the 1970s should apply equally well and in the same ways to the dot-com boom-and-bust of the 1990s, to the housing price bubble of the early 2000s, or to a short-and-sharp recession caused by a pandemic.
  6. Some of the arguments about inflation are really about momentum. When inflation rises, does it have a tendency to fade out? Or does it have a tendency to maintain the higher rate of inflation? Or does it have a tendency to build momentum, like a rock rolling downhill? Which scenario prevails probably depends on how the causes of the inflation are perceived; on the recent historical record and the credibility of the central bank in fighting inflation; and on what expectations firms, workers, and consumers have about future inflation. (The sketch after this list illustrates the three possibilities.)
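
As a stylized illustration of the three momentum scenarios in point 6, consider a toy process in which inflation each period is pulled toward a long-run anchor with some persistence. This is not a calibrated model; the persistence parameter rho is a hypothetical stand-in for expectations and central-bank credibility.

```python
# A toy illustration, not a calibrated model, of the three momentum
# scenarios in point 6. Inflation each period is pulled toward a
# long-run anchor with persistence rho; rho is a hypothetical stand-in
# for expectations and central-bank credibility.

def simulate_inflation(start, anchor, rho, periods=8):
    """Return a path where pi[t] = anchor + rho * (pi[t-1] - anchor)."""
    path = [start]
    for _ in range(periods):
        path.append(anchor + rho * (path[-1] - anchor))
    return [round(p, 2) for p in path]

anchor = 2.0  # long-run inflation anchor, in percent
start = 8.0   # an initial burst of inflation

print("fades out:  ", simulate_inflation(start, anchor, rho=0.5))
print("persists:   ", simulate_inflation(start, anchor, rho=1.0))
print("accelerates:", simulate_inflation(start, anchor, rho=1.2))
```

With rho below one, the burst fades back toward the anchor on its own; at one, it persists indefinitely; above one, it feeds on itself–the rock rolling downhill.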

My own sense, for what it’s worth, is that the US economy is unlikely to escape this episode of higher Federal Reserve interest rates without experiencing a recession, by which I mean a period of higher unemployment and lower production. It seems to me that the pressures and tensions unleashed by the higher interest rates are working their way into reduced borrowing and credit, as well as into strains in bond markets. The Fed seems to be taking a middle road here, with a belief that part of the 6.5% inflation rate from December 2021 to December 2022 was temporary–in the sense that pandemic-related spending will fall, supply chain issues will resolve, and tensions from the Russian invasion of Ukraine will be manageable. Thus, the Fed is trying to raise interest rates only by as much as necessary to be clear that inflation will not gain a permanent foothold, while minimizing the risk of a hard landing.