Telemedicine: Lessons from Ukraine for the United States

The COVID-19 pandemic brought a surge of telemedicine, especially when public and private health insurers became willing to reimburse for it. Now, with the fading of the pandemic and its effect on day-to-day life, many health care providers seem poised to go back to the old ways. But even if telemedicine was overused during the pandemic, it was almost certainly underused before. Here are some insights from a source I was not expecting: a group of doctors, medical students, and telemedicine professionals involved in the Ukrainian relief efforts. Jarone Lee, Wasan Kumar, Marianna Petrea-Imenokhoeva, Hicham Naim, and Shuhan He describe their experience in “Telemedicine in Ukraine Is Showing That High-Tech Isn’t Always Better” (Stanford Social Innovation Review, March 2, 2023).

The authors describe their work with a Ukrainian-built telemedicine platform called Doctor Online, and their efforts to recruit hundreds of Ukrainian- and Russian-speaking health care providers as volunteers. Surveys of health care providers in Ukraine report widespread use of the platform.

Doesn’t seem all that applicable to the United States? The authors write:

Across the United States, 80 percent of rural areas qualify as federally designated “medical deserts.” This means that approximately 30 million people live at least an hour away from the nearest hospital with trauma services. Furthermore, many people suffer from chronic illnesses or social barriers that prevent them from accessing health care through physician offices and hospitals. Telemedicine could be an effective solution to reducing their suffering. For instance, telemedicine could allow someone with anxiety to receive mental health therapy from the comfort of their home, while a patient who noticed a suspicious lump on their skin could have a virtual consultation to help determine how serious it is.

The authors emphasize that many of their telemedicine contacts in Ukraine happened via text message.

Texting has the obvious advantage of needing less connectivity, especially in a war zone. It has also become a generationally ingrained practice worldwide, while many people can struggle with video calls. But texting also allows a more productive use of the scarce resource: the time of clinicians. They can respond asynchronously, when it is most convenient, as can patients. With texting, telemedicine can handle a great many more patients than if it relied on synchronous technology. American providers are beginning to catch on. CirrusMD is a “text-first” virtual primary care platform, where patients begin each visit by sending a text to a physician. They can send images or host video calls and receive referrals to specialists. Asynchronous messaging allows for greater back-and-forth between a busy clinician and the patient.

There is a tradition in the provision of health care–a tradition that has some rational basis–that health care providers should meet with patients and do a reasonably full examination before treatment. But there are surely times when a full in-person visit seems like overkill. The authors cite one study involving children who had had appendectomies. Some of the families had access to text messaging for questions; some did not. Those with text messaging ended up at emergency rooms less than half as often. One suspects there are many other examples.

How Will Remote Work Affect Cities?

Stijn Van Nieuwerburgh delivered the 2023 Presidential Address to the American Real Estate and Urban Economics Association on the subject of “The remote work revolution: Impact on real estate values and the urban environment” (Real Estate Economics, January 2023, pp. 7-48, subscription or library access required). He writes:

I want to focus on the longer-term implications of the pandemic for residential and commercial real estate markets, looking out beyond the current cycle. Its most long-lasting implication in my opinion is the dramatic increase in remote work. Born out of necessity, remote work now appears to have taken hold as a permanent feature of modern labor markets. It is a benefit that employees enjoy and are willing to pay for. Their tolerance for commuting appears to be permanently reduced. Having experienced the flexibility that comes with working from home (WFH), the genie is out of the bottle. Firm managers too have come around to see its virtues, often in the form of higher productivity and profits, and have adjusted their own expectations about the number of days they expect employees to be in the office. Several firms have gone fully remote, while most others have moved to a hybrid work schedule of 2–3 days in the office. Various indicators of office demand appear to have stabilized at levels far below their prepandemic high-water marks.

How have patterns of working from home changed? A source cited by many people, including Van Nieuwerburgh, is the Survey of Working Arrangements and Attitudes (SWAA) carried out during the last few years by Jose Maria Barrero, Nicholas Bloom, Shelby Buckman, and Steven J. Davis. Here’s one main pattern from their most recent release in February 2023.

Using data from the American Time Use Survey carried out by the US Bureau of Labor Statistics, they estimate that about 5% of paid full days were worked from home before the pandemic hit. Early in the pandemic, this share spiked above 60% of all paid work days. Now it has dropped to less than 30% of all paid work days. The red line shows data from the Household Pulse Survey being run by the US Census Bureau, which has been finding very similar results.

Of course, this percentage is an overall average between those who are always at the workplace, those always working from home, and hybrid arrangements in-between. Before the pandemic, jobs tended to be either all at a workplace or all at home. The widespread use of hybrid arrangements is new.

How has this shift affected real estate? Van Nieuwerburgh offers many measures, and I’ll just pass along a few of them here. In terms of office space, the Kastle company gathers “turnstile” data from the entrance areas of office buildings. (It should be noted that there are questions about whether this measure or these buildings are representative of all office space use, but again, there are a number of measures from varying sources on this point with the same general pattern.) If the office occupancy rate was 100 before the pandemic, it’s still only around 40 now. The highly teched-up San Francisco metro area has the lowest office occupancy rate.

Another measure is to look at revenue from paid office leases. It’s worth remembering that commercial leases often last 5-10 years, so a number of leases have not yet come up for renegotiation since the pandemic hit. Thus, the drop-off in leasing revenue shown here is likely to persist, and perhaps deepen, as more leases come up for renewal.

Finally, here’s a pattern on home sales and rental prices, using data from New York City. The horizontal axis shows how close the residence is to the center of New York City. The vertical axis shows the rise in rents or home prices. In the left-hand panel, the green line shows that over the six years or so before the pandemic, rents rose about the same (that is, the green line is pretty flat) whether you were closer to or farther from the center of the city. The red line shows that after the pandemic hit, rents closer to the center of the city dropped, while rents farther from the city center rose. The right-hand panel shows that price growth for homes was higher for locations near the center of the city before the pandemic (green line), but price growth for homes near the city center was lower–in fact, was negative–after the pandemic (red line).

The pandemic knocked loose a number of old working arrangements. Let’s assume, as seems plausible, that a large share of previous commuters make a permanent switch to working from home at least a day or two a week–or perhaps more. What are some of the possible implications for real estate, for cities, and for productivity? The research here is of course quite preliminary, but here are some issues.

Lots of people really hate commuting. Having had a chance to do less of it, they really don’t want to go back full-time. On the other side, not everyone wants to work at their kitchen table or on their living room sofa, either. Thus, one possibility is that we will see a rise of satellite offices located in suburbs, or “co-working” offices in the suburbs where you can show up for the day. Such locations can offer some logistical support, like a printer that works or rooms for virtual meetings, and employers would often prefer to know that their employees have at least gotten out of the house.

If many workers are going to be on a hybrid schedule, maybe working from home a couple of days each week, various coordination problems arise. How many days at home? If one goal is for people to be in the office together, then there must be agreement on what days people will be in the office. Employers may have concerns about having Monday and Friday be work-from-home days, for example, out of a fear that they are implicitly agreeing to a three-day workweek. Employers might want a situation where departments come to the office all together: perhaps marketing is there on Mondays and Tuesdays, and human resources is there on Thursdays and Fridays–and the two departments now share the same space. Personal spaces at the office may be reduced: after all, if you’re only going to be there 2-3 days a week, do you really need your own office or cubicle? Maybe you can do just fine with a rolling cart that has your stuff piled on it, and you just roll it over to an open space and grab a chair when you are in the office. Downtown employers might want more spaces for in-person and virtual meetings, or more flexible space where partitions can be rolled out and back, depending on who is in the office and what is needed that day.

With so many fewer workers downtown, and with the ongoing rise of online shopping, urban retail has suffered a huge decline (and retail workers are, of course, yet another group no longer working downtown). In theory, workers on a hybrid schedule could do their downtown shopping on days that they are commuting to the office, but that doesn’t seem to be happening. Thus, work-from-home is likely to stagger the downtown retail sector as well.

The economic base of urban centers will shift. With less office work and less retail, they will become more reliant on restaurants and entertainment. They might also become more reliant on the buying power of people who actually live there.

Lots of cities have a housing shortage, in the sense that demand has been driving prices ever-higher in the face of limited supply. But what if some of that less-used office and retail space could be converted to residential space? There are a bunch of tough issues here. In a pure structural sense, lots of office space is not laid out like residential space: as a basic example, the plumbing and hallways aren’t the same. One can imagine a large square office building divided into long narrow apartments with a window at the far end–but it’s not necessarily an attractive vision, and it may run afoul of various residential building codes. The construction costs of such conversions are high, and you can bet that city councils are already salivating at the chance to dictate what kinds of conversions would be allowed and at the idea of setting prices and rental rates. It feels to me as if there is a huge opportunity here to bring housing to cities, and as if the political and economic constraints are likely to strangle that opportunity.

In the long run, will the work-from-home pattern add to productivity? The evidence on this point is mixed. Early in the pandemic, several studies found that productivity remained pretty high with work-from-home. But during that time, workers at a lot of firms were also making special efforts to help out and get by. Over time, it became apparent that some jobs are more suited for remote work than others. Some workers who were scrupulous about giving a full-time effort from home when the pandemic first hit began to ease off over time. Firms began to worry that some activities of business–like certain kinds of brainstorming and strategizing, or certain kinds of information-sharing–worked better when people saw each other every day and could gather in groups. In a work-from-home world, it isn’t clear how on-the-job training works for new hires, or how new hires get informal face-time with other workers. It’s not clear that employee training works as well, either. If you work from home for one company, but could switch to working from home for another company, how loyal are you to either employer?

When it comes to the effects of work-from-home, we are all learning on the fly. There are also likely to be conflicts. At present, it seems to me that lots of employees are willing to go to the office some of the time, and lots of employers are willing to have work-from-home some of the time–but who decides is very much in flux.

Changing Shares With Jobs: The Middle-Age Mystery

Here’s a figure showing changes in the share of those with jobs, from David H. Montgomery at the Minneapolis Fed, in “Who’s not working? Understanding the U.S.’s aging workforce” (February 27, 2023).

The 16-19 and 20-24 age groups show the biggest decline in the share with jobs from March 2000 to March 2022. Much of this is because increasing numbers in these age groups are spending more time in school; in addition, they have become less likely to hold either part-time or full-time jobs while in school.

The age groups over 55 all have a rising share with jobs. This shift reflects in part improved health for older Americans, in part incentives built into programs like Social Security to retire later, and in part a desire (especially among college-educated workers) to accept later retirement in exchange for saving up a bigger nest egg before retirement.

The mystery is the declining share with jobs among what government statisticians refer to as “prime age” workers, those between the ages of 25 and 54. Montgomery doesn’t offer reasons for the decline of job-holding in this group, which is frankly mysterious. This is not a short-run phenomenon related to the pandemic. It is primarily accounted for by a decline in job-holding among men. The decline in job-holding by prime-age men has been going on for decades, so it seems unlikely that it can be accounted for by a particular law or rule change, or by the political party in power.

The plausible theories suggest that over a period of rising wage inequality, workers who feel stuck at the bottom of the wage distribution may give up on formal work–even if they are in many cases working off-the-books. In addition, the share of adult men who are unpartnered (that is, not married or cohabitating) is high, and single men are increasingly likely to live in the homes of their parents. The disconnectedness of these prime-age adults from the labor force represents a loss of economic production, but surely more important, it represents a substantial group–many of whom have not left the labor force, but instead stuck it out in low-paid jobs–who are living their prime-age years with frustration and resignation relative to their earlier-in-life aspirations.

The Divergence in Firm Productivity Within the Same Sector: McKinsey Weighs In

One might expect certain sectors of the economy to have faster productivity growth than others: for example, productivity seems likely to grow faster for semiconductor manufacturers than for gas stations. But one striking change in the US economy and around the world in the last couple of decades is that, looking at firms within the same sector–that is, within the same general line of business–the productivity leaders have been expanding their lead over the productivity laggards.

This shift is driving other economic changes. For example, it turns out that a main factor behind increases in income inequality is the widening gap between high- and low-productivity firms in the same sector. To put it another way, whether you do relatively better or worse as a result of widening inequality may not have much to do with you personally, or where you live, or your job; instead, it’s about whether you work for a high- or low-productivity firm. The McKinsey Global Institute offers some thoughts about this issue and other productivity-related topics in “Rekindling US productivity for a new era” (February 16, 2023). The report argues:

The most productive firms in every sector have widened their lead on the rest. In fact, the gap between the most and least productive is wider within sectors than in any other dimension we studied. Manufacturing provides a particularly striking example; leading firms operate at 5.4 times the productivity of laggards. In some manufacturing subsectors, the differences are extraordinary. The leading semiconductor manufacturers are 38 times more productive than the least-productive companies. This mirrors other research showing similar patterns of divergence across other sectors such as wholesale trade and information.

The “frontier firms” in the productivity vanguard are accelerating away from their peers. These firms tend to be larger, more connected to global value chains, and focus on technology-intensive aspects of their sector. Research suggests these leading firms invest 2.6 times more in technology and other intangibles such as research and intellectual property, and attract and invest in more skilled talent.

As a result, the gap between frontier firms and laggards has grown over the past 30 years. In manufacturing, the gap was 25 percent wider in 2019 than it was in 1989, with most of that change happening before 2000. At the same time, industry dynamism has fallen, as seen in metrics such as new firm entry rate (which has declined 29 percent from 1989 to 2019 in the United States) and labor reallocation rates (which are down 31 percent across sectors).

Standard economic principles would suggest that less productive firms would be replaced or would improve their performance. Researchers have offered multiple hypotheses for why this has not happened. For example, there is evidence that firms within the same sector may coexist without fully competing, by serving different customers, attracting different workers, or operating in different geographic markets. Finally, some researchers have pointed to declining measures of competition as a source of the divergence, which remains a matter of active debate.

Whatever the explanation for growing divergence, productivity gains must ultimately come from firms. If laggards don’t catch up or get replaced by more productive firms, US productivity will continue to splutter. For business leaders, the message is clear: improving your firm’s performance matters much more than the productivity of the sectors in which you operate.

As the McKinsey report points out, gains in labor productivity are fundamental for national prosperity. The key issue to remember here is that productivity gains build on each other. Thus, if productivity could be raised 1% per year, each year builds on the previous one, and after a decade the US economy would be (roughly) 10% larger. (Actually a little more than 10%, because the growth rate compounds over time.) The US economy is about $23 trillion in size right now, so being 10% larger involves gains of over $2 trillion. As I sometimes say, no matter whether your goal is higher wages or expanded government spending or tax cuts, it’s easier to achieve that goal in an expanding economy–where we are in effect arguing over how a growing pie will be divided up–than it is to accomplish your goals in a low-growth economy or even zero-sum economy, where gains for any particular goal require losses for other goals.
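For readers who want to see the compounding arithmetic explicitly, here is a minimal sketch. The $23 trillion base and the 1% annual gain are the illustrative numbers from the paragraph above; the rest is pure illustration, not a forecast.

```python
# Compounding arithmetic for the productivity example above.
# Assumes a $23 trillion economy and 1% annual productivity growth;
# these are the illustrative numbers from the text, not a forecast.

base_gdp = 23.0          # US GDP in trillions of dollars (approximate)
growth_rate = 0.01       # 1% annual productivity growth
years = 10

final_gdp = base_gdp * (1 + growth_rate) ** years
print(f"GDP after {years} years: ${final_gdp:.2f} trillion")
# -> GDP after 10 years: $25.41 trillion

cumulative_gain = (1 + growth_rate) ** years - 1
print(f"Cumulative gain: {cumulative_gain:.1%}")
# -> Cumulative gain: 10.5%, a little more than 10% because growth compounds
```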

The MGI report discusses a number of ways for the US (or any nation) to improve productivity: better education and workforce skills, support for research and development, a competitive and evolving marketplace, and others.

Here, I want to emphasize a different lesson: The growing divergence between high- and low-productivity firms suggests that the challenge is not just one of cutting-edge innovation. Again, the cutting-edge firms across different sectors of the economy are doing pretty well at raising productivity. The challenge is to support an economic environment where the productivity laggards either keep pace or die off, rather than just falling farther behind.

Advice from Charlie Munger

The legendary investor Warren Buffett each year publishes a letter to the shareholders of Berkshire Hathaway, both reporting returns from the previous year and also offering reflections on past investment decisions and philosophies. In this year’s just-published letter, Buffett quotes some advice from his long-time partner Charlie Munger. Buffett writes:

Charlie and I think pretty much alike. But what it takes me a page to explain, he sums up in a sentence. His version, moreover, is always more clearly reasoned and also more artfully – some might add bluntly – stated. Here are a few of his thoughts, many lifted from a very recent podcast:

  • The world is full of foolish gamblers, and they will not do as well as the patient investor.
  • If you don’t see the world the way it is, it’s like judging something through a distorted lens.
  • All I want to know is where I’m going to die, so I’ll never go there. And a related thought: Early on, write your desired obituary – and then behave accordingly.
  • If you don’t care whether you are rational or not, you won’t work on it. Then you will stay irrational and get lousy results.
  • Patience can be learned. Having a long attention span and the ability to concentrate on one thing for a long time is a huge advantage.
  • You can learn a lot from dead people. Read of the deceased you admire and detest.
  • Don’t bail away in a sinking boat if you can swim to one that is seaworthy.
  • A great company keeps working after you are not; a mediocre company won’t do that.
  • Warren and I don’t focus on the froth of the market. We seek out good long-term investments and stubbornly hold them for a long time.
  • Ben Graham said, “Day to day, the stock market is a voting machine; in the long term it’s a weighing machine.” If you keep making something more valuable, then some wise person is going to notice it and start buying.
  • There is no such thing as a 100% sure thing when investing. Thus, the use of leverage is dangerous. A string of wonderful numbers times zero will always equal zero. Don’t count on getting rich twice.
  • You don’t, however, need to own a lot of things in order to get rich.
  • You have to keep learning if you want to become a great investor. When the world changes, you must change.
  • Warren and I hated railroad stocks for decades, but the world changed and finally the country had four huge railroads of vital importance to the American economy. We were slow to recognize the change, but better late than never.
  • Finally, I will add two short sentences by Charlie that have been his decision-clinchers for decades: “Warren, think more about it. You’re smart and I’m right.”

And so it goes. I never have a phone call with Charlie without learning something. And, while he makes me think, he also makes me laugh.

I will add to Charlie’s list a rule of my own: Find a very smart high-grade partner – preferably slightly older than you – and then listen very carefully to what he says.

Clean Energy Industrial Policy: The US and EU Clash

The Inflation Reduction Act, signed into law by President Biden in August 2022, is actually a mixture of tax, healthcare, and clean energy policies. Here, I’ll focus on the last category. It represents a belief that industrial policy can work when it comes to clean energy: that is, large subsidies targeted at a specific industry can both accelerate the development of a new and healthy sector of the US economy and reduce carbon emissions. David Kleimann, Niclas Poitiers, André Sapir, Simone Tagliapietra, Nicolas Véron, Reinhilde Veugelers, and Jeromin Zettelmeyer compare the US policy to pre-existing European policies in “How Europe should answer the US Inflation Reduction Act” (Bruegel, February 2023). Here are some takeaways.

The clean energy subsidies enacted by the Inflation Reduction Act will catch the US up to the level of subsidies that are already available across EU countries in some areas, but not others.

The authors divide the new US clean energy subsidies into three categories. First, there is a tax credit of up to $7,500 for consumer purchases of electric cars. However, this tax break is hedged around with requirements about how much of the car must be made in the US, as well as limits on the income of those receiving the tax credit. Second, there are subsidies for producers of “batteries, wind turbine parts and solar technology components, as well as for critical materials like aluminum, cobalt and graphite.” As one example, a “mid-sized 75kWh battery for an EV would receive $3,375 in subsidies, equivalent to roughly 30 percent of its 2022 price.” Third, there are subsidies for producers of carbon-neutral electricity. This includes solar and wind power, but also hydrogen, “clean fuels (such as renewable natural gas),” and nuclear power.
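As a quick check on the battery example, here is the back-of-the-envelope arithmetic implied by the quoted figures. The $45-per-kilowatt-hour rate and the roughly $11,250 pack price are backed out of the quote itself, so treat them as illustrative rather than as numbers reported in the Bruegel paper.

```python
# Unpacking the quoted battery-subsidy example: $3,375 on a 75 kWh pack,
# said to equal roughly 30 percent of the pack's 2022 price.

pack_size_kwh = 75
subsidy = 3375

subsidy_per_kwh = subsidy / pack_size_kwh
print(f"Implied subsidy rate: ${subsidy_per_kwh:.0f}/kWh")       # $45/kWh

implied_2022_price = subsidy / 0.30
print(f"Implied 2022 pack price: ${implied_2022_price:,.0f}")    # ~$11,250
```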

There are lots of details surrounding these rules, and I won’t try to do justice to them here. The authors cite overall estimates from the Congressional Budget Office that the cost will be $400 billion over 10 years–but they also warn that this cost estimate is based on underlying estimates about the extent to which people and firms will take advantage of these subsidies. Comparisons between the US and the different subsidies across EU countries are also necessarily imprecise. But the authors offer this chart:

In other words, the new US subsidies for electric cars and clean-tech manufacturing are similar to what already prevails in the European Union. The new US subsidies for renewable energy remain MUCH lower than similar subsidies in the EU.

One key difference between the US clean energy subsidies and the European approach is that the US approach includes “local content” requirements, which among other things violate the fair trade rules that the US has long advocated at the World Trade Organization.

“Local content” requirements are politically popular everywhere: after all, they restrict tax breaks to domestic producers. That’s also why such rules are generally prohibited by World Trade Organization agreements. But first under President Trump, and now under President Biden, the US is showing that in decisions about tariffs and subsidies, it feels comfortable flouting those rules. The authors describe the specific local content rules in the Inflation Reduction Act like this:

The $7500 consumer tax credit applies only to electric cars with ‘final assembly’ in North America (the US, Canada or Mexico). In addition, half of the tax credit is linked to the origin of batteries and the other half to that of raw materials used in the electric cars. To obtain either half, a minimum share of the value of battery components (presently 50 percent) or critical minerals (presently 40 percent) needs to come from the US or countries with which the US has a free trade agreement (presently 20 countries). These thresholds will increase by about 10 percentage points per year. In addition, from 2024 and 2025, any use of batteries and critical minerals from China, Russia, Iran and North Korea will make a vehicle ineligible for the tax credit.

Renewable energy producers are eligible for a ‘bonus’ subsidy linked to LCRs [local content rules]. If the steel and iron used in an energy production facility is 100% US-produced and manufactured products meet a minimum local-content share, the subsidy increases by 10 percent, with the required local-content share rising over time. A similar bonus scheme conditional on local-content shares applies to investment subsidies for energy producers.
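To see how the consumer tax credit rules quoted above fit together, here is a minimal sketch of the eligibility logic. The thresholds and the two $3,750 halves follow the quoted passage; the function and parameter names are hypothetical, and the real statute adds many conditions omitted here (income limits, vehicle price caps, the rising thresholds, and the China/Russia/Iran/North Korea exclusions from 2024-25).

```python
# A hedged sketch of the consumer EV tax credit logic described above.
# Thresholds and amounts follow the quoted passage; names are hypothetical.

BATTERY_THRESHOLD = 0.50    # minimum qualifying share of battery-component value
MINERALS_THRESHOLD = 0.40   # minimum qualifying share of critical-mineral value
HALF_CREDIT = 3750          # each half of the $7,500 credit

def estimate_credit(assembled_in_north_america: bool,
                    battery_share: float,
                    minerals_share: float) -> int:
    """Rough eligibility check: returns the estimated credit in dollars."""
    if not assembled_in_north_america:
        return 0  # final assembly outside the US, Canada, or Mexico disqualifies
    credit = 0
    if battery_share >= BATTERY_THRESHOLD:
        credit += HALF_CREDIT   # half the credit tied to battery components
    if minerals_share >= MINERALS_THRESHOLD:
        credit += HALF_CREDIT   # half tied to critical-mineral sourcing
    return credit

# Example: a US-assembled car with 60% qualifying battery content but only
# 30% qualifying critical minerals earns half the credit.
print(estimate_credit(True, battery_share=0.60, minerals_share=0.30))  # 3750
```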

Local content rules are also a bit paradoxical. Presumably the reason such rules are needed is that, without them, a substantial share of the clean energy subsidies would flow to producers in other countries, because those producers would be providing products with the combination of price and quantity preferred by US consumers and firms. The Inflation Reduction Act is thus based on a claim that clean energy subsidies are badly needed for environmental reasons–but also that clean energy goals aren’t quite important enough to justify importing needed goods.

Will the clean energy industrial policy work?

The authors of this paper assert with some confidence that the US and EU industrial policies with regard to clean energy will work: that is, they will both build up local producers of clean energy–presumably to a point where they no longer need to rely on government subsidies–and reduce carbon emissions.

The future is of course unpredictable, but I am dubious that the US clean energy industrial policy subsidies are likely to be very effective. First, the US clean energy subsidies are an all-carrot, no-stick policy. They hand out subsidies, but do not impose, say, additional limits or costs on carbon emissions. Second, the US industrial policies are focused on current tech, not future tech. As the Bruegel authors write: “in the clean-tech area, the IRA [Inflation Reduction Act] focuses mostly on mass deployment of current generation technologies, whereas EU level support tends to be more focused on innovation and early-stage deployment of new technologies.” Third, tying the subsidies to local content rules will be a disadvantage for US producers in the clean energy arena, compared to producers in the European Union and other places who do not need to follow such rules.

Finally, government industrial policy to advance technology tends to work best when it is tied to concrete goals. For example, the incentives to produce COVID vaccines were linked to the vaccines actually being produced. In South Korea’s successful industrialization strategy several decades ago, government subsidies were linked to whether the firm was successfully exporting to the rest of the world–and the subsidies were cut off if the target level of exports wasn’t reached. But when industrial subsidies are just handed out, it’s a fairly common pattern for people and firms to soak up the subsidies, without much changing. Those who follow these issues will remember prominent examples like Solyndra, the solar energy company that burned through about a half-billion dollars in federal loan guarantees about a decade ago, or going back further, the “synfuels” subsidies that failed to deliver fuel alternatives back in the 1980s. I hope that I’m wrong about this, and that this time around, US industrial policy aimed at clean energy will be a big success. But I’m not optimistic.

How WWII Reduced US Productivity

There’s a conventional story that World War II was a boost for the US economy, both providing a burst of aggregate demand that ended the Great Depression, and also establishing the basis for several decades of postwar prosperity in US manufacturing. Alexander J. Field begs to differ. He lays out his case in “The decline of US manufacturing productivity between 1941 and 1948” (Economic History Review, published online January 16, 2023).

Of course, total manufacturing output did rise substantially during World War II, but “productivity” for economists doesn’t refer to total output. Instead, productivity refers to output per hour worked–or more generally, output relative to given inputs of labor and capital. With that in mind, you can get some of the flavor of the larger perspective of Field’s argument by taking a look at this data on the US manufacturing sector from 1929 to 1948. As you see, everything is expressed relative to 1929: that is, the level of all the variables is set at 100 for 1929. The first column is labor productivity, or output per hour. The second is “total factor productivity,” a more complex (although by no means truly difficult) calculation that is output per combined inputs of labor and capital. The third column is total manufacturing output; the fourth column is hours worked in the manufacturing sector; and the last column is stocks of capital in the manufacturing sector.

Before looking at the World War II years, just run your eyes over the 1930s for a moment to get a sense of how this works–and perhaps to reset your perceptions of what happened during this time. Manufacturing output falls by half during the Great Depression from 1929 to 1933. The stock of capital equipment doesn’t fall by much during this time: after all, the equipment may have been less used or unused, and some of it would wear out, but the equipment itself was mostly still there. Hours worked in manufacturing fall 40% from 1929 to 1933, which is less than the fall in output. Thus, output per hour worked, as shown in the first column, falls substantially from 1929 to 1933.
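As a concrete illustration of the difference between the two productivity measures, here is a minimal sketch using the 1929-1933 movements just described, with all series indexed to 100 in 1929. The output and hours indexes follow the text; the capital index and the Cobb-Douglas labor share are my own illustrative assumptions, not Field’s figures.

```python
# Labor productivity vs. total factor productivity, 1929 -> 1933.
# Output and hours follow the text; the capital index and labor share
# are illustrative assumptions.

output = 50.0    # manufacturing output fell by half (index: 100 -> 50)
hours = 60.0     # hours worked fell 40 percent (index: 100 -> 60)
capital = 95.0   # capital stock fell only slightly (assumed index)
alpha = 0.7      # labor's weight in combining the two inputs (assumed)

# Labor productivity: output per hour worked.
labor_productivity = output / hours * 100
print(f"Labor productivity index: {labor_productivity:.0f}")   # ~83

# Total factor productivity: output per combined unit of labor and capital,
# here combining the inputs with Cobb-Douglas weights.
combined_inputs = hours ** alpha * capital ** (1 - alpha)
tfp = output / combined_inputs * 100
print(f"TFP index: {tfp:.0f}")                                 # ~73
```

Both measures fall from 1929 to 1933, but TFP falls further, because the capital stock, unlike hours, barely declined while output halved.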

There is a common perception that the Great Depression lasted throughout the 1930s, before the economy was jolted out of the Depression by World War II spending. The table shows that this perception is untrue. Manufacturing output doubles from 1933 to 1937. The economy is then jolted by a sharp increase in interest rates from the Federal Reserve, leading to a steep recession in 1937-38–followed by a doubling of manufacturing output from 1938 to 1941. Readers will remember that while there is certainly fear of war in the late 1930s and early 1940s, the bombing of Pearl Harbor and the actual entry of the US into World War II doesn’t happen until December 1941.

As the US scrambled to reshape its economy to a wartime footing, output rises substantially. But there are also substantial jumps in labor and capital inputs. Thus, both labor productivity (output per hour worked of labor) and total factor productivity (output per unit of inputs including labor and capital) start to decline. Field describes the underlying process this way:

The [productivity] declines in 1942 reflect, above all else, the chaotic conditions associated with the changes in the product mix. Productivity took a huge hit as machinery to produce peacetime products made way for newly designed machine tools, and labour and management struggled to become proficient as they moved from making goods in which they had a great deal of experience to those in which they had little. Shortages, hoarding of inputs, and production intermittency plagued the war effort. The positive effects of learning by doing are evident in the change in both labour productivity and TFP growth between 1942 and 1943. They were nevertheless insufficient to compensate for the sharp drop during the previous year. Productivity resumed an accelerated decline in 1944, as a secondary round of major product changes kicked in, and was even more negative in 1945, due in part to the disruptions associated with demobilization. Partial recovery between 1945 and 1948 still left the TFP level in US manufacturing substantially below where it had been in 1941.

There is a conventional narrative about World War II that it was at least “good for the economy,” but that seems imprecisely put. It’s true that the US economy, with its high level of technical skills and extraordinary flexibility, was very good indeed for winning the war–which was clearly the highest priority at that time. But the war led to multiple dramatic disruptions in the US economy: restructuring to wartime production in multiple ways and times, labor shortages, supply shortages, and then a dramatic restructuring back to a peacetime economy.

Field offers a reminder that much of the capital investment made during World War II was useless at the end of the war.

With the temporary exception of B-29 bombers, most of the aircraft produced during the war were, at its conclusion, deemed surplus: obsolete or unneeded. Tens of thousands were flown to boneyards in Arizona: air bases such as Kingman and Davis-Monthan. … Some aircraft were flown directly from the factory gate to Arizona for disassembly and recycling. Many aircraft operating overseas were never repatriated. It was simply not worth the cost in fuel and manpower to fly them back to the United States so they could be scrapped. Similar fates befell Liberty ships (scrapped and recycled for the steel), tanks, and other military equipment including field artillery. …

There was indeed a huge investment in plant and equipment by the federal government. But the mass production techniques that made volume production of tanks and aircraft possible in the United States relied overwhelmingly on single- or special-purpose machine tools, and most of these tools and related jigs and frames were scrapped with reconversion. The United States did use multipurpose machine tools, which could more easily be repurposed, but this was principally in the shops producing machine tools. Already in 1944, the country confronted serious surplus and scrappage issues. By early 1945 disposal agencies had surplus inventories of roughly $2 billion – equivalent to the entire cost of the Manhattan Project. By V-J Day that had risen to US $4 billion, and ultimately to a peak of US $14.4 billion in mid-1946. …

Both public and private capital accumulation in areas not militarily prioritized had been repressed. Wartime priorities starved the economy of government investment in streets and highways, bridges and tunnels, water and sewage systems, hydro power, and other infrastructure that had played such an important role in the growth of productivity and potential output across the Depression years. These categories of government capital complementary to private capital grew at a combined rate of 0.15 per cent per year between 1941 and 1948, as opposed to 4.17 per cent per year between 1929 and 1941.

Portions of the private economy not deemed critical to the war effort also subsisted on a thin gruel of new physical capital. Trade, transportation, and manufacturing not directly related to the war are cases in point. Private nonfarm housing starts, which had recovered to 619,500 in 1941, still 34 per cent below the 1925 peak (937,000), plunged to 138,700 in 1944, barely above the 1933 trough of 93,000. All ‘nonessential’ construction in the country was restricted beginning on 9 October 1941, almost two months before the Japanese attack.

Of course, the US economy suffered a terrible loss of workers as a result of World War II. “As for labour, the immediate post-war impact of the war on potential hours was clearly negative: 407,000 mostly prime-age males never returned. Most would have been alive in the absence of the war. … There were another 607,000 military casualties. The 50 per cent wartime rise in female labour-force participation largely dissipated during the immediate post-war period.”

What about new technologies developed during World War II? As Field points out, the ability to run extraordinary assembly lines for making planes and ships was not a useful technology after the war ended. More broadly, he argues that World War II was more about taking advantage of technologies that had been developed earlier, not about the invention of technologies that would have lasting benefits in peacetime.

What of more general scientific and technological advance? Kelly, Papanikolaou, Seru, and Taddy digitized almost the entire corpus of US patent filings between 1840 and 2010, and analysing word counts, identified breakthrough patents: those that were novel at the time and influential afterwards. Such patents had low backward similarity and high forward similarity scores … Their time series of such patents shows a peak in the 1930s, particularly the first half of the decade, and a noticeable trough during the war years …

Much of what occurred during the war represented the exploitation of a preexisting knowledge base. In 1945, Vannevar Bush published Science, the endless frontier, a work often viewed as distilling the lessons and achievements of the war into an actionable blueprint for post-war science and technology policy. However, as David Mowery noted, ‘the Bush report consistently took the position that the remarkable technological achievements of World War II represented a depletion of the reservoir of basic scientific knowledge’ …

It is of course impossible to re-run history and see what would have happened if the economic trends of the late 1930s had continued without the interruptions, costs, and material and human losses of World War II. But it is surely possible that the US would have had a stronger economy in 1950 if it had not suffered losses of a million killed and wounded, had not needed to scramble its output first to a wartime and then to a peacetime footing, and had been able to pursue non-war science and technology. Field goes so far as to speculate: “From a long-run perspective, the war can be seen, ironically, as the beginning of the end of US world economic dominance in manufacturing.”

Given this kind of evidence, why is the belief in the economic benefits of World War II so deeply rooted? Field quotes Elizabeth Samet’s argument that there is “pernicious American sentimentality about nation” and that “we search for a redemptive ending to every tragedy.” There is surely some truth in this, but perhaps a simpler truth is that the losses and costs of World War II are too staggering to contemplate.

Qualms about Linking Executive Pay to Social Goals

Should the pay of top executives be linked not just to the performance of the company in the stock market or other quantitative/financial goals, but also to whether the company meets environmental, social, and governance (“ESG”) goals? Lucian A. Bebchuk and Roberto Tallarita raise some doubts in “The Perils and Questionable Promise of ESG-Based Compensation” (Journal of Corporation Law, Fall 2022).

Bebchuk and Tallarita focus on the actual behavior of the 97 US companies in the S&P 100–which together represent over half the total value of the US stock market. They write:

We found that slightly more than half (52.6%) of these companies included some ESG metrics in their 2020 CEO compensation packages. These metrics focus chiefly on employee composition and employee treatment, as well as customers and the environment, but also, to a much smaller extent, communities and suppliers. ESG metrics are mostly used as performance goals for determining annual cash bonuses. However, most companies do not disclose the weight of ESG goals for overall CEO pay, and those that do disclose it (27.4% of the companies with ESG metrics) assign a very modest weight to ESG factors (between less than 1% to 12.5%, with most companies assigning a weight between 1.5% and 3%).

It is notable to me that when you hear a large company announce that it has tied executive bonuses to environmental, social, and governance goals, those goals typically determine only about 1.5-3% of the executive’s overall pay. Bebchuk and Tallarita look at what specific goals are mentioned in corporate reports:

Despite the potential richness and intricacy of a company’s stakeholders and their interests, ESG metrics used in the real world are inevitably limited and narrow. … Most companies use metrics linked to employee composition and employee treatment, and many use metrics connected to consumer welfare and environmental issues (especially carbon emissions and climate change). Very few companies, however, consider their impact on local communities, and only two companies use metrics linked to supplier interests.

Furthermore, with respect to each of these groups or interests, ESG metrics focus on a narrow subset of dimensions that are relevant for stakeholders. … [F]or each stakeholder group or interest, companies choose to give weight to specific dimensions that represent only part of what stakeholders care about. With respect to employees, for example, most companies choose goals related to inclusion or diversity, and many focus on work accidents and illness, but none incentivizes its CEO to increase salaries or benefits or to improve job security. With respect to community, many companies focus on trust and reputation, but almost none chooses incentives linked to reducing local unemployment or to distributing free products or services to disadvantaged residents. …

[S]takeholder welfare is multi-dimensional. However, some of these dimensions are easier to pin down and measure, while others, equally important, are difficult to measure. Consider, for example, the welfare of employees. Employees are interested in receiving a good salary, avoiding accidents and illnesses, and keeping their job: these goals are relatively easier to measure and assess. However, employees are also interested in being treated fairly, developing good professional relationships with supervisors and peers, growing professionally, and other factors that are very hard to measure. …

The narrowness of ESG metrics is an empirical fact and also a theoretical necessity. No compensation package could exhaustively identify and incentivize goals that address all of the interests and needs of all individuals and groups affected by a company’s activities. The very act of identifying a measurable goal and designing a metric to assess the achievement of that goal requires the choice of some specific dimension and measure and, therefore, the rejection of other potential dimensions and measures. Business leaders have embraced stakeholderism by promising win-win scenarios in which companies deliver value to shareholders and all stakeholders. The reality, however, is that companies choose only a few groups of core stakeholders and focus on a limited number of aspects of their welfare.

One response to these sorts of concerns is the “at least” defense. At least the firms are making a public statement in response to broader concerns. At least some firms may be trying a little harder along these lines. At least some broader social concerns might be addressed. Maybe the glass is only 1.5% to 3% full, but at least it’s not 100% empty. At least it’s a start.

However, a primary concern over executive pay for some decades now has been about overly cozy relationships between corporate executives and boards of directors, in which top executives get excessive pay over what their performance would actually deserve. Linking pay to stock prices or other financial performance is clearly not a perfect measure, but it’s at least an anchor for executive pay. If executives were to get a significant share of their pay based on the announced commitments they make about ESG concerns, or about subjective judgements on the extent to which they have achieved these goals, then it again becomes possible for cozy relationships between board members and executives to lead to higher pay: “Sure, the company had a dramatic decline in production and sales last year, but as a result, we also reduced carbon emissions, so the CEO gets a raise.” Moreover, if executives have incentives to re-jigger corporate resources toward more measurable social goals, but potentially at the expense of less measurable goals, there is no guarantee that the overall goals of making corporations more socially conscious will be met.

A similar set of themes arises in concerns over “Diversity Washing,” which refers to companies that make public announcements about their commitment to diversity, but don’t actually do much about it. Andrew C. Baker, David F. Larcker, Charles McClure, Durgesh Saraph, and Edward M. Watts discuss the issue in a European Corporate Governance Institute Finance Working Paper (#868/2023, January 2023).

The authors have detailed data on the gender and racial diversity of the employees of over 5,000 US public companies. They also have company statements filed with the Securities and Exchange Commission that discuss their diversity, equity, and inclusion policies (specifically, annual reports, current reports, and proxy statements). In addition, they have measures of firm misconduct related to diversity issues, as well as the rankings that firms have received for their diversity programs. Thus, they can look at whether firms that talk a good game about diversity actually walk the walk, and also whether firms that talk a good game get rewarded for what they say, rather than what they do.

It turns out that there is an overall positive correlation between the talking and the doing, but the correlation is a mild one, because of firms that they call “diversity washers.” They write:

We provide large-sample evidence showing many firms have significant discrepancies between their disclosed commitments to diversity and their actual hiring practices. Consistent with such firms making misleading commitments to DEI, we find diversity washers have less workplace diversity, experience future outflows of diverse employees, and are subject to higher diversity-related fines. Despite these negative DEI outcomes, we show diversity washers receive higher ESG scores from commercial rating organizations and attract more investment from ESG-focused institutional investors, suggesting these disclosures mislead outside stakeholders and investors.

My focus here has been on practical problems that arise when corporations seek to put a priority on ESG goals. But at a deeper level, I have qualms over whether it is a good idea for corporations to have such goals. Different institutions are good for different purposes. We don’t expect hospitals to educate fourth-graders, we don’t expect universities to produce smartphones, and we don’t expect churches to install dishwashers. It seems to me quite possible to support the idea that corporations should be focused on earning profits, and also to support both government and nongovernment efforts to define and pursue environmental and social goals.

Lead: A Global Health Problem

One of the major US public health victories in recent decades was to get the lead out of gasoline and paint, and also to (mostly) shift away from lead in pipes that carried water–such that when lead is detected in the water supply in places like Flint, Michigan, or Jackson, Mississippi, it’s rightfully a scandal, and typically linked to the use of badly outdated pipes. But dealing with sources of lead exposure around the world is very much a slow-motion work in progress. Rachel Silverman Bonnifield and Rory Todd discuss the issue in “Opportunities for the G7 to Address the Global Crisis of Lead Poisoning in the 21st Century: A Rapid Stocktaking Report” (Center for Global Development, 2023).

They set the stage in this way (footnotes and references to figures omitted):

Lead poisoning is responsible for an estimated 900,000 deaths per year, more than from malaria (620,000) and nearly as many as from HIV/AIDS (954,000). It affects almost every system of the body, including the gastrointestinal tract, the kidneys, and the reproductive organs, but has particularly adverse effects on cardiovascular health. According to the World Health Organization (WHO), it is responsible for nearly half of all global deaths from known chemical exposures.

Despite this massive burden, the greater part of the harm caused by lead may come not through its effects on physical health, but its effect on neurological development in young children. The cognitive effects of lead poisoning on brain development are permanent, and most severe when lead exposure occurs between the prenatal period to the age of around 6 or 7. Even low-level lead exposure at this age has been conclusively shown to cause lifelong detriments to cognitive ability; though evidence is less definitive, there is also a very strong and compelling literature which links lead exposure to anti-social/violent behavior, attention deficits, and various mental disorders. An estimated 800 million children—nearly one in three globally, an estimated 99 percent of whom live in low- and middle-income countries (LMICs)—have blood lead levels (BLL) above 5 micrograms per deciliter (μg/dL), which the WHO uses as a threshold for recommending clinical intervention to mitigate neurotoxic effects. Effects on cognitive development have been demonstrated in BLLs significantly below this …

What are some of the main vectors through which people, especially in low- and middle-income countries, are exposed to lead? The first one discussed is recycling of lead-acid batteries:

While lead has a number of industrial applications, at least 80 percent now goes into the production of lead-acid batteries. … Manual destruction of batteries without protective equipment, uncontrolled smelting, and dumping of waste into waterways and soils are common. Studies show high blood lead levels in children living near lead battery manufacturing and recycling facilities and in workers, and high levels of airborne lead in battery facilities and acute exposure to workers and their families.

The market price of lead has roughly doubled in the last 20 years, as has global mining of lead (which is typically accompanied by extraction of zinc, but also other metals). There are some recent graphic examples. A lead mine in Zambia “led to universal lead poisoning among 90,000 local children.” A gold-mining operation in Nigeria “led to the deaths of more than 400 children from acute lead poisoning in the space of six months in 2010, as a result of workers grinding ores within villages.” Such lead pollution often continues to affect the local environment for decades after the mine is closed.

Perhaps the most unexpected source of lead poisoning is via lead that is added to spices, which are then shipped around the world.

[A]n increasing body of evidence points to lead-adulterated spices as a significant driver of widespread lead poisoning, particularly in South and Central Asia. For turmeric and other spices with bright yellow/orange colors, lead chromate is typically added during the polishing stage to increase pigmentation and reduce polishing time (which also increases the weight of the final product); the bright pigmentation characteristic of lead chromate is considered a sign of high quality, and adulteration therefore allows producers to command a higher price point for their products. Lead may also be inadvertently introduced in smaller concentrations to a broader range of spices—for example oregano, thyme, ginger, or paprika—via contaminated soil, airborne pollution, or cross-contamination at a factory, though this is likely to be a relatively small part of the overall problem.

Via global supply chains, contaminated spices can drive lead poisoning far beyond their countries of origin … In the US, where roughly 95 percent of spices are imported, a Consumer Reports investigation found detectable levels of lead or other heavy metals in one third of sampled spices. In New York City, investigations of elevated blood lead levels frequently identify lead adulteration in spices purchased abroad as a likely source, with the highest concentrations of lead found in spices from the countries Georgia, Bangladesh, Pakistan, Nepal, and Morocco …

Lead in paint is an ongoing issue as well.

Despite their danger being established for decades, lead paints remain legal in the majority of countries, and are widely used for residential coatings and decorative purposes in most LMICs [low- and middle-income countries], and for industrial purposes in many high-income countries. Lead additives are primarily in solvent-based paints, and may be added to paint to improve durability, drying capacity, and corrosion prevention, as well as in the form of pigments—especially lead chromate—to enhance color. It can cause occupational exposure as workers inhale dust during manufacture, application, and removal, or exposing their families through take-home contamination. Children are exposed primarily through ingestion of chips and dust, which can occur throughout the life cycle, but may be exacerbated as paint ages as well as during application and removal. Lead paint is an avoidable source of exposure, and there are safe and cost-effective alternatives to lead additives …

Cookware can be another source of lead exposure, especially when lead-based glazes are used on ceramics.

Lead-glazed ceramics are popular in central Mexico, where they are primarily produced by indigenous communities, and are commonly used in restaurants for cooking and serving. They have been identified as a primary cause of elevated blood lead levels in the country, where 22 percent of children aged 1 to 4 years have blood lead levels above 5 μg/dL. But they are also used elsewhere in Latin America, North Africa, and South Asia, and may be a significant source of exposure. … More recently, aluminum pots and other cookware produced from scrap metal—used by poor families in LMICs across all regions—have been found to frequently contain lead and other heavy metals …

There are other examples. The cosmetic called Kohl was traditionally made with lead. “While safe, lead-free substitutes exist, traditional leaded Kohl—with up to 98 percent lead content—is still common across the world, and frequently found in G7 member states … Lead is also found in other consumer goods, particularly toys and jewelry. Use of lead in toys is typically to add pigment/color, including via lead paint on surfaces and lead pigment in crayons, sidewalk chalk, and other art supplies …”

The immediate health effects of lead exposure are grim. The long-term effects on child development are worse.

Hammers and Nails: Central Banks and Inequality

There’s an old saying that “if your only tool is a hammer, then every problem looks like a nail.” It’s about the temptation to use the tool you have on every problem that comes up–whether the tool you have is actually appropriate for the problem. Hammers work well with nails: they aren’t so helpful with screws, or when trying to water the flowers.

The situation with central banks and economic inequality is a little different. In this case, some of those who are focused on inequality have wondered whether the central bank might offer an appropriate hammer for this particular nail. But in the just-published Winter 2023 issue of the Journal of Economic Perspectives, Alisdair McKay and Christian K. Wolf explain why this is a case of the tool not fitting the problem in “Monetary Policy and Inequality.”

Economic inequality exists for many reasons, of course. The question is whether or how actions by central banks–say, decisions to raise or lower interest rates–might change the level of economic inequality. Perhaps the main complication in the analysis here is that monetary policy will affect different groups in different ways. For example, if you have borrowed money with an adjustable-rate mortgage, you might benefit from lower interest rates. If you haven’t borrowed money, but instead were hoping to receive interest payments on past savings, then lower interest rates will hurt you. If you are unemployed or low-paid, then to the extent that lower interest rates can stimulate the economy and lead to more jobs and higher wages, you are better off. Lower interest rates also tend to encourage investors to scale back on investments that are linked to interest rates (like corporate bonds) and instead to shift over to stock markets. If you own stocks, you benefit from this effect; otherwise, not so much.

Thus, the challenge is to look at how different groups, defined by age and income, are likely to be affected by monetary policy–and in turn how that affects the extent of inequality. After working their way through the evidence, they argue:

On the one hand, the incidence of the individual channels of monetary policy transmission to households is quite uneven. For example, mortgage payments and stocks have much stronger effects at the top of the wealth distribution, while other debt services and labor income have stronger effects at the lower end. On the other hand, once aggregated across all channels, the overall consumption changes are much more evenly distributed. … While there are some differences across groups, we view them overall as relatively modest. … The key takeaway is that … expansionary monetary policy roughly scales up everyone’s consumption by the same amount as the aggregate, leaving each household’s share of total consumption approximately unchanged.

The “dual mandate” of the Federal Reserve’s monetary policy is to worry about output and jobs when those seem at risk, and to worry about inflation when it seems to be rising. If the Fed were to add inequality as another objective, then the central bank would have to address the question of whether it should, in some situations, allow either more inflation or higher unemployment in pursuit of fighting inequality. But the McKay and Wolf essay suggests that monetary policy does not lead to substantial shifts in inequality in the first place. Thus, when it comes to the nail of economic inequality, monetary policy is not the hammer you are looking for.