Offshoring is when US companies shift production to other countries. Onshoring is when at least some of that production that had been done abroad shifts back inside US borders. Friend-shoring is when countries decide, as a matter of policy, to emphasize and focus on trade with friendly nations. The idea of friend-shoring is being discussed at high levels, as T. Clifton Morgan, Constantinos Syropoulos, and Yoto V. Yotov note in “Economic Sanctions: Evolution, Consequences, and Challenges” (Journal of Economic Perspectives, Winter 2023; full disclosure, I work as Managing Editor of JEP).
The main focus of their article is on the rising use of trade sanctions and how well they work (as discussed here). But given the rise in US protectionism that started during the Trump administration and has continued into the Biden presidency, the prospect of friend-shoring also comes up. They write:
Finally, as states seek to reduce their vulnerability to sanctions, what ramifications may arise for the international economic and political system? For example, when members of the World Trade Organization mix trade and security policies, as has been the case in the Russia-Ukraine crisis, the survival of WTO and the rules-based approach to policymaking may be at risk. Comments from prominent officials suggest that the United States and the European Union may already be moving away from global multilateralism toward cooperation with limited circles of friends.
In [US Secretary of the Treasury] Janet Yellen’s (2022) words: “[W]e need to modernize the multilateral approach we have used to build trade integration. Our objective should be to achieve free but secure trade. We cannot allow countries to use their market position in key raw materials, technologies, or products to have the power to disrupt our economy or exercise unwanted geopolitical leverage. So let’s build on and deepen economic integration . . . And let’s do it with the countries we know we can count on. Favoring the friend-shoring of supply chains to a large number of trusted countries . . . will lower the risks to our economy as well as to our trusted trade partners.”
In similar spirit, Christine Lagarde (2022), President of the European Central Bank, remarked: “Russia’s unprovoked aggression has triggered a fundamental reassessment of economic relations and dependencies in our globalised economy . . . Today, rising geopolitical tensions mean our global economy is changing . . . [O]ne can already see the emergence of three distinct shifts in global trade. These are the shifts from dependence to diversification, from efficiency to security, and from globalisation to regionalisation.”
Given the disruptions of the pandemic, Russia’s invasion of Ukraine, and China’s military build-up, there are of course reasons for firms everywhere to shorten their supply chains and to practice some combination of reshoring and friend-shoring. But the current favorite flavor of trade policy in the US and Europe does more than that: it is more about government subsidies for favored domestic industries than about recalculating the gains from trade. It looks as if policymakers may need to learn all over again that while some parts of an economy can subsidize other parts, gains for the favored firms that receive such subsidies do not mean that the economy as a whole is actually better off. In addition, whatever the geopolitical reasons for reducing international trade, such reductions impose economic costs.
I won’t rehearse those arguments here. But remember when President Trump first imposed substantial trade sanctions on China? He tweeted in March 2018: “When a country (USA) is losing many billions of dollars on trade with virtually every country it does business with, trade wars are good, and easy to win. Example, when we are down $100 billion with a certain country and they get cute, don’t trade anymore-we win big. It’s easy!” He imposed tariffs, and here’s the pattern of the US trade deficit since then (red line is the trade deficit for goods only; green line is for goods and services combined). At this point, it’s pretty clear (as it has been many times before) that imposing tariffs is not the answer to reducing the US trade deficit, nor to jump-starting US productivity and competitiveness.
Less trade and more government subsidies to favored industries and firms is not a likely path to improved trade balances, or to prosperity in general.
For those who do not follow the ins and outs of academic macroeconomics, it is perhaps useful to say that there has been an ongoing struggle in recent decades between what is sometimes called freshwater and saltwater economics.
At risk of being struck dead by lightning for gross oversimplification, I’ll just say that freshwater economists tended in the 1970s and 1980s to cluster around places like the University of Chicago, the University of Minnesota, the University of Rochester, and Carnegie Mellon University. Their version of macroeconomics tended to emphasize the themes that economic fluctuations were caused by shocks to supply (like technology) and that discretionary federal macroeconomic policy was likely to have weak or even counterproductive effects. In contrast, the saltwater economists at that time tended to congregate at places like Harvard, Yale, and Berkeley. Their version of macroeconomics tended to emphasize that economic fluctuations were caused by shocks to demand (like bank failures or credit boom-and-bust cycles) and that discretionary federal macroeconomic policy was not only useful but also necessary in helping to offset such shocks.
Over the decades, the two schools have become somewhat intertwined in the form of what is sometimes called “New Keynesian” economics (for discussions, see here and here). But the spirit of the old dividing lines still remains. The saltwater economists accuse their freshwater siblings of being slaves to models that assume excessively rational people and excessively perfect markets; in response, the freshwater economists accuse their saltwater kinfolk of promiscuously adding theoretical restrictions for immediate convenience, without digging deeply enough into their foundations and implications.
Paul Krugman, as a certified saltwater economist, offers a thoughtful explanation of the merits of this approach as exemplified in the macroeconomics of another certified saltwater economist in “The Godley–Tobin Memorial Lecture: The Second Coming of Tobinomics” (Review of Keynesian Economics, Spring 2023, vol. 11, issue 1). Krugman writes:
James Tobin was, obviously, a Keynesian in the sense that he believed that workers can and do suffer from involuntary unemployment, and that government activism, both monetary and fiscal, is necessary to alleviate this evil. But he wasn’t what people used to call a hydraulic Keynesian, someone who imagined that you could analyse the economy by positing mechanical relationships between variables like personal income and consumer spending, leading to fixed, predictable multipliers on policy variables like spending and taxes. …
Instead, Tobin was also a neoclassical economist. That is, he believed that you get important insights into the economy by thinking of it as an arena in which self-interested individuals interact, and in which the results of those interactions can usefully be understood by comparing equilibria — situations in which no individual has an incentive to change behaviour given the behaviour of other individuals.
Neoclassical analysis can be a powerful tool for cutting through the economy’s complexity, for clarifying thought. But using it well, especially when you’re doing macroeconomics, can be tricky. Why? It’s like the old joke about spelling ‘Mississippi’: the problem is knowing when to stop.
What I mean is that it’s all too easy to slip into treating maximising behaviour on the part of individuals and equilibrium in the sense of clearing markets not as strategic simplifications but as true descriptions of how the world works, not to be questioned in the face of contrary evidence. Notably … perfectly clearing markets wouldn’t have involuntary unemployment. So if you’re a neoclassical economist who doesn’t know when to stop, you end up denying that there can be recessions, or that, say, monetary policy can have real effects, even though it takes only a bit of real-world observation to see that these propositions are just false.
So part of the art of producing useful economic models is knowing when and where to place limits on your neoclassicism. And strategic placing of limits is a large part of what Tobinomics is about.
What do I mean by placing limits? Tobin was, first of all, willing to ditch the whole maximisation-and-equilibrium approach when he considered it of no help in understanding economic phenomena — which was the case for his views on labour markets and inflation, which I’ll get to later in this paper.
Where he did adopt a neoclassical approach, he did so using two strategies that economists need to relearn. First, he was willing to be strategically sloppy — to use the idea of self-interested behaviour as a guide to how people might behave without necessarily deriving everything from explicit microfoundations. Second, he was willing to restrict the domain of his neoclassicism — applying it to asset markets but not necessarily to goods markets or the labour market.
Krugman illustrates his argument with a detailed example from Tobin’s work, but for my purposes, I’ll stop there.
I like the old joke about the problem with spelling “Mississippi,” which seems applicable to a number of real-world and academic situations. It can often be a useful exercise to start with a pure theory, and then take a steam shovel to dig into its foundations and a telescope to look out at its possible implications. But when you reach the stage of bringing a pure theory to data, especially in the social sciences, it becomes necessary and practical to introduce a degree of strategic sloppiness; for example, the data or the setting you have to work with often will not exactly match the pure theoretical assumptions. The choice of which kinds of real-world strategic sloppiness are most relevant to a given question will often be central to the real-world controversy.
Building mass transit infrastructure in the United States is considerably more expensive than in other countries. Why? Eric Goldwyn, Alon Levy, Elif Ensari and Marco Chitti dig down into case studies to get some answers in Transit Costs Project: Understanding Transit Infrastructure Costs in American Cities (New York University, Marron Institute of Urban Management, February 2023).
The main focus of the report is to look in some detail at extensions of mass transit in Boston and New York, and also at projects in Sweden, Italy, and Istanbul. The disadvantage of a case study method, of course, is that you need to be cautious about drawing conclusions from a small number of examples. The counterbalancing advantage is that you can dig deeply into the details of each individual case. But they do offer some big-picture statistics as well.
The problem of high costs is nationwide. According to our database (Transit Costs Project n.d.) of more than 900 projects in 59 countries, including Hong Kong, the United States is the sixth most expensive country in the world to build rapid-rail transit infrastructure. This is slightly misleading, however, because construction costs scale with the percentage of tunneled track, which is more expensive than building rail at grade. The five countries with greater average costs than the United States are building projects that are more than 65% tunneled. In the United States, on the other hand, only 37% of the total track length is tunneled (Figure 1). … Therefore, it is valuable to understand what it is about the physical, institutional, and social situation of American cities that frustrates subway expansion dreams.
What kinds of factors explain the difference?
The US rail transit stations are often “overbuilt,” meaning that they are built much longer than the actual train platforms, and are also sometimes built with nice high vaulted ceilings. In addition, the stations in Boston and New York were not standardized: for example, “three stations [in New York] used two different escalator contractors and have a different number of exits, crossovers, and elevators, all of which raise design costs because each station needs to be customized rather than using a standard design that is modified slightly.” Thus, when you compare the costs of “systems and station finishes” to the costs of “tunnels and station civil works,” the ratio of those costs is about 50:50 in New York, but 25:75 in Paris, Milan, and Sweden.
Labor is a much larger share of transit construction costs in the US: “In New York as well as in the rest of the American Northeast, labor is 40-60% of the project’s hard costs, according to cost estimators, current and former agency insiders, and consultants with knowledge of domestic projects. Labor costs in our low-cost cases, Turkey, Italy, and Sweden are in the 19-30% range; Sweden, the highest-wage case among them, is 23%. The difference between labor at 50% of construction costs and labor at 25%, holding the rest constant, is a factor of 3 difference in labor costs …”
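To see the arithmetic behind that factor-of-3 claim, here is a minimal worked example of my own (not taken from the report): hold non-labor costs fixed at some amount N and back out the labor spending implied by each labor share.

```latex
% Illustrative arithmetic (my own sketch, not from the Transit Costs Project report).
% Let N be non-labor costs, held constant, and s the labor share of total costs.
% Then labor spending L satisfies s = L/(L+N), so L = sN/(1-s).
\[
s = \frac{L}{L+N} \;\Longrightarrow\; L = \frac{sN}{1-s},
\qquad
\frac{L_{s=0.50}}{L_{s=0.25}} = \frac{0.50N/0.50}{0.25N/0.75} = \frac{N}{N/3} = 3.
\]
% Moving from a 25% labor share to a 50% labor share, with everything else
% unchanged, means spending three times as much on labor.
```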
Overlapping and redundant bureaucracies also add to US transit construction costs. “[W]e also found overstaffing of white-collar labor in New York and Boston (by 40-60% in Boston), due to general inefficiency as well as interagency conflict, while little of the difference (at most a quarter) comes from differences in pay. … We have identified numerous cost drivers that stem from procurement norms in the United States. These include a pervasive culture of secrecy and adversarialism between agencies and contractors, a lack of internal capacity at agencies to manage contractors, insufficient competition, and a desire to privatize risk that leads private contractors to bid higher. Overall, this raises costs by a factor of 1.85, with the extra money going to red tape, wasted contingencies, paying workers during delays, defensive design, and, owing to contractor risk, profit.”
There are also “soft costs”: “Soft costs include design, planning, force account, insurance, construction management, and contingencies; breakdowns differ by city. Nonetheless, we harmonized definitions around third-party design, planning, and project management costs. Those add 5-10% on top of the hard contract costs in our comparison cases, most commonly 7-8%. But in English-speaking countries, soft costs add much more; for Second Avenue Subway, it was 21%.”
Thus, if you want to know why it is so hard and costly for US cities to build transit systems, while other cities around the world seem to build and extend such systems much more easily, a big part of the answer is the high-cost model of design, staffing, and procurement that US cities have used for building such infrastructure.
A common proposal here is to try to set up construction contracts so that the risk of higher costs and cost overruns would be carried by private firms, not by taxpayers. The authors find little evidence that these kinds of reforms work:
Moreover, many ongoing reforms hailed as steps forward, which we call the globalized system in the Sweden report, at best do nothing and at worst are actively raising costs; these reforms all aim to privatize risk and have been popular throughout the English-speaking world, and while consultants, managers, and large contractors like them, costs grow sharply wherever they are implemented, such as England, Singapore, Hong Kong, and Canada. … The good news is that high-cost countries can adopt the practices of low-cost countries and build subways at costs more in line with those of low-cost Scandinavia, Southern Europe, and Turkey. To do this, it requires rethinking design and construction techniques, labor utilization, procurement, agency processes, and the use of private real estate, consultants, and contingencies. If it implements the best practices we detail in the rest of the overview, the highest-cost city in our database, New York, can reduce its construction costs to match those of Italy and match or even do better than Scandinavia.
Of course, the US inability to build transit at a reasonable cost is part of a bigger problem: along a variety of dimensions, the US economy has a diminished ability to build, whether we are talking about mass transit, green energy, or housing.
For example, if the United States is serious as a society about a vast expansion of solar and wind energy (a proposition I find myself doubting), then we will need not only very large commitments of land to solar and wind projects, but also a vast physical expansion of the electrical grid–along with facilities for manufacturing and recycling the equipment and a willingness to mine for the copper and other raw materials needed. A report on some of the permitting reforms needed for those steps to happen is here.
Similarly, if we are serious as a society about affordable housing (a proposition it is also possible to doubt), then many cities need a vast expansion of their housing stock, not mostly built at the extreme outer edges of cities where land is cheap, but instead focused on locations within city boundaries where some combination of underused office and commercial space, surface parking lots, and even certain areas in residential neighborhoods (say, those near mass transit or the empty airspace above some local shops) will need to be committed to additional housing units.
None of this needs to mean a free-for-all of building. Zoning can still matter! But current US rules for construction in many areas often look remarkably like the rules you would have in place if you wanted to discourage and slow construction, and raise its costs, rather than the rules you would have in place to facilitate such projects.
Starting in the Great Recession of 2007-9, the Federal Reserve adopted a policy of “quantitative easing,” in which the Fed purchased and held substantial quantities of financial assets like US Treasury bonds and federally guaranteed mortgage-backed debt. The problem at the time was that the Fed had reduced its policy interest rate to near-zero. The idea was that if the Fed held such assets, then interest rates could be somewhat lower, compared to a situation where these assets were being sold in financial markets.
An obvious question raised at the time was whether the Fed was, in effect, just printing money to cover government borrowing. “Monetizing” the debt in this way is typically thought to be a lousy idea, because it leads to inflationary pressures, and even to doubts about the true value of the debt on financial markets. But Fed economists at the time drew a clear distinction between quantitative easing and monetizing the debt. Here are David Andolfatto and Li Li of the St. Louis Fed on the subject (“Is the Fed Monetizing Government Debt?” February 1, 2013):
What is usually meant by “monetizing the debt,” however, is the use of money creation as a permanent source of financing for government spending. Thus, to ascertain whether the Fed has in fact monetized its purchases of $1.2 trillion in government bonds since 2008, we have to know what the Fed intends to do with its portfolio of assets over time.
If the recent rapid accumulation of Treasury debt on the Fed’s balance sheet constitutes a permanent acquisition, then the corresponding supply of new money would be expected to remain in the economy (as either cash in circulation or bank reserves) permanently as well. As the interest earned on securities held by the Fed is remitted to the Treasury, the government essentially can borrow and spend this money for free. If, on the other hand, the recent increase in Fed Treasury debt holdings is only temporary (an unusually large acquisition in response to an unusually large recession), then the public must expect that the monetary base at some point will return to a more normal level (through sales of securities or by letting the securities mature without replacing them). Under this latter scenario, the Fed is not monetizing government debt—it is simply managing the supply of the monetary base in accordance with the goals set by its dual mandate. Some means other than money creation will be needed to finance the Treasury debt returned to the public through open market sales.
Here’s a figure from the Fed showing the evolution of its asset holdings over time. You can see the build-up in assets after the Great Recession–those first waves of quantitative easing. You can then see a modest decline in Fed asset holdings, as Andolfatto and Li described above. The idea during this time was that the Fed would simply hold the debt it had purchased until that debt matured, and thus gradually and slowly let its asset holdings phase down. A phrase commonly used was that the process would be about as exciting as watching paint dry.
But then the pandemic recession hit, the US government ran extraordinarily large deficits to fund its pandemic relief programs, and given the uncertainties in global financial markets, the Fed’s holdings of assets took another dramatic leap. For reference, the numbers are in millions of dollars, so the “8M” on the right-hand axis refers to eight million million–that is, $8 trillion.
So even if the Fed was not monetizing federal debt back in 2013 or so, is it doing so now? What’s the current plan for phasing down the now much-larger Fed holdings of government debt? Huberto M. Ennis and Tre’ McMillan of the Federal Reserve Bank of Richmond lay out part of the plan in “Fed Balance Sheet Normalization and the Minimum Level of Ample Reserves” (Economic Brief No. 23-07, February 2023).
Ennis and McMillan write:
From the beginning of the pandemic through the spring of 2022, the Fed’s balance sheet increased significantly due to the Fed’s efforts to aid market functioning and support the flow of credit to households and businesses. Reserves in the banking system increased to record highs, well beyond levels desired by the Fed in the long run. With financial and economic conditions improving, the Fed started the process of balance sheet normalization in March 2022, whereby it intends to significantly reduce the amount of Treasuries and mortgage-backed securities (MBS) that it holds in its System Open Market Account (SOMA) portfolio.
So what is going to happen, and how far and how fast? Here, the desired reduction in Fed assets over time–basically, the Fed holding less in Treasury bonds and in federally guaranteed mortgage-backed securities–intersects with the imperatives of conducting everyday monetary policy. The main tool that the Fed uses for conducting monetary policy is to change the interest rate it pays on reserves that banks hold at the Fed, and in this way to affect its key policy interest rate, called the federal funds rate. A Fed website explains:
The FOMC [Federal Open Market Committee] has the ability to influence the federal funds rate–and thus the cost of short-term interbank credit–by changing the rate of interest the Fed pays on reserve balances that banks hold at the Fed. A bank is unlikely to lend to another bank (or to any of its customers) at an interest rate lower than the rate that the bank can earn on reserve balances held at the Fed. And because overall reserve balances are currently abundant, if a bank wants to borrow reserve balances, it likely will be able to do so without having to pay a rate much above the rate of interest paid by the Fed. Typically, changes in the FOMC’s target for the federal funds rate are accompanied by commensurate changes in the rate of interest paid by the Fed on banks’ reserve balances, thus providing incentives for the federal funds rate to adjust to a level consistent with the FOMC’s target.
Thus, as a first step, when the Fed wants to raise interest rates, it raises the interest rate it pays on bank reserves. For this policy to work, there need to be “ample” bank reserves. As Ennis and McMillan write:
The idea is for the Fed to maintain a balance sheet large enough to accommodate growth in currency in circulation plus an ample quantity of bank reserves. “Ample” means that reserves are plentiful enough to not carry any significant convenience yield. In other words, banks should value the marginal unit of reserves for the interest on reserves that they earn, but not because that marginal unit facilitates the daily operations of the bank holding it in any meaningful way.
What does that “enough but not too much” language mean in practical terms? They suggest that the goal should be to get bank reserves back to their level in about 2019, before the run-up of Fed assets during the pandemic. (This seems a commonly held view at the Fed: for example, scroll down and see similar comments from Christopher Waller.) Here’s an illustrative figure with their calculations out to 2029.
For our purposes, here are the important lines. The top line shows the total holdings of Fed assets, as shown above. The light blue line shows bank reserves actually held; the red line shows their calculation of “minimum ample reserves.” You can see their projection that bank reserves will decline in the next year or two until they reach the “minimum ample” level. The bottom orange TGA line shows the “Treasury General Account” at the Fed: you can see how it bounced up at the start of the pandemic, when the Treasury wanted to have cash on hand to make payments as legislation and emergencies dictated, but then dropped again.
All of this raises two questions. The first question is practical: How does the Fed plan to make this happen? The answer is that as the Treasury debt and mortgage-backed securities the Fed holds mature and pay off, the Fed will not reinvest the proceeds in new debt. Thus, the value of the debt it is holding will diminish over time. Ennis and McMillan estimate that this process will reduce Fed holdings of debt by about $80 billion per month. Thus, the “ample” level of bank reserves would be reached by about 2026.
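As a rough back-of-the-envelope check on that timeline, here is a toy calculation of my own, not from Ennis and McMillan; the $3 trillion gap is a round, hypothetical number chosen purely for illustration, while the roughly $80 billion per month runoff pace is their estimate as described above.

```python
# Toy runoff calculation (my own sketch, not the Ennis-McMillan model).
# Hypothetical assumption: Fed asset holdings need to shrink by roughly
# $3 trillion before bank reserves reach the "minimum ample" level.
excess_to_run_off_millions = 3_000_000   # $3 trillion, expressed in millions of dollars (hypothetical)
runoff_per_month_millions = 80_000       # about $80 billion per month, per Ennis and McMillan

months = excess_to_run_off_millions / runoff_per_month_millions
print(f"Months of runoff: {months:.0f}")       # about 38 months
print(f"Years of runoff:  {months / 12:.1f}")  # a bit over 3 years

# Starting the clock in early 2023, a bit over three years of runoff at this
# pace lands around 2026, consistent with the timeline described above.
```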
The second question is harder: Is this goal for reducing Fed assets the right one? Here, it’s worth pointing out that the policy goalposts have been shifted. Back in 2013, you’ll remember, Andolfatto and Li argued that the first wave of quantitative easing was not “monetizing the debt,” because it wasn’t a permanent step, and debt held by the Fed was being phased down. Now here we are in 2023, and even if the Fed manages to stick to its plan of phasing down its Treasury debt holdings in the next three years, total Fed assets would still be about $7 trillion by 2029. What looked like a slow phase-down of Fed assets back in 2013 will be starting to look like a permanent and generally rising pattern of Fed assets by 2029–which was the working definition of monetizing the debt.
I don’t see any easy choices for the Fed here. The optics of having a central bank carry out a large-scale sell-off of the debt of its own country would not be good in financial markets. But the alternative approach of reducing the Fed’s holdings by letting its securities mature without reinvesting the proceeds is slow. As I see it, the Fed seems to be hoping that after having experienced two once-in-a-century events in the last 15 years–the Great Recession of 2007-9 and then the pandemic recession–it just won’t feel a need to do quantitative easing (or “monetizing the debt”?) again in the next decade or so.
The COVID-19 pandemic brought a surge of telemedicine, especially when public and private health insurers became willing to reimburse for it. Now, with the fading of the pandemic and its effect on day-to-day life, many health care providers seem poised to go back to the old ways. But even if telemedicine was overused during the pandemic, it was almost certainly underused before. Here are some insights from a source I was not expecting: a group of doctors, medical students, and telemedicine professionals involved in the Ukrainian relief efforts. Jarone Lee, Wasan Kumar, Marianna Petrea-Imenokhoeva, Hicham Naim, and Shuhan He describe their experience in “Telemedicine in Ukraine Is Showing That High-Tech Isn’t Always Better” (Stanford Social Innovation Review, March 2, 2023).
The authors describe their work with a Ukrainian-built telemedicine platform called Doctor Online, and their efforts to find hundreds of Ukrainian- and Russian-speaking health care providers willing to volunteer to participate. Surveys of health care providers in Ukraine report widespread use of the platform.
Doesn’t seem all that applicable to the United States? The authors write:
Across the United States, 80 percent of rural areas qualify as federally designated “medical deserts.” This means that approximately 30 million people live at least an hour away from the nearest hospital with trauma services. Furthermore, many people suffer from chronic illnesses or social barriers that prevent them from accessing health care through physician offices and hospitals. Telemedicine could be an effective solution to reducing their suffering. For instance, telemedicine could allow someone with anxiety to receive mental health therapy from the comfort of their home, while a patient who noticed a suspicious lump on their skin could have a virtual consultation to help determine how serious it is.
The authors emphasize that many of their telemedicine contacts in Ukraine happened via text-message.
Texting has the obvious advantage of needing less connectivity, especially in a war zone. It has also become a generationally ingrained practice worldwide, while many people can struggle with video calls. But texting also allows a more productive use of the scarce resource: the time of clinicians. They can respond asynchronously, when it is most convenient, as can patients. With texting, telemedicine can handle a great many more patients than if it relied on synchronous technology. American providers are beginning to catch on. CirrusMD is a “text-first” virtual primary care platform, where patients begin each visit by sending a text to a physician. They can send images or host video calls and receive referrals to specialists. Asynchronous messaging allows for greater back-and-forth between a busy clinician and the patient.
There is a tradition in the provision of health care–a tradition that has some rational basis–that health care providers should meet with patients and do a reasonably full examination before treatment. But there are surely times when a full in-person visit seems like overkill. The authors cite one study involving children who had had appendectomies. Some of the families had access to text messaging for questions; some did not. Those with text messaging ended up at emergency rooms less than half as often. One suspects there are many other examples.
I want to focus on the longer-term implications of the pandemic for residential and commercial real estate markets, looking out beyond the current cycle. Its most long-lasting implication in my opinion is the dramatic increase in remote work. Born out of necessity, remote work now appears to have taken hold as a permanent feature of modern labor markets. It is a benefit that employees enjoy and are willing to pay for. Their tolerance for commuting appears to be permanently reduced. Having experienced the flexibility that comes with working from home (WFH), the genie is out of the bottle. Firm managers too have come around to see its virtues, often in the form of higher productivity and profits, and have adjusted their own expectations about the number of days they expect employees to be in the office. Several firms have gone fully remote, while most others have moved to a hybrid work schedule of 2–3 days in the office. Various indicators of office demand appear to have stabilized at levels far below their prepandemic high-water marks.
Using data from the American Time Use Survey carried out by the US Bureau of Labor Statistics, they estimate that about 5% of paid full days were worked from home before the pandemic hit. Early in the pandemic, this share spiked above 60% of all paid work days. Now it has dropped to less than 30% of all paid work days. The red line shows data from the Household Pulse Survey being run by the US Census Bureau, which has been finding very similar results.
Of course, this percentage is an overall average between those who are always at the workplace, those always working from home, and hybrid arrangements in-between. Before the pandemic, jobs tended to be either all at a workplace or all at home. The widespread use of hybrid arrangements is new.
How has this shift affected real estate? Van Nieuwerburgh offers many measures, and I’ll just pass along a few of them here. In terms of office space, the Kastle company gathers “turnstile” data from the entrance areas of office buildings. (It should be noted that there are questions about whether this measure or these buildings are representative of all office space use, but again, there are a number of measures from varying sources on this point with the same general pattern.) If the office occupancy rate was 100 before the pandemic, it’s still only around 40 now. The highly teched-up San Francisco metro area has the lowest office occupancy rate.
Another measure is to look at revenue from leased office space. It’s worth remembering that commercial leases often last 5-10 years, so a number of leases have not yet come up for renegotiation since the pandemic hit. Thus, the drop-off in leasing revenue shown here is likely to persist in the future.
Finally, here’s a pattern on home sales and rental prices, using data from New York City. The horizontal axis shows how close the residence is to the center of New York City. The vertical axis shows the rise in rents or home prices. In the left-hand panel, the green line shows that over the six years or so before the pandemic, rents rose about the same (that is, the green line is pretty flat) whether you were closer to or farther from the center of the city. The red line shows that after the pandemic hit, rents closer to the center of the city dropped, while rents farther from the city center rose. The right hand panel shows that price growth for homes was higher for locations near the center of the city before the pandemic (green line), but price growth for homes near the city center was lower–in fact, was negative–after the pandemic (red line).
The pandemic knocked loose a number of old working arrangements. Let’s assume, as seems plausible, that a large share of previous commuters make a permanent switch to working from home at least a day or two a week–or perhaps more. What are some of the possible implications for real estate, for cities, and for productivity? The research here is of course quite preliminary, but here are some issues.
Lots of people really hate commuting. Having had a chance to do less of it, they really don’t want to go back full-time. On the other side, not everyone wants to work at their kitchen table or on their living room sofa, either. Thus, one possibility is that we will see a rise of satellite offices located in suburbs, or “co-working” offices in the suburbs where you can show up for the day. Such locations can offer some logistical support, like a printer that works or rooms for virtual meetings, and employers would often prefer to know that their employees have at least gotten out of the house.
If many workers are going to be on a hybrid schedule, maybe working from home a couple of days each week, various coordination problems arise. How many days at home? If one goal is for people to be in the office together, then there must be agreement on what days people will be in the office. Employers may have concerns about having Monday and Friday be work-from-home days, for example, out of a fear that they are implicitly agreeing to a three-day workweek. Employers might want a situation where departments come to the office all together: perhaps marketing is there on Mondays and Tuesdays, and human resources is there on Thursdays and Fridays–and the two departments now share the same space. Personal spaces at the office may be reduced: after all, if you’re only going to be there 2-3 days a week, do you really need your own office or cubicle? Maybe you can do just fine with a rolling cart that has your stuff piled on it, and you just roll it over to an open space and grab a chair when you are in the office. Downtown employers might want more spaces for in-person and virtual meetings, or more flexible space where partitions can be rolled out and back, depending on who is in the office and what is needed that day.
With so many fewer workers downtown, and with the ongoing rise of online shopping, urban retail has suffered a huge decline (which of course means yet another group of people no longer working downtown). In theory, workers on a hybrid schedule could do their downtown shopping on days that they are commuting to the office, but that doesn’t seem to be happening. Thus, work-from-home is likely to stagger the downtown retail sector as well.
The economic base of urban centers will shift. With less office work and less retail, they will become more reliant on restaurants and entertainment. They might also become more reliant on the buying power of people who actually live there.
Lots of cities have a housing shortage, in the sense that demand has been driving prices ever-higher in the face of limited supply. But what if some of that less-used office and retail space could be converted to residential space? There are a bunch of tough issues here. In a pure structural sense, lots of office space is not laid out like residential space: as a basic example, the plumbing and hallways aren’t the same. One can imagine a large square office building divided into long narrow apartments with a window at the far end–but it’s not necessarily an attractive vision and it may run afoul of various residential building codes. The construction costs of such conversions are high, and you can bet that city councils are already salivating at the chance to dictate what kinds of conversions would be allowed and at the idea of setting prices and rental rates. It feels to me as if there is a huge opportunity here to bring housing to cities, and as if the political and economic constraints are likely to strangle that opportunity.
In the long run, will the work-from-home pattern add to productivity? The evidence on this point is mixed. Early in the pandemic, several studies found that productivity remained pretty high with work-from-home. But during that time, workers at a lot of firms were also making special efforts to help out and get by. Over time, it became apparent that some jobs are more suited for remote work than others. Some workers who were scrupulous about giving a full-time effort from home when the pandemic first hit began to ease off over time. Firms began to worry that some activities of business–like certain kinds of brainstorming and strategizing, or certain kinds of information-sharing–worked better when people saw each other every day and could gather in groups. In a work-from-home world, it isn’t clear how on-the-job training works for new hires, or how new hires get informal face-time with other workers. It’s not clear that employee training works as well, either. If you work from home for one company, but could switch to working from home for another company, how loyal are you to either employer?
When it comes to the effects of work-from-home, we are all learning on the fly. There are also likely to be conflicts. At present, it seems to me that lots of employees are willing to go to the office some of the time, and lots of employers are willing to have work-from-home some of the time–but who decides is very much in flux.
The 16-19 and 20-24 age groups show the biggest decline in the share holding jobs from March 2000 to March 2022. Much of this is because increasing numbers in these age groups are spending more time in school; in addition, they have become less likely to work at either part-time or full-time jobs while in school.
The age groups over 55 all have a rising share with jobs. This shift reflects in part improved health for older Americans, in part incentives built into programs like Social Security to retire later, and in part a willingness (especially among college-educated workers) to accept a later retirement in exchange for saving up a bigger nest egg before retiring.
The mystery is the declining share with jobs among what government statisticians refer to as “prime age” workers, those between the ages of 25 and 54. Montgomery doesn’t offer reasons for the decline of job-holding in this group, which is frankly mysterious. This is not a short-run phenomenon related to the pandemic. It is primarily accounted for by a decline in job-holding among men. The decline in job-holding by prime-age men has been going on for decades, so it seems unlikely that it can be accounted for by a particular law or rule change, or by the political party in power.
The plausible theories suggest that over a period of rising wage inequality, workers who feel stuck at the bottom of the wage distribution may give up on formal work–even if they are in many cases working off-the-books. In addition, the share of adult men who are unpartnered (that is, not married or cohabitating) is high, and single men are increasingly likely to live in the homes of their parents. The disconnectedness of these prime-age adults from the labor force represents a loss of economic production, but surely more important, it represents a substantial group–many of whom have not left the labor force, but instead stuck it out in low-paid jobs–who are living their prime-age years with frustration and resignation relative to their earlier-in-life aspirations.
One might expect that certain sectors of the economy will have faster productivity growth than others: for example, productivity seems likely to grow faster for semiconductor manufacturers than for gas stations. But one striking change both in the US economy and around the world in the last couple of decades is that, looking at firms within the same sector–that is, within the same general line of business–the productivity leaders have been expanding their lead over the productivity laggards.
This shift is driving other economic changes. For example, it turns out that a main factor behind increases in income inequality is the widening gap between high- and low-productivity firms in the same sector. To put it another way, whether you do relatively better or worse as a result of widening inequality may not have much to do with you personally, or where you live, or your job; instead, it’s about whether you work for a high- or low-productivity firm. The McKinsey Global Institute offers some thoughts about this issue and other productivity-related topics in “Rekindling US productivity for a new era” (February 16, 2023). The report argues:
The most productive firms in every sector have widened their lead on the rest. In fact, the gap between the most and least productive is wider within sectors than in any other dimension we studied. Manufacturing provides a particularly striking example; leading firms operate at 5.4 times the productivity of laggards. In some manufacturing subsectors, the differences are extraordinary. The leading semiconductor manufacturers are 38 times more productive than the least-productive companies. This mirrors other research showing similar patterns of divergence across other sectors such as wholesale trade and information.
The “frontier firms” in the productivity vanguard are accelerating away from their peers. These firms tend to be larger, more connected to global value chains, and focus on technology-intensive aspects of their sector. Research suggests these leading firms invest 2.6 times more in technology and other intangibles such as research and intellectual property, and attract and invest in more skilled talent.
As a result, the gap between frontier firms and laggards has grown over the past 30 years. In manufacturing, the gap was 25 percent wider in 2019 than it was in 1989, with most of that change happening before 2000. At the same time, industry dynamism has fallen, as seen in metrics such as new firm entry rate (which has declined 29 percent from 1989 to 2019 in the United States) and labor reallocation rates (which are down 31 percent across sectors).
Standard economic principles would suggest that less productive firms would be replaced or would improve their performance. Researchers have offered multiple hypotheses for why this has not happened. For example, there is evidence that firms within the same sector may coexist without fully competing, by serving different customers, attracting different workers, or operating in different geographic markets. Finally, some researchers have pointed to declining measures of competition as a source of the divergence, which remains a matter of active debate.
Whatever the explanation for growing divergence, productivity gains must ultimately come from firms. If laggards don’t catch up or get replaced by more productive firms, US productivity will continue to splutter. For business leaders, the message is clear: improving your firm’s performance matters much more than the productivity of the sectors in which you operate.
As the McKinsey report points out, gains in labor productivity are fundamental for national prosperity. The key issue to remember here is that productivity gains build on each other. Thus, if productivity could be raised 1% per year, each year builds on the previous one, and after a decade the US economy would be (roughly) 10% larger. (Actually a little more than 10%, because the growth rate compounds over time.) The US economy is about $23 trillion in size right now, so being 10% larger involves gains of over $2 trillion. As I sometimes say, no matter whether your goal is higher wages or expanded government spending or tax cuts, it’s easier to achieve that goal in an expanding economy–where we are in effect arguing over how a growing pie will be divided up–than it is to accomplish your goals in a low-growth economy or even zero-sum economy, where gains for any particular goal require losses for other goals.
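For readers who want to check the compounding arithmetic, here is a quick sketch (my own illustration, not from the McKinsey report; the 1% growth rate and the roughly $23 trillion economy are the figures used in the paragraph above).

```python
# Compounding check for the claim above (my own illustration).
growth_rate = 0.01        # 1% productivity-driven growth per year
years = 10
gdp_trillions = 23        # approximate size of the US economy, per the text

cumulative = (1 + growth_rate) ** years - 1
print(f"Cumulative growth over {years} years: {cumulative:.1%}")          # about 10.5%, a bit more than 10%
print(f"Implied gain: about ${gdp_trillions * cumulative:.1f} trillion")  # roughly $2.4 trillion
```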
The MGI report discusses a number of ways for the US (or any nation) to improve productivity: better education and workforce skills, support for research and development, a competitive and evolving marketplace, and others.
Here, I want to emphasize a different lesson: The growing divergence between high- and low-productivity firms suggests that the challenge is not just one of cutting-edge innovation. Again, the cutting-edge firms across different sectors of the economy are doing pretty well at raising productivity. The challenge is to support an economic environment where the productivity laggards either keep pace or die off, rather than just keep falling farther behind.