One reason why cryptocurrencies like Bitcoin don’t work well as a medium of exchange for typical transactions (along with difficulties like slow transactions and high cost per transaction compared with standard currencies) is that their value fluctuates so much. When a person or a company promises to make or receive a payment in a few weeks or even a few months, it wants to know, in the present, what that payment will be worth.
Stablecoins may use various approaches to maintaining parity with their peg. At a high level, a distinction can be made between the following four types of stablecoin based on whether they claim to hold a pool of reserve assets to back their value (ie whether they are collateralised or not), and if so, the type of these reserve assets:
Fiat-backed stablecoins: stablecoins that claim to be backed by assets denominated in a fiat currency. Examples include Tether and USD Coin.
Crypto-backed stablecoins: stablecoins that claim to be backed by other cryptoassets. Examples include Dai and Frax.
Commodity-backed stablecoins: stablecoins that claim to be backed by commodities. Examples are PAX Gold and Tether Gold.
Unbacked stablecoins: stablecoins that do not claim to be backed by any reserves, but rather seek to maintain a stable value through, for instance, algorithms or protocols. Examples include TerraClassicUSD and sUSD.
Note that fiat-backed, commodity-backed and crypto-backed stablecoins are sometimes also defined as collateralised stablecoins, with the first two referred to as “off-chain collateralised” and the last as “on-chain collateralised” stablecoins.
It took several years before stablecoins obtained significant traction (Graph 1). The first stablecoin (BitUSD) was issued in July 2014, and five years later the total market capitalisation had grown to roughly five billion US dollars. It was not until the start of the Covid-19 pandemic that the market capitalisation started to rise steeply (Graph 1.A, event a). This has been attributed to the turbulence in the traditional financial markets following the Covid-19 outbreak and the sharp decline of the price of Bitcoin, which led investors to turn to stablecoins. Over the course of two years, the market capitalisation grew more than ninefold, and in March 2022, it was more than 35 times higher than at the onset of the pandemic.
Most of the growth was driven by a strong increase in the market capitalisation of Tether. Tether was launched in 2014. While it quickly became the largest stablecoin, it started to gain traction only in 2021. Many other stablecoins were also launched during the pandemic: the total number of active stablecoins grew from 13 at the beginning of 2020 to 40 at the end of 2021. Initially, the stablecoin market consisted mainly of fiat-backed stablecoins. However, various stablecoins that entered the market over the course of 2021 were crypto-backed or unbacked stablecoins. In April 2022, fiat-backed stablecoins accounted for around 80% of the total stablecoin market in terms of market capitalisation.
The growth of the stablecoin market came to a halt in the first half of May 2022 when the crypto ecosystem was shaken up by the crash of various cryptoassets. Among these was Terra’s (unbacked) stablecoin “TerraUSD”, the third largest stablecoin at the time (Graph 1.A, event b). TerraUSD’s collapse was caused by its inability to redeem users’ holdings at par. The TerraUSD crash caused unbacked stablecoins to lose almost all of their value. It also undercut the market capitalisation of fiat-backed and crypto-backed stablecoins. Overall, by the end of September 2022, the total market capitalisation of stablecoins had shrunk by more than a fifth to $151.4 billion.
The stablecoin market continued to shrink into 2023. Between April 2022 and the end of January 2023, the total capitalisation of the stablecoin market had shrunk by more than 25% to $138 billion. While much of the fall was triggered by the TerraUSD collapse, the bankruptcy filing of FTX, a major crypto exchange, in November 2022, accelerated the declining trend, although not as strongly as the May turmoil.
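As a back-of-the-envelope check on these figures, the percentage declines implied by the dollar values in the text can be computed directly. A minimal sketch in Python; the roughly $190 billion peak market capitalisation is my assumption (the text gives only the later levels and the fractional declines), while the September 2022 and January 2023 values come from the passages above:

```python
def pct_decline(peak, later):
    """Percentage fall from a peak value to a later value."""
    return 100 * (peak - later) / peak

peak = 190.0      # assumed peak market cap, USD billions (spring 2022)
sep_2022 = 151.4  # end-September 2022, from the text
jan_2023 = 138.0  # end-January 2023, from the text

print(round(pct_decline(peak, sep_2022), 1))  # a bit over 20% ("more than a fifth")
print(round(pct_decline(peak, jan_2023), 1))  # a bit over 25%
```

With a peak near $190 billion, both declines quoted in the text ("more than a fifth" by September 2022, "more than 25%" by January 2023) come out consistently.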
The left-hand panel shows a breakdown of the stablecoin market by how the assets are backed: as you can see, those backed by fiat currency (like the US dollar, as is the case with Tether) have most of the market. The right-hand panel shows the top five stablecoins by market size.
The authors emphasize that no stablecoin has yet managed to maintain a truly fixed value. Even Tether “had an average daily price volatility of about 2 percentage points between end-September 2022 and end-September 2023. This shows that to date, no stablecoin has been able to meet an important prerequisite of becoming a safe store of value – guaranteeing full price stability.” Unbacked stablecoins, although a small share of the overall market, can be just as volatile as any non-stablecoin cryptocurrency.
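For readers who want a concrete sense of what a "daily price volatility" figure means, a standard way to compute it is the standard deviation of day-over-day percent changes in price. A minimal sketch, using an entirely hypothetical price series for a coin pegged at $1.00:

```python
from statistics import pstdev

def daily_volatility_pp(prices):
    """Standard deviation of daily percent changes, in percentage points."""
    returns = [100 * (b - a) / a for a, b in zip(prices, prices[1:])]
    return pstdev(returns)

# Hypothetical daily prices for a coin pegged at $1.00.
prices = [1.000, 1.002, 0.998, 1.001, 0.999, 1.000]
vol = daily_volatility_pp(prices)  # roughly a quarter of a percentage point
```

Even this made-up series, which never strays more than 0.2 cents from the peg, shows measurable daily volatility; a figure of 2 percentage points, as quoted for Tether, implies far larger day-to-day swings.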
It’s not clear to me that stablecoins have yet evolved beyond a niche market. But I’m hoping anyone who plays around with these assets understands that they are not regulated like banks or other traditional financial markets. Some cryptocurrencies that refer to themselves as “stablecoins” are not actually backed by anything. And even the largest and best-known stablecoins can have their value move a few percent in a day.
Imagine that someone told you that a key issue will be determined by whether the countries of the European Union display “unity.” Is that statement encouraging, or despairing? Mario Draghi suggests (with a degree of optimism, I think?) that European unity about a common fiscal policy is the next step to make the euro function well in “The Next Flight of the Bumblebee: The Path to Common Fiscal Policy in the Eurozone” (NBER Reporter, October 2023, delivered as the 15th Annual Martin Feldstein Lecture).
(Most readers of this blog, I expect, recognize Draghi’s name as head of the European Central Bank from 2011 to 2019. Indeed, during a time of eurozone financial crisis in 2012, Draghi is sometimes credited with having “saved the euro” by making a forthright “whatever it takes” promise, when he said during a press conference: “Within our mandate, the ECB is ready to do whatever it takes to preserve the euro. And believe me, it will be enough.” Not everyone knows, however, that Draghi has a PhD in economics from MIT and was a professor at several Italian universities before getting into the central banking business.)
Draghi’s reference to the “flight of the bumblebee” is a reminder of the old line that bumblebees seem ill-suited to flight–but they fly anyway. Similarly, economists have been arguing since the 1990s that the economies of Europe may be ill-suited to a common currency–but they adopted one anyway.
The concern has to do with the theory of “optimal currency areas.” When does it make sense for two geographic areas to share a common currency, and when does it not? Imagine that two economic areas experience different “shocks.” Perhaps one area is dependent on the price of oil, but another is dependent on the price of wheat. Perhaps one area depends more heavily on manufacturing, while another depends more on computers and information technology.
If these two areas have different currencies, they can adjust to these shocks through shifts in the exchange rate. But if they are glued together with a single currency, then wages and prices in one area–measured by that common currency–will be shifting relative to the other. One area will feel “rich” and the other will feel “poor.”
If two areas are well-suited to be a common currency area, then there will be various adjustments to smooth out such differences over time. For example, workers will migrate from low-wage to high-wage areas; conversely, companies will shift their investment to take advantage of low-wage areas. Moreover, the central government will practice some degree of redistribution: the high-wage area will pay more in taxes, and the low-wage area will receive more in benefits.
But what happens if, because of various barriers (national boundaries, different regulations, culture and language barriers), workers and firms don’t move much between the high- and low-wage areas? What if the central government is relatively small and weak, so that it doesn’t practice a meaningful degree of redistribution? And what if, because of the common currency, no exchange rate adjustments are possible? In that setting, the lower-wage area may just be stuck in that position for a long time. This is arguably what happened in the US economy, where the southern states remained poor for decades from the late 19th century into the 20th century–until cross-region migration of workers and firms increased, along with an expanded fiscal role for the US government. It’s what is happening in modern Europe, as certain countries including Greece, Italy, and others seem stuck in a low-growth trap.
The bumblebee that is the euro continues to fly. As Draghi points out, the underlying assumption in adopting the euro was that even if the EU was not actually ready to be a common currency area in 2000, it would evolve in that direction. As he says:
But there was always another perspective, which was that the euro was the consequence of decades of past integration — notably the evolution of Europe’s single market — and that it was only one more step along a much longer road towards political union. And through the so-called “functionalist” logic of integration, where one step forward leads inexorably to the next as its shortcomings are revealed, the end goal of political union would drive the necessary macroeconomic changes. From this viewpoint, the key question was not whether the euro area was an optimal currency area from the start — evidently it was not — but whether European countries were prepared to make it converge towards one over time.
In some ways, this vision of greater economic mobility of workers, firms, and products across the countries of Europe has been coming true. Draghi notes:
Twenty-five years of economic integration have led to more integrated supply chains and more synchronized business cycles, making the single monetary policy more appropriate for all countries. Multiple studies find that business cycle synchronization in the euro area has risen since 1999 and the euro can explain at least half of the overall increase. At the same time, while labor mobility in the euro area remains some way short of US levels, studies have found a gradual convergence, reflecting both a fall in interstate migration in the US and a rise in the role of migration in Europe. And channels of risk sharing have improved further. For example, against the backdrop of banking sector integration — the so-called banking union — and generous official assistance, cross-border lending was notably more resilient during the pandemic than we had seen during previous large shocks. The further Europe can advance along this path — especially in terms of integrating its capital markets — the lower the need for permanent fiscal transfers will be.
But with all of these changes duly noted, it remains true that fiscal policy across the nations of Europe is dominated by individual countries, not by a centralized budget. US-style transfers from higher-wage to lower-wage areas are not possible. As Draghi points out, in the US states can be required to run balanced budgets, in part because the US federal government can run budget deficits when needed. But in a European context, every country can run budget deficits when it wishes to do so, which already contributed to one deep EU recession back in 2012-13.
Draghi’s vision for a common EU fiscal policy starts from a belief that EU countries have some shared goals: for example, a shared interest in higher defense spending in the aftermath of Russia’s invasion of Ukraine, and a shared interest in a transition to lower-carbon energy sources. As Draghi writes:
Whichever route we take, we cannot stand still or — like a bicycle — we will fall over. The strategies that had ensured our prosperity and security in the past — reliance on the USA for security, on China for exports, and on Russia for energy — are insufficient, uncertain, or unacceptable. The challenges of climate change and migration only add to the sense of urgency to enhance Europe’s capacity to act. We will not be able to build that capacity without reviewing Europe’s fiscal framework, and I have tried to outline the directions this change might take. But ultimately the war in Ukraine has redefined our Union more profoundly — not only in its membership, and not only in its shared goals, but also in the awareness it has created that our future is entirely in our hands, and in our unity.
I confess that for me, Draghi’s call for EU “unity” on these topics feels discouraging. Are the EU countries across eastern Europe, close to the Russian border, going to agree on defense policy with countries in the rest of Europe? Is France, with its nuclear power plants, or Norway, with its North Sea oil reserves, going to agree on energy policy with, say, Portugal and Greece? Is Draghi correct that without such agreement, the bicycle will fall over and the eurozone will be subject to another round of financial crisis? Or perhaps this bumblebee can just keep flying, even if economists can’t quite grasp how it is doing so.
For some of my previous efforts to explain the euro and the conditions for optimal currency areas, see:
Orley Ashenfelter has been doing a series of interviews with labor and industrial relations economists. The most recent is “Myra Strober on women, work, and feminist economics” (November 6, 2023, Episode 19, audio and transcript available). The description of her early education and career has some great stories, although some of them have a high wince coefficient:
When she was an undergraduate at Cornell’s School of Industrial Labor Relations in the late 1950s:
The most extraordinary experience at Cornell was freshman year, a course that we labeled Bus Riding 101. We went every Wednesday morning early on a bus to some factory within busing distance of Ithaca. And so, by the time the semester was over, we had visited a steel mill in Ithaca, a pajama factory somewhere in Pennsylvania, IBM, Corning Glass, and a coal mine where they had to get special permission for women to go down into the mine because it was considered bad luck to have women in the mines. And I have to say that exposure to work, real work by real people who were struggling, as a 18-year-old, was an extraordinary experience.
And I have to tell you that, years later, I was teaching a course in labor relations at the Stanford Business School, and we got to the part on grievance procedures. And the case that we were studying was in a chemical factory, and one of the employees was grieving because he was not permitted by the foreman to go and use the bathroom when he needed to use it. And two students in the class objected to this case. They said that they were not paying the kind of tuition that they were paying in order to read a case about somebody who wasn’t permitted to go to the bathroom. And something clicked in my head, and I said, “How many of you have ever been inside a factory?” Not a single student in that MBA class had ever been inside a factory.
Applying to the PhD programs in economics at Harvard and MIT in the mid-1960s:
Well, I restricted my search for graduate school to Boston because my fiancé, then my husband, was a student at Harvard Medical School, and I wanted to get married and live in Boston. And so luckily for me, there were two terrific programs in economics in Boston. Harvard was a non-starter. I had an interview at Harvard. It was extremely brief. The first question the interviewer asked me was, was I normal? And I, in turn, asked him what that meant. And he said, “Oh, you know. Do you want to get married and have children?” And I said, “Well, I’m already married.” And he said, “well there you have it” and he opened the door and showed me into the hallway. So that was the end of Harvard.
MIT was different. They accepted me. I was one of three women. One of them left at the end of the year. So, there were two women in my class, and then there were two women in the class ahead of me, two women in the class behind me. And then they accepted a nun the following year, assuming that she would not get married and have children, but she fooled them. She married a man who lived across the street from her. He left the church. She left the church. And so, there was no safety for MIT.
How she moved from a first job as a lecturer at Berkeley to a position at Stanford:
Yeah, so what had happened was before I came to Berkeley the previous spring, many women who were lecturers at Berkeley filed a complaint against Berkeley with the Labor Department for sex discrimination. So, I remember when the investigators came from the Labor Department, at first, they took a hotel room, then they decided they had to take an apartment because they’d be there for a while investigating sex discrimination at Berkeley. And so, eventually Berkeley made me an offer as an assistant professor. But Stanford did not have a suit filed against it because there were no women to file a suit.
I mean, not only did Stanford not have women faculty; they didn’t even have lecturers. So, Stanford got nervous that somehow there would be a complaint. And so I got an offer from the Stanford Business School to come and teach. I was the first woman faculty there. And that same year, in 1972, they hired their first Black man, their first Asian American man, and their first Hispanic man. So, it was a banner year. Stanford also hired the first woman faculty member in the law school and the School of Engineering. So they wanted to show that they were being good people.
Perhaps unsurprisingly, these kinds of experiences led Strober toward research interests involving the intersection of social constraints and economic motivations.
I always ask myself, “Why did the owners of steel mills in the 1890s go all the way to Eastern Europe to find new steel workers and spend money to pay their travel costs across the Atlantic and so on when they could have simply hired the wives of the current steel makers who were home ready to work?” But the idea of hiring women to work in steel mills, even though it would’ve been far more profit maximizing, they probably at that time could have even paid them less. They didn’t do it. Why not? Because social constraints were very strong. It was simply considered an impossible thing to do, to hire, recruit the wives of steelworkers or even the young daughters of steelworkers to work in those factories.
Strober has a new book out this year, Money and Love: An Intelligent Roadmap for Life’s Biggest Decisions, based in part on a course she taught for many years about life choices. In the interview with Ashenfelter, she says:
[A]lthough conventional wisdom tells you to make money decisions with your head and love decisions with your heart, in fact, for most of these really important decisions, the love and money aspects are intertwined. So, whom you marry has probably the most important effect on your career of anything you might do, and on your life, and on your family. And the idea that you marry simply for love without ever thinking about money is probably not quite correct. And so, all of these decisions need you to engage both your heart and your head. And in fact, some of the people who are most interested in this book are those that run financial advising firms because they recognize that their advisors can be far more effective if they consider family issues when they advise their clients rather than simply running the numbers and telling them what age they can retire at. So, that’s the first thing. That love and money are intertwined for all these decisions.
One remarkable shift during and after the pandemic recession was the rise in the US savings rate. This was driven in part by the government spending programs enacted during the pandemic: the checks sent directly to households, the expansion of unemployment insurance, and the Paycheck Protection Program to help smaller businesses keep people employed. It was also driven by the fact that during the pandemic, options for spending money were limited both by various shutdowns (as an example, options for travel and entertainment were constricted) and by hold-ups in the supply chain.
Here’s the US personal saving rate on a quarterly basis since about 1960. You can see the gradual decline from the 1970s up to the early 2000s (a story in itself for another time), and then what looks like a rising trend in the lead-up to the pandemic. During the pandemic and its aftermath, the saving rate spikes wildly, and then falls back to the below-average levels of the early 2000s.
Have American households mostly spent their pandemic savings? Or are they still sitting on a substantial share?
The answers matter for a number of reasons. In a general sense, inflation is driven by “too many dollars chasing too few goods.” Household spending out of the federal largesse that had built up in savings was one of the drivers that kicked off inflation in 2021. Thus, if households are still sitting on a cache of savings to be spent in the next year or so, inflationary concerns are larger than if households have mostly spent down their savings. Other topics like the labor force participation rate or the willingness of entrepreneurs to start new firms are also interrelated with whether households feel as if their cushion of savings is more than they plan to have in the long run, or about right.
Most estimates of excess savings differ because of seemingly innocuous assumptions about the long-term saving trend in the US economy. Excess savings are now depleted only if we assume that households need to set aside a higher share of their income today compared with before the pandemic. If we assume instead that the underlying saving rate should be equal to the pre-pandemic average (6.2 percent), only slightly more than one-third of the excess savings has been depleted. Our new method for estimating excess savings across the income distribution allows us to assign excess dollars to a specific US county. After mapping counties to their income levels, we find that as of the end of 2022, most income groups still had access to significant amounts of savings and that there is no substantive difference in the savings-reduction rate across income groups.
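The arithmetic behind these excess-savings estimates is straightforward: in each period, compare actual saving with what households would have saved at an assumed trend rate, and accumulate the difference. A minimal sketch, where the income and saving-rate series are hypothetical and only the 6.2 percent trend rate comes from the passage above:

```python
def excess_savings(incomes, actual_rates, trend_rate):
    """Cumulative excess savings: actual saving minus what the assumed
    trend saving rate would have implied, summed across periods."""
    excess = 0.0
    for income, rate in zip(incomes, actual_rates):
        excess += income * (rate - trend_rate)
    return excess

# Hypothetical quarterly disposable income (USD billions) and saving rates:
# a pandemic-era spike, then a fall below the pre-pandemic trend.
incomes = [4500, 4700, 4600, 4800]
rates   = [0.26, 0.12, 0.09, 0.05]
stock = excess_savings(incomes, rates, trend_rate=0.062)
```

The point of the sketch is the one made in the quoted passage: the answer hinges on the assumed trend rate, since periods with saving below trend subtract from the accumulated stock, so a higher assumed trend rate depletes the estimated excess faster.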
Personally, I have a hard time looking at the US personal savings rate over the decades and believing that a permanent move to a savings rate well above the pre-pandemic average was underway. Thus, my sense is that households are still sitting on substantial savings accumulated during the pandemic that they would be willing to spend in the next few years.
At least so far, it doesn’t appear that households are overly encumbered by debt, either. In the figure below, the top line shows total household debt payment as a share of personal income. The total can be broken into two parts: mortgage debt payments (the green line) and other consumer debt payments (the red line). The figure shows that mortgage debt payments ran up during the housing boom/bubble before the Great Recession, but have declined during the period of low interest rates since then. However, other consumer debt has stayed at about the same level since 2010 or so–and even seems to have increased a little.
All my life, I’ve been the kind of uncool person who wanted some space for serious things to be discussed seriously. And now there’s one more political debate coming up this week, this one among the non-Trump candidates for the Republican presidential nomination in the 2024 election, which is my excuse for this gripe about television and politics, or at least what those debates have become in the modern era.
Neil Postman, in his 1985 jeremiad Amusing Ourselves to Death: Public Discourse in the Age of Show Business, put it this way in his discussion of political debates:
The point is that television does not reveal who the best man is. In fact, television makes impossible the determination of who is better than whom, if we mean by “better” such things as more capable in negotiation, more imaginative in executive skill, more knowledgeable about international affairs, more understanding of economic systems, and so on. The reason has, almost entirely, to do with “image.” But not because politicians are preoccupied with presenting themselves in the best possible light. After all, who isn’t? It is a rare and deeply disturbed person who does not wish to project a favorable image. But television gives image a bad name. For on television the politician does not so much offer the audience an image of himself, as offer himself as an image of the audience. …We are not permitted to know who is best at being President or Governor or Senator, but whose image is best at touching and soothing the deep reaches of our discontent.
Postman argues that what underlies the interaction of television and politics is fundamentally similar to what underlies all commercial advertising. In the pre-Internet age, he writes:
Because the television commercial is the single most voluminous form of public communication in our society, it was inevitable that Americans would accommodate themselves to the philosophy of television commercials. … By “philosophy,” I mean that television communication has embedded in it certain assumptions about the nature of communication … For one thing, the commercial insists on an unprecedented brevity of communication. … A sixty-second commercial is prolix; thirty seconds is longer than most; fifteen to twenty seconds is about average … The commercial asks us to believe that all problems are solvable, that they are solvable fast, and that they are solvable fast through the interventions of technology, techniques and chemistry. This is, of course, a preposterous theory about the roots of discontent … But the commercial disdains exposition, for that takes time and invites argument. …
Such beliefs would naturally have implications for our political discourse; that is to say, we may begin to accept as normal certain assumptions about the political domain that either derive from or are amplified by the television commercial. For example, a person who has seen one million television commercials might well believe that all political problems have fast solutions through simple measures–or ought to. Or that complex language is not to be trusted, and that all problems lend themselves to theatrical expression. Or that argument is in bad taste, and leads only to an intolerable uncertainty.
I suppose the parallel complaint in the modern era would be that we no longer judge politicians on the basis of those lengthy 30-second TV spots, but instead on the basis of internet memes and 280-character messages on social media. But I confess that every time the host of a presidential debate says something like–“How should the US bring peace to the Middle East? You have 90 seconds”–something inside me dies a little. Even worse is when the moderator asks for a show of hands on who supports or opposes some proposition.
It seems to me that most people watch televised political debates for several reasons. First, they are hoping for a car crash–that is, a moment when some candidate gets off an especially good zinger or says something exceptionally incoherent or revolting. Second, they like viewing themselves as detached and above-it-all critics of what might play well or badly with the broader public. Third, they are asking themselves if the persona presented on television appeals to them. You’ll hear people make post-debate comments like: “Seems like a nice person.” “I can relate to them.” “The kind of person you could sit down and have a beer with.” “That candidate stands up for me (or fights for me).”
These kinds of reasons are all understandable in some sense, but they are an extraordinarily poor way to decide who would be a good president or governor. Great leaders might not be especially good at zingers–and zingers may not be the most useful skill for a top administrator. Great leaders may not be especially warm and fuzzy, although they can fake it for short periods. The question of whether a politician has the skills to lobby key members of the legislature and build a majority may be rather different from whether you would like to have a beer with them.
I’m resigned to the current state of political advertising and political debates, in the sense that I can ignore them. But it surely would be nice to have some method of interaction that offered a more revealing look–a kind of job market interview–for political candidates. Part of that is to show a command of topics in economics, politics, and international affairs, both with a chance to explain their position and to explain what they would say to those who disagree. But in addition, my desired interaction would find ways to dig down into abilities related to executive skills like management, negotiation, and consensus-building. It may be that an extended interview, or several of them, would do a better job of revelation than a debate. Or perhaps there could be a common set of questions where candidates would post a five-minute answer at a common website–with a rule that the answer needed to be delivered in the form of a talking head in front of a blue screen, with no teleprompter and no production values.
If we are to have political debates, can we at least find a way to make them more than sound bites? As Postman points out, the first Lincoln-Douglas debate back in 1858 featured Douglas with an opening address of 60 minutes, Lincoln with a 90-minute rebuttal, and then Douglas with a 30-minute closing statement. And this was shorter than some of their previous exchanges. In 1854, the opening statement for Douglas took three hours, then they broke for dinner, and reconvened for Lincoln to have an equal time for a response.
Even for me, that sounds like a lot. But the current debates are, to me, unwatchable. I can feel the candidates trying to reach for a zinger, or trying to touch what they believe might be the deep reaches of popular discontent, and it seems awkward and unproductive.
I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided a little more than a decade ago–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Fall 2023 issue, which in the Taylor household is known as issue #146. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.
Symposium on After the Pandemic
“Why Did the Best Prepared Country in the World Fare So Poorly during COVID?” by Jennifer B. Nuzzo and Jorge R. Ledesma
Though all countries struggled to respond to COVID-19, the United States’ poor performance during the pandemic was unexpected. Despite having more pandemic preparedness capacities than other countries, the United States experienced more than one million COVID-19 deaths, which has contributed to historic declines in national life expectancy. Though some have raised questions as to whether preparedness capacities matter, data that appropriately address cross-country differences in age structure and surveillance approaches show that higher levels of national preparedness was associated with reduced mortality during the pandemic. The United States, however, stands out as a clear outlier in COVID-19 mortality comparisons with other highly prepared countries. We subsequently discuss and summarize the specific gaps in US pandemic preparedness that may have hampered COVID-19 responses in the country. Additional data and research are urgently needed to more accurately understand why the US did not make better use of its prepandemic advantages.
“The Evolution of Work from Home,” by José María Barrero, Nicholas Bloom and Steven J. Davis
Full days worked at home account for 28 percent of paid workdays among Americans 20–64 years old, as of mid-2023. That’s about four times the 2019 rate and ten times the rate in the mid-1990s. We first explain why the big shift to work from home has endured rather than reverting to prepandemic levels. We then consider how work-from-home rates vary by worker age, sex, education, parental status, industry and local population density, and why it is higher in the United States than other countries. We also discuss some implications for pay, productivity, and the pace of innovation. Over the next five years, US business executives anticipate modest increases in work-from-home rates at their own companies. Other factors that portend an enduring shift to work from home include the ongoing adaptation of managerial practices and further advances in technologies, products, and tools that support remote work.
“COVID-19, School Closures, and Outcomes,” by Rebecca Jack and Emily Oster
This article discusses the question of data and our perspective on the importance of public, accessible, and contemporaneous data in the face of public crisis. Then, we present data on the extent of school closures during the COVID-19 pandemic, both globally and within the United States. We describe the available data on the degree of these closures, which will provide a set of resources for studying longer-term consequences as they emerge. We also highlight what we know about the demographic patterns of school closures. We then discuss the emerging estimates of the short-term impacts of school closures. A central finding throughout our discussion is that school closures during the pandemic tended to increase inequality, both within and across countries, but that fully understanding the long-run impact of COVID-related school closures on students will take time and will surely be influenced by events and policies in the next few years.
“Changes in the Distribution of Black and White Wealth since the US Civil War,” by Ellora Derenoncourt, Chi Hyun Kim, Moritz Kuhn and Moritz Schularick
The difference in the average wealth of Black and white Americans narrowed in the first century after the Civil War, but remained large and even widened again after 1980. Given high levels of wealth concentration both historically and today, dynamics at the average may not capture important heterogeneity in racial wealth gaps across the distribution. This paper looks into the historical evolution of the Black and white wealth distributions since Emancipation. The picture that emerges is an even starker one than racial wealth inequality at the mean. Tracing, for the first time, the evolution of wealth of the median Black household and the gap between the typical Black and white household over time, we estimate that the majority of Black households only began to dispose of measurable wealth around World War II. While the civil rights era brought substantial wealth gains for the median Black household, the gap between Black and white wealth at the median has not changed much since the 1970s. The top and the bottom of the wealth distribution show even greater persistence, with Black households consistently over-represented in the bottom half of the wealth distribution and under-represented in the top-10 percent over the past seven decades.
“Why Do Retired Households Draw Down Their Wealth So Slowly?” by Eric French, John Bailey Jones and Rory McGee
Retired households, especially those with high lifetime income, decumulate their wealth very slowly, and many die leaving large estates. The three leading explanations for the “retirement savings puzzle” are the desire to insure against uncertain lifespans and medical expenses, the desire to leave bequests to one’s heirs, and the desire to remain in one’s own home. We discuss the empirical strategies used to differentiate these motivations, most of which go beyond wealth to exploit additional features of the data. The literature suggests that all the motivations are present, but has yet to reach a consensus about their relative importance.
“Where Does Wealth Come From? Measuring Lifetime Resources in Norway,” by Sandra E. Black, Paul J. Devereux, Fanny Landaud and Kjell G. Salvanes
In this paper, we use comprehensive administrative data on the population of Norway to create a measure of lifetime resources, which generates several stylized facts. First, lifetime resources are highly correlated with net wealth, but net wealth is more unequally distributed. Second, labor income is the most important component of lifetime resources, except among the top 1 percent where capital income and capital gains on financial assets become important. Lastly, lifetime resources are a better predictor of child human capital outcomes than net wealth, suggesting that, in some cases, inequality in lifetime resources may be more relevant than inequality in wealth.
“The Importance of Financial Literacy: Opening a New Field,” by Annamaria Lusardi and Olivia S. Mitchell
We undertake an assessment of our two decades of research on financial literacy, building on our empirical research and theoretical work casting financial knowledge as a form of investment in human capital. We also draw on recent data to determine who is the most—and least—financially savvy in the United States, and we highlight the similarity of our results in other countries. A number of convincing studies are now available, from which we draw conclusions about the effects and consequences of financial illiteracy, and what can be done to fill these gaps. We conclude by offering our thoughts on implications for teaching, policy, and future research.
“Transmission Impossible? Prospects for Decarbonizing the US Grid,” by Lucas W. Davis, Catherine Hausman and Nancy L. Rose
Encouraged by the declining cost of grid-scale renewables, recent analyses conclude that the United States could reach net zero carbon dioxide emissions by 2050 at relatively low cost using currently available technologies. While the cost of renewable generation has declined dramatically, integrating these renewables would require a large expansion in transmission to deliver that power. Already there is growing evidence that the United States has insufficient transmission capacity, and current levels of annual investment are well below what would be required for a renewables-dominated system. We describe a variety of challenges that make it difficult to build new transmission and potential policy responses to mitigate them, as well as possible substitutes for some new transmission capacity.
“The Economics of Electricity Reliability,” by Severin Borenstein, James Bushnell and Erin Mansur
The physics of an electrical grid requires that the supply injected into the grid is always in balance with the quantity consumed. If that balance is not maintained, cascading outages are likely to disrupt supply to all consumers on the grid. In the past, vertically integrated monopoly utilities have ensured that supply is adequate to meet demand and maintain grid stability, but with deregulation of generation, assuring adequate supply has become much more complex. The unique characteristics of electricity distribution mean that there are immense potential externalities among market participants from supply shortfalls. In this paper, we discuss the institutions that US electricity markets have developed to avoid such destabilizing supply shortfalls when there are multiple generators and retailers in the market. Though many of the markets rely on standardized requirements for supplier reserves, we conclude that recent technological progress may steer future evolution towards a system that relies to a greater extent on economic incentives.
“The Economics Profession’s Socioeconomic Diversity Problem,” by Anna Stansbury and Robert Schultz
It is well-documented that women and racial and ethnic minorities are underrepresented in the economics profession, relative to both the general population and other academic disciplines. Less is known about the socioeconomic diversity of the economics profession. In this paper, we use data on parental education from the Survey of Earned Doctorates to examine the socioeconomic background of US economics PhD recipients, as compared to other disciplines. We find that economics PhD recipients are substantially more likely to have highly educated parents, and less likely to have parents without a college degree, than PhD recipients in other non-economics disciplines. This is true for both US-born PhD recipients and non-US-born PhD recipients, but is particularly stark for the US-born. The gap in socioeconomic diversity between economics and other PhD disciplines has increased over the last five decades, and particularly over the last two decades.
“Early Career Paths of Economists inside and outside of Academia,” by Lucia Foster, Erika McEntarfer and Danielle H. Sandler
Economics job candidates face considerable professional and financial uncertainties when deciding between academic and nonacademic career paths. Using novel panel data, we provide a broad picture of PhD economists’ early career mobility and earnings growth—both in and outside of academia. We find that academic jobs have fallen to just over half of US placements, with growing shares in tech, consulting, and government. We document considerable early career job mobility and higher earnings growth among job changers, private-sector economists, and men. We also find an earnings premium for graduates of top-ranked PhD programs that grows over early career years in academia while shrinking in the private sector. These different earnings dynamics mean the opportunity cost (in terms of potential earnings) of remaining in academia is generally less for graduates of top-ranked programs, although there is significant dispersion in mid-career earnings among these academics.
“Retrospectives: Margaret Reid, Chicago, and Permanent Income,” by Evelyn L. Forget
Margaret Gilpin Reid (1896–1990) began her career as a home economist and, with Dorothy Brady, Milton and Rose Friedman, played a central role in the development of the permanent income hypothesis at Chicago. Reid was the first woman to be elected Fellow of the American Economic Association, and was a key figure in the empirical tradition at the University of Chicago. This article examines the opportunities and constraints that shaped her career.
Mink are highly susceptible to infection with several viruses that also infect humans. In late 2020, government agencies and academics in Europe and North America repeatedly documented that farmed mink had become infected with SARS-CoV-2, the causative agent of COVID-19. Evidence of mink-adapted viruses spilling back into local communities further demonstrated the poor biosecurity guidelines and practices in the industry. With this in mind, some countries—for example, the Netherlands—shut down mink production altogether. Fortunately, the mink-adapted variants of 2020 were not fitter than viruses circulating at the time in humans and, hence, did not spread widely. … The establishment of animal reservoirs for viruses that evolve on a separate trajectory from variants in humans sets a potential time bomb for re-emergence of the virus in humans—especially as immunity wanes in the older population and unexposed younger people make up a larger proportion of the population. This is the scenario that led to the emergence of pandemic H1N1 influenza virus from pigs in 2009. …
[W]e argue that mink, more so than any other farmed species, pose a risk for the emergence of future disease outbreaks and the evolution of future pandemics. … We strongly urge governments to also consider the mounting evidence suggesting that fur farming, particularly mink, be eliminated in the interest of pandemic preparedness.
The United States is the fifth largest producer of mink pelts in the world. From 2015–19, U.S. mink pelt exports, imports, and production declined (fig. 1). As of 2021, the United States had about 100 mink farms that produced nearly $60 million worth of pelts. In 2022, U.S. mink pelt exports were $64 million, down 33 percent from $94 million in 2019 but up from $50 million in 2021.
Pre- and post-pandemic, the global export value of mink pelts was mostly in decline (except in 2021; fig. 2), largely responding to pelt prices. China is the world’s largest producer of mink pelts, followed by Denmark, Poland, and the Netherlands. The United States, South Korea, Hong Kong, China, and France are the largest downstream users, accounting for 55 percent of global imports of fur goods (including finished mink coats) in 2022. Pre-pandemic, the global export value of fur goods had climbed steadily, bolstered by Asian demand. However, from 2019 to 2022, the value of global fur goods exports declined 55 percent due to COVID-19 related retail closures, decreased demand, and price effects from reduced input prices.
The US Department of Agriculture, in its most recent annual mink report in July, shows a rise in mink prices back around 2010, along with a corresponding rise in US production of mink pelts. My own very casual reading of the trade press suggests that the higher prices and output were driven by higher demand from east Asia, especially China, and to some extent from Russia. But mink fur prices had dropped off by about 2013, and production truly plummeted in 2020.
I lay no claims to being a prognosticator of the mink fur markets. However, I will note that predictions of its demise have been ongoing for some time. Back on October 27, 1975, Michael L. Geczi wrote an article for the Wall Street Journal titled “Mink Farming is Growing More Scarce as Costs Rise and Fur Demand Declines.” Geczi noted that in the peak year of 1966, the US had produced 6.2 million mink pelts; however, the US Department of Agriculture mink report found that production had fallen by half to 3.1 million pelts in 1974. He pointed to a shift in beliefs about wearing mink as a high-prestige good, along with increased competition from lower-priced foreign mink production, as main causes.
In 1975, it seemed as if mink production might be on its way out. However, you’ll note from the USDA figure above that annual mink production from 2003-2019–just before the pandemic–was hovering around 3 million per year, which is about the same as in 1974. Of course, US population and incomes have risen substantially since 1974, so in that sense mink has become relatively less important, but the absolute quantity produced remained similar–at least up to the pandemic.
However, the average US mink farm seems to have gotten considerably larger: it took about 1,100 mink farms to produce the 3 million pelts back in 1974, compared to only about 100 mink farms to produce that number today. In addition, according to the USDA, the US economy remains the largest market for mink of any country.
Some problems can wait a few years. A road is wearing out? You can patch it now, and fix it more thoroughly one of these years. But the learning loss experienced by K-12 students during the pandemic (for earlier discussions, see here and here) isn’t in the delayable category. The pandemic hit in February 2020. With the ’23-’24 school year underway, the ’21-’22 and ’22-’23 school years are now behind us. The processes for catching up need to already be underway, but they aren’t.
Jonathan Guryan and Jens Ludwig discuss the situation in “Overcoming Pandemic-Induced Learning Loss” (forthcoming in Building a More Resilient US Economy, edited by Melissa S. Kearney, Justin Schardin, and Luke Pardue from the Aspen Economic Strategy Institute, due out November 9, 2023, pp. 150-170). They describe the challenge of learning loss in this way:
For example, data from a single week in May 2020 showed that nearly a third of the Chicago Public School system’s 350,000 students did not log on to even one Google Classroom or Google Meet (Chicago Public Schools 2020; n.d.). Chronic absenteeism increased dramatically across the country, with student absences fully doubling in high-remote-instruction states like Virginia and California. (Given data limitations, those figures may, if anything, even underestimate the true rise in absenteeism). The US Department of Education estimated that at least 10.1 million students missed at least 10 percent of the 2020–2021 school year (Chang, Balfaz, and Byrnes 2022). Of course, missing this much school, and the imperfect substitution of remote school for in-person instruction, led to large learning losses, particularly for the most disadvantaged children in America. But the real public policy challenge is not merely short-term learning losses. Because education is intrinsically cumulative, there is the real possibility that pandemic-induced school disruptions may set a whole generation of students off track for the rest of their lives. …
The consequences of pandemic-induced learning loss, in other words, are likely to be long-term, and these consequences will be most dire for the most disadvantaged children. The potential magnitude of the long-term effects can be seen by pre-pandemic data on what happens when children miss key developmental milestones. Students who can’t read at grade level by third grade are four times less likely to graduate high school. Ninth graders who have not yet passed their required entry-level math class (Algebra I) are five times less likely to graduate.
Anyone consuming news in 21st-century America should be developing sensitivity to exaggeration, and this “rest of their lives” rhetoric may feel overstated. But in this case, for many students, it’s not. Making or missing landmarks like high school graduation really does affect lifetime prospects.
A standard policy proposal for helping students catch up is known as “high-dosage tutoring,” which refers to having students who are falling behind meet with a tutor in a small group once or twice a week. Indeed, when Congress passed the Elementary and Secondary School Emergency Relief (ESSER) Fund legislation in 2020 and 2021, sending about $190 billion to K-12 schools to address pandemic-related expenses, one recommendation was that they earmark some of the funds for tutoring programs.
Guryan and Ludwig review some evidence on the effectiveness of tutoring. For example:
A series of demonstration projects in the 1980s found that compared to regular classroom instruction, students tutored one-to-one spend almost 40 percent more time on-task. Students in tutoring learned fully 2 standard deviations (SDs) more than their peers in traditional classroom settings (Bloom 1984). As a way to benchmark the enormous magnitude of that learning gain, the average test-score gain over the course of a student’s high school career is about 0.6–0.7 SDs, and the test-score gap between high- and low-income eighth graders is 1.4 SDs (Reardon 2011; Loveless 2012). Another way to get a sense of the magnitude here is that a student who improved their test score by 2 SDs would move approximately from the 15th to the 85th percentile. We also see large gains from tutoring outside of controlled lab conditions, in real-world school settings. A review of more than 90 randomized controlled trials (RCTs) of smaller-scale tutoring programs showed an average effect of 0.37 SDs (Nickow, Oreopoulos, and Quan 2020).
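The percentile claim in that passage is straightforward normal-distribution arithmetic. Here is a quick check, assuming normally distributed test scores (a simplifying assumption for illustration, not something the cited studies require):

```python
from statistics import NormalDist

# A student starting 1 SD below the mean who gains 2 SDs ends up
# 1 SD above the mean; convert both positions to percentiles.
z = NormalDist()  # standard normal: mean 0, sd 1

start_percentile = z.cdf(-1) * 100  # ~15.9, the "15th percentile"
end_percentile = z.cdf(+1) * 100    # ~84.1, the "85th percentile"

print(round(start_percentile), round(end_percentile))  # prints: 16 84
```

So a 2 SD gain moves a student from roughly the 16th to roughly the 84th percentile, matching the "approximately from the 15th to the 85th percentile" benchmark in the quote.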
As they note, high-dosage tutoring “is plausibly the intervention most up to the task of meeting the scale of our current learning-loss challenge.” So why hasn’t it happened?
The reasons are so pedestrian as to be tragic. Tutoring would require bringing in outsiders to do the tutoring. Remember that we’re talking here about tutoring very basic math and reading skills, so finding people who could be certified to do the job on a part-time basis, from retirees to parents to college students, doesn’t seem to be an impossible task. But it would require institutional flexibility and rescheduling time, because the most effective tutoring programs happen during the usual school day, rather than requiring students to come early or stay late. As Guryan and Ludwig write:
Presumably, that’s been hard for schools to do in part because all organizations suffer from a general change-aversion. … What we have seen in practice is that when schools are faced with the possibility of change, they tend to do fewer of the hard things that will help students and more of the easier things that are likely to have fewer learning benefits for children. For example, in our experiences working with districts around the country, many have punted on the problem of trying to find time during the school day and instead relied on after-school programs or tried virtual tutoring at home in the evenings or on weekends. None of those efforts that we have seen firsthand led to a high “dosage” of tutoring delivered to students at any sort of scale. As another example, a different district we worked with tried a decentralized approach to tutoring, giving individual schools lots of discretion over how they deployed their tutors. Often, the tutors wound up simply serving as teachers’ aides, which the research suggests have little impact on student learning in part because these aides wind up being assigned to largely do the parts of the teacher’s job teachers like least (grading, making copies, etc.) …
Guryan and Ludwig argue for additional federal funding to support tutoring programs. I’m not definitively opposed to federal funding here, but financing of K-12 education has traditionally been a state responsibility. I’d be happier if states would show a commitment to high-dosage tutoring by setting up systems to certify tutors and to reschedule school days, using a combination of the existing federal money and their own funds. But this seems like a situation where, after the pandemic, K-12 schools just wanted to go back to doing what they had before the pandemic.
It’s a story of how demographic changes can echo for decades. In the 1950s and 1960s, the fertility rate in China was 4-5 children per woman. To put it another way, children born during this time had on average 3-4 siblings. If one person with 3-4 siblings marries another person with 3-4 siblings, and that couple has children, then the child will have 6-8 aunts and uncles, and a substantial number of cousins, as well. And this quick sketch doesn’t count the great-aunts and great-uncles from the generation of the grandparents–along with their descendants.
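That back-of-the-envelope kin count can be made explicit. A minimal sketch, using the illustrative sibling and fertility numbers from the paragraph above (these are round numbers for intuition, not Eberstadt and Verdery's model estimates):

```python
# Illustrative kin arithmetic: each parent with ~4 siblings, and each
# sibling in turn raising a family of ~4 children.
siblings_per_parent = 4    # each parent has 3-4 siblings; take 4
children_per_family = 4    # 1950s-60s fertility of roughly 4-5 children

aunts_and_uncles = 2 * siblings_per_parent        # siblings of both parents
cousins = aunts_and_uncles * children_per_family  # if each aunt/uncle has a family

print(aunts_and_uncles, cousins)  # prints: 8 32
```

Eight aunts and uncles sits at the top of the "6-8" range in the text, and the implied cousin count of a few dozen is what makes the "substantial number of cousins" concrete.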
Eberstadt and Verdery emphasize that even as China’s fertility rate dropped dramatically in the 1970s and 1980s, it remained true that the parents of the children being born at that time were often from larger extended families. Thus it was possible for a few decades, from the 1970s through the 1990s, for a single child to have a substantial number of aunts, uncles, and cousins. But after multiple generations of low fertility, the average number of aunts, uncles, and cousins will diminish substantially.
China, like most other countries, does not keep official data on numbers of aunts, uncles, and cousins. Thus, Eberstadt and Verdery are forced to estimate these numbers in a way that is consistent with the information that is available about fertility, family size, mortality rates at different ages, and life expectancy. Here are some of their findings:
As recently as 1950, we estimate, only about 7 percent of Chinese men and women in their 50s had any living parents; the corresponding figure today (i.e., 2020) would exceed 60 percent (Table A1). Conversely, a decidedly larger fraction of Chinese men and women in their 70s and 80s have two or more live children today than around the time of the 1949 “liberation” (Table A2); they are also more likely to have a living spouse nowadays than in that much less developed era (Table A6).
By our estimates, men and women in their 40s are three times more likely to have two or more living siblings today than they were in 1950 (Table A3). In 1950, by our reckoning, only one in four Chinese in their 30s had 10 or more living cousins; today that share would be an amazing 90 percent or higher. And practically none of today’s 30-somethings lack cousins altogether, as opposed to about 5 percent of their counterparts in 1950 (Table A4).
Despite 35 years of coerced anti-natalism through Beijing’s notorious One-Child Policy (1980–2015), today’s teens in China are more likely to have 10 or more living cousins and vastly more likely to have 10 or more living uncles and aunts than their predecessors in 1950 were (Table A4). In fact, 10 or more cousins and 10 or more uncles and aunts looks to be the most common family type for teens in contemporary China (Table A5).
Here are their estimates of the number of cousins for people in different age groups. For the 0-9 age bracket at any given time, you can see that the number of cousins rises sharply in the early 20th century, with declining infant mortality and longer life expectancies, but then tops out and starts declining after fertility rates start falling in the 1970s. For those in the 30-39 age bracket at any given time, this pattern essentially happens 30 years later–although additional rises in life expectancy make the peak a little higher. For those in the 60-69 age bracket at any point in time, the rise and fall is again roughly 30 years later.
Again, the exact numbers here come from a population modeling exercise, and thus have a margin of error around them. But again, these numbers are based on the known estimates for population, fertility, and infant mortality, and so should capture any big-picture swings. Here are some main themes:
1) China’s extended networks of blood relatives may have been larger around the turn of the 21st century than at any previous time. In the words of Eberstadt and Verdery:
Our estimates obviously cannot speak to the (possibly changing) quality of familial relations in China, but in terms of sheer quantity, it seems safe to say that Chinese networks of blood kin have never been nearly as thick as they were at the start of the 21st century. … This proliferation of living relatives is a confluence of three driving forces: (1) the tremendous general improvement in life expectancy starting seven decades ago, (2) the generationally delayed impact of steep fertility declines on counts of extended kin, and (3) the as-yet-modest inroads of the “flight from marriage” (to borrow a phrase from Gavin W. Jones) witnessed already in the rest of East Asia, as well as affluent Europe and North America.
2) Extended family networks are a form of social capital: for example, during China’s period of dramatic economic growth, with all of its reallocations across industrial sectors and from rural to urban areas, deep kinship networks have been a way of spreading information about opportunities. Eberstadt and Verdery:
China has evidently enjoyed a massive “demographic deepening” of the extended family over the past generation or so—a deepening with many likely benefits, not least an augmentation of social capital. …
Social capital begets economic capital. China’s “kin explosion,” for instance, may have had highly propitious implications for guanxi, the quintessentially Chinese kin-based networks of personal connections that have always been integral to getting business done in China. With a sudden new wealth of close and especially extended relatives on whom potentially to rely, the outlook in China for both entrepreneurship and informal safety nets may have brightened considerably—and rapidly—in post-Mao China, as numbers of working-age siblings, cousins, uncles, aunts, and other relatives surged for ordinary people. It is intriguing that China’s breathtaking economic boom should have taken place at the very time that the country’s extended family networks were becoming so much more robust. No one has yet examined the role of changing family structure in China’s dazzling developmental advance over the past four decades. But there is good reason to suspect that family dynamics are integral to the propitious properties of the much-discussed “demographic dividend” in China and other Asian venues. If so, the family would be a crucial though unappreciated element in contemporary China’s socioeconomic rise, and this unobserved variable may require some rethinking of China’s modern development, both its recent past and its presumed future.
3) Although China has been living through what might be called a “golden era of the extended family,” this pattern of deep kin networks is going to end in the next few decades. Eberstadt and Verdery write:
The number of living cousins for Chinese under age 30 is about to crash. Just three decades from now, young Chinese on average will have only a fifth as many cousins as young Chinese have today (Figure 4). By 2050, according to our estimates, almost no young Chinese will live in families with large numbers of cousins (Table A4). Between now and 2050, the fraction of Chinese 20-somethings with 10 or more living cousins is set to plummet from three in four to just one in six. …
By then, a small but growing share of China’s children and adolescents will have no brothers, sisters, cousins, uncles, or aunts. Still more sibling-less young people will have just one or two such kin. Thus, a significant minority of this coming generation in China—many tens of millions of persons—will be traversing life from school through work and on into retirement with little or no firsthand experience of the Chinese extended family, the tradition hitherto inseparable from China’s very culture and civilization and experienced most acutely in fullest measure in the decades just now completed.
These kinds of deep demographic changes are likely to have momentous effects. For example, the Chinese government has been able to rely on extended family networks to a substantial extent to look after children, the elderly, or the disabled. It has been able to rely on family networks to spread information about available jobs, useful skills, and possible destinations. People’s family networks have in some cases been leveraged into political networks and even into mechanisms for social control. But Eberstadt and Verdery’s discussion suggests that China’s extended family networks are about to diminish sharply, and the functions they have served will be diminished as well.
The per capita GDP for the combined 27 countries of the European Union (EU-27) is about 72% of the US level. On the other side, the average worker in EU countries puts in far fewer hours on the job than do American workers. For example, OECD data say that the average US worker put in 1,811 total hours in 2022, while due to a combination of more holidays and more part-time workers, the average for a French worker was 1,511 hours and for a German worker just 1,341 hours. To put it another way, the average French worker works 7 1/2 fewer 40-hour weeks than an average American worker–almost two full months less. The average German worker works 11 3/4 fewer 40-hour weeks in a year than the average American worker–almost three full months less.
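The weeks-of-work figures are simple arithmetic on the OECD hours just quoted:

```python
# Convert the gap in annual hours worked (OECD 2022 figures from the text)
# into equivalent 40-hour workweeks.
hours_per_year = {"US": 1811, "France": 1511, "Germany": 1341}

for country in ("France", "Germany"):
    gap_hours = hours_per_year["US"] - hours_per_year[country]
    gap_weeks = gap_hours / 40  # expressed in 40-hour workweeks
    print(country, gap_hours, gap_weeks)
# prints:
# France 300 7.5
# Germany 470 11.75
```

That is where the "7 1/2 fewer 40-hour weeks" for France and "11 3/4" for Germany come from.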
Before doing the comparisons on a per-hour basis, we need to clear up a different issue: comparing the US and the EU economies requires converting between US dollars and the euro plus a few other remaining European currencies. Exchange rates can move around a lot in a few years, but it would be peculiar to claim that, say, because the US dollar appreciated by one-third compared to the euro, the US economy had also become one-third bigger relative to the euro-area economy. As Darvas puts it:
In 2000, €1 was worth $0.92. By 2008, the euro’s exchange rate strengthened considerably, and €1 was worth $1.47. The EU’s GDP is mostly generated in euros, and thus it was worth many more dollars in 2008 than in 2000 because of the currency appreciation. But this was just a temporary rise in the value of the euro and not a reflection of skyrocketing economic growth in the EU. After 2008, the opposite happened. By 2022, €1 was worth $1.05, so compared to 2008, the euro’s significant depreciation relative to the dollar reduced the dollar value of EU GDP.
To avoid the complications of fluctuating exchange rates, Darvas instead uses "purchasing power parity" (PPP) exchange rates, which are calculated by the International Comparison Program at the World Bank to measure the actual buying power of a currency in terms of the goods and services it purchases. For our purposes, the key point is that the PPP exchange rate doesn't bounce around like the market exchange rate; it is similar to comparing the US and EU economies as if the market exchange rate had stayed at its 2000 and 2022 levels, without the big bounce in the middle.
So using the PPP exchange rate, here's per capita GDP, with the US level expressed as 100. On the left-hand panel, the red line shows China's rise from 2% of US per capita GDP in 1980 to about 28% at present. The EU-27 line rises from 67% of the US level in 1995 to 72% at present. The breakdown on the right-hand side shows that this increase is mostly due to the countries of the "east EU" region, reflecting catch-up growth in Bulgaria, Czechia, Croatia, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia.
Now let’s do the comparison not in terms of GDP per person, but instead in terms of GDP per hours worked, and also GDP per worker. What’s interesting here is that for the EU as a whole, GDP per hour worked has risen from about 72% of US levels back in the early 2000s to about 82% of US levels (blue dashed line). For Germany, with its very low level of average hours worked, GDP per hour worked was roughly equal to the US level back in the mid-1990s, then dropped off, and has now caught up again.
To put it another way, for the EU as a whole, the lower per capita GDP (28% below the US level) is roughly two-thirds due to the fact that GDP per hour worked is below US levels, and one-third due to fewer hours worked. But for Germany (and for some other western and northern EU economies), the lower per capita GDP compared to the US level is entirely due to fewer hours worked.
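A back-of-the-envelope version of that two-thirds/one-third split, treating the gaps as additive in percentage points (a rough approximation, since the components actually combine multiplicatively):

```python
# EU-wide figures quoted above, with the US level set at 100.
per_capita_gap = 100 - 72  # per capita GDP: 28 points below the US
per_hour_gap = 100 - 82    # GDP per hour:   18 points below the US
hours_gap = per_capita_gap - per_hour_gap  # remaining 10 points

print(per_hour_gap / per_capita_gap)  # ≈ 0.64: roughly two-thirds
print(hours_gap / per_capita_gap)     # ≈ 0.36: roughly one-third
```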
Of course, comparisons like these are grist for conversation. They do suggest that it is possible to combine fewer annual hours worked with rising output per hour. For my American readers: would you personally be willing to take an annual income cut of 10% in exchange for an extra month of vacation each year? Does your answer change if everyone else also takes an income cut of 10% in exchange for an additional month of vacation? If a group of major American employers offered that combination of similar hourly pay but organized their firms around the expectation of substantially fewer hours worked, would they attract at least a sampling of top talent? What if the employers offered this option only to employees who had worked the longer hours and remained with the firm for, say, five or ten years?