The Reputation of Karl Marx and the Soviet Revolution of 1917

Karl Marx (1818-1883) remains one of the most highly cited authors in academic literature, 140 years after his death. But when did his writing become especially prominent? During his lifetime or after? And how has his prominence trended in recent decades?

Philip Magness and Michael Makovi discuss the history and offer some measurements of how often Marx is cited in “The Mainstreaming of Marx: Measuring the Effect of the Russian Revolution on Karl Marx’s Influence” (Journal of Political Economy, June 2023).

Marx is not cited much by economists. The authors quote the 1925 comment of John Maynard Keynes that Marx’s Capital is “an obsolete economic textbook . . . without interest or application for the modern world.” However, Marx has become immensely popular in other fields:

A century later, Marx enjoys an immense scholarly stature—albeit almost entirely outside of economics. His critiques of capitalism are taught as foundational texts in sociology, political theory, philosophy, and literary criticism, and his socioeconomic doctrines of alienation, class consciousness, and historical materialism exert heavy influence through the academically fashionable analytical frameworks of critical theory, postcolonial theory, and cultural studies.

One 2013 paper estimated that Marx was the most-cited author in history. Looking at college syllabuses (and leaving aside textbooks), Marx remains among the most-assigned authors, rivaled only by Plato, and far ahead of John Stuart Mill, Adam Smith, Martin Luther King Jr., Jean-Jacques Rousseau, John Rawls, and others.

I will not try here to unpack just why economists in general have been dismissive of Marx’s work since the 19th century, although Magness and Makovi go into that topic in some detail. Instead, I focus on the evidence they compile from Google’s Ngram Viewer, which measures how often an author or a term appears in printed books over time. They compare citations to Marx with a weighted average of citations to a group of other socialist writers from the 19th century, with the weights chosen so that the group’s citations match those of Marx in the lead-up to 1917. This “synthetic Marx” group is made up mostly of Ferdinand Lassalle, Johann Karl Rodbertus, and Oscar Wilde (who wrote a prominent 1891 essay called “The Soul of Man under Socialism”). Here’s a figure, with the solid line showing citations to Marx and the dashed line showing citations to “synthetic Marx.”
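
For readers curious about the mechanics, here is a minimal sketch of the weighting step behind a “synthetic” comparison series. The citation numbers are made-up placeholders rather than the authors’ Ngram data, and the non-negative least squares step is only a rough stand-in for the synthetic control estimator that Magness and Makovi actually use.

```python
import numpy as np
from scipy.optimize import nnls

# Made-up annual citation-frequency series for the pre-1917 matching window.
rng = np.random.default_rng(0)
years = np.arange(1880, 1917)
marx = 0.8 + 0.010 * (years - 1880) + rng.normal(0, 0.02, years.size)   # target series
donors = np.column_stack([                                               # comparator authors
    0.7 + 0.012 * (years - 1880) + rng.normal(0, 0.02, years.size),      # e.g., Lassalle
    0.9 + 0.008 * (years - 1880) + rng.normal(0, 0.02, years.size),      # e.g., Rodbertus
    0.5 + 0.010 * (years - 1880) + rng.normal(0, 0.02, years.size),      # e.g., Wilde
])

# Choose non-negative weights so the weighted donors track Marx before 1917,
# then normalize the weights to sum to one.
weights, _ = nnls(donors, marx)
weights /= weights.sum()
synthetic_marx = donors @ weights

print("donor weights:", np.round(weights, 2))
print("average pre-1917 gap:", round(float(np.mean(marx - synthetic_marx)), 4))
```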

The fact that the two lines track each other before 1917 isn’t a surprise: “synthetic Marx” was constructed to track Marx before that date. What’s interesting is the divergence around 1917, when citations to Marx rise dramatically, and then keep rising. The authors write (citations and footnotes omitted):

The Bolshevik political ascendance drew widespread attention to Marx’s system particularly as the Western press sought to contextualize the revolution. … For many observers abroad, Marx became a clue to understanding the “Bolshevik threat” … Lenin’s political rise simultaneously enabled a sizable boost to the academic study of Marx’s doctrines. In 1919, the Soviet state created the Marx Engels Institute … Working with the newly established Frankfurt Institute of Social Research (the “Frankfurt School”), the Marx-Engels Institute published 12 volumes of the Marx-Engels-Gesamtausgabe (“MEGA1”) in German.

The Soviet state became the primary translator of Marx’s works through the government-funded Progress Publishers, founded in 1931. Marx played a similarly prominent role in Soviet propaganda through artwork and statuary, dating to Lenin’s personal direction. Indeed, Lenin initiated the practice of pilgrimage to Marx’s grave in 1903 and personally supervised the first of several unsuccessful Soviet attempts to have his remains relocated to Moscow in 1918. While other factors certainly shaped Marx’s reception in the mid-twentieth century, including the diaspora of the German-speaking academic Left in the face of Nazi persecution, the catalyzing event in the elevation of Marx’s intellectual stature appears to be the Russian Revolution. …

We hypothesize that the Soviet embrace of Marx not only elevated Marx absolutely but also crowded out other socialist traditions. Several of these competing thinkers linger in relative obscurity today, despite being closely matched contemporaries of Marx in the eyes of late-nineteenth-century socialists.

All of my professional life, it has been common for me to hear people argue that while they are a Marxist, they are not therefore a Stalinist, a Leninist, or a supporter of the politics, economics, and philosophies of Soviet Russia. At some level, this is all fair enough: blaming Marx for events that happened decades after his death seems unfair, as silly as blaming, say, Adam Smith (who died in 1790) for modern capitalism. But on the other side, those who choose Marx as the avatar for their socioeconomic doctrines do bear some responsibility for emphasizing Marx, whose stature was lifted by a considerable publicity effort from Soviet Russia, rather than flying the banner of his socialist contemporaries like Lassalle (who favored social-democratic labor reform in Germany and was denounced by Marx in anti-Semitic terms) or Rodbertus (who may well have originated the “surplus value” concept used by Marx). As Magness and Makovi put it:

While much of the discussion surrounding the bicentennial of Marx’s birth sought to differentiate consideration of his modern relevance from the totalitarian track record of twentieth-century communism, the elevation of Marx’s stature provided by the Russian Revolution illustrates that the two cannot be easily separated. It is insufficient to portray Soviet communism as an aberration from true Marxist doctrine, as the intellectual mainstreaming of Marxist theory is intimately intertwined with the political establishment of the Soviet Union. In assessing how this historical link shapes current interpretations of Marx, one must grapple with the implications of Marxism’s early twentieth-century intellectual ascendance as a Soviet political project.

Spreading Accounts Across Banks for Deposit Insurance

One of the strange aspects of the collapse of Silicon Valley Bank back in March (discussed on this blog here, here, and here) was the realization that many bank deposits are much larger than $250,000. Thus, those depositors are not covered by federal deposit insurance and are ready to participate in a bank run if the bank looks shaky.

An obvious question arises: For these large depositors–often businesses using the bank account to receive payments for sales and to make payments to workers and suppliers–why not spread out their bank accounts over several or many banks, so that each account would be covered by deposit insurance? In the modern financial sector, is this really so hard to do?

Dylan Ryfe and Alessio Saretto of the Dallas Federal Reserve provide an explainer on how this process was already going on before the downfall of Silicon Valley Bank in “Reciprocal deposit networks provide means to exceed FDIC’s $250,000 account cap” (Federal Reserve Bank of Dallas, November 28, 2023).

The authors point out that there are about $16 trillion in deposits at US banks. About 99% of deposit accounts are below the $250,000 insurance limit. But the other 1% of accounts hold $7 trillion–about 43% of the total. They write:

Reciprocal deposit networks have aided this recent growth of insured deposits. These networks, which have been around since the early 2000s, essentially offer a matching service that allows banks to interchange deposits in order to increase exposure to FDIC insurance. Reciprocal deposits rose to more than $300 billion in second quarter 2023, up from almost $157 billion at the end of 2022.

The idea is that big depositors in one bank can go through networks like IntraFi or R&T Deposit Solutions and “swap” with big depositors at another bank–thus spreading out bigger deposits over many banks so they can be covered by deposit insurance.
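
As a back-of-the-envelope illustration of why the networks matter to a large depositor, here is a minimal sketch; the deposit amount and the helper function are hypothetical and not part of how IntraFi or R&T actually present their services.

```python
import math

FDIC_CAP = 250_000  # deposit insurance limit per depositor, per bank

def banks_needed(deposit: int, cap: int = FDIC_CAP) -> int:
    """Minimum number of network banks so every slice of the deposit stays within the insured cap."""
    return math.ceil(deposit / cap)

# Hypothetical business keeping $12 million on deposit for payroll and supplier payments:
deposit = 12_000_000
n = banks_needed(deposit)
print(n)             # 48 banks
print(deposit // n)  # $250,000 per slice
```

The appeal of the matching service is that the depositor does not have to open and manage dozens of separate accounts itself; the network spreads the balance across member banks behind the scenes.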

One of the main beneficiaries of reciprocal deposit networks seems to be medium-sized banks. Such banks might have trouble attracting big corporations as customers, if the big corporations reason (perhaps unfairly) that a medium-sized bank is a less secure place for deposits. But if the medium-sized bank uses reciprocal deposits to provide a federal insurance guarantee for all deposits, then the big corporation doesn’t have to worry about the solvency of the bank. Apparently, one after-effect of the Silicon Valley Bank failure was a movement of big deposits away from medium-sized banks, to which those banks responded with an expanded use of reciprocal deposits (as reflected in the figure above).

How do bank regulators look at a situation where banks are participating in reciprocal accounts? Up to a certain amount, they don’t worry about it much: that is, “up to the lesser of $5 billion or 20 percent of liabilities for low-risk, well-capitalized banks.” To put it another way, $5 billion would allow for 20,000 swaps of $250,000. Above those levels, regulators would start taking a closer look at the risks involved.

But there are big-picture issues here as well. One of the dynamics of current banking regulation is that the 99% of accounts holding deposits below $250,000 don’t need to worry about whether their bank is safe, because they can rely on deposit insurance. However, the 1% with larger deposits should be worrying, at least a little! Together with federal regulators, outside investors, and the financial press, those big depositors are part of the network that gathers information and provides feedback about bank safety. As Ryfe and Saretto put it:

More deposit insurance can alleviate safety concerns and instill faith and security in the banking system, increase depositor welfare and incentivize small-versus large bank competition. However, it also raises concerns that banks, due to this increased perceived security, might engage in more aggressive profit-seeking activities …

Of course, none of this explains why the supposedly super-sharp venture capitalists, who had put their funds into the companies that held large deposits at Silicon Valley Bank, were not already requiring the use of reciprocal deposit arrangements before March 2023. A blind spot that big makes one worry about the existence of other blind spots.

Andrew Gelman: “Any Sufficiently Crappy Research is Indistinguishable from Fraud”

The prominent science fiction author Arthur C. Clarke developed “Clarke’s laws” over time. The ideas originally appeared in his 1962 essay, “Hazards of Prophecy: The Failure of Imagination” (in the collection Profiles of the Future: An Enquiry into the Limits of the Possible). They were reformulated as “laws” in the decades that followed. The best-known of Clarke’s laws is: “Any sufficiently advanced technology is indistinguishable from magic.”

Back in 2016, statistician Andrew Gelman offered some reflections and updating on Clarke’s laws on his blog. Of his updates, my favorite is: “Any sufficiently crappy research is indistinguishable from fraud.” But here are Clarke’s laws and Gelman’s updates:

Clarke’s first law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Clarke’s second law: The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

Clarke’s third law: Any sufficiently advanced technology is indistinguishable from magic.

My [that is, Gelman’s] updates:

1. When a distinguished but elderly scientist states that “You have no choice but to accept that the major conclusions of these studies are true,” don’t believe him.

2. The only way of discovering the limits of the reasonable is to venture a little way past them into the unreasonable.

3. Any sufficiently crappy research is indistinguishable from fraud.

For the literal-minded, it’s perhaps useful to note that just as Clarke was not claiming that a sufficiently advanced technology is literally magic, Gelman is not claiming that sufficiently crappy research is literally fraud. In both cases, the claim is just that for an outside observer with limited knowledge, it’s impossible to tell the difference.

An Economist Chews over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that. [This is an updated, amended, rearranged, and cobbled-together version of a post that was first published on Thanksgiving Day 2011.]

Maybe the biggest news about Thanksgiving dinner this year is that the overall cost of a traditional meal is down 4.5% from last year–although still up 25% from 2019. For the economy as a whole, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical US household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose 20% from 2021 to 2022, but then fell back 4.5% from 2022 to 2023. A significant part of the reason for last year’s price increase was an outbreak of Highly Pathogenic Avian Influenza (HPAI), which took a toll on turkey production, but HPAI has been much less of an issue this year. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The lower line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been an OK measure of the overall inflation rate over long periods of time.
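
The “adjusted for the overall inflation rate” line is simply the nominal series deflated by a general price index. Here is a minimal sketch of that calculation; the dollar and CPI figures below are rough illustrative values chosen to match the percentage changes quoted above, not official Farm Bureau or BLS data.

```python
# Deflate the nominal cost of the dinner basket by the CPI to express it in constant 2019 dollars.
# All numbers are rough illustrations, not official figures.
nominal_cost = {2019: 48.91, 2021: 53.31, 2022: 64.05, 2023: 61.17}
cpi          = {2019: 255.7, 2021: 271.0, 2022: 292.7, 2023: 304.7}

BASE = 2019
real_cost = {yr: cost * cpi[BASE] / cpi[yr] for yr, cost in nominal_cost.items()}

for yr in sorted(real_cost):
    print(yr, f"nominal ${nominal_cost[yr]:.2f}", f"real (2019 $) ${real_cost[yr]:.2f}")
```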

Of course, for economists the price is only the beginning of the discussion of the turkey industry supply chain. This is just one small illustration of the old wisdom that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. The last time the U.S. Department of Agriculture did a detailed “Overview of the U.S. Turkey Industry” appears to be back in 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1960s up to the early 1990s: for example, from consumption of 6.5 pounds of turkey per person per year in 1960 to 17.8 pounds per person per year in 1991. But since the early 2000s, turkey consumption has declined somewhat, falling to 14.6 pounds per person in 2022.

On the supply side, turkey companies are what economists call “vertically integrated,” which means that they either carry out all the steps of production directly, or control these steps with contractual agreements. Over time, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

U.S. agriculture is full of examples of remarkable increases in yields over periods of a few decades, but such examples always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time from the 2007 report.

A more recent update from a news article shows this trend has continued. Indeed, most commercial turkeys are now bred through artificial insemination, because the males are too heavy to breed naturally.

The production of turkey is not a very concentrated industry, with three relatively large producers (Butterball, Jennie-O, and Cargill Turkey & Cooked Meats) and then more than a dozen mid-sized producers. Given this reasonably competitive environment, it’s interesting to note that the price markups for turkey–that is, the margin between the wholesale and the retail price–have in the past tended to decline around Thanksgiving, which obviously helps to keep the price lower for consumers. However, this pattern may be weakening over time, as margins have been higher in the last couple of Thanksgivings. Kim Ha of the US Department of Agriculture spells this out in the “Livestock, Dairy, and Poultry Outlook” report of November 2018. The vertical lines in the figure show Thanksgiving. She writes: “In the past, Thanksgiving holiday season retail turkey prices were commonly near annual low points, while wholesale prices rose. … The data indicate that the past Thanksgiving season relationship between retail and wholesale turkey prices may be lessening.”

If this post whets your appetite for additional discussion, here’s a post on the processed pumpkin industry and another on some economics of mushroom production. Good times! Anyway, Thanksgiving is my favorite holiday. Good food, good company, no presents–and all these good topics for conversation. What’s not to like?

No, The American Dream Is not about Getting Rich

Somewhere along the way, the idea of “the American Dream” became constricted, and started referring purely to economic success: that is, the idea that if you just worked hard, you could become at least comfortably well-off, or even rich. Here is a sampling of examples. Mrinal Mishra, Jonathan Fu, and Steven Ongena recently published a paper called “Do Narratives about the American Dream Rally Local Entrepreneurship?” looking at the connections between mentions of the “American dream” in newspapers and local rates of start-ups and entrepreneurship. They refer to the “American dream” as “the quintessential story of entrepreneurship and advancement.” A few years ago, Jimmy Narang, Robert Manduca, Nathan Hendren, Maximilian Hell, David Grusky, and Raj Chetty published a paper called “The fading American Dream: Trends in absolute income mobility since 1940,” in which the American dream is equated to income mobility. In a similar spirit, the American Enterprise Institute has started an “American Dream Initiative” to look at upward economic mobility. In 2014, Mark Robert Rank, Thomas Hirschl, and Kirk Foster wrote a book called Chasing the American Dream: Understanding What Shapes Our Fortunes, in which they define the American Dream as a combination platter of mostly economic outcomes, like economic security for oneself, the idea that one’s children will have more opportunities, and the freedom to pursue one’s passions.

Compare these definitions of the American dream to how Martin Luther King referred to the American Dream in his 1963 “I Have a Dream” speech:

I still have a dream. It is a dream deeply rooted in the American dream. I have a dream that one day this nation will rise up and live out the true meaning of its creed: “We hold these truths to be self-evident, that all men are created equal.” I have a dream that one day out in the red hills of Georgia the sons of former slaves and the sons of former slaveowners will be able to sit down together at the table of brotherhood. I have a dream that one day even the state of Mississippi, a state sweltering with the heat of oppression, will be transformed into an oasis of freedom and justice. I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.

Clearly, King’s description of the central message of the “American dream” has a different tone from middle-class economic security, increasing business start-up rates, and intergenerational income mobility.

In my own mind, I tend to label the dream of economic progress within and across generations as the “Horatio Alger myth.” For those unfamiliar with the name, Alger became in the years after his death in 1899 the best-selling American novelist of all time, with between 100 and 500 million copies in print by the end of the 1920s (publishing statistics at this time were not a precise science), although his total has been surpassed over the years by a few others including Danielle Steel and Dr. Seuss.

As literature, Horatio Alger stories aren’t much. He turned two basic plots into almost 100 books. His alliterative heroes like Ben Bruce, Ned Newton and Dean Dunham rise from humble beginnings. They are tempted by bad companions, threatened by bullies, and unfairly accused, before winning the attention of a benefactor by saving a drowning child, returning a lost gem, or stopping a runaway horse. The message, pounded home with sledgehammer subtlety, is that through frugality, honesty, abstaining from smoking and drinking, standing up to bullies, and answering the door when fortune knocks, anyone can reach middle-class respectability.

Horatio Alger himself was not a sympathetic character, but his books caught something in the zeitgeist of his time. They were best-sellers into the 1920s, and one shouldn’t underestimate how they continue to capture a powerful element of the popular imagination. Modern-day popular culture still celebrates when a high school graduate, an immigrant, or a garage entrepreneur rises to fame and fortune.

The British scholar Sarah Churchwell has gone looking for the origins of the terminology of the “American Dream.” She writes:

The American dream was rarely, if ever, used to describe the familiar idea of Horatio Alger individual upward social mobility until after the Second World War. Quite the opposite, in fact.  … Although many now assume that the phrase American dream was first used to describe 19th century immigrants’ archetypal dreams of finding a land where the streets were paved with gold, not until 1918 have I found any instance of the “American dream” being used to describe the immigrant experience …

Instead, Churchwell points out that the terminology of the “American dream” was popularized by the Pulitzer Prize-winning historian James Truslow Adams in his 1931 book The Epic of America. Adams acknowledges that economic security is part of the American dream, but insists on a much broader meaning as well–the version of the American dream to which Martin Luther King was referring. Adams wrote:

But there has been also the American dream, that dream of a land in which life should be better and richer and fuller for every man, with opportunity for each according to his ability or achievement. It is a difficult dream for the European upper classes to interpret adequately, and too many of us ourselves have grown weary and mistrustful of it. It is not a dream of motor cars and high wages merely, but a dream of social order in which each man and each woman shall be able to attain to the fullest stature of which they are innately capable, and be recognized by others for what they are, regardless of the fortuitous circumstances of birth or position. I once had an intelligent young Frenchman as a guest in New York, and after a few days I asked him what struck him most among his new impressions. Without hesitation he replied, “The way that everyone of every sort looks you right in the eye, without a thought of inequality.” Some time ago a foreigner who used to do some work for me, and who had picked up a very fair education, occasionally sat and chatted with me in my study after I had finished my work. One day he said that such a relationship was the great difference between America and his homeland. There, he said, “I would do my work and might get a pleasant word, but I could never sit and talk like this. There is a difference there between social grades which cannot be got over. I would not talk to you there as man to man, but as my employer.”

No, the American dream that has lured tens of millions of all nations to our shores in the past century has not been a dream of merely material plenty, though that has doubtless counted heavily. It has been much more than that. It has been a dream of being able to grow to fullest development as man and woman, unhampered by the barriers which had slowly been erected by older civilizations, unrepressed by social orders which had developed for the benefit of classes rather than just for the simple human being of any and every class. And that dream has been realized more fully in actual life here than anywhere else, though very imperfectly even among ourselves.

As Churchwell points out, Adams ends his book by suggesting that the Main Reading Room in the Library of Congress was a useful metaphor for the American Dream. Churchwell writes:

James Truslow Adams ended The Epic of America with what he said was the perfect symbol of the American dream in action. It was not the example of an immigrant who made good, a self-made man who bootstrapped his way from poverty to power, or the iconic house with a white picket fence. For Adams, the American dream was embodied in the Main Reading Room at the Library of Congress.

It was a room that the nation had gifted to itself, so that every American — “old and young, rich and poor, Black and white, the executive and the laborer, the general and the private, the noted scholar and the schoolboy” — could sit together, “reading at their own library provided by their own democracy. It has always seemed to me,” Adams continued,

to be a perfect working out in a concrete example of the American dream — the means provided by the accumulated resources of the people themselves, a public intelligent enough to use them, and men of high distinction, themselves a part of the great democracy, devoting themselves to the good of the whole, uncloistered.

It is an image of peaceful, collective, enlightened self-improvement. That is the American dream, according to the man who bequeathed us the phrase. It is an image that takes for granted the value of education, of shared knowledge and curiosity, of historical inquiry and a commitment to the good of the whole.

Having started to think about the “American dream” in these terms, I find myself unpleasantly startled when I see the term reduced to a narrowly economic vision.

South Africa’s Economy: 30 Years Since Apartheid

In April 1994, almost 30 years ago, Nelson Mandela was elected as the first black president of South Africa. The hopes at the time went beyond developing a representative political process, and included the idea that policies of inclusive growth would raise the standard of living for those who had been excluded.

How is that economic promise working out? A research group at Harvard’s Growth Lab spent two years researching the issues, and has now published its discouraging findings in “Growth Through Inclusion in South Africa” (November 15, 2023). The authors are Ricardo Hausmann, Tim O’Brien, Andrés Fortunato, Alexia Lochmann, Kishan Shah, Lucila Venturi, Sheyla Enciso-Valdivia (LSE), Ekaterina Vashkinskaya (LSE), Ketan Ahuja, Bailey Klinger, Federico Sturzenegger, and Marcelo Tokman.

The basic story is that for the first decade or so after 1994, South Africa’s economy performed reasonably well; since then, not so much. This panel shows annual growth rates for South Africa (red line), compared with the rest of sub-Saharan Africa (blue line), and the upper-middle income countries of the world (gray line).

This graph shows South Africa’s real per capita GDP since 1994. You see the pattern of reasonably rapid growth for the first decade, and then no growth since then. (In other words, the growth shown in the figure above has only been keeping pace with population growth since 2004 or so.) The dashed lines on the far right show pre-pandemic and post-pandemic projections.

As the report says:

Income per capita has been falling for over a decade. Unemployment at over 33% is the world’s highest, and youth unemployment exceeds 60%. Poverty has risen to 55.5% based on the national poverty line, yet many more households depend on government transfers to sustain meager livelihoods. Most cities are failing to adequately connect people to productive opportunities and are failing to innovate, grow, and drive inclusion. Rural areas in former homelands, where almost 30% of South Africans live, exhibit dismally low employment rates and remain exceptionally poor.

The report suggests two main categories of economic failure that are plaguing South Africa’s economy:

This report aims to answer why South Africa is failing to grow and failing to move the needle on economic inclusion three decades after the end of apartheid. The evidence points to two causes: collapsing state capacity and the persistence of spatial exclusion.

State capacity has collapsed across many government functions that are essential for a functioning economy. Critical network industries, including electricity, transport infrastructure and services, security, and water and sanitation have experienced major deteriorations over the last 15 years. The economy has been forced to cope with increasing electricity rationing, leading to a declaration of national disaster in February 2023 after more than 15 years of load shedding. Rail and port capacity has declined, generating large losses in exports. The collapse in state capacity to deliver key inputs has, in effect, squandered the country’s comparative advantage in cheap, coal-fired electricity. Urban crime is very high, and theft and sabotage undermine the functioning of many national infrastructure systems. Communities across the country are increasingly vulnerable to all forms of disaster — both natural and manmade — due to weakened public services. National finances are under increasing strain as South Africa relies on fiscal transfers to bail out state-owned enterprises (SOEs) and to redistribute national income to households to alleviate poverty and hardship. Many municipalities now face severe fiscal challenges which undermine already weak public service delivery. South Africa is seeing signs of unsustainability in its repeated credit downgrades and large sovereign risk premia. All the while, as growth slows, exclusionary forces are becoming more entrenched.

Spatial exclusion has been entrenched by well-intentioned policies in urban areas and an absence of effective strategy to include rural former homelands. Under apartheid, townships were intentionally separated from central business districts and economic infrastructure, leading to fragmented and disconnected cities. Apartheid also relied on differential treatment to former homelands vis-à-vis the rest of the country, effectively separating those areas from the industrialized economy. Despite attempts to reverse this exclusion, policies since 1994 have unintentionally perpetuated many aspects of spatial exclusion. We find that urban planning regulations and zoning policies prevent dense, affordable housing in desirable locations and consequently limit both formal and informal employment. We also find strong evidence that formal jobs are limited because long commutes from low-density areas in and around cities make transportation costs and reservation wages high, while low residential densities prevent the development of a thriving informal economy. Meanwhile, rural former homelands continue to be economies separate and distinct from the rest of the country and face extremely low rates of employment. …

It is unfortunately clear that South Africa’s trajectory is not one of growth or inclusion, but rather stagnation and exclusion. South Africa’s economy is stagnating and, in fact, losing capabilities, export diversity, and competitiveness. While the racial composition of wealth at the top has changed, wealth concentration in South Africa has not and remains very high. Moreover, the broader structures of the economy have not allowed for the inclusion of the labor and talents of South Africans — black, white, and otherwise. There appear to be major spatial impediments to labor market inclusion in cities and large spatial patterns of exclusion in former homelands. As the performance of network industries and public capabilities have deteriorated and growth has slowed, exclusion has only worsened. Empowerment of a few has de facto come at the expense of the many.

The report goes into considerable detail on these issues. It also raises the possibility that South Africa could be well-positioned to benefit from a global shift to carbon-free and low-carbon electricity production: as a producer of key minerals needed for batteries and other uses, as a producer of domestic solar and wind power, and as a source of technological expertise in these areas. These shifts could also rebuild what used to be a comparative advantage for South Africa as a place with cheap (albeit coal-generated) electricity.

But overall, South Africa’s economy is on a disheartening path. The issues of improving the functioning of government and addressing the long-standing patterns of spatial exclusion are hard, and in the last couple of decades, South Africa’s government and political system haven’t been up to the task. A virtue of this report is that it effectively lays out an agenda for what needs changing.

Some Wage Inequality Ratios

Back in 2010, Jonathan Heathcote, Giovanni L. Violante, and Fabrizio Perri published a study that compiled data from a number of publicly available sources to measure the evolution of US inequality in income, wages, and wealth over time, with data up through 2006 (“Unequal we stand: An empirical analysis of economic inequality in the United States, 1967–2006,” Review of Economic Dynamics 13(1), 15–51). Now, with the addition of Lichen Zhang as another co-author, the band is back in town with a follow-up, updating their earlier study with data through 2021 (“More Unequal We Stand? Inequality Dynamics in the United States, 1967–2021,” Review of Economic Dynamics, October 2023, 50: 235-266; an ungated working paper version is available from the Federal Reserve Bank of Minneapolis).

The study takes a look at inequality measured in a number of ways: income, consumption, wages; individual-level and household-level; by education level, race/ethnicity, and gender; with adjustments for government payments; and so on. From the abstract:

We find that since the early 2000s, the college wage premium has stopped growing, and the race wage gap has stalled. However, the gender wage gap has kept shrinking. Both individual- and household-level income inequality have continued to rise at the top, while the cyclical component of inequality dominates dynamics below the median. Inequality in consumption expenditures has remained remarkably stable over time. Income pooling within the family and redistribution by the government have enormous impacts on the dynamics of household-level inequality, with the role of the family diminishing and that of the government growing over time. In particular, largely due to generous government transfers, the COVID recession has been the first downturn in fifty years in which inequality in disposable income and consumption actually declined.

Here, I’ll focus on a subset of these comparisons from a single figure, with some discussion below. This paper is heavy on fact patterns, without trying to describe underlying causes. But even standing alone, the fact patterns provide insight about the evolution of US labor markets.

The upper-left panel looks at the ratio of wages for workers who have (at least) completed a four-year college degree to those who have not. The growth in the wage premium for college workers has been one of the primary drivers of rising inequality of incomes in recent decades: indeed, a widely-held theory is that demand for high-skilled labor has been rising faster than supply. But the college wage premium hasn’t risen much for men since about 2000, and for women since about 2010. However, the education premium for men has risen above the level for women. You can pick your own hypothesis here. Maybe the demand for skilled labor isn’t rising as fast? Maybe colleges in recent years aren’t providing the kind of skilled labor that employers want? Maybe less-skilled labor has become more accustomed to using technology, in such a way that information and computer technology is complementing less-skilled labor more than it used to? It’s a trend that bears watching.

The upper right-hand panel shows the ratio of average wages for 45-55 year-olds compared to the average of 25-35 year-olds. This age premium rose substantially from the 1970s into the early 1990s, but since then has flattened out. Again, pick your hypothesis. The rise of information technology and the internet made some middle-aged skills obsolete, while elevating the wages of some younger workers who could use those skills?

The middle left-hand panel is the gender premium: the ratio of average wage of men to average wage of women. Back around 1970, the ratio of 1.7 works out to about 59 cents in wages earned by the average woman to $1 earned by the average man. The ratio has now fallen to about 1.2, which is about 83 cents in wages for the average woman compared to $1 earned by the average man. There is an ongoing debate over how much of this difference is accounted for by women being more likely to take on parental responsibilities. Here, I’ll just note that the ratio seems to have fallen notably even in the last decade, and looks as if it is still falling.
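
The cents-on-the-dollar translation is just the reciprocal of the wage ratio; a quick check of the two figures above, assuming the ratios are exactly 1.7 and 1.2:

```python
def cents_per_dollar(male_to_female_ratio: float) -> int:
    """Average wage earned by women per $1 earned by men, given the male/female wage ratio."""
    return round(100 / male_to_female_ratio)

print(cents_per_dollar(1.7))  # -> 59 cents, roughly the situation around 1970
print(cents_per_dollar(1.2))  # -> 83 cents, roughly the situation today
```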

The middle right-hand panel is the white/black wage ratio. The white/black ratio for men is consistently higher than for women. However, neither ratio has moved much since the mid-1990s–that is, it hasn’t moved much for a generation.

The last two panels look at ratios across certain kinds of jobs. The bottom left panel looks at the ratio of average wages in “Cognitive” occupations to “Routine Cognitive and Routine Manual” occupations. Basically, “Cognitive” refers to jobs classified as “Professional, Managerial, and Technical,” while the “Routine” occupations comparison category includes “Clerical and Sales,” “Production,” and “Operators.” It’s perhaps not surprising that the “Cognitive” jobs pay more. However, the ratio for Cognitive/Routine jobs has been rising pretty steadily for men since the 1970s. For women, the ratio rises up to the mid-1990s, but then more-or-less levels off. It’s not obvious from this comparison why the ratio should differ between men and women, but one possibility is that there are subcategories within “Cognitive” and “Routine” jobs that are not captured by the overall comparison. It’s also interesting that the college wage premium for men hasn’t been rising (as shown earlier), but the ratio between Cognitive and Routine jobs for men has continued to rise–suggesting that “Cognitive” and “College” are not lining up in any precise way.

The bottom right panel looks at the ratio of wages in “Non-Routine Manual” occupations to the same “Routine Cognitive and Routine Manual” occupations from the previous graph. Non-Routine Manual is all jobs categorized as Services; again, the “Routine” jobs include “Clerical and Sales,” “Production,” and “Operators.” In general, the “Services” jobs are paid less (as you can see, the ratio on the vertical axis is below 1). But since the 1980s, the wage for “Services” jobs has been creeping a little closer to the wage for the “Routine” jobs, especially for men. A possible explanation here is that the “Routine” jobs are more likely to be replaced by automation, in a way that means less-skilled workers are becoming able to do these jobs.

I have no bottom line here, except that a dynamic economy evolves, often for the better for many groups, but sometimes for the worse for certain groups as well.

National Flood Insurance Program: One More Short-Term Deadline

The National Flood Insurance Program is one of the programs caught in the wash as the federal government staggers from one short-term budgeting plan to the next. As the website of the Federal Emergency Management Agency (FEMA) points out: “Congress must periodically renew the NFIP’s statutory authority to operate. On Sept. 30, 2023, the president signed legislation passed by Congress that extends the National Flood Insurance Program’s (NFIP’s) authorization to Nov. 17, 2023.”

The National Flood Insurance Program is intended to solve a problem: if you live in a flood zone, or want to live in a flood zone, you might prefer to have home insurance. Indeed, you probably can’t get a bank mortgage to buy the property without such insurance. But it’s just a fact that for properties built in certain areas–in zones where floods or wildfires are relatively likely, or on a common pathway for hurricanes–fairly-priced home insurance is going to be quite expensive. In fact, insurance companies may look at such areas and decide that, given the risks, they would prefer not to sell insurance there at all.

But unsurprisingly, the federal flood insurance program faces a political imperative not to charge “too much,” so that when floods do happen, there isn’t enough money in the insurance fund to pay for the damages. From 1968 up through the early 2000s, this problem was apparent, but in the context of federal spending, not a huge deal: that is, the accumulated losses of the flood insurance program were in the range of $2-3 billion. But the 2005 hurricane season, including Hurricane Katrina, meant that total losses for the program went up into the range of $20 billion.

One can make a case for the government to provide back-up housing insurance. But it’s very hard to make a persuasive case that this insurance should, in effect, subsidize the housing choices of people with homes near the ocean that also happen to be in flood plains.

Thus, since October 2021, FEMA has been trying to raise flood insurance premiums: for an overview and discussion of the proposal, see the report from the Government Accountability Office, “Flood Insurance: FEMA’s New Rate-Setting Methodology Improves Actuarial Soundness but Highlights Need for Broader Program Reform” (GAO-23-105977, July 31, 2023). FEMA is trying to phase in the rate increases over time, which means full implementation will take until 2037.

In the meantime, states are suing FEMA to block the rate increases. As the Wall Street Journal reported a couple of months ago (Jean Eaglesham and Katy Stech Ferek, “Flood-Insurance Program Faces a Backlash—and a Deadline; Home-purchase closings could be derailed if it lapses,” September 21, 2023):

More than 3,000 properties had 10 or more claims from 1978 through 2022, according to FEMA. Nearly two-thirds of those were in five states: Louisiana, Texas, New Jersey, Missouri and New York. … The new pricing will take several years to be fully implemented and result in rate hikes for two-thirds of the program’s 4.7 million policyholders, according to the Government Accountability Office. The states suing FEMA say the new rates could drive people out of flood zones, slam property values and even lead to people losing their homes because they can no longer afford insurance that is a condition of their mortgages. Average annual premiums will eventually more than double in 12 coastal and landlocked states under the revamp …

The complaints from the states are probably realistic. The GAO report suggests that flood insurance could be means-tested–that is, the premium could be capped for those with lower income levels. But this step would then require a permanent subsidy to the flood insurance program. Again, it’s not obvious why federal taxpayers should subsidize home insurance for those living in flood plains.

For a readable overview of these issues, one useful starting point is “Introduction to the National Flood Insurance Program (NFIP)” from the Congressional Research Service (R44593, updated October 16, 2023). Another is “The National Flood Insurance Program: A Primer,” by Zoe Linder-Baptie, Jenna Epstein, and Carolyn Kousky (April 2022, Wharton Risk Management and Decision Processes Center). They point out that standard home insurance policies typically exclude coverage for floods, and 90% of the homes with flood insurance coverage rely on the National Flood Insurance Program. (Technically, private insurance companies write these policies, but the amount charged is set by the federal program, which also holds all of the risk.) There are about 5 million homes with government flood insurance. As they write:

The NFIP is an important program nationwide, providing policyholders with financial resilience in the face of flood risk that is escalating in many locations. But the program faces several challenges. The NFIP is in an unsustainable fiscal position, despite rating overhauls, which will require Congress to forgive its accumulated debt. At the same time, flood insurance policies can be unaffordable for some at-risk residents who most need the financial protection of insurance. …
Currently, the NFIP fails to sufficiently communicate future flood risk, does not adequately address ever-increasing rainfall-related flooding, flood maps are backward looking and often out-of-date, and has not updated its development regulations to account for this growing risk.

How Stable are Stablecoins?

One reason why cryptocurrencies like Bitcoin don’t work well as a medium of exchange for typical transactions (along with difficulties like slow transactions and high cost per transaction compared with standard currencies) is that their value fluctuates so much. When a person or a company promises to make or receive a payment in a few weeks or even a few months, it wants to know, in the present, what that payment will be worth.

As a result, a number of companies have been launched to offer “stablecoins.” The goal is to have a cryptocurrency with a fixed and stable value. But how well are they succeeding? Anneke Kosse, Marc Glowka, Ilaria Mattei, and Tara Rice provide an overview of stablecoins in “Will the real stablecoin please stand up?” (Bank for International Settlements, November 2023, BIS Papers No 141). The authors point out that stablecoins have four different ways to implement their commitment to have a value that doesn’t rise or fall. They write:

Stablecoins may use various approaches to maintaining parity with their peg. At a high level, a distinction can be made between the following four types of stablecoin based on whether they claim to hold a pool of reserve assets to back their value (ie whether they are collateralised or not), and if so, the type of these reserve assets:

  • Fiat-backed stablecoins: stablecoins that claim to be backed by assets denominated in a fiat currency. Examples include Tether and USD Coin.
  • Crypto-backed stablecoins: stablecoins that claim to be backed by other cryptoassets. Examples include Dai and Frax.
  • Commodity-backed stablecoins: stablecoins that claim to be backed by commodities. Examples are PAX Gold and Tether Gold.
  • Unbacked stablecoins: stablecoins that do not claim to be backed by any reserves, but rather seek to maintain a stable value through, for instance, algorithms or protocols. Examples include TerraClassicUSD and sUSD.

Note that fiat-backed, commodity-backed and crypto-backed stablecoins are sometimes also defined as collateralised stablecoins, with the first two referred to as “off-chain collateralised” and the latter “on-chain collateralised” stablecoins.

The big player in this market is Tether, which is backed by US dollar assets. Indeed, when Bitcoin falls, it’s not unusual for some cryptocurrency investors to shift money into Tether. A prominent unbacked “stablecoin” called TerraUSD at first earned returns of 20% for investors by lending out the stablecoin holdings in a decentralized bank for crypto investors, but then the value of the TerraUSD stablecoins fell to 2% of their original value. The stablecoin market as a whole has also suffered a series of negative news stories–some more directly related to stablecoins, and some not. Here’s an overview of recent events from the BIS authors:

It took several years before stablecoins obtained significant traction (Graph 1). The first stablecoin (BitUSD) was issued in July 2014, and five years later the total market capitalisation had grown to roughly five billion US dollars. It was not until the start of the Covid-19 pandemic that the market capitalisation started to rise steeply (Graph 1.A, event a). This has been attributed to the turbulence in the traditional financial markets following the Covid-19 outbreak and the sharp decline of the price of Bitcoin, which led investors to turn to stablecoins. Over the course of two years, the market capitalisation grew more than ninefold, and in March 2022, it was more than 35 times higher than at the onset of the pandemic.

Most of the growth was driven by a strong increase in the market capitalisation of Tether. Tether was launched in 2014. While it quickly became the largest stablecoin, it started to gain traction only in 2021. Many other stablecoins were also launched during the pandemic: the total number of active stablecoins grew from 13 at the beginning of 2020 to 40 at the end of 2021. Initially, the stablecoin market consisted mainly of fiat-backed stablecoins. However, various stablecoins that entered the market over the course of 2021 were crypto-backed or unbacked stablecoins. In April 2022, fiat-backed stablecoins accounted for around 80% of the total stablecoin market in terms of market capitalisation.

The growth of the stablecoin market came to a halt in the first half of May 2022 when the crypto ecosystem was shaken up by the crash of various cryptoassets. Among these was Terra’s (unbacked) stablecoin “TerraUSD”, the third largest stablecoin at the time (Graph 1.A, event b). TerraUSD’s collapse was caused by its inability to redeem users’ holdings at par. The TerraUSD crash caused unbacked stablecoins to lose almost all of their value. It also undercut the market capitalisation of fiat-backed and crypto-backed stablecoins. Overall, by the end of September 2022, the total market capitalisation of stablecoins had shrunk by more than a fifth to $151.4 billion.

The stablecoin market continued to shrink into 2023. Between April 2022 and the end of January 2023, the total capitalisation of the stablecoin market had shrunk by more than 25% to $138 billion. While much of the fall was triggered by the TerraUSD collapse, the bankruptcy filing of FTX, a major crypto exchange, in November 2022, accelerated the declining trend, although not as strongly as the May turmoil.

The left-hand panel shows a breakdown of the stablecoin market by how the assets are backed: as you can see, those backed by fiat currency (like the US dollar, as is the case with Tether) have most of the market. The right-hand panel shows the top five stablecoins by market size.

The authors emphasize that no stablecoin has yet managed to maintain a truly fixed value. Even Tether “had an average daily price volatility of about 2 percentage points between end-September 2022 and end-September 2023. This shows that to date, no stablecoin has been able to meet an important prerequisite of becoming a safe store of value – guaranteeing full price stability.” Unbacked stablecoins, although a small share of the overall market, can be just as volatile as any non-stablecoin cryptocurrency.
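
To make the price-stability point concrete, here is a minimal sketch of how one might measure the day-to-day wobble of a coin around a $1 peg. The price series is invented for illustration, and the calculation is a simple version of such a measure, not the BIS authors’ exact methodology.

```python
import numpy as np

# Invented daily closing prices for a coin that is supposed to trade at exactly $1.00.
prices = np.array([1.000, 0.998, 1.004, 0.991, 1.012, 0.997, 1.003, 0.989])

daily_moves = np.abs(np.diff(prices) / prices[:-1])       # absolute day-over-day changes
avg_daily_volatility = 100 * daily_moves.mean()           # average move, in percentage points
worst_peg_deviation = 100 * np.abs(prices - 1.0).max()    # largest gap from the $1 peg, in percent

print(f"average daily move: {avg_daily_volatility:.2f} percentage points")
print(f"worst deviation from peg: {worst_peg_deviation:.2f}%")
```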

It’s not clear to me that stablecoins have yet evolved in a way that makes them more than a niche market. But I’m hoping anyone who plays around with these assets understands that they are not regulated like banks or other traditional financial markets. Some cryptocurrencies that refer to themselves as “stablecoins” are not actually backed by anything. And even the largest and best-known stablecoins can have their value move a few percent in a day.

Mario Draghi on a Common Fiscal Policy for the Eurozone

Imagine that someone told you that a key issue will be determined by whether the countries of the European Union display “unity.” Is that statement encouraging, or despairing? Mario Draghi suggests (with a degree of optimism, I think?) that European unity about a common fiscal policy is the next step to make the euro function well in “The Next Flight of the Bumblebee: The Path to Common Fiscal Policy in the Eurozone” (NBER Reporter, October 2023, delivered as the 15th Annual Martin Feldstein Lecture).

(Most readers of this blog, I expect, recognize Draghi’s name as head of the European Central Bank from 2011 to 2019. Indeed, during a time of eurozone financial crisis in 2012, Draghi is sometimes credited with having “saved the euro” by making a forthright “whatever it takes” promise, when he said during a press conference: “Within our mandate, the ECB is ready to do whatever it takes to preserve the euro. And believe me, it will be enough.” Not everyone knows, however, that Draghi has a PhD in economics from MIT and was a professor at several Italian universities before getting into the central banking business.)

Draghi’s reference to the “flight of the bumblebee” is a reminder of the old line that bumblebees seem ill-suited to flight–but they fly anyway. Similarly, economists have been arguing since the 1990s that the economies of Europe may be ill-suited to a common currency–but they adopted one anyway.

The concern has to do with the theory of “optimal currency areas.” When does it make sense for two geographic areas to share a common currency, and when does it not? Imagine that two economic areas experience different “shocks.” Perhaps one part is dependent on the price of oil, but another is dependent on the price of wheat. Perhaps one area depends more heavily on manufacturing, while another area depends more on computers and information technology.

If these two areas have different currencies, they can adjust to these shocks through shifts in the exchange rate. But if they are glued together with a single currency, then wages and prices in one area–measured by that common currency–will be shifting relative to the other. One area will feel “rich” and the other will feel “poor.”

If two areas are well-suited to be a common currency area, then there will be various adjustments to smooth out such differences over time. For example, workers will migrate from low-wage to high-wage areas; conversely, companies will shift their investment to take advantage of low-wage areas. Moreover, the central government will practice some degree of redistribution: the high-wage area will pay more in taxes, and the low-wage area will receive more in benefits.

But what happens if, because of various barriers (national boundaries, different regulations, culture and language barriers), workers and firms don’t move much between the high- and low-wage areas? What if the central government is relatively small and weak, so that it doesn’t practice a meaningful degree of redistribution? And what if, because of the common currency, no exchange rate adjustments are possible? In that setting, the lower-wage area may just be stuck in that position for a long time. This is arguably what happened in the US economy, where the southern states remained poor for decades from the late 19th century into the 20th century–until cross-region migration of workers and firms increased, along with an expanded fiscal role for the US government. It’s what is happening in modern Europe, as certain countries, including Greece and Italy, seem stuck in a low-growth trap.

The bumblebee that is the euro continues to fly. As Draghi points out, the underlying assumption in adopting the euro was that even if the EU was not actually ready to be a common currency area in 2000, it would evolve in that direction. As he says:

But there was always another perspective, which was that the euro was the consequence of decades of past integration — notably the evolution of Europe’s single market — and that it was only one more step along a much longer road towards political union. And through the so-called “functionalist” logic of integration, where one step forward leads inexorably to the next as its shortcomings are revealed, the end goal of political union would drive the necessary macroeconomic changes. From this viewpoint, the key question was not whether the euro area was an optimal currency area from the start — evidently it was not — but whether European countries were prepared to make it converge towards one over time.

In some ways, this vision of greater economic mobility of workers, firms, and products across the countries of Europe has been coming true. Draghi notes:

Twenty-five years of economic integration have led to more integrated supply chains and more synchronized business cycles, making the single monetary policy more appropriate for all countries. Multiple studies find that business cycle synchronization in the euro area has risen since 1999 and the euro can explain at least half of the overall increase. At the same time, while labor mobility in the euro area remains some way short of US levels, studies have found a gradual convergence, reflecting both a fall in interstate migration in the US and a rise in the role of migration in Europe. And channels of risk sharing have improved further. For example, against the backdrop of banking sector integration — the so-called banking union — and generous official assistance, cross-border lending was notably more resilient during the pandemic than we had seen during previous large shocks. The further Europe can advance along this path — especially in terms of integrating its capital markets — the lower the need for permanent fiscal transfers will be.

But with all of these changes duly noted, it remains true that fiscal policy across the nations of Europe is dominated by individual countries, not by a centralized budget. US-style transfers from higher-wage to lower-wage areas are not possible. As Draghi points out, in the US, states can be required to run balanced budgets, in part because the US federal government can run budget deficits when needed. But in a European context, every country can run budget deficits when it wishes to do so, which already contributed to one deep EU recession back in 2012-13.

Draghi’s vision for a common EU fiscal policy starts from a belief that EU countries have some shared goals: for example, a shared interest in higher defense spending in the aftermath of Russia’s invasion of Ukraine, and a shared interest in a transition to lower-carbon energy sources. As Draghi writes:

Whichever route we take, we cannot stand still or — like a bicycle — we will fall over. The strategies that had ensured our prosperity and security in the past — reliance on the USA for security, on China for exports, and on Russia for energy — are insufficient, uncertain, or unacceptable. The challenges of climate change and migration only add to the sense of urgency to enhance Europe’s capacity to act. We will not be able to build that capacity without reviewing Europe’s fiscal framework, and I have tried to outline the directions this change might take. But ultimately the war in Ukraine has redefined our Union more profoundly — not only in its membership, and not only in its shared goals, but also in the awareness it has created that our future is entirely in our hands, and in our unity.

I confess that for me, Draghi’s call for EU “unity” on these topics feels discouraging. Are the EU countries across eastern Europe, close to the Russian border, going to agree on defense policy with countries in the rest of Europe? Is France, with its nuclear power plants, or Norway, with its North Sea oil reserves, going to agree on energy policy with, say, Portugal and Greece? Is Draghi correct that without such agreement, the bicycle will fall over and the eurozone will be subject to another round of financial crisis? Or perhaps this bumblebee can just keep flying, even if economists can’t quite grasp how it is doing so.

For some of my previous efforts to explain the euro and the conditions for optimal currency areas, see:
