Reduced Wage Inequality Since the Pandemic

Since the start of the pandemic in early 2020, wage growth in the United States has tended to be higher for those at the bottom of the wage distribution. The evidence is from Mitchell Barnes, Lauren Bauer, and Wendy Edelberg, who provide a useful figure illustrating these dynamics in one of their “11 Facts on the Economic Recovery from the COVID-19 Pandemic” (Hamilton Project at the Brookings Institution, September 2021).

The top lines show growth in nominal wages; the bottom lines show growth in wages after adjusting for inflation. Thus, two conclusions can hold true at the same time: 1) those in the lowest income quartile are doing better than those in the top quartile; and 2) the surge of inflation in recent months has meant that the real buying power of income has dropped.

Granted, the gap here between lowest and highest quartile is not enormous: we’re talking a matter of a few percentage points. But if the change were going the other way, with wage gains more weighted to those in the higher income group, that would be worthy of notice. This seems worthy of notice, too. Barnes, Bauer, and Edelberg write: “Some sectors have seen particularly strong wage gains. For example, over the past 12 months average hourly earnings in the leisure and hospitality sector have grown nearly twice as fast as the overall private industry average. Other sectors seeing strong gains in hourly earnings include retail trade, transportation and warehousing, and financial activities.”

Why are those at the bottom of the income distribution doing better just now? My guess is that many of the jobs at the bottom of the income distribution were more severely disrupted by the pandemic: for example, think about jobs lost and disrupted in retail stores, or in travel-related industries. As overall GDP growth has recovered, those industries are trying to re-hire. But there are still a lot of pre-pandemic workers who are staying out of the labor force, at least for now. As a result, there are lots of job vacancies along with lots of people quitting jobs (which is often a prelude to moving to a new job). Put these together, and the wages for lower-skilled workers are being bid up.

From Offshoring to Nearshoring?

One of the ongoing public narratives, beginning with the start of President Trump’s imposition of tariffs on trade with China and others and continuing up through the pandemic, has been whether a previous pattern of off-shoring–that is, importing goods and inputs from abroad–would reverse. More generally, the question has been whether the US economy would become less attached to Chinese imports.

The Kearney consulting company provides some information on the actual data in its Reshoring Index. The most recent version came out in May, and a summary appears under the title “Global pandemic roils 2020 Reshoring Index, shifting focus from reshoring to right-shoring.”

Here’s one basic pattern: total US imports of manufactured goods as a share of US domestic production of manufactured goods. You can see the general rise since 2008, albeit with a small dip in 2019. US imports of manufactured goods are about one-eighth of domestic output of manufactured goods. In my experience, this proportion is considerably lower than a lot of people expect to hear. To put it another way, the potential gains for US manufacturing from being able to export into growing foreign markets around the world are much, much bigger than the potential gains from displacing imports of manufactured goods.

US imports of manufacturing goods from 14 Asian LCCs increased in 2020

What about China in particular? The report looks at imports from China as a share of total US imports from 14 low-cost Asian countries, what the report calls LCCs. The share of these imports from China does start falling in 2018, and in particular it plummets during the production and trade disruptions at the start of the pandemic in the first quarter of 2020.

The drop in China’s share of LCC exports to the US has largely been matched by a rise in Vietnam’s exports to the US. As the Kearney report notes:

US companies have long viewed Vietnam as a viable sourcing option. Since China’s labor costs began to rise in 2007, Vietnam—where labor costs now fall almost 50 percent below those in China—has successfully used its cost advantage to attract global manufacturing business. A few examples include: 

Nike and Adidas reallocated a vast majority of their manufacturing and footwear base from China to Vietnam. 

In 2019, Hasbro announced it hoped to have only 50 percent of its production coming from China by the end of 2020, shifting to new plants in Vietnam and India. Hasbro is continuing to advance its transition to Vietnam and India. 

In 2019 Samsung ended mobile telephone production in China due to rising labor costs and economic slowdowns, shifting production to India and Vietnam. 

In addition, Vietnam is actively participating in free trade agreements to reduce trade barriers, improve market access for its goods, simplify customs procedures, and offer an increasingly attractive business environment. Vietnam has already invested heavily in improving its highways and ports, spending the most on infrastructure among Southeast Asian countries as a percentage of GDP (5.7 percent).

Even if relatively little reshoring of imported manufacturing goods has happened so far, what about the future? The Kearney report offers some data from a survey of US manufacturing executives. A general theme seemed to be that less reliance on imports from China, and from the low-cost Asian countries in general, was likely. However, a common response was not to expand US manufacturing capacity, but instead to emphasize the possibilities of “nearshoring” by relocating many of these supply chains to Mexico or Canada.

In addition, a number of the US manufacturing executives in the survey mentioned the uncertain state of the US workforce during and after the pandemic. Lots of service-oriented jobs can be done virtually, but working at an actual manufacturing job is likely to mean being physically present. One more complication is that 25% of the US manufacturing workforce is 55 or older. Thus, the future of US manufacturing will to some extent be linked to finding the skilled and willing labor force needed to make it happen.

Goods, Services, and Inventories Since the Pandemic Recession

In most recessions, consumption of services doesn’t move much, while consumption of goods drops fairly sharply and then rebounds over time. The short, sharp pandemic recession was different. With a combination of stay-at-home behavior, risk aversion, and official lockdowns, consumption of services dropped substantially and has not yet fully recovered. Meanwhile, consumption of goods dropped briefly, but quickly rebounded to above the pre-pandemic levels.

Mitchell Barnes, Lauren Bauer, and Wendy Edelberg provide a useful figure illustrating these dynamics in one of their “11 Facts on the Economic Recovery from the COVID-19 Pandemic” (Hamilton Project at the Brookings Institution, September 2021). They point out that the US economy re-attained its pre-pandemic size in the second quarter of this year. But while production of services has not yet fully recovered, production of goods has climbed. The four lines on each graph show the four most recent recessions. The purple line shows the unusually high rise in goods consumption on the left and the unusual drop in services consumption on the right.

This unexpectedly high consumption of goods, combined with disruptions of supply chains, has created some unusual situations for inventories–that is, what is being held in stock to be sold later. Overall inventories held by businesses can be divided up into three categories: held by manufacturers, held by wholesalers, and held by retailers.

The blue line in this figure shows overall business inventories, measured relative to sales. As you can see, inventories rise during recessions, as unsold goods sit on the shelves. Firms then reduce their ordering, and when demand picks up again after the recession, inventories drop for a time. One interesting fact is that the overall level of business inventories (blue line) has not dropped to unprecedented levels–it’s at a 1.25 multiple of sales, similar to levels before and after the Great Recession. But if you focus in on retail inventories, shown by the red line, you can see that retailers (understandably) tended to hold more inventory than the business sector as a whole before the pandemic. Since then, however, retail inventories have plunged to historically low levels, below the overall levels for the business sector as a whole.

A more detailed breakdown within the retail category shows that some of the sectors with the biggest drop in inventories are motor vehicles, clothing stores, and department stores.

What about the wholesalers who supply the retailers? Their inventories peaked and dropped as well, but not as severely. Again, the blue line shows inventories for the overall business sector as a basis for comparison, while the green shows inventories for wholesalers. Their inventory levels are low–it’s not as if they have a backlog of inventories waiting to be passed along to the retailers–but the overall level of inventories for wholesalers is not ultra-low compared to pre-pandemic levels.

What about manufacturers? Once more, the blue line shows inventories for the overall business sector, while this time the red line shows inventories for manufacturers. These inventories show the spike and fall one would expect, but overall, they don’t appear that low. On the other side, I’ve been reading here and there about shortages for manufacturers of specific inputs, like the computer chips used by car manufacturers. Perhaps manufacturers are holding inventories of some inputs because they are hampered by shortages of other inputs.

These shifts in goods, services, and inventories are of course reflected in adjustments in actual jobs and businesses. By the standards of past recessions, the dislocations in services industries were much larger and longer than would have been expected, while in goods industries, the rebound was stronger than would have been expected.

The Importance of Basic R&D

R&D combines two ideas: “basic research,” which aims at discovering new science that doesn’t have a near-term commercial application, and all the other research, which does focus on developing a product with a near-term commercial application. For economic growth, both matter. But over time, the big breakthroughs in basic science matter more. The October issue of the semi-annual World Economic Outlook from the International Monetary Fund includes a chapter on “Research and Innovation: Fighting the Pandemic and Boosting Long-term Growth,” with a focus on the importance of basic research.

One emphasis of the report is that new research ideas flow pretty easily across national borders. Thus, other countries benefit from US R&D, and the US economy benefits from foreign R&D. The report notes:

Basic scientific research is a key driver of innovation and productivity, and basic scientific knowledge diffuses internationally farther than applied knowledge. A 10 percent increase in domestic (foreign) basic research is estimated to raise productivity by about 0.3 (0.6) percent, on average. International knowledge spillovers are more important for innovation in emerging market and developing economies than in advanced economies. Easy technology transfer, collaboration, and the free flow of ideas across borders should be key priorities.

Here are a couple of interesting figures from the chapter. One shows the rising importance of academic research in patent applications: that is, over time such patent applications are citing a rising amount of academic research.

This figure shows the gap between spending on applied and basic research around the world–that is, it shows applied minus basic. The gap has been rising slowly over time, which shows that quicker-to-pay-off applied research has been outpacing basic research.

After presenting various models of the R&D process, the IMF report recommends: “[D]oubling subsidies to private research and boosting public research expenditure by one-third could increase annual growth per capita by around 0.2 percent. Better targeting of subsidies and closer public‑private cooperation could boost this further, at lower public expense. Such investments could start to pay for themselves within a decade or so.”

Whether your focus is on raising the standard of living, improved education and health care, a cleaner environment and reducing carbon emissions, or many other goals, improved technology offers the possibility of doing it better, faster, and cheaper. For example, it’s interesting to speculate on what the pandemic would have been like if it had hit just 25 years earlier, without the technological ability of so many people and services to operate at a distance over the internet: the choices would have narrowed down to greater contagion risks or a truly gruesome economic shutdown.

An Economics Nobel Prize for Causality: Angrist, Card, and Imbens

At first glance, it may appear that the 2021 Nobel prize in economics (more formally, the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2021) was given for two rather separate contributions. The award announcement says that one-half of the award is given to David Card “for his empirical contributions to labour economics,” while the other half is given jointly to Joshua D. Angrist and Guido W. Imbens “for their methodological contributions to the analysis of causal relationships.” (It must have been an interesting if slightly farcical conversation that led to splitting the prize 1/2, 1/4, 1/4, rather than 1/3, 1/3, 1/3, but of such stuff are committee decisions made.)

But the underlying connection between the three co-winners is obvious enough if you read the background materials released by the Nobel committee. Each year, the committee produces a highly readable “Popular science” explanation of the prize, this year titled “Natural experiments help answer important questions,” along with a more specialized and longer “Scientific background” paper, this year titled “Answering Causal Questions Using Observational Data.”

As a starting point for understanding the contribution here, it’s useful to begin with an old familiar warning from every introductory statistics class: “correlation does not imply causation.” If two variables A and B are correlated with each other, it’s possible that A causes B, or B causes A, or that both A and B are being affected by some unnamed set of factors C. There is also a human tendency to look for patterns in data, and when you look at enough combinations of variables, some of them will be correlated with each other by random chance, with no actual connection between them at all. Back when I was first learning statistics in the late 1970s, it was common for these warnings all to be mentioned in the intro econometrics class, but as a practical matter, we then went right back to calculating correlations.
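
If you want to see how easily that last problem arises, here’s a little simulation sketch of my own (in Python, with made-up data, nothing from the Nobel materials): generate a few dozen mutually independent random series and count how many pairs look “correlated” purely by chance.

```python
import numpy as np

# Illustration only: 50 independent random series of 30 observations each.
rng = np.random.default_rng(seed=0)
n_vars, n_obs = 50, 30
data = rng.normal(size=(n_vars, n_obs))

# Pairwise correlations; keep the upper triangle so each pair counts once.
corr = np.corrcoef(data)
pairs = corr[np.triu_indices(n_vars, k=1)]

print(f"{(np.abs(pairs) > 0.4).sum()} of {pairs.size} pairs of unrelated "
      f"series have |correlation| > 0.4")
```

Run it, and dozens of the 1,225 pairs typically clear that bar, even though every series is pure noise.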

Scientists often seek to demonstrate causation with controlled experiments. In one famous example, when Louis Pasteur developed a vaccine for sheep anthrax back in 1881, he took a herd of 100 sheep, vaccinated half of them, and then exposed the herd to anthrax. The unvaccinated sheep died and the vaccinated lived. But people aren’t sheep. So if an economist wants to address a question like whether a higher minimum wage causes unemployment (not merely correlates with it!) or whether a surge of immigration leads to higher unemployment in the native-born population (again, causes rather than correlates!), how is it possible to use real-world observational data in a way that lets a social science researcher draw conclusions about causality? To put it another way, do real-world events sometimes create a kind of “natural experiment” for researchers to study?

The Nobel committee describes some of the most prominent studies built on this “natural experiment” foundation. For example, consider the question of whether education raises one’s level of income. Yes, it’s of course true that there is a correlation between higher levels of education and income, but perhaps there are underlying personal characteristics–say, persistence or rule-following or an ability to work with others–that are correlated both with higher education and with higher income. How can we look at real-world data and obtain a causal estimate? The Nobel committee explains:

Joshua Angrist and his colleague Alan Krueger (now deceased) showed how this could be done in a landmark article. In the US, children can leave school when they turn 16 or 17, depending on the state where they go to school. Because all children who are born in a particular calendar year start school on the same date, children who are born early in the year can leave school sooner than children born later in the year. When Angrist and Krueger compared people born in the first and fourth quarters of the year, they saw that the first group had, on average, spent less time in education. People born in the first quarter also had lower incomes than those born in the fourth quarter. As adults they thus had both less education and lower incomes than those born late in the year. Because chance decides exactly when a person is born, Angrist and Krueger were able to use this natural experiment to establish a causal relationship showing that more education leads to higher earnings: the effect of an additional year of education on income was nine per cent.

This study essentially uses the idea that people born in the fourth quarter of a given year are not fundamentally different than those born in the first quarter of the next year, and so the date when you are allowed to leave school in effect created a random separation of these two equivalent groups–logically similar to Pasteur’s vaccine. Researchers began to focus on other situations with the characteristics of a “natural experiment.”
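
For those who like to see the mechanics, here’s a stripped-down sketch of the logic in Python, on simulated data with invented magnitudes. It is not Angrist and Krueger’s actual specification (they use quarter-of-birth instruments in a two-stage least squares framework); it just shows why an instrument helps when an unobserved confounder is present.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Simulated world: unobserved "ability" raises both schooling and wages,
# so a naive regression of wages on schooling is biased upward.
ability = rng.normal(size=n)
born_q1 = rng.integers(0, 2, size=n)  # instrument: born in the first quarter
# Being born early in the year lets students leave school sooner
# (invented magnitudes, for illustration only).
schooling = 12 + 0.3 * ability - 0.5 * born_q1 + rng.normal(size=n)
log_wage = 0.09 * schooling + 0.5 * ability + rng.normal(size=n)

# Naive OLS slope of log wage on schooling: contaminated by ability.
cov = np.cov(schooling, log_wage)
ols = cov[0, 1] / cov[0, 0]

# Wald/IV estimate: the instrument's effect on wages divided by its effect
# on schooling. Quarter of birth shifts schooling but, by assumption, has
# no direct effect on wages.
iv = (log_wage[born_q1 == 1].mean() - log_wage[born_q1 == 0].mean()) / \
     (schooling[born_q1 == 1].mean() - schooling[born_q1 == 0].mean())

print(f"naive OLS: {ols:.3f}   IV estimate: {iv:.3f}   true effect: 0.09")
```

The naive regression overstates the effect because “ability” raises both schooling and wages; the instrument, which shifts schooling but has no direct path to wages, recovers something close to the true 0.09.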

For example, many government programs have an eligibility cutoff, and one can reasonably believe that those who are just barely on one side of the cutoff are pretty much the same as those who are just barely on the other side, with the cutoff itself randomly dividing these two groups. Thus, comparing those barely included with those barely excluded can allow for a causal estimate of the effect of the program.
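
The comparison itself is almost embarrassingly simple to code. Here’s a hypothetical sketch of the cutoff logic (the program, the cutoff, and all the numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical program: a benefit goes to everyone with a "score" below 50.
# People just below and just above the cutoff should be nearly identical,
# except that one group receives the benefit.
score = rng.uniform(0, 100, size=n)
gets_benefit = score < 50
outcome = 0.02 * score + 1.5 * gets_benefit + rng.normal(size=n)

# Compare mean outcomes within a narrow band on each side of the cutoff;
# the narrow band keeps any underlying trend in "score" from mattering much.
band = 1.0
just_below = outcome[(score >= 50 - band) & (score < 50)].mean()
just_above = outcome[(score >= 50) & (score < 50 + band)].mean()
print(f"estimated program effect: {just_below - just_above:.2f} (true: 1.5)")
```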

Other programs are based on an element of randomness: for example, in many cities if the spots in desirable charter high schools are oversubscribed, there is a lottery for who gets admitted. In Oregon a few years back, the state decided to expand Medicaid coverage but because of limited funds, the new benefit was given out by lottery. Sometimes when a new program is implemented, it is rolled out in some areas before others–and those early-adopting areas may occur more-or-less at random. When economists and other social scientists hear about randomization in a program, they start thinking about whether it might serve as a natural experiment to provide evidence about causality.

In other situations, there can be an event or policy choice that works like a natural experiment. The Nobel committee describes a prominent study of immigration done by David Card:

A unique event in the history of the US gave rise to a natural experiment, which David Card used to investigate how immigration affects the labour market. In April 1980, Fidel Castro unexpectedly allowed all Cubans who wished to leave the country to do so. Between May and September, 125,000 Cubans emigrated to the US. Many of them settled in Miami, which entailed an increase in the Miami labour force of around seven per cent. To examine how this huge influx of workers affected the labour market in Miami, David Card compared the wage and employment trends in Miami with the evolution of wages and employment in four comparison cities. Despite the enormous increase in labour supply, Card found no negative effects for Miami residents with low levels of education. Wages did not fall and unemployment did not increase relative to the other cities. This study generated large amounts of new empirical work, and we now have a better understanding of the effects of immigration. For example, follow-up studies have shown that increased immigration has a positive effect on income for many groups who were born in the country, while people who immigrated at an earlier time are negatively affected. One explanation for this is that the natives switch to jobs that require good native language skills, and where they do not have to compete with immigrants for jobs.

In effect, the Cuban boatlift of 1980 was a natural experiment, addressing the question: “What would happen if an enormous number of unskilled immigrants arrived suddenly and without much warning in a major US city?” But once you start thinking along these lines, you can consider a variety of other events as natural experiments, too.

The natural experiment examples I have mentioned here are relatively straightforward. But as with many Nobel prizes, the award is given largely because the early work spawned a vast array of follow-up work and altered how economists think about these issues. Even in these relatively clear-cut cases, detailed and multi-faceted arguments have followed about exactly what can be inferred, or not, from looking at the data in different ways. The Nobel committee also describes the “natural experiment” approach as “quasi-experimental”:

Together the work by this year’s Laureates laid the ground for the design-based approach, which has drastically changed how empirical research is conducted over the past 30 years. … Quasi-experimental variation can come from the many experiments provided by nature, administrative borders, institutional rules, and policy changes. The design-based approach features a clear statement of the assumptions used to identify the causal effect and validation of these identifying assumptions.

To put it another way, when you hear economists say that a variable is “associated” or “correlated” with another variable, they mean something quite different from when they claim to have found a causal effect. The old statement that “correlation does not equal causation” is now taken with gimlet-eyed earnestness.

This is about “This”

Michael S. Weisbach has just published The Economist’s Craft: An Introduction to Research, Publishing and Professional Development. If you or a loved one is either a young economist–defined here as a graduate student or assistant professor–or responsible for advising young economists, it’s worth a look. As the editor of an economics journal, I was of course drawn like a moth to a flame to the chapter on “Writing Prose for Academic Articles.”

Much of the chapter is given over to familiar advice that can’t be repeated too often: find prose stylists worth admiring in your field and compare your writing to theirs; avoid the passive voice; eliminate grammatical errors and sentence fragments; leave yourself time not just to write a paper, but also to reread and revise it; and so on. But one snippet that particularly caught my eye was Weisbach’s comments on the use of “this” as a crutch in academic writing. Weisbach writes:

“This.” One of my pet peeves about writing is the use of the word “this” to refer to a general idea, an argument, or virtually anything else the author has in mind but isn’t specifying. I constantly correct students and coauthors who use “this” in this manner, sometimes multiple times in the same paragraph. My view is that using “this” to refer to an idea you have just described is symptomatic of laziness. The author couldn’t quite think of what to say when describing his idea, so he says “this.” It doesn’t require any thought and the reader can usually (but not always) figure out what the author means.

Simply put, the pronoun “this” is a modifier, so do not use it as a kind of place-holder noun, no matter how many other people do. To keep it simple, make sure that whenever you use the word “this,” there is a noun after it. If you are tempted to use “this” as a noun, think about what you are actually referring to as “this” and use that word or term instead. Try to treat this rule as a hard and fast one, and if you don’t allow yourself to violate it, your writing will improve.

My quibble with Weisbach is that “this” can in certain cases be appropriate as a shortcut pronoun in common speech. Consider “This is your pilot speaking,” which more literally means “This voice which just erupted out of the speakers is your pilot speaking.” Or “Is this your hat?” which more literally means, “Of the collection of hats present on this shelf, is the one to which I am pointing the one that belongs to you?”

But written academic communication doesn’t come with the same cues as a particular spoken context. The classic guide to writing, Strunk and White’s The Elements of Style, takes up the issue of “this” in its chapter on “Words and Expressions Commonly Misused”:

This. The pronoun this, referring to the complete sense of a preceding sentence or clause, can’t always carry the load and so may produce an imprecise statement.

Visiting dignitaries watched yesterday as ground was broken on the new high-energy physics laboratory with a blowout safety wall. This is the first visible evidence of the university’s plans for modernization and expansion.

Visiting dignitaries watched yesterday as ground was broken on the new high-energy physics laboratory with a blowout safety wall. The ceremony afforded the first visible evidence of the university’s plans for modernization and expansion.

In the first example, this does not immediately make clear what the first visible evidence is.

It’s common in academic writing to have one or more long sentences with a wealth of terminological and logical detail, followed by a sentence that starts “This is …” In this usage, the word “this” is pretty much devoid of specific content, except for a general waving of hands back to the big black box of all that stuff which came before. “This is …” is fine for rough first drafts. But as a subject/verb combination, it’s a vague and weak way to start a sentence. As Weisbach writes: “[T]hink about what you are actually referring to as ‘this’ and use that word or term instead.”

Blips or Not? Alternative Inflation Measures

The Consumer Price Index has a history going back more than 100 years. It has some prominent uses as a method of adjusting for inflation: for example, it’s the measure of inflation that is used to adjust Social Security payments each year, so that their buying power does not decrease over time, and it’s the measure used to adjust the interest rate paid on Treasury Inflation-Protected Securities, a particular kind of debt issued by the US Treasury that guarantees you a return above the level of inflation.

But the CPI is not the only measure of inflation. For example, the Federal Reserve focuses instead on inflation as measured by the Personal Consumption Expenditures price index. About 70% of the price data in the PCE index is drawn from the CPI data, but the PCE index covers a broader range of prices and also is constructed in a way that allows for a more plausible degree of substitution between goods when prices shift. But the ultimate difference between these measures isn’t large. This figure shows the CPI measure of inflation (percent change since the previous year) in blue and the PCE measure in red. As you can see, sometimes the blue CPI line runs a little higher, but you wouldn’t draw fundamentally different conclusions about inflation from looking at either measure.
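
The substitution point is easiest to see with a toy example. Here’s a sketch with invented numbers: when the price of one good jumps, consumers shift toward the cheaper alternative, and a fixed-basket index in the spirit of the CPI registers more inflation than a chained-style index that lets the basket adjust.

```python
# Invented numbers only: two goods, with consumers shifting toward
# the good whose price rose less (substitution).
p0 = {"beef": 5.00, "chicken": 3.00}   # base-period prices
p1 = {"beef": 6.50, "chicken": 3.15}   # later prices: beef up 30%, chicken up 5%
q0 = {"beef": 10, "chicken": 10}       # base-period quantities
q1 = {"beef": 6, "chicken": 14}        # later quantities after substitution

# Fixed-basket (Laspeyres) index, CPI-style: hold base quantities fixed.
laspeyres = sum(p1[g] * q0[g] for g in p0) / sum(p0[g] * q0[g] for g in p0)

# Paasche index: use the later quantities instead.
paasche = sum(p1[g] * q1[g] for g in p0) / sum(p0[g] * q1[g] for g in p0)

# Fisher index (geometric mean), closer in spirit to a chained index.
fisher = (laspeyres * paasche) ** 0.5

print(f"Laspeyres: {laspeyres:.3f}  Paasche: {paasche:.3f}  Fisher: {fisher:.3f}")
```

In this toy case the fixed basket shows about 20.6 percent inflation, while the Fisher index, which splits the difference between the old and new baskets, shows about 18 percent.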

Both the CPI and PCE measures show the recent spike in inflation at the far right. But here a problem surfaces with both of these broad measures. The concept of “inflation” is meant to describe a broad-based rise in prices across the board. Thus, in looking at whether inflation is likely to be a lasting phenomenon or a short-term blip, a standard approach is to take out prices for energy and for food. Basically, the Fed and other central banks don’t want to treat a blip in, say, energy prices as if it were a shift in inflation that requires a policy response. Here’s a figure showing the same measure of the CPI as in the previous graph, but this time the second line is the “core” CPI, which excludes food and energy prices.

As you can see, the lines are often quite similar, especially in the 1960s and 1970s. But in more recent decades, the blue overall CPI line fluctuates a lot more than the red line that leaves out energy and food prices. Thus, looking at core CPI meant that the Fed wasn’t deciding that there was a rising danger of inflation in the late 1990s or the early 2000s, nor was it panicking about deflation when the overall blue CPI line dipped a couple of times in the last decade. Instead, the focus was on the smoother red line–which, again, surges up in the most recent data.

But if the goal is to avoid overreacting to narrow price increases that come and go, and instead to focus on an idea of “core” inflation, is leaving out food and energy prices the best way to do it? The October 2021 issue of the World Economic Outlook from the International Monetary Fund has a chapter on “Inflation Scares,” which discusses the question of when and how short-run inflation blips are likely to become lasting or growing inflation. The report notes (citations omitted):

A common measure of core personal consumption expenditure inflation that excludes food and energy prices has recently spiked even higher than headline inflation. But simply removing food and energy prices is not the best way to measure core inflation: transitory movements can arise in different industries. These concerns have led to core measurement based on median inflation (the price change at the 50th percentile of all prices each month) or on trimmed mean inflation (stripping out a fixed share of price changes). Based on median or trimmed mean inflation, recent developments are less alarming. This difference reflects the large sectoral shocks to industries other than food and energy, which caused the traditional measure to rise sharply but are filtered out of median or trimmed mean inflation. For example, the April 2021 inflation spike reflected the prices of light trucks, hotel rooms, air transportation, spectator sports, and car rentals, which more than doubled at a monthly annualized rate, while median inflation was only 2.8 percent …

Here’s a figure again showing the standard measure of CPI for comparison, but in this case it includes the “median CPI” and the “trimmed mean CPI” as calculated by the Federal Reserve Bank of Cleveland. The median looks at the 50th percentile of price changes. The trimmed mean drops the top 8% and bottom 8% of all price changes, and looks at what’s left. Both methods create a measure of “core” inflation that reduces the influence of outliers, whether they are food and energy or something else.
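
The arithmetic behind these alternative measures is simple enough to show in a few lines. Here’s a sketch with invented numbers; the actual Cleveland Fed measures weight each spending category by its expenditure share, so this unweighted version only illustrates the idea.

```python
import numpy as np

# Invented numbers: monthly annualized price changes (percent) for ten
# spending categories, including a few extreme outliers.
price_changes = np.array([1.8, 2.2, 2.5, 2.9, 3.1, 2.0, 2.6, 45.0, 80.0, -12.0])

headline = price_changes.mean()    # simple average (the real CPI weights
                                   # categories by expenditure share)
median = np.median(price_changes)  # the "median CPI" idea

def trimmed_mean(x, trim_share):
    """Drop the top and bottom trim_share of observations, average the rest."""
    x = np.sort(x)
    k = int(round(len(x) * trim_share))
    return x[k:len(x) - k].mean()

# The Cleveland Fed trims 8% from each tail by expenditure weight; with
# only ten unweighted categories, a 20% trim makes the same point.
trimmed = trimmed_mean(price_changes, 0.20)

print(f"headline: {headline:.1f}%  median: {median:.1f}%  trimmed: {trimmed:.1f}%")
```

A few extreme categories drag the simple average far above what the typical price is doing, while the median and the trimmed mean shrug them off.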

In the figure, the rise in median or trimmed mean CPI clearly looks less concerning, at least so far. It suggests that the rise in overall CPI inflation is being driven by outliers that are not food and energy. The IMF report puts it this way:

Which of these core measures is more relevant for understanding the current situation? Historical data suggest that it is median or trimmed mean inflation. … Trimming more extreme price movements increases the stability of the underlying inflation measure and strengthens its relationship with macroeconomic conditions. Inflation excluding food and energy has been 70 percent more volatile than median inflation and has had a much weaker relationship with unemployment. The COVID-19 crisis has strengthened the case for median or trimmed mean inflation.

Of course, when it comes to household purchasing we all need to deal with actual inflation, not core inflation. And of course, future events will determine whether the current CPI inflation is a blip or the start of a bigger pattern of broad-based inflation. But the evidence from median and trimmed-mean inflation tends to support the belief that so far, it’s still more likely to be a blip.

Cryptocurrencies: Update and Overview

The most recent Global Financial Stability Report from the IMF includes a chapter on “The Crypto Ecosystem and Financial Stability Challenges” (October 2021). It seems clear that crypto is growing, but less clear (at least to me) what, if anything, needs to be done about it.

The size and growth rate of crypto assets do demand some attention. As the figure shows, the total market value of these assets was under $500 billion in late 2020, but has been as high as $2.5 trillion in 2021. The figure also shows that Bitcoin (in red) continues to have an outsized if diminishing share of the crypto market.

The conception of Bitcoin as a way of having anonymous but trustworthy transactions was brilliant, but the real-world shortcomings of Bitcoin as an alternative medium of exchange have become apparent. Bitcoin is a slow and costly method of making transactions, and thus unsuitable for casual everyday use. Instead, the primary users of Bitcoin have been those for whom anonymity is especially valuable–often because they are skirting or breaking the law–and those who want to speculate on the rising and falling value of the asset.

These weaknesses of Bitcoin have pushed crypto in two directions: trying to find ways in which transactions made with crypto can be done more cheaply and quickly, and trying to establish forms of crypto with a fixed value that doesn’t fluctuate much. These goals are in some ways intertwined: when agreeing to make or accept a payment in a cryptocurrency, many users would like to have a sense that the value of the cryptocurrency itself is stable.

Thus, one innovation getting a lot of attention is “stablecoins,” a form of cryptocurrency backed by conventional and usually US-dollar-based financial assets. Here’s a figure showing some leading stablecoins and the financial assets behind them.

The prominence of stablecoins raises some obvious questions. Are these functionally the same as money market mutual funds? If you are going to rely on stablecoins for transactions, and stablecoins are based on conventional US-dollar financial assets, then why not just rely on the US dollar in the first place? The IMF points out that if an economy decides to make widespread use of the main existing stablecoins–perhaps as a way of reducing the transactions costs of moving payments across national borders–that country is in effect choosing to “dollarize” that part of its money and its economy.

The most prominent alternative to Bitcoin that is seeking to make crypto more effective for transactions seems to be Ether, operated through the Ethereum marketplace. The IMF notes:

Bitcoin remains the dominant crypto asset, but its market share has declined sharply in 2021 from more than 70 percent to less than 45 percent. Market interest has grown for newer blockchains that use smart contracts and aim to solve the challenges of earlier blockchains by introducing features to ensure scalability, interoperability, and sustainability. The most prominent is Ether, which surpassed Bitcoin trading volumes earlier in 2021 …

The other growth area in crypto in the last year or so is what the IMF calls “decentralized finance,” or DeFi for short (footnotes and references to figures omitted):

The size of DeFi grew from $15 billion at the end of 2020 to about $110 billion as of September 2021 largely due to the rapid growth of (1) decentralized exchanges that allow users to trade crypto assets without an intermediary and (2) credit platforms that match borrowers and lenders without the need for a credit risk evaluation of the customer. These services operate directly on blockchains (usually) without customer identification requirements. Most of DeFi is built on the Ethereum blockchain and uses Ethereum-based tokens, including stablecoins. DeFi is also one of the main drivers of the rapid growth of stablecoins and warrants close attention. Chainalysis (2021b) highlights that DeFi users for now are primarily institutional players from advanced economies, whereas adoption among retail users and emerging market and developing economies in general is lagging.

The crypto world has just experienced an enormous shock: China’s outright ban on participating in cryptocurrency markets, announced the day before the IMF discussion was released and thus not covered in the report. A major proportion of crypto activity had been based in China. The IMF writes that “[f]inancial stability risks are not yet systemic” for crypto markets. But of course, as an international bureaucracy, the IMF also bravely offers a clarion call for more data and sensible rules: “Policymakers should implement global standards for crypto assets and enhance their ability to monitor the crypto ecosystem by addressing data gaps. As the role of stablecoins grows, regulations should correspond to the risks they pose and the economic functions they perform.”

For myself, I’ve been intrigued for years by what is happening in the world of cryptocurrencies and, more generally, how non-currency uses of blockchain could evolve. But it’s fine with me if the players in those markets continue to experiment in a substantially unregulated environment for a while longer.

Payoffs from Community College: How Choice of Major Matters

Community colleges serve many functions: a terminal two-year degree leading to a better job, a stepping-stone to a four-year college degree, a chance to brush up on needed skills while not necessarily completing a degree, and a chance to pursue interests that enhance one’s life. Here, I focus on just one of these purposes: how does the choice of major in a two-year community college degree affect future earnings?

It turns out that the earnings payoff is mostly determined by the choice of major. Cody Christensen and Lesley J. Turner look at “Student Outcomes at Community Colleges: What Factors Explain Variation in Loan Repayment and Earnings?” (September 2021, Hutchins Center on Fiscal and Monetary Policy at Brookings). The horizontal axis of the graph measures the “net earnings premium” (NEP): “Generally speaking, a program’s NEP measures the extent to which former students’ earnings gains are large enough to cover the direct and indirect costs of attending the program.”
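
To make the NEP concept concrete, here’s a back-of-the-envelope sketch. Every number in it is invented for illustration; none of it comes from Christensen and Turner’s data.

```python
# Invented numbers: a back-of-the-envelope version of the "net earnings
# premium" idea, comparing earnings gains to the costs of attending.
counterfactual_earnings = 28_000   # typical annual earnings without the degree
post_program_earnings = 36_000     # annual earnings after completing the program

tuition_and_fees = 8_000           # direct cost over two years
foregone_earnings = 2 * 10_000     # indirect cost: earnings given up while
                                   # enrolled (assuming part-time work continues)

annual_gain = post_program_earnings - counterfactual_earnings
total_cost = tuition_and_fees + foregone_earnings

print(f"annual earnings gain: ${annual_gain:,}")
print(f"total cost of attending: ${total_cost:,}")
print(f"years to recoup the cost: {total_cost / annual_gain:.1f}")
```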

As the figure shows, most majors have a positive net earnings premium, but not all. The biggest gains seem to be for majors that teach technical skills. Of course, these figures represent average gains across majors, when in fact each major will produce a range of outcomes–some higher and some lower than the average.

When looking at the economic payoff of a community college degree, a common question is whether the differences shown here reflect choice of major, or whether they reflect the socioeconomic patterns of students choosing those majors. The answer seems to be that while socioeconomic factors do matter, choice of major is the big difference-maker. The authors write:

[W]e examine the program-, institution-, and state-level correlates of community college student outcomes, using program-level data on post-college earnings and loan repayment for more than 1,200 community colleges. We find that student demographics are correlated with net earnings and loan repayment, largely because programs that enroll more underrepresented minority and female students have worse outcomes. Student demographics explain a relatively small share of the variation in earnings and repayment. In contrast, field of study explains most of the variation in net earnings across programs and much of the variation in loan repayment. Moreover, after controlling for field of study, we find a positive association between the share of students in a program who are underrepresented minorities and net earnings, suggesting that programs that enroll more Black and Hispanic students are more likely to be in fields that lead to smaller earnings gains. Finally, we show that institutions that enroll the largest shares of minority students tend to offer fewer programs with high earning premia and more seats in programs that have lower net earnings, on average.

I’m a fan of expanding the governmental commitment to community colleges. A couple of years ago, I mentioned a more fleshed-out proposal for supporting community colleges that would cost $22 billion annually–which is more-or-less a rounding error relative to the spending totals being proposed in Congress these days. But this evidence emphasizes that it’s not just about getting more students through any two-year degree; it’s about being clear with students about the typical outcomes for different majors, and about increasing the support for students who might prefer a major more likely to raise future wages.