In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
The new software tools that can produce a reasonable first draft of many essays pose a sort of existential question for students: Do you care about learning and getting smarter about ideas, even if it takes more work and perhaps even risks some outright failures? Or do you just want to turn in the assignments and get the credits?
Here’s a way to rephrase that question: If your primary skill as a student is asking a software program for answers, what will be your likely value-added in the workforce (or as a friend or romantic partner)? If you use the new programs occasionally as one more source of brainstorming and inspiration, along with ideas that bubble up from classrooms, readings, and discussion groups, and then your primary skill is building on those starting points with your own ideas and presentation skills, what will be your likely value-added in the workforce (or as a friend or romantic partner)?
Everyone should write. You know why? Because everyone is full of ideas they’re not aware of.
You don’t talk about these ideas, even in your own head, because you’ve never put them into words. They’re gut feelings. Intuitions. You use them a dozen times a day. But you’d shrug your shoulders if someone asked why. How you react to career risk. Why you invest the way you do. Why you like some people and question others. We’re all brimming with opinions on these topics that we may never discuss, even with ourselves. Like phantom intelligence.
Intuition is strong enough to put these ideas into practice. But intuition isn’t a tool; it’s a safety net at best, and is more often the fuel of biased decisions. Turning gut feelings into tools means understanding their origin, limits, and how they interact with other ideas. Which requires turning them into words.
And writing is the best way to do that.
Writing crystallizes ideas in ways thinking on its own will never accomplish.
The reason is simple: It’s hard to focus on a topic in your head for more than a few seconds without getting distracted by another thought, and distractions erase whatever you attempted to think about. But words on paper stick. They aren’t washed away by the agitator in your head who won’t shut up about the tone of an email someone just sent you. You might be able to hold focus just briefly in your head, but a sentence on paper has all the patience in the world, waiting for you to return whenever you’re ready. It’s hard to overemphasize how important this is. Putting ideas on paper is the best way to organize them in one place, and getting everything in one place is essential to understanding ideas as more than the gut reactions they often hide as. …
Sometimes writing is encouraging. You realize you understand a topic better than you thought. The process flushes out all kinds of other ideas you never knew you had hiding upstairs. Now you can apply those insights elsewhere.
Other times it’s painful. Forcing the logic of your thoughts into words can uncover the madness of your ideas. The holes. The flaws. The biases. … Things the mind tends to gloss over the pen tends to highlight. …
A common question people ask professional writers is, “Where do you get your ideas?” A common answer is, “From writing.” Writers don’t know exactly what they’ll write about until they start writing, because the process crystallizes the fuzzy ideas we all have floating around. This chicken-and-egg problem is probably why writing is intimidating for some people. They don’t think they can write because – in their head, at this moment – they don’t know what they’d write about. But hardly anyone does.
So, write. A journal. A business manifesto. An investment plan. You don’t have to publish it. It’s the process that matters. You’ll uncover so much you never knew.
I’ve tried to make this point in other posts over the years: for example, see my post “I Don’t Know So Well What I Think Until I See What I Say” (August 29, 2018). The quotation is from Flannery O’Connor, but I also offer versions of the quotation from André Gide, William Makepeace Thackeray, and Montaigne. Of course, some students will hear the same advice more clearly if it is delivered by a modern venture capitalist.
In modern economics, “iceberg costs” are an assumption built into certain models of international trade. The metaphor is that if you were actually trading an iceberg, it would melt along the way, and the melting would be greater, the farther it was carried across the ocean. Thus, an item which needs to be shipped a longer distance can be modelled as having “iceberg transportation costs”–that is, the value of the item to the ultimate user diminishes with greater distance, like an iceberg melting.
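The assumption can be put in code. The sketch below uses the exponential functional form that is standard in trade models, but the melt rate and distances are hypothetical numbers chosen only for illustration, not parameters from any of the papers discussed here:

```python
# Minimal sketch of the "iceberg" trade-cost assumption: to deliver a
# good, some of the good itself "melts" in transit, and the surviving
# fraction shrinks with distance. Melt rate is purely illustrative.

import math

def delivered_fraction(distance_km: float, melt_rate: float = 0.0002) -> float:
    """Fraction of the shipped good that survives transit, assuming
    exponential melt with distance (a common functional form)."""
    return math.exp(-melt_rate * distance_km)

def units_to_ship(units_demanded: float, distance_km: float) -> float:
    """Units that must leave the origin so that `units_demanded` arrive."""
    return units_demanded / delivered_fraction(distance_km)

# Example: a buyer 5,000 km away.
surviving = delivered_fraction(5000)   # ~0.368 of each unit arrives
shipped = units_to_ship(100, 5000)     # ~271.8 units must be sent
```

Notice that in a model of this kind, nothing is spent on ships, crews, or loading: the only “transport cost” is the portion of the cargo that never arrives, which is precisely the simplification the historical ice trade puts to the test.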
The metaphor of iceberg transportation costs is a venerable one in economics, going back to a prominent 1954 paper by Paul Samuelson (“The transfer problem and transport costs, ii: Analysis of effects of trade impediments,” Economic Journal, 64 (254) pp. 264–89). But what about trading of actual ice?
At its peak in the nineteenth century, an estimated 90,000 people and 25,000 horses were involved in the natural ice trade in the [United] States. In fact, such was the demand for American ice in London at one point, Lake Oppegård in Norway was rechristened “Wenham Lake” (after a lake in a Massachusetts town) to compete with American ice imports in England. By 1856, American ice was shipped to all four corners of the world, including South America, the Caribbean, Southeast Asia and Australia, the Persian Gulf, and its biggest market–India. The United States had three epicenters for ice: New York, Boston, and Chicago. …
Indeed the ice industry radically changed dietary habits, writes Stott. Thanks to it, salmon and lobster from Boston made its way to Calcutta–where, by 1833, “the first ice cream ever eaten in India was made using Massachusetts ice,” note Kistler, Carter, and Hinchey. Eventually, writes Stott, this is what would lead to the downfall of the ice harvesting industry, taking with it the need for ice harvesters: “[The] demand for ice stimulated the development of reliable means of artificial refrigeration. At the same time, the immediate success of an ice trade based on abundant ice ponds and efficient transportation, postponed the rapid development of artificial ice-making.”
But of course, the key question for readers of this blog is whether the actual melting of icebergs during the 19th century is an accurate measure of how distance affects trading costs in the modern economy. Is the metaphor quantitatively accurate? Maarten Bosker and Eltjo Buringh take a detailed look at “Ice(berg) Transport Costs” (Economic Journal, July 2020, pp. 1262-1287) and (perhaps unsurprisingly) find that it is not. They write:
Our data primarily comes from the records of the Tudor Ice Company, Boston’s leading ice exporting company that, during the nineteenth century, shipped over one million tons of natural ice all over the world on wooden sailing ships. Ice(berg) transport costs in practice consisted of both a true ‘iceberg’ component: melt in transit, as well as the standard transport cost components (freight, landing, loading and insurance costs).
It turns out that the costs of freight, landing, and loading were several orders of magnitude bigger than the melt costs. In addition, melt costs themselves were smaller when the ice being shipped was bigger: that is, there were economies of scale in packing larger blocks of ice with sawdust or wood shavings (to reduce how much they would melt), and larger blocks kept themselves colder for longer.
As it turns out, modern economics research into the determinants of trade has been experimenting for some years with variations on the classic assumption of iceberg trading costs. Along with thinking about distance, the research has looked at other costs of trade that don’t vary with distance, the possibility of economies of scale in trade, the importance of “backhaul” and what a ship (or plane or truck) can do after carrying freight in one direction, the interaction between the product being shipped and the technology used for shipping, and so on. Ironically, looking at the actual practical costs of trading ice is quite likely a more useful real-world model than the stylized model in which trading costs can be proxied just by melting.
Near the start of their article, they provide a quick overview of the historical ice trade as well:
But, before the widespread adoption of artificial refrigeration and ice making in the early-twentieth century, natural ice was a heavily traded natural resource in almost all parts of the world. It was used for cooling purposes and the preservation and preparation of food, both by households and businesses. Ice houses, storing large quantities of ice, dotted the North American landscape, and many (wealthy) people’s homes had a private ice cellar. To give an idea of the size of the trade, the 20 largest US cities consumed nearly 4,000,000 tons of ice in 1879 (Hall, 1880). New York alone consumed 500,000 tons per year (Encyclopedia Britannica, 1881).
For most of history, the ice trade was very localised, with ice harvested from nearby frozen lakes, rivers or mountains. This changed in 1806 when Frederic Tudor shipped 130 tons of natural ice from Boston to the Caribbean island of Martinique. After further refining the process of insulating the ice during the voyage and at the destination, shipments to other Caribbean destinations and the main cities in the southern US quickly followed. In 1833 Tudor sent an experimental shipment to Calcutta, and upon its success expanded this long-distance ice trade to Brazil, Indonesia, China, the Philippines, Australia and even (around Cape Horn) Peru and San Francisco. Drawn by the extreme profitability of the trade, other companies soon entered the market … The trade’s heyday was around 1860.
Just something to think about in August, as I walk over to the automatic ice dispenser built into the door of the refrigerator in our kitchen.
The painter Francis Bacon (1909-1992) did a series of interviews during his career with David Sylvester. Some of them are presented in Interviews with Francis Bacon: 1962-1979, by David Sylvester, published in 1980. I’m both interested in the art and also pretty clueless, but when I’m interested, I don’t mind being clueless. (In fact, life story pretty much in a nutshell, there.) Beyond the comments focused narrowly on painting and art, Sylvester also draws out insights on the creative process, the role of inspiration, the role of risk-taking, how trying to capture a certain subject distorts the subject, how trying to take something good and push it further can end up producing something less good, and other issues that seem to me more broadly relevant not just to art but to academic work as well. This exchange about criticism is at the end of Interview 2 (pp. 66-67 in my edition). FB is Francis Bacon; DS is David Sylvester.
FB: I’ve always thought of friendship as where two people really tear one another apart and perhaps in that way learn something from each other.
DS: Have you ever got anything from what’s called destructive criticism made by critics?
FB: I think that destructive criticism, especially by other artists, is certainly the most helpful criticism. Even if, when you analyze it, you may feel that it’s wrong, at least you analyze it and think about it. When people praise you, well, it’s very pleasant to be praised, but it doesn’t actually help you.
DS: Do you find you can bring yourself to make destructive criticism of your friends’ work?
FB: Unfortunately, with most of them I can’t if I want to keep them as friends.
DS: Do you find you can criticize their personalities and keep them as friends?
FB: It’s easier, because people are less vain of their personalities than they are of their work. They feel in an odd way, I think, that they’re not irrevocably committed to their personality, that they can work on it and change it, whereas the work that has gone out–nothing can be done about it. But I’ve always hoped to find another painter I could really talk to–somebody whose qualities and sensibility I’d really believe in–who really tore my things to bits and whose judgement I could actually believe in. I envy very much, for instance, going to another art, I envy very much the situation when Eliot and Pound and Yeats were all working together. And in fact Pound made a kind of caesarean operation on The Waste Land; he also had a very strong influence on Yeats–although both of them may have been very much better poets than Pound. I think it would be marvellous to have somebody who would say to you, “Do this, do that, don’t do this, don’t do that!” and give you the reasons. I think it would be very helpful.
DS: You feel you really could use that kind of help?
FB: I could. Very much. Yes, I long for people to tell me what to do, to tell me where I go wrong.
Here are some thoughts:
1) Generalizing wildly from my own experience, most of us are not looking for a “friendship” where two people “really tear one another apart.” However, it does seem to me as if close friends often bring temperaments that are in some ineluctable way complementary, and friendship arises in that interaction.
2) Can economists also be friends and determined opponents and critics? It’s possible! Perhaps the most famous example in the economics literature is that of David Ricardo and Thomas Malthus, who together with their wives were among the most devoted of friends. However, they disagreed substantially about economic issues, and wrote back and forth for a decade with detailed criticisms and refutations of each other’s work. For example, two weeks before Ricardo died, he wrote one more letter to Malthus about their disagreements on the theory of value, and concluded:
And now, my dear Malthus, I have done. Like other disputants, after much discussion we each retain our own opinions. These discussions, however, never influence our friendship; I could not like you more than I do if you agreed in opinion with me. Pray give Mrs. Ricardo’s and my kind regards to Mrs. Malthus. Yours truly …
At Ricardo’s funeral, Malthus reportedly said:
I never loved anybody out of my own family so much. Our interchange of opinions was so unreserved, and the object after which we were both enquiring was so entirely the truth and nothing else, that I cannot but think we sooner or later must have agreed.
That combination of friendship and critique is of course rare. But Bacon is not wrong to yearn for it.
3) In my own career, working as Managing Editor of the Journal of Economic Perspectives since 1987, I have been providing in-depth comments and hands-on editing to literally hundreds of economists over the years. When I took the job, one of my concerns was that I would have to wage a series of pitched battles with economists about standards of exposition. But over the years, fewer than a handful of the JEP authors have been bitter and resentful about my editing and comments–at least to my face. To the contrary, most of them have been quite pleased at getting editorial guidance. In Bacon’s words: “I think it would be marvellous to have somebody who would say to you, `Do this, do that, don’t do this, don’t do that!’ and give you the reasons. I think it would be very helpful.” A few times over the years I have had an author respond that after working through my comments and editing, the author understood their own work better than they had previously–which as an editor is the highest of compliments.
Of course, authors don’t always agree with me, but as long as I feel that they have considered my reactions, that’s of course fine. I’m not challenging the authors in the sense of telling them that they are “wrong,” an accusation that carries heavy weight in academic work. Instead, my editorial goals are clarity, persuasiveness, and a degree of brevity. When it comes to criticism, tone seems especially important. Criticisms phrased in the form “I don’t understand” or “I don’t follow” or “the structure might work better this way” or “how do you respond to the common criticism that …” are not only more politic, but also usually more accurate, than “You’re wrong.”
My tradition on this blog is to take a break (mostly!) from current events in the later part of August. Instead, I pre-schedule daily posts based on things I read during the year about three of my preoccupations: economics, academia, and writing.
_____________
One of the most famous Marxist statements of how an economy should work is: “From each according to his ability; To each according to his needs.” But Marx’s comment (from his posthumously published “Critique of the Gotha Programme”) was not original to him. There are earlier versions in the writings of Étienne Cabet and Louis Blanc.
Luc Bovens and Adrien Lutz trace the origins of the Marxist slogan, and compare it with two other common socialist slogans in “`From Each according to Ability; To Each according to Needs’: Origin, Meaning, and Development of Socialist Slogans” (History of Political Economy, 2019, 51:2, pp. 237-257). At the beginning of their essay, they write:
There are three slogans in the history of socialism that are very close in wording, namely, the famous slogan of Étienne Cabet, Louis Blanc, and Karl Marx: From each according to his ability; To each according to his needs; the earlier Henri de Saint-Simon and Constantin Pecqueur slogan: To each according to his ability; To each according to his works; and the later slogan in the Soviet Constitution of 1936, referred to as the Stalin constitution: From each according to his ability; To each according to his work.
As Bovens and Lutz point out, the elements of all of these slogans are deeply Christian and can be found in Biblical texts, and I will reproduce two of their tables here:
Bovens and Lutz offer an in-depth discussion of what the authors of these slogans meant, which I will not try to summarize here. Instead, I’ll offer a few obvious questions.
1) Ability is both given and developed. Thus, if what is given is to be derived from ability, some obvious questions for an economist would include: Who determines ability? Who invests in developing ability? Who decides whether a sufficient amount has been given? In some socialist countries, children are identified for potential athletic ability at young ages and sent to training camps to develop that ability. Should this be the broader social model for determining what is meant by “from each, according to ability”? In a number of the Biblical references, those with greater ability were given additional resources, in the belief that using those resources wisely was in the broader social interest.
2) The idea that people should receive according to their need is obviously different than the idea that they should receive according to their work. There is an obvious question for economists of how the determination of “need” will be made.
3) Receiving according to “works” or “work” raises obvious questions. The authors argue that “works” referred to everything a person invested in the creation of the social good, including ability, capital, and labor. The later reference “according to work” that went from the Bible to the Soviet Constitution refers to the effort and efficiency of labor alone. A final obvious question is how the value of “work” or “works” will be determined.
For contrast to these slogans, consider the alternative proposed by the libertarian philosopher Robert Nozick in his classic 1974 work Anarchy, State, and Utopia. Nozick writes:
To think that the task of a theory of distributive justice is to fill in the blank in “to each according to his _______” is to be predisposed to search for a pattern; and the separate treatment of “from each according to his _______” treats production and distribution as two separate issues.
However, Nozick suggests that in the real world, looking for a pattern to fill in the first blank is a mistake. He writes:
The set of holdings that results when some persons receive their marginal products, others win at gambling, others receive a share of their mate’s income, others receive gifts from foundations, others receive interest on loans, others receive gifts from admirers, others receive return on investment, others make for themselves much of what they have, others find things, and so on, will not be patterned.
Nozick argues that we can understand how these outcomes arise, as a result of initial distributions of assets and then choices that are made, but that trying to categorize the result as a pattern based on “needs” or “works” or “work” or “ability” is not a useful exercise. He proposes a usefully provocative alternative slogan:
[W]e might say:
From each according to what he chooses to do, to each according to what he makes of himself (perhaps with the contracted aid of others) and what others choose to do for him and choose to give him of what they’ve been given previously (under this maxim) and haven’t yet expended or transferred.
This, the discerning reader will have noticed, has its defects as a slogan. So as a summary and great simplification (and not as a maxim with any independent meaning) we have:
From each as they choose, to each as they are chosen.
I don’t find Nozick’s formulation fully satisfactory, but I suppose this just means that I agree with the point that no single pattern will fully capture distributive justice. Of course, Nozick’s slogan lacks the sense of duty and social obligation, and thus an element of the moral commandment and prophetic force, embodied in the previous slogans. But it does highlight the reality that when it comes to terms like “ability,” “need,” and “work,” there is not some overarching being, whether God or government, to determine such values, resolve any potential conflicts between them, and announce what should be.
Instead, Nozick is emphasizing that there is value in people being able to make choices, and people being free to respond to the choices of others. To put it another way, this view of social welfare isn’t based on just looking at outcomes like the distribution of work and of consumption. Instead, it’s based on looking at the range of choices that people have available to them. Although Nozick is not using his slogan in this way, I would argue that social welfare is increased when there is social and government support for people to be able to make a wider range of choices, including support in gaining education and skills, and in helping people to pick themselves back up when life has gone badly. But there is also a social discipline here: your choices will be rewarded, or not, to the extent that you provide goods and services that are, in turn, chosen by others.
One of the ways that the labor and housing markets are supposed to work involves geographic mobility. Jobs are less available in one place than another? At least some workers should move for the new opportunities, which will help labor markets in both locations. Housing is cheaper in some areas than others? Then at least some people should be tempted to relocate. But Americans are moving less. The US Census Bureau has just published a series of graphs and data showing some trends (“CPS Historical Geographic Mobility/Migration Graphs,” August 10, 2023).
Here’s the overall picture. The “mover rate” on the right-hand axis looks at the share of households living in a different place than one year earlier. It used to be up around 20%; now it’s under 10%. (The breaks in the blue lines happen because the surveys changed such that comparable data isn’t available for those years.) Notice that the decline in the mover rate is a long-run trend over four decades. It is axiomatic that you cannot plausibly explain a long-run trend with a one-time event, like the Great Recession of 2007-9 or the more recent pandemic. Something else is going on.
One standard explanation that could apply to at least some periods is that people feel “locked in” by owning a house, and are unwilling to move for that reason. But the rate of moving for people owning homes (green line) has declined only a little. Most of the shift is from renters (orange line) moving less.
Are there clues in how far people move? This graph only shows moves that involve changing residence from one county to another. The number of short-range moves hasn’t changed much, but the number of longer-range moves has declined since the early 2000s.
By most measures, internal migration in the United States is at a 30-year low. Migration rates have fallen for most distances, demographic and socioeconomic groups, and geographic areas. The widespread nature of the decrease suggests that the drop in mobility is not related to demographics, income, employment, labor force participation, or homeownership. Moreover, three consecutive decades of declining migration rates is historically unprecedented in the available data series. The downward trend appears to have begun around the 1980s, pointing to explanations that should be relevant to the entire period, rather than specific to the current recession and recovery—that is, the decline in migration is not a particular feature of the past five years, but has been relatively steady since the 1980s.
To me, the most plausible explanation is related to the run-up in housing prices in a number of major urban areas over time. Imagine someone who looks at the high-paying jobs available in, say, New York City or Los Angeles or Chicago–and I’m thinking how much relatively low-skilled jobs can pay in those cities–and thinks about moving there. The additional cost that would be paid to rent or buy housing in those areas probably offsets any income gains.
Rebecca Diamond and Enrico Moretti tackle this question more explicitly in “Where is Standard of Living the Highest: Local Prices and the Geography of Consumption” (NBER Working Paper #29533, revised January 2023, also available in an ungated version here). The authors compare what your income can actually buy in different areas. They write (from the abstract):
We uncover vast geographical differences in material standard of living for a given income level. Low income residents in the most affordable commuting zone enjoy a level of consumption that is 95% higher than that of low income residents in the most expensive commuting zone. … We find that for college graduates, there is essentially no relationship between consumption and cost of living, suggesting that college graduates living in cities with high costs of living—including the most expensive coastal cities—enjoy a standard of living on average similar to college graduates with the same observable characteristics living in cities with low cost of living—including the least expensive Rust Belt cities. By contrast, we find a significant negative relationship between consumption and cost of living for high school graduates and high school drop-outs, indicating that expensive cities offer lower standard of living than more affordable cities. The differences are quantitatively large: High school drop-outs moving from the most to the least affordable commuting zone would experience a 18.5% decline in consumption.
In other words, a big reason that moving rates are down is that the economic incentive to move can be low: in places with higher wages, the higher cost of living offsets these gains, especially for low-income workers.
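A back-of-the-envelope comparison makes this logic concrete. The wages and local price indices below are made-up numbers chosen only for illustration; they are not figures from the Diamond and Moretti paper:

```python
# Hypothetical illustration of why a higher nominal wage in an
# expensive city need not mean a higher standard of living.

def real_consumption(nominal_income: float, local_price_index: float) -> float:
    """Income deflated by the local cost of living
    (index = 1.0 at the national average)."""
    return nominal_income / local_price_index

# A low-skill worker weighing a move from an affordable city
# to an expensive coastal city (all numbers assumed):
affordable = real_consumption(38_000, 0.90)   # ~42,222 in real terms
expensive = real_consumption(52_000, 1.45)    # ~35,862 in real terms

# Despite a roughly 37% higher nominal wage, real consumption falls,
# which weakens the incentive to move.
```

The same deflation exercise, done carefully with actual local price data, is essentially what Diamond and Moretti carry out across commuting zones.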
The story starts with regulatory policy. “Between September 2021 and February 2022, the U.S. Food and Drug Administration (FDA) investigated four reports of ill children—two of whom died—after consuming formula produced by Abbott Nutrition at its plant in Sturgis, Michigan.” Abbott disputed whether the deaths were connected to its products, but it voluntarily shut down the plant and recalled some of what had been produced there. Because Abbott produces around 40% of all infant formula consumed in the US, most of it from this plant, shortages quickly appeared.
So far, this sounds like an example of the regulatory system functioning well–that is, a health hazard is detected, and production is shut down until safety is assured.
But the US infant formula industry is not very resilient. It turns out that 98% of all US formula is produced in the US. One reason is that the domestic industry is protected by tariffs of about 25% on imported infant formula. But the bigger reason is the set of rules imposed by the FDA. The authors write:
Even more restrictive than tariff barriers are the nontariff barriers that foreign‐made formula faces. Most notably, the FDA imposes strict nutritional, labeling, and other standards and requires retailers to notify the agency at least 90 days in advance of selling a new formula product—and that is after manufacturers of new brands have submitted detailed explanations of the development of the formula, the results of clinical trials and studies on the nutrients in the formula, and details on quality controls in the production facilities, as well as having undergone FDA sampling and inspections of their facilities.
Thus, just because an infant formula product is considered safe for use, say, across the European Union, it has to jump a lot of hurdles to be sold in the United States–and to jump those hurdles all over again if the product changes. EU producers of infant formula have little incentive to redo all the testing and labelling that they have done for their home market so that they can then face tariffs when trying to sell in the US market. So they don’t bother trying.
These regulations affect the US market, too. Regulatory costs are often a more-or-less fixed amount regardless of whether a company is large or small; as a result, small companies are likely to be unable to pay these costs. As a result, the US infant formula market is dominated by a few large producers: “Three large corporations—Abbott, Reckitt/Mead Johnson, and Nestlé Gerber—accounted for more than 83 percent of total market share in 2021.” When the biggest company (Abbott) shuts down production, there are not many other companies ready and able to ramp up production. The authors note that in 2021, just before the shortage hit, the FDA considered 42 new applications for infant formula. It managed to respond to 18 of those applications–without offering a clear “yes” or “no” in any of those cases.
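The scale arithmetic behind that market concentration is simple. A toy calculation, using an assumed and purely illustrative fixed compliance cost, shows how the same regulatory bill translates into very different per-unit costs for large and small producers:

```python
# Illustration (hypothetical numbers) of why a fixed regulatory cost
# weighs far more heavily on small producers than on large ones.

def regulatory_cost_per_unit(fixed_cost: float, units_sold: float) -> float:
    """Spread a one-time compliance cost over a producer's sales volume."""
    return fixed_cost / units_sold

FIXED_COMPLIANCE_COST = 5_000_000  # assumed one-time approval cost, not an FDA figure

large_producer = regulatory_cost_per_unit(FIXED_COMPLIANCE_COST, 50_000_000)  # $0.10/unit
small_producer = regulatory_cost_per_unit(FIXED_COMPLIANCE_COST, 500_000)     # $10.00/unit
```

With a 100-fold difference in volume, the per-unit burden differs 100-fold as well, which is one mechanism by which fixed regulatory costs tilt a market toward a few large incumbents.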
Another issue is that the special Supplemental Nutrition Program for Women, Infants, and Children, commonly known as WIC, provides vouchers for low-income households to buy infant formula–but only formula produced by certain companies. Companies have to lobby to be included in the WIC list of approved brand names, which of course tends to favor larger companies as well.
During the pandemic, the government announced various policies that sounded impressive: using military planes to fly formula from Europe to the US, removing the tariffs for a limited time only, and invoking the Defense Production Act so that producers of infant formula had priority in ordering production inputs. The government announced that the FDA would consider allowing more but still limited and temporary imports of infant formula, but the FDA would continue to enforce its own version of health and safety certification. Dozens of companies from around the world applied, and the FDA approved nine of them.
The effects of these policies were predictably very small. After decades of stifling import competition and constructing a market with only a few large producers of infant formula, policymakers can’t just flick a few switches and suddenly create a new and resilient market.
The lawsuits and regulatory struggles over infant formula of course are ongoing. As best as I can tell, some bacterial contamination was found at the production site, but it is not clear that this contamination actually caused the infant deaths–or even that it contaminated the infant formula that was shipped. Abbott says it will recover all of the market share it lost in the infant formula market by the end of 2023. Meanwhile, a shortage of infant formula imposes costs of its own, especially on low-income families who were limited in the brand they could buy with WIC vouchers. One small-scale study found that when formula was unavailable “unsafe infant feeding practice, such as watering down infant formula, using expired infant formula, using homemade infant formula or using human milk from informal sharing” increased.
But as the legal issues directly involving Abbott are gradually resolved, the bigger question remains: We all want infant formula to be safe, but we also want a market that can adjust to supply shocks, and that has genuine competition across producers. US infant formula policies are gradually drifting back to what they were before the shortage of 2022: that is, the same regulations about testing and labelling; the return of tariffs and other restrictions on imports; and support for low-income families that requires them to buy certain infant formula brands. But after the shortage, some systematic rethinking of these rules seems appropriate.
The Federal Reserve has recently announced its FedNow® service, which allows banks and credit unions who have signed up to transfer money for their customers instantly, 24/7, any day of the year. The system is still being phased in. But in theory, it will become possible, for example, for someone to get a paycheck, put it in the bank immediately, and then spend it immediately–without fear that delays in moving money around in the banking system will lead to a costly overdraft fee.
What exactly is a central bank digital currency? It comes in two forms: retail, aimed at individual households and firms, and wholesale, aimed at banks and financial institutions. The report says:
A CBDC is a digital payment instrument, denominated in the national unit of account, which is a direct liability of the central bank. If the CBDC is intended for use by households and firms for everyday transactions, it is referred to as a “general purpose” or “retail” CBDC. A retail CBDC differs from existing forms of cashless payment instruments (ie credit transfers, direct debits, card payments and e-money), as it represents a direct claim on a central bank rather than the liability of a private financial institution. In contrast to a retail CBDC, a wholesale CBDC targets a different group of end users. Wholesale CBDCs are meant for use for transactions between banks, central banks and other financial institutions. So wholesale CBDCs would serve a similar role as today’s reserves or settlement balances held at central banks. However, wholesale CBDCs could allow financial institutions to access new functionalities enabled by tokenisation, such as composability and programmability.
One big difference between fast payments and central bank digital currencies is that most central banks around the world already have fast payment systems (indeed, the Fed was probably slower to get such a system in place in the US than it should have been), but central bank digital currencies are mostly still in the experimental stage. Kosse and Mattei describe the thinking of central banks on the overlap between topics based on a survey conducted in late 2022 by the Bank for International Settlements:
Over the last two decades, fast payment systems (FPS) have spread around the world. … The current availability of FPS is higher in AEs [advanced economies] (84%) than in EMDEs [emerging market and developing economies] (70%) … Owned and operated by central banks, private sector entities or a combination of these, FPS process small-value account-based transactions such that the funds are made available to the payee in real or near real time and on a 24/7 basis (or close to it). In addition to providing users with high speeds and 24/7 availability, FPS can provide value-added services, such as request-to-pay functionalities or the possibility to initiate payments using a mobile number or an e-mail address, so-called proxy identifiers or aliases, instead of a bank account number. …
Depending on their design, FPS and retail CBDCs can achieve similar objectives, such as enhancing financial inclusion and promoting faster and more efficient domestic and cross-border payments. In addition, they both enable broader innovation and enhanced competition, which can increase the availability and accessibility of cheaper payment products and services. More diversity and competition can also lead to a more resilient payments ecosystem.
The general hope of central bank digital currencies is to improve the efficiency and safety of payments. It is unclear in the context of an advanced economy like the United States how much it would actually matter, in practice, to have a payment happen as a liability of a central bank rather than a liability of a regular bank. After all, in advanced economies, private-sector banks are generally quite safe, and payments between them can be made near-instantaneous with fast payment systems–which may not be the case in all countries. (I wrote about Brazil's fast payment system here.) The report says "currently four central banks that have issued a live retail CBDC: The Bahamas, the Eastern Caribbean, Jamaica and Nigeria."
However, the BIS survey of central banks suggests that many of them see fast payment systems and some form of a central bank digital currency as complements, not substitutes. The report explains:
This is mainly because they believe that a CBDC has specific properties and may offer additional features, such as being a riskless form of digital money and allowing access to a wider set of financial institutions and the unbanked population. Also, programmability and offline payments were mentioned as features that an FPS may not provide. About 9% of central banks that see value in having both an FPS and a retail CBDC believe that this could benefit the efficiency and resilience of the payments market. Depending on the design of each, several central banks believe that an FPS could also complement a CBDC, for example when targeting different use cases or offering additional services.
At least to me, many of these goals remain uncertain in practice. In practical terms, for example, how will the "unbanked" (those without bank accounts) obtain the central bank digital currency, carry it around, and use it? I suppose this could happen through a debit card with pre-loaded amounts, but for advanced economies, that innovation already exists. The idea that digital money could be "programmable" is both intriguing and a little ominous. For example, it might be possible to track exactly how the CBDC is spent, or even to have a feature that if it is not spent within a certain time, it expires. The purpose of the programming will presumably matter.
Some distinctions seem to be emerging. About two-thirds of central banks say in the survey that they are unlikely to have a retail CBDC, and more than half say they are unlikely to have a wholesale CBDC, in the next few years. However, central banks from emerging and developing market economies are showing a greater eagerness to try such policies. The optimistic interpretation here is that in a country where financial systems are not well-developed and a large share of the population doesn't have an account at a bank, perhaps the central bank digital currency can help the population and the banks become more connected to a smoothly functioning financial system. The pessimistic interpretation is that some of these central banks are getting in over their heads.
I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Summer 2023 issue, which in the Taylor household is known as issue #145. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.
___________________
Symposium on Supply Chains
“How Far Goods Travel: Global Transport and Supply Chains from 1965–2020,” by Sharat Ganapati and Woan Foong Wong
This paper considers the evolution of global transportation usage over the past half century and its implications for supply chains. Transportation usage has more than doubled as costs decreased by a third. Participation of emerging economies in world trade and longer-distance trade between countries contribute to this usage increase, thereby encouraging longer supply chains. We discuss technological advances over this period, and their interactions with endogenous responses from transportation costs and supply chain linkages. Supply chains involving more countries and longer distances are reflective of reliable and efficient transportation, but are also more exposed to disruptions, highlighting the importance of considering the interconnectedness of transportation and supply chains in policymaking and future work. Full-Text Access | Supplementary Materials
“The Changing Firm and Country Boundaries of US Manufacturers in Global Value Chains,” by Teresa C. Fort
This paper documents how US firms organize goods production across firm and country boundaries. Most US firms that perform physical transformation tasks in-house using foreign manufacturing plants in 2007 also own US manufacturing plants; moreover, manufacturing comprises their main domestic activity. By contrast, “factoryless goods producers” outsource all physical transformation tasks to arm’s-length contractors, focusing their in-house efforts on design and marketing. This distinct firm type is missing from standard analyses of manufacturing, growing in importance, and increasingly reliant on foreign suppliers. Physical transformation “within-the-firm” thus coincides with substantial physical transformation “within-the-country,” whereas its performance “outside-the-firm” often also implies “outside-the-country.” Despite these differences, factoryless goods producers and firms with foreign and domestic manufacturing plants both employ relatively high shares of US knowledge workers. These patterns call for new models and data to capture the potential for foreign production to support domestic innovation, which US firms leverage around the world. Full-Text Access | Supplementary Materials
“Global Value Chains in Developing Countries: A Relational Perspective from Coffee and Garments,” by Laura Boudreau, Julia Cajal-Grossi and Rocco Macchiavello
There is a consensus that global value chains have aided developing countries’ growth. This essay highlights the governance complexities arising from participating in such chains, drawing from lessons we have learned conducting research in the coffee and garment supply chains. Market power of international buyers can lead to inefficiently low wages, prices, quality standards, and poor working conditions. At the same time, some degree of market power might be needed to sustain long-term supply relationships that are beneficial in a world with incomplete contracts. We discuss how buyers’ market power and long-term supply relationships interact and how these relationships at the export-gate could be leveraged to enhance sustainability in the domestic part of the chains. We hope that the lessons learned by combining detailed data and contextual knowledge in two specific chains—coffee and garments—have broader applicability to other global value chains. Full-Text Access | Supplementary Materials
Symposium on International Dimensions of Climate Change
“Are Developed Countries Outsourcing Pollution?” by Arik Levinson
Have rich countries improved their environments by importing polluting goods? No, the mix of goods imported has shifted towards those from cleaner industries, not dirtier. Has pollution worsened in poor countries manufacturing goods for export to rich ones? That depends. Emissions intensities for similar industries are higher in poor countries, which means that even balanced trade causes more pollution there, even for the same goods. And proportional growth in trade has increased that gap. Whether we should consider that to be “outsourcing pollution” is debatable. Have environmental regulations enacted by rich countries caused either of the first two changes? No, the evidence does not show that regulations cause outsourcing. Full-Text Access | Supplementary Materials
“Think Globally, Act Globally: Opportunities to Mitigate Greenhouse Gas Emissions in Low- and Middle-Income Countries,” by Rachel Glennerster and Seema Jayachandran
Reductions in greenhouse gas emissions are a global public good, which makes it efficient to act globally when addressing this challenge. We lay out several reasons that high-income countries seeking to mitigate climate change might have greater impact if they invest their resources in opportunities in low- and middle-income countries. Specifically, some of the easiest and cheapest options have already been tapped in high-income countries, land and labor costs are lower in low- and middle-income countries, it is cheaper to build green than to retrofit green, and global targeting matters in integrated economies. We also discuss economic counterarguments such as the challenge of monitoring emissions levels in low- and middle-income countries, ethical considerations, the importance of not double-counting mitigation funding as development aid, and policy steps that might help realize this opportunity. Full-Text Access | Supplementary Materials
“Carbon Border Adjustments, Climate Clubs, and Subsidy Races When Climate Policies Vary,” by Kimberly A. Clausing and Catherine Wolfram
Jurisdictions adopt climate policies that vary in terms of both ambition and policy approach, with some pricing carbon and others subsidizing clean production. We distinguish two types of policy spillovers from these diverse approaches. First, when countries have different levels of climate ambition, free-riders benefit at the expense of more committed countries. Second, when countries pursue different approaches, carbon-intensive producers within cost-imposing jurisdictions are at a relative competitive disadvantage compared with producers in subsidizing jurisdictions. Carbon border adjustments and climate clubs respond to these spillovers, but when countries have divergent approaches, one policy alone cannot address both spillovers. We also consider the policy dynamics arising from carbon border adjustments and climate clubs; both have the potential to encourage upward harmonization of climate policy, but come with risks. Further, the pressures of international competition may result in subsidy races, with attendant risks and benefits. Full-Text Access | Supplementary Materials
“Global Transportation Decarbonization,” by David Rapson ⓡ Erich Muehlegger
Replacing fossil fuels in the name of decarbonization is necessary but will be particularly difficult due to their as-yet unrivaled bundle of attributes: abundance, ubiquity, energy density, transportability and cost. There is a growing commitment to electrification as the dominant decarbonization pathway. While deep electrification is promising for road transportation in wealthy countries, it will face steep obstacles. In other sectors and in the developing world, it’s not even in pole position. Global transportation decarbonization will require decoupling emissions from economic growth, and decoupling emissions from growth will require not only new technologies, but cooperation in governance. The menu of policy options is replete with grim tradeoffs, particularly as the primacy of energy security and reliability (over emissions abatement) has once again been demonstrated in Europe and elsewhere. Full-Text Access | Supplementary Materials
Articles
“Compensating Wage Differentials in Labor Markets: Empirical Challenges and Applications,” by Kurt Lavetti
The model of compensating wage differentials is among the cornerstone models of equilibrium wage determination in labor economics. However, empirical estimates of compensating differentials have faced persistent credibility challenges. This article summarizes the Rosen model of compensating differentials and chronicles the advances, setbacks, and lessons learned from empirical studies. The progression from cross-sectional to panel models alleviated biases caused by unobserved human capital but yielded new insights into the importance of other biases, including those caused by labor market frictions and endogenous job mobility. I discuss recent approaches that use matched employer-employee data and quasi-random variation in job amenities to address some of these challenges. I then present two examples of applications of compensating differentials: the evaluation of public health and safety policies that rely on the value of statistical life, and the measurement and interpretation of earnings inequality. Full-Text Access | Supplementary Materials
“What Can Historically Black Colleges and Universities Teach about Improving Higher Education Outcomes for Black Students?” by Gregory N. Price and Angelino C. G. Viceisza
Historically Black colleges and universities are institutions that were established prior to 1964 with the principal mission of educating Black Americans. In this essay, we focus on two main issues. We start by examining how Black College students perform across HBCUs and non-HBCUs by looking at a relatively broad range of outcomes, including college and graduate school completion, job satisfaction, social mobility, civic engagement, and health. HBCUs punch significantly above their weight, especially considering their significant lack of resources. We then turn to the potential causes of these differences and provide a glimpse into the “secret sauce” of HBCUs. We conclude with potential implications for HBCU and non-HBCU policy. Full-Text Access | Supplementary Materials
All over the country, cities are grappling with the issue of empty office space. Will the workers come back? How will local businesses that depend on commuters be affected? Should the office space be reused or repurposed in some way? When it comes to office space, the federal government offers a vivid example. It owns 511 million square feet of office space, along with leasing additional space from private firms. Total office space in Manhattan is about 460 million square feet, which is about 11% of the nation's office space–and the federal government owns more than that.
The GAO surveyed federal office buildings for a week in January, again in February, and again in March of this year. It then calculated how fully the buildings were being utilized. The median building was 25% occupied. Here’s a graph showing the breakdown across 24 major agencies. In the lowest quartile, federal agencies are using less than 10% of their space.
Some of the costs here are financial. The GAO estimates that these federal agencies spend about $2 billion annually on operation and maintenance of buildings they own, and an additional $5 billion on leasing space. But of course, any economist will tell you that just because some government-owned office building has been sitting there for decades, its “cost” is not zero: instead, the opportunity cost of the building is what that building or that space could be used for instead.
In a broad sense, the federal government and the city of Washington, DC, face a challenge and an opportunity similar to that of cities throughout the country: What should happen with all this extra office space? Hotels? Theaters and entertainment? Commercial? Convert to housing? Or just wait and expect that, in a year or two, the workers will be back?
The federal government faces particular problems here. A number of the under-utilized buildings are giant government-owned structures. Many of them are "historical," both in the sense that a major teardown would face difficult bureaucratic hurdles, and also in the sense that they have old plumbing, electrical, and HVAC systems. Getting a bureaucracy to give up some of its space is hard; getting it to share space may be harder. Funds aren't readily available for redesigning and reconstructing the space. These issues have been evident for decades. But then add that many government jobs are also well-suited to work-from-home at least part of the time, and commuting in the DC area can be beastly, and it seems unlikely that the glut of government-owned office space is going away on its own.
In DC, as in other cities, it feels as if there needs to be an acceptance that work-from-home or remote work in some form is here to stay, at least to some extent, and so a lot of existing office space is obsolete. However, in most cities there does not seem to be much activity in creating a positive vision of what might take the place of all this empty office space, and how this new vision would shape the future of neighborhoods and the city itself.
The "merger guidelines" that have been published by the US Department of Justice and the Federal Trade Commission since 1968–with updates happening every 10-15 years–serve an unusual role. They are not federal regulations like, say, the Environmental Protection Agency's rules about what level of pollutants can be emitted. Instead, the merger guidelines seek to spell out the established legal and economic understanding of how to think about whether a merger is permissible. When an antitrust suit goes to trial, it has been common for all parties to agree on the merger guidelines themselves–although of course they disagree on how the guidelines apply to the specific case at hand.
Merger guidelines aren’t enforceable regulations. They have also never attempted to be a legal brief or offered an interpretation of the case law. Instead they have described widely accepted economic principles that the Justice Department and the FTC use to analyze mergers. As a result, the guidelines have commanded widespread respect and bipartisan support. Amazingly, for at least 25 years, when regulators have challenged mergers in court, the merging firms themselves have accepted the framework articulated in the guidelines. The new draft guidelines depart sharply from previous iterations by elevating regulators’ interpretation of case law over widely accepted economic principles. The guidelines have long helped courts use economic reasoning to evaluate government challenges to mergers. They shouldn’t become a debatable legal brief or, worse, a political football. Regulators say the guidelines are out of date and need to be updated to reflect the modern economy. Yet their draft draws heavily on Brown Shoe Co. v. U.S. (1962), a widely criticized Supreme Court case.
Furman and Shapiro dig down into some specific details of what they like and don't like about the new guidelines, and I'm sure there will be more commentary in the next few months about the details. Here, I want to focus on an overall point about antitrust and mergers: what is the overall policy goal here? The usual answer is to allow mergers that make consumers better off, whether by reducing prices or by providing products of higher quality, and to disallow mergers that would make consumers worse off.
The Brown Shoe case has long been an example of how not to do antitrust. The case was about a proposed merger between two shoe companies: Brown Shoe and GR Kinney. The shoe industry was not very concentrated. Brown Shoe made about 4% of all shoes in the US; Kinney made 0.5%. At the retail level, the two companies combined for 2.3% of the shoe stores in the US. The goal of the merger was that the retail outlets of both stores would then be able to sell shoes from both companies.
The court held that this level of industrial concentration was excessive, which is highly questionable. But it also held that the merger could lead to greater efficiency that would allow prices for shoes to decline, which could hurt other shoe companies. In Brown Shoe, in other words, the goal of antitrust regulation was not to help consumers with lower prices or improved quality of service; instead, the goal of antitrust was, paradoxically, to limit competition in the name of avoiding harm to competitors.
The Supreme Court’s 1962 Brown Shoe decision is sharply at odds with what courts do today in merger cases. Its troublesome doctrine was that antitrust law should be concerned about market concentration without regard to prices. It even indicated approval for the district court’s conclusion that the merger was harmful because it resulted “in lower prices or in higher quality for the same price….” Under that rationale, the principal beneficiaries of merger enforcement are not consumers or labor. The main benefits accrue to firms who are not integrated or are dedicated to older technologies. Today, by contrast, merger policy is heavily focused on mergers that threaten price increases or sometimes reduced innovation. Brown Shoe is indefensible if antitrust is concerned about competitive market performance and innovation.
Some readers will remember Douglas Ginsburg, who President Reagan announced would be nominated for a position on the US Supreme Court back in 1987, but who then withdrew his name from consideration after controversies over having smoked marijuana earlier in life. Although Ginsburg never ended up on the Supreme Court, he continued his career as a federal judge. He has written “Wither the Consumer Welfare Standard?” for the Harvard Journal of Law & Public Policy (Winter 2023, pp. 69-85).
As Ginsburg points out, the idea that consumer welfare should be the goal of antitrust law–via lower prices for existing goods, or provision of new and improved goods–is a relatively new viewpoint, dating back to the 1970s. For example, Ginsburg points out that Robert Pitofsky wrote a book in 1979 warning about the danger of large firms and how they might use their political influence. But in response to the idea that antitrust law should therefore focus on discouraging large firms, Ginsburg offers two main points. One is that if the concern is that big firms will use their political power to injure consumers, then the goal of antitrust remains consumer protection. However, if the concern is the more nebulous idea of how political competition should work in the United States, then there are many policy tools other than antitrust that are more direct and appropriate. Ginsburg writes:
Corporate political influence, which is usually used for “rent-seeking,” is a legitimate cause for concern. The result is too often a crony capitalism that distorts resource allocation, unjustly rewards some and harms others, and is antithetical to the market competition that benefits consumers and the economy. …
In any event, it does not necessarily follow that antitrust enforcement is an appropriate preventative measure for corporate political influence. … There are a number of problems with using merger control to that end. First, and most obviously, it precludes realizing whatever efficiencies are motivating the merger, to the detriment of consumers. Second, size is a rather poor proxy for political influence. Many small firms and, particularly, associations of small firms, have substantial political clout, often besting large firms on the other side of an issue. Consider insurance agents versus insurance companies; automobile dealers versus automobile manufacturers; and gasoline retailers versus petroleum companies. These “small dealers and worthy men,” as Justice Peckham called them in 1897, prevail consistently, both in the state and the federal legislatures. Finally, some firms attain size—and perhaps also political influence—simply because they are successful in satisfying consumers.
Ginsburg also goes through a variety of other goals that have been proposed for antitrust law.
[O]ther voices have championed different goals for antitrust. All are arguably worthy goals, but ask yourself whether they are best, or even reasonably, achieved by reforming antitrust law or enforcement policy. They include the preservation of jobs that would be rendered redundant if a merger were approved; countering income inequality; preserving small, locally owned businesses … ; protecting the privacy of consumers’ personal data; and safeguarding the environment.
Traditional antitrust decisions focused on consumer welfare can be plenty hard, with room for reasonable differences of opinion. But it helps for antitrust to have a single goal. As a counterexample, imagine if a corporation had to follow certain environmental rules, but could only be in compliance with those rules if it also preserved jobs, supported greater income equality, reduced its political lobbying, helped consumers, and so on and so on. Or imagine that when the IRS checks to see if a company has paid its taxes, it also evaluated whether the company preserved jobs, supported greater income equality, reduced its political lobbying, followed environmental rules, and so on and so on. Now imagine that all government authorities–antitrust, tax, environmental, political giving, and so on–were taking all of these issues into account, all the time. The result would be a chaotic, politicized, and unaccountable form of government.