Practicalities of a Regulatory Budget

A “regulatory budget” begins with the notion that, just as governments write down their planned and actual taxes and spending, they should also write down the costs that are imposed by regulation. To take it a step further, one can imagine the government setting an overall “budget” for the costs of regulation that might be imposed in a given year, and then requiring that regulatory agencies operate within that budget.

In general, developing a good sense of the costs of regulation–and comparing costs to benefits–seems a worthwhile program. But the parallel from taxes and spending to regulation is obviously not exact. The costs and benefits of regulation are both estimated with considerably less precision than, say, the taxes and spending involved in the Social Security system. The costs of implementing regulations are often not as simple as writing a check for a certain new item of pollution-control equipment or financial management software. Instead, producers will seek out ways to change how they operate, sometimes in subtle ways, to reduce the costs of implementing regulations. Some benefits of regulation, like improved worker productivity or reduced health care costs, can be converted to monetary terms relatively easily, but gains in human health or environmental protection may be harder to put in monetary terms.

The ultimate costs and benefits of a series of regulations–that is, a comparison to what costs and benefits would have looked like if the series of regulations had never been implemented–are often unclear, especially five or 10 or 20 years down the road. And at a basic level, if a given regulation seems likely to have benefits greatly in excess of costs, do you really want to postpone the regulation because the current costs exceed some “regulatory budget”?

Where do these practical difficulties leave the issue of “regulatory budgeting”? The Harvard Journal of Law & Public Policy offers “A Symposium on Regulatory Budgeting” in its online-only “Per Curiam” Summer 2022 issue.

The Trump administration set a regulatory budget in 2017, thus making the idea anathema to many non-Trumpists. But as the symposium reminds us, the idea of a regulatory budget has deep nonpartisan roots. James Broughel writes in “The Regulatory Budget in Theory and Practice: Lessons from the U.S. States” (footnotes omitted):

Before the Trump administration’s actual implementation of a regulatory budget, interest in regulatory budgeting likely peaked in the United States in the late 1970s and early 1980s. Robert Crandall of the Brookings Institution has been credited as “probably the first proponent” of a regulatory budget. Democratic Senator Lloyd Bentsen introduced the Federal Regulatory Budget Act of 1978, which would have created a role for Congress in setting regulatory cost allocations for agencies, akin to the role it plays in making fiscal appropriations. At that time, there was considerable support for a regulatory budget throughout the U.S. federal government. President Jimmy Carter’s 1980 Economic Report of the President references a regulatory budget as a potential means of improving priority setting. The Joint Economic Committee of Congress issued a subsequent report endorsing a regulatory budget. Thereafter, OMB circulated a draft Regulatory Cost Accounting Act in 1980. Later, in 1992, John Morrall III, an OMB official, wrote a report for the Organisation for Economic Co-operation and Development endorsing a regulatory budget. These early proponents of regulatory budgets were noticeably bipartisan.

I took a closer look at Trump’s regulatory budget proposal a few years back when it was enacted. One component that got a lot of publicity was the requirement that two regulations be eliminated for each new regulation introduced. This requirement was probably of more symbolic than practical significance, given the possibilities for eliminating small-scale, long-standing regulations, or for “eliminating” two regulations while replacing them with what would be counted as a single new regulation. The more interesting component was the cost cap. As Broughel writes, “The major requirement was that each new dollar of regulatory cost was to be offset by the elimination of one existing dollar of regulatory cost.” In effect, regulatory agencies were asked to identify regulations where a given level of costs provided low or negative benefits, and replace them with regulations where that same given level of costs provided higher benefits.

This approach makes the most sense if you believe that many regulatory agencies are predisposed to add new regulations, rather than reconsider the effects of older regulations. The “regulatory budget” idea is intended to create some pushback, by making it necessary for regulatory agencies to continually reconsider their past regulations–especially those that may have worked less well or become outdated. Indeed, those who worked on these issues for the Trump administration (like Broughel) argue that they succeeded in putting a cap on regulatory costs during the Trump presidency.

While I am skeptical of the bigger claims made for a regulatory budget (for example, it’s the equivalent of saving thousands of dollars for every family every year, and in addition will unleash a surge of productivity growth), I do think that some form of pushback against the expansionary bias of regulators makes sense. In the symposium, Andrea Renda discusses the spread of regulatory budgeting around the world in “Regulatory Budgeting: Inhibiting or Promoting Better Policies?” She writes (footnotes omitted):

Over the past two decades, several governments have introduced tools to incentivize regulators to become more aware of the costs they impose on businesses and citizens when they propose new rules. In some European countries, such as the Netherlands and Germany, this cost-focused approach has taken priority over more comprehensive better regulation strategies such as the use of ex ante regulatory impact analysis (RIA), or comprehensive retrospective reviews of the costs and benefits of individual regulations. In a dozen European Union Member States, plus Canada, Korea, Mexico and the United States, governments of various political orientations have introduced forms of regulatory budgeting, which require administrations to identify, every time they introduce new regulation entailing significant regulatory costs, provisions to be repealed or revised, so that the net impact on overall regulatory costs is (at least) offset. These rules are generically referred to as “One-In-X-Out” (OIXO). … In their most common form of “One-In-One-Out” (OIOO), these rules amount to a commitment not to increase the estimated level of burdens over the chosen timeframe. The OECD refers to these commitments as “regulatory offsetting.”

Depending on the circumstances, the OIXO rule may explicitly refer to the number of regulations, and thus require that for every regulation introduced, one or more existing regulations are eliminated; or to the corresponding volume of regulatory costs, and hence require that when a new regulation is introduced, one or more regulations are modified or repealed, such that the overall change in regulatory costs is zero or negative. Most countries adopted the latter version, based on cost offsetting rather than on avoiding increases in the number of regulatory provisions. …
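The cost-offsetting version of the rule is simple enough to sketch in a few lines of code. This is an illustration only, with hypothetical dollar figures and function names–not any agency’s actual accounting method:

```python
# Sketch of a cost-based "One-In-X-Out" (OIXO) check: a new rule's
# estimated annual regulatory cost must be (at least) offset by the
# costs of the rules repealed or revised alongside it.
# All numbers are hypothetical, for illustration only.

def oixo_satisfied(new_rule_cost: float, repealed_costs: list[float]) -> bool:
    """True if the net change in overall regulatory costs is zero or negative."""
    net_change = new_rule_cost - sum(repealed_costs)
    return net_change <= 0

# A new rule costing $40M/year, offset by repeals worth $25M and $20M:
print(oixo_satisfied(40e6, [25e6, 20e6]))  # True: offsets total $45M
print(oixo_satisfied(40e6, [25e6]))        # False: only $25M offset
```

Note that the count-based variant of the rule (for every rule in, one or more rules out) would instead compare `len(repealed_costs)` to a target, regardless of dollar amounts–which is why most countries adopted the cost-based version.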

There are at least twenty countries in the world that have adopted an OIXO rule. These include ten EU member states (Austria, Finland, France, Germany, Hungary, Italy, Latvia, Lithuania, Spain and Sweden) as well as Canada, Mexico and Korea. In the past, three countries have had a similar rule in place (Denmark, the UK, and the United States), but later decided to gradually phase it out … . Four other countries were reportedly introducing similar regulatory budgeting systems in 2020: Poland, Romania, Slovakia, Slovenia.

Renda argues that many of these countries, of varying political persuasions, believe that the OIXO formulation has been a genuine success. She emphasizes, via a list of 10 lessons, that a regulatory budget is only one part of an overall approach to thinking about regulatory reform. One of her 10 lessons seemed worth repeating here:

Lesson 5. If carefully designed, regulatory budgeting rules are not incompatible with an ambitious policy agenda. Some countries have introduced OIXO rules and burden reduction targets in the context of a deregulatory effort. But the fact that these rules have been used in the context of a deregulatory attempt does not mean that they are, per se, incompatible with a more far-reaching and proactive approach to deregulation. In Germany, for example, the OIOO rule was adopted in a context in which ambitious programs such as the Energiewende are in place, and a systematic scrutiny of the impact of new legislation on sustainable development is carried out. In France, the government uses the OI2O rule but at the same time adopts ambitious proposals in terms of social and environmental benefits. In short, there is no incompatibility per se between the adoption of a cost reduction or regulatory budgeting system and an ambitious regulatory and policy agenda in the social and environmental domain.

Clearly, some supporters of an activist and aggressive regulatory policy, in countries around the world, are recognizing that the best way to build public support for such policies is not by passing a lot of rules, but by reassuring the public that rules are being considered and reconsidered with appropriate care.

Warren Buffett on the Ovarian Lottery

Warren Buffett used to talk from time to time about the implications of what he called “the ovarian lottery”–the random accident of being born into one time, place, and identity rather than another.

It’s not a new idea. Some of you will know it as a version of what the philosopher John Rawls was talking about in his 1971 book A Theory of Justice, when he discussed how justice required making decisions behind a “veil of ignorance.” Going back further, it’s a version of what Adam Smith was talking about in 1759 in his first classic book The Theory of Moral Sentiments, when he discusses morality as evaluated by an “impartial spectator”–a hypothetical someone who is not personally involved with the specific situation under discussion. There are of course differences in how this idea is used in each case, but the overall notion is that to make a fair or moral evaluation, you need to remove yourself personally from the situation, so that you can instead imagine what would be fair or ethical if you did not know what role you might end up playing.

Buffett’s telling is vivid in its own way, and focuses on his feelings of gratitude and thankfulness. Here, I’m quoting from the transcript of a question-and-answer session Warren Buffett had with a group of business school students at the University of Florida in 1998:

I have been extraordinarily lucky. I mean, I use this example and I will take a minute or two because I think it is worth thinking about a little bit. Let’s just assume it was 24 hours before you were born and a genie came to you and he said, “Herb, you look very promising and I have a big problem. I got to design the world in which you are going to live in. I have decided it is too tough; you design it. … You say, “I can design anything? There must be a catch?” The genie says there is a catch. You don’t know if you are going to be born black or white, rich or poor, male or female, infirm or able-bodied, bright or retarded. All you know is you are going to take one ball out of a barrel with 5.8 billion (balls). You are going to participate in the ovarian lottery. And that is going to be the most important thing in your life, because that is going to control whether you are born here or in Afghanistan or whether you are born with an IQ of 130 or an IQ of 70. It is going to determine a whole lot. What type of world are you going to design?

I think it is a good way to look at social questions, because not knowing which ball you are going to get, you are going to want to design a system that is going to provide lots of goods and services because you want people on balance to live well. And you want it to produce more and more so your kids live better than you do and your grandchildren live better than their parents. But you also want a system that does produce lots of goods and services that does not leave behind a person who accidentally got the wrong ball and is not well wired for this particular system.

I am ideally wired for the system I fell into here. I came out and got into something that enables me to allocate capital. Nothing so wonderful about that. If all of us were stranded on a desert island somewhere and we were never going to get off of it, the most valuable person there would be the one who could raise the most rice over time. I can say, “I can allocate capital!” You wouldn’t be very excited about that. So I have been born in the right place. [Bill] Gates says that if I had been born three million years ago, I would have been some animal’s lunch. He says, “You can’t run very fast, you can’t climb trees, you can’t do anything.” You would just be chewed up the first day. You are lucky; you were born today. And I am.

The question getting back, here is this barrel with 6.5 billion balls, everybody in the world, if you could put your ball back, and they took out at random a 100 balls and you had to pick one of those, would you put your ball back in? Now those 100 balls you are going to get out, roughly 5 of them will be American, 95/5. So if you want to be in this country, you will only have 5 balls, half of them will be women and half men–I will let you decide how you will vote on that one. Half of them will below average in intelligence and half above average in intelligence. Do you want to put your ball in there? Most of you will not want to put your ball back to get 100. So what you are saying is: I am in the luckiest one percent of the world right now sitting in this room–the top one percent of the world.

Well, that is the way I feel. I am lucky to be born where I was because it was 50 to 1 in the United States when I was born. I have been lucky with parents, lucky with all kinds of things and lucky to be wired in a way that in a market economy, pays off like crazy for me. It doesn’t pay off as well for someone who is absolutely as good a citizen as I am (by) leading Boy Scout troops, teaching Sunday School or whatever, raising fine families, but just doesn’t happen to be wired in the same way that I am. So I have been extremely lucky.
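Buffett’s barrel is just random sampling from a population, and his arithmetic–roughly 5 American balls out of 100 drawn–follows from the US share of world population being about 5 percent in his example. A quick simulation (hypothetical function names, share taken from Buffett’s own numbers) makes the point:

```python
import random

# Simulating Buffett's "ovarian lottery" barrel: draw 100 balls at
# random from the world's population and count how many are American.
# The 5% US population share is taken from Buffett's own example.

def american_draws(n_balls: int = 100, us_share: float = 0.05,
                   seed: int = 0) -> int:
    rng = random.Random(seed)
    return sum(rng.random() < us_share for _ in range(n_balls))

# Averaged over many runs, the count hovers around 5 of 100 balls.
print(american_draws())
```

The expected count is simply 100 × 0.05 = 5, which is Buffett’s “95/5” point: if you drew your ball at random, the odds are heavily against being born in any one rich country.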

I suppose it’s easy to feel “extremely lucky” if your name is near the top for the richest person in the world! But I too have been lucky in parents, lucky in marriage, lucky in good health, lucky in job, lucky to live in a time and place when being near-sighted and bookish could lead to upper middle-class economic security, lucky to live in a time and place with considerable freedom and not wrecked by war. My list of “meaningful life advice” is pretty short. But for me, “count your blessings” is high on the list.

An Economist Chews over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that. [This is an updated, amended, rearranged, and cobbled-together version of a post that was first published on Thanksgiving Day 2011.]

Maybe the biggest news about Thanksgiving dinner this year is the rise in the cost of the traditional meal. For the economy as a whole, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical US household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose 20% from 2021 to 2022. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The lower line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been an OK measure of the overall inflation rate over long periods of time, but you can see the distinct rise in the real price of Thanksgiving dinner since 2020.
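The adjustment behind the lower line is a standard deflation calculation: divide the nominal basket price by an overall price index to express it in base-year dollars. Here is a minimal sketch with illustrative numbers (the dollar figures and index values below are hypothetical, not the Farm Bureau’s actual data):

```python
# Converting the nominal price of a fixed basket into a "real" price
# by deflating with an overall price index such as the CPI.
# All numbers below are hypothetical, for illustration only.

def real_price(nominal_price: float, cpi_then: float, cpi_base: float) -> float:
    """Express a nominal price in base-year dollars."""
    return nominal_price * (cpi_base / cpi_then)

# Suppose the dinner basket cost $64 in 2022, while the overall CPI rose
# from 100 (base year 2020) to 114 over the same period:
print(round(real_price(64.0, 114.0, 100.0), 2))  # 56.14, in 2020 dollars
```

If the real price series is flat, the basket’s inflation matches overall inflation; a rising real price, as since 2020, means the dinner has gotten more expensive even relative to everything else.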

At least part of the reason for the overall rise in the price of Thanksgiving is a supply shock affecting the turkey industry: a surge of Highly Pathogenic Avian Influenza (HPAI). Margaret Cornelius and Grace Grossen of the US Department of Agriculture offer a short overview in the “Livestock, Dairy, and Poultry Outlook: November 2022.” They write: “This year, the turkey industry has faced a particular challenge in supplying Thanksgiving dinner due to an outbreak of Highly Pathogenic Avian Influenza (HPAI), in addition to challenges common to all food industries this year—increased costs of production, a tight labor supply, and transportation constraints.”

The outbreak of HPAI in 2022 has led to a loss of about 8 million turkeys from US production this year: for comparison, this is about 4% of the total number of turkeys “slaughtered” (the USDA term) in 2021. Moreover, turkey farmers have an incentive to slaughter turkeys earlier than usual, to protect against the risk that the turkeys might become infected with HPAI, so the average weight of turkeys has also declined in 2022.

Of course, for economists the price is only the beginning of the discussion of the turkey industry supply chain. This is just one small illustration of the old wisdom that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. The last time the U.S. Department of Agriculture did a detailed “Overview of the U.S. Turkey Industry” appears to be back in 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, then declined somewhat, and appears to have flattened out. The figure below was taken from the Eatturkey.com website run by the National Turkey Federation a couple of years ago.

The USDA reports that overall US consumption of turkey has been falling in recent years, from 5.38 billion pounds in 2016 to 5.1 billion pounds in 2021.

On the supply side, turkey companies are what economists call “vertically integrated,” which means that they either carry out all the steps of production directly, or control these steps with contractual agreements. Over time, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

U.S. agriculture is full of examples of remarkable increases in yields over periods of a few decades, but such examples always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time from the 2007 report.

A more recent update from a news article shows this trend has continued. Indeed, most commercial turkeys are now bred through artificial insemination, because the males are too heavy to do otherwise.

The production of turkey is not a very concentrated industry, with three relatively large producers (Butterball, Jennie-O, and Cargill Turkey & Cooked Meats) and then more than a dozen mid-sized producers. Given this reasonably competitive environment, it’s interesting to note that the price markups for turkey–that is, the margin between the wholesale and the retail price–have in the past tended to decline around Thanksgiving, which obviously helps to keep the price lower for consumers. However, this pattern may be weakening over time, as margins have been higher in the last couple of Thanksgivings. Kim Ha of the US Department of Agriculture spells this out in the “Livestock, Dairy, and Poultry Outlook” report of November 2018. The vertical lines in the figure show Thanksgiving. She writes: “In the past, Thanksgiving holiday season retail turkey prices were commonly near annual low points, while wholesale prices rose. … The data indicate that the past Thanksgiving season relationship between retail and wholesale turkey prices may be lessening.”

If this post whets your appetite for additional discussion, here’s a post on the processed pumpkin industry and another on some economics of mushroom production. Good times! Anyway, Thanksgiving is my favorite holiday. Good food, good company, no presents–and all these good topics for conversation. What’s not to like?

Thanksgiving Origins

Thanksgiving is a day for a traditional menu, and part of my holiday is to reprint this annual column on the origins of the day.

The first presidential proclamation of Thanksgiving as a national holiday was issued by George Washington on October 3, 1789. But it was a one-time event. Individual states (especially those in New England) continued to issue Thanksgiving proclamations on various days in the decades to come. It wasn’t until 1863 that a magazine editor named Sarah Josepha Hale, after 15 years of letter-writing, prompted Abraham Lincoln to designate the last Thursday in November as a national holiday–a pattern which then continued into the future.

An original and thus hard-to-read version of George Washington’s Thanksgiving proclamation can be viewed through the Library of Congress website. The economist in me was intrigued to notice that some of the causes for giving of thanks included “the means we have of acquiring and diffusing useful knowledge … the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.”

Also, the original Thanksgiving proclamation was not without some controversy and dissent in the House of Representatives, as an example of unwanted and inappropriate federal government interventionism. As reported by the Papers of George Washington website at the University of Virginia:

The House was not unanimous in its determination to give thanks. Aedanus Burke of South Carolina objected that he “did not like this mimicking of European customs, where they made a mere mockery of thanksgivings.” Thomas Tudor Tucker “thought the House had no business to interfere in a matter which did not concern them. Why should the President direct the people to do what, perhaps, they have no mind to do? They may not be inclined to return thanks for a Constitution until they have experienced that it promotes their safety and happiness. We do not yet know but they may have reason to be dissatisfied with the effects it has already produced; but whether this be so or not, it is a business with which Congress have nothing to do; it is a religious matter, and, as such, is proscribed to us. If a day of thanksgiving must take place, let it be done by the authority of the several States.”

Here’s the transcript of George Washington’s Thanksgiving proclamation from the National Archives.

Thanksgiving Proclamation

By the President of the United States of America. a Proclamation.

Whereas it is the duty of all Nations to acknowledge the providence of Almighty God, to obey his will, to be grateful for his benefits, and humbly to implore his protection and favor—and whereas both Houses of Congress have by their joint Committee requested me “to recommend to the People of the United States a day of public thanksgiving and prayer to be observed by acknowledging with grateful hearts the many signal favors of Almighty God especially by affording them an opportunity peaceably to establish a form of government for their safety and happiness.”

Now therefore I do recommend and assign Thursday the 26th day of November next to be devoted by the People of these States to the service of that great and glorious Being, who is the beneficent Author of all the good that was, that is, or that will be—That we may then all unite in rendering unto him our sincere and humble thanks—for his kind care and protection of the People of this Country previous to their becoming a Nation—for the signal and manifold mercies, and the favorable interpositions of his Providence which we experienced in the course and conclusion of the late war—for the great degree of tranquillity, union, and plenty, which we have since enjoyed—for the peaceable and rational manner, in which we have been enabled to establish constitutions of government for our safety and happiness, and particularly the national One now lately instituted—for the civil and religious liberty with which we are blessed; and the means we have of acquiring and diffusing useful knowledge; and in general for all the great and various favors which he hath been pleased to confer upon us.

and also that we may then unite in most humbly offering our prayers and supplications to the great Lord and Ruler of Nations and beseech him to pardon our national and other transgressions—to enable us all, whether in public or private stations, to perform our several and relative duties properly and punctually—to render our national government a blessing to all the people, by constantly being a Government of wise, just, and constitutional laws, discreetly and faithfully executed and obeyed—to protect and guide all Sovereigns and Nations (especially such as have shewn kindness unto us) and to bless them with good government, peace, and concord—To promote the knowledge and practice of true religion and virtue, and the encrease of science among them and us—and generally to grant unto all Mankind such a degree of temporal prosperity as he alone knows to be best.

Given under my hand at the City of New-York the third day of October in the year of our Lord 1789.

Go: Washington

Sarah Josepha Hale was editor of a magazine first called Ladies’ Magazine and later called Ladies’ Book from 1828 to 1877. It was among the most widely-known and influential magazines for women of its time. Hale wrote to Abraham Lincoln on September 28, 1863, suggesting that he set a national date for a Thanksgiving holiday. From the Library of Congress, here’s a PDF file of Hale’s actual letter to Lincoln, along with a typed transcript for 21st-century eyes. Here are a few sentences from Hale’s letter to Lincoln:

“You may have observed that, for some years past, there has been an increasing interest felt in our land to have the Thanksgiving held on the same day, in all the States; it now needs National recognition and authoritive fixation, only, to become permanently, an American custom and institution. … For the last fifteen years I have set forth this idea in the “Lady’s Book”, and placed the papers before the Governors of all the States and Territories — also I have sent these to our Ministers abroad, and our Missionaries to the heathen — and commanders in the Navy. From the recipients I have received, uniformly the most kind approval. … But I find there are obstacles not possible to be overcome without legislative aid — that each State should, by statute, make it obligatory on the Governor to appoint the last Thursday of November, annually, as Thanksgiving Day; — or, as this way would require years to be realized, it has ocurred to me that a proclamation from the President of the United States would be the best, surest and most fitting method of National appointment. I have written to my friend, Hon. Wm. H. Seward, and requested him to confer with President Lincoln on this subject …”

William Seward was Lincoln’s Secretary of State. In a remarkable example of rapid government decision-making, Lincoln responded to Hale’s September 28 letter by issuing a proclamation on October 3. It seems likely that Seward actually wrote the proclamation, and then Lincoln signed off. Here’s the text of Lincoln’s Thanksgiving proclamation, which characteristically mixed themes of thankfulness, mercy, and penitence:

Washington, D.C.
October 3, 1863
By the President of the United States of America.
A Proclamation.

The year that is drawing towards its close, has been filled with the blessings of fruitful fields and healthful skies. To these bounties, which are so constantly enjoyed that we are prone to forget the source from which they come, others have been added, which are of so extraordinary a nature, that they cannot fail to penetrate and soften even the heart which is habitually insensible to the ever watchful providence of Almighty God. In the midst of a civil war of unequaled magnitude and severity, which has sometimes seemed to foreign States to invite and to provoke their aggression, peace has been preserved with all nations, order has been maintained, the laws have been respected and obeyed, and harmony has prevailed everywhere except in the theatre of military conflict; while that theatre has been greatly contracted by the advancing armies and navies of the Union. Needful diversions of wealth and of strength from the fields of peaceful industry to the national defence, have not arrested the plough, the shuttle or the ship; the axe has enlarged the borders of our settlements, and the mines, as well of iron and coal as of the precious metals, have yielded even more abundantly than heretofore. Population has steadily increased, notwithstanding the waste that has been made in the camp, the siege and the battle-field; and the country, rejoicing in the consiousness of augmented strength and vigor, is permitted to expect continuance of years with large increase of freedom. No human counsel hath devised nor hath any mortal hand worked out these great things. They are the gracious gifts of the Most High God, who, while dealing with us in anger for our sins, hath nevertheless remembered mercy. It has seemed to me fit and proper that they should be solemnly, reverently and gratefully acknowledged as with one heart and one voice by the whole American People. 
I do therefore invite my fellow citizens in every part of the United States, and also those who are at sea and those who are sojourning in foreign lands, to set apart and observe the last Thursday of November next, as a day of Thanksgiving and Praise to our beneficent Father who dwelleth in the Heavens. And I recommend to them that while offering up the ascriptions justly due to Him for such singular deliverances and blessings, they do also, with humble penitence for our national perverseness and disobedience, commend to His tender care all those who have become widows, orphans, mourners or sufferers in the lamentable civil strife in which we are unavoidably engaged, and fervently implore the interposition of the Almighty Hand to heal the wounds of the nation and to restore it as soon as may be consistent with the Divine purposes to the full enjoyment of peace, harmony, tranquillity and Union.

In testimony whereof, I have hereunto set my hand and caused the Seal of the United States to be affixed.

Done at the City of Washington, this Third day of October, in the year of our Lord one thousand eight hundred and sixty-three, and of the Independence of the United States the Eighty-eighth.

By the President: Abraham Lincoln
William H. Seward,
Secretary of State

Some Economics of Dominant Superstar Firms

A range of evidence suggests that in recent decades, the leading firms in a given industry have attained a more dominant position than in the past. I’ve noted some of this accumulating evidence over time.

For example, back in 2015 the OECD published a report on “The Future of Productivity,” arguing that the productivity slowdown in many countries was occurring not because high-productivity firms were slowing down in their productivity growth, but because the firms with median and lower productivity weren’t keeping up. That year, Jae Song, David J. Price, Fatih Guvenen, and Nicholas Bloom wrote about how the pattern of diverging productivity across firms also led to diverging wages across firms. They argued that within a given firm, wage inequality has not changed much. But some high-productivity, high-profit firms were paying notably higher wages than other firms in the same industry, which was a major driver of growing inequality in labor income. Nicholas Bloom summarized this evidence in a cover story in March 2017 for the Harvard Business Review.

The McKinsey Global Institute took up the mantle in 2018 with a report summarizing past evidence and offering new evidence in Superstars: The Dynamics of Firms, Sectors, and Cities Leading the Global Economy (October 2018). It looks at about 6000 of the world’s largest public and private firms: “Over the past 20 years, the gap has widened between superstar firms and median firms, and also between the bottom 10 percent and median firms. … The growth of economic profit at the top end of the distribution is thus mirrored at the bottom end by growing and increasingly persistent economic losses …”  In 2019, the US Census Bureau and the Bureau of Labor Statistics created an experimental database called Dispersion Statistics on Productivity, which let researchers look at how productivity was distributed across firms in a given industry: for example, firms in a certain industry at the 75th percentile of productivity are about 2.4 times as productive as those at the 25th percentile, on average. Again, there was some evidence that this gap is widening, and that best-practice methods of improving productivity are not spreading as well as they used to.

In short, an array of evidence suggests that the edge of dominant firms over their competitors has increased in a variety of industries. Jan Eeckhout reviews this evidence, and also looks at causes and effects, in his essay on “Dominant firms in the digital age” (UBS Center Public Paper #12, November 2022).

Eeckhout argues that the edge of dominant firms can be achieved in several ways in the digital era. The better-known approach, I think, is in the idea of network effects. For example, many buyers go to Amazon because many sellers are also at Amazon, and vice versa. Once such a network exists, it can be hard for a new firm to gain a foothold.

The more subtle approach is for firms to make investments that fall under the accounting category of “‘Selling, General and Administrative expenses’ (SG&A). Those include expenditures on Research and Development (R&D), advertising, manager salaries, etc. and are often interpreted as fixed costs or intangibles. The observed rise in SG&A is a source of economies of scale as the fixed cost of production leads to declining average costs even with moderately decreasing returns in the variable inputs.” To put the point another way, some firms make substantial investments in technologies, brand names, and managers who can build on these capabilities. Eeckhout argues:

The rise of dominant firms that we have seen during the advent of the digital age is built on cost-reducing and efficiency-enhancing innovations that create increasing returns to scale. This implies a winner-takes-all market with a dominant firm achieving a long-lasting monopoly position. And while monopoly is often associated with higher prices, most of these firms achieve this position by doing the opposite, that is lowering prices. They can do this because their innovations and investments lead to an even larger reduction in costs. And that is why the digital technology is so attractive for customers: technological innovation is the hero. But because costs decline more than prices due to scale economies, technological change is also the villain.

(For those interested in digging deeper here, the Summer 2022 issue of the Journal of Economic Perspectives includes a three-paper symposium on the rising importance of intangible capital in the US economy, including everything from innovations to brand names. The Summer 2019 issue includes a three-paper symposium on the issue of the extent to which price markups over cost have been changing over time, and the implications for labor markets and the macroeconomy. As has been true for more than a decade now, all JEP articles back to the first issue are freely available. Full disclosure: I work as Managing Editor of the JEP, and thus am predisposed to think the articles are of wider interest!)

As Eeckhout points out, potential consequences of this rise in dominant superstar firms include greater inequality of wages created by these lasting differences across firms; a slowdown in new business startups as entrepreneurs face a more challenging environment; a shift in the flow of national income going to capital, rather than labor; and in general, a greater ability of more-dominant firms, less concerned about competition, to charge higher prices.

What is an appropriate policy solution? One approach is higher taxes on the profits of dominant firms. Without staking out a position here on the extent to which this is desirable, it’s worth noting that higher taxes would not alter the dominance of these firms, and many of the negative consequences would persist.

An alternative approach would be to recognize the phenomenon, but to take more of a hands-off attitude. After all, if the dominant firms are achieving success by making productivity-enhancing investments that reduce costs, this is broadly speaking a desirable goal, rather than something to be penalized. Besides, today’s dominant firms are not invulnerable, as anyone tracking the current performance of Meta (Facebook) or Twitter will attest. Not that long ago, companies like America Online and MySpace seemed to have dominant positions.

Besides, to what extent are consumers being “harmed” by, say, free access to email, word-processing, and spreadsheets offered by Google? Preston McAfee put it this way in an interview a few years ago:

First, let’s be clear about what Facebook and Google monopolize: digital advertising. The accurate phrase is ”exercise market power,” rather than monopolize, but life is short. Both companies give away their consumer product; the product they sell is advertising. While digital advertising is probably a market for antitrust purposes, it is not in the top 10 social issues we face and possibly not in the top thousand. Indeed, insofar as advertising is bad for consumers, monopolization, by increasing the price of advertising, does a social good. 

Amazon is in several businesses. In retail, Walmart’s revenue is still twice Amazon’s. In cloud services, Amazon invented the market and faces stiff competition from Microsoft and Google and some competition from others. In streaming video, they face competition from Netflix, Hulu, and the verticals like Disney and CBS. Moreover, there is a lot of great content being created; I conclude that Netflix’s and Amazon’s entry into content creation has been fantastic for the consumer. …

A more active approach would be to look for targeted opportunities to ensure greater competition. For example, McAfee suggests that consumers may well be harmed in a meaningful way by the Android-Apple duopoly in the market for smartphones, as well as by the very limited competition to provide home internet services.

Eeckhout emphasizes the general issue of “interoperability”–that is, the ability of consumers to shift between companies. He writes:

Interoperability has many applications. It is the regulation that ensures that a hardware producer cannot change the charger plug from product to product thus forcing users to buy an expensive new one each time, or whenever they need to replace an existing plug. And the concept of interoperability was at the heart of the development of the internet where the founding fathers of the world wide web ensured that the accessibility of different services was built in. They ensured that an email message for example could be sent from one provider (say Gmail) to another (say your company email servers). Similarly with the access to web pages that are hosted by different providers. This generates a lot of entry and competition of internet service providers. But this concept of interoperability does not come without regulation. For example, interoperability is not engrained in messaging services. It is impossible to send a message from WhatsApp to Snapchat since messaging services are closed. None of the services has an incentive to open their messaging platform to the messages of their competitors. As a result, compared to the number of service providers for email and the world wide web, the number of messaging services is very small.

If people could choose to transfer their personal information, or to offer access to that information, from one setting to another, competition could be expanded. This goal isn’t a simple one. But if people could move their preferences and past shopping lists, even their financial and banking records and their health data, from one provider to another, competition in a number of areas could become easier. Another suggestion is that antitrust regulators should be skeptical when a dominant firm seeks to buy up smaller firms that have the potential to grow into future large-scale competitors.

The most active approach would go beyond specific situations of anticompetitive behavior and seek to use antitrust regulation in more aggressive ways, perhaps even with the goal of breaking up dominant firms. I don’t see a strong case for this kind of action. When the underlying issue is strong network effects, such effects are not going to go away. When the underlying issue is firms making major productivity-enhancing investments, that’s a good thing, not a bad one. Perhaps rather than figure out how to slow down the productivity leaders, we should be thinking more about what kinds of market structures and institutions might help to diffuse what they are already doing across the rest of the economy. Finding ways to level up the laggards is often harder than leveling down the leaders, but also ultimately more productive.

Income Inequality for US Households

I sometimes say that I feel as if I have a pretty good grasp on the US economy–except that my understanding has about a 2-3 year lag. For example, right now I feel as if I’ve got a pretty good understanding of events up through about May 2020, but I’m still trying to develop a satisfactory understanding of what has happened since then.

When it comes to inequality of household incomes, the Congressional Budget Office is on a similar timeline. CBO has just published The Distribution of Household Income, 2019 (November 2022). But CBO has a better excuse for the time lag than I do. A good chunk of the underlying data behind this report is from income tax data. This data has the great advantage that it isn’t from a survey asking people about their incomes, but is from what people actually filed with the Internal Revenue Service, which in turn is cross-checked with data from employers, financial institutions, and records of other types of income (like royalty payments). But data for 2019 incomes didn’t get sent in until 2020, and the pandemic led to delays in when taxes were due. Thus, anyone working with full tax data is always a couple of years behind the times.

The real strength of the report is not that it is up to the minute, but rather that it offers a snapshot in time along with a useful sense of trends in income inequality since the late 1970s, when it began to rise. Here are a few of the graphs that caught my eye. This is a snapshot of inequality across income levels for 2019. The breakout panel on the right shows that while average income for the top 1% was about $2 million, this breaks down into average income of $1.2 million for the 99th to 99.9th percentiles (that is, the top 1% not including the top 0.1%), average income of $5.7 million for the 99.9th to 99.99th percentiles (that is, the top 0.1%, not including the top 0.01%), and average income of $43 million for the top 0.01%.

This figure shows where the income comes from for each group. In particular, the black lines show that the highest share of income is from labor income–that is, being paid for work done in the previous year–for everyone up to the 99.9th percentile. For the very tip-top, capital income and capital gains (think rising prices of assets like stocks and real estate) are the biggest share.

It’s easy to gabble about whether inequality is too high or too low, but many people are better gabblers than I am, so I won’t do that here. It’s perhaps worth saying that no one should expect income levels to be equal in a given year: there’s no reason for a 19-year-old high school dropout to be earning the same income as a 50-year-old physician–or someone who started a company that hires dozens or hundreds of people. In addition, most people move between income levels over time as their skills and experience and savings increase.

But the CBO is a just-the-numbers organization. Thus, the report is a place to get information about the degree of redistribution of income in the US, and how that has evolved over time.

For example, here’s a figure showing average federal taxes by income group: this measure includes all federal taxes (for example, payroll taxes for Social Security and Medicare), but does not include state and local income or sales taxes.

Here’s the trend in average federal taxes paid at the top of the income distribution in the last 40 years or so. You will notice that while the subject has been the source of considerable political controversy, the ups and downs have pretty much levelled out over time.

This figure shows trends in what households in the lowest quintile of the income distribution receive from federal redistribution programs. Notice that assistance in the form of Medicaid has risen substantially, but of course, Medicaid can’t be used to pay the rent or buy groceries. The other main transfers–Supplemental Security Income, food stamps (SNAP), and “other”–have been flat or trending down.

So with federal taxes and benefits taken into account, how much redistribution happened in 2019? The figure shows the share of income for various groups before and after taxes and spending. The CBO writes: “The lowest quintile received 8 percent of income after transfers and taxes, compared with 4 percent of income before transfers and taxes. … In contrast, the share of income after transfers and taxes for the highest quintile was about 6 percentage points less than the share of income before transfers and taxes. Because those households paid more in taxes than they received in transfers, the transfer and tax systems combined to reduce their share of income from 55 percent to 48 percent. Much of that decline was experienced by households in the top 1 percent of the distribution, whose share of income after transfers and taxes was 13 percent, 3 percentage points lower than their share of income before transfers and taxes.”

The Gini coefficient is a standard way of compressing the distribution of income into a single number. The Gini ranges from 0 to 1, where a Gini of 0 would imply a completely equal distribution of income, and a Gini of 1 would imply that a single person received all the income. Here’s the Gini for the US income distribution over time. You’ll notice that the top line, income inequality based on market incomes, is rising over time. However, the bottom line, which shows income inequality after taxes and transfers, has been essentially flat since 2000. The Gini in 2019 is higher than in most of the 1970s and 1980s, but similar to the peak years of those decades, like 1986.
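For readers who like to see the definition in action, here is a minimal sketch (with hypothetical incomes, not CBO data) of one standard way to compute a Gini coefficient, via the mean absolute difference between all pairs of incomes:

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula:
    G = (sum over all pairs of |x_i - x_j|) / (2 * n^2 * mean).
    Equals 0 for complete equality; for a finite sample where one
    person receives everything, it equals (n-1)/n, approaching 1."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return mad / (2 * n * n * mean)

print(gini([50_000] * 4))        # 0.0  -- four identical incomes
print(gini([0, 0, 0, 100_000]))  # 0.75 -- one household gets everything
```

On real data, the market-income Gini and the after-tax, after-transfer Gini would be computed for the same households, and the gap between the two lines measures the redistributive effect of taxes and transfers, as in the CBO figure.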

The overall pattern is that as market income has become more unequal, the forces of taxation and redistribution in pushing toward greater equality of after-tax, after-transfer income have become stronger over time, and since about 2000 those forces have broadly balanced each other out. Of course, nothing in these numbers is an argument that the US should not do more (or less) to redistribute. But the factual claim that after-tax, after-transfer income inequality has been rising substantially over time is often overstated–and is not true for the last two decades.

Updates on the Social Cost of Carbon

A basic insight of environmental economics is that those who generate social costs to the environment should pay the price of those costs. From this view, estimates of the social cost of carbon are a key parameter for environmental policy with regard to the risks of climate change–that is, the social cost of carbon shows what prices those whose economic activities generate carbon should face so that the environmental risks are taken into account.

Perhaps unsurprisingly, this key parameter is also a political football. During the Obama administration, standard estimates of the social cost of carbon emissions were about $50 per metric ton (for example, see here and here). During the Trump administration, the official estimates of the social cost of carbon were in the range of $1-$7 per metric ton. The Biden administration first reverted to the estimates of the Obama administration, but now the US Environmental Protection Agency has published a social cost estimate of $190 per metric ton of carbon emissions (EPA External Review Draft of “Report on the Social Cost of Greenhouse Gases: Estimates Incorporating Recent Scientific Advances,” September 2022).

These estimates matter for practical policy making. When environmental regulators look at costs and benefits of reducing emissions, they use these numbers. The recent EPA estimates of the social cost of carbon were published as background for proposed new standards for methane emissions. So what is going on behind the scenes here?

Kevin Rennert and Brian C. Prest of Resources for the Future offer a blessedly clear and readable overview in their article, “The US Environmental Protection Agency Introduces a New Social Cost of Carbon for Public Comment” (November 15, 2022). As they discuss, estimates of the social cost of carbon have four building blocks.

First, the estimates are built on projections of socioeconomic factors like future population size, economic growth, and emissions levels. Second, there is a climate system model, which basically estimates how changes in carbon emissions (and other greenhouse gases) affect the climate. Third, there is a “damage function” that translates the changes in climate into economic damages. As Rennert and Prest write:

Damage functions translate changes in global temperatures to dollar-value impacts of climate change on society, such as for human health, agriculture, and sea level rise. To update the damage functions, EPA uses three independent lines of evidence: the RFF-Berkeley GIVE model, the Climate Impact Lab’s Data-driven Spatial Climate Impact Model (DSCIM), and a recent meta-analysis by Peter Howard and Thomas Sterner. EPA averages the results from these three models to produce an SCC estimate for a given range of discount rates. (We’ll elaborate on the discount rate in the next section.) Averaging across damage functions is a reasonable approach when alternative damage functions can account for the overlapping impacts of climate change and are based on independent lines of evidence, both of which largely describe the damage functions that EPA uses.

You can think of the damage function as estimating certain categories of damages: for example, “temperature-driven mortality, agricultural markets, energy expenditures for heating and cooling buildings, and coastal impacts from sea level rise.”

A final crucial step involves choosing a “discount rate,” which is the rate used for comparing costs and benefits that occur at different points in time. If the discount rate were zero percent, then society would place an equal value on a benefit received in the future–even decades or centuries in the future–and a benefit received right now. Thus, if the discount rate were zero percent, we would as a society be willing to incur, say, $100 in costs right now for a benefit of, say, $110 received far off in the distant future. For a variety of economic and value-laden reasons, economists generally don’t think that a discount rate of 0% makes much sense. At a basic value level, providing $100 in benefits or saving a life right now seems more valuable than benefits or saved lives that happen several generations in the future.

But what discount rate is reasonable? It makes a huge difference. Just to illustrate, consider a benefit worth $100 that will be received in 100 years. What is the value of that benefit right now–that is, what costs should we be willing to pay in the present for that future benefit? If the discount rate is zero percent, we should be willing to pay any amount lower than $100 in the present for those $100 in future benefits. If the discount rate is 1% per year, we should be willing to pay about $37 for a benefit of $100 that happens 100 years from now. (The calculation is that if you invested $37 for 100 years at a 1% rate of interest, it would end up equaling $100). If the discount rate is 2%, we should be willing to pay about $14 in the present for a benefit worth $100 that happens 100 years from now.
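The arithmetic behind these figures is ordinary compound discounting; here is a short sketch that reproduces the numbers in the paragraph above (the dollar amounts and horizon are the illustrative ones from the text, not from the EPA model):

```python
def present_value(future_benefit, discount_rate, years):
    """Present value of a benefit received `years` from now,
    discounted at a constant annual rate: FV / (1 + r) ** years."""
    return future_benefit / (1 + discount_rate) ** years

# A $100 benefit received 100 years from now, at various discount rates:
for rate in (0.00, 0.01, 0.02, 0.07):
    print(f"{rate:.0%}: ${present_value(100, rate, 100):.2f}")
# 0%: $100.00
# 1%: $36.97   (about $37, as in the text)
# 2%: $13.80   (about $14)
# 7%: $0.12    (about 11-12 cents, the Trump-era rate discussed below)
```

The same function run in reverse is the “invested $37 for 100 years at 1% interest ends up equaling $100” calculation described in the text.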

Given that the potential costs of climate change extend decades into the future, it ends up making a huge difference what discount rate you use. The Trump administration advocated using an annual discount rate of 7%. A higher discount rate means that future benefits aren’t worth much to us right now. With a discount rate of 7%, a benefit of $100 that is received 100 years in the future would be worth about 11 cents in the present. In addition, the Trump administration argued that the social cost of carbon used by US environmental regulators should only include costs to the United States, not global costs of carbon, which tended to reduce those costs considerably.

The latest estimates of the social cost of carbon from the Environmental Protection Agency have shifted the preferred discount rate from 3% to 2%. This shift will place a higher value on the future, but it still accepts that costs in the more-distant future are worth less than costs in the near-future or in the present.

Bottom line: The current EPA estimate of the social cost of carbon is $190 per metric ton, using a 2% discount rate. But to give a sense of how much the discount rate matters, the cost estimate would be $120/ton with all the same underlying estimates and a discount rate of 2.5%, but $340/ton with all the same estimates and a discount rate of 1.5%.

I’ll close with two thoughts here. One is that you can look at computer code for the EPA model and tinker with the estimates if you wish to do so. Rennert and Prest write: “In a major step forward for transparency, the computer code used for the sensitivity has been built using the open-source Mimi software platform (another output of the SCC Initiative), making the code free and easily accessible to download, replicate, and evaluate.”

The other thought is that if a social cost of carbon at $190 per metric ton was turned into a pollution tax on burning fossil fuels, on the principle that those who impose environmental costs should pay for them, it would (for example) raise gasoline costs by about $1.90 per gallon. That’s a large increase, but it’s not an increase that is outside the bounds of experience. After all, we have just seen a rise in the price of gasoline from about $2/gallon in May 2020 to $5/gallon in June 2022.
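The per-gallon figure is a simple unit conversion, sketched below. The emissions factor is an assumption on my part: a round number of 10 kg of CO2 per gallon of gasoline burned reproduces the ~$1.90 figure, while EPA’s commonly cited factor of roughly 8.9 kg per gallon gives a slightly lower number.

```python
def per_gallon_cost(scc_per_metric_ton, kg_co2_per_gallon):
    """Gasoline price increase implied by a carbon price.

    scc_per_metric_ton: social cost of carbon, in dollars per metric ton of CO2.
    kg_co2_per_gallon: emissions factor (an assumption; see lead-in).
    """
    return scc_per_metric_ton * kg_co2_per_gallon / 1000  # 1,000 kg per metric ton

print(per_gallon_cost(190, 10.0))           # 1.9  (round-number factor)
print(round(per_gallon_cost(190, 8.9), 2))  # 1.69 (EPA-style factor)
```

Either way, the conversion shows why a $190/ton carbon price lands in the range of recent swings in pump prices rather than outside all experience.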

This vision of adjusting to a higher price for fossil fuels is a very economics-minded perspective. Many people and politicians prefer to set lofty goals of zero carbon emissions–and then also to moan and complain every time the price of fossil fuels goes up. While zero carbon emissions has some emotional attractiveness as a slogan, or as an ultimate goal that might arrive in the distant future, the problem for the next few decades is to find practical ways of substantially reducing carbon emissions, while still providing the energy needed for people to travel and heat their homes, for industry to operate, and for the economy as a whole to function and grow. The policy message of the social cost of carbon is that if the economy faced much higher but not crazy-higher prices for fossil fuels on a consistent basis, we would find ways to adapt with changes in energy use and methods of production. In a way, the social cost of carbon captures the magnitude of the efforts that need to be undertaken.

Female Autonomy and Economic Development

Greater empowerment and autonomy is a worthy goal, in and of itself, for all individuals. In the last few decades, a number of development economists have focused on the more specific question of how improvements in the empowerment of women–expanded opportunities for education, jobs, health care, and public participation, along with protection from violence–can boost economic development for a country. Siwan Anderson discusses the connections between these multiple dimensions of empowerment in the Innis Lecture on “Unbundling Female Autonomy,” delivered as part of the Canadian Economics Association meetings in June 2022, and now published in the November 2022 issue of the Canadian Journal of Economics.

Here are a few of the themes that caught my eye. Anderson writes:

Female empowerment is a multi-faceted concept that targets: improved female decision-making power in the household, reduction of violence against women, increased market and political opportunities, equal legal rights and dismantling gender-biased customs and norms. Whilst the multi-faceted nature of female empowerment is appreciated by academics and policy-makers alike, it is not well understood how the various dimensions interact and co-evolve with each other, or with society as a whole.

Perhaps the classic argument in this area is that empowered women invest more in children.

[This] step to arguing that relative female empowerment leads to improved economic development seems to rest on the assumption (and accompanying evidence) that women and men have different preferences. In particular women want to, ceteris paribus, allocate relatively more of household resources to children’s education and health than will men. Because both of these are crucial determinants of human capital formation and human capital formation is at least a proximate cause of economic development, development will be enhanced by factors that improve female autonomy (or a woman’s outside option) relative to their husbands through the channel of increasing their control over the allocation of household resources.

It follows that finding ways to tip the balance within the household to greater autonomy for women, and greater control over household spending, can have substantial payoffs.

The standard model of household decisions as a bargaining process assumes members have full information and are able to perfectly communicate. However, there is significant empirical and experimental evidence to the contrary. The ability (and willingness) to conceal information appears critical to affecting how resources are allocated. Anderson and Baland (2002) found that rotating savings and credit associations (ROSCAs), ubiquitous in the developing world, were used as a way for women to keep savings hidden from their husbands. Women were less willing to access a bank account with an ATM card when it was easy for their husbands to gain access to the card (Schaner 2017). Researchers have found evidence of family pressure and also the capture of grants targeted at women (De Mel et al. 2009, Friedson-Ridenour and Peirotti 2019). Relatedly, keeping money hidden from husbands (in bank accounts) was shown to avoid this to some extent (Dupas and Robinson 2013, Fiala 2018). Moreover, in-kind grants (Fafchamps et al. 2014) and mobile money deposits are less likely to be appropriated (Riley 2020).

In turn, the question becomes what factors lead to greater empowerment for women. Broadly speaking, one can discuss evolutionary and revolutionary factors. For example, technological changes that allow capital to substitute for what were traditionally women’s tasks, or that allow women greater control over birthrates, can act as evolutionary factors. Revolutionary factors might be events like World War II, which changed the opportunities available to women in the US workplace, or movements for no-fault divorce that altered bargaining positions within marriage. But is it possible to enact specific policies that might offer a nudge to greater gender equality? Anderson writes:

One may conjecture that short-term policy interventions would be unlikely to significantly shift strongly embedded societal norms, given that many have persisted for centuries. However, emerging evidence suggests the contrary. For example, reserving seats for female politicians in rural areas of India has helped curtail negative stereotypes about women as local leaders (Beaman et al. 2009). Television programs have been able to alter fertility preferences in multiple settings (Jensen and Oster 2009, La Ferrara et al. 2012). Bursztyn et al. (2020) were able to adjust pre-determined individual Saudi male beliefs regarding the appropriateness of their wives’ labour supply decisions by providing information on actual average male beliefs in their local geographical area. Regular secondary school class discussions, held amongst both boys and girls in India, were able to reshape some female negative attitudes and behaviours (Dhar et al. 2022).

Or in the political sphere:

A set of policies exist that aim to increase female political empowerment. There are 135 countries to date with constitutional, electoral or political party quotas for women. Many developing countries surpass developed ones in this regard. The most direct way to assure female leadership is to reserve political seats for women. Policies that reserve political seats for women, either at the national or sub-national level, are present only in less developed countries, no such policies are imposed in Western industrialized nations. … Rwanda leads the world, with 64% of legislators in national parliament (the lower or single house) being women, followed by Senegal (43%), South Africa (41%), Mozambique (39%), Angola (37%), Tanzania (36%) and Uganda (35%). This is in comparison to other developed countries with significantly lower female representation like Canada (27%) and the United States of America (24%).

Overall, it’s clear that female empowerment is correlated with higher economic development. As Anderson writes: “[T]here is no simple causal link from female empowerment to declining overall poverty. Yet it still remains that gender equality is strongly positively correlated with measures of aggregate economic development (GDP/capita or a poverty head count ratio).” However, causation probably runs in both directions: female empowerment affects economic development, while economic development also affects the extent of female empowerment. Countries develop in different ways, from different starting points. There’s no reason to think that this dual process of female empowerment and economic development will proceed in the same way across countries. Anderson puts it this way:

In Latin America and the Caribbean, substantially rising female labour force participation rates have accompanied fertility decline, female education and the growth of services. The same factors have led to only a moderate increase in female participation in the Middle East and North Africa. And they have led to a decline in South Asia (primarily in India). An interesting hypothesis to explain this variation is the role of social stigma. …

There is no reason, then, to expect that cultural changes in the currently developing world will mimic the paths followed in the West. The heterogeneity in how such norms appear to be changing within the developed world today also suggests that local cultures may persist or change in differing ways under similar economic pressures. Additionally, there are a number of other reasons to be sceptical that the paths followed in the West will be a portend. First, the timing of structural changes is different. Developing countries today experienced expansion of education and growth of the service sector at much lower levels of GDP per capita than when they took off in the West (Jayachandran 2021). Their legal contexts are also markedly different. Today’s developing countries typically inherited the formal legal structures of their former colonists, which tend to be more progressive and favourable to women than the corresponding legal structures that prevailed at comparable levels of development in the West. At the same time, these formal legal structures often coexist in today’s developing countries alongside extremely male-biased forms of customary law. Finally, there does not seem to be a massive shock to married women’s labour supply, comparable to that occasioned by World War II, that could serve as a jolt to gender norms.


Peltzman on Stigler on Regulation

Back in 1971, George Stigler (Nobel ’82) published “The Theory of Economic Regulation.” At that time, and indeed right up to the present, it was common for economists and policymakers to view “regulation” as motivated by a pure and selfless public spirit that searched only for the common good. Stigler instead analyzed regulation in terms of the demand for and supply of regulations. Sometimes the demand for regulation could be characterized as public-spirited, but other times it could be characterized as its own kind of special interest–like when existing firms support regulations that will kneecap their competitors, or when property-owners use regulations as a tool to block anything from high-density housing to an industrial plant to a wind farm. Sometimes the supply of regulation could be characterized as public-spirited, but other times regulations were part of a political process that included backroom deals and political pressures.

Moreover, Stigler pointed out that looking at a regulation when it is first enacted is insufficient. Over time, the industries being regulated will have the strongest incentive to interact with the regulators: with information, through lobbying, even as a source of jobs for those leaving government service. Even if the regulation is originally passed to impose constraints on a certain industry, through a gradual process of “regulatory capture,” current members of the industry may gradually shift the regulations in a way that supports existing firms–at least relative to the costs imposed on consumers or the impediments to potential competitors.

None of this proves that regulations are bad, of course. But it made a convincing case that regulations should not be presumed good, and instead should be analyzed in terms of costs and benefits, including who ultimately pays the costs and who receives the benefits.

I’ve mentioned the intellectual legacy of this article before (for example, see, “Stigler’s Economic Theory of Regulation: The Semicentennial,” April 23, 2021). But the topic is worth revisiting, given that the October 2022 issue of Public Choice has published a symposium, six papers plus an introduction, on “George Stigler’s theory of economic regulation.” I was particularly taken by the essay by Sam Peltzman, first a student and then a colleague of Stigler’s, titled “Stigler’s Theory of Economic Regulation After Fifty Years.” (I quote here from a freely available working paper version of the essay.) Peltzman writes:

I will argue that the enduring impact of Stigler, 1971 is more about the framing of a question than about the specific answer. … [T]he Captured Regulator of 1971 is overstated but highly provocative. But without the provocation would we be here commemorating a fiftieth anniversary?

Peltzman points out that, at the time Stigler was writing the essay, Stigler often pushed the argument beyond the mere possibility that regulation could go astray, treating it instead as a firm rule that regulation must necessarily go astray. Peltzman tells the story this way:

Stigler gradually abandoned the ineffective regulator under the weight of contrary evidence. But it took some doing. I was his Ph.D. student during that transition. My dissertation topic was the effect of federal insurance of bank deposits on entry into commercial banking (effectively you needed a grant of insurance to start a new bank). For a half year or so I worked on my own, learning about the history, assembling data, running regressions and writing it all up for Stigler’s perusal. I could not avoid the conclusion that the regulation had substantially reduced the rate of entry into banking. He had one comment: “This is fine for preliminary work. Come back in another 6 months with the right answer.” Facts are stubborn. I never could get the right answer, and fortunately for me he eventually relented.

It’s also worth noting that at the time Stigler was considering these issues in the 1960s, a lot of government regulation was focused on regulation of prices and conditions of entry–in airlines, trucking, railroads, electricity production, occupational licenses, and other areas. It seems plausible to me that when government is setting prices and conditions of entry, the chances of regulatory capture over time may rise. But the 1970s saw the emergence of a different kind of regulatory structure, focused instead on health, safety, and the environment. Peltzman writes:

The capture theory does seem to fit some prominent cases, such as Stigler’s motivating examples of truck regulation and occupational licensure. However, problems not easily covered by the “as a rule” exception surfaced quickly. … We also have 20/20 hindsight of the proliferation of “social regulation” that was underway when Stigler (1971) appeared. Environmental regulation is perhaps the most prominent example. Others include worker safety, the security of their pensions and consumer product safety. By some measures this regulatory expansion was, and remains, historically unprecedented. Typically social regulation cut across many industries. And it was invariably resisted by those industries. On the other side, deregulation of industries like transportation and securities brokerage surfaced in the late 1970s amidst significant industry resistance. Then more recently we get “reverse capture”, where the industry is created by the regulator – as in renewable energy, biofuels and the like. None of these developments seem contemplated by the capture theory.

However, Peltzman also points out that even regulations which may on balance have positive effects on health or safety can still have anticompetitive tradeoffs. The Food and Drug Administration requirements that new drugs be proven safe and effective impose high financial and administrative costs, which large and established firms can bear far better than small innovators. The Dodd-Frank financial reforms of 2010 sought to make banks safer through a set of rules with high financial and administrative costs, which in turn has contributed to the exit of many smaller banks.

Thus, Peltzman insists on the continuing relevance of Stigler’s broader insights, while offering a less absolutist interpretation. Stigler wrote in the 1971 essay, “… as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit.” Peltzman offers this reformulation:

The distinction I want to pursue is between the creation (the “acquired” part of Stigler’s famous quote) and the output (design and operation) of regulatory bodies. Even casual history suggests that these often respond to different political forces and interest groups. In particular, the industry often – perhaps mainly – resists the establishment of regulation. The affected industries resisted the consumer reforms of the Progressive Era, the labor reforms of the New Deal and the social regulation of the 1970s. But, once confronted with the reality of the regulation, the industry interest usually plays a prominent role in what these agencies do.

Here’s the Table of Contents for the Public Choice symposium, although a library subscription is required to access the articles. However, the Peltzman article is available as a working paper.

Equity vs. Efficiency–and What Other Tradeoffs?

For at least 50 years, economists have been drawing up models that consider tradeoffs between equity and efficiency. A classic example is that a combination of taxation and redistribution of income by government will improve equity, but on the margin, it can also reduce efficiency by blunting incentives to work for both the taxed (because they now receive a lower after-tax reward for working) and the recipients (who will see their redistribution benefits phased out as they increase their earnings).
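The incentive arithmetic in that example can be made concrete with a short sketch in Python. The 15 percent tax rate and 30 percent phase-out rate below are hypothetical numbers chosen purely for illustration, not drawn from any actual program:

```python
def effective_marginal_rate(tax_rate, phaseout_rate):
    """Combined rate faced by a benefit recipient who earns one more dollar:
    the tax collected on that dollar plus the benefit withdrawn because of it."""
    return tax_rate + phaseout_rate

# Hypothetical illustration: a 15% income tax combined with a benefit
# that phases out at 30 cents per additional dollar earned.
rate = effective_marginal_rate(0.15, 0.30)
print(round(rate, 2))  # 0.45 -- the recipient keeps only 55 cents of each extra dollar
```

The point of the sketch is only that a benefit phase-out acts like an extra tax on the margin: even a modest statutory tax rate plus a modest phase-out rate can leave a recipient keeping barely half of each additional dollar earned.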

As generations of classroom economists have admonished their students: The existence of a tradeoff doesn’t mean that something is or isn’t worth doing. It just means that, whatever your choice, the tradeoff should be openly specified and acknowledged. The equity/efficiency tradeoff, in particular, will be influenced by the values that people place on equity and efficiency, and economics as a discipline doesn’t have much to say about what values should be used.

But what about other tradeoffs? Equity is not the only value that might be opposed to a desire for more efficient incentives. Tyler Cowen makes the case in a short essay, “The Dubious Trade-Off That Economists Love to Cite,” subtitled “Many public policies involve choosing between equity and efficiency, but those aren’t the only two principles that deserve consideration” (Washington Post, November 1, 2022). Cowen writes:

I start getting nervous, however, when I see equity given special status. After all, it most often is called “the equity-efficiency trade-off,” not “an equity-efficiency tradeoff,” and it is prominent in mainstream economics textbooks. By simply reiterating a concept, economists are trying to elevate their preferred value over a number of alternatives. They are trying to make economics more pluralistic with respect to values, but in reality they are making it more provincial.

If you poll the American people on their most important values, you will get a diverse set of answers, depending on whom you ask and how the question is worded. Americans will cite values such as individualism, liberty, community, godliness, merit and, yes equity (as they should). Another answer — taking care of their elders, especially if they contributed to the nation in their earlier years — does not always show up in polls, but seems to have a grip on many national policies and people’s minds.

I hear frequently about the equity-efficiency trade-off, but much less about the trade-offs between efficiency and these other values. 

During the pandemic, for example, Cowen points out that a primary conflict was between the value placed on individual choice and the efficiency of policies like vaccinations, masks, or shut-downs. When it comes to taxation and redistribution, the public arguments are often not about efficiency costs, but rather about claims involving liberty and beliefs about being rewarded according to some definition of merit. The question of whether or how to alter programs like Social Security and Medicare often touches on equity and incentives, but debates quickly turn to value-heavy arguments over society’s obligations to the elderly. In discussions about import restrictions, the focus is often on efficiency gains for the broader economy vs. harms to specific communities. And of course, one could add other values here like environmental preservation.

I suppose one can stretch the idea of equity to cover some of these issues, at least in part. But in doing so, you are really just admitting what we all know: equality, fairness, and justice are complicated in ways that a basic distribution of income or wealth doesn’t fully capture. Looking at equity is a start, but the potential tradeoffs with economic efficiency are multidimensional.