Greenpeace Denounces Plastic Recycling

Greenpeace and its philosophy of “non-violent creative action” in the service of environmentalist goals have inspired a wide range of reactions, but pretty much no one views the organization as a sell-out or a pawn for corporate interests. Thus, it’s intriguing that the organization has laid waste to the practicality and benefits of plastics recycling in its recent report, Circular Claims Fall Flat Again (October 24, 2022).

Here’s what Greenpeace has to say (footnotes omitted):

Mechanical and chemical recycling of plastic waste has largely failed and will always fail because plastic waste is: (1) extremely difficult to collect, (2) virtually impossible to sort for recycling, (3) environmentally harmful to reprocess, (4) often made of and contaminated by toxic materials, and (5) not economical to recycle. Paper, cardboard, metal, and glass do not have these problems, which is why they are recycled at much higher rates.

Due to toxicity risks, post-consumer recycled plastic from household waste is not being produced at commercial scale for food-grade uses globally or in the U.S., and likely never will be. While there is limited availability of food-grade PET#1 for beverage bottles only, there are growing toxicity concerns there, too.

As described in a May 2022 OpEd in The Atlantic, “The problem lies not with the concept or process of recycling but with the plastic material itself – it is plastic recycling that does not work.” The high recycling rates of post-consumer paper, cardboard, and metals in the U.S. prove that recycling can be an effective way to reclaim valuable natural material resources. Plastic recycling in particular has failed because the thousands of types of synthetic plastic materials produced are fundamentally not recyclable.

As support for these claims, Greenpeace gathered evidence from 370 “material recovery facilities” across the US in 2020, and then updated the survey this year. In other words, they aren’t looking at how much plastic was collected with a claim that it could be recycled, but at how much is actually recycled and reused.

The failure of the concept of plastic recycling is finally becoming impossible for the companies and industry associations that promote it – and the nongovernmental organizations (NGOs) that they fund for this purpose – to ignore. After three decades and billions of dollars of taxpayer spending, the excuse offered by the American Chemistry Council (ACC) that plastic recycling is still “in its infancy” can now be seen for the delaying tactic that it is.

Corporate plastic pledge performance reporting does not reflect the failure of plastic recycling because it relies on the theoretical possibility of recycling a plastic item, rather than actual plastic waste processing rates. The reported shares of recyclable, reusable, or compostable plastic packaging used by EMF NPE and U.S. Plastics Pact member companies – 65.3% at the global level and 37% in the U.S. – can hardly be taken at face value when credible estimates show that only 9% of plastic was recycled globally in 2019 and only 5–6% of plastic waste was recycled in the U.S. in 2021.

The Greenpeace argument is definitely not that, with some additional effort, plastic recycling will work. The argument is that given the nature of plastics, recycling plastic will never work.

After more than 30 years, it is time to accept that plastic recycling is a failed concept. Unlike with paper or metals, there are two insurmountable barriers that prevent plastic recycling from ever working at scale: toxicity and economics. Plastic cannot be safely recycled from postconsumer household waste back into new food-grade plastic products. The flood of 400 million tons/year of cheap new plastic production kills the business case for large-scale investment in plastic recycling. And the problem lies not with the concept or process of recycling but with the plastic material itself – it is plastic recycling that does not work.

If one accepts the Greenpeace indictment of plastics recycling, the question becomes what to do next. The Greenpeace solution is to phase out all single-use plastics. I’m not confident that the policy approach can or should be that absolute. But a first step to having that conversation would be a broader acceptance that the efforts at recycling plastics have largely failed.

Reflections on Sources of US Energy Consumed

Amidst all the discussions of how to encourage non-carbon sources of energy, it can be useful to step back and look at the basic patterns of US energy consumption. Here, I draw upon the U.S. Energy Information Administration webpage on “U.S. Energy Facts Explained.”

This figure shows US “primary” energy consumption by source. The attentive reader will notice that “electricity” does not appear as a source of energy. The reason, of course, is that electricity is used primarily as a mechanism for transmitting power, and sometimes (via batteries) for storing it; electricity is not a “source” of power itself, but instead needs to be generated from an underlying source.

As the figure shows, total US energy consumption levelled off in the late 1990s. Use of coal is dramatically down in the last two decades, but use of natural gas has risen correspondingly, so that the sum of coal and natural gas in the figure hasn’t changed much over that period. Petroleum as a source of energy is down about 10% in the last two decades. Nuclear has edged up just a bit in terms of total energy produced since 2000. The quantity of energy produced by renewables has doubled, from about 6 to 12 quadrillion BTUs.

The category of renewables, however, includes more than wind and solar. Here’s a breakdown for US energy consumption in 2021:

As the figure shows, about two-fifths of renewables is biomass, including ethanol and wood. About one-fifth is hydroelectric power. The other two-fifths are what a lot of people mean when they refer to renewables: wind, solar, and geothermal. In other words, wind, solar, and geothermal are a little less than 5% of US energy consumption at present.

For those who are mentally relying on solar/wind/geothermal as the primary power sources of the future, these numbers suggest the scale of the challenge. The share of US energy consumption coming from fossil fuels–coal, natural gas, petroleum–was about 85% of the total in 2000, and is now about 79%. The rise in renewables, especially biomass, wind, and solar, explains how this decline in fossil fuels occurred.

But this change has taken two decades. If the US economy is going to consume roughly the same level of energy over time, which is the pattern of the last few decades, then wind/solar/geothermal will need to grow by a multiple of 10 if these sources are to provide half of US energy consumption. A transformation of this scale would require, among other changes, an extraordinary build-out of new power lines from these new sources of energy to where the electricity is needed; an extraordinary rise in mining to provide the materials needed both for solar cells and for the power lines; re-engineering the power grid to be more capable of addressing sources of electricity that can fluctuate; the development of new methods for storage of electricity for when the wind isn’t blowing or the sun isn’t shining; and an extraordinary rise in capabilities to recycle old solar cells and wind turbines when they have reached the end of their cost-effective lifespans.
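That tenfold figure is just share arithmetic; here is a minimal sketch of it, assuming (as the text does) that total US energy consumption stays roughly flat:

```python
# Back-of-the-envelope check of the scale-up needed, using rounded
# shares from the EIA data discussed above.
current_share = 0.05   # wind + solar + geothermal, a bit under 5% of US consumption
target_share = 0.50    # hypothetical goal: half of US energy consumption

# If total consumption is roughly constant, the required growth
# multiple is simply the ratio of the two shares.
multiple = target_share / current_share
print(multiple)  # -> 10.0
```

If total consumption were instead to grow, the required multiple would be correspondingly larger.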

My own sense is that new technologies will be needed in all of these areas, and others, including the use of hydrogen for energy storage, nuclear as an energy source, probably methods of carbon capture and storage, and methods of conserving on existing energy use.

Here’s a final figure to show some of the complexity of the energy problem. The left-hand panel shows the primary sources of energy. The right-hand panel shows the end-use sectors for energy. Sometimes the source of energy is used directly by a certain sector: for example, the top line shows that 69% of petroleum is used in the transportation sector, where it represents 90% of all the energy used.

In other cases, the primary energy flows through the electricity sector. For example, 37% of natural gas energy goes to electricity, as does 59% of renewable energy, 90% of coal energy, and 100% of nuclear energy. But of the energy flowing into the electricity system, about two-thirds is lost along the way, mostly as heat in the conversion to electricity, along with transmission losses, and only one-third reaches end-use sectors as delivered electricity. To put it another way, when we build renewable energy sources to feed the electrical grid, only about one-third of the primary energy they supply finds its way to end-users.
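Combining the shares just quoted gives a stylized sense of how little renewable primary energy ultimately arrives as delivered electricity (the one-third delivered fraction is a rounding of the figure's numbers, not a precise engineering estimate):

```python
# Stylized calculation from the shares quoted above (rounded).
renewables_to_grid = 0.59   # share of renewable primary energy sent to the electricity sector
delivered_fraction = 1 / 3  # rough share of grid input that reaches end-use sectors

# Of each unit of renewable primary energy, roughly this much arrives
# at end users as delivered electricity.
delivered = renewables_to_grid * delivered_fraction
print(round(delivered, 2))  # -> 0.2
```

So on these rough numbers, only about a fifth of renewable primary energy reaches end users through the grid, which is why rewiring the connections between sources and users matters so much.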

The challenge of reducing reliance on fossil fuels isn’t just on the left-hand side of this figure: that is, raising output of renewables in a way that can offset use of fossil fuels. It’s about rethinking and rewiring all the connections between primary energy and end-users in this figure.

Jeremy Siegel on Stocks and the Fed

The first edition of Jeremy Siegel’s highly influential book, Stocks for the Long Run, came out almost 30 years ago in 1994. Now, the sixth edition has been published, and Jeremy Schwartz interviews Siegel (Knowledge at Wharton, “Why Stocks Will Remain Strong in the Long Run,” October 25, 2022). Given that I am becoming used to wincing each time I open up a statement from my retirement account and see how falling stock prices have affected my savings, the interview is at least a productive distraction. Here are some comments from Siegel that caught my eye:

On the long-run returns to stock market investing over time:

[T]he first edition, which came out in May 1994, used data through the end of 1992. The long-term real return (net of inflation, from investing in stocks) from 1802 onward was 6.7% in real terms. I updated it till June of this year and it’s 6.7% real — exactly the same as the last 30 years, despite the financial crisis, COVID, and so forth. It’s remarkably durable. We also know returns from investing in stocks are remarkably volatile in the short run. But the durability of the equity premium (or the excess return from stocks over a risk-free rate like a Treasury bond) is quite remarkable.
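It's worth making concrete what compounding at Siegel's 6.7% real rate implies; a quick sketch (the 30-year horizon is my choice, not Siegel's):

```python
# What a 6.7% real annual return implies over 30 years of compounding.
real_return = 0.067
years = 30

# Real (inflation-adjusted) wealth multiple over the horizon.
multiple = (1 + real_return) ** years
print(round(multiple, 1))  # -> 7.0
```

In other words, at that rate an investment roughly septuples in purchasing power over a working lifetime's 30 years, which is the sense in which short-run volatility can coexist with remarkable long-run returns.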

On the performance of the Federal Reserve in the last two years:

I’ve been calling Jay Powell’s monetary policy the third worst in the 110-year history of the Fed. I may actually raise it to the second worst, but we’ll see what happens. The worst, of course, is the Great Depression, where they let all the banks fail when they were actually formed to prevent exactly that from happening. When the pandemic hit and money supply exploded, I said this is going to cause inflation. You’ve never seen that 25% M2 money supply increase — 1870 onward, there has never been a money supply increase that fast.

I said this is just absolutely crazy. This is going to produce a tremendous amount of inflation. And it did.

I was definitely hawkish [early this year], saying there would be eight increases [of 0.25% each] — that’s 2%. But now they’re talking about 4% by year-end. That’s 16 increases. In the September 2021 meeting … eight of the 16 FOMC (Federal Open Market Committee) members said there was no need to increase rates whatsoever this year. This was when inflation was already heating up; speculation was rampant in all asset markets. Five members said we will need an increase of 125 basis points. And three — the most hawkish — ventured that we might need an increase of 50 basis points by December this year.

Could you be more wrong than that? It’s impossible to be more wrong than that. It amuses me when people [predict the Fed’s actions] as if they have any concept of what they’re going to do in 2023. Clearly, in 2021, they had zero concept of what they were going to do this year.

Loopholes in the Global Corporate Minimum Tax

There has been a high-profile effort in the last couple of years for an international treaty that would impose a minimum tax on corporate profits across countries. The intuitive appeal is straightforward: corporations can use various methods–say, where they locate their headquarters or how they finance the firm–so that profits in an accounting sense happen in a place with low or zero corporate taxes. A minimum corporate tax across countries wouldn’t eliminate this incentive, but perhaps it could ameliorate it?

As I have observed before, this intuitive appeal is quickly muddled by the realities of global corporate taxation. For example, should the profits of a multinational firm be allocated across countries by where the production facilities of the firm are based, by where the sales occur, by the legal residence of the firm, by the “source” of where the profits are generated through research and development or intellectual property–or by some overall formula that brings all these factors into the picture? International corporate taxation is messy.

The underlying issue, of course, is that governments around the world want to attract productive, job-generating companies. Even if an international agreement could be signed to prevent governments from attracting firms by offering a lower corporate tax rate, they can use other kinds of subsidies to attract firms. Gary Hufbauer mentions some possibilities in “The global minimum corporate tax will not end forces that drive tax competition” (Peterson Institute for International Economics, October 25, 2022).

Hufbauer points out that even as President Biden’s administration participates in international talks for a global minimum tax, a number of its legislative successes would allow companies to pay less than the minimum. The Creating Helpful Incentives to Produce Semiconductors Act of 2022, known as the CHIPS Act? “This law will funnel US$76 billion in tax credits and grants to major firms producing semiconductors in the United States … In fact, by some estimates, the Biden administration’s three big accomplishments—the infrastructure law as well as the CHIPS and IRA laws—could funnel hundreds of billions of dollars in subsidies and tax incentives that could benefit large corporations, enabling them to lower the tax liabilities that would be imposed under the global minimum.” Indeed, the Inflation Reduction Act (IRA) explicitly says that the semiconductor firms receiving assistance from CHIPS can pay lower tax rates than the US corporate minimum.

This isn’t just a US issue, of course. Hufbauer writes:

In today’s highly competitive global economy, public officials are challenged not only to raise tax revenues but also to save jobs, create jobs, advance technology, or deliver essential services, by deploying government incentives. Officials are not always content to let market forces prevail. The result is a mixture of trade protection, subsidies, tax relief, and in extreme cases, state-owned enterprises, depending on the country and its politics. If overt tax competition is ruled out, some officials will likely turn to other means to help favored corporations.  

It is only when you read the fine print in the global minimum tax that you see that it gives an easy pass to these alternatives. So-called Qualified Refundable Tax Credits—credits payable within four years of the designated activity—are not deducted when calculating the tax paid by a business firm. Under International Financial Reporting Standards (IFRS) accepted by the Organization for Economic Cooperation and Development (OECD), subsidies can be allocated against the cost of an acquired asset, and are thereby only indirectly subject to tax over a period of years as the reduced cost of the asset is depreciated or amortized.

Of course, countries watch how other countries treat large corporations. For example, the US subsidizes semiconductor makers because other countries do so, and other countries subsidize semiconductor makers because the US does so. Hufbauer writes:

China, Japan, South Korea, and Taiwan have long subsidized semiconductor fabrication plants (fabs). According to data published by the Boston Consulting Group and the Semiconductor Industry Association, subsidies account for 15 percent of the cost of fab operations in Japan, up to 30 percent in Taiwan and South Korea, and up to 40 percent in China. Again, if a global minimum tax had existed in 2000, it would have made no difference to Asian fab incentives. Now that the US federal government has entered the fab subsidy race, so have Europe, India, and Mexico. Moreover, the CHIPS Act extends its application to two foreign semiconductor giants, Samsung and Taiwan Semiconductor Manufacturing Corporation (TSMC), and both have announced huge investments in US fabs.

Just to be clear, the existing treaty talks that focus on a minimum global corporate tax rate would have no effect on any of these other ways of subsidizing firms.

Eight Billion and Counting

Global population will surpass 8 billion in just a few weeks on November 15, 2022, according to projections from the United Nations in its World Population Prospects 2022. I’m perhaps less concerned about global population growth than some commenters, because predictions about a population apocalypse have been around for a long time without coming true. But population changes do reflect underlying changes in life expectancy and birthrates in ways that shift age distributions and family patterns. In addition, shifts in where the global population lives will alter the shape of international politics in future decades.

Here’s the basic population pattern from the UN demographers from 1950 to 2050. Total population (blue line) is rising. It’s projected to top out at about 10.4 billion in the 2080s. The growth rate of the population (yellow line) has been falling to less than 1% annually, and heading lower.

The UN describes the underlying patterns of life expectancy and birthrates this way:

Population growth is caused in part by declining levels of mortality, as reflected in increased levels of life expectancy at birth. Globally, life expectancy reached 72.8 years in 2019, an increase of almost 9 years since 1990. Further reductions in mortality are projected to result in an average longevity of around 77.2 years globally in 2050. … In 2021, the average fertility of the world’s population stood at 2.3 births per woman over a lifetime, having fallen from about 5 births per woman in 1950. Global fertility is projected to decline further to 2.1 births per woman by 2050. … Given that most population increase until 2050 will be driven by the momentum of past growth, further actions by Governments aimed at reducing fertility would do little to slow the pace of growth between now and midcentury, beyond the gradual slowdown indicated by the projections presented here.

The increase in population is surely a matter for continued close attention, for economic, environmental and political reasons. But there have been prominent predictions of global overpopulation leading to mass famine for centuries now. Yes, predictions that have been wrong for centuries could still come true in the future. But before jumping to such conclusions, it’s worth taking a moment to acknowledge that the past predictions of overpopulation leading to a sharp degradation of living standards have been wrong, and to think about why.

For economists, the classic predictions of overpopulation are those of Thomas Robert Malthus in his 1798 Essay on the Principle of Population. Malthus argued that the food supply grows in a linear way, while population grows in a geometric way. Thus, with population growing at a certain percentage rate each year, it would at some point spike upward beyond the food supply. Malthus didn’t provide a numerical estimate of the timing, but the general sense was that this shift was perhaps a few decades in the future. Global population was about 800 million when Malthus was writing, so the world population has grown 10-fold since then. The missing ingredient in Malthus’s reasoning was that the growth of the food supply, and of economic output in general, doesn’t need to be linear; as a result of technological change, it can also be geometric. There may be a race between the growth rates of population and food supply at certain times and places, but it is not a race that food supply is fated to lose.
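Malthus's core mechanism, geometric population growth against linear food growth, can be illustrated with entirely invented parameters; Malthus gave no such numbers, and the crossing point depends on the values chosen, which is part of why his timing was never pinned down:

```python
# Illustrative only: geometric population growth vs. linear food growth.
# All parameters are invented for the sketch; Malthus supplied none.
population = 100.0   # index value in year 0
food = 150.0         # index value in year 0 (food initially ample)
pop_growth = 0.03    # 3% per year, geometric growth
food_growth = 3.0    # fixed increment per year, linear growth

year = 0
while population <= food:
    population *= 1 + pop_growth
    food += food_growth
    year += 1

# First year in which population overtakes food under these assumptions.
print(year)  # -> 30
```

The fix Malthus missed is equally easy to see in this framing: make `food` grow by a percentage rather than a fixed increment, and the crossing may never happen.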

A more recent example is Paul Ehrlich’s 1968 best-seller The Population Bomb, which opened with the words: “The battle to feed all of humanity is over,” and predicted that hundreds of millions of people would starve to death in the 1970s and that the world death rate would rise substantially. Instead, it’s life expectancy that has risen substantially. And according to the World Bank, about 40% of the world population lived below the international poverty line of $2.15 per day in 1985, but that share had fallen to 8.4% by 2019. Like Malthus more than 150 years earlier, Ehrlich failed to appreciate the potential of technology, like the Green Revolution in agricultural technology, to expand food output.

Thus, while I know that higher global population will raise difficult issues, the historical record strongly suggests that cries of looming catastrophe have been overdone. It also suggests that the ultimate answers may involve developing new technologies.

The growth of global population is not evenly distributed around the world. Indeed, it will shake up some long-term patterns. Perhaps most notably, the UN predicts that India will become the world’s most populous country, outstripping China, next year. Here are some shifts in the world’s most-populous countries over time.

The rise in the population rankings of Nigeria, Ethiopia, and the Democratic Republic of the Congo by 2050 is part of a regional shift: Africa will make up a larger share of global population. We are used to a world where the regions of Asia (light blue and dark blue lines) are the largest by population. But population growth in those regions has slowed. By about 2050, sub-Saharan Africa will have almost caught up in total regional population. Thus, the issues of economic development for that region, and the possibility of out-migration from that region if development doesn’t take hold, are likely to loom especially large in the next few decades.

Marijuana Taxes: How to Do It?

Nineteen states have enacted marijuana taxes, although five of them (Connecticut, New York, Rhode Island, Vermont, and Virginia) have not yet actually collected any revenue from them. But there is no common model. Some states impose the tax as a percentage of the purchase price; some base it on the weight of the product sold; and some base it on the potency of the product. Some use more than one of these approaches. Richard Auxier and Nikhita Airi discuss what’s happening and look at the tradeoffs in “The Pros and Cons of Cannabis Taxes” (Tax Policy Center, September 28, 2022).

Here’s the selection of state taxes used:

Given the different kinds of taxes used, and the underlying differences in ordinary sales taxes, comparing marijuana taxes across states isn’t simple. But based on assumptions about a standard price per ounce, weight, and potency, they find that the total state and local tax burden on marijuana, including both marijuana-specific taxes and general sales taxes, is typically in the range of 20-40%. Alaska, Colorado, Nevada and Washington all collect from 1.2% to 1.7% of state tax revenue from marijuana excise taxes; indeed, “[a]mong all 11 states that collected cannabis tax revenue for the entire 2022 fiscal year, eight collected more revenue from cannabis taxes than alcohol taxes, while Colorado, Nevada, and Washington collected more from cannabis taxes than cigarette taxes. … Colorado and Washington both collected more from state cannabis taxes than state alcohol and cigarette taxes in fiscal year 2022.”
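To see how a combined burden in the 20-40% range can arise, here is a hypothetical purchase with invented (but roughly representative) rates; actual rates and tax bases vary by state:

```python
# Hypothetical example with invented rates; actual state rates vary.
price = 100.00            # pre-tax retail price in dollars
excise_rate = 0.15        # cannabis-specific excise tax (e.g., a 15% retail excise)
sales_tax_rate = 0.08     # combined state and local general sales tax

excise = price * excise_rate
# Some states apply the general sales tax to the excise-inclusive price.
sales_tax = (price + excise) * sales_tax_rate
total_tax = excise + sales_tax

# Combined tax burden as a share of the pre-tax price.
print(round(total_tax / price, 3))  # -> 0.242
```

Stacking a modest excise on top of an ordinary sales tax, with the sales tax applied to the excise-inclusive price, is enough to land in the lower end of the 20-40% range.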

A sales tax approach has the advantage of simplicity, especially if it can just be combined with an existing state-level sales tax, similarly to the way that some places impose additional sales taxes on hotels or car rentals. A downside is that when marijuana is legalized, the price often starts fairly high and then declines over time, so a sales tax approach may generate declining revenue over time. Of course, a state could also adjust its marijuana tax rates over time.

For examples of a weight-based tax, “Alaska levies a $50-per-ounce tax on flower and $25-per-ounce tax on leaves while Maine levies roughly a $20-per-ounce tax on flower and a $6-per-ounce tax on leaves,” while the New Jersey weight-based tax is the same for all parts of the plant. With a weight-based tax, state tax revenues don’t shift with the price of marijuana, but only with the quantity. In addition, it creates a record of the quantity being produced, which can help in checking that marijuana producers are not diverting part of their production to the illegal and untaxed market. On the other side, a weight-based tax requires a new bureaucracy to administer it, which is the main reason that California (and other states) repealed their original weight-based taxes and went with the sales tax approach instead.
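One consequence of a weight-based tax, using Alaska's $50-per-ounce flower rate from the quote above, is that the effective tax rate rises as market prices fall (the two market prices here are hypothetical, chosen to mimic a post-legalization price decline):

```python
# Alaska's $50-per-ounce flower tax is from the text; the two market
# prices are hypothetical, chosen to show how the effective rate moves.
tax_per_ounce = 50.0

for price_per_ounce in (300.0, 150.0):  # hypothetical early vs. later market price
    effective_rate = tax_per_ounce / price_per_ounce
    print(price_per_ounce, round(effective_rate, 3))
# -> 300.0 0.167
# -> 150.0 0.333
```

So where a sales tax sees its revenue per sale shrink as prices fall, a weight-based tax holds revenue steady but becomes a steadily heavier burden relative to the legal product's price.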

A potency-based tax is similar in intention to alcohol taxes that impose lower rates on beer and higher rates on whiskey. A main concern here is that a tax based on price or weight creates some incentive to choose whatever product provides the highest dose per dollar or per ounce, while a potency-based tax rewards choosing a lower potency. In addition, a potency-based tax means that relatively low-potency legal marijuana would not be at a tax disadvantage vs. low-potency illegal marijuana, which would help to put a damper on the untaxed illegal market. A potency-based tax does require a new bureaucracy to monitor the sampling of products, lab processes, and retention of records. But these steps could be simplified in a future where marijuana potency was more clearly labelled.

Ultimately, the main problem with choosing between these methods is that different policy goals are involved. One goal is raising tax revenue, which will tend to imply that the state desires high total sales. Another goal is offsetting the cost of externalities from legalizing marijuana use, like social costs of driving under the influence, which involve discouraging overuse in certain situations. Yet another goal is discouraging the use of extremely potent products, which is also aimed at both social harms and potential harms to the user.

The discussion above has focused on taxation of marijuana for recreational use. The authors also point out that of the 37 states that have legalized marijuana for medicinal uses, 10 of them have imposed specific taxes on this particular medicine. Indeed, “both medical and recreational cannabis are subject to the same excise tax in California (15 percent retail excise tax), Illinois (7 percent cannabis cultivation privilege tax), and Nevada (15 percent weight-based tax).” Of course, when medical and recreational uses of marijuana have the same tax, it implies that the costs and benefits of such use are the same, however they are labelled.



From an Unapologetic Safety Regulator

Consumers may want more safety from the products they buy than unfettered market competition provides. Sure, market forces allow certain firms to build up a reputation for safety (or lack of it) over time. Moreover, lawsuits can punish firms for unsafe products. But these forces operate imperfectly. After all, an “unsafe” product may be one that poses higher but not immediate risks, like when an electrical appliance is more likely to start a home fire and the stuffing in your furniture is more ignitable than you would have preferred. Also, the idea that reputation and lawsuits will provide an adequate level of safety operates by first letting a sufficient number of accidents happen so that a pattern is broadly recognized and publicized. It seems possible that a broad-scale effort at collecting information and looking for patterns might, at least in some cases, accelerate the move to safer products.

The Consumer Product Safety Commission came into existence in 1972, and by 1973, Robert S. Adler was a special assistant to one of the original commissioners. Adler’s career moved between academia (research and teaching on business law at the University of North Carolina) and government (among other stops, counsel to the Subcommittee on Health and the Environment of the Committee on Energy and Commerce in the U.S. House of Representatives). But he eventually came full circle back to the CPSC, where he was a commissioner from 2009-2021 (and acting chair the last two years). As practitioner, student, and teacher of business regulation, he has walked the walk. He offers some lessons about his experiences in “Reflections of an Unapologetic Safety Regulator” (Regulatory Review, October 17, 2022).

We’ve had the CPSC for 50 years now, and so we can observe some actual safety trends. Adler writes:

Similarly, in its almost 50 years of operation, the CPSC has seen substantial declines in death and injury in the face of a growing population: 43 percent reduction in residential fires, 80 percent reduction in crib deaths, 88 percent decline in baby walker injuries, 80 percent reduction in child poisonings, 35 percent decline in bicycle injury rates, 55 percent decline in injuries from in-ground swimming pools, and the virtual elimination of child suffocations in refrigerators and fatalities from garage doors.

It is of course theoretically possible that these sorts of declines would have happened just as quickly, or faster, without a CPSC–under pressure of reputation effects and lawsuits. This leads to what Adler calls the Great Safety Paradox:

Paradoxically, the more successful regulators are in protecting the public, the less anyone notices. This paradox occurs because well-crafted safety rules do not raise prices or interfere with products’ utility. In such cases, no one notices the improvement in safety. Most parents do not realize that the cribs they place their infants in no longer permit them to slip between the slats and strangle. Nor do they understand how much safer and less lead-laden their children’s toys are. Similarly, most consumers will never recognize that their children no longer face being crushed by a garage door that unexpectedly closes on them or that infants do not suffocate in refrigerators because the doors can now be easily opened from within. Numerous government safety rules operate in a similar fashion, with life-saving benefits but little public recognition.

What about the costs of such regulations? As Adler emphasizes, the costs of a rule that requires the slats of infant cribs to be closer together are pretty minimal. The costs of people being injured by unsafe products are quite real. Adler:

When health and safety agencies write a safety rule, they do so to eliminate or reduce deaths and injuries that consumers suffer in product-related accidents. The CPSC estimates that roughly 31,000 people die and 34 million people suffer product-related injuries every year. These deaths and injuries impose significant costs on the economy—roughly one trillion dollars annually. They do so first as medical costs and lost wages, then as higher premiums for health insurance—or higher taxes to pay for the uninsured. Moreover, product-related tragedies almost always result in a loss of economic productivity of the victims, not to mention the pain and suffering they experience. Accordingly, the argument that regulations necessarily impose new costs on society is not persuasive. The costs in the form of deaths and injuries are already there, and often they impose as much of a drag on the economy as any safety rule.

What about alternatives, like recalls of unsafe products? Recalls can work to some extent, but a common pattern is that a relatively small share of products are returned to the manufacturer–and this typically happens only after the CPSC is involved. What about education about safe use? This can sometimes be effective over time, as in the case of government anti-smoking announcements and warning requirements. But public safety campaigns often cost a lot, for at best modest responses:

As former CPSC Commissioner R. David Pittle once said, “it is far easier to redesign products than it is to redesign consumers.” Regulators have undertaken education campaigns intending to produce substantial changes in consumer behavior with limited success. …

In the early 1980s, for example, NHTSA undertook a massive multiyear national campaign to encourage consumers to wear seatbelts. After spending millions of dollars on the campaign, the agency revealed that the rate of seatbelt use rose only from about 11 percent to 14 percent—a disappointing result however one looks at it. The seatbelt usage rate did not significantly increase until the U.S. Department of Transportation pressured the states to enact mandatory seatbelt laws.

What about industry getting together to set standards? For example, the companies that make portable generators recently got together to set voluntary standards for mitigating the risk of carbon monoxide poisoning. The CPSC had no big quarrel with the standards themselves–except that the same companies that had promulgated the standards were not actually living up to them, and 500 or so people each year are dying as a result. Industry participation in making safer products can be truly useful, but without a government agency to back them up, industry compliance with voluntary standards may not be high.

Adler is clearly a regulation fan-boy (not that there’s anything wrong with that!), and he probably has a tendency to overstate benefits and understate costs. It’s also worth noting that requiring safer design of consumer products is a particular regulatory task, with different types of tradeoffs than other prominent regulators like the Environmental Protection Agency or the Food and Drug Administration. I know regulators can overreach on safety issues, or become focused on overly costly approaches. But markets can also underreach on consumer safety issues. The real-world answer would seem to involve pushing-and-pulling between the two, with the arguments conducted quantitatively and out in the open.

The White-Black Wealth Ratio Since 1870

Wealth represents the accumulation of assets during a lifetime, and thus wealth gaps are always larger than income gaps. Nonetheless, the wealth gaps between black and white Americans are remarkable. Ellora Derenoncourt, Chi Hyun Kim, Moritz Kuhn, and Moritz Schularick have done a deep dive into the historical record to develop some estimates in “Wealth of Two Nations: The U.S. Racial Wealth Gap, 1860-2020” (Federal Reserve Bank of Minneapolis, Opportunity & Inclusive Growth Institute Working Paper 59, June 10, 2022). There’s also a readable overview of the main findings by Lisa Camner McKay, “How the racial wealth gap has evolved—and why it persists: New dataset identifies the causes of today’s wealth gap” (October 3, 2022).

Here’s a summary figure. The vertical axis measures the ratio of average white to average black per capita wealth: that is, back in 1860, the average white person had more than 50 times the wealth of the average black person, but now the multiple is more like six times.

Clearly, the drop in the white-to-black wealth ratio was fastest back in the 19th century. About one-quarter of the convergence can be explained by the white slaveholders’ loss of slaves as part of their “wealth,” but the rest of the convergence happened because the wealth of black Americans grew relatively more rapidly than that of whites.
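The ratio arithmetic behind this decomposition is easy to sketch. All of the figures below are made-up round numbers chosen only to illustrate the mechanics, not the paper's actual per capita wealth data:

```python
# Illustrative sketch of the white-to-black wealth-ratio arithmetic.
# All dollar figures are hypothetical round numbers, not the paper's data.

def wealth_ratio(white_per_capita, black_per_capita):
    """White-to-black per capita wealth ratio."""
    return white_per_capita / black_per_capita

# 1860: enslaved people were counted as "wealth" of white slaveholders.
white_1860 = 2800.0   # hypothetical per capita white wealth, incl. slave "wealth"
slave_wealth = 700.0  # hypothetical per capita component from slaveholding
black_1860 = 50.0     # hypothetical per capita black wealth

ratio_counting_slaves = wealth_ratio(white_1860, black_1860)
ratio_after_emancipation = wealth_ratio(white_1860 - slave_wealth, black_1860)

print(f"ratio counting slave 'wealth':    {ratio_counting_slaves:.0f}x")   # 56x
print(f"ratio once that wealth is erased: {ratio_after_emancipation:.0f}x") # 42x
```

The point of the sketch is that emancipation mechanically lowers the ratio by shrinking the numerator; any convergence beyond that must come from the denominator, black wealth, growing faster.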

Then the white-to-black ratio levels out from 1900 up to about 1930, during a time of legalized discrimination and segregation for black Americans. There is some move toward greater equality of wealth after World War II, and an additional move after the passage of the Civil Rights Act of 1964. But there has been a move toward greater inequality since about 1980, which seems mainly due to the fact that those who already had the resources to own housing or to have stock market investments have done especially well since then, while those who did not already own such assets had no way to benefit from the capital gains that have occurred.

As Camner McKay summarizes the work:

The role of capital gains is particularly important here. The high rate of return to capital holdings over the last 40 years—economic parlance for “stocks have really gone up a lot”—is a leading cause of the wealth dispersion in the United States today. According to analysis by economist Emmanuel Saez and others, wealth has become significantly more concentrated during this period: In 1980, the richest 0.1 percent of Americans—about 160,000 households—owned 7.7 percent of national wealth. In 2020, they owned 18.5 percent. “Given that there are so few Black households at the top of the wealth distribution,” Derenoncourt and co-authors write, “faster growth in wealth at the top will lead to further increases in racial wealth inequality.”

Here’s another way to slice the data, comparing the black share of the US population to the black share of total wealth.

One can of course raise a bunch of questions about the details, which in turn are addressed in the working paper. For example, where is the data from?

We do this by digitizing 50 years of data on Black wealth, from the 1860s to the 1910s, from southern state tax reports and combining this with information from the complete-count digitized censuses of 1860 and 1870. We extend this time series through the mid-20th century using historical estimates of total Black and national wealth, verified using the census of agriculture and population and household survey data from the 1930s. Finally, we draw on newly compiled data from historical and modern waves of the Survey of Consumer Finances to complete our coverage from 1949 to 2019 …

And yes, the data is publicly available at

There’s a lot that can be said about all this, but I’ll limit myself to the obvious: Four decades–call it two generations–of no progress on the white-to-black wealth ratio is a long, long time.

Africa: Too Many Currencies?

It is commonplace to observe that the enormous US domestic market benefits from having a single currency, rather than, say, 50 state-level currencies. Indeed, the case for a single currency across a broad market was compelling enough to persuade 19 of the 27 countries in the European Union to trade in their historically separate currencies for the euro. In contrast, the nations of Africa have 42 separate currencies.

There is a newly founded African Continental Free Trade Area, which seeks to reduce barriers to trade and investment across the countries of Africa. It could offer a real boost to productivity and growth across Africa: a June 2022 World Bank study estimates that it could “bring income gains of 9 percent by 2035 and reduce extreme poverty by 50 million.” But for trade to work, it has to overcome the problems of 42 currencies. Chris Wellisz provides an overview in “Freeing Foreign Exchange in Africa” (Finance & Development, September 2022). He writes:

Making payments from one African country to another isn’t easy. Just ask Nana Yaw Owusu Banahene, who lives in Ghana and recently paid a lawyer in nearby Nigeria for his services. “It took two weeks for the guy to receive the money,” Owusu Banahene says. The cost of the $100 transaction? Almost $40. “Using the banking system is a very difficult process,” he says.

His experience is a small example of a much bigger problem for Africa’s economic development—the expense and difficulty of making payments across borders. It is one reason trade among Africa’s 55 countries amounts to only about 15 percent of their total imports and exports. By contrast, an estimated 60 percent of Asian trade takes place within the continent. In the European Union, the proportion is roughly 70 percent.

What are the options here? In theory, it would be possible for countries across Africa to unite with a single currency of their own. In practice, this seems pretty unlikely. At present, there are 14 countries in Africa that use the “CFA franc” as their currency: six in central Africa (Cameroon, the Central African Republic, Chad, the Congo, Equatorial Guinea and Gabon) and eight in west Africa (Benin, Burkina Faso, Côte d’Ivoire, Guinea Bissau, Mali, Niger, Senegal and Togo). Indeed, there are technically two different CFA francs, one for each of these regions, but their exchange rate in terms of euros is always the same. Together, these countries comprise about one-eighth of Africa’s GDP.

Current perceptions of the CFA franc are, at best, only partially favorable. It has provided monetary stability, but at times the exchange rate value of the currency has been so high that it strangled exports from these countries. It’s also a legacy of colonialism by France. The current plan seems to be that the west African version of the CFA franc will be phased out in favor of a shared currency called the “eco,” which may be more widely used across other nations of west Africa. But the potential transition is scheduled for a few years away, and it’s unclear (at least to me), whether the countries using the central African version of the CFA franc will join in. There’s a lot of talk about “taking back control of the currency,” but the current proposals for the “eco” would continue to have a fixed exchange rate with the euro.

In short, the existing currency unions in Africa are being sharply questioned and seem to be in transition. An even broader currency union isn’t on the table. And frankly, it’s not obvious that a broader common currency for Africa is a good idea at this moment. A shared currency across a geographic area works best when the economy of that area is already somewhat united by flows of goods and services, finances and people, and by shared government programs. Obviously, the question of whether, say, Greece should share a currency with Germany has posed real problems for the euro.

So the current plan, as Wellisz describes it, is to create a Pan-African Payment and Settlement System (PAPSS):

The system aims to link African central banks, commercial banks, and fintechs into a network that would enable quick and inexpensive transactions among any of the continent’s 42 currencies. … PAPSS aims to solve such problems by settling transactions in local African currencies, obviating the need to convert them into dollars or euros before swapping them for another African currency. In essence, PAPSS would eliminate costly overseas intermediaries. The system aims to complete transactions in less than two minutes at a low though unspecified cost.
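A toy calculation shows why cutting out the dollar or euro leg matters. The exchange rates, spread, and amounts below are purely illustrative assumptions (not PAPSS figures or real quotes); the logic is just that routing through a vehicle currency pays the conversion cost twice:

```python
# Why routing a payment through a vehicle currency (USD) is costly.
# All rates, spreads, and amounts are illustrative assumptions.

def convert(amount, rate, spread):
    """Convert at the quoted rate, losing a `spread` fraction to fees."""
    return amount * rate * (1 - spread)

# Hypothetical payment: 100 Ghanaian cedi (GHS) to a Nigerian naira (NGN) account.
ghs_amount = 100.0
usd_per_ghs = 0.10    # illustrative quote
ngn_per_usd = 450.0   # illustrative quote
ngn_per_ghs = 45.0    # implied direct cross rate
spread = 0.05         # 5% lost on each conversion leg

# Route 1: GHS -> USD -> NGN, paying the spread twice.
usd = convert(ghs_amount, usd_per_ghs, spread)
ngn_via_usd = convert(usd, ngn_per_usd, spread)

# Route 2: direct GHS -> NGN settlement, paying the spread once.
ngn_direct = convert(ghs_amount, ngn_per_ghs, spread)

print(f"via USD: {ngn_via_usd:.2f} NGN")  # 4061.25 NGN
print(f"direct:  {ngn_direct:.2f} NGN")   # 4275.00 NGN
```

In this stylized example the direct local-currency settlement delivers about 5 percent more to the recipient, which is the kind of intermediary cost PAPSS is meant to eliminate.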

The careful reader will note that this description makes heavy use of “aims to.” PAPSS was apparently formally launched in January 2022, but had not cleared any commercial transactions through this summer. The success of Africa’s efforts to promote trade across the continent may well depend on whether PAPSS or a similar arrangement can succeed.

For broader discussion of issues facing the economies of nations across Africa, a useful starting point is the five-paper symposium in the Winter 2022 issue of the Journal of Economic Perspectives (where I work as Managing Editor). As always, the articles are all freely available online: