Chesterton: “The Old Man is Always Wrong, and the Young People are Always Wrong about What is Wrong With Him”

Why are young people so often protesting against the conditions they have inherited from the older generation? G.K. Chesterton offered a hypothesis in his “Our Note Book” column for the Illustrated London News (June 3, 1922): “[T]he old man is always wrong, and the young people are always wrong about what is wrong with him. The practical form it takes is this: that, while the old man may stand by some stupid custom, the young man always attacks it with some theory that turns out to be equally stupid.”

Moreover, Chesterton argues, the young protesters often in practice turn out to be less focused on getting rid of the previous evil than they are on force-feeding their new theory to the older generation. Chesterton writes: “In other words, the young man is not half so eager to get the wicked old man to abolish his wicked old law, because it is wicked, as he is to convince him of the final and infallible truth of some entirely new law, of which the consequences might be equally wicked. The young man is much more interested in ramming his new theory down the old man’s throat than he is in tearing the other infernal infamy out of the old man’s heart.”

Thus, instead of the young protesters focusing on what should be common ground–the past evils that should be overturned–they present to the older generation the juicy target of brand-new theories ripe for debunking. Both older and younger generations can then dispute the new theories, while neither does a very good job of actually coming to grips with the reality of the past evils. Here’s Chesterton:

[I]t is always easy to talk about an old man as if he had always been old, or about young people as if they would always be young. They are no nearer to solving the recurrent riddle of humanity, the family quarrel in so far as it does really run through all history. If the rising generation had always been wise, we should have risen to a great deal more wisdom by this time. But the rising generation very often was wise; and the real interest is in how it could be so foolish when it had been so wise.

I believe what really happens in history is this: the old man is always wrong, and the young people are always wrong about what is wrong with him. The practical form it takes is this: that, while the old man may stand by some stupid custom, the young man always attacks it with some theory that turns out to be equally stupid. This has happened age after age: but to make it quite clear I will take an abstract and artificially simple case. Suppose there was a really barbarous and abominable law at some stage of history. Let us say that a peasant population must be restricted by every sixth child being killed or sold into slavery. I do not remember anything quite so bad as that in the past; it seems to savour more of the scientific programmes of the future. Some of the eugenists or the experts in birth control might perhaps favour it. But there have been things nearly as bad, things at which our blood boils even in reading about them in a book. We wonder how any old men could be so vile as to defend them; we very rightly applaud the young men who called them indefensible. And we are amazed that anything so indefensible seemed so long to be indestructible. Now the real reason is rather odd.

The curious thing that happens is this. We naturally expect that the protest against that more than usually barbaric form of birth control will be a protest of indignant instinct and the common conscience of men. We expect the infanticide to be called by its own name, which is murder at its worst; not only the brand of Cain but the brand of Herod. We expect the protest to be full of the honour of men, of the memory of mothers, of the natural love of children. But when we look closer, and learn what the rising generation really said against the rotten custom, we find something very queer indeed. We do not find the young revolutionists chiefly concerned to say: “Down with King Herod who murders babies!” What they are chiefly concerned to say, what they are passionately eager to say, is something like this: “What can be done with an old fool who has not accepted the Law of Melioristic Ultimogeniture? He has not even read Pooch’s book! Nothing can be done till we have compulsory instruction in the New Biology, which shows that the higher type is not evolved until the sixth child, the previous five being only embryonic experiments.” In other words, the young man is not half so eager to get the wicked old man to abolish his wicked old law, because it is wicked, as he is to convince him of the final and infallible truth of some entirely new law, of which the consequences might be equally wicked. The young man is much more interested in ramming his new theory down the old man’s throat than he is in tearing the other infernal infamy out of the old man’s heart. He is more excited about the book than the baby. For him the bad law is a barbaric impediment that will soon disappear. It is Pooch’s great discovery, of the inevitable superiority of the sixth child, that is important and will remain. Now in fact Pooch’s discovery never does remain. It always disappears after doing one good work–inspiring the young reformer to get rid of the bad and barbarous law against babies. But it cuts both ways; for it gives the old man, who has seen a good many Pooches pass away in his time, an excuse for calling the whole agitation stuff and nonsense. The old man is half ashamed of defending the old law, but he is not in the least ashamed of jeering at the new theory. And the young man always plays into his hands, by being more anxious to establish the theory than to abolish the law.

Now that has happened in history, century after century. … In short, the young man always insists that his new nostrum and panacea shall be swallowed first, before the old man gives up his bad habits and lives a healthy life. The old man knows the new medicine is a quack medicine, having seen many such quacks; and is only too delighted with an excuse for putting off the hour of repentance, and going his own drunken, dissipated old way. That cross-purpose is largely the story of mankind.

There’s an interesting potential lesson here. Perhaps it is more socially productive to focus pragmatically on addressing social evils and ills directly, rather than getting sidetracked into a desperate desire to ram our theories down the throats of others.

Harold Demsetz: Dissecting the Nirvana Viewpoint

Here are two different ways to see the world. One approach looks at current problems in the context of alternative real-world institutional arrangements, recognizing that all the real-world choices will be flawed in one way or another. The other approach looks at current problems as juxtaposed with an ideal outcome. In a 1969 essay, Harold Demsetz critiqued that second approach, calling it the “nirvana viewpoint.” He also argued that economics might be prone to that approach. The Demsetz essay is “Information and Efficiency: Another Viewpoint” (Journal of Law & Economics, April 1969, 12: 1, pp. 1-22). He sets up the problem this way:

The view that now pervades much public policy economics implicitly presents the relevant choice as between an ideal norm and an existing “imperfect” institutional arrangement. This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem; practitioners of this approach may use an ideal norm to provide standards from which divergences are assessed for all practical alternatives of interest and select as efficient that alternative which seems most likely to minimize the divergence.

The nirvana approach is much more susceptible than is the comparative institution approach to committing three logical fallacies–the grass greener fallacy, the fallacy of the free lunch, and the people could be different fallacy.

Given that economists are not usually accused of being overoptimistic fantasists about the possibilities of solving real-world problems, why does Demsetz fear that they may be prone to the nirvana approach? He argues that a nirvana bias may be built into the arguments commonly used by economists. For example, economists often point to what they call “market failures,” like the negative externalities that lead unfettered free markets to produce too much pollution, the positive externalities that lead them to underinvest in R&D or education, the common pattern of unequal distributions of income in a market economy, and so on. In economic theory, each of these “market failures” has a potential solution in terms of taxes, subsidies, or redistributions that could address the problem at hand.

But as the non-economists are fond of reminding the economists, real life doesn’t happen inside an economic model. When there are real-world problems, a mixture of government and private institutions often evolves to address them. When government enacts an economic policy, it doesn’t happen inside an economic model, either. Instead, the new policy is enacted via a political process run by self-interested legislators and then implemented by a regulatory process run by self-interested regulators. As Demsetz saw it, the real choice is not between one model and another, but between the existing set of institutional arrangements for addressing a problem and the likely shape of a new and untested set of institutional arrangements.

Of course, this argument doesn’t suggest that all policies will have poor effects or will be self-defeating. It does suggest that pointing to economic models of a “market failure” and a government-enacted “solution” should only be a starting point for additional discussion of real-world institutions as they exist. More broadly, the Demsetz argument suggests that one should beware of those peddling nirvana–especially because many of those making such claims do not even have a basic textbook model to back up the starting point of their argument. They just have a complaint and a promise.

George Bernard Shaw wrote a play called Back to Methuselah, with a scene where the Serpent in the Garden of Eden is trying to persuade Eve to eat the apple, with a promise that it will lead to everlasting life. The Serpent says to Eve: “When you and Adam talk, I hear you say ‘Why?’ Always ‘Why?’ You see things; and you say, ‘Why?’ But I dream things that never were; and I say, ‘Why not?’” That statement by the Serpent is the nirvana approach in action.

Finding the Path Between Berks and Wankers

An ongoing challenge in writing and editing is to avoid either being obsessive about detailed rules of grammar and usage or being proudly ignorant of any such rules. In his 1997 book The King’s English, Kingsley Amis framed the choice as one between berks and wankers. He wrote:

Not every reader will immediately understand these two terms as I use them, but most people, most users of English, habitually distinguish between two types of person whose linguistic habits they deplore if not abhor. For my present purpose, these habits exclude the way people say their vowel sounds, not because these are unimportant but because they are hard to notate and at least as hard to write about.

Berks are careless, coarse, crass, gross and of what anybody would agree is a lower social class than one’s own. They speak in a slipshod way with dropped Hs, intruded glottal stops, and many mistakes in grammar. Left to them the English language would die of impurity, like late Latin.

Wankers are prissy, fussy, priggish, prim and of what they would probably misrepresent as a higher social class than one’s own. They speak in an over-precise way with much pedantic insistence on letters not generally sounded, like Hs. Left to them the language would die of purity, like medieval Latin.

In cold fact, most speakers, like most writers if left to themselves, try to pursue a course between the slipshod and the punctilious, however they might describe the extremes they try to avoid, and this is healthy for them and the language.

As someone who clearly is in more personal danger of falling into the wanker category, I suppose I must resolve to let my inner berk out to play more often.

How Stalin and the Nazis Tried to Copy Henry Ford

In the early decades of the 20th century, the automakers of Detroit–and Henry Ford in particular–exerted a remarkable influence. These decades included repeated bouts of economic instability and a world war–and that was even before the catastrophe of the Great Depression. In particular, authoritarian rulers of the 1930s were drawn to a model that seemed to combine large-scale facilities under central control with being at the cutting edge of modern industry. Stefan Link tells the story in his recent book, Forging Global Fordism: Nazi Germany, Soviet Russia, and the Contest over the Industrial Order (Princeton University Press, 2021).

When it came to developing fresh principles after the bankruptcy of the old economic order in the global crisis of the 1930s, it was Detroit that drew all modernizers of postliberal persuasion, left and right, Soviets and Nazis, fascists and socialists. To be sure, uncounted engineers and admirers had come to see Ford’s factories since the 1910s, when the old Highland Park, forge of the Model T, was first equipped with an assembly line. Yet in the 1930s Ford’s new factory—the much expanded, vertically integrated River Rouge—became the destination of engineering delegations bent on wholesale technology transfer. Italian, German, Russian, and Japanese specialists traveled to Detroit, spent weeks, months, even years at River Rouge to learn the American secret of mass production. With the Gorky Automobile Factory (Gaz) in central Russia, the Soviet Union opened its own “River Rouge” in 1932. In 1938, Hitler laid the cornerstone of the Volkswagen works. Nor were Nazis and Soviets alone. Toyota began operating its Koromo plant in 1938, and Fiat welcomed Mussolini for the opening ceremony of the brand-new Mirafiori facility in 1939. As is easily seen, these Depression-era exchanges laid the groundwork for the infrastructure of global Fordism after World War II. …

Evidently, both the Nazi and the Soviet self-diagnosis of underdevelopment vis-à-vis the United States was soaked in existential ideological sweat. This diagnosis, however, prescribed a simple and precise course of action: beat America with American methods. Lest Germany become “America’s prey,” it was necessary “to study the means and mechanisms of the Americans,” said Theodor Lüddecke, one of Fordism’s most vocal advocates on the Weimar right. Similarly, Arsenii Mikhailov, one of Fordism’s ardent Soviet champions, argued that the goals of the Five-Year Plan required “a swift and complete switch to the most advanced American technology.”

The making of Gaz, the Gorky “Auto Giant,” resulted from this course of action. Gaz marked an extraordinary attempt to transfer American technology wholesale and to indigenize it in a social and economic environment that seemed hardly ready for it. Soviet workers and engineers indeed struggled mightily to adopt what they took from Detroit. But despite enormous sacrifices and waste, somehow, by decade’s end, a capable motor mass production industry had materialized in central Russia. … Germany could dip into deep homegrown technological capabilities that the Soviet Union lacked and therefore struggled somewhat less to assimilate Fordism. The result was a double reception. The Volkswagen plant echoed the Soviet strategy of comprehensive copying. But the Nazi regime also tried (and largely succeeded) to harness the industrial acumen of Ford and General Motors, both of which had branches in Germany, to its own ends. Ensnaring the Americans in a web of threats and incentives, the regime achieved pervasive, dollar-subsidized transfers of mass production technology into Germany.

There were attempts by Stalinist Russia and Nazi Germany to cut deals with a number of other leading American firms as well. Japan and Italy also sought to import Fordism.

In Japan, no automobile industry existed after World War I, and during the Twenties both Ford and General Motors built assembly plants that fully covered the needs of the domestic market. By the mid-Thirties, however, the militarist government began to support fledgling attempts by Japanese industrialists to nurture a homegrown auto production. In 1936, the government passed the notorious Automobile Manufacturing Enterprise Law, a measure that discriminated against the American firms, penalized imports of vehicles, and encouraged Nissan and Toyota—weak and inexpert producers compared to the Americans—to expand investments and update their technologies. These measures eventually forced GM and Ford to exit the Japanese market and allowed Nissan and Toyota to acquire the Americans’ factory machinery and hire their workers and engineers.

In Italy, too, the regime tolerated the presence of American carmakers only for a brief period after World War I. In 1929, Mussolini personally thwarted an attempt by Ford to expand its presence in Italy, declaring that American competition would devastate the domestic automobile industry. Instead, Mussolini decisively backed the Turin-based carmaker Fiat, which benefited not only from the regime’s stifling labor policies but also from its military orders, export promotion schemes, and generous foreign exchange allocations for technology from the United States. Ford and GM eventually left the Italian market, while Fiat built its own brand-new, Rouge-style megaplant. Opened in 1939, Mirafiori was very similar to the Nazi Volkswagen project: a Fascist white elephant, valuable for propaganda purposes but also a monument to how assiduously the regime sought to alter its place in the global industrial pecking order.

Link tells the story at book-length, and I certainly won’t attempt to recapitulate it here. But I do want to point out some of the ironies involved.

  1. Many people think of giant multinational manufacturing firms as the epitome of capitalism. But for authoritarian, socialist, and communist rulers, it seemed obvious that giant manufacturing firms could operate perfectly well as instruments of state power and control. Indeed, it has often been market-oriented economists who were more skeptical of whether extraordinarily giant firms were really needed for economic efficiency, or whether efficiency and innovation might be better served by a group of middle-sized firms competing with each other.
  2. For the Soviet Union in particular, the obsession with giant manufacturing firms led to central planning for giant firms all over the country. Indeed, Soviet central planners often seemed to equate sheer size of a factory with technological advancement. The size of the Soviet planned factories often outstripped the capacities of the central planners, leading to grotesque inefficiencies.
  3. When teaching intro econ students about economies of scale–that is, the common situation when larger production volumes can drive down average costs–a common question is whether there can be diseconomies of scale. Can there be cases when size gets so large that average costs rise? (A stylized cost-function illustration appears after this list.) In a reasonably competitive market, this won’t be possible, because the very large high-cost firm will go out of business. But if the ultra-large firm is sustained by the government, then yes, diseconomies of scale become possible.
  4. Each year, about 5% of global car production happens in the US economy. More broadly, manufacturing jobs have been a declining share of total US jobs for decades–all the job growth has been in service- and information-related industries instead. The United States has not been the center of the global car industry for a long time. Looking ahead, we seem to be living in a world where a combination of automation, robotics, and information technology may greatly limit the growth of manufacturing jobs anywhere in the world. Whatever the allure and tradeoffs of Fordism in the 20th century as a tool for economic development and government power, the plausible economic models for the 21st century look rather different.
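To make the average-cost point in item 3 concrete, here is one stylized way to write it down. The quadratic cost function and the algebra below are a generic textbook-style illustration of mine, not anything drawn from Link’s book:

```latex
% Illustrative only: a generic textbook-style cost function.
\[
  C(q) = F + c\,q + d\,q^{2},
  \qquad
  AC(q) = \frac{C(q)}{q} = \frac{F}{q} + c + d\,q .
\]
% Average cost falls while the fixed cost F is being spread over more output,
% reaches its minimum where AC'(q) = -F/q^2 + d = 0, that is, at
\[
  q^{*} = \sqrt{F/d},
\]
% and rises beyond q^*: that upper range is the region of diseconomies of scale.
```

In a competitive market, a firm operating far beyond q* would be undercut by smaller rivals; a state-backed giant can be held in that range indefinitely, which is the sense in which government support makes diseconomies of scale sustainable.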

Can a Dose of Randomness Help Fairness?

As the philosophers teach, there are many ways to think about “fairness.” One common approach is to identify a source of what is deemed to be unfairness, or a practice that leads to outcomes deemed to be unfair, and then to address this specific practice or outcome in a direct way. However, there is also a line of philosophy which suggests that thinking about “fairness” might usefully include an element of randomness. For example, the outcome of a lottery is “fair,” not in the sense that it corrects or addresses other aspects of unfairness, but in the sense that every ticket has an equal chance of winning.

Some recent research takes this connection between randomness and fairness and puts it to work. For example, in the July 2021 issue of the American Economic Review, Rustamdjan Hakimov, C.-Philipp Heller, Dorothea Kübler, and Morimitsu Kurino discuss “How to Avoid Black Markets for Appointments with Online Booking Systems” (111:7, pp. 2127-51).

They point out a number of examples of online booking systems where entrepreneurially-minded scalpers snap up many or all of the reservations, and then re-sell them in a secondary black market. This happened in California with prime-time appointments at the Department of Motor Vehicles, in Ireland with the offices that immigrants need to visit to get their residential permits and visas, in China with appointments at state-run hospitals, and so on. These were all first-come, first-served settings, which is of course a different working definition of “fairness.” However, the authors suggest an alternative mechanism. They write:

We propose an alternative system that collects applications in real time, and randomly allocates the slots among applicants (“batch” system). The system works as follows: a set of slots (batch) is offered, and applications are collected over a certain time period, e.g., for one day. At the end of the day, all slots in the batch are allocated to the appointment seekers. Thus, the allocation is in batches, not immediate as in the first-come-first-served system. In the case of excess demand, a lottery decides who gets a slot. If a slot is canceled, this slot is added to the batch in the next allocation period, e.g., the following day. Thus, the scalper cannot transfer the slot from the fake name to the customer by way of cancellations and rebookings. We show that under reasonable parameter restrictions, the scalper not entering the market is the unique equilibrium outcome of the batch system. The intuition for this result is that, keeping the booking behavior of the scalper fixed, a seeker has the same probability of getting a slot when buying from the scalper as when applying directly. Flooding the market with fake applications increases the probability that the scalper will receive many slots, but he cannot make sure that he gets slots for his clients, and he cannot transfer slots to the names of the clients.
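Here is a minimal sketch, in Python, of how such a batch-and-lottery allocation might work. The function name, the toy numbers, and the scalper scenario are my own illustration, not code or parameters from the paper:

```python
import random

def allocate_batch(slots_available, applicants, rng=random):
    """Allocate a batch of identical appointment slots by lottery.

    If applications exceed the slots on offer, winners are drawn
    uniformly at random; everyone else is turned away and may
    re-apply in a later batch. Returns (winners, losers).
    """
    applicants = list(applicants)
    if len(applicants) <= slots_available:
        return applicants, []
    winners = rng.sample(applicants, slots_available)
    losers = [a for a in applicants if a not in winners]
    return winners, losers

# Toy run: 10 slots per day, 25 genuine seekers, plus a scalper who
# floods the batch with 40 applications under fake names.
rng = random.Random(0)
genuine = [f"seeker_{i}" for i in range(25)]
fake = [f"scalper_fake_{i}" for i in range(40)]

winners, losers = allocate_batch(10, genuine + fake, rng)
print("genuine winners:", sum(w.startswith("seeker") for w in winners))
print("slots captured under fake names:",
      sum(w.startswith("scalper") for w in winners))
```

The toy run illustrates the point in the quotation: flooding the batch with fake names can win the scalper some slots, but because a cancelled slot simply returns to the next day’s lottery rather than going to whoever rebooks fastest, those slots cannot be reliably transferred to paying customers.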

Notice how the built-in lottery is a key part of improving the fairness of the outcome in this setting. When you think about it, other examples of randomness as a form of fairness come to mind. For example, when public charter schools are oversubscribed, the usual practice is that they are required to select students through a lottery–and in turn, this randomness gives researchers a tool for evaluating the effectiveness of charter schools by comparing those randomly admitted with those randomly not admitted. In 2008, Oregon opened up its Medicaid program to additional enrollees, but had 90,000 applicants for only 30,000 slots–so it used a lottery to choose who would get the expanded coverage.

A dollop of randomness might have other applications as well. For example, imagine its use in hiring decisions. Say that a company has five internal candidates, all deemed qualified for the job; perhaps the position should then be handed out by random chance. There are several possible benefits of such an approach.

One is that when there is a group of candidates who are all deemed to be qualified, it’s a time when biases can begin to creep in. After all, if everyone is qualified, then maybe it feels safer or more comfortable to go with a familiar choice, or a politically connected choice. Joël Berger, Margit Osterloh, Katja Rost, and Thomas Ehrmann explore this possibility in “How to prevent leadership hubris? Comparing competitive selections, lotteries, and their combination” (Leadership Quarterly, October 2020).

As an historical example, they point to a problem that arose at the University of Basel when professors were often being appointed based on political connections, and adding a dose of randomness was part of the answer. They write:

This problem led to changes that took place at the University of Basel in the 18th century. Until the end of the 17th century, the appointment of professors at this university was seriously compromised by the interventions of politically influential family dynasties and by corruption. To combat this problem, a law was passed requiring the appointment of new professors through a procedure that combined competitive and random selection (Burckhardt, 1916), termed the Wahl zu Dreyen or selection from three. The law, which was introduced in 1718, required candidates to submit proof of their qualifications to the governing body of the university, which then decided whether a candidate was eligible or not (Burckhardt, 1916). Subsequently, all the professors of the University of Basel came together to act as the electoral authority. If two or three candidates were eligible, the candidate to be appointed was chosen by lottery. If more than three candidates came into question, the electoral authority was divided by lottery into three electoral colleges. Each college had to propose one candidate by secret voting. Finally, the candidate to be appointed was decided by lottery.
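As a way of seeing how mechanical the procedure was, here is a schematic rendering in code. The function and argument names are mine, and the voting step is left as a placeholder; this is a paraphrase of the description above, not a historical reconstruction:

```python
import random

def wahl_zu_dreyen(eligible_candidates, electors, college_vote, rng=random):
    """Sketch of the 1718 Basel 'selection from three' procedure.

    eligible_candidates: candidates already vetted by the governing body.
    electors: the professors who together form the electoral authority.
    college_vote: a function (college, candidates) -> proposed candidate,
        standing in for each college's secret ballot.
    """
    if len(eligible_candidates) <= 3:
        # Two or three eligible candidates: the appointee is drawn by lot.
        return rng.choice(eligible_candidates)

    # More than three candidates: split the electors by lot into three colleges.
    shuffled = list(electors)
    rng.shuffle(shuffled)
    colleges = [shuffled[i::3] for i in range(3)]

    # Each college proposes one candidate; the final appointment is by lot
    # among the three proposals.
    proposals = [college_vote(college, eligible_candidates) for college in colleges]
    return rng.choice(proposals)
```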

The authors argue further that applying a degree of randomness in this way might be useful in choosing top corporate executives: that is, decide on a group of qualified candidates, and then choose by lottery. They argue that top executives may be prone to excessive hubris (and one might add, excessive pay) because they view themselves as selected. If they viewed themselves as literally lucky to have been chosen from among equally capable potential replacements, they might act differently. The authors offer some experimental evidence that those who view themselves as selected to be group leaders may be more prone to hubris than those chosen through a mixture of selection and randomness.

Another possible application of randomness in a hiring context is discussed by an overlapping group of authors, Joël Berger, Margit Osterloh, and Katja Rost, in “Local random selection closes the gender gap in competitiveness” (Science Advances, November 20, 2020). The authors point to a body of evidence suggesting that women are more likely than men to opt out of competitive settings, and in general to act in more risk-averse ways. Thus, if promotions are set up as a competitive tournament, women may be systematically disadvantaged. They argue, based on some experimental evidence, that a procedure in which people apply to be designated as qualified for a certain job, knowing in advance that the actual promotion will be determined by lottery, will attract more women without diminishing the quality of the applicant pool.

One can think of ways to apply a dose of randomness in academic settings, too. Imagine, for example, that an organization is evaluating grant proposals. For some of the proposals, everyone is enthusiastic. But for the promising-but-marginal proposals, there is disagreement over quality. When choosing among these marginal proposals, it’s easy to imagine various biases creeping in: for example, the evaluators–other things equal–might tend to favor the proposals from better-known institutions, or better-known people, or perhaps they will tend to favor more conventional ideas as opposed to higher-risk proposals that might look foolish in retrospect.

To avoid such biases, the Swiss National Science Foundation decided to use a degree of randomness when evaluating grant applications. Some grant applications evoke virtual unanimity, either about whether to fund them or whether not to do so. But what about the in-between cases? Rather than pretending that it’s possible to make fine distinctions between these applications–which is a situation where bias can easily creep in–instead a proportion of these grants are given by lottery.

Margit Osterloh and Bruno S. Frey discuss the rationale for applying a similar process at academic journals in “How to avoid borrowed plumes in academia” (Research Policy, February 2020). They note that of the economics articles published in top journals, some turn out to be highly influential and some do not. They also point to a fear that, at least in economics, being published in a top journal is viewed as proof of research of the highest quality, which (at least as judged by how often articles are later cited) is not necessarily true. Given the uncertainties involved, they propose experimentation with a partly random method of selection. If all the referees love a paper, or hate it, then the choice is simple. But for the in-between papers, they suggest random selection. They write:

Our own proposal is the most radical. It is based on the insight that fundamental uncertainty is symptomatic for scholarly work. This is indicated by the low prognostic quality of reviews and the low inter-rater reliability revealed by many empirical analyses. Our suggestion takes this evidence into account. It suggests the introduction of a partly random mechanism. Focal randomisation takes place after a thorough preselection of articles by peer reviews. Such a rationally founded and well-orchestrated procedure promises to downplay the importance (or even “tyranny”) of top journals and to encourage more unorthodox research than today.

Of course, one also sometimes hears proposals for the use of lotteries in admission to selective colleges. It’s not uncommon to hear presidents or admissions officers from such places say “we have many more qualified candidates than we can possibly admit.” So rather than turning over the decision to admissions offices, which will inevitably have shifting agendas and biases of their own about what makes a student “authentic” and full of potential, perhaps instead the admissions office should just decide who is super-qualified for the school, and who is not qualified for the school, and then admit the rest by lottery. It would be problematic for qualified applicants turned down by such a process to claim that it was unfair. But perhaps more interesting, it would be openly acknowledged that there was an element of luck in who is admitted–and the hubris that stems from being selected to a selective college might be reduced as a result.

Here’s one more example. In Nigeria, a competition called YouWiN! was launched in 2011 to provide funding to small businesses and start-ups. David McKenzie describes the process in “Identifying and Spurring High-Growth Entrepreneurship: Experimental Evidence from a Business Plan Competition” (American Economic Review, 107 (8): 2278-2307):

The YouWiN! competition was launched in late 2011 by the president of Nigeria, and in its first year attracted almost 24,000 applications aiming to start a new business or expand an existing one. The top 6,000 applications were selected for a 4-day business plan training course, and then winners were chosen to receive awards averaging US$50,000 each, paid out in four tranche payments conditional on achieving basic milestones. The top-scoring plans overall and within region were chosen as winners automatically, and then 729 additional winners were randomly selected from a group of 1,841 semifinalists, providing experimental variation from US$34 million in grants that enables causal estimation of the program’s impact.

Again, choosing most of the grant winners at random, out of those deemed qualified, seems fair in at least one sense of the word.

The idea of randomness as a form of fairness may seem counterintuitive. Much of the time, we think of fairness as the outcome of a process of selection. But selection inevitably has biases of its own, tied up in issues like what information is used in the selection process, who is more or less comfortable being part of the selection, and the biases of those who do the selecting. None of the examples here suggest that a desired outcome should be allocated entirely at random. There’s always a screening process first. But when it is genuinely hard to distinguish among candidates or projects that seem similarly qualified, there is a case for using randomness as a way to reduce some of the biases that are always likely to exist in any selection process.

Bad Development Ideas

It has been my tradition at this blog to take a break from current events in late August. Instead, I offer a series of posts about economics, academia, and editing, focusing on comments or themes that caught my eye in the last few months.

Back in 2008, the World Bank published the report of the Commission on Growth and Development, consisting of 19 policymakers and a couple of Nobel prize-winning economists. The Growth Report: Strategies for Sustained Growth and Inclusive Development still rewards reading today. Here, I focus on what Michael Spence, chair of the Commission, later referred to as probably the most popular section of the report–a two-page discussion just called “Bad Ideas.” The Commission wrote (pp. 68-69):

Debates help clarify good ideas, subjecting them to scrutiny and constructive criticism. But debates can also be infected by bad ideas. This poses two difficulties for policy makers. First they must identify bad ideas, because specious proposals can often sound promising. Then, they must prevent them from being implemented. An illustrative list of “bad ideas”, which are nonetheless often brought into the debate and should be resisted, is offered below. We hasten to add that just as our recommendations for good policies are qualified by the need to avoid one-size-fits-all approaches and to tailor the policies to country-specific circumstances, our list of bad policies must also similarly be qualified. There are situations and circumstances that may justify limited or temporary resort to some of the policies listed below, but the overwhelming weight of evidence suggests that such policies involve large costs and their stated objectives—which are often admirable—are usually much better served through other means.

  • Subsidizing energy except for very limited subsidies targeted at highly vulnerable sections of the population.
  • Dealing with joblessness by relying on the civil service as an “employer of last resort.” This is distinct from public-works programs, such as rural employment schemes, which can provide a valuable social safety net.
  • Reducing fiscal deficits, because of short term macroeconomic compulsions, by cutting expenditure on infrastructure investment (or other public spending that yields large social returns in the long run).
  • Providing open-ended protection of specific sectors, industries, firms, and jobs from competition. Where support is necessary, it should be for a limited period, with a clear strategy for moving to a self-supporting structure.
  • Imposing price controls to stem inflation, which is much better handled through other macroeconomic policies.
  • Banning exports for long periods of time to keep domestic prices low for consumers at the expense of producers.
  • Resisting urbanization and as a consequence underinvesting in urban infrastructure.
  • Ignoring environmental issues in the early stages of growth on the grounds that they are an “unaffordable luxury.”
  • Measuring educational progress solely by the construction of school infrastructure or even by higher enrollments, instead of focusing on the extent of learning and quality of education.
  • Underpaying civil servants (including teachers) relative to what the market would provide for comparable skills and combining this with promotion by seniority instead of evolving credible methods of measuring performance of civil servants and rewarding it.
  • Poor regulation of the banking system combined with excessive direct control and interference. In general, this prevents the development of an efficient system of financial intermediation that has higher costs in terms of productivity.
  • Allowing the exchange rate to appreciate excessively before the economy is ready for the transition towards higher-productivity industry.

The list above is illustrative and not exhaustive. Individual countries will have their own list of practices that appear to be desirable but are ineffective. Relentless scrutiny of policies should be an essential element in rational policy making. This due diligence needs to be doubled for policies of the type listed above.

I sometimes like to say that the most important role of economics in practical policy-making may not be in choosing the best option. There may be several options that work pretty well, and choosing the “best” may be a matter of opinion. But economics can help identify and rule out the worst options, and the gains from avoiding the truly awful choices can be quite substantial. 

Reconsidering the “Washington Consensus”

About 20 years ago, I found myself (via a story too long and tedious to relate here) part of a small group of economists travelling in South Africa for a week, meeting with various business, government, and academic groups. Within our little group, we each had some topics on which we would focus the first round of our comments. For example, one person talked about how to structure an emerging telecommunications industry, while another talked about barriers to international trade in agriculture. My own role, at least as perceived by the audience, was to be the defender of American imperialist capitalism. And no phrase was delivered to me with quite the same scorn and disdain–and frequency–as “the Washington consensus.”

In my naivete, I was at first surprised that “the Washington consensus” carried such weight. I knew it was controversial, of course, but I had not realized that it had become such a rhetorical trope for an overall point of view. And like many such phrases, the use of the phrase in conversation had become disconnected from the original meaning of the term.

For those who want to dig into this issue, and how evidence and thinking about it has evolved with time, the Summer 2021 issue of the Journal of Economic Perspectives has a four-paper symposium on the subject. (Full disclosure: I have worked as Managing Editor of JEP since the first issue back in 1987. All articles in JEP back to the first issue are freely available online.) The first essay, by Michael Spence, sets the tone with “Some Thoughts on the Washington Consensus and Subsequent Global Development Experience.” Spence starts with a useful reminder of how the “Washington consensus” originally emerged and what it actually said. Spence begins:

In 1989, policymakers around the world were struggling to come to grips with the debt crisis and slow growth that had plagued developing economies during much of the 1980s, especially nations in Latin America and sub-Saharan Africa. The International Institute of Economics (now the Peterson Institute of International Economics) held a conference discussing the economic and debt situation, mostly focused on Latin American countries. The conference was run by John Williamson (who died in April 2021), a senior fellow at the institute who specialized in topics related to international capital flows, exchange rates, and development. To focus the conference discussion, Williamson (1990) wrote a background paper that began: “No statement about how to deal with the debt crisis in Latin America would be complete without a call for the debtors to fulfill their part of the proposed bargain by ‘setting their houses in order,’ ‘undertaking policy reforms,’ or ‘submitting to strong conditionality.’ The question posed in this paper is what such phrases mean, and especially what they are generally interpreted as meaning in Washington.”

Williamson (1990) described what he saw as a convergence of opinion about ten policy areas designed to promote stability and economic development that he felt had emerged during the 1980s. With hindsight, it appears that one of the principal targets was bouts of instability in inflation, public finances, and the balance of payments. If one asks who the consenting parties are in this “consensus,” the answer appears to include the US Treasury, the International Monetary Fund and World Bank, think tanks with related agendas, to some extent academia, and over time Latin American governments who came to understand the destructive power of macroeconomic instability with respect to growth. It is noteworthy that in the mid-1990s, inflation in a wide range of developing countries dropped substantially and stayed there.

A few points here are worth emphasizing. In discussions of the economies of Latin America, the 1980s are commonly referred to as the “lost decade” because of the mixture of slow growth, inflation and hyperinflation, and debt crises. The “Washington consensus” was never a broad strategy for economic reform or an overall agenda for economic development and growth. It was a discussion of what governments needed to do to qualify for a debt relief or debt reservicing agreement. Moreover, it was a list of the points that seemed to Williamson to command broad agreement–not the points that were more controversial. In the original 1990 essay, Williamson divided his discussion into 10 areas, but it was not until some years later that he turned them into a short list. Spence reproduces Williamson’s list:

1. Budget deficits . . . should be small enough to be financed without recourse to the inflation tax.

2. Public expenditure should be redirected from politically sensitive areas that receive more resources than their economic return can justify . . . toward neglected fields with high economic returns and the potential to improve income distribution, such as primary education and health, and infrastructure.

3. Tax reform . . . so as to broaden the tax base and cut marginal tax rates.

4. Financial liberalization, involving an ultimate objective of market-determined interest rates.

5. A unified exchange rate at a level sufficiently competitive to induce a rapid growth in nontraditional exports.

6. Quantitative trade restrictions to be rapidly replaced by tariffs, which would be progressively reduced until a uniform low rate in the range of 10 to 20 percent was achieved.

7. Abolition of barriers impeding the entry of FDI (foreign direct investment).

8. Privatization of state enterprises.

9. Abolition of regulations that impede the entry of new firms or restrict competition.

10. The provision of secure property rights, especially to the informal sector.

There are lots of things one can say about this list, but perhaps the first one is that, when countries are being told what they need to do for debt relief, calling it the “Washington consensus” is just terrible public relations. Williamson wrote in a 2004 essay: “I labeled this the ‘Washington Consensus,’ sublimely oblivious to the thought that I might be coining either an oxymoron or a battle cry for ideological disputes for the next couple of decades.”

But let’s set aside the other arguments for a moment and consider a more basic question: Did countries that followed the Washington consensus recommendations more closely tend on average to have better economic outcomes? The answer seems to be “yes.”

In the Summer 2021 issue of JEP, Anusha Chari, Peter Blair Henry, and Hector Reyes discuss “The Baker Hypothesis: Stabilization, Structural Reforms, and Economic Growth.” (Then-Treasury Secretary James Baker laid out some of the basics of what came to be called the “Washington consensus” a few years before Williamson bestowed the actual label.) They look at the specific years that countries adopted various Washington consensus reforms, and what happened before and after. They write:

First, in the ten-year period after stabilizing high inflation, the average growth rate of real GDP in EMDEs [emerging market and developing economies] is 2.6 percentage points higher than in the prior ten-year period. Second, the corresponding growth increase for trade liberalization episodes is 2.66 percentage points. Third, in the decade after opening their capital markets to foreign equity investment, the spread between EMDEs average cost of equity capital and that of the US declines by 240 basis points.

Also in the JEP symposium, Ilan Goldfajn, Lorenza Martínez, and Rodrigo O. Valdés find broadly similar results of a positive effect in “Washington Consensus in Latin America: From Raw Model to Straw Man,” while Belinda Archibong, Brahima Coulibaly, and Ngozi Okonjo-Iweala also find a positive effect in “Washington Consensus Reforms and Lessons for Economic Performance in Sub-Saharan Africa.”

Some recent research papers in other journals agree with the general finding. For example, Kevin B. Grier and Robin M. Grier published “The Washington consensus works: Causal effects of reform, 1970-2015,” in the Journal of Comparative Economics (March 2021, 49:1, pp. 59-72). William Easterly, who has been a critic of the “Washington consensus” in the past, offers an update and some new thinking in “In Search of Reforms for Growth: New Stylized Facts on Policy and Growth Outcomes” (Cato Institute, Research Briefs #215, May 20, 2020, and NBER Working Paper 26318, September 2019).

But as a number of the JEP papers are quick to point out, the fact that the Washington consensus was a generally sensible set of policy recommendations does not address many of the underlying concerns directly.

  1. The major global growth success stories since the formulation of the Washington consensus have happened in Asia: China, India, and others. These countries have followed some aspects of the Washington consensus but clearly not others (like commitments to privatization, financial liberalization, floating exchange rates, and free trade). The Washington consensus doesn’t seem especially useful in thinking about what caused growth in Asian economies to take off.
  2. In the urgent push of resolving debt crises, some parts of the Washington consensus often got lost in the shuffle: in particular, the #2 recommendation about redistributing public resources “toward neglected fields with high economic returns and the potential to improve income distribution, such as primary education and health, and infrastructure” seemed to be left out when debt relief agreements were actually reached. Indeed, the debt relief agreements sometimes involved cutting government spending in those areas.
  3. The Washington consensus put too little emphasis on real-world transition problems. When a certain domestic industry is opened up to international trade, and many of the local producers are driven out of business, what is the government to do? When a large state-owned company is privatized and becomes a large unregulated private monopoly instead, what is the government to do? Praying to the gods of “it will be all right in the long run” is not a useful answer.
  4. In general, the Washington consensus seems to neglect the need for broad political and social buy-in on reforms, and thus feels like a mandate handed down from on high.
  5. Some topics that seem important for growth don’t seem to be explicitly addressed in the Washington consensus. It’s generally believed that economic growth comes primarily from a society’s ability to make use of new technologies developed elsewhere and to create new technologies of its own. Some bits and pieces of the Washington consensus list can be interpreted in these terms, but it’s not an explicit focus.
  6. Some topics that seem important for social cohesion don’t seem to be addressed. For example, there is no mention of reducing corruption or crime, improving the environment, or an explicit goal of reducing inequality.

It would of course be a little silly to treat a set of reforms partially carried out by some countries in the late 1980s and into the 1990s as the sole factors determining economic performance since then. Thus, along with these kinds of concerns, Archibong, Coulibaly, and Okonjo-Iweala point out in their discussion of Africa’s growth experience that there is a general surge in Africa’s growth starting around 2000. One reason for that surge was growth in democracy: they discuss the “wave of democratization in the 1990s, with the number of countries that held multi-party elections increasing from just two (Botswana and Mauritius) before 1989 to 44 of 48 countries—or 92 percent of sub-Saharan Africa—by mid-2003 (Lynch and Crawford 2011). This had the effect of encouraging investment in infrastructure and in pro-poor policies.” In looking at the economic successes of Africa in the last two decades, they write:

[I]t is not obvious that the market-oriented reforms emphasized by international financial institutions are the best or only route to successful economic development. Skeptics of market-oriented reforms in Africa point out that in many successful development efforts around the world, including many countries across Asia, governments played a prominent role for much of the critical phase of their economic development. Historically, many of today’s developed economies did not fully embrace free market economies in the earlier phases of their economic development, which instead involved substantial state involvement including industrial subsidies and infant industry protection (for a discussion of the development experience of today’s advanced economies, one useful starting point is Chang 2002). In Africa, many of these same practices used at other places and times were frowned upon by proponents of market-oriented policies. But before countries of sub-Saharan Africa fell into the debt crisis of the 1980s, many of them had experienced success in the period immediately post-independence in the 1960s and 1970s (Mkandawire 1999). Indeed, some of the policies that were abandoned in favor of market-oriented reforms had rational, development-motivated justifications.

More broadly, what seems to have happened is that, among both supporters and detractors, the “Washington consensus” became a phrase that was used to refer to a recommendation for largely unfettered markets and limited government. This is why Goldfajn, Martínez, and Valdés, in their essay about Latin America, refer to the argument as a “straw man.” They write:

In current public policy debates in Latin America, controversy over “neoliberalism” dwarfs interest in the Washington Consensus. Neoliberalism is the straw man most commonly held up as responsible for Latin America’s economic problems. According to our calculations using the Google Books Ngram Viewer, books published in 2019 in Spanish had 70 times more references to “neoliberalism” than to the “Washington Consensus.”

But neoliberalism is not a clearly defined concept in economics. In public discussion, neoliberalism is narrowly associated with a laissez-faire view (à la Hayek) and perhaps also with extreme monetarism (à la Friedman), and it is sometimes equated with rather orthodox and pro-market reforms. Neoliberalism has also been identified with policies that disregard some relevant aspects of development, such as inequality and poverty, and neglect any role for the state. More importantly for the issues discussed here, critics have sometimes caricatured the Washington Consensus as a neoliberal manifesto. As described by Thorsen (2010, p. 3), neoliberalism has become “a generic term of deprecation to describe almost any economic and political development deemed undesirable.” The Washington Consensus should not be mechanically associated with this neoliberal straw man.

As shown in this paper, the Washington Consensus was a list of recommendations that was partially adopted with mixed results, some of which were satisfactory and others clearly not. In our view, without some subset of the Washington Consensus policies, it would have been difficult, if not impossible, to achieve macroeconomic stability and to recover access to foreign financing in the late 1980s and early 1990s. The main risk in Latin America at present is that economic populism will gain ground and policymakers will discard the Washington Consensus policies altogether.

One lesson that should have been learned in the 1970s and 1980s, and that gave birth to the “Washington consensus” idea, is that extreme macroeconomic instability is not good for growth or the standard of living.

Pandemic Recession: By Far the Shortest on Record

In the United States, there is no government committee to set the start and end dates of recessions–for obvious political reasons. However, the National Bureau of Economic Research has a Business Cycle Dating Committee which meets irregularly to determine peaks and troughs. A peak is the start of a recession: the high point before an economy starts down. A trough is the bottom of a recession: the low point before an economy starts up.

Notice in particular that in this framework, the “end” of a recession does not mean that an economy has returned to its pre-recession norms. It just means that the period of contraction is over and a period of expansion has started.

Thus, back on June 8, 2020, the Business Cycle Dating Committee named February 2020 as the peak month before the recession hit. Frankly, this was not a decision that required a deep level of economic insight. The economic statistics for output and employment plunged in an almost audible way in March 2020.

I missed the announcement when it was made on July 19, but the Business Cycle Dating Committee decided that the pandemic recession was just two months long, ending in April 2020. There is an old rule of thumb that “a recession is two quarters of negative growth,” but that rule has never been official. For comparison, the contraction from peak to trough during the Great Recession was 18 months; the recessions of 2001 and of 1990-91 both had eight-month periods of actual contraction. The shortest previous US recession on record was six months, from January to July 1980, although this was then soon followed by a “double-dip” 16-month recession from July 1981 to November 1982. The Great Depression had a contraction period of 43 months from August 1929 to March 1933. With regard to the pandemic recession, the Committee wrote:

In determining that a trough occurred in April 2020, the committee did not conclude that the economy has returned to operating at normal capacity. An expansion is a period of rising economic activity spread across the economy, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales. Economic activity is typically below normal in the early stages of an expansion, and it sometimes remains so well into the expansion. The committee decided that any future downturn of the economy would be a new recession and not a continuation of the recession associated with the February 2020 peak. The basis for this decision was the length and strength of the recovery to date.

As has become its established practice, the NBER committee looked at measures of the economy involving both employment and production. The employment measures were a little complex this time, because some survey data was counting those who were being paid but not at work as “employed.”

On the employment side, the committee normally views the payroll employment measure produced by the Bureau of Labor Statistics (BLS), which is based on a large survey of employers, as the most reliable comprehensive estimate of employment. This series reached a clear trough in April before rebounding strongly the next few months and then settling into a more gradual rise. However, the committee recognized that this survey was affected by special circumstances associated with the COVID-19 pandemic in early 2020. In the survey, individuals who are paid but not at work are counted as employed, even though they are not in fact working or producing. Workers on paid furlough, who became more numerous during the pandemic, thus resulted in an overcount of people working. Accordingly, the committee also considered the employment measure from the BLS household survey, which excludes individuals who are paid but on furlough. This series also shows a clear trough in April. The committee concluded that both employment series were thus consistent with a business cycle trough in April.

On the production side, “[t]he committee believes that the two most reliable comprehensive estimates of aggregate production are the quarterly estimates of real Gross Domestic Product (GDP) and of real Gross Domestic Income (GDI), both produced by the Bureau of Economic Analysis (BEA). Both series attempt to measure the same underlying concept, but GDP does so using data on expenditure while GDI does so using data on income.”

However, the committee wants to name a month for the economic peak and trough, and these GDP and GDI statistics are produced on a quarterly basis. To dig down to the monthly level:

The most comprehensive income-based monthly measure of aggregate production is real personal income less transfers, from the BEA. The deduction of transfers is necessary because transfers are included in personal income but do not arise from production. This measure reached a clear trough in April 2020. The most comprehensive expenditure-based monthly measure of aggregate production is monthly real personal consumption expenditures (PCE), published by the BEA. This series also reached a clear trough in April 2020.

The Pandemic Recession: What Was Different in Labor Markets?

It felt back in September 2008, at least to me, as if the Great Recession erupted all at once. Sure, there had been some earlier warning signs about financial markets and subprime mortgages in late 2007, but in spring 2008, the consensus view (for example, in Congressional Budget Office forecasts) was that these housing market blips were only a modest threat to the overall US economy. But by comparison with the pandemic recession, the Great Recession practically happened in slow motion.

Here’s a figure showing the monthly unemployment rate since 1970. The shaded areas show recessions, and you can see the rise in unemployment during each recession. The rise during the pandemic is much higher and faster; the decline in unemployment from its peak was also much faster–indeed, unemployment in July 2021 was down to 5.4%, which is conventionally considered to be pretty good.

But the pandemic recession also had a shocking effect on labor force participation rates. To be counted as “unemployed,” you need to be “in the labor force,” which means that you either have a job or are looking for a job. If you aren’t looking for a job, you are “out of the labor force” and thus are not counted as “unemployed.” There are good reasons for this distinction about being in and out of the labor force: in conventional times, it wouldn’t make sense to count, say, retirees or parents who are voluntarily staying home and looking after children as “unemployed,” because they aren’t looking for work. (Over time, the big inverted-U shape of labor force participation largely reflects the entry of women into the (paid) labor force, which topped out in the late 1990s, and then a gradual decline for both men and women since then.) But the pandemic recession is not a conventional time, and the sharp drop in labor force participation–together with what is so far only a very partial recovery–raises the likelihood that at least some of these people do want to get jobs in the future, when the time is right.
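
To see why this distinction matters when reading the statistics, here is a stylized calculation with invented numbers (not actual BLS figures): when unemployed people stop searching and leave the labor force, the measured unemployment rate falls even though no one has found a job.

```python
# Stylized numbers (invented) for a working-age population of 1,000 people.
population = 1000
employed = 570
unemployed = 60  # jobless and actively looking for work

def rates(employed, unemployed, population):
    labor_force = employed + unemployed
    return {
        "unemployment rate": round(100 * unemployed / labor_force, 1),
        "labor force participation": round(100 * labor_force / population, 1),
        "employment-to-population": round(100 * employed / population, 1),
    }

print("before:", rates(employed, unemployed, population))
# {'unemployment rate': 9.5, 'labor force participation': 63.0, 'employment-to-population': 57.0}

# Now 20 of the unemployed stop searching and drop out of the labor force:
print("after:", rates(employed, unemployed - 20, population))
# {'unemployment rate': 6.6, 'labor force participation': 61.0, 'employment-to-population': 57.0}
```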

For completeness, here’s a figure showing the rate of real (that is, adjusted for inflation) GDP growth, measured as the change from 12 months earlier (with data through the second quarter of 2021). Again, the pattern of a remarkably sharp drop and then a sharp recovery is apparent.
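
For reference, the growth measure in that figure is simply the percentage change in real GDP from the same quarter a year earlier. A minimal sketch of that calculation, on invented quarterly levels:

```python
# Hypothetical quarterly levels of real GDP (trillions of dollars, made up).
real_gdp = [19.0, 19.2, 19.3, 19.4, 17.2, 18.6, 18.9, 19.1, 19.4]

def yoy_growth(series, lag=4):
    """Percent change from the same quarter one year (four quarters) earlier."""
    return [100 * (series[t] / series[t - lag] - 1) for t in range(lag, len(series))]

print([round(g, 1) for g in yoy_growth(real_gdp)])
# [-9.5, -3.1, -2.1, -1.5, 12.8] -- a deep drop followed by a sharp rebound
```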

In the Summer 2021 issue of the Journal of Economic Perspectives, Stefania Albanesi and Jiyeon Kim take a deeper dive into some aspects of the pandemic recession and the US labor market in “Effects of the COVID-19 Recession on the US Labor Market: Occupation, Family, and Gender.” (Full disclosure: I’ve been the Managing Editor of the JEP since the first issue in 1987.) They are focused in particular on what happened in 2020. But the evolution of the labor market at that time also suggests some of the future challenges. Here are a few themes that emerge.

In previous recent recessions, job losses for men tended to be greater than those for women. Indeed, married women usually increase their efforts in the (paid) labor market during recessions, which can be thought of as a way in which families adjust to the risk of income loss. But in the pandemic recession, women were more likely to lose jobs than men. For some discussion of this theme in earlier recessions in JEP, see Hilary Hoynes, Douglas L. Miller, and Jessamyn Schaller. 2012. “Who Suffers during Recessions?” Journal of Economic Perspectives, 26 (3): 27-48.

The types of occupations where jobs were lost were different in the pandemic recession. For example, back in the Great Recession there was a large loss of construction jobs, which disproportionately affected men. For some discussion of the job losses for men in the Great Recession in JEP, see Kerwin Kofi Charles, Erik Hurst, and Matthew J. Notowidigdo. 2016. “The Masking of the Decline in Manufacturing Employment by the Housing Bubble.” Journal of Economic Perspectives, 30 (2): 179-200.

Here’s a sample of the discussion from Albanesi and Kim:

During 2020, women—especially those with children—experienced a substantial reduction in employment compared to men, contrary to the pattern that prevailed in previous recessions. Both labor demand and supply factors likely contributed to this behavior. Women are more likely to be employed in service-providing industries and service occupations. These tend to be less cyclical compared to goods-producing industries and production occupations that employ a larger share of men, and Albanesi and Şahin (2018) show that this accounts for most of the difference in the loss of employment during recessions since 1990. … However, during the COVID-19 [pandemic], infection risk was most severe in the service sector, leading to a large reduction in demand for services, due to government-imposed mitigation measures and customer response to infection risk. The overrepresentation of women in service jobs likely accounts for a sizable fraction of their decline in employment relative to men.

Another unique factor associated with the pandemic recession was the increased childcare needs associated with the disruptions to school activities, which may have contributed to a reduction in labor supply of parents. Why was it mothers in particular who responded to the lack of predictable in-person schooling activities in households where fathers were also present? Gender norms likely played a role. But from the perspective of an economic model of the family, this response should also be driven by differences in the opportunity cost as measured by wages. In the United States and other advanced economies, there is a substantial “child penalty” that reduces wages for women when, and even before, they become mothers and throughout the course of their lifetime. The penalty is driven by a combination of occupational choices, labor supply on the extensive and intensive margin, that begin well before women have children (Kleven, Landais, and Søgaard 2019; Adda, Dustmann, and Stevens 2017). … In a recent sample of such work, Cortes and Pan (2020) estimate that the long-run child penalty—three years or more after having the first child—for US mothers is 39 percent, and they also find that child-related penalties account for two-thirds of the overall gender wage gap in the last decade. Given the child penalty, most working mothers at the start of the pandemic were likely to be earning less than their partners, and for those couples the optimal response to the increased child supervision needs was for mothers to reduce labor supply.
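
A back-of-the-envelope version of that opportunity-cost argument, with invented wages and hours and only the 39 percent figure taken from the estimate quoted above, runs as follows:

```python
# Invented wages; only the 39 percent child penalty comes from the estimate quoted above.
father_wage = 30.0                 # dollars per hour (hypothetical)
mother_wage_pre_child = 30.0       # same pre-child wage (hypothetical)
child_penalty = 0.39               # long-run child penalty from Cortes and Pan (2020), as quoted
mother_wage = mother_wage_pre_child * (1 - child_penalty)   # 18.30

hours_of_care_needed = 15          # extra weekly hours of child supervision (hypothetical)
print("weekly income lost if the mother cuts hours:", mother_wage * hours_of_care_needed)  # 274.5
print("weekly income lost if the father cuts hours:", father_wage * hours_of_care_needed)  # 450.0
# On pure opportunity cost, the family gives up less income when the lower-paid
# parent -- here the mother, because of the child penalty -- reduces labor supply.
```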

What do these patterns imply for the prospects of a more complete labor market recovery?

  1. The pandemic-related decline in service jobs has also offered a strong incentive to push harder toward automating such jobs where this is possible. As a result, the labor market recovery from the pandemic will not be as simple as employers just restoring the previous jobs as demand increases.
  2. The question of parents, day-care, and schools seems likely to remain fraught into this next school year, which will affect the ability and willingness of parents to work.
  3. The sudden shift to home-based work cuts in several directions. On one hand, the greater availability of working-from-home may benefit certain workers, and parents in particular, by offering more flexibility. On the other hand, one can imagine a two-tier labor market emerging, where the jobs that are viewed by employers as of central importance happen with a large component of personal interaction at an office, and the jobs that are viewed as peripheral, using short-term contracts, happen at home.

The US labor market is not just recovering from the pandemic recession, but along dimensions of occupation, family, and gender, it may also be reshaping itself in ways that are very much still evolving.

I should add that the Summer 2021 issue of JEP includes two other articles about the pandemic recession.

Marcella Alsan, Amitabh Chandra, and Kosali Simon discuss “The Great Unequalizer: Initial Health Effects of COVID-19 in the United States” (Journal of Economic Perspectives, 35:3, 25-46). Everyone knows that the pandemic hit the elderly harder. These authors point out that if you look at “excess deaths” by age group, the pandemic tended to hit hardest among those who were already disadvantaged–which is a common pattern in past pandemics, too.

Joseph Vavra writes about “Tracking the Pandemic in Real Time: Administrative Micro Data in Business Cycles Enters the Spotlight” (Journal of Economic Perspectives, 35:3, 47-66). His essay focuses on how economists have been making increasing use of private-sector real-time data. He writes:

Thus, a number of economists turned to private-sector micro data to try to understand the recession while it was still unfolding: for example, data on employment patterns from the payroll processing firm ADP and the scheduling firm Homebase, data on bank accounts and credit card payments from sources like the JPMorgan Chase Institute and firms that provide financial planning services like mint.com and SaverLife, and even data on locations of cell phone users from firms like PlaceIQ and SafeGraph. The use of administrative micro data from these and other sources allowed pandemic-related research to be produced in nearly real-time and the scope for analysis of individual behavior, which would be impossible using traditional aggregate data.

Is Geoengineering Research Objectionable?

Geoengineering is the idea of putting materials–say, certain aerosols–into the atmosphere to counteract the effects of carbon and other greenhouse gases. I’ve written about the technology and arguments a few times: for example, see here and here.

But when small-scale experiments with this approach are proposed, like a recent effort in Sweden, there are often strong objections of the “slippery slope” variety: that is, there’s probably nothing especially dangerous or wrong with this particular limited experiment, but it could pave the way for larger-scale experiments that do pose larger risks. Also, if this research leads many people to think that a cheap and easy techno-fix for climate change is likely a few years down the road, they will be less likely to support near-term efforts to reduce carbon emissions. Daniel Bodansky and Andy Parker address these arguments in “Research on Solar Climate Intervention Is the Best Defense Against Moral Hazard” (Issues in Science and Technology, Summer 2021).

Their essay suggests that the case against research experiments in geoengineering is shaky on several grounds:

  1. What if the results of the experiment suggest that geoengineering is not a plausible or workable idea? Bodansky and Parker point to the example of a previous set of experiments on “ocean iron fertilization,” an idea proposed back in 1988: fertilizing the ocean with iron would create large algae blooms that would draw carbon dioxide from the atmosphere and then carry that carbon to the bottom of the ocean as the algae died. One early researcher in the area reportedly joked, “Give me half a tanker of iron and I’ll give you another ice age.” There were a dozen small-scale field experiments (here’s a review from 2012). But the experiments suggested the approach would not be very effective and might have negative side effects. So among experts who favor a broad array of efforts to reduce carbon emissions, this particular idea is not considered relevant. To put it another way, an unresearched idea will always have a certain attraction, especially in an emergency situation. A researched idea can easily seem less attractive.
  2. When people are confronted with a discussion of geoengineering, they often become more willing to consider other responses to climate change. Bodansky and Parker write:

[A] team at Yale University sought to test directly the moral hazard argument by assigning study participants in the United Kingdom and United States to two groups: one group was given information about climate intervention as a response to global warming; the other was given information about regulating pollution. The study’s results were remarkable. The researchers found that the group exposed to information about climate intervention was slightly more concerned about climate change risks. That is, they found evidence of a reverse moral hazard response. This research might be dismissed as an academic curiosity, but the same reverse moral hazard effect has been observed using different study methods in Germany, Sweden, the United States, and the United Kingdom.

It’s easy to imagine an underlying dynamic here. Imagine someone who is a little skeptical about the science behind climate change. When that person is confronted with a discussion of climate intervention, they start thinking, “Gee, if altering the atmosphere is under consideration, then non-carbon energy subsidies, energy efficiency efforts, and a carbon tax don’t sound so bad.”

3. Although Bodansky and Parker don’t emphasize this theme, there are people who have been arguing for some years now that the world is on the verge of passing a threshold where, because of carbon and other greenhouse gases in the air, the risks of climate change would become irreversibly high. For the sake of argument, let’s say that those claims and predictions aren’t just posturing and exaggeration in an attempt to stir up a more aggressive policy response, but are literally true. In other words, say that the world reaches a point (or has already reached a point?) where the clean energy/fuel efficiency/carbon tax agenda for reducing risks of climate change is already too late, and something else needs to be done. In that situation, knowing more about when, where, and how climate intervention efforts might be done, in a way that promises the most benefit for the smallest risk, might be pretty important.

Finally, I’ll just add that if those concerned about climate change want to use a slogan of “follow the science”–which I think is perhaps their strongest argument–then it’s a bad look to start arguing that certain kinds of science shouldn’t be followed.