Horace Mann on Human Brainpower

As the school year gets underway, it seemed a good time to pass along a bit of rhetoric from Horace Mann, the great 19th century advocate of the “Common School” idea that every child should receive a basic education at taxpayer expense. Here’s an excerpt from one of his speeches, “Man: His Mental Power,” as published in Stryker’s Quarterly Register and Magazine (March 1849, p. 206). Mann said:

The cotton mills of Massachusetts will turn out more cloth in one day than could have been manufactured by all the inhabitants of the Eastern continent during the tenth century. … The velocity of winds, the weight of waters, and the rage of steam, are powers; each one of which is infinitely stronger than all the strength of all the nations and races of mankind, were it all gathered into a single arm. And all these energies are given us on one condition–the condition of intelligence–that is of education. Had God intended that the work of the world should be done by human bones and sinews, He would have given us an arm as solid and as strong as the shaft of a steam engine, and enabled us to stand, day and night, and turn the crank of a steamship, while sailing to Liverpool and Calcutta. Had God designed the human muscles to do the work of the world, then, instead of the ingredients of gun-powder or gun-cotton and the expansive force of heat, He would have given us hands which could take a granite quarry and break its solid acres into suitable symmetrical blocks, as easily as we now open an orange. Had he intended us for bearing burdens, he would have given us Atlantean shoulders, by which we could carry the vast freights of rail-car and steamship as a porter carries his pack. He would have given us lungs by which we could blow fleets before us, and wings to sweep over ocean wastes.

But instead of iron arms, and Atlantean shoulders, and the lungs of Boreas, He has given us a mind, a soul, a capacity of knowledge, and thus a power of appropriating all these energies of nature to our own use. Instead of a telegraphic and microscopic eye, he has given us power to invent the telescope and microscope. Instead of ten thousand fingers, he has given us genius inventive of the power-loom and the printing-press. Without a cultivated intellect, man is the weakest of all the dynamical forces of nature: with a cultivated intellect he commands them all.

In some ways, of course, this comment is just an example of florid 19th-century rhetoric. I confess that I am less confident than Mann about the theological implications of why God gave us what capabilities. Mann also skims rather quickly over the fact that most of the “us” to whom he keeps referring do not actually invent the equivalent of the telescope or power-loom, but instead, in our production and consumption, we continually interact with others while making use of the inventions created by others.

But Mann also touches here on a deeper truth. In the modern world, few of us make our living purely by the strength of our backs or the dexterity of our fingers; instead, we rely on our abilities to learn, to implement what we learn, to mix our effort with the appropriate tools, to communicate with co-workers, and to coordinate our activities. In a broad sense, education represents the well-founded conviction that in life and work, people are so much more than their physical limits.

Thomas Sowell: Why “The Market” is a “Misleading Figure of Speech”

In discussions of social policy, one often hears comparisons between “the government” and “the market,” as if they were somehow similar options. Thomas Sowell argued that referring to “the market” in this way is a “misleading figure of speech.” I quote here from his book Knowledge and Decisions (1980, quoting here from 1996 edition, pp. 41-42):
 
“Society” is not the only figure of speech that confuses the actual decision-making units and conceals the determining incentives and constraints. “The market” is another such misleading figure of speech. Both the friends and foes of economic decision-making processes refer to “the market” as if it were an institution parallel with, and alternative to, the government as an institution. The government is indeed an institution, but “the market” is nothing more than an option for each individual to choose among numerous existing institutions, or to fashion new arrangements suited to his own situation and tastes.
The government establishes an army or a post office as the answer to a given problem. The market is simply the freedom to choose among many existing or still-to-be-created possibilities. The need for housing can be met through “the market” in a thousand different ways chosen by each person–anything from living in a commune to buying a house, renting rooms, moving in with relatives, living in quarters provided by an employer, etc., etc. The need for food can be met by buying groceries, eating at a restaurant, growing a garden, or letting someone else provide meals in exchange for work, property, or sex. “The market” is no particular set of institutions. Its advantages and disadvantages are due precisely to this fact. Any comparison of market processes and government processes for making a particular set of decisions is a comparison between given institutions, prescribed in advance, and an option to select or create institutions ad hoc. There are of course particular institutions existing in a market as of a given time. But there can be no definitive comparison of market institutions–such as the corporation–and a governmental institution, such as a federal bureaucracy. The corporation may be the predominant way of doing certain things during a particular era, but it will never be the only market mechanism even during that given era, and certainly not for all eras. Partnerships, cooperatives, episodic individual transactions, and long-run contractual agreements all exist as alternatives. The advantages of market institutions over government institutions are not so much in their particular characteristics as institutions but in the fact that people can usually make a better choice out of numerous options than by following a single prescribed process.
 
The diversity of personal tastes insures that no given institution will become the answer to a human problem in the market. The need for food, housing or other desiderata can be met in a sweeping range of ways. Some of the methods most preferred by some will be the most abhorred by others. Responsiveness to individual diversity means that market processes necessarily produce “chaotic” results from the point of view of any given scale of values. No matter which particular way you think people should be housed or fed (or their other needs met) the market will not do it just that way, because the market is not a particular set of institutions. People who are convinced that their values are best–not only for themselves but for others–must necessarily be offended by many things that happen in a market economy … The diversity of tastes satisfied by a market may be its greatest economic achievement, but it is also its greatest political vulnerability. 
 
One can raise various objections to all of this. For example, the forms of government action can vary quite substantially as well, from regulatory power to tax incentives, from public-private partnerships to public ownership of organizations which (at least in theory) will raise their own money through sales to the public, to organizations like the armed forces that are not expected to raise money through sales, to administrative organizations for operating government payments.  When one thinks of different levels of government from local to state to regional to national to international, and the different ways that these operate around the world, it’s not just markets that can operate in a number of different ways. 
 
But there are also some useful insights here.  I have found over the years that many who object to “the market” are actually objecting to the characteristics of certain markets at certain times and places: for example, how markets operate in the oil industry, or the tech sector, or housing markets, or how manufacturing was organized in the late 19th and early 20th century. These critics of “the market” would not look at a list of failed government programs and policies–and such a list could certainly be compiled!–and conclude that it was a justification for objecting to all of government. Many of these critics tend to blame markets for all that is negative in the economy, without much considering whether government rules and interventions (say, in a “market” like housing) might also be at fault.
 
Many of these critics who object to aspects of American-style market capitalism then express a preference for the economic arrangements in other market-oriented economies like, say, the Scandinavian countries of northern Europe or perhaps Japan. Some of these people call themselves “socialists,” but they often do not support the classic definition of socialism in terms of government ownership of the means of production. They favor different types of markets, and different mixtures of markets and government, but such distinctions get lost when complaining about “the market” as if it were a fixed entity. Perhaps instead of thinking about government vs. the market, it’s more useful to think of government as embodying the set of ground rules under which markets then operate.

A “Medicare Funding Warning” from the Trustees

The trustees of the Medicare program have published their annual report, the imposingly titled 2021 Annual Report of the Boards of the Trustees of the Federal Hospital Insurance and Federal Supplementary Medical Insurance Trust Funds. The annual report came out considerably later than usual, about five months after the statutory deadline, and I suspect there’s a story there. But here, I’ll focus on the projections themselves. The trustees write:

The Trustees are issuing a determination of projected excess general revenue Medicare funding in this report because the difference between Medicare’s total outlays and its dedicated financing sources is projected to exceed 45 percent of outlays within 7 years. Since this determination was made last year as well, this year’s determination triggers a Medicare funding warning, which (i) requires the President to submit to Congress proposed legislation to respond to the warning within 15 days after the submission of the Fiscal Year 2023 Budget and (ii) requires Congress to consider the legislation on an expedited basis. This is the fifth consecutive year that a determination of excess general revenue Medicare funding has been issued, and the fourth consecutive year that a Medicare funding warning has been issued.

Two points are worth emphasizing here. One is that this “Medicare funding warning” is not new. As the report notes, the trustees have now been issuing the warning for the last four years. If you did not notice the Trump administration responding with proposed legislation to address the problem, along with Congress taking up such legislation on an expedited basis, that’s because it didn’t happen.

Second, this issue isn’t about COVID. Yes, COVID had lots of short-term effects on Medicare, as the report describes. For example, the drop in employment as a result of the pandemic recession reduced Medicare payroll taxes. But on the expenditure side, the rise one would expect in expenditures related to COVID was actually offset by declining Medicare spending in other areas. The trustees write:

Spending was directly affected by the coverage of testing and treatment of the disease. In addition, several regulatory policies and legislative provisions were enacted during the public health emergency that increased spending; notably, the 3-day inpatient stay requirement to receive skilled nursing facility services was waived, payments for inpatient admission related to COVID-19 were increased by 20 percent, and the use of telehealth was greatly expanded. More than offsetting these additional costs in 2020, spending for non-COVID care declined significantly … This decline was particularly true for elective services.

Medicare’s funding problems were apparent before the series of “Medicare funding warning” alarms started going off a few years ago. To understand the scope of the problem, it’s useful to sketch the structure of the program. Part A is Hospital Insurance. Part B is Supplementary Medical Insurance–that is, all the other non-hospital care. In Part A and Part B, Medicare uses fee-for-service payments to providers. Part C is Medicare Advantage, in which Medicare pays a flat annual premium for the recipient to enroll in a private health care plan that can provide hospital care, non-hospital care, and in some cases prescription drug coverage as well. Part D is the Prescription Drug Benefit, which took effect in 2006.

As the trustees note: “In 2020, Medicare covered 62.6 million people: 54.1 million aged 65 and older, and 8.5 million disabled. About 40 percent of these beneficiaries have chosen to enroll in Part C private health plans that contract with Medicare to provide Part A and Part B health services. … Total Medicare expenditures were $926 billion in 2020.” To put that number in perspective, total Social Security spending in 2020 was approaching $1.1 trillion, while total defense spending was a little under $800 billion.

Of course, none of these parts of Medicare are funded in exactly the same way, which complicates talking about them. For example, Part A has a trust fund that is almost entirely funded by “Hospital Insurance” payroll taxes. Because this is the legislated source of funding for Part A, it’s possible for this trust fund to run out of money, which is currently projected for 2026. Indeed, the trustees have been sounding the alarm that the trust fund has fallen below a standard of short-term financial solvency every year since 2003.

However, the “trust funds” for Part B of Medicare, the Supplementary Medical Insurance, and for Part D, the prescription drug benefit, cannot go bankrupt. The reason is just a matter of bookkeeping: if these “trust funds” fall short, then legally the bills will be paid out of general federal revenues. Presto! Bankruptcy for Part B is impossible. In 2020, general federal revenues paid for about 80% of Part B spending, and individual premiums covered most of the rest. For Part D, the prescription drug benefit, general federal revenues covered about 75% of all spending in 2020, with a mixture of individual premiums and state-level contributions covering most of the rest. For Part C, the Medicare Advantage plans, there is no separate source of funding; instead, money is switched over from the payroll taxes, individual premiums, and government payments that support Parts A and B.

A desire to cut through these legislative distinctions and get to the bottom line helps to explain the phrasing of the warning from the trustees: “[T]he difference between Medicare’s total outlays and its dedicated financing sources is projected to exceed 45 percent of outlays within 7 years.” What exactly are the “dedicated funding sources” for Medicare? As the trustees write: “Dedicated financing sources consist of HI payroll taxes, HI share of income taxes on Social Security benefits, Part D State transfers, Part B drug fees, and beneficiary premiums.”
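The trigger rule itself is mechanical enough to sketch in code. Here is a minimal illustration of the 45-percent test; the function names and the projection numbers are my own made-up assumptions for the sketch, not figures from the trustees’ report:

```python
def general_revenue_share(total_outlays, dedicated_financing):
    """Share of Medicare outlays NOT covered by dedicated financing
    sources (payroll taxes, premiums, state transfers, etc.)."""
    return (total_outlays - dedicated_financing) / total_outlays

def funding_warning_triggered(projections, threshold=0.45, horizon=7):
    """A determination of 'excess general revenue Medicare funding' is made
    when the gap exceeds the threshold within the projection horizon.
    `projections` is a list of (total_outlays, dedicated_financing) pairs,
    one per fiscal year, starting with the current year."""
    return any(
        general_revenue_share(outlays, dedicated) > threshold
        for outlays, dedicated in projections[:horizon]
    )

# Illustrative (invented) projections in $billions: outlays growing 6%/year
# from the 2020 total of $926 billion, dedicated financing growing 4%/year
# from an assumed $520 billion, so the gap widens over time.
projections = [(926 * 1.06**t, 520 * 1.04**t) for t in range(10)]
print(funding_warning_triggered(projections))  # → True
```

With these invented growth rates, the gap crosses 45 percent of outlays in the third projection year, well inside the seven-year window, so the warning fires.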

For Medicare, let’s simplify the picture by looking at the entire program, not the separate parts. Under what seems like the inexorable pressure of higher health care spending, Medicare is evolving in ways that have received little public attention.

Back in 2000, Medicare spending was about 2.2% of GDP. In 2020, total Medicare spending was about 4% of GDP. Looking out 20 years to 2040, total spending is projected at 6% of GDP. It’s worth noting that these projected long-term costs are likely to be conservative. The actuaries who produce the underlying calculations are required to focus on projections under current law. Thus, Congress has for some years been playing a merry game of legislating cost reductions (like lower payments to Medicare physicians) that don’t kick in until five or ten years down the road. These cost reductions don’t actually take place; instead, they keep getting postponed. A cynic might say that their only real purpose is to pretend to do something about future costs.

Back in the year 2000, general federal tax revenues were about 28% of Medicare’s income, while payroll taxes covered 60% and individual premiums covered 9%. Now in 2020, the share of Medicare’s income from payroll taxes has fallen to 34%; to counterbalance that change, the share from individual premiums has risen to 15% and the share from general revenues has risen to almost 47%.

Looking ahead, the share of Medicare income covered by payroll taxes is projected to keep falling to 25%, while the share covered by individual premiums is projected to rise to nearly 20% and the share from general federal revenues will reach about 50%.

In short, just 20 years ago, Medicare was a much smaller program primarily (60%) funded by payroll taxes. Looking ahead 20 years, it is a much larger program, funded primarily by a combination of general revenues (50%) and individual premiums (20%). This shift is really what the “Medicare funding warning” from the trustees is all about.

The working assumption behind Medicare’s funding warning seems to be that any shortfalls will just be covered by general fund revenues. For the short term, this is a workable if inelegant solution. But over longer time horizons, it becomes a problem. Higher general fund spending competes with other budgetary priorities. Higher health insurance premiums for the elderly compete with the rest of their household budgets, too.

Continuing to ignore possible solutions is short-sighted. On the issue of climate change, a number of people are strongly in favor of taking near-term and fairly costly steps for a long-run benefit. They offer harsh criticism to anyone who says: “Maybe the underlying assumptions are wrong. And if they are correct, we’ll worry about it later.” But the fiscal predictions of the Medicare actuaries are based on much simpler calculations than models of atmospheric climate change and its effects on Earth and the economy. The effects come sooner. And the same basic lesson holds: if you wait to take action until the long-term problem arrives, the steps needed at that time are going to be substantial or even extreme. Taking actual real steps in the near term helps to avert the need for extreme steps later.

Chesterton: “The Old Man is Always Wrong, and the Young People are Always Wrong about What is Wrong With Him”

Why are young people so often protesting against the conditions they have inherited from the older generation? GK Chesterton offered a hypothesis in his “Our Note Book” column for the Illustrated London News (June 3, 1922): “[T]he old man is always wrong, and the young people are always wrong about what is wrong with him. The practical form it takes is this: that, while the old man may stand by some stupid custom, the young man always attacks it with some theory that turns out to be equally stupid.”

Moreover, Chesterton argues, the young protesters often in practice turn out to be less focused on getting rid of the previous evil than they are on force-feeding their new theory to the older generation. Chesterton writes: “In other words, the young man is not half so eager to get the wicked old man to abolish his wicked old law, because it is wicked, as he is to convince him of the final and infallible truth of some entirely new law, of which the consequences might be equally wicked. The young man is much more interested in ramming his new theory down the old man’s throat than he is in tearing the other infernal infamy out of the old man’s heart.”

Thus, instead of the young protesters focusing on what should be common ground–the past evils that should be overturned–they present to the older generation the juicy target of brand-new theories ripe for debunking. Both older and younger generations can then dispute the new theories, while neither does a very good job of actually coming to grips with the reality of the past evils. Here’s Chesterton:

[I]t is always easy to talk about an old man as if he had always been old, or about young people as if they would always be young. They are no nearer to solving the recurrent riddle of humanity, the family quarrel in so far as it does really run through all history. If the rising generation had always been wise, we should have risen to a great deal more wisdom by this time. But the rising generation very often was wise; and the real interest is in how it could be so foolish when it had been so wise.

I believe what really happens in history is this: the old man is always wrong, and the young people are always wrong about what is wrong with him. The practical form it takes is this: that, while the old man may stand by some stupid custom, the young man always attacks it with some theory that turns out to be equally stupid. This has happened age after age: but to make it quite clear I will take an abstract and artificially simple case. Suppose there was a really barbarous and abominable law at some stage of history. Let us say that a peasant population must be restricted by every sixth child being killed or sold into slavery. I do not remember anything quite so bad as that in the past; it seems to savour more of the scientific programmes of the future. Some of the eugenists or the experts in birth control might perhaps favour it. But there have been things nearly as bad, things at which our blood boils even in reading about them in a book. We wonder how any old men could be so vile as to defend them; we very rightly applaud the young men who called them indefensible. And we are amazed that anything so indefensible seemed so long to be indestructible. Now the real reason is rather odd.

The curious thing that happens is this. We naturally expect that the protest against that more than usually barbaric form of birth control will be a protest of indignant instinct and the common conscience of men. We expect the infanticide to be called by its own name, which is murder at its worst; not only the brand of Cain but the brand of Herod. We expect the protest to be full of the honour of men, of the memory of mothers, of the natural love of children. But when we look closer, and learn what the rising generation really said against the rotten custom, we find something very queer indeed. We do not find the young revolutionists chiefly concerned to say: “Down with King Herod who murders babies!” What they are chiefly concerned to say, what they are passionately eager to say, is something like this: “What can be done with an old fool who has not accepted the Law of Melioristic Ultimogeniture? He has not even read Pooch’s book! Nothing can be done till we have compulsory instruction in the New Biology, which shows that the higher type is not evolved until the sixth child, the previous five being only embryonic experiments.” In other words, the young man is not half so eager to get the wicked old man to abolish his wicked old law, because it is wicked, as he is to convince him of the final and infallible truth of some entirely new law, of which the consequences might be equally wicked. The young man is much more interested in ramming his new theory down the old man’s throat than he is in tearing the other infernal infamy out of the old man’s heart. He is more excited about the book than the baby. For him the bad law is a barbaric impediment that will soon disappear. It is Pooch’s great discovery, of the inevitable superiority of the sixth child, that is important and will remain. Now in fact Pooch’s discovery never does remain. It always disappears after doing one good work–inspiring the young reformer to get rid of the bad and barbarous law against babies.
But it cuts both ways; for it gives the old man, who has seen a good many Pooches pass away in his time, an excuse for calling the whole agitation stuff and nonsense. The old man is half ashamed of defending the old law, but he is not in the least ashamed of jeering at the new theory. And the young man always plays into his hands, by being more anxious to establish the theory than to abolish the law.

Now that has happened in history, century after century. … In short, the young man always insists that his new nostrum and panacea shall be swallowed first, before the old man gives up his bad habits and lives a healthy life. The old man knows the new medicine is a quack medicine, having seen many such quacks; and is only too delighted with an excuse for putting off the hour of repentance, and going his own drunken, dissipated old way. That cross-purpose is largely the story of mankind.

There’s an interesting potential lesson here. Perhaps it is more socially productive to focus pragmatically on addressing social evils and ills directly, rather than getting sidetracked into a desperate desire to ram our theories down the throats of others.

Harold Demsetz: Dissecting the Nirvana Viewpoint

Here are two different ways to see the world. One approach looks at current problems in the context of alternative real-world institutional arrangements, recognizing that all the real-world choices will be flawed in one way or another. The other approach looks at current problems as juxtaposed with an ideal outcome. In a 1969 essay, Harold Demsetz critiqued that second approach, calling it the “nirvana viewpoint.” He also argued that economics might be prone to that approach. The Demsetz essay is “Information and Efficiency: Another Viewpoint” (Journal of Law & Economics, April 1969, 12: 1, pp. 1-22). He sets up the problem this way:

The view that now pervades much public policy economics implicitly presents the relevant choice as between an ideal norm and an existing “imperfect” institutional arrangement. This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem; practitioners of this approach may use an ideal norm to provide standards from which divergences are assessed for all practical alternatives of interest and select as efficient that alternative which seems most likely to minimize the divergence.

The nirvana approach is much more susceptible than is the comparative institution approach to committing three logical fallacies–the grass greener fallacy, the fallacy of the free lunch, and the people could be different fallacy.

Given that economists are not usually accused of being overoptimistic fantasists about the possibilities of solving real-world problems, why does Demsetz fear that they may be prone to the nirvana approach? He argues that a nirvana bias may be built into the arguments commonly used by economists. For example, economists often point to what they call “market failures,” like the negative externalities that lead unfettered free markets to excessive pollution, the positive externalities that lead unfettered free markets to underinvestment in R&D or education, the common pattern of unequal distributions of income in a market economy, and so on. In economic theory, each of these “market failures” has a potential solution in terms of taxes, subsidies, or redistributions that could address the problem at hand.

But as the non-economists are fond of reminding the economists, real life doesn’t happen inside an economic model. When there are real-world problems, a mixture of government and private institutions often evolves to address them. When government enacts an economic policy, it doesn’t happen inside an economic model, either. Instead, the new policy is enacted via a political process run by self-interested legislators and then implemented by a regulatory process run by self-interested regulators. As Demsetz saw it, the real choice is not between one model and another, but between the set of existing institutional arrangements for addressing a problem and what shape a new and untested set of institutional arrangements would actually take.

Of course, this argument doesn’t suggest that all policies will have poor effects or will be self-defeating. It does suggest that pointing to economic models of a “market failure” and a government-enacted “solution” should only be a starting point for additional discussion of real-world institutions as they exist. More broadly, the Demsetz argument suggests that one should beware of those peddling nirvana–especially because many of those making such claims do not even have a basic textbook model to back up the starting point of their argument. They just have a complaint and a promise.

George Bernard Shaw wrote a play called Back to Methuselah, with a scene where the Serpent in the Garden of Eden is trying to persuade Eve to eat the apple, with a promise that it will lead to ever-lasting life. The Serpent says to Eve: “When you and Adam talk, I hear you say ‘Why?’ Always ‘Why?’ You see things; and you say, ‘Why?’ But I dream things that never were; and I say, ‘Why not?’” That statement by the Serpent is the nirvana approach in action.

Finding the Path Between Berks and Wankers

An ongoing challenge in writing and editing is to avoid either being obsessive about detailed rules of grammar and usage or being proudly ignorant of any such rules. In his 1997 book The King’s English, Kingsley Amis framed the choice as one between berks and wankers. He wrote:

Not every reader will immediately understand these two terms as I use them, but most people, most users of English, habitually distinguish between two types of person whose linguistic habits they deplore if not abhor. For  my present purpose, these habits exclude the way people say their vowel sounds, not because these are unimportant but because they are hard to notate and at least as hard to write about. 

Berks are careless, coarse, crass, gross and what anybody would agree is a lower social class than one’s own. They speak in a slipshod way with dropped Hs, intruded glottal stops, and many mistakes in grammar. Left to them the English language would die of impurity, like late Latin.

Wankers are prissy, fussy, priggish, prim and of what they would probably misrepresent as a higher social class than one’s own. They speak in an over-precise way with much pedantic insistence on letters not generally sounded, like Hs. Left to them the language would die of purity, like medieval Latin.

In cold fact, most speakers, like most writers if left to themselves, try to pursue a course between the slipshod and the punctilious, however they might describe the extremes they try to avoid, and this is healthy for them and the language.

As someone who clearly is in more personal danger of falling into the wanker category, I suppose I must resolve to let my inner berk out to play more often.

How Stalin and the Nazis Tried to Copy Henry Ford

In the early decades of the 20th century, the automakers of Detroit–and Henry Ford in particular–exerted a remarkable influence. These decades included multiple episodes of economic instability and world war–and that was even before the catastrophe of the Great Depression. In particular, authoritarian rulers of the 1930s were drawn to a model that seemed to combine large-scale facilities under central control with being at the cutting edge of modern industry. Stefan Link tells the story in his recent book, Forging Global Fordism: Nazi Germany, Soviet Russia, and the Contest over the Industrial Order (Princeton University Press, 2021).

When it came to developing fresh principles after the bankruptcy of the old economic order in the global crisis of the 1930s, it was Detroit that drew all modernizers of postliberal persuasion, left and right, Soviets and Nazis, fascists and socialists. To be sure, uncounted engineers and admirers had come to see Ford’s factories since the 1910s, when the old Highland Park, forge of the Model T, was first equipped with an assembly line. Yet in the 1930s Ford’s new factory—the much expanded, vertically integrated River Rouge—became the destination of engineering delegations bent on wholesale technology transfer. Italian, German, Russian, and Japanese specialists traveled to Detroit, spent weeks, months, even years at River Rouge to learn the American secret of mass production. With the Gorky Automobile Factory (Gaz) in central Russia, the Soviet Union opened its own “River Rouge” in 1932. In 1938, Hitler laid the cornerstone of the Volkswagen works. Nor were Nazis and Soviets alone. Toyota began operating its Koromo plant in 1938, and Fiat welcomed Mussolini for the opening ceremony of the brand-new Mirafiori facility in 1939. As is easily seen, these Depression-era exchanges laid the groundwork for the infrastructure of global Fordism after World War II. …

Evidently, both the Nazi and the Soviet self-diagnosis of underdevelopment vis-à-vis the United States was soaked in existential ideological sweat. This diagnosis, however, prescribed a simple and precise course of action: beat America with American methods. Lest Germany become “America’s prey,” it was necessary “to study the means and mechanisms of the Americans,” said Theodor Lüddecke, one of Fordism’s most vocal advocates on the Weimar right. Similarly, Arsenii Mikhailov, one of Fordism’s ardent Soviet champions, argued that the goals of the Five-Year Plan required “a swift and complete switch to the most advanced American technology.”

The making of Gaz, the Gorky “Auto Giant,” resulted from this course of action. Gaz marked an extraordinary attempt to transfer American technology wholesale and to indigenize it in a social and economic environment that seemed hardly ready for it. Soviet workers and engineers indeed struggled mightily to adopt what they took from Detroit. But despite enormous sacrifices and waste, somehow, by decade’s end, a capable motor mass production industry had materialized in central Russia. … Germany could dip into deep homegrown technological capabilities that the Soviet Union lacked and therefore struggled somewhat less to assimilate Fordism. The result was a double reception. The Volkswagen plant echoed the Soviet strategy of comprehensive copying. But the Nazi regime also tried (and largely succeeded) to harness the industrial acumen of Ford and General Motors, both of which had branches in Germany, to its own ends. Ensnaring the Americans in a web of threats and incentives, the regime achieved pervasive, dollar-subsidized transfers of mass production technology into Germany.

Stalinist Russia and Nazi Germany attempted to cut deals with a number of other leading American firms as well. Japan and Italy also sought to import Fordism.

In Japan, no automobile industry existed after World War I, and during the Twenties both Ford and General Motors built assembly plants that fully covered the needs of the domestic market. By the mid-Thirties, however, the militarist government began to support fledgling attempts by Japanese industrialists to nurture a homegrown auto production. In 1936, the government passed the notorious Automobile Manufacturing Enterprise Law, a measure that discriminated against the American firms, penalized imports of vehicles, and encouraged Nissan and Toyota—weak and inexpert producers compared to the Americans—to expand investments and update their technologies. These measures eventually forced GM and Ford to exit the Japanese market and allowed Nissan and Toyota to acquire the Americans’ factory machinery and hire their workers and engineers.

In Italy, too, the regime tolerated the presence of American carmakers only for a brief period after World War I. In 1929, Mussolini personally thwarted an attempt by Ford to expand its presence in Italy, declaring that American competition would devastate the domestic automobile industry. Instead, Mussolini decisively backed the Turin-based carmaker Fiat, which benefited not only from the regime’s stifling labor policies but also from its military orders, export promotion schemes, and generous foreign exchange allocations for technology from the United States. Ford and GM eventually left the Italian market, while Fiat built its own brand-new, Rouge-style megaplant. Opened in 1939, Mirafiori was very similar to the Nazi Volkswagen project: a Fascist white elephant, valuable for propaganda purposes but also a monument to how assiduously the regime sought to alter its place in the global industrial pecking order.

Link tells the story at book-length, and I certainly won’t attempt to recapitulate it here. But I do want to point out some of the ironies involved.

  1. Many people think of giant multinational manufacturing firms as the epitome of capitalism. But for authoritarian, socialist, and communist rulers, it seemed obvious that giant manufacturing firms could operate perfectly well as instruments of state power and control. Indeed, it has often been market-oriented economists who were more skeptical of whether extraordinarily giant firms were really needed for economic efficiency, or whether efficiency and innovation might be better served by a group of middle-sized firms competing with each other.
  2. For the Soviet Union in particular, the obsession with giant manufacturing firms led to central planning for giant firms all over the country. Indeed, Soviet central planners often seemed to equate sheer size of a factory with technological advancement. The size of the Soviet planned factories often outstripped the capacities of the central planners, leading to grotesque inefficiencies.
  3. When teaching intro econ students about economies of scale–that is, the common situation when larger production volumes can drive down average costs–a common question is whether there can be diseconomies of scale. Can there be cases when size gets so large that average costs rise? In a reasonably competitive market, this won’t persist, because the very large high-cost firm will go out of business. But if the ultra-large firm is sustained by the government, then yes, diseconomies of scale become possible.
  4. Each year, about 5% of global car production happens in the US economy; the US has not been the center of the global car industry for a long time. More broadly, manufacturing jobs have been a declining share of total US jobs for decades–all the job growth has been in service- and information-related industries instead. Looking ahead, we seem to be living in a world where a combination of automation, robotics, and information technology may greatly limit the growth of manufacturing jobs anywhere in the world. Whatever the allure and tradeoffs of Fordism in the 20th century as a tool for economic development and government power, the plausible economic models for the 21st century look rather different.
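The diseconomies-of-scale point in item 3 can be illustrated with a stylized cost function–the numbers and functional form here are purely hypothetical, chosen only so that spreading fixed costs pushes average cost down at first, while a coordination/congestion term eventually pushes it back up:

```python
def average_cost(q, fixed=100.0, marginal=1.0, congestion=0.001):
    """Stylized average cost: fixed costs spread over output drive AC down,
    while a quadratic congestion/coordination term drives it up at huge scale."""
    total_cost = fixed + marginal * q + congestion * q ** 2
    return total_cost / q

# Economies of scale at first: AC falls as output grows...
assert average_cost(100) < average_cost(10)
# ...but diseconomies set in once the congestion term dominates.
assert average_cost(10000) > average_cost(1000)
```

In a competitive market, a firm operating on the rising part of this curve would be undercut; only a firm propped up by the state can sit there indefinitely.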

Can a Dose of Randomness Help Fairness?

As the philosophers teach, there are many ways to think about “fairness.” One common approach is to identify a source of what is deemed to be unfairness, or a practice that leads to outcomes deemed to be unfair, and then to address this specific practice or outcome in a direct way. However, there is also a line of philosophy which suggests that thinking about “fairness” might usefully include an element of randomness. For example, the outcome of a lottery is “fair,” not in the sense that it corrects or addresses other aspects of unfairness, but in the sense that every ticket has an equal chance of winning.

Some recent research takes this connection between randomness and fairness and puts it to work. For example, in the July 2021 issue of the American Economic Review, Rustamdjan Hakimov, C.-Philipp Heller, Dorothea Kübler, and Morimitsu Kurino discuss “How to Avoid Black Markets for Appointments with Online Booking Systems” (111:7, pp. 2127-51).

They point out a number of examples of online booking systems where entrepreneurially-minded scalpers snap up many or all of the reservations, and then re-sell them in a secondary black market. This happened in California with prime-time appointments at the Department of Motor Vehicles, in Ireland with the offices where immigrants need to get their residential permits and visas, in China with appointments at state-run hospitals, and so on. These were all first-come, first-served settings, which is of course a different working definition of “fairness.” However, the authors suggest an alternative mechanism. They write:

We propose an alternative system that collects applications in real time, and randomly allocates the slots among applicants (“batch” system). The system works as follows: a set of slots (batch) is offered, and applications are collected over a certain time period, e.g., for one day. At the end of the day, all slots in the batch are allocated to the appointment seekers. Thus, the allocation is in batches, not immediate as in the first-come-first-served system. In the case of excess demand, a lottery decides who gets a slot. If a slot is canceled, this slot is added to the batch in the next allocation period, e.g., the following day. Thus, the scalper cannot transfer the slot from the fake name to the customer by way of cancellations and rebookings. We show that under reasonable parameter restrictions, the scalper not entering the market is the unique equilibrium outcome of the batch system. The intuition for this result is that, keeping the booking behavior of the scalper fixed, a seeker has the same probability of getting a slot when buying from the scalper as when applying directly. Flooding the market with fake applications increases the probability that the scalper will receive many slots, but he cannot make sure that he gets slots for his clients, and he cannot transfer slots to the names of the clients
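The batch system described in the excerpt can be sketched in a few lines of code. This is a hypothetical illustration of the mechanism, not the authors’ implementation; the function name and parameters are my own:

```python
import random

def allocate_batch(num_slots, applicants, rng=random):
    """Allocate one day's batch of appointment slots.

    Applications are collected over the period; at the end, if demand
    does not exceed supply, everyone gets a slot. Otherwise a lottery
    picks the winners. Canceled or unfilled slots roll into the next
    day's batch rather than reverting to the canceling party.
    """
    if len(applicants) <= num_slots:
        winners = list(applicants)
    else:
        winners = rng.sample(applicants, num_slots)
    leftover = num_slots - len(winners)  # carried into the next batch
    return winners, leftover
```

Because winners are drawn at random from the whole day’s pool, a scalper who floods the batch with fake applications may win many slots but cannot steer specific slots to specific clients–which is the intuition behind the authors’ result that the scalper stays out of the market in equilibrium.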

Notice how the built-in lottery is a key part of improving the fairness of the outcome in this setting. When you think about it, other examples of randomness as a form of fairness come to mind. For example, when public charter schools are oversubscribed, the usual practice is that they are required to select students through a lottery–and in turn, this randomness gives researchers a tool for evaluating the effectiveness of charter schools by comparing those randomly admitted with those randomly not admitted. In 2008, Oregon opened up its Medicaid program to additional enrollees, but had 90,000 applicants for only 30,000 slots–so it used a lottery to choose who would get the expanded coverage.

A dollop of randomness might have other applications as well. For example, imagine its application in hiring decisions. Say that a company has five internal candidates for a position. If they are all deemed qualified, perhaps the job should be handed out by random chance. There are several possible benefits of such an approach.

One is that when there is a group of candidates who are all deemed to be qualified, it’s a time when biases can begin to creep in. After all, if everyone is qualified, then maybe it feels safer or more comfortable to go with a familiar choice, or a politically connected choice. Joël Berger, Margit Osterloh, Katja Rost, and Thomas Ehrmann explore this possibility in “How to prevent leadership hubris? Comparing competitive selections, lotteries, and their combination” (Leadership Quarterly, October 2020).

As an historical example, they point to a problem that arose at the University of Basel when professors were often being appointed based on political connections, and adding a dose of randomness was part of the answer. They write:

This problem led to changes that took place at the University of Basel in the 18th century. Until the end of the 17th century, the appointment of professors at this university was seriously compromised by the interventions of politically influential family dynasties and by corruption. To combat this problem, a law was passed requiring the appointment of new professors through a procedure that combined competitive and random selection (Burckhardt, 1916), termed the Wahl zu Dreyen or selection from three. The law, which was introduced in 1718, required candidates to submit proof of their qualifications to the governing body of the university, which then decided whether a candidate was eligible or not (Burckhardt, 1916). Subsequently, all the professors of the University of Basel came together to act as the electoral authority. If two or three candidates were eligible, the candidate to be appointed was chosen by lottery. If more than three candidates came into question, the electoral authority was divided by lottery into three electoral colleges. Each college had to propose one candidate by secret voting. Finally, the candidate to be appointed was decided by lottery.
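As a purely illustrative sketch of the procedure described above, the Wahl zu Dreyen might look like this–with one loud caveat: the historical colleges chose by secret vote, which is replaced here by a random draw because the voting itself is not modeled:

```python
import random

def wahl_zu_dreyen(eligible, professors, rng=random):
    """Sketch of Basel's 1718 'selection from three' appointment rule."""
    if len(eligible) == 1:
        return eligible[0]
    if len(eligible) <= 3:
        # Two or three eligible candidates: appoint one directly by lot.
        return rng.choice(eligible)
    # More than three: divide the electoral authority into three
    # colleges by lot.
    order = list(professors)
    rng.shuffle(order)
    colleges = [order[i::3] for i in range(3)]
    # Each college proposes one candidate (historically by secret vote;
    # modeled here as a random draw for illustration only).
    proposals = [rng.choice(eligible) for _college in colleges]
    # The final appointment is decided by lot among the three proposals.
    return rng.choice(proposals)
```

The structural point survives the simplification: once candidates clear the eligibility screen, no faction can guarantee its favorite wins, because a lottery sits between eligibility and appointment.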

The authors argue further that applying a degree of randomness in this way might be useful in choosing top corporate executives: that is, decide on a group of qualified candidates, and then choose by lottery. They argue that top executives may be prone to excessive hubris (and one might add, excessive pay) because they view themselves as selected. If they viewed themselves as literally lucky to be chosen from among equally capable potential replacements, they might act differently. The authors offer some experimental evidence that those who view themselves as selected to be group leaders may be more prone to hubris than those chosen by a mixture of selection and randomness.

Another possible application of randomness in a hiring context is discussed by an overlapping group of authors, Joël Berger, Margit Osterloh, and Katja Rost in “Local random selection closes the gender gap in competitiveness” (Science Advances, November 20, 2020). The authors point to a body of evidence suggesting that women in job settings are more likely to opt out of competitive settings than men, and in general to act in more risk-averse ways than men. Thus, if promotions are set up as a competitive tournament, women may be systematically disadvantaged. They argue, based on some experimental evidence, that a process in which people compete to be designated as qualified for a certain job, knowing in advance that the actual promotion will be determined by lottery, will attract more women without diminishing the quality of the applicant pool.

One can think of ways of applying a dose of randomness in academic settings, too. Imagine, for example, that an organization is evaluating grant proposals. For some of the proposals, everyone is enthusiastic. But for proposals that are encouraging but marginal, there is disagreement over quality. When choosing among these marginal proposals, it’s easy to imagine various biases creeping in: for example, the evaluators–other things equal–might tend to favor the proposals from better-known institutions, or better-known people, or perhaps they will tend to favor more conventional ideas as opposed to higher-risk proposals that might look foolish in retrospect.

To avoid such biases, the Swiss National Science Foundation decided to use a degree of randomness when evaluating grant applications. Some applications evoke virtual unanimity, either in favor of funding or against it. But what about the in-between cases? Rather than pretending that it’s possible to make fine distinctions among these applications–a situation where bias can easily creep in–a proportion of these grants is instead awarded by lottery.

Margit Osterloh and Bruno S. Frey discuss the rationale for applying a similar process at academic journals in “How to avoid borrowed plumes in academia” (Research Policy, February 2020). They note that of the economics articles published in top journals, some turn out to be highly influential and some do not. They also point to a fear that, at least in economics, being published in a top journal is viewed as proof of research of the highest quality, which (at least as judged by how often articles are later cited) is not necessarily true. Given the uncertainties involved, they propose experimentation with a partly random method of selection. If all the referees love a paper, or hate it, then the choice is simple. But for the in-between papers, they suggest random selection. They write:

Our own proposal is the most radical. It is based on the insight that fundamental uncertainty is symptomatic for scholarly work. This is indicated by the low prognostic quality of reviews and the low inter-rater reliability revealed by many empirical analyses. Our suggestion takes this evidence into account. It suggests the introduction of a partly random mechanism. Focal randomisation takes place after a thorough preselection of articles by peer reviews. Such a rationally founded and well-orchestrated procedure promises to downplay the importance (or even “tyranny”) of top journals and to encourage more unorthodox research than today.

Of course, one also sometimes hears proposals for the use of lotteries in admission to selective colleges. It’s not uncommon to hear presidents or admissions officers from such places say “we have many more qualified candidates than we can possibly admit.” So rather than turning over the decision to admissions offices, which will inevitably have shifting agendas and biases of their own about what makes a student “authentic” and full of potential, perhaps the admissions office should just decide who is super-qualified for the school and who is not qualified, and then admit the rest by lottery. It would be hard for qualified applicants turned down by such a process to claim that it was unfair. But perhaps more interesting, it would be openly acknowledged that there was an element of luck in who is admitted–and the hubris that stems from being selected to a selective college might be reduced as a result.

Here’s one more example. In Nigeria, a competition called YouWiN! was launched in 2011 to provide funding to small businesses and start-ups. David McKenzie describes the process in “Identifying and Spurring High-Growth Entrepreneurship: Experimental Evidence from a Business Plan Competition” (American Economic Review, 107 (8): 2278-2307):

The YouWiN! competition was launched in late 2011 by the president of Nigeria, and in its first year attracted almost 24,000 applications aiming to start a new business or expand an existing one. The top 6,000 applications were selected for a 4-day business plan training course, and then winners were chosen to receive awards averaging US$50,000 each, paid out in four tranche payments conditional on achieving basic milestones. The top-scoring plans overall and within region were chosen as winners automatically, and then 729 additional winners were randomly selected from a group of 1,841 semifinalists, providing experimental variation from US$34 million in grants that enables causal estimation of the program’s impact.

Again, choosing most of the grant winners at random, out of those deemed qualified, seems fair in at least one sense of the word.

The idea of randomness as a form of fairness may seem counterintuitive. Much of the time, we think of fairness as the outcome of a process of selection. But selection inevitably has biases of its own, which will be tied up in issues like what information is used in the selection process, who is more or less comfortable being part of the selection, and biases of who does the selecting. None of the examples here suggest that a desired outcome should be totally allocated at random. There’s always a screening process first. But when a high level of uncertainty exists between candidates or projects that seem very similarly qualified, there is a case for using randomness as a way to reduce some of the biases that are always likely to exist in any selection process.

Bad Development Ideas

It has been my tradition at this blog to take a break from current events in late August. Instead, I offer a series of posts about economics, academia, and editing, focusing on comments or themes that caught my eye in the last few months.

Back in 2008, the World Bank published the report of the Commission on Growth and Development, consisting of 19 policymakers and a couple of Nobel prize-winning economists. The Growth Report: Strategies for Sustained Growth and Inclusive Development still rewards reading today. Here, I focus on what Michael Spence, chair of the Commission, later referred to as probably the most popular section of the report–a two-page discussion just called “Bad Ideas.” The Commission wrote (pp. 68-69):

Debates help clarify good ideas, subjecting them to scrutiny and constructive criticism. But debates can also be infected by bad ideas. This poses two difficulties for policy makers. First they must identify bad ideas, because specious proposals can often sound promising. Then, they must prevent them from being implemented. An illustrative list of “bad ideas”, which are nonetheless often brought into the debate and should be resisted, is offered below. We hasten to add that just as our recommendations for good policies are qualified by the need to avoid one-size-fits-all approaches and to tailor the policies to country-specific circumstances, our list of bad policies must also similarly be qualified. There are situations and circumstances that may justify limited or temporary resort to some of the policies listed below, but the overwhelming weight of evidence suggests that such policies involve large costs and their stated objectives—which are often admirable—are usually much better served through other means.

  • Subsidizing energy except for very limited subsidies targeted at highly vulnerable sections of the population.
  • Dealing with joblessness by relying on the civil service as an “employer of last resort.” This is distinct from public-works programs, such as rural employment schemes, which can provide a valuable social safety net.
  • Reducing fiscal deficits, because of short term macroeconomic compulsions, by cutting expenditure on infrastructure investment (or other public spending that yields large social returns in the long run).
  • Providing open-ended protection of specific sectors, industries, firms, and jobs from competition. Where support is necessary, it should be for a limited period, with a clear strategy for moving to a self-supporting structure.
  • Imposing price controls to stem inflation, which is much better handled through other macroeconomic policies.
  • Banning exports for long periods of time to keep domestic prices low for consumers at the expense of producers.
  • Resisting urbanization and as a consequence underinvesting in urban infrastructure.
  • Ignoring environmental issues in the early stages of growth on the grounds that they are an “unaffordable luxury.”
  • Measuring educational progress solely by the construction of school infrastructure or even by higher enrollments, instead of focusing on the extent of learning and quality of education.
  • Underpaying civil servants (including teachers) relative to what the market would provide for comparable skills and combining this with promotion by seniority instead of evolving credible methods of measuring performance of civil servants and rewarding it.
  • Poor regulation of the banking system combined with excessive direct control and interference. In general, this prevents the development of an efficient system of financial intermediation that has higher costs in terms of productivity.
  • Allowing the exchange rate to appreciate excessively before the economy is ready for the transition towards higher-productivity industry.

The list above is illustrative and not exhaustive. Individual countries will have their own list of practices that appear to be desirable but are ineffective. Relentless scrutiny of policies should be an essential element in rational policy making. This due diligence needs to be doubled for policies of the type listed above.

I sometimes like to say that the most important role of economics in practical policy-making may not be in choosing the best option. There may be several options that work pretty well, and choosing the “best” may be a matter of opinion. But economics can help identify and rule out the worst options, and the gains from avoiding the truly awful choices can be quite substantial. 

Reconsidering the “Washington Consensus”

About 20 years ago, I found myself (via a story too long and tedious to relate here) part of a small group of economists travelling in South Africa for a week, meeting with various business, government, and academic groups. Within our little group, we each had some topics on which we would focus the first round of our comments. For example, one person talked about how to structure an emerging telecommunications industry, while another talked about barriers to international trade in agriculture. My own role, at least as perceived by the audience, was to be the defender of American imperialist capitalism. And no phrase was delivered to me with quite the same scorn and disdain–and frequency–as “the Washington consensus.”

In my naivete, I was at first surprised that “the Washington consensus” carried such weight. I knew it was controversial, of course, but I had not realized that it had become such a rhetorical trope for an overall point of view. And like many such phrases, its use in conversation had become disconnected from the original meaning of the term.

For those who want to dig into this issue, and how evidence and thinking about it has evolved with time, the Summer 2021 issue of the Journal of Economic Perspectives has a four-paper symposium on the subject. (Full disclosure: I have worked as Managing Editor of JEP since the first issue back in 1987. All articles in JEP back to the first issue are freely available online.) The first essay, by Michael Spence, sets the tone with “Some Thoughts on the Washington Consensus and Subsequent Global Development Experience.” Spence starts with a useful reminder of how the “Washington consensus” originally emerged and what it actually said. Spence begins:

In 1989, policymakers around the world were struggling to come to grips with the debt crisis and slow growth that had plagued developing economies during much of the 1980s, especially nations in Latin America and sub-Saharan Africa. The International Institute of Economics (now the Peterson Institute of International Economics) held a conference discussing the economic and debt situation, mostly focused on Latin American countries. The conference was run by John Williamson (who died in April 2021), a senior fellow at the institute who specialized in topics related to international capital flows, exchange rates, and development. To focus the conference discussion, Williamson (1990) wrote a background paper that began: “No statement about how to deal with the debt crisis in Latin America would be complete without a call for the debtors to fulfill their part of the proposed bargain by ‘setting their houses in order,’ ‘undertaking policy reforms,’ or ‘submitting to strong conditionality.’ The question posed in this paper is what such phrases mean, and especially what they are generally interpreted as meaning in Washington.”

Williamson (1990) described what he saw as a convergence of opinion about ten policy areas designed to promote stability and economic development that he felt had emerged during the 1980s. With hindsight, it appears that one of the principal targets was bouts of instability in inflation, public finances, and the balance of payments. If one asks who the consenting parties are in this “consensus,” the answer appears to include the US Treasury, the International Monetary Fund and World Bank, think tanks with related agendas, to some extent academia, and over time Latin American governments who came to understand the destructive power of macroeconomic instability with respect to growth. It is noteworthy that in the mid-1990s, inflation in a wide range of developing countries dropped substantially and stayed there.

A few points here are worth emphasizing. In discussions of the economies of Latin America, the 1980s are commonly referred to as the “lost decade” because of the mixture of slow growth, inflation and hyperinflation, and debt crises. The “Washington consensus” was never a broad strategy for economic reform or an overall agenda for economic development and growth. It was a discussion of what governments needed to do to qualify for a debt relief or debt reservicing agreement. Moreover, it was a list of the points that seemed to Williamson to command broad agreement–not the points that were more controversial. In the original 1990 essay, Williamson divided his discussion into 10 areas, but it was not until some years later that he turned them into a short list. Spence reproduces Williamson’s list:

1. Budget deficits . . . should be small enough to be financed without recourse to the inflation tax.

2. Public expenditure should be redirected from politically sensitive areas that receive more resources than their economic return can justify . . . toward neglected fields with high economic returns and the potential to improve income distribution, such as primary education and health, and infrastructure.

3. Tax reform . . . so as to broaden the tax base and cut marginal tax rates.

4. Financial liberalization, involving an ultimate objective of market-determined interest rates.

5. A unified exchange rate at a level sufficiently competitive to induce a rapid growth in nontraditional exports.

6. Quantitative trade restrictions to be rapidly replaced by tariffs, which would be progressively reduced until a uniform low rate in the range of 10 to 20 percent was achieved.

7. Abolition of barriers impeding the entry of FDI (foreign direct investment).

8. Privatization of state enterprises.

9. Abolition of regulations that impede the entry of new firms or restrict competition.

10. The provision of secure property rights, especially to the informal sector.

There are lots of things one can say about this list, but perhaps the first one is that, when countries are being told what they need to do for debt relief, calling it the “Washington consensus” is just terrible public relations. Williamson wrote in a 2004 essay: “I labeled this the ‘Washington Consensus,’ sublimely oblivious to the thought that I might be coining either an oxymoron or a battle cry for ideological disputes for the next couple of decades.”

But let’s set aside the other arguments for a moment and consider a more basic question: Did countries that followed the Washington consensus recommendations more closely tend on average to have better economic outcomes? The answer seems to be “yes.”

In the Summer 2021 issue of JEP, Anusha Chari, Peter Blair Henry, and Hector Reyes discuss “The Baker Hypothesis: Stabilization, Structural Reforms, and Economic Growth.” (Then-Treasury Secretary James Baker laid out some of the basics of what came to be called the “Washington consensus” a few years before Williamson bestowed the actual label.) They look at the specific years that countries adopted various Washington consensus reforms, and what happened before and after. They write:

First, in the ten-year period after stabilizing high inflation, the average growth rate of real GDP in EMDEs [emerging market and developing economies] is 2.6 percentage points higher than in the prior ten-year period. Second, the corresponding growth increase for trade liberalization episodes is 2.66 percentage points. Third, in the decade after opening their capital markets to foreign equity investment, the spread between EMDEs average cost of equity capital and that of the US declines by 240 basis points.

Also in the JEP symposium, Ilan Goldfajn, Lorenza Martínez, and Rodrigo O. Valdés find broadly similar results of a positive effect in “Washington Consensus in Latin America: From Raw Model to Straw Man,” while Belinda Archibong, Brahima Coulibaly, and Ngozi Okonjo-Iweala also find a positive effect in “Washington Consensus Reforms and Lessons for Economic Performance in Sub-Saharan Africa.”

Some recent research papers in other journals agree with the general finding. For example, Kevin B. Grier and Robin M. Grier published “The Washington consensus works: Causal effects of reform, 1970-2015,” in the Journal of Comparative Economics (March 2021, 49:1, pp. 59-72). William Easterly, who has been a critic of the “Washington consensus” in the past, offers an update and some new thinking in “In Search of Reforms for Growth: New Stylized Facts on Policy and Growth Outcomes” (Cato Institute, Research Briefs #215, May 20, 2020, and NBER Working Paper 26318, September 2019).

But as a number of the JEP papers are quick to point out, the fact that the Washington consensus was a generally sensible set of policy recommendations does not address many of the underlying concerns directly.

  1. The major global growth success stories since the formulation of the Washington consensus have happened in Asia: China, India, and others. These countries have followed some aspects of the Washington consensus, but clearly not others (like commitments to privatization, financial liberalization, floating exchange rates, and free trade). The Washington consensus doesn’t seem especially useful in thinking about what caused growth in Asian economies to take off.
  2. In the urgent push of resolving debt crises, some parts of the Washington consensus often got lost in the shuffle: in particular, the #2 recommendation about redistributing public resources “toward neglected fields with high economic returns and the potential to improve income distribution, such as primary education and health, and infrastructure” seemed to be left out when debt relief agreements were actually reached. Indeed, the debt relief agreements sometimes involved cutting government spending in those areas.
  3. The Washington consensus put too little emphasis on real-world transition problems. When a certain domestic industry is opened up to international trade, and many of the local producers are driven out of business, what is the government to do? When a large state-owned company is privatized and becomes a large unregulated private monopoly instead, what is the government to do? Praying to the gods of “it will be all right in the long run” is not a useful answer.
  4. In general, the Washington consensus seems to neglect the need for broad political and social buy-in on reforms, and thus feels like a mandate handed down from on high.
  5. Some topics that seem important for growth don’t seem to be explicitly addressed in the Washington consensus. It’s generally believed that economic growth comes primarily from a society that makes use of new technologies developed elsewhere and creates new technologies of its own. Some bits and pieces of the Washington consensus list can be interpreted in these terms, but it’s not an explicit focus.
  6. Some topics that seem important for social cohesion don’t seem to be addressed. For example, there is no mention of reducing corruption or crime, improving the environment, or an explicit goal of reducing inequality.

It would of course be a little silly to treat a set of reforms partially carried out by some countries in the late 1980s and into the 1990s as the sole factors determining economic performance since then. Thus, along with these kinds of concerns, Archibong, Coulibaly, and Okonjo-Iweala point out in their discussion of Africa’s growth experience that there is a general surge in Africa’s growth starting around 2000. One reason for that surge was growth in democracy: they discuss the “wave of democratization in the 1990s, with the number of countries that held multi-party elections increasing from just two (Botswana and Mauritius) before 1989 to 44 of 48 countries—or 92 percent of sub-Saharan Africa—by mid-2003 (Lynch and Crawford 2011). This had the effect of encouraging investment in infrastructure and in pro-poor policies.” In looking at the economic successes of Africa in the last two decades, they write:

[I]t is not obvious that the market-oriented reforms emphasized by international financial institutions are the best or only route to successful economic development. Skeptics of market-oriented reforms in Africa point out that in many successful development efforts around the world, including many countries across Asia, governments played a prominent role for much of the critical phase of their economic development. Historically, many of today’s developed economies did not fully embrace free market economies in the earlier phases of their economic development, which instead involved substantial state involvement including industrial subsidies and infant industry protection (for a discussion of the development experience of today’s advanced economies, one useful starting point is Chang 2002). In Africa, many of these same practices used at other places and times were frowned upon by proponents of market-oriented policies. But before countries of sub-Saharan Africa fell into the debt crisis of the 1980s, many of them had experienced success in the period immediately post-independence in the 1960s and 1970s (Mkandawire 1999). Indeed, some of the policies that were abandoned in favor of market-oriented reforms had rational, development-motivated justifications.

More broadly, what seems to have happened is that, among both supporters and detractors, the “Washington consensus” became a phrase that was used to refer to a recommendation for largely unfettered markets and limited government. This is why Goldfajn, Martínez, and Valdés, in their essay about Latin America, refer to the argument as a “straw man.” They write:

In current public policy debates in Latin America, controversy over “neoliberalism” dwarfs interest in the Washington Consensus. Neoliberalism is the straw man most commonly held up as responsible for Latin America’s economic problems. According to our calculations using the Google Books Ngram Viewer, books published in 2019 in Spanish had 70 times more references to “neoliberalism” than to the “Washington Consensus.”

But neoliberalism is not a clearly defined concept in economics. In public discussion, neoliberalism is narrowly associated with a laissez-faire view (à la Hayek) and perhaps also with extreme monetarism (à la Friedman), and it is sometimes equated with rather orthodox and pro-market reforms. Neoliberalism has also been identified with policies that disregard some relevant aspects of development, such as inequality and poverty, and neglect any role for the state. More importantly for the issues discussed here, critics have sometimes caricatured the Washington Consensus as a neoliberal manifesto. As described by Thorsen (2010, p. 3), neoliberalism has become “a generic term of deprecation to describe almost any economic and political development deemed undesirable.” The Washington Consensus should not be mechanically associated with this neoliberal straw man.

As shown in this paper, the Washington Consensus was a list of recommendations that was partially adopted with mixed results, some of which were satisfactory and others clearly not. In our view, without some subset of the Washington Consensus policies, it would have been difficult, if not impossible, to achieve macroeconomic stability and to recover access to foreign financing in the late 1980s and early 1990s. The main risk in Latin America at present is that economic populism will gain ground and policymakers will discard the Washington Consensus policies altogether.

One lesson that should have been learned in the 1970s and 1980s, and that gave birth to the “Washington consensus” idea, is that extreme macroeconomic instability is not good for growth or the standard of living.