The “Gobbledygook” of Unreadable Prose

Sure, you are sometimes frustrated by the quality of writing in your organization. But are you as frustrated as Maury Maverick, a two-term member of the House of Representatives and later Mayor of San Antonio, Texas, was after he had been tasked with running the Smaller War Plants Corporation during World War II? The memo in which his frustration boiled over, among its other merits, is typically treated as the origin of the word “gobbledygook” as a synonym for unreadably obscure prose.

Medicare: Becoming a Channel for Private-Sector Insurance

When Medicare started, it was essentially fee-for-service health insurance–that is, health care providers were reimbursed for the specific services they provided. Part A of Medicare focused on inpatient hospital care (although it also includes aspects of skilled nursing care, hospice care, and home health care), and Part B focused on physician services (although it also includes ambulance care, some durable home medical equipment, some mental health care, and other services). But there is an obvious issue with fee-for-service care: the provision of health care services is shaped by the needs of the patient, but also by the local practice patterns and economic incentives of health care providers. Thus, although Medicare is a national program, and the payroll taxes to support Medicare are the same everywhere in the country, Medicare prices and services provided can vary considerably by location (for example, see here).

Thus, the next main change to the original structure of Medicare did not follow the fee-for-service approach. In 1997, President Bill Clinton signed a law establishing Medicare Part C, under which Medicare patients could choose, instead of getting fee-for-service coverage through Parts A and B, to have the government direct payments to a private health insurance company–in effect, the government buys health insurance on your behalf. This innovation wasn’t completely new: some people had been getting their Medicare through private insurance going back to the 1970s. But in the late 1990s, the “Medicare Advantage” option, as Part C is often called, became available to all.

In a study done for the Kaiser Family Foundation, Meredith Freed, Jeannie Fuglesten Biniek, Anthony Damico, and Tricia Neuman look at “Medicare Advantage in 2022: Enrollment Update and Key Trends” (August 25, 2022). They point out that the share of Medicare enrollees signed up for Medicare Part C is nearly half and rising quickly. They cite projections that Medicare Advantage may cover more than 60% of all Medicare patients 10 years from now.

Why are people choosing Medicare Advantage plans? The main reason is that the insurance plan is required to cover everything in Medicare Parts A and B, but it is also allowed to provide additional services–at no additional cost to the patient. Some common add-ons include certain vision, hearing, and dental services, and sometimes services like transportation to the doctor and subsidies for joining a health club. In addition, Medicare Advantage plans can be customized, so that they provide more coverage (say, lower co-pays) for the services you know you are more likely to use. In some areas, you can even make a fairly seamless transition from employer-provided health insurance administered by a certain company to a Medicare Advantage plan run by that same company.

Most of the Medicare Advantage plans are open to anyone, although there are also specific plans organized through certain unions and employers, and also some “special needs” plans limited to the low-income elderly who also have certain chronic or disabling conditions.

The rise of Medicare Part C poses some conundrums for advocates of national health insurance programs.

For example, various versions of “Medicare for All” legislation have been proposed. In some versions, this would be a universal national health insurance plan run by the government. Whatever the merits or demerits of such a proposal, actual real-world Medicare is shifting to a choice of plans run by insurance companies and merely funded by the government. The elderly have a choice between having their health insurance administered by the US government or by a private insurance firm–and they are choosing the private firm.

An obvious follow-up question is: How can the private insurance companies afford to offer these Medicare Advantage plans–with limited but real choices of coverage along with additional services? Remember, most of the plans are open to all, so the insurance companies are not “skimming the cream” by signing up only the healthier elderly. And private insurance companies need to make a profit, while Medicare Parts A and B do not.

I do not have a fully convincing answer here. It’s true that the government pays a little more for Medicare Part C, on average, than for Parts A and B, but it’s only about $300 per person per year, so that’s unlikely to be the main driver. My guess is that big insurance companies are better at managing health care costs, and perhaps no worse at managing paperwork and administrative costs. After all, the insurance companies are paid a flat amount per patient, rather than being reimbursed on a fee-for-service basis like traditional Medicare A and B. One can, of course, raise concerns about just how private insurance might seek to control health care costs. But again, the key point is that the elderly are increasingly showing by their actions that they prefer the Medicare Advantage plans, funded by the federal government, but run by private insurance companies.

Social Security: On Hold Until 2034?

For the last 30 years, the actuaries who draw up the long-run projections for Social Security have been forecasting that by the 2030s, there would be inadequate funds to pay promised Social Security benefits. There is no secret here–but nothing has been done. Douglas Arnold describes the situation and makes some judicious predictions about what is likely to happen in “Fixing Social Security: The Politics of Reform in a Polarized Age” (Milken Institute Review, Third Quarter 2022). It’s an excerpt from Arnold’s just-published book of the same title. Arnold writes:

We should not be surprised if Congress does nothing to fix Social Security before 2034, when the trust fund runs dry. Although experts first identified the long-term solvency problem nearly three decades ago, and opinion surveys have repeatedly shown that fixing the problem is one of the public’s top priorities, legislators have never voted on a proposal to fix it — not in committee, not on the floor, not in the House, not in the Senate. … And the principal reason for congressional inaction is clear: insolvency is a long-term problem without short-term consequences. Everything will change in 2034. Suddenly, insolvency will become an urgent problem with enormous consequences. Absent congressional action, an estimated 83 million Social Security recipients — 18 million more than today — will face automatic benefit cuts of 21 percent. Another 8 million people filing for Social Security benefits that year will face similar reductions from what they would otherwise collect.

It’s worth emphasizing Arnold’s comment that “legislators have never voted on a proposal to fix it.” Apparently, both parties would prefer to have the Social Security issue linger, rather than find a way to take credit for fixing it. This seems especially striking to me because addressing the coming shortfall in Social Security is pretty straightforward. The same actuaries who point out that the system isn’t financially sustainable as it stands also offer financial estimates of a list of possible policy choices. I’ve written about these kinds of proposals before (for example, see here and here) and won’t run through them again here. But I sometimes say that if a bipartisan group were locked in a room one morning and told that they wouldn’t be served food until a compromise was reached, I think the group could easily be out with a plan before lunchtime.

If some mixture of proposals like these–a later retirement age, or an increase in the amount of income on which the payroll tax is levied–had been adopted 5 or 10 or 15 years ago, then as the effect of modest changes accumulated over time, Social Security could have been made solvent for many decades into the future with hardly anyone noticing. As the 2034 timeline approaches, the necessary changes have become bigger and bigger.

Arnold argues that Congress is unlikely to take action before the Social Security funding crisis is upon us. After all, the previous time that the Social Security trust fund was about to run out of money, in the early 1980s, Congress waited until the last minute and then appointed a commission (under the leadership of Alan Greenspan, who later became chair of the Fed) to propose a solution. Arnold points out that there are special rules in the federal budget process which require that any changes to Social Security get 60 votes in the US Senate–that is, the changes cannot be made by a simple majority as part of the budget process. Thus, both parties will likely need to sign off.

When Congress decides to show its bravery by appointing another commission in about 2034, what choices will be on the table at that point?

There is a subgroup in both parties that would like to make relatively substantive changes to Social Security. On the Republican side, there is a group that is eager to transform much or all of Social Security into a set of individual retirement accounts, where the federal government would top up the accounts for those with low incomes. For example, if a proposal along these lines had been implemented about 15 years ago, so that holders of these retirement accounts could have benefited from the long run-up in the stock market since about 2010, a lot of people would be feeling a lot better about their retirements just now. But individual accounts would also create a need for a snake’s nest of rules about how such accounts could be invested, whether they could be used as collateral for loans, whether people would be allowed to dip into them for “worthy” purposes like house down-payments or college tuition for their children or paying legal settlements–and so on and so on.

Most Democrats are resolutely against altering Social Security in this way, but a certain subset of Democrats would like to see the benefits of the system substantially expanded. Because Social Security payments are linked to the taxes a person (or a spouse) paid into the system during a working life, those who didn’t pay much into the system can end up in deep poverty when they are older. Of course, when a system is already on track for a financial crash, a substantial addition to the benefits it would pay out would make the financial crunch worse.

By 2034, if nothing happens until then, Social Security beneficiaries would be facing a permanent drop of about 21% in their benefit checks. Making up the difference would take about 1.2% of GDP every year into the future. What is likely to happen?

Well, it would presumably be political suicide for politicians if Social Security benefit rates declined. One can imagine longer-term changes in benefits, like a slow phase-in of a later retirement age or changes in the details of how benefits are calculated; over a few decades, these can make a big difference. But in the moment of the crisis in 2034, it’s unlikely that current benefits will be cut in any meaningful way.

On the tax side, a number of current Republicans have staked out the position that they will not support an increase in payroll taxes. Again, one can imagine policies that would have the effect of a slow phase-in of higher taxes–say, increasing the income taxes that those with high incomes pay on Social Security benefits–but in the moment of crisis in 2034, a jagged upward jump in taxes for the system also seems unlikely.

Back when the Social Security system was being saved in the 1980s, one short-term step was to shuffle some money around from other federal trust funds for a few years. But the other federal trust funds are not flush with money, either. The Medicare trust fund is scheduled to run out in 2028. Borrowing from the federal trust funds for retired civilian and military personnel is not a long-run fix.

Thus, a plausible prediction for 2034 is that Social Security will be “fixed” by turning to general fund tax revenues–rather than the payroll tax–as a source of funding. I suspect this would be done with a lot of strong statements about how it was only a temporary change, but it’s the kind of temporary change that can easily become permanent. As Arnold points out, this outcome is plausible–and would also represent a major change to the operation of Social Security:

Policymakers have had good reasons for not using general funds to subsidize Social Security. President [Franklin] Roosevelt argued that a tight link between taxes and benefits served two important ends. It would protect Social Security from hostile actors — “No damn politician can ever scrap my Social Security program” — but it would also protect the program from unreasonable expansion. Legislators could not expand the program unless they were also willing to increase taxes.

This tight link has worked for nearly a century. The program’s detractors have never found a way to dismantle Social Security because workers earn their benefits by paying a dedicated tax. But neither have the program’s champions been able to expand benefits since 1972 because legislators have been unwilling to increase taxes.

As someone whose current life plan is to start drawing Social Security benefits in 2030, when I turn 70, I contemplate these possibilities with little cheer.

Face-to-face Society and Commercial Society

A common complaint about economics is that, by focusing on exchange, it does not take the morality of that exchange into account. Paul Heyne suggests that one approach to this concern is to recognize that we all live in two worlds: a face-to-face society and a commercial society. Moreover, the moral rules for the two societies are not the same. Here is some of Heyne’s discussion from his 1993 essay, “Are Economists Basically Immoral?” (reprinted in a 2008 volume of Heyne’s collected essays, edited by Geoffrey Brennan and A. M. C. Waterman, “Are Economists Basically Immoral?” and Other Essays on Economics, Ethics, and Religion). Heyne writes:

It seems clear to me that we all of us live simultaneously in two kinds of societies, each with its own quite distinct morality. One is the face-to-face society, like the family, in which we can and should directly pursue one another’s welfare. But we also live in large, necessarily impersonal societies in which we cooperate to our mutual advantage with thousands, even millions, of people whom we usually do not even see, but whose welfare we promote most effectively by diligently pursuing our own welfare. We live predominantly in what Adam Smith called a “commercial society.” When the division of labor, he wrote earlier in The Wealth of Nations, has thoroughly extended itself through society, then everyone lives by exchanging; everyone becomes, he says, in some measure a merchant and the society grows to be what is properly called a commercial society.

Economists have acquired their bad reputation largely by defending commercial society. Commercial society simply does not function in accordance with the moral principles that most people learned in their youth and now take for granted as the only possible principles of morality. In many people’s judgment that makes commercial society and its defenders morally objectionable. Now, I think most of these critics are deeply confused. In a family, or another face-to-face society, the members know one another well. In these situations people can reasonably be expected to take the other person’s specific interests and values into account. But in a large society this is impossible. If I tried to apply in a class of 50 or even 25 students the principles of justice that I try to use in my own family, such as “from each according to their ability, to each according to their need,” I would end up behaving not justly but arbitrarily. And therefore unjustly. I should not be expected to distribute grades to my students on the basis of need. The economist Kenneth Boulding once formulated the issue I’m asking you to consider by contrasting what he called “exchange systems” with “integrative systems.” Integrative systems work through a meeting of minds, through a convergence of images, values and aspirations. Participation in integrative social systems can be deeply satisfying, and I think some participation in integrative systems is essential to human health and happiness. But it is a serious mistake to use the features of integrative systems to pass moral judgment on exchange systems.

Here’s an example of such a mistake. It’s from an essay by the nineteenth-century British art critic John Ruskin, who criticized economists even more harshly than he criticized bad architecture and bad painting. “Employers,” Ruskin said, “should treat employees the way they would treat their own sons” (he didn’t say “daughters” because he didn’t contemplate women working). Does that strike you as a worthy ideal? Even if a hopeless ideal, people might say it’s a worthy ideal, something we should strive for. But I want you to think again. It is a monstrous ideal. The proper term for it is “paternalism”: or, as my wife tells me, “parentalism,” a much better word. Parentalism is a non-sexist word for what we used to call paternalism; it really captures the idea, which is behaving like a parent. Parentalism degrades its victims and corrupts its perpetrators. I do not want the Chancellor of my university to treat me like a child, not even like his own child; he is in reality not my father and should not behave toward me as if he were. Parentalism is appropriate at most in actual parents who know their children intimately, who love them as much as, if not more than, they love themselves, and who recognize that their children have a unique claim on their resources. In those cases parentalism is appropriate. When those conditions are not met, then parentalism is degrading and corrupting. Employers should treat their employees like human beings, of course, with decency and common courtesy. But beyond that they should treat them as people who have something of value to offer the firm for which they will therefore have to be paid. This is not only efficient; it is also less unfair than the parentalist alternative. It is more worthy of both the employer and the employee.

The employer/employee relationship is properly part of the exchange system in which people are equals and do things for one another. Our hankering to personalize our relationships is a romantic revolt against dominant features of the modern world. It’s the kind of yearning that if carried through would have us abandon such coldly impersonal social mechanisms as traffic lights in favor of an integrated system in which the motorists who meet at each intersection form an encounter group to decide who most needs to go through the intersection first.

My usual way of phrasing a Heyne-style argument is to say that we all wear different hats. I’m a husband, father, son, and brother; an employee and a co-worker to the American Economic Association; a cat-owner; a dinner-party host; a baker of birthday cakes for family celebrations; a consumer to the local grocery stores and gas stations; a borrower to the bank that holds our mortgage; a charitable contributor to some local arts groups; a Minnesotan and an American; an occasional volunteer for youth activities; a participant in making our front yard presentable to the neighborhood; and many more. Appropriate behavior varies across these relationships. I hope and try to treat all people with some level of consideration and respect, but it is not self-contradictory that I treat different people differently. It is not self-contradictory that some of these relationships involve monetary payments and others do not. Moral behavior need not look the same across all of these settings.

Robert Solow: Is Economics Disqualified by Ideological Bias?

Every economist is familiar with the complaint that too many of the results of economic studies are decided in advance by free-market ideology. For those familiar with actual academic economists, rather than cardboard cutouts, the complaint that they are exclusively committed to free markets will be risible. Moreover, a number of those who complain about what they perceive as the ideological bias of economics are actually complaining not about the existence of bias, but rather that they would prefer a different kind of bias.

Robert M. Solow (Nobel ’87) offered a more sophisticated answer to concerns over value-free social science and ideological bias in a 1970 essay, “Science and Ideology in Economics” (The Public Interest, Fall 1970). He wrote:

The whole discussion of value-free social science suffers from being conducted in qualitative instead of quantitative terms. Many people seem to have rushed from the claim that no social science can be perfectly value-free to the conclusion that therefore anything goes. It is as if we were to discover that it is impossible to render an operating-room perfectly sterile and conclude that therefore one might as well do surgery in a sewer. There is probably more ideology in social science than mandarins like to admit. Crass propaganda is easy to spot, but a subtle failure to imagine that institutions, and therefore behavior, could be other than they are may easily pass unnoticed. It may even be that perfectly value-free social science is impossible, though I regard that claim as unproven about the kind of work that has genuine claims to be science. But suppose it is so. The proper response, I should think, would be to seek ways to make social science as nearly value-free as it is possible to be (and, of course, to be honest about the residue). The natural device for squeezing as much unacknowledged ideology as possible out of the subject is open professional criticism. Obviously, then, one must protect and encourage radical critics. I think that outsiders underrate the powerful discipline in favor of intellectual honesty that comes from the fact that there is a big professional payoff to anyone who conclusively shoots down a mandarin’s ideas.

Solow says later in the essay, in a comment that to me still has a ring of truth a half-century later:

The modern critics of economics and the other social sciences rarely seem to do any research themselves. One has the impression that they don’t believe in it, that the real object of their dislike is the idea of science itself, especially, but perhaps not only, social science. A sympathetic description of their point of view might be: if the ethos of objective science has led us to where we are, things can hardly be worse if we give it up. A more impatient version might be: what good is research to someone who already knows? The critics, whether from the New Left or elsewhere, do not criticize on the basis of some new discovery of their own, but on the basis that there is nothing worth discovering—or rather that anything that is discovered is likely to interfere with their own prescriptions for the good society. My own opinion is that the good society is going to need all the help it can get, in fact more than most. A society that wants to be humane, even at the cost of efficiency, should be looking for clever, unhurtful, practical knowledge.

Most of the economists I know do not see themselves, at least not in their professional work, as defending ideological barricades. Through their theoretical models and empirical applications, they see themselves as involved in a search for “clever, unhurtful, practical knowledge.”

Why Is It Called “Dynamic Programming”?

“Dynamic programming” is widely used across fields from economics to engineering, and from computer programming to machine learning. It’s essentially a method of breaking down a larger problem into sub-problems, so that if you work through the sub-problems in the right order, building each answer on the previous one, you eventually solve the larger problem. It was invented by Richard Bellman back in the 1950s–thus the eponymous “Bellman equation,” which expresses the big-picture or long-term answer in terms of the answers to a series of sub-problems.
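For readers who like to see the machinery, here is a minimal sketch of the idea in Python, using a standard textbook exercise rather than anything from Bellman’s own work: finding the fewest coins that sum to a given amount (the denominations and the amount are just illustrative). Each sub-problem–the fewest coins for a smaller amount–is solved once, and larger answers are built from the smaller ones already in hand, which is exactly the sub-problem structure the Bellman equation formalizes.

```python
# Dynamic programming sketch: fewest coins needed to reach a given amount.
# best[a] holds the answer to the sub-problem "fewest coins summing to a";
# sub-problems are solved in increasing order of a, each built from earlier ones.
def min_coins(amount, denominations=(1, 5, 10, 25)):
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount]

print(min_coins(63))   # prints 6, i.e., 25 + 25 + 10 + 1 + 1 + 1
```

Working through the sub-problems in the right order is what makes the method cheap: each amount is considered only once, rather than searching over every possible combination of coins.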

But why did Bellman call it “dynamic programming”? He tells the story in his 1984 autobiography, Eye of the Hurricane (p. 159):

I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning is not a good word for various reasons. I decided therefore to use the word, “programming.” I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.

As an aside, the Secretary of Defense mentioned here is Charles Wilson, who almost, but not quite, said “what’s good for General Motors is good for the country,” a story I told here.

I can offer only a moderate recommendation of Bellman’s autobiography. On one side, he’s not an especially smooth writer, nor does he develop characters especially well. On the other side, he’s full of good stories and tart observations. Just before the story told above he describes giving a talk and writes:

I expounded on my favorite theme: “Progress is not due to those who roll up their sleeves and do things the way their fathers did. Progress is due to those who say, `There must be a simpler way.'”

Obviously, this story of “dynamic programming” has been out in the public view for a while. However, I had not heard it before seeing a mention on Beatrice Cherrier’s Twitter feed last month.

Adam Smith and Pin-making: Some Inconvenient Truths

One of the famous anecdotes in economics is about division of labor in a pin factory, as told by Adam Smith starting in the third paragraph of The Wealth of Nations. (One suspects the fame of the story is partly related to the fact that anyone who cracks open the book will find it at the very beginning.) Smith notes in the text that his example was already common at the time he used it. But those who specialize in this area have pointed out that Smith’s example was based on second- and third-hand reporting, while actual studies of pin-making in the 18th century suggest that it may not be a great example of the gains from division of labor.

As a starting point, here’s how Smith tells the pin factory story in the third paragraph of the Wealth of Nations:

To take an example, therefore, from a very trifling manufacture; but one in which the division of labour has been very often taken notice of, the trade of the pin-maker; a workman not educated to this business (which the division of labour has rendered a distinct trade), nor acquainted with the use of the machinery employed in it (to the invention of which the same division of labour has probably given occasion), could scarce, perhaps, with his utmost industry, make one pin in a day, and certainly could not make twenty. But in the way in which this business is now carried on, not only the whole work is a peculiar trade, but it is divided into a number of branches, of which the greater part are likewise peculiar trades. One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on, is a peculiar business, to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands, though in others the same man will sometimes perform two or three of them. I have seen a small manufactory of this kind where ten men only were employed, and where some of them consequently performed two or three distinct operations. But though they were very poor, and therefore but indifferently accommodated with the necessary machinery, they could, when they exerted themselves, make among them about twelve pounds of pins in a day. There are in a pound upwards of four thousand pins of a middling size. Those ten persons, therefore, could make among them upwards of forty-eight thousand pins in a day. Each person, therefore, making a tenth part of forty-eight thousand pins, might be considered as making four thousand eight hundred pins in a day. But if they had all wrought separately and independently, and without any of them having been educated to this peculiar business, they certainly could not each of them have made twenty, perhaps not one pin in a day; that is, certainly, not the two hundred and fortieth, perhaps not the four thousand eight hundredth part of what they are at present capable of performing, in consequence of a proper division and combination of their different operations.

When Smith writes at the start that pin manufacturing is “one in which the division of labour has been very often taken notice of,” what does he have in mind? The standard answer seems to be an entry in Denis Diderot’s Encyclopedia, published in 1755, an answer that turns out to be only partially correct. Jean-Louis Peaucelle and Cameron Guthrie dig deeper in “How Adam Smith Found Inspiration in French Texts on Pin Making in the Eighteenth Century” (History of Economic Ideas, 2011, 19:3, pp. 41-67, available via JSTOR). They write:

Adam Smith used four French sources on pin-making: Journal des sçavans, 1761, Delaire’s article «Pin» in Diderot’s Encyclopaedia (1755), Duhamel’s The pinmaker’s art (1761), and Macquer’s Portative arts and crafts dictionary (1766) … We will see that the original texts do not support Smith’s analysis. The workers were specialized in eight or nine trades, and not eighteen as Smith understood. In a workshop there were many workers for heading but very few for cutting the pins, for example. Attempts to divide this latter operation further were unsuccessful. One of the original texts that Smith did not consult also provides an example of production without specialisation where productivity was a hundred times higher than Adam Smith believed.

However, the works on which Smith was drawing were often second-hand, based on earlier research. The studies of pin-making in France apparently started around 1675, when King Louis XIV asked the French Academy of Sciences to write up a detailed description of the arts and crafts. After some local efforts along these lines by trade inspectors, the first of these reports, by Gilles Filleau des Billette, was completed in 1702, but never published. It described 12 operations involved in making pins, calculated a speed of production for 10 of the operations, and provided a picture of the tools used for each one, with four workers carrying out the tasks.

Thus, Mathieu de Guéroult de Boisrobert, known as Guéroult, did a new study in 1715. He described seven occupations involved in pin-making, although it is not clear that they were done by seven different people.

Peaucelle and Guthrie continue: “The first printed text about pin making is an article in the Universal trade dictionary, published in 1723 by Savary. Jacques Savary des Bruslons (1657-1715) was director of customs and collected for his own use all available information from the central Royal Administration and Academy. This information formed the basis of his dictionary.” Savary wrote that pin-making involved “more than 25 laborers, working in succession.” But this seems to involve an assumption that there was a different laborer for each task named in the earlier reports, rather than each laborer taking on multiple tasks.

Next in line was Jean-Rodolphe Perronet, who wrote an unpublished manuscript in 1739, which, although it was a new work, had a description quite similar to that of Guéroult–perhaps not surprising, given that Perronet succeeded Guéroult as an engineer in Avignon.

All of this leads up to the article in Diderot’s Encyclopedia by Alexandre Delaire: “Delaire wrote as if he had personally observed workshop activity: «this article is written by Mr. Delaire who describes the manufacture of pins in the workshops with the workers themselves» (Delaire 1755, 807). But this was not true. An analysis of the parts of Delaire’s article reveals his sources. The technical vocabulary was copied from previous descriptions of pin making, authored by Savary, Guéroult, Perronet, and Réaumur.” Apparently what happened here was that, in an earlier version of the Encyclopedia, the Jesuits accused Diderot of copying their article on pins. So Diderot hired Delaire, who “had studied literature and knew very little about technology,” to write up a blend of the earlier sources in a way that wouldn’t be susceptible to the accusation of copying.

There’s more about all these linkages, but for the purposes of economists referring to this example today, perhaps the more interesting question is not the interrelationship between all the sources, but whether Adam Smith is basically telling the truth about pin manufacturing. There is reason for doubt.

  1. At a basic level, the count of operations in pin-making is suspect. Delaire is the one who came up with 18 “operations,” but in most places, these did not involve separate workers. At most, there seem to be 8 or 9 different types of workers.
  2. Smith’s estimate of the number of pins that could be made by a single person in a day is made up. The 1723 report from Savary suggests: "The productive power of labour would be some 2,000 pins per day and per pin-maker working without any division of labour. Productivity improvements would not have been as spectacular as Adam Smith imagined. They would have been closer to a factor of 2.4 rather than 240." (For the back-of-envelope arithmetic, see below, after this list.)
  3. In pin-making around Smith’s time, pin-makers with fewer employees seemed to compete with those with more employees on an equal basis. They all seemed to draw from the same pool of unskilled labor, and to pay similar wages. In that sense, pin-making firms with greater division of labor did not seem more efficient, and workers often switched between tasks rather than specializing in a certain task.

In 1794 an inventory of workshops was undertaken in Bourth, 10 km from Laigle. 500 pinmakers worked in 70 workshops. The average of 7 workers per workshop however is misleading. 40% of workers were employed in small workshops of 6 people or less, 40% in workshops of 7 to 9 workers, and 20% in large workshops with 10 to 20 workers (Marchand 1966, 35). Workshops of different sizes coexisted. The organisation of labour varied according to the size of the workshop. It was not standardized. There were no economies of scale, nor any productivity gains in large workshops where pin makers could be more specialized. No workshop would have had a significant advantage over another. The productivity of labour and the level of wages were the same. More divided labour wasn’t more productive. The theory of the division of labour does not hold true in Smith’s first example, that of pin making.

  4. The most important division of labor in pin-making during the 18th century may have been that certain tasks were reserved for men or for women. In his 1715 study, Guéroult noted that there were roughly twice as many women as men in the pin-making firms; the women focused on certain tasks (like putting the heads on pins) and were typically paid less than half as much. This particular aspect of the division of labor typically goes unmentioned in modern pedagogy.
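To make the arithmetic behind point 2 explicit, here is the back-of-envelope version, using Smith’s and Savary’s own numbers:

48,000 pins ÷ 10 workers = 4,800 pins per worker per day with division of labor
4,800 ÷ 20 (Smith’s guess for an untrained solo worker) = a 240-fold gain
4,800 ÷ 2,000 (Savary’s figure for a solo pin-maker) = a 2.4-fold gain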

A few years ago, John Kay looked at this evidence and argued that teaching about the division of labor was useful, even if Smith’s use of the historical evidence was oversimplified at best. Kay wrote:

We might conclude that neither Smith nor Ferguson, neither Diderot nor Delaire, knew anything about the pin factories they claimed to describe. But does it matter? The two Frenchmen are more worthy of censure, because their readers might have been misled into thinking that their description provided guidance as to how you made pins. The two Scotsmen were making an important, and justly influential, argument even if the particular illustration they used in its support was wrong. In business schools, I have sometimes engaged in argument over whether the case studies that teachers use to illustrate issues need be true. The lesson of the pin factory is that it probably doesn’t matter. My students needed to understand the division of labour. Few if any needed to understand techniques of manufacturing pins in eighteenth century France.

In a similar spirit, Peaucelle and Guthrie conclude their article by writing: “The weak probative value of his [Adam Smith’s] pin-making example takes nothing away from the reach of his economic ideas.”

I have qualms on this point. Examples should stand or fall, at least to some extent, on their actual merits, not on whether they are picturesque. At the very least, when using a historical example that is vivid and imperfect, it seems important to know something of the strengths and weaknesses that lurk in the background.

Drowning in Figures

About 35 years ago, when I started my career as the managing editor of an economics journal, producing figures and tables was expensive. My memory is that at Princeton University, where I was based at the time, there was still a skilled draftsman who hand-drew beautiful figures, plotting the points and then putting in a best-fit curve for the data using French curves.

For those born after 1980 (or 1970?), a French curve was a clear piece of plastic with a smooth edge that combined many different types of curves. The design is commonly attributed to the German mathematician Ludwig Burmester (1840-1927). There were three primary French curves: one for hyperbolas, one for ellipses, and one for parabolas. You rotated the French curve over your data until one of the curves seemed to fit the points, and then used the edge of the curve to draw a smooth curved line. Drawing a publishable figure was once a specialized skill. In 8th-grade shop class in my Minnesota junior high school, the boys were required to take a one-quarter class in mechanical drawing, where we sat at sloped drafting tables and learned how to make blueprint-style detailed drawings of a screw, top and side views, with the head, the threaded shank, and the point precisely delineated. French curves were far too high-end for us.

Figures have gone from time-consuming and expensive to dirt cheap. As an economist, I’m generally in favor of improvements in quality accompanied by a sharp drop in price. But the related economic lesson is that when something gets much cheaper, it tends to be used much more often. And when something is used much more often, diminishing returns may arise: while the first few figures may be illuminating and valuable, the last few are likely to range from overkill to actively confusing.

In addition, researchers have an incentive to generate lots of figures and tables as the basis for seminar presentations, so that listeners have something to look at.  By the time that the researcher writes up the paper for publication, long propinquity to the figures has made them part of how the researcher conceives of the ideas.

Put it all together, and the contrast is clear. A few decades ago, it was fairly common for me to receive a first draft with zero figures, or just a few. Now, it’s not unusual for me to receive a first draft with 10, 15, even 20 figures and tables—some of those with four or six or 12 separate panels. The paper can end up feeling like a string of figures, each accompanied by bite-sized chunks of text.

Thus, I would promulgate some guidelines both for reducing the clutter of too many figures and for improving the quality of the remaining figures in written presentations.

  1. Written and spoken communication differ, just as reading and listening differ. In spoken exposition, summarizing a simple pattern with a simple bar graph or a line chart can help a listener focus. But for a reader, sometimes it’s just better to give people the numbers. Just because software will generate a figure doesn’t mean the figure is a good way of explaining the point to readers.
  2. Remember the basics. Figures need a title, and labels on the axes. Multiple lines or bars need labels, too. Use the note under the figure to list sources of data and to explain any abbreviations.
  3. A set of five or seven or eleven lines on a graph, each with its own separate key—one solid, one dashed, one dotted, one dash-dot-dash-dot, one dot-dot-dash, dot-dot-dash, and so on—requires some rethinking.  
  4. It’s generally better—although admittedly not always practical—to label lines or bars directly, rather than using a separate key under the figure, which requires the eyes of the reader to jump back and forth (see the sketch after this list).
  5. A greater ability to use color is one of the great and useful breakthroughs of modern graphics.  Take advantage. But remember that some readers are red/green or blue/green or yellow/red colorblind. When using shades of color, the human eye is best at perceiving multiple shades of green, and worst at perceiving multiple shades of yellow.
  6. Sometimes it’s useful to label points: perhaps they represent countries or US states. But be cautious about labelling every country or state, which can lead to a blur of overlapping and unreadable labels. It’s often better to pick a few points worthy of labelling—perhaps specific points discussed directly in the text.
  7. A figure takes up the space of several hundred words. If one or two sentences can convey the message of a figure, then the figure becomes just a big, weirdly-shaped exclamation point for a message already fully conveyed in the text, and dropping the figure probably makes sense.
  8. If you can’t make a good Figure 1 for an empirical paper with raw data, you ultimately aren’t likely to convince a lot of people (a saying I have heard attributed to the economist Steven Levitt).
  9. In the same way that you will automatically try to keep current with the evolution of terminology in your field of research,   keep expanding your vocabulary for presentation of data.  William Playfair, who invented the basic pie graph, bar graph, and line graph, died almost 200 years ago in 1823. More options are now available: bubble charts, waterfall charts, the bullet-graph variation on a bar chart, Mekko charts, heat maps of the cluster or spatial variety, and others.  
  10. If you’ve been a victim of figures ranging from purposeless to indecipherable, don’t be a perpetrator. If you can’t remember having been a victim, take a closer look at the figures you are producing.
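To make guideline 4 concrete, here is a minimal sketch in Python, assuming the matplotlib library is available and using made-up data and series names: each line is labeled directly at its right-hand end, in the line’s own color, so the reader’s eye never has to jump down to a separate key.

```python
# Sketch: label lines directly at their endpoints instead of using a legend.
import matplotlib.pyplot as plt
import numpy as np

years = np.arange(2000, 2021)
series = {                                   # hypothetical data
    "Region A": 2.0 + 0.05 * (years - 2000),
    "Region B": 1.5 + 0.08 * (years - 2000),
    "Region C": 1.0 + 0.03 * (years - 2000),
}

fig, ax = plt.subplots(figsize=(7, 4))
for name, values in series.items():
    line, = ax.plot(years, values)
    # Put the series name just past its last data point, in the line's color.
    ax.annotate(name, xy=(years[-1], values[-1]),
                xytext=(5, 0), textcoords="offset points",
                va="center", color=line.get_color())

ax.set_title("Hypothetical spending per person, 2000-2020")
ax.set_xlabel("Year")
ax.set_ylabel("Spending (thousands of dollars)")
ax.set_xlim(years[0], years[-1] + 3)         # leave room for the direct labels
fig.tight_layout()
plt.show()
```

The same trick works for bar charts; the point is simply to put each name where the reader’s eye already is, rather than in a key that has to be consulted separately.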

An advertising executive named Fred Barnard popularized the phrase, “A picture is worth a thousand words.” But Barnard was writing a century ago, and his topic was how to design advertising for the sides of moving streetcars. Now that modern statistical software can spit out a suite of figures on demand to be cut-and-pasted over to PowerPoint slides, Barnard’s rule of thumb needs rethinking.

The best figures are worth much more than a thousand words; indeed, a few well-chosen and well-constructed figures can sometimes convey the main arguments of an article (at least to a reader already somewhat knowledgeable in the subject). But figures can also be extraneous and unclear—imposing higher costs on readers than any plausible benefit. Expository wisdom involves learning the difference.


The “Honor” of Publication

Even in this time when any yahoo with a computer can self-publish a blog like this one–or perhaps especially in this time–there is still an honor in being published in a more formal way, in a recognized serial publication or by a known publisher. However, the honor comes with a price. Specifically, you need to deal with the editors at the recognized publication, who may interrupt your life with a lengthy series of requests for multiple rounds of time-consuming revisions and changes. Some of these changes may seem useful to you, and some may not. But it’s the quantity of them, arriving over a period of months or even years, that wears you out.

This process is what the English historian G. M. Young was referring to when he apparently said: “Being published by the Oxford University Press is rather like being married to a duchess: the honour is almost greater than the pleasure.”

I say “apparently” because the quotation is attributed to Young in a letter from Rupert Hart-Davis to George Lyttelton in The Lyttelton Hart-Davis Letters: Correspondence of George Lyttelton and Rupert Hart-Davis (vol. 1, p. 122, letter of April 29, 1956).

The problem can be apparent from the publisher’s side, too. The New Yorker magazine of several decades ago was (in)famous for its detailed editing and fact-checking. The editor at the time, Harold Ross, once wrote in a letter to H.L. Mencken: “We have carried editing to a very high degree of fussiness here, probably to a point approaching the ultimate. I don’t know how to get it under control.” The quotation can be found in Letters from the Editor: The New Yorker’s Harold Ross, edited by Thomas Kunkel (letter of November 9, 1948).

The “honor” of publication is reminiscent of the story commonly attributed to Abraham Lincoln, when he was asked about the “honor” of being the President of the United States during the US Civil War. Lincoln is supposed to have said: “You have heard the story, haven’t you, about the man who was tarred and feathered and carried out of town on a rail? A man in the crowd asked him how he liked it. His reply was that if it was not for the honor of the thing, he would much rather walk.”

Here, I say “attributed” because the quotation comes from a book written many years later: Emanuel Hertz, Lincoln Talks: A Biography in Anecdote (1939, pp. 258-59). I don’t know of a more contemporaneous source. Thus, I include it here in the blog, but a fussy editor might well ask me to rewrite the attribution three times, and then to leave it out altogether.

Toni Morrison: “The Best Part of It All … Is Finishing It and Doing It Over”

Toni Morrison (Nobel ’93) described how, for her, writing is a process of self-editing and rewriting. This is from “The Site of Memory,” which appeared in the 1995 (second) edition of Inventing the Truth: The Art and Craft of Memoir, edited by William Zinsser (pp. 83-102). Here’s Morrison:

By now I know how to get to that place where something is working. I didn’t always know; I thought every thought I had was interesting — because it was mine. Now I know better how to throw away things that are not useful. …

When you first start writing – and I think it’s true for a lot of beginning writers – you’re scared to death that if you don’t get that sentence right that minute it’s never going to show up again. And it isn’t. But it doesn’t matter–another one will, and it’ll probably be better. And I don’t mind writing badly for a couple of days because I know I can fix it — and fix it again and again and again, and it will be better. I don’t have the hysteria that used to accompany some of those dazzling passages that I thought the world was just dying for me to remember. I’m a little more sanguine about it now. Because the best part of it all, the absolutely most delicious part, is finishing it and then doing it over. That’s the thrill of a lifetime for me: if I can just get done with that first phase and then have infinite time to fix it and change it. I rewrite a lot, over and over again, so that it looks like I never did. I try to make it look like I never touched it, and that takes a lot of time and a lot of sweat.