Some Economics of the Middle Class

There are two things that "everyone knows" about the US middle class: it's shrinking in size and the government isn't helping. However, when one digs into the data, the evidence for these claims and some implications of that evidence are considerably more nuanced.

Here, I draw upon a collection of essays called Securing Our Economic Future, edited by Melissa S. Kearney and Amy Ganz, and published by the Aspen Institute Economic Strategy Group late last year. In the first essay, Bruce Sacerdote asks, "Is the Decline of the Middle Class Greatly Exaggerated?" In the second essay, Adam Looney, Jeff Larrimore, and David Splinter look at "Middle-Class Redistribution: Tax and Transfer Policy for Most Americans."

For a flavor of Sacerdote's argument, define the middle class as those with between 75% and 200% of the median income. Then over time, the share of household incomes going to this group does decline. However, a closer look shows that the reason for the decline in the share of household incomes in the "middle class" category is not that the share in the below-75-percent-of-median group has risen, but rather that the share going to the above-200-percent-of-median group has risen.

In an arithmetic sense, this is a decline of the middle class. But it is not a shift to a bimodal or two-humped income distribution with both poor and rich rising. Instead, the middle class still has the largest share of income overall and is declining only because more households are moving up to the higher category.

To think about this shift, imagine a "society" in Scenario A with 100 people: 35 poor people, 51 middle-income people, and 14 rich people. Compare this with Scenario B, which has 35 poor people, 43 middle-income people, and 22 rich people. (The numbers here are chosen to match Sacerdote's chart.) In other words, the change here is that eight middle-income people move up to the "rich" category, and let's hypothesize that no one else is affected negatively.

Is society better off in Scenario A or B? For the purposes of this exercise, it's not fair to invent Scenarios C, D, or E: yes, we might make a case that more broad-based growth, or movement from the poor to the middle class, would be preferable. But the question here is how to think about the actual shift that happened. In the shift from A to B, the size of the middle class has declined and inequality has risen. However, a standard argument for social welfare comparisons makes the plausible claim that when comparing two scenarios in which at least some people are better off and no one is worse off, social welfare as a whole has improved.
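To make the comparison concrete, here is a minimal sketch in Python. It uses Sacerdote's group counts, but the per-person dollar incomes are invented for illustration; the logic does not depend on the particular figures chosen.

```python
# Scenario A -> B shift: the counts match Sacerdote's chart, but the
# per-person incomes below are hypothetical, chosen only for illustration.

def total_income(groups):
    """Sum of (count * per-person income) over (count, income) pairs."""
    return sum(n * y for n, y in groups)

# (count, per-person income): poor, middle, rich
scenario_a = [(35, 30_000), (51, 70_000), (14, 180_000)]
# Eight middle-income people move up to the rich category; everyone
# else's income is unchanged.
scenario_b = [(35, 30_000), (43, 70_000), (22, 180_000)]

middle_share_a = 51 * 70_000 / total_income(scenario_a)
middle_share_b = 43 * 70_000 / total_income(scenario_b)

# The middle class's share of total income falls even though no
# individual's income fell -- a Pareto improvement.
assert middle_share_b < middle_share_a
assert total_income(scenario_b) > total_income(scenario_a)
```

The point of the sketch is that "the middle class's share declined" and "some people became better off while no one became worse off" can both be true of the same shift.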

For those who hesitate before accepting this conclusion, consider running this argument in reverse: say that you start in Scenario B, but then eight people move from the "rich" to the "middle-class" category, returning to Scenario A. In this situation, the share of those in the middle class has risen, and inequality has diminished. But it would seem perverse to argue that a society where some people have become worse off (again assuming no effect on others) is a preferable outcome. Or to put it another way, one has to argue that income equality has such a high value that it is worth "levelling down" incomes so that some people are worse off, even if there is no direct benefit to others from doing so. As Sacerdote writes: "[T]he astounding growth at the top of the distribution need not be making the middle class worse off in absolute terms."

Sacerdote also refers back to the findings of an OECD study in 2019, which argued that "middle class" is associated in people's minds with certain kinds of consumption: in particular, it's associated with a certain level of housing, with relatively easy access to health insurance and health care, and with access to higher education. In the US and around the world, prices for housing, health care, and higher education have risen faster than average incomes. As he points out, one can "ask whether homeownership or college attendance for children in the family has risen or fallen for people in the middle quintiles of the income distribution. I find that since the 1980s, homeownership, square footage of housing consumed, number of automobiles owned, and college attendance have all been rising. The one exception is the modest dip in homeownership that occurred immediately after the financial crisis of 2008."

My sense is that rising inequality has meant that our market-oriented economy will tend to focus more on what it can sell to the rising share of households with higher incomes than on the falling share of households with middle-class incomes. But again, the stress of the middle class looking at this shift, or the stress of those who have crossed over into the upper income class only to find that their housing, higher education, and health care expenses still look pretty high, is quite different from arguing that the middle class is objectively worse off.

For a flavor of the argument from Looney, Larrimore, and Splinter, they look at the "middle class" as representing the middle three-fifths of the income distribution. They write: "The 'middle class' has benefitted from government redistribution in recent decades. For individuals in non-elderly households in the middle three income quintiles (the middle class), the share of federal taxes decreased, and the share of transfers increased. Between 1979 and 2016, market income per person increased 39 percent. But when accounting for taxes and transfers income increased 57 percent. Middle-class income support, however, is a recent phenomenon. Before 2000, market income and income after taxes and transfers grew together. Since 2000, middle-class income after taxes and transfers grew three times faster than market income."

Notice that their analysis is focused on the non-elderly, so the results have nothing to do with changes in Social Security or Medicare. Basically, they find that since about 2000, the US government has been able to use a pattern of gradually higher budget deficits along with the ongoing decline in defense spending (as a share of GDP) to finance lower taxes and higher spending for the middle class. Here are a couple of illustrative figures. 

Here's the fall in taxes paid by the middle class. Of course, part of the reason why the top 20% are paying more is because of rising inequality of incomes. But the shift is bigger than what can be accounted for by that factor alone.

Here's a figure showing the rising share of transfer payments going to the middle three-fifths. To put this another way, many of the expansions of means-tested federal programs over recent decades have been less focused on raising the level of support for the poor, and more focused on expanding the program to the near-poor who would not previously have been covered.

The authors summarize: 

Over the last several decades, more federal support flowed to the middle class, while the payments they made for federal programs through taxes have declined. Focusing just on amounts for non-elderly households, between 1979 and 2016, the share of means-tested transfers received by middle-class households increased from 27 percent to 49 percent. Their share of federal taxes paid fell from 45 to 31 percent. These changes are partially the result of economic trends, which reduced the share of market income earned by the middle class. However, changes in federal tax policy eliminated income tax liability for more middle-class households and reduced average tax rates on all but the highest-income households. Since 1979, the share of nonelderly adults facing no income tax nearly doubled, to about 40 percent. At the same time, average federal tax rates for non-elderly middle-class households fell about 4 percentage points. Since 1979, the cumulative effect of these policies was to boost the increase in non-elderly middle-class incomes by 18 percentage points. Federal support for middle-class households has clearly improved their economic stability and material well-being.
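The 18-percentage-point figure in this summary is simply the gap between the two growth rates the authors report, as a quick check confirms:

```python
# Growth figures reported by Looney, Larrimore, and Splinter for
# non-elderly middle-class households, 1979-2016.
market_income_growth = 0.39          # market income per person
after_tax_transfer_growth = 0.57     # income after taxes and transfers

# Cumulative boost from tax and transfer policy, in percentage points:
policy_boost = (after_tax_transfer_growth - market_income_growth) * 100
assert round(policy_boost) == 18
```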

So if the federal government is doing less to tax and more to pay benefits to the middle class now than a few decades ago, why doesn't it feel that way to so many people?

One main reason is that many of these benefits do not flow to households directly, but rather go to health care providers. For example, the value of excluding employer-provided health insurance from taxation has been rising. But that value doesn't show up in anyone's paycheck. Similarly, the cost of Medicaid has been rising, but this program involves payments from the federal government to health care providers, so the beneficiaries of this program do not see any additional income coming directly to their household. Another reason is that when inequality is rising, the middle class may be more focused on the gap that is opening up with the rich, rather than on the calculations mentioned here. But the authors write: "Since 2000, non-elderly, middle-class incomes grew three times faster when accounting for transfers and federal taxes."

Looney, Larrimore, and Splinter were writing before the COVID-related financial rescue packages of 2020 and 2021. However, they were already pointing out that this shift toward rising federal support for the incomes of middle-class households faced some natural limits: for example, defense outlays as a share of GDP rose from about 3% of GDP in the pre-9/11 days of 2000 to above 4.5% of GDP in 2010, but have since fallen back to the 3% level. Budget deficits were high during the Great Recession, and were far higher in 2020. Looking ahead, it will be hard for the federal government to use lower defense spending and ever-higher deficits to increase incomes of the middle three-fifths.

Will Population Fall for Many Countries–and the World?

During my adult life, the main arguments about global population typically revolved around the topic of whether growth in population would overwhelm natural resources and lead to mass starvation and environmental collapse, or whether growth in population would be accompanied by technological progress in a way that would lead to a generally rising standard of living. Although the dire predictions of overpopulation from the 1960s and 1970s have not materialized on schedule, those concerned about overpopulation could always argue that even if the doomsday predictions were premature or delayed, they were nonetheless on their way. 

However, both sides of this controversy started from an assumption that population levels would continue to rise. In the 21st century, this assumption may be proven false. 

US birthrates have been in decline for some years. William Frey recently reported some historical figures on US population growth from the Census Bureau. Here's population growth by decade. Notice that the rate of population growth in the 2010s is the lowest of any decade in US history.

Here's US population growth annually since 1900. It looks as if 2020 will be the lowest population growth in that time.

The US pattern is reasonably representative of the world as a whole: that is, population growth is slowing, faster in some countries and slower in others. In Japan, Russia, and Spain, for example, total population has already peaked in the last few years and now has started to decline. For a look at projected global population growth, a group of 24 demographers published "Fertility, mortality, migration, and population scenarios for 195 countries and territories from 2017 to 2100: a forecasting analysis for the Global Burden of Disease Study" (The Lancet, October 17, 2020, pp. 1285-1306). Here's a flavor of their results (for readability, I've deleted footnotes and parenthetical references to statistical confidence intervals):

Our reference scenario, based on robust statistical models of fertility, mortality, and migration, suggested that global population will peak in 2064 at 9·73 billion and then decline to 8·79 billion in 2100 …

Responding to sustained low fertility is likely to become an overriding policy concern in many nations given the economic, social, environmental, and geopolitical consequences of low birth rates. A decline in total world population in the latter half of the century is potentially good news for the global environment. Fewer people on the planet in every year between now and 2100 than the number forecasted by the UNPD would mean less carbon emission, less stress on global food systems, and less likelihood of transgressing planetary boundaries. …

Although good for the environment, population decline and associated shifts in age structure in many nations might have other profound and often negative consequences. In 23 countries, including Japan, Thailand, Spain, and Ukraine, populations are expected to decline by 50% or more. Another 34 countries will probably decline by 25–50%, including China, with a forecasted 48·0% decline.

For the United States, their baseline scenario suggests that population will rise from 324 million in 2017 to a peak of 363 million by 2064, before declining to 335 million in 2100. 

When confronted with predictions that are decades in the future, it's of course important to note that they rely on underlying estimates about fertility and longevity, which in turn rely on estimates about factors like education levels and use of contraception. Perhaps there will be a new global baby boom that will surprise the demographers. But it's also important to note that much of the future population has already been born. For example, those who will be 40 or older by the year 2061 are already born right now. A sizeable share of those born in the last few years will live to see 2100. Thus, it's worth some thought as to where we seem to be headed.

One obvious shift is that countries around the world will be much more focused on the elderly, because the elderly will be a much larger share of the population. The demographers writing in the Lancet note:

In 2100, if labour force participation by age and sex does not change, the ratio of the non-working adult population to the working population might reach 1·16 globally, up from 0·80 in 2017. This ratio implies that, at the global level, each person working would have to support, through taxation and intra-family income transfers, 1·16 non-working individuals aged 15 years or older (the working age population is defined by the International Labour Organization as those aged 15 years or older).41 Moreover, the number of countries with a dependency ratio higher than 1 is expected to increase from 59 in 2017 to 145 in 2100. Taxation rates required to sustain national health insurance and social security programmes might be so large as to further reduce economic growth and investment. Insecurity from the risk that these programmes could fail might generate considerable political stress in societies with this demographic contraction …
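The dependency-ratio arithmetic in this passage is straightforward; a small sketch (using invented round-number populations chosen only to reproduce the quoted ratios) makes it concrete:

```python
# Dependency ratio as defined in the Lancet passage: non-working adults
# (aged 15+) per working person. The population counts below are
# hypothetical round numbers that reproduce the quoted ratios.

def dependency_ratio(non_working_adults, working):
    return non_working_adults / working

# 2017: global ratio of 0.80 -- e.g. 4 non-working adults for every
# 5 workers.
assert dependency_ratio(4, 5) == 0.80

# 2100 forecast: ratio of 1.16 -- each worker supports 1.16 non-working
# adults aged 15 or older.
assert abs(dependency_ratio(29, 25) - 1.16) < 1e-9
```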

When thinking of these challenges, one's mind immediately turns to financing of government programs that support the elderly, like Social Security and Medicare in the United States, but that's only the beginning. For example, the US and other countries are going to face an enormous challenge in financing and providing long-term care options for the elderly. There are also likely to be hard-to-predict effects on the rate of economic growth:

Having fewer individuals between the ages of 15 and 64 years might, however, have larger effects on GDP growth than what we have captured here. For example, having fewer individuals in these age groups might reduce innovation in economies, and fewer workers in general might reduce domestic markets for consumer goods, because many retirees are less likely to purchase consumer durables than middle aged and young adults. Developments such as advancements in robotics could substantially change the trajectory of GDP per working-age adult, reducing the effect of the age structure on GDP growth. However, these effects are very difficult to model at this stage. Furthermore, the impact of robotics might have complex effects on countries for which the trajectory for economic growth might be through low-cost labour supply.

These population shifts will alter perspectives on the relative sizes of countries around the world, too. For example, China is now the most populous country in the world, with a population of 1,412 million in 2017. However, China took dramatic steps to reduce fertility back in the early 1970s, later culminating in the "one-child" policy. Thus, the forecast is for China's population to peak in 2024 at 1,431 million, and then fall by nearly half to 731 million in 2100.

The decline in fertility for India started later. India's population was 1,380 million in 2017, but it will overtake China in the next few years, before peaking in these projections at 1,605 million in 2048, and then falling back to 1,093 million by 2100.

Meanwhile, the fertility decline has barely started in Nigeria. Thus, Nigeria\’s current population of 206 million is forecast to rise continually through the rest of this century, and by 2100 the 790 million Nigerians would outnumber the population of China. 
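As a check on these projected figures, the implied declines from peak to 2100 can be computed directly (populations in millions, rounded as cited above):

```python
# Peak-to-2100 declines implied by the projections cited in the text
# (populations in millions, from the Lancet reference scenario).
def pct_decline(peak, end_of_century):
    return 100 * (peak - end_of_century) / peak

china_decline = pct_decline(1431, 731)    # peak in 2024 -> 2100
india_decline = pct_decline(1605, 1093)   # peak in 2048 -> 2100

# China's projected fall from its peak is close to half, consistent
# with "fall by nearly half"; India's is roughly a third.
assert 48 < china_decline < 50
assert 30 < india_decline < 33
```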

I do not know if the problems of flat and falling population will ultimately be bigger or smaller than the problems of continually rising population, but the problems will be different ones, and it's none too early to start thinking about them.

The Curse of Knowledge: Bad Writing, Bad Teaching, and Bad Communication

The "curse of knowledge" refers to a bias documented in various psychology and behavioral economics studies. Once you know something, it can be hard to remember what it was like before you knew it, or to put yourself in the shoes of someone who doesn't know it. It's a barrier to communication.

Iwo Hwerka provides a short readable overview of some of the evidence behind "the curse of knowledge" at the "Towards Data Science" blog (November 26, 2019). For example, one study asked a group of experienced salespeople how long it would take a novice to learn to do certain tasks with a cellphone: their estimates were about twice as long as it actually took.
One aspect of the curse of knowledge is what psychologists sometimes call "hindsight bias." Say that you make a prediction, and later events show that your prediction was incorrect. Do you remember making the incorrect prediction? Or do you find some reason to believe that your prediction was actually correct all along? One of the early studies of this phenomenon was "'I Knew It Would Happen': Remembered Probabilities of Once-Future Things" by Baruch Fischhoff and Ruth Beyth (Organizational Behavior and Human Performance, 1975, 13, 1-16).
For example, one of their sets of questions revolved around President Nixon's trip to China in 1972. Before Nixon went, they distributed a questionnaire to students asking them to estimate the probabilities of specific events: for example, "(1) The USA will establish a permanent diplomatic mission in Peking, but not grant diplomatic recognition; (2) President Nixon will meet Mao at least once; (3) President Nixon will announce that his trip was successful." Several weeks after the trip was done, they then gave the same students the same questions. They asked the students whether these events had actually happened, and asked them to remember what they had predicted. As it turns out, when students believed that an event had happened, they were more likely to believe that they had previously predicted it.

Fischhoff and Beyth refer to this pattern as "creeping determinism," by which they mean that once something has happened, we can't readily imagine it not happening. Scholars of events like wars (say, the US Civil War or World War II) or election outcomes often tend to emphasize that the outcome was not preordained. It could have gone the other way. There was an element of chance involved in the outcome. But once the event has happened, for many of us the nuance quickly falls away, and it becomes easy to explain, with the operation of full 20/20 hindsight, why the outcome that happened was really almost certain to happen all along.

The label of this bias seems to have originated in a 1989 Journal of Political Economy article, "The Curse of Knowledge in Economic Settings: An Experimental Analysis," by Colin Camerer, George Loewenstein, and Martin Weber. They write that the term was suggested to them by Robin Hogarth. Their article is focused on a point that will immediately have occurred to economists: in most models, a party with more knowledge can in some way benefit from that knowledge over the party with less knowledge. But the curse of knowledge seems to suggest that the party with more knowledge won't be able to imagine not having that knowledge, and thus will not benefit from it (or at least will not benefit as much as expected).
They set up a series of classroom experiments in which one group of students was given financial information about companies from 1970-1979, and then asked to make a prediction about those companies for 1980. Another group of students was given the same information from 1970-79, and then also was told the actual outcome for the companies in 1980. The set-up of the experiment then rewarded the second group of students (the ones who knew the outcome in 1980) for being able to estimate the predictions of the first group of students (the ones who did not know the outcome in 1980). Could the students ignore the outcome they knew had happened, and instead just replicate the thinking of the other students, if there was a cash reward on the line? The answer is "partly": "[W]e find that market forces reduce the curse by approximately 50 percent but do not eliminate it."
Indeed, a different study found that those selling cars tend to overestimate how much consumers know about cars, and thus they underestimate how much ignorant customers would have been willing to pay for cars. 
The "curse of knowledge" leads to a variety of bad communication outcomes. The psychologist Steven Pinker wrote:

I once attended a lecture on biology addressed to a large general audience at a conference on technology, entertainment and design. The lecture was also being filmed for distribution over the Internet to millions of other laypeople. The speaker was an eminent biologist who had been invited to explain his recent breakthrough in the structure of DNA. He launched into a jargon-packed technical presentation that was geared to his fellow molecular biologists, and it was immediately apparent to everyone in the room that none of them understood a word and he was wasting their time. Apparent to everyone, that is, except the eminent biologist. When the host interrupted and asked him to explain the work more clearly, he seemed genuinely surprised and not a little annoyed. This is the kind of stupidity I am talking about. Call it the Curse of Knowledge: a difficulty in imagining what it is like for someone else not to know something that you know. 

Haven't we all been an audience of that kind, at one time or another? Maybe it was an academic lecture. Maybe it was your car mechanic telling you what was wrong with the engine, or your neighbor explaining their gardening tips, or a distant relative explaining their job to you. Pinker also writes:

The curse of knowledge is the single best explanation of why good people write bad prose. It simply doesn't occur to the writer that her readers don't know what she knows–that they haven't mastered the argot of her guild, can't divine the missing steps that seem too obvious to mention, have no way to visualize a scene that to her is as clear as day. And so the writer doesn't bother to explain the jargon, or spell out the logic, or supply the necessary detail.

My guess is that the curse of knowledge goes well beyond these settings, and causes problems in all kinds of communication between specialists in one area and others. In many companies, communication between engineers and marketing departments is fraught with misunderstandings. When doctors and patients interact, can doctors really remember what it was like not to know about symptoms and health conditions?

How does one fight the cognitive bubble that is the curse of knowledge? One of my own methods is to get comfortable with saying: "I was wrong about that" or "I really didn't expect that." Admitting that you had inaccurate expectations is not a confession of weakness or gullibility: no one has a crystal ball for the future. After all, even if you are 90% confident that something will happen, you should expect to be wrong 10% of the time; indeed, Damon Runyon's law (as exposited by some characters in his 1935 story "A Nice Price") holds that nothing between human beings deserves odds of more than three to one.
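The point that well-calibrated confidence still implies routine error can be checked with a small simulation (a sketch, not tied to any particular study):

```python
import random

random.seed(0)  # for reproducibility

def wrong_count(confidence, n_predictions):
    """Simulate forecasts that each come true with probability
    `confidence`; return how many turn out wrong."""
    return sum(random.random() > confidence for _ in range(n_predictions))

# 10,000 predictions, each made with 90% confidence:
n = 10_000
wrong = wrong_count(0.90, n)

# Even a perfectly calibrated forecaster is wrong about 1 time in 10.
assert 0.08 < wrong / n < 0.12
```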
Perhaps the deeper challenge is to be aware of how others perceive a given topic, including the likelihood that they may just not know (or care) much about it, so if you want to communicate with them, you need to reach out and meet them where they are, not where you are. 
I can't complain about the curse of knowledge: after all, as the editor of an academic journal, revising papers by authors afflicted to greater or lesser extents by the curse of knowledge is how I make my living. In some ways, this blog is an effort to avoid the curse of knowledge, too.
Still, saying that one's professional goal is to avoid "the curse of knowledge" doesn't exactly sound complimentary. In a way, the "curse of knowledge" is misnamed, because it's not the knowledge that is the problem; instead, one might more properly call it the "curse of socially oblivious knowledge." There. Now I feel much better.

The Planning Fallacy or How I Ever Get Anything Done

The "planning fallacy" refers to a psychological theory that people systematically underestimate how long it will take them to complete a given task. My work life is one long example of the planning fallacy. I set deadlines for myself, scramble to meet them, miss the earlier deadlines, rinse, lather, and repeat until the work somehow gets done. As Daniel Kahneman and Amos Tversky wrote back in 1977:
The context of planning provides many examples in which the distribution of outcomes in past experience is ignored. Scientists and writers, for example, are notoriously prone to underestimate the time required to complete a project, even when they have considerable experience of past failures to live up to planned schedules. A similar bias has been documented in engineers' estimates of the completion time for repairs of power stations (Kidd, 1970). Although this 'planning fallacy' is sometimes attributable to motivational factors such as wishful thinking, it frequently occurs even when underestimation of duration or cost is actually penalized.

Roger Buehler, a researcher in this area, put it this way in a short explainer piece in 2019 (Character and Context blog of the Society for Personality and Social Psychology, "The Planning Fallacy: An Inside View," May 30, 2019).

The planning fallacy refers to an optimistic prediction bias in which people underestimate the time it will take them to complete a task, despite knowing that similar tasks have typically taken them much longer in the past. An intriguing aspect of the planning fallacy is that people simultaneously hold optimistic expectations concerning a specific future task along with more realistic beliefs concerning how long it has taken them to get things done in the past. When it comes to plans and predictions, people can know the past well and yet be doomed to repeat it. …
 For example, university students typically acknowledge that they have typically finished past assignments very close to their deadlines, yet they insist that they will finish the next project well ahead of the new deadline. Then, predictably, they go on to finish the next project (as usual) right at the deadline.

The planning fallacy is remarkably robust. It appears for small tasks like daily household chores (such as cleaning), as well as for large scale infrastructure projects such as building subways. It generalizes across individual differences in personality and culture, and it applies both to group and individual projects. For example, conscientious people often get things done well before procrastinators, but both groups typically underestimate how long it will take them to get things done.

Why does the planning fallacy happen? Kahneman and Tversky explained in 1977:  

The planning fallacy is a consequence of the tendency to neglect distributional data, and to adopt what may be termed an 'internal approach' to prediction, where one focuses on the constituents of the specific problem rather than on the distribution of outcomes in similar cases. The internal approach to the evaluation of plans is likely to produce underestimation. A building can only be completed on time, for example, if there are no delays in the delivery of materials, no strikes, no unusual weather conditions, etc. Although each of these disturbances is unlikely, the probability that at least one of them will occur may be substantial. This combinatorial consideration, however, is not adequately represented in people's intuitions (Bar-Hillel, 1973). Attempts to combat this error by adding a slippage factor are rarely adequate, since the adjusted value tends to remain too close to the initial value that acts as an anchor (Tversky and Kahneman, 1974). The adoption of an 'external approach', which treats the specific problem as one of many, could help overcome this bias. In this approach, one does not attempt to define the specific manner in which a plan might fail. Rather, one relates the problem at hand to the distribution of completion time for similar projects. We suggest that more reasonable estimates are likely to be obtained by asking the external question "how long do such projects usually last?", and not merely the internal question "what are the specific factors and difficulties that operate in the particular problem?"
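Kahneman and Tversky's combinatorial point, that several individually unlikely disturbances add up to a substantial chance of delay, is easy to verify. The disturbance probabilities below are invented for illustration, and independence is assumed:

```python
# A project finishes on time only if *none* of several disturbances
# occurs. Assuming independence, P(at least one) = 1 - product(1 - p_i).
# The 10% figure for each disturbance is hypothetical.

def prob_at_least_one(probs):
    """P(at least one event occurs), assuming independent events."""
    p_none = 1.0
    for p in probs:
        p_none *= (1 - p)
    return 1 - p_none

# Five disturbances (delays, strikes, weather, ...), each with only a
# 10% chance:
p = prob_at_least_one([0.10] * 5)

# Each disturbance is unlikely, yet the chance that at least one
# occurs is substantial -- about 41%.
assert abs(p - (1 - 0.9**5)) < 1e-12
assert p > 0.40
```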

At some level, this explanation strikes me as exactly correct. I am overly optimistic when thinking about how long it will take me to do a task because I assume that everything will go smoothly and that I won't be interrupted or distracted by other immediately pressing tasks. When I edit a paper, I think about the actual editing going smoothly, not about what happens when I hit a snag that takes a day or two to resolve or what happens when the rest of my life sneaks up on me and demands attention.

I find myself asking: if I wasn't subject to the planning fallacy, would I get things done on time? For me, daily motivation seems to be some combination of optimism and self-imposed stress. Both of these are embodied in the planning fallacy: that is, the planning fallacy gives me an optimistic view of how much progress I should be making, but then also stresses me when that progress doesn't happen. If I instead started each day with: "Things are going OK, more-or-less on schedule, and it's 50:50, at best, whether I will get that paper edited by tomorrow," I'm not confident that I would be happier or more productive.
The end result of this dynamic is that I've been the Managing Editor of an economics journal for 34 years, putting out quarterly issues more-or-less on time, but also feeling perpetually behind schedule. This seems a potentially unhealthy combination.
So I try to strike a balance. At some level, I know I'm fooling myself most mornings. In some strictly rational part of my brain, I know the day's work isn't likely to go as well as I hoped, and I also know that I'm not as behind as I feel. For me, the tradeoff is that when a task like editing a paper is completed, or when an issue of the journal is published, I get a jolt of surprise and happiness. In that same strictly rational corner of my brain, it's really not a surprise. After all, I've been editing papers and putting out issues for a long time. But rather than seeing my life as a straightforward process of linear movement, plodding step-by-step to an expected outcome, I apparently prefer to see it as a mini-drama: I could do it! But I'm not doing it as quickly as hoped! I'm behind! But it's getting done! It's a race against the schedule! I've done it! Then I start all over again.
For 2021, I hope that you too can use the emotional energies unleashed by the planning fallacy to your advantage, both as an encouragement and a goad for daily effort, and to give you a sense of accomplishment at the result.