Misperceptions and Misinformation in Election Campaigns

It's an election season, so many people are concerned about how all those other voters are going to be misinformed into voting for the wrong candidate. Brendan Nyhan provides an overview of some research in this area in "Facts and Myths about Misperceptions" (Journal of Economic Perspectives, Summer 2020, 34:3, pp. 220-36). 

To be clear, Nyhan describes misperceptions as "belief in claims that can be shown to be false (for example, that Osama bin Laden is still alive) or unsupported by convincing and systematic evidence (for example, that vaccines cause autism)." Thus, he isn't talking about issues of shading or emphasis. Nyhan writes: "Misperceptions present a serious problem, but claims that we live in a 'post-truth' society with widespread consumption of 'fake news' are not empirically supported and should not be used to support interventions that threaten democratic values." 

So why is the belief that everyone on the other side of the political fence is subject to dramatic misperceptions so widespread? One reason is that both academic research and media coverage of that research tend to focus on examples with partisan distinctions. 

Public beliefs in such claims are frequently associated with people’s candidate preferences and partisanship. One December 2016 poll found that 62 percent of Trump supporters endorsed the baseless claim that millions of illegal votes were cast in the 2016 election, compared to 25 percent of supporters of Hillary Clinton (Frankovic 2016). Conversely, 50 percent of Clinton voters endorsed the false claim that Russia tampered with vote tallies to help Trump, compared to only 9 percent of Trump voters. But not all political misperceptions have a clear partisan valence: for example, 17 percent of Clinton supporters and 15 percent of Trump supporters in the same poll said the US government helped plan the terrorist attacks of September 11, 2001.

One of my favorite examples is a study which showed respondents pictures of the Inauguration Day crowds for President Obama in 2009 and President Trump in 2017: "When the pictures were unlabeled, there was broad agreement that the Obama crowd was larger, but when the pictures were labelled, many Trump supporters looked at the pictures and indicated that Trump's crowd was larger, an obviously false claim that the authors refer to as 'expressive responding.'" (I love the term "expressive responding.")

Sometimes people are aware that they are slanting their answers in this way. When people give these kinds of answers to poll questions, they often know (and will say when asked) that some of their answers are based on less evidence than others. One study offered small financial incentives (like $1) for accurate answers, and found that the partisan divide was reduced by more than 50%. 

But other times, people make meaningful real-world decisions based on these kinds of partisan feelings. As one example with particular relevance just now, evidence from the George W. Bush and Barack Obama administrations suggests that partisans of the sitting president "express more trust in vaccine safety and greater intention to vaccinate themselves and their children than opposition partisans," which shows up in actual patterns of school vaccinations. 

An underlying pattern that comes up in this research is that if people are exposed to a claim many times (an example is the false statement "The Atlantic Ocean is the largest ocean on Earth"), they become more likely to rate it as true. The underlying psychology seems to be that when a claim feels familiar to people, because of repeated prior exposure, they become more likely to view it as true. An implication here is that while those who marinate themselves in social media discussions of news may be more likely to think of themselves as well-informed, they are also probably more likely to hold severe misperceptions. Indeed, people who are more knowledgeable are also the people who have learned how to deploy counterarguments, so that they believe their misperceptions even more strongly. 

Nyhan's paper mentions many intriguing studies along these lines. But do we need public action to fight misperceptions? It's not clear that we do. A common finding in these studies is that if someone discovers and admits that they have a misperception on a certain issue, it doesn't actually change their partisan beliefs. "Fact-checking" websites have some use, but they can also be another way of expressing partisanship–and those who hold misperceptions most strongly are not likely to be reading fact-checking sites, anyway. Even general warnings about "fake news" can backfire. Some research suggests that when people are warned about fake news, they become skeptical of all news, not just part of it. One interesting study warned a random selection of candidates in nine states who were running for office in 2012 that the reputational effects of being called out by fact-checkers could be severe, and found that candidates who received the warnings were less likely to have their accuracy publicly challenged. 

Nyhan concludes with this response to suggestions for more severe and perhaps government-based interventions against misperceptions: 

Calls for such draconian interventions are commonly fueled by a moral panic over claims that “fake news” has created a supposedly “post-truth” era. These claims falsely suggest an earlier fictitious golden age in which political debate was based on facts and truth. In reality, false information, misperceptions, and conspiracy theories are general features of human society. For instance, beliefs that John F. Kennedy was killed in a conspiracy were already widespread by the late 1960s and 1970s (Bowman and Rugg 2013). Hofstadter (1964) goes further, showing that a “paranoid style” of conspiratorial thinking recurs in American political culture going back to the country’s founding. Moreover, exposure to the sorts of untrustworthy websites that are often called “fake news” was actually quite limited for most Americans during the 2016 campaign—far less than media accounts suggest (Guess, Nyhan, and Reifler 2020). In general, no systematic evidence exists to demonstrate that the prevalence of misperceptions today (while worrisome) is worse than in the past.

Or as I sometimes say, perhaps the reason for disagreement isn't that the other side has been gulled and deceived, and if they just learned the real true facts then they would agree with you. Maybe the most common reason for disagreement is that people actually disagree.

Shifts in How the Fed Perceives the US Economy

For the first time since 2012, the Federal Reserve has updated its "Statement on Longer-Run Goals and Monetary Policy Strategy," and has produced a useful "Track Changes" version of the alterations. A set of 12 notes and background papers for these changes is available, too. Perhaps the main substantive change is that the Fed specifies that if inflation has run below its 2% annual target rate for a time, it will then expect inflation to run above that 2% rate for a time. Thus, the Fed's 2% annual rate of inflation should not be viewed as an upper bound on the inflation it will allow, but rather as a long-run average. I have nothing against this change, but I strongly suspect that it is not a fix for what ails the US economy. 

Here, I want to focus instead on a different set of changes that have been happening since 2012: specifically, changes in how the Fed sees the long-run future of the US economy. To put it another way, when short-run fluctuations work themselves out, where is the US economy headed? In his speech describing the Fed's new policy statement, Fed chair Jerome Powell ("New Economic Challenges and the Fed's Monetary Policy Review," August 27, 2020) described how the Fed's views have been shifting toward an expectation of slower long-run growth.

From Powell's speech, here are some estimates of long-run economic growth from the Federal Open Market Committee (the committee within the Fed that sets monetary policy), as well as the private forecasts summarized by the Blue Chip indicators and the Congressional Budget Office. Eight years ago, it was common to think that long-run growth in real US GDP would be about 2.5%; now, the long-run growth rate is more commonly estimated at 1.75%.
It's worth remembering that these growth rates are annual, and accumulate over time. Thus, a seemingly small difference in growth rates of 0.75 percentage points, accumulated over a decade, will mean a GDP that is about 7.5% smaller at that time. In very round numbers, US GDP would be $2 trillion smaller in a decade as a result of this slower growth rate–which in turn means lower average incomes and less tax revenue for the government.
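To make the compounding concrete, here's a quick back-of-the-envelope calculation. The starting GDP figure of roughly $21 trillion is my own round-number assumption for the US economy circa 2019, not a figure from the CBO or the Fed:

```python
def compound(level, rate, years):
    """Grow `level` at annual `rate` (as a decimal) for `years` years."""
    return level * (1 + rate) ** years

gdp_now = 21.0  # trillions of dollars; a rough assumption, not an official figure

gdp_fast = compound(gdp_now, 0.025, 10)   # the older 2.5% long-run growth view
gdp_slow = compound(gdp_now, 0.0175, 10)  # the newer 1.75% view

gap = gdp_fast - gdp_slow
print(f"GDP after 10 years at 2.50%: ${gdp_fast:.1f} trillion")
print(f"GDP after 10 years at 1.75%: ${gdp_slow:.1f} trillion")
print(f"Gap: ${gap:.1f} trillion ({gap / gdp_fast:.1%} of the higher path)")
```

With these assumptions, the gap works out to roughly $1.9 trillion, or about 7% of the higher path–consistent with the round numbers above.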
Another big shift is an expectation of a lower unemployment rate. Back in 2012, the common belief was that the unemployment rate wouldn't fall much lower than 6%; now, the sense is that it will eventually fall to about 4%. 
Powell also points out that the Fed believes interest rates have fallen around the world. The Fed calculates a "neutral" interest rate–that is, the interest rate which emerges from supply and demand and isn't either a stimulant or a drag on the economy in the long run. Powell says (footnotes and references to figures omitted): 

[T]he general level of interest rates has fallen both here in the United States and around the world. Estimates of the neutral federal funds rate, which is the rate consistent with the economy operating at full strength and with stable inflation, have fallen substantially … This rate is not affected by monetary policy but instead is driven by fundamental factors in the economy, including demographics and productivity growth—the same factors that drive potential economic growth. The median estimate from FOMC participants of the neutral federal funds rate has fallen by nearly half since early 2012, from 4.25 percent to 2.5 percent.

As Powell points out, the lower interest rate means that the Fed has less power to stimulate the economy by reducing interest rates–because the interest rate is already closer to zero percent. Powell writes: "This decline in assessments of the neutral federal funds rate has profound implications for monetary policy. With interest rates generally running closer to their effective lower bound even in good times, the Fed has less scope to support the economy during an economic downturn by simply cutting the federal funds rate."

In my own view, these changes in beliefs about the long-run direction of the US economy have at least two main implications. One is that a serious economic agenda for the future needs to focus on how to improve productivity and long-run economic growth. Another is that when (not if) the economy goes bad the next time, the Federal Reserve will be in a weakened position to provide assistance, so thinking in advance about what policies could kick in very quickly seems worth consideration.

What is a "Good Job"?

On the surface, it's easy to sketch what a "good job" means: having a job in the first place, along with good pay and access to benefits like health insurance. But that quick description is far from adequate, for several interrelated reasons. When most of us think about a "good job," we have more than the paycheck in mind. Jobs can vary a lot in working conditions and predictability of hours. Jobs also vary according to whether the job offers a chance to develop useful skills and a chance for a career path over time. In turn, the extent to which a worker develops skills at a given job will affect whether that worker is a replaceable cog who can expect only minimal pay increases over time, or whether the worker will be in a position to get pay raises–or have options to be a leading candidate for jobs with other employers.

[This essay was originally published back in 2016, but it seemed worth revisiting with some minor updates on this Labor Day holiday.] 

A majority of Americans do not consider themselves to be "engaged" with their jobs. According to Gallup polling, the share of US workers who viewed themselves as "engaged" in their jobs had risen to 35% in 2019, while 52% were "not engaged" and 13% were "actively disengaged." One suspects this level of engagement will drop after the pandemic recession.

What makes a "good job" or an engaging job? The classic research on this seems to come from the Job Characteristics Theory put forward by Greg R. Oldham and J. Richard Hackman in a series of papers written in the 1970s: for an overview, a useful starting point is their 1980 book Work Redesign. Here, I'll focus on their 2010 article in the Journal of Organizational Behavior summarizing some findings from this line of research over time, "Not what it was and not what it will be: The future of job design research" (31: pp. 463–479).

Oldham and Hackman point out that from the time when Adam Smith described pin-making back in the eighteenth century, up through Frederick W. Taylor leading a wave of industrial engineers doing time-and-motion studies of workplace activities in the early 20th century, and up through the assembly line as viewed by companies like General Motors and Ford, the concept of job design focused on the division of labor. In my own view, the job design efforts of this period tended to view workers as robots that carried out a specified set of physical tasks, and the problem was how to make those worker-robots more effective.

Whatever the merits of this view for its place and time, it has clearly become outdated in the last half-century or so. Even in assembly-line work, companies like Toyota that cross-trained workers for a variety of different jobs, including on-the-spot quality control, developed much higher productivity than their US counterparts. And for the swelling numbers of service-related and information-related jobs, the idea of an extreme division of labor, micro-managed at every stage, often seemed somewhere between irrelevant and counterproductive. When worker motivation matters, the question of how to design a "good job" has a different focus.

By the 1960s, Frederick Herzberg was arguing that jobs often need to be enriched, rather than simplified. In the 1970s, Oldham and Hackman developed their Job Characteristics Theory, which they describe in the 2010 article like this:

We eventually settled on five ‘‘core’’ job characteristics: Skill variety (i.e., the degree to which the job requires a variety of different activities in carrying out the work, involving the use of a number of different skills and talents of the person), task identity (i.e., the degree to which the job requires doing a whole and identifiable piece of work from beginning to end), task significance (i.e., the degree to which the job has a substantial impact on the lives of other people, whether those people are in the immediate organization or the world at large), autonomy (i.e., the degree to which the job provides substantial freedom, independence, and discretion to the individual in scheduling the work and in determining the procedures to be used in carrying it out), and job-based feedback (i.e., the degree to which carrying out the work activities required by the job provides the individual with direct and clear information about the effectiveness of his or her performance).

Each of the first three of these characteristics, we proposed, would contribute to the experienced meaningfulness of the work. Having autonomy would contribute to jobholders’ felt responsibility for work outcomes. And built-in feedback, of course, would provide direct knowledge of the results of the work. When these three psychological states were present—that is, when jobholders experienced the work to be meaningful, felt personally responsible for outcomes, and had knowledge of the results of their work—they would become internally motivated to perform well. And, just as importantly, they would not be able to give themselves a psychological pat on the back for performing well if the work were devoid of meaning, or if they were merely following someone else’s required procedures, or if doing the work generated no information about how well they were performing.

Of course, not everyone at all stages of life is looking for a job that is wrapped up with a high degree of motivation. At some times and places, all people want is a steady paycheck. Thus, Oldham and Hackman added two sets of distinctions between people:

So we incorporated two individual differences into our model—growth need strength (i.e., the degree to which an individual values opportunities for personal growth and development at work) and job-relevant knowledge and skill. Absent the former, a jobholder would not seek or respond to the internal ‘‘kick’’ that comes from succeeding on a challenging task, and without the latter the jobholder would experience more failure than success, never a motivating state of affairs.

There has been a considerable amount of follow-up work on this approach: for an overview, interested readers might begin with the other essays in the same 2010 issue of the Journal of Organizational Behavior that contains the Oldham-Hackman essay. Their overview of this work emphasizes a number of ways in which the typical job has evolved during the last 40 years. They describe the change in this way:

It is true that many specific, well-defined jobs continue to exist in contemporary organizations. But we presently are in the midst of what we believe are fundamental changes in the relationships among people, the work they do, and the organizations for which they do it. Now individuals may telecommute rather than come to the office or plant every morning. They may be responsible for balancing among several different activities and responsibilities, none of which is defined as their main job. They may work in temporary teams whose membership shifts as work requirements change. They may be independent contractors, managing simultaneously temporary or semi-permanent relationships with multiple enterprises. They may serve on a project team whose other members come from different organizations—suppliers, clients or organizational partners. They may be required to market their services within their own organizations, with no single boss, no home organizational unit, and no assurance of long-term employment. Even managers are not immune to the changes. For example, they may be members of a leadership team that is responsible for a large number of organizational activities rather than occupy a well-defined role as the sole leader of any one unit or function.

In their essay, Oldham and Hackman run through a number of ways in which jobs have evolved that they did not expect or undervalued back in the 1970s. For example, they argue that the opportunities for enrichment in front-line jobs are larger than they expected, that they undervalued the social aspects of jobs, and that they didn't anticipate the "job crafting" phenomenon in which jobs are shaped by workers and employers rather than being firmly specified. They point out that although working in teams has become a widespread phenomenon, employers and workers are not always clear on the different kinds of teams that are possible: for example, "surgical teams" led by one person with support; "co-acting teams" in which people act individually, but have little need to interact face-to-face; "face-to-face teams" that meet regularly as a group to combine expertise; "distributed teams" that can draw on a very wide range of expertise when needed, but don't have a lot of interdependence or a need to meet with great regularity; and even "sand dune" teams that are constantly remaking and re-forming themselves with changing memberships and management.

When you start thinking about "good jobs" in these broader terms, the challenge of creating good jobs for a 21st century economy becomes more complex. A good job has what economists have called an element of "gift exchange," which means that a motivated worker stands ready to offer some extra effort and energy beyond the bare minimum, while a motivated employer stands ready to offer their workers at all skill levels some extra pay, training, and support beyond the bare minimum. A good job has a degree of stability and predictability in the present, along with prospects for growth of skills and corresponding pay raises in the future. We want good jobs to be available at all skill levels, so that there is a pathway in the job market for those with little experience or skill to work their way up. But in the current economy, the average time spent at a given job is declining and on-the-job training is in decline.

I certainly don't expect that we will ever reach a future in which jobs will be all about deep internal fulfillment, with a few giggles and some comradeship tossed in. As my wife and I remind each other when one of us has an especially tough day at the office, there's a reason they call it "work," which is closely related to the reason that you get paid for doing it.

But along with a concern for how quickly jobs will return in the aftermath of the pandemic recession, a primary long-term issue in the workforce is how to encourage the economy to develop more good jobs. I don't have a well-designed agenda to offer here. But what's needed goes well beyond our standard public arguments about whether firms should be required to offer certain minimum levels of wages and benefits.

"The Best Thing for Being Sad is To Learn Something"

As another school year gets underway, I feel moved to speak for the pleasure of learning something, and how learning can banish sadness. The point is more than an academic one for social scientists. There's a body of "happiness" research, which often looks at things like income, changes in income, political/economic events, health and education levels, and life events like parenting or patterns like commuting, and then tries to sort out the connections to "happiness," which is often defined by a response to a survey. The implicit message in this research is often that "happiness" comes from the ability to consume or from how events (like unemployment or illness) impinge upon us. But sometimes happiness may come not from what we consume or from what happens to us, but from investing in learning or a new skill. 

One example comes from T.H. White, in his 1958 retelling of the Arthurian legend in The Once and Future King. The wizard Merlyn is teaching the young boy who would become King Arthur, but who at this point in the story is known as the Wart. White writes:

"The best thing for being sad," replied Merlyn, beginning to puff and blow, "is to learn something. That is the only thing that never fails. You may grow old and trembling in your anatomies, you may lie awake at night listening to the disorder of your veins, you may miss your only love, you may see the world about you devastated by evil lunatics, or know your honour trampled in the sewers of baser minds. There is only one thing for it then—to learn. Learn why the world wags and what wags it. That is the only thing which the mind can never exhaust, never alienate, never be tortured by, never fear or distrust, and never dream of regretting. Learning is the thing for you. Look at what a lot of things there are to learn—pure science, the only purity there is. You can learn astronomy in a lifetime, natural history in three, literature in six. And then, after you have exhausted a milliard lifetimes in biology and medicine and theo-criticism and geography and history and economics—why, you can start to make a cartwheel out of the appropriate wood, or spend fifty years learning to begin to learn to beat your adversary at fencing. After that you can start again on mathematics, until it is time to learn to plough."

"Apart from all these things," said the Wart, "what do you suggest for me just now?"

I always liked the Wart's down-to-earth and so-very-human rejoinder. 

Another example is from Bertrand Russell's quirky 1930 book-length essay, The Conquest of Happiness. He writes: 

Perhaps the best introduction to the philosophy which I wish to advocate will be a few words of autobiography. I was not born happy. As a child, my favourite hymn was: 'Weary of earth and laden with my sin'. At the age of five, I reflected that, if I should live to be seventy, I had only endured, so far, a fourteenth part of my whole life, and I felt the long-spread-out boredom ahead of me to be almost unendurable. In adolescence, I hated life and was continually on the verge of suicide, from which, however, I was restrained by the desire to know more mathematics.

Now, on the contrary, I enjoy life; I might almost say that with every year that passes I enjoy it more. This is due partly to having discovered what were the things that I most desired and having gradually acquired many of these things. Partly it is due to having successfully dismissed certain objects of desire – such as the acquisition of indubitable knowledge about something or other – as essentially unattainable. But very largely it is due to a diminishing preoccupation with myself.

Like others who had a Puritan education, I had the habit of meditating on my sins, follies, and shortcomings. I seemed to myself – no doubt justly – a miserable specimen. Gradually I learned to be indifferent to myself and my deficiencies; I came to centre my attention increasingly upon external objects: the state of the world, various branches of knowledge, individuals for whom I felt affection. External interests, it is true, bring each its own possibility of pain: the world may be plunged in war, knowledge in some direction may be hard to achieve, friends may die. But pains of these kinds do not destroy the essential quality of life, as do those that spring from disgust with self. And every external interest inspires some activity which, so long as the interest remains alive, is a complete preventive of ennui. Interest in oneself, on the contrary, leads to no activity of a progressive kind. It may lead to the keeping of a diary, to getting psycho-analysed, or perhaps to becoming a monk. But the monk will not be happy until the routine of the monastery has made him forget his own soul. The happiness which he attributes to religion he could have obtained from becoming a crossing-sweeper, provided he were compelled to remain one. External discipline is the only road to happiness for those unfortunates whose self-absorption is too profound to be cured in any other way. …

Of course, it would be silly to argue that those who feel sad just need to take up topology or computational statistics. And it would be silly to argue that struggling to learn is an unalloyed pleasure. But I do think there is something to the idea that happiness is facilitated by a sense of agency and understanding about one's life and work, and the act and accomplishment of learning across many dimensions of life–including the nonacademic dimensions–is a meaningful part of happiness. 

When Government Debt Explodes in Size, What Options Do Countries Have?

US government debt is exploding in size. The Congressional Budget Office lays out the patterns in "An Update to the Budget Outlook: 2020 to 2030" (September 2020). As usual, the baseline CBO estimates are based on currently existing law–for example, they do not take into account additional debt that would be incurred if one more fiscal stimulus bill were to pass before the election. Thus, the CBO estimates are focused on the large short-term spike in spending that has already been legislated. Here's the pattern of total revenues and spending. 
That sharp spike in spending is being matched by a much larger annual budget deficit. The orange line shows the projection from March 2020; the darker line shows the change. Again, you'll notice that the CBO forecast is essentially for a short-term blip. But the deficit is going to be much larger than it was back in the Great Recession of 2007-9, which in turn was much larger than the deficits from back in the 1980s. 
Overall US government borrowing, using the standard metric of total federal debt held by the public, was already on a path to rise sharply in the next decade or so, but the 2020 rise in spending has accelerated that timetable. The highest debt/GDP ratio in US history was previously in 1946, with the spike from the borrowed money used to finance the military efforts of World War II. The US economy is now on track to break through that level in 2023, and then to remain at that higher level of debt.
As always, one can question whether the standard measurements given here capture the present moment. For example, the CBO notes that although federal debt held by the public is the standard measure of debt, one can make a case for subtracting out the portion of federal debt that is used to finance student loans: after all, one might think of this as money that is "really" borrowed by students, who are the ones who need to repay it, just using the federal government as a conduit. One might also make a case for not counting federal debt that is purchased and held by the Federal Reserve system, on the grounds that this federal borrowing does not have the same effect on credit markets as if the money was borrowed directly from the public. For example, from 2019 to 2020 the standard debt/GDP ratio rises from 79.2% to 98.2%. However, if one subtracts out student debt and takes into account that about three-quarters of the federal debt issued in 2020 has been purchased by the Fed, the ratio rises from 60.6% of GDP in 2019 to 65% of GDP in 2020. But by any of these measures, there is still a sharp rise in federal debt in the next decade. 
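As a rough sketch of how such an adjustment works, here is a simplified calculation. All of the dollar figures below are illustrative placeholders I have chosen to land near the ratios in the text; they are not CBO numbers, and for simplicity the sketch subtracts total Fed holdings rather than only the Fed's 2020 purchases:

```python
def debt_ratio(debt, gdp):
    """Debt as a share of GDP (both in the same units, e.g. trillions)."""
    return debt / gdp

# Illustrative placeholder values, in trillions of dollars -- not CBO figures.
gdp_2020 = 20.6       # nominal GDP (assumption)
debt_2020 = 20.2      # federal debt held by the public (assumption)
student_loans = 1.4   # federal student-loan portfolio (assumption)
fed_holdings = 5.4    # Treasuries held by the Federal Reserve (assumption)

standard = debt_ratio(debt_2020, gdp_2020)
adjusted = debt_ratio(debt_2020 - student_loans - fed_holdings, gdp_2020)

print(f"Standard debt/GDP: {standard:.1%}")
print(f"Adjusted debt/GDP: {adjusted:.1%}")
```

The point of the exercise is not the particular numbers, but that the "right" debt ratio depends on which components of debt one decides to count.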
What are the risks of this fiscal path and what have other countries done when confronted with historically high and rising debt levels? In a report earlier this year, the CBO listed the main concerns: 

If federal debt as a percentage of GDP continues to rise at the pace of CBO’s current-law projections, the economy would be affected in two significant ways: Growth in the nation’s debt would dampen economic output over time, and higher interest costs would increase payments to foreign debt holders and thus reduce the income of U.S. households by rising amounts. … High and rising federal debt increases the likelihood of a fiscal crisis because it erodes investors’ confidence in the government’s fiscal position and could result in a sharp reduction in their valuation of Treasury securities, which would drive up interest rates on federal debt because investors would demand higher yields to purchase Treasury securities. … Although no one can predict whether or when a fiscal crisis might occur or how it would unfold, the risk is almost certainly increased by high and rising federal debt. …. In addition, high debt might cause policymakers to feel constrained from implementing deficit-financed fiscal policy to respond to unforeseen events …\”

We are two decades into the twenty-first century, and we have now had two once-in-a-century economic events: the Great Recession of 2007-9 and now the pandemic recession that started in March. Right now, addressing federal debt is far from the main public policy concern. But when that time comes (and it's starting to look more like "when" than "if"), how do countries bring down their debt burden? 

Carmen M. Reinhart and M. Belen Sbrancia look at the historical patterns in "The Liquidation of Government Debt" (January 2015, IMF Working Paper WP/15/7). They summarize: 

Throughout history, debt/GDP ratios have been reduced by (i) economic growth; (ii) substantive fiscal adjustment/austerity plans; (iii) explicit default or restructuring of private and/or public debt; (iv) a surprise burst in inflation; and (v) a steady dosage of financial repression accompanied by an equally steady dosage of inflation.

This post is not the place to discuss these choices in any detail. But just to state the obvious, the US economy has been more prone to slow productivity growth than to periods of rapid economic growth in recent decades; the US political system has been unwilling to restructure big spending programs like Medicare and Social Security; a large-scale restructuring or default on US debt seems like a highly unlikely last resort; and US inflation has been stuck at low levels for 25 years now, for reasons not fully understood. Thus, I suspect the US economy may be headed, by fits and starts, to a period of what Reinhart and Sbrancia call "financial repression." By this term, they mean a set of policies that involve much greater government management of the financial sector, including policies that focus on keeping interest rates very low and also limit other options available to investors--so that the government will find it easier to keep borrowing at low interest rates. 
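Just to make the arithmetic behind these options concrete, here is a minimal sketch of standard debt-to-GDP dynamics, showing how holding nominal interest rates below nominal GDP growth (the heart of financial repression) erodes a debt burden. All parameter values here are hypothetical round numbers for illustration, not figures from Reinhart and Sbrancia.

```python
# Illustrative sketch of debt/GDP dynamics under "financial repression."
# All parameter values are made up for illustration.

def debt_to_gdp_path(d0, r, g, pi, primary_deficit, years):
    """Evolve debt/GDP: d' = d * (1 + r) / (1 + g + pi) + primary_deficit.

    d0: initial debt/GDP ratio
    r: nominal interest rate paid on government debt
    g: real GDP growth rate
    pi: inflation rate
    primary_deficit: primary deficit as a share of GDP (negative = surplus)
    """
    path = [d0]
    for _ in range(years):
        d = path[-1] * (1 + r) / (1 + g + pi) + primary_deficit
        path.append(d)
    return path

# Repressed rates (r = 1%) vs. market rates (r = 5%), with 2% real growth,
# 3% inflation, and a balanced primary budget, starting from 100% of GDP:
repressed = debt_to_gdp_path(1.00, 0.01, 0.02, 0.03, 0.0, 20)
market = debt_to_gdp_path(1.00, 0.05, 0.02, 0.03, 0.0, 20)
print(f"After 20 years: repressed {repressed[-1]:.2f}, market {market[-1]:.2f}")
```

Under these made-up numbers, twenty years of repressed rates cut the ratio from 100% of GDP to roughly 46%, while market rates equal to nominal growth leave it unchanged--without any default, austerity, or surprise inflation burst.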

Have Americans Been Overworking?

There was a time, about 60-70 years ago, when the typical American worker spent several hundred fewer hours on the job each year compared with workers in major European economies. But for the last few decades, American workers have spent several hundred hours more on the job each year. This shift was not a self-aware political decision. No prominent US political leader advocated that Americans should work more hours than those in other high-income countries. But it has happened, and the question is what might be done about it. Isabel V. Sawhill and Katherine Guyot lay out the background and offer some policy ideas in "The Middle Class Time Squeeze" (August 2020, Brookings Institution). 

Here's a figure showing historical data on annual hours per worker for the US and four European economies. You can see the dramatic fall in annual hours worked in all of these countries in the opening decades of the 20th century. From the 1950s into the 1960s, the blue US line for annual hours per worker is below the other countries. But by the late 1970s, the US line is above the others, and the gap between the US and the other countries shown here seems to be rising in recent years. 
Why has this gap emerged? A number of answers have been proposed, none of them fully satisfactory. Maybe Europeans just have a greater preference for leisure than Americans? If so, this preference emerged rather suddenly in the 1960s. Maybe it's higher taxes on labor in Europe, or more generous government support for those not working part of a given year? 

Sawhill and Guyot point to research showing that "legally mandated vacations" account for about 80% of the gap in annual hours worked between the US and European comparison countries. "The European Union’s Working Time Directive guarantees 20 paid vacation days per year, and some member states go beyond this requirement, in addition to providing paid holidays." 
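A back-of-envelope check makes that 80% figure plausible. The 8-hour workday and the 200-hour gap below are hypothetical round numbers (the text says only "several hundred hours"), not figures from the report:

```python
# Back-of-envelope check: how much of the US-Europe gap in annual hours
# could mandated vacations account for? The day length and gap size are
# assumed round numbers, not from the Sawhill-Guyot report.

vacation_days = 20        # EU Working Time Directive minimum
hours_per_day = 8         # assumed full-time workday
hours_gap = 200           # assumed US-Europe gap in annual hours per worker

vacation_hours = vacation_days * hours_per_day   # hours of mandated vacation
share_of_gap = vacation_hours / hours_gap
print(f"{vacation_hours} vacation hours = {share_of_gap:.0%} of the assumed gap")
```

So 20 mandated days at 8 hours each would be 160 hours a year, which is indeed about 80% of a gap of roughly 200 hours.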

In addition, the average workweek is shorter in European countries, and especially in western Europe and Scandinavia, part-time work is more common and standard. 
An additional factor: while annual hours worked are discussed here on a per-worker basis, the entry of women into the (paid) labor force over the last half-century or so means that more two-parent households have two earners, and more single-parent households have someone working full-time. For the household, these shifts make time feel tighter, too. 

An underlying concern here is that when it comes to hours worked per week or per year, individuals do not make fully free choices, but instead face options shaped by laws and customs. I can't "choose" additional paid vacation. In most jobs, a US worker can try to negotiate over fewer weekly hours or part-time status, but it's likely to be difficult and career-limiting to do so. Sawhill and Guyot write: "[I]t is unlikely that extensive worktime reductions will come about solely as a result of individual decision-making. Collective changes are needed if as a society we want to work a little less."

What sort of policies might achieve this goal? Here are some of their suggestions. 
1) "One clear approach to reducing overwork for the middle class is to simply extend current overtime protections to more middle-income workers. Current federal regulations exempt employees who meet certain duties tests and earn more than $35,648 per year; the threshold was raised from $23,660 at the beginning of 2020. While the 2020 increase extended overtime protection to an estimated 1.3 million workers, it falls short of restoring nearly half a century of decline. If the 1975 threshold had been adjusted for inflation, it would now be over $50,000. Additionally, better enforcement is needed to ensure that nonexempt workers are compensated for their time as required by current law."
2) "[I]t may be time to consider a shorter standard workweek by reducing the federal standard from 40 hours per week to 35. … Reducing the federal standard is not a mandate; it would not prevent individuals from working more than 35 hours per week. It would simply nudge employers in that direction by making it more expensive to keep people on the job for over 35 hours a week …"
3) "[O]ne option would be to require U.S. employers to offer a minimum of four weeks (20 days) per year of Paid Time Off (PTO) to all full-time employees, with a prorated amount for part-time employees. PTO could be used for vacation, short-term illness, family care, or other reasons at workers’ discretion. … One limitation of this approach is that employees may not use the additional leave benefits that are available to them, especially if they worry that they will face negative employment consequences. Fewer than half of American workers used all of their PTO days in 2018, leaving a total of 768 million days unused, according to the U.S. Travel Association. … This problem could be addressed by requiring employers to compensate employees for unused leave, as is already the case in some states, such as Nebraska and Massachusetts. Such a requirement would allow workers to choose between working less and earning more (through a payout for unused time) in addition to incentivizing employers to promote vacation-taking."
4) Design social insurance for mid-life breaks from work? "We may also want to reallocate work over the lifecycle, as proposed by Isabel Sawhill in her book, The Forgotten Americans: An Economic Agenda for A Divided Nation. … [W]orking-age families are now spending more total time in market work due primarily to the rise of dual earners. Further, work and family responsibilities tend to peak at the same life stage, with the result that adults between the ages of 30 and 44 spend about twice as much time in combined market and nonmarket work (including family care) as those between 70 and 84. In short, the elderly, especially the “young elderly” who are still healthy, are the new leisured class. … These developments suggest the need to reinvent social insurance for the modern era by freeing up some time in midlife to raise children, enable people to retrain, or make a fresh start with a new business or a new career, instead of saving all of our nonworking years for retirement. For these reasons Sawhill proposes to expand social insurance to cover new benefits for these kinds of midlife career breaks. In exchange, and to help cover the costs, she proposes to raise retirement ages, consistent with people’s longer and healthier lives."

5) Formalize telecommuting arrangements? "One employer practice that would help is to formalize work-from-home arrangements and give employees the right to request to work remotely without facing negative employment consequences. In some contexts, this could be good for employers and employees: a growing body of research indicates that telecommuting can improve job satisfaction and raise productivity, in addition to reducing emissions and spreading work to more remote regions."
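The mechanics behind suggestions 1) and 2) can be sketched in a few lines: overtime rules make hours beyond the standard cost time-and-a-half, so lowering the standard from 40 to 35 hours raises the cost of keeping someone on the job for a full 40-hour week. The wage and hours below are hypothetical illustrations (the 1.5x premium does match current federal overtime law).

```python
# Hypothetical illustration of how lowering the standard workweek from
# 40 to 35 hours raises the employer's cost of long weeks. The $20 wage
# and 40-hour week are made-up numbers.

def weekly_pay(hours, wage, standard=40, overtime_premium=1.5):
    """Pay for one week: straight time up to `standard`, premium beyond."""
    regular = min(hours, standard) * wage
    overtime = max(hours - standard, 0) * wage * overtime_premium
    return regular + overtime

# A 40-hour week at $20/hour under the current 40-hour standard vs. a
# proposed 35-hour standard:
current = weekly_pay(40, 20, standard=40)   # 40 hours of straight time
proposed = weekly_pay(40, 20, standard=35)  # 35 straight + 5 at time-and-a-half
print(current, proposed)  # 800.0 vs. 850.0
```

The extra $50 per worker per week is the "nudge": the employer can avoid it by trimming the week toward 35 hours, without any mandate on individual workers.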

Sawhill and Guyot are clear in acknowledging the tradeoff that working fewer hours is also likely to mean lower pay. Thus, they suggest that changes like this could be phased in over time: for example, your annual raise for the next few years might be smaller, but your paid vacation time would be expanding. But more broadly, their point is that the annual and weekly hours worked by those in any country are not written on stone tablets, nor are they the result of pure market negotiation. Hours worked are shaped by laws and norms. They have changed in the past, and could be changed again. 

George Orwell: "Vagueness and Sheer Incompetence is the Most Marked Characteristic of Modern English Prose"

Many readers of this blog are surely already familiar with George Orwell's famous 1946 essay, "Politics and the English Language," where he makes a case that a "mixture of vagueness and sheer incompetence is the most marked characteristic of modern English prose." Orwell is talking primarily about how, when writing on politics and public affairs, there is apparently an enormous temptation to succumb to crappy writing. He notes: 

[A]n effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts.

A passage that always makes me grin is Orwell's rendition of Ecclesiastes into ponderous bureaucratic/academic prose. Orwell writes (italic type inserted): 

I am going to translate a passage of good English into modern English of the worst sort. Here is a well-known verse from Ecclesiastes:

I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

Here it is in modern English:

Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account.

This is a parody, but not a very gross one. … It will be seen that I have not made a full translation. The beginning and ending of the sentence follow the original meaning fairly closely, but in the middle the concrete illustrations — race, battle, bread — dissolve into the vague phrases ‘success or failure in competitive activities’. This had to be so, because no modern writer of the kind I am discussing — no one capable of using phrases like ‘objective considerations of contemporary phenomena’ — would ever tabulate his thoughts in that precise and detailed way. The whole tendency of modern prose is away from concreteness. Now analyze these two sentences a little more closely. The first contains forty-nine words but only sixty syllables, and all its words are those of everyday life. The second contains thirty-eight words of ninety syllables: eighteen of those words are from Latin roots, and one from Greek. The first sentence contains six vivid images, and only one phrase (‘time and chance’) that could be called vague. The second contains not a single fresh, arresting phrase, and in spite of its ninety syllables it gives only a shortened version of the meaning contained in the first. Yet without a doubt it is the second kind of sentence that is gaining ground in modern English. I do not want to exaggerate. This kind of writing is not yet universal, and outcrops of simplicity will occur here and there in the worst-written page. Still, if you or I were told to write a few lines on the uncertainty of human fortunes, we should probably come much nearer to my imaginary sentence than to the one from Ecclesiastes.

As I have tried to show, modern writing at its worst does not consist in picking out words for the sake of their meaning and inventing images in order to make the meaning clearer. It consists in gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug. The attraction of this way of writing is that it is easy.

Later in the essay, Orwell proposes an attractively short list of rules to keep in mind as a writer:

But one can often be in doubt about the effect of a word or a phrase, and one needs rules that one can rely on when instinct fails. I think the following rules will cover most cases:

i. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.

ii. Never use a long word where a short one will do.

iii. If it is possible to cut a word out, always cut it out.

iv. Never use the passive where you can use the active.

v. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

vi. Break any of these rules sooner than say anything outright barbarous.

These rules sound elementary, and so they are, but they demand a deep change of attitude in anyone who has grown used to writing in the style now fashionable. One could keep all of them and still write bad English, but one could not write the kind of stuff that I quoted in those five specimens at the beginning of this article.

When I read Orwell or some other maestro of expository prose telling me how it should be done, I remember the wonderful line in the Dylan Thomas prose-poem, A Child's Christmas in Wales, where he is remembering the kinds of gifts that children receive, and refers to: "Easy Hobbi-Games for Little Engineers, complete with instructions. Oh, easy for Leonardo!" In a similar spirit, when trying to absorb writing advice from those who have mastered the craft, I mutter to myself: "Easy for Leonardo."

The Wellspring of Economics: "The Social Enthusiasm which Revolts from the Sordidness of Mean Streets and the Joylessness of Withered Lives"

A.C. Pigou offers not a definition of economics, but rather an account of the source of economics as a field of interest, near the start of his classic 1920 book, The Economics of Welfare. He writes: 

It is not wonder, but rather the social enthusiasm which revolts from the sordidness of mean streets and the joylessness of withered lives, that is the beginning of economic science.

Here's the quotation in its fuller orotund context: 

When a man sets out upon any course of inquiry, the object of his search may be either light or fruit--either knowledge for its own sake or knowledge for the sake of good things to which it leads. … But there will, I think, be general agreement that in the sciences of human society, be their appeal as bearers of light never so high, it is the promise of fruit and not of light that chiefly merits our regard. There is a celebrated, if somewhat too strenuous, passage in Macaulay's Essay on History: "No past event has any intrinsic importance. The knowledge of it is valuable, only as it leads us to form just calculations with regard to the future. A history which does not serve this purpose, though it may be filled with battles, treaties, and commotions, is as useless as the series of turnpike tickets collected by Sir Matthew Mite."

Sir Matthew Mite is the leading character in Samuel Foote's play The Nabob; A Comedy, in Three Acts (1772). Mite is an avaricious and corrupt character who worked for the East India Company in India, got rich doing so, and has now returned to England. Pigou continues: 

That paradox is partly true. If it were not for the hope that a scientific study of man's social actions may lead, not necessarily directly or immediately, but at some time and in some way, to practical results of social improvement, not a few students of these actions would regard the time devoted to their study as time misspent. That is true of all social sciences, but especially true of economics. For economics is "a study of mankind in the ordinary business of life"; and it is not in the ordinary business of life that mankind is most interesting or inspiring. 

"The ordinary business of life" is the famous definition of economics from Alfred Marshall's 1890 Principles of Economics. Marshall was Pigou's teacher, and also his predecessor at Cambridge. Pigou continues: 
One who desired knowledge of man apart from the fruits of knowledge would seek it in the history of religious enthusiasm, of martyrdom, or of love; he would not seek it in the market-place. When we elect to watch the play of human motives that are ordinary--that are sometimes mean and dismal and ignoble--our knowledge is not the philosopher's impulse, knowledge for the sake of knowledge, but rather the physiologist's, knowledge for the healing that knowledge may help to bring. Wonder, Carlyle declared, is the beginning of philosophy. It is not wonder, but rather the social enthusiasm which revolts from the sordidness of mean streets and the joylessness of withered lives, that is the beginning of economic science.
I would quibble with Pigou's formulation. In my own view, the "ordinary" actions of people earning an income, supporting families, opening up possibilities for consumption and pursuing their interests, and saving for the future offer deep insights into people as they really are. In many cases, the "ordinary" actions of people are intertwined with love and sacrifice for families, friends, and communities. I am suspicious of a claim that I learn more about human nature from martyrdom than from someone's efforts to build a career and support their family. I do have a sense of "wonder" in contemplating the workings of a decentralized economy. I am congenitally suspicious of those who want economics to offer a justification for their own preferred shortcut to the fruits of economic gain, without sufficient consideration of how their proposals will interact with the lives of "ordinary" people. But Pigou's formulation also has obvious merit, in that many economists have in fact been drawn to the field by a desire to improve social conditions, and in a recognition that a lack of economic activity and opportunities for economic participation can lead to mean streets and withered lives. 

Wearing Face-masks: The Mixed Evidence

What does the actual scientific literature say about wearing a face mask to prevent the spread of COVID-19? It's less clear than a non-scientist like me might prefer. Indeed, a lot of the discussion seems to be happening in real time in working papers that have not yet been peer-reviewed or published in a journal. Ultimately, the case for wearing face-masks lacks a clear-cut scientific base--but it may still be a good idea. Let's stroll through some of the studies. 

Several recent reviews of the existing literature on studies of face masks that use random controlled trial methods do not find a reason to wear a mask. For example, Julii Brainard, Natalia Jones, Iain Lake, Lee Hooper, and Paul R Hunter have published "Facemasks and similar barriers to prevent respiratory illness such as COVID-19: A rapid systematic review" (posted April 6, 2020). It's at medRxiv (pronounced "med-archive"), a "preprint" system for sharing early drafts run by Cold Spring Harbor Laboratory, Yale University, and BMJ, where it says at the website: "Preprints are preliminary reports of work that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information." The Brainard et al. paper does a systematic analysis of 31 studies across different masks, different settings, and with different methods. They write: 

Based on the RCTs [randomized control trials] we would conclude that wearing facemasks can be very slightly protective against primary infection from casual community contact, and modestly protective against household infections when both infected and uninfected members wear facemasks. However, the RCTs often suffered from poor compliance and controls using facemasks. Across observational studies the evidence in favour of wearing facemasks was stronger. We expect RCTs to under-estimate the protective effect and observational studies to exaggerate it. The evidence is not sufficiently strong to support widespread use of facemasks as a protective measure against COVID19. However, there is enough evidence to support the use of facemasks for short periods of time by particularly vulnerable individuals when in transient higher risk situations. Further high quality trials are needed to assess when wearing a facemask in the community is most likely to be protective. 

Another group of authors has their own literature review up on medRxiv, with similar findings. T. Jefferson et al. have written, "Physical interventions to interrupt or reduce the spread of respiratory viruses. Part 1 – Face masks, eye protection and person distancing: systematic review and meta-analysis" (April 7, 2020). They look back at evidence from a previous coronavirus outbreak--the SARS episode in 2003--and then also at 15 randomized control trial studies since then. They write: 

Most included trials had poor design, reporting and sparse events. There was insufficient evidence to provide a recommendation on the use of facial barriers without other measures. We found insufficient evidence for a difference between surgical masks and N95 respirators and limited evidence to support effectiveness of quarantine … Despite the lack of evidence, we would still recommend using facial barriers in the setting of epidemic and pandemic viral respiratory infections, but there does not appear to be a difference between surgical and full respirator wear.

On the pro-mask side, a widely mentioned study is from Jeremy Howard et al., "Face Masks Against COVID-19: An Evidence Review" (most recent version July 12, 2020). This one is at preprints.org, a server run by the Multidisciplinary Digital Publishing Institute, and it again is not a peer-reviewed paper. These authors "synthesize the relevant literature," which is a way of saying that they are writing a persuasive essay, not summing up the results of earlier studies. 

They point out: "A primary route of transmission of SARS-CoV-2 is likely via small droplets that are ejected when speaking, coughing or sneezing." They focus on masks not as a method of protecting the wearer, but as a method of protecting others. They write: "Although no randomized controlled trials (RCT) on the use of masks as source control for SARS-CoV-2 has been published, a number of studies have attempted to indirectly estimate the efficacy of masks." They point to one study with 10 people, another study with 4 people, a study of health care workers in Chinese hospitals, a case study of someone who flew on a plane from China to Toronto, and other pieces of evidence. Some studies looked at combinations of measures like masks, hand-washing, disinfecting and social distancing in earlier outbreaks of flu, and found that the combination helped to reduce the spread of disease. Finally, they argue that the costs of wearing masks are low, so the "precautionary principle" suggests that it is worth doing even given the imperfect evidence. They conclude: "When used in conjunction with widespread testing, contact tracing, quarantining of anyone that may be infected, hand washing, and physical distancing, face masks are a valuable tool to reduce community transmission."

For a skeptical view of this argument, Graham Martin, Esmée Hanna, and Robert Dingwall have their own preprint paper, "Face masks for the public during Covid-19: an appeal for caution in policy" (April 24, 2020), available at the SocArXiv website. They suggest several concerns (footnotes with citations omitted): 

First, the very weak evidence for face masks should be reiterated. Although some important studies followed the outbreak of SARS-Cov-1 in the 2000s, by and large the quality and clarity of the evidence base for face masks as a means of reducing transmission is disappointing. Few studies examine use of face masks in community settings; those that do find no evidence of reduced transmission compared with no face masks. … Of course, absence of evidence is not evidence of absence … But existing research also provides little information on potential harms, such as “discomfort, dehydration, facial dermatitis, distress, headaches, exhaustion.” Here, too, absence of evidence should not be taken as evidence of absence. 

Second, it is unclear how well equipped the general public is to make proper use of face masks, or how readily good practice might be disseminated and taken up. Appropriate use of face masks is challenging and is something healthcare workers themselves can struggle with; poor use (including poor fitting, adjustment, touching) can reduce effectiveness and pose an infection risk in itself. … For non-disposable cloth-based masks, the evidence base is slim, though one hospital-based three-arm trial found worse infection outcomes in wearers of cloth masks than in wearers of medical masks and in a control group (usual practice, which included much mask-wearing). Cloth masks will retain moisture, with indeterminate consequences for their efficacy and for the creation of a microbiological environment favourable to other bacterial or viral organisms. … An evidence base for homemade masks is likely to be elusive. However, the existing research, coupled with the potential for great variation in materials, fit, adherence, touching and adjustment, doffing, disposal, frequency of laundering and so on, suggests the need for caution in advising widespread uptake, especially given the paucity of evidence for cloth face masks, their use, and their possible microbiological downsides. 

Third, at the microsocial level, the argument might be made that encouraging uptake of face masks might lead to reduced compliance with other measures, due to the false sense of security presented by the mask. Such arguments rest on evidence around risk compensation in other areas of public health, for example seatbelts, cycle helmets, vaccination against sexually transmitted infections, and injury prevention in competitive sports. … [T]here is a case that face masks might promote, if not active risk-taking, then at least a complacency that might reduce adherence to other measures, especially given the largely collective rather than individual benefits that the wearing of masks seeks to address. … There is also an argument that universal mask-wearing might aggravate the climate of fear already documented for Covid-19, adding to mental health concerns by providing a constant reminder of the threat posed by other humans. 

Fourth, potential downsides of the promotion of face masks in community settings present themselves at the macrosocial level. … As a highly visible symbol of virtuous behaviour, those who fail to comply—for example, because of respiratory ailments that make prolonged mask-wearing dangerous, or because of religious preferences such as beards worn by Sikh men or hijabs worn by Muslim women that may make mask-wearing difficult—may be subject to stigmatisation or worse. … Meanwhile, notwithstanding the weak evidence base for face masks as a standalone measure, businesses or states might see widespread or mandatory mask-wearing as a warrant for a premature return to ‘business as usual’, justifying unsafe workplaces or crowded commuting conditions in terms of the protection offered by masks. 

This leads us to our final point. …  Face masks (and measures to secure their uptake) are a complex intervention in a complex system: the results of a change of this nature are emergent, unpredictable, and potentially counterintuitive. 

This list of potential costs of mask-wearing is not dire, but neither is it illusory. 

Where does this leave us? It's perhaps worth pausing a moment to be clear on what "science" is telling us here. If "science" means peer-reviewed studies, then none of these essays are telling us anything--because they have not been peer-reviewed. The advice to wear masks based on a combination of partial information and the precautionary principle may seem sensible, and may in fact be sensible, but "it just makes sense" is not a proven scientific result. 

On the other side, the pandemic is happening now. The overall US death rate continues to be elevated, and COVID-19 is the likely cause. Comprehensive studies done with ideal methodology take months or years. We need to make a decision now, based on imperfect evidence, and then follow up as best we can with evaluating the results of that decision. Of course, the current wave of rule-making seems to emphasize mask-wearing in indoor or crowded settings. 

On yet another side, I'll point out that the argument for experimenting with imperfect steps now, based on imperfect scientific evidence, applies to a lot more than just wearing masks. For example, it applies to the use of imperfect tests for COVID-19 as they are developed, to experimentation with imperfect treatments for COVID-19 as they are proposed, and maybe in the not-too-distant future to the use of an imperfect vaccine. In all of these areas, the "science" is likely to be shakier than one might optimally prefer.

It seems to me that the public health experts have badly muddled the question of whether mask-wearing was needed. The general advice back in March and April was that masks were not needed; now in August and September, masks have become near-mandatory in various settings. I don't know whether the public health crowd was underreacting then or is overreacting now. I suppose one could even argue that the anti-mask recommendations early in the pandemic made sense because of the lack of clear-cut evidence, while the pro-mask recommendations now make sense because the pandemic is lasting longer than some of the early epidemiology models predicted. I have masks in my car and my office to follow the rules. But I don't automatically put on a mask when walking outside or standing six feet from someone and having a conversation. And if I see someone walking by without a mask when I happen to be wearing one, I don't snark at them--unless they are actually sneezing and coughing, or singing operatically, as they walk by. 

Jacob Viner: A Modest Proposal for Some Scholarship in Graduate Training

Academic specialization has its tradeoffs. On one side, extreme specialization and focus helps to develop insights and discoveries that would otherwise be unlikely to occur. On the other side, it's possible, as the old saying goes, to know more and more about less and less until you end up knowing everything about nothing. Recognizing this tradeoff isn't a new insight, but it has rarely been addressed with more grace than by the economist Jacob Viner in a speech entitled "A Modest Proposal for Some Stress on Scholarship in Graduate Training," given at the Graduate Convocation at Brown University on June 3, 1950, and then published a few years later in the Quarterly Journal of Speech (February 1954, 40:1, pp. 15-23). 
Viner's talk is the source of one of my own favorite comments about academic graduate training: "Men are not narrow in their intellectual interests by nature; it takes special and rigorous training to accomplish that end."
Viner was a very prominent economist who did highly specialized work, and who valued the specialized work done by others. Thus, he strives to be both mild and definite in his praise of broader scholarship, and he offers a number of wry observations along the way. Here's a comment on academic specialization: 

This is the ever-growing specialization not only as between departments but even within departments, a specialization carried so far that very often professors within even the same department can scarcely communicate with each other on intellectual matters except through the mediation at seminars and doctoral examinations of their as yet incompletely specialized students. This development has not been capricious or without function. The growth in the accumulation of data, in the refinement and delicacy of tools for their analysis so that great application and concentration are necessary for mastery of their use, has not only ended the day of the polymath with all knowledge for his province, but seems steadily to be cutting down the number of those who would sacrifice even an inch of depth of knowledge for a mile of breadth.

Viner suggests making some room for "scholarship," by which he does not mean the addition of even more specialized work. He said:

My proposal is both sincere and modest. I give also only an old-fashioned and modest meaning to the term "scholarship." I mean by it nothing more than the pursuit of broad and exact knowledge of the history of the working of the human mind as revealed in written records. I exclude from it, as belonging to a higher order of human endeavor, the creative arts and scientific discovery. What I propose, stated briefly and simply, is that our graduate schools shall assume more responsibility than they ordinarily do, so that the philosophers, economists, mathematicians, physicists, and theologians they turn out as finished teachers, technicians, and practitioners shall have been put under some pressure or seduction to be also scholars. …

A small place once given to scholarship, moreover, I would not object if it were then confined to its allotted space, or at least not permitted to spread without restraint into areas beyond its proper jurisdiction, where if it intrudes it steals time and other less valuable resources from what are generally acknowledged to be more important activities. A verger of a church, reproved for locking the doors of the church, replied that when they were left open it often resulted in people praying all over the place. I concede that we don't want students and faculty unrestrainedly pursuing scholarship all over our universities while they have so much more urgent business to attend to.

One virtue of scholarship is that it will help teaching, especially undergraduate teaching. Viner writes: 

The graduate schools, I repeat, tend to mould their students into narrow specialists, who see only from the point of view of their subject, or of a special branch of their special subject, and fail to recognize the importance of looking even at their own subject from other than its own point of view. These students then acquire their doctoral degrees on the strength of theses which have demonstrated to the satisfaction of their supervisors that they have adequately decontaminated their minds from any influences surviving from their undergraduate training in other fields than those occupied by their chosen discipline. They then find their way back to the colleges to transmit to the next generation the graduate school version of a liberal education, or how to see the world through the eye of a needle …

Men are not narrow in their intellectual interests by nature; it takes special and rigorous training to accomplish that end. And men who have been trained to think only within the limits of one subject, will never make teachers at the college level even in that subject. They may know exceedingly well the possibilities of that subject, but they will never be conscious of its limitations, or if conscious of them will never have an adequate motive or a good basis for judging as to their consequence or extent.

A broader virtue of scholarship, Viner argues, is that it provides a context for satisfaction with a life spent in reading, writing, and thinking: 

And I plead on behalf of scholarship, not that it will save the world, although this has conceivably happened in the past and may happen again; not that it brings material rewards to the scholar, although this also may have occurred, to the scandal of his academic superiors; not that it is an invariably exciting activity, for it generally involves a great deal of drudgery … All that I plead on behalf of scholarship, at least upon this occasion, is that, once the taste for it has been aroused, it gives a sense of largeness even to one's small quests, and a sense of fullness even to the small answers to problems large or small which it yields, a sense which can never in any other way be attained, for which no other source of human gratification can, to the addict, be a satisfying substitute, which gains instead of loses in quality and quantity and in pleasure-yielding capacity by being shared with others ….

Pride in one's special subject is a virtue, not a vice. It is right and proper, and good to look upon, to see a tanner in love with leather and a carpenter in love with wood. But what a meager portion of the realm of the mind is covered even by the proudest single subject! If only there is the will, how much of the rich realm of the human mind lies open for invasion, for the physicist beyond, beside, and behind nuclear fission, and for the economist in regions where the circulating medium is of more precious metal than even under the gold standard!
I sometimes try to make Viner's argument in a less graceful way, by noting that academic specialization always has two functions. One is the use of specialization as a tool for research and communication about research. It would be difficult, for example, to communicate about cutting-edge research on a coronavirus vaccine without using specialized jargon. But academics are people, not just research machines, and so specialization ends up having a social dimension as well. In particular, extensive use of particular jargon and keeping up with how it changes defines a certain academic in-group, and thus can be an important tool for pure careerist purposes. Some academics find ways to move fluently between jargon and broader forums and purposes for communication–like the undergraduate classroom. Some have a harder time doing so.
For a previous meditation on this subject, see "How the Jargonauts Keep Normies in their Place" (August 20, 2017).