Snapshots of US Income Taxation Over Time

As Americans recover from our annual April 15 deadline for filing income taxes, here is a series of figures about longer-term patterns of taxes in the US economy. They are drawn from a series of blog posts by the Tax Foundation over the last few months. The Tax Foundation is a nonpartisan group whose analysis typically leans toward the view that taxes on those with high incomes are already high enough. However, the figures that follow are compiled from fairly standard data sources: IRS data, the Congressional Budget Office, and the like.

For example, here's a figure, from Erica York, showing which taxes have been the main sources of federal income over time. She writes: "Before 1941, excise taxes, such as gas and tobacco taxes, were the largest source of revenue for the federal government, comprising nearly one-third of government revenue in 1940. Excise taxes were followed by payroll taxes and then corporate income taxes. Today, payroll taxes remain the second largest source of revenue. However, other sources have shifted in relative importance. Specifically, individual income taxes have become a central pillar of the federal revenue system, now comprising nearly half of all revenue. Following an opposite trend, corporate income and excise taxes have decreased relative to other sources."

Indeed, for all the huffing and puffing over income taxes, it's worth remembering that 67.8% of US taxpayers in 2019 will pay more in federal payroll taxes (which fund Social Security, Medicare, and disability insurance) than in federal income taxes. Robert Bellafiore offers this figure, drawn from a Joint Committee on Taxation study, showing that this pattern holds on average for all income groups under $200,000.

Arguments over taxes often make fairness claims about the share of taxes paid by various income groups. Whatever one's ultimate conclusions about what should happen, it's useful to start from the basis of what is actually happening.

It's common to hear a complaint that those with high incomes are evading federal taxes. Some do, of course. It's a big country. If a very rich person puts all their money into tax-exempt bonds, accepting the lower interest rates that come with being tax-free, they won't pay taxes on that income. But on average, those with higher incomes do pay a much larger share of taxes. Robert Bellafiore offers a couple of illustrative graphs. The first figure focuses only on federal income taxes.

The second figure includes the share of all federal taxes: that is, income, payroll, corporate (as attributed to individuals who benefit from corporate profits), excise taxes on gasoline, tobacco, and alcohol, and so on. Again, those with higher income levels pay a larger share of total federal taxes.

One can of course still argue that the share of taxes paid by those with high incomes should be larger. But it is simply not true that those with high incomes don't already pay a larger share of federal taxes.

What about taxes paid at the very tip-top of the income distribution? Erica York offers this figure on the average tax rates paid by the top 0.1%. To be clear, the "average" tax rate is the actual share of income paid in taxes, which is different from the "marginal" tax rate charged on the highest $1 of income earned. Back in the 1950s, the highest marginal income tax rates sometimes reached 90%. The fact that the average tax rate is so much lower tells you that those very high marginal tax rates were largely for show, in the sense that they didn't actually apply to very much income. York writes: "The graph below illustrates the average tax rates that the top 0.1 percent of Americans faced over the last century, based on research from Thomas Piketty, Emmanuel Saez, and Gabriel Zucman. The blue line includes the impact of all federal, state, and local taxes on individual income, payroll taxes, estates, corporate profits, properties, and sales. The purple line shows income taxes only, including federal, state, and local." The overall pattern is that while effective tax rates on the top 0.1% were higher in the 1950s, they haven't shown much long-term trend one way or the other in the last half-century or so.
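To make the average-versus-marginal distinction concrete, here is a minimal sketch, using an invented three-bracket schedule rather than the actual 1950s tax code. Even a 90% top marginal rate yields a much lower average rate, because the top rate applies only to income above the last threshold:

```python
# Hypothetical bracket schedule: (threshold, rate) pairs. Income between
# a threshold and the next is taxed at that bracket's rate.
BRACKETS = [(0, 0.10), (50_000, 0.30), (500_000, 0.90)]

def tax_owed(income):
    """Tax under the hypothetical progressive schedule above."""
    tax = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

income = 600_000
marginal_rate = BRACKETS[-1][1]           # 0.90 on the last dollar earned
average_rate = tax_owed(income) / income  # actual share of income paid: ~0.38
```

So a taxpayer facing a 90% marginal rate in this made-up schedule pays roughly 38% of total income in tax, which is the gap the figure illustrates.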

When listening to arguments over tax policy, it's common to hear complaints about whether deductions should be limited for purposes like mortgage interest, state and local taxes, or charitable contributions. It's useful to remember that those deductions don't apply to most taxpayers. Erica York explains: "In 2016, barely a quarter of households with adjusted gross income (AGI) between $40,000 and $50,000 claimed itemized deductions when filing their taxes. In contrast, more than 90 percent of households making $200,000 and above itemized their deductions." One effect of the 2017 tax reform law is that the number of taxpayers who find it useful to itemize deductions will drop by as much as 60%.

The share of total federal taxes paid by those with high incomes has been rising over time. Part of that change is because the share of those who owe zero in federal income tax has been rising over time. Robert Bellafiore provides a graph. One main reason for the rising share of taxpayers who owe zero is the expansion of refundable tax credits aimed at those with lower incomes, including the Earned Income Tax Credit and the Child Tax Credit. You can also see that the share of those with zero income taxes owed rose during the Great Recession.

In a different post, Robert Bellafiore offers a chart showing the overall effects of federal tax and transfer policy on the share of income received by different groups. He writes: "The lowest quintile’s income nearly doubles, while the second and middle quintiles experience relatively smaller increases in income. The fourth quintile’s income share remains constant, and only the highest quintile has a lower share of income after taxes and transfers. The top 1 percent’s share of income, for example, falls from 16.6 percent to 13.2 percent."

Again, one can argue that the amount of redistribution should be larger. But it would be untrue to argue that a significant amount of redistribution, like doubling the after-taxes-and-transfers share of the lowest quintile, doesn't already happen.

The High Costs of Renewable Portfolio Standards

A "renewable portfolio standard" is a rule that a certain percentage of electricity generation needs to come from renewable sources. Such rules have been spreading in popularity. But Michael Greenstone and Ishan Nath argue in "Do Renewable Portfolio Standards Deliver?" that they are an overly costly way of reducing carbon emissions (Becker Friedman Institute, University of Chicago, April 21, 2019). As they explain in the Research Summary (a full working paper is also available at the link):

"29 states and the District of Columbia have been successful in passing Renewable Portfolio Standards (RPS), which require that a percentage of the electricity generation come from renewable sources. These programs currently cover 64 percent of the electricity sold in the United States. Until now, studies have suggested that RPS programs only marginally increase electricity costs, because they have only examined differences in the costs of generation. These studies fail to fully incorporate three key costs that the addition of renewable resources impose on the electricity system: 1) The intermittent nature of renewables means that back-up capacity must be added; 2) Because renewable sources take up a lot of physical space, are geographically dispersed and are frequently located away from population centers, they require the substantial addition of transmission capacity; and 3) In mandating an increase in renewable power, baseload generation is prematurely displaced, which imposes costs on ratepayers and owners of capital."

Their research design is straightforward. They compare states with and without RPS policies, using data over the quarter-century from 1990-2015. They find that the Renewable Portfolio Standards do increase the use of renewables in the generation of electricity, but at a cost.

Seven years after legislation creating an RPS program, retail electricity prices are 11 percent higher on average (1.3 cents per kWh), or about $30 billion annually across the 29 states. Twelve years afterward, prices are 17 percent higher on average (2 cents per kWh). In total, seven years after the start of the programs, consumers in the 29 RPS states paid $125.2 billion more for electricity than they would have in its absence. … In states with RPS policies, renewables’ share of generation increased about 1.8 percent seven years after passage, and 4.2 percent twelve years afterwards. These figures are net of renewable generation that was already in place at the time an RPS was implemented.

Even the most ardent advocates of reducing carbon emissions should desire to do so at the lowest practical cost. By that standard, the Renewable Portfolio Standards have not been a success. Greenstone and Nath write:

In increasing the share of renewable generation, the states with an RPS policy saved 95 to 175 million tons of carbon emissions seven years after the start of the programs. This was driven by a decrease in the carbon intensity of electricity supply in RPS states. However, this study finds that the cost of reducing carbon emissions through an RPS policy is more than $130 per ton of carbon abated and as much as $460 per ton of carbon abated—significantly higher than conventional estimates of the social and economic costs of carbon emissions. For example, the central estimate of the Social Cost of Carbon (SCC) tallied by the Obama Administration is approximately $50 per ton in today’s dollars. A second point of comparison comes from the cost of abating a metric ton of CO2 in current cap-and-trade markets in the US: it is about $5 in the northeast’s Regional Greenhouse Gas Initiative (RGGI) and $15 in California’s cap-and-trade system.
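The comparison in the passage above boils down to simple arithmetic: divide the extra cost a policy imposes by the tons of carbon it abates, and check the result against the social cost of carbon. Here is a sketch of that arithmetic with invented cost and abatement numbers; the paper's own $130-$460 range comes from a fuller attribution exercise, not this simple division:

```python
SOCIAL_COST_OF_CARBON = 50.0  # central Obama-era estimate, dollars per ton

def abatement_cost_per_ton(extra_cost_dollars, tons_abated):
    """Dollars of extra electricity cost per ton of CO2 avoided."""
    return extra_cost_dollars / tons_abated

# Invented illustrative numbers: a policy costing ratepayers an extra
# $13 billion while abating 100 million tons runs $130 per ton,
# well above the $50-per-ton social cost of carbon benchmark.
cost = abatement_cost_per_ton(13e9, 100e6)          # 130.0
cost_effective = cost <= SOCIAL_COST_OF_CARBON      # False
```

The same division explains why the $5/ton (RGGI) and $15/ton (California) cap-and-trade benchmarks make the RPS estimates look expensive.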

For the record, because we live in a time when people obsess over the potential bias of researchers, Greenstone has been, among a number of other professional affiliations, "Chief Economist for President Obama’s Council of Economic Advisers, where he co-led the development of the United States Government’s social cost of carbon." Nath is a PhD student at the University of Chicago.

For discussion of cost-effective ways of reducing carbon emissions, a useful starting point is Kenneth Gillingham and James H. Stock. 2018. "The Cost of Reducing Greenhouse Gas Emissions." Journal of Economic Perspectives, 32 (4): 53-72.

Is Something Different this Time about the Effect of Technology on Labor Markets?

There's a well-worn conversation about the relationship between new technology and possible job displacement, which goes something like this:

Concerned person: "New developments in information technology and artificial intelligence are going to threaten lots of jobs."

Skeptical person: "Economies in developed countries have been experiencing extraordinary developments and shifts in new technology for literally a couple of centuries. But as old jobs have been dislocated, new jobs have been created."

Concerned person: "This time seems different."

Skeptical person: "Every time is different in the specific details. But there's certainly no downward pattern in the number of jobs in the last two centuries, or the last few decades."

Concerned person: "Still, the way in which information technology and artificial intelligence replace workers seems different than the way in which, say, assembly lines replaced skilled artisan workers or combine harvesters replaced farm workers."

Skeptical person: "Maybe this time will be different. After all, it's logically impossible to prove that something in the future will NOT be different. But based on the long-run historical pattern, the evidence that new technology leads to shifts in the labor market is clear-cut, while the evidence that it leads to permanent job loss for the population as a whole is nonexistent."

Concerned person: "Still, this current wave of technology seems different."

Skeptical person: "I guess we'll see how it unfolds in the next decade or two."

The just-released Spring 2019 issue of the Journal of Economic Perspectives has a symposium on "Automation and Employment." Two of the articles in particular offer concrete arguments about how something is different in how the current new technologies are interacting with labor markets.

Daron Acemoglu and Pascual Restrepo discuss "Automation and New Tasks: How Technology Displaces and Reinstates Labor." They suggest a framework in which automation can have three possible effects on the tasks that are involved in doing a job: a displacement effect, when automation replaces a task previously done by a worker; a productivity effect, in which the higher productivity from automation taking over certain tasks leads to more buying power in the economy, creating jobs in other sectors; and a reinstatement effect, when new technology reshuffles the production process in a way that leads to new tasks that will be done by labor.

In this approach, the effect of automation on labor is not predestined to be good, bad, or neutral. It depends on how these three factors interact. Acemoglu and Restrepo attempt to calculate the size of these three factors for the US economy in two time periods: 1947-1987 and 1987-2017. There is of course considerable technological change through all of this 70-year period. For example, I've written on this blog about "Automation and Job Loss: The Fears of 1964" (December 1, 2014) and "Automation and Job Loss: Leontief in 1982" (August 22, 2016). But the later period can be associated more closely with the rise of computers and information technology.

Their calculations suggest that in the 1987-2017 period, the effects of automation have involved a larger displacement effect, lower productivity growth, and a lower reinstatement effect. The lower demand for labor can be seen in stagnant wage growth over this period for lower- and medium-skilled workers. They argue that the real issue isn't whether automation displaces tasks and alters jobs–of course it does–but rather how those displacement effects compare to how automation leads to greater productivity and the possibility of new job-related tasks that reinstate labor. They argue that public policy has some power to affect how the forward movement of technology will affect demand for labor: for example, they argue that public policy has tended to favor investment in new equipment and machinery over investment in human capital, like on-the-job training by employers.
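The accounting logic of the three-effect framework can be sketched in a toy calculation. All numbers here are invented for illustration, not the authors' estimates: the point is only that the net change in labor demand is the sum of a negative displacement effect and positive productivity and reinstatement effects, so weaker offsetting effects can flip the sign of the net change.

```python
def net_labor_demand_change(displacement, productivity, reinstatement):
    """Toy sum of the three effects, each in percentage points of labor demand."""
    return -displacement + productivity + reinstatement

# Invented numbers: the same-sized displacement effect, combined with
# weaker productivity and reinstatement effects, turns a net gain for
# labor into a net loss.
earlier_period = net_labor_demand_change(2.0, 3.0, 1.5)  # +2.5
later_period = net_labor_demand_change(2.0, 1.0, 0.5)    # -0.5
```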

Another angle on new technology and labor markets in the same issue of JEP comes from Jeremy Atack, Robert A. Margo, and Paul W. Rhode in "'Automation' of Manufacturing in the Late Nineteenth Century: The Hand and Machine Labor Study." The focus of their paper is on a remarkably detailed US government study done in the 1890s of how machines were replacing the tasks involved in specific jobs.

The new assembly-line machines of that time clearly displaced large numbers of tasks previously done by workers. However, the productivity effects of this wave of automation were very large. In addition, the new automation technology of that time had a powerful reinstatement effect of creating new tasks to be done by workers. They write:

[T]he net effect of the introduction of new tasks on labor demand appears to have been positive. This is because the share of time taken up by new tasks in machine labor was larger than the share of time associated with hand tasks that were abandoned—indeed, five times larger. Among other activities, these new tasks included maintenance of steam engines, a foreman supervising large numbers of workers (discussed further below), and workers packaging products for distant markets.

Atack, Margo, and Rhode also offer a broader point about technology and labor that seems to me worth considering. They point out that back in the 1890s, with a much heavier use of machines in the production process, there was a shift toward a broader division of labor: that is, the study counted more overall tasks to be done when machines were used, as compared to before the machines were used. One implication for workers of that time was that the path to a steady and well-paid job was to focus on a very particular niche of the production process. Indeed, one broad description of labor markets at this time is that there was a shift away from artisan workers (say, blacksmiths) who carried out many tasks, and toward workers who focused on a smaller set of tasks.

The authors suggest that one way in which modern technology is different from the 1890s is that it does not reward or encourage this kind of extreme division of labor. They write: 

The massive division of labor documented front and center in the Hand and Machine Labor study dramatically affected the nature of the human capital investment decision facing successive cohorts of American workers contemplating whether to enter the manufacturing sector. Earlier in the nineteenth century, the human capital investment problem such workers faced was mastering the diverse set of skills associated with most or all of the tasks involved in making a product, along with managing the affairs of a (very) small business, an artisan shop. The human capital investment problem facing the prospective manufacturing worker in the 1890s was quite different. There was little or no need to learn how to fashion a product from start to finish; mastery of one or two tasks would do, and such mastery might be gained quickly on the job. The more able or ambitious might gravitate to learning new skills, such as designing, maintaining, or repairing steam engines, or clerical/managerial tasks, the demand for which had grown sharply as average establishment size increased over the century.

For many decades in the twentieth century, specialization was economically beneficial to workers—the costs of learning skills were relatively modest and the return on the investment—a relatively secure, highly paid job in manufacturing—made that investment worthwhile. The prospect of widespread automation has arguably changed this calculus. No single “job” is safe and the optimal investment strategy may be very different—a suite of diverse, relatively uncorrelated skills as insurance against displacement by robotics and artificial intelligence. This is perhaps the sense in which the history of how technology affects jobs is not repeating itself, and “this time” really is different.

In watching the cohort that includes my own children move from high school into young adulthood, this observation seems to me to contain a lot of truth. When it comes to training for a future job, many of us are still mentally in the 1890s, looking for one or a few particular focused skills that will guarantee a "good job." But modern technologies are likely to disrupt what tasks are actually done in a very wide array of jobs, which will put a premium on workers with the ability to shift flexibly as the job situation is reshaped.

How Single Payer Requires Many Choices

I sometimes hear "single payer" spoken of as if it were a complete description of a plan for revising the US health care system. But "single payer" actually involves a lot of choices. The Congressional Budget Office walks through the options in its report "Key Design Components and Considerations for Establishing a Single-Payer Health Care System" (May 2019).

As a preview of some of these issues, it's worth noting that some prominent countries with universal health coverage and reasonably good cost control (at least by US standards!) use regulated multipayer systems: Germany, Switzerland, and the Netherlands, for example. For those who like the sound of "Medicare for All," it's worth remembering that a certain number of analysts don't consider Medicare to be a single-payer system, because of the large role played by private insurers in the Medicare Advantage program, and because all of Medicare's drug benefits (in Part D of the program) are delivered by private insurers.

However, if one narrows the options to an actual single-payer plan, which is the label typically put on Canada, Denmark, the UK, Sweden, and others, a number of questions still need to be answered. Here's a chart from the report showing many of these questions, but because I fear it won't be readable in this blog format, I repeat a number of the questions below:

Would the plan be run by the federal government, the states, or some third-party administrator? In Canada, for example, national health insurance is best understood as 13 separate plans run by the provinces and territories. Would the single-payer plan use a single information technology infrastructure nationwide?

Who determines exactly what services are covered or not covered by the single payer plan? Who decides when new treatments would be covered? Would the mandated package of benefits cover outpatient prescription drugs? What about dental, vision, and mental health services? These are not mandated benefits in Canada.

Would there be cost-sharing for physician and hospital services? There is in Sweden, but not in the United Kingdom. How about a limit on out-of-pocket spending? There is such a limit in Sweden, but not in the UK. Would long-term care services be covered? The answer is "yes" in Sweden, "limited" in the UK, and "no" in Canada. If there is cost-sharing, would it take the form of deductibles, co-payments, or co-insurance?
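That last question is an arithmetic one: deductibles, co-payments, and co-insurance are just different formulas for splitting a medical bill between the patient and the payer. Here is a small sketch with invented plan parameters, not any real system's rules:

```python
def patient_share(bill, deductible, coinsurance, oop_max):
    """Patient's out-of-pocket cost for a single bill: the full amount
    up to the deductible, plus a coinsurance fraction of the remainder,
    capped at the out-of-pocket maximum. All parameters hypothetical."""
    below = min(bill, deductible)
    above = max(bill - deductible, 0) * coinsurance
    return min(below + above, oop_max)

# A $10,000 bill under a $1,000 deductible, 20% coinsurance, $4,000 cap:
# 1,000 + 0.2 * 9,000 = 2,800, which stays below the cap.
cost = patient_share(10_000, 1_000, 0.20, 4_000)  # 2800.0
```

A design with no out-of-pocket limit (the UK pattern, if there were cost-sharing at all) amounts to setting the cap to infinity; a pure co-payment design replaces the coinsurance fraction with a flat fee per visit.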

Will supplemental health insurance be allowed? "In England, private insurance gives people access to private providers, faster access to care, or coverage for complementary or alternative therapies, but participants must pay for it separately in addition to paying their individual required tax contributions to the NHS. In Australia, private insurance covers services that the public plan does not, such as access to private hospitals, a choice of specialists in both public and private hospitals, and faster access to nonemergency care."

Would people be allowed to "opt out" of the government health insurance plan and purchase private insurance instead?

Will hospitals be publicly owned, privately owned, or a mixture? Will hospitals be paid with a global budget to allocate across patients, by payments based on the conditions patients are diagnosed with, or on a fee-for-service basis? "Currently, about 70 percent of U.S. hospitals are privately owned: About half are private, nonprofit entities, and 20 percent are for-profit. Almost all physicians are self-employed or privately employed. A single-payer system could retain current ownership structures, or the government could play a larger role in owning hospitals and employing providers. In one scenario, the government could own the hospitals and employ the physicians, as it currently does in most of the VHA [Veterans Health Administration] system."

Will doctors be salaried public employees? If they are private providers, will they be paid on a fee-for-service basis, or receive a per-head or "capitation" payment based on the number of patients they serve? In many single-payer systems, the primary care physicians are private, but the outpatient specialist physicians are sometimes (Denmark) or always (UK) public and salaried.

How are prices to be determined for prescription drugs?

Does the financing for the system come from general tax revenues (Canada), an earmarked income tax (Denmark), a mixture of general revenues and payroll taxes (UK), or some other source?

The CBO report goes into these kinds of questions, and others, in more detail. My point here isn't to argue for or against "single payer." There are versions of single payer I would prefer to others, and although it's a story for another day, I like a lot of the elements of the German and Swiss multi-payer systems for financing health care. My point here is that if you are trying to describe a direction for reform of the US health care system, all $3.5 trillion of it, "single payer" is barely the beginning of a useful description; indeed, it sidesteps many of the tough decisions that would still need to be made.

Spring 2019 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available online, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Spring 2019 issue, which in the Taylor household is known as issue #128. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.

____________________
Symposium on Automation and Employment
"Automation and New Tasks: How Technology Displaces and Reinstates Labor," by Daron Acemoglu and Pascual Restrepo
We present a framework for understanding the effects of automation and other types of technological changes on labor demand, and use it to interpret changes in US employment over the recent past. At the center of our framework is the allocation of tasks to capital and labor—the task content of production. Automation, which enables capital to replace labor in tasks it was previously engaged in, shifts the task content of production against labor because of a displacement effect. As a result, automation always reduces the labor share in value added and may reduce labor demand even as it raises productivity. The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage. The introduction of new tasks changes the task content of production in favor of labor because of a reinstatement effect, and always raises the labor share and labor demand. We show how the role of changes in the task content of production—due to automation and new tasks—can be inferred from industry-level data. Our empirical decomposition suggests that the slower growth of employment over the last three decades is accounted for by an acceleration in the displacement effect, especially in manufacturing, a weaker reinstatement effect, and slower growth of productivity than in previous decades.
Full-Text Access | Supplementary Materials

"Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction," by Ajay Agrawal, Joshua S. Gans and Avi Goldfarb
Recent advances in artificial intelligence are primarily driven by machine learning, a prediction technology. Prediction is useful because it is an input into decision-making. In order to appreciate the impact of artificial intelligence on jobs, it is important to understand the relative roles of prediction and decision tasks. We describe and provide examples of how artificial intelligence will affect labor, emphasizing differences between when the automation of prediction leads to automating decisions versus enhancing decision-making by humans.
Full-Text Access | Supplementary Materials

"'Automation' of Manufacturing in the Late Nineteenth Century: The Hand and Machine Labor Study," by Jeremy Atack, Robert A. Margo and Paul W. Rhode
Recent advances in artificial intelligence and robotics have generated a robust debate about the future of work. An analogous debate occurred in the late nineteenth century when mechanization first transformed manufacturing. We analyze an extraordinary dataset from the late nineteenth century, the Hand and Machine Labor study carried out by the US Department of Labor in the mid-1890s. We focus on transitions at the task level from hand to machine production, and on the impact of inanimate power, especially of steam power, on labor productivity. Our analysis sheds light on the ability of modern task-based models to account for the effects of historical mechanization.
Full-Text Access | Supplementary Materials

"The Rise of Robots in China," by Hong Cheng, Ruixue Jia, Dandan Li and Hongbin Li
China is the world's largest user of industrial robots. In 2016, sales of industrial robots in China reached 87,000 units, accounting for around 30 percent of the global market. To put this number in perspective, robot sales in all of Europe and the Americas in 2016 reached 97,300 units (according to data from the International Federation of Robotics). Between 2005 and 2016, the operational stock of industrial robots in China increased at an annual average rate of 38 percent. In this paper, we describe the adoption of robots by China's manufacturers using both aggregate industry-level and firm-level data, and we provide possible explanations from both the supply and demand sides for why robot use has risen so quickly in China. A key contribution of this paper is that we have collected some of the world's first data on firms' robot adoption behaviors with our China Employer-Employee Survey (CEES), which contains the first firm-level data that is representative of the entire Chinese manufacturing sector.
Full-Text Access | Supplementary Materials

Symposium on Fiscal Policy

"Ten Years after the Financial Crisis: What Have We Learned from the Renaissance in Fiscal Research?" by Valerie A. Ramey
This paper takes stock of what we have learned from the "Renaissance" in fiscal research in the ten years since the financial crisis. I first discuss the new innovations in methodology and various strengths and weaknesses of the main approaches to estimating fiscal multipliers. Reviewing the estimates, I come to the surprising conclusion that the bulk of the estimates for average spending and tax change multipliers lie in a fairly narrow range, 0.6 to 1 for spending multipliers and -2 to -3 for tax change multipliers. However, I identify economic circumstances in which multipliers lie outside those ranges. Finally, I review the debate on whether multipliers were higher for the 2009 Obama stimulus spending in the United States or for fiscal consolidations in Europe.
Full-Text Access | Supplementary Materials

"Rising Government Debt: Causes and Solutions for a Decades-Old Trend," by Pierre Yared
Over the past four decades, government debt as a fraction of GDP has been on an upward trajectory in advanced economies, approaching levels not reached since World War II. While normative macroeconomic theories can explain the increase in the level of debt in certain periods as a response to macroeconomic shocks, they cannot explain the broad-based long-run trend in debt accumulation. In contrast, political economy theories can explain the long-run trend as resulting from an aging population, rising political polarization, and rising electoral uncertainty across advanced economies. These theories emphasize the time-inconsistency in government policymaking, and thus the need for fiscal rules that restrict policymakers. Fiscal rules trade off commitment to not overspend and flexibility to react to shocks. This tradeoff guides design features of optimal rules, such as information dependence, enforcement, cross-country coordination, escape clauses, and instrument versus target criteria.
Full-Text Access | Supplementary Materials

"Effects of Austerity: Expenditure- and Tax-Based Approaches," by Alberto Alesina, Carlo Favero and Francesco Giavazzi
We review the debate surrounding the macroeconomic effects of deficit reduction policies (austerity). The discussion about "austerity" in general has distracted commentators and policymakers from a very important result, namely the enormous difference, on average, between expenditure- and tax-based austerity plans. Spending-based austerity plans are remarkably less costly than tax-based plans. The former have on average a close to zero effect on output and lead to a reduction of the debt/GDP ratio. Tax-based plans have the opposite effect and cause large and long-lasting recessions. These results also apply to the recent episodes of European austerity, which in this respect were not especially different from previous cases.
Full-Text Access | Supplementary Materials

Symposium on the Problems of Men

\”The Declining Labor Market Prospects of Less-Educated Men,\” by Ariel J. Binder and John Bound
Over the last half century, US wage growth stagnated, wage inequality rose, and the labor-force participation rate of prime-age men steadily declined. In this article, we examine these worrying labor market trends, focusing on outcomes for males without a college education. Though wages and participation have fallen in tandem for this population, we argue that the canonical neoclassical framework, which postulates a labor demand curve shifting inward across a stable labor supply curve, does not reasonably explain the data. Alternatives we discuss include adjustment frictions associated with labor demand shocks and effects of the changing marriage market—that is, the fact that fewer less-educated men are forming their own stable families—on male labor supply incentives. In the synthesis that emerges, the phenomenon of declining prime-age male labor-force participation is not coherently explained by a series of causal factors acting separately. A more reasonable interpretation, we argue, involves complex feedbacks between labor demand, family structure, and other factors that have disproportionately affected less-educated men.
Full-Text Access | Supplementary Materials

\”When Labor\’s Lost: Health, Family Life, Incarceration, and Education in a Time of Declining Economic Opportunity for Low-Skilled Men,\” by Courtney C. Coile and Mark G. Duggan
The economic progress of US men has stagnated in recent decades. The labor force participation rate of men ages 25-54 peaked in the mid-1960s and has declined since then (according to the Bureau of Labor Statistics), while men\’s real median earnings have been flat since the early 1970s. These population averages mask considerably larger declines in participation among less-educated and non-white men as well as substantial increases in wage inequality. In this paper, we seek to illuminate the broader context in which prime-age men are experiencing economic stagnation. We explore changes for prime-age men over time in education, mortality, morbidity, disability program receipt, family structure, and incarceration rates. We focus on prime-age men, namely those ages 25-54, and on the years 1980-2016 (or 2017 when possible), encompassing much of the period of reduced economic progress for low-skilled men.
Full-Text Access | Supplementary Materials

\”The Tenuous Attachments of Working-Class Men,\” by Kathryn Edin, Timothy Nelson, Andrew Cherlin and Robert Francis
In this essay, we explore how working-class men describe their attachments to work, family, and religion. We draw upon in-depth, life history interviews conducted in four metropolitan areas with racially and ethnically diverse groups of working-class men with a high school diploma but no four-year college degree. Between 2000 and 2013, we deployed heterogeneous sampling techniques in the black and white working-class neighborhoods of Boston, Massachusetts; Charleston, South Carolina; Chicago, Illinois; and the Philadelphia/Camden area of Pennsylvania and New Jersey. We screened to ensure that each respondent had at least one minor child, making sure to include a subset potentially subject to a child support order (because they were not married to, or living with, their child\’s mother). We interviewed roughly even numbers of black and white men in each site for a total of 107 respondents. Our approach allows us to explore complex questions in a rich and granular way that allows unanticipated results to emerge. These working-class men showed both a detachment from institutions and an engagement with more autonomous forms of work, childrearing, and spirituality, often with an emphasis on generativity, by which we mean a desire to guide and nurture the next generation. We also discuss the extent to which this autonomous and generative self is also a haphazard self, which may be aligned with counterproductive behaviors. And we look at racial and ethnic difference in perceptions of social standing.
Full-Text Access | Supplementary Materials

Features
\”Retrospectives: Ricardo on Machinery,\” by Samuel Hollander
We are currently experiencing an outpouring of concern both popular and professional regarding technological unemployment. I shall be discussing an apparent about-turn on the subject by David Ricardo (1772-1823), who at different times, even in different chapters of the same book, and, indeed, even at different places in the same chapter, seemed to be on both sides of the argument as to whether technological unemployment should be a matter for concern. In a chapter entitled \”On Machinery,\” added to the third edition of his Principles of Political Economy (1821), which comprises volume 1 of his Collected Works (1951-73), Ricardo announced that he had become concerned about the possibility, even likelihood, of technical change detrimental to labour\’s interests. However, in the very same \”On Machinery\” chapter, Ricardo also outlined qualifications to show that there was little need for concern. Ricardo\’s opposing messages are reflected in contrasting reactions to the chapter \”On Machinery.\” Some readers—including Thomas Robert Malthus and J. R. McCulloch—understood it as supporting working-class opposition to machinery. Others—including John Stuart Mill and Sir John Hicks—find therein the answer to such opposition.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

The Squeezed Middle Class: An International View

Being \”middle-class\” is a perception that includes more than a certain range of numerical rankings in the distribution of income. For example, it includes a sense that one\’s job is reasonably stable heading into the future, a sense that income from the job is sufficient to purchase the goods and services associated with middle-class social status, and a sense that this status is likely to be passed on to one\’s children.

Concerns over the middle class aren\’t just a US issue: they are coming up in high income countries all around the world. The OECD has just published \”Under Pressure: The Squeezed Middle Class\” (April 2019). To help focus the discussion of the size and stresses for the middle class, it defines \”middle-class\” as those with an income level between 75% and 200% of the median income. However, it also immediately notes that when people are surveyed about whether they perceive themselves as \”middle-class,\” this definition captures their perceptions in only a rough way.
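The OECD's income-based definition can be sketched in a few lines of Python. The income list below is illustrative, not OECD data; the only part taken from the report is the 75%-to-200%-of-median band.

```python
# Sketch of the OECD income-based definition of "middle class":
# anyone with disposable income between 75% and 200% of the national median.
from statistics import median

def middle_class_share(incomes):
    """Fraction of people whose income falls in [0.75 * median, 2.0 * median]."""
    m = median(incomes)
    in_band = [y for y in incomes if 0.75 * m <= y <= 2.0 * m]
    return len(in_band) / len(incomes)

# Illustrative incomes, not OECD data.
incomes = [12_000, 18_000, 25_000, 32_000, 40_000, 48_000,
           60_000, 75_000, 90_000, 250_000]
share = middle_class_share(incomes)  # 4 of 10 fall in the band
```

Note that because the band is anchored to the median, rising inequality can shrink the middle class even when everyone's income grows.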

For example, the vertical axis of the graph below shows the share of the population in a given country that has income between 75% and 200% of the median income. The horizontal axis shows the share of people in that country who refer to themselves as \”middle class.\” A country on the diagonal line would be one where the number who refer to themselves as \”middle-class\” matches the income-based definition.

Countries below the diagonal line are places where the share of those who say they are \"middle-income\" is lower than the income-based definition. The US, for example, has about 50% of its population in the income range from 75% to 200% of the median income, but about 60% of people say they are \"middle class.\" Canada sits nearly on the diagonal: about 60% of its population says they are \"middle-class,\" and about 60% of its population is in the middle-income range. Great Britain is an interesting case where almost 60% of the population has income in the middle-income range, but only a little more than 40% of the population says they are \"middle class\" in survey results.

The rise in income inequality that has happened all around the world will tend to spread out the income distribution, and thus reduce the size of the middle class. But an intriguing pattern that emerges from the OECD analysis is that although the share of the population at the income level from 75% to 200% of the median varies a lot across countries (as shown in the figure above), it hasn\'t declined all that much over time. Their analysis across 17 high-income countries shows that 64% of the population had income from 75% to 200% of the median in the mid-1980s, and this has now fallen to 61% of the total population--an overall decline of about 1 percentage point per decade.

But this shift in the share of population doesn\’t capture two more powerful trends: the share of total income received by those in this middle-income group and the prices paid for goods and services often associated with middle class status.

On the issue of the distribution of income, the OECD writes (references omitted):

The upper-income class controls a considerably larger share of income than in the past. Between the mid-1980s and mid-2010s, its share of income increased by an average of 5 percentage points from 18% to 23%, while it grew 1.5 percentage points as a share of the population (Figure 2.5, Panel B). Save in Ireland, Switzerland and France, upper-income shares of total income climbed in all countries with available data, particularly in Israel, Sweden and the United States. And, in most countries, these gains outstripped the upper-income class's expansion as a share of the population. In the United States, for example, while the upper-income class's share of the population increased 3 percentage points from 11% to 14%, its share of all income climbed 9 percentage points – from 26% to 35%. This change in shares of income in the United States was described as a shift in the "center of gravity in the economy."

This figure makes the point by calculating that total income for the 61-64% of the population in the 75% to 200% of median income range was four times as much as total income for the upper-income group in the mid-1980s, but that has now fallen to three times as much. Markets pay attention to buying power, and the buying power of the middle class is relatively small. 
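A quick way to see what these shares imply is to divide each group's share of income by its share of the population, which gives average income as a multiple of the economy-wide average. This back-of-the-envelope sketch uses the US figures quoted above; it is my arithmetic, not the OECD's own calculation.

```python
# Average income of a group as a multiple of the economy-wide average:
# its share of total income divided by its share of the population.
def relative_income(income_share, population_share):
    return income_share / population_share

# US upper-income class, per the OECD figures quoted above.
mid_1980s = relative_income(0.26, 0.11)  # roughly 2.4x the average income
mid_2010s = relative_income(0.35, 0.14)  # 2.5x the average income
```

The group's relative income per person changed only modestly; what changed is that the group itself grew, so its total buying power pulled further ahead of the middle.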

Another big shift is that the prices of certain goods typically viewed as part of a \"middle-class\" lifestyle have been rising faster than most prices. The black line (HICP, the Harmonised Index of Consumer Prices) shows a measure of the overall level of inflation. The other lines show the rise in prices of education, health care, and housing. A number of the main consumption goods associated with \"middle-class\" status have become harder to afford.

Underlying these changes is a shift in the distribution of jobs, with a decline in middle-skill jobs in  particular. The OECD writes: 

Recent work by the OECD confirms that in most countries the share of jobs in middle-skill occupations has declined relative to high-skill and low-skill occupations  since the mid-1990s. The OECD also finds that occupational polarisation is closely associated with changes in the distribution of occupations within sectors, although de-industrialisation (the shift of employment from manufacturing to the services) also plays an important role. Furthermore, polarisation and de-industrialisation both appear strongly related to technological change. Evidence of an association between polarisation and globalisation is weaker, however.

Job polarisation has resulted in a net shift of employment to high skill occupations in most OECD countries. On average across the 21 OECD countries for which data were available, middle-skill occupations have lost 8 percentage points in employment shares, while low skill occupations have lost about 2 percentage points and the high skill occupations have gained 10 percentage points. Indeed, there was a shift towards highly skilled employment in most countries, with the aggregate share of middle-skill jobs declining in 19 countries, rising only in Mexico and the Slovak Republic. The increase in high-skill jobs offset the decline – except in Greece, Hungary and the United States. In those countries, the greatest climbs came in low-skill occupations, which nevertheless lost labour market shares in a number of other countries, though only in Belgium did they fare worse than middle-skill occupations. Overall, the most common pattern is one of a decline in middle-skill jobs relative to both high and low skill occupations, with most gains made by high-skill jobs … 

Changes in the fortunes of the different skill groups may explain some of the social frustration that has been at the centre of the political debate in recent years. Jobs increasingly fail to yield the income status traditionally associated with their skill levels. In most countries, there are fewer prospects of high-skill workers being in the upper-income class, and of middle and low-skill workers in the middle-income class.

What does all of this imply about how to address concerns over the middle class? The direct concerns involve a desire for a labor market where workers find it more straightforward to make a lasting connection with an employer, with health and pension benefits and prospects for a career path. They also involve a desire to be able to afford consumption goods like housing, education, and health care, which in most countries are either directly run by the government or heavily shaped by regulation.

I\'m not someone who is reflexively opposed to higher taxes for those with high incomes, but these kinds of concerns over the future of the middle class are unlikely to be addressed in a sustainable and long-term way by proposals to tax the rich and subsidize the middle class. Laws that seek to command higher wages and benefits, or to command lower prices for certain goods, are not a long-term answer either. Actual answers involve thinking in a more detailed way about how labor markets function, and more specifically how they improve productivity while incorporating and training workers. There also needs to be more detailed thinking about the rules and practices that governments have set up around the production of housing, health care, and education, and how those rules might be sensibly reformed.

Globalization: More Than Before, But Less than You Think?

Globalization can be loosely defined as increases in the flow of goods, services, finance, people, and information across national boundaries. It has been generally on the rise in recent decades, although this trend experienced a substantial hiccup around the time of the Great Recession. Steven A. Altman, Pankaj Ghemawat, and Phillip Bastian have written \"DHL Global Connectedness Index 2018: The State of Globalization in a Fragile World\" (February 2019). Most of the report is country- and region-level descriptions of the level of globalization.

Here, I draw on the first chapter which tackles the overview question, \”How Globalized is the World?\” One of the themes: 

\”Surprisingly, one commonality between globalization’s supporters and its critics is that both tend to believe the world is already far more globalized than it really is. …  The world is both more globalized than ever before and less globalized than most people perceive it to be. The intriguing possibility embodied in that conclusion is that companies and countries have far larger opportunities to benefit from global connectedness and more tools to manage its challenges than many decision-makers recognize.\”

The authors point to survey data on what people believe about globalization, compared to the actual data. Here are the results of a survey of business managers (footnotes omitted):

Figure 1.1 also highlights how managers tend to greatly overestimate measures of the depth of globalization. The actual levels are juxtaposed on the graph against perceived levels from a survey of 6,035 managers across three advanced economies (Germany, the UK, and the US) and three emerging economies (Brazil, China, and India) that we conducted in 2017. On average, the managers guessed that the world was five times more deeply globalized than it really is! In fact, their perceptions were no more accurate than those of students surveyed across 138 countries or members of the general public in the United States. And CEOs and other senior executives had even more exaggerated perceptions than did junior and middle managers—perhaps because their own lives tend to be far more global than those of their employees and customers.


Thus, some of the discussion in the report emphasizes that most economic activity is domestic, even for multinational firms.

The combined output of all multinational firms outside of their home countries added up to only 9% of global economic output in 2017, and just 2% of all employees around the world worked in the international operations of multinational firms. In part, those statistics reflect the fact that most companies are still domestic. Less than 0.1% of all firms have foreign operations and about 1% export. Small firms are, on average, much less international than large ones, and most companies are small. But even among the Fortune Global 500, the world’s largest firms by revenue, domestic sales still exceed international sales. … The same pattern of limited breadth prevails at the firm level as well. Among the world’s 100 largest corporations ranked by foreign assets, the average firm earns roughly 60% of its revenue in just four countries (home plus three international markets).

Although the buzzword is \”globalization,\” what this typically means in practice is trade with a few neighbors who are geographically close, or share the same language, or both.

Most countries' international flows are so highly concentrated with key partner countries (usually neighbors) that it hardly makes sense to think of them as global at all. In fact, flows between countries and their single largest partners (e.g. export destinations for trade) make up nearly one-quarter of all merchandise exports and more than one-quarter of all of the other flows … Expanding the same analysis beyond only countries and their single largest partners, more than half of all flows except merchandise exports and inbound students take place between countries and their top three partners, and 75% or more are between countries and their top 10 partners. Even in the case of merchandise trade, more than half takes place between countries and their top five export destinations. Most countries simply do not maintain strong connections to a large number of other countries.

Geographic distance, along with cultural, administrative/ political, and economic differences go a long way toward explaining the distributions of countries’ flows across locations. For example, if one pair of countries is half as distant as another otherwise similar pair of countries, greater physical proximity alone would be expected to increase the merchandise trade between the closer pair by more than three times and to more than double the stock of foreign direct investment (FDI) between them. And to highlight a cultural commonality, sharing a common official language roughly doubles both trade and foreign direct investment. Thus, despite the widespread perception that advances in transportation and telecommunications technologies are rendering distance irrelevant, international activity continues to be more intense among proximate countries. 
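The distance and language effects in that passage are the kind of relationship captured by log-linear \"gravity\" models of trade. This minimal sketch reverse-engineers illustrative elasticities from the magnitudes quoted above (halving distance roughly triples trade; a shared official language roughly doubles it); the numbers are not the report's own estimates.

```python
import math

# Elasticity chosen so that halving distance triples predicted trade:
# (1/2) ** e = 3  =>  e = -log(3) / log(2), about -1.585.
DISTANCE_ELASTICITY = -math.log(3) / math.log(2)
LANGUAGE_MULTIPLIER = 2.0  # sharing an official language roughly doubles trade

def predicted_trade_multiplier(distance_ratio, shared_language=False):
    """Trade relative to an otherwise-identical pair with distance_ratio = 1."""
    multiplier = distance_ratio ** DISTANCE_ELASTICITY
    if shared_language:
        multiplier *= LANGUAGE_MULTIPLIER
    return multiplier

# Half the distance: about 3x the trade.
closer = predicted_trade_multiplier(0.5)
# Half the distance plus a shared language: about 6x.
closer_same_language = predicted_trade_multiplier(0.5, shared_language=True)
```

The multiplicative form is the point: proximity and cultural commonality compound, which is why flows cluster so heavily among a few close partners.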

The chapter also has some intriguing hints about future directions for globalization. One possibility is an expansion of international flows beyond the closest neighbors and those with common languages, so that more small companies become involved. Globalization is not close to any theoretical ceiling.

Another set of possibilities is that the international connections of the future will tend to put less emphasis on movements of goods, and instead more emphasis on flows of information. Since 2000, flows of goods, finance, and services are all up about 20-25%. But flows of information across international borders--measured by international flows of internet traffic, phone calls, and printed publications--have nearly tripled in that time. The authors write:

On the other hand, trade growth might be slowed by developments that would reduce the attraction of trade motivated by labor cost arbitrage. Automation and 3-D printing could potentially reduce the attraction of offshoring to access low labor costs. And macroeconomic trends imply some narrowing of the scope for such trade as well. One very rough measure of the potential for labor cost arbitrage across countries is the GDP-weighted average of the ratios of countries’ per capita incomes (higher over lower). As large emerging economies (especially China) have become richer, this ratio has already fallen from 8 in 2001 to 5.6 in 2017, and projections from Oxford Economics suggest it will continue falling (more slowly) to about 4 by 2050. While wage arbitrage will continue to motivate trade, exports of labor-intensive products from emerging economies may become a smaller driver of trade growth than in the recent past.
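The \"very rough measure\" in that passage can be sketched as follows. The report describes a GDP-weighted average of countries' per capita income ratios (higher over lower); the pairwise weighting scheme and the sample numbers here are my assumptions for illustration, not the report's exact construction.

```python
from itertools import combinations

def arbitrage_potential(countries):
    """GDP-weighted average, over country pairs, of the ratio of the richer
    country's per capita income to the poorer country's.
    countries: list of (gdp, income_per_capita) tuples."""
    numerator = denominator = 0.0
    for (gdp_a, inc_a), (gdp_b, inc_b) in combinations(countries, 2):
        weight = gdp_a * gdp_b  # assumed weighting: product of pair GDPs
        numerator += weight * (max(inc_a, inc_b) / min(inc_a, inc_b))
        denominator += weight
    return numerator / denominator

# Two hypothetical economies of equal size: the measure is just their income ratio.
potential = arbitrage_potential([(100, 60_000), (100, 10_000)])  # 6.0
```

As a large poor economy's per capita income converges toward the rich economies', the pairwise ratios shrink and the weighted average falls, which is the mechanism behind the decline from 8 to 5.6 described above.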

Flows of international tourists and of international students have been up. But on a global basis, migration has risen less than many people seem to believe. 

On a global basis, migration is on a rising trend, but a very modest one. … The proportion of people living outside of the countries where they were born has risen from 2.8% in 2001 to 3.4% in 2017. Both of those values, however, still round to 3%—the same level that global migration depth has approximated for more than a century! The modest global increase in international migration, however, masks significant increases that have taken place in some countries. In advanced economies, the share of immigrants in the population increased from 9% in 2001 to 13% in 2017. In the United States, 2017 was noteworthy as the year that the proportion of immigrants in the population first surpassed its 1910 peak level.

Thus, the globalization of the future could follow a path where the ties across information, ideas, finance, and people are rising briskly, but the share of actual goods moving across national borders rises much more slowly. Altman, Ghemawat, and Bastian write: 

To summarize, the depth and breadth of trade, capital, information, and people flows—as well as the international business activity of multinational firms—fall far short of levels that are commonly presumed. National borders and the distances and differences between countries still have large dampening effects on international activity. 

A Wrap-Up for the TARP

\”In October 2008, the Emergency Economic Stabilization Act of 2008 (Division A of Public Law 110-343) established the Troubled Asset Relief Program (TARP) to enable the Department of the Treasury to promote stability in financial markets through the purchase and guarantee of `troubled assets.\’” The Congressional Budget Office offers a retrospective on how the controversial program all turned out in \”Report on the Troubled Asset Relief Program—April 2019.\”

One issue (although certainly not the only one) is whether the bailout was mostly a matter of loans or asset purchases that were later repaid. A bailout that was later repaid might still be offensive on various grounds, but at least it would seem preferable to a bailout that was not repaid.

As a reminder, TARP authorized the purchase of $700 billion in \”troubled assets.\” The program ended up spending $441 billion. Of that amount, $377 billion was repaid, $65 billion has been written off so far, and the remaining $2 billion seems likely to be written off in the future. Here\’s the breakdown:

Interestingly, the biggest single area of TARP write-offs was not the corporate bailouts, but the $29 billion for \"Mortgage Programs.\" The CBO writes: \"The Treasury initially committed a total of $50 billion in TARP funds for programs to help homeowners avoid foreclosure. Subsequent legislation reduced that amount, and CBO anticipates that $31 billion will ultimately be disbursed. About $10 billion of that total was designated for grants to certain state housing finance agencies and for programs of the Federal Housing Administration. Through February 28, 2019, total disbursements of TARP funds for all mortgage programs were roughly $29 billion. Because most of those funds were in the form of direct grants that do not require repayment, the government's cost is generally equal to the full amount disbursed.\"

The second-largest category of write-offs is the $17 billion for GM and Chrysler. Here\'s my early take on \"The GM and Chrysler Bailouts\" (May 7, 2012). For a look at the deal from the perspective of two economists working for the Obama administration at the time, see Austan D. Goolsbee and Alan B. Krueger, \"A Retrospective Look at Rescuing and Restructuring General Motors and Chrysler,\" in the Spring 2015 issue of the Journal of Economic Perspectives.

Not far behind is the bailout for the insurance company AIG. It lost $50 billion making large investments and selling insurance on financial assets based on the belief that the price of real estate would not fall. The US Treasury writes: \"At the time, AIG was the largest provider of conventional insurance in the world. Millions depended on it for their life savings and it had a huge presence in many critical financial markets, including municipal bonds.\" A few years back, I wrote about \"Revisiting the AIG Bailout\" (June 18, 2015), based in substantial part on \"AIG in Hindsight,\" by Robert McDonald and Anna Paulson in the Spring 2015 issue of the Journal of Economic Perspectives.

Finally, the Capital Purchase Program involved buying stock in about 700 different financial institutions. Almost all of that stock has now been resold (about $20 million remains), and the government ended up writing off $5 billion overall.

Of these programs, I\’m probably most skeptical about the auto bailouts, but even there, the risk of disruption from a disorderly bankruptcy, spreading from the companies to their suppliers to the surrounding communities, was disturbingly high. The purchases of stock in financial institutions, which may have been the most controversial step at the time, also came the closest to being repaid in full.

In the end, one\'s beliefs about whether TARP was a good idea depend on one\'s view of the US economy in October 2008. My own sense is that the US economy was in extraordinary danger during that month. By all means, let\'s figure out ways to improve the financial system so that future meltdowns are less likely, as I\'ve argued in \"Why US Financial Regulators are Unprepared for the Next Financial Crisis\" (February 11, 2019). But when someone is in the middle of having a heart attack, it\'s time for emergency action rather than a lecture on diet and exercise. And when an economy is in the middle of a severe and widespread financial meltdown, which had a real risk of turning out even worse than what actually happened, the government needs to accept that however its regulators failed in the past and need to reform in the future, emergency actions including bailouts may need to be used. I do sometimes wonder if the TARP controversy would have been lessened if more individuals had been held accountable for their actions (or sometimes lack of action) in the lead-up to the crisis.

Did Disability Insurance Just Fix Itself?

Back in 2015, the Social Security Disability Insurance Trust Fund was in deep trouble, scheduled to run out of money by 2016. A short-term legislative fix bought a few more years of solvency for the trust fund, by moving some of the payroll tax for other Social Security benefits over to the disability trust fund, but the situation continued to look dire. For some of my previous takes on the situation, see this from 2016, this from 2013, or this from 2011. Or see this three-paper symposium on disability insurance from the Spring 2015 Journal of Economic Perspectives, with a discussion of what was going wrong in the US system and discussions of reforms from the Netherlands and the UK.

Well, the 2019 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and the Federal Disability Insurance Trust Funds was recently published. And it now projects that the Disability Insurance trust fund won\'t run out for 33 years. In that sense, Disability Insurance looks to be in better shape than Medicare or the retirement portion of Social Security. What just happened?

This figure from the trustees shows the \”prevalence rate\” of disability–the rate of those receiving disability per 1,000 workers. The dashed line shows the actual prevalence of disability. The solid line shows the change after adjusting for age and sex. You can see the sharp rise in disability prevalence over several decades leading up to about 2014, which was leading to the concerns about the solvency of the system mentioned above. And then you see a drop in the prevalence of disability–both in the gross and the age-sex-adjusted lines.
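The age-sex adjustment behind the solid line is a direct standardization: hold population weights fixed at a reference distribution and reweight each age-sex cell's disability rate, so that changes in population composition don't show up as changes in prevalence. A minimal sketch with made-up numbers (the cell definitions, rates, and weights below are all illustrative, not the trustees' figures):

```python
# Direct standardization of a disability prevalence rate.
def adjusted_prevalence(rates, reference_weights):
    """rates: disability beneficiaries per 1,000 workers by age-sex cell;
    reference_weights: fixed population shares of a reference year (sum to 1)."""
    return sum(rates[cell] * reference_weights[cell] for cell in rates)

# Hypothetical reference population shares and one year's rates per 1,000.
reference = {"men 25-49": 0.30, "men 50-64": 0.20,
             "women 25-49": 0.30, "women 50-64": 0.20}
rates_2014 = {"men 25-49": 25.0, "men 50-64": 90.0,
              "women 25-49": 22.0, "women 50-64": 80.0}
standardized = adjusted_prevalence(rates_2014, reference)  # per 1,000 workers
```

Comparing standardized rates across years isolates changes in the underlying propensity to receive disability benefits from the aging of the workforce.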

As Henry Aaron writes: \”What happened next has stunned actuaries, economists, and analysts of all stripes. The number of people applying for disability benefits dropped…and kept on dropping. Some decline was expected as the economy recovered from the Great Recession and demand for workers increased. But the actual fall in applications has dwarfed expectations. In addition, the share of applicants approved for benefits has also fallen. … And if the drop in applications persists, current revenues may be adequate to cover currently scheduled benefits indefinitely.\”


Of course, the disability rate can fall for a number of reasons, some more welcome than others. To the extent that employment growth and a low unemployment rate have offered people a chance to find jobs in the paid workforce, rather than applying for disability, this seems clearly a good thing. But if this shift resulted from tightening the legal requirements on disability, then we might want to look more closely at whether this tightening made sense. The trust fund actuaries don\'t make judgments on this issue, but here are some bits of evidence.

The Center on Budget and Policy Priorities publishes a \"Chart Book: Social Security Disability Insurance,\" with the most recent version coming out last August. The first figure shows that applications for disability have been falling in recent years. The second figure shows that the number of applications being accepted has also fallen since about 2010 at all three possible stages of the process: initial determination, reconsideration, and appeal.

It\'s not easy to make a judgement on these patterns. A common concern among researchers studying this issue is that the standards for granting disability, and the resulting number of disabled workers, seem to vary a lot across different locations and decision-makers. For example, the CBPP report offers this figure showing differences across states:

A report from the Congressional Research Service, \"Trends in Social Security Disability Insurance Enrollment\" (November 30, 2018), describes some potential causes of the lower disability rates.

Since 2010, new awards to disabled workers have decreased every year, dropping from 1 million to 762,100 in 2017. Although there has been no definitive cause identified, four factors may explain some of the decline in disability awards.

  1. Availability of jobs. The unemployment rate was as high as 9.6% in 2010 and then gradually decreased every year to about 4.35% in 2017. …
  2. Aging of lower-birth-rate cohorts. The lower-birth-rate cohorts (people born after 1964) started to enter peak disability-claiming years (usually considered ages 50 to FRA [federal retirement age]) in 2015, replacing the larger baby boom population. This transition would likely reduce the size of insured population who are ages 50 and above, as well as the number of disability applications. …
  3. Availability of the Affordable Care Act (ACA). … The availability of health insurance under the ACA may lower the incentive to use SSDI as a means of access to Medicare, thus reducing the number of disability applications. …
  4. Decline in the allowance rate. The total allowance rate at all adjudicative levels declined from 62% in 2001 to 48% in 2016. While this decline may in part reflect the impact of the Great Recession (since SSDI allowance rates typically fall during an economic downturn), the Social Security Advisory Board Technical Panel suspects that the declining initial allowance rate may be a result of the change in the SSDI adjudication process.
The Social Security actuaries are projecting that the share of Americans getting disability insurance isn't going to change much over time. But given the experience of the last few years, one's confidence in that projection is bound to be shaky.

When Special Interests Play in the Sunlight

There's a common presumption in American politics that special interests are more likely to hold sway during secret negotiations in back rooms, while the broader public interest is more likely to win out in a transparent and open process. People making this case often quote Louis Brandeis, from his 1914 book Other People's Money: "Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants …" This use of the sunlight metaphor wasn't original to Brandeis: for example, James Bryce also used it in his 1888 book The American Commonwealth.

But what if, at least in some important settings, the reverse is true? What if political processes that are more public actually result in greater power for special interests and less ability to find workable compromise solutions? James D'Angelo and Brent Ranalli make this case in their essay "The Dark Side of Sunlight: How Transparency Helps Lobbyists and Hurts the Public," which appears in the May/June 2019 issue of Foreign Affairs. Responding to the Brandeis metaphor, they write: "Endless sunshine—without some occasional shade—kills what it is meant to nourish."

The underlying argument goes something like this. Many of us have a reflexive belief that open political processes are more accountable, but we don't often ask "accountable to whom?" It's easy to assume that the accountability is to a broader public interest. But in practice, transparency means that focused special interests can keep tabs on each proposal. If the special interests object, they can whip up a storm of protests. They also can threaten attempts at compromise, and push instead toward holding hard lines and party lines. Greater openness means a greater ability to monitor, to pressure, and to punish possible and perceived deviations.

In US political history, the emphasis on sunlight is a relatively recent development. D'Angelo and Ranalli point out:

It used to be that secrecy was seen as essential to good government, especially when it came to crafting legislation. Terrified of outside pressures, the framers of the U.S. Constitution worked in strict privacy, boarding up the windows of Independence Hall and stationing armed sentinels at the door. As Alexander Hamilton later explained, “Had the deliberations been open while going on, the clamors of faction would have prevented any satisfactory result.” James Madison concurred, claiming, “No Constitution would ever have been adopted by the convention if the debates had been public.” The Founding Fathers even wrote opacity into the Constitution, permitting legislators to withhold publication of the parts of proceedings that “may in their Judgment require Secrecy.” …

One of the first acts of the U.S. House of Representatives was to establish the Committee of the Whole, a grouping that encompasses all representatives but operates under less formal rules than the House in full session, with no record kept of individual members’ votes. Much of the House’s most important business, such as debating and amending the legislation that comes out of the various standing committees—Ways and Means, Foreign Affairs, and so on—took place in the Committee of the Whole (and still does). The standing committees, meanwhile, in both the House and the Senate, normally marked up bills behind closed doors, and the most powerful ones did all their business that way. As a result, as the scholar George Kennedy has explained, “Virtually all the meetings at which bills were actually written or voted on were closed to the public.”

For 180 years, secrecy suited legislators well. It gave them the cover they needed to say no to petitioners and shut down wasteful programs, the ambiguity they needed to keep multiple constituencies happy, and the privacy they needed to maintain a working decorum.

But starting in the late 1960s and early 1970s, we have now had a half-century of experimenting with more open processes. How is that working out? When greater transparency in Congress arrived in the 1970s, did that mean special interests had more power or less? There's a simple (if imperfect) test. If special interests had less power, then it would not have been worthwhile to invest as much in lobbying, so more transparency should have been followed by a reduction in lobbying. Of course, the reverse is what actually happened: lobbying rose dramatically in the 1970s, and has risen further since then. Apparently, greater political openness makes spending on lobbying more worthwhile, not less.

D'Angelo and Ranalli argue that a number of the less attractive features of American politics are tied to the push for greater transparency and openness. For example, we now have cameras operating in the House and Senate, which on rare occasions capture actual debate, but are more commonly used as a stage backdrop for politicians recording something for use in their next fundraiser or political ad. When public votes are taken much more often, then more votes are also taken just for show in an attempt to rally one's supporters or to embarrass the other party, rather than for any substantive legislative purpose. Politicians who are always on-stage are likely to display less civility and collegiality and greater polarization, lest they be perceived as insufficiently devoted to their own causes—or even showing the dreaded signs of a willingness to compromise.

As D'Angelo and Ranalli point out, it's interesting to note that when politicians are really serious about something, like gathering in a caucus to choose a party leader, they use a secret ballot. They write: "Just as the introduction of the secret ballot in popular elections in the late nineteenth century put an end to widespread bribery and voter intimidation—gone were the orgies of free beer and sandwiches—it could achieve the same effect in Congress."

It's important to remember that there is a wide array of forums, step-by-step processes, and decisions that feed into any political process. Thus, the choice between secrecy and transparency isn't a binary one: that is, one doesn't need to be in favor of total openness or total secrecy in all situations. D'Angelo and Ranalli make a strong case for questioning the reflexive presumption that more transparency in all settings will lead to better political outcomes.