The Squeezed Middle Class: An International View

Being "middle-class" is a perception that includes more than a certain range of numerical rankings in the distribution of income. For example, it includes a sense that one's job is reasonably stable heading into the future, a sense that income from the job is sufficient to purchase the goods and services associated with middle-class social status, and a sense that this status is likely to be passed on to one's children.

Concerns over the middle class aren't just a US issue: they are coming up in high-income countries all around the world. The OECD has just published "Under Pressure: The Squeezed Middle Class" (April 2019). To help focus the discussion of the size of and stresses on the middle class, it defines "middle-class" as those with an income level between 75% and 200% of the median income. However, it also immediately notes that when people are surveyed about whether they perceive themselves as "middle-class," this definition captures their perceptions in only a rough way.

For example, the vertical axis of the graph below shows the share of the population in a given country that has income between 75% and 200% of the median income. The horizontal axis shows the share of people in that country who refer to themselves as "middle class." A country on the diagonal line would be one where the number who refer to themselves as "middle-class" matches the income-based definition.
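To make the income-based definition concrete, here is a minimal sketch (my own illustration, with made-up incomes, not data from the report) of how a country's middle-class share could be computed under the OECD's 75%-200%-of-median rule:

```python
# Illustrative only: compute an OECD-style middle-class share from a
# hypothetical sample of household incomes (in thousands of dollars).
from statistics import median

incomes = [18, 25, 32, 40, 45, 52, 60, 68, 75, 90, 110, 150, 240]

m = median(incomes)                 # median income of the sample
lo, hi = 0.75 * m, 2.0 * m          # OECD band: 75% to 200% of the median
middle = [x for x in incomes if lo <= x <= hi]
share = len(middle) / len(incomes)

print(f"median = {m}, middle-income band = [{lo}, {hi}]")
print(f"middle-class share = {share:.0%}")
```

Note that because the band is defined relative to the median, a more spread-out income distribution mechanically pushes more people outside the band, which is the link to rising inequality discussed below.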

Countries below the diagonal line are places where the share of the population in the middle-income range is lower than the share who say they are "middle class." The US, for example, has about 50% of its population in the income range from 75% to 200% of the median income, but about 60% of its people say they are "middle class." Canada sits close to the diagonal: about 60% of its population is in the middle-income range, and about 60% say they are "middle class." Great Britain is an interesting case in the other direction: almost 60% of the population has income in the middle-income range, but only a little more than 40% say they are "middle class" in survey results.

The rise in income inequality that has happened all around the world will tend to spread out the income distribution, and thus reduce the size of the middle class. But an intriguing pattern that emerges from the OECD analysis is that although the share of the population at the income level from 75% to 200% of the median varies a lot across countries (as shown in the figure above), it hasn't declined all that much over time. The OECD's analysis across 17 high-income countries shows that 64% of the population had income from 75% to 200% of the median in the mid-1980s, and this has now fallen to 61% of the total population, an overall decline of about 1 percentage point per decade.

But this shift in the share of population doesn't capture two more powerful trends: the share of total income received by those in this middle-income group and the prices paid for goods and services often associated with middle-class status.

On the issue of the distribution of income, the OECD writes (references omitted):

The upper-income class controls a considerably larger share of income than in the past. Between the mid-1980s and mid-2010s, its share of income increased by an average of 5 percentage points from 18% to 23%, while it grew 1.5 percentage points as a share of the population (Figure 2.5, Panel B). Save in Ireland, Switzerland and France, upper-income shares of total income climbed in all countries with available data, particularly in Israel, Sweden and the United States. And, in most countries, they outstripped its expansion as a share of the population. In the United States, for example, while the upper-income class's share of the population increased 3 percentage points from 11% to 14%, its share of all income climbed 9 percentage points – from 26% to 35%. This change in shares of income in the United States was described as a shift in the "center of gravity in the economy."

This figure makes the point by calculating that total income for the 61-64% of the population in the 75% to 200% of median income range was four times as much as total income for the upper-income group in the mid-1980s, but that multiple has now fallen to three. Markets pay attention to buying power, and the relative buying power of the middle class has shrunk.
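As a rough back-of-the-envelope check (my own arithmetic, not a calculation from the report): if the middle group's total income is some multiple of the upper group's, and the upper group's income share is known from the OECD passage quoted above, the middle group's implied share of total income follows directly.

```python
# Back-of-the-envelope arithmetic using the shares quoted above:
# upper-income share of total income, and the middle/upper income ratio.
def middle_income_share(upper_share, ratio_middle_to_upper):
    """Middle group's implied share of total income."""
    return upper_share * ratio_middle_to_upper

mid_1980s = middle_income_share(0.18, 4)  # upper share 18%, middle = 4x upper
mid_2010s = middle_income_share(0.23, 3)  # upper share 23%, middle = 3x upper

print(f"implied middle-income share, mid-1980s: {mid_1980s:.0%}")
print(f"implied middle-income share, mid-2010s: {mid_2010s:.0%}")
```

On these rough numbers, the middle group's slice of total income shrinks relative to the upper group's even though its population share barely moves, which is the point about buying power.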

Another big shift is that the prices of certain goods typically viewed as part of a "middle-class" lifestyle have been rising faster than most prices. The black line (labeled HICP, for the Harmonised Index of Consumer Prices) shows a measure of the overall level of inflation. The other lines show the rise in prices of education, health care, and housing. A number of the main consumption goods associated with "middle-class" status have become harder to afford.

Underlying these changes is a shift in the distribution of jobs, with a decline in middle-skill jobs in particular. The OECD writes:

Recent work by the OECD confirms that in most countries the share of jobs in middle-skill occupations has declined relative to high-skill and low-skill occupations since the mid-1990s. The OECD also finds that occupational polarisation is closely associated with changes in the distribution of occupations within sectors, although de-industrialisation (the shift of employment from manufacturing to the services) also plays an important role. Furthermore, polarisation and de-industrialisation both appear strongly related to technological change. Evidence of an association between polarisation and globalisation is weaker, however.

Job polarisation has resulted in a net shift of employment to high skill occupations in most OECD countries. On average across the 21 OECD countries for which data were available, middle-skill occupations have lost 8 percentage points in employment shares, while low skill occupations have lost about 2 percentage points and the high skill occupations have gained 10 percentage points. Indeed, there was a shift towards highly skilled employment in most countries, with the aggregate share of middle-skill jobs declining in 19 countries, rising only in Mexico and the Slovak Republic. The increase in high-skill jobs offset the decline – except in Greece, Hungary and the United States. In those countries, the greatest climbs came in low-skill occupations, which nevertheless lost labour market shares in a number of other countries, though only in Belgium did they fare worse than middle-skill occupations. Overall, the most common pattern is one of a decline in middle-skill jobs relative to both high and low skill occupations, with most gains made by high-skill jobs … 

Changes in the fortunes of the different skill groups may explain some of the social frustration that has been at the centre of the political debate in recent years. Jobs increasingly fail to yield the income status traditionally associated with their skill levels. In most countries, there are fewer prospects of high-skill workers being in the upper-income class, and of middle and low-skill workers in the middle-income class.

What does all of this imply about how to address concerns over the middle class? The direct concerns involve a desire for a labor market where workers find it more straightforward to make a lasting connection with an employer, with health and pension benefits and prospects for a career path. They also involve a desire to be able to afford consumption goods like housing, education, and health care, which in most countries are either directly run by the government or heavily shaped by regulation.
I'm not someone who is reflexively opposed to higher taxes for those with high incomes, but these kinds of concerns over the future of the middle class are unlikely to be addressed in a sustainable, long-term way by proposals to tax the rich and subsidize the middle class. Laws that seek to command higher wages and benefits, or to command lower prices for certain goods, are not a long-term answer either. Actual answers involve thinking in a more detailed way about how labor markets function, and more specifically how they improve productivity while incorporating and training workers. There also needs to be more detailed thinking about the rules and practices that governments have set up around the production of housing, health care, and education, and how those rules might be sensibly reformed.

Globalization: More Than Before, But Less than You Think?

Globalization can be loosely defined as increases in the flow of goods, services, finance, people, and information across national boundaries. It has been generally on the rise in recent decades, although this trend experienced a substantial hiccup around the time of the Great Recession. Steven A. Altman, Pankaj Ghemawat, and Phillip Bastian have written "DHL Global Connectedness Index 2018: The State of Globalization in a Fragile World" (February 2019). Most of the report consists of country- and region-level descriptions of the level of globalization.

Here, I draw on the first chapter, which tackles the overview question, "How Globalized is the World?" One of the themes:

"Surprisingly, one commonality between globalization's supporters and its critics is that both tend to believe the world is already far more globalized than it really is. … The world is both more globalized than ever before and less globalized than most people perceive it to be. The intriguing possibility embodied in that conclusion is that companies and countries have far larger opportunities to benefit from global connectedness and more tools to manage its challenges than many decision-makers recognize."

The authors point to survey data on what people believe about globalization, compared to the actual data. Here are the results of a survey of business managers (footnotes omitted):

Figure 1.1 also highlights how managers tend to greatly overestimate measures of the depth of globalization. The actual levels are juxtaposed on the graph against perceived levels from a survey of 6,035 managers across three advanced economies (Germany, the UK, and the US) and three emerging economies (Brazil, China, and India) that we conducted in 2017. On average, the managers guessed that the world was five times more deeply globalized than it really is! In fact, their perceptions were no more accurate than those of students surveyed across 138 countries or members of the general public in the United States. And CEOs and other senior executives had even more exaggerated perceptions than did junior and middle managers—perhaps because their own lives tend to be far more global than those of their employees and customers.


Thus, some of the discussion in the report emphasizes that most economic activity is domestic, even for multinational firms.

The combined output of all multinational firms outside of their home countries added up to only 9% of global economic output in 2017, and just 2% of all employees around the world worked in the international operations of multinational firms. In part, those statistics reflect the fact that most companies are still domestic. Less than 0.1% of all firms have foreign operations and about 1% export. Small firms are, on average, much less international than large ones, and most companies are small. But even among the Fortune Global 500, the world’s largest firms by revenue, domestic sales still exceed international sales. … The same pattern of limited breadth prevails at the firm level as well. Among the world’s 100 largest corporations ranked by foreign assets, the average firm earns roughly 60% of its revenue in just four countries (home plus three international markets).

Although the buzzword is "globalization," what this typically means in practice is trade with a few neighbors who are geographically close, or who share the same language, or both.

Most countries' international flows are so highly concentrated with key partner countries (usually neighbors) that it hardly makes sense to think of them as global at all. In fact, flows between countries and their single largest partners (e.g. export destinations for trade) make up nearly one-quarter of all merchandise exports and more than one-quarter of all of the other flows … Expanding the same analysis beyond only countries and their single largest partners, more than half of all flows except merchandise exports and inbound students take place between countries and their top three partners, and 75% or more are between countries and their top 10 partners. Even in the case of merchandise trade, more than half takes place between countries and their top five export destinations. Most countries simply do not maintain strong connections to a large number of other countries.

Geographic distance, along with cultural, administrative/political, and economic differences go a long way toward explaining the distributions of countries' flows across locations. For example, if one pair of countries is half as distant as another otherwise similar pair of countries, greater physical proximity alone would be expected to increase the merchandise trade between the closer pair by more than three times and to more than double the stock of foreign direct investment (FDI) between them. And to highlight a cultural commonality, sharing a common official language roughly doubles both trade and foreign direct investment. Thus, despite the widespread perception that advances in transportation and telecommunications technologies are rendering distance irrelevant, international activity continues to be more intense among proximate countries.
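As a quick illustration of how strong that distance effect is (my own arithmetic, assuming a standard gravity-style relationship, flow ∝ distance^(−ε), which the report's phrasing suggests but does not state explicitly):

```python
import math

# If halving distance multiplies a flow by factor F under flow ∝ distance**(-eps),
# then F = 2**eps, so the implied distance elasticity is eps = log2(F).
def implied_elasticity(factor_when_distance_halved):
    return math.log2(factor_when_distance_halved)

trade_eps = implied_elasticity(3)  # "more than three times" for merchandise trade
fdi_eps = implied_elasticity(2)    # "more than double" for FDI stocks
print(f"implied trade distance elasticity: greater than {trade_eps:.2f}")
print(f"implied FDI distance elasticity:   greater than {fdi_eps:.2f}")
```

An elasticity above 1.5 for trade is in line with the large distance penalties typically found in empirical gravity models, which is why "global" flows are in practice mostly regional.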

The chapter also has some intriguing hints about future directions for globalization. One possibility is an expansion of international flows beyond the closest neighbors and countries with a common language, with more small companies becoming involved. Globalization is not close to any theoretical ceiling.

Another set of possibilities is that the international connections of the future will tend to put less emphasis on movements of goods and more emphasis on flows of information. Compared with 2000, flows of goods, finance, and services are all up about 20-25%. But flows of information across international borders (measured by international flows of internet traffic, phone calls, and printed publications) have nearly tripled in that time. The authors write:

On the other hand, trade growth might be slowed by developments that would reduce the attraction of trade motivated by labor cost arbitrage. Automation and 3-D printing could potentially reduce the attraction of offshoring to access low labor costs. And macroeconomic trends imply some narrowing of the scope for such trade as well. One very rough measure of the potential for labor cost arbitrage across countries is the GDP-weighted average of the ratios of countries’ per capita incomes (higher over lower). As large emerging economies (especially China) have become richer, this ratio has already fallen from 8 in 2001 to 5.6 in 2017, and projections from Oxford Economics suggest it will continue falling (more slowly) to about 4 by 2050. While wage arbitrage will continue to motivate trade, exports of labor-intensive products from emerging economies may become a smaller driver of trade growth than in the recent past.
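One way to read that "GDP-weighted average of the ratios of countries' per capita incomes (higher over lower)" is as a weighted average over country pairs. The sketch below is my own interpretation with invented numbers, not the Oxford Economics methodology:

```python
from itertools import combinations

# Hypothetical data: country -> (GDP in $ trillions, per capita income in $ thousands).
countries = {"A": (20, 60), "B": (14, 10), "C": (5, 40), "D": (3, 4)}

num = den = 0.0
for (g1, y1), (g2, y2) in combinations(countries.values(), 2):
    w = g1 * g2                              # weight each pair by joint GDP
    num += w * (max(y1, y2) / min(y1, y2))   # richer country's income over poorer's
    den += w
arbitrage_index = num / den
print(f"GDP-weighted income ratio: {arbitrage_index:.2f}")
```

As poorer but high-GDP countries raise their per capita incomes, the pairwise ratios, and hence this index, fall. That is the mechanism behind the projected decline from 8 in 2001 toward about 4 by 2050 in the quoted passage.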

Flows of international tourists and of international students have been rising. But on a global basis, migration has risen less than many people seem to believe.

On a global basis, migration is on a rising trend, but a very modest one. … The proportion of people living outside of the countries where they were born has risen from 2.8% in 2001 to 3.4% in 2017. Both of those values, however, still round to 3%—the same level that global migration depth has approximated for more than a century! The modest global increase in international migration, however, masks significant increases that have taken place in some countries. In advanced economies, the share of immigrants in the population increased from 9% in 2001 to 13% in 2017. In the United States, 2017 was noteworthy as the year that the proportion of immigrants in the population first surpassed its 1910 peak level.

Thus, the globalization of the future could follow a path where the ties across information, ideas, finance, and people are rising briskly, but the share of actual goods moving across national borders rises much more slowly. Altman, Ghemawat, and Bastian write: 

To summarize, the depth and breadth of trade, capital, information, and people flows—as well as the international business activity of multinational firms—fall far short of levels that are commonly presumed. National borders and the distances and differences between countries still have large dampening effects on international activity. 

A Wrap-Up for the TARP

"In October 2008, the Emergency Economic Stabilization Act of 2008 (Division A of Public Law 110-343) established the Troubled Asset Relief Program (TARP) to enable the Department of the Treasury to promote stability in financial markets through the purchase and guarantee of 'troubled assets.'" The Congressional Budget Office offers a retrospective on how the controversial program turned out in "Report on the Troubled Asset Relief Program—April 2019."

One issue (although certainly not the only one) is whether the bailout was mostly a matter of loans or asset purchases that were later repaid. A bailout that was later repaid might still be objectionable on various grounds, but it would at least seem preferable to a bailout that was not repaid.

As a reminder, TARP authorized the purchase of $700 billion in "troubled assets." The program ended up disbursing $441 billion. Of that amount, $377 billion was repaid, $65 billion has been written off so far, and the remaining $2 billion seems likely to be written off in the future. Here's the breakdown:

Interestingly, the biggest single area of TARP write-offs was not the corporate bailouts, but the $29 billion for "Mortgage Programs." The CBO writes: "The Treasury initially committed a total of $50 billion in TARP funds for programs to help homeowners avoid foreclosure. Subsequent legislation reduced that amount, and CBO anticipates that $31 billion will ultimately be disbursed. About $10 billion of that total was designated for grants to certain state housing finance agencies and for programs of the Federal Housing Administration. Through February 28, 2019, total disbursements of TARP funds for all mortgage programs were roughly $29 billion. Because most of those funds were in the form of direct grants that do not require repayment, the government's cost is generally equal to the full amount disbursed."

The second-largest category of write-offs is the $17 billion for the GM and Chrysler bailouts. Here's my early take in "The GM and Chrysler Bailouts" (May 7, 2012). For a look at the deal from the perspective of two economists working for the Obama administration at the time, see Austan D. Goolsbee and Alan B. Krueger, "A Retrospective Look at Rescuing and Restructuring General Motors and Chrysler," in the Spring 2015 issue of the Journal of Economic Perspectives.

Not far behind is the bailout for the insurance company AIG. It lost $50 billion making large investments and selling insurance on financial assets based on the belief that the price of real estate would not fall. The US Treasury writes: "At the time, AIG was the largest provider of conventional insurance in the world. Millions depended on it for their life savings and it had a huge presence in many critical financial markets, including municipal bonds." A few years back, I wrote about "Revisiting the AIG Bailout" (June 18, 2015), based in substantial part on "AIG in Hindsight," by Robert McDonald and Anna Paulson in the Spring 2015 issue of the Journal of Economic Perspectives.

Finally, the Capital Purchase Program involved buying stock in about 700 different financial institutions. Almost all of that stock has now been resold (about $20 million remains), and the government ended up writing off $5 billion overall.
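Pulling the rounded figures above together (a quick check of my own; because the CBO's numbers are rounded, the components need not sum exactly to the total):

```python
# Rounded TARP figures quoted above, in billions of dollars.
authorized = 700
disbursed = 441
repaid = 377
written_off_so_far = 65
likely_future_write_off = 2

total_losses = written_off_so_far + likely_future_write_off
print(f"share of authorization actually disbursed: {disbursed / authorized:.0%}")
print(f"share of disbursements repaid: {repaid / disbursed:.0%}")
print(f"expected total losses: ${total_losses} billion "
      f"({total_losses / disbursed:.0%} of disbursements)")
```

Roughly 85 cents of every disbursed dollar came back, with the mortgage grants and the auto bailouts accounting for most of the losses.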

Of these programs, I'm probably most skeptical about the auto bailouts, but even there, the risk of disruption from a disorderly bankruptcy, spreading from the companies to their suppliers to the surrounding communities, was disturbingly high. The purchases of stock in financial institutions, which may have been the most controversial step at the time, also came the closest to being repaid in full.

In the end, one's beliefs about whether TARP was a good idea depend on one's view of the US economy in October 2008. My own sense is that the US economy was in extraordinary danger during that month. By all means, let's figure out ways to improve the financial system so that future meltdowns are less likely, as I argued in "Why US Financial Regulators are Unprepared for the Next Financial Crisis" (February 11, 2019). But when someone is in the middle of having a heart attack, it's time for emergency action rather than a lecture on diet and exercise. And when an economy is in the middle of a severe and widespread financial meltdown, one with a real risk of turning out even worse than what actually happened, the government needs to accept that however its regulators failed in the past and need to reform in the future, emergency actions including bailouts may need to be used. I do sometimes wonder if the TARP controversy would have been lessened if more individuals had been held accountable for their actions (or sometimes lack of action) in the lead-up to the crisis.

Did Disability Insurance Just Fix Itself?

Back in 2015, the trust fund for Social Security Disability Insurance was in deep trouble, scheduled to run out of money by 2016. A short-term legislative fix bought a few more years of solvency for the trust fund by moving some of the payroll tax for other Social Security benefits over to the disability trust fund, but the situation continued to look dire. For some of my previous takes on the situation, see this from 2016, this from 2013, or this from 2011. Or see this three-paper symposium on disability insurance from the Spring 2015 Journal of Economic Perspectives, with a discussion of what was going wrong in the US system and discussions of reforms in the Netherlands and the UK.

Well, the 2019 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and the Federal Disability Insurance Trust Funds was recently published. And it now projects that the Disability Insurance trust fund won't run out for 33 years. In that sense, Disability Insurance looks to be in better shape than Medicare or the retirement portion of Social Security. What just happened?

This figure from the trustees shows the "prevalence rate" of disability, that is, the number of those receiving disability benefits per 1,000 workers. The dashed line shows the actual prevalence of disability. The solid line shows the pattern after adjusting for age and sex. You can see the sharp rise in disability prevalence over several decades leading up to about 2014, which was leading to the concerns about the solvency of the system mentioned above. And then you see a drop in the prevalence of disability, in both the actual and the age-sex-adjusted lines.

As Henry Aaron writes: "What happened next has stunned actuaries, economists, and analysts of all stripes. The number of people applying for disability benefits dropped…and kept on dropping. Some decline was expected as the economy recovered from the Great Recession and demand for workers increased. But the actual fall in applications has dwarfed expectations. In addition, the share of applicants approved for benefits has also fallen. … And if the drop in applications persists, current revenues may be adequate to cover currently scheduled benefits indefinitely."


Of course, the disability rate can fall for a number of reasons, some more welcome than others. To the extent that employment growth and a low unemployment rate have offered people a chance to find jobs in the paid workforce, rather than applying for disability, this seems clearly a good thing. But if this shift resulted from tightening the legal requirements for disability, then we might want to look more closely at whether this tightening made sense. The trust fund actuaries don't make judgments on this issue, but here are some bits of evidence.

The Center on Budget and Policy Priorities publishes a "Chart Book: Social Security Disability Insurance," with the most recent version coming out last August. The first figure shows that applications for disability have been falling in recent years. The second figure shows that the number of applications being accepted has also fallen since about 2010 at all three possible stages of the process: initial determination, reconsideration, and hearing.

It's not easy to make a judgment on these patterns. A common concern among researchers studying these issues is that the standards for granting disability, and the resulting number of disabled workers, seem to vary a lot across different locations and decision-makers. For example, the CBPP report offers this figure showing differences across states:

A report from the Congressional Research Service, "Trends in Social Security Disability Insurance Enrollment" (November 30, 2018), describes some potential causes of the lower disability rates.

Since 2010, new awards to disabled workers have decreased every year, dropping from 1 million to 762,100 in 2017. Although there has been no definitive cause identified, four factors may explain some of the decline in disability awards.

  1. Availability of jobs. The unemployment rate was as high as 9.6% in 2010 and then gradually decreased every year to about 4.35% in 2017. …
  2. Aging of lower-birth-rate cohorts. The lower-birth-rate cohorts (people born after 1964) started to enter peak disability-claiming years (usually considered ages 50 to FRA [full retirement age]) in 2015, replacing the larger baby boom population. This transition would likely reduce the size of the insured population who are ages 50 and above, as well as the number of disability applications. …
  3. Availability of the Affordable Care Act (ACA). … The availability of health insurance under the ACA may lower the incentive to use SSDI as a means of access to Medicare, thus reducing the number of disability applications. …
  4. Decline in the allowance rate. The total allowance rate at all adjudicative levels declined from 62% in 2001 to 48% in 2016. While this decline may in part reflect the impact of the Great Recession (since SSDI allowance rates typically fall during an economic downturn), the Social Security Advisory Board Technical Panel suspects that the declining initial allowance rate may be a result of the change in the SSDI adjudication process.

The Social Security actuaries are projecting that the share of Americans getting disability insurance isn't going to change much over time. But given the experience of the last few years, one's confidence in that projection is bound to be shaky.

When Special Interests Play in the Sunlight

There's a common presumption in American politics that special interests are more likely to hold sway during secret negotiations in back rooms, while the broader public interest is more likely to win out in a transparent and open process. People making this case often quote Louis Brandeis, from his 1914 book Other People's Money: "Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants …" This use of the sunlight metaphor wasn't original to Brandeis: for example, James Bryce also used it in his 1888 book The American Commonwealth.

But what if, at least in some important settings, the reverse is true? What if political processes that are more public actually result in greater power for special interests and less ability to find workable compromise solutions? James D'Angelo and Brent Ranall make this case in their essay "The Dark Side of Sunlight: How Transparency Helps Lobbyists and Hurts the Public," which appears in the May/June 2019 issue of Foreign Affairs. Responding to the Brandeis metaphor, they write: "Endless sunshine—without some occasional shade—kills what it is meant to nourish."

The underlying argument goes something like this. Many of us have a reflexive belief that open political processes are more accountable, but we don't often ask "accountable to whom?" It's easy to assume that the accountability is to a broader public interest. But in practice, transparency means that focused special interests can keep tabs on each proposal. If the special interests object, they can whip up a storm of protest. They can also derail attempts at compromise, pushing legislators instead toward holding hard lines and party lines. Greater openness means a greater ability to monitor, to pressure, and to punish possible and perceived deviations.

In US political history, the emphasis on sunlight is a relatively recent development. D'Angelo and Ranall point out:

It used to be that secrecy was seen as essential to good government, especially when it came to crafting legislation. Terrified of outside pressures, the framers of the U.S. Constitution worked in strict privacy, boarding up the windows of Independence Hall and stationing armed sentinels at the door. As Alexander Hamilton later explained, “Had the deliberations been open while going on, the clamors of faction would have prevented any satisfactory result.” James Madison concurred, claiming, “No Constitution would ever have been adopted by the convention if the debates had been public.” The Founding Fathers even wrote opacity into the Constitution, permitting legislators to withhold publication of the parts of proceedings that “may in their Judgment require Secrecy.” …

One of the first acts of the U.S. House of Representatives was to establish the Committee of the Whole, a grouping that encompasses all representatives but operates under less formal rules than the House in full session, with no record kept of individual members’ votes. Much of the House’s most important business, such as debating and amending the legislation that comes out of the various standing committees—Ways and Means, Foreign Affairs, and so on—took place in the Committee of the Whole (and still does). The standing committees, meanwhile, in both the House and the Senate, normally marked up bills behind closed doors, and the most powerful ones did all their business that way. As a result, as the scholar George Kennedy has explained, “Virtually all the meetings at which bills were actually written or voted on were closed to the public.”

For 180 years, secrecy suited legislators well. It gave them the cover they needed to say no to petitioners and shut down wasteful programs, the ambiguity they needed to keep multiple constituencies happy, and the privacy they needed to maintain a working decorum.

But starting in the late 1960s and early 1970s, we have now had a half-century of experimenting with more open processes. How is that working out? When greater transparency in Congress arrived in the 1970s, did that mean special interests had more power or less? There's a simple (if imperfect) test. If special interests had less power, it would not have been worthwhile to invest as much in lobbying, so more transparency should have been followed by a reduction in lobbying. Of course, the reverse is what actually happened: lobbying rose dramatically in the 1970s and has risen further since then. Apparently, greater political openness makes spending on lobbying more worthwhile, not less.

D'Angelo and Ranall argue that a number of the less attractive features of American politics are tied to the push for greater transparency and openness. For example, we now have cameras operating in the House and Senate, which on rare occasions capture actual debate, but are more commonly used as a stage backdrop for politicians recording something for use in their next fundraiser or political ad. When public votes are taken much more often, more votes are also taken just for show, in an attempt to rally one's supporters or to embarrass the other party, rather than for any substantive legislative purpose. Politicians who are always on-stage are likely to display less civility and collegiality and greater polarization, lest they be perceived as insufficiently devoted to their own causes—or even showing the dreaded signs of a willingness to compromise.

As D'Angelo and Ranalli point out, it's interesting to note that when politicians are really serious about something, like gathering in a caucus to choose a party leader, they use a secret ballot. They write: "Just as the introduction of the secret ballot in popular elections in the late nineteenth century put an end to widespread bribery and voter intimidation—gone were the orgies of free beer and sandwiches—it could achieve the same effect in Congress."

It's important to remember that there is a wide array of forums, step-by-step processes, and decisions that feed into any political outcome. Thus, the choice between secrecy and transparency isn't a binary one: one doesn't need to be in favor of total openness or total secrecy in all situations. D'Angelo and Ranalli make a strong case for questioning the reflexive presumption that more transparency in all settings will lead to better political outcomes.

Washing Machine Tariffs: Who Paid? Who Benefits?

When import tariffs are proposed, there's a lot of talk about unfairness and helping workers. But when the tariffs are enacted, the standard pattern is that consumers pay more, profits for the protected firms go up, and jobs are reshuffled from unprotected to protected industries. Back in 1911, satirist Ambrose Bierce defined "tariff" this way in The Devil's Dictionary: "TARIFF, n. A scale of taxes on imports, designed to protect the domestic producer against the greed of his consumer."

Back in late 2017, the Trump administration announced worldwide tariffs on imported washing machines. Aaron Flaaen, Ali Hortaçsu, and Felix Tintelnot look at the results in "The Production, Relocation, and Price Effects of US Trade Policy: The Case of Washing Machines." A readable overview of their work is here; the underlying research paper is here.

One semi-comic aspect of the washing machine tariffs is that the "unfairness" of lower-priced washing machines from abroad has been hopscotching across countries for a few years now. Back in 2011, South Korea and Mexico were the two leading exporters of washing machines to the US. They were accused of selling at unfairly low prices. But even as the anti-dumping investigation was announced, washing machine imports from those countries dropped sharply. China became the new leading exporter of washing machines to the US, essentially taking South Korea's market share.
China was then accused of selling washing machines to US consumers at unfairly low prices. But by the time the anti-dumping investigation against China started and was concluded in mid-2016, China's exports of washing machines to the US had dropped off. Thailand and Vietnam had become the main exporters of washing machines to the US. Tired of chasing imports of low-priced washing machines from country to country, the Trump administration announced worldwide tariffs on washing machines late in 2017, and they took effect early in 2018.
If a single country is selling imports at an unfairly low price, one can at least argue that this practice is unfair. But when a series of countries are all willing to sell at a lower price, it strongly suggests that the issue is not a country-hopping kind of unfairness, leaping from one country to another, but instead reflects the reality that there are a lot of places around the world where the washing machines Americans want to buy can be produced cheaply.
The results of the washing machine tariffs were as expected. Prices to consumers rose. As the authors write: 

Following the late-2017 announcement of tariffs on all washers imported to the United States, prices increased by about 12 percent in the first half of 2018 compared to a control group of other appliances. In addition, prices for dryers—often purchased in tandem with washing machines—also rose by about 12 percent, even though dryers were not subject to a tariff. … [T]hese price increases were unsurprising given the tariff announcement. … One revealing finding in this work is the tight price relationship between washers and dryers, even when washers, for example, are the product that is subject to tariffs. Among the five leading manufacturers of washers, roughly three-quarters of models have matching dryers. When the authors compare only electric washers and dryers, they show that in about 85 percent of the matching sets, the washers and dryers have the same price.

Yes, the makers of washing machines with factories in the US, like Whirlpool, Samsung, and LG, announced plans to hire a few thousand more workers in response to the tariffs. But a substantial share of the higher prices paid by consumers just went into higher corporate profits for these companies. For example, the most recent reports from Whirlpool suggest an overall pattern of selling fewer machines, with higher profit margins per machine.
When Flaaen, Hortaçsu, and Tintelnot estimate the total of higher prices paid by consumers, and divide by an estimate of the number of jobs saved or created in the washing machine industry, they write: "The increases in consumer prices described above translate into a total consumer cost of $1.5 billion per year, or about $820,000 per new job."
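The arithmetic behind the cost-per-job figure is worth checking directly. A quick sketch, noting that the implied job count below is an inference from the two quoted figures, not a number reported in the excerpt:

```python
# Back-of-the-envelope check of the cost-per-job figure quoted above.
# The implied job count is derived from the two quoted numbers; it is an
# inference, not a statistic reported in the excerpt.
total_consumer_cost = 1.5e9  # dollars per year, from the quote
cost_per_job = 820_000       # dollars per new job, from the quote

implied_jobs = total_consumer_cost / cost_per_job
print(f"Implied number of jobs saved or created: {implied_jobs:,.0f}")  # roughly 1,800
```

In other words, the entire national policy protected on the order of a couple thousand jobs.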

You can tell all kinds of stories about tariffs, full of words like "unfairness" and "toughness" and "saving jobs." But the economic effects of the tariffs aren't determined by a desired story line. As Nikita Khrushchev is reputed to have said: "[E]conomics is a subject that does not greatly respect one's wishes." The reality of the washing machine tariffs is that consumers pay more for a basic consumer appliance and company profits go up. Some workers at the protected companies do benefit, but at a high price per job. The higher prices for washing machines mean less spending on other goods, which, together with tit-for-tat retaliation by other countries for the tariffs, leads to a loss of jobs elsewhere in the US economy. This story plays out over and over: for example, here's a story from the steel tariffs earlier in the Trump administration or the tire tariffs in the Obama administration. It is not a strategy for US economic prosperity.

Financial Managers and Misconduct

"Financial advisers in the United States manage over $30 trillion in investible assets, and plan the financial futures of roughly half of U.S. households. At the same time, trust in the financial sector remains near all-time lows. The 2018 Edelman Trust Barometer ranks financial services as the least trusted sector by consumers, finding that only 54 percent of consumers 'trust the financial services sector to do what is right.'"

Mark Egan, Gregor Matvos, and Amit Seru went looking for actual data on misconduct by financial managers. Matvos and Seru provide an overview of their research in "The Labor Market for Financial Misconduct" (NBER Reporter 2019:1). For the details, see M. Egan, G. Matvos, and A. Seru, "The Market for Financial Adviser Misconduct," NBER Working Paper No. 22050, February 2016.

The authors figured out that the Financial Industry Regulatory Authority keeps a record of the complete employment history of the 1.2 million people registered as financial advisers from 2005 to 2015 at its BrokerCheck website. This history includes employers, job tasks, and roles. In addition, "FINRA requires financial advisers to formally disclose all customer complaints, disciplinary events, and financial matters, which we use to construct a measure of misconduct." They write:

Roughly one in 10 financial advisers who work with clients on a regular basis have a past record of misconduct. Common misconduct allegations include misrepresentation, unauthorized trading, and outright fraud— all events that could be interpreted as a conscious decision of the adviser. Adviser misconduct results in substantial costs: In our sample, the median settlement paid to consumers is $40,000, and the mean is $550,000. These settlements cost the financial industry almost half a billion dollars per year.
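As a rough consistency check on the quoted settlement figures, dividing the industry's total annual settlement cost by the mean settlement gives an implied count of settlements per year. This count is an inference from the excerpt's numbers, not a statistic the authors report:

```python
# Rough consistency check on the quoted settlement figures. The implied
# annual count of settlements is an inference from the excerpt's numbers,
# not a statistic reported by the authors.
total_annual_cost = 0.5e9  # "almost half a billion dollars per year"
mean_settlement = 550_000  # mean settlement from the quote, in dollars

implied_settlements = total_annual_cost / mean_settlement
print(f"Implied settlements per year: about {implied_settlements:,.0f}")
```

The large gap between the median ($40,000) and mean ($550,000) settlement also signals that a relatively small number of very costly cases drive much of the total.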

Looking at this data generates some provocative insights.  

When it comes to financial matters, there will of course inevitably be cases of frustrated expectations and hard feelings. But are the examples of misconduct spread out fairly evenly across all financial advisers, or is it the case that some advisers are responsible for most of the cases? They write: 
"A substantial number of financial advisers are repeat offenders. Past offenders are five times more likely to engage in misconduct than otherwise comparable advisers in the same firm, at the same location, at the same point in time."

A common pattern is that a financial adviser is fired for misconduct at one firm, but then hired at another. They write: "Although firms are strict in disciplining misconduct, the industry as a whole undoes some of the discipline by recycling advisers with past records of misconduct. Roughly half of advisers who lose a job after engaging in misconduct find new employment in the industry within a year. In total, roughly 75 percent of those advisers who engage in misconduct remain active and employed in the industry the following year."

Just to give this skeeviness an extra twist, there appears to be gender bias against women in rehiring those involved in misconduct:

[W]e also find evidence of a "gender punishment gap." Following an incident of misconduct, female advisers are 9 percentage points more likely to lose their jobs than their male counterparts. … After engaging in misconduct, 54 percent of male advisers retain their jobs the following year while only 45 percent of female advisers retain their jobs, despite no differences in turnover rates for male and female advisers without misconduct records (19 percent). … While half of male advisers find new employment after losing their jobs following misconduct, only a third of female advisers find new employment. … Because of the incredible richness of our regulatory data, we are able to compare the career outcomes of male and female advisers who are working at the same firm, in the same location, at the same point in time, and in the same job role. Differences in production, or the nature of misconduct, do not explain the gap. If anything, misconduct by female advisers is on average substantially less costly for firms. The gender punishment gap increases in firms with a larger share of male managers at the firm and branch levels.

This process of firing and reshuffling leads to an outcome in which financial advisers involved in misconduct but then rehired tend to be clustered at certain firms. "We find large differences in misconduct across some of the largest and best-known financial advisory firms in the U.S. Figure 1 displays the top 10 firms with the highest share of advisers that have a record of misconduct. Almost one in five financial advisers at Oppenheimer & Co. had a record of misconduct. Conversely, at USAA Financial Advisors, the ratio was less than one in 36."

[Figure 1: the 10 firms with the highest share of advisers with a record of misconduct]
As the authors point out, with academic understatement, this seems like a market that relies for its current method of operation on "unsophisticated consumers." But FINRA's BrokerCheck website is open to the public. It just seems to need wider use.

Two Can Live 1.414 Times as Cheaply as One: Household Equivalence Scales

A "household equivalence" scale offers an answer to this question: How much more income does a household with more people need so that it has the same standard of living as a household with fewer people? The question may seem a little abstract, but it has immediate applications.

For example, the income level that officially defines the poverty line was $13,064 for a single-person household under age 65 in 2018. For a two-person household, the poverty line is $16,815 if they are both adults, but $17,308 if it\’s one adult and one child. For a single parent with two children the poverty line is $20,331; for a single parent with three children, it\’s $25,554. The extent to which the poverty line rises with the number of people, and with whether the people are adults or children or elderly, is a kind of equivalence scale.
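The equivalence scale implicit in these poverty lines can be read off by dividing each line by the single-person line. A minimal sketch, using only the 2018 figures quoted above:

```python
# Implied equivalence ratios from the 2018 US poverty lines quoted above,
# relative to a single-person household under age 65.
poverty_lines = {
    "1 adult": 13_064,
    "2 adults": 16_815,
    "1 adult, 1 child": 17_308,
    "1 adult, 2 children": 20_331,
    "1 adult, 3 children": 25_554,
}

base = poverty_lines["1 adult"]
for household, line in poverty_lines.items():
    print(f"{household}: {line / base:.2f}")  # e.g. "2 adults" comes out at about 1.29
```

So by this implicit scale, a second adult is assumed to require only about 29% more income, well below the 41.4% a square-root scale would imply.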

Similarly, when benefits for the poor and near-poor are adjusted by family size, such adjustments are based on a kind of equivalence scale.

And bigger picture, if you are comparing average household income between the early 1960s and the present, it presumably matters that the average household had 3.3 people in the early 1960s but closer to 2.5 people today (see Table HH-4).

However, it often turns out that poverty lines and poverty programs are set up with their own logic of what seems right at the time, but without necessarily using a common household equivalence scale. Richard V. Reeves and Christopher Pulliam offer a nice quick overview of the topic in "Tipping the balance: Why equivalence scales matter more than you think" (Brookings Institution, April 17, 2019). Here's a sample of some common household equivalence scales:

Reeves and Pulliam point out that if you look at the 2017 Tax Cuts and Jobs Act, and how it affects "households" at different income levels, your answer will look different depending on whether you apply a household equivalence scale. For example, think about ranking households by income using an equivalence scale. If there are two households with the same income, but one household has more children, then the household with more children will be treated as having a lower standard of living (or to put it another way, a household with more children would need more income to have the same standard of living). Because the 2017 tax law provides substantial benefits to families with children, how those benefits appear to be distributed across income levels depends in part on the equivalence scale used.

But setting aside that particular issue, it's a topic that matters to people planning a household budget, if they think about whether living together will save money, or how much more money they will need if they have children. The commonly used "square root" approach, for example, suggests that a household of two people will need 1.414 times the income of a single-person household for an equivalent standard of living, a household of three people will need 1.732 times that income, and so on. Pick another household equivalence scale, if you prefer, but then plan accordingly.
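The square-root rule described above is simple enough to compute directly. A minimal sketch:

```python
import math

# The "square root" equivalence scale: a household of n people is assumed
# to need sqrt(n) times the income of a one-person household to reach the
# same standard of living.
def equivalent_income(single_person_income: float, household_size: int) -> float:
    """Income an n-person household needs to match a single-person income."""
    return single_person_income * math.sqrt(household_size)

for n in (1, 2, 3, 4):
    print(f"{n} person(s): {equivalent_income(1.0, n):.3f}x")
# 2 -> 1.414x, 3 -> 1.732x, 4 -> 2.000x
```

Note the economies of scale built into the rule: a four-person household is assumed to need only twice, not four times, the income of a single person.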

The Statute of Limitations on College Grades

As we move toward the end of the academic year at many colleges and universities, it is perhaps useful for students to be reminded that the final grade in any given course is quite unlikely to have a long-lasting influence on their lives. Here are some thoughts from the syndicated newspaper column written by Calvin Trillin, which I would sometimes read to my introductory economics students on the last day of class. (I don't have an online link, but my notes tell me that I originally read the column in the San Jose Mercury News, May 26, 1990, p. 9C.) Trillin wrote:

What I'm saying is that when it comes to college grades, there is a sort of statute of limitations.

It kicks in pretty early. By the time you're 28, nobody much cares whether or not you had to take Bonehead English or how you did in it. If you happen to be a pompous 28, wearing a Phi Beta Kappa key on your watch chain will make you seem more pompous. (If we're being absolutely truthful, the watch chain itself is a mistake.) Go ahead and tell that attractive young woman at the bar that you graduated magna cum laude. To employ a phrase I once heard from a country comedian in central Florida, 'She just flat out do not care.' The statute of limitation has already run out.

This information about the statute of limitation ought to be comforting to the college seniors who are now limping toward commencement. In fact, it may be that commencement speakers — some of whom, I regret to say, tend to wear Phi Beta Kappa keys on their watch chains — ought to be telling the assembled degree recipients. Instead of saying 'life is a challenge' or 'commencement means beginning,' maybe a truly considerate commencement speaker would say, 'Take this degree, and, as to the question of how close you came to not getting it, your secret is safe with us.'

It's probably a good thing that after a while nobody cares. On the occasion of my college class' 25th reunion, I did a survey, using my usual scientific controls, and came to the conclusion that we had finally reached the point at which income seemed to be precisely in inverse proportion to academic standing in the class. I think this is the sort of information that students cramming desperately for the history final are better off not having.

One Case for Keeping "Statistical Significance": It Beats the Alternatives

I wrote a few weeks back that the American Statistical Association has published a special issue of its journal, the American Statistician, with a lead article proposing the abolition of "statistical significance" ("Time to Abolish 'Statistical Significance'"?). John Ioannidis has estimated that 90% of medical research is statistically flawed, so one might expect him to be among the harsher critics of statistical significance. But in the Journal of the American Medical Association, he goes the other way in "The Importance of Predefined Rules and Prespecified Statistical Analyses: Do Not Abandon Significance" (April 4, 2019). Here are a few of his themes:

The result of statistical research is often a yes-or-no outcome. Should a medical treatment be approved or not? Should a certain program or policy be expanded or cut? Should one potential effect be studied more, or should it be ruled out as a cause? Thus, while it's fine for researchers to emphasize that all results come with a degree of uncertainty, at some point it's necessary to decide how both research and applications of that research in the real world should proceed. Ioannidis writes:

Changing the approach to defining statistical and clinical significance has some merits; for example, embracing uncertainty, avoiding hyped claims with weak statistical support, and recognizing that “statistical significance” is often poorly understood. However, technical matters of abandoning statistical methods may require further thought and debate. Behind the so-called war on significance lie fundamental issues about the conduct and interpretation of research that extend beyond (mis)interpretation of statistical significance. These issues include what effect sizes should be of interest, how to replicate or refute research findings, and how to decide and act based on evidence. Inferences are unavoidably dichotomous—yes or no—in many scientific fields ranging from particle physics to agnostic omics analyses (ie, massive testing of millions of biological features without any a priori preference that one feature is likely to be more important than others) and to medicine. Dichotomous decisions are the rule in medicine and public health interventions. An intervention, such as a new drug, will either be licensed or not and will either be used or not.

Yes, statistical significance has a number of problems. It would be foolish to rely on it exclusively. But what will be used instead? And will it be better or worse as a way of making such decisions? No method of making such decisions is proof against bias. Ioannidis writes: 

Many fields of investigation (ranging from bench studies and animal experiments to observational population studies and even clinical trials) have major gaps in the ways they conduct, analyze, and report studies and lack protection from bias. Instead of trying to fix what is lacking and set better and clearer rules, one reaction is to overturn the tables and abolish any gatekeeping rules (such as removing the term statistical significance). However, potential for falsification is a prerequisite for science. Fields that obstinately resist refutation can hide behind the abolition of statistical significance but risk becoming self-ostracized from the remit of science. Significance (not just statistical) is essential both for science and for science-based action, and some filtering process is useful to avoid drowning in noise.

Ioannidis argues that the removal of statistical significance will tend to make things harder to rule out, because those who wish to believe something is true will find it easier to make that argument. Or more precisely: 

Some skeptics maintain that there are few actionable effects and remain reluctant to endorse belabored policies and useless (or even harmful) interventions without very strong evidence. Conversely, some enthusiasts express concern about inaction, advocate for more policy, or think that new medications are not licensed quickly enough. Some scientists may be skeptical about some research questions and enthusiastic about others. The suggestion to abandon statistical significance espouses the perspective of enthusiasts: it raises concerns about unwarranted statements of “no difference” and unwarranted claims of refutation but does not address unwarranted claims of “difference” and unwarranted denial of refutation.

The case for not treating statistical significance as the primary goal of an analysis seems to me ironclad. The case is strong for putting less emphasis on statistical significance and correspondingly more emphasis on issues like what data is used, the accuracy of data measurement, how the measurement corresponds to theory, the potential importance of a result, what factors may be confounding the analysis, and others. But the case for eliminating statistical significance from the language of research altogether, with the possibility that it will be replaced by an even squishier and more subjective decision process, is a harder one to make.
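To make the dichotomous decision Ioannidis describes concrete, here is a minimal sketch using entirely hypothetical data. The threshold of |t| > 2.0 is the familiar rough stand-in for the 5 percent significance level; everything else is illustrative, not a recommendation for how such decisions should actually be made:

```python
import random
import statistics

# Minimal sketch (hypothetical data) of a dichotomous significance decision:
# a continuous test statistic reduced to a yes/no call at a fixed threshold.
random.seed(0)
control = [random.gauss(0.0, 1.0) for _ in range(50)]
treated = [random.gauss(0.5, 1.0) for _ in range(50)]

# Welch-style t statistic computed by hand to keep the sketch free of
# outside dependencies.
mean_c, mean_t = statistics.mean(control), statistics.mean(treated)
var_c, var_t = statistics.variance(control), statistics.variance(treated)
se = (var_c / len(control) + var_t / len(treated)) ** 0.5
t_stat = (mean_t - mean_c) / se

# With roughly 98 degrees of freedom, |t| > 2.0 corresponds approximately
# to p < 0.05, the conventional cutoff.
decision = "approve" if abs(t_stat) > 2.0 else "do not approve"
print(f"t = {t_stat:.2f}, decision: {decision}")
```

Whatever one's view of the threshold itself, the final line is the point: a regulator, journal, or physician ultimately acts on the binary `decision`, not on the continuous `t_stat`.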