Has Ben Bernanke Been Consistent?

Back in the late 1990s and early 2000s, Ben Bernanke sharply criticized the Bank of Japan. He argued that even though the BoJ had cut its target interest rate to near-zero, it could and should do much more to end deflation and to stimulate Japan's economy. In the last few months, a number of critics have accused Bernanke of inconsistency: that is, the Ben Bernanke who has been leading the Fed in the aftermath of the Great Recession is not following the advice of the Ben Bernanke who was criticizing the Bank of Japan back in 2000-2003. To me, this criticism seems like pretty thin gruel: indeed, I think the accusation of inconsistency is basically an attention-getting cover for a more mundane policy disagreement over whether the Fed should immediately start another round of quantitative easing. Here, I'll lay out the arguments as I see them.

Bernanke's Earlier Criticisms

Bernanke first became a member of the Fed's Board of Governors in 2002. Back in 2000, while still a professor at Princeton, he published an essay called "Japan's Slump: A Case of Self-Induced Paralysis?" In the essay, Bernanke sharply criticizes the Bank of Japan for taking the position that since it had lowered its target interest rate to near-zero, there was nothing more it could do. In contrast, Bernanke argued that a central bank had a number of other options when confronted with deflation: a long-term commitment to a near-zero interest rate; setting an inflation target of 3-4 percent per year; intervention in exchange-rate markets to depreciate the yen; using money creation to finance deficit spending; and central bank purchases of long-term government bonds, corporate bonds, and other financial securities.

Bernanke made similar arguments after joining the Fed. For examples, see his November 21, 2002 talk to the National Economists Club in Washington, D.C. called "Deflation: Making Sure 'It' Doesn't Happen Here" or his May 31, 2003 talk to the Japan Society of Monetary Economics in Tokyo called "Some Thoughts on Monetary Policy in Japan." These talks do differ a bit in emphasis. For example, after joining the Fed, Bernanke was careful to say that he was in no way contemplating interventions in exchange rate markets. But the general message remained the same: namely, that a central bank has a lot of tools available even when it has cut its target interest rate to near-zero, and that the Bank of Japan should make more aggressive use of these tools.

 
Accusing Bernanke of Inconsistency

One of the most prominent voices accusing Bernanke of inconsistency is Paul Krugman, who wrote an essay in the New York Times Magazine on April 24, 2012, called "Earth to Ben Bernanke: Chairman Bernanke Should Listen to Professor Bernanke." Here's Krugman:

"Bernanke was and is a fine economist. More than that, before joining the Fed, he wrote extensively, in academic studies of both the Great Depression and modern Japan, about the exact problems he would confront at the end of 2008. He argued forcefully for an aggressive response, castigating the Bank of Japan, the Fed's counterpart, for its passivity. Presumably, the Fed under his leadership would be different.

"Instead, while the Fed went to great lengths to rescue the financial system, it has done far less to rescue workers. The U.S. economy remains deeply depressed, with long-term unemployment in particular still disastrously high, a point Bernanke himself has recently emphasized. Yet the Fed isn't taking strong action to rectify the situation.

"The Bernanke Conundrum — the divergence between what Professor Bernanke advocated and what Chairman Bernanke has actually done — can be reconciled in a few possible ways. Maybe Professor Bernanke was wrong, and there's nothing more a policy maker in this situation can do. Maybe politics are the impediment, and Chairman Bernanke has been forced to hide his inner professor. Or maybe the onetime academic has been assimilated by the Fed Borg and turned into a conventional central banker. Whichever account you prefer, however, the fact is that the Fed isn't doing the job many economists expected it to do, and a result is mass suffering for American workers."

That's a heavy accusation, and Krugman isn't alone in making it. In a working paper for the National Bureau of Economic Research, Lawrence Ball argues that Bernanke has been inconsistent, and collects some other examples ("Ben Bernanke and the Zero Bound," WP #17836, February 2012).
For example, after discussing some of Bernanke's earlier writings, Christina Romer (formerly head of the Council of Economic Advisers at the start of the Obama administration) reportedly said: "My reaction to it was, 'I wish Ben would read this again.'" Joseph Gagnon (a former Fed economist now at the Peterson Institute) uses the title of Bernanke's criticism back in 2000 of the Bank of Japan to criticize Fed policy: "It's really ironic. It's a self-induced paralysis."

Bernanke Pushes Back


Bernanke pushed back against the criticism of inconsistency at his press conference of April 25, 2012. In particular, he was asked: "[S]pecifically could you address whether your current views are inconsistent with the views on that subject that you held as an academic?" Bernanke answered:

"So there's this view circulating that the views I expressed about 15 years ago on the Bank of Japan are somehow inconsistent with our current policies. That is absolutely incorrect. Our–my views and our policies today are completely consistent with the views that I held at that time. I made two points at that time to the Bank of Japan. The first was that I believe that a determined central bank could and should work to eliminate deflation, that is falling prices. The second point that I made was that when short-term interest rates hit zero, the tools of a central bank are no longer–are not exhausted, there are still other things that the central bank can do to create additional accommodation. Now looking at the current situation in United States, we are not in deflation. When deflation became a significant risk in late 2010 or at least a modest risk in late 2010, we used additional balance sheet tools to help return inflation close to the 2 percent target. Likewise, we have been aggressive and creative in using nonfederal funds rate centered tools to achieve additional accommodation for the U.S. economy. So the very critical difference between the Japanese situation 15 years ago and the U.S. situation today is that Japan was in deflation and clearly when you're in deflation and in recession, then both sides of your mandates, so to speak, are demanding additional accommodation. In this case is we are not in deflation, we have an inflation rate that's close to our objective. Now, why don't we do more? Well, first I would again reiterate that we are doing great deal, policy is extraordinarily accommodative, we–and I won't go through the list again, but you would–you know all the things that we have done to try to provide support to the economy. I guess the question is, does it make sense to actively seek a higher inflation rate in order to achieve a slightly increased reduction–a slightly increased pace of reduction in the unemployment rate?
The view of the Committee is that that would be very reckless. We have–we, the Federal Reserve, have spent 30 years building up credibility for low and stable inflation which has proved extremely valuable in that we've been be able to take strong accommodative actions in the last 4 or 5 years to support the economy without leading to a unanchoring of inflation expectations or a destabilization of inflation. To risk that asset for what I think would be quite tentative and perhaps doubtful gains on the real side would be, I think, an unwise thing to do."

Speaking as someone without any dog in this fight: how well-founded is the criticism that Bernanke has been inconsistent? Two broad points are worth considering here. How much does Japan's situation in the late 1990s differ from the U.S. situation in the Great Recession and its aftermath? And how has the Fed's reaction differed from the Bank of Japan's?

Remembering Japan's Asset Bubble and the Policy Reaction

Japan's Nikkei 225 stock market index rose almost seven-fold from 6,000 in 1980 to a peak of almost 40,000 at the end of 1989. It then dropped by nearly half in 1990, and had slid back to 8,000 by 2003. Similarly, average land prices for Japan doubled from 1980 to 1991, and then fell back to 1980 levels by about 2004. Banks suffered huge paper losses, but were not forced to reorganize: instead, they took the low interest rates from the Bank of Japan and continued offering loans to underwater "zombie" firms. Moreover, Japan had begun by about 1998 to experience deflation, so even near-zero nominal interest rates were actually positive real interest rates.

In comparison, the U.S. Dow Jones Average rose by about 50% from 2003 to 2007, then fell back to below 2003 levels in early 2009, but now has recovered back to near the 2007 peak. The U.S. housing price bubble has been real and painful, but it wasn't a doubling and then a halving for the average of all housing prices nationwide. The U.S. had very low inflation for a time, but it hasn't actually dipped into deflation. U.S. banks have been recapitalized, by hook and by crook, and forced to undergo stress tests.

In short, an implicit argument that an impartial policymaker should react in exactly the same way to Japan circa 2000 and to the U.S. economy circa 2012 is off the mark, because the situations are substantially different.

Central Bank Policy Reactions

Bernanke's criticism back in 2000 was in response to the fact that in the face of this situation, Japan's central bank had done almost nothing but reduce interest rates for an entire decade. In contrast, the Fed under Bernanke reacted much more quickly.

  1. The Fed started reducing the federal funds interest rate in 2007, and took it down to near-zero in late 2008. It took the Bank of Japan about four years after the crash to move its target interest rate down to near-zero; the Fed made this change in about 18 months. 
  2. The Fed set up a number of temporary facilities to give short-term loans to all sorts of players in the financial industry starting in late 2007, including the Term Auction Facility (TAF), Term Securities Lending Facility (TSLF), Primary Dealer Credit Facility (PDCF), Commercial Paper Funding Facility (CPFF), Term Asset-Backed Securities Loan Facility (TALF), and others. All of these facilities were about making short-term loans to get through the crisis, and they were all closed by mid-2010.
  3. The Fed carried out "quantitative easing" through the direct purchase of U.S. Treasury debt–essentially printing money to finance $1 trillion or so of federal borrowing.
  4. The Fed also carried out "quantitative easing" through the direct purchase of $1 trillion or so of mortgage-backed securities–essentially printing money to provide finance in this sector.
  5. The Fed offered forward guidance about its plans, announcing that it would keep the target federal funds interest rate near zero through 2014.
  6. The Fed now has the power to pay interest to banks on the reserves they are required to hold with the Fed, giving the Fed another monetary policy tool.
  7. The Fed has also changed its policies so that it doesn't just buy short-term Treasury debt, but also buys long-term securities.

My point here is not to argue over whether each of these policies is effective or useful or appropriate. Instead, I'm listing the policies to emphasize that the Ben Bernanke who wrote back in 2000 has also deployed an unprecedented array of monetary policy tools. Back in 2007, it was reasonable to teach in an intro econ class that the Federal Reserve took action by affecting the federal funds interest rate through open market operations. By 2010, just three years later, the policy menu of the Fed had been transformed. The critics cannot plausibly accuse Bernanke or the Federal Reserve of following the Bank of Japan in the 1990s, cutting interest rates and then sitting on their hands. Instead, to the extent that their claim of inconsistency has any merit, the claim must be that Bernanke should lead the Fed toward even more aggressive use of these non-interest rate tools. But just what non-interest rate tools should be used?

Back in his 2000 essay, Bernanke argued that one policy alternative for Japan was to intervene in foreign exchange markets to drive down the value of the yen. However, Bernanke's critics, when accusing him of inconsistency, often don't mention the policy alternative of driving down the U.S. dollar exchange rate. For example, Krugman doesn't mention this choice in his New York Times Magazine article.

Bernanke also recommended back in 2000 that the Bank of Japan announce an inflation target of 3-4 percent, but the Fed has not announced such a target for the U.S. economy. Part of the reason for this may be a matter of legality: the Fed has never announced an official target for the inflation rate, but instead has discussed its legal mandate to deliver "price stability." However, by announcing that the federal funds interest rate will stay near-zero into 2014, the Federal Reserve has in effect promised that even if the economy recovers, it will allow inflation to rise rather than raising interest rates. Of course, the Fed could renege on this promise: but it could renege on an inflation target, too. My guess is that the Fed believes that it has made it plenty clear to anyone with eyes to see that as long as economic growth remains so sluggish, it would not clamp down on a low level of inflation.

The remaining dispute is over whether Bernanke should push the Fed to do more of what it has already done; in particular, should the Fed print money to purchase another trillion or two (or three or four) in Treasury debt or private-sector financial securities? One can disagree back-and-forth about the usefulness and risks of this policy choice in good faith. I happen to agree with Bernanke and the Board of Governors that the time isn't ripe for such a step just now. But even if one thinks that the Federal Reserve should be even more aggressive with quantitative easing just now, there is no serious inconsistency between Bernanke's 2000 essay about Japan and deciding not to double down on quantitative easing in the U.S. economy back in 2011 or right now.

There is considerable evidence that recovering from a deep financial crisis takes several years, as households and firms and financial institutions shed debt and rebalance their financial situations. The assertion that this recovery process would have been dramatically shorter if only the Fed had financed an additional few trillion dollars in quantitative easing is a highly controversial claim–a hopeful theoretical prediction not based on any historical examples. Bernanke's writings from 2000 to 2003 argue that a central bank confronted with a financial crisis should make aggressive use of unorthodox non-interest rate policies to avoid deflation, and the Fed under his leadership has done so. But Bernanke's earlier writings do not suggest that such non-interest rate policies are a cure-all for what ails an economy and for getting the unemployed back to work in the aftermath of a financial crisis. The earlier writings do not suggest that these non-interest rate policies be pursued without limit even after an economy has started growing again, and without regard for balancing their potential gains and losses.

How Many "Discouraged" Workers?

The good folks at the U.S. Bureau of Labor Statistics divide the adult population into three groups: the employed, the unemployed, and those out of the labor force. When an employed person stops working, if they are looking for work, they are counted as unemployed; if they aren't looking for work, they are counted as out of the labor force. This distinction makes conceptual sense: it would be peculiar to treat a 75-year-old retiree not looking for a job, or a stay-at-home spouse, as "unemployed." But there are obvious practical difficulties with these distinctions as well. For example, "discouraged" workers who would like a job, but who have given up looking, will be counted as out of the labor force, although their situation looks more like unemployment.

So how many discouraged workers are there? Actually, the same survey that is used to count the unemployed can also be used to count "discouraged workers." In fact, the BLS counts "discouraged" workers as one of two parts of an overall group of people who are "marginally attached to the labor force."

Within the overall category of "marginally attached," the first subcategory of "discouraged" workers "[i]ncludes those who did not actively look for work in the prior 4 weeks for reasons such as thinks no work available, could not find work, lacks schooling or training, employer thinks too young or old, and other types of discrimination." The other subcategory of "other persons marginally attached to the labor force … [i]ncludes those who did not actively look for work in the prior 4 weeks for such reasons as school or family responsibilities, ill health, and transportation problems, as well as a number for whom reason for nonparticipation was not determined."

Here's a graph of the data from the BLS, which I made using the ever-helpful FRED tool from the Federal Reserve Bank of St. Louis. The bottom line shows that the number of discouraged workers fluctuated around 400,000 from 1994 up to about 2008–a little higher in the aftermath of the 1990-91 and 2001 recessions, a little lower at other times. But in the aftermath of the Great Recession, the number of discouraged workers spiked above 1.2 million, before falling back to under 1 million more recently.

The top line shows the broader category of "marginally attached" workers, which includes both those classified as "discouraged" and those who are marginally attached for other reasons. From 1994 it fluctuates around 1.5 million: higher in the aftermath of the 1990-91 and 2001 recessions, lower at other times. But since 2008, it spiked up to 2.8 million, before dropping back to around 2.4 million more recently.

For comparison, the number of unemployed people (and remember, "discouraged" doesn't count as unemployed) was 12.7 million in March 2012. There were 865,000 "discouraged" workers in March 2012, and 2,352,000 in the total "marginally attached" category. Thus, if one wanted a broader picture of the labor market including both those officially classified as unemployed along with the discouraged, the total number of people in these two categories would have been about 7% higher than official unemployment alone, at 13,565,000. If one wanted a broader picture of the labor market including both the unemployed and all of the "marginally attached," the total would have been 18% higher than official unemployment alone, at 15,052,000.
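As a quick arithmetic check, the March 2012 figures above can be combined directly. This is just the addition from the paragraph above, not an official BLS computation:

```python
# Broader labor-market measures built from the March 2012 figures cited above.
unemployed = 12_700_000
discouraged = 865_000
marginally_attached = 2_352_000  # includes the 865,000 discouraged workers

# Official unemployment plus discouraged workers
total_with_discouraged = unemployed + discouraged
# Official unemployment plus everyone marginally attached
total_with_marginal = unemployed + marginally_attached

print(f"{total_with_discouraged:,}")  # 13,565,000
print(f"{total_with_marginal:,}")     # 15,052,000
print(f"{100 * discouraged / unemployed:.1f}% above official unemployment")          # 6.8%
print(f"{100 * marginally_attached / unemployed:.1f}% above official unemployment")  # 18.5%
```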

It doesn't seem right to treat the officially unemployed, who are actively looking for work, as in the same situation as the "marginally attached." But the high numbers of discouraged and marginally attached workers do demonstrate that the unemployment rate captures only one aspect of the weakness in U.S. labor markets.

Tire Tariffs: Saving Jobs at $900,000 Apiece

In September 2009, President Obama approved a special tariff on imports of tires from China. In his 2012 State of the Union address, he stated that the policy had saved "over a thousand" jobs. Gary Clyde Hufbauer and Sean Lowry look at what happened in "US Tire Tariffs: Saving Few Jobs at High Cost," written as an April 2012 "Policy Brief" for the Peterson Institute for International Economics.

The basic economic lessons here are the same as ever. There's never been any question that imposing tariffs on foreign competition can dissuade imports, and thus allow U.S. manufacturers to keep production and prices higher than they would otherwise be. As a result, U.S. consumers pay more, the firms make higher profits–and workers for those firms get some crumbs from the table. In this case, Hufbauer and Lowry estimate that consumers paid $1.1 billion in higher prices for tires in 2011. This saved a maximum of 1,200 jobs, so the average cost of the tariff was about $900,000 per job saved. But of course, the worker didn't receive that $900,000; instead, most of it went to the tire companies. And in an especially odd twist, most of it contributed to profits earned by non-U.S. producers.

The story starts before September 2009, when U.S. tariffs on tire imports were in the range of 3.4-4.0%. "Starting on September 26, 2009, Chinese tires were subjected to an additional 35 percent ad valorem tariff duty in the first year, 30 percent ad valorem in the second year, and 25 percent ad valorem in the third year." The higher tariffs did reduce tire imports from China. For example, "radial car tires imported from China fell from a high of approximately 13.0 million tires in 2009Q3 to 5.6 million tires during 2009Q4—a 67 percent decrease."

Employment in the U.S. tire industry rose from 50,800 in September 2009 to 52,000 by September 2011, which is the basis for a rough estimate that 1,200 jobs were saved by the tariffs. (Of course, one could argue that jobs would have declined without the tariffs, so more than 1,200 jobs were saved; or one could argue that some of the job increase came from other forces, so fewer than 1,200 jobs were saved by the tariffs.) The average salary of a tire builder was $40,070 in 2011. So multiplying this income by 1,200 jobs, the total additional income received by tire workers would be $48 million.

Data from the Consumer Price Index show that prices of tires from U.S. companies jumped after the tariff was imposed. This is totally expected, of course: the reason that import tariffs benefit U.S. firms is that they allow those firms to charge higher prices than they would otherwise be able to do. Hufbauer and Lowry calculate that the higher prices for U.S.-produced tires resulting from the import restraints on Chinese tires cost U.S. consumers about $295 million per year.

As U.S. tire imports from China declined, tire imports increased from other countries. Indeed, the U.S. was importing about 27 million tires in the third quarter of 2009, when the tariff took effect, but was importing about 30 million tires by the third quarter of 2011. The tariff on Chinese-produced tires cut imports from China, but tire imports from places like Mexico, Indonesia and Thailand rose. The tariffs on China allowed these producers to raise the prices for tires paid by U.S. consumers to the tune of about $800 million.

When tariff policy is laid out in this way, it looks literally insane. No one would ever advocate a policy of imposing a tax worth $1.1 billion on all U.S. purchasers of tires, with $48 million of the revenue from that tax to go to the actual workers who produce tires, $250 million to go to U.S. tire companies, and $800 million to go to foreign tire producers.
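The dollar figures above fit together with simple arithmetic. This sketch just recomputes the cost-per-job number and the rough breakdown from the estimates quoted in the text:

```python
# Tire-tariff arithmetic, using the Hufbauer-Lowry estimates quoted above.
consumer_cost = 1_100_000_000  # extra consumer spending on tires in 2011
jobs_saved = 1_200             # upper-bound estimate of jobs saved

cost_per_job = consumer_cost / jobs_saved
print(f"${cost_per_job:,.0f} per job saved")  # about $900,000

# Rough breakdown of where the $1.1 billion went:
to_workers = jobs_saved * 40_070  # 1,200 jobs at the average tire-builder salary
to_us_firms = 250_000_000
to_foreign_firms = 800_000_000
print(f"${to_workers:,} to workers")  # about $48 million
print(f"${to_workers + to_us_firms + to_foreign_firms:,} accounted for")  # roughly $1.1 billion
```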

Moreover, any jobs saved in the tire industry were almost certainly more than offset by losses of jobs elsewhere in the economy. Hufbauer and Lowry do a back-of-the-envelope illustrative calculation that if the additional money spent on tires was diverted from other retail spending, it would cost something like 3,770 jobs in the retail sector. Also, China retaliated against the U.S. decision by imposing tariffs on U.S. exports of chicken parts. Hufbauer and Lowry report: "The Chinese tariffs reduced exports by $1 billion as US poultry firms experienced a 90 percent collapse in their exports of chicken parts to China. Given the timing of the Chinese government's actions, many trade policy experts view the trade dispute over China's imports of 'chicken feet' from the United States as a tit-for-tat response to the US safeguards on Chinese tire exports." Even as an attempt to save U.S. jobs at exorbitant cost, President Obama's tariffs on Chinese tires were a failure.

Is Policy Uncertainty Delaying the Recovery?

The U.S. corporate sector has high profits. Interest rates are near historical lows. Those factors would seem to encourage investment, expansion, and hiring. But here we are in 2012, with the official end of the Great Recession nearly three years in the rear-view mirror, and many firms are still holding back. Scott R. Baker, Nick Bloom, and Steven J. Davis have written "Is Policy Uncertainty Delaying the Recovery?" as a policy brief for the Stanford Institute for Economic Policy Research. The underlying research paper and data are available here.

There are lots of theoretical reasons why a high level of uncertainty might cause managers to be hesitant about starting new projects, investing, or hiring workers. But how does one collect data on the level of uncertainty–and in particular, on the level of uncertainty related to economic policy? Baker, Bloom and Davis mix together three sources of data into a single index: "We construct our index of policy uncertainty by combining three types of information: the frequency of newspaper articles that reference economic uncertainty and the role of policy; the number of federal tax code provisions that are set to expire in coming years; and the extent of disagreement among economic forecasters about future inflation and future government spending on goods and services." Here is their index, where a level of 100 is set arbitrarily to be equal to the average of the index for the 25 years from 1985 up to 2010.
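The mechanics of combining disparate components into one index can be sketched: scale each component so no one series dominates, average them, then rescale so the base-period average equals 100. Everything below (the series names, the random data, the equal weights) is hypothetical, purely to illustrate the normalization step, not the authors' actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly component series (news-article frequency, count of
# expiring tax provisions, forecaster disagreement) -- made-up numbers.
months = 324  # e.g., January 1985 through December 2011
news = rng.uniform(50, 150, months)
expiring_tax = rng.uniform(10, 60, months)
forecaster_disagreement = rng.uniform(0.5, 2.0, months)

# Scale each component to unit mean, then combine with equal weights.
raw = (news / news.mean()
       + expiring_tax / expiring_tax.mean()
       + forecaster_disagreement / forecaster_disagreement.mean()) / 3

# Rescale so the index averages 100 over the 1985-2010 base period
# (the first 312 months of this hypothetical sample).
index = 100 * raw / raw[:312].mean()
```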

Constructing any index like this involves some more-or-less arbitrary choices, so there will always be room for dispute. In addition, while the authors offer some arguments that this index is emphasizing "policy" uncertainty, I suspect that it's picking up other kinds of swings in economic confidence as well.

But for what it's worth, it does seem that the index spikes at times one might expect: 9/11, the "Black Monday" stock market meltdown in 1987, wars, presidential elections, and the like. In addition, policy uncertainty by this measure has been especially high since 2008, although in early 2012 the measure fell back to 2009 levels. When the authors look more closely at the newspaper articles underlying their index, they find that the greatest sources of uncertainty are those related to monetary issues, which include many steps taken by the Federal Reserve, and tax issues, like whether various tax provisions will be extended or ended.

How much does policy stability matter? As the authors ask (references to figures omitted):

"How much near-term improvement could we expect from a stable, certainty-enhancing policy regime? We use techniques developed by Christopher Sims, one of the two 2011 Nobel laureates in economics, to estimate the effects of economic policy uncertainty. The results for the United States suggest that restoring 2006 (pre-crisis) levels of policy uncertainty could increase industrial production by 4% and employment by 2.3 million jobs over about 18 months. That would not be enough to create a booming economy, but it would be a big step in the right direction."

By the time one takes into account the problems of creating an index to measure policy uncertainty and the problems of blending policy uncertainty into a macroeconomic model, I wouldn't place much confidence in these exact numbers. But at a broader level, the calculations make a strong argument that the effects of policy uncertainty on output and employment have probably been a substantial contributor to the sluggishness of the U.S. economic recovery.

Let Critics See the Movie in Advance? A Strategic Game Analysis

Movies are usually shown to critics before being released to the general public. But about one-tenth of movies are not shown to critics in advance. What do you as a movie-goer infer when a movie isn't released for review? But then, what is an appropriate strategy for movie studios in sending movies out for review, if they recognize what movie-goers like you are likely to infer? And what is the appropriate strategy for movie-goers, if they recognize what the movie studios are likely to infer about their inferences? You have just crossed the border into the land of strategic game theory. In the most recent issue of the American Economic Journal: Microeconomics, Alexander L. Brown, Colin F. Camerer, and Dan Lovallo sort through the implications and inferences in "To Review or Not to Review? Limited Strategic Thinking at the Movie Box Office" (vol. 4, number 2, pp. 1-26). The journal is not freely available on-line, although many academics will have access to it through a library subscription or their personal membership in the American Economic Association.

The usual starting point for analyzing these kinds of strategic interactions is to consider what would happen if all parties were completely rational. It might seem intuitively obvious that movie studios will send out their better-quality movies to be reviewed, but not send out their lower-quality movies. However, it turns out that if all parties are fully rational, movie studios would release every movie for review. The authors explain the underlying mathematical game theory by offering an illustration along these lines.

Say that the quality of a movie can be measured on a scale between 0 and 100. Now say that studios decide that they will only release their better movies for review: for example, the studios might decide not to release for review any movie with quality below 50. In this situation, when moviegoers see that a movie has not been released for review, they will infer that it has a quality ranging from 0 to 50–on average, a value of 25.

But if consumers are going to assume that all unreviewed movies have a quality value of 25, then it makes sense for the movie studios to release for review all movies with qualities higher than 25, because they suffer diminished profits if a movie has a quality of, say, 40, but moviegoers are assuming it's only a 25.

Now movie studios are releasing for review all movies with a quality score over 25, and moviegoers will assume that the remaining movies are between 0 and 25, or an average of 12.5. Given these expectations of moviegoers, it will pay for the studios to release for review all movies with a quality score above 12.5, so that they don't face a situation where a movie with a true quality score of 20 is valued by moviegoers as only a 12.5.

However, now consumers will assume that all unreviewed movies have a quality value below 12.5. And as this cycle of inference and counterinference continues, eventually the movie studios will release all movies (except perhaps the single worst movie, which consumers will then know is the single worst movie) for review.
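The unraveling described above can be sketched in a few lines: each round, rational consumers value an unreviewed movie at the midpoint of the withheld range, so studios cut the withholding threshold in half, and the threshold collapses toward zero. This is a toy illustration of the logic, not the authors' model:

```python
def unravel(threshold=50.0, floor=1e-6):
    """Iterate the inference cycle until almost no movie is withheld."""
    rounds = 0
    while threshold > floor:
        inferred_value = threshold / 2  # consumers' average guess for unreviewed movies
        threshold = inferred_value      # studios now withhold only movies below this
        rounds += 1
    return rounds, threshold

rounds, final_threshold = unravel()
# The threshold halves each round -- 50, 25, 12.5, ... -- so it collapses
# toward zero quickly, and essentially every movie gets released for review.
print(rounds, final_threshold)
```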

After identifying the purely rational outcome, the next step in this kind of analysis is to look at the underlying assumptions, and to think about which assumptions are most likely to be violated in this setting. The authors emphasize two such assumptions: 1) Consumers are always aware when movies haven't been released for review; and 2) Consumers draw fully rational conclusions when a movie isn't released for review. Of course, in the real world neither of these assumptions holds true. The authors find that of the 1,414 movies that had wide release in the U.S. market from 2000 through 2009, about 11% were not released for review. In addition, that number has been higher in recent years.

Movie-goers who often don\’t notice that a movie hasn\’t been released for review, or who don\’t draw the rational inference when that happens, are likely to end up going to low-quality movies they would not otherwise have attended. As a result, they are more likely to be disappointed in their movie experience when going to an unreviewed movie than to a movie that was released for review. The authors set out to test whether this implication holds true.

To measure what critics think of the quality of a movie, they use data from Metacritic.com, a website that pulls together and averages the ratings of more than 30 movie critics from newspapers, magazines, and websites. To measure what audiences think of a movie, they look at user reviews of movies at the Internet Movie Database (IMDb). They plot a graph with the movie critic ratings on the horizontal axis and the movie-watcher ratings on the vertical axis. Movies that were released for review are solid dots; movies that had a "cold open" without a review from the critics before they were released (although they were reviewed later) are hollow dots. Here is the graph:

What patterns emerge here?

1) Notice that the dots form a generally upward-sloping pattern, which tells you that when the critics tend to rate a movie more highly (on the horizontal axis), moviegoers also tend to rate the movie more highly (on the vertical axis).

2) Cold-opened movies, the hollow dots, tend to have lower quality. \”No cold-opened movie has a metacritic rating higher than 67. The average rating for those movies is 30, 17 points below the sample average of 47.\”

3) The darker straight line is the best-fit line looking only at movies that were screened in advance. The lighter straight line is the best-fit line looking at movies that were not screened in advance. The lighter line is below the darker line. Think about a movie of a certain quality level as defined by the critics: if that movie is reviewed, people are more likely to enjoy that movie than if the movie was not released for early review. This pattern suggests that the reviews are helping people to sort out which movies they would prefer seeing, and that without reviews, people are more likely to end up disappointed.

After doing statistical calculations to adjust for factors like whether the movie features well-known stars, the size of the production budget, the rating of the movie, the genre of the movie, and other factors, they find: "[C]old opening is correlated with a 10–30 percent increase in domestic box-office revenue, and a pattern of fan disappointment, consistent with the hypothesis that some moviegoers do not infer low quality from cold opening."

So here's some advice you can use: If you're not sure whether a movie was released for review by critics before it was distributed, find out. If it hasn't been released, think twice about whether you really want to see it. Maybe you do! Or maybe you are not paying enough attention to the signal the movie studio is sending by choosing a cold opening. Here is the authors' explanation, based in part on interviews with studio executives (footnotes omitted):

\”[P]roduction budgets and personnel are decided early in the process. The number of theaters which agree to show the film is contracted far in advance of any cold-opening decision. Cold-opening decisions are made after distribution contracts have been signed and according to a major distributor and studio executives “are not a part of the contract.” There are no contracted decision rights about whether to cold open or not. The cold-opening decision is almost always made late in the process. After the film is completed, there is often audience surveying and test screenings. As one senior marketing and public relations (PR) veteran put it, “If a movie is not shown to critics, a decision has been made that the film will not be well received by them … After the PR executives have seen the film, if they believe the film will be poorly reviewed, they will have a heart to heart with the marketing execs and filmmakers about the pros and cons of screening for critics. … 

 \”A key ingredient in this story is that executives must think some moviegoers are strategically naïve, in the sense that those moviegoers … will not deduce from the lack of reviews that quality is lower than they think. (Otherwise, the decision to cold open would be tantamount to allowing critics to say the quality is low).\”

Falling Labor Force Participation

The percentage of the U.S. adult population that is either working or unemployed and looking for a job is called the labor force participation rate. From the early 1960s to the late 1990s, this percentage rose more-or-less steadily, from 59% to 67%. But since then, starting well before the Great Recession, the labor force participation rate has been falling. In my own Journal of Economic Perspectives, Chinhui Juhn and Simon Potter wrote about "Changes in Labor Force Participation in the United States" back in the Summer 2006 issue. Willem Van Zandweghe has a more recent take, with updated evidence, in "Interpreting the Recent Decline in Labor Force Participation" in the Economic Review of the Federal Reserve Bank of Kansas City.

Here's the pattern of the labor force participation rate. The first figure shows the overall number. The second figure shows the breakdown by gender: that is, the declining labor force participation rate for men, and the labor force participation rate for women, which was rising until about 2000 but has since flattened out and declined.

How much of the recent sharp decline in the labor force participation rate is the Great Recession, and how much is other factors? \”At the turn of the 21st century, labor force participation in the United States reversed its decades-long increase and started trending lower. A more startling development has been the recent sharp decline in the labor force participation rate—from 66.0 percent in 2007 to 64.1 percent in 2011—a far bigger drop than in any previous four-year period. … This article presents a variety of evidence—including data on demographic shifts, labor market flows, gender differences, and the effects of long-term unemployment—to disentangle the roles of the business cycle and trend factors in the recent drop in participation. Taken together, the evidence indicates that long-term trend factors account for  about half of the decline in labor force participation from 2007 to 2011, with cyclical factors accounting for the other half.\”

What are these long-term trend factors?

1) The baby boom generation (roughly those born from 1946 through about 1960) pushed up the labor force participation rate while moving through their prime earning years, and is now starting to pull it down as its members head into retirement.

2) Women entered the (paid) labor force in large numbers starting after World War II, which helped drive the overall labor force participation rate higher for decades. But the labor force participation rate for women seemed to top out at around 60%, and has flattened out since then.

3) Young adults in the 16-24 age group have become less likely to work. This group had a labor force participation rate of nearly 70% back in the 1970s and 1980s, but it has now fallen to about 55%. Part of the decline is that more young people are attending at least some college. Another part is that for many of the relatively low-skilled in this age group, the low wages they could earn don't seem to make taking a job worthwhile.

4) The long-term trend of declining male labor force participation rates continues. What are these men doing when they leave the labor force? One doorway out of the labor force for many of them is applying for disability, and applications have roughly tripled in the last 10 years, from 1 million to 3 million per year. For a discussion of "Disability Insurance: One More Trust Fund Going Broke," see this post from August 11, 2011.
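The first of these trend factors is a composition effect: the aggregate participation rate is a population-weighted average of group rates, so it can fall purely because the population is shifting toward low-participation age groups. A minimal sketch, with hypothetical rates and population shares:

```python
# Hypothetical illustration of the composition effect: each group's
# participation rate is held fixed, but the population shifts toward the
# low-participation 55+ group, pulling the aggregate rate down.

def aggregate_lfpr(rates, shares):
    """Population-weighted average of group participation rates."""
    assert abs(sum(shares) - 1.0) < 1e-9
    return sum(r * s for r, s in zip(rates, shares))

# Hypothetical group rates: young (16-24), prime-age (25-54), older (55+).
rates = [0.55, 0.82, 0.40]

before = aggregate_lfpr(rates, [0.15, 0.55, 0.30])  # younger population
after = aggregate_lfpr(rates, [0.15, 0.45, 0.40])   # baby boomers hit 55+
```

Here the aggregate rate drops by about four percentage points even though every group's own rate is held constant.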

A long-term decline in the labor force participation rate isn't good news for long-term economic growth, nor for the long-term solvency of Social Security and Medicare. Two areas deserve particular focus.

1) Labor force participation is especially low for those with lower levels of education. For example, in 2010 the overall labor force participation rate was 66.5%. For those with less than a high school education, it was 46.3%; for those with a high school education but no college, it was 61.6%; for those with some college but less than a bachelor's degree, it was 70.5%; and for college graduates it was 76.7%. The wages of low-skill work in the U.S. economy are apparently scanty enough that they don't draw people into jobs, at least when compared with the alternative of not working.

2) The labor force participation of the elderly has been rising since the mid-1990s, albeit slowly. For example, the labor force participation rate of the 55-64 age group rose from 59.3% in 2000 to 64.9% in 2010; for the over-65 group, the increase was from 12.9% in 2000 to 17.4% in 2010; and for the over-75 group, from 5.3% in 2000 to 7.4% in 2010. As the population ages, we need to think about designing retirement programs and labor force institutions in a way that doesn't penalize, and perhaps even rewards, the decision to work a few more years.

Federal Reserve Swap Lines

The Federal Reserve has set up \”swap lines\” with other central banks around the world. What are these? Galina Alexeenko, Sandra Kollen, and Charles Davidson offer a nice overview in \”Swap Lines Underscore the Dollar\’s Global Role,\” in EconSouth from the Atlanta Fed.

The economic issue here is the central role of the U.S. dollar in global economic transactions. As they write, "[O]ne of the major business lines of European banks is providing financing in dollars on a global scale—for trade, purchasing dollar-denominated assets, or syndicating loans to corporations. Banks the world over, in fact, have a great need for dollars because much of the world's trade, investment, and lending is conducted in U.S. currency." But during an international financial crisis, as various financial markets freeze up, it may be very expensive or even impossible at certain times for banks around the world to get the U.S. dollars they need to carry out transactions. The Federal Reserve's swap lines are a temporary measure to make U.S. dollars available at such times around the world, so that financial instability is less likely to persist and grow.

How does a swap line work? Alexeenko, Kollen, and Davidson explain:

\”The swaps involve two steps. The first is literally a swap—U.S. dollars for foreign currency—between the Federal Reserve and a foreign central bank. The exchange is based on the market exchange rate at the time of the transaction. The Fed holds the foreign currency in an account at the foreign central bank, while the other central bank deposits the dollars the Fed provides in an account at the Federal Reserve Bank of New York. The two central banks agree to swap back the money at the same exchange rate, thus creating no exchange rate risk for the Federal Reserve. The currencies can be swapped back as early as the next day or as far ahead as three months.

The second step involves the foreign central bank lending dollars to commercial banks in its jurisdiction. The foreign central bank determines which institutions can borrow dollars and whether to accept their collateral. The foreign central bank assumes the credit risk of lending to the commercial banks, and the foreign central bank remains obligated to return the dollars to the Fed. At the conclusion of the swap, the foreign central bank pays the Fed an amount of interest on the dollars borrowed that is equal to the amount the central bank earned on its dollar loans to the commercial banks. The interest rate on the swap lines is determined by the agreement between the Fed and foreign central banks.\”

The description helps to clarify why such swap lines are not a "bailout" or anything of the sort. The exchange rate for the swap is locked in, and any U.S. dollar loans that are made will pay interest to the Fed. Because the U.S. dollar plays such a central role in global transactions, the Fed is just making sure that a temporary shortfall of dollars in a foreign financial system doesn't make a financial crisis worse.
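To see why the locked-in exchange rate eliminates the Fed's currency risk, it helps to trace the two steps with a small numerical sketch. All of the figures below are hypothetical, invented for illustration:

```python
# Hypothetical numbers tracing the two steps of a swap line.  None of
# these figures come from the article; they are made up for illustration.

swap_dollars = 10_000_000_000   # hypothetical swap size: $10 billion
rate_at_open = 1.30             # hypothetical market rate: dollars per euro

# Step 1: the Fed swaps dollars for euros at the market rate.  The unwind
# is locked in at the same rate, so the Fed bears no exchange-rate risk.
euros_held_by_fed = swap_dollars / rate_at_open

# Step 2: the foreign central bank lends the dollars to its commercial
# banks and later pays the Fed the interest it earned on those loans.
annual_rate = 0.01              # hypothetical agreed interest rate
days = 84                       # within the three-month maximum term
interest_to_fed = swap_dollars * annual_rate * days / 360

# Unwind: swapping back at the opening rate returns exactly the original
# dollar amount, no matter where the market exchange rate has moved.
dollars_returned = euros_held_by_fed * rate_at_open
```

Whatever happens to the market exchange rate during the life of the swap, the unwind leg returns the original dollar amount, and the interest on the foreign central bank's dollar lending flows back to the Fed.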

Here's a timeline for these swap lines. (I found it interesting that these authors differentiate between the "Global Financial Crisis (2007-2008)" and the "European Financial Crisis (2009-current)." I've been trying to sort out in my own mind, without yet reaching firm conclusions, how to think of these episodes as connected in some ways and separate in others.)

How large have these swap lines been? This graph shows the sharp rise in assets held by the Federal Reserve starting in mid-2008. A fairly substantial portion of these assets (say, $500 billion or so) were held in the form of swap lines at the height of the global financial crisis in late 2008 and early 2009, but these swap lines were ended by February 2010. More recently, you can see on the graph the much smaller swap lines–in the range of $100 billion–established to address the European financial crisis.

Net Immigration from Mexico Stops — or Turns Negative?

One of the most sizzling of all hot-button issues over the last 40 years has been the sharp rise in immigration from Mexico, much of it illegal. Thus, it's intriguing to read the report by Jeffrey Passel, D'Vera Cohn, and Ana Gonzalez-Barrera called "Net Migration from Mexico Falls to Zero—and Perhaps Less," from the Pew Research Center.

They write (footnotes and citations omitted): \”The largest wave of immigration in history from a single country to the United States has come to a standstill…. The U.S. today has more immigrants from Mexico alone—12.0 million—than any other country in the world has from all countries of the world. Some 30% of all current U.S. immigrants were born in Mexico. The next largest sending country—China (including Hong Kong and Taiwan)—accounts for just 5% of the nation’s current stock of about 40 million immigrants…. Beyond its size, the most distinctive feature of the modern Mexican wave has been the unprecedented share of immigrants who have come to the U.S. illegally. Just over half (51%) of all current Mexican immigrants are unauthorized, and some 58% of the estimated 11.2 million unauthorized immigrants in the U.S. are Mexican.\”

Here are two illustrative figures. The first shows the total Mexican-born population in the United States, showing how the total just takes off from about 1970 up through the middle of this decade. The second figure breaks the total down into legal and "unauthorized," and shows that the decline in the unauthorized total actually started back in about 2007.


Will immigration from Mexico surge again in the next few years, if and when U.S. employment gradually recovers? I suspect that such immigration may rise again, but much more mildly than in the past. There are a number of reasons, going back several years, why net immigration from Mexico has leveled out or perhaps even turned slightly negative. Start by looking at a graph of annual immigration (that is, not the total Mexican-born population, but the annual flow). It actually peaked back in the late 1990s, and there has been an especially sharp decline going back to about 2004. 

 

What are the causes of this decline? In no particular order, here are some of the longer-term reasons dating back to before the Great Recession hit full force:

1) Border enforcement is way up. \”Appropriations for the U.S. Border Patrol within the Department of Homeland Security (DHS)—only a subset of all enforcement spending, but one especially relevant to Mexican immigrants—more than tripled from 2000 to 2011, and more than doubled from 2005 to 2011. The federal government doubled staffing along the southwest border from 2002 to 2011, expanded its use of surveillance technology such as ground sensors and unmanned flying vehicles, and built hundreds of miles of border fencing. .. In spite of (and perhaps because of) increases in the number of U.S. Border Patrol agents, apprehensions of Mexicans trying to cross the border illegally have plummeted in recent years—from more than 1 million in 2005 to 286,000 in 2011—a likely indication that fewer unauthorized migrants are trying to cross. Border Patrol apprehensions of all unauthorized immigrants are now at their lowest level since 1971.\”

2) Deportations are way up. \”As apprehensions at the border have declined, deportations of unauthorized Mexican immigrants–some of them picked up at work sites or after being arrested for other criminal violations–have risen to record levels. In 2010, 282,000 unauthorized Mexican immigrants were repatriated by U.S. authorities, via deportation or the expedited removal process.\”

3) Mexico\’s demography is changing, with fewer children per woman and an older population, so the pressures on young men to leave and look for work in the U.S. are much reduced. \”In Mexico, among the wide array of trends with potential impact on the decision to emigrate, the most significant demographic change is falling fertility: As of 2009, a typical Mexican woman was projected to have an average 2.4 children in her lifetime, compared with 7.3 for her 1960 counterpart.\”

4) Mexico\’s economy was a train wreck for substantial periods of the 1970s and 1980s, and the U.S. economy was an incredible jobs locomotive in the second half of the 1990s in particular. But Mexico\’s economy is maturing, and the gap between economic opportunities in Mexico and those in the U.S. seems less gaping.

"Mexico today is the world's 11th-largest country by population with 115 million people and the world's 11th-largest economy as measured by gross domestic product (World Bank, 2011). The World Bank characterizes Mexico as an "upper-middle income economy," placing it in the same category as Brazil, Turkey, Russia, South Africa and China. Mexico is also the most populous Spanish-speaking country in the world. … In the three decades from 1980 to 2010, Mexico's per capita GDP rose by 22%—from $10,238 in 1980 to about $12,400 in 2010. This increase is somewhat less than the average for all Latin American/Caribbean countries during the same period (33%) and significantly less than the increase in per capita GDP in the United States during this period (66%). Meantime, during this same period, the per capita GDP in China shot up thirteenfold—from $524 in 1980 to $6,816 in 2010. In more recent years, Mexico's economy, like that of the United States and other countries, fell into a deep recession in 2007-2009. But since 2010 it has experienced a stronger recovery than has its neighbor to the north …"

5) Prospects for education and health care in Mexico have improved, as well. "For example, 92.4% of all Mexicans ages 15 and older were literate in 2010, up from 83% in 1980. In 2010, the average number of years of education of Mexicans ages 15 and older was 8.6, compared with 7.3 years in 2000. In terms of health care, almost three-in-five (59%) Mexicans in 2000 lacked health care coverage. In 2003, the Mexican federal government created a health care program, Seguro Popular, that provides basic coverage to the uninsured and is free for those living under the poverty line. The share of the Mexican population with access to health care had increased from less than half (41%) in 2000 to slightly more than two-thirds (67%) in 2010, an increase of 26 percentage points."

In short, Mexico in the 1970s and 1980s was demographically top-heavy with teenagers and young adults from large families living in a country with a weak economy and limited prospects for education and health care, right next to a much richer country with a weakly enforced border. A flood of immigration followed. Now, Mexico is on average older, with smaller families, and the prospects for education, health, and finding economic opportunity in Mexico are notably better. Enforcement at the border and within the U.S. economy has ramped up considerably. In that situation, a large resurgence of immigration from Mexico seems unlikely.


How to Reduce U.S. Housing Debts: The IMF Speaks

The global financial crisis was preceded by a huge run-up in household debt, which in a number of countries helped to fuel a rise in housing prices. When housing prices then deflated, households were left with oversized debt burdens that they couldn't meet. One of the reasons behind the sluggish "recovery" since the Great Recession is that so many households have been struggling to pay down or renegotiate their debts. How to reduce these housing-related debt burdens? Some aggressive policy steps to reduce housing debt are recommended by (to me, at least) an unlikely source: the International Monetary Fund, in the April 2012 World Economic Outlook.

Chapter 3 of the report, "Dealing with Household Debt," first rehearses the facts about this cycle of rising debt, housing price bubbles, and then, after the bubble pops, a sluggish recovery. This story is fairly conventional; for example, I posted on "Leverage and the Business Cycle" about a month ago on March 23. What surprised me about the IMF report was the policy recommendations: "[B]old household debt restructuring programs such as those implemented in the United States in the 1930s … can significantly reduce debt repayment burdens and the number of household defaults and foreclosures. Such policies can therefore help avert self-reinforcing cycles of household defaults, further house price declines, and additional contractions in output."

The IMF uses the U.S. Home Owners' Loan Corporation (HOLC), established in 1933, as its main example of how best to address housing debt, an example against which the policy steps the U.S. has taken since 2009 compare unfavorably. Here's the IMF's description of how the Home Owners' Loan Corporation worked (footnotes, citations, and references to tables and boxes omitted):

\”To prevent mortgage foreclosures, HOLC bought distressed mortgages from banks in exchange for bonds with federal guarantees on interest and principal. It then restructured these mortgages to make them more affordable to borrowers and developed methods of working with borrowers who became delinquent or unemployed, including job searches. HOLC bought about 1 million distressed mortgages that were at risk of foreclosure, or about one in five of all mortgages. Of these million mortgages, about 200,000 ended up foreclosing when the borrowers defaulted on their renegotiated mortgages. The HOLC program helped protect the remaining 800,000 mortgages from foreclosure, corresponding to 16 percent of all mortgages. HOLC mortgage purchases amounted to $4.75 billion (8.4 percent of 1933 GDP), and the mortgages were sold over time, yielding a nominal profit by the time of the HOLC program’s liquidation in 1951. The HOLC program’s success in preventing foreclosures at a limited fiscal cost may explain why academics and public figures called for a HOLC-style approach during the recent recession.

"A key feature of HOLC was the effective transfer of funds to credit-constrained households with distressed balance sheets and a high marginal propensity to consume, which mitigated the negative effects on aggregate demand discussed above…. Accordingly, HOLC extended mortgage terms from a typical length of 5 to 10 years, often at variable rates, to fixed-rate 15-year terms, which were sometimes extended to 20 years. … In a number of cases, HOLC also wrote off part of the principal to ensure that no loans exceeded 80 percent of the appraised value of the house, thus mitigating the negative effects of debt overhang discussed above."
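The 80 percent loan-to-value cap in that last sentence is mechanical enough to sketch in a few lines. The figures below are hypothetical, not from the IMF report:

```python
# Hypothetical illustration of the HOLC rule described above: write off
# any principal above 80 percent of the home's appraised value.

def holc_writedown(principal, appraised_value, max_ltv=0.80):
    """Return (new_principal, amount_written_off) under the LTV cap."""
    cap = max_ltv * appraised_value
    if principal <= cap:
        return principal, 0.0
    return cap, principal - cap

# An underwater borrower: $100,000 owed on a house appraised at $90,000.
# The loan is written down to 80% of $90,000, i.e. $72,000.
new_principal, written_off = holc_writedown(100_000, 90_000)
```

On the IMF's telling, this kind of principal relief is part of how HOLC limited debt overhang while still turning a nominal profit by its 1951 liquidation.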

Here's a figure showing the U.S. housing market in recent years. As the IMF reports: "There were about 2.4 million properties in foreclosure in the United States at the end of 2011, a nearly fivefold increase over the precrisis level, and the “shadow inventory” of distressed mortgages suggests that this number could rise further." The area shaded in blue shows the number of properties in foreclosure. The area shaded in yellow is an estimate of "shadow inventory", that is, additional properties likely to go into foreclosure: "Shadow inventory indicates properties likely to go into foreclosure based on a number of assumptions. It includes a portion of all loans delinquent 90 days or more (based on observed performance of such loans); a share of modifications in place (based on redefault performance of modified mortgages); and a portion of negative equity mortgages (based on observed default rates)."

Notice that the spike in foreclosures starts at about the same time as housing prices top out, in late 2006, and peaks around early 2009. These numbers don't include the larger number of people, about 11 million, or one in every four mortgages in the country, who have "underwater" mortgages, where the value of the mortgage exceeds the value of the property.

What policies has the U.S. followed to deal with this foreclosure problem? The IMF reports that the main policy is \”the Home Affordable Modification Program (HAMP), the flagship mortgage debt restructuring initiative targeted at households in default or at risk of default.\” It was adopted in February 2009, and has been revised a number of times since. But so far, the policy hasn\’t accomplished much. Here\’s the IMF:

"However, households already in default are excluded from HARP, and the impact on preventing foreclosures is likely to be more limited. HAMP had significant ambitions but has thus far achieved far fewer modifications than envisaged. … Meanwhile, the number of permanently modified mortgages amounts to 951,000, or 1.9 percent of all mortgages. By contrast, some 20 percent of mortgages were modified by the Depression-era HOLC program, and HAMP's targeted reach was 3 to 4 million homeowners. By the same token, the amount disbursed … as of December 2011 was only $2.3 billion, well below the allocation of $30 billion (0.2 percent of GDP)."

Of course, there is a list of reasons for this minimal effect. It requires the cooperation of creditors and loan officers, which is voluntary. When mortgages have been bundled together and sold as securities, it's not clear how the renegotiation should work. Tight eligibility rules mean that the unemployed and those who have suffered big drops in income often aren't eligible. The policy typically relied on lower interest rates and longer mortgage terms, but only rarely could it reduce the outstanding principal on a house that had lost value. The reductions in mortgage payments often weren't large, so roughly a third of those who made it through the program ended up defaulting again, which of course reduces anyone's incentive to participate in the first place. Fannie Mae and Freddie Mac, which hold about 60% of outstanding U.S. mortgages, don't participate.

So here we are, six years after the wave of foreclosures started and three years after it peaked, still arguing about whether something substantial ought to be done–and if so, what. Without drilling down into details of the alternative proposals, it seems to me that a modest share of the trillions in federal borrowing in the last few years, along with the trillions of assets that the Federal Reserve has accumulated through its \”quantitative easing\” policy, might have been better applied to assisting the millions of American households who took out a mortgage and bought a house–implicitly relying on the ability of supposedly better-informed lenders to tell them what they could afford–and then were blindsided by the national downturn in housing market prices.

Occupations of the 1%

What jobs do those in the top 1% of the income distribution hold? How have those jobs shifted in recent decades? Jon Bakija, Adam Cole, and Bradley T. Heim have evidence on this question in "Jobs and Income Growth of Top Earners and the Causes of Changing Income Inequality: Evidence from U.S. Tax Return Data." An April 2012 version of their working paper is here; a very similar March 2012 version is here. They have a bunch of interesting tables and analysis; here, I'll give a sampling, with two columns from two of their tables about the occupations of the top 1%, and some evidence that what's driving the top 1% is really the top tenth of 1%.

Here are the occupations of those who were in the top 1% of the income distribution, in the tax return data, in 1979 and in 2005. What occupations make up a smaller share of the top 1%? The share of the top 1% who get their income as "Executives, managers and supervisors (non-finance)" has dropped 5.3 percentage points. The breakdown at the bottom of the table suggests that much of this fall is due to the subcategory of "Executive, non-finance, salaried." The share of the top 1% in the medical profession falls by 1.7 percentage points. The share of the top 1% who are "Farmers & ranchers" falls by 1.5 percentage points.

What occupations comprise a larger share of the top 1%? The share of the top 1% whose occupation is classified as \”Financial professions, including management\” rises by 5.5 percentage points. The share in \”Real Estate\” rises by 1.8 percentage points–although surely this gain would be smaller if calculated on data after the drop in housing prices. The share who are lawyers rises by 1 percentage point.

Here's the share of GDP received as income by those in each occupation in the top 1%, again comparing 1979 and 2005. The top line shows that the share of income going to the top 1% roughly doubled over this time. Thus, the interesting question here is whether in some occupations the rise in income was substantially more or less than a doubling. For example, the share of income going to the top 1% in the "Financial professions, including management" more than tripled, as did the share of income going to those in "Real Estate." The share of income going to "Business operations (nonfinance)" and to "Professors and scientists" almost tripled. On the other side, the share of income in the top 1% going to "Medical" rose by "only" about 50%, and the share of income in the top 1% going to "Farmers & ranchers" declined.

Finally, here's a striking figure that compares income growth from 1979 to 2005 for the top 0.1% with income growth for those in the bottom half of the top 1%: that is, those from the 99th to the 99.5th percentile. The bottom line shows that on average, the growth rate of real income (in this case, excluding capital gains) was 2.4 times as fast for the top 0.1% as for those in the 99-99.5 percentiles. For certain occupations like "Executive, non-finance" and "Supervisor, non-finance," the multiple is much higher. The overall pattern here is that while slogans often refer to the top 1%, for most occupations, referring to the top 0.1% might be a more accurate description of where the largest income gains have occurred.