Spring 2015 Journal of Economic Perspectives On-line

Since 1986, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which several years back made the decision–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. The journal’s website is here. I’ll start with the Table of Contents for the just-released Spring 2015 issue. Below are abstracts and direct links to all the papers. I will probably blog about some of the individual papers in the next week or two, as well.

Symposium: The Bailouts of 2007-2009
“A Retrospective Look at Rescuing and Restructuring General Motors and Chrysler,” by Austan D. Goolsbee and Alan B. Krueger
The rescue of the US automobile industry amid the 2008-2009 recession and financial crisis was a consequential, controversial, and difficult decision made at a fraught moment for the US economy. Both of us were involved in the decision process at the time, but since have moved back to academia. More than five years have passed since the bailout began, and it is timely to look back at this unusual episode of economic policymaking to consider what we got right, what we got wrong, and why. In this article, we describe the events that brought two of the largest industrial companies in the world to seek a bailout from the US government, the analysis that was used to evaluate the decision (including what the alternatives were and whether a rescue would even work), the steps that were taken to rescue and restructure General Motors and Chrysler, and the performance of the US auto industry since the bailout. We close with general lessons to be learned from the episode.
Full-Text Access | Supplementary Materials

“The Rescue of Fannie Mae and Freddie Mac,” by W. Scott Frame, Andreas Fuster, Joseph Tracy and James Vickery

The imposition of federal conservatorships on September 6, 2008, at the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation—commonly known as Fannie Mae and Freddie Mac—was one of the most dramatic events of the financial crisis. These two government-sponsored enterprises play a central role in the US housing finance system, and at the start of their conservatorships held or guaranteed about $5.2 trillion of home mortgage debt. The two firms were often cited as shining examples of public-private partnerships—that is, the harnessing of private capital to advance the social goal of expanding homeownership. But in reality, the hybrid structures of Fannie Mae and Freddie Mac were destined to fail at some point, owing to their singular exposure to residential real estate and moral hazard incentives emanating from the implicit guarantee of their liabilities. We describe the financial distress experienced by the two firms, the events that led the federal government to take dramatic action in an effort to stabilize housing and financial markets, and the various resolution options available to US policymakers at the time; and we evaluate the success of the choice of conservatorship in terms of its effects on financial markets and financial stability, on mortgage supply, and on the financial position of the two firms themselves. Conservatorship achieved its key short-run goals of stabilizing mortgage markets and promoting financial stability during a period of extreme stress. However, conservatorship was intended to be a temporary fix, not a long-term solution, and more than six years later, Fannie Mae and Freddie Mac still remain in conservatorship.
Full-Text Access | Supplementary Materials

“An Assessment of TARP Assistance to Financial Institutions,” by Charles W. Calomiris and Urooj Khan


Six years after the passage of the 2008 Troubled Asset Relief Program, commonly known as TARP, it remains hard to measure the total social costs and benefits of the assistance to banks provided under TARP programs. TARP was not a single approach to assisting weak banks but rather a variety of changing solutions to a set of evolving problems. TARP’s passage was associated with significant improvements in financial markets and the health of financial intermediaries, as well as an increase in the supply of lending by recipients. However, a full evaluation must also take into account other factors, including the risks borne by taxpayers in the course of the bailouts; moral-hazard costs that could result in more risk-taking in the future; and social costs related to perceived unfairness. Our evaluation is organized in five parts: 1) What did policymakers do? 2) What are the proper objectives of interventions like TARP assistance to financial institutions? 3) Did TARP succeed in those economic objectives? 4) Were TARP funds allocated purely on an economic basis, or did political favoritism play a role? 5) Would alternative policies, either alongside or instead of TARP, and alternative design features of TARP, have worked better?
Full-Text Access | Supplementary Materials

“AIG in Hindsight,” by Robert McDonald and Anna Paulson

The near-failure on September 16, 2008, of American International Group (AIG) was an iconic moment in the financial crisis. Two large bets on real estate made with funding vulnerable to bank-run-like dynamics pushed AIG to the brink of bankruptcy. AIG used securities lending to transform insurance company assets into residential mortgage-backed securities and collateralized debt obligations, ultimately losing at least $21 billion and threatening the solvency of the life insurance companies. AIG also sold insurance on multisector collateralized debt obligations, backed by real estate assets, ultimately losing more than $30 billion. These activities were apparently motivated by a belief that AIG’s real estate bets would not suffer defaults and were “money-good.” We find that these securities have in fact suffered write-downs and that the stark “money-good” claim can be rejected. Ultimately, both liquidity and solvency were issues for AIG.
Full-Text Access | Supplementary Materials

“Legal, Political, and Institutional Constraints on the Financial Crisis Policy Response,” by Phillip Swagel


As the financial crisis manifested itself and peaked in 2007 and 2008, the response of US policymakers and regulators was shaped in important ways by legal and political constraints. Policymakers lacked certain legal authorities that would have been useful for addressing the crisis, notably to use public capital to stabilize the banking sector or to deal with the failure of large financial firms such as insurance companies and investment banks that were outside the scope of bank regulators’ authority to resolve deposit-taking commercial banks. Legal constraints were keenly felt at the US Department of the Treasury, where I served as a senior official from December 2006 to January 2009. Treasury had virtually no emergency economic authority at the onset of the crisis in 2007, with the exception of the Treasury’s Exchange Stabilization Fund, which was intended for use in exchange rate interventions. As the systemic risks of the financial crisis became apparent, the initial policy response largely fell to the Federal Reserve, which had the authority to act under emergency circumstances. There will inevitably be another financial crisis, and the response will be shaped by both the lessons learned from recent history and the statutory and political changes in the wake of the crisis. The paper thus concludes by discussing changes in constraints since the crisis, with a focus on two developments: 1) the political reality that there will not in the near future be another wide-ranging grant of fiscal authority as was given with the Troubled Asset Relief Program, and 2) the new legal authorities provided in the Wall Street Reform and Consumer Protection Act of 2010, commonly known as the Dodd-Frank law.
Full-Text Access | Supplementary Materials

Symposium: Disability Insurance

“Understanding the Increase in Disability Insurance Benefit Receipt in the United States,” by Jeffrey B. Liebman


The share of working-age Americans receiving disability benefits from the federal Disability Insurance (DI) program has increased significantly in recent decades, from 2.2 percent in the late 1970s to 3.6 percent in the years immediately preceding the 2007-2009 recession and 4.6 percent in 2013. With the federal Disability Insurance Trust Fund currently projected to be depleted in 2016, Congressional action of some sort is likely to occur within the next several years. It is therefore a good time to sort out the competing explanations for the increase in disability benefit receipt and to review some of the ideas that economists have put forth for reforming US disability programs.
Full-Text Access | Supplementary Materials

“The Rise and Fall of Disability Insurance Enrollment in the Netherlands,” by Pierre Koning and Maarten Lindeboom
As recently as 15 years ago, the high level of Disability Insurance (DI) enrollment was considered to be one of the major social and economic problems of the Netherlands; indeed, the Netherlands was characterized as the country with the most out-of-control disability program of OECD countries. But since about 2002, the Netherlands has seen a spectacular decline in its Disability Insurance enrollment rate. Radical reforms to the Dutch DI system were implemented over the period 1996 to 2006. We cluster these reforms in three broad categories: 1) reducing the incentives of employers to move workers to disability; 2) increased gatekeeping; and 3) tightening disability eligibility criteria while enhancing worker incentives. The reforms appear to have been very effective: yearly DI inflow rates dropped from 1.5 percent of the insured population in 2001 to about 0.5 percent in 2012. We argue that the interaction of employer incentives and formal employer obligations in particular has contributed to the substantial decrease in DI inflow. On the downside, however, it seems workers with bad health have sorted into temporary employment—without employers bearing the financial responsibility of their benefit costs.
Full-Text Access | Supplementary Materials

“Disability Benefit Receipt and Reform: Reconciling Trends in the United Kingdom,” by James Banks, Richard Blundell and Carl Emmerson


The UK has enacted a number of reforms to the structure of disability benefits that have made it a major case study for other countries thinking of reform. The introduction of Incapacity Benefit in 1995 coincided with a strong decline in disability benefit expenditure, reversing previous sharp increases. From 2008 the replacement of Incapacity Benefit with Employment and Support Allowance was intended to reduce spending further. We bring together administrative and survey data over the period and highlight key differences in receipt of disability benefits by age, sex, and health. These disability benefit reforms and the trends in receipt are also put into the context of broader trends in health and employment by education and sex. We document a growing proportion of claimants in any age group with mental and behavioral disorders as their principal health condition. We also show the decline in the number of older working-age men receiving disability benefits to have been partially offset by growth in the number of younger women receiving these benefits. We speculate on the impact of disability reforms on employment.
Full-Text Access | Supplementary Materials

Articles


“Reforming LIBOR and Other Financial Market Benchmarks,” by Darrell Duffie and Jeremy C. Stein
LIBOR is the London Interbank Offered Rate: a measure of the interest rate at which large banks can borrow from one another on an unsecured basis. LIBOR is often used as a benchmark rate—meaning that the interest rates that consumers and businesses pay on trillions of dollars in loans adjust up and down contractually based on movements in LIBOR. Investors also rely on the difference between LIBOR and various risk-free interest rates as a gauge of stress in the banking system. Benchmarks such as LIBOR therefore play a central role in modern financial markets. Thus, news reports in 2008 revealing widespread manipulation of LIBOR threatened the integrity of this benchmark and lowered trust in financial markets. We begin with a discussion of the economic role of benchmarks in reducing market frictions. We explain how manipulation occurs in practice, and illustrate how benchmark definitions and fixing methods can mitigate manipulation. We then turn to an overall policy approach for reducing the susceptibility of LIBOR to manipulation before focusing on the practical problem of how to make an orderly transition to alternative reference rates without raising undue legal risks.
Full-Text Access | Supplementary Materials

“Bitcoin: Economics, Technology, and Governance,” by Rainer Böhme, Nicolas Christin, Benjamin Edelman and Tyler Moore

Bitcoin is an online communication protocol that facilitates the use of a virtual currency, including electronic payments. Bitcoin’s rules were designed by engineers with no apparent influence from lawyers or regulators. Bitcoin is built on a transaction log that is distributed across a network of participating computers. It includes mechanisms to reward honest participation, to bootstrap acceptance by early adopters, and to guard against concentrations of power. Bitcoin’s design allows for irreversible transactions, a prescribed path of money creation over time, and a public transaction history. Anyone can create a Bitcoin account, without charge and without any centralized vetting procedure—or even a requirement to provide a real name. Collectively, these rules yield a system that is understood to be more flexible, more private, and less amenable to regulatory oversight than other forms of payment—though as we discuss, all these benefits face important limits. Bitcoin is of interest to economists as a virtual currency with potential to disrupt existing payment systems and perhaps even monetary systems. This article presents the platform’s design principles and properties for a nontechnical audience; reviews its past, present, and future uses; and points out risks and regulatory issues as Bitcoin interacts with the conventional financial system and the real economy.
Full-Text Access | Supplementary Materials

“Systematic Bias and Nontransparency in US Social Security Administration Forecasts,” by Konstantin Kashin, Gary King and Samir Soneji

We offer an evaluation of the Social Security Administration demographic and financial forecasts used to assess the long-term solvency of the Social Security Trust Funds. This same forecasting methodology is also used in evaluating policy proposals put forward by Congress to modify the Social Security program. Ours is the first evaluation to compare the SSA forecasts with observed truth; for example, we compare forecasts made in the 1980s, 1990s, and 2000s with outcomes that are now available. We find that Social Security Administration forecasting errors—as evaluated by how accurate the forecasts turned out to be—were approximately unbiased until 2000 and then became systematically biased afterward, and increasingly so over time. Also, most of the forecasting errors since 2000 are in the same direction, consistently misleading users of the forecasts to conclude that the Social Security Trust Funds are in better financial shape than turns out to be the case. Finally, the Social Security Administration’s informal uncertainty intervals appear to have become increasingly inaccurate since 2000. At present, the Office of the Chief Actuary, at the Social Security Administration, does not reveal in full how its forecasts are made. Every future Trustees Report, without exception, should include a routine evaluation of all prior forecasts, and a discussion of what forecasting mistakes were made, what was learned from the mistakes, and what actions might be taken to improve forecasts going forward. And the Social Security Administration and its Office of the Chief Actuary should follow best practices in academia and many other parts of government and make their forecasting procedures public and replicable, and should calculate and report calibrated uncertainty intervals for all forecasts.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials

Social Costs of the Financial Sector

Luigi Zingales may seem like an unlikely person to ask, “Does Finance Benefit Society?” When he critiques the social benefits of the financial sector, he does so as an insider. He is Distinguished Service Professor of Entrepreneurship and Finance at the University of Chicago Booth School of Business, and recently served as president of the American Finance Association–the professional association of economists and scholars in related disciplines who study finance in depth. That question was the title of his Presidential Address to the AFA, delivered in January 2015. The address will be published later this summer in the Journal of Finance; meanwhile, you can watch video of the lecture or read the talk at his website.

In the aftermath of the dot-com boom, the housing price bubble, and the financial bailouts, it’s easy enough for “finance” to sound like a swear-word. As I pointed out a few days ago in a post on “Breaking Down Corporate Profits” (May 6, 2015), the “finance and insurance” sector earns more profits than any other sector of the US economy, just beating out the manufacturing sector and well ahead of wholesale and retail trade. I’ve offered some thoughts on the financial sector in “Why Did the U.S. Financial Sector Grow?” (May 15, 2013) and “When is the Financial Sector Too Big?” (July 19, 2012), or for an alternative view, “A Defense of the Financial Sector” (May 17, 2013).

So it’s probably useful to begin, as Zingales does, by pointing out that certain aspects of finance are pretty useful. I like being able to keep my money in a bank, and to get a mortgage when I buy a house. I like being able to invest my retirement savings in a mutual fund. Farmers like being able to use futures markets to lock in a minimum price for what they will grow; airlines like being able to use those markets to lock in a maximum price for what fuel will cost six or twelve months in the future; multinational companies like using futures markets to protect themselves against shifts in exchange rates. There’s lots of evidence that the financial sector of a country tends to expand with economic growth, as entrepreneurs and established businesses find ways to use the financial sector to raise funds for investment and to hold down risks.

But after dutifully acknowledging these kinds of benefits, Zingales questions whether the growth of the financial sector has gone beyond the useful zone. Here’s a sample (citations omitted):

While there is no doubt that a developed economy needs a sophisticated financial sector, at the current state of knowledge there is no theoretical reason or empirical evidence to support the notion that all the growth of the financial sector in the last forty years has been beneficial to society. In fact, we have both theoretical reasons and empirical evidence to claim that a component has been pure rent seeking. …

There is a large body of evidence documenting that on average a bigger banking sector (often measured as the ratio of private credit to GDP) is correlated with higher growth, both cross-sectionally and over time. … [I]n this large body there is precious little evidence that shows the positive role of other forms of financial development, particularly important in the United States: equity market, junk bond market, option and future markets, interest rate swaps, etc. …

If anything, the empirical evidence suggests that the credit expansion in the United States was excessive. The problem is even more severe for other parts of the financial system. There is remarkably little evidence that the existence or the size of an equity market matters for growth. …  I am not aware of any evidence that the creation and growth of the junk bond market, the option and futures market, or the development of over-the-counter derivatives are positively correlated with economic growth. …

For the period 1996-2004, … the cost of (mostly financial) fraud among U.S. companies with more than $750m in revenues is $380bn a year. Table 1 reports the fines paid by financial institutions to U.S. enforcement agencies between January 2012 and December 2014. The total amount is $139bn, $113bn of which related to mortgage fraud. This severely underestimates the magnitude of the problem. First, some of the main mortgage lenders (like New Century Financial) went bankrupt and therefore were never charged. Second, even if the fraudulent institution did not go bankrupt, it can effectively be sued only if it has enough capital. The table includes just one fine regarding Madoff, for only $2.9bn, when the overall amount of the Madoff fraud totaled $64.8bn. Finally, Dyck et al. (2014) estimate that only one-fourth of frauds are detected.

Zingales argues that, in the face of this kind of evidence (and he cites much more than this), it won’t do to defend the financial sector by pointing to the good it undoubtedly does for certain parties in certain situations. Instead, one needs to think about why the financial industry is so prone to excess and misbehavior. Zingales argues that there is clearly lots of money to be made in duping investors about risks and returns, and in acting behind the scenes in ways that favor those in the financial industry but do not favor their customers. Zingales cites survey evidence that many house-buyers don’t understand what an adjustable-rate mortgage means. It’s clear that many big-time investors didn’t understand very clearly how the LIBOR was determined, which is why insiders were able to manipulate it.

As a clear-headed economist, Zingales also points out that government regulation is often part of the problem here. For example, one ingredient in the toxic stew of the housing price bubble was the behavior of Fannie Mae and Freddie Mac, government-backed agencies that helped sustain the housing bubble–and Fannie and Freddie then needed a $180 billion bailout after the crisis hit. More broadly, fine print is not the answer. It’s not the answer when government seeks to regulate the financial sector with blizzards of fine print. It’s not the answer when government passes requirements that blizzards of fine-print “disclosure statements” be handed to customers in the name of “transparency.” As Zingales argues, simple rules are usually better (again, citations and footnotes omitted):

First, when the possibility of arbitrage and manipulation is considered, the best (most robust) solutions tend to be the simplest ones. … Second, simple rules also facilitate accountability. Complicated rules are difficult to enforce even under the best circumstances, and impossible when their enforcement is the domain of captured agencies. In the context of regulation, however, there is one added benefit of simplicity. Not only does simple regulation reduce lobbying costs and distortions; it also makes it easier for the public to monitor, reducing the amount of capture. Finally, when we factor in the enforcement and lobbying costs, simpler choices, which might have looked inefficient at first, often turn out to be optimal in a broader sense. Thus, we should make an effort to propose simple solutions, which are easier to explain to people and easier to enforce and monitor. For example, a simple way to deal with the problem of unsophisticated investors being duped is to put the liability on the sellers. Just like brokers have to prove that they sold options only to sophisticated buyers, the same should be true for other instruments like double short ETF [exchange-traded funds]. This shift in the liability rule (Caveat Venditor) risks shutting off ordinary people from access to financial services. For this reason, there should be an exemption for some very basic instruments – like fixed rate mortgages and a broad stock market index ETF. 

Readers interested in getting up to speed on these arguments that the financial sector is too big might begin with Zingales’s accessible talk. Some other recent resources on this topic include:

Some International Minimum Wage Comparisons

How do minimum wages around the world compare, and how have they changed in the last few years? The OECD offers an overview in a May 2015 FOCUS report called “Minimum wages after the crisis: Making them pay.”

As a starting point, here’s the minimum wage relative to the median wage in various countries. The blue diamonds show the proportion in 2007; the orange triangles show the proportion in 2013. The minimum wage stayed the same or rose in most countries, with a few exceptions like Ireland, Greece, and Spain. The US would have ranked lowest in the ratio of minimum wage to median wage in 2007 (lowest blue diamond), but ranks third-lowest after the recent increases. Of course, some will see this relatively low level as cause for concern, while others will see it as cause for congratulation. The ratio for Colombia may be misleading because the minimum wage there applies only to workers in the “formal” sector of the economy, and those in the informal sector would on average have lower wages.

As a different metric, how many hours does a person need to work at a minimum wage job before they reach total earnings of half the median income? The calculation here includes an adjustment for payroll and income taxes paid, as well as for other cash benefits being scaled back. The orange bars show the number of hours for a single parent with two children; the blue diamonds show the number of hours for a single-earner, two-parent couple with two children. Of course, estimates of the number of hours a person would need to work at a minimum wage job rest on the assumption that such a job is in fact available.
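The bare arithmetic behind this metric is simple, though the OECD version also nets out taxes and withdrawn cash benefits. Here is a minimal sketch with hypothetical numbers (the wage and income figures below are made up for illustration, not taken from the report):

```python
# Back-of-the-envelope version of the OECD metric: weekly hours of
# minimum-wage work needed for earnings to reach half of the median
# income. All numbers are hypothetical; the OECD calculation also
# adjusts for payroll/income taxes and scaled-back cash benefits,
# which this sketch ignores.

minimum_wage = 9.00            # hypothetical hourly minimum wage
median_weekly_income = 900.0   # hypothetical median income per week

target_earnings = 0.5 * median_weekly_income  # half of median income
hours_needed = target_earnings / minimum_wage

print(hours_needed)  # 50.0 hours per week
```

Raising the minimum wage or lowering the target threshold shrinks `hours_needed` proportionally, which is why the cross-country bars in the chart track the minimum-to-median ratio so closely.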

Finally, what share of workers is actually paid the minimum wage? This information gives a sense of how many workers would be directly affected by altering the minimum wage. The orange bars show the share of workers receiving the minimum wage by country (based on differing data sources), while the blue triangles are a reminder of the minimum wage as a percentage of the median wage in each country. One would expect that countries with a high minimum wage would tend to have more workers receiving the minimum wage, but that pattern doesn’t always hold. For example, the share of workers in Greece and in Portugal being paid the minimum wage is lower than in the US, although the minimum wage in those countries is a higher share of the median wage.

Breakdown of US Corporate Profits

Although the US corporate income tax is an almost continual subject of dispute, most people don’t know any details about what kinds of corporations earn the profits. For example, are the companies earning profits disproportionately large or small? What industries earn the most in profits? How much corporate profit falls outside the corporate income tax, because it is earned by “S corporations” which distribute profits to their owners, instead? The IRS has just published its 2012 Corporation Income Tax Returns Complete Report. The report offers voluminous tables with lots of detail on these questions, but here’s the big-picture overview. (I’ve trimmed down the tables below from the form in which they appear in the report–for example, leaving out data for 2011.)

For starters, how big were corporate profits in 2012, and how do the profits earned and the corporate taxes paid break down by size of firm? Here’s the table from the report. In 2012, the US economy had 5.8 million active corporations. Their total receipts were $29.4 trillion. (This amount is larger than GDP! But remember that in the process of production, companies sell to other companies: for example, a mining company sells iron to the steel company, which sells steel to the car company, which sells the car to a consumer. The price of the car is included in GDP, because it includes all the earlier stages of production. But if you just add up company receipts, you add up the receipts of the mining company, the steel company, and the car company. Of course, in the real world many production processes involve a lot more than three steps.) Total corporate income in 2012 was nearly $1.8 trillion. However, after subtracting out deductions for past losses and other deductions, taxable corporate income was about $1.1 trillion. The tax owed on that amount was $402 billion. However, after subtracting out tax credits–for foreign taxes paid, for holders of tax credit bonds, for qualified electric vehicles, for general business activities, and for prior-year minimum tax–the actual income tax owed was $267 billion.
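The double-counting point in the parenthetical can be made concrete with a toy three-firm supply chain (all prices hypothetical): summing every firm's receipts counts each intermediate sale again, while GDP counts only the value added at each stage, which sums to the final sale.

```python
# Toy supply chain with hypothetical prices: each firm buys inputs from
# the previous stage and sells its output onward. "Total receipts" adds
# up every stage's sales; GDP counts only value added (sale price minus
# input cost), which sums to the price of the final good.

stages = [
    {"firm": "mining", "input_cost": 0,   "sale_price": 100},  # sells iron
    {"firm": "steel",  "input_cost": 100, "sale_price": 250},  # sells steel
    {"firm": "auto",   "input_cost": 250, "sale_price": 600},  # sells the car
]

total_receipts = sum(s["sale_price"] for s in stages)
value_added = sum(s["sale_price"] - s["input_cost"] for s in stages)
final_sale = stages[-1]["sale_price"]

print(total_receipts)  # 950 -- exceeds the economy's output
print(value_added)     # 600 -- sum of value added at each stage...
print(final_sale)      # 600 -- ...equals the final sale counted in GDP
```

The longer the chain of intermediate sales, the larger the gap between summed receipts and GDP, which is how $29.4 trillion in corporate receipts can coexist with a much smaller GDP.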

How do the profits earned and the corporate taxes paid break down by size of company? I’ll just note that the roughly 3,000 largest firms, those with assets of $2.5 billion or more, account for more than half of all corporate receipts (51%), for about two-thirds of all profits before deductions and credits (67%), and for about 70% of all corporate income taxes paid after deductions and credits. Readers can of course play around with the various categories in the table as they like.

What industries paid the most in corporate income tax in 2012? Out of the total of $1.7 trillion in pretax profits (before deductions and credits), about 80% can be traced to four sectors: finance and insurance (29% of total pretax profits), manufacturing (29%), wholesale and retail trade (15%), and management of companies (holding companies) (7%).

Finally, not all corporations pay corporate income tax. Some corporations pass their profits each year to their owners: regulated investment companies do so, and so do “S corporations.” Out of the $1,774 billion in profits before deductions and credits, a full $678 billion (38% of the total) comes in these pass-through forms, which account for a growing share of corporate profits over time.

Global Income Inequality In Decline

One paradox of income inequality in our time is that although the distribution of income has become more unequal within many countries, from a global perspective the distribution of income is becoming more equal. The reason, of course, is that rapid income growth among substantial segments of the population in places like China, India, and even in sub-Saharan Africa will tend to reduce global inequality, at the same time that it increases inequality within these countries. Tomáš Hellebrandt and Paolo Mauro explore these patterns in \”The Future of Worldwide Income Distribution,\” written for the Peterson Institute for International Economics (April 2015, Working Paper 15-7).

They look at a wide array of household-level evidence on the distribution of income in more than 100 countries, and then use various assumptions (essentially assuming that within-country inequality of incomes doesn’t continue to change over time) to project what the distribution of global income will look like in the future. The green line in the figure shows the distribution of global income per person in 2003, with a mean of $3,451 and a median of $1,090; the blue line shows the global distribution of income in 2013, with a mean of $5,375 and a median of $2,010; and the red line shows their forecast for 2035, with a mean of $9,112 and a median of $4,000. The distribution of global income is clearly becoming flatter and more equal over time. Hellebrandt and Mauro write: “Global income inequality started declining significantly at the turn of the century, and we project that this trend will continue for the next two decades, under what we consider the profession’s ‘consensus’ projections for the growth rates of output and population.”

However, it's important not to exaggerate how quickly this reduction in global inequality of incomes will occur. For example, the median income projected for 2035 (with half the world population below that level) is well below the average or mean income for 2013. The authors offer a useful illustration measuring inequality by the 90:10 ratio--that is, the ratio of the 90th percentile of the income distribution to the 10th percentile. (Calculations using the Gini coefficient look much the same.) The global 90:10 ratio shows greater inequality than that of almost any individual country in the world, with the exception of South Africa. Although the global 90:10 ratio falls by 2035, it will still be substantially higher than what any even moderately large economy other than South Africa is currently experiencing.
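For concreteness, the two inequality measures mentioned here are easy to compute directly. The sketch below uses a made-up income sample, not the paper's household data:

```python
def ratio_90_10(incomes):
    """90:10 ratio: income at the 90th percentile divided by income
    at the 10th percentile (nearest-rank percentiles on the sample)."""
    xs = sorted(incomes)
    def pct(p):
        return xs[min(len(xs) - 1, round(p * (len(xs) - 1)))]
    return pct(0.90) / pct(0.10)

def gini(incomes):
    """Gini coefficient via the sorted-index formula:
    G = sum((2i - n + 1) * x_i) / (n * sum(x)), with x sorted ascending."""
    xs = sorted(incomes)
    n = len(xs)
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * sum(xs))

# Hypothetical sample: a geometric spread of incomes is highly unequal
# on both measures; perfectly equal incomes give a Gini of zero.
sample = [500 * 1.05 ** i for i in range(100)]
```

Either measure tells the same broad story on skewed samples like this one, which is why the authors can note that Gini-based calculations "look much the same" as the 90:10 results.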

The figure also offers some comparisons of the current level of income inequality across countries using the 90:10 ratio. The US economy clearly has one of the most unequal distributions of income among the high-income countries, but it is more equal than a number of emerging economies around the world.

The exercise in this paper reminds me of an article by Robert E. Lucas Jr. that appeared in the Winter 2000 issue of the Journal of Economic Perspectives, called "Some Macroeconomics for the 21st Century." (Full disclosure: I have been Managing Editor of JEP since 1987. All JEP articles from the most recent issue back to the first issue are freely available online, compliments of the American Economic Association.) Lucas offers a hypothetical model of growth patterns in the global economy that works like this:

"We begin, then, with an image of the world economy of 1800 as consisting of a number of very poor, stagnant economies, equal in population and in income. Now imagine all of these economies lined up in a row, each behind the kind of mechanical starting gate used at the race track. In the race to industrialize that I am about to describe, though, the gates do not open all at once, the way they do at the track. Instead, at any date t a few of the gates that have not yet opened are selected by some random device. When the bell rings, these gates open and some of the economies that had been stagnant are released and begin to grow. The rest must wait their chances at the next date, t + 1. In any year after 1800, then, the world economy consists of those countries that have not begun to grow, stagnating at the $600 income level, and those countries that began to grow at some date in the past and have been growing ever since. …

"[A]n economy that begins to grow at any date after 1800 grows at a rate equal to α = .02, the growth rate of the leader, plus a term that is proportional to the percentage income gap between itself and the leader. The later a country starts to grow, the larger is this initial income gap, so a later start implies faster initial growth. But a country growing faster than the leader closes the income gap, which by my assumption reduces its growth rate toward .02. Thus, a late entrant to the industrial revolution will eventually have essentially the same income level as the leader, but will never surpass the leader's level."

Lucas acknowledges repeatedly that this model is a very simple one. But he points out that it offers some interesting predictions. It predicts that global inequality of income will at first expand dramatically, as it indeed did from the 19th century well into the 20th century. It predicts that countries which start growing later will experience faster "catch-up" growth, which has held true in a number of countries including Japan, Korea, China, and others. Under the assumptions that Lucas uses, his model predicts that the global rate of economic growth will peak around 1970, at a time when a large share of the world is catching up at a rapid pace. It predicts that during the last few decades of the 20th century, global inequality won't change by much. And it predicts that in the 21st century, as the remaining nations that had not previously entered a period of rapid economic growth start to do so, global inequality of incomes will diminish. If Lucas's model captures the underlying dynamics of the global growth process, and predictions like those of Hellebrandt and Mauro hold up, the 21st century would be a time of rising equality across the global income distribution.

The US as a Debtor Economy

When a country runs a trade deficit and imports more than it exports, as the US economy does, producers in other countries are by definition receiving more in US dollars from selling in the United States than US producers are receiving in foreign currency from exporting abroad. Through the windings of the foreign exchange markets and the international financial system, these US dollars are invested or loaned back into the US economy. In this way, a trade deficit is inevitably accompanied by an inflow of investment capital. The long string of US trade deficits means that, over time, the US has become a debtor economy--one where the total amount that foreigners have invested over time in the US economy is greater than the total amount that US investors hold in the economies of other countries.
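The accounting in this paragraph can be sketched with made-up round numbers (these are not actual US figures):

```python
# One year's trade flows, in hypothetical $billions:
exports, imports = 2_300, 2_800
trade_deficit = imports - exports   # net dollars paid to foreigners

# Balance-of-payments identity: those dollars come back as investment
# or lending, so the net capital inflow matches the trade deficit.
net_capital_inflow = trade_deficit

# The stock counterpart is the net international investment position:
# US-owned assets abroad minus foreign-owned assets in the US. A long
# string of deficits accumulates into a negative position -- a debtor
# economy in the sense defined above.
us_assets_abroad = 24_000
foreign_assets_in_us = 31_000
net_international_investment_position = us_assets_abroad - foreign_assets_in_us
```

The flow identity holds each year; the negative stock position is just the sum of many years of those flows (plus valuation changes, which this sketch ignores).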

Here's a table from the IMF (specifically, an edited version of a table from Chapter 4 of the October 2014 World Economic Outlook) showing the biggest debtor and creditor economies around the world. In absolute size of net foreign liabilities, the US leads the way, although as a share of US GDP, the US position looks less worrisome. The IMF report gives a very rough guideline that when the net foreign liabilities of a country exceed 60% of GDP, there is reason for heightened concern. On this list, Spain and Poland exceed that threshold. The biggest creditor economies, based mostly on running trade surpluses over the years, are who you would expect: Japan, China (along with Hong Kong and Taiwan), Germany, and some big oil exporters.

Here's the evolution of the US net international investment position, as shown by the size of foreign assets and liabilities (from the March 31, 2015, press release by the US Bureau of Economic Analysis). In the last few years, the gap between US assets and liabilities has clearly been widening.

Just to be clear, these are not primarily assets and liabilities of governments--although foreign assets and liabilities of governments are included--but mainly foreign assets and liabilities of private-sector firms and individuals.

Dinah Walker at the Council on Foreign Relations offered a summary of "Foreign Ownership of U.S. Assets" in a brief report on January 21, 2015. For example, foreign investors now own more than half of the total market in US Treasury debt, with a large share of that in "official holdings" by foreign governments.

Foreign holdings of US corporate debt and stocks also show a rising trend.

How much should the position of the US as a debtor economy be a cause for concern? It seems highly improbable that foreign investors will at some point panic and run from the US economy. The IMF report lays out the reasons why in some detail (footnotes and citations omitted):

Among the major debtors, the key exception to the trend of diminishing vulnerability is the unique case of the United States, whose net foreign liability position is projected to deteriorate from 4 percent of world GDP in 2006 to 8.5 percent of world GDP in 2019. Indeed, one of the concerns with growing global imbalances in the mid-2000s was the (admittedly remote) possibility of the U.S. liability position suddenly reaching a tipping point, after which private and public holders of U.S. assets would lose confidence, and the U.S. dollar would lose its reserve currency status. The U.S. net liability position in fact worsened to almost 8 percent of world GDP in 2013, but for a number of reasons, the likelihood that the dollar will lose its reserve currency status seems substantially lower than it did eight years ago. First, projected flow deficits of the United States are now considerably smaller than they were in 2006. Second, the U.S. dollar continues to be the leading transaction currency in foreign exchange markets and a key invoicing currency in international trade. It accounts for a dominant share of all outstanding debt securities issued anywhere in the world and especially of those securities sold outside the issuing country in a currency other than that of the issuer. Third, dollar assets held in central bank reserves are not excessive in relation to central banks' "optimal" currency portfolios. Fourth, at present, the dollar has relatively few competitors, since being a reserve currency requires that a substantial stock of assets be denominated in that currency. Fifth, and perhaps most telling, during the global financial crisis--whose epicenter was the United States--investors rushed for the safety of the U.S. dollar.

But it would be fallacious to assume that because the US debtor position is unlikely to cause a crisis, it should therefore be viewed as an altogether healthy situation. Standard economic theory has long argued that financial capital should tend to flow from high-income countries to low-income countries, because the returns to capital should be higher in the low-income countries where capital can be so scarce. Instead, what seems to be happening in the global economy is that the US is a producer of what investors all around the world regard as safe financial assets: meaning US Treasury debt, government-guaranteed debt backed by mortgages, and corporate debt. Meanwhile, US investors abroad are more likely to make higher-risk, higher-return investments in equities or ownership of foreign assets. Just how this balance of international risk-taking will work out over time is not fully clear. But we do know that when foreign investors own US assets, the returns from those assets will flow to the foreign investors.

The Share of Women and Other Patterns in the 1%

What\’s the share of women in the top 1% of the earnings distribution? How has it been changing over time? Fatih Guvenen, Greg Kaplan, and Jae Song tackle this question in \”The Glass Ceiling and The Paper Floor: Gender Differences among Top Earners, 1981–2012,\” published as NBER Working Paper No. 20560 in October 2014. (NBER working papers are not freely available online, but many readers will have access through library subscriptions.)

Here's a figure showing the share of women in the top 0.1% and the next 0.9% of the earnings distribution from 1981-2012. The dashed lines show the data for a given one-year period, with the darker line showing the top 0.1% and the lighter line showing the next 0.9%. The solid lines show the data averaged over a five-year period. The solid lines fall below the dashed lines, which suggests that women are less likely than men to sustain a position in the top 0.1% or the next 0.9% over time, but the difference is not large. The trend is clearly upward. For example, based on one-year data, women were about 6% of the top 0.1% in earnings back in 1982, and were about 18% of this top 0.1% by 2012. As the authors write: "The glass ceiling is still there, but it is thinner than it was three decades ago."

A different way of slicing up the same data is to look at the proportion of men to women in each group at any given time (basically, this is the reciprocal of the figure above).

The authors explore this data in considerably more detail. Here are some other facts about the study and highlights that caught my eye.

1) Many studies of the top of the earnings or income distribution look at tax data. In contrast, this study uses data on earnings as reported to the Social Security Administration: more technically, a representative 10% sample drawn from the Master Earnings File at the SSA. In the data, people are identified only by anonymous numbers, but those numbers make it possible to track the same people over time--which is why the figure above can show average income over five years. The Social Security data also includes the industry of the employer. The data is limited to people between the ages of 25 and 60 who had annual earnings of at least $1,885, measured in inflation-adjusted 2012 dollars. The measure of earnings here includes wages, bonuses, and any stock options that are cashed in.

2) How much in earnings does it take to be in the top 1% or the top 0.1%? In 2012, it took $291,000 to reach the top 1% of earnings, and $1,018,000 to reach the top 0.1%. Interestingly, these thresholds for the top income levels rose sharply in the 1980s and the 1990s, but have been pretty steady since then.

3) Along with the "glass ceiling" that limits women from entering the highest-paying jobs, there is also discussion in this literature of a "paper floor"--the pattern that women who are highly paid in one year are more likely than men to drop out of the top earnings group in the following year. However, the "paper floor" seems to have changed over time. The authors write:

This high tendency for top earners to fall out of the top earnings groups was particularly stark for females in the 1980s — a phenomenon we refer to as the paper floor. But the persistence of top earning females has dramatically increased in the last 30 years, so that today the paper floor has been largely mended. Whereas female top earners were once around twice as likely as men to drop out of the top earning groups, today they are no more likely than men to do so. Moreover, this change is not simply due to females being more equally represented in the upper parts of the top percentiles; the same paper floor existed for the top percentiles of the female earnings distribution, but this paper floor has also largely disappeared.

4) The reason behind more women moving into the top earnings group can be traced to younger cohorts of women having more members in that group--not to increases in earnings for the older cohorts of women workers.

Entry of new cohorts, rather than changes within existing cohorts, account for most of the increase in the share of females among top earners. These new cohorts of females are making inroads into the top 1 percent earlier in their life cycles than previous cohorts. If this trend continues, and if these younger cohorts exhibit the same trajectory as existing cohorts in terms of the share of females among top earners, then we might expect to see further increases in the share of females in the overall top 1 percent in coming years. However, this is not true for the top 0.1 percent. At the very top of the distribution, young females have not made big strides: the share of females among the top 0.1 percent of young people in recent cohorts is no larger than the corresponding share of females among the top 0.1 percent of young people in older cohorts.

5) The persistence of earners who are in the top 0.1% for successive years seems to be increasing, not decreasing.

Throughout the 1980s and 1990s, the probability that a male in the top 0.1 percent was still in the top 0.1 percent one year later remained at around 45%, but by 2011 this probability had increased to 57%. When combined with our finding that the share of earnings accruing to the top 0.1 percent has leveled off since 2000, this implies a striking observation about the nature of top earnings inequality: despite the total share of earnings accruing to the top percentiles remaining relatively constant in the last decade, these earnings are being spread among a decreasing share of the overall population. Top earner status is thus becoming more persistent, with the top 0.1 percent slowly becoming a more entrenched subset of the population.

6) The industry mix of those at the very top is shifting toward finance.

[R]egarding the characteristics of top earners, we find that the dominance of the finance and insurance industry is staggering, for both males and females: in 2012, finance and insurance accounted for around one-third of workers in the top 0.1 percent. However, this was not the case 30 years ago, when the health care industry accounted for the largest share of the top 0.1 percent. Since then, top earning health care workers have dropped to the second 0.9 percent where, along with workers in finance and insurance, they have replaced workers in manufacturing, whose share of this group has dropped by roughly half.

Stiffness in the Joints of the US Economy

The US economy has considerable strengths: its enormous internal market, its well-educated population, its scientific and research establishment, its physical and legal infrastructure, and more. In addition, the US economy has been known for its flexibility in creating new companies and jobs. But by measures of that flexibility, problems have been mounting for a couple of decades. I just recently discovered that the Business Dynamics Statistics program at the U.S. Census Bureau has a handy-dandy figure and chart generator, which I recommend playing with.

For those not familiar with it, the BDS is a longitudinal database of business establishments and firms going back to 1976. \”Longitudinal\” means that it covers the same firms over time, so a researcher can investigate patterns of firms that have just started, or firms that are five years old, and so on. It looks at the rates at which firms start up and shut down. And it looks at whether firms of different sizes and ages are adding jobs or subtracting jobs.

One pattern that comes out of the data is a long-run decline in the birth rate of firms in the US economy. The figure below shows the latest BDS data, released in September 2014 and covering the years up through 2012. (The numbers on the vertical axis can be read as percentages: that is, the number of firms with age zero in 2012 was 11% of the total number of firms in 2012.) As I've pointed out in earlier posts on "The Decline of US Entrepreneurship" (August 4, 2014) and "New Business Establishments: The Shift to Existing Firms" (August 26, 2014), the birth rate of firms fell further during the Great Recession, but it had been trending down for a couple of decades before that.

One sometimes hears a claim from politicians that we are now seeing, for the first time, the exit rate for firms exceeding the entry rate. It's true that the exit rate of firms exceeded the birth rate for several years during the Great Recession, and that this had not happened during the previous two recessions. However, it did happen for a year back during the 1981-82 recession. And the birth rate was again exceeding the exit rate by 2012. Of course, these caveats do not contradict the overall pattern that the birth rate of firms has been declining over time.

Not surprisingly, as the birth rate of new firms has gradually declined, so has the job creation rate. The BDS data illustrates this point. The "job creation rate" sums up the number of jobs added at all establishments that added to their total number of jobs in a given year, and divides by total jobs. Conversely, the "job destruction rate" adds up the number of jobs subtracted at all establishments that reduced their number of jobs in a year, and divides by the total number of jobs. Notice that this method of counting "job creation" and "job destruction" is an underestimate of the amount of churn in the US labor market: if a certain establishment shuts down a bunch of jobs in one area, but opens up an equivalent number of jobs in a completely different area, then it is unlikely the workers will be able to transfer from one area to another--but from the standpoint of the BDS data, this establishment neither created new jobs nor destroyed existing ones.
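The arithmetic of these two rates can be sketched directly; the establishment-level numbers below are made up for illustration, not BDS figures:

```python
# Hypothetical one-year employment changes at five establishments:
changes = {
    "est_a": +40,   # added 40 jobs this year
    "est_b": -25,   # cut 25 jobs
    "est_c": +10,
    "est_d": 0,     # e.g. shut down 20 jobs in one place and opened 20
                    # elsewhere: nets to zero, so it counts as neither
                    # creation nor destruction -- the understatement
                    # described above
    "est_e": -15,
}
total_jobs = 500    # total employment, the denominator for both rates

created = sum(c for c in changes.values() if c > 0)     # only the adders
destroyed = -sum(c for c in changes.values() if c < 0)  # only the cutters

job_creation_rate = created / total_jobs
job_destruction_rate = destroyed / total_jobs
```

Here creation is 50/500 (10%) and destruction is 40/500 (8%); the relocated jobs inside `est_d` never appear in either rate, which is exactly why the BDS measure understates churn.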

Even by this understated measure of job churn, the number of new jobs created each year in the late 1970s and into the mid-1980s was often about 20% of the total number of jobs. But this share has sagged over time, and during the Great Recession job creation fell to barely 10% of existing jobs in a given year. Meanwhile, the rate of job destruction hovers around 15% of existing jobs in a given year--a little higher or lower depending on the state of the macroeconomy.

Research using the BDS data shows that new firms play an outsized role in creating new jobs, both in the first year of the new firms, and also–even after attrition–in the few new firms that really take off as job creators. Looking ahead, part of the evaluation process for every new rule affecting business should take a look at how it affects incentives to start new firms and to hire. The US economy can\’t afford to take its entrepreneurism and labor market flexibility for granted.

The VAT and Consumption Taxes: International Comparisons

Economists tend to believe that taxing consumption makes more sense than taxing income. When you tax people on income, and then tax them on the return earned from saving, you discourage saving and investment. After all, income that is saved, along with any return on that income, will be consumed eventually. But the US relies much less on consumption taxes than any other high-income country.

Consumption taxes come in two main forms. One is a value-added tax, which in economic terms is essentially similar to a sales tax. The functional difference is that a sales tax is collected from consumers at the point of purchase, while a value-added tax is collected from producers according to the value they add along the production chain. The value-added tax thus raises the prices paid by consumers in the same way as a sales tax. The other form of consumption tax is a tax on specific kinds of consumption, and in most countries the biggest such taxes are those on energy (especially oil), alcohol, and tobacco. The OECD lays out the evidence in its December 2014 report, "The Distributional Effects of Consumption Taxes in OECD Countries."

The US is at rock-bottom when it comes to consumption taxes. That's in part because, unlike 160 other countries around the world and all of the other countries in this figure, the US doesn't have a value-added tax. In addition, US taxes on gasoline are lower than in most of these other countries. The OECD average in this figure shows that these countries as a group get about one-third of their tax revenue from consumption taxes; the US gets about half that proportion. A standard VAT rate in countries like Germany, France, Italy, and the United Kingdom is about 20%.

Perhaps the main concern with a value-added tax is that it is regressive: that is, because those with low incomes tend to consume a higher share of their income than those with higher incomes, those with lower incomes pay the VAT on a higher share of their income, too. Proponents of a VAT point out that this perspective is short-term, and that when viewed over a lifetime, a VAT will look somewhat more egalitarian. But even in the perspective of one or a few years, there are three ways to keep a consumption tax from being overly regressive--although only two of them work especially well.
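The regressivity point is just arithmetic: a uniform VAT takes a larger share of income from households that consume most of what they earn. A minimal sketch, with an illustrative 20% rate and made-up households:

```python
VAT_RATE = 0.20   # illustrative flat VAT applied to all consumption

def vat_burden_share_of_income(income, saving):
    """VAT paid as a share of income, for a household that consumes
    whatever it does not save."""
    consumption = income - saving
    return VAT_RATE * consumption / income

# The lower-income household saves little, so nearly all its income
# passes through consumption and gets taxed:
low = vat_burden_share_of_income(20_000, 1_000)      # 19% of income
high = vat_burden_share_of_income(200_000, 80_000)   # 12% of income
```

Both households face the same 20% rate on consumption, but as a share of income the burden falls as income rises, which is the regressivity that exemptions and rebates try to offset.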

The most common method that seeks to prevent a consumption tax from being regressive is to exempt certain products from the tax. Most countries have a lower VAT rate on food products, and on certain other products, often including health care, education, newspapers and books, restaurant meals, hotels, water supply, children\’s clothing, and others. The main problem with this approach is one that also arises for sales taxes in US states, which often have a similar list of exemptions. In an attempt to help those who are poor–who are perhaps 10-20% of the population in these countries–the VAT is reduced for everyone. If the goal is to help the poor, it works much better to, well, help the poor by offering income rebates or food stamps or other income-tested programs. The OECD report sums up the problem this way:

[M]ost, if not all of the reduced VAT rates that are introduced for the distinct purpose of supporting the poor--such as reduced rates on food, water supply and energy products--do have the desired progressive effect. However, despite this progressive effect, these reduced VAT rates are still shown to be a very poor tool for targeting support to poor households: at best, rich households receive as much aggregate benefit from a reduced VAT rate as do poor households; at worst, rich households benefit vastly more in aggregate terms than poor households. Furthermore, reduced rates introduced to address social, cultural and other non-distributional goals--such as reduced rates on books, restaurant food and hotel accommodation--often provide so large a benefit to rich households that the reduced VAT rate actually has a regressive effect.

Another approach to the regressive nature of a VAT is to focus not on the tax side, but on the spending side. Previous research by the OECD suggests that precisely because the US relies less than other countries on consumption taxes, the US tax code is already highly progressive compared to most other countries. However, patterns of US government spending are aimed less at the poor than in other countries, so the overall result is that the US redistributes less than other countries. A Congressional Budget Office report a few years ago pointed out that while the amount of redistribution happening through the federal tax code hasn't changed much over time, the amount of redistribution happening through federal spending has been declining. In other words, perhaps the goal should be to worry less about focusing taxes on those with high incomes, and instead to work on focusing government spending on those with lower incomes.

Finally, it\’s quite possible to design a consumption tax that is progressive. It looks a lot like an income tax, except that instead of taxing people based on their income, you tax them on income minus saving–which is the definition of consumption. It would actually be fairly straightforward for the US to turn its income tax into a full consumption tax. It\’s already true that a high proportion of saving–certain retirement accounts, increases in home equity, capital gains–isn\’t taxed when the gains occur. The US would have a consumption tax if all saving was deducted from income before taxes were applied.
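The mechanics described here are simple: a consumed-income tax applies its rate to income minus saving. A minimal sketch, with a hypothetical household and a made-up flat rate standing in for a real rate schedule:

```python
def consumption_tax(income, saving, rate=0.20):
    """Tax on consumed income: the base is income minus saving,
    which by definition equals consumption."""
    base = income - saving
    return rate * base

# Hypothetical household: $80,000 earned, $12,000 saved in deductible
# forms (retirement accounts, added home equity, and so on), so only
# the $68,000 that is consumed gets taxed.
tax = consumption_tax(80_000, 12_000)
```

A progressive version would simply run the same $68,000 base through graduated brackets instead of a flat rate, which is why such a tax can look a lot like the existing income tax.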

However, an outright VAT for the US seems highly unlikely. The last time the subject was even broached, back in April 2010, the US Senate quickly bestirred itself for an 85-13 nonbinding resolution that the US should not adopt a value-added tax. Economists have an old joke about a value-added tax which goes like this: "Democrats oppose a VAT because it's regressive and Republicans oppose a VAT because it's potentially a money machine for government. However, the US will enact a VAT as soon as Republicans recognize that it's regressive and Democrats recognize that it's a money machine for the government."

For more discussion of consumption taxation, with some emphasis on the US exception, a useful starting point is an article in the Winter 2007 issue of the Journal of Economic Perspectives by James R. Hines, "Taxing Consumption and Other Sins" (21:1, pp. 49-68).


The Status of Online Learning in Higher Education

Everyone knows that online technologies have the potential to disrupt existing arrangements in higher education, perhaps in extreme ways. But how far has the disruption proceeded? And what are the main barriers ahead?

I. Elaine Allen and Jeff Seaman have been doing annual surveys on these issues for 12 years. Their latest is "Grade Level: Tracking Online Education in the United States," published by the Babson Survey Research Group and Quahog Research Group, LLC. (Accessing the report may require free registration.) As Allen and Seaman point out, the "National Center for Education Statistics' Integrated Postsecondary Education Data System (IPEDS) added 'distance' education to the wealth of other data that they collect and report on US higher education institutions." Allen and Seaman have been collecting data from a sample of over 600 colleges and universities, but the IPEDS data is mandatory for all institutions of higher education. I recommend the full report, but here are a few results that caught my eye.

Allen and Seaman provide evidence that over 70% of degree-granting institutions, and over 95% of those with the largest enrollments, now have distance-learning online options. Many of these institutions say that distance learning is crucial to the future of their institutions. Over 5 million students are currently taking at least one distance-learning class, although the rate of growth of students taking such classes seems to have slowed in recent years. On the other hand, faculty seem increasingly leery of online education, and many of them do not seem especially willing to embrace it. Many students are finding that completing online courses on their own is difficult. Skepticism about MOOCs, or \”massively open online courses,\” is on the rise.

In the BSRG survey, chief academic officers are asked for their own perceptions of how learning outcomes differ for online learning and face-to-face classes. The share that think online learning has inferior outcomes is falling. Most answer \”same,\” and a growing share–approaching 20%–says that online learning outcomes are superior.

These same chief academic officers are more likely to favor "blended" courses, with elements of online learning but some face-to-face component, over purely online courses. In the graph below, the light blue bar on the bottom is equal for both bars, because it shows the share of those who say that learning outcomes are the same in online and blended courses. Those who think one or the other is superior are then shown in the gray and orange bars on top.

However, it\’s worth noting that \”perceptions of the relative quality of both online and blended instruction have shown the same small decline for each of the past two years.\” Some of the issue seems to be that faculty members are often not supportive of online learning.

Another problem is the concern that students taking such courses often don't finish, and need additional support or self-discipline to make it through.

What about the most extreme version of online education, the MOOCs or massively open online courses? Here, the bloom seems to be off the rose. \”The portion of academic leaders saying that they do not believe MOOCs are sustainable increased from 26.2% in 2012 to 28.5% in 2013, to 50.8% in 2014. A majority of all academic leaders now state that they do not think the MOOCs are a sustainable method for offering courses.\”

From the viewpoint of chief academic officers, about 8% of their institutions offer a MOOC, and an additional 5% are planning to offer one. But the main reasons given for offering a MOOC are things like increasing institution visibility, driving student enrollment, and experimenting with new pedagogy. In a way, MOOCs are being treated as loss-leaders.

Perhaps there is a chicken-and-egg problem here: if more faculty embraced on-line learning and MOOCs, then they would work for more students. But maybe the problems of online education and MOOCs are in some ways embedded, and the issue is how to create a hybrid structure that builds on what online education can do well, without pretending that (at least in the current state of artificial intelligence) higher education can be automated.

After all, there\’s been a primitive version of a massively open course available for quite some time now. It\’s called a textbook, or a public library. Students could in theory learn everything they need in this way, but few have the energy and directedness and stamina to do so on their own. Perhaps the online version of courses in higher education will be so pedagogically wonderful that lots of students who couldn\’t learn the material on their own from a textbook or a library will be able to do so, but I\’m dubious. The challenge for higher education will be how to combine what online learning can do well (presenting examples in livelier and different ways, repetition, limited but immediate feedback) with the strengths of the human touch, which includes support from other students, along with a mixture of teaching assistants and faculty members. I\’m sure there\’s no one all-purpose formula. But some formulas will work better than others.