Factoryless Goods Producing Firms

Andrew B. Bernard and Teresa C. Fort sketch what is known about the “Factoryless Goods Producing Firm” in the May 2015 issue of the American Economic Review: Papers and Proceedings (vol. 105:5, pp. 518-523). The AER is not freely available on-line, but many readers will have access through a library subscription. Succumbing to acronyms, Bernard and Fort write: “We define a FGPF as a firm that has no manufacturing establishments in the United States, but performs pre-production activities such as design and engineering itself and is involved in production activities, either directly or through purchases of contract manufacturing services (CMS).”

Want examples? Here are three:

Perhaps the canonical example of a factoryless goods producer is the British appliance firm, Dyson, best known for its innovative vacuum cleaners. The firm initially designed, engineered, and produced vacuum cleaners in Wiltshire, England, but subsequently chose to offshore and outsource all the production to Malaysia, while leaving several hundred research and other employees in the United Kingdom. Dyson’s more recent innovations in product lines such as hand dryers and fans have never been produced in the United Kingdom or by Dyson itself.

The best-known example of a factoryless goods producer is Apple Inc. Apple designs, engineers, develops, and sells consumer electronics, software, and computers. For the vast majority of its products, including iPhones, iPads, and MacBooks, Apple does none of the production and the actual manufacturing is performed by other firms in China and elsewhere. While Apple is known for its goods and services and closely controls all aspects of a product, almost none of Apple’s US establishments would be in the manufacturing sector.

The semiconductor industry is well-known to have factoryless goods producers in the form of “fabless” firms. Mindspeed Technologies, a fabless semiconductor manufacturer in Newport Beach, CA “designs, develops, and sells semiconductor solutions for communications applications in wireline and wireless network infrastructure equipment.” Mindspeed outsources all semiconductor manufacturing to other merchant foundries, such as TSMC, Samsung, and others. Mindspeed’s establishments would not be in the manufacturing sector.

How prominent are factoryless goods producing firms in the US economy, and how much have they expanded over time? By definition, you don’t find these firms in the manufacturing sector of the economy. Bernard and Fort look at statistics on the wholesale trade sector of the economy. As background, wholesale trade is about 6% of US GDP when measured in value-added terms, which is about half the size of the manufacturing sector, or half the size of the professional and business services sector. Here are a few facts from Bernard and Fort about factoryless goods producing firms:

  • In 2007, the total number of factoryless goods producing firms was 13,500, and these firms employed 672,000 workers.
  • Industries where factoryless goods producing firms tend to focus include electrical machinery and equipment, machine and mechanical appliances and computers, pharmaceuticals, and apparel. 
  • Compared to other firms in the wholesale industry, the factoryless goods producing firms tend to be larger and to pay higher wages. 
  • If you go back to 1992, and look at the factoryless goods producing firms of that time, you find that many of them began manufacturing in the US at some point. Indeed, “it is likely that the current set of FGPFs are a mix of different types of firms including former manufacturing firms, new firms created as FGPFs from their inception, and other firms that have made the transition to the design and manufacture of products. More work is needed to understand the evolution of FGPFs over time.”
  • The imports of factoryless goods producing firms are equal to about 38% of their total sales. Thus, a majority of money spent at such firms ends up flowing to non-manufacturing inputs from the US economy.

The growth of factoryless goods producing firms may have effects on wages, employment, and productivity. It’s a phenomenon worth understanding.

Full disclosure: The AER is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.

The Shifting Geographical Center of US Population

Maybe this is the kind of factoid that is only of interest to me, but the US Bureau of the Census calculates the “mean center of population for the United States”–that is, if you average the locations where everyone lives in the US, what’s the average location?

Back in 1790, the average location of the population was near Washington, DC. Unsurprisingly, over time the center of the population moved west, as additional western states were added.

I found myself mildly surprised by three facts about the movement of the average location of an American over the last couple of centuries.

1) I’m surprised that the center of the US population was already in Maryland in 1790. I would have thought that with a substantial share of the population in Philadelphia and New York, as well as Boston and New England, the central location in 1790 would have been further north.
2) I’m surprised that the movement has continued at such a steady pace in recent decades.
3) I’m surprised that the average location of the population has reached the middle of Missouri, apparently headed for Oklahoma in another couple of decades.

China and India Overtake Mexico for Inflow of Foreign-Born US Residents

During my adult life, the main source of immigration to the U.S. has always been Mexico. Thus, I was surprised to see that in 2013, immigration from China and India exceeded that from Mexico. The data comes from analysts at the US Census Bureau, Eric B. Jensen, Anthony Knapp, C. Peter Borsella, and Kathleen Nestor, and was presented at a recent conference under the title, “The Place-of-Birth Composition of Immigrants to the United States: 2000 to 2013.”

Here’s a takeaway figure. It’s a measure of those who are foreign-born, and who were living outside the US a year ago–in other words, it’s a measure of migration to the US in the previous year.

As I have noted in the past, immigration from Mexico has dropped off substantially in the last few years. Indeed, a few years ago, when the U.S. unemployment rate was still elevated in the aftermath of the Great Recession, net migration from Mexico to the U.S.–that is, new arrivals minus departures–may have been slightly negative. Over the last decade or so, a combination of stronger enforcement at the border, a gradually stronger economy in Mexico, and fewer children per woman in Mexico has meant fewer young people on the move looking for work.

The data here comes from the American Community Survey, which asks people whether they were living in another country the year before. It does not ask whether they are legal or illegal immigrants. If undocumented immigrants are less likely to answer such surveys, they will be undercounted. While that point surely has some truth, it’s also true that the ACS is often used together with a variety of other measures (including US and Mexican Census data) as part of inferring the number of undocumented immigrants. Precisely because the survey does not ask about legal or illegal immigration status, many undocumented migrants do seem to fill it out accurately.

The Census Bureau economists also offer some interesting observations about the demography of these migrant groups.

Concerning China: “Initially, immigrants born in China were concentrated around the 20-29 age group. Also, there was a relatively high percentage of females relative to males in the 0-4 age group. More recently, the age structure is concentrated around the college-aged populations, with the largest percentage point increases in the 15-19 and 20-24 age groups for both males and females.”

Concerning Mexico: “The age distribution of immigrants born in Mexico became older between the 2005-2007 and 2011-2013 periods. The percentage of males and females in the 0-4, 15-19 and 20-24 age groups declined. The percentage in the 40-44, 55-59, and 65 and over age groups increased for both males and females.”

Concerning India: “Most immigrants born in India are concentrated in ages 20-34, with the largest percentage in the 25-29 age bracket. This is consistent for both males and females.”

The Rise of Remittances

The total number of migrants working in other countries worldwide is now approaching 250 million. Many of them send money home, and remittances are on their way to becoming an important part of the global financial system. The World Bank Migration and Remittances Team, Development Prospects Group, offers an overview in its Migration and Development Brief of April 13, 2015.

Here’s a pattern showing the rise in remittances over time compared to some other international financial flows. Back in 1990, international remittances were lower than official development assistance (ODA). Flows of foreign direct investment (FDI) to developing countries were also smaller than ODA, as were flows of private debt and portfolio equity to developing countries. (The FDI flows to developing countries shown here exclude China.) Remittances have been larger than development assistance for some years now, and the gap is growing. Perhaps more surprising, remittances have also outstripped debt and portfolio equity flows to developing countries in recent years. The flows of remittances also look quite stable compared to other private-sector capital flows.

As one might expect, the absolute amount of remittances is biggest for some of the larger economies in the developing world, but expressed as a share of GDP, remittances are largest for some smaller economies.

Remittances are important for their sheer size, and because they offer a way for extended households and kinship networks in low-income countries to help themselves in a direct way. They are also going to be a subject of policy.

For example, the World Bank estimates that at present, transferring $200 to a recipient country incurs, on average, a fee of about 8%. It seems plausible that technology and the growth of these markets should be able to reduce this cost substantially. If the cost of transfers could be cut to 3%, the OECD estimates that it would benefit the recipients of these remittances by $20 billion per year. The key tension here is that making international flows of capital cheaper and easier comes into conflict with the policy agenda of avoiding money-laundering and cutting off financial assistance to terrorist groups.
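As a back-of-the-envelope check on these numbers, here is the fee arithmetic spelled out. All of the inputs (the $200 transfer, the 8% and 3% fee rates, and the $20 billion annual gain) come from the estimates quoted above; the last step simply works out the total remittance flow those figures imply.

```python
# Back-of-envelope check of the remittance-fee arithmetic: an 8% average
# fee on a $200 transfer, a 3% target rate, and the estimated $20 billion
# per year gain to recipients. All input figures are from the text.

transfer = 200.0
current_fee, target_fee = 0.08, 0.03

fee_now = transfer * current_fee         # fee on a $200 transfer today
fee_target = transfer * target_fee       # fee at the 3% target
saving_per_transfer = fee_now - fee_target

print(f"Fee today: ${fee_now:.2f}; at 3%: ${fee_target:.2f}; "
      f"extra reaching the recipient per $200: ${saving_per_transfer:.2f}")

# The $20 billion annual gain, at a 5 percentage point fee reduction,
# implies total annual remittance flows of roughly:
implied_flows = 20e9 / (current_fee - target_fee)
print(f"Implied annual remittance flows: ${implied_flows/1e9:.0f} billion")
```

So every $200 sent home would deliver an extra $10 to the recipient, and the $20 billion aggregate figure implies the estimate was applied to remittance flows on the order of $400 billion a year.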

Another possibility is that low-income countries will issue bonds in their own currencies, hoping to attract investment from those with remittances to send. The World Bank researchers explain: “A diaspora bond – a low denomination security with a face value of $1,000, say, carrying a 3-4% interest rate and 5-year maturity – issued by a country of origin could be attractive to migrant workers who currently earn near-zero interest on deposits held in host-country banks. Diaspora bonds could be used to mobilize a fraction – say, one-tenth – of the annual diaspora saving, that is, over $50 billion, for financing development projects.” India and Israel have already taken steps along these lines, but many low-income countries around the world could give it a try.
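To make the quoted numbers concrete, here is what the World Bank's illustrative bond would pay, and what the "one-tenth equals $50 billion" claim implies about total diaspora saving. The 3.5% coupon is my own midpoint of the quoted 3-4% range; everything else is from the passage above.

```python
# Sketch of the diaspora-bond figures quoted from the World Bank brief:
# a $1,000 face value, a 3-4% interest rate (midpoint assumed here), and
# a 5-year maturity, plus the claim that one-tenth of annual diaspora
# saving would exceed $50 billion.

face_value = 1_000
coupon_rate = 0.035        # assumed midpoint of the quoted 3-4% range
maturity_years = 5

annual_coupon = face_value * coupon_rate
total_interest = annual_coupon * maturity_years
print(f"Annual coupon: ${annual_coupon:.2f}; "
      f"interest over {maturity_years} years: ${total_interest:.2f}")

# If $50 billion is one-tenth of annual diaspora saving, the total is:
implied_saving = 50e9 / 0.10
print(f"Implied annual diaspora saving: ${implied_saving/1e9:.0f} billion")
```

Modest per-bond payouts, in other words, but resting on a very large pool: the arithmetic implies annual diaspora saving of at least $500 billion.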

A more complex step is to use the expected inflow of future remittances as collateral, in what is known as a “future-flow securitization of remittances”–which can then lead to lower borrowing costs or longer borrowing terms for governments of developing countries.

For a more detailed overview of the economics of this subject, I recommend Dean Yang’s article on “Migrant Remittances” in the Spring 2011 issue of the Journal of Economic Perspectives. (Full disclosure: I’ve labored in the fields as Managing Editor of JEP since the first issue in 1987.)

Geoengineering: Forced Upon Us?

Say that you are someone who believes strongly that human-driven emissions of carbon and other greenhouse gases are leading to substantial climate change. You believe that the world probably needed to start acting aggressively to confront this issue back in 1992 when the United Nations Framework Convention on Climate Change was announced, and certainly after the Kyoto Protocol was signed in 1997 and went into force in 2005. You are deeply worried that the world may have already blown past a number of warning signs about climate change, and that time for taking meaningful action has become uncomfortably slim. If you are that person, you need to be thinking seriously about geoengineering–that is, taking steps to deliberately alter the earth’s climate to counteract the effects of climate change.

Gernot Wagner and Martin L. Weitzman go through the arguments in “Climate Shock,” an article in the Milken Institute Review (2015, Second Quarter, pp. 55-69). The article is based on a chapter of their just-published book Climate Shock: The Economic Consequences of a Warmer Planet. Here’s a sample:

We may hate the idea of countering amazing amounts of pollution with yet more pollution of a different type. But the option is simply too cheap to ignore. It’s not like anyone would literally mimic Mount Pinatubo by pumping 20 million tons of sulfur dioxide into the stratosphere. At the very least, given current technology and knowledge, the sulfur would likely be delivered in the form of sulfuric acid vapor. Sooner rather than later, we may be looking at particles specifically engineered to reflect as much solar radiation back into space as possible, maximizing the leverage.

It may only take a fleet of a few dozen planes flying 24/7 to deliver the desired amount. Some have gone as far as to calculate how many Gulfstream G650 jets it would take to haul the necessary materials. But such specifics are indeed too specific. What matters is that the total costs would apparently be low compared to both the damage carbon dioxide causes and the cost of avoiding that damage by reducing carbon emissions. 

Estimates are all over the place, but most put the direct engineering costs of getting temperatures back down to pre-industrial levels on the order of $1-to-$10 billion a year. Now, $1-to-$10 billion is not nothing, but it’s well within the reach of many countries and maybe even the odd billionaire. If a ton of carbon dioxide emitted today generates $40 in damage, we are talking fractions of a penny for the sulfur to offset it. …

Geoengineering is too cheap to dismiss as a fringe strategy developed by sinister scientists looking for attention and grant money, as some pundits would have it. If anything, it’s the most experienced climate scientists who take the issue most seriously. And not because they want to. …Pick your favorite analogy. It’s like chemotherapy or a tracheostomy for the planet: a last-ditch effort to do what prevention failed to accomplish. … As always, it’s a matter of trade-offs. Climate change itself will have plenty of unsavory side effects. The question, then, is not whether geoengineering alone could wreak havoc. (It could.) The question is whether climate change plus geoengineering is better or worse than unmitigated climate change.
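The "too cheap to ignore" claim in the excerpt can be sanity-checked with a rough calculation. The $1-to-$10 billion engineering cost and the $40-per-ton damage figure come from the excerpt; the figure of roughly 36 billion tons of annual global CO2 emissions is my own round assumption, not a number from the article.

```python
# Rough comparison behind the "too cheap to ignore" claim: the article's
# $1-10 billion/year geoengineering cost against the implied annual
# damage from CO2 emissions at $40/ton. The ~36 billion tons of global
# CO2 emissions per year is an assumed round number, not from the text.

damage_per_ton = 40.0          # dollars of damage per ton of CO2 (from text)
annual_emissions_tons = 36e9   # assumed global CO2 emissions per year
geoengineering_cost = 10e9     # upper end of the article's cost range

annual_damage = damage_per_ton * annual_emissions_tons
print(f"Implied annual damage: ${annual_damage/1e12:.2f} trillion")
print(f"Geoengineering cost as a share of that damage: "
      f"{geoengineering_cost/annual_damage:.2%}")
```

Under these assumptions, even the high end of the geoengineering cost range is well under 1% of the annual damages implied by the $40-per-ton figure, which is the trade-off Wagner and Weitzman are pointing at.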

Wagner and Weitzman go on to discuss a variety of possible methods of geoengineering:
putting sulfur particles in the atmosphere; ships that spray water vapor high into the sky to generate more cloud cover; painting all roofs a more reflective white; dumping plant nutrients (like iron) into the ocean so that the resulting plants will absorb more carbon; and others.

For myself, I’m uncomfortably aware that I don’t know much about the details of climate modeling. It does seem clear that a healthy majority of those who work in the area and are represented in the Intergovernmental Panel on Climate Change (IPCC) reports are concerned about the risks of climate change, so from an economic viewpoint, my usual attitude is to treat the risk as real and important–but to focus on the economic problem of how to reduce those risks in cost-effective ways. For some earlier posts bearing on these issues, see “Climate Change Strategies (Including Mangroves)” (December 4, 2012), “Setting a Carbon Price: What’s Known, What’s Not” (June 25, 2013), “Short-Term Benefits of Climate Change Policy” (September 22, 2014), “Carbon Capture and Storage: An Update” (December 24, 2013), “Other Air Pollutants: Soot and Methane” (June 28, 2012), and “Should the U.S. Government Cost-Benefit Analysis Look Outside the U.S.?” (June 13, 2014).

In my reading, even though the most recent IPCC report comes out with largely the same bottom line–that climate change is a serious problem needing a substantial policy response in both the near-term and the long-term–the most recent report makes its arguments in a tone of less certainty than earlier reports. As one example, the most recent IPCC report has a highlighted discussion in a box in the first chapter acknowledging that the temperature rise from 1998 to 2012 was much less steep than the earlier trend, and less than predicted (see Box 1.1 on p. 43 of the report). The report discusses various reasons why this might have occurred, with an emphasis that additional research is needed here: for example, one possibility is that volcanoes put more sulfur into the air than expected, a form of natural geoengineering that had a cooling effect; that El Niño warmed up the globe above trend in the late 1990s, making the temperature rise in the 1990s look unexpectedly rapid, and the rise since then correspondingly slower; or that the oceans trapped more heat than the models had predicted.

For me, the risks of any actual efforts at geoengineering seem too high at present. But of course, this is another way of saying that I think the risks of climate change are not immediate or severe enough to be worth the risks of geoengineering. But as I noted at the start, if you believe that the risks of climate change are large and near-term–and moreover, if you have observed how difficult it seems to be for the world to take action to reduce carbon emissions substantially–then you should be looking at geoengineering very closely, even if you hate the idea of needing to do so.

Spring 2015 Journal of Economic Perspectives On-line

Since 1986, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which several years back made the decision–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. The journal’s website is here. I’ll start with the Table of Contents for the just-released Spring 2015 issue. Below are abstracts and direct links to all the papers. I will probably blog about some of the individual papers in the next week or two, as well.

Symposium: The Bailouts of 2007-2009
“A Retrospective Look at Rescuing and Restructuring General Motors and Chrysler,” by Austan D. Goolsbee and Alan B. Krueger
The rescue of the US automobile industry amid the 2008-2009 recession and financial crisis was a consequential, controversial, and difficult decision made at a fraught moment for the US economy. Both of us were involved in the decision process at the time, but since have moved back to academia. More than five years have passed since the bailout began, and it is timely to look back at this unusual episode of economic policymaking to consider what we got right, what we got wrong, and why. In this article, we describe the events that brought two of the largest industrial companies in the world to seek a bailout from the US government, the analysis that was used to evaluate the decision (including what the alternatives were and whether a rescue would even work), the steps that were taken to rescue and restructure General Motors and Chrysler, and the performance of the US auto industry since the bailout. We close with general lessons to be learned from the episode.
Full-Text Access | Supplementary Materials

“The Rescue of Fannie Mae and Freddie Mac,” by W. Scott Frame, Andreas Fuster, Joseph Tracy and James Vickery

The imposition of federal conservatorships on September 6, 2008, at the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation—commonly known as Fannie Mae and Freddie Mac—was one of the most dramatic events of the financial crisis. These two government-sponsored enterprises play a central role in the US housing finance system, and at the start of their conservatorships held or guaranteed about $5.2 trillion of home mortgage debt. The two firms were often cited as shining examples of public-private partnerships—that is, the harnessing of private capital to advance the social goal of expanding homeownership. But in reality, the hybrid structures of Fannie Mae and Freddie Mac were destined to fail at some point, owing to their singular exposure to residential real estate and moral hazard incentives emanating from the implicit guarantee of their liabilities. We describe the financial distress experienced by the two firms, the events that led the federal government to take dramatic action in an effort to stabilize housing and financial markets, and the various resolution options available to US policymakers at the time; and we evaluate the success of the choice of conservatorship in terms of its effects on financial markets and financial stability, on mortgage supply, and on the financial position of the two firms themselves. Conservatorship achieved its key short-run goals of stabilizing mortgage markets and promoting financial stability during a period of extreme stress. However, conservatorship was intended to be a temporary fix, not a long-term solution, and more than six years later, Fannie Mae and Freddie Mac still remain in conservatorship.
Full-Text Access | Supplementary Materials

“An Assessment of TARP Assistance to Financial Institutions,” by Charles W. Calomiris and Urooj Khan


Six years after the passage of the 2008 Troubled Asset Relief Program, commonly known as TARP, it remains hard to measure the total social costs and benefits of the assistance to banks provided under TARP programs. TARP was not a single approach to assisting weak banks but rather a variety of changing solutions to a set of evolving problems. TARP’s passage was associated with significant improvements in financial markets and the health of financial intermediaries, as well as an increase in the supply of lending by recipients. However, a full evaluation must also take into account other factors, including the risks borne by taxpayers in the course of the bailouts; moral-hazard costs that could result in more risk-taking in the future; and social costs related to perceived unfairness. Our evaluation is organized in five parts: 1) What did policymakers do? 2) What are the proper objectives of interventions like TARP assistance to financial institutions? 3) Did TARP succeed in those economic objectives? 4) Were TARP funds allocated purely on an economic basis, or did political favoritism play a role? 5) Would alternative policies, either alongside or instead of TARP, and alternative design features of TARP, have worked better?
Full-Text Access | Supplementary Materials

“AIG in Hindsight,” by Robert McDonald and Anna Paulson

The near-failure on September 16, 2008, of American International Group (AIG) was an iconic moment in the financial crisis. Two large bets on real estate made with funding vulnerable to bank-run-like dynamics pushed AIG to the brink of bankruptcy. AIG used securities lending to transform insurance company assets into residential mortgage-backed securities and collateralized debt obligations, ultimately losing at least $21 billion and threatening the solvency of the life insurance companies. AIG also sold insurance on multisector collateralized debt obligations, backed by real estate assets, ultimately losing more than $30 billion. These activities were apparently motivated by a belief that AIG’s real estate bets would not suffer defaults and were “money-good.” We find that these securities have in fact suffered write-downs and that the stark “money-good” claim can be rejected. Ultimately, both liquidity and solvency were issues for AIG.
Full-Text Access | Supplementary Materials

“Legal, Political, and Institutional Constraints on the Financial Crisis Policy Response,” by Phillip Swagel


As the financial crisis manifested itself and peaked in 2007 and 2008, the response of US policymakers and regulators was shaped in important ways by legal and political constraints. Policymakers lacked certain legal authorities that would have been useful for addressing the crisis, notably to use public capital to stabilize the banking sector or to deal with the failure of large financial firms such as insurance companies and investment banks that were outside the scope of bank regulators’ authority to resolve deposit-taking commercial banks. Legal constraints were keenly felt at the US Department of the Treasury, where I served as a senior official from December 2006 to January 2009. Treasury had virtually no emergency economic authority at the onset of the crisis in 2007, with the exception of the Treasury’s Exchange Stabilization Fund, which was intended for use in exchange rate interventions. As the systemic risks of the financial crisis became apparent, the initial policy response largely fell to the Federal Reserve, which had the authority to act under emergency circumstances. There will inevitably be another financial crisis, and the response will be shaped by both the lessons learned from recent history and the statutory and political changes in the wake of the crisis. The paper thus concludes by discussing changes in constraints since the crisis, with a focus on two developments: 1) the political reality that there will not in the near future be another wide-ranging grant of fiscal authority as was given with the Troubled Asset Relief Program, and 2) the new legal authorities provided in the Wall Street Reform and Consumer Protection Act of 2010, commonly known as the Dodd-Frank law.
Full-Text Access | Supplementary Materials

Symposium: Disability Insurance

“Understanding the Increase in Disability Insurance Benefit Receipt in the United States,” by Jeffrey B. Liebman


The share of working-age Americans receiving disability benefits from the federal Disability Insurance (DI) program has increased significantly in recent decades, from 2.2 percent in the late 1970s to 3.6 percent in the years immediately preceding the 2007-2009 recession and 4.6 percent in 2013. With the federal Disability Insurance Trust Fund currently projected to be depleted in 2016, Congressional action of some sort is likely to occur within the next several years. It is therefore a good time to sort out the competing explanations for the increase in disability benefit receipt and to review some of the ideas that economists have put forth for reforming US disability programs.
Full-Text Access | Supplementary Materials

\”The Rise and Fall of Disability Insurance Enrollment in the Netherlands,\” by Pierre Koning and Maarten Lindeboom
As recently as 15 years ago, the high level of Disability Insurance (DI) enrollment was considered to be one of the major social and economic problems of the Netherlands; indeed, the Netherlands was characterized as the country with the most out-of-control disability program of OECD countries. But since about 2002, the Netherlands has seen a spectacular decline in its Disability Insurance enrollment rate. Radical reforms to the Dutch DI system were implemented over the period 1996 to 2006. We cluster these reforms in three broad categories: 1) reducing the incentives of employers to move workers to disability; 2) increased gatekeeping; and 3) tightening disability eligibility criteria while enhancing worker incentives. The reforms appear to have been very effective. Since 2002, yearly DI inflow rates dropped from 1.5 percent in 2001 to about 0.5 percent of the insured population in 2012. We argue that particularly the interaction of employer incentives and formal employer obligations has contributed to the substantial decrease in DI inflow. On the downside, however, it seems workers with bad health have sorted into temporary employment—without employers bearing the financial responsibility of their benefit costs.
Full-Text Access | Supplementary Materials

“Disability Benefit Receipt and Reform: Reconciling Trends in the United Kingdom,” by James Banks, Richard Blundell and Carl Emmerson


The UK has enacted a number of reforms to the structure of disability benefits that has made it a major case study for other countries thinking of reform. The introduction of Incapacity Benefit in 1995 coincided with a strong decline in disability benefit expenditure, reversing previous sharp increases. From 2008 the replacement of Incapacity Benefit with Employment and Support Allowance was intended to reduce spending further. We bring together administrative and survey data over the period and highlight key differences in receipt of disability benefits by age, sex, and health. These disability benefit reforms and the trends in receipt are also put into the context of broader trends in health and employment by education and sex. We document a growing proportion of claimants in any age group with mental and behavioral disorders as their principal health condition. We also show the decline in the number of older working age men receiving disability benefits to have been partially offset by growth in the number of younger women receiving these benefits. We speculate on the impact of disability reforms on employment.
Full-Text Access | Supplementary Materials

Articles


“Reforming LIBOR and Other Financial Market Benchmarks,” by Darrell Duffie and Jeremy C. Stein
LIBOR is the London Interbank Offered Rate: a measure of the interest rate at which large banks can borrow from one another on an unsecured basis. LIBOR is often used as a benchmark rate—meaning that the interest rates that consumers and businesses pay on trillions of dollars in loans adjust up and down contractually based on movements in LIBOR. Investors also rely on the difference between LIBOR and various risk-free interest rates as a gauge of stress in the banking system. Benchmarks such as LIBOR therefore play a central role in modern financial markets. Thus, news reports in 2008 revealing widespread manipulation of LIBOR threatened the integrity of this benchmark and lowered trust in financial markets. We begin with a discussion of the economic role of benchmarks in reducing market frictions. We explain how manipulation occurs in practice, and illustrate how benchmark definitions and fixing methods can mitigate manipulation. We then turn to an overall policy approach for reducing the susceptibility of LIBOR to manipulation before focusing on the practical problem of how to make an orderly transition to alternative reference rates without raising undue legal risks.
Full-Text Access | Supplementary Materials

“Bitcoin: Economics, Technology, and Governance,” by Rainer Böhme, Nicolas Christin, Benjamin Edelman and Tyler Moore

Bitcoin is an online communication protocol that facilitates the use of a virtual currency, including electronic payments. Bitcoin’s rules were designed by engineers with no apparent influence from lawyers or regulators. Bitcoin is built on a transaction log that is distributed across a network of participating computers. It includes mechanisms to reward honest participation, to bootstrap acceptance by early adopters, and to guard against concentrations of power. Bitcoin’s design allows for irreversible transactions, a prescribed path of money creation over time, and a public transaction history. Anyone can create a Bitcoin account, without charge and without any centralized vetting procedure—or even a requirement to provide a real name. Collectively, these rules yield a system that is understood to be more flexible, more private, and less amenable to regulatory oversight than other forms of payment—though as we discuss, all these benefits face important limits. Bitcoin is of interest to economists as a virtual currency with potential to disrupt existing payment systems and perhaps even monetary systems. This article presents the platform’s design principles and properties for a nontechnical audience; reviews its past, present, and future uses; and points out risks and regulatory issues as Bitcoin interacts with the conventional financial system and the real economy.
Full-Text Access | Supplementary Materials

"Systematic Bias and Nontransparency in US Social Security Administration Forecasts," by Konstantin Kashin, Gary King and Samir Soneji

We offer an evaluation of the Social Security Administration demographic and financial forecasts used to assess the long-term solvency of the Social Security Trust Funds. This same forecasting methodology is also used in evaluating policy proposals put forward by Congress to modify the Social Security program. Ours is the first evaluation to compare the SSA forecasts with observed truth; for example, we compare forecasts made in the 1980s, 1990s, and 2000s with outcomes that are now available. We find that Social Security Administration forecasting errors—as evaluated by how accurate the forecasts turned out to be—were approximately unbiased until 2000 and then became systematically biased afterward, and increasingly so over time. Also, most of the forecasting errors since 2000 are in the same direction, consistently misleading users of the forecasts to conclude that the Social Security Trust Funds are in better financial shape than turns out to be the case. Finally, the Social Security Administration's informal uncertainty intervals appear to have become increasingly inaccurate since 2000. At present, the Office of the Chief Actuary, at the Social Security Administration, does not reveal in full how its forecasts are made. Every future Trustees Report, without exception, should include a routine evaluation of all prior forecasts, and a discussion of what forecasting mistakes were made, what was learned from the mistakes, and what actions might be taken to improve forecasts going forward. And the Social Security Administration and its Office of the Chief Actuary should follow best practices in academia and many other parts of government and make their forecasting procedures public and replicable, and should calculate and report calibrated uncertainty intervals for all forecasts.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

Social Costs of the Financial Sector

Luigi Zingales may seem like an unlikely person to ask, "Does Finance Benefit Society?" When he critiques the social benefits of the financial sector, he does so as an insider. He is Distinguished Service Professor of Entrepreneurship and Finance at the University of Chicago Booth School of Business, and recently served as president of the American Finance Association, the professional society of economists and scholars in related disciplines who study finance in depth. That question was the title of his Presidential Address to the AFA, delivered in January 2015. The address will be published later this summer in the Journal of Finance, but you can watch video of the lecture or read the talk at his website.

In the aftermath of the dot-com boom, the housing price bubble, and the financial bailouts, it's easy enough for "finance" to sound like a swear-word. As I pointed out a few days ago in a post on "Breaking Down Corporate Profits" (May 6, 2015), the "finance and insurance" sector earns more profits than any other sector of the US economy, just beating out the manufacturing sector and well ahead of wholesale and retail trade. I've offered some thoughts on the financial sector in "Why Did the U.S. Financial Sector Grow?" (May 15, 2013) and "When is the Financial Sector Too Big?" (July 19, 2012), or for an alternative view, "A Defense of the Financial Sector" (May 17, 2013).

So it's probably useful to begin, as Zingales does, by pointing out that certain aspects of finance are pretty useful. I like being able to keep my money in a bank, and to get a mortgage when I buy a house. I like being able to invest my retirement savings in a mutual fund. Farmers like being able to use futures markets to lock in a minimum price for what they will grow; airlines like being able to use those markets to lock in a maximum price for what fuel will cost six or twelve months in the future; multinational companies like using futures markets to protect themselves against shifts in exchange rates. There's lots of evidence that the financial sector of a country tends to expand with economic growth, as entrepreneurs and established businesses find ways to use the financial sector to raise funds for investment and to limit risks.

But after dutifully acknowledging these benefits, Zingales questions whether the growth of the financial sector has gone beyond the useful zone. Here's a sample (citations omitted):

While there is no doubt that a developed economy needs a sophisticated financial sector, at the current state of knowledge there is no theoretical reason or empirical evidence to support the notion that all the growth of the financial sector in the last forty years has been beneficial to society. In fact, we have both theoretical reasons and empirical evidence to claim that a component has been pure rent seeking. …

There is a large body of evidence documenting that on average a bigger banking sector (often measured as the ratio of private credit to GDP) is correlated with higher growth, both cross-sectionally and over time. … [I]n this large body there is precious little evidence that shows the positive role of other forms of financial development, particularly important in the United States: equity market, junk bond market, option and future markets, interest rate swaps, etc. …

If anything, the empirical evidence suggests that the credit expansion in the United States was excessive. The problem is even more severe for other parts of the financial system. There is remarkably little evidence that the existence or the size of an equity market matters for growth. …  I am not aware of any evidence that the creation and growth of the junk bond market, the option and futures market, or the development of over-the-counter derivatives are positively correlated with economic growth. …

For the period 1996-2004, … the cost of (mostly financial) fraud among U.S. companies with more than $750m in revenues is $380bn a year. Table 1 reports the fines paid by financial institutions to U.S. enforcement agencies between January 2012 and December 2014. The total amount is $139 bn, $113bn of which related to mortgage fraud. This severely underestimates the magnitude of the problem. First, some of the main mortgage lenders (like New Century Financial) went bankrupt and therefore were never charged. Second, even if the fraudulent institution did not go bankrupt, it can effectively be sued only if it has enough capital. The table includes just one fine regarding Madoff, for only $2.9bn, when the overall amount of the Madoff fraud totaled $64.8 bn.  Finally, Dyck et al. (2014) estimate that only one fourth of the fraud are detected. 

Zingales argues that in the face of this kind of evidence (and he cites much more than this), it won't do to defend the financial sector by pointing to the good it undoubtedly does for certain parties in certain situations. Instead, one needs to think about why the financial industry is so prone to excess and misbehavior. Zingales argues that there is clearly lots of money to be made in duping investors about risks and returns, and in acting behind the scenes in ways that favor those in the financial industry but do not favor their customers. Zingales cites survey evidence that many house-buyers don't understand what an adjustable-rate mortgage means. It's clear that many big-time investors didn't understand very clearly how LIBOR was determined, which is why insiders were able to manipulate it.

As a clear-headed economist, Zingales also points out that government regulation is often part of the problem here. For example, one ingredient in the toxic stew of the housing price bubble was the behavior of Fannie Mae and Freddie Mac, the government-backed agencies that helped sustain the bubble, and that then needed a $180 billion bailout after the crisis hit. More broadly, fine print is not the answer. It's not the answer when government seeks to regulate the financial sector with blizzards of fine print. It's not the answer when government passes requirements that blizzards of fine-print "disclosure statements" be handed to customers in the name of "transparency." As Zingales argues, simple rules are usually better (again, citations and footnotes omitted):

First, when the possibility of arbitrage and manipulation is considered, the best (most robust) solutions tend to be the simplest ones. … Second, simple rules also facilitate accountability. Complicated rules are difficult to enforce even under the best circumstances, and impossible when their enforcement is the domain of captured agencies. In the context of regulation, however, there is one added benefit of simplicity. Not only does simple regulation reduce lobbying costs and distortions; it also makes it easier for the public to monitor, reducing the amount of capture. Finally, when we factor in the enforcement and lobbying costs, simpler choices, which might have looked inefficient at first, often turn out to be optimal in a broader sense. Thus, we should make an effort to propose simple solutions, which are easier to explain to people and easier to enforce and monitor. For example, a simple way to deal with the problem of unsophisticated investors being duped is to put the liability on the sellers. Just like brokers have to prove that they sold options only to sophisticated buyers, the same should be true for other instruments like double short ETF [exchange-traded funds]. This shift in the liability rule (Caveat Venditor) risks shutting off ordinary people from access to financial services. For this reason, there should be an exemption for some very basic instruments – like fixed rate mortgages and a broad stock market index ETF. 

Readers interested in getting up to speed on these arguments that the financial sector is too big might begin with Zingales's accessible talk. Some other recent resources on this topic include:

Some International Minimum Wage Comparisons

How do minimum wages around the world compare, and how have they changed in the last few years? The OECD offers an overview in a May 2015 FOCUS report called "Minimum wages after the crisis: Making them pay."

As a starting point, here's the minimum wage relative to the median wage in various countries. The blue diamonds show the proportion in 2007; the orange triangles show the proportion in 2013. The minimum wage stayed the same or rose in most countries, with a few exceptions like Ireland, Greece, and Spain. The US would have ranked lowest in the ratio of minimum wage to median wage in 2007 (lowest blue diamond), but ranks third-lowest after the recent increases. Of course, some will see this relatively low level as cause for concern, while others will see it as cause for congratulation. The ratio for Colombia may be misleading because the minimum wage applies only to workers in the "formal" sector of the economy, and those in the informal sector would on average have lower wages.

As a different metric, how many hours does a person need to work at a minimum wage job before their total earnings reach half the median income? The calculation here includes an adjustment for payroll and income taxes paid, as well as for other cash benefits being scaled back. The orange bars show the number of hours for a single parent with two children; the blue diamonds show the number of hours for a single-earner, two-parent couple with two children. Of course, any such calculation rests on the assumption that a minimum wage job is in fact available.
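The arithmetic behind this metric can be sketched as follows. Everything here is a hypothetical simplification of the OECD's tax-and-benefit calculation: the function name, the flat net-earnings rate, and the example wage and income figures are all made up for illustration.

```python
# A stylized version of the hours-to-reach-half-median-income calculation.
# All numbers are hypothetical; the real OECD calculation models taxes and
# benefit clawbacks in detail rather than using a single flat rate.

def hours_to_half_median(min_wage, median_weekly_income, net_rate=0.85):
    """Weekly hours at the minimum wage needed for net earnings to reach
    half the median weekly income. net_rate approximates the share of
    gross earnings kept after payroll/income taxes and benefit clawbacks."""
    target = 0.5 * median_weekly_income
    return target / (min_wage * net_rate)

# e.g. an $8/hour minimum wage and a $900/week median income:
print(round(hours_to_half_median(8.0, 900.0), 1))
```

The point of the metric is that a higher statutory minimum wage can still leave a family far from half the median income once taxes and withdrawn benefits are factored in.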

Finally, what share of workers is actually paid the minimum wage? This information gives a sense of how many workers would be directly affected by altering the minimum wage. The orange bars show the share of workers receiving the minimum wage in each country (based on differing data sources), while the blue triangles are a reminder of the minimum wage as a percentage of the median wage in each country. One would expect that countries with a high minimum wage would tend to have more workers receiving the minimum wage, but that pattern doesn't always hold. For example, the shares of workers in Greece and in Portugal being paid the minimum wage are lower than in the US, even though the minimum wage in those countries is a higher share of the median wage.

Breakdown of US Corporate Profits

Although the US corporate income tax is an almost continual subject of dispute, most people don't know many details about the kinds of corporations that earn the profits. For example, are the companies earning profits disproportionately large or small? What industries earn the most in profits? How much corporate profit falls outside the corporate income tax because it is earned by "S corporations," which distribute profits to their owners instead? The IRS has just published its 2012 Corporation Income Tax Returns Complete Report. The report offers voluminous tables with lots of detail on these questions, but here's the big-picture overview. (I've trimmed down the tables below from the form in which they appear in the report, for example by leaving out data for 2011.)

For starters, how big were corporate profits and corporate taxes in 2012? Here's the table from the report. In 2012, the US economy had 5.8 million active corporations. Their total receipts were $29.4 trillion. (This amount is larger than GDP! But remember that in the process of production, companies sell to other companies: for example, a mining company sells iron to the steel company, which sells steel to the car company, which sells the car to a consumer. Only the price of the car is included in GDP, because that price already reflects all the earlier stages of production. But if you just add up company receipts, you count the receipts of the mining company, the steel company, and the car company. And of course, in the real world many production processes involve a lot more than three steps.) Total corporate income in 2012 was nearly $1.8 trillion. However, after subtracting deductions for past losses and other deductions, taxable corporate income was about $1.1 trillion. The tax owed on that amount was $402 billion. However, after subtracting tax credits (credits for foreign taxes paid, credits to holders of tax credit bonds, the qualified electric vehicle credit, the general business credit, and the prior-year minimum tax credit), the actual income tax owed was $267 billion.
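The difference between summed receipts and GDP can be made concrete with a little arithmetic. The three-firm chain and its prices below are hypothetical, not figures from the IRS report:

```python
# Hypothetical three-step production chain: a mining company sells ore to a
# steel company, which sells steel to a car company, which sells the car.
receipts = {"mining": 5_000, "steel": 12_000, "car": 30_000}

# Summing gross receipts double-counts the earlier stages of production.
total_receipts = sum(receipts.values())          # 47,000

# GDP counts only the value added at each stage, which sums to the
# price of the final good.
value_added = {
    "mining": 5_000,            # ore produced from scratch
    "steel": 12_000 - 5_000,    # steel price minus ore input
    "car": 30_000 - 12_000,     # car price minus steel input
}
gdp_contribution = sum(value_added.values())     # 30,000 = final car price

print(total_receipts, gdp_contribution)
```

This is why $29.4 trillion in corporate receipts can coexist with a much smaller GDP: the longer the production chain, the more the receipts total double-counts intermediate sales.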

How do the profits earned and the corporate taxes paid break down by the size of companies? I'll just note that the roughly 3,000 largest firms, those with assets of $2.5 billion or more, account for more than half of all corporate receipts (51%), about two-thirds of all profits before deductions and credits (67%), and about 70% of all corporate income taxes paid after deductions and credits. Readers can of course play around with the various categories in the table as they like.

What industries paid the most in corporate income tax in 2012? Out of the total of nearly $1.8 trillion in pretax profits (before deductions and credits), about 80% can be traced to four sectors: finance and insurance (29% of total pretax profits), manufacturing (29%), wholesale and retail trade (15%), and management of companies, that is, holding companies (7%).

Finally, not all corporations pay corporate income tax. Some corporations pass their profits each year to their owners: for example, regulated investment companies do so, and so do "S corporations." Out of the $1,774 billion in profits before deductions and credits, a full $678 billion (38% of the total) is earned in these pass-through forms, which have accounted for a growing share of corporate profits over time.

Global Income Inequality In Decline

One paradox of income inequality in our time is that although the distribution of income has become more unequal within many countries, from a global perspective the distribution of income is becoming more equal. The reason, of course, is that rapid income growth among substantial segments of the population in places like China, India, and even sub-Saharan Africa will tend to reduce global inequality, at the same time that it increases inequality within these countries. Tomáš Hellebrandt and Paolo Mauro explore these patterns in "The Future of Worldwide Income Distribution," written for the Peterson Institute for International Economics (April 2015, Working Paper 15-7).

They look at a wide array of household-level evidence on the distribution of income in more than 100 countries, and then use various assumptions (essentially assuming that within-country inequality of incomes doesn't continue to change over time) to project what the distribution of global income will look like in the future. The green line in the figure shows the distribution of global income per person in 2003, with a mean of $3,451 and a median of $1,090; the blue line shows the global distribution of income in 2013, with a mean of $5,375 and a median of $2,010; and the red line shows their forecast for 2035, with a mean of $9,112 and a median of $4,000. The distribution of global income is clearly becoming flatter and more equal over time. Hellebrandt and Mauro write: "Global income inequality started declining significantly at the turn of the century, and we project that this trend will continue for the next two decades, under what we consider the profession's “consensus” projections for the growth rates of output and population."

However, it's important not to exaggerate how quickly this reduction in global inequality of incomes will occur. For example, the median income projected for 2035 (with half the world population below that level) is well below the average or mean income for 2013. The authors offer a useful illustration measuring inequality by the 90:10 ratio, that is, the ratio of income at the 90th percentile of the income distribution to income at the 10th percentile. (Calculations using the Gini coefficient look much the same.) The global 90:10 ratio shows greater inequality than that of almost any individual country in the world, with the exception of South Africa. By 2035, although the global 90:10 ratio falls, it will still be substantially higher than what any even moderately large economy other than South Africa is currently experiencing.
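As a minimal sketch of how a 90:10 ratio is computed, here is a pure-Python version run on simulated incomes. The log-normal distribution and its parameters are stylized assumptions for illustration, not the authors' data:

```python
import random
import statistics

# The 90:10 ratio: income at the 90th percentile of the distribution
# divided by income at the 10th percentile.
def ratio_90_10(incomes):
    deciles = statistics.quantiles(incomes, n=10)  # cut points at 10%, ..., 90%
    return deciles[-1] / deciles[0]               # p90 / p10

# Log-normal incomes are a common stylized model of an income distribution;
# the parameters here are made up.
random.seed(0)
incomes = [random.lognormvariate(9.0, 1.0) for _ in range(100_000)]
print(round(ratio_90_10(incomes), 2))
```

A ratio of, say, 13 means a household at the 90th percentile has thirteen times the income of one at the 10th percentile; the global distribution's ratio is high precisely because it spans both rich-country and poor-country incomes.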

The figure also offers some comparisons of the current level of income inequality across countries using the 90:10 ratio. The US economy clearly has one of the most unequal distributions of income among the high-income countries, but it is more equal than a number of emerging economies around the world.

The exercise in this paper reminds me of an article by Robert E. Lucas Jr. that appeared in the Winter 2000 issue of the Journal of Economic Perspectives, called "Some Macroeconomics for the 21st Century." (Full disclosure: I have been Managing Editor of the JEP since 1987. All JEP articles, from the most recent issue back to the first, are freely available on-line compliments of the American Economic Association.) Lucas offers a hypothetical model of growth patterns in the global economy that works like this:

"We begin, then, with an image of the world economy of 1800 as consisting of a number of very poor, stagnant economies, equal in population and in income. Now imagine all of these economies lined up in a row, each behind the kind of mechanical starting gate used at the race track. In the race to industrialize that I am about to describe, though, the gates do not open all at once, the way they do at the track. Instead, at any date t a few of the gates that have not yet opened are selected by some random device. When the bell rings, these gates open and some of the economies that had been stagnant are released and begin to grow. The rest must wait their chances at the next date, t + 1. In any year after 1800, then, the world economy consists of those countries that have not begun to grow, stagnating at the $600 income level, and those countries that began to grow at some date in the past and have been growing ever since. … 

"[A]n economy that begins to grow at any date after 1800 grows at a rate equal to α = .02, the growth rate of the leader, plus a term that is proportional to the percentage income gap between itself and the leader. The later a country starts to grow, the larger is this initial income gap, so a later start implies faster initial growth. But a country growing faster than the leader closes the income gap, which by my assumption reduces its growth rate toward .02. Thus, a late entrant to the industrial revolution will eventually have essentially the same income level as the leader, but will never surpass the leader’s level."
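The quoted mechanism is simple enough to simulate directly. A minimal sketch, where the catch-up coefficient `THETA` is an assumed value chosen for illustration (Lucas calibrates his own), as is the choice of starting year:

```python
# A toy simulation of the Lucas (2000) catch-up model described above.
ALPHA = 0.02        # growth rate of the leader
THETA = 0.025       # assumed catch-up coefficient (illustrative, not Lucas's)
START_INCOME = 600  # pre-growth stagnation income level, per the quote

def simulate(start_year, end_year=2100, leader_start=1800):
    """Income paths of a late entrant and the leader, year by year."""
    leader = float(START_INCOME)
    follower = float(START_INCOME)
    for year in range(leader_start, end_year):
        gap = (leader - follower) / leader      # percentage income gap
        leader *= 1 + ALPHA
        if year >= start_year:
            # the later the start, the bigger the gap, the faster the growth
            follower *= 1 + ALPHA + THETA * gap
    return follower, leader

follower, leader = simulate(start_year=1950)
# A late entrant grows faster than the leader while behind,
# but in this model it converges and never surpasses the leader.
assert START_INCOME < follower < leader
```

Running `simulate` with later and later start years reproduces the pattern Lucas describes: larger initial gaps, faster initial "catch-up" growth, and eventual convergence toward (but never past) the leader's income level.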

Lucas acknowledges repeatedly that this model is a very simple one. But he points out that it offers some interesting predictions. It predicts that global inequality of income will at first expand dramatically, as it indeed did from the 19th century well into the 20th century. It predicts that countries which start growing later will experience faster "catch-up" growth, which has held true in a number of countries including Japan, Korea, China, and others. Under the assumptions that Lucas uses, his model predicts that the global rate of economic growth will peak around 1970, at a time when a large share of the world is catching up at a rapid pace. It predicts that during the last few decades of the 20th century, global inequality won't change by much. And it predicts that in the 21st century, as the remaining nations that had not previously entered a period of rapid economic growth start to do so, global inequality of incomes will diminish. If Lucas's model captures the underlying dynamics of the global growth process, and if predictions like those of Hellebrandt and Mauro hold up, the 21st century would be a time of rising equality across the global income distribution.