US Mergers and Antitrust in 2014

Each year the Federal Trade Commission and the Department of Justice Antitrust Division publish the Hart-Scott-Rodino Annual Report, which offers an overview of merger and acquisition activity and antitrust enforcement during the previous year. The Hart-Scott-Rodino legislation requires that all mergers and acquisitions above a certain size–now set at $75.9 million–be reported to the antitrust authorities before they occur. The report thus offers an overview of recent merger and antitrust activity in the United States.

For example, here’s a figure showing the total number of mergers and acquisitions reported. There was a substantial jump in the total number of mergers in 2014, not quite back to the higher levels of 2006 and 2007, but headed in that direction.

The report also provides a breakdown on the size of mergers. Here’s what it looked like in 2014. As the figure shows, there were 225 mergers and acquisitions of more than $1 billion.

After a proposed merger is reported, the FTC or the US Department of Justice can issue a “second request” if it perceives that the merger might raise some anticompetitive issues. In the last few years, about 3-4% of the reported mergers get this “second request.” This percentage may seem low, but it’s not clear that it is too low. After all, the US government isn’t second-guessing whether mergers and acquisitions make sense from a business point of view. It’s only asking whether the merger might reduce competition in a substantial way. If two companies that aren’t directly competing with each other combine, or if two companies combine in a market with a number of other competitors, the merger/acquisition may turn out well or poorly from a business point of view, but it is less likely to raise competition issues.

Teachers of economics may find the report a useful place to come up with some recent examples of antitrust cases, and there are also links to some of the underlying case documents and analysis  (which students can be assigned to read). Here are a few examples. In the first one, a merger was questioned and called off. In the second, a merger was allowed only after a number of plants were divested, so as to allow competition to continue. The third case involved an airline merger in which the transaction was only allowed to proceed with provisions that required divestiture of gates and landing slots at a number of airports, thus opening the way for competition from other airlines.

In Jostens/American Achievement Group, the Commission issued an administrative complaint and authorized staff to seek a preliminary injunction in federal district court enjoining Jostens, Inc.’s proposed $500 million acquisition of American Achievement Corp. The Commission alleged that the acquisition would have substantially reduced quality and price competition in the high school and college class rings markets. Shortly after the Commission filed its administrative complaint, the parties abandoned the transaction. …

In April 2014, the FTC also concluded its 2013 challenge to Ardagh Group SA’s proposed acquisition of Saint-Gobain Containers, Inc. The $1.7 billion merger would have allegedly concentrated most of the $5 billion U.S. glass container industry in two companies – the newly combined Ardagh/Saint-Gobain, and Owens-Illinois, Inc. These two companies would have controlled about 85 percent of the glass container market for brewers and 77 percent of the market for distillers, reducing competition and likely leading to higher prices for customers that purchase beer or spirits glass containers. The FTC filed suit in July 2013 to stop the proposed transaction. While the challenge was pending, Ardagh agreed to sell six of its nine glass container manufacturing plants in the United States to a Commission-approved buyer.

In United States, et al. v. US Airways Group, Inc. and AMR Corporation, the Division and the states of Texas, Arizona, Pennsylvania, Florida, Tennessee, Virginia, and the
District of Columbia challenged the proposed $11 billion merger between US Airways Group, Inc. (“US Airways”) and American Airlines’ parent company, AMR Corporation. On April 25, 2014, the court entered the consent decree requiring US Airways and AMR Corporation to divest slots and gates in key constrained airports across the United States. These divestitures, the largest ever in an airline merger, have allowed low cost carriers to fly more direct and connecting flights in competition with legacy carriers and have enhanced system-wide competition in the airline industry.

China’s Economic Growth: Pause or Trap?

China’s rate of economic growth has slowed from the stratospheric 10% per year that it averaged from 1980 through 2010 to a merely torrid 8% per year from 2011-2014, and perhaps even a little slower in 2015. Now that China is on the verge of being the world’s largest economy, the question of its future growth matters not just for 1.3 billion Chinese, but for the global economy as a whole. Zheng Liu offers some insight into the slowdown in a short essay “Is China’s Growth Miracle Over?” published as an “Economic Letter” by the Federal Reserve Bank of San Francisco (August 10, 2015).

For some overall perspective, the blue line shows annual per capita growth of GDP for China. The red line shows annual per capita growth of GDP for the US economy. These differences in annual growth rates are extraordinary, and they do add up, year after year after year.
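To see just how much those growth gaps compound, here is a quick back-of-the-envelope sketch. The 8% and 2% rates are illustrative round numbers in the spirit of the figure, not the exact series plotted there.

```python
# Back-of-the-envelope compounding: an income index of 100 growing
# at 8% per year versus 2% per year, over 30 years.
def compound(start, rate, years):
    """Value after growing at `rate` per year for `years` years."""
    return start * (1 + rate) ** years

fast = compound(100, 0.08, 30)   # roughly China's pace in recent decades
slow = compound(100, 0.02, 30)   # roughly the US pace
print(round(fast), round(slow))  # 1006 181
```

A 6-percentage-point gap, sustained for a generation, turns two equal starting incomes into a more than fivefold difference.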

In the last few years, China’s growth patterns have been heavily affected by a slowdown in its exports during the Great Recession and its aftermath, and by an enormous surge in debt. But over the longer term, the key question is whether China’s growth rate slows down. Liu offers a thought-provoking figure here with a comparison of China with Japan and South Korea. China’s growth is shown by the rectangles, and you can see, for example, that its growth in 2014 was below the earlier levels from the last few decades. Japan’s growth is shown by the red circles, and you can see how its growth fell from the 1960s to the 1970s, and more recently to the circles at the bottom right. Korea’s growth is shown by the triangles, and again, you can see the decline in growth rate over time.

So is China going to keep growing at an annual rate of around 7%, or will it fall to the 2-3% range? Here’s the capsule case for pessimism, and then for optimism, from Liu.

China had a real GDP per capita of about $2,000 in the 1980s, which rose steadily to about $5,000 in the 2000s and to over $10,000 in 2014. If China continues to grow at a rate of 6 or 7%, it could move into high-income status in the not-so-distant future. However, if China’s experience mirrors that of its neighbors, it could slow to about 3% average growth by the 2020s, when its per capita income is expected to rise to about $25,000.

This may appear to be quite a pessimistic scenario for China, but China’s long-term growth prospects are challenged by a number of structural imbalances. These include financial repression, the lack of a social safety net, an export-oriented growth strategy, and capital account restrictions, all of which contributed to excessively high domestic savings and trade imbalances. According to the National Bureau of Statistics of China, the household saving rate increased from 15% in 1990 to over 30% in 2014. High savings have boosted domestic investment, but allocations of credit and capital remain highly inefficient. The banking sector is largely state-controlled, and bank loans disproportionately favor state-owned enterprises (SOEs) at the expense of more productive private firms. According to one estimate, the misallocation of capital has significantly depressed productivity in China. If efficiency of capital allocations could be improved to a level similar to that in the United States, then China’s total factor productivity could be increased 30–50% …

Despite the slowdown, there are several reasons for optimism. First, China’s existing allocations of capital and labor leave a lot of room to improve efficiency. If the proposals for financial liberalization and fiscal and labor market reforms can be successfully put in place, improved resource allocations could provide a much-needed boost to productivity. Second, China’s technology is still far behind advanced countries’. According to the Penn World Tables, China’s total factor productivity remains about 40% of the U.S. level. If trade policies such as exchange rate pegs and capital controls are liberalized—as intended in the reform blueprints—then China could boost its productivity through catching up with the world technology frontier. Third, China is a large country, with highly uneven regional development. While the coastal area has been growing rapidly in the past 35 years, its interior region has lagged. As policy focus shifts to interior region development, growth in the less-developed regions should accelerate. With the high-speed rails, airports, and highways already built in the past few years, China has paved the way for this development. As the interior area catches up with the coastal region, convergence within the country should also help boost China’s overall growth.

I’m a long-term optimist about China’s economic growth, both because I think it has been laying the basis for future growth by boosting education, investment, and technology, and also because China’s economy as a whole (not just the richer coastal areas) still has a lot of room for catch-up growth. But China’s economy has reached a size where it urgently needs better functioning from its banks and financial system, and in the shorter term, the problems of the financial system are likely to keep getting the headlines.

The Aftermath of LIBOR and Penny-Shaving Attacks

Anyone remember the LIBOR scandal from back in spring 2008? A trader for UBS Group and Citigroup named Tom Hayes was just sentenced by a British court to 14 years in prison for his role as a ringleader of the scandal. Darrell Duffie and Jeremy C. Stein discuss both the scandal and–perhaps more interesting to those of us who bleed economics–the economic function of financial market benchmarks in “Reforming LIBOR and Other Financial Market Benchmarks,” in the Spring 2015 issue of the Journal of Economic Perspectives. (All JEP articles back to the first issue in 1987 are freely available online courtesy of the American Economic Association. Full disclosure: I’ve worked as Managing Editor of the JEP since that first issue.)

For those who have blotted the episode from their memories, LIBOR stands for London Interbank Offered Rate. It’s a measure of the short-term interest rates at which big international banks borrow from each other. A main use of LIBOR in financial markets was as a “benchmark” for adjustable interest rates. For example, if you are a potential borrower or lender worried about the risk that interest rates might shift, you might be able to agree on a loan where the interest rate was, say, the LIBOR rate plus 4%. Duffie and Stein point out that using LIBOR as a benchmark interest rate for international loans dates back to 1969, when “a consortium of London-based banks led by Manufacturers Hanover introduced LIBOR in order to entice international borrowers such as the Shah of Iran to borrow from them.”
Two key details set the stage for the LIBOR fraud. The first detail is that after LIBOR became well-established as a basis for interest rates on loans, the finance industry began to use LIBOR as the basis for lots of more complex financial transactions: for example, “exchange-traded eurodollar futures and options available from Chicago Mercantile Exchange Group, and over-the-counter derivatives including caps, floors, and swaptions (that is, an option to engage in a swap contract).” I won’t plow through an explanation of those terms here. The key takeaway is that the benchmark LIBOR interest rate wasn’t just linked to about $17 trillion in US dollar loans. It was also linked to $106 trillion in interest rate swap agreements, and tens of trillions more in interest rate options and futures, as well as cross-currency swaps. As a result, if you had some information on how LIBOR was likely to change on a day-to-day basis–even if the change was a seemingly tiny amount that didn’t much matter to borrowers or lenders–you could make a substantial amount of money in these more complex financial markets.
The second detail involves how LIBOR was actually calculated. Banks did not actually submit data on their costs of borrowing; instead, someone at each bank responded to a survey each day with an estimate of what it would cost that bank to borrow–even though on a given day many of these banks weren’t actually borrowing from other banks. In addition, during the financial crisis as it erupted in 2007 and 2008, no bank wanted to admit that it would have been charged a higher interest rate if it wanted to borrow, because financial markets would be quick to infer that such a bank might be in a shaky financial position.
So on one side, LIBOR is a key financial benchmark that affects literally tens of trillions of dollars of continuously traded and complicated financial instruments. On the other side, you have this key benchmark being determined by a survey of the opinions of fairly junior bank officers who have some incentive to shade the numbers. The British court found that Tom Hayes led a group of traders who sent messages to the bankers who responded to the LIBOR survey, requesting that the LIBOR rate be jerked a little higher one day, or pushed a little lower another day. Again, those who were just using the LIBOR rate as a benchmark for loans probably wouldn’t even notice these fluctuations. But traders who knew in advance how the LIBOR was going to twitch up and down could make big money in the options and futures markets.
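A bit of arithmetic shows why even a one-basis-point nudge is worth manipulating. The position size below is a hypothetical number invented for illustration, not a figure from the court record.

```python
# Hypothetical illustration: a 1-basis-point (0.01 percentage point)
# move in a benchmark rate, applied to a large rate-linked position.
notional = 10_000_000_000    # $10 billion of LIBOR-linked contracts (invented)
one_bp = 0.0001              # one basis point as a decimal

# Annual interest shifted by a single basis point on that position:
shift = notional * one_bp
print(f"${shift:,.0f}")      # $1,000,000
```

An individual borrower with an ordinary LIBOR-plus loan would barely notice a move that small, but scaled across trillions of dollars of derivatives, tiny twitches in the benchmark translate into serious money.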
What’s the solution here? Duffie and Stein point out that financial benchmarks like LIBOR are extremely useful in financial markets. However, you need to design the benchmark with some care. For example, instead of using a survey of bank officers, it makes a lot more sense to use an actual market-determined interest rate for a benchmark. Moreover, the LIBOR rate is based on banks borrowing from banks, and so it will reflect risk in the banking sector. For certain kinds of lending and borrowing, it’s not clear that you would want your interest rate to rise and fall with changes in the riskiness of the banking sector. Thus, they discuss the virtues of benchmark rates that are market-determined and not linked to the banking sector–like the interest rate for short-term borrowing by the US government. (They also discuss the merits of using some other less well-known benchmark interest rates, like the Treasury general collateral repurchase rate or the overnight index swap rate, for those who want such details.)
More broadly, it seems to me that the LIBOR scandal is the actual real-life version of what seems to be an urban legend plot: the story of how a fraudster finds a way to program the computers of a bank or financial institution so that a tiny amount from certain transactions is siphoned off into a different account (for examples, see the 1983 movie Superman III, or the 1999 movie Office Space). The problem with these “penny-shaving” or “salami-slicing” attacks in real life is that if you steal a little bit from a large number of transactions, it’s quite possible that no individual party will notice. But if you take a few million dollars out of a financial institution, the accountants are going to notice!
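The accounting logic of a penny-shaving scheme can be sketched in a few lines. Both the per-transaction skim and the transaction volume below are made-up numbers for illustration.

```python
# Toy "salami-slicing" arithmetic: skim a fraction of a cent from many
# transactions. No single customer notices, but the books do not balance.
skim_per_txn = 0.004         # $0.004 skimmed per transaction (invented)
txns_per_day = 5_000_000     # daily transaction volume (invented)

daily_take = skim_per_txn * txns_per_day
yearly_take = daily_take * 365
print(f"${daily_take:,.0f} per day, ${yearly_take:,.0f} per year")
```

Each customer loses less than half a cent, yet the missing millions show up plainly in the institution’s own accounts, which is exactly why this attack works better in movies than in real life.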
In the LIBOR scandal, however, the fraud happened by knowing about tiny little changes in LIBOR a day in advance. Those who lost out from not knowing these changes in advance had no way of telling that they were being cheated. In a similar scandal from earlier this year, Citicorp, JPMorgan, Barclays, Royal Bank of Scotland and UBS pled guilty to felony charges for their actions in foreign exchange markets. Again, these are very large markets, and so small acts of dishonesty can add up to large amounts. As the U.S. Department of Justice described it:

“According to plea agreements to be filed in the District of Connecticut, between December 2007 and January 2013, euro-dollar traders at Citicorp, JPMorgan, Barclays and RBS – self-described members of “The Cartel” – used an exclusive electronic chat room and coded language to manipulate benchmark exchange rates. Those rates are set through, among other ways, two major daily “fixes,” the 1:15 p.m. European Central Bank fix and the 4:00 p.m. World Markets/Reuters fix. Third parties collect trading data at these times to calculate and publish a daily “fix rate,” which in turn is used to price orders for many large customers. “The Cartel” traders coordinated their trading of U.S. dollars and euros to manipulate the benchmark rates set at the 1:15 p.m. and 4:00 p.m. fixes in an effort to increase their profits.

As detailed in the plea agreements, these traders also used their exclusive electronic chats to manipulate the euro-dollar exchange rate in other ways. Members of “The Cartel” manipulated the euro-dollar exchange rate by agreeing to withhold bids or offers for euros or dollars to avoid moving the exchange rate in a direction adverse to open positions held by co-conspirators. By agreeing not to buy or sell at certain times, the traders protected each other’s trading positions by withholding supply of or demand for currency and suppressing competition in the FX market.”

A trader at Barclays reportedly wrote in the group’s electronic chat room: “If you aint cheating, you aint trying.” Clearly, situations where relatively small groups of people can cause relatively small and almost imperceptible tweaks in values that affect a very large market are ripe for manipulation.

How Automation Affects Labor Markets: Rise of the "New Artisans"?

Will the plausible improvements in automation and robotics that seem to be on their way lead to mass unemployment? If not unemployment, what kinds of changes might they cause for the distribution of wages in labor markets? David H. Autor tackles these issues in “Why Are There Still So Many Jobs? The History and Future of Workplace Automation,” which appears in the Summer 2015 issue of the Journal of Economic Perspectives.

(Full disclosure: I have worked as Managing Editor of JEP since 1986. Autor was the editor of JEP from 2009-2014, and thus was my boss during that time. All articles in JEP back to the first issue in 1987 have been freely available online since 2011, courtesy of the American Economic Association.)

Autor argues that most discussions of automation and the labor market focus on one effect–that is, how automation might substitute for certain existing jobs. That’s clearly one potentially important effect, but not the only one. Autor suggests that a fuller framework for how automation affects employment and wages needs to look at three broad factors: 1) how automation substitutes for some jobs but complements others; 2) whether it’s easier or harder for workers to shift into certain new jobs (that is, what economists call “the elasticity of labor supply”); and 3) what goods and services are demanded as income expands.

Of course, it’s obvious that modern workers would have a dramatically lower standard of living if they were deprived of machines and computers and instead had to use a stick they found in the forest to till fields and kill prey. But more generally, Autor writes:

“[T]asks that cannot be substituted by automation are generally complemented by it. Most work processes draw upon a multifaceted set of inputs: labor and capital; brains and brawn; creativity and rote repetition; technical mastery and intuitive judgment; perspiration and inspiration; adherence to rules and judicious application of discretion. Typically, these inputs each play essential roles; that is, improvements in one do not obviate the need for the other. If so, productivity improvements in one set of tasks almost necessarily increase the economic value of the remaining tasks.

An iconic representation of this idea is found in the O-ring production function studied by Kremer (1993). In the O-ring model, failure of any one step in the chain of production leads the entire production process to fail. Conversely, improvements in the reliability of any given link increase the value of improvements in all of the others. Intuitively, if n − 1 links in the chain are reasonably likely to fail, the fact that link n is somewhat unreliable is of little consequence. If the other n − 1 links are made reliable, then the value of making link n more reliable as well rises. Analogously, when automation or computerization makes some steps in a work process more reliable, cheaper, or faster, this increases the value of the remaining human links in the production chain.”
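The complementarity in Kremer’s O-ring model is easy to see numerically. In the sketch below, output is proportional to the product of every link’s success probability; the particular reliabilities and the output scale are arbitrary numbers chosen for the example.

```python
# O-ring production: output is proportional to the product of the
# success probabilities of every link in the chain (Kremer 1993).
from math import prod

def o_ring_output(reliabilities, scale=100.0):
    """Output when production succeeds only if every link succeeds."""
    return scale * prod(reliabilities)

# Payoff to upgrading one link from 0.5 to 0.9 depends on the others:
gain_among_weak   = o_ring_output([0.9, 0.5, 0.5]) - o_ring_output([0.5, 0.5, 0.5])
gain_among_strong = o_ring_output([0.9, 0.9, 0.9]) - o_ring_output([0.5, 0.9, 0.9])
print(round(gain_among_weak, 1), round(gain_among_strong, 1))  # 10.0 32.4
```

The identical upgrade is worth roughly three times as much when the other links are already reliable, which is Autor’s point: automating some steps raises the value of the human links that remain.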

However, if many workers are able to flock into the new jobs created by complementarities with automation, then wage increases in that job category are likely to be small. Only if it’s hard for workers to enter the jobs complemented by automation are wage increases for that job likely to be higher. Autor writes:

“[T]he elasticity of labor supply can mitigate wage gains. If the complementary tasks that construction workers or relationship bankers supply are abundantly available elsewhere in the economy, then it is plausible that a flood of new workers will temper any wage gains that would emanate from complementarities between automation and human labor input. While these kinds of supply effects will probably not offset productivity-driven wage gains fully, one can find extreme examples: Hsieh and Moretti (2003) document that new entry into the real estate broker occupation in response to rising house prices fully offsets average wage gains that would otherwise have occurred.”

Finally, the capital investments that lead to automation and improved robots tend to increase output, and the incomes for those who are producing these additional outputs will also rise. As incomes rise, people demand additional goods and services. The particular goods that are demanded as income rises will also affect how automation and labor markets interact. Here\’s Autor:

[T]he output elasticity of demand combined with income elasticity of demand can either dampen or amplify the gains from automation. In the case of agricultural products over the long run, spectacular productivity improvements have been accompanied by declines in the share of household income spent on food. In other cases, such as the health care sector, improvements in technology have led to ever-larger shares of income being spent on health. Even if … the sector shrinks as productivity rises—this does not imply that aggregate demand falls as technology advances; clearly, the surplus income can be spent elsewhere. As passenger cars displaced equestrian travel and the myriad occupations that supported it in the 1920s, the roadside motel and fast food industries rose up to serve the “motoring public” (Jackson 1993). Rising income may also spur demand for activities that have nothing to do with the technological vanguard. Production of restaurant meals, cleaning services, haircare, and personal fitness is neither strongly complemented nor substituted by current technologies; these sectors are “technologically lagging” in Baumol’s (1967) phrase. But demand for these goods appears strongly income-elastic, so that rising productivity in technologically leading sectors may boost employment nevertheless in these activities.

Autor argues that developments in information technology have dramatically reshaped labor markets in recent decades. But the primary effect is not through fewer jobs: after all, the US unemployment rate was below 5.5% for nearly four full years before the Great Recession took hold in 2008, and has now again fallen into that range. Instead, Autor argues that the effects of technological advances can be seen in job “polarization.” In this phenomenon, which seems to arise not only in the US but also across many European economies, many previously middle-skill jobs including factory jobs and office jobs have been replaced by technology and become less important relative to overall employment. At the same time, low-skill manual jobs that can’t easily be replaced by automation have tended to grow in number, but not in wages–because it is fairly easy for workers to shift into these jobs. And high-skill jobs that are complemented by technology have increased relative to the overall workforce as well as in wages paid.

Interestingly, Autor argues that this job polarization is not likely to persist in the future, although information technology will shape what are the middle-class jobs of the future. He writes:

My own prediction is that employment polarization will not continue indefinitely (as argued in Autor 2013). While some of the tasks in many current middle-skill jobs are susceptible to automation, many middle-skill jobs will continue to demand a mixture of tasks from across the skill spectrum. For example, medical support occupations—radiology technicians, phlebotomists, nurse technicians, and others—are a significant and rapidly growing category of relatively well-remunerated, middle-skill employment. Most of these occupations require mastery of “middle-skill” mathematics, life sciences, and analytical reasoning. They typically require at least two years of postsecondary vocational training, and in some cases a four-year college degree or more. This broad description also fits numerous skilled trade and repair occupations, including plumbers, builders, electricians, heating/ventilating/air-conditioning installers, and automotive technicians. It also fits a number of modern clerical occupations that provide coordination and decision-making functions, rather than simply typing and filing, like a number of jobs in marketing. There are also cases where technology is enabling workers with less esoteric technical mastery to perform additional tasks: for example, the nurse practitioner occupation that increasingly performs diagnosing and prescribing tasks in lieu of physicians. 

I expect that a significant stratum of middle-skill jobs combining specific vocational skills with foundational middle-skills levels of literacy, numeracy, adaptability, problem solving, and common sense will persist in coming decades. My conjecture is that many of the tasks currently bundled into these jobs cannot readily be unbundled—with machines performing the middle-skill tasks and workers performing only a low-skill residual—without a substantial drop in quality. This argument suggests that many of the middle-skill jobs that persist in the future will combine routine technical tasks with the set of nonroutine tasks in which workers hold comparative advantage: interpersonal interaction, flexibility, adaptability, and problem solving. In general, these same demands for interaction frequently privilege face-to-face interactions over remote performance, meaning that these same middle-skill occupations may have relatively low susceptibility to offshoring. Lawrence Katz memorably titles workers who virtuously combine technical and interpersonal tasks as “the new artisans” (see Friedman 2010), and Holzer (2015) documents that “new middle skill jobs” are in fact growing rapidly, even as traditional production and clerical occupations contract.

The same issue of the JEP includes two other articles on automation and labor markets. Joel Mokyr, Chris Vickers, and Nicolas L. Ziebarth look at the history of these concerns among economists over the last two centuries in “The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?” From the abstract:

“Anxieties over technology can take on several forms, and we focus on three of the most prominent concerns. First, there is the concern that technological progress will cause widespread substitution of machines for labor, which in turn could lead to technological unemployment and a further increase in inequality in the short run, even if the long-run effects are beneficial. Second, there has been anxiety over the moral implications of technological progress for human welfare, broadly defined. While, during the Industrial Revolution, the worry was about the dehumanizing effects of work, in modern times, perhaps the greater fear is a world where the elimination of work itself is the source of dehumanization. A third concern cuts in the opposite direction, suggesting that the epoch of major technological progress is behind us. Understanding the history of technological anxiety provides perspective on whether this time is truly different. We consider the role of these three anxieties among economists, primarily focusing on the historical period from the late 18th to the early 20th century, and then compare the historical and current manifestations of these three concerns.”

Finally, Gill A. Pratt writes: “Is a Cambrian Explosion Coming for Robotics?” Pratt is not an economist, but rather an expert in robotics who until recently was a program manager at the Defense Advanced Research Projects Agency, where one of his tasks was oversight of the DARPA Robotics Challenge. Pratt lays out a number of reasons why a dramatic increase in robot capabilities is likely to be near. Here are two of the most important:

Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. In Cloud Robotics— a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows. Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. 

Pratt persuades me, at least, that “[i]t is reasonable to assume that robots will in the not-too-distant future be able to perform any associative memory problem at human levels.” This includes a wide array of jobs that include sensing and recognizing what is in a given environment, considering a list of tasks that should be done, and then acting autonomously to complete those tasks. But as Pratt also notes: “The human brain does much more than store a very large number of associations and access useful memories quickly. It also transforms sensory and other information into generalizable representations invariant to unimportant changes, stores episodic memories, and generalizes learned examples into understanding. The key problems in robot capability yet to be solved are those of generalizable knowledge representation and of cognition based on that representation.”
One might say that the “new artisans” of the middle class in Autor’s description will hold jobs that involve this kind of “generalizable knowledge representation and of cognition based on that representation.” Autor writes: “Machine-learning algorithms may have fundamental problems with reasoning about “purposiveness” and intended uses, even given an arbitrarily large training database of images …” Pratt suggests that the changes in robotics may come so rapidly that human workers and labor market institutions will have a hard time adjusting quickly enough.

Occupational Licensing and Its Discontents

An occupational license means that it is against the law for a potential worker to carry out a certain job without obtaining a license. The basic tradeoff of occupational licensing is fairly well understood: Does occupational licensing as actually implemented offer improved quality of service for consumers in a way that makes it worth a higher price for consumers and a higher wage for those who hold the licenses? As one might suspect, the answer to this question is “it depends.” In particular, it depends on how much the rules for obtaining a license actually improve quality, and how much the rules limit potential competition. “Occupational Licensing: A Framework for Policymakers,” a July 2015 report from the US Department of the Treasury Office of Economic Policy, the Council of Economic Advisers, and the US Department of Labor, explores the issues.

Occupational licensing is a bigger issue than often recognized, because licensed workers are a growing share of the workforce, now about one in four.

Much of the report is a review of the evidence on occupational licensing, which is often gathered by comparing prices and measures of the quality of work provided (like the number of problems experienced by consumers) across states with more or less stringent occupational licensing requirements, or within a given state that alters its rules. These studies often focus on one or a few occupations at a time, so overall conclusions are tricky. But after reviewing the evidence, the report notes:

Estimates find that unlicensed workers earn 10 to 15 percent lower wages than licensed workers with similar levels of education, training, and experience. … Licensing laws also lead to higher prices for goods and services, with research showing effects on prices of between 3 and 16 percent. Moreover, in a number of other studies, licensing did not increase the quality of goods and services, suggesting that consumers are sometimes paying higher prices without getting improved goods or services.

A common dynamic is that the licensed profession tries to pass rules so that those outside the profession cannot offer certain services. In one prominent recent case, the occupational licensing board for dentists in North Carolina, which is made up of dentists, argued that only licensed dentists could provide teeth-whitening services. That decision seemed to be less about the quality of teeth-whitening services for consumers, and more about trying to lock up a lucrative service for dentists. It’s also generally true that it’s harder for those with lower skill levels or incomes to bear the costs of obtaining an occupational license–even if some of what is required for the license doesn’t seem all that closely related to doing quality work.

Occupational licensing rules vary widely across states. It seems unlikely that the training needed to be, say, a safe cosmetologist or a good security guard varies dramatically across states. Thus, the differences suggest that some states have standards for occupational licenses that are either way too high or way too low. The report notes:

Estimates suggest that over 1,100 occupations are regulated in at least one State, but fewer than 60 are regulated in all 50 States, showing substantial differences in which occupations States choose to regulate. For example, funeral attendants are licensed in nine States and florists are licensed in only one State. … The share of licensed workers varies widely State-by-State, ranging from a low of 12 percent in South Carolina to a high of 33 percent in Iowa. Most of these State differences are due to State policies, not differences in occupation mix across States. … States also have very different requirements for obtaining a license. For example, Michigan requires three years of education and training to become a licensed security guard, while most other States require only 11 days or less. South Dakota, Iowa, and Nebraska require 16 months of education to become a licensed cosmetologist, while New York and Massachusetts require less than 8 months. … For example, while all States require manicurists to be licensed, some also require proof of English proficiency, and the required amount of training at a State-approved cosmetology school varies from 100 to 600 hours.

One theme that struck me as I read the report is that there are a number of new pressures for reform of existing rules about occupational licensing. Here are some of the pressures:

Telework. Lots more work can now be done long-distance, across state lines and, of course, across national borders. As a result, occupational licensing rules that vary across states are looking obsolete. The report notes: “Telework can enable more flexible scheduling and work locations, something that is important in helping workers with competing demands on their time stay in the labor force and maintain work-life balance. It has the potential to offer clients more continuous access to providers and access to more specialized providers, as well as increasing the pool of competing practitioners overall. … Insurance companies often provide clients access to a nurse-staffed call center to answer minor medical questions. These nurses reside and practice near the call center location, but they may take calls and make over-the-phone diagnoses for clients across the country. … Telehealth applications including hotlines and interactive web-based programs were used extensively following the September 11th, 2001 terrorist attacks on New York City and Washington, D.C., and during recent hurricanes. … State licensure has proven to be a barrier to the growth and development of telework …”

Distance learning. Students now take courses from institutions outside their state. Again, occupational licensing rules that vary dramatically across states end up looking obsolete. “In 2013, 11.3 percent of all U.S. undergraduate students (2.0 million students) were enrolled in institutions in which all instructional content was delivered through distance education, and more than one in four undergraduates took at least one distance education course (4.6 million students). Students can now take courses remotely from training providers in almost any State.” Apparently there is a National Council for State Authorization Reciprocity Agreements–but with only 23 states participating.

The possibility of online ratings. Many of us now look at online ratings when choosing a provider of a good or service. “In recent years, however, the growth of online consumer information and review websites has made it easier for consumers to find information on the quality of firms and practitioners, and some observers have argued that consumer protection regulation should be updated to reflect this new access to information.” Online ratings are far from perfect, of course. But we are no longer in a world where you have to judge a potential plumber or electrician or manicurist by the size of their ad in the telephone yellow pages.

Military spouses. There’s always political support for steps that wouldn’t cost money but would make life easier for families where one person is serving in the military. “About 35 percent of military spouses in the labor force work in professions that require State licenses or certification, and they are ten times more likely to have moved across State lines in the last year than their civilian counterparts.”

Reintegrating those leaving prison back into the workplace. There’s a lot of concern about high US incarceration rates, and thus about what will happen as prisoners are released. “Twenty-five States and the District of Columbia have no standards in place governing the relevance of conviction records of applicants for occupational licenses. In these States, a licensing board may deny a license to an applicant who has a criminal conviction, regardless of whether the conviction is relevant to the license sought, how recent it was, or whether there were any extenuating circumstances. In many States, employers and occupational licensing boards are also permitted to ask about and consider arrests that never led to a conviction in making their employment decision.”

Student loan defaults and licenses. “In 21 States, defaulting on student loan debt can result in the suspension or revocation of a worker’s occupational license.” Obviously, encouraging people to repay their loans by taking away their ability to work seems likely to be counterproductive.

Encouraging mobility across the economy. Geographical mobility is economically useful, as at least some workers can relocate to where jobs are more available. But geographical mobility has been declining in the US economy (see here and here), and the inability to transfer an occupational license to a different state seems to be one of the reasons. “There are clear benefits to mobility, both for workers, employers, and the economy at large, and limits to mobility are themselves a cause for concern. … [S]ince many occupations are licensed at the State level, licensed practitioners typically have to acquire a new license when they move across States. This alone entails various procedural hurdles, such as paying fees, filling out administrative paperwork, and submitting an application and waiting for it to be processed. Moreover, since each State sets its own licensing requirements, these often vary across State lines, and licensed individuals seeking to move to another State often discover that they must meet new qualifications (such as education, experience, training, testing, etc.) if they want to continue working in their occupation. The resulting costs in both time and money can discourage people from moving or lead them to exit their occupation.”

Under the force of these various pressures, policy with regard to occupational licenses is evolving. There are more interstate compacts, in which states agree to honor licenses for certain occupations from other states. Colorado has set up an umbrella agency that reviews all occupational licensing rules in the state for costs and benefits and offers recommendations to the legislature. It can be useful to require that the group overseeing occupational licensing rules include some people from outside the occupation being licensed–including representatives of those who use the good or service. It can also be useful to require that the group that sets up an occupational licensing rule play a role in enforcing that rule.

For more background on the economics of occupational licensing, a useful starting point is “Reforming Occupational Licensing Policies,” a January 2015 report written by Morris S. Kleiner for the Hamilton Project. I discussed some evidence on “Occupational Licensing and Low Income Jobs” in a May 2012 post. Or if you want to go back in time, Kleiner provided an overview of “Occupational Licensing” in the Fall 2000 issue of the Journal of Economic Perspectives. Kleiner starts off by tracing economic concerns about occupational licensing back to Adam Smith, who was writing at a time when overly long apprenticeships were the way in which occupational licenses were obtained. Kleiner wrote:

 Occupational regulation was discussed by Adam Smith (1776 [1937]) in the Wealth of Nations (Book I, Ch. 10, Part II), where he focuses on the ability of the crafts to lengthen apprenticeship programs and limit the number of apprentices per master, thus ensuring higher earnings for persons in these occupations.

“The patrimony of a poor man lies in the strength and dexterity of his hands; and to hinder him from employing this strength and dexterity in what manner he thinks proper without injury to his neighbor, is a plain violation of this most sacred property. It is a manifest encroachment upon the just liberty both of the workman, and of those who might be disposed to employ him. As it hinders the one from working at what he thinks proper, so it hinders the others from employing whom they think proper. To judge whether he is fit to be employed, may surely be trusted to the discretion of the employers whose interest it so much concerns. The affected anxiety of the law-giver lest they should employ an improper person, is evidently as impertinent as it is oppressive. The institution of long apprenticeships can give no security that insufficient workmanship shall not frequently be exposed to public sale.”

Smith states that long apprenticeships are no assurance of quality, nor are they useful in inculcating industriousness among workers. Instead, he argues, they serve only to “prevent this reduction of price, and consequently of wages and profit, by restraining that free competition which would most certainly occasion it.”

The US Financial Sector in the Long-Run: Where are the Economies of Scale?

A larger financial sector is clearly correlated with economic development, in the sense that high-income countries around the world have, on average, larger banking sectors, credit card markets, stock and bond markets, and so on compared with lower-income countries. But there are also concerns that the financial sector in high-income countries can grow in ways that end up creating economic instability (as I’ve discussed here, here, and here). Thomas Philippon provides some basic evidence on the growth of the US financial sector over the past 130 years in “Has the US Finance Industry Become Less Efficient? On the Theory and Measurement of Financial Intermediation,” published in the April 2015 issue of the American Economic Review (105:4, pp. 1408–1438). The AER is not freely available online, but many readers can obtain access through a library subscription.

There are a couple of ways to think about the size of a country’s financial sector relative to its economy. One can add up the size of certain financial markets–the market value of bank loans, stocks, bonds, and the like–and divide by GDP. Or one can add up the economic value added by the financial sector. For example, instead of adding up the bank loans, you add up the value of banking services provided. Similarly, instead of adding up the value of the stock market, you add up the value of the services provided by stockbrokers and investment managers.

Here’s a figure from Philippon showing both measures of finance as a share of the US economy over the long run since 1886.

The orange line, measured on the right axis, is “intermediated assets,” which measures the size of the financial sector as the sum of all debt and equity issued by nonfinancial firms, together with the sum of all household debt, and some other smaller categories. Back in the late 19th century, the US financial sector was roughly equal in size to GDP. By just before the Great Depression, it had risen to almost three times GDP, before sinking back to about 1.5 times GDP. More recently, you can see the financial sector spiking with the boom in real estate markets and stock markets in the mid-2000s at more than 4 times GDP, before dropping slightly. The overall trend is clearly up, but it’s also clearly a bumpy ride.

The green line shows “finance income,” which can be understood as a measure of the value added by firms in the financial sector. For the uninitiated, “value added” has a specific meaning to economists. Basically, it is calculated by taking the total revenue of a firm and subtracting the cost of all goods and services purchased from other firms–for example, subtracting the costs of supplies or machinery purchased. In the figure, the “finance income” measure of value added consists mostly of the wages and salaries paid by financial firms, along with any profits earned.
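For readers who like to see the arithmetic spelled out, here is a minimal sketch of the value-added calculation; the revenue and input figures are invented for illustration and are not drawn from Philippon's data:

```python
# Value added = total revenue minus the cost of goods and services
# purchased from other firms (intermediate inputs). What remains is
# essentially wages, salaries, and profits. Figures are illustrative.

def value_added(revenue: float, intermediate_inputs: float) -> float:
    """Return a firm's value added: revenue minus purchased inputs."""
    return revenue - intermediate_inputs

# A hypothetical bank with $10 million in revenue that spends
# $3 million on supplies, software, and other purchased inputs:
print(value_added(10.0, 3.0))  # 7.0 (million), paid as wages and profits
```

Summing this quantity across all financial firms gives the economy-wide "finance income" measure, avoiding the double-counting that would arise from summing revenues alone.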

An intriguing pattern emerges here: finance income tracks intermediated assets fairly closely. In other words, the amount paid to the financial sector is more-or-less a fixed proportion of total financial assets. It’s not obvious why this should be so. For example, imagine that because of a rise in housing prices, the total mortgage debt of households rises substantially over time, or because of rising stock prices over several decades, the total value of the stock market is up. Especially in an economy where information technology is making rapid strides, it’s not clear why incomes in the financial sector should be rising at the same pace. Does a bank need to incur twice the costs if it issues a mortgage for $500,000 as compared to when it issues a mortgage for $250,000? Does an investment adviser need to incur twice the costs when giving advice on a retirement account of $1 million as when giving advice on a retirement account of $500,000? Shouldn’t there be some economies of scale in financial services?
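The pattern can be restated as a unit cost of intermediation: finance income divided by intermediated assets. A minimal sketch, with invented numbers, shows why a flat ratio implies an absence of scale economies:

```python
# Unit cost of intermediation = finance income / intermediated assets.
# If the ratio stays flat while assets double, the sector's income has
# doubled too -- i.e., no economies of scale. Numbers are illustrative.

def unit_cost(finance_income: float, intermediated_assets: float) -> float:
    """Income earned by the financial sector per dollar intermediated."""
    return finance_income / intermediated_assets

# Suppose mortgage balances and stock values double, and the financial
# sector's income doubles right along with them:
before = unit_cost(finance_income=2.0, intermediated_assets=100.0)
after = unit_cost(finance_income=4.0, intermediated_assets=200.0)
print(before, after)  # 0.02 0.02 -- cost per dollar is unchanged
```

If the financial sector did enjoy economies of scale, the ratio would fall as intermediated assets grew; the figure's two lines would then diverge rather than track each other.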

Philippon isn’t the first to raise this question: for example, Burton Malkiel has asked why there aren’t economies of scale in asset management fees here. But Philippon provides evidence that, for whatever reason, a lack of economies of scale has been widespread and long-lasting in the US financial sector.

Full disclosure: The AER is published by the American Economic Association, which is also the publisher of the Journal of Economic Perspectives, where I have worked as Managing Editor since 1986.

Summer 2015 Journal of Economic Perspectives Available Online

Since 1986, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which about four years ago made the decision–much to my delight–that the journal would be freely available online, from the current issue back to the first issue in 1987. The journal’s website is here. I’ll start here with the Table of Contents for the just-released Summer 2015 issue. Below are abstracts and direct links to all the papers. I will probably blog about some of the individual papers in the next week or two, as well.


Symposium on Automation and Labor Markets

“Why Are There Still So Many Jobs? The History and Future of Workplace Automation,” by David H. Autor
In this essay, I begin by identifying the reasons that automation has not wiped out a majority of jobs over the decades and centuries. Automation does indeed substitute for labor—as it is typically intended to do. However, automation also complements labor, raises output in ways that lead to higher demand for labor, and interacts with adjustments in labor supply. Journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor. Changes in technology do alter the types of jobs available and what those jobs pay. In the last few decades, one noticeable change has been a “polarization” of the labor market, in which wage gains went disproportionately to those at the top and at the bottom of the income and skill distribution, not to those in the middle; however, I also argue that this polarization is unlikely to continue very far into the future. The final section of this paper reflects on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. I argue that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
Full-Text Access | Supplementary Materials

“The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?” by Joel Mokyr, Chris Vickers and Nicolas L. Ziebarth

Technology is widely considered the main source of economic progress, but it has also generated cultural anxiety throughout history. The developed world is now suffering from another bout of such angst. Anxieties over technology can take on several forms, and we focus on three of the most prominent concerns. First, there is the concern that technological progress will cause widespread substitution of machines for labor, which in turn could lead to technological unemployment and a further increase in inequality in the short run, even if the long-run effects are beneficial. Second, there has been anxiety over the moral implications of technological progress for human welfare, broadly defined. While, during the Industrial Revolution, the worry was about the dehumanizing effects of work, in modern times, perhaps the greater fear is a world where the elimination of work itself is the source of dehumanization. A third concern cuts in the opposite direction, suggesting that the epoch of major technological progress is behind us. Understanding the history of technological anxiety provides perspective on whether this time is truly different. We consider the role of these three anxieties among economists, primarily focusing on the historical period from the late 18th to the early 20th century, and then compare the historical and current manifestations of these three concerns.
Full-Text Access | Supplementary Materials

“Is a Cambrian Explosion Coming for Robotics?” by Gill A. Pratt

About half a billion years ago, life on earth experienced a short period of very rapid diversification called the “Cambrian Explosion.” Many theories have been proposed for the cause of the Cambrian Explosion, one of the most provocative being the evolution of vision, allowing animals to dramatically increase their ability to hunt and find mates. Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. I examine some key technologies contributing to the present excitement in the robotics field. As with other technological developments, there has been a significant uptick in concerns about the societal implications of robotics and artificial intelligence. Thus, I offer some thoughts about how robotics may affect the economy and some ways to address potential difficulties.
Full-Text Access | Supplementary Materials

Symposium on Pre-analysis Plans in Economics

“Promises and Perils of Pre-analysis Plans,” by Benjamin A. Olken
The purpose of this paper is to help think through the advantages and costs of rigorous pre-specification of statistical analysis plans in economics. A pre-analysis plan pre-specifies in a precise way the analysis to be run before examining the data. A researcher can specify variables, data cleaning procedures, regression specifications, and so on. If the regressions are pre-specified in advance and researchers are required to report all the results they pre-specify, data-mining problems are greatly reduced. I begin by laying out the basics of what a statistical analysis plan actually contains so those researchers unfamiliar with it can better understand how it is done. In so doing, I have drawn both on standards used in clinical trials, which are clearly specified by the Food and Drug Administration, as well as my own practical experience from writing these plans in economics contexts. I then lay out some of the advantages of pre-specified analysis plans, both for the scientific community as a whole and also for the researcher. I also explore some of the limitations and costs of such plans. I then review a few pieces of evidence that suggest that, in many contexts, the benefits of using pre-specified analysis plans may not be as high as one might have expected initially. For the most part, I will focus on the relatively narrow issue of pre-analysis for randomized controlled trials.
Full-Text Access | Supplementary Materials

“Pre-analysis Plans Have Limited Upside, Especially Where Replications Are Feasible,” by Lucas C. Coffman and Muriel Niederle

The social sciences—including economics—have long called for transparency in research to counter threats to producing robust and replicable results. In this paper, we discuss the pros and cons of three of the more prominent proposed approaches: pre-analysis plans, hypothesis registries, and replications. They have been primarily discussed for experimental research, both in the field including randomized control trials and the laboratory, so we focus on these areas. A pre-analysis plan is a credibly fixed plan of how a researcher will collect and analyze data, which is submitted before a project begins. Though pre-analysis plans have been lauded in the popular press and across the social sciences, we will argue that enthusiasm for pre-analysis plans should be tempered for several reasons. Hypothesis registries are a database of all projects attempted; the goal of this promising mechanism is to alleviate the “file drawer problem,” which is that statistically significant results are more likely to be published, while other results are consigned to the researcher’s “file drawer.” Finally, we evaluate the efficacy of replications. We argue that even with modest amounts of researcher bias—either replication attempts bent on proving or disproving the published work, or poor replication attempts—replications correct even the most inaccurate beliefs within three to five replications. We offer practical proposals for how to increase the incentives for researchers to carry out replications.
Full-Text Access | Supplementary Materials

Symposium on Doing Business

“Law, Regulation, and the Business Climate: The Nature and Influence of the World Bank Doing Business Project,” by Timothy Besley
The importance of a well-functioning legal and regulatory system in creating an effective market economy is now widely accepted. One flagship project that tries to measure the environment in which businesses operate in countries across the world is the World Bank’s Doing Business project, which was launched in 2002. This project gathers quantitative data to compare regulations faced by small and medium-size enterprises across economies and over time. The centerpiece of the project is the annual Doing Business report. It was first published in 2003 with five sets of indicators for 133 economies, and currently includes 11 sets of indicators for 189 economies. The report includes a table that ranks each country in the world according to its scores across the indicators. The Doing Business project has become a major resource for academics, journalists, and policymakers. The project also enjoys a high public profile with close to ten million hits on its website each year. With such interest, it’s no surprise that the Doing Business report has come under intense scrutiny. In 2012, following discussions by its board, the World Bank commissioned an independent review panel to evaluate the project, on which I served as a member. In this paper, I first describe how the Doing Business project works and illustrate with some of the key findings of the 2015 report. Next, I address what is valuable about the project, the criticisms of it, and some wider political economy issues illustrated by the report.
Full-Text Access | Supplementary Materials

“How Business Is Done in the Developing World: Deals versus Rules,” by Mary Hallward-Driemeier and Lant Pritchett
What happens in the developing world when stringent regulations characterizing the investment climate meet weak government willingness or capability to enforce those regulations? How is business actually done? The Doing Business project surveys experts concerning the legally required time and costs of regulatory compliance for various aspects of private enterprise—starting a firm, dealing with construction permits, trading across borders, paying taxes, getting credit, enforcing contracts, and so on—around the world. The World Bank’s firm-level Enterprise Surveys around the world ask managers at a wide array of firms about their business, including questions about how long it took to go through various processes like obtaining an operating license or a construction permit, or bringing in imports. This paper compares the results of three broadly comparable indicators from the Doing Business and Enterprise Surveys. Overall, we find that the estimate of legally required time for firms to complete a certain legal and regulatory process provided by the Doing Business survey does not summarize even modestly well the experience of firms as reported by the Enterprise Surveys. When strict de jure regulation and high rates of taxation meet weak governmental capabilities for implementation and enforcement, we argue that researchers and policymakers should stop thinking about regulations as creating “rules” to be followed, but rather as creating a space in which “deals” of various kinds are possible.
Full-Text Access | Supplementary Materials


“The Microeconomic Dimensions of the Eurozone Crisis and Why European Politics Cannot Solve Them,” by Christian Thimann

The academic and policy debate about the crisis in Europe’s single currency area is usually dominated by macroeconomic and public sector considerations. The microeconomic dimensions of the crisis and the private sector issues typically get much less attention. However, it is the private sector hiring choices of domestic and foreign firms that will ultimately be decisive. This paper argues there are two main problems holding back private sector employment creation in the stressed eurozone countries. First, there is a persistent competitiveness problem in some of the eurozone countries due to high labor costs relative to underlying productivity. Second, widespread structural barriers make job creation in these countries far more arduous than in many other advanced economies, and even more arduous than in some key emerging economies and formerly planned economies. Structural barriers to private sector development are particularly widespread in the areas of labor market functioning, goods market functioning, and government regulation. Evidence from the World Economic Forum’s Global Competitiveness Index and the World Bank’s Doing Business dataset confirms the immense size and persistence of these barriers, despite improvements in some countries in recent years. The paper also presents a novel explanation for the difficulty of structural reforms in the eurozone, tracing the challenge to the current trend to “Europeanize” and “politicize” economic reform discussions in national policy fields where “Europe” is not a legitimate actor and the European political level is not effective.
Full-Text Access | Supplementary Materials

“E-Books: A Tale of Digital Disruption,” by Richard J. Gilbert

E-book sales surged after Amazon introduced the Kindle e-reader at the end of 2007 and accounted for about one quarter of all trade book sales by the end of 2013. Amazon’s aggressive (low) pricing of e-books led to allegations that e-books were bankrupting brick-and-mortar booksellers. Amazon’s commanding position as a bookseller also raises concerns about monopoly power, and publishers are concerned about Amazon’s power to displace them in the book value chain. I find little evidence that e-books are primarily responsible for the decline of independent booksellers. I also conclude that entry barriers are not sufficient to allow Amazon to set monopoly prices. Publishers are at risk from Amazon’s monopsony (buyer) power and so sought “agency” pricing in an effort to raise the price of e-books, promote retail competition, and reduce Amazon’s influence as an e-retailer. (In the agency pricing model, the publisher specifies the retail price with a commission for the retailer. In a traditional, “wholesale” pricing model, publishers sell a book to retailers at a wholesale price and retailers set the retail price.) Although agency pricing was challenged by the Department of Justice, it may yet prevail in some form as an equilibrium pricing model for e-book sales.
Full-Text Access | Supplementary Materials

“The Indian Gaming Regulatory Act and Its Effects on American Indian Economic Development,” by Randall K. Q. Akee, Katherine A. Spilde and Jonathan B. Taylor

The Indian Gaming Regulatory Act (IGRA), passed by the US Congress in 1988, was a watershed in the history of policymaking directed toward reservation-resident American Indians. IGRA set the stage for tribal government-owned gaming facilities. It also shaped how this new industry would develop and how tribal governments would invest gaming revenues. Since then, Indian gaming has approached commercial, state-licensed gaming in total revenues. Gaming operations have had a far-reaching and transformative effect on American Indian reservations and their economies. Specifically, Indian gaming has allowed marked improvements in several important dimensions of reservation life. For the first time, some tribal governments have moved to fiscal independence. Native nations have invested gaming revenues in their economies and societies, often with dramatic effect.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials

Editorial Note: Correction to Jeffrey B. Liebman’s “Understanding the Increase in Disability Insurance Benefit Receipt in the United States”
Full-Text Access | Supplementary Materials