Network Effects, Big Data, and Antitrust Issues For Big Tech

You don’t need to be a weatherman to see that the antitrust winds are blowing toward the big tech companies like Amazon, Facebook, Google, Apple, and others. But an immediate problem arises. At least under modern US law, being a monopoly (or a near-monopoly) is not illegal. Nor is making high profits illegal, especially when it is accomplished by providing services that are free to consumers and making money through advertising. Antitrust kicks in when anticompetitive behavior is involved: that is, a situation in which a firm takes actions which have the effect of blocking actual or potential competitors.

For example, the antitrust case against Microsoft that was settled back in 2001 wasn’t that the firm was big or successful, but rather that the firm was engaged in an anticompetitive practice of “tying” together separate products, and in this way trying to use its near-monopoly position in the operating systems that run personal computers to gain a similar monopoly position for its internet browser–and thereby to drive off potential competitors.

In the case of big tech companies, a common theory is that they hold a monopoly position because of what economists call “network effects.” The economic theory of network effects started with the observation that certain products are only valuable if other people also own the same product–think of a telephone or fax machine. Moreover, the product becomes more valuable as the network gets bigger. When “platform” companies like Amazon or Facebook came along, network effects got a new twist. The idea became that if a website managed to gain a leadership position in attracting buyers and sellers (like Amazon, OpenTable, or Uber), or users and providers of content (like Facebook, YouTube, or Twitter), then others would be attracted to the website as well. Any potentially competing website might have a hard time building up its own critical mass of users, in which case network effects are acting as an anticompetitive barrier.

Of course, the idea that an already-popular meeting place has an advantage isn’t limited to the virtual world: many shopping malls and downtown areas rely on a version of network effects, too, as do stock markets, flea markets, and bazaars.

But while it’s easy to sketch out an argument about network effects, the question of how network effects work in reality isn’t a simple one. David S. Evans and Richard Schmalensee offer a short essay on “Debunking the ‘Network Effects’ Bogeyman: Policymakers need to march to the evidence, not to slogans,” in Regulation magazine (Winter 2017-18, pp. 36-39).

As they point out, lots of companies that at the time seemed to have an advantage of “network effects” have faltered: for example, eBay looked like the network Goliath back in 2001, but it was soon overtaken by Amazon. They write:

“The flaw in that reasoning is that people can use multiple online communications platforms, what economists call ‘multihoming.’ A few people in a social network try a new platform. If enough do so and like it, then eventually all network members could use it and even drop their initial platform. This process has happened repeatedly. AOL, MSN Messenger, Friendster, MySpace, and Orkut all rose to great heights and then rapidly declined, while Facebook, Snap, WhatsApp, Line, and others quickly rose. …

“Systematic research on online platforms by several authors, including one of us, shows considerable churn in leadership for online platforms over periods shorter than a decade. Then there is the collection of dead or withered platforms that dot this sector, including Blackberry and Windows in smartphone operating systems, AOL in messaging, Orkut in social networking, and Yahoo in mass online media …

“The winner-take-all slogan also ignores the fact that many online platforms make their money from advertising. As many of the firms that died in the dot-com crash learned, winning the opportunity to provide services for free doesn’t pay the bills. When it comes to micro-blogging, Twitter has apparently won it all. But it is still losing money because it hasn’t been very successful at attracting advertisers, which are its main source of income. Ignoring the advertising side of these platforms is a mistake. Google is still the leading platform for conducting searches for free, but when it comes to product searches—which is where Google makes all its money—it faces serious competition from Amazon. Consumers are roughly as likely to start product searches on Amazon.com, the leading e-commerce firm, as on Google, the leading search-engine firm.”
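The multihoming dynamic that Evans and Schmalensee describe–a few users try the entrant at low cost, and adoption snowballs once enough peers are on it–can be captured in a toy simulation. This sketch is my own illustration, not from their essay; the parameter values (trial rate, peer influence, quality edge) are invented for the example.

```python
import random

def simulate_switch(n_users=1000, quality_edge=0.05, periods=50, seed=0):
    """Toy model of multihoming: users can try a new platform without
    abandoning the old one, so a modest quality edge plus peer adoption
    can eventually flip the whole network to the entrant.

    quality_edge: extra per-period chance of adopting the entrant,
    representing its standalone appeal. All parameters are illustrative.
    Returns the fraction of users on the entrant after `periods`.
    """
    rng = random.Random(seed)
    on_entrant = [False] * n_users
    for _ in range(periods):
        share = sum(on_entrant) / n_users
        for i in range(n_users):
            if not on_entrant[i]:
                # Adoption chance: small baseline trial rate, plus a
                # network-effect term rising with peers already there,
                # plus the entrant's quality edge.
                if rng.random() < 0.02 + 0.5 * share + quality_edge:
                    on_entrant[i] = True
    return sum(on_entrant) / n_users

# With a quality edge, the entrant overcomes the incumbent's installed
# base despite starting from zero users.
print(simulate_switch(quality_edge=0.05))
# With no edge and few periods, uptake stays modest.
print(simulate_switch(quality_edge=0.0, periods=5))
```

The point of the sketch is only that when trying a second platform is cheap, an installed base is not a durable moat: the incumbent's network advantage erodes endogenously as the entrant's share grows.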

It should also be noted that if network effects are large and block new competition, they pose a problem for antitrust enforcement, too. Imagine that Amazon or Facebook was required by law to split into multiple pieces, with the idea that the pieces would compete with each other. But if network effects really are large, then one or another of the pieces will grow to critical mass and crowd out the others–until the status quo re-emerges.

A related argument is that big tech firms have access to Big Data from many players in a given market, which gives them an advantage. Evans and Schmalensee are skeptical of this point, too. They write:

“Like the simple theory of network effects, the “big data is bad” theory, which is often asserted in competition policy circles as well as the media, is falsified by not one, but many counterexamples. AOL, Friendster, MySpace, Orkut, Yahoo, and many other attention platforms had data on their many users. So did Blackberry and Microsoft in mobile. As did numerous search engines, including AltaVista, Infoseek, and Lycos. Microsoft did in browsers. Yet in these and other categories, data didn’t give the incumbents the power to prevent competition. Nor is there any evidence that their data increased the network effects for these firms in any way that gave them a substantial advantage over challengers.

“In fact, firms that at their inception had no data whatsoever sometimes displaced the leaders. When Facebook launched its social network in India in 2006 in competition with Orkut, it had no data on Indian users since it didn’t have any Indian users. That same year Orkut was the most popular social network in India, with millions of users and detailed data on them. Four years later, Facebook was the leading social network in India. Spotify provides a similar counterexample. When Spotify entered the United States in 2011, Apple had more than 50 million iTunes users and was selling downloaded music at a rate of one billion songs every four months. It had data on all those people and what they downloaded. Spotify had no users and no data when it started. Yet it has been able to grow to become the leading source of digital music in the world. In all these and many other cases the entrants provided a compelling product, got users, obtained data on those users, and grew.

“The point isn’t that big data couldn’t provide a barrier to entry or even grease network effects. As far as we know, there is no way to rule that out entirely. But at this point there is no empirical support that this is anything more than a possibility, which one might explore in particular cases.”

Evans and Schmalensee are careful to note that they are not suggesting that online platform companies should be exempt from antitrust scrutiny, and perhaps in some cases the network and data arguments might carry weight. As they write:

“Nothing we’ve said here is intended to endorse a “go-easy” policy toward online platforms when it comes to antitrust enforcement. … There’s no particular reason to believe these firms are going to behave like angels. Whether they benefit from network effects or not, competition authorities ought to scrutinize dominant firms when it looks like they are breaking the rules and harming consumers. As always, the authorities should use evidence-based analysis grounded in sound economics. The new economics of multisided platforms provides insights into strategies these firms may engage in as well as cautioning against the rote application of antitrust analysis designed for single-sided firms to multisided ones.

“It is time to retire the simple network effects theory—which is older than the fax machine—in place of deeper theories, with empirical support, of platform competition. And it is not too soon to ask for supporting evidence before accepting any version of the “big data is bad” theory. Competition policy should march to the evidence, not to the slogans.”

For an introduction to the economics of multi-sided “platform” markets, a useful starting point is Marc Rysman’s “The Economics of Two-Sided Markets” in the Summer 2009 issue of the Journal of Economic Perspectives (23:3, 125-43).

For an economic analysis of policy, the underlying reasons matter a lot, because they set a precedent that will affect future actions by regulators and firms. Thus, it’s not enough to rail against the size of Big Tech. It’s necessary to get specific: for example, about how public policy should view network effects or online buyer-and-seller platforms, and about the collection, use, sharing, and privacy protections for data. We certainly don’t want the current big tech companies to stifle new competition or abuse consumers. But in pushing back against the existing firms, we don’t want regulators to set rules that could close off new competitors, either.

Four Examples from the Automation Frontier

Cotton pickers. Shelf-scanners at Walmart. Quality control at building sites. Radiologists. These are just four examples of jobs that are being transformed and even sometimes eliminated by the newest wave of automated and programmable machinery. Here are four short stories from various sources, which of course represent a much broader transformation happening across the global economy.
_____________________________

Virginia Postrel discusses “Lessons From a Slow-Motion Robot Takeover: Cotton harvesting is now dominated by machines. But it took decades to happen” (Bloomberg View, February 9, 2018). She describes a “state-of-the-art John Deere cotton stripper.” It costs $700,000, and harvests 100-120 acres each day. As it rolls across the field, “every few minutes a plastic-wrapped cylinder eight feet across plops out the back, holding as much as 5,000 pounds of cotton ready for the gin.” Compared with the hand-picking of cotton some decades back, the machine replaces perhaps 1,000 workers.

One main lesson, Postrel emphasizes, is that big technological changes take time, in part because they often depend on a group of complementary innovations becoming available. In this case: “Gins had to install dryers, for instance, because machine-harvested cotton retained more moisture. Farmers needed chemical defoliants to apply before harvesting so that their bales wouldn’t be contaminated with leaf trash. Breeders had to develop shorter plants with bolls that emerged at the same time, allowing a single pass through the fields.” Previous farm innovations often took decades to diffuse, too: as I’ve mentioned before on this website, that was the pattern for previous farm breakthroughs like the McCormick reaper and the tractor.

The high productivity of the modern cotton stripper clearly costs jobs, but–though it’s easy for me to say–these were jobs that the US is better off without. Cotton-picking by hand was part of a social system built on generations of low-paid, predominantly black workers. And inexpensive clothing, made possible by cotton harvested more efficiently, is important for the budgets of low-income families.

____________________
Another example mentioned by Postrel is the case of robots at Walmart that autonomously roam the aisles, “identifying when items are out of stock, locating incorrect prices, and detecting wrong or missing labels.” Erin Winick tells the story in “Walmart’s new robots are loved by staff—and ignored by customers: Bossa Nova is creating robotic coworkers for the retail world” (MIT Technology Review, January 31, 2018).

Again, these robots take jobs that a person could be doing. But the article notes that the robots are quite popular among the Walmart staff, who name the robots, make sure the robots are wearing their official Walmart nametags, and introduce the robots to customers. From the employee point of view, the robots are taking over the dull and menial task of scanning shelves–and the employees are glad to hand over that task. Apparently some shoppers are curious about the robots and ask about them, but lots of other shoppers just ignore them and navigate around them.

_______________________

An even more high-tech example is technology which uses lidar-equipped robots to do quality control on construction sites. Evan Ackerman explains in “AI Startup Using Robots and Lidar to Boost Productivity on Construction Sites: Doxel’s lidar-equipped robots help track construction projects and catch mistakes as they happen” (IEEE Spectrum, January 24, 2018).

On big construction projects, the tradition has been that at the end of the workday, someone walks around and checks how everything is going. The person carries a clipboard and a tape measure, and spot-checks key measurements. This technology sends in a robot at the end of the day, instead, programmed to crawl all around the building site. It’s equipped with lidar, which stands for “light detection and ranging,” which essentially means using lasers to measure distances. It can check exactly what has been installed, and that it is installed in precisely the right place. Perhaps the area that is going to be the top of a staircase is not precisely aligned with the bottom? The robot will know. Any needed changes or corrections can thus happen much sooner, rather than waiting until a problem becomes apparent later in the building process.

As Ackerman writes: “[I]t may or may not surprise you to learn that 98 percent of large construction projects are delivered (on average) 80 percent over budget and 20 months behind schedule. According to people who know more about these sorts of things than I do, productivity in the construction industry hasn’t improved significantly in 80 years.” In a pilot study on one site, this technology raised labor productivity by 38%–because workers could fix little problems now, rather than bigger problems later.

But let’s be honest: At least in the immediate short-run, this technology reduces the need for employment, too, because fewer workers would be needed to fix problems on a given site. Of course, ripping out previous work and reinstalling it again, perhaps more than once, isn’t the most rewarding job, either. And the ultimate result is not just a building that is constructed more efficiently, but a building that is likely to be longer-lasting and perhaps safer, too.
___________________

A large proportion of hospital patients have some kind of imaging scan: X-ray, MRI, CAT, and so on. Diagnostic radiologists are the humans who look at those scans and interpret them. Could most of their work be turned over to computers, with perhaps a few humans in reserve for the tough cases?

Hugh Harvey offers a perspective in “Why AI Will Not Replace Radiologists” (Medium: Towards Data Science, January 24, 2018). As Harvey notes: “In late 2016 Prof Geoffrey Hinton, the godfather of neural networks, said that it’s ‘quite obvious that we should stop training radiologists.’” In contrast, Harvey offers arguments as to “why diagnostic radiologists are safe (as long as they transform alongside technology).” The parenthetical comment seems especially important to me. Technology is especially good at taking over routine tasks, and the challenge for humans is to work with that technology while doing the nonroutine. For example, even if the machines can do a first sort-through of images, many patients will continue to want a human to decide what scans should be done, and with whom the results can be discussed. For legal reasons alone, no institution is likely to hand over life-and-death personal decisions to an AI program completely.

In addition, Harvey points out that as AI makes it much cheaper to do diagnostic scans, a likely result is that scanning technologies will be used much more often, and will be more informative and effective. Harvey’s vision is that radiologists of the future “will be increasingly freed from the mundane tasks of the past, and lavished with gorgeous pre-filled reports to verify, and funky analytics tools on which to pour over oceans of fascinating ‘radiomic’ data.”
_____________________

The effects of technology will vary in important ways across jobs, and I won’t twist myself into knots trying to draw out common lessons across these four examples. I will say that embracing these four technologies, and many more, is the only route to long-term economic prosperity.

Olympic Economics

Before settling into my sofa for a couple of weeks of watching the athletes slip and slide through the Winter Olympics from PyeongChang, I need to confess that the Games are a highly questionable economic proposition. One vivid illustration is that the $100 million new stadium in which the opening ceremonies will be held is going to be used four times in total–opening and closing of the Winter Olympics, opening and closing of the Paralympics next month–and then it will be torn down. Andrew Zimbalist goes through the issues in more detail in “Tarnished Gold: Popping the Olympics Bubble,” which appears in the Milken Institute Review (First Quarter 2018).

Building new facilities (or dramatically refurbishing older ones) is a major cost for the Games. Zimbalist notes that for the previous Winter Games, “the IOC embraced an ostentatious bid from Sochi for the 2014 Winter Olympics where almost none of the required venues or infrastructure were in place. It became the most expensive Olympics in history, with Russia ponying up between $50 billion and $67 billion — though how much of that actually went into construction and operations is unclear.”

As it has become abundantly clear that the direct revenues that a host city receives from the Olympics–for example, revenues related to tickets and television rights–typically cover only about one-third of the costs of hosting, fewer cities are bidding to host the Games. The 2022 Winter Games came down to two bidders, Beijing in China and Almaty in Kazakhstan. Beijing “won,” as Zimbalist describes:

“The Beijing organizing committee pitched its bid to the IOC by noting that it would use some venues left over from the 2008 summer Olympics. But China went along with the IOC’s penchant for creative accounting by excluding the cost of the high-speed railroad that will link Beijing to the downhill and cross-country ski areas (54 miles and 118 miles from the capital, respectively). That project will run to about $5 billion and have little value to the region after the Games are over.

“Also excluded from the Beijing budget will be the substantial expense of new water diversion and desalination programs necessary for hosting Winter Games in China’s water- (and snow-) starved northern cities. North China has only 25 percent of the country’s water resources to supply nearly 50 percent of the population. Accordingly, China launched an $80 billion water-diversion program from the south before the 2008 summer Olympics.

“But the north’s water availability still remains below what the United Nations deems to be the critical level for health – let alone for an Olympics extravaganza. Zhangjiakou, the site of the Nordic skiing competition, gets fewer than eight inches of snow per year. Yanqing, the site of the Alpine skiing events, gets less than 15 inches of precipitation annually.

“Both areas will thus require copious water for artificial snowmaking. But even if China manages to complete the necessary infrastructure for water diversion, it will amount to robbing Peter to pay Paul: Beijing, Zhangjiakou and Yanqing lie in one of China’s most important agricultural regions, producing sorghum, corn, winter wheat, vegetables and cotton.

“The government, moreover, is apparently counting on lasting value from the construction of the Winter Games, creating permanent ski resorts in the mountains bordering Inner Mongolia and the Gobi Desert. If the ski resorts survive, only China’s richest residents will be able to afford them, while food supplies – and the incomes of the growers – will suffer.

“Another strike against Beijing 2022 is that winter is one of the worst times for air pollution in this horribly polluted city. Deforestation of the northern mountains needed for Games infrastructure will only compound the problem.

“One might wonder why, in light of the daunting complications of hosting the Winter Games in northern China, Beijing got the nod in the first place. The answer is simple: thanks to the prospect of big deficits, the only other city bidding was Almaty, capital of oil-drenched Kazakhstan, which has been ruled with an iron fist by the kleptocrat Nursultan Nazarbayev since independence in 1991.”

Zimbalist doesn’t offer parallel estimates for the PyeongChang Games. The standard estimate floating around seems to be that South Korea will spend about $13 billion on facilities for the Winter Games, though such estimates usually turn out to be too low. Also, this amount doesn’t include infrastructure like a high-speed rail line over the 80 miles between Seoul and PyeongChang. A few years ago, analysts from the Hyundai Research Institute pegged these additional infrastructure costs at $43.8 billion.

The economic case for hosting the Olympics thus needs to rely on indirect benefits: short-term construction jobs before the Games, tourist spending during the Games, infrastructure and recognition that could last after the Games. Looking at past Olympics, such benefits are quite uncertain. The best-case economic scenario for the PyeongChang Games may be the Salt Lake City Winter Games of 2002. The underlying reason is that this area was an attractive and reachable destination for winter sports, but somewhat underappreciated before the Games. Its visibility and tourism seemed to get a long-term boost from the Games. Indeed, Salt Lake City has just announced that it would be interested in hosting the Games again in 2026 or 2030.

However, other homes for the Winter Games in recent decades have not succeeded in the same way: either the destination was already fairly popular for winter activities, and thus didn’t receive a long-term tourism boost, or the area just didn’t get a long-term boost. Here’s the roll call of locations for the last 10 Winter Games: Sochi (2014), Vancouver (2010), Turin (2006), Salt Lake City (2002), Nagano (1998), Lillehammer (1994), Albertville (1992), Calgary (1988), Sarajevo (1984), Lake Placid (1980).

For the PyeongChang Games, ticket sales have not been brisk. Television ratings seem likely to be fine, but with the big time difference and people who access their media in other ways, they may not be great. Spending on facilities seems to have been kept under control, although this may also mean that the details of cost overruns haven\’t yet filtered out. Even the International Olympic Committee, not known for encouraging parsimony, has warned publicly that many of the new venues may become useless after the Games.

The ultimate economic payoff is likely to depend on whether PyeongChang becomes a considerably more prominent destination for winter tourist activities in the years after the Games. On the downside, PyeongChang at present has a small population (about 44,000), and its nightlife, restaurants, and hotels are correspondingly limited. It is also about 40 miles from the demilitarized zone separating South and North Korea, which might make potential tourists uncertain about plunking down money for reservations. On the upside, income levels have been growing rapidly in east Asia, especially in China. The demand for tourist destinations is rising. There are a number of successful South Korean ski resorts already. PyeongChang will almost certainly have economic costs far exceeding the benefits. But it has a reasonable chance of less red ink than other recent Winter Games, and seems likely to do far better on a cost-benefit calculation than either its predecessor in Sochi or its successor in Beijing.

For more on economics and the Olympics, here’s a discussion from Zimbalist of why Boston opted out of even trying to host the 2024 Summer Games, and here’s a discussion of an article by Robert A. Baade and Victor A. Matheson that appeared in the Spring 2016 issue of the Journal of Economic Perspectives, “Going for the Gold: The Economics of the Olympics.”

What Charter Schools Can Teach the Rest of K-12 Education

If you’re interested in how K-12 schools might improve their performance, charter schools can be viewed as a laboratory experiment. Sarah Cohodes discusses the lessons they have to teach in “Charter Schools and the Achievement Gap,” written as a Policy Issue paper for the Future of Children (Winter 2018). From a social science view, charter schools have two especially useful properties: there are enough of them to have a reasonable sample size, and a number of them are required to admit students through a lottery process–which means that those randomly selected to attend the charter can be compared with those randomly not selected to do so. Cohodes notes:

“The first charter schools were created in Minnesota in 1993. Forty-three states and Washington, DC, now have laws that permit the operation of charter schools, and around 7,000 charter schools now serve more than 5 percent of students in the United States. They’ve grown steadily over the past 10 years, adding about 300 or 400 schools each year. To put this in perspective, about 10 percent of US students attend a private school, and 3 percent are homeschooled. …

“Because charter schools are required by law to admit students by a random lottery if they’re oversubscribed, charter school admissions are analogous to an experiment in which participants are randomly assigned to a treatment group or a control group (a randomized controlled trial). After accounting for important details that arise from operating a lottery in the real world versus doing so purely for research purposes, such as sibling preferences and late applicants, a random lottery assigns the seats for charter schools that are oversubscribed. This allows researchers to compare the outcomes of a treatment group of students who were offered a seat in the lottery to a control group of those who were not.”
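The lottery-as-experiment logic Cohodes describes can be made concrete with a toy calculation. This sketch is my own illustration, not from her paper: the sample sizes, outcome scale, and the assumed true effect of 0.2 standard deviations are all invented for the example.

```python
import random
import statistics

def lottery_effect_estimate(n_applicants=10000, n_seats=3000,
                            true_effect=0.2, seed=1):
    """Toy illustration of estimating a charter school's effect from an
    admissions lottery: compare the mean outcome of applicants randomly
    offered a seat (treatment) with those not offered one (control).
    All numbers are hypothetical, for illustration only.
    """
    rng = random.Random(seed)
    applicants = list(range(n_applicants))
    offered = set(rng.sample(applicants, n_seats))  # the random lottery
    treat, control = [], []
    for i in applicants:
        baseline = rng.gauss(0.0, 1.0)  # test score in standardized units
        if i in offered:
            treat.append(baseline + true_effect)  # assumed gain from a seat
        else:
            control.append(baseline)
    # Because the offer is randomized, the simple difference in means is
    # an unbiased estimate of the offer's effect, with no need to model
    # family background or prior achievement.
    return statistics.mean(treat) - statistics.mean(control)

print(round(lottery_effect_estimate(), 2))  # close to the assumed 0.2
```

The design choice the lottery buys you is exactly this: randomization makes the treatment and control groups comparable on average, so the difference in means recovers the causal effect, which observational comparisons can only approximate.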

Of course, it’s also possible to study the effects of attending charter schools by finding a good comparison group outside the school, in what is called an “observational” study, but choosing an appropriate comparison group necessarily adds an element of uncertainty. The research on charter schools finds that on average, they perform about the same as traditional public schools. But charter schools vary considerably: some do worse than traditional public schools, and some do better.

“The best estimates find that attending a charter school has no impact compared to attending a traditional public school. That might surprise you if you were expecting negative or positive impacts based on the political debate around charter schools. But using both lottery-based and observational estimates of charter school effectiveness in samples that include a diverse group of charter schools, the evidence shows, on average, no difference between students who attend a charter and those who attend a traditional public school. However, much of the same research also finds that a subset of charter schools has significant positive impacts on student outcomes. These are typically urban charter schools serving minority and low-income students that use a no excuses curriculum.”

Here’s how the “no excuses” approach is defined in a couple of the lottery-based studies:

“In Massachusetts, school leaders were asked whether their school used the no excuses approach, and schools that did so tended to have better results. The study also drilled down to examine specific practices associated with no excuses. It found that a focus on discipline, uniforms, and student participation all predicted positive school impacts, with the important caveat that no excuses policies are often implemented together, so that it’s difficult to separate the correlations for individual characteristics.

“The New York City study aggregated school characteristics into practice inputs and resource inputs. The practice inputs followed no excuses tenets: intensive teacher observation and training, data-driven instruction, increased instructional time, intensive tutoring, and a culture of high expectations. The resource inputs were more traditional things like per-pupil spending and student-teacher ratios. The study found that each of the five practice inputs, even when controlling for the others, positively correlated with better charter school effectiveness; the resource inputs did not.”

Cohodes offers an overview of the empirical studies on charter schools. Of course, such studies need to take into account possibilities like whether better-qualified students are applying to charters in the first place, or whether charters may benefit from using disciplinary or suspension policies not allowed in other schools, and so on. But here’s a bottom line:

“Attending an urban, high-quality charter school can have transformative effects on individual students’ lives. Three years attending one of these high-performing charter schools produces test-score gains about the size of the black-white test-score gap. The best evidence we have so far suggests that these test-score gains will translate into beneficial effects on outcomes like college-going, teen pregnancy, and incarceration. Given the large and potentially longer-term effects, the most effective charter schools appear to hold promise as a way to reduce achievement gaps.”

If swallowing the entire “no excuses” approach is too much, the one practice that seems most important to me is intensive tutoring, so that students don’t fall so far behind that they lose touch with the classroom. She writes:

“One charter school practice stood out: high-quality tutoring. Many high-quality charter schools require intensive tutoring as a means of remediation and learning, often incorporating one-on-one or small group tutoring into the school day rather than as an add-on or optional activity. … As a strategy to close achievement gaps, adopting intensive tutoring beyond the charter sector may be less controversial than focusing explicitly on charter schools.”

There are both direct and indirect lessons from charter schools. Cohodes focuses on the direct lessons: a “no excuses” approach that includes intensive tutoring. The indirect lesson is that it’s useful to have experimentation in how K-12 education is provided, and then to have those experiments evaluated rigorously, so that productive ideas have a better chance to spread.

Readers interested in more on lessons from charter schools might start with “The Journey to Becoming a School Reformer” (February 13, 2015), which describes how Roland Fryer, an economist and school reformer, first sought to figure out the key elements of charter school success and then to apply them in public schools in Houston and elsewhere. Also, Julia Chabrier, Sarah Cohodes, and Philip Oreopoulos offer a discussion of “What Can We Learn from Charter School Lotteries?” in the Summer 2016 issue of the Journal of Economic Perspectives (30:3, 57-84).

US History and the Path to European Integration

The early history of the United States involves a time when mobility of labor, goods, and capital between the 13 states was often costly and difficult, and when a weak central government had little power to address regional imbalances. But over time, the US political system and economy knitted themselves together. Thus, as the European Union seeks to increase the freedom of labor, goods, and capital to move across national borders, in a setting with a weak central European government, it is natural to reconsider some parallels to early US history. Along these lines, Jacob Funk Kirkegaard and Adam S. Posen have edited a collection of five essays for the European Commission, published in a report called Lessons for EU Integration from US History (January 2018, Peterson Institute for International Economics).

An underlying theme of the report is that the task of European policymakers has been made broader and more complex than it was back in the 1990s, before the euro came into existence. Kirkegaard and Posen write in an introductory essay: "Monetary unification cannot stand stably on its own without additional integration of banking and capital markets, and some fiscal policies." Thus, the list of essays is:

  1. \”Realistic European Integration in Light of US Economic History,\” by Jacob Funk Kirkegaard and Adam S. Posen
  2. \”A More Perfect (Fiscal) Union: US Experience in Establishing a Continent‐Sized Fiscal Union and Its Key Elements Most Relevant to the Euro Area,\” by Jacob Funk Kirkegaard
  3. \”Federalizing a Central Bank: A Comparative Study of the Early Years of the Federal Reserve and the European Central Bank,\” by Jérémie Cohen‐Setton and Shahin Vallée
  4. \”The Long Road to a US Banking Union: Lessons for Europe,\” by Anna Gelpern and Nicolas Véron
  5. "The Synchronization of US Regional Business Cycles: Evidence from Retail Sales, 1919–62," by Jérémie Cohen‐Setton and Egor Gornostay

Of course, the point isn\’t that Europe is or should be following in the footsteps of US history. As the authors write:

\”It is not important whether the European Union is integrating more or less quickly than the United States did. Such abstract benchmarking misses all the important points about the nature and sequencing of integration as political processes. The many fundamental differences between the United States and the European Union prevent drawing too precise, let alone literal, a mapping from US economic development to Europe’s path forward today. … Rather than pointing towards the current state of US continental integration as the guide for the European Union, we analyze the US responses throughout history to economic and political challenges and to numerous domestic political constraints—some not unlike what Europe faces today. We believe that EU leaders should draw lessons from these US responses for how, how far, and how fast their aspirations for EMU should progress. Yet, it must be acknowledged that the United States solved most of its political and economic challenges through centralization and federal government institution building.\”

Kirkegaard and Posen put together a thought-provoking list of nine "Themes of US Economic Integration over the Long Run," which are of course explored in more detail in the essays that follow. Here's a sampler:

Institution Building Requires Repeated Attempts and Often Constitutional Revision: … The US Constitution itself has been updated, or amended, 27 times.  … [T]he first two central banks in the United States were closed down, and the initial monetary policy architecture of today’s Federal Reserve required repeated and far‐reaching reform in the first two decades after its founding. …  Economic integration cannot be limited forever to satisfy those who are averse to change.  … 

Fiscal Integration Takes a Very Long Time: From the beginning the US federal government had the power to issue its own debt, but for the first more than 130 years of American history it did so sparingly and essentially only to finance the nation’s wars. Only by the 1930s did outstanding US federal government debt permanently exceed total state and local government debt. …

The Right Fiscal Sequencing Is to First Identify the Need and then to Find the Resources: The US federal government budget expanded gradually, but each expansion generally followed the same clear political sequence. Congress would identify a problem that required a nationally consistent solution and would then proceed to find the necessary funding for it. Frequently, the federal government dedicated or earmarked particular revenue sources to solving specific preidentified problems. …

Large Centralized Fiscal Capacity Synchronizes Regional Business Cycles: The increasing synchronization of US business cycles across a diverse and continental‐sized economy occurred only after the dramatic increase in the federal government’s fiscal role in the 1930s New Deal (and subsequently World War II). Previously, a pattern of divergent regional booms and busts was the costly norm even as markets integrated over decades. …  US history suggests that European policymakers ought instead to contemplate the creation of a specialized asymmetric shock absorption instrument for at least the euro area. …

New Centralized Institutions Unite Opposition and Can be Vulnerable to Regulatory Arbitrage:  … In Europe, EMU itself, as originally designed in the Maastricht Treaty, is of course the most prominent example of a half‐built house that ultimately suffered a regionally driven crisis. This led to scapegoating for being too centralized, when the problem was that it was insufficiently so.

Only Complete Fiscal Support for the Lender of Last Resort Removed Redenomination Risk: During the early decades following the Federal Reserve System’s founding in 1913, negative feedback (or doom) loops akin to those in the euro crisis materialized between regional banking sectors, state governments, and the nonfinancial private sector in the same region(s). Only after the comprehensive reforms initiated by President Franklin Roosevelt—including the potentially unlimited fiscal support for the Federal Reserve Board and regional Federal Reserve banks and the establishment of the Federal Deposit Insurance Corporation (FDIC) with a federal fiscal backstop—did interregional differentials in interest rate and risk perceptions end. US history thus implies that only similarly credible actions to support the European Central Bank (ECB) and banking supervisors will alleviate stubborn country‐specific redenomination risks inside the euro area.

Central Absorption of Government Responsibilities Often Occurs Following State‐Level Policy Failures: Important additions to US federal government responsibilities historically took place as partial state‐level services provision collapsed financially. … Generally available old‐age pension provision through Social Security and unemployment benefits were introduced during the Great Depression, as similar programs existing in just a few states became unsustainable. And federal deposit insurance was similarly adopted in 1933, following the largest financial panic in a sequence of them, when a wave of failures spread among smaller state‐level insurance schemes. …

Few Core Government Functions Are Exclusively State or Federal Responsibilities: … [I]n practice, the federal government has only very few exclusive responsibilities, such as defense or foreign affairs. Many core social insurance and regulatory responsibilities are in practice carried out through state‐federal government partnerships both institutionally and financially. …

National Security Crises and Other External Pressures Are Important Integrationist Forces: Jean Monnet is famously credited for suggesting that the European Union would be forged from the group’s responses to its successive crises. The same is true for many of the core institutions of the American central government, but primarily these were security crises (economic crises, as noted, were usually insufficient to prompt greater integration on their own, despite their evident costs). … The vast majority of American federal government institutions created in crisis periods have subsequently been maintained. …


Behind the Declining Labor Share of Income

Total income earned can be divided into what is earned by labor in wages, salaries, and benefits, and what is earned by capital in profits and interest payments. The line between these categories can be blurry: for example, should the income received by someone running their own business be counted as "labor" income received for their hours worked, or as "capital" income received from their ownership of the business, or some mixture of both?

However, the US Bureau of Labor Statistics has been doing this calculation for decades using a standardized methodology over time. The US labor share of income was in the range of 61-65% from the 1950s up through the 1990s. Indeed, for purposes of basic long-run economic models, the share was sometimes treated as a constant. But in the early 2000s, the labor share started dropping and fell to the historically low range of 56-58%. Loukas Karabarbounis and Brent Neiman provide some perspective on what has happened, citing much of the recent research, in "Trends in Factor Shares: Facts and Implications," appearing in the NBER Reporter (2017, Number 4).


They built up a data set for a range of countries, and found that many of them had experienced a decline in labor share. Thus, the underlying economic explanation is unlikely to be a purely US factor, but instead needs to be something that reaches across many economies. They write: \”The decline has been broad-based. As shown in Figure 1, it occurred in seven of the eight largest economies of the world. It occurred in all Scandinavian countries, where labor unions have traditionally been strong. It occurred in emerging markets such as China, India, and Mexico that have opened up to international trade and received outsourcing from developed countries such as the United States.\”

They argue that one major factor behind this shift is cheaper information technology, which encouraged firms to substitute capital for labor. They write:

\”There was a decline in the price of investment relative to consumption that accelerated globally around the same time that the global labor share began its decline. A key hypothesis that we put forward is that the decline in the relative price of investment, often attributed to advances in information technology, automation, and the computer age, caused a decline in the cost of capital and induced firms to produce with greater capital intensity. If the elasticity of substitution between capital and labor — the percentage change in the capital-labor ratio in response to a percentage change in the relative cost of labor and capital — is greater than one, the lowering of the cost of capital results in a decline in the labor share…. [O]ur estimates imply that this form of technological change accounts for roughly half of the decline in the global labor share. …

\”If technology explains half of the global labor share decline, what might explain the other half? We use investment flows data to separate residual payments into payments to capital and economic profits, and find that the capital share did not rise as it should if capital-labor substitution entirely accounted for the decline in the labor share. Rather, we note that increases in markups and the share of economic profits also played an important role in the labor share decline.\” 
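The role of the elasticity of substitution in this argument can be illustrated with a CES production function. The sketch below is purely illustrative: the parameter values (alpha, sigma, and the wage and rental rates) are assumptions for demonstration, not Karabarbounis and Neiman's estimates.

```python
def labor_share(r, w=1.0, alpha=0.35, sigma=1.25):
    """Labor share of income for a CES technology, via the cost-share formula
    (Shephard's lemma):
        s_K = alpha^sigma * r^(1-sigma) / (alpha^sigma * r^(1-sigma)
                                           + (1-alpha)^sigma * w^(1-sigma))
    r = rental rate of capital, w = wage, sigma = elasticity of substitution.
    All parameter values here are illustrative assumptions."""
    sk = alpha**sigma * r**(1 - sigma)
    sl = (1 - alpha)**sigma * w**(1 - sigma)
    return sl / (sk + sl)

# When sigma > 1, a fall in the cost of capital lowers the labor share:
print(labor_share(1.0, sigma=1.25), labor_share(0.8, sigma=1.25))
# When sigma < 1, the same fall in the cost of capital *raises* it:
print(labor_share(1.0, sigma=0.75), labor_share(0.8, sigma=0.75))
```

This is why the Karabarbounis-Neiman hypothesis hinges on the elasticity exceeding one: only then does cheaper capital translate into a smaller labor share rather than a larger one.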

The fall in the labor share of income has consequences that ripple through the rest of the global economy. For example, it contributes to the rise in inequality. Another change from a few decades ago is that corporations used to raise money from household savers, by issuing bonds, taking out loans, or selling stock. But with the rise in the capital share and corporate profits, about two-thirds of global investment is now financed by firms themselves. Indeed, it used to be that there were net flows of financial capital into the corporate sector; now, there are net flows of financial capital out of the corporate sector (through stock buy-backs, the rise in corporate cash holdings, and other mechanisms). When comparing current stock prices and price-earnings ratios to historical values, it's worth remembering that when the capital share of income is higher, stock prices represent a different value proposition than they did several decades ago.


Could Driverless Trucks Create More Trucking Jobs? Uber Says "Maybe"

Could driverless trucks create more trucking jobs? It sounds logically impossible. But remember that automatic teller machines did not reduce the number of jobs for bank tellers, and may even have increased it slightly, because they altered the range of tasks typically done by a bank teller. In general, new technology doesn't just alter a single dimension of an industry, but can lead to complementary changes as well. Uber Advanced Technologies Group (!) spells out a scenario in which driverless trucks lead to more trucking jobs in "The Future of Trucking: Mixed Fleets, Transfer Hubs, and More Opportunity for Truck Drivers" (Medium, February 1, 2018).

Imagine that with the arrival of driverless trucks, the trucking industry splits into two parts: long-distance driverless trucks, which operate mostly on highways and large roads between a network of "transfer hubs," and short-distance trucks with human drivers, which carry freight from the transfer hubs to local addresses. As the Uber authors point out:

\”The biggest technical hurdles for self-driving trucks are driving on tight and crowded city streets, backing into complex loading docks, and navigating through busy facilities. At each of the local haul pick ups and drop offs, there will need to be loading and unloading. These maneuvers require skills that will be hard for self-driving trucks to match for a long time. By taking on the long haul portion of driving, self-driving trucks can ease some of the burden of increasing demand, while also creating an opportunity for drivers to shift into local haul jobs that keep them closer to home.\”

The crucial part of the scenario is that most trucks, given their human drivers, are now on the road for only about one-third of every day. However, the long-distance driverless trucks could be on the road two-thirds or more of every day. As a result, the costs of long-distance shipping would drop substantially, which in turn would give firms and consumers an incentive to expand the quantity of what they ship by truck. In one simulation that has 1 million driverless long-distance trucks on the road, the result is an additional 1.4 million drivers needed for shorter-haul local trucking.
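The back-of-envelope logic can be made explicit. In this sketch, the utilization rates, the 1 million truck figure, and the 1.4-drivers-per-truck ratio are illustrative assumptions in the spirit of the Uber scenario, not parameters from their actual model.

```python
# Utilization: human-driven long-haul trucks are on the road roughly 8 of 24
# hours; driverless trucks could run 16 or more (assumed values).
human_utilization = 1 / 3
driverless_utilization = 2 / 3

# Higher utilization spreads the truck's fixed costs over more miles, so
# per-mile fixed costs fall roughly in proportion to the utilization ratio:
cost_ratio = human_utilization / driverless_utilization
print(f"Long-haul fixed cost per mile falls to ~{cost_ratio:.0%} of its former level")

# Cheaper long-haul shipping expands freight demand, and every driverless
# long-haul leg needs human short-haul drivers at both ends. With 1 million
# driverless trucks and a (hypothetical) 1.4 short-haul drivers per truck:
driverless_trucks = 1_000_000
short_haul_drivers_per_truck = 1.4
short_haul_jobs = driverless_trucks * short_haul_drivers_per_truck
print(f"Short-haul drivers needed: {short_haul_jobs:,.0f}")
```

The counterintuitive result comes entirely from the demand response: if cheaper long-haul shipping did not expand total freight volume, the short-haul job gains would not materialize.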

For a great many truckers, short-hauling offers a better lifestyle, in part because you can sleep in your own bed every night. It's not clear how wages might adjust in response to these kinds of changes. Wages for long-haul truckers might fall, because competing with driverless technology on those routes would be tough, but the shift in wages for short-haul truckers depends on other ways in which the industry might evolve. The Uber folks are trying to crowd-source the economic analysis here by putting their models and data up on a GitHub site, so if this kind of analysis floats your boat (or you want to assign it as a student project), you have an option here.

Just to be clear, I'm not endorsing the scenario that 10 years from now, there will be a million autonomous trucks on US highways and even more truckers in the short-haul business. But I am endorsing the broader point that a simple "technology replaces jobs" story--even one as seemingly straightforward as how autonomous trucks will affect the number of truck drivers--is always more complex, and sometimes more counterintuitive, than it may appear at first glance.

Kill the Zombie Firms

I first ran into the idea of zombie firms--and the need to kill them--back in the late 1980s, when the US savings and loan industry was melting down. Here's an explanation from that time from Edward Kane ("The High Cost of Incompletely Funding the FSLIC Shortage of Explicit Capital," Journal of Economic Perspectives, 1989, 3:4, 31-47):

\”The events of the early 1980s broke the savings and loan industry into two divergent parts: the living and the living dead. This terminology portrays firms whose enterprise-contributed capital has been lost as soulless \”zombie\” institutions. … Zombie firms now constitute roughly 25 percent of the FSLIC-insured thrift industry. As in a George Romero zombie movie, capital forbearance brings dead firms back to a malefic form of quasi-life in which they attack the living, turning the prey they feed on into zombies, too. In a kind of Gresham\’s Law scenario (an analogy suggested by Joseph Stiglitz), \”bad\” zombie thrifts tend to drive out healthy competition. Zombie institutions do this by sucking deposits away from their competitors by offering high interest rates and by bidding down loan rates on high-risk projects. This squeezes profit margins and the proliferation of weak competitors and risky positions ultimately raises deposit-insurance premiums for everyone.\”

The key insight is that when governments show restraint in killing the zombies, they soak up capital and slash prices in a way that makes it hard for other firms to compete, thus creating more zombies. Frank Borman, an astronaut who commanded Apollo 8 and later ran Eastern Air Lines, liked to say: \”Capitalism without bankruptcy is like Christianity without hell\” (for example, see Time magazine, \”The Growing Bankruptcy Brigade,\” October 18, 1982, p. 104).

Zombie firms were also sighted in Japan after its economic meltdown in the early 1990s. For example, Takeo Hoshi and Anil K. Kashyap wrote ("Japan's Financial Crisis and Economic Stagnation," Journal of Economic Perspectives, 2004, 18:1, 3-26):

\”Caballero, Hoshi and Kashyap (2003) explore the consequences of these subsidies for macro performance in Japan. They find that subsidies have not only kept many money-losing “zombie” firms in business, but also have depressed the creation of new businesses in the sectors where the subsidized firms are most prevalent. For instance, they show that in the construction industry, job creation has dropped sharply, while job destruction has remained relatively low. Thus, because of a lack of restructuring, the mix of firms in the economy has been distorted with inefficient firms crowding out new, more productive firms. Not only does the rise of the zombies help explain the overall slowdown in productivity, Caballero, Hoshi and Kashyap show that zombie-infested sectors have seen sharper declines in productivity growth than the sectors with fewer zombies. … For instance, the lack of lending by the healthy banks makes sense because these banks see no point in lending to firms that will have to compete against the zombies that are kept on life support by the sick banks.\”

But as anyone who watches television after midnight knows, despite all the warnings, zombies never actually die. Now they have been spotted in China. W. Raphael Lam, Alfred Schipke, Yuyan Tan, and Zhibo Tan have written \”Resolving China’s Zombies: Tackling Debt and Raising Productivity\” (IMF Working Paper WP/17/266, November 27, 2017).

\”Nonviable “zombie” firms have become a key concern in China. … [T]his paper illustrates the central role of zombies and their strong linkages with state-owned enterprises (SOEs) in contributing to debt vulnerabilities and low productivity. As a group, zombie firms and SOEs account for an outsized share of corporate debt, contribute to much of the rise in debt, and face weak fundamentals. Empirical results also show that resolving these weak firms can generate significant gains of 0.7–1.2 percentage points in long-term growth per year. … While the government has introduced various reforms to facilitate deleveraging and resolve weak companies, progress has been limited. The empirical results in this paper would support the arguments that accelerating that progress requires a more holistic and coordinated strategy, which should include debt restructuring to recognize losses, fostering operational restructuring, reducing implicit support, and liquidating zombies.\”

By their measures, the number of zombie firms in China and their share of debt had been declining, but are now on the rise again. (This happens in every zombie movie.)

But it's not just China. Dan Andrews, Müge Adalet McGowan, and Valentine Millot confirm the worldwide threat in "Confronting the zombies: policies for productivity revival" (OECD Economic Policy Paper #21, December 2017), as well as in underlying research papers like Müge Adalet McGowan, Dan Andrews, and Valentine Millot, "The Walking Dead? Zombie Firms and Productivity Performance in OECD Countries" (OECD Economics Department Working Papers, No. 1372, January 10, 2017). In the policy paper, they write (citations omitted):

"There is growing recognition, however, that the productivity slowdown experienced over the past two decades is partly rooted in a rise of adjustment frictions that rein in the creative destruction process. One important dimension of this phenomenon is that firms that would typically exit or be forced to restructure in a competitive market – i.e. "zombie" firms – are increasingly lingering in a precarious state to the detriment of aggregate productivity. In this view, reviving productivity growth will partly depend on the policies that effectively facilitate the exit or restructuring of weak firms, while simultaneously coping with any social costs that arise from a heightened churning of firms and jobs. To this end, policies need to be reformed and packaged to enhance productivity growth in an inclusive fashion.

\”Against this background, this paper summarises the policy messages emerging from a large amount of cross-country research on Exit Policies and Productivity Growth. Main findings are reported under two main headings. First, the paper provides evidence for the conjecture that weak firms are stifling productivity growth and highlights the considerable scope for raising growth by spurring the orderly exit or restructuring of such firms. Second, it explores the potential for insolvency, financial and other reforms to revive productivity growth by addressing three inter-related sources of structural weakness in labour productivity: the survival of “zombie” firms, capital misallocation and stalling technological diffusion. Overall, the results suggest that there is much scope to revive productivity growth via reforms focused on improving the design of insolvency regimes, financial sector health and other dimensions of policy that spur corporate restructuring.\”

For example, one working definition of "zombie" firms is that they are older firms, at least 10 years of age, that cannot cover their interest payments with their profits for three consecutive years. But tinkering with this definition doesn't alter the main conclusions: "The propensity for high productivity firms to expand and low productivity firms to downsize has declined over time." The prevalence of zombie firms across OECD countries has risen, and the share of capital they absorb has risen.
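That screening rule is simple enough to sketch in code: a firm at least 10 years old whose profits fall short of its interest expense (an interest coverage ratio below one) for three straight years. The firm data below are invented purely for illustration.

```python
def is_zombie(age_years, profits, interest, min_age=10, streak=3):
    """OECD-style zombie screen: firm is at least `min_age` years old and
    profits < interest expense (coverage ratio below 1) in each of the last
    `streak` years. `profits` and `interest` are equal-length lists of
    annual figures, most recent year last. Illustrative sketch only."""
    if age_years < min_age or len(profits) < streak:
        return False
    recent = zip(profits[-streak:], interest[-streak:])
    return all(p < i for p, i in recent)

# Hypothetical firms:
print(is_zombie(15, profits=[5, -2, -1, -3], interest=[4, 4, 4, 4]))  # True: old, 3 bad years
print(is_zombie(15, profits=[5, 6, -1, -3], interest=[4, 4, 4, 4]))  # False: only 2 bad years
print(is_zombie(4, profits=[-2, -1, -3], interest=[4, 4, 4]))        # False: too young
```

The age floor matters: without it, ordinary young firms that run losses while growing would be swept into the zombie count.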

For one more recent example, see the remarks by Claudio Borio of the Bank for International Settlements, "A blind spot in today's macroeconomics?" (at a BIS-IMF-OECD Joint Conference on "Weak productivity: the role of financial factors and policies," January 10-11, 2018), who discusses "the interaction between interest rates and the financial cycle and will also present some intriguing empirical regularities between the growing incidence of 'zombie' firms in an economy and declining interest rates."

A dynamic economy needs to be continually shape-shifting. Recognizing that zombie firms should not be nourished at the expense of other firms in the economy is a useful step in that direction.

Is the Euro Out of Danger?

The euro was officially adopted in 1999, although it took a few more years to be phased in for everyday use. Over the years, my feelings about the new currency have see-sawed from one extreme to the other: I wasn't sure it would be adopted in the first place; wasn't sure it would work if adopted; it seemed to work pretty well at first; then it led to large trade imbalances within Europe and a financial crisis; and now it seems to be functioning smoothly again. Is the euro now out of danger, or do certain underlying risks remain substantial?

The most recent issue of the Milken Institute Review (First Quarter 2018) has a couple of useful articles for getting up to speed on where the euro has been and where it might be headed next. Barry Eichengreen contributes \”Euro Malaise: From Remission to Cure,\” while Jean Tirole discusses the future of Europe after the euro crisis in an excerpt from his recent book, Economics for the Common Good. Eichengreen diagnoses five main issues of the euro in this way:

"First, Europe has a financial-stability problem. As a result of bad management, bad supervision and badly designed regulation, euro-area banks became deeply entangled in the global financial crisis. On the cusp of the meltdown, they were undercapitalized, overleveraged and blithely unaware of the risks of investing in U.S. securities backed by subprime mortgages. European regulators were then slow to clean up the post-meltdown mess, which goes a long way toward explaining why Europe's recovery has been so sluggish.

\”Second, the euro area has a debt problem. Government debt as a share of GDP in the area as a whole is not noticeably higher than in the United States, but it is spread unevenly across countries. It is a problem for Belgium, Cyprus, Italy and Portugal with debt-to-GDP ratios well above 100 percent. And it is a monster problem for Greece, with an eye-watering ratio approaching 180 percent. Servicing these heavy debts is a drain on public finances that will become even more burdensome when interest rates rise from current, historically low levels. …

\”Third (and relatedly), fiscal policy is a problem. The euro area has an elaborate set of fiscal rules that are honored mainly in the breach. When Greece flaunted those rules at the end of the last decade, it was only following in the footsteps of France and Germany, which had broken the rules some five years earlier. Although the rules in question specify sanctions and fines for violators, those fines have never once been levied in the eurozone’s almost two decades of existence.

\”Fourth, the euro area lacks an adequate financial fire brigade, a regional equivalent of the International Monetary Fund. …

"Fifth, the euro area lacks the flexibility to adjust to what the economist Robert Mundell, the intellectual father of the euro, referred to as "asymmetric disturbances." There is no mechanism for eliminating the imbalances that arise when some member-states are booming while others are depressed, or when some members increase productivity more rapidly than others. It has no way of eliminating the chronic trade surpluses of some members and chronic deficits of others."

Eichengreen discusses what is happening in each of these areas, with particular attention to the negotiations between Angela Merkel in Germany and Emmanuel Macron in France. Deals could be cut to address at least some weaknesses of the euro, but it's not at all clear that they will be. He concludes: "Marine Le Pen, the hard-right French politician who opposed Macron in the second round of the French election, called the euro 'the corpse that still moves.' Merkel and Macron now have a narrow window in time to breathe new life into its body."

Jean Tirole offers a reminder of what the euro was intended to accomplish, and how it has gone some distance in that direction:

\”Even so, the euro represented an extraordinary symbol of European integration. Far more than a simple convenience for travelers, the single currency eliminated exchange rate uncertainty. Trade among euro area countries increased by around 50 percent between 1999 (the launch of the euro) and 2011. The euro was also intended to contribute to the stability of national economies by facilitating the diversification of savings across European countries: households and companies could invest abroad at lower cost, and their wealth was therefore less dependent on local conditions. Finally, the euro was intended to facilitate the circulation of capital in southern Europe, strengthening the financial credibility of those states and thus allowing them to finance their development.\”

Tirole also walks through some of the major difficulties the euro created. One issue was a divergence in wages and productivity levels that led to large trade imbalances:

\”Germany has consistently practiced wage moderation (in a relatively consensual way, because the labor unions in the sectors exposed to international competition have supported it), while wages in the southern countries exploded. In the countries of southern Europe plus Ireland, wages increased by 40 percent while labor productivity increased by only 7 percent. This divergence generated low prices for German products and high ones for those from southern Europe. Unsurprisingly, intra-European trade became massively unbalanced, with Germany exporting far more than it imported, and the southern countries doing the opposite.\”
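The arithmetic behind this divergence is worth making explicit: if wages rise 40 percent while labor productivity rises only 7 percent, unit labor costs (the wage cost per unit of output) rise by roughly 31 percent. The figures are the ones Tirole cites; the calculation is a simple sketch.

```python
# Unit labor cost = wage / output per worker, so its growth factor is the
# wage growth factor divided by the productivity growth factor.
wage_growth_south = 0.40          # southern Europe plus Ireland (from Tirole)
productivity_growth_south = 0.07  # labor productivity growth over same period

unit_labor_cost_change = (1 + wage_growth_south) / (1 + productivity_growth_south) - 1
print(f"Southern unit labor costs rose ~{unit_labor_cost_change:.0%}")
```

Inside a currency union there is no exchange rate to absorb a gap like this, so it shows up directly as a loss of price competitiveness and, in turn, as the trade imbalances Tirole describes.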

For discussions of these issues in the Journal of Economic Perspectives, where I work as Managing Editor, readers might want to check Christian Dustmann, Bernd Fitzenberger, Uta Schönberg, and Alexandra Spitz-Oener, "From Sick Man of Europe to Economic Superstar: Germany's Resurgent Economy," in the Winter 2014 issue, and Christian Thimann, "The Microeconomic Dimensions of the Eurozone Crisis and Why European Politics Cannot Solve Them," from the Summer 2015 issue.

Another issue is that as the euro facilitated capital movements to countries which had traditionally had to pay higher interest rates to borrow, that borrowing got out of control in some countries. Here's Tirole:

\”More broadly, the confidence created by the poorer countries’ joining the eurozone substantially lowered the interest rates paid by borrowers in these countries. The easier access to funds generated capital inflows. These inflows, sometimes combined with weak regulation of banks’ risk-taking, fueled asset price increases and created financial bubbles, particularly in real estate.

\”Massive levels of debt, both public and private, are implicated in the origins of the crisis that threatens the existence of the eurozone today. Excessive borrowing was sometimes the fault of a spendthrift public sector or a failure to collect taxes (as in Greece), and sometimes the fault of the private financial sector (as in Spain and Ireland). When the Irish government budget deficit ballooned from 12 to 32 percent of GDP in 2010, it was because the banks had to be bailed out.\”

The Maastricht Treaty back in 1992 anticipated the possible problem that countries could be motivated to overborrow, and among other conditions set a rule that no country would have a public debt/GDP ratio over 60%. Tirole reminds us of the current levels in a chart, with a red vertical line marking the earlier promised 60% limit.

As Tirole writes: \”The Greek debt of 180 percent of GNP (characterized by a high rate of foreign holdings) is gigantic for a country with limited fiscal capacity, and has a long maturity (about twice as long as that of other national debts) and a low interest rate following the restructurings of 2010 and 2012. Payments are due to become large only after 2022, and then will be made over many years.\”

Again, readers interested in these dynamics may wish to check some earlier JEP articles. In the Summer 2012 issue, Philip R. Lane wrote "The European Sovereign Debt Crisis." The Summer 2013 issue included four papers on euro-related issues: Enrico Spolaore, "What Is European Integration Really About? A Political Guide for Economists"; Jesús Fernández-Villaverde, Luis Garicano, and Tano Santos, "Political Credit Cycles: The Case of the Eurozone"; Kevin H. O'Rourke and Alan M. Taylor, "Cross of Euros"; and Stephanie Schmitt-Grohé and Martin Uribe, "Downward Nominal Wage Rigidity and the Case for Temporary Inflation in the Eurozone."

Tirole offers an even-handed discussion of the possible directions for the next set of reforms to solidify the euro, while admitting that at present, all possible directions are problematic.

One set of options involves a greater degree of unification across Europe, which Tirole calls the “Maastricht approach.” For example, there could be a European Fiscal Council that would track borrowing in different countries and sound the alarm if it seemed to be getting out of control. But as Tirole writes: “This fiscal council would have to truly represent Europe as a whole and have the authority to require prompt corrective action. In addition, since financial sanctions are not a good idea if a country is already in financial difficulty, other measures must be used — although these would only sharpen concerns about legitimacy and sovereignty. As things stand, the current impulse toward national sovereignty works against such improvement of the Maastricht approach.”

The other broad set of options, which Tirole calls the “federalist approach,” instead starts from the assumption that the EU countries might look for certain limited opportunities to share risks and coordinate in limited ways. For example, one could imagine a system in which each country chooses its own pension contributions and benefits, but the pension funds themselves are run by a common European entity that would apply a common methodology so that scheduled payments into the system and promised benefits from the system remained in alignment. Similarly, one can imagine a cross-European plan for at least some minimum level of unemployment insurance, or a plan that provides for common standards of bank supervision and regulation, together with deposit insurance. But as Tirole points out, European countries have different political preferences, and so mixing countries with high and low pension levels, or high and low unemployment levels, or high and low levels of deposit insurance, is a tricky business.

The euro situation is in a lull just now, which means there is some time and space for advance planning to reduce the risks of a future crisis. The question is whether European countries and institutions are going to squander their respite. 

For previous discussions of the euro, see:

Winter 2018 Journal of Economic Perspectives

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available on-line, from the current issue back to the first issue. Here, I’ll start with the Table of Contents for the just-released Winter 2018 issue, which in the Taylor household is known as issue #123. Below that are abstracts and direct links for all of the papers. I will blog more specifically about some of the papers in the next week or two, as well.

______________
Symposium: Housing
“The Economic Implications of Housing Supply,” by Edward Glaeser and Joseph Gyourko
In this essay, we review the basic economics of housing supply and the functioning of US housing markets to better understand the distribution of home prices, household wealth, and the spatial distribution of people across markets. We employ a cost-based approach to gauge whether a housing market is delivering appropriately priced units. Specifically, we investigate whether market prices (roughly) equal the costs of producing the housing unit. If so, the market is well-functioning in the sense that it efficiently delivers housing units at their production cost. The gap between price and production cost can be understood as a regulatory tax. The available evidence suggests, but does not definitively prove, that the implicit tax on development created by housing regulations is higher in many areas than any reasonable negative externalities associated with new construction. We discuss two main effects of developments in housing prices: on patterns of household wealth and on the incentives for relocation to high-wage, high-productivity areas. Finally, we turn to policy implications.
Full-Text Access | Supplementary Materials

“Homeownership and the American Dream,” by Laurie S. Goodman and Christopher Mayer
For decades, it was taken as a given that an increased homeownership rate was a desirable goal. But after the financial crisis and Great Recession, in which roughly eight million homes were foreclosed on and about $7 trillion in home equity was erased, economists and policymakers are re-evaluating the role of homeownership in the American Dream. Many question whether the American Dream should really include homeownership or instead focus more on other aspects of upward mobility, and most acknowledge that homeownership is not for everyone. We take a detailed look at US homeownership from three different perspectives: 1) an international perspective, comparing US homeownership rates with those of other nations; 2) a demographic perspective, examining the correlation between changes in the US homeownership rate between 1985 and 2015 and factors like age, race/ethnicity, education, family status, and income; and 3) a financial benefits perspective, using national data since 2002 to calculate the internal rate of return to homeownership compared to alternative investments. Our overall conclusion: homeownership is a valuable institution. While two decades of policies in the 1990s and early 2000s may have put too much faith in the benefits of homeownership, the pendulum seems to have swung too far the other way, and many now may have too little faith in homeownership as part of the American Dream.
Full-Text Access | Supplementary Materials

“Sand Castles before the Tide? Affordable Housing in Expensive Cities,” by Gabriel Metcalf
This article focuses on cities with unprecedented economic success and a seemingly permanent crisis of affordable housing. In the expensive cities, policymakers expend great amounts of energy trying to bring down housing costs with subsidies for affordable housing and sometimes with rent control. But these efforts are undermined by planning decisions that make housing for most people vastly more expensive than it has to be by restricting the supply of new units even in the face of growing demand. I begin by describing current housing policy in the expensive metro areas of the United States. I then show how this combination of policies affecting housing, despite internal contradictions, makes sense from the perspective of the political coalitions that can form in a setting of fragmented local jurisdictions, local control over land use policies, and homeowner control over local government. Finally, I propose some more effective approaches to housing policy. My view is that the effects of the formal affordable housing policies of expensive cities are quite small in their impact when compared to the size of the problem—like sand castles before the tide. I will argue that we can do more, potentially much more, to create subsidized affordable housing in high-cost American cities. But more fundamentally, we will need to rethink the broader set of exclusionary land use policies that are the primary reason that housing in these cities has become so expensive. We cannot solve the problem unless we fix the housing market itself.
Full-Text Access | Supplementary Materials

Symposium: Friedman’s Natural Rate Hypothesis after 50 Years

“Friedman’s Presidential Address in the Evolution of Macroeconomic Thought,” by N. Gregory Mankiw and Ricardo Reis
Milton Friedman’s presidential address, “The Role of Monetary Policy,” which was delivered 50 years ago in December 1967 and published in the March 1968 issue of the American Economic Review, is unusual in the outsized role it has played. What explains the huge influence of this work, merely 17 pages in length? One factor is that Friedman addresses an important topic. Another is that it is written in simple, clear prose, making it an ideal addition to the reading lists of many courses. But what distinguishes Friedman’s address is that it invites readers to reorient their thinking in a fundamental way. It was an invitation that, after hearing the arguments, many readers chose to accept. Indeed, it is no exaggeration to view Friedman’s 1967 AEA presidential address as marking a turning point in the history of macroeconomic research. Our goal here is to assess this contribution, with the benefit of a half-century of hindsight. We discuss where macroeconomics was before the address, what insights Friedman offered, where researchers and central bankers stand today on these issues, and (most speculatively) where we may be heading in the future.
Full-Text Access | Supplementary Materials

“Should We Reject the Natural Rate Hypothesis?” by Olivier Blanchard
Fifty years ago, Milton Friedman articulated the natural rate hypothesis. It was composed of two sub-hypotheses: First, the natural rate of unemployment is independent of monetary policy. Second, there is no long-run trade-off between the deviation of unemployment from the natural rate and inflation. Both propositions have been challenged. The paper reviews the arguments and the macro and micro evidence against each. It concludes that, in each case, the evidence is suggestive, but not conclusive. Policymakers should keep the natural rate hypothesis as their null hypothesis, but keep an open mind and put some weight on the alternatives.
Full-Text Access | Supplementary Materials

“Short-Run and Long-Run Effects of Milton Friedman’s Presidential Address,” by Robert E. Hall and Thomas J. Sargent
The centerpiece of Milton Friedman’s (1968) presidential address to the American Economic Association, delivered in Washington, DC, on December 29, 1967, was the striking proposition that monetary policy has no longer-run effects on the real economy. Friedman focused on two real measures, the unemployment rate and the real interest rate, but the message was broader—in the longer run, monetary policy controls only the price level. We call this the monetary-policy invariance hypothesis. By 1968, macroeconomics had adopted the basic Phillips curve as the favored model of correlations between inflation and unemployment, and Friedman used the Phillips curve in the exposition of the invariance hypothesis. Friedman’s presidential address was commonly interpreted as a recommendation to add a previously omitted variable, the rate of inflation anticipated by the public, to the right-hand side of what then became an augmented Phillips curve. We believe that Friedman’s main message, the invariance hypothesis about long-term outcomes, has prevailed over the last half-century based on the broad sweep of evidence from many economies over many years. Subsequent research has not been kind to the Phillips curve, but we will argue that Friedman’s exposition of the invariance hypothesis in terms of a 1960s-style Phillips curve is incidental to his main message.
Full-Text Access | Supplementary Materials

Articles

“Exchange-Traded Funds 101 for Economists,” by Martin Lettau and Ananth Madhavan
Exchange-traded funds (ETFs) represent one of the most important financial innovations in decades. An ETF is an investment vehicle, with a specific architecture that typically seeks to track the performance of a specific index. The first US-listed ETF, the SPDR, was launched by State Street in January 1993 and seeks to track the S&P 500 index. It is still today the largest ETF by far, with assets of $178 billion. Following the introduction of the SPDR, new ETFs were launched tracking broad domestic and international indices, and more specialized sector, region, or country indexes. In recent years, ETFs have grown substantially in assets, diversity, and market significance, including substantial increases in assets in bond ETFs and so-called “smart beta” funds that track certain investment strategies often used by actively traded mutual funds and hedge funds. In this paper, we begin by describing the structure and organization of exchange-traded funds, contrasting them with mutual funds, which are close relatives of exchange-traded funds, describing the differences in how ETFs operate and their potential advantages in terms of liquidity, lower expenses, tax efficiency, and transparency. We then turn to concerns over whether the rise in ETFs may raise unexpected risks for investors or greater instability in financial markets. While concerns over financial fragility are worth serious consideration, some of the common concerns are overstated, and for others, a number of rules and practices are already in place that offer a substantial margin of safety.
Full-Text Access | Supplementary Materials

“Frictions or Mental Gaps: What’s Behind the Information We (Don’t) Use and When Do We Care?” by Benjamin Handel and Joshua Schwartzstein
Consumers suffer significant losses from not acting on available information. These losses stem from frictions such as search costs, switching costs, and rational inattention, as well as what we call mental gaps resulting from wrong priors/worldviews, or relevant features of a problem not being top of mind. Most research studying such losses does not empirically distinguish between these mechanisms. Instead, we show that most highly cited papers in this area presume one mechanism underlies consumer choices and assume away other potential explanations, or collapse many mechanisms together. We discuss the empirical difficulties that arise in distinguishing between different mechanisms, and some promising approaches for making progress in doing so. We also assess when it is more or less important for researchers to distinguish between these mechanisms. Approaches that seek to identify true value from demand, without specifying mechanisms behind this wedge, are most useful when researchers are interested in evaluating allocation policies that strongly steer consumers towards better options with regulation, traditional policy instruments, and defaults. On the other hand, understanding the precise mechanisms underlying consumer losses is essential to predicting the impact of mechanism policies aimed primarily at reducing specific frictions or mental gaps without otherwise steering consumers. We make the case that papers engaging with these questions empirically should be clear about whether their analyses distinguish between mechanisms behind poorly informed choices, and what that implies for the questions they can answer. We present examples from several empirical contexts to highlight these distinctions.
Full-Text Access | Supplementary Materials

“Do Economists Swing for the Fences after Tenure?” by Jonathan Brogaard, Joseph Engelberg and Edward Van Wesep
Using a sample of all academics who pass through top 50 economics and finance departments from 1996 through 2014, we study whether the granting of tenure leads faculty to pursue riskier ideas. We use the extreme tails of ex-post citations as our measure of risk and find that both the number of publications and the portion consisting of “home runs” peak at tenure and fall steadily for a decade thereafter. Similar patterns hold for faculty at elite (top 10) institutions and for faculty who take differing time to tenure. We find the opposite pattern among poorly cited publications: their numbers rise post-tenure.
Full-Text Access | Supplementary Materials

“Retrospectives: Cost-Push and Demand-Pull Inflation: Milton Friedman and the ‘Cruel Dilemma,’” by Johannes A. Schwarzer
This paper addresses two conflicting views in the 1950s and 1960s about the inflation-unemployment tradeoff as given by the Phillips curve. Many economists at this time emphasized the issue of a seemingly unavoidable inflationary pressure at or even below full employment. In contrast, Milton Friedman was convinced that full employment and price stability are not conflicting policy objectives. This dividing line between the two camps ultimately rested on fundamentally different views about the inflationary process: For economists of the 1950s and 1960s, cost-push forces are responsible for the apparent conflict between price stability and full employment. On the other hand, Friedman, who regarded inflation to be an exclusively monetary phenomenon, rejected the notion of ongoing inflationary cost-push pressures at full employment. Besides his emphasis on the full adjustment of inflation expectations, this rejection of cost-push theories of inflation, which implied a decoupling of the two previously perceived incompatible policy objectives, was the other important element in Friedman’s attack on the Phillips curve tradeoff in his 1967 presidential address to the American Economic Association.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

“Using JEP Articles as Course Readings? Tell Us About It!”
Full-Text Access | Supplementary Materials