China and Currency Manipulation

Treasury Secretary Steven Mnuchin has "determined that China is a Currency Manipulator" (with capital letters in the press release). The overall claim is that one major reason for China's large trade surpluses is that China is keeping its exchange rate too low. This low exchange rate makes China's exports cheaper to the rest of the world, while also making foreign products more expensive in China, thus creating China's trade surplus.

The claim is not particularly plausible. Indeed, a cynic might point out that if currency manipulation was the main trade problem all along, then Trump has been wasting time by playing around with tariffs since early 2018. For perspective on the exchange rate issue, let's start with the Chinese yuan/US dollar exchange rate over the last 30 years.

Up to about 1995, the exchange rate shown here is not an especially meaningful number, because during that time China had an "official" exchange rate set by the government and an "unofficial" exchange rate set in markets. The official rate had a much stronger yuan than the unofficial rate, so when the two rates were united in 1995, there is a steep jump upward in the exchange rate graph, as the yuan gets weaker (that is, it takes more yuan to buy a US dollar).

From about 1996-2005, the straight horizontal line on the graph is strong evidence that the People's Bank of China was keeping the exchange rate fixed. Starting in mid-2005, China stopped holding its exchange rate fixed, and the yuan became stronger, moving from about 8.2 yuan/dollar in early 2005 to 6.8 yuan/dollar by mid-2008. Since then, the yuan has shifted up and down, falling as low as about 6.1 yuan/dollar at times, but then often rising back up to about 6.8 yuan/dollar.

It's useful to compare the yuan exchange rate with China's balance of trade. Here's a figure based on World Bank data showing China's trade balance since 1990. Back in the 1990s, China's trade balance was usually positive, but also typically less than 2% of GDP. When China joined the WTO in 2001, its exports took off and so did its trade surplus, hitting 10% of GDP in 2007. It would be highly implausible to attribute this jump in China's trade surplus to currency manipulation, because the first figure shows that China's exchange rate was unchanged during this period. It is also highly implausible to attribute this rise to more Chinese protectionism, because China's giant trade surpluses resulted from higher exports, not lower imports.

But then China's extraordinary trade surplus soon went away. By 2011, China's trade surplus was under 2% of GDP; in 2018, it was under 1% of GDP. Thus, the Trump administration complaint that China is using an extraordinarily weak exchange rate to power very large Chinese trade surpluses has not been plausible since 2011, and is even less plausible since 2018.

So how did the US Treasury decide in August 2019 that China was now manipulating its currency? Treasury is required by law to publish a semiannual report, "Macroeconomic and Foreign Exchange Policies of Major Trading Partners of the United States."

At the time of the October 2018 report, the exchange rate was about 6.9 yuan/dollar. The report did not find that China was acting like a currency manipulator at that time. As the report points out, the IMF agreed with this view, as did other outside economists. For example, the Treasury wrote:

"Over the last decade, the RMB has generally appreciated on a real, trade-weighted basis. This appreciation has led the IMF to shift its assessment of the RMB in recent years and conclude that the RMB is broadly in line with economic fundamentals."

That report also offers a macroeconomically odd complaint. It acknowledges that China's overall trade surplus in 2018 was near-zero, but then complains that China's trade position is "unevenly spread"–that is, China has a trade surplus with some countries like the US, but a nearly offsetting trade deficit with other countries. Treasury wrote:

Since then, China’s current account surplus has declined substantially, falling to 0.5 percent of GDP in the four quarters through June 2018. However, it remains unevenly spread among China’s trading partners. In particular, China retains a very large and persistent trade surplus with the United States, at $390 billion over the four quarters through June 2018.

So China was on warning that even if its overall trade balance was near-zero, the US was focused only on the bilateral trade balance. The next Treasury report arrived in May 2019, when the exchange rate was still 6.9 yuan/dollar, as it had been at the time of the October 2018 report. Again, Treasury did not find that China was a currency manipulator. However, in the exchange rate graph above you can see a little dip in the yuan/dollar exchange rate in March 2018, as the yuan became a bit weaker for a short time. Treasury warned:

Notwithstanding that China does not trigger all three criteria under the 2015 legislation, Treasury will continue its enhanced bilateral engagement with China regarding exchange rate issues, given that the RMB has fallen against the dollar by 8 percent over the last year in the context of an extremely large and widening bilateral trade surplus. Treasury continues to urge China to take the necessary steps to avoid a persistently weak currency.

Again, the focus is on China's bilateral trade surplus with the US. There's another interesting hint here, which is that Treasury is urging "China to take the necessary steps to avoid a persistently weak currency." This phrasing is interesting, because it isn't a complaint that China is intervening to make its currency too weak; instead, it's a complaint that China should be intervening more to prevent its currency from being weak. It's a complaint that China is not being enough of a currency manipulator in the way the Trump administration would prefer.

Before the announcement from Mnuchin, the exchange rate in August 2019 was still about 6.9 yuan/dollar–that is, what it had been in May 2019 and October 2018. But now, this exchange rate was evidence that China was a "Currency Manipulator." In fact, the recent Treasury press release links to the May 2019 report, which did not find that China was manipulating its currency, in support of the finding that China was manipulating its currency.

The May 2019 report sets out three standards that Treasury will supposedly be looking at when thinking about currency manipulation. First is whether the country has a bilateral trade surplus with the US of more than $20 billion, which China does. Second is whether the country has an overall trade surplus of more than 2% of GDP, which China doesn't. IMF statistics find that China's trade surplus in 2018 was 0.4% of GDP; moreover, the IMF finds that China is headed for a trade deficit in the next few years. Third is whether the country is intervening regularly in foreign exchange markets. As the Treasury report points out, the foreign exchange operations of China's central bank are shrouded in secrecy, but the evidence suggests that China's foreign exchange reserves haven't moved much since early 2018, or are perhaps a bit down overall, which is not consistent with the theory that the People's Bank of China has been buying lots of US dollars to keep the yuan exchange rate weak.
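
To make the three-part test concrete, here is a minimal sketch in Python, with the thresholds and the China figures taken from the discussion above. The function name and the treatment of the intervention criterion as a simple yes/no flag are my own simplifications, not Treasury's actual procedure.

```python
# A toy version of the three criteria from the May 2019 Treasury report.
# The $20 billion and 2%-of-GDP thresholds come from the report; treating
# "persistent intervention" as a boolean is a simplification, since that
# criterion is the murkiest of the three in practice.

def treasury_criteria(bilateral_surplus_bn, overall_surplus_pct_gdp,
                      persistent_intervention):
    """Return which of the three criteria a country triggers."""
    return {
        "bilateral surplus over $20 billion": bilateral_surplus_bn > 20,
        "overall surplus over 2% of GDP": overall_surplus_pct_gdp > 2.0,
        "persistent one-sided intervention": persistent_intervention,
    }

# China, using the figures cited in the text: a roughly $390 billion
# bilateral surplus, an overall surplus of about 0.4% of GDP, and no
# clear evidence of intervention to weaken the yuan.
for criterion, triggered in treasury_criteria(390, 0.4, False).items():
    print(f"{criterion}: {'yes' if triggered else 'no'}")
# Only the first of the three criteria is triggered.
```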

For those with eyes to see, it should be apparent that trade deficits and surpluses rise and fall as a result of large-scale macroeconomic factors, including national levels of consumption, savings, and investment. I won't belabor this point here, because I've tried to explain it a few times: for example, see "Misconceptions about Trade Deficits" (March 30, 2018), "Some Facts about Current Global Account Balances" (August 7, 2018), and "US Not the Source of China's Growth; China Not the Source of America's Problems" (December 4, 2018).

Understanding the actual drivers of trade balances explains why the Trump administration could spend a year raising tariffs and watch the US trade deficit get larger, rather than smaller. China's yuan/dollar exchange rate is at a level where its overall trade balance is near-zero and, according to the IMF, headed for a modest trade deficit in the next few years. Thus, the IMF is unlikely to back the Trump administration argument that the People's Bank of China is manipulating its exchange rate. But if the Trump administration bludgeons China into having a substantially stronger exchange rate, what happens next?

A strong exchange rate for one currency necessarily means a weaker exchange rate for other currencies: for example, if it takes fewer yuan to buy a dollar, it necessarily takes more dollars to buy a yuan. By arguing for a stronger yuan exchange rate, the Trump administration is also arguing for a weaker US dollar exchange rate, apparently trying to devalue its way to prosperity. This makes it easier to sell US exports abroad, but the lower buying power of the US dollar also means that, in effect, everything that consumers and firms import will cost more. Economies with floating exchange rates, like the US, are built to absorb short- and even medium-term fluctuations in exchange rates without too much stress. But in effect, the current Treasury policy is to advocate that China take steps to produce a permanently weaker US dollar–and thus benefit exporters at the cost of higher prices for importers–until the bilateral US trade deficit with China is eliminated.
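
The reciprocal arithmetic in that first sentence is easy to verify. Here is a small Python illustration using round exchange rates from the discussion above; the specific product prices are invented for the example.

```python
# Exchange rates quoted as yuan-per-dollar and dollars-per-yuan are
# inverses, so a stronger yuan is automatically a weaker dollar.
weak_yuan = 6.9    # yuan per dollar
strong_yuan = 6.1  # yuan per dollar: fewer yuan buy a dollar

for rate in (weak_yuan, strong_yuan):
    print(f"{rate} yuan/dollar is {1 / rate:.4f} dollars/yuan")

# A 690-yuan Chinese product gets more expensive for US buyers as the
# yuan strengthens ...
print(f"690-yuan import: ${690 / weak_yuan:.2f}, then ${690 / strong_yuan:.2f}")
# ... while a $100 US product gets cheaper for Chinese buyers.
print(f"$100 export: {100 * weak_yuan:.0f} yuan, then {100 * strong_yuan:.0f} yuan")
```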

Some Thoughts on Markups

A number of recent research studies have argued that "markups" are on the rise. As one of several prominent examples, a study by Jan De Loecker, Jan Eeckhout, and Gabriel Unger, called "The Rise of Market Power and the Macroeconomic Implications," presents calculations suggesting that the average markup for a US firm rose from 1.21 in 1980 to 1.61 in 2016 (here's a working paper version from Eeckhout's website, dated November 22, 2018). The Summer 2019 issue of the Journal of Economic Perspectives discusses the strengths and weaknesses of this evidence in a three-paper symposium:

"Are Price-Cost Markups Rising in the United States? A Discussion of the Evidence," by Susanto Basu
"Macroeconomics and Market Power: Context, Implications, and Open Questions," by Chad Syverson
"Do Increasing Markups Matter? Lessons from Empirical Industrial Organization," by Steven Berry, Martin Gaynor, and Fiona Scott Morton

Here are some of my own takeaways from these articles:

1) For economists, "markups" are not the same as profits. Profits happen when price is above the average cost of production. Markups are defined as a situation where the price is above the marginal cost of production. This definition is also why markups are so hard to measure. It's pretty straightforward to measure average cost: just take the total cost of production and divide by the quantity produced. But measuring the marginal cost of production is hard, because it requires separating a firm's expenses into "fixed" and "variable" costs, which is a useful conceptual division for economists but not how firms divide up their costs for accounting purposes.
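
A back-of-the-envelope example may help fix the distinction. All of the numbers below are invented, and the assumption of a constant marginal cost is mine.

```python
# Average cost needs only totals, but marginal cost requires taking a
# stand on the fixed/variable split, which accounting data does not
# provide directly.
total_cost = 1300.0
quantity = 100
price = 15.0

average_cost = total_cost / quantity      # easy: 13.00
profit_per_unit = price - average_cost    # price vs AVERAGE cost: 2.00

fixed_cost = 800.0                        # an assumption, not a datum
marginal_cost = (total_cost - fixed_cost) / quantity  # 5.00 if MC is constant
markup = price / marginal_cost            # price vs MARGINAL cost: 3.00

print(f"profit per unit: {profit_per_unit:.2f}")
print(f"markup: {markup:.2f}")
```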

2) There are a number of economic situations where there can be persistent markups while profits remain at normal levels. As one example, every intro textbook discusses "monopolistic competition," a situation in which firms in a market sell similar but differentiated products. Examples include clothing stores in the same shopping mall with different styles, gas stations in different locations, products sold with different money-back guarantees, and everyday products like dishwasher soap that come in many varieties. The basic textbook explanation is that in a setting of monopolistic competition, firms will be able to set prices above marginal cost, charging more to consumers who desire the specific differentiated characteristics of that product. However, part of the definition of monopolistic competition is that other firms can easily enter the market and expand production. The result is a situation of positive markups (price higher than marginal cost) but only average profits.
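
A toy entry model can illustrate how free entry competes profits away while leaving the markup intact. The parameters below are invented, and splitting a fixed market evenly among entrants is a deliberate oversimplification of the textbook story.

```python
# Each firm prices above marginal cost, but entrants keep arriving while
# a newcomer would still expect a profit, splitting the market thinner.
market_demand = 12000   # total units sold at the going price
price = 10.0
marginal_cost = 6.0     # markup P/MC stays fixed at about 1.67
fixed_cost = 4000.0     # per-firm cost of entering the market

firms = 1
# Enter while one more firm would still more than break even.
while (price - marginal_cost) * market_demand / (firms + 1) - fixed_cost > 0:
    firms += 1

profit = (price - marginal_cost) * market_demand / firms - fixed_cost
print(f"firms after entry: {firms}")           # 11
print(f"markup: {price / marginal_cost:.2f}")  # 1.67, unchanged by entry
print(f"per-firm profit: {profit:.0f}")        # 364, competed down toward zero
```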

3) Another example of positive markups arises in companies with high fixed costs and low marginal costs: as a simple example, think of a video game company where the cost of creating the game is high, but once the game is created, the marginal cost of providing the game to an additional user is near zero. It's quite possible for this kind of company to have a positive markup (price over marginal cost), but also to suffer losses (because price isn't enough above marginal cost to cover the firm's fixed costs).
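
In the same invented-numbers spirit, here is the video game case: an enormous markup over marginal cost alongside an outright loss.

```python
# High fixed cost, near-zero marginal cost: a huge markup, yet a loss.
development_cost = 5_000_000.0  # fixed cost of creating the game
marginal_cost = 0.25            # cost of serving one more download
price = 20.0
copies_sold = 200_000

markup = price / marginal_cost
profit = (price - marginal_cost) * copies_sold - development_cost

print(f"markup: {markup:.0f}x over marginal cost")  # 80x
print(f"profit: {profit:,.0f}")                     # -1,050,000: a loss
```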

4) Many big tech companies–Facebook, Amazon, Google, Apple, Microsoft, and others–have a number of products that share this property of high fixed costs and lower marginal costs. Thus, these companies are likely to have high markups. Moreover, given their technology, size, and business practices, it may be difficult for entrants to challenge these companies. As a result, companies like these may be able to sustain a combination of high markups and high profits over time. For example, in the study mentioned above by De Loecker, Eeckhout, and Unger, the overall rise in markups comes not from a rise in markups for the median company, but from a very sharp rise in markups for a much smaller number of companies.

5) The emergence of high-markup, high-profit firms may in some cases be a positive step for productivity and wages. There is an emerging literature on what are sometimes called "superstar" firms (here's a recent working paper version of "The Fall of the Labor Share and the Rise of Superstar Firms," by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen). Imagine a company that makes large-scale investments in information technology, both for the logistics of running its operations and for quality control, and also in widespread use of far-flung supply chains. Based on these high fixed costs, such a company may be able to expand in the market, taking market share from smaller firms. But economists commonly hold that when a company succeeds by offering the products that consumers desire at attractive prices, this is a good thing. Thus, antitrust authorities need to think carefully about whether companies with high markups are gaining high profits through pro-consumer fixed investments or through anti-consumer restraints on competitors.

6) It's hard to measure whether markups are rising for the economy as a whole. Detailed studies of a particular firm or industry look carefully at the production process for that firm or industry and estimate fixed and marginal costs. But how can such data be collected for most companies in an economy? Some approaches use firm-level accounting data, or data on capital investment and depreciation across industries, or sector-level data on outputs and inputs. Using this data to estimate production functions and markups across firms involves a number of modelling choices. It turns out that assumptions like whether an industry produces with constant returns to scale or increasing returns to scale matter a lot. There are a bunch of genuinely hard and disputed questions about how to proceed here.

7) As an example, one approach is to rely on accounting data. The prominent study by De Loecker, Eeckhout, and Unger uses accounting data (from Compustat) that breaks down costs of production into two main categories: "Cost of Goods Sold" and "Selling, General, and Administrative." If one thinks of Cost of Goods Sold as a proxy that captures variable costs, one can then use this data, along with a measure of the fixed capital stock of firms, to do more detailed calculations, which with some assumptions (like assuming that all profits are paid to owners of capital) will imply estimates for markups. But there are a bundle of underlying assumptions here. For example, Cost of Goods Sold is defined differently by accountants across industries, and it seems to be a falling share of total costs over time. Sorting out the underlying economic meaning of the accounting data, along with what assumptions are necessary and what implications those assumptions have, is an ongoing area of research.
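
As a rough sketch of how such a calculation works: if Cost of Goods Sold proxies for the variable input, the markup can be computed as the output elasticity of that input times the ratio of sales to COGS. In the sketch below the elasticity is simply assumed, whereas in the actual study it is estimated from a production function, which is exactly where the contested modelling choices enter. The sales and COGS figures are invented.

```python
# Markup = (output elasticity of the variable input) * (sales / COGS),
# in the spirit of the De Loecker, Eeckhout, and Unger approach. The
# elasticity here is an assumption, not an estimate.
def markup_from_accounts(sales, cogs, elasticity):
    return elasticity * (sales / cogs)

sales = 1000.0
cogs = 700.0

# The same accounting numbers imply different markups under different
# assumed elasticities: one way the underlying assumptions matter.
for theta in (0.85, 0.95):
    print(f"elasticity {theta}: markup = {markup_from_accounts(sales, cogs, theta):.2f}")
```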

8) Other attempts to measure markups often rely on measuring a firm's amount of capital, which can be thought of as a fixed cost, and then looking at the profits received by owners of capital. But many firms have both "tangible" capital like machines, which is relatively well-measured, and "intangible" investments, which include situations where a firm has made past investments in knowledge (like research and development) or organizational capabilities that are now paying off. Of course, measuring intangible investment (and thinking about how fast it might depreciate) is very hard, but there's some evidence that intangible investments have become more important over time. If firms with high markups and profits are largely benefiting because they made large investments in the past–in the form of intangible capital–then this should probably be viewed as a positive for the economy.

9) Estimates of markups over time often reveal patterns that raise additional questions. For example, the study by De Loecker, Eeckhout, and Unger finds that most of the sharp rise in markups happened among a smaller segment of firms in the 1980s and 1990s, and markups haven't changed much since. (How does that timeline fit with one's internal narrative about what has caused higher markups, and in particular a belief that the causes are relatively recent?) Other studies that project back further in time suggest that markups were especially large in the 1960s. (So perhaps we need multiple explanations for what affected markups then and now?)

10) If very large rises in markups have occurred, then they should have implications for other areas of the economy. For example, Basu points out in his JEP paper that it's conceptually possible to draw a connection from higher corporate markups to labor receiving a lower share of income (a fact discussed here and here). But high estimates of the rise in markups suggest that the labor share of income should have fallen by much more than actually observed. Or as Syverson points out in his JEP paper, a rise in markups implies either that prices have risen quickly or that (marginal) costs have plunged. Low inflation rates mean that prices have not risen quickly, and evidence that costs have plunged economy-wide is scanty. Thus, both authors express the view that while it is plausible that markups have risen, the size of such a rise must be relatively modest to fit with other observed economic changes.
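
To see the size of the tension Basu identifies, here is a back-of-the-envelope check under the textbook assumption that the labor share equals labor's output elasticity divided by the markup (as in a Cobb-Douglas benchmark with markup pricing). The elasticity value is my own illustrative choice; the markup figures are the averages cited earlier from De Loecker, Eeckhout, and Unger.

```python
# Implied labor share = labor's output elasticity / markup, under a
# Cobb-Douglas-with-markups benchmark. The elasticity is illustrative.
labor_elasticity = 0.73
markup_1980 = 1.21
markup_2016 = 1.61

share_1980 = labor_elasticity / markup_1980   # about 0.60
share_2016 = labor_elasticity / markup_2016   # about 0.45

print(f"implied labor share, 1980: {share_1980:.3f}")
print(f"implied labor share, 2016: {share_2016:.3f}")
# A drop of roughly 15 percentage points, far larger than the
# few-percentage-point decline actually observed in the data.
```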

There's much more in the articles themselves about methods of measuring markups, about which methods produce estimates that are higher or lower, and about possible connections (or not) to industry concentration. Whether markups are rising in the US economy, and if so by how much, is a live and active area of research. If someone tells you that a very large rise in markups is a settled fact, they are showing a lack of awareness of the actual evolving state of play in this literature.

Administrative Data Moves Toward Center Stage

For the later decades of the 20th century, the most common source of data for economic studies was surveys carried out by government statistical agencies. There were household surveys like the Current Population Survey, the Survey of Income and Program Participation, the Consumer Expenditure Survey, the National Health Interview Survey, and the General Social Survey. There were government workers collecting data on prices at stores as an input to measures of inflation like the Consumer Price Index. There were business surveys, like the Economic Census, the Retail Trade Survey, the Annual Survey of Manufactures, the Residential Finance Survey, and others. Branches of government like the Department of Energy, the Department of Agriculture, and the Health Care Financing Administration collected data on specific industries. There was also a Census of Governments to get data on state and local governments. The Bureau of Economic Analysis would pull together data from these sources and others to estimate GDP.

But over time, a split developed. On one side was this body of data created by the government for use by businesses and policy-makers, as well as researchers. On the other side was a vast amount of data collected in the process of administering government programs. Often this administrative data was not formatted or organized in a way that researchers could use. Moreover, the administrative data was often siloed in one government agency; for example, student grades and their academic progress were traditionally kept inside school districts or in some cases state departments of education, and not easily connected to other data that might explain patterns of school grades, either within a school year or over time.

There were also gaps between the government-collected survey data and the administrative data. As one example, the administrative data told how much the government had paid out in welfare benefits and food stamps, but household surveys reported that only about half of that total had been received.

There has been a considerable movement toward the use of administrative data. The Russell Sage Foundation Journal of the Social Sciences has put out a double issue in March 2019 on the theme of "Using Administrative Data for Science and Policy," with a range of examples. From the introductory essay by Andrew M. Penner and Kenneth A. Dodge:

Research using administrative data has much in common with history and archeology, insofar as it observes the tracks that individuals leave as they move through society and draws lessons from these glimpses into their lives. …

Given their origin in a particular institutional context, administrative records are typically fragmented, and these data are often not linked to other data that would be useful for research and policy. Hospitals, for example, collect detailed information about patients’ health, schools regularly collect information about student development, and employers often keep records not only about the performance of employees, but also about applicants who were ultimately not offered positions. Although various combinations of these data can provide important insights, they are typically compartmentalized. Likewise, given their origin, administrative records often lack certain kinds of information that are less likely to be collected in these records. For example, information about attitudes, affinities, and motives are not often collected in administrative records. Combining administrative data with records from other sources—either by linking administrative records across sources or by making administrative records available to be linked to data collected via other means—is thus central to building administrative data infrastructure. …

By virtue of how they come into existence, administrative data are typically focused on one facet of an individual’s life, and data and insights are often siloed. … Administrative records from birth, education, criminal justice, labor market, and mortality often capture different points in an individual’s life; combining data across these stages allows us to understand how inequalities unfold over the arc of an individual’s life. … 

One clear example is in education. Despite their focus on preparing students who are “college- and career-ready,” schools have historically struggled to obtain data on the practices that will prepare their students to be successful because widespread links between students in K–12 educational systems and higher education outcomes have become available only recently, and links between K–12 data systems and the labor market remain relatively rare. These data linkages are important to understand the efficacy of school-based vocational programs, dropout recovery interventions, college readiness programs, and advancement placement course policies. But schools, like other organizations, typically lack the capacity and expertise to build this infrastructure and analyze the resulting data.

At a time when we are all sensitized to how big tech companies are gathering, combining, and marketing our personal data, the rise of administrative data clearly has a concerning side. Consider the footprints that many of us have left in administrative data over the years, about our education, physical and mental health, finances, how much we were paid, tax filings, car and real estate ownership, Social Security contributions, benefits from government programs, and many more–right down to the books checked out of the public library.

The obvious challenge is to find ways to use administrative data where protection of personal privacy is built in from the start. For example, when the unemployment rate is calculated from the Current Population Survey, no one is concerned that unemployment at specific identified households will become public as a result. In this issue, a paper by David B. Grusky, Michael Hout, Timothy M. Smeeding, and C. Matthew Snipp describes "The American Opportunity Study: A New Infrastructure for Monitoring Outcomes, Evaluating Policy, and Advancing Basic Science." They write:

The American Opportunity Study (AOS) …  is an ongoing effort to link the censuses of 1960 through 2010 and the American Community Surveys (ACS) and thereby convert cross-sectional decennial census data into a bona fide panel that will represent the full U.S. population over the last seventy years. Because this panel will be continuously refreshed as additional census and ACS data become available, it can serve as a population-level scaffolding on which other administrative data (such as tax records, earnings reports, program data) are then hung. … In other countries that have linked data, such as Wales and New Zealand, a well-developed infrastructure allows access to carefully vetted scholars, with the result that high-quality evidence is more frequently brought to bear on policy decisions.

I can ramble on a bit about the merits of administrative data for research. It covers everyone, and thus allows detailed analysis of various subgroups, tracking people over time, and even looking across generations. It describes what government programs have actually done, which can then be compared and combined with surveys of households or businesses. But rather than talking in generalities, let me just mention some of the studies from this double issue. Notice in particular how the studies often use administrative data, sometimes from separate government agencies, in a way that addresses a worthwhile question.

From Sean F. Reardon, "Educational Opportunity in Early and Middle Childhood: Using Full Population Administrative Data to Study Variation by Place and Age":

I use standardized test scores from roughly forty-five million students to describe the temporal structure of educational opportunity in more than eleven thousand school districts in the United States. Variation among school districts is considerable in both average third-grade scores and test score growth rates. The two measures are uncorrelated, indicating that the characteristics of communities that provide high levels of early childhood educational opportunity are not the same as those that provide high opportunities for growth from third to eighth grade.

From Janelle Downing and Tim Bruckner, "Subprime Babies: The Foreclosure Crisis and Initial Health Endowments":

This research uses a probabilistic matching strategy to link foreclosure records with birth certificate records from 2006 to 2010 in California to identify birth parents who experienced a foreclosure. … [We] find that infants in gestation during or after the foreclosure had a lower birth weight for gestational age than those born earlier, suggesting that the foreclosure crisis was a plausible contributor to disparities in initial health endowments.

From Agustina Laurito, Johanna Lacoe, Amy Ellen Schwartz, Patrick Sharkey, and Ingrid Gould Ellen, "School Climate and the Impact of Neighborhood Crime on Test Scores":

Using administrative data from the New York City Department of Education and the New York City Police Department, we find that exposure to violence in the residential neighborhood and an unsafe climate at school lead to substantial test score losses in English language arts (ELA).

From Roberto M. Fernandez and Brian Rubineau, "Network Recruitment and the Glass Ceiling: Evidence from Two Firms":

Does network recruitment contribute to the glass ceiling? We use administrative data from two companies to answer the question. In the presence of gender homophily, recruitment through employee referrals can disadvantage women when an old boys’ network is in place. We calculate the segregating effects of network recruitment across multiple job levels in the two firms. If network recruitment is a factor, the segregating impact should disadvantage women more at higher levels. We find this pattern, but also find that network recruitment is a desegregating force overall. It promotes women’s representation strongly at all levels, but less so at higher levels.

One final thought: using administrative data often requires academic researchers to become entrepreneurial about seeking out such data, working with the government agencies or private firms that hold the original data, finding ways to offer cast-iron reassurances about personal privacy, and only then being able to actually work with the data and see if something interesting emerges. For modern economists, this process is quite different from the old days of digging through data that government agencies had already collected, prepared, and made public for researchers to use.

Summer 2019 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available on-line, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Summer 2019 issue, which in the Taylor household is known as issue #129. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.

_______________
Symposium on Markups

"Are Price-Cost Markups Rising in the United States? A Discussion of the Evidence," by Susanto Basu
A number of recent papers have argued that US firms exert increasing market power, as measured by their markups of price over marginal cost. I review three of the main approaches to estimating economy-wide markups and show that all are based on the hypothesis of firm cost minimization. Yet different assumptions and methods of implementation lead to quite different conclusions regarding the levels and trends of markups. I survey the literature critically and argue that some of the startling findings of steeply rising markups are difficult to reconcile with other evidence and with aggregate data. Existing methods cannot determine whether markups have been stable or whether they have risen modestly over the past several decades. Even relatively small increases in markups are consistent with significant changes in aggregate outcomes, such as the observed decline in labor's share of national income.
Full-Text Access | Supplementary Materials

"Macroeconomics and Market Power: Context, Implications, and Open Questions," by Chad Syverson
This article assesses several aspects of recent macroeconomic market power research. These include the ways market power is defined and measured; the use of accounting data to estimate markups; the quantitative implications of theoretical connections among markups, prices, costs, scale elasticities, and profits; and conflicting evidence on whether greater market power has led to lower investment rates and a lower labor share of income. Throughout this discussion, I characterize the congruencies and incongruencies between macro evidence and micro views of market power and, when they do not perfectly overlap, explain the open questions that need to be answered to make the connection complete.
Full-Text Access | Supplementary Materials

"Do Increasing Markups Matter? Lessons from Empirical Industrial Organization," by Steven Berry, Martin Gaynor and Fiona Scott Morton
This article considers the recent literature on firm markups in light of both new and classic work in the field of industrial organization. We detail the shortcomings of papers that rely on discredited approaches from the "structure-conduct-performance" literature. In contrast, papers based on production function estimation have made useful progress in measuring broad trends in markups. However, industries are so heterogeneous that careful industry-specific studies are also required, and sorely needed. Examples of such studies illustrate differing explanations for rising markups, including endogenous increases in fixed costs associated with lower marginal costs. In some industries there is evidence of price increases driven by mergers. To fully understand markups, we must eventually recover the key economic primitives of demand, marginal cost, and fixed and sunk costs. We end by discussing the various aspects of antitrust enforcement that may be of increasing importance regardless of the cause of increased markups.
Full-Text Access | Supplementary Materials

Symposium on Issues in Antitrust

"Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets," by Carl Shapiro
Accumulating evidence points to the need for more vigorous antitrust enforcement in the United States in three areas. First, stricter merger control is warranted in an economy where large, highly efficient and profitable "superstar" firms account for an increasing share of economic activity. Evidence from merger retrospectives further supports the conclusion that stricter merger control is needed. Second, greater vigilance is needed to prevent dominant firms, including the tech titans, from engaging in exclusionary conduct. The systematic shrinking of the scope of the Sherman Act by the Supreme Court over the past 40 years may make this difficult. Third, greater antitrust scrutiny should be given to the monopsony power of employers in labor markets.
Full-Text Access | Supplementary Materials

"The Problem of Bigness: From Standard Oil to Google," by Naomi R. Lamoreaux
This article sets recent expressions of alarm about the monopoly power of technology giants such as Google and Amazon in the long history of Americans' response to big business. I argue that we cannot understand that history unless we realize that Americans have always been concerned about the political and economic dangers of bigness, not just the threat of high prices. The problem policymakers faced after the rise of Standard Oil was how to protect society against those dangers without punishing firms that grew large because they were innovative. The antitrust regime put in place in the early twentieth century managed this balancing act by focusing on large firms' conduct toward competitors and banning practices that were anticompetitive or exclusionary. Maintaining this balance was difficult, however, and it gave way over time—first to a preoccupation with market power during the post–World War II period, and then to a fixation on consumer welfare in the late twentieth century. Refocusing policy on large firms' conduct would do much to address current fears about bigness without penalizing firms whose market power comes from innovation.
Full-Text Access | Supplementary Materials

Articles

"How Market Design Emerged from Game Theory: A Mutual Interview," by Alvin E. Roth and Robert B. Wilson
We interview each other about how game theory and mechanism design evolved into practical market design. When we learned game theory, games were modeled either in terms of the strategies available to the players ("noncooperative games") or the outcomes attainable by coalitions ("cooperative games"), and these were viewed as models for different kinds of games. The model itself was viewed as a mathematical object that could be examined in its entirety. Market design, however, has come to view these models as complementary approaches for examining different ways marketplaces operate within their economic environment. Because that environment can be complex, there will be unobservable aspects of the game. Mathematical models themselves play a less heroic, stand-alone role in market design than in the theoretical mechanism design literature. Other kinds of investigation, communication, and persuasion are important in crafting a workable design and helping it to be adopted, implemented, maintained, and adapted.
Full-Text Access | Supplementary Materials

"A Bridge from Monty Hall to the Hot Hand: The Principle of Restricted Choice," by Joshua B. Miller and Adam Sanjurjo
We show how classic conditional probability puzzles, such as the Monty Hall problem, are intimately related to the recently discovered hot hand selection bias. We explain the connection by way of the principle of restricted choice, an intuitive inferential rule from the card game bridge, which we show is naturally quantified as the updating factor in the odds form of Bayes's rule. We illustrate how, just as the experimental subject fails to use available information to update correctly when choosing a door in the Monty Hall problem, researchers may neglect analogous information when designing experiments, analyzing data, and interpreting results.
Full-Text Access | Supplementary Materials

"A Toolkit of Policies to Promote Innovation," by Nicholas Bloom, John Van Reenen and Heidi Williams
Economic theory suggests that market economies are likely to underprovide innovation because of the public good nature of knowledge. Empirical evidence from the United States and other advanced economies supports this idea. We summarize the pros and cons of different policy instruments for promoting innovation and provide a basic "toolkit" describing which policies are most effective according to our reading of the evidence. In the short run, R&D tax credits and direct public funding seem the most productive, but in the longer run, increasing the supply of human capital (for example, relaxing immigration rules or expanding university STEM admissions) is likely more effective.
Full-Text Access | Supplementary Materials

"How Prevalent Is Downward Rigidity in Nominal Wages? International Evidence from Payroll Records and Pay Slips," by Michael W. L. Elsby and Gary Solon
For more than 80 years, many macroeconomic analyses have been premised on the assumption that workers' nominal wage rates cannot be cut. Contrary evidence from household surveys reasonably has been discounted on the grounds that the measurement of frequent wage cuts might be an artifact of reporting error. This article summarizes a more recent wave of studies based on more accurate wage data from payroll records and pay slips. By and large, these studies indicate that, except in extreme circumstances (when nominal wage cuts are either legally prohibited or rendered beside the point by very high inflation), nominal wage cuts from one year to the next appear quite common, typically affecting 15–25 percent of job stayers in periods of low inflation.
Full-Text Access | Supplementary Materials

"Should We Tax Sugar-Sweetened Beverages? An Overview of Theory and Evidence," by Hunt Allcott, Benjamin B. Lockwood and Dmitry Taubinsky
Taxes on sugar-sweetened beverages are growing in popularity and have generated an active public debate. Are they a good idea? If so, how high should they be? Are such taxes regressive? People in the United States and some other countries consume remarkable quantities of sugar-sweetened beverages, and the evidence suggests that this generates significant health costs. Building on recent work, we review the basic economic principles that determine the socially optimal sugar-sweetened beverage tax. The optimal tax depends on (1) externalities, or uninternalized health system costs from diseases caused by sugary drink consumption; (2) internalities, or costs consumers impose on themselves by consuming too many sugary drinks due to poor nutrition knowledge and/or lack of self-control; and (3) regressivity, or how much the financial burden and the internality benefits from the tax fall on the poor. We summarize the empirical evidence about the key parameters that determine how large the tax should be. Our calculations suggest that sugar-sweetened beverage taxes are welfare enhancing and indeed that the optimal sugar-sweetened beverage tax rate may be higher than the 1 cent per ounce rate most commonly used in US cities. We end with seven concrete suggestions for policymakers considering a sugar-sweetened beverage tax.
Full-Text Access | Supplementary Materials

Features
"Retrospectives: Lord Keynes and Mr. Say: A Proximity of Ideas," by Alain Béraud and Guy Numa
Since the publication of Keynes's General Theory of Employment, Interest and Money, generations of economists have been led to believe that Say was Keynes's ultimate nemesis. By means of textual and contextual analysis, we show that Keynes and Say held similar views on several key issues, such as the possibility of aggregate-demand deficiency, the role of money in the economy, and government intervention. Our conclusion is that there are enough similarities to call into question the idea that Keynes's views were antithetical to Say's. The irony is that Keynes was not aware of these similarities. Our study sheds new light on the interpretation of Keynes's work and on his criticism of classical political economy. Moreover, it suggests that some policy implications of demand-side and supply-side frameworks overlap. Finally, the study underlines the importance of a thorough analysis of the primary sources to fully grasp the substance of Say's message.
Full-Text Access | Supplementary Materials

"Some Journal of Economic Perspectives Articles Recommended for Classroom Use," by Timothy Taylor
In 2018, the editors of the Journal of Economic Perspectives invited faculty to send us examples of JEP articles that they had found useful for teaching. We received 250 responses. On the JEP website, we have created a landing page (https://www.aeaweb.org/journals/jep/classroom) that organizes the recommended articles into 33 categories. If you click on any of the categories at that link, you will see a list of JEP papers that were recommended by faculty members for classroom use for that category, presented in reverse date order. Each paper is listed with a hyperlink to its article page on the JEP website. In this article, I offer some thoughts about how this exercise was carried out, along with its strengths and weaknesses. Although we make no pretense of presenting a complete syllabus for any specific course, we offer the milder hope that these recommendations from peers might suggest some additional readings for your students.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor

Full-Text Access | Supplementary Materials