Fortuitously, the IMF just published its 2019 External Sector Report: The Dynamics of External Adjustment (July 2019). As the title implies, it's about trade surpluses and deficits all over the world, not just the US and China. But it has some content that gives a sense of how it is likely to respond to Mnuchin's importuning. Here's an overall comment:
The IMF’s multilateral approach suggests that about 35–45 percent of overall current account surpluses and deficits were deemed excessive in 2018. Higher-than-warranted balances remained centered in the euro area as a whole (driven by Germany and the Netherlands) and in other advanced economies (Korea, Singapore), while lower-than-warranted balances remained concentrated in the United Kingdom, the United States, and some emerging market economies (Argentina, Indonesia). China’s external position was assessed to be in line with fundamentals and desirable policies, as its current account surplus narrowed further …
A couple of points are worth noting here. First, the IMF does not believe that all trade deficits and surpluses are "excessive," only that about 35-45% are "excessive." For economists, there are sensible reasons why some countries make net investments in other countries, or receive net investments from other countries, which means that some countries will have reasonable trade surpluses or deficits.
Second, the IMF is saying that the excessive trade surpluses are centered in the EU and in Korea and Singapore. The excessive trade deficits are in the United States, the UK, and some emerging markets. But China's trade picture is not "excessive." Instead, it's in line with economic fundamentals.
Here's the IMF list of countries with the biggest trade deficits and surpluses in 2018, as shown in the table adapted from the IMF report. The US has by far the biggest trade deficit in absolute terms, although relative to the size of the US economy it's similar to or even smaller than those of many of the other countries with big trade deficits. Among countries with trade surpluses, China ranked 11th in absolute size in 2018, and as a share of China's giant GDP, its trade surplus was by far the smallest of the top 15.
But what about China's exchange rate in particular? Here's a figure from the IMF showing China's exchange rate since 2007, along with China's trade surpluses over that time. China's exchange rate has appreciated 36% since 2007, and its trade surpluses have been falling. In effect, Mnuchin's complaint is about the slight upward bend at the far right-hand side of China's exchange rate line.
So what is the cause of the large US trade deficits? The IMF points to a standard economic phenomenon that back in the 1980s used to be called the "twin deficits" problem. The US is running very large budget deficits at a time when its unemployment rate has been 4% or less for more than a year. From a macroeconomic view, all that buying power has to go someplace, and with the US economy already near full employment, it ends up flowing by various indirect routes into buying more imports–and driving up the US trade deficit. As the IMF writes, "many countries with lower-than-warranted current account balances had a looser-than-desirable fiscal policy, compared to its medium-term desirable level (Argentina, South Africa, Spain, United Kingdom, United States) …"
Why has China's current account surplus faded? One reason is related to the appreciation of China's exchange rate, already described. In addition, the IMF report suggests that China may be experiencing "export market saturation," given that China's share of world exports more than tripled from 5% in 2001 to 16% by 2017. China has also had a modest decline in its still-high savings rate, which means higher consumption of all goods, including a greater willingness to import.
The US has legitimate trade issues with China. China's treatment of intellectual property has often been cavalier at best, criminal at worst. But when it comes to the overall US trade deficit problem, it seems quite unlikely that the IMF will designate China as the culprit.
Making the case for (or against) a tax on sugar-sweetened beverages requires addressing a number of questions.
Why focus on sugar-sweetened beverages rather than on other sources of calories, or on candy and junk food?
How much does consumption of sugar-sweetened beverages lead to health or other harms like tooth decay?
How much of a tax on sugar-sweetened beverages is passed through from retailers to consumers?
How much does a tax on sugar-sweetened beverages lead to people just shopping in a nearby jurisdiction where such drinks aren't taxed?
How much does the share of the tax on sugar-sweetened beverages that is passed through to consumers affect the health harms, in particular for those consumers most at risk (like children who consume a high volume of such drinks)?
To what extent should the harms from sugar-sweetened beverages be counted as "externalities," which are costs imposed on others, and to what extent are they "internalities," a term which refers to costs that the consumers of these products impose on themselves but (perhaps because of imperfect information or lack of self-control) fail to take into account?
How much money might a tax on sugar-sweetened beverages collect?
To what extent do the costs of such a tax, and also the health benefits of such a tax, fall more heavily on those with lower income levels?
Putting all these factors together, does a tax on sugar-sweetened beverages seem like a wise policy?
This may seem like a lot of complications for answering a question about a small-scale policy. But for those who want a serious answer, these kinds of questions can't be avoided. I won't try to summarize all the points of the paper, but the tone of the answers can be inferred from the bottom line:
[W]e estimate that the socially optimal sugar-sweetened beverage tax is between 1 and 2.1 cents per ounce. One can understand this as coming from the correction needed to offset the negative externality (about 0.8 cents per ounce) and internality (about 1 cent per ounce) … Together, these rough estimates suggest an optimal tax of about 1.5 cents per ounce. While there is considerable uncertainty in these optimal tax estimates, the optimal tax is not zero and may be higher than the levels in most US cities to date. However, for policymakers who are philosophically opposed to considering internalities in an optimal tax calculation, the optimal tax considering only externalities is around 0.4 cents per ounce. …
[W]e estimate that the social welfare benefits from implementing the optimal tax nationwide (relative to having zero tax) are between $2.4 billion and $6.8 billion per year. These gains would be substantially larger if the tax rate were to scale with sugar content. Of course, such calculations require strong assumptions and depend on uncertain empirical estimates … Furthermore, sugar-sweetened beverage taxes are not a panacea—they will not, by themselves, solve the obesity epidemic in America or elsewhere. But sin taxes have proven to be a feasible and effective policy instrument in other domains, and the evidence suggests that the benefits of sugar-sweetened beverage taxes likely exceed the costs.
The United States, like most places, has an ambivalent view of big business. When big firms are making high profits, we are concerned that they are out of control and exploitative. When big firms are performing poorly, with losses and layoffs, we argue over how or whether to rescue them. (Remember the auto company bailouts in 2009?) Might it be possible to strike a more lasting balance?
For example, here's one possible combination of policies. Corporate bigness is fine by itself, and will not be prosecuted. However, the biggest firms will be sharply limited in their ability to acquire other companies. In addition, they may face limitations on their ability to participate in politics, as well as compulsory licensing of their key patents. In one famous case in 1956, the Bell System was required by antitrust authorities to license all of its existing patents to all US firms for free.
Something like this policy mix was implemented in the US economy in the middle decades of the 20th century. Naomi Lamoreaux writes about "The Problem of Bigness: From Standard Oil to Google" in the Summer 2019 issue of the Journal of Economic Perspectives. She points out that many of the concerns about Standard Oil more than a century ago, and about Google and other big tech companies in the present, are not about higher prices being charged to consumers. Instead, the concerns are about tactics used to choke off potential competitors and about the political clout of bigness. Lamoreaux describes the pendulum swings of law and public opinion with regard to bigness, but here, I want to focus on the political balance that was struck with regard to bigness in the middle decades of the 20th century.
Lamoreaux points out that the large US firms that became well-established in the second and third decades of the 20th century remained successful for some time, but with no particular overall trend toward greater concentration of industry. This was also a time when public attitudes toward corporate bigness were not very harsh. She writes:
Tracking the 100 largest firms in the US economy at various points between 1909 and 1958, Collins and Preston (1961) similarly found that the top firms gradually came to enjoy “an increasing amount of entrenchment of position by virtue of their size” (p. 1001). Over these same decades, moreover, there was remarkably little change in overall levels of economic concentration. Scholars have measured concentration in different ways and over different sets of years, and as a result, their estimates diverge somewhat. But … there was no clear trend toward increasing (or decreasing) concentration, either in the manufacturing sector or in the economy as a whole.
Intriguingly, even as large firms consolidated their positions, the public’s view of them became increasingly accepting. Galambos (1975) analyzed references to big business in a sample of periodicals read by various segments of the middle class over the period 1890–1940 and found that the antipathy of the late nineteenth century had greatly diminished by the interwar period. Auer and Petit (2018) conducted a similar analysis, searching the Proquest database of historical newspapers to find articles that included the word “monopoly.” Even though Auer and Petit were selecting on a word with generally negative connotations in American culture, they found that unfavorable mentions dropped from about 75 percent of the total in the late nineteenth century to a little over 50 percent starting in the 1920s.
One reason why people were more at peace with large companies during this time period is, as Lamoreaux describes, that Congress and state legislatures passed rules placing limits on corporate political involvement.
In addition to the new antitrust laws already discussed, Congress took a first step toward limiting business influence in politics by passing the Tillman Act in 1907, prohibiting corporations from contributing money to political campaigns for national office. The act was a reaction to a particular set of revelations—that large mutual insurance companies were using their members’ premiums to lobby for measures that weakened members’ protections (Winkler 2004)—but it built on pervasive fears that large-scale businesses were using their vast resources to shape the rules in their favor. By the end of 1908, 19 states had enacted corporate campaign-finance legislation of their own, and they had also begun to restrict lobbying expenditures by corporations (McCormick 1981, p. 266). Congress would write an expanded version of the Tillman law into the Federal Corrupt Practices Act in 1925 (Mager 1976).
Another reason why people of this time were more accepting of bigness is that the new antitrust laws gave them some reason to believe that bigness was less likely to be economically abusive. Lamoreaux writes:
The new antitrust regime seems to have been similarly reassuring, even though the 1920s are generally regarded as a period when antitrust enforcement was relatively lax (Cheffins 1989). The Federal Trade Commission got off to an inauspicious start in the early 1920s—most of the complaints it filed were dismissed by the courts—and in the late 1920s it was essentially captured by business interests (Davis 1962). By 1935, however, the agency was showing renewed vitality. The number of complaints it filed increased sharply, its dismissal rate fell to about one-quarter, and it was winning the vast majority of cases that proceeded to judicial review (Posner 1970, p. 382).
At the Department of Justice, there was no significant fall-off in the number of cases during the interwar period, with the exception of the early years of the Great Depression. Prosecutors seem to have targeted fewer large firms during the 1920s, but the department’s win rate increased from 64 percent in 1920–1924 to 93 percent in 1925–1929 (Posner 1970, pp. 368, 381; Cheffins 1989). Although most antitrust cases still involved horizontal combinations or conspiracies, by the 1930s about one-third of the cases filed by the Department of Justice were targeting abuses of market power, and the FTC’s proportion was closer to one-half (Posner 1970, pp. 396, 405, 408).
When the biggest firms recognized that anticompetitive practices, mergers, and political spending were going to come under enhanced scrutiny, they were encouraged to sustain their competitive position through funding research and development and obtaining patents. Of course, all intellectual property is built on a tradeoff: on one side, it's an incentive for innovation, but on the other side, it locks in a competitive advantage for the innovator for a decade or two. By the late 1930s, antitrust authorities were addressing this issue by requiring that large firms provide compulsory licenses to their key patents–in some cases without receiving any payments. Here's how Lamoreaux tells the story:
After World War I, large firms had stepped up both their investments in research and development and their efforts to accumulate patent portfolios. According to surveys conducted by the National Research Council, the number of new industrial research labs grew from about 37 per year between 1909 and 1918 to 74 per year between 1929 and 1936, and research employment in these labs increased by a factor of almost ten between 1921 and 1940 (Mowery and Rosenberg 1989, pp. 62–69). Large firms generated increasing numbers of patents internally, but they also bought them from outside inventors. … The competitive advantages to large firms that broad portfolios of patents could bring, in terms of both what they could achieve technologically and how they could forestall competition, were becoming increasingly apparent—not least to the firms themselves (Reich 1985). As early as the 1920s, valuations on the securities markets began to mirror the size and quality of large firms’ patent portfolios (Nicholas 2007).
Federal antitrust authorities began to pay attention as well, especially during the late 1930s … In 1938, a specially created commission, the Temporary National Economic Committee, launched a three-year investigation into the “Concentration of Economic Power.” The Temporary National Economic Committee began its hearings by examining large firms’ use of patents to achieve monopoly control, focusing in particular on the automobile and glass industries. In 1939, the committee held a second set of hearings to solicit ideas about how the patent system could be reformed (Hintz 2017). It also commissioned a book-length study by economist Walton Hamilton, Patents and Free Enterprise (Hamilton 1941). According to Hamilton, large firms had perverted the patent system. The system’s original purpose had been to encourage technological ingenuity, but now large firms were instead deploying patents as barriers to entry and using licensing agreements to divide up the market and limit competition among themselves (Hamilton 1941, pp. 158–63; John 2018).
The Temporary National Economic Committee’s patent investigation was headed by Thurman Arnold, assistant attorney general in charge of the Department of Justice’s antitrust division. Arnold’s views about the abuse of patents were similar to Hamilton’s, and at his insistence, the committee’s final report recommended compulsory licensing—requiring firms to license their technology at a fair royalty to anyone who wanted to use it. The recommendation went nowhere in Congress (Waller 2004), but Arnold nonetheless pursued it at Justice. As early as 1938, for example, he pushed Alcoa to license a set of its patents as part of an antitrust settlement, and the company agreed in a consent decree entered in 1942. By that time, Arnold had already secured three other compulsory licensing orders, and many more were to follow. Barnett (2018) compiled a complete list of such orders and their terms from 1938 to 1975. By the latter year, the total had risen to 136, one-third of which did not permit the firms to recoup any royalties at all for their intellectual property.
In the early and mid-twentieth century, concerns about excessive concentration of economic and political power in the hands of dominant firms helped constrain the ability of large firms to grow through mergers and acquisitions. During this period, if large firms wanted to grow, they often had little choice but to invest in internal R&D.
Antitrust policy not only encouraged large firms to invest in internal R&D, but also occasionally promoted technology diffusion. A leading example is the 1956 consent decree against the Bell System, one of the most significant antitrust rulings in U.S. history (Watzinger et al., 2017). The decree forced Bell to license all its existing patents royalty-free to all American firms. Thus, in 1956, 7,820 patents (or 1.3% of all unexpired U.S. patents) became freely available. Most of these patents covered technologies that had been developed by Bell Labs, the research subsidiary of the Bell System.
Compulsory licensing substantially increased follow-on innovation building on Bell patents. Using patent citations, Watzinger et al. (2017) estimate an average increase in follow-on innovation of 14 percent. This effect was highly heterogeneous. In the telecommunications sector, where Bell kept using exclusionary practices, there was no significant increase. However, outside of the telecommunications sector, follow-on innovation blossomed (a 21% increase). The increase in follow-on innovation was driven by young and small companies, and more than compensated for Bell's reduced incentives to innovate. In an in-depth case study, Watzinger et al. demonstrate that the decree accelerated the diffusion of transistor technology, one of the most important technologies of the twentieth century.
This view that the consent decree was decisive for U.S. post-World War II innovation, particularly by spurring the creation of whole industries, is shared by many observers. As Gordon Moore, the cofounder of Intel, notes: "[O]ne of the most important developments for the commercial semiconductor industry (…) was the antitrust suit filed against [the Bell System] in 1949 (…) which allowed the merchant semiconductor industry 'to really get started' in the United States (…) [T]here is a direct connection between the liberal licensing policies of Bell Labs and people such as Gordon Teal leaving Bell Labs to start Texas Instruments and William Shockley doing the same thing to start, with the support of Beckman Instruments, Shockley Semiconductor in Palo Alto. This (…) started the growth of Silicon Valley" (Wessner (2001, p. 86) as quoted in Watzinger et al. (2017)).
Scholars such as Peter Grindley and David Teece concur: "[AT&T's licensing policy shaped by antitrust policy] remains one of the most unheralded contributions to economic development possibly far exceeding the Marshall plan in terms of wealth generation it established abroad and in the United States" (Grindley and Teece (1997) as quoted in Watzinger et al. (2017)).
Large companies will always face a temptation to extend their power in other ways: acquiring competitors, using "patent thickets" to block competition, or exerting political pressure. In one way or another, the goal of antitrust policy is to steer big firms away from these options, and in this way to keep big firms focused on providing consumers with desired goods and services and on continued innovation.
Reinhardt tackles the question of explaining high US health care costs–that is, per capita US spending on health care is roughly double the average for other high-income countries. Here's a flavor of Reinhardt's take on four possible explanations.
Possible explanation #1: US income levels are higher, and health care spending tends to rise as incomes rise. Reinhardt writes:
The figure shows that GDP per capita does indeed drive health spending systematically. But income alone leaves much unexplained. Even after adjusting the health spending data for GDP per capita (roughly, “ability to pay”), U.S. spending levels are much higher (about $2,200 higher in 2015) than would be predicted by the graph …
Possible explanation #2: US demography leads to higher health care spending. Reinhardt writes:
The U.S. population is, on average, much younger than the populations of most countries in the OECD, yet we spend much more per capita on health care. In fact, although the United States has one of the youngest populations among developed nations, we have (as noted earlier) the world’s highest health spending per capita. Japan, in contrast, has the oldest population, but among the lowest health spending levels.
Possible explanation #3: Health care, or at least certain kinds of health care, cost more in the US. Reinhardt writes:
With the exception of a few high-tech procedures, Americans actually consume less health care service in real terms (visits with physicians, hospital admissions and hospital days per admission, medications and so on) than do Europeans. For better or for worse — better for the vendors of health care and worse for consumers — prices for virtually every health care product and service in the United States tend to be twice as high as those for comparable products or services in other countries.
Here are some examples of that pattern, using data from an International Federation of Health Plans report.
Possible explanation #4: High administrative costs in the US health insurance system. Here's Reinhardt:
According to a recent publication by America’s Health Insurance Plans, private health insurers on average take a haircut of about 17.8 percent off the insurance premiums paid by employers or individuals for “operating costs,” which means marketing and administration. Another 2.7 percent is profits. That haircut was as high as 45 percent pre-Obamacare for smaller insurers selling policies in the (non-group) market for individually purchased insurance. Under Obamacare the portion of the premium going to marketing, administration and profits was constrained to 20 percent for small insurers and to 15 percent for large insurers. … It’s worth noting that some analysts of administrative costs of Blue Cross Blue Shield plans and other private insurers report that the AHIP number is too high. They currently estimate the range to be between about 9 percent for the Blues and between 10 and 11 percent for other private insurers. …
Drugs that cost $17 to produce end up costing patients or purchasers of health insurance $100. Of a total $100 in consumer spending, health insurers pay pharmacy benefit managers $81, keeping $19 for themselves, of which $3 is profit. The rest goes for marketing and administration. …
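As a quick sanity check, the dollar figures in Reinhardt's drug-spending example can be tied together with a few lines of arithmetic. This sketch uses only the numbers in the quoted passage; nothing else is introduced.

```python
# Decomposing $100 of consumer spending on drugs, per the quoted passage.
production_cost = 17          # what the drug costs to produce ($)
consumer_spending = 100       # what patients or insurance purchasers pay ($)
paid_to_pbms = 81             # passed by insurers to pharmacy benefit managers ($)
kept_by_insurers = 19         # retained by insurers ($)
insurer_profit = 3            # profit portion of the insurers' share ($)
insurer_marketing_admin = kept_by_insurers - insurer_profit  # $16 for marketing/administration

# The two insurer flows should exhaust the $100 of consumer spending.
assert paid_to_pbms + kept_by_insurers == consumer_spending

# Markup from production cost to what the consumer ultimately pays.
markup = consumer_spending / production_cost
print(f"Markup over production cost: {markup:.1f}x")   # about 5.9x
```

The point of the exercise is simply that most of the gap between the $17 production cost and the $100 price never shows up as insurer profit; it is absorbed by intermediaries and administrative overhead.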
I can think of no legislation ever to emerge from Congress that addressed the magnitude of this administrative overhead. It is as if Congress just does not care what health spending actually buys. On the contrary, every health reform emerging from Congress vastly complicates the system further and brings forth new fleets of non-clinical consultants who make a good living teaching clinicians and hospitals how to cope with the new onslaught. All of their income becomes the providers’ expense and thus ends up in the patient’s bill.
And here's one more figure from Reinhardt, showing the growth in health care administrators vs. the growth in actual health care providers over time.
Of course, this post is only an appetizer. There's much more in the MIR article, and much, much more in the book itself.
There's a long-standing controversy over whether those who donate a kidney should be able to be paid for doing so. Whatever one's view of whether the seller of a kidney should receive a positive price, it's worth considering that under traditional rules, the donor of a kidney in effect receives a negative price. That is, the donor of a kidney faces a variety of costs–both explicit (travel, hotel) and implicit (time off from work, physical discomfort). Even many of those who feel that kidney donors should not be paid for the act of donating itself seem willing to consider the possibility that donors should be reimbursed for expenses.
They readily admit that there is a high degree of uncertainty around these kinds of estimates. Given that caveat, they argue:
We show that the total monetary value of the seven disincentives facing a typical living kidney donor is about $38,000. Removing all disincentives would increase kidney donations by roughly 12,500 per year, which would cut the adult waiting list for transplant kidneys in half in about 4 years. This would require an initial government outlay of only about $0.5 billion per year, but would ultimately result in net taxpayer savings of about $1.3 billion per year. The value to society of the government removing the disincentives would be about $14 billion per year, reflecting the great value of the additional donated kidneys to recipients and the savings from these recipients no longer needing expensive dialysis therapy.
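The quoted outlay figure follows almost directly from the first two numbers; here is a minimal back-of-envelope sketch using only the figures in the quoted passage.

```python
# Cross-checking the quoted estimates for removing disincentives to
# living kidney donation. Both inputs are figures quoted in the passage above.
disincentive_per_donor = 38_000      # total monetary value of the seven disincentives ($)
added_donations_per_year = 12_500    # projected increase in donations per year

# Compensating roughly that many donors at that level implies an annual outlay of:
outlay = disincentive_per_donor * added_donations_per_year
print(f"${outlay / 1e9:.3f} billion per year")   # $0.475 billion, i.e. the quoted "about $0.5 billion"
```

Note that the quoted $1.3 billion in net taxpayer savings and $14 billion in social value rest on further estimates (avoided dialysis costs, the value of transplants to recipients) that cannot be reconstructed from the multiplication above.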
But it's also true that while the new Trump administration policy would expand incentives for kidney donations by covering lost income and dependent care, it wouldn't include all the items on the list above, like compensation for the pain and discomfort of having a kidney removed.
This line between "compensation for expenses" and actual payment for donating a kidney seems murky to me. Is a kidney donor allowed to travel to the operation in first class, to stay in a fancy hotel, or to be compensated for eating at some nice restaurants? Or does "compensation" for expenses mean that only the least expensive transportation, hotels, and meals are allowed? What quality and cost will be allowed in terms of potential compensation for dependent care?
If we are going to compensate people for lost time at work, does this mean that kidney donors at higher-wage jobs get more compensation than those at lower-wage jobs? Does it mean that kidney donors not currently in the workforce get zero compensation for lost time at work? If there is one preset level of compensation for time lost at work, it will inevitably be too low for some jobs and too high for others.
Once you have accepted that the price of donating a kidney should not be negative, because prices for donating kidneys have incentive effects, you don't want to turn around and demand that the expenses of kidney donors be treated as cheaply as conceivably possible. But what if some kidney donors can get compensation for "medium-priced" or "high-priced" expenses, while other donors would rather receive lower-priced compensation and also get some cash that they could use for, say, paying rent for a couple of months or buying a used car? Telling potential kidney donors that they can get generous compensation for any money they spend out-of-pocket, but zero compensation for pain or health risks from donating, applies a constricted notion of "costs" and incentives.
And ultimately, while we are picking through what is an acceptable expense for kidney donors, it's worth a mention that none of the other parties in the health care system–doctors, nurses, anesthesiologists, those in the test laboratories, hospitals, and so on–are being told they need to make any free contributions to the kidney transplant process.
For previous posts with relevance to the intersection of economics, incentives, and kidney transplant issues, see:
One of those things that "everyone knows" is that continued technological progress is vital to the continued success of the US economy, not just in terms of GDP growth (although that matters) but also for major social issues like providing quality health care and education in a cost-effective manner, addressing environmental dangers including climate change, and in other ways. Another thing that "everyone knows" is that research and development spending is an important part of generating new technology. But total US spending on R&D as a share of GDP has been nearly flat for decades, and government spending on R&D as a share of GDP has declined over time.
Here's a figure on funding sources for US R&D from the Science and Engineering Indicators 2018. The top line shows the rise in R&D spending in the 1960s (much of it associated with the space program and the military), a fall in the 1970s, and then how R&D spending has bobbed around 2.5% of GDP since then. The dark blue line shows the rise in business-funded R&D, while the light blue line shows the fall in government funding for R&D.
One underlying issue is that business-funded R&D is more likely to be focused on, well, the reasonably short-term needs of the business, while government R&D can take a broader and longer-term perspective.
Of course, the relationship between R&D spending and broader technological progress is complicated. Translating research discoveries into goods and services isn't a simple or mechanical process. Other important elements include the economic and regulatory environment for entrepreneurs, the diffusion of new technologies across firms, and the quantity of scientists and researchers. For an overview of the broader issues, Nicholas Bloom, John Van Reenen, and Heidi Williams offer "A Toolkit of Policies to Promote Innovation" in the Summer 2018 issue of the Journal of Economic Perspectives. They explain the case for government support of innovation:
Knowledge spillovers are the central market failure on which economists have focused when justifying government intervention in innovation. If one firm creates something truly innovative, this knowledge may spill over to other firms that either copy or learn from the original research—without having to pay the full research and development costs. Ideas are promiscuous; even with a well-designed intellectual property system, the benefits of new ideas are difficult to monetize in full. There is a long academic literature documenting the existence of these positive spillovers from innovations. …
As a whole, this literature on spillovers has consistently estimated that social returns to R&D are much higher than private returns, which provides a justification for government-supported innovation policy. In the United States, for example, recent estimates in Lucking, Bloom, and Van Reenen (2018) used three decades of firm-level data and a production function–based approach to document evidence of substantial positive net knowledge spillovers. The authors estimate that social returns are about 60 percent, compared with private returns of around 15 percent, suggesting the case for a substantial increase in public research subsidies.
Along with pointing out some advantages of government-funded R&D, Bloom, Van Reenen, and Williams also point out that when it comes to tax subsidies for corporate R&D, the US lags well behind other countries. They write:
The OECD (2018) reports that 33 of the 42 countries it examined provide some material level of tax generosity toward research and development. The US federal R&D tax credit is in the bottom one-third of OECD nations in terms of generosity, reducing the cost of US R&D spending by about 5 percent. … In countries with the most generous provisions, such as France, Portugal, and Chile, the corresponding tax incentives reduce the cost of R&D by more than 30 percent. Do research and development tax credits actually work to raise R&D spending? The answer seems to be “yes.”
Here's their toolkit of pro-innovation policies, with their own estimates of effectiveness along various dimensions.
Treasury Secretary Steve Mnuchin has "determined that China is a Currency Manipulator" (with capital letters in the press release). The overall claim is that one major reason for China's large trade surpluses is that China is keeping its exchange rate too low. This low exchange rate makes China's exports cheaper to the rest of the world, while also making foreign products more expensive in China, thus creating China's trade surplus.
The claim is not particularly plausible. Indeed, a cynic might point out that if currency manipulation was the main trade problem all along, then Trump has been wasting time by playing around with tariffs since early 2018. For perspective on the exchange rate issue, let's start with the Chinese yuan/US dollar exchange rate over the last 30 years.
Up to about 1995, the exchange rate shown here is not an especially meaningful number, because during that time China had an "official" exchange rate set by the government and an "unofficial" exchange rate set in markets. The official rate had a much stronger yuan than the unofficial rate, so when the two rates were united in 1995, there is a steep jump upward in the exchange rate graph, as the yuan gets weaker (that is, it takes more yuan to buy a US dollar).
From about 1996-2005, the straight horizontal line on the graph is strong evidence that the People's Bank of China was keeping its exchange rate fixed. Starting in mid-2005, China stopped holding its exchange rate fixed, and the yuan became stronger, moving from about 8.2 yuan/dollar in early 2005 to 6.8 yuan/dollar by mid-2008. Since then, the yuan has shifted up and down, falling as low as about 6.1 yuan/dollar at times, but then often rising back up to about 6.8 yuan/dollar.
It's useful to compare the yuan exchange rate with China's balance of trade. Here's a figure based on World Bank data showing China's trade balance since 1990. Back in the 1990s, China's trade surplus was usually positive, but also typically less than 2% of GDP. When China joined the WTO in 2001, its exports took off and so did its trade surplus, hitting 10% of GDP in 2007. It would be highly implausible to attribute this jump in China's trade surplus to currency manipulation, because the first figure shows that China's exchange rate was unchanged during this period. It is also highly implausible to attribute this rise to more Chinese protectionism, because China's giant trade surpluses resulted from higher exports, not lower imports.
But then China's extraordinary trade surplus soon went away. By 2011 China's trade surplus was under 2% of GDP; by 2018, it was under 1% of GDP. Thus the Trump administration complaint that China is using an extraordinarily weak exchange rate to power very large Chinese trade surpluses has not been plausible since 2011, and is even less plausible since 2018.
At the time of the October 2018 Treasury report, the exchange rate was about 6.9 yuan/dollar. The report did not find that China was acting like a currency manipulator at that time. As it points out, the IMF agreed with this view, as did other outside economists. For example, the Treasury wrote:
"Over the last decade, the RMB has generally appreciated on a real, trade-weighted basis. This appreciation has led the IMF to shift its assessment of the RMB in recent years and conclude that the RMB is broadly in line with economic fundamentals."
That report also offers a macroeconomically odd complaint. It acknowledges that China's overall trade surplus in 2018 was near-zero, but then complains that China's trade position is "unevenly spread"--that is, China has a trade surplus with some countries like the US, but a nearly offsetting trade deficit with other countries. Treasury wrote:
Since then, China’s current account surplus has declined substantially, falling to 0.5 percent of GDP in the four quarters through June 2018. However, it remains unevenly spread among China’s trading partners. In particular, China retains a very large and persistent trade surplus with the United States, at $390 billion over the four quarters through June 2018.
So China was on warning that even if its overall trade balance was near-zero, the US was focused only on the bilateral trade balance. The next Treasury report arrives in May 2019, when the exchange rate was still 6.9 yuan/dollar, as it had been at the time of the October 2018 report. Again, Treasury does not find that China is a currency manipulator. However, in the exchange rate graph above you can see a little dip in the yuan/dollar exchange rate in March 2018, as the yuan becomes a bit weaker for a short time. Treasury warns:
Notwithstanding that China does not trigger all three criteria under the 2015 legislation, Treasury will continue its enhanced bilateral engagement with China regarding exchange rate issues, given that the RMB has fallen against the dollar by 8 percent over the last year in the context of an extremely large and widening bilateral trade surplus. Treasury continues to urge China to take the necessary steps to avoid a persistently weak currency.
Again, the focus is on China's bilateral trade surplus with the US. There's another interesting hint here, which is that Treasury is urging "China to take the necessary steps to avoid a persistently weak currency." This phrasing is interesting, because it isn't a complaint that China is intervening to make its currency too weak; instead, it's a complaint that China should be intervening more to prevent its currency from being weak. It's a complaint that China is not being enough of a currency manipulator in the way the Trump administration would prefer.
Before the announcement from Mnuchin, the exchange rate in August 2019 was still about 6.9 yuan/dollar--that is, what it had been in May 2019 and October 2018. But now, this exchange rate was evidence that China was a "Currency Manipulator." In fact, the recent Treasury press release links to the May 2019 report, which did not find that China was manipulating its currency, in support of the finding that China was manipulating its currency.
The May 2019 report sets out three standards that Treasury will supposedly be looking at when thinking about currency manipulation. First is whether the country has a bilateral trade surplus with the US of more than $20 billion, which China does. Second is whether the country has an overall trade surplus of more than 2% of GDP, which China doesn't. IMF statistics find that China's trade surplus in 2018 was 0.4% of GDP; moreover, the IMF finds that China is headed for a trade deficit in the next few years. Third is whether the country is intervening regularly in foreign exchange markets. As the Treasury report points out, the People's Bank of China's foreign exchange operations are shrouded in secrecy, but the evidence suggests that China's foreign exchange reserves haven't moved much since early 2018, or are perhaps a bit down overall, which is not consistent with the theory that the People's Bank of China has been buying lots of US dollars to keep the yuan exchange rate weak.
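As a rough sketch, Treasury's three-part test can be written down as a simple checklist. The thresholds follow the standards described above; the inputs for China are the approximate 2018 figures cited in this post, and the function name and structure are purely illustrative.

```python
# A sketch of Treasury's three criteria for the currency-manipulator label,
# using the thresholds and approximate 2018 figures for China cited above.

def manipulator_criteria(bilateral_surplus_bn, current_account_pct_gdp,
                         persistent_fx_intervention):
    """Return which of the three Treasury criteria are met."""
    return {
        "bilateral surplus > $20 billion": bilateral_surplus_bn > 20,
        "current account surplus > 2% of GDP": current_account_pct_gdp > 2.0,
        "persistent one-sided FX intervention": persistent_fx_intervention,
    }

# China in 2018: a roughly $390 billion bilateral surplus with the US, a
# current account surplus of about 0.4% of GDP, and no clear evidence of
# sustained dollar-buying to weaken the yuan.
china_2018 = manipulator_criteria(390, 0.4, False)
print(china_2018)  # only the bilateral-surplus criterion is met
```

On these numbers, only one of the three criteria is triggered, which is consistent with the May 2019 report's decision not to apply the label.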
Understanding the actual drivers of trade balances helps explain why the US could raise tariffs for a year and watch its trade deficit get larger, rather than smaller. China's yuan/dollar exchange rate is at a level where its overall trade balance is near-zero and, according to the IMF, headed for a modest trade deficit in the next few years. Thus, the IMF is unlikely to back the Trump administration argument that the People's Bank of China is manipulating its exchange rate. But if the Trump administration bludgeons China into having a substantially stronger exchange rate, what happens next?
A strong exchange rate for one currency necessarily means a weaker exchange rate for other currencies: for example, if it takes fewer yuan to buy a dollar, it necessarily takes more dollars to buy a yuan. By arguing for a stronger yuan exchange rate, the Trump administration is apparently trying to devalue its way to prosperity by arguing for a weaker US dollar exchange rate. This makes it easier to sell US exports abroad, but the lower buying power of the US dollar also means that, in effect, everything imported by consumers and firms will cost more. Economies with floating exchange rates, like the US, are built to absorb short- and even medium-term fluctuations in exchange rates without too much stress. But in effect, the current Treasury policy is to advocate that China take steps to produce a permanently weaker US dollar--and thus benefit exporters at the cost of higher prices for importers--until the bilateral US trade deficit with China is eliminated.
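The reciprocal point in the first sentence is simple arithmetic: the dollar/yuan rate is just one over the yuan/dollar rate, so any move that strengthens the yuan automatically weakens the dollar. A quick illustration, using the rough 6.9 yuan/dollar rate discussed above (the 6.2 figure below is a hypothetical stronger-yuan scenario, not a forecast):

```python
# A stronger yuan is, by definition, a weaker dollar: the two exchange
# rates are reciprocals of each other.
yuan_per_dollar = 6.9
dollars_per_yuan = 1 / yuan_per_dollar
print(round(dollars_per_yuan, 4))  # 0.1449

# If the yuan strengthens to a hypothetical 6.2 per dollar, each yuan
# costs more dollars, so yuan-priced goods get pricier for US buyers.
stronger_yuan = 1 / 6.2
print(round(stronger_yuan, 4))  # 0.1613
```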
A group of recent research studies has argued that "markups" are on the rise. As one of several prominent examples, a study by Jan De Loecker, Jan Eeckhout, and Gabriel Unger, called "The Rise of Market Power and the Macroeconomic Implications," presents calculations suggesting that the average markup for a US firm rose from 1.21 in 1980 to 1.61 in 2016 (here's a working paper version from Eeckhout's website dated November 22, 2018). The Summer 2019 issue of the Journal of Economic Perspectives discusses the strengths and weaknesses of this evidence in a three-paper symposium:
Here are some of my own takeaways from these articles:
1) For economists, "markups" are not the same as profits. Profits happen when price is above the average cost of production. Markups are defined as a situation where the price is above the marginal cost of production. This definition is also why markups are so hard to measure. It's pretty straightforward to measure average cost: just take the total cost of production and divide by the quantity produced. But measuring marginal cost of production is hard, because it requires separating a firm's expenses into "fixed" and "variable" costs, which is a useful conceptual division for economists but not how firms divide up their costs for accounting purposes.
2) There are a number of economic situations where there can be persistent markups while profits remain at normal levels. As one example, every intro textbook discusses "monopolistic competition," a situation in which firms in a market sell similar but differentiated products. Examples include clothing stores in the same shopping mall with different styles, gas stations in different locations, products sold with different money-back guarantees, and all the everyday products like dishwasher soap. The basic textbook explanation is that in a setting of monopolistic competition, firms will be able to set prices above marginal cost, charging more to consumers who desire the specific differentiated characteristics of that product. However, part of the definition of monopolistic competition is that other firms can easily enter the market and expand production. The result is a situation of positive markups (price higher than marginal cost) but only average profits.
3) Another example of positive markups arises in companies with high fixed costs and low marginal costs: as a simple example, think of a video game company where the cost of creating the game is high, but once the game is created, the marginal cost of providing the game to an additional user is near zero. It's quite possible for this kind of company to have a positive markup (price over marginal cost), but also to suffer losses (because price isn't far enough above marginal cost to cover the firm's fixed costs).
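To make the distinction concrete, here is a stylized calculation with invented numbers: a firm can have a large markup of price over marginal cost and still lose money, because its fixed costs push average cost above the price.

```python
# Stylized example: positive markup, negative profit.
# All numbers are hypothetical, chosen only for illustration.
price = 30.0              # price per game copy
marginal_cost = 2.0       # cost of serving one more user
fixed_cost = 5_000_000.0  # cost of developing the game
units_sold = 150_000

markup = price / marginal_cost  # ratio of price to marginal cost
average_cost = (fixed_cost + marginal_cost * units_sold) / units_sold
profit = (price - average_cost) * units_sold

print(markup)               # 15.0 -- a very large markup
print(round(average_cost, 2))  # 35.33 -- average cost exceeds the price
print(round(profit))        # -800000 -- yet the firm loses money
```

In this example the markup is 15 (price is fifteen times marginal cost), yet average cost is above price, so the firm runs a loss: exactly the configuration described in the paragraph above.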
4) Many big tech companies--Facebook, Amazon, Google, Apple, Microsoft, and others--have a number of products that share this property of high fixed costs and low marginal costs. Thus, these companies are likely to have high markups. Moreover, given their technology, size, and business practices, it may be difficult for entrants to challenge these companies. As a result, companies like these may be able to sustain a combination of high markups and high profits over time. For example, in the study mentioned above by De Loecker, Eeckhout, and Unger, the overall rise in markups comes not from a rise in markups for the median company, but from a very sharp rise in markups for a much smaller number of companies.
5) An emergence of high-markup, high-profit firms may in some cases be a positive step for productivity and wages. There is an emerging literature on what are sometimes called "superstar" firms (here's a recent working paper version of "The Fall of the Labor Share and the Rise of Superstar Firms," by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen). Imagine a company that makes large-scale investments in information technology, both for the logistics of running its operations and for quality control, and also in widespread use of far-flung supply chains. Based on these high fixed costs, such a company may be able to expand in the market, taking market share from smaller firms. But economists commonly hold that when a company succeeds by offering the products that consumers desire at attractive prices, this is a good thing. Thus, antitrust authorities need to think carefully about whether companies with high markups are gaining high profits through pro-consumer fixed investments or by anti-consumer restraints on competitors.
6) It's hard to measure whether markups are rising for the economy as a whole. Detailed studies of a particular firm or industry look carefully at the production process for that firm or industry and estimate fixed and marginal costs. But how can such data be collected for most companies in an economy? Some approaches use firm-level accounting data, or data on capital investment and depreciation across industries, or sector-level data on outputs and inputs. Using this data to estimate production functions and markups across firms involves a number of modeling choices. It turns out that assumptions like whether an industry is producing with constant returns to scale or increasing returns to scale matter a lot. There are a bunch of genuinely hard and disputed questions about how to proceed here.
7) As an example, one approach is to rely on accounting data. The prominent study by De Loecker, Eeckhout, and Unger uses accounting data (from Compustat) which breaks down costs of production into two main categories: "Cost of Goods Sold" and "Selling, General, and Administrative." If one thinks of Cost of Goods Sold as a proxy that captures variable costs, one can then use this data along with a measure of the fixed capital stock of firms to do more detailed calculations, which with some assumptions (like assuming that all profits are paid to owners of capital) will imply estimates for markups. But there is a bundle of underlying assumptions here. For example, Cost of Goods Sold is defined differently by accountants across industries, and it seems to be a falling share of total costs over time. Sorting out the underlying economic meaning of the accounting data, along with what assumptions are necessary and what implications those assumptions have, is an ongoing area of research.
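In stylized form, this production-function approach boils down to a ratio: the markup is an estimated output elasticity of the variable input multiplied by the ratio of sales to variable-input spending (with Cost of Goods Sold as the proxy). The numbers below are invented for illustration; in the actual study the elasticity is estimated econometrically, which is where many of the contested modeling choices enter.

```python
# Stylized version of the production-function markup estimate:
# markup = (output elasticity of the variable input) * (sales / variable cost).
# Numbers are hypothetical; in practice the elasticity must be estimated
# from a production function, which is where the disputed assumptions live.
def markup_estimate(sales, cogs, output_elasticity):
    return output_elasticity * (sales / cogs)

# A firm with $100m in sales, $70m in Cost of Goods Sold, and an assumed
# output elasticity of 0.85 for its variable input:
print(round(markup_estimate(100.0, 70.0, 0.85), 3))  # 1.214

# The same accounting data with a different assumed elasticity yields a
# very different markup -- one reason estimates diverge across studies.
print(round(markup_estimate(100.0, 70.0, 1.0), 3))   # 1.429
```

Notice that the elasticity assumption alone moves the estimated markup from roughly 1.21 to 1.43 on identical accounting data, which illustrates why the modeling choices matter so much.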
8) Other attempts to measure markups often rely on measuring a firm's stock of capital, which could be thought of as a fixed cost, and then looking at the profits received by owners of capital. But many firms have both "tangible" capital like machines, which is relatively well-measured, and "intangible" investments, which include situations where a firm has made past investments in knowledge (like research and development) or organizational capabilities that are now paying off. Of course, measuring intangible investment (and thinking about how fast it might depreciate) is very hard, but there's some evidence that intangible investments have become more important over time. And if firms with high markups and profits are largely benefiting because they made large investments in the past--in the form of intangible capital--then this should probably be viewed as a positive for the economy.
9) Estimates of markups over time often reveal patterns that raise additional questions. For example, the study by De Loecker, Eeckhout, and Unger finds that most of the sharp rise in markups happened among a smaller segment of firms in the 1980s and 1990s, and hasn't changed much since. (How does that timeline fit with one's internal narrative about what has caused higher markups, and in particular a belief that the causes are relatively recent?) Other studies that project back further in time suggest that markups were especially large in the 1960s. (So perhaps we need multiple explanations for what affected markups then and now?)
10) If very large rises in markups have occurred, then they should have implications for other areas of the economy. For example, Basu points out in his JEP paper that it's conceptually possible to draw a connection from higher corporate markups to labor receiving a lower share of income (a fact discussed here and here). But high estimates of the rise in markups suggest that the labor share of income should have fallen by much more than actually observed. Or as Syverson points out in his JEP paper, a rise in markups implies either that prices have risen quickly or that (marginal) costs have plunged. Low inflation rates mean that prices have not risen quickly, and evidence that costs have plunged economy-wide is scanty. Thus, both authors express the view that while it is plausible that markups have risen, the size of such a rise must be relatively modest to fit with other observed economic changes.
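Basu's point can be put in back-of-the-envelope terms. Under one standard simplification, labor's share of revenue is roughly labor's output elasticity divided by the markup. Holding the elasticity fixed (the 0.75 below is purely illustrative), the De Loecker, Eeckhout, and Unger rise in the average markup from 1.21 to 1.61 would imply a drop in the labor share of about 15 percentage points, far larger than the decline of a few points actually observed:

```python
# Back-of-the-envelope check: labor share ~= labor's output elasticity / markup.
# The 0.75 elasticity is illustrative; the markups of 1.21 (1980) and
# 1.61 (2016) are the De Loecker, Eeckhout, and Unger averages cited above.
labor_elasticity = 0.75

share_1980 = labor_elasticity / 1.21
share_2016 = labor_elasticity / 1.61

print(round(share_1980, 3))               # 0.62
print(round(share_2016, 3))               # 0.466
print(round(share_1980 - share_2016, 3))  # 0.154, about a 15-point drop
```

A fall that large in labor's share of income does not show up in the data, which is why Basu concludes that the true rise in markups, if any, must be much more modest.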
There's much more in the articles themselves about methods of measuring markups, which methods produce estimates that are higher or lower, and possible connections (or not) to industry concentration. Whether markups are rising in the US economy, and if so, by how much, is a live and active area of research. If someone tells you that a very large rise in markups is a settled fact, they are showing a lack of awareness of the actual evolving state of play in this literature.
For the later decades of the 20th century, the most common source of data for economic studies was government surveys and statistical agencies. There are household surveys like the Current Population Survey, the Survey of Income and Program Participation, the Consumer Expenditure Survey, the National Health Interview Survey, and the General Social Survey. There were government workers collecting data on prices at stores as input to measures of inflation like the Consumer Price Index. There were business surveys, like the Economic Census, the Retail Trade Survey, the Annual Survey of Manufactures, the Residential Finance Survey, and others. Branches of government like the Department of Energy, the Department of Agriculture, and the Health Care Financing Administration collect data on specific industries. There's also a Census of Governments to get data on state and local governments. The Bureau of Economic Analysis would pull together data from these sources and others to estimate GDP.
But over time, a split developed. On one side was this body of data created by government for use by business and policy-makers, as well as researchers. But on the other side was a vast amount of data collected in the process of administering government programs. Often this administrative data was not formatted or organized in a way that researchers could use it. Moreover, the administrative data was often siloed in one government agency; for example, student grades and their academic progress were traditionally kept inside school districts or in some cases state departments of education, and not easily connected to other data that might explain patterns of school grades, either within a school year or over time.
Research using administrative data has much in common with history and archeology, insofar as it observes the tracks that individuals leave as they move through society and draws lessons from these glimpses into their lives. …
Given their origin in a particular institutional context, administrative records are typically fragmented, and these data are often not linked to other data that would be useful for research and policy. Hospitals, for example, collect detailed information about patients’ health, schools regularly collect information about student development, and employers often keep records not only about the performance of employees, but also about applicants who were ultimately not offered positions. Although various combinations of these data can provide important insights, they are typically compartmentalized. Likewise, given their origin, administrative records often lack certain kinds of information that are less likely to be collected in these records. For example, information about attitudes, affinities, and motives are not often collected in administrative records. Combining administrative data with records from other sources—either by linking administrative records across sources or by making administrative records available to be linked to data collected via other means—is thus central to building administrative data infrastructure. …
By virtue of how they come into existence, administrative data are typically focused on one facet of an individual’s life, and data and insights are often siloed. … Administrative records from birth, education, criminal justice, labor market, and mortality often capture different points in an individual’s life; combining data across these stages allows us to understand how inequalities unfold over the arc of an individual’s life. …
One clear example is in education. Despite their focus on preparing students who are “college- and career-ready,” schools have historically struggled to obtain data on the practices that will prepare their students to be successful because widespread links between students in K–12 educational systems and higher education outcomes have become available only recently, and links between K–12 data systems and the labor market remain relatively rare. These data linkages are important to understand the efficacy of school-based vocational programs, dropout recovery interventions, college readiness programs, and advanced placement course policies. But schools, like other organizations, typically lack the capacity and expertise to build this infrastructure and analyze the resulting data.
At a time when we are all sensitized to how big tech companies are gathering, combining, and marketing our personal data, the rise of administrative data clearly has a concerning side. Consider the footprints that many of us have left in administrative data over the years, about our education, physical and mental health, finances, how much we were paid, tax filings, car and real estate ownership, Social Security contributions, benefits from government programs, and many more–right down to the books checked out of the public library.
The obvious challenge is to find ways to use administrative data where protection of personal privacy is built in from the start. For example, when the unemployment rate is estimated from the Current Population Survey, no one is concerned that the employment status of specific identified households will become public as a result. In this issue, a paper by David B. Grusky, Michael Hout, Timothy M. Smeeding, and C. Matthew Snipp describes "The American Opportunity Study: A New Infrastructure for Monitoring Outcomes, Evaluating Policy, and Advancing Basic Science." They write:
The American Opportunity Study (AOS) … is an ongoing effort to link the censuses of 1960 through 2010 and the American Community Surveys (ACS) and thereby convert cross-sectional decennial census data into a bona fide panel that will represent the full U.S. population over the last seventy years. Because this panel will be continuously refreshed as additional census and ACS data become available, it can serve as a population-level scaffolding on which other administrative data (such as tax records, earnings reports, program data) are then hung. … In other countries that have linked data, such as Wales and New Zealand, a well-developed infrastructure allows access to carefully vetted scholars, with the result that high-quality evidence is more frequently brought to bear on policy decisions.
I can ramble on a bit about the merits of administrative data for research. It covers everyone, and thus allows detailed analysis of various subgroups, tracking people over time, and even looking across generations. It describes what government programs have actually done, which can then be compared and combined with surveys of households or businesses. But rather than talking in generalities, let me just mention some of the studies from this double issue. Notice in particular how the studies often use administrative data, sometimes from separate government agencies, in a way that addresses a worthwhile question.
I use standardized test scores from roughly forty-five million students to describe the temporal structure of educational opportunity in more than eleven thousand school districts in the United States. Variation among school districts is considerable in both average third-grade scores and test score growth rates. The two measures are uncorrelated, indicating that the characteristics of communities that provide high levels of early childhood educational opportunity are not the same as those that provide high opportunities for growth from third to eighth grade.
This research uses a probabilistic matching strategy to link foreclosure records with birth certificate records from 2006 to 2010 in California to identify birth parents who experienced a foreclosure. … [We] find that infants in gestation during or after the foreclosure had a lower birth weight for gestational age than those born earlier, suggesting that the foreclosure crisis was a plausible contributor to disparities in initial health endowments.
Using administrative data from the New York City Department of Education and the New York City Police Department, we find that exposure to violence in the residential neighborhood and an unsafe climate at school lead to substantial test score losses in English language arts (ELA).
Does network recruitment contribute to the glass ceiling? We use administrative data from two companies to answer the question. In the presence of gender homophily, recruitment through employee referrals can disadvantage women when an old boys’ network is in place. We calculate the segregating effects of network recruitment across multiple job levels in the two firms. If network recruitment is a factor, the segregating impact should disadvantage women more at higher levels. We find this pattern, but also find that network recruitment is a desegregating force overall. It promotes women’s representation strongly at all levels, but less so at higher levels.
One final thought: Using administrative data often requires academic researchers to become entrepreneurial about seeking out such data, working with the government agencies or private firms that hold the original data, finding ways to offer cast-iron reassurances about personal privacy, and only then being able to actually work with the data and see if something interesting emerges. For modern economists, this process is quite different from the old days of digging through data collected and made public in a way already prepared for their use by government agencies.
I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided--to my delight--that it would be freely available online, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Summer 2019 issue, which in the Taylor household is known as issue #129. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.
_______________
Symposium on Markups
"Are Price-Cost Markups Rising in the United States? A Discussion of the Evidence," by Susanto Basu A number of recent papers have argued that US firms exert increasing market power, as measured by their markups of price over marginal cost. I review three of the main approaches to estimating economy-wide markups and show that all are based on the hypothesis of firm cost minimization. Yet different assumptions and methods of implementation lead to quite different conclusions regarding the levels and trends of markups. I survey the literature critically and argue that some of the startling findings of steeply rising markups are difficult to reconcile with other evidence and with aggregate data. Existing methods cannot determine whether markups have been stable or whether they have risen modestly over the past several decades. Even relatively small increases in markups are consistent with significant changes in aggregate outcomes, such as the observed decline in labor's share of national income. Full-Text Access | Supplementary Materials
\”Macroeconomics and Market Power: Context, Implications, and Open Questions,\” by Chad Syverson This article assesses several aspects of recent macroeconomic market power research. These include the ways market power is defined and measured; the use of accounting data to estimate markups; the quantitative implications of theoretical connections among markups, prices, costs, scale elasticities, and profits; and conflicting evidence on whether greater market power has led to lower investment rates and a lower labor share of income. Throughout this discussion, I characterize the congruencies and incongruencies between macro evidence and micro views of market power and, when they do not perfectly overlap, explain the open questions that need to be answered to make the connection complete. Full-Text Access | Supplementary Materials
"Do Increasing Markups Matter? Lessons from Empirical Industrial Organization," by Steven Berry, Martin Gaynor and Fiona Scott Morton This article considers the recent literature on firm markups in light of both new and classic work in the field of industrial organization. We detail the shortcomings of papers that rely on discredited approaches from the "structure-conduct-performance" literature. In contrast, papers based on production function estimation have made useful progress in measuring broad trends in markups. However, industries are so heterogeneous that careful industry-specific studies are also required, and sorely needed. Examples of such studies illustrate differing explanations for rising markups, including endogenous increases in fixed costs associated with lower marginal costs. In some industries there is evidence of price increases driven by mergers. To fully understand markups, we must eventually recover the key economic primitives of demand, marginal cost, and fixed and sunk costs. We end by discussing the various aspects of antitrust enforcement that may be of increasing importance regardless of the cause of increased markups. Full-Text Access | Supplementary Materials
Symposium on Issues in Antitrust
"Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets," by Carl Shapiro Accumulating evidence points to the need for more vigorous antitrust enforcement in the United States in three areas. First, stricter merger control is warranted in an economy where large, highly efficient and profitable "superstar" firms account for an increasing share of economic activity. Evidence from merger retrospectives further supports the conclusion that stricter merger control is needed. Second, greater vigilance is needed to prevent dominant firms, including the tech titans, from engaging in exclusionary conduct. The systematic shrinking of the scope of the Sherman Act by the Supreme Court over the past 40 years may make this difficult. Third, greater antitrust scrutiny should be given to the monopsony power of employers in labor markets. Full-Text Access | Supplementary Materials
"The Problem of Bigness: From Standard Oil to Google," by Naomi R. Lamoreaux This article sets recent expressions of alarm about the monopoly power of technology giants such as Google and Amazon in the long history of Americans' response to big business. I argue that we cannot understand that history unless we realize that Americans have always been concerned about the political and economic dangers of bigness, not just the threat of high prices. The problem policymakers faced after the rise of Standard Oil was how to protect society against those dangers without punishing firms that grew large because they were innovative. The antitrust regime put in place in the early twentieth century managed this balancing act by focusing on large firms' conduct toward competitors and banning practices that were anticompetitive or exclusionary. Maintaining this balance was difficult, however, and it gave way over time—first to a preoccupation with market power during the post–World War II period, and then to a fixation on consumer welfare in the late twentieth century. Refocusing policy on large firms' conduct would do much to address current fears about bigness without penalizing firms whose market power comes from innovation. Full-Text Access | Supplementary Materials
Articles
"How Market Design Emerged from Game Theory: A Mutual Interview," by Alvin E. Roth and Robert B. Wilson We interview each other about how game theory and mechanism design evolved into practical market design. When we learned game theory, games were modeled either in terms of the strategies available to the players ("noncooperative games") or the outcomes attainable by coalitions ("cooperative games"), and these were viewed as models for different kinds of games. The model itself was viewed as a mathematical object that could be examined in its entirety. Market design, however, has come to view these models as complementary approaches for examining different ways marketplaces operate within their economic environment. Because that environment can be complex, there will be unobservable aspects of the game. Mathematical models themselves play a less heroic, stand-alone role in market design than in the theoretical mechanism design literature. Other kinds of investigation, communication, and persuasion are important in crafting a workable design and helping it to be adopted, implemented, maintained, and adapted. Full-Text Access | Supplementary Materials
"A Bridge from Monty Hall to the Hot Hand: The Principle of Restricted Choice," by Joshua B. Miller and Adam Sanjurjo We show how classic conditional probability puzzles, such as the Monty Hall problem, are intimately related to the recently discovered hot hand selection bias. We explain the connection by way of the principle of restricted choice, an intuitive inferential rule from the card game bridge, which we show is naturally quantified as the updating factor in the odds form of Bayes's rule. We illustrate how, just as the experimental subject fails to use available information to update correctly when choosing a door in the Monty Hall problem, researchers may neglect analogous information when designing experiments, analyzing data, and interpreting results. Full-Text Access | Supplementary Materials
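The restricted-choice logic the Miller and Sanjurjo abstract describes is easy to check by simulation. The sketch below (my own illustration, not from the article; all function names are mine) plays the Monty Hall game many times: when the contestant's initial pick hides a goat, the host's choice of which door to open is restricted to a single door, and that restriction is exactly what makes switching win about two-thirds of the time.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the contestant wins the car."""
    car = random.randrange(3)   # car is hidden behind a random door
    pick = 0                    # contestant always starts by picking door 0
    # Host opens a door that is neither the pick nor the car. If the pick
    # hides the car, the host chooses freely between two goat doors;
    # otherwise his choice is "restricted" to the one remaining goat door.
    openable = [d for d in range(3) if d != pick and d != car]
    opened = random.choice(openable)
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return pick == car

random.seed(0)
n = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(n)) / n
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(n)) / n
print(f"stay: {stay_wins:.3f}, switch: {switch_wins:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```

In the odds form of Bayes's rule, the host opens a given goat door with probability 1 when his choice is restricted but only 1/2 when it is free, so observing the opened door multiplies the odds that the car is behind the other unopened door by an updating factor of 2.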
"A Toolkit of Policies to Promote Innovation," by Nicholas Bloom, John Van Reenen and Heidi Williams Economic theory suggests that market economies are likely to underprovide innovation because of the public good nature of knowledge. Empirical evidence from the United States and other advanced economies supports this idea. We summarize the pros and cons of different policy instruments for promoting innovation and provide a basic "toolkit" describing which policies are most effective according to our reading of the evidence. In the short run, R&D tax credits and direct public funding seem the most productive, but in the longer run, increasing the supply of human capital (for example, relaxing immigration rules or expanding university STEM admissions) is likely more effective. Full-Text Access | Supplementary Materials
"How Prevalent Is Downward Rigidity in Nominal Wages? International Evidence from Payroll Records and Pay Slips," by Michael W. L. Elsby and Gary Solon For more than 80 years, many macroeconomic analyses have been premised on the assumption that workers' nominal wage rates cannot be cut. Contrary evidence from household surveys reasonably has been discounted on the grounds that the measurement of frequent wage cuts might be an artifact of reporting error. This article summarizes a more recent wave of studies based on more accurate wage data from payroll records and pay slips. By and large, these studies indicate that, except in extreme circumstances (when nominal wage cuts are either legally prohibited or rendered beside the point by very high inflation), nominal wage cuts from one year to the next appear quite common, typically affecting 15–25 percent of job stayers in periods of low inflation. Full-Text Access | Supplementary Materials
"Should We Tax Sugar-Sweetened Beverages? An Overview of Theory and Evidence," by Hunt Allcott, Benjamin B. Lockwood and Dmitry Taubinsky Taxes on sugar-sweetened beverages are growing in popularity and have generated an active public debate. Are they a good idea? If so, how high should they be? Are such taxes regressive? People in the United States and some other countries consume remarkable quantities of sugar-sweetened beverages, and the evidence suggests that this generates significant health costs. Building on recent work, we review the basic economic principles that determine the socially optimal sugar-sweetened beverage tax. The optimal tax depends on (1) externalities, or uninternalized health system costs from diseases caused by sugary drink consumption; (2) internalities, or costs consumers impose on themselves by consuming too many sugary drinks due to poor nutrition knowledge and/or lack of self-control; and (3) regressivity, or how much the financial burden and the internality benefits from the tax fall on the poor. We summarize the empirical evidence about the key parameters that determine how large the tax should be. Our calculations suggest that sugar-sweetened beverage taxes are welfare enhancing and indeed that the optimal sugar-sweetened beverage tax rate may be higher than the 1 cent per ounce rate most commonly used in US cities. We end with seven concrete suggestions for policymakers considering a sugar-sweetened beverage tax. Full-Text Access | Supplementary Materials
Features
"Retrospectives: Lord Keynes and Mr. Say: A Proximity of Ideas," by Alain Béraud and Guy Numa Since the publication of Keynes's General Theory of Employment, Interest and Money, generations of economists have been led to believe that Say was Keynes's ultimate nemesis. By means of textual and contextual analysis, we show that Keynes and Say held similar views on several key issues, such as the possibility of aggregate-demand deficiency, the role of money in the economy, and government intervention. Our conclusion is that there are enough similarities to call into question the idea that Keynes's views were antithetical to Say's. The irony is that Keynes was not aware of these similarities. Our study sheds new light on the interpretation of Keynes's work and on his criticism of classical political economy. Moreover, it suggests that some policy implications of demand-side and supply-side frameworks overlap. Finally, the study underlines the importance of a thorough analysis of the primary sources to fully grasp the substance of Say's message. Full-Text Access | Supplementary Materials
"Some Journal of Economic Perspectives Articles Recommended for Classroom Use," by Timothy Taylor In 2018, the editors of the Journal of Economic Perspectives invited faculty to send us examples of JEP articles that they had found useful for teaching. We received 250 responses. On the JEP website, we have created a landing page (https://www.aeaweb.org/journals/jep/classroom) that organizes the recommended articles into 33 categories. If you click on any of the categories at that link, you will see a list of JEP papers that were recommended by faculty members for classroom use for that category, presented in reverse date order. Each paper is listed with a hyperlink to its article page on the JEP website. In this article, I offer some thoughts about how this exercise was carried out, along with its strengths and weaknesses. Although we make no pretense of presenting a complete syllabus for any specific course, we offer the milder hope that these recommendations from peers might suggest some additional readings for your students. Full-Text Access | Supplementary Materials
"Recommendations for Further Reading," by Timothy Taylor Full-Text Access | Supplementary Materials