One of those things that "everyone knows" is that continued technological progress is vital to the continued success of the US economy, not just in terms of GDP growth (although that matters) but also for major social issues like providing quality health care and education in a cost-effective manner, addressing environmental dangers including climate change, and in other ways. Another thing that "everyone knows" is that research and development spending is an important part of generating new technology. But total US spending on R&D as a share of GDP has been nearly flat for decades, and government spending on R&D as a share of GDP has declined over time.
Here's a figure on funding sources for US R&D from the Science and Engineering Indicators 2018. The top line shows the rise in R&D spending in the 1960s (much of it associated with the space program and the military), a fall in the 1970s, and then how R&D spending has bobbed around 2.5% of GDP since then. The dark blue line shows the rise in business-funded R&D, while the light blue line shows the fall in government funding for R&D.
One underlying issue is that business-funded R&D is more likely to be focused on, well, the reasonably short-term needs of the business, while government R&D can take a broader and longer-term perspective.
Of course, the relationship between R&D spending and broader technological progress is complicated. Translating research discoveries into goods and services isn't a simple or mechanical process. Other important elements include the economic and regulatory environment for entrepreneurs, the diffusion of new technologies across firms, and the quantity of scientists and researchers. For an overview of the broader issues, Nicholas Bloom, John Van Reenen, and Heidi Williams offer "A Toolkit of Policies to Promote Innovation" in the Summer 2018 issue of the Journal of Economic Perspectives. They explain the case for government support of innovation:
Knowledge spillovers are the central market failure on which economists have focused when justifying government intervention in innovation. If one firm creates something truly innovative, this knowledge may spill over to other firms that either copy or learn from the original research—without having to pay the full research and development costs. Ideas are promiscuous; even with a well-designed intellectual property system, the benefits of new ideas are difficult to monetize in full. There is a long academic literature documenting the existence of these positive spillovers from innovations. …
As a whole, this literature on spillovers has consistently estimated that social returns to R&D are much higher than private returns, which provides a justification for government-supported innovation policy. In the United States, for example, recent estimates in Lucking, Bloom, and Van Reenen (2018) used three decades of firm-level data and a production function–based approach to document evidence of substantial positive net knowledge spillovers. The authors estimate that social returns are about 60 percent, compared with private returns of around 15 percent, suggesting the case for a substantial increase in public research subsidies.
Along with pointing out some advantages of government-funded R&D, Bloom, Van Reenen, and Williams note that when it comes to tax subsidies for corporate R&D, the US lags well behind other countries. They write:
The OECD (2018) reports that 33 of the 42 countries it examined provide some material level of tax generosity toward research and development. The US federal R&D tax credit is in the bottom one-third of OECD nations in terms of generosity, reducing the cost of US R&D spending by about 5 percent. … In countries with the most generous provisions, such as France, Portugal, and Chile, the corresponding tax incentives reduce the cost of R&D by more than 30 percent. Do research and development tax credits actually work to raise R&D spending? The answer seems to be “yes.”
Here's their toolkit of pro-innovation policies, with their own estimates of effectiveness along various dimensions.
Treasury Secretary Steve Mnuchin has "determined that China is a Currency Manipulator" (with capital letters in the press release). The overall claim is that one major reason for China's large trade surpluses is that China is keeping its exchange rate too low. This low exchange rate makes China's exports cheaper to the rest of the world, while also making foreign products more expensive in China, thus creating China's trade surplus.
The claim is not particularly plausible. Indeed, a cynic might point out that if currency manipulation was the main trade problem all along, then Trump has been wasting time by playing around with tariffs since early 2018. For perspective on the exchange rate issue, let's start with the Chinese yuan/US dollar exchange rate over the last 30 years.
Up to about 1995, the exchange rate shown here is not an especially meaningful number, because during that time China had an "official" exchange rate set by the government and an "unofficial" exchange rate set in markets. The official rate had a much stronger yuan than the unofficial rate, so when the two rates were unified in 1995, there is a steep jump upward in the exchange rate graph, as the yuan gets weaker (that is, it takes more yuan to buy a US dollar).
From about 1996 to 2005, the straight horizontal line on the graph is strong evidence that the People's Bank of China was keeping its exchange rate fixed. Starting in mid-2005, China stopped holding its exchange rate fixed, and the yuan became stronger, moving from about 8.2 yuan/dollar in early 2005 to 6.8 yuan/dollar by mid-2008. Since then, the yuan has shifted up and down, falling as low as about 6.1 yuan/dollar at times, but then often rising back up to about 6.8 yuan/dollar.
It's useful to compare the yuan exchange rate with China's balance of trade. Here's a figure based on World Bank data showing China's trade balance since 1990. Back in the 1990s, China's trade surplus was usually positive, but also typically less than 2% of GDP. When China joined the WTO in 2001, its exports took off and so did its trade surplus, which hit 10% of GDP in 2007. It would be highly implausible to attribute this jump in China's trade surplus to currency manipulation, because the first figure shows that China's exchange rate was unchanged during this period. It is also highly implausible to attribute this rise to more Chinese protectionism, because China's giant trade surpluses resulted from higher exports, not lower imports.
But then China's extraordinary trade surplus soon went away. By 2011 China's trade surplus was under 2% of GDP; by 2018, it was under 1% of GDP. Thus the Trump administration complaint that China is using an extraordinarily weak exchange rate to power very large Chinese trade surpluses has not been plausible since 2011, and is even less plausible since 2018.
At the time of the October 2018 report, the exchange rate was about 6.9 yuan/dollar. The report did not find that China was acting like a currency manipulator at that time. As it points out, the IMF agreed with this view, as did other outside economists. For example, the Treasury wrote:
"Over the last decade, the RMB has generally appreciated on a real, trade-weighted basis. This appreciation has led the IMF to shift its assessment of the RMB in recent years and conclude that the RMB is broadly in line with economic fundamentals."
That report also offers a macroeconomically odd complaint. It acknowledges that China's overall trade surplus in 2018 was near-zero, but then complains that China's trade position is "unevenly spread"; that is, China has a trade surplus with some countries like the US, but a nearly offsetting trade deficit with other countries. Treasury wrote:
Since then, China’s current account surplus has declined substantially, falling to 0.5 percent of GDP in the four quarters through June 2018. However, it remains unevenly spread among China’s trading partners. In particular, China retains a very large and persistent trade surplus with the United States, at $390 billion over the four quarters through June 2018.
So China was on warning that even if its overall trade balance was near-zero, the US was focused only on the bilateral trade balance. The next Treasury report arrives in May 2019, when the exchange rate was still 6.9 yuan/dollar, as it had been at the time of the October 2018 report. Again, Treasury does not find that China is a currency manipulator. However, in the exchange rate graph above you can see a short-lived movement in the yuan/dollar exchange rate in March 2018, as the yuan became a bit weaker for a time. Treasury warns:
Notwithstanding that China does not trigger all three criteria under the 2015 legislation, Treasury will continue its enhanced bilateral engagement with China regarding exchange rate issues, given that the RMB has fallen against the dollar by 8 percent over the last year in the context of an extremely large and widening bilateral trade surplus. Treasury continues to urge China to take the necessary steps to avoid a persistently weak currency.
Again, the focus is on China's bilateral trade surplus with the US. There's another interesting hint here, which is that Treasury is urging "China to take the necessary steps to avoid a persistently weak currency." This phrasing is interesting, because it isn't a complaint that China is intervening to make its currency too weak; instead, it's a complaint that China should be intervening more to prevent its currency from being weak. It's a complaint that China is not being enough of a currency manipulator in the way the Trump administration would prefer.
Before the announcement from Mnuchin, the exchange rate in August 2019 was still about 6.9 yuan/dollar, which is what it had been in May 2019 and October 2018. But now, this exchange rate was evidence that China was a "Currency Manipulator." In fact, the recent Treasury press release links to the May 2019 report, which did not find that China was manipulating its currency, in support of the finding that China was manipulating its currency.
The May 2019 report sets out three standards that Treasury will supposedly be looking at when thinking about currency manipulation. First is whether the country has a bilateral trade surplus with the US of more than $20 billion, which China does. Second is whether the country has an overall trade surplus of more than 2% of GDP, which China doesn't. IMF statistics find that China's trade surplus in 2018 was 0.4% of GDP; moreover, the IMF finds that China is headed for a trade deficit in the next few years. Third is whether the country is intervening regularly in foreign exchange markets. As the Treasury report points out, the People's Bank of China's foreign exchange operations are shrouded in secrecy, but the evidence suggests that China's foreign exchange reserves haven't moved much since early 2018, or are perhaps a bit down overall, which is not consistent with the theory that the central bank has been buying lots of US dollars to keep the yuan exchange rate weak.
Understanding the actual drivers of trade balances also helps explain why, after a year of higher tariffs, the US trade deficit got larger rather than smaller. China's yuan/dollar exchange rate is at a level where its overall trade balance is near-zero and, according to the IMF, headed for a modest trade deficit in the next few years. Thus, the IMF is unlikely to back the Trump administration argument that the People's Bank of China is manipulating its exchange rate. But if the Trump administration bludgeons China into having a substantially stronger exchange rate, what happens next?
A strong exchange rate for one currency necessarily means a weaker exchange rate for other currencies: for example, if it takes fewer yuan to buy a dollar, it necessarily takes more dollars to buy a yuan. By arguing for a stronger yuan exchange rate, the Trump administration is apparently trying to devalue its way to prosperity with a weaker US dollar exchange rate. This makes it easier to sell US exports abroad, but the lower buying power of the US dollar also means that, in effect, everything that consumers and firms import will cost more. Economies with floating exchange rates, like the US, are built to absorb short- and even medium-term fluctuations in exchange rates without too much stress. But in effect, the current Treasury policy is to advocate that China take steps to produce a permanently weaker US dollar, and thus benefit exporters at the cost of higher prices for importers, until the bilateral US trade deficit with China is eliminated.
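The reciprocal arithmetic here can be made concrete with a minimal sketch; the rates below are hypothetical round numbers chosen for illustration, not data from the figures:

```python
# Exchange rates are reciprocals: a stronger yuan is, by definition, a weaker dollar.
old_rate = 6.9  # yuan per dollar (hypothetical)
new_rate = 6.0  # yuan per dollar: fewer yuan per dollar = stronger yuan

# From the US side, quote the same rates as dollars per yuan:
old_usd_per_yuan = 1 / old_rate   # about 0.145 dollars per yuan
new_usd_per_yuan = 1 / new_rate   # about 0.167 dollars per yuan = weaker dollar

# A Chinese good priced at 100 yuan now costs US importers more dollars:
print(round(100 * old_usd_per_yuan, 2))  # cost at the old rate
print(round(100 * new_usd_per_yuan, 2))  # higher cost at the new rate
```

The point of the sketch is only that the two quotations are one number, not two: pushing the yuan up against the dollar is identical to pushing the dollar down against the yuan.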
A number of recent research studies have argued that "markups" are on the rise. As one of several prominent examples, a study by Jan De Loecker, Jan Eeckhout, and Gabriel Unger, called "The Rise of Market Power and the Macroeconomic Implications," presents calculations suggesting that the average markup for a US firm rose from 1.21 in 1980 to 1.61 in 2016 (here's a working paper version from Eeckhout's website dated November 22, 2018). The Summer 2019 issue of the Journal of Economic Perspectives discusses the strengths and weaknesses of this evidence in a three-paper symposium:
Here are some of my own takeaways from these articles:
1) For economists, "markups" are not the same as profits. Profits happen when price is above the average cost of production. Markups are defined as a situation where the price is above the marginal cost of production. This definition is also why markups are so hard to measure. It's pretty straightforward to measure average cost: just take the total cost of production and divide by the quantity produced. But measuring marginal cost of production is hard, because it requires separating a firm's expenses into "fixed" and "variable" costs, which is a useful conceptual division for economists but not how firms divide up their costs for accounting purposes.
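The distinction between a markup and a profit can be made concrete with a small numerical sketch (all figures hypothetical):

```python
# A hypothetical firm, to illustrate that a markup (price over marginal cost)
# is not the same thing as profit (price over average cost).
fixed_cost = 500.0    # costs that don't vary with output
marginal_cost = 2.0   # cost of producing one more unit
quantity = 100
price = 8.0

total_cost = fixed_cost + marginal_cost * quantity  # 700.0
average_cost = total_cost / quantity                # 7.0

markup = price / marginal_cost          # 4.0: price is far above marginal cost
profit_per_unit = price - average_cost  # 1.0: yet profit per unit is modest

print(markup, profit_per_unit)
```

With these numbers the firm's markup is 4.0, which sounds dramatic, while its profit is only $1 per unit; the gap between the two is entirely the fixed cost that marginal cost ignores.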
2) There are a number of economic situations where there can be persistent markups while profits remain at normal levels. As one example, every intro textbook discusses "monopolistic competition," a situation in which firms in a market sell similar but differentiated products. Examples include clothing stores in the same shopping mall with different styles, gas stations with different locations, products sold with different money-back guarantees, and many everyday products like dishwasher soap. The basic textbook explanation is that in a setting of monopolistic competition, firms will be able to set prices above marginal cost, charging more to consumers who desire the specific differentiated characteristics of that product. However, part of the definition of monopolistic competition is that other firms can easily enter the market and expand production. The result is a situation of positive markups (price higher than marginal cost) but only average profits.
3) Another example of positive markups arises in companies with high fixed costs and low marginal costs: as a simple example, think of a video game company where the cost of creating the game is high, but once the game is created, the marginal cost of providing the game to an additional user is near zero. It's quite possible for this kind of company to have a positive markup (price over marginal cost), but also to suffer losses (because price isn't enough above marginal cost to cover the firm's fixed costs).
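The video-game case can be sketched the same way; the numbers below are made up, but they show how a firm can have a large markup and still lose money:

```python
# Hypothetical video-game firm: high fixed cost, near-zero marginal cost.
fixed_cost = 1_000_000.0  # cost of creating the game
marginal_cost = 0.50      # cost of serving one more user
price = 5.00
users = 150_000

markup = price / marginal_cost  # 10.0: a very large markup
profit = (price - marginal_cost) * users - fixed_cost

print(markup, profit)  # markup is high, yet profit is negative
```

Here the markup is 10, but revenue over variable cost ($675,000) falls short of the $1 million fixed cost, so the firm loses $325,000: positive markups and losses coexist.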
4) Many big tech companies (Facebook, Amazon, Google, Apple, Microsoft, and others) have a number of products that share this property of high fixed costs and lower marginal costs. Thus, these companies are likely to have high markups. Moreover, given their technology, size, and business practices, it may be difficult for entrants to challenge these companies. As a result, companies like these may be able to sustain a combination of high markups and high profits over time. For example, in the study mentioned above by De Loecker, Eeckhout, and Unger, the overall rise in markups comes not from a rise in markups for the median company, but from a very sharp rise in markups for a much smaller number of companies.
5) The emergence of high-markup, high-profit firms may in some cases be a positive step for productivity and wages. There is an emerging literature on what are sometimes called "superstar" firms (here's a recent working paper version of "The Fall of the Labor Share and the Rise of Superstar Firms," by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen). Imagine a company that makes large-scale investments in information technology, both for the logistics of running its operations and for quality control, and also in widespread use of far-flung supply chains. Based on these high fixed costs, such a company may be able to expand in the market, taking market share from smaller firms. But economists commonly hold that when a company succeeds by offering the products that consumers desire at attractive prices, this is a good thing. Thus, antitrust authorities need to think carefully about whether companies with high markups are gaining high profits through pro-consumer fixed investments or by anti-consumer restraints on competitors.
6) It's hard to measure whether markups are rising for the economy as a whole. Detailed studies of a particular firm or industry look carefully at the production process for that firm or industry and estimate fixed and marginal costs. But how can such data be collected for most companies in an economy? Some approaches use firm-level accounting data, or data on capital investment and depreciation across industries, or sector-level data on outputs and inputs. Using this data to estimate production functions and markups across firms involves a number of modeling choices. It turns out that assumptions like whether an industry is producing with constant returns to scale or increasing returns to scale matter a lot. There are a bunch of genuinely hard and disputed questions about how to proceed here.
7) As an example, one approach is to rely on accounting data. The prominent study by De Loecker, Eeckhout, and Unger uses accounting data (from Compustat) which breaks down costs of production into two main categories: "Cost of Goods Sold" and "Selling, General, and Administrative." If one thinks of Cost of Goods Sold as a proxy that captures variable costs, one can then use this data along with a measure of the fixed capital stock of firms to do more detailed calculations, which with some assumptions (like assuming that all profits are paid to owners of capital) will imply estimates for markups. But there are a bundle of underlying assumptions here. For example, Cost of Goods Sold is defined differently by accountants across industries, and it seems to be a falling share of total costs over time. Sorting out the underlying economic meaning of the accounting data, along with what assumptions are necessary and what implications these assumptions have, is an ongoing area of research.
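A stylized sketch of this kind of accounting-based calculation is below. The core idea in production-function approaches is that the markup is proportional to the ratio of sales to variable costs (proxied by Cost of Goods Sold), scaled by the output elasticity of the variable input. The numbers and the assumed elasticity here are hypothetical; the actual De Loecker, Eeckhout, and Unger procedure estimates elasticities econometrically and makes further adjustments:

```python
def markup_estimate(sales, cogs, output_elasticity=0.85):
    """Stylized accounting-based markup: elasticity of output with respect
    to the variable input, times the ratio of sales to variable costs
    (with Cost of Goods Sold standing in for variable costs)."""
    return output_elasticity * sales / cogs

# Hypothetical firm: if COGS falls as a share of sales, the estimated
# markup rises even with the elasticity held fixed.
print(round(markup_estimate(sales=1000.0, cogs=700.0), 2))  # lower markup
print(round(markup_estimate(sales=1000.0, cogs=530.0), 2))  # higher markup
```

This toy version makes the sensitivity visible: the estimate depends directly on the assumed elasticity and on whether COGS really captures variable cost, which is why the accounting definitions discussed above matter so much.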
8) Other attempts to measure markups often rely on measuring a firm's stock of capital, which can be thought of as a fixed cost, and then looking at the profits received by owners of capital. But many firms have both "tangible" capital like machines, which is relatively well-measured, and "intangible" investments, which include situations where a firm has made past investments in knowledge (like research and development) or organizational capabilities that are now paying off. Measuring intangible investment (and thinking about how fast it might depreciate) is very hard, but there's some evidence that intangible investments have become more important over time. If firms with high markups and profits are largely benefiting because they made large investments in the past, in the form of intangible capital, then this should probably be viewed as a positive for the economy.
9) Estimates of markups over time often reveal patterns that raise additional questions. For example, the study by De Loecker, Eeckhout, and Unger finds that most of the sharp rise in markups happened among a smaller segment of firms in the 1980s and 1990s, and hasn't changed much since. (How does that timeline fit with one's internal narrative about what has caused higher markups, and in particular a belief that the causes are relatively recent?) Other studies that project back further in time suggest that markups were especially large in the 1960s. (So perhaps we need multiple explanations for what affected markups then and now?)
10) If very large rises in markups have occurred, then they should have implications for other areas of the economy. For example, Basu points out in his JEP paper that it's conceptually possible to draw a connection from higher corporate markups to labor receiving a lower share of income (a fact discussed here and here). But high estimates of the rise in markups suggest that the labor share of income should have fallen by much more than actually observed. Or as Syverson points out in his JEP paper, a rise in markups implies either that prices have risen quickly or that (marginal) costs have plunged. Low inflation rates mean that prices have not risen quickly, and evidence that costs have plunged economy-wide is scanty. Thus, both authors express the view that while it is plausible that markups have risen, the size of such a rise must be relatively modest to fit with other observed economic changes.
There's much more in the articles themselves about methods of measuring markups, which methods produce estimates that are higher or lower, and possible connections (or not) to concentration of industry. Attempts to measure whether markups are rising in the US economy, and if so, by how much, are a live and active area of research. If someone tells you that a very large rise in markups is a settled fact, they are showing a lack of awareness of the actual evolving state of play in this literature.
For the later decades of the 20th century, the most common source of data for economic studies was government surveys and statistical agencies. There are household surveys like the Current Population Survey, the Survey of Income and Program Participation, the Consumer Expenditure Survey, the National Health Interview Survey, and the General Social Survey. There are government workers collecting data on prices at stores as input to measures of inflation like the Consumer Price Index. There are business surveys, like the Economic Census, the Retail Trade Survey, the Annual Survey of Manufactures, the Residential Finance Survey, and others. Branches of government like the Department of Energy, the Department of Agriculture, and the Health Care Financing Administration collect data on specific industries. There's also a Census of Governments to get data on state and local government. The Bureau of Economic Analysis pulls together data from these sources and others to estimate GDP.
But over time, a split developed. On one side was this body of data created by government for use by business and policy-makers, as well as researchers. On the other side was a vast amount of data collected in the process of administering government programs. Often this administrative data was not formatted or organized in a way that researchers could use. Moreover, the administrative data was often siloed in one government agency; for example, student grades and records of academic progress were traditionally kept inside school districts or in some cases state departments of education, and not easily connected to other data that might explain patterns of school grades, either within a school year or over time.
Research using administrative data has much in common with history and archeology, insofar as it observes the tracks that individuals leave as they move through society and draws lessons from these glimpses into their lives. …
Given their origin in a particular institutional context, administrative records are typically fragmented, and these data are often not linked to other data that would be useful for research and policy. Hospitals, for example, collect detailed information about patients’ health, schools regularly collect information about student development, and employers often keep records not only about the performance of employees, but also about applicants who were ultimately not offered positions. Although various combinations of these data can provide important insights, they are typically compartmentalized. Likewise, given their origin, administrative records often lack certain kinds of information that are less likely to be collected in these records. For example, information about attitudes, affinities, and motives are not often collected in administrative records. Combining administrative data with records from other sources—either by linking administrative records across sources or by making administrative records available to be linked to data collected via other means—is thus central to building administrative data infrastructure. …
By virtue of how they come into existence, administrative data are typically focused on one facet of an individual’s life, and data and insights are often siloed. … Administrative records from birth, education, criminal justice, labor market, and mortality often capture different points in an individual’s life; combining data across these stages allows us to understand how inequalities unfold over the arc of an individual’s life. …
One clear example is in education. Despite their focus on preparing students who are “college- and career-ready,” schools have historically struggled to obtain data on the practices that will prepare their students to be successful because widespread links between students in K–12 educational systems and higher education outcomes have become available only recently, and links between K–12 data systems and the labor market remain relatively rare. These data linkages are important to understand the efficacy of school-based vocational programs, dropout recovery interventions, college readiness programs, and advancement placement course policies. But schools, like other organizations, typically lack the capacity and expertise to build this infrastructure and analyze the resulting data.
At a time when we are all sensitized to how big tech companies are gathering, combining, and marketing our personal data, the rise of administrative data clearly has a concerning side. Consider the footprints that many of us have left in administrative data over the years, about our education, physical and mental health, finances, how much we were paid, tax filings, car and real estate ownership, Social Security contributions, benefits from government programs, and many more–right down to the books checked out of the public library.
The obvious challenge is to find ways to use administrative data where protection of personal privacy is built in from the start. For example, when the unemployment rate is calculated from the Current Population Survey, no one is concerned that unemployment at specific identified households will become public as a result. In this issue, a paper by David B. Grusky, Michael Hout, Timothy M. Smeeding, and C. Matthew Snipp describes "The American Opportunity Study: A New Infrastructure for Monitoring Outcomes, Evaluating Policy, and Advancing Basic Science." They write:
The American Opportunity Study (AOS) … is an ongoing effort to link the censuses of 1960 through 2010 and the American Community Surveys (ACS) and thereby convert cross-sectional decennial census data into a bona fide panel that will represent the full U.S. population over the last seventy years. Because this panel will be continuously refreshed as additional census and ACS data become available, it can serve as a population-level scaffolding on which other administrative data (such as tax records, earnings reports, program data) are then hung. … In other countries that have linked data, such as Wales and New Zealand, a well-developed infrastructure allows access to carefully vetted scholars, with the result that high-quality evidence is more frequently brought to bear on policy decisions.
I can ramble on a bit about the merits of administrative data for research. It covers everyone, and thus allows detailed analysis of various subgroups, tracking people over time, and even looking across generations. It describes what government programs have actually done, which can then be compared and combined with surveys of households or businesses. But rather than talking in generalities, let me just mention some of the studies from this double issue. Notice in particular how the studies often use administrative data, sometimes from separate government agencies, in a way that addresses a worthwhile question.
I use standardized test scores from roughly forty-five million students to describe the temporal structure of educational opportunity in more than eleven thousand school districts in the United States. Variation among school districts is considerable in both average third-grade scores and test score growth rates. The two measures are uncorrelated, indicating that the characteristics of communities that provide high levels of early childhood educational opportunity are not the same as those that provide high opportunities for growth from third to eighth grade.
This research uses a probabilistic matching strategy to link foreclosure records with birth certificate records from 2006 to 2010 in California to identify birth parents who experienced a foreclosure. … [We] find that infants in gestation during or after the foreclosure had a lower birth weight for gestational age than those born earlier, suggesting that the foreclosure crisis was a plausible contributor to disparities in initial health endowments.
Using administrative data from the New York City Department of Education and the New York City Police Department, we find that exposure to violence in the residential neighborhood and an unsafe climate at school lead to substantial test score losses in English language arts (ELA).
Does network recruitment contribute to the glass ceiling? We use administrative data from two companies to answer the question. In the presence of gender homophily, recruitment through employee referrals can disadvantage women when an old boys’ network is in place. We calculate the segregating effects of network recruitment across multiple job levels in the two firms. If network recruitment is a factor, the segregating impact should disadvantage women more at higher levels. We find this pattern, but also find that network recruitment is a desegregating force overall. It promotes women’s representation strongly at all levels, but less so at higher levels.
One final thought: Using administrative data often requires academic researchers to become entrepreneurial about seeking out such data, working with the government agencies or private firms that hold the original data, finding ways to offer cast-iron reassurances about personal privacy, and only then being able to actually work with the data and see if something interesting emerges. For modern economists, this process is quite different from the old days of digging through data collected and made public in a way already prepared for their use by government agencies.
I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided, to my delight, that it would be freely available online, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Summer 2019 issue, which in the Taylor household is known as issue #129. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.
_______________
Symposium on Markups
"Are Price-Cost Markups Rising in the United States? A Discussion of the Evidence," by Susanto Basu A number of recent papers have argued that US firms exert increasing market power, as measured by their markups of price over marginal cost. I review three of the main approaches to estimating economy-wide markups and show that all are based on the hypothesis of firm cost minimization. Yet different assumptions and methods of implementation lead to quite different conclusions regarding the levels and trends of markups. I survey the literature critically and argue that some of the startling findings of steeply rising markups are difficult to reconcile with other evidence and with aggregate data. Existing methods cannot determine whether markups have been stable or whether they have risen modestly over the past several decades. Even relatively small increases in markups are consistent with significant changes in aggregate outcomes, such as the observed decline in labor's share of national income. Full-Text Access | Supplementary Materials
"Macroeconomics and Market Power: Context, Implications, and Open Questions," by Chad Syverson This article assesses several aspects of recent macroeconomic market power research. These include the ways market power is defined and measured; the use of accounting data to estimate markups; the quantitative implications of theoretical connections among markups, prices, costs, scale elasticities, and profits; and conflicting evidence on whether greater market power has led to lower investment rates and a lower labor share of income. Throughout this discussion, I characterize the congruencies and incongruencies between macro evidence and micro views of market power and, when they do not perfectly overlap, explain the open questions that need to be answered to make the connection complete. Full-Text Access | Supplementary Materials
"Do Increasing Markups Matter? Lessons from Empirical Industrial Organization," by Steven Berry, Martin Gaynor and Fiona Scott Morton This article considers the recent literature on firm markups in light of both new and classic work in the field of industrial organization. We detail the shortcomings of papers that rely on discredited approaches from the "structure-conduct-performance" literature. In contrast, papers based on production function estimation have made useful progress in measuring broad trends in markups. However, industries are so heterogeneous that careful industry-specific studies are also required, and sorely needed. Examples of such studies illustrate differing explanations for rising markups, including endogenous increases in fixed costs associated with lower marginal costs. In some industries there is evidence of price increases driven by mergers. To fully understand markups, we must eventually recover the key economic primitives of demand, marginal cost, and fixed and sunk costs. We end by discussing the various aspects of antitrust enforcement that may be of increasing importance regardless of the cause of increased markups. Full-Text Access | Supplementary Materials
Symposium on Issues in Antitrust
"Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets," by Carl Shapiro Accumulating evidence points to the need for more vigorous antitrust enforcement in the United States in three areas. First, stricter merger control is warranted in an economy where large, highly efficient and profitable "superstar" firms account for an increasing share of economic activity. Evidence from merger retrospectives further supports the conclusion that stricter merger control is needed. Second, greater vigilance is needed to prevent dominant firms, including the tech titans, from engaging in exclusionary conduct. The systematic shrinking of the scope of the Sherman Act by the Supreme Court over the past 40 years may make this difficult. Third, greater antitrust scrutiny should be given to the monopsony power of employers in labor markets. Full-Text Access | Supplementary Materials
"The Problem of Bigness: From Standard Oil to Google," by Naomi R. Lamoreaux This article sets recent expressions of alarm about the monopoly power of technology giants such as Google and Amazon in the long history of Americans' response to big business. I argue that we cannot understand that history unless we realize that Americans have always been concerned about the political and economic dangers of bigness, not just the threat of high prices. The problem policymakers faced after the rise of Standard Oil was how to protect society against those dangers without punishing firms that grew large because they were innovative. The antitrust regime put in place in the early twentieth century managed this balancing act by focusing on large firms' conduct toward competitors and banning practices that were anticompetitive or exclusionary. Maintaining this balance was difficult, however, and it gave way over time—first to a preoccupation with market power during the post–World War II period, and then to a fixation on consumer welfare in the late twentieth century. Refocusing policy on large firms' conduct would do much to address current fears about bigness without penalizing firms whose market power comes from innovation. Full-Text Access | Supplementary Materials
Articles
"How Market Design Emerged from Game Theory: A Mutual Interview," by Alvin E. Roth and Robert B. Wilson We interview each other about how game theory and mechanism design evolved into practical market design. When we learned game theory, games were modeled either in terms of the strategies available to the players ("noncooperative games") or the outcomes attainable by coalitions ("cooperative games"), and these were viewed as models for different kinds of games. The model itself was viewed as a mathematical object that could be examined in its entirety. Market design, however, has come to view these models as complementary approaches for examining different ways marketplaces operate within their economic environment. Because that environment can be complex, there will be unobservable aspects of the game. Mathematical models themselves play a less heroic, stand-alone role in market design than in the theoretical mechanism design literature. Other kinds of investigation, communication, and persuasion are important in crafting a workable design and helping it to be adopted, implemented, maintained, and adapted. Full-Text Access | Supplementary Materials
"A Bridge from Monty Hall to the Hot Hand: The Principle of Restricted Choice," by Joshua B. Miller and Adam Sanjurjo We show how classic conditional probability puzzles, such as the Monty Hall problem, are intimately related to the recently discovered hot hand selection bias. We explain the connection by way of the principle of restricted choice, an intuitive inferential rule from the card game bridge, which we show is naturally quantified as the updating factor in the odds form of Bayes's rule. We illustrate how, just as the experimental subject fails to use available information to update correctly when choosing a door in the Monty Hall problem, researchers may neglect analogous information when designing experiments, analyzing data, and interpreting results. Full-Text Access | Supplementary Materials
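For readers who like to see the restricted-choice logic in action, the Monty Hall result is easy to verify by simulation. This is my own minimal sketch, not code from the paper; the trial count is arbitrary:

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty's choice is "restricted": he must open a door that is
    # neither the player's pick nor the prize-hiding door.
    opened = rng.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

rng = random.Random(0)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(f"stay wins {stay:.3f}, switch wins {swap:.3f}")  # roughly 1/3 vs 2/3
```

In the odds form of Bayes's rule, the intuition is that if the prize is behind the other unopened door, Monty's hand was forced (he opens that particular door with probability 1), while if it is behind your pick he had two choices (probability 1/2), so the updating factor is 2-to-1 in favor of switching.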
"A Toolkit of Policies to Promote Innovation," by Nicholas Bloom, John Van Reenen and Heidi Williams Economic theory suggests that market economies are likely to underprovide innovation because of the public good nature of knowledge. Empirical evidence from the United States and other advanced economies supports this idea. We summarize the pros and cons of different policy instruments for promoting innovation and provide a basic "toolkit" describing which policies are most effective according to our reading of the evidence. In the short run, R&D tax credits and direct public funding seem the most productive, but in the longer run, increasing the supply of human capital (for example, relaxing immigration rules or expanding university STEM admissions) is likely more effective. Full-Text Access | Supplementary Materials
"How Prevalent Is Downward Rigidity in Nominal Wages? International Evidence from Payroll Records and Pay Slips," by Michael W. L. Elsby and Gary Solon For more than 80 years, many macroeconomic analyses have been premised on the assumption that workers' nominal wage rates cannot be cut. Contrary evidence from household surveys reasonably has been discounted on the grounds that the measurement of frequent wage cuts might be an artifact of reporting error. This article summarizes a more recent wave of studies based on more accurate wage data from payroll records and pay slips. By and large, these studies indicate that, except in extreme circumstances (when nominal wage cuts are either legally prohibited or rendered beside the point by very high inflation), nominal wage cuts from one year to the next appear quite common, typically affecting 15–25 percent of job stayers in periods of low inflation. Full-Text Access | Supplementary Materials
"Should We Tax Sugar-Sweetened Beverages? An Overview of Theory and Evidence," by Hunt Allcott, Benjamin B. Lockwood and Dmitry Taubinsky Taxes on sugar-sweetened beverages are growing in popularity and have generated an active public debate. Are they a good idea? If so, how high should they be? Are such taxes regressive? People in the United States and some other countries consume remarkable quantities of sugar-sweetened beverages, and the evidence suggests that this generates significant health costs. Building on recent work, we review the basic economic principles that determine the socially optimal sugar-sweetened beverage tax. The optimal tax depends on (1) externalities, or uninternalized health system costs from diseases caused by sugary drink consumption; (2) internalities, or costs consumers impose on themselves by consuming too many sugary drinks due to poor nutrition knowledge and/or lack of self-control; and (3) regressivity, or how much the financial burden and the internality benefits from the tax fall on the poor. We summarize the empirical evidence about the key parameters that determine how large the tax should be. Our calculations suggest that sugar-sweetened beverage taxes are welfare enhancing and indeed that the optimal sugar-sweetened beverage tax rate may be higher than the 1 cent per ounce rate most commonly used in US cities. We end with seven concrete suggestions for policymakers considering a sugar-sweetened beverage tax. Full-Text Access | Supplementary Materials
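For intuition on the three-ingredient logic the abstract describes, here is a purely illustrative arithmetic sketch. Every number below is invented to show the structure of the calculation; the paper itself estimates the real parameters:

```python
# Stylized version of the optimal corrective tax logic: roughly, the sum
# of the marginal externality and the average internality per ounce,
# adjusted downward to the extent the burden falls on the poor.
# All figures are hypothetical, for illustration only.
externality_per_oz = 0.008     # health-system costs not borne by the drinker ($/oz)
internality_per_oz = 0.010     # costs drinkers impose on themselves ($/oz)
regressivity_discount = 0.85   # downward adjustment for burden on the poor

optimal_tax = (externality_per_oz + internality_per_oz) * regressivity_discount
print(f"illustrative optimal tax: {optimal_tax * 100:.2f} cents per ounce")
```

Even with made-up inputs, the structure shows how plausible externality and internality estimates can push the optimal rate above the 1 cent per ounce commonly used in US cities.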
Features
"Retrospectives: Lord Keynes and Mr. Say: A Proximity of Ideas," by Alain Béraud and Guy Numa Since the publication of Keynes's General Theory of Employment, Interest and Money, generations of economists have been led to believe that Say was Keynes's ultimate nemesis. By means of textual and contextual analysis, we show that Keynes and Say held similar views on several key issues, such as the possibility of aggregate-demand deficiency, the role of money in the economy, and government intervention. Our conclusion is that there are enough similarities to call into question the idea that Keynes's views were antithetical to Say's. The irony is that Keynes was not aware of these similarities. Our study sheds new light on the interpretation of Keynes's work and on his criticism of classical political economy. Moreover, it suggests that some policy implications of demand-side and supply-side frameworks overlap. Finally, the study underlines the importance of a thorough analysis of the primary sources to fully grasp the substance of Say's message. Full-Text Access | Supplementary Materials
"Some Journal of Economic Perspectives Articles Recommended for Classroom Use," by Timothy Taylor In 2018, the editors of the Journal of Economic Perspectives invited faculty to send us examples of JEP articles that they had found useful for teaching. We received 250 responses. On the JEP website, we have created a landing page (https://www.aeaweb.org/journals/jep/classroom) that organizes the recommended articles into 33 categories. If you click on any of the categories at that link, you will see a list of JEP papers that were recommended by faculty members for classroom use for that category, presented in reverse date order. Each paper is listed with a hyperlink to its article page on the JEP website. In this article, I offer some thoughts about how this exercise was carried out, along with its strengths and weaknesses. Although we make no pretense of presenting a complete syllabus for any specific course, we offer the milder hope that these recommendations from peers might suggest some additional readings for your students. Full-Text Access | Supplementary Materials
"Recommendations for Further Reading," by Timothy Taylor Full-Text Access | Supplementary Materials
If a metropolitan area was to alter its system of permits and rules in a way that enabled a substantial expansion in the quantity of housing being built, would this step help to make housing more affordable for those with lower and moderate income levels?
Two answers are hypothetically possible here. One answer points out that new market-driven housing construction will tend to be higher-priced, and therefore that it offers no near- or middle-term assistance to people struggling with housing affordability. The other answer readily admits that new market-driven housing construction will tend to be higher-priced, but argues that an overall rise in the quantity of housing supplied will affect prices across the entire housing market, not just one part of it.
In one part of the study, he divides up neighborhoods into ten different income deciles. He shows that when people move from the lowest income decile, for example, the typical move is to a neighborhood in the second income decile; similarly, the typical move for those from a neighborhood in the second income decile is to a neighborhood in the third income decile. For people in the seventh and eighth income deciles, the typical move was to remain in a neighborhood in the same income decile–but for these households, moving to a neighborhood with a lower income decile was more common than moving to a neighborhood with a higher income decile. Reshuffling across the housing market does happen.
In one part of his study, Mast offers a more detailed analysis. He writes:
I identify 686 large, new, market-rate multifamily buildings in 12 large central cities and track 52,000 of their current residents to their previous buildings of residence. I then find the tenants currently living in those buildings and track them to their previous residence, iterating for six rounds and, in order to focus on local connectivity, keeping only within-metro-area moves in each round.
He looks at the patterns of how much reshuffling is actually occurring in these cities, and then uses the results to set up a simulation model. Like all simulations, the devil is in the details, and research with other models would be useful. But Mast concludes:
The simulation results suggest that market-rate construction has an important effect on the middle- and low-income housing markets. In my baseline specification, 100 new market-rate units create 70 equivalent units in neighborhoods with household incomes below the metro area median, and 39 in neighborhoods with household incomes from the bottom fifth. This should open these housing markets and lower prices, all else equal, though I do not directly estimate these implied effects. Notably, however, the simulation implies these equivalent units are created within five years of the completion of the new building.
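To see how vacancies can filter down through income deciles, here is a toy expected-value calculation in the spirit of this migration-chain logic. This is emphatically not Mast's model; the transition probabilities below are invented purely to show the mechanics:

```python
# Toy migration-chain model. Assumptions (all hypothetical): new
# market-rate units sit in the top (10th) income decile; each round, a
# vacancy is filled by a within-metro mover with probability p_move,
# and that mover comes from one decile lower with probability p_down
# (otherwise the same decile), leaving a vacancy behind.

def expected_chain(new_units=100, rounds=6, p_move=0.7, p_down=0.7):
    """Expected vacancies created in each income decile by moving chains."""
    vac = {10: float(new_units)}              # current vacancies, by decile
    created = {d: 0.0 for d in range(1, 11)}  # cumulative vacancies created
    for _ in range(rounds):
        nxt = {}
        for d, v in vac.items():
            filled = v * p_move               # chains that continue this round
            down = filled * p_down if d > 1 else 0.0
            same = filled - down
            for dd, vv in ((d - 1, down), (d, same)):
                if vv > 0:
                    nxt[dd] = nxt.get(dd, 0.0) + vv
                    created[dd] += vv
        vac = nxt
    return created

created = expected_chain()
total = sum(created.values())
below_median = sum(v for d, v in created.items() if d <= 5)
print(f"induced vacancies: {total:.0f} total, {below_median:.0f} in below-median deciles")
```

With these made-up parameters, each new top-decile unit sets off a chain that creates roughly two additional vacancies over six rounds, a small share of them reaching below-median neighborhoods; Mast's actual estimates come from tracked moves, not assumed probabilities.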
A few thoughts here:
1) Affordable housing policy is often focused on the most immediate question of housing units that people can move into right now. But housing markets churn: that is, there is always a flow of people moving out of some areas and into others. The effects of new market-rate housing construction do spread out across an urban area.
2) The churning and movement within the housing market is real, but it takes a few years. In some urban areas where construction has been severely hindered over the years (and even decades) by local regulation, allowing more market-rate building is not a quick or immediate fix for affordable housing. This shouldn't be a surprise. You can't reasonably expect the effects of years (or decades) of hindering the supply of housing to disappear quickly. But if an urban area with an affordability problem doesn't allow more market-rate building, its affordability problem is very likely to be worse a few years down the road.
3) Nothing said here rules out policies that would involve more direct housing assistance to low-income people, whether in the form of housing vouchers or zoning regulations that require developers to build a range of housing for different income levels. Such policies need to be evaluated on their own merits. But for a housing market experiencing an affordability problem, these policies are not a substitute for the construction of more market-rate housing. You don't have to be an economist to recognize that if the supply of housing is restricted, but demand for housing keeps growing, the rising price of housing is going to become an issue.
There's often a strong theoretical case for lending money to build more infrastructure in low-income countries. International lenders ranging from government and private debt markets to organizations like the World Bank and regional development banks have lists of rules and standards to try to assure that such projects work out. The problem for China is that many of its Belt and Road Initiative projects were previously discouraged or turned down by the standard international lenders. As the Economist magazine wrote in the November 3, 2018 issue in an article called "Think of China as a giant sub-prime lender in Latin America":
Instead China stands out for its willingness to invest in risky or corrupt places with few alternative sources of capital or of cheap, robust technology. Its approach might be called sub-prime globalisation. At best, sub-prime lenders are non-judgmental sources of second chances. At worst, they are see-no-evil profiteers, and vulnerable to backlashes. China is a bit of both.
The dynamics that China is now learning are familiar to the US and other high-income countries. The starting point is often a combination of a high-minded desire to help economic development in low-income countries and a baser desire for a high-income country to help its exports and spread its foreign policy clout. Taken together, these seem to justify substantial infrastructure loans. But by the time the infrastructure plan is digested by political forces and logistical problems, a number of projects end up either incomplete or not generating the expected revenue. When the loan and the project go bad, the country that borrowed the money feels hostile toward the lender, and cheated by promises that failed to materialize. The lender feels aggrieved that the loan isn't being repaid.
China is now getting enough of this blowback that it is tightening its rules for making Belt and Road Initiative loans–basically saying that it's going to follow World Bank standards. This shift toward greater caution in China's attitudes toward the Belt and Road Initiative offers a moment to take stock of current thinking about the program. For recent overviews of the issues, some useful starting points are:
In the rest of this post, I'll refer to these essays and also some more specialized work.
As Brautigam points out in the American Interest essay:
[T]he BRI slots neatly into low-income countries’ development aspirations. China has excess foreign exchange, construction capacity, and mid-level manufacturing and needs to send all of these overseas. According to the Asian Development Bank, the developing countries of Asia alone require infrastructure investments of about $1.7 trillion per year to maintain growth, reduce poverty, and mitigate climate change. In Africa, on the periphery of the BRI, the African Development Bank estimates annual infrastructure requirements to be $130 to $170 billion. The World Bank and donors in wealthy countries are only tentatively restarting their funding for developing country infrastructure after decades of decline.
Better infrastructure does support an increase in trade. Suprabha Baniya, Nadia Rocha, and Michele Ruta offer some estimates in "Trade Effects of the New Silk Road: A Gravity Analysis" (World Bank Policy Research Working Paper 8694, January 2019). They find that "a one‐day reduction in trading times increases exports between BRI economies by 5.2 percent on average. In addition, trading times are particularly important for time sensitive products that are used as inputs in production processes, suggesting that reductions in shipping times are key in the presence of global value chains …" Overall, "[t]he paper finds that (i) the Belt and Road Initiative increases trade flows among participating countries by up to 4.1 percent; (ii) these effects would be three times as large on average if trade reforms complemented the upgrading in transport infrastructure …"
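To get a feel for the magnitudes, here is a back-of-the-envelope application of the 5.2 percent-per-day estimate. The baseline export value and the days saved are hypothetical, and treating the effect as compounding across days (rather than applying linearly) is my assumption, not the paper's:

```python
# Apply the quoted elasticity from the World Bank gravity analysis:
# roughly a 5.2% increase in exports per one-day cut in trading time.
# Baseline exports and days saved are made-up illustrative numbers.
baseline_exports_musd = 1_000.0  # bilateral exports, in $ millions (hypothetical)
gain_per_day = 0.052
days_saved = 3

new_exports = baseline_exports_musd * (1 + gain_per_day) ** days_saved
print(f"a {days_saved}-day cut raises exports from "
      f"{baseline_exports_musd:.0f} to {new_exports:.1f} ($ millions)")
```

Shaving three days off trading times would, on this rough compounding reading, raise bilateral exports by about 16 percent.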
Although it doesn't seem a fashionable point of view in 2019, growth in trade does have the potential to benefit both sides. As the Euler Hermes report emphasizes, China can lend out some of its large supply of foreign reserves, open up access to additional markets, re-deploy some of the overcapacity in its construction industry, and see its currency more widely used in the world. The 80 or so countries with BRI infrastructure investments have the potential to expand their exports and productivity.
But what about the projects that don't work out, and the conflicts over debt? There's a paradox here, which is that China's policy of no-strings and few-strings lending for BRI projects made it very popular, until projects started going bad. As Chatzky and McBride note in the CFR backgrounder:
Christopher Balding, a former professor at the HSBC Business School in Shenzhen, says that the BRI’s “no-strings approach” has, counterintuitively, made some of its investments less attractive. The approach “has fueled corruption while allowing governments to burden their countries with unpayable debts,” he says. Political backlash is perhaps less of a concern in authoritarian countries taking part in the BRI, where autocrats face less public scrutiny and where the Chinese model of governance might hold more appeal. But some governments, in places such as Kenya and Zambia, are carefully studying BRI investments before they sign up, and candidates in Malaysia have explicitly run—and won—campaigns on anti-BRI platforms.
On the other hand, China’s control over banking, and the close ties between state and capital prized by the East Asian model, have their own significant weak spot. The system can encourage investment; but it can and usually does also lead to rent-seeking and cronyism. Just as the pen is proverbially mightier than the sword, the pens that sign warped economic arrangements can be as devastating to prosperity, and at times to peace, as military tension and conflict. The Achilles Heel of China’s bank financing model is that it relies heavily on Chinese companies to develop projects together with host country officials. This creates strong incentives for kickbacks and inflated project costs. Particularly in election years, companies and public works ministers may collude to get projects approved.
How many countries could be at severe risk from BRI borrowing? John Hurley, Scott Morris, and Gailyn Portelance looked in more detail at "Examining the Debt Implications of the Belt and Road Initiative from a Policy Perspective" in Center for Global Development Policy Paper 121 (March 2018). They note: "[T]here are 10-15 that could suffer from debt distress due to future BRI-related financing, with eight countries of particular concern. These countries are Djibouti, the Kyrgyz Republic (Kyrgyzstan), Lao People's Democratic Republic (Laos), the Maldives, Mongolia, Montenegro, Pakistan, and Tajikistan." But there are a number of other countries where the BRI debt might end up being quite unpopular, even if it was technically not unbearable.
More broadly, the concern is over whether China can exert "debt-trap diplomacy," where a combination of financial ties, local companies on the ground, and an ongoing debt relationship brings the country under China's geopolitical influence. I suspect that this may have been one of China's goals in pushing the Belt and Road Initiative. But at least so far, this does not seem to have happened in a widespread way, and the issues here are tricky ones.
If infrastructure loans go well and create substantial new economic value and trade, then yes, China's influence will grow. But if infrastructure loans go badly, then debtor countries and their politicians are going to push back hard against China–just as they have pushed back hard against western lenders and the World Bank when infrastructure lending turned out badly. China is in the process of learning that when you lend to a government, the issues that arise in repayment are quite different than when you lend to a private company.
India's economy doesn't get nearly as much attention as China's. Both economies have a similar involvement in international trade: for example, they both export 19-20% of their GDP. But the US imports about ten times as much from China as from India, and there is a dimension of geopolitical competition with regard to China that isn't present with India.
One of my ways of keeping up with India is an annual report from the Ministry of Finance. The Economic Survey 2018-2019 was published earlier this month. It's available in two volumes, with the chapters in the first volume more heavily focused on specific topics and the chapters in the second volume more focused on big-picture macro topics like growth, inflation, trade, budgets, monetary policy and the like.
Of course, I can't possibly do justice to the entire report here. Here are some points and themes that jumped out at me, but there's much more in the report. Here's an overall view of the ten largest economies in the world, measured in current US dollars. The size of the circle shows the GDP for each country using the purchasing power parity exchange rate–in which case China's economy would already be larger than that of the United States, and India would already be the third-largest economy in the world. The vertical axis shows the rate of economic growth from 2014-2018.
India is at the edge of a demographic transition: lower fertility and a declining number of children, rising numbers of working age people, and a future with a rising share of elderly.
India is set to witness a sharp slowdown in population growth in the next two decades. … It will surprise many readers to learn that population in the 0-19 age bracket has already peaked due to sharp declines in total fertility rates (TFR) across the country. … The age distribution, however, implies that India's working-age population will grow by roughly 9.7mn per year during 2021-31 and 4.2mn per year in 2031-41. Meanwhile, the proportion of elementary school-going children, i.e. 5-14 age group, will witness significant declines. Contrary to popular perception, many states need to pay greater attention to consolidating/merging schools to make them viable rather than building new ones. At the other end of the age scale, policy makers need to prepare for ageing. This will need investments in health care as well as a plan for increasing the retirement age in a phased manner.
Here's the fall in India's fertility rate:
Here's a figure showing the fall in the age 5-14 population in India, and the corresponding rise in the number of schools per million children. As the report notes:
The “optimal” school size varies widely according to terrain and urban clustering, but this sharp increase in number of elementary schools per capita needs to be carefully studied. The time may soon come in many states to consolidate/merge elementary schools in order to keep them viable. Schools located within 1-3 kms radius of each other can be chosen for this purpose to ensure no significant change in access. This would also be in line with the experience of other major economies witnessing a decline in elementary school-going population, such as Japan, China, South Korea, Singapore and Canada, which have implemented policies to merge or close down schools
India is building its invisible infrastructure of contract enforcement and bankruptcy law
Arguably the single biggest constraint to ease of doing business in India is now the ability to enforce contracts and resolve disputes. This is not surprising given the 3.5 crore cases pending in the judicial system. Much of the problem is concentrated in the district and subordinate courts. Contrary to conventional belief, however, the problem is not insurmountable. A case clearance rate of 100 per cent (i.e. zero accumulation) can be achieved with the addition of merely 2,279 judges in the lower courts and 93 in High Courts even without efficiency gains. This is already within sanctioned strength and only needs filling vacancies. Scenario analysis of efficiency gains needed to clear the backlog in five years suggest that the required productivity gains are ambitious, but achievable. Given the potential economic and social multipliers of a well-functioning legal system, this may well be the best investment India can make. …
As a concerted effort made in the enactment and implementation of the [Insolvency and Bankruptcy Code], India improved its 'Resolving Insolvency' ranking from 134 in 2014 to 108 in 2019. This is a significant jump given that the country was stagnating in earlier position for many years. India won the Global Restructuring Review (GRR) award for the most improved jurisdiction in 2018. Financial Sector Assessment Program of IMF- World Bank in January 2018 observed: "India is moving towards a new state of the art bankruptcy regime. …" An effective exit law promotes responsible corporate behaviour by encouraging higher standards of corporate governance, including cash and financial discipline, to avoid consequences of insolvency. The IBC has made a significant impact on the way the default of debts is viewed and treated by promoters and management. It has initiated a cultural shift in the dynamics between lender and borrower, promoter and creditor. The IBC has paved the way for Operational Creditors, mostly SMEs and small vendors to use the IBC as a recovery tool. The threat of promoters losing control of the company or protracted legal proceedings is forcing many corporate defaulters to pay off their debt even before the insolvency can be started.
India will use more energy as its economy develops, but also is concerned about conventional air pollution and carbon emissions. Finding a balance isn't easy. This figure compares India's energy consumption and economic growth with a group of emerging market economies where growth has been much faster: China, Mexico, Brazil, Turkey, and Malaysia. For each country, there is a string of points showing the level of per capita GDP and energy use in each year from 1965-2017. As India's per capita GDP rises, so will its energy consumption.
Where will the energy come from? At present, the majority of India's energy comes from "thermal" sources, which primarily means coal. There's a lot of talk in the report about the growth of renewable energy, but as the figure shows, at least so far the growth of renewables is displacing non-carbon-emitting hydroelectric power, not power from thermal sources.
India's government has been taking real steps to address air pollution. For example: "Government is executing National Air Quality Monitoring Programme (NAMP) covering 312 cities/towns in 29 states and 6 Union Territories of the country. Under NAMP, four major air pollutants viz. Sulphur Dioxide (SO2), Oxides of Nitrogen as NO2, Suspended Particulate Matter (PM10) and Fine Particulate Matter (PM2.5) have been identified for regular monitoring at all the locations."
When it comes to climate change, India already ranks third among the countries of the world in carbon emissions (China, 28% of world carbon emissions; US, 14.9%; India, 7.4%). India's success in holding down the growth rate of its future carbon emissions will be of global importance.
The Ganga (or Ganges) River is one of the most polluted in the world. India is taking steps to clean it up. One major step to protecting groundwater across India is the construction of toilets and latrines:
Aligning with the ideals of Mahatma Gandhi, the Swachh Bharat Mission (SBM) was initiated in 2014 to achieve universal sanitation coverage by 2 October 2019. This flagship programme is, perhaps, the largest cleanliness drive, as well as an attempt to effect behavioural change, in the world ever. Even 67 years after India’s independence, in 2014, around 100 million rural and about 10 million urban households in India were without a sanitary toilet; over 564 million, i.e. close to half the population, still practiced open defecation. Through SBM, 99.2 per cent of rural India has been covered in the last four years. Since October 2014, over 9.5 crore toilets have been built all over the country and 564,658 villages have been declared Open Defecation Free (ODF). As on 14 June 2019, 30 States/UTs are 100 per cent covered with Individual Household Latrines (IHHL). SBM has significantly improved health outcomes. To highlight the impact of SBM on health, new evidence is provided that SBM has helped reduce diarrhoea and malaria among children below five years, stillbirth, and low birth weight (newborns with weight less than 2.5 kg). This effect is particularly pronounced in districts where IHHL coverage was lower in 2015. Financial savings from a household toilet exceed the financial costs to the household by 1.7 times on average, and 2.4 times for the poorest households.
Other steps involve identifying industrial polluters and setting up water quality monitoring stations:
To ensure proper inventorisation and inspection of point source pollution from industrial units, 1,109 Grossly Polluting Industries (GPIs) were identified and surveyed independently by 12 Technical Institutions. The compliance of the operational GPIs in 2017 as against 2018 improved from 39 per cent to 76 per cent. … 36 Real Time Water Quality Monitoring Stations (RTWQMS) are operational under Namami Gange Programme …
Those interested in minimum wage laws might want to keep track of India's ongoing policies in this area. Here's a flavor of the current situation:
[T]he present minimum wage system in India is extremely complex, with 1,915 minimum wages defined for various scheduled job categories for unskilled workers across various states. Despite its complex structure and proliferation of scheduled employments over time, the Minimum Wages Act, 1948 does not cover all wage workers. One in every three wage workers in India has fallen through the cracks and is not protected by the minimum wage law.
Again, there is much more in the report itself. For example, I had not known that India receives more remittances from its emigrants abroad than any other country.
\”Although it may surprise scientists, one can receive a patent in many jurisdictions without implementing an invention in practice and demonstrating that it works as expected. Instead, inventors applying for patents are allowed to include predicted experimental methods and results, known as prophetic examples, as long as the examples are not written in the past tense. … The U.S. Patent and Trademark Office (USPTO) formally recognized prophetic examples in 1981, but the practice is considerably older. … Preliminary research suggests that these examples are particularly prevalent in chemistry and biology; an estimated 17% of examples in U.S. patents in these fields are prophetic, and almost one-quarter of U.S. patents in these fields have at least one prophetic example—making prophetic examples a commonplace feature …\”
The key legal point here seems to lie in the verb tenses. Here's an example of a prophetic patent, with text from U.S. Patent No. 6,869,610. Notice that nothing in the patent is written in the past tense, only in the present tense. From this, you as the reader are meant to infer that the patent may be describing a hoped-for result, not an actual experimental result.
A 46-year-old woman presents with pain localized at the deltoid region due to an arthritic condition. The muscle is not in spasm, nor does it exhibit a hypertonic condition. The patient is treated by a bolus injection of between about 50 units and 200 units of intramuscular botulinum toxin type A. Within 1 to 7 days after neurotoxin administration the patient’s pain is substantially alleviated. The duration of significant pain alleviation is from about 2 to about 6 months.
Here\’s another, from U.S. Patent No. 7,291,497. Again, notice that the description in the present tense is supposed to be your heads-up that this actual granted patent is describing a hope, not an actual demonstrated result.
Each patch [for drawing and sampling 0.1 ml of blood for vancomycin] consists of two parts. … Micro-needles automatically draw small quantities of blood painlessly. A mechanical actuator inserts and withdraws the needle … mak[ing] several measurements after the patch is applied. … Needles are produced photolithographically in molds at [the Stanford Nanofabrication Facility]. … Blood flows through the micro-needles into the blood reservoir. …
A patent offers a social tradeoff: the intellectual property of an invention is protected for a time in exchange for immediate public disclosure of the invention. If what is being disclosed is a hope, rather than an actual invention, there is potential for confusion about what has actually been demonstrated and what is only hoped for, and the authors offer examples showing that such confusion has arisen.
This seems a relatively straightforward issue to address. Either rule out "prophetic patents," which apparently are much less common in jurisdictions outside the US, or, if prophetic patents are to be allowed, simply require that hypothetical examples be clearly labelled as such. Freilich and Ouellette interviewed a group of "professional patent prosecutors who write U.S. patents":
Interviewed prosecutors generally acknowledged the possibility that scientists reading prophetic examples might be unable to correctly interpret the verb tense rule, although they emphasized the legality of the practice and their duty to obtain the strongest possible patent for their clients. They also agreed, however, that an equally strong patent could be obtained with prophetic examples that were explicitly labeled as predictions that had not been carried out.
It seems clear that China\’s government and firms have been aggressive in their pursuit of intellectual property from firms in other countries. Sometimes this aggressiveness comes from government rules that if a foreign firm wished to sell or produce in China, it needs to have a Chinese firm as a partner and to share technology with that firm. There are also widespread allegations of technology theft. I once chatted with a group of California tech executives who did business in China, and they all said that if they take a computer or a phone to China, they only take one where the memory has been previously wiped clean.
Measuring the total amount of technology transfer to China is, by its nature, hard to do. But one approach is to look at royalty payments from China to outside firms. Ana Maria Santacreu and Makenzie Peake of the Federal Reserve Bank of St. Louis offer some data in "A Closer Look at China's Supposed Misappropriation of U.S. Intellectual Property" (Economic Synopses, 2019, No. 5).
The top panel shows China\’s payments for use of US intellectual property, which have risen sharply, The second panel shows that this growth in China\’s payments has been faster than the growth of China\’s GDP. Of course, the rise in payments doesn\’t mean that the appropriate level of payments is being made. Annual payments from China to the US of about $8 billion for intellectual property don\’t seem extraordinarily high. And given that China\’s economy has been shifting from reliance on low-wage labor to an economy based more in technology and services, the pattern of intellectual property payments rising faster than GDP is the pattern one would expect.
This figure, also from Lardy, shows countries ranked by how much they pay in intellectual property fees. This figure illustrates something rather odd: the two countries that pay the highest charges for intellectual property are the relatively small economies of Ireland and the Netherlands. As Lardy explains: "However, licensing fees in Ireland and the Netherlands are paid mostly by foreign holding companies that are legally domiciled in these countries for tax reasons. Since the subsidiaries of these holding companies using the licensed foreign technology are located in other jurisdictions worldwide, China probably ranks second globally in the magnitude of licensing fees paid for technology used within national borders." (A few years ago, I offered a description of the famous "Double Irish Dutch Sandwich" technique for multinational firms to reduce their tax liabilities.)
It may well be true that China should be paying more to other countries, including the US, for use of intellectual property. But the numbers in these tables suggest that, in purely economic terms, a negotiated solution would be affordable and therefore possible. If a set of trade negotiations led to China doubling its royalty payments to the world as a whole, that additional $27 billion or so would certainly not cripple China's $13.6 trillion economy (converted at the current US dollar exchange rate). On the other side, while this shift would be a nice windfall for some multinational companies around the world, it's not an amount likely to change the course of the $20.5 trillion US economy, either. Also, if the rules for making payments for foreign intellectual property were tightened up, it's likely that US firms already making such payments (as shown in the figure above) would see those payments rise as well.
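The magnitudes here are easy to sanity-check. A minimal Python sketch, using only the round figures cited in the paragraph above (nothing else is assumed):

```python
# Back-of-the-envelope check of the magnitudes discussed above.
# All three figures are the approximate values cited in the text.
china_gdp = 13.6e12          # China's GDP in US dollars, at market exchange rates
us_gdp = 20.5e12             # US GDP in US dollars
extra_royalties = 27e9       # added payments if China doubled its IP royalties

print(f"Added royalties as a share of China's GDP: {extra_royalties / china_gdp:.2%}")  # → 0.20%
print(f"Added royalties as a share of US GDP:      {extra_royalties / us_gdp:.2%}")     # → 0.13%
```

Either way you slice it, the disputed sums are on the order of a fifth of one percent of China's GDP, which is what makes a negotiated settlement look economically feasible.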