Stephen King: ""The Editor is Always Right … To Edit is Divine"

Those of us who edit for a living, and especially those of us whose workplace value-added comes from subtracting from the length of early drafts, will appreciate the comments of best-selling writer Stephen King in his On Writing: A Memoir of the Craft (quoting here from the third edition in 2000).

On the value of having a good editor (p. 13):

One rule of the road not directly stated elsewhere in this book: ‘The editor is always right.’ The corollary is that no writer will take all of his or her editor’s advice; for all have sinned and fallen short of editorial perfection. Put another way, to write is human, to edit is divine.

On the value of making an effort to hold down the length of your work (pp. 222-223):

In the spring of my senior year at Lisbon High—1966, this would’ve been—I got a scribbled comment that changed the way I rewrote my fiction once and forever. Jotted below the machine-generated signature of the editor was this mot: “Not bad, but PUFFY. You need to revise for length. Formula: 2nd Draft = 1st Draft – 10%. Good luck.” I wish I could remember who wrote that note … . Whoever it was did me a hell of a favor. I copied the formula out on a piece of shirt-cardboard and taped it to the wall beside my typewriter. Good things started to happen for me shortly after. There was no sudden golden flood of magazine sales, but the number of personal notes on the rejection slips went up fast … [E]very story and novel is collapsible to some degree. If you can’t get out ten per cent of it while retaining the basic story and flavor, you’re not trying very hard. The effect of judicious cutting is immediate and often amazing—literary Viagra. 

Of course, these themes don’t just apply to writing fiction. Back in 2001, Hal Varian wrote an essay on “What I’ve Learned about Writing Economics” in the Journal of Economic Methodology (8:1, 131-134). Varian wrote:

It is critical to have a sounding board. The blues singer Taj Mahal says, “if you cain’t get a wife, get a band.” My advice for authors is: “if you can’t get a co-author, get an editor.” You need to have someone with good taste who can read your writing and tell you what works and what doesn’t.

Tightening prose requires effort, but, as King writes, “the effect of judicious cutting is immediate and often amazing.” My sense is that for many academic journals, the dialog between authors and editors that determines whether a paper will be published often does not take readers much into account.

The Iron Law of Megaprojects vs. the Hiding Hand Principle

The next time you read about a “bridge to nowhere” or a giant infrastructure project that started and then stalled, you may wish to mutter to yourself the “Iron Law of Megaprojects: Over budget, over time, over and over again.” It’s a coinage of Bent Flyvbjerg. For an overview of his arguments, you can check this Cato Policy Report (January 2017), which in turn is based on this article from the Project Management Journal (April/May 2014). In the Cato report, Flyvbjerg writes:

Megaprojects are large-scale, complex ventures that typically cost a billion dollars or more, take many years to develop and build, involve multiple public and private stakeholders, are transformational, and impact millions of people. Examples of megaprojects are high-speed rail lines, airports, seaports, motorways, hospitals, national health or pension information and communications technology (ICT) systems, national broadband, the Olympics, large-scale signature architecture, dams, wind farms, offshore oil and gas extraction, aluminum smelters, the development of new aircraft, the largest container and cruise ships, high-energy particle accelerators, and the logistics systems used to run large supply-chain-based companies like Amazon and Maersk.

For the largest of this type of project, costs of $50-100 billion are now common, as for the California and UK high-speed rail projects, and costs above $100 billion are not uncommon, as for the International Space Station and the Joint Strike Fighter. If they were nations, projects of this size would rank among the world’s top 100 countries measured by gross domestic product. When projects of this size go wrong, whole companies and national economies suffer. …

If, as the evidence indicates, approximately one out of ten megaprojects is on budget, one out of ten is on schedule, and one out of ten delivers the promised benefits, then approximately one in a thousand projects is a success, defined as on target for all three. Even if the numbers were wrong by a factor of two, the success rate would still be dismal.
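
Flyvbjerg's one-in-a-thousand figure is simply the product of the three one-in-ten rates, which implicitly treats the three outcomes as independent. Here is a minimal sketch of that arithmetic, with the independence assumption made explicit (the assumption is implied by the quote, not stated in it):

```python
# A minimal sketch of Flyvbjerg's success arithmetic. Assumes the three
# outcomes (on budget, on schedule, delivers benefits) are independent,
# which the quote implies but does not state.

def joint_success_rate(on_budget, on_schedule, delivers_benefits):
    return on_budget * on_schedule * delivers_benefits

base = joint_success_rate(0.1, 0.1, 0.1)
print(f"Base case: {base:.3%}")  # 0.100%, i.e., about one project in a thousand

# The quote's robustness check: even if each rate were wrong by a factor of two,
doubled = joint_success_rate(0.2, 0.2, 0.2)
print(f"Doubled rates: {doubled:.3%}")  # 0.800%, still fewer than one in a hundred
```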

A common comeback to the Iron Law of Megaprojects is that if we pay attention to it, we will be so dissuaded by the costs and risks of megaprojects that nothing will ever get done. Albert O. Hirschman offered a sophisticated expression of this concern in his 1967 essay, “The Hiding Hand.” Hirschman argued that there is a rough balance in megaprojects: we tend to underestimate the costs and problems of megaprojects, but we also tend to underestimate the creativity with which people address the costs and problems that arise. (This is of course similar to the classic argument that without a dose of irrational “animal spirits” leading entrepreneurs to ignore the risks and difficulties of starting a business, there would be too little entrepreneurship.) Hirschman wrote:

We may be dealing here with a general principle of action. Creativity always comes as a surprise to us; therefore we can never count on it and we dare not believe in it until it has happened. In other words, we would not consciously engage upon tasks whose success clearly requires that creativity be forthcoming. Hence, the only way in which we can bring our creative resources fully into play is by misjudging the nature of the task, by presenting it to ourselves as more routine, simple, undemanding of genuine creativity than it will turn out to be.

Or, put differently: since we necessarily underestimate our creativity it is desirable that we underestimate to a roughly similar extent the difficulties of the tasks we face, so as to be tricked by these two offsetting underestimates into undertaking tasks which we can, but otherwise would not dare, tackle. The principle is important enough to deserve a name: since we are apparently on the trail here of some sort of Invisible or Hidden Hand that beneficially hides difficulties from us, I propose “The Hiding Hand.”

What this principle suggests is that, far from seeking out and taking up challenges, people are apt to take on and plunge into new tasks because of the erroneously presumed absence of a challenge–because the task looks easier and more manageable than it will turn out to be. As a result, the Hiding Hand can help accelerate the rate at which men engage successfully in problem-solving: they take up problems they think they can solve, find them more difficult than expected, but then, being stuck with them, attack willy-nilly the unsuspected difficulties–and sometimes even succeed.

As Hirschman acknowledges, this Hiding Hand Principle has its limits. It suggests that aspirational challenges of megaprojects need to be chosen carefully, so that they are realistic enough to be addressed with a dose of creative problem-solving, rather than ending up just as money-losing disasters.  
As one might expect, Flyvbjerg takes a different approach. He argues that a number of prominent megaprojects have been completed on time and on budget. When choosing which megaprojects to pursue, it is useful to avoid underestimating costs and overestimating benefits. He writes:

[M]any projects exist with sufficiently high benefits and low enough costs to justify building them. Even in the field of innovative and complex architecture, which is often singled out as particularly difficult, there is the Basque Abandoibarra urban regeneration project, including the Guggenheim Museum Bilbao, which is as complex, innovative, and iconic as any signature architecture, and was built on time and budget. Complex rail projects, too, like the Paris-Lyon high-speed rail line and the London Docklands light railway extension have been built to budget. The problem is not that projects worth undertaking do not exist or cannot be built on time and budget. The problem is that the dubious and widespread practices of underestimating costs and overestimating benefits used by many megaproject promoters, planners, and managers to promote their pet project create a distorted hall-of-mirrors in which it is extremely difficult to decide which projects deserve undertaking and which not.

Further, Flyvbjerg offers a reminder that even when a megaproject is eventually completed, and seems to be working well, the project may still have been uneconomic–and society may have been better off without it. He offers the Chunnel as an example:

As a case in point, consider the Channel Tunnel in more detail. This project was originally promoted as highly beneficial both economically and financially. In fact, costs went 80 percent over budget for construction, as mentioned above, and 140 percent for financing. Revenues have been half of those forecasted. The internal rate of return on the investment is negative, with a total loss to the British economy of $17.8 billion. Thus the Channel Tunnel detracts from the economy instead of adding to it. This is difficult to believe when you use the service, which is fast, convenient, and competitive with alternative modes of travel. But in fact each passenger is heavily subsidized. Not by the taxpayer this time, but by the many private investors who lost their money when Eurotunnel, the company that built and opened the channel, went insolvent and was financially restructured. This drives home an important point: A megaproject may well be a technological success but a financial failure, and many are. An economic and financial ex post evaluation of the Channel Tunnel, which systematically compared actual with forecasted costs and benefits, concluded that “the British economy would have been better off had the tunnel never been constructed.”

I once ran across a maxim about megaprojects which held that the original investors always lost money, and often the second wave of investors lost money, too. But once the physical project was completed and had emerged from multiple bankruptcies, it might then earn money for the most recent wave of owners and be broadly viewed as a “success.”

Alfred Marshall in 1885: "The Present Position of Economics"

In the last few years, I have evolved a habit for that time in August when I head off for vacation and other end-of-summer plans. I leave behind a series of scheduled daily posts about topics in economics, academia, and writing or editing that are usually based on historical essays and writings which caught my eye at some point. This year, I’ll start with some thoughts about the 1885 address given by Alfred Marshall, “The Present Position of Economics. An Inaugural Lecture Given in the Senate House at Cambridge, 24 February, 1885.”

The occasion for the lecture was that Henry Fawcett, the previous Professor of Political Economy at Cambridge University, had died, and Marshall was his successor in the position. The lecture is notable for many lovely turns of phrase. As one example, it’s the source of the comment that “the most reckless and treacherous of all theorists is he who professes to let facts and figures speak for themselves, who keeps in the back-ground the part he has played, perhaps unconsciously, in selecting and grouping them.”

But Marshall’s lecture also has an oddly contemporary feel. In 1885, socialism was on the rise in England, and economics was often criticized for assuming too much rationality and being unwilling or unable to address real social problems. Thus, Marshall’s lecture has several underlying currents. He wants to acknowledge that the socialists of his day are pointing to real problems, while arguing that their answers are not likely to be useful in addressing those problems. He wants to define what is at the core of the subject of economics. He wants to explain that economics is a useful mechanism for looking at social policies, but also that economics is limited in the answers it can give. He suggests that economics should be helpful where it can, and then should shut up.

Here’s Marshall on the socialists of his time:

The perfectibility of man had indeed been asserted by Owen and other socialists. But their views were based on little historic and scientific study; and were expressed with an extravagance that moved the contempt of the business-like economists of the age. The socialists did not attempt to understand the doctrines which they attacked; and there was no difficulty in showing that they had not rightly apprehended the nature and efficiency of the existing economic organization of society. It is therefore not a matter for wonder that the economists, flushed with their victories over a set of much more solid thinkers, did not trouble themselves to examine any of the doctrines of the socialists, and least of all their speculations as to human nature.

But the socialists were men who had felt intensely, and who knew something about the hidden springs of human action of which the economists took no account. Buried among their wild rhapsodies there were shrewd observations and pregnant suggestions from which philosophers and economists had much to learn. … Among the bad results of the narrowness of the work of English economists early in the century perhaps the most unfortunate was the opportunity which it gave to socialists to quote and misapply economic dogmas. …

Why should it be left for impetuous socialists and ignorant orators to cry aloud that none ought to be shut out by the want of material means from the opportunity of leading a life that is worthy of man? Of those who throw their whole souls into the discussion of this problem, the greater part put forth hastily conceived plans which would often increase the evils that they desire to remedy: because they have not had a training in thinking out hard and intricate problems, a training which is most rare in the world …

What is the core of economics that can offer some assistance in thinking through these hard and intricate problems? Marshall notes that when people think about the main intellectual contribution of Adam Smith, they often point to what we now refer to as the “invisible hand” idea–that when people act in their own self-interest–through hard work, innovation, shopping for desired goods and services–they will often benefit the social welfare. However, Marshall argues that in fact, Smith’s key insight was something quite different: “His work was to indicate the manner in which value measures human motive.” In other words, Smith started the process of drawing linkages between the ways that people act and the monetary incentives they face in terms of prices and wages–which is what makes human motives into something measurable. Marshall thought this idea was the true core of economic thinking:

But it is becoming clear that the true philosophic raison d’être of the [economic] theory is that it supplies a machinery to aid us in reasoning about those motives of human action which are measurable. In the world in which we live, money as representing general purchasing power, is so much the best measure of motives that no other can compete with it. …

When in this world we want to induce a man to do anything for us, we generally offer him money. It is true that we might appeal to his generosity or sense of duty; but this would be calling into action latent motives that are already in existence, rather than supplying new motives. If we have to supply a new motive we generally consider how much money will just make it worth his while to do it. Sometimes indeed the gratitude, or esteem, or honour which is held out as an inducement to the actions may appear as a new motive: particularly if it can be crystallised in some definite outward manifestation; as for instance in the right to make use of the letters C.B., or to wear a star or a garter. In this world such distinctions are comparatively rare and connected with but few transactions; and they would not serve as a measure of the ordinary motives that govern men in the acts of every day life.

Marshall emphasizes that just because economics is often focused on selfish and self-regarding behavior, in the sense of how behavior responds to monetary incentives, this is not a claim that selfish behavior is good or worthwhile. Instead, economics is a mechanism for thinking through how these monetary incentives will affect behavior. Marshall uses the term “organon,” which has fallen out of use, but is defined as “a means of reasoning or a system of logic.”

But though in wording our economic organon this idea of measurability should be always present, it should not, I think, be prominent. For practical purposes, and in order to keep the better our touch of real life, it will be best to go on treating it as chiefly concerned with those motives to which a money price can be directly or indirectly assigned. But motives that are selfish or self-regarding have no claim to more consideration than others except in so far as they may be more easily measurable and may more easily have a money-price assigned to them. The organon then must have reference to an analysis of the positive motives of desire for different goods, and of the negative motives of unwillingness to undergo the fatigues and sacrifices involved in producing them.

Marshall makes the subtle but ever-useful point that when it comes to social problems and public policy, economics has an important role to play in figuring out the direction and size of likely outcomes, and perhaps especially in pointing out that certain proposals are unlikely to have the effects that are promised. But he also emphasizes that economics doesn’t provide answers. Instead, he argues that economics should help where it can, and then economics should shut up. In particular, economics in Marshall’s view should not get in the way of or push out any other kinds of knowledge or common sense, and when economists give an overall opinion, they should always be clear to separate the actual economics from their own personal judgments.

In nearly every important social problem, one of these component parts has to do with those actions and sacrifices which commonly have a money price. This set of considerations is almost always one of the hardest, one of those in which untutored common sense is most likely to go wrong. But it is fortunately one of those which offer the firmest foot-hold to scientific treatment. The economic organon brings to bear the accumulated strength of much of the best genius of many generations of men. It shows how to analyse the motives at work, how to group them, how to trace their mutual relations. And thus by introducing systematic and organized methods of reasoning, it enables us to deal with this one side of the problem with greater force and certainty than almost any other side; although it would have probably been the most unmanageable side of all without such aid. Having done its work it retires and leaves to common sense the responsibility of the ultimate decision; not standing in the way of, or pushing out any other kind of knowledge, not hampering common sense in the use to which it is able to put any other available knowledge, nor in any way hindering; helping where it could help, and for the rest keeping silence.

Sometimes indeed the economist may give a practical decision as it were with the authority of his science, but such a decision is almost always merely negative or critical. It is to the effect that a proposed plan will not produce its desired result; just as an engineer might say with authority that a certain kind of canal lock is unsuitable for its purpose. But an economist as such cannot say which is the best course to pursue, any more than an engineer as such can decide which is the best route for the Panama canal. 

It is true that an economist, like any other citizen, may give his own judgment as to the best solution of various practical problems; just as an engineer may give his opinion as to the right method of financing the Panama canal. But in such cases the counsel bears only the authority of the individual who gives it: he does not speak with the voice of his science. And the economist has to be specially careful to make this clear; because there is much misunderstanding as to the scope of his science; and undue claims to authority on practical matters have often been put forward on its behalf.

Marshall also makes the point that when it comes to issues of social policy, what we think of as “customs” can often be altered over time by economic incentives.

To say that any arrangement is due to custom, is really little more than to say that we do not know its cause. I believe that very many economic customs could be traced, if we only had knowledge enough, to the slow equilibration of measurable motives: that even in such a country as India no custom retains its hold long after the relative positions of the motives of demand and supply have so changed, that the values, which would bring them into stable equilibrium, are far removed from those which the custom sanctions.

Where economic conditions change but little in one generation, the relative values of different things may keep very near what modern economists would call their normal position, and yet appear scarcely to move at all: just as, if one looks only for a short time at the hour hand of a watch, it seems not to move. But if the preponderance of economic motive is strong in one direction, the custom, even while retaining its form, will change its substance, and really give way.

Ultimately, Marshall is insisting that pointing to facts is never enough. Instead, one needs to go through the economic analysis of looking at the interrelationship of motives and the measurable values of prices and wages, along with the interactions that happen throughout an economy. This is the only way of coming to a correct interpretation of the cause and effect that underlie the facts, and it’s what economics is all about.

Experience in controversies such as these brings out the impossibility of learning anything from facts till they are examined and interpreted by reason; and teaches that the most reckless and treacherous of all theorists is he who professes to let facts and figures speak for themselves, who keeps in the back-ground the part he has played, perhaps unconsciously, in selecting and grouping them, and in suggesting the argument post hoc ergo propter hoc. In order to be able with any safety to interpret economic facts whether of the past or present time, we must know what kind of effects to expect from each cause and how these effects are likely to combine with one another. This is the knowledge which is got by the study of economic science.

Tradeoffs of "Free" Higher Education: Finland, South Korea, England, United States

All goods and services have both a cost of production and a price paid by the consumer. If government wishes to do so, it can raise revenues through taxing or borrowing to pay for the cost of production of certain goods and services, and thus allow the consumer to receive the good or service for “free.” Many high-income countries around the world subsidize part or most of the cost of higher education in this way.

A choice to make a good or service “free” to consumers has various tradeoffs. It makes the good or service easier to consume for those who could not otherwise afford it. It creates a need for higher government taxes or borrowing. A subtler effect is that changing who pays will also tend to change the nature and quality of the service. In the case of higher education students, if you (or your family) are paying tuition, the level of effort you give, your choice of courses, and the pressure you feel to finish a degree within a certain amount of time are all going to shift. In the case of providers of higher education, if attracting government funds is the pathway to survival, then the institutions will be inclined to follow the lead of government–rather than the desires of students–in choices about who and how many to admit, what to teach, how to staff courses, where to locate branch campuses, whether to expand into online education, what courses to offer, what counts for a passing grade and a graduation requirement, and more.

Some of these changes from switching to “free” higher education may be desirable, while others are less so. My point is that it would be blinkered to imagine that a switch to “free” higher education won’t also lead to an array of other changes. In a similar spirit, Jason D. Delisle and Preston Cooper offer a short essay on “International Higher Education Rankings: Why No Country’s Higher Education System Can Be the Best” (American Enterprise Institute, August 2019). As they note:

A government that pays for a greater share of each student’s college education can afford to send fewer of those students to college, resulting in lower overall degree attainment. Similarly, without the ability to raise revenue through tuition, colleges may have fewer resources to spend on each student’s education.

The theme of their exercise is to use OECD data on high-income countries to point out some tradeoffs between higher education attainment, total resources, and public subsidies. This seems to me like a useful start in thinking about the many tradeoffs that would be involved in “free” higher education.

Here’s the vivid example of Finland, which leads the way in the share of higher education spending coming from public sources, but thus can only afford to have a relatively small share of students attending higher education.

For instance, Finland ranks first on the subsidies metric: 96 percent of the Finnish higher education system’s funding comes from public sources. Domestic and European Union students can attend a public or government-dependent private institution free of charge, and most students also benefit from additional grants to help cover living expenses. But Finland pays the price for those heavy subsidies in other areas: Of the 35 nations, the country ranks 11th on the resources metric and just 25th on attainment.

One reason for the low attainment rate is that Finnish universities have finite resources and considerable autonomy to set admissions standards. Largely lacking the ability to raise revenue from tuition, it makes little financial sense for institutions to admit large numbers of students, and therefore they are highly selective regarding which students they let in. In 2016, just 33 percent of Finnish applicants to first-degree tertiary education were accepted, one of the lowest admission rates in Europe. Universities rely on comprehensive entrance examinations to make admissions decisions, and low acceptance rates create backlogs of applicants who often reapply in later years.

Another example is Korea, which has high attainment and low cost for higher education, but also low government support. 

Korea is perhaps the clearest example of a nation prioritizing one of the higher education goals (attainment) over the other two. Despite its top ranking on attainment, the nation ranks near the bottom on both resources and subsidies. The Korean government pays just 36 percent of the cost of higher education, leaving students and other private entities to pick up the rest of the bill. But the amount Korean universities themselves spend to educate students is also low; they spend just 29 percent of per capita GDP per student. That Korean universities spend relatively less per student means that tuition at public universities in Korea is also relatively moderate, despite the low subsidy rate. Korean students pay less in tuition than other high-attainment countries such as Canada, Japan, and the United Kingdom.

A moderately priced higher education system that relies little on government support, combined with high-quality secondary schools that consistently produce high scorers on international standardized tests, has led the vast majority of the nation’s youth to earn college degrees. However, the relative value of these degrees is well below other OECD nations, as the supply of college graduates has outstripped the availability of college-level jobs. Relative to the rich-world average, college-educated South Koreans receive a smaller wage premium over their peers with lesser degrees. As of 2017, the unemployment rate for college graduates exceeded that of people with less education.

The United Kingdom has been in the middle of a transition from a high-subsidy model for higher education to a model of low direct subsidies but large income-contingent student loans (that is, the repayment schedule for the loan depends on what you earn, and if you don’t earn enough to make all the payments, the loan is forgiven at some point).

In England, where the vast majority of the country’s population is concentrated, universities charge undergraduate students tuition of up to $11,856, making English universities some of the most expensive in the world. That is why the United Kingdom ranks last on subsidies in our analysis, with just 26 percent of higher education funding derived from public sources. However, Britain’s student loan program complicates this high-tuition, low-subsidy story. To enable students to afford these high fees, the government offers student loans that fully cover tuition. Ninety-five percent of eligible students borrow. Repayment is income contingent; new students pay back 9 percent of their income above a threshold for up to 30 years, after which remaining balances are forgiven. Despite the lengthy term, the program is heavily subsidized: The government estimates that just 45 percent of borrowers who take out loans after 2016 will repay them in full (a benefit not captured in the OECD data).

England’s high-resource, high-tuition model is relatively new. Until 1998, English universities were tuition-free, with the government directly appropriating the vast majority of higher education funding. According to an analysis of the system by Richard Murphy, Judith Scott-Clayton, and Gillian Wyness, rapid increases in demand for education during the late 20th century led to swelling numbers of students and therefore a precipitous decline in resources per head available to universities. In 1998, the center-left government of Tony Blair began allowing institutions to charge tuition to supplement their direct government funding. At the same time, the government expanded its student loan program and introduced income-contingent repayment. Over the next two decades, university enrollments and funding both surged, and today the United Kingdom ranks among the top nations for both resources and attainment. While the 1998 reform allowing institutions to charge tuition was a major development, England’s transition from a high-subsidy country to a low-subsidy one happened more gradually.
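
To see how that income-contingent design works mechanically, here is a minimal sketch of the repayment rule described above: 9 percent of income above a threshold, with any balance left after 30 years written off. All the numerical inputs (loan size, income path, threshold, interest rate) are illustrative placeholders, not figures from the report:

```python
# Sketch of an income-contingent student loan, per the rule quoted above:
# pay 9% of income above a threshold each year; forgive the rest after 30 years.
# Loan size, income, threshold, and interest rate below are hypothetical.

def annual_payment(income, threshold, rate=0.09):
    return max(0.0, rate * (income - threshold))

def run_loan(balance, incomes, threshold, interest=0.03, term=30):
    for year, income in enumerate(incomes[:term], start=1):
        balance *= 1 + interest                       # interest accrues
        balance = max(0.0, balance - annual_payment(income, threshold))
        if balance == 0.0:
            return year, 0.0                          # repaid in full
    return term, balance                              # remainder forgiven

years, forgiven = run_loan(30_000, [35_000] * 30, threshold=25_000)
print(years, round(forgiven))  # 30 30000: payments here never outpace interest,
                               # so the whole principal is eventually written off
```

With these placeholder numbers, the payments only ever cover the interest, which illustrates why the government can expect fewer than half of borrowers to repay in full even over a 30-year term.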

For the record, the United States currently ranks 11th of the 34 countries in higher ed attainment (measured as the share of 25-34 year-olds with “tertiary” education); 3rd of the 34 countries in total amount spent per higher education student (measured as per capita spending on higher ed vs. per capita GDP for the country, so this US ranking doesn’t just reflect higher US income levels); and 31st of the 34 countries in subsidies (measured as share of higher ed spending coming from public sources).

The authors offer some general patterns in this data (and they are careful to warn that these are correlations, not statements about underlying causes). Across the high-income countries, a higher share of higher ed funding coming from government is correlated with a lower level of total per student spending on higher education, and also a lower level of higher ed attainment for that country. The US experience generally fits these patterns: the US has lower subsidies for higher ed, but higher total spending, and ranks in the top third in higher ed attainment.

What the IMF Thinks about China’s Exchange Rate and Trade Balance

A couple of weeks ago, US Treasury Secretary Steven Mnuchin announced his finding that China was manipulating its currency to keep it unfairly low, and further announced that he would be taking the issue up with the International Monetary Fund. I offered some of my own views on this announcement when it happened. But what’s interesting here is not what I think, or even what Mnuchin thinks, but what the IMF thinks.

Fortuitously, the IMF just published its 2019 External Sector Report: The Dynamics of External Adjustment (July 2019). As the title implies, it’s about trade surpluses and deficits all over the world, not just the US and China. But it has some content that gives a sense of how the IMF is likely to respond to Mnuchin’s importuning. Here’s an overall comment:

The IMF’s multilateral approach suggests that about 35–45 percent of overall current account surpluses and deficits were deemed excessive in 2018. Higher-than-warranted balances remained centered in the euro area as a whole (driven by Germany and the Netherlands) and in other advanced economies (Korea, Singapore), while lower-than-warranted balances remained concentrated in the United Kingdom, the United States, and some emerging market economies (Argentina, Indonesia). China’s external position was assessed to be in line with fundamentals and desirable policies, as its current account surplus narrowed further … 

A couple of points are worth noting here. First, the IMF does not believe that all trade deficits and surpluses are “excessive,” only that about 35-45% are “excessive.” For economists, there are sensible reasons why some countries make net investments in other countries, or receive net investments from other countries, which means that some countries will have reasonable trade surpluses or deficits.

Second, the IMF is saying that the excessive trade surpluses are centered in the EU and in Korea and Singapore. The excessive trade deficits are in the United States, the UK, and some emerging markets. But China’s trade picture is not “excessive.” Instead, it’s in line with economic fundamentals.

Here’s the IMF list of countries with the biggest trade deficits and surpluses in 2018, as shown in the table adapted from the IMF report. The US has by far the biggest trade deficit in absolute terms, although relative to the size of the US economy it’s similar to or even smaller than that of many of the other countries with big trade deficits. Among countries with trade surpluses, China ranked 11th in absolute size in 2018, and as a share of China’s giant GDP, its trade surplus was by far the smallest of the top 15.

But what about China’s exchange rate in particular? Here’s a figure from the IMF showing China’s exchange rate since 2007, along with China’s trade surpluses over that time. China’s exchange rate has appreciated 36% since 2007, and its trade surpluses have been falling. In effect, Mnuchin’s complaint is about the slight upward bend at the far right-hand side of China’s exchange rate line.

So what is the cause of the large US trade deficits? The IMF points to a standard economic phenomenon that back in the 1980s used to be called the “twin deficits” problem. The US is running very large budget deficits, at a time when its unemployment rate has been 4% or less for more than a year. From a macroeconomic view, all that buying power has to go someplace, and with the US economy already near full employment, it ends up flowing by various indirect routes into buying more imports–and driving up the US trade deficit. As the IMF writes, “many countries with lower-than-warranted current account balances had a looser-than-desirable fiscal policy, compared to its medium-term desirable level (Argentina, South Africa, Spain, United Kingdom, United States) …”

Why has China’s current account surplus faded? One reason is related to the appreciation of China’s exchange rate, already described. In addition, the IMF report suggests that China may be experiencing “export market saturation,” given that China’s share of world exports more than tripled from 5% in 2001 to 16% by 2017. China has also had a modest decline in its still-high savings rate, which means higher consumption of all goods, including a greater willingness to import.

The US has legitimate trade issues with China. China’s treatment of intellectual property has often been cavalier at best, criminal at worst. But when it comes to the overall US trade deficit problem, it seems quite unlikely that the IMF will designate China as the culprit.

Taxing Sugar-Sweetened Beverages

Jurisdictions around the world have been implementing taxes on sugar-sweetened beverages. Here’s a map.

Hunt Allcott, Benjamin B. Lockwood, and Dmitry Taubinsky focus on studies of the eight US jurisdictions that have adopted such a tax, along with the broader literature on causes and costs of obesity, in “Should We Tax Sugar-Sweetened Beverages? An Overview of Theory and Evidence” in the just-released Summer 2019 Journal of Economic Perspectives. (Full disclosure: I’m the managing editor of JEP, and thus predisposed to believe that the articles are of high quality and widespread interest.) Here’s their table of the US jurisdictions that have imposed a tax on sugar-sweetened beverages.

Making the case for (or against) a tax on sugar-sweetened beverage requires addressing a number of questions. 
  • Why focus on sugar-sweetened beverages rather than on other sources of calories, or on candy and junk food?
  • How much does consumption of sugar-sweetened beverages lead to health or other harms like tooth decay?
  • How much of a tax on sugar-sweetened beverages is passed through from retailers to consumers?
  • How much does a tax on sugar-sweetened beverages lead to people just shopping in a nearby jurisdiction where they aren’t taxed?
  • How much does the share of a tax on sugar-sweetened beverages that is passed through to consumers affect the health harms–in particular for those consumers most at risk (like children who consume a high volume of such drinks)?
  • To what extent should the harms from sugar-sweetened beverages be counted as “externalities,” which are costs imposed upon others, and to what extent are they “internalities,” a term which refers to costs that the consumers of these products (perhaps because of imperfect information or lack of self-control) did not take into account that they were imposing on themselves?
  • How much money might a tax on sugar-sweetened beverages collect?
  • To what extent do the costs of such a tax, and also the health benefits of such a tax, fall more heavily on those with lower income levels?
  • Putting all these factors together, does a tax on sugar-sweetened beverages seem like a wise policy?
This may seem like a lot of complications for answering a question about a small-scale policy. But for those who want a serious and substantive answer, these kinds of questions can’t be avoided. I won’t try to summarize all the points of the paper, but the tone of the answers can be inferred from the bottom line:

[W]e estimate that the socially optimal sugar-sweetened beverage tax is between 1 and 2.1 cents per ounce. One can understand this as coming from the correction needed to offset the negative externality (about 0.8 cents per ounce) and internality (about 1 cent per ounce) … Together, these rough estimates suggest an optimal tax of about 1.5 cents per ounce. While there is considerable uncertainty in these optimal tax estimates, the optimal tax is not zero and may be higher than the levels in most US cities to date. However, for policymakers who are philosophically opposed to considering internalities in an optimal tax calculation, the optimal tax considering only externalities is around 0.4 cents per ounce. …

[W]e estimate that the social welfare benefits from implementing the optimal tax nationwide (relative to having zero tax) are between $2.4 billion and $6.8 billion per year. These gains would be substantially larger if the tax rate were to scale with sugar content. Of course, such calculations require strong assumptions and depend on uncertain empirical estimates … Furthermore, sugar-sweetened beverage taxes are not a panacea—they will not, by themselves, solve the obesity epidemic in America or elsewhere. But sin taxes have proven to be a feasible and effective policy instrument in other domains, and the evidence suggests that the benefits of sugar-sweetened beverage taxes likely exceed the costs.
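
To translate those per-ounce estimates into everyday magnitudes, here is a minimal sketch that applies the quoted rates to some common container sizes (the container sizes are my illustrative choices, not figures from the paper):

```python
# Apply the optimal-tax estimates quoted above (cents per ounce) to
# common container sizes. Container sizes are illustrative, not from the paper.

rates_cents_per_oz = {
    "externalities only": 0.4,
    "externalities + internalities": 1.5,
}
containers_oz = {"12 oz can": 12, "20 oz bottle": 20, "2-liter bottle": 67.6}

for label, rate in rates_cents_per_oz.items():
    for container, ounces in containers_oz.items():
        print(f"{label}: {container} -> {rate * ounces:.1f} cents")
```

At the combined estimate of 1.5 cents per ounce, for example, a 12-ounce can carries a tax of about 18 cents and a 2-liter bottle about a dollar.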

Limits for Corporate Bigness on Acquisitions, Patents, and Politics

The United States, like most places, has an ambivalent view of big business. When big firms are making high profits, we are concerned that they are out of control and exploitative. If big firms are performing poorly, with losses and layoffs, we argue over how or whether to rescue them. (Remember the auto company bailouts in 2009?) Might it be possible to strike a more lasting balance?

For example, here’s one possible combination of policies. Corporate bigness is fine by itself, and will not be prosecuted. However, the biggest firms will be sharply limited in their ability to acquire other companies. In addition, they may face limitations on their ability to participate in politics, as well as compulsory licensing of their key patents. In one famous case in 1956, the Bell System was required by antitrust authorities to license all of its existing patents to all US firms for free.

Something like this policy mix was implemented in the US economy in the middle decades of the 20th century. Naomi Lamoreaux writes about “The Problem of Bigness: From Standard Oil to Google,” in the Summer 2019 issue of the Journal of Economic Perspectives. She points out that many of the concerns about Standard Oil more than a century ago, and about Google and other big tech companies in the present, are not about higher prices being charged to consumers. Instead, the concerns are about tactics used to choke off potential competitors and about the political clout of bigness. Lamoreaux describes the pendulum swings of law and public opinion with regard to bigness, but here, I want to focus on the political balance that was struck with regard to bigness in the middle decades of the 20th century.

Lamoreaux points out that the large US firms that became well-established in the second and third decades of the 20th century remained successful for some time, but with no general trend toward greater concentration of industry. This was also a time when public attitudes toward corporate bigness were not very harsh. She writes:

Tracking the 100 largest firms in the US economy at various points between 1909 and 1958, Collins and Preston (1961) similarly found that the top firms gradually came to enjoy “an increasing amount of entrenchment of position by virtue of their size” (p. 1001). Over these same decades, moreover, there was remarkably little change in overall levels of economic concentration. Scholars have measured concentration in different ways and over different sets of years, and as a result, their estimates diverge somewhat. But … there was no clear trend toward increasing (or decreasing) concentration, either in the manufacturing sector or in the economy as a whole.

Intriguingly, even as large firms consolidated their positions, the public’s view of them became increasingly accepting. Galambos (1975) analyzed references to big business in a sample of periodicals read by various segments of the middle class over the period 1890–1940 and found that the antipathy of the late nineteenth century had greatly diminished by the interwar period. Auer and Petit (2018) conducted a similar analysis, searching the Proquest database of historical newspapers to find articles that included the word “monopoly.” Even though Auer and Petit were selecting on a word with generally negative connotations in American culture, they found that unfavorable mentions dropped from about 75 percent of the total in the late nineteenth century to a little over 50 percent starting in the 1920s.

One reason why people were more at peace with large companies during this time period is, as Lamoreaux describes, that Congress and state legislatures passed rules placing limits on corporate political involvement.

In addition to the new antitrust laws already discussed, Congress took a first step toward limiting business influence in politics by passing the Tillman Act in 1907, prohibiting corporations from contributing money to political campaigns for national office. The act was a reaction to a particular set of revelations—that large mutual insurance companies were using their members’ premiums to lobby for measures that weakened members’ protections (Winkler 2004)—but it built on pervasive fears that large-scale businesses were using their vast resources to shape the rules in their favor. By the end of 1908, 19 states had enacted corporate campaign-finance legislation of their own, and they had also begun to restrict lobbying expenditures by corporations (McCormick 1981, p. 266). Congress would write an expanded version of the Tillman law into the Federal Corrupt Practices Act in 1925 (Mager 1976).

Another reason why people of this time were more accepting of bigness is that the new antitrust laws gave them some reason to believe that bigness was less likely to be economically abusive. Lamoreaux writes:

The new antitrust regime seems to have been similarly reassuring, even though the 1920s are generally regarded as a period when antitrust enforcement was relatively lax (Cheffins 1989). The Federal Trade Commission got off to an inauspicious start in the early 1920s—most of the complaints it filed were dismissed by the courts—and in the late 1920s it was essentially captured by business interests (Davis 1962). By 1935, however, the agency was showing renewed vitality. The number of complaints it filed increased sharply, its dismissal rate fell to about one-quarter, and it was winning the vast majority of cases that proceeded to judicial review (Posner 1970, p. 382).

At the Department of Justice, there was no significant fall-off in the number of cases during the interwar period, with the exception of the early years of the Great Depression. Prosecutors seem to have targeted fewer large firms during the 1920s, but the department’s win rate increased from 64 percent in 1920–1924 to 93 percent in 1925–1929 (Posner 1970, pp. 368, 381; Cheffins 1989). Although most antitrust cases still involved horizontal combinations or conspiracies, by the 1930s about one-third of the cases filed by the Department of Justice were targeting abuses of market power, and the FTC’s proportion was closer to one-half (Posner 1970, pp. 396, 405, 408).

When the biggest firms recognized that anticompetitive practices, mergers, and political spending were going to come under enhanced scrutiny, it encouraged them to move toward sustaining their competitive position through funding research and development and obtaining patents. Of course, all intellectual property is built on a tradeoff: on one side, it’s an incentive for innovation, but on the other side, it locks in a competitive advantage for the innovator for a decade or two. By the late 1930s, antitrust authorities were addressing this issue by requiring that large firms provide compulsory licenses to their key patents–in some cases without receiving any payments. Here’s how Lamoreaux tells the story:

After World War I, large firms had stepped up both their investments in research and development and their efforts to accumulate patent portfolios. According to surveys conducted by the National Research Council, the number of new industrial research labs grew from about 37 per year between 1909 and 1918 to 74 per year between 1929 and 1936, and research employment in these labs increased by a factor of almost ten between 1921 and 1940 (Mowery and Rosenberg 1989, pp. 62–69). Large firms generated increasing numbers of patents internally, but they also bought them from outside inventors. …  The competitive advantages to large firms that broad portfolios of patents could bring, in terms of both what they could achieve technologically and how they could forestall competition, were becoming increasingly apparent—not least to the firms themselves (Reich 1985). As early as the 1920s, valuations on the securities markets began to mirror the size and quality of large firms’ patent portfolios (Nicholas 2007). 

Federal antitrust authorities began to pay attention as well, especially during the late 1930s … In 1938, a specially created commission, the Temporary National Economic Committee, launched a three-year investigation into the “Concentration of Economic Power.” The Temporary National Economic Committee began its hearings by examining large firms’ use of patents to achieve monopoly control, focusing in particular on the automobile and glass industries. In 1939, the committee held a second set of hearings to solicit ideas about how the patent system could be reformed (Hintz 2017). It also commissioned a book-length study by economist Walton Hamilton, Patents and Free Enterprise (Hamilton 1941). According to Hamilton, large firms had perverted the patent system. The system’s original purpose had been to encourage technological ingenuity, but now large firms were instead deploying patents as barriers to entry and using licensing agreements to divide up the market and limit competition among themselves (Hamilton 1941, pp. 158–63; John 2018).

The Temporary National Economic Committee’s patent investigation was headed by Thurman Arnold, assistant attorney general in charge of the Department of Justice’s antitrust division. Arnold’s views about the abuse of patents were similar to Hamilton’s, and at his insistence, the committee’s final report recommended compulsory licensing—requiring firms to license their technology at a fair royalty to anyone who wanted to use it. The recommendation went nowhere in Congress (Waller 2004), but Arnold nonetheless pursued it at Justice. As early as 1938, for example, he pushed Alcoa to license a set of its patents as part of an antitrust settlement, and the company agreed in a consent decree entered in 1942. By that time, Arnold had already secured three other compulsory licensing orders, and many more were to follow. Barnett (2018) compiled a complete list of such orders and their terms from 1938 to 1975. By the latter year, the total had risen to 136, one-third of which did not permit the firms to recoup any royalties at all for their intellectual property.

In the early and mid-twentieth century, concerns about excessive concentration of economic and political power in the hands of dominant firms helped constrain the ability of large firms to grow through mergers and acquisitions. During this period, if large firms wanted to grow, they often had little choice but to invest in internal R&D.

Antitrust policy not only encouraged large firms to invest in internal R&D, but also occasionally promoted technology diffusion. A leading example is the 1956 consent decree against the Bell System, one of the most significant antitrust rulings in U.S. history (Watzinger et al., 2017). The decree forced Bell to license all its existing patents royalty-free to all American firms. Thus, in 1956, 7,820 patents (or 1.3% of all unexpired U.S. patents) became freely available. Most of these patents covered technologies that had been developed by Bell Labs, the research subsidiary of the Bell System.

Compulsory licensing substantially increased follow-on innovation building on Bell patents. Using patent citations, Watzinger et al. (2017) estimate an average increase in follow-on innovation of 14 percent. This effect was highly heterogeneous. In the telecommunications sector, where Bell kept using exclusionary practices, there was no significant increase. However, outside of the telecommunications sector, follow-on innovation blossomed (a 21% increase). The increase in follow-on innovation was driven by young and small companies, and more than compensated for Bell’s reduced incentives to innovate. In an in-depth case study, Watzinger et al. demonstrate that the decree accelerated the diffusion of the transistor technology, one of the most important technologies of the twentieth century.

This view that the consent decree was decisive for U.S. post-World War II innovation, particularly by spurring the creation of whole industries, is shared by many observers. As Gordon Moore, the cofounder of Intel, notes: “[O]ne of the most important developments for the commercial semiconductor industry (…) was the antitrust suit filed against [the Bell System] in 1949 (…) which allowed the merchant semiconductor industry ‘to really get started’ in the United States (…) [T]here is a direct connection between the liberal licensing policies of Bell Labs and people such as Gordon Teal leaving Bell Labs to start Texas Instruments and William Shockley doing the same thing to start, with the support of Beckman Instruments, Shockley Semiconductor in Palo Alto. This (…) started the growth of Silicon Valley” (Wessner (2001, p. 86) as quoted in Watzinger et al. (2017)).

Scholars such as Peter Grindley and David Teece concur: “[AT&T’s licensing policy shaped by antitrust policy] remains one of the most unheralded contributions to economic development possibly far exceeding the Marshall plan in terms of wealth generation it established abroad and in the United States” (Grindley and Teece (1997) as quoted in Watzinger et al. (2017)).

Large companies will always face a temptation to extend their power in other ways: acquiring competitors, using “patent thickets” to block competition, or exerting political pressures. In one way or another, the goal of antitrust policy is to steer big firms away from these options, and in this way to keep big firms focused on providing consumers with desired goods and services and on continued innovation.

Uwe Reinhardt on High US Health Care Costs

Uwe Reinhardt had the remarkable skill that even when he was discussing a subject where you felt you already knew a lot, he offered the kinds of live-wire facts and metaphors, insights and opinions, that informed and challenged–and often entertained as well. Reinhardt died in November 2017. His final book has just been published: Priced Out: The Economic and Ethical Costs of American Health Care. For a flavor of the book and Reinhardt’s style, the Milken Institute Review has published an excerpt from the book in its Third Quarter 2019 issue.

Reinhardt tackles the question of explaining high US health care costs–that is, why per capita US spending on health care is roughly double the average for other high-income countries. Here’s a flavor of Reinhardt’s take on four possible explanations.

Possible explanation #1: US income levels are higher, and health care spending tends to rise as incomes rise. Reinhardt writes:

The figure shows that GDP per capita does indeed drive health spending systematically. But income alone leaves much unexplained. Even after adjusting the health spending data for GDP per capita (roughly, “ability to pay”), U.S. spending levels are much higher (about $2,200 higher in 2015) than would be predicted by the graph …

Possible explanation #2: US demography leads to higher health care spending. Reinhardt writes:

The U.S. population is, on average, much younger than the populations of most countries in the OECD, yet we spend much more per capita on health care. In fact, although the United States has one of the youngest populations among developed nations, we have (as noted earlier) the world’s highest health spending per capita. Japan, in contrast, has the oldest population, but among the lowest health spending levels.


Possible explanation #3: Health care, or at least certain kinds of health care, costs more in the US. Reinhardt writes:

With the exception of a few high-tech procedures, Americans actually consume less health care service in real terms (visits with physicians, hospital admissions and hospital days per admission, medications and so on) than do Europeans. For better or for worse — better for the vendors of health care and worse for consumers — prices for virtually every health care product and service in the United States tend to be twice as high as those for comparable products or services in other countries.

Here are some examples of that pattern, using data from an International Federation of Health Plans report.

Possible explanation #4: High administrative costs in the US health insurance system. Here’s Reinhardt:

According to a recent publication by America’s Health Insurance Plans, private health insurers on average take a haircut of about 17.8 percent off the insurance premiums paid by employers or individuals for “operating costs,” which means marketing and administration. Another 2.7 percent is profits. That haircut was as high as 45 percent pre-Obamacare for smaller insurers selling policies in the (non-group) market for individually purchased insurance. Under Obamacare the portion of the premium going to marketing, administration and profits was constrained to 20 percent for small insurers and to 15 percent for large insurers. … It’s worth noting that some analysts of administrative costs of Blue Cross Blue Shield plans and other private insurers report that the AHIP number is too high. They currently estimate the range to be between about 9 percent for the Blues and between 10 and 11 percent for other private insurers. …

Drugs that cost $17 to produce end up costing patients or purchasers of health insurance $100. Of a total $100 in consumer spending, health insurers pay pharmacy benefit managers $81, keeping $19 for themselves, of which $3 is profit. The rest goes for marketing and administration. …

I can think of no legislation ever to emerge from Congress that addressed the magnitude of this administrative overhead. It is as if Congress just does not care what health spending actually buys. On the contrary, every health reform emerging from Congress vastly complicates the system further and brings forth new fleets of non-clinical consultants who make a good living teaching clinicians and hospitals how to cope with the new onslaught. All of their income becomes the providers’ expense and thus ends up in the patient’s bill.
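
To keep the dollar flows in Reinhardt’s examples straight, here is a quick arithmetic sketch in Python. All of the figures come directly from the passages quoted above; nothing else is assumed.

    # Premium breakdown from the AHIP figures quoted above.
    premium = 100.00                       # dollars of premium paid in
    operating = 0.178 * premium            # marketing and administration haircut
    profit = 0.027 * premium               # insurer profit
    care = premium - operating - profit    # what remains to pay for care
    print(f"Of $100 in premiums: ${operating:.2f} operations, "
          f"${profit:.2f} profit, ${care:.2f} left for care")

    # The drug-spending example: a drug costing $17 to produce
    # ends up costing purchasers $100.
    consumer_spend = 100
    to_pbm = 81                              # paid to pharmacy benefit managers
    insurer_keep = consumer_spend - to_pbm   # $19 kept by insurers ...
    insurer_profit = 3                       # ... of which $3 is profit
    production_cost = 17
    markup = consumer_spend - production_cost
    print(f"Insurers keep ${insurer_keep} (${insurer_profit} profit); "
          f"markup over production cost: ${markup} of every $100 spent")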

And here’s one more figure from Reinhardt, showing the growth in health care administrators vs. the growth in actual health care providers over time.

Of course, this post is only an appetizer. There’s much more in the MIR article, and much, much more in the book itself.

Paying Kidney Donors: Covering Expenses?

There’s a long-standing controversy over whether those who donate a kidney should be able to be paid for doing so. Whatever one’s view on whether the seller of a kidney should receive a positive price, it’s worth considering that under traditional rules, the donor of a kidney in effect receives a negative price. That is, the donor of a kidney faces a variety of costs–both explicit (travel, hotel) and implicit (time off from work, physical discomfort). Even many of those who feel that kidney donors should not be paid for the act of donating itself seem willing to consider the possibility that donors should be reimbursed for expenses.

The Trump administration just signed an executive order that would expand the ability of the National Living Donor Assistance Center to compensate kidney donors: not just reimbursing travel expenses, but also lost wages, child care, and other expenses.

How much difference would such a policy of expanding compensation for expenses make? Somewhat coincidentally, Frank McCormick, Philip J. Held, Glenn M. Chertow, Thomas G. Peters and John P. Roberts were just in the process of publishing “Removing Disincentives to Kidney Donation: A Quantitative Analysis,” which appeared in the Journal of the American Society of Nephrology (July 2019, pp. 1349-1357). Here’s their list of seven disincentives to donating a kidney, with an estimated monetary cost for each:

They readily admit that there is a wide range of uncertainty around these kinds of estimates. Given that caveat, they argue:

We show that the total monetary value of the seven disincentives facing a typical living kidney donor is about $38,000. Removing all disincentives would increase kidney donations by roughly 12,500 per year, which would cut the adult waiting list for transplant kidneys in half in about 4 years. This would require an initial government outlay of only about $0.5 billion per year, but would ultimately result in net taxpayer savings of about $1.3 billion per year. The value to society of the government removing the disincentives would be about $14 billion per year, reflecting the great value of the additional donated kidneys to recipients and the savings from these recipients no longer needing expensive dialysis therapy.
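
Their headline numbers hang together arithmetically, as a minimal sketch shows. Only figures from the passage above are used here.

    # Consistency check on the figures quoted above.
    disincentive_per_donor = 38_000      # total monetary value of disincentives
    added_donations_per_year = 12_500    # estimated increase in donations

    outlay = disincentive_per_donor * added_donations_per_year
    print(f"Implied annual outlay: ${outlay / 1e9:.2f} billion")  # about $0.48 billion

    # The paper's bottom line: roughly $0.5 billion of annual outlays versus
    # about $1.3 billion in eventual taxpayer savings and about $14 billion
    # in total social value per year.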

To put it in the bluntest possible terms, about 40,000 people die each year for lack of a kidney transplant. So explicit consideration of the disincentives to donate, and how to address them, could save thousands of lives.

But it’s also true that while the new Trump administration policy would expand incentives for kidney donations by covering lost income and dependent care, it wouldn’t include all the items on the list above, like compensation for the pain and discomfort of having a kidney removed.

This line between “compensation for expenses” and actual payment for donating a kidney seems murky to me. Is a kidney donor allowed to travel to the operation in first class, to stay in a fancy hotel, or to be compensated for eating at some nice restaurants? Or does “compensation for expenses” mean that only the least expensive transportation, hotels, and meals are allowed? What quality and cost will be allowed in potential compensation for dependent care?
If we are going to compensate people for lost time at work, does this mean that kidney donors at higher-wage jobs get more compensation than those at lower-wage jobs? Does it mean that kidney donors not currently in the workforce get zero compensation for lost time at work? If there is one preset level of compensation for time lost at work, it will inevitably be too low for some jobs and too high for others.

Once you have accepted that the price of donating a kidney should not be negative, because prices for donating kidneys have incentive effects, then you don’t want to turn around and demand that the expenses of kidney donors be treated as cheaply as conceivably possible. But what if some kidney donors, rather than receiving compensation for “medium-priced” or “high-priced” expenses, would prefer lower-priced compensation plus some cash that they could use for, say, paying rent for a couple of months or buying a used car? Telling potential kidney donors that they can get generous compensation for any money they spend out-of-pocket, but zero compensation for pain or health risks from donating, applies a constricted notion of “costs” and incentives.

And ultimately, while we are picking through what counts as an acceptable expense for kidney donors, it’s worth a mention that none of the other parties in the health care system–doctors, nurses, anesthesiologists, those in the test laboratories, hospitals, and so on–are being told they need to make any free contributions to the kidney transplant process.

For previous posts with relevance to the intersection of economics, incentives, and kidney transplant issues, see:

Innovation Policy: Federal Support for R&D Falls as Its Importance Rises

One of those things that “everyone knows” is that continued technological progress is vital to the continued success of the US economy, not just in terms of GDP growth (although that matters) but also for major social issues like providing quality health care and education in a cost-effective manner, addressing environmental dangers including climate change, and in other ways. Another thing that “everyone knows” is that research and development spending is an important part of generating new technology. But total US spending on R&D as a share of GDP has been nearly flat for decades, and government spending on R&D as a share of GDP has declined over time.

Here’s a figure on funding sources for US R&D from the Science and Engineering Indicators 2018. The top line shows the rise in R&D spending in the 1960s (much of it associated with the space program and the military), a fall in the 1970s, and then how R&D spending has bobbed around 2.5% of GDP since then. The dark blue line shows the rise in business-funded R&D, while the light blue line shows the fall in government funding for R&D.

One underlying issue is that business-funded R&D is more likely to be focused on, well, the reasonably short-term needs of the business, while government R&D can take a broader and longer-term perspective.

One signal of this dynamic is that the share of patents that rely on government funding is on the rise. L. Fleming, H. Greene, G. Li, M. Marx, and D. Yao describe the pattern in “Research Funding: Government-funded research increasingly fuels innovation” (Science, June 21, 2019, pp. 1139-1141).

Of course, the relationship between R&D spending and broader technological progress is complicated. Translating research discoveries into goods and services isn’t a simple or mechanical process. Other important elements include the economic and regulatory environment for entrepreneurs, the diffusion of new technologies across firms, and the quantity of scientists and researchers. For an overview of the broader issues, Nicholas Bloom, John Van Reenen, and Heidi Williams offer “A Toolkit of Policies to Promote Innovation” in the Summer 2018 issue of the Journal of Economic Perspectives. They explain the case for government support of innovation:

Knowledge spillovers are the central market failure on which economists have focused when justifying government intervention in innovation. If one firm creates something truly innovative, this knowledge may spill over to other firms that either copy or learn from the original research—without having to pay the full research and development costs. Ideas are promiscuous; even with a well-designed intellectual property system, the benefits of new ideas are difficult to monetize in full. There is a long academic literature documenting the existence of these positive spillovers from innovations. …

As a whole, this literature on spillovers has consistently estimated that social returns to R&D are much higher than private returns, which provides a justification for government-supported innovation policy. In the United States, for example, recent estimates in Lucking, Bloom, and Van Reenen (2018) used three decades of firm-level data and a production function–based approach to document evidence of substantial positive net knowledge spillovers. The authors estimate that social returns are about 60 percent, compared with private returns of around 15 percent, suggesting the case for a substantial increase in public research subsidies.
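
One way to read those estimates: the gap between the social and the private return is the spillover that the investing firm cannot capture. A minimal sketch, using only the two percentages quoted above:

    # Reading the Lucking, Bloom, and Van Reenen estimates quoted above.
    social_return = 0.60   # total return to society per dollar of R&D
    private_return = 0.15  # return captured by the investing firm

    spillover = social_return - private_return
    share_external = spillover / social_return
    print(f"Spillover: {spillover:.0%} per R&D dollar "
          f"({share_external:.0%} of the social return accrues to others)")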

Along with pointing out some advantages of government-funded R&D, Bloom, Van Reenen, and Williams note that when it comes to tax subsidies for corporate R&D, the US lags well behind. They write:

The OECD (2018) reports that 33 of the 42 countries it examined provide some material level of tax generosity toward research and development. The US federal R&D tax credit is in the bottom one-third of OECD nations in terms of generosity, reducing the cost of US R&D spending by about 5 percent. … In countries with the most generous provisions, such as France, Portugal, and Chile, the corresponding tax incentives reduce the cost of R&D by more than 30 percent. Do research and development tax credits actually work to raise R&D spending? The answer seems to be “yes.”
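
To see what those percentages mean in dollars, here is a minimal sketch. The $10 million R&D budget is a made-up figure for illustration; the 5 percent and 30 percent cost reductions come from the passage above.

    # Illustrative after-credit cost of R&D under different tax-credit regimes.
    # The $10 million budget is hypothetical, chosen only for illustration.
    rd_budget = 10_000_000

    us_cost = rd_budget * (1 - 0.05)        # US credit: about a 5% cost reduction
    generous_cost = rd_budget * (1 - 0.30)  # France/Portugal/Chile: over 30%

    print(f"Effective cost of R&D in the US:  ${us_cost:,.0f}")
    print(f"Effective cost, generous regime:  ${generous_cost:,.0f}")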

Here’s their toolkit of pro-innovation policies, with their own estimates of effectiveness along various dimensions.