European Union: Functionalism and the Ratchet Effect

The euro and the European unification project have reached a difficult point. As Luigi Guiso, Paola Sapienza, and Luigi Zingales write: "Europe seems trapped in a catch-22: there is no desire to go backward, no interest in going forward, but it is economically unsustainable to stay still." They discuss these issues in "Monnet's Error?" presented as part of the Fall 2014 conference of the Brookings Papers on Economic Activity.

For non-EU readers, Monnet is Jean Monnet, a French diplomat who was one of the leaders of the movement toward European union. Monnet died in 1979, but back in the early 1950s, he was President of the European Coal and Steel Community, the first forerunner of what eventually grew into the modern European Union.

Monnet was a "functionalist," who saw each step toward cross-European communication–whether the step was actually a success or not!–as a step toward something eventually resembling a United States of Europe. Here's how Guiso, Sapienza, and Zingales describe the political dynamic:

The functionalist view, advanced by Jean Monnet, assumes that moving some policy functions to the supranational level will create pressure for more integration through both positive feedback loops (as voters realize the benefits of integrating some functions and will want to integrate more) and negative ones (as partial integration leads to inconsistencies that force further integration). In the functionalists' view integration is not the result of a democratic process, but the product of an enlightened elite's effort. In its desire to push forward the European agenda, this élite accept to make unsustainable integration steps, in the hope that future crises will force further integration. In the words of Padoa-Schioppa (2004, p. 14), a passionate Europe-supporter who espoused this theory, "[T]he road toward the single currency looks like a chain reaction in which each step resolved a preexisting contradiction and generated a new one that in turn required a further step forward." … In the words of Romano Prodi, one of these founding fathers, "I am sure the euro will oblige us to introduce a new set of economic policy instruments. It is politically impossible to propose that now. But some day there will be a crisis and new instruments will be created."

As Guiso, Sapienza, and Zingales note, the "functionalist approach implicitly assumes that there is no risk of a backlash." The past history of the EU project, along with the present experience of the euro, raises questions about whether Monnet's functionalist ratchet may be hitting its limits.

The authors point out that a number of the later entrants to the European Union joined even though only a minority supported the step in public opinion polls: for example, "United Kingdom (36%) and Denmark (46%) joined with only a minority supporting the EU. So did Greece (42%), Sweden (40%), and Austria (42%)." In addition, each significant move to greater European unification has made the European project less popular over time: "While EU membership has strong support in most of the EU-15, this support dropped every time the European project made a step forward and never recovered. Rightly or wrongly, the Eurozone crisis has contributed to further erode this support, albeit the drop appears more related to the terrible economic conditions and, thus, potentially more reversible. Today a majority of Europeans think that the EU is going in the wrong direction. They do not want it to go further, but overall they do not want it to go backward either, with all the countries (except Italy) having a pro Euro majority."

On one side, Monnet's functionalist ratchet effect has clearly worked. On the other side, it may be hitting some political and economic limits:

On the one hand, Monnet’s chain reaction theory seems to have worked. In spite of limited support in some countries, European integration has moved forward and has become almost irreversible. On the other hand, the strategy has worked so far at the cost of jeopardizing the future sustainability. The key word is “almost.” Europe and the euro are not irreversible, they are simply very costly to revert. As long as the political dissension is not large enough, Monnet’s chain reaction theory delivered the desired outcome, albeit in a very non-democratic way. The risk of a dramatic reversal, however, is real. The European project could probably survive a United Kingdom’s exit, but it would not survive the exit of a country from the euro, especially if that exit is not so costly as everybody anticipates. The risk is that a collapse of the euro might bring also the collapse of many European institutions, like the free movement of capital, people and goods. In other words, as all chain reactions, also Monnet’s one has an hidden cost: the risk of a meltdown.

For some earlier posts on the economics of the euro-zone, see "A Euro Narrative" (August 15, 2013) and "Will We Look Back on the Euro as a Mistake?" (February 28, 2014).

Coda: From a U.S. policy perspective, the most recent application of the functionalist view is probably the Patient Protection and Affordable Care Act that President Obama signed into law in 2010. Even those who supported the law four years ago are quick to acknowledge that it has all sorts of flaws and shortcomings, which of course is why the Obama administration has delayed or redefined so many substantial provisions over time. But to many supporters, the details of the law seem less relevant than the belief that a functionalist political ratchet is now in effect: that is, any successes of the law will be a reason to continue the law, and any shortcomings of the law will be an additional reason for federal intervention in the health care industry to address those shortcomings. For example, President Obama has spoken supportively of a single-payer health care system over time, but has also discussed the transitional difficulties that could arise from moving in that direction too rapidly. Many opponents of course dislike the substance of the law, but my guess is that many opponents would not be so concerned over, say, state-level health insurance exchanges aimed at providing insurance to the uninsured, if they didn't fear that these were part of a ratchet effect leading to future federal control over the health care industry. Similarly, my sense is that many of the arguments over the appropriate military stance for the United States in the Middle East are less about what immediate steps should be taken than about either the hope by some, or the fear by others, that those immediate steps will lead to a ratchet effect of additional military steps in the future.

Numbers, Fertility, and Life Expectancy for the Human Race

Each year the Department of Economic and Social Affairs at the United Nations publishes a "Concise Report on the World Population Situation." The 2014 report is a chance for a quick check on the numbers, fertility, and life expectancy of the human race. As a starting point, here are global population projections through 2050. The high-fertility estimate assumes women have on average half a child more, and the low-fertility estimate assumes women have on average half a child less.

As the figure shows, the rate of increase in population is slowing a bit over time. Here's a figure breaking down the average population growth rates for the world and by region. For the world, population growth rates are projected to fall from 2% in 1970 to 0.5% by 2050. For Europe, population growth rates are at zero percent now, and slated to fall lower. A number of other regions are headed that way as well, with population growth in Africa the clear outlier. The report notes: "[T]he annual increase in that population has been declining since the late 1960s. By 2050, it is expected that the world's population will be growing by 49 million people per year, more than half of whom will live in the least developed countries. Currently, of the 82 million people added to the world's population every year, 54 per cent are in Asia and 33 per cent in Africa. By 2050, however, more than 80 per cent of the global increase will take place in Africa, with only 12 per cent in Asia."
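One way to get an intuitive grip on those growth rates is to translate them into doubling times. As a back-of-the-envelope check (my own arithmetic, not a calculation from the UN report), a population growing at a constant annual rate r doubles in ln(2)/ln(1+r) years:

```python
import math

def doubling_time(rate):
    """Years for a population to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

print(round(doubling_time(0.02)))   # at the 2% rate of 1970: ~35 years
print(round(doubling_time(0.005)))  # at the 0.5% rate projected for 2050: ~139 years
```

In other words, a world growing at 0.5% per year takes roughly four times as long to double as one growing at 2%.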

These lower rates of population growth are reflected in lower fertility rates. Back in 1970, only Europe and Northern America had fewer than three births per woman. Now the world average is less than three births per woman, although Africa's rate remains higher. Still, Africa's birthrate per woman roughly matches where Latin America and Asia were in the late 1970s, and fertility rates can in some cases shift quite rapidly.

People are living longer. For the world as a whole, life expectancies are up from less than 60 years in 1970 to about 70 years now. The disparity in life expectancies is much smaller than the disparity in incomes. A person in a high-income country may have 10 or 20 times as much income as someone in a low-income country, but they don't live 10 or 20 times longer.

Indeed, much of the remaining disparity in life expectancy happens not in old age, but among children. For the world as a whole, about one child in every five died before reaching the age of five back in the 1950s. Now, only about one child in 20 dies before reaching the age of five. Even in Africa, the under-five mortality rate has dropped to what the world average was as recently as the early 1980s.

These figures are all telling us something about how the typical or common life experience of a member of the human race is changing over time. In many countries of the world, population may move to different locations, but overall numbers are growing little or not at all. Nuclear families are smaller. Women are spending less time pregnant, but when children are born, the parents are far less likely to face the tragedy of an early death (which of course could make such tragedies feel even more painful when they do occur). With longer life expectancies and smaller families, a giant family reunion in modern times is less likely to have a few older people and a swarm of children than the equivalent event a few decades ago. Instead, the attendees at that giant family reunion may be distributed fairly evenly across four generations. In these and other ways, our feelings about what seems usual and common in our families and communities are fundamentally different from those of previous generations of the human race.

What Was the Federal Reserve Thinking in Summer 2008?

The Great Recession didn't officially start until December 2007, but the warning signs came months earlier. Stephen G. Cecchetti explained the early timeline in the Winter 2009 issue of the Journal of Economic Perspectives, in "Crisis and Responses: The Federal Reserve in the Early Stages of the Financial Crisis."

A complete chronology of the recent financial crisis might start in February 2007, when several large subprime mortgage lenders started to report losses. It might then describe how spreads between risky and risk-free bonds—"credit spreads"—began widening in July 2007. But the definitive trigger came on August 9, 2007, when the large French bank BNP Paribas temporarily halted redemptions from three of its funds because it could not reliably value the assets backed by U.S. subprime mortgage debt held in those funds. When one major institution took such a step, financial firms worldwide were encouraged to question the value of a variety of collateral they had been accepting in their lending operations—and to worry about their own finances. The result was a sudden hoarding of cash and cessation of interbank lending, which in turn led to severe liquidity constraints on many financial institutions.

By August and September 2007, the Fed was already cutting interest rates. By December 2007, the Fed had started creating an alphabet soup of temporary facilities for making emergency loans as needed: Term Auction Facility (TAF), Term Securities Lending Facility (TSLF), Primary Dealer Credit Facility (PDCF), Commercial Paper Funding Facility (CPFF), Term Asset-Backed Securities Loan Facility (TALF). The unemployment rate was climbing, from 5.0% in December 2007 to 6.1% by August 2008.

All of which raises an obvious question: How or why was the Fed so surprised in September 2008, when the US financial system nearly collapsed? This was the month when Lehman Brothers famously went broke. But in the same month, Fannie Mae and Freddie Mac were placed into conservatorship; Bank of America bought out Merrill Lynch; the Fed authorized lending up to $85 billion to bail out the American International Group (AIG); the value of shares in the Reserve Primary Money Fund fell below $1, leading the Fed to announce a $50 billion program to guarantee investments in money market mutual funds; Citigroup agreed to acquire the otherwise bankrupt Wachovia; and the Troubled Asset Relief Program (TARP) went to Congress, where it would be approved in early October.

Stephen Golub, Ayse Kaya, and Michael Reay offer some thoughts in "What were they thinking? The Federal Reserve in the run-up to the 2008 financial crisis," a short piece written for VoxEU, which is a condensation of a longer article by the same title forthcoming next year in the Review of International Political Economy, but already available online at the journal's website for those with a personal or library subscription.

The authors discuss in some detail what was being said at the meetings of the Federal Open Market Committee (FOMC), since the minutes of those meetings are now publicly available. In the discussion, they offer some simple counts of how many times certain terms came up. For example, here's a figure showing how often the terms "inflation" and "growth" came up at various FOMC meetings. Notice that in summer 2007, inflation is coming up quite a lot; indeed, there is some talk at several of the meetings that the Fed might need to raise interest rates soon to head off a surge of inflation–which of course turned out to be a gross misreading of where the economy was headed.

Here's a figure showing how often and when the term "subprime" comes up. Notice a surge of mentions in 2007, as the problems in subprime markets first surfaced, but by summer 2008 the term was rarely coming up in these meetings.

Or as another example, consider CDO and CDS, which stand for "collateralized debt obligation," a kind of subprime mortgage-backed security that turned out to be especially risky, and "credit default swap," a way of trying to insure against the risk of the CDOs. Again, talk of these in the Fed Open Market Committee meetings spiked in late 2007 and the very start of 2008, but had died down considerably by summer 2008.
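The exercise behind these figures is just counting mentions of a term in each meeting's text. A minimal sketch of such a count (the sample sentence and function are my own illustration, not the authors' code, and it handles only single-word terms):

```python
import re
from collections import Counter

def term_counts(transcript, terms):
    """Count case-insensitive whole-word mentions of each term in a transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(words)
    return {t: counts[t] for t in terms}

minutes = ("Inflation risks remain elevated; growth is slowing "
           "but inflation expectations are anchored.")
print(term_counts(minutes, ["inflation", "growth", "subprime"]))
# → {'inflation': 2, 'growth': 1, 'subprime': 0}
```

Run meeting by meeting over the published minutes, counts like these are what produce the time-series figures the authors show.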
The Fed was clearly aware of many of the issues about the housing price bubble in its deliberations–there are plenty of individual examples of the subject coming up in meetings and speeches. But in summer 2008, the Fed saw little need to focus on these issues or to take action. Of the many reasons that can be put forward for this seeming neglect of a looming crisis, Golub, Kaya, and Reay offer two that seem plausible to them. 

First, the Fed policymaking was characterized by a dominant paradigm, which we call ‘post hoc interventionism’. Post hoc interventionism held that bubbles were difficult to spot correctly, and that if a bubble developed, it could effectively be controlled after it had burst. Further, preventative pricking of bubbles could lead to an unnecessary economic contraction. Thus, monetary policy, instead of aiming at bubbles, should focus on flexible inflation targeting. Post hoc interventionism explains in part the Fed’s de-emphasis on financial stability in favor of inflation targeting. Second, we argue that the Fed’s institutional structure, conventions, and routines were crucial in maintaining post hoc interventionism as well as in undermining the impact of contrary events and dissenting opinions, as suggested by the literature on institutional pathologies in sociology and political science …

I largely agree with their argument, but I would add that I think the discussion at the Fed was influenced by the experience of the dot-com boom and crash that preceded the previous recession. There had been calls for years through the mid- and late 1990s for the Fed to raise interest rates to limit the "irrational exuberance" of the dot-com boom, but the Fed (mostly) just let the boom continue, until it brought on the recession in 2001. That recession had been only six months long and not too deep. Thus, the thinking in summer 2008 was to expect a shallow recession, and to avoid bringing on a deeper one. Of course, this thinking neglected what later seemed an obvious point: the 2001 dot-com collapse was about stock market values, and while that pinched the economy, the 2007-2009 recession was about losses in the value of debt owed to banks and other financial institutions, which posed a much more fundamental economic risk.

Golub, Kaya, and Reay also emphasize that the Fed meetings tended to follow a certain format, where everyone around the table made a short presentation, typically just following up on the latest iterations of the information they had presented earlier. The meetings aimed for unanimity. The format of the meetings and the institution wasn't set up to encourage challenges from critical ideas. Indeed, certain groups within the Fed, like the Division of Banking Supervision and Regulation, were typically not represented at these meetings, just because they weren't part of the usual flow of information presented. The lesson here for all organizations is that if you keep looking in the same place all the time, you will inevitably miss the dangers that arise from any other direction.

Foreign-Controlled Domestic Corporations in the United States

U.S. companies turning into foreign-controlled U.S. companies are in the news: for example, Burger King and the Canadian coffee-and-doughnuts company Tim Horton's; Medtronic and the Irish firm Covidien; or, back in 2009, as part of the U.S. auto industry bailout, the auto-parts maker Delphi, which emerged from bankruptcy, with assistance from the U.S. government, as a British-based firm.

I won't try to sort through all the tax issues involved here, but Donald J. Marples and Jane G. Gravelle offer a useful starting point in "Corporate Expatriation, Inversions, and Mergers: Tax Issues," published on May 27, 2014 by the Congressional Research Service. Here's a sketch of the main issues.

A foreign-controlled domestic company in the U.S. still needs to pay U.S. corporate taxes on its U.S. operations at the U.S.-imposed rate, of course. But two other issues remain relevant. One is that the U.S. is the only major economy in the world that seeks to tax its companies on their global profits–not just their national profits–and to do so at the relatively high U.S. corporate tax rate (although this U.S. corporate tax is postponed until the funds are sent back to the U.S.). When a U.S. company turns into a foreign-controlled firm, it is taxed only on its U.S. operations, not on its global profits earned in other countries with lower corporate tax rates. The second issue is that companies often have ways–in how they set internal accounting prices within the company for provision of certain goods and services, and in how they set up their financing–of making profits appear in one country rather than another.
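The first issue can be sketched with a toy calculation (invented numbers and rates; it ignores deferral, the fine print of foreign tax credits, and every other real-world complication): under a worldwide system, foreign profits are effectively topped up to the home-country rate, while under a territorial system they are taxed only where they are earned:

```python
def worldwide_tax(profits, home, home_rate, foreign_rates):
    """Stylized worldwide system: home profits at the home rate; foreign
    profits at the higher of home and local rates (local tax credited)."""
    tax = profits[home] * home_rate
    for country, profit in profits.items():
        if country != home:
            tax += profit * max(home_rate, foreign_rates[country])
    return tax

def territorial_tax(profits, home, home_rate, foreign_rates):
    """Stylized territorial system: each country's profits taxed only locally."""
    tax = profits[home] * home_rate
    for country, profit in profits.items():
        if country != home:
            tax += profit * foreign_rates[country]
    return tax

profits = {"US": 100.0, "Ireland": 50.0}   # hypothetical profits, $ millions
rates = {"Ireland": 0.125}                 # hypothetical foreign rate
print(worldwide_tax(profits, "US", 0.35, rates))    # 35 + 17.5 = 52.5
print(territorial_tax(profits, "US", 0.35, rates))  # 35 + 6.25 = 41.25
```

In this toy example, moving the legal headquarters out of the worldwide-tax country cuts the bill on the same profits by more than $11 million, which is the flavor of the incentive behind inversions.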

How prevalent are foreign-controlled domestic corporations? James R. Hobbs provides some basic statistics in "Foreign-Controlled Domestic Corporations, 2011," in the Summer 2014 Statistics of Income Bulletin, published by the U.S. Internal Revenue Service. While these statistics give a sense of the issue, it's worth noting that Hobbs is collecting data on U.S. domestic companies that have more than 50% foreign ownership. There are also foreign companies that have a U.S. subsidiary–which isn't quite the same thing–and companies that have their legal headquarters in another country even though a majority of the business sales and the shareholders are in the U.S.

Overall, Hobbs documents that for the 2011 tax year, there were 76,793 foreign-controlled domestic corporations that "collectively reported $4.6 trillion of receipts and $11.7 trillion of assets" to the IRS. "While Federal income tax returns for FCDCs accounted for just 1.3 percent of all United States (U.S.) corporate returns, they made up 16.2 percent of total receipts and 14.4 percent of total assets." Here's the gradual rise over the last 10 years in the share of foreign-controlled domestic corporations, relative to all U.S. corporations, in receipts, assets, and as a share of tax returns.

Although there is some increase in recent years, a lot of the increase happened before the 21st century. This table is a slightly cut-down version of one appearing in the Hobbs paper (I cut some of the years to make the longer-term trends easier to discern). For example, total receipts of foreign-controlled domestic corporations were 2.06% of all U.S. corporations in 1971, 9.29% by 1990, and 16.19% in 2011. Assets show a similar pattern. Total assets of foreign-controlled domestic corporations were 1.27% of all U.S. corporations in 1971, 9.08% by 1990, and 14.43% in 2011.

In which countries do the foreign owners of these domestic U.S. firms live? Here's a figure. You'll notice that essentially none of the foreign owners are in true tax havens like the Cayman Islands or Bermuda. A law passed back in 2004 denied (or greatly restricted) any tax benefits from being based in a country where almost no actual sales or production happened. Thus, the current wave of foreign ownership is about being legally based in places like the UK, Ireland, Canada, and so on.

Foreign-controlled domestic corporations can have more than half the sales of all U.S. corporations in some industries. For example, such foreign-controlled firms account for 78.3% of all receipts of U.S. corporations in the "Breweries" industry–for example, Anheuser-Busch is owned by the Belgian-headquartered firm InBev. Such firms also account for 64.1% of all U.S. corporate receipts in the "Audio and video equipment manufacturing and reproducing magnetic and optical media" industry; 62.7% of receipts in the "Sound recording industries"; 59.6% of receipts in the "Engine, turbine, and power transmission equipment (manufacturing)" industry; 59.5% of receipts in the "Security brokerage" industry; 54.1% of all receipts in the "Rubber products (manufacturing)" industry; 53.5% of all receipts in the "Electrical and electronic goods (wholesale trade)" industry; 51.7% of receipts in the "Cement, concrete, lime and gypsum products (manufacturing)" industry; and 51.4% of the receipts in the "Motor vehicle and motor vehicle parts and supplies (wholesale trade)" industry.

The best way to tax global corporations is a sticky problem, and I'll come back to it on this blog from time to time. In a globalizing world economy, the issues are only going to become more salient with time. But here, I'll just note that if the U.S. followed the pattern of just about every other high-income country in the world and had a corporate tax that was territorial–that is, aimed only at corporate income earned in the U.S.–the reasons for U.S. corporations to put their headquarters in another country would be much diminished.

Stuck on Economics

I have lamented in the past that when your brain is stuck on economics, it can be hard to escape from your obsession. For example, I explain here what it's like to be driving around northern Montana wondering why the local population was obsessed with GNP, when everyone knows that the economy is now more commonly measured by GDP. Or here is how I ended up "Endorsing Association 3E: Ethics, Excellence, Economics"–and it tastes excellent on nibbles of sourdough bread. Or here is how the Economic Geyser spouts even in the middle of Yellowstone National Park.

Now McDonald's is messing with my ability to turn off the economics portion of my brain. A few years back they prominently advertised the CBO, which we all know stands for Congressional Budget Office, thus causing me to twitch every time I passed a billboard.

Of course, now it's the eco-nom-nom-nomics advertisements. Most of what I watch on television is live sports, and I'm just trying to sit and relax and watch my baseball or football game in peace, when suddenly my brain is jolted into awareness of economics. Please make it stop.

Of course, my children think the ads are hilarious, partly because they make Dad twitch. The children are also fans of the "lolcats" books, which feature cats with funny but ungrammatical captions (that badly need the work of an economics journal editor to fix them all right now. Sorry, lost my train of thought there for a moment.) Oh yes, the lolcats also say "nom nom nom" from time to time. So now the lolcats trigger thoughts of economics in my mind, too. Thanks a lot, McDonald's. I need another month of summer vacation.

19th Century Fencing and Information Technology

It's no surprise that U.S. investment is disproportionately focused on information technology. The broad category of information processing technology and equipment was 8% of all private nonresidential U.S. investment in 1950, but 30% of all investment by 2012. This raises the question: Is there a previous time in U.S. history when investment was so heavily focused in a single category?

David Autor offers a possible answer: investment in fences in the late 19th-century U.S. economy. The answer comes in a side comment in Autor's paper "Polanyi's Paradox and the Shape of Employment Growth," presented in August at the Jackson Hole conference sponsored by the Kansas City Federal Reserve. The paper is well worth reading for what it has to say about the links from automation to jobs and wages. Here, I'll offer some thoughts of my own about fencing and information technology. (Full disclosure: Autor is the editor of the Journal of Economic Perspectives, and thus my boss.)

Richard Hornbeck published "Barbed Wire: Property Rights and Agricultural Development" in a 2010 issue of the Quarterly Journal of Economics (vol. 125:2, pp. 767-810). He argues for the importance of fencing in understanding the development of the American West. Hornbeck writes (citations and footnotes omitted):

In 1872, fencing capital stock in the United States was roughly equal to the value of all livestock, the national debt, or the railroads; annual fencing repair costs were greater than combined annual tax receipts at all levels of government … Fencing became increasingly costly as settlement moved into areas with little woodland. High transportation costs made it impractical to supply low-woodland areas with enough timber for fencing. Although wood scarcity encouraged experimentation, hedge fences were costly to control and smooth iron fences could be broken by animals and were prone to rust. Writers in agricultural journals argued that the major barrier to settlement was the lack of timber for fencing: the Union Agriculturist and Western Prairie Farmer in 1841, the Prairie Farmer in 1848, and the Iowa Homestead in 1863 … Farmers mainly adjusted to fencing material shortages by settling in areas with nearby timber plots.

Then in 1874, Joseph Glidden patented "the most practical and ultimately successful design for barbed wire." The fencing business took off. Hornbeck quotes a story from a 1931 history: "Glidden himself could hardly realize the magnitude of his business. One day he received an order for a hundred tons; 'he was dumbfounded and telegraphed to the purchaser asking if his order should not read one hundred pounds.'"

Remember that fencing was already of central importance to the U.S. capital stock in 1872. Hornbeck presents estimates of how the total stock of fencing expanded over the decades. The pent-up demand was enormous, and cheaper steel was becoming widely available after the 1870s. From 1880 to 1900, for example, the total amount of fencing in the Prairie states went from 80 million rods (where a rod equals 16.5 feet, or about 5 meters) to 607 million rods; in the Southwest region, the rise was from 162 million rods in 1880 to 710 million rods by 1900. In the South Central states, the gains were comparatively smaller, only about a doubling, from 344 million rods in 1880 to 685 million rods in 1900. By comparing regions with and without fencing as the fencing arrived, Hornbeck argues:
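To put those rods into more familiar units (my own back-of-the-envelope conversion, not a calculation from Hornbeck's paper), a quick sketch:

```python
FEET_PER_ROD = 16.5
FEET_PER_MILE = 5280.0

def rods_to_miles(rods):
    """Convert a length in rods (1 rod = 16.5 feet) to miles."""
    return rods * FEET_PER_ROD / FEET_PER_MILE

# Prairie states, using Hornbeck's 1880 and 1900 fencing-stock estimates
print(rods_to_miles(80e6))   # 250000.0 miles of fencing in 1880
print(rods_to_miles(607e6))  # 1896875.0 miles by 1900
```

That is, on these estimates the Prairie states went from roughly a quarter-million miles of fencing to nearly 1.9 million miles in two decades.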

"Barbed wire may affect cattle production and county specialization through multiple channels, but these results suggest that barbed wire's effects are not simply the direct technological benefits that would be expected for an isolated farm. On the contrary, it appears that barbed wire affected agricultural development largely by reducing the threat of encroachment by others' cattle."

The juxtaposition of 19th-century fencing and 21st-century information technology offers an irresistible chance for loose speculations and comparisons. Fencing in the 19th century made property rights to U.S. land more valuable, especially in the Prairie and Southwest regions, because it protected farmers' crops. Of course, there was also considerable conflict and dislocation as the land was fenced, including conflicts between farmers and ranchers and between settlers and Native Americans. But for many Americans, the fencing of the American West felt like a clear-cut opening of productive opportunities.

The economic gains from modern information technology often seem to arrive in less clear form. True, for some workers the vast gains of electronic technology feel like a brand-new frontier. But many workers throughout the economy experience information technology as a continual mix of gains, costs, and disruptions. For example, email is great; and email eats up my day. Information technology can offer vast cost savings in office-work, greater efficiency in logistics and shipping, and faster development of new designs and technologies–all of which also disrupt companies and workers.

New information technology is far more mutable than fencing: it finds ways to slither into aspects of almost every job, including how that job is scheduled, organized, and paid for. Moreover, information technology is really a series of new technologies, as Moore's law drives the cost of computing lower and lower, creating waves of distinctively different growth opportunities. As Hornbeck points out, barbed-wire fencing did get substantially cheaper over time, with the cost falling by half from 1874 to 1880, by almost another two-thirds by 1890, and by nearly half again by 1897. But that impressive technological record is dwarfed by the productivity gains in information technology.

In short, 19th-century fencing may well have been an investment similar in relative size to modern information technology (although the economic statistics of the late 19th century don't allow anything resembling an apples-to-apples comparison). But at least to me, information technology seems considerably more disruptive, transformative, and ultimately beneficial for the economy.

Shaping the Direction of Health Care Innovation

My hope would be that the health care innovations of the future focus on two goals: how to attain improvements in health across the population, and how to provide the same or more effective health care at lower cost. My worry is that the direction of health care innovation is instead shaped by incentives–beliefs about what can be brought to market, what will be demanded by patients, and what will be received with favor by health care providers–that are not necessarily well-aligned with these goals. Steven Garber, Susan M. Gates, Emmett B. Keeler, Mary E. Vaiana, Andrew W. Mulcahy, Christopher Lau, and Arthur L. Kellermann tackle these issues in "Redirecting Innovation in U.S. Health Care: Options to Decrease Spending and Increase Value," a report from the RAND Corporation.

The authors point out that since the 1950s, growth in U.S. health care spending has typically been about 2% per year faster than growth in GDP, and that most economists trace this cost difference to the continual arrival of new and more expensive health care technologies. They write: "As we argue in this report, the U.S. health care system provides strong incentives for U.S. medical product innovators to invent high-cost products and provides relatively weak incentives to invent low-cost ones." The system also provides strong incentives to focus on drugs, devices, and health information technologies that will generate profits in high-income countries, not to find low-cost ways of addressing health problems in the rest of the world. Here are four of the examples they offer.

The cardiovascular "polypill" "refers to a multidrug combination pill intended to reduce blood pressure and cholesterol, known risk factors for the development of cardiovascular disease. The rationale is that combining four beneficial drugs in low doses in a single pill should produce an easy and affordable way to dramatically modify cardiovascular risk." But as the authors point out, even though a "polypill" only combines existing drugs, putting them in a single pill means that it would have to go through very expensive and lengthy health and safety testing. The result would be a product that might be cheaper and more effective, but given that people could still take a handful of the other pills, the "polypill" would almost certainly be a low-profit product. Moreover, several patents have been granted on aspects of a "polypill," so any company seeking to test such a pill would be likely to face a patent battle. No private company is likely to push this kind of innovation.

Better use of health information technology in patient records could save a lot of money in lower paperwork costs, and also provide considerable health benefits by informing health care providers about past and current health experiences–for example, helping to minimize risks of allergic reactions or bad drug interactions. But despite various pushes and shoves, the health care sector has not been a leader in adopting and using information technology. Indeed, in many cases it seems to have soaked up the time of health care providers on one hand, while providing a tool for increasing the amount billed to insurance companies on the other.

The implantable cardioverter-defibrillator (ICD) is "an implantable device consisting of a small pulse generator (roughly half the size of a smartphone) and one or more thin wire leads threaded through large blood vessels into the heart. ICDs are designed to sense a life-threatening cardiac arrhythmia and automatically provide a dose of direct current (DC) electricity to jolt the patient's heart back to normal." This technology works very well for some patients with heart disease, but not for others: specifically, it isn't recommended for certain groups, "such as patients who are undergoing bypass surgery or in the early period following a heart attack, the first three months following coronary revascularization, severe heart failure (New York Heart Association Class IV), and those with newly diagnosed heart failure." Thus, this is a case of a positive and useful innovation that is quite likely overused–at substantial cost.

Prostate-specific antigen (PSA) is a test for whether men have prostate cancer. The authors write: "Despite PSA screening's initial promise, multiple studies in the United States and in Europe have found that it does not reduce prostate cancer–specific mortality. Moreover, screening is associated with substantial harms caused by over-diagnosis and the complications that can occur from aggressive treatment. . . . Based on unfavorable findings, in 2012 the United States Preventive Services Task Force recommended against routine PSA screening for prostate cancer because the harms of screening outweigh the potential benefits. However, because federal law has not been changed, Medicare must still pay for the test's use, as well as for the subsequent biopsies, surgical procedures, nonsurgical treatments, and complications that these procedures can cause."

The RAND authors point out a number of features of the U.S. health care system that can push innovation away from the methods that would most improve health and decrease costs. For example, the existing incentives for innovation don't tend to reward methods that will lead to reduced spending. As they note, in a market full of insured third-party payers, there is "[l]imited price sensitivity on the part of consumers and payers." In addition, a bias arises from the "limited time horizon of providers when they decide which medical products to use for which patients: In many instances, the health benefits from using a drug, device, or HIT are not realized until years in the future, at which time the patient is likely to be covered by a different insurer, such as Medicare. When this is the case, only the later insurer will obtain the financial benefits associated with the (long-delayed) health benefits." More broadly, "[m]any [health care] provider systems are siloed. When this is the case, most decisionmakers consider only the costs and benefits for their parts of their organizations, and few take into account savings that accrue outside of their silos."

They also write of "treatment creep" and the "medical arms race."

"Undesirable treatment creep often occurs when a medical product that provides substantial benefits to some patients is used for other patients for whom the health benefits are much smaller or completely absent. Treatment creep is encouraged by FFS [fee-for-service] payment arrangements, and it is enabled by lack of knowledge about which patients would truly benefit from which products. Treatment creep often involves using products for indications not approved by the FDA. Such “off-label” use—which delivers good value in some instances—is widespread and difficult to control. Treatment creep may reward developers with additional profits for inventing products whose use can be expanded to groups of patients who will benefit little. …"

"The “medical arms race” refers to hospitals and other facilities competing for business by making themselves attractive to physicians, who may care more about using new high-tech services than they care about lower prices. … Robotic surgery for prostate cancer and proton beam radiation therapy provide striking examples of undesirable treatment creep: Although there is little or no evidence that they are superior to traditional treatments, these high-cost technologies have been successfully marketed directly to patients, hospitals, and physicians. High market rewards for such expensive technologies encourage inventors and investors to develop more of them—regardless of how much they improve health."

The authors have an eminently reasonable list of ways to alter the direction of health care innovation: basically, rethinking the sources of R&D funding, regulatory approval, and decision-making by third-party payers. For example, there could be public prize contests for certain innovations; some patents that seem to offer substantial health benefits could be bought out and placed in the public domain; and third-party payers (including Medicare and Medicaid) could place more emphasis on being willing to buy new technologies that cut costs. But I confess that as I look over their list of policy recommendations, I'm not sure they suffice to overcome the incentives currently built into the U.S. health care system.

And Here Come the Interest Payments

The federal government has been on a borrowing binge since the start of the Great Recession. I've argued that in the short run, the path of the budget deficits has been basically correct, because the deficits have helped to cushion the brutal economy of 2008-2009 and the sluggish recovery since then. But the long-term budget deficit picture is a problem. And even those of us who have largely supported the budget deficits of the last few years need to face the fact that the bills will eventually come due, and interest payments by the federal government are likely to head sharply upward in the next few years.

For some perspective, here's a figure from the August 2014 Congressional Budget Office report, "An Update to the Budget and Economic Outlook: 2014 to 2024." The spending categories are expressed as a share of GDP. Thus, over the next decade Social Security and Major Health Care programs rise, and a number of other categories fall a bit. But the biggest spending jump in any of these categories is for interest payments.

Interest payments jump for two reasons: the recent accumulation of federal debt and the expectation that interest rates are going to rise. "Between calendar years 2014 and 2019, CBO expects, the interest rate on 3-month Treasury bills will rise from 0.1 percent to 3.5 percent and the rate on 10-year Treasury notes will rise from 2.8 percent to 4.7 percent; both will remain at those levels through 2024." Of course, predictions don't always come true. But the CBO has already scaled down how much it expects interest rates to rise, and its projections of future deficits may well be on the optimistic side.

When looking at spending as a share of GDP, it's useful to remember that GDP is now around $17 trillion. This prediction shows a rise in federal interest payments from 1.3 percent of GDP in 2014 to 3.0 percent of GDP by 2024. Converted to actual dollars, this means that interest payments are projected to rise from $231 billion in 2014 to $799 billion in 2024–more than tripling in unadjusted dollars. By 2024, that's going to be $568 billion per year that isn't available for other spending or to finance tax cuts. It's going to bite hard.
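A quick check of that arithmetic, using only the dollar figures quoted above:

```python
# Sanity-checking the CBO interest-payment projection quoted in the text
# (figures in billions of nominal dollars).
payments_2014 = 231
payments_2024 = 799

increase = payments_2024 - payments_2014
growth = payments_2024 / payments_2014

print(f"projected annual increase by 2024: ${increase} billion")  # $568 billion
print(f"growth factor: {growth:.2f}x")                            # 3.46x, "more than tripling"
```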

For a historical comparison, a December 2010 CBO report looked at "Federal Debt and Interest Costs." The light blue line shows interest payments in nominal dollars, not adjusted for inflation or the size of the economy, and thus isn't useful for looking back several decades. The dark blue line helps to illustrate that interest payments are headed for their highest levels since we were paying off the government borrowing of the mid-1980s at relatively high interest rates into the mid-1990s.

When economic times are dire, as they were in the U.S. economy in 2008-2009, having the government borrow money makes sense. Given the lethargic pace of the growth that followed, and the underlying financial fragility of the economy, it made some sense not to make a dramatic push for lower deficits in the last few years. But the coming surge in interest payments is a warning signal that it's past time to start thinking about how to bring down budget deficits over the medium and long term.

Competition as a Form of Cooperation

Like most economists, I find myself from time to time confronting the complaint that economics is all about competition, when we should be emphasizing cooperation instead. One standard response to this concern focuses on making a distinction between the way people and firms actually behave and the ways in which moralists might prefer that they behave. But I often try a different answer, pointing out that the idea of cooperation is actually embedded in the meaning of the word "compete."

Check the etymology of "compete" in the Oxford English Dictionary. It tells you that the word derives from Latin, in which "com-" means "together" and "petĕre" has a variety of meanings, which include "to fall upon, assail, aim at, make for, try to reach, strive after, sue for, solicit, ask, seek." Based on this derivation, valid meanings of competition would be "to aim at together," "to try to reach together," and "to strive after together."
Competition can come in many forms. The kind of market competition that economists typically invoke is not about wolves competing in a pen full of sheep, nor is it competition between weeds to choke the flowerbed. The market-based competition envisioned in economics is disciplined by rules and reputations, and those who break the rules through fraud or theft or manipulation are clearly viewed as outside the shared process of competition. Market-based competition is closer in spirit to the interaction between Olympic figure-skaters, in which pressure from other competitors and from outside judges pushes individuals to strive to do the old and familiar better, while also seeking out new innovations. Sure, the figure-skaters are trying their hardest to win. But in a broader sense, their process of training and coming together under agreed-upon rules is a deeply cooperative and shared enterprise.

In fact, competition within a market context actually happens as a series of cooperative decisions, every time a buyer and seller come together in a mutually agreed and voluntarily made transaction. This idea of cooperation within the market is at the heart of what the philosopher Robert Nozick in his 1974 work Anarchy, State, and Utopia referred to as “capitalist acts between consenting adults.”

Attendance Rates for U.S. K-12 Teachers

My heart always sinks a bit when one of my children reports over dinner that their class had a substitute teacher that day. What usually follows is a discussion of the video they watched, or the worksheet they filled out, or how many other children in the class (never mine, of course) misbehaved. How prevalent is teacher absence from classes in the U.S.? The National Council on Teacher Quality collects some evidence in its June 2014 report "Roll Call: The Importance of Teacher Attendance."

The study collected data from 40 urban school districts across the United States for the 2012-13 school year. The definition of "absence" in this study was that a substitute teacher was used in the classroom. Thus, the overall totals mix together the times when a teacher was absent from the classroom for sickness, for other personal leave, and for some kind of professional development. As the authors of the study note: "Importantly, we looked only at short-term absences, which are absences of 1 to 10 consecutive days. Long-term absences (absences lasting more than 10 consecutive days) were not included to exclude leave taken for serious illness and maternity/paternity leave."

The average teacher across these 40 districts was absent 11 days during the school year. This amount of teacher absence matters to students. The NCTQ study cites research to make the point: "As common sense suggests, teacher attendance is directly related to student outcomes: the more teachers are absent, the more their students' achievement suffers. When teachers are absent 10 days, the decrease in student achievement is equivalent to the difference between having a brand new teacher and one with two or three years more experience."

Here's a figure showing average rates of absence across the 40 districts. Again, these include professional development activities that take teachers out of the classroom, but do not include long-term absences or parental leave. Indianapolis, the District of Columbia, Louisville, and Milwaukee lead the way with relatively few teacher absences, while Cleveland, Columbus (what's with Ohio teachers?), Nashville, and Portland have relatively high numbers of teacher absences.

Based on little more than my own gut reaction, an average of 11 teacher absences per year seems a little on the high side to me. But as with so many issues in education, the real problem doesn't lie with the averages, but with the tail end of the distribution. The study calculates that 28% of teachers are "frequently absent," meaning that they missed 11-17 days of class, and an additional 16% are "chronically absent," meaning that they missed 18 or more days of class.
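A back-of-envelope check (my illustration, not a calculation from the NCTQ report) shows how heavily those two tails weigh on the overall average: even crediting each group with only the minimum absences for its category accounts for more than half of the 11-day average.

```python
# Lower-bound contribution of the high-absence groups to the 11-day average,
# crediting each group with the minimum absences for its category.
frequent_share, frequent_min = 0.28, 11   # "frequently absent": 11-17 days
chronic_share, chronic_min = 0.16, 18     # "chronically absent": 18+ days

lower_bound = frequent_share * frequent_min + chronic_share * chronic_min
print(f"at least {lower_bound:.2f} of the 11 average absence days")  # 5.96
```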

Here's a city-by-city chart showing the breakdown of teacher absence by category.

I'm willing to cut some slack to teachers who happen to have a lousy personal year and are chronically absent. But I have a hard time believing that across the United States, 1/6 of all teachers–that is, about 16%–are simultaneously having the kind of a lousy year that forces them to miss 18 or more school days. (Again, remember that these numbers don't include long-term sickness or parental leave.) Those who can't find a way to show up for the job of classroom teacher, year after year, need to face some consequences.

A few years back in the Winter 2006 issue of the Journal of Economic Perspectives, Nazmul Chaudhury, Jeffrey Hammer, Michael Kremer, Karthik Muralidharan, and F. Halsey Rogers reported on "Missing in Action: Teacher and Health Worker Absence in Developing Countries." They wrote: "In this paper, we report results from surveys in which enumerators made unannounced visits to primary schools and health clinics in Bangladesh, Ecuador, India, Indonesia, Peru and Uganda and recorded whether they found teachers and health workers in the facilities. Averaging across the countries, about 19 percent of teachers and 35 percent of health workers were absent. The survey focused on whether providers were present in their facilities, but since many providers who were at their facilities were not working, even these figures may present too favorable a picture." The situation with U.S. teacher absence isn't directly comparable, of course. One suspects that the provision of substitute teachers is a lot better in Cleveland, Columbus, Nashville, and Portland than in Bangladesh, Ecuador, India, and Indonesia. Still, wherever it occurs, an institutional culture where many teachers don't show up needs to be confronted.