Franchise the National Parks?

The idea of franchising the national parks raises images of Mickey Mouse ears on top of Half Dome at Yosemite, or the McDonald’s “golden arches” as a scenic backdrop to the Old Faithful geyser in Yellowstone. But that’s not what Holly Fretwell has in mind in her essay, “The NPS Franchise: A Better Way to Protect Our Heritage,” which appears in the George Wright Forum (2015, vol. 32, no. 2, pp. 114-122). Instead, she is suggesting that a number of national parks might be better run as independent, nongovernment, conservation-minded operations with greater control over their own revenues and spending. In such an arrangement, the role of the National Park Service would be to evaluate the financial and environmental plans of possible franchisees, provide brand-name recognition and a degree of logistical support, and then make sure the franchisees’ announced plans were actually followed.

To understand the impetus behind Fretwell’s proposal, you first need to face the hard truth that the national parks have severe financial problems, which manifest themselves both in decaying infrastructure for human visitors and in a diminished ability to protect the parks themselves (for example, sewer systems in parks affect both human visitors and environmental protection). Politicians are often happy to set aside more parkland, but spending the money to manage the land is a harder sell. If you accept as a basic constraint that federal spending on park maintenance isn’t going to rise, or at least not rise sufficiently, then you are driven to consider other possibilities. Here’s Fretwell on the current problems of the National Park Service (footnotes omitted):

As it enters its second century, NPS faces a host of challenges. In 2014, the budget of the National Park Service was $2.6 billion. The maintenance backlog is four times that, at $11.5 billion and growing. According to the National Parks Conservation Association (NPCA), about one-third of the shortfall is for “critical systems” that are essential for park function. Without upgrades, many park water and sewer systems are at risk. A water pipe failure in Grand Canyon National Park during the spring of 2014 cost $25,000 for a quick fix to keep water flowing, but is estimated to cost about $200 million to replace. Yellowstone also has antiquated water and wastewater facilities where past failures have caused environmental degradation. Sewer system upgrades in Yosemite and Grand Teton are necessary to prevent raw sewage from spilling into nearby rivers. Deteriorating electrical cables have caused failures in Gateway National Recreation Area and in Glacier’s historic hotels. Roads are crumbling in many parks. They are patched rather than restored for longevity. Only 10% of park roads are considered to be in better than “fair” condition. At least 28 bridges in the system are “structurally deficient,” and more than one-third of park trails are in “poor” or “seriously deficient” condition.

Cultural heritage resources that the parks are set aside to protect are also at risk. Only 40% of park historic structures are considered to be in “good” or better condition and they need continual maintenance to remain that way. Exterior walls are weakening on historic structures such as Perry’s Victory and International Peace Memorial in Ohio, the Vanderbilt Mansion in New York, and the cellhouse in Golden Gate National Recreation Area in California. Weather, unmonitored visitation, and leaky roofs are degrading cultural artifacts. Many of the artifacts and museum collections have never been catalogued. … 

Even though the NPS maintenance backlog is four times the annual discretionary budget, rather than focus funding on maintaining what NPS already has, the system continues to grow. … The continual expansion of park units and acreage without corresponding funding is what former NPS Director James Ridenour called “thinning the blood.” …  The national park system has grown from 25.7 million acres and about 200 units in 1960 to 84.5 million acres and 407 units in 2015. Seven new parks were added under the 2014 National Defense Authorization Act and nine parks were expanded. The growth came with no additional funding for operations or maintenance—more “thinning the blood.”

I’ve had great family vacations in a number of national parks since I was a child. They were inexpensive to visit then, and they remain cheap. Indeed, there’s sometimes an odd moment, when visiting a national park, when you realize that what you just spent at the gift shop, or for a family meal, considerably exceeds what you spent to enter the park. Fretwell writes:

Numerous parks have increased user and entrance fees for the 2015 summer season after seeking public input and Washington approval. Even with the higher fees, a visit to destination parks like Grand Canyon and Yellowstone costs $30 for a seven-day vehicle permit, or just over $1 per person per day for a family of four. … The current low fees to enter units of the national park system typically make up a small portion of the total park visit expense. It has been estimated that the entry fee is less than 2% of park visit costs for visitors to Yellowstone and Yosemite. The bulk of the expenditures when visiting destination parks go to lodging, travel, and food. Higher fees have little effect on visitation to most parks. … Even modest fees (though sometimes large fee increases) could cover the operating costs of some destination parks. About $5 per person per day could cover operations in Grand Canyon National Park, as would just over $10 in Yellowstone.
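Fretwell's per-person arithmetic is easy to verify. In the sketch below, the permit price and party size come from the passage above; the $2,000 total trip budget is purely a hypothetical figure, chosen to illustrate why the entry fee ends up as such a small share of visit costs:

```python
# Back-of-the-envelope check of the entrance-fee arithmetic quoted above.
vehicle_permit = 30.00    # seven-day vehicle permit (Grand Canyon, Yellowstone)
days = 7
family_size = 4

per_person_per_day = vehicle_permit / (days * family_size)
print(f"Entry cost: ${per_person_per_day:.2f} per person per day")  # $1.07

# Hypothetical total trip spending (lodging, travel, food); illustration only.
trip_budget = 2000.00
fee_share = vehicle_permit / trip_budget
print(f"Entry fee as share of trip cost: {fee_share:.1%}")  # 1.5%
```

Even this modest hypothetical trip budget puts the entry fee comfortably under the 2% share estimated for Yellowstone and Yosemite visitors.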

An obvious question here is why the parks can’t just raise fees on their own, but of course, that choice runs into political constraints as well. It is at least arguable that franchisees could spell out the facilities that need renovation or construction, along with the other services they could offer, and then charge the fees that would cover the costs.

Fretwell recognizes that not all national parks will have enough visitors to work well with a franchise model (for example, some of the huge national parks in Alaska), and a need for direct government spending on such parks will remain. But it’s worth remembering that national park visitors tend to have above-average income levels. A franchise proposal can be understood as a way of circumventing the political constraints that first prevent national parks from collecting fees, and then fail to allocate sufficient financial resources from other government revenues. A group of franchise proposals would also give national parks a way to move away from “thinning the blood”–that is, focusing heavily on how to persevere under tight and inflexible financial constraints–and instead offer an infusion of new ideas and new ways to finance them.

War on Cancer: Redux

In his 1971 State of the Union Address, President Richard Nixon launched what came to be known as the War on Cancer:

“I will also ask for an appropriation of an extra $100 million to launch an intensive campaign to find a cure for cancer, and I will ask later for whatever additional funds can effectively be used. The time has come in America when the same kind of concentrated effort that split the atom and took man to the moon should be turned toward conquering this dread disease. Let us make a total national commitment to achieve this goal.”

And now, 45 years later, in the 2016 State of the Union address, President Barack Obama is relaunching the War on Cancer:

“Last year, Vice President Biden said that with a new moonshot, America can cure cancer. Last month, he worked with this Congress to give scientists at the National Institutes of Health the strongest resources that they’ve had in over a decade. So tonight, I’m announcing a new national effort to get it done. And because he’s gone to the mat for all of us on so many issues over the past 40 years, I’m putting Joe in charge of Mission Control. For the loved ones we’ve all lost, for the families that we can still save, let’s make America the country that cures cancer once and for all.”

So how did that first War on Cancer turn out? At the tail end of 2008, just before President Obama took office, David Cutler took a stab at answering that question in “Are We Finally Winning the War on Cancer?” which appeared in the Fall 2008 issue of the Journal of Economic Perspectives (22:4, pp. 3-26). Here’s a figure showing the mortality rate from cancer over time.

As Cutler reports, spending on cancer research and treatment did rise steadily after Nixon’s speech, at about 4-5% per year. But as the figure shows, cancer death rates kept rising for a time as well, by about 8% in total during the 1970s and 1980s. By 1997, the New England Journal of Medicine ran an article noting these trends called “Cancer Undefeated.” Perhaps inevitably, that article was soon followed by a sharp decline in cancer mortality. Apparently, Obama is re-enlisting in a war on cancer that has been going pretty well for a couple of decades.

But the War on Cancer has been fought with several different tools–and biomedical research on a “cure for cancer” isn’t the biggest one. Cutler focuses on four main types of cancer: lung, colorectal, female breast, and prostate. After reviewing the evidence on each, he wrote:

“[B]ehaviors, screening, and treatment advances for the four cancers I consider were each important in improved cancer survival. Together, they explain 78 percent of the reduction in cancer mortality between 1990 and 2004. Thirty-five percent of reduced cancer mortality is attributable to greater screening—partly through earlier detection of disease, and partly through removal of precancerous adenomas in the colon and rectum. Behavioral factors are next in importance, at 23 percent; the impact of smoking reductions on lung cancer is the single most important factor in this category. Finally, treatment innovation is third in importance, accounting for 20 percent of reduced mortality.

“The relative importance of these different strategies seems surprising, but it is easily understandable. Despite the vast array of medical technologies, metastatic cancer remains incurable and fatal. The armamentarium of medicine can delay death, but cannot prevent it. Thus, technologies in metastatic settings have only limited effectiveness. Far more important is making sure that people do not get cancer in the first place (prevention) and that cancer is caught early (screening), when it can be successfully treated.”
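Cutler's decomposition is worth pausing over; the three channels he quantifies do indeed sum to the 78 percent figure:

```python
# Cutler's decomposition of the 1990-2004 decline in cancer mortality,
# as quoted above: shares of the decline attributed to each channel.
contributions = {
    "screening": 35,               # earlier detection, adenoma removal
    "behavior": 23,                # mostly reduced smoking
    "treatment innovation": 20,
}
explained = sum(contributions.values())
print(f"Share of mortality decline explained: {explained}%")  # 78%
```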

From this perspective, emphatic calls for a “cure for cancer” highlight a bias in US medicine: a bias in favor of high-cost, later-stage interventions, rather than the early-stage interventions of prevention and screening, which often happen largely outside the health care system but have the potential to save many more lives at much lower cost.

David H. Howard, Peter B. Bach, Ernst R. Berndt, and Rena M. Conti looked at “Pricing in the Market for Anticancer Drugs” in the Winter 2015 issue of the Journal of Economic Perspectives (29:1, pp. 139-62). As I’ve discussed on this blog, before a new anti-cancer drug is approved, various clinical trials and studies are done, and these studies provide an estimate of the median expected extension of life as a result of using the drug. Then, based on the market price of the drug when it is announced, it’s straightforward to calculate the price of the drug per year of life gained. Their calculations show that back in 1995, new anti-cancer drugs reaching the market were costing about $54,000 to save a year of life. By 2014, the new drugs were costing about $170,000 to save a year of life. This is an increase in the cost per year of life saved of roughly 6% per year.
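As a quick consistency check, the annualized growth rate implied by those two endpoint estimates can be computed directly:

```python
# Annualized growth in cost per life-year saved, from the two endpoint
# figures quoted above ($54,000 in 1995, $170,000 in 2014).
cost_1995 = 54_000
cost_2014 = 170_000
years = 2014 - 1995  # 19 years

# Compound annual growth rate implied by the two endpoints.
cagr = (cost_2014 / cost_1995) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # about 6% per year
```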

As both Nixon and Obama can attest, calling for a “cure for cancer” with an analogy to putting a person on the moon has been political magic for 45 years now. But if the “cure for cancer” rhetoric creates the expectation of a magic pill that lets everyone go back to smoking cigarettes again, I fear that it is both missing the point and raising false hopes for cancer patients and their families. I’m pretty much always a supporter of additional research and development, and like everyone else, I hear anecdotes (which I cannot evaluate) about how some great new anti-cancer drugs are already in the research pipeline. But a focus on developing more extremely expensive anti-cancer drugs that often provide only very limited gains in average life expectancy (in a number of cases, only a few months) shouldn’t be the primary approach here. At least for the near-term, and probably the medium-term too, the primary tools that can keep cancer mortality on a downward trend are more likely to be prevention and early detection, along with ongoing improvements in the health-effectiveness and cost-effectiveness of treatment, not a “moonshot” for a cure.

Full disclosure: I’ve been Managing Editor of the Journal of Economic Perspectives since 1987, and so part of my paycheck came from working to publish the two articles mentioned here.

Why Do People Say They Aren’t In the Labor Force?

Although the unemployment rate has dropped to 5%, the labor force participation rate has continued its long-term decline. For those not clear on the difference between these terms: the government counts people as unemployed only if they are both without a job and looking for one. A person who is out of a job but not looking for one is not counted as “unemployed,” but instead is counted as out of the labor force.
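The accounting behind these definitions can be made concrete with a toy example (all numbers below are invented for illustration, not actual BLS figures):

```python
# Toy illustration of the BLS definitions. A jobless person counts as
# "unemployed" only if actively looking for work; otherwise they are
# out of the labor force entirely.
employed = 150.0            # millions (hypothetical)
unemployed = 8.0            # jobless AND looking for work
not_in_labor_force = 95.0   # jobless, NOT looking (retired, in school, ...)

labor_force = employed + unemployed
population = labor_force + not_in_labor_force  # civilian adult population

unemployment_rate = unemployed / labor_force
participation_rate = labor_force / population
print(f"Unemployment rate: {unemployment_rate:.1%}")    # 5.1%
print(f"Participation rate: {participation_rate:.1%}")  # 62.5%

# If 3 million of the unemployed give up looking, they shift from
# "unemployed" to "not in labor force": measured unemployment falls
# even though not a single new job has been created.
unemployment_rate_after = (unemployed - 3.0) / (labor_force - 3.0)
print(f"Unemployment rate after: {unemployment_rate_after:.1%}")  # 3.2%
```

The last two lines capture exactly the worry discussed next: a falling unemployment rate can reflect discouraged workers leaving the labor force rather than a genuinely healthier job market.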

This definition makes some sense: after all, it would seem peculiar to count those who are happily retired, or spouses who stay home by choice, as “unemployed.” But the definition also raises a legitimate concern: the drop in unemployment may be only in part the healthy sign of a recovering economy, and in part the unhealthy sign of potential workers who have given up on looking for a decent job with decent pay. How can we distinguish between these possibilities? One piece of evidence is to look at the reasons that those who are out of the labor force give for why they are not looking for a job. Steven F. Hipple of the US Bureau of Labor Statistics pulls together evidence from a US Census Bureau survey called the Annual Social and Economic Supplement, which is part of the Current Population Survey. “People who are not in the labor force: why aren’t they working?” appears in the BLS newsletter Beyond the Numbers (December 2015, vol. 4, no. 15).

As a starting point, here’s the overall civilian labor force participation rate. The long rise in the share of US workers in the labor force from about 1970 up through the mid-1990s is usually associated with a much larger share of women entering the (paid) labor force. There’s a peak in the late 1990s, and a decline since then, which suggests that the main causes of the decline are long-run, not rooted in the Great Recession of 2007-2009.

Hipple breaks down this data in various ways. Here, I’ll focus on some breakdowns by age and gender. First, the age groups from 16-19 and from 20-24 have seen large declines in labor force participation, as shown in the figures below.


Drawing on the answers to the Census questions, Hipple points out that essentially all of this decline, at least from 2004 to 2014, can be accounted for by the larger number who say that they aren’t looking for work because it would conflict with school. Here are Hipple’s charts for these two age groups.

Of course, economists are always suspicious of the answers that people give in response to surveys. When people in the 16-24 age bracket say that they aren’t looking for a job because they are going to school, are they also saying that they just aren’t attracted by the jobs on offer? If jobs that paid (say) twice the minimum wage were readily available to them, maybe more of them would squeeze some time into their schedules for part-time work. It also seems to me that there’s been a change in cultural expectations here, about whether high school and college students are expected by their peers and parents to find at least a part-time job. But overall, the decline in labor force participation by these young adults because of going to school doesn’t strike me as a major social problem.

At the other end of the age distribution, here’s the labor force participation rate for those 55 and older. It steadily falls until the early 1990s, largely as a result of people retiring earlier, then steadily rises up to about 2010, and since then has leveled out.

Hipple’s survey evidence compares 2004 to 2014, a period when labor force participation among older workers was rising. The survey evidence is that older workers became less likely to say that they are out of the labor force because of retirement or home responsibilities, but more likely to say that they are out of the workforce because they are ill or disabled. Here are Hipple’s figures for the 55-64 age group and then the 65-and-over age group.

From a medium-run perspective, surely one of the groups hardest hit by the Great Recession was the near-elderly who lost jobs and ended up forced into retirement several years earlier than expected. Many in this group also found that, in an economic environment of low interest rates, their savings brought in less interest income than they had reason to expect. But from a long-run perspective, the pattern for labor force participation of those over 55 is consistent with two counterbalancing factors: on one side, average dates of retirement are moving back and people are working until later ages, which tends to raise labor force participation for this group; on the other side, the share of the over-55 population that is well past the common age of retirement is rising, which tends to lower labor force participation rates for this group.
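The interaction of these two factors is a composition effect, and a stylized example shows how the aggregate over-55 rate can fall even while participation rises within each subgroup. All the numbers here are invented for illustration:

```python
# Stylized composition effect for the over-55 population: participation
# rises *within* both age subgroups, yet the aggregate rate falls because
# the (low-participation) 65+ group becomes a larger share of the total.
def aggregate_rate(share_55_64, rate_55_64, rate_65_plus):
    """Population-weighted participation rate for the over-55 group."""
    return share_55_64 * rate_55_64 + (1 - share_55_64) * rate_65_plus

# Earlier period: 55-64s are 60% of the over-55 population (hypothetical).
early = aggregate_rate(share_55_64=0.60, rate_55_64=0.60, rate_65_plus=0.15)
# Later period: both within-group rates are higher, but the 65+ share grew.
late = aggregate_rate(share_55_64=0.45, rate_55_64=0.65, rate_65_plus=0.18)

print(f"early: {early:.1%}, late: {late:.1%}")
# The aggregate rate falls (from 42% to roughly 39%) despite both
# within-group rates rising.
```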

Finally, what about those in the 25-54 age bracket, sometimes called “prime age” workers? In some ways, when the issue is whether the fall in unemployment is just masking people leaving the labor force altogether, this age group is of greatest concern. This group shows a steadily rising labor force participation rate from 1950 up to about 1990, then a leveling out of that growth in the 1990s, and a decline since about 2000.

In this case, Hipple’s discussion emphasizes the different reasons given by men and women in this age bracket as to why they are out of the labor force. Men mostly say that they are out of the labor force because they are ill or disabled. Women mostly say that they are out of the labor force because of home responsibilities.

Again, one should be alert to the likelihood that the reasons people give for being out of the labor force are strongly shaped both by social custom and by the labor market opportunities available to them. For example, one suspects that in 2016, it remains both more socially acceptable, and a more accurate reflection of the division of labor in many households, for a woman to report “home responsibilities” as a reason for not working than it is for a man. Hipple points out that men with lower levels of education (a high school education or less) are more likely to be out of the labor force as a result of being ill or disabled, and women with lower levels of education are more likely to be out of the labor force as a result of home responsibilities. If more low-skilled men had jobs available that didn’t involve a lot of physical labor, it’s likely that fewer of them would report being ill or disabled. If more low-skilled women had decently paid jobs available, they would have more of a reason to rearrange home responsibilities to suit the job.

The reasons people give for economic actions aren\’t the end of the story, but they do matter, because they tell us something about how people perceive their situations.

I’ve commented on the falling labor force participation rate and its relationship to the unemployment rate a number of times in the past. Here are a few of those posts, with various differing angles. For international context: “Putting U.S. Labor Force Participation in Context” (February 27, 2015). For discussions of how to interpret the fall in unemployment rates and the overall health of the labor market in the context of the dropping labor force participation rates, see “How Tight is the US Labor Market?” (October 26, 2015), “Underutilized Labor in the US Economy” (November 24, 2014), and “Unemployment and Labor Force Participation: Revisiting the Puzzle” (July 23, 2014).

Marriage: Homogamy or Heterogamy?

Do people have a tendency to marry those with educational and other backgrounds similar to their own? Social scientists, with their gift for turning simple ideas into jargon, call this “homogamy.” Conversely, if people have a tendency to marry those with different educational or other backgrounds, social scientists would say that marriage is “heterogamous.” At a given point in time, the question of whether marriage is homogamous or heterogamous reveals something about the degree of mixing across socioeconomic classes in society. Over time, a society with greater homogamy is likely to also have more inequality.

Robert D. Mare compiles and presents evidence on “Homogamy in Two Gilded Ages: Evidence from Intergenerational Social Mobility Data,” in the Annals of the American Academy of Political and Social Science (January 2016, pp. 117-139). (The journal is not freely available online, but many readers will have access through library subscriptions.) The earlier research on this subject looked at data going back to about 1940, and it produced measures of the proportion of marriages between people with a similar education level, which looked like this:

An obvious concern with this graph is what to make of the single data point for 1940, based on Census data. Is there a reason why marriage homogamy might have dropped so much from 1940 to 1960? Or is there just something in the way that the 1940 data was collected or tabulated that makes it not directly comparable to the later data? In this paper, Mare makes use of previously unutilized data on what adults report about the marriages of their parents to extend this data back in time. He finds that there is indeed a downward pattern of marriage homogamy in the first half of the 20th century.

It’s tricky, and perhaps impossible, to provide a single rigorous explanation of this U-shaped pattern of marriage homogamy that works equally well when homogamy goes from high to low and when it goes from low to high. But as Mare writes (citations omitted):

“Two broad sociodemographic trends that provide a context for assortative mating patterns are nonetheless worth noting. First, the comparatively low level of educational homogamy for young couples in the early 1950s coincides with the century’s lowest median age at first marriage. The median age at first marriage was approximately 26 for men and 22 for women in 1900, declined steadily to approximately 23 for men and 20 for women in 1950 and increased thereafter to approximately 27 for men and 25 for women in 2000. When couples marry early, one or both partners may have not yet completed their schooling or have only just left school. Although schools may structure marriage markets, couples who marry early may not, at the time of marriage, be able to take full account of the characteristics of their partners that are associated with their educational attainment. Conversely, when couples marry later, their preferences and opportunities for marriage may be more strongly based on the “realized” characteristics of their potential partners, which may, to a significant degree, be a result of their partners’ educational attainments. …

“A second important trend is in the differential life chances associated with educational attainment, perhaps the most important of which are the economic returns to schooling. When individuals expect that earnings and income gaps between educational groups will be large during their adult years, they not only have a greater incentive to stay in school themselves, but also may place more weight on the educational attainments of prospective marriage partners. Conversely, if the economic gaps between educational attainment levels are small, factors other than schooling are more likely to govern educational choice. During the latter half of the twentieth century and especially since the 1960s, the differences in earnings across individuals with varying amounts of education grew markedly, a trend that has a strong positive association with various indicators of educational assortative mating for couples who married during this era.”

In short, social and economic inequality clearly interact with marriage homogamy. On one side, in a society with higher levels of inequality, people are less likely to interact with others from different socioeconomic groups in a way that would lead to heterogamy. On the other side, a society with more marriage homogamy will be one in which those with higher wage and employment prospects are marrying each other. As a result, differences in household income will be larger. In addition, those households will have greater resources to invest in their children, which could lead to a greater persistence of inequality across generations. Many different factors affect marriage and economic inequality, but still, it seems unlikely to be a coincidence that homogamy was associated with greater inequality of the wage distribution and higher returns to schooling both early and late in the 20th century, while heterogamy was associated with a more equal distribution of income and lower returns to schooling in the middle of the 20th century.

Multipolarity: The Next Step After Globalization?

The world economy during the last few decades has experienced “globalization,” a broad and admittedly vague term which refers, among other things, to a rise in the ratio of world exports to world GDP, as well as to the pattern that an ever-rising share of global economic output is happening in the “emerging markets” rather than in the traditional high-income countries. But since the Great Recession, there has been a slowdown in the rise of global trade. If globalization falters, what might evolve next? The Credit Suisse Research Institute asks in a September 2015 report: “The End of Globalization or a More Multipolar World?” The report makes this argument:

Our sense is that the world is currently in a benign transition from full globalization to multipolar state, though this is not complete. … We find evidence of region-specific trends in economic, social and technological factors that are distinct from aggregate world trends. … In this context, we are increasingly mindful of George Orwell’s 1984, where he divided the world into three regions – Oceania, East Asia and Eurasia on the basis of economic power and form of government. Although it requires some conceptual shoehorning we could well fit the major countries of the world into the following categories: Oceania (USA, Canada and Latin America), Eurasia (Europe, the Middle East and Russia), East Asia (Africa, Asia and the Pacific economies). Some countries like the UK, Japan and Australia could just as easily fit in two categories. In today’s world, Orwell’s classification is not a ‘clean’ one but the three broad regions he has set out give a sense as to how a multipolar world might evolve at a high level.

What is some of the evidence for the movement to a multipolar world economy? One chunk of evidence is a de-emphasis of the role of international organizations like the World Trade Organization in favor of regional and preferential trade agreements, along with bilateral investment treaties. Here are some trends in each of these areas.

When it comes to international finance, the world economy still operates on more of a globalized US-dollar standard than on a multipolar standard that would give greater weight to, say, the euro and the Chinese renminbi. This doesn’t seem likely to change in the short run, given the economic turmoil in China and across the euro zone. But in the medium and long run, it seems very likely that this will change. The figure below offers a longer-run perspective. When it comes to currency reserves, largely held at central banks, the US dollar still rules–but it’s a lower share than back in the 1970s, and it’s not a rising share.

The Credit Suisse report looks at a number of other dimensions of a multipolar world, like whether companies are continuing to increase their investments across national borders at the same rate, and the levels at which issues of governance and conflict are playing out. Michael Sullivan summarizes these findings in this way:

“Our analysis of corporate investment and revenue growth shows that globalization remains intact in terms of consumption and marketing patterns, [but] there appears to have been a retrenchment in cross-border investment by corporates. … We read these results as pointing towards a more multipolar world where companies continue to sell across borders but are more cautious in investing across them.

“In terms of governance, the impetus provided to the spread of democracy by globalization looks to have reached a limit, with less democratic forms of government being perceived to produce economic success and new regional institutions replacing the activities of world ones. … Geopolitically, conflict now takes place more within countries and regions, than between countries.

“The world is increasingly undercut by faultlines in terms of religion, climate change, language, military development and indebtedness, to name a few.”

If a multipolar world is coming, it behooves the United States to consider what our core alliance would look like. I’ve posted here before on “The North American Vision” (November 5, 2014) of stronger US ties with Canada and Mexico, but I suspect that broad vision should be expanded to include Latin America as well.

Finally, here’s an angle on the very long-run evolution of the global economy: a breakdown of what share of the global economy was represented by different countries or regions over the last millennium. Maybe the easiest way to read this figure is to start from the bottom dark-green area, representing China, and then work your way up through the other countries in the order they appear in the key.

About 1,000 years ago, the world economy was dominated by China and India. You can see their shares of global output dwindling over time, and the gradual rise of the United States (the second shaded area from the bottom), along with the growing importance of Japan, other countries in Asia, and Latin America during much of the 20th century. In just the last few decades, China’s importance grows substantially, and India’s importance grows noticeably, too. Indeed, a few decades down the road we could be headed back to a global economy in which the two largest players are, again, China and India.

Bernanke on the Fed, the US Dollar, and the Global Economy

  • Has the Federal Reserve been manipulating the value of the US dollar downward to give US exports a boost? 
  • Are Federal Reserve policies causing swings in capital flows to and from emerging-market economies, in a way that creates financial and economic instability elsewhere in the world? 
  • Does the dominant role of the US dollar in international economic transactions provide large economic benefits to the US economy? 

Ben Bernanke takes up these three questions in the Mundell-Fleming lecture, which he delivered at the IMF’s 16th Jacques Polak Annual Research Conference on November 5, 2015, on the subject “Federal Reserve Policy in an International Context.” Video of the lecture is available, too.

On the first question, concerning whether the Fed is pushing for a lower US dollar exchange rate, the answer at first glance is “no,” and upon further reflection is still “no.” At first glance, the value of the US dollar did fall a bit just after the Great Recession, but it has risen since then–so it’s pretty much impossible to make a case that the Fed has been pursuing a cheap-dollar export-spurring policy. Here’s a figure from Bernanke’s paper showing the exchange rate of the dollar in the last decade. Whether you compare it to just major currencies, or other trading partners, or the broad index of all trading partners, the same rough pattern emerges.

Moreover, Bernanke points out that when monetary easing in the US causes the exchange rate value of the US dollar to fall, there are two effects on other countries. One is that US exports are cheaper in world markets, which tends to hurt other economies, but the other effect is that the US economy is stimulated to expand more rapidly, which tends to help other countries selling to the US market. Bernanke writes:

“Notably, although monetary easing usually leads to a weaker currency and thus greater trade competitiveness, it also tends to increase domestic incomes, which in turn raises home demand for foreign goods and services. … In the case of the United States, … the available evidence suggests that these two effects of monetary policy largely offset, limiting the overall effect on US trading partners.”

On the second issue, about whether Federal Reserve policy may cause financial swings in other countries, Bernanke offers a more cautious answer. The issue here is that with very low US interest rates in recent years, a certain amount of international investment capital has been headed into financial markets of emerging economies, pushing up stock markets and asset prices in those countries. With the Fed now starting to raise US interest rates, some of that money will now exit the emerging markets. The problem arises because financial markets in  emerging markets can be quite small by global standards, so the start-stop-reverse movement of what would be a fairly modest amount of capital by the standards of the US or the EU economy can severely shake up a smaller emerging market economy.

Bernanke acknowledges the possibility of such disruptions. He also points out that economic policy-making in other countries has a lot to do with whether they are susceptible to a danger of volatile international capital movements. In an earlier episode, Bernanke points out, “commentators referred to the ‘fragile five’ emerging markets—Turkey, Brazil, India, South Africa, Indonesia—whose initial conditions, structural weaknesses, and macroeconomic policies made them more vulnerable to global financial developments.” Bernanke adds:

“Importantly, ‘improvement’ in the financial sphere does not necessarily require continuous liberalization. … [I]n some cases, macroprudential policies and even capital controls may be needed to manage credit and capital flows during the process of reform. … Financial regulation and supervision are also the obvious tools to use against other plausible sources of spillovers, including currency mismatches in the banking system, excessive cyclicality in lending standards, and opaque and illiquid markets.”

Bernanke is clearly correct that Federal Reserve actions will affect other countries in a variety of ways, and that it’s obviously impossible to set Fed policy in a way that would be equally satisfactory to all countries in the world. But that said, his response on this point isn’t 100% persuasive. Sure, if economic policy-makers in other countries are smart, alert, and responsive, they can address these dangers of start-and-stop international capital flows. But economic policy-makers in other countries will at times be obsessed with their own domestic economy and politics, and as a result, Fed actions will sometimes bring considerable disruption.

On the third question, about the extent to which the US economy benefits from the dollar’s role in international transactions, Bernanke points out that the international use of the US dollar has in a number of ways been quite beneficial to the global economy. In comparison, the benefits to the US economy itself have diminished with time and are relatively small. On the value of the US dollar in international transactions, Bernanke writes:

“[I]n practice it has benefited the global economy in several ways. First, … over the past three decades or so the Federal Reserve has been successful at keeping inflation low and stable. Consequently, the dollar has served its principal function as global numeraire, namely, to maintain a stable value in terms of goods and services.

“Second, there is a strong and growing global demand for safe, liquid assets, which the United States—with its political stability and deep, liquid financial markets—has been generally successful in providing. The US also maintains open trade and capital accounts, preserving international access to US assets.

“Third, dollar assets have proved to be a valuable hedge for foreign holders against downside geopolitical and financial risks (Gourinchas et al., 2010; Obstfeld, 2010). Broadly speaking, US international liabilities are in the form of relatively liquid, fixed-income assets, notably government bonds and government-backed mortgage securities, whereas US international assets tend to be riskier, e.g., equities. For this reason, and because the dollar is a “safe haven” currency that tends to appreciate when global risks increase, the US net asset position improves during tranquil times but worsens during periods of stress. Gourinchas et al. (2010) calculate that about $2 trillion was transferred from the United States to other countries via valuation changes during the financial crisis. Obviously, the US role as provider of hedge assets is not the result of conscious policy. Instead, it reflects US comparative advantages in providing safe liquid liabilities and investing in riskier foreign assets, as well as the dollar’s role as a safe haven.

“Fourth, the Federal Reserve has shown its willingness to serve as a lender of last resort to dollar-based lenders.”

Concerning the question of how the widespread international use of the US dollar specifically benefits the US economy, as Bernanke points out, it’s not 1970 any more:

“The dollar’s monopoly power has also been eroded over recent decades, in that assets denominated in euros, British pounds, and yen have become increasingly viable not only as reserve currencies but for other purposes, such as posting collateral. … [W]e shouldn’t be overly exercised over controversies about whether the dollar will retain its pre-eminence, the future of the renminbi as a reserve currency, and so on. These debates are more about symbolism than substance. In purely economic terms, the universal usage of English, say, is far more valuable to the United States than the broad use of the dollar.”

Back to Basics: What Drives US Economic Growth?

I like to say that the formula for economic growth is simple: it’s a mixture of more workers, improved human capital, increases in physical capital, and better technology–all operating in an economic environment that provides incentives for efficiency and innovation. Rebecca M. Blank fleshes out this framework in “What Drives American Competitiveness?” which was delivered as the 2015 Daniel Patrick Moynihan Lecture on Social Science and Public Policy and published in the Annals of the American Academy of Political and Social Science (January 2016, 663, pp. 8-30).

Blank carries out a “growth decomposition”–that is, she looks at the actual rise in real GDP during the 45 years from 1970 to 2014 and attributes it to the following causes: “GDP growth = growth in hours worked (25%) + growth in labor quality (10%) + capital deepening (39%) + TFP (26%).” The phrase “capital deepening” refers to a higher amount of average physical capital per worker. “TFP” stands for “total factor productivity,” a measure of the growth of productivity over time.
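To make the arithmetic of this decomposition concrete, here is a minimal Python sketch. The percentage shares are the ones Blank reports; the 2.8% average annual growth rate is a hypothetical placeholder for illustration, not a figure from her paper.

```python
# Stylized growth-accounting sketch of Blank's decomposition.
# Shares of growth attributed to each source, as reported by Blank.
shares = {
    "hours worked": 0.25,
    "labor quality": 0.10,
    "capital deepening": 0.39,
    "TFP": 0.26,
}

avg_gdp_growth = 2.8  # percent per year: an illustrative assumption only

# Convert each share into a contribution in percentage points per year.
contributions = {k: round(v * avg_gdp_growth, 2) for k, v in shares.items()}
for source, pct_points in contributions.items():
    print(f"{source}: {pct_points} percentage points per year")
```

The point of the exercise is simply that the four contributions must add back up to overall growth, so any slowdown in one source (say, hours worked) directly lowers total growth unless another source compensates.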

How are these building blocks of economic growth expected to evolve in the next 10-20 years? The answer will determine how US growth evolves.

For example, the total hours worked in the US economy was actually a little lower in 2014 than it was back in 2000. Blank writes:

“Sheer growth in the number of workers has explained 25 percent of economic growth over the past 45 years. But the three big elements that drove this growth—immigration, the baby boom population bulge, and increases in women’s labor force involvement—are now either growing more slowly or moving in the opposite direction. The result is recent small declines in the work hours of the population. It is hard to see how work hours will grow substantially in the years ahead.”

What about future trends in human capital? A standard method for estimating the amount of human capital is to look at education levels and job experience. Blank focuses on education levels. She points out that at the lower end of educational achievement: “[H]igh school graduation has largely stalled out at around 88 percent of the population for both men and women. This means that a substantial share of the population is still entering the workforce without even a high school degree. Furthermore, a growing share of high school graduates hold GED degrees, which may not provide even the same skill level as a high school degree. From everything we know about the labor market, these young adults will face low wages and higher unemployment throughout their working lives, as job opportunities for the least skilled continue to deteriorate.”

At the higher levels of education, US college completion rates are on the rise, but not as quickly as in many other countries. Blank writes: 

While the U.S. population has shown relatively slow growth in the share of the population with a college degree, other countries have made very rapid progress on this front in recent decades. As a result, while this country had one of the most educated populations in 1970, other countries are rapidly surpassing the United States in educational attainment. In 2011, the United States ranked fourteenth among the thirty-six OECD nations in the percentage of 25- to 34-year-olds with associate’s degrees or higher. Even more concerning, this percentage is virtually the same among 25- to 35-year-olds as it is among 55- to 64-year-olds in the United States, while virtually all other countries have seen substantial gains in higher education for the younger age group … 

Blank doesn’t discuss the work experience of the average US worker, but with the retirement of the baby boom generation subtracting large numbers of high-experience workers from the US economy, this isn’t likely to be a growth area for human capital, either.

What about expanded physical capital and innovation? Blank discusses these together, on the grounds that new technologies are one of the main reasons why businesses would expand their physical capital per worker. But for some years now, business investment has been sluggish, in a way that has led some to predict a future of “secular stagnation” for the US economy. Overall US government support for research and development, one of the drivers of innovation, has been flat for several decades (as discussed, for example, here and here).

In short, looking at the basic determinants of economic growth does not paint a pretty picture for long-run economic growth in the US. Thus, the question becomes to what extent at least some of these determinants of growth might be affected by public policy. It’s easy to list what the targets of such policies might be, although it’s of course harder to be confident about which specific policies would work in meeting these targets.

For example, if job opportunities for low-skilled workers expanded in a way that pulled large numbers of them into the labor force, or if a very large number of Americans postponed retirement and continued to work later in life, the number of hours worked in the US economy would not keep falling. US human capital would improve with changes in the K-12 school system that increased both the proportion and the quality of high school graduates, followed by methods of financing more higher education for those for whom acquiring more skills in college makes sense. More government spending on R&D makes sense, but much more important is a business environment that has the incentives and ability to use the results of government R&D–in combination with the firm’s own innovative efforts–to grow and expand.

Blank notes: “Innovation, when it leads to new products that consumers and businesses demand, creates new companies and new jobs. Most job growth comes from rapidly growing new companies that are expanding in high-demand markets. We need that innovation to continue to occur at a high rate in this country if we are to reap these economic benefits.” She also writes of the need “to ensure that the United States is an excellent place to start and grow businesses, with modern infrastructure, strong intellectual property protections, a reasonable tax regime, reasonable regulatory structures, and so forth.”

She adds: “I optimistically note that support for many of these actions should be bipartisan, although there will be partisan disagreement on how to achieve them.” I would add that one way to judge candidates for political office in 2016 is whether they have a detailed and at least somewhat plausible plan–not just a slogan or an expression of good intentions–for improving the main determinants of economic growth.

What is Getting Too Little Attention from Financial Regulators?

“The mission of the US Commodity Futures Trading Commission,” as its website notes, “is to foster open, transparent, competitive, and financially sound markets, to avoid systemic risk, and to protect the market users and their funds, consumers, and the public from fraud, manipulation, and abusive practices related to derivatives …” The CFTC is supposed to operate with five commissioners, but it is currently making do with three. One of them is J. Christopher “Chris” Giancarlo, who was nominated by President Obama in 2013 and started his role in 2014. Giancarlo recently participated in the Fidelity Guest Lecture Series on International Finance at Harvard Law School, and in his December 1, 2015, lecture, he expressed his frustration that the financial regulatory apparatus is still so focused on working through the implementation of the Dodd-Frank law passed five years ago–a law designed to address the problems of the 2007-2009 financial crisis–that too little attention is being given to what are right now the more important challenges of regulatory policy. Giancarlo sets the stage this way:

“The Dodd-Frank Act was passed over five years ago, but U.S. market participants and Washington financial regulators must still spend much of their professional time arguing over and addressing its myriad mandates and peculiar prescriptions – regulatory edicts ostensibly designed to prevent a recurrence of the last crisis. The same is true for much of the European and Asian discussion around the G-20 regulatory reform efforts initiated in Pittsburgh in 2009 and coordinated by the Financial Stability Board (FSB). The hue and cry of the ongoing financial market reforms under Dodd-Frank and the FSB leaves market regulators and participants with very little available bandwidth to assess and prepare for the next financial crisis – a crisis that will certainly be unlike the last one.

Just as “peacetime generals are always fighting the last war” and “economists fight the last depression,” so too do financial regulators outlaw past market abuses that are not a looming threat to our financial markets and economies. The Dodd-Frank Act and its unceasing implementation are uniquely positioned to ensure U.S. market regulators stay focused on the past.

Allow me to use a simple analogy. U.S. market regulators are riding together in an automobile on a high-speed interstate highway. The Dodd-Frank Act is an oversized rear-view mirror covering almost the entire windshield. That rear-view mirror directs our attention to the enormous amount of rules and requirements generated over the past five years that need to be reworked to meet Dodd-Frank’s never-ending demands. Meanwhile, financial markets continue to evolve and pass by at remarkable speed. New dangers are coming right at us. As we regulators barrel down the road of 21st century financial markets, we must shed this backwards-looking approach to regulating or we will not be able to see the oncoming traffic and looming dangers ahead.”

What dangers does Giancarlo believe, looking around from his perch at the CFTC, should be the main focus of financial regulators right now? Here are his six priorities–and the Dodd-Frank legislation has very little to say about any of them:
1) Cybersecurity. Both Giancarlo and CFTC chair Timothy Massad are on record as saying that “cybersecurity is the most important single issue facing our markets today in terms of market integrity and financial stability.”

2) Disruptive Technology. Regulators need to figure out how to deal with issues like automated electronic trading. Giancarlo writes: “It is hard to deny that finance is increasingly becoming an industry where machines and humans are swapping their dominant roles – transforming modern finance into what scholar Tom Lin has called ‘cyborg finance.’” Other new technologies include “distributed open ledger” systems, in which records of financial transactions are held openly by many parties, as in the case of Bitcoin and a number of efforts by private-sector banks, and the development of “financial cartography,” by which he means maps of how financial networks interact.

3) Government intervention. Here, Giancarlo is referring to the very large role that the Federal Reserve and other central banks have come to play in financial markets. He writes (footnotes omitted):

“Since the 2008 financial crisis, the Federal Reserve (Fed) has made itself an increasingly outsized player in the U.S. government debt markets … Through its “quantitative easing” (QE) program, the Fed has purchased an unprecedented 61 percent of all Treasuries issued, peaking at close to 80 percent in 2014. Today, the Fed has become the multi-trillion dollar “Washington Whale.” Its intervention in the Treasury and mortgage-backed security markets misprices the true cost of credit below its natural level and distorts the integrity of prices and exchange rates. The Fed is having an increasingly direct and immediate impact on all other markets, from corporate bonds to equities and foreign exchange rates to developing nations’ sovereign debt. It has reduced the heterogeneity of the investor base, herding it into one-way bets on anticipated changes in Fed policy rather than traditional fundamental credit or value analysis. … Central banks have replaced major dealers and money center banks as marketplace Leviathans plunging into increasingly shallower pools of trading liquidity. With one flip of their policy tails, these central bank behemoths can whack a whole lot of smaller market participants out of once-liquid markets and leave them stranded.”

4) Market illiquidity. Here, the main concern is that in writing rules to limit what banks and financial institutions are allowed to do, we may be contributing inadvertently to markets that are less liquid and thus more prone to episodes of high volatility or even manipulation. Giancarlo said (footnotes omitted):

Market participants know that liquidity is the lifeblood of healthy trading markets. In essence, liquidity is the degree to which a financial instrument may be easily bought or sold with minimal price disturbance by ready and willing buyers and sellers. …  

We saw evidence of such pronounced liquidity contraction this past August in enormously volatile equity markets, when major global banks focused on executing trades for their clients rather than for their own account. We saw it in June with sudden spikes in the German bond market. We saw it a year ago when the market for U.S. Treasury securities, futures and other closely related financial markets experienced an unusually high level of volatility and a very rapid and pronounced round-trip. A few weeks ago, Chairman Massad cited new CFTC research showing that “flash” volatility spikes have become increasingly common, with 35 spike events so far this year in core futures products such as corn, gold, WTI crude oil, E-Mini S&P and Euro FX.

Traditionally, large global money center banks served to reduce such market volatility by buying and selling reserves of securities and other financial instruments to take advantage of short-term anomalies in market prices. Their balance sheets served as market “shock absorbers” in times of market turbulence. … According to one senior banker, “Wall Street’s role as an intermediary and risk taker has shrunk.” This evolution appears to have been underway for some time.

A major catalyst of the reduced bank trading liquidity in financial markets is the new regulatory policies of U.S. and overseas bank prudential regulators imposed in the wake of the financial crisis. … Most of the new regulations have the effect of reducing the ability of medium and large financial institutions to deploy capital in trading markets. Combined, these disparate regulations are already sapping global markets of enormous amounts of trading liquidity. … In trying to stamp out risk, global regulators are instead harming trading liquidity. … We need to understand the full implications of constrained bank capital on market health and resiliency and the ability of financial markets to underpin sorely needed global economic growth.

5) Market concentration. Giancarlo writes: 

“A wave of consolidation is taking place across the financial landscape, concentrating the provision of essential market services within fewer and fewer institutions. It is now widely recognized that Dodd-Frank regulations have wiped out small community banks across America’s agriculture landscape. It is less well-acknowledged that large banks are broadly reducing market services, jettisoning less-profitable clients and increasing some fees on others in such critical areas as prime brokerage and administrative services. A similar narrowing of market services is taking place in the swaps market, where rising regulatory costs are driving consolidation of transaction service providers into a few remaining major SEFs [Swap Execution Facilities]. This wave of consolidation is perhaps most glaringly apparent in the case of America’s futures commission merchants (FCMs).”

6) De-globalization. Here, the concern is that because of regulatory differences across countries, global pools of capital are being splintered and rearranged to sidestep regulations–a game that often does not end well. Giancarlo writes: 

“Traditionally, users of swaps products chose to do business with global financial institutions based on factors such as quality of service, product expertise, financial resources and professional relationship. Now, those criteria are secondary to the question of the institution’s regulatory profile. Overseas market participants are avoiding financial firms bearing the scarlet letters of “U.S. person” in certain swaps products to steer clear of the CFTC’s problematic regulations. As a result, non-U.S. market participants’ efforts to escape the CFTC’s flawed swaps trading rules are fragmenting global swaps trading and driving global capital away from U.S. markets. … According to a survey conducted by the International Swaps and Derivatives Association (ISDA), the market for euro interest-rate swaps (IRS) has effectively split. Volumes between European and U.S. dealers have declined 55 percent since the introduction of the U.S. SEF [Swap Exchange Facility] regime. The average cross-border volume of euro IRS transacted between European and U.S. dealers as a percentage of total euro IRS volume was twenty-five percent before the CFTC put its SEF regime in place and has fallen to just ten percent since.

Fragmentation has exacerbated the already inherent challenge in swaps trading – adequate liquidity – and is increasing market fragility as a result. Fragmentation has led to smaller, disconnected liquidity pools and less efficient and more volatile pricing. Divided markets are more brittle, with shallower liquidity, posing a risk of failure in times of economic stress or crisis. Fragmentation has increased firms’ operational risks as they structure themselves to avoid U.S. rules and manage multiple liquidity pools in different jurisdictions …”

I’m not sure that Giancarlo’s six priorities are the right ones. Some seem to me more important than others, and in particular, I don’t know much detail about the regulation of the swaps market. But I find it easy to believe that while politicians and regulators are refighting the battles of 2008–and in particular, how to reduce the risk of future bailouts by the US Treasury or the Federal Reserve–we are giving insufficient thought to other issues of financial regulation that should at least be on the radar screen in 2016.

Interview with James Poterba: Retirement Finance and Tax Policy

David Price has an interview with James Poterba in Econ Focus, published by the Federal Reserve Bank of Richmond (2015, Second Quarter, pp. 24-29). Poterba is of course a long-time professor at MIT, and since 2008 he has also been president of the National Bureau of Economic Research. His research has focused on retirement finance and tax policy. Here are some snippets from Poterba’s comments in the interview that caught my eye.

On how the focus of tax policy research has changed over time

“One difference is that tax policy discussions and research on the economics of tax policy in the late 1970s and early 1980s were set in an environment with marginal tax rates that were significantly higher than those today. The United States had a top tax rate on capital income of 70 percent until 1981. The top marginal tax rate on earned income in the United States at the federal level was 50 percent until 1986. Today, the top statutory rate is 39.6 percent, although with some add-on taxes, the actual rate can be in the low 40s. We have been through periods when the top rate was as low as 28 percent. There was a lot more concern about the distortions associated with the capital income tax and with taxation in general.

“At the same time, the opportunities for studying how behavior was affected by the tax system when I started in this field were dramatically different than they are today. We relied primarily on cross-sectional household surveys. It’s hard to study how taxation affects behavior when the variation in the tax system is coming in differences in household incomes that place different taxpayers in different tax brackets, because income variation is related to so many other characteristics. Today, by comparison, the field of public finance has moved forward to use large administrative databases from many countries, often including tax returns. It is possible to do a much more refined kind of empirical analysis than when I started.”

On the economics of the deductibility of mortgage interest

“I began studying various aspects of the tax code and the housing market in my undergraduate thesis research in 1979-1980. This is an issue that’s near and dear to my heart. Let me note several things about the way we currently tax owner-occupied housing in the United States.

First, because mortgage interest is deductible only for households that itemize on their tax returns, and is then deductible at the household’s marginal income tax rate, the subsidy is larger for households with higher incomes and higher marginal tax rates than for those at lower levels.

Second, the real place where the tax code provides a subsidy for owner-occupied housing is not by allowing mortgage deductibility, because if you or I were to borrow to buy other assets — for instance, if we bought a portfolio of stocks and we borrowed to do that — we’d be able to deduct the interest on that asset purchase, too. If we bought a rental property, we could deduct the interest we paid on the debt we incurred in that context. What we don’t get taxed on under the current income tax system is the income flow that we effectively earn from our owner-occupied house, what some people would call the imputed income or the imputed rent on the house. The simple comparison is that if you buy an apartment building and rent it out, and you buy a home and you live in it, the income from the apartment building would be taxable income, but the “income” from living in your home — the rent you pay to yourself — is never taxed. This is the core tax distortion in the housing market: the tax-free rental flow from being your own landlord.

The natural way to fix this would be to compute a measure of imputed income on your home and include that in the income tax base. As a matter of practical tax policy, creating an income flow that taxpayers don’t see and saying they’re going to have to report that on their tax return is probably a nonstarter. A number of European countries tried in the past to do something in this direction, typically in a very simple way, saying something like 3 percent of the value of the home is included in your income for the year. Almost all of those countries have moved away from this. It therefore seems that the tax reform that one might like on conceptual grounds is probably not politically realistic.

Given that situation, other policy reforms that might move in the same direction probably deserve some attention. Property tax rates vary from place to place in the United States, but they are typically proportional to the value of the property. They are currently deductible from the income tax base. Disallowing property tax deductions would be one way of trying to move gently toward a tax system that was closer to one that taxed imputed rent. One could think about other potential reforms along similar lines, but eliminating the mortgage interest deduction turns out not to be the most natural fix here because it would create distortions between borrowing to buy a home and borrowing to buy other assets.”
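To see the asymmetry Poterba describes in numbers, here is a stylized Python sketch comparing a landlord and an owner-occupier of an identical property. All dollar amounts and the tax rate are hypothetical assumptions chosen for illustration only, not figures from the interview.

```python
# Hypothetical numbers illustrating the asymmetry Poterba describes:
# a landlord's rental income is taxed, but the "rent you pay to yourself"
# as an owner-occupier is not. Mortgage interest is deductible in both cases.
annual_rent = 15_000        # market rent the property would command (assumed)
mortgage_interest = 9_000   # interest paid this year (assumed)
marginal_rate = 0.30        # assumed marginal income tax rate

# Landlord: taxed on net rental income (rent minus deductible interest).
landlord_tax = marginal_rate * (annual_rent - mortgage_interest)

# Owner-occupier: imputed rent is untaxed, so the interest deduction is
# a pure tax saving against other income (for an itemizer).
owner_tax = marginal_rate * (0 - mortgage_interest)

print(f"landlord's tax on the property: ${landlord_tax:,.0f}")
print(f"owner-occupier's tax saving:    ${-owner_tax:,.0f}")
# The gap, marginal_rate * annual_rent, is the untaxed imputed rent.
print(f"gap from untaxed imputed rent:  ${landlord_tax - owner_tax:,.0f}")
```

Under these assumptions the gap between the two tax treatments equals the marginal rate times the market rent, which is why Poterba locates the distortion in untaxed imputed rent rather than in mortgage interest deductibility itself.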

On low levels of asset accumulation for retirees

The University of Michigan Health and Retirement Study, which is a comprehensive database on older individuals in the United States, begins tracking survey respondents in their mid-50s. It follows them until they die, so the last survey is typically filed about a year before the individual’s death. Nearly half of the respondents in the survey turn out to have very low levels of financial assets, under $20,000, as they get close to death. For any economist who’s been steeped in the life cycle model, the notion that you would reach such a low level of asset holdings, even at old ages and when health is poor, is surprising, particularly given the risk of out-of-pocket expenses for medical care or nursing homes. …

I have been quite interested in how individuals arrive at such low levels of financial assets. Many of those who have very little financial wealth as they approach death also reached retirement age with very little wealth. Nearly half of American retirees rely overwhelmingly on Social Security as their source of income. One often hears references to a three-legged stool of retirement support, which involves Social Security, private saving, and employer-based saving in a retirement plan. The reality is that nearly half of the population is relying on a one-legged stool, with Social Security as the sole leg. Only in the top half of the retiree wealth distribution does one start to see substantial amounts of support from private pension plans, and only in the top quarter is there substantial support from private saving outside retirement accounts.

On heterogeneity in preferences for retirement saving

"There is a lot of heterogeneity across individuals in their relative tastes for retirement versus pre-retirement consumption. Some people may regard the availability of more time in retirement as an opportunity to ramp up their spending, to travel, or to enjoy a second home. Others, particularly lower-income retirees, may devote more time to shopping sales for groceries and for other products they buy. They may spend more time cooking at home relative to consuming food away from home. They may scale back on clothing purchases because they are not required to buy clothes for work. The notion that spending time can save money is very evident in the behavior of some retirees.

One of the notable examples of this is that early research on the well-being of retirees pointed to the fact that expenditures on food declined for a number of retirees lower in the income distribution. That was often viewed as evidence that these individuals must be worse off when they retired than they were when they were working — they could not even sustain their food consumption. Yet more refined analysis of the food expenditure data found that caloric intake did not decline very much even for those for whom food expenditure declined. What happened? They shifted from buying takeaway meals at the grocery store or stopping at a restaurant to purchasing more food to prepare at home. Spending declined, but the ultimate objective — nutritious meals — was not affected nearly as much as the spending decline suggested. This is microeconomics in action, right? When money becomes scarce relative to time, individuals alter the way they choose to produce things.

Many individuals also have some reason for preserving financial assets until late in life. Textbook life cycle theory would lead you to expect that peak assets are basically observed at the moment when someone retires. After that, leaving aside bequest considerations and the possible need for late-life precautionary saving, retirees should begin to draw down assets as they move toward the end of life. But in fact, at least in the early years of retirement, the late 60s and into the 70s, many households that have financial assets experience relatively stable assets over that time. Some even appear to save more during this period. What's happening here? Well, either they are planning to leave these assets to the next generation or to make charitable gifts late in life, or they are saving for precautionary reasons like health care costs.

The times when financial assets are drawn down significantly are often when one spouse in a married couple dies, which may be associated with medical and other costs, and at the onset of a major medical episode. Health care shocks may lead to costs for caregivers who may not be covered by Medicare and other insurance. Retirement is not a homogeneous period from the standpoint of financial behavior: Behavior for the "young elderly" can be quite different from the behavior of those who are in their 80s and 90s."

For readers who would like more from Poterba on retirement finance issues, a useful starting point is the 2014 Ely Lecture on "Retirement Security in an Aging Population," which I discussed here.

Variability in Health Care Prices and Malfunctioning Markets

One of the signs of a well-functioning market is that prices for very similar goods or services are much the same in different places. For example, if a specific television model cost five times as much in one city as in another nearby city, it would be a pretty clear sign that something in the market was malfunctioning. Similarly, if an insurance company noticed that drivers in one urban area were experiencing five times as many accidents as seemingly similar drivers in another urban area, something would likely be wrong. When it comes to health care, this kind of evidence suggests that markets are not working well. Some recent persuasive evidence comes from "The Price Ain't Right? Hospital Prices and Health Spending on the Privately Insured," a December 2015 paper by Zack Cooper, Stuart Craig, Martin Gaynor, and John Van Reenen, published by a research collaboration called the Health Care Pricing Project.

The authors have access to data that "includes insurance claims for nearly every individual with employer-sponsored coverage from Aetna, Humana, and UnitedHealth," which is 88 million people or "27.6 percent of individuals with private employer-sponsored insurance in the US between 2007 and 2011." To my knowledge, this kind of big data offering geographically detailed transaction prices in hospitals and total health care spending for those with private insurance hasn't previously been available. Most previous work on health care prices and spending in different geographic areas has instead used Medicare data. Here are four of their main findings (citations omitted):

"First, health spending on the privately insured varies by more than a factor of three across the 306 hospital referral regions (HRRs) in the US." In addition, the results show that the geographic areas with higher spending on Medicare patients are not much correlated with the geographic areas with higher spending on private insurance patients. Indeed, some geographic areas that have been praised for keeping Medicare spending relatively low turn out to have higher spending on private insurance patients, raising the possibility that the private insurance patients are to some extent subsidizing the Medicare patients in those cities.

"To illustrate the point, policy-makers have identified Grand Junction, Colorado as an exemplar of health-sector efficiency based on analyses of Medicare data. In 2011, we find that Grand Junction does indeed have the third lowest spending per Medicare beneficiary among HRRs. However, in the same year, Grand Junction had the ninth highest average inpatient prices and the forty-third highest spending per privately insured beneficiary of the nation's 306 HRRs. Likewise, we find that other regions, such as Rochester, Minnesota, and La Crosse, Wisconsin, which have also received attention from policy-makers for their low spending on Medicare, are among the highest spending regions for the privately insured."

For illustration, here's a graph showing Medicare spending per patient on the horizontal axis and private insurance spending per patient on the vertical axis. The points represent different geographic areas. Notice there is every possible mix here: high on both, low on both, high on one and low on the other. Clearly, it could be misleading to draw conclusions about the overall pattern of US health care spending or prices across geographic regions based on Medicare data alone.
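The weak association between the two spending measures can be made concrete with a correlation coefficient. The sketch below uses entirely made-up per-beneficiary spending figures for six hypothetical regions (none of these numbers come from the paper); it simply shows the calculation behind statements like "not much correlated":

```python
# Hypothetical per-beneficiary spending (dollars) for six regions.
# These numbers are invented for illustration, not taken from the paper.
medicare = [6000, 7000, 8000, 9000, 10000, 11000]
private_ = [5000, 7000, 4000, 8000, 5000, 7000]

# Pearson correlation, computed directly from the definition
n = len(medicare)
mean_m = sum(medicare) / n
mean_p = sum(private_) / n
cov = sum((m - mean_m) * (p - mean_p)
          for m, p in zip(medicare, private_)) / n
sd_m = (sum((m - mean_m) ** 2 for m in medicare) / n) ** 0.5
sd_p = (sum((p - mean_p) ** 2 for p in private_) / n) ** 0.5
r = cov / (sd_m * sd_p)
print(round(r, 2))  # a weak correlation for these made-up regions
```

A correlation near zero means that ranking regions by Medicare spending tells you little about their ranking for the privately insured, which is exactly the trap the Grand Junction example illustrates.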

"Second, for the privately insured, hospital transaction prices play a large role in driving inpatient spending variation across HRRs." That is, in the geographic areas where private insurance spending is higher, the main underlying reason is not that patients are receiving a greater quantity of care, but that they (and their insurance companies) are paying higher prices for the specific care that is received.

"Third, we find that hospitals' negotiated transaction prices vary substantially across the nation. For example, looking at the most homogeneous of the seven procedures that we examine, hospital-based MRIs of lower-limb joints, the most expensive hospital in the nation has prices twelve times as high as the least expensive hospital. What is more, this price variation occurs across and within geographic areas. The most expensive HRR has average MRI prices for the privately insured that are five times as high as average prices in the HRR with the lowest average prices. Likewise, within HRRs, on average, the most expensive hospital has MRI negotiated transaction prices twice as large as the least expensive hospital. …"
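The across-versus-within decomposition the authors describe can be mimicked with a toy calculation. The prices below are hypothetical, chosen only so the arithmetic is easy to follow, and the three "HRRs" are fictional:

```python
# Hypothetical negotiated MRI prices (dollars) grouped into three
# fictional hospital referral regions; values are illustrative only.
prices = {
    "HRR_A": [400, 700, 900],
    "HRR_B": [600, 1100, 1300],
    "HRR_C": [1500, 2600, 4800],
}

# Nationwide spread: most vs. least expensive hospital anywhere
all_prices = [p for ps in prices.values() for p in ps]
overall_ratio = max(all_prices) / min(all_prices)

# Across-region spread: highest vs. lowest HRR *average* price
hrr_means = {h: sum(ps) / len(ps) for h, ps in prices.items()}
across_ratio = max(hrr_means.values()) / min(hrr_means.values())

# Within-region spread: max/min ratio inside each HRR, averaged
within_ratios = [max(ps) / min(ps) for ps in prices.values()]
avg_within = sum(within_ratios) / len(within_ratios)

print(overall_ratio, across_ratio, round(avg_within, 2))
```

The same maximum-to-minimum logic produces three distinct numbers: a nationwide spread, a spread across regional averages, and an average spread among hospitals within each region, which is how a 12-fold national gap can coexist with smaller 5-fold and 2-fold gaps.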

"Finally, we describe some of the observable factors correlated with hospital prices. Measures of hospital market structure are strongly correlated with higher hospital prices. Being for-profit, having more medical technologies, being located in an area with high labor costs, being a bigger hospital, being located in an area with lower income, and having a low share of Medicare patients are all associated with higher prices. However, even after controlling for these factors and including HRR fixed effects, we estimate that monopoly hospitals have 15.3 percent higher prices than markets with four or more hospitals. Similarly, hospitals in duopoly markets have prices that are 6.4 percent higher and hospitals in triopoly markets have prices that are 4.8 percent higher than hospitals located in markets with four or more hospitals. While we cannot make strong causal statements, these estimates do suggest that hospital market structure is strongly related to hospital prices."
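The kind of estimate quoted above, market-structure price gaps after absorbing HRR fixed effects, can be sketched with a within-group (demeaned) regression. Everything below is synthetic: the data generator simply bakes in markups equal to the quoted point estimates (0.153, 0.064, 0.048 log points) so that a toy estimator can recover them; it is not the paper's actual specification or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 40 hypothetical HRRs, 6 hospitals each.
n_hrr, per_hrr = 40, 6
n = n_hrr * per_hrr
hrr = np.repeat(np.arange(n_hrr), per_hrr)

# Market structure: 1 = monopoly, 2 = duopoly, 3 = triopoly, 4 = 4+ rivals
market = rng.integers(1, 5, size=n)

# Assumed log-point markups over the 4+ hospital baseline, set equal to
# the paper's quoted point estimates purely for illustration
markup = {1: 0.153, 2: 0.064, 3: 0.048, 4: 0.0}

hrr_effect = rng.normal(0, 0.3, size=n_hrr)   # regional cost differences
log_price = (np.log(10_000)
             + np.array([markup[m] for m in market])
             + hrr_effect[hrr]
             + rng.normal(0, 0.02, size=n))

def demean(x, groups, n_groups):
    """Subtract each group's mean (the 'within' transformation)."""
    sums = np.zeros(n_groups)
    np.add.at(sums, groups, x)
    counts = np.bincount(groups, minlength=n_groups)
    return x - (sums / counts)[groups]

# Dummies for monopoly/duopoly/triopoly relative to the 4+ baseline
X = np.column_stack([(market == k).astype(float) for k in (1, 2, 3)])
Xd = np.column_stack([demean(X[:, j], hrr, n_hrr) for j in range(3)])
yd = demean(log_price, hrr, n_hrr)

beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
for name, b in zip(("monopoly", "duopoly", "triopoly"), beta):
    print(f"{name}: {b:+.3f}")
```

Demeaning the price and the market-structure dummies within each HRR is algebraically equivalent to including a full set of HRR dummies, which is how fixed effects absorb region-level cost differences before comparing monopoly markets with more competitive ones.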

The overall contribution here is clear documentation of patterns that most of us already strongly suspected. Health care prices aren't being set in a well-functioning competitive market. Some geographic markets simply lack competition in health care. But in many others, prices for hospital services are negotiated in ways that produce big variations between geographic areas, with the price for a given service in one place running at a multiple of what it costs in other places. In turn, these price differences are linked to big differences in spending across geographic areas. Moreover, all of this is happening in a setting with potentially large cross-subsidies, potentially running in either direction, between private health insurance and public health insurance programs like Medicare. It's all part of the reason why designing policies to slow the rise in health care spending is so difficult.