The Macroeconomy in Ongoing Transition: Mervyn King

Mervyn King delivered a provocative and intriguing 2017 Martin Feldstein Lecture at the National Bureau of Economic Research on the subject of “Uncertainty and Large Swings in Activity” (July 19, 2017). A written version of the presentation is available in the NBER Reporter (2017: 3, pp. 1-10), or you can watch the lecture and download the slides here.

King’s argument has a broad conceptual message for the study of macroeconomics, which is that it is literally impossible to demonstrate with statistics that a certain macroeconomic model is “true.” After all, drawing statistical conclusions requires a decent sample size. But to get a sample size of, say, 20 or 30 recessions in a given economy would take a long time–perhaps several centuries–and it is not plausible that any macroeconomic model remains “true” over that length of time. As King puts it (footnotes omitted):

“Let me give a simple example. It relates to my own experience when, as deputy governor of the Bank of England, I was asked to give evidence before the House of Commons Select Committee on Education and Employment on whether Britain should join the European Monetary Union. I was asked how we might know when the business cycle in the U.K. had converged with that on the Continent. I responded that given the typical length of the business cycle, and the need to have a minimum of 20 or 30 observations before one could draw statistically significant conclusions, it would be 200 years or more before we would know. And of course it would be absurd to claim that the stochastic process generating the relevant shocks had been stationary since the beginning of the Industrial Revolution. There was no basis for pretending that we could construct a probability distribution. As I concluded, ‘You will never be at a point where you can be confident that the cycles have genuinely converged; it is always going to be a matter of judgment.’”
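King's arithmetic in the quotation above is easy to reproduce. Here is a minimal sketch; the 7-to-10-year cycle lengths are illustrative assumptions, not figures from the lecture:

```python
# Back-of-envelope version of King's claim: with business cycles lasting
# roughly 7 to 10 years (assumed range), how long until we accumulate the
# 20-30 observations needed for statistically significant conclusions?
years_needed = {
    (n_obs, cycle_years): n_obs * cycle_years
    for n_obs in (20, 30)
    for cycle_years in (7, 10)
}
for (n_obs, cycle_years), years in sorted(years_needed.items()):
    print(f"{n_obs} cycles of {cycle_years} years = {years} years of data")
# Even the most optimistic case (20 cycles of 7 years = 140 years) far
# exceeds the span over which any one macroeconomic model plausibly stays "true."
```

The point of the exercise is that every cell of this little table is longer than the lifetime of any plausible macroeconomic regime.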

In the current economic context, King takes aim at the macroeconomic perspective which argues that we had a pretty good model of the macroeconomy for the decades leading up to the Great Recession, but the model has broken down since then. The dashed line in the figure shows a trendline for growth of GDP per capita from 1960-2016. For the US economy, you can project that trendline backward to 1900: as I noted a few years ago, long-run US economic growth had a remarkable consistency from the late 19th century up through about 2010. However, the divergence from this long-run path in the aftermath of the Great Recession is quite noticeable. The trendline for the United Kingdom data doesn’t project backward as well, but it does show a similar divergence from that trend in recent years.

Looking at the economy as represented in this figure, one might plausibly argue that the macroeconomy can be modeled by a fairly steady long-run trend, with some up-and-down fluctuations of recessions and recoveries around that trend. However, King suggests that this appearance is misleading. Instead, the world economy saw a dramatic shift starting in the mid-1990s that has continued since then, which can be seen in the pattern of real interest rates over time. King says: 

“From around the time when China and the members of the former Soviet Union entered the world trading system, long-term real interest rates have steadily declined to reach their present level of around zero. Such a fall over a long period is unprecedented. … [M]uch effort has been invested in the attempt to explain why the “natural” real rate of interest has fallen to zero or negative levels. But there is nothing natural about a negative real rate of interest. It is simpler to see Figure 3 as a disequilibrium phenomenon that cannot persist indefinitely.”

In King’s view, the world economy is still adjusting to this shift, which has a number of components. High savings rates in China and Germany have helped to drive down real interest rates. Moreover, we have moved into a world economy where some countries have seemingly perpetual trade surpluses while others have seemingly perpetual trade deficits. King writes:

“Both the U.S. and U.K. had substantial current account deficits, amounting in aggregate to around $600 billion, and China and Germany had correspondingly large current account surpluses. All four economies need to move back to a balanced growth path. But far too little attention has been paid to the problems involved in doing that. With unemployment at low levels, the key problem with slower-than-expected growth is not insufficient aggregate demand but a long period away from the balanced path, reflecting the fact that relative prices are away from their steady-state levels. The result is that the shortfall of GDP per head relative to the pre-crisis trend path was over 15 percent in both the U.S. and U.K. at the end of last year. Policies which focus only on reducing the real interest rate miss the point; all the relevant relative prices need to change, too.”

In short, King is offering an alternative diagnosis of our current slow-growth woes. In his view, the slow growth is not due to a lingering hangover from the high debt burdens that preceded the Great Recession, nor to a decline in technological opportunities, nor to a shortfall in investment related to “secular stagnation.” Instead, King argues that what needs to happen is a shift in global prices between the tradeable and nontradeable goods sectors.

I’m adding King’s explanation to my list of mental possibilities for what forces are underlying the slow productivity growth in the US economy. But in addition, it’s worth adding a dose of King-size skepticism about economists who arrive at any macroeconomic situation with a given model fixed in their minds, rather than trying to figure out which model is most likely to apply in a given case. King notes:

“Imagine that you had a problem in your kitchen, and summoned a plumber. You would hope that he might arrive with a large box of tools, examine carefully the nature of the problem, and select the appropriate tool to deal with it. Now imagine that when the plumber arrived, he said that he was a professional economist but did plumbing in his spare time. He arrived with just a single tool. And he looked around the kitchen for a problem to which he could apply that one tool. You might think he should stick to economics. But when dealing with economic problems, you should also hope that he had a box of tools from which it was possible to choose the relevant one. And there are times when there is no good model to explain what we see. The proposition that ‘it takes a model to beat a model’ is rather peculiar. Why does it not take a fact to beat a model? And although models can be helpful, why do we always have to have one? After the financial crisis, a degree of doubt and skepticism about many models would be appropriate.”

A Range of International Poverty Lines

Poverty is inevitably a relative phenomenon; that is, whether you are “poor” depends on the typical standard of living in your society. For example, the World Bank has used a poverty line of $1.90 per person per day since 2015. Multiplying this poverty line by a family of three, for 365 days in a year, equates to an annual poverty line of about $2,080 for that family. For comparison, the US poverty line in 2016 for a three-person family with a parent and two children would be $19,337.
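The annualization is simple arithmetic; a quick sketch, using only the figures quoted above:

```python
# Annualizing the World Bank line: $1.90 per person per day,
# for a family of three, over 365 days.
line_per_person_per_day = 1.90
family_size = 3
annual_line = line_per_person_per_day * family_size * 365
print(f"Annual poverty line, family of three: ${annual_line:,.0f}")  # ~$2,080

# Comparison with the 2016 US line for a three-person family.
us_line_2016 = 19_337
print(f"US line is about {us_line_2016 / annual_line:.1f}x the World Bank line")
```

The roughly ninefold gap between the two lines is the point of the comparison that follows.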

It would take some odd mixture of cluelessness, heartlessness, and moral blindness to argue that poverty in the United States or other high-income countries should be defined in the same way as in low-income countries. But by similar logic, it seems unsuitable to use the same poverty line for what the World Bank would classify as “low-income” countries with a per capita GDP of less than $1,005 per year (for example, Afghanistan, Ethiopia, and Haiti), “lower middle income” countries with a per capita GDP between $1,006 and $3,955 (like Bangladesh, Nicaragua, and Nigeria), and “upper middle-income” countries with a per capita GDP from $3,956 to $12,235 (like Mexico, China, and Turkey). Thus, the World Bank is now planning to use “A Richer Array of Poverty Lines,” in the words of Francisco Ferreira.

The figure shows per capita income on the horizontal axis, with the groups of countries separated by income level. The corresponding poverty line for each country as determined by that country is plotted on the vertical axis. The horizontal line shows an average poverty line for the countries within that income group.

The underlying data for national poverty lines is from an article by Dean Jolliffe and Espen Beer Prydz, “Estimating international poverty lines from comparable national thresholds,” which appeared in the Journal of Economic Inequality (2016, 14, pp. 185-198). An ungated version is available from the World Bank here.

Trade, Technology, and Job Disruption

Both technological developments and international trade can disrupt an economy, and in somewhat similar ways, but many people have very different reactions to these forces. To illustrate the point, I sometimes pose this question:

There’s a US company which has developed a new technology that allows it to make a certain product more cheaply. This company hires some additional workers, but the other firms trying to make that same product don’t have the technology, so they lay off workers or even go bankrupt. Should steps be taken to ban or limit the use of this new technology?

Pause for thought. The usual reaction that emerges from the discussion is that we can’t hope to freeze technology in place. Ultimately, we don’t want to be a society with lots of workers who light gas streetlamps, or who operate telegraphs, or who plow fields with oxen. Sure, it’s important to have social policies to cushion the transition to new industries, but overall, we need to be facilitating new technology rather than blocking it.

All of which is fair enough, but here’s the kicker. Now you discover that the “new technology” from the US firm is that it is importing more cheaply from a foreign provider. The same disruption of the US labor force is occurring, but as a result of an expansion of international trade rather than as a result of technology. Personally, my response to the economic disruption of trade is essentially the same as my response to the economic disruption of technology: that is, I believe in assisting the transition for dislocated workers no matter the reason behind the dislocation. But for many people, their reaction to economic disruption is different depending on whether the underlying cause is technology or trade.

These arguments are renewed and refreshed in a couple of recent publications. J. Bradford DeLong has written “When Globalization Is Public Enemy Number One” in the most recent issue of the Milken Institute Review (Fourth Quarter 2017, 19:4, pp. 22-31). Also, the World Trade Report 2017 from the World Trade Organization is centered on the theme, “Trade, Technology, and Jobs.”

As a starting point, here’s a figure from DeLong’s paper about the rise of globalization. The red line shows the sum of exports and imports compared to world GDP. The first explosion of globalization starting in the 19th century, and the more recent rise of globalization, are both readily apparent.
But of course, a rise in trade isn’t the only economic change taking place. Brad points out that the fall in blue-collar and manufacturing jobs was well underway back in the 1950s and 1960s, well before globalization had restarted in force–because of changes in technology. Indeed, I’ve written before about “Automation and Job Loss: The Fears of 1964” (December 1, 2014). Brad readily admits that the shock of increased trade with China starting around 2001 was an important event, and of course the Great Recession had a powerful effect on jobs too. But overall, he writes:

By his calculations, only a very minor part of the decline in blue-collar jobs since 1948 is about international trade: it’s mostly about technological change, and to some extent about the rising strength of economies in other parts of the world and misjudgments of macroeconomic policy by the US government.

“To repeat, because it bears repeating: globalization in general and the rise of the Chinese export economy have cost some blue-collar jobs for Americans. But globalization has had only a minor impact on the long decline in the portion of the economy that makes use of high-paying blue-collar labor traditionally associated with men. … Pascal Lamy, the former head of the World Trade Organization, likes to quote China’s sixth Buddhist patriarch: ‘When the wise man points at the moon, the fool looks at the finger.’ Market capitalism, he says, is the moon. Globalization is the finger.”

Given that comment from Lamy, it is perhaps unsurprising that the World Trade Report 2017 takes a position similar to DeLong’s. There are roughly a jillion examples of how technology both improves productivity and disrupts job markets. The report summarizes:

“By making some products or production processes obsolete, and by creating new products or expanding demand for products that are continuously innovated, technological change is necessarily associated with the reallocation of labour across and within sectors and firms. Such technology-induced reallocations affect workers differently, depending on their skills or on the tasks they perform. ICTs tend to be used more intensively and more productively by skilled workers than by unskilled workers. Automation tends to affect routine activities more than non-routine activities, because machines still do not perform as well as humans when it comes to dexterity or communication skills. … [T]he labour market effects of technology are relatively more favourable to skilled workers and to workers performing tasks that are harder to automate.”

What about the worry that technology will lead to a dramatic reduction in the total number of jobs? Obviously, this prediction is not an extrapolation from history. The US and world economy have been experiencing technological growth in a serious way for a couple of centuries, and there is no long-run downward trend in the total number of jobs. Why is that? The report offers these reasons (citations omitted):

“The view that the new technological advances in artificial intelligence and robotics will not lead to a ‘jobless future’ is based on historical experience. Although each wave of technological change has generated technological anxiety and led to temporary disruptions with the disappearance of some tasks and jobs, other jobs have been modified, and new and often better jobs have eventually been developed and filled through three interrelated mechanisms.

“First, new technological innovations still require a workforce to produce and provide the goods, services and equipment necessary to implement the new technologies. Recent empirical evidence suggests that employment growth in the United States between 1980 and 2007 was significantly greater in occupations encompassing more new job titles.

“Second, the new wave of technologies may enhance the competitiveness of firms adopting these technologies by increasing their productivity. These firms may experience a higher demand for the goods or services they produce, which could imply an increase in their labour demand. Several empirical studies … find that the adoption of labour-saving technologies did not reduce the overall labour demand in European countries and other developed economies.

“Finally, … the upcoming technological advances may complement some tasks or occupations and therefore increase labour productivity, which could lead to either higher employment or higher wages, or both. The new workers and/or those benefitting from a pay rise may increase their consumption spending, which in turn tends to maintain or raise the demand for labour in the economy. Recent empirical evidence suggests that the use of industrial robots at the sector level has led to an increase in both labour productivity and wages for workers in Australia, 14 European countries, the Republic of Korea and the United States.”

It’s of course impossible to prove that future patterns will be similar. But the historical evidence suggests that finding ways to stimulate and work with technology is a better path to prosperity than trying to limit or block it.

In the discussion of trade and jobs, the report readily admits that trade (like technology) causes economic change and dislocation. After a substantial discussion of the empirical evidence, here are some conclusions from the report:

“First, evidence consistently shows that the welfare gains from trade are considerably larger than the costs. Effects on aggregate employment are minor and tend to be positive. The net effect on welfare depends on the magnitude of adjustment costs and trade gains. But existing evidence evaluates costs to be just a fraction of the gains.

“Second, the debate over the labour market effects of import competition needs to be qualified. While some manufacturing jobs may be lost in some local labour markets, other jobs may be created in other zones or in the services sector. When researchers take these effects into account their findings suggest a positive overall effect of trade on employment. Similar results are found when input-output linkages are taken into account or when the response of the labour supply to increased real wages is accounted for. Clearly, those who lose jobs because of import competition are not necessarily the same workers who get new jobs in exporting firms, because they are likely to have different skillsets or limited labour mobility. These adjustment costs need to be taken into account, but without losing sight of the overall picture.

“Third, there is evidence that export opportunities are associated with employment growth. In developing countries, improved access to foreign markets has contributed to the movement of workers away from agriculture and towards services and manufacturing, as well as away from household businesses toward firms in the enterprise sector, and away from state-owned firms toward private domestic and foreign-owned firms. Although more should be done to understand how labour markets in least-developed countries (LDCs) are affected by trade opening, there is evidence that the involvement of LDCs in GVCs [global value chains] has been a vehicle for developing employment opportunities.

“Fourth, trade offers opportunities for better-paid jobs. A significant share of jobs is related to trade, either through exports or imports, and both exporters and importers pay higher wages. This is because trading is a skills-intensive activity. International trade requires the services of skilled workers, who can ensure compliance with international standards, manage international marketing and distribution, and meet the demanding standards of customers from high-income countries; and trade leads to the selection of more productive firms and provides firms with an incentive to upgrade their technology. There is evidence that better access to foreign markets benefits exporting firms and thus their workers. This in turn positively affects regions where these firms are located, as well as occupations that are intensively used by these firms.

“As regards the evidence on the impact of trade on wage dispersion, there is evidence that by increasing the demand for skills, trade contributes to wage differences between high- and low-skilled workers. … It is also worth noting that most of the existing analysis fails to account for the fact that most of the gains from trade opening come through a reduction in prices. Workers are also consumers. Trade impacts their well-being not only through changes in the wage received, but also through changes in the price of the goods that they consume. Given that most of the gains from trade opening through the consumption channel accrue to lower-income groups, failing to account for the income-group specific price changes overestimates the impact on wage disparity.”

For some additional discussion of concerns that technology (or trade) would decimate the number of jobs, see:

How Food Banks Use Markets

“Imagine that someone gave you 300 million pounds of food and asked you to distribute it to the poor—through food banks—all across the United States. The nonprofit Feeding America faces this problem every year. The food in question is donated to Feeding America by manufacturers and distributors across the United States. As an example, a Walmart in Georgia could have 25,000 pounds of excess tinned fruit at one of its warehouses and give it to Feeding America to distribute to one of 210 regional food banks. How should this be accomplished?”

Contemplate your answer for a moment. Canice Prendergast discusses how Feeding America used to tackle this problem, and how it switched to a market-oriented solution, in “How Food Banks Use Markets to Feed the Poor,” which appears in the Journal of Economic Perspectives (Fall 2017, 31:4, pp. 145-62).

One piece of context that is useful here is that the 210 regional food banks all have local donors, and they typically get a majority of their food from those donors. The question here is how to allocate the additional food donations received at the national level. Here’s how Prendergast describes the earlier system:

“Until 2005, Feeding America had a method of allocating resources that is fairly common among not-for-profits: a “wait your turn” system, where it gave out food based on a food bank’s position in a queue. The queue was determined by the amount of food that a food bank had received compared to a measure of need called the “Goal Factor,” which is (roughly) the number of poor in a food bank’s area compared to the national average. The formula is more nuanced than a simple head count, as it distinguishes between usage rates for those below the poverty line, between 100 and 125 percent of the poverty line, and between 125 and 185 percent. When a food bank’s position in the queue was high enough, it would receive a call or email from Feeding America to say that it had been assigned a “load.” The load had to be collected from the donor, and food banks were (and remain) liable for transportation costs. The food bank had 4–6 hours to say “yes” or “no.” After a food bank was offered food, its position in the queue would be recalculated, as its measure of food received relative to need would change. If it turned down the offer, the load would go to the next food bank in the queue. This mechanism had been used since the late 1980s, and it allocated 200–220 million pounds of food each year from 2000 to 2004. Feeding America did not distinguish much between different kinds of food, so that each food bank on average got a similar product mix from them (though randomly a food bank could get lucky or unlucky in whether it would get food that was popular among participants).”

The rationale for a system like this one is pretty clear, and so were the practical difficulties. For example, it was quite possible for a food bank in Idaho to be offered a large donation of potatoes when it already had lots in stock. It could take a few days to work out who might get a certain donation of, say, fresh produce–in which time it could spoil. Some food banks have lots of local donors, while others do not, but the Goal Factor approach–based on the number of poor people in the area–doesn’t take this into account. And so on.

Feeding America put together a committee to consider alternatives. “The group consisted of eight food bank directors, three staff from Feeding America, and four University of Chicago faculty.” Prendergast gives a sense of the tone of the early interactions, when the Chicago faculty started talking about market approaches, in this way:

John Arnold, a member of the redesign group who was for many years Director of the Feeding America Western Michigan Food Bank, said to me once near the start of the process: “I am a socialist. That’s why I run a food bank. I don’t believe in markets. I’m not saying I won’t listen, but I am against this.”

This situation is clearly not one in which a pure cash-based market is going to serve the desired function. But the committee came up with a market-related approach, which it called the Choice System. An internal currency called “shares” was created and allocated to food banks using the same Goal Factor criterion.

Now what happens is that Feeding America holds an internal sealed-bid auction twice a day, Monday through Friday, at 12 and 4 o'clock. On a typical day, there can be 50 truckloads of food donated, 25,000 pounds apiece. The loads are posted online at least two hours before the auction.

The practical advantages of this approach are manifold; here are a few of them:

  • Food banks can bid on the specific things they need, rather than being offered stuff they don’t need. For example, food banks often put a higher value on dry goods that will last well, like cereal or pasta, or on supplies like disposable plates and tableware. 
  • Food banks have some ability to borrow “shares,” or for several smaller food banks to bid jointly. 
  • If a food bank has extra local donations, it can offer them to the Choice System and receive additional shares. 
  • Food banks can bid more on donations that are geographically close to them (remember, the food bank is responsible for transportation costs). 
  • If a food bank doesn’t need anything that is being donated right now, it doesn’t bid, and carries its shares over to the next auction. 
  • A food bank in an area with a low level of local donations can focus its bidding on loads that have a high nutritional value and calorie count, but aren’t as attractive to other food banks. 
  • In a few cases, no food bank really wants a certain donation, but it’s important not to upset potential donors, so food banks can bid negative shares–that is, they can receive shares in exchange for picking up that particular load. 
  • Each night, the “shares” that were spent that day are reallocated among all the food banks, using a formula related to the “Goal Factor.” 
  • However, if a food bank with lots of local donations–and thus no need to bid–accumulates a certain number of shares, then it doesn’t receive any additional shares above that level, on the basis that it clearly doesn’t need them. 
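As a rough illustration of the mechanism in the bullets above, here is a toy sketch of a sealed-bid, first-price share auction with nightly recycling of spent shares. All names, balances, bids, and Goal Factors are invented for illustration; the real Choice System is considerably more elaborate (joint bids, borrowing, and so on).

```python
# Toy sketch of the "Choice System" described above (illustrative only).

def run_auction(loads, bids, balances):
    """Sealed-bid, first-price auction: each load goes to the highest bidder.
    Bids may be negative: a bank is effectively paid in shares to haul away
    an unwanted load."""
    spent = 0.0
    for load in loads:
        offers = bids.get(load, {})
        if not offers:
            continue  # no bidder: the load simply isn't allocated today
        winner = max(offers, key=offers.get)
        price = offers[winner]
        balances[winner] -= price  # pays its bid (gains shares if negative)
        spent += price
    return spent

def recycle_shares(balances, goal_factors, spent, cap=None):
    """Nightly redistribution: shares spent that day flow back to the banks
    in proportion to their Goal Factors; banks already at the cap get none."""
    total = sum(goal_factors.values())
    for bank, gf in goal_factors.items():
        if cap is not None and balances[bank] >= cap:
            continue
        balances[bank] += spent * gf / total

balances = {"A": 100.0, "B": 100.0, "C": 100.0}
goal_factors = {"A": 1.0, "B": 2.0, "C": 1.0}   # B serves a poorer area
loads = ["cereal", "surplus_pickles"]
bids = {
    "cereal": {"A": 30.0, "B": 45.0},       # desirable dry goods draw high bids
    "surplus_pickles": {"C": -10.0},        # negative bid: C is paid to take it
}
spent = run_auction(loads, bids, balances)  # B pays 45; C receives 10
recycle_shares(balances, goal_factors, spent)
```

The design choice worth noticing is the closed loop: shares spent in the morning return to the system at night, so the currency rations priority across food banks without any money changing hands.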

Prendergast describes in his paper, with specific empirical details, how the system has worked in practice. But for many economist-readers, the key theme here will be the interaction of local information, incentives, and bidding, working together as a mechanism for efficient allocation. It may not be possible to have a central authority with better altruistic intentions than Feeding America. But when it comes to allocating scarce resources, the decentralized market-like mechanism performs considerably better.

Boston, the 2024 Olympics, and the Power of Economics

On January 8, 2015, the US Olympic Committee chose the city of Boston from among four finalists to be the US city that would compete for the right to host the 2024 Summer Olympic games. By July, the USOC had retracted the invitation. What happened? Andrew Zimbalist, who had a ringside seat for the controversy from his position at Smith College as well as a professional interest as a researcher in sports economics, tells the story in “Boston Takes a Pass on the Olympic Torch: Scholarly research does sometimes have a positive effect on public policy,” which appears in the Fall 2017 issue of Regulation magazine (pp. 28-33).

Part of the issue was a lack of transparency so complete that it blended into outright disinformation. For example, a group called Boston 2024 had submitted Boston’s proposal to the USOC, but the proposal was not publicly released. The mayor of Boston, Marty Walsh, without a vote of the city council or a public debate, signed a “joinder agreement” that committed the city to accept all terms of the US Olympic Committee and the International Olympic Committee if the city was chosen.

As the details came out, they weren’t pretty. As Zimbalist reports:

“One such term [of the joinder agreement] was that the city would provide a financial guarantee to cover any deficits in the event of a cost overrun or revenue shortfall. … The 2012 Games in London alone had a nearly threefold overrun, with a final cost in excess of $18 billion. Given that background and the fact that the entire Boston city budget was only $2.7 billion, it was not a trivial matter that Walsh had signed this agreement.”

Other elements of the plan turned out to include a gag rule: “The City, including its employees, officers, and representatives, shall not make, publish, or communicate to any Person, or communicate in any public forum, any comments or statements (written or oral) that reflect unfavorably upon, denigrate or disparage, or are detrimental to the reputation or stature of, the IOC, the IPC, the USOC, the IOC Bid, the Bid Committee, or the Olympic or Paralympic movement. …”

Other requirements turned out to involve tax breaks, shutting down the Boston Common, and more:

“Another IOC requirement was that the city clear all its public billboards so they would be available for IOC marks as well as those of IOC sponsors. Still another requirement was that all activities connected to constructing the Olympic venues and infrastructure, the sale of tickets, and income to the athletes would be tax-exempt. …

“The initial plan called for constructing the beach volleyball venue in the middle of Boston Common. While this might have produced nice images for international television, it was viewed as heresy in Boston. The Common is enjoyed by thousands of Bostonians every day for strolls and recreation. To make room for the beach volleyball facility, dozens of trees would have to be felled and months of pre- and post-Games disruption would render the Common unusable. The bid also called for $5.2 billion in public transportation infrastructure investment. Bid supporters claimed that those investments were already planned and funded. It turned out, however, that they were little more than unapproved and unfunded conceptual designs. Further, Bill Straus, the co-chair of the state legislature’s transportation committee, said on local television that the actual costs of the projects would exceed $13 billion. … The bid identified the Columbia Point area of southeast Boston as the future home of the Olympic Village. The Widett Circle area, south of South Station, would be the location of the Olympic Stadium. Among other problems with these sites, the bid claimed that the existing property owners had been contacted and were on board with the repurposing of their land. Upon learning of the bid’s intentions, the affected landowners stated that they knew nothing about the plans. … Further, the bid identified no developers who were interested in building the proposed venues, nor found any community ready to host either the Velodrome or the Aquatic Center, and counted on Harvard and MIT to host various competitions while the schools disavowed any interest in doing so.”

But maybe these issues could have been negotiated or worked out, if the financial picture for a city hosting the Summer Games had not been so generally grim. As Zimbalist reports:

“[T]he typical host of the Summer Games experiences costs on the order of $15–20 billion, yet receives only $3–5 billion in revenue—not a very salubrious financial balance. The IOC propaganda machine will claim that any short-term financial losses will be offset by long-term gains. Most notably, the host city will be put on the world map, occasioning growth in tourism, trade, and foreign investment. Those are nice thoughts, but there is little evidence from academic research that they ever materialize.

\”First, most Olympic host cities are already on the world map. People and businesses that have the resources and interest to travel internationally already know about the city and its allurements. Second, Olympic hosts often experience a decrease in tourism during the Games as travelers stay away from the congestion, inconvenience, high prices, and security issues. Hotel occupancy may drop even more because most cities expand lodging capacity appreciably in anticipation of an elusive tourism bonanza. Third, the tourists who do attend the Games return home and tell their friends, neighbors, and relatives about the exciting 100-yard dash or swimming relay they watched; they rarely tell stories about the cultural or culinary attractions of the host city. Thus, tourism loses its most effective propagator: word of mouth. Fourth, exposure on the world stage does not necessarily burnish a city’s image; instead, it may tarnish it—just ask Mexico 1968, Munich 1972, Montreal 1976, Athens 2004, Sochi 2014, and Rio 2016.

\”The long-term effects, in fact, may well be negative. After spending billions of dollars on Olympic-related construction, the host city then faces the challenge of what to do with the venues after the Olympics leave town.\”

As the cost and revenue estimates of the Boston 2024 organizing effort came out, it seemed clear that the costs were systematically underestimated and revenues systematically overestimated.

The Olympic Games are often viewed as an honor, and so cities (and countries) have lined up to host them. But you don\'t pay the bills with honor, and the research of Zimbalist and others has documented that large shares of the cost of the Games have usually fallen to taxpayers. For more discussion, see \"The Economics of Hosting the Olympics\" (May 13, 2016).

One can imagine an alternative model of the Summer Olympics, in which the competition between cities was over how to hold the Games at the lowest cost, using existing facilities as much as possible. Maybe (and I know this is crazy talk) the focus could even shift from promotionalism to the actual athletes and events. The 2024 Games will be held in Paris, and Los Angeles is on the docket for the 2028 Games. It will be interesting to see if those cities can negotiate the process in a way that holds down the typical losses.

Fall 2017 Journal of Economic Perspectives On-line

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon was launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available on-line, from the current issue back to the first issue. Here, I\'ll start with the Table of Contents for the just-released Fall 2017 issue, which in the Taylor household is known as issue #122. Below that are abstracts and direct links for all of the papers. I will blog more specifically about some of the papers in the next week or two, as well.

___________________

Symposium: Health Insurance and Choice

\”Delivering Public Health Insurance through Private Plan Choice in the United States,\” by Jonathan Gruber
The United States has seen a sea change in the way that publicly financed health insurance coverage is provided to low-income, elderly, and disabled enrollees. When programs such as Medicare and Medicaid were introduced in the 1960s, the government directly reimbursed medical providers for the care that they provided, through a classic \”single payer system.\” Since the mid-1980s, however, there has been an evolution towards a model where the government subsidizes enrollees who choose among privately provided insurance options. In the United States, privatized delivery of public health insurance appears to be here to stay, with debates now focused on how much to expand its reach. Yet such privatized delivery raises a variety of thorny issues. Will choice among private insurance options lead to adverse selection and market failures in privatized insurance markets? Can individuals choose appropriately over a wide range of expensive and confusing plan options? Will a privatized approach deliver the promised increases in delivery efficiency claimed by advocates? What policy mechanisms have been used, or might be used, to address these issues? A growing literature in health economics has begun to make headway on these questions. In this essay, I discuss that literature and the lessons for both economics more generally and health care policymakers more specifically.
Full-Text Access | Supplementary Materials

\”Selection in Health Insurance Markets and Its Policy Remedies,\” by Michael Geruso and Timothy J. Layton
Selection (adverse or advantageous) is the central problem that inhibits the smooth, efficient functioning of competitive health insurance markets. Even—and perhaps especially—when consumers are well-informed decision makers and insurance markets are highly competitive and offer choice, such markets may function inefficiently due to risk selection. Selection can cause markets to unravel with skyrocketing premiums and can cause consumers to be under- or overinsured. In its simplest form, adverse selection arises due to the tendency of those who expect to incur high health care costs in the future to be the most motivated purchasers. The costlier enrollees are more likely to become insured rather than to remain uninsured, and conditional on having health insurance, the costlier enrollees sort themselves to the more generous plans in the choice set. These dual problems represent the primary concerns for policymakers designing regulations for health insurance markets. In this essay, we review the theory and evidence concerning selection in competitive health insurance markets and discuss the common policy tools used to address the problems it creates. We emphasize the two markets that seem especially likely to be targets of reform in the short and medium term: Medicare Advantage (the private plan option available under Medicare) and the state-level individual insurance markets.
Full-Text Access | Supplementary Materials

\”The Questionable Value of Having a Choice of Levels of Health Insurance Coverage,\” by Keith Marzilli Ericson and Justin Sydnor
In most health insurance markets in the United States, consumers have substantial choice about their health insurance plan. However, additional choice is not an unmixed blessing, as it creates challenges related to both consumer confusion and adverse selection. There is mounting evidence that many people have difficulty understanding the value of insurance coverage, like evaluating the relative benefits of lower premiums versus lower deductibles. Also, in most US health insurance markets, people cannot be charged different prices for insurance based on their individual level of health risk. This creates the potential for well-known problems of adverse selection because people will often base the level of health insurance coverage they choose partly on their health status. In this essay, we examine how the forces of consumer confusion and adverse selection interact with each other and with market institutions to affect how valuable it is to have multiple levels of health insurance coverage available in the market.
Full-Text Access | Supplementary Materials

Symposium: From Experiments to Economic Policy

\”From Proof of Concept to Scalable Policies: Challenges and Solutions, with an Application,\” by Abhijit Banerjee, Rukmini Banerji, James Berry, Esther Duflo, Harini Kannan, Shobhini Mukerji, Marc Shotland and Michael Walton
The promise of randomized controlled trials is that evidence gathered through the evaluation of a specific program helps us—possibly after several rounds of fine-tuning and multiple replications in different contexts—to inform policy. However, critics have pointed out that a potential constraint in this agenda is that results from small \”proof-of-concept\” studies run by nongovernment organizations may not apply to policies that can be implemented by governments on a large scale. After discussing the potential issues, this paper describes the journey from the original concept to the design and evaluation of scalable policy. We do so by evaluating a series of strategies that aim to integrate the nongovernment organization Pratham\’s \”Teaching at the Right Level\” methodology into elementary schools in India. The methodology consists of reorganizing instruction based on children\’s actual learning levels, rather than on a prescribed syllabus, and has previously been shown to be very effective when properly implemented. We present evidence from randomized controlled trials involving some designs that failed to produce impacts within the regular schooling system but still helped shape subsequent versions of the program. As a result of this process, two versions of the programs were developed that successfully raised children\’s learning levels using scalable models in government schools. We use this example to draw general lessons about using randomized control trials to design scalable policies.
Full-Text Access | Supplementary Materials

\”Experimentation at Scale,\” by Karthik Muralidharan and Paul Niehaus
This paper makes the case for greater use of randomized experiments \”at scale.\” We review various critiques of experimental program evaluation in developing countries, and discuss how experimenting at scale along three specific dimensions—the size of the sampling frame, the number of units treated, and the size of the unit of randomization—can help alleviate the concerns raised. We find that program-evaluation randomized controlled trials published over the last 15 years have typically been \”small\” in these senses, but also identify a number of examples—including from our own work—demonstrating that experimentation at much larger scales is both feasible and valuable.
Full-Text Access | Supplementary Materials

\”Scaling for Economists: Lessons from the Non-Adherence Problem in the Medical Literature,\” by Omar Al-Ubaydli, John A. List, Danielle LoRe and Dana Suskind
Economists often conduct experiments that demonstrate the benefits to individuals of modifying their behavior, such as using a new production process at work or investing in energy saving technologies. A common occurrence is for the success of the intervention in these small-scale studies to diminish substantially when applied at a larger scale, severely undermining the optimism advertised in the original research studies. One key contributor to the lack of general success is that the change that has been demonstrated to be beneficial is not adopted to the extent that would be optimal. This problem is isomorphic to the problem of patient non-adherence to medications that are known to be effective. The large medical literature on countermeasures furnishes economists with potential remedies to this manifestation of the scaling problem.
Full-Text Access | Supplementary Materials

Articles

\”How Food Banks Use Markets to Feed the Poor,\” by Canice Prendergast
A difficult issue for organizations is how to assign valuable resources across competing opportunities. This work describes how Feeding America allocates about 300 million pounds of food a year to over two hundred food banks across the United States. It does so in an unusual way: in 2005, it switched from a centralized queuing system, where food banks would wait their turn, to a market-based mechanism where they bid daily on truckloads of food using a \”fake\” currency called shares. The change and its impact are described here, showing how the market system allowed food banks to sort based on their preferences.
Full-Text Access | Supplementary Materials

\”Brexit: The Economics of International Disintegration,\” by Thomas Sampson
On June 23, 2016, the United Kingdom held a referendum on its membership in the European Union. Although most of Britain\’s establishment backed remaining in the European Union, 52 percent of voters disagreed and handed a surprise victory to the \”leave\” campaign. Brexit, as the act of Britain exiting the EU has become known, is likely to occur in early 2019. This article discusses the economic consequences of Brexit and the lessons of Brexit for the future of European and global integration. I start by describing the options for post-Brexit relations between the United Kingdom and the European Union and then review studies of the likely economic effects of Brexit. The main conclusion of this literature is that Brexit will make the United Kingdom poorer than it would otherwise have been because it will lead to new barriers to trade and migration between the UK and the European Union. There is considerable uncertainty over how large the costs of Brexit will be, with plausible estimates ranging between 1 and 10 percent of UK per capita income. The costs will be lower if Britain stays in the European Single Market following Brexit. Empirical estimates that incorporate the effects of trade barriers on foreign direct investment and productivity find costs 2–3 times larger than estimates obtained from quantitative trade models that hold technologies fixed.
Full-Text Access | Supplementary Materials

\”Enrollment without Learning: Teacher Effort, Knowledge, and Skill in Primary Schools in Africa,\” by Tessa Bold, Deon Filmer, Gayle Martin, Ezequiel Molina, Brian Stacy, Christophe Rockmore, Jakob Svensson and Waly Wane
School enrollment has universally increased over the last 25 years in low-income countries. Enrolling in school, however, does not assure that children learn. A large share of children in low-income countries complete their primary education lacking even basic reading, writing, and arithmetic skills. Teacher quality is a key determinant of student learning, but not much is known about teacher quality in low-income countries. This paper discusses an ongoing research program intended to help fill this void. We use data collected through direct observations, unannounced visits, and tests from primary schools in seven sub-Saharan African countries to answer three questions: How much do teachers teach? What do teachers know? How well do teachers teach?
Full-Text Access | Supplementary Materials

\”Population Control Policies and Fertility Convergence,\” by Tiloka de Silva and Silvana Tenreyro
Rapid population growth in developing countries in the middle of the 20th century led to fears of a population explosion and motivated the inception of what effectively became a global population-control program. The initiative, propelled in its beginnings by intellectual elites in the United States, Sweden, and some developing countries, mobilized resources to enact policies aimed at reducing fertility by widening contraception provision and changing family-size norms. In the following five decades, fertility rates fell dramatically, with a majority of countries converging to a fertility rate just above two children per woman, despite large cross-country differences in economic variables such as GDP per capita, education levels, urbanization, and female labor force participation. The fast decline in fertility rates in developing economies stands in sharp contrast with the gradual decline experienced earlier by more mature economies. In this paper, we argue that population-control policies likely played a central role in the global decline in fertility rates in recent decades and can explain some patterns of that fertility decline that are not well accounted for by other socioeconomic factors.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

Parsing Trump\’s Deregulation Plans

Many people have visceral but opposite reactions to the word \"regulation.\" Some have an immediately positive reaction to almost any mention of regulation, in a belief that it is likely to be a necessary corrective. Others have an immediately negative reaction, in the belief that it is likely to be a wasteful and perhaps even harmful overreaction. Me, I\'m just a wishy-washy guy who thinks that some regulations can be useful, while others are misguided. On the off chance that there are a few more like me out there, how should we be reacting to the Trump administration\'s deregulation agenda?

Ted Gayer, Robert Litan, and Philip Wallach provide an overview in \"Evaluating the Trump Administration’s Regulatory Reform Program\" (published by the Center on Regulation and Markets at Brookings, October 2017). For a discussion of potential benefits from regulatory reform, one of the first reports from the Council of Economic Advisers in the Trump administration is \"The Growth Potential of Deregulation\" (October 2, 2017). I\'ll draw on both reports here.

Gayer, Litan, and Wallach start off this way: \”In his first week in office, President Trump issued Executive Order 13771, which aims to `manage the costs associated with the governmental imposition of private expenditures required to comply with Federal regulations.\’ It requires that `for every new regulation issued, at least two prior regulations be identified for elimination, and that the cost of planned regulations be prudently managed and controlled through a budgeting process.\’” To be clear, there are two separate ideas here: the two-for-one reform and the \”budgeting process\” reform. Versions of both of these proposals have been implemented in other countries, so one can look to that experience.

But before digging into those details, it\'s worth noting that this \"regulatory reform\" or \"deregulation\" effort is in some ways quite different from the two main types of policy changes that have previously gone under the name of \"regulatory reform.\" One example is the \"deregulation\" that happened in the late 1970s and into the 1980s, when rules setting prices and limiting competition were removed in airlines, banking, trucking, railroads, and some other industries. Two-for-one and a budgeting process for regulations are not this kind of deregulation.

The other main kind of regulatory reform, which dates back to the 1970s but has been pursued by each president since then, has typically involved requirements that cost-benefit analysis be carried out before certain kinds of rules are implemented. This may seem like an obvious step, but as Gayer, Litan, and Wallach point out, \”many regulatory statutes—those that authorize or compel agencies to issue rules in the first place—do not permit agencies to balance benefits against costs, or effectively limit their ability to do so.\” In contrast, \”President Trump’s approach—both the “two-for-one” requirement and the regulatory budget—breaks from the historical emphasis on maximizing net benefits and improving the use of and commitment to benefit-cost analysis, and instead offers a blunt institutional reform to rein in regulatory costs (without attention to benefits). This is presented as necessary to counter the political impulses that may produce excessive or inefficient regulation, or regulation that could be better designed (for example by using market-like incentives rather than commands and controls).\”

So what are the potential gains from a regulatory reform agenda? And how have the two-for-one and regulatory budget policies worked in other countries?

The Council of Economic Advisers report on \”The Growth Potential of Deregulation\” offers an advocate\’s case, but at least for me, the evidence is a mixed bag. As already noted, I\’m certainly someone who is prepared to believe that misguided or overly severe regulation can impose costs in excess of benefits in some cases, and the report mentions a number of examples. Airline deregulation saves US consumers something like $18 billion per year. Finding ways to streamline the way in which the Food and Drug Administration evaluates new drugs can both save money for producers and also save lives of patients. State-level rules for occupational licensing can impose costs on workers and the economy, and local rules that limit housing supply can make housing less affordable and limit geographic mobility into certain urban markets. These kinds of examples are fairly bipartisan: for example, the Council of Economic Advisers under the Obama administration expressed concerns about the extent of occupational licensing and restrictions on housing supply, too. 

The CEA cites evidence from a study by the OECD which ranks countries by the extent of their product market regulations. A few decades ago, the US was usually close to the least-regulated, which is no longer true.

The federal government reports each year the number of “economically significant” rules, those that have an annual economic effect of $100 million or more. \"Under the Obama Administration, the government promulgated 494 new rules deemed `economically significant,\' and under the W. Bush Administration, the government issued 358 such rules. Under the Clinton Administration, those agencies issued 361 such rules …\"

A basic eyeball test also suggests that certain kinds of regulation have become quite extensive. I sometimes note that from Pearl Harbor in December 1941 to the surrender of Germany and Japan in mid-1945 was about 3 1/2 years. In any major US city, it can in some cases take longer than that to go through the process of obtaining building permits for a single high-rise.

But while I\'m certainly open to the idea that a lot of regulations should be reduced or eliminated, the top-line big-number estimates in the CEA report are on a weak footing. For example, the first line of the \"Summary\" begins: \"Excessive regulation is a tax on the economy, costing the U.S. an average of 0.8 percent of GDP growth per year since 1980.\" If true, this would be a truly enormous amount. It implies that the $18 trillion US economy would be about one-third larger this year–call it an extra $6 trillion in output this year and every year moving forward–without the regulatory burden.

But in the report, the evidence for this claim rests entirely on a single unpublished working paper by Bentley Coffey, Patrick A. McLaughlin, and Pietro Peretto called \"The Cumulative Cost of Regulations\" (Mercatus Working Paper, April 2016). The study measures the extent of US regulation in each industry by using machine learning algorithms to classify chunks of text in the Code of Federal Regulations. The authors look at three kinds of investment in 22 different industries (out of the nearly 100 industries in this US government data). They acknowledge that it\'s hard to figure out cause-and-effect between regulation and investment: for example, some regulations might require new investment (say, in antipollution equipment), while others might reduce investment by making it less profitable. But they do their best to build up a model that relates investment and regulation. They then estimate what would have happened in the US economy if regulation had been frozen in place since 1980.

This seems to me like a worthy exercise to do as a working paper and a piece of academic research. But there\'s no way in the world that it should serve as the main justification for a national program of regulatory reform. The results depend heavily on how the authors measure regulation, on the structure of the model, and on what other potential confounding factors aren\'t taken into account. As the authors note, the study does not look at potential benefits of such regulation in terms of health or safety.

Indeed, the CEA report itself mentions a 2004 government study from the Netherlands which suggests that cutting administrative costs of regulation by 25% can increase real GDP by a total of up to 1.7% in the long run. Without endorsing that particular study, I\'ll note that the claim that lower regulation can lead to a long-run increase totaling 1.7% is a LOT different from the claim that the economy would have grown 0.8 percentage points faster every single year for decades.
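A quick back-of-the-envelope check shows why the two claims are so far apart: a growth effect compounds, while a one-time level effect does not.

```python
# Compound the CEA's claimed 0.8 percentage point per year growth drag
# from 1980 to 2017 and compare it with a one-time 1.7 percent level gain.
years = 2017 - 1980                    # 37 years
compounded = 1.008 ** years            # counterfactual economy relative to actual
gap_percent = (compounded - 1) * 100   # roughly 34 percent, i.e. about one-third,
                                       # or about $6 trillion on an $18 trillion GDP
one_time_gain = 1.7                    # the Netherlands study's long-run total
```

The compounded figure of roughly one-third of GDP is about twenty times the one-time 1.7 percent gain, which is the difference I am pointing to in the paragraph above.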

Another claim in the CEA report is that federal regulation imposes costs of $2.03 trillion per year. (This is of course quite a bit smaller than the $6 trillion in annual costs implied by the earlier study, but maybe they can be considered roughly the same.) The source for this claim is an appendix in a study by W. Mark Crain and Nicole V. Crain, “The Cost of Federal Regulation to the U.S. Economy, Manufacturing, and Small Business,” done for the National Association of Manufacturers (2014). The main study reports survey results from manufacturers on how they experience the burden of regulation. But back in Appendix C, there is a description of a calculation which measures the extent of regulation by using three components of the Global Competitiveness Index put together by the World Economic Forum. The authors then run a regression with per capita income as the dependent variable and the measure of regulation as an explanatory variable, also including as control variables the ratios of trade, tax revenue, and capital investment to GDP, along with the dependency ratio. Some of these are lagged one year; some are in log values. I commented on a study by the same authors using this methodology a few years back in \"Does Federal Regulation Impose Costs of $1.75 Trillion Per Year?\" (June 6, 2011). My bottom line is that there\'s certainly nothing wrong with doing it as an illustrative exercise. But there\'s no reason to trust the results very much, either.
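To see the general shape of this kind of exercise, here is a minimal sketch of a cross-country regression of (log) income on a regulation index. All of the numbers and variable names below are invented for illustration; they are not the Crain study's actual data or specification, which includes several control variables.

```python
# Simple one-variable OLS, fit by hand: log income regressed on a
# regulation index (higher index = lighter regulation). Synthetic data.
def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return alpha, beta

reg_index = [3.1, 4.0, 4.5, 5.2, 5.8, 6.1]   # hypothetical country scores
log_income = [9.5, 9.9, 10.1, 10.4, 10.6, 10.8]

alpha, beta = ols(reg_index, log_income)
# The headline "cost of regulation" comes from multiplying a coefficient
# like beta by an assumed change in the index and by total GDP, which is
# why modest specification choices can swing the result by trillions.
```

The point of the sketch is not the particular coefficient, but that the dollar figure at the end of such a calculation inherits every assumption baked into the regression.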

Thus, my overall sense of the CEA report is that it makes a soft case that regulatory reform is worth considering and may have real gains. However, when the report tries to make these gains look enormous, it ends up relying on shaky calculations from little-known working papers and reports.

What has been the experience of other countries with the Trump proposals? The Gayer, Litan, and Wallach report notes that both Canada and the UK have adopted versions of two-for-one and a regulatory budget. For example, here\'s an overview of what happened in Canada (citations omitted):

\"In 2001, the Canadian province of British Columbia committed to reducing the regulatory burden by one-third in three years. It required that each ministry establish a baseline of its existing inventory of “regulatory requirements,” defined as “an action or step that must be taken, or information that must be provided to access services, carry out business, or meet legal responsibilities under provincial legislation, regulation, policy, or forms.” The initial count found over 330,000 such regulatory actions. In order to meet the three-year goal, each cabinet minister was required to match any new regulatory requirement with a plan to eliminate at least two offsetting requirements. In 2004, having surpassed the goal and achieving a 40 percent reduction in regulatory requirements, British Columbia imposed a regulatory cap mandating no net increase in regulatory requirements. This requirement has been extended three times, most recently to last until 2019, leading to a total reduction in regulatory requirements of 49 percent since 2001. Motivated by the success in British Columbia, in 2012 the Conservative-led Government of Canada released the Red Tape Reduction Action Plan, which required that for any new or amended regulation, regulators offset “an equal amount of administrative burden cost” from existing regulations. It also required at least one regulation be eliminated for every new one introduced. …

\”In January 2011, the Conservative and Liberal Democrat coalition government of the United Kingdom instituted a regulatory reform plan that included a “one-in, one-out” system in which each department must assess the “net cost to business” of complying with any proposed regulation, ensure that the cost estimate is validated by an independent committee of experts (known as the Regulatory Policy Committee), and find a deregulatory measure that offsets the cost of the new regulation. In January 2013, the requirement was increased to a “one-in, two-out” rule, which requires that the deregulatory measures must offset twice the cost of the new regulation, not merely eliminating two other regulations, as Canada has required and the Trump administration has just adopted. In March 2016, the United Kingdom ramped up its regulatory offset program again, to become “one-in, three-out,” again referring to costs, not the number of regulations. The “net cost to business” under the United Kingdom’s approach is computed as the “annualized direct net cost to business, incorporating direct recurring costs and transition costs, direct recurring benefits, and direct transitional benefits, spread out over the lifetime of the policy”. The “deregulatory” measures pursued as offsets in the U.K. system often do not actually remove any regulatory requirements, but rather make regulatory compliance less costly, for instance by streamlining paperwork processes so that businesses could make some filings without the need of a lawyer … The United Kingdom’s regulatory initiative, however, does not use a social welfare yardstick, and thus does not seek to maximize the net benefits of its regulations to society as a whole.\”

In short, these general types of regulatory reforms have worked reasonably well in Canada and the UK. What about the Trump proposals in a US context? Here, Gayer, Litan, and Wallach are more cautious.

In US law, an existing regulation that has been duly created through a legislative and regulatory process cannot just be wiped out by the president. Instead, the two regulations that are supposed to be wiped out for each new regulation will instead need to go through a process of comment and review. As the authors note: \"Since some of the proposals to eliminate rules will undoubtedly invite legal challenges, there will be considerable uncertainty as to whether those rules really will end up being wiped from the Code of Federal Regulations. Any approach that attempted to bypass notice-and-comment procedures would likely run afoul of the APA [Administrative Procedure Act], leading to defeats in courts and likely political backlash as well. Even when standard procedures are observed, there is no guarantee that attempts to roll back regulatory requirements will pass APA muster.\"

The idea of a regulatory budget is that government will first set a certain total amount of regulatory cost that is acceptable. Then it will attempt to allocate those regulatory costs in the way that brings the greatest total benefit. Of course, measuring the total cost and benefit of a regulation is a tricky business. Gayer, Litan, and Wallach write:
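The underlying logic of a regulatory budget can be sketched as a constrained-choice problem: given a cap on total regulatory cost, keep the set of rules that delivers the most benefit. The rules, costs, and benefits below are invented purely for illustration; the hard part in practice is measuring those numbers at all.

```python
from itertools import combinations

# Hypothetical rules: (name, annual cost, annual benefit), all figures invented.
regs = [("A", 4, 9), ("B", 3, 5), ("C", 2, 6), ("D", 5, 7)]
CAP = 9  # the regulatory "budget": total allowable cost

# Exhaustively choose the subset of rules that maximizes total benefit
# while keeping total cost within the cap.
best = max(
    (subset
     for r in range(len(regs) + 1)
     for subset in combinations(regs, r)
     if sum(cost for _, cost, _ in subset) <= CAP),
    key=lambda subset: sum(benefit for _, _, benefit in subset),
)
```

In this toy version the budget forces a trade-off: rule D is beneficial on its own, but it crowds out a combination of cheaper rules with higher total benefit. The real-world difficulty, as the discussion below notes, is that estimating the cost and benefit entries is itself contentious.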

\”It is fair to say that the Trump administration has launched the most ambitious regulatory budgeting program in human history—just a tremendous undertaking. Whereas Canada and the United Kingdom have managed to get their programs up and running with some success thanks to relying on relatively simple metrics of cost, in the United States the regulatory budget will attempt to get much closer to real social costs, at the expense of adding considerable complexity. That makes it potentially more meaningful and deep reaching, but also more likely to bog down and create a massive bureaucratic headache to go with those that already exist.\”

Again, I\'m certainly open to the notion that lots of regulations have too little justification, and that regulatory reform can be socially beneficial. But the Trump proposals run a real risk that they will simply freeze all existing regulations in place: it will be too difficult to remove old rules, and also too difficult to justify implementing new ones. As Gayer, Litan, and Wallach note:

\"But if all that the Trump administration’s regulatory budget turns out to be is an elaborate moratorium on new actions, that would represent a missed opportunity for would-be deregulators. The whole purpose of instituting a forcing mechanism is to confront the problem of accumulated and outdated regulatory requirements that burden U.S. businesses, thereby freeing Americans’ energies for productive purposes and unleashing economic growth. If this administration’s initiative ends up being nothing more than a pause in further accumulation—of both good and bad prospective regulations—it would stand as a harsh judgment on the likelihood that existing regulation would ever be seriously reformed.\"

Some Watery Economics

The quantity of fresh water on planet Earth doesn’t change much over time, but the demands on that water and how it is distributed can cause considerable stress. A group of World Bank authors–Richard Damania, Sébastien Desbureaux, Marie Hyland, Asif Islam, Scott Moore, Aude-Sophie Rodella, Jason Russ, and Esha Zaveri–provide an overview of the global situation in “Uncharted Waters: The New Economics of Water Scarcity and Variability” (October 2017). As the report notes:

“The future will be thirsty and uncertain. Already more than 60 percent of humanity live in areas of water stress where available supplies cannot sustainably meet demand. If water is not managed more prudently—from source, to tap, and back to source—the crises observed today will become the catastrophes of tomorrow.

“Projections suggest that by 2050, global demand for water will increase by 30–50 percent, driven by population growth, rising consumption, urbanization, and energy needs. At the same time, water supplies are limited and under stress from negligent management, growing pollution, degraded watersheds, and climate change. As many as 4 billion people already live in regions that experience severe water stress for at least part of the year. With populations rising, these stresses will mount. …

“Water stress is emerging as a growing and at times underappreciated challenge in many countries of the developed and developing worlds. One in four cities, with a total of US$4.2 trillion in economic activity, is classified as water-stressed. Moreover, 150 million people live in cities with perennial water shortages, defined as having less than 100 liters per person per day of sustainable surface water or groundwater. In coming years, population growth and continuing urbanization will bring a 50–70 percent rise in the demand for water in cities. This will be fueled not only by the growing numbers of urban dwellers but also by lifestyles and consumption patterns that are more water-intensive. By 2050, almost 1 billion urban dwellers will live in water-stressed cities.”

Water problems hurt both rural and urban economies. In rural areas, the losses involve both reduced agricultural production and environmental consequences:

“Throughout much of the world, even moderate deviations from normal rainfall levels can cause large changes in crop yields. … Such variability is responsible for a considerable net loss of food production every year—enough to feed 81 million people every day, a population the size of Germany’s. … Rainfall shocks cascade consequences from declining agricultural yields to shrinking forest cover. Faced with declining agricultural productivity due to rainfall shocks, farmers often seek to recoup these losses by expanding cropland, at the expense of natural habitats. Rainfall variability can account for as much as 60 percent of the increase in the average rate of cropland expansion, and, as a result, is responsible for much of the pressure on forested areas.”

There’s less evidence on how water problems affect urban economies, but here’s a taste:

“While urban infrastructure is generally able to buffer residents against the effects of moderate rainfall shocks, cities are still at the mercy of large rainfall shocks. Further, while the immediate devastation caused by floods attracts much attention, droughts in cities may have the longer-lasting, more severe impact on firms and their employees. In Latin America, losses in income caused by a dry shock are four times greater than those of a wet shock. Droughts have poorly understood consequences within cities, causing higher incidences of diarrheal diseases, health impacts on young children, and an increased frequency of power outages.

“The performance of firms in cities is also affected by the availability of water. While the private sector’s reliance on transport and energy infrastructure is well established, little is known about the significance of water to firms. Findings in this book show that when urban water services are disrupted, whether by climate, inadequate infrastructure, or both, firms suffer significant reductions in their sales and employment. Particularly vulnerable are small and informal firms, a major source of employment in developing countries. The impacts of water supply and sanitation services in cities therefore extend beyond the widely documented effects on human health.”

The report also emphasizes at a number of places that prices need to play a role in addressing water issues, because of what the report calls “the paradox of supply”:

“The availability of irrigation typically provides both a buffer against rainfall variability and a significant boost to crop yields in normal years. However, in many dry regions of the world these systems fail to protect farmers from the impacts of droughts. Free irrigation water creates the illusion of abundance, which buoys the cultivation of water-intensive crops such as rice and sugarcane that are ultimately unsuited to these regions. The ironclad laws of demand and supply then dictate that when water is provided too cheaply, it is also consumed recklessly. As a result, crop productivity suffers disproportionately in times of dry shocks due to extraordinary water needs that cannot be met. This book demonstrates that this paradox of supply is a widespread problem in areas where water is scarce and its demand is uncontrolled.”

When water is underpriced, the undesired consequences just keep multiplying. At the most obvious level, pricing water encourages a degree of conservation. Another issue is that water-hungry crops are wrongly encouraged. Moreover, when water isn’t priced, then the (public or private) organizations that provide water can’t cover their costs. They become unable to attract outside investment capital for additional water storage or delivery systems, because with insufficient revenues, how can they repay the investment? When water is underpriced, water providers need to focus on how they will keep getting government subsidies–which are their financial life-blood–instead of how they serve customers. Here’s a selection of comments from the report on these themes:

“The supply of free or underpriced water in arid areas spurs the cultivation of water-intensive crops such as rice, sugarcane, and cotton, which in turn increases vulnerability to drought and magnifies the impacts of dry shocks. One well-known study found that access to the Ogallala aquifer in the United States induced a shift to water-intensive crops that increased drought sensitivity over time. The Aral Sea is a more extreme example of a resource that has gone from abundance to depletion within a generation. To increase cotton production, the then–Soviet government diverted rivers that fed the Aral Sea, and as a result it today holds less than one-tenth of its former volume. This book demonstrates that this paradox of supply is a more widespread phenomenon than was previously known and can be found at a global scale. …

“In cities, water pricing tends to be the simplest and most effective tool for compensating the service provider. At the same time, high water prices generally work in reducing city demand, and targeted subsidies or block tariffs can be strategically employed to ensure that the most vulnerable residents retain access to affordable water. When utilities need to recover costs through pricing, they also have an incentive to prevent wastage and revenue losses by fixing leaks in the system. In fact, a staggering 32 billion cubic meters of treated water is lost from urban systems around the world each year through leaks in the pipes. Half of these losses occur in developing countries, where customers frequently suffer from supply interruptions and poor water quality. Further, when water is priced appropriately, water utilities become beholden to their customers for generating revenue, rather than to political interests for providing subsidies. This increases their incentives to expand service and quality throughout cities, rather than to only politically connected communities.

“Utility cost recovery is also important for ensuring that utilities can secure adequate financing. Private financiers are reluctant to invest in utilities that are not self-standing and rely on government subsidies to stay afloat. As a result, private financing is unavailable to many utilities, or it must be backed by public guarantees, greatly reducing the utilities’ ability to invest in upgrading or expanding their infrastructure. In the developing world, cost recovery rates are abysmally low; in 2004, 89 percent of utilities in low-income countries and 37 percent of utilities in lower-middle-income countries charged tariffs that were too low to cover basic operation and maintenance costs, and little has changed since then. Closing this gap could greatly improve the ability of utilities to make investments that increase access to and the reliability of piped water. …

“[P]er capita reservoir storage has been declining since about 2000, partly because of poor management and loss of storage capacity to sedimentation. At the same time, the stage is set for a large increase in the world’s number of dams, projected to rise 16 percent by 2030, with storage volume increasing by about 40 percent. Estimates suggest that even an expansion of this scale may not suffice to meet future demand.”


Almost 10 Billion Hours of Government-Imposed Paperwork

“Under the Paperwork Reduction Act of 1995 (PRA), the Office of Management and Budget (OMB) is required to report to Congress on the paperwork burden imposed on the public by the Federal Government and efforts to reduce this burden.” The most recent such Information Collection Budget of the United States Government was produced by the Office of Management and Budget of the outgoing Obama administration and published in December 2016.

“In FY 2015, the public spent an estimated 9.78 billion hours responding to Federal information collections.” When the report breaks down these time costs by the government agency imposing them, by far the biggest contributor is the US Department of the Treasury–in large part reflecting the hours spent filling out tax forms.

Here’s a figure showing how the time burden has evolved in the last decade or so. The red line is the actual estimate. The green line shows that most of the changes are due to new rules, not to higher time costs for the already-existing earlier rules.

These time costs are just estimates by the federal agencies themselves, of course, and should be taken with a few tablespoons of salt. For example, if you dig into the details of the report, the drop in paperwork burden in 2010 arises not from changing any actual government paperwork burden, but rather because the US Department of the Treasury decided that its already-existing paperwork burdens took about 1.5 billion hours less than it had estimated the previous year.

But for what it’s worth, let’s take the estimate of 9.78 billion hours of paperwork burden in 2015 and put it into a little perspective. As a round-number estimate, say that a full-time employee works 2000 hours per year (that is, 40 hours/week and 50 working weeks in a year). If you divide the 9.78 billion hours by 2000 hours/year worked, it’s the equivalent of 4,890,000 full-time jobs.

Of course, in a number of cases no one is getting paid for the job of filling out federal paperwork requirements; for example, a person who pulls together financial records and fills out their own tax form doesn’t get paid for doing so. But a nonmonetary cost is still a cost. The US economy had about 142 million jobs in 2015. So measured in terms of time, federal paperwork costs alone were equal to roughly 1/30 of the entire US workforce.
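For readers who want to check the back-of-the-envelope arithmetic, it can be reproduced in a few lines, using the same round numbers as the text:

```python
# Convert the estimated federal paperwork burden into full-time-job equivalents,
# using the round numbers from the text.
paperwork_hours = 9.78e9   # hours spent on federal information collections, FY 2015
hours_per_fte = 2_000      # 40 hours/week * 50 working weeks/year for a full-time worker
us_jobs_2015 = 142e6       # approximate number of US jobs in 2015

fte_equivalent = paperwork_hours / hours_per_fte
print(f"{fte_equivalent:,.0f} full-time-equivalent jobs")          # 4,890,000
print(f"about 1/{us_jobs_2015 / fte_equivalent:.0f} of all jobs")  # about 1/29, i.e. roughly 1/30
```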

How to Increase Women’s Labor Force Participation

In labor market jargon, “prime age” refers to workers in the 25-54 age bracket. Looking at this group takes the focus off issues like whether more young adults are attending college rather than taking full-time jobs, and whether older adults are retiring sooner or later. In the US, labor force participation has been falling for prime-age male workers since the 1960s, and for prime-age female workers since the 1990s. Some of the reasons are similar across gender, like the low prospects, pay, and security of jobs for low-skilled workers. But some of the reasons are related to gender. A new ebook edited by Diane Whitmore Schanzenbach and Ryan Nunn, The 51%: Driving Growth through Women’s Economic Participation, offers a collection of readable essays with facts and concrete proposals about the labor force participation rate of US women (published by the Hamilton Project at the Brookings Institution, October 2017).

Here are a few of the facts that caught my eye in the first essay, “The Recent Decline in Women’s Labor Force Participation,” by Sandra E. Black, Diane Whitmore Schanzenbach, and Audrey Breitwieser.

As background, here’s a comparison of labor force participation rates for prime-age US men and women.

A half century ago, the labor force participation rate of prime-age women was slightly higher in the US than in other countries with high female labor force participation rates like France, Canada, and the United Kingdom. All of these countries were far above the rate for the OECD (high-income) countries as a group. But while women’s labor force participation rates in other countries have been creeping higher or not changing much, the US labor force participation rate for women in this age group has been declining since the 1990s, and is now close to the average for OECD countries.

One intriguing tidbit in the text discussion of this figure is that “a higher share of women’s employment in the United States is full time, compared with other OECD countries. The share of full-time work among women in the United States has trended up somewhat over time, indicating that the relative decline of U.S. women would be less pronounced if one examined hours worked rather than participation.”

Yet another tidbit is that the labor force participation of prime-age US women is not dramatically different depending on whether they are married or single, or whether they have children. A half century ago, single women with no children had a labor force participation rate of 80%, roughly double the rate for married women with children. While married women with children still have a noticeably lower labor force participation rate, it has risen dramatically over time and is now fairly close to the other married/not married, children/no children categories.

Papers in the rest of the volume consider a variety of policies that might help to raise the labor force participation of US women. I’ll mention three broad categories of such proposals here: changes in the tax code, paid leave, and support for child care.

In the category of tax code changes, Hilary Hoynes, Jesse Rothstein, and Krista Ruffini suggest “Making Work Pay Better Through an Expanded Earned Income Tax Credit,” which would be available to all low-income parents, but because women are more likely to be single parents, it would tend to have a bigger effect on women workers. Sara LaLumia discusses “Tax Policies to Encourage Women’s Labor Force Participation.” Her specific proposal is “a new second-earner deduction, equal to 15 percent of the earnings of a lower-earning spouse. The proposed deduction would raise the after-tax return to work for many wives, encouraging an increase in married women’s labor supply, and would reduce marriage penalties on average.”
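To see the mechanics of a second-earner deduction, here is a minimal sketch. Only the 15 percent deduction rate comes from LaLumia's proposal; the flat 25 percent marginal tax rate and the $40,000 earnings figure are hypothetical numbers chosen purely for illustration:

```python
# Illustrative sketch of how a second-earner deduction raises the after-tax
# return to work. The 15% deduction rate is from the proposal described in the
# text; the 25% marginal tax rate is a hypothetical assumption for this example.
def tax_savings_from_deduction(lower_earner_income, deduction_rate=0.15, marginal_rate=0.25):
    """Tax saved when a share of the lower-earning spouse's wages becomes deductible."""
    deduction = deduction_rate * lower_earner_income
    return deduction * marginal_rate

# A lower-earning spouse making $40,000 generates a $6,000 deduction, cutting the
# couple's tax bill by $1,500 at a 25% marginal rate: in effect, a higher
# after-tax wage for that spouse's work.
print(tax_savings_from_deduction(40_000))  # 1500.0
```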

Several of the proposals focus on expanded job leave: for example, Nicole Maestas writes about “Expanding Access to Earned Sick Leave to Support Caregiving” and Christopher J. Ruhm considers “A National Paid Parental Leave Policy for the United States.” Both policies reflect the social reality that time commitments for children and other caregiving tend to fall more heavily on women.

My own concern is that these proposals tend to mix together two different goals. One goal is to give households more support when family responsibilities are heightened, like a new child or a sick relative. Expanded job leave can accomplish that goal. The other goal is to increase the labor force participation rate and job prospects for women over time. The effect of expanded job leave on this goal is more ambiguous. In some countries with especially generous parental leave, like Italy, these provisions look more like an off-ramp for women to leave the labor force, rather than a mechanism to help women remain connected. I discussed this evidence in “Some Economics of Parental Leave” (March 3, 2017).

With that concern noted, the US is a considerable outlier among high-income nations in not having any national provisions for paid maternity leave or for paid parental and home care leave, as vividly shown in the table below from the Ruhm paper. Proposals for moderate leave, with moderate pay, are more likely to balance the concerns of supporting families in a time of stress while still keeping women attached to the labor force. A recent bipartisan proposal for a moderate parental leave law, which specifies funding and conditions, is discussed in “Facing the Costs of Paid Parental Leave” (June 12, 2017).


The final set of proposals looks at public support of child care: for example, Elizabeth U. Cascio considers “Public Investments in Child Care,” while Bridget Terry Long looks at the issue of college students who are also parents in “Helping Women to Succeed in Higher Education: Supporting Student-Parents with Child Care.”

The international evidence mentioned above does offer support for the idea that reliable and affordable child care helps women remain attached to the labor market. Here’s a figure from Cascio’s paper. The left-hand diagram shows that the share of mothers employed in families with income below $25,000 is about 60%, with median child-care costs around $2,000 per year. Only about 20% of these mothers use paid child care for very young children. In contrast, for mothers in families with income of $75,000 or more, about half of the mothers use paid child care for very young children at a median cost in the range of $7,000-$8,000 per year. The mothers in these higher income families have an employment rate of about 80% when children are very young, which rises as the children reach school-age.

One final issue, involving flexibility of hours both in the shorter and the longer term, is mentioned in several of the papers, but is not the subject of a direct proposal. Claudia Goldin’s Presidential Address at the 2014 meetings of the American Economic Association focused on “A Grand Gender Convergence: Its Last Chapter.” Part of her thesis concerned the relationship between hours worked and total pay. For example, does someone who works half as many hours get half the pay–or less for being a part-time worker? Does a high-powered lawyer or executive who works 50% more hours get 50% more pay–or considerably more as an important superstar employee? Goldin argued that workers who need more flexibility of hours (often women) are penalized in the labor market, while those who make outsized commitments of time (often men) receive outsized rewards. She offers the intriguing example of pharmacists, who are essentially paid by the hour even if they choose to work part-time for a few years. Perhaps not surprisingly, women were less than 10% of all pharmacists back in the 1960s, and are now more than half. Pharmacists are a very specific job where highly-skilled people can readily substitute for each other and shift hours. I do not have useful advice for how to make that model operate in other contexts. But I suspect that organizations which structure themselves to offer high-powered, career-path jobs to workers who may go through several extended periods of part-time work might be rewarded with an influx of highly motivated women workers.
