Dynamic Pricing: Uber, Coca Cola, Disneyland and Elsewhere

Dynamic pricing refers to the practice of changing prices in real time depending on fluctuations in demand or supply. Most consumers are inured to dynamic pricing in certain contexts. For example, when a movie theater charges more on a Friday or Saturday night than for an afternoon matinee, or when a restaurant offers an early-bird dinner special, or when mass transit buses or trains offer a lower fare during off-peak hours, or when airlines charge more for a ticket ordered one day before the flight rather than three months before, it doesn't raise many eyebrows.

In other cases, dynamic pricing is more controversial. One classic example: back in 1999, Coca-Cola experimented with vending machines that would automatically raise the price on hot days. The then-chairman, M. Douglas Ivester, pointed out that demand for a cold drink can increase on hot days and said: "So, it is fair that it should be more expensive. … The machine will simply make this process automatic." However, the reaction from customers stopped the experiment in its tracks. On the other hand, in 2012 certain Coca-Cola-owned vending machines in Spain were set to cut the price of certain lemonade drinks by as much as half on hot days. To my knowledge, there was no outcry over this policy.

Information technology is enabling dynamic pricing to become more widespread in a number of contexts. The online Knowledge magazine published by the Wharton School at the University of Pennsylvania has been publishing some readable commentary on dynamic pricing. "The Promise — and Perils — of Dynamic Pricing" (February 23, 2016) offers an overview of the arguments with links to some research. In "Frustrated by Surge Pricing? Here's How It Benefits You in the Long Run" (January 5, 2016), Ruben Lobel and Kaitlin Daniels discuss how it's important to see the whole picture: both higher prices at peak times and lower prices at other times. In "The Price Is Pliant: The Risks and Rewards of Dynamic Pricing" (January 15, 2016), Senthil Veeraraghavan looks at the choices that sellers face in considering dynamic pricing if they are taking their long-term relationships with customers into account.

Many of the most recent examples seem to involve the entertainment industry. For example, the St. Louis Cardinals baseball team uses "a dynamic pricing program tied to its ticketing system in which the team changes ticket prices daily based on such factors as pitching match-ups, weather, team performance and ticket demand." Some ski resorts are adjusting prices based on demand and recent snowfall. Disneyland recently announced a plan to raise admission prices by as much as 20% on days that are historically known to be busy, while lowering them on other days.
These examples are worthy of study: for example, one paper points out that if a seller only uses dynamic pricing to raise prices on busy days, but doesn't correspondingly lower prices to entice more people on non-busy days, it can end up losing revenue overall. But at the end of the day, it's hard to argue that these industries involve any great issue of fairness or justice. If you don't want to go to Disneyland or a certain ski resort, then don't go. Sure, sellers in the entertainment industry should be very cautious about a perception that they are jerking their customers around. But there's now an active online market for reselling tickets for a lot of entertainment events, and prices in that market are going to reflect last-minute supply and demand factors.
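To see why one-sided dynamic pricing can backfire, here is a toy calculation. The attendance and price figures are entirely invented for illustration (they are not from the paper): raising the busy-day price turns some visitors away, and that lost revenue is only recouped if slow days are discounted enough to attract extra visitors.

```python
# Toy illustration with invented numbers; nothing here comes from the paper.
def revenue(price, visitors):
    return price * visitors

# One busy day and one slow day under three pricing policies.
uniform   = revenue(100, 1000) + revenue(100, 400)  # flat $100 price
one_sided = revenue(120, 750)  + revenue(100, 400)  # raise busy days only
two_sided = revenue(120, 750)  + revenue(80, 650)   # ...and discount slow days

print(uniform, one_sided, two_sided)  # 140000 130000 142000
```

In this stylized example, raising only the busy-day price loses revenue relative to a flat price, while pairing the increase with a slow-day discount more than recovers it.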

The current controversies over dynamic pricing often seem to bring up Uber, with its policy of having fares that rise during peak times. Uber released a research paper in September 2015 called "The Effects of Uber's Surge Pricing: A Case Study," by Jonathan Hall, Cory Kendrick, and Chris Nosko. Part of the paper focuses on the evening of March 21, 2015, when Ariana Grande played a sold-out show at Madison Square Garden. When the concert let out, Uber prices surged: more specifically, the usual Uber price was raised by a multiple of "1.2 for 5 minutes, 1.3 for 5 minutes, 1.4 for 5 minutes, 1.5 for 15 minutes, and 1.8 for 5 minutes." Here's the pattern that emerged in the market.

The red dots show the pattern of people opening the Uber app after the concert; the red line is smoothed to show the overall pattern. The blue dots and blue line show actual ride requests. Notice that these rise, but not by as much, probably in part because some of those who saw the higher surge price decided it wasn't worth it and found another way of getting home. The green dots and green line show the rise in the number of Uber drivers in the area, presumably occurring in part because drivers were attracted by the surge price.

I don't think even the authors of the paper would claim that Uber surge pricing worked perfectly on the night of March 21, 2015. But it did get more cars on the streets, and it did mean that people willing to pay the price had an additional option for getting home.
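For concreteness, the surge schedule quoted from the paper can be written as a small step function of time. The multipliers and durations are the ones reported above; the helper function and the $20 base fare are my own illustration, not anything from the paper.

```python
# Multipliers and durations quoted from the Uber case study;
# the helper function itself is illustrative, not from the paper.
SURGE_SCHEDULE = [(1.2, 5), (1.3, 5), (1.4, 5), (1.5, 15), (1.8, 5)]

def surge_multiplier(minutes_after_start):
    """Multiplier in effect a given number of minutes after the surge
    began; 1.0 once the 35-minute surge window has ended."""
    elapsed = 0
    for multiplier, duration in SURGE_SCHEDULE:
        elapsed += duration
        if minutes_after_start < elapsed:
            return multiplier
    return 1.0

# A hypothetical $20 base fare requested 18 minutes in (the 1.5x window):
print(surge_multiplier(18) * 20.0)  # 30.0
```

A rider checking the app a few minutes earlier or later would have faced a different multiplier, which is presumably part of why some riders in the data looked at the price and chose not to request a ride.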

Those interested in a fuller analysis of Uber might want to track down "Disruptive Change in the Taxi Business: The Case of Uber," by Judd Cramer and Alan B. Krueger. (It's downloadable for free as Princeton Industrial Relations Working Paper #595, released December 2015, and was also released in March 2016 as National Bureau of Economic Research Working Paper 22083.) Their estimates suggest that "UberX drivers spend a significantly higher fraction of their time, and drive a substantially higher share of miles, with a passenger in their car than do taxi drivers." They write:

"Because we are only able to obtain estimates of capacity utilization for taxis for a handful of major cities – Boston, Los Angeles, New York, San Francisco and Seattle – our estimates should be viewed as suggestive. Nonetheless, the results indicate that UberX drivers, on average, have a passenger in the car about half the time that they have their app turned on, and this average varies relatively little across cities, probably due to relatively elastic labor supply given the ease of entry and exit of Uber drivers at various times of the day. In contrast, taxi drivers have a passenger in the car an average of anywhere from 30 percent to 50 percent of the time they are working, depending on the city. Our results also point to higher productivity for UberX drivers than taxi drivers when the share of miles driven with a passenger in the car is used to measure capacity utilization. On average, the capacity utilization rate is 30 percent higher for UberX drivers than taxi drivers when measured by time, and 50 percent higher when measured by miles, although taxi data are not available to calculate both measures for the same set of cities. Four factors likely contribute to the higher utilization rate of UberX drivers: 1) Uber's more efficient driver-passenger matching technology; 2) Uber's larger scale, which supports faster matches; 3) inefficient taxi regulations; and 4) Uber's flexible labor supply model and surge pricing, which more closely match supply with demand throughout the day."

However, I'd argue that the two up-and-coming examples of surge pricing that could have the biggest effect on the most people involve electricity and traffic jams. In the case of variable prices for electricity, a policy of charging more on hot days will encourage more people to ease back on their use of air conditioning at those times and look for opportunities to conserve, which in turn means less chance of power outages and less need to use expensive back-up generating capacity. A policy of charging higher tolls on congested roads will encourage people to find other ways to travel, and provide a market signal for when building additional lanes of highway is really worth doing. As these examples suggest, the economic theory behind dynamic pricing or "surge pricing" is well understood: when the quantity demanded of a good or service rises and falls at predictable times, broader social benefits emerge from charging more at peak times.

This economic logic even applies in what is surely the most controversial case of surge pricing: when prices of certain goods rise either just before or just after a giant storm or other disaster. The higher price, often attacked as "price gouging," gives buyers an incentive not to purchase and hoard the entire stock, and it gives outside sellers an incentive to hop in their pick-up trucks and vans and bring more of the product to the disaster area. What's worse than being in a disaster area and having to pay extra for certain key goods? Being in a disaster area where those goods aren't available at any price, because the price stayed low and they were sold out before you arrived.

The ongoing gains in information technology are only going to make dynamic pricing more common, because it will become ever easier both to track changes in demand, historically or in real time, and to adjust prices in real time (think of the ability to adjust electricity bills or road tolls, for example). Some of these changes will feel like abuses. For example, I wouldn't be surprised if some online retailers already have software in place so that if there is a demand surge for some product, the price jumps automatically. Of course, many of those who want to push back against companies that use surge pricing, like Uber, will have no problem personally using that same information technology to re-sell their tickets to a highly demanded or sold-out event at well above face value.

Automation and Job Loss: The Fears of 1927

As I've noted from time to time, blasts of concern over how automation would reduce the number of jobs have been erupting for more than 200 years. As one example, in "Automation and Job Loss: The Fears of 1964" (December 1, 2014), I wrote about what were called the "automation jobless" in a 1961 news story, and how John F. Kennedy advocated and Lyndon Johnson signed into law a National Commission on Technology, Automation, and Economic Progress. The Commission eventually released its report in February 1966, when the unemployment rate was 3.8%.

Here's an example of concerns about automation replacing labor from a speech given in 1927 by the US Secretary of Labor James J. Davis called "The Problem of the Worker Displaced by Machinery," which was published in the Monthly Labor Review of September 1927 (25:3, pp. 32-37, available through JSTOR). Before offering an extended quotation from Davis, here are a few quick bits of background.
  • When Davis delivered this speech in 1927, the extremely severe recession of 1920-21 was six years in the past, but between 1921 and 1927 the economy had experienced two milder recessions.
  • The unemployment rate in 1927 was 3.9%, according to the Historical Statistics of the United States.
  • At several points in his speech, Davis expresses deep concerns over immigration, and how much worse the job loss due to automation would have been if immigration had not been limited earlier in the 1920s. Both then and now, economic stress and concerns about economic transition seem to be accompanied by heightened concern over immigration.
  • Davis ends up with what many economists have traditionally viewed as the "right" answer to concerns about automation and jobs: that is, find ways to help workers who are dislocated in the process of technological innovation, but by no means try to slow the course of automation itself.
  • As a bit of trivia, Davis is the only person to serve as Secretary of Labor under three different presidents: Harding, Coolidge, and Hoover. 
Here's what Davis had to say in his 1927 talk.

"Every day sees the perfection of some new mechanical miracle that enables one man to do better and more quickly what many men used to do. In the past six years especially, our progress in the lavish use of power and in harnessing that power to high-speed productive machinery has been tremendous. Nothing like it has ever been seen on earth. But what is all this machinery doing for us? What is it doing to us? I think the time is ripe for us to pause and inquire.

"Take for example the revolution that has come in the glass industry. For a long time it was thought impossible to turn out machines capable of replacing human skill in the making of glass. Now practically all forms of glassware are being made by machinery, some of the machines being extraordinarily efficient. Thus, in the case of one type of bottle, automatic machinery produces forty-one times as much per worker as the old hand processes, and the machine production requires no skilled glass blowers. In other words, one man now does what 41 men formerly did. What are we doing with the men displaced?

"The glass industry is only one of many industries that have been revolutionized in this manner. I began my working life as an iron puddler, and sweated and toiled before the furnace. In the iron and steel industry, too, it was long thought that no machinery could ever take the place of the human touch; yet last week I witnessed the inauguration of a new mechanical sheet-rolling process with six times the capacity of the former method.

"Like the bottle machine, this new mechanical wonder in steel will abolish jobs. It dispenses with men, many of whom have put in years acquiring their skill, and take a natural pride in that skill. We must, I think, soon begin to think a little less of our wonderful machines and a little more of our wonderful American workers, the alternative being that we may have discontent on our hands. This amazing industrial organization that we have built up in our country must not be allowed to get in its own way. If we are to go on prospering, we must give some thought to this matter.

"Understand me, I am not an alarmist. If you take the long view, there is nothing in sight to give us grave concern. I am no more concerned over the men once needed to blow bottles than I am over the seamstresses that we once were afraid would starve when the sewing machine came in. We know that thousands more seamstresses than before earn a living that would be impossible without the sewing machine. In the end, every device that lightens human toil and increases production is a boon to humanity. It is only the period of adjustment, when machines turn workers out of their old jobs into new ones, that we must learn to handle them so as to reduce distress to the minimum.

"To-day when new machines are coming in more rapidly than ever, that period of adjustment becomes a more serious matter. Twenty years ago we thought we had reached the peak in mass production. Now we know that we had hardly begun. … In the long run new types of industries have always absorbed the workers displaced by machinery, but of late we have been developing new machinery at a faster rate than we have been developing new industries. Inventive genius needs to turn itself in this direction.

"I tremble to think what a state we might be in as a result of this development of machinery without the bars we have lately set up against wholesale immigration: If we had gone on admitting the tide of aliens that formerly poured in here at the rate of a million or more a year, and this at a time when new machinery was constantly eating into the number of jobs, we might have had on our hands something much more serious than the quiet industrial revolution now in progress.

"Fortunately we were wise in time, and the industrial situation before us is, as I say, a cause only for thought, not alarm. Nevertheless I submit that it does call for thought. There seems to be no limit to our national efficiency. At the same time we must ask ourselves, is automatic machinery, driven by limitless power, going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress? …

"We saved ourselves from the millions of aliens who would have poured in here when business was especially slack and unemployment high. In the old days we used to admit these aliens by the shipload, regardless of the state of the times. I remember that in my own days in the mill when a new machine was put into operation or a new plant was to be opened, aliens were always brought in to man it. When we older hands were through there was no place for us to go. No one had a thought for the man turned out of a job. He went his way forgotten.

"With a certain amount of unemployment even now to trouble us, think of the nation-wide distress in 1920-21 with the bars down and aliens flooding in, and nowhere near enough jobs to go round. Our duty, as we saw it, was to care as best we could for the workers already here, native or foreign born. Restrictive immigration enabled us to do so, and thus work out of a situation bad enough as it was. Now, just as we were wise in season in this matter of immigration, so we must be wise in sparing our people to-day as much as possible from the curse of unemployment as a result of the ceaseless invention of machinery. It is a thought to be entertained, whatever the pride we naturally take in our progress in other directions.

"Please understand me, there must be no limits to that progress. We must not in any way restrict new means of pouring out wealth. Labor must not loaf on the job or cut down output. Capital must not, after building up its great industrial organization, shut down its mills. That way lies dry rot. We must ever go on, fearlessly scrapping old methods and old machines as fast as we find them obsolete. But we can not afford the human and business waste of scrapping men. In former times the man suddenly displaced by a machine was left to his fate. The new invention we need is a way of caring for this fellow made temporarily jobless. In this enlightened day we want him to go on earning, buying, consuming, adding his bit to the national wealth in the form of product and wages. When a man loses a job, we all lose something. Our national efficiency is not what it should be unless we stop that loss.

"As I look into the future, far beyond this occasional distress of the present, I see a world made better by the very machines invented to-day. I see the machine becoming the real slave of man that it was meant to be. … We are going to be masters of a far different and better life."

I'll add my obligatory reminder here that just because past concerns about automation replacing workers have turned out to be overblown certainly doesn't prove that current concerns will also turn out to be overblown. But it is a historical fact that for the last two centuries, automation and technology have played a dramatic role in reshaping jobs, and have also helped to lower the average work-week, without leading to a jobless dystopia.

Remembering Lloyd Shapley: Surprised to Find Himself Speaking Economics

When I heard that 2012 Nobel laureate in economics Lloyd Shapley had died, I was reminded of Molière's well-known 1670 play, "Le Bourgeois Gentilhomme," in which a character named Monsieur Jourdain is astonished to find that he has been speaking prose all his life. Shapley was a mathematician, worked in a department of mathematics, and back in the early 1960s developed a theorem that cracked a problem in mathematics, only to find that he had been speaking economics all along. When Shapley won the Nobel, along with Alvin Roth, he said: "I consider myself a mathematician and the award is for economics. I never, never in my life took a course in economics."

Molière described Monsieur Jourdain's discovery that he was speaking prose like this:

MONSIEUR JOURDAIN: Please do. But now, I must confide in you. I'm in love with a lady of great quality, and I wish that you would help me write something to her in a little note that I will let fall at her feet.

PHILOSOPHY MASTER: Very well.

MONSIEUR JOURDAIN: That will be gallant, yes?

PHILOSOPHY MASTER: Without doubt. Is it verse that you wish to write her?

MONSIEUR JOURDAIN: No, no. No verse.

PHILOSOPHY MASTER: Do you want only prose?

MONSIEUR JOURDAIN: No, I don't want either prose or verse.

PHILOSOPHY MASTER: It must be one or the other.

MONSIEUR JOURDAIN: Why?

PHILOSOPHY MASTER: Because, sir, there is no other way to express oneself than with prose or verse.

MONSIEUR JOURDAIN: There is nothing but prose or verse?

PHILOSOPHY MASTER: No, sir, everything that is not prose is verse, and everything that is not verse is prose.

MONSIEUR JOURDAIN: And when one speaks, what is that then?

PHILOSOPHY MASTER: Prose.

MONSIEUR JOURDAIN: What! When I say, "Nicole, bring me my slippers, and give me my nightcap," that's prose?

PHILOSOPHY MASTER: Yes, Sir.

MONSIEUR JOURDAIN: By my faith! For more than forty years I have been speaking prose without knowing anything about it, and I am much obliged to you for having taught me that.

Economists often find that in trying to convey the precise meaning of their arguments, they are speaking in mathematics. Thus, it's perhaps not a huge surprise that a mathematician like Shapley might discover that he was speaking economics. In a 2012 post on "The 2012 Nobel Prize to Shapley and Roth" (October 17, 2012), I tried to lay out the basic result for which Shapley was honored.

The fundamental problem looks like this. Imagine that there are two groups, A and B. Members of Group A are to be matched with members of Group B. However, each member of Group A has likes and dislikes about the members of Group B, each member of Group B has likes and dislikes about the members of Group A, and these likes and dislikes don't necessarily line up. Is there a way to match members of Group A and Group B so that, after the match, no one will be ditching their match and trying for another?

Before trying to describe the result, the Gale-Shapley algorithm, it's worth noting that this general situation of finding stable matches applies in many settings. (David Gale was a very prominent mathematical economist who died in 2008, before the Nobel was awarded to Shapley and Roth.) Their 1962 paper offered verbal illustrations based on college admissions and on what economists think of as the "marriage market." Later on, Shapley's co-laureate Alvin Roth developed a number of practical applications: for example, the processes used in K-12 school lotteries in various cities, in the "matching" process for medical school residencies, and in matching donors and recipients of kidney transplants.

Here is how the Nobel committee describes Gale and Shapley's "deferred acceptance" procedure for reaching a stable result in their matching problem:

"Agents on one side of the market, say the medical departments, make offers to agents on the other side, the medical students. Each student reviews the proposals she receives, holds on to the one she prefers (assuming it is acceptable), and rejects the rest. A crucial aspect of this algorithm is that desirable offers are not immediately accepted, but simply held on to: deferred acceptance. Any department whose offer is rejected can make a new offer to a different student. The procedure continues until no department wishes to make another offer, at which time the students finally accept the proposals they hold.

"In this process, each department starts by making its first offer to its top-ranked applicant, i.e., the medical student it would most like to have as an intern. If the offer is rejected, it then makes an offer to the applicant it ranks as number two, etc. Thus, during the operation of the algorithm, the department's expectations are lowered as it makes offers to students further and further down its preference ordering. (Of course, no offers are made to unacceptable applicants.) Conversely, since students always hold on to the most desirable offer they have received, and as offers cannot be withdrawn, each student's satisfaction is monotonically increasing during the operation of the algorithm. When the departments' decreased expectations have become consistent with the students' increased aspirations, the algorithm stops."

Here's how the procedure would work in the marriage market:

"The Gale-Shapley algorithm can be set up in two alternative ways: either men propose to women, or women propose to men. In the latter case, the process begins with each woman proposing to the man she likes the best. Each man then looks at the different proposals he has received (if any), retains what he regards as the most attractive proposal (but defers from accepting it) and rejects the others. The women who were rejected in the first round then propose to their second-best choices, while the men again keep their best offer and reject the rest. This continues until no women want to make any further proposals. As each of the men then accepts the proposal he holds, the process comes to an end."
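The deferred-acceptance procedure described above is compact enough to sketch in code. This is a minimal illustrative implementation under my own naming conventions (the function, variable names, and the tiny two-sided example are not from the Nobel documents): proposers work down their preference lists, while each receiver holds on to the best offer seen so far without finally accepting it.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance.
    proposer_prefs / receiver_prefs: dicts mapping each agent to a list
    of the other side's agents, most preferred first."""
    # rank[r][p] = position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                   # proposers not currently held
    next_choice = {p: 0 for p in proposer_prefs}  # index of next offer to make
    held = {}                                     # receiver -> proposer held

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best not-yet-tried option
        next_choice[p] += 1
        if r not in held:
            held[r] = p                        # offer held, not yet accepted
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])               # r trades up; old offer rejected
            held[r] = p
        else:
            free.append(p)                     # offer rejected outright
    return {p: r for r, p in held.items()}

# Women propose (as in the second variant described above):
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
men   = {"m1": ["w2", "w1"], "m2": ["w1", "w2"]}
print(deferred_acceptance(women, men))  # {'w2': 'm1', 'w1': 'm2'}
```

Because receivers only ever trade up and proposers work strictly downward through their lists, the loop terminates with a stable matching: no proposer and receiver would both prefer each other to the partners they ended up with.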

I described some of the context of the result this way in my 2012 post:

Gale and Shapley prove that this procedure leads to a "stable" outcome. Again, this doesn't mean that everyone gets their first choice! It means that when the outcome is reached, there is no combination of medical school and applicant, or of man and woman in the marriage example, who would both prefer a different match from the one with which they ended up. But Gale and Shapley went further. It turns out that there are often many stable combinations, and in comparing these stable outcomes, the question of who does the choosing matters. If women propose to men, women will view the outcome as the best of all the stable matching possibilities, while men will view it as the worst; if men propose to women, men as a group will view it as the best of all stable matching possibilities, while women will view it as the worst. As the Nobel committee writes, "stable institutions can be designed to systematically favor one side of the market." …

The Nobel prize to Shapley and Roth is one of those prizes that I suspect I will have a hard time explaining to non-economists. The non-economists I know ask practical questions. They want to know how the work done for the prize will spur the economy, or create jobs, or reduce inequality, or help the poor, or save the government money. Somehow, better matching for medical school students won't seem, to some non-economists, like it's "big enough" to deserve a Nobel. But economics isn't all about today's public policy questions. The prize rewards thinking deeply about how a matching process works. In a world of increasingly powerful information-processing technology, where we may all find ourselves "matched" in various ways based on questions we answer by software we don't understand, I suspect that Alvin Roth's current applications are just the starting point for ways to apply the insights developed from Lloyd Shapley's "deferred acceptance" mechanism.

Shapley's obituary in the Economist is here, from the New York Times is here, and from the Associated Press is here.

A Fundamental Shift in the Nature of Trade Agreements

Controversy over agreements that seek to encourage free trade has been going on for decades. But the nature of the underlying trade agreements has fundamentally shifted. The old trade agenda, under first the GATT and then its successor the World Trade Organization, was focused on reducing tariffs and other trade barriers. The new generation of trade agreements is about assuring that interlocking webs of production that cross international borders will be able to function. Richard Baldwin explores the change in "The World Trade Organization and the Future of Multilateralism," published in the Winter 2016 issue of the Journal of Economic Perspectives. Here's a taste of his theme:

"[T]he rules and procedures of the WTO were designed for a global economy in which made-here–sold-there goods moved across national borders. But the rapid rising of offshoring from high-technology nations to low-wage nations has created a new type of international commerce. In essence, the flows of goods, services, investment, training, and know-how that used to move inside or between advanced-nation factories have now become part of international commerce. For this sort of offshoring-linked international commerce, the trade rules that matter are less about tariffs and more about protection of investments and intellectual property, along with legal and regulatory steps to assure that the two-way flows of goods, services, investment, and people will not be impeded. It's possible to imagine a hypothetical WTO that would incorporate these rules. But in practice, the rules are being written in a series of regional and megaregional agreements like the Trans-Pacific Partnership (TPP) and Transatlantic Trade and Investment Partnership (TTIP) between the United States and the European Union. The most likely outcome for the future governance of international trade is a two-pillar structure in which the WTO continues to govern with its 1994-era rules while the new rules for international production networks, or "global value chains," are set by a decentralized process of sometimes overlapping and inconsistent megaregional agreements."

Let's unpack these dynamics a bit. Baldwin argues that the rounds of international trade talks starting with the GATT in 1946 displayed what he calls "juggernaut" dynamics. Before the multilateral trade talks, there wasn't much reason for exporters in a country to care about whether their country imposed tariffs on imports. But the multilateral trade talks shifted the political balance, because exporters realized that in order to get lower tariffs in their foreign markets, their own country would need to reduce its tariffs, too. Moreover, each time tariffs were reduced, it tended to weaken firms and industries that faced tough import competition, while benefiting export-oriented firms. Thus, export-oriented firms were in a stronger position to advocate for future tariff cuts as well.

But by the 1990s, there was a rise in global supply chains that crossed international borders. Many emerging markets figured out pretty quickly that if they wanted to be part of global supply chains, they not only needed to reduce their tariffs, but also to implement rules about protection of investment and intellectual property, as well as pursue the "trade facilitation" agenda of making it easier for goods to move across borders. Literally hundreds of these regional trade agreements have already been signed, and these were not "shallow" agreements focused on reducing tariffs a bit, but "deep" agreements that got way down into the nitty-gritty of facilitating cross-border trade.

Here are a couple of figures from Baldwin to illustrate the point. The bars in the left-hand figure show the number of new regional trade agreements signed each year, and the blue line shows the typical number of "deep" provisions in these treaties. The right-hand figure shows the number of bilateral investment treaties signed each year; there was clearly a boom in such treaties from the late 1990s into the early 2000s.

The controversial megaregional trade agreements now in the news are mostly about combining and standardizing provisions that are pretty much already incorporated in these hundreds of regional agreements. As Baldwin writes: "The thousands of bilateral investment treaties, for instance, are not all that different, and so network externalities could be realized by melding them together. The emergence of so-called megaregionals like the Trans-Pacific Partnership and Trans-Atlantic Trade and Investment Partnership should be thought of as partial multilateralization of existing deep disciplines by sub-groups of WTO members who are deeply involved in offshoring and global value chains."

My sense is that this fundamental shift in trade agreements from “shallow” to “deep” is part of what drives the controversy surrounding them. The new generation of trade agreements aren’t just about reducing tariffs or trade barriers as traditionally understood; instead, they are full of very specific rules that seek to harmonize and facilitate business flows across international borders. There is an uncomfortable sense that in the process of negotiating the details, there are too many times when juicy little plums are included for favored special interests. The fundamental economic arguments for the overall benefits of free trade, even though it is a disruptive force, have to be re-interpreted through a haze of fine print in such cases.

Baldwin sees some difficult issues emerging with a two-track system of world trade agreements. He writes:

“The megaregionals like the Trans-Pacific Partnership and Trans-Atlantic Trade and Investment Partnership, however, are not a good substitute for multilateralization inside the WTO. They will create an international trading system marked by fragmentation (because they are not harmonized among themselves) and exclusion (because emerging trade giants like China and India are not members now and may never be). Whatever the conceptual merits of moving the megaregionals into the WTO, I have argued elsewhere that the actual WTO does not seem well-suited to the task. …

“What all this suggests is that world trade governance is heading towards a two-pillar system. The first pillar, the WTO, continues to govern traditional trade as it has done since it was founded in 1995. The second pillar is a system where disciplines on trade in intermediate goods and services, investment and intellectual property protection, capital flows, and the movement of key personnel are multilateralised in megaregionals. China and certain other large emerging markets may have enough economic clout to counter their exclusion from the current megaregionals. Live and let live within this two-pillar system is a very likely outcome.”

(Full disclosure: I’ve worked as the Managing Editor of the Journal of Economic Perspectives, where Baldwin’s article appeared, for the last 30 years.)

Insights on Infrastructure

Infrastructure is one of those odd topics where it’s hard to find anyone who takes a strong stand against it, but as a government and a society we never quite get around to doing it. The Council of Economic Advisers, in its 2016 Economic Report of the President, devotes a chapter to explicating the issues. The overall tone of the chapter leans strongly in the direction of more infrastructure spending, but I was intrigued by some evidence that the condition of US infrastructure may not be as awful as I had expected, as well as by the implications of the alternative justifications for infrastructure spending.

For example, while structurally deficient and obsolete bridges in the US still number in the tens of thousands, that number is diminishing rather than rising. The report notes:

“In 2014, the number of bridges that were rated as structurally deficient was just above 61,000, while the number that were rated as functionally obsolete, or inadequate for performing the tasks for which the structures were originally designed, was slightly below 85,000 (DOT 2015d). The number of structurally deficient bridges has declined on average 2.7 percent a year since 2000, below the 4.2-percent average annual rate of decline throughout the 1990s. The number of functionally obsolete bridges has also declined steadily since 2000, falling on average about 0.5 percent a year. Combined, these two groups accounted for just below 24 percent of all bridges in 2014, the smallest annual percentage on record.”

Moreover, when US infrastructure is ranked in comparison with other high-income countries, the US looks OK.

“The World Economic Forum releases annual ratings that gauge the quality of infrastructure throughout the world, and its ratings for the United States are displayed in Figure 6-4. These ratings are determined on a 1-7 scale, with a higher score indicating a better quality level. In 2015, the United States received a rating of 5.8 for its overall infrastructure, which was above the 5.4-average rating across the world’s advanced economies, the 3.8-average across emerging and developing Asian nations, and the 4.1 global average. However, the overall U.S. rating for infrastructure in 2015 was noticeably below its level in the mid-2000s, falling nearly 8 percent since 2006. In comparison, the overall infrastructure rating for the world’s advanced economies increased about 2 percent over the same period.”

When it comes to overall public “gross fixed investment,” the US looks pretty similar to France and Canada, and in recent years to Japan as well, with all of these countries running ahead of Germany.

Of course, the case for infrastructure investment should be rooted in analysis of costs and benefits. Four kinds of benefits of infrastructure investment are mentioned in the report: 1) it can boost demand in the short term when an economy is in recession; 2) it can reduce congestion; 3) it can reduce maintenance costs; and 4) it can complement the private economy in ways that add to long-term productivity growth. These justifications have different implications, so I’ll say a few words about each.

The case for infrastructure spending to boost demand in a recession is of course a lot more powerful when the unemployment rate is 10% (as in October 2009) than when it is 4.9% (as in January and February of this year). There are also some practical problems with using infrastructure spending in this way, because it needs to ramp up quickly while the effects of the recession are still occurring. For those who want more on this topic, a starting point is “Thoughts on Shovel-Ready Infrastructure” (October 15, 2015).

Traffic congestion is a real and severe problem. The average US commuter spends about 40 hours per year in traffic delays, which is not only a loss of time equivalent to a full work-week, but also wastes fuel and adds to pollution.

That noted, it’s very difficult to build one’s way out of traffic congestion. Sure, many cities have some poorly designed interchanges or other bottlenecks where building anew would help. But the broader problem, as Anthony Downs pointed out in his 1992 book Stuck in Traffic, is that three kinds of substitution occur when rush-hour traffic gets bad on the highway: people adjust the times that they travel, people shift to different and less congested routes, and people shift to alternative modes of transportation (like mass transit). But when additional lanes are added to a freeway, this substitution works in reverse. As some commuters are attracted by the additional traffic lanes to shift back from the other times they were traveling, or to shift back from the alternative routes, or to shift back from the alternative modes of transportation, building more highways ends up not having much effect on congestion. Ultimately, the way out of traffic congestion involves some combination of “congestion pricing,” which means charging drivers for being on the road at peak times, and, perhaps someday, driverless cars. These steps involve specific kinds of infrastructure spending, but just fixing up the existing roads and bridges, or adding some traffic lanes, isn’t likely to make much of a dent in congestion.

A rising share of infrastructure spending has been going to maintenance and operations, rather than to new infrastructure. This trend makes sense. For example, back in the 1950s and 1960s the main focus was on building the interstate highway system, but now there will be a greater emphasis on maintaining it. But here’s the pattern.

Infrastructure spending that takes the form of maintenance can pay for itself, both by reducing future repair costs and also, for example, by reducing wear and tear on vehicles.

“One estimate is that every $1 spent on preventive pavement maintenance reduces future repair costs by $4 to $10 (Baladi et al. 2002). Transportation engineers have developed economic methods that determine the optimal timing for applying preventive maintenance treatments to flexible and rigid pavements by assessing the benefits and costs for each year the treatment could be applied (Peshkin, Hoerner, and Zimmerman 2004). Allowing the condition of transportation infrastructure to deteriorate exacerbates wear and tear on vehicles. Cars and trucks that drive more frequently on substandard roads will require tire changes or other repairs more often—estimated to cost each driver, on average, an additional $516 annually in vehicle maintenance (TRIP 2015). Delaying maintenance can also induce more accidents on transit systems. Not repaving a road, replacing a rail, reinforcing a bridge, or restoring a runway can result in increased vehicle crashes that can disrupt transportation flows and create substantial safety hazards.”
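The payoff claim in the first sentence of that passage is easy to turn into back-of-the-envelope arithmetic. A minimal sketch: the $4-to-$10 multiplier comes from the Baladi et al. figure quoted above, while the budget amount is purely hypothetical.

```python
# Illustrative arithmetic for the preventive-maintenance payoff quoted
# above. The 4x-10x range of avoided future repair costs per dollar of
# preventive spending is from Baladi et al. (2002); the budget figure
# below is hypothetical.

preventive_budget = 1_000_000  # hypothetical annual preventive spending, $

low_mult, high_mult = 4, 10    # avoided future repair cost per $1 spent

avoided_low = preventive_budget * low_mult
avoided_high = preventive_budget * high_mult

print(f"Avoided future repair costs: ${avoided_low:,} to ${avoided_high:,}")
# → Avoided future repair costs: $4,000,000 to $10,000,000
```

Even at the low end of the range, the avoided repair costs are several times the preventive outlay, which is why transportation engineers treat the timing of preventive treatments as an optimization problem.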

However, there’s not a lot of reason to think that infrastructure spending focused on maintenance will have the same effects on helping to improve economic productivity as, say, the original infrastructure investments in railroads, seaports, highways, and airports. One of my quibbles with this chapter is the amount of emphasis that it puts on transportation infrastructure. While additional investments in transportation can be justified for the reasons just given, it’s also true that government tends to spread out the transportation money so that it covers a lot of Congressional districts, rather than being focused on the biggest needs, and also that it tends to prioritize new roads when maintenance may be more important. Here’s the Congressional Budget Office in its February 2016 report Approaches to Make Federal Highway Spending More Productive.

For example, even though highway travel is more concentrated on Interstates and in urban areas, and urban roads are typically in poorer condition than rural ones, the federal government and state governments typically have spent more per mile of travel for major repairs on rural roads. Moreover, the extent to which new highways boost economic activity has generally declined over time, increasing the importance of maintaining existing capacity. Yet spending has not shifted much accordingly.

For detailed graphs and numbers on these issues, the March 2016 CBO report “Public Spending on Transportation and Water Infrastructure, 1956 to 2014” is also useful.

The concept of what infrastructure is important for the 21st century needs to be much broader than roads and bridges, and even broader than transportation. It needs to include investments in energy infrastructure, which includes oil and gas pipelines and the electrical grid–and in particular, expanding the capabilities of the electrical grid so that it can more easily incorporate intermittent but renewable energy sources like solar and wind, and also so that it can be used for variable pricing to encourage conservation. Infrastructure also needs to include the communications infrastructure. Both energy and communications systems also need updating to reduce their vulnerabilities to hacking and to damage from natural disasters or terrorist attacks. At one point, the CEA report notes:

“Some research found that increasing aggregate public investment by $1 can increase long-term private investment by $0.64 (Pereira 2001). However, this effect was found to vary noticeably among different types of infrastructure: Pereira (2001) estimated that publicly investing $1 in electric and gas facilities, transit systems, and airfields induces a $2.38 rise in long-term private investment, whereas an additional $1 of public investment in highways and streets increases private capital investment by only $0.11.”

A policy agenda for updating the communications and energy infrastructure is more difficult, because it involves thinking about infrastructure that is owned by private companies and because it opens up broader questions, like choices about energy policy and the security of communications. I’m fine with also updating and maintaining roads and bridges. But those kinds of road-based infrastructure investments aren’t likely to be main drivers of 21st-century American economic growth.

A Cross-National View of Health Care Systems: Thoughts on Canada, the UK, and Germany

Discussions of the US health care system and how it might be reformed sometimes have a tendency to imply that the other high-income countries of the world have a single template for the financing and provision of health care, and only the US is unique. For example, one sometimes hears statements about how “the United States is the only high-income country without national health insurance,” which is true, but which also neglects the fact that other high-income countries finance and provide health insurance in some very different ways.

For an overview of the details and differences across the health care systems of major countries around the world, a useful starting point is the 2015 International Profiles of Health Care Systems, published by the Commonwealth Fund in January 2016. The volume includes some overview tables of differences across countries, followed by pithy essays sketching the nuts-and-bolts differences across countries–some written by individuals, some by the Commonwealth Fund. It was edited by Elias Mossialos and Martin Wenzl of the London School of Economics and Political Science and by Robin Osborn and Dana Sarnak of the Commonwealth Fund. As they write: “Each overview covers health insurance, public and private financing, health system organization and governance, health care quality and coordination, disparities, efficiency and integration, use of information technology and evidence-based practice, cost containment, and recent reforms and innovations.” The 18 countries included in the report are mostly high-income countries, but overviews of China and India are also included. Every reader will mine their own nuggets from a report like this, but here are a few points that caught my eye.

I recently heard someone in a casual discussion suggest that “the US health care system should be more like the UK or Canada”–but of course, the UK and Canada actually have rather different health care systems. To list just a few of the differences as they arise in the report:

  • Canada spends $4,500 per person per year on health care; in the United Kingdom, it’s $3,300 per person per year. The US is much higher at $9,100 per person, but the Canada-UK gap is still significant. 
  • Britain’s National Health Service is run at the national level, while much of Canada’s government health care funding and policy-making is at the provincial level. As the report notes: “Provinces and territories in Canada have primary responsibility for organizing and delivering health services and supervising providers. Many have established regional health authorities that plan and deliver publicly funded services locally. Generally, those authorities are responsible for the funding and delivery of hospital, community, and long-term care, as well as mental and public health services.”
  • In Canada, “Nearly all health care providers are private.” In the UK, about two-thirds of general practitioners are private. When it comes to specialists in England, “Nearly all specialists are salaried employees of NHS [National Health Service] hospitals, and CCGs [clinical commissioning groups] pay hospitals for outpatient consultations at nationally determined rates. Specialists are free to engage in private practice within specially designated wards in NHS or in private hospitals; the most recent estimates (2006) were that 55 percent of doctors performed private work …”
  • In England, 11% of the population buys private health insurance for uncovered services; in Canada, 67% of the population buys private health insurance for uncovered services.
  • Average per capita out-of-pocket health care spending is about $300 in the UK, $600 in Canada, and $1100 in the US. 
  • Of all primary care physicians, 98% use electronic medical records in the UK, compared with 84% in the US and 73% in Canada.  
  • As one measure of access to medical technology, the UK has 6.1 MRI (magnetic resonance imaging) machines per 1 million people; Canada has 8.8 MRI machines per million; and the US has 35 MRI machines per million. 
  • When people are surveyed about whether they are “Able to get same-day/next-day appointment when sick,” 52% say “yes” in the UK, 48% say “yes” in the US, and 41% say “yes” in Canada. 
  • When people are surveyed about whether they have “Waited 2 months or more for specialist appointment,” 29% of Canadians say “yes,” compared with 7% in England and 6% in the US. 
  • When people are surveyed about whether they have “Waited 4 months or more for elective surgery,” 18% of Canadians say “yes” compared with 7% of Americans. This measure isn’t available for the UK.
  • When it comes to cost control in England, “Rather than using patient cost-sharing or imposing direct constraints on supply, costs in the NHS [National Health Service] are constrained by a global budget that cannot be exceeded. NHS budgets are set at the national level, usually on a three-year cycle. CCGs [Clinical Commissioning Groups] are allocated funds by NHS England, which closely monitors their financial performance to prevent overspending. They are expected to achieve a balanced budget each year. The current economic situation has resulted in a largely flat NHS budget against a backdrop of rising demand.” In Canada, “Costs are controlled principally through single-payer purchasing, and increases in real spending mainly reflect government investment decisions or budgetary overruns. Cost-control measures include mandatory global budgets for hospitals and regional health authorities, negotiated fee schedules for providers, drug formularies, and resource restrictions vis-à-vis physicians and nurses (e.g., provincial quotas of students admitted annually) as well as restrictions on new investment in capital and technology. The national health technology assessment process is one of the mechanisms for containing the costs of new technologies … The federal Patented Medicine Prices Review Board, an independent, quasi-judicial body, regulates the introductory prices of new patented medications.” 

So when a US person says “be like Canada or the UK,” they are ducking some real differences. Are they advocating that US health care spending per person should be cut by 50% (Canada) or by 65% (UK)? Are they saying that the health care system should be run by states, or by the national government? Are they envisioning a system where most people have outside private health insurance, or where no one does? A system where most health care specialists are direct employees of the government, or not? What kinds of waiting times will be expected? What kinds of cost controls and budget caps?

Saying the US health care system “should be like the UK or Canada” is a little like saying that we should head either northeast or northwest–sure, both directions are north, but there’s a considerable difference in where you eventually end up.

My other concern with the invocation of Canada and the UK as models for US health care policy goes back to a standard observation in public policy discussions: how to design a new policy from scratch can be quite a different question from how to reform an existing policy. For example, one might not choose to design a US tax code which doesn’t tax employer-provided health insurance as income, which then helps to feed a system of private health insurance provided through employers. But once those provisions have been in place for decades, and people and companies have made plans based on those tax provisions, figuring out how to reform the existing system becomes a delicate problem.

For that reason, I’ve for some years been intrigued by Germany’s approach to a national health insurance system, because it’s based on a somewhat decentralized system of 124 “sickness funds”–essentially nonprofit and nongovernmental health insurance companies–competing against each other in a national exchange. Those with high incomes can opt out and buy private health insurance from one of about 42 companies. About 11% of the population does so. However, when you buy private health insurance, the price and coverage are based on an expectation of a lifetime contract between you and the insurance company. Doctors belong to regional associations that negotiate fees with the sickness funds. The same health care providers treat those with insurance from the sickness funds and those who have private insurance, and “Individuals have free choice among GPs, specialists, and, if referred to inpatient care, hospitals.”

As the report describes the German health care system: “States own most university hospitals, while municipalities play a role in public health activities, and own about half of hospital beds. However, the various levels of government have virtually no role in the direct financing or delivery of health care. A large degree of regulation is delegated to self-governing associations of the sickness funds and the provider associations, which together constitute the most important body, the Federal Joint Committee. … Within the legal framework set by the Ministry of Health, the Federal Joint Committee has wide-ranging regulatory power to determine the services to be covered by sickness funds and to set quality measures for providers …”

Of course, just as the Canadian and UK health care systems could not be easily transplanted to the US, neither could the German system. In particular, Germany seems better able than the US to have organizations like the Federal Joint Committee that manage to shape consensus decisions with input from health care providers, insurance companies, patient representatives, and government. That said, the German health care system is in many ways a closer cousin to the US approach, and as long as we are tossing out casual comparisons of where the US health care system might look to learn some lessons, it should surely be included. For a readable comparison of the German and US systems, here’s a link to an article in the Atlantic a couple of years ago.

For Women, Higher Labor Force Participation Means More Babies

When people discuss the reasons behind lower fertility rates, a commonly heard claim is that as women have entered the paid workforce in greater numbers, the time and money tradeoffs make having children look less attractive. But that simple story doesn’t capture the facts. The actual pattern is that the high-income countries where women are more likely to be in the labor force are also the countries with higher fertility rates. Yuko Kinoshita and Kalpana Kochhar lay out the evidence, along with broader arguments about the economic gains from including more women in the workforce, in their article “She Is The Answer,” which appears in the March 2016 issue of Finance & Development, published by the International Monetary Fund.

Here’s a graph showing the female labor force participation rate for high-income countries on the horizontal axis, and the fertility rate on the vertical axis. The best-fit line clearly shows that countries where women are more likely to be in the labor force are also countries with higher fertility rates.

[Figure: fertility rate vs. female labor force participation rate across high-income countries, from Kinoshita and Kochhar (chart 2)]
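The best-fit line in a figure like this is just an ordinary least squares regression of fertility on participation. Here is a minimal sketch of the calculation, using made-up data points rather than the actual Kinoshita-Kochhar country sample:

```python
# OLS best-fit line for a cross-country scatter of fertility (vertical
# axis) against female labor force participation (horizontal axis).
# These six data points are purely illustrative, not real country data.

flfp = [50.0, 55.0, 60.0, 65.0, 70.0, 75.0]   # participation rate, %
fertility = [1.3, 1.4, 1.5, 1.7, 1.8, 1.9]    # births per woman

n = len(flfp)
mean_x = sum(flfp) / n
mean_y = sum(fertility) / n

# slope = cov(x, y) / var(x); the line passes through the point of means
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(flfp, fertility))
         / sum((x - mean_x) ** 2 for x in flfp))
intercept = mean_y - slope * mean_x

print(f"slope = {slope:.4f} births per woman per percentage point")
```

A positive slope is what the figure shows: across high-income countries, higher female labor force participation goes with higher, not lower, fertility.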

The details of this evidence are interesting. Traditionally, if one looked at data from within a single country, a typical finding was that women who had more children were indeed less likely to be in the labor force during their lifetimes. But in high-income countries, that relationship now appears to have changed. Kinoshita and Kochhar explain (citations omitted):

Researchers have explained this apparent contradiction by looking at the contribution that men make to their households. They find that women in countries where men participate more in housework and child care are better able to combine motherhood and a job, which leads to greater participation in the labor force at relatively high fertility levels. 

Moreover, the relationship between female labor participation and fertility seems to have shifted from negative to positive in Organisation for Economic Co-operation and Development advanced economies since 1985. This shift implies that when more women work and bring home a paycheck, households can support more children. This trend also reflects changes in social attitudes toward working mothers, fathers’ involvement in child care, and advances in technology that allow more workplace flexibility. Public policies such as more generous parental leave and greater availability of child care also helped.

My own unscientific explanation for this change, vetted mainly by my wife and some female friends, is that women in societies with more traditional gender roles may not have the freedom to get the level of skills and jobs they want in the paid labor market, but they recognize that having children could lock them into a very traditional lifestyle that they do not want. When a society breaks out of those traditional gender roles, women both have greater opportunities in the labor market and also greater time-and-energy support from their spouse within the family, at which point having children looks like a more attractive life choice.

Interview with Emi Nakamura: Price Stickiness and Shocks

Renee Haltom has an “Interview” with Emi Nakamura in Econ Focus, published by the Federal Reserve Bank of Richmond (Third Quarter 2015, pp. 26-30). Much of her work focuses on measurements of price stickiness, and then on the implications of those facts for macroeconomic policies and outcomes. Here are a few of the comments that jumped out at me.

Sales, Price Stickiness, and Demand Shocks

“All this means that even if we were to see a huge number of price changes in the micro data, the aggregate inflation rate may still be pretty sticky. And if one abstracts from the huge number of sales in retail price data, then prices look a lot less flexible than they first appear. … It turns out sales have quite special characteristics that suggest that they do not contribute much to aggregate price flexibility — for example, they are very transient; they often return to the original price after a sale. … To me, the key consequence of sticky prices is that demand shocks matter. Demand shocks can come from many places: house prices, fiscal stimulus, animal spirits, and so on. But the key prediction is that prices don’t adjust rapidly enough to eliminate the impact of demand shocks.”

What are “real rigidities”?

“I think we have a pretty good sense by now of how often prices change. But there’s a lot of evidence from the aggregate data suggesting that prices don’t respond fully even when they do change. If the pricing decisions of one firm depend on what other firms do, then even when one firm changes its prices, it might adjust only partway. And then the next firm adjusts only partway, and so on. This goes under the heading of real rigidities, and there are many sources of them. One example is intermediate inputs; if you buy a lot of stuff from other firms, then if they haven’t yet raised their prices to you, then you don’t want to raise your prices, and so on. Another source is basic competition: If your competitors haven’t raised their prices, you might not want to raise your prices. The same thing occurs if some price changes are on autopilot, or if the people changing prices aren’t fully responding to macro news — this is the core of the sticky information literature. These knock-on effects mean that inflation can still be “sticky” long after all the prices in the economy have adjusted.

“Real rigidities are where it’s much more complicated to do an empirical study. You have to ask not only whether the price changed, but whether it responded fully; so you need to have not only the price data, but also to see the shock to form an idea of what the efficient response would be. For that, the difficulty is that you don’t often have good cost data. … The other type of evidence that speaks to this question comes from exchange rate movements. When you have changes in the exchange rate, you have a situation where there’s an observable shock to firms’ marginal costs, and you can use that to figure out how much prices respond conditional on having adjusted at all. But fundamentally, this is a much more challenging empirical problem.”

Sticky Prices and the Great Recession

“I think the Great Recession has actually increased the emphasis in macroeconomics on traditional Keynesian frictions. The shock that led to the Great Recession was probably some combination of financial shocks and housing shocks — but what happened afterward looked very Keynesian. Output and employment fell, as did inflation. And for demand shocks to have a big impact, there have to be some frictions in the adjustment of prices. The models that have been successful in explaining the Great Recession have typically been the ones that have combined nominal frictions with a financial shock of some kind to households or firms.

“One can also see the effects of traditional Keynesian factors in other countries. Jón [Steinsson] is from Iceland, which experienced a massive exchange rate devaluation during its crisis. Other countries that were part of the euro, such as Spain, did not. I think this probably mattered a lot; if prices and wages were flexible, the distinction between a fixed and flexible exchange rate wouldn’t matter. Another example is Detroit. If Detroit had had a flexible exchange rate with the rest of the United States, a devaluation would have been possible to lower the relative wages of autoworkers, which might have been very helpful. Much of what happened during the Great Recession felt like a textbook example of the consequences of Keynesian frictions.”

Bias in China’s Inflation Rate?

“There’s a lot of skepticism about Chinese official statistics, and we wanted to think about alternative ways of estimating Chinese inflation. We use Chinese consumption data to estimate Engel curves, which give you a relationship between people’s income and the fraction of their income that they spend on luxuries versus necessities. All else equal, if Chinese people are spending a lot more of their total food budget on luxuries such as fish, that could tell us that their consumption is growing very rapidly. Holding nominal quantities fixed, higher growth is associated with lower inflation, so we can invert estimates of consumption growth to get the bias in the inflation rate.

“This approach has been applied to many countries, including the United States, and the usual finding is that the inflation estimate you get is lower than official statistics. This is usually attributed to the idea that official statistics don’t accurately account for the role of new goods, resulting in lower estimates of inflation.

“But for China we found an interesting pattern. We did find lower estimates of inflation for the late 1990s. But for the last five or 10 years, we find the opposite: Official inflation was understating true inflation, and official estimates of consumption growth were overstating consumption growth. Our estimates suggest that the official statistics are a smoothed version of reality.

“There are a couple of reasons why this could be. One possibility is, of course, tampering. Whenever we present this work to an audience of Chinese economists, they are far more skeptical of the Chinese data than we are. But a second possible interpretation is that it’s just very difficult to measure inflation in a country like China where things are changing so quickly.”
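The inversion Nakamura describes can be sketched in stylized form. Suppose a linear Engel curve in which the luxury share of the food budget rises with log real consumption; then an observed shift in that share implies a rate of real growth, and the gap between nominal growth and implied real growth is an inflation estimate that can be compared with the official number. Every parameter and data value below is hypothetical, chosen only to show the mechanics; this is not the actual estimation in her papers.

```python
# Stylized Engel-curve inversion: share = a + b * ln(real consumption).
# All parameters and observations here are hypothetical illustrations.

b = 0.08                         # assumed Engel slope: share per log point
share_t0, share_t1 = 0.20, 0.22  # observed luxury shares in two years

# Invert the Engel curve: implied real consumption growth in log points
implied_real_growth = (share_t1 - share_t0) / b           # = 0.25

nominal_growth = 0.30      # hypothetical nominal consumption growth (log points)
official_inflation = 0.02  # hypothetical official inflation (log points)

implied_inflation = nominal_growth - implied_real_growth  # = 0.05
bias = implied_inflation - official_inflation             # = 0.03

print(f"Engel-curve inflation estimate: {implied_inflation:.1%}")
print(f"Gap vs. official inflation:     {bias:.1%}")
```

A positive gap corresponds to the finding she reports for recent years: official inflation running below the Engel-curve estimate.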

So you want to be an academic researcher? Nakamura finished her Ph.D. in 2007, so this particular project has been bubbling along for a decade or more.

"One of the things I've been doing since grad school is working on recovering data underlying the CPI from the late 1970s and early 1980s. This is an exciting period for analyzing price dynamics since it incorporates the U.S. Great Inflation and the Volcker disinflation — the only period in recent U.S. history when inflation was really high. In the course of our other research, Jón and I figured out that there were ancient microfilm cartridges at the BLS from the 1970s in old filing cabinets. The last microfilm readers that could read them had literally broken, and they couldn't be read by any modern readers. Moreover, they couldn't be taken out of the BLS because they're confidential.

"So we decided to try to recover these microfilm cartridges. We had an excellent grad student, who became our co-author, who learned a lot about microfilm cartridge readers and found some that could be retrofitted to read these old cartridges. After we scanned in the data, we had to use an optical character recognition program to convert it into machine-readable form. That was very tricky. The first quote we got to do this was over a million dollars, but our grad student ultimately found a company that would do it for one-hundredth of the cost. This has been quite an odyssey of a project, and there were many times when I thought we might never pull it off. We are now finally getting to analyze the data."

Global Demography: Tectonic Shifts

Demographic change is a relatively slow and gradual process, but then, so are the shifts of tectonic plates that can lead to earthquakes and volcanoes. The March 2016 issue of Finance & Development, published by the International Monetary Fund, contains five articles on various major population shifts. David Bloom contributed the lead article, called "Taking the Power Back." He writes:

The world continues to experience the most significant demographic transformation in human history. Changes in longevity and fertility, together with urbanization and migration, are powerful shapers of our demographic future, and they presage significant social, political, economic, and environmental consequences.

Bloom backs up the big claim ("most significant … in human history") with some vivid examples of shifts that are underway. Here are a few that especially caught my eye.

Some Shifts in Global Population

"Ninety-nine percent of projected [population] growth over the next four decades will occur in countries that are classified as less developed—Africa, Asia (excluding Japan), Latin America and the Caribbean, Melanesia, Micronesia, and Polynesia. Africa is currently home to one-sixth of the world’s population, but between now and 2050, it will account for 54 percent of global population growth. Africa’s population is projected to catch up to that of the more-developed regions (Australia, Europe, Japan, New Zealand, and northern America—mainly Canada and the United States) by 2018; by 2050, it will be nearly double their size. …
Between now and mid-2050, other notable projected shifts in population include:

  • India surpassing China in 2022 to have the largest national population; 
  • Nigeria reaching nearly 400 million people, more than double its current level, moving it ahead of Brazil, Indonesia, Pakistan, and the United States to become the world’s third-largest population; 
  • Russia’s population declining 10 percent and Mexico’s growing slightly below the 32 percent world rate to drop both countries from the top 10 list of national populations, while the Democratic Republic of the Congo (153 percent increase) and Ethiopia (90 percent) join the top 10; and 
  • Eighteen countries—mostly in eastern Europe (and including Russia)—experiencing population declines of 10 percent or more, while 30 countries (mostly in sub-Saharan Africa) at least double their populations."

Urbanization: From Megacity to Metacity

"More than half the world’s population now lives in urban areas, up from 30 percent in 1950, and the proportion is projected to reach two-thirds by 2050. … The number of megacities—urban areas with populations greater than 10 million—grew from 4 in 1975 to 29 today. Megacities are home to 471 million people—12 percent of the world’s urban population and 6 percent of the world’s total population. The United Nations recently introduced the concept of metacities, which are urban areas with 20 million or more residents. Eight cities had reached “meta” status in 2015. Tokyo heads the list, with 38 million residents—more than the population of Canada. No. 2 Delhi’s 26 million exceeds Australia’s population. Other metacities are Shanghai, São Paulo, Mumbai, Mexico City, Beijing, and Osaka. By 2025, Dhaka, Karachi, Lagos, and Cairo are projected to grow into metacities."

Life Expectancy

"The number of global deaths annually per 1,000 people has declined steadily from 19.2 in 1950–55, to 7.8 today. … It corresponds to a 24-year gain in global life expectancy—from 47 in 1950–55 to 71 now. Given that the average newborn lived to about age 30 during most of human history, this 24-year increase, an average of nine hours of life expectancy a day for 65 years, is a truly astonishing human achievement—and one that has yet to run its course."
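Bloom's "nine hours a day" framing checks out: a 24-year gain spread evenly over the 65 years from 1950–55 to today works out to roughly nine hours of added life expectancy per calendar day.

```python
# Check on the life-expectancy arithmetic quoted above:
# a 24-year gain (47 -> 71) spread over the 65 years since 1950-55.
gain_years = 71 - 47      # 24-year gain in life expectancy
period_years = 65         # elapsed calendar time

# Hours of added life expectancy per calendar day: the days cancel,
# leaving (24 years / 65 years) * 24 hours.
hours_per_day = gain_years * 24 / period_years
print(round(hours_per_day, 1))  # -> 8.9, i.e. roughly nine hours a day
```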

Fertility

Aging

In 1950, 8 percent of the world’s population was classified as old (that is, age 60 or over). Since then, the old-age share of world population has risen gradually to 12 percent today, about 900 million people. But a sharp change is afoot. By 2050 about 2.1 billion people, 22 percent of global population, will be older than 60. The United Nations projects that the global median age will increase from about 30 years today to 36 years in 2050 and that, with the exception of Niger, the proportion of elderly will grow in every country. …

Japan’s median age of 47 is the world’s highest and is projected to rise to 53 by 2050. But by then South Korea’s median age will be 54. In 2050, 34 countries will have median ages at or above Japan’s current 47. The world’s 15- to 24-year-olds now outnumber those ages 60 and above by 32 percent. But by 2026 these two groups will be equal in size. After that, those over age 60 will rapidly come to outnumber adolescents and young adults. This crossover already took place in 1984 among advanced economies and is projected to occur in 2035 in less-developed regions.

For insights into the possible implications of these trends, the imaginations of science fiction writers may be as relevant as the social science research literature. I'll just say that a number of assumptions that we take more-or-less for granted–say, about patterns of immediate family, extended family, age distribution of population, the shape of communities, and location of world population–are going to be transformed. Much of the change will happen in my own lifetime, and it will shape the world in which my children live.

For a post from a few weeks back on how demographic shifts will affect the future global workforce and make the distinctions between GDP and GDP per capita ever more important, see "Demography is Destiny: Global Economy Edition" (February 23, 2016).

State and Local Pensions: A Golden Opportunity Missed

It's well-known that lots of state and local pension funds have "unfunded liabilities," which means that what they have already promised to pay retirees is more than what they are likely to have in the pension fund when the payments come due (based on what's currently in the pension fund and assuming a rate of return which is often pretty optimistic). What's not as well-known is that over the last few decades, a number of state and local governments missed a golden opportunity to run their pension funds sensibly. The Council of Economic Advisers lays out some of the evidence in the 2016 Economic Report of the President. The report notes:

Unfunded pension obligations place a heavy burden on State and local government finances. The size of these unfunded pension liabilities relative to State and local receipts ballooned immediately after the recession and remains elevated at a level that was about 65 percent of a year’s revenue in the first three quarters of 2015 …

Here's a figure showing the pattern of unfunded liabilities since 1950. Notice that through the 1950s, 1960s, and 1970s, the unfunded liabilities were gradually reduced. The underlying causes were a combination of good returns on pension fund assets and restraint in promising benefits. Notice that by the mid-1980s, unfunded liabilities were essentially zero; indeed, by the late 1990s the unfunded liabilities were negative, which means that state and local pension funds had more funds on hand than they needed.

The excellent position of state and local pension funds in the late 1990s was in some ways misleading, since it was based on the dot-com stock market boom that turned south around the year 2000. But if one looks back over the last 30 years, stock markets overall show a dramatic rise. The S&P 500 index, for example, rose from about 250 in 1986 to roughly 2,000 over a 30-year period. That's a nominal rise of about 7% per year, and adding the returns to shareholders from dividends paid out by firms would make the total return over this time a few percentage points higher. There's certainly no guarantee that stock markets in the next 30 years will perform as well as they have over the last 30.
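That 7% figure is a straightforward compound-growth calculation. Using the text's round numbers (an index level of 250 in 1986 and 2,000 thirty years later), the annualized nominal return is (2000/250)^(1/30) − 1:

```python
# Back-of-the-envelope check on the S&P 500 figures in the text:
# roughly 250 in 1986 rising to roughly 2,000 over 30 years.
start, end, years = 250.0, 2000.0, 30

# Compound annual growth rate (price only, excluding dividends)
annualized = (end / start) ** (1 / years) - 1
print(f"{annualized:.1%}")  # -> 7.2%, nominal, before dividends
```

An index that rises eightfold over 30 years thus compounds at about 7.2% per year, consistent with the "about 7% per year" in the text; dividends would add a few points to the total return.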

In short, the reason why the unfunded liabilities of state and local pension funds are so much higher in 2016 than 30 years ago isn't because the overall stock market performed poorly. Instead, it's a grim story of mistiming the moves in the market (rather than just being steadily invested throughout), trying out alternative investments that didn't pan out, not putting enough money aside in the first place, and overpromising what benefits could be paid. I have a lot of sympathy for the retirees and soon-to-be retirees who were depending on a pension from a state or local government, and are now finding that the money isn't there to pay for the promises. But when blame gets assessed for this grim situation, it's worth remembering that those who have been making the decisions about state and local pensions had a very favorable situation 30 years ago–that is, unfunded liabilities near zero, with a period of strong stock market growth over the next three decades coming up–and they messed it up.