Dissecting the Concept of Opportunity Cost

Perhaps no topic is really simple if you look at it closely. In the Winter 2016 issue of the Journal of Economic Education, a group of five economists put the introductory topic of "opportunity cost" under a definitional microscope. The JEE is not freely available online, but many readers will have access through library subscriptions.

David Colander provides an introduction. Michael Parkin provides a brisk overview of the history of thought about opportunity cost, and argues that opportunity cost is more usefully based on the quantity of what is given up, rather than on attempts to calculate the value of what is given up. Daniel G. Arce, Rod O’Donnell, and Daniel F. Stone offer critiques. Parkin then seeks to synthesize the various views by arguing that opportunity cost as value can be interpreted in several different ways, and claims that one of these interpretations reconciles his view with the critics.

Parkin's first essay is full of interesting tidbits. For example, I had not known that the concept of opportunity cost dates back to an essay in the January 1894 issue of the Quarterly Journal of Economics called "Pain-Cost and Opportunity-Cost" (8:2, 218-229). This reference sent me scurrying to the JSTOR archive, where I found that David I. Green starts off with a discussion of the true cost of labor, before moving to a more general argument:

But what is commonly summed up in the term "cost" is not principally the pain or weariness on the part of the laborer, and of long delay in consumption on the part of the capitalist; but the cost consists for the most part of the sacrifice of opportunity. … By devoting our efforts to any one task, we necessarily give up the opportunity of doing certain other things which would yield us some return; and it is, in general, this sacrifice of opportunity that we insist upon being paid for rather than for any pain which may be involved in the work performed. … But when we once recognize the sacrifice of opportunity as an element in the cost of production, we find that the principle has a very wide application. Not only time and strength, but commodities, capital, and many of the free gifts of nature, such as mineral deposits and the use of fruitful land, must be economized if we are to act reasonably. Before devoting any one of these resources to a particular use, we must consider the other uses from which it will be withheld by our action; and the most advantageous opportunity which we deliberately forego constitutes a sacrifice for which we must expect at least an equivalent return.

But Parkin\’s main focus is more on the concept than on the history. He writes:

The idea of opportunity cost helps to address five issues that range from the simple and basic to the complex and sophisticated. The simplest and most basic purpose of opportunity cost is to express the fundamental economic problem: Faced with scarcity, we must make choices, and in choosing we are confronted by cost. The second purpose, equally basic, is to see cost as an alternative forgone rather than dollars of expenditure. Its third purpose is to identify, and to correctly establish, what the forgone alternative is. Its fourth purpose is to use the appropriately identified cost alongside an appropriately identified benefit to make (and to analyze) a rational choice. Its fifth purpose, and its most complex and sophisticated, is to derive theorems about the determination of relative prices.

He gives examples from the 1920s and 1930s up through modern textbooks to illustrate that while some writers have preferred to think of opportunity cost in terms of quantity forgone, others have preferred to think of value forgone. He writes:

The two definitions of opportunity cost (hereafter OC) differ in what is forgone. For the “quantity” version, it is the highest-valued alternative: the physical thing or things that otherwise would have been chosen. For the “value” version, it is the value of the highest-valued alternative: the value of the physical thing or things that otherwise would have been chosen.

Parkin argues that the quantity measure is most useful, in part because using "value" adds an additional and potentially controversial step to the concept.

Daniel Arce argues that value-based calculations of opportunity cost are useful in certain contexts, like looking at shadow prices or deriving a measure of economic profit. Along the way, he makes the interesting claim that teaching and learning about opportunity cost suffers less from imprecise definition than from lack of good old-fashioned examples. Arce writes:

In over 25 years of teaching principles of economics, I have used at least 10 different textbooks and cannot recall a single student expressing concern that the textbook’s treatment of opportunity cost was ambiguous, nor have I had any difficulties with how opportunity cost is operationalized in the associated test banks. What I have had trouble with is the dearth of examples in textbooks and test banks. Opportunity cost is a major takeaway in principles of economics and in managerial economics for MBAs. Yet, I can think of no textbook in either area in which the coverage of opportunity cost would sustain even half a lecture. With consulting firms earning millions of dollars calculating economic profits for their clients (where the hard work is in identifying opportunity costs), how can this be? This is compounded by the virtual absence of any discussion of opportunity cost in undergraduate and MBA textbooks’ coverage of marginal decision making (e.g., utility maximization, cost minimization, and profit maximization) and a similar lack of material on marginal decision making when opportunity cost is covered.

Rod O'Donnell and Daniel F. Stone offer further arguments in favor of the value criterion: for example, that it is especially useful in talking about interest rates (or a forgone rate of return) as an opportunity cost, and that using value terms for opportunity cost offers the advantage of putting comparisons in common units.

Parkin argues in his closing essay that the "value" approach to opportunity cost can itself be divided into two approaches:

For the “value” version of OC, what is forgone is the highest amount that would be willingly paid for the forgone alternative. Value is willingness to pay. … Another commonly used value concept is the number of dollars that must be paid at market prices to buy a defined basket of goods and services. … For the “quantity” version of OC, it is the physical basket (not its dollar value) that is the defining feature. The dollars are merely a convenient measuring rod. To be clear, for the “value” version of OC, the dollars represent the largest amount that would be willingly paid, while for the “quantity” version of OC, the dollars represent the amount that must be paid.

Parkin argues that with this distinction in mind, all the writers are in agreement. I suspect the other writers would not agree with this assessment! But I'd like to add my agreement to Arce's point that opportunity cost is a powerful idea that gets short shrift in the classroom, both because it is less tied into other concepts than it could be, and also because it lacks a wide range of strong examples that help give students a sense of its many applications.

Is the Essence of Globalization Shifting?

Since the Great Recession of 2007-2009, a number of the standard economic measures of globalization have declined: flows of goods, services, and finance. But other aspects of globalization are on the rise, like communication and the ability of small firms and individuals to participate in international markets. The McKinsey Global Institute explores these changes in a March 2016 report, Digital globalization: The new era of global flows, written by a team led by James Manyika, Susan Lund, Jacques Bughin, Jonathan Woetzel, Kalin Stamenov, and Dhruv Dhingra.

Here's a rough measure of the recent drop in standard measures of globalization. The bars show international flows of goods, services, and finance measured in trillions of dollars. The line shows the total flows as a share of global GDP.

Of course, it's easy to come up with reasons why this slowdown in standard measures of globalization is just a short-term blip: the recession slowed down trade, the fall in the price of oil and other commodities reduced the value of trade, China's growth is slower, and so on. But the report argues that some more fundamental factors are shifting:

"Yet there is more behind the slowdown in global goods trade than a commodities cycle. Trade in manufactured goods has also been flat to declining for both finished goods and intermediate inputs. Global container shipping volumes grew by 7.8 percent from 2000 to 2005, but from 2011 to 2014, growth was markedly slower, at only 2.8 percent. Multiple cyclical factors have sapped momentum in the trade of manufactured goods. Many of the world’s major economies—notably China, Europe, and Japan—have been experiencing slowdowns. China, for example, posted almost 18 percent annual growth in both imports and exports from 2000 to 2011. But since then its export growth has slowed to 4.6 percent, and imports have actually shrunk. However, there may be structural reasons in global manufacturing that explain decelerating growth in traded goods. Our analysis finds that global consumption growth is outpacing trade growth for some types of finished goods, such as automobiles, pharmaceuticals, fertilizers, and plastic and rubber goods. This indicates that more production is happening in the countries where the good is consumed. This may reflect the “reshoring” of some manufacturing to advanced economies as well as increasing consumption in emerging markets where these goods are produced."

The McKinsey report argues that the form of globalization is shifting. Much of the discussion emphasizes international flows of data and information crossing borders, but there is also some emphasis on international flows of people as tourists, migrants, and students, as well as changes in e-commerce. For example, the report states:

"The world has become more intricately connected than ever before. Back in 1990, the total value of global flows of goods, services, and finance amounted to $5 trillion, or 24 percent of world GDP. There were some 435 million international tourist arrivals, and the public Internet was in its infancy. Fast forward to 2014: some $30 trillion worth of goods, services, and finance, equivalent to 39 percent of GDP, was exchanged across the world’s borders. International tourist arrivals soared above 1.1 billion. And the Internet is now a global network instantly connecting billions of people and countless companies around the world. Flows of physical goods and finance were the hallmarks of the 20th-century global economy, but today those flows have flattened or declined. Twenty-first-century globalization is increasingly defined by flows of data and information. This phenomenon now underpins virtually all cross-border transactions within traditional flows while simultaneously transmitting a valuable stream of ideas and innovation around the world.

"Our econometric research indicates that global flows of goods, foreign direct investment, and data have increased current global GDP by roughly 10 percent compared to what would have occurred in a world without any flows. This value was equivalent to $7.8 trillion in 2014 alone. Data flows account for $2.8 trillion of this effect, exerting a larger impact on growth than traditional goods flows. This is a remarkable development given that the world’s trade networks have developed over centuries but cross-border data flows were nascent just 15 years ago."
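As a quick consistency check on the figures just quoted, the arithmetic works out neatly. The inputs below are the report's own numbers; the calculation (and the rounding) is mine:

```python
# Back-of-the-envelope check on the McKinsey figures quoted above.
# All inputs are the report's numbers; only the arithmetic is added here.

flow_boost_usd = 7.8e12      # GDP lift attributed to global flows, 2014
flow_boost_share = 0.10      # roughly 10 percent of current global GDP
data_flow_usd = 2.8e12       # portion of that lift attributed to data flows

implied_world_gdp = flow_boost_usd / flow_boost_share
data_share = data_flow_usd / flow_boost_usd

print(f"implied 2014 world GDP: ${implied_world_gdp / 1e12:.0f} trillion")  # ~$78 trillion
print(f"data flows' share of the effect: {data_share:.0%}")                 # ~36%
```

In other words, the report's "10 percent of GDP" and "$7.8 trillion" figures imply a 2014 world GDP of about $78 trillion, and data flows account for a bit over a third of the estimated effect.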

What do some of these data flows look like?

Cross-border data flows are the hallmarks of 21st-century globalization. Not only do they transmit valuable streams of information and ideas in their own right, but they also enable other flows of goods, services, finance, and people. Virtually every type of cross-border transaction now has a digital component. …

Approximately 12 percent of the global goods trade is conducted via international e‑commerce, with much of it driven by platforms such as Alibaba, Amazon, eBay, Flipkart, and Rakuten. Beyond e‑commerce, digital platforms for both traditional employment and freelance assignments are beginning to create a more global labor market. Some 50 percent of the world’s traded services are already digitized. Digitization also enables instantaneous exchanges of virtual goods. E-books, apps, online games, MP3 music files and streaming services, software, and cloud computing services can all be transmitted to customers anywhere in the world there is an Internet connection. Many major media websites are shifting from building national audiences to global ones; a range of publications, including The Guardian, Vogue, BBC, and BuzzFeed, attract more than half of their online traffic from foreign countries. By expanding its business model from mailing DVDs to selling subscriptions for online streaming, Netflix has dramatically broadened its international reach to more than 190 countries. While media, music, books, and games represent the first wave of digital trade, 3D printing could eventually expand digital commerce to many more product categories.

Finally, “digital wrappers” are digital add-ons that enable and raise the value of other types of flows. Logistics firms, for example, use sensors, data, and software to track physical shipments, reducing losses in transit and enabling more valuable merchandise to be shipped and insured. Online user-generated reviews and ratings give many individuals the comfort level needed to make cross-border transactions, whether they are buying a consumer product on Amazon or booking a hotel room halfway around the world on Airbnb, Agoda, or TripAdvisor. …

Small and medium-sized enterprises (SMEs) worldwide are using the “plug-and-play” infrastructure of Internet platforms to put themselves in front of an enormous global customer base and become exporters. Amazon, for instance, now hosts some two million third-party sellers. In countries around the world, the share of SMEs that export is sharply higher on eBay than among offline businesses of comparable size. PayPal enables cross-border transactions by acting as an intermediary for SMEs and their customers. Participants from emerging economies are senders or receivers in 68 percent of cross-border PayPal transactions. Microenterprises and projects in need of capital can turn to platforms such as Kickstarter, where nearly 3.3 million people representing nearly all countries made pledges in 2014. Facebook estimates that 50 million SMEs are on its platform, up from 25 million in 2013; on average 30 percent of their fans are from other countries. To put this number in perspective, consider that the World Bank estimated there were 125 million SMEs worldwide in 2010. For small businesses in the developing world, digital platforms are a way to overcome constraints in their local markets.

As one vivid example of international data flows from the report, the number of people on the most popular online social media platforms exceeds the population of most countries–showing that these platforms are crossing lots of international borders.

Another example involves the rise in digital phone calls: "We also analyzed cross-border digital calls, which have more than doubled from 274 billion call minutes in 2005 to 569 billion call minutes in 2014. This rising volume is primarily attributable to the expanded use of voice over Internet protocol (VoIP) technology. Since 2005, VoIP call minutes have grown by 19 percent per year, while traditional call minutes have grown by 4 percent. Additionally, cross-border computer-to-computer Skype communications have soared, with call minutes increasing by some 500 percent over the past five years. In 2014, computer-to-computer Skype call minutes were equal to 46 percent of traditional phone call minutes."

Although the report doesn't especially emphasize how flows of people have increased, I found this graphic interesting. Over the last few decades, the change in the number of migrants and refugees has largely reflected growth of the overall world population. But many more people are having a shorter-term international experience, either as students or as travelers.

What's the bottom line on these changes? It's already true that international trade in goods has shifted away from being about final products, and instead become more a matter of intermediate products being shipped along a global production chain. Now, information in all its forms (design, marketing, managerial expertise) is becoming a bigger share of the final value of many physical products. Moreover, a wired world will be more able to buy and sell digital products. New technologies like 3D printing will make it easier to produce many physical products on-site, wherever they are needed, by shipping only the necessary software, rather than the product itself. The greater ease and cheapness of international communication will presumably strengthen many person-to-person cross-border ties, which is not just a matter of broadening one's social life, but also means a greater ability to manage business and economic relationships over distance.

It's interesting to speculate on how these shifts in globalization, as they percolate through economies around the world, will affect attitudes about globalization. Imagine a situation in which globalization is less about big companies shipping cars and steel and computers, and more about small and medium companies shipping non-standard products or services. And imagine a situation in which globalization becomes less faceless, because it will be so much easier to communicate with those in other countries–as well as so much more common to visit in person as a student or tourist. Changes in how globalization manifests itself seem sure to shake up how economists, and everyone else, view its costs and benefits.

The Next Big M&A Boom is Here

The conditions have seemed ripe for a boom in mergers and acquisitions for a few years now. Lots of companies are sitting on piles of cash. Interest rates are low, so borrowing money to complete a deal is cheap. Whether in the national or the global economy, production chains are being reshaped by waves of new technology, outsourcing, and in-sourcing. Such economic shakeups can affect the shape of firms and their perceptions of whether a merger or acquisition makes sense. But in 2015, many of these forces came together, and the next big mergers and acquisitions boom seems to have arrived.

Here's a comment and figure taken from Congressional testimony on March 9, 2016, by William Baer, who is Assistant Attorney General at the Antitrust Division of the U.S. Department of Justice, before the US Senate (more specifically, before the Subcommittee on Antitrust, Competition Policy and Consumer Rights of the Committee on the Judiciary). Baer said:

"The merger wave is back. Big time. Global merger and acquisition volume has reached historic levels in terms of number, size and complexity. In FY 2015, 67 proposed mergers were valued at more than $10 billion. That is more than double the annual volume in 2014. Last year 280 deals were worth more than $1 billion, nearly double the number from FY 2010."

The global volume of mergers and acquisitions in 2015 exceeded $5 trillion, about twice as high as the total volume in 2013. According to Dealogic, about one-half of the global M&A deals in 2015 (by dollar value) targeted US firms, and about one-quarter targeted Asian Pacific firms. Here's a list of the big M&A deals announced in 2015–many of which are still pending.

There's of course nothing intrinsically wrong with merger and acquisition deals. Sometimes, it's just a way for companies to evolve and grow. That said, when the level of such deals hits historically high levels, and with historically large sizes, it's reasonable to raise some questions.

For example, a common finding in the academic research on past merger waves is that on average such deals turn out to be a gain for the shareholders of the firm that gets acquired, but on average a neutral outcome or even a small loss for shareholders of the firm doing the acquiring. This pattern suggests that the executives of firms that acquire other firms are often too optimistic about the gains that will result. A few years from now, we'll have a sense as to whether that common pattern continued through the current wave of deals.

Another issue involves the appropriate actions of antitrust regulators in the face of a merger wave. In Baer's testimony, he shows that the actions of US antitrust regulators did increase in 2015–as one would expect given the rise in the number and size of deals proposed. But even with a rise in antitrust enforcement, it's still a tiny minority of M&A deals that are challenged, presumably those representing what the regulators think are the most clear-cut or egregious cases. The antitrust authorities often enter into negotiations with the companies proposing a deal, which result in various tweaks and adjustments–like an agreement to sell off some parts of the merged company to preserve a degree of competition. But Baer's testimony also hints that in the past, these adjustments to M&A deals may not have worked to protect consumers. He said:

"When we find a merger between rivals that risks decreasing competition in one or more markets, we are invariably urged to accept some form of settlement, typically modest asset divestitures and sometimes conduct commitments or supply agreements. We thoroughly review every offer to settle, but we have learned to be skeptical of settlement offers consisting of behavioral remedies or asset divestitures that only partially remedy the likely harm. We will not settle Clayton Act violations unless we have a high degree of confidence that a remedy will fully protect consumers from anticompetitive harm both today and tomorrow. In doing so, we are guided by the Clayton Act and the Supreme Court, which instruct us to not only stop imminent anticompetitive effects, but also to be forward-looking and arrest potential restraints on competition “in their incipiency.” Settlements need to preserve the status quo ante in markets where there is a risk of competitive harm. Where complex transactions pose antitrust risks in multiple markets, our confidence that Rube Goldberg settlements will preserve competition diminishes. Consumers should not have to bear the risks that a complex settlement may not succeed. If a transaction simply cannot be fixed, then we will not hesitate to challenge it."

But with the current wave of mergers and acquisitions, the big issue to me is not so much what is legal, but what it reveals about the priorities and perceptions of top executives of these companies. A huge M&A deal is a huge commitment of executives' time, not just in negotiating the deal, but then in following up and integrating various parts of the two companies.

In that spirit, I find it discouraging that the top executives at Pfizer apparently believe that the best focus of their time and energy at present isn't developing new pharmaceuticals in-house, but rather doing a tax-inversion deal with the Irish firm Allergan, so that Pfizer can pay the lower Irish corporate tax rate instead. I find it discouraging that the Belgian firm Anheuser-Busch InBev and the US firm SABMiller, both of which are the result of earlier large-scale mergers, apparently believe that the most important corporate priority isn't to focus on selling fizzy water to their customers, but instead to do yet another merger. The same discouragement applies to most of the mergers on the top 10 list above.

The merger wave means that top executives across a wide range of industries–pharmaceuticals, oil and gas, food and beverages, chemicals, technology, telecom, health care, and others–are all deciding that the most productive use of their time, their energy, and a historically enormous amount of capital is to merge with or acquire other existing firms. I'm sure every one of these firms can offer some flashy case about the "synergies" from the mergers, and probably a few of those cases will even turn out to be correct. But I find myself wondering about the potential gains to productivity and consumers if, instead of pursuing mergers, these top executives focused the same time and energy and financial resources on building the capabilities of their own workforce, innovating in their product areas, and competing with the other firms in their industries.

Eliminate High-Denomination Bills?

Most of us use a fair number of $20 bills, and maybe a $50 or $100 bill every now and again. But of the total US currency in circulation, 78% is held in the form of $100 bills. To put it differently, the $1,014 billion outstanding in $100 bills is the equivalent of about $3,000 in $100 bills for every person in the United States. I've noted this phenomenon before: for example, here and here. Peter Sands argues that it's time to do something about it in "Making it Harder for the Bad Guys: The Case for Eliminating High Denomination Notes," which was published in February 2016 by the Mossavar-Rahmani Center for Business & Government at the Harvard Kennedy School (Working Paper #52). He writes:

Our proposal is to eliminate high denomination, high value currency notes, such as the €500 note, the $100 bill, the CHF1,000 [Swiss franc] note and the £50 note. Such notes are the preferred payment mechanism of those pursuing illicit activities, given the anonymity and lack of transaction record they offer, and the relative ease with which they can be transported and moved. By eliminating high denomination, high value notes we would make life harder for those pursuing tax evasion, financial crime, terrorist finance and corruption. …

To get a sense of why this might matter to criminals, tax evaders or terrorists, consider what it would take to transport US$1m in cash. In US$20 bills, US$1m in cash weighs roughly 110lbs and would fill 4 normal briefcases. One courier could not do this. In US$100 bills, the same amount would weigh roughly 22lbs and take only one briefcase. A single person could certainly do this, but it would not be that discreet. In €500 notes, US$1m equivalent weighs about 5lbs and would fit in a small bag. … It should be no surprise that in the underworld the €500 note is known as a “Bin Laden”.

For example, consider the cross-border flows of cash between the United States and Mexico from drug trafficking. These amount to billions, which in turn means thousands or tens of thousands of trucks, pick-ups and individual couriers carrying cash. As pointed out earlier, interdiction rates are very low: against cross-border flows of the order of US$20-30bn per year, total seizures in the decade to 2013 amounted to under US$550m. Suppose the US$100 bill was eliminated and the drug traffickers switched entirely to US$50 bills. All else equal, the number of trucks, pick-ups and couriers would have to double. Costs and interdiction rates would probably more than double. Taking the logic further, suppose US$50 issuance was constrained so that the drug traffickers had to rely largely on US$20 bills. The transportation task would increase by up to five times. It would be very surprising if this did not have a very significant impact on costs and interdiction. …

Once the decision is made to eliminate high denomination notes, there are a range of options about how to implement this, which vary in pace and impact. These are not examined in any depth in this paper. However, the most straightforward option is very simple: stop issuing the highest denominations and withdraw the notes whenever they are presented to a bank. More assertive options would put restrictions on where and how they can be used (e.g., “no more than 20 on any one transaction”) or put a maximum value on permissible cash transactions (as Italy has done). The most aggressive option would be to put a time limit on how long the high denomination notes would be honored. However, this would be contrary to the established doctrines of a number of central banks, which continue to honor withdrawn notes many years after the event.
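Sands's briefcase arithmetic is easy to reproduce. A minimal sketch, assuming a US banknote weighs about 1 gram, a euro note about 1.1 grams, and an exchange rate of roughly $1.11 per euro (the note weights and the early-2016 exchange rate are my assumptions, not figures from the paper):

```python
# Rough weight of US$1m in cash at various denominations, following
# Sands's illustration. Note weights (~1 g for a US bill, ~1.1 g for a
# euro note) and the ~$1.11/EUR rate are assumptions, not from the paper.

GRAMS_PER_POUND = 453.6

def cash_weight_lbs(amount_usd, denomination, note_grams=1.0, usd_per_unit=1.0):
    """Pounds of paper needed to move amount_usd in a single denomination."""
    notes = amount_usd / (denomination * usd_per_unit)
    return notes * note_grams / GRAMS_PER_POUND

w20 = cash_weight_lbs(1e6, 20)                   # US$20 bills
w100 = cash_weight_lbs(1e6, 100)                 # US$100 bills
w500eur = cash_weight_lbs(1e6, 500, 1.1, 1.11)   # EUR500 notes

print(round(w20), round(w100), round(w500eur, 1))  # roughly 110, 22, and 4-5 lbs

# The interdiction logic scales the same way: pushing traffickers from
# $100s to $50s doubles the bulk to smuggle; forcing them to $20s
# multiplies it five-fold.
print(100 / 50, 100 / 20)  # 2.0 5.0
```

The outputs line up with the figures in the quoted passage: about 110 lbs in twenties, 22 lbs in hundreds, and a few pounds in €500 notes.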

As Sands readily recognizes at a number of places throughout the article, the case against big bills isn't an easy one to prove with ironclad systematic evidence, because no one really knows where the big bills are. The exception seems to be Japan, where lots of people carry and use 10,000 yen notes in everyday life. But in other countries, a lot of the currency consists of big bills that the average person rarely sees.
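The per-person arithmetic makes that last point concrete. A sketch using the $1,014 billion figure mentioned earlier and an assumed US population of about 320 million (my rough 2016 figure, not from the paper):

```python
# How many $100 bills are outstanding per US resident? The $1,014 billion
# total is from the post; the ~320 million population is an assumption.

total_value = 1_014e9        # dollars held as $100 bills
population = 320e6           # approximate 2016 US population (assumed)

notes_per_person = (total_value / 100) / population
value_per_person = total_value / population

print(round(notes_per_person))   # ~32 notes each
print(round(value_per_person))   # ~$3,169 each
```

Roughly 32 hundred-dollar bills per resident, or about $3,000 each: far more than the average person ever sees or handles.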

Thus, Sands's argument tries to piece together the bits of evidence that do exist. At least in my reading, the attempt is more successful in some cases than others. For example, while the cash economy certainly contributes to tax evasion, it's not clear to me that very large numbers of large-denomination bills are the main issue here.

However, the importance of large-denomination bills in moving the profits of the illegal drug trade seems pretty clear. Sands writes: 

"By far the largest quantum of income from transnational organized crime is derived from the illicit production and sale of narcotics. UNODC estimates drug-trafficking revenues amount to about 0.4-0.6% of global GDP, or roughly US$300-450bn. … In the drug economy, cash dominates. Sales of illicit narcotics are almost exclusively conducted as cash transactions. As a result, large amounts of currency accumulate at collection points and across supply lines over relatively short periods of time. Storing, transporting and smuggling the proceeds of drug sales is a key operational challenge for international syndicates keen to hide the proceeds of their crimes from authorities. Cash derived from sales across the United States is typically taken to regional counting houses in major cities, converted into higher denomination notes, vacuum sealed to further reduce bulk then “concealed in the structure of cars or articulated trucks that are hitherto unknown to law enforcement”. The United States Custom and Border Patrol confirm that most proceeds from illicit drugs are transported as bulk cash, with an estimated US$20-30bn in currency crossing from the United States across the border with Mexico each year. Indeed, as governments have increased scrutiny and control over formal payment systems, cash smuggling has become the principal mechanism for distributing proceeds through global drug production chains."

Sands also makes a strong case that large-denomination bills play a substantial role in human trafficking and human smuggling, where cash plays a large role, "not least because the ability to move large amounts of money across borders without detection is a critical part of the business model." ISIS seems to rely for its financing on flows of large-denomination bills, too:

"The biggest source of money for ISIS is oil smuggling, estimated at its peak to be around US$500m per year, but probably significantly less now, given air-strikes on pumping stations, refineries, pipelines and oil tanker convoys by the US-led coalition, as well as the decline in the oil price. There is very little reliable information on how the oil is sold, but it appears that much is sold for cash, largely US dollars (and given the volumes almost certainly US$100 bills). Sometimes payments are made to the bank accounts of ISIS sympathizers elsewhere, with the money then couriered into ISIS territory in cash (again, almost certainly in US$ or Euro)."

It's easy enough to come up with reasons why some law-abiding people, whether in the US or in countries beset by economic or political instability, might want to hold a stash of $100 or €500 notes. It's also easy to suggest ways that, if large-denomination bills were phased out, other stores of value like diamonds or gold or anonymous electronic money like Bitcoin might take their place. Perhaps the most creative argument I've heard for keeping large-denomination bills is that the authorities could figure out a way to mark some of them so they could be traced, and then could follow the bills to the criminals and the terrorists.

But without making all this too complicated, the basic tradeoff here is whether it's worth inconveniencing a relatively small number of law-abiding people with legitimate needs for large-denomination bills in exchange for, as the title of Sands's paper says, "making it harder for the bad guys." One interesting fact is that in exchange rate markets, large-denomination bills actually trade at above face value, presumably because of their ability to maintain and transport value.

For some reason, thinking about phasing out $100 bills brought to mind the ongoing argument for dropping the penny. It seems to me that the real-world gains from dropping the penny are small compared to the gains from phasing out large-denomination bills.

Jérémie Cohen-Setton offers a useful overview of the arguments with links to a number of comments in a blog post on "The elimination of High Denomination Notes" (March 7, 2016) at the website of Bruegel, a European think-tank.

Dynamic Pricing: Uber, Coca Cola, Disneyland and Elsewhere

Dynamic pricing refers to the practice of changing prices in real time depending on fluctuations in demand or supply. Most consumers are inured to dynamic pricing in certain contexts. For example, when a movie theater charges more on a Friday or a Saturday night than for an afternoon matinee, or when a restaurant offers an early-bird dinner special, or when mass transit buses or trains offer a lower fare during off-peak hours, or when airlines charge more for a ticket ordered one day before the flight rather than three months before the flight, it doesn't raise many eyebrows.

In other cases, dynamic pricing is more controversial. One classic example is that back in 1999, Coca-Cola experimented with vending machines that would automatically raise the price on hot days. The then-chairman, M. Douglas Ivester, pointed out that demand for a cold drink can increase on hot days and said: "So, it is fair that it should be more expensive. ... The machine will simply make this process automatic." However, the reaction from customers stopped the experiment in its tracks. On the other hand, in 2012 certain Coca-Cola-owned vending machines in Spain were set to cut the price of certain lemonade drinks by as much as half on hot days. To my knowledge, there was no outcry over this policy.

Information technology is enabling dynamic pricing to become more widespread in a number of contexts. The on-line Knowledge magazine published by the Wharton School at the University of Pennsylvania has been publishing some readable commentary on dynamic pricing. "The Promise — and Perils — of Dynamic Pricing" (February 23, 2016) offers an overview of the arguments with links to some research. In "Frustrated by Surge Pricing? Here's How It Benefits You in the Long Run" (January 5, 2016), Ruben Lobel and Kaitlin Daniels discuss how it's important to see the whole picture: both higher prices at peak times, but also lower prices at other times. In "The Price Is Pliant: The Risks and Rewards of Dynamic Pricing" (January 15, 2016), Senthil Veeraraghavan looks at the choices that sellers face in considering dynamic pricing if they are taking their long-term relationships with customers into account.

Many of the most current examples seem to involve the entertainment industry. For example, the St. Louis Cardinals baseball team uses "a dynamic pricing program tied to its ticketing system in which the team changes ticket prices daily based on such factors as pitching match-ups, weather, team performance and ticket demand." Some ski resorts are adjusting prices based on demand and recent snowfall. Disneyland recently announced a plan to raise admissions prices by as much as 20% on days that are historically known to be busy, while lowering them on other days.

These examples are worthy of study: for example, one paper points out that if a seller only uses dynamic pricing to raise prices on busy days, but doesn't correspondingly lower prices to entice more people on non-busy days, it can end up losing revenue overall. But at the end of the day, it's hard to argue that these industries involve any great issue of fairness or justice. If you don't want to go to Disneyland or a certain ski resort, then don't go. Sure, sellers in the entertainment industry should be very cautious about a perception that they are jerking their customers around. But there's now an active online market for reselling tickets for a lot of entertainment events, and prices in that market are going to reflect last-minute supply and demand factors.

The current controversies over dynamic pricing often seem to bring up Uber, with its policy of having fares that rise during peak times. Uber released a research paper in September 2015 called "The Effects of Uber's Surge Pricing: A Case Study," by Jonathan Hall, Cory Kendrick, and Chris Nosko. Part of the paper focuses on the evening of March 21, 2015, when Ariana Grande played a sold-out show at Madison Square Garden. When the concert let out, Uber prices surged: more specifically, the usual Uber price was raised by a multiple of "1.2 for 5 minutes, 1.3 for 5 minutes, 1.4 for 5 minutes, 1.5 for 15 minutes, and 1.8 for 5 minutes." Here's the pattern that emerged in the market.

The red dots show the pattern of people opening the Uber app after the concert; the red line is smoothed out to show the overall pattern. The blue dots and the blue line show actual ride requests. Notice that this rises, but not by as much, probably in part because some of those who looked at the higher surge price decided it wasn't worth it, and found another way of getting home. The green dots and green line show the rise in Uber drivers in the area, with the rise presumably occurring in part because drivers were attracted by the surge price.

I don't think even the authors of the paper would make strong claims here that Uber surge pricing worked perfectly on the night of March 21, 2015. But it did get more cars on the streets, and it did mean that people willing to pay the price had an additional option for getting home.
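The surge schedule quoted from the paper lends itself to a quick back-of-the-envelope calculation. Here is a minimal sketch; the helper function and the time-weighted average are my own illustration, not part of the paper's analysis:

```python
# Surge multipliers from the Uber case study: (multiplier, minutes in effect).
SURGE_SCHEDULE = [(1.2, 5), (1.3, 5), (1.4, 5), (1.5, 15), (1.8, 5)]

def multiplier_at(minute):
    """Surge multiplier in effect `minute` minutes after the surge began."""
    elapsed = 0
    for mult, duration in SURGE_SCHEDULE:
        elapsed += duration
        if minute < elapsed:
            return mult
    return 1.0  # surge over; back to the normal fare

# Time-weighted average multiplier over the 35-minute surge window.
total_minutes = sum(d for _, d in SURGE_SCHEDULE)
average = sum(m * d for m, d in SURGE_SCHEDULE) / total_minutes
print(round(average, 2))  # 1.46
```

In other words, a rider who happened to request a car at the peak faced a fare 80 percent above normal, but the average markup across the whole 35-minute window was closer to 46 percent.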

Those interested in a fuller analysis of Uber might want to track down "Disruptive Change in the Taxi Business: The Case of Uber," by Judd Cramer and Alan B. Krueger. (It's downloadable for free as Princeton Industrial Relations Working Paper #595, released December 2015, and also was released in March 2016 as National Bureau of Economic Research Working Paper 22083.) Their estimate suggests that "UberX drivers spend a significantly higher fraction of their time, and drive a substantially higher share of miles, with a passenger in their car than do taxi drivers." They write:

\”Because we are only able to obtain estimates of capacity utilization for taxis for a handful of major cities – Boston, Los Angeles, New York, San Francisco and Seattle – our estimates should be viewed as suggestive. Nonetheless, the results indicate that UberX drivers, on average, have a passenger in the car about half the time that they have their app turned on, and this average varies relatively little across cities, probably due to relatively elastic labor supply given the ease of entry and exit of Uber drivers at various times of the day. In contrast, taxi drivers have a passenger in the car an average of anywhere from 30 percent to 50 percent of the time they are working, depending on the city. Our results also point to higher productivity for UberX drivers than taxi drivers when the share of miles driven with a passenger in the car is used to measure capacity utilization. On average, the capacity utilization rate is 30 percent higher for UberX drivers than taxi drivers when measured by time, and 50 percent higher when measured by miles, although taxi data are not available to calculate both measures for the same set of cities. Four factors likely contribute to the higher utilization rate of UberX drivers: 1) Uber’s more efficient driver-passenger matching technology; 2) Uber’s larger scale, which supports faster matches; 3) inefficient taxi regulations; and 4) Uber’s flexible labor supply model and surge pricing, which more closely match supply with demand throughout the day.\”
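Note that the "30 percent higher" figure is a relative comparison, not a percentage-point gap. A quick check with made-up numbers inside the ranges the authors report (these are illustrative values, not the paper's city-level estimates):

```python
# Share of working time with a passenger in the car. Illustrative values
# within the ranges Cramer and Krueger report, not their actual estimates.
uberx_time_util = 0.50   # UberX: roughly half of app-on hours
taxi_time_util = 0.38    # a hypothetical city in the 30-50% taxi range

# Relative gap, in the spirit of the paper's "30 percent higher ...
# when measured by time" (about 32% with these particular numbers).
relative_gap = uberx_time_util / taxi_time_util - 1
print(f"{relative_gap:.0%}")  # 32%
```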

However, I'd argue that the two up-and-coming examples of surge pricing that could have the biggest effect on the most people involve electricity and traffic jams. In the case of variable prices for electricity, a policy of charging more for electricity on hot days will encourage more people to ease back on their use of air conditioning at those times and look for opportunities to conserve, which in turn means less chance of power outages and less need to use expensive back-up generating capacity. A policy of charging higher tolls on congested roads will encourage people to find other ways to travel, and provide a market demand for when building additional lanes of highway is really worth doing. As these examples suggest, the economic theory behind dynamic pricing or "surge pricing" is well understood. When the quantity demanded of a good or service rises and falls at predictable times, broader social benefits emerge from charging more at that time.

This economic logic even applies in what is surely the most controversial case of surge pricing, which is when prices of certain goods rise either just before or just after a giant storm or other disaster. The higher price, often attacked as "price gouging," gives buyers an incentive not to purchase and hoard the entire stock, and it gives outside sellers an incentive to hop in their pick-up trucks and vans and bring more of the product to the disaster area. What's worse than being in a disaster area and having to pay extra for certain key goods? Being in a disaster area where those goods aren't available at any price, because the price stayed low and they were sold out before you arrived.

The ongoing gains in information technology are only going to make dynamic pricing more common, because it is only going to become easier both to track changes in demand either historically or in real time and also to make price adjustments in real time (think of the ability to adjust electricity bills or road tolls, for example). There are going to be changes that will feel like abuses. For example, I wouldn't be surprised if some online retailers already have software in place so that if there is a demand surge for some product, the price jumps automatically. Of course, many of those who want to push back against companies that use surge pricing, like Uber, will have no problem with personally using that same information technology to re-sell their tickets to a highly demanded or sold-out event at well above face value.

Automation and Job Loss: The Fears of 1927

As I've noted from time to time, blasts of concern over how automation would reduce the number of jobs have been erupting for more than 200 years. As one example, in "Automation and Job Loss: The Fears of 1964" (December 1, 2014), I wrote about what were called the "automation jobless" in a 1961 news story and how John F. Kennedy advocated and Lyndon Johnson signed into law a National Commission on Technology, Automation, and Economic Progress. The Commission eventually released its report in February 1966, when the unemployment rate was 3.8%.

Here's an example of concerns about automation replacing labor from a speech given in 1927 by the US Secretary of Labor James J. Davis called "The Problem of the Worker Displaced by Machinery," which was published in the Monthly Labor Review of September 1927 (25: 3, pp. 32-37, available through JSTOR). Before offering an extended quotation from Davis, here are a few quick bits of background.
  • When Davis delivered this speech in 1927, the extremely severe recession of 1920-21 was six years in the past, but between 1921 and 1927 the economy had had two milder recessions.
  • The unemployment rate in 1927 was 3.9%, according to the Historical Statistics of the United States.
  • At several points in his speech, Davis expresses deep concerns over immigration, and how much worse the job loss due to automation would have been if immigration had not been limited earlier in the 1920s. Both then and now, economic stress and concerns about economic transition seem to be accompanied by heightened concern over immigration.
  • Davis ends up with what many economists have traditionally viewed as the "right" answer to concerns about automation and jobs: that is, find ways to help workers who are dislocated in the process of technological innovation, but by no means try to slow the course of automation itself.
  • As a bit of trivia, Davis is the only person to serve as Secretary of Labor under three different presidents: Harding, Coolidge, and Hoover.
Here\’s what Davis had to say in his 1927 talk.

\”Every day sees the perfection of some new mechanical miracle that enables one man to do better and more quickly what many men used to do. In the past six years especially, our progress in the lavish use of power and in harnessing that power to high-speed productive machinery has been tremendous. Nothing like it has ever been seen on earth. But what is all this machinery doing for us? What is it doing to us? I think the time is ripe for us to pause and inquire.

\”Take for example the revolution that has come in the glass industry. For a long time it was thought impossible to turn out machines capable of replacing human skill in the making of glass. Now practically all forms of glassware are being made by machinery, some of the machines being extraordinarily efficient. Thus, in the case of one type of bottle, automatic machinery produces forty-one times as much per worker as the old hand processes, and the machine production requires no skilled glass blowers. In other words, one man now does what 41 men formerly did. What are we doing with the men displaced?

\”The glass industry is only one of many industries that have been revolutionized in this manner. I began my working life as an iron puddler, and sweated and toiled before the furnace. In the iron and steel industry, too, it was long thought that no machinery could ever take the place of the human touch; yet last week I witnessed the inauguration of a new mechanical sheet-rolling process with six times the capacity of the former method. 

\”Like the bottle machine, this new mechanical wonder in steel will abolish jobs. It dispenses with men, many of whom have put in years acquiring their skill, and take a natural pride in that skill. We must, I think, soon begin to think a little less of our wonderful machines and a little more of our wonderful American workers, the alternative being that we may have discontent on our hands. This amazing industrial organization that we have built up in our country must not be allowed to get in its own way. If we are to go on prospering, we must give some thought to this matter.

\”Understand me, I am not an alarmist. If you take the long view, there is nothing in sight to give us grave concern. I am no more concerned over the men once needed to blow bottles than I am over the seamstresses that we once were afraid would starve when the sewing machine came in. We know that thousands more seamstresses than before earn a living that would be impossible without the sewing machine. In the end, every device that lightens human toil and increases production is a boon to humanity. It is only the period of adjustment, when machines turn workers out of their old jobs into new ones, that we must learn to handle them so as to reduce distress to the minimum. 

"To-day when new machines are coming in more rapidly than ever, that period of adjustment becomes a more serious matter. Twenty years ago we thought we had reached the peak in mass production. Now we know that we had hardly begun. … In the long run new types of industries have always absorbed the workers displaced by machinery, but of late we have been developing new machinery at a faster rate than we have been developing new industries. Inventive genius needs to turn itself in this direction.

\”I tremble to think what a state we might be in as a result of this development of machinery without the bars we have lately set up against wholesale immigration: If we had gone on admitting the tide of aliens that formerly poured in here at the rate of a million or more a year, and this at a time when new machinery was constantly eating into the number of jobs, we might have had on our hands something much more serious than the quiet industrial revolution now in progress. 

\”Fortunately we were wise in time, and the industrial situation before us is, as I say, a cause only for thought, not alarm. Nevertheless I submit that it does call for thought. There seems to be no limit to our national efficiency. At the same time we must ask ourselves, is automatic machinery, driven by limitless power going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress? …

\”We saved ourselves from the millions of aliens who would have poured in here when business was especially slack and unemployment high. In the old days we used to admit these aliens by the shipload, regardless of the state of the times. I remember that in my own days in the mill when a new machine was put into operation or a new plant was to be opened, aliens were always brought in to man it. When we older hands were through there was no place for us to go. No one had a thought for the man turned out of a job. He went his way forgotten.

\”With a certain amount of unemployment even now to trouble us, think of the nation-wide distress in 1920-21 with the bars down and aliens flooding in, and nowhere near enough jobs to go round. Our duty, as we saw it, was to care as best we could for the workers already here, native or foreign born. Restrictive immigration enabled us to do so, and thus work out of a situation bad enough as it was. Now, just as we were wise in season in this matter of immigration, so we must be wise in sparing our people to-day as much as possible from the curse of unemployment as a result of the ceaseless invention of machinery. It is a thought to be entertained, whatever the pride we naturally take in our progress in other directions.

\”Please understand me, there must be no limits to that progress. We must not in any way restrict new means of pouring out wealth. Labor must not loaf on the job or cut down output. Capital must not, after building up its great industrial organization shut down its mills. That way lies dry rot. We must ever go on, fearlessly scrapping old methods and old machines as fast as we find them obsolete. But we can not afford the human and business waste of scrapping men. In former times the man suddenly displaced by a machine was left to his fate. The new invention we need is a way of caring for this fellow made temporarily jobless. In this enlightened day we want him to go on earning, buying, consuming, adding his bit to the national wealth in the form of product and wages. When a man loses a job, we all lose something. Our national efficiency is not what it should be unless we stop that loss.

\”As I look into the future, far beyond this occasional distress of the present, I see a world made better by the very machines invented to-day. I see the machine becoming the real slave of man that it was meant to be. …  We are going to be masters of a far different and better life.\”

I'll add my obligatory reminder here that just because past concerns about automation replacing workers have turned out to be overblown certainly doesn't prove that current concerns will also turn out to be overblown. But it is an historical fact that for the last two centuries, automation and technology have played a dramatic role in reshaping jobs, and have also helped to lower the average work-week, without leading to a jobless dystopia.

Remembering Lloyd Shapley: Surprised to Find Himself Speaking Economics

When I heard that 2012 Nobel laureate in economics Lloyd Shapley had died, I was reminded of Molière's well-known 1670 play, "Le Bourgeois Gentilhomme," in which a character named Monsieur Jourdain is astonished to find that he has been speaking prose all his life. Shapley was a mathematician, worked in a department of mathematics, and back in the early 1960s developed a theorem that cracked a problem in mathematics, but then found that he had been speaking economics all along. When Shapley won the Nobel, along with Alvin Roth, he said: "I consider myself a mathematician and the award is for economics. I never, never in my life took a course in economics."

Molière described Monsieur Jourdain's discovery that he was speaking prose like this:

MONSIEUR JOURDAIN: Please do. But now, I must confide in you. I\’m in love with a lady of great quality, and I wish that you would help me write something to her in a little note that I will let fall at her feet.
PHILOSOPHY MASTER: Very well.

MONSIEUR JOURDAIN: That will be gallant, yes?

PHILOSOPHY MASTER: Without doubt. Is it verse that you wish to write her?

MONSIEUR JOURDAIN: No, no. No verse.

PHILOSOPHY MASTER: Do you want only prose?

MONSIEUR JOURDAIN: No, I don\’t want either prose or verse.

PHILOSOPHY MASTER: It must be one or the other.

MONSIEUR JOURDAIN: Why?

PHILOSOPHY MASTER: Because, sir, there is no other way to express oneself than with prose or verse.

MONSIEUR JOURDAIN: There is nothing but prose or verse?

PHILOSOPHY MASTER: No, sir, everything that is not prose is verse, and everything that is not verse is prose.

MONSIEUR JOURDAIN: And when one speaks, what is that then?

PHILOSOPHY MASTER: Prose.

MONSIEUR JOURDAIN: What! When I say, \”Nicole, bring me my slippers, and give me my nightcap,\” that\’s prose?

PHILOSOPHY MASTER: Yes, Sir.

MONSIEUR JOURDAIN: By my faith! For more than forty years I have been speaking prose without knowing anything about it, and I am much obliged to you for having taught me that.

Economists often find that in trying to convey the precise meaning of their arguments, they are speaking in mathematics. Thus, it's perhaps not a huge surprise that a mathematician like Shapley might discover that he was speaking economics. In a 2012 post on "The 2012 Nobel Prize to Shapley and Roth" (October 17, 2012), I tried to lay out the basic result.

The fundamental problem looks like this. Imagine that there are two groups, A and B. Members of Group A are to be matched with members of Group B. However, each of the members of Group A has likes and dislikes about the members of Group B, each of the members of Group B has likes and dislikes about the members of Group A, and these likes and dislikes don't necessarily line up. Is there a way to match members of Group A and Group B so that, after the match, everyone won't be ditching their match and trying for another?

Before trying to describe the result, the Gale-Shapley algorithm, it's worth noting that this general situation of finding stable matches applies in many settings. (David Gale was a very prominent mathematical economist who died in 2008, before the Nobel was awarded to Shapley and Roth.) The 1962 paper offered verbal illustrations based on college admissions and on what economists think of as the "marriage market." Later on, Shapley's co-laureate Alvin Roth offered a number of practical applications: for example, processes used in K-12 school lotteries in various cities, in the "matching" process for medical school residencies, and in matching donors and recipients of kidney transplants.

Here is how the Nobel committee describes Gale and Shapley's "deferred acceptance" procedure to get a stable result for their matching problem.

Agents on one side of the market, say the medical departments, make offers to agents on the other side, the medical students. Each student reviews the proposals she receives, holds on to the one she prefers (assuming it is acceptable), and rejects the rest. A crucial aspect of this algorithm is that desirable offers are not immediately accepted, but simply held on to: deferred acceptance. Any department whose offer is rejected can make a new offer to a different student. The procedure continues until no department wishes to make another offer, at which time the students finally accept the proposals they hold.

In this process, each department starts by making its first offer to its top-ranked applicant, i.e., the medical student it would most like to have as an intern. If the offer is rejected, it then makes an offer to the applicant it ranks as number two, etc. Thus, during the operation of the algorithm, the department's expectations are lowered as it makes offers to students further and further down its preference ordering. (Of course, no offers are made to unacceptable applicants.) Conversely, since students always hold on to the most desirable offer they have received, and as offers cannot be withdrawn, each student's satisfaction is monotonically increasing during the operation of the algorithm. When the departments' decreased expectations have become consistent with the students' increased aspirations, the algorithm stops.

Here's how the procedure would work in the marriage market:

\”The Gale-Shapley algorithm can be set up in two alternative ways: either men propose to women, or women propose to men. In the latter case, the process begins with each woman proposing to the man she likes the best. Each man then looks at the different proposals he has received (if any), retains what he regards as the most attractive proposal (but defers from accepting it) and rejects the others. The women who were rejected in the first round then propose to their second-best choices, while the men again keep their best offer and reject the rest. This continues until no women want to make any further proposals. As each of the men then accepts the proposal he holds, the process comes to an end.\”
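The deferred-acceptance procedure described above is short enough to sketch directly. Here is a minimal Python version; the three-couple preference lists are invented for illustration and are not from the 1962 paper:

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance; returns the proposer-optimal stable match.

    Each prefs dict maps a name to a list of names on the other side, best first.
    """
    free = list(proposer_prefs)                # proposers with no offer held
    next_try = {p: 0 for p in proposer_prefs}  # index of next reviewer to propose to
    held = {}                                  # reviewer -> proposer currently held

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_try[p]]     # best reviewer not yet proposed to
        next_try[p] += 1
        if r not in held:
            held[r] = p                        # first offer is held, not accepted
        else:
            prefs = reviewer_prefs[r]
            if prefs.index(p) < prefs.index(held[r]):
                free.append(held[r])           # reviewer trades up; old offer freed
                held[r] = p
            else:
                free.append(p)                 # offer rejected; p proposes again later

    return {p: r for r, p in held.items()}

# Cyclic preferences, so the two sides disagree about every match.
men = {"A": ["x", "y", "z"], "B": ["y", "z", "x"], "C": ["z", "x", "y"]}
women = {"x": ["B", "C", "A"], "y": ["C", "A", "B"], "z": ["A", "B", "C"]}

print(deferred_acceptance(men, women))  # men propose: every man gets his first choice
print(deferred_acceptance(women, men))  # women propose: every woman gets hers
```

With these preferences, both runs produce stable matchings, but different ones: whichever side proposes ends up with its top choices while the reviewing side ends up with its worst, illustrating the point that stable institutions can systematically favor one side of the market.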

I described some of the context of the result this way in my 2012 post:

Gale and Shapley prove that this procedure leads to a "stable" outcome. Again, this doesn't mean that everyone gets their first choice! It means that when the outcome is reached, there is no combination of medical school and applicant, or of man and woman in the marriage example, who would both prefer a different match from the one with which they ended up. But Gale and Shapley went further. It turns out that there are often many stable combinations, and in comparing these stable outcomes, the question of who does the choosing matters. If women propose to men, women will view the outcome as the best of all the stable matching possibilities, while men will view it as the worst; if men propose to women, men as a group will view it as the best of all stable matching possibilities, while women will view it as the worst. As the Nobel committee writes, "stable institutions can be designed to systematically favor one side of the market." …

The Nobel prize to Shapley and Roth is one of those prizes that I suspect I will have a hard time explaining to non-economists. The non-economists I know ask practical questions. They want to know how the work done for the prize will spur the economy, or create jobs, or reduce inequality, or help the poor, or save the government money. Somehow, better matching for medical school students won't seem, to some non-economists, like it's "big enough" to deserve a Nobel. But economics isn't all about today's public policy questions. The prize rewards thinking deeply about how a matching process works. In a world of increasingly powerful information-processing technology, where we may all find ourselves "matched" in various ways based on questions we answer by software we don't understand, I suspect that Alvin Roth's current applications are just the starting point for ways to apply the insights developed from Lloyd Shapley's "deferred acceptance" mechanism.

Shapley\’s obituary in the Economist is here, from the New York Times is here, and from the Associated Press is here.

A Fundamental Shift in the Nature of Trade Agreements

Controversy over agreements that seek to encourage free trade has been going on for decades. But the nature of the underlying trade agreements has fundamentally shifted. The old trade agenda under first the GATT and then its successor the World Trade Organization was focused on reducing tariffs and other trade barriers. The new generation of trade agreements is about assuring that interlocking webs of production that cross international borders will be enabled to function. Richard Baldwin explores the change in "The World Trade Organization and the Future of Multilateralism," published in the Winter 2016 issue of the Journal of Economic Perspectives. Here's a taste of his theme:

\”[T]he rules and procedures of the WTO were designed for a global economy in which made-here–sold-there goods moved across national borders. But the rapid rising of offshoring from high-technology nations to low-wage nations has created a new type of international commerce. In essence, the flows of goods, services, investment, training, and know-how that used to move inside or between advanced-nation factories have now become part of international commerce. For this sort of offshoring-linked international commerce, the trade rules that matter are less about tariffs and more about protection of investments and intellectual property, along with legal and regulatory steps to assure that the two-way flows of goods, services, investment, and people will not be impeded. It’s possible to imagine a hypothetical WTO that would incorporate these rules. But in practice, the rules are being written in a series of regional and megaregional agreements like the Trans-Pacific Partnership (TPP) and Transatlantic Trade and Investment Partnership (TTIP) between the United States and the European Union. The most likely outcome for the future governance of international trade  is a two-pillar structure in which the WTO continues to govern with its 1994-era rules while the new rules for international production networks, or “global value chains,” are set by a decentralized process of sometimes overlapping and inconsistent megaregional agreements.\”

Let's unpack these dynamics a bit. Baldwin argues that the rounds of international trade talks starting with the GATT in 1947 displayed what he calls "juggernaut" dynamics. Before the multilateral trade talks, there wasn't much reason for exporters in a country to care about whether the country imposed tariffs on imports. But the multilateral trade talks shifted the political balance, because exporters realized that in order to get lower tariffs in their foreign markets, their own country would need to reduce its tariffs, too. Moreover, each time tariffs were reduced, it tended to weaken firms and industries that faced tough import competition, while benefiting export-oriented firms. Thus, export-oriented firms were in a stronger position to advocate for future tariff cuts as well.

But by the 1990s, there was a rise in global supply chains that crossed international borders. Many emerging markets figured out pretty quickly that if they wanted to be part of global supply chains, they not only needed to reduce their tariffs, but they also needed to implement rules about protection of investment and intellectual property, as well as pursuing the "trade facilitation" agenda of making it easier for goods to move across borders. Literally hundreds of these regional trade agreements have already been signed, and these were not "shallow" agreements focused on reducing tariffs a bit, but "deep" agreements that got way down into the nitty-gritty of facilitating cross-border trade.

Here are a couple of figures from Baldwin to illustrate the point. The bars in the left-hand figure show the number of new regional trade agreements signed each year, and the blue line shows the typical number of “deep” provisions in these treaties. The right-hand figure shows the number of bilateral investment treaties signed each year; there was clearly a boom in such treaties from the late 1990s into the early 2000s.

The controversial megaregional trade agreements now in the news are mostly about combining and standardizing provisions that are pretty much already incorporated in these hundreds of regional agreements. As Baldwin writes: “The thousands of bilateral investment treaties, for instance, are not all that different, and so network externalities could be realized by melding them together. The emergence of so-called megaregionals like the Trans-Pacific Partnership and Trans-Atlantic Trade and Investment Partnership should be thought of as partial multilateralization of existing deep disciplines by sub-groups of WTO members who are deeply involved in offshoring and global value chains.”

My sense is that this fundamental shift in trade agreements from “shallow” to “deep” is part of what drives the controversy surrounding them. The new generation of trade agreements aren’t just about reducing tariffs or trade barriers as traditionally understood; instead, they are full of very specific rules that seek to harmonize and facilitate business flows across international borders. There is an uncomfortable sense that in the process of negotiating the details, there are too many times when juicy little plums are included for favored special interests. In such cases, the fundamental economic arguments for the overall benefits of free trade, even though it is a disruptive force, have to be reinterpreted through a haze of fine print.

Baldwin sees some difficult issues emerging with a two-track system of world trade agreements. He writes:

“The megaregionals like the Trans-Pacific Partnership and Trans-Atlantic Trade and Investment Partnership, however, are not a good substitute for multilateralization inside the WTO. They will create an international trading system marked by fragmentation (because they are not harmonized among themselves) and exclusion (because emerging trade giants like China and India are not members now and may never be). Whatever the conceptual merits of moving the megaregionals into the WTO, I have argued elsewhere that the actual WTO does not seem well-suited to the task. …

“What all this suggests is that world trade governance is heading towards a two-pillar system. The first pillar, the WTO, continues to govern traditional trade as it has done since it was founded in 1995. The second pillar is a system where disciplines on trade in intermediate goods and services, investment and intellectual property protection, capital flows, and the movement of key personnel are multilateralised in megaregionals. China and certain other large emerging markets may have enough economic clout to counter their exclusion from the current megaregionals. Live and let live within this two-pillar system is a very likely outcome.”

(Full disclosure: I’ve worked as the Managing Editor of the Journal of Economic Perspectives, where Baldwin’s article appeared, for the last 30 years.)

Insights on Infrastructure

Infrastructure is one of those odd topics where it’s hard to find anyone who takes a strong stand against it, but as a government and a society we never quite get around to doing it. The Council of Economic Advisers, in its 2016 Economic Report of the President, devotes a chapter to explicating the issues. The overall tone of the chapter leans strongly in the direction of more infrastructure spending, but I was intrigued by some evidence that the condition of US infrastructure may not be as awful as I had expected, as well as by the implications of the alternative justifications for infrastructure spending.

For example, while the number of structurally deficient and obsolete bridges in the US still numbers in the tens of thousands, the number is diminishing rather than rising. The report notes:

“In 2014, the number of bridges that were rated as structurally deficient was just above 61,000, while the number that were rated as functionally obsolete, or inadequate for performing the tasks for which the structures were originally designed, was slightly below 85,000 (DOT 2015d). The number of structurally deficient bridges has declined on average 2.7 percent a year since 2000, below the 4.2-percent average annual rate of decline throughout the 1990s. The number of functionally obsolete bridges has also declined steadily since 2000, falling on average about 0.5 percent a year. Combined, these two groups accounted for just below 24 percent of all bridges in 2014, the smallest annual percentage on record.”
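As a quick sanity check on the quoted figures, the combined count and the "just below 24 percent" share imply a total US bridge count, and the 2.7 percent annual decline rate can be projected forward. This is a minimal back-of-the-envelope sketch; the ten-year projection horizon is my own illustrative assumption, not the report's.

```python
# Back-of-the-envelope check on the 2014 bridge figures quoted above.
structurally_deficient = 61_000   # "just above 61,000"
functionally_obsolete = 85_000    # "slightly below 85,000"
combined = structurally_deficient + functionally_obsolete

share = 0.24  # combined groups were "just below 24 percent of all bridges"
implied_total = combined / share
print(f"Combined problem bridges: {combined:,}")
print(f"Implied total US bridges: {implied_total:,.0f}")  # roughly 608,000

# Projecting the structurally deficient count forward at the post-2000
# average decline of 2.7 percent a year (horizon is illustrative):
years = 10
projected = structurally_deficient * (1 - 0.027) ** years
print(f"Structurally deficient after {years} more years: {projected:,.0f}")
```

The implied total of roughly 600,000 bridges is consistent with the scale of the US bridge inventory, which suggests the quoted share and counts hang together.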

Moreover, when US infrastructure is ranked in comparison with other high-income countries, the US looks OK.

“The World Economic Forum releases annual ratings that gauge the quality of infrastructure throughout the world, and its ratings for the United States are displayed in Figure 6-4. These ratings are determined on a 1-7 scale, with a higher score indicating a better quality level. In 2015, the United States received a rating of 5.8 for its overall infrastructure, which was above the 5.4-average rating across the world’s advanced economies, the 3.8-average across emerging and developing Asian nations, and the 4.1 global average. However, the overall U.S. rating for infrastructure in 2015 was noticeably below its level in the mid-2000s, falling nearly 8 percent since 2006. In comparison, the overall infrastructure rating for the world’s advanced economies increased about 2 percent over the same period.”

When it comes to overall public “gross fixed investment,” the US looks pretty similar to France and Canada, and in recent years to Japan as well, with all of these countries running ahead of Germany.

Of course, the case for infrastructure investment should be rooted in analysis of costs and benefits. The report mentions four kinds of benefits from infrastructure investment: 1) it can boost demand in the short term when an economy is in recession; 2) it can reduce congestion; 3) it can reduce maintenance costs; 4) it can complement the private economy in ways that add to long-term productivity growth. These justifications have different implications, so I’ll say a few words about each.

The case for infrastructure spending to boost demand in a recession is of course a lot more powerful when the unemployment rate is 10% (as in October 2009) than when it is 4.9% (as in January and February of this year). There are also some practical problems with using infrastructure spending in this way, because it needs to ramp up quickly while the effects of the recession are still occurring. For those who want more on this topic, a starting point is “Thoughts on Shovel-Ready Infrastructure” (October 15, 2015).

Traffic congestion is a real and severe problem. The average US commuter spends about 40 hours per year in traffic delays, which is not only a loss of time equivalent to a full work week, but also involves wasted fuel and added pollution.

That noted, it’s very difficult to build one’s way out of traffic congestion. Sure, many cities have some poorly designed interchanges or other bottlenecks where building anew would help. But the broader problem, as Anthony Downs pointed out in his 1992 book Stuck in Traffic, is that three kinds of substitution occur when rush-hour traffic gets bad on the highway: people adjust the times that they travel, people shift to different and less congested routes, and people shift to alternative modes of transportation (like mass transit). But when additional lanes are added to a freeway, this substitution works in reverse. As some commuters are attracted by the additional traffic lanes to shift back from the other times they were traveling, or to shift back from the alternative routes, or to shift back from the alternative modes of transportation, building more highways ends up not having much effect on congestion. Ultimately, the way out of traffic congestion involves some combination of “congestion pricing,” which means charging drivers for being on the road at peak times, and perhaps, in the future, driverless cars. These steps involve specific kinds of infrastructure spending, but just fixing up the existing roads and bridges, or adding some traffic lanes, isn’t likely to make much of a dent in congestion.

A rising share of infrastructure spending has been going to maintenance and operations, rather than to new infrastructure. This trend makes sense. For example, back in the 1950s and 1960s the main focus was on building the interstate highway system, but now there will be a greater emphasis on maintaining it. But here’s the pattern.

Infrastructure spending that takes the form of maintenance can pay for itself, both by reducing future repair costs and also, for example, by reducing wear and tear on vehicles.

“One estimate is that every $1 spent on preventive pavement maintenance reduces future repair costs by $4 to $10 (Baladi et al. 2002). Transportation engineers have developed economic methods that determine the optimal timing for applying preventive maintenance treatments to flexible and rigid pavements by assessing the benefits and costs for each year the treatment could be applied (Peshkin, Hoerner, and Zimmerman 2004). Allowing the condition of transportation infrastructure to deteriorate exacerbates wear and tear on vehicles. Cars and trucks that drive more frequently on substandard roads will require tire changes or other repairs more often—estimated to cost each driver, on average, an additional $516 annually in vehicle maintenance (TRIP 2015). Delaying maintenance can also induce more accidents on transit systems. Not repaving a road, replacing a rail, reinforcing a bridge, or restoring a runway can result in increased vehicle crashes that can disrupt transportation flows and create substantial safety hazards.”

However, there’s not a lot of reason to think that infrastructure focused on maintenance will have the same effects on improving economic productivity as, say, the original infrastructure investments in railroads, seaports, highways, and airports. One of my quibbles with this chapter is the amount of emphasis that it puts on transportation infrastructure. While additional investments in transportation can be justified for the reasons just given, it’s also true that government tends to spread out the transportation money so that it covers a lot of Congressional districts, rather than focusing on the biggest needs, and also that it tends to prioritize new roads when maintenance may be more important. Here’s the Congressional Budget Office in its February 2016 report Approaches to Make Federal Highway Spending More Productive.

For example, even though highway travel is more concentrated on Interstates and in urban areas, and urban roads are typically in poorer condition than rural ones, the federal government and state governments typically have spent more per mile of travel for major repairs on rural roads. Moreover, the extent to which new highways boost economic activity has generally declined over time, increasing the importance of maintaining existing capacity. Yet spending has not shifted much accordingly.

For detailed graphs and numbers on these issues, the March 2016 CBO report “Public Spending on Transportation and Water Infrastructure, 1956 to 2014” is also useful.

The concept of what infrastructure is important for the 21st century needs to be much broader than roads and bridges, and even broader than transportation. It needs to include investments in energy infrastructure, which includes oil and gas pipelines and the electrical grid; in particular, expanding the capabilities of the electrical grid so that it can more easily incorporate intermittent but renewable energy sources like solar and wind, and also so that it can be used for variable pricing to encourage conservation. Infrastructure also needs to include the communications infrastructure. Both energy and communications systems also need updating to reduce their vulnerabilities to hacking and to damage from natural disasters or terrorist attacks. At one point, the CEA report notes:

“Some research found that increasing aggregate public investment by $1 can increase long-term private investment by $0.64 (Pereira 2001). However, this effect was found to vary noticeably among different types of infrastructure: Pereira (2001) estimated that publicly investing $1 in electric and gas facilities, transit systems, and airfields induces a $2.38 rise in long-term private investment, whereas an additional $1 of public investment in highways and streets increases private capital investment by only $0.11.”
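The contrast among Pereira's estimated multipliers is easier to see side by side. Here is a minimal arithmetic sketch using only the numbers quoted above; the $100 million program size is a hypothetical chosen purely for scale.

```python
# Illustrative arithmetic using the Pereira (2001) multipliers quoted above:
# induced long-term private investment per $1 of public investment.
multipliers = {
    "aggregate public investment": 0.64,
    "electric/gas facilities, transit, airfields": 2.38,
    "highways and streets": 0.11,
}

public_spend = 100  # hypothetical program size, in $ millions
for category, m in multipliers.items():
    induced = public_spend * m
    print(f"{category}: ${induced:,.0f}M induced private investment")
```

On these estimates, the same public dollar does more than twenty times as much to crowd in private investment when spent on electric, gas, transit, and airfield facilities as when spent on highways and streets.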

A policy agenda for updating the communications and energy infrastructure is more difficult, because it involves thinking about infrastructure that is owned by private companies and because it opens up broader questions, like choices about energy policy and the security of communications. I’m fine with also updating and maintaining roads and bridges. But those kinds of road-based infrastructure investments aren’t likely to be the main drivers of 21st-century American economic growth.