Will We Look Back on the Euro as a Mistake?

For the last few months, the euro crisis has not dominated headlines. But the economic situation surrounding the euro remains grim and unresolved. Finance and Development, published by the IMF, offers four angles on Europe's road ahead in its March 2014 issue. For example, Reza Moghadam discusses how Europe has moved toward greater integration over time, Nicolas Véron looks at plans and prospects for a European banking union, and Helge Berger and Martin Schindler consider the policy agenda for reducing unemployment and spurring growth. But I was especially drawn to "Whither the Euro?" by Kevin Hjortshøj O'Rourke, because he finds himself driven to contemplating whether the euro will survive. He concludes:

The demise of the euro would be a major crisis, no doubt about it. We shouldn’t wish for it. But if a crisis is inevitable then it is best to get on with it, while centrists and Europhiles are still in charge. Whichever way we jump, we have to do so democratically, and there is no sense in waiting forever. If the euro is eventually abandoned, my prediction is that historians 50 years from now will wonder how it ever came to be introduced in the first place.

To understand where O'Rourke is coming from, start with some basic statistics on unemployment and growth in the euro zone. Here's the path of unemployment in Europe through the end of 2013, with the average for all 28 countries of the European Union shown by the black line, and the average for the 17 countries using the euro shown by the blue line.

In the U.S. economy, we agonize (and rightfully so!) over how slowly the unemployment rate has fallen from its peak of 10% in October 2009 to 6.6% in January 2014. In the euro zone, unemployment across countries averaged 7.5% before the Great Recession, and has risen since then to more than 11.5%. And remember, this average includes countries with low unemployment rates: for example, Germany's unemployment rate has plummeted to 5.1%. But Greece has unemployment of 27.8%; Spain, 25.8%; and Croatia, Cyprus, and Portugal all have unemployment rates above 15%.

Here's the quarterly growth rate of GDP for the 17 euro countries, for all 28 countries in the European Union, and for the U.S. economy for comparison. Notice that the European Union and the euro zone actually had two recessions: the Great Recession, which was deeper than the U.S. recession, and a follow-up period of negative growth from early 2011 to early 2013. As O'Rourke writes: "In December 2013 euro area GDP was still 3 percent lower than in the first quarter of 2008, in stark contrast with the United States, where GDP was 6 percent higher. GDP was 8 percent below its precrisis level in Ireland, 9 percent below in Italy, and 12 percent below in Greece."
 

For American readers, try to imagine what the U.S. political climate would be like if unemployment had been rising almost continually for the last five years, and if the rate was well into double-digits for the country as a whole. Or contemplate what the U.S. political climate would look like if instead of sluggish recovery, U.S. economic growth had actually been in reverse for most of 2011 and 2012.

O'Rourke points out that this dire outcome was a predictable and predicted result of standard economic theory before the euro was put in place. And he points out that there is no particular reason to think the EU is on the brink of addressing the underlying issues.

The relevant economic theory here points out that if two areas experience different patterns of productivity or growth, some adjustment will be necessary between them. One possibility, for example, is that the exchange rate adjusts between the two countries. But if the countries have agreed to use a common currency, so that an exchange rate adjustment is impossible, then the adjustment must happen through other channels. For example, some workers might move from the lower-wage to the higher-wage area. Instead of a shift in exchange rates cutting wages and prices in global markets, wages and prices themselves could fall in an "internal devaluation." Or a central government might redistribute some income from the higher-income to the lower-income area.

But in the euro zone, these adjustments are either not yet practical or impossible. With the euro as a common currency, exchange rate changes are out. Movement of workers across national borders is not that large, which is why unemployment can be 5% in Germany and more than 25% in Spain and Greece. Wages are often "sticky downward," as economists say, meaning that it is unusual for wages to decline substantially in nominal terms. The EU central government has a relatively small budget and no mandate to redistribute from higher-income to lower-income areas. Without any adjustment, the outcome is that certain countries have depressed economies with high unemployment and slow or negative growth, and no near-term way out.

Sure, one can propose various steps that in time might work. But for all such proposals, O'Rourke lays two unpleasantly real facts on the table.

First, crisis management since 2010 has been shockingly poor, which raises the question of whether it is sensible for any country, especially a small one, to place itself at the mercy of decision makers in Brussels, Frankfurt, or Berlin. … Second, it is becoming increasingly clear that a meaningful banking union, let alone a fiscal union or a safe euro area asset, is not coming anytime soon.

Given the unemployment and growth situations in the depressed areas of Europe, it's no surprise that pressure for more extreme political choices is building. Sitting still while certain nations experience depression-level unemployment for years and other nations boom, waiting for the political pressure for extreme change to become irresistible, is not a sensible policy for Europe. O'Rourke summarizes this way:

For years economists have argued that Europe must make up its mind: move in a more federal direction, as seems required by the logic of a single currency, or move backward? It is now 2014: at what stage do we conclude that Europe has indeed made up its mind, and that a deeper union is off the table? The longer this crisis continues, the greater the anti-European political backlash will be, and understandably so: waiting will not help the federalists. We should give the new German government a few months to surprise us all, and when it doesn’t, draw the logical conclusion. With forward movement excluded, retreat from the EMU may become both inevitable and desirable.

Death of a Statistic

OK, I know that only a very small group of people actually care about government statistics. I know I'm a weirdo. I accept it. But the plural of anecdote is not data, as the saying goes. If you care about deciphering real-world economic patterns, you need statistical evidence. Thus, it's unpleasant news to see the press release from the US Bureau of Labor Statistics reporting that, because its budget has been cut by $21 million, to $592 million, it will cut back on the International Price Program and on the Quarterly Census of Employment and Wages.

I know, serious MEGO, right? (MEGO–My Eyes Glaze Over.)

But as Susan Houseman and Carol Corrado explain, the change means the end of the export price program, which calculates price levels for U.S. exports and thus allows economists "to understand trends in real trade balances, the competitiveness of U.S. industries, and the impact of exchange rate movements. It is highly unusual for a statistical agency to cut a so-called principal federal economic indicator." As the BLS notes: "The Quarterly Census of Employment and Wages (QCEW) program publishes a quarterly count of employment and wages reported by employers covering 98 percent of U.S. jobs, available at the county, MSA [Metropolitan Statistical Area], state and national levels by industry." That survey is being reduced in scope and frequency, not eliminated. If you don't think that a deep and detailed understanding of employment and wages is all that important, maybe cutting back funding for this survey seems like a good idea.

These changes seem part of a series of sneaky little unpleasant cuts. Last year, the Bureau of Labor Statistics saved a whopping $2 million by cutting the International Labor Comparisons program, which produced a wide array of labor market and economic data within a common conceptual framework, so that one could meaningfully compare, say, "unemployment" across different countries. And of course, some of us are still mourning the decision of the U.S. Census Bureau in 2012 to save $3 million per year by ending the U.S. Statistical Abstract, which since 1878 had provided a useful summary and reference work for locating a wide array of government statistics.

The amounts of money saved by these kinds of cuts are tiny by federal government standards, and the costs of not having high-quality statistics can be severe. But don't listen to me. Each year, the White House releases an Analytical Perspectives volume with its proposed federal budget, and in recent years that volume has usually contained a chapter on "Strengthening Federal Statistics." As last year's report says:

"The share of budget resources spent on supporting Federal statistics is relatively modest—about 0.04 percent of GDP in non-decennial census years and roughly double that in decennial census years—but that funding is leveraged to inform crucial decisions in a wide variety of spheres. The ability of governments, businesses, and the general public to make appropriate decisions about budgets, employment, investments, taxes, and a host of other important matters depends critically on the ready and equitable availability of objective, relevant, accurate, and timely Federal statistics."
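
For scale, here is the arithmetic that quote implies, as a minimal sketch. The GDP figure is my assumption (roughly the 2013 U.S. level), not a number from the report:

```python
# Back-of-the-envelope: what 0.04 percent of GDP means in dollars.
# The ~$16.8 trillion GDP figure is an assumption, not from the report.
gdp = 16.8e12                      # approximate U.S. GDP, in dollars
stats_share = 0.0004               # 0.04 percent, non-census years
stats_budget = gdp * stats_share   # implied federal statistics spending

bls_cut = 21e6                     # the BLS budget cut discussed above
print(f"Implied statistics spending: ${stats_budget / 1e9:.1f} billion")
print(f"The $21 million cut is {bls_cut / stats_budget:.1%} of that")
```

On those assumptions, all federal statistics cost about $6.7 billion a year, and the BLS cut amounts to roughly 0.3% of it.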

I wish I had some way to dramatize the foolishness and loss of these decisions to trim back on government statistics. After all, doesn't the death of a single statistic diminish us all? Ask not for whom the statistics toll; they toll for thee. It's not working, is it?

It won't do to blame these kinds of cutbacks in the statistics program on the big budget battles, because in the context of the $3.8 trillion federal budget this year, a few tens of millions are pocket change. These cuts could easily be reversed by trimming back on the outside conference budgets of larger agencies. But all statistics do is offer facts that might get in the way of what you already know is true. Who needs the aggravation?

Highways of the Future

Highways, roads, and bridges are still mostly an early to mid-20th century technology. Clifford Winston and Fred Mannering point to some of the directions for highways of the future in "Implementing technology to improve public highway performance: A leapfrog technology from the private sector is going to be necessary," published in Economics of Transportation. They set the stage like this (citations and notes omitted throughout):

"The nation's road system is vital to the U.S. economy. Valued at close to $3 trillion, according to the Bureau of Economic Analysis of the U.S. Department of Commerce, 75 percent of goods, based on value, are transported on roads by truck, 93 percent of workers' commutes are on roads by private automobiles and public buses, and by far the largest share of non-work and pleasure trips are taken by road. Indeed, roads can be accurately characterized as the arterial network of the United States. Unfortunately, the arteries are clogged: the benefits that commuters, families, truckers, and shippers receive from the nation's road system have been increasingly compromised by growing congestion, vehicle damage, and accident costs."

These costs are high. Estimates of the value of time and fuel lost on congested roads run to $100 billion per year. Poor road conditions cost American drivers $80 billion in vehicle operating and repair costs. And some 30,000 Americans die in traffic accidents each year.

Many of the policy recommendations are familiar enough. For example, the traditional economist's answer to road congestion is to charge tolls for driving during congested times. "[P]oor signal timing and coordination, often caused by outdated signal control technology or reliance on obsolete data on relative traffic volumes, contributes to some 300 million vehicle hours of annual delay on major roadways." Earlier work by Winston emphasized that roads and bridges are primarily damaged by heavier trucks, not cars: "Almost all pavement damage tends to be caused by trucks and buses because, for example, the rear axle of a typical 13-ton trailer causes over 1000 times as much pavement damage as that of a car." Thus, charging heavy vehicles for the damage they cause is a natural prescription. For greater safety, enforcement of laws against drunk driving and driving-while-texting can be a useful step.
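
The quoted 1000-to-1 figure reflects a standard engineering rule of thumb: pavement damage rises roughly with the fourth power of axle load. Here is a minimal sketch of that rule; the specific axle weights are my illustrative assumptions, not numbers from the paper:

```python
# Fourth-power rule of thumb: pavement damage scales roughly as
# (axle load)^4. The axle weights below are illustrative assumptions.
def relative_damage(axle_load_tons: float, reference_load_tons: float = 1.0) -> float:
    """Damage relative to a reference axle under the fourth-power rule."""
    return (axle_load_tons / reference_load_tons) ** 4

car_axle = 1.0      # a passenger car axle carries very roughly a ton
trailer_axle = 6.5  # assume a 13-ton trailer's load split over two axles

print(f"{relative_damage(trailer_axle, car_axle):,.0f}x the damage of a car axle")
```

With those assumed loads, the trailer axle does about 1,800 times the damage of the car axle, consistent with the "over 1000 times" in the quotation.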

But as Winston and Mannering note, new technologies are expanding possibilities for the highway of the future. Certain technologies, like automated collection of tolls from cars that don't need to stop, are already widespread. The combination of GPS technology and information about road conditions is already helping many drivers find alternative routes through congestion. But more is coming. As they write:

"Specific highway and vehicle technologies include weigh-in-motion capabilities, which provide real-time information to highway officials about truck weights and axle configurations that they can use to set efficient pavement-wear charges and to enforce safety standards efficiently; adjustable lane technologies, which allow variations in the number and width of lanes in response to real-time traffic flows; new vehicle attributes, such as automatic vehicle braking that could decrease vehicle headways and thus increase roadway capacities; improved construction and design technologies to increase pavement life and to strengthen roads and bridges; and photo-enforcement technologies that monitor vehicles' speeds and make more efficient use of road capacity by improving traffic flows and safety. … The rapid evolution of material science (including nanotechnologies) has produced advances in construction materials, construction processes, and quality control that have significantly improved pavement design, resulting in greater durability, longer lifetimes, lower maintenance costs, and less vehicle damage caused by potholes."

Of course, ultimately, the driverless car may dramatically change how cars and roads are used. (Indeed, driverless trucks are already in use in places like an iron ore mine in Australia, comfortably far from public roads–at least so far.)

But the roads and bridges are not a competitive company, trying out new technologies in the hope of attracting new customers and raising profits. They are run by government bureaucracies that are set in their old ways. The federal fuel tax isn't raising enough money for new investments in road technology, partly because it is fixed in nominal terms and inflation keeps eating away at its real value, and partly because higher fuel economy means that a fuel tax collects less money. Lobbies for truckers oppose charges that would reflect road damage; lobbies for motorists oppose charges that would reflect congestion. Stir up all these ingredients, and the result is not a big push for applying new technology to America's roads and bridges.

Winston and Mannering offer an ultimately optimistic view in which private investments in the driverless car trigger a wide array of other technological investments in roads and bridges. Maybe they will be proven right. I believe the social gains from applying all kinds of technology to roads and bridges could be very large. But I also envision a complex array of interrelated and potentially costly technologies, confronting a thorny tangle of political and regulatory obstacles at every turn and straightaway.

Financial Services From the US Postal Service?

About 8% of Americans are "unbanked," meaning they have no bank account, while another 21% are "underbanked," meaning they have a bank account but also use alternative financial services like payday loans, pawnshops, non-bank check cashing, and money orders. The Office of Inspector General of the U.S. Postal Service has published a report asking if the Post Office might be a useful mechanism in "Providing Non-Bank Financial Services for the Underserved." The report points out that the average unbanked or underbanked household spends $2,412 each year just on interest and fees for alternative financial services, or about $89 billion annually across all such households.
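
As a consistency check, the report's two numbers imply the size of the underserved population. The household count used in the cross-check is my assumption, not a figure from the report:

```python
# Consistency check on the Inspector General's figures.
avg_fees = 2412      # average annual interest and fees per underserved household
total_fees = 89e9    # reported total spent annually on alternative services
print(f"Implied underserved households: {total_fees / avg_fees / 1e6:.0f} million")

# Cross-check: 8% unbanked plus 21% underbanked, applied to an assumed
# ~120 million U.S. households (my assumption, not the report's).
print(f"29% of 120 million households: {0.29 * 120e6 / 1e6:.0f} million")
```

Both routes land in the mid-30-millions of households, so the per-household and aggregate figures hang together.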

I admit that, at first glance, this proposal gives me a sinking feeling. The postal service is facing severe financial difficulties in large part because of the collapse in first-class mail, and it has been scrambling to consider alternatives. After watching the tremors and collapses in the U.S. financial system in recent years, providing financial services seems like a shaky railing to grasp.

But as the Inspector General report points out, a connection from the post office to financial services isn't brand-new. For example, "The Postal Service has played a longstanding role in providing domestic and international money orders. The Postal Service is actually the leader in the U.S. domestic paper money order market, with an approximately 70 percent market share. This is a lucrative business line and demonstrates that the Postal Service already has a direct connection to the underserved, who purchased 109 million money orders in fiscal year (FY) 2012. … While its domestic and international money orders are currently paper-based, the Postal Service does offer electronic money transfers to nine Latin American countries through the Dinero Seguro® (Spanish for 'sure money') service." For several years now, the Post Office has been selling debit cards, both for American Express and for specific retailers like Amazon, Barnes & Noble, Subway, and Macy's.

In many countries, the postal service takes deposits and provides financial services. The Universal Postal Union published a report in March 2013 by Alexandre Berthaud and Gisela Davico, "Global Panorama on Postal Financial Inclusion: Key Issues and Business Models," which among more detailed findings notes that 1 billion people in 50 countries around the world do at least some of their banking through postal banking systems. The Universal Postal Union also oversees the International Financial System, software that allows a variety of fund transfers across postal operators in more than 60 countries. The U.S. Postal Service is not currently a member. Indeed, the US Inspector General report notes that among high-income countries, postal services earn on average about 14% of their income from financial services.

The U.S. Postal Service even used to take deposits: "[F]rom 1911 to 1967, the Postal Savings System gave people the opportunity to make savings deposits at designated Post Offices nationwide. The system hit its peak in 1947 with nearly $3.4 billion in savings deposits from more than 4 million customers using more than 8,100 postal units. The system was discontinued in 1967 after a long decline in usage." Essentially, the post office collected deposits and then lent them on to local banks, taking a small cut of the interest.

There's of course a vision that in the future, everyone will do their banking over the web, often through their cellphones. But especially for people with a weak or nonexistent link to the banking system, the web-based financial future is still some years away. Until then, they will be turning to cash and physical checks, exchanged at physical locations. However, as the Inspector General report notes, "Banks are closing branches across the country (nearly 2,300 in 2012). … The closings are heavily hitting low-income communities, including rural and inner-city areas — the places where many of the underserved live. In fact, an astounding 93 percent of the bank branch closings since late 2008 have been in ZIP Codes with below-national median household income levels." Conversely, there are 35,000 Post Offices, stations, branches, and contract units, and "59 percent of Post Offices are in ZIP Codes with one or no bank branches."

There are at least two main challenges in the vision of having the U.S. Postal Service provide nonbank financial services. First, the USPS should do everything on a fee basis. It should not in any way, shape, or form be directly making investments or loans--just handling payments. However, it could have partners to provide other kinds of financial services, which leads to the second challenge. There is an anticompetitive temptation when an organization like the Post Office creates partnerships to provide outside services. Potential partners will be willing to pay the Post Office more if they have an exclusive right to sell certain services. Of course, the exclusive right also gives them an ability to charge higher fees, which is why they can pay the Post Office more and still earn higher profits. But the intended beneficiaries of the financial services end up paying those higher fees. Thus, if the US Postal Service is going to make space for ATMs, or sell and reload debit cards, or cash checks, it should always seek to offer a choice among three or more providers of such services, not just a single financial services partner. To put it another way, if the Postal Service is linked to a single provider of financial services, then the reputation of the Postal Service is hostage to how that provider performs. It's much better if the Postal Service acts as an honest broker, collecting its fees for facilitating payments and transactions in a setting where people can always switch among multiple providers.

Finally, there is at least one additional benefit worth noting. Many communities lack safe spaces: safe for play, safe for walking down the street, safe for carrying out a financial transaction with minimal fear of fraud or assault. Having post offices provide financial services could be one part of an overall social effort for adding to the number of safe spaces in these communities.

From BRICs to MINTs?

Back in 2001, Jim O'Neill--then chief economist at Goldman Sachs--invented the terminology of BRICs. As we all know more than a decade later, this shorthand is a quick way of discussing the argument that the course of the world economy will be shaped by the performance of Brazil, Russia, India, and China. Well, O'Neill is back with a new acronym, the MINTs, which stands for Mexico, Indonesia, Nigeria, and Turkey. In an interview with the New Statesman, O'Neill offers some thoughts about the new acronym. If you would like more detail on his views of these countries, O'Neill has also recorded a set of four radio shows for the BBC on Mexico, Indonesia, Nigeria, and Turkey.

In the interview, O'Neill is disarmingly quick to acknowledge the arbitrariness of these kinds of groupings. About the BRICs, for example, he says: "If I dreamt it up again today, I'd probably just call it 'C' … China's one and a half times bigger than the rest of them put together." Or about the MINTs, apparently his original plan was to include South Korea, but the BBC persuaded him to include Nigeria instead. O'Neill says: "It's slightly embarrassing but also amusing that I kind of get acronyms decided for me." But even arbitrary divisions can still be useful and revealing. In that spirit, here are some basic statistics on GDP and per capita GDP for the BRICs and the MINTs in 2012.

What patterns jump out here?

1) The representative growth economy for Latin America is now Mexico, rather than Brazil. This change makes some sense. Brazil has had four years of sub-par growth, its economy is in recession, and international capital is fleeing. Meanwhile, Mexico is forming an economic alliance with the three other nations with the fastest growth, lowest inflation, and best climates for business in Latin America: Chile, Colombia, and Peru.

2) All of the MINTs have smaller economies than all of the BRICs. If O'Neill would today just refer to C, for China, rather than the BRICs as a group, it's still likely to be true that C for China is the key factor shaping the growth of emerging markets in the future.

3) O'Neill argues that although the MINTs differ in many ways, their populations are both large and relatively young, which should help to boost growth. He says: "That's key. If you've got good demographics that makes things easy." Easy may be overstating it! But there is a well-established theory of the "demographic dividend," in which countries with a larger proportion of young workers are well-positioned for economic growth, as opposed to countries with a growing proportion of older workers and retirees.

4) One way to think about the MINTs is that they stand as representatives for certain regions. Thus, Mexico, although half the size of Brazil's economy, represents the future for Latin America. Indonesia, although smaller than India's economy and much smaller than China's, represents the growth potential for Factory Asia--that group of countries building international supply chains across the region. Turkey represents the potential for growth in Factory Europe--the economic connections happening around the periphery of Europe. Nigeria's economy looks especially small on this list, but estimates for Nigeria are likely to be revised sharply upward in the near future, because the Nigerian government statistical agency is "re-basing" its GDP calculations so that they represent the structure of Nigeria's economy in 2014, rather than the previous "base" year of 1990. Even with this rebasing, Nigeria will remain the smallest economy on this list, but it is expected to become the largest economy in sub-Saharan Africa (surpassing South Africa). Thus, Nigeria represents the possibility that, at long last, economic growth may be finding a foothold in Africa.

I'm not especially confident that MINTs will catch on, at least not in the same way that BRICs did. But of the BRICs, Brazil, Russia, and to some extent India have not performed to expectations in the last few years. It's time for me to broaden the number of salient examples of emerging markets that I tote around in my head. In that spirit, the MINTs deserve attention.

Intuition Behind the Birthday Bets

The "birthday bets" are a standard example in statistics classes. How many people must be in a room before it is more likely than not that two of them were born during the same month? Or, in a more complex form, how many people must be in a room to make it more likely than not that two of them share the same birthday?

The misguided intro-student logic usually goes something like this. There are 12 months in a year. So to have more than a 50% chance of two people sharing a birth month, I need 7 people in the room (that is, 50% of 12 plus one more). Or there are 365 days in a year. So to have more than a 50% chance of two people sharing a specific birthdate, we need 183 people in the room. In a short article in Scientific American, David Hand explains the math behind the 365-day birthday bets.

Hand argues that the common fallacy in thinking about these bets is that people think about how many people it would take to share the same birth month or birthday with them. Thus, I think about how many people would need to be in the room to share my birth month, or my birth date. But that's not the actual question being asked. The question is about whether any two people in the room share the same birth month or the same birth date.

The math for the birth month problem looks like this. The first person is born in a certain month. For the second person added to the room, the chances are 11/12 that the two people do not share a birth month. For the third person added to the room, the chances are 11/12 x 10/12 that all three of the people do not share a birth month. For the fourth person added to a room, the chances are 11/12 x 10/12 x 9/12 that all four of the people do not share a birth month. And for the fifth person added to the room, the chances are 11/12 x 10/12 x 9/12 x 8/12 that none of the five share a birth month. This multiplies to about 38%, which means that in a room with five people, there is a 62% chance that two of them will share a birth month.

Applying the same logic to the birthday problem, it turns out that when you have a room with 23 people, the probability is greater than 50% that two of them will share a birthday.
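
A few lines of code make the sequential logic concrete. This sketch multiplies out the "no match so far" probabilities and reports the first group size at which a shared birth month or birthday becomes more likely than not (assuming 12 equally likely months and 365 equally likely days, ignoring leap years):

```python
# Probability that at least two people in a group of n share a "slot":
# a birth month (12 slots) or a birthday (365 slots), all equally likely.
def p_shared(n: int, slots: int) -> float:
    p_no_match = 1.0
    for k in range(n):
        p_no_match *= (slots - k) / slots
    return 1.0 - p_no_match

for slots, label in [(12, "birth month"), (365, "birthday")]:
    n = 2
    while p_shared(n, slots) <= 0.5:
        n += 1
    print(f"{label}: {n} people -> {p_shared(n, slots):.0%} chance of a match")
```

This prints 5 people (about a 62% chance) for the birth month version and 23 people (about a 51% chance) for the birthday version, matching the figures above.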

I've come up with a mental image, or metaphor, that seems to help in explaining the intuition behind this result. Think of the birth months, or the birthdays, as squares on a wall. Now blindfold a person with very bad aim, and have them randomly throw a ball dipped in paint at the wall, so that it marks where it hits. The question becomes: If a wall has 12 squares, how many random throws will be needed before there is a greater than 50% chance of hitting the same square twice?

The point here is that after you have hit the wall once, there is one chance in 12 of hitting the same square with a second throw. If that second throw hits a previously untouched square, then the third throw has one chance in six (that is, 2/12) of hitting a marked square. If the third throw hits a previously untouched square, then the fourth throw has one chance in four (that is, 3/12) of hitting a marked square. And if the fourth throw hits a previously untouched square, then the fifth throw has one chance in three (4/12) of hitting a previously touched square.

The metaphor helps in understanding the problem as a sequence of events. It also clarifies that the question is not how many throws it takes to match where the first throw landed (or the birthday of the first person entering the room), but whether any two throws match. And it helps in understanding that in a reasonably long sequence of events, even if none of the events individually has a greater than 50% chance of happening, it can still be likely that at some point during the sequence the event will actually happen.

For example, when randomly throwing paint-dipped balls at a wall with 365 squares, think about a situation where you have thrown 18 balls without a match, so that approximately 5% of the wall is now covered. The next throw has about a 5% chance of matching a previous hit, as does the next throw, as does the next throw, as does the next throw. Taken together, all those roughly 5% chances one after another mean that you have a greater than 50% chance of matching a previous hit fairly soon–certainly well before you get up to 183 throws!
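
The metaphor is also easy to simulate directly. This minimal sketch throws paint-dipped balls at a 365-square wall many times over and records how many throws it typically takes to hit an already-marked square:

```python
import random

# Simulate the paint-ball metaphor: throw balls at a wall of equally
# likely squares and count the throws until some square is hit twice.
def throws_until_repeat(squares: int) -> int:
    hit = set()
    while True:
        square = random.randrange(squares)
        if square in hit:
            return len(hit) + 1  # this throw produced the first repeat
        hit.add(square)

trials = sorted(throws_until_repeat(365) for _ in range(100_000))
print("Median throws to first repeat:", trials[len(trials) // 2])
```

The median comes out around 23 throws, nowhere near 183.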

The Health of US Manufacturing

The future of the U.S. manufacturing sector is a matter of legitimate concern. After all, manufacturing accounts for a disproportionate share of research and development, innovation, and the leading-edge industries of the future. A healthy manufacturing sector not only supports well-paying jobs directly, but also supports a surrounding nimbus of service-sector jobs in finance, design, marketing, sales, and other areas. On the world stage, manufacturing is still most of what is traded in the world economy. If the U.S. wants to shrink its trade deficits, a healthier manufacturing sector is part of the answer. But perceptions of U.S. manufacturing, along with the reasons for concern, vary across authors.

Oya Celasun, Gabriel Di Bella, Tim Mahedy, and Chris Papageorgiou focus on the perhaps surprising strength of U.S. manufacturing in the immediate aftermath of the Great Recession in "The U.S. Manufacturing Recovery: Uptick or Renaissance?" published as an IMF working paper in February 2014. They note that this is the first post-recession period since the 1970s in which manufacturing value-added rebounded beginning a couple of years after the end of the recession.

In addition, they note that while U.S. manufacturing as a share of world manufacturing was falling from 2000 to 2007, in the last five years the U.S. share of world manufacturing seems to have stabilized at about 20%. Interestingly, China's share of world manufacturing, which had been on the rise before the recession, also seems to have stabilized since then at about 20%.

How does one make sense of these patterns? The IMF economists emphasize three factors: a lower real exchange rate of the U.S. dollar, which boosts exports; restraint in the growth of labor costs for U.S. manufacturing firms; and cheaper energy costs and expanding oil and gas drilling activity, which matter considerably for many manufacturing operations. They write: "The contribution of manufacturing exports to growth could exceed those of the recent past, fueled by rising global trade. U.S. manufacturing exports have proven resilient during the crisis. Further increases will require that the U.S. diversify further its export base towards the more dynamic world regions."

The Winter 2014 issue of the Journal of Economic Perspectives has several articles about U.S. manufacturing, with the lead-off article by Martin Neil Baily and Barry P. Bosworth, "US Manufacturing: Understanding Its Past and Its Potential Future." (Full disclosure: I've been Managing Editor of the JEP since 1986.) They point out that when measured in terms of value-added, manufacturing has been a more-or-less constant share of the U.S. economy for decades. The share of U.S. employment in manufacturing has been dropping steadily over time, but as they write:

"The decline in manufacturing employment as a share of the economy-wide total is a long-standing feature of the US data and also a trend shared by all high-income economies. Indeed, data from the OECD indicate that the decline in the share of US employment accounted for by the manufacturing sector over the past 40 years—at about 14 percentage points—is equivalent to the average of the G-7 economies (that is, Canada, France, Germany, Italy, Japan, and the United Kingdom, along with the United States)."

Of course, there are reasons for concern as well. For example, manufacturing output has held its ground in large part because of rapid growth in computing and information technology, while many other manufacturing industries have had a much harder time. But Baily and Bosworth argue that the real test for U.S. manufacturing is how well it competes in the emerging manufacturing industries of the future, including robotics, 3D printing, materials science, biotechnology, and the "Internet of Things," in which machinery and buildings are hooked into the web. It also depends on how U.S. manufacturing interacts with recent developments in the U.S. energy industry, with its prospect of lower-cost domestic natural gas. In terms of public policy, they argue that the policies most important for U.S. manufacturing are not specific to manufacturing, but instead involve more basic steps like opening global markets, reducing budget deficits over time, improving education and training for U.S. workers, investing in R&D and infrastructure, adjusting the U.S. corporate tax code, and the like.

In a companion article in the Winter 2014 JEP, Gregory Tassey offers a different perspective in "Competing in Advanced Manufacturing: The Need for Improved Growth Models and Policies." Tassey's focus is less on the manufacturing sector as a whole and more on cutting-edge advanced manufacturing. He notes: "One result has been a steady deterioration in the US Census Bureau's 'advanced technology products' trade balance (see http://www.census.gov/foreign-trade/balance/c0007.html) over the past decade, which turned negative in 2002 and continued to deteriorate to a record deficit of $100 billion in 2011, improving only slightly to a deficit of $91 billion in 2012."

In discussing advanced manufacturing, Tassey explains "how it differs from the conventional simplified characterization of such investment as a two-step process in which the government supports basic research and then private firms build on that scientific base with applied research and development to produce 'proprietary technologies' that lead directly to commercial products. Instead, the process of bringing new advanced manufacturing products to market usually consists of two additional distinct elements. One is 'proof-of-concept research' to establish broad 'technology platforms' that can then be used as a basis for developing actual products. The second is a technical infrastructure of 'infratechnologies' that include the analytical tools and standards needed for measuring and classifying the components of the new technology; metrics and methods for determining the adequacy of the multiple performance attributes of the technology; and the interfaces among hardware and software components that must work together for a complex product to perform as specified."

Tassey argues that "proof-of-concept research" and "infratechnologies" are not going to be pursued by private firms acting alone, because the risks are too high, and will not be pursued effectively by the public sector acting alone, because the public sector is not well-suited to focusing on desired market products. Instead, these intermediate steps between basic research and proprietary applied development need to be developed through well-structured public-private partnerships. Without such partnerships, he argues, many advanced manufacturing technologies that show great promise in basic research will enter a "valley of death" and never be transformed into viable commercial products.

Of course, the various perspectives described here are not mutually exclusive. U.S. manufacturing could be benefiting from a short-term bounceback in cars and durable goods in the aftermath of the Great Recession, as well as from a weaker U.S. exchange rate and lower energy prices. It could probably use both broad-based economic policies and support for public-private partnerships. But the bottom-line lesson is that in a rapidly globalizing economy, a tautology has sharpened its teeth: U.S.-based manufacturing will only succeed to the extent that it makes economic sense to do the manufacturing in the United States.

The Modest Effect of a Higher Minimum Wage

The mainstream arguments about "The Effects of a Minimum-Wage Increase on Employment and Family Income" are compactly laid out in a report released earlier this week by the Congressional Budget Office. On one side, the report estimates that about 16.5 million workers would see a rise in their average weekly income if the minimum wage were raised to $10.10/hour by the second half of 2016. On the other side, the higher minimum wage would reduce employment (in their central estimate) by 500,000 jobs, which from one perspective is only 0.3% of the labor force, but from another perspective is a loss of jobs concentrated almost entirely at the bottom, low-skilled end of the wage distribution. I've laid out some of my thoughts about weighing and balancing these and related tradeoffs here and here.

In this post, I want to focus on a different issue: the modest effect of raising the minimum wage on helping the working poor near and below the poverty line. The fundamental difficulty is that many of the working poor suffer from a lack of full-time work, rather than working for a sustained time at a full-time minimum wage job. As a result, many of the working poor aren't much affected by raising the minimum wage. Here are the CBO estimates:

"Families whose income will be below the poverty threshold in 2016 under current law will have an average income of $10,700, CBO projects … The agency estimates that the $10.10 option would raise their average real income by about $300, or 2.8 percent. For families whose income would otherwise have been between the poverty threshold and 1.5 times that amount, average real income would increase by about $300, or 1.1 percent. The increase in average income would be smaller, both in dollar amounts and as a share of family income, for families whose income would have been between 1.5 times and six times the poverty threshold."

Of course, these are averages, and families who are now working many hours at the minimum wage would see larger increases, if they keep their jobs. But the higher minimum wage actually sends an amount of money to these workers that is relatively small in the context of other government programs to assist the working poor. CBO estimates that families below the poverty line, as a group, would receive an additional $5 billion in income from raising the minimum wage to $10.10/hour, while families with incomes between the poverty line and three times the poverty line would receive a total of $12 billion.

To put those numbers in context, consider a quick and dirty list of some other government programs to assist those near or below the poverty line.

Of course, this list doesn't include unemployment insurance, disability insurance, Social Security, Medicare, and other programs that may sometimes assist households with low incomes, along with their extended families.

A few thoughts:

1) Of course, the fact that raising the minimum wage has a relatively small effect in the context of these other programs doesn't make it a bad idea. But it does suggest some caution for both advocates and opponents about over-hyping the importance of the issue to the working poor.

2) In particular, it's fairly common to hear people talk about the rise in U.S. inequality and the need to raise the minimum wage in the same breath--as if one were closely related to the other. If only such a view were true! If only it were possible to substantially offset the rise in inequality over the last several decades by bumping up the minimum wage by a couple of bucks an hour! But the rise in inequality of incomes at the tip-top of the income distribution is far, far larger (probably measured in hundreds of billions of dollars) than the $17 billion the higher minimum wage would distribute to the working poor and near-poor below three times the poverty line. To put it another way, the problems of low-wage workers in a technology-intensive and globalizing United States are far more severe than a couple of dollars on the minimum wage can address.

3) A number of the current programs to help those with low incomes either didn't exist or existed in a much smaller form a decade or two or three ago, including the Earned Income Tax Credit, the Child Tax Credit, the expansion of Food Stamps, and the rise in Medicaid spending. It seems peculiar to offer simple-minded comparisons of the hourly minimum wage now to, say, its inflation-adjusted levels of the late 1960s or early 1970s without taking into account that the entire policy structure for assisting those with low incomes has been dramatically overhauled since then, largely for the better, in ways that provide much more help to the working poor and their families than would a higher minimum wage.

4) For me, it's impossible to look at this list of government programs that provide assistance to those with low incomes and not notice that the cost of the U.S. health care system, in this case as embodied in Medicaid, is crowding out other spending. To put it another way, if lifting the minimum wage to $10.10/hour raises incomes for those at less than three times the official poverty line by $17 billion per year, that is about what Medicaid spends every two weeks.
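
The arithmetic behind that two-week comparison, as a rough sketch; the annual Medicaid figure is my assumption, roughly matching combined spending levels of that era:

```python
# Sketch of the "every two weeks" comparison. The Medicaid total is an
# assumption on my part; the $17 billion is the CBO's $5B plus $12B.
min_wage_transfer = 5e9 + 12e9     # income gains below 3x the poverty line
medicaid_annual = 450e9            # assumed annual Medicaid spending
medicaid_two_weeks = medicaid_annual * 14 / 365
print(f"Two weeks of Medicaid: ${medicaid_two_weeks / 1e9:.0f} billion")
```

At an assumed $450 billion a year, two weeks of Medicaid comes to about $17 billion, the same order of magnitude as the minimum wage transfer.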

Behavioral Investors and the Dumb Money Effect

Individual stock market investors often underperform the market averages because of terrible timing: in particular, they often buy after the market has already risen and sell after the market has already fallen, which means that they end up buying high and selling low. Michael J. Mauboussin investigates this pattern, and what investors might do about it, in "A behavioural take on investment returns," one of the essays appearing at the start of the Credit Suisse Global Investment Returns Yearbook 2014. He explains (citations omitted):

Perhaps the most dismal numbers in investing relate to the difference between three investment returns: those of the market, those of active investment managers, and those of investors. For example, the annual total shareholder returns were 9.3% for the S&P 500 Index over the past 20 years ended 31 December 2013. The annual return for the average actively managed mutual fund was 1.0–1.5 percentage points less, reflecting expense ratios and transaction costs. This makes sense because the returns for passive and active funds are the same before costs, on average, but are lower for active funds after costs. … But the average return that investors earned was another 1–2 percentage points less than that of the average actively managed fund. This means that the investor return was roughly 60%–80% that of the market. At first glance, it does not make sense that investors who own actively managed funds could earn returns lower than the funds themselves. The root of the problem is bad timing. … [I]nvestors tend to extrapolate recent results. This pattern of investor behavior is so consistent that academics have a name for it: the “dumb money effect.” When markets are down investors are fearful and withdraw their cash. When markets are up they are greedy and add more cash.

Here's a figure illustrating this pattern. The MSCI World Index, with annual changes shown by the red line, covers large and mid-sized stocks in 23 developed economies, representing about 85% of the total equity market in those countries. The blue bars show inflows and outflows of investor capital. Notice, for example, that investors were still piling into equity markets for a year after stock prices started falling in the late 1990s. More recently, investors were so hesitant to return to stock markets after 2008 that they pretty much missed the bounceback in global stock prices in 2009, as well as in 2012.
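
The gap between fund returns and investor returns is the gap between a time-weighted return (what the fund reports) and a money-weighted return (what the average invested dollar actually earned). The simulation below is a minimal sketch of that mechanism; the return series and the return-chasing rule are invented for illustration, not taken from Mauboussin:

```python
# Time-weighted vs. money-weighted returns under return-chasing.
# The return series and the flow rule are invented for illustration.
fund_returns = [0.25, 0.30, -0.35, -0.10, 0.20, 0.15]

# A return-chaser invests heavily after an up year, trickles in after a down year.
flows = [100.0] + [150.0 if r > 0 else 10.0 for r in fund_returns[:-1]]

# Grow the portfolio year by year, adding each year's new money first.
value = 0.0
for flow, r in zip(flows, fund_returns):
    value = (value + flow) * (1 + r)

# Time-weighted return: what the fund itself reports.
growth = 1.0
for r in fund_returns:
    growth *= 1 + r
twr = growth ** (1 / len(fund_returns)) - 1

# Money-weighted return: the constant rate at which the same flows
# would grow to the same final value. Solved by bisection.
def final_value(rate: float) -> float:
    v = 0.0
    for flow in flows:
        v = (v + flow) * (1 + rate)
    return v

lo, hi = -0.99, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if final_value(mid) < value else (lo, mid)

print(f"Fund (time-weighted) return:      {twr:.1%}")  # ~4.6% per year
print(f"Investor (money-weighted) return: {lo:.1%}")   # lower: bought high, sold low
```

Because the big inflows arrive just before the bad years, the investor's money-weighted return (about 1.7% a year here) falls well short of the fund's reported return, which is exactly the "dumb money" wedge Mauboussin describes.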


What's the right strategy for avoiding this dumb money effect? Mauboussin explains:

"More than 40 years ago, Daniel Kahneman and Amos Tversky suggested an approach to making predictions that can help counterbalance this tendency. In cases where the correlation coefficient is close to zero, as it is for year-to-year equity market returns, a prediction that relies predominantly on the base rate is likely to outperform predictions derived from other approaches. … The lesson should be clear. Since year-to-year results for the stock market are very difficult to predict, investors should not be lured by last year's good results any more than they should be repelled by poor outcomes. It is better to focus on long-term averages and avoid being too swayed by recent outcomes. Avoiding the dumb money effect boils down to maintaining consistent exposure."


There are two other essays of interest at the start of this volume, both by Elroy Dimson, Paul Marsh, and Mike Staunton. In the first, "Emerging markets revisited," they write: "We construct an index of emerging market performance from 1900 to the present day and document the historical equity premium from the perspective of a global investor. We show how volatility is dampened as countries develop, study trends in international correlations and document style returns in emerging markets. Finally we explore trading strategies for long-term investors in the emerging world." In the second essay, "The Growth Puzzle," Dimson, Marsh, and Staunton explore the question of why stock prices over time have not measured up to economic growth in the ways one might expect. The report also offers a lively, brief, country-by-country overview of investment returns, often going back to 1900, in a wide array of countries and regions around the world.

Moore's Law: At Least a Little Longer

One can argue that the primary driver of U.S. and even world economic growth in the last quarter-century is Moore's law--that is, the claim first advanced back in 1965 by Gordon Moore, one of the founders of Intel Corporation, that the number of transistors on a computer chip would double every two years. But can it go on? Harald Bauer, Jan Veira, and Florian Weig of the McKinsey Global Institute consider the issues in "Moore's law: Repeal or renewal?" a December 2013 paper. They write:

"Moore's law states that the number of transistors on integrated circuits doubles every two years, and for the past four decades it has set the pace for progress in the semiconductor industry. The positive by-products of the constant scaling down that Moore's law predicts include simultaneous cost declines, made possible by fitting more transistors per area onto silicon chips, and performance increases with regard to speed, compactness, and power consumption. … Adherence to Moore's law has led to continuously falling semiconductor prices. Per-bit prices of dynamic random-access memory chips, for example, have fallen by as much as 30 to 35 percent a year for several decades. As a result, Moore's law has swept much of the modern world along with it. Some estimates ascribe up to 40 percent of the global productivity growth achieved during the last two decades to the expansion of information and communication technologies made possible by semiconductor performance and cost improvements."

The authors argue that technological advances already in the works are likely to sustain Moore's law for another 5-10 years. As I've written before, the power of doubling is difficult to appreciate at an intuitive level: each doubling is as big as everything that came before it, combined. Intel is now etching transistors at 22 nanometers, and as the company points out, you could fit 6,000 of these transistors across the width of a human hair; or if you prefer, it would take 6 million of these 22-nanometer transistors to cover the period at the end of a sentence. Also, a 22-nanometer transistor can switch on and off 100 billion times in a second.
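
The compounding is easy to understate, so here is a quick sketch of what the paper's numbers imply over twenty years (straight arithmetic on the figures quoted above):

```python
# Compounding under Moore's law: a doubling every two years.
years = 20
doublings = years / 2
print(f"Transistor-count multiple over {years} years: {2 ** doublings:,.0f}x")

# The paper's per-bit DRAM price declines of 30-35 percent a year, compounded:
for annual_drop in (0.30, 0.35):
    remaining = (1 - annual_drop) ** years
    print(f"{annual_drop:.0%}/year for {years} years leaves {remaining:.3%} of the price")
```

A doubling every two years is a 1,024-fold gain over two decades, and a 30-35% annual price decline leaves well under a tenth of a percent of the original per-bit price.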

The McKinsey analysts point out that while it is technologically possible for Moore's law to continue, the economic costs of further advances are becoming very high. They write: "A McKinsey analysis shows that moving from 32nm to 22nm nodes on 300-millimeter (mm) wafers causes typical fabrication costs to grow by roughly 40 percent. It also boosts the costs associated with process development by about 45 percent and with chip design by up to 50 percent. These dramatic increases will lead to process-development costs that exceed $1 billion for nodes below 20nm. In addition, the state-of-the-art fabs needed to produce them will likely cost $10 billion or more. As a result, the number of companies capable of financing next-generation nodes and fabs will likely dwindle."

Of course, it's also possible to have performance improvements and cost decreases on chips already in production: for example, the cutting edge of computer chips today will probably look like a steady old cheap workhorse of a chip in about five years. I suspect that we are still near the beginning, and certainly not yet at the middle, of finding ways for information and communications technology to alter our work and personal lives. But the physical problems and higher costs of making silicon-based transistors at an ever-smaller scale won't be denied forever, either.