Financial Services From the US Postal Service?

About 8% of Americans are “unbanked,” with no bank account at all, while another 21% are “underbanked,” meaning that they have a bank account but also use alternative financial services like payday loans, pawnshops, non-bank check cashing, and money orders. The Office of Inspector General of the U.S. Postal Service has published a report asking whether the Post Office might be a useful mechanism in “Providing Non-Bank Financial Services for the Underserved.” The report points out that the average unbanked or underbanked household spends $2,412 each year just on interest and fees for alternative financial services, which works out to about $89 billion annually across all such households.

I admit that, at first glance, this proposal gives me a sinking feeling. The postal service is facing severe financial difficulties in large part because of the collapse in first-class mail, and it has been scrambling to consider alternatives. After watching the tremors and collapses in the U.S. financial system in recent years, providing financial services seems like a shaky railing to grasp.

But as the Inspector General report points out, a connection from the post office to financial services isn’t brand-new. For example, “The Postal Service has played a longstanding role in providing domestic and international money orders. The Postal Service is actually the leader in the U.S. domestic paper money order market, with an approximately 70 percent market share. This is a lucrative business line and demonstrates that the Postal Service already has a direct connection to the underserved, who purchased 109 million money orders in fiscal year (FY) 2012. … While its domestic and international money orders are currently paper-based, the Postal Service does offer electronic money transfers to nine Latin American countries through the Dinero Seguro® (Spanish for “sure money”) service.” For several years now, the Post Office has been selling debit cards, both for American Express and for specific retailers like Amazon, Barnes & Noble, Subway, and Macy’s.

In many countries, the postal service takes deposits and provides financial services. The Universal Postal Union published a report in March 2013 by Alexandre Berthaud and Gisela Davico, “Global Panorama on Postal Financial Inclusion: Key Issues and Business Models,” which among more detailed findings notes that 1 billion people around the world in 50 countries do at least some of their banking through postal banking systems. The Universal Postal Union also oversees the International Financial System, software that allows a variety of fund transfers across postal operators in more than 60 countries. The U.S. Postal Service is not currently a member. Indeed, the U.S. Inspector General report notes that among high-income countries, postal services earn on average about 14% of their income from financial services.

The U.S. Postal Service itself used to take deposits: “[F]rom 1911 to 1967, the Postal Savings System gave people the opportunity to make savings deposits at designated Post Offices nationwide. The system hit its peak in 1947 with nearly $3.4 billion in savings deposits from more than 4 million customers using more than 8,100 postal units. The system was discontinued in 1967 after a long decline in usage.” Essentially, the post office collected deposits and then loaned them along to local banks, taking a small cut of the interest.

There’s of course a vision that in the future, everyone will do their banking over the web, often through their cellphones. But especially for people with a weak or nonexistent link to the banking system, the web-based financial future is still some years away. Until then, they will be turning to cash and physical checks, exchanged at physical locations. However, as the Inspector General report notes, “Banks are closing branches across the country (nearly 2,300 in 2012). … The closings are heavily hitting low-income communities, including rural and inner-city areas — the places where many of the underserved live. In fact, an astounding 93 percent of the bank branch closings since late 2008 have been in ZIP Codes with below-national median household income levels.” Conversely, there are 35,000 Post Offices, stations, branches, and contract units, and “59 percent of Post Offices are in ZIP Codes with one or no bank branches.”

There are at least two main challenges in the vision of having the U.S. Postal Service provide nonbank financial services. First, the USPS should do everything on a fee basis. It should not in any way, shape, or form be directly making investments or loans–just handling payments. However, it could have partners to provide other kinds of financial services, which leads to the second challenge. There is an anticompetitive temptation when an organization like the Post Office creates partnerships to provide outside services. Potential partners will be willing to pay the Post Office more if they have an exclusive right to sell certain services. Of course, the exclusive right also gives them an ability to charge higher fees, which is why they can pay the Post Office more and still earn higher profits. But the intended beneficiaries of the financial services end up paying higher fees. Thus, if the U.S. Postal Service is going to make space for ATMs, or for selling and reloading debit cards, or for cashing checks, it should always seek to offer a choice among three or more providers of such services, not just a single financial services partner. To put it another way, if the Postal Service is linked to a single provider of financial services, then the reputation of the Postal Service is hostage to how that provider performs. It’s much better if the Postal Service acts as an honest broker, collecting its fees for facilitating payments and transactions in a setting where people can always switch among multiple providers.

Finally, there is at least one additional benefit worth noting. Many communities lack safe spaces: safe for play, safe for walking down the street, safe for carrying out a financial transaction with minimal fear of fraud or assault.  Having post offices provide financial services could be one part of an overall social effort for adding to the number of safe spaces in these communities.

From BRICs to MINTs?

Back in 2001, Jim O’Neill–then chief economist at Goldman Sachs–invented the terminology of BRICs. As we all know more than a decade later, this shorthand is a quick way of discussing the argument that the course of the world economy will be shaped by the performance of Brazil, Russia, India, and China. Well, O’Neill is back with a new acronym, the MINTs, which stands for Mexico, Indonesia, Nigeria, and Turkey. In an interview with the New Statesman, O’Neill offers some thoughts about the new acronym. If you would like more detail on his views of these countries, O’Neill has also recorded a set of four radio shows for the BBC on Mexico, Indonesia, Nigeria, and Turkey.

In the interview, O’Neill is disarmingly quick to acknowledge the arbitrariness of these kinds of groupings. About the BRICs, for example, he says: “If I dreamt it up again today, I’d probably just call it ‘C’ … China’s one and a half times bigger than the rest of them put together.” Or about the MINTs, apparently his original plan was to include South Korea, but the BBC persuaded him to include Nigeria instead. O’Neill says: “It’s slightly embarrassing but also amusing that I kind of get acronyms decided for me.” But even arbitrary divisions can still be useful and revealing. In that spirit, here are some basic statistics on GDP and per capita GDP for the BRICs and the MINTs in 2012.

What patterns jump out here?

1) The representative growth economy for Latin America is now Mexico, rather than Brazil. This change makes some sense. Brazil has had four years of sub-par growth, its economy is in recession, and international capital is fleeing. Meanwhile, Mexico is forming an economic alliance with the three other nations with the fastest growth, lowest inflation, and best climates for business in Latin America: Chile, Colombia, and Peru.

2) All of the MINTs have smaller economies than all of the BRICs. If O’Neill would today just refer to C, for China, rather than the BRICs as a group, it’s still likely to be true that C for China is the key factor shaping the growth of emerging markets in the future.

3) O’Neill argues that although the MINTs differ in many ways, their populations are both large and relatively young, which should help to boost growth. He says: “That’s key. If you’ve got good demographics that makes things easy.” Easy may be overstating it! But there is a well-established theory of the “demographic dividend,” in which countries with a larger proportion of young workers are well-positioned for economic growth, as opposed to countries with a growing proportion of older workers and retirees.

4) One way to think about the MINTs is that they are standing as representatives for certain regions. Thus, Mexico, although half the size of Brazil’s economy, represents the future for Latin America. Indonesia, although smaller than India’s economy and much smaller than China’s, represents the growth potential for Factory Asia–that group of countries building international supply chains across the region. Turkey represents the potential for growth in Factory Europe–the economic connections happening around the periphery of Europe. Nigeria’s economy looks especially small on this list, but estimates for Nigeria are likely to be revised sharply upward in the near future, because the Nigerian government statistical agency is “re-basing” the GDP calculations so that they represent the structure of Nigeria’s economy in 2014, rather than the previous “base” year of 1990. Even with this rebasing, Nigeria will remain the smallest economy on this list, but it is expected to become the largest economy in sub-Saharan Africa (surpassing South Africa). Thus, Nigeria represents the possibility that at long last, economic growth may be finding a foothold in Africa.

I’m not especially confident that MINTs will catch on, at least not in the same way that BRICs did. But of the BRICs, Brazil, Russia, and to some extent India have not performed to expectations in the last few years. It’s time for me to broaden the number of salient examples of emerging markets that I tote around in my head. In that spirit, the MINTs deserve attention.

Intuition Behind the Birthday Bets

The “birthday bets” are a standard example in statistics classes. How many people must be in a room before it is more likely than not that two of them were born during the same month? Or in a more complex form, how many people must be in a room to make it more likely than not that two of them share the same birthday?

The misguided intro-student logic usually goes something like this. There are 12 months in a year. So to have more than a 50% chance of two people sharing a birth month, I need 7 people in the room (that is, 50% of 12 plus one more). Or there are 365 days in a year. So to have more than a 50% chance of two people sharing a specific birthdate, we need 183 people in the room. In a short article in Scientific American, David Hand explains the math behind the 365-day birthday bets.

Hand argues that the common fallacy in thinking about these bets is that people think about how many people it would take to share the same birth month or birthday with them. Thus, I think about how many people would need to be in the room to share my birth month, or my birth date. But that’s not the actual question being asked. The question is about whether any two people in the room share the same birth month or the same birth date.

The math for the birth month problem looks like this. The first person is born in a certain month. For the second person added to the room, the chances are 11/12 that the two people do not share a birth month. For the third person added to the room, the chances are 11/12 x 10/12 that all three of the people do not share a birth month. For the fourth person added to a room, the chances are 11/12 x 10/12 x 9/12 that all four of the people do not share a birth month. And for the fifth person added to the room, the chances are 11/12 x 10/12 x 9/12 x 8/12 that none of the five share a birth month. This multiplies to about 38%, which means that in a room with five people, there is a 62% chance that two of them will share a birth month.
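The running product in the paragraph above is easy to check with a few lines of code (a Python sketch of my own, using exact fractions to avoid rounding):

```python
from fractions import Fraction

def prob_no_shared_month(n_people, n_slots=12):
    """Probability that n_people all have distinct birth months (exact fractions)."""
    p = Fraction(1)
    for k in range(n_people):
        p *= Fraction(n_slots - k, n_slots)
    return p

p = prob_no_shared_month(5)  # 11/12 * 10/12 * 9/12 * 8/12
print(float(p))              # about 0.38: no shared birth month among five people
print(1 - float(p))          # about 0.62: at least one shared birth month
```

With four people the no-match probability is still about 57%, so five is indeed the smallest room in which a shared birth month is more likely than not.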

Applying the same logic to the birthday problem, it turns out that when you have a room with 23 people, the probability is greater than 50% that two of them will share a birthday.
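That claim can be verified directly. The sketch below computes the no-match probability for a uniform 365-day year and searches for the smallest room size where a shared birthday becomes more likely than not:

```python
def shared_birthday_prob(n, days=365):
    """Probability that at least two of n people share a birthday (365-day year)."""
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (days - k) / days
    return 1.0 - p_distinct

n = 1
while shared_birthday_prob(n) <= 0.5:
    n += 1
print(n, round(shared_birthday_prob(n), 3))  # 23 0.507
```

At 22 people the probability is still about 48%; adding the 23rd person pushes it just past half.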

I’ve come up with a mental image or metaphor that seems to help in explaining the intuition behind this result. Think of the birth months, or the birthdays, as written on squares on a wall. Now blindfold a person with very bad aim, and have them randomly throw a ball dipped in paint at the wall, so that it marks where it hits. The question becomes: If a wall has 12 squares, how many random throws will be needed before there is a greater than 50% chance of hitting the same square twice?

The point here is that after you have hit the wall once, there is one chance in 12 of hitting the same square with a second throw. If that second throw hits a previously untouched square, then the third throw has one chance in six (that is, 2/12) of hitting a marked square. If the third throw hits a previously untouched square, then the fourth throw has one chance in four (that is, 3/12) of hitting a marked square. And if the fourth throw hits a previously untouched square, then the fifth throw has one chance in three (4/12) of hitting a previously touched square.

The metaphor helps in understanding the problem as a sequence of events. It also clarifies that the question is not how many throws it takes to match where the first throw landed (or the birth date of the first person entering the room), but whether any two match. It also helps in understanding that in a reasonably long sequence of events, even if none of the events individually has a greater than 50% chance of happening, it can still be likely that at some point during the sequence the event will actually happen.

For example, when randomly throwing paint-dipped balls at a wall with 365 squares, think about a situation where you have thrown 18 balls without a match, so that approximately 5% of the wall is now covered. The next throw has about a 5% chance of matching a previous hit, as does the next throw, as does the next throw, as does the next throw. Taken together, all those roughly 5% chances one after another mean that you have a greater than 50% chance of matching a previous hit fairly soon–certainly well before you get up to 183 throws!
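The paint-ball metaphor also lends itself to simulation. This sketch (my own illustration, not from Hand's article) throws at a 365-square wall until some square is hit twice, then repeats the experiment many times:

```python
import random

def throws_until_repeat(squares=365, rng=random):
    """Throw at random squares until some square is hit a second time."""
    hit = set()
    throws = 0
    while True:
        throws += 1
        square = rng.randrange(squares)
        if square in hit:
            return throws
        hit.add(square)

rng = random.Random(0)  # fixed seed so the run is reproducible
trials = sorted(throws_until_repeat(rng=rng) for _ in range(10_000))
print(trials[len(trials) // 2])  # median throw count: near 23, far below 183
```

The median number of throws before the first repeat lands right around the 23 of the birthday problem, which is the point of the metaphor: matches arrive long before half the wall is covered.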

The Health of US Manufacturing

The future of the U.S. manufacturing sector is a matter of legitimate concern. After all, manufacturing accounts for a disproportionate share of research and development, innovation, and the leading-edge industries of the future. A healthy manufacturing sector not only supports well-paying jobs directly, but also supports a surrounding nimbus of service-sector jobs in finance, design, marketing, sales, and other areas. On the world stage, manufacturing is still most of what is traded in the world economy. If the U.S. wants to downsize its trade deficits, a healthier manufacturing sector is part of the answer. But perceptions of U.S. manufacturing, along with the reasons for concern, vary across authors.

Oya Celasun, Gabriel Di Bella, Tim Mahedy, and Chris Papageorgiou focus on the perhaps surprising strength of U.S. manufacturing in the aftermath of the Great Recession in “The U.S. Manufacturing Recovery: Uptick or Renaissance?” published as an IMF working paper in February 2014. They note that this is the first post-recession period since the 1970s in which manufacturing value-added rebounded starting a couple of years after the end of the recession.

In addition, they note that while U.S. manufacturing as a share of world manufacturing was falling from 2000 to 2007, in the last five years the U.S. share of world manufacturing seems to have stabilized at about 20%. Interestingly, China’s share of world manufacturing, which had been on the rise before the recession, also seems to have stabilized since then at about 20%.

How does one make sense of these patterns? The IMF economists emphasize three factors: a lower real exchange rate of the U.S. dollar, which boosts exports; restraint in the growth of labor costs for U.S. manufacturing firms; and cheaper energy costs and expanding oil and gas drilling activity, which matters considerably for many manufacturing operations. They write: “The contribution of manufacturing exports to growth could exceed those of the recent past, fueled by rising global trade. U.S. manufacturing exports have proven resilient during the crisis. Further increases will require that the U.S. diversify further its export base towards the more dynamic world regions.”

The Winter 2014 issue of the Journal of Economic Perspectives has several articles about U.S. manufacturing, with the lead-off article by Martin Neil Baily and Barry P. Bosworth, “US Manufacturing: Understanding Its Past and Its Potential Future.” (Full disclosure: I’ve been Managing Editor of the JEP since 1986.) They point out that when measured in terms of value-added, manufacturing has been a more-or-less constant share of the U.S. economy for decades. The share of U.S. employment in manufacturing has been dropping steadily over time, but as they write:

“The decline in manufacturing employment as a share of the economy-wide total is a long-standing feature of the US data and also a trend shared by all high-income economies. Indeed, data from the OECD indicate that the decline in the share of US employment accounted for by the manufacturing sector over the past 40 years—at about 14 percentage points—is equivalent to the average of the G-7 economies (that is, Canada, France, Germany, Italy, Japan, and the United Kingdom, along with the United States).”

Of course, there are reasons for concern as well. For example, manufacturing output has held its ground in large part because of rapid growth in computing and information technology, while many other manufacturing industries have had a much harder time. But Baily and Bosworth argue that the real test for U.S. manufacturing is how well it competes in the emerging manufacturing industries of the future, including robotics, 3D printing, materials science, biotechnology, and the “Internet of Things,” in which machinery and buildings are hooked into the web. It also depends on how U.S. manufacturing interacts with recent developments in the U.S. energy industry, with its prospect of lower-cost domestic natural gas. In terms of public policy, they argue that the policies most important for U.S. manufacturing are not specific to manufacturing, but instead involve more basic policies like opening global markets, reducing budget deficits over time, improving education and training for U.S. workers, investing in R&D and infrastructure, adjusting the U.S. corporate tax code, and the like.

In a companion article in the Winter 2014 JEP, Gregory Tassey offers a different perspective in “Competing in Advanced Manufacturing: The Need for Improved Growth Models and Policies.” Tassey’s focus is less on the manufacturing sector as a whole and more on cutting-edge advanced manufacturing. He notes: “One result has been a steady deterioration in the US Census Bureau’s ‘advanced technology products’ trade balance (see http://www.census.gov/foreign-trade/balance/c0007.html) over the past decade, which turned negative in 2002 and continued to deteriorate to a record deficit of $100 billion in 2011, improving only slightly to a deficit of $91 billion in 2012.”

In Tassey’s discussion of advanced manufacturing, he discusses “how it differs from the conventional simplified characterization of such investment as a two-step process in which the government supports basic research and then private firms build on that scientific base with applied research and development to produce ‘proprietary technologies’ that lead directly to commercial products. Instead, the process of bringing new advanced manufacturing products to market usually consists of two additional distinct elements. One is ‘proof-of-concept research’ to establish broad ‘technology platforms’ that can then be used as a basis for developing actual products. The second is a technical infrastructure of ‘infratechnologies’ that include the analytical tools and standards needed for measuring and classifying the components of the new technology; metrics and methods for determining the adequacy of the multiple performance attributes of the technology; and the interfaces among hardware and software components that must work together for a complex product to perform as specified.”

Tassey argues that “proof-of-concept research” and “infratechnologies” are not going to be pursued by private firms acting alone, because the risks are too high, and will not be pursued effectively by the public sector acting alone, because the public sector is not well-suited to focusing on desired market products. Instead, these intermediate steps between basic research and proprietary applied development need to be developed through well-structured public-private partnerships. Further, he argues that without such partnerships, many advanced manufacturing technologies that show great promise in basic research will enter a “valley of death” and will not be transformed into viable commercial products.

Of course, the various perspectives described here are not mutually exclusive. U.S. manufacturing may well be benefiting from a short-term bounceback in cars and durable goods in the aftermath of the Great Recession, as well as from a weaker U.S. exchange rate and lower energy prices. It could probably use both broad-based economic policies and support for public-private partnerships. But the bottom-line lesson is that in a rapidly globalizing economy, a tautology has sharpened its teeth: U.S.-based manufacturing will only succeed to the extent that it makes economic sense to do the manufacturing in the United States.

The Modest Effect of a Higher Minimum Wage

The mainstream arguments about “The Effects of a Minimum-Wage Increase on Employment and Family Income” are compactly laid out in a report released earlier this week by the Congressional Budget Office. On one side, the report estimates that about 16.5 million workers would see a rise in their average weekly income if the minimum wage were raised to $10.10/hour by the second half of 2016. On the other side, the higher minimum wage would reduce employment (in their central estimate) by 500,000 jobs, which from one perspective is only 0.3% of the labor force, but from another perspective is a loss of jobs concentrated almost entirely at the bottom, low-skilled end of the wage distribution. I’ve laid out some of my thoughts about weighing and balancing these and related tradeoffs here and here.

In this post, I want to focus on a different issue: the modest effect of raising the minimum wage on helping the working poor near and below the poverty line. The fundamental difficulty is that many of the working poor suffer from a lack of full-time work, rather than working for a sustained time at a full-time minimum wage job. As a result, many of the working poor aren’t much affected by raising the minimum wage. Here are the CBO estimates:

“Families whose income will be below the poverty threshold in 2016 under current law will have an average income of $10,700, CBO projects … The agency estimates that the $10.10 option would raise their average real income by about $300, or 2.8 percent. For families whose income would otherwise have been between the poverty threshold and 1.5 times that amount, average real income would increase by about $300, or 1.1 percent. The increase in average income would be smaller, both in dollar amounts and as a share of family income, for families whose income would have been between 1.5 times and six times the poverty threshold.”

Of course, these are averages, and families who are now working many hours at the minimum wage would see larger increases, if they keep their jobs. But the higher minimum wage actually sends an amount of money to these workers that is relatively small in the context of other government programs to assist the working poor. CBO estimates that families below the poverty line, as a group, would receive an additional $5 billion in income from raising the minimum wage to $10.10/hour, while families with incomes between the poverty line and three times the poverty line would receive a total of $12 billion.

To put those numbers in context, consider a quick and dirty list of some other government programs to assist those near or below the poverty line.

Of course, this list doesn’t include unemployment insurance, disability insurance, Social Security, Medicare, and other programs that may sometimes assist households with low incomes, along with their extended families.

A few thoughts:

1) Of course, the fact that raising the minimum wage has a relatively small effect in the context of these other programs doesn’t make it a bad idea. But it does suggest some caution for both advocates and opponents about over-hyping the importance of the issue to the working poor.

2) In particular, it’s fairly common to hear people talk about the rise in U.S. inequality and the need to raise the minimum wage in the same breath–as if one were closely related to the other. If only such a view were true! If only it were possible to substantially offset the rise in inequality over the last several decades by bumping up the minimum wage by a couple of bucks an hour! But the rise in inequality of incomes at the tip-top of the income distribution is far, far larger (probably measured in hundreds of billions of dollars) than the $17 billion the higher minimum wage would distribute to the working poor and near-poor below three times the poverty line. To put it another way, the problems of low-wage workers in a technology-intensive and globalizing United States are far more severe than a couple of dollars on the minimum wage can address.

3) A number of the current programs to help those with low incomes either didn’t exist or existed in a much smaller form a decade or two or three ago, including the Earned Income Tax Credit, the Child Tax Credit, the expansion of Food Stamps, and the rise in Medicaid spending. It seems peculiar to offer simple-minded comparisons of the hourly minimum wage now to, say, its inflation-adjusted levels of the late 1960s or early 1970s without taking into account that the entire policy structure for assisting those with low incomes has been dramatically overhauled since then, largely for the better, in ways that provide much more help to the working poor and their families than would a higher minimum wage.

4) For me, it’s impossible to look at this list of government programs that provide assistance to those with low incomes and not notice that the costs of the U.S. health care system, in this case as embodied in Medicaid, are crowding out other spending. To put it another way, if lifting the minimum wage to $10.10/hour raises incomes for those at less than three times the official poverty line by $17 billion per year, that would be about what Medicaid spends every two weeks.
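The two-weeks comparison is straightforward back-of-the-envelope arithmetic (my own check, using 26 two-week periods per year):

```python
below_poverty = 5        # CBO: $ billions per year to families below the poverty line
one_to_three_times = 12  # CBO: $ billions to families between 1x and 3x the poverty line
total_transfer = below_poverty + one_to_three_times
implied_annual_medicaid = total_transfer * 26  # if $17B is roughly two weeks of Medicaid
print(total_transfer, implied_annual_medicaid)  # 17 442
```

The implied Medicaid figure, on the order of $440 billion per year, is roughly consistent with combined federal and state Medicaid spending in this period.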

Behavioral Investors and the Dumb Money Effect

Individual stock market investors often underperform the market averages because of terrible timing: in particular, they often buy after the market has already risen and sell after the market has already fallen, a pattern that means they end up buying high and selling low. Michael J. Mauboussin investigates this pattern, and what investors might do about it, in “A behavioural take on investment returns,” one of the essays appearing at the start of the Credit Suisse Global Investment Returns Yearbook 2014. He explains (citations omitted):

Perhaps the most dismal numbers in investing relate to the difference between three investment returns: those of the market, those of active investment managers, and those of investors. For example, the annual total shareholder returns were 9.3% for the S&P 500 Index over the past 20 years ended 31 December 2013. The annual return for the average actively managed mutual fund was 1.0–1.5 percentage points less, reflecting expense ratios and transaction costs. This makes sense because the returns for passive and active funds are the same before costs, on average, but are lower for active funds after costs. … But the average return that investors earned was another 1–2 percentage points less than that of the average actively managed fund. This means that the investor return was roughly 60%–80% that of the market. At first glance, it does not make sense that investors who own actively managed funds could earn returns lower than the funds themselves. The root of the problem is bad timing. … [I]nvestors tend to extrapolate recent results. This pattern of investor behavior is so consistent that academics have a name for it: the “dumb money effect.” When markets are down investors are fearful and withdraw their cash. When markets are up they are greedy and add more cash.
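The ranges Mauboussin quotes hang together arithmetically; here is a quick consistency check (my own, using the endpoints he cites):

```python
market = 9.3             # annual S&P 500 return, 20 years ended 2013, per the essay
fund_drag = (1.0, 1.5)   # active-fund underperformance, percentage points
timing_drag = (1.0, 2.0) # additional drag from investors' bad timing

best = (market - fund_drag[0] - timing_drag[0]) / market
worst = (market - fund_drag[1] - timing_drag[1]) / market
print(round(worst, 2), round(best, 2))  # 0.62 0.78 -- i.e., roughly 60%-80% of the market
```

So the "60%-80% of the market" claim follows directly from stacking the two drags on the index return.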

Here’s a figure illustrating this pattern. The MSCI World Index, with annual changes shown by the red line, covers large and mid-sized stocks in 23 developed economies, representing about 85% of the total equity market in those countries. The blue bars show inflows and outflows of investor capital. Notice, for example, that investors were still piling into equity markets for a year after stock prices started falling in the late 1990s. More recently, investors were so hesitant to return to stock markets after 2008 that they pretty much missed the bounceback in global stock prices in 2009, as well as in 2012.


What’s the right strategy for avoiding this dumb money effect? Mauboussin explains:

“More than 40 years ago, Daniel Kahneman and Amos Tversky suggested an approach to making predictions that can help counterbalance this tendency. In cases where the correlation coefficient is close to zero, as it is for year-to-year equity market returns, a prediction that relies predominantly on the base rate is likely to outperform predictions derived from other approaches. … The lesson should be clear. Since year-to-year results for the stock market are very difficult to predict, investors should not be lured by last year’s good results any more than they should be repelled by poor outcomes. It is better to focus on long-term averages and avoid being too swayed by recent outcomes. Avoiding the dumb money effect boils down to maintaining consistent exposure.”


There are two other essays of interest at the start of this volume, both by Elroy Dimson, Paul Marsh, and Mike Staunton. In the first, “Emerging markets revisited,” they write: “We construct an index of emerging market performance from 1900 to the present day and document the historical equity premium from the perspective of a global investor. We show how volatility is dampened as countries develop, study trends in international correlations and document style returns in emerging markets. Finally we explore trading strategies for long-term investors in the emerging world.” In the second essay, “The Growth Puzzle,” Dimson, Marsh, and Staunton explore the question of why stock prices over time have not measured up to economic growth in the ways one might expect. The report also offers a lively brief country-by-country overview of investment returns, often back to 1900, in a wide array of countries and regions around the world.

Moore’s Law: At Least a Little Longer

One can argue that the primary driver of U.S. and even world economic growth in the last quarter-century is Moore’s law–that is, the claim first advanced back in 1965 by Gordon Moore, one of the founders of Intel Corporation, that the number of transistors on a computer chip would double every two years. But can it go on? Harald Bauer, Jan Veira, and Florian Weig of the McKinsey Global Institute consider the issues in “Moore’s law: Repeal or renewal?” a December 2013 paper. They write:

“Moore’s law states that the number of transistors on integrated circuits doubles every two years, and for the past four decades it has set the pace for progress in the semiconductor industry. The positive by-products of the constant scaling down that Moore’s law predicts include simultaneous cost declines, made possible by fitting more transistors per area onto silicon chips, and performance increases with regard to speed, compactness, and power consumption. … Adherence to Moore’s law has led to continuously falling semiconductor prices. Per-bit prices of dynamic random-access memory chips, for example, have fallen by as much as 30 to 35 percent a year for several decades.
As a result, Moore’s law has swept much of the modern world along with it. Some estimates ascribe up to 40 percent of the global productivity growth achieved during the last two decades to the expansion of information and communication technologies made possible by semiconductor performance and cost improvements.”

The authors argue that technological advances already in the works are likely to sustain Moore’s law for another 5-10 years. As I’ve written before, the power of doubling is difficult to appreciate at an intuitive level, but it means that each increase is as big as everything that came before. Intel is now etching transistors at 22 nanometers, and as the company points out, you could fit 6,000 of these transistors across the width of a human hair; or if you prefer, it would take 6 million of these 22-nanometer transistors to cover the period at the end of a sentence. Also, a 22-nanometer transistor can switch on and off 100 billion times in a second.
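The claim that each doubling is as big as everything that came before is just the arithmetic of powers of two, which a few lines can verify (purely illustrative; the transistor counts here are arbitrary, not figures from the McKinsey paper):

```python
# Each doubling equals the sum of all previous values plus one:
# 2^n = (2^0 + 2^1 + ... + 2^(n-1)) + 1.
counts = [2 ** n for n in range(11)]  # 1, 2, 4, ..., 1024

for n in range(1, len(counts)):
    assert counts[n] == sum(counts[:n]) + 1

print(counts[-1], sum(counts[:-1]))  # 1024 vs. 1023
```

So after ten doublings the latest step alone (1024) exceeds the cumulative total of every earlier step (1023), which is why sustained doubling outruns intuition so quickly.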
The McKinsey analysts point out that while it is technologically possible for Moore’s law to continue, the economic costs of further advances are becoming very high. They write: “A McKinsey analysis shows that moving from 32nm to 22nm nodes on 300-millimeter (mm) wafers causes typical fabrication costs to grow by roughly 40 percent. It also boosts the costs associated with process development by about 45 percent and with chip design by up to 50 percent. These dramatic increases will lead to process-development costs that exceed $1 billion for nodes below 20nm. In addition, the state-of-the-art fabs needed to produce them will likely cost $10 billion or more. As a result, the number of companies capable of financing next-generation nodes and fabs will likely dwindle.”
Of course, it’s also possible to have performance improvements and cost decreases on chips already in production: for example, the cutting edge of computer chips today will probably look like a steady old cheap workhorse of a chip in about five years. I suspect that we are still near the beginning, and certainly not yet at the middle, of finding ways for information and communications technology to alter our work and personal lives. But the physical problems and higher costs of making silicon-based transistors at an ever-smaller scale won’t be denied forever, either.

Jousting over the One Percent

Robert Solow vs. Greg Mankiw, jousting over inequality. What more could those who enjoy academic blood sports desire? Their exchange is in the “Correspondence” section of the Winter 2014 issue of the Journal of Economic Perspectives. Solow is writing in response to Mankiw’s article in the Summer 2013 issue of JEP, called “Defending the One Percent.” (All articles in JEP are freely available and ungated, courtesy of the American Economic Association.) Here’s a quick taste of the exchange, to whet your appetite for the rest.

Solow’s opening paragraph:

“The cheerful blandness of N. Gregory Mankiw’s “Defending the One Percent” (Summer 2013, pp. 21–34) may divert attention from its occasional unstated premises, dubious assumptions, and omitted facts. I have room to point only to a few such weaknesses; but the One Percent are pretty good at defending themselves, so that any assistance they get from the sidelines deserves scrutiny.”

Mankiw’s opening paragraph:

“Robert Solow’s scattershot letter offers various gripes about my paper “Defending the One Percent.” Let me respond, as blandly and cheerfully as I can, to his points.”

Solow’s closing paragraph:

“Sixth, who could be against allowing people their “just deserts?” But there is that matter of what is “just.” Most serious ethical thinkers distinguish between deservingness and happenstance. Deservingness has to be rigorously earned. You do not “deserve” that part of your income that comes from your parents’ wealth or connections or, for that matter, their DNA. You may be born just plain gorgeous or smart or tall, and those characteristics add to the market value of your marginal product, but not to your just deserts. It may be impractical to separate effort from happenstance numerically, but that is no reason to confound them, especially when you are thinking about taxation and redistribution. That is why we may want to temper the wind to the shorn lamb, and let it blow on the sable coat.”

Mankiw’s closing paragraph:

“Sixth, and finally, Solow asks, who could be against allowing people their “just deserts”? Actually, much of the economics literature on redistribution takes precisely that stand, albeit without acknowledging doing so. The standard model assumes something like a utilitarian objective function and concludes that the optimal tax code comes from balancing diminishing marginal utility against the adverse incentive effects of redistribution. In this model, what people deserve plays no role in the formulation of optimal policy. I agree with Solow that figuring out what people deserve is hard, and I don’t pretend to have the final word on the topic. But if my paper gets economists to focus a bit more on just deserts when thinking about policy, I will feel I have succeeded.”

Full disclosure: I’ve been Managing Editor of the JEP since 1987, so there is a distinct possibility that I am prejudiced toward finding the contents of the journal to be highly interesting.

How the 2009 Tax Haven Agreement Failed

Back in April 2009, a summit of the G20 countries agreed to lean hard on tax haven nations to sign treaties to exchange information with other countries. News stories made much of the agreement (for examples, here and here). But what effect did the agreement actually have? Niels Johannesen and Gabriel Zucman tackle this question in “The End of Bank Secrecy? An Evaluation of the G20 Tax Haven Crackdown,” which appears in the most recent issue of American Economic Journal: Economic Policy (6:1, pp. 65–91). The journal isn’t freely available online, but many readers will have access through library subscriptions.

The short answer is that the crackdown didn’t work very well. The tax haven countries were encouraged to sign bilateral treaties with other nations, and they went ahead and signed 300 or so of these treaties. But not every tax haven has a treaty with every country, and so the overall effect has been a relocation of money between tax havens. Here’s the data they have available:

“For the purpose of our study, the Bank for International Settlements (BIS) has given us access to bilateral bank deposit data for 13 major tax havens, including Switzerland, Luxembourg, and the Cayman Islands. We thus observe the value of the deposits held by French residents in Switzerland, by German residents in Luxembourg, by US residents in the Cayman Islands and so forth, on a quarterly basis from the end of 2003 to the middle of 2011.”

The full list of the 13 tax havens is Austria, Belgium, the Cayman Islands, Chile, Cyprus, Guernsey, the Isle of Man, Jersey, Luxembourg, Macao, Malaysia, Panama, and Switzerland. These 13 jurisdictions account for about 75% of all the deposits in the tax havens that report to the Bank for International Settlements. The authors also have data grouped together for five other tax havens: Bahamas, Bahrain, Hong Kong, the Netherlands Antilles, and Singapore. They write:

“We obtain two main results. First, treaties have had a statistically significant but quite modest impact on bank deposits in tax havens: a treaty between say France and Switzerland causes an approximately 11 percent decline in the Swiss deposits held by French residents. Second, and more importantly, the treaties signed by tax havens have not triggered significant repatriations of funds, but rather a relocation of deposits between tax havens. We observe this pattern in the aggregate data: the global value of deposits in havens remains the same two years after the start of the crackdown, but the havens that have signed many treaties have lost deposits at the expense of those that have signed few. We also observe this pattern in the bilateral panel regressions: after say France and Switzerland sign a treaty, French deposits increase in havens that have no treaty with France.”

As with most studies, there are complications of interpretation. Are front companies being used to hide the movement of funds in a way that doesn’t show up in these statistics? Are some people responding to the treaties by reporting more of their tax-haven income to domestic tax authorities? This data set doesn’t allow one to answer those questions. But the evidence from this study strongly suggests that trying to deal with tax havens through bilateral agreements is likely to be a very long-running game, and is ultimately unlikely to make much difference in how companies and individuals in the rest of the world are able to make use of tax havens.

Finally, James Hines wrote “Treasure Islands” for the Fall 2010 Journal of Economic Perspectives, which makes an effort to look at both the concerns over tax havens and some possible benefits they might convey. From the abstract: “The United States and other higher-tax countries frequently express concerns over how tax havens may affect their economies. Do they erode domestic tax collections; attract economic activity away from higher-tax countries; facilitate criminal activities; or reduce the transparency of financial accounts and so impede the smooth operation and regulation of legal and financial systems around the world? Do they contribute to excessive international tax competition? These concerns are plausible, albeit often founded on anecdotal rather than systematic evidence. Yet tax haven policies may also benefit other economies and even facilitate the effective operation of the tax systems of other countries.”

Full disclosure: The AEJ:EP is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor. All JEP articles, like the Hines article mentioned above, are freely available courtesy of the AEA.

A German Employment Miracle Narrative

The German unemployment rate peaked at 12.1% in March 2005 (based on OECD statistics), and has declined more or less steadily since then, with only a small hiccup during the Great Recession. Here’s a figure to illustrate, from the ever-useful FRED website run by the Federal Reserve Bank of St. Louis. How did Germany–the world’s fourth-largest national economy–do it? Are there lessons to learn?

Graph of Registered Unemployment Rate for Germany

There are essentially three categories of explanation that have been suggested for Germany’s remarkable labor market performance during the Great Recession: 1) decentralization of wage bargaining in Germany starting in the 1990s; 2) the “Hartz reforms” implemented in the mid-2000s; and 3) how the adoption of the euro influenced Germany’s economic situation.

A nice statement of the first point of view, decentralization of German wage bargaining, appears in “From Sick Man of Europe to Economic Superstar: Germany’s Resurgent Economy,” by Christian Dustmann, Bernd Fitzenberger, Uta Schönberg, and Alexandra Spitz-Oener, in the Winter 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I’ve been the Managing Editor of the JEP since 1987.) Dustmann et al. start their story in the early 1990s. Germany was facing the enormous costs and disruptions of reunification, in which higher-wage West Germany found itself part of the same country as lower-wage East Germany. In addition, the fall of the Soviet Union offered German firms access to imports produced by lower-wage eastern European workers, many of whom already had educational, economic, or cultural ties to Germany. German industry began a “factory Europe” approach, building international supply chains across the countries of eastern Europe, as well as the rest of the world.

Under these pressures, German unions at the industry and firm level showed considerable flexibility. The number of German workers covered by union agreements declined: “From 1995 to 2008, the share of employees covered by industry-wide agreements fell from 75 to 56 percent, while the share covered by firm-level agreements fell from 10.5 to 9 percent.” Wages rose more slowly than productivity, and so, starting around 1994, Germany’s labor costs rose more slowly than those in other European countries, as well as the United States. In addition, Germany’s wages became markedly more unequal. Here’s a figure showing wage growth at the 85th percentile of wages, the 50th percentile, and the 15th percentile.

Dustmann, Fitzenberger, Schönberg, and Spitz-Oener argue that Germany’s labor market institutions, which emphasize consensus bargaining at the firm and industry level, actually turned out to be much more flexible than labor market institutions in other countries like France and Italy, where union wage negotiations happen at a national level. Another sign of Germany’s labor market flexibility is that the country has no minimum wage.

A second set of explanations for Germany’s strong labor market performance in recent years emphasizes the “Hartz reforms” that were undertaken between 2003 and 2005. Ulf Rinne and Klaus F. Zimmermann offer a nice exposition of this point of view in “Is Germany the North Star of Labor Market Policy?” in the December 2013 issue of the IMF Economic Review. (This journal isn’t freely available online, but readers may have access through library subscriptions.) They summarize the reforms this way:

“First, the reforms reorganized existing employment services and related policy measures. Importantly, unemployment benefit and social assistance schemes were restructured, and a means-tested flat-rate benefit replaced earnings-related, long-term unemployment assistance. Second, a significant reduction of long-term unemployment benefits—in terms of both amount and duration—and stricter monitoring activities were implemented to stimulate labor supply by providing the unemployed with more incentives to take up a job. Third, massive deregulation of fixed-term contracts, agency work, and marginal part-time work was undertaken to stimulate labor demand. The implementation of the reforms in these three areas was tied to an evaluation mandate that systematically analyzed the effectiveness and efficiency of the various measures of ALMP [active labor market policy].”

To put all this a little more bluntly, it was strong medicine. Early retirement options were phased out. Unemployment benefits were limited in eligibility, size, and duration. For example, the unemployed had to prove periodically that they were really looking for work. Also, remember that Germany was enacting many of these policies right around 2005 when its economy was going through a deep recession that spiked the unemployment rate.  During the recession, a number of German firms avoided layoffs by using the flexibility of the Hartz reforms to reduce hours worked–and wages paid.

The third set of explanations for Germany’s lower unemployment rate focuses on the creation of the euro and the pattern of German trade surpluses that has resulted. The figure shows Germany’s trade surplus. Notice that around 2001, when the euro moves into general use, Germany’s trade surplus takes off. This has been called the “Chermany problem”–that is, both China and Germany after about 2000 had an exchange rate at a low enough level to generate large and rising trade surpluses.

Graph of Current Account Balance: Total Trade of Goods for Germany

But notice that Germany’s unemployment rate was rising from about 2000 to 2005, even though the trade surpluses were also rising. Then the trade surpluses declined after about 2008, as the global financial crisis occurred, and haven’t yet rebounded to their peak–at a time when Germany’s unemployment rate has been falling. In short, outsized trade deficits and surpluses can lead to economic problems of various kinds, but trade imbalances often don’t have a tight link to unemployment rates. (In the US economy, for example, trade deficits were quite high when unemployment was low during the height of the housing bubble back around 2006, but since the Great Recession US trade deficits have been lower while the unemployment rate has been higher.)

While academics and policymakers will continue to dispute the reasons for Germany’s stellar performance in reducing unemployment in the last few years, I’ll just note that none of the possible answers look easy. Having productivity growth outstrip wages over time, so that labor costs fall relative to competitors, isn’t easy. Reorganizing industry around global supply chains that include suppliers from lower-wage economies isn’t easy. Increasing inequality of wages isn’t easy. “Structural labor market reforms” that include trimming back early retirement and unemployment insurance aren’t easy. U.S. discussions of economic policy sometimes make it sound as if the government can just “create jobs” with large enough spending and/or tax cuts, or low enough interest rates. But real and lasting solutions to reducing unemployment and keeping it low aren’t that easy.