Return Mail From Nonexistent Addresses: An International Comparison

A large proportion of academic research isn’t about trying to resolve a big question once and for all. Instead, it’s about putting together one brick of evidence, and when enough bricks accumulate, then it becomes possible to offer evidence-backed judgments on bigger questions.

In that spirit, Alberto Chong, Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei Shleifer offer a study of “Letter Grading Government Efficiency” in an August 2012 working paper for the National Bureau of Economic Research. (NBER working papers are not freely available on-line, but many readers will have access through their academic institutions. Also, earlier copies of the paper are available on the web, like here.) Full disclosure: One of the authors of the paper, Andrei Shleifer, was the editor of my own Journal of Economic Perspectives, and thus my boss, from 2003 to 2008.

The notion is to measure one simple dimension of government efficiency: whether a letter sent to a real city with an actual zip code, but a nonexistent address, is returned to the sender. Thus, the authors sent 10 letters to nonexistent addresses in each of 159 countries: two letters to each of a country’s five largest cities. The letters were all sent with a return address to the Tuck School of Business at Dartmouth College with a request to “return to the sender if undeliverable.” A one-page letter requesting a response from the recipient was inside.

In theory, all countries belong to an international postal convention requiring them to return letters with an incorrect address, and to do so within about a month. Here’s an overview of their results and a table:

“We got 100% of the letters back from 21 out of 159 countries, including from the usual suspects of efficient government such as Canada, Norway, Germany and Japan, but also from Uruguay, Barbados, and Algeria. At the same time, we got 0% of the letters back from 16 countries, most of which are in Africa but also including Tajikistan, Cambodia, and Russia. Overall, we had received 59% of the letters back within a year after they were sent out. Another measure we look at is the percentage of the letters we got back in 90 days. Only 4 countries sent all the letters back in 90 days (United States, El Salvador, Czech Republic, and Luxembourg), while 42 did not manage to deliver any back within 3 months. Overall, only 35% of the letters came back within 3 months. … In statistical terms, the variation in our measures of postal efficiency is comparable to the variation of per capita incomes across countries.”

Not unexpectedly, the data show that countries with higher per capita GDP or with higher average levels of education typically did better at returning misaddressed mail. The U.S. Postal Service is at the top of the list, but since the letters were mailed in the U.S. and were being returned to a U.S. address, it would be quite troubling if this were not true!

The authors can account for about half of the variation across countries by looking at factors like the types of machines used for reading postal codes, and whether the country uses a Latin alphabet (although the international postal conventions actually require that addresses be written in Latin letters). But more intriguingly, much of the variation across countries in whether they return misaddressed letters seems to be correlated with other measures of the quality of government and management in that country. In that sense, the ability to return misaddressed letters may well be a sort of simple diagnostic tool that suggests something about broader patterns of efficiency in government and the economy.

A Systematic List of Hyperinflations

Most discussions of hyperinflation focus on a particular episode: for example, here’s my post from last March 5 on “Hyperinflation and the Zimbabwe Example.” But Steve H. Hanke and Nicholas Krus have taken the useful step of compiling a list of “World Hyperinflations” in an August 2012 Cato Working Paper.

Hanke and Krus define an episode of hyperinflation as starting when the rate of inflation exceeds 50% in a month, and ending after a year in which inflation rates do not exceed this level. But the task of compiling a systematic and well-documented list of hyperinflations is tricky. Data on prices can be scarce. While data on consumer prices is preferable, looking at data on wholesale prices or even exchange rates is sometimes necessary. As one example, the Republika Srpska is currently one of the two main parts of Bosnia and Herzegovina, which in turn was formed from the break-up of Yugoslavia. But for a time in the early 1990s, the Republika Srpska had its own currency circulating. Finding monthly price data for this currency is not a simple task! As another example, the city of Danzig carried out its own currency reform in 1923: Danzig was at the time technically a free city, but it was heavily influenced by the German hyperinflation surrounding it.
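Their dating rule is mechanical enough to sketch in code. The following is my reading of the rule as stated above, not Hanke and Krus's actual procedure; the function name and the sample inflation series are invented for illustration.

```python
# A sketch of the Hanke-Krus dating rule as described above: an episode
# begins in the first month that inflation exceeds 50%, and ends with the
# last month above 50% once a full year passes without crossing it again.

def hyperinflation_episodes(monthly_inflation, threshold=50.0, quiet_months=12):
    """monthly_inflation: month-over-month inflation rates in percent.
    Returns (start, end) index pairs, with end exclusive."""
    episodes, start, quiet = [], None, 0
    for i, rate in enumerate(monthly_inflation):
        if rate > threshold:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet == quiet_months:  # a full quiet year: close the episode
                episodes.append((start, i - quiet_months + 1))
                start, quiet = None, 0
    if start is not None:  # series ends while an episode is still open
        episodes.append((start, len(monthly_inflation)))
    return episodes

# Invented series: two hyperinflationary months, then a quiet year.
print(hyperinflation_episodes([10, 60, 70] + [20] * 12 + [5]))  # [(1, 3)]
```

Note that a single month back above 50% resets the quiet-year clock, which is why an episode can stretch across calm stretches shorter than twelve months.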

Their paper offers a much fuller discussion of details and approaches, but here is a taste of their findings: a much-abbreviated version of their main table, showing the 25 worst hyperinflations as ranked by the single highest monthly inflation rate, each of which exceeded 200%.

A few themes jump out at me:

1) The infamous German hyperinflation of 1922-23 is near the top of the list, but ranks only fifth for highest monthly rate of inflation. The dubious honor of record-holder for highest monthly hyperinflation rate apparently goes to Hungary, which in July 1946 had an inflation rate so high that prices doubled every 15 hours. The Zimbabwe hyperinflation mentioned above is a close second, with inflation in November 2008 causing prices to double every 25 hours.

2) The earliest hyperinflation on the list is France in 1795-1796, and there are no examples of hyperinflation in the 1800s.

3) Many of the hyperinflations on the list occur either in the aftermath of World War II, or in the aftermath of the break-up of the Soviet Union in the early 1990s.

4) Finally, Hanke and Krus state in a footnote that they would now make one addition to the table, which would be the most recent episode of all: the experience of North Korea from December 2009 to January 2011.

“We are aware of one other case of hyperinflation: North Korea. We reached this conclusion after calculating inflation rates using data from the foreign exchange black market, as well as the price of rice. We estimate that this episode of hyperinflation occurred from December 2009 to mid-January 2011. Using black market exchange rate data, and calculations based on purchasing power parity, we determined that the North Korean hyperinflation peaked in early March 2010, with a monthly rate of 496% (implying a 6.13% daily inflation rate and a price-doubling time of 11.8 days). When we used rice price data, we calculated the peak month to be mid-January 2010, with a monthly rate of 348% (implying a 5.12% daily inflation rate and a price-doubling time of 14.1 days).”
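The conversions among monthly rates, daily rates, and doubling times in that passage are simple compound-growth arithmetic. Here is a quick check of the quoted figures, assuming a 30-day month; small day-count differences account for the slight gap with the paper's 11.8 days.

```python
import math

def daily_rate(monthly_rate_pct, days=30):
    """Daily inflation rate (percent) equivalent to a given monthly rate."""
    return ((1 + monthly_rate_pct / 100) ** (1 / days) - 1) * 100

def doubling_time_days(daily_rate_pct):
    """Days for the price level to double at a constant daily inflation rate."""
    return math.log(2) / math.log(1 + daily_rate_pct / 100)

d = daily_rate(496)        # about 6.1% per day, matching the quoted 6.13%
t = doubling_time_days(d)  # about 11.7 days, close to the quoted 11.8
```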

What is a Beveridge Curve and What is it Telling Us?

A Beveridge curve is a graphical relationship between job openings and the unemployment rate that macroeconomists and labor economists have been looking at since the 1950s. But in the last decade or so, it has taken on some new importance. It plays a central role in search models of unemployment, part of the work for which Peter Diamond, Dale Mortensen and Christopher Pissarides won the Nobel Prize back in 2010. It also has some lessons for how we think about the high and sustained levels of unemployment in the U.S. economy in the last few years. In the most recent issue of my own Journal of Economic Perspectives, Mary C. Daly, Bart Hobijn, Aysegül Sahin, and Robert G. Valletta provide an overview of the analysis and implications in “A Search and Matching Approach to Labor Markets: Did the Natural Rate of Unemployment Rise?” (Like all JEP articles back to 1994, it is freely available on the web courtesy of the American Economic Association.)

Let’s start with an actual Beveridge curve. The monthly press release from the Job Openings and Labor Turnover Statistics (JOLTS) data from the U.S. Bureau of Labor Statistics offers a Beveridge curve plotted with real data. Here’s the curve from this month’s press release:

 
BLS explains: “This graph plots the job openings rate against the unemployment rate. This graphical representation is known as the Beveridge Curve, named after the British economist William Henry Beveridge (1879-1963). The economy’s position on the downward sloping Beveridge Curve reflects the state of the business cycle. During an expansion, the unemployment rate is low and the job openings rate is high. Conversely, during a contraction, the unemployment rate is high and the job openings rate is low. The position of the curve is determined by the efficiency of the labor market. For example, a greater mismatch between available jobs and the unemployed in terms of skills or location would cause the curve to shift outward, up and toward the right.”

Thus, on the graph the U.S. economy slides down the Beveridge curve during the recession from March 2001 to November 2001, shown by the dark blue line on the graph. During the Great Recession from December 2007 to June 2009, the economy slides down the Beveridge curve again. On a given Beveridge curve, recessions move toward the bottom right, and periods of growth move toward the upper left.
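The downward slope falls out of the search-and-matching framework mentioned above. As a textbook sketch (with invented parameter values, not the calibration in the JEP article): take a Cobb-Douglas matching function, require that in steady state the flow into unemployment, s(1-u), equals the flow of new matches, and solve for the vacancy rate v at each unemployment rate u. That traces out a downward-sloping curve.

```python
# Steady-state Beveridge curve from a textbook matching model (illustrative
# parameters, not the JEP paper's calibration): matches = m * u**a * v**(1-a),
# and in steady state separations s*(1-u) must equal matches.

def beveridge_vacancies(u, s=0.03, m=0.6, a=0.5):
    """Vacancy rate consistent with steady state at unemployment rate u."""
    return (s * (1 - u) / (m * u ** a)) ** (1 / (1 - a))

# Trace the curve for unemployment rates from 4% to 10%.
curve = [(u / 100, round(beveridge_vacancies(u / 100), 4)) for u in range(4, 11)]
# Vacancies fall as unemployment rises: the curve slopes downward.
```

Shifts in matching efficiency m move the whole curve outward or inward, which is exactly the "position of the curve" point in the BLS description above.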

However, the right-hand side of the Beveridge curve seems to convey a dispiriting message. Instead of the economy working its way directly back up the Beveridge curve since the end of the recession, it seems instead to be looping out to the right: that is, even though the number of job openings has been rising, the unemployment rate has not been falling as fast as might be expected. Why not?

One possible reason is that this kind of looping counterclockwise pattern in the Beveridge curve is not unusual in the aftermath of recessions. Daly, Hobijn, Sahin, and Valletta provide a graph of Beveridge curve data back to 1951. Notice that the Beveridge curve can shift from decade to decade. Also, if you look at the labels for the 1980s and 1990s, it’s clear that the Beveridge curve can loop outward and counterclockwise for a time. A partial explanation is that at the tail end of a recession, employers are still reluctant to hire: even as they start to post more job openings, actual hiring doesn’t go full speed ahead until employers are confident that the recovery will be sustained.

My own thoughts about the pattern of U.S. unemployment developed along these lines: The unemployment rate hovered at 4.4-4.5% from September 2006 through May 2007. In October 2009 it peaked at 10%. By December 2011 it had fallen to 8.5%, but since then, it has remained stuck above 8% (for example, 8.3% in July). How can these patterns be explained?

As a starting point, we should recognize that the 4.4% rate of unemployment back in late 2006 and early 2007 was part of a bubble economy at that time: an unsustainably low unemployment rate being juiced by the bubble in housing prices and the associated financial industry bubble. Estimating the sustainable rate of unemployment for an economy, the so-called “natural rate of unemployment,” is as much art as science. But in retrospect, a reasonable guess might be that the dynamics of the bubble were pushing the unemployment rate down from a natural rate of maybe 5.5%.

When the U.S. unemployment rate hit 10% in October 2009, it was countered with an enormous blast of expansionary fiscal and monetary policy. That is, the economy was stimulated both through huge budget deficits and through near-zero interest rates from the Federal Reserve. The ability of these policies to bring down unemployment quickly was overpromised, but they made a real contribution to stopping the unemployment rate from climbing above 10% and to getting it down to 8.3%.

But the unemployment rate still needs to come down by 2.5-3 percentage points, and that is where the Beveridge curve arguments enter in. In seeking reasons for the outward shift of the Beveridge curve in the last few years, Daly, Hobijn, Sahin, and Valletta point to three factors: a mismatch between the skills of unemployed workers and the available jobs; extended unemployment insurance benefits that have dulled the incentive to take available jobs; and heightened uncertainty over the future course of the economy and economic policy. These factors together can explain why the unemployment rate has stayed high over the last several years. Fortunately, they are also factors that should fade over time. That’s why the January 2012 projections from the Congressional Budget Office for the future of the unemployment rate look like this:

To put it another way, I strongly suspect that whoever is elected president in November 2012 will look like an economic policy genius by early 2014. It won’t be so much because of any policies enacted during that time, but rather a matter of the slow economic adjustment along the Beveridge curve.

As an afterthought, I’ll add that the Beveridge curve is apparently one more manifestation of an old pattern in academic work: curves and laws and rules are often named after people who did not actually discover them. This is sometimes called Stigler’s law: “No scientific discovery is named after its original discoverer.” Of course, Stephen Stigler was quick to point out in his 1980 article that he didn’t discover his own law, either!

But William Beveridge is a worthy namesake, in the sense that he did write a lot about job openings and unemployment. For example, here’s a representative comment from his 1944 report, Full Employment in a Free Society:

“Full employment does not mean literally no unemployment; that is to say, it does not mean that every man and woman in the country who is fit and free for work is employed productively every day of his or her working life … Full employment means that unemployment is reduced to short intervals of standing by, with the certainty that very soon one will be wanted in one’s old job again or will be wanted in a new job that is within one’s powers.”

The U.S. economy since the Great Recession is clearly failing Beveridge’s test for full employment, and failing it badly.

The Rise of Residential Segregation by Income

I’ve posted in the past about “The Big Decline in Housing Segregation” by race, but it seems likely that another kind of residential segregation is on the rise. In a report for the Pew Research Center’s Social & Demographic Trends project, Paul Taylor (no relation) and Richard Fry discuss “The Rise of Residential Segregation by Income.”

To measure the extent to which households are segregated by income, the authors take three steps. First, they look at the 30 U.S. cities with the largest number of households. Second, they categorize households by income as lower, middle, or upper income. “For the purpose of this analysis, low-income households are defined as having less than two-thirds of the national median annual income and upper-income households as having more than double the national median annual income. Using these thresholds, it took an annual household income of less than $34,000 in 2010 to be labeled low income and $104,000 or above to be labeled upper income. The Center conducted multiple analyses using different thresholds to define lower- and upper-income households. The basic finding reported here of increased residential segregation by income was consistent regardless of which thresholds were used.” Third, they look at where households are living by Census tract: “The nation’s 73,000 census tracts are the best statistical proxy available from the Census Bureau to define neighborhoods. The typical census tract has about 4,200 residents. In a sparsely populated rural area, a tract might cover many square miles; in a densely populated urban area, it might cover just a city block or two. But these are outliers. As a general rule, a census tract conforms to what people typically think of as a neighborhood.”

Next, the authors calculate what they call a Residential Income Segregation Index, which comes from “adding together the share of lower-income households living in a majority lower-income tract and the share of upper-income households living in a majority upper-income tract … (The maximum possible RISI score is 200. In such a metropolitan area, 100% of lower-income and 100% of upper-income households would be situated in a census tract where a majority of households were in their same income bracket.)” Here are the RISI scores for the 30 cities in 1980 and 2010:

Overall, the national index rose from 32 in 1980 to 46 in 2010. The report does not seek to analyze the differences across cities, which are presumably influenced by a range of local factors. At the regional level, “one finds that the metro areas in the Southwest have the highest average RISI score (57), followed by those in the Northeast (48), Midwest (44), West (38) and Southeast (35). The analysis also shows that the level of residential segregation by income in the big Southwestern metro areas have, on average, increased much more rapidly from 1980 to 2010 than have those in other parts of the country. But all regions have had some increase.”
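As I read Pew's description, the index itself can be re-implemented in a few lines. The tract data below is invented purely for illustration; only the $34,000 and $104,000 cutoffs come from the report.

```python
# Toy re-implementation of Pew's Residential Income Segregation Index (RISI),
# following the description quoted above. The sample tracts are invented;
# the dollar cutoffs are the 2010 thresholds reported by Pew.

LOW_CUTOFF, HIGH_CUTOFF = 34_000, 104_000

def bracket(income):
    if income < LOW_CUTOFF:
        return "low"
    if income >= HIGH_CUTOFF:
        return "high"
    return "middle"

def risi(tracts):
    """tracts: list of lists of household incomes. Score runs 0-200:
    percent of low-income households in majority-low tracts plus
    percent of high-income households in majority-high tracts."""
    low_total = high_total = low_seg = high_seg = 0
    for incomes in tracts:
        labels = [bracket(y) for y in incomes]
        n_low, n_high = labels.count("low"), labels.count("high")
        low_total += n_low
        high_total += n_high
        if n_low * 2 > len(labels):   # majority lower-income tract
            low_seg += n_low
        if n_high * 2 > len(labels):  # majority upper-income tract
            high_seg += n_high
    return 100 * low_seg / low_total + 100 * high_seg / high_total

# Perfect segregation scores 200; a fully mixed tract scores 0.
fully_segregated = [[20_000, 25_000, 30_000], [120_000, 150_000, 110_000]]
```

One design point worth noticing: the index counts households, not tracts, so a few large homogeneous tracts can move the score substantially.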

At a broad level, the main reason for the rise in segregation by income is the rising inequality of incomes in the U.S. The authors write: 

“[T]here has been shrinkage over time in the share of households in the U.S. that have an annual income that falls within 67% to 200% of the national median, which are the boundaries used in this report to define middle-income households. In 1980, 54% of the nation’s households fell within this statistically defined middle; by 2010, just 48% did. The decline in the share of middle-income households is largely accounted for by an increase in the share of upper-income households. … With fewer households now in the middle income group, it’s not surprising that there are now also more census tracts in which at least half of the households are either upper income or lower income. In 2010, 24% of all census tracts fell into one category or the other—with 18% in the majority lower-income category and 6% in the majority upper-income category. Back in 1980, 15% of all census tracts fell into one category or the other—with 12% majority lower and 3% majority upper. To be sure, even with these increases over time in the shares of tracts that have a high concentration of households at one end of the income scale or the other, the vast majority of tracts in the country—76%—do not fit this profile. Most of America’s neighborhoods are still mostly middle income or mixed income—just not as many as before.”

I have no strong prior belief about how much residential segregation by income is desirable, and I have no reason to believe that the extent of residential segregation by income in 1980 was some golden historical ideal to which we should aspire. But in a U.S. economy with rising inequality of incomes, and in which our economic and political future will depend on shared interactions, a rising degree of residential segregation by income does give me a queasy feeling.

Voter Turnout Since 1964

To hear the candidates and the media tell it, every presidential election year has greater historical importance and is more frenzied and intense than any previous election. But the share of U.S. adults who actually vote has mostly been trending down over time. Here are some basic facts from a chartbook put together by the Stanford Institute for Economic Policy Research.

The youngest group of voters, those age 18-24, has seen a rise in turnout recently, especially from 2000 to 2004, and there is a more modest rise in turnout for some other age groups. But all elections since 1988 have had lower turnout than that year; in turn, 1988 had lower turnout than the presidential elections of 1964-1972.

I see the chart as a reminder of a basic truth: Elections aren’t decided by what people say to pollsters. They are determined by who actually casts a vote.

What is the Tragedy of the Commons?

A couple of weeks ago, I posted on how “The Economics of Antibiotics Resistance” could be viewed as an example of the “tragedy of the commons.” I got a few notes suggesting that I explain the term more fully. Here’s the explanation from my own Principles of Economics textbook. (Of course, if you are teaching a college-level intro economics class, I would encourage you to take a look at it at the website of the publisher, Textbook Media. Along with many expository virtues, my book is priced far below the $200 price of many leading textbooks, at $40 for a combination of a soft-cover paper copy and on-line access. On-line access alone, or micro and macro splits, are priced even lower.) From Chapter 15:

“The historical meaning of a commons is a piece of pasture land that is open to anyone who wishes to graze their cattle upon it. More recently, the term has come to apply to any area that is open to all, like a city park. In a famous 1968 article, a professor of ecology named Garrett Hardin (1915-2003) described a scenario called the tragedy of the commons, in which the utility-maximizing behavior of individuals ruins the commons for all.”

“Hardin imagined a pasture that is open to many herdsmen, each with their own herd of cattle. A herdsman benefits from adding cows, but too many cows will lead to overgrazing and even to ruining the commons. The problem is that when a herdsman adds a cow, the herdsman personally receives all of the gain, but when that cow contributes to overgrazing and injures the commons, the loss is suffered by all of the herdsmen as a group—so any individual herdsman suffers only a small fraction of the loss. Hardin wrote: ‘Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.’”

“This tragedy of the commons can arise in any situation where benefits are primarily received by one party, while the costs are spread out over many parties. For example, clean air can be regarded as a commons, where firms that pollute air can gain higher profits, but firms that pay for anti-pollution equipment provide a benefit to others. A commons can be regarded as a public good, where it is difficult to exclude anyone from use (nonexcludability) and where many parties can use the resource simultaneously (nonrivalry).”

“The historical commons was often protected, at least for a time, by social rules that limited how many cattle a herdsman could graze. Avoiding a tragedy of the commons with the environment will require its own set of rules which limit how the common resource can be used.”
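Hardin's logic is easy to put in numbers. Here is a minimal sketch with invented payoffs: each additional cow earns its owner a fixed gain, while the grazing damage falls on every herdsman alike, so the private calculation and the group calculation point in opposite directions.

```python
# Invented payoff numbers illustrating the incentive at the heart of the
# tragedy of the commons: the owner keeps the whole gain from one more cow,
# but the grazing cost is spread across all herdsmen.

N_HERDSMEN = 10
PRIVATE_GAIN = 10  # value of one more cow to its owner
SHARED_COST = 2    # grazing damage each herdsman bears per extra cow

owner_net = PRIVATE_GAIN - SHARED_COST               # +8: adding a cow pays
group_net = PRIVATE_GAIN - SHARED_COST * N_HERDSMEN  # -10: the group loses

# Each herdsman, reasoning privately, keeps adding cattle even though
# every added cow makes the group as a whole worse off.
```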

Hardin’s original 1968 article is widely available on the web–for example, here.

A few years ago in 2008, Ian Angus wrote a provocative essay in Monthly Review called “The Myth of the Tragedy of the Commons.” The tragedy of the commons is often considered a politically liberal insight, because it offers a potential justification for government regulation of shared resources. But Angus attacks the thesis from further to the political left, arguing that Hardin’s thesis is evidence-free, that it ignores the reality of community self-regulation, and that it amounts to blaming the poor for their poverty. Here are a few words from Angus’s trenchant essay:

“Since its publication in Science in December 1968, ‘The Tragedy of the Commons’ [by Garrett Hardin] has been anthologized in at least 111 books, making it one of the most-reprinted articles ever to appear in any scientific journal. . . . For 40 years it has been, in the words of a World Bank Discussion Paper, ‘the dominant paradigm within which social scientists assess natural resource issues’. . . It’s shocking to realize that he provided no evidence at all to support his sweeping conclusions. He claimed that the ‘tragedy’ was inevitable—but he didn’t show that it had happened even once. Hardin simply ignored what actually happens in a real commons: self-regulation by the communities involved. . . . The success of Hardin’s argument reflects its usefulness as a pseudo-scientific explanation of global poverty and inequality, an explanation that doesn’t question the dominant social and political order. It confirms the prejudices of those in power: logical and factual errors are nothing compared to the very attractive (to the rich) claim that the poor are responsible for their own poverty. The fact that Hardin’s argument also blames the poor for ecological destruction is a bonus.”

I enjoyed Angus’s counterattack, but in the end, it seemed to me overwrought. The logic behind the tragedy of the commons is solid enough that it is often a useful starting point for thinking about shared-resource issues. I’m fairly confident that Hardin didn’t see himself as blaming the poor for their own poverty and for ecological destruction! However, it’s important to emphasize that the fact that a tragedy of the commons is possible doesn’t make it inevitable. And further, as the penultimate sentence from my short textbook description mentions, social rules and community self-regulation have often been able to manage the commons for a considerable period of time.

The Stagnant U.S. R&D Effort

Everyone knows that the future of the world economy will be heavily influenced by the development of new technology. Everyone is right! But the U.S. research and development effort has largely been stagnant in recent decades, with a gradual reduction in the importance of government support for R&D. Meanwhile, countries like China and South Korea are greatly expanding their R&D efforts. The facts are laid out in Chapter 4 of a biennial report from the National Science Foundation called Science and Engineering Indicators 2012. Here are some points that caught my eye.

Total R&D spending as a share of GDP has been more-or-less flat in the U.S. over the last few decades at about 2.7% of GDP. However, the share of R&D spending from the federal government has been dropping steadily, while the share from business has been rising steadily. The report explains: “The federal government was once the predominant sponsor of the nation’s R&D, funding some 67% of all U.S. R&D in 1964 … But the federal share decreased in subsequent years to less than half in 1979 and to a low of 25% in 2000. Changing business conditions and expanded federal funding of health, defense, and counterterrorism R&D pushed it back up above 30% in 2009.”

Business now dominates U.S. R&D both in terms of providing the funding and in terms of being the location where the research is actually done. Not surprisingly, the biggest category of R&D is not the basic research that seeks fundamental scientific breakthroughs, nor the applied research that looks for applications of those breakthroughs, but the experimental development of new products, which accounts for about 60% of total U.S. R&D spending. Here are the activities on which U.S. companies spend R&D dollars: the top single category by far is pharmaceutical and medicinal products, followed by software and semiconductors and other electronic components.

U.S. government spending on R&D has tilted more to defense-related than to non-defense products in recent decades. Among non-defense R&D, health-related research is far and away the leader.

The enormous U.S. economy remains the single largest spender on R&D in the world in absolute dollars, substantially outstripping the European Union and Japan. But notice that R&D spending in China is rising fast, and recently outstripped R&D spending in Japan.

In terms of the share of GDP spent on R&D, the U.S. is mid-range. Also, notice that R&D spending as a share of GDP is rising in Japan, and rising rapidly in China and in South Korea.

I have long found it frustrating that the U.S. government seems unable to prioritize greater R&D spending. Total U.S. R&D spending in 2009 was about $400 billion, with $124 billion coming from the federal government. Total federal spending in 2009 was about $3.1 trillion, so about 4% of the federal budget went to R&D. I believe that the budget deficit is a big problem. But I find it depressing to contemplate that the federal government can’t commit, say, 5% of its spending to R&D instead of 4%. Most businesses will tend to focus most of their R&D on projects that could lead to new products in the fairly near term. Government should be supporting the research that has a possibility, over time, of creating whole new industries.

The Economics of Spam

Did you know that there are about 100 billion spam e-mails sent every day? Did you know that the overwhelming majority of this spam is screened out by your e-mail provider and never even ends up in your “junk mail” folder? That when one big spammer was taken down in 2009, global e-mail traffic fell by one-third? That American firms and consumers experience costs of about $20 billion per year because of e-mail spam? All this and more is discussed by Justin M. Rao and David H. Reiley in “The Economics of Spam,” which appears in the Summer 2012 issue of my own Journal of Economic Perspectives. Like all JEP articles from the current issue back to 1994, it is freely available on-line courtesy of the American Economic Association.

I found especially interesting what the authors describe as a cat-and-mouse game between spammers and anti-spam software. For example, when many people label a message as “spam,” it helps the anti-spam software to look for those words or URLs repeated in other messages, so that those messages can be filtered out. But then spammers responded with creative misspellings (like “VIagrA”) to trick the anti-spam filter, and used many different URLs that would all take the unwary to the same sales page.
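The keyword side of this arms race is easy to illustrate. The snippet below is a deliberately crude stand-in for real filters, which are far more sophisticated: a normalization pass undoes common character swaps before matching against a blocklist, which is exactly why spammers kept inventing new obfuscations.

```python
import re

# Crude illustration of why misspellings complicate keyword filtering
# (real providers' filters are far more sophisticated than a blocklist).

SWAPS = str.maketrans({"1": "i", "0": "o", "3": "e", "@": "a", "$": "s"})
BLOCKLIST = {"viagra", "pharmacy"}

def normalize(token):
    """Lowercase, undo common character swaps, drop non-letters."""
    return re.sub(r"[^a-z]", "", token.lower().translate(SWAPS))

def looks_spammy(message):
    return any(normalize(tok) in BLOCKLIST for tok in message.split())

# "VIagrA" and "V1agra" both normalize to "viagra" and get caught;
# an exact match on "viagra" alone would have missed them.
```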

In addition, the spammers use software to mark messages as “not spam,” thus trying to offset those who label them as spam. Rao and Reiley write: “In four months of 2009 Yahoo! Mail data, our Yahoo! colleagues found that (suspiciously) 63 percent of all ‘not spam’ votes were cast by users who never cast a single ‘spam’ vote.”

Anti-spam software can try to identify the computer that is sending spam, and shut it down. But spammers have responded with “botnets”: networks of computers infected by malware that send out spam e-mails. In addition, “a zombie could be programmed to sign up for hundreds of thousands of free email accounts at Gmail, and then send spam email through these accounts. … In 2011, Yahoo! Mail experienced an average of 2.5 million sign-ups for new accounts each day. The anti-spam team deactivated 25 percent of these immediately, because of clearly suspicious patterns in account creation (such as sequentially signing up account names JohnExample1, JohnExample2, . . .) and deactivated another 25 percent of these accounts within a week of activation due to suspicious outbound email activity.”


The volume of e-mail sent by botnets can be enormous: "The largest botnet on record, known as Rustock, infected over a million computers and had the capacity to send 30 billion spam emails per day before it was taken down in March 2011. Microsoft, Pfizer, FireEye network security, and security experts at the University of Washington collaborated to reverse engineer the Rustock software to determine the location of the command servers. They then obtained orders from federal courts in the United States and the Netherlands allowing them to seize Rustock's command-and-control computers in a number of different geographic locations. … The takedown of this single botnet coincided with a one-third reduction in global email spam—and hence a one-quarter reduction in global email traffic."
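Those two percentages, taken together, pin down how much of all e-mail was spam at the time: if removing one-third of spam removed one-quarter of total traffic, spam must have made up about three-quarters of e-mail. A quick check of that arithmetic:

```python
# If the Rustock takedown cut global spam by 1/3 and total email traffic
# by 1/4, and spam made up a share s of all email, then s * (1/3) = 1/4.
spam_cut = 1 / 3       # fraction of global spam removed by the takedown
traffic_cut = 1 / 4    # fraction of total email traffic removed
spam_share = traffic_cut / spam_cut
print(f"Implied spam share of all email: {spam_share:.0%}")  # 75%
```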


Various websites began to use what is called a "CAPTCHA," an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart," to prevent spam and other automated software. As a first response, "Spammers turned to visual-recognition software to break CAPTCHAs, and in response email providers have created progressively more difficult CAPTCHAs, to the point where many legitimate human users struggle to solve them." But then spammers figured out how to get humans to solve the CAPTCHAs for them. "[A] spammer would set up a pornography site, offering to display a free photo to any user who could successfully type in the text characters in a CAPTCHA image. In the background, their software had applied for a mail account at a site like Hotmail, received a CAPTCHA image, and relayed it to the porn site; they would obtain text from a user interested in free porn and relay this back to the Hotmail site …" And now one can hire faraway workers to break CAPTCHAs, perfectly legally: "The market wage advertised for CAPTCHA-breaking laborers declined from nearly $10 per thousand CAPTCHAs in 2007 to $1 per thousand in 2009. These labor markets started with Eastern European labor and then moved to locations with lower wages: India, China, and Southeast Asia."


Several teams of researchers have managed to take over botnets and other spam-related software, which allowed them to see how many messages were going out, how many were blocked by anti-spam software, and how many responses were being received. In one such study: "In total, the group modified 345 million pharmaceutical emails sent from botnet zombies. Three-quarters of these were blocked through blacklisting, and the remaining 82 million emails led to a scant 28 conversions, or about 1 in 3,000,000." Thus, e-mail spam can be a profitable, if illegal, business even when only about one out of every three million delivered spam e-mails leads to an attempted purchase!
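A back-of-the-envelope check of the quoted campaign numbers (all figures are taken from the quoted passage above):

```python
# Numbers reported for the botnet pharmaceutical-spam study quoted above.
sent = 345_000_000        # emails modified and sent from botnet zombies
delivered = 82_000_000    # emails that survived blacklist filtering
conversions = 28          # resulting purchase attempts

blocked_share = 1 - delivered / sent         # roughly three-quarters
emails_per_sale = delivered / conversions    # roughly 3 million

print(f"Blocked by blacklisting: {blocked_share:.0%}")
print(f"Delivered emails per sale: about {emails_per_sale / 1e6:.1f} million")
```

The key to profitability at such a tiny conversion rate is that the marginal cost of sending from a hijacked botnet is close to zero.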

Given the ability of spammers to react to anti-spam efforts, what might be done about e-mail spam? Rao and Reiley are lukewarm about proposals that would impose a small charge on senders of messages, which would go to recipients of e-mails as compensation; a recipient who desired could also designate e-mail senders who would not have to pay. Rao and Reiley note that there is currently no mechanism for linking the sending of e-mails to payments. And if sending an e-mail automatically generated a payment, then spammers would have an incentive to hijack accounts and send thousands of messages to addresses they control, so that they could collect the payments. Instead, they suggest approaches like going after the relatively few financial institutions around the world that process transactions for sales that result from spam e-mail, or even setting up a task force that would seek to "spam the spammers," thus raising the costs of spam operators and perhaps making their operations unprofitable.

European Currency Union Breakups: Lessons for the Euro

The euro isn't the first currency area to risk breaking up. Anders Åslund makes the case in "Why a Breakup of the Euro Area Must Be Avoided: Lessons from Previous Breakups," written as Policy Brief PB12-20 for the Peterson Institute for International Economics.

Åslund focuses in particular on six breakups of monetary unions in Europe in the last century. He argues that three of them are not especially relevant to the euro situation. (Citations are omitted throughout for readability.) "It was rather easy to dissolve a currency zone under the gold standard when countries maintained separate central banks and payments systems. Two prominent examples are the Latin Monetary Union and the Scandinavian Monetary Union. The Latin Monetary Union was formed first with France, Belgium, Italy, and Switzerland and later included Spain, Greece, Romania, Bulgaria, Serbia, and Venezuela. It lasted from 1865 to 1927. It failed because of misaligned exchange rates, the abandonment of the gold standard, and the debasement by some central banks of the currency. The similar Scandinavian Monetary Union among Sweden, Denmark, and Norway existed from 1873 until 1914. It was easily dissolved when Sweden abandoned the gold standard. These two currency zones were hardly real, because they did not involve a common central bank or a centralized payments system. They amounted to little but pegs to the gold standard. Therefore, they are not very relevant to the EMU."

The division of Czechoslovakia at the start of 1993 into two countries with their own currencies also went smoothly, in part because financial connections were limited under the previous communist regime. "In particular, no financial instruments were available with which investors could speculate against the Slovak koruna."

However, three other European currency breakups are more relevant to the euro, and more concerning, because they all led to steep recessions and hyperinflations.

"The three other European examples of breakups in the last century are of the Habsburg Empire, the Soviet Union, and Yugoslavia. They are ominous indeed. All three ended in major disasters, each with hyperinflation in several countries. In the Habsburg Empire, Austria and Hungary faced hyperinflation. Yugoslavia experienced hyperinflation twice. In the former Soviet Union, 10 out of 15 republics had hyperinflation. The combined output falls were horrendous, though poorly documented because of the chaos. Officially, the average output fall in the former Soviet Union was 52 percent, and in the Baltics it amounted to 42 percent. According to the World Bank, in 2010, 5 out of 12 post-Soviet countries—Ukraine, Moldova, Georgia, Kyrgyzstan, and Tajikistan—had still not reached their 1990 GDP per capita levels in purchasing power parities. Similarly, out of seven Yugoslav successor states, at least Serbia and Montenegro, and probably Kosovo and Bosnia-Herzegovina, had not exceeded their 1990 GDP per capita levels in purchasing power parities two decades later. Arguably, Austria and Hungary did not recover from their hyperinflations in the early 1920s until the mid-1950s. Thus the historical record is that half the countries in a currency zone that breaks up experience hyperinflation and do not reach their prior GDP per capita as measured in purchasing power parities until about a quarter of a century later …"

As Åslund notes, one might dismiss these historical parallels by saying that these examples are too far in the past (the Habsburg Empire), or are too interrelated with the breakup of Communist economic regimes (the Soviet Union and Yugoslavia). Thus, he is at some pains to spell out just how and why these consequences could easily happen in modern Europe. For example, he cites one estimate putting "the overall decline in output in the EMU countries of a complete breakup of the EMU at 5 to 9 percent during the first year and at 9 to 14 percent over three years." But his purpose is not to defend the specific numbers, but rather to spell out what happens in a scenario where countries revert to national currencies as quickly as possible, and need to introduce temporary capital controls during the transition. He quotes five concerns from an earlier study, and then adds a sixth concern of his own. Here are the first five concerns:

"First, “the logistical and legal problems of reintroducing national currencies, while transitional, would be severe and protracted.” Second, “capital flight and distress in the financial system would disrupt trade and investment.” Third, a “plunge in business and consumer confidence would likely be accompanied by a renewed dive in asset prices inside and outside the Eurozone.” Fourth, the “challenge of maintaining fiscal credibility and securing government funding would be intensified. This would call for yet more fiscal tightening measures, particularly for the weaker peripheral Eurozone countries.” Fifth, non–euro area countries would suffer from sharp appreciation of their currencies, “compounding the damage to their export growth.”"

Åslund emphasizes yet another concern: that the payments system across the euro countries could be sharply disrupted. These issues lead him to conclude: "The Economic and Monetary Union must be maintained at almost any cost. … The exit of any single country from the EMU, at the present time when large imbalances have been accumulated, would likely lead to a bank run, which would cause the EMU payments system to break down and with it the EMU itself."

Åslund is a pragmatist, and thus is willing to contemplate the ways in which a "velvet divorce" of the euro could happen in practical terms. He writes: "If the need for dissolution of the euro area appears inevitable, all countries should agree on an early exit date. Fortunately, all the euro countries still have fully equipped central banks, which should greatly facilitate the process of recovering their old functions—distribution of bank notes, monetary policy, maintenance of international currency reserves, exchange rate policy, foreign currency exchange, and payment routines."

But he points out that in terms of practical politics, a smooth dissolution of the euro is highly unlikely: "In the end, no velvet divorce is likely. No serious politician is likely to promote a dissolution of the euro area unless forced to do so, because no one wants to risk going down in history as the person who destroyed the EMU or the European Union. This is most of all true of German politicians. Therefore, if the euro area breaks up, a messy collapse is most likely to ensue."

U.S. Education: "Unthinking, Unilateral Educational Disarmament"

Everyone knows that the future of the U.S. economy and standard of living is tied up with how well Americans are educated. Everyone is right! But the U.S. education system has been stuck in neutral for decades, while other countries have been moving ahead. Martin West summarizes some of the evidence in "Global Lessons for Improving U.S. Education," which appears in the Spring 2012 edition of Issues in Science and Technology. The questions that follow are mine; the answers are excerpted from the article. I add a few remarks at the end.

What is the OECD test called the PISA, or "Program for International Student Assessment"?

"The PISA is administered every three years to nationally representative samples of students in each OECD country and in a growing number of partner countries and subnational units such as Shanghai. The 74 education systems that participated in the latest PISA study, conducted during 2009, represented more than 85% of the global economy and included virtually all of the United States’ major trading partners, making it a particularly useful source of information on U.S. students’ relative standing."

What does the PISA show about U.S. educational performance?

"Among the 34 developed democracies that are members of the Organization for Economic Cooperation and Development (OECD), 15-year-olds in the United States ranked 14th in reading, 17th in science, and no better than 25th in mathematics. … U.S. students performed well below the OECD average in math and essentially matched the average in science. In math, the United States trailed 17 OECD countries by a statistically significant margin, its performance was indistinguishable from that of 11 countries, and it significantly outperformed only five countries. In science, the United States significantly trailed 12 countries and outperformed nine. Countries scoring at similar levels to the United States in both subjects include Austria, the Czech Republic, Hungary, Ireland, Poland, Portugal, and Sweden. … The gap in average math and science achievement between the United States and the top-performing national school systems is dramatic. In math, the average U.S. student by age 15 was at least a full year behind the average student in six countries, including Canada, Japan, and the Netherlands. Students in six additional countries, including Australia, Belgium, Estonia, and Germany, outperformed U.S. students by more than half a year."

Is the poor U.S. performance a recent development?

"U.S. students, however, have never fared well in various international comparisons of student achievement. The United States ranked 11th out of 12 countries participating in the first major international study of student achievement, conducted in 1964, and its math and science scores on the 2009 PISA actually reflected modest improvements from the previous test. …

"The United States’ traditional reputation as the world’s educational leader stems instead from the fact that it experienced a far earlier spread of mass secondary education than did most other nations. … The United States’ historical advantage in terms of educational attainment has long since eroded, however. U.S. high-school graduation rates peaked in 1970 at roughly 80% and have declined slightly since, a trend often masked in official statistics by the growing number of students receiving alternative credentials, such as a General Educational Development (GED) certificate. … The U.S. high-school graduation rate now trails the average for European Union countries and ranks no better than 18th among the 26 OECD countries for which comparable data are available."

Can the U.S. make up for poor performance at the K-12 level by better performance in higher education?

"Although the share of [U.S.] students enrolling in college has continued to climb, the share completing a college degree has hardly budged. … Meanwhile, other developed countries have continued to see steady increases in educational attainment and, in many cases, now have postsecondary completion rates that exceed those in the United States. … On average across the OECD, postsecondary completion rates have increased steadily from one age cohort to the next. Although only 20% of those aged 55 to 64 have a postsecondary degree, the share among those aged 25 to 34 is up to 35%. The postsecondary completion rate of U.S. residents aged 25 to 34 remains above the OECD average at 42%, but this reflects a decline of one percentage point relative to those aged 35 to 44 and is only marginally higher than the rate registered by older cohorts."

How large might the economic gains be from improving U.S. education performance? 

"Consider the results of a simulation in which it is assumed that the math achievement of U.S. students improves by 0.25 standard deviation gradually over 20 years. This increase would raise U.S. performance to roughly that of some mid-level OECD countries, such as New Zealand and the Netherlands, but not to that of the highest-performing OECD countries. Assuming that the past relationship between test scores and economic growth holds true in the future, the net present value of the resulting increment to GDP over an 80-year horizon would amount to almost $44 trillion. A parallel simulation of the consequences of bringing U.S. students up to the level of the top-performing countries suggests that doing so would yield benefits with a net present value approaching $112 trillion."
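The mechanics behind that kind of estimate can be sketched in a few lines. This is not the authors' actual model, and every parameter value below is a hypothetical placeholder; the point is only to show how a gradually phased-in boost to annual growth compounds into a very large present value of extra GDP over an 80-year horizon.

```python
# Hedged sketch of a scores-to-growth simulation (all parameters hypothetical):
# a reform phases in over 20 years, raising the annual GDP growth rate, and
# the resulting GDP increments are discounted back to the present.
gdp = 15e12                 # starting GDP in dollars (hypothetical)
baseline_growth = 0.015     # annual growth without the reform (hypothetical)
full_boost = 0.004          # extra annual growth once fully phased in (hypothetical)
discount = 0.03             # annual discount rate (hypothetical)
phase_in_years = 20
horizon = 80

npv = 0.0
gdp_base, gdp_reform = gdp, gdp
for t in range(1, horizon + 1):
    boost = full_boost * min(t / phase_in_years, 1.0)  # gradual phase-in
    gdp_base *= 1 + baseline_growth
    gdp_reform *= 1 + baseline_growth + boost
    npv += (gdp_reform - gdp_base) / (1 + discount) ** t

print(f"NPV of the GDP increment over {horizon} years: ${npv / 1e12:.0f} trillion")
```

Even a small growth boost, because it compounds over decades, produces a present value that is a large multiple of one year's GDP, which is why the quoted figures run into the tens of trillions.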

What are some common factors across the other countries where the education systems seem to be outperforming the U.S. education system?

"[T]here are three broad areas in which the consistency of findings across studies using different international tests and country samples bears attention.

"Exit exams. Perhaps the best-documented factor is that students perform at higher levels in countries (and in regions within countries) with externally administered, curriculum-based exams at the completion of secondary schooling that carry significant consequences for students of all ability levels. Although many states in the United States now require students to pass an exam in order to receive a high-school diploma, these tests are typically designed to assess minimum competency in math and reading and are all but irrelevant to students elsewhere in the performance distribution. In contrast, exit exams in many European and Asian countries cover a broader swath of the curriculum, play a central role in determining students’ postsecondary options, and carry significant weight in the labor market. … The most rigorous available evidence indicates that math and science achievement is a full grade-level equivalent higher in countries with such an exam system in the relevant subject.

"Private-school competition. Countries vary widely in the extent to which they make use of the private sector to provide public education. … Rigorous studies confirm that students in countries that for historical reasons have a larger share of students in private schools perform at higher levels on international assessments while spending less on primary and secondary education. Such evidence suggests that competition can spur school productivity. In addition, the achievement gap between socioeconomically disadvantaged and advantaged students is reduced in countries in which private schools receive more government funds.

"High-ability teachers. Much attention has recently been devoted to the fact that several of the highest-performing countries internationally draw their teachers disproportionately from the top third of all students completing college degrees. This contrasts sharply with recruitment patterns in the United States."

Brief reflections

I remember back in 1983 when something called the National Commission on Excellence in Education issued a report called "A Nation At Risk: The Imperative For Educational Reform." It's available in several places on the web, like here and here. Here are the introductory paragraphs, much quoted at the time:

"Our Nation is at risk. Our once unchallenged preeminence in commerce, industry, science, and technological innovation is being overtaken by competitors throughout the world. This report is concerned with only one of the many causes and dimensions of the problem, but it is the one that undergirds American prosperity, security, and civility. We report to the American people that while we can take justifiable pride in what our schools and colleges have historically accomplished and contributed to the United States and the well-being of its people, the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a Nation and a people. What was unimaginable a generation ago has begun to occur – others are matching and surpassing our educational attainments."

"If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves. We have even squandered the gains in student achievement made in the wake of the Sputnik challenge. Moreover, we have dismantled essential support systems which helped make those gains possible. We have, in effect, been committing an act of unthinking, unilateral educational disarmament."

In 2008, the U.S. Department of Education followed up with a report called "A Nation Accountable: Twenty-five Years After A Nation at Risk." It said: "If we were “at risk” in 1983, we are at even greater risk now. The rising demands of our global economy, together with demographic shifts, require that we educate more students to higher levels than ever before. Yet, our education system is not keeping pace with these growing demands."

Adam Smith, patron saint of all economists, is said to have responded to overwrought predictions that Britain was about to be ruined by some setbacks during the Revolutionary War by remarking: "There is a great deal of ruin in a nation." I do not wish to sound overwrought. But a nation that does not steadily improve the education level of its population is not preparing itself sufficiently for a future of growing and shared prosperity.