Does Economics Make You a Bad Person?

Economic reasoning often begins with a presumption that people act in a self-interested manner. And so there’s been a string of small-scale research over time about whether studying economics makes people more likely to act in a selfish manner. Adam Grant offers a nice short overview of this research (with links to underlying papers!) for Psychology Today last October, in “Does Studying Economics Breed Greed?”

The especially tough question in this research is how to distinguish between several possibilities: 1) Maybe studying economics changes the extent to which people act selfishly; 2) Maybe studying economics makes people more willing to admit that they might act selfishly, or makes them act more selfishly in small-scale classroom game interactions, but doesn’t actually change their real-world behavior; or 3) Maybe people who are more likely to act in their own self-interest, or more likely to admit that they act in their own self-interest, are drawn to economics in the first place.

I won’t try to summarize the literature here, especially because Grant already offers such a nice overview, but here are a few samples. A 2012 study by Andrew L. Molinsky, Adam M. Grant, and Joshua D. Margolis in the journal Organizational Behavior and Human Decision Processes (v. 119, pp. 27-37) considers “The bedside manner of homo economicus: How and why priming an economic schema reduces compassion.” Here’s how Grant describes the study:

“In one experiment, Andy Molinsky, Joshua Margolis, and I recruited presidents, CEOs, partners, VPs, directors, and managers who supervised an average of 140 employees. We randomly assigned them to unscramble 30 sentences, with either neutral phrases like [green tree was a] or economic words like [continues economy growing our]. Then, the executives wrote letters conveying bad news to an employee who was transferred to an undesirable city and disciplining a highly competent employee for being late to meetings because she lacked a car. Independent coders rated their letters for compassion.
Executives who unscrambled sentences with economic words expressed significantly less compassion. There were two factors at play: empathy and unprofessionalism. After thinking about economics, executives felt less empathy—and even when they did empathize, they worried that expressing concern and offering help would be inappropriate.”

Of course, a skeptic might object that even if the “priming” with economics terms changes the amount of compassion expressed in a “bad news” letter, the executives would still deliver the bad news when they felt it was necessary.

In a 2011 study published in Academy of Management Learning & Education (10 (4): 643–660), Long Wang, Deepak Malhotra, and J. Keith Murnighan carry out various experiments on the topic of “Economics Education and Greed.” For example, several of their experiments are online surveys in which people in various ways revealed their attitudes about greed. Those who majored in economics, or who read a snippet of economics before answering, were less likely to express attitudes that strongly condemned greed. Of course, a skeptic might point out that such surveys show what people say, but not necessarily what they feel or how they act.

Some of the early studies about how the study of economics might affect its students were published in the Journal of Economic Perspectives (where I’ve worked as the Managing Editor since the start of the journal in 1987). For example, in the Spring 1993 issue, Robert H. Frank, Thomas Gilovich, and Dennis T. Regan ask “Does Studying Economics Inhibit Cooperation?” They present a variety of evidence that raises cause for concern. For example, they present data that economists in academia are more likely to give zero to charity than others. They report the results of surveys in which students are asked what would happen if they were working for a company that paid for nine computers, but accidentally received ten. Do they expect the error would be reported back to the seller, and would they personally report the error? Those who have taken an economics class are less likely to say that the error would be reported, or that they would report it, than students in an astronomy class.
Again, a skeptic might point out that even if economists are less likely to give to charity, this pattern may have been established well before they entered economics. And maybe the economics class is just causing students to be more realistic about whether errors would be reported and more honest in reporting what they would really do, compared with those in other classes. 
In a study in the Winter 1996 issue of JEP that Grant doesn’t mention in his brief Psychology Today overview, Anthony M. Yezer, Robert S. Goldfarb, and Paul J. Poppen discussed “Does Studying Economics Discourage Cooperation? Watch What We Do, Not What We Say or How We Play.” They carried out a “dropped letter” experiment, in which a letter was left behind in various classrooms around the George Washington University campus, some economics classrooms and some not. The letter was addressed and stamped, but unsealed and with no return address. Inside there was $10 in cash, and a brief note saying that the money was being sent to repay a loan. In their experiment, over half of the letters that were dropped in economics classrooms were mailed in with the cash, compared with less than a third of the letters dropped in noneconomics classrooms. They also argue that substantial parts of the economics curriculum are about mutual benefits from trade, both between individuals and across national borders, and in that sense economics may encourage students to see economic interactions as friendly to decentralized cooperation, rather than as just an arena for clashes of unfettered selfishness.

In a similar spirit, I sometimes argue that many students enter an economics class seeing the world and the economy as fundamentally a zero-sum game, where anyone who benefits must do so at the expense of someone else. Learning about how division of labor and voluntary exchange at least have the possibility of being a positive-sum win-win game makes them more likely to consider the possibility of benefiting both themselves and others. Maybe there are some unreconstructed business schools which do teach that “greed is good.” But all the economics curricula with which I’m familiar are painstaking about explaining the situations and conditions in which the interactions of self-interested parties are likely to lead to positive outcomes like free choice and efficiency, and also the situations and conditions in which they can lead to pollution, unemployment, inequality, and poverty.

My own sense is that everyone plays many roles and wears many hats. A surgeon who cuts into people all day would never dream of getting into a knife fight on the way home. An athlete who competes ferociously all week goes to church and volunteers to help handicapped children during off hours. A manager fights grimly over the annual marketing plan, and then goes home, hugs the children, and takes dinner over to the neighbors who just had a baby. Some economic behaviors certainly shouldn’t be generalized to the rest of life, and it’s possible to set up situations with questionnaires and little classroom experiments where the boundaries can become blurred. But there’s not much reason to believe that Darwinian biologists practice survival of the fittest in their spare time, nor that sociologists give up on personal responsibility because everything is society’s fault, nor that lawyers go home and argue over fine print with their families.

It’s worth remembering that Adam Smith, the intellectual godfather of economics, reflected on selfishness and economics at the start of his first great work, The Theory of Moral Sentiments, published in 1759. The opening words of the book are: “How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it.” And later in the same chapter: “And hence it is, that to feel much for others and little for ourselves, that to restrain our selfish, and to indulge our benevolent affections, constitutes the perfection of human nature; and can alone produce among mankind that harmony of sentiments and passions in which consists their whole grace and propriety.” Smith saw no contradiction in thinking about people as containing both selfishness and “benevolent affections,” and most people, even economists, seek a comfortable balance between the two depending on the situation and context.

Warren Buffett on Index Funds for the Non-Professional Investor

Each year, the legendary investor Warren Buffett writes a letter to the shareholders of Berkshire Hathaway. If you want a quick overview of the thinking behind how Buffett has achieved returns averaging 19.7% annually from 1965 to 2013, while the S&P 500 (with reinvested dividends) has clocked in at 9.8% annually over this time, this is a good starting point. Oddly enough, this year’s letter to shareholders includes a section on “Some Thoughts about Investing” in which Buffett recommends that ordinary stock market investors look into the merits of no-load mutual funds, like those run by Vanguard. Indeed, in his will, Buffett instructs that the trustee for the money he is leaving to his wife invest those funds in a no-load index fund. Buffett writes:
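To get a feel for what that gap in annual returns implies, here is a minimal compounding sketch in Python. The 19.7% and 9.8% rates and the 1965-2013 span come from the letter; treating the span as 49 full years of compounding is my own simplification:

```python
# Compound $1 at Berkshire's average annual return versus the S&P 500's
# (with dividends reinvested), over 1965 through 2013.
berkshire_rate = 0.197
sp500_rate = 0.098
years = 49  # 1965 through 2013, treated as 49 full years

berkshire = (1 + berkshire_rate) ** years
sp500 = (1 + sp500_rate) ** years

print(f"$1 at 19.7%/year for {years} years grows to about ${berkshire:,.0f}")
print(f"$1 at  9.8%/year for {years} years grows to about ${sp500:,.0f}")
print(f"Ratio: roughly {berkshire / sp500:,.0f} to 1")
```

The point of the exercise is simply that a gap of ten percentage points in annual returns, compounded over five decades, separates the two outcomes by orders of magnitude.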

“When Charlie and I buy stocks – which we think of as small portions of businesses – our analysis is very similar to that which we use in buying entire businesses. We first have to decide whether we can sensibly estimate an earnings range for five years out, or more. If the answer is yes, we will buy the stock (or business) if it sells at a reasonable price in relation to the bottom boundary of our estimate. If, however, we lack the ability to estimate future earnings – which is usually the case – we simply move on to other prospects. In the 54 years we have worked together, we have never foregone an attractive purchase because of the macro or political environment, or the views of other people. In fact, these subjects never come up when we make decisions. It’s vital, however, that we recognize the perimeter of our “circle of competence” and stay well inside of it. …

Most investors, of course, have not made the study of business prospects a priority in their lives. If wise, they will conclude that they do not know enough about specific businesses to predict their future earning power. 

I have good news for these non-professionals: The typical investor doesn’t need this skill. In aggregate, American business has done wonderfully over time and will continue to do so (though, most assuredly, in unpredictable fits and starts). In the 20th Century, the Dow Jones Industrials index advanced from 66 to 11,497, paying a rising stream of dividends to boot. The 21st Century will witness further gains, almost certain to be substantial. The goal of the non-professional should not be to pick winners – neither he nor his “helpers” can do that – but should rather be to own a cross-section of businesses that in aggregate are bound to do well. A low-cost S&P 500 index fund will achieve this goal.

That’s the “what” of investing for the non-professional. The “when” is also important. The main danger is that the timid or beginning investor will enter the market at a time of extreme exuberance and then become disillusioned when paper losses occur. (Remember the late Barton Biggs’ observation: “A bull market is like sex. It feels best just before it ends.”) The antidote to that kind of mistiming is for an investor to accumulate shares over a long period and never to sell when the news is bad and stocks are well off their highs. Following those rules, the “know-nothing” investor who both diversifies and keeps his costs minimal is virtually certain to get satisfactory results. Indeed, the unsophisticated investor who is realistic about his shortcomings is likely to obtain better long-term results than the knowledgeable professional who is blind to even a single weakness.

If “investors” frenetically bought and sold farmland to each other, neither the yields nor prices of their crops would be increased. The only consequence of such behavior would be decreases in the overall earnings realized by the farm-owning population because of the substantial costs it would incur as it sought advice and switched properties. Nevertheless, both individuals and institutions will constantly be urged to be active by those who profit from giving advice or effecting transactions. The resulting frictional costs can be huge and, for investors in aggregate, devoid of benefit. So ignore the chatter, keep your costs minimal, and invest in stocks as you would in a farm.

My money, I should add, is where my mouth is: What I advise here is essentially identical to certain instructions I’ve laid out in my will. One bequest provides that cash will be delivered to a trustee for my wife’s benefit. (I have to use cash for individual bequests, because all of my Berkshire shares will be fully distributed to certain philanthropic organizations over the ten years following the closing of my estate.) My advice to the trustee could not be more simple: Put 10% of the cash in short-term government bonds and 90% in a very low-cost S&P 500 index fund. (I suggest Vanguard’s.) I believe the trust’s long-term results from this policy will be superior to those attained by most investors – whether pension funds, institutions or individuals – who employ high-fee managers.”

Here’s a discussion on this blog about “Behavioral Investors and the Dumb Money Effect,” concerning the folly of individual investors trying to time market swings. And here’s a discussion drawing on the work of Burton Malkiel about how difficult it is for the average investor to beat the market, a mathematical fact which remains true even when the average investor is an active financial manager being paid fees by customers.
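One reason the average investor trails the market is fee drag, and the arithmetic is easy to sketch. The 7% gross return, the 30-year horizon, and the two fee levels below are illustrative assumptions of mine, not figures from the discussions above:

```python
def final_wealth(gross_return, annual_fee, years, start=10_000):
    """Compound a starting sum at the gross return net of an annual fee."""
    return start * (1 + gross_return - annual_fee) ** years

gross = 0.07   # assumed gross annual market return
years = 30

low_cost = final_wealth(gross, 0.001, years)   # index-fund-style expense ratio
high_cost = final_wealth(gross, 0.012, years)  # active-management-style fee

print(f"Low-cost fund:  ${low_cost:,.0f}")
print(f"High-cost fund: ${high_cost:,.0f}")
print(f"Fee drag: {1 - high_cost / low_cost:.0%} of final wealth")
```

Under these assumptions, a roughly one-percentage-point difference in annual fees consumes on the order of a quarter of final wealth over 30 years, before counting any underperformance from active trading itself.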

An Inequality Chartbook: Long-Run Patterns in 25 Countries

Tony Atkinson and Salvatore Morelli have combined to produce an intriguing “Chartbook of Economic Inequality.” The main feature of the chartbook is a set of figures showing long-run trends in inequality as measured by a variety of statistics for 25 different countries, with all the statistics appearing on a single chart for each country. The charts appear in two forms: there’s a colorful online version, and then a black-and-white version that can be printed out from a PDF file.

The 25 countries are partly determined by the availability of long-run (meaning a good chunk of the 20th century) data. Along with the United States, the other countries are Argentina, Brazil, Australia, Canada, Finland, France, Germany, Iceland, India, Indonesia, Italy, Japan, Malaysia, Mauritius, Netherlands, New Zealand, Norway, Portugal, Singapore, South Africa, Spain, Sweden, Switzerland, and the United Kingdom. The figure for each country is also followed by a few bullet points that highlight some main trends. Detailed sources for the data are also provided.

Any single economic statistic is a one-dimensional slice of reality. When you put a lot of economic statistics together, and let your eye move from one to another, you develop a better overall perspective. As one example, here’s the chart for the United States. But if you have an interest in these topics, I encourage you to surf the countries of the Chartbook.

Among the key takeaways from this figure: U.S. inequality follows a U-shaped pattern, with a number of measures of inequality falling in the 1930s and 1940s, and then rising since the 1970s. For example, “the top decile of earnings has risen from 150 per cent of median in 1950 to 244 per cent in 2012.” The figure also suggests some puzzles. For example, the share of total wealth held by the top 1%, based on estate data, doesn’t seem to have risen in the last few decades along with inequality of incomes. The dispersion of earnings as measured by the top decile starts rising in the 1950s, but the overall inequality of earnings doesn’t seem to start rising until the 1970s–presumably because during the 1950s and 1960s, there was declining inequality at the bottom of the income distribution, as seen in the falling poverty rate, to offset rising dispersion of incomes at the top.

Hat tip: I was alerted to the “Chartbook” by a post from Larry Willmore at his “Thought du Jour” blog.

Who Sings Homework Blues?

I have three children between grades 6 and 10, and I view homework as a plague upon the land. Family time is tight. The hours between dinner and bedtime are few. Weekends are often full of activities. It can feel as if homework overshadows and steals any available down-time and flexibility. That said, I have to admit that the evidence about actual amounts of homework tends to suggest that it’s not a problem for most families or students. Instead, my suspicion is that too much homework is a problem for one group and too little homework may be the problem for another group. There’s a nice summary of the evidence on “Homework in America” by Tom Loveless in Chapter 2 of the 2014 Brown Center Report on American Education: “How Well Are American Students Learning?”

As Loveless points out, controversies over “too much homework” have been going on for a long time.

“In 1900, Edward Bok, editor of the Ladies Home Journal, published an impassioned article, “A National Crime at the Feet of Parents,” accusing homework of destroying American youth. . . . Bok argued that study at home interfered with children’s natural inclination towards play and free movement, threatened children’s physical and mental health, and usurped the right of parents to decide activities in the home. The Journal was an influential magazine, especially with parents. An anti-homework campaign burst forth that grew into a national crusade. School districts across the land passed restrictions on homework, culminating in a 1901 statewide prohibition of homework in California for any student under the age of 15 . . .”

Among more recent examples in the last 10-15 years, Loveless points to cover stories on the “too much homework” theme in TIME, Newsweek, and People, the documentary movie Race to Nowhere, and a September 2013 story in the Atlantic, “My Daughter’s Homework is Killing Me.” But of course, it’s an old saying among social scientists that the plural of anecdote is not data. The systematic data suggest that the homework burden is not a widespread or growing problem.

For example, one of the standard national surveys is the National Assessment of Educational Progress (NAEP). Here are their results from surveying students about how much homework they had the night before. As you can see, there is a small share of students who report more than 2 hours of homework per night, but that share doesn’t seem to be growing over time. And among 17 year-olds, 27% had no homework assigned the night before the survey.

Of course, one might wonder about just how this survey was carried out, but Loveless runs through a number of other surveys of parents and students that reach similar results. As one more example, here’s a survey of college freshmen–thus only a sample of those who went on to college–about how much homework they had done as high school seniors. About two-thirds report having done less than 6 hours per week, which of course averages less than an hour per day.

So how do the homework-haters, like me, chew and swallow this evidence?

The evidence is very clear that most students don’t have too much homework. But it certainly leaves open the possibility that some high school and even middle-school students, perhaps 10-15%, average more than 2 hours of homework each night. My guess is that a lot of these students are also active in school activities: clubs, sports, plays. By the time you add their regular school day, they are in effect committed to one activity or another for about 12 hours out of every day, which doesn’t leave a lot of time for eating, commuting, sleeping, and anything else.

For example, an article by Mollie Galloway, Jerusha Conner, and Denise Pope late last year in the Journal of Experimental Education looks at “Nonacademic Effects of Homework in Privileged, High-Performing High Schools” (81:4, pp. 490-510, 2013). (I don’t think the article is freely available on-line, but readers may have access through a library subscription.) The survey looked at 4,317 students in 10 high-performing high schools in California. These students averaged more than three hours of homework per night. Here’s a sampling of comments from these students:

  • “There’s never a time to rest. There’s always something more you should be doing. If I go to bed before 1:30 I feel like I’m slacking off, or just screwing myself over for an even later night later in the week . . . There’s never a break. Never.”
  • “[I have] way too much homework! I cannot focus on sports and family if I have 4 hours of homework like I normally do.”
  • “[I have an] overload of activities in a day. School till 3, football till 6:30, Advanced Jazz Band till 8:30, Homework till 12 (or later) then all over again.”

It is hard to imagine that such intensive schedules could be sustainable for very long.

In some cases, it feels to me as if parents want the school to push their children to achieve, and schools are responding with a blizzard of homework, while students are caught in the middle. The obvious answer here is for parents to push back, hard, at the individual schools where this happens.

But at the other end of the academic achievement scale, America’s difficulties with low graduation rates in many high schools, and the need to raise academic achievement across the board for social mobility, active citizenship, and economic growth, suggest that many students are not getting sufficient homework–which is to say that many are getting none at all, or none that they ever do.

Finally, I’ll add that the way in which a school day interacts with the modern family is part of the issue here. Fewer households with children have a stay-at-home parent waiting with milk-and-cookies at 2:30 or 3. Children get signed up for afternoon activities in part to provide coverage through the work-day of the parents. By the time the parents and children get home, everyone has had a long day already. At that point in the day, starting at 7 or 8, even a moderate amount of homework feels like a heavy load, and 2 hours or more can feel crushing.

Personally, I’d love to see schools experiment with a schedule where most of the children would be there from 8-5. (Parents who want to pick their children up at 2:30 or 3 could do so.) But when the child came home at 5, they would typically be done with homework for the day. Optional activities like sports or music could still be in the evenings, as desired. Or I’d like to see schools experiment with a rule that all homework is due Wednesday, Thursday, and Friday, but under no circumstances ever due on Monday or Tuesday, thus trying to save the weekend. I know family life will always be busy, and I wouldn’t have it any other way. But the ebb and flow of even moderate levels of homework across multiple students isn’t just a student burden. It takes a toll on families, too.

Should Federal IT Spending be Flat?

In its latest proposed budget, the Obama administration praises itself for limiting the rise in information technology spending by the federal government. This seems peculiar. After all, a number of stories in the news suggest that some additional federal spending on information technology might be needed, like the botched and halting roll-out of the health care exchanges in the Patient Protection and Affordable Care Act of 2010, or the rising concerns over cybersecurity, or difficulties for the IRS in handling information and payments. The Washington Post just published “Sinkhole of Bureaucracy,” by David A. Fahrenthold, which tells the eyebrow-raising story of how 600 employees of the federal Office of Personnel Management work in an office set in an old Pennsylvania limestone mine outside of Pittsburgh, where they file retirement papers of government workers entirely by hand, 230 feet underground. The federal government has been making plans to computerize the operation since the late 1980s, but has not succeeded after a quarter-century of trying.

More broadly, the federal government is at its heart an enormous information-generating and information-processing organization. Over the last few decades, information technology has progressed by leaps and bounds. It would seem plausible to me that the federal government should be a major consumer of this technology: after all, the private sector has used this technology to provide a wide range of services to consumers while also replacing large numbers of middle managers.

The discussion of federal IT spending appears in Chapter 19 of the Analytical Perspectives volume, released each year as a supplement to the budget proposals from the president. Here’s Figure 19-1, in which the Obama budget folks choose to emphasize that IT spending rose by 7% annually during the Bush administration, but by less than 1% per year during the Obama administration.

Some of the discussion in this chapter makes perfectly sensible points. Yes, you don’t want every agency in the federal government running around with its own proprietary software packages that won’t link up to others. Yes, it’s useful to make some decisions about shared tools and platforms. Yes, it’s always good to be on the lookout for the “duplicative” and the “underperforming” projects.

But in the middle of an IT revolution, it’s easy for me to believe that rising levels of IT spending make economic sense. Think about the 600 federal workers filing by hand in a mine in Pennsylvania for the last few decades. Or consider some of the rhetorical flourishes from this chapter of the budget, and ask yourself: “Is this consistent with a federal IT budget that, after adjusting for inflation, is falling in real dollars?” As the budget chapter says:

“The interconnectedness of our digital world dictates that the Government buy, build and manage IT in a new way. Rapidly adopting innovative technologies, improving the efficiency and effectiveness of the Federal workforce through technology, and fostering a more participatory and citizen-centric Government are critical to providing the services that citizens expect from a 21st Century Government. …”

“The President has identified the Cybersecurity threat as one of the most serious national security, public safety, and economic challenges we face as a nation. …”

“On May 23, 2012, the President issued a directive entitled “Building a 21st Century Digital Government.” It launched a comprehensive Digital Government Strategy aimed at delivering better digital services to the American people. The strategy has three main objectives: (1) enabling the American people and an increasingly mobile workforce to access high-quality, digital Government information and services anywhere, anytime, on any device; (2) ensuring that as the Government adjusts to this new digital world, we seize the opportunity to procure and manage devices, applications, and data in smart, secure and affordable ways; and (3) unlocking the power of Government data to spur innovation across our Nation and improve the quality of services for Federal employees and the American people.”

I suppose one might argue that the capabilities of information technology are rising so quickly, and prices for IT are falling so rapidly, that the federal government will be able to achieve these kinds of IT goals even with no increase in spending. I’m writing myself a little mental note here: Whenever the federal government runs into IT-related issues and problems, I’m going to wonder if the proud determination of the Obama administration to hold down IT spending was such a smart decision.

Long-Run Unemployment Arrives in the U.S. Economy

One lasting consequence of the Great Recession has been that the problem of long-run unemployment has now arrived in the U.S. economy. Alan B. Krueger, Judd Cramer, and David Cho present the evidence and some striking analysis in “Are the Long-Term Unemployed on the Margins of the Labor Market?” written for the just-completed Spring 2014 conference of the Brookings Panel on Economic Activity. To get a sense of the issue, here are a couple of striking figures from Krueger, Cramer, and Cho.

Split the unemployment rate into three groups: those unemployed for 14 weeks or less, those unemployed for 15-26 weeks, and those unemployed for more than 26 weeks. What do the patterns look like, both over time and more recently?

A few patterns jump out here:

1) In the last 65 years, the short-term unemployment rate, 14 weeks or less, has been higher than the medium-term or long-term unemployment rate. But for a time just after the Great Recession, the long-term unemployment rate spiked so severely that it exceeded the short-term rate.

2) In the last 65 years, the medium-term unemployment rate for those without jobs from 15-26 weeks moved in quite a similar way and at a similar level to the longer-term unemployment rate for those without jobs for more than 26 weeks. But after the Great Recession, the long-term unemployment rate spiked far out of line with the medium-term unemployment rate.

3) Moreover, notice that right after the Great Recession, the long-term unemployment rate was spiking at a time when the short-term and medium-term unemployment rates had already peaked and had started to decline.

4) The short-term unemployment rate is now below the pre-recession average for the years 2001-2007. The medium-term unemployment rate is almost back to its pre-recession average. The long-term unemployment rate, although it has declined in recent months, is still near its highest level for the period from 1948-2007.
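The duration decomposition behind these patterns is simple to sketch. The counts below are hypothetical numbers of my own, chosen only to show how the three duration-bucket rates sum to the overall unemployment rate; the real series come from household survey data:

```python
# Hypothetical counts, in thousands of people, for one month.
labor_force = 155_000
unemployed_by_duration = {
    "short-term (14 weeks or less)": 4_500,
    "medium-term (15-26 weeks)": 1_200,
    "long-term (more than 26 weeks)": 2_300,
}

# Each bucket's rate is its count divided by the labor force,
# so the three rates add up to the overall unemployment rate.
rates = {k: v / labor_force for k, v in unemployed_by_duration.items()}
for bucket, rate in rates.items():
    print(f"{bucket}: {rate:.1%}")
print(f"overall unemployment rate: {sum(rates.values()):.1%}")
```

Plotting the three bucket rates as separate series over time is exactly the exercise that produces the patterns listed above.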

This outcome is troubling for many reasons. Krueger, Cramer, and Cho present evidence that the most recent wave of long-run unemployed have become detached from the labor market: they have a much-reduced chance of finding a job, they exert little pressure on wage growth, and if they do find a job, they are likely to soon become unemployed again. A summary of the paper notes:

“The short-term unemployment rate is a much stronger predictor of inflation and real wage growth than the overall unemployment rate in the US. Even in good times, the long-term unemployed are on the margins of the labor market, with diminished job prospects and high labor force withdrawal rates, and as a result they exert little pressure on wage growth or inflation. Even after finding another job, reemployment does not fully reset the clock for the long-term unemployed, who are frequently jobless again soon after they gain reemployment: only 11 percent of those who were long-term unemployed in a given month returned to steady, full-time employment a year later. The long-term unemployed are spread throughout all corners of the economy, with a majority previously employed in sales and service jobs (36 percent) and blue collar jobs (28 percent), they find.”

For me, one of the most troubling graphs looks at long-term unemployment rates across other high-income countries. The figure shows what share of the total unemployed in a country qualify as long-term unemployed–that is, what share of the unemployed have been out of work for more than six months.

Historically, the U.S. and Canadian economies have been places where the long-run unemployed were maybe 10-20% of total unemployment. Meanwhile, in countries like Italy, Germany, and France, the long-run unemployed were often 60-70%, or more, of total unemployment. What it means to be “unemployed” is pretty different, depending on whether the experience is usually fairly short or usually fairly long. In the U.S., the share of the unemployed who are long-run unemployed hasn’t yet reached some of the levels common in those other economies. But the experience of those other countries points out that when the share of the unemployed who are long-run unemployed is very high, that situation can persist for decades.
I\’m not sure exactly what policies will work best for bringing the long-term unemployed back into the labor force. But Sweden and Canada, to pick two examples from the figure, have apparently had some success in doing so. But it\’s reasonable to worry that past U.S. approaches to addressing unemployment are not well-suited to the long-run unemployment that emerged after the Great Recession.   

Will the U.S. Dollar Remain the World’s Reserve Currency?

In the question-and-answer period after I give talks about the U.S. economy, someone always seems to ask whether the U.S. dollar will remain the world’s reserve currency. At least so far, it’s holding steady. Eswar Prasad reviews the arguments in “The Dollar Reigns Supreme, by Default,” which appears in the March 2014 issue of Finance & Development.

One way to look at the importance of the U.S. dollar is in terms of what currency governments and investors around the world choose to hold as their reserves. In the aftermath of the 1997-98 financial and economic crash in east Asia, lots of countries started ramping up their U.S. dollar international reserves. In the aftermath of the Great Recession, a number of countries spent down their dollar reserves to some extent–and now want to rebuild. For example, here’s a figure from Prasad on the form in which governments hold foreign exchange reserves. Notice that there’s no drop-off in the years after the recession.

Prasad writes:

“The global financial crisis shattered conventional views about the amount of reserves an economy needs to protect itself from the spillover effects of global crises. Even countries with large stockpiles found that their reserves shrank rapidly over a short period during the crisis as they sought to protect their currencies from collapse. Thirteen economies that I studied lost between a quarter and a third of their reserve stocks over about eight months during the worst of the crisis.”

The U.S. dollar is also by far the dominant currency in world economic transactions. It is often how global prices are denominated. When bills are paid between countries, or investments are made between countries, and there is a need to carry out an exchange rate conversion, the U.S. dollar is often involved even when the transaction has nothing to do with the U.S. economy, because other currencies are converted into dollars in the inner workings of the exchange rate markets. The U.S. dollar is used in 87% of all foreign exchange transactions, in foreign exchange markets that are now trading over $5 trillion per day.

The U.S. economy clearly benefits in an important way from having the world’s reserve currency: there is an enormous appetite around the world to hold U.S. dollar assets, which makes it a lot easier for a low-saving economy like the United States to borrow large amounts from foreign investors. Here’s a figure from Prasad showing who holds U.S. government debt:

Many foreign investors, including governments, have expressed concerns about being so heavily invested in U.S. dollar assets. They worry that if inflation does rise, the real value of their debt holdings would decline. As Prasad writes:

“Still, emerging market countries are frustrated that they have no place other than dollar assets to park most of their reserves, especially since interest rates on Treasury securities have remained low for an extended period, barely keeping up with inflation. This frustration is heightened by the disconcerting prospect that, despite its strength as the dominant reserve currency, the dollar is likely to fall in value over the long term. China and other key emerging markets are expected to continue registering higher productivity growth than the United States, so once global financial markets settle down, the dollar is likely to return to the gradual depreciation it has experienced since the early 2000s. In other words, foreign investors stand to get a smaller payout in terms of their domestic currencies when they eventually sell their dollar investments.”

Here’s a figure from the ever-useful FRED website run by the St. Louis Fed showing the overall sag of the U.S. dollar over time, with some notable bumps in the road along the way.

Over time, one would expect the U.S. dollar’s role as the global reserve currency to decline. Other economies are growing faster. More closely integrated global financial markets are making it easier to carry out transactions that don’t involve the U.S. dollar. But the typical prediction, from Prasad and others, is that the dollar will remain dominant not just for a few more years, but perhaps for a few more decades. Part of the reason is that no clear alternative is available. Some well-informed folks continue to doubt whether the euro can survive. China’s economy is headed toward being the largest in the world, but as Prasad judiciously writes, “the limited financial market development and structure of political and legal institutions in China make it unlikely that the renminbi will become a major reserve asset that foreign investors, including other central banks, turn to for safekeeping of their funds. At best, the renminbi will erode but not significantly challenge the dollar’s preeminent status. No other emerging market economies are in a position to have their currencies ascend to reserve status, let alone challenge the dollar.”

For those with an appetite for more on this subject, Prasad has just published a book, The Dollar Trap: How the U.S. Dollar Tightened Its Grip on Global Finance. I can also recommend Barry Eichengreen’s Exorbitant Privilege: The Rise and Fall of the Dollar and the Future of the International Monetary System, which offers additional historical perspective.

Eliminate U.S. Tourist Visas?

International tourism is an industry that now involves more than one billion travelers per year and more than $1 trillion per year in total spending. Welcoming more international visitors to spend their vacation dollars in the U.S. is both a plausible way of putting a dent in the U.S. trade deficit and a potential growth industry for new jobs. But international tourism is also an industry where America doesn’t really try to compete; indeed, it actively hinders international tourists through its visa requirements. Robert A. Lawson, Saurav Roychoudhury, and Ryan Murphy consider “The Economic Gains from Eliminating U.S. Travel Visas” in a Cato Institute Economic Development Bulletin (February 9, 2014). Here’s a realistic if hypothetical scenario to set the stage:

“Suppose a reasonably affluent Brazilian family was interested in visiting Disney World in Florida. First they must fill out the DS-160 online application. Then they must pay a $160 application fee per visa and perform two separate interviews, one of which requires invasive questions and fingerprinting. The interviews typically must take place at an American embassy or consulate, of which there are only four locations in all of Brazil–a country as big as the continental United States–meaning that many of the Brazilian applicants will have to travel for the interview. While visas may take only 10 days to process, delays are common, and the United States government recommends not making travel plans until receiving the visa. To make matters even more uncertain, consular officials can stop the process at any moment and deny the family a visa without reason. The Brazilian family could go through that entire lengthy, expensive, and uncertain process or they could go to Disneyland Paris, France, without getting a visa at all. Unsurprisingly, many Brazilian families choose to go to Disneyland in France over the United States.”

By their estimates, removing visa requirements completely could triple the number of international tourists in the United States. “Eliminating all travel visas to the United States could increase tourism by 45–67 million visitors annually, corresponding to an additional $90–123 billion in tourist spending.”

Of course, there are two main concerns about international tourists: they may be using the tourist visa as a way to immigrate to the U.S., or they may be a terrorist risk.  Lawson, Roychoudhury, and Murphy suggest the possibility of phasing in the removal of travel visas for countries where these issues seem less likely to be important. After all, the U.S. already has a Visa Waiver Program under which people from 37 countries can visit the U.S. for up to 90 days without a visa. They write:

“The United States could phase in tourist visa reciprocity with nations that do not have a history of sending many unauthorized immigrants or that do not present security threats. Tourists from Malaysia, Botswana, Mongolia, Uruguay, and Georgia–nations that do not require Americans to obtain visas before visiting–could be allowed to enter without a visa to begin with, phasing in other nations depending on the success of those liberalizations. If the American authorities grow confident in their ability to limit visa overstays or the possibility of unauthorized immigration is greatly reduced, reciprocity could eventually be phased in with Mexico, Central American countries, and Caribbean nations as well.”

Americans need to stop thinking about international tourism as only something where Americans spend money abroad, and start thinking about it as an economic opportunity, too. In a globalizing world economy, the countries that make an effort to be at the crossroads of that economy will have particular advantages.

How Academics Learn to Write Badly

Most of my days are spent editing articles by academic economists. So when I saw a book called Learn to Write Badly: How to Succeed in the Social Sciences, the author Michael Billig had me at the title. The book is a careful dissection of the rhetorical habits of social scientists, and in particular their tendency to banish actual people from their writing, and instead to turn everything into a string of nouns (often ending in -icity or -ization) linked with passive verbs to other strings of nouns. (If that sentence sounded ugly to you, welcome to my work life!)

I found especially thought-provoking Billig’s argument early in the book about how the necessity of continual publication is a relatively recent innovation in academic life, and how it has altered the incentives for quantity and quality of academic writing. Here are a few of Billig’s thoughts (citations omitted):

“In the late 1960s, only a minority of those working in American four-year higher educational colleges tended to publish regularly; today over sixty per cent do … In 1969 only half of American academics in universities had published during the previous two years; by the late 1990s, the figure had risen to two-thirds, with even higher proportions in the research universities. The number of prolific publishers is increasing. In American universities the proportion of faculty, who had published five or more publications in the previous two years, exploded from a quarter in 1987 to nearly two-thirds by 1998, with the rise in the natural and social sciences particularly noticeable …

“Experienced academics know that teaming up with other academics can be a means to increasing their collective output and thereby the total number of papers of which they can be credited as an author. In a field such as economics, jointly written papers were rare before the 1970s but now they are commonplace. Journal editors, as well as those who have studied academic publishing, recognize the phenomenon of ‘salami slicing’. Academic authors will cut their research findings thinly, so that they can maximize the number of publications they can obtain from a single piece of research. …

“So, we produce our papers, as if on a relentless production line. We cannot wait for inspiration; we must maintain our output. To do our jobs successfully, we need to acquire a fundamental academic skill that the scholars of old generally did not possess; modern academics must be able to keep writing and publishing even when they have nothing to say. …

“As professional academics, we must extract the small nuggets of material relevant to our interests from the mass of stuff that is being produced. Finding what we need to read necessarily means overlooking so much else. The more that is published in our discipline, the more there is to ignore. In consequence, the sheer volume of published material will be narrowing, not widening, horizons, containing us within ever smaller, less varied sub-worlds. It is important to remember that no one designed this system. There was not a moment in history when a group of powerful figures sat down in secret around a table and said: ‘Let us create a situation where academics have to read narrowly and to write at speed; that will stop them making trouble.’ No secret meeting planned all this. But this is where we are now.”

The U.S. Productivity Challenge

Over time, productivity growth determines the rise in a society’s standard of living. I often find myself talking to people who are skeptical of economic growth for a variety of reasons, so let me specify that by productivity growth, I mean health-increasing, education-improving, job-creating, wage-gaining, pollution-reducing, energy-saving growth. More broadly, the nice thing about economic growth is that it lets you afford to do something about all your other social desires, because a bigger pie creates room for both higher government spending and lower tax rates. Chapter 5 of the 2014 Economic Report of the President, released last week by President Obama’s Council of Economic Advisers, tells the story of U.S. productivity growth in recent decades.

To put some intuitive meat on the bones of the productivity idea, the discussion starts with a basic example of productivity for an Iowa corn farmer.

“In 1870, a family farmer planting corn in Iowa would have expected to grow 35 bushels an acre. Today, that settler’s descendant can grow nearly 180 bushels an acre and uses sophisticated equipment to work many times the acreage of his or her forbearer. Because of higher yields and the use of time-saving machinery, the quantity of corn produced by an hour of farm labor has risen from an estimated 0.64 bushel in 1870 to more than 60 bushels in 2013. This 90-fold increase in labor productivity—that is, bushels of corn (real output) an hour—corresponds to an annual rate of increase of 3.2 percent compounded over 143 years. In 1870, a bushel of corn sold for approximately $0.80, about two days of earnings for a typical manufacturing worker; today, that bushel sells for approximately $4.30, or 12 minutes worth of average earnings.

This extraordinary increase in corn output, fall in the real price of corn, and the resulting improvement in physical well-being, did not come about because we are stronger, harder-working, or tougher today than the early settlers who first plowed the prairies. Rather, through a combination of invention, more advanced equipment, and better education, the Iowa farmer today uses more productive strains of corn and sophisticated farming methods to get more output an acre. … Technological advances such as corn hybridization, fertilizer technology, disease resistance, and mechanical planting and harvesting have resulted from decades of research and development.”
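The compounding arithmetic in the quoted passage is easy to verify. Here's a minimal sketch using only the figures given in the quote (taking the quote's "more than 60 bushels" as exactly 60, a lower bound):

```python
# Check the compounding in the quoted corn example: bushels of corn
# produced per hour of farm labor, 1870 versus 2013.
bushels_1870 = 0.64
bushels_2013 = 60.0   # "more than 60 bushels" in the quote
years = 143           # 1870 to 2013

fold_increase = bushels_2013 / bushels_1870
annual_growth = fold_increase ** (1 / years) - 1

print(f"fold increase: {fold_increase:.0f}x")   # on the order of the quoted 90-fold
print(f"annual growth: {annual_growth:.1%}")    # matches the quoted 3.2 percent
```

The exact ratio comes out a bit above 90-fold, which is consistent with the quote's rounded-down figures; the implied annual growth rate of 3.2 percent matches exactly.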

In the figure, a typical American worker today has more than four times the output per hour of a worker in 1948. As the table shows, about 10% of the gain can be traced to higher education levels and about 38% of the gain to workers working with capital investments of greater value. But the majority of the change is growth in multifactor productivity: that is, innovations big and small that make it possible for a given worker with a given amount of capital to produce more.

The U.S. productivity challenge can be seen in the statistics of the last few decades. U.S. productivity growth was healthy and high in the 1950s and 1960s, fell sharply from the early 1970s through the mid-1990s, and has rebounded somewhat since then.

The reasons for the productivity slowdown around 1970 are not fully understood. The report lists some of the likely candidates: energy price shocks that made a lot of energy-guzzling capital investment nearly obsolete; a relatively less-experienced labor force as a result of the baby boom generation entering the labor force and the widespread entry of women into the (paid) labor force; and the fading of the boost that productivity had received from World War II innovations like jet engines and synthetic rubber, as well as the completion of the interstate highway system in the 1950s. The bounceback of productivity since the mid-1990s is typically traced to information and communications technology, both making it and finding ways to use it. There is considerable controversy about whether future productivity growth is likely to be faster or slower. But given that economists failed to predict either the productivity slowdown of the 1970s (and still don’t fully understand it) or the productivity surge of the 1990s, I am not filled with optimism about our ability to foretell future productivity trends.

Sometimes people look at the vertical axis on these productivity graphs and wonder what all the fuss is about. Does the fall from 1.8% to 0.4% matter all that much? Aren’t they both really small? But remember that the growth rate of productivity is an annual rate that shapes how much the overall economy grows. Say that from 1974 to 1995 the productivity growth rate had been 1% per year faster. After those 22 years, with the growth rate compounding, the U.S. economy would have been about 25% larger. If U.S. GDP were 25% larger in 2014, it would be $21.5 trillion instead of $17.2 trillion.
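That back-of-the-envelope compounding can be checked directly. A minimal sketch, using the GDP figure from the text:

```python
# Compound a 1-percentage-point-faster productivity growth rate
# over the roughly 22 years from 1974 to 1995.
years = 22
factor = 1.01 ** years          # cumulative effect of 1% faster annual growth
print(f"economy larger by: {factor - 1:.0%}")

gdp_2014 = 17.2                 # actual 2014 U.S. GDP in trillions, from the text
print(f"counterfactual GDP: ${gdp_2014 * factor:.1f} trillion")
```

The exact factor is 1.01^22 ≈ 1.24, which the text rounds up to 25% larger; either way, the point is that a seemingly small difference in an annual growth rate compounds into trillions of dollars of output.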

Policy-makers spend an inordinate amount of time trying to fine-tune the outcomes of the market system: for example, consider the recent arguments over raising the minimum wage, raising pay for those on federal contracts, changing how overtime compensation is calculated, or raising the top tax rate for those with high incomes. Given the rise in inequality in recent decades, I feel some sympathy with the impetus behind policies that seek to slice the pie differently–although I’m sometimes more skeptical about the actual policies proposed. But 20 or 30 years in the future, what will really matter in the U.S. economy is whether annual rates of productivity growth have on average been, say, 1% higher per year.

The agenda for productivity growth is a broad one, and it would include improving education and job training for American workers; tax and regulatory conditions to support business investment; innovation clusters that mix government, higher education, and the private sector; and sensible enforcement of intellectual property law. But here, I’ll add a few words about research and development spending, which is often at the root of the growth in innovative ideas that is a primary reason for rises in productivity over time. The Council of Economic Advisers writes:

“Investments in R&D often have “spillover” effects; that is, a part of the returns to the investment accrue to parties other than the investor. As a result, investments that are worth making for society at large might not be profitable for any one firm, leaving aggregate R&D investment below the socially optimal level (for example, Nelson 1959). This tendency toward underinvestment creates a role for research that is performed or funded by the government as well as by nonprofit organizations such as universities. These positive spillovers can be particularly large for basic scientific research. Discoveries made through basic research are often of great social value because of their broad applicability, but are of little value to any individual private firm, which would likely have few, if any, profitable applications for them. The empirical analyses of Jones and Williams (1998) and Bloom et al. (2012) suggest that the optimal level of R&D investment is two to four times the actual level.”

In other words, it’s been clear to economists for a long time that society probably underinvests in R&D. Indeed, one of the biggest clichés of the last few decades is that we are moving to a “knowledge economy” or an “information economy.” We should be thinking about doubling our levels of R&D spending, just for starters. But here’s what U.S. R&D spending as a share of GDP looks like: a boost related to aerospace R&D in the late 1950s and into the 1960s, which then drops off, and basically flat spending since around 1980.

How best to increase R&D spending is a worthy subject: Direct government spending on R&D? Matching grants from government to universities or corporations? Tax breaks for corporate R&D? Helping collaborative R&D efforts across industries and across public-private lines? But whether we should increase R&D spending is a settled question, and the answer is “yes.”

Finally, here’s an article from the New York Times last weekend on how the U.S. research establishment is depending more and more on private-sector and non-profit funding. The graph above includes all R&D spending–government, private-sector, nonprofit–not just government. Nonprofit private foundations can do some extremely productive work, and I’m all for them. But they are currently filling in gaps for research programs that lack other support, not causing total R&D spending to rise.